Monthly Archives: December 2011

Audiophiles and the Need to be Special

I love laughing at audiophiles.

If you’re not familiar with the term, audiophiles are people who are really into top-end audio equipment. In itself, that’s fine. But there’s a very active and vocal subset of the audiophile community that’s built up their self-image around the idea that they’re special. They don’t just have better audio equipment than you do, but they have better appreciation of sound quality than you do. In fact, their hearing is better than yours. They can hear nuances in sound quality that you can’t, because they’re so very, very special. They’ve developed this ability, you see, because they care more about music than you do.

It’s a very human thing. We all really want to be special. And when there’s something that’s really important to us – like music is for many people – there’s a very natural desire to want to be able to appreciate it on a deep level, a special level reserved only for people who really value it. But what happens when you take that desire, and convince yourself that it’s not just a desire? You wind up turning into a sucker who’s easy to fleece for huge quantities of money on useless equipment that can’t possibly work.

I first learned about these people from my old friend John Vlissides. John died of brain cancer about 5 years ago, which was incredibly sad. But back in the day, when we both worked at IBM Research, he and I were part of a group that ate lunch together every day. John was a reformed audiophile, and used to love talking about the crazy stuff he used to do.

Audiophiles get really nutty about things like cables. For example, John used to have the cables linking his speakers to his amp suspended from the ceiling using non-conductive cord. The idea behind that is that electrical signals are carried, primarily, on the outer surface of the wire. If the cable were sitting on the ground, it would deform slightly, and that would degrade the signal. Now, of course, there’s no perceptible difference, but a dedicated audiophile can convince themselves that they can hear it. In fact, this is what convinced John that it was all craziness: he was trained as an electrical engineer, so he sat down and worked out how much the signal should change as a result of the deformation of the copper wire-core. Seeing the real numbers, he realized that there was no way in hell he was actually hearing that tiny difference. Right there, that’s the math aspect of this silliness: when you actually do the math and see what’s going on, even when there’s a plausible mechanism, the real magnitude of the supposed effect is so small that there’s absolutely no way it’s perceptible. In the case of wire deformation, the effect on the sound produced by the signal carried by the wire is essentially zero – we’re talking about something smaller than the deformation of the sound waves caused by the motion of a mosquito’s wings somewhere in the room.

John’s epiphany was something like 20 years ago. But the crazy part of the audiophile community hasn’t changed. I encountered two instances of it this week that reminded me of this silliness and inspired me to write this post. One was purely accidental: I just noticed it while going about my business. The other, I noticed on boing-boing because the first example was already in my mind.

First, I was looking for an HDMI video cable for my TV. At the moment, we’ve got both an AppleTV and a cable box hooked up to our TV set. We recently found out that under our cable contract, we could get a free upgrade of the cable box, and the new box has HDMI output – so we’d need a new cable to use it.

HDMI is a relatively new standard for video cables carrying digital signals. Instead of old-fashioned analog signals that emulate what a good-old TV antenna receives, HDMI uses a digital stream for both audio and video. Compared to old-fashioned analog, the quality of both audio and video on a TV using HDMI is dramatically improved. Analog signals were designed way, way back in the ’50s and ’60s for the televisions being produced then – they’re very low fidelity signals, designed to produce images on old TVs, which had exceedingly low resolution by modern standards.

The other really great thing about a digital system like HDMI is that digital signals don’t degrade gradually. A digital system takes a signal and reduces it to a series of bits – values that can be interpreted as 1s and 0s. That series of bits is divided into bundles called packets. Each packet is transmitted with a checksum – an additional number that allows the receiver to check that it received the packet correctly. So for a given packet of information, either you received it correctly, or you didn’t; there’s no in-between. In terms of video quality, that means the cable really doesn’t matter very much. It’s either getting the signal there, or it isn’t. If the cable is really terrible, it just won’t work – you’ll get gaps in the signal where the bad packets dropped out, which produce gaps in the audio or video.
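Here’s a minimal sketch of that all-or-nothing behavior in Python. The checksum scheme is a toy stand-in of my own, not what HDMI actually uses:

    import hashlib

    def make_packet(payload: bytes) -> bytes:
        # Append a 4-byte checksum so the receiver can verify the payload.
        return payload + hashlib.md5(payload).digest()[:4]

    def receive_packet(packet: bytes):
        # Return the payload if the checksum matches; None if it doesn't.
        payload, checksum = packet[:-4], packet[-4:]
        if hashlib.md5(payload).digest()[:4] == checksum:
            return payload
        return None  # corrupted - the packet is simply dropped

    packet = make_packet(b"some audio samples")
    assert receive_packet(packet) == b"some audio samples"  # got it...
    corrupted = b"\x00" + packet[1:]                        # noise flips bits
    assert receive_packet(corrupted) is None                # ...or you didn't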

In analog systems, you can have a lot of fuzz. The amplitude of the signal at any time is the signal – so noise effects that change the amplitude are changing the signal. There’s a very real possibility that interference will create real changes in the signal, and that those changes will produce a perceptible result when the signal is turned into sound or video. For example, if you listen to AM radio during a thunderstorm, you’ll hear a burst of noise whenever there’s a bolt of lightning nearby.

But digital systems like HDMI don’t have varying degrees of degradation, because the signal is reduced to 1s and 0s. If noise changes the amplitude of a 1 a little, it still looks like a 1. And if the noise is severe enough to make a 1 look like a 0, the error will be detected, because the checksum will be wrong. There’s no gradual degradation.

But audiophiles… ah, audiophiles.

I was looking at these cables. A basic six-foot HDMI cable sells for between 15 and 25 dollars. But on the Best Buy website, there’s a clearance cable for just $12. Great! And right next to it, there’s another cable. Also six feet long. For $240! Twenty times the price, for a friggin’ digital cable! I’d heard, on various websites, rants about these crazies, but I hadn’t actually paid any attention. Now I got to see it for myself, and I just about fell out of my chair laughing.

To prolong the entertainment, I went and looked at the reviews of this oh-so-amazing cable.

People who say there is NO difference between HDMI cables are just trying to justify to themselves to go cheap. Now it does depend on what you are connecting the cable between. If you put this Carbon HDMI on a Cable or Satellite box, you probably won’t see that much of a difference compared to some middle grade cables.

I connected this cable from my PS3 to my Samsung to first test it, then to my receiver. It was a nice upgrade from my previous Cinnamon cable, which is already a great cable in it’s own right. The picture’s motion was a bit smoother with gaming and faster action. I also noticed that film grain looked a little cleaner, not sure why though.

The biggest upgrade was with my audio though. Everything sounded a little crisper with more detail. I also noticed that the sound fields were more distinct. Again not sure exactly why, but I will take the upgrade.

All and all if you want the best quality, go Audio Quest and specifically a Carbon HDMI. You never have to upgrade your HDMI again with one of these guys. Downfall though is that it is a little pricey.

What’s great about it: Smooth motion and a little more definition in the picture

What’s not so great: Price

It’s a digital cable. The signal that it delivers to your TV and stereo is not the slightest bit different from the signal delivered by the $12 clearance cable. It’s been reduced by the signal producing system to a string of 1s and 0s – the identical string of 1s and 0s on both cables – and that string of bits is getting interpreted by exactly the same equipment on the receiver, producing exactly the same audio and video. There’s no difference. It has nothing to do with how good your ears are, or how perceptive you are. There is no difference.

But that’s nothing. The same brand sells a $700 cable. From the reviews:

I really just bought 3 of these. So if you would like an honest review, here it is. Compared to other Audio Quest cables, like the Vodka, you do not see a difference unless you know what to look for and have the equipment that can actually show the difference. Everyone can see the difference in a standard HDMI to an HDMI with Silver in it if you compare, but the difference between higher level cables is more subtle. Audio is the night and day difference with these cables. My bluray has 2 HDMI outs and I put one directly to the TV and one to my processor. My cable box also goes directly to my TV and I use Optical out of the TV because broadcast audio is aweful. The DBS systems keeps the cable ready for anything and I can tell that my audio is clean instantly and my picture is always flawless. They are not cheap cables, they are 100% needed if you want the best quality. I am considering stepping up to Diamond cables for my theater room when I update it. Hope this helps!

And they even have a “professional quality” HDMI cable that sells for well over $1000. And the audiophiles are all going crazy, swearing that it really makes a difference.

Around the time I started writing this, I also saw a post on BoingBoing about another audiophile fraud. See, when you’re dealing with this breed of twit who’s so convinced of their own great superiority, you can sell them almost anything if you can cobble together a pseudoscientific explanation for why it will make things sound better.

This post talks about a very similar shtick to the superexpensive cable: it’s a magic box which… well, let’s let the manufacturer explain.

The Blackbody ambient field conditioner enhances audio playback quality by modifying the interaction of your gear’s circuitry with the ambient electromagnetic field. The Blackbody eliminates sonic smearing of high frequencies and lowers the noise floor, thus clarifying the stereo image.

This thing is particularly fascinating because it doesn’t even pretend to hook in to your audio system. You just position it close to your system, and it magically knows what equipment it’s close to and “harmonizes” everything. It’s just… magic! But if you’re really special, you’ll be able to tell that it works!

Hydrinos: Impressive Free Energy Crackpottery

Back when I wrote about the whole negative energy rubbish, a reader wrote to me, and asked me to write something about hydrinos.

For those who are lucky enough not to know about them, hydrinos are part of another free energy scam. In this case, a medical doctor named Randell Mills claims to have discovered that hydrogen atoms can have multiple states below the typical, familiar ground state of hydrogen. Under the right conditions, so claims Dr. Mills, the electron shell around a hydrogen atom will compact into a tighter orbit, releasing a burst of energy in the process. And, in fact, it’s (supposedly) really, really easy to make hydrogen turn into hydrinos – if you let a bunch of hydrogen atoms bump into a bunch of argon atoms, then presto! Some of the hydrogen will shrink into hydrino form, and give you a bunch of energy.

Wonderful, right? Just let a bunch of gas bounce around in a balloon, and out comes energy!

Oh, but it’s better than that. There are multiple hydrino forms: you can just keep compressing and compressing the hydrogen atom, pushing out more and more energy each time. The more you compress it, the more energy you get – and you don’t really need to compress it. You just bump it up against another atom, and poof! energy.

To explain all of this, Dr. Mills further claims to have invented a new form of quantum mechanics, called the “grand unified theory of classical quantum mechanics” (CQM for short), which provides the unification between relativity and quantum mechanics that people have been looking for. And, even better, CQM is fully deterministic – all of that ugly probabilistic stuff from quantum mechanics goes away!

The problem is, it doesn’t work. None of it.

What makes hydrinos interesting as a piece of crankery is that there’s a lot more depth to it than to most crap. Dr. Mills hasn’t just handwaved that these hydrino things exist – he’s got a very elaborate, detailed theory, with a lot of non-trivial math, to back it up. Alas, the math is garbage, but its garbage-ness isn’t obvious. To see the problems, we’ll need to get deeper into the math than we usually do.

Let’s start with a couple of examples of the claims about hydrinos, and the kind of favorable clueless press they’ve received.

Here is an example of how hydrino supporters explain them:

In 1986 Randell Mills MD developed a theory that hydrogen atoms could shrink, and release lots of energy in the process. He called the resultant entity a “Hydrino” (little Hydrogen), and started a company called Blacklight Power, Inc. to commercialize his process. He published his theory in a book he wrote, which is available in PDF format on his website. Unfortunately, the book contains so much mathematics that many people won’t bother with it. On this page I will try to present the energy related aspect of his theory in language that I hope will be accessible to many.

According to Dr. Mills, when a hydrogen atom collides with certain other atoms or ions, it can sometimes transfer a quantity of energy to the other atom, and shrink at the same time, becoming a Hydrino in the process. The atom that it collided with is called the “catalyst”, because it helps the Hydrino shrink. Once a Hydrino has formed, it can shrink even further through collisions with other catalyst atoms. Each collision potentially resulting in another shrinkage.

Each successive level of shrinkage releases even more energy than the previous level. In other words, the smaller the Hydrino gets, the more energy it releases each time it shrinks another level.

To get an idea of the amounts of energy involved, I now need to introduce the concept of the “electron volt” (eV). An eV is the amount of energy that a single electron gains when it passes through a voltage drop of one volt. Since a volt isn’t much (a “dry cell” is about 1.5 volts), and the electric charge on an electron is utterly minuscule, an eV is a very tiny amount of energy. Nevertheless, it is a very representative measure of the energy involved in chemical reactions. e.g. when Hydrogen and Oxygen combine to form a water molecule, about 2.5 eV of energy is released per water molecule formed.

When Hydrogen shrinks to form a second level Hydrino (Hydrogen itself is considered to be the first level Hydrino), about 41 eV of energy is released. This is already about 16 times more than when Hydrogen and Oxygen combine to form water. And it gets better from there. If that newly formed Hydrino collides with another catalyst atom, and shrinks again, to the third level, then an additional 68 eV is released. This can go on for quite a way, and the amount gets bigger each time. Here is a table of some level numbers, and the energy released in dropping to that level from the previous level, IOW when you go from e.g. level 4 to level 5, 122 eV is released. (BTW larger level numbers represent smaller Hydrinos).
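Incidentally, the numbers quoted there follow a simple pattern. Take hydrogen’s 13.6 eV ground-state binding energy, and assume – as Mills claims – that a level-n hydrino is bound n² times as tightly; then dropping from level n to level n+1 releases (2n+1) × 13.6 eV. A quick sketch to check the quoted figures (my reconstruction of the arithmetic, not anything from Mills):

    # A level-n hydrino is supposedly bound n^2 * 13.6 eV, so dropping
    # from level n to level n+1 releases (2n+1) * 13.6 eV.
    GROUND_STATE_EV = 13.6

    for n in range(1, 5):
        released = (2 * n + 1) * GROUND_STATE_EV
        print(f"level {n} -> {n + 1}: {released:.0f} eV")

    # level 1 -> 2: 41 eV
    # level 2 -> 3: 68 eV
    # level 3 -> 4: 95 eV
    # level 4 -> 5: 122 eV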

And some of the press:

Notice a pattern?

The short version of the problem with hydrinos is really, really simple.

The most fundamental fact of nature that we’ve observed is that everything tends to move towards its lowest energy state. The whole theory of hydrinos basically says that that’s not true: everything except hydrogen tends to move towards its lowest energy state, but hydrogen doesn’t. It supposedly has a dozen or so lower energy states, but none of the abundant quantities of hydrogen on earth are ever observed in any of those states unless they’re manipulated by Mills’ magical machine.

The whole basis of hydrino theory is Mills’ CQM. CQM is rubbish – but it’s impressive-looking rubbish. I’m not going to go deep into detail; you can see a detailed explanation of the problems here. I’ll run through a short version.

To start, how is Mills claiming that hydrinos work? In CQM, he posits the existence of electron shell levels closer to the nucleus than the ground state of hydrogen. Based on his calculations, he comes up with an energy figure for the difference between the ground state and the hydrino state. Then he finds other substances that have the property that boosting one electron into a higher energy state would cost the same amount of energy. When a hydrogen atom collides with an atom that has a matching electron transition, the hydrogen can get bumped into the hydrino state, while kicking an electron into a higher orbital. That electron will supposedly, in due time, fall back to its original level, releasing the energy differential as a photon.

On this level, it sort-of looks correct. It doesn’t violate conservation of energy: the collision between the two atoms doesn’t produce anything magical. It’s just a simple transfer of energy. That much is fine.

It’s when you get into the details that it gets seriously fudgy.

Right from the start, if you know what you’re doing, CQM goes off the rails. For example, CQM claims that you can describe the dynamics of an electron in terms of a classical wave charge-density function. Mills actually gives that function, and asserts that it respects Lorentz invariance. That’s crucial – Lorentz invariance is the fundamental mathematical symmetry that relativity is built on. But his equation doesn’t actually respect Lorentz invariance. Or, rather, it does – but only if the electron is moving at the speed of light. Which it can’t do.

Mills goes on to describe the supposed physics of hydrinos. If you work through his model, looking for states consistent with both his equations and his claim that the electron orbits in a spherical shell around the nucleus, you’ll find that according to his own equations, there is only one possible state for a hydrogen atom – the conventional ground state.

It goes on in that vein for quite a while. He’s got an elaborate system, with an elaborate mathematical framework… but none of the math actually says what he says it says. The Lorentz invariance example that I cited above – that’s typical. Print an equation, say that it says X, even though the equation doesn’t say anything like X.

But we can go a bit further. The fundamental state of atoms is something that we understand pretty well, because we’ve got so many observations, and so much math describing it. And the thing is, that math is pretty damned convincing. That doesn’t mean that it’s correct, but it does mean that any theory that wants to replace it must be able to describe everything that we’ve observed at least as well as the current theory.

Why do atoms have the shape that they do? Why are they the size that they are? It’s not a super easy thing to understand, because electrons aren’t really particles. They’re something strange. We don’t often think about that, but it’s true. They’re deeply bizarre things: under many conditions, they behave more like waves than like particles. And that wave-like behavior is what shapes the atom.

The reason that atoms are the size that they are is because the electron “orbitals” have sizes and shapes that are determined by resonant frequencies of the wave-like aspects of electrons. What Mills is suggesting is that there are a range of never-before observed resonant frequencies of electrons. But the math that he uses to support that claim just doesn’t work.

Now, I’ll be honest here. I’m not nearly enough of a physics whiz to be competent to judge the accuracy of his purported quantum mechanical system. But I’m still pretty darn confident that he’s full of crap. Why?

I’m from New Jersey – pretty much right up the road from where his lab is. Having gone to college right up the road from him, I’ve been hearing about him for a long time. He’s been running this company for quite a while – going on two decades. And all that time, the company has constantly issued press releases promising that it’s just a year away from being commercialized! It’s always one step away. But never, never, has he released enough information to let someone truly independent verify or reproduce his results. And he’s been very deceptive about that: he’s made various claims about independent verification on several occasions.

For example, he once cited that his work had been verified by a researcher at Harvard. In fact, he’d had one of his associates rent a piece of equipment at Harvard, and use it for a test. So yes, it was tested by a researcher – if you count his associate as a legitimate researcher. And it was tested at Harvard. But the claim that it was tested by a researcher at Harvard is clearly meant to imply that it was tested by a Harvard professor, when it wasn’t.

For something around 20 years, he’s been making promises, giving very tightly controlled demos, refusing to give any real details, refusing to actually explain how to reproduce his “results”, and promising that it’s just one year away from being commercialized!

And yet… hydrogen is the most common substance in the universe. If it really had a lower energy state than what we call its ground state, and that lower energy state were really as miraculous as he claims – why wouldn’t we see it? Why hasn’t it ever been observed? Substances like argon are rare – but they’re not that rare. Argon has been exposed to hydrogen under laboratory conditions plenty of times – and yet, nothing anomalous has ever been observed. All of the supposed hydrino catalysts have been observed so often, under so many conditions – and yet, no anomalous energy has ever been noticed. But according to Mills, we should be seeing tons of it.

And that’s not all. Mills also claims that you can create all sorts of compounds with hydrinos – and naturally, every single one of those compounds is positively miraculous! Bonded with silicon, you get better semiconductors! Substitute hydrinos for regular hydrogen in a battery electrolyte, and you get a miracle battery! Use it in rocket fuel instead of common hydrogen, and you get a ten-fold improvement in the performance of a rocket! Make a laser from it, and you can create higher-density data storage and communication systems. Everything that hydrinos touch is amazing!

But… not one of these miraculous substances has ever been observed before. We work with silicon all the time – but we’ve never seen the magic silicon hydrino compound. And he’s never been willing to actually show anyone any of these miracle substances.

He claims that he doesn’t show it because he’s protecting his intellectual property. But that’s silly. If hydrinos existed, then just telling us that these compounds exist and have interesting properties should be enough for other labs to go ahead and experiment with producing them. But no one has. Whether he shows the supposed miracle compounds or not doesn’t change anyone else’s ability to produce them. Even if he’s keeping his magic hydrino factory secret, so that no one else has access to hydrinos, by telling us that these compounds exist, he’s given away the secret. He’s not protecting anything anymore: by publicly talking about these things, he’s given up his right to patent the substances. It’s true that he still hasn’t given up the rights to the process of producing them – but publicly demonstrating these alleged miracle substances wouldn’t take away any legal rights that he hasn’t already given up. So why doesn’t he show them to us?

Because they don’t exist.

Building Structure in Category Theory: Definitions to Build On

The thing that I think is most interesting about category theory is that what it’s really fundamentally about is structure. The abstractions of category theory let you talk about structures in an elegant way; and category diagrams let you illustrate structures in a simple visual way. Morphisms express the structure of a category; functors are higher level morphisms that express the structure of relationships between categories.

In my last category theory post, I showed how you can use category theory to describe the basic idea of symmetry and group actions. Symmetry is, basically, an immunity to transformation – that is, a kind of structural property of an object or system where applying some kind of transformation to that object doesn’t change the object in any detectable way. The beauty of category theory is that it makes that definition much simpler.

Symmetry transformations are just the tip of the iceberg of the kinds of structural things we can talk about using categories. Category theory lets you build up pretty much any mathematical construct that you’d like to study, and describe transformations on it in terms of functors. In fact, you can even look at the underlying conceptual structure of category theory using category theory itself, by creating a category in which categories are objects, and functors are the arrows between categories.

So what happens if we take the same kind of thing that we did to get group actions, and pull out a level? Instead of working in the category of categories and focusing on arrows from the specific category of a group to the category of sets, we look at arrows between members of the category of functors.

We get the general concept of a natural transformation. A natural transformation is a morphism from functor to functor, which preserves the full structure of morphism composition within the categories mapped by the functors. Saunders Mac Lane, one of the inventors of category theory, said that natural transformations were the real point of category theory – they’re what he wanted to study.

Suppose we have two categories, C and D. And suppose we also have two functors, F, G : C → D. A natural transformation from F to G, which we’ll call η, maps every object x in C to an arrow ηx : F(x) → G(x), such that for every arrow a : x → y in C, ηy ∘ F(a) = G(a) ∘ ηx. When this holds, we call ηx the component of η for (or at) x.

That paragraph is a bit of a whopper to interpret. Fortunately, we can draw a diagram to help illustrate what that means. The following diagram commutes if η has the property described in that paragraph.

[Diagram: the naturality square for η]
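In plain LaTeX form, the square looks like this (my rendering of what the diagram shows):

    \begin{array}{ccc}
    F(x) & \xrightarrow{F(a)} & F(y) \\
    {\scriptstyle \eta_x} \downarrow & & \downarrow {\scriptstyle \eta_y} \\
    G(x) & \xrightarrow{G(a)} & G(y)
    \end{array}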

I think this is one of the places where the diagrams really help. We’re talking about a relatively straightforward property here, but it’s very confusing to write about in equational form. Given the commutative diagram, though, you can see that it’s not so hard: the path ηy ∘ F(a) and the path G(a) ∘ ηx compose to the same thing – that is, the transformation η hasn’t changed the structure expressed by the morphisms.

And that’s precisely the point of the natural transformation: it’s a way of showing the relationships between different descriptions of structures – just the next step up the ladder. The basic morphisms of a category express the structure of the category; functors express the structure of relationships between categories; and natural transformations express the structure of relationships between relationships.

Of course, this being a discussion of category theory, we can’t get any further without some definitions. To get to some of the interesting material that involves things like natural transformations, we need to know about a bunch of standard constructions: initial and final objects, products, exponentials… Then we’ll use those basic constructs to build some really fascinating constructs. That’s where things will get really fun.

So let’s start with initial and final objects.

An initial object is a pretty simple idea: it’s an object with exactly one arrow to every object in the category. To be formal, given a category C, an object o \in Obj(C) is an initial object if and only if \forall b \in Obj(C): \exists_1 f: o \rightarrow b \in Mor(C). We generally write 0_c for the initial object in a category. Similarly, there’s a dual concept of a terminal object 1_c: an object for which there’s exactly one arrow from every object in the category to 1_c.

Given two objects in a category, if they’re both initial, they must be isomorphic. It’s pretty easy to prove: here’s the sketch. Remember the definition of isomorphism in category theory. An isomorphism is an arrow f : a \rightarrow b, where \exists g : b \rightarrow a such that f \circ g = 1_b and g \circ f = 1_a. If an object o is initial, then there’s an arrow from it to every object – including the other initial object o'. And there’s an arrow back, because o' is also initial. Now look at the composite o \rightarrow o' \rightarrow o: it’s an arrow from o to itself, and since o is initial, there’s exactly one such arrow – which must be the identity 1_o. The same argument on the o' side gives 1_{o'}, so the two arrows compose to identities in both directions: they form an isomorphism.
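If it helps to see those definitions run, here’s a toy sketch in Python – a hypothetical, bare-bones encoding of a finite category as a table of hom-sets, just to make “exactly one arrow” concrete:

    from itertools import product

    def is_initial(obj, objects, homs):
        # Initial: exactly one arrow from obj to every object
        # (including obj itself - the identity).
        return all(len(homs.get((obj, b), [])) == 1 for b in objects)

    def is_terminal(obj, objects, homs):
        # Terminal (dual): exactly one arrow from every object into obj.
        return all(len(homs.get((a, obj), [])) == 1 for a in objects)

    # Example: the poset 0 <= 1 <= 2, viewed as a category with one
    # arrow a -> b whenever a <= b.
    objects = [0, 1, 2]
    homs = {(a, b): [(a, b)] for a, b in product(objects, repeat=2) if a <= b}

    print([o for o in objects if is_initial(o, objects, homs)])   # [0]
    print([o for o in objects if is_terminal(o, objects, homs)])  # [2]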

Now, let’s move on to categorical products. Categorical products define the product of two objects in a category. The basic concept is simple – it’s a generalization of cartesian product of two sets. It’s important because products are one of the major ways of building complex structures using simple categories.

Given a category C, and two objects a,b \in Obj(C), the categorical product a \times b consists of:

  • an object p, often written a \times b;
  • two arrows p_a : p \rightarrow a and p_b : p \rightarrow b;
  • a “pairing” operation, which for every object c \in Obj(C) maps each pair of arrows f : c \rightarrow a and g : c \rightarrow b to an arrow \langle f,g \rangle : c \rightarrow a \times b, where \langle f,g \rangle has the following properties:

    1. p_a \circ \langle f,g \rangle = f
    2. p_b \circ \langle f,g \rangle = g
    3. \forall h : c \rightarrow a \times b: \langle p_a \circ h, p_b \circ h \rangle = h

The first two of those properties describe the separation arrows, which get you from the product to its components; the third describes the merging arrow, which gets you from the components to the product, and says that it’s unique. We can say the same thing about the relationships in the product in an easier way using a commutative diagram:
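Here’s that diagram, rendered in LaTeX (my sketch, using the names from the definition above):

    \begin{array}{ccccc}
     & & c & & \\
     & {\scriptstyle f} \swarrow & \downarrow {\scriptstyle \langle f,g \rangle} & \searrow {\scriptstyle g} & \\
    a & \xleftarrow{\ p_a\ } & a \times b & \xrightarrow{\ p_b\ } & b
    \end{array}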

One important thing to understand is that categorical products do not have to exist. This definition does not say that, given any two objects a and b, a \times b is a member of the category. It says what the categorical product looks like if it exists. If, for a given pair of objects a and b, there is an object that meets this definition, then the product of a and b exists in the category. If not, it doesn’t. In many categories, products don’t exist for some or even all of the objects. But as we’ll see later, the categories in which products do exist have some really interesting properties.
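In the category of sets, products always do exist: the Cartesian product. Here’s a minimal Python sketch of it, with functions playing the arrows (my illustration of the definition above, not anything standard):

    def pairing(f, g):
        # <f,g> : c -> a x b, built from f : c -> a and g : c -> b.
        return lambda x: (f(x), g(x))

    def p_a(pair):  # projection p_a : a x b -> a
        return pair[0]

    def p_b(pair):  # projection p_b : a x b -> b
        return pair[1]

    f = lambda n: n * 2   # f : int -> int  (c -> a)
    g = lambda n: str(n)  # g : int -> str  (c -> b)
    h = pairing(f, g)     # <f,g> : int -> (int, str)

    # The two projection properties from the definition:
    assert p_a(h(21)) == f(21)  # p_a . <f,g> == f
    assert p_b(h(21)) == g(21)  # p_b . <f,g> == g
    # Uniqueness: re-pairing the projections of h gives back h (pointwise):
    assert pairing(lambda x: p_a(h(x)), lambda x: p_b(h(x)))(21) == h(21)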

Second Law Silliness from Sewell

So, via Panda’s Thumb, I hear that Granville Sewell is up to his old hijinks. Sewell is a classic creationist crackpot, who’s known for two things.

First, he’s known for chronically recycling the old “second law of thermodynamics” garbage. And second, he’s known for building arguments based on “thought experiments” – where instead of doing experiments, he just makes up the experiments and the results.

The second-law crankery is really annoying. It’s one of the oldest creationist pseudo-scientific schticks around, and it’s such a terrible argument. It’s also a sort-of pet peeve of mine, because I hate the way that people generally respond to it. It’s not that the common response is wrong – but rather that the common responses focus on one error, while neglecting to point out that there are many deeper issues with it.

In case you’ve been hiding under a rock, the creationist argument is basically:

  1. The second law of thermodynamics says that disorder always increases.
  2. Evolution produces highly-ordered complexity via a natural process.
  3. Therefore, evolution must be impossible, because you can’t create order.

The first problem with this argument is very simple. The second law of thermodynamics does not say that disorder always increases. It’s a classic example of my old maxim: the worst math is no math. The second law of thermodynamics doesn’t say anything as fuzzy as “you can’t create order”. It’s a precise, mathematical statement. The second law of thermodynamics says that in a closed system:

\Delta S \geq \int \frac{\delta Q}{T}

where:

  1. S is the entropy in a system,
  2. Q is the amount of heat transferred in an interaction, and
  3. T is the temperature of the system.

Translated into English, that basically says that in any interaction that involves the transfer of heat, the entropy of the system cannot possibly be reduced. Other ways of saying it include “There is no possible process whose sole result is the transfer of heat from a cooler body to a warmer one”, or “No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work.”
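To see that inequality in action, here’s a minimal numeric sketch – my own example, with made-up reservoir temperatures – moving some heat from a hot body to a cold one and checking the total entropy change:

    Q = 100.0       # joules of heat transferred
    T_hot = 400.0   # kelvin
    T_cold = 300.0  # kelvin

    dS_hot = -Q / T_hot   # the hot body loses heat
    dS_cold = Q / T_cold  # the cold body gains it

    print(dS_hot + dS_cold)  # +0.083 J/K: total entropy went up
    # Run the heat the other way (cold to hot) and the total comes out
    # negative - exactly the kind of process the second law forbids.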

Note well – there is no mention of “chaos” or “disorder” in these statements. The second law is a statement about the way that energy can be used. It basically says that when you try to use energy, some of that energy is inevitably lost in the process of using it.

Talking about “chaos”, “order”, and “disorder” – those are all metaphors. Entropy is a difficult concept; it doesn’t really have a particularly good intuitive meaning. It means something like “energy lost into forms that can’t be used to do work” – but even that is a poor attempt to capture it in metaphor. The reason that people use order and disorder comes from a way of thinking about energy: if I can extract energy from burning gasoline to spin the wheels of my car, the process of spinning the wheels is very organized – it’s something that I can see as a structured application of energy – or, stretching the metaphor a bit, the energy that spins the wheels is structured. On the other hand, the “waste” from burning the gas – the heating of the engine parts, the energy carried away in the warmth of the exhaust – that’s just random and useless. It’s “chaotic”.

So when a creationist says that the second law of thermodynamics says you can’t create order, they’re full of shit. The second law doesn’t say that – not in any shape or form. You don’t need to get into the whole “open system/closed system” stuff to dispute it; it simply doesn’t say what they claim it says.

But let’s not stop there. Even if the mathematical statement of the second law really did say that chaos always increases, that still would have nothing to do with evolution. Look back at the equation: it says that in a closed system, any interaction must increase the total entropy. Even if you read entropy as chaos, the constraint is only on the total.

It doesn’t say that you can’t create order. It says that the cumulative end result of any interaction must increase entropy. Want to build a house? Of course you can do it without violating the second law. But to build that house, you need to cut down trees, dig holes, lay foundations, cut wood, pour concrete, put things together. All of those things use a lot of energy. And in each minute interaction, you’re expending energy in ways that increase entropy. If the creationist interpretation of the second law were true, you couldn’t build a house, because building a house involves creating something structured – creating order.

Similarly, if you look at a living cell, it does a whole lot of highly ordered, highly structured things. In order to do those things, it uses energy. And in the process of using that energy, it creates entropy. In terms of order and chaos, the cell uses energy to create order, but in the process of doing so it creates wastes – waste heat, and waste chemicals. It converts high-energy structured molecules into lower-energy molecules, converting things with energetic structure to things without. Look at all of the waste that’s produced by a living cell, and you’ll find that it does produce a net increase in entropy. Once again, if the creationists were right, then you wouldn’t need to worry about whether evolution was possible under thermodynamics – because life wouldn’t be possible.

In fact, if the creationists were right, the existence of planets, stars, and galaxies wouldn’t be possible – because a galaxy full of stars with planets is far less chaotic than a loose cloud of hydrogen.

Once again, we don’t even need to consider the whole closed system/open system distinction, because even if we treat earth as a closed system, their arguments are wrong. Life doesn’t really defy the laws of thermodynamics – it produces entropy exactly as it should.

But the creationist second-law argument is even worse than that.

The information version of the second-law argument claims that because DNA “encodes information”, and because the amount of information “encoded” in DNA increases as a result of the evolutionary process, evolution violates the second law.

This absolutely doesn’t require bringing in any open/closed system discussions. Doing that is just a distraction which allows the creationist to sneak their real argument underneath.

The real point is this: DNA is a highly structured molecule. No disagreement there. But so what? In the life of an organism, there are virtually uncountable numbers of energetic interactions, all of which result in a net increase in the amount of entropy. Why on earth would adding a bunch of links to a DNA chain outweigh all of those? In fact, changing the DNA of an organism is just another entropy-increasing event: the chemical processes in the cell that create DNA strands consume energy, and use that energy to produce molecules like DNA, producing entropy along the way, just like pretty much every other chemical process in the universe.

The creationist argument relies on a bunch of sloppy handwaves: “entropy” is disorder; “you can’t create order”, “DNA is ordered”. In fact, evolution has no problem with respect to entropy: one way of viewing evolution is that it’s a process of creating ever more effective entropy-generators.

Now we can get to Sewell and his arguments, and you can see how perfectly they match what I’ve been talking about.

Imagine a high school science teacher renting a video showing a tornado sweeping through a town, turning houses and cars into rubble. When she attempts to show it to her students, she accidentally runs the video backward. As Ford predicts, the students laugh and say, the video is going backwards! The teacher doesn’t want to admit her mistake, so she says: “No, the video is not really going backward. It only looks like it is because it appears that the second law is being violated. And of course entropy is decreasing in this video, but tornados derive their power from the sun, and the increase in entropy on the sun is far greater than the decrease seen on this video, so there is no conflict with the second law.” “In fact,” the teacher continues, “meteorologists can explain everything that is happening in this video,” and she proceeds to give some long, detailed, hastily improvised scientific theories on how tornados, under the right conditions, really can construct houses and cars. At the end of the explanation, one student says, “I don’t want to argue with scientists, but wouldn’t it be a lot easier to explain if you ran the video the other way?”

Now imagine a professor describing the final project for students in his evolutionary biology class. “Here are two pictures,” he says.

“One is a drawing of what the Earth must have looked like soon after it formed. The other is a picture of New York City today, with tall buildings full of intelligent humans, computers, TV sets and telephones, with libraries full of science texts and novels, and jet airplanes flying overhead. Your assignment is to explain how we got from picture one to picture two, and why this did not violate the second law of thermodynamics. You should explain that 3 or 4 billion years ago a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors. Explain how, over a very long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complicated collections of atoms, and how eventually something called “intelligence” allowed some of these collections of atoms to design buildings and computers and TV sets, and write encyclopedias and science texts. But be sure to point out that while none of this would have been possible in an isolated system, the Earth is an open system, and entropy can decrease in an open system as long as the decreases are compensated by increases outside the system. Energy from the sun is what made all of this possible, and while the origin and evolution of life may have resulted in some small decrease in entropy here, the increase in entropy on the sun easily compensates this tiny decrease. The sun should play a central role in your essay.”

When one student turns in his essay some days later, he has written,

“A few years after picture one was taken, the sun exploded into a supernova, all humans and other animals died, their bodies decayed, and their cells decomposed into simple organic and inorganic compounds. Most of the buildings collapsed immediately into rubble, those that didn’t, crumbled eventually. Most of the computers and TV sets inside were smashed into scrap metal, even those that weren’t, gradually turned into piles of rust, most of the books in the libraries burned up, the rest rotted over time, and you can see the result in picture two.”

The professor says, “You have switched the pictures!” “I know,” says the student. “But it was so much easier to explain that way.”

Evolution is a movie running backward, that is what makes it so different from other phenomena in our universe, and why it demands a very different sort of explanation.

This is a perfect example of both of Sewell’s usual techniques.

First, the essential argument here is rubbish. It’s the usual “second-law means that you can’t create order”, even though that’s not what it says, followed by a rather shallow and pointless response to the open/closed system stuff.

And the second part is what makes Sewell Sewell. He can’t actually make his own arguments. No, that’s much too hard. So he creates fake people, and plays out a story using his fake people making fake arguments, and then uses the people in his story to illustrate his point. It’s a technique that I haven’t seen used so consistently since I read Ayn Rand in high school.

The Annoying CTMU Thread

I had to close down the original comment thread discussing my rant about the crankery of Chris Langan’s CTMU.

The cost in server time to repeatedly retrieve that massive comment thread was getting excessive. All of Scientopia’s server costs are still coming out of my pocket, and I can’t afford a higher server bill. Since the server usage was approaching the point at which we’d need to up the resources (and thus, the bill), something had to be done.

So I shut down comments on that post. Interested parties are welcome to continue the discussion in the comment-thread under this post.

Free Energy Crankery and Negative Mass Nonsense

I’ve got a couple of pet peeves.

As the author of this blog, the obvious one is bad math. And as I always say, the worst math is no math.

Another pet peeve of mine is free energy. Energy is, obviously, a hugely important thing to our society. And how we’re going to get the energy we need is a really serious problem – almost certainly the biggest problem that we face. Even if you’ve convinced yourself that global warming isn’t an issue, energy is a major problem. There’s only so much coal and oil that we can dig up – someday, we’re going to run out.

But there are tons of frauds out there who’ve created fake machines that they claim you can magically get energy from. And there are tons of cranks who are all-too-ready to believe them.

Take, for example, Bob Koontz.

Koontz is a guy with an actual physics background – he got his PhD at the University of Maryland. I’m very skeptical that he’s actually stupid enough to believe in what he’s selling – but nonetheless, he’s made a donation-based business out of selling his “free energy” theory.

So what’s his supposed theory?

It sounds impossible, but it isn’t. It is possible to obtain an unlimited amount of energy from devices which essentially only require that they be charged up with negative mass electrons and negative mass positrons. Any physicist should be able to convince himself of this in a matter of minutes. It really is simple: While ordinary positive mass electrons in a circuit consume power, negative mass electrons generate power. Why is that? For negative mass electrons and negative mass positrons, Newton’s second law, F = ma becomes F = -ma.

But acquiring negative mass electrons and negative mass electrons is not quite as simple as it sounds. They are exotic particles that many physicists may even doubt exist. But they do exist. I am convinced of this — for good reasons.

The Law of Energy Conservation

The law of energy conservation tells us that the total energy of a closed system is constant. Therefore, if such a system has an increase in positive energy, there must be an increase in negative energy. The total energy stays constant.

When you drop an object in the earth’s gravitational field, the object gains negative gravitational potential energy as it falls — with that increase in negative energy being balanced by an increase of positive energy of motion. But the object does not lose or gain total energy as it falls. It gains kinetic energy while it gains an equal amount of negative gravitational energy.

How could we have free energy? If we gain positive energy, we must also generate negative energy in exactly the same amount. That will “conserve energy,” as physicists say. In application, in the field of “free energy,” that means generating negative energy photons and other negative energy particles while we get the positive energy we are seeking. What is the problem, then? The problem involves generating the negative energy particles.
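The falling-object part of that is, by the way, perfectly ordinary physics – the kinetic energy gained exactly balances the gravitational potential energy lost. That bookkeeping is easy to check numerically (a quick sketch of my own, with made-up numbers):

    # Drop a 1 kg mass 10 m; compare kinetic energy gained with
    # gravitational potential energy lost.
    m = 1.0   # kg
    g = 9.8   # m/s^2
    h = 10.0  # metres fallen

    v_squared = 2 * g * h          # v^2 = 2gh for free fall from rest
    kinetic = 0.5 * m * v_squared  # energy of motion gained: +98 J
    potential = -m * g * h         # gravitational potential change: -98 J

    print(kinetic + potential)     # 0.0 - total energy is conserved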


So… there are, supposedly, “negative energy” particles that correspond to electrons and positrons. These particles have never been observed, and under normal circumstances, they have no effect on any observable phenomenon.

But, we’re supposed to believe, it really exists. And it means that we can get free energy without violating the conservation of energy – because the creation of an equal amount of invisible, undetectable, effectless negative energy balances out whatever positive energy we create.

So what is negative energy?

That’s where the bad math comes in. Here’s his explanation:

When Paul Dirac, the Nobel prize-winning physicist was developing the first form of relativistic quantum mechanics he found it necessary to introduce the concept of negative mass electrons. This subsequently led Dirac to develop the idea that a hole in a sea of negative mass electrons corresponded to a positron, otherwise known as an antielectron. Some years later the positron was observed and Dirac won the Nobel prize.

Subsequent to the above, there appears to have been no experimental search for these negative mass particles. Whether or not negative mass electrons and negative mass positrons exist is thus a question to which we do not yet have an answer. However, if these particles do exist, their unusual properties could be exploited to produce unlimited amounts of energy — as negative mass electrons and negative mass positrons, when employed in a circuit, produce energy rather than consume it. Newton’s 2nd law F = ma becomes F = – ma and that explains why negative mass electrons and negative mass positrons produce energy rather than consume it. I believe that any good physicist should be able to see this quite quickly.

The following paragraph is actually wrong. There is such a thing as relativistic quantum mechanics. QM and special relativity are compatible, and relativistic QM sits at that intersection. Unifying general relativity with QM remains an unsolved problem, as discussed below. I’m leaving the original paragraph, because it seems dishonest to just delete it, like I was pretending that I never screwed up.

There is no such thing as relativistic quantum mechanics. One of the great research areas of modern physics is the attempt to figure out how to unify quantum mechanics and relativity. Many people have tried to find a unifying formulation, but no one has yet succeeded. There is no theory of relativistic QM.

It’s actually a fascinating subject. General relativity seems to be true: every test that we can dream up confirms GR. And quantum mechanics also appears to be true: every test that we can dream up confirms the theory of quantum mechanics. And yet, the two are not compatible.

No one has been able to solve this problem – not Dirac, not anyone.

Even within the Dirac bit… there is a clever bit of sleight-of-hand. He starts by saying that Dirac proposed that there were “negative mass” electrons. Dirac did propose something like that – but the proposal was within the frame of mathematics. Without knowing about the existence of the positron, he worked through the implications of relativity, and wound up with a model which could be interpreted as a sea of “negative mass” electrons with holes in it. The holes are positrons.

To get a sense of what this means, it’s useful to pull out a metaphor. In semiconductor physics, when you’re trying to describe the behavior of semiconductors, it’s often useful to talk about things backwards. Instead of talking about how the electrons move through a semiconductor, you can talk about how electron holes move. An electron hole is a “gap” where an electron could move. Instead of an electron moving from A to B, you can talk about an electron hole moving from B to A.

The Dirac derivation is a similar thing. The real particle is the positron. But for some purposes, it’s easier to discuss it backwards: assume that all of space is packed, saturated, with “negative mass” electrons. But there are holes moving through that space. A hole in a “negative mass”, negatively charged field is equivalent to a particle with positive mass and positive charge in an empty, uncharged space – a positron.

The catch is that you need to pick your abstraction. If you want to use the space-saturated-with-negative-mass model, then the positron doesn’t exist. You’re looking at a model in which there is no positron – there is just a gap in the space of negative-mass particles. If you want to use the model with a particle called a positron, then the negative mass particles don’t exist.

So why haven’t we been searching for negative-mass particles? Because they don’t exist. That is, we’ve chosen the model of reality which says that the positron is a real particle. Or to be slightly more precise: we have a very good mathematical model of many aspects of reality. In that model, we can choose to interpret it as either a model in which the positive-mass particles really exist and the negative-mass particles exist only as an absence of particles; or we can interpret it as saying that the negative-mass particles exist, and the positive mass ones exist only as an absence of negative-mass particles. In either case, that model provides an extremely good description of what we observe about reality. But that model does not predict that both the positive and negative mass particles both really exist in any meaningful sense. By observing and calculating the properties of the positive mass particles, we adopt the interpretation that positive mass particles really exist. Every observation that we make of the properties of positive mass particles is implicitly an observation of the properties of negative-mass particles. The two interpretations are mathematical duals.

Looking at his background and at other things on his site, I think that Koontz is, probably, a fraud. He’s not dumb enough to believe this. But he’s smart enough to realize that there are lots of other people who are dumb enough to believe it. Koontz has no problem with pandering to them in the name of his own profit. What finally convinced me of that was his UFO-sighting claim here. Absolutely pathetic.