Category Archives: Bad Math

Lenovo and the Superfish Scam

I’m a bit late to the scene here, but I think it’s still worth making a stab at this. Also, this is being written on an airplane, where I am travelling home from San Francisco to NY with a case of bronchitis. So I am not exactly at my best.

Lenovo, one of the largest makers of Windows-based laptops, sold out its customers as part of one of the worst deliberate violations of computer security I’ve ever seen, by shipping a piece of software called Superfish pre-installed on its computers. Superfish is, with absolutely no exaggeration, one of the most serious, unethical, despicable things I’ve seen in quite a long time. It’s appalling.

So what is it, and what’s the big deal?

We need to start with some background, and talk a bit about how secure connections work on the internet.

Every time that you visit a website with a secure connection (a URL that starts with https), you’re using a protocol called TLS (formerly SSL). TLS is designed to do two things:

  1. Ensure that you are talking to who you think you’re talking to.
  2. Ensure that no one but you and the person you wanted to talk to can actually see what you’re saying.

The way that it does both of those is based on encryption. Every time you create a secure connection to a website, you’re exchanging credentials with the site to ensure that they’re who they say they are, and then based on those credentials, you establish an encryption key for the rest of your communication.

That connection-establishment process is the critical bit. You need to get some information that allows you to trust them to be who they claim to be. The security and integrity of everything that happens over the connection depends on the truth and integrity of that initial piece of identity verification.

The identity verification piece of TLS is built using public key cryptography, as part of a standard infrastructure for key maintenance and verification called X.509.

I’ve written about public key crypto before – see here. The basic idea behind public key crypto is that you’ve got two keys, called the public and private keys. Anything which is encrypted with the public key can only be decrypted with the private key; anything which is encrypted with the private key can only be decrypted using the public key. Your public key is available to anyone who wants it; no one but you has access to your private key.

If you receive a message from me, and you can decrypt it with my public key, then you know, without a doubt, that I’m the one who encrypted it. Only my private key could have encrypted a message that could then be decrypted with my public key.
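To make that concrete, here’s a toy version in Python. The numbers are absurdly small (real keys are thousands of bits long, and real systems add padding and hashing on top of this), but it shows the mechanism:

# Toy RSA-style example with tiny, insecure numbers, purely to illustrate
# "encrypt with the private key, decrypt with the public key".
p, q = 61, 53                      # two small primes (far too small to be secure)
n = p * q                          # 3233: the public modulus, part of both keys
e = 17                             # the public exponent
d = 2753                           # the private exponent: (e * d) % lcm(p-1, q-1) == 1

message = 1234                     # a "message", encoded as a number smaller than n

signature = pow(message, d, n)     # encrypted with the private key
recovered = pow(signature, e, n)   # anyone can decrypt it with the public key

assert recovered == message        # only the holder of d could have produced this
print(signature, recovered)

If someone can swap in a different public key and convince you that it’s mine, that same check will happily “verify” their messages instead – which is exactly the problem the rest of this post is about.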

For that to work, though, there’s one thing that you need to be sure of: you need to be absolutely sure that the public key that you’ve got is really my public key. If some clever person managed to somehow give you a different key, and convince you that it’s mine, then they can send you messages, and they’ll look exactly as if they came from me. If I handed you my public key on a USB thumbdrive, then you’re sure that the key came from me – but if you received it online, how can you be sure that it was really me that gave it to you? Maybe someone switched it along the way?

In X.509, we use an idea of delegated trust. That is, we have some small collection of fundamental trusted authorities. Those authorities can issue public/private key pairs, so when someone needs a public key, they can go to them and ask for it, and they’ll create one. The authority gives them a certificate, which is a copy of the new public key encrypted by the authority using their private key.

Now, when someone connects to a website, the target site can state who they are by sending a copy of the certificate. The client receives the certificate, decrypts it using the authority’s public key, and then starts using the public key it contains to encrypt its communications.

If the two sides can keep talking, then the client knows who it’s talking to. It got a public key, and it’s using that public key to talk to the server; the server can’t decrypt the communication unless it holds the matching private key; and the client trusts that it got the right public key, because the certificate was encrypted with the private key of the certificate authority.

This is great as far as it goes, but it leaves us with a single certificate authority (or, at best, a small group). With billions of human users, and possibly trillions of networked devices, having a single authority isn’t manageable. They simply can’t produce enough keys for everyone. So we need to use our trust in the certificate authority to expand the pool of trust. We can do that by saying that if the certificate authority can declare that a particular entity is trustworthy, then we can use that entity itself as a verifier. So now we’ve taken a single trusted authority, and expanded that trust to a collection of places. Each of those newly trusted entities can now also issue new keys, and can certify their validity, by showing their certificate, plus the new encrypted public key. In general, anyone can issue a public key – and we can check its validity by looking at the chain of authorities that verified it, up to the root authority.

There’s a catch to this though: the base certificate providers. If you can trust them, then everything works: if you’ve got a certificate chain, you can use it to verify the identity of the party you’re talking to. But if there’s any question about the validity of the root certificate provider, if there’s any question whether or not you have the correct, valid public key for that provider, then you’re completely hosed. Ultimately, there’s some piece of seed information which you have to start off with. You need to accept the validity of an initial certificate authority based on some other mechanism. The people who sold you your computer, or the people who built your web browser, generally install a root certificate – basically the public key for a trusted certificate authority.
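You can watch that seed information being used with a few lines of Python (the hostname here is just an arbitrary example). The ssl module validates the server’s certificate chain against the root certificates installed on your machine, and the handshake only succeeds if that validation passes:

# Connect to a site over TLS and print the certificate that was validated
# against the locally installed root certificates.
import socket
import ssl

hostname = "www.example.com"              # any HTTPS site will do
context = ssl.create_default_context()    # loads the system's trusted root certificates

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()          # only reachable if validation succeeded
        print("subject:", cert["subject"])
        print("issuer: ", cert["issuer"])
        print("expires:", cert["notAfter"])

The catch is that “trusted root certificates” means whatever is installed on the machine. If a bad root has been planted there, this exact same check will cheerfully validate a forged certificate.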

If that root certificate isn’t trustworthy, then nothing that results from it can be trusted. The untrustworthy root certificate can be used by an unscrupulous person to create new certificates allowing them to masquerade as anything that they want.

In particular, an untrustworthy root certificate makes it easy to perform a man-in-the-middle attack. Suppose you want to talk to your bank, and somehow Joe Schmoe has planted a bad root certificate on your computer:

  1. You try to connect to Bank.
  2. Joe intercepts the request. He accepts your connection request, pretending to be the bank. Using his fake root certificate, he sends you a certificate claiming to be the bank.
  3. Now, you’re connected to Joe, believing that you’re connected to the bank. You try to log in to your account, sending your username and password.
  4. Joe receives your request, connects to the bank, passing on your request.
  5. Joe logs in to the bank on your behalf. The bank returns its successful login screen, showing your account numbers and balances.
  6. Joe copies the successful login screen from his connection to the bank to you.
  7. Every time that you send information to “the bank”, Joe decrypts it using his private key, and then sends it on to the bank masquerading as you. Similarly, when the bank sends something to you, he decrypts it using the bank’s public key, and then encrypts it with his private key to send it to you.
  8. You now believe that you are connected to the bank over a secure connection. Everything that you do looks exactly as if you’re connected to the bank over a securely encrypted link.
  9. Joe now has your login information, and control of your bank accounts.

In the Lenovo fiasco, Lenovo installed a system called Superfish, which deliberately installs a bad root certificate, and then uses that root certificate to create man-in-the-middle attacks on every secure connection ever made with the computer.

Why does it do that? Purportedly for ad retargeting: it uses its man-in-the-middle to decrypt supposedly secure connections so that it can replace ads in the pages that you view. That way, Lenovo and Superfish get advertising money for the pages you view, instead of the page-owners.

It’s spectacularly despicable. It’s fundamentally compromising everything you do with your computer. It’s doing it in a way that you can’t detect. And it’s doing it all in a sleazy attempt to steal advertising money.

Based on this, I’ve got two pieces of advice:

  1. Don’t ever buy a Lenovo computer. Never, ever, ever. A company that would even consider installing something like this isn’t a company that you can ever trust.
  2. If you have a Lenovo computer already, get rid of it. You could reformat your disks and install a fresh untainted copy of your operating system – but if your hardware manufacturer is as sleazy as Lenovo has demonstrated themselves to be, then there’s no reason to believe that that’s going to be sufficient.

This is, by far, the worst thing that I’ve ever seen a computer manufacturer do. They deserve to be run out of business for this.

Flat Earthers Can’t Do Math!

Running this blog, I regularly check referrers – that is, the sites whose links to GM/BM actually bring readers to the blog. It’s interesting to see who’s linking, and it helps give me an idea which posts people find most interesting. Also, every once in a while, it gives me something interesting to blog about. Like today. I got linked to by the Flat Earth Society!

The FES is the grand-daddy of hopeless crackpot organizations. They are, in all seriousness, a group of people who believe that the earth is flat. They’ll tell you that the space program is a total fraud. Every picture you’ve seen from space is faked. No one ever went to the moon. Depending on which flat-earther you talk to, satellites are either a fraud, or they’re just great big balloons floating above the earth’s surface. It’s all an elaborate conspiracy for some nefarious reason.

The FES set up a group of forums. And in one of them, someone posted something about that old -1/12 nonsense. It’s the worst misunderstanding/misrepresentation of the idea behind that that I’ve seen so far, and let me tell you, that’s really saying something.

I’ll keep this short so that it might get read.

Analytic continuation allows us to approximate an infinite series as a function. The more terms you add to the series, the more accurately you describe the function. So at the limit, when you have all the terms, that series can be considered equal to that function.

This allows us to assign meaningful numerical values to infinite series, even divergent ones.

For example, the infinite series “1 + 2 + 3 + 4 + 5… ” can be evaluated. However, the result is disturbingly counter-intuitive. If you stop adding terms at any finite point, you’ll have a larger and larger number as the result. However, if you “evaluate this after an infinite number of terms, you’ll find the sum to actually be -1/12.

There are a variety of proofs for this, and this number is demonstrably a meaningful . This result is actually seen in physics. Furthermore this result is a foundation in string theory for the number of required dimensions.

Analytic continuation isn’t about converging infinite series. It’s not about adding more and more terms. What he’s describing is, simply, the idea of an infinite series that converges to a value. For example, consider the following:

 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots

That series taken to infinity is equal to 2. You can keep drawing it out. At any point, the sum of the series is less than 2: 1 1/2, 1 3/4, 1 511/512, 1 65535/65536, on and on, forever. But if you keep following it, using the mathematical concept of limits, it’s easy to show that the sum of the full series is exactly 2.
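If you want that with no hand-waving at all: the partial sums of that series have a simple closed form, and the closed form converges to 2.

 s_k = \sum_{n=0}^{k} \frac{1}{2^n} = 2 - \frac{1}{2^k}, \qquad \lim_{k \rightarrow \infty} s_k = 2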

You don’t need to bring analytic continuations into the picture to show that – just limits. In fact, analytical continuations don’t help with solving that problem – they’ve got nothing to do with it.

Analytical continuation is both much simpler (in concept) and much harder (in practice) than convergence of infinite series.

Analytical continuations come into play when we have a partial function for something, which we’re using as a partial solution for the value of something else. There are lots of problems where we don’t know a perfect, total closed-form equation for some function that we’re interested in. But we’ve managed to find a closed-form equation that matches the function we’re interested in a lot of the time. Then, using some pretty hairy complex (in the sense of complex numbers, although it’s also pretty complicated) analysis tricks, we can figure out what the value of the target function should be at places where our partial solution is undefined. So even though the partial solution doesn’t work some of the time, we can use that partial solution to derive the actual solution. That process of using analysis of the partial solution to get at least some of the undefined points where the partial solution doesn’t work is analytical continuation.
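The standard baby example of this (it has nothing to do with zeta, it’s just the simplest illustration I know) is the geometric series:

 \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}, \qquad |x| < 1

The series only converges when |x| < 1, but the closed form \frac{1}{1-x} makes sense everywhere except x = 1. That closed form is the continuation of the partial solution: plugging in x = 2 gives -1, even though the series 1 + 2 + 4 + 8 + \cdots certainly doesn’t add up to -1. The continuation assigns a value at points where the series itself is meaningless.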

In the case of the infamous -1/12 argument, we’re trying to probe at a really important thing called Riemann’s zeta function. The Zeta function describes fundamental properties of prime numbers. It’s an important deep thing which ends up having a lot of applications when you’re dealing with number theory, topology, and differential equations, among other things. Because of how it describes some fundamental properties, any concrete application of those fundamental properties ends up involving Zeta as well – including things like string theory, which proposes a topological structure for the universe.

We don’t know, in general, how to write a simple equation for Zeta. We do know how to write an equation that is a partial subset of the zeta function – an equation that describes a function that works much of the time, but which is not zeta, and which is not defined in some places where zeta is.

Using analytical continuation, we can compute the value of the zeta function in places where the equation that we use for a partial approximation does not work, where that equation cannot and will not ever produce a result.

Using analytical continuation, \zeta(-1) = -1/12. At -1, the equation that we know for computing zeta in some places does not work. At all. But analytical continuation shows us what the value of \zeta is at that point: it does not tell us the value of computing that equation at -1 – that doesn’t work.
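If you want to check that numerically, the mpmath Python library (assuming you have it installed) computes the analytically continued zeta function, not the divergent series:

# zeta(-1) via analytic continuation, versus naively summing the series.
import mpmath

print(mpmath.zeta(-1))          # -0.0833333333333333, i.e. -1/12
print(mpmath.zeta(2))           # 1.64493406684823, i.e. pi**2/6, where the series really does converge

partial = sum(range(1, 10**6))  # the actual series 1 + 2 + 3 + ..., truncated
print(partial)                  # enormous and still growing: nowhere near -1/12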

(In terms of physics, the application of this is interesting. The zeta function is used in a lot of places in string theory, and it’s normally “defined” in physics texts as being precisely the series that gives us the partial approximation of zeta. So in string theory physics, when you find that series, it’s not actually that series; it’s zeta, which was expanded out into that (technically incorrect) series. Since it’s zeta, you can replace it with zeta. Since zeta(-1)=-1/12, that means that in those physics equations, you can effectively pretend that 1+2+3+4+5+…=-1/12. It isn’t, but a combination of a notational convenience and a shortcut to avoid a long-winded explanation of analytical continuation means that too many physicists are taught to believe that it is.)

So, shocking as it might be: flat earthers – they’re not just wrong about geography!

Don’t worry about playing with guns. You can’t hurt anyone.

Pardon me. Someone is wrong on the internet. I must do something!

Some stupid fucking dumb-ass idiot was playing with a loaded gun. He knew it was loaded. And he grabbed it by the trigger to pick it up. Because, you see, he knew that it’s a persnickety old gun, and the trigger usually jams, so he figured it’d be safe to grab it by the trigger. What could possibly go wrong?

Naturally, it went off, and hit his next door neighbor in the head, killing him. He’d just brought his wife and newborn child home when he was killed by the spectacular stupidity of his neighbor.

The police, discussing the case, said:

“The odds, I’d guess, are 1 in infinity,” said BCSO Maj. Tommy Ford. “This was just tragic.”

Those of us who aren’t mathematical morons know that a “1 in infinity” chance is the probability better known as 0.

You might say that I’m being pedantic. You’d be right. But it’s a stupid statement that tries to minimize the incredible stupidity of what happened here. This wasn’t a crazy freakish chance. A handgun is a device which is specifically designed to be an efficient and effective means for killing people. It is not a toy. Picking up a gun by its trigger is, for all intents and purposes, no different from just randomly aiming the gun and pulling that trigger. The chances of hitting someone aren’t huge, but they aren’t tiny, either.

This wasn’t a super-rare freak accident. This was stupidity and negligence by a jackass. Diminishing it, trying to pretend that it was just freakish, is a way of making excuses for the sub-human scumbag who killed an innocent man. Calling it one-in-infinity – that is, saying that it was impossible even though it wasn’t even particularly unlikely – is just a way of covering up for the fact that our insanely irresponsible way of dealing with deadly weapons kills innocent people.

We do a lot of that in America. Guns are downright sacred here. You can insult Jesus with fewer repercussions than insulting a gun owner. A community figure like a local policeman can’t just point out that a gun was used in a horrible, stupid, irresponsible, evil way. Because that would be seen as a threat to the so-called “gun rights” shits who think that any number of innocents getting killed is an acceptable price in order to ensure that they can keep their precious penis replacements.

The man who killed his neighbor in this incident should never have been able to have that gun. He was a convicted felon, prohibited from owning a gun by state law. But we can’t do something sensible like, say, check whether someone can legally own a gun before letting them buy one. That would be an unacceptable infringement on their freedom.

Meanwhile, this shit is being charged with manslaughter. After all, it wasn’t murder. He was just playing with his precious, precious toy.

Innovation isn’t just hardware!

I’m a bit late to the party on this, but hey, such is life! I work on infrastructure at Twitter, and we’ve been going crazy trying to get stuff deployed in time to be ready for the World Cup. So I haven’t had time to write before now!

Anyway, late last week, a professor known for stupid self-promoting stunts announced that the Turing test had been passed! This was, according to said professor, a huge thing, a really big deal, a historic event!

(Perhaps almost as big a deal as the time he had an RFID chip implanted in his arm, and announced that now he was the world’s first cyborg!)

Lots of people have written about the stupidity of the claim. The alleged “winner” was a program that pretended to be a teenaged kid with ADD who wasn’t a native English speaker. It didn’t even attempt to simulate intelligence, just to mislead the judges by providing excuses for its incoherence.

But that’s not what I wanted to comment on. Like I said, I’m late to the game, and that means that the problems with the alleged winner of the competition to pass the Turing test have been covered many times already. What I wanted to comment on was a theme I saw in several of the explanations. Here’s a typical example, taken from an article in the New Yorker:

Here’s what Eugene Goostman isn’t: a supercomputer. It is not a groundbreaking, super-fast piece of innovative hardware but simply a cleverly-coded piece of software, heir to a program called ELIZA that was first developed—as a joke—in the nineteen-sixties.

This is an example of what I call “IBM Thinking”. I used to work for IBM, and one of the (many) frustrations there was a deep-seated belief that “innovation” and “technology” meant hardware. Software is just the silly unimportant stuff that runs on top of hardware; hardware is what matters.

According to this attitude, any advance in technology, any new achievement, must happen because someone created better hardware that made it possible. A system that beats the Turing test? If it’s real, it must be a new supercomputer! If it’s not, then it’s not interesting!

This is, in a word, nonsense.

Hardware advances. But so does software. And these days, the big advances are more likely to be in software than in hardware.

There’s a mathematical law, called Church’s thesis, which we’ve known about for a long time. Put simply, it says that there is a maximum limit in computation. All computing devices, no matter how they’re designed or built, are ultimately, at best, equivalent to other computing devices. It doesn’t matter whether you’re a Turing machine, or a PC, or a supercomputing cluster – the set of problems that can be solved by computation is fixed. Different devices may be able to solve a given problem faster by some amount, but they can’t solve problems that are truly unsolvable.

We’ve gotten to the point where we can build incredibly fast computers. But those computers are all built on the same basic model of computing that we’ve been using for decades. When a problem that seemed unsolvable before becomes solvable now, most of the time, that’s not because there was some problem with old hardware that made the problem unsolvable – it’s because people have figured out how to write software that solves the problem. In fact, a lot of recent innovations in hardware became possible not because of some fundamental change in the hardware, but because people worked out clever software to help design and lay out circuits on silicon.

To take one example that’s very familiar to me (because my wife is one of the technical leads of the project), consider IBM’s Watson – the computer that beat the two best human Jeopardy players. IBM has stressed, over and over again, that Watson was a cluster of machines built on IBM’s Power architecture. But the only reason that they used Power was marketing. What made Watson special was that a team of brilliant researchers solved a very hard problem with clever software. Watson wasn’t a supercomputer. It was a cluster of off-the-shelf hardware. It could easily have been built from a collection of ultra-cheap PC motherboards stacked together with a high-speed network. The only thing that made Watson special – the thing that made Watson possible – had nothing to do with hardware. It’s just a bunch of cleverly-coded software. That’s it.

To get even closer to home: I work on software that lets Twitter run its system on a cluster of thousands and thousands of cheap machines. The hardware is really unimpressive, except for its volume. It’s a ton of cheap PC motherboards, mounted in rack after rack, with network cables connecting the racks. Google has the same thing, but with even more machines. Amazon has the same thing, except that they might even have more machines than Google! If you handed me a ton of money, I – an idiot when it comes to hardware – could build a datacenter like Twitter’s. It would be a huge amount of work – but there’d be nothing inventive about building a new cluster. In fact, that’s the whole point of cluster-based computing: it’s both cheaper and easier to just buy a couple of thousand cheap machines and distribute your work among them than it is to buy and program one huge machine capable of doing it all by itself.

What makes those clusters special isn’t the hardware. It’s all software. How do you take a thousand, or ten thousand, or a hundred thousand, or a million computers, and make them useful? With software. It’s just clever software – two systems, called Mesos and Aurora, which take that collection of thousands upon thousands of machines, and turn it into something that we can easily program, and easily share between thousands of different programs all running on it.

That doesn’t take away from the accomplishment. It just puts the focus where it belongs. “Clever software” isn’t some kind of trick. It’s the product of hard work, innovation, and creativity. It’s where many – or even most – of the big technological advances that we’re watching today are coming from. Innovation and advance aren’t just something that happens in hardware.

The Heartbleed Bug

There’s a lot of panic going around on the internet today, about something called the Heartbleed bug. I’ve gotten questions, so I’m giving answers.

I’ve heard lots of hype. Is this really a big deal?

Damn right it is!

It’s pretty hard to wrap your head around just how bad this actually is. It’s probably even more of a big deal than the hype has made it out to be!

This bug affects around 90% of all sites on the internet that use secure connections. Seriously: if you’re using the internet, you’re affected by this. It doesn’t matter how reputable or secure the sites you connect to have been in the past: the majority of them are probably vulnerable to this, and some number of them have, in all likelihood, been compromised! Pretty much any website running on Linux or NetBSD, using Apache or NGINX as its webserver, is vulnerable. That means just about every major site on the net.

The problem is a bug in a commonly used version of SSL/TLS. So, before I explain what the bug is, I’ll run through a quick background.

What is SSL/TLS?

When you’re using the internet in the simplest mode, you’re using a simple set of communication protocols called TCP/IP. In basic TCP/IP protocols, when you connect to another computer on the network, the data that gets sent back and forth is not encrypted or obscured – it’s just sent in the clear. That means that it’s easy for anyone who’s on the same network cable as you to look at your connection, and see the data.

For lots of things, that’s fine. For example, if you want to read this blog, there’s nothing confidential about it. Everyone who reads the blog sees the same content. No one is going to see anything private.

But for a lot of other things, that’s not true. You probably don’t want someone to be able to see your email. You definitely don’t want anyone else to be able to see the credit card number you use to order things from Amazon!

To protect communications, there’s another protocol called SSL, the Secure Sockets Layer. When you connect to another site that’s got a URL starting with https:, the two computers establish an encrypted connection. Once an SSL connection is established between two computers, all communication between them is encrypted.

Actually, on most modern systems, you’re not really using SSL. You’re using a successor to the original SSL protocol called TLS, which stands for transport layer security. Pretty much everyone is now using TLS, but many people still just say SSL, and in fact the most commonly used implementation of it is in a package called OpenSSL.

So SSL/TLS is the basic protocol that we use on the internet for secure communications. If you use SSL/TLS correctly, then the information that you send and receive can only be accessed by you and the computer that you’re talking to.

Note the qualifier: if you use SSL correctly!

SSL is built on public key cryptography. What that means is that a website identifies itself using a pair of keys. There’s one key, called a public key, that it gives away to everyone; and there’s a second key, called a private key, that it keeps secret. Anything that you encrypt with the public key can only be decrypted using the private key; anything encrypted with the private key can only be decrypted using the public key. That means that if you get a message that can be decrypted using the site’s public key, you know that no one except the site could have encrypted it! And if you use the public key to encrypt something, you know that no one except that site will be able to decrypt it.

Public key cryptography is an absolutely brilliant idea. But it relies on the fact that the private key is absolutely private! If anyone else can get a copy of the private key, then all bets are off: you can no longer rely on anything about that key. You couldn’t be sure that messages came from the right source; and you couldn’t be sure that your messages could only be read by an authorized person.

So what’s the bug?

The SSL protocol includes something called a heartbeat. It’s a periodic exchange between the two sides of a connection, to let them know that the other side is still alive and listening on the connection.

One of the options in the heartbeat is an echo request, which is illustrated below. Computer A wants to know if B is listening. So A sends a message to B saying “Here’s X bytes of data. Send them back to me.” Then A waits. If it gets a message back from B containing the same X bytes of data, it knows B was listening. That’s all there is to it: the heartbeat is just a simple way to check that the other side is actually listening to what you say.

[Figure: heartbeat]

The bug is really, really simple. The attacker sends a heartbeat message saying “I’m gonna send you a chunk of data containing 64000 bytes”, but then the data only contains one byte.

If the code worked correctly, it would say “Oops, invalid request: you said you were going to send me 64000 bytes of data, but you only sent me one!” But what the buggy version of SSL does is send you that 1 byte, plus 63,999 bytes of whatever happens to be in memory next to wherever it saved the byte.

[Figure: heartbleed]

You can’t choose what data you’re going to get in response. It’ll just be a bunch of whatever happened to be in memory. But you can do it repeatedly, and get lots of different memory chunks. If you know how the SSL implementation works, you can scan those memory chunks, and look for particular data structures – like private keys, or cookies. Given a lot of time, and the ability to connect multiple times and send multiple heartbeat requests each time you connect, you can gather a lot of data. Most of it will be crap, but some of it will be the valuable stuff.
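The broken logic is easy to sketch. This is a toy model in Python, not the real OpenSSL code – the “adjacent memory” here is just a stand-in for whatever happens to sit on the server’s heap next to the received message:

# A toy model of the heartbeat bug: the server trusts the attacker's claimed
# length instead of checking it against the data actually received.
ADJACENT_MEMORY = b"...session cookies, passwords, private key material..."

def buggy_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Buggy version: copies claimed_len bytes starting at the payload,
    # running off the end into whatever was stored next to it.
    heap = payload + ADJACENT_MEMORY
    return heap[:claimed_len]

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Fixed version: reject any request whose claimed length doesn't match.
    if claimed_len != len(payload):
        raise ValueError("heartbeat length mismatch; dropping request")
    return payload

print(buggy_heartbeat(b"A", 64))  # echoes b"A" plus data it was never meant to send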

To make matters worse, the heartbeat is treated as something very low-level which happens very frequently, and which doesn’t transfer meaningful data, so the implementation doesn’t log heartbeats at all. That means there’s no way of even identifying which connections to a server have been exploiting this. A site that’s running one of the buggy versions of OpenSSL has no way of knowing whether or not it’s been the target of this attack!

See what I mean about it being a big deal?

Why is it so widespread?

When I’ve written about security in the past, one of the things that I’ve said repeatedly is: if you’re thinking about writing your own implementation of a security protocol, STOP. Don’t do it! There are a thousand ways that you can make a tiny, trivial mistake which completely compromises the security of your code. It’s not a matter of whether you’re smart or not; it’s just a simple statement of fact. If you try to do it, you will screw something up. There are just so many subtle ways to get things wrong: it takes a whole team of experts to even have a chance to get it right.

Most engineers who build stuff for the internet understand that, and don’t write their own cryptosystems or cryptographic protocols. What they do is use a common, well-known, public system. They know the system was implemented by experts; they know that there are a lot of responsible, smart people who are doing their best to find and fix any problems that crop up.

Imagine that you’re an engineer picking an implementation of SSL. You know that you want as many people trying to find problems as possible. So which one will you choose? The one that most people are using! Because that’s the one that has the most people working to make sure it doesn’t have any problems, or to fix any problems that get found as quickly as possible.

The most widely used version of SSL is an open-source software package called OpenSSL. And that’s exactly where the bug is: in OpenSSL.

How can it be fixed?

Normally, something like this would be bad. But you’d be able to just update the implementation to a new version without the bug, and that would be the end of the problem. But this case is pretty much the worst possible case: fixing the implementation doesn’t entirely fix the problem. Because even after you’ve fixed the SSL implementation, if someone got hold of your private key, they still have it. And there’s no way to know if anyone stole your key!

To fix this, then, you need to do much more than just update the SSL implementation. You need to cancel all of your keys, and replace them with new ones, and you need to get everyone who has a copy of your public key to throw it away and stop using it.

So basically, at the moment, every keypair for nearly every major website in the world that was valid yesterday can no longer be trusted.

Yet Another Cantor Crank: Size vs Cardinality

Over the weekend, a reader sent me links to not one but two new Cantor cranks!

Sadly, one of them is the incoherent sort – the kind of nutjob who strings together words in meaningless ways. Without a certain minimal rationality, there’s nothing I can say. What I try to do on this blog isn’t just make fun of crackpots – it’s explain what they get wrong! If a crackpot strings together words randomly, and no one can make any sense of just what the heck they’re saying, there’s no way to do that.

On the other hand, the second guy is a whole different matter. He’s making a very common mistake, and he’s making it very clearly. So for him, it’s well worth taking a moment and looking at what he gets wrong.

My mantra on this blog has always been: “the worst math is no math”. This is a perfect example.

First, I believe that Cantor derived a false conclusion from the diagonal method.

I believe that the primary error in the theory is not with the assertion that the set of Real Numbers is a “different size” than the set of Integers. The problem lies with the assertion that the set of Rational Numbers is the “same size” as the set of Integers. Our finite notion of size just doesn’t extend to infinite sets. Putting numbers in a list (i.e., creating a one-to-one correspondence between infinite sets) does not show that they are the same “size.”

This becomes clear if we do a two step version of the diagonal method.

Step One: Lets start with the claim: “Putting an infinite set in a list shows that it is the same size as the set of Integers.”

Step Two: Claiming to have a complete list of reals, Cantor uses the diagonal method to create a real number not yet in the list.

Please, think about this two step model. The diagonal method does not show that the rational numbers are denumerable while the real numbers are not. The diagonal method shows that the assertion in step one is false. The assertion in step one is as false for rational numbers as it is for real numbers.

The diagonal method calls into question the cross-section proof used to show that the rational numbers are the same size as the integers.

Cantor didn’t talk about size. He never asserted anything about size. He asserted something about cardinality.

That might sound like a silly nitpick: it’s just terminology, right?

Wrong. What does size mean? Size is an informal term. It’s got lots of different potential meanings. There’s a reasonable definition of “size” where the set of natural numbers is larger than the set of even natural numbers. It’s a very simple definition: given two sets of objects A and B, the size of B is larger than the size of A if A is a proper subset of B.

When you say the word “size”, what do you mean? Which definition?

Cantor defined a new way of defining size. It’s not the only valid measure, but it is a valid measure which is widely useful when you’re doing math. The measure he defined is called cardinality. And cardinality, by definition, says that two sets have the same cardinality if and only if it’s possible to create a one-to-one correspondence between the two sets.
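To make “one-to-one correspondence” concrete, here’s a trivial Python sketch of the classic example: pairing every natural number with a distinct even number, with nothing left over on either side.

# The map n <-> 2n is a bijection between the naturals and the even naturals.
def to_even(n: int) -> int:
    return 2 * n

def from_even(m: int) -> int:
    assert m % 2 == 0
    return m // 2

for n in range(10):
    m = to_even(n)
    assert from_even(m) == n      # invertible in both directions: a true bijection
    print(n, "<->", m)

Under the subset notion of size, the even naturals are “smaller” than the naturals; under cardinality, the two sets are exactly the same. They’re different measures, and Cantor was always talking about the second one.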

When our writer said “Our finite notion of size just doesn’t extend to infinite sets”, he was absolutely correct. The problem is that he’s not doing math! The whole point of Cantor’s work on cardinality was precisely that our finite notion of size doesn’t extend to infinite sets. So he didn’t use our finite notion of size. He defined a new mathematical construct that allows us to meaningfully and consistently talk about the size of infinite sets.

Throughout his website, he builds a complex edifice of reasoning on this basis. It’s fascinating, but it’s ultimately all worthless. He’s trying to do math, only without actually doing math. If you want to try to refute something like Cantor’s diagonalization, you can’t do it with informal reasoning using words. You can only do it using math.

This gets back to a very common error that people make, over and over. Math doesn’t use fancy words and weird notations because mathematicians are trying to confuse non-mathematicians. It’s not done out of spite, or out of some desire to exclude non-mathematicians from the club. It’s about precision.

Cantor didn’t talk about the cardinality of infinite sets because he thought “cardinality” sounded cooler and more impressive than “size”. He did it because “size” is an informal concept that doesn’t work when you scale to infinite sets. He created a new concept because the informal concept doesn’t work. If your argument against Cantor is that his concept of cardinality is different from your informal concept of size, you’re completely missing the point.

Run! Hide your children! Protect them from math with letters!

Normally, I don’t write blog entries during work hours. I sometimes post stuff then, because it gets more traffic if it’s posted mid-day, but I don’t write. Except sometimes, when I come across something that’s so ridiculous, so offensive, so patently mind-bogglingly stupid that I can’t work until I say something. Today is one of those days.

In the US, many school systems have been adopting something called the Common Core. The Common Core is an attempt to come up with one basic set of educational standards that are applied consistently in all of the states. This probably sounds like a straightforward, obvious thing. In my experience, most Europeans are actually shocked that the US doesn’t have anything like this. (In fact, at best, it’s historically been standardized state-by-state, or even school district by school district.) In the US, a high school diploma doesn’t really mean anything: the standards are so widely varied that you can’t count on much of anything!

The total mishmash of standards is obviously pretty dumb. The Common Core is an attempt to rationalize it, so that no matter where you go to school, there should be some basic commonality: when you finish 5th grade, you should be able to read at a certain level, do math at a certain level, etc.

Obviously, the common core isn’t perfect. It isn’t even necessarily particularly good. (The US being the US, it’s mostly focused on standardized tests.) But it’s better than nothing.

But again, the US being the US, there’s a lot of resistance to it. Some of it comes from the flaky left, which worries about how common standards will stifle the creativity of their perfect little flower children. Some of it comes from the loony right, which worries about how it’s a federal takeover of the education system which is going to brainwash their kiddies into perfect little socialists.

But the worst, the absolute inexcusable worst, are the pig-ignorant jackasses who hate standards because it might turn children into adults who are less pig-ignorant than their parents. The poster child for this bullshit attitude is State Senator Al Melvin of Arizona. Senator Melvin repeats the usual right-wing claptrap about the federal government, and goes on to explain what he dislikes about the math standards.

The math standards, he says, teach “fuzzy math”. What makes it fuzzy math? Some of the problems use letters instead of numbers.

The state of Arizona should reject the Common Core math standards, because the math curriculum sometimes uses letters instead of numbers. After all, everyone knows that there’s nothing more to math than good old simple arithmetic! Letters in math problems are a liberal conspiracy to convince children to become gay!

The scary thing is that I’m not exaggerating here. An argument that I have, horrifyingly, heard several times from crazies is that letters are used in math classes to try to introduce moral relativism into math. They say that the whole reason for using letters is because with numbers, there’s one right answer. But letters don’t have a fixed value: you can change what the letters mean. And obviously, we’re introducing that into math because we want to make children think that questions don’t have a single correct answer.

No matter where in the world you go, you’ll find stupid people. I don’t think that the US is anything special when it comes to that. But it does seem like we’re more likely to take people like this, and put them into positions of power. How does a man who doesn’t know what algebra is get put into a position where he’s part of the committee that decides on educational standards for a state? What on earth is wrong with people who would elect someone like this?

Senator Melvin isn’t just some random guy who happened to get into the state legislature. He’s currently the front-runner in the election for Arizona’s next governor. Hey Arizona, don’t you think that maybe, just maybe, you should make sure that your governor knows high school algebra? I mean, really, do you think that if he can’t understand a variable in an equation, he’s going to be able to understand the state budget?!

Everyone stop implementing programming languages, right now! It's been solved!

Back when I was a student working on my PhD, I specialized in programming languages. Lucky for me I did it a long time ago! According to Wired, if I was working on it now, I’d be out of luck – the problem is already solved!

See, these guys built a new programming language which solves all the problems! I mean, just look how daft all of us programming language implementors are!

Today’s languages were each designed with different goals in mind. Matlab was built for matrix calculations, and it’s great at linear algebra. The R language is meant for statistics. Ruby and Python are good general purpose languages, beloved by web developers because they make coding faster and easier. But they don’t run as quickly as languages like C and Java. What we need, Karpinski realized after struggling to build his network simulation tool, is a single language that does everything well.

See, we’ve been wasting our time, working on languages that are only good for one thing, when if only we’d had a clue, we would have just been smart, and built one perfect language which was good for everything!

How did they accomplish this miraculous task?

Together they fashioned a general purpose programming language that was also suited to advanced mathematics and statistics and could run at speeds rivaling C, the granddaddy of the programming world.

Programmers often use tools that translate slower languages like Ruby and Python into faster languages like Java or C. But that faster code must also be translated — or compiled, in programmer lingo — into code that the machine can understand. That adds more complexity and room for error.

Julia is different in that it doesn’t need an intermediary step. Using LLVM, a compiler developed by University of Illinois at Urbana-Champaign and enhanced by the likes of Apple and Google, Karpinski and company built the language so that it compiles straight to machine code on the fly, as it runs.

Ye bloody gods, but it’s hard to know just where to start ripping that apart.

Let’s start with that last paragraph. Apparently, the guys who designed Julia are geniuses, because they used the LLVM backend for their compiler, eliminating the need for an intermediate language.

That’s clearly a revolutionary idea. I mean, no one has ever tried to do that before – no programming languages except C and C++ (the original targets of LLVM). Except for Ada. And D. And Fortran. And Pure. And Objective-C. And Haskell. And Java. And plenty of others.

And those are just the languages that specifically use the LLVM backend. There are others that use different code generators to generate true binary code.

But hey, let’s ignore that bit, and step back.

Let’s look at what they say about how other people implement programming languages, shall we? The problem with other languages, they allege, is that their implementations don’t actually generate machine code. They translate from a slower language into a faster language. Let’s leave aside the fact that speed is an attribute of an implementation, not a language. (I can show you a CommonLisp interpreter that’s slow as a dog, and I can show you a CommonLisp interpreter that’ll knock your socks off.)

What do the Julia guys actually do? They write a front-end that generates LLVM intermediate code. That is, they don’t generate machine code directly. They translate code written in their programming language into code written in an abstract virtual machine code. And then they take the virtual machine code, and pass it to the LLVM backend, which translates from virtual code to actual true machine code.

In other words, they’re not doing anything different from pretty much any other compiled language. It’s incredibly rare to see a compiler that actually doesn’t do the intermediate code generation. The only example I can think of at the moment is one of the compilers for Go – and even it uses some intermediates internally.

Even if Julia never displaces the more popular languages — or if something better comes along — the team believes it’s changing the way people think about language design. It’s showing the world that one language can give you everything.

That said, it isn’t for everyone. Bezanson says it’s not exactly ideal for building desktop applications or operating systems, and though you can use it for web programming, it’s better suited to technical computing. But it’s still evolving, and according to Jonah Bloch-Johnson, a climate scientist at the University of Chicago who has been experimenting with Julia, it’s more robust than he expected. He says most of what he needs is already available in the language, and some of the code libraries, he adds, are better than what he can get from a seasoned language like Python.

So, our intrepid reporter tells us, the glorious thing about Julia is that it’s one language that can give you everything! This should completely change the whole world of programming language design – because us idiots who’ve worked on languages weren’t smart enough to realize that there should be one language that does everything!

And then, in the very next paragraph, he points out that Julia, the great glorious language that’s going to change the world of programming language design by being good at everything, isn’t good at everything!

Jeebus. Just shoot me now.

I’ll finish with a quote that pretty much sums up the idiocy of these guys.

“People have assumed that we need both fast and slow languages,” Bezanson says. “I happen to believe that we don’t need slow languages.”

This sums up just about everything that I hate about what happens when idiots who don’t understand programming languages pontificate about how languages should be designed/implemented.

At the moment, in my day job, I’m doing almost all of my programming in Python. Now, I’m not exactly a huge fan of Python. There’s an awful lot of slapdash and magic about it that drive me crazy. But I can’t really dispute the decision to use it for my project, because it’s a very good choice.

What makes it a good choice? A certain kind of flexibility and dynamicism. It’s a great language for splicing together different pieces that come from different places. It’s not the fastest language in the world. But for my purposes, that’s completely irrelevant. If you took a super-duper brilliant, uber-fast language with a compiler that could generate perfectly optimal code every time, it wouldn’t be any faster than my Python program. How can that be?

Because my Python program spends most of its time idle, waiting for something to happen. It’s talking to a server out on a datacenter cluster, sending it requests, and then waiting for them to complete. When they’re done, it looks at the results, and then generates output on a local console. If I had a fast compiler, the only effect it would have is that my program would spend more time idle. If I were pushing my CPU anywhere close to its limits, using less CPU before going idle might be helpful. But it’s not.

The speed of the language doesn’t matter. But by making my job easier – making it easier to write the code – it saves something much more valuable than CPU time. It saves human time. And a human programmer is vastly more expensive than another 100 CPUs.

We don’t specifically need slow languages. But no one sets out to implement a slow language. People implement useful languages. And they make intelligent decisions about where to spend their time. You could implement a machine code generator for Python. It would be an extremely complicated thing to do – but you could do it. (In fact, someone is working on an LLVM front-end for Python! It’s not for Python code like my system, but there’s a whole community of people who use Python for implementing numeric processing code with NumPy.) But what’s the benefit? For most applications, absolutely nothing.

According to the Julia guys, the perfectly rational decision to not dedicate effort to optimization when optimization won’t actually pay off is a bad, stupid idea. And that should tell you all that you need to know about their opinions.

Bad Math from the Bad Astronomer

This morning, my friend Dr24Hours pinged me on twitter about some bad math:

And indeed, he was right. Phil Plait the Bad Astronomer, of all people, got taken in by a bit of mathematical stupidity, which he credulously swallowed and chose to stupidly expand on.

Let’s start with the argument from his video.


We’ll consider three infinite series:

S1 = 1 - 1 + 1 - 1 + 1 - 1 + ...
S2 = 1 - 2 + 3 - 4 + 5 - 6 + ...
S3 = 1 + 2 + 3 + 4 + 5 + 6 + ...

S1 is something called Grandi’s series. According to the video, taken to infinity, Grandi’s series alternates between 0 and 1. So to get a value for the full series, you can just take the average – so we’ll say that S1 = 1/2. (Note, I’m not explaining the errors here – just repeating their argument.)

Now, consider S2. We’re going to add S2 to itself. When we write it, we’ll do a bit of offset:

1 - 2 + 3 - 4 + 5 - 6 + ...
    1 - 2 + 3 - 4 + 5 - ...
==============================
1 - 1 + 1 - 1 + 1 - 1 + ...

So 2S2 = S1; therefore S2 = S1/2 = 1/4.

Now, let’s look at what happens if we take the S3, and subtract S2 from it:

   1 + 2 + 3 + 4 + 5 + 6 + ...
- [1 - 2 + 3 - 4 + 5 - 6 + ...]
================================
   0 + 4 + 0 + 8 + 0 + 12 + ... == 4(1 + 2 + 3 + ...)

So, S3 - S2 = 4S3, and therefore 3S3 = -S2, and S3 = -1/12.


So what’s wrong here?

To begin with, S1 does not equal 1/2. S1 is a non-converging series. It doesn’t converge to 1/2; it doesn’t converge to anything. This isn’t up for debate: it doesn’t converge!

In the 19th century, a mathematician named Ernesto Cesaro came up with a way of assigning a value to this series. The assigned value is called the Cesaro summation or Cesaro sum of the series. The sum is defined as follows:

Let A = a_1 + a_2 + a_3 + \cdots. In this series, s_k = \sum_{n=1}^{k} a_n, and s_k is called the kth partial sum of A.

The series A is Cesaro summable if the average of its partial sums converges towards a value C(A) = \lim_{n \rightarrow \infty} \frac{1}{n}\sum_{k=1}^{n} s_k.

So – if you take the first 2 values of A, and average them; and then the first three and average them, and the first 4 and average them, and so on – and that series converges towards a specific value, then the series is Cesaro summable.

Look at Grandi’s series. It produces the partial sum averages of 1, 1/2, 2/3, 2/4, 3/5, 3/6, 4/7, 4/8, 5/9, 5/10, … That series clearly converges towards 1/2. So Grandi’s series is Cesaro summable, and its Cesaro sum value is 1/2.

The important thing to note here is that we are not saying that the Cesaro sum is equal to the series. We’re saying that there’s a way of assigning a measure to the series.

And there is the first huge, gaping, glaring problem with the video. They assert that the Cesaro sum of a series is equal to the series, which isn’t true.

From there, they go on to start playing with the infinite series in sloppy algebraic ways, and using the Cesaro summation value in their infinite series algebra. This is, similarly, not a valid thing to do.

Just pull out that definition of the Cesaro summation from before, and look at the series of natural numbers. The partial sums for the natural numbers are 1, 3, 6, 10, 15, 21, … Their averages are 1, 4/2, 10/3, 20/4, 35/5, 56/6, … – that is, 1, 2, 3 1/3, 5, 7, 9 1/3, … That’s not a converging series, which means that the series of natural numbers does not have a Cesaro sum.
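You can watch both behaviors numerically with a few lines of Python: the running averages of Grandi’s partial sums settle down at 1/2, while the running averages for the natural numbers just keep climbing.

# Cesaro averages: the mean of the first k partial sums, for growing k.
from itertools import accumulate

def cesaro_averages(terms):
    partial_sums = list(accumulate(terms))
    return [sum(partial_sums[:k]) / k for k in range(1, len(partial_sums) + 1)]

grandi = [(-1) ** n for n in range(1000)]     # 1 - 1 + 1 - 1 + ...
naturals = list(range(1, 1001))               # 1 + 2 + 3 + ...

print(cesaro_averages(grandi)[-1])            # 0.5
print(cesaro_averages(naturals)[-1])          # 167167.0, and still growing: no Cesaro sum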

What does that mean? It means that if we substitute the Cesaro sum for a series using equality, we get inconsistent results: we get one line of reasoning in which the series of natural numbers has a Cesaro sum; a second line of reasoning in which the series of natural numbers does not have a Cesaro sum. If we assert that the Cesaro sum of a series is equal to the series, we’ve destroyed the consistency of our mathematical system.

Inconsistency is death in mathematics: any time you allow inconsistencies in a mathematical system, you get garbage: any statement becomes mathematically provable. Using the equality of an infinite series with its Cesaro sum, I can prove that 0=1, that the square root of 2 is a natural number, or that the moon is made of green cheese.

What makes this worse is that it’s obvious. There is no mechanism in real numbers by which addition of positive numbers can roll over into negative. It doesn’t matter that infinity is involved: you can’t follow a monotonically increasing trend and wind up with something smaller than your starting point.

Someone as allegedly intelligent and educated as Phil Plait should know that.

The Latest Update in the Hydrino Saga

Lots of people have been emailing me to say that there’s a new article out about Blacklight, the company started by Randall Mills to promote his Hydrino stuff, which claims to have an independent validation of his work, and announces the any-day-now unveiling of the latest version of his hydrino-based generator.

First of all, folks, this isn’t an article, it’s a press release from Blacklight. The Financial Post just printed it in their online press-release section. It’s an un-edited release written by Blacklight.

There’s nothing new here. I continue to think that this is a scam. But what kind of scam?

To find out, let’s look at a couple of select quotes from this press release.

Using a proprietary water-based solid fuel confined by two electrodes of a SF-CIHT cell, and applying a current of 12,000 amps through the fuel, water ignites into an extraordinary flash of power. The fuel can be continuously fed into the electrodes to continuously output power. BlackLight has produced millions of watts of power in a volume that is one ten thousandths of a liter corresponding to a power density of over an astonishing 10 billion watts per liter. As a comparison, a liter of BlackLight power source can output as much power as a central power generation plant exceeding the entire power of the four former reactors of the Fukushima Daiichi nuclear plant, the site of one of the worst nuclear disasters in history.

One ten-thousandth of a liter of water produces millions of watts of power.

Sounds impressive, doesn’t it? Oh, but wait… how do we measure energy density of a substance? Joules per liter, or something equivalent – that is, energy per volume. But Blacklight is quoting watts per liter – which is a power density, not an energy density.

The joule is a unit of energy. A joule is a shorthand for \frac{\text{kilogram} \cdot \text{meter}^2}{\text{second}^2}. Watts are a different unit, a measure of power, which is a shorthand for \frac{\text{kilogram} \cdot \text{meter}^2}{\text{second}^3}. A watt is, therefore, one joule/second.

They’re quoting a rather peculiar unit there. I wonder why?

Our safe, non-polluting power-producing system catalytically converts the hydrogen of the H2O-based solid fuel into a non-polluting product, lower-energy state hydrogen called “Hydrino”, by allowing the electrons to fall to smaller radii around the nucleus. The energy release of H2O fuel, freely available in the humidity in the air, is one hundred times that of an equivalent amount of high-octane gasoline. The power is in the form of plasma, a supersonic expanding gaseous ionized physical state of the fuel comprising essentially positive ions and free electrons that can be converted directly to electricity using highly efficient magnetohydrodynamic converters. Simply replacing the consumed H2O regenerates the fuel. Using readily-available components, BlackLight has developed a system engineering design of an electric generator that is closed except for the addition of H2O fuel and generates ten million watts of electricity, enough to power ten thousand homes. Remarkably, the device is less than a cubic foot in volume. To protect its innovations and inventions, multiple worldwide patent applications have been filed on BlackLight’s proprietary technology.

Water, in the alleged hydrino reaction, produces 100 times the energy of high-octane gasoline.

Gasoline contains, on average, about 11.8 kWh/kg. A milliliter of gasoline weighs about 7/10ths of a gram, compared to the 1 gram weight of a milliliter of water; therefore, a kilogram of gasoline occupies around 1400 milliliters. So, let’s take 11.8 kWh/kg and convert it to an equivalent measure of energy per milliliter: about 8.3 watt-hours (roughly 0.008 kWh) per milliliter. How does that compare to hydrinos? Oh, wait… we can’t make that comparison, now can we? Because they’re quoting power density. And the power density of a substance depends not just on how much energy you can extract, but on how quickly you can extract it. Explosives have fantastic power density! Gasoline – particularly high-octane gasoline – is formulated to burn as slowly as possible, because internal combustion engines are more efficient on a slower burn.
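
To make that conversion concrete, here’s a minimal back-of-the-envelope sketch in Python. The 11.8 kWh/kg and 0.7 g/mL figures are the approximate values used above; everything else is straightforward arithmetic.

```python
# Rough energy-density conversion for gasoline.
# Assumed figures (approximations used in the text above):
#   specific energy: ~11.8 kWh per kilogram
#   density:         ~0.7 grams per milliliter
specific_energy_kwh_per_kg = 11.8
density_g_per_ml = 0.7

ml_per_kg = 1000.0 / density_g_per_ml                       # ~1429 mL per kilogram
wh_per_ml = specific_energy_kwh_per_kg * 1000.0 / ml_per_kg

print(f"A kilogram of gasoline occupies about {ml_per_kg:.0f} mL")
print(f"Energy density: about {wh_per_ml:.1f} Wh per mL")   # ~8.3 Wh/mL
```

The result is an energy density: watt-hours per milliliter. There is no way to compare it to a “watts per liter” figure without knowing how long those watts are sustained.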

To put a few numbers on it: TNT has a much higher power density than gasoline. You can easily knock down buildings with TNT, because it emits all of its energy in one extremely short burst. But its energy density is just 1/4th the energy density of gasoline.
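
If you want to sanity-check that comparison, here’s a hedged sketch using commonly quoted ballpark figures. The constants below (TNT at roughly 4.2 MJ/kg and 1.65 kg/L, gasoline at roughly 46 MJ/kg and 0.74 kg/L) are my assumptions, not numbers from the press release; depending on which values you pick, the per-liter ratio comes out somewhere around a fifth to a quarter.

```python
# Energy density (not power density): TNT vs. gasoline, per liter.
# All constants are rough published ballpark figures, not press-release numbers.
tnt_mj_per_kg, tnt_kg_per_l = 4.2, 1.65     # TNT: ~6.9 MJ per liter
gas_mj_per_kg, gas_kg_per_l = 46.0, 0.74    # gasoline: ~34 MJ per liter

tnt_mj_per_l = tnt_mj_per_kg * tnt_kg_per_l
gas_mj_per_l = gas_mj_per_kg * gas_kg_per_l

print(f"TNT:      {tnt_mj_per_l:5.1f} MJ/L")
print(f"Gasoline: {gas_mj_per_l:5.1f} MJ/L")
print(f"Ratio:    {tnt_mj_per_l / gas_mj_per_l:.2f}")       # roughly 0.2
```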

Hmm. I wonder why Mills is using the power density?

Here’s my guess. Mills has some bullshit process where he spikes his generator with 12,000 amps, and gets a brief, millisecond-scale burst of energy out. If you can produce 100 joules from one milliliter in 1/1000th of a second, that’s an instantaneous power of 100,000 watts, which works out to a power density of 100,000 watts per milliliter, or 100 million watts per liter.
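
Here’s a minimal sketch of that arithmetic. The 100 joules, one milliliter, and one millisecond are my illustrative guesses from the paragraph above, not anything Blacklight has actually disclosed.

```python
# How a modest energy release turns into a spectacular-sounding "power density".
# Illustrative numbers only (guesses from the text above, not measured values).
energy_j = 100.0     # joules released
volume_ml = 1.0      # from one milliliter of "fuel"
burst_s = 1e-3       # over one millisecond

power_w = energy_j / burst_s                            # 100,000 W
power_density_w_per_l = (power_w / volume_ml) * 1000.0

print(f"Instantaneous power: {power_w:,.0f} W")
print(f"Power density:       {power_density_w_per_l:,.0f} W/L")   # 100,000,000 W/L

# The total energy is still just 100 J: about 0.03 watt-hours, a tiny
# fraction of what a single milliliter of gasoline contains (~8 Wh).
```

Quoted as a power density, it sounds like a power plant; quoted as total energy, it’s a tiny fraction of what’s in a teaspoon of gasoline.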

Suddenly, the amount of energy that’s being generated isn’t so huge – and there, I would guess, is the key to Mills’ latest scam. If you’re hitting your generating apparatus with 12,000 amperes of electric current, and you’re producing a brief burst of energy, it’s going to be very easy to produce that energy by consuming something in the apparatus, without that consumption being obvious to an observer who isn’t allowed to independently examine the apparatus in detail.

Now, what about the “independent verification”? Again, let’s look at the press release.

“We at The ENSER Corporation have performed about thirty tests at our premises using BLP’s CIHT electrochemical cells of the type that were tested and reported by BLP in the Spring of 2012, and achieved the three specified goals,” said Dr. Ethirajulu Dayalan, Engineering Fellow, of The ENSER Corporation. “We independently validated BlackLight’s results offsite by an unrelated highly qualified third party. We confirmed that hydrino was the product of any excess electricity observed by three analytical tests on the cell products, and determined that BlackLight Power had achieved fifty times higher power density with stabilization of the electrodes from corrosion.” Dr. Terry Copeland, who managed product development for several electrochemical and energy companies including DuPont Company and Duracell added, “Dr. James Pugh (then Director of Technology at ENSER) and Dr. Ethirajulu Dayalan participated with me in the independent tests of CIHT cells at The ENSER Corporation’s Pinellas Park facility in Florida starting on November 28, 2012. We fabricated and tested CIHT cells capable of continuously producing net electrical output that confirmed the fifty-fold stable power density increase and hydrino as the product.”

Who is the ENSER Corporation? They’re an engineering consulting/staffing firm located in the same town as Blacklight’s offices. So, pretty much, what we’re seeing is that Mills hired his next-door neighbor to provide a data-free testimonial promising that the hydrino generator really did work.

Real scientists, doing real work, don’t pull nonsense like this. Mills has been promising a commercial product within a year for almost 25 years. In that time, he’s filed multiple patents, some of which have already expired! And yet, he’s never actually allowed an independent team to do a public, open test of his system. He’s never provided any actual data about the system!

He and his team have claimed things like “We can’t let people see it, it’s secret”. But they’re filing patents. You don’t get to keep a patent secret. A patent application, under US law, must contain: “a description of how to make and use the invention that must provide sufficient detail for a person skilled in the art (i.e., the relevant area of technology) to make and use the invention.”. In other words, if the patents that Mills and friends filed are legally valid, they must contain enough information for an interested independent party to build a hydrino generator. But Mills won’t let anyone examine his supposedly working generators. Why? It’s not to keep a secret!

Finally, the question that a couple of people, including a reporter for Wired UK, asked: if it’s all a scam, why would Mills and company keep on making claims?

The answer is the oldest in the book: money.

In my email this morning, I got a new version of a 419 scam letter. It’s from a guy who claims to be the nephew of Ariel Sharon. He claims that his uncle owned some farmland, including an extremely valuable grove of olive trees, in the occupied West Bank. Now, he claims, the family wants to sell that land – but, being Sharons, they can’t let their names get into the news. So, he says, he wants to “sell” the land to me for a pittance, and then I can sell it for what it’s really worth, and we’ll split the profits.

When you read about people who’ve fallen for 419 scams, you find that the scammers don’t ask for all of the money up front. They start off small: “There is a $500 fee for the transfer”. When they get that, they show you some “evidence” in the form of an official-looking transfer-clearance receipt. But then they say that there’s a new problem, and they need money to get around it: “We were preparing to transfer, but the clerk became suspicious; we need to bribe him!”, “There’s a new financial rule that you can’t transfer sums greater than $10,000 to someone without a Nigerian bank account containing at least $100,000”. It’s a continual process. They always show some kind of fake document at each step of the way. The fakes aren’t particularly convincing unless you really want to be convinced, but they’re enough to keep the money coming.

Mills appears to be operating in very much the same vein. He’s getting investors to give him money, promising that whatever they invest, they’ll get it back many times over when he starts selling hydrino power generators! He promises they’ll be on the market within a year or two – five at most!

Then he comes up with a demonstration, or a testimonial from his neighbor, or the self-publication of his book, or another press release talking about the newest version of his technology. It’s much better than the old one! This time it’s for real – just look at these amazing numbers! It’s 10 billion watts per liter; a machine that fits on your desk can generate as much power as a nuclear power plant!! We just need some more money to fix that pesky problem with corrosion on the electrodes, and then we’ll go to market, and you’ll be rich, rich, rich!

It’s been going on for almost 25 years, this constant cycle of press release, demo, and testimonial every couple of years. (Seriously: in this post, I showed links to claims from 2009 promising commercialization within 12 to 18 months, claims from 2005 promising commercialization within months, and claims from 1999 promising commercialization within a year.) But he always comes up with an excuse for why those deadlines had to be missed. And he always manages to find more investors willing to hand over millions of dollars. As long as suckers are still willing to give him money, why wouldn’t he keep on making claims?