Tag Archives: bad math

Big Bang Bogosity

One of my long-time mantras on this blog has been “The worst math is no math”. Today, I’m going to show you yet another example of that: a recent post on Boing-Boing called “The Big Bang is Going Down”, by a self-proclaimed genius named Rick Rosner.

First postulated in 1931, the Big Bang has been the standard theory of the origin and structure of the universe for 50 years. In my opinion, (the opinion of a TV comedy writer, stripper and bar bouncer who does physics on the side) the Big Bang is about to collapse catastrophically, and that’s a good thing.

According to Big Bang theory, the universe exploded into existence from basically nothing 13.7-something billion years ago. But we’re at the beginning of a wave of discoveries of stuff that’s older than 13.7 billion years.

We’re constantly learning more about our universe, how it works, and how it started. New information isn’t necessarily a catastrophe for our existing theories; it’s just more data. There’s constantly new data coming in – and as yet, none of it comes close to causing the big bang theory to catastrophically collapse.

The two specific examples cited in the article are:

  1. one quasar that appears to be younger than we might expect – it existed just 900 million years after the current estimate of when the big bang occurred. That’s very surprising, and very exciting. But even in existing models of the big bang, it’s surprising, but not impossible. (No link, because the link in the original article doesn’t work.)
  2. an ancient galaxy – a galaxy that existed only 700 million years after the big bang occurred – contains dust. Cosmic dust is made of atoms much larger than hydrogen – like carbon, silicon, and iron, which are (per current theories) the product of supernovas. Supernovas generally don’t happen to stars younger than a couple of billion years – so finding dust in a galaxy less than a billion years after the universe began is quite surprising. But again: impossible under the big bang? No.

The problem with both of these arguments against the big bang is that they’re vague. They’re both handwavy arguments built on crude statements about what “should” be possible or impossible according to the big bang theory. But neither comes close to the kind of precision that an actual scientific argument requires.

Scientists don’t use math because they like to be obscure, or because they think all of the pretty symbols look cool. Math is a tool used by scientists, because it’s useful. Real theories in physics need to be precise. They need to make predictions, and those predictions need to match reality to the limits of our ability to measure them. Without that kind of precision, we can’t test theories – we can’t check how well they model reality. And precise modelling of reality is the whole point.

The big bang is an extremely successful theory. It makes a lot of predictions, which do a good job of matching observations. It’s evolved in significant ways over time – but it remains by far the best theory we have – and by “best”, I mean “most accurate and successfully predictive”. The catch to all of this is that when we talk about the big bang theory, we don’t mean “the universe started out as a dot, and blew up like a huge bomb, and everything we see is the remnants of that giant explosion”. That’s an informal description, but it’s not the theory. That informal description is so vague that a motivated person can interpret it in ways that are consistent, or inconsistent with almost any given piece of evidence. The real big bang theory isn’t a single english statement – it’s many different mathematical statements which, taken together, produce a description of an expansionary universe that looks like the one we live in. For a really, really small sample, you can take a look at a nice old post by Ethan Siegel over here.

If you really want to make an argument that it’s impossible according to the big bang theory, you need to show how it’s impossible. The argument by Mr. Rosner is that the atoms in the dust in that galaxy couldn’t exist according to the big bang, because there wasn’t time for supernovas to create it. To make that argument, he needs to show that that’s true: he needs to look at the math that describes how stars form and how they behave, and then using that math, show that the supernovas couldn’t have happened in that timeframe. He doesn’t do anything like that: he just asserts that it’s true.

In contrast, if you read the papers by the guys who discovered the dust-filled galaxy, you’ll notice that they don’t come anywhere close to saying that this is impossible, or inconsistent with the big bang. All they say is that it’s surprising, and that we may need to revise our understanding of the behavior of matter in the early stages of the universe. The reason they say that is that there’s nothing there that fundamentally conflicts with our current understanding of the big bang.

But Mr. Rosner can get away with the argument, because he’s being vague where the scientists are being precise. A scientist isn’t going to say “Yes, we know that it’s possible according to the big bang theory”, because the scientist doesn’t have the math to show it’s possible. At the moment, we don’t have sufficient precise math either way to come to a conclusion; we don’t know. But what we do know is that millions of other observations in different contexts, different locations, observed by different methods by different people, are all consistent with the predictions of the big bang. Given that we don’t have any evidence to support the idea that this couldn’t happen under the big bang, we continue to say that the big bang is the theory most consistent with our observations, that it makes better predictions than anything else, and so we assume (until we have evidence to the contrary) that this isn’t inconsistent. We don’t have any reason to discard the big bang theory on the basis of this!

Mr. Rosner, though, goes even further, proposing what he believes will be the replacement for the big bang.

The theory which replaces the Big Bang will treat the universe as an information processor. The universe is made of information and uses that information to define itself. Quantum mechanics and relativity pertain to the interactions of information, and the theory which finally unifies them will be information-based.

The Big Bang doesn’t describe an information-processing universe. Information processors don’t blow up after one calculation. You don’t toss your smart phone after just one text. The real universe – a non-Big Bang universe – recycles itself in a series of little bangs, lighting up old, burned-out galaxies which function as memory as needed.

In rolling cycles of universal computation, old, collapsed, neutron-rich galaxies are lit up again, being hosed down by neutrinos (which have probably been channeled along cosmic filaments), turning some of their neutrons to protons, which provides fuel for stellar fusion. Each calculation takes a few tens of billions of years as newly lit-up galaxies burn their proton fuel in stars, sharing information and forming new associations in the active center of the universe before burning out again. This is ultra-deep time, with what looks like a Big Bang universe being only a long moment in a vast string of such moments across trillions or quadrillions of giga-years.

This is not a novel idea. There are a ton of variations of the “universe as computation” that have been proposed over the years. Just off the top of my head, I can rattle off variations that I’ve read (in decreasing order of interest) by Minsky (can’t find the paper at the moment; I read it back when I was in grad school), by Fredkin, by Wolfram, and by Langan.

All of these theories assert in one form or another that our universe is either a massive computer or a massive computation, and that everything we can observe is part of a computational process. It’s a fascinating idea, and there are aspects of it that are really compelling.

For example, the Minsky model has an interesting explanation for the speed of light as an absolute limit, and for time dilation. Minsky’s model says that the universe is a giant cellular automaton. Each minimum quantum of space is a cell in the automaton. When a particle is located in a particular cell, that cell is “running” the computation that describes that particle. For a particle to move, the data describing it needs to get moved from its current location to its new location at the next time quantum. That takes some amount of computation, and the cell can only perform a finite amount of computation per quantum. The faster the particle moves, the more of each time quantum is dedicated to motion, and the less it has left for anything else. The speed of light, in this theory, is the speed where the full quantum for computing a particle’s behavior is dedicated to nothing but moving it to its next location.
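
To make that intuition concrete, here’s a deliberately crude sketch in Python. It is not Minsky’s actual model; the linear budget split below is my own simplification (real time dilation follows sqrt(1 − v²/c²), not a straight line), and the whole thing is only meant to illustrate the “fixed computation budget per quantum” idea.

```python
# A toy illustration of the Minsky-style cellular-automaton intuition described
# above. Purely illustrative: not a claim about how any real CA model of physics
# works. Assumption: each cell gets BUDGET units of computation per time quantum;
# moving the particle's data costs a share of that budget proportional to its
# speed, and whatever is left over goes to the particle's "internal" update
# (read: its experienced, or proper, time).

BUDGET = 1.0          # computation available per time quantum
SPEED_OF_LIGHT = 1.0  # the speed at which motion consumes the whole budget

def internal_computation_per_tick(speed):
    """Fraction of the quantum left over for the particle's own evolution."""
    if speed > SPEED_OF_LIGHT:
        raise ValueError("motion alone would exceed the per-quantum budget")
    motion_cost = BUDGET * (speed / SPEED_OF_LIGHT)
    return BUDGET - motion_cost

for v in (0.0, 0.5, 0.9, 1.0):
    print(f"v = {v}: internal computation per quantum = {internal_computation_per_tick(v):.2f}")
# At v = 1.0 (the speed of light), every bit of the budget goes to moving the
# particle's data to the next cell, and nothing is left for its internal clock.
```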

It’s very pretty. Intuitively, it works. That makes it an interesting idea. But the problem is, no one has come up with an actual working model. We’ve got real observations of the behavior of the physical universe that no one has been able to describe using the cellular automaton model.

That’s the problem with all of the computational hypotheses so far. They look really good in the abstract, but none of them come close to actually working in practice.

A lot of people nowadays like to mock string theory, because it’s a theory that looks really good, but has no testable predictions. String theory can describe the behavior of the universe that we see. The problem with it isn’t that there are things we observe in the universe that it can’t predict, but that it can predict just about anything. There are a ton of parameters in the theory that can be shifted, and depending on their values, almost anything that we could observe can be fit by string theory. The problem with it is twofold: we don’t have any way (yet) of figuring out what values those parameters need to have to fit our universe, and we don’t have any way (yet) of performing an experiment that tests a prediction of string theory that’s different from the predictions of other theories.

As much as we enjoy mocking string theory for its lack of predictive value, the computational hypotheses are far worse! So far, no one has been able to come up with one that can come close to explaining all of the things that we’ve already observed, much less to making predictions that are better than our current theories.

But just like he did with his “criticism” of the big bang, Mr. Rosner makes predictions, but doesn’t bother to make them precise. There’s no math to his prediction, because there’s no content to his prediction. It doesn’t mean anything. It’s empty prose, proclaiming victory for an ill-defined idea on the basis of hand-waving and hype.

Boing-Boing should be ashamed for giving this bozo a platform.

Run! Hide your children! Protect them from math with letters!

Normally, I don’t write blog entries during work hours. I sometimes post stuff then, because it gets more traffic if it’s posted mid-day, but I don’t write. Except sometimes, when I come across something that’s so ridiculous, so offensive, so patently mind-bogglingly stupid that I can’t work until I say something. Today is one of those days.

In the US, many school systems have been adopting something called the Common Core. The Common Core is an attempt to come up with one basic set of educational standards that are applied consistently in all of the states. This probably sounds like a straightforward, obvious thing. In my experience, most Europeans are actually shocked that the US doesn’t have anything like this. (In fact, at best, it’s historically been standardized state-by-state, or even school district by school district.) In the US, a high school diploma doesn’t really mean anything: the standards are so widely varied that you can’t count on much of anything!

The total mishmash of standards is obviously pretty dumb. The Common Core is an attempt to rationalize it, so that no matter where you go to school, there should be some basic commonality: when you finish 5th grade, you should be able to read at a certain level, do math at a certain level, etc.

Obviously, the common core isn’t perfect. It isn’t even necessarily particularly good. (The US being the US, it’s mostly focused on standardized tests.) But it’s better than nothing.

But again, the US being the US, there’s a lot of resistance to it. Some of it comes from the flaky left, which worries about how common standards will stifle the creativity of their perfect little flower children. Some of it comes from the loony right, which worries about how it’s a federal takeover of the education system which is going to brainwash their kiddies into perfect little socialists.

But the worst, the absolute inexcusable worst, are the pig-ignorant jackasses who hate standards because it might turn children into adults who are less pig-ignorant than their parents. The poster child for this bullshit attitude is State Senator Al Melvin of Arizona. Senator Melvin repeats the usual right-wing claptrap about the federal government, and goes on
to explain what he dislikes about the math standards.

The math standards, he says, teach “fuzzy math”. What makes it fuzzy math? Some of the problems use letters instead of numbers.

The state of Arizona should reject the Common Core math standards, because the math curriculum sometimes uses letters instead of numbers. After all, everyone knows that there’s nothing more to math than good old simple arithmetic! Letters in math problems are a liberal conspiracy to convince children to become gay!

The scary thing is that I’m not exaggerating here. An argument that I have, horrifyingly, heard several times from crazies is that letters are used in math classes to try to introduce moral relativism into math. They say that the whole reason for using letters is because with numbers, there’s one right answer. But letters don’t have a fixed value: you can change what the letters mean. And obviously, we’re introducing that into math because we want to make children think that questions don’t have a single correct answer.

No matter where in the world you go, you’ll find stupid people. I don’t think that the US is anything special when it comes to that. But it does seem like we’re more likely to take people like this, and put them into positions of power. How does a man who doesn’t know what algebra is get put into a position where he’s part of the committee that decides on educational standards for a state? What on earth is wrong with people who would elect someone like this?

Senator Melvin isn’t just some random guy who happened to get into the state legislature. He’s currently the front-runner in the election for Arizona’s next governor. Hey Arizona, don’t you think that maybe, just maybe, you should make sure that your governor knows high school algebra? I mean, really, do you think that if he can’t understand a variable in an equation, he’s going to be able to understand the state budget?!

Bad Arithmetic and Blatant Political Lies

I’ve been trying to stay away from the whole political thing lately. Any time that I open my mouth to say anything about politicians, I get a bunch of assholes trying to jump down my throat for being “biased”. But sometimes, things just get too damned ridiculous, and I can’t possibly let it go without comment.

In the interests of disclosure: I despise Mitt Romney. Despite that, I think he’s gotten an unfairly hard time about a lot of things. Let’s face it, the guy’s a rich investor. But that’s been taken by the media, and turned into the story through which everything is viewed, whether it makes sense or not.

For example, there’s the whole $10,000 bet nonsense. I don’t think that that made a damned bit of sense. It was portrayed as “here’s a guy so rich that he can afford to lose $10,000”. But… well, let’s look at it from a mathematical perspective.

You can assess the cost of a bet by looking at it probabilistically. Take the cost of losing, and multiply it by the probability of losing. That’s the expected cost of the bet. So, in the case of that debate moment, what was the expected cost of the bet? $0. If you know that you’re betting about a fact, and you know the fact, then you know the outcome of the bet. It’s a standard rhetorical trick. How many of us have said “Bet you a million dollars”? It doesn’t matter what dollar figure you attach to it – because you know the fact, and you know that the cost of the bet, to you, is 0.
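
Spelled out as a calculation (a trivial sketch; the coin-flip number is just for contrast):

```python
# Expected cost of a bet = (cost if you lose) * (probability that you lose).
def expected_cost(stake, probability_of_losing):
    return stake * probability_of_losing

# "Bet you $10,000" about a fact you already know you're right about:
print(expected_cost(10_000, 0.0))  # 0.0 -- the bet costs you nothing
# The same stake on something you genuinely don't know (say, a coin flip):
print(expected_cost(10_000, 0.5))  # 5000.0
```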

But… Well, Mitt is a rich asshole.

As you must have heard, Mitt released his income tax return for last year, and an estimate for this year. Because his money is pretty much all investment income, he paid a bit under 15% in taxes. This is, quite naturally, really annoying to many people. Those of us who actually have jobs and get paid salaries don’t get away with a tax rate that low. (And people who are paid salary rather than investment profits have to pay the alternative minimum tax, which means that they’re not able to deduct charity the way that Mitt is.)

So, in an interview, Mitt was asked about the fairness of a guy who made over twenty million dollars a year paying such a low rate. And Mitt, asshole that he is, tried to cover up the insanity of the current system, by saying:

Well, actually, I released two years of taxes and I think the average is almost 15 percent. And then also, on top of that, I gave another more 15 percent to charity. When you add it together with all of the taxes and the charity, particularly in the last year, I think it reaches almost 40 percent that I gave back to the community.

I don’t care about whether the reasoning there is good or not. Personally, I think it’s ridiculous to say “yeah, I didn’t pay taxes, but I gave a lot of money to my church, so it’s OK.” But forget that part. Just look at the freaking arithmetic!

He pays less than 15% in taxes.

He pays 15% in charity (mostly donations to his church).

What’s less than 15 + 15?

It sure as hell isn’t “almost 40 percent”. It’s not quite 30 percent. This isn’t something debatable. It’s simple, elementary school arithmetic. It’s just fucking insane that he thinks he can just get away with saying that. But he did – they let him say that, and didn’t challenge it at all. He says “less than 15 + 15 = almost 40”, and the interviewer never even batted an eye.

And then, he moved on to something which is a bit more debatable:

One of the reasons why we have a lower tax rate on capital gains is because capital gains are also being taxed at the corporate level. So as businesses earn profits, that’s taxed at 35 percent, then as they distribute those profits as dividends, that’s taxed at 15 percent more. So, all total, the tax rate is really closer to 45 or 50 percent.

Now, like I said, you can argue about that. Personally, I don’t think it’s a particularly good argument. The way that I see it, corporations are a tradeoff. A business doesn’t need to be a corporation. You become a corporation, because transforming the business into a quasi-independent legal entity gives you some big advantages. A corporation owns its own assets. You, as an individual who owns part of a corporation, aren’t responsible for the debts of the corporation. You, as an individual who owns part of a corporation, aren’t legally liable for the actions (such as libel) of the corporation. The corporation is an independent entity, which owns its own assets, which is responsible for its debts and actions. In exchange for taking on the legal status of an independent entity, that legal entity becomes responsible for paying taxes on its income. You give it that independent legal status in order to protect yourself; and in exchange, that independent legal status entails an obligation for that independent entity to pay its own taxes.

But hey, let’s leave that argument aside for the moment. Who pays the cost of the corporate taxes? Is it the owners of the business? Is it the people who work for the business? Is it someone else?

When they talk about their own ridiculously low tax rates, people like Mitt argue that they’re paying those taxes, and they want to add those taxes to the total effective tax that they pay.

But when they want to argue about why we should lower corporate tax rates, they pull out a totally different argument, which they call the “flypaper theory”. The flypaper theory argues that the burden of corporate taxes falls on the employees of the company – because if the company didn’t have to pay those taxes, that money would be going to the employees as salary – that is, the taxes are part of the overall expenses paid by the company. A company’s effective profits are (revenue – expenses). Expenses, in turn, are taxes+labor+materials+…. The company makes a profit of $P to satisfy its shareholders. So if you took away corporate taxes, the company could continue to make $P while paying its employees more. Therefore, the cost of the corporate taxes comes out of the salaries of the corporation’s employees.
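
Here’s the flypaper accounting spelled out as a toy calculation. The numbers are invented, and it’s only meant to show the shape of the argument, not to model a real company:

```python
# Flypaper-theory arithmetic: profit = revenue - (taxes + labor + materials).
# All numbers below are made up.
revenue, labor, materials, taxes = 1_000_000, 500_000, 300_000, 100_000
profit = revenue - (taxes + labor + materials)   # the $P the shareholders expect

# The claim: drop the corporate tax, keep profit at the same $P, and the entire
# tax bill can be shifted into salaries instead.
labor_without_tax = labor + taxes
profit_without_tax = revenue - (0 + labor_without_tax + materials)
assert profit_without_tax == profit              # P unchanged; employees got the $100,000
```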

You can make several different arguments – that the full burden of taxes falls on the owners, or that the full burden of taxes falls on the employees, or that the full burden of taxes falls on the customers (because prices are raised to cover them). Each of those is something that you could reasonably argue. But what the conservative movement in America likes to do is to claim all of those: that the full burden of corporate taxes falls on the employees, and the full burden of corporate taxes falls on the customers, and the full burden of corporate taxes falls on the shareholders.

That’s just dishonest. If the full burden falls on one, then none of the burden falls on anyone else. The reality is, the burden of taxes is shared between all three. If there were no corporate taxes, companies probably would be able to pay their employees more – but there’s really no way that they’d take all of the money they pay in taxes, and push that into salary. And they’d probably be able to lower prices – but they probably wouldn’t lower prices enough to make up the entire difference. And they’d probably pay more in dividends/stock buybacks to pay the shareholders.

But you don’t get to count the same tax money three times.

Hydrinos: Impressive Free Energy Crackpottery

Back when I wrote about the whole negative energy rubbish, a reader wrote to me, and asked me to write something about hydrinos.

For those who are lucky enough not to know about them, hydrinos are part of another free energy scam. In this case, a medical doctor named Randell Mills claims to have discovered that hydrogen atoms can have multiple states beyond the typical, familiar ground state of hydrogen. Under the right conditions, so claims Dr. Mills, the electron shell around a hydrogen atom will compact into a tighter orbit, releasing a burst of energy in the process. And, in fact, it’s (supposedly) really, really easy to make hydrogen turn into hydrinos – if you let a bunch of hydrogen atoms bump into a bunch of Argon atoms, then presto! some of the hydrogen will shrink into hydrino form, and give you a bunch of energy.

Wonderful, right? Just let a bunch of gas bounce around in a balloon, and out comes energy!

Oh, but it’s better than that. There are multiple hydrino forms: you can just keep compressing and compressing the hydrogen atom, pushing out more and more energy each time. The more you compress it, the more energy you get – and you don’t really need to compress it. You just bump it up against another atom, and poof! energy.

To explain all of this, Dr. Mills further claims to have invented a new
form of quantum mechanics, called “grand unified theory of classical quantum mechanics” (CQM for short) which provides the unification between relativity and quantum mechanics that people have been looking for. And, even better, CQM is fully deterministic – all of that ugly probabilistic stuff from quantum mechanics goes away!

The problem is, it doesn’t work. None of it.

What makes hydrinos interesting as a piece of crankery is that there’s a lot more depth to it than to most crap. Dr. Mills hasn’t just handwaved that these hydrino things exist – he’s got a very elaborate detailed theory – with a lot of non-trivial math – to back it up. Alas, the math is garbage, but its garbage-ness isn’t obvious. To see the problems, we’ll need to get deeper into math than we usually do.

Let’s start with a couple of examples of the claims about hydrinos, and the kind of favorable clueless press they’ve received.

Here is an example of how hydrino supporters explain them:

In 1986 Randell Mills MD developed a theory that hydrogen atoms could shrink, and release lots of energy in the process. He called the resultant entity a “Hydrino” (little Hydrogen), and started a company called Blacklight Power, Inc. to commercialize his process. He published his theory in a book he wrote, which is available in PDF format on his website. Unfortunately, the book contains so much mathematics that many people won’t bother with it. On this page I will try to present the energy related aspect of his theory in language that I hope will be accessible to many.

According to Dr. Mills, when a hydrogen atom collides with certain other atoms or ions, it can sometimes transfer a quantity of energy to the other atom, and shrink at the same time, becoming a Hydrino in the process. The atom that it collided with is called the “catalyst”, because it helps the Hydrino shrink. Once a Hydrino has formed, it can shrink even further through collisions with other catalyst atoms. Each collision potentially resulting in another shrinkage.

Each successive level of shrinkage releases even more energy than the previous level. In other words, the smaller the Hydrino gets, the more energy it releases each time it shrinks another level.

To get an idea of the amounts of energy involved, I now need to introduce the concept of the “electron volt” (eV). An eV is the amount of energy that a single electron gains when it passes through a voltage drop of one volt. Since a volt isn’t much (a “dry cell” is about 1.5 volts), and the electric charge on an electron is utterly minuscule, an eV is a very tiny amount of energy. Nevertheless, it is a very representative measure of the energy involved in chemical reactions. e.g. when Hydrogen and Oxygen combine to form a water molecule, about 2.5 eV of energy is released per water molecule formed.

When Hydrogen shrinks to form a second level Hydrino (Hydrogen itself is considered to be the first level Hydrino), about 41 eV of energy is released. This is already about 16 times more than when Hydrogen and Oxygen combine to form water. And it gets better from there. If that newly formed Hydrino collides with another catalyst atom, and shrinks again, to the third level, then an additional 68 eV is released. This can go on for quite a way, and the amount gets bigger each time. Here is a table of some level numbers, and the energy released in dropping to that level from the previous level, IOW when you go from e.g. level 4 to level 5, 122 eV is released. (BTW larger level numbers represent smaller Hydrinos).

And some of the press:

Notice a pattern?

The short version of the problem with hydrinos is really, really simple.

The most fundamental fact of nature that we’ve observed is that everything tends to move towards its lowest energy state. The whole theory of hydrinos basically says that that’s not true: everything except hydrogen tends to move towards its lowest energy state, but hydrogen doesn’t. It’s got a dozen or so lower energy states, but none of the abundant quantities of hydrogen on earth are ever observed in any of those states unless they’re manipulated by Mills’ magical machine.

The whole basis of hydrino theory is Mills’ CQM. CQM is rubbish – but it’s impressive looking rubbish. I’m not going to go deep into detail; you can see a detailed explanation of the problems here; I’ll run through a short version.

To start, how is Mills claiming that hydrinos work? In CQM, he posits the existence of electron shell levels closer to the nucleus than the ground state of hydrogen. Based on his calculations, he comes up with an energy figure for the difference between the ground state and the hydrino state. Then he finds other substances that have the property that boosting one electron into a higher energy state would cost the same amount of energy. When a hydrogen atom collides with an atom that has a matching electron transition, the hydrogen can get bumped into the hydrino state, while kicking an electron into a higher orbital. That electron will supposedly, in due time, fall back to its original level, releasing the energy differential as a photon.
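
To put some arithmetic behind the figures quoted in the excerpt above: they’re consistent with a claimed binding energy of 13.6 eV times p² for the “level p” hydrino, with ordinary hydrogen as level 1. That scaling is just my reading of the quoted numbers, not an endorsement of the physics; here’s a quick check:

```python
# Sanity check on the numbers in the hydrino excerpt quoted above.
# Assumption (a reading of the quoted figures, not physics): the "level p"
# hydrino has binding energy 13.6 * p**2 eV, with ordinary hydrogen at p = 1,
# so shrinking from level p to level p + 1 supposedly releases the difference.
RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, in eV

def claimed_release(p):
    """Energy (eV) supposedly released in going from level p to level p + 1."""
    return RYDBERG_EV * ((p + 1) ** 2 - p ** 2)

for p in (1, 2, 3, 4):
    print(f"level {p} -> {p + 1}: {claimed_release(p):.0f} eV")
# level 1 -> 2: 41 eV  (the "16 times more than water" figure)
# level 2 -> 3: 68 eV
# level 3 -> 4: 95 eV
# level 4 -> 5: 122 eV
```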

On this level, it sort-of looks correct. It doesn’t violate conservation of energy: the collision between the two atoms doesn’t produce anything magical. It’s just a simple transfer of energy. That much is fine.

It’s when you get into the details that it gets seriously fudgy.

Right from the start, if you know what you’re doing, CQM goes off the rails. For example, CQM claims that you can describe the dynamics of an electron in terms of a classical wave charge-density function equation. Mills actually gives that function, and asserts that it respects Lorentz invariance. That’s crucial – Lorentz invariance is critical for relativity: it’s the fundamental mathematical symmetry that’s the basis of relativity. But his equation doesn’t actually respect Lorentz invariance. Or, rather, it does – but only if the electron is moving at the speed of light. Which it can’t do.

Mills goes on to describe the supposed physics of hydrinos. If you work through his model, the only state that is consistent with both his equations, and his claim that the electrons orbit in a spherical shell above the atom – well, if you do that, you’ll find that according to his own equations, there is only one possible state for a hydrogen atom – the conventional ground state.

It goes on in that vein for quite a while. He’s got an elaborate system, with an elaborate mathematical framework… but none of the math actually says what he says it says. The Lorentz invariance example that I cited above – that’s typical. Print an equation, say that it says X, even though the equation doesn’t say anything like X.

But we can go a bit further. The fundamental state of atoms is something that we understand pretty well, because we’ve got so many observations, and so much math describing it. And the thing is, that math is pretty damned convincing. That doesn’t mean that it’s correct, but it does mean that any theory that wants to replace it must be able to describe everything that we’ve observed at least as well as the current theory.

Why do atoms have the shape that they do? Why are they the size that they are? It’s not a super easy thing to understand, because electrons aren’t really particles. They’re something strange. We don’t often think about that, but it’s true. They’re deeply bizarre things: under many conditions, they behave more like waves than like particles. And that’s true of the atom.

The reason that atoms are the size that they are is because the electron “orbitals” have sizes and shapes that are determined by resonant frequencies of the wave-like aspects of electrons. What Mills is suggesting is that there are a range of never-before observed resonant frequencies of electrons. But the math that he uses to support that claim just doesn’t work.

Now, I’ll be honest here. I’m not nearly enough of a physics whiz to be competent to judge the accuracy of his purported quantum mechanical system. But I’m still pretty darn confident that he’s full of crap. Why?

I’m from New Jersey – pretty much right up the road from where his lab is. Going to college right up the road from him, I’ve been hearing about him for a long time. He’s been running this company for quite a while – going on two decades. And all that time, the company has been constantly issuing press releases promising that it’s just a year away from being commercialized! It’s always one step away. But never, never, has he released enough information to let someone truly independent verify or reproduce his results. And he’s been very deceptive about that: he’s made various claims about independent verification on several occasions.

For example, he once cited that his work had been verified by a researcher at Harvard. In fact, he’d had one of his associates rent a piece of equipment at Harvard, and use it for a test. So yes, it was tested by a researcher – if you count his associate as a legitimate researcher. And it was tested at Harvard. But the claim that it was tested by a researcher at Harvard is clearly meant to imply that it was tested by a Harvard professor, when it wasn’t.

For something around 20 years, he’s been making promises, giving very tightly controlled demos, refusing to give any real details, refusing to actually explain how to reproduce his “results”, and promising that it’s just one year away from being commercialized!

And yet… hydrogen is the most common substance in the universe. If it really had a lower energy state than what we call its ground state, and that lower energy state was really as miraculous as he claims – why wouldn’t we see it? Why hasn’t it ever been observed? Substances like Argon are rare – but they’re not that rare. Argon has been exposed to hydrogen under laboratory conditions plenty of times – and yet, nothing anomalous has ever been observed. All of the supposed hydrino catalysts have been observed so often under so many conditions – and yet, no anomalous energy has ever been noticed before. But according to Mills, we should be seeing tons of it.

And that’s not all. Mills also claims that you can create all sorts of compounds with hydrinos – and naturally, every single one of those compounds is positively miraculous! Bonded with silicon, you get better semiconductors! Substitute hydrinos for regular hydrogen in a battery electrolyte, and you get a miracle battery! Use it in rocket fuel instead of common hydrogen, and you get a ten-fold improvement in the performance of a rocket! Make a laser from it, and you can create higher-density data storage and communication systems. Everything that hydrinos touch is amazing!

But… not one of these miraculous substances has ever been observed before. We work with silicon all the time – but we’ve never seen the magic silicon hydrino compound. And he’s never been willing to actually show anyone any of these miracle substances.

He claims that he doesn’t show it because he’s protecting his intellectual property. But that’s silly. If hydrinos existed, then just telling us that these compounds exist and have interesting properties should be enough for other labs to go ahead and experiment with producing them. But no one has. Whether he shows the supposed miracle compounds or not doesn’t change anyone else’s ability to produce those. Even if he’s keeping his magic hydrino factory secret, so that no one else has access to hydrinos, by telling us that these compounds exist, he’s given away the secret. He’s not protecting anything anymore: by publically talking about these things, he’s given up his right to patent the substances. It’s true that he still hasn’t given up the rights to the process of producing them – but publicly demonstrating these alleged miracle substances wouldn’t take away any legal rights that he hasn’t already given up. So, why doesn’t he show them to you?

Because they don’t exist.

Second Law Silliness from Sewell

So, via Panda’s Thumb, I hear that Granville Sewell is up to his old hijinks. Sewell is a classic creationist crackpot, who’s known for two things.

First, he’s known for chronically recycling the old “second law of thermodynamics” garbage. And second, he’s known for building arguments based on “thought experiments” – where instead of doing experiments, he just makes up the experiments and the results.

The second-law crankery is really annoying. It’s one of the oldest creationist pseudo-scientific schticks around, and it’s such a terrible argument. It’s also a sort-of pet peeve of mine, because I hate the way that people generally respond to it. It’s not that the common response is wrong – but rather that the common responses focus on one error, while neglecting to point out that there are many deeper issues with it.

In case you’ve been hiding under a rock, the creationist argument is basically:

  1. The second law of thermodynamics says that disorder always increases.
  2. Evolution produces highly-ordered complexity via a natural process.
  3. Therefore, evolution must be impossible, because you can’t create order.

The first problem with this argument is very simple. The second law of thermodynamics does not say that disorder always increases. It’s a classic example of my old maxim: the worst math is no math. The second law of thermodynamics doesn’t say anything as fuzzy as “you can’t create order”. It’s a precise, mathematical statement. The second law of thermodynamics says that in a closed system:

\Delta S \geq \int \frac{\delta Q}{T}

where:

  1. S is the entropy in a system,
  2. Q is the amount of heat transferred in an interaction, and
  3. T is the temperature of the system.

Translated into english, that basically says that in any interaction that involves the transfer of heat, the entropy of the system cannot possibly be reduced. Other ways of saying it include “There is no possible process whose sole result is the transfer of heat from a cooler body to a warmer one”; or “No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work.”

Note well – there is no mention of “chaos” or “disorder” in these statements: The second law is a statement about the way that energy can be used. It basically says that when
you try to use energy, some of that energy is inevitably lost in the process of using it.
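
For a concrete feel for what that inequality says, here’s a back-of-the-envelope example, assuming two idealized reservoirs at fixed temperatures so that each one’s entropy change is just Q/T:

```python
# Entropy bookkeeping for moving heat between two idealized reservoirs.
Q = 1000.0       # joules of heat transferred
T_hot = 500.0    # kelvin
T_cold = 300.0   # kelvin

# Heat flowing from the hot body to the cold body:
delta_S = Q / T_cold - Q / T_hot
print(delta_S)   # about +1.33 J/K: total entropy goes up, as the second law allows

# Running the transfer the other way (cold to hot, with nothing else happening)
# would give about -1.33 J/K, which is exactly the kind of process the law forbids.
```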

Talking about “chaos”, “order”, “disorder” – those are all metaphors. Entropy is a difficult concept. It doesn’t really have a particularly good intuitive meaning. It means something like “energy lost into forms that can’t be used to do work” – but that’s still a poor attempt to capture it in metaphor. The reason that people use order and disorder comes from a way of thinking about energy: if I can extract energy from burning gasoline to spin the wheels of my car, the process of spinning my wheels is very organized – it’s something that I can see as a structured application of energy – or, stretching the metaphor a bit, the energy that spins the wheels is structured. On the other hand, the “waste” from burning the gas – the heating of the engine parts, the energy caught in the warmth of the exhaust – that’s just random and useless. It’s “chaotic”.

So when a creationist says that the second law of thermodynamics says you can’t create order, they’re full of shit. The second law doesn’t say that – not in any shape or form. You don’t need to get into the whole “open system/closed system” stuff to dispute it; it simply doesn’t say what they claim it says.

But let’s not stop there. Even if you accept that the mathematical statement of the second law really did say that chaos always increases, that still has nothing to do with evolution. Look back at the equation. What it says is that in a closed system, in any interaction, the total entropy must increase. Even if you accept that entropy means chaos, all that it says is that in any interaction, the total entropy must increase.

It doesn’t say that you can’t create order. It says that the cumulative end result of any interaction must increase entropy. Want to build a house? Of course you can do it without violating the second law. But to build that house, you need to cut down trees, dig holes, lay foundations, cut wood, pour concrete, put things together. All of those things use a lot of energy. And in each minute interaction, you’re expending energy in ways that increase entropy. If the creationist interpretation of the second law were true, you couldn’t build a house, because building a house involves creating something structured – creating order.

Similarly, if you look at a living cell, it does a whole lot of highly ordered, highly structured things. In order to do those things, it uses energy. And in the process of using that energy, it creates entropy. In terms of order and chaos, the cell uses energy to create order, but in the process of doing so it creates wastes – waste heat, and waste chemicals. It converts high-energy structured molecules into lower-energy molecules, converting things with energetic structure to things without. Look at all of the waste that’s produced by a living cell, and you’ll find that it does produce a net increase in entropy. Once again, if the creationists were right, then you wouldn’t need to worry about whether evolution was possible under thermodynamics – because life wouldn’t be possible.

In fact, if the creationists were right, the existence of planets, stars, and galaxies wouldn’t be possible – because a galaxy full of stars with planets is far less chaotic than a loose cloud of hydrogen.

Once again, we don’t even need to consider the whole closed system/open system distinction, because even if we treat earth as a closed system, their arguments are wrong. Life doesn’t really defy the laws of thermodynamics – it produces entropy exactly as it should.

But the creationist second-law argument is even worse than that.

The second-law argument is that because DNA “encodes information”, and the amount of information “encoded” in DNA increases as a result of the evolutionary process, evolution violates the second law.

This absolutely doesn’t require bringing in any open/closed system discussions. Doing that is just a distraction which allows the creationist to sneak their real argument underneath.

The real point is: DNA is a highly structured molecule. No disagreement there. But so what? In the life of an organism, there are virtually un-countable numbers of energetic interactions, all of which result in a net increase in the amount of entropy. Why on earth would adding a bunch of links to a DNA chain completely outweigh those? In fact, changing the DNA of an organism is just another entropy increasing event. The chemical processes in the cell that create DNA strands consume energy, and use that energy to produce molecules like DNA, producing entropy along the way, just like pretty much every other chemical process in the universe.

The creationist argument relies on a bunch of sloppy handwaves: “entropy” is disorder; “you can’t create order”, “DNA is ordered”. In fact, evolution has no problem with respect to entropy: one way of viewing evolution is that it’s a process of creating ever more effective entropy-generators.

Now we can get to Sewell and his arguments, and you can see how perfectly they match what I’ve been talking about.

Imagine a high school science teacher renting a video showing a tornado sweeping through a town, turning houses and cars into rubble. When she attempts to show it to her students, she accidentally runs the video backward. As Ford predicts, the students laugh and say, the video is going backwards! The teacher doesn’t want to admit her mistake, so she says: “No, the video is not really going backward. It only looks like it is because it appears that the second law is being violated. And of course entropy is decreasing in this video, but tornados derive their power from the sun, and the increase in entropy on the sun is far greater than the decrease seen on this video, so there is no conflict with the second law.” “In fact,” the teacher continues, “meteorologists can explain everything that is happening in this video,” and she proceeds to give some long, detailed, hastily improvised scientific theories on how tornados, under the right conditions, really can construct houses and cars. At the end of the explanation, one student says, “I don’t want to argue with scientists, but wouldn’t it be a lot easier to explain if you ran the video the other way?”

Now imagine a professor describing the final project for students in his evolutionary biology class. “Here are two pictures,” he says.

“One is a drawing of what the Earth must have looked like soon after it formed. The other is a picture of New York City today, with tall buildings full of intelligent humans, computers, TV sets and telephones, with libraries full of science texts and novels, and jet airplanes flying overhead. Your assignment is to explain how we got from picture one to picture two, and why this did not violate the second law of thermodynamics. You should explain that 3 or 4 billion years ago a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors. Explain how, over a very long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complicated collections of atoms, and how eventually something called “intelligence” allowed some of these collections of atoms to design buildings and computers and TV sets, and write encyclopedias and science texts. But be sure to point out that while none of this would have been possible in an isolated system, the Earth is an open system, and entropy can decrease in an open system as long as the decreases are compensated by increases outside the system. Energy from the sun is what made all of this possible, and while the origin and evolution of life may have resulted in some small decrease in entropy here, the increase in entropy on the sun easily compensates this tiny decrease. The sun should play a central role in your essay.”

When one student turns in his essay some days later, he has written,

“A few years after picture one was taken, the sun exploded into a supernova, all humans and other animals died, their bodies decayed, and their cells decomposed into simple organic and inorganic compounds. Most of the buildings collapsed immediately into rubble, those that didn’t, crumbled eventually. Most of the computers and TV sets inside were smashed into scrap metal, even those that weren’t, gradually turned into piles of rust, most of the books in the libraries burned up, the rest rotted over time, and you can see see the result in picture two.”

The professor says, “You have switched the pictures!” “I know,” says the student. “But it was so much easier to explain that way.”

Evolution is a movie running backward, that is what makes it so different from other phenomena in our universe, and why it demands a very different sort of explanation.

This is a perfect example of both of Sewell’s usual techniques.

First, the essential argument here is rubbish. It’s the usual “second-law means that you can’t create order”, even though that’s not what it says, followed by a rather shallow and pointless response to the open/closed system stuff.

And the second part is what makes Sewell Sewell. He can’t actually make his own arguments. No, that’s much too hard. So he creates fake people, and plays out a story using his fake people and having them make fake arguments, and then uses the people in his story to illustrate his argument. It’s a technique that I haven’t seen used so consistently since I read Ayn Rand in high school.

What happens if you don't understand math? Just replace it with solipsism, and you can get published!

About four years ago, I wrote a post about a crackpot theory by a biologist named Robert Lanza. Lanza is a biologist – a genuine, serious scientist. And his theory got published in a major journal, “The American Scholar”. Nevertheless, it’s total rubbish.

Anyway, the folks over at the Encyclopedia of American Loons just posted an entry about him, so I thought it was worth bringing back this oldie-but-goodie. The original post was inspired by a comment from one of my most astute commenters, Mr. Blake Stacey, where he gave me a link to Lanza’s article.

The article is called “A New Theory of the Universe”, by Robert Lanza, and as I said, it was published in the American Scholar. Lanza’s article is a rotten piece of new-age gibberish, with all of the usual hallmarks: lots of woo, all sorts of babble about how important consciousness is, random nonsensical babblings about quantum physics, and of course, bad math.

Hold on tight: the world ends next saturday!

(For some idiot reason, I was absolutely certain that today was the 12th. It’s not. It’s the tenth. D’oh. There’s a freakin’ time&date widget on my screen! Thanks to the commenter who pointed this out.)

A bit over a year ago, before the big move to Scientopia, I wrote about a loonie named Harold Camping. Camping is the guy behind the uber-christian “Family Radio”. He predicted that the world is going to end on May 21st, 2011. I first heard about this when it got written up in January of 2010 in the San Francisco Chronicle.

And now, we’re less than two weeks away from the end of the world according to Mr. Camping! So I thought hey, it’s my last chance to make sure that I’m one of the damned!

Another Crank comes to visit: The Cognitive Theoretic Model of the Universe

When an author of one of the pieces that I mock shows up, I try to bump them up to the top of the queue. No matter how crackpotty they are, I think that if they’ve gone to the trouble to come and defend their theories, they deserve a modicum of respect, and giving them a fair chance to get people to see their defense is the least I can do.

A couple of years ago, I wrote about the Cognitive Theoretic Model of the Universe. Yesterday, the author of that piece showed up in the comments. It’s a two-year-old post, which was originally written back at ScienceBlogs – so a discussion in the comments there isn’t going to get noticed by anyone. So I’m reposting it here, with some revisions.

Stripped down to its basics, the CTMU is just yet another postmodern “perception defines the universe” idea. Nothing unusual about it on that level. What makes it interesting is that it tries to take a set-theoretic approach to doing it. (Although, to be a tiny bit fair, he claims that he’s not taking a set theoretic approach, but rather demonstrating why a set theoretic approach won’t work. Either way, I’d argue that it’s more of a word-game than a real theory, but whatever…)

The real universe has always been theoretically treated as an object, and specifically as the composite type of object known as a set. But an object or set exists in space and time, and reality does not. Because the real universe by definition contains all that is real, there is no “external reality” (or space, or time) in which it can exist or have been “created”. We can talk about lesser regions of the real universe in such a light, but not about the real universe as a whole. Nor, for identical reasons, can we think of the universe as the sum of its parts, for these parts exist solely within a spacetime manifold identified with the whole and cannot explain the manifold itself. This rules out pluralistic explanations of reality, forcing us to seek an explanation at once monic (because nonpluralistic) and holistic (because the basic conditions for existence are embodied in the manifold, which equals the whole). Obviously, the first step towards such an explanation is to bring monism and holism into coincidence.

E. E. Escultura and the Field Axioms

As you may have noticed, E. E. Escultura has shown up in the comments to this blog. In one comment, he made an interesting (but unsupported) claim, and I thought it was worth promoting up to a proper discussion of its own, rather than letting it rage in the comments of an unrelated post.

What he said was:

You really have no choice friends. The real number system is ill-defined, does not exist, because its field axioms are inconsistent!!!

This is a really bizarre claim. The field axioms are inconsistent?

I’ll run through a quick review, because I know that many/most people don’t have the field axioms memorized. But the field axioms are, basically, an extremely simple set of rules describing the behavior of an algebraic structure. The real numbers are the canonical example of a field, but you can define other fields; for example, the rational numbers form a field; if you allow the values to be a class rather than a set, the surreal numbers form a field.

So: a field is a collection of values F with two operations, “+” and “*”, such that:

  1. Closure: ∀ a, b ∈ F: a + b ∈ F ∧ a * b ∈ F
  2. Associativity: ∀ a, b, c ∈ F: a + (b + c) = (a + b) + c ∧ a * (b * c) = (a * b) * c
  3. Commutativity: ∀ a, b ∈ F: a + b = b + a ∧ a * b = b * a
  4. Identity: there exist distinct elements 0 and 1 in F such that ∀ a ∈ F: a + 0 = a, ∀ b ∈ F: b*1=b
  5. Additive inverses: ∀ a ∈ F, there exists an additive inverse -a ∈ F such that a + -a = 0.
  6. Multiplicative Inverse: For all a ∈ F where a ≠ 0, there is a multiplicative inverse a⁻¹ ∈ F such that a * a⁻¹ = 1.
  7. Distributivity: ∀ a, b, c ∈ F: a * (b+c) = (a*b) + (a*c)

So, our friend Professor Escultura claims that this set of axioms is inconsistent, and that therefore the real numbers are ill-defined. One of the things that makes the field axioms so beautiful is how simple they are. They’re a nice, minimal illustration of how we expect numbers to behave.
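
Just to underline how ordinary these axioms are, here’s a quick spot-check in Python using exact rational arithmetic. Checking a handful of sample values obviously proves nothing about consistency; it’s only an illustration that the axioms describe the arithmetic everyone already does:

```python
# Spot-check the field axioms on a few sample rationals.
from fractions import Fraction as F
from itertools import product

samples = [F(-2), F(0), F(1), F(1, 2), F(7, 3)]

for a, b, c in product(samples, repeat=3):
    assert a + (b + c) == (a + b) + c and a * (b * c) == (a * b) * c  # associativity
    assert a + b == b + a and a * b == b * a                          # commutativity
    assert a * (b + c) == (a * b) + (a * c)                           # distributivity

for a in samples:
    assert a + F(0) == a and a * F(1) == a   # identities
    assert a + (-a) == F(0)                  # additive inverse
    if a != 0:
        assert a * (F(1) / a) == F(1)        # multiplicative inverse

print("no contradiction found in these samples")
```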

So, Professor Escultura: to claim that the field axioms are inconsistent, what you’re saying is that this set of axioms leads to an inevitable contradiction. So, what exactly about the field axioms is inconsistent? Where’s the contradiction?

Representational Crankery: the New Reals and the Dark Number

There’s one kind of crank that I haven’t really paid much attention to on this blog, and that’s the real number cranks. I’ve touched on real number crankery in my little encounter with John Gabriel, and back in the old 0.999…=1 post, but I’ve never really given them the attention that they deserve.

There are a huge number of people who hate the logical implications of our definition of the real numbers, and who insist that those unpleasant complications mean that our concept of real numbers is based on a faulty definition, or even that the whole concept of real numbers is ill-defined.

This is an underlying theme of a lot of Cantor crankery, but it goes well beyond that. And the basic problem underlies a lot of bad mathematical arguments. The root of this particular problem comes from a confusion between the representation of a number, and that number itself. “\frac{1}{2}” isn’t a number: it’s a notation that we understand refers to the number that you get by dividing one by two.

There’s a similar form of looniness that you get from people who dislike the set-theoretic construction of numbers. In classic set theory, you can construct the set of integers by starting with the empty set, which is used as the representation of 0. Then the set containing the empty set is the value 1 – so 1 is represented as { 0 }. Then 2 is represented as { 1, 0 }; 3 as { 2, 1, 0 }; and so on. (There are several variations of this, but this is the basic idea.) You’ll see arguments from people who dislike this saying things like “This isn’t a construction of the natural numbers, because you can take the intersection of 8 and 3, and set intersection is meaningless on numbers.” The problem with that is the same as the problem with the notational crankery: the set theoretic construction doesn’t say “the empty set is the value 0”, it says “in a set theoretic construction, the empty set can be used as a representation of the number 0”.
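
For concreteness, here’s a tiny Python sketch of that construction, using frozensets as stand-ins for sets. It’s purely illustrative, and it also shows why the “intersection of 8 and 3” complaint misses the point: the intersection is perfectly well-defined on the representations, it just isn’t an operation on numbers.

```python
# The set-theoretic encoding described above: each number is represented by the
# set of the representations of all smaller numbers, starting from 0 = {} (the
# empty set). frozenset is just a convenient stand-in for "set" here.
def naturals_up_to(n):
    numbers = []
    current = frozenset()              # the representation of 0
    for _ in range(n + 1):
        numbers.append(current)
        current = frozenset(numbers)   # k + 1 is represented as { k, ..., 1, 0 }
    return numbers

nums = naturals_up_to(8)
assert len(nums[5]) == 5               # the representation of 5 has five elements
# The "meaningless" intersection is perfectly legal on the representations; in
# this encoding it just picks out the representation of the smaller number:
assert nums[8] & nums[3] == nums[3]
```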

The particular version of this crankery that I’m going to focus on today is somewhat related to the inverse-19 loonies. If you recall their monument, the plaque talks about how their work was praised by a math professor by the name of Edgar Escultura. Well, it turns out that Escultura himself is a bit of a crank.

The specific manifestation of his crankery is this representational issue. But the root of it is really related to the discomfort that many people feel at some of the conclusions of modern math.

A lot of what we learned about math has turned out to be non-intuitive. There’s Cantor, and Gödel, of course: there are lots of different sizes of infinities; and there are mathematical statements that are neither true nor false. And there are all sorts of related things – for example, the whole idea of undescribable numbers. Undescribable numbers drive people nuts. An undescribable number is a number which has the property that there’s absolutely no way that you can write it down, ever. Not that you can’t write it in, say, base-10 decimals, but that you can’t ever write down anything, in any form, that uniquely describes it. And, it turns out, the vast majority of numbers are undescribable.
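
The counting argument behind that last claim fits in a line. This is just the standard cardinality argument, nothing specific to any one notion of “description”:

```latex
\bigl|\{\text{finite strings over a finite alphabet}\}\bigr| \;=\; |\mathbb{N}|
\;<\; |\mathbb{R}| \qquad \text{(Cantor's diagonal argument)}
```

Every description is a finite string, and each string can name at most one real number, so only countably many reals can ever be described; everything else is undescribable.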

This leads to the representational issue. Many people insist that if you can’t represent a number, that number doesn’t really exist: it’s nothing but an artifact of a flawed definition. By this argument, undescribable numbers don’t exist, and the only reason we think they do is that the real numbers are ill-defined.

This kind of crackpottery isn’t limited to stupid people. Professor Escultura isn’t a moron – but he is a crackpot. What he’s done is take the representational argument, and run with it. According to him, the only real numbers are numbers that are representable. What he proposes is very nearly a theory of computable numbers – but he tangles it up in the representational issue. And in a fascinatingly ironic turn-around, he takes the artifacts of representational limitations, and insists that they represent real mathematical phenomena – resulting in an ill-defined number theory as a way of correcting what he alleges is an ill-defined number theory.

His system is called the New Real Numbers.

In the New Real Numbers, which he notates as R^*, the decimal notation is fundamental. The set of new real numbers consists exactly of the set of numbers with finite representations in decimal form. This leads to some astonishingly bizarre things. From his paper:

3) Then the inverse operation to multiplication called division; the result of dividing a decimal by another if it exists is called quotient provided the divisor is not zero. Only when the integral part of the devisor is not prime other than 2 or 5 is the quotient well defined. For example, 2/7 is ill defined because the quotient is not a terminating decimal (we interpret a fraction as division).

So 2/7ths is not a new real number: it’s ill-defined. 1/3 isn’t a new real number either: it’s ill-defined.
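What that garbled definition is groping toward is a standard fact: a reduced fraction has a terminating decimal expansion exactly when its denominator has no prime factors other than 2 and 5. A quick sketch of that test (my code, not Escultura’s):

```python
from math import gcd

def is_new_real(p, q):
    """Check whether p/q has a terminating decimal expansion -- i.e. whether
    it would count as one of Escultura's "new real numbers"."""
    q //= gcd(p, q)              # reduce the fraction first
    for factor in (2, 5):
        while q % factor == 0:
            q //= factor
    return q == 1                # any other prime factor forces a repeating expansion

print(is_new_real(1, 2))   # True:  1/2 = 0.5
print(is_new_real(2, 7))   # False: 2/7 = 0.285714... is "ill-defined"
print(is_new_real(1, 3))   # False: 1/3 = 0.333...    is "ill-defined"
```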

4) Since a decimal is determined or well-defined by its digits, nonterminating decimals are ambiguous or ill-defined. Consequently, the notion irrational is ill-defined since we cannot cheeckd all its digits and verify if the digits of a nonterminaing decimal are periodic or nonperiodic.

After that last one, this isn’t too surprising. But it’s still absolutely amazing. The square root of two? Ill-defined: it doesn’t really exist. e? Ill-defined, it doesn’t exist. \pi? Ill-defined, it doesn’t really exist. All of those triangles and circles, everything that depends on \pi or e? They’re all bullshit according to Escultura. Because if he can’t write them down on a piece of paper in decimal notation in a finite amount of time, they don’t exist.

Of course, this is entirely too ridiculous, so he backtracks a bit, and defines a non-terminating decimal number. His definition is quite peculiar, and I can’t say that I really follow it. I think this may be a language issue – Escultura isn’t a native English speaker. I’m not sure which parts of this are crackpottery, which are linguistic struggles, and which are notational difficulties in reading math rendered as plain text.

5) Consider the sequence of decimals,

(d)^na_1a_2…a_k, n = 1, 2, …, (1)

where d is any of the decimals, 0.1, 0.2, 0.3, …, 0.9, a_1, …, a_k, basic integers (not all 0 simultaneously). We call the nonstandard sequence (1) d-sequence and its nth term nth d-term. For fixed combination of d and the a_j’s, j = 1, …, k, in (1) the nth term is a terminating decimal and as n increases indefinitely it traces the tail digits of some nonterminating decimal and becomes smaller and smaller until we cannot see it anymore and indistinguishable from the tail digits of the other decimals (note that the nth d-term recedes to the right with increasing n by one decimal digit at a time). The sequence (1) is called nonstandard d-sequence since the nth term is not standard g-term; while it has standard limit (in the standard norm) which is 0 it is not a g-limit since it is not a decimal but it exists because it is well-defined by its nonstandard d-sequence. We call its nonstandard g-limit dark number and denote by d. Then we call its norm d-norm (standard distance from 0) which is d > 0. Moreover, while the nth term becomes smaller and smaller with indefinitely increasing n it is greater than 0 no matter how large n is so that if x is a decimal, 0 < d < x.

I think that what he’s trying to say there is that a non-terminating decimal is a sequence of finite representations that approach a limit. So there’s still no actual infinite representation – instead, you’ve got an infinite sequence of finite representations, where each finite representation in the sequence can be generated from the previous one. This bit is why I said that this is nearly a theory of the computable numbers. Obviously, undescribable numbers can’t exist in this theory, because you can’t generate such a sequence for them.
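Here’s my best reading of it, as a sketch (my own code, not his formalism): identify a nonterminating decimal with the sequence of its finite truncations, each term generated from the previous one by appending one more digit.

```python
from fractions import Fraction

def truncations(p, q, terms=6):
    """The first few finite decimal truncations of p/q -- each one is a
    perfectly good 'new real number', and together they approach p/q."""
    value = Fraction(p, q)
    return [Fraction(int(value * 10**n), 10**n) for n in range(1, terms + 1)]

for t in truncations(1, 3):
    print(f"{float(t):.6f}")   # 0.300000, 0.330000, 0.333000, ...
```

In the standard theory, the number is the limit of that sequence. A computable number is one where some program can produce the nth truncation for every n – which is why this looks so much like a theory of computable numbers with the limits amputated.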

Where this really goes totally off the rails is that throughout this, he’s working on the assumption that there’s a one-to-one relationship between representations and numbers. That’s what that “dark number” stuff is about. You see, in Escultura’s system, 0.999999… is not equal to one. It’s not a representational artifact. In Escultura’s system, there are no representational artifacts: the representations are the numbers. The “dark number”, which he notates as d^*, is (1-0.99999999…) and is the smallest number greater than 0. And you can generate a complete ordered enumeration of all of the new real numbers, {0, d^*, 2d^*, 3d^*, ..., n-2d^*, n-d^*, n, n+d^*, ...}.
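For contrast, standard analysis handles that difference in one line: cutting 0.999… off after n nines leaves a gap of exactly 10^{-n}, and that gap shrinks to zero:

1 - 0.\underbrace{99\cdots9}_{n\text{ nines}} = 10^{-n}, \qquad 1 - 0.999\ldots = \lim_{n\to\infty} 10^{-n} = 0.

So in the ordinary reals, d^* is just another name for 0. It only looks like a positive number if you insist that every distinct string of digits must name a distinct number – which is the representational confusion all over again.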

Reading Escultura, every once in a while, you might think he’s joking. For example, he claims to have disproven Fermat’s last theorem. Fermat’s last theorem says that for n > 2, there are no positive integer solutions to the equation x^n + y^n = z^n. Escultura says he’s disproven this:

The exact solutions of Fermat’s equation, which are the counterexamples to FLT, are given by the triples (x,y,z) = ((0.99…)10^T,d*,10^T), T = 1, 2, …, that clearly satisfies Fermat’s equation,

x^n + y^n = z^n, (4)

for n = NT > 2. Moreover, for k = 1, 2, …, the triple (kx,ky,kz) also satisfies Fermat’s equation. They are the countably infinite counterexamples to FLT that prove the conjecture false. One counterexample is, of course, sufficient to disprove a conjecture.

Even if you accept the reality of the notational artifact d^*, this makes no sense: the point of Fermat’s last theorem is that there are no positive integer solutions; d^* is not an integer, and (0.99…)10^T is not an integer. Surely he’s not that stupid. Surely he can’t possibly believe that he’s disproven Fermat using non-integer solutions? I mean, how is this different from just claiming that you can use (2, 3, 35^{1/3}) as a counterexample for n=3?
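To spell that analogy out: 2^3 + 3^3 = 8 + 27 = 35 = (35^{1/3})^3, so the triple (2, 3, 35^{1/3}) “satisfies” Fermat’s equation for n=3 – but the theorem says nothing about non-integer values of z, so this is not a counterexample to anything. Escultura’s triples fail in exactly the same way.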

But… he’s serious. He’s serious enough that he’s actually published a real paper making the claim (albeit in crackpot journals, which are the only places that would accept this rubbish).

Anyway, jumping back for a moment… You can create a theory of numbers around this d^* rubbish. The problem is, it’s not a particularly useful theory. Why? Because it breaks some of the fundamental properties that we expect numbers to have. The real numbers define a structure called a field, and a huge amount of what we really do with numbers is built on the fundamental properties of the field structure. One of the necessary properties of a field is that it has unique identity elements for addition and multiplication. If you don’t have unique identities, then everything collapses.

So… Take \frac{1}{9}. That’s the multiplicative inverse of 9. So, by definition, \frac{1}{9}*9 = 1 – the multiplicative identity.

In Escultura’s theory, \frac{1}{9} is a shorthand for the number that has a representation of 0.1111…. So, \frac{1}{9}*9 = 0.1111…*9 = 0.9999… = (1-d^*). So (1-d^*) is also a multiplicative identity. By a similar process, you can show that d^* itself must be the additive identity. So either d^* = 0, or else you’ve lost the field structure, and with it, pretty much all of real number theory.
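Spelling that last step out: in a field, the multiplicative identity is unique, because if 1 and 1' are both identities, then

1 = 1 \cdot 1' = 1'.

Applying that here: if (1-d^*) is a multiplicative identity, then 1 = 1 \cdot (1-d^*) = 1-d^*, and so d^* = 0. Either the dark number is zero, or the new real numbers aren’t a field – you can’t have it both ways.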