Category Archives: Bad Physics

Elon Musk’s Techno-Religion

A couple of people have written to me asking me to say something about Elon Musk’s simulation argument.

Unfortunately, I haven’t been able to find a verbatim quote from Musk about his argument, and I’ve seen a couple of slightly different arguments presented as being what Musk said. So I’m not really going to focus so much on Musk, but instead, just going to try to take the basic simulation argument, and talk about what’s wrong with it from a mathematical perspective.

The argument isn’t really all that new. I’ve found a couple of sources that attribute it to a paper published in 2003. That 2003 paper may have been the first academic publication, and it might have been the first to present the argument in formal terms, but I definitely remember discussing this in one of my philosophy classes in college in the late 1980s.

Here’s the argument:

  1. Any advanced technological civilization is going to develop massive computational capabilities.
  2. With immense computational capabilities, they’ll run very detailed simulations of their own ancestors in order to understand where they came from.
  3. Once it is possible to run simulations, they will run many of them to explore how different parameters will affect the simulated universe.
  4. That means that advanced technological civilization will run many simulations of universes where their ancestors evolved.
  5. Therefore the number of simulated universes with intelligent life will be dramatically larger than the number of original non-simulated civilizations.

If you follow that reasoning, then the odds are, for any given form of intelligent life, it’s more likely that they are living in a simulation than in an actual non-simulated universe.
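The counting step in that argument is easy to make explicit. Here’s a minimal sketch; the number of simulations per civilization is a made-up parameter, not something anyone actually knows:

```python
# Sketch of the simulation argument's counting step.
# Assumption (purely illustrative): each "base" civilization runs
# n_sims ancestor simulations, each containing observers.

def prob_simulated(n_sims: int) -> float:
    """P(a randomly chosen observer-civilization is simulated),
    given 1 base civilization and n_sims simulated ones."""
    return n_sims / (n_sims + 1)

print(prob_simulated(1))     # 0.5
print(prob_simulated(1000))  # ~0.999
```

The whole weight of the argument rests on n_sims being large – which is exactly the assumption the rest of this post attacks.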

As an argument, it’s pretty much the kind of crap you’d expect from a bunch of half drunk college kids in a middle-of-the-night bullshit session.

Let’s look at a couple of simple problems with it.

The biggest one is a question of size and storage. The heart of this argument is the assumption that for an advanced civilization, nearly infinite computational capability will effectively become free. If you actually try to look at that assumption in detail, it’s not reasonable.

The problem is, we live in a quantum universe. That is, we live in a universe made up of discrete entities. You can take an object, and cut it in half only a finite number of times, before you get to something that can’t be cut into smaller parts. It doesn’t matter how advanced your technology gets; it’s got to be made of the basic particles – and that means that there’s a limit to how small it can get.

Again, it doesn’t matter how advanced your computers get; it’s going to take more than one particle in the real universe to simulate the behavior of a single particle. To simulate a universe, you’d need a computer bigger than the universe you want to simulate. There’s really no way around that: you need to maintain state information about every particle in the universe, and you need some additional hardware to actually run the simulation on that state information. So even with the most advanced technology that you can possibly imagine, you can’t possibly do better than one particle in the real universe containing all of the state information about a particle in the simulated universe. If you did, then you’d be guaranteeing that your simulated universe wasn’t realistic, because its particles would have less state than particles in the real universe.

This means that to simulate something in full detail, you effectively need something bigger than the thing you’re simulating.

That might sound silly: we do lots of things with tiny computers. I’ve got an iPad in my computer bag with a couple of hundred books on it: it’s much smaller than the books it simulates, right?

The “in full detail” is the catch. When my iPad simulates a book, it’s not capturing all the detail. It doesn’t simulate the individual pages, much less the individual molecules that make up those pages, the individual atoms that make up those molecules, etc.

But when you’re talking about perfectly simulating a system well enough to make it possible for an intelligent being to be self-aware, you need that kind of detail. We know, from our own observations of ourselves, that the way our cells operate is dependent on incredibly fine-grained sub-molecular interactions. To make our bodies work correctly, you need to simulate things at that level.

You can’t simulate the full detail of a universe bigger than the computer that simulates it, because the computer is made of the same things as the universe that it’s simulating.

There’s a lot of handwaving you can do about what things you can omit from your model. But at the end of the day, you’re looking at an incredibly massive problem, and you’re stuck with the simple fact that you’re talking, at least, about building a computer that can simulate an entire planet and its environs. And you’re trying to do it in a universe just like the one you’re simulating.

But OK, we don’t actually need to simulate the whole universe, right? I mean, you’re really interested in developing a single species like yourself, so you only care about one planet.

But to make that planet behave absolutely correctly, you need to be able to correctly simulate everything observable from that planet. Its solar system, you need to simulate pretty precisely. The galaxy around it needs less precision, but it still needs a lot of work. Even getting very far away, you’ve got an awful lot of stuff to simulate, because your simulated intelligences, from their little planet, are going to be able to observe an awful lot.

To simulate a planet and its environment with enough precision to get life and intelligence and civilization, and to do it at a reasonable speed, you pretty much need to have a computer bigger than the planet. You can cheat a little bit, and maybe abstract parts of the planet; but you’ve got to do pretty good simulations of lots of stuff outside the planet.
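To get a feel for the scale, here’s a crude back-of-envelope calculation. The atom count is a rough public order-of-magnitude figure, and the bits-per-atom number is an arbitrary illustrative guess:

```python
# Rough storage estimate for a "full detail" planetary simulation.
# Assumptions (mine, for illustration only): Earth contains on the
# order of 1e50 atoms, and each atom needs at least ~100 bits of
# state (position, momentum, bonding, internal state, ...).

atoms_in_earth = 1e50
bits_per_atom = 100
bits_needed = atoms_in_earth * bits_per_atom

# Even a wildly optimistic storage medium that used a single atom
# per bit would need far more atoms than the planet it simulates:
atoms_of_storage = bits_needed  # one atom per bit, at best

# ratio of storage atoms to planet atoms - already much bigger
# than 1, before counting any hardware to *run* the simulation
print(atoms_of_storage / atoms_in_earth)
```

However you fiddle with these made-up constants, the conclusion survives: the state store alone dwarfs the thing being stored.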

It’s possible, but it’s not particularly useful. Because you need to run that simulation. And since it’s made up of the same particles as the things it’s simulating, it can’t run faster than the universe it simulates. To get useful results, you’d need to build it to be massively parallel. And that means that your computer needs to be even larger – something like a million times bigger.

If technology were to get good enough, you could, in theory, do that. But it’s not going to be something you do a lot of: no matter how advanced technology gets, building a computer that can simulate an entire planet and its people in full detail is going to be a truly massive undertaking. You’re not going to run large numbers of simulations.

You can certainly wave your hands and say that the “real” people live in a universe without the kind of quantum limit that we live with. But if you do, you’re throwing other assumptions out the window. You’re not talking about ancestor simulation any more. And you’re pretending that you can make predictions, based on our technology, about the technology of people living in a universe with dramatically different properties.

This just doesn’t make any sense. It’s really just techno-religion. It’s based on the belief that technology is going to continue to develop computational capability without limit. That the fundamental structure of the universe won’t limit technology and computation. Essentially, it’s saying that technology is omnipotent. Technology is God, and just as in any other religion, its adherents believe that you can’t place any limits on it.

Rubbish.

Not a theory! Really! It’s not a theory!

I know I’ve been terrible about updating my blog lately. I’ve got some good excuses. (The usual: very busy with work. The less usual: new glasses that I’m having a very hard time adapting to. Getting old sucks. My eyes have deteriorated to the point where my near vision is shot, and the near-vision correction in my lenses needed to jump pretty significantly, which takes some serious getting used to.) And getting discussions of type theory right is a lot of work: it’s a subject that I really want to get right, because it’s so important in my profession, and because so few people have actually written about it in a way that’s accessible to non-mathematicians.

Anyway: rest assured that I’m not dropping the subject, and I hope to be getting back to writing more very soon. In the meantime, I decided to bring you some humorous bad math.

Outside the scientific community, one of the common criticisms of science is that scientific explanations are “just a theory”. You hear this all the time from ignorant religious folks trying to criticize evolution or the big bang (among numerous other things). When they say that something is just a theory, what they mean is that it’s not a fact, it’s just speculation. They don’t understand what the word “theory” really means: they think that a theory and a fact are the same class of things – that an idea starts as a theory, and becomes a fact if you can prove it.

In science, we draw a distinction between facts and theories, but it’s got nothing to do with how “true” something is. A fact is an observation of something that happens, but it doesn’t say why it happens. The sun produces light. That’s a fact. The fact doesn’t say why or how that happens – just that it does. A theory is an explanation of a set of facts. The combined gravitational force of all of the particles in the sun compresses the ones in the center until quantum tunnelling allows hydrogen atoms to combine and fuse, producing energy, which eventually radiates as the heat and light that we observe. The theory of solar hydrogen fusion is much more than the words in the previous sentence: it’s an extensive collection of evidence and mathematics that explains the process in great detail. Solar hydrogen fusion – mathematical equations and all – is a theory that explains the heat and light that we observe. We’re pretty sure that it’s true – but the fact that it is true doesn’t mean that it’s not a theory.

Within the scientific community, we criticize crackpot ideas by saying that they’re not a theory. In science, a theory means a well-detailed and tested hypothesis that explains all of the known facts about something, and makes testable predictions. When we say that something isn’t a theory, we mean that it doesn’t have the supporting evidence, testability, or foundation in fact that would be needed to make something into a proper theory.

For example, intelligent design doesn’t qualify as a scientific theory. It basically says “there’s stuff in the world that couldn’t happen unless god did it”. But it never actually says how, precisely, to identify any of those things that couldn’t happen without god. Note that this doesn’t mean that it’s not true. I happen to believe that it’s not – but whether it’s true or not has nothing to do with whether, scientifically, it qualifies as a theory.

That’s a very long, almost Oracian introduction to today’s nonsense. This bit of crackpottery, known as “the Principle of Circlon Synchronicity”, written by one James Carter, has one really interesting property: I agree, 100%, with the very first thing that Mr. Carter says about his little idea.

The Principle of Circlon Synchronicity is not a Theory

They’re absolutely correct. It’s not a theory. It’s a bundle of vague assumptions, tied together by a shallow pretense at mathematics.

The “introduction” to this “principle” basically consists of the author blindly asserting that a bunch of things aren’t theories. For example, his explanation for why the principle of circlon synchronicity is not a theory begins with:

There are many different theories that have been used to explain the nature of reality. Today, the most popular of these are quantum mechanics, special relativity, string theories, general relativity and the Big Bang. Such theories all begin with unmeasured metaphysical assumptions such as fields and forces to explain the measurements of various phenomena. Circlon synchronicity is a purely mechanical system that explains local physical measurements. You only need a theory to explain physical measurements in terms of non-local fields, forces and dimensions.

This is a novel definition of “theory”. It has absolutely nothing to do with what the rest of us mean by the word. Basically, he thinks that his explanations, because they are allegedly simple, mechanical, and free of non-local effects, aren’t theories. They’re principles.

The list of things that don’t need a theory, according to Mr. Carter, is extensive.
For example:

The photon is not a theory. The photon is a mechanical measurement of mass. The photon is a conjoined matter-antimatter pair that is the basic form of mass and energy in the Living Universe. All photons move at exactly the speed of light relative to one another within the same absolute space. Photons are produced when a proton and electron are joined together to form a hydrogen atom. The emission of a photon is a mini annihilation with part of the electron and part of the proton being carried away by the photon. A photon with mass and size eliminates the need for both Planck’s constant and the Heisenberg uncertainty principle and also completely changes the meaning of the equation E=MC2. This is not a theory of a photon. It is the measurements describing the nature of the photon.

This is where we start on the bad math.

A photon is a quantum of light, or some other form of electromagnetic radiation. It doesn’t have any mass. But even if it did: a photon isn’t a measurement. A photon is a particle (or a wave, depending on how you deal with it.) A measurement is a fundamentally different thing. If you want to do math that describes the physical universe, you’ve got to be damned careful about your units. If you’re measuring mass, your units need to be mass units. If you’re describing mass, then the equations that derive your measurements of mass need to have mass units. If a photon is a measurement of mass, then what’s its unit?
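The units complaint can be made concrete with a toy dimensional-analysis check. This is just my own illustration – units tracked as exponent tuples over (kg, m, s), not a real units library:

```python
# Toy dimensional analysis: a unit is a tuple of exponents on
# (kilograms, meters, seconds).

KG = (1, 0, 0)
M_PER_S = (0, 1, -1)
JOULE = (1, 2, -2)  # energy: kg * m^2 / s^2

def mul(a, b):
    """Multiply two quantities' units by adding exponents."""
    return tuple(x + y for x, y in zip(a, b))

# E = m * c^2  ->  kg * (m/s)^2 should come out in joules:
energy_units = mul(KG, mul(M_PER_S, M_PER_S))
print(energy_units == JOULE)  # True: the units balance
```

This is the kind of bookkeeping that real physics does obsessively, and that “the photon is a measurement of mass” fails before it even starts: there’s no unit on either side to check.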

Further, you can’t take an equation like e=mc^2, and rip it out of context, while asserting that it has exactly the same meaning that it did in its original context. Everyone has seen that old equation, but very few people really understand just what it means. Mr. Carter is not part of that group of people. To him, it’s just something he’s seen, which he knows is sciency, and so he grabs on to it and shouts about it in nonsensical ways.

But note, importantly, that even here, what Mr. Carter is doing isn’t science. He’s absolutely right when he says it’s not a theory. He asserts that the whole meaning of e=mc^2 changes because of his new understanding of what light is; but he doesn’t ever bother to explain just what that new understanding is, or how it differs from the old one.

He makes some hand-waves about how you don’t need the uncertainty principle. If his principles had a snowball’s chance in hell of being correct, that might be true. The problem with that assertion is that the uncertainty principle isn’t just a theory. It’s a theory based on observations of facts that absolutely require explanations. There’s a great big fiery-looking ball up in the sky that couldn’t exist without uncertainty. Uncertainty isn’t just a pile of equations that someone dreamed up because it seemed like fun. It’s a pile of equations that were designed to try to explain the phenomena that we observe. There are a lot of observations that demonstrate the uncertainty principle. It doesn’t disappear just because Mr. Carter says it should. He needs to explain how his principles can account for the actual phenomena we observe – not just the phenomena that he wants to explain.

Similarly, he doesn’t like the theory of gravity.

We do not need a theory of gravity to explain exactly how it works. Gravity is a simple measurement that plainly shows exactly what gravity does. We use accelerometers to measure force and they exactly show that gravity is just an upwardly pointing force caused by the physical expansion of the Earth. The gravitational expansion of matter does not require a theory. It is just the physical measurement of gravity that shows exactly how it works in a completely mechanical way without any fields or non-local interactions. You only need a theory to explain a non-local and even infinite idea of how gravity works in such a way that it can’t be directly measured. Gravity not a theory.

Once again, we see that he really doesn’t understand what theory means. According to him, gravity can be measured, and therefore, it’s not a theory. Anything that can be measured, according to Mr. Carter, can’t be a theory: if it’s a fact, it can’t be a theory; even more, if it’s a fact, it doesn’t need to be explained at all. It’s sort-of like the fundamentalist’s idea of a theory, only slightly more broken.

This is where you can really see what’s wrong with his entire chain of reasoning. He asserts that gravity isn’t a theory – and then he moves on to an “explanation” of how gravity works which simply doesn’t fit.

The parade of redefinitions marches on! “Exactly” now means “hand-wavy”.

We’re finally getting to the meat of Mr. Carter’s principle. He’s a proponent of the same kind of expanding earth rubbish as Neal Adams. Gravity has nothing to do with non-local forces. It’s all just the earth expanding under us. Of course, this is left nice and vague: he mocks the math behind the actual theory of gravity, but he can’t actually show that his principle works. He just asserts that he’s defined exactly how it works by waving his hands really fast.

I can disprove his principle of gravity quite easily, by taking my phone out of my pocket, and opening Google maps.

In 5 seconds flat (which is longer than it should take!), Google maps shows me my exact position on the map. It does that by talking to a collection of satellites that are revolving around the earth. The positions of those satellites are known with great accuracy. They circle the earth without the use of any sort of propellant. If Mr. Carter (or Mr. Adams, who has a roughly equivalent model) were correct – if gravity were not, in fact, a force attracting mass to other masses, but instead an artifact of an expanding earth – then the “satellites” that my phone receives data from would not be following an elliptical path around the earth. They’d be shooting off into the distance, moving in a perfectly straight line. But they don’t move in a straight line. They continue to arc around the earth, circling around and around, without any propulsion.
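You can check this claim numerically with a toy two-body integration. The gravitational parameter and orbital radius below are rough public figures; the code just contrasts “central attractive force” with “no force at all”, which is all the satellite would feel in the expanding-earth picture:

```python
import math

# Toy 2D integration contrasting the two pictures of gravity.
GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2
r0 = 2.66e7              # rough GPS orbital radius, m
v0 = math.sqrt(GM / r0)  # circular-orbit speed at that radius

def integrate(gravity_on, steps=10000, dt=10.0):
    """Step a satellite forward; return its final distance from Earth."""
    x, y = r0, 0.0
    vx, vy = 0.0, v0
    for _ in range(steps):
        if gravity_on:
            r = math.hypot(x, y)
            vx += -GM * x / r**3 * dt   # central attraction
            vy += -GM * y / r**3 * dt
        x += vx * dt
        y += vy * dt
    return math.hypot(x, y)

print(integrate(True) / r0)   # stays near 1: still orbiting
print(integrate(False) / r0)  # keeps growing: a straight line out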

In any reasonable interpretation of the expanding earth? That doesn’t make sense. There’s no way for them to orbit. Satellites simply can’t work according to his theory. And yet, they do.

Of course, I’m sure that Mr. Carter has some hand-wavy explanation of just why satellites work. The problem is, whatever explanation he has isn’t a theory. He can’t actually make predictions about how things will behave, because his principles aren’t predictive.

In fact, he even admits this. His whole screed turns out to be a long-winded advertisement for a book that he’ll happily sell you. As part of the FAQ for his book, he explains why (a) he can’t do the math, and (b) it doesn’t matter anyway:

The idea that ultimate truth can be represented with simple mathematical equations is probably totally false. A simple example of this is the familiar series of circular waves that move away from the point where a pebble is dropped into a quiet pool of water. While these waves can be described in a general way with a simple set of mathematical equations, any true and precise mathematical description of this event would have to include the individual motion of each molecule within this body of water. Such an equation would require more than the world’s supply of paper to print and its complexity would make it virtually meaningless.

The idea of the circlon is easy to describe and illustrate. However, any kind of mathematical description of its complex internal dynamics is presently beyond my abilities. This deficiency does not mean that circlon theory cannot compete with the mathematically simplistic point-particle and field theories of matter. It simply means that perhaps ultimate truth is not as easily accessible to a mathematical format as was once hoped.

It’s particularly interesting to consider this “explanation” in light of some recent experiments in computational fluid dynamics. Weather prediction has become dramatically better in the last few years. When my father was a child, the only way to predict when a hurricane would reach land was to have people watching the horizon. No one could make accurate weather predictions at all, not even for something as huge as a storm system spanning hundreds of miles! When I was a child, weathermen rarely attempted to predict more than 2 days in advance. Nowadays, we’ve got 7-day forecasts that are accurate more often than the 2-day forecasts were a couple of decades ago. Why is that?

The answer is something called the Navier-Stokes equations. The Navier-Stokes equations are a set of equations that describe how fluids behave. We don’t have the computational power or measurement abilities to solve the N-S equations down to the level of single molecules – but in principle, we absolutely could. The N-S equations – which demonstrably work remarkably well even when you’re just computing approximations – also describe exactly the phenomenon that Mr. Carter asserts can’t be represented with mathematical equations.
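To illustrate the point, here’s a tiny finite-difference solver for 1D diffusion – the viscous piece of the Navier-Stokes equations – on a coarse grid. It’s my own minimal sketch, nothing like a real fluid code, but it shows how a simple equation captures bulk behavior without tracking a single molecule:

```python
# Minimal 1D diffusion solver on a coarse periodic grid.
# Grid size, timestep, and viscosity are arbitrary illustrative choices.

N = 50
dx, dt, nu = 1.0 / N, 1e-4, 0.1

# initial condition: a sharp bump in the middle of the domain
u = [1.0 if 20 <= i <= 30 else 0.0 for i in range(N)]

for _ in range(2000):
    u = [u[i] + nu * dt / dx**2 * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N])
         for i in range(N)]

# The bump spreads and flattens, while the total is conserved -
# the same qualitative behavior as the ripples in Carter's pond.
print(max(u) < 1.0)               # the peak has diffused downward
print(abs(sum(u) - 11.0) < 1e-9)  # the total "mass" is conserved
```

Fifty grid cells is absurdly far from one-equation-per-molecule, and the physics still comes out right. That’s the whole point of the mathematical description.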

The problem is: he doesn’t understand how math or science work. He has no clue of how equations describe physical phenomena in actual scientific theories. The whole point of math is that it gives you a simple but precise way of describing complex phenomena. A wave in a pool of water involves the motion of an almost unimaginable number of particles, with a variety of forces and interactions between those particles. But all of them can be defined by reasonably simple equations.

Mr. Carter’s explanations are, intuitively, more attractive. If you really want to understand relativity, you’re going to need to spend years studying math and physics to get to the point where its equations make sense to you. But once you do, they don’t just explain things in a vague, hand-wavy way – they tell you exactly how things work. They make specific, powerful, precise predictions about how things will behave in a range of situations that match reality to the absolute limits of our ability to measure. Mr. Carter’s explanations don’t require years of study; they don’t require you to study esoteric disciplines like group theory or tensor calculus. But they also can’t tell you much of anything. Relativity can tell you exactly what adjustment you need to make to a satellite’s clock in order to make precise measurements of the location of a radio receiver on the ground. Mr. Carter’s explanations can’t even tell you how the satellite got there.

Big Bang Bogosity

One of my long-time mantras on this blog has been “The worst math is no math”. Today, I’m going to show you yet another example of that: a recent post on Boing-Boing called “The Big Bang is Going Down”, by a self-proclaimed genius named Rick Rosner.

First postulated in 1931, the Big Bang has been the standard theory of the origin and structure of the universe for 50 years. In my opinion, (the opinion of a TV comedy writer, stripper and bar bouncer who does physics on the side) the Big Bang is about to collapse catastrophically, and that’s a good thing.

According to Big Bang theory, the universe exploded into existence from basically nothing 13.7-something billion years ago. But we’re at the beginning of a wave of discoveries of stuff that’s older than 13.7 billion years.

We’re constantly learning more about our universe, how it works, and how it started. New information isn’t necessarily a catastrophe for our existing theories; it’s just more data. There’s constantly new data coming in – and as yet, none of it comes close to causing the big bang theory to catastrophically collapse.

The two specific examples cited in the article are:

  1. one quasar that appears to be younger than we might expect – it existed just 900 million years after the current estimate of when the big bang occurred. That’s very surprising, and very exciting. But even in existing models of the big bang, it’s surprising, but not impossible. (No link, because the link in the original article doesn’t work.)
  2. an ancient galaxy – a galaxy that existed only 700 million years after the big bang occurred – contains dust. Cosmic dust is made of atoms much larger than hydrogen – like carbon, silicon, and iron, which are (per current theories) the product of supernovas. Supernovas generally don’t happen to stars younger than a couple of billion years – so finding dust in a galaxy less than a billion years after the universe began is quite surprising. But again: impossible under the big bang? No.
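The supernova-timing objection, in particular, can be sanity-checked with a standard textbook scaling for main-sequence stellar lifetimes, roughly t ≈ 10 Gyr × (M/Msun)^(-2.5). That’s a crude approximation (not anything from Rosner’s article), but it’s enough to show why early dust isn’t automatically impossible – the stars that go supernova are the massive ones, and they burn out fast:

```python
# Crude mass-lifetime scaling for main-sequence stars (a standard
# textbook approximation): t ~ 10 Gyr * (M / Msun)^(-2.5).

def lifetime_gyr(mass_in_suns: float) -> float:
    return 10.0 * mass_in_suns ** -2.5

print(lifetime_gyr(1))   # 10.0 Gyr: a sun-like star
print(lifetime_gyr(20))  # ~0.006 Gyr (~6 Myr): a supernova progenitor
```

A 20-solar-mass star dies in a few million years – comfortably inside a 700-million-year window. Whether enough of them formed, exploded, and seeded that much dust that early is a genuinely hard quantitative question, which is exactly why hand-waving doesn’t settle it.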

The problem with both of these arguments against the big bang is: they’re vague. They’re both handwavy arguments based on crude statements about what “should” be possible or impossible according to the big bang theory. But neither comes close to the kind of precision that an actual scientific argument requires.

Scientists don’t use math because they like to be obscure, or because they think all of the pretty symbols look cool. Math is a tool used by scientists, because it’s useful. Real theories in physics need to be precise. They need to make predictions, and those predictions need to match reality to the limits of our ability to measure them. Without that kind of precision, we can’t test theories – we can’t check how well they model reality. And precise modelling of reality is the whole point.

The big bang is an extremely successful theory. It makes a lot of predictions, which do a good job of matching observations. It’s evolved in significant ways over time – but it remains by far the best theory we have – and by “best”, I mean “most accurate and successfully predictive”. The catch to all of this is that when we talk about the big bang theory, we don’t mean “the universe started out as a dot, and blew up like a huge bomb, and everything we see is the remnants of that giant explosion”. That’s an informal description, but it’s not the theory. That informal description is so vague that a motivated person can interpret it in ways that are consistent, or inconsistent with almost any given piece of evidence. The real big bang theory isn’t a single english statement – it’s many different mathematical statements which, taken together, produce a description of an expansionary universe that looks like the one we live in. For a really, really small sample, you can take a look at a nice old post by Ethan Siegel over here.
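For a taste of the quantitative character of the real theory: the measured expansion rate, by itself, fixes a characteristic timescale, 1/H0, that lands right next to the independently measured age of the universe. The H0 value below is a rough published figure, and this is a deliberately simplified calculation (the proper age involves integrating the expansion history):

```python
# Hubble time: the timescale set by the expansion rate alone.
H0_km_s_Mpc = 70.0                   # km/s per megaparsec (rough figure)
km_per_Mpc = 3.086e19
H0_per_s = H0_km_s_Mpc / km_per_Mpc  # convert to 1/s

seconds_per_gyr = 3.156e7 * 1e9
hubble_time_gyr = 1.0 / H0_per_s / seconds_per_gyr
print(round(hubble_time_gyr, 1))     # ~14 Gyr
```

That a three-line conversion from one measured number lands within a couple of billion years of the 13.7-Gyr figure is the sort of cross-check that the informal “everything blew up” description can’t even express.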

If you really want to make an argument that it’s impossible according to the big bang theory, you need to show how it’s impossible. The argument by Mr. Rosner is that the atoms in the dust in that galaxy couldn’t exist according to the big bang, because there wasn’t time for supernovas to create it. To make that argument, he needs to show that that’s true: he needs to look at the math that describes how stars form and how they behave, and then using that math, show that the supernovas couldn’t have happened in that timeframe. He doesn’t do anything like that: he just asserts that it’s true.

In contrast, if you read the papers by the guys who discovered the dust-filled galaxy, you’ll notice that they don’t come anywhere close to saying that this is impossible, or inconsistent with the big bang. All they say is that it’s surprising, and that we may need to revise our understanding of the behavior of matter in the early stages of the universe. The reason that they say that is because there’s nothing there that fundamentally conflicts with our current understanding of the big bang.

But Mr. Rosner can get away with the argument, because he’s being vague where the scientists are being precise. A scientist isn’t going to say “Yes, we know that it’s possible according to the big bang theory”, because the scientist doesn’t have the math to show it’s possible. At the moment, we don’t have sufficiently precise math either way to come to a conclusion; we don’t know. But what we do know is that millions of other observations in different contexts, different locations, observed by different methods by different people, are all consistent with the predictions of the big bang. Given that we don’t have any evidence to support the idea that this couldn’t happen under the big bang, we continue to say that the big bang is the theory most consistent with our observations, that it makes better predictions than anything else, and so we assume (until we have evidence to the contrary) that this isn’t inconsistent. We don’t have any reason to discard the big bang theory on the basis of this!

Mr. Rosner, though, goes even further, proposing what he believes will be the replacement for the big bang.

The theory which replaces the Big Bang will treat the universe as an information processor. The universe is made of information and uses that information to define itself. Quantum mechanics and relativity pertain to the interactions of information, and the theory which finally unifies them will be information-based.

The Big Bang doesn’t describe an information-processing universe. Information processors don’t blow up after one calculation. You don’t toss your smart phone after just one text. The real universe – a non-Big Bang universe – recycles itself in a series of little bangs, lighting up old, burned-out galaxies which function as memory as needed.

In rolling cycles of universal computation, old, collapsed, neutron-rich galaxies are lit up again, being hosed down by neutrinos (which have probably been channeled along cosmic filaments), turning some of their neutrons to protons, which provides fuel for stellar fusion. Each calculation takes a few tens of billions of years as newly lit-up galaxies burn their proton fuel in stars, sharing information and forming new associations in the active center of the universe before burning out again. This is ultra-deep time, with what looks like a Big Bang universe being only a long moment in a vast string of such moments across trillions or quadrillions of giga-years.

This is not a novel idea. There are a ton of variations of the “universe as computation” that have been proposed over the years. Just off the top of my head, I can rattle off variations that I’ve read (in decreasing order of interest) by Minsky (can’t find the paper at the moment; I read it back when I was in grad school), by Fredkin, by Wolfram, and by Langan.

All of these theories assert in one form or another that our universe is either a massive computer or a massive computation, and that everything we can observe is part of a computational process. It’s a fascinating idea, and there are aspects of it that are really compelling.

For example, the Minsky model has an interesting explanation for the speed of light as an absolute limit, and for time dilation. Minsky's model says that the universe is a giant cellular automaton. Each minimum quantum of space is a cell in the automaton. When a particle is located in a particular cell, that cell is “running” the computation that describes that particle. For a particle to move, the data describing it needs to get moved from its current location to its new location at the next time quantum. That takes some amount of computation, and the cell can only perform a finite amount of computation per quantum. The faster the particle moves, the more of each time quantum is dedicated to motion, and the less it has for anything else. The speed of light, in this theory, is the speed where the full quantum for computing a particle's behavior is dedicated to nothing but moving it to its next location.
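To make the tradeoff concrete, here's a toy version in Python. This is my own illustrative sketch, not Minsky's actual model – for one thing, it produces a linear slowdown of the internal clock rather than the Lorentz factor:

```python
def simulate(speed, ticks):
    """Toy cellular-automaton particle: each tick grants one unit of compute.

    A fraction `speed` (0..1, in cells per tick; 1.0 plays the role of light
    speed) is spent moving the particle's data to its next cell; whatever is
    left over advances the particle's internal clock.
    """
    position = 0.0
    proper_time = 0.0
    for _ in range(ticks):
        position += speed              # compute spent on relocation
        proper_time += 1.0 - speed     # leftover compute: internal evolution
    return position, proper_time

print(simulate(0.0, 100))   # (0.0, 100.0): at rest, the clock runs at full rate
print(simulate(1.0, 100))   # (100.0, 0.0): at "light speed", the clock freezes
```

The qualitative behavior is the interesting part: motion and internal evolution compete for the same fixed compute budget, so a maximally fast particle has nothing left over for its own dynamics.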

It’s very pretty. Intuitively, it works. That makes it an interesting idea. But the problem is, no one has come up with an actual working model. We’ve got real observations of the behavior of the physical universe that no one has been able to describe using the cellular automaton model.

That’s the problem with all of the computational hypotheses so far. They look really good in the abstract, but none of them come close to actually working in practice.

A lot of people nowadays like to mock string theory, because it's a theory that looks really good, but has no testable predictions. String theory can describe the behavior of the universe that we see. The problem with it isn't that there are things we observe in the universe that it can't predict, but that it can predict just about anything. There are a ton of parameters in the theory that can be shifted, and depending on their values, almost anything that we could observe can be fit by string theory. The problem with it is twofold: we don't have any way (yet) of figuring out what values those parameters need to have to fit our universe, and we don't have any way (yet) of performing an experiment that tests a prediction of string theory that's different from the predictions of other theories.

As much as we enjoy mocking string theory for its lack of predictive value, the computational hypotheses are far worse! So far, no one has been able to come up with one that can come close to explaining all of the things that we’ve already observed, much less to making predictions that are better than our current theories.

But just like he did with his “criticism” of the big bang, Mr. Rosner makes predictions, but doesn’t bother to make them precise. There’s no math to his prediction, because there’s no content to his prediction. It doesn’t mean anything. It’s empty prose, proclaiming victory for an ill-defined idea on the basis of hand-waving and hype.

Boing-Boing should be ashamed for giving this bozo a platform.

Bad Math from the Bad Astronomer

This morning, my friend Dr24Hours pinged me on twitter about some bad math:

And indeed, he was right. Phil Plait the Bad Astronomer, of all people, got taken in by a bit of mathematical stupidity, which he credulously swallowed and chose to stupidly expand on.

Let’s start with the argument from his video.


We’ll consider three infinite series:

S1 = 1 - 1 + 1 - 1 + 1 - 1 + ...
S2 = 1 - 2 + 3 - 4 + 5 - 6 + ...
S3 = 1 + 2 + 3 + 4 + 5 + 6 + ...

S1 is something called Grandi’s series. According to the video, taken to infinity, Grandi’s series alternates between 0 and 1. So to get a value for the full series, you can just take the average – so we’ll say that S1 = 1/2. (Note, I’m not explaining the errors here – just repeating their argument.)

Now, consider S2. We’re going to add S2 to itself. When we write it, we’ll do a bit of offset:

1 - 2 + 3 - 4 + 5 - 6 + ...
    1 - 2 + 3 - 4 + 5 - ...
==============================
1 - 1 + 1 - 1 + 1 - 1 + ...

So 2S2 = S1; therefore S2 = S1/2 = 1/4.

Now, let’s look at what happens if we take the S3, and subtract S2 from it:

   1 + 2 + 3 + 4 + 5 + 6 + ...
- [1 - 2 + 3 - 4 + 5 - 6 + ...]
================================
   0 + 4 + 0 + 8 + 0 + 12 + ... == 4(1 + 2 + 3 + ...)

So, S3 – S2 = 4S3, and therefore 3S3 = -S2, which gives S3 = -S2/3 = -1/12.


So what’s wrong here?

To begin with, S1 does not equal 1/2. S1 is a non-converging series. It doesn’t converge to 1/2; it doesn’t converge to anything. This isn’t up for debate: it doesn’t converge!

In the 19th century, a mathematician named Ernesto Cesaro came up with a way of assigning a value to this series. The assigned value is called the Cesaro summation or Cesaro sum of the series. The sum is defined as follows:

Let A = a_1 + a_2 + a_3 + \ldots. The kth partial sum of A is s_k = \sum_{n=1}^{k} a_n.

The series A is Cesaro summable if the average of its partial sums converges towards a value C(A) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} s_k.

So – if you take the first 2 values of A, and average them; and then the first three and average them, and the first 4 and average them, and so on – and that series converges towards a specific value, then the series is Cesaro summable.

Look at Grandi’s series. It produces the partial sum averages of 1, 1/2, 2/3, 2/4, 3/5, 3/6, 4/7, 4/8, 5/9, 5/10, … That series clearly converges towards 1/2. So Grandi’s series is Cesaro summable, and its Cesaro sum value is 1/2.
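That behavior is easy to check numerically. A minimal sketch in Python (the helper name `cesaro_averages` is mine, purely for illustration):

```python
from itertools import accumulate

def cesaro_averages(terms):
    """Running averages of the partial sums of `terms`."""
    partial_sums = accumulate(terms)
    return [s / k for k, s in enumerate(accumulate(partial_sums), start=1)]

# Grandi's series: 1 - 1 + 1 - 1 + ...
grandi = [(-1) ** n for n in range(10_000)]
avgs = cesaro_averages(grandi)
print(avgs[:6])   # [1.0, 0.5, 0.666..., 0.5, 0.6, 0.5]
print(avgs[-1])   # 0.5 – the averages settle down to 1/2
```

The averages bounce within an ever-narrowing band around 1/2, which is exactly what Cesaro summability requires.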

The important thing to note here is that we are not saying that the Cesaro sum is equal to the series. We’re saying that there’s a way of assigning a measure to the series.

And there is the first huge, gaping, glaring problem with the video. They assert that the Cesaro sum of a series is equal to the series, which isn’t true.

From there, they go on to start playing with the infinite series in sloppy algebraic ways, and using the Cesaro summation value in their infinite series algebra. This is, similarly, not a valid thing to do.

Just pull out that definition of the Cesaro summation from before, and look at the series of natural numbers. The partial sums for the natural numbers are 1, 3, 6, 10, 15, 21, … Their averages are 1, 4/2, 10/3, 20/4, 35/5, 56/6, … – that is, 1, 2, 3 1/3, 5, 7, 9 1/3, … That's not a converging series, which means that the series of natural numbers does not have a Cesaro sum.
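You can watch that divergence directly with a few lines of Python (a quick illustrative sketch, nothing rigorous):

```python
from itertools import accumulate

naturals = range(1, 1001)
partial_sums = accumulate(naturals)      # 1, 3, 6, 10, 15, 21, ...
averages = [s / k for k, s in enumerate(accumulate(partial_sums), start=1)]
print(averages[:6])   # [1.0, 2.0, 3.33..., 5.0, 7.0, 9.33...]
print(averages[-1])   # 167167.0 – the averages just keep growing
```

The averages of the partial sums grow without bound, so there's no limit, and therefore no Cesaro sum to assign.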

What does that mean? It means that if we substitute the Cesaro sum for a series using equality, we get inconsistent results: one line of reasoning in which the series of natural numbers has a Cesaro sum, and a second line of reasoning in which it does not. If we assert that the Cesaro sum of a series is equal to the series, we've destroyed the consistency of our mathematical system.

Inconsistency is death in mathematics: any time you allow inconsistencies in a mathematical system, you get garbage: any statement becomes mathematically provable. Using the equality of an infinite series with its Cesaro sum, I can prove that 0=1, that the square root of 2 is a natural number, or that the moon is made of green cheese.

What makes this worse is that it's obvious. There is no mechanism in the real numbers by which addition of positive numbers can roll over into negative. It doesn't matter that infinity is involved: you can't follow a monotonically increasing trend and wind up with something smaller than your starting point.

Someone as allegedly intelligent and educated as Phil Plait should know that.

The Latest Update in the Hydrino Saga

Lots of people have been emailing me to say that there's a new article out about Blacklight, the company started by Randall Mills to promote his Hydrino stuff, which claims to have an independent validation of his work, and announces the any-day-now unveiling of the latest version of his hydrino-based generator.

First of all, folks, this isn’t an article, it’s a press release from Blacklight. The Financial Post just printed it in their online press-release section. It’s an un-edited release written by Blacklight.

There’s nothing new here. I continue to think that this is a scam. But what kind of scam?

To find out, let’s look at a couple of select quotes from this press release.

Using a proprietary water-based solid fuel confined by two electrodes of a SF-CIHT cell, and applying a current of 12,000 amps through the fuel, water ignites into an extraordinary flash of power. The fuel can be continuously fed into the electrodes to continuously output power. BlackLight has produced millions of watts of power in a volume that is one ten thousandths of a liter corresponding to a power density of over an astonishing 10 billion watts per liter. As a comparison, a liter of BlackLight power source can output as much power as a central power generation plant exceeding the entire power of the four former reactors of the Fukushima Daiichi nuclear plant, the site of one of the worst nuclear disasters in history.

One ten-thousandth of a liter of water produces millions of watts of power.

Sounds impressive, doesn't it? Oh, but wait… how do we measure the energy density of a substance? Joules per liter, or something equivalent – that is, energy per volume. But Blacklight is quoting watts per liter – power per volume, not energy per volume.

The joule is a unit of energy. A joule is shorthand for \frac{\text{kilogram} \cdot \text{meter}^2}{\text{second}^2}. The watt is a different unit, a measure of power, which is shorthand for \frac{\text{kilogram} \cdot \text{meter}^2}{\text{second}^3}. A watt is, therefore, one joule per second.

They’re quoting a rather peculiar unit there. I wonder why?

Our safe, non-polluting power-producing system catalytically converts the hydrogen of the H2O-based solid fuel into a non-polluting product, lower-energy state hydrogen called “Hydrino”, by allowing the electrons to fall to smaller radii around the nucleus. The energy release of H2O fuel, freely available in the humidity in the air, is one hundred times that of an equivalent amount of high-octane gasoline. The power is in the form of plasma, a supersonic expanding gaseous ionized physical state of the fuel comprising essentially positive ions and free electrons that can be converted directly to electricity using highly efficient magnetohydrodynamic converters. Simply replacing the consumed H2O regenerates the fuel. Using readily-available components, BlackLight has developed a system engineering design of an electric generator that is closed except for the addition of H2O fuel and generates ten million watts of electricity, enough to power ten thousand homes. Remarkably, the device is less than a cubic foot in volume. To protect its innovations and inventions, multiple worldwide patent applications have been filed on BlackLight’s proprietary technology.

Water, in the alleged hydrino reaction, produces 100 times the energy of high-octane gasoline.

Gasoline contains, on average, about 11.8 kWh/kg. A milliliter of gasoline weighs about 7/10ths of a gram, compared to the 1 gram weight of a milliliter of water; therefore, a kilogram of gasoline occupies around 1400 milliliters. So, let's take 11.8 kWh/kg, and convert that to an equivalent measure of energy per milliliter: about 8 1/4 watt-hours per milliliter. How does that compare to hydrinos? Oh, wait… we can't convert those, now can we? Because they're using power density. And the power density of a substance depends not just on how much energy you can extract, but on how long it takes to extract it. Explosives have fantastic power density! Gasoline – particularly high-octane gasoline – is formulated to burn as slowly as possible, because internal combustion engines are more efficient on a slower burn.
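Spelling out the arithmetic, using the rough numbers above (real gasoline blends vary a bit):

```python
kwh_per_kg = 11.8        # approximate energy density of gasoline, per kilogram
grams_per_ml = 0.7       # approximate density of gasoline
ml_per_kg = 1000 / grams_per_ml              # ~1430 milliliters per kilogram

wh_per_ml = kwh_per_kg * 1000 / ml_per_kg    # ~8.26 watt-hours per milliliter
joules_per_ml = wh_per_ml * 3600             # ~29,700 joules per milliliter
print(f"{wh_per_ml:.2f} Wh/mL = {joules_per_ml:.0f} J/mL")
```

That's an energy density – fixed units of energy per volume, with no time anywhere in it. Watts per liter can't be compared to it without knowing how long the power output lasts.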

To bring just a bit of numbers into it, TNT has a much higher power density than gasoline. You can easily knock down buildings with TNT, because of the way that it emits all of its energy in one super short burst. But its energy density is just a quarter of the energy density of gasoline.

Hmm. I wonder why Mills is using the power density?

Here’s my guess. Mills has some bullshit process where he spikes his generator with 12,000 amps, and gets a millisecond burst of energy out. If you can produce 100 joules from one milliliter in 1/1000th of a second, that's 100,000 watts out of that milliliter – a power density of 100 million watts per liter, from a thoroughly unimpressive amount of energy.
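The trick is easy to see in numbers (the 100-joule figure here is my hypothetical, not anything Blacklight has disclosed):

```python
energy_joules = 100.0    # a modest amount of energy, hypothetically
burst_seconds = 1e-3     # released over a millisecond
volume_liters = 1e-3     # from one milliliter of "fuel"

power_watts = energy_joules / burst_seconds        # 100,000 W
power_density = power_watts / volume_liters        # 100,000,000 W per liter
print(f"{power_watts:.0f} W, {power_density:.0e} W/L")
```

Divide a small energy by a short enough time and a small enough volume, and the watts-per-liter figure becomes as astonishing as you like.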

Suddenly, the amount of energy that's being generated isn't so huge – and there, I would guess, is the key to Mills' latest scam. If you're hitting your generating apparatus with 12,000 amperes of electric current, and you're producing millisecond bursts of energy, it's going to be very easy to produce that energy by consuming something in the apparatus, without that consumption being obvious to an observer who isn't allowed to independently examine the apparatus in detail.


Now, what about the “independent verification”? Again, let’s look at the press release.

“We at The ENSER Corporation have performed about thirty tests at our premises using BLP’s CIHT electrochemical cells of the type that were tested and reported by BLP in the Spring of 2012, and achieved the three specified goals,” said Dr. Ethirajulu Dayalan, Engineering Fellow, of The ENSER Corporation. “We independently validated BlackLight’s results offsite by an unrelated highly qualified third party. We confirmed that hydrino was the product of any excess electricity observed by three analytical tests on the cell products, and determined that BlackLight Power had achieved fifty times higher power density with stabilization of the electrodes from corrosion.” Dr. Terry Copeland, who managed product development for several electrochemical and energy companies including DuPont Company and Duracell added, “Dr. James Pugh (then Director of Technology at ENSER) and Dr. Ethirajulu Dayalan participated with me in the independent tests of CIHT cells at The ENSER Corporation’s Pinellas Park facility in Florida starting on November 28, 2012. We fabricated and tested CIHT cells capable of continuously producing net electrical output that confirmed the fifty-fold stable power density increase and hydrino as the product.”

Who is the ENSER corporation? They’re an engineering consulting/staffing firm that’s located in the same town as Blacklight’s offices. So, pretty much, what we’re seeing is that Mills hired his next door neighbor to provide a data-free testimonial promising that the hydrino generator really did work.

Real scientists, doing real work, don’t pull nonsense like this. Mills has been promising a commercial product within a year for almost 25 years. In that time, he’s filed multiple patents, some of which have already expired! And yet, he’s never actually allowed an independent team to do a public, open test of his system. He’s never provided any actual data about the system!

He and his team have claimed things like “We can’t let people see it, it’s secret”. But they’re filing patents. You don’t get to keep a patent secret. A patent application, under US law, must contain: “a description of how to make and use the invention that must provide sufficient detail for a person skilled in the art (i.e., the relevant area of technology) to make and use the invention.”. In other words, if the patents that Mills and friends filed are legally valid, they must contain enough information for an interested independent party to build a hydrino generator. But Mills won’t let anyone examine his supposedly working generators. Why? It’s not to keep a secret!


Finally, the question that a couple of people, including one reporter for WiredUK asked: If it’s all a scam, why would Mills and company keep on making claims?

The answer is the oldest in the book: money.

In my email this morning, I got a new version of a 419 scam letter. It's from a guy who claims to be the nephew of Ariel Sharon. He claims that his uncle owned some farmland, including an extremely valuable grove of olive trees, in the occupied west bank. Now, he claims, the family wants to sell that land – but as Sharon's relatives, they can't let their names get into the news. So, he says, he wants to “sell” the land to me for a pittance, and then I can sell it for what it's really worth, and we'll split the profits.

When you read about people who've fallen for 419 scams, you find that the scammers don't ask for all of the money up front. They start off small: “There is a $500 fee for the transfer”. When they get that, they show you some “evidence” in the form of an official-looking transfer-clearance receipt. But then they say that there's a new problem, and they need money to get around it. “We were preparing to transfer, but the clerk became suspicious; we need to bribe him!”, “There's a new financial rule that you can't transfer sums greater than $10,000 to someone without a Nigerian bank account containing at least $100,000”. It's a continual process. They always show some kind of fake document at each step of the way. The fakes aren't particularly convincing unless you really want to be convinced, but they're enough to keep the money coming.

Mills appears to be operating in very much the same vein. He’s getting investors to give him money, promising that whatever they invest, they’ll get back manifold when he starts selling hydrino power generators! He promises they’ll be on market within a year or two – five at most!

Then he comes up with either a demonstration, or the testimonial from his neighbor, or the self-publication of his book, or another press release talking about the newest version of his technology. It’s much better than the old one! This time it’s for real – just look at these amazing numbers! It’s 10 billion watts per liter, a machine that fits on your desk can generate as much power as a nuclear power plant!! We just need some more money to fix that pesky problem with corrosion on the electrodes, and then we’ll go to market, and you’ll be rich, rich, rich!

It’s been going on for almost 25 years, this constant cycle of press release/demo/testimonial every couple of years. (Seriously; in this post, I showed links to claims from 2009 claiming commercialization within 12 to 18 months; from 2005 claiming commercialization within months; and claims from 1999 claiming commercialization within a year.) But he always comes up with an excuse why those deadlines needed to be missed. And he always manages to find more investors, willing to hand over millions of dollars. As long as suckers are still willing to give him money, why wouldn’t he keep on making claims?

This one's for you, Larry! The Quadrature BLINK Kickstarter

After yesterday’s post about the return of vortex math, one of my coworkers tweeted the following at me:

Larry’s a nice guy, even if he did give me grief at my new-hire orientation. So I decided to take a look. And oh my, what a treasure he found! It's a self-proclaimed genius with a wonderful theory of everything. And he's running a kickstarter campaign to raise money to publish it. So it's a lovely example of profound crackpottery, with a new variant of the buy my book gambit!

To be honest, I’m a bit uncertain about this. At times, it seems like the guy is dead serious; at other times, it seems like it’s an elaborate prank. I’m going to pretend that it’s completely serious, because that will make this post more fun.

So, what exactly is this theory of everything? I don’t know for sure. He’s dropping hints, but he’s not going to tell us the details of the theory until enough people buy his book! But he’s happy to give us some hints, starting with an explanation of what’s wrong with physics, and why a guy with absolutely no background in physics or math is the right person to revolutionize physics! He’ll explain it to us in nine brief points!

First: Let me ask you a question. Since the inclusion of Relativity and Dirac’s Statistical Model, why has Physics been at loose ends to unify the field? Everyone has tried and failed, and for this reason so many have pointed out: what we don’t need, is another TOE, Theory of Everything. So if I was a Physicist, my theory would probably just be one of these… a failed TOE based on the previous literature.

But why do these theories fail? One thing for sure is that in academia every new ideas stems from previously accepted ideas, with a little tweak here or there. In the main, TOEs in Physics have this in common, and they all have failed. What does this tell you?

See, those physicists, they’re all just trying the same stuff, and they all failed, therefore they’ll never succeed.

When I look at modern physics, I see some truly amazing things. To pull out one particularly prominent example from this year, we've got the higgs boson. He'll sneer at the higgs boson a bit later, but that was truly astonishing: decades ago, based on a deep understanding of the standard model of particle physics, a group of physicists worked out a theory of what mass was and how it worked. They used that to make a concrete prediction about how their theory could be tested. It was untestable at the time, because the kind of equipment needed to perform the experiment didn't exist, and couldn't be built with the technology of the day. 50 years later, after technology advanced, their prediction was confirmed.

That’s pretty god-damned amazing if you ask me.

Based on the arguments from our little friend, a decade ago, you could have waved your hands around, and said that physicists had tried to create theories about why things had mass, and they’d failed. Therefore, obviously, no theory of mass was going to come from physics, and if you wanted to understand the universe, you’d have to turn to non-physicists.

On to point two!

Second: the underlying assumptions in Physics must be wrong, or somehow grossly mis-specified.

That’s it. That’s the entire point. No attempt to actually support that argument. How do we know that the underlying assumptions in physics must be wrong? Because he says so. Period.

Third: Who can challenge the old paradigm of Physics, only Copernicus? Physicists these days cannot because they are too inured of their own system of beliefs and methodologies. Once a PhD is set in place, Lateral Thinking, or “thinking outside the box,” becomes almost impossible due to departmental “silo thinking.” Not that physicists aren’t smart – some are genius, but like everyone in the academic world they are focused on publishing, getting research grants, teaching and other administrative duties. This leaves little time for creative thinking, most of that went into the PhD. And a PhD will not be accepted unless a candidate is ready and willing to fall down the “departmental silo.” This has a name: Catch 22.

It’s the “good old boys” argument. See, all those physicists are just doing what their advisors tell them to; once they’ve got their PhD, they’re just producing more PhDs, enforcing the same bogus rules that their advisors inflicted on them. Not a single physicist in the entire world is willing to buck this! Not one single physicist in the world is willing to take the chance of going down as one of the greatest scientific minds in history by bucking the conventional wisdom.

Except, of course, there are plenty of people doing that. For an example, right off the top of my head, we’ve got the string theorists. Sure, they get lots of justifiable criticism. But they’ve worked out a theory that does seem to describe many things about the universe. It’s not testable with present technology, and it’s not clear that it will ever be testable with any kind of technology. But according to Bretholt’s argument, the string theorists shouldn’t exist. They’re bucking the conventional model, and they’re getting absolutely hammered for it by many of their colleagues – but they’re still going ahead and working on it, because they believe that they’re on to something important.

Fourth: There is not much new theory-making going on in Physics since its practitioners believe their Standard Model is almost complete: just a few more billion dollars in research and all the colors of the Higgs God Particle may be sorted, and possibly we may even glimpse the Higgs Field itself. But this is sort of like hunting down terrorists: if you are in control of defining what a terrorist is, then you will never be out of a job or be without a budget. This has a name too: Self-Fulfilling Prophesy. The brutal truth…

Right, there’s not much new theory-making going on in physics. No one is working on string theory. There’s no one coming up with theories about dark matter or dark energy. There’s no one trying to develop a theory of quantum gravity. No one ever does any of this stuff, because there’s no new theory-making going on.

Of course, he hand-waves one of the most fantastic theory-confirmations from physics. The higgs got lots of press, and lots of people like to hand-wave about it and overstate what it means. (“It’s the god particle!”) But even stripped down to its bare minimum, it’s an incredible discovery, and for a jackass like this to wave his hands and pretend that it’s meaningless and we need to stop wasting time on stuff like the LHC and listen to him: I just don’t even know the right words to describe the kind of disgust it inspires in me.

Fifth: Who then can mount such a paradigm-breaking project? Someone like me, prey tell! But birds like me just don’t sit around the cage and get fat, we fly to the highest vantage point, and see things for what they are! We have a name as well: Free Thinkers. We are exactly what your mother warned you of… There’s a long list of us include Socrates, Christ, Buddha, Taoist Masters, Tibetan Masters, Mohammed, Copernicus, Newton, Maxwell, Gödel, Hesse, Jung, Tesla, Planck… All are Free Thinkers, confident enough in their own knowledge and wisdom that they are willing to risk upsetting the applecart! We soar so humanity can peer beyond its petty day to day and discover itself.

There are two things that really annoy me about this paragraph. First of all, there's the arrogance. This schmuck hasn't done anything yet, but he sees fit to announce that he's up there with Newton, Maxwell, etc.

Second, there's the mushing together of scientists and religious figures. Look, I'm a religious Jew. I don't have anything against respecting theology, theologians, or religious authorities. But science is different. Religion is about subjective experience. Even if you believe profoundly in, say, Buddhism, you can't just go through the motions of what Buddha supposedly did and get exactly the same result. There's no objective, repeatable way of testing it. Science is all about the hard work of repeatable, objective experimentation.

He continues point 5:

This chain might have included Einstein and Dirac had they not made three fatal mistakes in Free Thinking: They let their mathematical machine dictate what was true rather than using mathematics only to confirm their observations, they got fooled by their own anthropomorphic assumptions, and then they rooted these assumptions into their mathematical methods. This derailed the last two generations of scientific thinking.

Here’s where he strays into the real territory of this blog.

Crackpots love to rag on mathematics. They can’t understand it, and they want to believe that they’re the real geniuses, so the math must be there to confuse things!

Scientists don’t use math to be obscure. Learning math to do science isn’t some sort of hazing ritual. The use of math isn’t about making science impenetrable to people who aren’t part of the club. Math is there because it’s essential. Math gives precision to science.

Back to the Higgs boson for a second. The people who proposed the Higgs didn’t just say “There’s a field that gives things mass”. They described what the field was, how they thought it worked, how it interacted with the rest of physics. The only way to do that is with math. Natural language is both too imprecise, and too verbose to be useful for the critical details of scientific theories.

Let me give one example from my own field. When I was in grad school, there was a new system of computer network communication protocols under design, called OSI. OSI was complex, but it had a beauty to its complexity. It carefully divided the way that computer networks and the applications that run on them work into seven layers. Each layer only needed to depend on the details of the layer beneath it. When you contrast it against TCP/IP, it was remarkable. TCP/IP, the protocol that we still use today, is remarkably ad-hoc, and downright sloppy at times.

But we’re still using TCP/IP today. Why?

Because OSI was specified in English. After years of specification, several companies and universities implemented OSI network stacks. When they connected them together, what happened? It didn't work. No two of the reference implementations could talk to each other. Each of them was perfectly conformant with the specification. But the specification was imprecise. To a human reader, it seemed precise. Hell, I read some of those specifications (I worked on a specification system, and read all of the specs for layers 3 and 4), and I was absolutely convinced that they were precise. But English isn't a good language for precision. It turned out that what we all believed was a perfectly precise specification actually had numerous gaps.

There's still a lot of debate about why the OSI effort failed so badly. My take, having been in the thick of it, is that this was the root cause: after all the work of building the reference implementations, they realized that their specifications needed to go back to the drawing board to get the ambiguities fixed – and the world outside of the OSI community wasn't willing to wait. TCP/IP, for all of its flaws, had a perfectly precise specification: the one, single, official reference implementation. It might have been ugly code, it might have been painful to try to figure out what it meant – but it was absolutely precise: whatever that code did was right.

That’s the point of math in science: it gives you that kind of unambiguous precision. Without precision, there’s no point to science.

Sixth: What happens to Relativity when the assumptions of Lorentz’ space-time is removed? Under these assumptions, the speed of light limits the speed of moving bodies. The Lorentz Transformation was designed specifically to set this speed limit, but there is no factual evidence to back it up. At first, the transformation assumed that there would be length and time dilations and a weight increase when travelling at sub-light speeds. But after the First Misguided Generation ended in the mid 70’s, the weight change idea was discarded as untenable. It was quietly removed because it implied that a body propagating at or near the speed of light would become infinitely massive and turn into a black hole. Thus, the body would swallow itself up and disappear!

Whoops… bad assumption!

The space contraction idea was left intact because it was imperative to Hilbert’s rendition of the space-time geodesic that he devised for Einstein in 1915. Hilbert was the best mathematician of his day, if not ever! He concocted the mathematical behemoth called General Relativity to encapsulate Einstein’s famous insight that gravitation was equivalent to an accelerating frame. Now, not only was length assumed to contract, but space was assumed to warp and gravitation was assumed to be an accelerating frame, though no factual evidence exists to back up these assumptions!

Whoops… 3 bad assumptions in a row!

This is an interestingly bizarre argument.

Relativity predicts a change in mass (not weight!) as velocity increases. That prediction has not changed. It has been confirmed, repeatedly, by numerous experiments. The entire reasoning here is based on the unsupported assertion that relativistic changes in mass have been discarded as incorrect. But that couldn’t be farther from the truth!

Similarly, he’s asserting that the space-warping effects of gravity – one of the fundamental parts of general relativity – is incorrect, again without the slightest support.

This is going to seem like a side-track, but bear with me:

When I came into my office this morning, I took out my phone and used foursquare to check in. How did that work? Well, my phone received signals from a collection of satellites, and based on the tiny differences in data contained in those signals, it was able to pinpoint my location to precisely the corner of 43rd Street and Madison Avenue, outside of Grand Central Terminal in Manhattan.

To be able to pinpoint my location that precisely, it ultimately relies on clocks in the satellites. Those clocks are in orbit, moving very rapidly, and in a different position in the earth's gravity well. Space-time is less warped at their elevation than it is here on earth. Relativity predicts that based on that fact, the clocks in those satellites must run at a different rate than clocks here on earth. In order to get precise positions, those clocks need to be adjusted to keep time with the receivers on the surface of the earth.

If relativity – with its interconnected predictions of changes in mass, time, and the warp of space-time – didn’t work, then the corrections made by the GPS satellites wouldn’t be needed. And yet, they are.
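Just to make that concrete, here's a rough back-of-the-envelope sketch of those two competing clock effects (the constants and the GPS orbital radius below are approximate, and this ignores details like orbital eccentricity and the earth's rotation – it's not the real correction procedure):

```python
import math

# Rough back-of-the-envelope check of the GPS clock effects described
# above. All constants are approximate.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the earth, kg
c = 2.998e8            # speed of light, m/s
R_earth = 6.371e6      # mean radius of the earth, m
r_orbit = 2.656e7      # GPS orbital radius (~20,200 km altitude), m
day = 86400            # seconds per day

# Special relativity: the satellite's orbital speed slows its clock.
v = math.sqrt(G * M / r_orbit)               # orbital speed, ~3.9 km/s
sr_shift = -(v**2) / (2 * c**2) * day        # seconds lost per day

# General relativity: weaker gravity at altitude speeds the clock up.
gr_shift = G * M * (1/R_earth - 1/r_orbit) / c**2 * day

net = (sr_shift + gr_shift) * 1e6            # net drift, microseconds/day
print(f"SR: {sr_shift*1e6:+.1f} us/day, GR: {gr_shift*1e6:+.1f} us/day, "
      f"net: {net:+.1f} us/day")
```

The two effects pull in opposite directions – the satellite's speed slows its clock by about 7 microseconds a day, while the weaker gravity at altitude speeds it up by about 46 – for a net drift of roughly +38 microseconds per day. Left uncorrected, that would wreck GPS positioning within minutes.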

There are numerous other examples of this. We’ve observed relativistic effects in many different ways, in many different experiments. Despite what Mr. Bretholt asserts, none of this has been disproven or discarded.

Seventh: Many, many, many scientists disagree with Relativity for these reasons and others, but Physics keeps it as a mainstream idea. It has been violated over and over again in various space programs, and is rarely used in the aerospace industry when serious results are expected. Physics would like to correct Relativity because it doesn’t jive with the Quantum Standard Model, but they can’t conceive how to fix it.

In Quadrature Theory the problem with Relativity is obvious and easily solved. The problem is that the origin and nature of space is not known, nor is the origin and nature of time or gravitation. Einstein did not prove anything about gravitation, nor has anyone since. The “accelerating frame” conjecture is for the convenience of mathematics and sheds no light on the nature of gravitation itself. Quantum Chromo Dynamics, QCD, hypothesizes the “graviton” on the basis of similarly convenient mathematics. Many scientists disagree with such “force carrier” propositions: they are all but silenced by the trends in Physics publishing, however. The “graviton” is, nevertheless, a mathematical fiction similar to Higgs Boson.

Whoops… a couple more bad assumptions, but where did they come from?

Are there any serious scientists who disagree with relativity? Mr. Bretholt doesn’t actually name any. I can’t think of any credible ones. Certainly pretty much all physicists agree that there’s a problem, because relativity and quantum physics both appear to be correct, but they’re not really compatible. It’s a major area of research. But that’s a different thing from saying that scientists “disagree” with or reject relativity. Relativity has passed every experimental test that anyone has been able to devise.

Of course, it’s completely true that Einstein didn’t prove anything about gravity. Science doesn’t deal with proof. Science devises models based on observations. It tries to find the best predictive model of the universe that it can, based on repeated observation. Science can disprove things, by showing that they don’t match our observations of reality, but it can’t prove that a theory is correct. So we can never be sure that our model is correct – just that it does a good job of making predictions that match further observations. Relativity could be completely, entirely, 100% wrong. But given everything we know now, it’s the best predictive theory we have, and nothing we’ve been able to do can disprove it.

Ok, I’ve gone on long enough. If you want to see his last couple of points, go ahead and follow the link to his “article”. After all of this, we still haven’t gotten to anything about what his supposed new theory actually says, and I want to get to just a little bit of that. He’s not telling us much – he wants money to print his book! – but what little he says is on his kickstarter page.

So let me introduce that modification: it’s called Quadrature, or Q. Quadrature arose from Awareness as the original separation of Awareness from itself. This may sound strangely familiar; I elaborate at length about it in BLINK. The Theory of Quadrature develops Q as the Central Generating Principle that creates the Universe step by step. After a total of 12 applications of Quadrature, it folds back on itself like a snake biting its tail. Due to this inevitable closure, the Universe is complete, replete with life, energy and matter, both dark and light. As a necessary consequence of this single Generating Principle, everything in the Universe is ultimately connected through ascending levels of Awareness.

The majesty and mystery of Awareness and its manifestation remains, but this vision puts us inside as co-creative participants. I think you will agree that this is highly desirable from a metaphysical point of view. Quadrature is the mechanism that science has been looking for to unify these two points of view. Q has been foreshadowed in many ways in both physics and metaphysics. As developed in BLINK, Quadrature Theory can serve as a Theory of Everything.

Pretty typical grandiose crackpottery. This looks an awful lot like a variation of Langan’s CTMU. It’s all about awareness! And there’s a simple “mathematical” construct called “quadrature” that makes it all work. Of course, I can’t tell you what quadrature is. No, you need to pay me! Give me money! And then I’ll deign to explain it to you.

To make a long story short, Quadrature Theory supports four essential claims that undermine Relativity, Quantum Mechanics, and Cosmology while placing these disciplines back on a more secure foundation once their erroneous assumptions have been removed. These are:

  1. The origin of space and its nature arise from Quadrature. Space is shown to be strictly rectilinear; space cannot warp under any conditions.
  2. The origin of the Tempic Field and its nature arise from Quadrature. This field facilitates all types of energetic interaction and varies throughout space. The idea of time arises solely from transactions underwritten by the Tempic Field. Therefore, time as we know it here on Earth is a local anomaly, which uniquely affects all interactions including the speed of light. “C,” in fact, is a velocity, and is variable in both speed and direction depending on the gradient of the Tempic Field. Thus, “C” varies drastically off-planet!
  3. Spin is a fundamental operation in space that constitutes the only absolute measurement. Its density throughout space is non-linear and it generates a variable Tempic Field within spinning systems such as atoms, or galaxies. This built-in “time” serves to hold the atom together eternally, and has many other consequences for Quantum Mechanics and Cosmology.
  4. Gravity is also a ringer in physics. Nothing of the fundamental origin of gravity is known, though we know how to use it quite well. Given the consequence of Spin, gravity can be traced to forms that have closed Tempic Fields. The skew electric component of spinning systems will align to create an aggregated, polarized, directional field: gravity.

Pop science, of course, loves to talk about black holes, worm holes, time warps and all manner of the ridiculous in physics. There is much more fascinating stuff than this in my book, and it is completely consistent with what is observable in the Universe. For example, I propose the actual purpose of the black hole and why every galaxy has one. At any rate, perhaps you now have an inkling of why Quadrature Theory is a Revolution Waiting to Happen!

Pure babble, stringing together words in nonsensical ways. As my mantra goes: the worst math is no math. Here he’s arguing that rigorous, well-tested mathematical models are incorrect – because vague reasons.

Vortex Math Returns!

Cranks never give up. That’s something that I’ve learned in my time writing this blog. It doesn’t matter how stupid an idea is. It doesn’t matter how obviously wrong, how profoundly ridiculous. No matter what, cranks will continue to push their ridiculous ideas.

One way that this manifests is that the comments on old posts never quite die. Years after I initially write a post, I still have people coming back and trying to share “new evidence” for their crankery. George Shollenberger, the hydrino cranks, the Brown’s gas cranks, the CTMU cranks, they’ve all come back years after a post with more of the same-old, same-old. Most of the time, I just ignore it. There’s nothing to be gained in just rehashing the same old nonsense. It’s certainly not going to convince the cranks, and it’s not going to be interesting to my less insane readers. But every once in a while, something actually new and amusing comes along in those comments. Today I’ve got an example of that for you: one of the proponents of Marko Rodin’s “Vortex Math” has returned to tell us the great news!

I have linked Vortex Based Mathematics with Physics and can prove most physics using vortex based mathematics. I am writing an article call “Temporal Physics of Vortex Based Mathematics” here: http://www.vortexspace.org

This is a lovely thing, even without needing to actually look at his article. Just start at the very first line! He claims that he can “prove most physics”.

Science doesn’t do proof.

What science does is make observations, and then based on those observations produce models of the universe. Then, using that model, it makes predictions, and compares those predictions with further observations. By doing that over and over again, we get better and better models of how the universe works. Science is never sure about anything – because all it can do is check how well the model works. It’s always possible that any model doesn’t describe how things actually work. But it gives us a good approximation, in a way that allows us to understand how things work. Or, not quite how things work, but how we can affect the world by our actions. Our model might not capture what’s really happening – but it’s got predictive power.

To give an example of this: our model of the universe says that the earth orbits the sun, which orbits the galactic core, which is moving through the universe. It’s possible that this is wrong. You can propose an alternative model in which the earth is the stationary center of the universe, and everything moves around it. As a model, it’s not very attractive, because to make it fit our observations, it requires a huge amount of complexity – it’s a far, far more complex model than our standard one, and it’s much harder to use to make accurate predictions. But it can be made to work, just as well as our standard one. It’s possible that that’s how the universe actually works. I don’t think any reasonable person actually believes that the universe works that way, but it’s possible that our entire model is wrong. Science can’t prove that our model is correct. It can just show that it’s the simplest model that matches our observations.

But Mr. Calhoun claims that he can prove physics. That claim shows that he has no idea of what science is, or what science means. And if he doesn’t understand something that simple, why should we trust him to understand any more?

Ah, but when we take a look at some of his writings… it’s a lovely pile of rubbish. Remember the mantra of this blog? The worst math is no math. Mr. Calhoun’s writing is a splendid example of this. He claims to be doing science, math, and mathematical proofs – but when you actually look at his writing, there’s not a speck of genuine math to be found!

Let’s start with a really quick reminder of what vortex math is. Take the sequence you get by repeated doubling in base-10: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, …. If, for each of those numbers, you sum the digits (repeating until you get a single digit result), you get: 1, 2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5, … It turns into a repeated sequence, 1, 2, 4, 8, 7, 5, over and over again. You can do the same thing in the reverse direction, by halving: 1, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625, 0.0078125, where the digits sum to 1, 5, 7, 8, 4, 2, 1, 5, …
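The pattern is easy to reproduce in a few lines – and just as easy to demystify. Here's a quick sketch:

```python
# Reproduce the digit-sum cycle described above: repeatedly sum the
# base-10 digits of each doubling until a single digit remains.
def digit_root(n: int) -> int:
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

doublings = [2**k for k in range(12)]    # 1, 2, 4, 8, 16, 32, ...
cycle = [digit_root(n) for n in doublings]
print(cycle)  # [1, 2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5]
```

The punchline: the repeated digit sum of a positive number n is just 1 + (n − 1) mod 9, so the famous cycle is nothing but the powers of 2 modulo 9. It's an artifact of writing numbers in base 10, not a property of the universe.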

According to Rodin, this demonstrates something profound. This is the heart of Vortex mathematics: this cycle in the numbers shows that there’s some kind of energy flow that is fundamental to the universe, based on this kind of repeating sequence.

So, how does Mr. Calhoun use this? He thinks that he can connect it to black holes and white holes:

Do not forget that we already learned that black holes suck in matter while “compressing” it; and, on the other side of the black hole is a white hole that then takes the same matter and spits it back out while “de-compressing” the matter. The “magnetic warp” video on Youtube shows the same torus shape Marko had illustrated in his “vortex based mathematics” video [see below]:

You can clearly see the vortex in the center of the torus magnets. This is made possible using two Ferrofluid Hele-Shaw Cells [Hele-Shaw effect]. Here are a few links about using ferrofluid hele-shaw cell to view magnetic fields:

http://en.wikipedia.org/wiki/Hele-Shaw_flow

http://www2.warwick.ac.uk/fac/cross_fac/iatl/ejournal/issues/volume2issue1/snyder/

Here is a quote from a Youtube user about the magnets:

“Walter Rawls, a? scientist who did a great deal of research with Albert Roy Davis, said that he believes at the center of every magnet there is a miniature black hole.”

I have not verified the above statement about Walter Rawls as of yet. However, the above images prove beyond doubt Marko’s torus universe mathematical geometry. Now lets take a look at Marko’s designs:

The pictures look kind-of-like this silly torus thing that Rodin likes to draw: therefore they prove beyond doubt that Rodin’s rubbish is correct! Wow, now that’s a mathematical proof!

It gets worse from there.

The next section is “The Physics of Time”.

If you looked at the Youtube videos of the true motion of the Earth through space you now know that we are literally falling into a black hole that is at the center of the galaxy. The motion of the Earth; all of the rotation and revolution, all of that together is caused by space-time. Time is acually the rate and pattern of the motion of matter as it moves through space. It is the fourth dimension. you have probably heard this if you have studied Einstien theories: “As an object moves faster the rate of its motion [or time] slows down”. Sounds like an oxymoron doesn’t it? Well it not so strange once you understand how the fabric of space-time relates to Vortex Based Mathematics.

Motion of the Earth

The planet Earth rotates approx every twenty-four hours. It makes a complete 360° rotation every twenty-four hours. That amount of time is the frequency of the rate of rotation.

Looking down from the north pole of the Earth, you will see that if we divide the sphere into 36 equal parts the sunrise would have to pass through all of the degrees of the sphere in order to make a complete cycle:

Remember the Earth is a “giant magnet” that is spinning. The electromagnetic field of this “giant magnet” is moving out of the north pole [which is really at the geographic south pole] and going to the south pole [which again is really at the geographic north pole]. This electromagnetic field is moving or spinning [see youtube video at top] according to a frequency or cycle.

I don’t know if you realize this, but matter can be compressed or expanded without it being destroyed. A black hole does not de-molecularize matter then in passing to the white hole reassemble it again. Nothing that is demolecularized can naturally be put back together again. If an object is destroyed then is it destroyed; there is no reassembly. Matter can be however, compressed and decompressed. As you probably know and have heard this before there is an huge amount of distance between the atoms in your body. Like the giant void of space and much like the distances between planets in our solar system; the atomic matter in our bodies is just as similar in the amount of space between each atom.

What fills the spaces between each atom? Well, Its space-time. It is the fabric of the inertia ether that all matter in space moves through. Spacetime or what I call “etherspace” is what I have come to realize as “the space in between the spaces”. This “etherspace” can be compressed and then decompressed. Etherspace can enable all of the matter in your body to be greatly compressed without your body being destroyed; and at the same time functioning as it normally should. The ether space then allows your body to be decompressed again; all the while functioning as it should.

It is the movement of spacetime or “ether space” that is causing the rotation and revolving of the planet we live on. It is also responsible for the motions of all of the bodies in space.

Magnets will, whether great or small, act as engines for etherspace. They pull in etherspace at the south pole and also pump out etherspace at the north pole of the magnet. All magnets do this; the great planet earth all the way to the little magnet that sticks to your refridgerator door. Vortex based mathematics prove all of this. I will show you.

As I stated earlier the Earth is a giant magnet and if we apply the Vortex Based Mathematics to the 10° spacings of this “giant magnet” lets see what happens. Now we are going to see the de-compression of space-time eminatiing from the true north pole of the giant magnet of the Earth. Let’s deploy a doubling circuit to the spacings of the planet. We will start at 0° and go all the way to 360°.

Calhoun certainly shows that he’s a worthy inheritor of the mantle of Rodin. Rodin’s entire rubbish is really based on taking a fun property of our particular base-10 numerical notation, and without any good reason, believing that it must be a profound fundamental property of the universe. Calhoun takes two arbitrary things: the 360 degree conventional angle measurement, and the 24 hour day, and likewise, without any good reason, without even any argument, believes that they are fundamental properties of the universe.

Where does the 24 hour day come from? I did a bit of research, and there are a couple of possible arguments. It appears to date back to ancient Egypt. The argument that I found most convincing is based on how the Egyptians counted on their hands. They did a lot of things in base-12, because using your thumb to point out the joints of the fingers on your hand, you can count to 12. The origin of our base-10 is based on using fingers to count; base-12 is similar, but based on a slightly different way of counting on your fingers. Using base-12, they decided to describe time in terms of counting periods of light and darkness: 12 bright periods, 12 dark ones. There’s nothing scientific or fundamental about it: it’s an arbitrary way of measuring time. The Greeks adopted it from the Egyptians; the Romans adopted it from the Greeks; and we adopted it from the Romans. There is no fundamental reason why it is the one true correct way of measuring time.

Similarly, the 360 degree system of angular measure is not the least bit fundamental. It dates back to the Babylonians. In writing, the Babylonians used a base-60 system, instead of our base-10. In their explorations of geometry, they observed that if you inscribed a hexagon inside of a circle, each of the segments of the hexagon was the same length as the radius of the circle. So they measured an angle in terms of which segment of the inscribed hexagon it crossed. Within those six segments, they divided them into sixty sections, because what else would people who use base-60 use? And then to subdivide those, they used 60 again. The 360 degree system is a random historical accident, not a profound truth.

I don’t want to get too far off track (or any farther off track), but: when you’re talking about angles, there actually is a fundamental measurement, called a radian – the angle subtended by an arc whose length equals the circle’s radius, so that a full circle is 2π radians. Whenever you do real math using angles, you end up needing a conversion factor which converts your degree-based angle into radians.
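As a tiny illustration of that conversion factor (using Python's standard math module):

```python
import math

# Degrees are a historical convention; radians are the natural unit
# (arc length divided by radius), which is why the factor pi/180
# shows up whenever degree-based angles enter real math.
def deg_to_rad(degrees: float) -> float:
    return degrees * math.pi / 180.0

print(deg_to_rad(360))            # 2*pi ~ 6.283...: one full circle
print(math.sin(deg_to_rad(90)))   # 1.0
```

Nothing about the number 360 survives the conversion; any other arbitrary subdivision of the circle would need its own factor to reach the same radian values.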

Anyway – this rubbish about the 24 hour day and 360 degree circle is what passes for math in Calhoun’s world. This is as close to math or to correctness as Calhoun gets.

What’s even worse is his babble about black holes and white holes.

Both black and white holes are theoretical predictions of relativity. The math involved is not simple: it’s based on Einstein’s field equations from general relativity:

R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}

In this equation, the subscripted variables are all symmetric 4×4 tensors. Black and white holes are particular solutions to these equations. This is not elementary math, not by a long-shot. But if you want to really talk about black and white holes, this is how you do it.

Translating from the math into prose is always a problem, because the prose is far less precise, and it’s inevitably misleading. No matter how well you think you understand based on the prose, you don’t understand the concept, because you haven’t been told enough, in a precise enough way, to actually understand it.

That said, the closest I can come is the following.

We’ll start with black holes. Black holes are much easier to understand: put enough mass into a small enough area of space, and you wind up with a boundary line, called the event horizon, where anything that crosses that boundary, no matter what – even massless stuff like light – can never escape. We believe, based on careful analysis, that we’ve observed black holes in our universe. (Or rather, we’ve seen evidence that they exist; you can’t actually see a black hole; but you can see its effects.) We call a black hole a singularity, because nothing beyond the event horizon is visible – it looks like a hole in space. But it isn’t: it’s got a mass, which we can measure. Matter goes into a black hole, and crosses the event horizon. We can no longer see the matter. We can’t observe what happens to it once it crosses the horizon. But we know it’s still there, because we can observe the mass of the hole, and it increases as matter enters.
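To put a number on “enough mass into a small enough area”: for a non-rotating mass, the event horizon sits at the Schwarzschild radius, r_s = 2GM/c^2. A quick sketch, with approximate constants:

```python
# Schwarzschild radius: the event-horizon size for a non-rotating
# mass, r_s = 2GM/c^2. Squeeze a mass inside this radius and you
# get a black hole.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

sun = 1.989e30     # mass of the sun, kg
print(schwarzschild_radius(sun))   # ~2950 m: the sun squeezed into ~3 km
```

For comparison, the same formula gives the earth a Schwarzschild radius of about 9 millimeters – which gives you a feel for just how extreme the densities involved are.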

(It was pointed out to me on twitter that my explanation of the singularity is wrong. See what happens when you try to explain mathematical stuff non-mathematically?)

White holes are a much harder idea. We’ve never seen one. In fact, we don’t really think that they can exist in our universe. In concept, they’re the opposite of a black hole: they are a region with a boundary that nothing can ever enter. In a black hole, you can’t cross the boundary and escape; in a white hole, once something crosses the boundary, it can’t ever re-enter. White holes only exist in a strange conceptual case, called an eternal black hole – that is, a black hole that has been there forever, which was never formed by gravitational collapse.

There are some folks who’ve written speculative work based on the solutions to the white hole field equations that suggest that our universe is the result of a white hole, inside of the event horizon of a black hole in an enclosing universe. But in this solution, the white hole exists for an infinitely small period of time: all of the matter in it ejects into a new space-time realm in an instant. There’s no actual evidence for this, beyond the fact that it’s an interesting way of interpreting a solution to the field equations.

All of this is a long-winded way of saying that when it comes to black holes, Calhoun is talking out his ass. A black hole is not one end of a tunnel that leads to a white hole. If you actually do the math, that doesn’t work. A black hole does not “compress” matter and pass it to a white hole which decompresses it. A black hole is just a huge clump of very dense matter; when something crosses the event horizon of a black hole, it just becomes part of that clump of matter.

His babble about magnetism is similar: we’ve got some very elegant field equations, called Maxwell’s equations, which describe how magnetism and electric fields work. It’s beautiful, if complex, mathematics. And they most definitely do not describe a magnet as something that “pumps etherspace from the south pole to the north pole”.
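For the record, in differential form (SI units), those equations are:

```latex
\begin{align}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} &
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{align}
```

Note the second equation: the divergence of the magnetic field is always zero, meaning magnetic field lines have no sources or sinks. There is nothing at either pole for a magnet to “pump”.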

There’s no proof here. And there’s no math here. There’s nothing here but the midnight pot-fueled ramblings of a not particularly bright sci-fi fan, who took some wonderful stories, and believed that they were based on something true.