Elon Musk’s Techno-Religion

A couple of people have written to me asking me to say something about Elon Musk’s simulation argument.

Unfortunately, I haven’t been able to find a verbatim quote from Musk about his argument, and I’ve seen a couple of slightly different arguments presented as being what he said. So I’m not going to focus on Musk; instead, I’ll take the basic simulation argument and talk about what’s wrong with it from a mathematical perspective.

The argument isn’t really all that new. I’ve found a couple of sources that attribute it to a paper published in 2003. That 2003 paper may have been the first academic publication, and it might have been the first to present the argument in formal terms, but I definitely remember discussing this in one of my philosophy classes in college in the late 1980s.

Here’s the argument:

  1. Any advanced technological civilization is going to develop massive computational capabilities.
  2. With immense computational capabilities, they’ll run very detailed simulations of their own ancestors in order to understand where they came from.
  3. Once it is possible to run simulations, they will run many of them to explore how different parameters will affect the simulated universe.
  4. That means that advanced technological civilization will run many simulations of universes where their ancestors evolved.
  5. Therefore the number of simulated universes with intelligent life will be dramatically larger than the number of original non-simulated civilizations.

If you follow that reasoning, then the odds are, for any given form of intelligent life, it’s more likely that they are living in a simulation than in an actual non-simulated universe.

As an argument, it’s pretty much the kind of crap you’d expect from a bunch of half drunk college kids in a middle-of-the-night bullshit session.

Let’s look at a couple of simple problems with it.

The biggest one is a question of size and storage. The heart of this argument is the assumption that for an advanced civilization, nearly infinite computational capability will effectively become free. If you actually try to look at that assumption in detail, it’s not reasonable.

The problem is, we live in a quantum universe. That is, we live in a universe made up of discrete entities. You can take an object, and cut it in half only a finite number of times, before you get to something that can’t be cut into smaller parts. It doesn’t matter how advanced your technology gets; it’s got to be made of the basic particles – and that means that there’s a limit to how small it can get.

Again, it doesn’t matter how advanced your computers get; it’s going to take more than one particle in the real universe to simulate the behavior of a particle. To simulate a universe, you’d need a computer bigger than the universe you want to simulate. There’s really no way around that: you need to maintain state information about every particle in the universe, and on top of storing all that state, you need some amount of hardware to actually run the simulation. So even with the most advanced technology that you can possibly imagine, you can’t possibly do better than one particle in the real universe containing all of the state information about a particle in the simulated universe. If you did, then you’d be guaranteeing that your simulated universe wasn’t realistic, because its particles would have less state than particles in the real universe.

This means that to simulate something in full detail, you effectively need something bigger than the thing you’re simulating.
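To make the storage argument concrete, here’s a back-of-envelope sketch in Python. Every number in it is a rough, order-of-magnitude assumption, not a measurement: roughly 10^50 atoms in the Earth, an assumed 100 bits of state per simulated particle, and a wildly generous one bit of storage per atom of computing hardware.

```python
# Illustrative arithmetic only; all constants are rough assumptions.

ATOMS_IN_EARTH = 10**50      # commonly cited order-of-magnitude estimate
BITS_PER_PARTICLE = 100      # assumed state per particle (position, momentum,
                             # spin, interactions...); purely illustrative
BITS_PER_HARDWARE_ATOM = 1   # absurdly generous best case for the computer

bits_needed = ATOMS_IN_EARTH * BITS_PER_PARTICLE
hardware_atoms = bits_needed // BITS_PER_HARDWARE_ATOM

# Even in this best case, the machine needs 100x as many atoms as the
# planet it's simulating, before doing any actual computation.
print(hardware_atoms // ATOMS_IN_EARTH)  # -> 100
```

Change the assumed constants however you like; as long as a simulated particle carries more than one bit of state and an atom of hardware stores a bounded number of bits, the computer comes out bigger than the thing it simulates.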

That might sound silly: we do lots of things with tiny computers. I’ve got an iPad in my computer bag with a couple of hundred books on it: it’s much smaller than the books it simulates, right?

The “in full detail” is the catch. When my iPad simulates a book, it’s not capturing all the detail. It doesn’t simulate the individual pages, much less the individual molecules that make up those pages, the individual atoms that make up those molecules, etc.

But when you’re talking about perfectly simulating a system well enough to make it possible for an intelligent being to be self-aware, you need that kind of detail. We know, from our own observations of ourselves, that the way our cells operate is dependent on incredibly fine-grained sub-molecular interactions. To make our bodies work correctly, you need to simulate things on that level.

You can’t simulate the full detail of a universe bigger than the computer that simulates it, because the computer is made of the same things as the universe that it’s simulating.

There’s a lot of handwaving you can do about what things you can omit from your model. But at the end of the day, you’re looking at an incredibly massive problem, and you’re stuck with the simple fact that you’re talking, at least, about building a computer that can simulate an entire planet and its environs. And you’re trying to do it in a universe just like the one you’re simulating.

But OK, we don’t actually need to simulate the whole universe, right? I mean, you’re really interested in developing a single species like yourself, so you only care about one planet.

But to make that planet behave absolutely correctly, you need to be able to correctly simulate everything observable from that planet. Its solar system, you need to simulate pretty precisely. The galaxy around it needs less precision, but it still needs a lot of work. Even getting very far away, you’ve got an awful lot of stuff to simulate, because your simulated intelligences, from their little planet, are going to be able to observe an awful lot.

To simulate a planet and its environment with enough precision to get life and intelligence and civilization, and to do it at a reasonable speed, you pretty much need to have a computer bigger than the planet. You can cheat a little bit, and maybe abstract parts of the planet; but you’ve got to do pretty good simulations of lots of stuff outside the planet.

It’s possible, but it’s not particularly useful, because you need to actually run that simulation. And since it’s made up of the same particles as the things it’s simulating, it can’t run faster than the universe it simulates. To get useful results, you’d need to build it to be massively parallel. And that means that your computer needs to be even larger – something like a million times bigger.

If technology were to get good enough, you could, in theory, do that. But it’s not going to be something you do a lot of: no matter how advanced technology gets, building a computer that can simulate an entire planet and its people in full detail is going to be a truly massive undertaking. You’re not going to run large numbers of simulations.

You can certainly wave your hands and say that the “real” people live in a universe without the kind of quantum limit that we live with. But if you do, you’re throwing other assumptions out the window. You’re not talking about ancestor simulation any more. And you’re pretending that you can make predictions based on our technology about the technology of people living in a universe with dramatically different properties.

This just doesn’t make any sense. It’s really just techno-religion. It’s based on the belief that technology is going to continue to develop computational capability without limit – that the fundamental structure of the universe won’t limit technology and computation. Essentially, it’s saying that technology is omnipotent. Technology is God, and just as in any other religion, its adherents believe that you can’t place any limits on it.


22 thoughts on “Elon Musk’s Techno-Religion”

  1. Janne

    I don’t think that’s a particularly convincing counter-argument. You could just say that the real universe is actually much bigger and/or finely divided than the (relatively) small, coarse simulated one we live in. Any simulation involves doing simplifications; they could reasonably assume that as long as the macroscopic effects are largely identical, it doesn’t matter if the simulated physicists are discovering a slightly different system than the real one.

    A much more damning (I think) rebuttal is the original point #2: “With immense computational capabilities, they’ll run very detailed simulations of their own ancestors in order to understand where they came from.”

    Oh really? Got any citation for that? See any huge interest or big push in our world for allocating resources to do that sort of thing?

    Rather, if current trends are any guide, I’d expect those “immense computational resources” to be spent on generating endless amounts of hyper-realistic porn, game universes and fake social presences. I’d say assumption #2 needs to be much better motivated before the rest of that argument goes anywhere.

    1. markcc Post author

      The idea behind this whole argument is that the point of the effort is to understand how their own intelligence developed. Building a drastically simplified universe doesn’t really allow you to do that.

      The whole idea behind the universe simulation is to test your understanding of your own universe, and how it developed intelligent, self-aware life.

      If you understand enough about where intelligence comes from to simplify the universe enough to make it trivially computable, then you don’t need to run the simulation. If you are running the simulation, it’s because you don’t understand, and you need to test things. And in that case, it’s very important to capture the real complexity of the environment in which intelligent life could develop.

      But the key point of my argument is really just that no matter how advanced you make your technology, it’s going to require massive resources to build a universe simulation sufficiently detailed to evolve intelligent life.

      This silly argument is really premised on the assumption that near-infinite computational capability will become effectively free. I don’t think that’s true.

  2. Steve Ruble

    I enjoyed reading this, because almost every time I thought, “But what about…” you addressed that objection in the next paragraph. I think you covered most of the bases when it comes to the standard, interesting version of the simulation argument.

    There’s also the “solipsistic” version of the simulation argument, where the claim is that each individual person gets a simulated world, which only needs to have a resolution fine enough to handle whatever they’re currently observing. That basically reduces to ye olde brain-in-a-vat argument, and is about as boring to talk about.

    I agree that belief in The Simulators is “just techno-religion”, but out of all the ridiculous things one could speculate about, I think this is one of the more interesting. People who put forward this idea are probably imagining a middle ground between the down-to-the-particle simulation you refute here and the solipsistic simulation – they’re basically imagining that the world is a MMO computer game with a really good physics model and dynamic texture scaling. That avoids most of your criticisms, I think, but I’m still not sure what the point of the whole argument is supposed to be, other than something to bs about.

  3. Spencer E Bliven

    Can you cite your sources? It’s not clear to me that anyone really embraces the simulated universe hypothesis any more than anyone really believes in solipsism. It’s an interesting thought experiment, but if Musk really believed it then why would he care about preserving humanity so much? I suspect you’re misrepresenting his views with your inflammatory title.

      1. markcc Post author

        That’s one of the articles I found. The problem is, they don’t quote him explaining the argument, just meta-quotes about it.

      If I could have found a transcript of his full explanation of the argument, I would have used it. But paraphrases? I don’t think it’s fair to quote other people’s accounts of what he said. Better to just present the argument as I understand it in my own words, and be clear that I’m doing my best working from paraphrases.

  4. Aleks

    I disagree with Musk for other reasons, but I understand the argument you’re making, and I also agree with it… Up to a point.

    What if you didn’t need to hold state for every particle unless absolutely required to do so? Then you could theoretically approximate reality well enough that no one living in it would be the wiser…and only when they force the simulation’s hand do you need to give them what they’re asking for. I mean, they could design some sort of experiment with a slit, shoot electrons through it, and marvel at the probability spread on the other side. But the moment they begin observing what’s going on, we’d need to collapse that wave function…

    1. markcc Post author

      I don’t think that saves it, for two reasons.

      One: I think that the information currently available to us strongly suggests that intelligent self-awareness is a product of sub-molecular interactions. Within our cells, there are billions of things happening that would require a very high level of precision to simulate correctly; if those things weren’t happening (or being simulated precisely), then life – much less intelligence – wouldn’t work. Try looking at how things like protein manufacture happen within a cell – it’s fascinating!

      Two: To run the simulation with multiple granularities, where you run coarse-grained unless you need fine-grained results, you’d need to
      (a) flesh out the simulation to maximum detail when one of your simulated entities does something that would be affected by the lack of precision; and
      (b) detect when one of your simulated entities or some instrument that they have designed would be affected by the lack of precision.
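
      In hypothetical pseudocode-ish Python, that two-granularity scheme would look something like this (all names are invented for illustration):

```python
# Sketch of a coarse-by-default simulation that refines a region only
# when a simulated observation would expose the missing precision.
# Hypothetical structure; nothing here is a real simulation engine.

class LODSimulation:
    def __init__(self):
        self.fine_regions = set()  # regions currently in full detail

    def observe(self, region):
        # (b) detect that a simulated entity or instrument is probing
        #     this region in a way the coarse model can't answer...
        if region not in self.fine_regions:
            # (a) ...and flesh it out to maximum detail before answering.
            self.refine(region)
        return f"fine-grained state of {region}"

    def refine(self, region):
        # The expensive part: you have to reconstruct a *consistent*
        # fine-grained history for a region that has only ever been
        # simulated coarsely up to this moment.
        self.fine_regions.add(region)

sim = LODSimulation()
sim.observe("cell-42")
print("cell-42" in sim.fine_regions)  # -> True
```

      Note that step (b) amounts to already knowing which details matter to life and intelligence – which is exactly the knowledge the simulation was supposed to produce.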

    1. markcc Post author

      Well, my friends in college didn’t do pot. They just drank. Personally, I never really did that, either. For a guy with social anxiety, losing control of your behavior through drinking isn’t a good thing.

  5. Matthew Spencer

    “Paul uncovered his eyes, and looked around the room. Away from a few dazzling patches of direct sunshine, everything glowed softly in the diffuse light: the matte white brick walls, the imitation (imitation) mahogany furniture; even the posters – Bosch, Dali, Ernst, and Giger – looked harmless, domesticated. Wherever he turned his gaze (if nowhere else), the simulation was utterly convincing; the spotlight of his attention made it so. Hypothetical light rays were being traced backward from individual rod and cone cells on his simulated retinas, and projected out into the virtual environment to determine exactly what needed to be computed: a lot of detail near the center of his vision, much less toward the periphery.

    Objects out of sight didn’t ‘vanish’ entirely, if they influenced the ambient light, but Paul knew that the calculations would rarely be pursued beyond the crudest first-order approximations: Bosch’s Garden of Earthly Delights reduced to an average reflectance value, a single gray rectangle – because once his back was turned, any more detail would have been wasted. Everything in the room was as finely resolved, at any given moment, as it needed to be to fool him – no more, no less.

    He had been aware of the technique for decades. It was something else to experience it. He resisted the urge to wheel around suddenly, in a futile attempt to catch the process out – but for a moment it was almost unbearable, just knowing what was happening at the edge of his vision. The fact that his view of the room remained flawless only made it worse, an irrefutable paranoid fixation: No matter how fast you turn your head, you’ll never even catch a glimpse of what’s going on all around you…”

    Greg Egan, Permutation City

    What about limiting the spatial resolution to only what is being observed? For instance, the microscopic world doesn’t get simulated unless someone is looking through a microscope, so fine detail is only needed where first-order approximations and the macro-effects of that microscopic space wouldn’t do. It’s like graphics cards that efficiently draw a scene from a virtual camera view: they don’t render everything that was programmed into the shot, only what is visible to the camera. Surely this and other as-yet-unforeseen algorithmic optimizations and lossless compression in the future could make a pretty convincing simulated universe with minimal resources – something the size of a Dyson sphere to simulate another convincing reality.

    1. markcc Post author

      The catch is that the scenario you describe is an existing, intelligent, self-aware mind whose functioning is outside of the simulation, but whose perceptions are being fed from inside the simulation.

      In a system like that, you can rely on figuring out where the perception is focused, and providing enough detail there to fool the viewer.

      But in the kind of thing that Musk is describing, the intelligences exist entirely inside the simulation. So the simulation has to be good enough to produce self-aware intelligences. And from what we can understand, that means that for each mind, you need to perform orders of magnitude more computation just to operate the mind than you would to produce high-fidelity perceptions to that mind.

  6. טשעבוראַשקע (@tsukertokhes)

    Wouldn’t such a simulation rely on randomness (entropy) as well? That makes the simulation non-computable by its very nature. I guess you could use randomness from a source in the real universe, but it seems like that would degrade once you went down a few layers.

    1. markcc Post author

      Well, you’d need to somehow simulate entropy in your universe.

      But entropy isn’t really “randomness”. As long as you had some way of choosing probabilistic outcomes that wasn’t predictable for the entities within your simulated universe, I think it would work.

      The catch, of course, being the fact that doing that significantly adds to the complexity of the simulation.

  7. thenewphalls

    All other arguments aside, this idea of the universe being a simulation feels a lot like the “God of the gaps” reincarnated as some kind of new-age techno-philosophical babble.

    As no one else has mentioned it, I’ll point out that this idea was popularized by Nick Bostrom in the early 2000s – and I believe even he took it (or at least inspiration for it) from an earlier source. On the back of Tesla and such, Elon Musk just seems poised to become the Stephen Hawking (i.e. the “this guy is saying something so we should definitely listen because everything he says is so deep and profound and 100% correct” darling) of the tech world.

  8. Joker_vD

    Yeah, not a very convincing argument. After all, the actual universe may as well be fully continuous, and our quantum nature just a simulation artifact. There are at least two sci-fi short stories with exactly that premise: the “timespace foam” is discretization error, the lambda term and the DE are hacks to prevent the universe from collapsing, etc.

    1. markcc Post author

      If that’s the case, then it’s really a pretty shitty simulation. Because we’ve already been able to observe quantum level effects involved in our basic metabolism. Which means that as an ancestor simulation, we’re hopelessly compromised.

      And it does nothing to address the fundamental size problem.

  9. Bim

    All I know is that these sub-universes are going to have to be a little better than John Conway’s and John von Neumann’s cellular automata. These things are _designed_ to be run on digital hardware (and are super interesting), but are still no good for simulating universes with people.

    Better be on the lookout though – any John could be the inventor of a new and improved next-gen universe simulator.
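
    For comparison, the whole of Conway’s Life fits in a few lines. A minimal sketch in Python, where the “universe” is just a set of live (x, y) cells and one rule drives everything:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life over a set of live (x, y) cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation with exactly 3 live neighbors,
    # or with 2 if it was already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # -> True
```

    That brevity is the point: a rule set this simple runs beautifully on digital hardware, and produces nothing remotely like chemistry, let alone people.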

  10. Chris

    Wouldn’t they only need to simulate the part of the universe they need to know about? Since the speed of light is finite, they only need to simulate a finite volume. Given that they are arbitrarily far into the future, they could have claimed a much bigger area.

    1. markcc Post author

      Yes, they could.

      But there’s still a pretty massive scalability issue. Even if you only wanted to simulate a single solar system at high fidelity: you’d need something on the order of a computing system 1/10th the size of a solar system just to do the solar system at a reasonable speed. And that doesn’t consider all of the outside influences that affect that solar system.

      Pretty much no matter what assumptions you make, you wind up with a scenario where running this simulation isn’t a lightweight thing that you can do many times. Running an accurate enough simulation to capture intelligent life requires massive resources: massive enough that the argument about how there’ll be thousands of simulations to every one “natural” universe doesn’t add up.

