Wednesday, May 21, 2008

Life in a Lazy Universe

I've seen lots of articles on simulated reality recently. Supposing for a bit that this is a sensible hypothesis to entertain, and not just a science-fictional conceit, what might be the consequences if such a 'reality' were running on a machine with lazy evaluation?

If your task were to simulate a universe well enough for its inhabitants to be convinced it was seamless, there is an obvious optimisation that could be made. You could simulate just those parts that can be perceived by the inhabitants. But the catch is that if an inhabitant were to explore a new region, the simulation would be required to fill in that region. Just creating a blank new area wouldn't do: its current state would need to be consistent with having had a plausible history, and so the simulation would be required to fill in not just its current state, but also its past. This is precisely what is provided by lazy evaluation - demand-driven evaluation that takes place only when results are observed. It seems natural that such a simulation should make use of laziness.
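
To make this concrete, here is a minimal Haskell sketch (every name in it is invented for illustration): an infinite 'universe' of regions, each with an infinite history, where nothing at all is computed until an observer demands it.

    -- A toy, deterministic 'physics': the state of region n at time t is a pure
    -- function of (n, t), so it can be filled in the first time it is demanded.
    data Region = Region { history :: [Int] }   -- hypothetical state type

    regionAt :: Integer -> Region
    regionAt n = Region [ fromIntegral ((n * 37 + toInteger t * 17) `mod` 100)
                        | t <- [0 :: Int ..] ]

    -- The whole universe is an infinite lazy list of regions. At this point it
    -- is nothing but unevaluated thunks.
    universe :: [Region]
    universe = map regionAt [0 ..]

    -- 'Observing' region 1000000 at time 42 forces only the thunks needed to
    -- answer that question, and nothing else.
    main :: IO ()
    main = print (history (universe !! 1000000) !! 42)

Of course, in this sketch the past is recomputable on demand only because the rule is a pure function of position and time; a rule with interactions between regions would have to propagate demands to neighbouring regions, which is exactly the issue several of the comments below take up.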

But such optimisation doesn't need to be limited to lazily computing 'new regions', where by 'region' we mean a spatially localised volume. We could also imagine implementing level of detail. If we don't look closely at an object in our grasp, there's no need to compute every single detail of it. We need only provide detail at the resolution at which it can be perceived. We'd like this built into every aspect of the simulation, so that anything within it is computed only to a desired accuracy. I've spoken a few times about such a data structure in posts like this. This is precisely what the real numbers give us. Computationally, a real number is an object which, when given a degree of accuracy, returns a rational number (or some other finitely manipulable structure) that represents the real to this accuracy. The Wikipedia article suggests that 'the use of continua in physics constitutes a possible argument against the simulation of a physical universe'. This is diametrically opposed to what I might argue: in fact the presence of continua suggests the possibility of an efficient demand-driven simulation with level of detail. The problem is that the people who have been thinking about these issues have been thinking in terms of traditional imperative-style programming, where a real number is typically represented to some fixed finite precision. But in fact there are infinitely many real numbers that can be represented exactly if we think in terms of lazy data structures.
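
As a rough sketch of what such an exact real might look like (only an illustration, with made-up names): a value that, handed a requested accuracy n, returns a rational within 2^-n of the number it represents, doing no more work than the request demands.

    -- A lazy, exact real: ask it for accuracy n and it returns a rational
    -- within 2^-n of the number it represents.
    newtype CReal = CReal { approx :: Int -> Rational }

    -- An exact representation of the square root of 2, answered by bisection.
    -- No digits exist until somebody asks for them.
    sqrt2 :: CReal
    sqrt2 = CReal (\n -> bisect n 1 2)
      where
        bisect :: Int -> Rational -> Rational -> Rational
        bisect k lo hi
          | k <= 0         = (lo + hi) / 2
          | mid * mid <= 2 = bisect (k - 1) mid hi
          | otherwise      = bisect (k - 1) lo mid
          where mid = (lo + hi) / 2

    main :: IO ()
    main = print (approx sqrt2 20)   -- a rational within 2^-20 of sqrt 2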

But this is all idle speculation if we don't make predictions. So here is one. In a computer-simulated universe, all physics must obviously be computable. But computable functions on the reals are continuous. So in a simulated universe we'd expect the physical laws involving real numbers to make use of them only in a continuous way. Sound familiar?


The next section is an aside and I need to give a spoiler alert as I'll be mentioning some aspects of Greg Egan's Permutation City.

What exactly is meant by 'demand-driven' above? If we have implemented a simulated universe, I expect we would be interested in looking in on it from time to time. So this is what is usually meant by 'demand'. Whenever the program performs I/O to show us what is going on, it would trigger the evaluation of the lazy thunks that had been sitting around. But consider how the universe might appear to its inhabitants. Whether or not we look in on it is surely irrelevant to the inhabitants, assuming we just look and don't try to communicate. And if no I/O is performed, functional programmers don't make much of a distinction between an unevaluated thunk and an evaluated one: they are observationally equivalent. So do we in fact need to run the simulation at all for its inhabitants to consider themselves to exist? Anyway, I won't say anything more about this because Permutation City deals with this issue at length. I'm just rephrasing it as a lazy evaluation issue.
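
As a tiny illustration of that last point (the names here are just stand-ins, not anything from the post): in the program below nothing ever demands the expensive value, so it remains an unevaluated thunk, and nothing the program can observe distinguishes that from having evaluated it.

    -- An expensive pure value standing in for 'running the universe'.
    expensiveSimulation :: Integer
    expensiveSimulation = sum [1 .. 10 ^ (9 :: Int)]

    main :: IO ()
    main = do
      let _universe = expensiveSimulation   -- bound, but never forced
      putStrLn "No I/O depends on the simulation, so it is never evaluated."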

My next post will be on life in a monadic universe.


Update: I'm going to add some detail to the above. Suppose we represent real numbers as infinite sequences of digits. Suppose also that we have some module in a simulation that, let's say, resolves collisions between particles. You might ask it "what is the velocity, to 20 digits, of particle A after the collision?". It must then look at all of the relevant inputs, decide how accurately it needs them, and then propagate new requests. For example it might work like this: "if I need the velocity of A to 20 digits I need the mass of particle B to 25 digits and the velocity of particle C to 26 digits..." and so on.

The idea of demand-driven modules that produce outputs to a given accuracy by making requests of their inputs to a given accuracy exactly parallels the subject of analysis in mathematics. There you ask questions like "if I need to know f(x,y) with an error of no more than ε, what δ of error can I tolerate in x and y?". Functions that allow this kind of analysis are precisely the continuous ones. In other words, a simulated universe that is built from continuous functions lends itself to a demand-driven implementation. One might argue that the whole fields of analysis (and topology) are the study of functions that can be evaluated in this way. And of course, in our universe, the functions we study in physics are typically continuous. So contrary to the claim that the existence of continua argues against the idea that the universe is a simulation, I'd like to point out that they might actually make it more convenient to simulate our universe. We use this fact all the time in ordinary everyday physics simulations, because we know we only need to work to a certain precision to get results that are accurate enough.
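
To sketch what propagating these requests might look like in code (again just an illustration, reusing the hypothetical representation of a real as a function from requested accuracy to a rational approximation): to deliver a sum to within 2^-n it is enough to ask each input for accuracy 2^-(n+1), since the two errors then total at most 2^-n. This is the ε-δ bookkeeping of a continuous function, written as a program.

    -- A real, represented as a function from requested accuracy n to a
    -- rational within 2^-n of the number.
    newtype CReal = CReal { approx :: Int -> Rational }

    -- Embed an exact rational: every request is answered exactly.
    fromQ :: Rational -> CReal
    fromQ q = CReal (const q)

    -- Addition propagates the demand: an accuracy-n request on the output
    -- becomes accuracy-(n+1) requests on each of the inputs.
    addR :: CReal -> CReal -> CReal
    addR (CReal x) (CReal y) = CReal (\n -> x (n + 1) + y (n + 1))

    main :: IO ()
    main = print (approx (addR (fromQ (1 / 3)) (fromQ (2 / 3))) 10)   -- prints 1 % 1

A function for which no such rule exists - where no finite accuracy of the inputs ever pins the output down to the requested accuracy - is precisely a discontinuous one, and it couldn't be implemented in this demand-driven style.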

The claim that this universe can be simulated, and the claim that this universe is a simulation, are philosophical claims that I don't like to waste time on. But the claim that the existence of continua blocks the notion that the universe is a simulation is a fairly precise statement of logic, mathematics and physics. I claim it is wrong, and that's the only claim I'm making here.

32 comments:

  1. Familiar? Not really. On the contrary, I imagine quantum mechanics close-up as a soup of erratic particles which jump non-continuously and randomly all the time. But perhaps you are referring to an earlier post which dispelled views like mine?

    I can't wait for your next post! I've blogged about life in a monadic universe too, but I'm sure you'll do it better, as always.

  2. Despite all the talk in pop science articles about random particle soups, modern physics takes place, by and large, in nice vector spaces built from real numbers (via the complex numbers). But I'll be saying something more about this next post. And I'll have to read what you wrote.

  3. Perhaps the universe IS lazy and Heisenberg found the limit of the computational resources.

  4. Didn't get the "sound familiar?".
    Could you elaborate?

  5. Modern physics seems to take place on the ugliest of vector spaces... Banach algebras.

  6. Speaking of which, this sounds a lot like our universe is somehow being portal-rendered, in the same way graphics are rendered in classic 3D engines such as Doom or Quake.

    In any case, it's an interesting post, and I'm looking forward to reading more posts.

  7. "But in fact there are infinitely many real numbers that can be represented exactly if we think in terms of lazy data structures."

    That doesn't say much. You can always represent all integers with a very simple data structure.
    And by the way, there will always be 2^{aleph_0} real numbers that you cannot represent with _any_ (finite) data structure.

  8. Lazy evaluation of unobserved phenomena? You mean, the content of the box isn't evaluated one way or the other until we look inside it? What does the cat think?

  9. I don't believe it's possible to simulate a world lazily. The problem is all the "interwoven" causality. You mention the example of the room that does not need to be simulated when no one is there. However, when someone starts to observe the room, in order to compute its present state, we would also need to compute its past states.

    But how do we compute its past states? Obviously, we would also have to account for the rooms that are next to our simulated room. Maybe some wind came in through that other door five minutes ago and blew away the newspaper from the table.

    So, let's say: In order to compute the state of a room, we also need to evaluate the state of its adjacent rooms. The problem with this is now that it is recursive. In order to compute the adjacent rooms, we also need to compute THEIR adjacent rooms, and so on. This means that You have to compute the whole house when someone observes just one room in it. And the same is true not only for houses, but also for the whole universe.

    Of course, one could make use of the fact that nothing can be faster than light. So if I want to "catch up" with the last 5 hours of a room, I only need to compute the last 5 hours of the universe in a ball around it whose radius is the distance that light can travel within 5 hours. But when You take one such point that is 5 light hours away from our focus, in order to compute its last 5 hours, You need to compute the last 5 hours of everything in a radius of 5 light hours around THAT point. It's basically the same as with the room and its doors.

    There is absolutely no way that we can save processing time here. If the "lazy universe" is to appear consistent to its inhabitants, we must compute every detail that might make it inconsistent if we left it out. And that is, as You might have guessed, EVERY detail in the whole universe. I'm sorry about that, but that is the major difference between regular computer programs and the universe "program".

  10. I like your post, but there is little doubt the universe is NOT built on the reals. Everything from quantum mechanics to Planck's constant to the speed of light argues against it. We just don't see the granularity. Yet.

  11. I am sorry to object again, but I believe that simulating some kind of interacting probabilities is much harder than simulating solid particles, not easier. I hope You can explain to me how I'm wrong.

  12. madoc's argument is true for a dense universe or a universe where you can't make accurate level-of-detail approximations.

    Take, for example, electric fields. Between galaxies there is almost nothing. Collisions and electromagnetic forces contribute almost nothing to what happens there. When calculating a particle's position in the void, you can ignore the electric field contributions from far-off particles and be correct to many decimal places.

    Next look at gravity. A galaxy will generally move according to a certain balance. It doesn't matter which stars collide with each other; the galaxy's centre of gravity is still the same. From a far-off galaxy the contribution of the specific arrangement of stars is very small.

    In general, then, when you move across these voids the level of precision you need to calculate falls a lot.

  13. Madoc,

    I disagree with you. Simulations, by definition, have bounds around the accuracy of what they try to simulate. Those bounds are often defined by the programmers, who determine the most critical aspects to be evaluated within the simulation.

    For example, if you simulate two ants fighting, you wouldn't need to simulate each and every atom and quark within the ant. Keeping the bounds of accuracy at some reasonable level, much higher than atoms and quarks, is sufficient because no significant gains result from increasing the accuracy by orders of magnitude.

    Likewise, if the primary goal of a "universe simulator" is to evaluate how its inhabitants react within the universe, it is not necessary to represent the entire universe down to quarks that are light years away. The behavior of particles that are not within the immediate domain of the inhabitants can be accumulated and generalized into aggregate "macro behavior."

    Also, another important aspect to consider is that simulations do not have to be executed in an entirely sequential fashion. For example, if Joe walks into a room that hasn't been calculated before, the simulation can say "Okay, in order to provide Joe with a sensible representation of the current state of this room, rewind the simulation to time minus 200 years, and add the room characteristics to the simulation's current scope." Thus, when replayed, the room would then have a proper historical context with which Joe can evaluate the room. And if the room had dependencies on other things (such as the room next to it), the simulation could macro-generalize by providing less level of detail the further the simulation wanders away from the room.

  14. McCoyn,

    thanks for that convincing argument. I am sure that You are right about that. This would imply that, if we are part of a simulation, that simulation could skip computations for other galaxies.

    But as far as stellar objects in our own galaxy are concerned... I think stating that one could simulate them in an imprecise way would be mere speculation. Perhaps someone who knows more about that than me could prove or disprove this in a more scientific manner. But then again, maybe someone already did this. If anyone knows about such a paper, please give us the link!

    Also, I object to the assertion brought up here that the uncertainty principle and quantum physics show that our universe does lazy computation. Maybe we will discuss this point also. But anyway, thanks for Your response!

  15. Dejay,

    as for the two ants fighting, I agree with You that we do not need to simulate the particles out of which the ants consist. However, in such a simulation, it would also be downright impossible to add an electron microscope that shows us simulated microscopic structures of the ants, simply because that level of detail is not present inside the simulation. This is different from our present universe, where You can put everything under the microscope and see all the small structures out of which it consists.

    I agree with McCoyn, who wrote his comment just before Yours, that distant galaxies might be optimized in a simulation by simulating them with some orders of magnitude less detail. But if anyone were ever to travel into another galaxy, the simulator would need to "track back", as You described, and simulate that whole galaxy from the beginning of simulation time, up to the present moment.

    But then, this would require the simulator to store past states of the simulation, because they can never be finalized, closed and forgotten. It would not need to store every single past state, but at least make a "snapshot" of the whole simulation every now and then.

    Then, when we arrive close to a distant galaxy with our space ship, the simulator would revert to a past state, probably some million simulation years ago, add the galaxy to the detailed simulations, and simulate those millions of years of the thus-augmented universe. This is how I interpret Your suggestion, and I strongly agree that this sounds reasonable.

    However, that is much more complicated than what I have personally read about the simulation hypothesis. I think that this backtracking algorithm might have disastrous impacts on the simulator performance. But of course, I see that we have not yet been in any other galaxy. So, if our simulator was able to simulate our galaxy in modest detail, and our solar system and all of its regular visitors with great detail, that would probably suffice.

    When I wrote my objection, I thought about something else: leaving out atomic and sub-atomic detail when "nobody watches". That means, all the atoms and particles around me right now do not get simulated until I actually try to observe them in detail. When I am outside, only the effects of a moderate wind get simulated, but not every single molecule in the air. This is the idea that I just cannot agree to. And this is what I really had in mind when writing about the room and its adjacent rooms. I confess that I did not make this clear, sorry about that.

  16. This is my first time on this blog, and I'm intrigued by the ideas. One of the things I wanted to compliment everyone on is the discussion. The tone is amiable, and it looks like people can share their ideas and if it doesn't come out exactly right the first time, people don't jump down each other's throat. It's very cool to see people sharing ideas and being charitable with one another. Count me in as an RSS subscriber. :)

  17. This is a fascinating subject, and I wish I could add to the discussion of whether lazy evaluation of an element in a simulated universe would necessarily be dependent on a nearly infinite unfolding of thunks, as madoc suggests.

    The one point I would like to make is that, setting aside the physical limits and our theoretical understanding of computers, the issue of "simulator performance" becomes a rather tricky concept if we consider that we ourselves are simulated minds in a simulated universe.

    To illustrate the point: if I am running a physics simulation of a bouncing ball on a modern PC and simultaneously on a Commodore 64, the modern machine will finish the simulation much faster than the old one with the slower processor... but does the ball notice?

    That's the only coherent idea I have for now, but I would like to know whether people have been thinking of this problem in terms of a "Matrix"-style universe of plugged-in observers, or one of simulated observers.

  18. For anyone interested in science fiction in this area, James P. Hogan's "Realtime Interrupt" is a fun read that shows at least some sophistication from a technical perspective (e.g., issues like the ones sigfpe raises, also re leaving a back door in the simulation). Hogan's book is of course only one among many works with this theme. On the less (?) fictional side, Nick Bostrom is an Oxford philosopher who has tried to argue about the probabilities that we actually live in such a simulation (see his paper "Are You Living in a Computer Simulation?").

  19. I think it's certainly possible. I think the most interesting question is, can we "break" the simulation by looking at enough of the right places at the right times, so that the simulation has to perform too many computations at once? From my understanding, it is hard to make a lazy system that also meets rigorous real-time requirements.

  20. But anonymous, there are no real-time requirements. However slow the simulation is, no inhabitants will notice.

  21. I've been thinking about lazy simulations a fair bit as well.

    One important point is that, assuming the simulated inhabitants are deterministic, you only need to inspect their limited views and perceptions for inconsistencies in the detail of the simulation, not the entire simulation.

    If none of the inhabitants finds an inconsistency with the simulation, then there is no need to fix anything.

    Now if one of the inhabitants does find an inconsistency you could do two things to fix it:

    1) Go back to the root cause of the error and rerun the simulation from there, for the sphere of influence it would affect.

    2) Change the simulated individuals' memories/perceptions, along with existing simulation state, to clean up the inconsistency.

    So the simulation would probably be creating inconsistencies all over the place, and then fixing them as people notice them. Using that method you could use very basic models for most things that would run quite fast.

    People don't tend to take much notice of the world around them except for specific things they are focused on.

  22. sigfpe,

    there are no real-time requirements for the inhabitants of the simulation. But if the creators of the simulation in their world notice that the speed of the simulator goes down considerably, they might be forced to react in some way - hopefully not by stopping and restarting the whole thing, but instead by looking at who was smart enough to "break the rules", and doing something nice to him.

  23. So can you do time-travel with call/cc?

  24. Even a simulated universe resides within a "real universe". Somewhere down the turtle tree we hit pay dirt. Right?

  25. trb, that's a really wonderful idea. In such a simulated universe all the laws of physics perhaps could be products of the simulated intelligences that explore it. Something like a universe-creating genetic algorithm.

    On a different note, someone over at reddit (where I was directed from) made an interesting point in regard to madoc's comment: the fact that the speed of light is the theoretical limit of speed in our universe means that there is a rather computationally convenient built-in limit to how fast changes can propagate.

  26. Thunks are observationally equivalent from the viewpoint of the programmer. I think that this is an important distinction; although there's no particular explanation of consciousness in this universe, and no real hope of finding one, leaving thunks unevaluated could still have an effect on it.

    If nobody ever calls main, a Haskell program does nothing; if the focus of the simulation moves out beyond the horizon, and never moves back, you can't expect anything to be evaluated again on Earth, ever; it would be like time stopped, for those of us who are NPCs.

  27. You also have to consider whether the universe under simulation has sensitive dependence on initial conditions (SDOIC). The reason I bring this up is that it affects how errors accumulate in any simulation.

    Say, for example, you begin simulating a room (as one of the posters mentioned); to what level of detail do you need to simulate the surrounding rooms' past? If the universe has SDOIC, then lazy evaluation doesn't buy you anything, because as you work back from an initial starting point to those conditions some time in the past, you inevitably have to eliminate those states that cannot have resulted in the currently observed state. Computationally, this would mean checking each node in a tree where the degree of each node is proportional to the degree of precision the model has. Unless one is very clever, calculating past states might involve evaluating all nodes in the tree. From this, I can't really believe that lazy evaluation would buy you anything computationally, assuming the model has SDOIC.

    It would be much simpler, assuming SDOIC, to keep all state around and step the simulation forward in time, recalculating all interactions for all observers.

  28. This all assumes that "time" is real: simulated time as a function of simulator time. What if time is part of the simulation, and the simulator is outside of time?

  29. trb, you are assuming that people are the point of the simulation and that we are the key observers. We could be just a helper function going through garbage collection or something. There is no reason to make changes to lazy state for our sake specifically, unless our poking around is significantly and needlessly slowing the execution. Maybe we are supposed to poke around and collect error states or something.

    Second - if you introduce "quickfixes" in the tree, you are eliminating the possibility of lazy evaluation beyond this value - things would just not add up. Maybe limited lookback is cool; it really depends on the purpose of the implementation.

  30. Suggesting that simulated reality is a "science fictional conceit" seems rather small-minded...

  31. The term "conceit" isn't a value judgment. It's just a word that means a novel idea, often used to mean an idea on which a whole novel is based, something often found in science fiction.

  32. It seems that all information must simultaneously exist in an infinite array open for every and any permutation in order for this to work. That lends credence to Buddhist field theory.
