Life in a Lazy Universe
I've seen lots of articles on simulated reality recently. Suppose for a moment that this is a sensible hypothesis to entertain and not just a science-fictional conceit: what might the consequences be if such a 'reality' were running on a machine with lazy evaluation?
If your task were to simulate a universe well enough for its inhabitants to be convinced it was seamless, then there is an obvious optimisation that could be made: you could simulate just those parts that can be perceived by the inhabitants. But the catch is that if an inhabitant were to explore a new region, the simulation would be required to fill in that region. Just creating a blank new area wouldn't do; its current state would need to be consistent with having had a plausible history, and so the simulation would be required to fill in not just its current state but also its past. This is precisely what is provided by lazy evaluation - demand driven evaluation that takes place only when results are observed. It seems natural that such a simulation should make use of laziness.
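To make that a little more concrete, here is a minimal Haskell sketch (all of the names are invented for illustration) of an infinite world whose regions, each with a deterministic 'plausible past', are only computed when somebody looks at them. Laziness gives us this behaviour for free: building the universe value costs nothing, and only the region we actually inspect ever gets evaluated.

```haskell
-- A toy lazily evaluated world: an infinite grid of regions, each with a
-- (cheaply faked) history consistent with its present state. Nothing is
-- computed until an observer demands it.

data Region = Region { history :: [String], present :: String }
  deriving Show

-- The region at (x, y); in a lazy language this runs only when forced.
regionAt :: Integer -> Integer -> Region
regionAt x y = Region
  { history = [ "epoch " ++ show t ++ ": state " ++ show (seed + t) | t <- [0 .. 3] ]
  , present = "state " ++ show (seed + 4)
  }
  where
    seed = 31 * x + 17 * y   -- toy deterministic 'plausible past'

-- The whole (infinite) universe as a lazy structure; constructing it is free.
universe :: [[Region]]
universe = [ [ regionAt x y | y <- [0 ..] ] | x <- [0 ..] ]

-- Only the single region an inhabitant 'explores' is ever evaluated.
main :: IO ()
main = print (universe !! 2 !! 5)
```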
But such optimisation doesn't need to be limited to lazily computing 'new regions', where by 'region' we mean a spatially localised volume. We could also imagine implementing level of detail. If we don't look closely at an object in our grasp, there's no need to compute every single detail of it. We need only provide detail at the resolution at which it can be perceived. We'd like this built into every aspect of the simulation so that anything within it is computed only to a desired accuracy. I've spoken a few times about such a data structure in posts like this. This is precisely what the real numbers give us. Computationally, a real number is an object which, when given a degree of accuracy, returns a rational number (or some other finitely manipulable structure) that represents the real to this accuracy. The Wikipedia article suggests that 'the use of continua in physics constitutes a possible argument against the simulation of a physical universe'. This is diametrically opposed to what I might argue: in fact the presence of continua suggests the possibility of an efficient demand driven simulation with level of detail. The problem is that the people who have been thinking about these issues have been thinking in terms of traditional imperative style programming, where a real number is typically represented to some fixed finite precision. But in fact there are infinitely many real numbers that can be represented exactly if we think in terms of lazy data structures.
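Here is a small Haskell sketch of that idea (the type and function names are my own, not any standard library's): a real number is represented as a function from a requested accuracy to a rational approximation, so a number like √2 is represented exactly even though only finitely many digits are ever computed.

```haskell
import Data.Ratio ((%))

-- A lazily evaluated real: ask it for accuracy 2^-n and it returns a
-- rational within 2^-n of the number it represents.
newtype CReal = CReal { approx :: Int -> Rational }

-- sqrt 2 represented exactly: floor(sqrt(2 * 4^n)) / 2^n is within 2^-n of sqrt 2.
sqrt2 :: CReal
sqrt2 = CReal $ \n -> isqrt (2 * 4 ^ n) % (2 ^ n)

-- Integer square root (floor) by Newton's method on Integers.
isqrt :: Integer -> Integer
isqrt m = go m
  where
    go x | x * x <= m = x
         | otherwise  = go ((x + m `div` x) `div` 2)

main :: IO ()
main = do
  print (approx sqrt2 10)                           -- a rational within 2^-10 of sqrt 2
  print (fromRational (approx sqrt2 50) :: Double)  -- approximately 1.4142135...
```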
But this is all idle speculation if we don't make predictions. So here is one. In a computer simulated universe, all physics must obviously be computable. But all computable functions are continuous. So in a simulated universe we'd expect the physical laws involving real numbers to make use of them only in a continuous way. Sound familiar?
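Reusing the CReal sketch above, here is roughly what that looks like in code. A continuous operation such as addition translates a demand for accuracy in its output into demands for accuracy in its inputs. A discontinuous operation such as testing whether a real is exactly zero cannot be implemented this way: no finite-accuracy approximation of a number that happens to be zero can ever settle the question, so the obvious search below fails to terminate on zero. (This is an illustrative sketch, not a usable equality test.)

```haskell
-- Addition is continuous: to deliver x + y within 2^-n it is enough to ask
-- each argument for accuracy 2^-(n+1).
addR :: CReal -> CReal -> CReal
addR x y = CReal $ \n -> approx x (n + 1) + approx y (n + 1)

-- A discontinuous 'function' like equality with zero has no such
-- implementation: if x really is zero, every approximation is consistent
-- with both answers and this loop runs forever.
isZero :: CReal -> Bool
isZero x = go 0
  where
    go n | abs (approx x n) > 2 ^^ negate n = False      -- provably nonzero
         | otherwise                        = go (n + 1)  -- keep asking for more accuracy
```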
The next section is an aside and I need to give a spoiler alert as I'll be mentioning some aspects of Greg Egan's Permutation City.
What exactly is meant by 'demand driven' above? If we have implemented a simulated universe, I expect we would be interested in looking in on it from time to time. So this is what is usually meant by 'demand': whenever the program performs I/O to show us what is going on, it would trigger the evaluation of the lazy thunks that had been sitting around. But consider how the universe might appear to its inhabitants. Whether or not we look in on it is surely irrelevant to the inhabitants, assuming we just look and don't try to communicate. And as long as no I/O is performed, functional programmers don't make much of a distinction between an unevaluated thunk and an evaluated one: they are observationally equivalent. So do we in fact need to run the simulation for its inhabitants to consider themselves to exist? Anyway, I won't say anything more about this because Permutation City deals with this issue at length. I'm just rephrasing it as a lazy evaluation issue.
My next post will be on life in a monadic universe.
Update: I'm going to add some detail to the above. Suppose we represent real numbers as infinite sequences of digits. Suppose also that we have some module in a simulation that, let's say, resolves collisions between particles. You might ask it "what is the velocity, to 20 digits, of particle A after the collision?". It must then look at all of the relevant inputs, decide how accurately it needs them, and then propagate new requests. For example it might work like this: "if I need the velocity of A to 20 digits, I need the mass of particle B to 25 digits and the velocity of particle C to 26 digits..." and so on. The idea of demand driven modules that produce outputs to a given accuracy by making requests of their inputs to a given accuracy exactly parallels the subject of analysis in mathematics. There you ask questions like "if I need to know f(x,y) with an error of no more than δ, what ε of error can I tolerate in x and y?". Functions that allow this kind of analysis are precisely the continuous ones. In other words, a simulated universe that is built from continuous functions lends itself to a demand driven implementation. One might argue that the whole of analysis (and topology) is the study of functions that can be evaluated in this way. And of course, in our universe, the functions we study in physics are typically continuous. So contrary to the claim that the existence of continua argues against the idea that the universe is a simulation, I'd like to point out that continua might actually make it more convenient to simulate our universe. Indeed, we rely on this all the time in ordinary everyday physics simulations, because we know we only need to work to a certain level of accuracy to get accurate results.
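Here is a toy version of such a module, again using the CReal sketch from earlier (everything is illustrative, and the error analysis is deliberately crude): asked for the total momentum of two colliding particles to within 2^-n, it decides how accurate its inputs need to be and propagates those requests. I've assumed we're handed a bound on the magnitudes of the masses and velocities; a more careful implementation would first query its inputs coarsely to establish one.

```haskell
-- A demand driven 'physics module': total momentum m1*v1 + m2*v2, computed
-- to a requested accuracy by asking each input for just enough accuracy.
momentum :: Rational           -- bound: all |masses| and |velocities| <= bound
         -> (CReal, CReal)     -- particle 1: (mass, velocity)
         -> (CReal, CReal)     -- particle 2: (mass, velocity)
         -> CReal              -- total momentum
momentum bound (m1, v1) (m2, v2) = CReal $ \n ->
  let -- Each of the two products may contribute half the allowed error 2^-n,
      -- so its factors are requested accurately enough for an error of 2^-(n+1):
      -- with input error e <= 1, the product's error is at most (2*bound + 1) * e.
      k = n + 1 + bitsFor (2 * bound + 1)
      prod m v = approx m k * approx v k
  in  prod m1 v1 + prod m2 v2

-- Smallest b with 2^b >= q, for a positive rational q.
bitsFor :: Rational -> Int
bitsFor q = head [ b | b <- [0 ..], 2 ^ b >= q ]
```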
The claim that this universe can be simulated, and the claim that this universe is a simulation, are philosophical claims that I don't like to waste time on. But the claim that the existence of continua blocks the notion that the universe is a simulation is a fairly precise statement of logic, mathematics and physics. I claim it is wrong, and that's the only claim I'm making here.