"What does it take to be a hero?" revisited
I was previously interested to see how die rolls in an RPG appear when conditioned on your having survived an unlikely situation. As might have been predicted, if the die rolls contribute to that survival in a largely additive way, for example as damage scored against a large opponent, then the posterior distribution of the rolls looks exponentially tilted.
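The brute-force conditioning can be sketched in a few lines (my own illustration, not the original experiment's code): roll a fixed number of d6s, keep only the trials where the total clears an unlikely threshold, and compare the per-die posterior with the uniform prior.

```python
import random

# Monte Carlo sketch of conditioning on survival (parameters are illustrative):
# roll 20d6 of damage, accept only trials whose total reaches an unlikely
# threshold, then look at how the surviving d6 outcomes are distributed.
random.seed(0)
ROLLS, THRESHOLD, TRIALS = 20, 85, 100_000
counts = [0] * 6
accepted = 0
for _ in range(TRIALS):
    rolls = [random.randint(1, 6) for _ in range(ROLLS)]
    if sum(rolls) >= THRESHOLD:          # "you survived"
        accepted += 1
        for r in rolls:
            counts[r - 1] += 1
posterior = [c / (accepted * ROLLS) for c in counts]
print(posterior)  # mass shifts toward 6: an exponentially tilted d6
```

Most of the work is wasted: almost all trials fail the condition and are thrown away, which is exactly the inefficiency the DSL below avoids.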
But the only virtue of the brute-force Monte Carlo method I used was that it was easy to code; it's computationally wasteful. So I wrote a much more performant DSL in Python, which I've put on GitHub.
It uses numpy to achieve tolerable numerical performance, but in addition it uses two techniques beyond brute force to make it usable.
One challenge with a probabilistic language is to manage state.
First there's state "in the past": if you're computing probabilities that are sums over many large intermediate states, for example 100 die rolls, you risk falling foul of combinatorial explosion. If you write a loop like:
for i in range(100):
    t += d(6)
you want to be sure that the += operation erases history (i.e. previous values of t) so you aren't tracking all 6^100 individual histories. In this case it's easy, but in other cases it might not be so obvious that you have unneeded state lying around. So the code has a simple (and incomplete) backward liveness pass that inserts deletions of data that won't be used again. Whenever state is deleted, histories that are now indistinguishable can be merged.
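Here's why merging matters, as a standalone sketch (this is an illustration of the idea, not the DSL's internals): if you track only the distribution of the running total t, collapsing histories after every roll, 100 rolls of a d6 leave just 501 distinct states rather than 6^100 histories.

```python
from collections import Counter

# Track only the distribution of the running total, merging indistinguishable
# histories after every roll, instead of keeping all 6**100 roll sequences.
dist = Counter({0: 1})               # t = 0 with weight 1
for _ in range(100):
    new = Counter()
    for total, weight in dist.items():
        for face in range(1, 7):
            new[total + face] += weight   # histories with equal t merge here
    dist = new
print(len(dist))  # 501: only the totals 100..600 survive
```

The weights are exact integer counts summing to 6^100, so nothing is lost by merging; only the unneeded per-roll history is discarded.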
And then there is state "in the future": sometimes you'd like to compute probabilities of data structures like lists, but materializing a list creates state that can cause combinatorial explosion. So I support Python-style generators, allowing you to generate data lazily, for example permutations of cards. (Though even with laziness, the code is going to struggle with a 52-card deck.)
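The flavour of lazy generation, in plain Python rather than the DSL: stream the orderings of a toy deck one at a time and accumulate an exact probability, never holding more than one permutation in memory.

```python
from fractions import Fraction
from itertools import permutations

# Lazy enumeration sketch (not the DSL's generator support): permutations()
# yields one ordering of this 6-card toy deck at a time, 720 in total, so we
# can fold a probability over the stream without materializing any list of them.
deck = ["A", "A", "2", "3", "4", "5"]
hits = total = 0
for order in permutations(deck):     # orderings produced on demand
    total += 1
    hits += order[0] == "A"
print(Fraction(hits, total))  # 1/3: two aces among six cards
```

Even so, laziness only defers the cost of enumeration; a 52-card deck's 52! orderings are out of reach however you stream them, which is the caveat above.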
There's another important technique I used: the code is still brute force at heart, so there are many states, each corresponding to a possible set of values for some numpy objects, and we often want to perform a numpy operation for each of these values. We don't need to loop: in many cases a parameterised family of numpy operations is in fact a single numpy operation. This kind of transformation is ubiquitous in GPU computing.
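A minimal sketch of the collapse (my own example, not the library's code): adding a die's faces to every tracked state is a family of 100 small additions, or equivalently one broadcasted addition.

```python
import numpy as np

# One op per state vs. one op for all states (illustrative shapes):
totals = np.arange(100)        # one running total per tracked state
faces = np.arange(1, 7)        # the six outcomes of a d6

# slow: a separate numpy operation for each state
slow = np.stack([t + faces for t in totals])

# fast: a single broadcasted operation over the whole family of states
fast = totals[:, None] + faces[None, :]

assert np.array_equal(slow, fast)   # same (100, 6) result either way
```

Broadcasting turns the Python-level loop over states into one C-level (or GPU-level) kernel launch, which is where the performance comes from.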
Besides the code's own test suite, I also used a large number of questions from RPG Stack Exchange to build a library of examples for testing.
Of mathematical interest: most of the probability computations take place in a semiring, so most of the numerical computing is simply addition and multiplication. When that's all you're doing, there is a well-known technique for working with large integers: work modulo p[i] for some array of primes, and only at the end reconstruct your final result using the Chinese remainder theorem. Less well known is that this works for rationals too. (I conjectured this was true, started deriving it myself, and then learnt there are published methods.) This means we can work with exact rational arithmetic using numpy without needing a bignum library.
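The rational version can be sketched in a few lines (an illustration of the published technique, not the library's code): do all the arithmetic modulo a few primes, combine with the Chinese remainder theorem, then recover the fraction with rational reconstruction, i.e. running the extended Euclidean algorithm and stopping early.

```python
from math import isqrt

PRIMES = [10007, 10009]   # illustrative; the real code would use many primes

def crt(residues, primes):
    """Combine residues mod each prime into one residue mod their product."""
    x, M = 0, 1
    for r, p in zip(residues, primes):
        x += M * (((r - x) * pow(M, -1, p)) % p)
        M *= p
    return x, M

def reconstruct(a, M):
    """Find p/q with p = a*q (mod M) and |p|, q <= sqrt(M/2), via Euclid."""
    bound = isqrt(M // 2)
    r0, r1, t0, t1 = M, a % M, 0, 1
    while r1 > bound:                    # extended Euclid, stopped early
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1                        # numerator, denominator

# compute 1/6 * 1/6 = 1/36 entirely in modular arithmetic
residues = [pow(36, -1, p) for p in PRIMES]
x, M = crt(residues, PRIMES)
print(reconstruct(x, M))  # (1, 36)
```

Every intermediate value fits in a machine word per prime, which is what makes the whole computation expressible as plain numpy integer arithmetic.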
I originally wrote this code to target GPUs. On my Mac, numpy turned out to be comparable in speed to PyTorch and way faster than TensorFlow. I think this is because those libraries are optimised around data of fairly fixed shape passing through fixed pipelines whereas my code is very ad hoc. I've a feeling a few custom kernels would speed it up a lot. (I may be wrong about this but I do know I can write CUDA/Metal code directly that is many times faster than some of my dice-nine examples.)
And one final note. This is a deeply embedded DSL. I use Python as a host to give me an AST that I interpret. This isn't simply overloading of Python operators.
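To make the distinction concrete, here is a toy version of the deep-embedding idea (hypothetical node names, nothing like dice-nine's actual API): expressions build a tree, and a separate interpreter walks that tree to produce an exact distribution.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class D:              # AST node: a single n-sided die
    n: int

@dataclass
class Add:            # AST node: the sum of two sub-expressions
    left: object
    right: object

def run(expr):
    """Interpret an expression tree into a {value: probability} dict."""
    if isinstance(expr, D):
        return {k: Fraction(1, expr.n) for k in range(1, expr.n + 1)}
    if isinstance(expr, Add):
        left, right = run(expr.left), run(expr.right)
        out = {}
        for a, pa in left.items():
            for b, pb in right.items():
                out[a + b] = out.get(a + b, 0) + pa * pb
        return out

print(run(Add(D(6), D(6)))[7])  # Fraction(1, 6)
```

Because the program is reified as a tree before anything runs, the interpreter can analyse it, e.g. the liveness pass described above, rather than just executing overloaded operators as it goes.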

