Saturday, May 29, 2010

Constructing Intermediate Values

Introduction
The Intermediate Value Theorem, as it is usually stated, tells us that if we have a continuous real-valued function f on the closed interval [0,1], with f(0)<0 and f(1)>0, then there is a point x in (0,1) such that f(x)=0. Here's a sketch of a proof:

Define three sequences l_i, u_i, x_i recursively as follows:

x_{i+1} = (l_i + u_i)/2
(l_{i+1}, u_{i+1}) = (l_i, x_{i+1}) if f(x_{i+1}) > 0
(l_{i+1}, u_{i+1}) = (x_{i+1}, u_i) otherwise

Set l_0 = 0, u_0 = 1.
The x_{i+1} are bracketed by the l_i and u_i, and |u_i - l_i| = 2^{-i}, so the x_i form a Cauchy sequence with some limit x. Because f is continuous and f(l_i) ≤ 0 ≤ f(u_i) for every i, in the limit we get f(x) = 0.

This proof not only shows that we have an intermediate value x where f crosses the x-axis, it also seems to describe a procedure for computing x. Let's use this strategy to write a numerical method.

A Numerical Method

> import Ratio
> infixl 6 .*

> ivt0 :: (Float -> Float) -> Float
> ivt0 f = ivt' 0 1 where
>    ivt' l u =
>          let z = 0.5*(l+u)
>          in if z==l || z==u || f z == 0
>               then z
>               else if f z < 0
>                   then ivt' z u
>                   else ivt' l z


Here are some simple functions we can use to test it:

> f0, g0 :: Float -> Float
> f0 x = 3*x-1
> g0 x = 2*x-1


ivt0 f0 and ivt0 g0 give the expected results.
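
As a quick sanity check (tests0 is just a throwaway name), evaluating the pair below in ghci should print two Floats close to 1/3 and 1/2; the exact digits depend on how Float rounds:

> tests0 :: (Float, Float)
> tests0 = (ivt0 f0, ivt0 g0)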

Of course, it might not actually be the case that f (ivt0 f) == 0. We're dealing with the type Float here and there might not actually be a solution to f x == 0 to the precision of a Float. And it's not clear that the word continuous means anything here anyway, because the limited precision of a Float precludes us from constructing the deltas and epsilons the definition of continuity demands.

Switching to Exact Real Arithmetic
So can we do better? Can we implement a function to compute intermediate values for exact real arithmetic? I recently talked about computable real numbers, and they are what I intend to use here. I'll represent the real number x using a sequence of rationals, x_i, such that |x_i - x| < 2^{-i}. Here's a suitable type:

> data R = R { runR :: Integer -> Rational }


(I'll only be calling it for non-negative arguments, but alas, Haskell has no natural number type.)

We'll need to display our numbers. Here's something cheap and cheerful that's good enough for this article:

> instance Show R where
>    show (R f) = show (fromRational (f 30) :: Float)


It displays a Float that is formed from a rational approximation that is within 2^{-30} of our value.

We can inject the rationals into the computable reals by using constant sequences:

> instance Fractional R where
>    fromRational x = R $ const $ fromRational x


An important point here is that two different sequences might define the same real. So when we construct functions mapping reals to reals we must be sure that they take different representations of the same real to representations of the same real. I'll call these 'well-defined'.
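
For example, here are two different representations of 1/3 (the names third1 and third2 are just for illustration): a constant sequence, and a sequence of truncated binary approximations. Both satisfy the accuracy condition |x_i - 1/3| < 2^{-i}, so any well-defined function must send them to representations of the same real:

> third1, third2 :: R
> third1 = fromRational (1%3)
> third2 = R $ \i -> (2^i `div` 3) % (2^i)   -- floor(2^i/3)/2^i differs from 1/3 by less than 2^{-i}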

Now our problems begin. Haskell wants us to implement Eq before Num, but equality of computable reals is not decidable. Still, it is semi-decidable in that if two reals differ, we can detect this. Here's a partial equality function:

> instance Eq R where
>    R f == R g = eq' 2 0 f g where
>        eq' delta i f g = 
>               let fi = f i
>                   gi = g i
>               in if fi<gi-delta
>                   then False
>                   else if fi>gi+delta
>                       then False
>                       else eq' (delta/2) (i+1) f g


The idea is that if f i and g i can be separated by more than 2*2^{-i} then the reals that f and g represent can't be equal.
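
For example, using the two representations of 1/3 defined earlier (again, the names here are throwaway), comparing visibly different reals terminates quickly, while comparing two representations of the same real never returns, which is why this is only a partial equality test:

> eqExample1, eqExample2 :: Bool
> eqExample1 = third1 == fromRational (1%2)   -- returns False after a few refinements
> eqExample2 = third1 == third2               -- never terminates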

We have to implement Ord too. If one computable real is bigger than another then they must differ by more than some power of 1/2. This means we can eventually distinguish between them. As a result, compare will always terminate if we compare distinct numbers:

> instance Ord R where
>    compare (R f) (R g) = compare' 2 0 f g where
>        compare' delta i f g =
>               let fi = f i
>                   gi = g i
>               in if fi<gi-delta
>                   then LT
>                   else if fi>gi+delta
>                       then GT
>                       else compare' (delta/2) (i+1) f g


We can define min and max straightforwardly:

>    min (R f) (R g) = R $ \i -> min (f i) (g i)
>    max (R f) (R g) = R $ \i -> max (f i) (g i)


And now we can define addition. We use the fact that if |x-x'|<ε and |y-y'|<ε then |(x+y)-(x'+y')|<2ε:

> instance Num R where
>    fromInteger n = R $ const $ fromInteger n
>    R f + R g = R $ \i -> let j = i+1 in f j + g j
>    negate (R f) = R $ negate . f


I'm going to skip multiplication of two reals. But it's easy to define multiplication of a rational with a real. Here I'm using the fact that if |x-x'|<ε, then |ax-ax'|<|a|ε:

> x .* R f = R $ mul x x f where
>    mul x x' f i | abs x' < 1 = x * f i
>                 | otherwise  = mul x (x'/2) f (i+1)


Now we're ready! It's straightforward to translate our previous algorithm. Note that we don't need a function to take the limit of a sequence. By definition we use sequences of rational approximations that are accurate to within 2^{-i}, so our ivt1 function merely needs to compute an approximation to our intermediate value that is accurate to within 2^{-i}.

> ivt1 :: (R -> R) -> R
> ivt1 f = R $ \i -> ivt' 0 1 i where
>    ivt' :: Rational -> Rational -> Integer -> Rational
>    ivt' x y i =
>           let z = (1%2)*(x+y)
>           in if i==0
>                then z
>                else if f (fromRational z) < 0
>                    then ivt' z y (i-1)
>                    else ivt' x z (i-1)


Some new test functions:

> f, g :: R -> R
> f x = 3.*x-1
> g x = 2.*x-1


Note that (amazingly!) any well-defined function R -> R we implement is continuous because of the argument I sketched here.

And you'll find that ivt1 f gives the result you expect. But sadly, ivt1 g never returns. What went wrong?

Clutching Defeat from the Jaws of Victory
After one iteration, ivt1 sets z to the value we want. This seems like a particularly easy case. But the algorithm immediately hangs, because it then proceeds to compare f z with zero, and we already know that comparing two representations of the same real can never terminate. How do we fix our algorithm?
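
Before trying to answer that, here is the stuck comparison spelled out concretely (stuck is just a throwaway name); every rational approximation produced for g (1/2) is exactly 0, so the comparison with zero can never separate the two sides:

> stuck :: Bool
> stuck = g (fromRational (1%2)) < 0   -- diverges: both sides approximate zero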

Consider the actions of any higher order function that operates on a continuous function f of type R -> R. It will evaluate f at various real values. The return values from f will then be sampled at various values of i. Each sample corresponds to a bracket around a value of f. Consider a higher order function acting on the function below:

[Graph: a continuous function crossing the x-axis, flat at zero between x = 0.4 and x = 0.6.]

If our higher order function terminates, it can only compute a finite number of these brackets. So I can choose an ε small enough that I could nudge the graph up or down by ε without invalidating the brackets. I could provide the higher order function with a different function R -> R that responds in exactly the same way to the finite number of samples.

Now consider the case of an ivt function. It will respond the same way to two functions differing only by ε. The function in the graph above has been designed so that it is flat from 0.4 to 0.6. Nudging the function above by ε>0 up and down causes the crossing of the x-axis to move from x<0.4 to x>0.6. In other words, no algorithm could possibly return a valid intermediate value for all possible inputs because I can always contrive a pair of functions, to which ivt would respond identically, and yet which have widely separated points where they cross the x-axis. This shows that the problem of determining a crossing point of a continuous function that starts off less than zero, and ends up greater than zero, is uncomputable. We can do it for some functions. But no function can solve this problem for all continuous inputs. What's really weird about this is that it fails precisely when the problem seems easiest - when there are lots of zeroes!

A Revised Approach
But we don't need to give up. If we could ensure that at each bisection we picked a point that wasn't exactly a zero of f, then we'd be fine. But how can we do this? Well, here's a cheating way: we don't bother. We make that problem the caller's responsibility. Instead of bisecting at each stage, we'll trisect: we'll pick a point in the middle third that isn't a zero, using a function passed in by the caller. Call this function g. g has type Rational -> Rational -> Rational, and the specification is that g x y must be a rational number between x and y that isn't a zero of f. Here's our new ivt2 function:

> ivt2 :: (Rational -> Rational -> Rational) -> (R -> R) -> R
> ivt2 g f = R $ \i ->
>     let delta = (1%2)^i
>         ivt' :: Rational -> Rational -> Rational
>         ivt' x y =
>               let l = (1%3)*(2*x+y)
>                   u = (1%3)*(x+2*y)
>                   z = g l u
>               in if y-x < delta   -- bracket now small enough that z is within delta of the crossing
>                    then z
>                    else if f (fromRational z) < 0
>                        then ivt' z y
>                        else ivt' x z
>     in ivt' 0 1


How can we make use of this? Well, suppose we want to find where a polynomial, f, crosses the x-axis. If the polynomial has degree n, it can have at most n zeroes. So if we pick n+1 values, f must evaluate to a non-zero value at one of them. We can examine the n+1 values of f(x) in parallel to find one that is non-zero. This must terminate. Here's an implementation for the case n+1=2. If we pass two real values to pickNonZero it will return True or False according to whether the first or the second argument is non-zero. It must only be called with a pair of numbers at least one of which is non-zero:

> pickNonZero (R f) (R g) = pickNonZero' 0 1 where
>     pickNonZero' i e = let fi = f i
>                            gi = g i
>                        in if fi > e || fi < -e
>                           then True
>                           else if gi > e || gi < -e
>                               then False
>                               else pickNonZero' (i+1) ((1%2)*e)


Note how it examines the two values in parallel. If it simply tested the two numbers in turn against zero it would fail if the first were zero. (Interesting side note: we could write this using par.)

Suppose the function f is strictly monotonically increasing. Then we know that if y>x then f(y)>f(x), so f(x) and f(y) can't both be zero. We can use this to write a suitable helper function, and hence an intermediate value function, that is 100% guaranteed to work for all strictly monotonically increasing functions (assuming we don't run out of memory):

> monotonicPicker :: (R -> R) -> Rational -> Rational -> Rational
> monotonicPicker f x y = if pickNonZero
>                               (f (fromRational x)) (f (fromRational y))
>                             then x
>                             else y

> ivt3 :: (R -> R) -> R
> ivt3 f = ivt2 (monotonicPicker f) f


By varying monotonicPicker we can find crossing points for all manner of continuous functions. Try ivt3 g. But no matter how hard we try, we can never write one that works for all continuous functions.
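
As a quick check (monotoneTests is just a throwaway name), both components below should display as values close to 1/3 and 1/2:

> monotoneTests :: (R, R)
> monotoneTests = (ivt3 f, ivt3 g)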

What went wrong?
So why did the Intermediate Value Theorem, despite its appearance of constructing an intermediate value, fail to yield an algorithm? For an answer to that, you'll have to wait until my next article. The surprising thing is that it's not a problem of analysis, but a problem of logic.

Acknowledgements
I have borrowed liberally from Paul Taylor, especially from here. However, I've chosen to phrase everything in terms of classical reasoning about Haskell programs. Any errors are, of course, mine. It was probably Andrej Bauer's awesome blog that got me interested in this stuff.

Afterword
I want to stress that I have made absolutely no mention of intuitionism or constructivism in what I've said above. I have simply used ordinary classical mathematics to reason about some Haskell programs. I'm leaving that for the next article.






Saturday, May 15, 2010

Optimising pointer subtraction with 2-adic integers.

Here is a simple C type and a function definition:


struct A
{
    char x[7];
};

int diff(struct A *a, struct A *b)
{
    return a-b;
}


It doesn't seem like there could be much to say about that. The A structure is 7 bytes long so the subtraction implicitly divides by 7. That's about it. But take a look at the assembly language generated when it's compiled with gcc:


movl 4(%esp), %eax
subl 8(%esp), %eax
imull $-1227133513, %eax, %eax
ret


Where is the division by 7? Instead we see multiplication by -1227133513. A good first guess is that maybe this strange constant is an approximate fixed-point representation of 1/7. But this is a single multiplication with no shifting or bit field selection tricks. So how does this work? And what is -1227133513? Answering that question will lead us on a trip through some surprising and abstract mathematics. Among other things, we'll see how, just as two's complement lets us represent negative numbers in binary as if they were positive, we can also represent fractions in binary in a similar way.

But first, some history.

Some n-bit CPUs


[Photo: an Intel 4004 microprocessor.]

That's an Intel 4004 microprocessor, the first microprocessor completely contained on a single chip. It was a 4 bit processor equipped with 4 bit registers. With 4 bits we can represent unsigned integers from 0 to 15 in a single register. But what happens if we want to represent larger integers?

Let's restrict ourselves to arithmetic operations using only addition, subtraction and multiplication and using one register per number. Then a curious thing happens. Take some numbers outside of the range 0 to 15 and store only the last 4 bits of each number in our registers. Now perform a sequence of additions, subtractions and multiplications. Obviously we usually expect to get the wrong result because if the final result is outside of our range we can't represent it in a single register. But the result we do get will have the last 4 bits of the correct result. This happens because in the three operations I listed, the value of a bit in a result doesn't depend on higher bits in the inputs. Information only propagates from low bit to high bit. We can think of a 4004 as allowing us to correctly resolve the last 4 bits of a result. From the perspective of a 4004, 1 and 17 and 33 all look like the same number. It doesn't have the power to distinguish them. But if we had a more powerful 8 bit processor like the 6502 in this machine, we could distinguish them:

[Photo: an 8-bit home computer built around the 6502.]
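
Here is that observation as a few lines of Haskell (a throwaway sketch; lowBitsDemo and the particular numbers are made up for illustration). Doing the arithmetic with full Integers and then keeping the last 4 bits gives the same answer as truncating every input to 4 bits first:

lowBitsDemo :: Bool
lowBitsDemo = full `mod` 16 == truncated `mod` 16   -- True
  where
    (a, b, c) = (23, 57, 119)   -- all too big for a 4 bit register
    full      = a * b - c + a * a
    truncated = (a `mod` 16) * (b `mod` 16) - (c `mod` 16) + (a `mod` 16) * (a `mod` 16)
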
This is analogous to the situation we have with distances in the physical world. With our eyes we can resolve details maybe down to 0.5mm. If we want to distinguish anything smaller we need more powerful equipment, like a magnifying glass. When that fails we can get a microscope, or an electron microscope, or these days even an atomic force microscope. The more we pay, the smaller we can resolve. We can think of the cost of the equipment required to resolve two points as being a kind of measure of how close they are.

We have the same with computers. To resolve 1 and 17 we need an 8-bit machine. To resolve 1 and 65537 we need a 32-bit machine. And so on. So if we adopt a measure based on cost like in the previous paragraph, there is a sense in which 1 is close to 17, but 1 is even closer to 257, and it's closer still to 65537. We have this inverted notion of closeness where numbers separated by large (in the usual sense) powers of two are close in this new sense.

We have an interesting relationship between computing machines with different 'resolving' power. If we take an arithmetical computation on an N-bit machine, and then take the last M bits of the inputs and result, we get exactly what the M-bit machine would have computed. So an M-bit machine can be thought of as a kind of window onto the last M bits of an N-bit machine. Here's a sequence of machines:

[Diagram: a sequence of machines, with "..." on the left, each providing a window onto the low bits of the previous one.]

Each machine provides a window onto the low bits of the previous machine in the sequence. But what happens at the "..." on the left? That suggests the bizarre idea that maybe all of these finite machines could be thought of as windows onto some infinite-bit machine. Does that idea make any kind of sense?

I'll try to convince you that's a sensible idea by pointing out that it's something familiar to anyone who's taken a rigorous analysis course. (And I'll mention in passing that the above diagram illustrates a limit in an appropriate category!)

Mathematicians (often) build the real numbers from the rational numbers by a process known as completion. Consider a sequence like 1, 14/10, 141/100, 1414/1000, ... . The nth term is the largest fraction with 10^n in the denominator (counting from n = 0) whose square is less than 2. It's well known that there is no rational number whose square is 2. And yet it feels like this sequence ought to be converging to something. It feels this way because successive terms in the sequence get as close to each other as you like. If you pick any ε there will be a term in the series, say x, with the property that later terms never deviate from x by more than ε. Such a sequence is called a Cauchy sequence. But these sequences don't all converge to rational numbers. A number like √2 is a gap. What are we to do?

Mathematicians fill the gap by defining a new type of number, the real number. These are by definition Cauchy sequences. Now every Cauchy sequence converges to a real number because, by definition, the real number it converges to is the sequence. For this to be anything more than sleight of hand we need to prove that we can do arithmetic with these sequences. But that's just a technical detail that can be found in any analysis book. So, for example, we can think of the sequence I gave above as actually being the square root of two. In fact, the decimal notation we use to write √2, 1.414213..., can be thought of as shorthand for the sequence (1, 14/10, 141/100, ...).

The notion of completeness depends on an idea of closeness. I've described an alternative to the usual notion of closeness and so we can define an alternative notion of Cauchy sequence. We'll say that the sequence x_1, x_2, ... is a Cauchy sequence in the new sense if all the numbers from x_n onwards agree on their last n bits. (This isn't quite the usual definition but it'll do for here.) For example, 1, 3, 7, 15, 31, ... defines a Cauchy sequence. We consider a Cauchy sequence equal to zero if x_n always has zero for its n lowest bits. So 2, 4, 8, 16, 32, ... is a representation of zero. We can add, subtract and multiply Cauchy sequences pointwise, so, for example, the product of the sequences x_n and y_n has terms x_n*y_n, and their sum has terms x_n+y_n. Two Cauchy sequences are considered equal if their difference is zero. These numbers are called 2-adic integers.

Exercise: prove that if x is a 2-adic integer then x + 0 = x and 0·x = 0.
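
If you want to play with this, here is a minimal Haskell sketch of the "last n bits" view (the names Dyadic, bits, add, mul and allOnes are all made up for this aside, and we simply assume the supplied function is consistent from one n to the next):

-- A 2-adic integer: given n, return an Integer whose last n bits are the number's last n bits.
newtype Dyadic = Dyadic (Integer -> Integer)

bits :: Integer -> Dyadic -> Integer
bits n (Dyadic f) = f n `mod` 2^n

-- Addition and multiplication work window by window: the last n bits of the
-- result depend only on the last n bits of the inputs.
add, mul :: Dyadic -> Dyadic -> Dyadic
add (Dyadic f) (Dyadic g) = Dyadic (\n -> (f n + g n) `mod` 2^n)
mul (Dyadic f) (Dyadic g) = Dyadic (\n -> (f n * g n) `mod` 2^n)

-- The number ...1111111 discussed below:
allOnes :: Dyadic
allOnes = Dyadic (\n -> 2^n - 1)

-- bits 8 (add allOnes (Dyadic (const 1)))  ==  0
-- bits 8 (mul allOnes allOnes)             ==  1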

There's another way of looking at 2-adic integers. They are infinite strings of binary digits, extending to the left. The last n digits are simply given by the last n digits of x_n. For example we can write 1, 3, 7, 15, 31, ... as ...1111111. Amazingly we can add, subtract and multiply these numbers using the obvious extensions of the usual algorithms. Let's add ...1111111 to 1:


...11111111
...00000001
-----------
...00000000


We get a carry of 1 that ripples off to infinity and gives us zeroes all the way.

We can try doing long multiplication of ...111111 with itself. We get:


...11111111
...1111111
...111111
...11111
...
-----------
...00000001


It's important to notice that even though there are an infinite number of rows and columns in that multiplication, you only need to multiply and add a finite number of digits to get any digit of the result. If you don't like that infinite arrangement you can instead compute the last n digits of the product by multiplying the n-digit number 11...1 by itself and taking the last n digits. The infinite long multiplication is really the same as doing this for all n and organising it in one big table.

So ...1111111 has many of the properties we expect of -1. Added to 1 we get zero, and squaring it gives 1. It is -1 in the 2-adic integers. This gives us a new insight into two's complement arithmetic. The two's-complement representation of a negative integer is just the last n digits of its 2-adic representation. We should properly be thinking of two's-complement numbers as extending out to infinity on the left.

The field of analysis makes essential use of the notion of closeness with its δ and ε proofs. Many theorems from analysis carry over to the 2-adic integers. We find ourselves in a strange alternative number universe which is a sort of mix of analysis and number theory. In fact, people have even tried studying physics in p-adic universes. (p-adics are what you get when you repeat the above for base p numbers, but I don't want to talk about that now.) One consequence of analysis carrying over is that some of our intuitions about real numbers carry over to the 2-adics, even though some of our intuitive geometric pictures seem like they don't really apply. I'm going to concentrate on one example.

The Newton-Raphson Method
I hope everyone is familiar with the Newton-Raphson method for solving equations. If we wish to solve f(x)=0 we start with an estimate x_n. We find the tangent to y=f(x) at x=x_n. The tangent line is an approximation to the curve y=f(x), so we solve the easy problem of finding where the tangent line crosses the x-axis to get a new estimate x_{n+1}. This gives the formula

x_{n+1} = x_n - f(x_n)/f'(x_n).

With luck the new estimate will be closer than the old one. We can do some analysis to get some sufficient conditions for convergence.

The surprise is this: the Newton-Raphson method often works very well for the 2-adic integers even though the geometric picture of lines crossing axes doesn't quite make sense. In fact, it often works much better than with real numbers allowing us to state very precise and easy to satisfy conditions for convergence.

Now let's consider the computation of reciprocals of real numbers. To find 1/a we wish to solve f(x)=0 where f(x)=1/x-a. Newton's method gives the iteration x_{n+1} = x_n(2 - a x_n). (Plug f(x) = 1/x - a and f'(x) = -1/x^2 into the formula above and simplify.) This is a well-known iteration that is used internally by CPUs to compute reciprocals. But for it to work we need to start with a good estimate. The famous Pentium divide bug was the result of an incorrect lookup table in its division hardware. So let's say we want to find 1/7. We might start with an estimate like 0.1 and quickly get estimates 0.13, 0.1417, 0.142848, ... . It's converging to the familiar 0.142857...
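
The decimal estimates quoted above are easy to reproduce (reciprocalEstimates is just a throwaway name; the printed values will have more digits than the rounded figures in the text):

reciprocalEstimates :: [Double]
reciprocalEstimates = take 4 (iterate (\x -> x * (2 - 7 * x)) 0.1)
-- roughly [0.1, 0.13, 0.1417, 0.142848], heading towards 1/7 = 0.142857...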

But what happens if we start with a bad estimate like 1. We get the sequence:


1
-5
-185
-239945
-403015701065
-1136951587135200126341705


It's diverging badly. But now let's look at the binary:


00000000000000000000000000000000000000000000000000000000000000000000000000000000001
11111111111111111111111111111111111111111111111111111111111111111111111111111111011
11111111111111111111111111111111111111111111111111111111111111111111111111101000111
11111111111111111111111111111111111111111111111111111111111111111000101011010110111
11111111111111111111111111111111111111111111010001000101010011001000110110110110111
11100001111001111011011101100100000010000000011011010110110110110110110110110110111


Our series may be diverging rapidly in the usual sense, but amazingly it's converging rapidly in our new 2-adic sense!

If it's really converging to a meaningful reciprocal we'd expect that if we multiplied the last n digits of these numbers by 7 then we'd get something that agreed with the number 1 in its last n digits. Let's take the last 32 digits:


10110110110110110110110110110111


and multiply by 7:


10100000000000000000000000000000001


The last 32 bits are


00000000000000000000000000000001.


So if we're using 32 bit registers, and we're performing multiplication, addition and subtraction, then this number is, to all intents and purposes, a representation of 1/7. If we interpret it as a two's-complement number, then in decimal it's -1227133513. And that's the mysterious number gcc generated.
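
We can confirm this arithmetic with a few lines of Haskell (a throwaway check; the names are made up). Because information only flows from low bits to high bits, it is fine to run the Newton iteration on ordinary Integers and truncate at the end:

inverseOf7Check :: (Integer, Integer, Integer)
inverseOf7Check = ( candidate                    -- 3067833783, the 32 bit pattern above
                  , (7 * candidate) `mod` 2^32   -- 1, so it really is 1/7 modulo 2^32
                  , candidate - 2^32 )           -- -1227133513, the constant gcc used
  where
    -- five Newton steps starting from the "bad" estimate 1, truncated to 32 bits
    candidate = iterate (\y -> y * (2 - 7 * y)) 1 !! 5 `mod` 2^32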

Epilogue
There are many things to follow up with. I'll try to be brief.

Try compiling C code with a struct of size 14. You'll notice some extra bit shifting going on. So far I've only defined the 2-adic integers. But to get the reciprocal of every non-zero number we need numbers whose digits don't just extend leftwards to infinity but also extend a finite number of steps to the right of the "binary point". These are the full 2-adic numbers, as opposed to merely the 2-adic integers. That's how the extra shifts can be interpreted. Or more simply, if you need to divide by 14 you can divide by 2 first and then use the above method to divide by 7.
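
In code, the size 14 case looks something like this (a sketch with made-up names, not the exact instruction sequence gcc emits): shift right by one to divide the byte difference by 2, then multiply by the 2-adic reciprocal of 7:

divideBy14 :: Integer -> Integer
divideBy14 d = ((d `div` 2) * 3067833783) `mod` 2^32   -- 3067833783 is 1/7 modulo 2^32

-- divideBy14 (14 * 123) == 123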

I don't know how gcc generates its approximate 2-adic reciprocals. Possibly it uses something based on the Euclidean GCD algorithm. I wasn't able to find the precise line of source in a reasonable time.

An example of a precise version of the Newton-Raphson method for the p-adics is Hensel's lemma.

The last thing I want to say is that all of the above is intended purely to whet your appetite and point out that a curious abstraction from number theory has an application to compiler writing. It's all non-rigorous and hand-wavey. I recommend reading further at Wikipedia. I learnt most of what I know on the subject from the first few chapters of Koblitz's book p-adic Numbers, p-adic Analysis, and Zeta-Functions. The proof of the von Staudt–Clausen theorem in that book is mindblowing. It reveals that the real numbers and the p-adic numbers are equally valid ways to approximately get a handle on rational numbers and that there are whole alternative p-adic universes out there inhabited by weird versions of familiar things like the Riemann zeta function.

(Oh, and please don't take the talk of CPUs too literally. I'm fully aware that you can represent big numbers even on a 4 bit CPU. But what I say about a model of computation restricted to multiplication, addition and subtraction in single n-bit registers holds true.)

Some Challenges
1. Prove from first principles that the iteration for 1/7 converges. Can you prove how many digits it generates at a time?
2. Can you find a 32 bit square root of 7? Using the Newton-Raphson method? Any other number? Any problems?

Acknowledgements
Update: I have replaced the images with images that are, to the best of my knowledge, public domain, or composited from public domain images. (Thanks jisakujien. My bad.)

Update
If you want to play with some p-adics yourself, there is some code to be found here. That also has code for transcendental functions applied to p-adics.

Here's some C code to compute inverses of odd numbers modulo 2^32 (assuming 32 bit ints). Like the real-valued Newton method, it doubles the number of correct digits at each step so we only need 5 iterations to get 32 bits. (C code as I think it's traditional to twiddle one's bits in C rather than Haskell.)

#include <stdio.h>
#include <assert.h>

typedef unsigned int uint;

uint inverse(uint x)
{
    /* Each step y -> y*(2-x*y) doubles the number of correct low bits,
       so starting from the 2 correct bits of 2-x, four more steps give 32. */
    uint y = 2-x;
    y = y*(2-x*y);
    y = y*(2-x*y);
    y = y*(2-x*y);
    y = y*(2-x*y);

    return y;
}

int main()
{
    uint i;
    for (i = 1; i < 0xfffffffe; i += 2)
    {
        assert(i*inverse(i) == 1);
    }
}