Haskell is flavour of the month for me right now. Lots of mathematical algorithms that can be quite messy in an imperative language turn out to be nearly trivial in a lazy functional language.
I thought I'd have a go at implementing formal power series in Haskell. There are some notoriously messy problems to deal with, especially when implementing things like power series 'reversion', ie. given f, finding the power series g s.t. f(g) = x. In other words, finding the power series of the inverse of a function defined by a power series. But in Haskell it's easy to give a recursive definition so that, like the mythical ouroboros, the formal series feed themselves by eating their own tails. (Well, in this case they eat their own heads to feed their tails...)
Consider defining formal power series (FPS) addition in Haskell. It's just:
a ^+ b = zipWith (+) a b
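As a quick sanity check, here is that definition as a self-contained runnable sketch (the type signature and the name addExample are my additions). Note that zipWith truncates to the shorter list, so finite approximations should be zero-padded:

```haskell
-- FPS addition: coefficientwise, on (possibly infinite) coefficient lists.
infixl 6 ^+
(^+) :: Num a => [a] -> [a] -> [a]
a ^+ b = zipWith (+) a b

-- (1 + z + z^2 + ...) + (1 + 2z + 3z^2 + ...) = 2 + 3z + 4z^2 + ...
addExample :: [Integer]
addExample = take 5 (repeat 1 ^+ [1..])
```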
Before multiplying series, let's define:
(*!) _ 0 = 0
(*!) a b = a*b
(!*) 0 _ = 0
(!*) a b = a*b
They are just 'short cut' versions of multiplication.
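Presumably the point of the shortcuts is laziness: when one argument is known to be zero, the other need not be evaluated at all, which matters once series are defined in terms of themselves. A small sketch (shortcutDemo is my name):

```haskell
(*!) :: (Eq a, Num a) => a -> a -> a
(*!) _ 0 = 0
(*!) a b = a * b

(!*) :: (Eq a, Num a) => a -> a -> a
(!*) 0 _ = 0
(!*) a b = a * b

-- The left argument is never inspected when the right one is 0,
-- so even undefined is safe here:
shortcutDemo :: Integer
shortcutDemo = undefined *! 0
```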
That's trivial enough. Now onto multiplication. Basically we use
(a0 + a1 z + a2 z^2 + ...)(b0 + b1 z + b2 z^2 + ...)
    = a0 b0 + a0 (b1 z + b2 z^2 + ...) + (a1 z + a2 z^2 + ...)(b0 + b1 z + b2 z^2 + ...)
But the last term is just another convolution so we can write:
~(a:as) `convolve` (b:bs) = (a *! b):
((map (a !*) bs) ^+ (as `convolve` (b:bs)))
Almost as trivial.
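As a sanity check, squaring the geometric series 1/(1-z) should give 1/(1-z)^2 = 1 + 2z + 3z^2 + ... Here is a self-contained sketch (I've written convolve in prefix form; geomSq is my name):

```haskell
(^+) :: Num a => [a] -> [a] -> [a]
(^+) = zipWith (+)

(*!), (!*) :: (Eq a, Num a) => a -> a -> a
(*!) _ 0 = 0
(*!) a b = a * b
(!*) 0 _ = 0
(!*) a b = a * b

-- Product of two FPSes as a lazy convolution of coefficient lists.
convolve :: (Eq a, Num a) => [a] -> [a] -> [a]
convolve ~(a:as) (b:bs) =
  (a *! b) : (map (a !*) bs ^+ (as `convolve` (b:bs)))

-- 1/(1-z) * 1/(1-z) = 1 + 2z + 3z^2 + ...
geomSq :: [Integer]
geomSq = take 5 (repeat 1 `convolve` repeat 1)
```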
The ~ marks an irrefutable pattern, which we need later because we want to hold back our ouroboros from being too eager in a later function definition.
Composition of functions is easy too.
(a0 + a1 z + a2 z^2 + ...)((b0 + b1 z + b2 z^2 + ...))
    = a0 + a1 (b0 + b1 z + b2 z^2 + ...) + a2 (b0 + b1 z + b2 z^2 + ...)^2 + ...
Note that we need to assume b0 = 0 to avoid having to sum an infinite series.
We can write this simply as the one liner:
compose (f:fs) (0:gs) = f:(gs `convolve` (compose fs (0:gs)))
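For instance, composing 1/(1-z) with z^2 should give 1/(1-z^2) = 1 + z^2 + z^4 + ... A self-contained sketch (convolve again in prefix form; composed is my name):

```haskell
(^+) :: Num a => [a] -> [a] -> [a]
(^+) = zipWith (+)

(*!), (!*) :: (Eq a, Num a) => a -> a -> a
(*!) _ 0 = 0
(*!) a b = a * b
(!*) 0 _ = 0
(!*) a b = a * b

convolve :: (Eq a, Num a) => [a] -> [a] -> [a]
convolve ~(a:as) (b:bs) =
  (a *! b) : (map (a !*) bs ^+ (as `convolve` (b:bs)))

-- Composition f(g), valid when g has zero constant term.
compose :: (Eq a, Num a) => [a] -> [a] -> [a]
compose (f:fs) (0:gs) = f : (gs `convolve` compose fs (0:gs))

-- f(z) = 1/(1-z), g(z) = z^2, so f(g(z)) = 1/(1-z^2) = 1 + z^2 + z^4 + ...
composed :: [Integer]
composed = take 5 (compose (repeat 1) (0 : 0 : 1 : repeat 0))
```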
Now comes the inverse. We have the a_i and want to solve for the b_i:

z = a0 + a1 (b0 + b1 z + b2 z^2 + ...) + a2 (b0 + b1 z + b2 z^2 + ...)^2 + ...

b0 + b1 z + b2 z^2 + ... = (z - a0 - a2 (b0 + b1 z + b2 z^2 + ...)^2 - a3 (...)^3 - ...)/a1

This looks useless at first. But note that if we know b0...bn we can compute the right hand side up to b(n+1) (assuming a0 = 0 and a1 != 0). So if we give Haskell a little bootstrap by telling it the first couple of terms, the above circular definition gives an immediate algorithm for reversion. Here's the code:
inverse (0:f:fs) = x where x = map (recip f *) (0:1:g)
_:_:g = map negate (compose (0:0:fs) x)
That's all there is to it!
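As a concrete check: the reversion of f(z) = z + z^2 + z^3 + ... = z/(1-z) is z/(1+z) = z - z^2 + z^3 - ..., since composing the two gives back z. A self-contained sketch over Rational (revGeom is my name):

```haskell
(^+) :: Num a => [a] -> [a] -> [a]
(^+) = zipWith (+)

(*!), (!*) :: (Eq a, Num a) => a -> a -> a
(*!) _ 0 = 0
(*!) a b = a * b
(!*) 0 _ = 0
(!*) a b = a * b

convolve :: (Eq a, Num a) => [a] -> [a] -> [a]
convolve ~(a:as) (b:bs) =
  (a *! b) : (map (a !*) bs ^+ (as `convolve` (b:bs)))

compose :: (Eq a, Num a) => [a] -> [a] -> [a]
compose (f:fs) (0:gs) = f : (gs `convolve` compose fs (0:gs))

-- Reversion: solve f(x) = z for x, assuming a0 = 0 and a1 /= 0.
-- The lazy pattern in convolve is what keeps this circular
-- definition productive.
inverse :: (Eq a, Fractional a) => [a] -> [a]
inverse (0:f:fs) = x
  where
    x = map (recip f *) (0 : 1 : g)
    _:_:g = map negate (compose (0 : 0 : fs) x)

-- f(z) = z + z^2 + z^3 + ... = z/(1-z); its reversion is z/(1+z).
revGeom :: [Rational]
revGeom = take 5 (inverse (0 : repeat 1))
```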
In general, given any formula for x of the form x=f(x) we can use it to solve for x simply by defining x=f(x), as long as evaluating f(x) is one step ahead of x, ie. it can compute the (n+1)-th term using only the first n terms. And this doesn't just apply to algebraic formulae. We can use any function we like in the definition of f, including integrals and derivatives. We can even use tricks like the Newton-Raphson method. Except that we don't actually need to iterate more than once.
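This one-step-ahead pattern is the same idiom behind the classic self-referential Fibonacci stream, where each element depends only on strictly earlier ones:

```haskell
-- x = f(x) where f stays one step ahead: every element is computed
-- from elements that have already been produced.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```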
For example d/dx exp(f(x)) = exp(f(x))f'(x). So we can define exponentiation of power series via
exp(f(x)) = integral (exp(f(x))f'(x))
This translates into one line of Haskell. (Plus the one liner that defines 'integrate'). We can use other integro-differential equation formulae to give series for a wide variety of functions including hypergeometric functions.
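Here is one way that translation might look, as a sketch under my own names (deriv, integrate, expSeries) and the assumption that f has zero constant term, so exp f has leading coefficient 1; the post's actual one-liner may differ:

```haskell
(^+) :: Num a => [a] -> [a] -> [a]
(^+) = zipWith (+)

(*!), (!*) :: (Eq a, Num a) => a -> a -> a
(*!) _ 0 = 0
(*!) a b = a * b
(!*) 0 _ = 0
(!*) a b = a * b

convolve :: (Eq a, Num a) => [a] -> [a] -> [a]
convolve ~(a:as) (b:bs) =
  (a *! b) : (map (a !*) bs ^+ (as `convolve` (b:bs)))

-- Formal derivative and integral, term by term.
deriv :: Num a => [a] -> [a]
deriv (_:fs) = zipWith (*) (map fromInteger [1..]) fs

integrate :: Fractional a => [a] -> [a]
integrate fs = 0 : zipWith (/) fs (map fromInteger [1..])

-- exp(f) = 1 + integral (exp(f) * f'), for f with zero constant term.
-- The constant of integration is forced to 1 by consing it on.
expSeries :: (Eq a, Fractional a) => [a] -> [a]
expSeries f = e
  where e = 1 : tail (integrate (deriv f `convolve` e))

-- exp z has coefficients 1/n!.
expZ :: [Rational]
expZ = take 5 (expSeries (0 : 1 : repeat 0))
```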
Anyway, I now have my formal power series class defined to the point where it's an instance of class Floating. It's easy to verify things like cos^2 x + sin^2 x = 1 or that the Schwarzian derivative of the ratio of solutions to f''(x) + u(x) f(x) = 0 is 2u(x) for formal power series.
I have written a small combinatorial domain-specific language that uses FPSes as generating functions to count numbers of combinatorial species. I also plan to do some stuff with Sheffer sequences.
I'll put the code on my web site some time soon.
And this is the last time I write about something that requires so many subscripts and superscripts in HTML. So no entries on supersymmetry.
Labels: mathematics