The Fibonacci Numbers, Coalgebraically
> {-# LANGUAGE TypeSynonymInstances, FlexibleInstances #-}
> import Prelude hiding ((<*>)) -- we define our own tensor product (<*>) below
What kind of Haskell programmer can I call myself if I haven't written a fiendishly complex piece of code to inefficiently compute the Fibonacci numbers? Unfortunately, what follows isn't particularly fiendish and isn't wholly inefficient, but it'll have to do.
My real motivation is this: I've mentioned before that one difference between mathematics and computer science is that mathematics has more of a bias towards algebraic rather than coalgebraic structures. But coalgebras do appear in mathematics. For example, every Hopf algebra is a coalgebra (and an algebra), and Hopf algebras play a prominent role due to the amazing discovery that they have a nice connection with combinatorics, especially helping to organise the combinatorics that come up in renormalisation.
The first thing to note is that the computer science and mathematics definitions of coalgebra are closely related but different. An algebra is essentially a vector space V over some field K with a bilinear associative product and a multiplicative identity. Multiplication is a map V×V -> V that is linear in each of its arguments, which is the same as saying that multiplication is a linear map V⊗V->V. In a coalgebra we reverse the arrow: we have a linear comultiplication map Δ:V→V⊗V. Similarly, the unit of an algebra gives a way to map the ground field into the algebra, sending x to x·1, so dually a counit gives a linear map ε from V down to K. The full definition is on Wikipedia.
The Fibonacci sequence can be thought of as a function N→R, where
f(0)=1
f(1)=1
f(n)=f(n-1)+f(n-2) for n>=2
> f :: Sequence
> f 0 = 1
> f 1 = 1
> f i = f (i-1)+f (i-2)
Many functions satisfy sum formulae, like e^(x+y)=e^x e^y and tan(x+y)=(tan(x)+tan(y))/(1-tan(x)tan(y)). f also has one. What's more, the formula is essentially the definition of a coproduct on a coalgebra.
Define a Fibonacci-like sequence to be a sequence s satisfying just the last part of the definition of f:
s(n)=s(n-1)+s(n-2) for n>=2
Because I have chosen maps to R, Fibonacci-like sequences form a vector space over R.
Let S be the space of Fibonacci-like sequences. We're going to try to make this into a coalgebra. It's easy to come up with a candidate for the counit: ε(s) = s(0).
We can express this in Haskell:
> class VectorSpace v where
>     zero :: v
>     (.*) :: Float -> v -> v
>     (.+) :: v -> v -> v
>     (.-) :: v -> v -> v
>     v .- w = v .+ ((-1) .* w)
> type Sequence = Int -> Float
> instance VectorSpace Sequence where
>     zero _ = 0
>     a .* v = \i -> a*v i
>     v .+ w = \i -> v i+w i
> epsilon :: Sequence -> Float
> epsilon s = s 0
We now need to consider S⊗S. Tensor products of infinite dimensional spaces can be a bit delicate. But S is a two-dimensional space, because each of its elements is completely specified by its first two terms. This means that S⊗S is the obvious thing: a 2D grid of values where each row and each column is an element of S.
> type Sequence2 = Int -> Int -> Float
> instance VectorSpace Sequence2 where
>     zero _ _ = 0
>     a .* v = \i j -> a*v i j
>     v .+ w = \i j -> v i j+w i j
The tensor product operation is straightforward:
> (<*>) :: Sequence -> Sequence -> Sequence2
> v <*> w = \i j -> v i*w j
Let's define the operator D so that (Df)(i)=f(i+1). So D is a kind of shift operator: D^n shifts n times, and we can form polynomials in D. Note that this space P of polynomials in D, regarded as operators on S, is also two-dimensional, because D^2=1+D on S by the definition of Fibonacci-like.
> d :: Sequence -> Sequence
> d v i = v (i+1)
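Since D^2=1+D is easy to test, here's a quick sanity check that it holds on f (checkShift is my own helper, not part of the original development):
> checkShift :: Int -> Bool
> checkShift i = d (d f) i == (d f .+ f) i
Evaluating all checkShift [0..20] gives True; for small indices the Floats involved are exact integers, so the equality test is safe.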
We can define a pairing between P and S by <f(D),s> = ε(f(D)s). This makes P dual to S. P has an obvious multiplication, inherited from ordinary polynomial multiplication, which we can denote by the map m:P×P→P. This suggests the idea of forming a kind of dual to multiplication, Δ:S→S⊗S, defined by
<f(D)g(D),s>=<f(D)⊗g(D),Δ(s)>
The fact that we have defined things through this duality ensures that (S,ε,Δ) is a coalgebra. (Note that the pairing extends to the tensor products via <f(D)⊗g(D),s⊗t>=<f(D),s><g(D),t>.)
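The pairing is easy to render in code: pairing with a monomial D^m just shifts m times and then applies ε, reading off s(m). (pairD is a helper of my own, not in the original post.)
> -- <D^m, s> = ε(D^m s) = s(m)
> pairD :: Int -> Sequence -> Float
> pairD m s = epsilon (iterate d s !! m)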
By definition <D^m⊗D^n,Δ(s)>=<D^(m+n),s>. So Δ(s) is simply the 2D grid made by arranging s along the edges and filling in with constant values along the anti-diagonals.
> delta :: Sequence -> Sequence2
> delta v = \i j -> v (i+j)
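For example, the top left corner of Δ(f), with entry f(i+j) in row i and column j, looks like
1 1 2 3
1 2 3 5
2 3 5 8
3 5 8 13
with f along the first row and first column, and constant anti-diagonals.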
Given Δ(s) we can extract s back out of it in many different ways. To get the ith element we pick j and k such that j+k=i and grab the jth element of the kth row. One choice is to pick k to be half of i rounded down:
> extract :: Sequence2 -> Sequence
> extract f i = let k = i `div` 2 in f k (i-k)
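As a quick check (again a helper of my own), extract really is a left inverse to delta:
> extractCheck :: Int -> Bool
> extractCheck i = extract (delta f) i == f i
This is True for every i, because extract (delta v) i reduces to v (k+(i-k)) = v i.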
Now given a Fibonacci-like sequence s, we can think of the top left 2×2 block of Δ(s) as a matrix, i.e. s(i+j) for i=0,1; j=0,1 forms a matrix. It's clearly invertible by inspection. But there's a deeper reason why it's invertible: a linear dependence between the rows of this matrix would be a linear relationship between shifts of the sequence, and hence would give a simpler, first order, definition of the Fibonacci sequence. More generally, if a sequence satisfies an nth order recurrence relation and no simpler one, then the top left n×n block is full rank and hence invertible.
Write t_jk for the entries of the inverse of this matrix. Then
Σ_j f(i+j) t_jk = δ_ik for i,k=0,1.
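For f itself this block is [[1,1],[1,2]], which has determinant 1, so we can write its inverse down directly. Here it is in code, together with a check of the identity above (the name t and the catch-all zero case are my additions):
> t :: Int -> Int -> Float
> t 0 0 = 2
> t 0 1 = -1
> t 1 0 = -1
> t 1 1 = 1
> t _ _ = 0
> -- Check that Σ_j f(i+j) t_jk = δ_ik for i,k = 0,1.
> inverseCheck :: Bool
> inverseCheck = and [sum [f (i+j) * t j k | j <- [0,1]]
>                     == (if i == k then 1 else 0) | i <- [0,1], k <- [0,1]]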
Now here's a neat bit: any Fibonacci-like sequence is determined by its first two elements, so it is simply a linear combination of f and Df. In particular, for k=0,1 the left hand side above, viewed as a function of i, is a Fibonacci-like sequence whose first two values are δ_0k and δ_1k, so these two sequences form a basis of S. Expanding an arbitrary Fibonacci-like sequence s in this basis gives s(i)=Σ_j,k f(i+j) t_jk s(k) for all i≥0. And as any shift of the Fibonacci sequence is itself a Fibonacci-like sequence, we can take s=D^l f.
Another matrix multiplication gives
Σ_j,k f(i+j) t_jk f(k+l) = f(i+l), where j,k are summed over 0,1 and i,l≥0.
That's our Fibonacci summation formula. It tells us how to get f(i+l) in terms of f(i+j) and f(k+l) for j,k=0,1.
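We can test the formula in code using the hypothetical t above (exact for small i and l, where all the Floats involved are still exact integers):
> sumFormulaCheck :: Int -> Int -> Bool
> sumFormulaCheck i l =
>   sum [f (i+j) * t j k * f (k+l) | j <- [0,1], k <- [0,1]] == f (i+l)
Evaluating and [sumFormulaCheck i l | i <- [0..10], l <- [0..10]] gives True.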
Using the definitions above we get
Σ_j,k <D^i⊗D^l, t_jk D^j f⊗D^k f> = <D^(i+l), f>
So Σ_j,k t_jk D^j f⊗D^k f satisfies the defining property of Δ above, and so must in fact be the same thing. So we have:
Δ(f)=2f⊗f-f⊗Df-Df⊗f+Df⊗Df
(Here I plugged in the explicit entries of the inverse matrix t given above.)
We can now use this as the basis for a recursive algorithm. We need the bootstrap function to get the recursion started:
> -- Take the first few values from the given list, then fall back to the sequence.
> bootstrap :: [Float] -> Sequence -> Sequence
> bootstrap xs v i = if i<length xs then xs!!i else v i
> -- For i >= 4 the recursion only refers to smaller indices, so it terminates.
> f' :: Sequence
> f' = bootstrap [1,1,2,3] $ extract $
>        2 .* (f' <*> f') .- (f' <*> d f') .- (d f' <*> f') .+ (d f' <*> d f')
It's faster than the usual recursion because the recursive calls are at indices around i/2 rather than i-1; for example f' 30 gives 1346269.0 without the exponential blowup of f 30. (Though that's not saying much.)
I'll leave it to you to extract out what's really going on above, and to generalise it to give sum formulae for other recurrence relations.
Note that the goal here isn't to write a fast algorithm. My main motivation is to view the sum formula as an identity about comultiplication and get some intuition about comultiplication so I can read some Hopf algebra papers.
> let's_go_slow = map f [0..]
> let's_go_faster = map f' [0..]
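Finally, a main (not in the original post) so the whole file runs as a program:
> main :: IO ()
> main = print (take 10 let's_go_faster)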
All of this, apart from the algorithm itself, is an unpacking and simplification of section 1 of Bialgebras of Recursive Sequences and Combinatorial Identities by Futia et al.
PS Tell me if you have trouble reading the above. I wrote a new pseudo-LaTeX to blogger-HTML converter and it might not actually be any good.