1. Regular Expressions
Every regular expression defines a language: the set of strings exactly matching the expression. Given a language L over an alphabet S, define ∂sL, where s is a symbol in S, to be the language made by taking every prefix of a string in L that ends in an s, and deleting that final s. For example, if the language were L={MADAM} then ∂ML={ε,MADA}, ∂AL={M,MAD} and ∂DL={MA}. We can write the regular expressions for these languages as 1+MADA, M+MAD and MA. So in this case, applying ∂s trivially gives another regular language. So here's a problem: is the set of prefixes of a regular language L, ending at s, also regular?
Let's use ∂s to define an operation on regular expressions that computes the regular expression for the prefixes.
Consider a regular expression of the form ab, where a and b are regular expressions. A prefix for the language ending at symbol s either ends in the 'a' part, corresponding to something matching ∂sa, or it ends in the 'b' part, in which case it matches a∂sb.
But so far I've just mentioned finite languages. What about the * operator? Well, a prefix of a string in a* must consist of a sequence of strings matching a, followed by a prefix of a. I.e. we need ∂s(a*)=a*∂sa.
So it's not hard to see (I hope) that
∂ss = 1
∂ss' = 0 for symbol s'≠s
∂s(a+b) = ∂sa+∂sb
∂s(ab) = ∂sa+a∂sb
∂s(a*) = a*∂sa
works nicely and proves that the answer to the above question is yes.
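Here's a minimal Haskell sketch of these rules (the Regex type, its constructor names and the function name prefix are my own choices for illustration, not taken from any library):

data Regex = Zero              -- 0, the empty language
           | One               -- 1, the empty string
           | Sym Char          -- a single symbol
           | Regex :+: Regex   -- union, a+b
           | Regex :*: Regex   -- concatenation, ab
           | Star Regex        -- Kleene star, a*
             deriving Show

infixl 6 :+:
infixl 7 :*:

-- The five rules above, plus ∂s0 = ∂s1 = 0.
prefix :: Char -> Regex -> Regex
prefix _ Zero      = Zero
prefix _ One       = Zero
prefix s (Sym s')  = if s == s' then One else Zero
prefix s (a :+: b) = prefix s a :+: prefix s b
prefix s (a :*: b) = prefix s a :+: (a :*: prefix s b)
prefix s (Star a)  = Star a :*: prefix s a

For example, with madam = foldr1 (:*:) (map Sym "MADAM"), the expression prefix 'D' madam denotes {MA}, just like the example above, once you simplify away the 0s and 1s.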
2. Dissecting types
Rather than regurgitate someone else's words, take a peek at Dissecting Data Structures. Define an operator ∂, acting on types, by ∂F X = ΔF X 1, where Δ is my poor man's version of McBride's del with a vertical line through it. From the definition of Δ you can see that it satisfies:
∂(K A) X = 0
∂Id X = 1
∂(F+G) X=∂F X+∂G X
∂(F×G) X=∂F X×G 1+F X×∂G X
If you read McBride's paper, you'll see that the ∂ operator slices a type in half, throwing away the elements in the right half, keeping just the 'shape' of the right half.
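One way to transcribe these rules into Haskell is with type families. This is just a sketch under my own naming (the building blocks K, Id, :+: and :*:, and the class Diff, aren't McBride's notation or code):

{-# LANGUAGE TypeFamilies, TypeOperators #-}

import Data.Kind (Type)
import Data.Void (Void)

-- Building blocks for polynomial functors.
newtype K a x    = K a
newtype Id x     = Id x
data (f :+: g) x = L (f x) | R (g x)
data (f :*: g) x = f x :*: g x

-- D f represents ∂F, i.e. ΔF X 1.  K (g ()) plays the role of
-- G 1: a g-shape whose elements have been thrown away.
class Diff f where
  type D f :: Type -> Type

instance Diff (K a) where
  type D (K a) = K Void                  -- ∂(K A) X = 0
instance Diff Id where
  type D Id = K ()                       -- ∂Id X = 1
instance (Diff f, Diff g) => Diff (f :+: g) where
  type D (f :+: g) = D f :+: D g         -- ∂(F+G) X = ∂F X+∂G X
instance (Diff f, Diff g) => Diff (f :*: g) where
  type D (f :*: g) = (D f :*: K (g ()))  -- ∂F X×G 1
                 :+: (f :*: D g)         -- + F X×∂G X

In the product rule you can see the slicing directly: the left factor keeps its elements of type x, while the right factor survives only as its shape, a value of type g ().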
3. Fox's Free Derivatives
Check out the definition of the Fox derivative in section 4 of Free Differential Calculus. The Fox derivative is used to compute the Alexander polynomial of a knot.
But I hope you can see that we have yet another definition that looks very similar. The common theme is differentiation with a twist: when you differentiate a product fg, one term is the usual one, fD(g), but the other term isn't D(f)g; instead, the g is replaced by a drastically simplified version of itself (either 1, or g(1)). Both parts 1 and 2 have a notion of chopping something in half at some point and keeping the left half. (Actually, you could tweak the definition in part 2 so that g is replaced by 1 too; then you'd no longer preserve the shape of the right half of the type, you'd really have a slicing operator, and the analogies would become clearer.)
I just need to relate the * operator in part 1 to the derivative of the inverse in the free derivative. Let me do this in a hand-waving manner: conceptually, a* = 1+a+a²+…, which might be considered the same as 1/(1-a). Then ∂(1/(1-a)) = (using 4(ii)) 1/(1-a)∂a = a*∂a.
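Spelling out that last step (and reading 4(ii) as the rule for inverses, ∂(u⁻¹) = -u⁻¹∂u, with ∂ extended additively so that ∂1 = 0):

∂(1/(1-a)) = -(1-a)⁻¹∂(1-a) = -(1-a)⁻¹(-∂a) = (1-a)⁻¹∂a = a*∂a

which matches the ∂s(a*) = a*∂sa rule from part 1.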
So what does it all mean? I've no idea. I'm just pointing out that the Fox derivative, or something like it, has what I think are hitherto unnoticed applications to computer science. Anyway, some of this will become clearer when I post some Haskell code implementing no fewer than five different types of derivatives of regular expressions (some time in the next week, when I have a spare moment), and that should partially answer the question I posed on λtU.
On further reflection, I can say something about what this means. The way to view these derivatives is as operations that chop things in 'half', destroying something in the middle. The Leibniz rule corresponds to the notion that when you chop a product you must chop either in the left side of the product or in the right side. In the usual Leibniz rule you reassemble the pieces on either side, leaving a hole. With the Fox derivative you throw away what's on the right. With the ∂ operator in part 2 you keep only the shape of what's on the right. And McBride's Δ operator gives you a choice of what to do with the left and right hand sides.
There's a difference though between the "+" in regular expressions and the "+" in types: for a regular expression r, r+r=r, but for a type T, T+T=2T. I'm not sure if this could be a problem in your analogy.
I don't think there's a problem with T+T behaving differently from r+r. These operators aren't exactly the same thing because they're acting in different algebraic structures, so I don't expect identical theorems about derivatives to hold in both. I'm just expecting to see some commonality of behaviour, just as there's some commonality between holes in types and ordinary calculus.
A very small URL problem for "Free Differential Calculus".