## Saturday, May 10, 2008

### Desingularisation and its applications

```
> {-# OPTIONS -fno-warn-missing-methods #-}
```

Here's a neat trick you can carry out with the Dual class that I've talked about many times before. Suppose you define a function like
```
> sinc x = sin x/x
```

Clearly this function has a singularity at x=0 where the division isn't defined. But this is an example of a removable singularity because it is possible to replace this function with another that is continuous (in fact, analytic) everywhere and is well defined at x=0. We could do this simply by defining:
```
> sinc' 0 = 1
> sinc' x = sin x/x
```

but it'd be cool if we could automate the process. Here's a function that does exactly that:
```
> desingularise f x = re (f (D x 1))
```

As if by magic, we can now compute
```
> ex1 = desingularise sinc 0
```

We can try other functions too:
```
> ex2 = desingularise (\x -> x/(exp x-1)) 0
```

So how does it work? Well I've modified my usual definition of (/) so that it is able to divide one infinitesimal number by another:
```
> instance Fractional a => Fractional (Dual a) where
>   D 0 a'/D 0 b' = D (a'/b') 0
>   D a a'/D b b' = D (a/b) ((b*a'-a*b')/(b*b))
```

desingularise 'perturbs' the value of its argument infinitesimally so that when we pass in an argument of zero we end up evaluating our function not at zero, but somewhere infinitesimally close. So our division by zero becomes division of infinitesimals and the above definition comes into play. The first clause of the definition can also be viewed as an application of l'Hôpital's rule.
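To make the trace concrete, here is a self-contained version of that evaluation (my assembly of the definitions from the end of the post, with the steps written out as comments):

```haskell
{-# OPTIONS_GHC -fno-warn-missing-methods #-}
-- The Dual definitions from the end of the post, gathered in one place.
data Dual a = D a a deriving (Show, Eq)

re (D a _) = a

instance Num a => Num (Dual a) where
  fromInteger i   = D (fromInteger i) 0
  D a a' + D b b' = D (a+b) (a'+b')
  D a a' - D b b' = D (a-b) (a'-b')
  D a a' * D b b' = D (a*b) (a*b'+a'*b)

instance Fractional a => Fractional (Dual a) where
  D 0 a' / D 0 b' = D (a'/b') 0
  D a a' / D b b' = D (a/b) ((b*a'-a*b')/(b*b))

instance Floating a => Floating (Dual a) where
  sin (D a a') = D (sin a) (cos a*a')
  cos (D a a') = D (cos a) (negate (sin a)*a')
  exp (D a a') = D (exp a) (exp a*a')

sinc x = sin x/x
desingularise f x = re (f (D x 1))

-- Trace of desingularise sinc 0:
--   sin (D 0 1) = D (sin 0) (cos 0 * 1) = D 0 1
--   D 0 1 / D 0 1  -- both arguments infinitesimal, first clause fires:
--               = D (1/1) 0 = D 1 0
--   re (D 1 0) = 1
main :: IO ()
main = do
  print (desingularise sinc (0 :: Double))                   -- 1.0
  print (desingularise (\x -> x/(exp x - 1)) (0 :: Double))  -- 1.0
```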

But it doesn't solve all of our problems. For example
```
> ex3 = desingularise (\x -> sin x^2/x^2) 0
```

doesn't quite work. You need two applications of l'Hôpital's rule. We can implement this easily with
```
> ex4 = desingularise (desingularise (\x -> sin x^2/x^2)) 0
```
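The nested call is worth pausing on: the inner `desingularise` runs at type `Dual (Dual Double)`, so both dual levels carry an infinitesimal. A self-contained check (my assembly of the post's definitions) showing one application giving NaN and two giving the right answer:

```haskell
{-# OPTIONS_GHC -fno-warn-missing-methods #-}
data Dual a = D a a deriving (Show, Eq)
re (D a _) = a

instance Num a => Num (Dual a) where
  fromInteger i   = D (fromInteger i) 0
  D a a' + D b b' = D (a+b) (a'+b')
  D a a' - D b b' = D (a-b) (a'-b')
  D a a' * D b b' = D (a*b) (a*b'+a'*b)

instance Fractional a => Fractional (Dual a) where
  D 0 a' / D 0 b' = D (a'/b') 0
  D a a' / D b b' = D (a/b) ((b*a'-a*b')/(b*b))

instance Floating a => Floating (Dual a) where
  sin (D a a') = D (sin a) (cos a*a')
  cos (D a a') = D (cos a) (negate (sin a)*a')
  exp (D a a') = D (exp a) (exp a*a')

desingularise f x = re (f (D x 1))

main :: IO ()
main = do
  -- one application: sin x^2 and x^2 both reduce to D 0 0, a genuine 0/0
  print (desingularise (\x -> sin x^2/x^2) (0 :: Double))                  -- NaN
  -- two applications: the inner call works at type Dual (Dual Double)
  print (desingularise (desingularise (\x -> sin x^2/x^2)) (0 :: Double)) -- 1.0
```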

That's fine if you know how bad your singularity is in advance. If you don't, you can use power series instead. None of these methods will work for this function however:
```
> horrible x = exp (-1/x^2)
```

Anyway, the fact that it doesn't work for all functions doesn't detract from its being a pretty approach to solving the problem when you have a fairly well-behaved singularity.

I've not actually seen anyone else use dual numbers to smooth over singularities in this way, but a few years ago I did come across something very closely related that apparently caused a little controversy in the computational geometry world.

Suppose you want to determine whether or not a point lies within a possibly non-convex and arbitrarily complicated polygon. One algorithm is this: follow a line in any direction from the point to infinity (though just outside the bounding box of the polygon would do) and count how many times you cross the boundary of the polygon. If you count an odd number of crossings you're in the polygon, otherwise you're outside. It's beautifully simple and if nobody had thought of it before it'd be worthy of publication. But like the methods described in many computational geometry papers I've read, this one often fails in practice.

There are at least two sources of difficulty. Firstly, computational geometry algorithms are subject to numerical errors. You often find yourself having to decide whether a point is on one side or the other of a line, say, and getting inconsistent results because two different ways of computing the same thing make different decisions because of rounding. That can be fixed by using something like arbitrary-precision rational arithmetic. But even if you do this, there is still a problem.

When you follow a line from the point you're testing you may find that it doesn't simply cross one of the sides of the polygon, it may travel precisely through a vertex. If your code isn't written carefully you may accidentally count that as zero, one or two crossings. It's easy to write sloppy code that gets it right most of the time, but it's hard to work out every possible case where some kind of degeneracy might happen. What happens if the outward ray is actually collinear with a polygon face? Do vertex intersections and collinearities cover everything?

An ugly kludge is this: at the first sign of trouble, restart the algorithm but jiggle all the inputs randomly a bit. For a large class of algorithms this will work tolerably well, depending on your definition of tolerable. (Compare with Sard's theorem in Morse theory.) But it really is ugly, especially if you've gone through the hassle of using arbitrary precision arithmetic. But maybe you can see what's coming...

Back in '94 Edelsbrunner and Mücke came up with the idea of deforming the geometry infinitesimally, giving their method the utterly bizarre name "Simulation of Simplicity". I think it took people a while to realise that this was essentially the same trick used in automatic differentiation, and so the whole notion of an infinitesimal perturbation left people wondering what in fact they were doing. More serious was this: writing robust computational geometry code that catches all of the degenerate cases is hard. How could such a simple idea, slotted polymorphically into a sloppily written algorithm, turn it into a watertight one? Was everyone in computational geometry going to be put out of work?

Luckily for the computational geometers it still takes quite a bit of work to get the details exactly right, and even when you do, the result isn't necessarily an efficient solution to the problem. I know that when I had a messy computational geometry problem to solve while working on MI:2, I just wrote code that turned a blind eye to degeneracies and aggressively pruned away anything that looked suspicious, like near-zero-length lines and near-zero-area faces. If a stray pixel poked through somewhere it shouldn't, and tweaking the tolerance didn't fix it, an artist found a way to hide it. Sometimes beauty and elegance take second place to getting the job done on time.

```
> data Dual a = D a a deriving (Show,Eq)
>
> re (D a _) = a
> im (D _ b) = b
>
> instance Num a => Num (Dual a) where
>   fromInteger i = D (fromInteger i) 0
>   (D a a')+(D b b') = D (a+b) (a'+b')
>   (D a a')-(D b b') = D (a-b) (a'-b')
>   (D a a')*(D b b') = D (a*b) (a*b'+a'*b)
>
> instance Floating a => Floating (Dual a) where
>   cos (D a a') = D (cos a) (-sin a*a')
>   sin (D a a') = D (sin a) (cos a*a')
>   exp (D a a') = D (exp a) (exp a*a')
```

michiexile said...

So, could one use something like fix desingularize or similar to take care of all finitely removable singularities?

Wednesday, 14 May, 2008 sigfpe said...

Mikael,

There's a subtlety with this. If you apply desingularise to a function of type a -> a then internally it creates a function of type Dual a -> Dual a. So if you were to write a function that could apply desingularise n times then internally it must create objects of type Dual (Dual (Dual (... a))). I don't think the Haskell type system would allow the writing of such a function, though I'm not sure.

Wednesday, 14 May, 2008 Anonymous said...

So... i haven't taken the opportunity to read your postings on infinitesimals in detail, but this idea of desingularization caught my eye as related to something i've wanted to pursue. Robinson and others have shown that non-standard analysis proofs can be reduced to standard ones. i've often thought that there might be a lovely analogy to proof normalization (cut reduction), which (via Curry-Howard) is beta-reduction in disguise. If so, there might be a computational method/engine for real computation lurking there. In other words, can we derive a beta-reduction-like method for real computation out of the algorithm for turning non-standard (analysis) proofs into standard (analysis) proofs? Or, is there a solution to the "equation"

cut-elimination : beta-reduction :: non-standard analysis to standard analysis proof xform : X

Wednesday, 14 May, 2008 sigfpe said...

Meredith,

I suspect that the metatheorem that reduces proofs using infinitesimals to proofs not using them is far too complex to consider via Curry-Howard. Isn't it carried out with respect to an axiom system (ZF, I guess) that doesn't have an obvious computational interpretation via Curry-Howard? But otherwise what you're saying makes perfect sense to me.

Wednesday, 14 May, 2008 Anonymous said...

It's entirely possible what you say is true, but since LL you'd be surprised what has a Curry-Howard (CH) style presentation. A really good example is the work of Caires on a semantic approach to types for concurrency. See http://ctp.di.fct.unl.pt/~lcaires/papers/CALCO-Caires-1.0.pdf

Also, i've been thinking about what the actual mathematical content of CH is and i'm starting to suspect it's captured rather simply. See my series of postings at biosimilarity.blogspot.com

Wednesday, 14 May, 2008 Soča said...

There is a minor bug in the desingularisation code: it is putting a zero dual part on the result of (D 0 a')/(D 0 b'). Here is an example that manifests the bug:

```
> desingularise (\x -> ((x*x)/x)/x) 0
```

I think the right thing to do is to put "D (a'/b') undefined" where the current version reads "D (a'/b') 0".
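For reference, a self-contained check of the reported behaviour (my assembly of the post's definitions, not the commenter's code). With the first clause as posted, the zeroed dual part of the intermediate result means the second division never sees the infinitesimal it needs, so the answer comes out as 0 even though the limit is 1:

```haskell
{-# OPTIONS_GHC -fno-warn-missing-methods #-}
data Dual a = D a a deriving (Show, Eq)
re (D a _) = a

instance Num a => Num (Dual a) where
  fromInteger i   = D (fromInteger i) 0
  D a a' + D b b' = D (a+b) (a'+b')
  D a a' - D b b' = D (a-b) (a'-b')
  D a a' * D b b' = D (a*b) (a*b'+a'*b)

instance Fractional a => Fractional (Dual a) where
  D 0 a' / D 0 b' = D (a'/b') 0   -- the clause under discussion
  D a a' / D b b' = D (a/b) ((b*a'-a*b')/(b*b))

desingularise f x = re (f (D x 1))

-- (x*x)/x evaluates to D 0 0: the value 0 is right, but the dual part
-- should record a derivative of 1. The final division then sees 0/0 in
-- the dual parts too, and silently returns 0. With "D (a'/b') undefined"
-- instead, forcing the lost dual part raises an error rather than
-- producing a wrong number.
main :: IO ()
main = print (desingularise (\x -> ((x*x)/x)/x) (0 :: Double))  -- 0.0
```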
--
Barak A. Pearlmutter <barak@cs.nuim.ie>

Tuesday, 27 May, 2008