The first trick was manipulating the operator D=d/dx as if it were an ordinary variable. Along with the rule f(D)exp(ax)=exp(ax)f(D+a), you can solve a wide variety of differential equations trivially. For example, to find a particular solution to

d^2y/dx^2-5dy/dx+6y=x^2 exp(x)

write

(D-2)(D-3)y=x^2 exp(x)

y = 1/((D-2)(D-3)) exp(x) x^2

= exp(x) 1/((D-1)(D-2)) x^2

= exp(x) (1/2+3D/4+7D^2/8+...) x^2 (Taylor series in D)

= (x^2/2+3x/2+7/4) exp(x)

(I haven't checked that, BTW...) For complex problems this was a lot easier than the method I was originally taught: guess the form of the solution with some unknown constants and then solve for the constants.
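A quick machine check of the derivation above, using sympy and taking the equation in its factored form (D-2)(D-3)y = x^2 exp(x), i.e. with -5D as the middle coefficient (a sketch, not part of the original post):

```python
import sympy as sp

x, D = sp.symbols('x D')

# Step 1: treat D as an ordinary variable and expand 1/((D-1)(D-2))
# as a Taylor series about D=0. By hand: 1/2 + 3D/4 + 7D^2/8 + ...
series = sp.series(1/((D - 1)*(D - 2)), D, 0, 3).removeO()
assert sp.expand(series - (sp.Rational(1, 2) + 3*D/4 + 7*D**2/8)) == 0

# Step 2: the solution read off by applying the series to x^2
y = (x**2/2 + sp.Rational(3, 2)*x + sp.Rational(7, 4)) * sp.exp(x)

# Step 3: check it against (D-2)(D-3)y = y'' - 5y' + 6y = x^2 exp(x)
lhs = sp.diff(y, x, 2) - 5*sp.diff(y, x) + 6*y
assert sp.simplify(lhs - x**2*sp.exp(x)) == 0
```

Both asserts pass, so the series coefficients and the final solution check out for the factored equation.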

I've rarely seen this presented as a method for solving differential equations in a textbook. I actually learnt it from a schoolteacher. In my day we did differential equations at school. (That's high school for our American brethren.)

Anyway, I only found out in the last couple of years that this idea was due to Heaviside, who in fact used 'p' instead of 'D'. There was a blog entry about it over at ChapterZero recently. It was pretty controversial stuff in its day, but the work was eventually placed on a rigorous foundation.

The other trick I used quite a bit was a method for solving recurrence equations. Write x^[r] to mean x(x-1)(x-2)...(x-r+1). Define the linear operator d by df(x)=f(x+1)-f(x). Then d x^[r]=rx^[r-1]. Notice the formal similarity to D x^r = rx^(r-1). What this gives is a vague 'metatheorem' which says that many theorems about the calculus of polynomials in x can be translated directly into theorems about difference equations by replacing x^r with x^[r] and D with d. This meant I could use many of Heaviside's tricks on difference equations.
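The rule d x^[r] = r x^[r-1] is easy to verify symbolically; here is a sketch using sympy's falling factorial `ff` (the helper name `delta` is my own):

```python
import sympy as sp

x = sp.symbols('x')

def delta(f):
    """The forward difference operator d: (df)(x) = f(x+1) - f(x)."""
    return sp.expand(f.subs(x, x + 1) - f)

for r in range(1, 6):
    # x^[r] = x(x-1)...(x-r+1) is sympy's falling factorial ff(x, r)
    lhs = delta(sp.expand(sp.ff(x, r)))
    rhs = sp.expand(r * sp.ff(x, r - 1))
    assert sp.simplify(lhs - rhs) == 0
```

For instance, with r=3: d[x(x-1)(x-2)] = (x+1)x(x-1) - x(x-1)(x-2) = 3x(x-1), which is exactly 3 x^[2].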

Anyway, this trick is an example of umbral calculus, a term invented by Sylvester. I had a vague idea of this before as I found some obscure articles about it a couple of years ago. But I just found an entry on Wikipedia that explains what it's all about. Turns out there are many families of polynomials, P_n(x), that have the property that P_n(x) acts a lot like x^n. The word 'umbral' comes from the fact that the 'n' is a kind of fake shadowy exponent that isn't actually an exponent. The 'metatheorem' I mentioned above can be used for many other series of polynomials too, and that's what umbral calculus is all about. Again, this stuff was controversial in its day but was eventually made rigorous by Gian-Carlo Rota.
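One concrete instance of that metatheorem, as a sketch: Vandermonde's identity is the umbral analogue of the binomial theorem, with x^r replaced by x^[r] throughout. It checks out symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y')
n = 4

# Binomial theorem:        (x+y)^n   = sum_k C(n,k) x^k   y^(n-k)
# Umbral analogue
# (Vandermonde's identity): (x+y)^[n] = sum_k C(n,k) x^[k] y^[n-k]
# where x^[r] is the falling factorial, sympy's ff(x, r).
lhs = sp.ff(x + y, n)
rhs = sum(sp.binomial(n, k) * sp.ff(x, k) * sp.ff(y, n - k)
          for k in range(n + 1))
assert sp.expand(lhs - rhs) == 0
```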

I find it curious that both the methods I was using were only put on a rigorous footing relatively recently. Maybe that explains why they're not in textbooks - they still haven't shaken off the stigma of being 'tricks' despite being methods that are easy to use and have plenty of applications.

BTW, you can combine the above tricks by noting that d=exp(D)-1 (by Taylor's theorem, f(x+1)=f(x)+f'(x)+f''(x)/2!+...=exp(D)f(x)). Note that the 'ratio' of the two operators I've been discussing is D/(exp(D)-1), the generating function of the Bernoulli numbers. Thinking about this is how I was eventually led to the paper I'm trying to write up on Brion's theorem.
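Expanding t/(exp(t)-1) really does produce the Bernoulli numbers (with sympy's convention B_1 = -1/2); a quick sketch:

```python
import sympy as sp

t = sp.symbols('t')

# t/(exp(t)-1) = sum_n B_n t^n / n!  -- the Bernoulli generating function.
# The singularity at t=0 is removable, so the series starts at t^0.
s = sp.series(t/(sp.exp(t) - 1), t, 0, 6).removeO()
for n in range(6):
    assert s.coeff(t, n) == sp.bernoulli(n) / sp.factorial(n)
```

The first few coefficients are 1, -1/2, 1/12, 0, -1/720, ..., i.e. B_0=1, B_1=-1/2, B_2=1/6, B_3=0, B_4=-1/30 divided by the corresponding factorials.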

## 1 comment:

Interesting, but there are some glitches:

(D-3)(D-2)=D^2-5D+6, but the original equation has -6D. So you're solving the equation with -5D instead, which is impressive enough. In one line the exp function is missing, and the unused last term of the Taylor series is wrong (very clever, by the way: the series is infinite, but applied to a polynomial almost all terms vanish...).

I don't understand the equation for exchanging f(D) and exp(ax) at all; I must think more about it.

At school, we only multiplied the dx away from d/dx and then integrated; and I thought that was strange.