Over the years many people have felt that mathematicians have made life too easy for themselves by using axioms that seem much like the proverbial sledgehammer cracking open a nut. Look at the way analysts and algebraists will wield the Axiom of Choice when they can prove what they need with much weaker axioms. Wouldn't it be more enlightening to carry out these proofs using the weakest axiom system possible so we can see exactly what is needed to make them provable?
It turns out that this is more or less what reverse mathematicians do. They take theorems such as the well-known Bolzano-Weierstrass theorem and try to figure out precisely how much mathematical machinery is required to prove them. There is no need to use all of ZFC, or even ZF, to prove such a result.
So reverse mathematicians sometimes start with something simpler like second-order arithmetic. Its universe contains integers and sets of integers, and nothing else. It can still talk about real numbers by encoding them as Cauchy sequences of rationals in the usual way, but there is no way to encode arbitrary sets of reals like this. Perhaps surprisingly, it is possible to represent continuous functions: the rationals are dense in the reals, so a continuous function is completely determined by the values it takes at rational numbers. It turns out that almost all classical mathematics can be encoded in this system even though it's much weaker than ZF.
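These codings are concrete enough to play with directly. Here's a little sketch in Python (the function names and error-bound conventions are my own choices for illustration, not anything standard): a real is coded as a function that, given n, returns a rational within 2⁻ⁿ of it, and a continuous function can then be applied to a coded real by applying it to sufficiently good rational approximations.

```python
from fractions import Fraction

def sqrt2(n):
    """A coded real: return a rational within 2**-n of sqrt(2).

    Newton's method from 3/2 stays above sqrt(2), so x*x - 2 > 0
    throughout, and |x - sqrt(2)| < (x*x - 2) / 2 once x > 1.
    """
    x = Fraction(3, 2)
    while x * x - 2 >= Fraction(1, 2 ** (n + 1)):
        x = (x + 2 / x) / 2
    return x

def apply_square(real, n):
    """Apply the continuous function x -> x^2 to a coded real.

    For |x| <= 2, an input approximation accurate to 2**-(n+3)
    gives an output accurate to 2**-n, since |q^2 - x^2| =
    |q - x| * |q + x| and |q + x| is comfortably below 8.
    """
    q = real(n + 3)
    return q * q
```

The point is that the "modulus of continuity" bookkeeping in `apply_square` is exactly the sort of data a second-order-arithmetic proof has to carry around explicitly.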
But usually reverse mathematicians like to go weaker still and work with recursive comprehension (RCA₀), which is essentially basic arithmetic with a restricted induction scheme and a comprehension axiom that only guarantees the existence of recursive (computable) sets of integers. Using these weak tools it's still possible to prove things like the Intermediate Value Theorem, the uncountability of the reals, and even results about metric spaces such as the Baire category theorem (but only for separable spaces).
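It's no accident that the Intermediate Value Theorem survives in such a weak system: the usual bisection argument is essentially an algorithm. Here's a minimal sketch of that idea using only rational arithmetic (my own illustration, not the formal development, which has to deal separately with the awkward case where the function vanishes at a point we test):

```python
from fractions import Fraction

def bisect(f, a, b, n):
    """Return a rational within 2**-n of a root of f on [a, b],
    assuming f(a) < 0 < f(b) and f is nonzero at every midpoint
    we happen to test."""
    a, b = Fraction(a), Fraction(b)
    while b - a > Fraction(1, 2 ** n):
        m = (a + b) / 2
        if f(m) < 0:
            a = m
        else:
            b = m
    return a

# x^2 - 2 changes sign on [1, 2] and is never zero at a rational,
# so bisection closes in on sqrt(2).
root = bisect(lambda x: x * x - 2, 1, 2, 10)
```

Each step of the loop only ever constructs rational numbers, which is the kind of object recursive comprehension is happy to certify exists.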
One thing that has always intrigued me about number theory is how many theorems are proved by making excursions into analysis. For example, think about results in analytic number theory that can easily be stated in the language of Peano's axioms, and yet whose proofs require techniques like contour integration that make reference to infinite objects and limits. In 1988 Stephen G. Simpson proved that such theorems can in fact be reduced to "primitive recursive arithmetic" and hence turned into "finitistic" proofs. On the other hand, I'm not sure that these proofs are necessarily going to give any insight into why the theorems are true. They probably end up being as unwieldy as Bourbaki's definition of the number one.
Monday, February 06, 2006
Dan Piponi