## Friday, December 24, 2010

### Generalising Gödel's Theorem with Multiple Worlds. Part II.

#### Introduction

Last time we looked at a method for testing whether propositions of provability logic were valid by looking at the consequences of propositions within nested collections of worlds. This lends itself naturally to an algorithm that we can implement in Haskell. The diagrams I used last time are a variant on what are known as tableaux. But tableaux can be used in a number of ways and so we need code that is suitably generalised. In the code that follows I've ensured that the core algorithm has an interface that is suitable for carrying out the four tasks I'll demand of it. This means that the code leans more towards practicality than mathematical elegance.

And I apologise in advance: this post is mostly a bunch of implementation details. We'll get back to some mathematics next time. Nonetheless, by the end of this post you'll have working code to test the validity of propositions of provability logic.

Central to tableau algorithms is pattern matching. I'd like the Haskell pattern matcher to do much of the work, and to increase its flexibility I'll need this extension:

```> {-# LANGUAGE ViewPatterns #-}

```
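If ViewPatterns is new to you: it lets a pattern apply a function to the value being matched and then match on the function's result. Here's a tiny standalone sketch (`parity` and `describe` are invented for illustration, not part of this post's program):

```haskell
{-# LANGUAGE ViewPatterns #-}

-- A view function maps the value being matched into a shape that
-- ordinary patterns can then take apart.
parity :: Int -> Either Int Int
parity n = if even n then Left n else Right n

-- The pattern (parity -> Left m) applies parity to the argument
-- and matches the result against Left m.
describe :: Int -> String
describe (parity -> Left m)  = show m ++ " is even"
describe (parity -> Right m) = show m ++ " is odd"
```

So describe 6 is "6 is even". The propType patterns later in this post work the same way, with propType as the view function.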
We'll need these libraries too:

```> import Control.Applicative
> import Control.Arrow
> import Data.Function
> import Data.List
> import Data.Maybe
> import Text.Show

```
#### Logical Propositions

Now we'll need a bunch of logical operators. The first three fixity declarations are for constructors of the proposition type defined below, and the rest are for sugar to make expressions look nicer:

```> infixr 1 :->
> infixr 2 :\/
> infixr 3 :/\

> infixr 1 -->
> infixr 1 <--
> infixr 1 <->
> infixr 2 \/
> infixr 3 /\

> (\/)    = (:\/)
> (/\)    = (:/\)
> (-->)   = (:->)
> (<--)   = flip (:->)
> p <-> q = (p :-> q) :/\ (q :-> p)

```
Here's our basic proposition type:

```> data Prop = Letter String
>           | Prop :\/ Prop
>           | Prop :/\ Prop
>           | Prop :-> Prop
>           | Box Prop
>           | Dia Prop
>           | F
>           | T
>           | Neg Prop deriving (Eq, Ord)

```
I want show to know about operator precedence for propositions:

```> instance Show Prop where
>     showsPrec p (a :/\ b) = showParen (p>3) $ showsPrec 3 a . showString " /\\ " . showsPrec 3 b
>     showsPrec p (a :\/ b) = showParen (p>2) $ showsPrec 2 a . showString " \\/ " . showsPrec 2 b
>     showsPrec p (a :-> b) = showParen (p>1) $ showsPrec 1 a . showString " --> " . showsPrec 1 b
>     showsPrec p (Neg r)   = showParen (p>4) $ showString "Neg " . showsPrec 5 r
>     showsPrec p (Box r)   = showParen (p>4) $ showString "Box " . showsPrec 5 r
>     showsPrec p (Dia r)   = showParen (p>4) $ showString "Dia " . showsPrec 5 r
>     showsPrec p (Letter n) = showParen (p>5) $ showsPrec 6 n
>     showsPrec p T         = showString "T"
>     showsPrec p F         = showString "F"

```
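The showParen-and-precedence idiom used above can be seen in isolation on a made-up two-operator type (E, Lit, :+: and :*: are invented for this sketch):

```haskell
-- Multiplication binds tighter (7) than addition (6); showParen
-- wraps a subterm in parentheses only when the surrounding context
-- has higher precedence than the operator being printed.
data E = Lit String | E :+: E | E :*: E

instance Show E where
  showsPrec p (a :+: b) = showParen (p > 6) $
    showsPrec 7 a . showString " + " . showsPrec 7 b
  showsPrec p (a :*: b) = showParen (p > 7) $
    showsPrec 8 a . showString " * " . showsPrec 8 b
  showsPrec _ (Lit s)   = showString s
```

So show (Lit "a" :*: (Lit "b" :+: Lit "c")) gives "a * (b + c)", while show (Lit "a" :+: (Lit "b" :*: Lit "c")) gives "a + b * c".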
Some simple rules for simplifying logical expressions:

```> simplify p = let simplify' (a :\/ F) = a
>                  simplify' (F :\/ b) = b
>                  simplify' (a :/\ T) = a
>                  simplify' (T :/\ b) = b
>                  simplify' (a :\/ T) = T
>                  simplify' (T :\/ b) = T
>                  simplify' (a :/\ F) = F
>                  simplify' (F :/\ b) = F
>                  simplify' (F :-> b) = T
>                  simplify' (T :-> b) = b
>                  simplify' (a :-> F) = Neg a
>                  simplify' (a :-> T) = T
>                  simplify' (Neg T) = F
>                  simplify' (Neg F) = T
>                  simplify' (Box T) = T
>                  simplify' (Dia F) = F
>                  simplify' z = z
>    in case p of
>        a :/\ b -> let a' = simplify a
>                       b' = simplify b
>                   in simplify' (a' :/\ b')
>        a :\/ b -> let a' = simplify a
>                       b' = simplify b
>                   in simplify' (a' :\/ b')
>        a :-> b -> simplify' (simplify a :-> simplify b)
>        Box a   -> simplify' (Box (simplify a))
>        Dia a   -> simplify' (Dia (simplify a))
>        Neg (Neg a) -> simplify a
>        a           -> a

```
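The order matters here: subterms are simplified before the rules at the root are applied, so constants propagate all the way outwards in a single pass. A standalone fragment with just conjunction and disjunction (S, SV, :&& and :|| are invented names, not this post's Prop) shows the idea:

```haskell
data S = SV String | ST | SF | S :&& S | S :|| S deriving (Eq, Show)

-- Simplify children first, then apply the unit and absorbing
-- element rules at the root.
simp :: S -> S
simp (a :&& b) = case (simp a, simp b) of
  (ST, b') -> b'
  (a', ST) -> a'
  (SF, _ ) -> SF
  (_ , SF) -> SF
  (a', b') -> a' :&& b'
simp (a :|| b) = case (simp a, simp b) of
  (SF, b') -> b'
  (a', SF) -> a'
  (ST, _ ) -> ST
  (_ , ST) -> ST
  (a', b') -> a' :|| b'
simp s = s
```

For example simp (ST :&& (SV "x" :|| SF)) reduces all the way to SV "x".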
#### Kinds of Proposition

I'm actually going to use more than one proposition type. So when we do case analysis I need to make my patterns more abstract so they can work with multiple types. I'm going to use the PropType type to represent the ways we're going to classify logical propositions. The types are:

1. Atomic: A single letter or its negation
2. Constant: Simply T or F or a negation thereof.
3. DoubleNegation: a doubly negated proposition like ¬¬a.
4. Disjunction: used to represent things like a∨b or ¬(a∧b).
5. Conjunction: used to represent things like a∧b or ¬(a∨b).
6. Provability: These are statements about provability like those starting with ◻ or ¬◊.
7. Consistency: These are statements about consistency like those starting with ◊ or ¬◻.

As this type is a simple container we can make it a functor too.

```> data PropType a = Atomic a
>                 | Constant a
>                 | DoubleNegation a
>                 | Disjunction a a
>                 | Conjunction a a
>                 | Provability a
>                 | Consistency a

> instance Functor PropType where
>     fmap f (Atomic a)         = Atomic (f a)
>     fmap f (Constant a)       = Constant (f a)
>     fmap f (DoubleNegation a) = DoubleNegation (f a)
>     fmap f (Provability a)    = Provability (f a)
>     fmap f (Consistency a)    = Consistency (f a)
>     fmap f (Conjunction a b)  = Conjunction (f a) (f b)
>     fmap f (Disjunction a b)  = Disjunction (f a) (f b)

```
I'll introduce a typeclass that will allow us to use the PropType view to query propositions:

```> class PropTypeable a where
>     propType :: a -> PropType a
>     neg      :: a -> a
>     isF      :: a -> Bool
>     negative :: a -> Bool
>     positiveComponent :: a -> Prop

```
And now here we have the cases that I summarised in English above:

```> instance PropTypeable Prop where
>     propType (a :\/ b)        = Disjunction a b
>     propType (a :/\ b)        = Conjunction a b
>     propType (Neg (a :\/ b))  = Conjunction (Neg a) (Neg b)
>     propType (Neg (a :/\ b))  = Disjunction (Neg a) (Neg b)
>     propType (a :-> b)        = Disjunction (Neg a) b
>     propType (Neg (a :-> b))  = Conjunction a (Neg b)
>     propType (Neg (Neg a))    = DoubleNegation a
>     propType (Box a)          = Provability a
>     propType (Neg (Box a))    = Consistency (Neg a)
>     propType (Dia a)          = Consistency a
>     propType (Neg (Dia a))    = Provability (Neg a)
>     propType (Letter a)       = Atomic (Letter a)
>     propType (Neg (Letter a)) = Atomic (Neg (Letter a))
>     propType T                = Constant T
>     propType F                = Constant F
>     propType (Neg F)          = Constant T
>     propType (Neg T)          = Constant F
>     neg                       = Neg
>     isF F                     = True
>     isF (Neg T)               = True
>     isF _                     = False
>     positiveComponent (Neg a) = a
>     positiveComponent a       = a
>     negative (Neg _)          = True
>     negative _                = False

```
It'll be a while before we need the full generality so it's going to seem like overkill for the moment!

And some pre-packaged letters for convenience:

```> [a, b, c, d, p, q, r, s, t] = map (Letter . return) "abcdpqrst"

```
We're going to need some operations that act on lists.

placesWhere finds all of the elements of a list for which some predicate holds. Instead of just listing the elements that match, it lists the elements paired with the rest of the list after the matching element is removed. We can think of these pairs as elements and their surrounding context:

```> placesWhere p []     = []
> placesWhere p (x:xs) = let r = map (second (x:)) $ placesWhere p xs
>                        in if p x then ((x, xs) : r) else r

```
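To see what this does on a plain list, here's the same definition again as a standalone snippet you can load on its own:

```haskell
import Control.Arrow (second)

-- Pair each element satisfying the predicate with its context:
-- the list that remains when that element is deleted.
placesWhere :: (a -> Bool) -> [a] -> [(a, [a])]
placesWhere _ []     = []
placesWhere p (x:xs) = let r = map (second (x:)) (placesWhere p xs)
                       in if p x then (x, xs) : r else r
```

placesWhere even [1,2,3,4] gives [(2,[1,3,4]),(4,[1,2,3])]: each even element together with the rest of the list.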
This finds something in the intersection of two sets using a given 'equality' predicate for matching. As we may be using a predicate different from == we need to see both of the (possibly different) elements that satisfy the predicate.

```> findIntersection eq a b = listToMaybe [(x, y) | x <- a, y <- b, x `eq` y]

```
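For instance, with a looser 'equality' than (==) (the absolute-value predicate here is just for illustration):

```haskell
import Data.Maybe (listToMaybe)

-- Return the first pair (x, y), one element from each list, that
-- the given predicate relates, if any such pair exists.
findIntersection :: (a -> b -> Bool) -> [a] -> [b] -> Maybe (a, b)
findIntersection eq a b = listToMaybe [(x, y) | x <- a, y <- b, x `eq` y]
```

findIntersection (\x y -> abs x == abs y) [1,2,3] [-2,-5] gives Just (2,-2); we're handed both witnesses because they needn't be equal as values.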
Sometimes we'll meet propositions that we can start to reason about using ordinary propositional calculus. These will match the propositional predicate:

```> propositional (propType -> DoubleNegation _) = True
> propositional (propType -> Conjunction _ _)  = True
> propositional (propType -> Disjunction _ _)  = True
> propositional _                              = False

```
On the other hand the provability predicate is used to identify propositions that need rules pertaining to provability and consistency:

```> provability (propType -> Provability _) = True
> provability (propType -> Consistency _) = True
> provability _                           = False

```
#### The Algorithm

And now we're almost ready to implement the tableau rules. Because we'll be using tableaux in a number of different ways I need lots of hooks into the algorithm that can perform different operations. I've collected all of these hooks into a single type. The algorithm will take a proposition of type prop (which will be Prop for the first three cases) and produce something of type result.

```> data TableauRules prop result = TableauRules {

```
If our result corresponds to a world that is self-contradictory it is said to close. Here's how we indicate a closed (and hence not really existing) world. In the simplest case we won't actually store any information about a world, just whether or not it closes. So closes will be the identity function:

```>     closes :: result -> Bool,

```
Occasionally we'll find a world with something obviously equivalent to F in it. It closes. Here's what we want to return in that case. The argument is the offending proposition:

```>     foundF :: prop -> result,

```
Sometimes we'll find a pair that obviously contradict, like a and ¬a:

```>     foundContradiction :: (prop, prop) -> result,

```
And sometimes we'll find an open world (ie. a real non-closed one). This function gets handed the list of propositions that have been found to hold in it:

```>     open :: [prop] -> result,

```
Here's what we do when we find a conjunction. I hope you remember that when we meet a conjunction we can delete it and replace it with the two subpropositions. conjRule is handed the subpropositions as well as the result from proceeding with the tableau rule for conjunctions:

```>     conjRule :: prop -> prop -> result -> result,

```
Disjunctions work a little differently. When handling a∨b, say, we need to handle two subtableaux, one with a and one with b. The first argument to disjRule is the disjunction itself, the next two are the left and right subpropositions, and the last two arguments are the results of continuing the two subtableaux.

```>     disjRule :: prop -> prop -> prop -> result -> result -> result,

```
With doubleNegation we get to see propositions that have undergone double negation elimination.

```>     doubleNegation :: prop -> result -> result,

```
When we use Dia to open new worlds we need to ensure that each of these subworlds is valid. Each subworld is processed with processWorld. For example, when we're drawing tableau diagrams we can use this hook to draw a frame around the subtableaux. We then fold together these subworlds using combineWorlds:

```>     processWorld  :: prop -> result -> result,
>     combineWorlds :: result -> result -> result,

```
Lastly we have our driver function that kicks off the whole tableau algorithm on a list of propositions:

```>     tableau :: [prop] -> result -> result
> }

```
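The design here, a record of hooks steering one generic traversal, may be easier to see on a toy structure first. In this standalone sketch (Tree, TreeRules and the hook names are invented for the example) the same fold computes either a sum or a rendering depending on which record we pass:

```haskell
data Tree = Leaf Int | Node Tree Tree

-- One record of hooks per result type.
data TreeRules r = TreeRules
  { leafHook :: Int -> r
  , nodeHook :: r -> r -> r
  }

-- A single generic traversal, parameterised by the hooks.
foldTree :: TreeRules r -> Tree -> r
foldTree rules (Leaf n)   = leafHook rules n
foldTree rules (Node l r) = nodeHook rules (foldTree rules l) (foldTree rules r)

sumRules :: TreeRules Int
sumRules = TreeRules { leafHook = id, nodeHook = (+) }

showRules :: TreeRules String
showRules = TreeRules { leafHook = show
                      , nodeHook = \l r -> "(" ++ l ++ " " ++ r ++ ")" }
```

With example = Node (Leaf 1) (Node (Leaf 2) (Leaf 3)), foldTree sumRules example is 6 and foldTree showRules example is "(1 (2 3))". TableauRules plays the same role for the tableau algorithm, just with many more hooks.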
We can now implement a tableau computation algorithm that supports these hooks. We'll start with the check for whether or not we have an immediate contradiction:

```> simpleClosure rules ps = case find isF ps of
>    Just a  -> foundF rules a

```
Split the propositions into those that are negated and those that are not. We're looking for propositions in one part that directly contradict propositions in the other:

```>    Nothing ->
>      let (neg, pos) = partition negative ps
>          maybePair = findIntersection ((==) `on` positiveComponent) neg pos
>      in case maybePair of
>          Just pair -> foundContradiction rules pair
>          Nothing   -> open rules ps

```
Double negation elimination is straightforward to apply. Delete the original proposition and replace it with the version without double negation:

> applyDNeg rules p a props = doubleNegation rules a $
>   applyPropositional rules (a : delete p props)

```
The rule for handling conjunctions. We delete the conjunction from the current list of propositions and replace it with the two subpropositions:

> applyConj rules p a b props = conjRule rules a b $
>   applyPropositional rules (a : b : delete p props)

```
Disjunctions require running two separate subtableaux:

```> applyDisj rules p a b props =
>    let props' = delete p props
>        left   = applyPropositional rules (a : props')
>        right  = applyPropositional rules (b : props')
>    in disjRule rules p a b left right

```
Here we tie together the rules for propositional calculus. We use a bit of case analysis to decide which rule to apply, and if no rule applies we try the provability rules instead:

```> applyPropositional rules props =
>     let t = simpleClosure rules props in if closes rules t
>         then t
>         else case find propositional props of
>             Nothing -> applyProvability t rules props
>             Just p  -> case p of
>                 (propType -> DoubleNegation q) -> applyDNeg rules p q props
>                 (propType -> Conjunction a b)  -> applyConj rules p a b props
>                 (propType -> Disjunction a b)  -> applyDisj rules p a b props

```
When we've exhausted all possible rules from propositional calculus we scan for propositions like Dia p or Neg (Box p). These may imply the existence of subworlds. We then try to instantiate these subworlds, seeded according to the rules I gave in the previous article:

```> applyProvability t rules props =
>     let impliedWorlds = placesWhere consistency props

>         consistency (propType -> Consistency _) = True
>         consistency _ = False

>         testWorld (p@(propType -> Consistency q), props) =

```
In the following line, neg p corresponds to the application of Löb's Theorem. provabilities is the list of propositions inherited by a subworld from statements about provability in the parent:

```>              let tableau = runTableau rules (q : neg p : provabilities)
>                  provabilities = do
>                      p@(propType -> Provability q) <- props
>                      [p, q]
>              in processWorld rules p tableau

>     in foldr (combineWorlds rules) t (map testWorld impliedWorlds)

```
And finally, here's where we kick our algorithm off:

> runTableau rules props = tableau rules props $ applyPropositional rules props

```
#### Testing Validity

We can now use a set of simple rules to test the validity of propositions. As mentioned above, the return data is a Bool used to indicate whether a subworld was invalid. By and large, these rules do the trivial thing. Note how the rule for disjunction requires both of the alternatives to be invalid in order to completely invalidate, so we use (&&). But when considering subworlds, just one bad subworld is enough to invalidate a world:

```> validRules = TableauRules {
>     closes = id,
>     open   = \_ -> False,

>     foundF             = \_ -> True,
>     foundContradiction = \_ -> True,

>     conjRule       = \_ _ t -> t,
>     disjRule       = \_ _ _ -> (&&),
>     doubleNegation = \_ t -> t,

>     combineWorlds = (||),
>     processWorld  = \_ t -> t,

>     tableau = \_ t -> t
> }

```
We can now write a simple validity test. We negate the proposition we're interested in and test whether the implied world closes:

```> valid p = runTableau validRules [neg p]

```
Here's a small regression test to ensure everything works. It's just a bunch of examples that I worked out by hand or lifted from Boolos's book:

```> valids = [
>         T,
>         a :-> a,
>         Box a :-> Box a,
>         Box a :-> Box (Box a),
>         Box (Box a :-> a) :-> Box a,
>         Box F <-> Box (Dia T),
>         let x = p :/\ q :-> r :-> a in Box (Box x :-> x) :-> Box x,
>         F :-> Dia p,
>         Box (Dia p) :-> Box (Box F :-> F),
>         (Box F \/ q /\ Dia (Box F /\ Neg q)) <->
>           (Dia (Box F \/ q /\ Dia (Box F /\ Neg q))
>           --> q /\ Neg (Box (Box F \/ q /\ Dia (Box F /\ Neg q)
>           --> q)))
>     ]

> invalids = [
>         F,
>         a :-> Box a,
>         Box a :-> a,
>         Box (Box a :-> a) :-> a,
>         Dia T,
>         Box (Dia T),
>         Neg (Box F),
>         (Box F \/ p /\ Dia (Box F /\ Neg q)) <->
>           (Dia (Box F \/ q /\ Dia (Box F /\ Neg q))
>           --> q /\ Neg (Box (Box F \/ q /\ Dia (Box F /\ Neg q)
>           --> q)))
>     ]

```
If everything is working, regress1 should give the result True.

```> regress1 = do
>     print $ (and $ map valid valids) &&
>             (and $ map (not . valid) invalids)

```
That's enough implementation. In the next installment we'll start putting this code to work.

#### References

This code is an implementation of the algorithm described in Chapter 10 of The Logic of Provability. See that book if you want (1) a proof that the above algorithm always terminates and (2) that it really does correctly decide the validity of propositions of provability logic.

## Saturday, December 11, 2010

### Generalising Gödel's Theorem with Multiple Worlds. Part I.

#### Introduction

In his first incompleteness theorem, Gödel showed us that we can construct a sentence that denies its own provability. In his second incompleteness theorem he showed that an example of such a sentence is the one that asserts the inconsistency of arithmetic. If arithmetic is consistent then it can't prove its own consistency. On the other hand, if arithmetic is inconsistent then we can prove anything, and hence we can prove its consistency.

Can we generalise what Gödel did? For example, can we construct sentences that we can prove assert their own provability? What about sentences that deny that their provability is provable? Or what about sentences that assert that if they're provable then it's not provable that it's inconsistent that they imply that they're inconsistent with the rest of arithmetic?

Not only can we do these things, we can also write a computer program that generates such theorems for us. We can do so by working with the idea that a consistent set of axioms describes a world, and that a set of axioms able to talk about sets of axioms describes a world within a world, and a set of axioms that...well you can guess how it goes. A bit like Inception really.

#### Gödel's Theorem

Briefly, Gödel's first incompleteness theorem goes like this: we work with PA, the logical system built from Peano's Axioms. Within PA we can state and prove theorems about arithmetic like the fact that 1+2=3 or that for any prime there is always another greater prime. But even though PA is about numbers, we can talk about other things if we can encode them as numbers. In particular, Gödel came up with a scheme to encode propositions of PA as numbers. We use [P] to represent the number for the proposition P in Gödel's scheme. A proof is essentially just a list of propositions where each one is connected to some earlier ones by simple mechanical rules. These rules can be turned into arithmetical statements about the Gödel numbers for the propositions. So in the language of PA it is possible to assert that a particular number is the Gödel number of a provable proposition. Let Prov(n) denote the proposition of PA that says that n is the Gödel number of a provable proposition. Prov(n) is a proposition of PA and so is ¬Prov(n). Imagine we could find a proposition G with the property that G↔¬Prov([G]). It would assert its own unprovability. But it also appears to involve stuffing a representation of the Gödel number of G within G as well as all the rules for determining provability. Amazingly Gödel figured out how to do this (using tricks not unlike those used for quining). If G were false, G would be provable. So if we trust PA as a means of reasoning about proofs in PA, then G is true, though it can't be proved using the axioms of PA.

#### Provability Logic

We're going to be interested specifically in provability, so we don't need all of the power of PA. So we're going to work with a simplified domain specific logic called GL (for Gödel-Löb), otherwise known as Provability Logic. The idea is that sentences of GL will be shorthand for classes of statement in PA. GL will contain propositional calculus. So here's an example statement in GL: p∧q. The unknowns p and q represent propositions of PA and statements of GL are considered valid if they're provable whatever statements of PA they represent. For example, we could assign p="there are at least 10 primes" and q="7>2", in which case p∧q holds. But we could assign p="there are just 10 primes" in which case it's false. So p∧q isn't a valid proposition of GL as it doesn't hold for all p and q. On the other hand p→p∨q is valid because no matter what crazy propositions we assign to p and q, p→p∨q is true. (Of course, when I say "there are at least 10 primes", I mean a long and complicated sentence of PA that amounts to the same thing.)

But there's more to GL than propositional calculus. It also has the one-argument predicate ◻ which asserts that its argument is provable. More precisely, ◻p says that whatever proposition of PA p represents, let's say it's P, we have Prov([P]). It's just shorthand. Here's another example: ◻(q→◻p) says that whatever P we assign to p, and whatever Q we assign to q, Prov([Q →Prov([P])]). Or in English it says "for any propositions P and Q, it is provable that Q implies that P is provable".

We'll use ⊤ and ⊥ from propositional calculus as the always true and always false propositions in the usual way. ◻⊥ is the assertion that we can prove ⊥. In other words it's the assertion that PA is inconsistent. So now we can state Gödel's second incompleteness theorem: ¬◻⊥→¬◻¬◻⊥. If PA is consistent, we can't prove it.

We can also introduce the symbol ◊. This is just shorthand for ¬◻¬. ◊p says that it's not provable that p is false. In other words, it says that p is consistent with the rest of PA. So ◻ is provability and ◊ is consistency.

A set of assignments of propositions of PA to a bunch of letters in GL can be thought of as a world. For example, we can imagine a world in which p="2>1" and q="2<1". In this world, we have p and ¬q. The GL proposition ◻p says that we have a proof of p so it must be true in all worlds. Conversely, ◊p says that it's not true that there is no world where p holds. In other words, it asserts the existence of a world where p holds. So worlds can talk about other worlds. If we have ◊◊p in some world, then it's asserting that there's another world in which ◊p holds. In other words, ◊◊p asserts there is a world in which it is asserted that there is another world where p holds. We can draw pictures to represent this. If a world has propositions that talk about another world then we draw the talked about world as a kind of subworld. Here's how we can picture ◊◊p:

Worlds are assignments of propositions of PA to letters of GL. But most of the time we won't particularly care about specific propositions themselves like "2>1". We'll be more interested in what truth assignments we can make to the propositions represented by the letters. So we can think of a world as a place where the letters of GL have been assigned truth values, true or false. And we can think of each world as containing subworlds consisting of propositions that can be proved or disproved by their parent worlds.

I'm going to spend most of my time looking at how we can explore and unfold all of the implications contained in a world.

#### The Rules of the Game

We can give some rules. If a world contains p∧q like this:

then it must contain both p and q as well:

I crossed out the p∧q as it became redundant. It's important to note that when I wrote p∧q we could have any propositions of GL standing in for p and q. So a world containing (p∨q)∧r must also contain p∨q and r. Other rules may apply too. If we had p∧q and r∧s we can unfold both to get p, q, r and s. This rule also kicks in if it applies to negated propositions that become conjunctions if we "push down" the negation using de Morgan's laws. For example if a world contains ¬(p→q) then it also contains p and ¬q.

If a world contains p∨q then we don't know exactly what it looks like.

There are two possibilities. We can draw both together like this:

The line in the middle means that the world either looks like what's on the left or it looks like what's on the right. Once we've indicated there are two alternatives then the original proposition becomes redundant again. When we have a vertical line like this then everything above the vertical line applies in both the left and right possibilities. This saves us having to split our world into two and copy all of our formulae to both sides. Like in the case of conjunctions this rule also kicks in for other kinds of disjunctions. So a world containing p→q splits into two separate worlds headed by ¬p and q.

If a world contains ⊥, or a contradiction, it means that it wasn't really a valid world after all. If we meet a world like this, we know that our starting point must have been self-contradictory. If a world implies the existence of a subworld that isn't valid, then it must itself be invalid:

On the other hand, if we've split a world into two possibilities because we found p∨q then even if one branch is invalid the world might still be valid if the other branch is valid.

If we have a world containing ◻p:

then we know that p must be in every subworld. Actually, we know it must also hold in any subsubworld too, all the way down. This is because if we can prove something we can then use that proof directly to form a constructive proof that we can prove it. So that means ◻p holds in every subworld too:

We treat negation as above. So if we see ¬◻p we treat it like ◊¬p.

As we have also said, if we have a world with a ◊p in it then it must contain a subworld with p in it. Note that ◊ implies the existence of subworlds, but ◻ doesn't. It just tells us what must be in them if they exist.

But there's one last rule we'll need. It's Löb's theorem. This is the big theorem on which everything else I say depends. It states that ◻(◻p→p)→◻p. If we can prove that proving something implies its truth then we can prove it. I could sketch a proof here, but I highly recommend the cartoon proof here. In a way, Löb's theorem is a bit sad. The raison d'être of mathematics is that we can use proofs to be sure of things. In other words, we take for granted that ◻p→p. But if we could prove this then we could prove ◻p, even if p were false! ◻p→p is not valid!

(Philosophical digression: Mathematicians assume ◻p→p. They don't assume ◻p→p because there's a proof. They assume it because experience has shown it to work. So I claim that ◻p→p is an empirical fact that we learn by scientific induction. That's controversial because I'm basically saying that much of mathematics is empirical - at least it is if you talk about truth rather than proof. If you disagree, don't worry about it. The rest of what I say here is independent of this digression.)

Anyway, we can flip Löb's theorem around to get ◊p →◊(p ∧¬◊p). So if we have a world in which ◊p holds:

then we have a subworld in which both p and ¬◊p hold:
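This flipped form is nothing more than Löb's theorem applied to ¬p, contraposed, and rewritten using ◊ = ¬◻¬; as a quick sanity check:

```latex
\begin{align*}
&\Box(\Box\neg p \to \neg p) \to \Box\neg p
  && \text{L\"ob's theorem for } \neg p\\
&\neg\Box\neg p \to \neg\Box(\Box\neg p \to \neg p)
  && \text{contrapositive}\\
&\Diamond p \to \Diamond\neg(\Box\neg p \to \neg p)
  && \neg\Box\neg = \Diamond\\
&\Diamond p \to \Diamond(p \wedge \neg\Diamond p)
  && \neg(A \to B) \equiv A \wedge \neg B \text{ and } \Box\neg p \equiv \neg\Diamond p
\end{align*}
```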

And that's all we'll need apart from the obvious rule that we can remove double negation. Suppose we have a proposition of GL. We can use the rules above to extract as many implications as we can. Eventually there will come a point where there is nothing more we can do. If we can do this without hitting a contradiction then we've found a bunch of possible worlds for the proposition. If we can't find a valid world, however, then the original proposition must have been false. You might ask whether or not there are any other rules we need in addition to the ones above. Maybe there are other theorems like Löb's theorem that we need to use. Amazingly it can be shown that no other rules are needed. This is a sure-fire terminating algorithm for determining whether or not a proposition of GL is valid! This is a powerful tool. We can now start constructing wild propositions like those I started with and find out whether they are valid. (Note that I've not in any way proved this procedure always works. You'll need to look at Boolos's book to see why.)

#### Some Proofs

Let's work through some examples. First I'll prove something from propositional calculus: (p∧q)∨(p∧r)→p. We start by drawing a world with the negation:

Now ¬(a→b) is the same as a∧¬b. So we get:

Now the only way to proceed is to consider the disjunction and consider two alternatives:

On both sides of the vertical line we find p. But that contradicts the ¬p we discovered earlier. So there is no way the negation of our original proposition can hold in any world. And therefore the original proposition must be valid.
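If you'd like to mechanise hand proofs like this one before reaching the full implementation in Part II, here's a standalone sketch of a tableau validity checker for propositional calculus only, with no ◻ or ◊. The type P and its constructors are invented for this sketch and are not Part II's Prop:

```haskell
data P = V String | P :& P | P :| P | P :> P | N P deriving (Eq, Show)

-- A branch closes when it contains some proposition and its negation.
closed :: [P] -> Bool
closed ps = any (\q -> N q `elem` ps) ps

-- Expand one non-literal proposition at a time. A list of propositions
-- is satisfiable when some fully expanded branch stays open.
sat :: [P] -> Bool
sat ps
  | closed ps = False
  | otherwise = case break expandable ps of
      (_, [])       -> True                       -- only literals left: open branch
      (pre, q:post) ->
        let rest = pre ++ post
        in case q of
             a :& b     -> sat (a : b : rest)     -- conjunction: keep both halves
             a :| b     -> sat (a : rest) || sat (b : rest)
             a :> b     -> sat (N a : rest) || sat (b : rest)
             N (a :& b) -> sat (N a : rest) || sat (N b : rest)
             N (a :| b) -> sat (N a : N b : rest)
             N (a :> b) -> sat (a : N b : rest)
             N (N a)    -> sat (a : rest)
             _          -> error "unreachable: literals are filtered out"
  where
    expandable (V _)     = False
    expandable (N (V _)) = False
    expandable _         = True

-- A proposition is valid when its negation admits no open branch.
validP :: P -> Bool
validP p = not (sat [N p])
```

validP (((V "p" :& V "q") :| (V "p" :& V "r")) :> V "p") is True, mirroring the argument above, while validP (V "p" :> V "q") is False.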

How about showing Gödel's second incompleteness theorem this way? ◊⊤ says that ⊤ is consistent with the rest of arithmetic. Ie. it expresses the consistency of arithmetic. The theorem is then ◊⊤→¬◻◊⊤, ie. if PA is consistent, then PA can't prove it is consistent. We'll start with this:

We can deal with the negated implication like before. We can also deal straightforwardly with the double negation:

There's now no way to proceed except to use the ◊⊤ to open up a new world. Remember that when we use this we must use Löb's theorem as well as inheriting p and ◻p from the parent world's ◻p:

And we get a contradiction because we have both ◊⊤ and ¬◊⊤. So Gödel's second incompleteness theorem is indeed valid. Note that this isn't a demonstration from scratch. We've shown its validity from Löb's theorem. So this isn't really a useful way to show it's valid. But it *is* a useful way to show the validity of generalisations of Gödel's theorem. Unfortunately, I have to stop for now.

In the coming posts I'll implement all of the above as a computer program. If there's any ambiguity in what I've said I hope the source code to the program will resolve those ambiguities. What's more, our program won't just test the validity of a proposition but it will draw out a nice picture of our world with its subworlds. So you'll be able to trace through every step to make sure you understand!

But that's not all. We'll also see how to write a program to illustrate a bunch more theorems like Craig's Interpolation Lemma and Beth's Definability Theorem and then we'll finish with a program that is designed to construct self-referential propositions. In particular, given any self-referential description like "p is a proposition that is equivalent to the proposition that denies that it's provable that p's provability is consistent with p itself" it will solve to find p, even though GL doesn't allow us to directly construct self-referential propositions.

So let me recap: a world is an assignment of consistent truth values to the letters (and consequently propositions) of GL. Some of these propositions imply the existence of other subworlds with different truth values for these propositions. We draw these worlds as subworlds of the original world. For a world to be valid it mustn't contain any contradictions (unless the world contains a bunch of alternatives in which case just one alternative needs to be valid.) A proposition of GL is valid if unfolding its negation doesn't result in an invalid world.

#### Exercises

Which of the following are valid? If you report back any difficulties you have I can incorporate any needed revisions into the description above.
1. p→◻p
2. ◻p→◻◻p
3. ◊p→◊◻◻p
4. ◊(p→◻q) →◊(◻(◊(p∧q)))
5. ◻(p∧q∧r∧s) →◻p∧◻q∧◻r∧◻s

#### References

Most of this stuff is based on Chapter 10 of The Logic of Provability by Boolos. The idea of using multiple worlds to prove theorems is due to Kripke. I believe the procedure of unfolding the implications of a proposition in the tree-like way I describe above is due to Smullyan.

#### Update

Solutions to problems:
1. Not valid. Diagram:
2. Valid
3. Valid.
4. Valid.
5. Valid. Diagram: