Thursday, April 13, 2006

A Modal Logic of Near Certainty?

If this "□" isn't a square then the following will be unreadable for you. For some reason □ is less standard than ◊ and may be absent from a font when the latter is present, especially when printing.

Having read some stuff on modal logic (basically just what the Stanford Encyclopedia of Philosophy has to say) I've been trying to make up a modal logic of my own. What follows is just brainstorming. It might not make any sense at all if you try to work out the details, but as I was stuck trying to figure out those details I thought I'd mention it here.

Usually □ is used to mean things like "it is necessarily the case that", "I know that" or "it ought to be the case that", but I thought I'd see if it could be used to mean "it is almost certainly the case that". Here's the intuition I'm trying to capture: □p would be roughly like saying that p holds with probability 1-O(ε). So □□p would mean that we expect p to hold with probability (1-O(ε))², which is the same as 1-O(ε). So we'd have □□p⇒□p. I intend "almost certainty" to mean so close to certainty that if A almost certainly tells the truth, and so do B, C, D and so on up to Z, then if Z tells you that Y says that X claimed ... that A said p, you could still be nearly certain that p is true.
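To make the O(ε) intuition concrete, here's a tiny numeric sketch in Python (my own illustration, with a made-up function name, not part of the original argument):

```python
# Nesting "almost certainly" to a fixed depth n gives probability
# (1 - eps)**n, which is still 1 - O(eps) for any fixed n.
def nested_certainty(eps, depth):
    """Probability that a chain of `depth` almost-certain reporters
    are all truthful, if each is truthful with probability 1 - eps."""
    return (1 - eps) ** depth

eps = 1e-9
p26 = nested_certainty(eps, 26)  # the chain from A to Z
# The shortfall from 1 is about 26*eps -- the same order as eps itself.
assert 1 - p26 < 27 * eps
```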

Other examples of things that I'd expect to hold would be "continuity" conditions like

(□A)∧B ⇒ □(A∧B) ⇒ (□A)∧(□B)

and I'm nearly certain that

□A⇒◊A (where ◊A=¬□¬A by definition).


Anyway, it seems like a perfectly good modal logic to me. But when I tried to work out the possible world semantics I got nowhere, mainly because p⇒□p upsets things. I was also toying with the idea that "almost certain" is a bit like the strict mathematical sense of "almost always" and that there might be some kind of semantics that involves sets dense in some topology. But I couldn't work out the details of that either.

So can anyone who is knowledgeable about these things tell me if what I'm trying to do is sensible? If I fill in more details can I use possible worlds to prove soundness and completeness?

The really wacky idea I had (and remember, I'm just brainstorming) is that you might be able to set up a differential calculus of logic. Just as you compute derivatives of a function w.r.t. x by perturbing x slightly and looking at how f(x) varies, □A might be seen as a slight deformation of A, hence my use of the term "continuity" above. I'd then expect to see a version of the Leibniz rule appear somewhere in the theorems of this modal logic.

Anyway, this might be going nowhere, so brainstorming over.

Update

Ordinary propositional calculus can be viewed as being set theory in disguise. Pick a set X. Now translate a logical proposition to a statement of set theory by sending 'atomic' propositions to subsets of X, ∧ to ∩, ∨ to ∪, ⊤ to X, ⊥ to ∅ and ¬ to complementation in X. A proposition is true if and only if its translation equals X for every assignment of subsets to the atomic propositions. For example ¬p∨p translates to (X\P)∪P, which is clearly X.
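This translation is easy to check mechanically on a small finite X; the sketch below (mine, with made-up names) verifies excluded middle by brute force over every interpretation of the atom:

```python
# Set-theoretic reading of one-variable propositional logic over a
# small finite carrier set X.
from itertools import chain, combinations

X = frozenset(range(4))

def NOT(a): return X - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def valid(formula):
    """A one-variable proposition is valid iff its translation equals X
    for every choice of subset P interpreting the atom."""
    subsets = chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
    return all(formula(frozenset(p)) == X for p in subsets)

# Excluded middle: ¬p ∨ p translates to (X \ P) ∪ P = X for every P.
assert valid(lambda P: OR(NOT(P), P))
# Whereas the bare atom p is not valid.
assert not valid(lambda P: P)
```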

For my proposed modal logic put a topology on X. Then □ translates to the Cl operator, where Cl(A) is the smallest closed set containing A. The things I want to be true again correspond to propositions whose translations are all of X.

I think this modal logic captures exactly what I want. You can get a feel for it by thinking about propositions like x=3 on the real line. This is false "almost everywhere" (i.e. on a dense subset of the real line, that is, a subset whose closure is the entire real line), so it makes some kind of sense to want to say □¬(x=3).
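Here's a finite sketch (my own illustration) of the closure reading on a small topological space, checking a few of the laws from the post: A ⊆ Cl(A) gives p ⇒ □p, and idempotence of Cl gives □□A ⇔ □A.

```python
# Closure reading of the box on a tiny finite topology.
# Closed sets are complements of the open sets, and Cl(A) is the
# smallest closed superset of A.
X = frozenset({0, 1, 2, 3})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]
closeds = [X - o for o in opens]

def Cl(a):
    return frozenset.intersection(*[c for c in closeds if a <= c])

A = frozenset({1})
# A ⊆ Cl(A): anything true is almost certainly true (p ⇒ □p).
assert A <= Cl(A)
# Cl is idempotent: □□A has the same translation as □A.
assert Cl(Cl(A)) == Cl(A)
# Cl distributes over finite unions.
B = frozenset({2})
assert Cl(A | B) == Cl(A) | Cl(B)
```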

With some keywords to search on I was now able to find this paper. Page 29 (following the document's own page numbering) essentially describes what I have, except that they have swapped □ and ◊. In other words, this is the standard modal logic known as S4. This topological interpretation of S4 goes back to work by McKinsey and Tarski in 1944.

And now I can reason about almost-certainty with certainty.

13 comments:

Derek said...

(I'll use '[]' for the square operator and '<>' for the diamond operator.)

Misc comments (with brainstormy levels of accuracy).

- if []A means "almost certain," what is your definition of "almost?" Does a probability greater than 50% constitute "almost?"

That's sort of like a "there exists" statement... does just one exist, or a billion?

- does []A imply "Always A"?

- Gut feeling: []A is same as <>A.

- If you're dealing with probability, then things like []A ^ []B ^ []C ... etc would tend towards P=zero, with a large number of terms.

Derek said...

So can anyone who is knowledgeable about these things tell me if what I'm trying to do is sensible?

Forgot to mention... I have no knowledge of these things!

sigfpe said...

if []A means "almost certain," what is your definition of "almost?" Does a probability greater than 50% constitute "almost?"

I'm trying to capture an intuition about a limiting probability that approaches one. The limit aspect is the central thing. I don't think this is an invalid thing to do. For example you can compute derivatives by using 'infinitesimals' that are smaller than any real number bigger than zero. This can be successfully formalised and even turned into a practical computational technique. So think of "almost certainly true" as being like a probability of 1-e where e is infinitesimal.
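One practical formalisation of such infinitesimals is dual-number automatic differentiation; here's a minimal sketch (my code, assuming that's the kind of technique meant):

```python
# Dual numbers: carry (value, derivative) pairs a + b*e, with e*e = 0,
# so arithmetic on them computes exact derivatives.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Leibniz rule: (a + b*e)(c + d*e) = ac + (ad + bc)*e
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def deriv(f, x):
    """Differentiate f at x by perturbing x infinitesimally."""
    return f(Dual(x, 1.0)).b

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7.
assert deriv(lambda x: x * x + 3 * x, 2.0) == 7.0
```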

And I'm pretty sure <>A isn't []A. It might be if [] meant something like "maybe A" because then there's an obvious symmetry between []A and <>A. But by saying that []A is almost certainty I think I'm breaking that symmetry. In fact, intuitively speaking, if I felt that p had a 50% chance of being true I'd say that <>p was true but that []p wasn't.


If you're dealing with probability, then things like []A ^ []B ^ []C ... etc would tend towards P=zero, with a large number of terms.

The limit of a finite sum is just the sum of the limits. Or when dealing with infinitesimals any finite sum of a bunch of them still gives you another infinitesimal. So I think that [][][][]... should just be [].

And I just posted something in your blog...

Neel Krishnaswami said...

You definitely need to look at lax logic. It is a logic with a single modality circle (which I'll write as O), and has the axioms that:

A => OA

OOA => OA

A /\ OB => O(A /\ B)

This was originally invented by some people doing hardware verification, who wanted a modality of "truth under some constraints". Your modality of "almost certainly" seems like it should be very similar.

Amazingly, this modality turned out to be exactly the same as the monadic type constructor of Moggi's computational lambda calculus. In fact, if you look at the first two axioms as types, you can see them as the two natural transformations that make up a monad.

However, this is almost, but not quite, the diamond modality -- the main difference is the third axiom. This axiom (Moggi calls it a monad with a strength) does not hold for all monads.
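The three axioms line up with monad operations; here's a sketch using Python lists as the monad (my illustration, not Neel's code):

```python
# Lax-logic axioms read as operations on the list monad.
def unit(a):              # A => OA
    return [a]

def join(mma):            # OOA => OA
    return [a for ma in mma for a in ma]

def strength(a, mb):      # A /\ OB => O(A /\ B), the "strength" axiom
    return [(a, b) for b in mb]

assert unit(3) == [3]
assert join([[1], [2, 3]]) == [1, 2, 3]
assert strength('x', [1, 2]) == [('x', 1), ('x', 2)]
```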

Rowan Davies and Frank Pfenning did a proof-theoretic analysis of lax/computational logic, and discovered that you can decompose the circle modality as "possibly necessary". That is:

OA == <>[]A

sigfpe said...

I have some more justification for the [][][]...[]p=[]p law.

Consider calculus. Sometimes we reason about f(x) and f(y) where x and y are close by considering the derivative of f. For example we know that if f is differentiable, and x is close enough to y, then f(y) is close to f(x)+(y-x)*f'(x). They're not usually equal, but close is good enough for practical purposes. So using derivatives allows us to reason about physical quantities that aren't themselves derivatives, as long as we don't try to push our reasoning too far. (And in fact you could probably derive some theorem that specifies just how far y can wander from x before things break down.)
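A quick numeric check of that linearisation (my own illustration, using sin as the example function):

```python
# f(y) is close to f(x) + (y - x) * f'(x) when y is near x,
# with an error of order (y - x)**2.
import math

f = math.sin
fprime = math.cos

x, y = 1.0, 1.01
approx = f(x) + (y - x) * fprime(x)
# The error is bounded by roughly |f''| * (y - x)**2 / 2 <= 0.5e-4.
assert abs(f(y) - approx) < 1e-4
```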

Similarly, in real life we don't know that p is "almost certainly true" in the sense I have defined. But we might know that it's 99.9999% sure. In that case, like in the case of calculus, we can assume [][]...[]p=[]p as an approximation, and as long as we don't push our reasoning too far we won't get bad results. (And in fact you could probably derive some theorem that says just how close your probability must be to 1 before a given formula, proven using the axioms I suggest, containing a certain number of []'s, becomes invalid.)

Neel Krishnaswami said...

Also, I should add that IIRC Curry originally invented lax logic in 1952, but that he didn't publish because he thought it was weird and unnatural.

And it's also worth pointing out that in categorical proof theory, if you take a Cartesian closed category you have a model of the simply typed lambda calculus, and then you can get a model of S4 modal logic if you also have a monad and a comonad over that category. The monad models possibility and the comonad models necessity.

sigfpe said...

neel,

I'm kicking myself for not seeing the connection with monads, even after wondering briefly about applying Curry-Howard. I'll definitely try reading up on lax logic.

I think that what I'm after might be close to lax logic. In particular I want to 'break symmetry' so that [] isn't <>, and []a -> <>a but not the converse.

Derek said...

Funny, it seems I've been seeing more references to Harryhausen all over. Strange.

I think your modal logic would have to work for finite sets. I can say that it's almost certain that some person is not the tallest man in the world, but the probability of that statement being true is 5999999999/6000000000, which is very close to 1, but not the same as the limit as some epsilon goes to zero.

Maybe you can differentiate probability laws, to see how a change in the "truthiness" of some parameters will bring the proposition closer to or farther from truth?

sigfpe said...

Aha! Got it figured out. This modal logic is basically topology in disguise. In fact, the theorems here about the interior of a set translate directly into the things I want to be theorems. Some time I'll write up the details.

Noticed a mistake too. I want <>A -> A -> []A.

Kenny said...

Some of the points Derek is making are related to the "preface paradox" and "lottery paradox" (I've never really been convinced that they're two different paradoxes). The preface paradox points out the following common situation: an author writes a book, and presumably believes every single assertion in that book with high probability. However, she also knows that virtually every longish book has some errors, so in the preface she announces "this book contains at least one false statement" - and believes this with very high probability as well. In the lottery paradox, we note that given a large enough (fair) lottery, I can say of any ticket with certainty 1-ε that it will not win. And yet I can also say with equal certainty that the conjunction of these assertions is false.

I think these paradoxes are often used in attempts to show that belief (or knowledge) is not closed under conjunction, but they might also be used to show that high probability doesn't imply belief, or something else.

That's really interesting that you can convincingly give an account of S4 where the box and diamond are switched, with the diamond meaning near certainty! I'm guessing that you'll want some other condition to guarantee that your diamond is stronger than your box, rather than the other way around as is usual.

sigfpe said...

Kenny,

I'm guessing that you'll want some other condition

I think "K" [](a->b) -> ([]a -> []b), "4" []a -> [][]a and "T" []a -> a capture everything I want, with <> (instead of []) interpreted as "almost certainly". I think this might actually be a non-trivial novel way to look at this modal logic. I ought to actually try to come up with a theorem to back this up and clarify what I mean by "almost certain" rather than leaving it fuzzy.
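For what it's worth, T and 4 can be checked by brute force on a small reflexive, transitive Kripke frame, the standard frame conditions for S4 (a sketch of mine, not from the comment):

```python
# Check the T and 4 axioms over every valuation on a tiny S4 frame.
from itertools import chain, combinations

W = [0, 1, 2]
# A reflexive, transitive accessibility relation on W.
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}

def box(A):
    """w satisfies []A iff every world accessible from w is in A."""
    return frozenset(w for w in W
                     if all(v in A for v in W if (w, v) in R))

def subsets(s):
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for A in subsets(W):
    assert box(A) <= A            # T: []A -> A (needs reflexivity)
    assert box(A) <= box(box(A))  # 4: []A -> [][]A (needs transitivity)
```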

I've been busy trying to code up a tableau based theorem prover for this system. I think I know exactly what I need to implement it.

I agree that the lottery paradox is highly relevant here. But thinking about probabilities again: if P(Ai)>1-ε then for any finite indexing set, I can choose ε so that the product of all of the P(Ai) is guaranteed to be as close to 1 as I want. On the other hand, if I were to allow quantifiers then this would become an infinite product and I'd no longer be able to make this guarantee. In this case Derek's objection would become valid. (I guess I disagree with Derek: the proviso isn't that we work with finite sets but with finite propositions.)
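The finite-versus-infinite point can be seen numerically (my sketch; the lottery-paradox regime is where the number of conjuncts grows like 1/ε):

```python
import math

# Probability that n independent conjuncts, each with probability
# 1 - eps, are all true.
def conj_prob(eps, n):
    return (1 - eps) ** n

# Finite proposition: fix n = 1000 conjuncts, then small eps keeps
# the conjunction's probability as close to 1 as we like.
assert conj_prob(1e-9, 1000) > 1 - 1e-5
# But with n ~ 1/eps conjuncts the product approaches 1/e ~ 0.37,
# and the almost-certainty guarantee is lost.
assert abs(conj_prob(1e-6, 10**6) - 1 / math.e) < 0.01
```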

Theo said...

Like sigfpe, I'm not worried about Derek's lottery-paradox-style concerns. This is because I'm pretty comfortable considering "the number of statements in a book" or "the number of people entering a lottery", let alone "the number of people in the world", to be pretty darn close to infinity.

Sigfpe, not having actually worked out anything you said, I'd suggest that probably your topology on worlds should be at least Hausdorff. But I'm still worried about []A corresponding to the closure of A: then I easily have situations where both []A and [](-A) hold.

Unless I'm misreading you. Are you using [], etc., to express statements about the size of sets of worlds within a certain semantics, or about all possible ways of assigning meaning (the latter being in analogy with your discussion of traditional propositional logic)? Because, if the latter, then the discrete topology will always provide a system in which []A is only true if A=X.

sigfpe said...

theo,

Statements in this logic are considered valid iff their translation into topological terms is true for all topologies. The stuff about topology is pretty solid territory as it's all in here (though I was unaware of this when I started out). It's quite neat. If I get my theorem prover for this logic working (based on this) I'll be able to automatically prove statements about interiors and closures of intersections, unions and complements of sets that are true in all topologies.

The interpretation in terms of "almost certainty" is mine so it might not hold up.
