### S4 and Partial Evaluation

I never cease to be amazed by the ubiquity of mathematical ideas. I'm still not entirely certain that S4 captures what I mean by "almost certain" but it's neat that S4 does a half-decent job of representing what we mean when we say "I know that X". It also has another application that relates back to partial evaluation.

Under the Curry-Howard isomorphism we have an isomorphism between (intuitionistic) propositional calculus and 'types' in a programming language. If A and B are types then A∧B is the cartesian product type, A∨B is the disjoint union type and A→B is the type of functions that take an A as input and return a B. That's all standard stuff. But now we can augment these types with a new type □A which means a piece of data whose value can be worked out by the compiler. For example we normally think of 1.0 as an object of type Float, but it's really of type □Float because the compiler can do stuff with 1.0 during compilation. The meaning of that should become apparent in the next paragraph.
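As a rough sketch, the modal operator can be modeled in Haskell by a wrapper type. The names `Box` and `one` are made up for illustration; they don't come from any of the papers:

```haskell
-- A minimal sketch: model the modal type constructor □ as a newtype.
-- Box a stands for □a, a value the compiler could, in principle,
-- evaluate at compile time.
newtype Box a = Box a

-- The literal 1.0 is known statically, so it can be given type Box Float.
one :: Box Float
one = Box 1.0
```

Of course an ordinary newtype doesn't make the compiler do anything special; it just lets us track the distinction in the types.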

The function sin is of type □(Float→Float) because, like 1.0, sin is just a constant so the compiler knows its value. If the compiler knows the value of something that is a function, and it knows the value of its argument, then the compiler is in a position to perform the computation at compile time. In this case, the compiler could compute sin 1.0 during compilation rather than defer the computation to the final executable. In other words we have a function □(a→b)→(□a→□b). But this is just the axiom K of S4. It's also, essentially, the same as the partial evaluation I described here.
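Continuing the sketch, axiom K is just compile-time function application. (`Box` and `apK` are hypothetical names for illustration, not from any paper.)

```haskell
-- □a: a value the compiler knows at compile time.
newtype Box a = Box a

-- Axiom K: □(a→b) → (□a→□b).  If both the function and its argument
-- are known at compile time, so is the result of applying one to the other.
apK :: Box (a -> b) -> Box a -> Box b
apK (Box f) (Box x) = Box (f x)

-- e.g. sin applied to 1.0, both statically known:
sinOne :: Box Float
sinOne = apK (Box sin) (Box 1.0)
```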

If the compiler knows the value of something, it's still at liberty to delay the evaluation until runtime. In other words, we have a map □a→a. This is the axiom of S4 known as T. Notice that we don't have the reverse: a value of type a might be computed from an input from the user, and as the compiler is presumed to have run before the user gets to input anything to the program, there's no way the compiler can have a general way to promote an object of type a to □a. Axiom T might be used if we chose to write code to add a number of type □Float to one of type Float. The compiler could cast the one of type □Float to type Float in order to allow the addition to be performed in the executable at runtime.
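In the same hypothetical sketch, axiom T is just forgetting that a value was known statically (`unbox` and `addMixed` are made-up names):

```haskell
-- □a: a value the compiler knows at compile time.
newtype Box a = Box a

-- Axiom T: □a → a.  A compile-time value can always be demoted to an
-- ordinary runtime value; the reverse map a → □a does not exist.
unbox :: Box a -> a
unbox (Box x) = x

-- Adding a □Float to a Float: demote the boxed value first, then add
-- at runtime.
addMixed :: Box Float -> Float -> Float
addMixed bx y = unbox bx + y
```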

Though we don't have a→□a we do have □a→□□a. This is more or less by fiat. It just says the trivial thing that if you say it's known at compile time twice over then that's the same as knowing it once. So we have axiom 4 of S4.
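Axiom 4 is the most trivial of the three to sketch (`dup` is again a made-up name):

```haskell
-- □a: a value the compiler knows at compile time.
newtype Box a = Box a

-- Axiom 4: □a → □□a.  Knowing a value at compile time twice over is
-- no more information than knowing it once, so we can always add a box.
dup :: Box a -> Box (Box a)
dup bx = Box bx
```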

So intuitionistic S4 can be viewed as the logic we need to annotate types with information that tells the compiler what it can compute at compile time and what should be left until runtime. In a sense it's a form of macro evaluation, providing some of the features that the C/C++ preprocessor provides when it runs just before compilation. But what's nice is that it's expressed in the language itself, by extending the type system, rather than as a separate language (which the C/C++ preprocessor essentially is).

Anyway, this description is very rough and there are a few gotchas with this interpretation. So I suggest reading a real paper on the subject. My summary doesn't correspond exactly to any one paper I looked at, so rather than reference one I can use the great universal quantifier known as Google to express them all in one go. Apparently there does exist an implementation of a subset of ML that supports these modal types.

By the way, I thought I'd revisit self-reproducing code armed with this modal logic to see if I could produce a typed version of the theorem that a language with partial evaluation always has a fixed point. But the axioms of S4 seem to rule out this kind of self-reference because I can't construct a function whose type allows it to accept inputs of its own type.

There seems to also be a connection with temporal logic with □ saying something about the time when a piece of data is available.

Labels: mathematics

## 2 Comments:

I had an idea about something similar to this, using a modal logic.

If you have a programming language (like twelf) in which you do all computations with proofs about types then you might be able to use S4 as a kind of compile-time type checking, mixing dynamic and static typing.

As an example let us take a language with a syntax similar to twelf and let us assume we have natural numbers and we want a generic addition.

```
Nat: type.
z: Nat.
s: Nat -> Nat.

is_nat_z: Nat(z).
is_nat: Nat(s(X)) <- Nat(X).

add_nat: Add(X,Y,Z) <- Nat(X), Nat(Y), Add_Nat(X,Y,Z).
add_nat_z: Add_Nat(z,Y,Y).
add_nat_s: Add_Nat(s(X),Y,s(Z)) <- Add(X,Y,Z).
```

Now we pose the query:

```
[](Nat(X), Nat(Y), Add(X,Y,Z))
```

We should get the statically typed version of Add_Nat instead of the generic version.

Just brainstorming again...

Maybe what you're describing is connected to epistemic logic. In epistemic logic we have a modal operator K where K means "I know that". Instead make K mean "the compiler knows that this is of type...". An object of type Int is an integer, but an object of type KInt is an integer and the compiler knows it at compile time. It's a bit tricky because we're dealing with intuitionistic logic so the standard literature on the subject doesn't all apply.
