### More Laws of Form

I mentioned earlier that I was taking a fresh look at G. Spencer-Brown's Laws of Form because I found a document by Mark W. Hopkins that redescribes the content of the book in more traditional mathematical notation.

Hopkins starts by converting Brown's spatial notation to a linear notation using 1 to represent the empty space and () to represent distinction. We have the usual transformation rules:

(()) -> 1 and ()() -> ()
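These two rewrite rules are easy to mechanise. Here's a minimal sketch of my own (not Hopkins' construction), encoding expressions as strings of parentheses with the empty string playing the role of 1:

```python
# Reduce Laws of Form expressions written as strings of parentheses.
# The empty string "" stands for 1, and "()" is the mark (distinction).
# Rules: cancellation (()) -> 1 and condensation ()() -> ().

def reduce_lof(expr: str) -> str:
    """Repeatedly apply (()) -> '' and ()() -> '()' until no rule fires."""
    while True:
        reduced = expr.replace("(())", "").replace("()()", "()")
        if reduced == expr:
            return expr
        expr = reduced

# Every variable-free expression reduces to "" (i.e. 1) or "()".
print(reduce_lof("((()))"))    # -> ()
print(reduce_lof("()()()"))    # -> ()
print(reduce_lof("((())())"))  # -> the empty string, i.e. 1
```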

He places this in the framework of a transformational category, which is essentially a category in which the objects are expressions and the arrows are applications of transformation rules. It's pretty obvious that the above system is isomorphic to the usual boolean algebra (0,1,\/,/\,~) with

1 -> 0

() -> 1

(a) -> ~a

a b -> a \/ b

Not much of interest there.
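The mapping can be checked mechanically. Here's a small evaluator of my own (again not Hopkins' formalism) that reads a parenthesis expression as a boolean under the table above:

```python
# Evaluate a parenthesis expression under the mapping:
# empty -> 0, (a) -> ~a, juxtaposition a b -> a \/ b.

def eval_lof(expr: str) -> bool:
    """Evaluate a Laws of Form expression as a boolean value."""
    value, stack = False, []
    for ch in expr:
        if ch == "(":
            stack.append(value)  # save the value accumulated so far
            value = False        # start evaluating the bracket's contents
        elif ch == ")":
            # (a) -> ~a, joined to the enclosing context by \/
            value = stack.pop() or (not value)
        else:
            raise ValueError("only '(' and ')' allowed")
    return value

print(eval_lof(""))      # 1 maps to 0: False
print(eval_lof("()"))    # () maps to 1: True
print(eval_lof("(())"))  # (()) reduces to 1: False
print(eval_lof("()()"))  # ()() reduces to (): True
```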

Brown also describes expressions with feedback. But these are nothing other than sequential logic circuits, i.e. circuits where the state at time t+1 is a boolean function of the state at time t. Hopkins formalises these as simple finite state automata. Nothing new there either. I certainly don't see any advantage over a language like Verilog, which is designed for sequential logic.
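As an illustration (my own sketch, not Hopkins' formalisation), a feedback expression like f = (f) becomes a one-bit state machine whose next state is the negation of the current one, which simply oscillates:

```python
# Simulate a sequential circuit: state(t+1) = step(state(t)).

def simulate(step, state, ticks):
    """Iterate the next-state function and return the trace of states."""
    trace = [state]
    for _ in range(ticks):
        state = step(state)
        trace.append(state)
    return trace

# f = (f): under the boolean reading, next = not current -- an oscillator.
print(simulate(lambda s: not s, False, 5))
# [False, True, False, True, False, True]
```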

Now, suppose A = 1. Then ((A)A) -> 1. Similarly, if A = () then ((A)A) -> 1. So if we extend the language of Laws of Form to include variables we'd like to be able to deduce ((A)A) -> 1 within the language. We certainly can't deduce it with the rules introduced above, so we must introduce some new transformation rules, one of them being ((A)A) -> 1. Mark Hopkins describes two such systems and shows that they are complete in the sense that we can make all of the deductions we expect to be able to make in the presence of variables. This is vaguely interesting, but really we're just learning how to manipulate boolean expressions, nothing profound.
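The two-case argument above can be checked by brute force: substitute each possible value for A in ((A)A) and reduce. A self-contained sketch, using the empty string for 1:

```python
# Verify ((A)A) -> 1 by substituting both values of A and reducing.

def reduce_lof(expr: str) -> str:
    """Apply (()) -> '' and ()() -> '()' until no rule fires."""
    while True:
        reduced = expr.replace("(())", "").replace("()()", "()")
        if reduced == expr:
            return expr
        expr = reduced

for a in ["", "()"]:                  # A = 1 and A = ()
    expr = "((" + a + ")" + a + ")"   # build ((A)A)
    print(expr, "->", reduce_lof(expr) or "1")
# Both cases reduce to the empty string, i.e. 1.
```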

Unfortunately I don't have a copy of Laws of Form to hand so I can't check Hopkins' description against Brown's. My last reading of the book certainly didn't make as much sense as Hopkins' explanation, so I'm anxious to compare. But if this really is the content of Laws of Form then there really isn't much to it. However, I didn't see any discussion of imaginary logic values in Hopkins' document. It may be that these values are simply oscillating states in sequential logic circuits. I really must obtain the book again.

Anyway, it's fun to explore the Laws of Form web site. Apparently a group at the now defunct Interval Research company was trying to make use of Laws of Form in reconfigurable computing. I also came across this. This last document claims to simplify mathematics but all it seems to do is introduce a cumbersome notation. I'd rather use lambda calculus or combinatory logic.

Update: I just found this published critique of Laws of Form.

Labels: mathematics
