tag:blogger.com,1999:blog-11295132.post385437415800622046..comments2015-05-30T22:32:59.468-07:00Comments on A Neighborhood of Infinity: Data and CodataDan Piponihttps://plus.google.com/107913314994758123748noreply@blogger.comBlogger24125tag:blogger.com,1999:blog-11295132.post-22188490208308931912013-05-20T13:27:51.347-07:002013-05-20T13:27:51.347-07:00Regarding:
> The rule is: you're only allo...Regarding:<br /><br />> The rule is: you're only allowed to use structural recursion with data and guarded recursion with codata<br /><br />I think the phrasing of your rule can be misleading unless the reader keeps in mind that all data is also codata. IIUC, you can use guarded recursion with data, too. Right?Dave Abrahamshttp://www.blogger.com/profile/11169343557510903330noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-11530176733681329302012-11-14T16:46:01.374-08:002012-11-14T16:46:01.374-08:00I don't see program termination as all that im...I don't see program termination as all that important. In an operating system, you want an event-based system that handles interface signals, updates internal data, and reflects the changes externally. The same goes for many programs that run in the OS. The only time something should halt is if a program is designed to perform exactly one calculation and end, returning the result, or if an event-driven program is sent a signal to stop.MCAndrehttp://www.blogger.com/profile/10025555896996100599noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-52998559302751392692011-01-31T15:48:37.635-08:002011-01-31T15:48:37.635-08:00Interesting (re)reading. Once I attempted (kind of...Interesting (re)reading. Once I attempted (and kind of succeeded) to implement ZFC in Java; this kind of reasoning would make my attempt more succinct, I believe.<br /><br />Are you intentionally hiding the notions of algebra and coalgebra under the more innocent notions of data and codata? This way it must be more digestible by the public, but maybe it's time (okay, it's 2011 now) to come out of the closet with our functors, natural transformations, adjoints, monads etc?Vlad Patryshevhttp://www.blogger.com/profile/13466586996802181998noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-14456575294920032142007-10-05T15:25:00.000-07:002007-10-05T15:25:00.000-07:00Wow!!!
This article (and reading the article about...Wow!!! This article (and reading the article about Total FP) gave me another idea:<BR/><BR/>That data and codata are exactly the same thing, not just duals of each other, because we can infer equally on both. The truth is that no inference on one can be proven valid without making some inference on the other.<BR/><BR/>As a consequence, any axiomatics in a data system cannot be said to be valid as long as we have not defined its associated coaxioms, and a system (like any programming language, be it functional or imperative) cannot be complete as long as this is not true.<BR/><BR/>If the Turing machine is complete and can be defined using finite states and finite sets of inference rules, it can only exist because it is associated with a comachine. If you look closely at the Turing machine, it is complete not because of these finite states or finite inference rules, but because of the existence of the infinite band on which it operates and retrieves/stores its data and instructions. The fact that the Turing-complete system uses a simple infinite band (i.e. an infinite list) as its internal comachine is the reason why it works, but even this comachine operates with finite rules, using the infinite processing capabilities of the machine operating it (even though this machine has finite states and finite rules).<BR/><BR/>This consideration extends to the type system itself: it makes me think that type instances and types are no different, or equivalently, that elements and ur-elements in a set theory are also not different: neither of them can exist without the existence of the other.<BR/><BR/>Take for example the ur-element 'x' in a set theory: its dual is the infinite set of sets that can contain it. The concept of inclusion of sets cannot live without the dual concept of membership for ur-elements.<BR/><BR/>Why not unify all this as a required condition for validity?
Suppose that an algorithm can't be proven valid, or a predicate can't be proven to be true or false in finite time using a Turing machine: why do we need to say they are undecidable?<BR/><BR/>For me, we don't need this, and what we really want is a complete system where inference is still possible. We can just build the complete system by stating that "undecidable" maps to "true" (and also its dual system where undecidable maps to "false"). So the two systems coexist and work within the same complete universe where both "true" and "false" live simultaneously. We don't need to artificially split the universe into a real universe made of finite elements and its dual couniverse made of infinite elements, because neither of these two can exist in isolation.<BR/><BR/>Let's think about the type system as a whole: a simple finite element like 0 (or 2), termed an "ur-element" in set theories, does not exist without its coelement: the list of sets in which it is a member is infinite, each of these sets being in fact a "type".<BR/><BR/>Rewriting this, 0 has an infinite number of types (including "Nat", "Int", "Complex", List...)<BR/><BR/>What we are inferring with type systems with notations like <e,t> is too restrictive, because the type "t" has been restricted to a first-order language.<BR/>We'd like to be able to replace *freely* either e (the ur-element) or t (its type) by assigning them any other type u.<BR/><BR/>For example: allowing inferences on <e,<t,u>> instead of just <e,t>, for any "meta-type" u, and then allowing u as a possible member of our universe.
And for the system to be complete, all "meta-" levels (or higher-order levels) are part of their effective co-type.<BR/><BR/><e,t> is then just a restriction of the type of e, and if you want it to be complete (per the Turing analogy) you must also admit that e is just one finite abstraction of an infinite set of <e,t>, which is also just a particular realization of <e,<t,u>>, as well as <e,<t,<u,v>>> and so on...<BR/>To make the bracketed notation easier to handle, why do we need to restrict it to the pair-only form, when we could use the concept of possibly infinite tuples?<BR/><BR/>In this case, the ur-element 'e' just needs to become a productive dual synonym for the infinite set of elements:<BR/>{ <e>, <e,t>, <e,t,u>, <e,t,u,v>... }<BR/>where each member of this set belongs to an n-order language, where n is the size of the tuple.<BR/><BR/>In the complete system, e exists only because of this dual set, which is also a member of the same complete system. Let's not differentiate them, because when viewed from the Universe, they are of the same "type" (i.e. any inference on them can be made equally and proven at every level of language, the n-order language being just a projection, i.e. a restriction of the universe, exactly like the ur-elements that are part of this universe). In this case, the functions operating on elements in the universe are also members of this complete Universe.<BR/><BR/>Currying is also a partial view of what functions are really describing, and thinking about them with just this model is necessarily incomplete (meaning that proving their validity becomes impossible with this restriction).<BR/><BR/>So what do we need? Just an infinite set of valid recursion rules, where all languages and meta-languages can be defined with simple finite sets of inference rules.<BR/><BR/>This is possible!
Because the infinite set of inference rules cannot exist without its dual set of finite rules defining the language in which the inference rules are specified for each n-order language! We don't need to prove that these rules are valid (in fact we can't, because we would return to the dual reasoning system). So let's choose them ARBITRARILY in a way that works for our common logic.<BR/><BR/>So let's just build our complete system from a finite set of rules governing the inference rules used in each n-level machine. And then let's say that an assertion at level 'n' is proven if it can be proven that its dual assertion in the type system at level (n+1) is infinite and undecidable. To prove that the dual assertion at level (n+1) is undecidable, we then just need to prove that it is decidable at level (n+2). This will not prove anything for level (n), but it will make it *productive*.<BR/><BR/>In other words, we won't be able to prove every theorem in a complete system if we just think in terms of data and codata (or ur-elements and their types, which are only particular sets in which these ur-elements are members).<BR/><BR/>Let's just introduce cocodata (reasoning at level n+2) and say that to prove something at level n (starting at level 0 for ur-elements like simple constants) we can arbitrarily work at level 2, or 4...<BR/><BR/>The even/odd structure of this model immediately appears, and it is equivalent to the arbitrary left/right distinction in the <e,t> notation used in strong type systems.<BR/><BR/>As a consequence, any strong type system limited to only one level of abstraction of types will fail to be complete, so that it will be impossible to prove anything; but if we admit the existence of an infinite number of meta-languages, then what is a type (or codata) at level n is just an ur-element (or data) at level (n+1), where it also has its own type.<BR/><BR/>So any replacement of <e,t> by <e,<t,u>>, rewritten <e,t,u> where u is an arbitrary type, becomes
valid. Then if we can safely ignore t in the triplet <e,t,u> and just infer on <e,,u>, we can prove many things. Let's make a language where 'e' (ur-elements or data) and 'u' (meta-types) are the same and can be part of the same valid sets.<BR/><BR/>The only problem for us is to define a minimum set of rules for allowing this such that it does not violate our initial axioms at level 0.<BR/><BR/>To do that, we must be able to define more strictly what the axioms of our level 0 are, i.e. what its primitives are, for the system to be productive. Being productive means that it can be implemented in a finite-state Turing machine (without needing to represent how its I/O band works, which is another black box just working based on contracts on what each of its storage cells can do with our limited Turing engine).<BR/><BR/>The basic Turing machine operates on a band that just has black or white cells: it just uses this bit of information from the band and a finite number of bits in its internal state to determine a limited number of actions that will change zero or more of its internal bit states, modify the bit of information on the I/O band, or move the band. This can be modeled by the simple concept of lists (for the band) and two ur-elements, each one being the codata of the other.<BR/><BR/>So the only thing that we need is a set of inferences between an empty list (nil) and non-empty lists, given distinct types. nil is of type Nil; non-empty lists have the type Colist.<BR/><BR/>If 0 is given type Bit, and 1 is dual to 0, then 1 is also of the same nature as types like Bit; a complete Bit type needs to include it, while excluding 0 as its meaning.
This can be done by partitioning arbitrarily like this:<BR/><BR/> concept | co-concept<BR/> ----------------------+-----------<BR/> 0 | 1 or infinite<BR/> Bit | CoBit or non-empty list of Bit<BR/><BR/>and then saying that the "bit" is complete only through this duality.<BR/>Then in dual formulas we'll have pairs like:<BR/><0,Bit> or <1,Bit><BR/>which can be freely replaced by inferences of:<BR/><0,Bit,list of Bit> or <1,Bit,list of Bit><BR/>and then the medial elements in those tuples removed or changed arbitrarily, like:<BR/><0,Nat,non-empty infinite list of Bit> or <1,Nat,non-empty infinite list of Bit><BR/><BR/>(The third terms of these tuples are black boxes, like the Turing I/O band: each just models a band whose current position reads as 0 or 1 depending on which of the triplets we speak about.)<BR/><BR/>The interesting thing is that "Nat" in the triplet is freely replaceable according to the properties of the band, but not according to what it actually contains. This is analogous to the infinite number of ways to represent a single bit on the band as a list of bits of arbitrary values or lengths (even if the length on the I/O band is finite, what is stored after it does not matter, and we can as well extend the band indefinitely by padding the same arbitrary default bit on it, so that we can make inferences without knowing what is actually written on those places of the Turing I/O band).<BR/><BR/>Now we need a language where we can assign static properties to the relations that link co-codata to the associated data, so that they respect the axioms on data and their associated inference rules. To check that these relations are valid, we can use normal inference, because everything is in a finite state: there is a finite number of rules and a finite number of axioms for the root level 0.
If we can prove it, then we have proven that the inference rules used at level 1 are productive, even if they are undecidable, and the system will remain productive and usable, despite the quite arbitrary choice of rules for thinking at level 2.<BR/><BR/>What do you think about this idea? Why not think about a language that permits such arbitrary definitions based on productivity rather than decidability? Shouldn't it have the same power and completeness as a Turing machine?<BR/><BR/>For now Haskell just works on data and codata in a limited way, through its strong type engine, using currying as its only inference rule for handling the distinction between data and codata, but not permitting the enforcement of type rules between the first parameter and the third in a function. If we add such rule enforcement, then all we need to prove something with it is to be able to rewrite our algorithms with such a constraint.<BR/><BR/>So we need a subset of Haskell where no provable function has a type signature with fewer than 4 types, the 1st one being constrained by the 3rd, and the 2nd one being constrained by the 4th, and a production rule that allows transforming 2-parameter functions into 4-parameter functions (more powerful than just currying, which gives no constraints).<BR/><BR/>Example:<BR/>the type signature would become verifiable:<BR/>a->b->c->d provided that there exists a fixed inference/production rule between a and (a->b), and between c and (c->d); and this 4-parameter function, if it is provable computationally, would become a proof that the same function operating on a->c (totally curried, as it takes one parameter and returns one) is equally valid, proven by demonstrating it in a constrained type system where all parameters are in the form a->b where b is the dual of a, except the last one, the function always taking an odd number of inputs and returning an odd number of values, the even parameters always being in the dual (necessarily infinite)
space of the preceding parameter.<BR/><BR/>In addition, it would deprecate the separate specification of types and input/output variables: they would become part of the same expression.<BR/><BR/>So instead of writing:<BR/><BR/>doubled :: Int->Int<BR/> x -> y = times x 2<BR/> return y<BR/><BR/>and trying to see that it will return, we would rewrite it as:<BR/><BR/>doubled :: Int -<> Type -> Int -<> Type<BR/>do<BR/> (x<>Int) -> (y<>Int) = (times<>Func) (x<>Int) (2<>Int)<BR/> return (y<>Int)<BR/><BR/>rewritable as this generalization (in the stricter type system):<BR/><BR/>doubled' :: A -> T -> A -> T<BR/>do<BR/> (x:t) -> (y:t) = (times:func) (x:t) (2:t)<BR/> return (y:t)<BR/><BR/>which can be proven by currying it, or by replacing types by infinite lists and remapping, for example, inference rules such as "recursion on infinite data tails" in classic programming to "type inference on finite type heads" under the new representation (but still having the possibility to recurse on the tail).<BR/><BR/>The additional inference rules (in the stricter type system of the language) enforced during the generalization are the type compatibility of all variables at even positions and the type compatibility of all variables at odd positions, so that it does not break the dualities needed for completeness, plus normal inference rules for proving the curried subfunctions with an odd number of parameters, by reducing them to axioms of the generalized but stricter type system.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-81539596548735691112007-08-16T16:22:00.000-07:002007-08-16T16:22:00.000-07:00Dan,I read some of the banana paper about 18 mon...Dan,<BR/><BR/>I read some of the banana paper about 18 months ago. Most of it made perfect sense but I hadn't the faintest clue what it was talking about in that sentence you quoted.
It's funny returning to that paragraph now, after having made sense of the data/codata distinction, and seeing that it makes sense after all.<BR/><BR/>"there is use for a language with types in SET"<BR/><BR/>That would be a total functional language, i.e. one where a 'function' has its proper set-theoretical mathematical meaning and has a sensible value for every element of its domain. But it wouldn't be Turing complete, so that'd be a big design decision.<BR/><BR/>As for the name sigfpe: years ago I was jealous of people who owned domains whose names were the names of Unix signals. So I did some searching, found sigfpe.com, and grabbed it. Having said that, I can make up all kinds of post hoc rationalisations about why it's appropriate, including some along the lines you suggest. And one very down-to-earth explanation is simply that when I was working at Cinesite I was the guy who quadrupled the speed of large amounts of code simply by noticing that under Irix, the CPU was spending vastly more time servicing SIGFPE exceptions than anything else. It seemed to fit in a bunch of ways, so I kept it.sigfpehttp://www.blogger.com/profile/08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-42831582295515067362007-08-16T15:43:00.000-07:002007-08-16T15:43:00.000-07:00OK, I think I see where I went wrong. According to...OK, I think I see where I went wrong. According to Meijer et al., "Functional Programming with Bananas, Lenses, Envelopes, and Barbed Wire", p. 2:<BR/><BR/>"Working in the category SET...means that finite data types (defined as initial algebras) and infinite data types (defined as final co-algebras) constitute two different worlds....Working in CPO has the advantage that the carriers of initial algebras and final co-algebras coincide, thus there is a single data type that comprises both finite and infinite elements.
The price to be paid however is that partiality of both functions and values becomes unavoidable."<BR/><BR/>So Haskell does not distinguish between data and codata. I guess a summary of your post is that it could have, the design choice of types as objects of CPO instead of SET not being a predestined one, and that there is use for a language with types in SET.<BR/><BR/>I guess where I went wrong was that everywhere you read about the initial F-algebras that Haskell types use, but (almost) nowhere does anyone come out and say that these are simultaneously final F-coalgebras. The role of "undefined" in Haskell (that it is not necessary, but does enable finite and infinite to cohabit) seems pretty important to be kept such a secret.<BR/><BR/>Is this very topic why you named your blog sigfpe, because embracing the undefined allows you access to the "Neighborhood of Infinity"?Dan Westonhttp://www.blogger.com/profile/11009268051282300971noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-37212681784906859362007-08-15T17:13:00.000-07:002007-08-15T17:13:00.000-07:00Dan,Surely "possibly finite"="possibly infinite".A...Dan,<BR/><BR/>Surely "possibly finite"="possibly infinite".<BR/><BR/>Anyway, I definitely mean finite, and possibly infinite, for the least and greatest fixed points respectively. As Haskell has no means of checking for infinite datatypes, and no way to stop you building them, it always builds codata.<BR/><BR/>In your example, PFL could equally be a data or a codata type. Both x and y could be codata.
But only x could also be data.<BR/><BR/>I'm not sure exactly what point you're stuck on, so do ask further questions if needed.sigfpehttp://www.blogger.com/profile/08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-17181866967210560012007-08-15T14:00:00.000-07:002007-08-15T14:00:00.000-07:00My understanding is probably wrong, but I may not ...My understanding is probably wrong, but I may not be the only one, so I'll just ask: the LFP and GFP of Haskell's initial algebra data types are (I thought) different from what you're describing:<BR/><BR/><I>I.e. a list of a's is either the empty list [] or it's made from an a and a list of a's.<BR/>You can think of this as an equation in [a]. In Haskell we take this as uniquely defining what [a] is, but in reality there is more than one solution to this equation. Consider the type consisting of only finite lists. That satisfies this equation. A finite list is either an empty list, or an element followed by a finite list. Similarly a possibly infinite list is either an empty list, or an element followed by a possibly infinite list. There is an ambiguity. <B>Finite lists</B> form, what is in some sense, the <B>smallest</B> possible solution to this equation </I>[i.e. least fixed point solution? --DDW]<I>. The <B>possibly infinite lists</B> form the <B>largest</B> possible solution </I>[i.e. greatest fixed point solution? --DDW]<I>. Haskell takes the <B>largest</B> possible solution </I>[really?
--DDW]<I>.</I><BR/><BR/>I thought the LFP (returned by fix or induced by Fix) was not a <I>finite</I> list but a <I>possibly finite</I> list, and that the GFP was not a <I>possibly infinite list</I> but a <I>truly infinite list (aka stream)</I>.<BR/><BR/>According to my (apparently wrong) understanding, the type PFL (short for Possibly Finite List)<BR/><BR/>> data PFL a = Nil | Cons a (PFL a)<BR/><BR/>(as the least fixed point solution that Haskell generates) is inhabited by both<BR/><BR/>> x, y :: PFL Int<BR/>> x = Cons 3 Nil<BR/>> y = Cons 3 y<BR/><BR/>and that you'd have to ditch the Nil constructor to force the GFP solution using the implicit LFP meaning that Haskell imposes on data.<BR/><BR/>If I've been thinking about this wrong all this time, I'm going to kick myself.Dan Westonhttp://www.blogger.com/profile/11009268051282300971noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-31727416667804873352007-07-30T07:57:00.000-07:002007-07-30T07:57:00.000-07:00Are you sure apologies aren't in order?Are you sure apologies aren't in order?walthttp://www.arsmathematica.net/noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-30788925483235720552007-07-15T08:15:00.000-07:002007-07-15T08:15:00.000-07:00peter mc,Essentially structural and guarded recurs...peter mc,<BR/><BR/>Essentially, structural and guarded recursion are the same as cata- and ana-morphisms, which are the same as folds and unfolds in F-algebras and F-coalgebras. But there may be some more details needed to make this precise.sigfpehttp://www.blogger.com/profile/08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-86657611389587618822007-07-15T03:12:00.000-07:002007-07-15T03:12:00.000-07:00Thanks for explaining this so beautifully!A quick ...Thanks for explaining this so beautifully!<BR/><BR/>A quick question:<BR/><BR/>I assume that you're familiar with the concepts of anamorphism and catamorphism.
Are these ideas equivalent to recursion and corecursion, or is the similarity just superficial?Peter Mchttp://www.blogger.com/profile/05863206648734186476noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-83552151432958835682007-07-15T01:51:00.000-07:002007-07-15T01:51:00.000-07:00Lots of good articles about coalgebra and corecurs...Lots of good articles about coalgebra and corecursion <A HREF="http://homepages.cwi.nl/~janr/papers/" REL="nofollow">here</A>.<BR/><BR/>There is something that is not yet clear to me: in the wikibook about the denotational semantics of Haskell it is written that the infinite list is a least upper bound. So, it is not the largest solution.<BR/><BR/>So it looks like, due to lazy evaluation, there is something special in the category of Haskell programs, and the coalgebras are also algebras. But I am not sure of it.alpheccarhttp://www.blogger.com/profile/14645433315403867431noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-58455191328034536622007-07-15T01:36:00.000-07:002007-07-15T01:36:00.000-07:00You currently havefibs = 1 : 1 : zipWith (+) fib (...You currently have<BR/><BR/>fibs = 1 : 1 : zipWith (+) fib (tail fib)<BR/><BR/>and probably want<BR/><BR/>fibs = 1 : 1 : zipWith (+) fibs (tail fibs)<BR/><BR/>fib n = fibs !!
nAaron Denneyhttp://www.blogger.com/profile/15613957348593645695noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-72914277203445537042007-07-15T00:19:00.000-07:002007-07-15T00:19:00.000-07:00data [a] = [] | a : [a]It's easily provable that t...data [a] = [] | a : [a]<BR/><BR/>It's easily provable that this equation has a least element (because the relevant functor is monotonic), but it's not obvious to me as to whether or not it has a greatest one.<BR/><BR/>Does Haskell guarantee this?Pseudonymhttp://www.blogger.com/profile/04272326070593532463noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-70841347470570902832007-07-14T22:28:00.000-07:002007-07-14T22:28:00.000-07:00The codata link is 404'ing.It's true that mathemat...The codata link is 404'ing.<BR/><BR/>It's true that mathematicians tend to privilege things over cothings. Thanks for trying to break this.Nick Bornakhttp://www.blogger.com/profile/16896645842063632808noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-44539151917272708062007-07-14T21:37:00.000-07:002007-07-14T21:37:00.000-07:00"codata" link is broken"codata" link is brokenAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-11295132.post-70927136741562100472007-07-14T20:39:00.000-07:002007-07-14T20:39:00.000-07:00I am not a Haskell expert, but I think I have foun...I am not a Haskell expert, but I think I have found a minor mistake in your definition of sumSoFar. 
I believe it should be:<BR/><BR/>sumSoFar :: Int -> [Int] -> [Int]<BR/>sumSoFar x [] = [x]<BR/>sumSoFar x (y:ys) = x : sumSoFar (x+y) ys<BR/><BR/>Thanks for this insightful post, by the way.Alexandrehttp://www.blogger.com/profile/10338659962011326420noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-91377097433856371652007-07-14T20:38:00.000-07:002007-07-14T20:38:00.000-07:00How easy would it be to write a tool that will per...How easy would it be to write a tool that will perform this kind of check on a Haskell program?Peterhttp://www.blogger.com/profile/03799579139256306118noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-23066631668577264502007-07-14T20:13:00.000-07:002007-07-14T20:13:00.000-07:00julian,Having a compiler distinguish between data ...julian,<BR/><BR/>Having a compiler distinguish between data and codata is straightforward. I think it's probably easy to have compilers recognise structural and guarded recursion - but I'm not sure as I don't know what a full set of rules would look like. For example I don't know what happens when you start trying to "doubly recurse" over both data and codata. 
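To make the two recursion disciplines being discussed here concrete, the following is a small Haskell sketch (my own illustration, not code from the post or the comments): structural recursion peels one constructor off a finite input per call, while guarded recursion emits one constructor of a possibly infinite output per step.

```haskell
-- Structural recursion on data: each recursive call receives a
-- strictly smaller piece of the finite input, so it terminates.
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- Guarded recursion on codata: the recursive call sits directly
-- under a (:) constructor, so every demand for an element makes
-- progress -- the stream is productive even though it never ends.
nats :: [Integer]
nats = go 0 where go n = n : go (n + 1)

main :: IO ()
main = do
  print (sumList [1, 2, 3])  -- consumes data: 6
  print (take 5 nats)        -- observes a finite prefix of codata
```

A total language would accept sumList by structural recursion and nats by guardedness; plain Haskell, of course, compiles both without checking either property.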
There's probably a paper on this somewhere.sigfpehttp://www.blogger.com/profile/08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-87915271260173205862007-07-14T18:15:00.000-07:002007-07-14T18:15:00.000-07:00Nice post, but since I too was first delighted by ...Nice post, but since I too was first delighted by fibs, I'm going to be niggling and point out that you've changed names in mid-binding (used fib in the zipWith).Cainhttp://www.blogger.com/profile/04767362067992713864noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-13514484182735805782007-07-14T18:12:00.000-07:002007-07-14T18:12:00.000-07:00Is Haskell's type system powerful enough that (wit...Is Haskell's type system powerful enough that (with a rewritten Prelude etc, but without changes to the core syntax) it could be made to distinguish data and codata, and force coders to use them in correct ways?Julian Morrisonhttp://www.blogger.com/profile/01115506275519545033noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-20976521456599038472007-07-14T17:21:00.000-07:002007-07-14T17:21:00.000-07:00Thanks, I understand corecursion now a bit better....Thanks, I understand corecursion now a bit better.<BR/><BR/>I found it interesting to consider the form of the Church encodings of data and codata's Mu:<BR/><BR/>Data-land:<BR/><BR/>μ = λ (functor : * → *) . ∀ (mot : *) . (functor mot → mot) → mot<BR/><BR/>Codata-land:<BR/><BR/>μ = λ (functor : * → *) . ∀ (kont : *) . (∀ (seed : *) . (seed → functor seed) → kont) → kont<BR/><BR/>Programming languages should probably allow the type-μ to be overridden.
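The two formulas above translate fairly directly into Haskell with RankNTypes. The sketch below is my own transcription, using the conventional names Mu (the fold-based data encoding) and Nu (the unfold-based codata encoding) — those names and the ListF helper are not from the comment itself.

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Data-land: a value of Mu f is its own fold -- hand it an
-- algebra (f a -> a) and it collapses itself to an a.
newtype Mu f = Mu (forall a. (f a -> a) -> a)

-- Codata-land: a value of Nu f is a hidden seed plus a step
-- function that unfolds one layer of f at a time.
data Nu f = forall s. Nu (s -> f s) s

-- The list base functor, one layer of a list with elements e.
data ListF e x = NilF | ConsF e x

-- Finite lists embed into Mu by building the fold directly.
fromList :: [e] -> Mu (ListF e)
fromList []     = Mu (\alg -> alg NilF)
fromList (x:xs) = Mu (\alg -> alg (ConsF x (case fromList xs of Mu g -> g alg)))

-- Summing via an algebra: structural recursion packaged as a fold.
sumMu :: Mu (ListF Int) -> Int
sumMu (Mu fold) = fold (\layer -> case layer of NilF -> 0; ConsF x s -> x + s)

-- An infinite stream of naturals lives happily in Nu.
natsNu :: Nu (ListF Integer)
natsNu = Nu (\n -> ConsF n (n + 1)) 0

-- Observe finitely many layers of a Nu value.
takeNu :: Int -> Nu (ListF e) -> [e]
takeNu 0 _ = []
takeNu n (Nu step s) = case step s of
  NilF       -> []
  ConsF x s' -> x : takeNu (n - 1) (Nu step s')
```

Note how sumMu can only be applied to the fold encoding, while natsNu could never be converted to a Mu value without diverging — the two encodings really are different worlds, which is the point of the SET-vs-CPO discussion above.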
It allows so much fun variation in the structure of recursive types...stefanorhttp://www.blogger.com/profile/17489341093078909229noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-50264907910137767912007-07-14T15:19:00.000-07:002007-07-14T15:19:00.000-07:00porges,If you follow my link to ars mathematica yo...porges,<BR/><BR/>If you follow my link to ars mathematica you'll see that writing a Wikipedia article was my original aim!sigfpehttp://www.blogger.com/profile/08096190433222340957noreply@blogger.comtag:blogger.com,1999:blog-11295132.post-13266291838637345472007-07-14T15:14:00.000-07:002007-07-14T15:14:00.000-07:00Nice post :)You seem to have a better "bottom-up" ...Nice post :)<BR/><BR/>You seem to have a better "bottom-up" understanding of this than I do (only having spent 18 months at university)... perhaps you'd like to contribute to the <A HREF="http://en.wikipedia.org/wiki/Total_functional_programming" REL="nofollow">Wikipedia article on total functional programming</A>?<BR/><BR/>The next thing I'd like to see is a restricted form of Haskell that does exactly these kind of termination checks!Porgeshttp://www.blogger.com/profile/02727258157936734796noreply@blogger.com
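As a footnote to the wish for a termination-checked Haskell, here is the post's fibs with Aaron Denney's fix applied — an example of exactly the kind of definition such a checker would need to classify. (Whether a purely syntactic guardedness check would accept the recursive occurrences inside zipWith is the kind of rule-set detail sigfpe says he is unsure about.)

```haskell
-- Guarded corecursion: fibs refers to itself only beneath the two
-- (:) constructors, so any finite prefix can be computed lazily,
-- even though the list as a whole is infinite.
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n

main :: IO ()
main = print (take 8 fibs)  -- 1,1,2,3,5,8,13,21
```

In total-FP terms, fibs is codata and the obligation is productivity, not termination: demanding element n forces only finitely many steps of the definition.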