### [Haskell-cafe] continuations and monads

Q: Are the continuations in Scheme related to the monads from Haskell? If so, could someone elaborate on that?

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: [Haskell-cafe] continuations and monads

Yes, they are. Purely intuitively, you can see how writing code in a monadic style (using `>>=` a lot) is very similar to writing in continuation-passing style. You can express this most directly with the continuation monad. Then, from this monad, you can express other monads; in some sense, the continuation monad is very fundamental. Take a look at "The Mother of all Monads" [1] from The Neighborhood of Infinity for more details.

[1]: http://blog.sigfpe.com/2008/12/mother-of-all-monads.html

On Aug 17, 2013 7:02 PM, Christopher Howard christopher.how...@frigidcode.com wrote: Are the continuations in Scheme related to the monads from Haskell?
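
The "mother of all monads" idea can be made concrete in a few lines. This is an illustrative sketch of my own (not from the post): Maybe-style failure recovered from the plain `Cont` monad in transformers, by letting a step simply drop the continuation it is handed.

```haskell
import Control.Monad.Trans.Cont (Cont, cont, runCont)

-- Each step receives "the rest of the program" as a continuation k and
-- decides what to do with it. Discarding k aborts the computation,
-- which recovers Maybe-style short-circuiting from Cont alone.
divSafe :: Int -> Int -> Cont (Maybe Int) Int
divSafe _ 0 = cont (\_ -> Nothing)        -- drop the continuation: abort
divSafe x y = cont (\k -> k (x `div` y))  -- proceed normally

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = runCont steps Just
  where
    steps = do
      q1 <- divSafe a b
      q2 <- divSafe q1 c
      return (q1 + q2)
```

Here `calc 12 3 2` yields `Just 6`, while `calc 12 0 2` yields `Nothing` without ever running the second division.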

### [Haskell-cafe] Is it possible to create an instance of MonadBaseControl for PropertyM (QuickCheck)/continuations?

Hi. I'm working on a little experiment at the moment: using monadic QuickCheck to test the integration of my code and the database. I see some of my functions having properties like "given a database in this state, selectAll should return all rows", and so on. My initial attempt has worked nicely, and now I'm trying to test some more complicated properties, but I'm hitting a problem with overlapping primary keys, because I'm not correctly cleaning up after each check.

The simplest solution is to bracket the property itself, and for that I turned to Control.Monad.Trans.Control in lifted-base, but I am struggling to actually write an instance of MonadBaseControl IO (PropertyM IO). It seems that PropertyM is a continuation monad transformer:

    newtype PropertyM m a = MkPropertyM
      { unPropertyM :: (a -> Gen (m Property)) -> Gen (m Property) }

Given this, is it even possible to write an instance of MonadBaseControl? The lifted-base documentation explicitly calls out ContT as *not* having instances, so I wonder if it can't be done. Sadly, I also lack intuition as to what MonadBaseControl really means - I think it means "capture the state of this monad, so later we can go back to exactly the same state (or sequence of actions?)", but this is very flaky.

So... is this possible, and if so, how can I move forward? Thanks for any help or advice!

- Ollie

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

On the general notion of continuations, I believe Matt Might's blog explains it quite well using JavaScript: http://matt.might.net/articles/by-example-continuation-passing-style/

By way of a simple example, he suggests that instead of writing

    function id(x) { return x; }

a CPS version might write:

    function id(x, ret) { ret(x); }

etc. IMO things appear confusing to newbies (it happened to me once too) when people don't use intuitive names for obvious things like continuations.

On Fri, Sep 30, 2011 at 11:42 PM, Mark Spezzano mark.spezz...@chariot.net.au wrote: Can someone please give me a _lucid_ and _simple_ explanation of exactly how continuations can be used in Haskell?
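
The same progression transcribes directly into Haskell, where the types make the pattern explicit. This is my own sketch mirroring Matt Might's JavaScript examples: a direct-style function returns its result, while the CPS version hands it to `ret`.

```haskell
-- Direct style: return the result.
idDirect :: a -> a
idDirect x = x

-- CPS: pass the result to "what happens next".
idCPS :: a -> (a -> r) -> r
idCPS x ret = ret x

addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)

squareCPS :: Int -> (Int -> r) -> r
squareCPS x k = k (x * x)

-- x^2 + y^2, threading "what to do next" by hand at every step:
pythagorasCPS :: Int -> Int -> (Int -> r) -> r
pythagorasCPS x y k =
  squareCPS x $ \x2 ->
  squareCPS y $ \y2 ->
  addCPS x2 y2 k
```

Supplying `id` as the final continuation recovers the direct-style answer: `pythagorasCPS 3 4 id` is 25.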

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

Ozgur Akgun wrote: On 1 October 2011 11:55, Yves Parès limestr...@gmail.com wrote: BTW Heinrich, the

    evalState (sequence . repeat . State $ \s -> (s, s+1)) 0

at the end doesn't work anymore. It should be replaced by:

    evalState (sequence . repeat . StateT $ \s -> Identity (s, s+1)) 0

Or equivalently:

    evalState (sequence . repeat . state $ \s -> (s, s+1)) 0

Thanks, I've changed it.

Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com

### [Haskell-cafe] Really Simple explanation of Continuations Needed

Hi, Can someone please give me a _lucid_ and _simple_ explanation of exactly how continuations can be used in Haskell? I've already had a look at most of the tutorials and explanations on the web, but I'm still confused. Continuations and CPS have me baffled. (I have most of the Haskell textbooks, and even these are sketchy on continuations.)

I don't understand the notion of the Cont monad and how it can be used for multitasking, backtracking and interrupting computations. I understand that functions take in a (continuation) function that represents the work remaining to do, but all of the explanations on the web and in technical papers seem to trip over themselves in explaining the fundamentals to a CPS newbie. If anyone could explain such concepts to me in unambiguous, clear English then this would be very helpful.

Thanks in advance for your help, Mark

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

Mark Spezzano wrote: Can someone please give me a _lucid_ and _simple_ explanation of exactly how continuations can be used in Haskell? I don't understand the notion of the Cont monad and how it can be used for multitasking, backtracking and interrupting computations.

If you just want to implement multitasking, backtracking or interrupting computations, without continuations, I recommend my Operational Monad Tutorial: http://apfelmus.nfshost.com/articles/operational-monad.html The link to the Cont monad is explained at the very end.

Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

Hi Heinrich, I'm really looking to use the Cont monad itself--but the link you gave me is also helpful, so thank you. If anyone else has words of wisdom to add to this thread, please feel free to pitch in.

Thanks, Mark

On 01/10/2011, at 5:08 PM, Heinrich Apfelmus wrote: If you just want to implement multitasking, backtracking or interrupting computations, without continuations, I recommend my Operational Monad Tutorial: http://apfelmus.nfshost.com/articles/operational-monad.html

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

Having worked with the operational package, I can only recommend it. In fact, I was trying to do the same thing you are now. The only thing is that operational needs GADTs, which come as an extension, but are still a useful and heavily used feature.

BTW Heinrich, the

    evalState (sequence . repeat . State $ \s -> (s, s+1)) 0

at the end doesn't work anymore. It should be replaced by:

    evalState (sequence . repeat . StateT $ \s -> Identity (s, s+1)) 0

2011/10/1 Mark Spezzano mark.spezz...@chariot.net.au wrote: Hi Heinrich, I'm really looking to use the Cont monad itself--but the link you gave me is also helpful, so thank you.

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

Hi. On 1 October 2011 11:55, Yves Parès limestr...@gmail.com wrote: BTW Heinrich, the

    evalState (sequence . repeat . State $ \s -> (s, s+1)) 0

at the end doesn't work anymore. It should be replaced by:

    evalState (sequence . repeat . StateT $ \s -> Identity (s, s+1)) 0

Or equivalently:

    evalState (sequence . repeat . state $ \s -> (s, s+1)) 0

Ozgur

### Re: [Haskell-cafe] Really Simple explanation of Continuations Needed

Hello, Oleg has an introduction to delimited continuations which he presented during ICFP: http://okmij.org/ftp/continuations/index.html#tutorial Of course, it's worth mentioning that the Cont monad is actually doing delimited continuations, cf.: http://okmij.org/ftp/continuations/ContExample.hs

Ultimately, the idea is simple, which may be part of the reason why it's so hard to explain (cf. monads). The short version is that at any point in the execution of a program we have some part of the program which has already run, and some part which has yet to run; the latter is the continuation, which is also often called the calling context. All the hoopla about continuations comes from the tricksy things we can do once we realize that programs have these two parts, instead of only thinking about the history of the execution. Operationally, the way the Cont monad works is that we explicitly build up the continuation as a function to be called, and then runCont actually applies that function in order to yield a result. What use is this? Well, it gives us a version of the goto statement, namely call/cc.

Another part of the problem (other than the simplicity) is that the term "continuation" has many different meanings in computer science. On the one hand we have the call/cc notion of continuations, which is what's captured by Scheme's call/cc and by the Cont monad. On the other hand we have the CPS transformation that many compilers use when optimizing code. But the topics of discussion for call/cc and CPS are very different, and so it's easy to get confused if you try to think of them as the same thing. They are closely related, but they're different enough that you should keep them separate until you have some understanding of what they're about.

-- Live well, ~wren
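
The goto flavour of call/cc described above can be seen in a few lines with the `Cont` monad from transformers. This is an illustrative sketch of my own (not Oleg's code): `callCC` hands the block an escape continuation, and invoking it abandons the rest of the block.

```haskell
import Control.Monad.Trans.Cont (Cont, runCont, callCC)

-- callCC gives us `exit`, an escape continuation. Calling `exit v`
-- jumps straight past the callCC block with result v, skipping
-- whatever remained - a local, well-scoped goto.
firstNegative :: [Int] -> Maybe Int
firstNegative xs = runCont (callCC search) id
  where
    search exit = do
      mapM_ (\x -> if x < 0 then exit (Just x) else return ()) xs
      return Nothing  -- only reached if `exit` was never invoked
```

For example, `firstNegative [3, 1, -2, 5]` escapes at the first negative element and yields `Just (-2)`, while `firstNegative [1, 2]` falls through to `Nothing`.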

### [Haskell-cafe] Delimited Continuations For Web Development?

Hi all, Is anyone in the Haskell community doing web development using delimited continuations? Oleg had a talk on it [1] using OCaml and CGI, but I haven't heard of anyone taking it further. -deech

[1] http://okmij.org/ftp/continuations/index.html#shift-cgi

### Re: [Haskell-cafe] operations on lists with continuations

On Thu, Mar 3, 2011 at 12:33 AM, Mark Lentczner mark.lentcz...@gmail.com wrote: To make up for my total misunderstanding of what you were asking before, I hereby offer you the Plumbing module, available here: https://bitbucket.org/mtnviewmark/haskell-playground/src/2d022b576c4e/Plumbing.hs

Indeed, it looks like a more thorough version of what I'm doing. I'm guessing the breaking-into-pairs thing would ultimately be more composable, by which I mean lead to fewer special-case functions.

    :load Plumbing.hs
    [1 of 1] Compiling Plumbing ( Plumbing.hs, interpreted )
    Ok, modules loaded: Plumbing.
    let filterUntil cond start end = (passUntil (>= start) =|= pfilter cond) =+= passWhile (< end)
    let filterUntilPlus1 cond start end = filterUntil cond start end =+= pass 1
    filterUntil even 10 15 `pump` [1..]
    [2,4,6,8,10,11,12,13,14]
    filterUntilPlus1 even 10 15 `pump` [1..]
    [2,4,6,8,10,11,12,13,14,15]

I would write the above as:

    let filterUntilPlus1 cond start end =
          Then.filter cond (>= start) $ Then.takeWhile (< end) $ take 1
    filterUntilPlus1 even 10 15 [1..]
    [2,4,6,8,10,11,12,13,14,15]

I think mine is less flexible, but it interacts with the basic list functions more directly, so it feels simpler to me. In a way it reminds me of STM, in that as long as you leave the last continuation on, it's still open and can be composed. But to get the actual result, you have to stick a non-continuation function on the end, at which point it no longer composes. I suppose that's the model for many, many little DSLs though.

### Re: [Haskell-cafe] operations on lists with continuations

On Wed, Mar 2, 2011 at 12:00 AM, Stephen Tetley stephen.tet...@gmail.com wrote: Maybe you've invented the ApoPrelude? If I were doing it I'd probably code them in terms of an apomorphism - unfoldr with a flush.

Interesting, thanks for the link. Indeed it looks like a special case of a general concept someone has already gotten a fair bit of mileage out of :)

### Re: [Haskell-cafe] operations on lists with continuations

To make up for my total misunderstanding of what you were asking before, I hereby offer you the Plumbing module, available here: https://bitbucket.org/mtnviewmark/haskell-playground/src/2d022b576c4e/Plumbing.hs

With it, I think you can construct the kinds of pipelines you describe, with the composition aspects you desire:

    :load Plumbing.hs
    [1 of 1] Compiling Plumbing ( Plumbing.hs, interpreted )
    Ok, modules loaded: Plumbing.
    let filterUntil cond start end = (passUntil (>= start) =|= pfilter cond) =+= passWhile (< end)
    let filterUntilPlus1 cond start end = filterUntil cond start end =+= pass 1
    filterUntil even 10 15 `pump` [1..]
    [2,4,6,8,10,11,12,13,14]
    filterUntilPlus1 even 10 15 `pump` [1..]
    [2,4,6,8,10,11,12,13,14,15]

- Mark

### Re: [Haskell-cafe] operations on lists with continuations

Maybe you've invented the ApoPrelude? If I were doing it I'd probably code them in terms of an apomorphism - unfoldr with a flush. Unlike regular unfoldr, which discards the final state, an apomorphism uses the final state to produce the tail of the output list. See section 4.4 of Jeremy Gibbons's paper "Streaming representation-changers": http://www.comlab.ox.ac.uk/jeremy.gibbons/publications/

### [Haskell-cafe] operations on lists with continuations

I have a few functions for operating on lists that take continuations:

    -- | Like takeWhile, but with a continuation, so you can chain takes
    -- without copying.
    takeWhileThen :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
    takeWhileThen _ _ [] = []
    takeWhileThen f cont (x:xs)
      | f x = x : takeWhileThen f cont xs
      | otherwise = cont (x:xs)

But of course this isn't enough: then I start wanting a takeThen, and then a filterThen. That makes me think plain explicit recursion would be clearer, but the problem I have with that is that it has an imperative feel. What I mean by that is that the result is a product of the changing state of the recursing function, and it's non-composable. E.g.:

    filterUntil start end = go
      where
      go [] = []
      go (m:ms)
        | msg_start m >= start = takeWhile ((< end) . msg_start) (m:ms)
        | condition m = m : go ms
        | otherwise = go ms

Whereas with filterThen I can write:

    filterUntil start end =
      filterThen condition ((>= start) . msg_start) $
      takeWhile ((< end) . msg_start)

This looks much more functional to me, and can be composed:

    filterUntilPlus1 start end =
      filterThen condition ((>= start) . msg_start) $
      takeWhileThen ((< end) . msg_start) $
      take 1

So my question is: am I about to discover this already exists in the Prelude, or is unnecessary because of something I'm overlooking? I imagine it must be a special case of some kind of generalization, right? Surely someone has thought of this and taken it much further than I have. I suppose this starts to look like a CPS version of a parser... but it seems lighter-weight for simple tasks.
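
For readers following along, here is a self-contained sketch of the idea: takeWhileThen as in the message, plus one plausible shape for the filterThen it wishes for. filterThen's exact signature is my guess, inferred from the filterUntilPlus1 example: keep elements satisfying `keep` until `stop` holds, then hand the remainder to the continuation.

```haskell
-- takeWhileThen as posted: take while f holds, then pass the rest on.
takeWhileThen :: (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
takeWhileThen _ _ [] = []
takeWhileThen f cont (x:xs)
  | f x       = x : takeWhileThen f cont xs
  | otherwise = cont (x:xs)

-- Hypothetical filterThen: filter with `keep` until `stop` fires,
-- then hand the remaining list (stop element included) to `cont`.
filterThen :: (a -> Bool) -> (a -> Bool) -> ([a] -> [a]) -> [a] -> [a]
filterThen _ _ _ [] = []
filterThen keep stop cont (x:xs)
  | stop x    = cont (x:xs)
  | keep x    = x : filterThen keep stop cont xs
  | otherwise = filterThen keep stop cont xs

-- Evens below 10, then everything in [10,15), then one more element:
demo :: [Int]
demo = filterThen even (>= 10) (takeWhileThen (< 15) (take 1)) [1..20]
-- demo == [2,4,6,8,10,11,12,13,14,15]
```

Note how each stage stays "open" until a non-continuation function (`take 1` here) caps the chain, matching the STM-like observation elsewhere in the thread.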

### Re: [Haskell-cafe] Continuations and coroutines

Yves Parès wrote: It helps me understand better, but would you have some simple code that would do that?

You can look at the definition of the coroutine monad transformer in the monad-coroutine package as well: http://hackage.haskell.org/package/monad-coroutine The heart of the library is in the data type

    newtype Coroutine s m r =
      Coroutine { resume :: m (Either (s (Coroutine s m r)) r) }

where s is an arbitrary functor (like Yield, for example), m is an arbitrary monad, and r is the coroutine's final result type. You can also read the "Trampolined Style" and "The essence of multitasking" papers:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.5447&rep=rep1&type=pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.4514&rep=rep1&type=pdf

-- Mario Blazevic mblaze...@stilo.com Stilo International
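
The Coroutine type above is enough to build a tiny generator library. What follows is a simplified sketch of my own, with just enough instances to run an example; the real monad-coroutine package is more general and more careful.

```haskell
import Control.Monad (ap, liftM)
import Data.Functor.Identity (Identity, runIdentity)

-- The monad-coroutine core type, restated: a step either suspends
-- (wrapping the rest of the coroutine in the functor s) or finishes.
newtype Coroutine s m r =
  Coroutine { resume :: m (Either (s (Coroutine s m r)) r) }

instance (Functor s, Monad m) => Functor (Coroutine s m) where
  fmap = liftM

instance (Functor s, Monad m) => Applicative (Coroutine s m) where
  pure  = Coroutine . return . Right
  (<*>) = ap

instance (Functor s, Monad m) => Monad (Coroutine s m) where
  t >>= f = Coroutine $ do
    step <- resume t
    case step of
      Right r -> resume (f r)
      Left s  -> return (Left (fmap (>>= f) s))

-- The suspension functor for generators: emit an x, then continue as y.
data Yield x y = Yield x y

instance Functor (Yield x) where
  fmap f (Yield x y) = Yield x (f y)

yield :: Monad m => x -> Coroutine (Yield x) m ()
yield x = Coroutine (return (Left (Yield x (pure ()))))

-- Drive a generator to completion, collecting everything it yields.
collect :: Monad m => Coroutine (Yield x) m r -> m ([x], r)
collect t = do
  step <- resume t
  case step of
    Right r           -> return ([], r)
    Left (Yield x t') -> do (xs, r) <- collect t'
                            return (x : xs, r)

demo :: ([Int], String)
demo = runIdentity (collect (yield 1 >> yield 2 >> return "done"))
-- demo == ([1,2], "done")
```

Swapping `Yield` for another functor (an `Await`-style request, say) changes what "suspending" means, which is the generality the package is after.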

### [Haskell-cafe] Re: Continuations and coroutines

Paul Johnson wrote: Yves Parès wrote: It helps me understand better, but would you have some simple code that would do that? http://www.cs.chalmers.se/~koen/pubs/jfp99-monad.ps

You can also understand coroutines and continuations in terms of operational semantics. Here is a reimplementation of Koen Claessen's poor man's concurrency monad based on this approach: PoorMansConcurrency.hs at http://projects.haskell.org/operational/examples.html

Regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com

### Re: [Haskell-cafe] Continuations and coroutines

On 19/06/10 17:23, Yves Parès wrote: It helps me understand better, but would you have some simple code that would do that? http://www.cs.chalmers.se/~koen/pubs/jfp99-monad.ps Paul.

### Re: [Haskell-cafe] Continuations and coroutines

On 20/06/10 22:03, Paul Johnson wrote: On 19/06/10 17:23, Yves Parès wrote: It helps me understand better, but would you have some simple code that would do that? http://www.cs.chalmers.se/~koen/pubs/jfp99-monad.ps

Except that the paper I'm trying to refer to seems to have fallen off the net. It's "A Poor Man's Concurrency Monad". Does anyone have a copy? Paul.

### [Haskell-cafe] Continuations and coroutines

Hello, I saw on the Haskell wikibook that coroutines could be implemented by using continuations: http://en.wikibooks.org/wiki/Haskell/Continuation_passing_style#Example:_coroutines (unhappily, the section is empty). Since I'm actually learning the wonders of continuations, I just wonder: how?

### Re: [Haskell-cafe] Continuations and coroutines

On 19/06/10 10:36, Yves Parès wrote: I saw on the Haskell wikibook that coroutines could be implemented by using continuations; since I'm actually learning the wonders of continuations, I just wonder: how?

Coroutines depend on the ability to suspend and resume execution. A continuation acts as the resume point in the current function. The callCC function in the continuation monad takes a function that expects the continuation as an argument (which is how you get access to it). So you say something like:

    yield = callCC $ \continuation -> ...

Then you would typically store the continuation somewhere and call some other previously stored continuation to switch contexts. Continuations can be used to pass data back into the continuation: you call the continuation with an argument, and that argument becomes the return value of the callCC. In this case you probably just want to use ().

You typically have a queue for continuations, so the new continuation goes on the back of the queue and then you call the head of the queue. Obvious modifications for priority, simulated time, real time or whatever else you are trying to schedule. This implies some kind of monadic state to store the queue in, so you will probably make your monad of type ContT (State Queue). If you want a thread to wait, say on a semaphore, then you have a queue of continuations in the semaphore data structure.

Is this any help? Paul.
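
Paul's recipe - callCC to capture the resume point, a FIFO queue in the state, ContT over State - can be made into a runnable sketch. This is my own illustration of the scheme, not code from the thread; the `Pool` newtype ties the recursive knot between the queue and the monad, and a `[String]` log stands in for real I/O.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (ContT (..), callCC, runContT)
import Control.Monad.Trans.State (State, execState, get, modify, put)

-- The state holds the ready queue and an output log.
newtype Pool = Pool [Thread ()]
type Thread = ContT () (State (Pool, [String]))

say :: String -> Thread ()
say s = lift $ modify (\(p, out) -> (p, out ++ [s]))

enqueue :: Thread () -> Thread ()
enqueue t = lift $ modify (\(Pool ts, out) -> (Pool (ts ++ [t]), out))

-- Run the next ready thread; with an empty queue, end the computation.
next :: Thread a
next = do
  (Pool ts, out) <- lift get
  case ts of
    []        -> ContT (\_ -> return ())  -- nothing left: stop everything
    (t : ts') -> do lift (put (Pool ts', out))
                    t     -- resuming a continuation jumps, never returns
                    next  -- only reached if t was a plain action

-- Capture the resume point, queue it, and switch to the next thread.
yield :: Thread ()
yield = callCC $ \k -> enqueue (k ()) >> next

-- Queue the parent's resume point and run the child immediately.
fork :: Thread () -> Thread ()
fork body = callCC $ \k -> enqueue (k ()) >> body >> next

run :: Thread () -> [String]
run main0 = snd $ execState (runContT (main0 >> next) return) (Pool [], [])

demo :: [String]
demo = run $ do
  fork (say "c1" >> yield >> say "c2")
  say "parent"
  yield
  say "parent again"
-- demo == ["c1","parent","c2","parent again"]
```

The interleaving in `demo` shows control bouncing between the forked child and the parent at each `yield`, exactly the queue discipline described above.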

### Re: [Haskell-cafe] Continuations and coroutines

It helps me understand better, but would you have some simple code that would do that?

2010/6/19 Paul Johnson p...@cogito.org.uk wrote: Coroutines depend on the ability to suspend and resume execution. A continuation acts as the resume point in the current function.

### [Haskell-cafe] Monadic presentation of delimited continuations and dissection

Dear Haskellians, After reading through the Dybvig, Peyton Jones and Sabry paper on the monadic presentation of delimited continuations, it seems like one can come up with a direct representation of the control-contexts-and-metacontinuations framework as an instance of McBride's dissection mechanism. Do either of you know if that work has already been done? McBride doesn't use that as an example in his Clowns and Jokers paper.

Best wishes, --greg

-- L.G. Meredith Managing Partner Biosimilarity LLC 1219 NW 83rd St Seattle, WA 98117 +1 206.650.3740 http://biosimilarity.blogspot.com

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Thu, May 13, 2010 at 8:13 PM, wren ng thornton w...@freegeek.org wrote:

Andrea Vezzosi wrote: On my machine, with ghc-6.12.1, yours and the original ErrCPS give quite similar results, both ~2x slower than Either. However, it's important to note that these results are highly dependent on the monadic expressions being evaluated; with a different benchmark you can get a huge speedup with the CPS versions.

wren ng thornton wrote: That's very curious. After installing Criterion, my machine (OS X 10.5.8, 2.8GHz Intel Core 2 Duo, GHC 6.12.1 with -O2) shows only 1% difference between my ErrCPS and Either on this benchmark. Alas, I can't print kernel density graphs since Criterion charts are broken on 6.12. It seems buggy that your platform would behave so much differently...

Andrea Vezzosi wrote: I got the measurements from the original code; could you share the code that uses Criterion instead?

The 1% number was buggy because I hadn't factored the generation of random lists out of the benchmark. But, having fixed that, I still can't replicate your numbers: I get 12us for Either vs 17us for EitherCPS. http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/CriterionBenchmark.hs

Yet another version of the same benchmark, this time using Microbench: http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/MicrobenchBenchmark.hs Microbench seems to replicate your numbers better: 2551.930ns vs 4466.832ns (or 391.86 vs 223.87 calls per second) - though this is getting into the range where there might be Int overflow issues corrupting the results (a similar problem showed up when benchmarking Data.Trie vs Data.Map), so it may warrant further investigation.

That might be the case; I'm on 64-bit:

    sai...@astarte:~$ uname -a
    Linux astarte 2.6.32-ARCH #1 SMP PREEMPT Tue Feb 23 19:43:46 CET 2010 x86_64 Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz GenuineIntel GNU/Linux
    sai...@astarte:~$ ./CriterionBenchmark
    warming up
    estimating clock resolution... mean is 6.834442 us (80001 iterations)
    found 1240 outliers among 79998 samples (1.6%)
    1131 (1.4%) high severe
    estimating cost of a clock call... mean is 107.2316 ns (41 iterations)
    benchmarking Either
    collecting 100 samples, 1039 iterations each, in estimated 683.8220 ms
    bootstrapping with 10 resamples
    mean: 6.563462 us, lb 6.553649 us, ub 6.570454 us, ci 0.950
    std dev: 41.74602 ns, lb 23.76971 ns, ub 67.67842 ns, ci 0.950
    found 8 outliers among 100 samples (8.0%)
    2 (2.0%) low severe
    4 (4.0%) high mild
    2 (2.0%) high severe
    variance introduced by outliers: 0.990%
    variance is unaffected by outliers
    benchmarking ErrCPS
    collecting 100 samples, 1 iterations each, in estimated 1.334000 s
    bootstrapping with 10 resamples
    mean: 13.14468 ms, lb 13.10442 ms, ub 13.18208 ms, ci 0.950
    std dev: 198.3150 us, lb 182.0600 us, ub 220.7957 us, ci 0.950
    variance introduced by outliers: 0.993%
    variance is unaffected by outliers

If I'm reading it correctly, this gives even worse results: 6us vs. 13ms.

-- Live well, ~wren

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Fri, 14 May 2010, Derek Elkins wrote: You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. What I don't get is that the bind operations for ContT and ErrCPS look so similar to me. ContT (stripping off the newtype constructor/destructors):

    m >>= k = \c -> m (\a -> k a c)

ErrCPS (stripping off the newtype constructor/destructors):

    m >>= f = \err good -> m err (\x -> f x err good)

I don't get why one is slow but not the other? -- Russell O'Connor http://r6.ca/ ``All talk about `theft,''' the general counsel of the American Graphophone Company wrote, ``is the merest claptrap, for there exists no property in ideas musical, literary or artistic, except as defined by statute.''

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Where is my bind statement doing a case analysis? Isn't it just propagating, in a sense, the case analysis that came from values coming into the monad via return or via throwError? Also, why wouldn't callCC work here? I'm not that familiar with the ContT monad, so any more details would be very much appreciated. Max On May 15, 2010, at 6:40 AM, Derek Elkins wrote: On Fri, May 14, 2010 at 4:53 PM, Antoine Latter aslat...@gmail.com wrote: On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com wrote: You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. Would you then use callCC to get the required short-circuit-on-error behavior? A Church encoding of Either coded as a monad transformer still wouldn't hit the inner monad on bind, even if it is weaving the left and right continuations together. callCC wouldn't work well here. What would work better is another control operator, commonly called 'control', which does not resume if the passed-in continuation isn't invoked. However, it's usually even clearer (or at least more concise) in these situations to work with the continuation-passing style directly.

    -- fail directly using CPS
    fail :: String -> ContT r (Either String) a
    fail s = ContT $ \k -> Left s

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Sat, May 15, 2010 at 2:28 PM, Max Cantor mxcan...@gmail.com wrote: Where is my bind statement doing a case analysis? Isn't it just propagating, in a sense, the case analysis that came from values coming into the monad via return or via throwError? What you did was reimplement the Either -type-, not the Either -monad-. To see this, let's make a complete interface to Either and provide two implementations of that, now, abstract data type. Every function using Either can be written using the following interface:

    class EitherLike e where
        injLeft  :: a -> e a b
        injRight :: b -> e a b
        either   :: (a -> c) -> (b -> c) -> e a b -> c

And here are two implementations:

    instance EitherLike Either where
        injLeft  = Left
        injRight = Right
        either   = Prelude.either

    type CEEither a b = forall c. (a -> c) -> (b -> c) -> c

    instance EitherLike CEEither where
        injLeft a  = \l r -> l a
        injRight b = \l r -> r b
        either f g e = e f g

Now we can write your functions and the standard Either monad definitions in terms of this abstract interface.

    retErrCPS :: a -> ErrCPS e m a
    retErrCPS x = ErrCPS $ \_ good -> good x

    return x = Right x

    retEither x = injRight x
    retErrCPS x = ErrCPS $ injRight x

    bindErrCPS :: ErrCPS e m b -> (b -> ErrCPS e m a) -> ErrCPS e m a
    bindErrCPS m f = ErrCPS $ \err good ->
        runErrCPS m err $ \x -> runErrCPS (f x) err good

    bindErrCPS m f = ErrCPS $ either injLeft (runErrCPS . f) (runErrCPS m)

    Left e  >>= _ = Left e
    Right a >>= f = f a

    bindEither m f = either injLeft f m

So, modulo wrapping and unwrapping, the code is identical. In versions of GHC prior to pointer tagging, a case analysis on Either would be implemented very much like the Church-encoded eliminator, i.e. in case e of Left a -> f a; Right b -> g b, pre-pointer-tagging GHC would push (essentially) f and g on a stack and enter e; e would then choose which of f or g to return to. So your representation is still doing a case analysis, it is just representing it a different way. Also, why wouldn't callCC work here?
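Derek's claim that the Church encoding is still "a case analysis in a different representation" is easy to check in GHC. The following is a minimal sketch, not code from the thread: `CEEither`, `injLeft`, and `injRight` follow the names in the message, while `bindCE` and `toEither` are illustrative helpers added here.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church-encoded Either: the value *is* its own eliminator.
type CEEither a b = forall c. (a -> c) -> (b -> c) -> c

injLeft :: a -> CEEither a b
injLeft a = \l _ -> l a

injRight :: b -> CEEither a b
injRight b = \_ r -> r b

-- Bind for the encoded type: the value still selects one of two
-- continuations, which is a case analysis in disguise.
bindCE :: CEEither e a -> (a -> CEEither e b) -> CEEither e b
bindCE m f = \l r -> m l (\a -> f a l r)

-- Observe the result by converting back to the ordinary data type.
toEither :: CEEither a b -> Either a b
toEither m = m Left Right

main :: IO ()
main = do
  print (toEither (injRight (21 :: Int) `bindCE` (\x -> injRight (x * 2)))
           :: Either String Int)                                   -- Right 42
  print (toEither (injLeft "boom" `bindCE` (\x -> injRight (x + 1 :: Int))))
                                                                   -- Left "boom"
```

Running it shows the encoded bind short-circuits exactly like the data-type bind: the `Left` value ignores the success continuation, which is the same decision a `case` expression would make.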
I'm not that familiar with the ContT monad so any more details would be very much appreciated. It's hard to implement a global abort with callCC. You can implement a local one easily by using an outer callCC to provide an escape continuation, but you have to explicitly pass around this escape continuation.

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com wrote: You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. Here's a bit more fleshed-out version of what Derek is talking about, for those following along at home: http://hpaste.org/fastcgi/hpaste.fcgi/view?id=25515#a25515 Derek - should I be doing something smarter in 'catch'? I couldn't think of anything obvious. Antoine

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Sat, May 15, 2010 at 9:20 PM, Antoine Latter aslat...@gmail.com wrote: On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com wrote: You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. Here's a bit more fleshed-out version of what Derek is talking about, for those following along at home: http://hpaste.org/fastcgi/hpaste.fcgi/view?id=25515#a25515 Derek - should I be doing something smarter in 'catch'? I couldn't think of anything obvious. No, that's pretty much what you should be doing. Also note, for conceptual purposes, that the const (Left e) is equivalent to (Left e >>=). In Representing Monads, to actually perform an effect it gets reified back into a data structure, in this case Either e a, manipulated as appropriate, then reflected back into an implicit effect. The reify function is just applying to the identity continuation, so your catch can be written more clearly:

    reify :: Monad m => ContT r m r -> m r
    reify m = runContT m return

    catch :: (e -> Error e a) -> Error e a -> Error e a
    catch handler m = case reify (unE m) of
        Left e  -> handler e
        Right a -> return a
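For readers who can't reach the hpaste link, the shape of this reify-based error monad can be reconstructed as a compilable sketch. This is only an approximation of the pasted code: ContT is inlined so the snippet needs nothing beyond base, `Error`, `unE`, `reify`, and `catch` follow the names in the messages, and `throw`, `runError`, and the demo are illustrative additions.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A local copy of ContT, so the sketch depends only on base.
newtype ContT r m a = ContT { runContT :: (a -> m r) -> m r }

instance Functor (ContT r m) where
  fmap f m = ContT $ \k -> runContT m (k . f)

instance Applicative (ContT r m) where
  pure a    = ContT ($ a)
  mf <*> ma = ContT $ \k -> runContT mf (\f -> runContT ma (k . f))

instance Monad (ContT r m) where
  m >>= f = ContT $ \k -> runContT m (\a -> runContT (f a) k)

newtype Error e a = E { unE :: forall r. ContT r (Either e) a }

-- Throwing never invokes the success continuation: const (Left e).
throw :: e -> Error e a
throw e = E (ContT (\_ -> Left e))

-- reify: apply to the trivial continuation, turning the implicit
-- effect back into an explicit Left/Right value.
reify :: Monad m => ContT r m r -> m r
reify m = runContT m return

catch :: (e -> Error e a) -> Error e a -> Error e a
catch handler m = case reify (unE m) of
  Left e  -> handler e
  Right a -> E (pure a)

runError :: Error e a -> Either e a
runError m = reify (unE m)

main :: IO ()
main = do
  print (runError (throw "boom" :: Error String Int))         -- Left "boom"
  print (runError (catch (\_ -> E (pure (0 :: Int)))
                         (throw "boom")))                     -- Right 0
```

The point of the sketch is that only `catch` ever inspects `Left`/`Right`; bind never does.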

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Mon, 10 May 2010, Max Cantor wrote: Based on some discussions in #haskell, it seemed to be a consensus that using a modified continuation monad for error handling instead of Eithers would be a significant optimization, since it would eliminate a lot of conditional branching (every time >>= is called in the Either monad, there is a conditional). I assumed that GHC also has optimizations for conditional branching.

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. On Mon, May 10, 2010 at 4:38 AM, Max Cantor mxcan...@gmail.com wrote: Based on some discussions in #haskell, it seemed to be a consensus that using a modified continuation monad for error handling instead of Eithers would be a significant optimization, since it would eliminate a lot of conditional branching (every time >>= is called in the Either monad, there is a conditional). I implemented an ErrCPS monad which does exactly that, but the speed has been disappointing. It runs almost exactly 3x slower than a drop-in replacement using the MonadError instance of Either from mtl. mkEMA and midError are basically toy functions, but I don't know why Either is so much faster. I've experimented with putting some seqs in bindErrCPS and even {-# INLINE (>>=) #-} in the Monad instance, but to no avail. I've copy/pasted the code below; any suggestions on optimization, or whether this is simply a bad idea, would be much appreciated. Strangely, compiling with -O2 seems to have no effect on the speed: -Max

    {-# LANGUAGE MultiParamTypeClasses #-}
    {-# LANGUAGE FlexibleInstances #-}
    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE Rank2Types #-}
    module Main where

    import Control.Applicative
    import Control.Monad.Error -- hiding (foldM)
    import Control.Monad.Trans
    import Control.Monad hiding (foldM)
    import System.Random
    import Control.Monad.Identity (runIdentity, Identity)
    import Control.Monad.Reader.Class
    import Data.Time.LocalTime as Time -- for benchmarking
    import Data.Time.Calendar (Day)
    import Data.Time.LocalTime (getZonedTime)

    midError :: MonadError String m => Double -> Double -> m Double
    midError a b = if (b < 1)
        then throwError "check val"
        else let r = (a + b) / 2 in r `seq` (return r)

    mkEMA l = foldM midError 1 l

    newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r .
           (e -> m r)  -- error handler
        -> (a -> m r)  -- success handler
        -> m r }

    {-# INLINE retErrCPS #-}
    retErrCPS :: a -> ErrCPS e m a
    retErrCPS x = ErrCPS $ \_ good -> good x

    {-# INLINE bindErrCPS #-}
    bindErrCPS :: ErrCPS e m b -> (b -> ErrCPS e m a) -> ErrCPS e m a
    bindErrCPS m f = ErrCPS $ \err good ->
        runErrCPS m err $ \x -> runErrCPS (f x) err good

    instance Monad m => Monad (ErrCPS e m) where
        return = retErrCPS
        (>>=)  = bindErrCPS

    main :: IO ()
    main = do
        let n = 50
            runEither e b g = either b g e
            runTest f s = do
                sg <- newStdGen
                let l = take n $ randomRs (2, 5) sg
                mapM_ (\e -> e `seq` return ()) l
                stopwatch $ f (mkEMA l) (putStr . show) (putStr . (s ++) . show)
        forever $ do
            runTest runEither "either: "
            runTest runErrCPS "errCPS: "

ErrCPS-based code seems to run almost exactly 3x slower than the Either-based code:

    errCPS: 37453.226 Action ran in: 30 msec
    either: 26803.055 Action ran in: 11 msec
    errCPS: 15840.626 Action ran in: 34 msec
    either: 32556.881 Action ran in: 10 msec
    errCPS: 38933.121 Action ran in: 30 msec
    either: 35370.820 Action ran in: 11 msec
    ...

    instance (Error e, Monad m) => MonadError e (ErrCPS e m) where
        throwError = errCPS
        catchError m f = ErrCPS $ \err good ->
            runErrCPS m (\e -> runErrCPS (f e) err good) good

    -- * MTL stuff

    instance MonadTrans (ErrCPS e) where
        lift m = ErrCPS $ \_ good -> m >>= good

    instance (MonadIO m) => MonadIO (ErrCPS e m) where
        liftIO = lift . liftIO

    -- Random utility stuff

    stopwatch :: IO () -> IO ()
    stopwatch act = do
        t1 <- getFastTimeOfDay
        act
        t2 <- getFastTimeOfDay
        putStrLn $ "Action ran in: " ++ show (t2 - t1) ++ " msec"

    type FastTimeOfDay = Int

    -- | Return the current trading day. This should respect the
    -- fact that the Trading Day ranges from
    -- SingTime 6am (UTC -02:00) to SST 5:59 am (UTC -1:59).
    getTradingDay :: IO Day
    getTradingDay = error "getTradingDay undefined"

    getFastTimeOfDay :: IO FastTimeOfDay
    getFastTimeOfDay = getZonedTime >>= (return . fastFromTimeOfDay .
        Time.localTimeOfDay . Time.zonedTimeToLocalTime)

    timeOfDayFromFast :: FastTimeOfDay -> Time.TimeOfDay
    timeOfDayFromFast fast = Time.TimeOfDay
        { Time.todHour = fromIntegral (fast `div` (3600 * 1000))
        , Time.todMin  = fromIntegral (fast `div` (60 * 1000)) `mod` 60
        , Time.todSec =

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com wrote: You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. Would you then use callCC to get the required short-circuit-on-error behavior? A Church encoding of Either coded as a monad transformer still wouldn't hit the inner monad on bind, even if it is weaving the left and right continuations together. Antoine

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Fri, May 14, 2010 at 4:53 PM, Antoine Latter aslat...@gmail.com wrote: On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com wrote: You did it wrong. All you did was Church encode the Either type. Your bind is still doing a case-analysis. All you have to do is use ContT r (Either e). The bind implementation for ContT is completely independent of the underlying monad. It doesn't even require the m in ContT r m to be a functor, let alone a monad. Therefore the ContT bind doesn't do any case-analysis, because it doesn't know anything about the underlying monad. One way to look at what is happening is to compare it to Andrzej Filinski's work in Representing Monads and Representing Layered Monads. Would you then use callCC to get the required short-circuit-on-error behavior? A Church encoding of Either coded as a monad transformer still wouldn't hit the inner monad on bind, even if it is weaving the left and right continuations together. callCC wouldn't work well here. What would work better is another control operator, commonly called 'control', which does not resume if the passed-in continuation isn't invoked. However, it's usually even clearer (or at least more concise) in these situations to work with the continuation-passing style directly.

    -- fail directly using CPS
    fail :: String -> ContT r (Either String) a
    fail s = ContT $ \k -> Left s
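The abort-by-dropping-the-continuation trick can be seen without any monad classes at all, by writing the continuation plumbing by hand. This is a minimal sketch rather than code from the thread: `abort` stands in for the `fail` above (to avoid clashing with the Prelude's `fail`), and `Err`, `ret`, `bind`, `run`, and `safeDiv` are illustrative names.

```haskell
-- The underlying type of ContT r (Either String): a computation that
-- takes a success continuation and yields Left or Right.
type Err r a = (a -> Either String r) -> Either String r

ret :: a -> Err r a
ret x = \k -> k x

-- Bind never inspects Left/Right; it only composes continuations.
bind :: Err r a -> (a -> Err r b) -> Err r b
bind m f = \k -> m (\a -> f a k)

-- Abort by ignoring the continuation entirely: no case analysis,
-- the rest of the computation simply never runs.
abort :: String -> Err r a
abort s = \_ -> Left s

run :: Err a a -> Either String a
run m = m Right

safeDiv :: Int -> Int -> Err r Int
safeDiv _ 0 = abort "division by zero"
safeDiv a b = ret (a `div` b)

main :: IO ()
main = do
  print (run (safeDiv 10 2 `bind` \x -> ret (x + 1)))  -- Right 6
  print (run (safeDiv 10 0 `bind` \x -> ret (x + 1)))  -- Left "division by zero"
```

In the failing case the `\x -> ret (x + 1)` continuation is never applied, which is exactly the short-circuiting Derek describes.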

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Andrea Vezzosi wrote: wren ng thornton wrote: With this change [1] I can't notice any difference for your benchmark [2]. Then again, all the runTest calls take 0 msec and I've had no luck making the computation take much time; perhaps your computer can detect a difference. On my machine, with ghc-6.12.1, yours and the original ErrCPS give quite similar results, both ~2x slower than Either. However, it's important to note that these results are highly dependent on the monadic expressions being evaluated; with a different benchmark you can get a huge speedup with the CPS versions. That's very curious. After installing Criterion, my machine (OSX 10.5.8, 2.8GHz Intel Core2Duo, GHC 6.12.1 with -O2) shows only a 1% difference between my ErrCPS and Either on this benchmark. Alas, I can't print kernel density graphs since Criterion charts are broken on 6.12. It seems buggy that your platform would behave so much differently... mkEMA is in fact quite peculiar, since there's no catchError and the throwError call is rarely (or never?) made, and thanks to foldM you get that (>>=) is only used in a right-associated way, which is the ideal situation for Either. Indeed, mkEMA is a sort of worst-case comparison that doesn't take advantage of the ability to short-circuit. -- Live well, ~wren

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Antoine Latter wrote: While I also offer a transformer version of MaybeCPS, the transformer *does* suffer from significant slowdown. Also, for MaybeCPS it's better to leave the handlers inline in client code rather than to abstract them out; that helps to keep things concrete. So perhaps you should first try a direct CPS translation: Is the CPS-transformed MaybeT slower because it's done in 2-CPS, rather than in 1-CPS like the MaybeCPS? I only did MaybeT in 2-CPS because it was the easiest, not because I thought it would be easiest. I'm not sure how much of it is due to the 2-CPS vs how much is due to the loss of concrete case-analysis and tail-calls in crucial areas. As I noted in comments on the non-transformer version, there are a number of subtle issues, such as the choice between let-binding and case analysis, which have major effects on performance, so it's tricky to make a MaybeCPS which doesn't impose a performance overhead. A big part of the problem is that once you move to the transformer version, you can't just jump to the next handler--- you also need to carry around whatever the other monad would add to Nothing. Once you move to 2-CPS, the representation is similar enough to LogicT (== ListCPST) that you seem to lose the benefits of Maybe over []. -- Live well, ~wren

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Thu, May 13, 2010 at 10:51 AM, wren ng thornton w...@freegeek.org wrote: Andrea Vezzosi wrote: wren ng thornton wrote: With this change [1] I can't notice any difference for your benchmark [2]. Then again, all the runTest calls take 0 msec and I've had no luck making the computation take much time; perhaps your computer can detect a difference. On my machine, with ghc-6.12.1, yours and the original ErrCPS give quite similar results, both ~2x slower than Either. However, it's important to note that these results are highly dependent on the monadic expressions being evaluated; with a different benchmark you can get a huge speedup with the CPS versions. That's very curious. After installing Criterion, my machine (OSX 10.5.8, 2.8GHz Intel Core2Duo, GHC 6.12.1 with -O2) shows only a 1% difference between my ErrCPS and Either on this benchmark. Alas, I can't print kernel density graphs since Criterion charts are broken on 6.12. It seems buggy that your platform would behave so much differently... I got the measurements from the original code; could you share the code that uses Criterion instead? mkEMA is in fact quite peculiar, since there's no catchError and the throwError call is rarely (or never?) made, and thanks to foldM you get that (>>=) is only used in a right-associated way, which is the ideal situation for Either. Indeed, mkEMA is a sort of worst-case comparison that doesn't take advantage of the ability to short-circuit. -- Live well, ~wren

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Andrea Vezzosi wrote: On Thu, May 13, 2010 at 10:51 AM, wren ng thornton w...@freegeek.org wrote: Andrea Vezzosi wrote: wren ng thornton wrote: With this change [1] I can't notice any difference for your benchmark [2]. Then again, all the runTest calls take 0 msec and I've had no luck making the computation take much time; perhaps your computer can detect a difference. On my machine, with ghc-6.12.1, yours and the original ErrCPS give quite similar results, both ~2x slower than Either. However, it's important to note that these results are highly dependent on the monadic expressions being evaluated; with a different benchmark you can get a huge speedup with the CPS versions. That's very curious. After installing Criterion, my machine (OSX 10.5.8, 2.8GHz Intel Core2Duo, GHC 6.12.1 with -O2) shows only a 1% difference between my ErrCPS and Either on this benchmark. Alas, I can't print kernel density graphs since Criterion charts are broken on 6.12. It seems buggy that your platform would behave so much differently... I got the measurements from the original code; could you share the code that uses Criterion instead? The 1% number was buggy because I hadn't factored the generation of random lists out of the benchmark. But, having fixed that, I still can't replicate your numbers: I get 12us for Either vs. 17us for EitherCPS. http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/CriterionBenchmark.hs Yet another version of the same benchmark, this time using Microbench: http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/MicrobenchBenchmark.hs Microbench seems to replicate your numbers better: 2551.930ns vs 4466.832ns (or 391.86 vs 223.87 calls per second) --- though this is getting into the range where there might be Int overflow issues corrupting the results (a similar problem showed up when benchmarking Data.Trie vs Data.Map), so it may warrant further investigation.
-- Live well, ~wren

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Wed, May 12, 2010 at 7:50 AM, wren ng thornton w...@freegeek.org wrote: wren ng thornton wrote: Here's one big difference:

    newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r .
           (e -> m r)  -- error handler
        -> (a -> m r)  -- success handler
        -> m r }

The analogous version I use is:

    newtype MaybeCPS a = MaybeCPS (forall r. (a -> Maybe r) -> Maybe r)

While I also offer a transformer version of MaybeCPS, the transformer *does* suffer from significant slowdown. Also, for MaybeCPS it's better to leave the handlers inline in client code rather than to abstract them out; that helps to keep things concrete. So perhaps you should first try a direct CPS translation:

    newtype ErrCPS e a = ErrCPS (forall r. (a -> Either e r) -> Either e r)

    runErrCPS :: ErrCPS e a -> Either e a
    runErrCPS (ErrCPS f) = f return

I'd be curious if this version suffers the same slowdown. With this change [1] I can't notice any difference for your benchmark [2]. Then again, all the runTest calls take 0 msec and I've had no luck making the computation take much time; perhaps your computer can detect a difference. On my machine, with ghc-6.12.1, yours and the original ErrCPS give quite similar results, both ~2x slower than Either. However, it's important to note that these results are highly dependent on the monadic expressions being evaluated; with a different benchmark you can get a huge speedup with the CPS versions. mkEMA is in fact quite peculiar, since there's no catchError and the throwError call is rarely (or never?) made, and thanks to foldM you get that (>>=) is only used in a right-associated way, which is the ideal situation for Either. In a larger program one might mix the two to get the best of both worlds, I guess, and maybe we can make a library where each combinator from Control.Monad is reimplemented with the most fitting alternative behind the scenes.
The nice part is that you can get the CPS version in a generic way using Codensity: http://hackage.haskell.org/packages/archive/mmtl/0.1/doc/html/Control-Monad-Codensity.html You may want to see what standard benchmarking tools like Microbench [3] or the magnificent Criterion [4] have to say. I'd do it myself, but I haven't had a chance to reinstall everything since getting my new computer (due to the installation issues on newer versions of OSX). [1] http://community.haskell.org/~wren/wren-extras/src/Control/Monad/ErrCPS.hs [2] http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/MaxCantorBenchmark.hs [3] http://hackage.haskell.org/package/microbench [4] http://hackage.haskell.org/package/criterion -- Live well, ~wren
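For readers who don't want to chase the mmtl link, the Codensity construction Andrea mentions fits in a few lines. This is a generic sketch, not the mmtl code; `lift` and `lower` are the conventional names for moving into and out of the CPS-improved monad.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Codensity m represents a computation by what it does to its
-- continuation, for any underlying monad m.
newtype Codensity m a = Codensity
  { runCodensity :: forall b. (a -> m b) -> m b }

instance Functor (Codensity m) where
  fmap f m = Codensity $ \k -> runCodensity m (k . f)

instance Applicative (Codensity m) where
  pure a    = Codensity ($ a)
  mf <*> ma = Codensity $ \k -> runCodensity mf (\f -> runCodensity ma (k . f))

-- (>>=) composes continuations only; it never touches m, so there is
-- no case analysis on the underlying monad's values.
instance Monad (Codensity m) where
  m >>= f = Codensity $ \k -> runCodensity m (\a -> runCodensity (f a) k)

lift :: Monad m => m a -> Codensity m a
lift m = Codensity (m >>=)

lower :: Monad m => Codensity m a -> m a
lower m = runCodensity m return

main :: IO ()
main =
  print (lower (lift (Right 2) >>= \x -> pure (x * 2)) :: Either String Int)
  -- Right 4
```

Left-nested binds in `Codensity m` get reassociated to the right for free, which is one reason a CPS version can beat the plain monad on some benchmarks and not others.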

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Tue, May 11, 2010 at 8:28 PM, wren ng thornton w...@freegeek.org wrote: Max Cantor wrote: Based on some discussions in #haskell, it seemed to be a consensus that using a modified continuation monad for error handling instead of Eithers would be a significant optimization, since it would eliminate a lot of conditional branching (every time >>= is called in the Either monad, there is a conditional). I implemented an ErrCPS monad which does exactly that, but the speed has been disappointing. It runs almost exactly 3x slower than a drop-in replacement using the MonadError instance of Either from mtl. I have noticed speedup in my CPS version of Maybe [1] (kidnapped from the Wiki), so the difference is curious. Jan-Willem's comments about closures are significant when doing CPS work, but I'd expect Maybe and Either to perform similarly, whatever their performance is. It's been a while since I've benchmarked MaybeCPS, so perhaps I now have the slowdown too. Let's look at the code and see if we can find other differences... [1] http://community.haskell.org/~wren/wren-extras/src/Control/Monad/MaybeCPS.hs Here's one big difference:

    newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r .
           (e -> m r)  -- error handler
        -> (a -> m r)  -- success handler
        -> m r }

The analogous version I use is:

    newtype MaybeCPS a = MaybeCPS (forall r. (a -> Maybe r) -> Maybe r)

While I also offer a transformer version of MaybeCPS, the transformer *does* suffer from significant slowdown. Also, for MaybeCPS it's better to leave the handlers inline in client code rather than to abstract them out; that helps to keep things concrete. So perhaps you should first try a direct CPS translation: Is the CPS-transformed MaybeT slower because it's done in 2-CPS, rather than in 1-CPS like the MaybeCPS? I only did MaybeT in 2-CPS because it was the easiest, not because I thought it would be easiest. Antoine

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Max Cantor wrote: Based on some discussions in #haskell, it seemed to be a consensus that using a modified continuation monad for error handling instead of Eithers would be a significant optimization, since it would eliminate a lot of conditional branching (every time >>= is called in the Either monad, there is a conditional). I implemented an ErrCPS monad which does exactly that, but the speed has been disappointing. It runs almost exactly 3x slower than a drop-in replacement using the MonadError instance of Either from mtl. I have noticed speedup in my CPS version of Maybe [1] (kidnapped from the Wiki), so the difference is curious. Jan-Willem's comments about closures are significant when doing CPS work, but I'd expect Maybe and Either to perform similarly, whatever their performance is. It's been a while since I've benchmarked MaybeCPS, so perhaps I now have the slowdown too. Let's look at the code and see if we can find other differences... [1] http://community.haskell.org/~wren/wren-extras/src/Control/Monad/MaybeCPS.hs Here's one big difference:

    newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r .
           (e -> m r)  -- error handler
        -> (a -> m r)  -- success handler
        -> m r }

The analogous version I use is:

    newtype MaybeCPS a = MaybeCPS (forall r. (a -> Maybe r) -> Maybe r)

While I also offer a transformer version of MaybeCPS, the transformer *does* suffer from significant slowdown. Also, for MaybeCPS it's better to leave the handlers inline in client code rather than to abstract them out; that helps to keep things concrete. So perhaps you should first try a direct CPS translation:

    newtype ErrCPS e a = ErrCPS (forall r. (a -> Either e r) -> Either e r)

    runErrCPS :: ErrCPS e a -> Either e a
    runErrCPS (ErrCPS f) = f return

I'd be curious if this version suffers the same slowdown. -- Live well, ~wren
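wren's suggested 1-CPS translation can be completed into a compilable module. The newtype and `runErrCPS` follow the message; the Functor/Applicative/Monad instances and `throwErr` are my filling-in, so treat them as a sketch rather than wren's actual code.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Direct CPS translation over Either: one success continuation,
-- with Left short-circuiting through the answer type.
newtype ErrCPS e a = ErrCPS (forall r. (a -> Either e r) -> Either e r)

runErrCPS :: ErrCPS e a -> Either e a
runErrCPS (ErrCPS f) = f return

instance Functor (ErrCPS e) where
  fmap f (ErrCPS m) = ErrCPS $ \k -> m (k . f)

instance Applicative (ErrCPS e) where
  pure x = ErrCPS ($ x)
  ErrCPS mf <*> ErrCPS ma = ErrCPS $ \k -> mf (\f -> ma (k . f))

instance Monad (ErrCPS e) where
  ErrCPS m >>= f = ErrCPS $ \k -> m (\a -> case f a of ErrCPS g -> g k)

-- Throwing discards the continuation, aborting the computation.
throwErr :: e -> ErrCPS e a
throwErr e = ErrCPS $ \_ -> Left e

main :: IO ()
main = do
  print (runErrCPS (pure 1 >>= \x -> pure (x + 1)) :: Either String Int)
  -- Right 2
  print (runErrCPS (throwErr "nope" >>= \x -> pure (x + 1 :: Int)))
  -- Left "nope"
```

Compared with the 2-CPS original, there is a single continuation to pass, and the error path rides on the `Either` in the answer type instead of a second handler argument.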

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

wren ng thornton wrote: Here's one big difference:

    newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r .
           (e -> m r)  -- error handler
        -> (a -> m r)  -- success handler
        -> m r }

The analogous version I use is:

    newtype MaybeCPS a = MaybeCPS (forall r. (a -> Maybe r) -> Maybe r)

While I also offer a transformer version of MaybeCPS, the transformer *does* suffer from significant slowdown. Also, for MaybeCPS it's better to leave the handlers inline in client code rather than to abstract them out; that helps to keep things concrete. So perhaps you should first try a direct CPS translation:

    newtype ErrCPS e a = ErrCPS (forall r. (a -> Either e r) -> Either e r)

    runErrCPS :: ErrCPS e a -> Either e a
    runErrCPS (ErrCPS f) = f return

I'd be curious if this version suffers the same slowdown. With this change [1] I can't notice any difference for your benchmark [2]. Then again, all the runTest calls take 0 msec and I've had no luck making the computation take much time; perhaps your computer can detect a difference. You may want to see what standard benchmarking tools like Microbench [3] or the magnificent Criterion [4] have to say. I'd do it myself, but I haven't had a chance to reinstall everything since getting my new computer (due to the installation issues on newer versions of OSX). [1] http://community.haskell.org/~wren/wren-extras/src/Control/Monad/ErrCPS.hs [2] http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/MaxCantorBenchmark.hs [3] http://hackage.haskell.org/package/microbench [4] http://hackage.haskell.org/package/criterion -- Live well, ~wren

### [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Based on some discussions in #haskell, it seemed to be a consensus that using a modified continuation monad for error handling instead of Eithers would be a significant optimization, since it would eliminate a lot of conditional branching (every time >>= is called in the Either monad, there is a conditional). I implemented an ErrCPS monad which does exactly that, but the speed has been disappointing. It runs almost exactly 3x slower than a drop-in replacement using the MonadError instance of Either from mtl. mkEMA and midError are basically toy functions, but I don't know why Either is so much faster. I've experimented with putting some seq's in bindErrCPS and even {-# INLINE (>>=) #-} in the Monad instance, but to no avail. I've copy/pasted the code below; any suggestions on optimization, or whether this is simply a bad idea, would be much appreciated. Strangely, compiling with -O2 seems to have no effect on the speed:

-Max

    {-# LANGUAGE MultiParamTypeClasses #-}
    {-# LANGUAGE FlexibleInstances #-}
    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE Rank2Types #-}
    module Main where

    import Control.Applicative
    import Control.Monad.Error -- hiding (foldM)
    import Control.Monad.Trans
    import Control.Monad hiding (foldM)
    import System.Random
    import Control.Monad.Identity (runIdentity, Identity)
    import Control.Monad.Reader.Class
    import Data.Time.LocalTime as Time -- for benchmarking
    import Data.Time.Calendar (Day)
    import Data.Time.LocalTime (getZonedTime)

    midError :: MonadError String m => Double -> Double -> m Double
    midError a b =
        if b < 1  -- reconstructed guard; the archive garbled this test
            then throwError "check val"
            else let r = (a + b) / 2 in r `seq` return r

    mkEMA l = foldM midError 1 l

    newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r
                                              . (e -> m r)  -- error handler
                                             -> (a -> m r)  -- success handler
                                             -> m r }

    {-# INLINE retErrCPS #-}
    retErrCPS :: a -> ErrCPS e m a
    retErrCPS x = ErrCPS $ \_ good -> good x

    {-# INLINE bindErrCPS #-}
    bindErrCPS :: ErrCPS e m b -> (b -> ErrCPS e m a) -> ErrCPS e m a
    bindErrCPS m f = ErrCPS $ \err good ->
        runErrCPS m err $ \x -> runErrCPS (f x) err good

    instance Monad m => Monad (ErrCPS e m) where
        return = retErrCPS
        (>>=)  = bindErrCPS

    main :: IO ()
    main = do
        let n = 50
            runEither e b g = either b g e
            runTest f s = do
                sg <- newStdGen
                let l = take n $ randomRs (2, 5) sg
                mapM_ (\e -> e `seq` return ()) l
                stopwatch $ f (mkEMA l) (putStr . show)
                                        (putStr . (s ++) . show)
        forever $ do
            runTest runEither "either: "
            runTest runErrCPS "errCPS: "

ErrCPS-based code seems to run almost exactly 3x slower than the Either-based code:

    errCPS: 37453.226 Action ran in: 30 msec
    either: 26803.055 Action ran in: 11 msec
    errCPS: 15840.626 Action ran in: 34 msec
    either: 32556.881 Action ran in: 10 msec
    errCPS: 38933.121 Action ran in: 30 msec
    either: 35370.820 Action ran in: 11 msec
    ...

    instance (Error e, Monad m) => MonadError e (ErrCPS e m) where
        throwError = errCPS
        catchError m f = ErrCPS $ \err good ->
            runErrCPS m (\e -> runErrCPS (f e) err good) good

    -- * MTL stuff

    instance MonadTrans (ErrCPS e) where
        lift m = ErrCPS $ \_ good -> m >>= good

    instance (MonadIO m) => MonadIO (ErrCPS e m) where
        liftIO = lift . liftIO

    -- Random utility stuff

    stopwatch :: IO () -> IO ()
    stopwatch act = do
        t1 <- getFastTimeOfDay
        act
        t2 <- getFastTimeOfDay
        putStrLn $ " Action ran in: " ++ show (t2 - t1) ++ " msec"

    type FastTimeOfDay = Int

    -- | Return the current trading day. This should respect the
    -- fact that the Trading Day ranges from
    -- SingTime 6am (UTC -02:00) to SST 5:59 am (UTC -1:59).
    getTradingDay :: IO Day
    getTradingDay = error "getTradingDay undefined"

    getFastTimeOfDay :: IO FastTimeOfDay
    getFastTimeOfDay = getZonedTime >>= (return . fastFromTimeOfDay .
        Time.localTimeOfDay . Time.zonedTimeToLocalTime)

    timeOfDayFromFast :: FastTimeOfDay -> Time.TimeOfDay
    timeOfDayFromFast fast = Time.TimeOfDay
        { Time.todHour = fromIntegral (fast `div` (3600 * 1000))
        , Time.todMin  = fromIntegral (fast `div` (60 * 1000)) `mod` 60
        , Time.todSec  = fromRational $ (fromIntegral fast) / 1000
        }

    fastFromTimeOfDay :: Time.TimeOfDay -> FastTimeOfDay
    fastFromTimeOfDay t = fromIntegral $
        ((Time.todHour t) * 360) + ((Time.todMin t) * 6)
            + (round $ 1000 * Time.todSec t)

    instance (Monad m) => Functor (ErrCPS e m) where
        fmap f m = ErrCPS $ \err good -> runErrCPS m err (good . f)

    instance (Monad m) => Applicative (ErrCPS e m) where
        pure = return
        f <*> a = do
            f' <- f
            a' <- a
            return $ f' a'

    errCPS :: forall e m a . e -> ErrCPS e m a
    errCPS e = ErrCPS $ \err _ -> err e

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

On Mon, May 10, 2010 at 5:38 AM, Max Cantor mxcan...@gmail.com wrote: Based on some discussions in #haskell, it seemed to be a consensus that using a modified continuation monad for error handling instead of Eithers would be a significant optimization, since it would eliminate a lot of conditional branching (every time >>= is called in the Either monad, there is a conditional).

My suspicion, based on using a similar monad to implement IO in Eager Haskell, is that you're creating a lot of closures. This is rather more expensive in general than the extra control flow required to inspect the Eithers.

In more detail: CPS works well if the compiler can inline most of the continuation passing and turn your code back into direct style, at least along the no-failures path. In this case you can avoid creating closures except at what would have been actual function call points in your original code, and at catch points for the error continuation. However, I expect that you're probably calling functions that are polymorphic in the monad (stuff like mapM etc.) and that are not being inlined or specialized. These end up building a continuation rather naively on the heap. You're essentially moving the call stack to the heap, and the compiler can't assist you in moving it back again; hence, slow code.

To make matters worse, you get a lot more branch prediction leverage with pointer-tagged Either than you possibly could with a closure invocation on a modern architecture. But I suspect that's rather unimportant compared to allocation time / memory footprint issues here.

-Jan-Willem Maessen

[... rest of quoted message and code snipped]
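Jan-Willem's point can be seen from the other direction: once the monad is fixed to a concrete type, GHC can inline (>>=) for Either and compile the chain into ordinary case analysis, with no closure per step. A small sketch, not from the thread; `step` and `meanChain` are illustrative stand-ins for the thread's midError/mkEMA, and the guard and error message are assumptions:

```haskell
import Control.Monad (foldM)

-- An Either-based step in the spirit of the thread's midError.
step :: Double -> Double -> Either String Double
step a b
  | b < 1     = Left "check val"            -- the conditional >>= must test
  | otherwise = let r = (a + b) / 2 in r `seq` Right r

-- A monomorphic wrapper: with the monad concrete, foldM's uses of >>=
-- can be inlined and specialized, so the "extra conditional" becomes a
-- single predictable case on a pointer-tagged constructor rather than
-- a heap-allocated continuation.
meanChain :: [Double] -> Either String Double
meanChain = foldM step 1
```

Compiled with -O, the Either version keeps the call stack where the compiler can see it, which is exactly the advantage Jan-Willem describes.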

### Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

Makes sense. From what you wrote, it seems like this might be a dead end and can't really be optimized away. Do you agree?

Max

On May 10, 2010, at 8:38 PM, Jan-Willem Maessen wrote: [... quoted message snipped]

### Re: save-the-world or serializable continuations

On Thu, Feb 25, 2010 at 11:04, Ben midfi...@gmail.com wrote: hello -- i'm wondering if there is any support for a Common Lisp-style save-the-world function in GHC, or more generally serializable continuations (in the Continuation monad?) And if not, what would it take to have it? It would be very useful for long-running restartable computations, amongst other things. I realize it is a very tricky thing to do right, but even simplified approximations would be great. This might not be what you're looking for, but it sounds similar and might be a place to start: http://hackage.haskell.org/package/Workflow -- Love in Jesus Christ, John Alfred Nathanael Chee http://www.biblegateway.com/ http://www.sunsetpres.org/ http://web.cecs.pdx.edu/~chee/ ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

### [Haskell-cafe] Re: [Haskell] ANN: Reusable Corecursive Queues via Continuations

On Mon, Jun 22, 2009 at 10:31:34PM -0400, Leon Smith wrote: Also, a substantially revised draft of the associated paper, Lloyd Allison's Corecursive Queues: Why Continuations Matter is now available. [3] This paper will appear in the upcoming Monad Reader issue 14, and comments would be most appreciated. As per [1], it seems n+k patterns will be removed from Haskell', so you may want to remove them from the paper as well. [1] http://hackage.haskell.org/trac/haskell-prime/wiki/Status -- Felipe.

### [Haskell] ANN: Reusable Corecursive Queues via Continuations

In a purely functional setting, real-time queues are traditionally thought to be much harder to implement than either real-time stacks or amortized O(1) queues. In “Circular Programs and Self-Referential Structures” [1], Lloyd Allison uses corecursion to implement a queue by defining a lazy list in terms of itself. This provides a simple, efficient, and attractive implementation of real-time queues.

Now, control-monad-queue is available on Hackage [2], which offers this technique in a reusable library. As Allison's queue is not fully persistent, it cannot be a first-class value; rather, such queues are encoded in specific algorithms written in an extended continuation-passing style, and the use of continuations seems necessary in order to abstract the corecursive queue operations.

Also, a substantially revised draft of the associated paper, Lloyd Allison's Corecursive Queues: Why Continuations Matter, is now available. [3] This paper will appear in the upcoming Monad Reader issue 14, and comments would be most appreciated. It derives a reusable implementation in an explicit continuation-passing style from Allison's original code, demonstrates an alternate reusable implementation in direct style using mapCont, and argues that mapCont cannot be expressed in terms of callCC, return, and (>>=). Finally, this approach performs well in practice when compiled with optimization using GHC. However, performance has regressed from GHC 6.8.3 to 6.10.3, and there is a curious performance anomaly associated with the generalized corecursive abstraction.

[1] http://www.csse.monash.edu.au/~lloyd/tildeFP/1989SPE/
[2] http://hackage.haskell.org/package/control-monad-queue
[3] http://blog.melding-monads.com/2009/06/22/control-monad-queue/

Best, Leon

___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
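The flavor of Allison's circular-program technique can be seen in a few lines: a level-order traversal whose queue is a lazy list defined in terms of itself. This sketch is not from the library (control-monad-queue wraps the same idea behind a monadic interface):

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Allison-style corecursive queue: `queue` starts with the root, and the
-- rest of the list is produced by consuming the list itself. The counter
-- tracks how many enqueued elements remain unconsumed, so the traversal
-- terminates when it reaches zero.
levelOrder :: Tree a -> [a]
levelOrder t = [x | Node _ x _ <- queue]
  where
    queue = t : go (1 :: Int) queue
    go 0 _                   = []
    go n (Leaf : rest)       = go (n - 1) rest          -- dequeue only
    go n (Node l _ r : rest) = l : r : go (n + 1) rest  -- dequeue 1, enqueue 2
```

Each element is enqueued and dequeued exactly once, giving the real-time O(1) behavior, but the queue itself never exists as a standalone persistent value, which is the limitation the announcement describes.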

### [Haskell-cafe] GUIs, FRP, (Delimited) Continuations and Zippers

Could either of those approaches (FRP / delimited continuations) be a solution for implementing complex GUI code? I think the answer is generally yes; I have tried writing a user interface which has a form with several controls, where a change in one control may affect all other controls on the form (or change the form). I have tried it for a particular GUI: the web page, displayed in a browser interacting with a CGI script. The choice of CGI was deliberate because it is harder than FastCGI: it requires the ability to save the state of the interaction; the continuation must outlive its process. The point I was trying to make is that the GUI is programmed as if it were an ordinary console application: you send one question, get a reply, send another question, etc. With a GUI, questions are sent in parallel and answers are delivered in any order; yet the programmer can still think he deals with a regular console application, as if the interaction never happens and the program merely reads the data from a file. http://okmij.org/ftp/Computation/Continuations.html#shift-cgi It was done in OCaml though (because OCaml has native persistent delimited continuations).

### Re: [Haskell-cafe] GUIs, FRP, (Delimited) Continuations and Zippers

On Saturday, 16 May 2009 16:18, Günther Schmidt wrote: Hi all, in my app there is one part which has a rather complicated GUI logic: it involves n drop-downs with n choices each. Whenever the current selection in one of the drop-downs changes by user interaction, the other (n-1) drop-downs need to be notified, and their item lists may need to change too. [... rest of original message snipped]

Hello Günther, FRP can definitely help you a lot with these kinds of problems. You have n widgets where each widget outputs a discrete signal (an event stream) of requests from the user. These are transformed and combined to form signals (behaviors) that describe the contents of the n widgets over time. Have you looked at Grapefruit? It's an FRP library I'm developing which already has integrated GUI support. If you want to try it out, better take the darcs version instead of the released version, since there have been some important developments since the release date. You find Grapefruit's homepage here: http://haskell.org/haskellwiki/Grapefruit. If you have questions, please ask. Best wishes, Wolfgang

### [Haskell-cafe] GUIs, FRP, (Delimited) Continuations and Zippers

Hi all, in my app there is one part which has a rather complicated GUI logic: it involves n drop-downs with n choices each. Whenever the current selection in one of the drop-downs changes by user interaction, the other (n-1) drop-downs need to be notified, and their item lists may need to change too. Now I have managed to code all this, and it actually behaves correctly. But I'm also using tons of IORefs and tons of bookkeeping code for it. While I'd not be ashamed to show any other part of my code to another Haskeller, this part of the code is the clumsiest I've ever written. And I have no clue whether that piece of code *can* be written in any other way, i.e. without the tons of IORefs and bookkeeping. The GUI library is wxHaskell. In the last few days I read up on Conal Elliott's FRP stuff (reactive) but also on Oleg's ZFS (zippers, delimited continuations), the latter leaving me totally baffled. Could either of those approaches (FRP / delimited continuations) be a solution for implementing complex GUI code? Günther

### Re: [Haskell-cafe] Delimited continuations: please comment

On Sat, Feb 14, 2009 at 2:04 AM, Brandon S. Allbery KF8NH allb...@ece.cmu.edu wrote: liftIO is defined there, I believe. Many of the monad modules re-export it with their MonadTrans definitions, but apparently Control.Monad.CC doesn't so you need to go to the source. Yeah, I knew the answer :D It was sort of a joke... Cristiano

### Re: [Haskell-cafe] Re: Delimited continuations: please comment

On Fri, Feb 13, 2009 at 2:05 AM, Chung-chieh Shan ccs...@post.harvard.edu wrote: ... It's not unheard of for the scheduler to react in different ways to the same system call -- I'm thinking of reading from a file, for example. Sure, I went implementing something slightly different to double-check my understanding of delconts. You clearly understand the whole idea, and your code demonstrates it in a nice way. Oleg and I have found this programming style particularly convenient when we need to - fork processes (i.e., backtrack in the monad), - run the same processes under different schedulers (e.g., a debugger), - nest the applications of schedulers (i.e., provide virtualization). Thanks for your feedback. Cristiano

### Re: [Haskell-cafe] Delimited continuations: please comment

On 2009 Feb 12, at 11:55, Cristiano Paris wrote: import Control.Monad.Trans -- why do I have to import this? liftIO is defined there, I believe. Many of the monad modules re-export it with their MonadTrans definitions, but apparently Control.Monad.CC doesn't, so you need to go to the source. -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

### [Haskell-cafe] Delimited continuations: please comment

Hi, I'm experimenting with delimited continuations in the effort to understand how they work and when it's convenient to use them. Consider this piece of code (install CC-delcont before running it):

    {-# LANGUAGE NoMonomorphismRestriction #-}

    import Control.Monad.CC
    import Control.Monad.Trans -- why do I have to import this?

    data Monad m => Susp m a b = Stop | Susp (a -> m (Susp m a b))

    job = reset $ \p ->
        let askMsg = shift p $ \k -> return $ Susp $ k . return
        in do x <- askMsg
              liftIO $ putStrLn $ "x was " ++ show x
              y <- askMsg
              liftIO $ putStrLn $ "y was " ++ show y
              return Stop

    scheduler j = do
        Susp nj  <- j
        Susp nj' <- nj "Hello!"
        nj' "World!"
        return undefined

    main = runCCT $ scheduler job

which produces the output:

    [pa...@bagend haskell]$ runhaskell dc.hs
    x was "Hello!"
    y was "World!"
    [pa...@bagend haskell]$

The goal of this is to have a test-case implementation of the system-call mechanism found in operating systems, like the one described by Oleg (see page 3): http://okmij.org/ftp/papers/context-OS.pdf

In effect, this is a bit different from the syscall service routine described by Oleg, as the scheduler function reacts in different ways to subsequent calls (the first time it feeds "Hello!", the second one "World!", in a nice monadic style). Yet, I liked the separation between the scheduler and the job, which are two completely different values and which I tried to keep. As this is (almost) my first time using delconts, could you provide feedback, comments, opinions about my piece of code and the topic in general (convenience, performance, alternatives and so on)? Thank you, Cristiano

### [Haskell-cafe] Re: Delimited continuations: please comment

Cristiano Paris cristiano.pa...@gmail.com wrote in article afc62ce20902120855i77acf725p1069aab21037a...@mail.gmail.com in gmane.comp.lang.haskell.cafe: In effect, this is a bit different from the syscall service routine described by Oleg, as the scheduler function reacts in different ways to subsequent calls (the first time it feeds "Hello!", the second one "World!", in a nice monadic style). Yet, I liked the separation between the scheduler and the job, which are two completely different values and which I tried to keep.

It's not unheard of for the scheduler to react in different ways to the same system call -- I'm thinking of reading from a file, for example.

As this is (almost) my first time using delconts, could you provide feedback, comments, opinions about my piece of code and the topic in general (convenience, performance, alternatives and so on)?

You clearly understand the whole idea, and your code demonstrates it in a nice way. Oleg and I have found this programming style particularly convenient when we need to - fork processes (i.e., backtrack in the monad), - run the same processes under different schedulers (e.g., a debugger), - nest the applications of schedulers (i.e., provide virtualization).

-- Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig Attending a mathematics lecture is like walking through a thunderstorm at night. Most of the time you are lost, wet and miserable, but at rare intervals there is a flash of lightning and the whole countryside is lit up. - Tom Koerner

### [Haskell-cafe] Is this related to continuations somehow?

So, I was reading a bit about continuations the other day, and, since I've been thinking about good ways of expressing chess strategies in Haskell, I thought I'd play around a bit with something like continuations for game-playing strategies. The idea is that you have combinators that allow you full access to the strategies which remain to be applied. In this way, strategies can activate and de-activate other strategies. Here's a simple little toy app for Tic-Tac-Toe (Noughts and Crosses): http://codepad.org/nN9JsxFK You can run main on 'example' and see that it searches every line and fails. And, as you can see, it aborts after finding a win in example2. This would be easily extensible to saying things like "if you've seen a blocking move, and you don't have a win, then play the blocking move", and of course the other deep intricacies of the game. My question is: is this, in fact, related to continuations somehow? Could continuations simplify it? Or am I doing something completely different?
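The "abort the search once a win is found" behavior described above is indeed one of the things continuations buy you directly. A minimal sketch using the standard Cont monad's callCC (illustrative only; this is not the poster's combinator library):

```haskell
import Control.Monad (forM_)
import Control.Monad.Trans.Cont (callCC, evalCont)

-- Scan candidate results in order; invoking `abort` discards the rest of
-- the search, which is exactly the continuation captured by callCC.
firstWin :: [Maybe a] -> Maybe a
firstWin candidates = evalCont $ callCC $ \abort -> do
  forM_ candidates $ \c -> case c of
    Just x  -> abort (Just x)  -- win found: jump out immediately
    Nothing -> return ()       -- this line fails: keep searching
  return Nothing               -- every line exhausted
```

So the "search every line and fail" run corresponds to falling through to the final `return Nothing`, while the example2 run corresponds to `abort` firing partway through.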

### [Haskell-cafe] Haskell w/ delimited continuations

Call-by-name lambda-calculus is strictly more expressive (in Felleisen's sense) than call-by-value lambda-calculus, and the call-by-need (aka lazy) lambda-calculus is observationally equivalent to the call-by-name one. One can add shift/reset to any of these calculi (CBV shift/reset is the best known; there are several CBN shift/resets, including the one I'm particularly attached to; in principle one can add shift/reset to call-by-need). Adding control effects (shift/reset) changes the expressivity results. Now all three calculi are distinct and none subsumes the other. For example, the expression

    reset ((\x -> 1) (abort 2))

evaluates to 1 in call-by-name and evaluates to 2 in call-by-value. The expression

    reset ((\x -> x + x) (shift f f))

has the type int -> int in call-by-need (it is the function \x -> x + x) and it has the type int -> int -> int in call-by-name (and it is the curried addition function). The fact that call-by-need is no longer observationally equivalent to call-by-name, and that sharing becomes observable, is the most distressing. It disables many optimizations GHC is allowed to do. Types help: there are call-by-name calculi with shift/reset with effect typing; one can look at the type and see what control effect an expression may make. That would still permit GHC to optimize pure expressions and leave effectful expressions as they are. Alas, that type system is complex and I don't think inference is decidable there, due to the presence of subtyping (one must annotate at least some of the binders with types, in particular the binders of shift). It seems the simplest solution is to confine shift/reset to a monad.

Regarding purity: the obligatory reference is

    Amr Sabry. What is a Purely Functional Language?
    In J. Functional Programming, 8(1), 1-22, Jan. 1998.
    http://www.cs.indiana.edu/~sabry/papers/purelyFunctional.ps

Please see Definition 4.7. As Matthias Blume said, a bit informally, evaluation of a pure expression should not depend on CBN or CBV or some other such strategy. By this definition, an expression that involves shift/reset is not pure, as the above examples demonstrate.
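Confining shift/reset to a monad, as suggested above, is already possible in standard Haskell with the Cont monad from the transformers package; a small sketch of what delimited control inside a monad looks like (an illustrative example, not Oleg's code):

```haskell
import Control.Monad.Trans.Cont (evalCont, reset, shift)

-- shift captures "the rest of the reset block" as the function k.
-- Here k is \v -> v * 2, the remainder of the delimited computation,
-- and the shift body runs it once and then adds 1 to its result.
example :: Int
example = evalCont $ reset $ do
  x <- shift $ \k -> return (k 10 + 1)  -- run the rest with 10, then add 1
  return (x * 2)
-- k 10 = 20, so the whole expression evaluates to 21
```

Because the control effect is fenced off inside `Cont`, expressions outside the monad stay pure in Sabry's sense, which is exactly why this is the least disruptive option for GHC.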

### Re: [Haskell-cafe] Haskell w/ delimited continuations

On Sat, Feb 23, 2008 at 1:05 AM, [EMAIL PROTECTED] wrote: Adding control effects (shift/reset) changes the expressivity results. Now all three calculi are distinct and none subsumes the other. For example, the expression reset ((\x -> 1) (abort 2)) evaluates to 1 in call-by-name and evaluates to 2 in call-by-value. The expression reset ((\x -> x + x) (shift f f)) has the type int -> int in call-by-need (it is the function \x -> x + x) and it has the type int -> int -> int in call-by-name (and it is the curried addition function). Aha. Okay, so shift/reset exposes evaluation order, amongst other things. Hm... thank you very much! -- Taral [EMAIL PROTECTED] Please let me know if there's any further trouble I can give you. -- Unknown

### Re: [Haskell-cafe] Haskell w/ delimited continuations

On Sat, Feb 23, 2008 at 1:05 AM, [EMAIL PROTECTED] wrote: reset ((\x -> x + x) (shift f f)) This one doesn't typecheck, since you can't unify the types (a -> r) and r. -- Taral [EMAIL PROTECTED] Please let me know if there's any further trouble I can give you. -- Unknown

### [Haskell-cafe] Re: Haskell w/ delimited continuations

Taral [EMAIL PROTECTED] wrote in article [EMAIL PROTECTED] in gmane.comp.lang.haskell.cafe: On Sat, Feb 23, 2008 at 1:05 AM, [EMAIL PROTECTED] wrote: reset ((\x -> x + x) (shift f f)) This one doesn't typecheck, since you can't unify the types (a -> r) and r. Some type systems for delimited continuations, such as Danvy and Filinski's (http://www.daimi.au.dk/~danvy/Papers/fatc.ps.gz; DIKU TR 89/12), allow changing the answer type and admit this code. -- Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig God exists since mathematics is consistent, and the devil exists since its consistency cannot be proved. -- Hermann Weyl.

### [Haskell-cafe] Haskell w/ delimited continuations

My understanding of these things is limited, but what would stop me, theoretically speaking, from making a version of GHC with these primitives added:

    type Prompt r
    reset :: (Prompt r -> r) -> r
    shift :: Prompt r -> ((a -> _) -> r) -> a

(Where _ is either r or forall b. b.) -- Taral [EMAIL PROTECTED] Please let me know if there's any further trouble I can give you. -- Unknown

### Re: [Haskell-cafe] Haskell w/ delimited continuations

You might want to take a look at http://www.haskell.org/pipermail/haskell/2007-December/020034.html which shows an implementation of delimited continuations in Haskell 98 and possibly gets rid of any requirement of implementing primitives. -- ryan On 2/22/08, Taral [EMAIL PROTECTED] wrote: [... quoted message snipped]

### Re: [Haskell-cafe] Haskell w/ delimited continuations

See also http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont, an implementation of multi-prompt delimited continuations based on the paper “A Monadic Framework for Delimited Continuations” by R. Kent Dybvig, Simon Peyton Jones and Amr Sabry:

    reset :: MonadDelimitedCont p s m => (p a -> m a) -> m a
    shift :: MonadDelimitedCont p s m => p b -> ((m a -> m b) -> m b) -> m a

ryani.spam: [... quoted message snipped]

### Re: [Haskell-cafe] Re: Haskell w/ delimited continuations

On Fri, 2008-02-22 at 14:27 -0800, Taral wrote: On 2/22/08, Taral [EMAIL PROTECTED] wrote: reset :: (Prompt r -> r) -> r shift :: Prompt r -> ((a -> _) -> r) -> a The point of the question is about shift/reset with *these types*. I know there are implementations with other types. Nothing but sanity is stopping you. If you make a new language, you can do whatever you like. However, with shift and reset you can represent any effect, so you would utterly lose purity.

### [Haskell-cafe] Re: Haskell w/ delimited continuations

On 2/22/08, Taral [EMAIL PROTECTED] wrote:

reset :: (Prompt r -> r) -> r
shift :: Prompt r -> ((a -> _) -> r) -> a

The point of the question is about shift/reset with *these types*. I know there are implementations with other types. -- Taral

### Re: [Haskell-cafe] Re: Haskell w/ delimited continuations

On 2/22/08, Derek Elkins [EMAIL PROTECTED] wrote: Nothing but sanity is stopping you. If you make a new language, you can do whatever you like. However, with shift and reset you can represent any effect, so you would utterly lose purity. Can you give an example of an impure function created using these primitives? -- Taral

### Re: [Haskell-cafe] Re: Haskell w/ delimited continuations

On Fri, 2008-02-22 at 15:13 -0800, Taral wrote: Can you give an example of an impure function created using these primitives? shift and reset. But see these slides http://cs.ioc.ee/mpc-amast06/msfp/filinski-slides.pdf and/or one or both of http://citeseer.ist.psu.edu/filinski94representing.html http://citeseer.ist.psu.edu/filinski99representing.html
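To make Filinski's point concrete, here is a small sketch of expressing an exception-like effect with delimited control, using the shift/reset from transformers' Control.Monad.Trans.Cont rather than any hypothetical GHC primitive. Discarding the captured continuation aborts everything up to the enclosing reset; throwC and safeDiv are illustrative names, not from any of the cited papers:

```haskell
import Control.Monad.Trans.Cont (Cont, evalCont, reset, shift)

-- "throwing" discards the captured continuation entirely;
-- the enclosing reset then delivers the error value directly
throwC :: r -> Cont r a
throwC e = shift (\_ -> return e)

safeDiv :: Int -> Int -> Cont Int Int
safeDiv _ 0 = throwC (-1)          -- abort the whole reset block with -1
safeDiv x y = return (x `div` y)

main :: IO ()
main = do
  print (evalCont (reset (fmap (+ 100) (safeDiv 10 2))))  -- 105
  print (evalCont (reset (fmap (+ 100) (safeDiv 10 0))))  -- -1
```

In the second call, the pending (+ 100) is simply thrown away with the rest of the continuation, which is exactly the non-local control flow that makes an otherwise pure language impure if these operators are global.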

### Re: [Haskell-cafe] Re: Haskell w/ delimited continuations

On 2/22/08, Derek Elkins [EMAIL PROTECTED] wrote: shift and reset. I was under the impression that reset was a pure function. What side effects does it have? -- Taral

### Re: [Haskell-cafe] Re: Haskell w/ delimited continuations

On Fri, 2008-02-22 at 19:04 -0800, Taral wrote: I was under the impression that reset was a pure function. What side effects does it have? It depends on how you define pure function. It's not particularly relevant, and I mostly included it as I consider them a pair.

### [Haskell-cafe] Re: Haskell w/ delimited continuations

On 2/22/08, Taral [EMAIL PROTECTED] wrote: shift :: Prompt r -> ((a -> _) -> r) -> a (where _ is either r or forall b. b) It occurs to me that _ has to be r, otherwise the subcontinuation can escape. -- Taral

### [Haskell-cafe] Re: ANNOUNCE: CC-delcont-0.1; Delimited continuations for Haskell

Hello again, I apologize for replying to myself, but since no one else is talking to me, I suppose I have no choice. :) Anyhow, in case some people were intrigued, but simply didn't speak up (and because I was interested in seeing how easily it could be done), I took the liberty of implementing a version of the parser inverter that mimics the OCaml semantics pretty closely (I think). As I mentioned, this involves making a list data type that incorporates monads, so that it can be lazy in the side effects used to produce it. In short, it looks like this:

data MList' m a = MNil | MCons a (MList m a)
type MList m a = m (MList' m a)

So, each list tail (including the entire list) is associated with a side effect, which has the ultimate effect that you can build lists in ways such as:

toMList :: Monad m => m (Maybe t) -> MList m t
toMList gen = gen >>= maybe nil (`cons` toMList gen)

This is the MList analogue of the toList function from the previous message (slightly modified here to demonstrate the similarity):

toList :: Monad m => m (Maybe a) -> m [a]
toList gen = gen >>= maybe (return []) (\c -> liftM (c:) $ toList gen)

However, toList uses liftM, which will strictly sequence the effects (the recursive toList call has to complete before the whole list is returned), whereas toMList simply adds the *monadic action* to produce the rest of the list as the tail, and so the side effects it entails don't actually occur until a consumer asks to see that part of the list. So, the proof is in the output. The sample program (source included as an attachment) demonstrates normal lexing (where the underlying monad is just IO) and inverted lexing (which uses delimited continuations layered over IO). The 'lexing' is just the 'words' function adapted to MLists (I thought about doing a full-on parser, but I think that'd require making the parser a monad transformer (essentially) over the base monad, which would be complex, to say the least).
The relevant parts look like so:

normalLex :: IO ()
normalLex = printTokens (wordsML (liftList "The quick brown fox jumps over the lazy dog"))

reqLex :: CCT ans IO ()
reqLex = do p1 <- begin
            p2 <- provideSome "The quick brown" p1
            pStrLn "Break 1"
            p3 <- provideSome "fox jumps over" p2
            pStrLn "Break 2"
            p4 <- provideSome "the laz" p3
            pStrLn "Break 3"
            provideSome "y dog" p4 >>= finish
            pStrLn "Rollback"
            provideSome "iest dog" p4 >>= finish
            return ()

Which main invokes appropriately. Output looks like so:

Normal Lexing
-
The quick brown fox jumps over the lazy dog
-
Inverted Lexing
---
The quick brown
Break 1
fox jumps over
Break 2
the
Break 3
lazy dog
Rollback
laziest dog
---

So, success! Tokens are printed out as soon as the lexer is able to recognize them, properly interleaved with other IO side effects, and resuming from an intermediate parse does not cause duplication of output. So, that wasn't really that hard to hack up. However, I should mention that it wasn't trivial, either. When converting list functions to MList functions, you have to be very careful not to perform side effects twice. For instance, my first pass gave output like:

... he uick rown Break 1 ox ...

although it worked fine with the normal lexer. The culprit? I had written nullML like so:

nullML :: Monad m => MList m a -> m Bool
nullML m = isNothing `liftM` uncons m

But in that version, testing for null and then using the list performs side effects twice, and due to the way the delimited continuations produce MLists, characters were getting dropped! The correct version is:

nullML :: Monad m => MList m a -> m (Bool, MList m a)
nullML m = uncons m >>= maybe (return (True, nil))
                              (\(a, m') -> return (False, a `cons` m'))

which returns both whether the list is null, and a new list that won't perform a duplicate side effect. So, I guess what I'm saying is that reasoning about code with lots of embedded side effects can be difficult.
:) As a final aside, it should be noted that to get the desired effect (that is, laziness with interleaved side effects), it's important to make use of the monadic data structures as much as possible. For instance, wordsML produces not an (m [MList m a]) or MList m [a] or anything like that (although the latter may work), but an MList m (MList m a), which is important for the effects to be able to get a hold over printTokens. However, if you want to produce something that's not a list, say, a tree, you'll have to write an MTree, or, in general, one lazy-effectful data structure
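The MList idea above can be exercised in a small self-contained sketch (no CC-delcont needed; IO is the underlying monad). An IORef-backed generator makes it observable that only the demanded prefix of the list runs its effects; takeML is a hypothetical consumer added just for this demonstration:

```haskell
import Data.IORef

-- a list whose tail is itself a monadic action: the effects for each
-- tail run only when a consumer demands that part of the list
data MList' m a = MNil | MCons a (MList m a)
type MList m a = m (MList' m a)

-- build an MList from an effectful generator (Nothing ends the list)
toMList :: Monad m => m (Maybe a) -> MList m a
toMList gen = gen >>= maybe (return MNil) (\c -> return (MCons c (toMList gen)))

-- demand only the first n elements, running only the effects needed for them
takeML :: Monad m => Int -> MList m a -> m [a]
takeML 0 _ = return []
takeML n m = m >>= \l -> case l of
  MNil       -> return []
  MCons x xs -> fmap (x :) (takeML (n - 1) xs)

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  let gen = do n <- readIORef ref
               writeIORef ref (n + 1)
               return (if n < 10 then Just n else Nothing)
  xs <- takeML 3 (toMList gen)
  print xs            -- [0,1,2]
  used <- readIORef ref
  print used          -- 3: only three generator effects ran, not eleven
```

Compare with the strict toList from the mail: applied to the same generator it would run all eleven effects before returning anything.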

### [Haskell] ANNOUNCE: CC-delcont-0.1; Delimited continuations for Haskell

Hello, After a bit of hacking, and much documenting, I'm pleased to announce a preliminary release of a delimited continuation library for Haskell. The implementation is drawn from A Monadic Framework for Delimited Continuations[1] by Dybvig, Peyton Jones and Sabry, although it has some abstraction available over said implementation (there is both a monad and a transformer; control operations will work interchangeably with either. In addition, all four control operators from Shift to Control[2] are implemented). In addition, I have done the adaptation necessary to include the Haskell implementation of the dynamic scoping investigated in Delimited Dynamic Binding[3] by Kiselyov, Shan and Sabry, and it has been included. One can think of it as embeddings of the reader or state monads into the delimited control monads (only more flexible). In the future, I'd like to also, possibly, include something like Oleg's generic zippers, and possibly other useful abstractions based on delimited continuations, if I find them. But that's work for another day, I suppose. If you wish to try it out, a package is available on hackage:

Page: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont-0.1
Tarball: http://hackage.haskell.org/packages/archive/CC-delcont/0.1/CC-delcont-0.1.tar.gz

However, I should warn that it's fairly dependent on GHC HEAD at the moment: 1) It uses the Unsafe.Coerce module. 2) It uses GADTs to store type class evidence. 3) It uses GHC features that haddock will choke on, so haddock.ghc will be required for docs (and I haven't gotten that working yet to check that the haddock is bug-free), which in turn requires GHC HEAD. 1 and 2 could be relaxed if there's interest. Please feel free to let me know if you find any bugs, or have any suggestions for improvements. I'll follow up this message with a message containing an example program and a discussion thereof.
Cheers, Dan Doel 1: http://www.cs.indiana.edu/~sabry/papers/monadicDC.pdf 2: http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf 3: http://okmij.org/ftp/papers/DDBinding.pdf ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell

### [Haskell] Re: ANNOUNCE: CC-delcont-0.1; Delimited continuations for Haskell

This is intended as an illustration of how one might use the CC-delcont library, and/or what it might be good for at all. For more, a mailing list search for 'oleg' would likely be fruitful, as this library is heavily derived from the delimited continuation implementation he's used in past mails. Essentially, this is a port of Oleg's 'Incremental, undoable parsing' as seen on the OCaml mailing list. It does lack some of the properties of the OCaml implementation, however (which I'll talk more about later). The original mail is available here: http://caml.inria.fr/pub/ml-archives/caml-list/2007/07/7a34650001bf6876b71c7b1060ac501f.en.html This message is (hopefully) literate haskell, and should be able to be run directly, as long as the CC-delcont library is installed.

{-# OPTIONS_GHC -fglasgow-exts #-}
module Parse where

import Text.ParserCombinators.Parsec
import Control.Monad
import Control.Monad.Trans
import Control.Monad.CC
import Control.Monad.CC.Dynvar

The Inverter: First comes a datatype that is, in some sense, a reification of the act of parsing. Done, of course, means that the parsing is over, and it holds the result of parsing. ReqChar means that the parser is paused, waiting for more input. It holds a position, indicating how many characters it's read so far.

data StreamReq m a where
    Done    :: Monad m => a -> StreamReq m a
    ReqChar :: Monad m => Int -> (Maybe Char -> m (StreamReq m a)) -> StreamReq m a

fromDone :: StreamReq m a -> a
fromDone (Done a) = a

instance Show (StreamReq m a) where
    show (Done _)      = "Done"
    show (ReqChar p _) = "Position: " ++ show p

And now comes the magic. toList below takes a function that, given a position in a stream (list), returns the element that should be at that position, and builds such a stream. The important part is that this function works in the context of an arbitrary monad.
streamInv is such a position-to-element function, but when asked for a character, it captures 'the rest of the stream production,' and reifies it into a StreamReq using delimited continuations. This is what allows one to, in essence, get a pointer into 'the act of parsing,' and pass such things around, and even have multiple pointers in play at once.

streamInv :: MonadDelimitedCont p s m => p (StreamReq m a) -> Int -> m (Maybe Char)
streamInv p pos = shift p (\sk -> return $ ReqChar pos (sk . return))

toList :: Monad m => (Int -> m (Maybe a)) -> m [a]
toList f = toList' 0
  where toList' n = f n >>= maybe (return []) (\c -> liftM (c:) $ toList' (n+1))

The following three functions operate on the resumable parsers, either providing input, or telling the parser that there is no more input. Hopefully they're fairly straightforward.

provide :: Monad m => Char -> StreamReq m a -> m (StreamReq m a)
provide _ d@(Done _)    = return d
provide c (ReqChar _ f) = f $ Just c

finish :: Monad m => StreamReq m a -> m (StreamReq m a)
finish d@(Done _)    = return d
finish (ReqChar _ f) = f Nothing

provideSome :: Monad m => String -> StreamReq m a -> m (StreamReq m a)
provideSome []     s = return s
provideSome (x:xs) s = provide x s >>= provideSome xs

The following is sort of a wrapper. It turns a parsing function into a resumable parser. As can be seen, the parser function is pure; you can turn any existing parse function into a resumable parser in this manner. In other words, there's no need to worry about having to go through all your code, putting delimited continuation constraints on all your functions; no monad pollution is involved.

invertParse :: (MonadIO m, MonadDelimitedCont p s m) => (String -> a) -> m (StreamReq m a)
invertParse parser = reset $ \p -> (Done . parser) `liftM` toList (streamInv p)

The parser: This is the parser that will be inverted in the example. It reads a sum of several numbers, something like: 1 + 23 + 46 + 59 + 102 and returns the list of numbers to be summed (as an [Integer]).
As can be seen, this is just an ordinary Parsec parser.

plus   = char '+' >> spaces >> return (++)
number = many1 digit >>= \n -> spaces >> return (read n :: Integer)
total p = p >>= \a -> getInput >>= guard . null >> return a
sump   = total $ chainl1 (liftM return number) plus

Usage: This portion finally makes use of the above stream inversion to interesting effect (I hope). It will repeatedly ask for input to parse, gradually feeding that input into the parser, until a blank line is provided, at which point it assumes the user is finished. However, at each user input, the program stops to see if the input up to that point would have been a successful parse. If so, it saves that partial parser. If, when the user signals the end of input, the parse fails, the program will resume taking input from the last successful parse. The saving of the successful parses is achieved through the use of the dynamically scoped variables also included
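The StreamReq protocol can also be sketched without delimited continuations at all, by writing the requests by hand. This toy counter stands in for the inverted parser and shows how provide/finish drive a paused computation; the Monad constraints in the original GADT are dropped to keep the sketch minimal:

```haskell
-- a hand-rolled StreamReq, built directly rather than by capturing a
-- delimited continuation, just to show the provide/finish driving protocol
data StreamReq m a
  = Done a
  | ReqChar Int (Maybe Char -> m (StreamReq m a))

-- a toy "parser": counts characters until end-of-input
counter :: Monad m => Int -> StreamReq m Int
counter n = ReqChar n (\mc -> return (case mc of
                                        Nothing -> Done n
                                        Just _  -> counter (n + 1)))

provide :: Monad m => Char -> StreamReq m a -> m (StreamReq m a)
provide _ d@(Done _)    = return d
provide c (ReqChar _ f) = f (Just c)

finish :: Monad m => StreamReq m a -> m (StreamReq m a)
finish d@(Done _)    = return d
finish (ReqChar _ f) = f Nothing

main :: IO ()
main = do
  s1 <- provide 'a' (counter 0)
  s2 <- provide 'b' s1
  r  <- finish s2
  case r of
    Done n      -> print n   -- 2
    ReqChar _ _ -> putStrLn "still waiting"
```

What CC-delcont buys over this hand-rolled version is that an arbitrary pure parsing function can be turned into such a resumable value by invertParse, rather than having to be written in request-passing style from the start.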


### Re: [Haskell-cafe] Re: Playing with delimited continuations

> Anyhow, thanks for the input, and the pointers to the papers and such (and writing so many of them in the first place. :) Incidentally, I really enjoyed your Delimited continuations in operating systems paper. Reading that one really made things click for me as to how delimited continuations can actually show up in real systems, as opposed to just being an esoteric, abstract construct).

i'd like to chime in there!-) just as i hadn't been comfortable with call/cc before someone made the link to negation, i hadn't been thinking much about delimited continuations before your paper made the (now obvious;-) connection to (nested) evaluation contexts, which i tend to use all the time. sometimes, such simple connections are all that is needed. btw, for me call/cc and delimited continuations, in contrast to continuations in general, were not esoteric, abstract, but entirely too pragmatic, looking like powerful hacks, able to be bent to any purpose, seemingly lacking in foundations, though not in applications. makes me wonder if non-haskellers see monads and monadic i/o in the same way as non-schemers see continuations and call/cc (the one too abstract, the other too hacky?-). thanks, claus

### [Haskell-cafe] Re: Playing with delimited continuations

The ZFS library contains the most up-to-date implementation of the CC monad and the transformer. I have a few other versions scattered around, but they are probably not relevant. I found CC_FrameT.hs to be the fastest one.

> ... is that you can think of normal continuations as delimited continuations with a global prompt p0 that denotes the end of the computation. You can emulate this using the reader monad to store the prompt:

Alternatively, one can use implicit parameters. They are quite handy in this case.

> So, I suppose my first question is if anyone can see a better way of generalizing the types of the control operators to work with the arbitrarily nested monad stacks typically used in MTL.

I believe the answer to this question is NO. That means that (delimited) continuations expose a fundamental limitation of monad transformers. The monad transformer approach imposes a strict separation of layers. The layers are fixed at compile time and can't change dynamically. Alas, there are many _practically significant_ circumstances where the layers have to change dynamically, have to interleave. Our ICFP06 paper discusses this point a bit, and the accompanying code contains the relevant examples. http://okmij.org/ftp/packages/DBplusDC-README.txt Please see the README file and the file reader.hs in the source code. When we first realized that monad transformers had a significant limitation, we too felt odd. Please see also below.

> First, the prompt module needs to use unsafeCoerce, or something equivalent. But, of course, unsafeCoerce feels dirty to me, and it'd be nice if some cool type system hackery could be used instead of going around the type system.

Alternatively, if we have reference cells available (STRef or IORef), then we don't need unsafeCoerce. The whole code is pure Haskell with no unsafe operations, and is type safe and sound. This outcome is not an accident: given monadic style, the CC monad and reference cells are `equivalent'.
Given one, we can get the other. Any type system that is capable of typing reference cells will give us delimited continuations (provided we have monadic style, so we can deal with the `stack'). One part of that claim, from multi-prompt delimited continuations to reference cells, was formulated and proven in the ICFP06 paper. The converse was formulated in an appendix to the paper, which was taken out because we ran out of space-time. The proof was done by construction, which is not generally available (although the pure OCaml implementation of CC shows how that can be done). The only drawback of using STRef or IORef to implement CC is that CC is no longer a transformer (because neither ST nor IO is). That may not be such a big drawback, as the only benefit of CC being a transformer (for us, at least) is that we can lay it over the IO monad. Well, actually there is another benefit: limiting the effects of the code. Even if the CC m code will execute in the IO monad, the code cannot know that. Incidentally, there is actually little need to mix CC with other monad transformers like Reader, Writer, RWST, Error, etc. The CC monad subsumes them all! That is the important theoretical result by Filinski: delimited continuations can express every expressible monad. This has important practical benefits: we can use real names (variables of the type Prompt) to operate the various pieces of state, environment, errors, etc. We no longer need to count the number of 'lift's. The overall performance may be better, too: each monadic layer adds at least one closure. The CC monad can implement threads, and I have a hunch the CC monad can implement STM.

> Second is a minor point. There's a sequence datatype in the paper that represents the delimited stack: data Seq s r a b ... which represents a sequence of frames that take a value of type 'a' to a value of type 'b'.
> The paper mentions that the empty constructor should have type 'Seq s r a a', but that it's impossible to do that, so they instead have it take a function that provides evidence of the equivalence between a and b, using 'id' and a smart constructor to have the same effect. But, with GADTs, it's now possible to have a constructor with the right type directly, so I did that in my implementation.

Have you checked the latest draft of the paper from Amr Sabry's home page? They might have changed that point. Anyway, there is an implementation

data Seq s r a
    = PushP (Prompt.Prompt r a) (Seq s r a)
    | forall c. PushSeg (s r a c) (Seq s r c)
    | forall c. PushCO (a -> c) (Seq s r c)

type SubSeq s r a b = Seq s r b -> Seq s r a

that has no EmptyS constructor, as you can see. So, the trouble you have encountered goes away. The authors are aware of that minor point. The point is truly minor, so perhaps it is not worth bothering with. If you'd like that code, please check with the authors first, because it is wholly based on their work.
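Filinski's result mentioned above can be made concrete for the state monad. The following sketch uses the shift/reset from transformers' Control.Monad.Trans.Cont (single-prompt, so simpler than the multi-prompt CC monad under discussion): get and put are defined purely with delimited control, threading the state through the answer type. The names StateC, getS, putS and runStateC are mine, not from the papers:

```haskell
import Control.Monad.Trans.Cont (Cont, evalCont, reset, shift)

-- a stateful computation of type a is a Cont computation whose
-- answer type is a function from the current state
type StateC s r a = Cont (s -> r) a

-- duplicate the state: hand it to the continuation as the result,
-- and keep threading it as the current state
getS :: StateC s r s
getS = shift (\k -> return (\s -> k s s))

-- discard the old state and thread the new one
putS :: s -> StateC s r ()
putS s' = shift (\k -> return (\_ -> k () s'))

-- delimit the computation, pair the result with the final state,
-- and apply the whole thing to the initial state
runStateC :: StateC s (a, s) a -> s -> (a, s)
runStateC m = evalCont (reset (m >>= \a -> return (\s -> (a, s))))

main :: IO ()
main = print (runStateC prog 10)  -- (21,11)
  where prog = do x <- getS
                  putS (x + 1)
                  y <- getS
                  return (x + y)
```

No State monad, no IORef: the entire get/put protocol is recovered from delimited control alone, which is the essence of the "CC subsumes them all" claim.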

### [Haskell-cafe] Re: Playing with delimited continuations

On Thursday 05 July 2007, [EMAIL PROTECTED] wrote:

> I believe the answer to this question is NO. That means, (delimited) continuations expose the fundamental limitation of monadic transformers.

I suppose this is a bit disheartening, but I suppose it's also good to know I wasn't totally off track. Closure is nice.

> Our ICFP06 paper discusses this point a bit, and the accompanying code contains the relevant examples. http://okmij.org/ftp/packages/DBplusDC-README.txt Please see the README file and the file reader.hs in the source code.

I stuck this on my reading list during my brief searches a few days ago. I'll have to get busy reading it.

> Alternatively, if we have reference cells available (STRef or IORef), then we don't need unsafeCoerce. The whole code is pure Haskell with no unsafe operations, and is type safe and sound.

Good to know. I suppose ST gives me less of an icky feeling than unsafeCoerce.

> Incidentally, there is actually little need to mix CC with other monad transformers like reader, writer, RWST, Error, etc. The CC monad subsumes them all!

This had actually crossed my mind, although making CC work more like the rest of the monads in the MTL, rather than the somewhat more isolated ST, seemed like a noble goal. Perhaps time would be better spent working out analogues/instances of classes using just the CC monad, though, for times when you want to use the interfaces (but not necessarily the actual transformers, due to the issues above) along with delimited continuations. I suppose it's something to think about.

> Have you checked the latest draft of the paper from Amr Sabry's home page? They might have changed that point. Anyway, there is an implementation
>
> data Seq s r a
>     = PushP (Prompt.Prompt r a) (Seq s r a)
>     | forall c. PushSeg (s r a c) (Seq s r c)
>     | forall c. PushCO (a -> c) (Seq s r c)
>
> type SubSeq s r a b = Seq s r b -> Seq s r a
>
> that has no EmptyS constructor, as you can see. So, the trouble you have encountered goes away.
> The authors are aware of that minor point. The point is truly minor, so perhaps it is not worth bothering with. If you'd like that code, please check with the authors first, because it is wholly based on their work.

As a matter of fact, I had two versions of the paper, but neither was the latest. The newest version (if it's what I now have) does use GADTs to enforce the 'Seq s r a a' type of EmptyS, but more surprisingly than that (to me, at least), it no longer has a coercion constructor, as they no longer use functions for evidence. Instead, it uses another GADT as the evidence, and unsafeCoerce on that. Eliminating PushCO and such altogether seems attractive. Anyhow, thanks for the input, and the pointers to the papers and such (and writing so many of them in the first place. :) Incidentally, I really enjoyed your Delimited continuations in operating systems paper. Reading that one really made things click for me as to how delimited continuations can actually show up in real systems, as opposed to just being an esoteric, abstract construct). Regards, Dan Doel
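The GADT technique Dan describes, giving the empty constructor the type 'Seq s r a a' directly instead of carrying coercion evidence, can be illustrated with a stripped-down type-aligned sequence (prompts and subcontinuation segments omitted; this is a toy, not the CC-delcont datatype):

```haskell
{-# LANGUAGE GADTs #-}

-- a type-aligned sequence of frames from a to b; with GADTs the empty
-- constructor can state a ~ b directly, so no 'id'-based evidence or
-- smart constructor is needed
data Seq a b where
  EmptyS :: Seq a a
  Push   :: (a -> c) -> Seq c b -> Seq a b

-- running the sequence composes the frames left to right
apply :: Seq a b -> a -> b
apply EmptyS        x = x
apply (Push f rest) x = apply rest (f x)

main :: IO ()
main = putStrLn (apply (Push (+ 1) (Push show EmptyS)) (41 :: Integer))  -- prints "42"
```

The EmptyS case is where the GADT earns its keep: matching on it refines the result type b to the input type a, which is exactly what the pre-GADT encoding had to simulate with an evidence function.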

### [Haskell-cafe] Playing with delimited continuations

Hello, My interest was recently caught reading some of Oleg Kiselyov's material on delimited continuations, and so I decided to look into them a little closer. Don Stewart also mentioned in passing on #haskell that it'd be nice to have support for delimited continuations either in the standard library, or in some other easily installable package. So, I thought I'd see what such a beast might look like, and dug up the Dybvig, Peyton-Jones, Sabry paper[1] on the subject that I'd read long ago, but probably not understood much. :) After a day or so of hacking, I have (what I think is) a proper implementation of the monad and transformer, along with a suitable typeclass, and instances for the various transformers in the MTL. However, I have by no means tested it extensively (as I don't have a lot of ideas off hand for monad stacks involving delimited continuations), and I'm not totally thrilled with the results I have, so I thought I'd ask for advice/commentary here. Code is attached. First, I guess, should come an example of code using the delimited continuations:

pop = do (h:t) <- get
         put t
         return h

abortP p e = withSubCont p (\_ -> e)

loop 0 _ = return 1
loop n p = do i <- pop
              if i == 0
                then abortP p (return 0)
                else do r <- loop (n-1) p
                        return (i*r)

test1 n l = runDelCont (runStateT (newPrompt >>= \p -> pushPrompt p (loop n p)) l)
test2 n l = runState (runDelContT (newPrompt >>= \p -> pushPrompt p (loop n p))) l

So, loop finds the product of the first n numbers stored in a list in the state, but aborts immediately if it sees a 0. test1 and test2 stack the monads in the opposite order, but the results are the same in this case (it isn't a very sophisticated example). Another example, from the paper, is that you can think of normal continuations as delimited continuations with a global prompt p0 that denotes the end of the computation.
You can emulate this using the reader monad to store the prompt:

    type Continue r b a = ReaderT (Prompt.Prompt r b) (DelCont r) a

    runContinue :: (forall r. Continue r b b) -> b
    runContinue ct = runDelCont (newPrompt >>= \p -> pushPrompt p (runReaderT ct p))

    callCC' f = withCont (\k -> pushSubCont k (f (reify k)))
      where reify k v = abort (pushSubCont k (return v))

    abort e = withCont (\_ -> e)

    withCont e = ask >>= \p -> withSubCont p (\k -> pushPrompt p (e k))

    loop2 l = callCC' (\k -> loop' k l 1)
      where loop' _ []    n = return (show n)
            loop' k (0:_) _ = k "The answer must be 0."
            loop' k (i:t) n = loop' k t (i*n)

So, the loop computes the product of a list of numbers, returning a string representation thereof, but aborts with a different message if it sees 0. Again, nothing special, but it seems to work. However, this is where I started to run into some ugliness that followed from my design choices. Most flows from the typeclass:

    class (Monad m) => MonadDelCont p s m | m -> p s where
      newPrompt   :: m (p a)
      pushPrompt  :: p a -> m a -> m a
      withSubCont :: p b -> (s a b -> m b) -> m a
      pushSubCont :: s a b -> m a -> m b

So, 'm' is the delimited control monad in question, 'p' is the type of prompts associated with said monad, and 's' is the associated type of subcontinuations that take an 'a', and produce a 'b'. Those four functions are the generalizations of the four control operators from the paper. The crux of the matter is the way the prompts and subcontinuations are typed. A prompt 'p a' can have values of type 'a' returned through it, which is fine in the vanilla DelCont monad. However, in a monad transformed by a StateT, a computation 'm a' isn't returning an 'a' through the prompt. It's actually returning an '(a,s)', due to the state threading. And the same is an issue with the subcontinuation.
So, I came up with a couple of wrappers:

    newtype PairedPrompt  s p a   = PP  { unPP  :: p (a, s) }
    newtype PairedSubCont s k a b = PSC { unPSC :: k (a, s) (b, s) }

And then you can declare instances like so:

    instance (MonadDelCont p s m) => MonadDelCont (PairedPrompt t p) (PairedSubCont t s) (StateT t m) where ...

Where the declarations not only lift the delimited control actions, but wrap and unwrap the prompts and subcontinuations appropriately. However, this has two issues: 1) It seems kind of kludgy at first glance, although maybe that's just me. 2) It doesn't always work correctly. Consider the following code:

    loop3 l = callCC' (\k -> loop' k l 1)
      where loop' _ []    n = return n
            loop' k (0:_) _ = k 0
            loop' k (i:t) n = do tell [n]
                                 loop' k t (i*n)

It does almost the same thing as loop2, only it has some writer output, too. And we'd like to write:

    test3 l = runContinue (runWriterT (loop3 l))

but we can't, because that's a type error, because the prompt is created outside of runWriterT, where it will have type
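For readers without the attached code, the abort-on-zero product from earlier in this message can be sketched with the shiftT/resetT operators from the transformers package's Control.Monad.Trans.Cont. This is an illustrative sketch of the same behaviour under a related but simpler interface, not the author's attached newPrompt/pushPrompt implementation:

```haskell
import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

-- Product of a list of Ints; aborts to the enclosing resetT with 0
-- on the first zero, discarding the rest of the computation.
prod :: Monad m => [Int] -> m Int
prod xs = evalContT (resetT (go xs))
  where
    go []    = return 1
    go (0:_) = shiftT (\_ -> return 0)   -- drop the captured continuation
    go (i:t) = fmap (i *) (go t)
```

For example, `prod [1,2,3,4] :: IO Int` yields 24, while `prod [1,2,0,4]` yields 0 without multiplying anything.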

### [Haskell-cafe] Continuations

Folks, My current setup involves threads that block on a MVar/TMVar until they read an event from it and then proceed. I would like to convert these threads into continuations whereby a continuation is saved when an event is requested and I can call that continuation when an event arrives. I had done this in Lisp before like this:

    (defun/cc receive (game)
      (let/cc k
        ;; save continuation
        (setf (continuation game) k)
        ;; TODO: start a timer here
        ))

    (defun/cc ask-for-blind (game amount context)
      (let ((posted nil)
            (seat nil)
            (active (car context))
            (small-blind-p (= (small-blind$ game) amount)))
        (while (and (not posted) (car active))
          (setf seat (pop active))
          ;; skip people who are waiting
          ;; for the big blind if small blind
          ;; is being posted.
          (unless (and (waiting-for-bb-p seat) small-blind-p)
            (setf (state seat) 'sitting-out)
            (setf (current game) seat)
            (send (player seat) 'blind amount)
            (let* ((cmd (receive game))   ; --- note the call to receive here
                   (action (first cmd))
                   (bet (second cmd))
                   (inplay$ (inplay$ (player seat ...

How would this translate into Haskell and the Cont monad? Alternatively, I would appreciate an example that requests, say, two Ints by saving a continuation each time and returns the sum. Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
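The "save a continuation, resume it when an event arrives" pattern Joel asks about can be sketched with ContT from the transformers/mtl packages. All names here (Status, askInt, feed) are illustrative, not from Joel's code: a computation suspends by capturing its continuation in a Waiting constructor, and a driver resumes it by applying that continuation to an incoming value.

```haskell
import Control.Monad.Cont (ContT (..), runContT)

-- Status of a suspended computation: finished, or waiting for an Int.
data Status = Done Int | Waiting (Int -> IO Status)

-- Suspend: capture the current continuation and hand it to the driver.
askInt :: ContT Status IO Int
askInt = ContT (\k -> return (Waiting k))

-- Requests two Ints, saving a continuation each time, and returns the sum.
sumTwo :: ContT Status IO Int
sumTwo = do
  a <- askInt
  b <- askInt
  return (a + b)

-- Drive the computation, feeding it "events" from a list as it suspends.
feed :: [Int] -> IO (Maybe Int)
feed ins = runContT sumTwo (return . Done) >>= go ins
  where
    go _      (Done n)    = return (Just n)
    go (i:is) (Waiting k) = k i >>= go is
    go []     (Waiting _) = return Nothing   -- no event arrived in time
```

Here `feed [3, 4]` resumes the saved continuation twice and produces `Just 7`; a real system would store the `Waiting` continuation somewhere (as Joel's Lisp stores it in the game object) and apply it when the network delivers an event.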

### Re: [Haskell-cafe] Continuations

On Dec 23, 2005, at 1:06 PM, Bulat Ziganshin wrote: hm... you are wasting much time unsystematically optimizing randomly selected parts of the program. It's an assumption that you are making. what do you want to buy with continuations and, more important, why do you need it? To try to implement thread priorities. I would like to use continuations instead of threads and pick the next continuation to run based on how much time it has to respond to the poker server. JR Alternatively, I would appreciate an example that requests, say, two JR Ints by saving a continuation each time and returns the sum.

    do a <- readLn :: IO Int
       b <- readLn :: IO Int
       return (a+b)

[...] amazed? :) Amused yes, amazed no. The code does not save a continuation after requesting each integer and does not allow me to call the saved continuation with an integer of my choice. This is pretty much what I'm looking for: http://lisp.tech.coop/Web%2FContinuation Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

### [Haskell-cafe] Problem with continuations and typing

Jerzy Karczmarczuk wrote: : | zeros fc sc = sc 0 zeros | | fails to compile as well. *I do not ask why, I know*. | | But I would like to continue this exercise along these lines, without too much | exotism (no monads, yet...), for my students. Do you have any simple work-around? | Introduce some algebraic constructors? Perhaps higher-rank polymorphism could do | something (but then I would have to explain it to my folk...) : How about this for a non-exotic algebraic type?

    newtype G a b = G{ unG :: b -> (a -> G a b -> b) -> b }

    glist g = unG g [] (\b g' -> b : glist g')

    zeros = G (\no yes -> yes 0 zeros)

    disj g1 g2 = G (\no yes -> unG g1 (unG g2 no yes)
                                      (\b g1' -> yes b (disj g1' g2)))

I haven't had much practice with continuations, so I don't know whether I've just lost some generality there. But it does support *some* avoidance of higher-rank polymorphism, through the use of good old partial application. For example, the type of the state variable s doesn't leak into the result type of unfold:

    unfold f s = G (\no yes -> case f s of
                                 Nothing      -> no
                                 Just (s', b) -> yes b (unfold f s'))

HTH, Tom ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
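To check that Tom's wrapper behaves as intended, here is a self-contained version with inferred type signatures added and a hypothetical fromList helper built from unfold. This is a sketch for experimentation, not part of the original message:

```haskell
-- Tom's generator type: a failure result and a success continuation.
newtype G a r = G { unG :: r -> (a -> G a r -> r) -> r }

glist :: G a [a] -> [a]
glist g = unG g [] (\b g' -> b : glist g')

zeros :: G Int r
zeros = G (\_no yes -> yes 0 zeros)

-- Yield g1's values; when g1 fails, fall through to g2.
disj :: G a r -> G a r -> G a r
disj g1 g2 = G (\no yes -> unG g1 (unG g2 no yes)
                                  (\b g1' -> yes b (disj g1' g2)))

unfold :: (s -> Maybe (s, a)) -> s -> G a r
unfold f s = G (\no yes -> case f s of
                             Nothing      -> no
                             Just (s', b) -> yes b (unfold f s'))

-- Hypothetical helper: a finite generator from a list, via unfold.
fromList :: [a] -> G a r
fromList = unfold (\s -> case s of
                           []    -> Nothing
                           (x:t) -> Just (t, x))
```

With these, `take 5 (glist zeros)` gives `[0,0,0,0,0]`, and `glist (disj (fromList [1,2]) (fromList [3,4]))` gives `[1,2,3,4]`, matching the intended disjunction semantics.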

### [Haskell-cafe] Problem with continuations and typing

I resend a posting which has been blocked by some daemon according to which I am not a member of this list (and which has not been cleared by Simon M.) == I tried to cook up a simple version of nondeterministic computations using the two-continuation model. I got stuck on an 'infinite type' error, and my question is: have you seen a *concrete* (really concrete) Haskell implementation of a similar model, I could inspire myself on? (I know a few theoretical works on that). (And for pedagogical reasons I wanted to work this until the end without ever using the magic word monad... No >>= nor other similar abominations.) A generator is something which takes two continuations: * a failure continuation, a parameter-less function, for example

    failure () = error "No more solutions"

* a success continuation, a function which takes two parameters: * a value which it may return, * another generator which may return further values. (This is a particular version of the model. There are others). A trivial success continuation returns the intercepted value.

    accept x gen = x   -- (or: accept = const)

The trivial generators are the one which fails, and one which returns just one fixed value:

    nogen fcnt scnt = fcnt ()
    unit x fcnt scnt = scnt x nogen

Getting the next generator (skipping the first value) may be obtained with

    next gen fc sc = gen fc (\_ nxt -> nxt fc sc)

Of course next (unit x) is equivalent to nogen. Alternative. Here I got stuck. Now, try to define (disj gen1 gen2). This is a generator which launches gen1 and, if it fails, launches gen2. But if gen1 succeeds, the 'next' generator it provides should also invoke gen2 in the case of failure, and this is recursive. So:

    disj gen1 gen2 fc sc = gen1 (\_ -> gen2 fc sc)
                                (\x nxt -> sc x (disj nxt gen2))

Intuitively it is OK (e.g., it works in Scheme). But Haskell protests: (disj nxt gen2) cannot be typed. I do not ask why, I see myself...
An attempt to collect all the generated instances in a list fails as well for the same reason:

    glist gen = gen (\_ -> []) (\x nxt -> x : glist nxt)

Even simpler, a generator which never fails, and returns zeros

    zeros fc sc = sc 0 zeros

fails to compile as well. *I do not ask why, I know*. But I would like to continue this exercise along these lines, without too much exotism (no monads, yet...), for my students. Do you have any simple work-around? Introduce some algebraic constructors? Perhaps higher-rank polymorphism could do something (but then I would have to explain it to my folk...) I would hate this... Gracias. Jerzy Karczmarczuk ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

### [Haskell] Re: Monads vs. continuations

Gregory Woodhouse [EMAIL PROTECTED] wrote in article [EMAIL PROTECTED] in gmane.comp.lang.haskell.general: Okay, this seems sensible enough. Loosely speaking, I see this code as getting a line, checking to see if it's empty. If it is, it just quits (returning the empty value). Otherwise, it prints line, and invokes itself through a *new* do statement. That seems awfully like using a continuation to track the input stream as part of the environment. But it seems obvious to me that there is something I'm not understanding here. I think of the do as providing an appropriate continuation (one in which the line just read is gone) to pass to the next call. Given your understanding of continuations, it may be more appropriate for you to think in terms of not do but the underlying combinations of return and >>= that do is syntactic sugar for. It may also help to consider monads other than IO, for example the input and output monads described in Section 7.3 of Philip Wadler's article "Comprehending Monads". I suspect that your understanding described in the paragraph above is essentially correct. You can also find more information on the relation between continuations and monads in: - Section 7.4 of "Comprehending Monads", - Andrzej Filinski's articles "Representing Monads" and "Representing Layered Monads", and his dissertation "Controlling Effects", - Philip Wadler's article "Monads and Composable Continuations", - Section 3.2 of Philip Wadler's article "How to Declare an Imperative". -- Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig 2005-11-06 Against Exploiting the Environment in War http://tinyurl.com/adhg9 2005-11-20 Universal Children's Day http://www.un.org/depts/dhl/children_day/ 2005-11-25 Elimination of Violence Against Women http://tinyurl.com/drd57 2005-11-25 Buy Nothing Day http://www.buynothingday.co.uk/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell

### Re: [Haskell] Monads vs. continuations

On 31/10/05, Gregory Woodhouse [EMAIL PROTECTED] wrote: Newbie alert: I have some superficial familiarity with continuations as they occur in traditional denotational semantics, but certainly not a deep understanding. I also have a special interest in distributed and/or concurrent processing, so it only seemed natural to me that the monad concept was one I'd want to confront head on. Now, I warn you: I am quite new to Haskell (though I have some prior exposure to Scheme and a background in mathematics). Now, I've just started reading through Thompson's text and came across this example:

    goUntilEmpty :: IO ()
    goUntilEmpty = do line <- getLine
                      if (line == [])
                        then return ()
                        else (do putStrLn line
                                 goUntilEmpty)

Okay, this seems sensible enough. Loosely speaking, I see this code as getting a line, checking to see if it's empty. If it is, it just quits (returning the empty value). Otherwise, it prints line, and invokes itself through a *new* do statement. That seems awfully like using a continuation to track the input stream as part of the environment. But it seems obvious to me that there is something I'm not understanding here. I think of the do as providing an appropriate continuation (one in which the line just read is gone) to pass to the next call. Okay, maybe I'm taking things a bit too literally here, but I seem to recall that a monad is an algebraic object with right and left identity and an associative composition. I understand the monad here takes a value (()) and returns an object IO (), and do becomes a functor of sorts, taking ordinary functions and mapping them to new functions having their codomain in a new category (an instance of a monad?) This is where it seems to me that I must be getting the terminology wrong. Can someone help me out here? Perhaps you're referring to a monoid.
Since you seem to have some familiarity with category theory, check out http://en.wikipedia.org/wiki/Monad_%28category_theory%29 for a formal definition of monads and some background. Translating between notations, μ = join and η = return in Haskell. The application of the functor T to functions is called fmap, or liftM, which should always be equivalent. The functor behind a monad is always an endofunctor, that is, from the category to itself. In this case, you'll be interested in the category of Haskell types and Haskell-definable functions between them. For a much gentler description and one way in which monads relate to programming, check out http://www.haskell.org/hawiki/MonadsAsContainers which is an article that I wrote. For a different perspective on the programming aspects than presented by my article (both are important in practice) check out http://www.nomaware.com/monads/html/ Do notation is a shorthand for a bunch of algebraic operations which make the code look like imperative code. Desugaring goUntilEmpty results in something along the lines of:

    goUntilEmpty :: IO ()
    goUntilEmpty =
      getLine >>= \line ->
        if line == []   -- better written as null line
          then return ()
          else putStrLn line >>= \x -> goUntilEmpty

where x >>= f = join (fmap f x), to write it in terms of the operations defined on wikipedia. Hope these links are useful to you :) - Cale ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
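Cale's last identity, x >>= f = join (fmap f x), can be checked concretely in the list monad; the example values below are mine, chosen only for illustration:

```haskell
import Control.Monad (join)

-- A function producing several results per input, for the list monad.
f :: Int -> [Int]
f x = [x, x * 10]

-- The two sides of the identity  x >>= f = join (fmap f x).
lhs, rhs :: [Int]
lhs = [1, 2, 3] >>= f
rhs = join (fmap f [1, 2, 3])
-- both evaluate to [1,10,2,20,3,30]
```

fmap applies f to each element, giving a list of lists, and join flattens it, which is exactly what >>= does in one step.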

### Re: [Haskell] Monads vs. continuations

I don't know if this helps, but there's a straightforward way to understand the IO monad in terms of continuation passing. You can think of a value of type IO a as being a CPS expression with a hole in it; the hole is to be filled with a continuation which expects a value of type a. The only way to fill the hole is by using >>=, whose second argument is a continuation with another (nested) hole in it. So effectively with >>= you build a CPS expression from the outside in. The final continuation, which takes () and aborts the program, is ultimately filled in by the runtime system. This viewpoint doesn't work for other monads, since they always provide some sort of destructuring operations on monadic values, e.g. runState or the standard list deconstructors. But it works fine for IO provided you ignore the existence of unsafePerformIO and friends. -- Ben ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
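Ben's "CPS expression with a hole in it" picture can be made concrete with a toy continuation monad. This is the standard CPS construction, sketched here for illustration rather than taken from his post; >>= nests the next continuation into the hole, and the final continuation is supplied only when the whole expression is run:

```haskell
-- A computation producing an 'a' is a CPS expression whose hole
-- awaits a continuation (a -> r).
newtype Cps r a = Cps { runCps :: (a -> r) -> r }

instance Functor (Cps r) where
  fmap f (Cps c) = Cps (\k -> c (k . f))

instance Applicative (Cps r) where
  pure a = Cps (\k -> k a)
  Cps cf <*> Cps ca = Cps (\k -> cf (\f -> ca (k . f)))

instance Monad (Cps r) where
  -- (>>=) builds the expression from the outside in: its second
  -- argument is the continuation that fills the first one's hole.
  Cps c >>= f = Cps (\k -> c (\a -> runCps (f a) k))

example :: Cps r Int
example = do
  a <- pure 3
  b <- pure 4
  return (a + b)
```

Running `runCps example id` plugs `id` into the outermost hole, playing the role of the final continuation that the runtime system would supply for IO, and yields 7.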

### Re: [Haskell] Monads vs. continuations

On Oct 31, 2005, at 3:02 AM, Cale Gibbard wrote: Perhaps you're referring to a monoid. Since you seem to have some familiarity with category theory, check out http://en.wikipedia.org/wiki/Monad_%28category_theory%29 for a formal definition of monads and some background. Translating between notations, μ = join and η = return in Haskell. The application of the functor T to functions is called fmap, or liftM, which should always be equivalent. The functor behind a monad is always an endofunctor, that is, from the category to itself. In this case, you'll be interested in the category of Haskell types and Haskell-definable functions between them. This was actually quite helpful. If someone had told me that a monad was a functor in the first place, this would all have been much less mysterious. (BTW, I have indeed encountered adjoint functors, Hom and Tensor, in the context of algebraic topology, so the article did leave me with a bit more of a sense of having my feet on the ground.) For a much gentler description and one way in which monads relate to programming, check out http://www.haskell.org/hawiki/MonadsAsContainers which is an article that I wrote. This is an excellent article! I found it extremely useful and lucid. To be sure, it's going to take a bit more time to let all these ideas sink in, but I liked the approach of focusing on fmap and join (>>= always seemed mysterious). Believe it or not, though I've read that lists are monads, this is perhaps the first time I've seen it spelled out just how. === Gregory Woodhouse [EMAIL PROTECTED] It is foolish to answer a question that you do not understand. --G. Polya (How to Solve It) ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell

### Re: [Haskell] Monads vs. continuations

On Oct 31, 2005, at 9:37 AM, Ben Rudiak-Gould wrote: I don't know if this helps, but there's a straightforward way to understand the IO monad in terms of continuation passing. You can think of a value of type IO a as being a CPS expression with a hole in it; the hole is to be filled with a continuation which expects a value of type a. The only way to fill the hole is by using >>=, whose second argument is a continuation with another (nested) hole in it. So effectively with >>= you build a CPS expression from the outside in. The final continuation, which takes () and aborts the program, is ultimately filled in by the runtime system. This viewpoint doesn't work for other monads, since they always provide some sort of destructuring operations on monadic values, e.g. runState or the standard list deconstructors. But it works fine for IO provided you ignore the existence of unsafePerformIO and friends. -- Ben No, that's definitely helpful. The analogy in the case of IO was obvious, but I had a sense that it was getting in the way of understanding what a monad is (i.e., by leading me to focus on the wrong issues). === Gregory Woodhouse [EMAIL PROTECTED] One must act on what has not yet happened. --Lao Tzu ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell

### [Haskell] Monads vs. continuations

Newbie alert: I have some superficial familiarity with continuations as they occur in traditional denotational semantics, but certainly not a deep understanding. I also have a special interest in distributed and/or concurrent processing, so it only seemed natural to me that the monad concept was one I'd want to confront head on. Now, I warn you: I am quite new to Haskell (though I have some prior exposure to Scheme and a background in mathematics). Now, I've just started reading through Thompson's text and came across this example:

    goUntilEmpty :: IO ()
    goUntilEmpty = do line <- getLine
                      if (line == [])
                        then return ()
                        else (do putStrLn line
                                 goUntilEmpty)

Okay, this seems sensible enough. Loosely speaking, I see this code as getting a line, checking to see if it's empty. If it is, it just quits (returning the empty value). Otherwise, it prints line, and invokes itself through a *new* do statement. That seems awfully like using a continuation to track the input stream as part of the environment. But it seems obvious to me that there is something I'm not understanding here. I think of the do as providing an appropriate continuation (one in which the line just read is gone) to pass to the next call. Okay, maybe I'm taking things a bit too literally here, but I seem to recall that a monad is an algebraic object with right and left identity and an associative composition. I understand the monad here takes a value (()) and returns an object IO (), and do becomes a functor of sorts, taking ordinary functions and mapping them to new functions having their codomain in a new category (an instance of a monad?) This is where it seems to me that I must be getting the terminology wrong. Can someone help me out here? === Gregory Woodhouse [EMAIL PROTECTED] The most incomprehensible thing about the world is that it is at all comprehensible. --Albert Einstein (1879-1955) ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell

### Re: no continuations

In article [EMAIL PROTECTED], I wrote: I don't believe you. My implementation uses Haskell's mfix, which looks like a Y to me. I certainly don't use anything like set!. Actually, on looking at the code for my monad, it turns out I do. I spent a while trying to figure out how to make a monad that could lift IO, and was also an instance of both MonadCont (so I could do call-with-current-continuation) and MonadFix (so I could do letrec the way I wanted it). I use CPS, and my implementation of mfix actually uses newIORef, writeIORef and readIORef directly. But I'd forgotten... -- Ashley Yakeley, Seattle WA ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: no continuations

In article [EMAIL PROTECTED], [EMAIL PROTECTED] wrote:

    (let ((cont #f))
      (letrec ((x (call-with-current-continuation
                    (lambda (c) (set! cont c) 0)))
               (y (call-with-current-continuation
                    (lambda (c) (set! cont c) 0))))
        (if cont
            (let ((c cont))
              (set! cont #f)
              (set! x 1)
              (set! y 1)
              (c 0))
            (+ x y))))

Could you tell me what this test returns on your system? It causes hscheme to exit silently. Very odd. I'll try to fix it, but I suspect it's something fundamental to my design, and connected to precisely these issues. -- Ashley Yakeley, Seattle WA ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: no continuations

I tried the following letrec correctness test using your interpret.php, and unfortunately, the interpreter returned no answer.

    (let ((cont #f))
      (letrec ((x (call-with-current-continuation
                    (lambda (c) (set! cont c) 0)))
               (y (call-with-current-continuation
                    (lambda (c) (set! cont c) 0))))
        (if cont
            (let ((c cont))
              (set! cont #f)
              (set! x 1)
              (set! y 1)
              (c 0))
            (+ x y))))

Could you tell me what this test returns on your system? Now, why the exact implementation of letrec is important. Let us consider the following code that involves a non-deterministic choice.

    (define *k* #f)

    ;; A rough-and-dirty amb combinator. It's easier to write it
    ;; than to look it up...
    (define (amb alt1 alt2)
      (call-with-current-continuation
        (lambda (k)
          (set! *k* (lambda () (set! *k* #f) (k alt2)))
          alt1)))

    (define (fail) (*k*))

    (display
      (letrec ((val1 5)
               (proc (amb (lambda () (display "In first choice") (newline) (fail))
                          (lambda () (display "The second choice") (newline) 42)))
               (val2 7))
        (let ((old-vals (list val1 val2)))
          (set! val1 '*bad*)
          (set! val2 '*bad*)
          (list old-vals (proc)))))

So, we bind val1 to 5, val2 to 7, and proc to the first choice. We proceed to evaluate the body of letrec with the first choice. We mutate val1 and val2, and evaluate our first choice, which didn't work out. So, we try the second choice. The correct implementation of letrec (e.g., Petite Chez Scheme, SCM) will *restore* the values of val1 and val2! That is, the changes made during the evaluation of the first choice will be backed out, and we start the second choice using the same original values of val1 and val2. Choices must indeed be evaluated in the same environment, otherwise they can't be called non-deterministic. So, if we evaluate the test on a conforming Scheme implementation, we get

    In first choice
    The second choice
    ((5 7) 42)

Alas, many Scheme systems do not implement letrec correctly.
Therefore, when we try the program on one of these systems (e.g., Gambit, Bigloo, Scheme48), we see

    In first choice
    The second choice
    ((*bad* 7) 42)

A sad interaction between the choices. ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: no continuations

In article [EMAIL PROTECTED], [EMAIL PROTECTED] wrote: Similarly, R5RS obligates any Scheme implementation to resort to assignments when processing a letrec form. Not mine! I do use a polyvariadic fixed-point function.

    (define circular (letrec ((c (cons 'x c))) c))
    (list-head circular 10)  =>  (x x x x x x x x x x)

Try it yourself at http://hscheme.sourceforge.net/interpret.php. I also make the fixed-point function available as call-with-result; it's more or less equivalent to this:

    (lambda (f) (letrec ((x (f x))) x))

An implementation may not use a (polyvariadic) Y to implement letrec, unless the implementation can prove that the difference is unobservable for the form in question. Do you have an example of use of Y for letrec where a program would violate R5RS? -- Ashley Yakeley, Seattle WA ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: no continuations

Ashley Yakeley wrote: In article [EMAIL PROTECTED], [EMAIL PROTECTED] wrote: Similarly, R5RS obligates any Scheme implementation to resort to assignments when processing a letrec form. Not mine! I do use a polyvariadic fixed-point function. An implementation may not use a (polyvariadic) Y to implement letrec, unless the implementation can prove that the difference is unobservable for the form in question. Do you have an example of use of Y for letrec where a program would violate R5RS? http://groups.google.com/groups?selm=976rij%24jd1%241%40news.gte.com In this post to c.l.scheme, Dorai Sitaram writes: letrec with set! is certainly different from letrec with Y, and you don't need call/cc to distinguish the two.

    (define *keep-track* '())

    (letrec ((fact (lambda (n)
                     (set! *keep-track* (cons fact *keep-track*))
                     (if (= n 0)
                         1
                         (* n (fact (- n 1)))))))
      (fact 8))

and then do

    (eq? (car *keep-track*) (cadr *keep-track*))

If letrec is set!-based (as in Scheme), the result is #t. If it is Y-based, the result is #f. Why this is should be obvious if you mentally (or with pencil) trace what Y does. Scheme's letrec defines recursive procedures by making the lexical variable bound to a recursive procedure whose body contains the references to the same lexical variable. In other words, data recursion in the underlying environment is used to represent the recursive procedure perceived by the user. The fixed-point approach does not (and clearly cannot) do that. There is no wrong choice in the sense that alternative choices were cut off. Users have enough machinery to define their preferred version of letrec using syntactic extension. But the letrec that comes with Scheme is an extremely good and pragmatic one, and is more efficient than a Y-based letrec could be expected to be. --d HTH, /david ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### RE: no continuations

On Friday, January 09, 2004 2:48 AM, Ashley Yakeley [SMTP:[EMAIL PROTECTED]] wrote: Do you have an example of use of Y for letrec where a program would violate R5RS? Sure, take a look at my implementation of Ben Rudiak-Gould's implementation of Alan Bawden's implementation of boxes. In 4.2.2 of R5RS, it says, re letrec: Semantics: The variables are bound to fresh locations holding undefined values, the inits are evaluated in the resulting environment (in some unspecified order), each variable is assigned to the result of the corresponding init, the body is evaluated in the resulting environment, and the value(s) of the last expression in body is(are) returned. Each binding of a variable has the entire `letrec' expression as its region, making it possible to define mutually recursive procedures. The result of the corresponding init is *assigned to* each variable (anyone know why the wording is backward above?), and that is after the inits are evaluated, which is after the variables are bound. There was a discussion on comp.lang.scheme a couple of years ago about this. http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&th=fdcf3554852a3cad&seekm=3AC66F16%40MailAndNews.com#link1 http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&th=a47e03e456b2dc2a&seekm=200102220358.TAA77339%40adric.cs.nps.navy.mil#link1 http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&th=58a686525be78d16&rnum=1 ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: no continuations

In article [EMAIL PROTECTED], [EMAIL PROTECTED] wrote: In this post to c.l.scheme, Dorai Sitaram writes: letrec with set! is certainly different from letrec with Y, and you don't need call/cc to distinguish the two.

    (define *keep-track* '())

    (letrec ((fact (lambda (n)
                     (set! *keep-track* (cons fact *keep-track*))
                     (if (= n 0)
                         1
                         (* n (fact (- n 1)))))))
      (fact 8))

and then do

    (eq? (car *keep-track*) (cadr *keep-track*))

If letrec is set!-based (as in Scheme), the result is #t. If it is Y-based, the result is #f. Why this is should be obvious if you mentally (or with pencil) trace what Y does. Does Haskell mfix count as Y? My implementation is mfix-based, and the above code returns 40320 #t. Try it yourself at http://hscheme.sourceforge.net/interpret.php if you don't believe me. I'd be very interested to know if my implementation of Scheme varies from R5RS due to this issue. -- Ashley Yakeley, Seattle WA ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
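For comparison on the Haskell side (my sketch, not code from the thread): mfix in IO ties the recursive knot lazily, so the factorial sees exactly one shared binding of itself, which is consistent with the 40320 result above:

```haskell
import Control.Monad.Fix (mfix)

-- Build a recursive factorial through mfix: the function passed to
-- mfix receives the eventual result ('fact') as its own argument,
-- and laziness lets the returned lambda refer to it safely.
factM :: IO (Integer -> Integer)
factM = mfix (\fact ->
  return (\n -> if n == 0 then 1 else n * fact (n - 1)))
```

Running `factM >>= \f -> print (f 8)` prints 40320; no assignment is involved, only the lazily tied recursive binding.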

### Re: no continuations

Ashley Yakeley wrote: Similarly, R5RS obligates any Scheme implementation to resort to assignments when processing a letrec form. Not mine! I do use a polyvariadic fixed-point function. I'm sorry, but you don't have the choice in the matter -- if you wish to call your implementation R5RS-compliant. R5RS _requires_ letrec to use assignments. The latter issue has been extensively discussed on comp.lang.scheme, in Amr Sabry's technical report "Recursion as a computational effect" and in many other places. Here's the exact quote from R5RS, Section 4.2.2, Semantics [of letrec]: The variables are bound to fresh locations holding undefined values, the inits are evaluated in the resulting environment (in some unspecified order), each variable is assigned [sic!] to the result of the corresponding init, the body is evaluated in the resulting environment, and the value(s) of the last expression in body is(are) returned.

    (define circular (letrec ((c (cons 'x c))) c))

I'm afraid that is not R5RS-compliant code. R5RS states (in the same Section 4.2.2): One restriction on letrec is very important: it must be possible to evaluate each init without assigning or referring to the value of any variable. If this restriction is violated, then it is an error. In the quoted code, the init is (cons 'x c), and it is impossible to evaluate that expression according to the semantics of Scheme without referring to the value of the variable c. If it were

    (define circular (letrec ((c (cons 'x (delay c)))) c))

then there would be no contradiction with R5RS. Do you have an example of use of Y for letrec where a program would violate R5RS? http://google.com/groups?selm=7eb8ac3e.0304131423.4f103d4f%40posting.google.com The difference between the Y and set! approaches to letrec *is* observable. I must state that exact conformance to the R5RS semantics of letrec is important -- for example, for code that uses the non-deterministic choice operator 'amb' or for code that uses shift/reset.
Otherwise, bizarre behavior can occur -- and has occurred. I can send you an example privately. ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe

### Re: no continuations

In article [EMAIL PROTECTED], [EMAIL PROTECTED] wrote: (define circular (letrec ((c (cons 'x c))) c)) I'm afraid that is not R5RS-compliant code. Indeed not; it merely demonstrates fixed-point behaviour. Nevertheless, allowing this as an extension does not make my implementation non-compliant. See section 1.3.2 on this point. Do you have an example of use of Y for letrec where a program would violate R5RS? http://google.com/groups?selm=7eb8ac3e.0304131423.4f103d4f%40posting.google.com The difference between the Y and set! approaches to letrec *is* observable. I don't believe you. My implementation uses Haskell's mfix, which looks like a Y to me. I certainly don't use anything like set!. But my implementation passes Dorai Sitaram's test:

    (define *keep-track* '())

    (letrec ((fact (lambda (n)
                     (set! *keep-track* (cons fact *keep-track*))
                     (if (= n 0)
                         1
                         (* n (fact (- n 1)))))))
      (fact 8))

    (eq? (car *keep-track*) (cadr *keep-track*))

My implementation returns 40320 #t ... which is apparently correct behaviour for R5RS. Indeed, I get exactly the same result in mzscheme and guile. Again, I encourage you to try for yourself at http://hscheme.sourceforge.net/interpret.php (though it's a bit slow). -- Ashley Yakeley, Seattle WA ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe