It has been brought to my attention (as errata editor of the revised
H'98 report) that there is a bug in the language definition,
concerning strictness annotations on datatypes.
In section 4.2.1, the translation of strict components of a data
constructor is defined as
(\ x1 .
Simon Marlow wrote:
I agree with you. And that is how it used to be, but then
some people didn't think that was convenient enough so now
we are stuck with a seq that (IMHO) stinks. :)
Having a seq that works on anything is occasionally very useful for
fixing space leaks, and the type class ve
On 05 October 2005 17:05, Lennart Augustsson wrote:
> Wolfgang Jeltsch wrote:
>> Am Mittwoch, 5. Oktober 2005 16:22 schrieb Simon Marlow:
>>
>>> [...]
>>
>>
>>> Also, GHC's optimiser currently treats (_|_ :: IO a) and (do _|_;
>>> return ()) as interchangeable, which is naughty, and people have
Wolfgang Jeltsch wrote:
Am Mittwoch, 5. Oktober 2005 16:22 schrieb Simon Marlow:
[...]
Also, GHC's optimiser currently treats (_|_ :: IO a) and (do _|_; return
()) as interchangeable, which is naughty, and people have occasionally
noticed, but the benefits can sometimes be huge. It is this
Am Mittwoch, 5. Oktober 2005 17:01 schrieb Simon Marlow:
> On 05 October 2005 15:46, Ross Paterson wrote:
> > On Wed, Oct 05, 2005 at 03:22:29PM +0100, Simon Marlow wrote:
> >> Also, GHC's optimiser currently treats (_|_ :: IO a) and (do _|_;
> >> return ()) as interchangeable, which is naughty, an
Am Mittwoch, 5. Oktober 2005 16:22 schrieb Simon Marlow:
> [...]
> Basically anything for which the report doesn't give the full code, except
> of course primitives which usually must be strict.
Why must primitives be strict? I wouldn't consider putChar undefined an
undefined action. In my opi
Am Mittwoch, 5. Oktober 2005 16:22 schrieb Simon Marlow:
> [...]
> Also, GHC's optimiser currently treats (_|_ :: IO a) and (do _|_; return
> ()) as interchangeable, which is naughty, and people have occasionally
> noticed, but the benefits can sometimes be huge. It is this distinction
> that mak
On Wed, Oct 05, 2005 at 04:01:09PM +0100, Simon Marlow wrote:
> No, of course I don't expect the monad laws to hold :)
>
> But the intended meaning of
>
> (do _|_; return () :: IO ()) `seq` True
>
> is True, not _|_, right? This isn't made explicit in the report, but
> it's how we all
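The intended reading can be sketched with a hypothetical IO-like type (all names here are ours, not the report's): bind wraps its result in a constructor immediately, so even a bind applied to _|_ is a defined value until the action is actually run.

```haskell
-- Hypothetical IO-like type: an action is just a state-transformer value.
data Act a = Act (Int -> (a, Int))

-- bind builds a new Act at once; it never inspects m until the action runs.
bindAct :: Act a -> (a -> Act b) -> Act b
bindAct m k = Act (\s -> case m of
                           Act f -> let (a, s') = f s
                                    in case k a of Act g -> g s')

retAct :: a -> Act a
retAct x = Act (\s -> (x, s))

-- (undefined `bindAct` \_ -> retAct ()) `seq` True is True: seq finds
-- the Act constructor without ever running the underlying transformer.
main :: IO ()
main = print ((undefined `bindAct` \_ -> retAct ()) `seq` True)
```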
On 05 October 2005 15:46, Ross Paterson wrote:
> On Wed, Oct 05, 2005 at 03:22:29PM +0100, Simon Marlow wrote:
>> Also, GHC's optimiser currently treats (_|_ :: IO a) and (do _|_;
>> return ()) as interchangeable, which is naughty, and people have
>> occasionally noticed, but the benefits can some
On Wed, Oct 05, 2005 at 03:22:29PM +0100, Simon Marlow wrote:
> Also, GHC's optimiser currently treats (_|_ :: IO a) and (do _|_; return
> ()) as interchangeable, which is naughty, and people have occasionally
> noticed, but the benefits can sometimes be huge.
What's wrong with identifying them?
> Looking at GHC's library code we see that it is indeed forcing the
> char early:
>
> hPutChar :: Handle -> Char -> IO ()
> hPutChar handle c =
> c `seq` do
> ...
Fixed. However, I have a hunch that there are a *lot* of library
functions whose strictness is
On Tue, 2005-10-04 at 13:46 +0100, Malcolm Wallace wrote:
> I wrote:
>
> > > ghc:
> > > putChar _|_ -> _|_
> > >
> > > hugs:
> > > putChar _|_ -> valid IO ()
> >
> > I think it comes down to buffering behaviour doesn't it?
>
> Having reviewed the IRC logs, I see I was talking nonsense.
>
> You
I wrote:
> > ghc:
> > putChar _|_ -> _|_
> >
> > hugs:
> > putChar _|_ -> valid IO ()
>
> I think it comes down to buffering behaviour doesn't it?
Having reviewed the IRC logs, I see I was talking nonsense.
You want to be able to store a closure for (putChar undefined) in a
data structure, whi
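The point about storing a closure can be demonstrated directly (a small sketch, not from the thread): nothing about (putChar undefined) fails until the action is actually executed.

```haskell
-- Storing the unexecuted action (putChar undefined) in a data structure
-- is harmless in a non-strict language; only running it can fail.
main :: IO ()
main = do
  let actions = [putChar 'a', putChar undefined, putChar 'b']
  print (length actions)   -- forces the list spine, not the actions
  head actions             -- runs only putChar 'a'
  putChar '\n'
```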
John Meacham <[EMAIL PROTECTED]> writes:
> ghc:
> putChar _|_ -> _|_
> putStr _|_ -> valid IO ()
>
> hugs:
> putChar _|_ -> valid IO ()
> putStr _|_ -> valid IO ()
I think it comes down to buffering behaviour doesn't it? Should the
character be evaluated when it is added to the output buffer,
The report does not seem to specify whether
putChar _|_ is _|_ or not. (although it might be implied somewhere I
didn't see)
I originally noticed this when jhc treated it as so, and I considered
this a bug, since the argument should not be evaluated until the action
is actually executed.
however,
Ben Lippmeier <[EMAIL PROTECTED]> writes:
> To gloss over details: it'll reduce x far enough so it knows that it's
> an Integer, but it won't necessarily compute that integer's value.
No, Integers don't contain any lazy components.
It statically knows that it's an integer.
Gary Morris wrote:
ioexptmod :: Integer -> Integer -> Integer -> Int -> IO Integer
ioexptmod base expt n keySize = return $! exptmod base expt n keySize
My hope was that the use
of $! would force it to compute the exponentiation while I was timing
-- and the average times are around 30K cl
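A sketch of the poster's pattern (exptmod below is a hypothetical stand-in, not the original code): return $! forces the result to WHNF at the moment the IO action runs, so the exponentiation happens inside the timed region. For Integer, WHNF is the fully computed number, so $! suffices here.

```haskell
-- Hypothetical stand-in for the poster's modular exponentiation.
exptmod :: Integer -> Integer -> Integer -> Integer
exptmod base expt n = (base ^ expt) `mod` n

-- return $! e evaluates e before the IO action delivers its result.
ioexptmod :: Integer -> Integer -> Integer -> IO Integer
ioexptmod base expt n = return $! exptmod base expt n

main :: IO ()
main = print =<< ioexptmod 7 128 1000   -- 801
```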
Hello everyone,
I've been playing with implementing the Kocher attacks on RSA in
Haskell. For the simplest version, I decided to implement the
exponentiation in the same module. However, my initial tests suggest
that the times don't have any correlation with the operations being
performed. I'm
Hello,
I looked at the implementation of Writer, WriterT, State, StateT, RWS and
RWST. They all use tuples to knit the result with the written value and/or
state.
Now, there seems to be an inconsistency between the transformer and
non-transformer variants concerning strictness. The non
> Is there a good reason one can't do:
> data Foo = Foo {bar::!String}
Just add a space after the ::
{bar:: !String}
J.A.
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell
Is there a good reason one can't do:
data Foo = Foo {bar::!String}
-Alex-
S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com
bug. You may have seen this message while
loading the source containing the strict constructor definition:
WARNING: ignoring polymorphic case in interpreted mode.
Possibly due to strict polymorphic/functional constructor args.
Your program may leak space unexpectedly.
which means that GHCi
Pattern matching on K is not affected by
strictness flags.
so. we define:
> data L = L Int deriving Show
> data S = S !Int deriving Show
and, as expected, we get:
*Strict> L undefined
L *** Exception: Prelude.undefined
*Strict> L $! undefined
*** Exception: Prelude.undefined
*Strict>
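The session above can also probe the strict constructor S as a standalone program (a sketch; probe is our own helper): a lazy field survives construction, while a ! field is forced as soon as the constructor itself reaches WHNF.

```haskell
import Control.Exception (SomeException, evaluate, try)

data L = L Int
data S = S !Int

-- Forces x to WHNF and reports whether that hit bottom.
probe :: a -> IO String
probe x = do
  r <- try (evaluate x >> return ()) :: IO (Either SomeException ())
  return (case r of Left _  -> "bottom"
                    Right _ -> "defined")

main :: IO ()
main = do
  putStrLn =<< probe (L undefined)   -- "defined": lazy field untouched
  putStrLn =<< probe (S undefined)   -- "bottom": ! forces the field
```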
> And in the olden days (Before Haskell) there was:
>
> Kewley and Glynn1989
> J.M. Kewley and K. Glynn.
> Evaluation Annotations for Hope+.
> In Glasgow Workshop on Functional Programming, Workshops in Computing,
> pages 329-337, Fraserburgh, Scotland, 1989. Springer-Verlag.
Now r
Ennals writes:
>
> I know this is slightly off topic but...
>
>
> Does anyone know if there are any papers published anywhere on strictness
> annotations?
>
>
> It seems that it would be nice to be able to cite a paper on the concept, but,
> as far as I
Robert Ennals wrote:
> Although it proposes them for Haskell, it says itself that they were already
> present in Clean
Section 8.3 (6 pages) of Plasmeijer and van Eekelen's book
Functional Programming and Parallel Graph Rewriting
Addison-Wesley
is devoted to (Clean-like)
> Robert,
>
> Strictness annotations were proposed for Haskell in the paper
> "Implementing Haskell Overloading" by Lennart Augustsson.
>
> http://citeseer.nj.nec.com/augustsson93implementing.html
>
> It only has a small section on strictness annotations but
Robert,
Strictness annotations were proposed for Haskell in the paper
"Implementing Haskell Overloading" by Lennart Augustsson.
http://citeseer.nj.nec.com/augustsson93implementing.html
It only has a small section on strictness annotations but this is as close
as I can get to yo
I know this is slightly off topic but...
Does anyone know if there are any papers published anywhere on strictness
annotations?
It seems that it would be nice to be able to cite a paper on the concept, but,
as far as I can tell, no such paper exists.
-Rob
Jay Cox <[EMAIL PROTECTED]> writes:
> On Thu, 14 Mar 2002, Brian Huffman wrote:
>
> > In Haskell you can produce the desired behavior by using pattern guards.
> > Since the pattern guards always get evaluated before the result does, they
> > can be used to make things more strict. Here is the fo
On Saturday, March 16, 2002, 03:16 CET Jay Cox wrote:
> [...]
> I think I may eventually attempt to write a Haskell laziness/strictness FAQ.
Great! I'm very interested in it.
> [...]
Wolfgang
___
Haskell mailing list
[EMAIL P
Alright. I know the Haskell community probably gets tired of my long-winded
posts. This post probably shouldn't even be on
[EMAIL PROTECTED] (more like on haskell-cafe). I also realize that
these posts may not mean much to you; many of you may have figured out
most of this strictness bus
matt hellige writes (to the haskell mailing list):
>[..] consider:
>sum 0 x = x
>sum x y = x + y
>
>if the first argument is 0, we don't need to inspect the second
>argument at all.
But sum returns its second argument, so it's still strict in that
argument.
Cheers,
Ronny Wichers Schr
On Thu, 14 Mar 2002, Brian Huffman wrote:
> In Haskell you can produce the desired behavior by using pattern guards.
> Since the pattern guards always get evaluated before the result does, they
> can be used to make things more strict. Here is the foldl example:
>
> strict x = seq x True
>
> fold
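The quoted technique can be completed as a sketch: because guards are evaluated before the right-hand side, `strict z` forces the accumulator at every step, so no long chain of suspended applications builds up.

```haskell
strict :: a -> Bool
strict x = seq x True

-- foldl made stricter via pattern guards (sketch of the technique
-- quoted above; names are ours).
foldlGuarded :: (b -> a -> b) -> b -> [a] -> b
foldlGuarded f z (x:xs) | strict z = foldlGuarded f (f z x) xs
foldlGuarded _ z []     | strict z = z

main :: IO ()
main = print (foldlGuarded (+) 0 [1 .. 100000 :: Int])
```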
On Thu, 14 Mar 2002, Andrew Butterfield wrote:
> I think the Clean type system does stuff like this - it certainly supports
> strictness analysis and annotations:
> - see http://www.cs.kun.nl/~clean/ for more details
Thanks to both you and to Bernard James POPE for the repl
At 22:47 13/03/02 -0600, Jay Cox wrote:
>Perhaps what could be done about this strictness business is to make a
>kind of strictness annotation. Perhaps something that says (force the
>second argument of function F before every call to F (including any time F
>calls itself)).
>
> sumpeano not strict on first argument.
> define instance Num for Peano.
>
> I don't even know if you could talk about strictness in either argument
> with Church numerals. (And I'm too lazy to remind myself what a Church
> numeral looks like precisely so that I could find out.)
>
i su
At 22:47 13/03/02 -0600, Jay Cox wrote:
>Perhaps what could be done about this strictness business is to make a
>kind of strictness annotation. Perhaps something that says (force the
>second argument of function F before every call to F (including any time F
>calls itself)).
>
...
data Peano = Zero | Succ (Peano)
sumpeano blah (Succ x) = sumpeano (Succ blah) x
sumpeano blah Zero = blah
sumpeano is not strict on its first argument.
define instance Num for Peano.
I don't even know if you could talk about strictness in either argument
with Church numerals. (And I'm too l
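The claim can be checked: sumpeano _|_ (Succ Zero) reduces to Succ _|_, which is not _|_, so sumpeano is indeed not strict in its first argument. A runnable sketch:

```haskell
data Peano = Zero | Succ Peano

-- The poster's definition, verbatim.
sumpeano :: Peano -> Peano -> Peano
sumpeano blah (Succ x) = sumpeano (Succ blah) x
sumpeano blah Zero     = blah

-- The result reaches WHNF (a Succ) without the first argument ever
-- being inspected.
main :: IO ()
main = case sumpeano undefined (Succ Zero) of
         Succ _ -> putStrLn "WHNF reached without the first argument"
         Zero   -> putStrLn "unexpected"
```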
Both parallel and sequential computation must be carefully controlled to
produce good parallel and distributed Haskell programs. Several languages
including Glasgow parallel Haskell and Eden use *evaluation strategies*:
overloaded polymorphic functions to describe the amount of evaluation.
>
Yeah, I double-vote for deepSeq being part of the libraries or a
'blessed' extension. I would like to do things like deepSeq the abstract
tree of a compiled language then force a GC, thus making sure that the
original file text gets all cleaned up properly. deepSeq would be a much
nicer way of deal
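Since deepSeq is not in the libraries being discussed, here is a minimal class-based sketch of what is being asked for (names are ours; the shape matches what later became Control.DeepSeq's NFData):

```haskell
-- A deepSeq that recurses through structure, unlike seq's WHNF-only
-- evaluation.
class DeepSeq a where
  deepSeq :: a -> b -> b

instance DeepSeq Int where
  deepSeq = seq

instance DeepSeq a => DeepSeq [a] where
  deepSeq []     b = b
  deepSeq (x:xs) b = x `deepSeq` (xs `deepSeq` b)

main :: IO ()
main = putStrLn ([1, 2, 3 :: Int] `deepSeq` "fully evaluated")
```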
Dean Herington wrote:
>
> `seq` forces evaluation of only the top-level construct in its first
> argument. (($!) similarly for its second argument.) I would guess your
> "newcounts" are structured (probably a tuple or list), in which case you are
> not forcing evaluation deeply enough. See
> ht
Amanda Clare wrote:
>
> Dean Herington wrote:
> >
> > `seq` forces evaluation of only the top-level construct in its first
> > argument. (($!) similarly for its second argument.) I would guess your
> > "newcounts" are structured (probably a tuple or list), in which case you are
> > not forcing
Dean Herington wrote:
>
> `seq` forces evaluation of only the top-level construct in its first
> argument. (($!) similarly for its second argument.) I would guess your
> "newcounts" are structured (probably a tuple or list), in which case you are
> not forcing evaluation deeply enough. See
> h
`seq` forces evaluation of only the top-level construct in its first
argument. (($!) similarly for its second argument.) I would guess your
"newcounts" are structured (probably a tuple or list), in which case you are
not forcing evaluation deeply enough. See
http://haskell.org/pipermail/haskell
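The WHNF point can be seen in a two-line check (our own sketch): seq stops at the outermost constructor, so a pair of undefined components survives it.

```haskell
-- seq reaches only the (,) constructor; both components stay as
-- unevaluated thunks, so the undefineds are never touched.
main :: IO ()
main = do
  let p = (undefined, undefined) :: (Int, Int)
  putStrLn (p `seq` "pair in WHNF; components never forced")
```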
I have some code which is being unnecessarily lazy (and occupying too
much heap space). The code should read and process several files one by
one. What's happening is that all files get read in but the processing
is delayed by laziness, and the files are being retained. It looks
something like th
> Ratio defines
> data (Integral a) => Ratio a = !a :% !a
> which GHC seems to implement as specified, but nhc and hugs seem to use
> data (Integral a) => Ratio a = a :% a
> Does this not have different strictness properties?
It does. In nhc98's case, t
Hallo!
Does anybody know of a paper that describes ways how to force strict evaluation
at some places and lazy evaluation at others?
And I am also interested in a guideline when to use strict evaluation and when
lazy.
Any pointers appreciated!
Thanks,
Andreas
__
etc
Adding more twiddles means less eager matching. I don't know whether
Hugs implements this.
Simon
| -Original Message-
| From: S. Doaitse Swierstra [mailto:[EMAIL PROTECTED]]
| Sent: 01 March 2001 11:26
| To: [EMAIL PROTECTED]
| Subject: strictness question
|
|
| I ran into a differ
On Fri, Mar 02, 2001 at 06:58:16PM +, Marcin 'Qrczak' Kowalczyk wrote:
> Toplevel ~ in let doesn't change anything. But nested ~'s do make
> a difference. When a variable of a pattern is evaluated, the whole
> pattern is matched. When you protect a subpattern by ~ deferring its
> matching and
>Adding more twiddles means less eager matching. I don't know whether
>Hugs implements this.
>
>Simon
>
>| -Original Message-
>| From: S. Doaitse Swierstra [mailto:[EMAIL PROTECTED]]
>| Sent: 01 March 2001 11:26
>| To: [EMAIL PROTECTED]
>| Subject: strictn
Thu, 1 Mar 2001 12:25:33 +0100, S. Doaitse Swierstra <[EMAIL PROTECTED]> pisze:
> From the Haskell manual I understand that pattern matching in "let"'s
> should be done lazily, so the addition of a collection of ~'s should
> not make a difference.
Toplevel ~ in let doesn't change anything. But
I ran into a difference between GHC and Hugs. The following code:
f (P p) ~(P q) = P (\ k -> \inp ->
    let (((pv, (qv, r)), m), st) = p (q k) inp
    in  (((pv qv, r), m), st))
runs fine with Hugs but blows up with GHC, whereas:
f (P p) ~(P q) = P (\ k
I know it's complete heresy to say so, but I use laziness very
little in Haskell, while I probably pay quite a lot for it
in CPU time and memory, because of all those thunks which have to be
stored. However I prefer Haskell's type classes, syntax and
purity to, say, Standard ML. So I wonder whet
"C.Reinke" <[EMAIL PROTECTED]> writes:
> So foldl is indeed tail recursive, but this doesn't help if its
> operator isn't strict because the tail recursion only builds up the
> expression to be evaluated. Making strictness explicit by defining a
> variant o
nt chain, and that builds up
> a giant stack.
So foldl is indeed tail recursive, but this doesn't help if its
operator isn't strict because the tail recursion only builds up the
expression to be evaluated. Making strictness explicit by defining a
variant of foldl that evaluates its accum
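The variant being described can be written out directly (this is the foldl' that is now standard in Data.List; the name below is ours):

```haskell
-- A strict-accumulator foldl: seq forces the accumulator at every
-- step, so the tail recursion runs without building a thunk chain.
foldlStrict :: (b -> a -> b) -> b -> [a] -> b
foldlStrict _ acc []     = acc
foldlStrict f acc (x:xs) = let acc' = f acc x
                           in acc' `seq` foldlStrict f acc' xs

main :: IO ()
main = print (foldlStrict (+) 0 [1 .. 1000000 :: Int])
```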
> >> us that the semantic distinction of strictness versus nonstrictness
> >> should be our concern, rather than the operational notions of
> >> eagerness and laziness.
>
> "Frank A. Christoph" <[EMAIL PROTECTED]>:
> >Please elucidate. Where d
Joe Fasel wrote:
> Actually, I think we were originally thinking of laziness, rather
> than nonstrictness, and weren't considering languages like Id as
> part of our domain, but Arvind and Nikhil (quite correctly) convinced
> us that the semantic distinction of strictness ver
>Joe Fasel wrote:
>> Actually, I think we were originally thinking of laziness, rather
>> than nonstrictness, and weren't considering languages like Id as
>> part of our domain, but Arvind and Nikhil (quite correctly) convinced
>> us that the semantic
Frank Christoph wrote,
| Ah, right. Someone mentioned just recently (I forget who---sorry) that
| nothing in the Report forces a Haskell implementation to use call-by-need. I
| guess this is a manifestation of the change of direction, from laziness to
| non-strictness...?
My point was meant to
On 19-Jul-1999, Jan Brosius <[EMAIL PROTECTED]> wrote:
> will Haskell compiled programs be faster by using more strictness
> annotations;
Strictness annotations on functions don't help much, since the compilers
generally do a fine job of inferring strictness of functions.
But th
Hans Aberg tries to help me/JJGR :
> At 10:40 +0100 1999/06/07, Jerzy Karczmarczuk wrote:
> >When I tried (Hugs, +h4M, interactively, just show, no print)
> >with 1, it bombs on control stack overflow.
>
> Is this a Windows version? The thing is that on primitive OS's, a parameter
> stack ch
Juan Jose Garcia Ripoll <[EMAIL PROTECTED]> wrote:
> can anybody point me to tutorials, papers, etc, on how to properly
> annotate strictness in Haskell code? I am concerned with the following
> stupid piece of code that eats a lot of memory and takes an incredible
> amount
At 10:40 +0100 1999/06/07, Jerzy Karczmarczuk wrote:
>When I tried (Hugs, +h4M, interactively, just show, no print)
>with 1, it bombs on control stack overflow.
Is this a Windows version? The thing is that on primitive OS's, a parameter
stack check must often be implemented by hand in order t
Juan Jose Garcia Ripoll wrote:
> Hi,
>
> can anybody point me to tutorials, papers, etc, on how to properly
> annotate strictness in Haskell code? I am concerned with the following
> stupid piece of code that eats a lot of memory and takes an incredible
> amount of time to
is your *real* problem, your example is
of course artificial.
Anyway, if you ask about strictness annotations... I must say
that until today I managed to gain a little time to have some extra
beers thanks to the laziness rather than to strictness. Then of
course foldl is a little delicate...
Hans w
At 14:29 +0200 1999/06/05, Juan Jose Garcia Ripoll wrote:
>can anybody point me to tutorials, papers, etc, on how to properly
>annotate strictness in Haskell code? I am concerned with the following
>stupid piece of code that eats a lot of memory and takes an incredible
>amount of tim
Hi,
can anybody point me to tutorials, papers, etc, on how to properly
annotate strictness in Haskell code? I am concerned with the following
stupid piece of code that eats a lot of memory and takes an incredible
amount of time to produce some output. I hope somebody will help me in
finding what
Jeffrey R. Lewis wrote:
| Hmm... indeed. I wonder if there's any reason why zipWith can't just be fully lazy
| so that we don't need to twiddle with transpose. I.e., define it as:
|
| zipWith :: (a->b->c) -> [a]->[b]->[c]
| zipWith z ~(a:as) ~(b:bs) = z a b : zi
The other day, I tried to transpose an infinite list of finite list:
Simplified example:
transpose (repeat [1..5])
This won't terminate, since transpose is defined as
transpose :: [[a]] -> [[a]]
transpose = foldr
(\xs xss -> zipWith (:)
Jonas Holmerin wrote:
> The other day, I tried to transpose an infinite list of finite list:
> Simplified example:
>
> transpose (repeat [1..5])
>
> This won't terminate, since transpose is defined as
>
> transpose :: [[a]] -> [[a]]
> transpose = foldr
>
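Jeffrey Lewis's suggestion can be sketched standalone (names are ours): with ~ patterns, zipWith produces a cons cell before matching either argument, so a transpose built on it can start returning output from an infinite list of finite lists.

```haskell
-- Fully lazy zipWith: neither list is matched until an element of it
-- is actually demanded.
lazyZipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
lazyZipWith z ~(a:as) ~(b:bs) = z a b : lazyZipWith z as bs

-- A transpose in the foldr/zipWith style quoted above.
transpose' :: [[a]] -> [[a]]
transpose' = foldr (lazyZipWith (:)) (repeat [])

main :: IO ()
main = print (take 3 (head (transpose' (repeat [1 .. 5 :: Int]))))
```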
In all of this, I neglected to mention *why* I think unlifted tuples
are a good idea. I've given various reasons, but not the real one.
The real one is: Embarassment. I wrote an implementation of linear
logic in Haskell. It took a while before I discovered why my
implementation got into a loo
there is more than one possible constructor for the type.
Another solution, which generalizes an earlier proposal of mine for
strictness annotations, is as follows. We allow lifted, unlifted and
smash products. (In a smash product, if any component of a product is
_|_ then the entire product
> To correctly evaluate seq (x, y) 5 it would be necessary to concurrently
> evaluate x and y, since (x, y) is bottom if and only if both x and y are
> bottom. (I enjoy finding a flaw in Miranda because there are so few to
> be found!)
Another flaw: There is a seq hidden in foldl.
-
Theoretical arguments regarding the distinction between lifted vs
unlifted tuples (i.e., any type declaration with single disjunct) are
too esoteric for my taste. However, there are some practical reasons
to choose one over the other. In the Id implementation, no
distinction is made between li
I wrote:
|Thus, it would indeed be reasonable for the type of seq to determine
|that f x `seq` y is all right, whereas f `seq` y is not permissible.
|Similarly, I think it would be consistent to have unlifted products,
|but not give them data instances, so that (x,y) `seq` z is not allowed,
|
the type,
and also, bottom and (\x -> bottom) could probably be trivially
distinguished by an isFunction type predicate.)
Your implementation may well have different representations corresponding
to bottom and (\x -> bottom), but that's a far cry from saying that
they shouldn't abstrac
Paul Hudak notes:
|Similarly, given an equation:
|
| f (Foo x y) = y
|
|
|If Foo is strict in its first component then I can't use this equation
|at will; I need to qualify it:
|
f (Foo x y) = y    if x /= _|_
|
|
|(And again, the first equ
Strictness annotations are not annotations, since they change the
meaning of a program. Let's use the term strictness indicators.
As I mentioned in an earlier message to this mail group, with
> f (Pair a b) = b
the value of (f (Pair x 5)) may not be 5, when Pair involves st
Strictness annotations do not completely remove the need for unlifted
products. (However, on balance I am inclined to stay with lifted
products only, rather than add a new language feature.)
In a lifted product, bottom /= (bottom, bottom). That is, a new bottom
is added onto the product
Paul writes,
I think it's important to realize that laws aren't being entirely
lost -- they're just being weakened a (wee) bit, in the form of
carrying an extra constraint. For example, eta conversion:
\x -> f x = f
must simply be modified slightly:
\x -> f x = f    if f /= _|_
Phil writes:
> In the absence of convincing answers, I'd rather have as many laws
> as possible, hence my preference for unlifted tuples and products.
Here's another law that I find useful:
If we write
f p = p
where p is some pattern&expression then I expect f to be the identity
func
If Lennart was asking, `Shall we make laws a paramount design feature
of Haskell, and therefore go for unlifted tuples, unlifted functions,
and no n+k or literal patterns', my answer would be `let's go for it'.
But I suspect what Lennart is really asking is `Shall we ignore laws,
have lifted tup
Again I point out that despite our original intents, we still
need to reason about _|_'s -- in pattern-matching, for example --
and "strictness bugs" have become infamous (:-)!
-Paul
Paul and Phil write,
| What are the disadvantages of having a lifted function space?
|
| I think the main one is that we lose unrestricted eta
| conversion. But maybe that's not such a big deal either.
|
| We keep claiming that functional languages are good because they
| sa
I've separated this from my previous note, because it's about the
precise question of strictness annotations rather than the more general
question of laws.
I would rather tell someone that to define a new type exactly
isomorphic to an old type they need to write
new
Paul writes,
What are the disadvantages of having a lifted function space?
I think the main one is that we lose unrestricted eta
conversion. But maybe that's not such a big deal either.
We keep claiming that functional languages are good because they
satisfy lots of la
I would rather tell someone that to define a new type exactly
isomorphic to an old type they need to write
newtype Type = Constructor typeexp
then tell them that they need to write
data Type = Constructor !typeexp
The latter smacks too much of magic. This is clearly a m
plus constants. Maybe this is even more reason to put
strictness into the type system! :-)
Similarly, given an equation:
f (Foo x y) = y
If Foo is strict in its first component then I can't use this equation
at will; I need to qualify it:
f (Foo x y) = y    if x /= _|_
(And ag
(This is a message on strictness, etc. I was too busy to reply
earlier when the discussion first began).
Like Ian, I would like to suggest that we lift functions in Haskell.
Originally there was a good reason not to: there was no need (and
indeed no way) to distinguish _|_ from \x->_|_.
Following recent discussions about strictness annotations, and the
reservations people had about introducing them into standard Haskell, I
thought I would mention that there is another way of thinking about them that
might be helpful.
You can think of a type !t as meaning `t without _|_
I like the idea of having some way to force the evaluation of things in
a functional language. For example, it seems like a good idea to be
able to force both components of a complex number to be evaluated
always. However, I see one problem with strictness annotations in a
data declaration
> (because of previous discussion about the difficulty with strictness in function
> types), it does make perfect sense to say
>
> newtype New a b = MakeNew (a->b)
>
> In short, using a strictness "annotation" (not really an annotation anyway,
> since it chang
>But just because they call it `lazy' doesn't mean that it really is
>the essence of laziness. I prefer to use the more neutral name `lifted
>lambda calculus' for their calculus.
I disagree. In the simplest case (just lambdas, variables and applications,
i.e. no explicit constructors), it is *
Gerald Ostheimer notes that in Abramsky and Ong's lazy lambda calculus
that (\x -> bottom) differs from bottom. That's correct.
But just because they call it `lazy' doesn't mean that it really is
the essence of laziness. I prefer to use the more neutral name `lifted
lambda calculus' for their
> I thought this inequality was one of the distinguishing characteristics of
> lazy functional programming relative to the standard lambda-calculus. To
> quote from Abramsky's contribution to "Research Topics in Functional
> Programming", Addison-Wesley 1990:
>
>Let O == (\x.xx)(\x.xx) be t
I have been following this discussion with interest and I'd like
some clarification.
Wadler writes:
> But just because they call it `lazy' doesn't mean that it really is
> the essence of laziness.
What is really been called `lazy' and how is the `essence of
laziness' defined?
Also, forgive my
> So, as Lennart says, if we allow constructors to be strict in functions
> then we have to change the semantics to distinguish _|_ from (\x -> _|_).
> I, for one, am deeply reluctant to do so; I certainly have no good handle on
> the consequences of doing so. Does anyone else?
I thought this i
(This message assumes we head for the strictness-annotation-on-constructor-arg
solution. I'll respond to Phil's comments in my next msg.)
The problem with polymorphic strictness
~~~
John asks what the problem is with strict constructor args. As L
I like John's idea with a class Strict, but I think there should also be
a second class Eval for computing whnf's:
class Strict a where
strict :: a -> Bool
class Eval a where
eval :: a -> Bool
Example: for Complex we get:
instance Strict a => Strict (