Why is there a space leak here?

2001-05-28 Thread David Bakin



Why is there a space leak in foo1 but not in foo2 (in Hugs, Nov '99)?
foo1 eats cells (and eventually runs out of them) where foo2 doesn't.
That is, if I do (length (foo1 100)) I eventually run out of cells, but
(length (foo2 100)) runs fine (every GC returns basically the same
amount of space).  Something must be wrong in flatten, but it follows
the pattern of many functions in the prelude (which I'm trying to learn
from).
 
I have been puzzling over this for nearly a full day (I reduced this
version from my own code, which wasn't working).  In general, how can I
either (a) analyze code looking for a space leak or (b) experiment
(e.g., using Hugs) to find a space leak?  Thanks!  -- Dave
 
-- This has a space leak, e.g., when reducing (length (foo1 100))
foo1 m = take m v
  where
    v = 1 : flatten (map triple v)
    triple x = [x,x,x]
 
-- This has no space leak, e.g., when reducing (length (foo2 100))
foo2 m = take m v
  where
    v = 1 : flatten (map single v)
    single x = [x]
 
-- flatten a list-of-lists
flatten :: [[a]] -> [a]
flatten []             = []
flatten ([]:xxs)       = flatten xxs
flatten ((x':xs'):xxs) = x' : flatten' xs' xxs

flatten' []       xxs = flatten xxs
flatten' (x':xs') xxs = x' : flatten' xs' xxs
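A guess at the mechanism (my own analysis, not from the thread): v is self-referential, and flatten keeps an internal pointer into (map triple v). In foo1 each cell of v that flatten consumes yields three cells of output, so flatten's pointer falls ever further behind the front of v that take is consuming, and every cell between the two pointers is retained. In foo2 the two pointers advance in lockstep, so the live set stays bounded. A self-contained version to experiment with (triple and single inlined as lambdas; flatten replaced by the semantically equivalent concat):

```haskell
module Main where

-- Same shape as the posted code; concat is semantically what flatten does.
flatten :: [[a]] -> [a]
flatten = concat

foo1, foo2 :: Int -> [Int]
foo1 m = take m v  where v = 1 : flatten (map (\x -> [x, x, x]) v)
foo2 m = take m v  where v = 1 : flatten (map (\x -> [x]) v)

-- Both print 100; the difference is heap residency while computing,
-- which grows with m for foo1 but stays flat for foo2
-- (e.g. visible in GHC with +RTS -s).
main :: IO ()
main = print (length (foo1 100), length (foo2 100))
```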
 
 


RE: Recursive types?

2001-05-22 Thread David Bakin

Thank you.

Now, on seeing your Tree example, I tried to use it by defining height
using structural recursion.  But I couldn't find a valid syntax to
pattern match - or otherwise define the function - on the type CT.  Is
there one?  Or are constructor classes the only way to use these types?
If that's the case, is that inherent in these types or just the way
Haskell does it?  I was thinking that although you don't know much
about these types (e.g., the operations you can perform on k), you
should be able to compute height (& size), leaves :: Tree c k e -> [e],
and even replMin from the information that's there in the type.
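One way this might work (a sketch of my own, not from the thread; Container and contents are invented names): since CT c k e = c k (Tree c k e) leaves the container c abstract, a small constructor class that extracts the contained tree lets height recurse structurally. This only fits containers that hold the subtree purely; an ST-based container would need a monadic variant.

```haskell
module Main where

type CT c k e = c k (Tree c k e)
data Tree c k e = Empty | Node (CT c k e) e (CT c k e)

-- Invented helper class: get the subtree out of a (pure) container.
class Container c where
  contents :: c k t -> t

newtype Plain k t = Plain t
instance Container Plain where
  contents (Plain t) = t

-- The pair container from the reply (every subtree labelled).
instance Container (,) where
  contents (_, t) = t

height :: Container c => Tree c k e -> Int
height Empty        = 0
height (Node l _ r) = 1 + max (height (contents l)) (height (contents r))

main :: IO ()
main = print (height t)  -- prints 2
  where
    t :: Tree Plain () Char
    t = Node (Plain (Node (Plain Empty) 'a' (Plain Empty)))
             'b'
             (Plain Empty)
```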

-- Dave


-Original Message-
From: Tom Pledger [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 21, 2001 7:25 PM
To: [EMAIL PROTECTED]
Subject: Recursive types?


David Bakin writes:
 | I'm having trouble understanding recursive types (e.g., as described
 | in Functional Programming with Overloading and Higher-Order
 | Polymorphism by Jones).
 |  
 | He gives as an example
 |  
 |  
 | > data Mu f = In (f (Mu f))
 |  
 | > data NatF s = Zero | Succ s
 | > type Nat = Mu NatF
 |  
 | Among the things I don't get are:
 |  
 | 1.  Comparing this to the non-higher-order type:
 |  
 | > data NatX = Zero | Succ NatX
 |  
 | What are the advantages of the higher-order type?  What are the
 | disadvantages (incl. runtime costs)?
 |  
 | 2.  How are values of type Nat represented? (It helps me to think
about
 | these things concretely.)

_|_
In _|_
In Zero
In (Succ _|_)
In (Succ (In _|_))
In (Succ (In Zero))
In (Succ (In (Succ _|_)))
In (Succ (In (Succ (In _|_))))
In (Succ (In (Succ (In Zero))))

and so on.  If you never care about distinguishing In _|_ from _|_,
you can save space and time by declaring `newtype Mu ...' instead of
`data Mu ...', in which case Nat's run-time representation is the same
size as NatX's.
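To make the enumeration concrete (my own sketch, not part of the thread), here are the definitions from the paper plus conversions between Nat and Int; each fromInt n value is one of the finite entries in the list above:

```haskell
module Main where

data Mu f = In (f (Mu f))

data NatF s = Zero | Succ s
type Nat = Mu NatF

-- Peel off one In per Succ.
toInt :: Nat -> Int
toInt (In Zero)     = 0
toInt (In (Succ n)) = 1 + toInt n

-- Wrap one In per Succ: fromInt 2 = In (Succ (In (Succ (In Zero)))).
fromInt :: Int -> Nat
fromInt 0 = In Zero
fromInt n = In (Succ (fromInt (n - 1)))

main :: IO ()
main = print (toInt (fromInt 3))  -- prints 3
```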

One advantage of such higher-order types is reusability.  For example,
this

type CT c k e
    = c k (Tree c k e)    -- Contained Tree, container, key, element
data Tree c k e
    = Empty
    | Node (CT c k e) e (CT c k e)

can be used with no frills

newtype Plain k t = Plain t
... :: Tree Plain () e

or with every subtree labelled

... :: Tree (,) String e

or with every subtree inside the ST monad

import ST
... :: Tree STRef s e

etc.  I don't know whether this is a shining example of an advantage,
and am keen to see other comments.

Regards,
Tom

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell




Recursive types?

2001-05-21 Thread David Bakin



I'm having trouble understanding recursive types (e.g., as described in
Functional Programming with Overloading and Higher-Order Polymorphism
by Jones).

He gives as an example:

> data Mu f = In (f (Mu f))

> data NatF s = Zero | Succ s
> type Nat = Mu NatF

Among the things I don't get are:

1.  Comparing this to the non-higher-order type:

> data NatX = Zero | Succ NatX

What are the advantages of the higher-order type?  What are the
disadvantages (incl. runtime costs)?

2.  How are values of type Nat represented? (It helps me to think about
these things concretely.)

Thanks!  -- Dave


RE: Combinator library gets software prize

2001-01-21 Thread David Bakin

This article is very good, and having read the conference paper earlier
in the year I finished it with only one question:  What's a 'quant' ...
and is it good or bad to be one?

"Ten years ago, Jean-Marc Eber, then a quant at Société
Générale, ..."

-- Dave

-Original Message-
From: Simon Peyton-Jones
Sent: Saturday, January 20, 2001 5:10 AM
To: [EMAIL PROTECTED]
Subject: Combinator library gets software prize


You might enjoy this article, in Risk Magazine, a journal
of the financial engineering industry.  Combinators in the real
world!

http://www.risk.net/riskawards2001/softwareproduct.htm
 

Simon

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell
 






RE: syntactic sugar for "arrows"

1999-01-28 Thread David Bakin (Exchange)

Except that he does say in the conclusion of the paper:  "This paper
proposes the replacement of monads as a structuring tool for combinator
libraries, by arrows."  

I don't think he made that argument, though; I think the argument he
made was that arrows could be an alternative structuring tool for
combinator libraries, supplying additional power at some cost in
convenience.  The second paragraph of the conclusion pretty much
describes when arrows could and should be used.  And he ends the paper
with "In short, we believe that arrows offer a useful extension to the
generality of generic library interfaces", a reasonable (and
understated) claim.
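For concreteness (my own sketch from memory of the paper, not part of this thread), the interface under discussion is roughly the following, with plain functions as the simplest instance:

```haskell
module Main where

-- The arrow interface, approximately as Hughes presents it.
class Arrow a where
  arr   :: (b -> c) -> a b c           -- lift a pure function
  (>>>) :: a b c -> a c d -> a b d     -- left-to-right composition
  first :: a b c -> a (b, d) (c, d)    -- act on one half of a pair

-- Ordinary functions form an arrow.
instance Arrow (->) where
  arr f          = f
  f >>> g        = g . f
  first f (b, d) = (f b, d)

main :: IO ()
main = print ((arr (+ 1) >>> arr (* 2)) 3)  -- prints 8
```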

-- Dave

-Original Message-
From: David Barton [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 28, 1999 7:44 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: syntactic sugar for "arrows"


Michael Hobbs writes:

   Yes, I can see how the arrow paradigm would work very well for
   things such as parsing LL(1) grammars, but I could not see how such
   a scheme could become a _replacement_ for monads in general purpose
   programming.  Perhaps I was expecting the wrong thing from the
   concept...

I agree, and (not to speak for John Hughes, but I will anyway) I
suspect John would as well. 





RE: syntactic sugar for "arrows"

1999-01-28 Thread David Bakin (Exchange)

I'm working my way through both papers now (arrows, and deterministic
error-correcting parsers).  They're very interesting.  

I wanted to point out that I see a similarity to work being done in the
C++ community with "template metaprograms" and "active libraries",
where some library developers (typically for numerical "codes") are
exploring the outer limits of the C++ template facility in order to
build libraries that "self-configure" at compile time, producing
optimized code depending on both context and platform.  That is, the
library developer writes code that does an application-specific 'static
analysis' at compile time, changing the algorithms and/or data
structures the library supplies to the calling program depending on the
context at the call site.  Two of the more active
investigation/development efforts in this line are Blitz++ and Pooma
(see links below).

It turns out that this method is pretty tricky for the library writer
but easy for the library user - it seems the arrows technique would be
similar.  That could be acceptable: there's no reason a 'general'
functional programmer needs to write code that does a static analysis
of his program in order to get better performance - that's probably
properly the domain of the compiler writer and library writer.  (What
this kind of work does, I guess, is let the library writer contribute
optimizations in a way that previously only the compiler writer could.)

Also, C++ template metaprogramming tends to break existing C++
compilers or expose grievous performance problems in them (that more
'normal' C++ coding styles don't) - it will be interesting to see if
arrows do the same for functional compilers!

Some links:

Blitz++ web page: http://seurat.uwaterloo.ca/blitz/
PoomaII distribution: http://www.acl.lanl.gov/pooma/
Generative Matrix Computation Library:
http://nero.prakinf.tu-ilmenau.de/~czarn/gmcl/
Paper on 'active libraries':
http://extreme.indiana.edu/~tveldhui/papers/oo98.html

-- Dave





Re: Haskell 1.3 Draft Report

1995-05-19 Thread David Bakin


Hi.  For the TeX-impaired, is there any chance of sticking postscript files
on an ftp site?  Thanks!  -- Dave

>A draft of the Haskell 1.3 report is available by FTP from
>ftp.dcs.glasgow.ac.uk [130.209.240.50] in
>
>   pub/haskell/report/draft-report-1.3.dvi.gz  [Report]
>   pub/haskell/report/draft-libraries-1.3.dvi.gz   [Libraries]
>
>Highlights include:
>
>   Monadic I/O
>   A split into prelude and libraries, with qualified names
>   Strict data types
>   Some minor syntactic revisions
>
>We are planning to revise this and release it in time for FPCA '95.
>There will definitely be additional prelude and library changes;
>including several new libraries.
>
>Feedback is welcome and will be taken into account when revising the
>report, but please remember that we will be very busy over the next few
>weeks (I am also away for the next two weeks!).  Please mail typos., minor
>notes on syntax etc. to me; substantive comments should be sent to
>[EMAIL PROTECTED]
>
>Regards,
>Kevin
>
>
>

--
Dave Bakin      How much work would a work flow flow if a   #include
510-922-5678    work flow could flow work?