Re: [Haskell] memory management

2009-08-04 Thread Colin Runciman

Nathan,

I'm interested in research relating to memory management in
Haskell. I'm at the point where I don't know enough to have very
specific questions, but I'm especially interested in garbage collection
in Haskell, and any available statistics (such as how long a thunk
typically lives before it's evaluated, or after it's evaluated), or tools
that would let me get that sort of information more easily. If anyone
could be so kind as to point me to relevant research papers or other
documentation, it would be very much appreciated.


In the early to mid '90s we built various heap-profiling tools to 
examine the characteristics of heap data in lazy functional programs. 
You can find papers describing this work by Googling "heap profiling".
You may be particularly interested in the investigation of heap lag
and heap drag -- see for example the ICFP'96 paper.  Others have
worked on similar tools since, but I'm not sure how extensive heap 
profiling facilities are in ghc, the most widely used implementation of 
Haskell.
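
To make the lag/drag idea concrete, here is a small sketch (my illustration, not from the papers; the module and function names are invented) of a program whose heap behaviour differs between a lazy and a strict fold. GHC's biographical heap profiling classifies heap cells by lag, drag, void and use, in the spirit of that ICFP'96 work.

```haskell
import Data.List (foldl')

-- A lazy left fold builds a long chain of (+) thunks before any is
-- demanded; in a biographical heap profile those cells show up as "lag".
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- A strict left fold forces the accumulator at each step, so almost
-- no thunks accumulate on the heap.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

-- To profile with GHC (assuming a profiling-enabled install), put these
-- definitions in Sum.hs together with, say, main = print (sumLazy [1 .. 1000000]):
--   ghc -prof -fprof-auto Sum.hs
--   ./Sum +RTS -hb    -- biographical profile: lag, drag, void, use
--   hp2ps -c Sum.hp   -- render the .hp output as PostScript
```

Both folds compute the same result; only their heap biographies differ.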


Regards
Colin R

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Abusing quickcheck to check existential properties

2008-10-22 Thread Colin Runciman

Norman,


I guess what I would like is to reuse most of the mechanisms in
QuickCheck to have it say one of these two things:

  1. Found an satisfying instance after 73 tries: [gives instance]

  2. After 100 tries, could not find a satisfying instance.

Like failure, the first tells you something definite about your
program.  And like passing 100 tests, the second tells you nothing.


What you're asking for is similar to what SmallCheck could give you, but
in the context of exhaustive testing of small values, not sample testing
of random values.  However:


1. As most existentials are within the scope of a universal, there are 
many instances tested, so when a witness is indeed found nothing is 
reported.


2. A report is given only when no witness can be found within the 
specified search depth -- or when more than one can be found in the case 
of a unique existential.


Perhaps your final remark is tongue-in-cheek?  I agree the question of 
what passed 100 tests tells you is tricky; but it isn't nothing. 
Anyway, in SmallCheck knowing that any witness could only be beyond the
search depth does tell you something (e.g. the expected witness might be
a position in a list where the search depth is the length of the list).
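
The reporting behaviour described above can be sketched in a few self-contained lines (invented names, not the SmallCheck API): search the values within a depth bound for a witness, and report either the witness found or that none exists within the bound.

```haskell
-- Sketch only: a depth-bounded existential check over Int, taking |n| <= d
-- as the notion of "depth d". Names here are illustrative, not library API.
data Result = Witness Int | NoneUpTo Int
  deriving (Eq, Show)

existsUpTo :: Int -> (Int -> Bool) -> Result
existsUpTo d p =
  case [ n | n <- [-d .. d], p n ] of
    (w : _) -> Witness w
    []      -> NoneUpTo d
```

For instance, existsUpTo 10 (\n -> n * n == 9) yields Witness (-3), while existsUpTo 2 of the same property yields NoneUpTo 2 -- which, as noted above, only says that any witness lies beyond the search depth.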


Colin R


[Haskell] ANN: SmallCheck 0.4

2008-05-27 Thread Colin Runciman

SmallCheck 0.4: another lightweight testing library in Haskell
--

A new version of SmallCheck can be obtained from:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/smallcheck

or alternatively from

http://www.cs.york.ac.uk/fp/smallcheck0.4.tar

The difference from 0.3 is that module SmallCheck is now Test.SmallCheck
and has been cabal-packaged.

SmallCheck is similar to QuickCheck (Claessen and Hughes 2000-) but
instead of testing for a sample of randomly generated values, SmallCheck
tests properties for all the finitely many values up to some depth,
progressively increasing the depth used.
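
The "all the finitely many values up to some depth" idea can be sketched in a few lines (a simplification with my own names, not the Test.SmallCheck API):

```haskell
-- A series maps a depth bound to the finitely many values of that depth.
type Series a = Int -> [a]

-- Natural numbers of depth d: 0 .. d.
natSeries :: Series Int
natSeries d = [0 .. d]

-- Lists of depth d: [] occurs at every depth; a cons costs one level
-- of depth in both the head and the tail.
listSeries :: Series a -> Series [a]
listSeries s d
  | d <= 0    = [[]]
  | otherwise = [] : [ x : xs | x <- s (d - 1), xs <- listSeries s (d - 1) ]

-- Exhaustively check a property for all values up to depth d.
holdsUpTo :: Int -> Series a -> (a -> Bool) -> Bool
holdsUpTo d s p = all p (s d)
```

At depth 1 the list series contains just [] and [0]; increasing the depth grows the test space finitely and predictably, which is what makes complete coverage claims possible.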

Folk-law: if there is any case in which a program
fails, there is almost always a simple one.

Corollary: if a program does not fail in any
simple case, it almost never fails.

Other possible sales pitches:
* write test generators for your own types more easily
* be sure any counter-examples found are minimal
* write properties using existentials as well as universals
* establish complete coverage of a defined test-space
* display counter-examples of functional type

Comments and suggestions welcome.

Colin Runciman



[Haskell] ANN: SmallCheck 0.3

2008-05-06 Thread Colin Runciman

SmallCheck 0.3: another lightweight testing library in Haskell
--

A new version of SmallCheck can be obtained from:

  http://www.cs.york.ac.uk/fp/smallcheck0.3.tar

Main differences from 0.2:
* existential quantifiers now have unique variants for which two
  witnesses are reported when uniqueness fails;
* the over-generating coseries method for functions of functional
  arguments has been replaced;
* additional examples;
* test counters are now Integers, not Ints!

SmallCheck is similar to QuickCheck (Claessen and Hughes 2000-) but
instead of testing for a sample of randomly generated values, SmallCheck
tests properties for all the finitely many values up to some depth,
progressively increasing the depth used.

Folk-law: if there is any case in which a program
fails, there is almost always a simple one.

Corollary: if a program does not fail in any
simple case, it almost never fails.

Other possible sales pitches:
* write test generators for your own types more easily
* be sure any counter-examples found are minimal
* write properties using existentials as well as universals
* establish complete coverage of a defined test-space
* display counter-examples of functional type

Comments and suggestions welcome.

Colin Runciman


[Haskell] fun at York (reminder)

2007-11-13 Thread Colin Runciman

Fun people,

There will be Fun in the Afternoon at York on Thursday 22 November.
See http://sneezy.cs.nott.ac.uk/fun/ for details.

If you plan to come, and have not already mailed to say so, please send 
me a non-empty message with EITHER curried fun OR uncurried fun as 
the subject line -- participants opting for curried fun will convene at 
a nearby restaurant at 6pm.


Thanks,
Colin R




[Haskell] ANN: SmallCheck0.2

2006-11-06 Thread Colin Runciman

SmallCheck 0.2: another lightweight testing library in Haskell
--

A new version of SmallCheck can be obtained from:

 http://www.cs.york.ac.uk/fp/smallcheck0.2.tar

Main differences from 0.1:
* choice of interactive or non-interactive test-drivers using
  iterative deepening;
* more pre-defined test-data generators, including revised
  Int, Integer, Float, Double, Nat and Natural;
* additional examples.

SmallCheck is similar to QuickCheck (Claessen and Hughes 2000-) but
instead of testing for a sample of randomly generated values, SmallCheck
tests properties for all the finitely many values up to some depth,
progressively increasing the depth used.

Folk-law: if there is any case in which a program
fails, there is almost always a simple one.

Corollary: if a program does not fail in any
simple case, it almost never fails.

Other possible sales pitches:
* write test generators for your own types more easily
* be sure any counter-examples found are minimal
* write properties using existentials as well as universals
* establish complete coverage of a defined test-space
* display counter-examples of functional type

Comments and suggestions welcome.

Colin R



Re: [Haskell] ANN: SmallCheck 0.1

2006-09-14 Thread Colin Runciman
Don,

Let's run QuickCheck (check) head to head with SmallCheck (scheck): 
...
lambdabot scheck \s -> not (null s) ==> minimum (s :: [Int]) == (last . sort) s
  Failed test no. 10. Test values follow.: [-1,-1,-1,-1,-1,-1,-1,0]

lambdabot check \s -> not (null s) ==> minimum (s :: [Int]) == (last . sort) s
  Falsifiable, after 1 tests: [2,1]

So your plugin is based on depthCheck 8, not the iterative deepening
of smallCheck; otherwise smallCheck would report [-1,0] as the first
failure.
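
The difference can be sketched concretely (invented names, only loosely modelled on the SmallCheck drivers): a fixed-depth check reports the first failure it meets in enumeration order, which need not be minimal, while iterative deepening retries at depths 0, 1, 2, ... so the first failure found is of least depth.

```haskell
import Data.Maybe (listToMaybe)

-- Depth-bounded enumeration of Int lists, taking |n| as the depth of n.
valuesAtDepth :: Int -> [[Int]]
valuesAtDepth d
  | d <= 0    = [[]]
  | otherwise = [] : [ x : xs | x <- [-(d-1) .. d-1], xs <- valuesAtDepth (d - 1) ]

lookupFailure :: [[Int]] -> ([Int] -> Bool) -> Maybe [Int]
lookupFailure xss p = listToMaybe (filter (not . p) xss)

-- Fixed-depth check: first counter-example in enumeration order.
depthCheckSketch :: Int -> ([Int] -> Bool) -> Maybe [Int]
depthCheckSketch d p = lookupFailure (valuesAtDepth d) p

-- Iterative deepening: redo the search at increasing depths, so the
-- first counter-example found is one of minimal depth.
smallCheckSketch :: Int -> ([Int] -> Bool) -> Maybe [Int]
smallCheckSketch dmax p =
  listToMaybe [ c | d <- [0 .. dmax], Just c <- [lookupFailure (valuesAtDepth d) p] ]
```

For the (false) property that every non-empty list has equal minimum and maximum, the fixed-depth sketch at depth 8 reports a large counter-example, while the deepening sketch reports [-1,0], in line with the discussion above.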

I'll add a 'batch' version of iterative deepening.

One thing needed for online use: some more instances for the various numeric
types might be useful, Float, Double, Ratio, Complex etc.
  

Fair point.  I admit that I rarely use numbers other than the non-negative
integers when programming. Even deciding to include -1 in the default Int
series was a trip into an alien world. :-)  

I'll add some simple default instances for other numeric types used by
the more arithmetically adventurous.

Colin



[Haskell] ANN: SmallCheck 0.1

2006-09-13 Thread Colin Runciman
SmallCheck: another lightweight testing library in Haskell.

Folk-law: if there is any case in which a program fails, there is almost
always a simple one.

SmallCheck is similar to QuickCheck (Claessen and Hughes 2000-)
but instead of a sample of randomly generated values, SmallCheck
tests properties for all the finitely many values up to some depth,
progressively increasing the depth used.  For data values, depth means
depth of construction.  For functional values, it is a measure combining
the depth to which arguments may be evaluated and the depth of possible
results.

Other possible sales pitches:
* write test generators for your own types more easily
* be sure any counter-examples found are minimal
* write properties using existentials as well as universals
* establish complete coverage of a defined test-space
* display counter-examples of functional type

A new version of SmallCheck can be obtained from:
http://www.cs.york.ac.uk/fp/smallcheck0.1.tar
The differences from 0.0 are two fixes (space-fault, output buffering),
an 'unsafe' but sometimes useful Testable (IO a) instance and additional
examples.

Comments and suggestions welcome.

Colin R



[Haskell] ANN: SmallCheck 0.0 another lightweight testing tool

2006-08-21 Thread Colin Runciman
I have written a prototype tool that is similar in spirit, and in some 
of its workings, to QuickCheck,
but based on exhaustive testing in a bounded space of test values.  
Sales pitch: wouldn't you like to ...

* write test generators for your own types more easily?
* be sure that any counter-examples found are minimal? (##)
* write properties using existentials as well as universals?
* establish complete coverage of a defined test-space?
* display counter-examples of functional type? (##)
* guarantee repeatable test results? (##)
For more details and a Haskell 98 module SmallCheck download the small 
tar file at:

http://www.cs.york.ac.uk/fp/smallcheck0.0.tar
If you try it, do let me know how you get on.  Comments & suggestions
welcome.

Colin R

(##) There have been versions of QuickCheck with extras that address 
these issues in other ways.




[Haskell] ANN: hpc -- Haskell Program Coverage

2006-08-10 Thread Colin Runciman
We are pleased to announce the first release of hpc, a new tool for 
Haskell developers.  Hpc records and displays Haskell program coverage. 
It provides coverage information of two kinds: source coverage and 
boolean-control coverage. Source coverage is the extent to which every 
part of the program was used, measured at three different levels: 
declarations (both top-level and local), alternatives (among several 
equations or case branches) and expressions (at every level). Boolean 
coverage is the extent to which each of the values True and False is 
obtained in every syntactic boolean context (i.e. guard, condition,
qualifier). Hpc displays both kinds of information in two different 
ways: textual reports with summary statistics (hpc-report) and sources 
with colour mark-up (hpc-source).


For further information and a tar-ball for hpc 0.2 see 
http://www.galois.com/~andy/hpc-intro.html


Colin Runciman and Andy Gill







Re: [Haskell] The Haskell mailing list

2005-11-16 Thread Colin Runciman

Simon,

  Can anyone remember when the Haskell mailing list first opened?  Does
anyone have an archive that goes right back to the beginning?
  
  

The attached posting to the fp mailing list, dated 2 April 1990,
included the announcement of the new haskell mailing list.

Colin R



From [EMAIL PROTECTED] Mon Apr  2 14:38:46 1990
Date: Mon, 2 Apr 90 10:38:46 EDT
From: Paul Hudak [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Release of Haskell Report


  Announcing

  The Haskell Report
 Version  1.0
 1 April 1990

The Haskell Committee, formed in September 1987 to design a common
non-strict purely functional language, has (finally) completed its work
and wishes to announce a Report about the language.  Several preliminary
versions of the Report were released to get feedback, and many changes
have been incorporated since the first release in December '88.  The
current release will remain STABLE for at least one year.

You may get the report via anonymous FTP:

from Yale:
  ftp nebula.cs.yale.edu  (or ftp 128.36.13.1)
  login: anonymous
  password: anonymous
  type binary
  cd pub/haskell-report
  get report-1.0.tar.Z
  get report-1.0.dvi.Z
  get report-1.0.ps.Z
  quit

from Glasgow:
  ftp cs.glasgow.ac.uk
  login: guest
  password: your username
  type binary
  cd FPhaskell-report
  get report-1.0.tar.Z
  get report-1.0.dvi.Z
  get report-1.0.ps.Z
  quit

The .tar file contains the source document in Unix tape archive
format; alternatively the .dvi or .ps files may be used for printing
only.  All of the files are in compress format; to uncompress them
type uncompress filename at the Unix shell.  The report is over
100 pages long and has been formatted for double-sided copying.

Alternatively, you may get a hard copy by sending $10 to:

    The Haskell Project
    Department of Computer Science
    Yale University
    Box 2158 Yale Station
    New Haven, CT 06520 USA

or 5 pounds to:

    The Haskell Project
    Department of Computing Science
    University of Glasgow
    Glasgow G12 8QQ
    SCOTLAND

Two implementations of Haskell will soon be available, one built at the
University of Glasgow, the other at Yale University.  Watch this space
for announcements.

Finally, we hope to fix, improve and clarify the current Haskell design
through graceful evolution and with your help.  To this end, several
mailing lists have been set up:

1) A common mailing list for general discussions about Haskell.
   This list is reachable in one of two ways:
   [EMAIL PROTECTED]  or  [EMAIL PROTECTED]

2) To be added to the above mailing list, or to inquire about
   the implementation status of Haskell, send mail to either:
   [EMAIL PROTECTED]  or  [EMAIL PROTECTED]
   (whichever is the nearer site to you).

3) The implementation efforts at Glasgow and Yale will have their own
   mailing lists for users of the respective systems.  These will be
   announced at the time the implementations are released.


Re: [Haskell] [ANNOUNCE] yhc - York Haskell Compiler

2005-11-14 Thread Colin Runciman

Bulat,


CR> * Part of Tom's motivation for the new back-end is a nice implementation
CR> of his Hat G-machine for tracing.

I'm interested in whether this sort of thing is possible as a back-end
for GHC?

It would be great if the current front-end for GHC, which supports a
number of widely used extensions, could be used together with new sorts
of back-ends, including debugging and super-optimizing (like jhc) ones.

I am aware of some experiments with alternative back-ends for ghc, but I 
don't know of any work on a ghc back-end generating portable bytecode.  
A few years ago some work was done towards a ghc-hugs fusion, but in the 
end hugs remained separate and the ghc people developed ghci.  Perhaps 
ghc and/or hugs developers can comment further?


So far as debugging back-ends for ghc are concerned, Robert Ennals and 
Simon PJ did a stop-and-look style debugger using speculative evaluation 
which perhaps is still distributed? For systems that record a complete  
computational trace, a modified abstract machine is an attractively 
efficient alternative to source-to-source transformation, but inevitably 
demands more cooperation from the front-end to provide the extra 
information needed.


Colin R



Re: [Haskell] [ANNOUNCE] yhc - York Haskell Compiler

2005-11-14 Thread Colin Runciman

Thomas Davie wrote:


I haven't played around with nhc98 yet, but I was intrigued by its
small size and its (modestly-sized and simple) bytecoded
implementation. Should I now be more interested in Yhc instead? ;-)


As far as the YHC team is concerned, yes... As far as the nhc team
is... I'm not sure; perhaps Malcolm or Colin would be kind enough to
tell us.


I think the implied division here is an artificial one.

Tom Shackell's work on a new back-end for nhc98 is a welcome development 
and looks very promising.  It is a good idea to pursue a more portable 
and compact bytecode. Also, the back-end of the currently distributed 
nhc98 has hosted several experiments in memory management, profiling, 
tracing etc so stripping back to a more minimal run-time system is also 
attractive. 

However, the new back-end is still under development.  It does not yet 
support everything that the current nhc98 back-end does.  So in the 
short term, I'd still recommend application developers to use the 
standard nhc98 distribution if it runs on their computing platform.


In the medium-to-long term, it makes little sense to dissipate the 
efforts of a small number of York Haskell people by trying to maintain 
two distinct compilers with a common root.  When the new back-end is 
tried and tested, with the addition of profiling and tracing*, I hope it 
will become part of the nhc98 mainstream.


As for the name: Malcolm has been maintaining and distributing first 
nhc, then nhc98, from York for several years, and by now it has a lot 
of  York-written code in it; but we kept the name nhc by way of 
acknowledgement to Niklas Röjemo, the original author, who brought 
nhc to York when he was a post-doc here in the mid '90s.


Colin R (and Malcolm W)

---
* Part of Tom's motivation for the new back-end is a nice implementation 
of his Hat G-machine for tracing.




[Haskell] post-doc job in functional programming

2005-08-13 Thread Colin Runciman
Research Associate Position Available -- Post-Doc, Functional Programming
(Second Posting -- application deadline 31 August)

Applications are invited for a fixed-term research position at the
Department of Computer Science, University of York, UK.  This position
is for a post-doctoral researcher to investigate the use of functional
programming systems in grid computing.  Specifically, the post is
available in connection with a recently awarded EPSRC grant: A Lazy
Polytypic Grid: Generic Data Visualization Methods That Adapt to Resources
Available.

The project is a collaboration between the University of Leeds
(grant holder: David Duke) and the University of York (grant holder:
Colin Runciman).  A post-doctoral researcher is being appointed at
each site. The researcher at Leeds is expected to specialise in data
visualisation and the researcher at York in functional programming
systems, but the intention is that work will be genuinely collaborative.
Both departments are active centres of research; for further information
about them see http://www.cs.york.ac.uk/ and http://www.comp.leeds.ac.uk/.

The essential qualifications for the post are these:

* a PhD in functional programming systems or equivalent
  research experience;
* well-developed skills in both the implementation and
  application of functional languages;
* an aptitude for working collaboratively in a small team with
  researchers from different domains of expertise;
* strong communication skills for written and oral presentation
  of ideas and results within and beyond the team.

Other desirable qualifications include:

* fluency in the Haskell programming language;
* knowledge of data visualisation methods;
* experience with grid computing.

Starting salary will be up to 27,116 pounds per annum and the position
is available for up to 3 years.

For details of how to apply please email [EMAIL PROTECTED] quoting reference
number CR05305 or see http://www.york.ac.uk/admin/persnl/jobs/. The
closing date is 31 August 2005.

For informal enquiries, or more details of the technical aims of the
project, you are welcome to contact me directly: e-mail Colin.Runciman
at cs.york.ac.uk or telephone +44 1904 432740. Please note, however,
that I am due to be away for the last 10 days of August.



[Haskell] post-doc job in functional programming

2005-08-01 Thread Colin Runciman
Research Associate Position Available
(Post-Doc, Functional Programming Systems)

Applications are invited for a fixed-term research position at the
Department of Computer Science, University of York, UK.  This position
is for a post-doctoral researcher to investigate the use of functional
programming systems in grid computing.  Specifically, the post is
available in connection with a recently awarded EPSRC grant: A Lazy
Polytypic Grid: Generic Data Visualization Methods That Adapt to Resources
Available.

The project is a collaboration between the University of Leeds
(grant holder: David Duke) and the University of York (grant holder:
Colin Runciman).  A post-doctoral researcher is being appointed at
each site. The researcher at Leeds is expected to specialise in data
visualisation and the researcher at York in functional programming
systems, but the intention is that work will be genuinely collaborative.
Both departments are active centres of research; for further information
about them see http://www.cs.york.ac.uk/ and http://www.comp.leeds.ac.uk/.

The essential qualifications for the post are these:

* a PhD in functional programming systems or equivalent
  research experience;
* well-developed skills in both the implementation and
  application of functional languages;
* an aptitude for working collaboratively in a small team with
  researchers from different domains of expertise;
* strong communication skills for written and oral presentation
  of ideas and results within and beyond the team.

Other desirable qualifications include:

* fluency in the Haskell programming language;
* knowledge of data visualisation methods;
* experience with grid computing.

Starting salary will be up to 27,116 pounds per annum and the position is
available for up to 3 years.

For details of how to apply please email [EMAIL PROTECTED] quoting reference
number CR05305 or see http://www.york.ac.uk/admin/persnl/jobs/. The closing
date is 31 August 2005.

For informal enquiries, or more details of the technical aims of the
project,
you are welcome to contact me directly: e-mail Colin.Runciman at
cs.york.ac.uk
or telephone +44 1904 432740. Please note, however, that I am due to be away
for the first 10 days and the last 10 days of August.



Re: [Haskell] line-based interactive program

2005-07-08 Thread Colin Runciman
Christian Maeder wrote:

Could you also insert a prompt that is shown before the lines are read?
(The first prompt seems to be tricky assuming line buffering.)

If line-buffering is assumed or imposed, of course it prevents the
programming of interactive applications where the units of input or
output are less than a line!  However, there is no need to build
line-buffering into the system, because it is easily defined in Haskell:

buffer xs = foldl const xs xs

lineBuffered = map buffer . lines
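
The definitions above can be wrapped into a complete program; the type signatures, the use of unlines to rejoin the lines, and the echo example are my additions for illustration.

```haskell
-- buffer forces the whole spine of its argument (foldl must traverse
-- the entire list) before returning that argument unchanged.
buffer :: [a] -> [a]
buffer xs = foldl const xs xs

-- Demand each input line in full before any of it is passed on.
lineBuffered :: String -> String
lineBuffered = unlines . map buffer . lines

-- A hypothetical line-buffered echo program:
-- main = interact lineBuffered
```

Since buffer is the identity on finite lists, lineBuffered changes only when output can begin, not what is output.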

Regards
Colin R





Re: [Haskell] line-based interactive program

2005-07-08 Thread Colin Runciman
Christian,

buffer xs = foldl const xs xs

I don't find it this easy, nor a good programming practice.


I don't see why you should think it hard to define a function like
'buffer'.  The whole purpose of foldl is to encapsulate accumulation. 
It demands the full spine of its list argument to produce any result,
and that's what we want.  And we don't want any extra computation, just
the list argument itself; hence 'const' and 'xs'.

Your implicitly proposed programming practice is actually *not to
program* buffering at all!   Just have it as an ad hoc IO directive,
programmed less concisely and no more clearly in a low-level library or
run-time system.  How is that better?

My interaction depends on the (subtle order of) evaluation of a pure and
total function?

Pure, yes; total, no.

Many important things depend on order of evaluation in lazy programs:
for example, whether they compute a well-defined value at all!   The
interleaving of demand in the argument of a function with computational
progress in its result seems a perfectly natural view of interaction in
a lazy functional language.  This sort of interaction is what actually
happens when your function applications are evaluated whether you
exploit it or not.  I embrace it as part of lazy functional programming;
you prefer an appeal to something extraneous.

[I am conscious that we are using bandwidth on the main Haskell mailing
list for this little discussion -- perhaps we are about done, but if not
perhaps we should mail each other direct.]

Regards
Colin R



Re: [Haskell] line-based interactive program

2005-07-07 Thread Colin Runciman




Christian Maeder wrote:

  

-- a filter program processes an entire input to yield some output
type FilterProgram = [Line] -> [Line]

Forget this, if it's not an (old) exercise.

Yes, people don't write lazy functional programs in Haskell any more.
In the Era of Monadic Enlightenment, obfuscated imperative programming
is the Way To Go.  :-\  :-) 

However, for those who like to indulge the odd moment of nostalgia, in
the Haskell 98 Prelude there is:

interact :: (String -> String) -> IO ()

If this function does not work correctly with the Haskell
implementation you use, do report the fault to the implementors.

Colin R
(purveyor of old exercises)







Re: [Haskell] space behaviour of lazy recursive lists

2005-02-03 Thread Colin Runciman
Axel,
...
However, they do not directly solve my problem in the bigger program which
still has the same linearly growing memory requirement. The problem seems to be
very, very hard to find. I suspect it is related to laziness as in the gibs
example, but I just cannot put my finger on the code that needs to be
changed. Is there any good method to track down this kind of problem? (I
tried all the ghc memory profiling techniques, that seemed promising to
me.)
 

If your program is in Haskell 98, not using any of the Glasgow 
extensions, you could also try compiling it for memory profiling under 
nhc.  The nhc heap profiler has some options not available in ghc.  Most 
but not all space problems due to laziness have similar effects across 
different implementations.

Colin R


Re: [Haskell] factoring `if'

2004-10-11 Thread Colin Runciman
Serge,
What do you think: is program (1) equivalent to (2)
in the meaning of Haskell 98?

(1)   (\ x -> (if p x then  foo (g x)  else  foo (h x))
       where
       p ... g ... h ... foo ...
      )
(2)   (\ x -> foo  ((if p x then  g x  else  h x)
        where
        p ... g ... h ... foo ...
       )
      )
 

Both examples are illegal -- there are no where-expressions in Haskell, 
only where-equations. 

Ignoring the local definition part, however, the answer is no.  Lifting 
foo out of the branches of the conditional is only valid if foo is strict.

Colin R

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] Re: elementary tracing for Haskell

2004-10-11 Thread Colin Runciman
Phil,
Are there any tools for beginning programmers that give traces of 
Haskell programs?  I want something like the following.  Given the 
defintion

sum [] = 0
sum (x:xs) = x + sum xs
typing
sum [1,2,3]
should yield this trace
sum [1,2,3]
1 + sum [2,3]
1 + 2 + sum [3]
1 + 2 + 3 + sum []
1 + 2 + 3 + 0
1 + 2 + 3
1 + 5
6
I know there are advanced tools like Hat, but (unless I'm missing 
something) they don't yield anything as naive as the above, and it's 
naive users I'm trying to help here.  -- P
There is a Hat tool called hat-anim that can do this kind of thing.  It 
was developed by Tom Davie as a student project.  It is not part of the 
current official release of Hat but is included in the CVS version.  
Tom is now a PhD student at Canterbury ([EMAIL PROTECTED]), supervised by 
Olaf Chitil ([EMAIL PROTECTED]).
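
For a flavour of what such a naive trace involves, here is a toy sketch (entirely my own, unrelated to hat-anim's implementation) that renders the unfolding steps of the sum example as strings:

```haskell
import Data.List (intercalate)

-- Toy sketch: successive reduction steps of the naive sum, as strings.
-- It shows the unfolding phase and the final value, skipping the
-- intermediate partial additions of the hand trace above.
sumSteps :: [Int] -> [String]
sumSteps xs =
  [ step (take k xs) (drop k xs) | k <- [0 .. length xs] ] ++ [show (sum xs)]
  where
    step ys zs   = intercalate " + " (map show ys ++ tailPart zs)
    tailPart []  = ["0"]
    tailPart zs  = ["sum " ++ show zs]
```

For sumSteps [1,2,3] this yields "sum [1,2,3]", "1 + sum [2,3]", "1 + 2 + sum [3]", "1 + 2 + 3 + 0", and finally "6" -- close to the trace requested, though a real tool must instrument evaluation rather than recompute it.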

Regards
Colin R




Re: [Haskell] Space behaviour hyperseq

2004-06-18 Thread Colin Runciman
Johannes,

  the result of an application 'normal x' is always true ...

  I understand how this works,
  but do we agree that it looks outright ugly?

I don't see why

  f x | normal x = ... x ...

is any more ugly than

  f x@(_ : xs) = ... x ...

or (far worse)

  f ~(x : xs) = ... x ...

or strictness annotations, or uses of `seq`, all of which I try to avoid.
I prefer to use the ordinary stuff of a programming language to achieve
pragmatic ends, so far as possible, rather than adding magical
decorations.

  We mean one thing (strictness) but we write something quite different
  (an obviously useless computation of the constant True).
A 'normal' application doesn't have to mean one thing only: it is 
polymorphic, allowing distinct degrees of evaluation to be defined for 
distinct types.  The result of a normal application may be useless but 
the effect of computing it can be extremely useful for pragmatic reasons.
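
The polymorphic idiom under discussion can be sketched like this (my reconstruction for illustration, not Johannes's or Colin's actual code):

```haskell
-- 'normal' fully evaluates its argument as a side condition and then
-- yields True; each type defines its own degree of evaluation.
class Normal a where
  normal :: a -> Bool

instance Normal Int where
  normal n = n == n            -- evaluating the comparison forces n

instance Normal a => Normal [a] where
  normal []       = True
  normal (x : xs) = normal x && normal xs

-- Using it as a guard forces the whole list before the body is entered.
firstOf :: [Int] -> Int
firstOf xs | normal xs = head xs
firstOf _              = error "unreachable: normal always yields True"
```

The result of the guard is uselessly True, but evaluating it controls how much of the argument is forced, which is the pragmatic point being made.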

Can you explain this to students? Would you be proud of it?
I can and have explained it to students.  It is nothing wonderful, but 
it is nothing to be ashamed of.  It can even be rather elegant if used 
with care.

Well ... that's my opinion anyway!
Colin R


Re: interact behaves oddly if used interactively

2003-10-01 Thread Colin Runciman
Christian Maeder wrote:

Malcolm Wallace wrote:
[...]
Surely the name suggests that interactive behaviour is required, i.e.
exactly some interleaving of input and output.  The chunk-size of the
interleaving should depend only on the strictness of the argument to
interact. 


I'm not happy that interleaving depends on the strictness. Lazy or 
strict evaluation should only change the behaviour of overall 
termination (lazy evaluation should terminate more often). I'd rather 
implement interleaving (or interactive behaviour) explicitly by:

interact f = do
  s <- getLine
  putStrLn (f s)
  interact f
(assuming line buffering) (Terminating with ctrl-c)

Surely also something is needed for endless character resources as
Tom pointed out.
Christian
In a lazy language, evaluation of arguments and results is interleaved.
This coroutine-like behaviour is an important and pleasing
characteristic, not a mistake to be avoided.
Lazy evaluation of String -> String functions with the argument attached
to an input device (eg. keyboard) and the result attached to an output
device (eg. screen) is therefore a conceptually lean and natural way
to express simple interactive programs in a lazy functional language.  

Historically, the wish to preserve the option of looking at interaction
this way, if only for pedagogical reasons, was the reason for keeping
the interact function in Haskell even after the monadic revolution.
If line-buffering is needed, it is easily programmed in Haskell as
a (lazily evaluated!) function lineBuffered :: String -> String.
If f :: String -> String is the core function of the program one can
define main = interact (f . lineBuffered) and the fact that the
program relies on line-buffered input is clearly expressed in the
program itself.
Conversely, if line-buffering is built-in as part of interact, there
is no way to program it out when defining interact's argument.
Let not the eager imperative tail wag the lazy functional dog!
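A minimal sketch of such a lineBuffered function (an assumption, not code from the thread): semantically the identity on strings, but it demands a whole line of input before producing any of it.

```haskell
-- Identity on strings, re-chunked: each line is forced in full
-- (via length ... `seq`) before any of its characters are produced.
lineBuffered :: String -> String
lineBuffered [] = []
lineBuffered s  =
  let (line, rest) = break (== '\n') s
  in length line `seq`
     case rest of
       []       -> line
       (_:more) -> line ++ "\n" ++ lineBuffered more
```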

Colin R



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: awaitEval in Concurrent Haskell

2003-02-17 Thread Colin Runciman
Claus,


It may be possible to get the two representations together by applying the
predicate to a reader for x, generated from x, which would complement 
something like Hood's writer for x, generated from x. Just as the context
demanding parts of x isn't aware of triggering observations, the predicate
depending on parts of x need not be aware of having to wait for those
observations, and MVars could provide the plumbing between the implicit
readers and writers. See below for an outline.

Thanks for this further suggestion.  A solution along these lines might be
possible, but it would still be restricted in comparison with something
based on a more global awaitEval: the availability of data would only
be detected if the demand that prompted its evaluation was in the context
of the assertion-tagged expression.  Yes?

Regards
Colin R




___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: awaitEval in Concurrent Haskell

2003-02-17 Thread Colin Runciman
A couple of hours ago, I wrote (in reponse to Claus Reinke's suggestion):


Thanks for this further suggestion.  A solution along these lines 
might be
possible, but it would still be restricted ...

Actually a mild variant of Claus's proposal seems to work out
quite well.  Another way to avoid the problems with types is
to use a multi-parameter type class.  Little example attached.

So, thanks again Claus!

Regards
Colin R


-- A mini-experiment in concurrent data-driven assertions.
-- Colin Runciman after Claus Reinke after Andy Gill after ...
-- February 2003

import Control.Concurrent
import Char(isLower)
import System.IO.Unsafe(unsafePerformIO)

-- Each type a over which assertions are to be made is
-- encoded using a metatype b.

class Assert a b | a -> b, b -> a where
  assertW :: MVar b -> a -> a
  assertR :: MVar b -> a
  
assert :: Assert a b => String -> (a -> Bool) -> a -> a
assert s p x = unsafePerformIO $ do
  mv <- newEmptyMVar
  forkIO $ check s p (assertR mv)
  return $ assertW mv x
 
check :: String -> (a -> Bool) -> a -> IO ()
check s p x | p x       = return ()
            | otherwise = putStrLn $ "assertion failure: " ++ s

-- We can use assertions over characters, encoded as themselves.

instance Assert Char Char where
  assertW mv c = unsafePerformIO $ do
   putMVar mv c
   return c
  assertR mv   = unsafePerformIO $ do
   c <- takeMVar mv
   return c
   
-- Here's the metatype encoding for lists; similar definitions
-- would be needed for other structured types.

data MetaList a = Nil
                | Cons (MVar a) (MVar (MetaList a))
 
instance Assert a b => Assert [a] (MetaList b) where
  assertW mv [] = unsafePerformIO $ do
putMVar mv Nil
return []
  assertW mv (x:xs) = unsafePerformIO $ do
mvx  <- newEmptyMVar
mvxs <- newEmptyMVar
putMVar mv (Cons mvx mvxs)
return (assertW mvx x : assertW mvxs xs)
  assertR mv = unsafePerformIO $ do
    ml <- takeMVar mv
    return $ case ml of
      Nil ->
        []
      (Cons mvx mvxs) ->
        (assertR mvx : assertR mvxs)

-- Finally, a simple example application.

singleCaseWords :: String - Bool
singleCaseWords  xs = all unmixed (words xs)

unmixed :: String -> Bool
unmixed ""     = True
unmixed (c:cs) | isLower c = all isLower cs
               | otherwise = not (any isLower cs)
   
main = do
  input <- getContents
  putStr (assert "single-case words" singleCaseWords input)
  
  
  



Re: awaitEval in Concurrent Haskell

2003-02-17 Thread Colin Runciman
Simon,


Does this mean you can womble along with Claus's suggestion?  I'm
feeling a bit swamped at the moment, and not keen to undertake another
open-ended implementation exercise.  So if you can manage without, and
perhaps use the experience to refine the specification of a Really
Useful Feature, that'd be great from my point of view.


Yes, I think that's a reasonable way to go.   So stand easy  :-) 

Colin R


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: awaitEval in Concurrent Haskell

2003-02-11 Thread Colin Runciman
C.Reinke wrote:


I'm not sure whether I understand what you have in mind
later on, but this first part sounds so remarkably like
something I've seen before, that I'll take my chances.

Do you remember Andy Gill's Hood from long ago?

Inside its implementation, it had a very similar problem:
writing information about an observed structure to some
observation trace, but without forcing premature evaluation
of the structure under observation ...
Nothing happens as long as the thing under observation is not
inspected by its context. Then, and precisely then, the
unsafePerformIO kicks in to record a piece of information and to
return the outer part of the thing, wrapping its components into
fresh observers.

Andy used this to store observations in a list, to be processed at
the end of the program run, but you can just as well send the
observations during evaluation, e.g., to a concurrent thread (with
the usual caveats). In particular, the sequencing of information
becoming available was detailed enough to inspire my own GHood;-)

With no implementation slot free, something like this might get your
student's project unstuck (e.g., replace sendObservation by assert)?
After all, my favourite justification for unsafePerformIO is as an
extension hook in the runtime system..


Yes, we considered using something like the Hood technique.  The problem
is that a path-annotated observation sequence is a rather unwieldy
representation of a data structure.  As you say, Hood stores the
observations to file, where they can be post-processed beyond the
confines of the observed Haskell computation.  The scheme works because
the whole purpose of the application is just to observe the data in the
order that it becomes evaluated.

What we are after is a little different.  We need a way of attaching
an arbitrary boolean predicate to a data structure, with its own pattern
of demand for the components, but only proceeding as and when the needed
components become evaluated by the normal computation.  Perhaps
data-driven is misleading; we want the sequence of evaluation for an
asserted predicate to remain demand-driven as usual, but for progress in
that sequence to be constrained by the rule that no evaluation of the
data-structure argument may be forced.

Colin R


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: awaitEval in Concurrent Haskell

2003-02-06 Thread Colin Runciman
Simon,


Next idea is to have a function
	await :: a -> b -> b
which is rather like 'seq' except that it waits for its first argument
to be evaluated, whereas 'seq' evaluates it.  Then you could write a new
version of 'length' which used await before every case expression. Is
that what you had in mind?  A bit fragile, isn't it?


Something like await would be enough I think.  It would not be
necessary to write new versions of all the functions to be data driven,
just a single (overloaded) function to convert a demanding function into
a data-driven one.  Something like this:

asEvaluated xs = await xs (case xs of
                             (x:xs) -> asEvaluated x : asEvaluated xs
                             []     -> [])

dataDriven f = f . asEvaluated

assert p xs = thread (... dataDriven p xs ...) xs

How to implement 'await'.  We thought of three ways:

1.  Implement a primitive checkEval :: a -> Bool, which tells whether
something is evaluated; if not, it yields to other threads, as well as
returning False.  So now you can have a busy-wait version of await thus:

	await x y = case checkEval x of
			True  -> y
			False -> await x y

2.  Implement await directly, by putting the thread in a pool of threads
that the scheduler checks at regular intervals. They each just watch a
thunk.  Still a busy-waiting solution.

3.  Do the right thing: implement 'await' by copying the thunk,
replacing it by a smart indirection which also points to the
non-suspended thread. When the indirection is evaluated, it just puts
the suspended thread in the runnable pool before entering the thunk.
Actually, this is probably no harder than (2).


I agree that 2 seems the most straightforward, and not quite so busy as 1.
Unless I am misunderstanding 3 it could lead to problems if scheduling
can occur before evaluation of the thunk reaches a normal form.


Now, it's true that it would take Simon or I less time to implement one
of these than it would take your student to do, but we have so *many*
things to implement, and each takes time.  And you will probably change
your mind several times about what the right design should be (that's
what research is like).  So there's a lot to be said for wading in
yourself.


The formula for implementation time is roughly
   initiationRites + inherentDifficulty / skillLevel
and I am pretty sure we could get by with just the await primitive.
So if there is any way you could find an early slot to do a
basic implementation of await, it would be much appreciated.


What I can promise is that we'd turn around questions rapidly.


If we do end up trying to define await for ourselves, which source
files should we expect to modify and are there any likely pitfalls
in the subsequent rebuilding?

Thanks
Colin R




___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Q: Forcing repeated evaluation

2002-09-12 Thread Colin Runciman

Jan Kybic wrote:

Hello,
I have another question regarding the optimisation of Haskell code:
I have a relatively inexpensive function generating a long list, imagine
something like (I simplified a lot):

l = [ i*i*i | i <- [0..n] ]   -- for very large n

This long list is consumed several times in the program:

x1 = f1 l
x2 = f2 x1 l
x3 = f3 x2 l

I found that the list l is calculated just once and that the
computational time is dominated by the allocations and garbage
collection. I want to try to force l to be generated on-the-fly
every time it is needed, to see if it improves performance.
What is a good way to do it? Would something like

unsafePerformIO $ return l

do the job? Isn'it there any flag for the compiler (ghc) to suggest
this optimisation? Thank you for your feedback.

Jan

A simple solution is to de-CAF the offending definition, i.e. stop l
being a constant applicative form.  Give l an argument of trivial type:
replace l, both in its definition and at every point of use, by the
application l ().
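A sketch of the suggestion (with the bound n fixed at 1000 for illustration): the unit argument stops l being a constant applicative form, so the list is rebuilt at each use rather than retained on the heap between uses.

```haskell
-- No longer a CAF: each application of l rebuilds the list, so it can
-- be garbage-collected incrementally during (and between) traversals.
l :: () -> [Integer]
l () = [ i*i*i | i <- [0..1000] ]

x1 :: Integer
x1 = sum (l ())          -- first traversal: list built, consumed, discarded

x2 :: Integer
x2 = maximum (l ())      -- second traversal: list rebuilt on the fly
```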






___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Weird profiling behaviour

2002-06-27 Thread Colin Runciman

Ketil Z. Malde wrote:

5.02 uses quicksort,

That's funny, since I see quadratic scaling, I must be hitting worst
case both times?  'sort' and  'sortBy' *are* implemented in the same
way, right?

Implementations of QuickSort on lists usually take the easy option of
using the head of the list as the threshold value for partitioning.
As a consequence QuickSort does indeed degenerate to quadratic
cost in the worst case.

Also, curiously enough, it could just as well be the problem that your
int-sorting phase has too *little* sorting to do, as this common
version of quickSort degenerates both for in-order and reverse-order
inputs.
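The common version referred to, with the head of the list as pivot, looks like this (a sketch of the folklore definition, not the library's actual code). On sorted or reverse-sorted input one partition is always empty, so the recursion is maximally unbalanced and the total cost quadratic.

```haskell
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]    -- empty if input already sorted
            ++ [p]
            ++ qsort [x | x <- xs, x >= p]   -- empty if input reverse-sorted
```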

Regards
Colin R




___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Weird profiling behaviour

2002-06-26 Thread Colin Runciman

Ketil Z. Malde wrote:

I have what I think is a really strange problem.  I have a fair sized
problem, which involves sorting a data set, first on labels (which are
Strings) and then on scores (which are Ints).

The strange thing is that string sorting is *vastly* faster than int
sorting!  Now, I've tried modifying the sorting routines, moving them
into the same module, and so on, to no avail.  Is there any likely
explanation?  The pipeline is basically

 sortBy int_cmp . compound_equal . sortBy string_cmp

I hesitate to post the code, since it's part of a rather large
program, and in the hope that somebody will pop up and say that oh,
yes, it's a well-known feature of 'sortBy'.  Or that it's an artifact
of profiling.  Or something like that.

Any, and I mean any, suggestions really, really welcome.

-kzm

Could it be that the string-comparison sort simply has less sorting to do
than the int-comparison sort?  The default definition of sortBy uses
insertion sort, so if the string-sort input happens to be already sorted
it takes linear time and if the int-sort input happens to be in
reverse order  it takes quadratic time.

Colin R



___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Weird profiling behaviour

2002-06-26 Thread Colin Runciman

Ketil Z. Malde wrote:

I have what I think is a really strange problem.  I have a fair sized
problem, which involves sorting a data set, first on labels (which are
Strings) and then on scores (which are Ints).

The strange thing is that string sorting is *vastly* faster than int
sorting!  Now, I've tried modifying the sorting routines, moving them
into the same module, and so on, to no avail.  Is there any likely
explanation?  The pipeline is basically

 sortBy int_cmp . compound_equal . sortBy string_cmp

I hesitate to post the code, since it's part of a rather large
program, and in the hope that somebody will pop up and say that oh,
yes, it's a well-known feature of 'sortBy'.  Or that it's an artifact
of profiling.  Or something like that.

Any, and I mean any, suggestions really, really welcome.

-kzm

Could it be that the string-comparison sort simply has less sorting to do
than the int-comparison sort?  The default definition of sortBy uses
insertion sort, so if the string-sort input happens to be already sorted
it takes linear time and if the int-sort input happens to be in
reverse order  it takes quadratic time.

Colin R



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: trees with pointers to parents memory gobbling

2002-06-15 Thread Colin Runciman

Hal Daume III wrote:

I have a datatype which (simplified) looks like:

data FBTree a = FBLeaf (Maybe (FBTree a)) a
              | FBBranch (Maybe (FBTree a)) (FBTree a) (FBTree a)

is basically a tree with Maybe a parent node.  however, unlike the nice
non-haskell equivalent, they tend to eat up memory as you traverse
them.

Oh no they don't!  :-)

There is no space leak in the tree-traversal part of Hal's program.
The problem is that the program builds an iterated sequence of
ever-deeper compositions of findRoot and findLeftMostChild without
demanding any of their results until the very end of the sequence.

Quick plug: heap profiling showed me that the problem was with iterate;
the Hat tracing tools showed me that the tree traversal routines work fine.

I attach an amended version of Hal's program which does the 10 down-up
traversals without leaking.

Regards
Colin R



data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving (Eq, Show)

data FBTree a =
FBLeaf (Maybe (FBTree a)) a 
  | FBBranch (Maybe (FBTree a)) (FBTree a) (FBTree a)

normalFBTree (FBLeaf _ _) = True
normalFBTree (FBBranch _ _ _) = True

instance Show a => Show (FBTree a) where
  showsPrec i = showsPrec i . unFBTree

mkFBTree = mkFBTree' Nothing
  where
    mkFBTree' par (Leaf a) = FBLeaf par a
    mkFBTree' par (Branch l r) = this
      where
        this = FBBranch par (mkFBTree' (Just this) l) (mkFBTree' (Just this) r)

unFBTree (FBLeaf _ a) = Leaf a
unFBTree (FBBranch _ l r) = Branch (unFBTree l) (unFBTree r)

findRoot (FBLeaf (Just par) _) = findRoot par
findRoot (FBBranch (Just par) _ _) = findRoot par
findRoot t = t

findLeftMostChild (FBBranch _ l _) = findLeftMostChild l
findLeftMostChild t = t

tree =
  Branch
(Branch 
  (Branch
(Branch
  (Branch (Leaf 'h') (Branch (Leaf 'a') (Leaf 's')))
  (Leaf 'k'))
(Branch (Leaf 'e') (Leaf 'l')))
  (Leaf 'l'))
(Leaf '!')

fbtree = mkFBTree tree

updown n t | normalFBTree t =
  if n > 0 then updown (n-1) (findLeftMostChild (findRoot t))
  else t

main = print (updown 10 fbtree)



Re: Finding primes using a primes map with Haskell and Hugs98

2000-12-20 Thread Colin . Runciman

 There are numerous ways of optimising sieving for primes, none of which
 have much to do with this list.  For example, two suggestions:
 (1) for each k modulo 2*3*5*7, if k is divisible by 2/3/5 or 7, ignore, otherwise
 sieve separately for this k on higher primes.  (Or you might use products of
 more or less primes, depending on memory and how high you were going.)
 ...
 I don't really see why any of this has anything to do with Haskell though.
 When it comes to seriously icky bit-twiddling algorithms I don't think Haskell
 has much to offer over C, especially as you'd have to make everything unboxed if
 you want comparable speed.

Forgive the self-reference, but the following short article is
all about this very topic:

C. Runciman,
Lazy wheel sieves and spirals of primes,
Journal of Functional Programming, v7, n2, pp219--226,
March 1997.
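For comparison with the wheel sieves of the article, the familiar unoptimised lazy sieve looks like this (a baseline sketch, not the article's algorithm):

```haskell
-- Trial division over a trivial "wheel": 2 followed by the odd numbers.
-- The wheel sieves of the article generalise this to skip multiples of
-- 2, 3, 5, 7, ... as well.
primes :: [Int]
primes = sieve (2 : [3,5..])
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
```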


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



runtime optimization of haskell

2000-03-23 Thread Colin . Runciman

D. Tweed wrote:
| I'm just curious if there's anyone in the Haskell/FP community working on
| such things? (The closest thing I'm aware of is David Lester's stuff on
| throw away compilation (sorry no pointer))

Perhaps you were thinking of David *Wakeling*'s work?  See:

David Wakeling,
The dynamic compilation of lazy functional programs,
Journal of Functional Programming, 8(1), pp61-82,
January 1998.





post-doc position to suit Haskell enthusiast

1999-09-03 Thread Colin . Runciman


 Post-doctoral research in functional programming:
 two-year funded position available; applications invited.

Topic: tracing lazy functional computations.

  When: starting ASAP (at latest by 31 March 2000).

 Where: York, UK.

For details, enquiries, etc please contact:

Dr Colin Runciman
  Department of Computer Science
  University of York, YORK, YO10 5DD, UK
   
 Phone: +44 (1904) 432740 
   E-mail: [EMAIL PROTECTED]







re: Random Access Files in Haskell

1999-07-08 Thread Colin . Runciman

A little while ago Damir Medak asked:

| Any experiences or hints how to implement Random Access Files
| in Haskell?

The Binary library distributed with nhc13 and nhc98 supports
random access.  It can be used to implement indexed file
structures of various kinds.

See http://www.cs.york.ac.uk/fp/nhc98/libs/Binary.html which
includes a pointer to our ISMM'98 paper `The Bits between
The Lambdas'.

Colin R






re: virtual terminal in Haskell

1999-07-08 Thread Colin . Runciman

Marcin Kowalczyk (I think) wrote:

| I have to think about a good abstraction of terminal actions. I don't
| quite like ...  because it does not allow integration with arbitrary IO
| (or I miss something?) and it heavily depends on the terminal having
| particular properties, offered in a context-independent interface of
| control sequences. For example it does not fit with an extension of
| performing several actions virtually and rendering the screen contents
| with all of them applied at once.

One of my first Haskell programs was a virtual terminal.
It allowed integration with arbitrary I/O, abstracted away from
control sequences, and supported the accumulation of virtual
actions so that all could be applied at once to the real display.

See http://www.cs.york.ac.uk/~colin/papers/skye91.ps.gz for 
a workshop paper about it.

A revised version of the paper was included as Chapter 5 in
`Applications of Functional Programming', C. Runciman and D. Wakeling
(Eds.), UCL Press, 1995.  ISBN: 1-85728-377-5.

Warning: my program pre-dates monadic I/O, and some language hindrances
noted in the paper have now been resolved.  However, I think most of
what I did and said is still more or less valid.

Colin R






MonadZero

1998-11-04 Thread Colin . Runciman

I'm with the Option 2 Zap MonadZero Today lobby.

I dislike ~ patterns for all sorts of reasons (won't drag up all *that*
again now!) but that the introduction or elimination of a ~ can alter
the *type* of an expression is particularly horrible.

The attempt to introduce notions of guaranteed evaluation in types is
something we have already backed off for Haskell '98 (eg. class Eval).

Haskell '98 does not attempt to distinguish non-empty or infinite lists
in the static type system, despite the extra guarantees this could
give.  Nor does it attempt to distinguish total from partial functions
(eg by pattern coverage) in the static type system, ditto.  So arguably
MonadZero is just a complicating anomaly.

As Eric says, there is a trade-off between the extent of static
guarantees on the one hand and pragmatic simplicity on the other.
I am unashamedly in favour of pragmatic simplicity, particularly when
the type system is already quite complex for widespread use.

Option 2.  Zap!

Colin R






some Standard Haskell issues

1998-08-07 Thread Colin . Runciman

* the name!

  Names including a date, like Haskell 98, or a specific use,
  like Teaching Haskell, could mislead.  Standard Haskell 1 is
  rather long (and ambiguous).  The reasons why Haskell 1.5
  suggests greater stability than Haskell 1.4 are too obscure.

  So if Standard Haskell says too much, my vote goes to Stable Haskell.

  If even that still sounds too much like a final resting point, rather
  than a base for further development, how about Root Haskell or Stock
  Haskell (or even Basic Haskell :-)?

* import and infix declarations anywhere in a module?

  I am against this proposal.  Collecting all such declarations
  at the head of the module is better for human readers.  Allowing
  them anywhere would also complicate and slow down program analysis
  that only requires module dependencies (eg. Haskell-specific `make'
  tools).

* layout rules

  A precise specification of the layout rules is clearly desirable.
  But the proposed change `relaxing' the rule for if-then-else seems a
  bit funny.  Isn't it at odds with the original ISWIM motivation for
  having layout rules at all?

* maximal munch and comments

  Explicitly allowing operators such as --- and -- is not just
  a clarification; it is a change in the comment convention. (cf. p8 of
  the 1.4 report `The sequence -- immediately terminates a symbol ...')

  Though it is attractive to allow a wide range of operator symbols
  I'm not convinced this change would be an improvement.

  --- comment
  --
  -- Is the previous line a comment? The line before that?

  Any change putting in doubt (or even preventing) the commenthood of
  an unbroken line of 2 or more dashes would be a pain.

* monomorphism restriction

  (Last but hardly least!)   Surely MR qualifies as a trap that
  it would be nice to clean up.  It takes three pages to explain in
  the 1.4 report, and there is plenty of evidence that programmers
  still fall over it frequently.

  Would it be too much/little to require all declaration groups in an
  exporting module to be unrestricted -- a straightforward syntactic
  condition?
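The standard small example of the trap (not from the message): a pattern binding without a type signature.

```haskell
import Data.List (genericLength)

-- Without this signature, the monomorphism restriction applies to the
-- pattern binding 'len = genericLength', and defaulting fixes it at a
-- single numeric type; the signature keeps it usable at every Num type.
len :: Num b => [a] -> b
len = genericLength
```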





more on name, MR and comments

1998-08-07 Thread Colin . Runciman

|  Names including a date, like Haskell 98, ... could mislead. 
| How would this be misleading?

There is a popular myth that newer is better.  Dating a language almost
guarantees that before long it is seen as ... well ... dated!

Haskell 2 (or Haskell 2005) might come along and be a big improvement;
but then again it might not.  This too can be seen in the history of
other programming languages!

Anyway, I agree with Jeff that one can get too hung up about the name. 
Haskell 98 would be wonderful compared with `Well, there's Haskell 1.4
at the moment, but actually there's another revision exercise in
progress and ...'.

|  Would it be too much/little to require all declaration groups in an
|  exporting module to be unrestricted -- a straightforward syntactic
|  condition?
| Personally, I'd like to junk the MR, but I don't follow your suggestion?

It is nothing new.  I'm just asking whether the sufficient condition for MR
noted in Rule 2 of 4.5.5 (p51 in 1.4 report) could sensibly be made a
requirement for groups including the declaration of an exported variable.
Although this would be even more restrictive, it would at least be simpler
to check -- and to express/understand in error messages.  Straw Man to
provoke more thought about MR.


PS I still think I would miss multidashes as comments!
 





haskell

1993-10-12 Thread Colin Runciman


John L writes:

 When I have tried to talk to individuals about natural number induction
 using (n+k) patterns, then the problems start. Because they are so
 unlike the form of patterns they have become used to they find all
 sorts of difficulties. What if n is negative. Ah yes, well it can't be.
 Why not. It just can't. etc.  ... Let's throw them [n+k patterns] out.

It sounds to me as if the problem is with negative numbers.
So, one more time ...  What about the *natural* numbers?
Doesn't anyone else program with these?  (Maybe just occasionally? :-)
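For readers who have not met them: the (n+k) style and the guard style of natural-number induction, side by side (the n+k clause is shown only as a comment, since later revisions of Haskell dropped the feature):

```haskell
-- n+k style:              fact (n+1) = (n+1) * fact n
-- guard style, making the naturals-only intent explicit:
fact :: Integer -> Integer
fact 0 = 1
fact n | n > 0 = n * fact (n - 1)
```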

Colin R