[Haskell-cafe] ANNOUNCE: xmonad 0.3

2007-09-04 Thread Donald Bruce Stewart

The xmonad dev team is pleased to announce the 0.3 release of xmonad. 

xmonad: a tiling window manager
   http://xmonad.org

About:

xmonad is a tiling window manager for X. Windows are arranged
automatically to tile the screen without gaps or overlap, maximising
screen use. All features of the window manager are accessible from the
keyboard: a mouse is strictly optional. xmonad is written and extensible
in Haskell. Custom layout algorithms, and other extensions, may be
written by the user in config files. Layouts are applied dynamically,
and different layouts may be used on each workspace. Xinerama is fully
supported, allowing windows to be tiled on several screens.

Features:

* Very stable, fast, small and simple.
* Automatic window tiling and management
* First class keyboard support: a mouse is unnecessary
* Full support for tiling windows on multi-head displays
* Full support for floating windows
* XRandR support to rotate, add or remove monitors
* Per-workspace layout algorithms
* Per-screen custom status bars
* Easy, powerful customisation and reconfiguration
* Large extension library
* Extensive documentation and support for hacking

Since xmonad 0.2, the following notable features and bug fixes have appeared:

New features:

  * floating layer support: transient windows are not tiled by default,
and windows may be dragged to and from a traditional floating layer
(which allows mouse-resizing, and overlapping windows).

  * improved Xinerama support: workspace switching reuses multiple
displays more effectively.

  * huge new extension library. Over 50 extensions to xmonad have been
contributed by users, and are all available in a standard library,
with documentation.

More information, screenshots, documentation and community resources are
available from:

http://xmonad.org

Xmonad is available from hackage, and via darcs. Happy hacking!

The Xmonad Team:

Spencer Janssen
Don Stewart
Jason Creighton

Xmonad has also received contributions from at least:

Alec Berryman Andrea Rossato Chris Mears
Daniel Wagner David Glasser David Lazar
David Roundy Hans Philipp Annen Joachim Fasting
Joe Thornber Kai Grossjohann Karsten Schoelzel
Michael Sloan Miikka Koskinen Neil Mitchell
Nelson Elhage Nick Burlett Peter De Wachter
Robert Marlow Sam Hughes Shachaf Ben-Kiki
Shae Erisson Simon Peyton Jones Stefan O'Rear

as well as many others on the IRC channel and mailing list. Thanks to everyone!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hawiki articles

2007-09-03 Thread Donald Bruce Stewart
lemming:
 
 On Mon, 3 Sep 2007, Derek Elkins wrote:
 
  On Mon, 2007-09-03 at 14:57 +0200, Henning Thielemann wrote:
   In the current Haskell Wiki (haskell.org/haskellwiki) I found references
   to articles of the old Hawiki (haskell.org/hawiki), like OnceAndOnlyOnce
   and SeparationOfConcerns. Are the files still available somewhere?
 
  HaWiki was taken down for the not unreasonable reason that, since it is
  no longer being updated, there is a decent chunk of out-dated and thus
  misleading information on it.  Unfortunately there is a ridiculously
  large amount of extremely good information on it that never got ported and
  frankly never is going to be ported (for good and bad reasons).
 
 ... and there was unfortunately no support in porting the stuff. I guess
 some simple program (perl -p -e 's/{{{/<hask>/g' :-) could have simplified
 a lot. It's however more difficult for me to do this via the web interface,
 than for the people who have access to the bare files.

The problem was the licensing. Only pages whose authors were known, and
who gave permission to license the work freely, were ported. And only
some of those pages actually got moved.

  Clearly making HaWiki live again would be a bad idea; haskellwiki is
  being used now for a reason.  However, having it up in stasis as it
  was with some prominent indication on each page that it is out of
  date/no longer updated/obsolete or whichever term suits your fancy should
  effectively solve the problem.  It should be possible and probably even
  desirable to distill the pages into static HTML documents so that
  MoinMoin would not be needed, if that is an issue.
 
 Since it is easier to port Wiki code than HTML code, I propose copying all
 hawiki pages as they are to haskellwiki in a directory like DEPRECATED.
 From there people can go on moving pages into the space of current pages.
 This would also allow tracking which pages came from HaWiki.

We can't do that, due to the licensing issues. Details here:

http://haskell.org/haskellwiki/HaWiki_migration

-- Don


Re: [Haskell-cafe] Serial Communications in Haskell

2007-09-03 Thread Donald Bruce Stewart
mmitar:
 Hi!
 
  Of course, I don't know how to call Windows API functions from Haskell,
  and I have no idea how to hook things to the IO library so that I can
  use a Handle for a serial port.  I'm looking for some advice on how to
  proceed.
 
 You can check how I did this in my Lego Mindstorms NXT interface,
 pre-beta version:
 
 http://www.forzanka.si/files/NXT.tgz
 
 It is for UNIX (POSIX?) systems but I had similar problems so I had to
 use FFI (foreign function interface) to set up a link. You will
 probably just have to replace that with Windows API calls. I hope.
 

That's really cool! I hope you can upload this to hackage soon.

-- Don


Re: runghc printing result of main when main is not IO ()

2007-08-30 Thread Donald Bruce Stewart
tomasz.zielonka:
 On Thu, Aug 30, 2007 at 08:33:37AM +0100, Simon Marlow wrote:
  Tomasz Zielonka wrote:
  Hello!
  Consider:
  $ cat R.hs
  main = return [()]
  $ runghc R.hs
  [()]
  This was a bit surprising for me, because I thought that runghc
  mimics the way a compiled program behaves.
 
  This doesn't happen with 6.6.1, I believe we fixed it at some point by 
  having runghc perform main >> return () instead of just main.
 
 Great! I'm sorry for not checking with a recent version :-/

It was a cute feature though -- great for demos where you can avoid a
print statement ;)

-- Don
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [Haskell-cafe] Norvig's Sudoku Solver in Haskell

2007-08-28 Thread Donald Bruce Stewart
chaddai.fouche:
 For the translation of the above OCaml code, there is not much to do,
 in fact it is mostly functional, and so easily translated in Haskell
 code, note that I add a code to handle input of the form
 4.8.5.3..7..2.6.8.4..1...6.3.7.5..2.1.4..,
 to resolve it and print a solution :

Spencer Janssen also wrote a rather elegant translation, which you can
find on hpaste.org

import Data.List
import Data.Ord

n = 3 :: Int

invalid (i, j) (i', j') = i == i' || j == j' ||
  i `div` n == i' `div` n && j `div` n == j' `div` n

select p n p' ns | invalid p p' = filter (/= n) ns
 | otherwise= ns
  
add p n sols = sortBy (comparing (length . snd)) $ map f sols
  where f (p', ns) = (p', select p n p' ns)

search [] = [[]]
search ((p, ns):sols) = [(p, n):ss | n <- ns, ss <- search $ add p n sols]

You can see the development here,

http://hpaste.org/2348  
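
As a quick sanity check of the constraint logic in that paste, the `invalid` predicate should flag any two cells sharing a row, column, or 3x3 box. Below is a minimal, self-contained sketch (`n` and `invalid` are copied from the code above; `main` is just a hypothetical driver, not part of the paste):

```haskell
-- Self-contained check of the cell-conflict predicate from the paste above.
n :: Int
n = 3

invalid :: (Int, Int) -> (Int, Int) -> Bool
invalid (i, j) (i', j') = i == i' || j == j' ||
  (i `div` n == i' `div` n && j `div` n == j' `div` n)

main :: IO ()
main = mapM_ print
  [ invalid (0, 0) (0, 8)  -- same row
  , invalid (0, 0) (8, 0)  -- same column
  , invalid (1, 1) (2, 2)  -- same 3x3 box
  , invalid (0, 0) (4, 5)  -- unrelated cells
  ]
```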

-- Don


Re: [Haskell-cafe] How can I pass IOUArrays to FFI functions?

2007-08-21 Thread Donald Bruce Stewart
ryani.spam:
 
 Your code is broken in a most evil and insidious way.
 
 
 
Interesting.  This is for a toy project, so I'm not too
worried, but let's say I wanted to do this correctly and I
was set on using IOUArray for some reason. (The Haskell wiki
claims that StorableArray is slower; is that actually the
case?)
 
 
 
Which of the following fixes would work now?  Which has the
lowest probability of not working in the future?
 
 
 
1) Declare f to take Addr# and don't construct a Ptr Word32
 
I suspect this would be enough unless the GC changed to
some sort of continuous GC which can happen even without an
allocation
 
 
 
2) Declare f to take MutableByteArray#
 
Is this good enough to make the collector happy?
 
 
 
3) Something else I haven't thought of?
 
If there was no other option, and StorableArray wasn't
slower, and I was working on a real project, I'd probably
wrap my own around ForeignPtr like Data.ByteString.

Yeah, we have ForeignPtr arrays and Foreign.Array /exactly/ for calling
to C safely. I don't know why people suggest all these other dodgy
solutions, when there's one that's guaranteed by the FFI spec to work.

-- Don


Re: [Haskell-cafe] GHC optimisations

2007-08-21 Thread Donald Bruce Stewart
phil:
 On Mon, Aug 20, 2007 at 09:57:38PM +0100, Simon Peyton-Jones wrote:
 GHC does some constant folding, but little by way of strength
 reduction, or using shifts instead of multiplication.  It's pretty
 easy to add more: it's all done in a single module.  Look at
 primOpRules in the module PrelRules.
 
 Patches welcome!  But please also supply test-suite tests that check
 the correctness of the rules.
 
 Sucking another example out of comp.lang.functional:
 
 This:
 
   import System
 
  f :: Int -> Int -> Int
  f s n = if n > 0 then f (s+n) (n-1) else s
 
   main = do
 [n] - getArgs
 putStrLn $ show $ f 0 (read n) 
 
 is 3-4x slower than this:
 
  #include <stdio.h>
  #include <stdlib.h>
  #include <assert.h>
 
  int f(int s, int n) {
    return n > 0 ? f(s+n, n-1) : s;
  }
 
   int main(int argc, char *argv[]) { 
 assert(argc == 2);
  printf("%d\n", f(0, strtol(argv[1],0,0)));
   }
 
 The generated assembler suggests (if I've read it correctly) that gcc
 is spotting that it can replace the tail call with a jump in the C
 version, but for some reason it can't spot it for the Haskell version
 when compiling with -fvia-C (and neither does ghc itself using
 -fasm). So the haskell version ends up pushing and popping values on
 and off the stack for every call to f, which is a bit sad.
 

That doesn't sound quite right. The C version should get a tail call
with gcc -O2, and the Haskell version should be a tail call anyway.

Let's see:

C
$ gcc -O t.c -o t 
$ time ./t 10
zsh: segmentation fault (core dumped)  ./t 10
./t 10  0.02s user 0.22s system 5% cpu 4.640 total

Turning on -O2

$ time ./t 10
-243309312
./t 10  1.89s user 0.00s system 97% cpu 1.940 total


And GHC:

$ ghc -O2 A.hs -o A
$ time ./A 10   
   
-243309312
./A 10  3.21s user 0.01s system 97% cpu 3.289 total

So, what, 1.6x slower than gcc -O2?
Seems ok without any tuning.
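
For what it's worth, the usual source-level tweak for loops like this (not discussed in the thread; GHC's strictness analyser typically does it for you at -O2) is forcing the accumulator so no thunk chain builds up. A sketch:

```haskell
-- Strict-accumulator variant of f from the benchmark above.
-- `seq` forces the running sum before each recursive call.
f :: Int -> Int -> Int
f s n = if n > 0 then let s' = s + n in s' `seq` f s' (n - 1) else s

main :: IO ()
main = print (f 0 100)  -- 5050, the sum of 1..100
```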

-- Don


Re: [Haskell-cafe] Parsing binary data.

2007-08-21 Thread Donald Bruce Stewart
dot:
 On Sun, 19 Aug 2007, Peter Cai wrote:
 
  My duty is writing a network server which talks to another server through a
  binary based private protocol.
 
 Haskell needs something like Erlang's bit syntax.
 
 http://erlang.org/doc/reference_manual/expressions.html#6.16
 http://erlang.org/doc/programming_examples/bit_syntax.html#4
 The IP header example in the latter is a brilliant real-world example.
 
 It has recently been upgraded to support arbitrary bit streams.
 See http://www.it.uu.se/research/group/hipe/papers/padl07.pdf
 

Yes, we've looked at this in the context of Data.Binary. Rather than
extending the core syntax, one option is to use Template Haskell,

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/BitSyntax-0.3

Another is to just use monads and pattern guards, which give quite
reasonable syntax.
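
To illustrate how far plain library code gets you without new syntax, here is a small sketch using Data.Bits from the standard libraries (the IPv4 first-byte layout is just the classic example from the Erlang documents linked above):

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word8)

-- Split the first byte of an IPv4 header into the version (high 4 bits)
-- and header length (low 4 bits), the kind of field extraction that
-- Erlang's bit syntax expresses directly.
versionAndIHL :: Word8 -> (Word8, Word8)
versionAndIHL b = (b `shiftR` 4, b .&. 0x0F)

main :: IO ()
main = print (versionAndIHL 0x45)  -- (4,5): IPv4, 5-word header
```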

-- Don


[Haskell-cafe] #haskell irc channel reaches 400 users

2007-08-20 Thread Donald Bruce Stewart

A small announcement :)

5 1/2  years after its inception, under the guiding hand of Shae Erisson
(aka shapr), the #haskell IRC channel[1] on freenode has finally reached
400 users!

To chart the growth, we can note that the channel was founded 
in late 2001, and had slow growth till 2006, reaching 200 users in
January of that year. Since then growth in the user base has been far
more rapid, reaching 300 users in Dec 2006, and 400 users now, in August
2007.

This puts the channel at around the 13th largest community of the 5500
freenode channels. For comparison, a sample of the state of the other
language communities:

#php         485
#perl        472
##c++        457
##c          445
#python      430
#ruby-lang   420

   #haskell  411

#lisp        246
##java       236
##javascript 226
#perl6       144
#scheme      139
#erlang      118
#lua         105
#ocaml        58

You can see the growth of the channel over here: 
http://www.cse.unsw.edu.au/~dons/irc

If you've not dropped by the channel yet, feel free to come and chat,
and toss around some lambdas! :)

Cheers,
  Don

1. http://www.haskell.org/haskellwiki/IRC_channel


Re: Data.HashTable.hashInt seems somewhat sub-optimal

2007-08-19 Thread Donald Bruce Stewart
Simon.Frankau:
 I tried submitting this bug through the tracker, but it seemed to give
 me permissions errors - probably a firewall issue here. :( Apologies.

Just a note that HashTable is pretty redundant in Haskell. For sizes you
can fit in memory, Map, IntMap or Data.Sequence perform at effectively
O(1) for reasonable purposes, and can hold larger numbers of elements
than Data.HashTable.
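
For instance, an IntMap lookup is O(min(n, W)), effectively constant for machine-word keys. A toy sketch (the sizes are made up):

```haskell
import qualified Data.IntMap as IM

main :: IO ()
main = do
  -- A map of 100000 entries; a lookup inspects at most word-width bits.
  let m = IM.fromList [ (k, k * k) | k <- [1 .. 100000] ]
  print (IM.lookup 500 m)  -- Just 250000
  print (IM.member 0 m)    -- False: 0 was never inserted
```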

-- Don


Re: [Haskell-cafe] Explaining monads

2007-08-11 Thread Donald Bruce Stewart
ronguida:
 
  Monads are undoubtedly more pervasive, and that could be because there
  aren't as many arrow and comonad tutorials, atomic ones or otherwise.
 
 Moreover, Comonad isn't even in the standard libraries (Hoogle returns
 no results for it).
 
 When I searched for tutorials on monads, I found lots of them.  In
 fact, I have heard that writing (yet another) monad tutorial is part
 of standard Haskell initiation.
 

Regarding the available tutorials, I've collected them under each
flavour here:

haskell.org/haskellwiki/Blog_articles/Monads 

Including 42 monad tutorials, 6 arrow tutorials, 3 comonad tutorials,
and 0 on applicative functors.

The source for the `standard' (only) comonad library is available from
one of the sigfpe tutorials, but really should be on hackage.

-- Don


Re: [Haskell-cafe] Haskell vs GC'd imperative languages, threading, parallelizeability (is that a word? :-D )

2007-08-10 Thread Donald Bruce Stewart
hughperkins:
Now, I did have kind of a shootout thread with Don and
Sebastien, calculating prime numbers, where Don managed to
get to within an order of magnitude of C# performance (I
think he got to about 70-80% of C# performance, cool!) -

Despite my better judgement, I'll just point out that you stopped
measuring the programs midway through. However, the shootout guys
didn't, and they publish the results here:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=nsievebits&lang=all

The GHC-compiled entry comes in at 1.2x C++, which I'm quite happy with.
Strangely, despite your assertions, the C# entry lags well down the list.

-- Don


Re: [Haskell-cafe] Small question

2007-08-10 Thread Donald Bruce Stewart
hughperkins:
   You'll find by the way that the imperative
GC'd, stack/heap protected languages run *significantly*
faster for many (not all I guess?) algorithms and
applications.

Wow. Big claims. It must be silly hat day on the Haskell lists.

We're trying hard to be friendly, perhaps you don't realise that your
inflammatory remarks are out of place here?

Now, just looking at just this assertion, for which you provide no
references: let's see, only imperative and GC'd eh?

Ruby?

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=ghc&lang2=ruby
 
JavaScript?

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=ghc&lang2=javascript
 
Python?

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=ghc&lang2=python
 

Hmm. Not looking so good so far for the imperative, GC'd languages.
  
Java?

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=ghc&lang2=java
 
C#?

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=ghc&lang2=csharp

Doesn't look too good for your assertion :(

Maybe there really isn't something fundamental about being `imperative'
or being garbage collected that matters performance-wise. Rather, having
a good optimising native code compiler is more to the point?




So, what do we do about this?  

Your unusual advocacy is interesting, but out of place on the Haskell
mailing list.  Given the community is growing rather rapidly, I'd like
to encourage you to contribute more quality material to the list, and to
tone down the sniping.

Recall that your comments go out to around 2000 people directly, and
further via Gmane. So making silly insults simply has the effect of
alienating the people whose help you might seek in the future.

To help think more clearly about the impact of noise on the mailing
list, and how to actively seek to improve (or maintain) quality, I like
to refer to this useful article:


http://headrush.typepad.com/creating_passionate_users/2006/12/how_to_build_a_.html

Give back to those who give you their time, rather than insulting them
with silly statements. If we can make this step, there may well be
unexpected benefits, in terms of collaboration and participation, that
you would otherwise miss out on.

-- Don (Trying to encourage friendly online communities)

Unfortunately, I suspect you'll snip out 90% of this mail, and reply
with some non sequitur. Please prove me wrong.


Re: [Haskell-cafe] Haskell vs GC'd imperative languages, threading, parallelizeability (is that a word? :-D )

2007-08-10 Thread Donald Bruce Stewart
hughperkins:
 
On 8/10/07, Donald Bruce Stewart [EMAIL PROTECTED]
wrote:
 
  It's using bit arrays.
 
Well I'm a total Haskell newbie, and you're using Haskell to
write imperative code, so it's really hard for me to read,
but looking at your code, you have: 

   (IOUArray Int Bool) -- an array of Bool
 
Bool is a 32-bit value in Haskell AFAIK?  (or possibly a
machine-dependent sized value, but certainly not a bit?)

No, STUArray s Int Bool is a bit array.
Look at the space use. Or try replacing Bool with Word32, and see what happens.
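
A short sketch of such a bit array in action: an unboxed Bool array used as a prime sieve (in GHC's unboxed arrays each Bool element occupies a single bit, which is the point being made above):

```haskell
import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
import Data.Array.Unboxed (UArray, assocs)
import Control.Monad (forM_, when)

-- Sieve of Eratosthenes over an unboxed Bool array (one bit per entry).
sieve :: Int -> UArray Int Bool
sieve n = runSTUArray $ do
  a <- newArray (2, n) True
  forM_ [2 .. n] $ \i -> do
    alive <- readArray a i
    when alive $
      forM_ [i * i, i * i + i .. n] $ \j ->
        writeArray a j False
  return a

main :: IO ()
main = print [ i | (i, True) <- assocs (sieve 50) ]
```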

-- Don


Re: [Haskell-cafe] Slow IO or bad code?

2007-08-09 Thread Donald Bruce Stewart
j.vimal:
 Hi
 I am practicing writing code in haskell, by solving problems at this
 site. http://spoj.pl.
 The problem http://spoj.pl/problems/FASHION , is pretty simple.
 
 1. Given two lists A, B of N numbers, sort them and take the sum of products,
 i.e. Sum ai * bi
 
 I wrote a code, but seems to give Time limit exceeded!

We have a page for these SPOJ problems:


http://haskell.org/haskellwiki/SPOJ#Techniques_for_dealing_with_problems_efficiently

With bytestring IO, you can aim to be around the speed of OCaml or C++,
according to the existing bytestring entries.
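
As a sketch of what that looks like for FASHION (the parsing helper uses bytestring, which ships with GHC; the sample lists in `main` are made up for illustration):

```haskell
import qualified Data.ByteString.Char8 as B
import Data.List (sort)
import Data.Maybe (mapMaybe)

-- Sum of products of the two sorted lists: the whole of the FASHION problem.
solve :: [Int] -> [Int] -> Int
solve as bs = sum (zipWith (*) (sort as) (sort bs))

-- Fast whitespace-separated integer parsing: the bytestring technique
-- the wiki page recommends over Prelude's read.
parseInts :: B.ByteString -> [Int]
parseInts = mapMaybe (fmap fst . B.readInt) . B.words

main :: IO ()
main = print (solve [1, 3, 2] [3, 2, 1])  -- 1*1 + 2*2 + 3*3 = 14
```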

-- Don


Re: [Haskell-cafe] where to put handy functions?

2007-08-09 Thread Donald Bruce Stewart
rk:
 On 8/9/07, Chad Scherrer [EMAIL PROTECTED] wrote:
  Is there process for submitting functions for consideration for
  inclusion into future versions of the standard libraries? For example,
  I'd like to see this in Data.List:
 
 I imagine including it in std lib takes a while. Would it be a good
 idea to include it in MissingH http://software.complete.org/missingh
 in the mean time?

It is probably better not to stick everything in MissingH -- it's too big
to be used easily. Smaller packages (say, Data.List.Extensions) make
more sense. However, getting ok for stuff in Data.List isn't too hard.
Just follow the libraries submission process:

http://haskell.org/haskellwiki/Library_submissions

Cheers,
  Don


Re: [Haskell-cafe] Dynamic thread management?

2007-08-09 Thread Donald Bruce Stewart
hughperkins:
 
Haskell/FP seems to have solved the hardest bit of
threading, which is making it obvious which bits of a
program are parallelizable, and which are not.
Remains to actually parallelize out the programs.  Am I
being naive or is this trivial?

Is there some reason why we cant just start a function
running in a single thread, whilst running profiling, then
after a while we check which bits of the function are taking
time to run, and are parallelizable, and we parallelize those
out?

Perhaps have a look at this new paper:

Feedback directed implicit parallelism in Haskell
http://research.microsoft.com/~tharris/papers/2007-fdip.pdf

-- Don


Re: [Haskell-cafe] Language support for imperative code. Was: Re: monad subexpressions

2007-08-08 Thread Donald Bruce Stewart
brianh:
 Hugh Perkins wrote:
 On 8/8/07, *Brian Hulley* [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 In contrast, all the pure functional GUIs that I've seen...
 
 In defense of Haskell (wow!), note that imperative languages are not 
 without problems in GUIs.  In a multithreaded environment,
 If you're using multiple threads you'd need to be in the IO monad to 
 create/manipulate the MVars or TVars or whatever. (In contrast to eg 
 AliceML where you could easily use a synchronising logic variable 
 without having to leave all the familiar applicative coding comforts 
 behind to brave the fiercesome lonely toil of Monadia ;-))

Or use purely functional channels (Chan).
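
A minimal sketch of Chan-based communication between threads (the strings and names here are illustrative):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, writeChan, readChan)

main :: IO ()
main = do
  c <- newChan
  -- A worker thread sends its result down the channel;
  -- readChan blocks until the value arrives.
  _ <- forkIO (writeChan c "hello from the worker")
  msg <- readChan c
  putStrLn msg
```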

-- Don


[Haskell] Haskell Weekly News: August 07, 2007

2007-08-07 Thread Donald Bruce Stewart
---
Haskell Weekly News
http://sequence.complete.org/hwn/20070807
Issue 64 - August 07, 2007
---

   Welcome to issue 64 of HWN, a weekly newsletter covering developments
   in the [1]Haskell community.

   This issue marks the second anniversary of the Haskell (not quite)
   Weekly News. Thanks to the Haskell community for support, content and
   for reading over the last two years!

   1. http://haskell.org/

Announcements

   OSCON Haskell Tutorial. Simon Peyton-Jones Appeared at the O'Reilly
   Open Source Convention (OSCON) in Portland, delivering a range of
   talks, including [2]A Taste of Haskell, [3]A Keynote on Functional
   Languages, [4]Nested Data Parallelism and [5]Transactional Memory for
   Concurrent Programming. Videos are available for most of these talks:
   [6]A Taste of Haskell: Part 1, [7]A Taste of Haskell: Part 2,
   [8]slides for A Taste of Haskell, [9]Transactional Memory for
   Concurrent Programming and [10]the NDP talk at the London Hugs
   meeting.

   2. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14016
   3. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14773
   4. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14014
   5. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14017
   6. http://blip.tv/file/324976
   7. http://www.blip.tv/file/325646/
   8. http://conferences.oreillynet.com/presentations/os2007/os_peytonjones.pdf
   9. http://www.blip.tv/file/317758/
  10. http://video.google.co.uk/videoplay?docid=370317485066035666

   hpodder 1.0. John Goerzen [11]announced version 1.0.0 of hpodder, the
   command-line podcatcher (podcast downloader) that just happens to be
   written in everyone's favorite language. You can get it [12]here.
   Version 1.0.0 sports a new mechanism for detecting and disabling feeds
   or episodes that repeatedly result in errors, updates to the Sqlite
   database schema, and several bugfixes.

  11. http://article.gmane.org/gmane.comp.lang.haskell.general/15452
  12. http://software.complete.org/hpodder

   encoding-0.1. Henning Günther [13]announced the release of 'encoding',
   a Haskell library to cope with many character encodings found on
   modern computers. At the moment it supports (much more is planned):
   ASCII, UTF-8, -16, -32, ISO 8859-* (alias latin-*), CP125* (windows
   codepages), KOI8-R, Bootstring (base for punycode)

  13. http://article.gmane.org/gmane.comp.lang.haskell.general/15481

   Dimensional 0.6: Statically checked physical dimensions. Björn
   Buckwalter [14]announced a library providing data types for performing
   arithmetic with physical quantities and units. Information about the
   physical dimensions of the quantities/units is embedded in their types
   and the validity of operations is verified by the type checker at
   compile time. The boxing and unboxing of numerical values as
   quantities is done by multiplication and division with units.

  14. http://article.gmane.org/gmane.comp.lang.haskell.cafe/26944

Hackage

   This week's new libraries in [15]the Hackage library database.

 * hgal-1.0.1. Jean Philippe Bernardy. [16]Computation automorphism
   group and canonical labeling of a graph

 * hpodder-1.0.3. John Goerzen. [17]Podcast Aggregator (downloader)

 * dlist-0.3.1. Don Stewart. [18]Differences lists: a list-like type
   supporting O(1) append

 * pointfree-1.0. Felix Martini. [19]Stand-alone command-line version
   of the point-less plugin for lambdabot

 * encoding-0.1. Henning Guenther. [20]A library for various
   character encodings

 * AppleScript-0.1.3. Wouter Swierstra. [21]Call AppleScript from Haskell

 * SDL-ttf-0.4.0. David Himmelstrup. [22]Binding to libSDL_ttf

 * Finance-Quote-Yahoo-0.2. Brad Clawsie. [23]Obtain quote data from
   finance.yahoo.com

 * xmobar-0.7. Andrea Rossato. [24]A Minimalistic Text Based Status Bar

  15. http://hackage.haskell.org/
  16. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hgal-1.0.1
  17. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hpodder-1.0.3
  18. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3.1
  19. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pointfree-1.0
  20. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/encoding-0.1
  21. 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/AppleScript-0.1.3
  22. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/SDL-ttf-0.4.0
  23. 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Finance-Quote-Yahoo-0.2
  24. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/xmobar-0.7

Conference roundup

   OSCON. Simon Peyton-Jones gave a series of popular talks about Haskell
   and functional programming at OSCON, in Portland. Below are collected
   just some of the 

[Haskell-cafe] Haskell Weekly News: August 07, 2007

2007-08-07 Thread Donald Bruce Stewart
---
Haskell Weekly News
http://sequence.complete.org/hwn/20070807
Issue 64 - August 07, 2007
---

   Welcome to issue 64 of HWN, a weekly newsletter covering developments
   in the [1]Haskell community.

   This issue marks the second anniversary of the Haskell (not quite)
   Weekly News. Thanks to the Haskell community for support, content and
   for reading over the last two years!

   1. http://haskell.org/

Announcements

   OSCON Haskell Tutorial. Simon Peyton-Jones Appeared at the O'Reilly
   Open Source Convention (OSCON) in Portland, delivering a range of
   talks, including [2]A Taste of Haskell, [3]A Keynote on Functional
   Languages, [4]Nested Data Parallelism and [5]Transactional Memory for
   Concurrent Programming. Videos are available for most of these talks:
   [6]A Taste of Haskell: Part 1, [7]A Taste of Haskell: Part 2,
   [8]slides for A Taste of Haskell, [9]Transactional Memory for
   Concurrent Programming and [10]the NDP talk at the London Hugs
   meeting.

   2. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14016
   3. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14773
   4. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14014
   5. http://conferences.oreillynet.com/cs/os2007/view/e_sess/14017
   6. http://blip.tv/file/324976
   7. http://www.blip.tv/file/325646/
   8. http://conferences.oreillynet.com/presentations/os2007/os_peytonjones.pdf
   9. http://www.blip.tv/file/317758/
  10. http://video.google.co.uk/videoplay?docid=370317485066035666

   hpodder 1.0. John Goerzen [11]announced version 1.0.0 of hpodder, the
   command-line podcatcher (podcast downloader) that just happens to be
   written in everyone's favorite language. You can get it [12]here.
   Version 1.0.0 sports a new mechanism for detecting and disabling feeds
   or episodes that repeatedly result in errors, updates to the Sqlite
   database schema, and several bugfixes.

  11. http://article.gmane.org/gmane.comp.lang.haskell.general/15452
  12. http://software.complete.org/hpodder

   encoding-0.1. Henning Günther [13]announced the release of 'encoding',
   a Haskell library to cope with many character encodings found on
   modern computers. At the moment it supports (much more is planned):
   ASCII, UTF-8, -16, -32, ISO 8859-* (alias latin-*), CP125* (windows
   codepages), KOI8-R, Bootstring (base for punycode)

  13. http://article.gmane.org/gmane.comp.lang.haskell.general/15481

   Dimensional 0.6: Statically checked physical dimensions. Björn
   Buckwalter [14]announced a library providing data types for performing
   arithmetic with physical quantities and units. Information about the
   physical dimensions of the quantities/units is embedded in their types
   and the validity of operations is verified by the type checker at
   compile time. The boxing and unboxing of numerical values as
   quantities is done by multiplication and division with units.

  14. http://article.gmane.org/gmane.comp.lang.haskell.cafe/26944

Hackage

   This week's new libraries in [15]the Hackage library database.

 * hgal-1.0.1. Jean Philippe Bernardy. [16]Computation automorphism
   group and canonical labeling of a graph

 * hpodder-1.0.3. John Goerzen. [17]Podcast Aggregator (downloader)

 * dlist-0.3.1. Don Stewart. [18]Differences lists: a list-like type
   supporting O(1) append

 * pointfree-1.0. Felix Martini. [19]Stand-alone command-line version
   of the point-less plugin for lambdabot

 * encoding-0.1. Henning Guenther. [20]A library for various
   character encodings

 * AppleScript-0.1.3. Wouter Swierstra. [21]Call AppleScript from Haskell

 * SDL-ttf-0.4.0. David Himmelstrup. [22]Binding to libSDL_ttf

 * Finance-Quote-Yahoo-0.2. Brad Clawsie. [23]Obtain quote data from
   finance.yahoo.com

 * xmobar-0.7. Andrea Rossato. [24]A Minimalistic Text Based Status Bar

  15. http://hackage.haskell.org/
  16. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hgal-1.0.1
  17. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hpodder-1.0.3
  18. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3.1
  19. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pointfree-1.0
  20. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/encoding-0.1
  21. 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/AppleScript-0.1.3
  22. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/SDL-ttf-0.4.0
  23. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Finance-Quote-Yahoo-0.2
  24. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/xmobar-0.7

Conference roundup

   OSCON. Simon Peyton-Jones gave a series of popular talks about Haskell
   and functional programming at OSCON, in Portland. Below are collected
   just some of the 

Re: [Haskell-cafe] Sudoku Solver

2007-08-06 Thread Donald Bruce Stewart
j.vimal:
 On 8/7/07, Hugh Perkins [EMAIL PROTECTED] wrote:
  Note that the official way to solve sudoku is to use dancing links, but
  I guess you are creating a naive implementation precisely as a base-line
  against which to measure other implementations?
 
 Well, Dancing Links (DLX) is just a data structure + a few techniques
 to help with the backtracking, after modeling Sudoku as a constraint
 satisfaction problem.
 
 You can write a backtracking algorithm that is at least as fast as DLX :-)

See also,

http://haskell.org/haskellwiki/Sudoku
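For a feel of what a naive backtracking search looks like in Haskell (the baseline style discussed above, applied here to n-queens rather than Sudoku to keep it short):

```haskell
-- Naive backtracking: place one queen per row, pruning unsafe columns.
queens :: Int -> [[Int]]
queens n = go n
  where
    go 0 = [[]]
    go k = [q : qs | qs <- go (k - 1), q <- [1 .. n], safe q qs]
    -- a new queen is safe if it shares no column or diagonal
    -- with the queens already placed on the rows below it
    safe q qs = and [q /= c && abs (q - c) /= d | (d, c) <- zip [1 ..] qs]

main :: IO ()
main = print (length (queens 8))
```

The list monad does the backtracking for free: each failed `safe` test silently discards that branch of the search.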

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Sudoku Solver

2007-08-06 Thread Donald Bruce Stewart
hughperkins:
 
On 8/7/07, Donald Bruce Stewart [EMAIL PROTECTED]
wrote:
 
  See also,
  [2]http://haskell.org/haskellwiki/Sudoku
  -- Don
 
Just out of ... errr curiosity... which of those
implementations is the fastest?
 

No idea. You could compile them all with -O2, run them on a set of
puzzles, and produce a table of results :-)

I'm a little surprised no one's tried a parallel solution yet, actually.
We've got an SMP runtime for a reason, people!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell FCGI server.

2007-08-04 Thread Donald Bruce Stewart
george.moschovitis:
 
is it possible to create a FCGI server that listens to a
specific port using the Haskell FCGI library?
The front end web server would then communicate with this
back end FCGI server through this port.
A small example would be really appreciated.
thanks,

Yep, certainly possible. lambdaweb does this, as should others. AFAIK
it's just like using fastcgi in any other language.

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fastcgi-3000.0.0

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] When is waitForProcess not necessary?

2007-08-02 Thread Donald Bruce Stewart
bayer:
 If one is calling runInteractiveCommand for a sure-thing returning a small
 amount of output (say, ls for a modest directory), is it necessary to call
 waitForProcess?
 
 My waitForProcess calls came under scrutiny when I tried to GHC profile a
 threaded process, which isn't possible. It turns out that my programs run
 faster if they don't call waitForProcess, and it sure seems empirically
 that they never fail to run correctly. It would seem to me that in a lazy
 language, later code wouldn't presume it was looking at an entire string
 until the actual EOF came through, completing the string.
 
 So at the end of a program, with nothing else to do but wait, why not wait
 implicitly rather than via an explicit call to waitForProcess? What am I
 missing?
 
 (I've searched this forum; an interesting example is runCommand in
 
 
 http://happs.org/HAppS/src/HAppS/Util/Common.hs
 
 but I'm asking why this isn't overkill in my scenario. I'm actually calling
 Markdown.pl on tiny files (source code of lengths a human would read), and
 it is certainly sluggish enough to be a fair test.)
 
 ___

You might be interested in :

http://www.cse.unsw.edu.au/~dons/code/newpopen/

In particular,

readProcess :: FilePath   -- ^ command to run
            -> [String]   -- ^ any arguments
            -> String     -- ^ standard input
            -> IO (Either ExitCode String)  -- ^ either the stdout, or an exitcode

Strict String IO for processes, without the zombies.
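The standard process library provides a similar strict readProcess (with a slightly different type: it returns the output directly and throws on a non-zero exit). A minimal sketch of that standard version, assuming a POSIX echo on the PATH:

```haskell
import System.Process (readProcess)

main :: IO ()
main = do
    -- strict: the child's entire stdout is read before readProcess
    -- returns, so the process is reaped and no zombie is left behind
    out <- readProcess "echo" ["hello"] ""
    putStr out
```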

-- Don Stewart (Authorised by the People for System.IO.Strict party)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] problem building lambdabot

2007-07-30 Thread Donald Bruce Stewart
mvanier:
 Thanks, but this doesn't answer the question.  I can load up the 
 Control.Arrow module fine in ghci. Is there a problem with the packaging 
  information?
 

It needs the 'arrows' package from hackage, not just Control.Arrow.
You'll get by fine by just removing the 'arrows' dependency: it's only
needed if you wish to run arrows code in @eval

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HDBC Laziness (was Re: HDBC or HSQL)

2007-07-29 Thread Donald Bruce Stewart
mutjida:
 Hello,
 
  I have heard from a number of people that this behavior is not very
  newbie-friendly.  I can see how that is true.  I have an API revision
 coming anyway, so perhaps this is the time to reverse the default
  laziness of HDBC calls (there would be a '-version of everything with
  laziness enabled, still).
 
  Thoughts?
 
 I think this is a good idea.
 
 In general, I think effort should be made to remind people that lazy
 IO operations break Haskell's type abstractions and can thus lead to
 unexpected/dangerous behavior.
 

More generally, I think it should be at least obvious how to get hold of
a System.IO.Strict. We should provide that option (perhaps as part of
the 'strict' package on hackage.haskell.org).
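In the absence of such a module, a strict readFile is only a few lines: force the whole string before closing the handle, so no lazy-IO surprises escape (a sketch, not the eventual strict package API):

```haskell
import System.IO

-- Read a whole file strictly: the handle is closed before we return,
-- and the result is a fully evaluated String.
readFileStrict :: FilePath -> IO String
readFileStrict f = do
    h <- openFile f ReadMode
    s <- hGetContents h
    length s `seq` hClose h   -- forcing the length forces the whole string
    return s

main :: IO ()
main = do
    writeFile "strict-demo.txt" "hello"
    s <- readFileStrict "strict-demo.txt"
    putStrLn s
```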

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Videos of Haskell talks

2007-07-28 Thread Donald Bruce Stewart
A new wiki page collecting tutorials and presentations about Haskell, is
on the wiki:

http://haskell.org/haskellwiki/Video_presentations

Highlights include:

Transactional Memory for Concurrent Programming
Parametric Polymorphism and the Girard-Reynolds Isomorphism
Programming in the Age of Concurrency
Nested Data Parallelism in Haskell

If you know of other online video material, please add it!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] advantages of using fix to define recursive functions

2007-07-26 Thread Donald Bruce Stewart
voigt:
 Donald Bruce Stewart wrote:
 harald.rotter:
 
 Hi,
 
 I read about the usage of fix to define recursive functions. Although I
 think that I understood how to use fix, I still wonder what the
 advantages of fix are (as compared to the conventional approach to
 define recursive functions).
 
 Any hints are appreciated.
 
 So actually, I suppose it is useful for small, anonymous recursive 
 definitions.
 
 It also exposes the recursive computation structure for direct
 manipulation, enabling one to perform certain program
 transformations/refactorings. Search for fixpoint fusion and fixed
 point promotion. While one might say: that's the business of a
 compiler, actually existing ones are not very sophisticated in that
 regard, so one might want to do such transformations by hand...

Oh, excellent point. Just as naming particular loop structures (such as
'map' or 'unfoldr') enables more precise optimisations, such as fusion,
so naming recursive functions saves the compiler some work discovering
the name. 

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] advantages of using fix to define recursive functions

2007-07-26 Thread Donald Bruce Stewart
harald.rotter:
 
 Hi,
 
 I read about the usage of fix to define recursive functions. Although I
 think that I understood how to use fix, I still wonder what the
 advantages of fix are (as compared to the conventional approach to
 define recursive functions).
 
 Any hints are appreciated.
 

There's no obvious advantage that I know of, other than being cute.
An example from xmonad:

allocaXEvent $ \p -> fix $ \again -> do
    more <- checkMaskEvent d enterWindowMask p
    when more again

So actually, I suppose it is useful for small, anonymous recursive definitions.
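For the conventional comparison: with fix, an anonymous recursive function can be applied on the spot, with no top-level name needed:

```haskell
import Data.Function (fix)

main :: IO ()
main = print (fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
                  (5 :: Integer))   -- factorial, written inline
```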

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fixity of (>>>)

2007-07-26 Thread Donald Bruce Stewart
ross:
 On Fri, Jul 27, 2007 at 12:08:30AM +1000, Donald Bruce Stewart wrote:
  There's a theory this should work:
  
  getContents >>= lines >>> map read >>> sum >>> print
  
  But unfortunately we have:
  
   `(>>=)' [infixl 1] 
   `(>>>)' [infixr 1]
  
  Meaning we must write:
  
  getContents >>= (lines >>> map read >>> sum >>> print)
  
  Indeed, all Arrow ops are infixr. 
  Are there any technical/compelling reasons for this?
 
 Not that I can recall.  But it's the precedence rather than the
 associativity that's bothering you here.

Yes, I suppose it is a vote for 0.5 precedence, if they're not to
behave as for >> and >>= ? Which are:

infixl 1  >>, >>=

So I can write:

print 'x' >> getChar >>= ord >>> print 
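With the parentheses in place, the pure part of such a pipeline works today; a small self-contained check of left-to-right composition with (>>>):

```haskell
import Control.Arrow ((>>>))

-- left-to-right pipeline: split into lines, parse each, sum
total :: String -> Int
total = lines >>> map read >>> sum

main :: IO ()
main = print (total "1\n2\n3\n")
```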

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Fixity of (>>>)

2007-07-26 Thread Donald Bruce Stewart
There's a theory this should work:

getContents >>= lines >>> map read >>> sum >>> print

But unfortunately we have:

 `(>>=)' [infixl 1] 
 `(>>>)' [infixr 1]

Meaning we must write:

getContents >>= (lines >>> map read >>> sum >>> print)

Indeed, all Arrow ops are infixr. 
Are there any technical/compelling reasons for this?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] UTF-16

2007-07-25 Thread Donald Bruce Stewart
andrewcoppin:
 I don't know if anybody cares, but... Today a wrote some trivial code to 
 decode (not encode) UTF-16.
 
 I believe somebody out there has a UTF-8 decoder, but I needed UTF-16 as 
 it happens. (I didn't bother decoding code points outside the BMP - I'm 
 sure you can figure out why.)
 
 If anybody is interested, I can share the code - but it's pretty minimal...

Perhaps you could polish it up, and provide it in a form suitable for
use as a patch to:

http://code.haskell.org/utf8-string/

that is, put it in a module:

Codec.Binary.UTF16.String

and provide the functions:

encode :: String -> [Word8]
decode :: [Word8] -> String

? And then submit that as a patch to Eric, the utf8 maintainer.
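A minimal decode in that shape might look like the following (big-endian, BMP only, no surrogate-pair handling; a sketch, not the eventual patch):

```haskell
import Data.Word (Word8)
import Data.Char (chr)

-- Decode big-endian UTF-16, code points in the BMP only.
decode :: [Word8] -> String
decode (hi : lo : rest) =
    chr (fromIntegral hi * 256 + fromIntegral lo) : decode rest
decode _ = []

main :: IO ()
main = putStrLn (decode [0x00, 0x48, 0x00, 0x69])
```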

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HDBC or HSQL

2007-07-25 Thread Donald Bruce Stewart
mutjida:
 Hello,
 
 Would you go as far to say that when new programmers ask which database
 binding to use, we should _recommend_ HDBC then?  (As we do gtk2hs, for
 the gui libraries).
 
 At this point in time, my advice to new Haskell programmers would be:
 first try Takusen, as long as it can connect to the server you're
 using (and don't hesitate to ask the maintainers and/or haskell-cafe
 questions); then try HDBC, perhaps avoiding the lazy retrieval
 function.

Thanks Jeff! That's exactly what I was looking for. 

Does anyone know why Takusen isn't on hackage yet? It appears to be
cabalised, and have a tarball:

http://hackage.haskell.org/packages/archive/pkg-list.html#cat:Database
http://darcs.haskell.org/takusen/

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell Weekly News: July 23, 2007

2007-07-22 Thread Donald Bruce Stewart
---
Haskell Weekly News
http://sequence.complete.org/hwn/20070723
Issue 63 - July 23, 2007
---

   Welcome to issue 63 of HWN, a weekly newsletter covering developments
   in the [1]Haskell community.

   This week, the HWN rises zombie-like from its repository, as your
   friendly HWN editor tries to get his PhD finished. This bumper issue
   is filled out with 100 new Haskell blog articles and dozens of new
   libraries!

   1. http://haskell.org/

Announcements

   Learn Haskell in 10 minutes. Chris Smith [2]prepared a new tutorial on
   the basics of Haskell.

   2. http://haskell.org/haskellwiki/Learn_Haskell_in_10_minutes

   Haskell Program Coverage 0.4. Andy Gill [3]announced release 0.4 of
   Hpc, a tool for Haskell developers. Hpc is a tool-kit to record and
   display Haskell Program Coverage. Hpc includes tools that instrument
   Haskell programs to record program coverage, run instrumented
   programs, and display the coverage information obtained.

   3. http://article.gmane.org/gmane.comp.lang.haskell.general/15381

   Uniplate 1.0. Neil Mitchell [4]announced Uniplate (formerly known as
   Play), a library for boilerplate removal requiring only Haskell 98
   (for normal use) and optionally multi-parameter type classes (for more
   advanced features).

   4. http://article.gmane.org/gmane.comp.lang.haskell.general/15366

   Atom: Hardware description in Haskell. Tom Hawkins [5]announced Atom,
   a high-level hardware description language embedded in Haskell that
   compiles conditional term rewriting systems into conventional HDL.

   5. http://article.gmane.org/gmane.comp.lang.haskell.general/15341

   Catch. Neil Mitchell [6]announced a pattern-match checker for Haskell,
   named Catch. Do you sometimes encounter the dreaded 'pattern match
   failure: head' message? Do you have incomplete patterns which
   sometimes fail? Do you have incomplete patterns which you know don't
   fail, but still get compiler warnings about them? Would you like to
   statically ensure the absence of all calls to error? This is what
   Catch helps ... catch!

   6. http://article.gmane.org/gmane.comp.lang.haskell.general/15334

   Haskell Communities and Activities Report. Andres Loeh [7]announced
   that the Haskell Communities and Activities Report is now available,
   covering the increasingly diverse groups, projects and individuals
   working on, with, or inspired by Haskell.

   7. http://article.gmane.org/gmane.comp.lang.haskell.general/15302

   The Reduceron. Matthew Naylor [8]announced the Reduceron, a processor
   for executing Haskell programs on FPGA with the aim of exploring how
   custom architectural features can improve the speed in which Haskell
   functions are evaluated. Being described entirely in Haskell (using
   Lava), the Reduceron also serves as an interesting application of
   functional languages to the design of complex control circuits such as
   processors.

   8. http://article.gmane.org/gmane.comp.lang.haskell.general/15301

   Data.Derive. Neil Mitchell [9]announced Data.Derive, a library and a
   tool for deriving instances for Haskell programs. It is designed to
   work with custom derivations, SYB and Template Haskell mechanisms. The
   tool requires GHC, but the generated code is portable to all
   compilers. We see this tool as a competitor to DrIFT.

   9. http://article.gmane.org/gmane.comp.lang.haskell.general/15292

   Piffle, a packet filter language. Jaap Weel [10]announced Piffle, a
   compiler for a packet filter language in Haskell: a good example of
   how Haskell can be used in an application domain (low level computer
   networking) where people tend to use C for everything, including
   writing compilers.

  10. http://article.gmane.org/gmane.comp.lang.haskell.general/15290

   Towards a Programming Language Nirvana. Simon Peyton-Jones [11]appears
   on video, talking about the Haskell path to programming language
   Nirvana

  11. http://channel9.msdn.com/showpost.aspx?postid=326762

   Yi 0.2. Jean-Philippe Bernardy [12]announced the 0.2.0 release of the
   Yi editor. Yi is a text editor written and extensible in Haskell. The
   goal of Yi is to provide a flexible, powerful and correct editor core
   dynamically scriptable in Haskell. Yi is also a Haskell interpreter,
   very much like Emacs is a Lisp interpreter; this makes it really easy
   to dynamically hack, experiment with, and modify Yi. All tools and
   goodies written in Haskell are also readily available from the editor. This is
   implemented by binding to the GHC API.

  12. http://article.gmane.org/gmane.comp.lang.haskell.general/15260

   Foreign.AppleScript. Wouter Swierstra [13]announced a library for
   compiling and executing AppleScript from Haskell. AppleScript is a
   scripting language available on all modern Apple computers. It can be
   used to script 

[Haskell-cafe] More on improving the mailing list

2007-07-17 Thread Donald Bruce Stewart
An interesting study on problem resolution and feedback on some
technical mailing lists,

How to Help Mailing Lists Help Readers
(Results of Recent Data Analysis) 
http://praxagora.com/andyo/professional/mailing_list_follow_up/

including graphs! :)

With conclusions at the end on how to improve mailing lists by
/integrating material into wikis/, and providing both theoretical and
practical advice for a given problem.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Getting lambdabot to work with 6.6.1

2007-07-17 Thread Donald Bruce Stewart
syntactically.vincenz:
 
Dear,
After a suggestion from quicksilver, I decided to write this
message.  To get lambdabot working on 6.6.1 you need to:
1) ensure you have the regex-base, regex-compat and
regex-posix packages from hackage installed

The .cabal file now enforces this.

2) If you install them from user, make sure to add --user in
the build-script of lambdabot
3) Get hs-plugins from darcs
[1]http://www.cse.unsw.edu.au/~dons/code/hs-plugins/ , not
from hackage (Lemmih informed it's an issue with the .hi
parser)

quite so.

4) Fix Lib/Parser.hs and replace '> 606' with '>= 606' in
the #if __GLASGOW_HASKELL__ around the definition of
pling_name and co

Send a patch.

5) Remove the Maybe-definition of arbitrary in
scripts/ShowQ.hs

Oh, that's changed in QuickCheck (yet another API change in 6.6.1!).
Send a cpp patch.

6) Make sure to uncomment the runplugs section in
lambdabot.cabal

Use the lambdabot.cabal.plugins file.

7) ...
8) Profit
Cheers

Good.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Getting lambdabot to work with 6.6.1

2007-07-17 Thread Donald Bruce Stewart
shachaf:
 I also commented out arrows as a dependency in the .cabal, I think.
 Was that not a good idea? it seemed to work.
 

You just won't be able to use arrows transformers and other fun things
in @eval.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] External Sort and unsafeInterleaveIO

2007-07-17 Thread Donald Bruce Stewart
midfield:
 hi folks --
 
 a haskell newbie here, searching for comments and wisdom on my code.
 
 i had a project to try to implement external sort in haskell as a
 learning exercise.  (external sort is sorting a list that is too large
 to fit in main memory, by sorting in chunks, spooling to disk, and
 then merging.  more properly there probably should be multiple stages,
 but for simplicity i'm doing a one-stage external sort.)
 
 the trick is the number of files can quickly grow very large, so it is
 best to use one large file and seek around inside it.  however as one
 can imagine the order-of-IO-operations becomes a bit tricky, if you're
 seeking file handles around underneath Data.ByteString.Lazy's nose.
 but late this night after not thinking about it for a while i had a
 brainstorm: rewrite hGetContents to keep the handle position in the
 right place!  it's all about judicious use of unsafeInterleaveIO.
 
 it seems to be rather fast, strangely faster than the usual sort at
 times.  it also seems to have nice memory characteristics, though not
 perfect.  it's hard to test because the normal sort function takes
 too much RAM on large lists, making my computer swap like mad.

I have to agree with Mr. Apfelmus here. This is lovely code. It is exactly
what the ByteString team hoped people would be able to write with
ByteStrings: Zen of Haskell code, where you win by working at a high
level, rather than a low level. 

Thanks!

I've inserted some small comments throughout the source:

 module ExternalSort where
 
 Sort a list of Ords offline.  We're doing this to be able to sort
 things without taking up too much memory (for example sorting lists
 too large to fit in RAM.)  Laziness is imperative, as is the
 order-of-operations.
 
 import Control.Monad
 import Data.List
 import qualified Data.Binary as Bin
 import qualified Data.ByteString.Lazy as B
 import qualified Data.ByteString as P (hGetNonBlocking, null)
 import Data.ByteString.Base (LazyByteString(LPS))
 import Foreign.Storable (sizeOf)
 import System.IO (openFile, hClose, hSeek, hTell, hIsEOF, hWaitForInput,
   Handle, IOMode(ReadMode, WriteMode),
   SeekMode(AbsoluteSeek))
 import System.IO.Unsafe (unsafeInterleaveIO)
 
 import qualified Data.Edison.Seq.ListSeq as LS
 import qualified Data.Edison.Coll.SplayHeap as Splay
 
 Conceptually, we sort a list in blocks, spool blocks to disk, then
 merge back.  However for IO performance it is better to read off
 chunks of elements off the sorted blocks from disk instead of
 elements-at-a-time.
 
 It would be better if these were in KBytes instead of # of elements.
 
 blocksize :: Int
 blocksize = 1
 
 Turn a list into a list of chunks.
 
 slice :: Int -> [a] -> [[a]]
 slice _ [] = []
 slice size l = (take size l) : (slice size $ drop size l)

That's unnecessary parenthesis, and I'd probably use splitAt here:

myslice :: Int -> [a] -> [[a]]
myslice _ [] = []
myslice n xs = a : myslice n b  where (a,b) = splitAt n xs

And just to check:

*M> :m + Test.QuickCheck
*M Test.QuickCheck> quickCheck (\n (xs :: [Int]) -> n > 0 ==> slice n xs == myslice n xs)
OK, passed 100 tests.
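For readers without QuickCheck to hand, the same equivalence can be spot-checked with a plain run:

```haskell
slice, myslice :: Int -> [a] -> [[a]]
slice _ [] = []
slice size l = (take size l) : (slice size $ drop size l)

myslice _ [] = []
myslice n xs = a : myslice n b  where (a, b) = splitAt n xs

main :: IO ()
main = print (slice 2 [1 .. 7 :: Int] == myslice 2 [1 .. 7])
```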

 
 Turn a list into a list of blocks, each of which is sorted.
 
 blockify :: (Ord a) => Int -> [a] -> [[a]]
 blockify bsize l = map sort $ slice bsize l

Possibly you could drop the 'l' parameter:

blockify n = map sort . slice n

 
 Serialize a block, returning the (absolute) position of the start.
 
 dumpBlock :: (Ord a, Bin.Binary a) => Handle -> [a] -> IO Integer
 dumpBlock h b = do
   start <- hTell h
   B.hPut h $ Bin.encode b
   return start
 
 The actual sorting function.  We blockify the list, turning it into a
 list of sorted blocks, and spool to disk, keeping track of offsets.
 We then read back the blocks (lazily!), and merge them.
 
 externalSort [] = do return []
 externalSort l = do
   h <- openFile "ExternalSort.bin" WriteMode
   idx <- mapM (\x -> dumpBlock h x) (blockify blocksize l)

idx <- mapM (dumpBlock h) (blockify blocksize l)

   hClose h
   h <- openFile "ExternalSort.bin" ReadMode
   blocks <- mapM (\x -> do {bs <- hGetContentsWithCursor h x;
 return $ Bin.decode bs}) idx

Possibly

forM idx $ \x -> decode `fmap` hGetContentsWithCursor h x


   return (kMerge $ blocks)
 
 Merging chunks.  K-way merge (and in fact external sort in general) is
 detailed in Knuth, where he recommends tournament trees.  The easiest
 thing is to probably use one of Okasaki's heaps.  I'll use splay
 heaps, because I don't know any better.
 
 It would be better if I changed Ord for blocks to only check the first
 element.
 
 kMerge :: (Ord a) => [[a]] -> [a]
 kMerge [] = []
 kMerge l =
 let h = Splay.fromSeq l in
 kM (Splay.minElem h) (Splay.deleteMin h)
 where
 kM :: (Ord a) => [a] -> Splay.Heap [a] -> [a]
 kM l h
 | h == Splay.empty = l
 | otherwise =
 let next = Splay.minElem h
 (f, b) = span (\x 

Re: [Haskell-cafe] External Sort and unsafeInterleaveIO

2007-07-17 Thread Donald Bruce Stewart
midfield:
 hi --
 
 thanks for the useful comments!  i will definitely go through them
 carefully.  unfortunately for this code (but fortunately for me) i
 defend my dissertation on monday so i'm a little distracted right
 now.
 
 i'm more than happy to donate this code or whatever improvements
 happen to it.  actually, hGetContentsWithCursor seems like a candidate
 for inclusion with Data.ByteStrings or Data.Binary or something -- it
 seems like it might find other uses.  (i think you liked that bit of
 code because i ripped it off of you guys!  it's very short hamming

Can't fault that style ;)

 distance from the original.)  anyhow, all that will have to wait a
 couple weeks or so.  also i've never cabalized anything so i may come
 begging for help.

We have a tutorial for that, luckily:

http://haskell.org/haskellwiki/How_to_write_a_Haskell_program

And a tool to automate it, mkcabal, so should be fairly straightforward.

 
 at some point i thought i saw how to do recursive external sort, to
 keep memory usage truly constant, but with my current lack of sleep i
 have lost that illusion.  i'm also curious about the performance
 characteristics of this vs Prelude sort vs the version using the
 tournament mergesort apfelmus suggested.  i need to find a computer
 with a lot more RAM than my weakling laptop.  finally, it would be
 good to be able to have the blocksize controlled by Kb of RAM rather
 than # of elements, not sure how to get that information.
 
 ultimately this was part of my project to write lucene for haskell.  i
 think with this out of the way, plus all the Data.Binary / ByteString
 goodness, it shouldn't take too long.  keep writing good libraries for
 me!
 

Great. Good to see things working. 

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell for categorists

2007-07-16 Thread Donald Bruce Stewart
miguelimo38:
 Just being curious.
 
 There are a lot of tutorials ensuring the reader that, although
 Haskell is based on category theory, you don't have to know CT to use
 Haskell. So, is there ANY Haskell tutorial for those who do know CT?
 
 I don't need it, personally, but still...
 

I'd probably start here:

http://haskell.org/haskellwiki/Blog_articles/Mathematics#Category_theoretic

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] -O2 compile option can give speed increase over -O. Fasta shootout program test runs.

2007-07-16 Thread Donald Bruce Stewart
r.kelsall:
 I have been playing with the Fasta program in the shootout to see if
 I can make it umm faster. Starting from dons program on this page and
 adding some timing calculations as suggested on this wiki page
 
 http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=ghc&id=2
 http://www.haskell.org/haskellwiki/Timing_computations
 
 I added different OPTIONS into the top line of the program did a
 ghc --make fasta.hs   and ran it each time with  fasta 250
 (This is one tenth of the shootout figure.) These runs all keep the
 existing OPTIONS of  -fbang-patterns -fexcess-precision
 
   Seconds   OPTIONS Added
   ---   -
40.5
40.5-funbox-strict-fields
40.4  {-# INLINE rand #-}
17.2-O
17.0-O  -fvia-C
14.4-O  -optc-march=pentium4
11.5-O2
11.2-O3
11.5-O3   {-# INLINE rand #-}
11.3-O2 -optc-march=pentium4
 
 There was a bit of variation, I've averaged over two runs. This is on
 an Intel Pentium D 2.66GHz running W2K and GHC 6.6.1.
 
 It seems the -O2 option can give a significant speed increase relative
 to just the -O option. This is contrary to the documentation which says
 
 http://www.haskell.org/ghc/docs/latest/html/users_guide/options-optimise.html
 http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html
 
 it won't make any difference. I guess it's program, architecture and
 operating system specific, but according to these figures the -O2 option
 seems well worth a try for programs that need speed. It may be that
 we sacrifice bounds checking or something important with -O2, I don't
 know.

Yes, -O2 is getting better, as new optimisations like SpecConstr are
enabled by it. For shootout problems, I'd selectively test with -O2, and
if it is better, use that.

Good work! And yes, I see that it is currently compiled with:

-O -fbang-patterns -fexcess-precision -fglasgow-exts -optc-march=pentium4

if -O2 is consistently better here, then we could happily switch.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Newbie question about tuples

2007-07-15 Thread Donald Bruce Stewart
bf3:
 Donald:
 Yeah, there's some known low level issues in the code generator
 regarding heap and stack checks inside loops, and the use of registers
 on x86.
 
 But note this updated paper,
 http://www.cse.unsw.edu.au/~chak/papers/CLPKM07.html
 
 Add another core to your machine and it is no longer 4x slower :)
Add 15 more cores and it's really no longer 4x slower :)
   
 Maybe this is yet another newbie stupid question, but do you mean that 
 GHC does automatic multi-threading?  (Haskell seems very suitable for 
 that) Otherwise adding an extra core does not really help does it?
 

No, though that would be nice! You do have to program in a parallel
manner, either by using forkIO, Control.Parallel, or parallel arrays.
When you do, you have the option of such code scaling up to more cores
relatively easily.

My advice: starting writing threaded code now, with *lots* of threads,
so your program will have the ability to start using +RTS -N16 when you
get a new machine :)
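A tiny sketch of that style: fire off many threads and collect their results through an MVar (compile with -threaded and run with +RTS -Nk to spread them over k cores):

```haskell
import Control.Concurrent

main :: IO ()
main = do
    box <- newEmptyMVar
    -- lots of threads, each posting one result into the shared MVar
    mapM_ (\i -> forkIO (putMVar box (i * i))) [1 .. 10 :: Int]
    -- collect exactly as many results as threads were started;
    -- the sum is the same whatever order the threads run in
    results <- mapM (const (takeMVar box)) [1 .. 10 :: Int]
    print (sum results)
```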

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
hughperkins:
 
Sebastian,
Why would I write a slow, complicated algorithm in C#?
I'm not making these comparisons for some academic paper,
I'm trying to get a feel for how the languages run in
practice.
And really in practice, I'm never going to write a prime
algorithm using merge and so on, I'd just use the original
naive Haskell algorithm, that runs 500 times slower (at
least) than my naive C# algo.  I'm just allowing you guys to
optimize to see how close you can get.
Note that the C# algo is not something created by C#
experts, it's just something I hacked together in like 2
minutes.

For fast, mutable prime sieves, see the shootout:


http://shootout.alioth.debian.org/gp4/benchmark.php?test=nsievebits&lang=ghc&id=4

(a bit sieve) is pretty fast, 1.8x highly optimised C, and also
readable, for what it does:

import Data.Array.IO
import Data.Array.Base
import System
import Text.Printf

main = do
    n <- getArgs >>= readIO . head :: IO Int
    mapM_ (sieve . (1 *) . (2 ^)) [n, n-1, n-2]

sieve n = do
    a <- newArray (2,n) True :: IO (IOUArray Int Bool) -- an array of Bool
    r <- go a n 2 0
    printf "Primes up to %8d %8d\n" (n::Int) (r::Int) :: IO ()

go !a !m !n !c
    | n == m    = return c
    | otherwise = do
        e <- unsafeRead a n
        if e
            then let loop !j
                        | j <= m    = unsafeWrite a j False >> loop (j+n)
                        | otherwise = go a m (n+1) (c+1)
                 in loop (n+n)
            else go a m (n+1) c

So perhaps just code up a mutable array version the same as for C# ?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
hughperkins:
 
Hey, guys, I just realized this test is not really fair!
I've been using the Microsoft .Net compiler ,which is a
proprietary closed-source compiler.
To be fair to Haskell, we should probably compare it to
other open source products, such as g++ and mono?
Here are the timings ;-)
Haskell
==
J:\dev\haskell>ghc -O2 -o primechaddai.exe PrimeChaddai.hs
J:\dev\haskell>primechaddai
number of primes: 664579
Elapsed time: 26.234

Oh, I think we can do a bit better than that
See below.


g++
===
J:\dev\test\testperf>g++ -O2 -o prime.exe prime.cpp
J:\dev\test\testperf>prime
number of primes: 664579
elapsed time: 0.984
mono

J:\dev\test\testperf>erase primecs.exe
J:\dev\test\testperf>gmcs primecs.cs
J:\dev\test\testperf>mono primecs.exe
number of primes: 664579
elapsed time: 0,719
Microsoft C#
============
J:\dev\test\testperf>csc /nologo primecs.cs
J:\dev\test\testperf>primecs
number of primes: 664579
elapsed time: 0,6875
Not only does mono come close to the Microsoft .Net time,
both mono and Microsoft .Net are faster than g++ ;-) and
whack Haskell.


I reimplemented your C++ program, could you time this please?


{-# OPTIONS -O2 -fbang-patterns #-}

import Data.Array.IO
import Data.Array.Base
import System
import Control.Monad
import Data.Bits
import Text.Printf

main = sieve 10000000

sieve n = do
    a <- newArray (2,n) True :: IO (IOUArray Int Bool) -- an array of Bool
    r <- go a n 2 0
    printf "Primes up to %8d %8d\n" (n::Int) (r::Int) :: IO ()

go !a !m !n !c
    | n == m    = return c
    | otherwise = do
        e <- unsafeRead a n
        if e
            then let loop !j
                        | j < m = do
                            x <- unsafeRead a j
                            when x (unsafeWrite a j False)
                            loop (j+n)
                        | otherwise = go a m (n+1) (c+1)
                 in loop (n `shiftL` 1)
            else go a m (n+1) c

On my machine:

$ ghc -o primes primes.hs

$ time ./primes
Primes up to 10000000   664579
./primes  0.52s user 0.01s system 99% cpu 0.533 total

0.5s. So rather fast.

Enjoy!
  Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
hughperkins:
 
On 7/15/07, Donald Bruce Stewart [EMAIL PROTECTED]
wrote:
 [snip] unsafeWrite[snip]
 [snip]unsafeRead[snip]
Hi Donald, the idea is to use this for operational code, so
avoiding unsafe operations is preferable ;-)  You'll note
that the C# version is not using unsafe operations, although
to be fair that's because they worked out slower than the
safe version ;-)

'unsafe' here just means direct array indexing, the same as in the other
languages. Haskell's 'unsafe' is a little more paranoid than other
languages.

Also, the whole algorithm is bound to the IO Monad, which is
something I'd like to avoid if possible, since my entire
interest in Haskell stems from the possibilites of running
programs easily on 1 megacore processors in the future.

You're deciding that on a cache-thrashing primes benchmark?

Since the goal is to flip bits very quickly in the cache, you could
localise this to the ST monad then, as it's perfectly pure on the
outside.
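The pattern looks roughly like this (a toy example rather than the sieve
itself; `countEvens` is a made-up name):

```haskell
import Control.Monad.ST
import Data.Array.ST

-- The mutable array never escapes runST, so countEvens is
-- observably pure despite the in-place writes.
countEvens :: Int -> Int
countEvens n = runST $ do
    a <- newArray (0, n) False :: ST s (STUArray s Int Bool)
    mapM_ (\i -> writeArray a i True) [0, 2 .. n]
    fmap (length . filter id) (getElems a)

-- countEvens 10 == 6
```

Callers see only `Int -> Int`; the mutation is an implementation detail.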

Anyway, congrats, you got nearly as fast as C#!

Try the other version I just sent. This one trashes cache lines
needlessly.

What C# version are you using, by the way? (So I can check if it does
any tricks).

-- Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
hughperkins:
 
On 7/15/07, Sebastian Sylvan [EMAIL PROTECTED]
wrote:
 
  I don't see what the point of this is? Why do timings of
  different
  algorithms? Of course you could do the same optimization
  in any
  language, so why do you think it's relevant to change the
  algorithm in
  *one* of the languages and then make comparisons?
 
Sebastien,
Well, as you yourself said, different languages work
differently, so there's no point in trying to directly
implement the C# algorithm in Haskell: it just wont work, or

In this case it is fine. You're setting bits in the cache. Please use the
same algorithm, or any conclusions are meaningless. 

-- Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
hughperkins:
 
Hey, I just realized I can shave off another 30% in C# ;-)
So now the timings become:

Ok. So do the same thing to the Haskell program. The compilers should
produce pretty much identical assembly.


{-# OPTIONS -O2 -optc-O -fbang-patterns #-}

import Control.Monad.ST
import Data.Array.ST
import Data.Array.Base
import System
import Control.Monad
import Data.Bits

main = print (pureSieve 10000000)

pureSieve :: Int -> Int
pureSieve n = runST( sieve n )

sieve n = do
    a <- newArray (0,n-1) True :: ST s (STUArray s Int Bool)
    let cutoff = truncate (sqrt (fromIntegral n)) + 1
    go a n cutoff 2 0

go !a !m cutoff !n !c
  | n == m    = return c
  | otherwise = do
      e <- unsafeRead a n
      if e
        then if n < cutoff
                then let loop !j
                            | j < m = do
                                x <- unsafeRead a j
                                when x $ unsafeWrite a j False
                                loop (j+n)
                            | otherwise = go a m cutoff (n+1) (c+1)
                     -- 46340 is the largest Int whose square fits in 32
                     -- bits, so start at n*n only when it cannot overflow
                     in loop (if n < 46340 then n * n else n `shiftL` 1)
                else go a m cutoff (n+1) (c+1)
        else go a m cutoff (n+1) c

$ ghc -o primes primes.hs
$ time ./primes
664579
./primes  0.38s user 0.00s system 95% cpu 0.392 total

And indeed, it runs nearly 50% faster.

All this benchmark does is thrash the cache, so every write that would
needlessly dirty the cache is worth avoiding; hence you should always
check whether you actually need to clear a bit before writing. Given the
same algorithm, any native code compiler should produce roughly the same
result, since it's really a hardware benchmark.
 
-- Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
dons:
 hughperkins:
  
 Hey, I just realized I can shave off another 30% in C# ;-)
 So now the timings become:
 
 Ok. So do the same thing to the Haskell program. The compilers should
 produce pretty much identical assembly.
 

Oh, and I forgot you count up by two now. Here's the Haskell
transliteration (again).


{-# OPTIONS -O2 -optc-O -fbang-patterns #-}

import Control.Monad.ST
import Data.Array.ST
import Data.Array.Base
import System
import Control.Monad
import Data.Bits

main = print (pureSieve 10000000)

pureSieve :: Int -> Int
pureSieve n = runST( sieve n )

sieve n = do
    a <- newArray (3,n) True :: ST s (STUArray s Int Bool)
    let cutoff = truncate (sqrt (fromIntegral n)) + 1
    go a n cutoff 3 1

go !a !m cutoff !n !c
  | n >= m    = return c
  | otherwise = do
      e <- unsafeRead a n
      if e
        then if n < cutoff
                then let loop !j
                            | j < m = do
                                x <- unsafeRead a j
                                when x $ unsafeWrite a j False
                                loop (j+n)
                            | otherwise = go a m cutoff (n+2) (c+1)
                     in loop (if n < 46340 then n * n else n `shiftL` 1)
                else go a m cutoff (n+2) (c+1)
        else go a m cutoff (n+2) c


Marginally faster:

$ time ./primes
664579
./primes  0.34s user 0.00s system 89% cpu 0.385 total

Very cache-dependent though, so widely varying runtimes could be
expected.

-- Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
dons:
 dons:
  hughperkins:
   
  Hey, I just realized I can shave off another 30% in C# ;-)
  So now the timings become:
  
  Ok. So do the same thing to the Haskell program. The compilers should
  produce pretty much identical assembly.
  
 
 Oh, and I forgot you count up by two now. Here's the Haskell
 transliteration (again).

Oh, also, I was using the wrong brackets in the last program!
Stick with me, because this makes the program go at least 100x faster.

First, we'll move the pureSieve into a library module:

{-# OPTIONS -O2 -optc-O -fbang-patterns #-}

module Primes (pureSieve) where

import Control.Monad.ST
import Data.Array.ST
import Data.Array.Base
import System
import Control.Monad
import Data.Bits

pureSieve :: Int -> Int
pureSieve n = runST ( sieve n )

sieve n = do
    a <- newArray (3,n) True :: ST s (STUArray s Int Bool)
    let cutoff = truncate (sqrt (fromIntegral n)) + 1
    go a n cutoff 3 1

go !a !m cutoff !n !c
  | n >= m    = return c
  | otherwise = do
      e <- unsafeRead a n
      if e
        then if n < cutoff
                then let loop !j
                            | j < m = do
                                x <- unsafeRead a j
                                when x $ unsafeWrite a j False
                                loop (j+n)
                            | otherwise = go a m cutoff (n+2) (c+1)
                     in loop (if n < 46340 then n * n else n `shiftL` 1)
                else go a m cutoff (n+2) (c+1)
        else go a m cutoff (n+2) c

And now just a module to call it:

{-# OPTIONS -fth #-}

import Primes

main = print $( let x = pureSieve 10000000 in [| x |] )

Pretty simple to compile and run this now:

$ ghc --make -o primes Main.hs
$ time ./primes
664579
./primes  0.00s user 0.01s system 228% cpu 0.003 total

Oh! Much faster. Looks like Haskell is 100x faster than C#.
Who gets fired? :)

-- Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-15 Thread Donald Bruce Stewart
hughperkins:
 
Brandon wrote:
 Seems to me you get the best picture by picking two
algorithms, one
which favors C# and one which favors Haskell, and
implementing both
in both languages.
Sounds good to me.  What is a good problem that favors
Haskell?

NO. We just *did* this.

Firstly, to compare GHC Haskell and C#, look at the shootout:


http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=ghc&lang2=csharp

C# does better than I expected! It's 2.6x faster than one Haskell
program, but usually 2-4x slower, and really poor at lightweight concurrency.
Don't do toy benchmarks here, fix the C# ones on the shootout!

Secondly, we just did this for prime sieves:

  * imperative bit sieves on this list, in C# and Haskell, roughly
identical runtimes, though Hugh didn't benchmark the fastest Haskell ones.

This is to be expected: every compiled language runs into the cache
on this benchmark anyway, see here:

   http://shootout.alioth.debian.org/gp4/benchmark.php?test=nsieve&lang=all

  * lazy sieves, C# was 100x slower than the naive Haskell implementation.
That's the real story here, laziness is just hard and painful in C#.
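For reference, the naive lazy definition being compared is presumably
something close to the classic two-liner (which, strictly speaking, is
trial division rather than a true sieve of Eratosthenes):

```haskell
-- An infinite, lazily-produced list of primes: each prime is
-- computed only when some consumer demands it.
primes :: [Integer]
primes = sieve [2 ..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

main :: IO ()
main = print (take 10 primes)  -- [2,3,5,7,11,13,17,19,23,29]
```

Expressing this in C# means simulating laziness by hand, which is where
the 100x gap comes from.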

However, if you're keen, and agreeing to implement the same algorithm on
both systems, I'd have a go in C# at 'chameneos', a concurrency
benchmark,

http://shootout.alioth.debian.org/gp4/benchmark.php?test=chameneos&lang=all

or maybe 'pidigits', a lazy pi generator,


http://shootout.alioth.debian.org/gp4/benchmark.php?test=pidigits&lang=ghc&id=0

Should be a challenge in C#.

-- Don



Re: [Haskell-cafe] Newbie question about tuples

2007-07-15 Thread Donald Bruce Stewart
andrewcoppin:
 Donald Bruce Stewart wrote:
 bf3:
   
 Maybe this is yet another newbie stupid question, but do you mean that
 GHC does automatic multi-threading?  (Haskell seems very suitable for 
 that) Otherwise adding an extra core does not really help does it?
 
 No, though that would be nice! You do have to program in a parallel
 manner, either by using forkIO, Control.Parallel, or parallel arrays.
 When you do, you have the option of such code scaling up to more cores
 relatively easily.
 
 My advice: starting writing threaded code now, with *lots* of threads,
 so your program will have the ability to start using +RTS -N16 when you
 get a new machine :)
   
 
 I read somewhere that GHC's SMP support has been tested up to 40 cores.
 
 Pray tell me, what the heck kind of machine has 40 cores? (And where can 
 I buy mine from?? :-D LOL!)

40 cpus.

It's a midrange Sun Fire server, something like this one

http://www.sun.com/servers/midrange/sunfire_e6900/index.xml

You'll need more than spare change to get started though.
*However* 8 core amd64 machines are practically commodity boxes now. Go
get one.
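The "lots of threads" style mentioned above can be sketched with forkIO
and MVars (a made-up workload, assuming GHC's threaded RTS: compile with
-threaded, run with +RTS -N):

```haskell
import Control.Concurrent
import Control.Monad

main :: IO ()
main = do
    box <- newEmptyMVar
    -- a thousand lightweight threads, each doing a little work;
    -- the RTS schedules them over however many cores -N grants
    forM_ [1 .. 1000 :: Int] $ \i -> forkIO (putMVar box (i * i))
    results <- replicateM 1000 (takeMVar box)
    print (sum results)  -- 333833500
```

GHC threads are cheap enough that a thousand of them is unremarkable,
which is what makes this style future-proof against more cores.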

-- Don


Re: [Haskell-cafe] Maintaining the community

2007-07-15 Thread Donald Bruce Stewart
zednenem:
 On 7/15/07, Derek Elkins [EMAIL PROTECTED] wrote:
 There is no version of bytestrings without stream fusion and there never
 was.  Bytestrings have no compiler support, it is just a library.
 
 I'm not sure that's correct. Stream fusion is a particular fusion
 technique that wasn't introduced until fairly recently.
 
 From what I can tell, none of the versions available from
 http://www.cse.unsw.edu.au/~dons/fps.html include it. You have to go
 to http://www.cse.unsw.edu.au/~dons/papers/CSL06.html and get the
 code from the fps-unstable branch.

That's right. Both stream fusion for lists and bytestrings are currently
only in darcs,

http://www.cse.unsw.edu.au/~dons/code/fps-unstable/
http://www.cse.unsw.edu.au/~dons/code/streams/list/

Bytestrings will be streamed by the next release.

-- Don


Re: [Haskell-cafe] List of authors happy to have work moved totheHaskell wiki

2007-07-14 Thread Donald Bruce Stewart
claus.reinke:
 (sorry if you already know this, just want to clarify. All AIUI, IANAL,
 etc)
 
 neither am i!-)
 
 If you publish something under licence A, you still remain the copyright
 holder, and can later also publish it under licence B. You can also
 publish it combined with other material under licence B.
 
 yes, and nobody is forced to sign on to that waiver list, either. but some
 of those who do might find it hampering rather than encouraging their
 own contributions. knowing as they do that someone else might publish
 their results for and before them, and that they have given full permission
 for that to happen. i would not like to see list contributions from those 
 active community members falter because of such possible side-effects.
 
 most uses do not even seem to require that waiver. and those who do 
 not sign on might still be perfectly happy to respond to most requests 
 with a simple ok, go ahead.

I agree, most uses don't require the waiver.

I mostly see it as useful for when we don't just want to link to a dry
mailing list post, but instead use a post as the initial text for a page,
which is then further edited.

 why should we have to think about licensing at all?
 
 If you want code you write to be distributed by Debian, for example,
 then you need to license it appropriately.
 
 yes, and for the wiki it makes sense, and for open-source projects
 it makes sense, and i prefer the least limiting licenses whenever possible.
 i was just pointing out that the default copyright might make more sense 
 for the mailing list, imho. a simple email informing the original authors
 shouldn't be that hard, should it?

I agree: it would be basic courtesy to point out that a wiki page has
been created. I expect that to continue to occur anyway. This also means
we don't lose material written by those who might later leave the community,
which is useful.

-- Don


Re: Re[4]: [Haskell-cafe] In-place modification

2007-07-14 Thread Donald Bruce Stewart
derek.a.elkins:
 On Sun, 2007-07-15 at 00:53 +0200, Hugh Perkins wrote:
  There's really a tendency in this newsgroup to point people to huge
  documents, when a small copy and paste would make the answer so much
  more accessible ;-)
  
  Anyway... so reading through the paper, it looks like its using a
  priority queue?  Which basically is changing the algorithm somewhat
  compared to the C# version. 
  
  Anyway, if you can provide a working Haskell version, I'll be happy to
  run it.  I sortof suspect that if it gave results within 30% of the C#
  version someone would already have done so ;-)
 
 It's a six page document, and the point was in the first paragraph.
 
 Here's part of a giant thread about this that occurred in February:
 http://www.haskell.org/pipermail/haskell-cafe/2007-February/022854.html
 
 There is a zip file with 15 Haskell implementations and timing
 comparisons to a C version.  The author of that email is, incidentally,
 the author of the paper I referred to.

And those implementations are also in darcs now,

http://www.cse.unsw.edu.au/~dons/code/nobench/spectral/primes/

along with some others.


Re: [Haskell-cafe] Ray tracer

2007-07-14 Thread Donald Bruce Stewart
andrewcoppin:
 The Haskell ray tracer seems to be a pretty standard and widely-used 
 example program. But has anybody ever seriously tried to make a 
 production-grade implementation? (I.e., one that is user-friendly, 
 efficient, and with lots of functionallity.)
 

All the ones I know of are here:


http://haskell.org/haskellwiki/Applications_and_libraries/Graphics#Ray_tracing

None are production grade, as you define it.

-- Don


Re: Re[4]: [Haskell-cafe] Newbie question about tuples

2007-07-14 Thread Donald Bruce Stewart
bf3:
 Thanks Bulat, but now you scattered my hopes that GHC would magically do all 
 these optimizations for me ;-)
 
 I must say that although the performance of Haskell is not really a concern 
 to me, I was a bit disappointed that even with all the tricks of the state 
 monad, unboxing, and no-bounds-check, the matrix-vector multiplication was 
 still 7 to 8 times slower than the C version. And at the end of the paper, 
 it's only a factor 4 slower. Okay, going from 300x slower to 4x slower is 
 impressive, but why is it *still* 4x slower? It would be interesting to 
 compare the assembly code generated by the C compiler versus the GHC 
 compiler; after all, we're just talking about a vector/matrix multiplication, 
 which is just a couple of lines of assembly code... And now I'm again talking 
 about  performance, nooo! ;-)
 
 http://www.cse.unsw.edu.au/~chak/papers/afp-arrays.ps.gz
 
 

Yeah, there's some known low level issues in the code generator
regarding heap and stack checks inside loops, and the use of registers
on x86.

But note this updated paper,
http://www.cse.unsw.edu.au/~chak/papers/CLPKM07.html

Add another core to your machine and it is no longer 4x slower :)
Add 15 more cores and it's really no longer 4x slower :)

-- Don


Re: [Haskell-cafe] [weird stuff] The Dodgy Diagonal

2007-07-14 Thread Donald Bruce Stewart
ctm:
 Hi Stefan
 
 Thanks for a very enlightening reply.
 
 In GHC 6.7.20070712 and Yhc, this is perfectly safe.
 
 
 In GRIN based systems like Jhc, this is *not* safe, since after
 evaluation comparisons are done using the full tag.
 
 
 It's now occurred to me that at a cost of some noise, I could have  
 done things
 a little differently
 
 On 14 Jul 2007, at 20:37, Stefan O'Rear wrote:
 
 On Sat, Jul 14, 2007 at 12:06:30PM +0100, Conor McBride wrote:
 A peculiar query for folks who know more about the internals of  
 Haskell
 compilers than I do. I attach the full code with all the bits and  
 pieces,
 but let me pull out the essentials in order to state the problem.
 
 newtype Id  x = Id x   -- element
 newtype K1 a x = K1 a  -- constant
 
  newtype Up1 f p q x = U1 (f (p x) (q x))
  type Sum1 = Up1 Either
  type Prod1 = Up1 (,)
 
 
 newtype Fst x y = Fst x
 newtype Snd x y = Snd y
 newtype K2 a x y = K2 a
 
  newtype Up2 f p q x y = U2 (f (p x y) (q x y))
  type Sum2 = Up2 Either
  type Prod2 = Up2 (,)
 
 
 class (Bifunctor b, Functor f) => Diag b f | b -> f where
   diag :: b x x -> f x
   gaid :: f x -> b x x
 
 and then all of my coercions would (recursively) have been between
 newtype-isotopes.
 
 Would that have more universal traction? I realise, of course that it's
 all voodoo, and I shouldn't trust a thing.

I do wonder if there's some type families/associated type/witness games
that could be played here. Any thoughts on that, Conor?

http://haskell.org/haskellwiki/GHC/Indexed_types

-- Don


Re: [Haskell-cafe] Frag

2007-07-14 Thread Donald Bruce Stewart
jon:
 
 I just stumbled upon this fast action 3D shooter written entirely in Haskell:
 
   http://haskell.org/haskellwiki/Frag
 
 After 15 minutes trying to build it I find that it segfaults. Can anyone else 
 get this to work?

Likely depends on your OpenGL version, and possibly even graphics card. 

It's not been updated in about a year and a half, but last time I tried
it, it worked fine. Anyone with a bit more OpenGL-fu able to test it
against current HOpenGL?

-- Don


Re: [Haskell-cafe] Frag

2007-07-14 Thread Donald Bruce Stewart
mwassell:
 Donald Bruce Stewart wrote:
 jon:
   
 I just stumbled upon this fast action 3D shooter written entirely in 
 Haskell:
 
   http://haskell.org/haskellwiki/Frag
 
 After 15 minutes trying to build it I find that it segfaults. Can anyone 
 else get this to work?
 
 
 Likely depends on your OpenGL version, and possibly even graphics card. 
 
 It's not been updated in about a year and half, but last time I tried
 it, it worked fine. Anyone with a bit more OpenGL-fu able to test it
 against current HOpenGL?
   
 Builds easily and works for me with GHC 6.6.1 on Windows (though). You 
 need to specify a level when running it and you will get a series of 
 messages about loading textures before the window appears. Does it get 
 this far?

I suspect there's some 64 bit uncleanliness in the FFI bindings used.
That wouldn't have been tested by Mun, I think.

-- Don


Re: [Haskell-cafe] Maintaining the community

2007-07-13 Thread Donald Bruce Stewart
claus.reinke:
 
personally, i tend to be more willing to answer questions
on the list than to fiddle with wiki markup and conventions,
but there is no reason why people who are happier with
wiki editing cannot extract content from list answers to the
wiki, especially if its a faq answer rather than a research
result. 

I've got a few tools that make wiki editing easier (shortcuts to open up
a new wiki page for editing in vim, syntax highlighting, console
access). These make wiki editing roughly as cheap as composing an email.

I tried an experiment this week of just taking someone's post (Conor's idiom
brackets), putting it directly on the wiki first, then letting the
author know that's happened.

How do people feel about allowing posts in -cafe to be placed on the
wiki, without extensive prior negotiation? What copyright do -cafe@
posts have?

If there was a rough consensus that this is ok, we could probably get a
lot more material directly on the wiki, since I for one would act first,
putting some interesting Clause Reinke posts there semi-verbatim, rather
than pondering whether to write an email to the author to seek
permission, or cojole them into doing it.

Should we feel free to put mailing list material onto the wiki?

-- Don


Re: [Haskell-cafe] Functional dependencies *not* part of the next Haskell standard?

2007-07-13 Thread Donald Bruce Stewart
bulat.ziganshin:
 Hello Simon,
 
 Friday, July 13, 2007, 11:37:59 AM, you wrote:
 
  | I think the implementation is some 90% complete though, in GHC head.
  | Certainly you can write many associated types programs already -- the
  | missing part is finishing off associated type synonyms, iirc.
 
  ...and we have a working implementation of that too, thanks to Tom
  Schrijvers.  It's not in the HEAD yet, but it will be in a few weeks.
 
 so it will be a part of 6.8? great news! afaiu, ATS, rather than AT,
 is direct substitution for FD?

Functional dependencies desugar to indexed type families, an extension
of the original associated types, already in GHC head. For the full story,
see the wiki page,

http://haskell.org/haskellwiki/GHC/Indexed_types

which includes an example of porting functional dependencies to 
associated types


http://haskell.org/haskellwiki/GHC/Indexed_types#A_associated_type_synonym_example

-- Don


Re: [Haskell-cafe] Looking for final year project - using Haskell, or another functional language

2007-07-13 Thread Donald Bruce Stewart
bf3:
 Yes, for a newbie like me it was actually the reason to abandon Haskell
 initially; none of the examples at http://www.haskell.org/HOpenGL compiled!
 
 Another very cool albeit difficult project would be automatic retargeting of
 Haskell code to the graphics processor unit (GPU), or IBM Synergistic
 Processor Unit  (SPU aka Cell processor, if you can get your hands on such a
 board...). I think IBM has been working on something like that for
imperative languages, but it would be interesting to see how far one can
 go with Haskell. If this is not yet done of course...

We've got people working on it. Sean Lee[1] was running Haskell functions
on his Nvidia GPU as of yesterday :-)

-- Don

1. http://www.cse.unsw.edu.au/~seanl


[Haskell-cafe] List of authors happy to have work moved to the Haskell wiki

2007-07-13 Thread Donald Bruce Stewart
ctm:
 On 13 Jul 2007, at 14:47, Donald Bruce Stewart wrote:
 
 I tried an experiment this week of just taking someone's post  
 (Conor's idiom
 brackets), and putting directly on the wiki first, then letting the
 author know that's happened.
 
 Seemed entirely reasonable to me. If I have a spare moment, I might  
 even finally get around to registering so I can add the other thing I  
 usually bundle (for inserting effectful computations which don't  
 contribute an argument to the main function).
 
 If I'm writing for a mailing list, I'm certainly willing (if  
 surprised) to be exploited in this way.
 
 If there is any legalistic need for explicit permission, then we  
 should have a permanent permission system on an opt-in basis, perhaps  
 recorded in some suitable central and accessible place (like,  
 erm, ...). I'm in.


I've created a page to track contributors who are happy to have 
their work moved to the Haskell wiki, and thus explicitly licensed under
the `simple permissive license'.

http://haskell.org/haskellwiki/Haskell_Cafe_migration

Just add your names, so your code and text can escape the haskell-cafe
copyright monad :)

-- Don


Re: [Haskell-cafe] Re: Type system madness

2007-07-13 Thread Donald Bruce Stewart
wnoise:
 On 2007-07-13, Stefan O'Rear [EMAIL PROTECTED] wrote:
  He's not trying to report a bug; he's just complaining about base's
  long-known lack of support for non-latin1 encodings. (IIUC)
 
 Which is a bug.  Base needs to support (in an /obvious/ way)
 (1) direct I/O of octets (bytes), with no character interpretation set

Data.ByteString

 (2) I/O of text in UTF-8.

not in base, but see utf8-string on hackage.haskell.org.
  
 In addition, it would be nice to support 
 (3) (On Unix) use of locale to determine text encoding
 but users can work around this themselves, and will often need to, even

Hmm, there's System.Locale, but I've not used it for anything other than
dates.
 
 if (3) is supported.
 
 (2) can also be layered atop (1), but something is wrong if you have to
 write your own layer to do simple text input and output.  It's even
 worse if you can't without going to the FFI.

Yes, there have been a few encoding layers on top of Data.ByteString
written for other non-latin1 encodings.

 (1) can currently be done, but it's not at all clear how to do so, or
 once you have figured out how to do so, why it works.
 
 (This may be a bit out of date, but seeing this brought up again, I
 think not.)

I think it's a little out of date, given Data.ByteString and utf8-string?
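For point (1), direct octet I/O with Data.ByteString looks roughly like
this (the file names are made up for illustration):

```haskell
import qualified Data.ByteString as B

-- Raw byte I/O: no character set interpretation anywhere,
-- so a BOM or any other byte sequence passes through untouched.
main :: IO ()
main = do
    bytes <- B.readFile "input.bin"
    print (B.length bytes)
    B.writeFile "copy.bin" bytes
```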

-- Don


Re: [Haskell-cafe] Newbie question about tuples

2007-07-13 Thread Donald Bruce Stewart
bulat.ziganshin:
 Hello peterv,
 
 Friday, July 13, 2007, 5:03:00 PM, you wrote:
 
  think the latest compilers are much better). Now when implementing something
  like this in Haskell, I would guess that its laziness would allow to
  interleave many of the math operations, reordering them to be as optimal
  as possible, removing many intermediate results (like processing streams).
 
 don't forget that laziness by itself makes programs orders of
 magnitude slower :)
 

Or orders of magnitude faster, depending on your data structure. :)
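A trivial illustration of the "faster" direction: with a lazy structure,
only the demanded part is ever computed, so work that an eager language
would do up front simply never happens:

```haskell
-- 'expensive' is conceptually infinite, but only the five
-- demanded elements are ever squared.
expensive :: [Integer]
expensive = map (^ 2) [1 ..]

main :: IO ()
main = print (take 5 expensive)  -- [1,4,9,16,25]
```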

-- Don


Re: [Haskell-cafe] List of authors happy to have work moved to the Haskell wiki

2007-07-13 Thread Donald Bruce Stewart
dons:
 ctm:
  On 13 Jul 2007, at 14:47, Donald Bruce Stewart wrote:
  
  I tried an experiment this week of just taking someone's post  
  (Conor's idiom
  brackets), and putting directly on the wiki first, then letting the
  author know that's happened.
  
  Seemed entirely reasonable to me. If I have a spare moment, I might  
  even finally get around to registering so I can add the other thing I  
  usually bundle (for inserting effectful computations which don't  
  contribute an argument to the main function).
  
  If I'm writing for a mailing list, I'm certainly willing (if  
  surprised) to be exploited in this way.
  
  If there is any legalistic need for explicit permission, then we  
  should have a permanent permission system on an opt-in basis, perhaps  
  recorded in some suitable central and accessible place (like,  
  erm, ...). I'm in.
 
 
 I've created a page to track contributors who are happy to have 
 their work moved to the Haskell wiki, and thus explicitly licensed under
 the `simple permissive license'.
 
 http://haskell.org/haskellwiki/Haskell_Cafe_migration
 
 Just add your names, so your code and text can escape the haskell-cafe
 copyright monad :)

After some discussion, it was decided to expand the scope of the
permission slightly while we're here, to include any media covered by
haskell.org:

* mailing lists
* old wiki
* hpaste.org
* irc channel

Since specifically on hpaste.org there are often good solutions that don't
appear elsewhere.

If you've already added your name, perhaps check that you agree to your
original contributions to any haskell.org media being treated under the
`simple permissive license'.

-- Don


Re: [Haskell-cafe] Re: money type ?

2007-07-12 Thread Donald Bruce Stewart
simon:
 Great - indeed,
 
  > sum [1.85, 5.95, -7.80]
  8.881784197001252e-16
  > sum [1.85::Money, 5.95, -7.80]
  0.00
 
 I'm not yet sure these will do the best thing in all arithmetic, but it 
 seems to be the right thing for now.
 
 Yes, I will need to read these also. Perhaps first reading the integer and 
 decimal digits as separate integers will help. I'm still exploring the 
 number types.

Roman Leschinskiy tells me that there are C (or C++?)  libraries for
locale-specific money handling, where given precisions are mandated in
particular countries, below which you must round. Perhaps we should have
a binding to this.

Anyway, sorting out how money is supposed to be represented in Haskell,
and documenting it, seems a very useful thing.
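For reference, one plausible representation is fixed-point arithmetic over an
integer count of cents, which keeps sums exact. This is only a sketch -- the
Money type and its instances below are illustrative, not an existing library:

```haskell
-- A sketch only: money as an exact integer count of cents.
-- The Money type and instances here are illustrative, not a library API.
newtype Money = Money Integer   -- amount in cents
    deriving (Eq, Ord)

instance Show Money where
    -- formatting of negative amounts is elided for brevity
    show (Money c) = show (c `div` 100) ++ "." ++ pad (abs c `mod` 100)
      where pad n = let s = show n in replicate (2 - length s) '0' ++ s

instance Num Money where
    Money a + Money b = Money (a + b)
    Money a - Money b = Money (a - b)
    Money a * Money b = Money (a * b `div` 100)   -- rescale after multiplying
    abs (Money a)     = Money (abs a)
    signum (Money a)  = Money (signum a * 100)
    fromInteger n     = Money (n * 100)

main :: IO ()
main = print (sum [Money 185, Money 595, Money (-780)])  -- prints 0.00
```

Because the arithmetic is over Integer, there is no floating-point residue to
round away.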

-- Don


Re: [Haskell-cafe] Type system madness

2007-07-12 Thread Donald Bruce Stewart
andrewcoppin:
 Ketil Malde wrote:
 On Wed, 2007-07-11 at 20:10 +0100, Andrew Coppin wrote:
 
   
 When I tell the editor to save UTF-8, it inserts some weird BOM 
 character at the start of the file - and thus, any attempt at 
 programmatically processing that file instantly fails. :-(
 
 
 While BOMs (Byte Order Mark) are pretty irrelevant to byte-oriented
 encodings like UTF-8, I think programs that fail on their presence can
 be considered buggy.
   
 
 Yay! Haskell's text I/O system is buggy. :-P

By the way Andrew, have you noticed that you're generating 50% of the
traffic on this list? Perhaps we can work a bit more on improving the
signal/noise ratio. My inbox can only take so much of this... ;)

-- Don


Re: [Haskell-cafe] Functional dependencies *not* part of the next Haskell standard?

2007-07-12 Thread Donald Bruce Stewart
jules:
 peterv wrote:
 instance Vector Vector2 where
   dot (V2 x1 y1) (V2 x2 y2) = x1 * x2 + y1 * y2
 
 Amazing, so simple it is, Yoda would say ;)
 
 I did not realize one could perform partial application on types when
 declaring instances (I mean not specifying the type of Vector2 in instance
 Vector Vector2).
 
 Now regarding these funcdeps, are they ill as the rumor goes?
 
 I don't think there is any danger of them being removed and not 
 replaced. The functionality is useful.
 
 Associated Types is widely viewed as a successor/replacement, but no 
 complete implementation exists yet:
 
 http://haskell.org/haskellwiki/GHC/Indexed_types

I think the implementation is some 90% complete though, in GHC head.
Certainly you can write many associated types programs already -- the
missing part is finishing off associated type synonyms, iirc.
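For a flavour of what associated types look like, here is a minimal sketch in
the TypeFamilies syntax that GHC settled on (the Collection class and its
method names are made up for illustration, not from any library):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A minimal associated-type sketch: each collection type fixes its
-- element type via the associated type 'Elem'.
class Collection c where
    type Elem c
    empty  :: c
    insert :: Elem c -> c -> c
    toList :: c -> [Elem c]

instance Collection [e] where
    type Elem [e] = e
    empty  = []
    insert = (:)
    toList = id

main :: IO ()
main = print (toList (insert 1 (insert 2 empty) :: [Int]))  -- [1,2]
```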

-- Don


[Haskell-cafe] Maintaining the community

2007-07-12 Thread Donald Bruce Stewart
As we sit here riding the Haskell wave:

http://www.cse.unsw.edu.au/~dons/tmp/cafe.png

with nearly 2000 (!) people reading haskell-cafe@, perhaps it's time to
think some more about how to build and maintain this lovely Haskell
community we have. Just yesterday I received an email:

I posted it to Haskell-Cafe and received loads of brilliant
responses. Wow, those guys are awesome. I'm definitely going to
learn Haskell now.

Which is *exactly* the kind of (view of the) community we want to build
and encourage, so we can keep the Haskell project growing into the
future.

I think the main thing we need to remember is to help train new experts
in the community, to be fluent in the culture, ensuring that expertise
and a knowledge of the culture diffuses through the new people arriving.

That is, to help people progress from newbie, to intermediate, to
expert, and thus ensure the culture is maintained (avoiding `Eternal
September'). This graphic[1] sums the main issue up nicely, in my view:

http://headrush.typepad.com/photos/uncategorized/buildingausercommunity.jpg

And the steps to follow (people can think about how best they apply to
them) (also from [1]):


* Encourage newer users--especially those who've been active askers--to
  start trying to answer questions 

We're pretty good with this, but we can be *explicit* about it. If
you're taking a lot from the community, please put a lot back in (in 
terms
of writing about it, contributing answers, new libraries, and so on).

I note this is also exactly what the Summer of Code helps do too --
we've had several people paid to progress from newbie to expert, thanks
to the SoC.


* Give tips on how to answer questions

Answering politely, and in detail, explaining common misunderstandings
is better than one word replies.


* Adopt a near-zero-tolerance Be Nice policy when people answer questions

We are very good here already, both on email and IRC.


* Teach and encourage the more advanced users (including moderators) how to
  correct a wrong answer while maintaining the original answerer's dignity.

This is hard, perhaps people can think some more about this.


* Re-examine your reward/levels strategy for your community

This is also important: on the IRC channel we actually use
participation data 
(http://www.cse.unsw.edu.au/~dons/irc/haskell-07.html)
to determine who gets moderator privileges. For the community in
general, rewards are along the lines of going to the hackathon, or
becoming the domain expert for some library. 


Cheers and happy hacking,
Don

[1]. 
http://headrush.typepad.com/creating_passionate_users/2006/12/how_to_build_a_.html


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-11 Thread Donald Bruce Stewart
ctm:
 Indeed it can. Ignoring conventional wisdom about dirty linen, here are
 idiom brackets
 
  class Applicative i => Idiomatic i f g | g -> f i where
    idiomatic :: i f -> g
 
  iI :: Idiomatic i f g => f -> g
  iI = idiomatic . pure
 
  data Ii  =  Ii
 
  instance Applicative i => Idiomatic i x (Ii -> i x) where
    idiomatic xi Ii = xi
  instance Idiomatic i f g => Idiomatic i (s -> f) (i s -> g) where
    idiomatic sfi si = idiomatic (sfi <*> si)
 
 So that
 
   iI f x y Ii = f <$> x <*> y
 
 Now add
 
  data Ji = Ji
 
  instance (Monad i, Applicative i) => Idiomatic i (i x) (Ji -> i x) where
    idiomatic xii Ji = join xii
 
 and you've got
 
   iI f x y Ji = join $ f <$> x <*> y

Very nice! Just so we don't forget this, I created a wiki page,

http://haskell.org/haskellwiki/Idiom_brackets
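As a usage sketch, here are the definitions from the message above (repeated
so this file is self-contained) together with two example applications of the
brackets; the extension pragmas are what a modern GHC needs to accept them:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Idiom brackets, plus example uses at the bottom.
import Control.Monad (join)

class Applicative i => Idiomatic i f g | g -> f i where
    idiomatic :: i f -> g

iI :: Idiomatic i f g => f -> g
iI = idiomatic . pure

data Ii = Ii
data Ji = Ji

-- Ii closes a bracket of pure applications ...
instance Applicative i => Idiomatic i x (Ii -> i x) where
    idiomatic xi Ii = xi

-- ... each further argument is combined with <*> ...
instance Idiomatic i f g => Idiomatic i (s -> f) (i s -> g) where
    idiomatic sfi si = idiomatic (sfi <*> si)

-- ... and Ji additionally joins a monadic result.
instance (Monad i, Applicative i) => Idiomatic i (i x) (Ji -> i x) where
    idiomatic xii Ji = join xii

main :: IO ()
main = do
    print (iI (+) (Just 1) (Just 2) Ii :: Maybe Int)          -- Just 3
    print (iI (\x -> Just (x + 1)) (Just 1) Ji :: Maybe Int)  -- Just 2
```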

-- Don
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


[Haskell-cafe] increase in spam on the wiki

2007-07-11 Thread Donald Bruce Stewart
Looks like the spam protection measures are breaking down a bit. There have
been 4 spam incidents (at least) on the wiki in the past day. Ashley et
al, is there any easy fix? 

-- Don


Re: [Haskell-cafe] Data.ByteString.dropWhile

2007-07-10 Thread Donald Bruce Stewart
drtomc:
 So the following isn't as clever as the line-noise Don posted, but
 should be in the ball-park.

Low level loops are irksome, but guaranteed to be quick :P

 dropFromEnds p = dropWhile p . dropWhileEnd p
 
 dropWhileEnd p bs = take (findFromEndUntil (not . p) bs) bs
 
 takeWhileEnd p bs = drop (findFromEndUntil p bs) bs
 
 {- findFromEndUntil is in ByteString.hs, but is not exported -}

Yep, looks reasonable. With a bit of inlining (check the core) you'll get
the same code anyway. It's always good to roll a QuickCheck property or two
for this kind of stuff, since off-by-one errors are rather easy to make.

This should get you into a testable state:

import qualified Data.ByteString  as S
import Test.QuickCheck.Batch
import Test.QuickCheck
import Text.Show.Functions
import System.Random

instance Arbitrary Word8 where
arbitrary = choose (97, 105)
coarbitrary c = variant (fromIntegral ((fromIntegral c) `rem` 4))

instance Random Word8 where
  randomR = integralRandomR
  random = randomR (minBound,maxBound)

integralRandomR :: (Integral a, RandomGen g) => (a,a) -> g -> (a,g)
integralRandomR  (a,b) g = case randomR (fromIntegral a :: Integer,
 fromIntegral b :: Integer) g of
(x,g) -> (fromIntegral x, g)

-- define a model in [Word8]
tidy_model f = reverse . dropWhile f . reverse . dropWhile f

-- and check it
prop_tidy_ok f xs = tidy_model f xs == (S.unpack . tidy f . S.pack) xs

-- Don


Re: [Haskell-cafe] Haskell and GnuPG?

2007-07-10 Thread Donald Bruce Stewart
magnus:
 Continuing my life up-side-down? I'm looking for a Haskell wrapper
 around GPGME.  Is there such a beast?
 

Don't think there's such a binding yet. Would make a good contribution
though!

-- Don


Re: [Haskell-cafe] xkcd #287 NP-Complete

2007-07-10 Thread Donald Bruce Stewart
lemming:
 
 On Tue, 10 Jul 2007, Donald Bruce Stewart wrote:
 
  These smaller NP problems really love the list monad. here's roconnor's
  solution from #haskell:
 
  import Control.Monad
 
  menu = [("Mixed Fruit",215),("French Fries",275)
 ,("Side Salad",335),("Hot Wings",355)
 ,("Mozzarella Sticks",420),("Sampler Plate",580)]
 
  main = mapM_ print
  [ map fst y
  | i <- [0..]
  , y <- replicateM i menu
  , sum (map snd y) == 1505 ]
 
 Shouldn't we stay away from integer indices on lists?
 
 [ map fst y |
 y <- concat (iterate (liftM2 (:) menu) [[]]),
 sum (map snd y) == 1505]

Also, wouldn't it be nice to bring back monad comprehensions...

Bring them back! No one's scared any more!

-- Don



Re: [Haskell-cafe] CPS versus Pattern Matching performance

2007-07-10 Thread Donald Bruce Stewart
tmorris:
 When do you use maybe :: b -> (a -> b) -> Maybe a -> b instead of
 pattern matching a returned Maybe value?
 
 Is there something a bit more concrete on this issue?

You mean, versus using 'case' or sugar for case?
It'll just inline to the same code.

For example:

maybe 10 (\n -> n + 1) bigexpr

 = {inline maybe}

(\n f x -> case x of
               Nothing -> n
               Just x  -> f x) 10 (\n -> n + 1) bigexpr

 = {reduce}

case bigexpr of
    Nothing -> 10
    Just x  -> x+1

So use 'maybe' if it's clearer -- it doesn't cost anything.

-- Don


Re: [Haskell-cafe] CPS versus Pattern Matching performance

2007-07-10 Thread Donald Bruce Stewart
tmorris:
 Thanks Don,
 Is your explanation specific to maybe? Or does that apply to all functions?
 
 Suppose the following function for lists:
 
 f :: [a] -> b -> (a -> [a] -> b) -> b
 
 ...instead of pattern matching [] and (x:xs)

It really depends on the body of 'f'. If they're simple wrappers over
case analysis they should be inlined perfectly.

-- Don


Re: [Haskell-cafe] CPS versus Pattern Matching performance

2007-07-10 Thread Donald Bruce Stewart
lemming:
 
 On Tue, 10 Jul 2007, Tony Morris wrote:
 
  Is your explanation specific to maybe? Or does that apply to all functions?
 
  Suppose the following function for lists:
 
  f :: [a] -> b -> (a -> [a] -> b) -> b
 
  ...instead of pattern matching [] and (x:xs)
 
 A foldr without recursion. I use such functions frequently in order to
 hide the constructors of a data type. Does this kind of function have a
 common name?

They're catamorphisms:

Bool  -> cond/if-then-else
Maybe -> maybe
(,)   -> uncurry
[]    -> foldr
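Concretely, each of these folds away exactly one pattern match (a small
illustration; cond is defined inline here, since a bool combinator was not
yet part of the standard libraries):

```haskell
-- One catamorphism per type: each replaces a direct pattern match.
cond :: b -> b -> Bool -> b   -- the Bool catamorphism (if-then-else)
cond t _ True  = t
cond _ f False = f

main :: IO ()
main = do
    print (cond "yes" "no" True)        -- instead of: case b of True -> ...
    print (maybe 0 (+1) (Just 41))      -- instead of: case m of Just x -> ...
    print (uncurry (+) (1, 2 :: Int))   -- instead of: case p of (x,y) -> ...
    print (foldr (+) 0 [1,2,3 :: Int])  -- instead of recursing over (:) / []
```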

-- Don


Re: [Haskell-cafe] Sequence Classes

2007-07-10 Thread Donald Bruce Stewart
jcast:
 I was just meditating on array fusion using streams, and list fusion using 
 streams, and it struck me: the definition of the list functions over arrays 
 is the same as that of the list functions over lists.  From the ByteString 
 paper, we get:

Yes indeed, hence Data.Stream provides the common implementations for
arrays, lists and a few other fusible sequence types.
  
 map f = transformerBi (mapS f)
 foldr f z = consumerDn (foldrS f z)
 foldl f z = consumerUp (foldlS f z)
 replicate n = producerBi (replicateS n)
 
 etc.
 
 So why not say
 
 class Sequence t alpha where
   consumerUp, consumerDn, consumerBi
 :: (Stream alpha -> beta) -> t alpha -> beta
   producerUp, producerDn, producerBi :: Stream alpha -> t alpha
   transformerUp, transformerDn, transformerBi
 :: Sequence t beta => (Stream alpha -> Stream beta) -> t alpha -> t beta
 
 map :: Sequence t alpha => (alpha -> beta) -> t alpha -> t beta
 map f = transformerBi (mapS f)
 

You could do that, yep.

-- Don


Re: [Haskell-cafe] In-place modification

2007-07-10 Thread Donald Bruce Stewart
allbery:
 
 On Jul 10, 2007, at 16:07 , Alex Queiroz wrote:
 
 As I replied to Hugh, the Universe of computers is not restricted
 to PCs. We, embedded developers, will be using C for a long time
 still.
 
 Doesn't nhc98 target embedded devices?

It's been used on embedded arm devices the size of a credit card. 

-- Don


Re: [Haskell-cafe] Looking for final year project - using Haskell, or another functional language

2007-07-10 Thread Donald Bruce Stewart
walter1003:
 
Hi all,

I will soon be doing my last year in computer science. One part of our
last year encompasses quite a big project which will go over 3 terms and
will account for 3 modules (45 credits). I was thinking of doing
something using functional languages (preferably Haskell, as it's the
one I know best). Does anybody know anyone who would have a task
suitable for such a project, one which would encompass the whole
development life cycle (maybe a sub-project)? I would do this obviously
for free; the client can be anyone (industrial, academic, open source,
etc. ... ), as long as the project is something serious and for
practical usage.

I would be happy for any suggestions ...
Thanks
walter.

Have a look at the summer of code and 'wanted libraries' list.

http://hackage.haskell.org/trac/summer-of-code/
and
http://haskell.org/haskellwiki/Wanted_libraries


Re: [Haskell-cafe] Allocating enormous amounts of memory and wondering why

2007-07-10 Thread Donald Bruce Stewart
jeff:
 I switched to Data.Binary, which dropped me from 2.6GB to 1.5GB, and
 then I switched this afternoon to unboxed arrays from lists of floats,
 and that dropped me again from 1.5GB to 475MB.  I think, all told, that
 I'm in an acceptable range now, and thank you for pointing out the
 library mistake.  I'm also down from 1.5 minutes load time to under 10
 seconds of load time, which is also very very nice.  Incidentally, the
 code I'm now using is:

Good!

 
 binaryLoadDocumentCoordinates ::
   String -> IO (Ptr Float, Array.UArray Int Int)
 binaryLoadDocumentCoordinates path = do
   putStrLn "File opened"
   coordinates <- decodeFile (path ++ "/Clusters.bin")
                    :: IO (Array.UArray Int Float)
   print . Array.bounds $ coordinates
   putStrLn "Got coordinates"
   galaxies <- decodeFile (path ++ "/Galaxies.bin")
                 :: IO (Array.UArray Int Int)
   putStrLn "Got galaxies"
   coordinatesArr <- mallocArray . snd . Array.bounds $ coordinates
   putStrLn "Allocated array"
   pokeArray coordinatesArr . Array.elems $ coordinates
   return (coordinatesArr, galaxies)
 
 binarySaveDocumentCoordinates :: String -> [Point] -> IO ()
 binarySaveDocumentCoordinates path points = do
   let len = length points
   encodeFile (path ++ "Clusters.bin")
       . (Array.listArray (0,len*3) :: [Float] -> Array.UArray Int Float)
       . coordinateList . solve $ points
   encodeFile (path ++ "Galaxies.bin")
       . (Array.listArray (0,len) :: [Int] -> Array.UArray Int Int)
       . galaxyList $ points

You could improve this further by removing the intermediate list
serialisation and construction required by the UArray instance. 

My previous example did that,  using Ptr Int arrays instead of UArray,
which can then be serialised by casting to a bytestring, and writing
those bytes directly, rather than serialising via lists, as UArrays do.

Removing the UArray serialisation means no allocation overhead at all to
serialise the Int array.  That final step would reduce the memory
overhead to exactly the size of the input file, and probably shave at
least 50% off the time, if you need to further improve it.

That is, you'd have:

  binaryLoadDocumentCoordinates :: 
String -> IO (Ptr Float, IntTable)

where IntTable is a shallow wrapper over Ptr Int, serialised with
the techniques used here,

http://haskell.org/haskellwiki/Serialisation_and_compression_with_Data_Binary

newtype IntTable = IntTable (Ptr Int)

with a suitable Binary instance (possibly compressing it on the fly
too).

The general lesson is to avoid lists of any kind (including those
constructed implicitly when serialising Arrays), as soon as you have
more than 1M or so of data stored in those lists.

-- Don



Re: [Haskell-cafe] Allocating enormous amounts of memory and wondering why

2007-07-10 Thread Donald Bruce Stewart
jeff:
 I switched to Data.Binary, which dropped me from 2.6GB to 1.5GB, and
 then I switched this afternoon to unboxed arrays from lists of floats,
 and that dropped me again from 1.5GB to 475MB.  I think, all told, that
 I'm in an acceptable range now, and thank you for pointing out the
 library mistake.  I'm also down from 1.5 minutes load time to under 10
 seconds of load time, which is also very very nice.  Incidentally, the
 code I'm now using is:
 
 binaryLoadDocumentCoordinates ::
   String -> IO (Ptr Float, Array.UArray Int Int)
 binaryLoadDocumentCoordinates path = do
   putStrLn "File opened"
   coordinates <- decodeFile (path ++ "/Clusters.bin")
                    :: IO (Array.UArray Int Float)
   print . Array.bounds $ coordinates
   putStrLn "Got coordinates"
   galaxies <- decodeFile (path ++ "/Galaxies.bin")
                 :: IO (Array.UArray Int Int)
   putStrLn "Got galaxies"
   coordinatesArr <- mallocArray . snd . Array.bounds $ coordinates
   putStrLn "Allocated array"
   pokeArray coordinatesArr . Array.elems $ coordinates
   return (coordinatesArr, galaxies)
 
 binarySaveDocumentCoordinates :: String -> [Point] -> IO ()
 binarySaveDocumentCoordinates path points = do
   let len = length points
   encodeFile (path ++ "Clusters.bin")
       . (Array.listArray (0,len*3) :: [Float] -> Array.UArray Int Float)
       . coordinateList . solve $ points
   encodeFile (path ++ "Galaxies.bin")
       . (Array.listArray (0,len) :: [Int] -> Array.UArray Int Int)
       . galaxyList $ points

I've just pushed a patch to Data.Binary in the darcs version that should
help serialising arrays by avoiding forcing an intermediate list.
You can get that here:

darcs get http://darcs.haskell.org/binary

I'd still avoid that 'listArray' call though, you may as well just write
the list out, rather than packing it into an array, and then serialising
the array back as a list.

-- Don


Re: [Haskell-cafe] Re: Too many packages on hackage? :-)

2007-07-09 Thread Donald Bruce Stewart
ketil:
 On Mon, 2007-07-09 at 10:30 +1000, Donald Bruce Stewart wrote:
 
  Another idea I've been pondering is allowing people to add links to
  documentation for libraries
 
 My main worry about Hackage is that it is often hard to tell the current
 status of packages - it could easily develop into a huge list of mostly
 dead projects.
 
 The current deliverables seem to consist of a tar file and a package
 description, neither of them 
 accurately dated.  

Yes, I'd like uploader dates/names too. Something like:

xmonad-0.3  - July 2007

 I'd like to see: 
   links to project home pages, 

That's already provided via the 'homepage: ' field, see, e.g.

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/xmonad

   darcs (devel) repositories, 

yes, I'd support a 'repository: ' field.

   email address of maintainers.  

handled by the 'maintainer: ' field

 I'd also like browsable README and ChangeLog or similar. And what about a 
 darcs-graph plot?

a link to the README might be good. If there was a 'repository'
field, we could automatically compute the darcs-graph, a la,

http://www.cse.unsw.edu.au/~dons/images/commits/community/

Who's our SoC hackage guy? To do list right here!


Re: [Haskell-cafe] Lambdabot web interface

2007-07-09 Thread Donald Bruce Stewart
voigt:
 Hi,
 
 I can't get http://lambdabot.codersbase.com/ to work for me. Whatever
 input I give results in "No lambdabot process".
 
 Is that a known issue, not the right URL, ...?
 
 Thanks,
 Janis.

Right URL, but Jason's not running lambdabot at the moment. You can
access our bot via IRC though.  http://haskell.org/haskellwiki/IRC_channel

-- Don


Re: [Haskell-cafe] no-coding functional data structures via lazyness

2007-07-09 Thread Donald Bruce Stewart
bayer:
 Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting  
 the implementation of lazy evaluation to avoid explicitly writing an  
 efficient concatenable list data structure. This felt like cheating,  
 or at least like using a screwdriver as a crowbar, to be less  
 judgmental.

See also
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3

-- Don


Re: [Haskell-cafe] Re: Too many packages on hackage? :-)

2007-07-09 Thread Donald Bruce Stewart
sascha.boehme:
 Hello,
 
  Who's our SoC hackage guy? To do list right here!
 
 The HackageDB project is concentrating on another subject for now. I see
 the necessity of adding search features and additionally tags, but at
 the moment I am working on automatic generation of Haddock documentation.
 
 The progress and a to do list can be found from here:
 
   http://hackage.haskell.org/trac/summer-of-code/wiki/SoC2007Hackage
 
 A more complete ToDo list can be found here:
 
   http://hackage.haskell.org/trac/hackage/wiki/HackageToDo
   
 This also covers your wishes, and, as soon as automatic Haddock
 documentation is working, I'll turn towards that.

Great Sascha.

We often discuss how to improve hackage on the #haskell irc channel --
perhaps you could just idle there while you're doing your SoC project,
as there's quite a community of users there with ideas/questions about
hackage, and how to best use and improve it?

-- Don


Re: [Haskell-cafe] Data.ByteString.dropWhile

2007-07-09 Thread Donald Bruce Stewart
drtomc:
 Hi All,
 
 I notice that Data.ByteString has span and spanEnd. Is there a known

and break/breakEnd.

 particular reason why dropWhile and takeWhile don't have corresponding
 *End functions? If not, what is the protocol for adding them?

There's no reason -- we couldn't decide on whether to support
'end/-right' versions of most traversals. To add them, you'd implement
them and send the patch to Duncan and me, for inclusion in bytestring 1.0.

Duncan -- did we ever sort out a policy on the left/right normal/-end
versions of things? breakEnd I use all the time, but perhaps we should
fix upon what API we are to provide.

-- Don


Re: [Haskell-cafe] Data.ByteString.dropWhile

2007-07-09 Thread Donald Bruce Stewart
drtomc:
 Well, maybe I shoud be asking a higher level question then.
 
 I have a function
 
 tidy = reverse . dropWhile punk . reverse . dropWhile punk
where
punk = isPunctuation . chr . fromIntegral
 
 which is leading to a significant amount of allocation, and you can see why.
 
 The way I'd like to write it is
 
 tidy = dropWhile punk . dropWhileEnd punk
where 
 
 which has the obvious advantage of avoiding quite a bit of
 intermediate allocation.
 
 Is there a another way?
 
 I note that since I'm using a nice declarative language, the compiler
 CLEARLY should be transforming the first form into the second. :-)

I'd just manually write a 'tidy' loop (in the Data.ByteString style) (which
would avoid all allocations), since it seems pretty useful.

Something in this style:

findIndexOrEnd :: (Word8 -> Bool) -> ByteString -> Int
findIndexOrEnd k (PS x s l) =
    inlinePerformIO $ withForeignPtr x $ \f -> go (f `plusPtr` s) 0
  where
    go !ptr !n | n >= l    = return l
               | otherwise = do w <- peek ptr
                                if k w
                                    then return n
                                    else go (ptr `plusPtr` 1) (n+1)

Worth doing if it's costly, since that'll make it non-costly.

-- Don


Re: [Haskell-cafe] A very edgy language (was: A very nontrivial parser)

2007-07-08 Thread Donald Bruce Stewart
bulat.ziganshin:
 Hello Thomas,
 
 Sunday, July 8, 2007, 2:36:43 AM, you wrote:
  This is certainly true. I've coded up in less than six months,
  something that uses better algorithms and finer grained concurrency
  than the software I used to work on, and the latter represented 5 or
  more man-years of coding. However this is server software, which is
  long running so performance and memory usage are pretty important, and
  these are relatively hard to get right in Haskell.
 
 i've improved memory usage of my program 3 times one month after i've
 started to use Haskell, and 4 times more 1.5 years later (the last
 improvement included development of ByteString-alike library and
 strictifying some computations). i think that for programming-in-the-large
 an experienced haskeller may reach a C-like level of efficiency, unlike for
 programming-in-the-small (i.e. implementation of raw computations)

Yes, this agrees with my experience too. Programs of a certain size
become unfeasible to improve in C or C++ -- while the Haskell program
may be continually refactored and improved.

-- Don


Re: [Haskell-cafe] A very edgy language

2007-07-08 Thread Donald Bruce Stewart
andrewcoppin:
 
 Does anybody have any clue why ByteStrings are actually faster? (And why 
 this information isn't easily findable anywhere - it must surely be a VFAQ.)
 

It's well documented in the API documentation for bytestrings.

Start here:
http://www.cse.unsw.edu.au/~dons/fps/Data-ByteString-Lazy.html

And then read:
http://www.cse.unsw.edu.au/~dons/papers/CSL06.html

Cheers,
  Don


Re: [Haskell-cafe] Toy compression algorithms [was: A very edgy language]

2007-07-08 Thread Donald Bruce Stewart
andrewcoppin:
 Andrew Coppin wrote:
 Dave Bayer wrote:
 I was beginning to accept that I might die before clearing my 
 pipeline of research projects I want to code up. Haskell has given me 
 new hope.
 
 Indeed. ;-)
 
 Today I hve implemented encoders and decoders for RLE, MTF, Fibonacci 
 codes, and LZW. Next on my list is BWT, Huffman codes and arithmetic 
 coding. (That last is *very* hard though...)
 
 In case anybody cares...
 
   darcs get http://www.orphi.me.uk/darcs/ToyCompression
   ghc -O2 --make Encode
   ghc -O2 --make Decode
 
 You can now do
 
  Encode LZW foo
 
 Will look for a file named foo, and generate a file called foo-LZW 
 containing the LZW-compressed version of it. Also foo-LZW.log.txt, 
 which contains some statistics.
 
 Similarly,
 
  Decode LZW foo-LZW
 
 will make a file foo-LZW-unLZW, which should *hopefully* be identical 
 to the original one. (!)
 
 It didn't occur to me, but I should probably go add some way to discover 
 a list of available algorithms! LOL. Anyway, currently working:
 
   RLE (Run Length Encoding). Counts runs of symbols. Considers 8 bits 
 to be one symbol. The maximum run length is 255.
   MTF (Move To Front). Takes an input file and produces an output file 
 of exactly the same size, but containing mostly *low* numbers. Works 
 well with RLE or Fib.
   BWT (Burrows-Wheeler Transform). Like MTF, does no actual 
 compression, but makes the file more compressible.
   Fib (Fibonacci codes). Stores numbers using Fibonacci codes instead 
 of unsigned binary. This makes low numbers smaller and high numbers 
 bigger. (Use it with MTF...)
   LZW (Lempel-Ziv-Welch). The daddy. Looks for repeated input strings 
 and compresses them.
 
 Caution: Encoding or decoding BWT currently requires absurd amounts of 
 time and space. (Like, > 2 GB RAM and 8 minutes wall time to process 12 
 KB of text.) I hope to fix that soon...
 
 Currently it's unclear whether LZW or BWT+MTF+Fib gives the best 
 compression. (Mainly because BWT is so hard to run!) I hope to implement 
 a few other algorithms soonly.
 
 (Note: LZW is typically used with a variable number of bits per output 
 symbol. I haven't done this yet. It simply uses 16 bits in all cases. 
 Once I fix this, compression will go up. Also, once the symbol 
 dictionary is full, the encoder just resets itself.)
 

Good work. Probably worth benchmarking against the other compression
libraries on hackage, just to get a sense for how well your
implementations are optimised (and maybe how to best interact with lazy
bytestrings?). See for example:

zlib   http://hackage.haskell.org/cgi-bin/hackage-scripts/package/zlib-0.3
bzlib  http://hackage.haskell.org/cgi-bin/hackage-scripts/package/bzlib-0.3

and also:
lzf
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Codec-Compression-LZF-0.2
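For comparison, a run-length encoder of the kind described above fits in a few
lines over lists (a sketch over [Word8], not the ToyCompression code; it caps
runs at 255 as the description does, so each count fits in one byte):

```haskell
import Data.Word (Word8)
import Data.List (group)

-- Naive RLE over bytes: each run becomes (count, byte); runs longer
-- than 255 are split so the count itself fits in a Word8.
rleEncode :: [Word8] -> [(Word8, Word8)]
rleEncode = concatMap cap . group
  where
    cap []          = []
    cap run@(x : _) = [ (fromIntegral n, x) | n <- chunks (length run) ]
    chunks n | n <= 255  = [n]
             | otherwise = 255 : chunks (n - 255)

rleDecode :: [(Word8, Word8)] -> [Word8]
rleDecode = concatMap (\(n, x) -> replicate (fromIntegral n) x)

main :: IO ()
main = print (rleDecode (rleEncode [7,7,7,0,1,1]) == [7,7,7,0,1,1])  -- True
```

A QuickCheck property like the round-trip check in main is the natural test
for such codecs.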

-- Don


Re: [Haskell-cafe] In-place modification

2007-07-08 Thread Donald Bruce Stewart
andrewcoppin:
 I was wittering on about stream fusion and how great it is, and I got a 
 message from Mr C++.
 
 (Mr C++ develops commercial games, and is obsessed with performance. For 
 him, the only way to achieve the best performance is to have total 
 control over every minute detail of the implementation. He sees Haskell 
 is a stupid language that can never be fast. It seems he's not alone...)

Many libraries use inplace update and mutable data. Like, hmm, Data.ByteString.

-- Don


Re: [Haskell-cafe] Re: Too many packages on hackage? :-)

2007-07-08 Thread Donald Bruce Stewart
bulat.ziganshin:
 Hello apfelmus,
 
 Sunday, July 8, 2007, 5:20:18 PM, you wrote:
 
   Looks like there's too many packages on hackage.haskell.org now for a
 
 it's the nicest problem i can imagine :)
 
  For browsing libraries, I like the wiki pages much more than hackage.
  Can't those two be merged into one?
 
 maybe we just need an alternative single-page listing with FULL
 package descriptions instead of brief ones. this should be rather close
 in amount of information to the old wiki. a Categories section on top of
 the page should be enough for anyone who needs pages split by category
 
 many years ago i already proposed to extend hackage to include
 information about general entities, not only packages uploaded to it.
 unfortunately, its author was not interested in this. i still think
 that joining up HCAR, the libraries & tools wiki pages and hackage into
 one information source will give us significant benefits, making
 searching for a needed library easier, especially for casual haskellers.
 as you can see, in many cases haskellers think that some libraries
 don't exist just because it's hard to find them

Another idea I've been pondering is allowing people to add links to
documentation for libraries, from their hackage page. We have a fair 
few libs documented in blog form, here,

http://haskell.org/haskellwiki/Blog_articles

So adding links to those from the hackage page for the library would help.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Compress and serialise data with lazy bytestrings, zlib and Data.Binary (was: Allocating enormous amounts of memory)

2007-07-08 Thread Donald Bruce Stewart
Jefferson Heard wrote:

 I'm using the Data.AltBinary package to read in a list of 4.8 million
 floats and 1.6 million ints.  Doing so caused the memory footprint to
 blow up to more than 2gb, which on my laptop simply causes the program
 to crash.  I can do it on my workstation, but I'd really rather not,
 because I want my program to be fairly portable.  
 
 The file that I wrote out in packing the data structure was only 28MB,
 so I assume I'm just using the wrong data structure, or I'm using full
 laziness somewhere I shouldn't be.

Here's a quick example of how to efficiently read and write such a structure to
disk, compressing and decompressing on the fly. 

$ time ./A
Wrote 4800000 floats, and 1600000 ints
Read  4800000 floats, and 1600000 ints
./A  0.93s user 0.06s system 89% cpu 1.106 total

It uses Data.Binary to provide quick serialisation, and the zlib library to
compress the resulting stream. It builds the tables in memory, writes and
compresses the result to disk, reads them back in, and checks we read the right
amount of CFloats and CInts. You'd then pass the CFloats over to your C library
that needs them.

Compressing with zlib is a flourish, but cheap and simple, so we may as well do
it. With zlib and Data.Binary, the core code just becomes:

encodeFile "/tmp/table.gz" table
table' <- decodeFile "/tmp/table.gz"

Which transparently streams the data through zlib, and onto the disk, and back.

Simple and efficient.

-- Don


Here's the code:

-- some imports

import Text.Printf
import Data.Binary
import Codec.Compression.Zlib
import Control.Monad
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Base as B
import Foreign
import Foreign.C.Types

-- A simple table type for the arrays, that will be easy to manipulate

data Table = Table { floats :: L.ByteString
                   , ints   :: L.ByteString }


-- Serialise this data in gzip-compressed form: a gzipping Binary instance.

instance Binary Table where
    put t = do put (compress (floats t))
               put (compress (ints   t))

    get = liftM2 Table (decompress `liftM` get)
                       (decompress `liftM` get)

-- Default sizes

floatSize = 4800000
intSize   = 1600000

-- Build a new empty table

newTable :: IO Table
newTable = do
    f <- mallocArray floatSize :: IO (Ptr CFloat)
    i <- mallocArray intSize   :: IO (Ptr CInt)

    -- fill them with data here --

    -- convert to bytestrings.
    bf <- B.packCStringFinalizer (castPtr f)
              (floatSize * sizeOf (undefined :: CFloat)) (return ())
    bi <- B.packCStringFinalizer (castPtr i)
              (intSize   * sizeOf (undefined :: CInt))   (return ())
    return $ Table (L.fromChunks [bf]) (L.fromChunks [bi])

-- Now just build the table, serialise it, read it back in, and print the result

main = do
    table <- newTable

    -- write the data to disk, compressed with gzip as we go.
    encodeFile "/tmp/table.gz" table
    draw "Wrote" table

    -- load it back in, decompressing on the fly
    table' <- decodeFile "/tmp/table.gz"
    draw "Read " table'

    -- now use 'floats' as a Ptr to pass back to C.

  where
    draw s v = printf "%s %d floats, and %d ints\n" s
                   (fromIntegral (lengthFloats v) `div` sizeOf (undefined :: CFloat))
                   (fromIntegral (lengthInts v  ) `div` sizeOf (undefined :: CInt))

    lengthFloats = L.length . floats
    lengthInts   = L.length . ints

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compress and serialise data with lazy bytestrings, zlib and Data.Binary (was: Allocating enormous amounts of memory)

2007-07-08 Thread Donald Bruce Stewart
dons:
 Jefferson Heard write:
 
  I'm using the Data.AltBinary package to read in a list of 4.8 million
  floats and 1.6 million ints.  Doing so caused the memory footprint to
  blow up to more than 2gb, which on my laptop simply causes the program
  to crash.  I can do it on my workstation, but I'd really rather not,
  because I want my program to be fairly portable.  
  
  The file that I wrote out in packing the data structure was only 28MB,
  so I assume I'm just using the wrong data structure, or I'm using full
  laziness somewhere I shouldn't be.
 
 Here's a quick example of how to efficiently read and write such a structure to
 disk, compressing and decompressing on the fly. 
 
 $ time ./A
 Wrote 4800000 floats, and 1600000 ints
 Read  4800000 floats, and 1600000 ints
 ./A  0.93s user 0.06s system 89% cpu 1.106 total
 
 It uses Data.Binary to provide quick serialisation, and the zlib library to
 compress the resulting stream. It builds the tables in memory, writes and
 compresses the result to disk, reads them back in, and checks we read the
 right amount of CFloats and CInts. You'd then pass the CFloats over to your
 C library that needs them.
 
 Compressing with zlib is a flourish, but cheap and simple, so we may as well
 do it. With zlib and Data.Binary, the core code just becomes:
 
 encodeFile "/tmp/table.gz" table
 table' <- decodeFile "/tmp/table.gz"
 
 Which transparently streams the data through zlib, and onto the disk, and
 back.
 
 Simple and efficient.
Oh, and profiling this code:

$ ghc -prof -auto-all -O2 --make A.hs

$ ./A +RTS -p
Wrote 4800000 floats, and 1600000 ints
Read  4800000 floats, and 1600000 ints

$ cat A.prof 
Mon Jul  9 12:44 2007 Time and Allocation Profiling Report  (Final)

total time  =0.90 secs   (18 ticks @ 50 ms)
total alloc =  26,087,140 bytes  (excludes profiling overheads)

COST CENTREMODULE   %time %alloc
main   Main 100.0  100.0

Looks fine. We'd expect at least 25,600,000 bytes, and a little overhead for
the runtime system. I note that the compressed file on disk is 26k too (yay
for gzip on zeros ;)
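The arithmetic behind that expectation, as a quick sanity check (assuming the 4.8 million floats and 1.6 million ints from the original post, at 4 bytes per CFloat and CInt on a 32-bit machine):

```haskell
-- 4.8M CFloats and 1.6M CInts at 4 bytes apiece should occupy
-- exactly 25,600,000 bytes, matching the figure quoted above.
expectedBytes :: Int
expectedBytes = 4800000 * 4 + 1600000 * 4

main :: IO ()
main = print expectedBytes  -- 25600000
```

The remaining ~487k of the 26,087,140 bytes reported is runtime and profiling bookkeeping.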

-- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] More binary IO, compression, bytestrings and FFI fun

2007-07-08 Thread Donald Bruce Stewart
Processing larger amounts of data, compression, serialisation and calling C.

An elaboration of the previous example:

* Build a largish structure in Haskell
* Compress it in memory
* Serialise it to disk
* Deserialise it
* Decompress
* Pass it to C
* Display the result

Pretty common pattern for low level stuff. We use zlib + lazy
bytestrings for streaming decompression, and Data.Binary for the
serialisation.

We will use

* Foreign.* to generate the data
* Wrap it as a lazy bytestring
* Data.Binary to serialise it
* Codec.Compression.GZip to compress/uncompress
* Pass it to C and make a simple FFI call on the result
* Display the result

Running:

$ ghc -O2 A.hs --make

$ time ./A
Built table
Compressed   25600000 bytes
Compressed size  2231545 bytes (91.28%)
Decompressed 25600000 bytes
Calling into C ...
-8.742278e-8
-0.6865875
-0.7207948
-0.1401903
0.63918984
0.7437966
0.27236375
-0.5763547
-0.75708854
-0.39026973
./A  2.98s user 0.11s system 94% cpu 3.275 total

The code:

{-# OPTIONS -fglasgow-exts #-}

-- 
-- Some imports
--
import Foreign
import Foreign.C.Types
import Data.Int

import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Base as S
import qualified Data.ByteString  as S

import Data.Binary
import Codec.Compression.GZip

import System.IO
import Text.Printf
import Control.Monad


-- Foreign Ptrs
--
-- A simple wrapper type
--
data Table = Table { floats :: ForeignPtr CFloat
                   , ints   :: ForeignPtr Int }

-- Statically fixed sizes
floatSize = 4800000
intSize   = 1600000

totalBytes = sizeOf (undefined :: CFloat) * floatSize
           + sizeOf (undefined :: Int)    * intSize

--
-- Build a table populated with some defaults
-- Float table filled with 'pi' , ints numbered consecutively
--
newTable :: IO Table
newTable = do
    fp <- S.mallocByteString (floatSize * sizeOf (undefined :: CFloat))
    ip <- S.mallocByteString (intSize   * sizeOf (undefined :: Int   ))
    withForeignPtr fp $ \p ->
        forM_ [0..floatSize-1] $ \n ->
            pokeElemOff p n pi
    withForeignPtr ip $ \p ->
        forM_ [0..intSize-1]   $ \n ->
            pokeElemOff p n n
    return (Table fp ip)


-- Lazy ByteStrings
--
-- Convert ForeignPtr a to and from a lazy ByteString
--
toByteString   :: Storable a => ForeignPtr a -> Int -> L.ByteString
toByteString (fp :: ForeignPtr a) n =
    L.fromChunks . (:[]) $ S.fromForeignPtr (castForeignPtr fp)
                                            (n * sizeOf (undefined :: a))

--
-- Flatten a lazy bytestring back to a ForeignPtr.
--
fromByteString :: Storable a => L.ByteString -> ForeignPtr a
fromByteString lbs = castForeignPtr fp
    where (fp,_,n) = S.toForeignPtr . S.concat $ L.toChunks lbs


-- GZip and Data.Binary
--
-- Serialise a Table, compressing with gzip it as we go:
--
instance Binary Table where
    put (Table f i) = do
        put . compress . toByteString f $ floatSize
        put . compress . toByteString i $ intSize

    get = do
        fs <- liftM decompress get
        is <- liftM decompress get

        -- check we read the correct amount:
        if L.length fs + L.length is == fromIntegral totalBytes
            then return $ Table (fromByteString fs) (fromByteString is)
            else error "Partial read"


-- FFI
--
-- Example call to process the data using C functions.
--
rounded :: Int -> ForeignPtr CFloat -> IO [CFloat]
rounded l fp = withForeignPtr fp $ \p -> go p
    where
        go p = forM [0..l-1] $ \n -> do
            v <- peekElemOff p n
            return $ c_tanhf (c_sinf (v + fromIntegral n))

-- A random C function to use:
foreign import ccall unsafe "math.h sinf"  c_sinf  :: CFloat -> CFloat
foreign import ccall unsafe "math.h tanhf" c_tanhf :: CFloat -> CFloat



--
-- Now glue it all together
-- 
main = do
    table <- newTable
    putStrLn "Built table"

    -- write the data to disk, compressed with gzip as we go.
    encodeFile "/tmp/table.gz" table
    printf "Compressed   %d bytes\n" totalBytes

    -- how good was the compression?
    h <- openFile "/tmp/table.gz" ReadMode
    n <- hFileSize h
  

Re: [Haskell-cafe] Clearly, Haskell is ill-founded

2007-07-08 Thread Donald Bruce Stewart
drtomc:
 I don't know if you saw the following linked off /.
 
 http://www.itwire.com.au/content/view/13339/53/
 
 An amazon link for the book is here:
 
 http://www.amazon.com/Computer-Science-Reconsidered-Invocation-Expression/dp/0471798142
 
 The basic claim appears to be that discrete mathematics is a bad
 foundation for computer science. I suspect the subscribers to this
 list would beg to disagree.
 
 Enjoy,

:-)

And he's patented it...

http://www.patentstorm.us/patents/5355496-description.html

SUMMARY OF THE INVENTION

A method and system for process expression and resolution is described. A
first language structure comprising a possibility expression having at least
one definition which is inherently and generally concurrent is provided.
Further, a second language structure comprising an actuality expression
including a fully formed input data name to be resolved is provided.
Furthermore, a third language structure comprising an active expression
initially having at least one invocation, the invocation comprising an
association with a particular definition and the fully formed input data name
of the actuality expression is provided. Subsequently, the process of
resolving invocations begins in the active expression with fully formed input
data names in relation to their associated definition to produce at least one
or both of the following: (1) an invocation with a fully formed input data
name and (2) a result data name. 

Interesting...

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A very edgy language

2007-07-07 Thread Donald Bruce Stewart
andrewcoppin:
 Donald Bruce Stewart wrote:
 Given #haskell is a far larger community than:
 
 #lisp
 #erlang
 #scheme
 #ocaml
 
 As well as
 
 #java
 #javascript
 #ruby
 #lua
 #d
 #perl6
 
 Maybe we need to reconsider where the (FP) mainstream is now? :-)
   
 
 Yeah, #haskell is pretty big - 300 people idling and 1-3 people actually 
 talking. :-P

Hey! We answer questions and write code for free, and you misrepresent
the population anyway:

Maximum users seen in #haskell: 354, currently: 318 (97.8%), active: 53 (16.7%)

In fact, a lot of your exploratory/introductory questions would be most
efficiently answered on irc. Do drop by!

http://haskell.org/haskellwiki/IRC_channel

Cheers,
   Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GHC threads and SMP

2007-07-07 Thread Donald Bruce Stewart
ninegua:
 replying to my own message... the behavior is only when -O is used
 during compilation, otherwise they both run on 2 cores but at a much
 lower (1/100) speed.

Hmm, any change with -O2? Is the optimiser changing the code such that
the scheduler doesn't get to switch threads as often? If you change
the thread scheduler switching rate does that change anything?

See the GHC user's guide for more details:

7.12.1.3. Scheduling policy for concurrent threads

Runnable threads are scheduled in round-robin fashion. Context switches are
signalled by the generation of new sparks or by the expiry of a virtual timer
(the timer interval is configurable with the -C[num] RTS option). However, a
context switch doesn't really happen until the current heap block is full.
You can't get any faster context switching than this.

When a context switch occurs, pending sparks which have not already been
reduced to weak head normal form are turned into new threads. However, there
is a limit to the number of active threads (runnable or blocked) which are
allowed at any given time. This limit can be adjusted with the -t num RTS
option (the default is 32). Once the thread limit is reached, any remaining
sparks are deferred until some of the currently active threads are completed.

Perhaps SimonM can shed some light here?
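For anyone following along, here is a small self-contained sketch of the forkIO/MVar pattern the quoted program uses (my own example, not the poster's code). Compiled with -threaded and run with +RTS -N2, the RTS is free to spread the two children across both cores:

```haskell
import Control.Concurrent
import Control.Monad (forM_, replicateM)

-- Fork two lightweight threads that share a counter; each signals an
-- MVar when done, and the parent blocks until both have finished.
runPair :: IO Int
runPair = do
    dones   <- replicateM 2 newEmptyMVar
    counter <- newMVar (0 :: Int)
    forM_ dones $ \done -> forkIO $ do
        forM_ [1 .. 1000 :: Int] $ \_ ->
            modifyMVar_ counter (return . (+ 1))  -- atomic increment
        putMVar done ()
    mapM_ takeMVar dones   -- wait for both children
    readMVar counter

main :: IO ()
main = runPair >>= print   -- 2000
```

Since modifyMVar_ takes the MVar atomically, the final count is deterministic regardless of how the scheduler interleaves the two threads.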

 
 On 7/6/07, Paul L [EMAIL PROTECTED] wrote:
 I have two parallel algorithms that use the lightweight GHC thread and
 forkIO. I compile them using the -threaded (or -smp) option, and run
 both with +RTS -N2 -RTS command line option.
 
 QSort is able to make use of the dual cores on my laptop -- top
 shows that two threads show up and both CPUs are utilized, and time
 it will give something like this:
 
   real0m0.502s
   user0m0.872s
   sys 0m0.004s
 
 But Prime can only make use of one core, as shown by top. time gives
 
   real0m9.112s
   user0m9.093s
   sys 0m0.028s
 
 Because forkOS is not used anywhere, the decision of running them on 1
 or 2 OS threads seems rather arbitrary. Why?
 
 Regards,
 Paul L
 
 
 
  import Control.Concurrent
  import System.Random
  import Data.Array.MArray
  import Data.Array.IO
  import System.IO.Unsafe
  import Control.Exception
 
 1. Quick Sort
 
  testQSort' n verbose = do
    let b = (0, n - 1)
    arr <- newArray b 0  :: IO (IOUArray Int Int)
    initM' (mkStdGen 0) b arr
    waitForChildren
    qsortM' b arr
    waitForChildren
    if verbose then getElems arr >>= putStrLn . show else return ()
 
 Initialize an array with random numbers.
 
  initM' g (i, j) arr | j - i < 1 = fillArr g i j
    where
      fillArr g i j = if i > j then return () else do
        let (v, g') = next g
        writeArray arr i v >> fillArr g' (i + 1) j
  initM' g (i, j) arr = do
    let k = (i + j) `div` 2
        (g1, g2) = split g
    forkChild $ initM' g1 (i, k) arr
    forkChild $ initM' g2 (k + 1, j) arr
    return ()
 
  qsortM' (i, j) arr = qsort' (i, j)
    where
      qsort' (i, j) =
        if j <= i then return () else do
          k <- split i j
          if j - i > 1 then (forkChild $ qsort' (i, k - 1)) >> return ()
                       else qsort' (i, k - 1)
          qsort' (k + 1, j)
      split left right = do
        v <- readArray arr right
        let split' i j = if j == right then swap i right v >> return i
                         else do
                           b <- readArray arr j
                           if b <= v
                             then (swap i j b) >> split' (i + 1) (j + 1)
                             else split' i (j + 1)
        split' left left
      swap i j b = do
        a <- readArray arr i
        writeArray arr i b
        writeArray arr j a
 
 2. Prime
 
  testPrime' n verbose = do
    arr <- newArray (0, n) True :: IO (IOUArray Int Bool)
    primeM' arr n
    waitForChildren
    if verbose
      then getElems arr >>= putStrLn . show . map fst . filter snd . zip [0..]
      else return ()

  primeM' arr n = do
    let p = truncate $ sqrt (fromIntegral n) + 1
        remove i = if i > p then return () else do
            spawnRemover (i + 1)
            remove' (i + i)
          where
            remove' j = if j > n then return () else do
              writeArray arr j False
              remove' (j + i)
            spawnRemover j = if j > n then return () else do
              t <- readArray arr j
              if t then forkChild (remove j) else spawnRemover (j + 1)
    remove 2
 
 Manage thread termination
 
  children :: MVar [MVar ()]
  children = unsafePerformIO (newMVar [])

  waitForChildren :: IO ()
  waitForChildren = do
    cs <- takeMVar children
    case cs of
      []   -> putMVar children cs
      m:ms -> do
        putMVar children ms
        takeMVar m
        waitForChildren

  forkChild :: IO () -> IO ()
  forkChild io = do
    mvar <- newEmptyMVar
    childs <- takeMVar children
    putMVar children (mvar:childs)
    forkIO (io `finally` putMVar mvar ())
    return ()
 
 ___
 Haskell-Cafe mailing 
