[Haskell-cafe] Re: Haddock: documenting parameters of functional arguments

2007-08-24 Thread Simon Marlow

Henning Thielemann wrote:

I like to write documentation comments like

fix ::
  (   a {- ^ local argument -}
   -> a {- ^ local output -} )
  -> a {- ^ global output -}

but Haddock doesn't allow it. Or is there a trick to get it to work?


Haddock only supports documenting the top-level arguments of a function 
right now.
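
For reference, a sketch of the style that is supported, where only whole
top-level arguments can carry a comment:

fix :: (a -> a)  -- ^ the function to iterate; its own argument and
                 --   result cannot be annotated individually
    -> a         -- ^ global output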


Cheers,
Simon


[Haskell-cafe] Re: GHC optimisations

2007-08-30 Thread Simon Marlow

Simon Peyton-Jones wrote:

GHC does some constant folding, but little by way of strength reduction, or 
using shifts instead of multiplication.  It's pretty easy to add more: it's all 
done in a single module.  Look at primOpRules in the module PrelRules.


Although it isn't done at the Core level as Simon says, GHC's native code 
generator does turn multiplies into shifts, amongst various other low-level 
optimisations.  In fact it's not clear that this kind of thing *should* be 
done at a high level, since it's likely to be machine-dependent.
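
To illustrate what that strength reduction amounts to, expressed at the
source level (a sketch only; the NCG does this on the machine-code side,
not on your source):

import Data.Bits (shiftL)

mul8 :: Int -> Int
mul8 x = x * 8            -- the NCG can emit this as a left shift by 3

mul8' :: Int -> Int
mul8' x = x `shiftL` 3    -- the hand-written equivalent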


Cheers,
Simon


[Haskell-cafe] Re: GHC 6.6.1 and SELinux issues

2007-09-03 Thread Simon Marlow

Bryan O'Sullivan wrote:

Alexander Vodomerov wrote:


I've put GHC in unconfined_execmem_t and it started to work fine.  But
the problem is not in GHC -- it is in programs compiled by GHC. They
also require exec/write memory. Only root can grant unconfined_execmem
privileges, so simple user can not run binaries compiled by GHC. How do
you solve this problem?


Running chcon -t unconfined_execmem_exec_t as root will let you run 
the binaries, which you probably already knew.


The underlying problem is harder to fix: the default SELinux policy 
doesn't allow PROT_EXEC pages to be mapped with PROT_WRITE, for obvious 
reasons.  The solution is expensive in terms of address space and TLB 
entries: map the same pages twice, once only with PROT_EXEC, and once 
only with PROT_WRITE.


There's already a Trac ticket filed against this problem, but Simon 
Marlow marked it as closed because he couldn't test the code he wrote to 
try to fix it, and nobody stepped in to help out at the time: 
http://hackage.haskell.org/trac/ghc/ticket/738


IIRC, what I did was work around execheap, not execmem (and similar 
problems with Data Execution Prevention on Windows).  There aren't any 
uncommitted patches.


Does anyone know how the dynamic linker works?  Does it map the pages 
writable first, then mprotect them executable/read-only after relocation? 
I guess we should do this in the RTS linker, and use the double-mapping 
trick for foreign-import-wrapper stuff.


Cheers,
Simon


[Haskell-cafe] Re: FFI and DLLs

2007-09-04 Thread Simon Marlow

Lewis-Sandy, Darrell wrote:
An early proposal for the FFI supported importing functions directly 
from dynamic link libraries:


http://www.haskell.org/hdirect/ffi-a4.ps.gz


This looks like it was dropped from the final version of the addendum in 
favor of C header files as the sole form of import entities.  Not being 
a C programmer, how would one go about importing a foreign function 
(e.g. void Foo(void)) from a dynamic link library (e.g. foo.dll)?
Would someone be willing to provide an explicit example?


You don't have to do anything special to import a function from a DLL. 
e.g. here's one from the Win32 library:


foreign import stdcall unsafe "windows.h CloseHandle"
  c_CloseHandle :: HANDLE -> IO Bool

The windows.h header file isn't essential unless you're compiling via C 
with GHC; when compiling direct to native code with -fasm, the header file 
is ignored.  Note that by convention most functions in DLLs are called 
using the stdcall convention, so you have to specify this on the foreign 
import.


To link against the DLL, the easiest way is to name the DLL directly on the 
command line.
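
Putting that together for the original question - a minimal sketch,
assuming foo.dll exports void Foo(void) with the stdcall convention:

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

-- assumed: foo.dll exports  void Foo(void)
foreign import stdcall unsafe "Foo"
  c_Foo :: IO ()

main :: IO ()
main = c_Foo

built with something like: ghc --make Main.hs foo.dll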


Cheers,
Simon


[Haskell-cafe] Re: Elevator pitch for Haskell.

2007-09-05 Thread Simon Marlow

Ketil Malde wrote:

WARNING: Learning Haskell is dangerous to your health!


:-)  I liked that so much I made a hazard image to go with it.

http://malde.org/~ketil/Hazard_lambda.svg


Cool! Can we have a license to reuse that image?  (I want it on a T-shirt)

Cheers,
Simon


[Haskell-cafe] Re: Mutable but boxed arrays?

2007-09-06 Thread Simon Marlow

Ketil Malde wrote:

On Wed, 2007-09-05 at 20:37 +0200, Henning Thielemann wrote:
Can someone explain to me why there are arrays with mutable but boxed 
elements?


I, on the other hand, have always wondered why the strict arrays are
called unboxed, rather than, well, strict?  Strictness seems to be
their observable property, while unboxing is just an (admittedly
important) implementation optimization.  I imagine that it'd be at least
as easy to implement the strictness as the unboxedness for non-GHC
compilers, and thus increase compatibility.


You're quite right, that was a mistake, we should have called them strict 
arrays.  Hugs implements the unboxed arrays without any kind of unboxing, I 
believe.
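
A small sketch of the observable difference, using the mutable arrays
from Data.Array.IO:

import Data.Array.IO

main :: IO ()
main = do
  -- boxed, mutable: elements stay lazy, so a thunk (even undefined)
  -- can be stored as long as it is never forced
  boxed <- newArray (0, 9) undefined :: IO (IOArray Int Integer)
  writeArray boxed 0 42
  print =<< readArray boxed 0

  -- "unboxed", mutable: elements are stored evaluated, so strictness
  -- is the property you can actually observe
  unboxed <- newArray (0, 9) 0 :: IO (IOUArray Int Int)
  writeArray unboxed 0 42
  print =<< readArray unboxed 0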


Cheers,
Simon


[Haskell-cafe] Re: Tiny documentation request

2007-09-11 Thread Simon Marlow

Sven Panne wrote:


2. Could we make it so all items are collapsed initially? (Currently
they're all expanded initially - which makes it take rather a long time
to find anything.)


Again this depends on the use case: I'd vote strongly against collapsing the 
list initially, because that way the incremental search in Firefox won't work 
without un-collapsing everything.


This is exactly why the list is expanded by default.  At first I made it 
default to collapsed, and people complained (possibly Sven, in fact :-).


When the index is generated with a more recent Haddock, you get a search 
field, which does an incremental search, so this might be more what 
you are looking for.


A more aesthetic note: We should really get rid of the ugly table/CSS layout 
mixture, the lower part of the page renders a bit ugly and varies between 
browsers. Switching to pure CSS should be safe in 2007, I guess.


Please, please, someone do this for me.  I tried, and failed, to get the 
layout right for the contents list in all browsers at the same time.  The 
semantics of CSS is beyond my comprehension.


Cheers,
Simon



[Haskell-cafe] Re: Tiny documentation request

2007-09-11 Thread Simon Marlow

Thomas Schilling wrote:


However, regarding the modules list.  I think it should be easy to have
optional javascript functionality to toggle the visibility of the module
tree.  The default visibility could be customized using a cookie.


I don't know how to make cookies work purely in Javascript - presumably 
there's a way, though.  I'm accepting patches!


Cheers,
Simon



[Haskell-cafe] Re: Tiny documentation request

2007-09-12 Thread Simon Marlow

manu wrote:


On Sep 11, 2007, Simon Marlow wrote:

Please, please, someone do this for me.  I tried, and failed, to get the
layout right for the contents list in all browsers at the same time.  The
semantics of CSS is beyond my comprehension.

Cheers,
Simon



Hi Simon,

On the page http://www.haskell.org/ghc/docs/latest/html/libraries/index.html, you
only need tables to display the foldable lists of modules (HTML tables
were commonly used to display many things on the same line), but they
can be replaced by nested lists with a bit of CSS:

Check this page out: http://la.di.da.free.fr/haddock/


I can help further, if need be.


I see the idea, but it looks like you're just right-aligning the package 
names, which will look strange when there are different package names. 
(but perhaps not as strange as it currently looks).


In any case, I'd welcome a patch to Haddock that improves things.

Cheers,
Simon


[Haskell-cafe] Re: help getting happy

2007-09-13 Thread Simon Marlow

Greg Meredith wrote:

Haskellians,

The code pasted in below causes Happy to return parE when invoked with
happy rparse.y -i.  Is there any way to get Happy to give me just a wee
bit more info as to what might be causing the parE (which I interpret as
'parse error')?


Please grab a more recent version of Happy from darcs:

  http://darcs.haskell.org/happy

the parE thing was a bug in the error handling introduced in the last 
release.  You'll need Cabal-1.2 in order to build the latest Happy.


Cheers,
Simon


[Haskell-cafe] Re: Building production stable software in Haskell

2007-09-18 Thread Simon Marlow

Adrian Hey wrote:

Neil Mitchell wrote:

Hi


They are less stable and have less quality control.

Surely you jest? I see no evidence of this, rather the contrary in fact.


No, dead serious. The libraries have a library submission process.


It does not follow that libraries that have not been submitted
to this process are less stable and have less quality control. Nor
does it follow that libraries that have been through this submission
process are high quality, accurately documented, bug free and efficient
(at least not ones I've looked at and sometimes even used).


Adrian's right - the set of libraries that are shipped with GHC is 
essentially random.  A bit of history:


Originally we shipped pretty much all freely-available non-trivial Haskell 
libraries with GHC.  At some point (about 5 years ago or so) the number of 
Haskell libraries started to grow beyond what we could reasonably ship with 
GHC, and some of them were providing duplicate functionality, so we stopped 
adding to the set.  We made a few small exceptions (e.g. filepath) for 
things that we felt really should be in the default GHC install, but to a 
large extent the set of libraries that are shipped with GHC has remained 
constant over the last 3 major releases.


In 6.6, we made a nomenclature change: we divided the packages shipped 
with GHC into two: those that are required to bootstrap GHC (the boot 
libraries, until recently called the core libraries), and the others that 
we just include with a default binary install (the extra libraries).  On 
some OSs, e.g. Debian, Ubuntu, Gentoo, you don't even get the extra 
libraries by default.  This was intended to be a stepping stone to 
decoupling GHC from these libraries entirely, which is possible now that we 
have Cabal and Hackage.


What I'm getting around to is that being shipped with GHC is not a 
category that has any particular meaning right now.  I think it's time the 
community started to look at what libraries we have in Hackage, and 
identify a subset that we should consider standard in some sense - that 
is, those to which the library submission process applies, at the least. 
If there were such a set, we could easily make GHC's extra libraries 
equal to it.


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Blocked STM GC question

2007-09-18 Thread Simon Marlow

Jules Bean wrote:

Simon Marlow wrote:

Ashley Yakeley wrote:
If I have a thread that's blocked on an STM retry or TChan read, and 
none of its TVars are referenced elsewhere, will it get stopped and 
garbage-collected?


I have in mind a pump thread that eternally reads off a TChan and 
pushes the result to some function. If the TChan is forgotten 
elsewhere, will the permanently blocked thread still sit around using 
up some small amount of memory, or will it be reaped by the garbage 
collector?


In this case, your thread should receive the BlockedIndefinitely 
exception:


http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#v%3ABlockedIndefinitely 



If the system is idle for a certain amount of time (default 0.3s, 
change it with the +RTS -I option) a full GC is triggered, which will 
detect any threads that are blocked on unreachable objects, and 
arrange to send them the BlockedIndefinitely exception.



Including MVars? Your quoted text suggests 'Yes' but the docs you link 
to suggest 'No'.


Deadlocked threads blocked on MVars instead get the BlockedOnDeadMVar 
exception.  Perhaps those two exceptions should be merged.
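
A minimal sketch of the detection (written against the modern exception
name BlockedIndefinitelyOnMVar; in the base of this era the MVar case was
called BlockedOnDeadMVar):

import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
  mv <- newEmptyMVar :: IO (MVar ())
  -- no other thread can ever fill mv, so at the next idle-time GC the
  -- RTS throws the blocked-indefinitely exception to this thread
  r <- try (takeMVar mv)
  case r of
    Left BlockedIndefinitelyOnMVar -> putStrLn "deadlock detected by the RTS"
    Right () -> putStrLn "impossible"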


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Swapping parameters and type classes

2007-09-18 Thread Simon Marlow

Ian Lynagh wrote:

On Tue, Sep 18, 2007 at 03:48:06PM +0100, Andrzej Jaworski wrote:

Responding to Simon Peyton-Jones' reminder that this is a low-bandwidth list I
was obscure and committed a blunder.

This one and many other threads here are started undoubtedly by experts [sorry
guys:-)] and coffee break should work for them, but on numerous occasions
threads here spawn beginner-type questions. So, my thinking was that it is
perhaps against the tide trying to stop them.
Why not make the list Haskell a first-contact general practitioner? Then
creating e.g. an Announcements & Challenge or Announcements & ask guru list
could take the best from Haskell but also would make it less front line and
thus more elitist, which should imply the manner by itself.


I proposed renaming
haskell@ -> haskell-announce@
haskell-cafe@ -> haskell@
in http://www.haskell.org/pipermail/haskell-cafe/2007-July/028719.html


I suggested the same thing at the time we created haskell-cafe, but the 
consensus was in favour of haskell-cafe.  At the time I didn't think it 
would work - for a long time, the number of subscribers to both lists was 
almost the same - but now I have to admit I think haskell-cafe is a big win 
for the community.  The -cafe extension gives people the confidence to post 
any old chatter without fear of being off-topic, and I'm sure this has 
helped the community to grow.


Those of us who grew up with Usenet (RIP) are more at home with the 
foo/foo-announce split, and it's certainly quite conventional for 
mailing-list naming too, but on the whole I don't think doing things 
differently has really done us any harm and it may well have been a stroke 
of genius :-)


Cheers,
Simon


[Haskell-cafe] Re: Library Process (was Building production stable software in Haskell)

2007-09-19 Thread Simon Marlow

Sven Panne wrote:

On Tuesday 18 September 2007 09:44, Dominic Steinitz wrote:

This discussion has sparked a question in my mind:

What is the process for the inclusion of modules / packages in ghc, hugs
and other compilers  interpreters?


Personal interest of the people working on GHC et. al. ;-)


I thought the master plan was that less would come with the compiler /
interpreter and the user would install packages using cabal. [...]


Although this statement might be a bit heretical on this list, I'll have to 
repeat myself again that Cabal, cabal-install, cabal-whatever will *never* be 
the right tool for the end user to install Haskell packages on platforms with 
their own packaging systems like RPM (the same holds for other systems, I 
just use RPM as an example here).


I think you're identifying a non-problem here.  Cabal was never intended to 
be used instead of the system's packaging tools for installing packages 
globally on the system, if you look back through the original Cabal design 
discussions you'll see this.  We recognised the critical importance of 
working with, rather than around, emerge/ports/RPM/apt/whatever.


Nowadays from a Cabal package you can make an RPM or a Windows installer, and 
the Gentoo folks have imported the entirety of Hackage.  This is how it's 
meant to work.


Cheers,
Simon


[Haskell-cafe] Re: C's fmod in Haskell

2007-09-26 Thread Simon Marlow

Henning Thielemann wrote:


See also
  
http://www.haskell.org/haskellwiki/Things_to_avoid#Forget_about_quot_and_rem 


OTOH, since quot/rem are the primitives in GHC, and div/mod are implemented 
in terms of them, you might prefer to use quot/rem, all other things 
being equal.
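
The observable difference between the two pairs only shows up at
negative operands; a quick check:

main :: IO ()
main = do
  print ((-7) `div` 2, (-7) `mod` 2)    -- (-4,1): div rounds toward negative infinity
  print ((-7) `quot` 2, (-7) `rem` 2)   -- (-3,-1): quot truncates toward zero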


Cheers,
Simon


[Haskell-cafe] Re: GHC 6.7 on Windows / containers-0.1 package?

2007-09-26 Thread Simon Marlow

Stefan O'Rear wrote:

On Wed, Sep 19, 2007 at 10:24:24PM +0100, Neil Mitchell wrote:

Hi Peter,


 So I grabbed ghc-6.7.20070824 (=the latest one for Windows I could find)
and the extra-libs, compiled and installed the GLUT package (which I
needed), but when I compile my library, I get

 Could not find module `Data.Map':
   it is a member of package containers-0.1, which is hidden

All dependencies etc. have changed when going to 6.7/6.8 - you are
probably better off using 6.6.1 for now.

I also don't think that the debugger will help you track down infinite
loop style errors. You might be better off posting the code and asking
for help.


You said 0% CPU.  That's *very* important.  It means that you are using
the threaded runtime (GHCi?), and that you triggered a blackhole.  You
should be able to handle this by compiling your program with -prof (do
*not* use -threaded!), and running with +RTS -xc.  With luck, that will
give you a backtrace to the infinite loop.


As Stefan said, when the program hangs using 0% CPU, it probably means you 
have a black hole.  A black hole is a particular kind of infinite loop; 
one that GHC detects.  In this case it has detected that the program is in 
a loop, but it hasn't managed to detect that the loop is a real deadlock - 
if it did, then you'd also get an exception (<<loop>>).  The failure to 
deliver an exception happens in GHCi for subtle reasons that I've 
forgotten, it might even behave differently in GHC 6.8.1.


The debugger in 6.8.1 can also help to track down loops and deadlocks.  Set 
-fbreak-on-error, run the program using :trace, and hit Control-C when it 
hangs.  Then :history will give you a list of the most recently-visited 
points in your program, so you can step back and inspect the values of 
variables.
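
A sketch of such a session (the program, source locations and output are
illustrative only):

$ ghci Main.hs
ghci> :set -fbreak-on-error
ghci> :trace main
^C                          -- program hangs; interrupt it
Stopped at <exception thrown>
ghci> :history
-1  : go (Main.hs:12:11-22)
-2  : go (Main.hs:12:11-22)
...
ghci> :back                 -- step back and inspect variables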


Cheers,
Simon



[Haskell-cafe] Re: unsafePerformIO: are we safe?

2007-10-09 Thread Simon Marlow

Jorge Marques Pelizzoni wrote:

Hi, all!

This is a newbie question: I sort of understand what unsafePerformIO does
but I don't quite get its consequences. In short: how safe can one be in
face of it? I mean, conceptually, it allows any Haskell function to have
side effects just as in any imperative language, doesn't it? Doesn't it
blow up referential transparency for good? Is there anything intrinsic to
it that still keeps Haskell sound no matter what unsafePerformIO users
do (unlikely) or else what are the guidelines we should follow when using
it?


Old thread, but I don't think anyone mentioned this text from the GHC FAQ:

http://haskell.org/haskellwiki/GHC:FAQ#When_is_it_safe_to_use_unsafe_functions_such_as_unsafePerformIO.3F
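
The classic example in this area is the top-level mutable variable; a
minimal sketch:

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- The NOINLINE pragma is essential: without it GHC may duplicate the
-- unsafePerformIO and create several distinct refs - exactly the loss
-- of referential transparency the question worries about.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

main :: IO ()
main = do
  modifyIORef counter (+1)
  print =<< readIORef counter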

Cheers,
Simon


[Haskell-cafe] Re: Haskell FFI and finalizers

2007-10-09 Thread Simon Marlow

Maxime Henrion wrote:

Stefan O'Rear wrote:

On Thu, Oct 04, 2007 at 12:55:41AM +0200, Maxime Henrion wrote:

When writing the binding for foo_new(), I need to open a file with
fopen() to pass it the FILE *.  Then I get a struct foo * that I can
easily associate with the foo_destroy() finalizer.  However, when
finalizing the struct foo * object, I want to also close the FILE *
handle.

If I write a small C function for doing the finalizer myself, I still
wouldn't get passed the FILE * to close, only the struct foo * pointer
which is of no use.

Ah, yes, this does make the situation more interesting.

Looks like newForeignPtrEnv is maybe what you want?


Yeah, this is what I use now.  I wrote a player_finalizer() function in
C, that takes a FILE * and a pointer to the struct I'm handling, and
which just closes the file.  I then added these sources to the mix in my
.cabal file (with C-Sources, Extra-Includes, etc), and registered this
new finalizer using addForeignPtrFinalizerEnv.

This makes me want to ask you, what is so bad about Foreign.Concurrent
that it should be avoided at almost any cost?  It sure is likely to be
much slower than just calling a plain C finalizer, but aren't Haskell
threads super-cheap anyways?


In GHC ordinary ForeignPtr finalizers are implemented using 
Foreign.Concurrent anyway.  It's not so much that Foreign.Concurrent should 
be avoided at all costs, but rather finalizers in general should be 
avoided, especially if you really care about when they run (i.e. bad things 
could happen if they run late or at unpredictable times).


The Haskell code is not run by the garbage collector, rather the garbage 
collector figures out which finalizers need running and creates a thread to 
run them.  It's perfectly safe to have C finalizers that invoke Haskell 
code using GHC, although this is explicitly undefined by the FFI spec.


The reason that Foreign.Concurrent is separate from Foreign.ForeignPtr is 
that it does essentially require concurrency to implement, whereas ordinary 
C finalizers can be run by the GC (although GHC doesn't do it this way).
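
For completeness, a sketch of the newForeignPtrEnv arrangement described
above (foo_new and foo_finalize stand in for the poster's hypothetical
C API):

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign
import Foreign.C

data CFile  -- opaque FILE
data Foo    -- opaque struct foo

foreign import ccall unsafe "stdio.h fopen"
  c_fopen :: CString -> CString -> IO (Ptr CFile)
foreign import ccall unsafe "foo_new"
  c_foo_new :: Ptr CFile -> IO (Ptr Foo)

-- a small wrapper written on the C side, e.g.
--   void foo_finalize(FILE *fp, struct foo *p){ foo_destroy(p); fclose(fp); }
foreign import ccall unsafe "&foo_finalize"
  p_foo_finalize :: FunPtr (Ptr CFile -> Ptr Foo -> IO ())

newFoo :: FilePath -> IO (ForeignPtr Foo)
newFoo path =
  withCString path $ \cpath ->
  withCString "r"  $ \mode  -> do
    fp  <- c_fopen cpath mode
    foo <- c_foo_new fp
    -- the FILE * travels along as the finalizer's environment
    newForeignPtrEnv p_foo_finalize fp foo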


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-15 Thread Simon Marlow

Claus Reinke wrote:

but calling split-base "base" goes directly against all basic 
assumptions of all packages depending on base.


The new base will have a new version number.  There is no expectation of 
compatibility when the major version is bumped; but we do have an informal 
convention that minor version bumps only add functionality, and sub-minor 
version bumps don't change the API at all.


So a package that depends on 'base' (with no upper version bound) *might* 
be broken in GHC 6.8.1, depending on which modules from base it actually 
uses.  Let's look at the other options:


  - if we rename base, the package will *definitely* be broken

  - if the package specified an upper bound on its base dependency, it will
*definitely* be broken

In the design we've chosen, some packages continue to work without change.

Specifying a dependency on a package without giving an explicit version 
range is a bet: sometimes it wins, sometimes it doesn't.  The nice thing is 
that we have most of our packages in one place, so we can easily test which 
ones are broken and notify the maintainers and/or fix them.


Another reason not to change the name of 'base' is that there would be a 
significant cost to doing so: the name is everywhere, not just in the 
source code of GHC and its tools, but wiki pages, documentation, and so on. 
 Yes I know we've changed other names - very little in packaging is clear-cut.


Cheers,
Simon


Re: [Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-15 Thread Simon Marlow

Claus Reinke wrote:
 Simon Marlow wrote:
 Another reason not to change the name of 'base' is that there would be
 a significant cost to doing so: the name is everywhere, not just in
 the source code of GHC and its tools, but wiki pages, documentation,
 and so on.

 but the name that is everywhere does not stand for what the new version
 provides! any place that is currently referring to 'base' will have to be
 inspected to check whether it will or will not work with the reduced
 base package. and any place that is known to work with the new
 base package might as well make that clear, by using a different name.

base changed its API between 2.0 and 3.0, that's all.  The only difference 
between what happened to the base package between 2.0 and 3.0 and other 
packages is the size of the changes.  In fact, base 3.0 provides about 80% 
of the same API as version 2.0.


Exactly what percentage change should in your opinion require changing the 
name of the package rather than just changing its version number?  Neither 
0% nor 100% are good choices... packaging is rarely clear-cut!


Cheers,
Simon


Re: [Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-15 Thread Simon Marlow

Claus Reinke wrote:


if this is the official interpretation of cabal package version numbers,
could it please be made explicit in a prominent position in the cabal docs?


Yes - I think it would be a good idea to make that convention explicit 
somewhere (I'm sure we've talked about it in the past, but I can't remember 
what happened if anything).


However, I'd like to separate it from Cabal.  Cabal provides mechanism not 
policy, regarding version numbers.



of course, i have absolutely no idea how to write stable packages under
this interpretation. and the examples in the cabal docs do not explain 
this,
either (neither bar nor foo > 1.2 are any good under this 
interpretation).


base >= 2.0 && < 3.0

I believe Cabal is getting (or has got?) some new syntax to make this simpler.


why do you omit the most popular (because most obvious to users) option?

- if base remains what it is and a new package is created providing the
   rest of base after the split, then every user is happy (that it is
   currently hard to implement this by reexporting the split packages as
   base is no excuse)


Omitted only because it isn't implemented.  Well, it is implemented, on my 
laptop, but I'm not happy with the design yet.


In the design we've chosen, some packages continue to work without 
change.


Specifying a dependency on a package without giving an explicit 
version range is a bet: sometimes it wins, sometimes it doesn't.  The 
nice thing is that we have most of our packages in one place, so we 
can easily test which ones are broken and notify the maintainers 
and/or fix them.


sorry, i don't want to turn package management into a betting system.
and i don't see how knowing how much is broken (so cabal can now
only work with central hackage?) is any better than avoiding such 
breakage in the first place.


cabal is fairly new and still under development, so there is no need to
build in assumptions that are sure to cause grief later (and indeed are
doing so already).


what assumptions does Cabal build in?

 Yes I know we've changed other names - very little in packaging is 
clear-cut.


how about using a provides/expects system instead of betting on
version numbers? if a package X expects the functionality of base-1.0,
cabal would go looking not for packages that happen to share the name,
but for packages that provide the functionality. 


Using the version number convention mentioned earlier, base-1.0 
functionality is provided by base-1.0.* only.  A package can already specify 
that explicitly.


I think what you're asking for is more than that: you want us to provide 
base-1.0, base-2.0 and base-3.0 at the same time, so that old packages 
continue to work without needing to be updated.  That is possible, but much 
more work for the maintainer.  Ultimately when things settle down it might 
make sense to do this kind of thing, but right now I think an easier 
approach is to just fix packages when dependencies change, and to identify 
sets of mutually-compatible packages (we've talked about doing this on 
Hackage before).


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-16 Thread Simon Marlow

Udo Stenzel wrote:

Simon Marlow wrote:
So a package that depends on 'base' (with no upper version bound) *might* 
be broken in GHC 6.8.1, depending on which modules from base it actually 
uses.  Let's look at the other options:


  - if we rename base, the package will *definitely* be broken

  - if the package specified an upper bound on its base dependency, it will
*definitely* be broken


- if you provide a 'base' configuration that pulls in the stuff that
  used to be in base, the package will work


I don't know of a way to do that.  The name of the package is baked into 
the object files at compile time, so you can't use the same compiled module 
in more than one package.



I hate betting, but I'd like to know if...

- it is okay to give GHC 6.4/6.6 a castrated configuration of the base
  libraries to remove the conflict with recent ByteString?


Sure, that's what I suggested before.  Moving modules of an existing 
package from 'exposed-modules' to 'hidden-modules' is safe (I wouldn't 
recommend removing them entirely).



- when GHC 6.8 comes out containing base-3, will it be possible to
  additionally install something calling base-2 with both being
  available to packages?


In theory yes - the system was designed to allow this.  In practice we've 
never tried it, and base-2 might not compile unmodified with GHC 6.8.



- If so, will existing Cabal packages automatically pick up the
  compatible base-2 despite base-3 being available?


Only if they specify an upper bound on the base dependency, which most 
don't, but it's an easy change to make.


Cheers,
Simon





[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-16 Thread Simon Marlow
Several good points have been raised in this thread, and while I might not 
agree with everything, I think we can all agree on the goal: things 
shouldn't break so often.


So rather than keep replying to individual points, I'd like to make some 
concrete proposals so we can make progress.


1. Document the version numbering policy.

We should have done this earlier, but we didn't.  The proposed policy, for 
the sake of completeness, is: x.y where:


  x changes == API changed
  x constant but y changes == API extended only
  x and y constant == API is identical

further sub-versions may be added after the x.y; their meaning is 
package-defined.  Ordering on versions is lexicographic; given multiple 
versions that satisfy a dependency, Cabal will pick the latest.


2. Precise dependencies.

As suggested by various people in this thread: we change the convention so 
that dependencies must specify a single x.y API version, or a range of 
versions with an upper bound.  Cabal or Hackage can refuse to accept 
packages that don't follow this convention (perhaps Hackage is a better 
place to enforce it, and Cabal should just warn, I'm not sure).
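
For concreteness, a dependency conforming to both proposals might read
(package names hypothetical):

Build-Depends: base >= 2.0 && < 3.0, foo == 1.2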


Yes, earlier I argued that not specifying precise dependencies allows some 
packages to continue working even when dependencies change, and that having 
precise dependencies means that all packages are guaranteed to break when 
base is updated.  However, I agree that specifying precise dependencies is 
ultimately the right thing: we'll get better errors when things break.



There's lots more to discuss, but I think the above 2 proposals are a step 
in the right direction, agreed?


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-16 Thread Simon Marlow

Claus Reinke wrote:

- if you provide a 'base' configuration that pulls in the stuff that
  used to be in base, the package will work


I don't know of a way to do that.  The name of the package is baked 
into the object files at compile time, so you can't use the same 
compiled module in more than one package.


i've been wrong about this before, so check before you believe,-) but 
here is a hack i arrived at the last time we discussed this:


[using time:Data.Time as a small example; ghc-6.6.1]

1. create, build, and install a package QTime, with default Setup.hs

...

2. create, build, and install a package Time2, with default Setup.hs

...

3. write and build a client module


Ok, when I said above that I didn't know a way to do that, I really meant 
there's no way to do it by modifying the package database alone, which I 
think is what Udo was after.


Your scheme does work, and you have discovered how to make a package that 
re-exports modules from other packages (I made a similar discovery recently 
when looking into how to add support to Cabal for this).  As you can see, 
it's rather cumbersome, in that you need an extra dummy package, and two 
stub modules for each module to be re-exported.


One way to make this easier is to add a little extension to GHC, one that 
we've discussed before:


module Data.Time (module Base1.Data.Time) where
import "base-1.0" Data.Time as Base1.Data.Time

the extension is the "base-1.0" package qualifier on the import, which GHC 
very nearly supports (only the syntax is missing).


Now you don't need the dummy package, and only one stub module per module 
to be re-exported.  Cabal could generate these automatically, given some 
appropriate syntax.  Furthermore, this is better than doing something at 
the package level, because you're not stuck with module granularity, you 
can re-export just parts of a module, which is necessary if you're trying 
to recreate an old version of an API.


I was going to propose this at some point.  Comments?

Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-16 Thread Simon Marlow

Simon Marlow wrote:

Claus Reinke wrote:

- if you provide a 'base' configuration that pulls in the stuff that
  used to be in base, the package will work


I don't know of a way to do that.  The name of the package is baked 
into the object files at compile time, so you can't use the same 
compiled module in more than one package.


i've been wrong about this before, so check before you believe,-) but 
here is a hack i arrived at the last time we discussed this:


[using time:Data.Time as a small example; ghc-6.6.1]

1. create, build, and install a package QTime, with default Setup.hs

...

2. create, build, and install a package Time2, with default Setup.hs

...

3. write and build a client module


Ok, when I said above that I didn't know a way to do that, I really meant 
there's no way to do it by modifying the package database alone, which I 
think is what Udo was after.


Your scheme does work, and you have discovered how to make a package 
that re-exports modules from other packages (I made a similar discovery 
recently when looking into how to add support to Cabal for this).  As 
you can see, it's rather cumbersome, in that you need an extra dummy 
package, and two stub modules for each module to be re-exported.


Ah, I should add that due to technical limitations this scheme can't be 
used to make a base-2 that depends on base-3.  Base is special in this 
respect: GHC only allows a single package called base to be linked into any 
given executable.  The reason for this is that GHC can be independent of 
the version of the base package, and refer to it as just base; in theory 
it's possible to upgrade the base package independently of GHC.


So we're restricted at the moment to providing only completely independent 
base-2 and base-3 in the same installation, and essentially that means 
having (at least) two copies of every package, one that depends on base-2 
and one that depends on base-3.


Perhaps we should revisit this decision, it would be better for GHC to 
depend explicitly on base-3, but allow a separate backwards-compatible 
base-2 that depends on base-3 to be installed alongside.


OTOH, this will still lead to difficulties when you try to mix base-2 and 
base-3.  Suppose that the Exception type changed, so that base-2 needs to 
provide its own version of Exception.  The base-2:Exception will be 
incompatible with the base-3:Exception, and type errors will ensue if the 
two are mixed.


If the base-3:Exception only added a constructor, then you could hide it in 
base-2 instead of defining a new type.  However, if base-3 changed the type 
of a constructor, you're stuffed.  Ah, I think we've discovered a use for 
the renaming feature that was removed in Haskell 1.3!
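
The constructor-hiding trick, sketched with a hypothetical type:

-- suppose base-3 extended a type that base-2 also exported:
--   data Exc = A | B | NewInBase3
-- a compatibility package can re-export only the old constructors:
module OldApi (Exc(A, B)) where
import NewApi (Exc(..))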


Cheers,
Simon



[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-16 Thread Simon Marlow

Bayley, Alistair wrote:
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Simon Marlow


   x changes == API changed
   x constant but y changes == API extended only
   x and y constant == API is identical

Ordering on versions is lexicographic, given multiple 
versions that satisfy a dependency Cabal will pick the latest.


Just a minor point, but would you mind explaining exactly what lexicographic
ordering implies? It appears to me that e.g. version 9.3 of a package
would be preferred over version 10.0. That strikes me as
counter-intuitive.


The lexicographical ordering would make 10.0 > 9.3.  In general, A.B > C.D 
iff A > C, or A == C && B > D.  When we say "the latest version" we mean 
"greatest", implying that version numbers increase with time.  Does that help?
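
This is also the ordering that Data.Version's Ord instance implements,
component-wise and numeric; a quick check (makeVersion assumes a modern
base):

import Data.Version (makeVersion)

main :: IO ()
main = do
  print (makeVersion [10,0] > makeVersion [9,3])  -- True: numeric, per component
  print ("10.0" > "9.3")                          -- False: plain string ordering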


Cheers,
Simon



[Haskell-cafe] Re: Proposal: register a package as providing several API versions

2007-10-16 Thread Simon Marlow

ChrisK wrote:

Simon Marlow wrote:

Several good points have been raised in this thread, and while I might
not agree with everything, I think we can all agree on the goal: things
shouldn't break so often.


I have another concrete proposal to avoid things breaking so often.  Let us
steal from something that works: shared library versioning on unixy systems.

On Mac OS X, I note that I have, in /usr/lib:

lrwxr-xr-x  1 root  wheel      15 Jul 24  2005 libcurl.2.dylib -> libcurl.3.dylib
lrwxr-xr-x  1 root  wheel      15 Jul 24  2005 libcurl.3.0.0.dylib -> libcurl.3.dylib
-rwxr-xr-x  1 root  wheel  201156 Aug 17 17:14 libcurl.3.dylib
lrwxr-xr-x  1 root  wheel      15 Jul 24  2005 libcurl.dylib -> libcurl.3.dylib


The above declaratively expresses that libcurl-3.3.0 provides the version 3 API
and the version 2 API.

This is the capability that should be added to Haskell library packages.

Right now a library can only declare a single version number.  So if I update
hsFoo from 2.1.1 to 3.0.0 then I cannot express whether or not the version 3 API
is a superset of (backward compatible with) the version 2 API.


Certainly, this is something we want to support.  However, there's an 
important difference between shared-library linking and Haskell: in 
Haskell, a superset of an API is not backwards-compatible, because it has 
the potential to cause new name clashes.



Once it is possible to have cabal register hsFoo-3.0.0 also as hsFoo-2, it
will be easy to upgrade hsFoo.  No old programs will fail to compile.

Who here knows enough about the ghc-pkg database to say how easy or hard this
would be?


It could be done using the tricks that Claus just posted and I followed up 
on.  You'd need a separate package for hsFoo-2 that specifies exactly which 
bits of hsFoo-3 are re-exported.  Given some Cabal support and a little 
extension in GHC, this could be made relatively painless for the library 
maintainer.


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-16 Thread Simon Marlow

Bayley, Alistair wrote:
From: Simon Marlow [mailto:[EMAIL PROTECTED] 

The lexicographical ordering would make 10.0 > 9.3.  In general,
A.B > C.D iff A > C, or A == C && B > D.  When we say "the latest
version" we mean "greatest", implying that version numbers increase
with time.  Does that help?



Sort of. It's what I'd expect from a sensible version comparison. It's
just not something I'd ever choose to call lexicographic ordering. IMO,
lexicographic ordering is a basic string comparison, so e.g.

max "10.0" "9.3" = "9.3"

I'd call what you're doing numeric ordering. Does it have a better name,
like version-number-ordering, or section-number-ordering (e.g. Section
3.2.5, Section 3.2.6)?


I've heard it called lexicographical ordering before, but I'm happy to call 
it by whatever name induces the least confusion!


Cheers,
Simon


Re: [Haskell-cafe] Re: Proposal: register a package as providingseveral API versions

2007-10-16 Thread Simon Marlow

Claus Reinke wrote:

It could be done using the tricks that Claus just posted and I 
followed up on.  You'd need a separate package for hsFoo-2 that 
specifies exactly which bits of hsFoo-3 are re-exported.  Given some 
Cabal support and a little extension in GHC, this could be made 
relatively painless for the library maintainer.


are those tricks necessary in this specific case? couldn't we
have a list/range of versions in the version: field, and let cabal
handle the details?


I don't understand what you're proposing here.  Surely just writing

version: 1.0, 2.0

isn't enough - you need to say what the 1.0 and 2.0 APIs actually *are*, 
and then wouldn't that require more syntax?  I don't yet see a good reason 
to do this in a single .cabal file instead of two separate packages.  The 
two-package way seems to require fewer extensions to Cabal.



aside: what happens if we try to combine two modules M and N
that use the same api A, but provided by two different packages
P1 and P2? say, M was built when P1 was still around, but when
N was built, P2 had replaced P1, still supporting A, but not necessarily 
with the same internal representation as used in P1.


Not sure what you mean by try to combine.  A concrete example?

Cheers,
Simon



[Haskell-cafe] Re: Bug in runInteractiveProcess?

2007-10-17 Thread Simon Marlow

Donn Cave wrote:

On Oct 16, 2007, at 9:52 PM, Brandon S. Allbery KF8NH wrote:



On Oct 17, 2007, at 0:39 , Donn Cave wrote:
...
As for closing file descriptors explicitly - if I remember right what I've
seen in the NetBSD source, the UNIX popen() implementation may years ago
have closed all file descriptors, but now it keeps track of the ones it
created, and only closes them.  I think that's the way to go, if closing fds.


Either implementation causes problems; security folks tend to prefer 
that all file descriptors other than 0-2 (0-4 on Windows?) be closed, 
and 0-2(4) be forced open (on /dev/null if they're not already open).  
But in this case, the idea is to set FD_CLOEXEC on (and only on) file 
descriptors opened by the Haskell runtime, so you would get the same 
effect as tracking file descriptors manually.


I can't speak for security folks, but for me, the way you put it goes 
way too far.

The file descriptors at issue were opened by runInteractiveProcess, and
FD_CLOEXEC on them would solve the whole problem (I think.)  Is that
what you mean?  To set this flag routinely on all file descriptors opened
in any way would require a different justification, and it would have to be
a pretty good one!


Setting FD_CLOEXEC on just the pipes created by runInteractiveProcess 
sounds right to me.


Certainly we don't want to set the flag on *all* FDs created in Haskell, in 
particular users of System.Posix.openFd probably want to choose whether 
they set FD_CLOEXEC or not.
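
Setting the flag from Haskell is a one-liner with the unix package; a
sketch of what the fix would do to each pipe end:

import System.Posix.IO (FdOption(CloseOnExec), setFdOption)
import System.Posix.Types (Fd)

-- mark an fd close-on-exec, as proposed for the pipes created by
-- runInteractiveProcess
setCloExec :: Fd -> IO ()
setCloExec fd = setFdOption fd CloseOnExec True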


Would someone like to create a bug report?

Cheers,
Simon


[Haskell-cafe] Re: Bug in runInteractiveProcess?

2007-10-17 Thread Simon Marlow

Donn Cave wrote:


On Oct 16, 2007, at 1:48 PM, John Goerzen wrote:



I have been trying to implement a Haskell-like version of shell
pipelines using runInteractiveProcess.  I am essentially using
hGetContents to grab the output from one command, and passing that to
(forkIO $ hPutStr) to write to the next.  Slow, but this is just an
experiment.


As an aside, I personally would look to System.Posix.Process for this.
Something like this would deal with the file descriptors in the fork ...

fdfork fn dupfds closefds = do
    pid <- forkProcess execio
    return pid
  where
    -- make fd b an alias of a, then drop the original
    dupe (a, b) = do
        dupTo a b
        closeFd a
    -- in the child: rewire the requested fds, close the rest, run the action
    execio = do
        mapM_ dupe dupfds
        mapM_ closeFd closefds
        fn
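
A usage sketch for the helper above (the wc pipeline is illustrative):

import System.Posix.IO
import System.Posix.Process

pipelineDemo :: IO ()
pipelineDemo = do
  (readEnd, writeEnd) <- createPipe
  pid <- fdfork (executeFile "wc" True ["-l"] Nothing)
                [(readEnd, stdInput)]  -- wire the pipe onto the child's stdin
                [writeEnd]             -- the child must not hold the write end
  closeFd readEnd                      -- nor the parent the read end
  _ <- fdWrite writeEnd "one\ntwo\n"
  closeFd writeEnd                     -- EOF for wc
  _ <- getProcessStatus True False pid
  return ()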


Note that forkProcess doesn't currently work with +RTS -N2 (or any value 
larger than 1), and it isn't likely to in the future.  I suspect 
forkProcess should be deprecated.


The POSIX spec is pretty strict about what system calls you can make in the 
child process of a fork in a multithreaded program, and those same 
restrictions apply in Haskell, and they apply not only to the Haskell code 
but also to the runtime (e.g. what if the child process needs more memory 
and the runtime calls mmap(), that's not allowed).  We get away with it 
most of the time because the OSs we run on are less strict than POSIX. 
However, in general I think forking should be restricted to C code that you 
invoke via the FFI.


Cheers,
Simon



Re: [Haskell-cafe] Re: Proposal: register a package asprovidingseveral API versions

2007-10-17 Thread Simon Marlow

Claus Reinke wrote:


the idea was for the cabal file to specify a single provided api,
but to register that as sufficient for a list of dependency numbers.
so the package would implement the latest api, but could be used
by clients expecting either the old or the new api.


I don't see how that could work.  If the old API is compatible with the new 
API, then they might as well have the same version number, so you don't 
need this.  The only way that two APIs can be completely compatible is if 
they are identical.


A client of an API can be tolerant to certain changes in the API, but that 
is something that the client knows about, not the provider.  e.g. if the 
client knows that they use explicit import lists everywhere, then they can 
be tolerant of additions to the API, and can specify that in the dependency.



aside: what happens if we try to combine two modules M and N
that use the same api A, but provided by two different packages
P1 and P2? say, M was built when P1 was still around, but when
N was built, P2 had replaced P1, still supporting A, but not 
necessarily with the same internal representation as used in P1.


Not sure what you mean by try to combine.  A concrete example?


let's see - how about this:

-- package P-1, Name: P, Version: 0.1
module A(L,f,g) where
newtype L a = L [a]
f  a (L as) = elem a as
g as = L as

-- package P-2, Name: P, Version: 0.2
module A(L,f,g) where
newtype L a = L (a -> Bool)
f  a (L as) = as a
g as = L (`elem` as)

if i got this right, both P-1 and P-2 support the same api A, right
down to types. but while P-1's A and P-2's A are each internally
consistent, they can't be mixed. now, consider

module M where
import A
m = g [1,2,3]

module N where
import A
n :: Integer -> A.L Integer -> Bool
n = f

so, if i install P-1, then build M, then install P-2, then build N,
wouldn't N pick up the newer P-2, while M would use the older P-1?
and if so, what happens if we then add


module Main where
import M
import N
main = print (n 0 m)


You'll get a type error - try it.  The big change in GHC 6.6 was to allow 
this kind of construction to occur safely.  P-1:A.L is not the same type as 
P-2:A.L, they don't unify.



i don't seem to be able to predict the result, without actually
trying it out. can you?-) i suspect it won't be pretty, though.


Sure.  We have a test case in our testsuite for this very eventuality, see

http://darcs.haskell.org/testsuite/tests/ghc-regress/typecheck/bug1465

that particular test case arose because someone discovered that the type 
error you get is a bit cryptic (it's better in 6.8.1).


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-17 Thread Simon Marlow

Neil Mitchell wrote:

Hi


I agree. >= 1.0 isn't viable in the long term. Rather, a specific list,
or bounded range of tested versions seems likely to be more robust.


In general, if it compiles and type checks, it will work. It is rare
that an interface stays sufficiently similar that the thing compiles,
but then crashes at runtime. Given that, shouldn't the tested versions
be something a machine figures out - rather than something each
library author has to tend to with every new release of every other
library in hackage?


The only reasonable way we have to test whether a package compiles with a 
new version of a dependency is to try compiling it.  To do anything else 
would be duplicating what the compiler does, and risks getting it wrong.


But you're right that tools could help a lot: for example, after a base 
version bump, Hackage could try to build all its packages against the new 
base to figure out which ones need source code modifications and which can 
probably just have their .cabal files tweaked to allow the new version. 
Hackage could tentatively fix the .cabal files itself and/or contact the 
maintainer.


We'll really need some tool to analyse API changes too, otherwise API 
versioning is too error-prone.  Anyone like to tackle this?  It shouldn't 
be too hard using the GHC API..


Cheers,
Simon


[Haskell-cafe] Re: [Haskell] Re: Trying to install binary-0.4

2007-10-17 Thread Simon Marlow

I've written down the proposed policy for versioning here:

  http://haskell.org/haskellwiki/Package_versioning_policy

It turned out there was a previous page written by Bulat that contained 
essentially this policy, but it wasn't linked from anywhere, which explains 
why it was overlooked.  I took the liberty of rewriting the text.


I took into account Ross's suggestions that the major version should have 
two components, and that we need to be more precise about what it means to 
extend an API.


After a round of editing, we can start to link to this page from 
everywhere, and start migrating packages to this scheme where necessary.


Cheers,
Simon


Re: [Haskell-cafe] Re: Proposal: register a package asprovidingseveralAPI versions

2007-10-17 Thread Simon Marlow

Claus Reinke wrote:


a few examples, of the top of my head:
- consider the base split in reverse: if functionality is only repackaged,
   the merged base would also provide for the previously separate
   sub-package apis (that suggests a separate 'provides:' field, though,
   as merely listing version numbers wouldn't be sufficient)

- consider the base split itself: if there was a way for the base split
   authors to tell cabal that the collection of smaller packages can
   provide for clients of the old big base, those clients would
   not run into trouble when the old big base is removed


These two cases could be solved by re-exports, no extra mechanism is required.


- consider adding a new monad transformer to a monad transformer
   library, or a new regex variant to a regex library - surely the new 
   package version can still provide for clients of the old version


This case doesn't work - if you add *anything* to a library, I can write a 
module that can tell the difference.  So whether your new version is 
compatible in practice depends on the client.
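
A short demonstration of why: the client may already define the new name.

module Client where
import TransformerLib   -- hypothetical: version 2 starts exporting 'newT'

newT :: Int -> Int      -- the client's own helper; fine against version 1
newT = (+ 1)

use :: Int
use = newT 3            -- against version 2: ambiguous occurrence 'newT'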



- consider various packages providing different implementations
   of an api, say edison's - surely any of the implementations will
   do for clients who depend only on the api, not on specifics


Yes, and in this case we should have another package that just re-exports 
one of the underlying packages.


You seem to want to add another layer of granularity in addition to 
packages, and I think that would be unnecessary complexity.


Cheers,
Simon


[Haskell-cafe] Re: Bug in runInteractiveProcess?

2007-10-18 Thread Simon Marlow

John Goerzen wrote:

On 2007-10-17, Simon Marlow [EMAIL PROTECTED] wrote:
Note that forkProcess doesn't currently work with +RTS -N2 (or any value 
larger than 1), and it isn't likely to in the future.  I suspect 
forkProcess should be deprecated.


That would be most annoying, and would render HSH unable to function
without using FFI to import fork from C.  I suspect that forkProcess has
some more intelligence to it than that, though I haven't looked.

System.Process is not powerful enough to do serious work on POSIX, and
perhaps it never can be.  The mechanism for setting up a direct pipeline
with more than 2 processes is extremely inconvenient at best, and it
does not seem possible to create n-process pipelines using
System.Process without having to resort to copying data in the Haskell
process at some point.  (This even putting aside the instant bug)

Not only that, but the ProcessHandle system doesn't:

 * Let me get the child process's PID
 * Send arbitrary signals to the child process
 * Handle SIGCHLD in a custom and sane way
 * Get full exit status information (stopped by a particular signal,
   etc)

Now, there are likely perfectly valid cross-platform reasons that it
doesn't do this.


Yes, absolutely.

Although I *would* like there to be a more general version of 
runInteractiveProcess where for each FD the caller can choose whether to 
supply an existing Handle or to have a new pipe generated.  This would let 
you pipe multiple processes together directly, which can't be done at the 
moment (we've discussed this before, I think).



I am merely trying to point out that removing
forkProcess in favor of System.Process will shut out a large number of
very useful things.


I wasn't intending to push users towards System.Process instead, rather to 
forking in C where it can be done safely.  I completely agree that 
System.Process isn't a replacement for everything you might want to do with 
fork.



Don't forget either that there are a whole class of programs whose
multithreading needs may be better addressed by forkProcess,
executeFile, and clever management of subprograms rather than by a
threaded RTS.


Ok, the non-threaded RTS can indeed support forkProcess without any 
difficulties.  I'm not sure where that leaves us; the non-threaded RTS also 
cannot support waitForProcess in a multithreaded Haskell program, and it 
can't do non-blocking FFI calls in general.


While we have no immediate plans to get rid of the non-threaded RTS, I'd 
really like to.  The main reasons being that it adds another testing 
dimension, and it contains a completely separate implementation of 
non-blocking IO that we have to maintain.



forkProcess, with the ability to dupTo, closeFd, and executeFile is
still mighty useful in my book.


Noted!

Cheers,
Simon


[Haskell-cafe] Re: Proposal: register a package asprovidingseveralAPI versions

2007-10-18 Thread Simon Marlow

ChrisK wrote:

I disagree with Simon Marlow here. In practice I think Claus' definition of
'compatible' is good enough:


I don't think you're disagreeing with me :-)  In fact, you agreed that 
extending an API can break a client:



One can write such a module.  But that is only a problem if the old client
accidentally can tell the difference.  As far as I can see, the only two things
that can go wrong are name conflicts and new instances.



New names can only cause compilation to fail, and this can be fixed by using a
mix of
  (1) adding an explicit import list to the old import statement, or
  (2) adding/expanding a hiding list to the old import statement, or
  (3) using module qualified names to remove the collision
Fixing this sort of compile error is easy; nearly simple enough for a regexp
script.  And the fix does not break using the client with the old library.
Adding things to the namespace should not always force a new API version number.


Yes the errors can be fixed, but it's too late - the client already failed 
to compile out of the box against the specified dependencies.


New instances are ruled out in the definition of an extended API in the 
version policy proposal, incidentally:


http://haskell.org/haskellwiki/Package_versioning_policy

And I agree with you that name clashes are rare, which is why that page 
recommends specifying dependencies that are insensitive to changes in the 
minor version number (i.e. API extensions).


But that still leaves the possibility of breakage if the client isn't using 
import lists, and Claus argued for a system with no uncertainty.  So if you 
want no uncertainty in your dependencies, you either have to (a) not depend 
on API versions (including minor versions) that you haven't tested, or (b) 
use explicit import lists and allow minor version changes only. 
Incidentally, this reminds me that GHC should have a warning for not using 
explicit import lists (perhaps only for external package imports).
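
For example, imports written like these are insensitive to a new export 
appearing in a minor release:

import Data.List (sortBy, groupBy)   -- explicit list: immune to new exports
import qualified Data.Map as Map     -- qualified: likewise safe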


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Proposal: register a package as providing several API versions

2007-10-19 Thread Simon Marlow

Ketil Malde wrote:

Claus Reinke [EMAIL PROTECTED] writes:


Incidentally, this reminds me that GHC should have a warning for not
using explicit import lists (perhaps only for external package
imports).



for package-level imports/exports, that sounds useful.


Isn't there a secret key combination in haskell-mode for Emacs that
populates the import lists automatically?


No emacs command that I know of, but GHC has the -ddump-minimal-imports flag.
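
For example (assuming a module Foo.hs; GHC writes the computed minimal 
import list to a .imports file in the current directory):

$ ghc -c Foo.hs -ddump-minimal-imports
$ cat Foo.imports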

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Using Haddock to document ADTs

2007-10-23 Thread Simon Marlow

Alfonso Acosta wrote:


I'm beginning to get familiar with Haddock and I want to document a
library which, as usually happens, has some ADT definitions.

I'd like to document the ADTs both for the end-user (who shouldn't be
told about its internal implementation) and future developers.


Haddock is designed to document APIs for the end-user rather than the 
developer, although it has been suggested several times that it could 
generate source-code documentation too.


One way to do what you want is to split the module into two:

module Lib.ADT.Internals (ADT(..)) where
data ADT = C1 | ... | Cn

module Lib.ADT (ADT) where
import Lib.ADT.Internals

developers can import the .Internals module, and end-users import the ADT 
module.


Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: OS Abstraction module??

2007-10-23 Thread Simon Marlow

Galchin Vasili wrote:

I am really talking about a module or perhaps a Haskell class that 
provides notion for multiple threads of execution, semaphores, .. that 
hides POSIX vs Win32 APIs ..


I wonder if this discussion is missing the point: if you only want to do 
threads, then Haskell (or GHC, to be more precise) already provides thread 
abstractions that hide the OS-specific implementation details.  In fact, 
doing it yourself with the FFI is likely to cause a lot of problems.


Take a look at the Control.Concurrent module, and GHC's -threaded option.
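
To give a flavour of it, here is a minimal portable sketch using just forkIO 
and an MVar (the same code works on POSIX and Win32):

import Control.Concurrent

main :: IO ()
main = do
  done <- newEmptyMVar            -- an MVar doubles as a one-shot semaphore
  _ <- forkIO $ do
         putStrLn "worker running"
         putMVar done ()          -- signal completion
  takeMVar done                   -- wait for the worker to finish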

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Paralellism

2007-10-25 Thread Simon Marlow

Dominik Luecke wrote:

I am trying to use the code 

rels list =
  let
    o1 = (map makeCompEntry) $ head $ splitList list
    o2 = (map makeCompEntry) $ head $ tail $ splitList list
    o3 = (map makeCompEntry) $ head $ tail $ tail $ splitList list
  in
    case (head $ tail $ tail $ tail $ splitList list) of


you've written 'splitList list' 4 times here.  The compiler might be clever 
enough to common them up, but personally I wouldn't rely on it.  Your code 
can be shortened quite a lot, e.g:


  let
    (o1:o2:o3:rest) = map makeCompEntry (splitList list)
  in
    case rest of
      ...


      [] -> o1 `par` o2 `par` o3 `seq` o1 : o2 : o3 : []
      _  ->
        let o4 = rels (head $ tail $ tail $ tail $ splitList list)
        in
          o1 `par` o2 `par` o3 `par` o4 `seq`
          o1 : o2 : o3 : o4


without knowing what splitList and the rest of the program does, it's hard 
to say why you don't get any parallelism here.  But note that when you say:


o1 `par` o2 `par` o3 `seq` o1 : o2 : o3 : []

what this does is create sparks for o1-o3 before returning the list 
[o1,o2,o3].  Now, something else is consuming this list - if whatever 
consumes it looks at the head of the list first, then you've immediately 
lost the opportunity to overlap o1 with anything else, because the program 
is demanding it eagerly.


All this is quite hard to think about, and there is a serious lack of tools 
at the moment to tell you what's going on.  We do hope to address that in 
the future.


Right now, I suggest keeping things really simple.  Use large-grain 
parallelism, in a very few places in your program.  Use strategies - parMap 
is a good one because it hides the laziness aspect, you don't need to worry 
about what is getting evaluated when.
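
For example, a minimal parMap sketch (using the strategies API of this era; 
rnf fully evaluates each element, assuming an NFData instance):

import Control.Parallel.Strategies (parMap, rnf)

results :: [Double]
results = parMap rnf expensive [1..100]
  where expensive n = sum [sin (n * k) | k <- [1..100000]]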


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: newbie optimization question

2007-10-29 Thread Simon Marlow

Peter Hercek wrote:

Daniel Fischer wrote:
What perpetually puzzles me is that in C long long int has very good 
performance, *much* faster than gmp, in Haskell, on my computer, Int64 
is hardly faster than Integer. 


I tried the example with Int64 and Integer. The integer version
 was actually quicker ... which is the reason I decided to post
 the results.

C++ version times: 1.125; 1.109; 1.125
Int32 cpu times: 3.203; 3.172; 3.172
Int64 cpu times: 11.734; 11.797; 11.844
Integer cpu times: 9.609; 9.609; 9.500

Interesting that Int64 is *slower* than Integer.


I can believe that.  Integer is actually optimised for small values: 
there's a specialised representation for values that fit in a single word 
that avoids calling out to the GMP library.
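
For the curious, the representation is roughly this (a standalone sketch 
mirroring the constructors of GHC's own Integer in this era):

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int#, ByteArray#)

data MyInteger = S# Int#              -- fits in a machine word: no GMP call
               | J# Int# ByteArray#   -- sign/length word plus GMP limbs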


As Stefan pointed out, there's a lot of room to improve the performance of 
Int64, it's just never been a high priority.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: package maintainers: updating your packages to work with GHC 6.8.1

2007-11-05 Thread Simon Marlow

Duncan Coutts wrote:


flag splitBase
  description: Choose the new smaller, split-up base package.
library
  if flag(splitBase)
build-depends: base >= 3, containers
  else
build-depends: base < 3


This is also a good time to add upper bounds to dependencies, in accordance 
with the package versioning policy:


http://haskell.org/haskellwiki/Package_versioning_policy

For instance, with accurate dependencies the above would become

flag splitBase
  description: Choose the new smaller, split-up base package.
library
  if flag(splitBase)
build-depends: base >= 3.0 && < 3.1, containers >= 0.1 && < 0.2
  else
build-depends: base >= 2.0 && < 3.0

Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problem linking with GHC 6.8.1

2007-11-07 Thread Simon Marlow

Alberto Ruiz wrote:
If you don't use the foreign function interface I think that you only need 
the -L option:


ghc --make -L/usr/local/lib/ghc-6.8.1/gmp -O2 -o edimail Main.hs

Something similar worked for me, but this new behavior is not very reasonable. 
Could it be a bug?


It looks like a problem with the binary distributions.  They include gmp, 
but somehow don't install it.  As a workaround, you can take gmp.h and 
libgmp.a from the binary tarball and put them by hand into 
/usr/local/lib/ghc-6.8.1 (or wherever you installed it).  Alternatively you 
can install a suitable gmp package using your OS's package manager (you 
didn't say which flavour of Linux you're on).


BTW, a better place to ask questions about GHC is 
[EMAIL PROTECTED], you're more likely to get a quick answer.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Building Haskell stuff on Windows

2007-11-13 Thread Simon Marlow

Peter Hercek wrote:

Simon Peyton-Jones wrote:
|  Windows and Haskell is not a well travelled route, but if you stray off
|  the cuddly installer packages, it gets even worse.
|
| But it shouldn't. Really it shouldn't. Even though Windows is not my
| preferred platform, it is by no means different enough to warrant such
| additional complexity. Plus, GHC is developed at Microsoft, and the
| currently most featureful Haskell IDE is on Windows...

We build GHC on Windows every day.  I use MSYS with no trouble.



Are there any reasons to use mingw+msys instead of mingw+cygwin?


The GHC build system supports both.  The Cygwin route is usually more 
likely to work, because that's what the nightly builds use.  MSYS is much 
faster than Cygwin for development work, though.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Weird ghci behaviour?

2007-11-15 Thread Simon Marlow

Jonathan Cast wrote:

On 13 Nov 2007, at 11:03 PM, Jules Bean wrote:

Just to be clear: my proposal is that if you want it to go faster you do

ghci foo.hi

or

ghci foo.o

... so you still have the option to run on compiled code.

My suggestion is simply that ghci foo.hs is an instruction to load 
source code (similarly :load); while ghci foo.o is obviously an 
instruction to load compiled code.


Even just having

:m + *Foo


Currently :m doesn't load any modules, it just alters the context (what's 
in scope at the prompt), and fails if you ask for a module that isn't 
loaded.  It would make sense for :m +M to behave like :add if M wasn't 
loaded, though.  And perhaps :add *M or :load *M should ignore compiled 
code for M - that's another option.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Weird ghci behaviour?

2007-11-15 Thread Simon Marlow


It's worth saying that right now, all you have to do to get the source file 
loaded is


   :! touch M.hs
   :reload

Put this in a macro, if you want:

   :def src \s -> return (":! touch "++s)

I hear the arguments in this thread, and others have suggested changes before:

http://hackage.haskell.org/trac/ghc/ticket/1205
http://hackage.haskell.org/trac/ghc/ticket/1439

One of the suggestions was that if you name the source file rather than the 
module name, you get the source file loaded for that file, and any object 
code is ignored.  So any files you explicitly want to have full top-level 
scope for must be named in :load or on the GHCi command line.


The only problem with this is that someone who isn't aware of this 
convention might accidentally be ignoring compiled code, or might wonder 
why their compiled code isn't being used.  Well, perhaps this is less 
confusing than the current behaviour; personally I find the current 
behaviour consistent and easy to understand, but I guess I'm in the minority!


The other suggestion is to have a variant of :load that ignores compiled 
code (or for :load to do that by default, and rename the current :load to 
something else).  I don't have a strong preference.


Cheers,
Simon


Short form of my proposal: Make two separate commands that each have a
predictable behavior.  Make ghci modulename default to source loading, and
require a flag to load a binary.  I don't give a bikeshed what they are called.
 I don't care if the magic :load stays or goes or ends up with only one 
behavior.

This is different/orthogonal to the .o or .hs file extension sensitive proposal.

My arguments:

I run into annoyances because I often poke at things in ghci when trying to get
my package to compile.  So depending on which modules succeeded or failed to
compile I get different behavior when loading into ghci.  I am no longer
confused by this, but just annoyed.

I would say that the user gets surprised which leads to feeling that there is a
lack of control.

The '*' in the '*Main' versus 'Main' prompt is a UI feature for experts, not
for new users.  Making this more obvious or verbose or better documented does
not fix the lack of control the user feels.

The only flags that the user can easily find are those listed by --help:


chrisk$ ghci --help
Usage:

ghci [command-line-options-and-input-files]

The kinds of input files that can be given on the command-line
include:

  - Haskell source files (.hs or .lhs suffix)
  - Object files (.o suffix, or .obj on Windows)
  - Dynamic libraries (.so suffix, or .dll on Windows)

In addition, ghci accepts most of the command-line options that plain
GHC does.  Some of the options that are commonly used are:

-fglasgow-exts  Allow Glasgow extensions (unboxed types, etc.)

-idir Search for imported modules in the directory dir.

-H32m   Increase GHC's default heap size to 32m

-cppEnable CPP processing of source files

Full details can be found in the User's Guide, an online copy of which
can be found here:

http://www.haskell.org/ghc/documentation.html


The -fforce-recomp and -fno-force-recomp flags only exist in the User's Guide.
Thus they are hard to find. Is there a ticket open for adding at least a list of
the recognized flags to ghc and ghci usage messages?

Ideally, I want a ":load modulename" to get the source and a ":bin modulename"
to get the binary (and a ":m ..." to get the binary).  I want "ghci modulename"
to get the source and "ghci -bin modulename" to get the binary.  Simple and
predictable and no surprises.

Cheers,
  Chris K


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Guidance on using asynchronous exceptions

2007-11-16 Thread Simon Marlow

Yang wrote:

To follow up on my previous post (Asynchronous Exceptions and the
RealWorld), I've decided to put together something more concrete in
the hopes of eliciting response.

I'm trying to write a library of higher-level concurrency
abstractions, in particular for asynchronous systems programming. The 
principal goal here is composability and safety. Ideally, one can apply 
combinators on any existing (IO a), not just procedures written for this 
library. But that seems like a pipe dream at this point.


It's quite hard to write composable combinators using threads and 
asynchronous exceptions, and this is certainly a weakness of the design. 
See for example the timeout combinator we added recently:


http://darcs.haskell.org/packages/base/System/Timeout.hs

There we did just about manage to make timeout composable, but it was tricky.
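
Its use is simple even though its implementation is not:

import System.Timeout (timeout)

main :: IO ()
main = do
  r <- timeout 1000000 getLine   -- give up after 10^6 microseconds
  print r                        -- Nothing on timeout, Just the input otherwise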

In the code below, the running theme is process orchestration. (I've put 
TODOs at places where I'm blocked - no pun intended.)


I'm currently worried that what I'm trying to do is simply impossible in
Concurrent Haskell. I'm bewildered by the design decisions in the
asynchronous exceptions paper. I'm also wondering if there are any
efforts under way to reform this situation. I found some relevant
posts below hinting at this, but I'm not sure what the status is
today.


We haven't made any changes to block/unblock, although that's something I'd 
like to investigate at some point.  If you have any suggestions, I'd be 
interested to hear them.


The problem your code seems to be having is that waitForProcess is 
implemented as a C call, and C calls cannot be interrupted by asynchronous 
exceptions - there's just no way to implement that in general.  One 
workaround would be to fork a thread to call waitForProcess, and 
communicate with the thread using an MVar (takeMVar *is* interruptible). 
You could encapsulate this idiom as a combinator interruptible, perhaps. 
 But note that interrupting the thread waiting on the MVar won't then 
terminate the foreign call: the call will run to completion as normal.
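
A minimal sketch of that idiom ("interruptible" is an illustrative name, not 
a library function):

import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)

interruptible :: IO a -> IO a
interruptible act = do
  m <- newEmptyMVar
  _ <- forkIO (act >>= putMVar m)
  takeMVar m   -- interruptible; but killing the waiter doesn't stop act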


The fact that some operations which block indefinitely cannot be 
interrupted is a problem.  We should document which those are, but the fact 
that the audit has to be done by hand means it's both tedious and 
error-prone, which is why it hasn't been done.


The only example that I know of where asynchronous exceptions and 
block/unblock are really used in anger is darcs, which tries to do 
something reasonable in response to a keyboard interrupt.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Digest size

2007-11-23 Thread Simon Marlow

[ with mailing list maintainer's hat on ]

Someone asked me if they could get fewer digests per day from haskell-cafe. 
 The threshold for sending out a digest is currently 30k, which is 
probably the default, but seems a bit small to me.  Any objections to 
bumping it, to say 100k?


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Sillyness in the standard libs.

2007-11-29 Thread Simon Marlow

Brandon S. Allbery KF8NH wrote:


On Nov 19, 2007, at 16:06 , Arthur van Leeuwen wrote:

here is a puzzle for you: try converting a System.Posix.Types.EpochTime
into either a System.Time.CalendarTime or a Data.Time.Clock.UTCTime
without going through read . show or a similar detour through strings.


fromEnum and/or toEnum are helpful for this kind of thing, and I am 
occasionally tempted to bind cast = toEnum . fromEnum because I need 
it so much.


Let's ignore System.Time since it's obsoleted by Data.Time.

I just spent a little while trying to solve this puzzle, and it turns out 
there *is* a right way to do this: for t :: EpochTime,


  posixSecondsToUTCTime (fromRational (toRational t) :: POSIXTime)

You want to go via Data.Time.Clock.POSIXTime, because that's what an 
EpochTime is.   Now, EpochTime does not have an Integral instance, because 
it is the same as C's time_t type, which is not guaranteed to be an 
integral type.  You have fromEnum, but that would be a hack: there's no 
guarantee that EpochTime fits in an Int, and if EpochTime is a fractional 
value you lose information.  But you *can* convert to a Rational with 
toRational, and from there you can get to a POSIXTime with fromRational 
(also realToFrac would do).
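
Packaged up as a self-contained function (imports from the time and unix 
packages):

import Data.Time.Clock (UTCTime)
import Data.Time.Clock.POSIX (POSIXTime, posixSecondsToUTCTime)
import System.Posix.Types (EpochTime)

epochToUTC :: EpochTime -> UTCTime
epochToUTC t = posixSecondsToUTCTime (fromRational (toRational t) :: POSIXTime)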


It turns out there are good reasons for all this, it just ends up being 
quite obscure.  I'll put the above formula in the docs.


Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Over-allocation

2007-11-30 Thread Simon Marlow

Gracjan Polak wrote:


My program is eating too much memory:

copyfile source.txt dest.txt +RTS -sstderr
Reading file...
Reducing structure...
Writting file...
Done in 20.277s
1,499,778,352 bytes allocated in the heap
2,299,036,932 bytes copied during GC (scavenged)
1,522,112,856 bytes copied during GC (not scavenged)
 17,846,272 bytes maximum residency (198 sample(s))

   2860 collections in generation 0 ( 10.37s)
198 collections in generation 1 (  8.35s)

 50 Mb total memory in use

  INIT  time0.00s  (  0.00s elapsed)
  MUT   time1.26s  (  1.54s elapsed)
  GCtime   18.72s  ( 18.74s elapsed)
  EXIT  time0.00s  (  0.00s elapsed)
  Total time   19.98s  ( 20.28s elapsed)


ooh.  May I have your program (the unfixed version) for benchmarking the 
parallel GC?


Cheers,
Simon, currently collecting space leaks
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: A tale of three shootout entries

2007-12-03 Thread Simon Marlow

Simon Peyton-Jones wrote:

| There may well have been changes to the strictness analyser that make
| some of the bangs (or most) unnecessary now. Also, its very likely
| I didn't check all combinations of strict and lazy arguments for the
| optimal evaluation strategy :)
|
| If it seems to be running consitently faster (and producing better Core
| code), by all means submit. I don't think this is a ghc bug or anything
| like that though: just overuse of bangs, leading to unnecessary work.

You might think that unnecessary bangs shouldn't lead to unnecessary work -- if 
GHC knows it's strict *and* you bang the argument, it should still only be 
evaluated once. But it can happen.  Consider

f !xs = length xs

Even though 'length' will evaluate its argument, f nevertheless evaluates it too.  Bangs 
say "evaluate it now", like seq, because we may be trying to control space 
usage.  In this particular case it's silly, because the *first* thing length does is 
evaluate its argument, but that's not true of every strict function.

That's why I say it'd be good to have well-characterised examples.  It *may* be 
something like what I describe. Or it may be a silly omission somewhere.


A little addition to what Simon mentioned above: while it is definitely 
true that adding unnecessary bangs can cause a slowdown, the slowdown 
should be much less with 6.8.1 because in the common case each evaluation 
will be an inline test rather than an out-of-line indirect jump and return.


So, with 6.8.x, you should feel more free to sprinkle those bangs...

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Waiting for thread to finish

2007-12-03 Thread Simon Marlow

Brad Clow wrote:

On Nov 28, 2007 11:30 AM, Matthew Brecknell [EMAIL PROTECTED] wrote:

Even with threads, results are evaluated only when they are needed (or
when forced by a strictness annotation). So the thread that needs a
result (or forces it) first will be the one to evaluate it.


So does GHC implement some sychronisation given that a mutation is
occuring under the covers, ie. the thunk is being replaced by the
result?


Yes, see

http://haskell.org/~simonmar/bib/multiproc05_abstract.html

we use lock-free synchronisation, with a slight possibility that two 
threads might evaluate the same thunk.  But since they'll produce the same 
result, nothing goes wrong.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Nofib modifications

2007-12-04 Thread Simon Marlow

I'd do something like

#if defined(__nhc98__) || defined(YHC)
#define NO_MONOMORPHISM_RESTRICTION
#endif

#ifdef NO_MONOMORPHISM_RESTRICTION
powers :: [[Integer]]
#endif

just to make it quite clear what's going on.  (good comments would do just 
as well).


Cheers,
Simon

Simon Peyton-Jones wrote:

By all means apply a patch, I think.

Simon

| -Original Message-
| From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Neil Mitchell
| Sent: 03 December 2007 17:34
| To: Haskell Cafe
| Cc: Simon Marlow; Malcolm Wallace; Duncan Coutts
| Subject: [Haskell-cafe] Nofib modifications
|
| Hi,
|
| Some of the nofib suite are messed up by Yhc/nhc because of the
| monomorphism restriction. Take imaginary/bernouilli as an example:
|
| powers = [2..] : map (zipWith (*) (head powers)) powers
|
| Hugs and GHC both see powers :: [[Integer]] and a CAF.
|
| Yhc (and nhc) both see powers :: (Enum a, Num a) => [[a]] and no CAF.
|
| This completely destroys the performance in Yhc/nhc. Since this is not
| so much a performance aspect but a compiler bug, based on a feature
| whose future in Haskell' is as yet unclear, perhaps it would be wise
| to patch nofib to include an explicit type signature where this
| matters. I am happy to send in a patch (or just apply it) - but I have
| no idea who maintains the suite. I've CC'd those people who make
| substantial use of the nofib suite.
|
| Thanks
|
| Neil
| ___
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Leopard: ghc 6.8.1 and the case of the missing _environ

2007-12-04 Thread Simon Marlow

Joel Reymont wrote:

Symptoms:

You build 6.8.1 from source on Leopard (x86 in my case) and then

junior:ghc-6.8.1 joelr$ ghci
GHCi, version 6.8.1: http://www.haskell.org/ghc/  :? for help
ghc-6.8.1:
/usr/local/lib/ghc-6.8.1/lib/base-3.0.0.0/HSbase-3.0.0.0.o: unknown 
symbol `_environ'
Loading package base ... linking ... ghc-6.8.1: unable to load package 
`base'



Problem:

ghc binaries are being stripped upon installation which eliminates 
_environ, e.g.


junior:tmp joelr$ nm x|grep environ
2ff0 T ___hscore_environ
0004d004 D _environ

junior:tmp joelr$ strip x
junior:tmp joelr$ nm x|grep environ

Solution:

Need to make sure install-sh does _not_ use the -s option. Haven't found 
out where this needs to be done yet. A temporary workaround is to ask 
Manuel for the pre-built binaries.


MacOS folks: is this still an issue?  If so, could someone create a ticket?

Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell interface file (.hi) format?

2007-12-04 Thread Simon Marlow

Stefan O'Rear wrote:

On Sun, Dec 02, 2007 at 05:45:48AM +0100, Tomasz Zielonka wrote:

On Fri, Nov 30, 2007 at 08:55:51AM +, Neil Mitchell wrote:

Hi


  Prelude :b Control.Concurrent.MVar
  module 'Control.Concurrent.MVar' is not interpreted

:b now defaults to :breakpoint, you want :browse

That's a questionable decision, IMO:
- it changes behavior
- I expect :browse to be used more often, so it deserves the sort
  :b version (:bro is not that short)

On the other hand, this change can have an (unintended?) feature
advertising effect ;-)


It's not a decision at all.  :b is the first command starting with b,
which was browse yesterday, is breakpoint today, and tomorrow will be
something you've never heard of.  


Well, it wasn't quite that accidental.  I noticed that :b had changed, made 
a unilateral decision that :breakpoint was likely to be typed more often 
than :browse, and decided to leave it that way.



It's inherently fragile, and shouldn't
be relied on in scripts - and if :b does anything funny, spell out the
command!


FWIW, I wish we'd never implemented the first prefix match behaviour, 
unextensible as it is.  We could change it for 6.10...


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Re: Haskell Digest, Vol 52, Issue 1

2007-12-05 Thread Simon Marlow

Taral wrote:

On 12/4/07, Simon Marlow [EMAIL PROTECTED] wrote:

  do
    x <- newIVar
    let y = readIVar x
    writeIVar x 3
    print y

(I wrote the let to better illustrate the problem, of course you can inline
y if you want).  Now suppose the compiler decided to evaluate y before the
writeIVar.  What's to prevent it doing that?


Look at the translation:

newIVar >>= (\x -> let y = readIVar x in writeIVar x 3 >> print y)

y can't be floated out because it depends on x.


y doesn't need to be floated out, just evaluated eagerly.

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] IVar

2007-12-05 Thread Simon Marlow

Jan-Willem Maessen wrote:


Consider this:

do
   x <- newIVar
   let y = readIVar x
   writeIVar x 3
   print y

(I wrote the let to better illustrate the problem, of course you can 
inline y if you want).  Now suppose the compiler decided to evaluate y 
before the writeIVar.  What's to prevent it doing that?  Nothing in 
the Haskell spec, only implementation convention.


Nope, semantics.  If we have a cyclic dependency, we have to respect 
it---it's just like thunk evaluation order in that respect.


Ah, so I was thinking of the following readIVar:

 readIVar = unsafePerformIO . readIVarIO

But clearly there's a better one.  Fair enough.

Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] IVar

2007-12-06 Thread Simon Marlow

Jan-Willem Maessen wrote:


On Dec 5, 2007, at 3:58 AM, Simon Marlow wrote:


Ah, so I was thinking of the following readIVar:

readIVar = unsafePerformIO . readIVarIO

But clearly there's a better one.  Fair enough.


Hmm, so unsafePerformIO doesn't deal with any operation that blocks?  


Well, operations that block inside unsafePerformIO do block the whole 
thread, yes, and that may lead to a deadlock if the blocking operation is 
waiting to be unblocked by its own thread.  It sounds like you want to back 
off from any earlier-than-necessary evaluation at this point.  Fortunately 
this problem doesn't arise, because GHC won't commute evaluation past an IO 
operation.



I'm wondering about related sorts of examples now, as well:

do
  x <- newIVar
  y <- unsafeInterleaveIO (readIVarIO x)
  writeIVar x 3
  print y

Or the equivalent things to the above with MVars.


Yes, this suffers from the same problem.  If the compiler were to eagerly 
evaluate y, then you'll get deadlock.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Looking for largest power of 2 <= Integer

2007-12-06 Thread Simon Marlow

Dan Piponi wrote:

There's a bit of work required to make this code good enough for
general consumption, and I don't know much about Haskell internals.

(1) What is the official way to find the size of a word? A quick
look at 6.8.1's base/GHC/Num.lhs reveals that it uses a #defined
symbol.


I'm not 100% sure what you mean by "word" in this context, but assuming you 
mean the same thing as we do in GHC when we say "word" (a pointer-sized 
integral type), then one way is


  Foreign.sizeOf (undefined :: Int)

or

  Foreign.sizeOf (undefined :: Ptr ())

(in GHC these are guaranteed to be the same).

There's also

  Data.Bits.bitSize (undefined :: Int)

which might be more appropriate since you're using Data.Bits anyway.

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problems building and using ghc-6.9

2007-12-06 Thread Simon Marlow

Daniel Fischer wrote:


Then I tried to build zlib-0.4.0.1:
$ runghc ./Setup.hs configure --user --prefix=$HOME
Configuring zlib-0.4.0.1...
Setup.hs: At least the following dependencies are missing:
base >=2.0 && <2.2
??? okay, there was something with flag bytestring-in-base, removed that, so 
that build-depends was base <2.0 || >=2.2, bytestring >=0.9, then

$ runghc ./Setup.hs configure --user --prefix=$HOME
Configuring zlib-0.4.0.1...
Setup.hs: At least the following dependencies are missing:
base <2.0 || >=2.2, bytestring >=0.9


This turns out to be something that broke when we changed the command-line 
syntax for ghc-pkg in the HEAD.  I've just posted a patch for Cabal on 
[EMAIL PROTECTED]


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] class default method proposal

2007-12-11 Thread Simon Marlow

Duncan Coutts wrote:

On Tue, 2007-12-11 at 07:07 -0800, Stefan O'Rear wrote:


This is almost exactly the
http://haskell.org/haskellwiki/Class_system_extension_proposal; that
page has some discussion of implementation issues.


Oh yes, so it is. Did this proposal get discussed on any mailing list?
I'd like to see what people thought. Was there any conclusion about
feasibility?


Ross proposed this on the libraries list in 2005:

http://www.haskell.org//pipermail/libraries/2005-March/003494.html

and I brought it up for Haskell':

http://www.haskell.org//pipermail/haskell-prime/2006-April/001344.html

see also this:

http://www.haskell.org//pipermail/haskell-prime/2006-August/001582.html

Unfortunately the Haskell' wiki doesn't have a good summary of the issues; 
it should.  I'll add these links at least.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Waiting for thread to finish

2007-12-11 Thread Simon Marlow

ChrisK wrote:


That is new. Ah, I see GHC.Conc.forkIO now has a note:

GHC note: the new thread inherits the /blocked/ state of the parent 
(see 'Control.Exception.block').


BUT...doesn't this change some of the semantics of old code that used forkIO ?


Yes, it is a change to the semantics.  I assumed (naively) that virtually 
nobody would be using forkIO inside block, and so the change would be 
benign.  It is (another) departure from the semantics in the Asynchronous 
Exceptions paper.  Still, I think this is the right thing.



I wanted a way to control the blocked status of new threads, since this makes it
 easier to be _sure_ some race conditions will never happen.

And so my new preferred way of writing this is now:


-- we are in parent's blocked state, so make the ticker explicit:
  res <- bracket (forkIO (unblock ticker))
                 killThread
                 (const act)  -- act runs in parent's blocked state





In this case the unblock isn't strictly necessary, because the ticker 
thread spends most of its time in threadDelay, which is interruptible anyway.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Execution of external command

2007-12-13 Thread Simon Marlow

Bulat Ziganshin wrote:

Hello Duncan,

Thursday, December 13, 2007, 4:43:17 PM, you wrote:


Use just the GHC bit from the code I pointed at:


thank you, Duncan. are there any objections against simplest code
proposed by Yitzchak? i.e.

(_, h, _, _) <- runInteractiveCommand script params
output <- hGetContents h

taking into account that bad-behaved scripts are not my headache?


It could deadlock if the script produces enough stderr to fill up its pipe 
buffer, because the script will stop, waiting for your program to empty the 
pipe.


It's been said several times, but we should really have a higher-level 
abstraction over runInteractiveProcess.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: -threaded

2007-12-14 Thread Simon Marlow

Maurício wrote:


I see in the documentation and in many
messages in this list that, if you want
multithreading in your program, you
need to use -threaded in ghc.


Concurrency is supported just fine without -threaded.  You need -threaded 
if you want to:


 1) make foreign calls that do not block other threads
 2) use multiple CPUs
 3) write a multithreaded Haskell library or DLL

Some library functions use foreign calls, for example 
System.Process.waitForProcess, so if you want to use them without blocking 
other threads you need -threaded.
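
For example, a call declared "safe" like the one below can proceed without 
stopping the other Haskell threads, but only when linked with -threaded:

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CUInt)

-- without -threaded this call would block every Haskell thread
foreign import ccall safe "unistd.h sleep"
  c_sleep :: CUInt -> IO CUInt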


From an implementation perspective, -threaded enables OS-thread support in 
the runtime.  Without -threaded, everything runs in a single OS thread. 
The features listed above all require multiple OS threads.



Why isn't that option default? Does
it imply some kind of overhead?


There may be an overhead, because the runtime has to use synchronisation in 
various places.  Hopefully the overhead is negligible for most things.


It's not the default mostly for historical reasons, and because there are 
people who don't like to get multiple OS threads unless they ask for it. 
Also there are one or two things that don't work well with -threaded 
(Gtk2Hs, and System.Posix.forkProcess).


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: -threaded

2007-12-14 Thread Simon Marlow

Ketil Malde wrote:

Simon Marlow [EMAIL PROTECTED] writes:


Concurrency is supported just fine without -threaded.  You need
-threaded if you want to:

  :

 3) write a multithreaded Haskell library or DLL


I thought -threaded (A.K.A. -smp, no?) only affected which runtime was
used, and thus was a linking option.  I do have a library that needs
-smp, but as far as I knew, the onus would be on the *applications* to
specify this when compiling/linking.  Is that incorrect?  Is there a
way for a library to inform the application about this?


Sorry, I was a bit ambiguous there: for case (3) I mean a Haskell library 
that you intend to call from C, or some other foreign language, using 
multiple OS threads.  You're absolutely right that for a Haskell library 
that you intend to call from Haskell, the -threaded option is used when the 
final program is linked, there's no way for the library itself to add it.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: announcing darcs 2.0.0pre2

2007-12-17 Thread Simon Marlow

David Roundy wrote:

I am pleased to announce the availability of the second prerelease of darcs
two, darcs 2.0.0pre2.


Thanks!

Continuing my performance tests, I tried unpulling and re-pulling a bunch 
of patches in a GHC tree.  I'm unpulling about 400 patches using 
--from-tag, and then pulling them again from a local repo.  Summary: darcs2 
is about 10x slower than darcs1 on unpull, and on pull it is 100x slower in 
user time but only 20x slower in elapsed time.


In both cases, the repository was on an NFS filesystem.  In the darcs2 
case, the repository I was pulling from was on the local disk, and I'm also 
using a cache (NFS-mounted).  The darcs2 repository has been optimized, but 
the darcs1 repository has not (at least, not recently).  I did all of these 
a couple of times to eliminate the effects of cache preloading etc., the 
times reported are from the second run.


--- darcs 1:

$ time darcs unpull --from-tag 2007-09-25 -a
Finished unpulling.
35.17s real   5.77s user   1.00s system   19% darcs unpull --from-tag 
2007-09-25 -a


$ time darcs pull ~/ghc-HEAD -a
Pulling from /home/simonmar/ghc-HEAD...
33.51s real   3.62s user   1.05s system   13% darcs pull ~/ghc-HEAD -a

--- darcs 2:

$ time darcs2 unpull --from-tag 2007-09-25 -a
Finished unpulling.
385.22s real   52.18s user   12.62s system   16% darcs2 unpull --from-tag 
2007-09-25 -a


$ time darcs2 pull /64playpen/simonmar/ghc-darcs2 -a
Finished pulling and applying.
668.75s real   290.74s user   15.03s system   45% darcs2 pull 
/64playpen/simonmar/ghc-darcs2 -a


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: #haskell works

2007-12-20 Thread Simon Marlow

Tim Chevalier wrote:

On 12/14/07, Dan Piponi [EMAIL PROTECTED] wrote:

There have been some great improvements in array handling recently. I
decided to have a look at the assembly language generated by some
simple array manipulation code and understand why C is at least twice
as fast as ghc 6.8.1. One the one hand it was disappointing to see
that the Haskell register allocator seems a bit inept and was loading
data into registers that should never have been spilled out of
registers in the first place.


Someone who knows the backend better than I do can correct me if I'm
wrong, but it's my understanding that GHC 6.8.1 doesn't even attempt
to do any register allocation on x86. So -- register allocator? What
register allocator?


That's not entirely true - there is a fairly decent linear-scan register 
allocator in GHC


http://darcs.haskell.org/ghc/compiler/nativeGen/RegAllocLinear.hs

the main bottleneck is not the quality of the register allocation (at 
least, not yet).


The first problem is that in order to get good performance when compiling 
via C we've had to lock various global variables into registers (the heap 
pointer, stack pointer etc.), which leaves too few registers free for 
argument passing on x86, so the stack is used too much.  This is probably 
why people often say that the register allocator sucks - in fact it is 
really the calling convention that sucks.  There is some other stupidness 
such as reloading values from the stack, though.


Another problem is that the backend doesn't turn recursion into loops (i.e. 
backward jumps), so those crappy calling conventions are used around every 
loop.  If we fixed that - which is pretty easy, we've tried it - then the 
bottleneck becomes the lack of loop optimisations in the native code 
generator, and we also run into the limitations of the current register 
allocator.  Fortunately the latter has been fixed: Ben Lippmeier has 
written a graph-colouring allocator, and it's available for trying out in 
GHC HEAD.


Fixing it all properly means some fairly significant architectural changes, 
and dropping the via-C backend (except for bootstrapping on new platforms), 
which is what we'll be doing in 6.10.  I'd expect to see some dramatic 
improvements for those tight loops, in ByteString for example, but for 
typical Haskell code and GHC itself I'll be pleased if we get 10%.  We'll 
see.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell performance

2007-12-20 Thread Simon Marlow

Malcolm Wallace wrote:

Simon Peyton-Jones [EMAIL PROTECTED] wrote:


What would be v helpful would be a regression suite aimed at
performance, that benchmarked GHC (and perhaps other Haskell
compilers) against a set of programs, regularly, and published the
results on a web page, highlighting regressions.


Something along these lines already exists - the nobench suite.
darcs get http://www.cse.unsw.edu.au/~dons/code/nobench
It originally compared ghc, ghci, hugs, nhc98, hbc, and jhc.
(Currently the results at
http://www.cse.unsw.edu.au/~dons/nobench.html
compare only variations of ghc fusion rules.)

I have just been setting up my own local copy - initial results at
http://www.cs.york.ac.uk/fp/nobench/powerpc/results.html
where I intend to compare ghc from each of the 6.4, 6.6 and 6.8
branches, against nhc98 and any other compilers I can get working.
I have powerpc, intel, and possibly sparc machines available.


That's great.  BTW, GHC has a performance bug affecting calendar at the moment:

  http://hackage.haskell.org/trac/ghc/ticket/1168

The best GHC options for this program might therefore be -O2 
-fno-state-hack.  Or perhaps just -O0.



Like Hackage, it should be easy to add a new program.


Is submitting a patch against the darcs repo sufficiently easy?
Should we move the master darcs repo to somewhere more accessible, like
code.haskell.org?


Yes, please do.  When I have a chance I'd like to help out.


It'd be good to measure run-time,


Done...


but allocation count, peak memory use, code size,
compilation time are also good (and rather more stable) numbers to
capture.


Nobench does already collect code size, but does not yet display it in
the results table.  I specifically want to collect compile time as well.
Not sure what the best way to measure allocation and peak memory use
are?


With GHC you need to use +RTS -s and then slurp in the prog.stat file. 
 You can also get allocations, peak memory use, and separate mutator/GC 
times this way.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell performance

2007-12-20 Thread Simon Marlow

Simon Marlow wrote:


Nobench does already collect code size, but does not yet display it in
the results table.  I specifically want to collect compile time as well.
Not sure what the best way to measure allocation and peak memory use
are?


With GHC you need to use +RTS -s and then slurp in the prog.stat 
file.  You can also get allocations, peak memory use, and separate 
mutator/GC times this way.


Oh, and one more thing.  We have this program called nofib-analyse in GHC's 
source tree:


 http://darcs.haskell.org/ghc/utils/nofib-analyse

which takes the output from a couple of nofib runs and generates nice 
tables, in ASCII or LaTeX (for including in papers, see e.g. our 
pointer-tagging paper from ICFP'07).  The only reason we haven't switched 
to using nobench for GHC is the existence of this tool.  Unfortuantely it 
relies on specifics of the output generated by a nofib run, and uses a Perl 
script, etc. etc.  The point is, it needs some non-trivial porting.


I'm pointing this out just in case you or anyone else felt enthusiastic 
enough to port this to nobench, and to hopefully head off any duplication 
of effort.  Failing that, I'll probably get around to porting it myself at 
some point.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell performance

2008-01-02 Thread Simon Marlow

[EMAIL PROTECTED] wrote:

G'day all.

Quoting Jon Harrop [EMAIL PROTECTED]:


I would recommend adding:

1. FFT.

2. Graph traversal, e.g. nth-nearest neighbor.


I'd like to put in a request for Pseudoknot.  Does anyone still have it?


This is it, I think:

http://darcs.haskell.org/nofib/spectral/hartel/nucleic2

Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [16/16] SBM: Discussion and Conclusion

2008-01-02 Thread Simon Marlow

Peter Firefly Brodersen Lund wrote:


Using top together with huge input files convinced me that -sstderr was
untrustworthy so I developed the pause-at-end preloading hack.  Paranoia paid
off big time there.


In what way did you find -sstderr untrustworthy?  Perhaps it is because the 
"memory in use" figure only counts memory that the RTS knows about, and 
doesn't include malloc()'d memory.  If it's anything else, I need to know 
about it.



 o can the core/stg code be made easier to read?


By all means - submit tickets with specific suggestions, or better still 
send patches.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Importing Data.Char speeds up ghc around 70%

2008-01-02 Thread Simon Marlow

Joost Behrends wrote:

Neil Mitchell ndmitchell at gmail.com writes:
 

If it can be reproduced on anyones machine, it is a bug. If you can
bundle up two programs which don't read from stdin (i.e. no getLine
calls) or the standard arguments (i.e. getArgs) which differ only by
the Data.Char import, and have massive speed difiference, then report
a bug.

You should probably also give your GHC versions and platforms etc.


Thanks for your attention too !

Now I tried a version without input (just computing the prime factors of the
constant 2^61+1, which it did correctly in spite of its bugginess). And it
didn't have the Data.Char bug (or the Data.List bug) either.

Nor does my original code on Linux, it seems.

Thus it happens only in an exotic configuration, and Windows will stay exotic
in the Haskell community. Unless someone reproduces it at least on Windows
(XP Pro SP2 is my version), I will do nothing more.


The hardware is Intel Celeron 2.2GHZ, 512 MB Ram. ghc 6.8.1 lives on
D:\\Programme (not on the system drive C:, which causes problems for Cabal, 
by the way).


Even if it only happens on Windows, if it isn't specific to your hardware 
then it could still be a bug.


I have seen strange artifacts like this before that turned out to be caused 
by one of two things:


 - bad cache interactions, e.g. we just happen to have laid out the code in
   such a way that frequently accessed code sequences push each other out
   of the cache, or the relative position of the heap and stack have a bad
   interaction.  This happens less frequently these days with 4-way and
   8-way associative caches on most CPUs.

 - alignment issues, such as storing or loading a lot of misaligned Doubles

in the second case, I've seen the same program run +/- 50% in performance 
from run to run, just based on random alignment of the stack.  But it's not 
likely to be the issue here, I'm guessing.  If it is code misalignment, 
that's something we can easily fix (but I don't *think* we're doing 
anything wrong in that respect).


I have an Opteron box here that regularly gives +/- 20% from run to run of 
the same program with no other load on the machine.  I have no idea why...


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: announcing darcs 2.0.0pre2

2008-01-03 Thread Simon Marlow

David Roundy wrote:


Anyhow, could you retry this test with the above change in methodology, and
let me know if (a) the pull is still slow the first time and (b) if it's
much faster the second time (after the reverse unpull/pull)?


I think I've done it in both directions now, and it got faster, but still 
much slower than darcs1:


$ time darcs2 unpull --from-tag 2007-09-25 -a
Finished unpulling.
58.68s real   50.64s user   6.36s system   97% darcs2 unpull --from-tag 
2007-09-25 -a

$ time darcs2 pull -a ../ghc-darcs2
Pulling from ../ghc-darcs2...
Finished pulling and applying.
53.28s real   44.62s user   7.10s system   97% darcs2 pull -a ../ghc-darcs2

This is still an order of magnitude slower than darcs1 for the same 
operation.  (these times are now on the local filesystem, BTW)


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [darcs-devel] announcing darcs 2.0.0pre2

2008-01-04 Thread Simon Marlow

David Roundy wrote:

On Thu, Jan 03, 2008 at 11:11:40AM +, Simon Marlow wrote:

David Roundy wrote:

Anyhow, could you retry this test with the above change in methodology, and
let me know if (a) the pull is still slow the first time and (b) if it's
much faster the second time (after the reverse unpull/pull)?
I think I've done it in both directions now, and it got faster, but still 
much slower than darcs1:


$ time darcs2 unpull --from-tag 2007-09-25 -a
Finished unpulling.
58.68s real   50.64s user   6.36s system   97% darcs2 unpull --from-tag 
2007-09-25 -a

$ time darcs2 pull -a ../ghc-darcs2
Pulling from ../ghc-darcs2...
Finished pulling and applying.
53.28s real   44.62s user   7.10s system   97% darcs2 pull -a ../ghc-darcs2

This is still an order of magnitude slower than darcs1 for the same 
operation.  (these times are now on the local filesystem, BTW)


Is this with the latest darcs-unstable? I made some improvements shortly
before Christmas (or was it after Christmas?) that ought to improve the
speed of pulls dramatically.  We were doing O(N^2) operations in our
handling of pending changes, which I fixed (I think).  So I'll wait on
investigating this until you've confirmed which version this was tested
with.  And thanks for the testing!


This is using a binary I compiled up from the latest sources yesterday, so 
it should have those improvements.


Cheers,
Simon


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] ANNOUNCE: Haddock version 2.0.0.0

2008-01-09 Thread Simon Marlow

Felipe Lessa wrote:

On Jan 8, 2008 10:28 AM, David Waern [EMAIL PROTECTED] wrote:

  * Haddock now understands all syntax understood by GHC 6.8.2


Does Haddock still define __HADDOCK__? There's a lot of code that uses
this flag just to hide something Haddock didn't know.


Haddock itself never defined __HADDOCK__, because it didn't do the CPP 
preprocessing; Cabal defines __HADDOCK__ when preprocessing files for 
passing to Haddock.  When used with Haddock version 2, Cabal no longer 
defines __HADDOCK__ when preprocessing.


Haddock 2 can do the preprocessing itself, because it is processing the 
Haskell files using the GHC API.  In this case, if you want __HADDOCK__ 
defined, you have to add it explicitly with --optghc -D__HADDOCK__.


The easiest way to use Haddock 2 is via Cabal, which will automatically add 
the appropriate options for your package, including the right -B option 
which Haddock now needs in order that the GHC API can find its package 
database.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Problems with Unicode Symbols as Infix Function Names in Propositional Calculus Haskell DSL

2008-01-10 Thread Simon Marlow

Cetin Sert wrote:
I want to design a DSL in Haskell for propositional calculus. But 
instead of using natural language names for functions like or, and, 
implies etc. I want to use Unicode symbols as infix functions ¬, ˅, ˄, 
→, ↔ But I keep getting error messages from the GHC parser. Is there a 
way to make GHC parse my source files correctly? If it is not possible 
yet, please consider this as a “feature request”.


GHC supports unicode source files encoded using UTF-8 by default.  It 
should be possible to use many unicode symbols for infix symbols.  Note 
that when -XUnicodeSyntax is on, certain symbols have special meanings 
(e.g. → means ->).
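
For example, a UTF-8 encoded file like this should parse, because these 
operator characters are in Unicode symbol categories (a sketch, since I 
don't have the original source):

(¬) :: Bool -> Bool
(¬) = not

(∧), (∨) :: Bool -> Bool -> Bool
(∧) = (&&)
(∨) = (||)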


Without more information I can't tell exactly what problem you're 
encountering.  If you supply the source code you're trying to compile, we 
might be able to help.  Also, note that [EMAIL PROTECTED] 
is a better forum for GHC-specific issues.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Parallel Pi

2010-03-18 Thread Simon Marlow

On 17/03/10 21:30, Daniel Fischer wrote:

On Wednesday 17 March 2010 19:49:57, Artyom Kazak wrote:

Hello!
I tried to implement the parallel Monte-Carlo method of computing Pi
number, using two cores:

move


But it uses only on core:


snip


We see that our one spark is pruned. Why?



Well, the problem is that your tasks don't do any real work - yet.
piMonte returns a thunk pretty immediately, that thunk is then evaluated by
show, long after your chance for parallelism is gone. You must force the
work to be done _in_ r1 and r2, then you get parallelism:

   Generation 0:  2627 collections,  2626 parallel,  0.14s,  0.12s elapsed
   Generation 1: 1 collections, 1 parallel,  0.00s,  0.00s elapsed

   Parallel GC work balance: 1.79 (429262 / 240225, ideal 2)

 MUT time (elapsed)   GC time  (elapsed)
   Task  0 (worker) :0.00s(  8.22s)   0.00s(  0.00s)
   Task  1 (worker) :8.16s(  8.22s)   0.01s(  0.01s)
   Task  2 (worker) :8.00s(  8.22s)   0.13s(  0.11s)
   Task  3 (worker) :0.00s(  8.22s)   0.00s(  0.00s)

   SPARKS: 1 (1 converted, 0 pruned)

   INIT  time0.00s  (  0.00s elapsed)
   MUT   time   16.14s  (  8.22s elapsed)
   GCtime0.14s  (  0.12s elapsed)
   EXIT  time0.00s  (  0.00s elapsed)
   Total time   16.29s  (  8.34s elapsed)

   %GC time   0.9%  (1.4% elapsed)

   Alloc rate163,684,377 bytes per MUT second

   Productivity  99.1% of total user, 193.5% of total elapsed

But alas, it is slower than the single-threaded calculation :(

   INIT  time0.00s  (  0.00s elapsed)
   MUT   time7.08s  (  7.10s elapsed)
   GCtime0.08s  (  0.08s elapsed)
   EXIT  time0.00s  (  0.00s elapsed)
   Total time7.15s  (  7.18s elapsed)


It works for me (GHC 6.12.1):

  SPARKS: 1 (1 converted, 0 pruned)

  INIT  time0.00s  (  0.00s elapsed)
  MUT   time9.05s  (  4.54s elapsed)
  GCtime0.12s  (  0.09s elapsed)
  EXIT  time0.00s  (  0.01s elapsed)
  Total time9.12s  (  4.63s elapsed)

wall-clock speedup of 1.93 on 2 cores.

What hardware are you using there? Have you tried changing any GC settings?

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Parallel Pi

2010-03-19 Thread Simon Marlow

On 18/03/10 22:52, Daniel Fischer wrote:

On Thursday 18 March 2010 22:44:55, Simon Marlow wrote:

On 17/03/10 21:30, Daniel Fischer wrote:

On Wednesday 17 March 2010 19:49:57, Artyom Kazak wrote:

Hello!
I tried to implement the parallel Monte-Carlo method of computing Pi
number, using two cores:


move


But it uses only one core:


snip


We see that our one spark is pruned. Why?


Well, the problem is that your tasks don't do any real work - yet.
piMonte returns a thunk almost immediately; that thunk is then
evaluated by show, long after your chance for parallelism is gone. You
must force the work to be done _in_ r1 and r2, then you get
parallelism:

Generation 0:  2627 collections,  2626 parallel,  0.14s,  0.12s elapsed
Generation 1:     1 collections,     1 parallel,  0.00s,  0.00s elapsed

Parallel GC work balance: 1.79 (429262 / 240225, ideal 2)

                        MUT time (elapsed)       GC time  (elapsed)
  Task  0 (worker) :    0.00s    (  8.22s)       0.00s    (  0.00s)
  Task  1 (worker) :    8.16s    (  8.22s)       0.01s    (  0.01s)
  Task  2 (worker) :    8.00s    (  8.22s)       0.13s    (  0.11s)
  Task  3 (worker) :    0.00s    (  8.22s)       0.00s    (  0.00s)

SPARKS: 1 (1 converted, 0 pruned)

INIT  time    0.00s  (  0.00s elapsed)
MUT   time   16.14s  (  8.22s elapsed)
GC    time    0.14s  (  0.12s elapsed)
EXIT  time    0.00s  (  0.00s elapsed)
Total time   16.29s  (  8.34s elapsed)

%GC time       0.9%  (1.4% elapsed)

Alloc rate    163,684,377 bytes per MUT second

Productivity  99.1% of total user, 193.5% of total elapsed

But alas, it is slower than the single-threaded calculation :(

INIT  time    0.00s  (  0.00s elapsed)
MUT   time    7.08s  (  7.10s elapsed)
GC    time    0.08s  (  0.08s elapsed)
EXIT  time    0.00s  (  0.00s elapsed)
Total time    7.15s  (  7.18s elapsed)


It works for me (GHC 6.12.1):

SPARKS: 1 (1 converted, 0 pruned)

INIT  time    0.00s  (  0.00s elapsed)
MUT   time    9.05s  (  4.54s elapsed)
GC    time    0.12s  (  0.09s elapsed)
EXIT  time    0.00s  (  0.01s elapsed)
Total time    9.12s  (  4.63s elapsed)

wall-clock speedup of 1.93 on 2 cores.


Is that Artyom's original code or with the pseq'ed length?


Your fixed version.
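
That is, the version that forces both halves before combining them;
schematically it has this shape (a sketch, with names assumed rather
than taken from the original post):

  import Control.Parallel (par, pseq)

  -- Force both halves so each spark does real work before 'show'
  -- ever sees the result.
  combine :: Double -> Double -> Double
  combine r1 r2 = r1 `par` (r2 `pseq` (r1 + r2) / 2)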


And, with -N2, I also have a productivity of 193.5%, but the elapsed time
is larger than the elapsed time for -N1. How long does it take with -N1 for
you?


The 1.93 speedup was compared to the time for -N1 (8.98s in my case).


What hardware are you using there?


3.06GHz Pentium 4, 2 cores.
I have mixed results with parallelism: some programmes get a speed-up of
nearly a factor of 2 (wall-clock time), others 1.4 or 1.5 or so, yet others
take about the same wall-clock time as the single-threaded programme, and
some - like this one - take longer despite using both cores intensively.


I suspect it's something specific to that processor, probably 
cache-related.  Perhaps we've managed to put some data frequently 
accessed by both CPUs on the same cache line.  I'd have to do some 
detailed profiling on that processor to find out, though.  If you have 
the time and inclination, install oprofile and look for things like 
memory ordering stalls.



Have you tried changing any GC settings?


I've played around a little with -qg and -qb and -C, but they showed little
influence.  Any tips on what else might be worth trying?


-A would be the other thing to try.

Cheers,
Simon






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Parallel Pi

2010-03-19 Thread Simon Marlow

On 19/03/10 09:00, Ketil Malde wrote:

Daniel Fischer <daniel.is.fisc...@web.de> writes:


3.06GHz Pentium 4, 2 cores.


[I.e. a single-core hyperthreaded CPU]


Ah, that would definitely explain a lack of parallelism.  I'm just 
grateful we don't have another one of those multicore cache-line 
performance bugs, because they're a nightmare to track down.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Timeouts that don't cause data growth.

2010-03-23 Thread Simon Marlow

On 23/03/10 17:40, David Leimbach wrote:

Trying to understand why the code here:
http://moonpatio.com/fastcgi/hpaste.fcgi/view?id=8823#a8823  exhausts
memory.

I need to have timeouts in a program I'm writing that will run an
interactive polling session of some remote resources, and know when to
give up and handle that error. Unfortunately this code dies pretty
quickly and produces an -hc  graph like the one attached.

It seems that System.Timeout can't be used for this.  I should note that
if the code is changed to use an infinite timeout (-1), this problem
doesn't occur. Is this a bug in System.Timeout, or is there something I
should be doing to keep the data size down?


The leak is caused by the Data.Unique library, and coincidentally it was 
fixed recently.  6.12.2 will have the fix.


Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Is hGetLine lazy like hGetContents? And what about this todo item?

2010-03-25 Thread Simon Marlow

On 25/03/2010 15:40, Jason Dagit wrote:

Hello,

I was trying to figure out if hGetLine is safe to use inside of
withFile.  Specifically, I want to return the line I read and use it
later, without inspecting it before withFile calls hClose.

If you want to understand the concern I have, look here:
http://www.haskell.org/haskellwiki/Maintaining_laziness#Input_and_Output

There is a bit of explanation showing that hGetContents can be
problematic with withFile.

I can tell from reading the source of hGetContents that it uses
unsafeInterleaveIO, so it makes sense to me why that wiki page talks
about hGetContents:
http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-IO-Handle-Text.html#hGetContents

When I read the source of hGetLine, it is less clear to me if I need to
be concerned.  I believe it is not lazy in the sense of lazy IO above.
Could someone else please comment?
http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-IO-Handle-Text.html#hGetLine


Correct: it is not lazy, and it is safe to use inside withFile.
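
For example, this is fine - the line is read in full before withFile 
closes the handle (a minimal sketch):

  import System.IO

  -- hGetLine reads the whole line strictly, so the result is still
  -- valid after the handle has been closed.
  firstLine :: FilePath -> IO String
  firstLine path = withFile path ReadMode hGetLine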


Then I notice this 'todo' item in the description:
-- ToDo: the unbuffered case is wrong: it doesn't lock the handle for
-- the duration.

The code itself looks to me like it only handles the buffered case.
Perhaps this todo is obsolete and needs to be removed?  If it's not
obsolete, do we need to create a ticket for this?


Well spotted, that comment is out of date and wrong.  There used to be a 
version of hGetLine written in terms of hGetChar which was used when the 
Handle was unbuffered, but I think I removed it in the recent rewrite.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Simon Marlow

On 25/03/2010 11:57, Bas van Dijk wrote:

Dear all, (sorry for this long mail)

When programming in the IO monad you have to be careful about
asynchronous exceptions. These nasty little worms can be thrown to you
at any point in your IO computation. You have to be extra careful when
doing, what must be, an atomic transaction like:

do old <- takeMVar m
   new <- f old `onException` putMVar m old
   putMVar m new

If an asynchronous exception is thrown to you right after you have
taken your MVar, the putMVar will not be executed anymore and will
leave your MVar in the empty state. This can possibly lead to
deadlock.

The standard solution for this is to use a function like modifyMVar_:

modifyMVar_ :: MVar a -> (a -> IO a) -> IO ()
modifyMVar_ m io =
  block $ do
    a  <- takeMVar m
    a' <- unblock (io a) `onException` putMVar m a
    putMVar m a'

As you can see this will first block asynchronous exceptions before
taking the MVar.

It is usually better to be in the blocked state for as short a time as
possible, to ensure that asynchronous exceptions can be handled as soon as
possible. This is why modifyMVar_ unblocks the inner (io a).

However now comes the problem I would like to talk about. What if I
want to use modifyMVar_ as part of a bigger atomic transaction? As in:

block $ do
  ...
  modifyMVar_ m f
  ...


From a quick glance at this code it looks like asynchronous exceptions
can't be thrown to this transaction because we block them. However the
unblock in modifyMVar_ opens an asynchronous exception wormhole
right into our blocked computation. This destroys modularity.

Besides modifyMVar_ the following functions suffer the same problem:

* Control.Exception.finally/bracket/bracketOnError
* Control.Concurrent.MVar.withMVar/modifyMVar_/modifyMVar
* Foreign.Marshal.Pool.withPool

We can solve it by introducing two handy functions 'blockedApply' and
'blockedApply2' and wrapping each of the operations in them:


import Control.Exception
import Control.Concurrent.MVar
import Foreign.Marshal.Pool
import GHC.IO ( catchAny )




blockedApply :: IO a -> (IO a -> IO b) -> IO b
blockedApply a f = do
  b <- blocked
  if b
    then f a
    else block $ f $ unblock a


blockedApply2 :: (c -> IO a) -> ((c -> IO a) -> IO b) -> IO b
blockedApply2 g f = do
  b <- blocked
  if b
    then f g
    else block $ f $ unblock . g


Nice, I hadn't noticed that you can now code this up in the library 
since we added 'blocked'.  Unfortunately this isn't cheap: 'blocked' is 
currently an out-of-line call to the RTS, so if we want to start using 
it for important things like finally and bracket, then we should put 
some effort into optimising it.


I'd also be amenable to having block/unblock count nesting levels 
instead; I don't think it would be too hard to implement, and it wouldn't 
require any changes at the library level.


Incidentally, I've been using the term mask rather than block in 
this context, as block is far too overloaded.  It would be nice to 
change the terminology in the library too, leaving the old functions 
around for backwards compatibility of course.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Is hGetLine lazy like hGetContents? And what about this todo item?

2010-03-25 Thread Simon Marlow

On 25/03/10 17:07, Jason Dagit wrote:

On Thu, 2010-03-25 at 16:13 +, Simon Marlow wrote:

On 25/03/2010 15:40, Jason Dagit wrote:

Hello,

I was trying to figure out if hGetLine is safe to use inside of
withFile.  Specifically, I want to return the line I read and use it
later, without inspecting it before withFile calls hClose.

If you want to understand the concern I have, look here:
http://www.haskell.org/haskellwiki/Maintaining_laziness#Input_and_Output

There is a bit of explanation showing that hGetContents can be
problematic with withFile.

I can tell from reading the source of hGetContents that it uses
unsafeInterleaveIO, so it makes sense to me why that wiki page talks
about hGetContents:
http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-IO-Handle-Text.html#hGetContents

When I read the source of hGetLine, it is less clear to me if I need to
be concerned.  I believe it is not lazy in the sense of lazy IO above.
Could someone else please comment?
http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-IO-Handle-Text.html#hGetLine


Correct: it is not lazy, and it is safe to use inside withFile.


Great!  I did a few simple tests in GHCi and it seemed safe, but I
wanted to be extra prudent.  Thanks.




Then I notice this 'todo' item in the description:
-- ToDo: the unbuffered case is wrong: it doesn't lock the handle for
-- the duration.

The code itself looks to me like it only handles the buffered case.
Perhaps this todo is obsolete and needs to be removed?  If it's not
obsolete, do we need to create a ticket for this?


Well spotted, that comment is out of date and wrong.  There used to be a
version of hGetLine written in terms of hGetChar which was used when the
Handle was unbuffered, but I think I removed it in the recent rewrite.


What is the next step for getting rid of the obsolete comment?  Did you
already nuke it?  If not, I could try to get a copy of the ghc repo and
see if I can figure out the right protocol for submitting a patch.


Already nuked in my working tree, it'll filter through into the repo in 
due course.  The library submission process would be way overkill for that!


Cheers,
Simon


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Simon Marlow

On 25/03/10 17:16, Bas van Dijk wrote:

On Thu, Mar 25, 2010 at 5:36 PM, Simon Marlow <marlo...@gmail.com> wrote:

Nice, I hadn't noticed that you can now code this up in the library since we
added 'blocked'.  Unfortunately this isn't cheap: 'blocked' is currently an
out-of-line call to the RTS, so if we want to start using it for important
things like finally and bracket, then we should put some effort into
optimising it.

I'd also be amenable to having block/unblock count nesting levels instead, I
don't think it would be too hard to implement and it wouldn't require any
changes at the library level.



Yes counting the nesting level like Twan proposed will definitely
solve the modularity problem.

I do think we need to optimize the block and unblock operations in
such a way that they don't need to use IORefs to save the counting
level. The version Twan posted requires 2 reads and 2 writes for a
block and unblock. While I haven't profiled it I think it's not very
efficient.


Oh, I thought that was pseudocode to illustrate the idea.  Where would 
you store the IORef, for one thing?  No, I think the only sensible way 
is to build the nesting semantics into the primitives.



Incedentally, I've been using the term mask rather than block in this
context, as block is far too overloaded.  It would be nice to change the
terminology in the library too, leaving the old functions around for
backwards compatibility of course.


Indeed block is too overloaded. I've been using block and unblock a
lot in concurrent-extra[1] and because this package deals with threads
that can block, it sometimes is confusing whether a block refers to
thread blocking or asynchronous exception blocking.

So I'm all for deprecating 'block' in favor of 'mask'. However what do
we call 'unblock'? 'unmask' maybe? However when we have:

mask $ mask $ unmask x

and these operations have the counting nesting levels semantics,
asynchronous exception will not be unmasked in 'x'. However I don't
currently know of a nicer alternative.


But that's the semantics you wanted, isn't it?  Am I missing something?

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell.org re-design

2010-03-29 Thread Simon Marlow

On 28/03/2010 21:44, Christopher Done wrote:

This is a post about re-designing the whole Haskell web site.

We got a new logo but didn't really take it any further. For a while
there's been talk about a new design for the Haskell web site, and there
are loads of web pages about Haskell that don't follow a theme
consistent with Haskell.org's, probably because it doesn't really have a
proper theme.

I'm not a designer so take my suggestion with a grain of salt, but
something that showed pictures of the latest events and the feeds we
currently have would be nice. The feeds let you know that the community
is busy, and pictures tell you that we are human and friendly.

Anyway, I came up with something to kick off a discussion:

http://haskell.org/haskellwiki/Image:Haskell-homepage-idea.png

It answers the basic questions:

* What's Haskell?
* Where am I on the site? (Answered by a universally recognised tab
  menu)
* What's it like?
* How do I learn it?
* Does it have an active community?
* What's going on in the community? What are they making?
* This language is weird. Are they human? -- Yes. The picture of a
  recent event can fade from one to another with jQuery.

The colours aren't the most exciting, but someone who's a professional
designer could do a proper design. But I like the idea of the site being
like this; really busy but not scarily busy.


The general design looks great, nice job.

Is the footer necessary?  I dislike sites that have too many ways to 
navigate, and the footer looks superfluous.  The footer will probably be 
off the bottom of the window in any case, which reduces its usefulness 
as a navigation tool.


If we want a tree of links for navigation, maybe the tabs at the top 
could be drop-down menus, or there could be a menu sidebar.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-29 Thread Simon Marlow

On 26/03/2010 19:51, Isaac Dupree wrote:

On 03/25/10 12:36, Simon Marlow wrote:

I'd also be amenable to having block/unblock count nesting levels
instead, I don't think it would be too hard to implement and it wouldn't
require any changes at the library level.


Wasn't there a reason that it didn't nest?

I think it was that operations that block-as-in-takeMVar, for an
unbounded length of time, are always supposed to C.Exception.unblock and
in fact be unblocked within that operation. Otherwise the thread might
never receive its asynchronous exceptions.


That's why we have the notion of interruptible operations: any 
operation that blocks for an unbounded amount of time is treated as 
interruptible and can receive asynchronous exceptions.


I think of block as a way to turn asynchronous exceptions into 
synchronous ones.  So rather than having to worry that an asynchronous 
exception may strike at any point, you only have to worry about them 
being thrown by blocking operations.  If in doubt you should think of 
every library function as potentially interruptible, but that still 
means you usually have enough control over asynchronous exceptions to 
avoid problems.
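
For example (a sketch, using the current block from Control.Exception):

  import Control.Concurrent.MVar
  import Control.Exception (block)

  -- Inside 'block', the only point where an asynchronous exception
  -- can arrive is the interruptible takeMVar; once we hold the value,
  -- putMVar runs with exceptions still blocked, so the MVar is never
  -- left empty by an asynchronous exception.
  incrementMVar :: MVar Int -> IO ()
  incrementMVar m = block $ do
    n <- takeMVar m
    putMVar m (n + 1)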


If things get really hairy, consider using STM instead.  In STM an 
asynchronous exception causes a rollback, so maintaining your invariants 
is trivial - this is arguably one of the main benefits of STM.  There's 
no need for block/unblock within STM transactions.
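
For example (a sketch):

  import Control.Concurrent.STM

  -- If an asynchronous exception arrives mid-transaction the whole
  -- transaction rolls back: the two accounts are never seen in a
  -- partially-updated state, and no masking is needed.
  transfer :: TVar Int -> TVar Int -> Int -> IO ()
  transfer from to n = atomically $ do
    a <- readTVar from
    writeTVar from (a - n)
    b <- readTVar to
    writeTVar to (b + n)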


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell.org re-design

2010-03-29 Thread Simon Marlow

On 29/03/2010 13:20, Christopher Done wrote:

On 29 March 2010 11:19, Simon Marlow <marlo...@gmail.com> wrote:

Is the footer necessary?  I dislike sites that have too many ways to
navigate, and the footer looks superfluous.  The footer will probably be off
the bottom of the window in any case, which reduces its usefulness as a
navigation tool.


Footer navigations are a way to provide a bit of a sitemap without
cluttering the top nav, good for SEO, and to provide the user with an
overview of the hierarchical structure of the site on every page.


IMHO, these aren't compelling reasons.  Note that already on your page 
there is an inconsistency between the tabs at the top and the headings 
at the bottom: I don't know where to look to find the content I want. 
Put the navigation in one place.


A sitemap: sitemaps are for robots.  If you're worried about cluttering 
up the page, use drop-down menus.


SEO: we shouldn't compromise the usability or appearance of the site for 
SEO.  If we do it right, SEO takes care of itself - and it's not like we 
care that much about SEO here, we're not competing with other sites to 
sell you Haskell.



According to the W3C, headings are easily navigable with a
screenreader; as someone using assistive technologies I'd get
something like:

Welcome to Haskell.org
Events
Learning
Headlines
Latest Packages
Quick Links
Download
Community
Wiki
Reports


Making this work right is an important goal, I agree.

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Garbage collecting pointers

2010-03-29 Thread Simon Marlow

On 26/03/2010 20:28, Mads Lindstrøm wrote:

Hi

For some time I have been thinking about an idea, which could limit
Haskell's memory footprint. I don't know if the idea is crazy or clever,
but I would love to hear people's thoughts about it. The short story is,
I propose that the garbage collector should not just reclaim unused
memory, it should also diminish the need for pointers, by replacing
nested data structures with larger chunks of consecutive memory. In
other words, I would diminish the need for pointers for arbitrary
recursive data types, just as replacing linked lists with arrays
eliminates the need for pointers.

I will explain my idea by an example of a data type we all know and
love:

data List a = Cons a (List a) | Nil

Each Cons cell uses two pointers - one for the element and one for the
rest of the list. If we could somehow merge the element and the rest of
the list into consecutive memory, we would be able to eliminate the
pointer to the rest of the list. On 64 bit architectures merging would save
us 8 bytes of wasted memory. If we could merge n elements together we
could save n*8 bytes of memory.
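
The effect is roughly that of an unrolled list - a sketch for
illustration, not the exact scheme proposed:

  -- Two elements share one tail pointer, halving the per-element
  -- pointer overhead relative to the plain Cons representation.
  data ChunkedList a
    = Chunk a a (ChunkedList a)
    | One a
    | Nil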


The trouble with techniques like this is that they break the uniformity 
of the representation, and complexity leaks all over the place.  Various 
similar ideas have been tried in the past, though not with Haskell as 
far as I'm aware: CDR-coding and BiBOP spring to mind.



64 bit pointers are wasteful in another way, as nobody has anywhere near
2^64 bytes of memory. And nobody is going to have that much memory for
decades to come. Thus, we could use the 8 most significant bits for
something besides pointing to memory, without losing any significant
ability to address our memory. This would be similar to how GHC uses
some of the least significant bits for pointer tagging.


Unfortunately you don't know whereabouts in your address space your 
memory is located: the OS could do randomised allocation and give you 
pages all over the place, so in fact you might need all 64 bits.  Yes 
you can start doing OS-specific things, but that always leads to pain 
later (I know, I've been there).


In the Java community there has been experimentation with using 
different representations to avoid the 64-bit pointer overhead: e.g. 
shifting a 32-bit pointer to the left by 2 bits in order to access 16GB 
of memory.  Personally I'm not keen on doing this kind of trick, mainly 
because in GHC it would be a compile-time switch; Java has it easy here 
because they can make the choice at runtime.  Also we already use the 
low 2 bits of pointers in GHC for pointer tagging.
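
For the record, the decode step of that trick looks roughly like this 
(a sketch; the names are invented):

  import Data.Bits (shiftL)
  import Data.Word (Word32, Word64)

  -- A 32-bit reference to a 4-byte-aligned object, shifted left by
  -- 2 bits, addresses 2^34 bytes = 16GB starting from heapBase.
  decode :: Word64 -> Word32 -> Word64
  decode heapBase ref = heapBase + (fromIntegral ref `shiftL` 2)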


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Shootout update

2010-03-30 Thread Simon Marlow
The shootout (sorry, Computer Language Benchmarks Game) recently updated 
to GHC 6.12.1, and many of the results got worse.  Isaac Gouy has added 
the +RTS -qg flag to partially fix it, but that turns off the parallel 
GC completely and we know that in most cases better results can be had 
by leaving it on.  We really need to tune the flags for these benchmarks 
properly.


http://shootout.alioth.debian.org/u64q/haskell.php

It may be that we have to back off to +RTS -N3 in some cases to avoid 
the last-core problem (http://hackage.haskell.org/trac/ghc/ticket/3553), 
at least until 6.12.2.


Any volunteers with a quad-core to take a look at these programs and 
optimise them for 6.12.1?


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GSOC Haskell Project

2010-03-31 Thread Simon Marlow

On 30/03/2010 20:57, Mihai Maruseac wrote:


I'd like to introduce my idea for the Haskell GSOC of this year. In
fact, you already know about it, since I've talked about it here on
the haskell-cafe, on my blog and on reddit (even on #haskell one day).

Basically, what I'm trying to do is a new debugger for Haskell, one
that would be very intuitive for beginners, a graphical one. I've
given some examples and more details on my blog [0], [1], also linked
on reditt and other places.

This is not the application, I'm posting this only to receive some
kind of feedback before writing it. I know that it seems to be a
little too ambitious but I do think that I can divide the work into
sessions and finish what I'll start this summer during the next year
and following.

[0]: http://pgraycode.wordpress.com/2010/03/20/haskell-project-idea/
[1]: http://pgraycode.wordpress.com/2010/03/24/visual-haskell-debugger-part-2/

Thanks for your attention,


My concerns would be:

 - it doesn't look like it would scale very well beyond small
   examples, the graphical representation would very quickly
   get unwieldy, unless you have some heavyweight UI stuff
   to make it navigable.

 - it's too ambitious

 - have you looked around to see what kind of debugging tools
   people are asking for?  The most oft-requested feature is
   stack traces, and there's lots of scope for doing something
   there (but also many corpses littering the battlefield,
   so watch out!)

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Benchmarking and Garbage Collection

2010-03-31 Thread Simon Marlow

On 04/03/2010 22:01, Neil Brown wrote:

Jesper Louis Andersen wrote:

On Thu, Mar 4, 2010 at 8:35 PM, Neil Brown <nc...@kent.ac.uk> wrote:

CML is indeed the library that has the most markedly different
behaviour.
In Haskell, the CML package manages to produce timings like this for
fairly
simple benchmarks:

%GC time 96.3% (96.0% elapsed)

I knew from reading the code that CML's implementation would do
something
like this, although I do wonder if it triggers some pathological case
in the
GC.


That result is peculiar. What are you doing to the library, and what
do you expect happens? Since I have some code invested on top of CML,
I'd like to gain a little insight if possible.


In trying to simplify my code, the added time has moved from GC time to
EXIT time (and increased!). This shift isn't too surprising -- I believe
the time is really spent trying to kill lots of threads. Here's my very
simple benchmark; the main thread repeatedly chooses between receiving
from two threads that are sending to it:


import Control.Concurrent
import Control.Concurrent.CML
import Control.Monad

main :: IO ()
main = do
  let numChoices = 2
  cs <- replicateM numChoices channel
  mapM_ forkIO [ replicateM_ (10 `div` numChoices) $ sync $ transmit c ()
               | c <- cs ]
  replicateM_ 10 $ sync $ choose [ receive c (const True) | c <- cs ]


Compiling with -threaded, and running with +RTS -s, I get:

INIT time 0.00s ( 0.00s elapsed)
MUT time 2.68s ( 3.56s elapsed)
GC time 1.84s ( 1.90s elapsed)
EXIT time 89.30s ( 90.71s elapsed)
Total time 93.82s ( 96.15s elapsed)


FYI, I've now fixed this in my working branch:

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    0.88s  (  0.88s elapsed)
  GC    time    0.85s  (  1.04s elapsed)
  EXIT  time    0.05s  (  0.07s elapsed)
  Total time    1.78s  (  1.97s elapsed)

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shootout update

2010-03-31 Thread Simon Marlow

On 31/03/2010 16:06, Roman Leshchinskiy wrote:

I'm wondering... Since the DPH libraries are shipped with GHC by default, are 
we allowed to use them for the shootout?


I don't see why not.

*evil grin*

Simon


Roman

On 30/03/2010, at 19:25, Simon Marlow wrote:


The shootout (sorry, Computer Language Benchmarks Game) recently updated to GHC 
6.12.1, and many of the results got worse.  Isaac Gouy has added the +RTS -qg 
flag to partially fix it, but that turns off the parallel GC completely and we 
know that in most cases better results can be had by leaving it on.  We really 
need to tune the flags for these benchmarks properly.

http://shootout.alioth.debian.org/u64q/haskell.php

It may be that we have to back off to +RTS -N3 in some cases to avoid the 
last-core problem (http://hackage.haskell.org/trac/ghc/ticket/3553), at least 
until 6.12.2.

Any volunteers with a quad-core to take a look at these programs and optimise 
them for 6.12.1?

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GSOC Haskell Project

2010-04-01 Thread Simon Marlow

On 01/04/10 21:41, Max Bolingbroke wrote:

On 1 April 2010 18:58, Thomas Schilling <nomin...@googlemail.com> wrote:


On 1 Apr 2010, at 18:39, Mihai Maruseac wrote:


Hmm, interesting. If I intend to give it a try, will there be a mentor
for a GSOC project? Or should I start doing it alone?


I'm sure Simon Marlow could mentor you except maybe if there are too many 
GHC-related GSoC projects.  I could mentor this as well.  Or maybe Max.  I 
don't think finding a mentor will be a problem.


I'm not the best person to mentor this project - I did bring it up in
the hope that someone would find it tempting as a GSoC project, though
:-). I think it's eminently practical to get this done in a summer (or
less), and it would ameliorate one of Haskell's more embarrassing
problems.


I'd be happy to mentor this project.

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Immix GC as a Soc proposal

2010-04-02 Thread Simon Marlow

On 01/04/10 22:19, Thomas Schilling wrote:

In my opinion the project would be worthwhile even if it's not in the
Top 8.  Mentors vote on the accepted projects based both on the
priority of the project and the applying student, so it's probably not
a bad idea to apply for other projects as well so you don't put all
your stakes on just a single horse.

Looking at your current proposal, however, I think that your timeline
is, well, impossible.  You seem to be proposing to build a new allocator
and garbage collector from scratch.  GHC's allocator is already quite
similar to Immix, so you don't really have to re-implement much.  The
main differences (off the top of my head) are the different region sizes,
the marking accuracy (Immix marks 128-byte lines, GHC is word-accurate),
and eager compaction.


Immix actually marks twice: once for the object, and once for the line. 
 I propose that in GHC we just mark once in our bitmap.  Then the sweep 
phase looks for empty lines (we can experiment with different sizes), 
and constructs a free list from them.  We have to choose a 
representation for the free list, and the allocator in the GC will have 
to be taught how to allocate from the free list.  This is all pretty 
straightforward.  The tricky bit is how to deal with objects that are 
larger than a line: we'll have to use a different allocator for them, 
and we'll need to identify them quickly, perhaps with a different object 
type.
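
In pseudo-Haskell the sweep is something like this (a toy sketch; the 
real thing would live in the RTS):

  -- Group the mark bits into lines and collect the indices of
  -- fully-unmarked lines; those lines form the free list.
  sweepFreeLines :: Int -> [Bool] -> [Int]
  sweepFreeLines marksPerLine marks =
    [ i | (i, line) <- zip [0 ..] (chunk marks), not (or line) ]
    where
      chunk [] = []
      chunk xs = let (a, b) = splitAt marksPerLine xs in a : chunk b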


The other part is opportunistic defragmentation, which is a doddle in 
GHC.  We just identify blocks (in GHC's terminology) that are too 
fragmented, and flag them with the BF_COPY bit before GC, and any live 
objects in that block will be copied rather than marked.  The existing 
mark/sweep collector in GHC already does this.  We could experiment with 
different policies for deciding which blocks to defrag - one idea is 
just to defrag the oldest 10% of blocks during each GC, so you get to 
them all eventually.


So I think this is all quite doable for a keen SoC student.  Marco: I 
suggest reworking your timeline based on the above.  The mutator's 
allocator doesn't need to change, but the GC's allocator will.


In case it isn't clear, I propose we keep the existing generational 
framework and use Immix only for the old generation.



Therefore I'd suggest moving in small steps: change some parameters
(e.g., region size), fix the resulting bugs, and benchmark each change.
Then, maybe, implement eager compaction on top of the existing
system.  I believe this will keep you busy enough.  If in the end GC
is 5% faster, that would be a very good outcome!

I also don't know how much complexity the parallel GC and other
synchronisation stuff will introduce.  Maybe Simon (CC'd) can comment
on that.


To do this in parallel you'd need to change the bitmap to be a byte map, 
and then it's pretty straightforward I think.  Objects are at least 2 
words, so the byte-map overhead is 12.5% on a 32-bit machine or 6.25% on 
a 64-bit machine.  There might be contention for the free list, so we 
might have to devise a scheme to avoid that.


Cheers,
Simon




/ Thomas

On 1 April 2010 22:00, Marco Túlio Gontijo e Silva <mar...@debian.org> wrote:

Hi.

I've written a Google Summer of Code proposal for implementing the Immix
Garbage Collector in GHC[0].  It's not on dons' list of the 8 most important
projects[1], but I only saw that list after the proposal was done.  I'd like to
hear comments about it, especially about its relevance, since it's not on the
list of 8.

0: http://www2.dcc.ufmg.br/laboratorios/llp/wiki/doku.php?do=show&id=marco_soc
1: 
http://donsbot.wordpress.com/2010/04/01/the-8-most-important-haskell-org-gsoc-projects/

I'm planning to write another proposal, maybe about LLVM Performance Study,
LLVM Cross Compiler, A Package Versioning Policy Checker or “cabal
test”, if mentors think they're more relevant than my current proposal.
Please let me know if this is the case.

Greetings.
--
marcot
http://wiki.debian.org/MarcoSilva
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe







___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Simon Marlow

On 25/03/2010 23:16, Bas van Dijk wrote:

On Thu, Mar 25, 2010 at 11:23 PM, Simon Marlow <marlo...@gmail.com> wrote:

So I'm all for deprecating 'block' in favor of 'mask'. However what do
we call 'unblock'? 'unmask' maybe? However when we have:

mask $ mask $ unmask x

and these operations have the counting nesting levels semantics,
asynchronous exception will not be unmasked in 'x'. However I don't
currently know of a nicer alternative.


But that's the semantics you wanted, isn't it?  Am I missing something?


Yes I like the nesting semantics that Twan proposed.

But with regard to naming, I think the name 'unmask' is a bit
misleading because it doesn't unmask asynchronous exceptions. What it
does is remove a layer of masking so to speak. I think the names of
the functions should reflect the nesting or stacking behavior. Maybe
something like:

addMaskingLayer :: IO a -> IO a
removeMaskingLayer :: IO a -> IO a
nrOfMaskingLayers :: IO Int

However I do find those a bit long and ugly...


I've been thinking some more about this, and I have a new proposal.

I came to the conclusion that counting nesting layers doesn't solve the 
problem: the wormhole still exists in the form of nested unmasks.  That 
is, a library function could always escape out of a masked context by 
writing


  unmask $ unmask $ unmask $ ...

enough times.

The functions blockedApply and blockedApply2 proposed by Bas van Dijk 
earlier solve this problem:


blockedApply :: IO a -> (IO a -> IO b) -> IO b
blockedApply a f = do
  b <- blocked
  if b
    then f a
    else block $ f $ unblock a

blockedApply2 :: (c -> IO a) -> ((c -> IO a) -> IO b) -> IO b
blockedApply2 g f = do
  b <- blocked
  if b
    then f g
    else block $ f $ unblock . g

but they are needlessly complicated, in my opinion.  This offers the 
same functionality:


mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
  b <- blocked
  if b
    then io id
    else block $ io unblock

to be used like this:

a `finally` b =
  mask $ \restore -> do
    r <- restore a `onException` b
    b
    return r

So the property we want is that if I call a library function

  mask $ \_ -> call_library_function

then there's no way that the library function can unmask exceptions.  If 
all they have access to is 'mask', then that's true.


It's possible to mis-use the API, e.g.

  getUnmask = mask return

but this is also possible using blockedApply, it's just a bit harder:

  getUnmask = do
    m <- newEmptyMVar
    f <- blockedApply (join $ takeMVar m) return
    return (\io -> putMVar m io >> f)

To prevent this kind of shenanigans would need a parametricity trick 
like the ST monad.  I don't think it's a big problem that you can do 
this, as long as (a) we can explain why it's a bad idea in the docs, and 
(b) we can still give a semantics to it, which we can.


So in summary, my proposal for the API is:

  mask  :: ((IO a -> IO a) -> IO b) -> IO b
  -- as above

  mask_ :: IO a -> IO a
  mask_ io = mask $ \_ -> io

and additionally:

  nonInterruptibleMask  :: ((IO a -> IO a) -> IO b) -> IO b
  nonInterruptibleMask_ :: IO a -> IO a

which is just like mask/mask_, except that blocking operations (e.g. 
takeMVar) are not interruptible.  Nesting mask inside 
nonInterruptibleMask has no effect.  The new version of 'blocked' would be:


   data MaskingState = Unmasked
                     | MaskedInterruptible
                     | MaskedNonInterruptible

   getMaskingState :: IO MaskingState

Comments?  I have a working implementation, just cleaning it up to make 
a patch.


Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Simon Marlow

On 07/04/2010 16:20, Sittampalam, Ganesh wrote:

Simon Marlow wrote:


I came to the conclusion that counting nesting layers doesn't solve
the problem: the wormhole still exists in the form of nested unmasks.
That is, a library function could always escape out of a masked
context by writing

unmask $ unmask $ unmask $ ...

enough times.

[...]

mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
  b <- blocked
  if b
    then io id
    else block $ io unblock

to be used like this:

a `finally` b =
  mask $ \restore -> do
    r <- restore a `onException` b
    b
    return r

So the property we want is that if I call a library function

mask $ \_ -> call_library_function

then there's no way that the library function can unmask exceptions.
If all they have access to is 'mask', then that's true.

[...]

It's possible to mis-use the API, e.g.

getUnmask = mask return


Given that both the simple mask/unmask and your alternate proposal
have backdoors, is the extra complexity really worth it?


The answer is yes, for a couple of reasons.

 1. this version really is safer than mask/unmask that count
nesting levels.  If the caller is playing by the rules,
then a library function can't unmask exceptions.  The
responsibility not to screw up is in the hands of the
caller, not the callee: that's an improvement.

 2. in this version more of the code is in Haskell, and
the primitives and RTS implementation are simpler.  So
actually I consider this less complex than counting
nesting levels.

I did implement the nesting levels version first, and when adding 
non-interruptibility to the mix things got quite hairy.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Simon Marlow

On 07/04/10 21:23, Bas van Dijk wrote:

On Wed, Apr 7, 2010 at 5:12 PM, Simon Marlow <marlo...@gmail.com> wrote:

Comments?


I really like this design.

One question, are you planning to write the MVar utility functions
using 'mask' or using 'nonInterruptibleMask'? As in:


withMVar :: MVar a -> (a -> IO b) -> IO b
withMVar m f = whichMask? $ \restore -> do
  a <- takeMVar m
  b <- restore (f a) `onException` putMVar m a
  putMVar m a
  return b


Definitely the ordinary interruptible mask.  The intention is that 
the new nonInterruptibleMask is only used in exceptional circumstances, 
where asynchronous exceptions emerging from blocking 
operations would be impossible to deal with.  The more unwieldy name was 
chosen deliberately for this reason.
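
That is (a sketch, written against the proposed API from earlier in 
this thread):

  withMVar :: MVar a -> (a -> IO b) -> IO b
  withMVar m f = mask $ \restore -> do
    a <- takeMVar m
    b <- restore (f a) `onException` putMVar m a
    putMVar m a
    return b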


The danger with nonInterruptibleMask is that it is all too easy to write 
a program that will be unresponsive to Ctrl-C, for example.  It should 
be used with great care - for example when there is reason to believe 
that any blocking operations that would otherwise be interruptible will 
only block for short bounded periods.


In the case of withMVar, if the caller is concerned about the 
interruptibility then they can call it within nonInterruptibleMask, 
which overrides the interruptible mask in withMVar.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Simon Marlow

On 07/04/2010 18:54, Isaac Dupree wrote:

On 04/07/10 11:12, Simon Marlow wrote:

It's possible to mis-use the API, e.g.

getUnmask = mask return


...incidentally,
unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)


That doesn't work, as in it can't be used to unmask exceptions when they 
are masked.  The 'restore' you get just restores the state to its 
current, i.e. masked, state.



mask :: ((IO a -> IO a) -> IO b) -> IO b


It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
so that you can use 'restore' on two different pieces of IO if you need
to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
doesn't cure the loophole. But I think it's still essential.)


Sigh, yes I suppose that's true, but I've never encountered a case where 
I needed to call unmask more than once, let alone at different types, 
within the scope of a mask.  Anyone else?


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Simon Marlow

On 09/04/2010 09:40, Bertram Felgenhauer wrote:

Simon Marlow wrote:

but they are needlessly complicated, in my opinion.  This offers the
same functionality:

mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
  b <- blocked
  if b
    then io id
    else block $ io unblock


How does forkIO fit into the picture? That's one point where reasonable
code may want to unblock all exceptions unconditionally - for example to
allow the thread to be killed later.


Sure, and it works exactly as before in that the new thread inherits the 
masking state of its parent thread.  To unmask exceptions in the child 
thread you need to use the restore operator passed to the argument of mask.


This does mean that if you fork a thread inside mask and don't pass it 
the restore operation, then it has no way to ever unmask exceptions.  At 
worst, this means you have to pass a restore value around where you 
didn't previously.



 timeout t io = block $ do
     result <- newEmptyMVar
     tid <- forkIO $ unblock (io >>= putMVar result)
     threadDelay t `onException` killThread tid
     killThread tid
     tryTakeMVar result


This would be written

  timeout t io = mask $ \restore -> do
      result <- newEmptyMVar
      tid <- forkIO $ restore (io >>= putMVar result)
      threadDelay t `onException` killThread tid
      killThread tid
      tryTakeMVar result

though the version of timeout in System.Timeout is better for various 
reasons.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

