Re: simultaneous ghc versions

2015-08-02 Thread Josef Svenningsson
On Fri, Jul 31, 2015 at 10:31 PM Erik de Castro Lopo mle...@mega-nerd.com
wrote:

 I maintain multiple versions of GHC on all the machines I use regularly
 for Haskell development. I have:

 * ghc-7.6.3 installed under /usr/lib/ghc-7.6/
 * ghc-7.8.4 installed under /usr/lib/ghc-7.8/
 * ghc-7.10.2 installed under /usr/lib/ghc-7.10/

 To switch between versions all I need to do is modify $PATH
 to remove say /usr/lib/ghc-7.6/bin and add /usr/lib/ghc-7.10/bin.
 This lets me have two terminal window side by side with different
 versions of GHC.

 I actually have a shell function to do this PATH mangling. I can
 document this more fully if anyone is interested.


I have a similar setup where I install different versions of GHC in
different directories. But I use GNU stow[1] to create and remove symlinks
in /usr/local/* so that I can use one version of GHC as the default.

[1] https://www.gnu.org/software/stow/

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Wadler space leak

2010-11-09 Thread Josef Svenningsson
Let me clarify a bit exactly how Gustavsson and Sands (I'll refer to them as
GS) handled the issue of the Wadler space leak. It's true that they adopted
an approach similar to Sparud in that they extended their core calculus with
a new language construct which could solve the problem. This is in contrast to
Wadler who changed the garbage collector instead, something that GS said
would lead to bad behavior in their calculus.
BUT, GS did not adopt exactly the construct that Sparud suggested. Sparud's
suggestion was to add an updatePat primitive to the language. This was
inspired by how the G-machine works: it had update instructions which were
typically executed after a value was computed. It's a rather bad fit for the
STG-machine which pushes update markers on the stack whenever it starts to
evaluate a thunk. Updates are performed whenever there is an update marker
on the stack when the machine has computed something to WHNF.
The language construct that GS chose was to have pattern bindings as
primitive in the language. So the code snippet below (taken from Jörgen's
thesis) would be a valid core program. It would not be desugared into case
expressions.
~~~
let (ps,qs) = split ys
in (y:ps,qs)
~~~
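For comparison, a hedged sketch of roughly what the same binding becomes when it
is desugared into case expressions in the usual way (the selector thunks are
illustrative). Both selectors hang on to the whole pair returned by 'split ys'
until each has been forced, which is exactly the leak:
~~~
let t  = split ys
    ps = case t of (a, _) -> a
    qs = case t of (_, b) -> b
in (y:ps, qs)
~~~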
The semantics of pattern bindings involves a new kind of update marker
which, in the example above, will update both ps and qs, whenever the 'split
ys' is computed to WHNF. This neatly solves the space leak problem. And it
is a much closer fit to the STG-machine in that it uses update markers on the
stack to coordinate the graph reduction.

I think the solution GS chose should work much better for GHC than Sparud's
suggestion. But it would no doubt be an invasive change to GHC as Core would
have to be changed to support pattern bindings.

Cheers,

Josef

On Tue, Nov 9, 2010 at 8:58 AM, Duncan Coutts
duncan.cou...@googlemail.com wrote:

 On 8 November 2010 13:28, Simon Marlow marlo...@gmail.com wrote:
 
  There's another approach in Jan Sparud's paper here:
 
  http://portal.acm.org/citation.cfm?id=165196
 
  although it's not clear that this interacts very well with inlining
 either,
  and it has a suspicious-looking side-effecting operation.  It also looks
  like it creates a circular reference between the thunk and the selectors,
  which might hinder optimisations, and would probably also make things
 slower
  (by adding extra free variables to the thunk).

This proposal is mentioned favourably by Jörgen Gustavsson and David Sands
 in [1] (see section 6, case study 6). They mention that there is a
 formalisation in Gustavsson's thesis [2]. That may say something about
 inlining, since that's just the kind of transformation they'd want to
 show is a space improvement.

 [1]: Possibilities and Limitations of Call-by-Need Space Improvement (2001)
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.4097

 [2]: Space-Safe Transformations and Usage Analysis for Call-by-Need
 Languages (2001)
  (which I cannot immediately find online)

 Duncan
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: I accidentally the Prelude

2010-03-02 Thread Josef Svenningsson
On Mon, Mar 1, 2010 at 11:54 PM, Jeremy Shaw jer...@n-heptane.com wrote:
 is there, by chance, a file named Prelude.hs in the working directory? (the
 directory you are in when you type ghci?)
 - jeremy

Ah. Thanks! That was indeed the problem.

Though I think GHCi's response could be a little more transparent.

Josef

 On Mon, Mar 1, 2010 at 11:43 AM, Josef Svenningsson
 josef.svennings...@gmail.com wrote:

 Hi,

 It seems I've been able to mess up my ghc installation pretty badly.
 Here is what happens if I just try to invoke ghci from the prompt:

 $ ghci
 GHCi, version 6.10.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer ... linking ... done.
 Loading package base ... linking ... done.
 command line: module `Prelude' is not loaded
 $

 I have no idea what I did to end up in this situation. What I've been
 doing lately is reinstalling some packages. I also have another ghc
 installed but it's at a completely different place in the file system.
 The only thing I can think of is if cabal managed to somehow confuse
 the two ghcs and wrote some data in the wrong place.

 What I really would like to know is if there is a simple way to fix
 this without completely reinstalling ghc with all the libraries I have
 installed. Has anyone else experienced anything similar?

 If this is a potential bug I'd be happy to provide any data that might
 help track it down.

 Cheers,

 Josef
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: I accidentally the Prelude

2010-03-02 Thread Josef Svenningsson
On Tue, Mar 2, 2010 at 12:21 PM, Simon Marlow marlo...@gmail.com wrote:
 On 02/03/2010 08:59, Josef Svenningsson wrote:

 On Mon, Mar 1, 2010 at 11:54 PM, Jeremy Shaw jer...@n-heptane.com wrote:

 is there, by chance, a file named Prelude.hs in the working directory?
 (the
 directory you are in when you type ghci?)
 - jeremy

 Ah. Thanks! That was indeed the problem.

 Though I think GHCi's response could be a little more transparent.

 Sure, how about this:

 $ touch Prelude.hs
 $ ghci
 GHCi, version 6.12.1: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Loading package ffi-1.0 ... linking ... done.
 Prelude

 i.e. with 6.12.1 it just works.

Brilliant! Thanks.

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


I accidentally the Prelude

2010-03-01 Thread Josef Svenningsson
Hi,

It seems I've been able to mess up my ghc installation pretty badly.
Here is what happens if I just try to invoke ghci from the prompt:

$ ghci
GHCi, version 6.10.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
command line: module `Prelude' is not loaded
$

I have no idea what I did to end up in this situation. What I've been
doing lately is reinstalling some packages. I also have another ghc
installed but it's at a completely different place in the file system.
The only thing I can think of is if cabal managed to somehow confuse
the two ghcs and wrote some data in the wrong place.

What I really would like to know is if there is a simple way to fix
this without completely reinstalling ghc with all the libraries I have
installed. Has anyone else experienced anything similar?

If this is a potential bug I'd be happy to provide any data that might
help track it down.

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ST monad and monad transformers

2009-02-02 Thread Josef Svenningsson
On Mon, Feb 2, 2009 at 8:50 PM, Reid Barton rwbar...@math.harvard.edu wrote:
 On Mon, Feb 02, 2009 at 06:03:15PM +0100, Josef Svenningsson wrote:
 Hi Tyson,

 I also needed something like this a while ago so I knocked up a really
 simple module and put it on hackage:
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/STMonadTrans

 Warning!  The STMonadTrans package uses State# nonlinearly, and as a
 result, can violate referential transparency:

Indeed, thanks for pointing this out. I really should have a warning
sign on the package saying that it only works for certain monads.

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ST monad and monad transformers

2009-02-02 Thread Josef Svenningsson
Hi Tyson,

I also needed something like this a while ago so I knocked up a really
simple module and put it on hackage:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/STMonadTrans

If you have any suggestions for improvement they are most welcome.
Patches even more so.

Josef

2009/2/2 Tyson Whitehead twhiteh...@gmail.com:
 I have a situation in which I believe I need a parameterizable version of the
 strict ST monad.  My computation type is StateT s' (STT s (ErrorT e m)) a
 (i.e., fails or succeeds and has an internal state involving a state thread).

 The STT type above is a version of ST like the ReaderT, StateT, etc. types.

 newtype STT s m a = STT ( State# s -> m (STTBox s a) )
 data STTBox s a = STTBox {-#UNPACK#-} !(State# s) {-#UNPACK#-} !a

 (I'm guessing on the UNPACK pragmas here) with

 runSTT :: (Monad m) => (forall s. STT s m a) -> m a
 runSTT m = case m of STT m' -> do STTBox _ x <- m' realWorld#
                                   return x

 (writing this as runSTT (STT m') = ... doesn't typecheck with ghc 6.8.2)

 instance Monad m => Monad (STT s m) where
    return x = STT $ \s -> return $ STTBox s x
    (STT m) >>= k = STT $ \s -> do STTBox s' x <- m s
                                   case k x of STT k' -> k' s'

 plus all the assorted instances for Functor, MonadPlus, MonadFix, MonadTrans,
 MonadReader, MonadState, etc.  For example,

 instance MonadWriter w m => MonadWriter w (STT s m) where
    tell = lift . tell
    listen (STT m) = STT $ \s -> do (STTBox s' x,w) <- listen $ m s
                                    return $ STTBox s' (x,w)
    pass   (STT m) = STT $ \s -> pass $ do STTBox s' (x,f) <- m s
                                           return (STTBox s' x,f)

 I was looking for any comments, wondering if there is a reason for this not
 existing in the library already, and what I should do in terms of pragmas and
 such for speed?  I see the GHC-ST file has a mix of INLINE and NOINLINE.

 http://www.haskell.org/ghc/dist/current/docs/libraries/base/src/GHC-ST.html

 In particular, return, >>=, >>, and runST are marked INLINE, but there is a
 regrettably delicate comment that goes with the runST method.  Also, what
 about the Functor, MonadPlus, MonadFix, MonadTrans, MonadReader, etc. methods?

 Thanks! -Tyson

 PS:  I would be happy to provide the whole works to be added to the library if
 it is something that should be there.

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: strictness of interpreted haskell implementations

2008-04-25 Thread Josef Svenningsson
On Fri, Apr 25, 2008 at 9:17 PM, Duncan Coutts
[EMAIL PROTECTED] wrote:

  On Fri, 2008-04-25 at 09:08 -0700, Don Stewart wrote:
   Geraint.Jones:
Are there well-known differences in the implementations of Haskell in
ghci and hugs?  I've got some moderately intricate code (simulations
of pipelined processors) that behave differently - apparently because
ghci Haskell is stricter than hugs Haskell, and I cannot find any
obviously relevant claims about strictness in the documentation.

  I think they should give the same answer. It sounds like a bug in one
  implementation or the other.

I suspect this might be a library thing. If ghc and hugs use
different versions of the library and some function's strictness
changed between them, that might account for the discrepancy.
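
As a hedged illustration of the kind of difference I mean (not a claim about
any particular library function in ghc or hugs), two variants of the same fold
that differ only in accumulator strictness can already be told apart:

\begin{code}
import Data.List (foldl')

-- Lazy in the accumulator: the undefined element is never forced.
lazyLast :: [Int] -> Int
lazyLast = foldl (\_ x -> x) 0

-- Strict in the accumulator: forcing it at each step hits the undefined.
strictLast :: [Int] -> Int
strictLast = foldl' (\_ x -> x) 0

-- lazyLast   [undefined, 1]  evaluates to 1
-- strictLast [undefined, 1]  is undefined
\end{code}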

   Hugs does no optimisations, while GHC does a truckload, including
   strictness analysis. Some of these optimisations prevent space leaks.

  Though none should change the static semantics.

That was my initial reaction as well, but then I recalled that some of
ghc's optimizations can actually change strictness behaviour. The
foldr/build transformation, for instance, can change the strictness of a
function in a way that you can observe. So we can't rule out that ghc is
doing something it shouldn't be doing.

  Post the code. Even if you don't have time to track down the difference,
  someone might.

Yep, without the code we're just fumbling in the dark.

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: STM and fairness

2008-03-05 Thread Josef Svenningsson
Tim, Simon,

Thanks for your detailed descriptions. Much of my understanding was
confirmed. I'll see if I can send you a patch with my suggested fix as
soon as my teaching is over.

Thanks,

Josef

On Mon, Mar 3, 2008 at 2:03 PM, Tim Harris (RESEARCH)
[EMAIL PROTECTED] wrote:
 Hi,

  At the moment we don't make any particular effort to make threads runnable 
 in some specific order when they are unblocked.  The current implementation 
 is simply what was easiest to write.

  If I remember rightly threads blocked on TVars will initially be 
 half-woken, putting them on the same run-queue as their waker and leaving 
 the STM data structures intact.  When scheduled they will check whether or 
 not the TVars' contents differ from the values that caused them to block: if 
 the values are unchanged then a thread can block again without needing to 
 build up wait queue structures.  In Simon's example of 100 threads blocked on 
 a single-cell TVar buffer, this would mean 99 of them are validated and block 
 again without needing to re-execute the rest of the transaction containing 
 the TVar access.  This will probably happen within a single OS thread so 
 these are lightweight thread switches within the GHC run time rather than 99 
 OS thread switches.

  At some point it might be nice to look at using run-time feedback about how 
 individual TVars are used.  I suspect that, looking at it dynamically, there 
 are a few simple policies that would apply to most TVars (wake-all / 
 wake-one) with the caveat that anything other than wake-all must eventually 
 fall back to wake-all to preserve the intended semantics for retry.

  NB -- when thinking about a shared buffer built over TVars there's also the 
 possibility that a non-blocked thread will consume the resource ahead of a 
 blocked thread that has been woken.  As with programming with 
 locks/condition-variables, avoiding this case would need an explicit queue of 
 consumers to be maintained by the application (and symmetrically for 
 producers).

  In any case, running threads in something approximating the same order they 
 blocked sounds sensible to me.  The lists of threads blocked on a TVar are 
 doubly-linked (right?) so wouldn't need to be explicitly reversed.

  Tim

  -Original Message-
  From: Simon Peyton-Jones
  Sent: 29 February 2008 20:06
  To: Josef Svenningsson; glasgow-haskell-users@haskell.org
  Cc: Tim Harris (RESEARCH)
  Subject: RE: STM and fairness

  | I'd like to know a bit about the STM implementation in GHC,
  | specifically about how it tries to achieve fairness. I've been reading
  | Composable Memory Transactions but it does not contain that much
  | detail on this specific matter. What I want to know boils down to
  | this: what order are processes run which have been woken up from a
  | call to retry?

  Tim is the one who implemented this stuff, so I'm ccing him.

  If threads queue up on a single MVar, it's obvious how to achieve fairness 
  of a sort.  Furthermore, if 100 threads are blocked on one MVar, the
 scheduler can wake up exactly one when the MVar is filled.  With STM it's 
 much less obvious.

  First, a thread may block on a whole bunch of TVars; if any of them are 
 changed, the thread should re-run.  So there is no single list of threads to 
 reverse or not reverse.

  Second, if 100 threads are blocked on a TVar, t, waking up just one of them 
 may not suffice -- it may read some more TVars and then retry again, 
 re-blocking itself on t (plus some more). The only simple thing to do is to 
 wake all of them up.  In common situations (e.g. a buffer), we may wake up 
 all 100 threads, only for 99 of them to lose the race and block again.

  This arises from the fact that transactions do a wonderful thing, by letting 
 you perform multiple operations atomically -- but that makes it harder to 
 optimize.


  All that said, you may well be right that one could do a better job of 
 scheduling.  For example, even though there may be lots of threads blocked on 
 a TVar, and all must be made runnable, they could perhaps be run in the same 
 order that they blocked, so the longest-blocked got to run first.   I don't 
 think we try to do that, but Tim would know.

  By all means suggest a patch!

  Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


STM and fairness

2008-02-29 Thread Josef Svenningsson
Hi,

I'd like to know a bit about the STM implementation in GHC,
specifically about how it tries to achieve fairness. I've been reading
Composable Memory Transactions but it does not contain that much
detail on this specific matter. What I want to know boils down to
this: what order are processes run which have been woken up from a
call to retry? When programming with condition variables the standard
behaviour is that the process which has waited the longest is the
first one to get to run. But that doesn't seem to be the behaviour
here. Consider the following program:
\begin{code}
module STMFair where

import Control.Concurrent
import Control.Concurrent.STM

test n = do v <- newTVarIO 0
            mapM_ (\n -> forkIO (process n v) >>
                         threadDelay delay) [1..n]
            atomically (writeTVar v 1)
            threadDelay delay

delay = 50

process id var = do putStrLn ("Process " ++ show id ++ " started")
                    atomically $ do
                      v <- readTVar var
                      if v == 0
                        then retry
                        else return ()
                    putStrLn ("Process " ++ show id ++ " finished")
\end{code}

When I run 'test 2' I expect it to print:
Process 1 started
Process 2 started
Process 1 finished
Process 2 finished

This would correspond to the oldest process being executed first. But
that is not what happens; instead I get this (ghci 6.8.2, Ubuntu
Linux):
Process 1 started
Process 2 started
Process 2 finished
Process 1 finished

This is certainly not the behaviour I would want. I discovered this
behaviour when implementing the dining philosophers using STM and
there one of the philosophers gets starved. Except that he's not
quite starved. When I run the simulation long enough he will
eventually be able to eat but then for a long time there will be some
other philosopher that is starved. I find this behaviour very
mysterious and it would be nice to have some light shed on it.

Apart from this mysterious behaviour it seems quite easy to improve
the fairness of the implementation. From my examples above it seems
that the wait queues for a transactional variable do contain the
processes in the order they call retry (try running 'test n' for some
large n). It just seems that they are given to the scheduler in the
wrong order, so all that needs to be done is to reverse the list. Am I
right?

Thanks for reading,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: STM and fairness

2008-02-29 Thread Josef Svenningsson
On Fri, Feb 29, 2008 at 4:27 PM, Roberto Zunino [EMAIL PROTECTED] wrote:
 Josef Svenningsson wrote:
   What I want to know boils down to
   this: what order are processes run which have been woken up from a
   call to retry?

  IIUC, the order of wake up is irrelevant, since *all* the threads will
  re-run the transaction in parallel. So, even if thread 1 is the first to
  wake up, thread 2 might beat it in the race, and complete its
  transaction first.

That's not quite right since there is no true parallelism here. I'm
running on a single core (which I suppose I could have mentioned) and
so it is up to the scheduler to make sure that processes get a fair
chance at doing their business, i.e. achieving fairness. The point I
was trying to make is that the scheduler isn't doing a very good job
in this case.

  I suggest you put some random delay in your fairness tests, maybe using
  unsafeIOtoSTM, so that you can improve starvation ;-)

I'd rather fix the scheduler.

  Also, try running a very slow (much-delayed) transaction againts several
  fast ones. I expect the slow one will never reach completion.

Indeed. This is a well known problem with STM but afaict orthogonal to
the problem I'm talking about.

  AFAIK, achieving fairness in STM can be quite hard (not unlike other
  mainstream approaches to concurrency, sadly).

Yes. Still, in the particular situation I showed I think we can do a
better job than what is currently being done.

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Prevent optimization from tampering with unsafePerformIO

2007-10-17 Thread Josef Svenningsson
On 10/17/07, Bernd Brassel [EMAIL PROTECTED] wrote:
  why do you want to do this unsafely,
  instead of just using 'length'?  unsafePerformIO is a very slow
  function, remember)

 The big picture is that we generate programs like this example in order
 to compile the functional logic language Curry down to Haskell.
 Basically, there are two ways to do this:

 a) write an interpreter for Curry in Haskell, e.g., by employing
 non-determinism monads
 b) extend Haskell by employing side effects

 Alternative a) is not really an issue for us. Doing it this way, all
 Curry programs will suffer in performance in such a magnitude that - in
 comparison - unsafePerformIO is super-fast. In addition, we already have
 interpreters for Curry which I do not assume a Haskell interpreter to
 outperform without 5 years worth of tuning.

 Alternative b) is the choice, because doing it this way, all
 deterministic, i.e., purely functional parts of Curry programs would
 take advantage of Haskell being a functional language. If then the logic
 parts are not so fast, e.g., because unsafePerformIO is slow, this does
 not matter so much. In comparison to other approaches (like Alternative
 a) and many other implementations of Curry) our slogan is: make
 functions fast and logic possible. Fast functions will outweigh the
 slowdown for logics. But to get functions fast employing optimization
 sounds like a good idea to me. But even without any optimization, our
 system can compare well to most other implementations for many
 applications.

May I suggest a third route that has the advantages of both your
approaches? The downside is of course that it takes a bit of work. My
suggestion is to do an effect analysis of your Curry programs to
identify the purely functional parts and compile them directly to pure
Haskell. The rest of the program can run in a monad. This approach
should be more robust than relying on unsafePerformIO. It is also very
much in the spirit of your slogan.
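
To make the suggestion concrete, here is a minimal sketch (the function names
and the choice of the list monad for nondeterminism are mine, purely for
illustration):

\begin{code}
-- Deterministic Curry code compiles to ordinary Haskell ...
double :: Int -> Int
double x = x + x

-- ... while possibly nondeterministic code runs in a nondeterminism monad.
coin :: [Int]
coin = [0, 1]

-- Pure parts are lifted into the monad only where the two worlds meet.
example :: [Int]
example = fmap double coin   -- [0, 2]
\end{code}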

Just my 2 cents,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: More speed please!

2007-05-11 Thread Josef Svenningsson

On 5/11/07, Simon Peyton-Jones [EMAIL PROTECTED] wrote:

| I'm replying to a rather old thread here, about unboxing in functions. Duncan
| had a continuation monad which passed around some data type that would be nice
| to unbox. You discussed strictness annotations in function types as a 
potential
| solution. I have a different tack on the problem which seems potentially
| useful. I've experimented with doing local defunctionalization on the module.

Interesting suggestion, Josef.  In general, local defunctionalisation would be
an interesting transformation to try. I'm not sure how well it would scale: the
larger the scope, the more distinct functions there are and the bigger the
dispatch table becomes.


Indeed the dispatch table could grow big, but I'm not sure it would be
a scalability problem.  Note that all the code that goes into these
dispatch tables (I call them apply functions) is ripped out from
other places in the program. So there is really no new code being
added; it's only shuffled around.

On the other hand, I don't know how GHC deals with large case
expressions. If they turn out to be costly, be it in compilation
time or in the runtime of the program, then there might of course
be a problem here as well.


Also your transformation is semantically transparent (no effect) whereas Duncan 
is prepared to add ! annotations that really make things stricter, just as ! 
annotations in data type decls do today.  So presumably he will get further 
than you will, because he is making more assumptions.


Indeed. But I think the main advantage of Duncan's approach over
local defunctionalization is its general applicability. Local
defunctionalization only kicks in under very special circumstances and
even then isn't always a net win (or so my intuition tells me). The
bang annotations otoh can be inserted wherever you like and would
presumably work transparently across module borders.



Meanwhile, I've thought a bit more about Duncan's idea.  One attractive aspect 
is that you can regard it as a direct extension of Haskell's existing mechanism 
of ! on data types, making the {-# UNPACK #-} pragma look inside function types 
as well as looking inside data types.  I like that. It makes it sounds less ad 
hoc than I previously thought.  I'll open a Trac ticket for this thread, 
http://hackage.haskell.org/trac/ghc/ticket/1349


Sounds good! It would be a cool thing to have. I'm looking forward to
seeing it implemented in GHC :-)

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: More speed please!

2007-05-01 Thread Josef Svenningsson

I'm replying to a rather old thread here, about unboxing in functions. Duncan
had a continuation monad which passed around some data type that would be nice
to unbox. You discussed strictness annotations in function types as a potential
solution. I have a different tack on the problem which seems potentially
useful. I've experimented with doing local defunctionalization on the module.
This is a long mail as I will try to explain in some detail what it is that I
have done. Please be patient.

Normal defunctionalization is about replacing the primitive function type
a -> b with an algebraic data type which I'll call Fun a b. Not all
functions will be eliminated as we will see but the program will be first
order after the transformation. The core of the transformation is that every
lambda in the program gives rise to a new constructor in the Fun data type and
whenever we apply a function we instead call a newly created apply function
with the following type Fun a b -> a -> b. This is basically what JHC does.
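
As a hedged illustration of the idea on a toy program (nothing to do with
Duncan's code), every lambda becomes a constructor and every application goes
through the apply function:

\begin{code}
-- Original higher-order program:
--   compose f g = \x -> f (g x)
--   result      = compose (\x -> x + 1) (\x -> x * 2) 5
-- After defunctionalization, specialised here to Int -> Int functions:
data Fun = AddOne | Double | Compose Fun Fun

apply :: Fun -> Int -> Int
apply AddOne        x = x + 1
apply Double        x = x * 2
apply (Compose f g) x = apply f (apply g x)

result :: Int
result = apply (Compose AddOne Double) 5   -- 11
\end{code}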

Defunctionalization is normally a whole program transformation (which is why
JHC is a whole program compiler). But sometimes it can be done on a per module
basis. This is where *local* defunctionalization comes in. The key to local
defunctionalization is that we often can divide the data type Fun into several
disjoint data types. We can do this whenever there are several different
function spaces that never get mixed up. And sometimes we're even so lucky
that a function space is totally contained in one module. Then we can do
local defunctionalization of that particular function space only and
completely within that module without changing it's interface. This case often
comes up when using the continuation monad and Duncan's code is not an
exception.

So, I've manually done local defunctionalization on Duncan's code. It gives
rise to two types which I've called Fun1 and Fun2. They look like follows
(including the Put monad):

\begin{code}
newtype Put a = Put {
   runPut :: Fun2 a
   }

data Fun1 a where
 Bind :: (a -> Put b) -> Fun1 b -> Fun1 a
 Then :: Put b  -> Fun1 b -> Fun1 a
 Run  :: Fun1 ()
 FlushOld :: !(Fun1 ()) -> !Int -> !(ForeignPtr Word8) -> !Int -> !Int
   -> Fun1 ()

data Fun2 a where
 Return :: a -> Fun2 a
 Bind2  :: Put a -> (a -> Put b) -> Fun2 b
 Then2  :: Put a -> Put b -> Fun2 b
 Flush  :: Fun2 ()
 Write  :: !Int -> (Ptr Word8 -> IO ()) -> Fun2 ()
\end{code}
Intuitively every constructor corresponds to a closure. I've chosen the name
for the constructor based on which function the closure appears in.

The respective apply functions for these data types act as interpreters and
execute the corresponding code for each constructor/closure. Their types look
as follows:

\begin{code}
apply1 :: Fun1 a -> a -> Buffer -> [B.ByteString]
apply2 :: Fun2 a -> Fun1 a -> Buffer -> [B.ByteString]
\end{code}

Now, the cool thing is that once GHC starts optimizing away on these apply
functions they will be unboxed and no Buffer will ever be created or passed
around. Here is the core type for apply1:
\begin{core}
$wapply1_r21p :: forall a_aQu.
 PutMonad.Fun1 a_aQu
  -> a_aQu
  -> GHC.Prim.Addr#
  -> GHC.ForeignPtr.ForeignPtrContents
  -> GHC.Prim.Int#
  -> GHC.Prim.Int#
  -> GHC.Prim.Int#
  -> [Data.ByteString.Base.ByteString]
\end{core}
This is exactly what Duncan wanted, right? I declare victory :-)

However, things are not all roses. There are some functions that will
not be unboxed as we hope for with this approach, for instance the function
flushOld (see Duncan's code). To achieve the best possible optimization I
think one would have to perform strictness analysis and the worker-wrapper
transformation twice, once before doing local defunctionalization and then
again on the apply functions generated by the defunctionalization process.
This should give the code that Duncan wants I believe.

I think it should be relatively straightforward to implement local
defunctionalization in GHC but it should not be turned on by default as the
number of modules where it is beneficial is rather small.

The complete defunctionalized version of Duncan's module is attached.

I'm sure there are a lot of things that are somewhat unclear in this message.
Feel free to ask and I'll do my best to clarify.

Cheers,

Josef
{-# OPTIONS -fglasgow-exts -fbang-patterns -cpp #-}

module PutMonad (
-- * The Put type
  Put
, run -- :: Put () -> L.ByteString

-- * Flushing the implicit parse state
, flush   -- :: Put ()

-- * Primitives
, write   -- :: Int -> (Ptr Word8 -> IO ()) -> Put ()
, word8   -- :: Word8 -> Put ()
  ) where

import Foreign
import qualified Data.ByteString.Base as B (
   ByteString(PS), LazyByteString(LPS),
   inlinePerformIO, mallocByteString, nullForeignPtr)
import qualified Data.ByteString.Lazy as L (ByteString)

-- Our internal buffer 

Re: TArray

2006-09-20 Thread Josef Svenningsson

Hi,

I get the exact same thing with ghc-6.5.20060914.

Weird.

Josef

On 9/20/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello glasgow-haskell-users,

can someone try to compile this one-line module:

import Control.Concurrent.STM.TArray

with a recent 6.5 builds, preferably mingw32 ones?

it doesn't work for me, although TVar and other modules import
without any problems; and i see TArray.hi module along with TVar.hi
and so on. the message is:

Failed to load interface for `Control.Concurrent.STM.TArray':
  locations searched:
Control/Concurrent/STM/TArray.hi
Control/Concurrent/STM/TArray.hi-boot

compiler is, again,
http://www.haskell.org/ghc/dist/current/dist/ghc-6.5.20060901-i386-unknown-mingw32.tar.gz


--
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Complexity bug in garbage collector?

2005-04-16 Thread Josef Svenningsson
On 4/14/05, Simon Marlow [EMAIL PROTECTED] wrote:
 On 14 April 2005 15:35, Josef Svenningsson wrote:
 
  I've had some fun chasing around a couple of space leaks lately. One
  of the graphs that I produced looked like this:
  www.cs.chalmers.se/~josefs/coresa.ps
 
  Notice the shape of the graph. It shows a perfect squareroot function.
  But my program should be allocating at a constant rate. From previous
  experience this suggests that there is a time complexity bug in the
  garbage collector. This makes it take time proportional to the square
  of the amount of allocated memory. Can someone confirm this?
 
 The X axis of the heap profile is mutator time: that is runtime
 excluding GC time, so you wouldn't see any non-linear GC effects in the
 shape of the heap profile anyway.  You'll be able to confirm this by
 comparing the time on the profile to the wall-clock time, and checking
 the output from +RTS -sstderr is useful too.
 
 It's possible you're seeing cache effects: as the working set grows
 larger, the program slows down.  The shape does look a bit too perfect
 to be cache effects, though.
 
I don't think the cache has much to do with what I'm seeing. I think
the program is mostly allocating, and allocation is (as far as I remember)
much easier for the cache to handle efficiently than reading.

 I wouldn't rule out any bugs (of course :-), so please send us further
 evidence if you find it.
 
OK, I've cooked up this little program to study the behaviour a little closer:
\begin{code}
module Main where

main = print $ strictId [1..]

strictId list = let (c,c') = work list c'
                in c
  where work [] y'     = (y',[])
        work (x:xs) y' = (v,x:v')
          where (v,v') = work xs y'
\end{code}

This program just allocates like crazy til it dies. The funny looking
strictId function is just the strict identity function on lists. (Yes,
there are simpler ways to achieve the same thing. I just think the
above function is particularly sweet :-)

I do the following:
$ ghc -prof -auto-all --make Main.hs
$ main.exe +RTS -hd -M<VERY MUCH>

The resulting graph is suspiciously similar in shape to that of my
previous program. The garbage collector is still my primary suspect; I
simply don't know how to explain the graph otherwise.

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Complexity bug in garbage collector?

2005-04-14 Thread Josef Svenningsson
Hi all,

I've had some fun chasing around a couple of space leaks lately. One
of the graphs that I produced looked like this:
www.cs.chalmers.se/~josefs/coresa.ps

Notice the shape of the graph. It shows a perfect squareroot function.
But my program should be allocating at a constant rate. From previous
experience this suggests that there is a time complexity bug in the
garbage collector. This makes it take time proportional to the square
of the amount of allocated memory. Can someone confirm this?

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 6.4 release candidates available

2005-02-18 Thread Josef Svenningsson
Hi,

Compiling 6.4.20050217 on Windows according to the book fails pretty early:
<snippet>
/cygdrive/c/ghc/ghc-6.2.2/bin//ghc -H16m -O -I. -Rghc-timing -I../../../libraries -fglasgow-exts -no-recomp -c Compat/RawSystem.hs -o Compat/RawSystem.o -ohi Compat/RawSystem.hi
c:/DOCUME~1/JOSEFS~1/LOKALA~1/Temp/ghc3868.hc: In function `s3SQ_entry':
c:/DOCUME~1/JOSEFS~1/LOKALA~1/Temp/ghc3868.hc:109: too many arguments to function `rawSystem'
</snippet>

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Anybody that has an idea about Italian Verb Engine program?

2004-11-19 Thread Josef Svenningsson
Can you perhaps be more precise about what your problem is? You can compile
the code by issuing ghc --make Analisi -o Analisi.

/Josef

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:glasgow-haskell-
 [EMAIL PROTECTED] On Behalf Of Suzan Bayhan
 Sent: den 19 november 2004 13:50
 To: [EMAIL PROTECTED]
 Subject: Anybody that has an idea about Italian Verb Engine program?
 
 I will be gratefull if  anybody knows how to compile and run this program.
 
 http://www.helsinki.fi/filosofia/filo/jvp/analisi.html
 http://www.cs.chalmers.se/~aarne/tmp/contributions/ranta/analisi.html
 
 Thanks a lot.
 
 Suzan BAYHAN
 ___
 Glasgow-haskell-users mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


External Core

2004-11-15 Thread Josef Svenningsson
Dear fellow GHC users,

For a number of years GHC has been equipped with the external core language.
For those of you who are unfamiliar with this, it is a way to get GHC to
emit or read in programs in a format which is very close to its internal
intermediate language. This is for instance useful when you want to play
with various optimisations. Currently external core is close to an orphan.
Nobody is actively maintaining it and it is low on the GHC team's priority
list.

However, here at Chalmers we have two projects running which depend
critically on external core. We have therefore volunteered to adopt external
core and take responsibility for it. I will be the organiser of this effort.
We are grateful to the GHC developers for their blessing and for trusting us
with this.

We have a number of ideas on how we would like external core to evolve. But
before putting up an agenda we would like to hear from all the people out there
who use external core. We would be very happy to hear who you are, what you are
doing with external core and what changes you might want to see. I'm hoping
that we together can work out an external core format which is useful for
many people.

Hoping to hear from you,

/Josef Svenningsson, PhD student at Chalmers, External core tsar


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Bools are not unboxed

2004-10-06 Thread Josef Svenningsson
Simon Marlow wrote:
On 06 October 2004 00:53, John Meacham wrote:
 

This seems like it could be nicely generalized such that all
enumeration types unbox to the unboxed integer of their offset. so
data Perhaps = Yes | Maybe | No
can unbox to an Int# with 0# == Yes 1# == Maybe and 2# == No.
   

Yes, a strict enumeration should be implemented as an Int#, both in the
strictness analyser and also when you {-# UNPACK #-} a constructor
field.  This is something we'd like to try, but haven't got around to it
yet.  Maybe a good bite-sized project for a budding GHC hacker? :-)
 

Would it really be correct to translate it to Int#? AFAIK, unboxed 
values may not contain bottom while a data type most certainly can. I 
would imagine translating it to Int, and then relying on GHC's optimiser 
to optimize this into Int# whenever possible.
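
A hedged illustration of the distinction (the type names are mine):

\begin{code}
data Perhaps = Yes | Maybe | No

-- A lazy field may hold bottom, so it needs a boxed (pointer) representation:
data Lazy = Lazy Perhaps          -- Lazy undefined is a perfectly good value

-- A strict field can never hold bottom once the constructor is evaluated,
-- which is what makes a raw Int#-style representation plausible for it:
data Strict = Strict !Perhaps     -- evaluating Strict undefined forces the field
\end{code}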

  /Josef
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Haskell performance

2004-03-18 Thread Josef Svenningsson
On Thu, 18 Mar 2004, Carsten Schultz wrote:

 Hi Sébastien!

 On Thu, Mar 18, 2004 at 11:30:26AM +0100, Sébastien Pierre wrote:
  In fact, I would like to know how Haskell compares in performance to
  other languages because if I refer to the page I mentioned
  (http://www.bagley.org/~doug/shootout/craps.shtml) it does not even
  compete with Python (which is rather... slow).

 You should look at the individual examples and see how relevant their
 results are for you.  And keep in mind that strings tend to be slow in
 Haskell (being lazy lists of characters), a fact that may influence
 some of the tests.

I agree completely. Here's a little more detailed analysis to make the
comparison fair.

The language shootout has 25 benchmarks to compare languages with. One
thing which is not in Haskell's favour is the fact that Haskell isn't
represented in all of these benchmarks. So to get a better comparison we
should set the score to zero on those benchmarks where we don't have a
Haskell program. I zeroed out the following benchmarks:
List Processing, Method Calls, Object Instantiation, String Concatenation,
Hashes Part II, and Regular Expression Matching.

Suddenly Haskell climbs from 19th place to 14th place. Not dramatic, but
at least the comparison is fairer now. It is also interesting to see
what happens if you set the lines-of-code multiplier to one instead of
zero. Haskell then gets 7th place, just before Python as it happens.

Well, I think this shows that one should be very careful when reading
these kinds of benchmarks. It is very easy to jump to conclusions. But I
do believe there are some conclusions to be drawn if we look closely enough.

The main problem for Haskell on these benchmarks is I/O. In all benchmarks
with I/O in them Haskell (or I should really say ghc) gets a really bad
score. This is, however, a known problem. Simon PJ mentioned it in a
previous mail. All I can say is that for my purposes it hasn't been a
problem, so I really don't care about these benchmarks.

Another thing with these benchmarks is that some of them are so tightly
specified that the idiomatic Haskell solution didn't fit in. See for
example the String Concatenation benchmark. The programs have to
concatenate the strings in such a way that the Haskell program gets
quadratic behaviour. I submitted a solution using concat which behaved
OK but it was not allowed to compete. You can find it on the page of the
benchmark.
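
To illustrate the difference (a hedged sketch, not the actual benchmark
programs):

\begin{code}
-- Appending to the end of the accumulator copies it each time: O(n^2) overall.
quadratic :: Int -> String
quadratic n = go n ""
  where go 0 acc = acc
        go k acc = go (k - 1) (acc ++ "hello\n")

-- Building the chunks first and concatenating once is linear.
linear :: Int -> String
linear n = concat (replicate n "hello\n")
\end{code}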

Well, enough about benchmarks. Just make sure you read them carefully.

Sébastien, I encourage you to choose the Haskell + C solution. It has
worked well for me and many others. Paradox, a model finder for
first-order logic, is written in Haskell plus some C for the
performance-sensitive parts. It won the SAT/Models class of last year's
CASC competition. There only speed counts (after correctness of course). See:
http://www.cs.chalmers.se/~koen/paradox/


All the best,

/Josef
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Misleading error message

2004-03-05 Thread Josef Svenningsson
Hi all,

This mail is a little story about a little guy called Josef and a little
adventure he had this week together with the compiler ghc. Although
exciting, this adventure took him several hours and Josef would have been
happier without it. So he appeals to the implementors to improve on the
situation so that others may not have to embark on the same journey.

OK, here's the story.

Josef has a rather large program which he is writing. It contains many
haskell modules spread out over several directories. Early this week Josef
decided to add yet another module. He decided to call this module
InterfaceParse. So he created a file InterfaceParse.lhs, filled it
with nice haskell code and imported it in another haskell file. Now, after
a while Josef realised that the name InterfaceParse was not in keeping with
his naming convention and decided to rename the module to
InterfaceParser. Now, here is where the real adventure started. Josef
changed the name of the file to InterfaceParser.lhs but forgot to change
the module declaration in the file. In the file one could still read the
text:
module InterfaceParse ...

Unaware of his mistake, Josef carried on and changed the import to
importing the module InterfaceParser instead of InterfaceParse. And to
make sure everything had gone right he recompiled his project. And behold!
The compiler complained. What did it say? The compiler said it could not
find the interface for the module InterfaceParse! Josef was very puzzled
by this. Was his import statement wrong? No. Then why was the compiler
looking for the interface InterfaceParse when it should look for
InterfaceParser? He started to experiment with importing other modules,
modules which didn't exist. All the time, whenever he was trying to import
a non-existing module the compiler would correctly complain that the
interface was missing. But as soon as he tried to import InterfaceParser
the compiler would complain that it couldn't find InterfaceParse.

There were times when Josef thought someone was playing a practical joke
on him. Maybe Simon Peyton Jones had come up with a devious plan on how to
make money from his compiler? By adding some obscure bug which would
trigger very rarely he could then ask people for money to fix the bug.

After much desperation and consultation with others Josef finally
discovered his error.

One might say that Josef should have understood his error right away, that
he should have checked his files better. But Josef believes that the error
message he got from the compiler was very misleading and led him away from
finding the true source of the error. Dear implementors, please improve
the error reporting in this case.


/Josef (first time he has ever written about himself in third person)

By the way, Josef is using ghc-6.2 on a solaris machine.
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Array optimisation...

2004-02-23 Thread Josef Svenningsson
On Mon, 23 Feb 2004, MR K P SCHUPKE wrote:


 Was just thinking about GHC's implementation of arrays, and their
 poor performance. I know little about GHC's internal workings, but
 I was thinking about how array performance could be improved.

 What if when writing an array you instead construct a function:

 f :: (Ix x,Ix y) => Array a -> Ix x -> a -> Ix y -> a
 f a x b y | x==y = b
           | otherwise = a!y

 Then the update in place operator // becomes a curried application
 of 'f' above.

 You could then define a a series of 'overlays' for a base array.
 The clever bit would be to get the garbage collector to merge
 the two as soon as any reference to the original array is
 discarded.

 Does GHC already do anything like this?

No it doesn't. But if you want this behaviour you should look at
Data.Array.Diff. I think that library does what you want.
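
For example (a minimal sketch; if I remember correctly Data.Array.Diff
re-exports the usual IArray interface):

\begin{code}
import Data.Array.Diff

example :: DiffArray Int Char
example = listArray (0, 2) "abc" // [(1, 'z')]
-- elems example == "azc"
\end{code}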

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Local qualification in data types

2003-11-11 Thread Josef Svenningsson
Hi,

I'm playing around with the type system trying to encode various stuff in
it. I've run into an error, though, which it would be useful to understand.

Say I have a type class:
class Foo a

and then I want a datatype like this:

data Bar a = Foo a => C1 | C2 ... | C3

This works fine syntactically but ghc says that I must have at least one
existential type variable in the constraint. Hugs doesn't require that but
has other problems.
Is there a good reason for not allowing the above?
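
For reference, the shape that the error message seems to be asking for is
something like this (a hedged sketch, with -fglasgow-exts), where the
constraint does mention an existentially quantified variable:

\begin{code}
data Baz = forall b. Foo b => C4 b
\end{code}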

Cheers,

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


unknown symbol `__stginit_List_'

2003-10-29 Thread Josef Svenningsson
Hi!

Here's a letter from someone who has next to no clue about what he is
doing so have patience and don't assume any knowledge when answering...

I'm trying to create a package for ghc. I've struggled quite a lot and
gotten this far:

(This is when invoking ghci -package yahu)

Loading package base ... linking ... done.
Loading package yahu ... linking ...
/.../chalmers.se/fs/cab/cs/work/proj/multi/pub/lib/yahu/Yahu/YahuHaskell.o:
unknown symbol `__stginit_List_'
ghc-6.0.1: panic! (the `impossible' happened, GHC version 6.0.1):
can't load package `yahu'

I'm surprised by this. Why do I get this error message? From the name of
the symbol I would expect that it can be found in package base but
apparently not... Please advise.

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: state of ghc6 on sparc

2003-06-18 Thread Josef Svenningsson
On Wed, 18 Jun 2003, Isaac Jones wrote:

 Hello :)

 Can anyone enlighten me as to the state of the ghc6 sparc
 distribution?  The file on Hal's web page mentioned some time back is
 not the same file as on the GHC web page (I note that the filenames
 are different also).

 Are any of them known to work if installed in a non-standard place?

Here at Chalmers we use the first shipped version (I believe this is the
one on the GHC web page or an even earlier one...). I patched it myself,
it was very easy. I expect that the version on Hal's web page should work
since he said he fixed the bug.

The only thing that I don't like is that the dist isn't compiled with
readline. It makes working in ghci a nightmare (I need to use backspace
often...). The way around this is, in my opinion, to make a
Haskell98-compliant, trimmed-down readline library which doesn't depend
on the C library. It shouldn't be too difficult. I don't expect that people are
using the full power of readline anyway.

Cheers,

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC 6.0 Release: sparc-solaris2 binaries

2003-06-02 Thread Josef Svenningsson
On Fri, 30 May 2003, Simon Marlow wrote:


  A binary version of GHC6 is available for sparc-solaris2 machines at:
 
http://www.isi.edu/~hdaume/ghc-6.0-sparc-solaris2.tar.bz2
17.5 mb
 
  I'd appreciate it if the maintainers could copy it and make
  it available
  locally off of the GHC web page so as to not kill bandwidth here :).

 Uploaded, thanks!

 Are the notes about libcurses etc. still relevant?  (see the download
 page).

Uhmmm

When I installed the dist and tried it I got the following message:

ghci: /nfs/moussor/hdaume/lib/ghc-6.0/ghc-6.0: not found

The fix is easy: just change the scripts ghc-6.0 and ghci-6.0. It would be
nice if you could ship a correct binary dist though...

All the best,

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: effect of order of function arguments

2003-02-19 Thread Josef Svenningsson
[moved over to glasgow-haskell-users]

On Wed, 19 Feb 2003, Simon Peyton-Jones wrote:

 | GHC used to have an optimisation for static argument like this. It
 would
 | turn both of the above programs into a similar form using a local
 | recursive function:
 |
 | interp y xs = interpaux xs
 |   where interpaux [] = []
 | interpaux (x:[]) = x:[]
 | interpaux (x:xs) = x:y:interpaux xs
 |
 | GHC doesn't do this anymore. The reason for this is unknown to me.

 It turned out to be a very minor effect (1-2% of execution time) and
 hard to tune; with lots of parameters, it's best to make a local
 function, with just a few it's best to pass the parameters round.

I see. I'm a little surprised though. I thought that performing SAT
(static argument transformation) would make GHC much keener on inlining
the function. My expectation was that there would be some gain from that.

Just as a comment I can add that in my paper Shortcut fusion for
accumulating parameters and zip-like functions in last year's ICFP I
performed SAT by hand on the definition of unfoldr and foldl' in order to
make all the inlining happen in the way I wanted. Without this I would
either have to rely on the compiler performing SAT or there would be no
fusion.
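
For concreteness, a hedged sketch of what that hand-applied SAT looks like for
foldl' (modulo naming and the exact strictness idiom used in the paper):

\begin{code}
foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' f z0 xs0 = go z0 xs0
  where
    -- f is a static argument: it is not passed around the recursive loop,
    -- so the local definition can be inlined and specialised at call sites.
    go z []     = z
    go z (x:xs) = let z' = f z x in z' `seq` go z' xs
\end{code}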

My gut feeling (which of course can be wrong) is that SAT can play an
important role when transforming programs.

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Floats and Doubles

2002-11-13 Thread Josef Svenningsson
On Wed, 13 Nov 2002, Jan Kort wrote:


 Juan Ignacio Garcia Garcia wrote:
  *P2 (fromRational ((toRational 4) - ( toRational 5.2 )))
  -1.2002

 I can't explain this one, how would fromRational
 know that it has to create a Double ?

It's the defaulting mechanism that kicks in. The default default is
(Integer,Double). Since Integer is not an instance of Fractional, Double
is chosen.

But this brings up a strange thing in GHCi. Suppose I load the following
module into GHCi:

\begin{code}
module Foo where

kalle = (fromRational ((toRational 4) - ( toRational 5.2 )))

default (Rational)
\end{code}

What happens is the following:

Prelude> :l Foo.hs
Compiling Foo  ( Foo.hs, interpreted )
Ok, modules loaded: Foo.
*Foo> kalle
(-6) % 5
*Foo> (fromRational ((toRational 4) - ( toRational 5.2 )))
-1.2002

It seems like GHCi doesn't care about the default directive at the prompt.
I tried to look in the documentation but I found nothing about defaulting.
I believe it is reasonable to expect that the defaulting of the current
module also affects the prompt (I realise however that this can lead to
problems when several modules are in scope..). Other things one might want
to have are to be able to see what the current defaulting is and perhaps to
set it manually from the prompt.
At the very least, please document how GHCi handles defaulting. I know
it's not the most frequently used feature but it's there and needs to be
taken care of.

All the best,

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



RE: Does GHC simplify RULES?

2002-02-25 Thread Josef Svenningsson

On Mon, 25 Feb 2002, Simon Peyton-Jones wrote:


 | Suppose I have the following RULES pragma:
 |
 | {-# RULES
 |   "foo" forall a . foo a = (\x -> bar x) a
 | #-}
 |
 | Ok, it's stupid but I have examples where this is motivated, trust me.
 |
 | Now, it seems that GHC simplifies the rule because what I get
 | when compiling it with -ddump-rules is the following rule:
 |
 | "foo" __forall {@ t_a2UD a :: t_a2UD}
 | Test.foo @ t_a2UD a
 | = Test.bar @ t_a2UD a ;
 |
 | It's \beta-reduced! Argh! Why is that?

 Because if it's left unsimplified, the first thing that will happen
 after
 the rule fires is that the beta reduction will be done.  So why not
 do it first?  (The desugarer can leave quite a lot of crud around,
 so a gentle simplification is indeed run.)

 You'll need to explain your motivation a bit more.  Perhaps give
 the rules you'd like along with a sample simplification sequence.

Alright. Some more motivation is probably justified here. This message is
a bit lengthy. If you're not very interested in this I suggest you stop
reading now.

What I am really trying to do is express the foldr/build rule
without build. Intuitively this should be possible since build just puts a
tag on a function saying that it has the right type.
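
For reference, build is essentially just (as defined in GHC's libraries,
modulo inlining pragmas):

build :: (forall b . (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []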

Expressing the foldr/build-build (read: foldr/build minus build) rule is
easy:

{-# RULES
  "foldr fuse" forall c n (g :: forall b . (a -> b -> b) -> b -> b).
   foldr c n (g (:) []) = g c n
#-}

Now, the problem is writing list-producing functions so that they get the
right type. Let's look at an example, our favourite function map. Suppose
we have the following code snippet:

foldr p z (map f xs)

How do we write map so that the intermediate list is not built? We can
define map in the following way:

map f xs = mapFB f xs (:) []
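
where mapFB could, as a hedged sketch (the actual definition is not shown
here), be defined like this:

mapFB :: (a -> b) -> [a] -> (b -> c -> c) -> c -> c
mapFB f []     c n = n
mapFB f (x:xs) c n = c (f x) (mapFB f xs c n)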

mapFB is a function where all the conses and nils are abstracted out. Now,
we can arrange so that map gets inlined in the above example. So for the
rule "foldr fuse" to apply, mapFB must have the right type. Note that mapFB
takes four arguments and g in the rule takes two. BUT, g takes a type
argument before these two arguments because it is polymorphic. Therefore
mapFB must also take a type argument in that position in order for the
rule to apply. With this in mind we might try to give mapFB the following
type:

mapaux :: (a -> b) -> [a] -> (forall c. (b -> c -> c) -> c -> c)

But GHC completely ignores our explicit forall quantifier and moves the
quantification to the top level. Bummer!
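
To spell out the hoisting (my illustration, not compiler output), the
signature GHC actually ends up with is

mapaux :: forall a b c. (a -> b) -> [a] -> (b -> c -> c) -> c -> c

so the type lambda for c sits at the very front, before any value
arguments, and no subterm has the polymorphic shape that the "foldr fuse"
rule needs.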

OK, so we want to force GHC to put a type lambda where we want to. My idea
was then to have a rule that generates a piece of polymorphic code and
insert some redundancy so that GHC would not understand that it could
remove the type lambda. Here's what I tried:

{-# RULES
  "map" forall (f :: a -> b) xs .
    map f xs =
      ((\c n -> mapaux f xs c n) :: forall c . (a -> c -> c) -> c -> c) (:) []
#-}

The idea is to fool GHC into inserting a type lambda before my explicit
lambdas in the rhs of the rule. Then the "foldr fuse" rule would fire on the
example above. Or will it? It depends on how GHC does things and I'm not
100% sure. If GHC starts simplifying something which has just come out of
an application of a rule, then I guess this trick is really wasted. What I
hoped for was this:

foldr p z (map f xs) ={ Rule map }=
foldr p z ((\c n -> mapaux f xs c n) (:) []) ={ Rule foldr fuse }=
(\c n -> mapaux f xs c n) p z ={ \beta reduction }=
mapaux f xs p z

In actual GHC the \beta reduction will probably happen before the second
rule application. But I suppose I would have found that out if the rule
was not simplified; as it is, there was no way to tell.

Ok, so maybe you guys have already thought about removing build and decided
against it. It might not be the most important problem in the world. But
intuitively build is completely redundant and that really bugs me. That's
why I've been playing around with this.

Cheers,

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Does GHC simplify RULES?

2002-02-21 Thread Josef Svenningsson

Hi!

A question about the RULES pragma.

Suppose I have the following RULES pragma:

{-# RULES
  "foo" forall a . foo a = (\x -> bar x) a
#-}

Ok, it's stupid but I have examples where this is motivated, trust me.

Now, it seems that GHC simplifies the rule because what I get when
compiling it with -ddump-rules is the following rule:

foo __forall {@ t_a2UD a :: t_a2UD}
Test.foo @ t_a2UD a
= Test.bar @ t_a2UD a ;

It's \beta-reduced! Argh! Why is that?

So why is this a problem? Maybe I should give some explanation. The reason
is that I want to put a type signature on the (\x -> bar x) and make it
more general (i.e. forcing GHC to insert a type lambda after a lambda). I
then have other rules which match on higher ranked types. But in order for
these rules to apply the type lambda must be in the right place.

One other thing I tried in order to get the type lambdas in the right
place was to use rank 1 types (is that the correct name?):

xyzzy :: (a -> a) -> (forall b. b -> b)

but GHC simplified the type and just put the forall quantifier (and hence
the type lambda) at the top level. Argh again!
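
For illustration (my reading of what GHC does, not compiler output), the
signature gets hoisted to

xyzzy :: forall a b. (a -> a) -> b -> b

so the type lambda for b again ends up at the top level rather than after
the value argument.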

Is there a way around this?

/Josef

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



RE: ghc-pkg

2001-10-15 Thread Josef Svenningsson

On Mon, 15 Oct 2001, Simon Marlow wrote:

  With large projects, ghc runs out of heap space because of too much
  caching.

 I think it's more likely that GHC has some space leaks which cause it to
 hang on to too much memory between compilations.  In theory, it only
 caches the contents of interface files (in a pre-processed state), but
 due to leakage it may end up hanging on to more stuff.  This is high up
 on our list of things to investigate.

Ok, I see. I thought the memory consumption was high just because there
were so many interface files to cache and that they simply take up a lot
of memory. Anyway, it's good to hear that you're working on it.

 I'm not sure what you have in mind w.r.t. weak pointers - could you
 elaborate?  In GHC there isn't a kind of pointer that says "hold on to
 this unless we run out of heap", which is perhaps what you were after.

My suggestion to use weak pointers was just a confusion on my side.
Sorry for that. What I had in mind was the kind of pointer that you
suggest.
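
For anyone following along, here is a minimal sketch of what plain weak
pointers actually give you (module and function names as in today's
System.Mem.Weak, so treat the details as illustrative):

import System.Mem.Weak (mkWeakPtr, deRefWeak)

-- A value reachable only through a Weak pointer may be collected at any
-- GC, not merely when the heap is full -- which is why plain weak
-- pointers are not the "drop only under memory pressure" cache I was
-- after.
demo :: IO ()
demo = do
  let iface = "parsed contents of Foo.hi"   -- stands in for a cached interface
  w <- mkWeakPtr iface Nothing               -- no finalizer
  m <- deRefWeak w
  putStrLn (case m of
              Just s  -> "still cached: " ++ s
              Nothing -> "already collected")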

Cheers

/Josef


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



RE: ghc-pkg

2001-10-09 Thread Josef Svenningsson

Hi!

I would like to add a request to Thomas' list of features lacking in
ghc --make:

  When caching information between the compilation of different modules,
  use weak pointers.

With large projects, ghc runs out of heap space because of too much
caching. It's always fine to restart the build process by doing ghc --make
again, but the whole thing is really annoying. I agree that caching is
important and very nice, but it shouldn't make the build process
abort.

/Josef


On Mon, 8 Oct 2001, Simon Marlow wrote:

  Additionally, ghc --make lacks (AFAIK) several useful features found in
  other make tools (although not all in the same tool...):

  1. The ability to distinguish directories containing source code to be
     compiled from directories containing previously compiled code
     (hmake has the flags -P and -i, hbcmake has -i and -I). (I guess
     you could use -package-conf as a cumbersome substitute, though.)

  2. The ability to specify compiler flags for individual modules
     without putting them in the source code. (Some flags are 'static'
     and can not be put in the source code.)
  3. The ability to compile several modules in parallel, on a
     multi-processor machine, or a network of workstations.
  4. The ability to automatically invoke program generators (e.g. happy)...
  5. A graphical user interface.

 All valid arguments, of course.  But in --make's favour, it *is* much
 faster than individual compiles.  In a little test I did today, ghc
 --make beats hmake/nhc98 on a reasonably sized program
 (nofib/real/anna).  I timed both with -H32m and no optimisation, ghc
 --make compiled it in 48 seconds compared to 51 seconds for hmake/nhc98.
 On small individual modules, nhc98 wins hands down though.

 Ok, so speed isn't everything, and I hear the arguments you enumerated
 above.  Most of these can be achieved through extra compiler support,
 and the last one (a make GUI) should actually be easier with GHC once we
 get around to specifying the compiler's programmatic API more precisely.

 Cheers,
   Simon

 ___
 Glasgow-haskell-users mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: POLL: GC options

2001-08-06 Thread Josef Svenningsson

On Mon, 6 Aug 2001, Simon Marlow wrote:

 Issue 1: should the maximum heap size be unbounded by default?
 Currently the maximum heap size is bounded at 64M.  Arguments for: this
 stops programs with a space leak eating all your swap space.  Arguments
 against: it's annoying to have to raise the limit when you legitimately
 need more space.

 Options:
   1. remove the default limit altogether
   2. raise the default limit
   3. no change

 (any others?)

I think that if there is to be a default limit, it would be nice to be
able to set it at compile time. This is something that I've wanted for
quite some time. If I know that the program I am compiling is likely to
need 100M of heap space, it feels silly having to give the RTS parameter to
the program each time I run it. It would be much more convenient to just
tell the compiler where I want the limit.
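
(Concretely: rather than having to start the program with something like
./myprog +RTS -M100m -RTS every time -- assuming I remember the maximum
heap flag correctly -- the 100m limit would be baked in when the program
is compiled.)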

/Josef


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



RE: GHC core representation

2001-06-25 Thread Josef Svenningsson

On Mon, 25 Jun 2001, Simon Peyton-Jones wrote:

 | * I would like to make GHC generate only the core file and
 | the hi file, nothing more. However, I haven't found a way of
 | making GHC stop after outputting the hi file.  Being able to
 | do this is useful in some other cases as well;  I sometimes
 | look at the raw core that is printed with the -ddump*
 | flags. It is often the case that I don't want anything to be
 | generated then.

 This would be a Good Thing and not too hard.
 How would you like to specify where to stop?  After the last
 blob of debug output might be a reasonable answer, but
 we definitely also want to be able to carry on.  Yet another flag?

Another flag is probably what I want here. Stopping after the last blob
of debug output would certainly be a useful one. Another alternative is
after producing the .hi file, which could potentially be useful even when
we're not producing any debug output. The latter should be even easier to
implement.
I'm happy with either one.

 GHC is controlled by a little script, one instruction of which is
 "dump the ext-core".  I have long wanted to externalise the script
 file so that you can write it yourself; just one more thing we have
 not done yet.

Aha. Sounds cool. I guess if you want to make GHC popular as a tool whose
bits and pieces people can reuse, then such a feature would be much
appreciated. At least, I would find such a feature useful.

Cheers,

/Josef


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: GHC version 5.00.2 is available

2001-06-20 Thread Josef Svenningsson

On Mon, 18 Jun 2001, Julian Seward (Intl Vendor) wrote:


The (Interactive) Glasgow Haskell Compiler -- version 5.00.2
   ==

[..]

  What's new in 5.00.2
 ==

[..]

How about Andrew Tolmach's -fext-core flag? Did it make it into 5.00.2 or
do we have to wait for the next official release?

/Josef


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



About rules

2001-04-05 Thread Josef Svenningsson

Hi all!

I've been playing around with the rules facility a bit. There is an
annoying shortcoming when working with infix operators: it seems that the
rule parser doesn't like them at all. The following example gives a syntax
error:

{-# RULES
"plus/mult" forall p . p + p = 2 * p
#-}

whereas the following is all right:

{-# RULES
"plus/mult" forall p . (+) p p = (*) 2 p
#-}

It's not a big problem, but it's rather inconvenient and ugly. Is it easy
to fix?

Cheers,
/Josef


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users