Re: [Haskell-cafe] Re: Real-time garbage collection for Haskell

2010-03-04 Thread Curt Sampson
On 2010-03-02 14:17 + (Tue), Simon Marlow wrote:

 System.Mem.performGC is your friend, but if you're unlucky it might do a  
 major GC and then you'll get more pause than you bargained for.

Anybody calling that is a really poor unlucky sod, because, as far as I
can tell from reading the sources, System.Mem.performGC always does a
major GC.

cjs
-- 
Curt Sampson c...@cynic.net +81 90 7737 2974
 http://www.starling-software.com
The power of accurate observation is commonly called cynicism
by those who have not got it.--George Bernard Shaw
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-04 Thread Curt Sampson
On 2010-03-02 11:33 +0900 (Tue), Simon Cranshaw wrote:

 I can confirm that without tweaking the RTS settings we were seeing
 over 100ms GC pauses.

Actually, we can't quite confirm that yet. We're seeing large amounts
of time go by in our main trading loop, but I'm still building the
profiling tools to see what exactly is going on there. However, GC is
high on our list of suspects, since twiddling the GC parameters can
improve things drastically.

On 2010-03-02 00:06 -0500 (Tue), John Van Enk wrote:

 Would a more predictable GC or a faster GC be better in your case? (Of
 course, both would be nice.)

Well, as on 2010-03-01 17:18 -0600 (Mon), Jeremy Shaw wrote:

 For audio apps, there is a callback that happens every few milliseconds. As
 often as 4ms. The callback needs to finish as quickly as possible to avoid
 buffer underruns.

I think we're in about the same position. Ideally we'd never have to
stop for GC, but that's obviously not practical; what will hurt pretty
badly, and we should be able to prevent, is us gathering up a bunch
of market data, making a huge pause for a big GC, and then generating
orders based on that now oldish market data. We'd be far better off
doing the GC first, and then looking at the state of the market and
doing our thing, because though the orders will still not get out as
quickly as they would without the GC, at least they'll be using more
recent data.
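The GC-before-sampling ordering can be sketched as a loop; `readMarket`, `decide`, and `sendOrders` below are hypothetical stand-ins for the real trading code, not anything from our actual system:

```haskell
import System.Mem (performGC)

-- Hypothetical stand-ins for the real trading code.
readMarket :: IO [Double]
readMarket = return [100.0, 100.5]

decide :: [Double] -> String
decide quotes = "order at " ++ show (minimum quotes)

sendOrders :: String -> IO ()
sendOrders = putStrLn

-- One iteration: pay the (major) GC pause *before* sampling market
-- data, so orders are computed from the freshest possible snapshot.
tradingStep :: IO ()
tradingStep = do
  performGC
  quotes <- readMarket
  sendOrders (decide quotes)

main :: IO ()
main = tradingStep
```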

I tried invoking System.Mem.performGC at the start of every loop, but
that didn't help. Now that I know it was invoking a major GC, I can see
why. :-) But really, before I go much further with this:

On 2010-03-01 14:41 +0100 (Mon), Peter Verswyvelen wrote:

 Sounds like we need to come up with some benchmarking programs so we
 can measure the GC latency and soft-realtimeness...

Exactly. Right now I'm working from logs made by my own logging and
profiling system. These are timestamped, and they're good enough
to get a sense of what's going on, but incomplete. I also have the
information from the new event logging system, which is excellent in
terms of knowing exactly when things are starting and stopping, but
doesn't include information about my program, nor does it include any
sort of GC stats. Then we have the GC statistics we get with -S, which
don't have timestamps.

My plan is to bring all of this together. The first step was to fix
GHC.Exts.traceEvent so that we can use that to report information about
what the application is doing. In 6.12.1 it segfaults, but we have a fix
(see http://hackage.haskell.org/trac/ghc/ticket/3874) and it looks as if
it will go into 6.12.2, even. The next step is to start recording the
information generated by -S in the eventlog as well, so that not only
do we know when a GC started or stopped in relation to our application
code, but we know what generation it was, how big the heap was at the
time, how much was collected, and so on and so forth. Someone mentioned
that there were various other stats that were collected but not printed
by -S; we should probably throw those in too.

With all of that information it should be much easier to figure
out where and when GC behaviour is causing us pain in low-latency
applications.

However: now that Simon's spent a bunch of time experimenting with the
runtime's GC settings and found a set that's mitigated much of our
problem, other things are pushing their way up my priority list. Between
that and an upcoming holiday, I'm probably not going to get back to this
for a few weeks. But I'd be happy to discuss my ideas with anybody else
who's interested in similar things, even if just to know what would be
useful to others.

What do you guys think about setting up a separate mailing list for
this? I have to admit, I don't follow haskell-cafe much due to the high
volume of the list. (Thus my late presence in this thread.) I would be
willing to keep much closer track of a low-volume list that dealt with
only GC stuff.

I'd even be open to providing hosting for the list, using my little baby
mailing list manager written in Haskell (mhailist). It's primitive, but
it does handle subscribing, unsubscribing and forwarding of messages.

cjs
-- 
Curt Sampson c...@cynic.net +81 90 7737 2974
 http://www.starling-software.com
The power of accurate observation is commonly called cynicism
by those who have not got it.--George Bernard Shaw


Re: [Haskell-cafe] Re: Real-time garbage collection for Haskell

2010-03-04 Thread Peter Verswyvelen
A fully concurrent GC running on multiple threads/cores might be
great, but I guess this is difficult to implement and introduces a lot
of overhead.

For simple video games, it might work to always do a full GC per
frame, but don't allow it to take more than T milliseconds. In a sense
the GC function should be able to be paused and resumed, but it could
run on the same thread as the game loop, so no synchronization
overhead would arise (in a sense many video games are already little
cooperative multitasking systems). The game loop should then decide
how to balance the time given to the GC and the memory being
collected. This would cause longer frame times and hence sometimes a
decrease in frame rate, but never long stalls.

Note that the issue also popped up for me in C many years ago. Using
Watcom C/C++ in the nineties, I found out that a call to the free
function could take a very long time. Also for games that could run
many hours or days (e.g. a multi-player game server) the C heap could
get very fragmented, causing memory allocations and deallocations to
take ever more time, and sometimes even fail. To make my games run
smoothly I had to implement my own memory manager. To make them run
for a very long time, I had to implement a memory defragmenter. So in
the end, this amounts to a garbage collector :-) Of course this problem is
well known; I just wanted to restate that for games the typical C
heap is not very suitable either.


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread Curt Sampson
On 2010-03-01 15:44 +0100 (Mon), Günther Schmidt wrote:

 Apart from monads there are of course also Applicative Functors,  
 Monoids, Arrows and what have you. But in short the Monad thingy seems  
 to be the most powerful one of them all.

Perhaps not exactly. I build monads left and right, but that's because
I don't understand much else. :-) Before you get all hung up on them,
though, I recommend reading the Typeclassopedia [1], which will
introduce you to all of the monad's friends and family.

[1]: http://byorgey.wordpress.com/2009/03/16/monadreader-13-is-out/

cjs
-- 
Curt Sampson c...@cynic.net +81 90 7737 2974
 http://www.starling-software.com
The power of accurate observation is commonly called cynicism
by those who have not got it.--George Bernard Shaw


[Haskell-cafe] Haskell platform for GHC 6.12.1?

2010-03-04 Thread Peter Verswyvelen
Using GHC 6.12.1 on Windows is currently hard, since one must compile
the latest version of cabal-install, which is a nightmare for a
typical Windows user (install MinGW, MSYS, utilities like wget, download the
correct packages from Hackage, compile them in the correct order, etc.).

What's the status of the Haskell platform for the latest and greatest
Glasgow Haskell Compiler?


[Haskell-cafe] Re: idioms ... for using Control.Applicative.WrapMonad or Control.Arrow.Kleisli?

2010-03-04 Thread Heinrich Apfelmus
Nicolas Frisby wrote:
 Each time I find myself needing to use the wrapping functions
 necessary for this embeddings, I grumble. Does anyone have a favorite
 use-pattern for ameliorating these quickly ubiquitous conversions?
 
 For runKleisli, I was considering something like
 
 onKleisli ::
   (Control.Arrow.Kleisli m a b -> Control.Arrow.Kleisli m' a' b')
   -> (a -> m b) -> (a' -> m' b')
 onKleisli f = Control.Arrow.runKleisli . f . Control.Arrow.Kleisli
 
 but haven't really tested its usefulness. My most frequent use cases
 usually include Arrow.first, Arrow.second, &&&, ***, or +++. E.g.
 
 somefun :: (Monad m, Num a) => (a, d) -> m (a, d)
 somefun = onKleisli Control.Arrow.first (\ a -> return (a + 1))
 
 Perhaps a Control.Arrow.Kleisli, which would export (same-named)
 Kleisli specializations of all the Control.Arrow methods? And an
 analogous Control.Applicative.Monad? (Data.Traversable does this a
 little bit to specialize its interface for monads, such as
 Data.Traversable.sequence.)
 
 While writing this, I realized that you can't leave the Arrow-ness of
 Kleisli arrows implicit, since (->) a (m b) is two kinds of arrows
 depending on context -- which is precisely what the Kleisli newtype
 resolves. So I'm not seeing a reason to bring up the 'class
 Applicative m => Monad m where' dispute.

Yep, I don't think you can avoid wrapping and unwrapping a newtype.
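For reference, here is the quoted sketch made self-contained and compilable, specialising `somefun` to `Maybe` purely to exercise it (nothing here beyond the quoted code and standard `Control.Arrow`):

```haskell
import Control.Arrow (Kleisli (..), first)

-- Lift a transformation of Kleisli arrows to work directly on
-- monadic functions, hiding the newtype wrapping and unwrapping.
onKleisli :: (Kleisli m a b -> Kleisli m' a' b')
          -> (a -> m b) -> (a' -> m' b')
onKleisli f = runKleisli . f . Kleisli

-- The quoted example: bump the first component inside a monad.
somefun :: (Monad m, Num a) => (a, d) -> m (a, d)
somefun = onKleisli first (\a -> return (a + 1))

main :: IO ()
main = print (somefun (1 :: Int, "x") :: Maybe (Int, String))
-- prints: Just (2,"x")
```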


While not directly related, I wonder whether Conor McBride's bag of tricks

  http://www.haskell.org/pipermail/libraries/2008-January/008917.html

might be of help.


Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com



RE: [Haskell-cafe] Haskell platform for GHC 6.12.1?

2010-03-04 Thread Simon Peyton-Jones
See 

http://trac.haskell.org/haskell-platform/wiki/ReleaseTimetable

The Haskell Platform for GHC 6.12 should be out on March 21st.

Simon

| -Original Message-
| From: haskell-cafe-boun...@haskell.org 
[mailto:haskell-cafe-boun...@haskell.org] On
| Behalf Of Peter Verswyvelen
| Sent: 04 March 2010 09:38
| To: The Haskell Cafe
| Subject: [Haskell-cafe] Haskell platform for GHC 6.12.1?
| 
| Using GHC 6.12.1 on Windows currently is hard, since one must compile
| the latest version of cabal-install, which is a nightmare to do for a
| typical windows user (install mingw, msys, utils like wget, download
| correct package from hackage, compile them in correct order, etc etc)
| 
| What's the status of the Haskell platform for the latest and greatest
| Glasgow Haskell Compiler?



[Haskell-cafe] RE: Haskell platform for GHC 6.12.1?

2010-03-04 Thread Maciej Piechotka
On Thu, 2010-03-04 at 09:48 +, Simon Peyton-Jones wrote:
 See 
 
 http://trac.haskell.org/haskell-platform/wiki/ReleaseTimetable
 
 The Haskell Platform for GHC 6.12 should be out on March 21st.
 
 Simon

I hope it will come with long double for C ;)

And BTW - why is it called 2009.4.x and not 2010.x?

Regards




Re: [Haskell-cafe] Re: How do you rewrite your code?

2010-03-04 Thread Claus Reinke
All my code, whether neat or not so neat is still way too concrete, too 
direct.
I think the correct answer is one should try to find abstractions and 
not code straight down to the point. Which to me is still a really tough 
one, I have to admit.


Taking this cue, since you've raised it before, and because the current
thread holds my favourite answer to your problem:

   Small moves, Ellie, small moves.. :-)

Don't think of finding abstractions as an all-or-nothing problem.
One good way of finding good abstractions is to start with straight
code, to get the functionality out of the way, then try to abstract over
repetitive details, improving the code structure without ruining the
functionality. The abstractions you're trying to find evolved this way,
and can be understood this way, as can new abstractions that have
not been found yet.

There are, of course, problems one cannot even tackle (in any 
practical sense) unless one knows some suitable abstractions, and 
design pattern dictionaries can help to get up to speed there. But 
unless one has learned to transform straight code into nicer/more 
concise/more maintainable code in many small steps, using other 
people's nice abstractions wholesale will remain a Chinese-room-style
black art.

For instance, the whole point of refactoring is to separate general
code rewriting into rewrites that change observable behaviour (API
or semantics), such as bug fixes and new features, and those that don't
change observable behaviour, such as cleaning up, restructuring
below the externally visible API, and introducing internal abstractions.
Only the latter group falls under refactoring, and turns out to be a nice
match for the equational reasoning that pure-functional programmers
value so highly.

What that means is simply that many small code transformations are 
thinkable without test coverage (larger code bases should still have 
tests, as not every typo/thinko is caught by the type system). Even 
better, complex code transformations, such as large-scale refactorings, 
can be composed from those small equational-reasoning-based 
transformations, and these small steps can be applied *without having 
to understand what the code does* (so they are helpful even for 
exploratory code reviews: transform/simplify the code until we 
understand it - if it was wrong before, it will still be wrong the same 
way, but the issues should be easier to see or fix).
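A tiny illustration of the kind of equational step meant here: fusing two maps is a behaviour-preserving rewrite one can apply without understanding the surrounding code (a generic example, not taken from the thread):

```haskell
-- Before: two traversals of the list.
before :: [Int] -> [Int]
before = map (* 2) . map (+ 1)

-- After: one traversal, justified by the law  map f . map g == map (f . g).
after :: [Int] -> [Int]
after = map ((* 2) . (+ 1))

main :: IO ()
main = print (before [1, 2, 3] == after [1, 2, 3])  -- True
```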



From a glance at this thread, it seems mostly about refactorings/
meaning-preserving program transformations, so it might be
helpful to keep the customary distinction between rewriting and
refactoring in mind. A couple of lessons we learned in the old
refactoring functional programs project:

1. refactoring is always with respect to a boundary: things within
    that boundary can change freely, things beyond need to stay
    fixed to avoid observable changes. It is important to make
    the boundary, and the definition of observable change,
    explicit for every refactoring session (it might simply mean
    denotational equivalence, or operational equivalence, or
    API equivalence, or performance equivalence, or..)

2. refactoring is always with respect to a goal: adding structure,
   removing structure, changing structure, making code more
   readable, more maintainable, more concise, .. These goals
   often conflict, and sometimes even lie in opposite directions
    (e.g., removing clever abstractions to understand what is
    going on, or adding clever abstractions to remove boilerplate),
   so it is important to be clear about the current goal when 
   refactoring.


Hth,
Claus

PS. Obligatory nostalgia:

- a long time ago, HaRe did implement some of the refactorings
    raised in this thread (more were catalogued than implemented,
    and not all suggestions in this thread were even catalogued, but
    the project site should still be a useful resource)

    http://www.cs.kent.ac.uk/projects/refactor-fp/

    A mini demo that shows a few of the implemented refactorings
    in action can be found here:

    http://www.youtube.com/watch?v=4I7VZV7elnY
   
- once upon a time, a page was started on the haskell wiki, to
    collect experiences of Haskell code rewriting in practice (the
    question of how to find/understand advanced design patterns
    governs both of the examples listed there so far; it would be
    nice if any new examples raised in this thread were added
    to the wiki page)

    http://www.haskell.org/haskellwiki/Equational_reasoning_examples




Re: [Haskell-cafe] type class constraints headache

2010-03-04 Thread Eduard Sergeev

Related question probably: why does ghc compile this:

 {-# LANGUAGE RankNTypes, ImpredicativeTypes #-}
 methods :: [(String, forall b. Eq b => b)]
 methods =
   [ ("method1", undefined)
   , ("method2", undefined) ]

 test :: [String]
 test = pmap methods
   where pmap = map fst

But when I change 'test' to:

 test :: [String]
 test = map fst methods

I get:
Cannot match a monotype with `forall b. (Eq b) => b'
  Expected type: [(String, b)]
  Inferred type: [(String, forall b1. (Eq b1) => b1)]
In the second argument of `map', namely `methods'
In the expression: map fst methods
Failed, modules loaded: none.



-- 
View this message in context: 
http://old.nabble.com/type-class-constraints-headache-tp27752745p27779518.html
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.



Re: [Haskell-cafe] Re: How do you set up windows enviroment

2010-03-04 Thread Maciej Piechotka
On 04/03/2010 07:54, Jonas Almström Duregård wrote:
 I suppose you are already using the Haskell Platform?
 
 I had problems with darcs freezing, and it turned out that the ssh
 client was actually prompting for something. Is the host you are
 connected to in your list of trusted hosts? Also i had a similar
 problem when using Pageant to manage SSH-keys, and the command prompt
 was run in privileged mode.
 
 /Jonas Duregård
 


Hmm. I installed openssh, added its key to trusted hosts and removed
putty from path. Now it complains there is no _darcs/inventory in
repository (there is inventories directory and hashed_inventory file).
Hmm... some progress.

Regards


Re: [Haskell-cafe] Re: How do you set up windows enviroment

2010-03-04 Thread Jonas Almström Duregård
If you use putty instead and type
ssh youru...@yourrepository.com
Does it connect properly?

This is from the patch-tag website:

Problems pushing or pulling
darcs failed: Not a repository: myu...@patch-tag.com:/r/myrepo
((scp) failed to fetch: myu...@patch-tag.com:/r/myrepo/_darcs/inventory)
Did you get an error like that? That's weird. Email the output of

ssh -v myu...@patch-tag.com
to t...@patch-tag.com so we can diagnose and resolve the problem. (Note: the 
-v flag to ssh gives verbose output.)

/Jonas Duregård

2010/3/4 Maciej Piechotka uzytkown...@gmail.com:
 On 04/03/2010 07:54, Jonas Almström Duregård wrote:
 I suppose you are already using the Haskell Platform?

 I had problems with darcs freezing, and it turned out that the ssh
 client was actually prompting for something. Is the host you are
 connected to in your list of trusted hosts? Also i had a similar
 problem when using Pageant to manage SSH-keys, and the command prompt
 was run in privileged mode.

 /Jonas Duregård



 Hmm. I installed openssh, added its key to trusted hosts and removed
 putty from path. Now it complains there is no _darcs/inventory in
 repository (there is inventories directory and hashed_inventory file).
 Hmm... some progress.

 Regards



[Haskell-cafe] ANN: iteratee-parsec 0.0.2

2010-03-04 Thread Maciej Piechotka
Iteratee-parsec is a library which allows one to run a Parsec (3) parser in
the IterateeG monad.

It contains 2 implementations:
- John Lato's, in the public domain. It is based on monoids and designed with
short parsers in mind.
- Mine, under MIT. It is based on a singly-linked mutable list. It seems to be
significantly faster for larger parsers - at least in some cases - but
it requires a monad with references (such as, for example, IO or ST).

Version 0.0.2 does not differ much from 0.0.1 except that it is
up-to-date with parsec 3.1.0 (version 0.0.1 is not).

Regards




[Haskell-cafe] Re: Re: How do you set up windows enviroment

2010-03-04 Thread Maciej Piechotka
On Thu, 2010-03-04 at 14:16 +0100, Jonas Almström Duregård wrote:
 If you use putty instead and type
 ssh youru...@yourrepository.com
 Does it connect properly?
 

Yes

 This is from the patch-tag website:
 
 Problems pushing or pulling
 darcs failed: Not a repository: myu...@patch-tag.com:/r/myrepo
 ((scp) failed to fetch: myu...@patch-tag.com:/r/myrepo/_darcs/inventory)
 Did you get an error like that? That's weird. Email the output of
 
 ssh -v myu...@patch-tag.com
 to t...@patch-tag.com so we can diagnose and resolve the problem. (Note: the 
 -v flag to ssh gives verbose output.)
 
 /Jonas Duregård

I'll check but I have code on code.haskell.org.

Regards

PS. I guess that someone who complains about the underdevelopment of the
shell on Windows is well aware what -v means ;)




Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Peter Verswyvelen
I just converted an old HOpenGL application of mine to the new Haskell
OpenGL using GHC 6.12.1, using realToFrac to convert Double to
GLdouble.

The performance dropped from over 800 frames per second to 10 frames
per second... Using unsafeCoerce I got 800 FPS again.

So for all of you using the new OpenGL package, be warned about this: it
can really kill performance (it's a known issue to those who already
knew it ;-)

I can't use the logfloat package's realToFrac function since it complains:

ElasticCollision.hs:317:28:
No instance for (Data.Number.Transfinite.Transfinite GL.GLdouble)
  arising from a use of `realToFrac' at ElasticCollision.hs:317:28-39
Possible fix:
  add an instance declaration for
  (Data.Number.Transfinite.Transfinite GL.GLdouble)
In the first argument of `Vertex2', namely `(realToFrac x)'
In the expression: Vertex2 (realToFrac x) (realToFrac y)
In the definition of `glVertex2':
glVertex2 x y = Vertex2 (realToFrac x) (realToFrac y)


On Wed, Sep 30, 2009 at 4:06 PM, Peter Verswyvelen bugf...@gmail.com wrote:
 I don't want to use the GL types directly since the OpenGL renderer is not
 exposed in the rest of the API.
 I was hoping that realToFrac would be a no-op in case it would be identical to
 an unsafeCoerce.
 I guess one could make rules for that, but this ticket makes me wonder if
 that really works:
 http://hackage.haskell.org/trac/ghc/ticket/1434


 On Wed, Sep 30, 2009 at 4:58 PM, Roel van Dijk vandijk.r...@gmail.com
 wrote:

 If you are *really* sure that the runtime representation is the same
 you could use unsafeCoerce. You could use a small test function for
 profiling, something like:

 convertGLfloat :: GLfloat -> Float
 convertGLfloat = realToFrac
 -- convertGLfloat = unsafeCoerce

 and toggle between the two (assuming you won't get a segmentation fault).

 Another option is to not convert at all but use the GL types
 everywhere. Either explicitly or by exploiting polymorphism.




Re: [Haskell-cafe] Haskell platform for GHC 6.12.1?

2010-03-04 Thread Job Vranish
I'm pretty sure you don't need mingw and all that. I've bootstrapped
cabal-install on windows a few times now without needing anything more than
ghc (though I haven't done 6.12 yet so I might be totally off base here...)

You can't use the nice bootstrap script, but you can download and build the
dependencies manually (and IIRC there are about 5 or so that don't come
included). Which is still a royal pain, but hopefully easier than all that
other messiness.

- Job

On Thu, Mar 4, 2010 at 4:38 AM, Peter Verswyvelen bugf...@gmail.com wrote:

 Using GHC 6.12.1 on Windows currently is hard, since one must compile
 the latest version of cabal-install, which is a nightmare to do for a
 typical windows user (install mingw, msys, utils like wget, download
 correct package from hackage, compile them in correct order, etc etc)

 What's the status of the Haskell platform for the latest and greatest
 Glasgow Haskell Compiler?



Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Daniel Fischer
On Thursday 04 March 2010 14:55:30, Peter Verswyvelen wrote:
 I just converted an old HOpenGL application of mine to the new Haskell
 OpenGL using GHC 6.12.1, using realToFrac to convert Double to
 GLdouble.

 The performance dropped from over 800 frames per second to 10 frames
 per second... Using unsafeCoerce I got 800 FPS again.

Yes, without rules, realToFrac = fromRational . toRational.


 So for all of you using new OpenGL package, be warned about this, it
 can really kill performance (it's a known issue to those how already
 knew it ;-)

I think one would have to add {-# RULES #-} pragmas to 
Graphics.Rendering.OpenGL.Raw.Core31.TypesInternal, along the lines of

{-# RULES
"realToFrac/CDouble->GLdouble"  realToFrac x = GLdouble x
"realToFrac/GLdouble->CDouble"  realToFrac (GLdouble x) = x
  #-}

(There are corresponding rules for Double->CDouble and CDouble->Double in 
Foreign.C.Types, so I think no rules for Double->GLdouble are needed.)


 On Wed, Sep 30, 2009 at 4:06 PM, Peter Verswyvelen bugf...@gmail.com 
wrote:
  I don't want to use the GL types directly since the OpenGL renderer is
  not exposed in the rest of the API.
  I was hoping that realToFrac would be a no-op in case it would be
  identical to an unsafeCoerce.
  I guess one could make rules for that, but this ticket makes me
  wonder if that really works:
  http://hackage.haskell.org/trac/ghc/ticket/1434
 

Well, someone would have to add the rules.

 
  On Wed, Sep 30, 2009 at 4:58 PM, Roel van Dijk
  vandijk.r...@gmail.com
 
  wrote:
  If you are *really* sure that the runtime representation is the same

Yup, CDouble is a newtype wrapper for Double, GLdouble is the same newtype 
wrapper for CDouble.

  you could use unsafeCoerce. You could use a small test function for
  profiling, something like:
 
  convertGLfloat :: GLfloat -> Float
  convertGLfloat = realToFrac
  -- convertGLfloat = unsafeCoerce
 
  and toggle between the two (assuming you won't get a segmentation
  fault).
 
  Another option is to not convert at all but use the GL types
  everywhere. Either explicitly or by exploiting polymorphism.



Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Nick Bowler
On 16:20 Thu 04 Mar , Daniel Fischer wrote:
 Yes, without rules, realToFrac = fromRational . toRational.

snip

 I think one would have to add {-# RULES #-} pragmas to 
 Graphics.Rendering.OpenGL.Raw.Core31.TypesInternal, along the lines of
 
 {-# RULES
 "realToFrac/CDouble->GLdouble"  realToFrac x = GLdouble x
 "realToFrac/GLdouble->CDouble"  realToFrac (GLdouble x) = x
   #-}

These rules are, alas, *not* equivalent to fromRational . toRational.

Unfortunately, realToFrac is quite broken with respect to floating point
conversions, because fromRational . toRational is entirely the wrong
thing to do.  I've tried to start some discussion on the haskell-prime
mailing list about fixing this wart.  In the interim, the OpenGL package
could probably provide its own CDouble<->GLdouble conversions, but sadly
the only way to correctly perform Double<->CDouble is unsafeCoerce.
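The wart is easy to observe directly: the trip through Rational destroys IEEE special values. A minimal demonstration using only the Prelude (the exact finite value produced is implementation-dependent, but it is never a NaN):

```haskell
-- Without rewrite rules, realToFrac = fromRational . toRational,
-- and the detour through Rational mangles IEEE special values.
viaRational :: Double -> Double
viaRational = fromRational . toRational

main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  print (isNaN nan)                -- True
  print (isNaN (viaRational nan))  -- False: the NaN did not survive
```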

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Profiling help (Warning: Euler spoilers)

2010-03-04 Thread Daniel Fischer
On Thursday 04 March 2010 16:07:51, Louis Wasserman wrote:
 Actually, looking back, I'm not sure mapM is even the right choice.
 I think foldM would suffice. All we're doing is finding the association
 pair with the minimum key, no?  In this case, foldM would do everything
 we need to...and State.Strict would be pretty good at that.

Yes, it would (much much better than C.M.S.Lazy). And it would be an 
improvement over the original, but not much.

The real problem is that Data.Map isn't well suited for this task. 
Inserting n key - value associations into an initially empty Map takes 
O(n*log n) time. Since here the keys have a tendency to come in increasing 
order, there are a lot of rebalancings necessary, giving extra bad 
constants on top.

What one wants here is a data structure with O(1) access and update for the 
cache. Enter STUArray. Of course, now comes the difficulty that you don't 
know how large your numbers will become (56991483520, you probably don't 
have enough RAM for such a large array), so you have to choose a cutoff and 
decide what to do when you exceed that. That makes the code more 
convoluted, but a lot faster.
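To make this concrete, here is a minimal sketch of the STUArray idea, assuming the problem in question is the classic longest-Collatz-chain one (the names, the cutoff policy, and the fallback are illustrative, not anyone's actual solution): chain lengths for arguments under the cutoff are cached in the unboxed array, and larger intermediate values simply recurse past the cache.

```haskell
import Control.Monad (foldM)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

-- Fresh cache for chain lengths; 0 marks "not yet computed".
newCache :: Int -> ST s (STUArray s Int Int)
newCache limit = newArray (1, limit) 0

-- Longest Collatz chain over starting values [1 .. limit], returned
-- as (starting value, chain length). Arguments under the cutoff get
-- O(1) cached reads/writes; larger intermediates recurse past it.
collatzMax :: Int -> (Int, Int)
collatzMax limit = runST $ do
  cache <- newCache limit
  writeArray cache 1 1
  let step n = if even n then n `div` 2 else 3 * n + 1
      chain n
        | n <= limit = do
            c <- readArray cache n
            if c > 0
              then return c                        -- cache hit
              else do
                c' <- fmap (+ 1) (chain (step n))
                writeArray cache n c'              -- cache update
                return c'
        | otherwise = fmap (+ 1) (chain (step n))  -- past the cutoff
  foldM (\best@(_, bl) n -> do
           l <- chain n
           return (if l > bl then (n, l) else best))
        (1, 1) [2 .. limit]
```

For starting values under 10, for instance, this finds 9 with a chain of length 20.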



[Haskell-cafe] How to break strings and put to a tuple

2010-03-04 Thread Pradeep Wickramanayake
Hi,

 

sortList2 :: String -> String
sortList2 (x:xs)
| x == ',' = ""
| otherwise = [x] ++ sortList2 xs

 

I'm breaking one specific string and putting the pieces into separate words. But I need
to put them into a tuple.

Can someone help me with the code

 

Please. 
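For what it's worth, one common way to turn a string split at the first comma into a pair is the Prelude's `break`; a sketch, assuming a split at the first comma is what's wanted:

```haskell
-- Split a string at the first comma into a pair, dropping the comma,
-- e.g. splitPair "abc,def" gives ("abc", "def").
splitPair :: String -> (String, String)
splitPair s = case break (== ',') s of
  (before, _comma : rest) -> (before, rest)
  (before, [])            -> (before, "")  -- no comma present

main :: IO ()
main = print (splitPair "hello,world")  -- prints ("hello","world")
```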



Re: [Haskell-cafe] How to break strings and put to a tuple

2010-03-04 Thread John Van Enk
These are beginning to look like homework questions...

On Thu, Mar 4, 2010 at 11:38 AM, Pradeep Wickramanayake prad...@talk.lkwrote:

  Hi,



 sortList2 :: String -> String
 sortList2 (x:xs)
 | x == ',' = ""
 | otherwise = [x] ++ sortList2 xs



 I'm breaking one specific string and putting the pieces into separate words. But I need
 to put them into a tuple.

 Can someone help me with the code



 Please.







[Haskell-cafe] Re: How do you rewrite your code?

2010-03-04 Thread Ertugrul Soeylemez
Sean Leather leat...@cs.uu.nl wrote:

 My question is simple:

*How do you rewrite your code to improve it?*

The short answer is:  I don't.

Long answer:  In the Haskell community there is a strong bias towards
making your code as generic and abstract as possible.  That's not
because it is the recommended coding style (there is no official
recommendation here), but simply because you can do it in Haskell and
more importantly you can do it _easily_ compared to other languages,
without (necessarily) making your code obscure.

Genericity and abstraction are supposed to make coding more efficient,
and if you understand them well, they do just that.  However, there is a
catch.  If you do _not_ understand them well in your language, they will
make you less efficient at first.  As a hobby Haskell programmer of
course you're happy with that, but you will get your code done more slowly
than people who 'just do it'.

I'm someone who likes to get things done, so I've found it's best for me
not to improve the style/genericity of existing code, unless there is a
reason to do it.  But I do improve my understanding of the language and
the abstractions it provides, so in general I can say that my code is
written in the best style and the highest genericity I understand.


 What's an example of a rewrite that you've encountered?

For example as a beginner, when I didn't understand the list monad or
how folds work, my implementation of the 'subsequences' function looked
like this:

  subsets :: [a] -> [[a]]
  subsets [] = [[]]
  subsets (x:xs) = ys ++ map (x:) ys   where ys = subsets xs

When I started to comprehend folds, the function started to look more
like this:

  subsets :: [a] -> [[a]]
  subsets = foldr (\x xs -> xs ++ map (x:) xs) [[]]

Finally now that I understand the list monad very well, my
implementation looks like this:

  subsets :: [a] -> [[a]]
  subsets = filterM (const [True, False])

Or even like this:

  subsets :: MonadPlus m => [a] -> m [a]
  subsets = filterM (const . msum . map return $ [True, False])

Note that I have never rewritten an existing 'subsets' function and
today I would just use Data.List.subsequences, which was added recently.

So my conclusion is:  In production code don't worry too much about it,
just write your code and don't stop learning.  If your code works and is
readable, there is no need to make it look nicer or more abstract.  If
you have to rewrite or change the function, you can still make it
better.

The exception is:  If you code for fun or exercise, there is nothing
wrong with playing with abstractions and enhancing your skills in a fun
way.

As a side note:  I have found that many people don't understand the
filterM-based solution.  That's because many people don't understand the
list monad and the power of the monadic interface.  So if you work in a
group, either don't write code like this or preferably explain monads to
your groupmates.
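For those unfamiliar with it, the behaviour of the filterM-based version is easy to see in a small, self-contained demonstration:

```haskell
import Control.Monad (filterM)

-- In the list monad, the predicate (const [True, False]) offers both
-- "keep" and "drop" for every element, so filterM enumerates every
-- combination of choices, i.e. every subset.
main :: IO ()
main = print (filterM (const [True, False]) [1, 2 :: Int])
-- prints [[1,2],[1],[2],[]]
```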


Greets
Ertugrul


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Tom Tobin
After politely pestering them again, I finally heard back from the
Software Freedom Law Center regarding our GPL questions (quoted
below).

I exchanged several emails to clarify the particular issues; in short,
the answers are No, No, N/A, and N/A.  The SFLC holds that a
library that depends on a GPL'd library must in turn be GPL'd, even if
the library is only distributed as source and not in binary form.
They offered to draft some sort of explicit response if we'd find it
useful.

Maybe it would be useful if Cabal had some sort of licensing check
command that could be run on a .cabal file, and warn an author if any
libraries it depends on (directly or indirectly) are GPL'd but the
.cabal itself does not have the license set to GPL.


On Fri, Dec 11, 2009 at 10:21 PM, Tom Tobin korp...@korpios.com wrote:
 I'd like to get these questions out to the SFLC so we can satisfy our
 curiosity; at the moment, here's what I'd be asking:

 Background: X is a library distributed under the terms of the GPL. Y
 is another library which calls external functions in the API of X, and
 requires X to compile.  X and Y have different authors.

 1) Can the author of Y legally distribute the *source* of Y under a
 non-GPL license, such as the 3-clause BSD license or the MIT license?

 2) If the answer to 1 is no, is there *any* circumstance under which
 the author of Y can distribute the source of Y under a non-GPL
 license?

 3) If the answer to 1 is yes, what specifically would trigger the
 redistribution of a work in this scenario under the GPL?  Is it the
 distribution of X+Y *together* (whether in source or binary form)?

 4) If the answer to 1 is yes, does this mean that a BSD-licensed
 library does not necessarily mean that closed-source software can be
 distributed which is based upon such a library (if it so happens that
 the library in turn depends on a copylefted library)?

 By the way, apologies to the author of Hakyll — I'm sure this isn't
 what you had in mind when you announced your library!  I'm just hoping
 that we can figure out what our obligations are based upon the GPL,
 since I'm not so sure myself anymore.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ParsecT bug [was: ANNOUNCE: Parsec 3.1.0]

2010-03-04 Thread Roman Cheplyaka
By coincidence, today I found a bug in parsec 3.0.[01]. Probably due
to the changes introduced, the bug is absent in parsec 3.1.0.

I think it is worth releasing 3.0.2 with this bug fixed.

The bug itself is demonstrated in the following code. It gives
Right (False,True) with parsec-3.0.x while it should give Right (False,False).

And, by the way, does parsec have any code repository and/or bug tracker?

 import Text.Parsec hiding (parse)
 import Control.Monad.Reader

 type Parser = ParsecT String () (Reader Bool)

 change :: Parser a -> Parser a
 change p = local (const False) p

 p = change $ do
     was <- ask
     anyChar
     now <- ask
     return (was,now)

 parse :: Parser a -> SourceName -> String -> Either ParseError a
 parse p name s = runReader (runPT p () name s) True

 main = print $ parse p "" "a"

* Derek Elkins derek.a.elk...@gmail.com [2010-03-03 22:45:12-0600]
 Changes:
 - the changes to the core of Parsec lead to some changes to when
 things get executed when it is used as a monad transformer
 In the new version bind, return and mplus no longer run in
 the inner monad, so if the inner monad was side-effecting for these
 actions the behavior of existing code will change.
 - notFollowedBy p now behaves like notFollowedBy (try p) which
 changes the behavior slightly when p consumes input, though the
 behavior should be more natural now.
 - the set of names exported from Text.Parsec.Prim has changed somewhat


-- 
Roman I. Cheplyaka :: http://ro-che.info/
Don't let school get in the way of your education. - Mark Twain
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Achim Schneider
Tom Tobin korp...@korpios.com wrote:

 Maybe it would be useful if Cabal had some sort of licensing check
 command that could be run on a .cabal file, and warn an author if any
 libraries it depends on (directly or indirectly) are GPL'd but the
 .cabal itself does not have the license set to GPL.

Or are dual-licensed under GPL. That is, the license field in .cabals
should take a list or even bool ops, and cabal sdist should utterly
fail (as well as hackage) if code that depends on GPL isn't marked as
GPL.

Note that this is a safety measure for the submitter: If the code is,
indeed, released to the public, it is (dual-licensed) GPL anyway, even
if that might not have been the intent.

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread MightyByte
Interesting.  It seems to me that the only solution for the
BSD-oriented haskell community is to practically boycott GPL'd
libraries.  From what I understand, this is exactly what the LGPL is
for.  I've known the basic idea behind the GPL/LGPL distinction for
quite awhile, but I didn't realize that mistaking the two had such
far-ranging consequences.  Since GPL seems to be the big elephant in
the room, it seems very easy to make this mistake.  At the very least
we should try to educate the community about this.


On Thu, Mar 4, 2010 at 12:34 PM, Tom Tobin korp...@korpios.com wrote:
 After politely pestering them again, I finally heard back from the
 Software Freedom Law Center regarding our GPL questions (quoted
 below).

 I exchanged several emails to clarify the particular issues; in short,
 the answers are No, No, N/A, and N/A.  The SFLC holds that a
 library that depends on a GPL'd library must in turn be GPL'd, even if
 the library is only distributed as source and not in binary form.
 They offered to draft some sort of explicit response if we'd find it
 useful.

 Maybe it would be useful if Cabal had some sort of licensing check
 command that could be run on a .cabal file, and warn an author if any
 libraries it depends on (directly or indirectly) are GPL'd but the
 .cabal itself does not have the license set to GPL.


 On Fri, Dec 11, 2009 at 10:21 PM, Tom Tobin korp...@korpios.com wrote:
 I'd like to get these questions out to the SFLC so we can satisfy our
 curiosity; at the moment, here's what I'd be asking:

 Background: X is a library distributed under the terms of the GPL. Y
 is another library which calls external functions in the API of X, and
 requires X to compile.  X and Y have different authors.

 1) Can the author of Y legally distribute the *source* of Y under a
 non-GPL license, such as the 3-clause BSD license or the MIT license?

 2) If the answer to 1 is no, is there *any* circumstance under which
 the author of Y can distribute the source of Y under a non-GPL
 license?

 3) If the answer to 1 is yes, what specifically would trigger the
 redistribution of a work in this scenario under the GPL?  Is it the
 distribution of X+Y *together* (whether in source or binary form)?

 4) If the answer to 1 is yes, does this mean that a BSD-licensed
 library does not necessarily mean that closed-source software can be
 distributed which is based upon such a library (if it so happens that
 the library in turn depends on a copylefted library)?

 By the way, apologies to the author of Hakyll — I'm sure this isn't
 what you had in mind when you announced your library!  I'm just hoping
 that we can figure out what our obligations are based upon the GPL,
 since I'm not so sure myself anymore.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Stephen Tetley
Hi Tom

Hmm, it seems I'm due to eat my hat...

To me though, the judgement insists that using an API makes a
derivative work. I can't see how that squares up.

Before I eat a hat, I'll wait for the explicit response if you don't mind.

Best wishes

Stephen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Robert Greayer
Before taking any action with respect to cabal or hackage, etc., I'd
think people would want to see their explicit response.

On Thu, Mar 4, 2010 at 12:34 PM, Tom Tobin korp...@korpios.com wrote:
 After politely pestering them again, I finally heard back from the
 Software Freedom Law Center regarding our GPL questions (quoted
 below).

 I exchanged several emails to clarify the particular issues; in short,
 the answers are No, No, N/A, and N/A.  The SFLC holds that a
 library that depends on a GPL'd library must in turn be GPL'd, even if
 the library is only distributed as source and not in binary form.
 They offered to draft some sort of explicit response if we'd find it
 useful.

 Maybe it would be useful if Cabal had some sort of licensing check
 command that could be run on a .cabal file, and warn an author if any
 libraries it depends on (directly or indirectly) are GPL'd but the
 .cabal itself does not have the license set to GPL.


 On Fri, Dec 11, 2009 at 10:21 PM, Tom Tobin korp...@korpios.com wrote:
 I'd like to get these questions out to the SFLC so we can satisfy our
 curiosity; at the moment, here's what I'd be asking:

 Background: X is a library distributed under the terms of the GPL. Y
 is another library which calls external functions in the API of X, and
 requires X to compile.  X and Y have different authors.

 1) Can the author of Y legally distribute the *source* of Y under a
 non-GPL license, such as the 3-clause BSD license or the MIT license?

 2) If the answer to 1 is no, is there *any* circumstance under which
 the author of Y can distribute the source of Y under a non-GPL
 license?

 3) If the answer to 1 is yes, what specifically would trigger the
 redistribution of a work in this scenario under the GPL?  Is it the
 distribution of X+Y *together* (whether in source or binary form)?

 4) If the answer to 1 is yes, does this mean that a BSD-licensed
 library does not necessarily mean that closed-source software can be
 distributed which is based upon such a library (if it so happens that
 the library in turn depends on a copylefted library)?

 By the way, apologies to the author of Hakyll — I'm sure this isn't
 what you had in mind when you announced your library!  I'm just hoping
 that we can figure out what our obligations are based upon the GPL,
 since I'm not so sure myself anymore.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Parsec 3.1.0

2010-03-04 Thread Job Vranish
Sweet :)

I'm glad that notFollowedBy has been fixed. I've often had to redefine it
because the type was too restrictive.

- Job

On Wed, Mar 3, 2010 at 11:45 PM, Derek Elkins derek.a.elk...@gmail.com wrote:

 Parsec is a monadic combinator library that is well-documented, simple
 to use, and produces good error messages.   Parsec is not inherently
 lazy/incremental and is not well-suited to handling large quantities
 of simply formatted data.  Parsec 3 adds to Parsec the ability to use
 Parsec as a monad transformer and generalizes the input Parsec
 accepts.  Parsec 3 includes a compatibility layer for Parsec 2 and
 should be a drop-in replacement for code using Parsec 2.  Code using
 the features of Parsec 3 should use the modules in Text.Parsec.

 Due almost entirely to the work of Antoine Latter there is a new
 version of Parsec 3 available.  He documented some of his thoughts on
 this in this series of blog posts:
 http://panicsonic.blogspot.com/2009/12/adventures-in-parsec.html

 The main features of this release are:
- the performance should be much better and comparable to Parsec 2
- notFollowedBy's type and behavior have been generalized

 Changes:
- the changes to the core of Parsec lead to some changes to when
 things get executed when it is used as a monad transformer
In the new version bind, return and mplus no longer run in
 the inner monad, so if the inner monad was side-effecting for these
 actions the behavior of existing code will change.
- notFollowedBy p now behaves like notFollowedBy (try p) which
 changes the behavior slightly when p consumes input, though the
 behavior should be more natural now.
- the set of names exported from Text.Parsec.Prim has changed somewhat
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Benchmarking and Garbage Collection

2010-03-04 Thread Neil Brown

Hi,

I'm looking at benchmarking several different concurrency libraries 
against each other.  The benchmarks involve things like repeatedly 
sending values between two threads.  I'm likely to use Criterion for the 
task.


However, one thing I've found is that the libraries have noticeably 
different behaviour in terms of the amount of garbage created.  
Criterion has an option to perform GC between each benchmark, but I 
think that the benchmark is only fair if it takes into account the GC 
time for each system; it doesn't seem right for two systems to be 
counted as equal if the times to get the results are the same, but then 
one has to spend twice as long as the other in GC afterwards.  Here's 
some options I've considered:


* I could make a small change to Criterion to include the time for 
performing GC in each benchmark run, but I worry that running the GC so 
often is also misrepresentative (might 100 small GCs take a lot longer 
than one large GC of the same data?) -- it may also add a lot of 
overhead to quick benchmarks, but I can avoid that problem.


* Alternatively, I could run the GC once at the end of all the runs, 
then apportion the cost equally to each of the benchmark times (so if 
100 benchmarks require 0.7s of GC, add 0.007s to each time) -- but if GC 
is triggered somewhere in the middle of the runs, that upsets the 
strategy a little.


* I guess a further alternative is to make each benchmark a whole 
program (and decently long), then just time the whole thing, rather than 
using Criterion.


Has anyone run into these issues before, and can anyone offer an opinion 
on what the best option is?


Thanks,

Neil.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector stream fusion, inlining and compilation time

2010-03-04 Thread Don Stewart
sk:
 hi,
 
 two questions in one post:
 
 i've been hunting down some performance problems in DSP code using vector and
 the single most important transformation seems to be throwing in INLINE 
 pragmas
 for any function that uses vector combinators and is to be called from
 higher-level code. failing to do so seems to prevent vector operations from
 being fused and results in big performance hits (the good news is that the
 optimized code is quite competitive). does anybody have some more info about 
 the
 do's and don'ts when programming with vector?

Always inline any combination of things that are expressed in terms of
vector combinators, so that the combination of your code can hope to
fuse as well.
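The pragma placement looks like this. (To keep the example dependency-free it uses plain list combinators rather than vector, but the pattern is the same: without the INLINE pragma, a small wrapper can become a fusion barrier at its call sites. The function name is illustrative.)

```haskell
-- A small combination of combinators, marked INLINE so it can be
-- inlined and fused into whatever pipeline calls it.
normalise :: Double -> [Double] -> [Double]
normalise k = map (/ k) . filter (> 0)
{-# INLINE normalise #-}

main :: IO ()
main = print (sum (normalise 2 [-1, 2, 4]))
-- prints 3.0
```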

 the downside after adding the INLINE pragmas is that now some of my modules 
 take
 _really_ long to compile (up to a couple of minutes); any ideas where i can
 start looking to bring the compilation times down again?

I'm not sure there's much we can do there.

 i'm compiling with -O2 -funbox-strict-fields instead of -Odph (with ghc 6.10.4
 on OSX 10.4), because it's faster for some of my code, but -O2 vs. -Odph 
 doesn't
 make a noticable difference in compilation time.

-Odph should make it easier for some things to fuse -- and get better
code. But Roman can say more.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell platform for GHC 6.12.1?

2010-03-04 Thread Don Stewart
bugfact:
 Using GHC 6.12.1 on Windows currently is hard, since one must compile
 the latest version of cabal-install, which is a nightmare to do for a
 typical windows user (install mingw, msys, utils like wget, download
 correct package from hackage, compile them in correct order, etc etc)
 
 What's the status of the Haskell platform for the latest and greatest
 Glasgow Haskell Compiler?

We've just entered the release phase,

http://trac.haskell.org/haskell-platform/wiki/ReleaseTimetable

And you can expect 2010.2 during ZuriHac.

Thanks for checking in!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Nick Bowler
On 17:45 Thu 04 Mar , Daniel Fischer wrote:
 On Thursday, 4 March 2010, 16:45:03, Nick Bowler wrote:
  On 16:20 Thu 04 Mar , Daniel Fischer wrote:
   Yes, without rules, realToFrac = fromRational . toRational.
 
  snip
 
   I think one would have to add {-# RULES #-} pragmas to
   Graphics.Rendering.OpenGL.Raw.Core31.TypesInternal, along the lines of
  
   {-# RULES
   "realToFrac/CDouble->GLdouble" forall x. realToFrac x = GLdouble x
   "realToFrac/GLdouble->CDouble" forall x. realToFrac (GLdouble x) = x
 #-}
 
  These rules are, alas, *not* equivalent to fromRational . toRational.
 
 But these rules are probably what one really wants for a [C]Double - 
 GLdouble conversion.

I agree that the conversions described by the rules are precisely what
one really wants.  However, this doesn't make them valid rules for
realToFrac, because they do not do the same thing as realToFrac does.
They break referential transparency by allowing to write functions whose
behaviour depends on whether or not realToFrac was inlined by the ghc
(see below for an example).

  Unfortunately, realToFrac is quite broken with respect to floating point
  conversions, because fromRational . toRational is entirely the wrong
  thing to do.
 
 entirely? For
 
 realToFrac :: (Real a, Fractional b) => a -> b
 
 I think you can't do much else that gives something more or less 
 reasonable. For (almost?) any concrete conversion, you can do something 
 much better (regarding performance and often values), but I don't think 
 there's a generic solution.

Sorry, I guess I wasn't very clear.  I didn't mean to say that
fromRational . toRational is a bad implementation of realToFrac.  I
meant to say that fromRational . toRational is not appropriate for
converting values from one floating point type to another floating point
type.  Corollary: realToFrac is not appropriate for converting values
from one floating point type to another floating point type.

The existence of floating point values which are not representable in a
rational causes problems when you use toRational in a conversion.  See
the recent discussion on the haskell-prime mailing list

  http://thread.gmane.org/gmane.comp.lang.haskell.prime/3146

or the trac ticket on the issue

  http://hackage.haskell.org/trac/ghc/ticket/3676

for further details.
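The failure is easy to reproduce directly, without involving realToFrac's rewrite rules at all:

```haskell
-- fromRational . toRational cannot round-trip -0.0: toRational maps
-- it to the Rational 0 % 1, which carries no sign, and fromRational
-- then produces positive zero. (NaN and the infinities have related
-- problems.)
roundTrip :: Double -> Double
roundTrip = fromRational . toRational

main :: IO ()
main = do
  print (isNegativeZero (-0.0 :: Double))   -- True
  print (isNegativeZero (roundTrip (-0.0))) -- False: the sign is lost
```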

  I've tried to start some discussion on the haskell-prime
  mailing list about fixing this wart.  In the interim, the OpenGL package
  could probably provide its own CDouble<->GLdouble conversions, but sadly
 
 s/could/should/, IMO.
 
  the only way to correctly perform Double<->CDouble is unsafeCoerce.
 
 Are you sure? In Foreign.C.Types, I find
 
 {-# RULES
  "realToFrac/a->CFloat"    realToFrac = \x -> CFloat   (realToFrac x)
  "realToFrac/a->CDouble"   realToFrac = \x -> CDouble  (realToFrac x)

  "realToFrac/CFloat->a"    realToFrac = \(CFloat   x) -> realToFrac x
  "realToFrac/CDouble->a"   realToFrac = \(CDouble  x) -> realToFrac x
  #-}

Even though these are the conversions we actually want to do, these
rules are also invalid.  I'm not at all surprised to see this, since we
have the following:

 {-# RULES
  "realToFrac/Double->Double" realToFrac = id :: Double -> Double
   #-}
 
 (why isn't that in GHC.Real, anyway?), it should do the correct thing - not 
 that it's prettier than unsafeCoerce.

This rule does exist, in GHC.Float (at least with 6.12.1), and is
another bug.  It does the wrong thing because fromRational . toRational
:: Double - Double is *not* the identity function on Doubles.  As
mentioned before, the result is that we can write programs which behave
differently when realToFrac gets inlined.

Try using GHC to compile the following program with and without -O:

  compiledWithOptimisation :: Bool
  compiledWithOptimisation = isNegativeZero . realToFrac $ -0.0

  main :: IO ()
  main = putStrLn $ if compiledWithOptimisation
      then "Optimised :)"
      else "Not optimised :("

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell platform for GHC 6.12.1?

2010-03-04 Thread Andrew Coppin

Peter Verswyvelen wrote:

Using GHC 6.12.1 on Windows currently is hard, since one must compile
the latest version of cabal-install, which is a nightmare to do for a
typical windows user


Out of curiosity, why do you need to compile the latest version of 
cabal-install? Why can't you use the precompiled one?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Daniel Fischer
On Thursday, 4 March 2010, 19:25:43, Nick Bowler wrote:
 I agree that the conversions described by the rules are precisely what
 one really wants.  However, this doesn't make them valid rules for
 realToFrac, because they do not do the same thing as realToFrac does.
 They break referential transparency by allowing to write functions whose
 behaviour depends on whether or not realToFrac was inlined by the ghc
 (see below for an example).


You're absolutely right, of course. The clean way would be for the modules 
defining the newtype wrappers to define and export the desired conversion 
functions. Without that, you can only choose between incorrect-but-working-
as-intended rewrite rules and unsafeCoerceButUsedSafelyHere. I don't like 
either very much.

 Sorry, I guess I wasn't very clear.  I didn't mean to say that
 fromRational . toRational is a bad implementation of realToFrac.  I
 meant to say that fromRational . toRational is not appropriate for
 converting values from one floating point type to another floating point
 type.  Corollary: realToFrac is not appropriate for converting values
 from one floating point type to another floating point type.

Agreed.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GPL answers from the SFLC

2010-03-04 Thread Stefan Monnier
 Note that this is a safety measure for the submitter: If the code is,
 indeed, released to the public, it is (dual licesed) GPL, anyway, even
 if that might not have been the intent.

No.  If the submitter did not explicitly release his code under the GPL,
then it is not licensed under the GPL, even if it is a derivative of
GPL'd code.  Instead, it is a breach of the GPL license, and the submitter
is exposing himself to a civil suit.


Stefan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GPL answers from the SFLC

2010-03-04 Thread Stefan Monnier
 The next question that comes to mind is thus:
 What if a new library X' released under BSD or MIT license implements
 the X API (making possible to compile Y against it)? Can such a new
 library X' be licensed under something else than the GPL (we guess Yes
 because we don't think it is possible to license the API itself)?

Yes.

 Why should the existence of X' make any difference for the author
 of Y?

Because the existence of X' makes it possible to use Y without using X.
The order in which X and X' come to exist doesn't matter.

This exact scenario took place for the GMP library, whose API was
reimplemented as fgmp, specifically so that a user of the GMP
library could release their code under a different license than the GPL.


Stefan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-04 Thread Don Stewart
cjs:
 On 2010-03-01 19:37 + (Mon), Thomas Schilling wrote:
 
  A possible workaround would be to sprinkle lots of 'rnf's around your
  code
 
 As I learned rather to my chagrin on a large project, you generally
 don't want to do that. I spent a couple of days writing instance
 of NFData and loading up my application with rnfs and then watched
 performance fall into a sinkhole.
 
 I believe that the problem is that rnf traverses the entirety of a large
 data structure even if it's already strict and doesn't need traversal.
 My guess is that doing this frequently on data structures (such as Maps)
 of less than tiny size was blowing out my cache.

And rnf will do the traversal whether it is needed or not.
Imo, it is better  to ensure the structures you want are necessarily
strict by definition, so that only the minimum additional evaluation is
necessary.

'rnf' really is a hammer, but not everything is a nail.
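"Strict by definition" can be as simple as strict fields: the bangs force the values when the record is constructed, so no after-the-fact rnf sweep over the structure is ever needed. A sketch (the `Tick` type is illustrative, not from the thread):

```haskell
-- Strict fields: the (1 + 1) and (2 + 2) thunks are evaluated at
-- construction time, not left to pile up for the GC or a later rnf.
data Tick = Tick { price :: !Double, volume :: !Int }

mkTick :: Double -> Int -> Tick
mkTick = Tick

main :: IO ()
main = let t = mkTick (1 + 1) (2 + 2)
       in print (price t, volume t)
-- prints (2.0,4)
```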

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-04 Thread Jason Dagit
On Thu, Mar 4, 2010 at 11:10 AM, Don Stewart d...@galois.com wrote:

 cjs:
  On 2010-03-01 19:37 + (Mon), Thomas Schilling wrote:
 
   A possible workaround would be to sprinkle lots of 'rnf's around your
   code
 
  As I learned rather to my chagrin on a large project, you generally
  don't want to do that. I spent a couple of days writing instance
  of NFData and loading up my application with rnfs and then watched
  performance fall into a sinkhole.
 
  I believe that the problem is that rnf traverses the entirety of a large
  data structure even if it's already strict and doesn't need traversal.
  My guess is that doing this frequently on data structures (such as Maps)
  of less than tiny size was blowing out my cache.

 And rnf will do the traversal whether it is needed or not.
 Imo, it is better  to ensure the structures you want are necessarily
 strict by definition, so that only the minimum additional evaluation is
 necessary.


Isn't the downside of strict structures the implicit nature of the
'strictification'?  You lose the fine-grained control of when particular
values should be strict.

Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Benchmarking and Garbage Collection

2010-03-04 Thread Jesper Louis Andersen
On Thu, Mar 4, 2010 at 7:16 PM, Neil Brown nc...@kent.ac.uk wrote:

 However, one thing I've found is that the libraries have noticeably
 different behaviour in terms of the amount of garbage created.

In fact, CML relies on the garbage collector for some implementation
constructions. John H. Reppy's Concurrent Programming in ML is worth
a read if you haven't. My guess is that the Haskell implementation of
CML is bloody expensive. It is based on the paper
http://www.cs.umd.edu/~avik/projects/cmllch/ where Chaudhuri first
constructs an abstract machine for CML and then binds this to the
Haskell MVar and forkIO constructions.

In any case, implementing sorting networks, Power Series
multiplication (Doug Mcilroy, Rob Pike - Squinting at power series) or
the good old prime sieve are also interesting benchmarks.

-- 
J.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GPL answers from the SFLC

2010-03-04 Thread minh thu
2010/3/4 Stefan Monnier monn...@iro.umontreal.ca:
 The next question that comes to mind is thus:
 What if a new library X' released under BSD or MIT license implements
 the X API (making possible to compile Y against it)? Can such a new
 library X' be licensed under something else than the GPL (we guess Yes
 because we don't think it is possible to license the API itself)?

 Yes.

 Why should the existence of X' make any difference for the author
 of Y?

 Because the existence of X' makes it possible to use Y without using X.
 The order in which X and X' come to exist doesn't matter.

 This exact scenario took place for the GMP library, whose API was
 reimplemented as fgmp, specifically so that a user of the GMP
 library could release their code under a different license than the GPL.

The thing is that the new X' library can provide the same API while
not being very useful (bugs, performance, whatever). And in this case,
it is trivial to make such a new X'. So I don't understand why the
answer was no in the first place.

They also said no for the second question, which was asking about some
possibility to make it legal to not license Y under GPL, and we are
saying that providing a new implementation is such a possibility.

So it is still either unclear, or very constraining.

Cheers,
Thu


Re: [Haskell-cafe] Benchmarking and Garbage Collection

2010-03-04 Thread Neil Brown

Jesper Louis Andersen wrote:

On Thu, Mar 4, 2010 at 7:16 PM, Neil Brown nc...@kent.ac.uk wrote:

  

However, one thing I've found is that the libraries have noticeably
different behaviour in terms of the amount of garbage created.



In fact, CML relies on the garbage collector for some implementation
constructions. John H. Reppy's "Concurrent Programming in ML" is worth
a read if you haven't. My guess is that the Haskell implementation of
CML is bloody expensive. It is based on the paper
http://www.cs.umd.edu/~avik/projects/cmllch/ where Chaudhuri first
constructs an abstract machine for CML and then binds this to the
Haskell MVar and forkIO constructions.
  
CML is indeed the library that has the most markedly different 
behaviour.  In Haskell, the CML package manages to produce timings like 
this for fairly simple benchmarks:


 INIT  time    0.00s  (  0.00s elapsed)
 MUT   time    2.47s  (  2.49s elapsed)
 GC    time   59.43s  ( 60.56s elapsed)
 EXIT  time    0.00s  (  0.01s elapsed)
 Total time   61.68s  ( 63.07s elapsed)

 %GC time      96.3%  (96.0% elapsed)

 Alloc rate    784,401,525 bytes per MUT second

 Productivity   3.7% of total user, 3.6% of total elapsed

I knew from reading the code that CML's implementation would do 
something like this, although I do wonder if it triggers some 
pathological case in the GC.  The problem is that when I benchmark the 
program, it seems to finish it decent time; then spends 60 seconds doing 
GC before finally terminating!  So I need some way of timing that will 
reflect this; I wonder if just timing the entire run-time (and making 
the benchmarks long enough to not be swallowed by program start-up 
times, etc) is the best thing to do.  A secondary issue is whether I 
should even include CML at all considering the timings!


Thanks,

Neil



Re: [Haskell-cafe] Anyone up for Google SoC 2010?

2010-03-04 Thread Johan Tibell
On Sun, Jan 31, 2010 at 12:04 PM, Malcolm Wallace 
malcolm.wall...@cs.york.ac.uk wrote:

 Google has announced that the Summer of Code programme will be running
 again this year.  If haskell.org people would like to take part again this
 year, then we need volunteers:

 First,
* suggestions for suitable projects
  (in the past this was organised using a reddit)


Here's a proposal for a project I'd be willing to mentor:

= A high-performance HTML combinator library using Data.Text =

Almost all web applications need to generate HTML for rendering in the
user's browser. The three perhaps most important properties in an HTML
generation library are:

- High performance: Given that the network introduces a lot of latency the
server is left with very little time to create a response to send back to
the client. Every millisecond not spent on generating HTML can be used to
process the user's request. Furthermore, efficient use of the server's
resources is important to keep the number of clients per server high and
costs per client low.

- Correctness: Incorrectly created HTML can result in anything from
incorrect rendering (in the best case) to XSS attacks (in the worst case).

- Composability: Being able to create small widgets and reuse them in
several pages fosters consistency in the generated output and helps both
correctness and reuse. (Formlets play a big role here, but being able to
treat HTML fragments as values rather than as strings is important too.)

Combinator libraries, like the 'html' package on Hackage [1], address the
last two criteria by making the generated HTML correct by construction
and making HTML fragments first-class values. Traditional templating systems
generally have the first property, offering excellent performance, but lack
the other two.
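As a rough illustration of the "correct by construction" point, here is a toy fragment type. All names (Html, tag, text, render) are invented for illustration, and String stands in for the Data.Text the proposal calls for:

```haskell
module Main where

-- A toy correct-by-construction HTML fragment type. Fragments are
-- ordinary values that compose, and escaping happens at render time,
-- so user data cannot inject markup.
data Html = Tag String [Html] | Text String

text :: String -> Html
text = Text

tag :: String -> [Html] -> Html
tag = Tag

render :: Html -> String
render (Text s) = concatMap escape s
  where
    escape '<' = "&lt;"
    escape '>' = "&gt;"
    escape '&' = "&amp;"
    escape c   = [c]
render (Tag name cs) =
  "<" ++ name ++ ">" ++ concatMap render cs ++ "</" ++ name ++ ">"

main :: IO ()
main = putStrLn (render (tag "p" [text "1 < 2 & 3"]))
```

A real library would of course use Data.Text builders for speed; the sketch only shows why well-formedness and escaping come for free with this style.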

Task: Create a new HTML combinator library, based on the 'html' library,
that's blazing fast, well tested and well documented. Also improve upon the
'html' package's API by e.g. splitting the attribute related functions into
their own module.

Tools: QuickCheck for testing, Criterion for benchmarking, and Haddock for
documenting.

1. http://hackage.haskell.org/package/html

-- Johan


Re: [Haskell-cafe] Benchmarking and Garbage Collection

2010-03-04 Thread Jesper Louis Andersen
On Thu, Mar 4, 2010 at 8:35 PM, Neil Brown nc...@kent.ac.uk wrote:

 CML is indeed the library that has the most markedly different behaviour.
  In Haskell, the CML package manages to produce timings like this for fairly
 simple benchmarks:

  %GC time      96.3%  (96.0% elapsed)

 I knew from reading the code that CML's implementation would do something
 like this, although I do wonder if it triggers some pathological case in the
 GC.

That result is peculiar. What are you doing to the library, and what
do you expect happens? Since I have some code invested on top of CML,
I'd like to gain a little insight if possible.

-- 
J.


Re: [Haskell-cafe] Prelude.undefined

2010-03-04 Thread Ketil Malde
Ivan Miljenovic ivan.miljeno...@gmail.com writes:

 GHC.Err.CAFTest: Prelude.undefined

 Are you matching all patterns?  When compiling with -Wall does it make
 any complaints?

How would this help?  'Prelude.undefined' happens because somewhere
you're trying to evaluate a value defined with that particular literal,
doesn't it?

Using this in a library seems to me to be in poor taste, but grepping
the code should reveal it, if you're sure that's where the problem is
hiding.  I generally replace any 'undefined's with 'error string', with
each 'string' unique for that position.
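The suggestion above might look like this in practice (lookupOrDie and its message are invented for illustration):

```haskell
module Main where

import Data.Maybe (fromMaybe)

-- Instead of a bare 'undefined', give each failure site its own
-- unique, greppable error message.
lookupOrDie :: Eq k => k -> [(k, v)] -> v
lookupOrDie k =
  fromMaybe (error "lookupOrDie: key missing from table (site #1)") . lookup k

main :: IO ()
main = putStrLn (lookupOrDie (2 :: Int) [(1, "one"), (2, "two")])
```

When the partial case does fire, the message pinpoints the call site instead of the anonymous "Prelude.undefined".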

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Matthias Kilian
On Thu, Mar 04, 2010 at 11:34:24AM -0600, Tom Tobin wrote:
 [...] The SFLC holds that a
 library that depends on a GPL'd library must in turn be GPL'd, even if
 the library is only distributed as source and not in binary form.

Was this a general statement or specific to the fact that (at least
GHC) is doing heavy inlining?

Anyway, I think the SFLC is the wrong institution to ask, since
they're biased.

Ciao,
Kili


Re: [Haskell-cafe] Benchmarking and Garbage Collection

2010-03-04 Thread Neil Brown

Jesper Louis Andersen wrote:

On Thu, Mar 4, 2010 at 8:35 PM, Neil Brown nc...@kent.ac.uk wrote:
  

CML is indeed the library that has the most markedly different behaviour.
 In Haskell, the CML package manages to produce timings like this for fairly
simple benchmarks:

 %GC time  96.3%  (96.0% elapsed)

I knew from reading the code that CML's implementation would do something
like this, although I do wonder if it triggers some pathological case in the
GC.



That result is peculiar. What are you doing to the library, and what
do you expect happens? Since I have some code invested on top of CML,
I'd like to gain a little insight if possible.
  


In trying to simplify my code, the added time has moved from GC time to 
EXIT time (and increased!).  This shift isn't too surprising -- I 
believe the time is really spent trying to kill lots of threads.  Here's 
my very simple benchmark; the main thread repeatedly chooses between 
receiving from two threads that are sending to it:



import Control.Concurrent
import Control.Concurrent.CML
import Control.Monad

main :: IO ()
main = do let numChoices = 2
          cs <- replicateM numChoices channel
          mapM_ forkIO [ replicateM_ (10 `div` numChoices) $ sync $ transmit c ()
                       | c <- cs ]
          replicateM_ 10 $ sync $ choose [receive c (const True) | c <- cs]



Compiling with -threaded, and running with +RTS -s, I get:

 INIT  time    0.00s  (  0.00s elapsed)
 MUT   time    2.68s  (  3.56s elapsed)
 GC    time    1.84s  (  1.90s elapsed)
 EXIT  time   89.30s  ( 90.71s elapsed)
 Total time   93.82s  ( 96.15s elapsed)

I think the issue with the CML library is that it spawns a lot of 
threads (search the source for forkIO: 
http://hackage.haskell.org/packages/archive/cml/0.1.3/doc/html/src/Control-Concurrent-CML.html).  
Presumably the Haskell RTS isn't optimised for this approach (maybe the 
ML RTS was, from what you said?), and at the end of the program it 
spends a lot of time reaping the threads.  The problem isn't nearly as 
bad if you don't use choose, though:



import Control.Concurrent
import Control.Concurrent.CML
import Control.Monad

main :: IO ()
main = do c <- channel
          forkIO $ replicateM_ 10 $ sync $ transmit c ()
          replicateM_ 10 $ sync $ receive c (const True)


I get:

 INIT  time    0.00s  (  0.00s elapsed)
 MUT   time    1.92s  (  2.65s elapsed)
 GC    time    0.92s  (  0.93s elapsed)
 EXIT  time    0.00s  (  0.02s elapsed)
 Total time    2.65s  (  3.59s elapsed)

 %GC time      34.6%  (25.8% elapsed)


Hope that helps,

Neil.


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread Matthias Görgens
 A shining example are Dan Piponis blog posts. Not his fault, mind. All I see
 is that there is something powerful. I also notice that the big brains
 construct monads in many different ways and thus giving them entirely
 different capabilities. An example of this is some techniques turn CBV to
 CBN or CBL while other techniques null this.

What are CBV, CBN and CBL?


Re[2]: [Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Bulat Ziganshin
Hello Matthias,

Friday, March 5, 2010, 12:56:48 AM, you wrote:

 [...] The SFLC holds that a
 library that depends on a GPL'd library must in turn be GPL'd, even if
 the library is only distributed as source and not in binary form.

 Was this a general statement

Yes. It's the soul of the GPL idea, and it's why BG called the GPL a virus :)


-- 
Best regards,
 Bulat  mailto:bulat.zigans...@gmail.com



[Haskell-cafe] ANNOUNCE: wai-0.0.0 (Web Application Interface)

2010-03-04 Thread Michael Snoyman
Hello all,

I'd like to announce the first release of the Web Application Interface
package[1]. The WAI is a common protocol between web server backends and web
applications, allowing application writers to target a single interface and
have their code run on multiple server types.

There are two previous implementations of the same idea: the Network.Wai
module as implemented in the Hyena package, and Hack. Some distinguishing
characteristics of the wai package include:

* Unlike Hack, this does not rely upon lazy I/O for passing request and
response bodies within constrained memory.
* Unlike Hyena, the request body is passed via a Source instead of
Enumerator, so that it can easily be converted to a lazy bytestring for
those who wish.
* Unlike both, it attempts to achieve more type safety by having explicit
types for request headers, response headers, HTTP version and status code.
* It also removes any variables which are not universal to all web server
backends. For example, scriptName has been removed, since it has no meaning
for standalone servers.

This package also contains separate modules for conversions to and from
Sources and Enumerators.

I am also simultaneously releasing two other packages: web-encodings
0.2.4[2] includes a new function, parseRequestBody, which directly parses a
request body in a WAI request. It handles both URL-encoded and multipart
data, and can store file contents in a file instead of memory. This should
allow dealing with large file submits in a memory-efficient manner. You can
also receive the file contents as a lazy bytestring.

Finally, wai-extra 0.0.0 [3] is a collection of middleware and backends I
use regularly in web application development. On the middleware side, there
are five modules, including GZIP encoding and storing session data in an
encrypted cookie (ClientSession). On the backend side, it includes CGI and
SimpleServer. The latter is especially useful for testing web applications,
though some people have reported using it in a production environment. All
of the code is directly ported from previous packages I had written against
Hack, so they are fairly well tested.

As far as stability, I don't expect the interface to change too drastically
in the future. I am happy to hear any suggestions people have for moving
forward, but expect future versions to be mostly backwards compatible with
this release. Also, future versions will respect the Package Versioning
Policy[4].

Thanks to everyone who contributed their input.

Michael

[1] http://hackage.haskell.org/package/wai
[2] http://hackage.haskell.org/package/web-encodings
[3] http://hackage.haskell.org/package/wai-extra
[4] http://www.haskell.org/haskellwiki/Package_versioning_policy


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread David Leimbach
2010/3/4 Matthias Görgens matthias.goerg...@googlemail.com

  A shining example are Dan Piponis blog posts. Not his fault, mind. All I
 see
  is that there is something powerful. I also notice that the big brains
  construct monads in many different ways and thus giving them entirely
  different capabilities. An example of this is some techniques turn CBV to
  CBN or CBL while other techniques null this.

 What are CBV, CBN and CBL?


It's a series of 3 TLAs designed to make people who know what they are feel
more in-the-know than those who don't.
:-)

Dave





Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread Michael Vanier

Matthias Görgens wrote:

A shining example are Dan Piponis blog posts. Not his fault, mind. All I see
is that there is something powerful. I also notice that the big brains
construct monads in many different ways and thus giving them entirely
different capabilities. An example of this is some techniques turn CBV to
CBN or CBL while other techniques null this.



What are CBV, CBN and CBL?

  


CBV = Call By Value, essentially strict evaluation
CBN = Call By Name, essentially lazy evaluation
CBL = I don't know.

-- Mike




Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread Ivan Miljenovic
2010/3/5 Michael Vanier mvanie...@gmail.com:
 CBV = Call By Value, essentially strict evaluation
 CBN = Call By Name, essentially lazy evaluation
 CBL = I don't know.

Commercial Bill of Lading? :p

http://www.google.com.au/search?q=define%3ACBL

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


Re: [Haskell-cafe] Re: GPL answers from the SFLC

2010-03-04 Thread Stefan Monnier
 The thing is that the new X' library can provide the same API while
 not being very useful (bug, performance, whatever).  And in this case,
 it is trivial to make that new X'.  So I don't understand why the
 answer was no in the first place.

The law is not a set of mathematical rules.  It all needs to be
interpreted, compared to the underlying intentions etc...
So while you can say that it's pointless if you push the idea to its
limit, that doesn't mean that it's meaningless in the context of
the law.
All it might mean is that in some cases, the interpretation is
not clear.  It's those cases where a court needs to decide which
interpretation should be favored.


Stefan


Re: [Haskell-cafe] Re: GPL answers from the SFLC

2010-03-04 Thread Darrin Chandler
On Thu, Mar 04, 2010 at 07:07:31PM -0500, Stefan Monnier wrote:
  The thing is that the new X' library can provide the same API while
  not being very useful (bug, performance, whatever).  And in this case,
  it is trivial to make that new X'.  So I don't understand why the
  answer was no in the first place.
 
 The law is not a set of mathematical rules.  It all needs to be
 interpreted, compared to the underlying intentions etc...
 So while you can say that it's pointless if you push the idea to its
 limit, that doesn't mean that it's meaningless in the context of
 the law.
 All it might mean is that in some cases, the interpretation is
 not clear.  It's those cases where a court needs to decide which
 interpretation should be favored.

I'd like to point out that sometimes requesting that the author switch
from GPL to LGPL is all it takes. Some may even be willing to switch to
a BSD-style. It doesn't hurt to ask.

Of course libraries with many authors increases the headache.

-- 
Darrin Chandler|  Phoenix BSD User Group  |  MetaBUG
dwchand...@stilyagin.com   |  http://phxbug.org/  |  http://metabug.org/
http://www.stilyagin.com/  |  Daemons in the Desert   |  Global BUG Federation


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread caseyh

CBV = Call by Value, essentially strict evaluation
CBN = Call by Name
Call by Need = a memoized version of call-by-name; essentially lazy evaluation


CBL = Maybe a new acronym for Call by Need.





[Haskell-cafe] QOTM: It's a series of 3 TLAs designed to make people who know what they are feel more in-the-know than those who don't.

2010-03-04 Thread caseyh

CBV, CBN, CBL



Re: [Haskell-cafe] Books for advanced Haskell

2010-03-04 Thread Günther Schmidt

On 04.03.10 23:19, Matthias Görgens wrote:

A shining example are Dan Piponis blog posts. Not his fault, mind. All I see
is that there is something powerful. I also notice that the big brains
construct monads in many different ways and thus giving them entirely
different capabilities. An example of this is some techniques turn CBV to
CBN or CBL while other techniques null this.
 

What are CBV, CBN and CBL?

Hi Matthias,

Sorry for using the abbreviations. CBV = Call by Value, CBN = Call by
Name, CBL = Call by Need (Lazy)


Günther



[Haskell-cafe] Re: How do you set up windows enviroment

2010-03-04 Thread Maciej Piechotka
On Thu, 2010-03-04 at 07:29 -0800, Jason Dagit wrote:
 
 
 On Wed, Mar 3, 2010 at 12:26 PM, Maciej Piechotka
 uzytkown...@gmail.com wrote:
 How do you set up Windows environment? I'd like to develop few
 platform-specific code or rather port it to this platform.
 
 I tried to set up environment but I failed (darcs hanging on
 copy via
 ssh, command line is... well slightly better then in W98
 etc.).
 
 If you're having problems with darcs I would recommend using the
 darcs-users mailing list:
 http://lists.osuosl.org/pipermail/darcs-users/
 
 It would also be nice to know more about your setup (darcs version,
 ssh tools, what steps you've taken to configure darcs/ssh).
 
 The darcs wiki also has some information about windows:
 http://wiki.darcs.net/WindowsConfiguration
 

1. Please ignore last error I post. There was mistake in repo location.
2. I finally managed to use ssh+ssh-agent from openssh to deal with the
problems. I'm still left with setting up ssh with pinentry, but that is
not for this list ;)
3. Thank you - that was the first place I looked at ;)


 Since it's a wiki it's possibly in need of updates but as I don't use
 windows I can't really say (or accurately update it).
 

You're lucky ;) Well - to be honest I'm as well most of the time ;)

 I hope that helps,
 Jason

Regards





[Haskell-cafe] Re: GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Maciej Piechotka
On Thu, 2010-03-04 at 11:34 -0600, Tom Tobin wrote:
 After politely pestering them again, I finally heard back from the
 Software Freedom Law Center regarding our GPL questions (quoted
 below).
 
 I exchanged several emails to clarify the particular issues; in short,
 the answers are No, No, N/A, and N/A.  The SFLC holds that a
 library that depends on a GPL'd library must in turn be GPL'd, even if
 the library is only distributed as source and not in binary form.
 They offered to draft some sort of explicit response if we'd find it
 useful.
 
 Maybe it would be useful if Cabal had some sort of licensing check
 command that could be run on a .cabal file, and warn an author if any
 libraries it depends on (directly or indirectly) are GPL'd but the
 .cabal itself does not have the license set to GPL 

AFAIR the AGPL can be linked with the GPL (but not vice versa), so Y
could be under the AGPL (assuming the AGPL counts as GPL in the answer).

Regards





Re: [Haskell-cafe] vector stream fusion, inlining and compilation time

2010-03-04 Thread Roman Leshchinskiy
On 05/03/2010, at 04:34, stefan kersten wrote:

 i've been hunting down some performance problems in DSP code using vector and
 the single most important transformation seems to be throwing in INLINE 
 pragmas
 for any function that uses vector combinators and is to be called from
 higher-level code. failing to do so seems to prevent vector operations from
 being fused and results in big performance hits (the good news is that the
 optimized code is quite competitive). does anybody have some more info about 
 the
 do's and don'ts when programming with vector?

This is a general problem when working with RULES-based optimisations. Here is 
an example of what happens: suppose we have

foo :: Vector Int -> Vector Int
foo xs = map (+1) xs

Now, GHC will generate a nice tight loop for this but if in a different module, 
we have something like this:

bar xs = foo (foo xs)

then this won't fuse because (a) foo won't be inlined and (b) even if GHC did 
inline here, it would inline the nice tight loop which can't possibly fuse 
instead of the original map which can. By slapping an INLINE pragma on foo, 
you're telling GHC to (almost) always inline the function and to use the 
original definition for inlining, thus giving it a chance to fuse.

GHC could be a bit cleverer here (perhaps by noticing that the original 
definition is small enough to inline and keeping it) but in general, you have 
to add INLINE pragmas in such cases if you want to be sure your code fuses. A 
general-purpose mechanism for handling situations like this automatically would 
be great but we haven't found a good one so far.
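For a dependency-free illustration of the pragma placement described above, here is the same pattern with plain lists (GHC's build/foldr list fusion has an analogous inlining requirement; this is a sketch, not the vector code from the thread):

```haskell
module Main where

-- Without an INLINE pragma, a call site in another module may see only
-- foo's already-optimised loop (or no unfolding at all), so the maps
-- cannot fuse across the module boundary.
foo :: [Int] -> [Int]
foo xs = map (+ 1) xs
{-# INLINE foo #-}  -- expose foo's original definition at call sites

bar :: [Int] -> [Int]
bar xs = foo (foo xs)  -- with INLINE, the two maps can fuse here

main :: IO ()
main = print (bar [1, 2, 3])
```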

 the downside after adding the INLINE pragmas is that now some of my modules 
 take
 _really_ long to compile (up to a couple of minutes); any ideas where i can
 start looking to bring the compilation times down again?

Alas, stream fusion (and fusion in general, I guess) requires what I would call 
whole loop compilation - you need to inline everything into loops. That tends 
to be slow. I don't know what your code looks like but you could try to control 
inlining a bit more. For instance, if you have something like this:

foo ... = ... map f xs ...
  where
f x = ...

you could tell GHC not to inline f until fairly late in the game by adding

  {-# INLINE [0] f #-}

to the where clause. This helps sometimes.

 i'm compiling with -O2 -funbox-strict-fields instead of -Odph (with ghc 6.10.4
 on OSX 10.4), because it's faster for some of my code, but -O2 vs. -Odph 
 doesn't
 make a noticable difference in compilation time.

If you're *really* interested in performance, I would suggest using GHC head. 
It really is much better for this kind of code (although not necessarily faster 
with respect to compilation times).

This is what -Odph does:

-- -Odph is equivalent to
--
--    -O2                           optimise as much as possible
--    -fno-method-sharing           sharing specialisation defeats fusion
--                                  sometimes
--    -fdicts-cheap                 always inline dictionaries
--    -fmax-simplifier-iterations20 this is necessary sometimes
--    -fsimplifier-phases=3         we use an additional simplifier phase
--                                  for fusion
--    -fno-spec-constr-threshold    run SpecConstr even for big loops
--    -fno-spec-constr-count        SpecConstr as much as possible

I'm surprised -Odph doesn't produce faster code than -O2. In any case, you 
could try turning these flags on individually (esp. -fno-method-sharing and the 
spec-constr flags) to see how they affect performance and compilation times.

Roman




Re: [Haskell-cafe] Re: Real-time garbage collection for Haskell

2010-03-04 Thread wren ng thornton

Simon Marlow wrote:

So it would be pretty easy to provide something like

  disableMajorGC, enableMajorGC :: IO ()

Of course leaving it disabled too long could be bad, but that's your 
responsibility.


It seems like it'd be preferable to have an interface like:

withMajorGCDisabled :: IO () -> IO ()

or (for some definition of K):

withMajorGCDisabled :: (K -> IO ()) -> IO ()

in order to ensure that it always gets turned back on eventually. Of 
course, the latter can be created from the former pair. It's just that 
the former reminds me a bit much of explicit memory management and how 
difficult it is to balance the free()s...
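The bracketed interface sketched above can be built from the primitive pair with Control.Exception.bracket_. The RTS hooks below are stubs, since no such primitives exist yet; only the wrapper is the point:

```haskell
module Main where

import Control.Exception (bracket_)

-- Stub versions of the proposed RTS hooks, just for illustration.
disableMajorGC, enableMajorGC :: IO ()
disableMajorGC = putStrLn "major GC disabled"
enableMajorGC  = putStrLn "major GC enabled"

-- bracket_ re-enables the collector even if the action throws an
-- exception, which addresses the balance-the-free()s worry.
withMajorGCDisabled :: IO a -> IO a
withMajorGCDisabled = bracket_ disableMajorGC enableMajorGC

main :: IO ()
main = withMajorGCDisabled (putStrLn "latency-critical section")
```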


--
Live well,
~wren


[Haskell-cafe] Using Haskell's FFI to send ancillary data over Unix domain sockets

2010-03-04 Thread ihope
Unix domain sockets are a type of socket created between two programs
on a single Unix system. They are useful in part because over them you
can send so-called ancillary data: file descriptors and credentials
(i.e. a proof of who the process on the other end is). The thing is,
Haskell doesn't have a nice way of sending ancillary data.

Network.Socket does have these really opaque functions for sending and
receiving ancillary data:

sendAncillary :: Socket -> Int -> Int -> Int -> Ptr a -> Int -> IO ()
sendAncillary sock level ty flags datum len = do ...

recvAncillary :: Socket -> Int -> Int -> IO (Int, Int, Ptr a, Int)
recvAncillary sock flags len = do ... return (lev, ty, pD, len)

Looking in the man page UNIX(7), which describes Unix domain sockets,
some enlightening information is given. It says that ancillary data is
sent and received using sendmsg(2) and recvmsg(2). Those two man pages
say that sock and flags are arguments to sendmsg and recvmsg. UNIX(7)
says that level is always SOL_SOCKET, and ty is either SCM_RIGHTS or
SCM_CREDENTIALS depending on whether the ancillary data is file
descriptors or credentials. In the former case, datum is an integer
array of the file descriptors; in the latter case, it's the following
struct:

struct ucred {
pid_t pid;/* process ID of the sending process */
uid_t uid;/* user ID of the sending process */
gid_t gid;/* group ID of the sending process */
};

As for len, this struct seems to be enlightening:

struct cmsghdr {
socklen_t cmsg_len;/* data byte count, including header */
int   cmsg_level;  /* originating protocol */
int   cmsg_type;   /* protocol-specific type */
/* followed by unsigned char cmsg_data[]; */
};

So, sock is a Socket, level is the constant SOL_SOCKET, ty is either
SCM_RIGHTS or SCM_CREDENTIALS (strangely, Network.Socket contains the
former but not the latter), flags is the flags passed to sendmsg or
recvmsg (whatever those are--what are they?), datum is either a C
integer array or that struct, and len is the length in bytes of all
that.

For me, this presents a few problems. I don't know where to get the
SCM_CREDENTIALS constant, I have no idea what flags to use (does the
Network module help with that?), I don't know how to get from a list
of file descriptors or a tuple to a Ptr, and perhaps most importantly,
I have no idea how to get the lengths of pid_t, uid_t, gid_t, and
socklen_t.
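One of the sub-questions above, getting from a list of file descriptors to a Ptr, can be handled with Foreign.Marshal.Array. A sketch (withFdArray is an invented helper, and passing count * sizeOf CInt as the byte length is an assumption about what sendAncillary's len expects):

```haskell
module Main where

import Foreign.C.Types (CInt)
import Foreign.Marshal.Array (withArrayLen)
import Foreign.Ptr (Ptr)
import Foreign.Storable (sizeOf)

-- Marshal a list of file descriptors into a temporary C integer array,
-- handing the callback the pointer plus the payload size in bytes.
withFdArray :: [CInt] -> (Ptr CInt -> Int -> IO a) -> IO a
withFdArray fds k =
  withArrayLen fds $ \count ptr ->
    k ptr (count * sizeOf (undefined :: CInt))

main :: IO ()
main = withFdArray [0, 1, 2] $ \_ptr bytes -> print bytes
```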

Can anyone offer assistance?


Re: [Haskell-cafe] Anyone up for Google SoC 2010?

2010-03-04 Thread Johan Tibell
On Fri, Mar 5, 2010 at 4:48 AM, iquiw iku.iw...@gmail.com wrote:

 Hi Johan,

 On Fri, Mar 5, 2010 at 6:18 AM, Johan Tibell johan.tib...@gmail.com
 wrote:
  Here's a proposal for a project I'd be willing to mentor:
  = A high-performance HTML combinator library using Data.Text =

 Nice project! I would like to see the project will be accepted.

 Perhaps it's not in the scope of the project, but if compatibility doesn't
 matter, I would like the new HTML library to have a uniform naming
 convention for functions based on element or attribute names.

 For example, function names could be:
  - e_ + element name (html, head, body = e_html, e_head, e_body)
    a_ + attribute name (href, id, class = a_href, a_id, a_class)
 or
  - e + capitalized element name (html, head, body = eHtml, eHead, eBody)
    a + capitalized attribute name (href, id, class = aHref, aId, aClass)

 or some other convention.


I think I would use the module system for namespacing rather than using
function prefixes. Like so:

import Text.Html as E
import qualified Text.Html.Attribute as A

E.html ! [A.class_ "my-class"] (... more combinators ...)

(Assuming that ! is used to introduce attributes.)

This allows you to use the element names and/or the attribute names
unclassified if you so desire.

html ! [class_ "my-class"] (... more combinators ...)

Function names in the 'html' library are unpredictable from
 corresponding element/attribute names...
  (head, base, a = header, thebase, anchor)


I'm of the same opinion. The combinators should match the element/attribute
names as far as possible. The rule that I had in mind was that the
combinators should have exactly the same name as the corresponding
element/tag except when the name collides with a keyword (e.g. class). If
the name collides with a keyword we could e.g. always append a _.

Cheers,
Johan


Re: [Haskell-cafe] Anyone up for Google SoC 2010?

2010-03-04 Thread Johan Tibell
On Fri, Mar 5, 2010 at 8:07 AM, Johan Tibell johan.tib...@gmail.com wrote:

 On Fri, Mar 5, 2010 at 4:48 AM, iquiw iku.iw...@gmail.com wrote:

 I think I would use the module system for namespacing rather than using
 function prefixes. Like so:


 import Text.Html as E
 import qualified Text.Html.Attribute as A

 E.html ! [A.class_ "my-class"] (... more combinators ...)

 (Assuming that ! is used to introduce attributes.)

 This allows you to use the element names and/or the attribute names
 unclassified if you so desire.


This should of course have been unqualified!


[Haskell-cafe] GPL answers from the SFLC (WAS: Re: ANN: hakyll-0.1)

2010-03-04 Thread Kevin Jardine
I'm a Haskell newbie but long time open source developer and I've been 
following this thread with some interest.

The GPL is not just a license: it is a form of social engineering and social 
contract. The idea, if I use the GPL, is that I am releasing free and open 
source software to the community. You are welcome to use it for any purpose, 
but in exchange you must agree to release any software you create that uses my 
software as free and open source as well.

That is the difference between GPL and BSD type licenses. The GPL very 
deliberately creates an obligation. Yes, that can be inconvenient. It is meant 
to be inconvenient.

Actually, the GPL reminds me of a Haskell concept that I am struggling with 
right now: the monad. When I started writing Haskell code I was always trying 
to mix pure and IO code, and I soon learned that once I used the IO monad I was 
stuck within it. The monad creates an inconvenient obligation: any IO code can 
only be used within other IO code. There are good reasons for monads (just as, 
in my view, there are good reasons for the GPL), but using them means that I 
need to make a lot of changes to the way I write software.
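
To illustrate the "stuck in IO" point with a minimal sketch: a pure
function can be called from anywhere, but a value obtained in IO can only
be consumed by more IO code, since there is no safe escape hatch of type
IO a -> a (the names below are just for illustration):

```haskell
-- Pure code: usable anywhere, including inside IO.
pureDouble :: Int -> Int
pureDouble = (* 2)

-- IO code: the result is tagged with IO and cannot leak into pure code.
readNumber :: IO Int
readNumber = pure 21  -- stands in for real input, e.g. readLn

main :: IO ()
main = do
  n <- readNumber       -- binding in a do-block is the only way to use it
  print (pureDouble n)  -- pure code is freely callable from IO
-- prints: 42
```

Much like the GPL, the IO type propagates outward: any function that calls
an IO action must itself return an IO result.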

Kevin

