[Haskell-cafe] Evolving Faster Haskell Programs (now with LLVM!)

2010-03-01 Thread Don Stewart

http://donsbot.wordpress.com/2010/03/01/evolving-faster-haskell-programs-now-with-llvm/

… In which I use genetic algorithms to search for optimal LLVM optimizer
passes to make Haskell programs faster …

LLVM + GHC is a lot of fun.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Has "Try Haskell! An interactive tutorial in your browser" been announced yet?

2010-03-01 Thread Roel van Dijk
I think these reddit posts are relevant:

First announcement:
http://www.reddit.com/r/haskell/comments/b58rk/try_haskell/

Second announcement:
http://www.reddit.com/r/haskell/comments/b7dil/try_haskell_now_with_t_and_wip_interactive/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] View patterns

2010-03-01 Thread Simon Peyton-Jones
| However, as somebody pointed out, the Java version is polymorphic.
| Assuming that length() is defined for multiple types of container, the
| Java version works with lists, arrays, sets, etc. If you try to do this
| in Haskell, you end up with

A standard way to do it would be this:

class ListLike c where
  lcase :: c a -> LCase a (c a)
  lcons :: a -> c a -> c a
  lnil  :: c a

data LCase a b = LNil | LCons a b

f :: ListLike c => c (c Int) -> Int
f (lcase -> LCons (lcase -> LCons x (lcase -> LNil))
                  (lcase -> LCons (lcase -> LCons y (lcase -> LCons _ (lcase -> LNil)))
                                  (lcase -> LNil)))
  = x+y
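
A minimal sketch of a list instance for this class, for anyone wanting to try
it out (illustrative only; the instance, the 'single' function and the main
below are not from the original message):

{-# LANGUAGE ViewPatterns #-}

-- Class and LCase exactly as in the message above.
class ListLike c where
  lcase :: c a -> LCase a (c a)
  lcons :: a -> c a -> c a
  lnil  :: c a

data LCase a b = LNil | LCons a b

-- The obvious instance for ordinary lists.
instance ListLike [] where
  lcase []     = LNil
  lcase (x:xs) = LCons x xs
  lcons        = (:)
  lnil         = []

-- A small polymorphic match: a container holding exactly one element.
single :: ListLike c => c a -> Maybe a
single (lcase -> LCons x (lcase -> LNil)) = Just x
single _                                  = Nothing

main :: IO ()
main = print (single [42 :: Int])   -- Just 42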


Simon

| -Original Message-
| From: haskell-cafe-boun...@haskell.org 
[mailto:haskell-cafe-boun...@haskell.org] On
| Behalf Of Andrew Coppin
| Sent: 27 February 2010 18:11
| To: haskell-cafe@haskell.org
| Subject: [Haskell-cafe] View patterns
| 
| One somewhat neat thing about Haskell is that you can say
| 
|   case list of
| [[x], [y,_], [z,_,_]] -> x + y + z
| _ -> 0
| 
| In Java, you'd have to write something like
| 
|   if (list.length() == 3)
|   {
| List t1 = list.at(0);
| if (t1.length() == 1)
| {
|   int x = t1.at(0);
|   List t2 = list.at(1);
|   if (t2.length() == 2)
|   ...
| 
| I can't even be bothered to finish typing all that lot!
| 
| However, as somebody pointed out, the Java version is polymorphic.
| Assuming that length() is defined for multiple types of container, the
| Java version works with lists, arrays, sets, etc. If you try to do this
| in Haskell, you end up with
| 
|   case size c of
| 3 -
|   case (c ! 0, c ! 1, c ! 2) of
| (xs, ys, zs) | size xs == 1 && size ys == 2 && size zs == 3 -> (xs !
| 0) + (ys ! 0) + (zs ! 0)
| _ -> 0
| _ -> 0
| 
| or similar. Which is shorter than Java, but nowhere near as nice as the
| original list-only version.
| 
| Now I was under the impression that view patterns fix this problem,
| but it seems they don't:
| 
|   case c of
| (size -> 3) ->
|   case (c ! 0, c ! 1, c ! 2) of
| (size -> 1, size -> 2, size -> 3) -> (c ! 0 ! 0) + (c ! 1 ! 0) +
| (c ! 2 ! 0)
| 
| Any suggestions?
| 
| ___
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] pddl parser

2010-03-01 Thread Vojtěch Knyttl
Has anyone seen any PDDL (Planning Domain Definition Language) parser in Haskell?

Thanks.

VK
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Has "Try Haskell! An interactive tutorial in your browser" been announced yet?

2010-03-01 Thread Benjamin L. Russell
Hector Guilarte hector...@gmail.com writes:

 Nice! I tried it and it worked perfectly; however, I tried it again 45
 minutes later and when I pressed Enter nothing happened. I couldn't
 enter any expressions except for help, step1, ... stepN, but that's it.
 I tried it on Google Chrome and Firefox; another friend tried it too and
 it didn't work for him either, only the same expressions I mentioned
 before. Any ideas?

Apparently, there is a time limit for this tutorial.

I just tried it out again in Safari 4.0.4 on Mac OS X 10.5.8 Leopard,
and the tutorial run by the help command worked perfectly; however,
when I then tried it out again in Firefox 3.5.8, the same tutorial
stopped just after I entered the 'a' : [] expression with the
following error:

 Time limit exceeded.

This occurred approximately ten minutes after starting the tutorial in
Safari.

But then I tried the same tutorial approximately four minutes later in
SeaMonkey 2.0.3, and this time the tutorial ran perfectly again.

So then, approximately four minutes after Firefox had returned the above
error message, I returned to Firefox, clicked on the Reset button in the
upper-right corner of the page, and restarted the tutorial.  This time,
the tutorial behaved slightly differently from before: earlier, I typed
the following sequence of commands (listed in the first step of the
tutorial):

 23*36
 reverse "hello"

At that point, the tutorial had not started automatically.  However, for
some reason, this time it did; then, I was able to continue with the
tutorial until completion.  Then I started the tutorial again with the
help command, and it worked fine again, too.

Then, about thirty-eight minutes after starting the second tutorial in
Firefox (during which time I tried to run the tutorial in Camino 2.0.1,
but Camino froze during the auto-update to 2.0.2, and when I manually
updated it to 2.0.2, Camino 2.0.2 froze upon startup as well, so I
finally gave up on trying the tutorial in Camino), I tried out the 
tutorial in Opera 10.10.

For some reason, Opera inserted spaces after typing certain characters,
and the spaces could not be deleted without also deleting the character
just before the space as well.  Then I entered the same sequence
of commands again:

 23*36
 reverse "hello"

Although the tutorial in Opera returned the correct responses to these
statements, it did not move on to the next step automatically
afterwards, so I had to type help to start the tutorial.  However, I
was then able to complete the entire tutorial successfully (although the
extra space bug manifested itself a few times during this tutorial as
well).

I do not use Google Chrome on my Mac at home, so I have not tested it in
that browser (the tab layout of Google Chrome reminds me of that of
Internet Explorer 7.0, which is relatively slow and which I do not like,
so I personally have not used Chrome at home so far; I may use it at
some point in the future, especially if the layout changes).

Apparently, the tutorial behaves slightly differently in different
browsers, and has a built-in time limit.  You may wish to experiment
with different browsers as well, and to press the Reset button in the
upper-right corner of the page after completing the tutorial.

-- Benjamin L. Russell
-- 
Benjamin L. Russell  /   DekuDekuplex at Yahoo dot com
http://dekudekuplex.wordpress.com/
Translator/Interpreter / Mobile:  +011 81 80-3603-6725
Furuike ya, kawazu tobikomu mizu no oto. -- Matsuo Bashō

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Computer Graphics and Haskell - Radiosity Methods

2010-03-01 Thread Serguey Zefirov
2010/3/1 Hector Guilarte hector...@gmail.com:
 Hello cafe,
 While I was studying for the computer graphics test I have tomorrow, I
 realized that some of the major problems I've read about so far with
 radiosity rendering algorithms might be reduced significantly if they were
 implemented in Haskell, taking advantage of lazy evaluation so that
 only what can be seen from the viewer's perspective is calculated, while
 the rest of the scene just remains as thunks waiting to be calculated in
 case they are needed.

I don't think that you really can get away without calculating those
out-of-view parts of the scene. But you certainly can make rendering
slightly easier (or not) by using infinite scene subdivision trees and
walking them lazily.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Computer Graphics and Haskell - Radiosity Methods

2010-03-01 Thread Maarten Hazewinkel
 2010/3/1 Hector Guilarte hector...@gmail.com:
 Hello cafe,
 While I was studying for the computer graphics test I have tomorrow, I
 realized that some of the major problems I've read about so far with
 radiosity rendering algorithms might be reduced significantly if they were
 implemented in Haskell, taking advantage of lazy evaluation so that
 only what can be seen from the viewer's perspective is calculated, while
 the rest of the scene just remains as thunks waiting to be calculated in
 case they are needed.

The way radiosity works, those invisible parts of the scene can still
add illumination to the visible parts.
So the first time you query the radiosity data for any part of the scene,
you'll end up forcing the calculation of the entire radiosity solution.

That's basically the difference between plain raytracing, which is lazy in
that way and works backwards from the viewer's perspective, and radiosity.


Maarten

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Peter Verswyvelen
Sounds like we need to come up with some benchmarking programs so we
can measure the GC latency and soft-realtimeness...
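
As a rough starting point, a pause-measuring loop might look like the
sketch below (illustrative only, not a proper benchmark; it assumes the
time package and just reports the largest wall-clock gap seen between
iterations while churning the heap):

import Control.Monad (forM_)
import Data.IORef
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- Allocate some garbage each iteration and record the largest gap between
-- consecutive iterations; unusually long gaps are mostly GC pauses.
main :: IO ()
main = do
  keep  <- newIORef ([] :: [[Int]])
  worst <- newIORef (0 :: Double)
  lastT <- getCurrentTime >>= newIORef
  forM_ [1 .. 10000 :: Int] $ \i -> do
    let xs = [1 .. i]
    length xs `seq` modifyIORef keep (take 100 . (xs :))  -- churn the heap
    now  <- getCurrentTime
    prev <- readIORef lastT
    writeIORef lastT now
    modifyIORef worst (max (realToFrac (diffUTCTime now prev)))
  readIORef worst >>= \w -> putStrLn ("worst gap: " ++ show w ++ " s")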

PS: Regarding Haskell and games: the University of Utrecht teaches
Haskell in their brand new game technology course :-)

On Mon, Mar 1, 2010 at 1:04 AM, Luke Palmer lrpal...@gmail.com wrote:
 On Sun, Feb 28, 2010 at 2:06 AM, Pavel Perikov peri...@gmail.com wrote:
 Did you really seen 100ms pauses?! I never did extensive research on this 
 but my numbers are rather in microseconds range (below 1ms). What causes 
 such a long garbage collection? Lots of allocated and long-living objects?

 This is all hypothetical right now.  I heard some horror stories in
 which people had to switch to the main game loop in C++ and only do
 the AI logic in Haskell because of pauses.  I would rather not do
 that, especially because this project is *about* proving Haskell as a
 viable game development platform.  So I am trying to be prepared if I
 do see something like that, so that it doesn't put the show on hold
 for a few months.

 Presumably large, long-living objects would cause the generation 0
 collections to take a long time.  I am not sure if we will have any
 said objects, but we can't rule it out...

 Thanks for the positive reassurances, at least.  I'd like to hear from
 people with the opposite experience, if there are any.

 Luke
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Sönke Hahn
On Monday 01 March 2010 01:04:37 am Luke Palmer wrote:
 On Sun, Feb 28, 2010 at 2:06 AM, Pavel Perikov peri...@gmail.com wrote:
  Did you really seen 100ms pauses?! I never did extensive research on this
  but my numbers are rather in microseconds range (below 1ms). What causes
  such a long garbage collection? Lots of allocated and long-living
  objects?
 
 This is all hypothetical right now.  I heard some horror stories in
 which people had to switch to the main game loop in C++ and only do
 the AI logic in Haskell because of pauses.  I would rather not do
 that, especially because this project is *about* proving Haskell as a
 viable game development platform.  So I am trying to be prepared if I
 do see something like that, so that it doesn't put the show on hold
 for a few months.
 
 Presumably large, long-living objects would cause the generation 0
 collections to take a long time.  I am not sure if we will have any
 said objects, but we can't rule it out...
 
 Thanks for the positive reassurances, at least.  I'd like to hear from
 people with the opposite experience, if there are any.

Yes there are. I am working on a game with Haskell using OpenGL rendering. 
I've done some frame time measurements lately and encountered single frames 
needing more than 100ms to be rendered. I am currently trying to gather 
information on what is going on in these 100ms and why. From what I 
understand, the GC is running very often and just some (very few) of its runs 
are very slow.

BTW: switching to parallel GC (either with -N1 or -N2 (on a dual core 
machine)) made the behavior MUCH worse, for some reason.

Sönke


 
 Luke
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Books for advanced Haskell

2010-03-01 Thread Günther Schmidt

Hi all,

there seems to be a huge number of things that monads can be used for. 
And there are lots of papers, blog posts, etc. describing that, some 
more or less accessible.


Apart from monads there are of course also Applicative Functors, 
Monoids, Arrows and what have you. But in short the Monad thingy seems 
to be the most powerful one of them all.


Is there a book that specializes on Monads? A Haskell-Monad book?

Günther


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Thomas Schilling
On 28 February 2010 05:20, Luke Palmer lrpal...@gmail.com wrote:
 I have seen some proposals around here for SoC projects and other
 things to try to improve the latency of GHC's garbage collector.  I'm
 currently developing a game in Haskell, and even 100ms pauses are
 unacceptable for a real-time game.  I'm calling out to people who have
 seen or made such proposals, because I would be willing to contribute
 funding and/or mentor a project that would contribute to this goal.
 Also any ideas for reducing this latency in other ways would be very
 appreciated.

There is a SoC project suggestion to implement Immix's ideas [1] in
GHC's GC.  Both already use similar overall designs.  Both split the
heap into regions which may employ different collection strategies.
However, Immix does not address real-time issues.

The main difficulty with real-time GC is that, while first-generation
collection is usually very fast, eventually you just have to collect
the old generation and you have to do it all at once.  Sun's new
Garbage-First (G1) [2] collector therefore tracks pointers between
regions, as opposed to just pointers from older to newer generations.
This allows collecting regions independently (and in parallel).  G1
is still stop-the-world, although the marking phase is concurrent.
Tracking pointers between all regions can result in quite substantial
space overheads, however, so G1 uses some heuristics to discover
popular objects and treats them specially.  In a personal
conversation Simon Marlow expressed to me that he intends to go
further into this direction, but I don't know how high-priority it is.
 In general I don't think true real-time is the goal in any case, but
rather a general effort to keep GC-pauses short.

Truly concurrent garbage collection is a whole different beast.
Concurrent marking can be implemented efficiently with a write
barrier.  I don't know of any fully concurrent GC scheme that gets by
without a read barrier and significant space overhead, however.  There
are certainly no plans from the GC HQ to implement a fully concurrent
GC.

[1]: http://www.cs.utexas.edu/users/speedway/DaCapo/papers/immix-pldi-2008.pdf
[2]: http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf

-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] passing cpp options to c2hs from cabal

2010-03-01 Thread Chris Casinghino
This works great, thanks!

As a footnote: I doubt I will be the only one who is confused because
cc-options get passed to c2hs but cpp-options don't.  Perhaps the
cabal designers should revisit this choice.
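
(For anyone else who lands here: a minimal stanza along the lines Daniel
suggested might look like the following; the module name is taken from the
build log above, the rest is illustrative:)

  library
    exposed-modules:  Codec.Game.Puz.Internal
    build-depends:    base
    build-tools:      c2hs
    cc-options:       -U__BLOCKS__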

--Chris

On Sat, Feb 27, 2010 at 3:44 PM, Daniel Fischer
daniel.is.fisc...@web.de wrote:
 Am Samstag 27 Februar 2010 21:27:27 schrieb Chris Casinghino:
 Hi all,

 I have a question about cpp, c2hs and cabal.  The short version:

 What can I put in my cabal file to get --cppopts=-U__BLOCKS__ passed
 as an argument in calls to c2hs?

 Maybe

 cc-options: -U__BLOCKS__

 is worth a try.


 Longer story:

 I need to set up my cabal file so that c2hs gets this extra option to
 make things build smoothly on some macs.  If I run cabal from the
 command line, I can pass it:

 cabal install --c2hs-options='--cppopts=-U__BLOCKS__'

 And this works great.  I need to make my .cabal file do this, but
 c2hs-options doesn't seem to be accepted there.  I tried:

 cpp-options:  -U__BLOCKS__

 But if I run cabal install -v I can see this isn't being passed on to
 c2hs:

 ...
 /home/ccasin/.cabal/bin/c2hs --include=dist/build
 --cppopts=-D__GLASGOW_HASKELL__=610 --cppopts=-Icontrib/libpuz/include
 --output-dir=dist/build --output=Codec/Game/Puz/Internal.hs
 ./Codec/Game/Puz/Internal.chs
 ...

 Thanks!

 --Chris Casinghino
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Linear programming in Haskell

2010-03-01 Thread Henning Thielemann


On Sun, 28 Feb 2010, Louis Wasserman wrote:


It's an expensive operation, though -- since I don't track the set of all
variables as the LP is built, I need to construct the set of all variables
before generating new ones -- so it's recommended that you get all the
variables you need in one or two passes.


Then you might prefer a single operation that generates all variables and 
runs an enclosed problem on them:


run :: ([Variable] -> LP a) -> LP a


Use as

run $ \(x0:x1:x2:_) -> do
   ...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Ghent Functional Programming Group: First Meeting on April 1, 2010

2010-03-01 Thread Jeroen Janssen
Dear All,

The results of the doodle are in: the first meeting of the Ghent Functional
Programming Group will be held on April 1, 2010 at 7pm in meeting room
Shannon in the Technicum building (Sint-Pietersnieuwstraat 41, 9000 Gent) of
Ghent University. The automatic doors will be closed at this time, but we
will attach a notice with the telephone number of the meeting room to the
front door at the outer left of the building, so you can call us and be let
in.

We would like to ask anyone even remotely considering participating to fill 
in the following Google form:

http://spreadsheets.google.com/viewform?formkey=dEtsR2ZIdVhqeVdRNkx6bmxCdF9Lanc6MA

Filling in this form does not commit you to anything, but merely serves as an 
indication of the number of participants for our sake. We will also only be 
sending follow-up mails with the final program etc. to the persons that have 
registered. If you do not intend to come to this first meeting, but wish to be 
updated on our activities, you can follow us on Twitter:

http://twitter.com/ghentfpg

or you can sign-up for our Google group:

http://groups.google.com/group/ghent-fpg

Our current program already contains one talk by Tom Schrijvers, entitled
"Functional Pearl: The Monad Zipper", with the following abstract:

This pearl aims to demonstrate the application of Huet's zipper to the
monad stack for the development of highly modular programs.

Anyone willing to give a talk on a subject that fits the scope can e-mail us 
with a proposed topic, no matter what experience level you have, or no matter 
how short or long the talk will be.

Kind Regards,
Bart Coppens
Jeroen Janssen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-01 Thread David Leimbach
I don't think a Haskell-monad book would be terribly interesting.  A book on
taking the pieces of category theory, with a little bit more of the math, to
apply to Haskell would be greatly interesting to me.

Also a book on learning what to look for for measuring Haskell performance
in space and time + optimization seems like it'd be a good thing to have as
well.

Monad in itself is really simple.  Some of the implementations of Monad can
be a little mind bending at times, but the Monad itself is not really that
complicated.

Dave

2010/3/1 Günther Schmidt gue.schm...@web.de

 Hi all,

 there seems to be a huge number of things that monads can be used for. And
 there are lots of papers, blog posts, etc. describing that, some more or
 less accessible.

 Apart from monads there are of course also Applicative Functors, Monoids,
 Arrows and what have you. But in short the Monad thingy seems to be the most
 powerful one of them all.

 Is there a book that specializes on Monads? A Haskell-Monad book?

 Günther


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-01 Thread Stephen Tetley
Hi Günther

For advanced programming with no special attention to monads, there is
'The Fun of Programming' edited by Jeremy Gibbons and Oege de Moor,
contents in a funny tab-box on this page:

http://www.palgrave.com/products/title.aspx?PID=265581

The Haskell 'maths' books - 'The Haskell Road' (Kees Doets & Jan van
Eijk) and 'Discrete Mathematics Using a Computer' (John O'Donnell,
Cordelia Hall & Rex Page) - aren't guides to advanced language
features, but they take somewhat vanilla functional programming (no
monads, no type-classes) quite a long way.

Best wishes

Stephen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GHC's parallel garbage collector -- what am I doing wrong?

2010-03-01 Thread Michael Lesniak
Hello haskell-cafe,

Sorry for this long post, but I can't think of a way to describe and explain
the problem in a shorter way.

I've (again) a very strange behaviour with the parallel GC and would be glad
if someone could either reproduce (and explain) it or provide a solution. A
similar but unrelated problem has been described in [1].


EXAMPLE CODE
The following demonstration program, which is a much smaller and
single-threaded version of my real problem behaves as my real program.
It does some number crunching by calculating pi to a definable precision:

 -- File Pi.hs
 -- you need the numbers package from hackage.
 module Main where
 import Data.Number.CReal
 import System.Environment
 import GHC.Conc

 main = do
     digits <- (read . head) `fmap` getArgs :: IO Int
     calcPi digits

 calcPi digits = showCReal (fromEnum digits) pi `pseq` return ()

Compile it with

  ghc --make -threaded -O2 Pi.hs -o pi


BENCHMARKS
On my two-core machine I get the following quite strange and
unpredictable results:

* Using one thread:

$ for i in `seq 1 5`;do time pi 5000 +RTS -N1;done

real0m1.441s
user0m1.390s
sys 0m0.020s

real0m1.449s
user0m1.390s
sys 0m0.000s

real0m1.399s
user0m1.370s
sys 0m0.010s

real0m1.401s
user0m1.380s
sys 0m0.000s

real0m1.404s
user0m1.380s
sys 0m0.000s


* Using two threads, hence the parallel GC is used:

for i in `seq 1 5`;do time pi 5000 +RTS -N2;done

real0m2.540s
user0m2.490s
sys 0m0.010s

real0m1.527s
user0m1.530s
sys 0m0.010s

real0m1.966s
user0m1.900s
sys 0m0.010s

real0m5.670s
user0m5.620s
sys 0m0.010s

real0m2.966s
user0m2.910s
sys 0m0.020s


* Using two threads, but disabling the parallel GC:

for i in `seq 1 5`;do time pi 5000 +RTS -N2 -qg;done

real0m1.383s
user0m1.380s
sys 0m0.010s

real0m1.420s
user0m1.360s
sys 0m0.010s

real0m1.406s
user0m1.360s
sys 0m0.010s

real0m1.421s
user0m1.380s
sys 0m0.000s

real0m1.360s
user0m1.360s
sys 0m0.000s


THREADSCOPE
I've additionally attached the threadscope profile of a really bad run,
started with

 $ time pi 5000 +RTS -N2 -ls

real0m15.594s
user0m15.490s
sys 0m0.010s

as file pi.pdf


FURTHER INFORMATION/QUESTION
Just disabling the parallel GC leads to very bad performance in my original
code, which forks threads with forkIO and does a lot of communications. Hence,
using -qg is not a real option for me.

Have I overlooked some crucial aspect of this problem? If you've
read this far, thank you for reading ... this far ;-)

Cheers,
  Michael



[1] http://osdir.com/ml/haskell-cafe@haskell.org/2010-02/msg00850.html


-- 
Dipl.-Inf. Michael C. Lesniak
University of Kassel
Programming Languages / Methodologies Research Group
Department of Computer Science and Electrical Engineering

Wilhelmshöher Allee 73
34121 Kassel

Phone: +49-(0)561-804-6269


pi.pdf
Description: Adobe PDF document
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Performance of Knight's Tour

2010-03-01 Thread Artyom Kazak
Hi! I'm learning Haskell, and now I'm trying to make a framework for
solving search problems, such as the Knight's Tour. For small boards it
answers instantly. For 7x8 board - 23 seconds. For 8x8 board - more
than 30 minutes (it hasn't finished yet). Where is the root of the
evil?

--program
module Main where

import Data.List
import Data.Array.Unboxed
import qualified Data.Array.IArray as IArr
import Data.Ix

data SResult = Good | GoodProcess | Process | Bad

data SDPSearch a p r = SDPSearch (a -> p -> [a])      --expand
                                 (p -> p)             --update
                                 (a -> p -> SResult)  --sort
                                 ([a] -> r)           --result

runSDPSearch :: SDPSearch a c b -> [a] -> c -> b
runSDPSearch (SDPSearch e u s r) list p = r (rec list params)
  where
    params = iterate u p
    rec [] _ = []
    rec (l:lp) pr@(n:np) = case s l n of
        Good        -> l : rec lp pr
        GoodProcess -> l : (rec (e l n) np) ++ (rec lp pr)
        Process     -> (rec (e l n) np) ++ (rec lp pr)
        Bad         -> rec lp pr

main = do
  (a, b) <- (break (== ' ')) `fmap` getLine
  print (knightTour (read a) (read b))

knightTour :: Int -> Int -> UArray (Int, Int) Int
knightTour a b = runSDPSearch (SDPSearch e u s r) [((1, 1), sArray)] 2
  where
    size = a * b
    range = ((1, 1), (a, b))
    sArray = listArray range (1 : (replicate (size - 1) 0))
    allTurns :: Array (Int, Int) [(Int, Int)]
    allTurns = IArr.listArray range [turns x y | x <- [1..a], y <- [1..b]]
      where
        shifts = [(1, 2),(1, -2),(2, 1),(2, -1),(-1, 2),(-1, -2),(-2, 1),(-2, -1)]
        turns x y = [(x+i, y+j) | (i, j) <- shifts, inRange range (x+i, y+j)]
    e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
      where
        changes = [t | t <- allTurns ! (x, y), arr ! t == 0]
    s el p | p == size = Good
           | otherwise = Process
    u = (+ 1)
    r l | not (null l) = snd (head l)
        | otherwise    = error "No solutions!"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-01 Thread Johan Tibell
2010/3/1 David Leimbach leim...@gmail.com

 I don't think a Haskell-monad book would be terribly interesting.  A book
 on taking the pieces of category theory, with a little bit more of the math,
 to apply to Haskell would be greatly interesting to me.

 Also a book on learning what to look for for measuring Haskell performance
 in space and time + optimization seems like it'd be a good thing to have as
 well.


I'd really like to read such a book! Just hacking on a project [1] for a few
months has taught me a lot about writing high performance Haskell code. I
imagine that one could write a decent-sized book on the topic by gathering
the experience of a handful of experienced Haskellers, scraping the
blogosphere, and looking at the high performance libraries out there.

1. http://github.com/tibbe/event

Cheers,
Johan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Job Vranish
My current area of work is on realtime embedded software programming for
avionics systems. We do most of our coding in Ada but I've been dreaming of
using Haskell instead.

However, the garbage collector is actually one of the larger obstacles to
making this happen.

All of our avionics software needs to be certified by various regulatory
agencies, and there are varying levels of certification depending on
criticality. For the higher certification levels we would need to be able to
be sure (or at least very, very confident) that the GC will collect everything
within a fixed amount of time, and that it won't take more than some fixed
amount of time per major frame to do it.

A delay of several milliseconds that could occur effectively at random is
completely unacceptable.

I would be very interested in alternative GC algorithms/approaches that
would have more deterministic/realtime behavior. I would even be
willing to help out if there is other interest in this area :)


As a side note, I ran across an article on a way to use 100% reference
counting in a pure language by using weak references and being careful how
you preserve the weak/strong references during graph reduction:
http://comjnl.oxfordjournals.org/cgi/content/abstract/33/5/466
I don't want to pay $25 for the article though so I don't know how viable it
is. It would probably have lower performance than the current generational
GC but in this case I'd be willing to trade performance for determinism.
Has anyone heard of this algorithm before?

- Job


On Mon, Mar 1, 2010 at 9:53 AM, Thomas Schilling nomin...@googlemail.comwrote:

 On 28 February 2010 05:20, Luke Palmer lrpal...@gmail.com wrote:
  I have seen some proposals around here for SoC projects and other
  things to try to improve the latency of GHC's garbage collector.  I'm
  currently developing a game in Haskell, and even 100ms pauses are
  unacceptable for a real-time game.  I'm calling out to people who have
  seen or made such proposals, because I would be willing to contribute
  funding and/or mentor a project that would contribute to this goal.
  Also any ideas for reducing this latency in other ways would be very
  appreciated.

 There is a SoC project suggestion to implement Immix's ideas [1] in
 GHC's GC.  Both already use similar overall designs.  Both split the
 heap into regions which may employ different collection strategies.
 However, Immix does not address real-time issues.

 The main difficulty with real-time GC is that, while first-generation
 collection is usually very fast, eventually you just have to collect
 the old generation and you have to do it all at once.  Sun's new
 Garbage-First (G1) [2] collector therefore tracks pointers between
 regions, as opposed to just pointers from older to newer generations.
  This allows collecting regions independently (and in parallel).  G1
 is still stop-the-world, although the marking phase is concurrent.
 Tracking pointers between all regions can result in quite substantial
 space overheads, however, so G1 uses some heuristics to discover
 popular objects and treats them specially.  In a personal
 conversation Simon Marlow expressed to me that he intends to go
 further into this direction, but I don't know how high-priority it is.
  In general I don't think true real-time is the goal in any case, but
 rather a general effort to keep GC-pauses short.

 Truly concurrent garbage collection is a whole different beast.
 Concurrent marking can be implemented efficiently with a write
 barrier.  I don't know of any fully concurrent GC scheme that gets by
 without a read barrier and significant space overhead, however.  There
 are certainly no plans from the GC HQ to implement a fully concurrent
 GC.

 [1]:
 http://www.cs.utexas.edu/users/speedway/DaCapo/papers/immix-pldi-2008.pdf
 [2]: http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf

 --
 Push the envelope.  Watch it bend.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Sebastian Sylvan
On Sun, Feb 28, 2010 at 5:20 AM, Luke Palmer lrpal...@gmail.com wrote:

 I have seen some proposals around here for SoC projects and other
 things to try to improve the latency of GHC's garbage collector.  I'm
 currently developing a game in Haskell, and even 100ms pauses are
 unacceptable for a real-time game.  I'm calling out to people who have
 seen or made such proposals, because I would be willing to contribute
 funding and/or mentor a project that would contribute to this goal.
 Also any ideas for reducing this latency in other ways would be very
 appreciated.


Since we're talking games here (my profession), I'd point out that it would
be cool to be able to supply hints to the GC about when might be a good
time to do a GC (without unconditionally forcing it); games in particular
have some pretty specific properties that may be exploitable.

Presumably a non-trivial portion of the objects copied from the nursery/G0
are actually short-lived objects that just happened to have their short
life-span overlap with the collection. So really, copying them to the next
generation is a mistake in some sense. For games, though, we have a very
good point that occurs regularly where we know that all/most short-lived
objects will no longer be referenced - at the start of a fresh frame.
So if we could do as many as possible of our G0 collections at that point
we'd avoid accidental copying of objects that are actually short-lived
into the older generation (which should reduce pressure on that older
generation, as well as speed up G0 collection). Ideally we'd have some way
of telling the GC to try to avoid running during the actual frame itself,
too, by for example tuning the heap region sizes automatically.
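
As far as I know, GHC does not take such hints today, but the blunt version
of the idea is already expressible: System.Mem.performGC forces a (major)
collection, so a game loop can at least pay for collection at the frame
boundary instead of mid-frame. A rough sketch (the step/render function and
the driver below are illustrative stand-ins, not a real engine):

import System.Mem (performGC)

-- Hypothetical per-frame driver: collect right after a frame is finished,
-- when most per-frame garbage is already dead.
gameLoop :: (world -> IO world) -> world -> Int -> IO world
gameLoop stepAndRender = go
  where
    go w 0 = return w
    go w n = do
      w' <- stepAndRender w   -- simulate + render one frame
      performGC               -- pay for GC at the frame boundary
      go w' (n - 1)

main :: IO ()
main = do
  final <- gameLoop (\n -> print n >> return (n + 1)) (0 :: Int) 3
  putStrLn ("final state: " ++ show final)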


-- 
Sebastian Sylvan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC's parallel garbage collector -- what am I doing wrong?

2010-03-01 Thread Daniel Fischer
Am Montag 01 März 2010 16:59:54 schrieb Michael Lesniak:
 I've (again) a very strange behaviour with the parallel GC and would be
 glad if someone could either reproduce (and explain) it or provide a
 solution.

Sorry, can't reproduce it:

$ for i in `seq 1 5`; do time ./mlPi 5000 +RTS -N1; done
7.54user 0.02system 0:07.57elapsed 99%CPU 
7.53user 0.02system 0:07.56elapsed 99%CPU
7.70user 0.01system 0:07.72elapsed 99%CPU
7.55user 0.02system 0:07.57elapsed 99%CPU
7.57user 0.01system 0:07.58elapsed 99%CPU

$ for i in `seq 1 5`; do time ./mlPi 5000 +RTS -N2; done
7.76user 0.02system 0:07.68elapsed 101%CPU
7.69user 0.02system 0:07.63elapsed 101%CPU
7.94user 0.04system 0:07.89elapsed 101%CPU
7.72user 0.02system 0:07.64elapsed 101%CPU
7.68user 0.04system 0:07.62elapsed 101%CPU

$ for i in `seq 1 5`; do time ./mlPi 5000 +RTS -N2 -qg; done
7.56user 0.02system 0:07.58elapsed 99%CPU
7.58user 0.00system 0:07.58elapsed 99%CPU
7.55user 0.01system 0:07.57elapsed 100%CPU
7.57user 0.02system 0:07.58elapsed 100%CPU
7.56user 0.00system 0:07.57elapsed 99%CPU


Seems your system has a problem synchronising the cores for GC.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-01 Thread Marc Weber
Excerpts from Stephen Tetley's message of Mon Mar 01 16:54:57 +0100 2010:
 Hi Günther
 
 For advanced programming with no special attention to monads, there is
 'The Fun of Programming' edited by Jeremy Gibbons and Oege de Moor,
  contents in a funny tab-box on this page:
 
 http://www.palgrave.com/products/title.aspx?PID=265581
 
  The Haskell 'maths' books - 'The Haskell Road' (Kees Doets & Jan van
  Eijk) and 'Discrete Mathematics Using a Computer' (John O'Donnell,
  Cordelia Hall & Rex Page) - aren't guides to advanced language
 features, but they take somewhat vanilla functional programming (no
 monads, no type-classes) quite a long way.

Whatever pointers may appear here, they should be put on the Haskell wiki
for future reference.

Marc Weber
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Evolving Faster Haskell Programs (now with LLVM!)

2010-03-01 Thread John Van Enk
What's the chance you have generational graphs for the rest of your examples
like you do with the first? I'd be interested to see those.

On Mon, Mar 1, 2010 at 3:02 AM, Don Stewart d...@galois.com wrote:



 http://donsbot.wordpress.com/2010/03/01/evolving-faster-haskell-programs-now-with-llvm/

 … In which I use genetic algorithms to search for optimal LLVM optimizer
 passes to make Haskell programs faster …

 LLVM + GHC is a lot of fun.

 -- Don
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Books for advanced Haskell

2010-03-01 Thread Günther Schmidt

Hello David,

I do agree, monads as *such* may not be super complicated. That said, I
see monads constructed and used by some of the Haskell big brains in ways
that would never ever occur to me. And then I read their papers, blog posts
and everything I can get my hands on and still don't get it.


A shining example is Dan Piponi's blog posts. Not his fault, mind. All I
see is that there is something powerful. I also notice that the big
brains construct monads in many different ways and thus give them
entirely different capabilities. An example of this is that some techniques
turn CBV into CBN or CBL while other techniques nullify this.


I just cannot find my way through all this, in part because the
information is scattered through many papers from many different authors
on what seem to be quite similar subjects. It's bloody confusing.


So, no book, eh?

Günther



Am 01.03.10 16:41, schrieb David Leimbach:
I don't think a Haskell-monad book would be terribly interesting.  A 
book on taking the pieces of category theory, with a little bit more 
of the math, to apply to Haskell would be greatly interesting to me.


Also a book on learning what to look for for measuring Haskell 
performance in space and time + optimization seems like it'd be a good 
thing to have as well.


Monad in itself is really simple.  Some of the implementations of 
Monad can be a little mind bending at times, but the Monad itself is 
not really that complicated.


Dave

2010/3/1 Günther Schmidt gue.schm...@web.de mailto:gue.schm...@web.de

Hi all,

there seems to be a huge number of things that monads can be used
for. And there are lots of papers, blog posts, etc. describing
that, some more or less accessible.

Apart from monads there are of course also Applicative Functors,
Monoids, Arrows and what have you. But in short the Monad thingy
seems to be the most powerful one of them all.

Is there a book that specializes on Monads? A Haskell-Monad book?

Günther


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe







___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What are free Monads?

2010-03-01 Thread Henning Thielemann


On Sat, 27 Feb 2010, Dan Doel wrote:


Free structures originate (I think) in algebra. There you'll find talk of free
groups, free rings, free monoids, etc.


How about turning this post into a HaskellWiki article in 
Category:Glossary ?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC's parallel garbage collector -- what am I doing wrong?

2010-03-01 Thread Michael Lesniak
Hello Daniel,

 Sorry, can't reproduce it:
That's also very helpful, so thanks for this :-)


 Seems your system has a problem synchronising the cores for GC.
Correct. Could you please tell me your system configuration (i.e. GHC
compiler and OS?)


Cheers,
 Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal pre-compiled packages

2010-03-01 Thread Henning Thielemann


On Sat, 27 Feb 2010, Diego Souza wrote:


Hi,
currently when one installs a cabal package it compiles it and then installs
the generated binaries. I wonder whether or not it would be useful to have
pre-compiled binaries as many package managers usually do (e.g. apt). I
often think that would save some time on the expense of a busier hackage
server capable of generating packages for many different platforms.


There is cabal-rpm
  http://hackage.haskell.org/package/cabal-rpm
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance of Knight's Tour

2010-03-01 Thread Daniel Fischer
Am Montag 01 März 2010 17:07:46 schrieb Artyom Kazak:
 Hi! I'm learning Haskell, and now I'm trying to make a framework for
 solving search problems, such as the Knight's Tour. For small boards it
 answers instantly. For 7x8 board - 23 seconds. For 8x8 board - more
 than 30 minutes (it hasn't finished yet). Where is the root of the
 evil?

In the algorithm. You investigate far too many dead ends. Since for larger 
boards, the number of dead ends increases fast, larger boards take much 
much longer.
With one little change, I get
$ echo 59 59 | ./knights +RTS -s > /dev/null
./knights +RTS -s
  68,243,720 bytes allocated in the heap
   5,914,848 bytes copied during GC
  36,436,628 bytes maximum residency (6 sample(s))
   8,486,604 bytes maximum slop
  58 MB total memory in use (1 MB lost due to fragmentation)

  Generation 0:   109 collections, 0 parallel,  0.03s,  0.03s elapsed
  Generation 1: 6 collections, 0 parallel,  0.02s,  0.02s elapsed

  INIT  time0.00s  (  0.00s elapsed)
  MUT   time0.05s  (  0.10s elapsed)
  GCtime0.05s  (  0.05s elapsed)
  EXIT  time0.00s  (  0.00s elapsed)
  Total time0.10s  (  0.15s elapsed)

  %GC time  50.0%  (32.2% elapsed)

  Alloc rate1,421,744,166 bytes per MUT second

  Productivity  50.0% of total user, 31.3% of total elapsed

For a reason I don't understand, if the second dimension is 60 and the 
first is > 18, it takes much longer,

$ echo 19 60 | ./knights +RTS -A8M -H64M -s > /dev/null
./knights +RTS -A8M -H64M -s
   2,374,198,988 bytes allocated in the heap
   1,873,412 bytes copied during GC
   5,611,132 bytes maximum residency (2 sample(s))
   4,934,352 bytes maximum slop
  70 MB total memory in use (1 MB lost due to fragmentation)

  Generation 0:   281 collections, 0 parallel,  0.15s,  0.15s elapsed
  Generation 1: 2 collections, 0 parallel,  0.00s,  0.01s elapsed

  INIT  time0.00s  (  0.00s elapsed)
  MUT   time1.17s  (  1.21s elapsed)
  GCtime0.15s  (  0.16s elapsed)
  EXIT  time0.00s  (  0.00s elapsed)
  Total time1.32s  (  1.37s elapsed)

  %GC time  11.2%  (11.6% elapsed)

  Alloc rate2,032,579,317 bytes per MUT second

  Productivity  88.8% of total user, 85.5% of total elapsed

The magic change:

    e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
      where
        legit ps = [t | t <- allTurns ! ps, arr ! t == 0]
        changes  = map snd $ sort [(length $ legit t, t) | t <- allTurns ! (x, y), arr ! t == 0]

investigate squares with fewer options first (Warnsdorff's heuristic).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Henning Thielemann


On Sat, 27 Feb 2010, Luke Palmer wrote:


I have seen some proposals around here for SoC projects and other
things to try to improve the latency of GHC's garbage collector.  I'm
currently developing a game in Haskell, and even 100ms pauses are
unacceptable for a real-time game.  I'm calling out to people who have
seen or made such proposals, because I would be willing to contribute
funding and/or mentor a project that would contribute to this goal.
Also any ideas for reducing this latency in other ways would be very
appreciated.


In my experiments with real-time audio signal processing I could always 
find a culprit for buffer-underflows other than the garbage collector. 
Sometimes it was a space leak (e.g. by adding a finalizer to the wrong 
object), sometimes incorrect handling of time differences, and when 
working with LLVM it was frequent recompilation of LLVM code.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC's parallel garbage collector -- what am I doing wrong?

2010-03-01 Thread Daniel Fischer
Am Montag 01 März 2010 18:52:53 schrieb Michael Lesniak:
 Hello Daniel,

  Sorry, can't reproduce it:

 That's also very helpful, so thanks for this :-)

  Seems your system has a problem synchronising the cores for GC.

 Correct. Could you please tell me your system configuration (i.e. GHC
 compiler and OS?)

ghc-6.12.1 (btw, I also tried with 6.10.3, there -N2 [and even -N3] are 
really close to -N1,
-N1: 7.57user 0.01system 0:07.58elapsed 100%CPU (+/- 0.02s)
-N2: 7.59user 0.03system 0:07.61elapsed 100%CPU (+/- 0.02s)
-N3: 7.60user 0.03system 0:07.62elapsed 100%CPU - 8.12user 0.00system 
0:07.87elapsed 103%CPU
typical -N3 behaviour with 6.12.1: 14.43user 0.03system 0:11.05elapsed 
130%CPU
),
openSuSE 11.1,

$ uname -a
Linux linux-mkk1 2.6.27.42-0.1-pae #1 SMP 2010-01-06 16:07:25 +0100 i686 
i686 i386 GNU/Linux

$ cat /proc/cpuinfo 
processor   : 0 
vendor_id   : GenuineIntel  
cpu family  : 15
model   : 4 
model name  : Intel(R) Pentium(R) 4 CPU 3.06GHz
stepping: 9
cpu MHz : 3058.856 
cache size  : 1024 KB  
physical id : 0
siblings: 2
core id : 0
cpu cores   : 1
apicid  : 0
initial apicid  : 0
fdiv_bug: no   
hlt_bug : no   
f00f_bug: no   
coma_bug: no   
fpu : yes  
fpu_exception   : yes  
cpuid level : 5
wp  : yes  
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm 
constant_tsc pebs bts pni monitor ds_cpl tm2 cid cx16 xtpr lahf_lm
bogomips: 6117.71
clflush size: 64
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 15
model   : 4
model name  : Intel(R) Pentium(R) 4 CPU 3.06GHz
stepping: 9
cpu MHz : 3058.856
cache size  : 1024 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 1
apicid  : 1
initial apicid  : 1
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 5
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm 
constant_tsc pebs bts pni monitor ds_cpl tm2 cid cx16 xtpr lahf_lm
bogomips: 6118.12
clflush size: 64
power management:




 Cheers,
  Michael

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance of Knight's Tour

2010-03-01 Thread Artyom Kazak
2010/3/1 Daniel Fischer daniel.is.fisc...@web.de:
 In the algorithm. You investigate far too many dead ends. Since for larger
 boards, the number of dead ends increases fast, larger boards take much
 much longer.
 With one little change, I get
 ...
 For a reason I don't understand, if the second dimension is 60 and the
 first is > 18, it takes much longer,
...
 The magic change:

     e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
       where
         legit ps = [t | t <- allTurns ! ps, arr ! t == 0]
         changes  = map snd $ sort [(length $ legit t, t) | t <- allTurns ! (x, y), arr ! t == 0]

 investigate squares with fewer options first.


Wow! Thanks you!
By the way, I didn't notice the difference between (59, 59) and (60,
60) on my machine...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] idioms ... for using Control.Applicative.WrapMonad or Control.Arrow.Kleisli?

2010-03-01 Thread Nicolas Frisby
Each time I find myself needing to use the wrapping functions
necessary for these embeddings, I grumble. Does anyone have a favorite
use-pattern for ameliorating these quickly ubiquitous conversions?

For runKleisli, I was considering something like

onKleisli ::
  (Control.Arrow.Kleisli m a b -> Control.Arrow.Kleisli m' a' b')
  -> (a -> m b) -> (a' -> m' b')
onKleisli f = Control.Arrow.runKleisli . f . Control.Arrow.Kleisli

but haven't really tested its usefulness. My most frequent use cases
usually include Arrow.first, Arrow.second, &&&, ***, or +++. E.g.

somefun :: (Monad m, Num a) => (a, d) -> m (a, d)
somefun = onKleisli Control.Arrow.first (\a -> return (a + 1))
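
(For concreteness, a self-contained version of the sketch above -- purely
illustrative, with a tiny main added:)

import Control.Arrow (Kleisli (..), first)

-- Lift a transformation on Kleisli arrows to one on plain monadic functions.
onKleisli :: (Kleisli m a b -> Kleisli m' a' b')
          -> (a -> m b) -> (a' -> m' b')
onKleisli f = runKleisli . f . Kleisli

-- Apply a monadic function to the first component of a pair.
somefun :: (Monad m, Num a) => (a, d) -> m (a, d)
somefun = onKleisli first (\a -> return (a + 1))

main :: IO ()
main = somefun (41 :: Int, "x") >>= print   -- prints (42,"x")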

Perhaps a Control.Arrow.Kleisli, which would export (same-named)
Kleisli specializations of all the Control.Arrow methods? And an
analogous Control.Applicative.Monad? (Data.Traversable does this a
little bit to specialize its interface for monads, such as
Data.Traversable.sequence.)

While writing this, I realized that you can't leave the Arrow-ness of
Kleisli arrows implicit, since (->) a (m b) is two kinds of arrows
depending on context -- which is precisely what the Kleisli newtype
resolves. So I'm not seeing a reason to bring up the 'class
Applicative m = Monad m where' dispute.

Thanks for your time.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance of Knight's Tour

2010-03-01 Thread Daniel Fischer
Am Montag 01 März 2010 19:29:45 schrieb Artyom Kazak:
 2010/3/1 Daniel Fischer daniel.is.fisc...@web.de:
  In the algorithm. You investigate far too many dead ends. Since for
  larger boards, the number of dead ends increases fast, larger boards
  take much much longer.
  With one little change, I get
  ...
  For a reason I don't understand, if the second dimension is 60 and the
  first is > 18, it takes much longer,
 ...
  The magic change:
 
      e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
        where
          legit ps = [t | t <- allTurns ! ps, arr ! t == 0]
          changes  = map snd $ sort [(length $ legit t, t) | t <- allTurns ! (x, y), arr ! t == 0]
 
  investigate squares with fewer options first.

 Wow! Thanks you!
 By the way, I didn't notice the difference between (59, 59) and (60,
 60) on my machine...

Strangely,

$ echo 62 59 | time ./knights > /dev/null
0.10user 0.08system 0:00.17elapsed 101%CPU
$ echo 65 59 | time ./knights > /dev/null
0.08user 0.07system 0:00.17elapsed 96%CPU

, so it's a thing of the second dimension predominantly (the size plays a 
small role, too).

As I said, I don't understand it, but looking at the allocation figures:
70*59: 97,970,072 bytes allocated in the heap
18*60: 12,230,296 bytes allocated in the heap
19*60: 2,374,148,320 bytes allocated in the heap
19*61: 13,139,688 bytes allocated in the heap
60*61: 71,771,324 bytes allocated in the heap
61*61: 72,965,428 bytes allocated in the heap

it seems that something is kicked out of the registers when the second 
dimension is 60 and the first > 18.

Very strange.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] using haskell to serve with apache

2010-03-01 Thread Yitzchak Gale
brad clawsie wrote:
 should i just try out something based on fastcgi?

Obviously it depends on exactly what you want to do.

For a simple very low volume page, even cgi should be
just fine. I use it all the time.

Regards,
Yitz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Thomas Schilling
On 1 March 2010 16:27, Job Vranish job.vran...@gmail.com wrote:
 My current area of work is on realtime embedded software programming for
 avionics systems. We do most of our coding in Ada but I've been dreaming of
 using Haskell instead.

Do you really think this is realistic?  Garbage collector aside,
Haskell's execution model is very difficult to predict, which I would
suspect is crucial for even soft real-time systems.  The whole concept
of lazy evaluation seems to run counter to the idea of real-time
systems.  Lazy evaluation essentially says do as little as possible
*now* at the expense of having to do it all later.  For a real-time
system you want almost the opposite; you want to make sure that you
complete all the required work in the current time slice.

A possible workaround would be to sprinkle lots of 'rnf's around your
code to make sure you don't build up a thunk or two that will delay
you later.  And if you do this, aren't you essentially programming in
a strict functional language (like SML or O'Caml)?  By careful
profiling and auditing you can probably rule out most of the
potential bad cases, so it can be acceptable for a soft real-time
system (Galois did something like this, I believe).  But for avionics
systems you probably want more assurances than that, don't you?
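
(For reference, the "sprinkle rnf's" style mentioned above, assuming the
deepseq package; a sketch only:)

import Control.DeepSeq (NFData, rnf)

-- Force the new state to normal form at a known point (say, the end of a
-- frame), so leftover thunks are not paid for in the middle of the next one.
stepFrame :: NFData st => (st -> st) -> st -> st
stepFrame update st = let st' = update st
                      in rnf st' `seq` st'

main :: IO ()
main = print (stepFrame (map (+ 1)) [1 .. 5 :: Int])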

-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread John Van Enk
 The whole concept of lazy evaluation seems to run counter to the idea of
real-time systems.

Hi Thomas,

Lazy evaluation is okay since it has deterministic characteristics. We can
predict what will happen quite accurately (heck, we can model it in simpler
cases). It might take a while to get people comfortable with the concept,
but it wouldn't be a show stopper (actually, some people would benefit
greatly from lazy evaluation and referential transparency).

The GC throws a wrench in a system which would otherwise make it past
certification with enough effort. If we can write a GC that can be modeled,
we'd have an excellent case for using Haskell in aerospace.

/jve

On Mon, Mar 1, 2010 at 2:37 PM, Thomas Schilling nomin...@googlemail.comwrote:

 On 1 March 2010 16:27, Job Vranish job.vran...@gmail.com wrote:
  My current area of work is on realtime embedded software programming for
  avionics systems. We do most of our coding in Ada but I've been dreaming
 of
  using Haskell instead.

 Do you really think this is realistic?  Garbage collector aside,
 Haskell's execution model is very difficult to predict, which I would
 suspect is crucial for even soft real-time systems.  The whole concept
 of lazy evaluation seems to run counter to the idea of real-time
 systems.  Lazy evaluation essentially says do as little as possible
 *now* at the expense of having to do it all later.  For a real-time
 system you want almost the opposite; you want to make sure that you
 complete all the required work in the current time slice.

 A possible workaround would be to sprinkle lots of 'rnf's around your
 code to make sure you don't build up a thunk or two that will delay
 you later.  And if you do this, aren't you essentially programming in
 a strict functional language (like SML or O'Caml)?  By careful
 profiling and auditing you can probably rule out most of the
 potential bad cases, so it can be acceptable for a soft real-time
 system (Galois did something like this, I believe).  But for avionics
 systems you probably want more assurances than that, don't you?

 --
 Push the envelope.  Watch it bend.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC's parallel garbage collector -- what am I doing wrong?

2010-03-01 Thread Bryan O'Sullivan
The parallel GC currently doesn't behave well with concurrent programs that
use multiple capabilities (aka OS threads), and the behaviour you see is
the known symptom of this. I believe that Simon Marlow has some fixes in

Are you saying that you see two different classes of undesirable
performance, one with -qg and one without? How are your threads in your real
program communicating with each other? We've seen problems there when
there's a lot of contention for e.g. IORefs among thousands of threads.

On Mon, Mar 1, 2010 at 7:59 AM, Michael Lesniak mlesn...@uni-kassel.de wrote:

 Hello haskell-cafe,

 Sorry for this long post, but I can't think of a way to describe and
 explain
 the problem in a shorter way.

 I've (again) run into some very strange behaviour with the parallel GC and
 would be glad if someone could either reproduce (and explain) it or provide
 a solution. A similar but unrelated problem has been described in [1].


 EXAMPLE CODE
 The following demonstration program, which is a much smaller and
 single-threaded version of my real problem behaves as my real program.
 It does some number crunching by calculating pi to a definable precision:

  -- File Pi.hs
  -- you need the numbers package from hackage.
  module Main where
  import Data.Number.CReal
  import System.Environment
  import GHC.Conc
 
  main = do
   digits <- (read . head) `fmap` getArgs :: IO Int
  calcPi digits
 
  calcPi digits = showCReal (fromEnum digits) pi `pseq` return ()

 Compile it with

  ghc --make -threaded -O2 Pi.hs -o pi


 BENCHMARKS
 On my two-core machine I get the following quite strange and
 unpredictable results:

 * Using one thread:

$ for i in `seq 1 5`;do time pi 5000 +RTS -N1;done

real0m1.441s
user0m1.390s
sys 0m0.020s

real0m1.449s
user0m1.390s
sys 0m0.000s

real0m1.399s
user0m1.370s
sys 0m0.010s

real0m1.401s
user0m1.380s
sys 0m0.000s

real0m1.404s
user0m1.380s
sys 0m0.000s


 * Using two threads, hence the parallel GC is used:

for i in `seq 1 5`;do time pi 5000 +RTS -N2;done

real0m2.540s
user0m2.490s
sys 0m0.010s

real0m1.527s
user0m1.530s
sys 0m0.010s

real0m1.966s
user0m1.900s
sys 0m0.010s

real0m5.670s
user0m5.620s
sys 0m0.010s

real0m2.966s
user0m2.910s
sys 0m0.020s


 * Using two threads, but disabling the parallel GC:

for i in `seq 1 5`;do time pi 5000 +RTS -N2 -qg;done

real0m1.383s
user0m1.380s
sys 0m0.010s

real0m1.420s
user0m1.360s
sys 0m0.010s

real0m1.406s
user0m1.360s
sys 0m0.010s

real0m1.421s
user0m1.380s
sys 0m0.000s

real0m1.360s
user0m1.360s
sys 0m0.000s


 THREADSCOPE
 I've additionally attached the threadscope profile of a really bad run,
 started with

 $ time pi 5000 +RTS -N2 -ls

real0m15.594s
user0m15.490s
sys 0m0.010s

 as file pi.pdf


 FURTHER INFORMATION/QUESTION
 Just disabling the parallel GC leads to very bad performance in my original
 code, which forks threads with forkIO and does a lot of communication.
 Hence, using -qg is not a real option for me.

 Have I overlooked some crucial aspect of this problem? If you've
 read this far, thank you for reading ... this far ;-)

 Cheers,
  Michael



 [1] http://osdir.com/ml/haskell-cafe@haskell.org/2010-02/msg00850.html


 --
 Dipl.-Inf. Michael C. Lesniak
 University of Kassel
 Programming Languages / Methodologies Research Group
 Department of Computer Science and Electrical Engineering

 Wilhelmshöher Allee 73
 34121 Kassel

 Phone: +49-(0)561-804-6269

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance of Knight's Tour

2010-03-01 Thread Artyom Kazak
2010/3/1 Daniel Fischer daniel.is.fisc...@web.de:
 On Monday, 1 March 2010, 19:29:45, Artyom Kazak wrote:
 2010/3/1 Daniel Fischer daniel.is.fisc...@web.de:
  In the algorithm. You investigate far too many dead ends. Since for
  larger boards, the number of dead ends increases fast, larger boards
  take much much longer.
  With one little change, I get
  ...
  For a reason I don't understand, if the second dimension is 60 and the
  first is > 18, it takes much longer,
 ...
  The magic change:
 
       e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
         where
          legit ps = [t | t <- allTurns ! ps, arr ! t == 0]
          changes = map snd $ sort
                      [(length $ legit t, t) | t <- allTurns ! (x, y), arr ! t == 0]
 
  investigate squares with fewer options first.

 Wow! Thank you!
 By the way, I didn't notice the difference between (59, 59) and (60,
 60) on my machine...

 Strangely,

 $ echo 62 59 | time ./knights > /dev/null
 0.10user 0.08system 0:00.17elapsed 101%CPU
 $ echo 65 59 | time ./knights > /dev/null
 0.08user 0.07system 0:00.17elapsed 96%CPU

 , so it's a thing of the second dimension predominantly (the size plays a
 small role, too).

 As I said, I don't understand it, but looking at the allocation figures:
 70*59: 97,970,072 bytes allocated in the heap
 18*60: 12,230,296 bytes allocated in the heap
 19*60: 2,374,148,320 bytes allocated in the heap
 19*61: 13,139,688 bytes allocated in the heap
 60*61: 71,771,324 bytes allocated in the heap
 61*61: 72,965,428 bytes allocated in the heap

 it seems that something is kicked out of the registers when the second
 dimension is 60 and the first > 18.

 Very strange.

Maybe we were compiling with different options? I compiled with -O2
-fvia-C -optc-O3.
...
Oh, I know! I slightly changed the code.

import Data.List (sortBy)
import Data.Ord (comparing)

e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
  where
    legit ps = [t | t <- allTurns ! ps, arr ! t == 0]
    changes = sortOn (length . legit) (legit (x, y))
    sortOn = sortBy . comparing

My version gives an answer for 60, 60 in one second. But if both
dimensions are > 60, it won't finish.
Yes, very strange.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Job Vranish
On Mon, Mar 1, 2010 at 2:37 PM, Thomas Schilling nomin...@googlemail.com wrote:

 On 1 March 2010 16:27, Job Vranish job.vran...@gmail.com wrote:
  My current area of work is on realtime embedded software programming for
  avionics systems. We do most of our coding in Ada but I've been dreaming of
  using Haskell instead.

 A possible workaround would be to sprinkle lots of 'rnf's around your
 code to make sure you don't build up a thunk or two that will delay
 you later.  And if you do this, aren't you essentially programming in
 a strict functional language (like SML or O'Caml)?  By careful
 profiling and auditing you can probably rule out most of the
 potential bad cases, so it can be acceptable for a soft real-time
 system (Galois did something like this, I believe).  But for avionics
 systems you probably want more assurances than that, don't you?


Yes and no.
It's true that lazy evaluation makes reasoning about timings a bit more
difficult (and might not be usable in very time-critical scenarios), but it
still has well-defined, deterministic behavior.

It's the referential transparency that saves us here. If you run a lazy
function with the same objects (in the same evaluation state) it should
_theoretically_ take the same amount of time to run. All of our toplevel
inputs will be strict, and if we keep our frame-to-frame state strict, our
variances in runtimes, given the same inputs, should be quite low modulo the
GC.

Even our current code can take significantly different amounts of time to
compute things depending on what you're doing. Some waypoints take longer to
lookup from the database than others. Predicting the time to arrival can
take significantly longer/shorter depending on seemingly trivial parameters,
etc...

It matters less that code always takes the same amount of time to run
(though it needs to always be less than the frame time)  and more so that it
always takes the same amount of time to run given the same initial
conditions.
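
As a hypothetical illustration (not our actual code, just the shape of the
idea): with strict fields in the frame state and a strict accumulator in the
driver loop, forcing the outer constructor each frame keeps the whole state
evaluated, so the per-frame work stays bounded and repeatable:

{-# LANGUAGE BangPatterns #-}

-- Strict fields: forcing the constructor forces the payload as well.
data FrameState = FrameState { fsAltitude :: !Double, fsHeading :: !Double }

stepFrame :: Double -> FrameState -> FrameState
stepFrame dt (FrameState alt hdg) = FrameState (alt + 5 * dt) hdg

-- The bang pattern keeps the state evaluated between frames.
runFrames :: [Double] -> FrameState -> FrameState
runFrames []       !st = st
runFrames (dt:dts) !st = runFrames dts (stepFrame dt st)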

- Job
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] references for compiler optimizations for functional languages

2010-03-01 Thread Michael Vanier

Hi everyone,

I'm interested in collecting good references for compiler optimizations 
for functional languages (lazy, strict, statically-typed or not).  Any 
suggestions?


Thanks in advance,

Mike


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC's parallel garbage collector -- what am I doing wrong?

2010-03-01 Thread Michael Lesniak
Hello Bryan,

 The parallel GC currently doesn't behave well with concurrent programs that
  use multiple capabilities (aka OS threads), and the behaviour you see is
  the known symptom of this. I believe that Simon Marlow has some fixes in
 hand that may go into 6.12.2.


 Are you saying that you see two different classes of undesirable
 performance, one with -qg and one without?
No, I wouldn't say that.


 How are your threads in your real
 program communicating with each other? We've seen problems there when
 there's a lot of contention for e.g. IORefs among thousands of threads.
No, my program uses only MVars and Chans and far fewer threads,
normally as many threads as there are cores available (doing number crunching).


But, as Daniel pointed out, I believe it has something to do with
Ubuntu's kernel patches. I tried another machine with an earlier kernel
(2.6.28...) and the same program did not behave as badly.

Currently I'm trying some experiments with custom kernels, and if I
solve this with a custom-compiled kernel I'll blog about it.


Cheers,
  Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance of Knight's Tour

2010-03-01 Thread Daniel Fischer
On Monday, 1 March 2010, 21:40:16, Artyom Kazak wrote:
 Maybe we were compiling with different options? I compiled with -O2
 -fvia-C -optc-O3.
 ...
 Oh, I know! I slightly changed the code.

 import Data.Ord

 e ((x, y), arr) p = [(t, arr // [(t, p)]) | t <- changes]
   where
     legit ps = [t | t <- allTurns ! ps, arr ! t == 0]
     changes = sortOn (length . legit) (legit (x, y))
     sortOn = sortBy . comparing

Ah, that!

I also tried that; it gets stuck for different values.

With a little debugging output, I saw that it got stuck in a dead end,
always advancing a few steps and then backtracking. I'm now also considering
the grandchildren; that speeds things up and enters fewer dead ends,
but so far I haven't found a valuation which doesn't enter a dead end for
some values. I have an idea, though: also consider the distance from the
border, and try squares near the border first.
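
A sketch of that valuation (untested, with hypothetical names; board
coordinates are assumed to run from 1 to rows/cols): order the candidate
squares by how few onward moves they have, breaking ties in favour of
squares closer to the border.

import Data.List (sortBy)
import Data.Ord (comparing)

orderMoves :: (Int, Int) -> ((Int, Int) -> Int) -> [(Int, Int)] -> [(Int, Int)]
orderMoves (rows, cols) onwardMoves =
    sortBy (comparing (\t -> (onwardMoves t, borderDist t)))
  where
    -- distance of a square from the nearest edge of the board
    borderDist (x, y) = minimum [x - 1, y - 1, rows - x, cols - y]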


 My version gives an answer for 60, 60 in one second. But if both
 dimensions are > 60, it won't finish.
 Yes, very strange.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] references for compiler optimizations for functional languages

2010-03-01 Thread Don Stewart
mvanier42:
 Hi everyone,

 I'm interested in collecting good references for compiler optimizations  
 for functional languages (lazy, strict, statically-typed or not).  Any  
 suggestions?


There's lots for what GHC implements on SimonPJ's site:

http://www.research.microsoft.com/~simonpj/Papers/inlining/index.htm

http://research.microsoft.com/en-us/um/people/simonpj/papers/cpr/index.htm


http://research.microsoft.com/en-us/um/people/simonpj/papers/usage-types/usage.htm


http://research.microsoft.com/en-us/um/people/simonpj/papers/comp-by-trans-scp.ps.gz


http://research.microsoft.com/en-us/um/people/simonpj/papers/andy-thesis.ps.gz


http://research.microsoft.com/en-us/um/people/simonpj/papers/deforestation-short-cut.ps.Z

http://www.cse.unsw.edu.au/~dons/papers/CLS07.html :)

I've collected many of them here:

http://haskell.org/haskellwiki/Research_papers/Compilation#Compiler_Analyses

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Neil Davies
I don't know that hanging your hat on the deterministic coat hook is  
the right thing to do.


The way that I've always looked at this is more probabilistic - you  
want the result to arrive within a certain time frame for a certain  
operation with a high probability.  There is always the probability  
that the h/w will fail anyway - you could even reason that the  
software taking too long is just  a transient fault that clears -  
random (non-correlated - preferably a Bernoulli choice) failures are
OK, non-deterministic ones aren't.


This probabilistic approach - accepting a low probability of being in
the tail of the timing distribution - would give a lot more flexibility
in any form of (say, incremental) GC: you may not be able to bound the
incremental steps absolutely, but a strong probabilistic bound might
well be more achievable.


Neil


On 1 Mar 2010, at 21:06, Job Vranish wrote:




On Mon, Mar 1, 2010 at 2:37 PM, Thomas Schilling nomin...@googlemail.com 
 wrote:

On 1 March 2010 16:27, Job Vranish job.vran...@gmail.com wrote:
 My current area of work is on realtime embedded software programming for
 avionics systems. We do most of our coding in Ada but I've been dreaming of
 using Haskell instead.

A possible workaround would be to sprinkle lots of 'rnf's around your
code to make sure you don't build up a thunk or two that will delay
you later.  And if you do this, aren't you essentially programming in
a strict functional language (like SML or O'Caml)?  By careful
profiling and auditing you can probably rule out most of the
potential bad cases, so it can be acceptable for a soft real-time
system (Galois did something like this, I believe).  But for avionics
systems you probably want more assurances than that, don't you?

Yes and no.
It's true that lazy evaluation makes reasoning about timings a bit
more difficult (and might not be usable in very time-critical
scenarios), but it still has well-defined, deterministic behavior.


It's the referential transparency that saves us here. If you run a  
lazy function with the same objects (in the same evaluation state)  
it should _theoretically_ take the same amount of time to run. All  
of our toplevel inputs will be strict, and if we keep our frame-to-frame
state strict, our variances in runtimes, given the same
inputs, should be quite low modulo the GC.


Even our current code can take significantly different amounts of  
time to compute things depending on what you're doing. Some  
waypoints take longer to lookup from the database than others.  
Predicting the time to arrival can take significantly longer/shorter  
depending on seemingly trivial parameters, etc...


It matters less that code always takes the same amount of time to  
run (though it needs to always be less than the frame time)  and  
more so that it always takes the same amount of time to run given  
the same initial conditions.


- Job
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] references for compiler optimizations for functional languages

2010-03-01 Thread Michael Vanier

Awesome!  Thanks, Don!

Mike

Don Stewart wrote:

mvanier42:
  

Hi everyone,

I'm interested in collecting good references for compiler optimizations  
for functional languages (lazy, strict, statically-typed or not).  Any  
suggestions?





There's lots for what GHC implements on SimonPJ's site:

http://www.research.microsoft.com/~simonpj/Papers/inlining/index.htm

http://research.microsoft.com/en-us/um/people/simonpj/papers/cpr/index.htm


http://research.microsoft.com/en-us/um/people/simonpj/papers/usage-types/usage.htm


http://research.microsoft.com/en-us/um/people/simonpj/papers/comp-by-trans-scp.ps.gz


http://research.microsoft.com/en-us/um/people/simonpj/papers/andy-thesis.ps.gz


http://research.microsoft.com/en-us/um/people/simonpj/papers/deforestation-short-cut.ps.Z

http://www.cse.unsw.edu.au/~dons/papers/CLS07.html :)

I've collected many of them here:

http://haskell.org/haskellwiki/Research_papers/Compilation#Compiler_Analyses

-- Don
  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Thomas Schilling
On 1 March 2010 21:34, Neil Davies semanticphilosop...@googlemail.com wrote:
 I don't know that hanging your hat on the deterministic coat hook is the
 right thing to do.
 The way that I've always looked at this is more probabilistic - you want the
 result to arrive within a certain time frame for a certain operation with a
 high probability.

That's where I would think the difference between hard and soft
real-time lies, as I understand it.  Of course, real hard real-time
is practically impossible on modern hardware due to caches,
branch prediction, out-of-order execution and the like.

 There is always the probability that the h/w will fail
 anyway - you could even reason that the software taking too long is just  a
 transient fault that clears - random (non-correlated - preferably a
 bernoulli choice) failures are OK, non-deterministic ones aren't.
 This probabilistic, low probability of being at the tail of timing, approach
 would give a lot more flexibility in any form of (say incremental) GC - you
 may not be able to bound the incremental steps absolutely but a strong
 probabilistic bound might well be more achievable.

The Garbage-First paper showed some promising but not spectacular
successes in keeping the soft real-time goal.  I don't know the
correct numbers off the top of my head, but I think they missed the
deadline in < 5% of the cases.  I would assume that it should be < 1%
if you want to treat it as a random failure.  I understand that in
fault-tolerant systems you have to handle worse things than that, but
you make assumptions about the likelihood of each kind of error
(otherwise you may run into QoS issues).

As Job and John have pointed out, though, laziness per se doesn't seem
to be an issue, which is good to hear.  Space leaks might, but there
is no clear evidence that they are particularly harder to avoid than
in strict languages.  As Röjemo and Runciman put it: "By using heap
profiles on a lazy language we find problems with lazy languages.
Using it on a strict language we would find problems with strict
languages too." [1] (a highly recommended paper, btw.)

[1]: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.1219


--
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Optimizing hash array mapped tries

2010-03-01 Thread Edward Z. Yang
Excerpts from Don Stewart's message of Wed Feb 24 16:13:18 -0500 2010:
 Almost all the vector library generators fill a vector destructively,
 before doing an unsafeFreeze.

Alright.  Is there a standard idiom for filling vectors pointing to vectors
destructively, and then unsafely freezing all of them (as you would find in a
recursively defined data structure), or is this something simply not supported
by the compiler at this time?
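
Concretely, something like this (untested) sketch is the shape I have in
mind, using create/unsafeFreeze from Data.Vector and new/write from
Data.Vector.Mutable:

import Control.Monad (forM_)
import qualified Data.Vector as V
import qualified Data.Vector.Mutable as MV

-- Build a vector of vectors by filling each inner mutable vector
-- destructively, freezing it, and storing it in the (mutable) outer one,
-- which 'create' then freezes as a whole.
buildNested :: Int -> Int -> V.Vector (V.Vector Int)
buildNested rows cols = V.create $ do
  outer <- MV.new rows
  forM_ [0 .. rows - 1] $ \i -> do
    inner <- MV.new cols
    forM_ [0 .. cols - 1] $ \j -> MV.write inner j (i * cols + j)
    V.unsafeFreeze inner >>= MV.write outer i
  return outer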

Cheers,
Edward
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Jeremy Shaw
I am just going to jump on the RT dogpile and mention that I too have wanted
RT friendly GC in Haskell. I have attempted on more than one occasion to
implement real-time audio applications in Haskell. But I tend to get a lot
of buffer underruns, most likely due to GC.

For audio apps, there is a callback that happens every few milliseconds. As
often as 4ms. The callback needs to finish as quickly as possible to avoid
buffer underruns. So, in theory, I believe I would want garbage collection
to only happen in the time periods between when the callback is running.
This wouldn't affect the total amount of time that garbage collection ran --
but it would help minimize the amount of time spent in the callback, which
should reduce buffer underruns.
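
One crude workaround (not a real fix, since it still spends time on the
audio thread) would be to trigger a collection explicitly at a point with
known slack, right after handing off the buffer, so that collections tend
to land in that window rather than mid-callback -- something like this
untested sketch, where 'fillBuffer' is a placeholder for the real work:

import System.Mem (performGC)

-- Hypothetical callback: do the time-critical work first, then pay the
-- GC cost while there is still slack before the next deadline.
audioCallback :: IO () -> IO ()
audioCallback fillBuffer = do
  fillBuffer     -- the real-time DSP work (placeholder)
  performGC      -- collect now, during the remaining slack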

My feeling right now is that the 'best' solution might be something similar
to Synthesis OS. I would create a DSL for the realtime DSP code. Using
harpy, this DSL would be compiled to assembly with execution time guarantees
(as much as can be predicted on modern hardware). For a 'modular synth' this
might actually be better than C code, because the effects of certain choices
could be 'compiled in' instead of having to check the setting via a compare.
(that is what Synthesis OS does).

- jeremy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Linear programming in Haskell

2010-03-01 Thread Louis Wasserman
If you're using the LPM monad, then this is about as easy as that: you use

do (x1:x2:x3:_) <- newVariables
   ..

I mean, run is equivalent to

run f = execLPM (newVariables >>= put . f)

so...yeah, I think this is a reasonable solution.

Alternatively, I'm almost positive there's a monad out there that lets you
draw on unique values.  It'd look something like

type Variable = Int
newtype UniqueM a = UniqueM (Variable -> (Variable, a))

-- monad instance, etc.

getUnique :: UniqueM Variable
getUnique = UniqueM (\x -> (x+1, x))
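
Spelled out (rough and untested -- this is really just State Variable in
disguise), the omitted instances might look like:

instance Functor UniqueM where
  fmap f (UniqueM g) = UniqueM (\v -> let (v', a) = g v in (v', f a))

instance Applicative UniqueM where
  pure a = UniqueM (\v -> (v, a))
  UniqueM mf <*> UniqueM ma = UniqueM (\v ->
    let (v' , f) = mf v
        (v'', a) = ma v'
    in  (v'', f a))

instance Monad UniqueM where
  return = pure
  UniqueM m >>= k = UniqueM (\v ->
    let (v', a)     = m v
        UniqueM m'  = k a
    in  m' v')

-- run with an (arbitrary) initial supply of 0
runUnique :: UniqueM a -> a
runUnique (UniqueM f) = snd (f 0)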

Then you can use the LPT monad transformer to construct a linear program
around this, just by working in the LPT Variable c UniqueM monad.

That's actually a nicer solution than my current implementation.  I'll do
that, then...

Louis Wasserman
wasserman.lo...@gmail.com
http://profiles.google.com/wasserman.louis


On Mon, Mar 1, 2010 at 9:29 AM, Henning Thielemann 
lemm...@henning-thielemann.de wrote:


 On Sun, 28 Feb 2010, Louis Wasserman wrote:

  It's an expensive operation, though -- since I don't track the set of all
 variables as the LP is built, I need to construct the set of all variables
 before generating new ones -- so it's recommended that you get all the
 variables you need in one or two passes.


 Then you might prefer a single operation that generates all variables and
 runs an enclosed problem on them:

 run :: ([Variable] -> LP a) -> LP a


 Use as

 run $ \(x0:x1:x2:_) -> do
   ...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Simon Cranshaw
On Sun, Feb 28, 2010 at 6:06 PM, Pavel Perikov peri...@gmail.com wrote:

 Did you really see 100ms pauses?! I never did extensive research on this
 but my numbers are rather in the microsecond range (below 1ms). What causes
 such a long garbage collection? Lots of allocated and long-lived objects?


I am using an automated options trading system written in Haskell.  I'm more
on the business side than the technical side of the issues so I'm not clear
on all the details.  I can confirm that without tweaking the RTS settings we
were seeing over 100ms GC pauses.  I've mainly been trying to minimise our
overall response time and we were able to improve this by increasing the
allocation area with -A.  I think this brought GC well under 100ms.  We are
still working on analysis of this.

I can also confirm, as others seem to have found, that under 6.12 the
parallel GC seemed to make things much worse. I am always turning it off
with -qg.  If there is a project to improve performance of the GC I could be
interested to contribute.

Simon Cranshaw
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What are free Monads?

2010-03-01 Thread Dan Doel
On Monday 01 March 2010 12:50:21 pm Henning Thielemann wrote:
 On Sat, 27 Feb 2010, Dan Doel wrote:
  Free structures originate (I think) in algebra. There you'll find talk of
  free groups, free rings, free monoids, etc.
 
 How about turning this post into a HaskellWiki article in
 Category:Glossary ?

Here's a first draft.

  http://haskell.org/haskellwiki/Free_monad
  http://haskell.org/haskellwiki/Free_structure

Feel free to make suggestions/changes.
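
For readers who don't want to click through, the construction described
there looks roughly like this (an untested sketch, assuming GHC's
DeriveFunctor extension):

{-# LANGUAGE DeriveFunctor #-}

-- The free monad over a functor f: a tree of f-layers with values at
-- the leaves.
data Free f a = Pure a | Free (f (Free f a))
  deriving Functor

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  return = pure
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

-- Lift one layer of the functor into the free monad.
liftF :: Functor f => f a -> Free f a
liftF = Free . fmap Pure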

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread John Van Enk
Simon,

Would a more predictable GC or a faster GC be better in your case? (Of
course, both would be nice.)

/jve

On Mon, Mar 1, 2010 at 9:33 PM, Simon Cranshaw simon.crans...@gmail.com wrote:

 On Sun, Feb 28, 2010 at 6:06 PM, Pavel Perikov peri...@gmail.com wrote:

 Did you really see 100ms pauses?! I never did extensive research on this
 but my numbers are rather in the microsecond range (below 1ms). What causes
 such a long garbage collection? Lots of allocated and long-lived objects?


 I am using an automated options trading system written in Haskell.  I'm
 more on the business side than the technical side of the issues so I'm not
 clear on all the details.  I can confirm that without tweaking the RTS
 settings we were seeing over 100ms GC pauses.  I've mainly been trying to
 minimise our overall response time and we were able to improve this by
 increasing the allocation area with -A.  I think this brought GC well under
 100ms.  We are still working on analysis of this.

 I can also confirm, as others seem to have found, that under 6.12 the
 parallel GC seemed to make things much worse. I am always turning it off
 with -qg.  If there is a project to improve performance of the GC I could be
 interested to contribute.

 Simon Cranshaw

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Simon Cranshaw
John,

I suspect speed is more important than timing.  In practice I don't mind a
pause for a gc when nothing is happening in the market.  It's when the
market is moving fast that we particularly want to keep our response time
low. Busy times may continue for long periods though and I'm not sure if
being able to defer gc (if that's what you're suggesting) would be possible
for that long.  We are still studying how the gc times are interacting with
our response times and so hopefully I will have a better answer to this
later on.

Simon

On Tue, Mar 2, 2010 at 2:06 PM, John Van Enk vane...@gmail.com wrote:

 Simon,

 Would a more predictable GC or a faster GC be better in your case? (Of
 course, both would be nice.)

 /jve

 On Mon, Mar 1, 2010 at 9:33 PM, Simon Cranshaw 
  simon.crans...@gmail.com wrote:

 On Sun, Feb 28, 2010 at 6:06 PM, Pavel Perikov peri...@gmail.com wrote:

 Did you really see 100ms pauses?! I never did extensive research on this
 but my numbers are rather in the microsecond range (below 1ms). What causes
 such a long garbage collection? Lots of allocated and long-lived objects?


 I am using an automated options trading system written in Haskell.  I'm
 more on the business side than the technical side of the issues so I'm not
 clear on all the details.  I can confirm that without tweaking the RTS
 settings we were seeing over 100ms GC pauses.  I've mainly been trying to
 minimise our overall response time and we were able to improve this by
 increasing the allocation area with -A.  I think this brought GC well under
 100ms.  We are still working on analysis of this.

 I can also confirm, as others seem to have found, that under 6.12 the
 parallel GC seemed to make things much worse. I am always turning it off
 with -qg.  If there is a project to improve performance of the GC I could be
 interested to contribute.

 Simon Cranshaw

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Has anybody translated Douglas Hofstadter's Scientific American articles introducting Scheme to a general audience into Haskell?

2010-03-01 Thread Benjamin L. Russell
There is an interesting, if somewhat dated, suggestion on Lambda the
Ultimate (see http://lambda-the-ultimate.org/node/1748) that someone
translate Doug Hofstadter's Scientific American columns introducing
Scheme to a general audience into Haskell.

(I came across this link while adding full titles and links to the
HaskellWiki Books and tutorials page (see
http://www.haskell.org/haskellwiki/Books_and_tutorials), where I clicked
on the link to Tutorials (see
http://www.haskell.org/haskellwiki/Tutorials), which contained a link to
a Haskell vs. Scheme (see http://www.reddit.com/r/programming/tb/nq1k)
article, which described the post containing the suggestion.)

According to a comment by Ehud Lamm (see
http://lambda-the-ultimate.org/node/1748#comment-21292) on the above
post, the columns are in Hofstadter's book _Metamagical Themas:
Questing For The Essence Of Mind And Pattern_ [1] (see
http://www.amazon.com/Metamagical-Themas-Questing-Essence-Pattern/dp/0465045669).

Has anybody translated Hofstadter's articles from Scheme into Haskell?

-- Benjamin L. Russell

[1] Hofstadter, Douglas. _Metamagical Themas: Questing For The Essence Of Mind
And Pattern._  New York: Basic Books,
1996. 
http://www.amazon.com/Metamagical-Themas-Questing-Essence-Pattern/dp/0465045669.
-- 
Benjamin L. Russell  /   DekuDekuplex at Yahoo dot com
http://dekudekuplex.wordpress.com/
Translator/Interpreter / Mobile:  +011 81 80-3603-6725
Furuike ya, kawazu tobikomu mizu no oto. -- Matsuo Bashō

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe