Re: [Haskell-cafe] Conduit and pipelined protocol processing using a threadpool

2012-11-27 Thread Michael Snoyman
On Tue, Nov 27, 2012 at 7:25 PM, Nicolas Trangez wrote:

> Michael,
>
> On Tue, 2012-11-27 at 17:14 +0200, Michael Snoyman wrote:
> > I think the stm-conduit package[1] may be helpful for this use case.
> > Each time you get a new command, you can fork a thread and give it the
> > TBMChan to write to, and you can use sourceTBMChan to get a source to
> > send to the client.
>
> That's +- what I had in mind. I did find stm-conduit before and did try
> to get the thing working using it, but these attempts failed.
>
> I attached an example which might clarify what I intend to do. I'm aware
> it contains several potential bugs (leaking threads etc), but that's
> beside the question ;-)
>
> If only I could figure out what to put on the 3 lines of comment I left
> in there...
>
> Thanks for your help,
>
> Nicolas
>
>
The issue is that you're trying to put everything into a single Conduit,
which forces reading and writing to occur in a single thread of execution.
Since you want your writing to be triggered by a separate event (data being
available on the Chan), you're running into limitations.

The reason network-conduit provides a Source for the incoming data and a
Sink for outgoing data is specifically to address your use case. You want
to take the data from the Source and put it into the Chan in one thread,
and take the data from the other Chan and put it into the Sink in a
separate thread. Something like:

myApp appdata = do
chan1 <- ...
chan2 <- ...
replicateM_ 5 $ forkIO $ worker chan1 chan2
forkIO $ appSource appdata $$ sinkTBMChan chan1
sourceTBMChan chan2 $$ appSink appdata

You'll also want to make sure to close chan1 and chan2 to make sure that
your threads stop running.
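A self-contained sketch of that wiring, with plain `Chan`s from base standing in for stm-conduit's `TBMChan` and a list standing in for the network `Source`/`Sink` (`runPipelined` and its helpers are illustrative names, not part of network-conduit or stm-conduit):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan
import Control.Monad (replicateM, replicateM_)

-- One thread feeds incoming values into chanIn (the appSource role),
-- several workers process them, and the caller drains chanOut (the
-- appSink role).
runPipelined :: (Int -> Int) -> [Int] -> IO [Int]
runPipelined f xs = do
  chanIn  <- newChan            -- commands from the client
  chanOut <- newChan            -- replies to the client
  let nWorkers = 5
  -- each worker stops when it reads the Nothing close marker
  replicateM_ nWorkers $ forkIO $
    let loop = do
          mx <- readChan chanIn
          case mx of
            Nothing -> return ()
            Just x  -> writeChan chanOut (f x) >> loop
    in loop
  -- "source" thread: push all input, then one close marker per worker
  _ <- forkIO $ do
    mapM_ (writeChan chanIn . Just) xs
    replicateM_ nWorkers (writeChan chanIn Nothing)
  -- "sink": collect exactly as many replies as commands were sent
  replicateM (length xs) (readChan chanOut)

main :: IO ()
main = do
  rs <- runPipelined (+ 1) [1 .. 20]
  print (sum rs)
```

Note that replies can come back out of order relative to the commands, so a real pipelined protocol would tag each command with a sequence number; the `Nothing` markers play the role of closing the channels.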

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Recursive timezone-loading function

2012-11-27 Thread David Thomas
https://github.com/dlthomas/tzcache

A small bit of code, but seems likely to be useful enough that I figured I
should share.

I've a few notes/questions:

1) Does this already exist somewhere I missed?

2) It seems silly to make this its own library - any suggestions where it
could be added?

3) Is the traverse-a-directory-and-populate-a-map pattern one worth
abstracting?  If so, where should that go?

4) Presently, it's a static cache entirely pre-loaded.  This seems fine, as
it's not a terribly huge amount of data, but it's worth noting.

5) Any comments on the code generally?  Improvements?  Complaints?
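Regarding question 3, the traverse-a-directory-and-populate-a-map pattern might be abstracted along these lines (a hypothetical sketch, not the code in the repository; `buildMap` is an invented name):

```haskell
import qualified Data.Map as Map
import System.Directory
  (createDirectoryIfMissing, doesDirectoryExist, getTemporaryDirectory, listDirectory)
import System.FilePath ((</>))

-- Walk a directory tree, mapping each file's path (relative to the root)
-- to a value produced by the supplied loader action.
buildMap :: (FilePath -> IO a) -> FilePath -> IO (Map.Map FilePath a)
buildMap load root = go ""
  where
    go rel = do
      let dir = if null rel then root else root </> rel
      names <- listDirectory dir
      maps  <- mapM (visit rel) names
      return (Map.unions maps)
    visit rel name = do
      let rel' = if null rel then name else rel </> name
      isDir <- doesDirectoryExist (root </> rel')
      if isDir
        then go rel'
        else Map.singleton rel' <$> load (root </> rel')

main :: IO ()
main = do
  -- build a tiny tree to demonstrate, then load every file into the map
  tmp <- getTemporaryDirectory
  let root = tmp </> "tzcache-demo"
  createDirectoryIfMissing True (root </> "America")
  writeFile (root </> "UTC") "utc"
  writeFile (root </> "America" </> "Chicago") "cst"
  m <- buildMap readFile root
  print (Map.toList m)
```

For a timezone cache, `readFile` would be replaced by whatever parses a tzfile into a `TimeZoneSeries`-like value.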

Thanks,

- David
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal configure && cabal build && cabal install

2012-11-27 Thread Albert Y. C. Lai

On 12-11-27 04:40 AM, kudah wrote:

On Tue, 27 Nov 2012 02:20:35 -0500 "Albert Y. C. Lai" 
wrote:


When "cabal build" succeeds, it always says:

(older) "registering -"
(newer) "In-place registering -"

That's what it says. But use ghc-pkg and other tests to verify that
no registration whatsoever has happened.


It doesn't register in the user package-db; it registers in its own
dist/package.conf.inplace. If it didn't, you wouldn't be able to build
an executable and a library in one package such that the executable depends
on the library.


That's fair. But it also means

cabal configure
cabal build

is not equivalent to

cabal configure
cabal build
cabal register --inplace

which was the context when you said "(newer) cabal build registers 
inplace automatically".


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Nicolas Wu
On Tue, Nov 27, 2012 at 9:45 PM, Jeff Shaw  wrote:
> On 11/27/2012 2:45 PM, Gershom Bazerman wrote:
>>
>> HDBC-odbc has long used the wrong type of FFI imports, resulting in
>> long-running database queries potentially blocking all other IO. I just
>> checked, and apparently a patch was made to the repo in September that
>> finally fixes this [1], but apparently a new release has yet to be uploaded
>> to hackage. In any case, if you try to install it from the repo, this may at
>> least solve some of your problems.
>>
>> [1]
>> https://github.com/hdbc/hdbc-odbc/commit/7299d3441ce2e1d5a485fe79b37540c0a44a44d4
>>
>> --Gershom
>
> Gershom, Thanks for pointing this out. I've checked out the latest hdbc-odbc
> code, and I'll see if there's an improvement.

Hi, I'm the maintainer of HDBC. I haven't yet released this code since
it hasn't yet been fully tested. However, if you're happy with it,
I'll push the version with proper ffi bindings up to Hackage.

Nick

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Jeff Shaw

On 11/27/2012 2:45 PM, Gershom Bazerman wrote:
HDBC-odbc has long used the wrong type of FFI imports, resulting in 
long-running database queries potentially blocking all other IO. I 
just checked, and apparently a patch was made to the repo in September 
that finally fixes this [1], but apparently a new release has yet to 
be uploaded to hackage. In any case, if you try to install it from the 
repo, this may at least solve some of your problems.


[1] 
https://github.com/hdbc/hdbc-odbc/commit/7299d3441ce2e1d5a485fe79b37540c0a44a44d4


--Gershom
Gershom, Thanks for pointing this out. I've checked out the latest 
hdbc-odbc code, and I'll see if there's an improvement.


Jeff

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: language-java 0.2.0

2012-11-27 Thread Ömer Sinan Ağacan
Great, thanks for this great work!

One of the things I _love_ about Haskell and its community is the
"language-x" packages. I really enjoy playing with source code and compiling
to other languages, and working in Haskell, thanks to those "language-x"
packages, makes this a joy.

And recently I also wrote one, language-lua:
http://hackage.haskell.org/package/language-lua . Maybe I should write an
announcement mail too :-) .

---
Ömer Sinan Ağacan


2012/11/27 Vincent Hanquez 

> On 11/27/2012 06:46 PM, Alfredo Di Napoli wrote:
>
>> Thanks for the effort!
>> Now, what about some documentation? :P
>>
>
> Sure ! Fork away, and send pull requests :-)
>
>
> --
> Vincent
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time code in Haskell (Was: Can a GC delay TCP connection formation?)

2012-11-27 Thread Roman Cheplyaka
* Mike Meyer  [2012-11-27 13:40:17-0600]
> Laziness, on the other hand ... I haven't thought about. I suspect you
> need to force the evaluation of everything you're going to need before
> you start the critical region, but I wonder if that's enough? Has
> anyone out there investigated this?

I don't know much about RT systems, but if GC is not a problem, then
laziness should be even less so. It is much more predictable, especially
when you look at the Core.

Roman

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time code in Haskell (Was: Can a GC delay TCP connection formation?)

2012-11-27 Thread Felipe Almeida Lessa
http://hackage.haskell.org/packages/archive/base/4.6.0.0/doc/html/System-Mem.html#v:performGC
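That `performGC` from System.Mem can be called right before a latency-sensitive section to take the collection pause early; a minimal sketch:

```haskell
import Data.IORef
import System.Mem (performGC)

main :: IO ()
main = do
  -- build up some unevaluated work (and hence heap garbage)
  ref <- newIORef (sum [1 .. 100000 :: Int])
  -- collect now, while we can afford the pause...
  performGC
  -- ...then run the latency-sensitive section
  total <- readIORef ref
  print total
```

This only moves the pause, of course; it doesn't bound allocation inside the critical section itself.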

On Tue, Nov 27, 2012 at 5:52 PM,   wrote:
> What triggers GC in Haskell?  We obviously aren't using Java's method of GC
> as needed (for good reasons: Java's method is terrible because you get
> slowdowns when you need speed the most). But we should be able to learn
> something from Java and have a gc :: IO () method that one could call BEFORE
> a critical region of code...
>
>
> -- Original message --
> From: Mike Meyer 
> Date: 27. 11. 2012
> Subject: [Haskell-cafe] Real-time code in Haskell (Was: Can a GC delay TCP
> connection formation?)
>
> On Tue, Nov 27, 2012 at 3:45 AM, Gregory Collins
>  wrote:
>> If you have a hard real-time requirement then a garbage-collected
>> language may not be appropriate for you.
>
> This is a common meme, but frankly, it isn't true. When writing
> real-time code, you just need to make sure that everything that
> happens takes a known maximum amount of time. Then, you can sum up the
> maximums and verify that you do indeed finish in the real-time window
> of the task.
>
> GC is a problem because it's not predictable, and may not have a
> maximum. However, it's no worse than a modern version of the C
> function malloc. Some of those even do garbage collection internally
> before doing an OS call if they're out of memory. The solution is the
> same in both cases - make sure you don't do GC (or call malloc) in the
> critical region. Both require knowing implementation details of
> everything you call, but it isn't impossible, or even particularly
> difficult.
>
> Laziness, on the other hand ... I haven't thought about. I suspect you
> need to force the evaluation of everything you're going to need before
> you start the critical region, but I wonder if that's enough? Has
> anyone out there investigated this?
>
> Thanks,
> 
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>



-- 
Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: language-java 0.2.0

2012-11-27 Thread Vincent Hanquez

On 11/27/2012 06:46 PM, Alfredo Di Napoli wrote:

Thanks for the effort!
Now, what about some documentation? :P


Sure ! Fork away, and send pull requests :-)

--
Vincent


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time code in Haskell (Was: Can a GC delay TCP connection formation?)

2012-11-27 Thread timothyhobbs
What triggers GC in Haskell?  We obviously aren't using Java's method of GC 
as needed (for good reasons: Java's method is terrible because you get 
slowdowns when you need speed the most). But we should be able to learn 
something from Java and have a gc :: IO () method that one could call BEFORE 
a critical region of code...


-- Original message --
From: Mike Meyer 
Date: 27. 11. 2012
Subject: [Haskell-cafe] Real-time code in Haskell (Was: Can a GC delay TCP 
connection formation?)
"On Tue, Nov 27, 2012 at 3:45 AM, Gregory Collins
 wrote:
> If you have a hard real-time requirement then a garbage-collected
> language may not be appropriate for you.

This is a common meme, but frankly, it isn't true. When writing
real-time code, you just need to make sure that everything that
happens takes a known maximum amount of time. Then, you can sum up the
maximums and verify that you do indeed finish in the real-time window
of the task.

GC is a problem because it's not predictable, and may not have a
maximum. However, it's no worse than a modern version of the C
function malloc. Some of those even do garbage collection internally
before doing an OS call if they're out of memory. The solution is the
same in both cases - make sure you don't do GC (or call malloc) in the
critical region. Both require knowing implementation details of
everything you call, but it isn't impossible, or even particularly
difficult.

Laziness, on the other hand ... I haven't thought about. I suspect you
need to force the evaluation of everything you're going to need before
you start the critical region, but I wonder if that's enough? Has
anyone out there investigated this?

Thanks,
http://www.haskell.org/mailman/listinfo/haskell-cafe"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Gershom Bazerman

On 11/27/12 2:17 PM, Jason Dagit wrote:


Based on that I would check the FFI imports in your database library. 
In the best case (-threaded, 'safe', and thread-safe odbc), I think 
you'll find that N of these can run concurrently, but here your number 
of requests is likely to be much greater than N (where N is the number 
of threads the RTS created with +RTS -N).


HDBC-odbc has long used the wrong type of FFI imports, resulting in 
long-running database queries potentially blocking all other IO. I just 
checked, and apparently a patch was made to the repo in September that 
finally fixes this [1], but apparently a new release has yet to be 
uploaded to hackage. In any case, if you try to install it from the 
repo, this may at least solve some of your problems.


[1] 
https://github.com/hdbc/hdbc-odbc/commit/7299d3441ce2e1d5a485fe79b37540c0a44a44d4


--Gershom

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Real-time code in Haskell (Was: Can a GC delay TCP connection formation?)

2012-11-27 Thread Mike Meyer
On Tue, Nov 27, 2012 at 3:45 AM, Gregory Collins
 wrote:
> If you have a hard real-time requirement then a garbage-collected
> language may not be appropriate for you.

This is a common meme, but frankly, it isn't true. When writing
real-time code, you just need to make sure that everything that
happens takes a known maximum amount of time. Then, you can sum up the
maximums and verify that you do indeed finish in the real-time window
of the task.

GC is a problem because it's not predictable, and may not have a
maximum. However, it's no worse than a modern version of the C
function malloc. Some of those even do garbage collection internally
before doing an OS call if they're out of memory. The solution is the
same in both cases - make sure you don't do GC (or call malloc) in the
critical region. Both require knowing implementation details of
everything you call, but it isn't impossible, or even particularly
difficult.

Laziness, on the other hand ... I haven't thought about. I suspect you
need to force the evaluation of everything you're going to need before
you start the critical region, but I wonder if that's enough? Has
anyone out there investigated this?

   Thanks,
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
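The "force everything before the critical region" idea above can be sketched with `evaluate` from base and `force` from the deepseq package (assuming the inputs have `NFData` instances; `criticalRegion` is an illustrative name):

```haskell
import Control.DeepSeq (force)    -- deepseq ships with GHC
import Control.Exception (evaluate)

criticalRegion :: [Int] -> IO Int
criticalRegion xs = do
  -- Without this, the sum below could drag a chain of thunks (and the
  -- allocation, hence GC pressure, needed to evaluate them) into the
  -- timed section.
  xs' <- evaluate (force xs)
  -- ... timed section starts here: xs' is fully evaluated ...
  return (sum xs')

main :: IO ()
main = criticalRegion (map (* 2) [1 .. 1000]) >>= print
```

Whether deep forcing is *sufficient* depends on the code in the region itself, which can still allocate; it only guarantees no latent thunks are carried in.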


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Jason Dagit
On Tue, Nov 27, 2012 at 11:17 AM, Jason Dagit  wrote:

>
>
> On Tue, Nov 27, 2012 at 11:02 AM, Jeff Shaw  wrote:
>
>> Hello Timothy and others,
>> One of my clients hosts their HTTP clients in an Amazon cloud, so even
>> when they turn on persistent HTTP connections, they use many connections.
>> Usually they only end up sending one HTTP request per TCP connection. My
>> specific problem is that they want a response in 120 ms or so, and at times
>> they are unable to complete a TCP connection in that amount of time. I'm
>> looking at on the order of 100 TCP connections per second, and on the order
>> of 1000 HTTP requests per second (other clients do benefit from persistent
>> HTTP connections).
>>
>> Once each minute, a thread of my program updates a global state, stored
>> in an IORef, and updated with atomicModifyIORef', based on query results
> >> via HDBC-odbc. The query results are strict, and atomicModifyIORef' should
>> receive the updated state already evaluated. I reduced the amount of time
>> that query took from tens of seconds to just a couple, and for some reason
>> that reduced the proportion of TCP timeouts drastically. The approximate
>> before and after TCP timeout proportions are 15% and 5%. I'm not sure why
>> this reduction in timeouts resulted from the query time improving, but this
>> discovery has me on the task of removing all database code from the main
>> program and into a cron job. My best guess is that HDBC-odbc somehow
>> disrupts other communications while it waits for the DB server to respond.
>>
>
> Have you read section 8.4.2 of the ghc user guide?
> http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/ffi-ghc.html
>
>
Ahem, I meant *8.2.4*.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Jason Dagit
On Tue, Nov 27, 2012 at 11:02 AM, Jeff Shaw  wrote:

> Hello Timothy and others,
> One of my clients hosts their HTTP clients in an Amazon cloud, so even
> when they turn on persistent HTTP connections, they use many connections.
> Usually they only end up sending one HTTP request per TCP connection. My
> specific problem is that they want a response in 120 ms or so, and at times
> they are unable to complete a TCP connection in that amount of time. I'm
> looking at on the order of 100 TCP connections per second, and on the order
> of 1000 HTTP requests per second (other clients do benefit from persistent
> HTTP connections).
>
> Once each minute, a thread of my program updates a global state, stored in
> an IORef, and updated with atomicModifyIORef', based on query results via
> HDBC-odbc. The query results are strict, and atomicModifyIORef' should
> receive the updated state already evaluated. I reduced the amount of time
> that query took from tens of seconds to just a couple, and for some reason
> that reduced the proportion of TCP timeouts drastically. The approximate
> before and after TCP timeout proportions are 15% and 5%. I'm not sure why
> this reduction in timeouts resulted from the query time improving, but this
> discovery has me on the task of removing all database code from the main
> program and into a cron job. My best guess is that HDBC-odbc somehow
> disrupts other communications while it waits for the DB server to respond.
>

Have you read section 8.4.2 of the ghc user guide?
http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/ffi-ghc.html

Based on that I would check the FFI imports in your database library. In
the best case (-threaded, 'safe', and thread-safe odbc), I think you'll
find that N of these can run concurrently, but here your number of requests
is likely to be much greater than N (where N is the number of threads the
RTS created with +RTS -N).
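To illustrate the distinction Jason draws, here is a hedged sketch using libm's `sin`/`cos` as stand-ins for a blocking database call (these are not HDBC-odbc's actual imports): a `safe` import releases the capability so other Haskell threads keep running while the foreign code blocks, while an `unsafe` one is cheaper but wedges the capability for the call's duration.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- 'safe': the RTS can run other Haskell threads during the call.
foreign import ccall safe "math.h sin"
  c_sin_safe :: CDouble -> CDouble

-- 'unsafe': lower overhead, but blocks the capability; only appropriate
-- for calls that are known to return quickly and never call back.
foreign import ccall unsafe "math.h cos"
  c_cos_unsafe :: CDouble -> CDouble

main :: IO ()
main = do
  print (c_sin_safe 0)
  print (c_cos_unsafe 0)
```

An `unsafe` import wrapped around a long-running ODBC call is exactly the failure mode described in this thread.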

I'm not sure how to solve your problem, but perhaps this information can
help you pinpoint the problem.

Good luck,
Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Jeff Shaw

Hello Timothy and others,
One of my clients hosts their HTTP clients in an Amazon cloud, so even 
when they turn on persistent HTTP connections, they use many 
connections. Usually they only end up sending one HTTP request per TCP 
connection. My specific problem is that they want a response in 120 ms 
or so, and at times they are unable to complete a TCP connection in that 
amount of time. I'm looking at on the order of 100 TCP connections per 
second, and on the order of 1000 HTTP requests per second (other clients 
do benefit from persistent HTTP connections).


Once each minute, a thread of my program updates a global state, stored 
in an IORef, and updated with atomicModifyIORef', based on query results 
via HDBC-odbc. The query results are strict, and atomicModifyIORef' 
should receive the updated state already evaluated. I reduced the amount 
of time that query took from tens of seconds to just a couple, and for 
some reason that reduced the proportion of TCP timeouts drastically. The 
approximate before and after TCP timeout proportions are 15% and 5%. I'm 
not sure why this reduction in timeouts resulted from the query time 
improving, but this discovery has me on the task of removing all 
database code from the main program and into a cron job. My best guess 
is that HDBC-odbc somehow disrupts other communications while it waits 
for the DB server to respond.
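The update scheme described above can be sketched like this (the `Stats` type and `refresh` function are invented for illustration): strict fields plus `evaluate` ensure the new state is in normal form before it is published with `atomicModifyIORef'`.

```haskell
import Control.Exception (evaluate)
import Data.IORef

-- Strict fields so forcing the constructor forces the contents too.
data Stats = Stats { hits :: !Int, misses :: !Int }
  deriving Show

-- Swap in a fully evaluated new state so no thunk chain builds up
-- behind the IORef between the once-a-minute refreshes.
refresh :: IORef Stats -> Stats -> IO ()
refresh ref new = do
  new' <- evaluate new                    -- force before publishing
  atomicModifyIORef' ref (\_ -> (new', ()))

main :: IO ()
main = do
  ref <- newIORef (Stats 0 0)
  refresh ref (Stats 10 2)
  readIORef ref >>= print
```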


To respond to Ertugrul, I'm compiling with -threaded, and running with 
+RTS -N.


I hope this helps describe my problem. I can probably come up with some 
hard information if requested, e.g. ThreadScope.


Jeff

On 11/27/2012 10:55 AM, timothyho...@seznam.cz wrote:
Could you give us more info on what your constraints are?  Is it 
necessary that you have a certain number of connections per second, or 
is it necessary that the connection completes very quickly after some 
other message is received?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: language-java 0.2.0

2012-11-27 Thread Alfredo Di Napoli
Thanks for the effort!
Now, what about some documentation? :P
Cheers,
A.

On 27 November 2012 18:26, Vincent Hanquez  wrote:

> Hi Cafe,
>
> with the approval of Niklas, the original author and maintainer, i'll be
> maintaining language-java for now. I've uploaded a new version on hackage
> [1] with some minor improvements and the repository is now hosted on github
> [2].
>
> Thanks Niklas for language-java !
>
> [1] http://hackage.haskell.org/package/language-java
> [2] http://github.com/vincenthz/language-java/
>
> --
> Vincent
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: language-java 0.2.0

2012-11-27 Thread Vincent Hanquez

Hi Cafe,

with the approval of Niklas, the original author and maintainer, i'll be 
maintaining language-java for now. I've uploaded a new version on 
hackage [1] with some minor improvements and the repository is now 
hosted on github [2].


Thanks Niklas for language-java !

[1] http://hackage.haskell.org/package/language-java
[2] http://github.com/vincenthz/language-java/

--
Vincent

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Portability of Safe Haskell packages

2012-11-27 Thread Amit Levy

FWIW, some very core libraries do this:
http://hackage.haskell.org/packages/archive/bytestring/0.10.2.0/doc/html/src/Data-ByteString.html

(see very top of linked source file)
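The trick at the top of that bytestring source file looks roughly like this (a sketch; the exact pragma and version bound vary by library):

```haskell
{-# LANGUAGE CPP #-}
-- Only GHC defines __GLASGOW_HASKELL__, so other implementations that
-- support CPP simply skip the Safe Haskell pragma.
#if __GLASGOW_HASKELL__ >= 702
{-# LANGUAGE Trustworthy #-}
#endif
module Main (main) where

main :: IO ()
main = putStrLn "compiled with a conditionally applied pragma"
```

Note this is exactly Roman's compromise 1: the file still requires the implementation to support CPP, so it is not pure Haskell2010 without the `*.cpphs` route.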

Perhaps a more general solution would be for GHC to take the Internet 
Explorer route and require a special JavaScript include in each source 
file to get compatibility :)


-A

On 11/23/2012 03:34 PM, Roman Cheplyaka wrote:

* Herbert Valerio Riedel  [2012-11-24 00:06:44+0100]

Roman Cheplyaka  writes:

It has been pointed out before that in order for Safe Haskell to be
useful, libraries (especially core libraries) should be annotated
properly with Safe Haskell LANGUAGE pragmas.

However, that would make these libraries unusable with alternative
Haskell implementations, even if these libraries are otherwise
Haskell2010.

To quote the standard:

   If a Haskell implementation does not recognize or support a particular
   language feature that a source file requests (or cannot support the
   combination of language features requested), any attempt to compile or
   otherwise use that file with that Haskell implementation must fail
   with an error.

Should it be advised to surround safe annotations with CPP #ifs?
Or does anyone see a better way out of this contradiction?

...but IIRC CPP isn't part of Haskell2010, or is it?

It isn't indeed. But:

1) it's a very basic extension which is supported by (almost?) all
existing implementations; or
2) if you want to be 100% Haskell2010, you can name your file *.cpphs and
let Cabal do preprocessing.

1) is a compromise and 2) is not very practical, so I'm eager to hear
other alternatives.

Roman

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit and pipelined protocol processing using a threadpool

2012-11-27 Thread Nicolas Trangez
Michael,

On Tue, 2012-11-27 at 17:14 +0200, Michael Snoyman wrote:
> I think the stm-conduit package[1] may be helpful for this use case.
> Each time you get a new command, you can fork a thread and give it the
> TBMChan to write to, and you can use sourceTBMChan to get a source to
> send to the client.

That's +- what I had in mind. I did find stm-conduit before and did try
to get the thing working using it, but these attempts failed.

I attached an example which might clarify what I intend to do. I'm aware
it contains several potential bugs (leaking threads etc), but that's
beside the question ;-)

If only I could figure out what to put on the 3 lines of comment I left
in there...

Thanks for your help,

Nicolas

{-# LANGUAGE Rank2Types #-}

module Main where

import Data.Conduit
import qualified Data.Conduit.List as CL
import Data.Conduit.TMChan

import Control.Applicative
import Control.Concurrent (forkIO)
import Control.Concurrent.STM (atomically)
import Control.Monad (forM_)
import Control.Monad.IO.Class (MonadIO, liftIO)

data Command = Add Int Int
 | Disconnect
  deriving (Show)
data Reply = Result Int
  deriving (Show)

application :: MonadIO m => GConduit Int m String
application = do
-- Create input and output channels to/from worker threads
(chanIn, chanOut) <- liftIO $ (,) <$> newTBMChanIO 10 <*> newTBMChanIO 10

-- Spawn some worker threads
liftIO $ forM_ [0..5] $ \i -> forkIO $ processCommands i chanIn chanOut

-- How to make
-- sourceTBMChan chanOut
-- something of which all produced values are yield'ed by this Conduit?

loop chanIn
  where
-- Loop retrieves one command from our source and pushes it to the
-- worker threads input channel, then loops
loop :: MonadIO m => TBMChan Command -> GConduit Int m String
loop chan = do
liftIO $ putStrLn "Enter loop"
cmd <- getCommand
liftIO $ do
putStrLn $ "Got command: " ++ show cmd
atomically $ writeTBMChan chan cmd
case cmd of
Disconnect -> return ()
_ -> loop chan

-- getCommand fetches and parses a single command from our source
getCommand :: Monad m => GSink Int m Command
getCommand = do
v <- await
case v of
Nothing -> return Disconnect
Just i -> return $ Add i 1

-- processCommands reads commands from a given input channel, processes
-- them, and pushes the result to a given output channel
processCommands :: Int -> TBMChan Command -> TBMChan Reply -> IO ()
processCommands i chanIn chanOut = do
putStrLn $ "Enter processCommands " ++ show i
cmd <- atomically $ readTBMChan chanIn
putStrLn $ show i ++ " read command: " ++ show cmd
case cmd of
Nothing -> return ()
Just (Add a b) -> do
atomically $ writeTBMChan chanOut (Result (a + b))
putStrLn $ show i ++ " pushed result"
processCommands i chanIn chanOut
Just Disconnect -> return ()


main :: IO ()
main = do
res <- CL.sourceList [1..20] $= application $$ CL.consume
putStrLn $ "Result: " ++ show res
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Observer pattern in haskell FRP

2012-11-27 Thread Nathan Hüsken
On 11/27/2012 04:18 PM, Heinrich Apfelmus wrote:
> Nathan Hüsken wrote:
>> Hey,
>>
>> When writing games in other (imperative) languages, I like to separate
>> the game logic from the rendering. For this I use something similar to
>> the observer pattern.
>>
>> With rendering I mean anything only related to how objects are drawn to
>> the screen. Animation state for example.
>>
>> On my journey of exploring game programming with Haskell (and FRP), I
>> wonder what a good way of achieving something similar would be.
>>
>> [..]
>>
>> So I am wondering: Is there (or can someone think of) a different
>> pattern by which this could be achieved? Or asked differently: How would
>> you do it?
> 
> Personally, what I would recommend is a complete change in perspective.
> 
> The main idea of FRP is that it is a method to describe the evolution of
> values in time. What is a game? It's just a picture that evolves in
> time. The user can exert influence on the evolution by clicking certain
> buttons on a mechanical device, but in the end, all he sees is a picture
> that moves.
> 
> How to describe a picture that moves? Your large picture is probably made
> from smaller pictures, for instance a small picture in the shape of
> something we often call a "spaceship". So, you can implement a game by
> describing the evolution of smaller pictures, and then combine these
> into the description of a larger picture.
> 
> Now, the smaller pictures tend to have "hidden state", which means that
> their future evolution depends a lot on the past evolution of the other
> small pictures. In my experience with programming in FRP, it is very
> useful to describe the individual pictures in terms of tiny state
> machines and then connect these state machines via appropriate events
> and behaviors to each other. The essence here is to decouple the
> individual state machines from each other as much as possible and only
> then to use the FRP abstractions to connect and combine them into a
> "large emergent state machine".


That perspective certainly make sense. But couldn't one also describe a
game as a set of entities (spaceships) that react to the clicking of
buttons?

If I take for example the breakout game from here [1]. It outputs an
object "scene" of type Picture. But this picture is calculated from the
objects "ballPos" and "paddlePos". So first a game state (ballPos,
paddlePos) is created and then transformed into something renderable.

I believe all examples I have seen for games with FRP follow this
pattern, and what I want to do is separate the steps of calculating
the game state and deriving the renderable from it.


> (However, it is important to keep in mind that the fundamental
> abstraction is not a state machine, but a time evolution that remembers
> the past. This helps with embracing the new perspective and not
> accidentally fall back to previous ways of thinking. Whether that ends
> up with good code is up to you to find out, but if you decide to apply a
> new perspective, it's best to do it in an extremist way to gain the
> maximum benefit -- this benefit might certainly turn out to be zero, but
> you will never find out if you wet your feet only a little bit.)

That certainly makes sense, and it is also very difficult for me to
stick to the "FRP perspective".
But I do not see that separating rendering and game logic code goes
against the FRP perspective.

Best Regards,
Nathan

[1] https://github.com/bernstein/breakout/blob/master/src/Main.hs


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread timothyhobbs
Could you give us more info on what your constraints are?  Is it necessary 
that you have a certain number of connections per second, or is it necessary
that the connection completes very quickly after some other message is 
received?


-- Original message --
From: Johan Tibell 
Date: 27. 11. 2012
Subject: Re: [Haskell-cafe] Can a GC delay TCP connection formation?
"
Kazu and Andreas, could this be IO manager related?

On Monday, November 26, 2012, Jeff Shaw wrote:
" Hello,
I've run into an issue that makes me think that when the GHC GC runs while a
Snap or Warp HTTP server is serving connections, the GC prevents or delays 
TCP connections from forming. My application requires that TCP connections 
form within a few tens of milliseconds. I'm wondering if anyone else has run
into this issue, and if there are some GC flags that could help. I've tried 
a few, such as -H and -c, and haven't found anything to help. I'm using GHC 
7.4.1.

Thanks,
Jeff

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
" 
"___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Observer pattern in haskell FRP

2012-11-27 Thread Heinrich Apfelmus

Nathan Hüsken wrote:

Hey,

When writing games in other (imperative) languages, I like to separate
the game logic from the rendering. For this I use something similar to
the observer pattern.

With rendering I mean anything only related to how objects are drawn to
the screen. Animation state for example.

On my journey of exploring game programming with Haskell (and FRP), I
wonder what a good way of achieving something similar would be.

[..]

So I am wondering: is there (or can someone think of) a different
pattern by which this could be achieved? Or, asked differently: how would
you do it?


Personally, I would recommend a complete change in perspective.

The main idea of FRP is that it is a method to describe the evolution of 
values in time. What is a game? It's just a picture that evolves in 
time. The user can exert influence on the evolution by clicking certain 
buttons on a mechanical device, but in the end, all he sees is a picture 
that moves.


How to describe a picture that moves? Your large picture is probably made 
from smaller pictures, for instance a small picture in the shape of 
something we often call a "spaceship". So, you can implement a game by 
describing the evolution of smaller pictures, and then combine these 
into the description of a larger picture.


Now, the smaller pictures tend to have "hidden state", which means that 
their future evolution depends a lot on the past evolution of the other 
small pictures. In my experience with programming in FRP, it is very 
useful to describe the individual pictures in terms of tiny state 
machines and then connect these state machines via appropriate events 
and behaviors to each other. The essence here is to decouple the 
individual state machines from each other as much as possible and only 
then to use the FRP abstractions to connect and combine them into a 
"large emergent state machine".


(However, it is important to keep in mind that the fundamental 
abstraction is not a state machine, but a time evolution that remembers 
the past. This helps with embracing the new perspective and not 
accidentally falling back to previous ways of thinking. Whether that ends 
up with good code is up to you to find out, but if you decide to apply a 
new perspective, it's best to do it in an extremist way to gain the 
maximum benefit -- this benefit might certainly turn out to be zero, but 
you will never find out if you wet your feet only a little bit.)



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit and pipelined protocol processing using a threadpool

2012-11-27 Thread Michael Snoyman
I think the stm-conduit package[1] may be helpful for this use case. Each
time you get a new command, you can fork a thread and give it the TBMChan
to write to, and you can use sourceTBMChan to get a source to send to the
client.

Michael

[1] http://hackage.haskell.org/package/stm-conduit


On Tue, Nov 27, 2012 at 12:57 PM, Nicolas Trangez wrote:

> All,
>
> I've written a library to implement servers for some protocol using
> Conduit (I'll announce more details later).
>
> The protocol supports pipelining, i.e. a client can send a 'command'
> which contains some opaque 'handle' chosen by the client, the server
> processes this command, then returns some reply which contains this
> handle. The client is free to send other commands before receiving a
> reply for any previous request, and the server can process these
> commands in any order, sequential or concurrently.
>
> The library is based on network-conduit's "Application" style [1], as
> such now I write code like (OTOH)
>
> > application :: AppData IO -> IO ()
> > application client = appSource client $= handler $$ appSink client
> >   where
> >     handler = do
> >       negotiateResult <- MyLib.negotiate
> >       liftIO $ validateNegotiateResult negotiateResult
> >       MyLib.sendInformation 123
> >       loop
> >
> >     loop = do
> >       command <- MyLib.getCommand
> >       case command of
> >         CommandA handle arg -> do
> >           result <- liftIO $ doComplexProcessingA arg
> >           MyLib.sendReply handle result
> >           loop
> >         Disconnect -> return ()
>
> This approach handles commands in-order, sequentially. Since command
> processing can involve quite a few IO operations to disk or network, I've
> been trying to support pipelining on the server side, but so far I have
> been unable to get things working.
>
> The idea would be to have a pool of worker threads, which receive work
> items from some channel, then return any result on some other channel,
> which should then be returned to the client.
>
> This means inside "loop" I would have 2 sources: commands coming from
> the client (using 'MyLib.getCommand :: MonadIO m => Pipe ByteString
> ByteString o u m Command'), as well as command results coming from the
> worker threads through the result channel. Whenever the first source
> produces something, it should be pushed onto the work queue, and
> whenever the second one yields some result it should be sent to the
> client using 'MyLib.sendReply :: Monad m => Handle -> Result -> Pipe l i
> ByteString u m ()'
>
> I've been fighting this for a while and haven't managed to get something
> sensible working. Maybe the design of my library is flawed, or maybe I'm
> approaching the problem incorrectly, or ...
>
> Has this ever been done before, or would anyone have some pointers how
> to tackle this?
>
> Thanks,
>
> Nicolas
>
> [1]
>
> http://hackage.haskell.org/packages/archive/network-conduit/0.6.1.1/doc/html/Data-Conduit-Network.html#g:2
>
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
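The worker-pool scheme described in the quoted message can be sketched with plain `Chan`s from base, independent of conduit. All names here (`Command`, `ClientHandle`, `runPipelined`) are hypothetical stand-ins for the library's actual types, and the processing is a dummy computation:

```haskell
import Control.Concurrent (Chan, forkIO, newChan, readChan, writeChan)
import Control.Monad (forever, replicateM, replicateM_)

-- Hypothetical stand-ins for the protocol types in the message above.
type ClientHandle = Int
data Command = Command ClientHandle Int   -- an opaque handle plus an argument

-- A worker thread: pull commands off the shared work queue, process them,
-- and push (handle, result) pairs onto the reply channel.  Replies come
-- out in whatever order workers finish, which is exactly the out-of-order
-- behaviour the protocol allows.
worker :: Chan Command -> Chan (ClientHandle, Int) -> IO ()
worker work replies = forever $ do
    Command h arg <- readChan work
    let result = arg * 2              -- stand-in for the expensive processing
    writeChan replies (h, result)

-- Wire up a pool of n workers and run the given commands through it.
runPipelined :: Int -> [Command] -> IO [(ClientHandle, Int)]
runPipelined n cmds = do
    work    <- newChan
    replies <- newChan
    replicateM_ n (forkIO (worker work replies))   -- the worker pool
    mapM_ (writeChan work) cmds                    -- feed the queue
    replicateM (length cmds) (readChan replies)    -- collect replies
```

In the conduit setting, one thread would push parsed commands into `work` while another drains `replies` into the client sink, as outlined above.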
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Johan Tibell
Kazu and Andreas, could this be IO manager related?

On Monday, November 26, 2012, Jeff Shaw wrote:

> Hello,
> I've run into an issue that makes me think that when the GHC GC runs while
> a Snap or Warp HTTP server is serving connections, the GC prevents or
> delays TCP connections from forming. My application requires that TCP
> connections form within a few tens of milliseconds. I'm wondering if anyone
> else has run into this issue, and if there are some GC flags that could
> help. I've tried a few, such as -H and -c, and haven't found anything to
> help. I'm using GHC 7.4.1.
>
> Thanks,
> Jeff
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-27 Thread Janek S.
On Tuesday, 27 November 2012, Gregory Collins wrote:
> Did you pass the option to criterion asking it to do a GC between
> trials? 
Yes.


> You might be measuring a GC pause. 
>
> On Tue, Nov 27, 2012 at 2:41 PM, Janek S.  wrote:
> > On Tuesday, 27 November 2012, Jake McArthur wrote:
> >> I once had a problem like this. It turned out that my laptop was
> >> stepping the cpu clock rate down whenever it got warm. Disabling that
> >> feature in my BIOS fixed it. Your problem might be similar.
> >
> > I just checked - I disabled frequency scaling and the results are the same.
> > Actually I doubt that 39us of benchmarking would cause CPU overheating
> > with such repeatability. Besides, this wouldn't explain why the first
> > benchmark actually got faster.
> >
> > Janek
> >
> >> On Nov 27, 2012 7:23 AM, "Janek S."  wrote:
> >> > I tested the same code on my second machine - Debian Squeeze (kernel
> >> > 2.6.32) with GHC 7.4.1 - and
> >> > the results are extremely surprising. At first I was unable to
> >> > reproduce the problem and got
> >> > consistent runtimes of about 107us:
> >> >
> >> > benchmarking FFI/C binding
> >> > mean: 107.3837 us, lb 107.2013 us, ub 107.5862 us, ci 0.950
> >> > std dev: 983.6046 ns, lb 822.6750 ns, ub 1.292724 us, ci 0.950
> >> >
> >> > benchmarking FFI/C binding
> >> > mean: 108.1152 us, lb 107.9457 us, ub 108.3052 us, ci 0.950
> >> > std dev: 916.2469 ns, lb 793.1004 ns, ub 1.122127 us, ci 0.950
> >> >
> >> > I started experimenting with the vector size and after bumping its
> >> > size to 32K elements I started
> >> > getting this:
> >> >
> >> > benchmarking FFI/C binding
> >> > mean: 38.50100 us, lb 36.71525 us, ub 46.87665 us, ci 0.950
> >> > std dev: 16.93131 us, lb 1.033678 us, ub 40.23900 us, ci 0.950
> >> > found 6 outliers among 100 samples (6.0%)
> >> >   3 (3.0%) low mild
> >> >   3 (3.0%) high severe
> >> > variance introduced by outliers: 98.921%
> >> > variance is severely inflated by outliers
> >> >
> >> > benchmarking FFI/C binding
> >> > mean: 209.9733 us, lb 209.5316 us, ub 210.4680 us, ci 0.950
> >> > std dev: 2.401398 us, lb 2.052981 us, ub 2.889688 us, ci 0.950
> >> >
> >> > First result is always about 39us (2.5x faster, despite a longer signal!)
> >> > while the remaining
> >> > benchmarks take almost two times longer.
> >> >
> >> > Janek
> >> >
> >> > On Sunday, 25 November 2012, Janek S. wrote:
> >> > > Well, it seems that this only happens on my machine. I will try to
> >> > > test this code on a different computer and see if I can reproduce it.
> >> > >
> >> > > I don't think using an existing vector is a good idea - it would make
> >> > > the
> >> >
> >> > code
> >> >
> >> > > impure.
> >> > >
> >> > > Janek
> >> > >
> >> > > On Saturday, 24 November 2012, Branimir Maksimovic wrote:
> >> > > > I don't see such behavior either. Ubuntu 12.10, GHC 7.4.2.
> >> > > > Perhaps this has to do with how malloc allocates / cache behavior.
> >> > > > If you try not to allocate the array but use an existing one, perhaps
> >> > > > there would be
> >> > > > no inconsistency? It looks to me like it's about CPU cache
> >> > > > performance. Branimir
> >> > > >
> >> > > > > I'm using GHC 7.4.2 on x86_64 openSUSE Linux, kernel 2.6.37.6.
> >> > > > >
> >> > > > > Janek
> >> > > > >
> >> > > > > On Friday, 23 November 2012, Edward Z. Yang wrote:
> >> > > > > > Running the sample code on GHC 7.4.2, I don't see the "one
> >> > > > > > fast, rest slow" behavior.  What version of GHC are you
> >> > > > > > running?
> >> > > > > >
> >> > > > > > Edward
> >> > > > > >
> >> > > > > > Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 
> >> > > > > > 2012:
> >> > > > > > > > What happens if you do the benchmark without
> >> > > > > > > > unsafePerformIO involved?
> >> > > > > > >
> >> > > > > > > I removed unsafePerformIO, changed copy to have type Vector
> >> >
> >> > Double
> >> >
> >> > > > > > > -> IO (Vector Double) and modified benchmarks like this:
> >> > > > > > >
> >> > > > > > > bench "C binding" $ whnfIO (copy signal)
> >> > > > > > >
> >> > > > > > > I see no difference - one benchmark runs fast, remaining
> >> > > > > > > ones run slow.
> >> > > > > > >
> >> > > > > > > Janek
> >> > > > > > >
> >> > > > > > > > Excerpts from Janek S.'s message of Fri Nov 23 10:44:15
> >> > > > > > > > -0500
> >> >
> >> > 2012:
> >> > > > > > > > > I am using Criterion library to benchmark C code called
> >> > > > > > > > > via
> >> >
> >> > FFI
> >> >
> >> > > > > > > > bindings and I've run into a problem that looks like a
> >> > > > > > > > > bug.
> >> > > > > > > > >
> >> > > > > > > > > The first benchmark that uses FFI runs correctly, but
> >> > > > > > > > > subsequent benchmarks run much longer. I created demo
> >> > > > > > > > > code (about 50 lines, available at github:
> >> > > > > > > > > https://gist.github.com/4135698 ) in which C function
> >> >
> >> > copies a
> >> >
> >> > > > > > > > > vector of doubles. I benchmark that function a

Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-27 Thread Gregory Collins
Did you pass the option to criterion asking it to do a GC between
trials? You might be measuring a GC pause.
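(Criterion exposes this as a configuration/command-line option. The effect can also be approximated by hand with `System.Mem.performGC`; the sketch below is illustrative, not criterion's own code, and `timeAfterGC` is an invented name:)

```haskell
import System.CPUTime (getCPUTime)
import System.Mem (performGC)

-- Run an action after forcing a major collection, so that a GC pause
-- left over from earlier allocation is not charged to the action being
-- measured.  This approximates what criterion does when asked to
-- collect between trials.
timeAfterGC :: IO a -> IO (a, Double)
timeAfterGC action = do
    performGC                    -- collect *before* starting the clock
    start <- getCPUTime
    r     <- action
    end   <- getCPUTime
    return (r, fromIntegral (end - start) / 1e12)  -- picoseconds -> seconds
```

Comparing timings with and without the `performGC` call is a quick way to check whether a collection pause is being folded into the measurement.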

On Tue, Nov 27, 2012 at 2:41 PM, Janek S.  wrote:
> On Tuesday, 27 November 2012, Jake McArthur wrote:
>> I once had a problem like this. It turned out that my laptop was stepping
>> the cpu clock rate down whenever it got warm. Disabling that feature in my
>> BIOS fixed it. Your problem might be similar.
> I just checked - I disabled frequency scaling and the results are the same.
> Actually I doubt that 39us
> of benchmarking would cause CPU overheating with such repeatability. Besides,
> this wouldn't
> explain why the first benchmark actually got faster.
>
> Janek
>
>>
>> On Nov 27, 2012 7:23 AM, "Janek S."  wrote:
>> > I tested the same code on my second machine - Debian Squeeze (kernel
>> > 2.6.32) with GHC 7.4.1 - and
>> > the results are extremely surprising. At first I was unable to reproduce
>> > the problem and got
>> > consistent runtimes of about 107us:
>> >
>> > benchmarking FFI/C binding
>> > mean: 107.3837 us, lb 107.2013 us, ub 107.5862 us, ci 0.950
>> > std dev: 983.6046 ns, lb 822.6750 ns, ub 1.292724 us, ci 0.950
>> >
>> > benchmarking FFI/C binding
>> > mean: 108.1152 us, lb 107.9457 us, ub 108.3052 us, ci 0.950
>> > std dev: 916.2469 ns, lb 793.1004 ns, ub 1.122127 us, ci 0.950
>> >
>> > I started experimenting with the vector size and after bumping its size
>> > to 32K elements I started
>> > getting this:
>> >
>> > benchmarking FFI/C binding
>> > mean: 38.50100 us, lb 36.71525 us, ub 46.87665 us, ci 0.950
>> > std dev: 16.93131 us, lb 1.033678 us, ub 40.23900 us, ci 0.950
>> > found 6 outliers among 100 samples (6.0%)
>> >   3 (3.0%) low mild
>> >   3 (3.0%) high severe
>> > variance introduced by outliers: 98.921%
>> > variance is severely inflated by outliers
>> >
>> > benchmarking FFI/C binding
>> > mean: 209.9733 us, lb 209.5316 us, ub 210.4680 us, ci 0.950
>> > std dev: 2.401398 us, lb 2.052981 us, ub 2.889688 us, ci 0.950
>> >
>> > First result is always about 39us (2.5x faster, despite a longer signal!)
>> > while the remaining
>> > benchmarks take almost two times longer.
>> >
>> > Janek
>> >
>> > On Sunday, 25 November 2012, Janek S. wrote:
>> > > Well, it seems that this only happens on my machine. I will try to test
>> > > this code on a different computer and see if I can reproduce it.
>> > >
>> > > I don't think using an existing vector is a good idea - it would make the
>> >
>> > code
>> >
>> > > impure.
>> > >
>> > > Janek
>> > >
>> > > On Saturday, 24 November 2012, Branimir Maksimovic wrote:
>> > > > I don't see such behavior either. Ubuntu 12.10, GHC 7.4.2.
>> > > > Perhaps this has to do with how malloc allocates / cache behavior. If
>> > > > you try not to allocate the array but use an existing one, perhaps there
>> > > > would be
>> > > > no inconsistency? It looks to me like it's about CPU cache performance.
>> > > > Branimir
>> > > >
>> > > > > I'm using GHC 7.4.2 on x86_64 openSUSE Linux, kernel 2.6.37.6.
>> > > > >
>> > > > > Janek
>> > > > >
>> > > > > On Friday, 23 November 2012, Edward Z. Yang wrote:
>> > > > > > Running the sample code on GHC 7.4.2, I don't see the "one
>> > > > > > fast, rest slow" behavior.  What version of GHC are you running?
>> > > > > >
>> > > > > > Edward
>> > > > > >
>> > > > > > Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 2012:
>> > > > > > > > What happens if you do the benchmark without unsafePerformIO
>> > > > > > > > involved?
>> > > > > > >
>> > > > > > > I removed unsafePerformIO, changed copy to have type Vector
>> >
>> > Double
>> >
>> > > > > > > -> IO (Vector Double) and modified benchmarks like this:
>> > > > > > >
>> > > > > > > bench "C binding" $ whnfIO (copy signal)
>> > > > > > >
>> > > > > > > I see no difference - one benchmark runs fast, remaining ones
>> > > > > > > run slow.
>> > > > > > >
>> > > > > > > Janek
>> > > > > > >
>> > > > > > > > Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500
>> >
>> > 2012:
>> > > > > > > > > I am using Criterion library to benchmark C code called via
>> >
>> > FFI
>> >
>> > > > > > > > > bindings and I've run into a problem that looks like a bug.
>> > > > > > > > >
>> > > > > > > > > The first benchmark that uses FFI runs correctly, but
>> > > > > > > > > subsequent benchmarks run much longer. I created demo code
>> > > > > > > > > (about 50 lines, available at github:
>> > > > > > > > > https://gist.github.com/4135698 ) in which C function
>> >
>> > copies a
>> >
>> > > > > > > > > vector of doubles. I benchmark that function a couple of
>> >
>> > times.
>> >
> >> > > > > > > > First run results in an average time of about 17us, subsequent
>> > > > > > > > > runs take about 45us. In my real code additional time was
>> >
>> > about
>> >
>> > > > > > > > > 15us and it seemed to be a constant factor, not relative to
>> > > > > > > > > "correct" run time. The surprising thing is that if my C
>> > > > > > > > > function only allocat

Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-27 Thread Janek S.
On Tuesday, 27 November 2012, Jake McArthur wrote:
> I once had a problem like this. It turned out that my laptop was stepping
> the cpu clock rate down whenever it got warm. Disabling that feature in my
> BIOS fixed it. Your problem might be similar.
I just checked - I disabled frequency scaling and the results are the same.
Actually I doubt that 39us of benchmarking would cause CPU overheating with
such repeatability. Besides, this wouldn't explain why the first benchmark
actually got faster.

Janek

>
> On Nov 27, 2012 7:23 AM, "Janek S."  wrote:
> > I tested the same code on my second machine - Debian Squeeze (kernel
> > 2.6.32) with GHC 7.4.1 - and
> > the results are extremely surprising. At first I was unable to reproduce
> > the problem and got
> > consistent runtimes of about 107us:
> >
> > benchmarking FFI/C binding
> > mean: 107.3837 us, lb 107.2013 us, ub 107.5862 us, ci 0.950
> > std dev: 983.6046 ns, lb 822.6750 ns, ub 1.292724 us, ci 0.950
> >
> > benchmarking FFI/C binding
> > mean: 108.1152 us, lb 107.9457 us, ub 108.3052 us, ci 0.950
> > std dev: 916.2469 ns, lb 793.1004 ns, ub 1.122127 us, ci 0.950
> >
> > I started experimenting with the vector size and after bumping its size
> > to 32K elements I started
> > getting this:
> >
> > benchmarking FFI/C binding
> > mean: 38.50100 us, lb 36.71525 us, ub 46.87665 us, ci 0.950
> > std dev: 16.93131 us, lb 1.033678 us, ub 40.23900 us, ci 0.950
> > found 6 outliers among 100 samples (6.0%)
> >   3 (3.0%) low mild
> >   3 (3.0%) high severe
> > variance introduced by outliers: 98.921%
> > variance is severely inflated by outliers
> >
> > benchmarking FFI/C binding
> > mean: 209.9733 us, lb 209.5316 us, ub 210.4680 us, ci 0.950
> > std dev: 2.401398 us, lb 2.052981 us, ub 2.889688 us, ci 0.950
> >
> > First result is always about 39us (2.5x faster, despite a longer signal!)
> > while the remaining
> > benchmarks take almost two times longer.
> >
> > Janek
> >
> > On Sunday, 25 November 2012, Janek S. wrote:
> > > Well, it seems that this only happens on my machine. I will try to test
> > > this code on a different computer and see if I can reproduce it.
> > >
> > > I don't think using an existing vector is a good idea - it would make the
> >
> > code
> >
> > > impure.
> > >
> > > Janek
> > >
> > > On Saturday, 24 November 2012, Branimir Maksimovic wrote:
> > > > I don't see such behavior either. Ubuntu 12.10, GHC 7.4.2.
> > > > Perhaps this has to do with how malloc allocates / cache behavior. If
> > > > you try not to allocate the array but use an existing one, perhaps there
> > > > would be
> > > > no inconsistency? It looks to me like it's about CPU cache performance.
> > > > Branimir
> > > >
> > > > > I'm using GHC 7.4.2 on x86_64 openSUSE Linux, kernel 2.6.37.6.
> > > > >
> > > > > Janek
> > > > >
> > > > > On Friday, 23 November 2012, Edward Z. Yang wrote:
> > > > > > Running the sample code on GHC 7.4.2, I don't see the "one
> > > > > > fast, rest slow" behavior.  What version of GHC are you running?
> > > > > >
> > > > > > Edward
> > > > > >
> > > > > > Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 2012:
> > > > > > > > What happens if you do the benchmark without unsafePerformIO
> > > > > > > > involved?
> > > > > > >
> > > > > > > I removed unsafePerformIO, changed copy to have type Vector
> >
> > Double
> >
> > > > > > > -> IO (Vector Double) and modified benchmarks like this:
> > > > > > >
> > > > > > > bench "C binding" $ whnfIO (copy signal)
> > > > > > >
> > > > > > > I see no difference - one benchmark runs fast, remaining ones
> > > > > > > run slow.
> > > > > > >
> > > > > > > Janek
> > > > > > >
> > > > > > > > Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500
> >
> > 2012:
> > > > > > > > > I am using Criterion library to benchmark C code called via
> >
> > FFI
> >
> > > > > > > > > bindings and I've run into a problem that looks like a bug.
> > > > > > > > >
> > > > > > > > > The first benchmark that uses FFI runs correctly, but
> > > > > > > > > subsequent benchmarks run much longer. I created demo code
> > > > > > > > > (about 50 lines, available at github:
> > > > > > > > > https://gist.github.com/4135698 ) in which C function
> >
> > copies a
> >
> > > > > > > > > vector of doubles. I benchmark that function a couple of
> >
> > times.
> >
> > > > > > > > > First run results in an average time of about 17us, subsequent
> > > > > > > > > runs take about 45us. In my real code additional time was
> >
> > about
> >
> > > > > > > > > 15us and it seemed to be a constant factor, not relative to
> > > > > > > > > "correct" run time. The surprising thing is that if my C
> > > > > > > > > function only allocates memory and does no copying:
> > > > > > > > >
> > > > > > > > > double* c_copy( double* inArr, int arrLen ) {
> > > > > > > > >   double* outArr = malloc( arrLen * sizeof( double ) );
> > > > > > > > >
> > > > > > > > >   return outArr;
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > 

Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-27 Thread Jake McArthur
I once had a problem like this. It turned out that my laptop was stepping
the cpu clock rate down whenever it got warm. Disabling that feature in my
BIOS fixed it. Your problem might be similar.
On Nov 27, 2012 7:23 AM, "Janek S."  wrote:

> I tested the same code on my second machine - Debian Squeeze (kernel
> 2.6.32) with GHC 7.4.1 - and
> the results are extremely surprising. At first I was unable to reproduce
> the problem and got
> consistent runtimes of about 107us:
>
> benchmarking FFI/C binding
> mean: 107.3837 us, lb 107.2013 us, ub 107.5862 us, ci 0.950
> std dev: 983.6046 ns, lb 822.6750 ns, ub 1.292724 us, ci 0.950
>
> benchmarking FFI/C binding
> mean: 108.1152 us, lb 107.9457 us, ub 108.3052 us, ci 0.950
> std dev: 916.2469 ns, lb 793.1004 ns, ub 1.122127 us, ci 0.950
>
> I started experimenting with the vector size and after bumping its size to
> 32K elements I started
> getting this:
>
> benchmarking FFI/C binding
> mean: 38.50100 us, lb 36.71525 us, ub 46.87665 us, ci 0.950
> std dev: 16.93131 us, lb 1.033678 us, ub 40.23900 us, ci 0.950
> found 6 outliers among 100 samples (6.0%)
>   3 (3.0%) low mild
>   3 (3.0%) high severe
> variance introduced by outliers: 98.921%
> variance is severely inflated by outliers
>
> benchmarking FFI/C binding
> mean: 209.9733 us, lb 209.5316 us, ub 210.4680 us, ci 0.950
> std dev: 2.401398 us, lb 2.052981 us, ub 2.889688 us, ci 0.950
>
> First result is always about 39us (2.5x faster, despite a longer signal!)
> while the remaining
> benchmarks take almost two times longer.
>
> Janek
>
>
> On Sunday, 25 November 2012, Janek S. wrote:
> > Well, it seems that this only happens on my machine. I will try to test
> > this code on a different computer and see if I can reproduce it.
> >
> > I don't think using an existing vector is a good idea - it would make the
> code
> > impure.
> >
> > Janek
> >
> > On Saturday, 24 November 2012, Branimir Maksimovic wrote:
> > > I don't see such behavior either. Ubuntu 12.10, GHC 7.4.2.
> > > Perhaps this has to do with how malloc allocates / cache behavior. If you
> > > try not to allocate the array but use an existing one, perhaps there would be
> > > no inconsistency? It looks to me like it's about CPU cache performance.
> > > Branimir
> > >
> > > > I'm using GHC 7.4.2 on x86_64 openSUSE Linux, kernel 2.6.37.6.
> > > >
> > > > Janek
> > > >
> > > > On Friday, 23 November 2012, Edward Z. Yang wrote:
> > > > > Running the sample code on GHC 7.4.2, I don't see the "one
> > > > > fast, rest slow" behavior.  What version of GHC are you running?
> > > > >
> > > > > Edward
> > > > >
> > > > > Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 2012:
> > > > > > > What happens if you do the benchmark without unsafePerformIO
> > > > > > > involved?
> > > > > >
> > > > > > I removed unsafePerformIO, changed copy to have type Vector
> Double
> > > > > > -> IO (Vector Double) and modified benchmarks like this:
> > > > > >
> > > > > > bench "C binding" $ whnfIO (copy signal)
> > > > > >
> > > > > > I see no difference - one benchmark runs fast, remaining ones run
> > > > > > slow.
> > > > > >
> > > > > > Janek
> > > > > >
> > > > > > > Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500
> 2012:
> > > > > > > > I am using Criterion library to benchmark C code called via
> FFI
> > > > > > > > bindings and I've run into a problem that looks like a bug.
> > > > > > > >
> > > > > > > > The first benchmark that uses FFI runs correctly, but
> > > > > > > > subsequent benchmarks run much longer. I created demo code
> > > > > > > > (about 50 lines, available at github:
> > > > > > > > https://gist.github.com/4135698 ) in which C function
> copies a
> > > > > > > > vector of doubles. I benchmark that function a couple of
> times.
> > > > > > > > First run results in an average time of about 17us, subsequent
> > > > > > > > runs take about 45us. In my real code additional time was
> about
> > > > > > > > 15us and it seemed to be a constant factor, not relative to
> > > > > > > > "correct" run time. The surprising thing is that if my C
> > > > > > > > function only allocates memory and does no copying:
> > > > > > > >
> > > > > > > > double* c_copy( double* inArr, int arrLen ) {
> > > > > > > >   double* outArr = malloc( arrLen * sizeof( double ) );
> > > > > > > >
> > > > > > > >   return outArr;
> > > > > > > > }
> > > > > > > >
> > > > > > > > then all is well - all runs take similar amount of time. I
> also
> > > > > > > > noticed that sometimes in my demo code all runs take about
> > > > > > > > 45us, but this does not seem to happen in my real code -
> first
> > > > > > > > run is always shorter.
> > > > > > > >
> > > > > > > > Does anyone have an idea what is going on?
> > > > > > > >
> > > > > > > > Janek
> > > >
> > > > ___
> > > > Haskell-Cafe mailing list
> > > > Haskell-Cafe@haskell.org
> > > > http://www.haskell.org/mailman/listinfo/haskell-cafe
> >
> > __

[Haskell-cafe] 1st São Paulo Haskell Meeting [1° Encontro de Haskellers em São Paulo]

2012-11-27 Thread Felipe Almeida Lessa
Hey!

I'd like to invite you to the 1st São Paulo Haskell Meeting!  It's
going to be something simple: we just want to meet each other and talk
about Haskell =).  We already have 9 people confirmed on the Google+
event [1], so come join us already!

Cheers,

PS: We haven't set the place yet.

[1] https://plus.google.com/events/cng3rcv1tjl84g2juddk1i36icg

--
Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-27 Thread Janek S.
I tested the same code on my second machine - Debian Squeeze (kernel 2.6.32) 
with GHC 7.4.1 - and 
the results are extremely surprising. At first I was unable to reproduce the 
problem and got 
consistent runtimes of about 107us:

benchmarking FFI/C binding
mean: 107.3837 us, lb 107.2013 us, ub 107.5862 us, ci 0.950
std dev: 983.6046 ns, lb 822.6750 ns, ub 1.292724 us, ci 0.950

benchmarking FFI/C binding
mean: 108.1152 us, lb 107.9457 us, ub 108.3052 us, ci 0.950
std dev: 916.2469 ns, lb 793.1004 ns, ub 1.122127 us, ci 0.950

I started experimenting with the vector size and after bumping its size to 32K 
elements I started 
getting this:

benchmarking FFI/C binding
mean: 38.50100 us, lb 36.71525 us, ub 46.87665 us, ci 0.950
std dev: 16.93131 us, lb 1.033678 us, ub 40.23900 us, ci 0.950
found 6 outliers among 100 samples (6.0%)
  3 (3.0%) low mild
  3 (3.0%) high severe
variance introduced by outliers: 98.921%
variance is severely inflated by outliers

benchmarking FFI/C binding
mean: 209.9733 us, lb 209.5316 us, ub 210.4680 us, ci 0.950
std dev: 2.401398 us, lb 2.052981 us, ub 2.889688 us, ci 0.950

The first result is always about 39us (2.5x faster, despite a longer signal!)
while the remaining benchmarks take almost two times longer.

Janek


On Sunday, 25 November 2012, Janek S. wrote:
> Well, it seems that this only happens on my machine. I will try to test
> this code on a different computer and see if I can reproduce it.
>
> I don't think using an existing vector is a good idea - it would make the code
> impure.
>
> Janek
>
> On Saturday, 24 November 2012, Branimir Maksimovic wrote:
> > I don't see such behavior either. Ubuntu 12.10, GHC 7.4.2.
> > Perhaps this has to do with how malloc allocates / cache behavior. If you
> > try not to allocate the array but use an existing one, perhaps there would be
> > no inconsistency? It looks to me like it's about CPU cache performance.
> > Branimir
> >
> > > I'm using GHC 7.4.2 on x86_64 openSUSE Linux, kernel 2.6.37.6.
> > >
> > > Janek
> > >
> > > On Friday, 23 November 2012, Edward Z. Yang wrote:
> > > > Running the sample code on GHC 7.4.2, I don't see the "one
> > > > fast, rest slow" behavior.  What version of GHC are you running?
> > > >
> > > > Edward
> > > >
> > > > Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 2012:
> > > > > > What happens if you do the benchmark without unsafePerformIO
> > > > > > involved?
> > > > >
> > > > > I removed unsafePerformIO, changed copy to have type Vector Double
> > > > > -> IO (Vector Double) and modified benchmarks like this:
> > > > >
> > > > > bench "C binding" $ whnfIO (copy signal)
> > > > >
> > > > > I see no difference - one benchmark runs fast, remaining ones run
> > > > > slow.
> > > > >
> > > > > Janek
> > > > >
> > > > > > Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500 2012:
> > > > > > > I am using the Criterion library to benchmark C code called via FFI
> > > > > > > bindings and I've run into a problem that looks like a bug.
> > > > > > >
> > > > > > > The first benchmark that uses FFI runs correctly, but
> > > > > > > subsequent benchmarks run much longer. I created demo code
> > > > > > > (about 50 lines, available at github:
> > > > > > > https://gist.github.com/4135698 ) in which C function copies a
> > > > > > > vector of doubles. I benchmark that function a couple of times.
> > > > > > > First run results in an average time of about 17us, subsequent
> > > > > > > runs take about 45us. In my real code additional time was about
> > > > > > > 15us and it seemed to be a constant factor, not relative to
> > > > > > > "correct" run time. The surprising thing is that if my C
> > > > > > > function only allocates memory and does no copying:
> > > > > > >
> > > > > > > double* c_copy( double* inArr, int arrLen ) {
> > > > > > >   double* outArr = malloc( arrLen * sizeof( double ) );
> > > > > > >
> > > > > > >   return outArr;
> > > > > > > }
> > > > > > >
> > > > > > > then all is well - all runs take similar amount of time. I also
> > > > > > > noticed that sometimes in my demo code all runs take about
> > > > > > > 45us, but this does not seem to happen in my real code - first
> > > > > > > run is always shorter.
> > > > > > >
> > > > > > > Does anyone have an idea what is going on?
> > > > > > >
> > > > > > > Janek
> > >
> > > ___
> > > Haskell-Cafe mailing list
> > > Haskell-Cafe@haskell.org
> > > http://www.haskell.org/mailman/listinfo/haskell-cafe
>





[Haskell-cafe] Conduit and pipelined protocol processing using a threadpool

2012-11-27 Thread Nicolas Trangez
All,

I've written a library to implement servers for some protocol using
Conduit (I'll announce more details later).

The protocol supports pipelining, i.e. a client can send a 'command'
which contains some opaque 'handle' chosen by the client, the server
processes this command, then returns some reply which contains this
handle. The client is free to send other commands before receiving a
reply for any previous request, and the server can process these
commands in any order, sequential or concurrently.

The library is based on network-conduit's "Application" style [1], as
such now I write code like (OTOH)

> application :: AppData IO -> IO ()
> application client = appSource client $= handler $$ appSink client
>   where
>     handler = do
>         negotiateResult <- MyLib.negotiate
>         liftIO $ validateNegotiateResult negotiateResult
>         MyLib.sendInformation 123
>         loop
>
>     loop = do
>         command <- MyLib.getCommand
>         case command of
>             CommandA handle arg -> do
>                 result <- liftIO $ doComplexProcessingA arg
>                 MyLib.sendReply handle result
>                 loop
>             Disconnect -> return ()

This approach handles commands in-order, sequentially. Since command
processing can involve quite some IO operations to disk or network, I've
been trying to support pipelining on the server-side, but as of now I
was unable to get things working.

The idea would be to have a pool of worker threads, which receive work
items from some channel, then return any result on some other channel,
which should then be returned to the client.

This means inside "loop" I would have 2 sources: commands coming from
the client (using 'MyLib.getCommand :: MonadIO m => Pipe ByteString
ByteString o u m Command'), as well as command results coming from the
worker threads through the result channel. Whenever the first source
produces something, it should be pushed onto the work queue, and
whenever the second one yields some result, it should be sent to the
client using 'MyLib.sendReply :: Monad m => Handle -> Result -> Pipe l i
ByteString u m ()'
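A minimal sketch of that worker-pool shape, using plain Chans from base rather than MyLib's conduit types — Command, the reply type, and the reverse-the-argument "processing" are placeholders for illustration only:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever, replicateM, replicateM_)
import Data.List (sort)

-- Placeholder types; the real ones would come from the protocol library.
type ClientHandle = Int
data Command = Command ClientHandle String

-- A worker pulls commands off the request channel, does the (here trivial)
-- processing, and pushes the tagged result onto the reply channel.
worker :: Chan Command -> Chan (ClientHandle, String) -> IO ()
worker requests replies = forever $ do
    Command h arg <- readChan requests
    writeChan replies (h, reverse arg)  -- stand-in for doComplexProcessingA

main :: IO ()
main = do
    requests <- newChan
    replies  <- newChan
    replicateM_ 4 (forkIO (worker requests replies))  -- the worker pool
    mapM_ (writeChan requests) [Command h "ping" | h <- [1 .. 3]]
    rs <- replicateM 3 (readChan replies)
    print (sort rs)  -- sorted, since replies may arrive in any order
```

In the conduit setting, the request channel would be fed by a thread draining the client Source, and a separate thread would drain the reply channel into the client Sink, which is exactly why the two need to live in different threads.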

I've been fighting this for a while and haven't managed to get something
sensible working. Maybe the design of my library is flawed, or maybe I'm
approaching the problem incorrectly, or ...

Has this ever been done before, or would anyone have some pointers how
to tackle this?

Thanks,

Nicolas

[1]
http://hackage.haskell.org/packages/archive/network-conduit/0.6.1.1/doc/html/Data-Conduit-Network.html#g:2





Re: [Haskell-cafe] Observer pattern in haskell FRP

2012-11-27 Thread Ertugrul Söylemez
Nathan Hüsken  wrote:

> > In fact it could be a (free) monad:
> >
> > myApp :: MyWire a (GameDelta ())
> >
> > someDelta :: GameDelta ()
> > someDelta = do
> > randomPos <- liftA2 (,) getRandom getRandom
> > replicateM_ 4 (addCreature randomPos)
> > getPlayerPos >>= centerCamOver
> >
> > Then you could perform that monadic action as part of the rendering
> > process.
>
> That sounds like a good idea. But I still have the problem of
> connecting "game logic" objects with "rendering" objects, or am I
> missing something? Implementing "addCreature" is fine, but when I want
> a "removeCreature", it has to remove the correct creature from a
> potentially very large list/set of creatures.
> How can I efficiently build these connections (which correspond to a
> pointer in other languages, I guess)?

That was a simplified example.  In the real world it depends on what
generates your creatures.  If they can be generated all over the code
then you need some form of identifier generation.  This can be done by
the wire's underlying monad:

type Identifier = Int

type Game = WireM (StateT Identifier ((->) AppConfig))

A creature then may look something like this:

creature :: Game World (Creature, GameDelta ())

The wire produces a creating action at the first instant, then switches
to the creature's regular wire.  The various GameDelta actions form at
least a monoid under (>>), and depending on your design even a group:

rec (c1, gd1) <- creature -< w
    (c2, gd2) <- creature -< w
    (c3, gd3) <- creature -< w
    w <- delay (World []) -< World [c1, c2, c3]
id -< (gd1 >> gd2 >> gd3)

That's the basic idea.
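A hand-rolled sketch of the free-monad GameDelta idea, with the Free type spelled out instead of taken from the free package; the command set (addCreature, centerCamOver over Int pairs) and the string-producing interpreter are illustrative assumptions, not netwire API:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A minimal free monad over a command functor.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
    fmap g (Pure a)  = Pure (g a)
    fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
    pure = Pure
    Pure g  <*> x = fmap g x
    Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
    Pure a  >>= k = k a
    Free fa >>= k = Free (fmap (>>= k) fa)

-- Hypothetical game-state commands; a real game would carry richer data.
data DeltaF next
    = AddCreature (Int, Int) next
    | CenterCamOver (Int, Int) next
    deriving Functor

type GameDelta = Free DeltaF

addCreature :: (Int, Int) -> GameDelta ()
addCreature p = Free (AddCreature p (Pure ()))

centerCamOver :: (Int, Int) -> GameDelta ()
centerCamOver p = Free (CenterCamOver p (Pure ()))

-- The rendering side interprets a delta; here it just emits strings.
runDelta :: GameDelta a -> [String]
runDelta (Pure _)                    = []
runDelta (Free (AddCreature p k))    = ("add " ++ show p) : runDelta k
runDelta (Free (CenterCamOver p k))  = ("cam " ++ show p) : runDelta k

main :: IO ()
main = mapM_ putStrLn (runDelta (addCreature (1, 2) >> centerCamOver (3, 4)))
```

Because (>>) just concatenates command sequences, the gd1 >> gd2 >> gd3 composition above is exactly the monoid structure mentioned in the text.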


Greets,
Ertugrul

-- 
Not to be or to be and (not to be or to be and (not to be or to be and
(not to be or to be and ... that is the list monad.




Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-27 Thread Gregory Collins
GHC has a "stop the world" garbage collector, meaning that while a major
GC is happening, the entire process must be halted. In my experience
GC pause times are typically low, but depending on the heap residency
profile of your application (and the quantity of garbage being
produced by it), this may not be the case. If you have a hard
real-time requirement then a garbage-collected language may not be
appropriate for you.

On Tue, Nov 27, 2012 at 5:19 AM, Jeff Shaw  wrote:
> Hello,
> I've run into an issue that makes me think that when the GHC GC runs while a
> Snap or Warp HTTP server is serving connections, the GC prevents or delays
> TCP connections from forming. My application requires that TCP connections
> form within a few tens of milliseconds. I'm wondering if anyone else has run
> into this issue, and if there are some GC flags that could help. I've tried
> a few, such as -H and -c, and haven't found anything to help. I'm using GHC
> 7.4.1.
>
> Thanks,
> Jeff
>



-- 
Gregory Collins 



Re: [Haskell-cafe] cabal configure && cabal build && cabal install

2012-11-27 Thread kudah
On Tue, 27 Nov 2012 02:20:35 -0500 "Albert Y. C. Lai" 
wrote:

> When "cabal build" succeeds, it always says:
> 
> (older) "registering -"
> (newer) "In-place registering -"
> 
> That's what it says. But use ghc-pkg and other tests to verify that
> no registration whatsoever has happened.

It doesn't register in the user package-db; it registers in its own
dist/package.conf.inplace. If it didn't, you wouldn't be able to build
an executable and a library in one package such that the executable depends
on the library.



Re: [Haskell-cafe] Observer pattern in haskell FRP

2012-11-27 Thread Nathan Hüsken
On 11/27/2012 07:12 AM, Ertugrul Söylemez wrote:
> Nathan Hüsken  wrote:
> 
>> When writing games in other (imperative) languages, I like to separate
>> the game logic from the rendering. For this I use something similar to
>> the observer pattern.
>>
>> [...]
>>
>> So I am wondering: Is there (or can someone think of) a different
>> pattern by which this could be achieved? Or asked differently: how would
>> you do it?
> 
> [...] As far as possible you should use a
> stateless monad, ideally simply Identity or a reader:
> 
> type MyWire = WireM ((->) AppConfig)
> 
> myApp :: MyWire a GameFrame
> 
> This is only the first part of the story.  The second part is the
> rendering itself.  You certainly want a way to make use of various
> OpenGL extensions like vertex buffers, which are inherently stateful.
> One sensible way I see is not to output the game's state, but rather a
> state delta:
> 
> myApp :: MyWire a GameDelta
> 
> That way you can do the imperative stateful plumbing outside of the
> application's wire and get the full power of FRP without giving up
> efficient rendering.  GameDelta itself would essentially be a type for
> game state commands.  In fact it could be a (free) monad:
> 
> myApp :: MyWire a (GameDelta ())
> 
> someDelta :: GameDelta ()
> someDelta = do
> randomPos <- liftA2 (,) getRandom getRandom
> replicateM_ 4 (addCreature randomPos)
> getPlayerPos >>= centerCamOver
> 
> Then you could perform that monadic action as part of the rendering
> process.
> 

That sounds like a good idea. But I still have the problem of connecting
"game logic" objects with "rendering" objects, or am I missing something?
Implementing "addCreature" is fine, but when I want a "removeCreature",
it has to remove the correct creature from a potentially very large
list/set of creatures.
How can I efficiently build these connections (which correspond to a
pointer in other languages, I guess)?

Thanks!
Nathan
