RE: [Haskell-cafe] Haskell vs. Erlang for heavy-duty network apps

2005-12-28 Thread Simon Peyton-Jones
I've already said to Joel (off-list) how valuable I think his experience
of using Haskell for a new kind of real application is.  I see no reason
in principle why Haskell shouldn't work just fine for his kind of
application.  But he's pushing hard on parts of the system (both
run-time-system and libraries) that have not been as thoroughly
exercised as (say) list processing.

Quite a few people on the mailing list have helped a lot.  My sense is
(though I have not followed all the twists and turns) that Joel's
experience has exposed immaturity in many of the networking libraries.
I hope that people on this list may feel motivated to help improve them.

Simon

| -Original Message-
| From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joel
| Reymont
| Sent: 25 December 2005 13:35
| To: Tomasz Zielonka
| Cc: haskell-cafe@haskell.org; Peter Simons; Bulat Ziganshin
| Subject: Re: [Haskell-cafe] Haskell vs. Erlang for heavy-duty network
apps
| 
| 
| On Dec 25, 2005, at 1:14 PM, Tomasz Zielonka wrote:
| 
|  I think your work will be very important and valuable for the
|  community.
|  You've shown where Haskell could be better, and I believe it will
|  catch up eventually, either by improvements in GHC, in libraries or
|  simply in documentation.
| 
| Thank you Tomasz! I think "eventually" will come sooner than you
| think :-).
| 
| On my TODO list for the next few weeks:
| 
| 1) Set up a Haskell vs. Erlang heavily-multithreaded serialization
| shootout
| 2) Work with Simon M. to make sure Haskell is on par and try to hack GHC
| to add whatever is needed
| 3) Add profiling support for STM to GHC (including retainer profiling)
| 4) Adapt the Zipper FileServer OS to #1. See if single-threading with
| delimited continuations is better than multi-threading with unbound
| threads.
| 5) If #4 is a yes then investigate why
| 
| And of course I will blog about the whole thing as I go along.
| 
|   Thanks, Joel
| 
| --
| http://wagerlabs.com/
| 
| 
| 
| 
| 
| ___
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] binary IO

2005-12-28 Thread Joel Reymont


On Dec 27, 2005, at 10:30 PM, Tomasz Zielonka wrote:


Let's see if I understand correctly. There are 17605 messages in
trace.dat. On my hardware the average message unpickling time is
0.0002s when you only have a single thread. So, it indeed seems that
with 1000 threads it should be possible to process every message in
under 4 seconds.


I'm on a PowerBook G4 1.25Ghz with 1Gb of memory and lots of apps  
running. I have about 60-70Mb of physical memory free although other  
apps are barely consuming CPU time.


This is what I get from the Erlang version. Columns are # of  
processes, average time per message, peak observed physical memory  
and peak observed virtual memory usage respectively.


1   - 2.34322e-5s,
10  - 3.91232e-5s, 18Mb, 50Mb
100 - 3.26753s, 70Mb, 100Mb, all 100 failed since alarms were set at  
3 seconds.


I just noticed that I'm unpickling all the packets whereas timeleak  
only looks at compressed commands and unpickles server info packets  
from those. I also made ~160Mb of physical memory available and  
decided to read some web articles while the tests are running  
(browser already running). Another run...


1 -1.00657e-6s
10 -   1.10232e-6s
100 -  3.09583s, 55Mb, 90Mb, all 100 failed
1000 - 25s. All failed rather quickly.

The issue could be that they are all stumbling at the same big  
packets at about the same time. So I inserted a random delay of  
between 100ms and 3.1s and got an average of 2.96161e-2s with 77  
failures out of 100. On 1000 it's 957 failed with slightly more than  
3s and 1.12748e-6 on the rest.


The comparison is still a bit unfair as Haskell compiles to native  
code whereas I was running the above test using the regular bytecode  
VM. With native compilation enabled I get


1   - 1.00359e-6s
10  - 1.08691e-6s
100 - 6.19101e-3s with 87 out of 100 failed at about 3.5s.
100 - 1.12210e-6s and 0 failed on another run.

The difference is in the random delays between bot starts,  
apparently. You are well off so long as bots don't hit compressed  
packets all at once. The big packets decompress into 50k and are a  
hassle to unpickle.


Now here's the kicker... Using the read_ahead option when opening the  
file gives you a ~64K buffer. Another run...


10   - 1.06194e-6
100  - 1.05641e-6
1000 - 1.06799e-6 and 916 failed with time between 3s and 4s

Increasing alarm time to 4s, using the native compiler with all
optimizations (erlc +native +o3 +inline *.erl) gives me


10   - 1.10848e-6s
100  - 1.24159e-6s, 0 failed
1000 - 1.02611e-6s, 923 failed


Right now I can think of two reasons:
- 1000 threads need much data in the heap, which increases the cost
  of GC and with frequent context switches results in poor cache
  performance
- GHC's process scheduler is not as intelligent as Erlang's


It's clear to me by now that the app is not language or compiler-bound.

One possible solution is to reduce the number of simultaneously running
unpicklings/picklings (I think someone already proposed it, was that
Bulat?). It would reduce the amount of needed memory and should improve
cache usage.

But then, what will be the delay observed by the server?


Right, you can't really do this as it will increase the overall delay.


Each bot is given 5, 15 or 35 seconds to respond by the poker server
and this is proving to be too little for my Haskell implementation.


This is per message, right?


Per message, yes. I will code the networking part of my Erlang version
and will report whether I got more/less timeouts than the Haskell
version.



What if the spec was the data type itself? When I was dealing with a
proprietary Intel-based binary format, I derived my picklers /
unpicklers with TH from data type definitions. Of course, there were
cases where the derived code would be wrong, and then I had to write
the code myself, but it was for about 2-3 record types out of a total
of 30.


Wish I could do that. I don't know TH at all and any TH I got was due
to the folks on #haskell and here providing it (thanks Cale!).
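
For readers wondering what such derived code looks like, here is a
hand-written sketch (not from Tomasz's library) of the pickle/unpickle
pair a TH deriver along those lines would generate. The ServerInfo type,
its field layout and the big-endian encoding are all made up for the
example.

import Data.Word (Word8, Word32)
import Data.Bits (shiftR, shiftL, (.|.))

-- Hypothetical record; a derived pickler just walks the fields in order.
data ServerInfo = ServerInfo { siId :: Word32, siFlag :: Word8 }
  deriving Show

pickleServerInfo :: ServerInfo -> [Word8]
pickleServerInfo (ServerInfo i f) = word32BE i ++ [f]

unpickleServerInfo :: [Word8] -> Maybe (ServerInfo, [Word8])
unpickleServerInfo (a:b:c:d:f:rest) =
    Just (ServerInfo (word32FromBE a b c d) f, rest)
unpickleServerInfo _ = Nothing

word32BE :: Word32 -> [Word8]
word32BE w = [ fromIntegral (w `shiftR` 24)
             , fromIntegral (w `shiftR` 16)
             , fromIntegral (w `shiftR` 8)
             , fromIntegral w ]

word32FromBE :: Word8 -> Word8 -> Word8 -> Word8 -> Word32
word32FromBE a b c d = (fromIntegral a `shiftL` 24) .|.
                       (fromIntegral b `shiftL` 16) .|.
                       (fromIntegral c `shiftL`  8) .|.
                        fromIntegral d

A deriver emits exactly this kind of boilerplate from the constructor's
fields, which is why it only breaks down for the few record types with
irregular encodings.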

Joel

--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] binary IO

2005-12-28 Thread Einar Karttunen
On 27.12 07:00, Tomasz Zielonka wrote:
 Some time ago I was playing with DNS too. I have a library that can
 construct and interpret DNS packets, but it's a bit incomplete right
 now. It reads packets as Strings, but it should be quite straightforward
 to make it read and interpret FastPackedStrings.
 
 http://www.uncurry.com/repos/TzDNS


Nice, here is my shot at DNS - 
http://cs.helsinki.fi/u/ekarttun/haskell/hdnsd-20051227.tar.bz2
feel free to take bits if you are interested. The serialization/deserialization
uses Ptrs.

- Einar Karttunen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] binary IO

2005-12-28 Thread Lennart Augustsson

Joel Reymont wrote:
You are right in that I spent the first few weeks learning. By now I  
know that pickling is the bottleneck, though. The timeleak code is  very 
simple. It forks X threads where each thread opens a file for  reading. 

Why on earth do you want each thread to open the file and unpickle?
Why not unpickle once and reuse it?
Or, if this is just a test and in the future they will all read
from different files (or sockets), then maybe you are hitting
on a different bottleneck than you think.



This discussion is a bit of a dead-end unless you are willing to take
the next step and apply your solution to my timeleak repro case. If you
or someone else is willing to do that then I can continue my set of
profiler reports and eventually get some closure. It will happen once
either 1) the last bit of performance is squeezed out of pickling and
it's determined that threading or the scheduler is the bottleneck or 2)
things work out nicely.


Instead of starting to optimize a particular pickling library, perhaps
you should have tried different libraries and picked the best one.
Since this is your project I don't think you can expect others to test
things for you.  Well, maybe if you post your code as a challenge. ;)

-- Lennart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] binary IO

2005-12-28 Thread Joel Reymont


On Dec 28, 2005, at 11:40 AM, Lennart Augustsson wrote:


Why on earth do you want each thread to open the file and unpickle?
Why not unpickle once and reuse it?
Or, if this is just a test and in the future they will all read
from different files (or sockets), then maybe you are hitting
on a different bottleneck than you think.


Right, they will be reading from different sockets.

Instead of starting to optimize a particular pickling library, perhaps
you should have tried different libraries and picked the best one.


Well, I thought pickler combinators were good but yes, you are right.


Since this is your project I don't think you can expect others to test
things for you.  Well, maybe if you post your code as a challenge. ;)


I did. http://wagerlabs.com/timeleak.tgz

Joel

--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] binary IO

2005-12-28 Thread Sebastian Sylvan
On 12/28/05, Joel Reymont [EMAIL PROTECTED] wrote:

 On Dec 28, 2005, at 11:40 AM, Lennart Augustsson wrote:

  Why on earth do you want each thread to open the file and unpickle?
  Why not unpickle once and reuse it?
  Or, if this is just a test and in the future they will all read
  from different files (or sockets), then maybe you are hitting
  on a different bottleneck than you think.

 Right, they will be reading from different sockets.


I think what he's trying to say is that reading from disk is different
from reading from sockets, so doing tests with disk reading is
misrepresenting the problem and may indicate a bottleneck that isn't
there.
How does this work if you remove the file-reading? I mean just putting
the file on a small TCP/IP file server with some simulated latency and
bandwidth limitation, and then connecting to that in each thread?
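
A rough sketch of the kind of throttled file server Sebastian suggests,
using the classic Network API; the port number, chunk size, delay and
the trace.dat filename are arbitrary choices for illustration, not from
his message.

import Network (withSocketsDo, listenOn, accept, PortID(PortNumber))
import System.IO
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever)

-- Serve trace.dat to every client, 4 KB at a time with a small pause,
-- to simulate limited bandwidth and latency.
main :: IO ()
main = withSocketsDo $ do
  sock <- listenOn (PortNumber 4000)
  forever $ do
    (h, _, _) <- accept sock
    forkIO (serve h)

serve :: Handle -> IO ()
serve h = do
  hSetBinaryMode h True
  inp <- openBinaryFile "trace.dat" ReadMode
  contents <- hGetContents inp
  mapM_ send (chunksOf 4096 contents)
  hClose inp
  hClose h
  where
    send c = hPutStr h c >> hFlush h >> threadDelay 20000  -- ~20 ms per chunk

chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = let (a, b) = splitAt n xs in a : chunksOf n b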


/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Erlang vs. Haskell (was Re: [Haskell-cafe] binary IO)

2005-12-28 Thread Joel Reymont

On Dec 28, 2005, at 1:05 PM, Sebastian Sylvan wrote:

How does this work if you remove the file-reading? I mean just putting
the file on a small TCP/IP file server with some simulated latency and
bandwidth limitation, and then connecting to that in each thread?


This is probably the way to go but I don't have a corresponding
Haskell repro case.


This is what my application does, basically, and I'm getting timeouts  
there. Which is the reason that I created the file-based test in the  
first place. The results from the two seemed very similar, i.e.  
timeouts in both versions.


I will know how the full-blown Erlang app behaves today. 3 months of
Haskell vs. 3 days of Erlang. There's the right tool for every job,
and while Haskell code could be tweaked to be on par with Erlang,
doing this would be missing the point.


Using Haskell for this networking app forced me to focus on all the  
issues _but_ the business logic. Type constraints, binary IO and  
serialization, minimizing memory use and fighting laziness, timers,  
tweaking concurrency and setting up message channels, you name it.


There are no type constraints to take care of with Erlang since it's  
dynamically typed. It's also strict and has excellent support for  
binary IO, serialization, timers and so on. All this is a given. You  
can just assume that doing it the most natural way is doing the  
right thing and can focus on the business logic.


The Erlang VM and distribution have been tweaked and polished to no
end by Ericsson, with a single-minded networking focus. It's a
specialist language designed for high concurrency, fault tolerance,
scalability and protocol processing.


It's also a pure functional language and it doesn't even have the IO  
monad. Alternatively, you could say that the IO monad is hidden away  
in the built-in library functions. You still cannot mutate a  
variable, you are always copying.


You can probably try to do all sorts of things with Erlang (look at  
Wings 3D) but I would probably just stick to high concurrency and  
networking apps. I would like to see if Yampa (and Frag) can be more  
transparently done in Erlang but that's because I find Yampa  
extremely complicated. Otherwise, as you try to do things that Erlang  
probably wasn't optimized for, you will face limitations just like I  
did with Haskell.


There's not a lot of business logic to focus on in my app. I just had
to add a thin serialization veneer (http://wagerlabs.com/erlang/pickle.erl)
that implements the pickler combinators and ... I was done. ZLib
compression is built into the Erlang distribution and, since the
Erlang FFI makes me cringe, I implemented the custom SSL encryption
as a single-threaded UDP server in about 170 lines of C. This is how
I was able to finish 30 times faster.


Joel

--
http://wagerlabs.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: binary IO

2005-12-28 Thread Joel Reymont
Amen! Haskell has forever realigned my mind-gears and I'm observing  
positive results as we speak :-).


On Dec 28, 2005, at 1:56 AM, Peter Simons wrote:


Even if you ultimately
decide to write your application in another language, you'll find
that knowing and understanding Haskell will change the way you
design software -- regardless of the language you use.


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: binary IO

2005-12-28 Thread Joel Reymont
I would compare Haskell to visiting the chiropractor. You will walk  
straighter, stand taller and your life will never be the same :D.


On Dec 28, 2005, at 1:56 AM, Peter Simons wrote:


you'll find
that knowing and understanding Haskell will change the way you
design software -- regardless of the language you use.


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] OpenAL bindings / compiling ghc 6.5

2005-12-28 Thread Michael Benfield

I see here:
http://www.haskell.org/HOpenGL/newAPI/

OpenAL bindings listed as part of the Hierarchical Libraries. And when I
download the source to a development snapshot of GHC, there they are. 
Is there a way to install this on GHC 6.4?


Alternatively... I can't get GHC 6.5 to compile. I do ./configure &&
make and it gets to this step:


==fptools== make all -r;
 in /Users/mike/source/ghc-6.5.20051221/ghc/compiler

and sits forever (well, I only let it sit a day and a half, 
actually...). And the only relevant process that seems to be running is 
gmake (no ghc or gcc or anything). I've tried a few different versions 
of 6.5, including HEAD and STABLE branch. I'm on Mac OS 10.3.9. Thanks.


Mike Benfield

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OpenAL bindings / compiling ghc 6.5

2005-12-28 Thread Joel Reymont

Mike,

I think you should post to cvs-ghc. I was able to get things to
compile (almost) on 10.4.3 but had to configure with --disable-alut
--disable-openal, etc.


Joel

On Dec 28, 2005, at 3:15 PM, Michael Benfield wrote:


I see here:
http://www.haskell.org/HOpenGL/newAPI/

OpenAL bindings listed as part of the Hierarchical Libraries. And
when I download the source to a development snapshot of GHC, there
they are. Is there a way to install this on GHC 6.4?


Alternatively... I can't get GHC 6.5 to compile. I do ./configure &&
make and it gets to this step:
--------------------------------------------------------------

==fptools== make all -r;
 in /Users/mike/source/ghc-6.5.20051221/ghc/compiler
--------------------------------------------------------------
and sits forever (well, I only let it sit a day and a half,  
actually...). And the only relevant process that seems to be  
running is gmake (no ghc or gcc or anything). I've tried a few  
different versions of 6.5, including HEAD and STABLE branch. I'm on  
Mac OS 10.3.9.


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread John Goerzen
Hello,

I have the need for a locking object that can provide shared and
exclusive locks in much the same manner as the POSIX flock() function.

A thread that acquires an exclusive lock has a guarantee that no other
thread holds any locks.

A thread that acquires a shared lock has a guarantee that no other
thread holds an exclusive lock, though any number of other threads hold
shared locks.

My intuition tells me that this could be implemented in terms of an MVar
somehow.  (At least, I've used MVars for simple locks for quite some
time.)  But I can't quite figure out how.  Any ideas?

Thanks,

-- John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OpenAL bindings / compiling ghc 6.5

2005-12-28 Thread Sven Panne
Am Mittwoch, 28. Dezember 2005 16:15 schrieb Michael Benfield:
 I see here:
 http://www.haskell.org/HOpenGL/newAPI/

 OpenAL bindings listed as part of the Hierarchical Libraries. And when I
 download the source to a development snapshot of GHC, there they are.
 Is there a way to install this on GHC 6.4?

Although I haven't tried it, you should be able to simply use the OpenAL and 
ALUT directories from the HEAD in the 6.4 branch. The only problem that might 
arise is that the Cabal stuff might have changed a bit, but I'm not sure 
about that. Anyway, this should be fairly easy to fix...

 Alternatively... I can't get GHC 6.5 to compile. I do ./configure &&
 make and it gets to this step:
 
 ==fptools== make all -r;
   in /Users/mike/source/ghc-6.5.20051221/ghc/compiler
 
 and sits forever [...]

I haven't heard about that problem. :-(

Cheers,
S.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OpenAL bindings / compiling ghc 6.5

2005-12-28 Thread Sven Panne
Am Mittwoch, 28. Dezember 2005 16:24 schrieb Joel Reymont:
 I think you should post to cvs-ghc. I was able to get things to
 compile (almost) on 10.4.3 but had to configure with --disable-alut
 --disable-openal, etc.

Why were those --disable-foo options necessary? In theory everything should be 
autodetected, otherwise it's a bug. Detailed information about the platform, 
compilers, a configuration/compilation log plus config.log will help to 
diagnose the problem.

Cheers,
   S.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread Chris Kuklewicz
Hi,

The Little Book of Semaphores (287 pages) is available at
http://greenteapress.com/semaphores/

It has a slightly better solution that uses two mutexes and a count, see
the Readers-writers problem, section 4.2 page 67 (pdf page 79).  It also
goes on to discuss fairness and starvation and writer (i.e. exclusive)
priority.

-- 
Chris

Chris Kuklewicz wrote:
 John Goerzen wrote:
 
Hello,

I have the need for a locking object that can provide shared and
exclusive locks in much the same manner as the POSIX flock() function.

A thread that acquires an exclusive lock has a guarantee that no other
thread holds any locks.

A thread that acquires a shared lock has a guarantee that no other
thread holds an exclusive lock, though any number of other threads hold
shared locks.

My intuition tells me that this could be implemented in terms of an MVar
somehow.  (At least, I've used MVars for simple locks for quite some
time.)  But I can't quite figure out how.  Any ideas?

Thanks,

-- John
 
 
 STM or IO ?
 
 You need a count of shared locks S, *Var Word32.
 
 To increase the count S, you need to hold a mutex E, *Var ().
 So (take mutex E >> increment S >> release E) is the combined
 operation.
 
 To decrease the count S, you do not need to hold a mutex. (decrement S).
 
 By grabbing the mutex E and waiting for S to go to zero, you have
 acquired exclusive control.  When you are done just release E.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread Chris Kuklewicz
John Goerzen wrote:
 Hello,
 
 I have the need for a locking object that can provide shared and
 exclusive locks in much the same manner as the POSIX flock() function.
 
 A thread that acquires an exclusive lock has a guarantee that no other
 thread holds any locks.
 
 A thread that acquires a shared lock has a guarantee that no other
 thread holds an exclusive lock, though any number of other threads hold
 shared locks.
 
 My intuition tells me that this could be implemented in terms of an MVar
 somehow.  (At least, I've used MVars for simple locks for quite some
 time.)  But I can't quite figure out how.  Any ideas?
 
 Thanks,
 
 -- John

STM or IO ?

You need a count of shared locks S, *Var Word32.

To increase the count S, you need to hold a mutex E, *Var ().
So (take mutex E >> increment S >> release E) is the combined
operation.

To decrease the count S, you do not need to hold a mutex. (decrement S).

By grabbing the mutex E and waiting for S to go to zero, you have
acquired exclusive control.  When you are done just release E.

-- 
Chris
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread Robert Dockins


On Dec 28, 2005, at 11:14 AM, Chris Kuklewicz wrote:


John Goerzen wrote:

Hello,

I have the need for a locking object that can provide shared and
exclusive locks in much the same manner as the POSIX flock()  
function.


A thread that acquires an exclusive lock has a guarantee that no other
thread holds any locks.

A thread that acquires a shared lock has a guarantee that no other
thread holds an exclusive lock, though any number of other threads
hold shared locks.

My intuition tells me that this could be implemented in terms of an MVar
somehow.  (At least, I've used MVars for simple locks for quite some
time.)  But I can't quite figure out how.  Any ideas?

Thanks,

-- John


STM or IO ?

You need a count of shared locks S, *Var Word32.

To increase the count S, you need to hold a mutex E, *Var ().
So (take mutex E >> increment S >> release E) is the combined
operation.

To decrease the count S, you do not need to hold a mutex.  
(decrement S).


By grabbing the mutex E and waiting for S to go to zero, you have
acquired exclusive control.  When you are done just release E.

--
Chris


This seems fine for STM because you can just retry until count is 0,  
but I don't know of a good way to wait for an MVar to have a  
particular value (I assume busy-wait isn't what you have in mind).   
You'll probably need an additional MVar that exclusive lockers take  
to let them block.  Then you need to be sure that this MVar is filled  
when count goes to 0 and empty when count goes above zero.
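
A minimal sketch of the extra-MVar design Rob describes (the RWLock type
and the function names are invented here): the zero marker is full
exactly when the shared count is 0, so an exclusive locker can block on
it instead of busy-waiting.

import Control.Concurrent.MVar
import Control.Monad (when)

data RWLock = RWLock
  { rwMutex :: MVar ()   -- E: serialises writers and shared-count updates
  , rwCount :: MVar Int  -- S: number of shared holders
  , rwZero  :: MVar ()   -- full exactly when S == 0
  }

newRWLock :: IO RWLock
newRWLock = do
  e <- newMVar ()
  s <- newMVar 0
  z <- newMVar ()              -- no shared holders yet
  return (RWLock e s z)

acquireShared :: RWLock -> IO ()
acquireShared (RWLock e s z) = withMVar e $ \_ -> do
  n <- takeMVar s
  when (n == 0) (takeMVar z)   -- first reader empties the zero marker
  putMVar s (n + 1)

releaseShared :: RWLock -> IO ()
releaseShared (RWLock _ s z) = do
  n <- takeMVar s
  when (n == 1) (putMVar z ()) -- last reader refills it
  putMVar s (n - 1)

acquireExclusive :: RWLock -> IO ()
acquireExclusive (RWLock e _ z) = do
  takeMVar e                   -- keep out new readers and writers
  takeMVar z                   -- wait for the shared count to drain to zero

releaseExclusive :: RWLock -> IO ()
releaseExclusive (RWLock e _ z) = do
  putMVar z ()
  putMVar e ()

releaseExclusive refills the zero marker before releasing the mutex, so
the next thread to acquire E always finds the lock in a consistent state.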



Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
  -- TMBG



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OpenAL bindings / compiling ghc 6.5

2005-12-28 Thread Joel Reymont

Sven,

The logs are at http://wagerlabs.com/logs.tgz. I have 6.4.1 installed  
from darwinports into /opt/local.


Thanks, Joel

On Dec 28, 2005, at 4:14 PM, Sven Panne wrote:


Am Mittwoch, 28. Dezember 2005 16:24 schrieb Joel Reymont:

I think you should post to cvs-ghc. I was able to get things to
compile (almost) on 10.4.3 but had to configure with --disable-alut
--disable-openal, etc.


Why were those --disable-foo options necessary? In theory everything
should be autodetected, otherwise it's a bug. Detailed information
about the platform, compilers, a configuration/compilation log plus
config.log will help to diagnose the problem.

Cheers,
   S.


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread Chris Kuklewicz


 STM or IO ?

 You need a count of shared locks S, *Var Word32.

 To increase the count S, you need to hold a mutex E, *Var ().
 So (take mutex E >> increment S >> release E) is the combined
 operation.

 To decrease the count S, you do not need to hold a mutex. 
 (decrement S).

 By grabbing the mutex E and waiting for S to go to zero, you have
 acquired exclusive control.  When you are done just release E.

 -- 
 Chris
 
 
 This seems fine for STM because you can just retry until count is 0, 
 but I don't know of a good way to wait for an MVar to have a  particular
 value (I assume busy-wait isn't what you have in mind).   You'll
 probably need an additional MVar that exclusive lockers take  to let
 them block.  Then you need to be sure that this MVar is filled  when
 count goes to 0 and empty when count goes above zero.
 
 
 Rob Dockins

You are right.  I spent too much time teaching myself STM, and I
defaulted to those primitives.

But STM, wrapped in small pieces, makes for interesting IO commands
(untested):

createLocks = do me <- newMVar ()
                 tv <- atomically $ newTVar (0::Word32)
                 return (me,tv)

waitForZero :: (Num a, Ord a) => TVar a -> IO ()
waitForZero tv = atomically $ do
  v <- readTVar tv
  when (v > 0) retry

takeExclusive :: MVar () -> TVar Word32 -> IO ()
takeExclusive me tv = takeMVar me >> waitForZero tv

releaseExclusive me = putMVar me ()

takeShared :: MVar () -> TVar Word32 -> IO ()
takeShared me tv = withMVar me $ \_ -> atomically $ do
  v <- readTVar tv
  writeTVar tv (v+1)

releaseShared tv = atomically $ do
  v <- readTVar tv
  writeTVar tv (v-1)

So you don't need much STM to have the benefit of retry.  Also: The
ability to put (STM a) or (IO a) into a TVar or MVar makes for wonderful
cross thread solutions to some of the standard synchronization problems.
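
A possible usage sketch on top of Chris's functions above (the
withShared/withExclusive names are invented here; it assumes the same
imports as the code above plus Control.Exception): wrapping
acquire/release in bracket_ so the lock is released even if the guarded
action throws.

import Control.Exception (bracket_)

withShared :: MVar () -> TVar Word32 -> IO a -> IO a
withShared me tv act = bracket_ (takeShared me tv) (releaseShared tv) act

withExclusive :: MVar () -> TVar Word32 -> IO a -> IO a
withExclusive me tv act =
    bracket_ (takeExclusive me tv) (releaseExclusive me) act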

-- 
Chris Kuklewicz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread Tomasz Zielonka
On Wed, Dec 28, 2005 at 05:28:28PM +, Chris Kuklewicz wrote:
 But STM, wrapped in small pieces, makes for interesting IO commands
 (untested):

 waitForZero :: (Num a, Ord a) => TVar a -> IO ()
 waitForZero tv = atomically $ do
   v <- readTVar tv
   when (v > 0) retry

This function is rather useless in IO - you will get race conditions.
That's what you get for leaving the STM wonderland ;-)

Best regards
Tomasz

-- 
I am searching for a programmer who is good at least in some of
[Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared/Exclusive Locks

2005-12-28 Thread Robert Dockins


On Dec 28, 2005, at 1:38 PM, Tomasz Zielonka wrote:


On Wed, Dec 28, 2005 at 05:28:28PM +, Chris Kuklewicz wrote:

But STM, wrapped in small pieces, makes for interesting IO commands
(untested):



waitForZero :: (Num a, Ord a) => TVar a -> IO ()
waitForZero tv = atomically $ do
  v <- readTVar tv
  when (v > 0) retry


This function is rather useless in IO - you will get race conditions.
That's what you get for leaving the STM wonderland ;-)


Actually, I think it works in the particular instance given of
constructing read-write locks, because the call to waitForZero is
guarded by a mutex.  But it would certainly be easy to introduce a
race condition using constructs like this.  Given the alternatives
{use STM fully, use STM some, don't use STM}, I would have a hard
time justifying the "use STM some" alternative (at least for new
programs).  If you are OK with introducing a dependency on STM, why
not go whole hog?
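
For comparison, a minimal sketch of what the whole-hog, all-STM version
might look like (all names invented here; each acquire is a single
transaction, so retry does all the waiting). Note that this naive
version lets a steady stream of readers starve a waiting writer.

import Control.Concurrent.STM
import Control.Monad (when)

data StmRWLock = StmRWLock { slWriter :: TVar Bool, slReaders :: TVar Int }

newStmRWLock :: STM StmRWLock
newStmRWLock = do w <- newTVar False
                  r <- newTVar 0
                  return (StmRWLock w r)

acquireShared :: StmRWLock -> STM ()
acquireShared (StmRWLock w r) = do
  busy <- readTVar w
  when busy retry                  -- wait until no writer holds the lock
  readTVar r >>= writeTVar r . (+ 1)

releaseShared :: StmRWLock -> STM ()
releaseShared (StmRWLock _ r) = readTVar r >>= writeTVar r . subtract 1

acquireExclusive :: StmRWLock -> STM ()
acquireExclusive (StmRWLock w r) = do
  busy <- readTVar w
  when busy retry                  -- one writer at a time
  n <- readTVar r
  when (n > 0) retry               -- wait for shared holders to drain
  writeTVar w True

releaseExclusive :: StmRWLock -> STM ()
releaseExclusive (StmRWLock w _) = writeTVar w False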



Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
  -- TMBG



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Help with shootout

2005-12-28 Thread Chris Kuklewicz
Hello all,

I decided to put together an entry for one of the shootout categories:

http://haskell.org/hawiki/ChameneosEntry

It involves 4 threads. There is no current Haskell entry.

Anyone have comments? Alterations?  I can't have created the best code yet.

I don't have the compilers for the other entries, so I can't compare
speed on my powerbook (which is the wrong kind of CPU anyway -- they
test on an AMD system).

-- 
Chris
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Re: Haskell Speed

2005-12-28 Thread Branimir Maksimovic




From: Isaac Gouy [EMAIL PROTECTED]
To: haskell-cafe@haskell.org
Subject: [Haskell-cafe] Re: Haskell Speed
Date: Tue, 27 Dec 2005 20:12:01 -0800 (PST)

Branimir Maksimovic wrote:
 Of course, the first example uses [String] instead of Data.HashTable
 as other languages do. Imagine a C program that uses a list rather
 than a hash table - how would it perform? :)

And the author comments his program
-- This is a purely functional solution to the problem.
-- An alternative which keeps a mutable table of occurrences would
-- be faster.


Yes, I saw that, but that does not qualify as a benchmark with hash tables.
I mean the program is OK, but this is apples to oranges.


We'll be happy to also show a Haskell program that
uses Data.HashTable - first, someone needs to
contribute that program.


OK, I've attached a hash table version. As I'm a Haskell newbie, I suppose
someone can do much better.




 I didn't look further after that.

Ideal - you may criticize without the risk that others
will criticize what you do.


Attachment follows; you can criticize my poor Haskell skills as much as
you want :)
This version is actually the D program translated to Haskell, plus the
file parsing code from the original Haskell program. Now you can compare,
no matter how slow it is :)
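
Since the attachment is not reproduced here, the following is only a
small sketch of the kind of Data.HashTable counting loop such a version
is built around, written against the old Data.HashTable API that shipped
with GHC at the time (new takes an equality test and a hash function).

import qualified Data.HashTable as H

-- Count how often each key occurs, mutating a hash table in IO.
countOccurrences :: [String] -> IO (H.HashTable String Int)
countOccurrences keys = do
    t <- H.new (==) H.hashString
    mapM_ (bump t) keys
    return t
  where
    bump t k = do
        mv <- H.lookup t k
        case mv of
          Nothing -> H.insert t k 1
          Just n  -> H.update t k (n + 1) >> return ()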

Greetings, Bane.

_
Don't just search. Find. Check out the new MSN Search! 
http://search.msn.click-url.com/go/onm00200636ave/direct/01/


knucleotide.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Help with shootout

2005-12-28 Thread Isaac Gouy
--- Chris Kuklewicz [EMAIL PROTECTED]
wrote:
-snip-
 which is the wrong kind of CPU anyway -- they
 test on an AMD system

What machine are you running the programs on?
http://shootout.alioth.debian.org/gp4/faq.php#machine




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OpenAL bindings / compiling ghc 6.5

2005-12-28 Thread Michael Benfield
I tried to compile ghc 6.4.2 and I got the same thing (sits forever at 
that point).


Sven Panne wrote:
 Although I haven't tried it, you should be able to simply use the
 OpenAL and ALUT directories from the HEAD in the 6.4 branch.

I don't know how to do this. The Cabal User's Guide says "After you've
unpacked a Cabal package, you can build it by moving into the root
directory of the package and using the Setup.hs or Setup.lhs script
there", but there is no such script present.


Mike Benfield

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: binary IO

2005-12-28 Thread Bulat Ziganshin
Hello Peter,

Tuesday, December 27, 2005, 11:26:29 PM, you wrote:

PS My guess is that you would learn more if _you_ would plug
PS the different IO libraries into your test code. I'm certain

Peter, because you claimed that Haskell can be made as effective as C,
please help us :)

Your BlockIO library is great, but its usage is limited to very
specific situations - when we can save/pass state between the processing
of individual bytes.

What about (de)serialization tasks? My best attempt is still 10x slower
than the C version. Can you roll up a little prototype for such a library,
or just sketch the ideas so I can try to implement them?

It is also a call to everyone - what is the key to an efficient Binary lib?
You can see my current attempt at http://freearc.narod.ru/Binary.tar.gz

the key functions:

instance (MemoryStream h) => ByteStream (BufferedMemoryStream h) where
    vPutByte mem@(Buf h buf' pos' end') byte = do
        pos <- readPtr pos'
        end <- readPtr end'
        if (pos==end) then do
            sendCurrentBuffer mem
            receiveNextBuffer mem
            vPutByte mem byte
         else do
            writePtr pos' $! (pos+:1)
            writeByteAt pos $! (fromEnum byte)

    vGetByte mem@(Buf h buf' pos' end') = do
        pos <- readPtr pos'
        end <- readPtr end'
        if (pos==end) then do
            sendCurrentBuffer mem
            receiveNextBuffer mem
            vGetByte mem
         else do
            writePtr pos' $! (pos+:1)
            byte <- readByteAt pos
            return $! (toEnum byte)

and a series of getWordXX/putWordXX in class (ByteStream h) => BitStream h:

putWord32 h w = do vPutByte h $! ( w `shiftR` 24)
                   vPutByte h $! ((w `shiftR` 16) .&. 0xff)
                   vPutByte h $! ((w `shiftR` 8)  .&. 0xff)
                   vPutByte h $! ( w              .&. 0xff)
getWord32 h = do w1 <- vGetByte h
                 w2 <- vGetByte h
                 w3 <- vGetByte h
                 w4 <- vGetByte h
                 return $! ((w1 `shiftL` 24) .|.
                            (w2 `shiftL` 16) .|.
                            (w3 `shiftL`  8) .|.
                            (w4))

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] NewBinary on Ptrs

2005-12-28 Thread Bulat Ziganshin
Hello Pupeno,

Wednesday, December 28, 2005, 6:27:54 AM, you wrote:

P My question now is how to turn a Ptr into a BinHandle to use NewBinary on it,
P or is there another way to do it ?

No, you need to write this yourself. See the implementation of readBinMem;
it's the closest code to what you need.

AFAIR, Jeremy uses BE order for bytes, but LE order for bits inside
bytes. But I'm not sure - test his library by parsing your real data.


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] strange stack overflow with Data.Map

2005-12-28 Thread David Roundy
Hi all,

I've got a problem that I'm seeing using either Data.Map or Data.IntMap.

 module Main where
 import Data.List
 import qualified Data.IntMap as Map

 stats elems = foldl add_elem Map.empty elems
 add_elem m x = Map.insertWith (+) x 1 m

 main = print $ stats $ take 100 $ repeat 1

This program has a space leak and runs out of stack space.  I'm guessing
that I'm being bit here by an unnatural amount of laziness in
Map.insertWith, but I can't see how to fix this.  :(  I'm sure it's
something elementary...

I tried defining

add_elem m x = let m' = Map.insertWith (+) x 1 m
   Just num = Map.lookup x m'
   in seq num m'
to force the (+) to be evaluated strictly, but that didn't help.
-- 
David Roundy
http://www.darcs.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strange stack overflow with Data.Map

2005-12-28 Thread Udo Stenzel
David Roundy wrote:
  stats elems = foldl add_elem Map.empty elems
  add_elem m x = Map.insertWith (+) x 1 m
 [...]
 I tried defining
 
 add_elem m x = let m' = Map.insertWith (+) x 1 m
Just num = Map.lookup x m'
in seq num m'
 to force the (+) to be evaluated strictly, but that didn't help.

No, it doesn't, unless you also use foldl'.  Not tested, but been
there, seen it, bought the t-shirt.
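
Spelled out, the combined fix Udo hints at might look like the sketch
below: foldl' forces the map at every step, while the seq on the
looked-up value keeps the (+) thunks from piling up inside it. (The
larger repeat count in main is chosen here just to exercise the leak.)

import Data.List (foldl')
import qualified Data.IntMap as Map

stats :: [Int] -> Map.IntMap Int
stats = foldl' add_elem Map.empty

add_elem :: Map.IntMap Int -> Int -> Map.IntMap Int
add_elem m x = let m' = Map.insertWith (+) x 1 m
                   Just num = Map.lookup x m'
               in num `seq` m'

main :: IO ()
main = print $ stats $ take 1000000 $ repeat 1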


Udo.
-- 
The Force is what holds everything together.  It has its dark side, and
it has its light side.  It's sort of like cosmic duct tape.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe