On Jan 28, 2007, at 8:51 AM, Andy Georges wrote:
it is nice to know that e.g., Data.ByteString performs as well as
C, but it would be even nicer to see that large, real-life apps can
reach that same performance.
What about using darcs as a benchmark? I heard people say it's slow.
The und
On Jan 26, 2007, at 2:40 PM, Arie Peterson wrote:
Using DrIFT would probably automate the deriving just as well, but
in my
particular situation TH support is easier to maintain than DrIFT
support.
May I ask why TH is easier to maintain than DrIFT?
I'm not familiar with DrIFT.
Why would I
On Jan 23, 2007, at 10:37 PM, Tim Docker wrote:
I'm not aware of any ongoing haskell work in finance,
I'm gearing up to do something but don't have anything to show yet.
I'd be happy to learn of any more, however. I don't think there's any
reasons right now why one ought to favour ocaml ove
Alexy,
This is a subject near and dear to my heart and I also dabble in Lisp
and Erlang.
Google for "Composing Financial Contracts", you will surely like the
paper. This is the paper that got me started with Haskell. I'm sure
you could do financial data mining in either Lisp, Haskell or O
Folks,
This is a raw version of cabalized Yampa + GADT for ghc 6.6.
darcs get http://wagerlabs.com/yampa
I would like to change the layout of the directory tree and I think
there should be a single Cabal file that builds the source, tests and
examples. I'm not sure if this is possible, thou
Folks,
I have a version of Yampa with Henrik Nilsson's GADT optimizations
that I cleaned up for ghc 6.6 and cabalized. Would it be possible to
set it up at darcs.haskell.org and if so how should I go about it?
Thanks, Joel
--
http://wagerlabs.com/
On Jan 2, 2007, at 1:48 AM, Bulat Ziganshin wrote:
btw, may be the following can help you:
http://www.inf.ufes.br/~ffranzosi/BSPHlib-0.1.tar.gz
Thanks Bulat. I don't know what this is, though, and the link is broken.
--
http://wagerlabs.com/
On Jan 2, 2007, at 8:33 AM, Simon Peyton-Jones wrote:
GdH is maintained and distributed by Phil Trinder and his
colleagues at Heriot Watt. I think it's still alive, but it's
based on a much earlier version of GHC.
I wonder how much work it would be to integrate it. Also, their
servers a
Is anyone using GdH?
Can someone tell me why it's not part of the GHC distribution?
It seems that GdH is not being developed anymore and I think it's a
real pity!
Thanks, Joel
--
http://wagerlabs.com/
Are cool kids supposed to put the comma in front like this?
, foo
, bar
, baz
Is this for historical or other reasons? Emacs formats Haskell
code well enough regardless.
Thanks, Joel
--
http://wagerlabs.com/
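For what it's worth, a sketch of the leading-comma layout in context; one common justification (not stated in the question) is that each element owns its separator, so appending an element touches exactly one line in a diff:

```haskell
-- Leading commas: adding a field or list element changes a single line.
data Config = Config
    { host :: String
    , port :: Int
    , name :: String
    }

xs :: [Int]
xs = [ 1
     , 2
     , 3
     ]
```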
I don't see how this can work for arbitrary types without
auto-generating the serialization code. Once the code is generated you can
just store the type dictionary at the beginning of the file and use
it to deserialize.
I'm not sure this can be done on top of Binary since the type tag
wil
Is anyone using Haskell for heavy numerical computations? Could you
share your experience?
My app will be mostly about running computations over huge amounts of
stock data (time series) so I'm thinking I might be better off with
OCaml.
Thanks, Joel
--
http://wagerlabs.com/
On Jul 5, 2006, at 11:56 PM, Ashley Yakeley wrote:
Trading financial instruments? You might be interested in the
SPJ/Eber/Seward paper "Composing contracts":
http://research.microsoft.com/~simonpj/Papers/financial-contracts/contracts-icfp.htm
Yes, that paper and an hour or so on the phone
On Jul 5, 2006, at 3:07 PM, Niklas Broberg wrote:
Lava: http://www.cs.chalmers.se/~koen/Lava/
Excellent example, thank you Niklas!
Are you using QuickCheck for verification?
Thanks, Joel
--
http://wagerlabs.com/
Folks,
Do you have examples of using Haskell as a DSL in an environment NOT
targeted at people who know it already?
I'm thinking of using Haskell to build my Mac trading app but I'm
very concerned about dumping Haskell on the trading systems
developers. It seems that using, say, Ruby as t
On Jul 4, 2006, at 6:29 PM, Bulat Ziganshin wrote:
isn't it better to use sql database?
Not necessarily but it's better to start simple. I'll try SQLite first.
--
http://wagerlabs.com/
And a related question... Would there be an advantage in using lazy
byte strings or Bulat's streams library over HDF5? There's a good
performance review [1] of PyTables (thin wrapper on top of HDF5) vs.
SQLite that I just read and it made me wonder.
I'm looking to model the trading process
Does anyone have bindings for HDF5 [1]?
[1] http://hdf.ncsa.uiuc.edu/whatishdf5.html
Thanks, Joel
--
http://wagerlabs.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Is there anyone trading with Haskell or interested in doing it?
Thanks, Joel
--
http://wagerlabs.com/
I should have opened my eyes real wide. This does the trick and makes
TH look for HOC.Arguments.ObjCArgument which is proper.
thModulePrefix mod id = "HOC." ++ mod ++ "." ++ id
On Jul 1, 2006, at 10:33 PM, Joel Reymont wrote:
Folks,
I'm getting this error:
./HO
Folks,
I'm getting this error:
./HOC/StdArgumentTypes.hs:1:0:
Not in scope: type constructor or class
`HOC.Arguments:ObjCArgument'
But if you look through the output below you will see that
HOC.Arguments is being loaded by ghc. I assume that's what the
skipping of HOC.Arguments means
Please disregard the question. There's an opportunity to statically
link the library at an earlier build step. There seems to be no way
to link in static libraries when using ghc --make as the dynamic
linker (ghci) is being used.
On Jul 1, 2006, at 3:08 PM, Joel Reymont wrote:
Folks,
I'm trying to compile the Haskell Objective-C Binding (HOC) and it
needs /usr/lib/libSystemStubs.a to find the missing _fprintf$LDBLStub
symbol.
I tried the following in the make file
FOUNDATION_LIBS=-static -lSystemStubs -l dynamic -framework Foundation
but ghc still tries to link
I think the issue wasn't using functional programming for large image
processing, it was using Haskell. OCaml is notoriously fast and
strict. Haskell/GHC is... lazy.
Everyone knows that laziness is supposed to be a virtue. In practice,
though, I'm one of the people who either can't wrap the
Where's the solution and what is the repmin problem?
On Jun 19, 2006, at 5:21 PM, Jerzy Karczmarczuk wrote:
Such tricks become your second nature, when you take the solution
(lazy) of the "repmin" problem by Richard Bird, you put it under your
pillow, and sleep for one week with your head close
Has anyone explored destructuring HTML with Parsec? Any other ideas
on how to best do this?
I'm looking to scrape bits of information from more or less
unstructured HTML pages. I'm looking to structure, tag and classify
the content afterwards.
I think that developing HTML scrapers require
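A toy sketch of the Parsec approach, assuming the `parsec` package; it pulls out tag names permissively rather than truly parsing HTML, since real-world pages rarely fit a strict grammar:

```haskell
import Text.ParserCombinators.Parsec

-- Extract tag names (opening and closing) while skipping text nodes.
tagName :: Parser String
tagName = do
  _    <- char '<'
  _    <- optional (char '/')          -- treat </b> like <b>
  name <- many1 (letter <|> digit)
  _    <- many (noneOf ">")            -- ignore attributes
  _    <- char '>'
  return name

tags :: Parser [String]
tags = many (try (skipMany (noneOf "<") >> tagName))
```

For serious scraping, a permissive tokenizer that never fails on malformed markup is usually a better fit than a grammar-based parser.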
On Jun 15, 2006, at 6:18 PM, Sebastian Sylvan wrote:
This may not be very helpful, but I would say that an Image is neither
a list nor an array - it's a function! :-)
How exactly do you manipulate the bits and bytes of a function?
--
http://wagerlabs.com/
On Jun 7, 2006, at 10:20 PM, S. Alexander Jacobson wrote:
Does this make sense?
Makes sense but almost sounds too good. What package would you
recommend I use with HAppS to merge HTML templates with application
data?
Thanks, Joel
--
http://wagerlabs.com/
Alex,
On Jun 7, 2006, at 9:08 PM, S. Alexander Jacobson wrote:
I am using it on http://pass.net which is live in production but
not yet high volume. I hope to have some other projects live soon,
but they are currently works in progress.
What type of machine are you running this on?
What
Folks,
Is anyone using HAppS in production right now?
It seems to be the most advanced Haskell web development platform
right now but I would like to hear about others as well. Production
(heavy) use is what I'm looking for.
Thanks, Joel
--
http://wagerlabs.com/
Thank you Bjorn!
I'll take a look but it sounds like exactly what I'm looking for!
On May 30, 2006, at 2:35 AM, Bjorn Bringert wrote:
Hi Joel,
the attached example is a simple RPC library. It uses show and read
for serialization, and some type class tricks to allow functions
with differen
On May 25, 2006, at 8:34 PM, Robert Dockins wrote:
If you want to deliver source code to be executed elsewhere, you
can use hs-plugins or the GHC API (in GHC HEAD branch).
hs-plugins is too heavy since it runs ghc. I don't need to deliver
any type of source code, just a function call and i
On May 25, 2006, at 7:25 PM, Jason Dagit wrote:
I will say that you should add a macro or
higher-order function (depending on Lisp vs. Haskell) that is something
like "(with-client (c args-list) body)", that way you can simplify
the creation/cleanup of clients. Same idea as with-open-file. You
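The Haskell analogue of Lisp's with-open-file pattern is `Control.Exception.bracket`; a minimal sketch, with `Client`, `connect` and `disconnect` as hypothetical stand-ins for the real connection code:

```haskell
import Control.Exception (bracket)

-- Hypothetical client type and setup/teardown, for illustration only.
data Client = Client { clientName :: String }

connect :: String -> IO Client
connect name = return (Client name)   -- stand-in for real connection setup

disconnect :: Client -> IO ()
disconnect _ = return ()              -- stand-in for real cleanup

-- The with-client idiom: bracket guarantees cleanup even on exceptions.
withClient :: String -> (Client -> IO a) -> IO a
withClient name = bracket (connect name) disconnect
```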
Folks,
I'm curious about how the following bit of Lisp code would translate
to Haskell. This is my implementation of Lisp RPC and it basically
sends strings around, printed "readably" and read on the other end by
the Lisp reader. If I have a list '(1 2) it prints as "(1 2)" and
becomes '(
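The closest Haskell equivalent of Lisp's readable printing is deriving `Show` and `Read` and round-tripping values through `String`s; a sketch with a hypothetical `Message` payload type:

```haskell
-- Derived Show/Read play the role of Lisp's print/read pair.
data Message = Bet Int | Fold | Chat String
  deriving (Show, Read, Eq)

encode :: Message -> String
encode = show

decode :: String -> Message
decode = read
```

Like the Lisp version, this trades compactness for a human-readable wire format; `read` also fails with an exception on malformed input, so a real RPC layer would want `reads` and explicit error handling.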
Folks,
I'm looking to use the following code to process a multi-GB text
file. I am using ByteStrings but there was a discussion today on IRC
about tail recursion, laziness and accumulators that made me wonder.
Is fixLines below lazy enough? Can it be made lazier?
Thanks, Joel
---
mod
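The snippet is cut off above, so here is a guess at the shape of a lazy version: with lazy ByteStrings, `lines`/`unlines` stream, so memory stays constant on multi-GB input as long as nothing forces the whole result (the per-line `fixLine` below is a placeholder for whatever fix is needed):

```haskell
import qualified Data.ByteString.Lazy.Char8 as L

-- Placeholder per-line transform; here it just strips carriage returns.
fixLine :: L.ByteString -> L.ByteString
fixLine = L.filter (/= '\r')

-- Lazy end to end: each line is produced, fixed and consumed on demand.
fixLines :: L.ByteString -> L.ByteString
fixLines = L.unlines . map fixLine . L.lines

main :: IO ()
main = L.interact fixLines
```

The laziness is only preserved if the caller consumes the output incrementally (as `interact` does); accumulating it strictly would defeat the point.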
Folks,
How would you go about implementing a RDF triple store in Haskell
capable of handling a billion triples? I'm looking for ideas.
For example, you could use tries to store all strings but you could
easily get to the point where you can't load them all into, say, 1Gb
of memory which i
Howdy folks!
Does anyone have sample code for independent component analysis
(ICA), singular value decomposition (SVD) aka spectral graph
partitioning, or semidiscrete decomposition (SDD)?
I'm trying to learn this rocket science and apply it to RDF graph
analysis.
Thanks, Joel
I compiled a simple one-liner: main = print "Blah".
This is the GC report:
5,620 bytes allocated in the heap
0 bytes copied during GC
0 collections in generation 0 ( 0.00s)
0 collections in generation 1 ( 0.00s)
1 Mb total memory in use
Where di
Thanks Bulat! I'm happy with Erlang for the time being but I'll
consider your library for my next IO-intensive Haskell project.
On Jan 7, 2006, at 6:08 PM, Bulat Ziganshin wrote:
Joel, if you are interested in switching to my library - write me. i
have ideas about supporting your 150 records
On Jan 5, 2006, at 7:50 PM, Jason Dagit wrote:
I'm pretty sure I was on OSX when I tried it out a couple weeks
ago. There was at least one bot. I was able to kill it.
Game play was buggy and awkward, bots didn't seem to have any
intelligence.
Well, I would like to make extremely intelli
Folks,
Has anyone tried to run Frag on Mac OSX?
Also, since I can't get it to run (glDrawBuffer crash), can someone
tell me if it includes monsters? It looks to me like it's just a
player with a gun running around.
Thanks, Joel
--
http://wagerlabs.com/
Could you give us a bit more detail on this?
How does using handles involve large memory/CPU pressure?
On Jan 5, 2006, at 10:01 AM, Bulat Ziganshin wrote:
i also recommend you to try FD from my Binary package instead of
Handles because using 1000 Handles may involve a large memory/cpu
pressu
My apologies if this has been described somewhere but what is MUT time?
Also, isn't 30% GC a bit high? This is something that totally
surprised me when I first saw it as my program was spending 60-70% on
GC.
Is there a good low % number that should be used as a benchmark?
Thanks, J
Bulat,
On Jan 4, 2006, at 7:57 PM, Bulat Ziganshin wrote:
3) i also placed lock around `unstuff` call to decrease GC times
This sort of invalidates the test. We have already proven that it
works much better when you do this but it just pushes the delays
upstream.
I will profile your ver
This is my latest version. Based on Don's tweaks.
{-# INLINE sequ #-}
sequ :: (b -> a) -> PU a -> (a -> PU b) -> PU b
sequ a b c | a `seq` b `seq` c `seq` False = undefined
sequ f pa k = PU fn1 fn2 fn3
where
{-# INLINE fn1 #-}
fn1 ptr b =
case f b of
a -> ca
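The definition above is cut off mid-case; for reference, here is the list-based `sequ` from Kennedy's "Pickler Combinators" paper, which the `Ptr Word8` version specialises (the `String` state and `char` base pickler below are the paper's simple form, not Joel's binary one):

```haskell
-- List-based pickler combinators after Kennedy's paper.
data PU a = PU { appP :: (a, String) -> String
               , appU :: String -> (a, String) }

lift :: a -> PU a
lift x = PU { appP = snd, appU = \s -> (x, s) }

-- One spec drives both directions: f projects the piece to pickle,
-- k builds the pickler for the rest from the already-unpickled value.
sequ :: (b -> a) -> PU a -> (a -> PU b) -> PU b
sequ f pa k = PU
  { appP = \(b, s) -> let a = f b in appP pa (a, appP (k a) (b, s))
  , appU = \s -> let (a, s') = appU pa s in appU (k a) s' }

char :: PU Char
char = PU { appP = \(c, s) -> c : s
          , appU = \(c:s) -> (c, s) }

pair :: PU a -> PU b -> PU (a, b)
pair pa pb = sequ fst pa (\a -> sequ snd pb (\b -> lift (a, b)))
```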
Yes, that _is_ obvious but then puts the burden on the programmer to
define the getters. It also misses the setters issue entirely.
Each field can definitely be made into a class and records can be
composed dynamically, HList-style. I think HList is _the_ facility
for doing this.
How do y
Sure. Type classes, as Ketil Malde has suggested.
On Jan 4, 2006, at 2:09 AM, Dylan Thurston wrote:
Looking at this code, I wonder if there are better ways to express
what you really want using static typing. To wit, with records, you
give an example
data Pot = Pot
{
pProfit :: !Wor
The timeleak code is just a repro case. In real life I'm reading from
sockets as opposed to a file.
All I'm trying to do is run poker bots. They talk to the server and
play poker. Of course some events are more important than others, a
request to make a bet is more important than, say, a ta
Simon,
I don't think CPU usage is the issue. An individual thread will take
a fraction of a second to deserialize a large packet. The issue is
that, as you pointed out, you can get alerts even with 50 threads.
Those fractions of a second add up in a certain way that's
detrimental to the p
On Jan 3, 2006, at 2:30 PM, Simon Marlow wrote:
The default context switch interval in GHC is 0.02 seconds,
measured in CPU time by default. GHC's scheduler is strictly
round-robin, so therefore with 100 threads in the system it can be 2
seconds between a thread being descheduled and schedul
I asked the Erlang guys why I can log to a single process in Erlang
without any problems. The scheduler could well be round-robin
but since the message queue is hard-wired to each Erlang process
they found an elegant way out.
--
There is a small fix in the scheduler for the standard
producer/cons
It seems like the real difference between TChan and the Ch code below
is that TChan is, basically, [TVar a] whereas Ch is MVar [a], plus
the order is guaranteed for a TChan.
Now why would it matter so much speed-wise?
This is the CVS code. newTChanIO is exported but undocumented in GHC
6.4
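A sketch of the Ch shape described above, i.e. the whole pending list behind a single MVar (names are mine, not the code under discussion); one plausible reason for the speed difference is that a write here takes one lock, whereas a TChan write is an STM transaction over a linked list of TVars:

```haskell
import Control.Concurrent.MVar

-- All pending items live in one MVar, newest first.
newtype Ch a = Ch (MVar [a])

newCh :: IO (Ch a)
newCh = fmap Ch (newMVar [])

-- O(1) prepend under a single lock.
writeCh :: Ch a -> a -> IO ()
writeCh (Ch m) x = modifyMVar_ m (return . (x:))

-- Drain everything at once; reversing restores FIFO order.
drainCh :: Ch a -> IO [a]
drainCh (Ch m) = modifyMVar m (\xs -> return ([], reverse xs))
```

Unlike a TChan, this only guarantees order if the reader drains in batches; per-item `readCh` would need the reverse on every read or a two-list queue.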
On Jan 2, 2006, at 9:20 PM, Chris Kuklewicz wrote:
This makes me ponder one of the things that Joel was trying to do:
efficiently pass data to a logging thread. It may be that a custom
channel would be helpful for that as well.
I have not taken the time to analyze the Chameneos code but ne
I had this exact same issue when I swapped e and e1 by mistake.
Does your code work right without the type signature or does it just
compile?
On Jan 2, 2006, at 8:59 AM, Dominic Steinitz wrote:
Codec/ASN1/BER.hs:66:0:
Quantified type variable `e1' is unified with another
quantified ty
Simon,
Please see this post for an extended reply:
http://wagerlabs.com/articles/2006/01/01/haskell-vs-erlang-reloaded
Thanks, Joel
On Dec 29, 2005, at 8:22 AM, Simon Peyton-Jones wrote:
| Using Haskell for this networking app forced me to focus on all the
| issues _but_ the business
On Dec 29, 2005, at 1:23 PM, Tomasz Zielonka wrote:
- a deficiency of GHC's thread scheduler - giving too much time to one
thread steals it from others (Simons, don't get angry at me - I am
probably wrong here ;-)
I would finger the scheduler, at least partially. There's no magic in
this w
Adrian,
There's no mystery here.
Threads take a while to unpickle the large server info packet when
they gang up on it all together. By adding the MVar you are basically
installing a door in front of the packet and only letting one thread
come through.
The end result is that you are pushi
Sven,
The logs are at http://wagerlabs.com/logs.tgz. I have 6.4.1 installed
from darwinports into /opt/local.
Thanks, Joel
On Dec 28, 2005, at 4:14 PM, Sven Panne wrote:
Am Mittwoch, 28. Dezember 2005 16:24 schrieb Joel Reymont:
I think you should post to cvs-ghc. I was able to
Mike,
I think you should post to cvs-ghc. I was able to get things to
compile (almost) on 10.4.3 but had to configure with --disable-alut
--disable-openal, etc.
Joel
On Dec 28, 2005, at 3:15 PM, Michael Benfield wrote:
I see here:
http://www.haskell.org/HOpenGL/newAPI/
OpenAL bi
I would compare Haskell to visiting the chiropractor. You will walk
straighter, stand taller and your life will never be the same :D.
On Dec 28, 2005, at 1:56 AM, Peter Simons wrote:
you'll find
that knowing and understanding Haskell will change the way you
design software -- regardless of th
Amen! Haskell has forever realigned my mind-gears and I'm observing
positive results as we speak :-).
On Dec 28, 2005, at 1:56 AM, Peter Simons wrote:
Even if you ultimately
decide to write your application in another language, you'll find
that knowing and understanding Haskell will change th
On Dec 28, 2005, at 1:05 PM, Sebastian Sylvan wrote:
How does this work if you remove the file-reading? I mean just putting
the file on a small TCP/IP file server with some simulated latency and
bandwidth limitation, and then connecting to that in each thread?
This is probably the way to go but
On Dec 28, 2005, at 11:40 AM, Lennart Augustsson wrote:
Why on earth do you want each tread to open the file and unpickle?
Why not unpickle once and reuse it?
Or, if this is just a test and in the future they will all read
from different files (or sockets), then maybe you are hitting
on a diffe
On Dec 27, 2005, at 10:30 PM, Tomasz Zielonka wrote:
Let's see if I understand correctly. There are 17605 messages in
trace.dat. On my hardware the average message unpickling time is
0.0002s
when you only have a single thread. So, it indeed seems that with 1000
threads it should be possible t
I will have to leave this for a while. I apologize but I'm
more than a bit frustrated at the moment and it's not fair
of me to take it out on everybody else.
If someone is willing to take this further I will appreciate it,
otherwise I'll get to it in the coming weeks. Besides knowing
how to do it
That's great to hear! I will continue once I have a chance to discuss
it with the gurus and optimize it further. At the same time, I would
challenge everyone with a fast IO library to plug it into the
timeleak code, run it under a profiler and post the results (report +
any alarms).
The t
On Dec 27, 2005, at 4:52 PM, Bulat Ziganshin wrote:
spending several weeks to random
optimization is like buying a gold computer case trying to speed up
the game :)
I did not spend several weeks on optimization. I went through about
25 iterations with the timeleak code and the profiler. I w
Bulat,
On Dec 27, 2005, at 1:58 PM, Bulat Ziganshin wrote:
no problem. my library handles about 10-15mb/s, and i think that
speed can
be doubled by using unboxed ints
Would you like to present your version of the timeleak code plus
statistics from a test run?
This will demonstrate the te
We'll see, Erlang is built for this type of stuff. I might have
results from the "timeleak" test today and will probably have first
networking results tomorrow.
But I wish I could achieve even a fraction of that with Haskell.
On Dec 27, 2005, at 9:51 AM, Branimir Maksimovic wrote:
I have C
Tomasz,
Try http://wagerlabs.com/timeleak.tgz. See the "Killer pickler
combinators" thread as well.
My desired goal is to have 4k bots (threads?) running at the same
time. At, say, 1k/s per bot I figure something like 4Mb/s round-trip.
Each bot cannot spend more than a couple of seconds o
This is what I spent the past 3 months on. Pickling code that
interoperates with a C++ server that sends things to me
little-endian. And sends other weird data like Unicode strings that
are zero-terminated.
SerTH does not handle things like that since it cannot divine the
wire format for
If anyone is interested, I posted the Erlang version of the pickler
combinators at http://wagerlabs.com/erlang/pickle.erl
Original paper at http://research.microsoft.com/~akenn/fun/picklercombinators.pdf
Notice that I did away with the "sequ" while preserving "wrap" and
friends. Erlang doe
On Dec 25, 2005, at 1:14 PM, Tomasz Zielonka wrote:
I think your work will be very important and valuable for the
community.
You've shown where Haskell could be better, and I believe it will catch
up eventually, either by improvements in GHC, in libraries or simply
in documentation.
Thank yo
On Dec 25, 2005, at 10:13 AM, Bulat Ziganshin wrote:
Hello Joel,
[...]
so i think that your problems are due to bad design decisions caused by
lack of experience. two weeks ago when you skipped my suggestions
about improving this design and answered that you will use "systematic
approach", i for
On Dec 24, 2005, at 6:02 PM, Paul Moore wrote:
One of the interesting points that this illustrates (to me) is that
the "obvious" approach in Haskell can be seriously non-optimal in
terms of performance. Add to this the fact that tuning functional
programs is a non-trivial art, and it becomes qu
Hal,
What is the syntactic sugar that you are lacking with arrays?
Also, do loops matter if they can be emulated with recursion?
Thanks, Joel
On Dec 24, 2005, at 12:46 AM, Hal Daume III wrote:
That said, I use O'Caml for all of my non-Perl coding. Why?
Because I
need lots of arra
Bulat,
I appreciate your offer of help but lets focus on the topic ;-).
Thanks, Joel
On Dec 23, 2005, at 5:42 PM, Bulat Ziganshin wrote:
Hello Joel,
Friday, December 23, 2005, 6:29:28 PM, you wrote:
JR> It's an assumption that you are making.
well, i may be wrong, but you also may b
Bulat,
On Dec 23, 2005, at 11:58 AM, Bulat Ziganshin wrote:
Hello Joel,
Thursday, December 22, 2005, 7:27:17 PM, you wrote:
JR> #ifdef BIG_ENDIAN
JR> swap16 v = (v `shiftR` 8) .|. (v `shiftL` 8)
JR> #else
JR> swap16 v = v
JR> #endif
afaik, your code anyway will not work on non-x86 architectu
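One way to sidestep the #ifdef (my reading of Bulat's point, since the thread is cut off): swap unconditionally against a fixed wire order, or better, assemble values byte by byte so host endianness never enters into it. A sketch:

```haskell
import Data.Bits (shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word16, Word8)

-- Unconditional 16-bit byte swap: no CPP, same result on any host.
swap16 :: Word16 -> Word16
swap16 v = ((v .&. 0x00ff) `shiftL` 8) .|. ((v `shiftR` 8) .&. 0x00ff)

-- Building from individual bytes fixes the wire order outright
-- and is portable to any architecture.
word16LE :: Word8 -> Word8 -> Word16
word16LE lo hi = fromIntegral lo .|. (fromIntegral hi `shiftL` 8)
```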
On Dec 23, 2005, at 1:06 PM, Bulat Ziganshin wrote:
hm... you are wasting much time unsystematically "optimizing"
randomly selected parts of the program.
It's an assumption that you are making.
what you want to buy with continuations and, more important, why you
need it?
To try to implement "thre
Folks,
My current setup involves threads that block on a MVar/TMVar until
they read an event from it and then proceed. I would like to convert
these threads into continuations whereby a continuation is saved when
an event is requested and I can call that continuation when an event
arrives.
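One way to sketch this (hypothetical names throughout): park each bot's continuation in a table keyed by bot id, and have the event loop look it up and run it, in place of the blocking takeMVar:

```haskell
import Data.IORef
import qualified Data.Map as M

type BotId = Int
type Event = String
type Kont  = Event -> IO ()   -- "the rest of the computation"

newtype Dispatcher = Dispatcher (IORef (M.Map BotId Kont))

newDispatcher :: IO Dispatcher
newDispatcher = fmap Dispatcher (newIORef M.empty)

-- Where a thread used to block with takeMVar, park the continuation.
await :: Dispatcher -> BotId -> Kont -> IO ()
await (Dispatcher r) bot k = modifyIORef r (M.insert bot k)

-- Where the network loop used to putMVar, resume the continuation.
dispatch :: Dispatcher -> BotId -> Event -> IO ()
dispatch (Dispatcher r) bot ev = do
  m <- readIORef r
  case M.lookup bot m of
    Just k  -> modifyIORef r (M.delete bot) >> k ev
    Nothing -> return ()
```

This inverts control: instead of 4k blocked threads there is one table and one event loop, at the cost of writing the bot logic in continuation-passing style (or under a ContT-style monad that hides it).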
Is this something that can be compiled with GHC right now? I noticed -
fgenerics but I think it does something else entirely.
On Dec 23, 2005, at 8:52 AM, Ralf Hinze wrote:
It's Generic Haskell source code, see
http://www.generic-haskell.org/
Generic Haskell is an extension of Haskel
Folks,
I have been looking at the code for the "Arrows for invertible
programming" paper (http://www.cs.ru.nl/A.vanWeelden/bi-arrows/) and
I have a question about syntax. ghci surely does not like it.
What does this mean and how do I make it compile?
mapl{|a, b|arr|} :: (mapl{|a, b|arr|},
How expensive is a GHC function call?
How do I make a decision whether to specialize or to inline?
On Dec 22, 2005, at 10:06 PM, Simon Peyton-Jones wrote:
(Brief because I'm at home.)
Use INLINE (alone) to include swap at its call sites. (That gives
perfect per-call-site specialisation.)
Us
Jeremy,
This is a very nice library you've got but... It does not answer my
question re: arrows and it still requires you to specify pickling and
unpickling separately. I can have a single spec right now and would
like to keep that.
Thanks, Joel
On Dec 22, 2005, at 8:25 PM, Jerem
Folks,
These functions together can be taking 15-20% of my processing time.
I'm trying to optimize the hell out of them. Would it be less
expensive to convert each of them into a foreign function?
Is there a way to optimize them further within Haskell? appU_wstr is
particularly expensive
Folks,
I have been trying to improve my byte swapping routines as part of my
effort to speed up serialization. I then tried to look at the core
output from GHC to see what it was converting my code into. Brandon
(skew on #haskell) helped me code a TH version but then I went with a
regular
Folks,
I'm trying to monadify the pickler code. sequ below positively looks
like >>= but you can't really join both pickle and unpickle into a
single monad. I would like to keep the ops together, though, as this
allows me a single specification for both pickling and unpickling.
Cale sugge
I don't want any kind of locking, true. I need all bots to respond in
time otherwise the poker server will sit them out. Eliminating the
timeout on pickling does not eliminate the timeout overall, it just
passes it to a different place.
One thread will go through serialization quickly but i
On Dec 20, 2005, at 1:38 PM, Bulat Ziganshin wrote:
can you say what it exactly means? we are not mastered in your code.
some common explanation like "my program takes 6 seconds to
deserialize 50kb of data on Pentium4/3ghz" will be more understandable
That's why I posted the code at http://wa
The other thing worth noting is that by inserting a lock with a
thread delay we are fooling ourselves. While the individual pickling
time goes down, the threads are slowed down overall. Assuming that an
external source was waiting for the unpickled packet _that_ source
would get a timeout!
On Dec 21, 2005, at 2:56 PM, Cale Gibbard wrote:
By the way, when I was doing threadDelays, I meant:
trace s = withMVar lock $ const $ threadDelay 20
In case you didn't try that.
I'm trying 5k threads and I still get delays at times. Using thread
delay of 1, though.
./unstuff trace.dat
un
case you didn't try that.
- Cale
On 21/12/05, Joel Reymont <[EMAIL PROTECTED]> wrote:
I'm not sure I buy this. Again, this helps:
{-# NOINLINE lock #-}
lock :: MVar ()
lock = unsafePerformIO $ newMVar ()
trace s = withMVar lock $ const $ putStrLn s
and then in read_
cmd
I'm not sure I buy this. Again, this helps:
{-# NOINLINE lock #-}
lock :: MVar ()
lock = unsafePerformIO $ newMVar ()
trace s = withMVar lock $ const $ putStrLn s
and then in read_
cmd <- read h trace
trace is called _after_ all the timings in read so it should not
affect the timings
This should not be the case. The amount of work is the same
regardless and the issues seem to be with _timing_. Passing in trace
that writes to the screen with a lock sort of slows things down.
I encourage you to actually build the code and see it for yourself.
Thanks, Joel
On Dec 2
On Dec 21, 2005, at 7:45 AM, Bulat Ziganshin wrote:
1) i think that this method of deserialization may be just
inefficient. what is plain deserialization time for this 50k?
No idea. I know it's inefficient but this is not the issue. The issue
is that with some strange tweaks it runs fast.
I still get timeouts with 5k threads. Not as often as with 1k before,
though.
On Dec 21, 2005, at 3:35 AM, Donald Bruce Stewart wrote:
It looks like with the 1000s of threads that get run, the problem is
just getting enough cpu time for each thread. All the solutions that
appear to work invol
About the only universal solution seems to be to pace the threads by
passing trace to read. Even then I got 1 alert. Now, can someone
explain why the lock eliminates the time leak?
On Dec 21, 2005, at 2:41 AM, Cale Gibbard wrote:
Using forkOS with -threaded seems to work, at least on linux and
o
Does not help me on Mac OSX Tiger.
./a.out trace.dat
a.out: user error (ORANGE ALERT: 0s, 4s, SrvServerInfo, ix1: 6, size:
49722)
a.out: internal error: scavenge_stack: weird activation record found
on stack: 9
Please report this as a bug to glasgow-haskell-bugs@haskell.org,
or http:/
The original paper is at http://research.microsoft.com/~akenn/fun/picklercombinators.pdf
My adaptation is at http://wagerlabs.com/timeleak.tgz. This is a full
repro case, data included.
The issue is that with 1000 threads my serialization is taking a few
seconds.
Inserting a delay or p
pair    Script.Pickle    0.9    1.2
post_   Script.Engine    0.4    2.8
On Dec 20, 2005, at 11:01 AM, Joel Reymont wrote:
Folks,
It looks like I successfully squashed my time leaks and moving my
serialization to Ptr Word8 got me as