Hello,
        Recently I've come across a GC/FFI-related problem. I've googled a bit, but didn't find anything specific.
        I'm running certain simulations which tend to allocate a lot of garbage. Since this caused the OOM killer to kill my simulation at 98% completion, I used the -M RTS switch to cap the heap, and all was well.
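        For reference, I run the binary with something like this (the program name and the limit here are just examples; -M is the RTS maximum-heap option):

            $ ./simulation +RTS -M1500m -RTS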
        But because my simulation results are fairly big, I needed to compress them with bz2 before sending them over the network. So I used bzlib.
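        The compression step looks roughly like this (a simplified sketch; the file names are placeholders, but compressing a lazy ByteString with Codec.Compression.BZip is what I do):

            import qualified Data.ByteString.Lazy as L
            import qualified Codec.Compression.BZip as BZip

            -- read the results lazily, compress them in-process, write them out
            compressResults :: FilePath -> FilePath -> IO ()
            compressResults src dst = do
                raw <- L.readFile src
                L.writeFile dst (BZip.compress raw)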
        Now things took an odd turn: the simulation started crashing with out-of-memory errors _after_ completing, during the bz2 compression. I'm fairly certain this is a GC/FFI issue, because increasing the max heap didn't help. Moving the bz2 compression into a separate process worked around the problem.
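        By a separate process I mean just shelling out to the bzip2 binary once the results are on disk, along these lines (a sketch; it assumes bzip2 is on the PATH):

            import System.Process (rawSystem)
            import System.Exit (ExitCode (..))

            -- bzip2 compresses the file in its own address space,
            -- so the Haskell heap limit no longer applies
            compressExternally :: FilePath -> IO ()
            compressExternally path = do
                code <- rawSystem "bzip2" [path]
                case code of
                    ExitSuccess   -> return ()
                    ExitFailure n -> ioError (userError ("bzip2 failed with " ++ show n))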
        What I think is happening is that after the simulation completes, almost all of the available memory (within the -M limit) is filled with garbage. Then I run bzlib, which tries to allocate more memory (through the FFI, outside the GC-managed heap?) to compress the results, and that allocation fails with an out-of-memory error instead of first triggering a garbage collection.
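        If it would help with diagnosis, one thing I can try is forcing a major collection by hand before the compression starts, using System.Mem.performGC (runSimulation is a placeholder for my actual code, and compressResults is the sketch above):

            import System.Mem (performGC)

            -- placeholder for the real simulation
            runSimulation :: IO ()
            runSimulation = return ()

            main :: IO ()
            main = do
                runSimulation     -- leaves the heap nearly full of garbage
                performGC         -- force a major GC before bzlib starts allocating
                compressResults "results.dat" "results.dat.bz2"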
        I'm writing to ask whether this is a known/fixed issue. I'm using GHC 6.10.3 and bzlib 0.5.0.0. If this is something new, I'll try to come up with a small program that demonstrates the problem.
--
Marcin Kosiba