At 11:22 AM 7/30/02 +0100, Nicholas Clark wrote:
> > Hmmm... do we _really_ want to be reading or writing gzipped files from
> > different threads? I think that would really be asking for trouble...
>Oh, reading or writing should "break" (in that your data gets corrupted by
>being randomly split into the two programs, or randomly spliced together
>on disk from the two programs), but it should not SEGV (or otherwise break
>your program with double free()s).
Ah... ok, glad we agree on that... ;-)
>I mean to find the simplest way to make cloning a new ithread behave in a
>way equivalent to a (real) fork() of a program that happens to be using zlib
>to read/write a compressed stream:
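For the record, the fork() behaviour you describe looks something like this
sketch (the filename is hypothetical; the child closes its copy of the stream
straight away and never touches it again):

    use PerlIO::gzip;

    open my $fh, '<:gzip', 'data.gz' or die "open: $!";
    defined(my $pid = fork) or die "fork: $!";
    if ($pid) {
        # parent keeps the stream and reads on undisturbed
        print while <$fh>;
        waitpid $pid, 0;
    }
    else {
        # child cleanly discards its copy of the z_stream and exits
        close $fh;
        exit 0;
    }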
I think the simplest way would be to have PerlIO::gzip kill itself off
altogether when it is being cloned. If you would like to have PerlIO::gzip
in the thread as well, you could simply C<use> it again.
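A sketch of that approach, again with a hypothetical filename (require rather
than C<use>, since C<use> would fire at compile time in the parent): the new
thread loads the module again and opens its own, independent stream:

    use threads;

    threads->create(sub {
        require PerlIO::gzip;   # re-load the layer in the new thread
        open my $fh, '<:gzip', 'data.gz' or die "open: $!";
        print while <$fh>;
    })->join;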
>... providing one side of the fork doesn't
>touch the stream further, then all is happy (all that side does is close and
>cleanly discard its z_stream structure at some point). If both sides of the
>fork access the stream, then their zlib structures in memory are not
>corrupted, but garbage data will result.
>
>In theory one could do a "lazy" copy-on-write-style clone to implement the
>above - whichever side accesses the stream first wins it, and the other
>half has it forcibly discarded (with a warning if it attempts to access it) -
>but that would mean deferring and storing the information available at CLONE
>time about how to copy into the new malloc() pool of the new ithread.
As I said, as a simple solution for now, I think a:

    sub CLONE { undef( %PerlIO::gzip:: ) }

or the equivalent in XS code would be sufficient to get around the current
problem of having to start your threads before you can open a PerlIO::gzip
layer...
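For reference, Perl calls a package's CLONE once in each newly created
thread, so the mechanism can be seen with a stand-in package:

    use threads;

    package PerlIO::gzip;               # stand-in for the real module
    sub CLONE { warn "new thread: discarding gzip state\n" }

    package main;
    threads->create(sub { 1 })->join;   # CLONE fires once in the new thread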
Liz