OK, so the trick right now is to always collect the exit code? That's
good enough for me.
I thought I did collect the exit codes, but it's possible that
something slipped; I'll keep an eye on it.
Thanks for the quick response.
Marcus Bergstrom
On Oct 23, 2008, at 5:32 PM, Nicolas Cannasse wrote:
Marcus Bergstrom wrote:
Hi list,
When I use neko.io.Process repeatedly (for example, using ImageMagick
to scale images), the system accumulates a lot of zombie processes.
They become so numerous that they make my system unresponsive, and
sometimes cause a complete crash. I have browsed around for a few days
hoping to find a reason for this, but with no success. I can't even
kill the processes, since they are zombies.
After I create a new process, I collect the pid, but that doesn't
seem to help much either. Am I missing
something here? Are there some rules to keep in mind?
This doesn't seem to be much of an issue on my server, but on my OS X
machine it certainly is.
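For illustration, a minimal sketch of the kind of loop described above
(the convert arguments and file names are made up): each iteration
spawns a process and records its pid, but nothing ever reaps the
children, so on POSIX systems they remain as zombies after they exit.

    class ScaleLoop {
        static function main() {
            var pids = new Array<Int>();
            for (f in ["a.png", "b.png", "c.png"]) {
                var p = new neko.io.Process("convert",
                    [f, "-resize", "800x600", "small_" + f]);
                // collecting the pid alone does not reap the child
                pids.push(p.getPid());
            }
            // no exitCode() call anywhere -> finished children stay zombies
        }
    }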
Yes, we looked at this recently.
You need to perform one .exitCode() haXe call (which calls the
process_exit primitive). This issues a C waitpid() call, which frees
the zombie process.
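A minimal sketch of that fix (file names and convert arguments are
illustrative only): calling exitCode() once per spawned process makes
the VM call waitpid() and reap the child.

    class ScaleLoopFixed {
        static function main() {
            for (f in ["a.png", "b.png", "c.png"]) {
                var p = new neko.io.Process("convert",
                    [f, "-resize", "800x600", "small_" + f]);
                // blocks until the child exits, then waitpid()s it,
                // so no zombie is left behind
                var code = p.exitCode();
                if (code != 0)
                    neko.Lib.println("convert failed on " + f +
                        " (exit code " + code + ")");
            }
        }
    }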
We found that:
- we can't call waitpid() immediately, since we want the process to
execute in parallel;
- we don't want to call waitpid() when the Process object is
garbage-collected, since if the child hasn't terminated yet, that
would block the entire Neko process at a random time.
Maybe calling waitpid() in process_close (and exposing it in the haXe
API as an explicit close() call) would help here.
Nicolas
--
Neko : One VM to run them all
(http://nekovm.org)