Hi,

I use PyPy to run an application under gunicorn.
Gunicorn has a "preload" capability for its multiprocess workers: it loads the application in one interpreter/process and, instead of starting fresh interpreters/processes for the additional workers, simply forks them from the first one. The OS can then share memory pages between the processes, which makes the app use less memory and start faster.
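For reference, this is just a flag or a config setting in gunicorn (a minimal sketch; "myapp:app" below is a placeholder for the real application module):

```python
# gunicorn.conf.py -- minimal sketch; "myapp:app" is a placeholder module
preload_app = True   # import the application once, in the master process
workers = 4          # workers are then fork()ed from the already-loaded master
```

Equivalently, `gunicorn --preload -w 4 myapp:app` on the command line.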

This works nicely with PyPy too, and the memory savings are significant (all the more welcome given that PyPy uses considerably more memory than CPython).

The problem is that the PyPy process is still "cold" before the fork, so the JIT starts compiling code in each forked worker independently of the original process, which wastes a good deal of additional memory (and CPU).

It would be nice to make this JIT warm-up happen in the original process, so the operating system's COW semantics could cover the JIT-compiled code too.
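Concretely, I imagine something like driving the hot code paths in the master before the workers are forked, so the tracing JIT has already compiled them. A sketch for a WSGI app, assuming the warm-up runs at preload time (the `warm_up` name and the synthetic request are mine, not a gunicorn or PyPy API, and I don't know how much of the resulting JIT state would actually stay COW-shared after the fork):

```python
# Hypothetical warm-up for a WSGI app, run in the master before fork().
from io import BytesIO

def warm_up(app, iterations=2000):
    """Call the app with a synthetic request enough times that PyPy's
    tracing JIT compiles the hot loops in the master process."""
    environ = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": "/",
        "SERVER_NAME": "localhost",
        "SERVER_PORT": "80",
        "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(b""),
    }
    for _ in range(iterations):
        # Consume the response iterable so the full request path executes.
        list(app(environ, lambda status, headers: None))
```

This could be called from the app module itself or from a gunicorn server hook, as long as it runs before the workers fork.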

Is there a way to achieve this?

Thanks,

_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev