On 5 Dec 2009, 04:49 pm, [email protected] wrote:
On Sat, Dec 5, 2009 at 4:44 PM, Armin Rigo <[email protected]> wrote:
Hi,

On Fri, Dec 04, 2009 at 06:18:13PM +0100, Antonio Cuni wrote:
I agree that at this point in time we cannot or don't want to make
annotation/rtyping/backend parallelizable, but it should definitely be
possible to just pass the -j flag to 'make' in an automatic way.

Of course, that is full of open problems too. The main one is that each
gcc process consumes potentially a lot of RAM, so just passing "-j" is
not a great idea, as all gccs are started in parallel. It looks like
some obscure tweak is needed, like setting -j to a number that depends
not only on the number of CPUs (as is classically done) but also on the
total RAM of the system...


A bientot,

Armin.
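
For concreteness, a heuristic along those lines could look roughly like the
sketch below. This is only an illustration, not code from PyPy; the figure of
roughly 1 GB per gcc process is a made-up placeholder, and the RAM probing is
Linux-only.

    import multiprocessing

    def total_ram_bytes():
        # Linux-only: read MemTotal (in kB) from /proc/meminfo.
        with open('/proc/meminfo') as f:
            for line in f:
                if line.startswith('MemTotal:'):
                    return int(line.split()[1]) * 1024
        raise RuntimeError('could not determine total RAM')

    def guess_make_jobs(ram_per_gcc=1 << 30):
        # Cap -j by CPU count *and* by how many gcc processes fit in RAM.
        cpus = multiprocessing.cpu_count()
        fit_in_ram = max(1, total_ram_bytes() // ram_per_gcc)
        return min(cpus, fit_in_ram)

With that placeholder figure, a 4-core box with 2 GB of RAM would end up with
-j2 rather than -j4.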

I guess the original idea was to have a translation option that is
passed as the -j flag to make, so one can specify the number of jobs
they want instead of trying to guess it automatically.
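
Plumbing-wise, that would mostly amount to forwarding the configured number
into the make invocation, something like the sketch below (the option handling
is omitted and the function name is hypothetical; the point is only how -j
would be passed along):

    import subprocess

    def run_make(build_dir, jobs=None):
        cmd = ['make']
        if jobs:
            # jobs would come from a translation option rather than a guess.
            cmd.append('-j%d' % jobs)
        subprocess.check_call(cmd, cwd=build_dir)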

I poked around on this front a bit. I couldn't find any code in PyPy which invokes make. I did find pypy.translator.platform.distutils_platform.DistutilsPlatform._build, though. This seems to be where lists of C files are sent for compilation. Is that right?

I thought about how to make this parallel. The cheesy solution, of course, would be to start a few threads and have them do the compilation (which should parallelize well enough, since the actual work happens in a separate compiler process). This is complicated a bit by the chdir calls in the code, though. Also, distutils may not be thread-safe.
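
To sketch what I mean by the cheesy solution (this is not based on the actual
DistutilsPlatform code; compile_one here just shells out to gcc directly, and
absolute paths are used so no chdir is needed):

    import os
    import subprocess
    import threading

    def compile_one(cfile):
        # Stand-in for whatever the platform code does per C file.
        ofile = os.path.splitext(cfile)[0] + '.o'
        subprocess.check_call(['gcc', '-c', os.path.abspath(cfile),
                               '-o', os.path.abspath(ofile)])

    def compile_parallel(cfiles, nworkers=4):
        lock = threading.Lock()
        remaining = list(cfiles)

        def worker():
            while True:
                lock.acquire()
                try:
                    if not remaining:
                        return
                    cfile = remaining.pop()
                finally:
                    lock.release()
                compile_one(cfile)

        threads = [threading.Thread(target=worker) for _ in range(nworkers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

The threads spend nearly all their time blocked waiting on the compiler
subprocesses, so the GIL isn't really an obstacle here.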

I dunno if I'll think about this any further, but I thought I'd summarize what little I did figure out.

Jean-Paul

Cheers,
fijal
_______________________________________________
[email protected]
http://codespeak.net/mailman/listinfo/pypy-dev