Just to be clear, this is executing the **same** workload in parallel, **not** parallelizing factorial itself. E.g., the 8-CPU run calculates 50,000! eight separate times; it does not calculate 50,000! once by spreading the work across 8 CPUs. This measurement still shows parallel work, but now I'm really curious to see the first benchmark that measures how much faster a single calculation gets thanks to sub-interpreters. :)
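To make the distinction concrete, here is a minimal sketch of the "replicated workload" pattern using multiprocessing (the pool size and the 50,000! figure match the discussion, but the harness itself is illustrative, not the original benchmark code):

```python
import math
import time
from multiprocessing import Pool

def workload(_):
    # Each worker computes the full 50,000! on its own; the work is
    # replicated across workers, not divided among them.
    return math.factorial(50_000)

def run(workers: int = 8) -> float:
    """Run the identical factorial workload once per worker and
    return the wall-clock time for all copies to finish."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(workload, range(workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"8 parallel copies of 50,000! took {run(8):.3f}s")
```

A sub-interpreter version of the same benchmark would swap the process pool for one interpreter per worker; the key point is that either way, total work is N independent copies, so the timing measures parallel throughput rather than speedup of one calculation.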
I also realize this is not optimized in any way, so being this close to multiprocessing already is very encouraging!

_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/2PJRTWADEURRQ6SMI6PTD26YRFFH47RZ/
Code of Conduct: http://python.org/psf/codeofconduct/