On Monday, 14 August 2017 23:13:27 CEST, Matthew Dillon wrote:
commit b5daedd44b0cb06b9a9df40d3288467b31d0b5e1
Author: Matthew Dillon <dil...@apollo.backplane.com>
Date:   Mon Aug 14 14:03:57 2017 -0700

    test - Add various 900,000 process tests

    Add tests involving a large number of user processes. These tests default
    to creating 900,000 user processes and require a machine with at least
    128GB of ram.  The machine must be booted with kern.maxproc=4000000 in
    /boot/loader.conf.

I wonder why we need 128GB of ram for 900,000 processes? That is only ~142 KB
per process. Does the kernel have to preallocate that much to support such a
high number of processes?
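(Just a back-of-the-envelope check of that per-process figure, sketched in Python:)

```python
# 128 GiB of RAM spread evenly over 900,000 processes.
ram_bytes = 128 * 1024**3
nproc = 900_000
per_proc_kib = ram_bytes / nproc / 1024
print(f"{per_proc_kib:.0f} KiB per process")  # roughly 149 KiB
```

So the budget works out to on the order of 150 KB per process, which has to cover the proc structure, kernel stack, page tables, and so on.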

The pipe900k test is pretty cool: measuring the latency of a chain of 900k
processes connected by pipes :). I wonder how other operating systems would
perform under that test, or abstraction layers like MPI (the Message Passing
Interface).
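For anyone curious what such a test looks like in miniature, here is a hypothetical sketch (a guess at the shape of pipe900k, not its actual source): fork N children connected head-to-tail by pipes and time a single byte traversing the whole chain.

```python
import os, time

N = 100  # chain length for this sketch; the real test uses 900,000 processes

# One pipe per link, plus one for the tail the parent reads from.
pipes = [os.pipe() for _ in range(N + 1)]
for i in range(N):
    if os.fork() == 0:
        # Child i: relay one byte from pipe i to pipe i+1, then exit.
        os.write(pipes[i + 1][1], os.read(pipes[i][0], 1))
        os._exit(0)

start = time.monotonic()
os.write(pipes[0][1], b"x")      # inject a byte at the head of the chain
tail = os.read(pipes[N][0], 1)   # wait for it to emerge at the tail
elapsed = time.monotonic() - start
print(f"{elapsed / N * 1e6:.1f} us per hop")
```

The per-hop time is dominated by scheduler wakeup latency, which is presumably what the test is designed to stress at 900k processes.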

Regards,

 Michael
