We've fixed a number of bottlenecks that can develop when the number of
user processes runs into the tens of thousands or higher. One thing led to
another and I said to myself, "gee, we have a 6-digit PID, might as well
make it work to a million!". With the commits made today, master can
support at least 900,000 processes with just a kern.maxproc setting in
/boot/loader.conf, assuming the machine has the memory to handle it.
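For reference, that's a one-line boot tunable. The value below is just an illustration (size it to what your RAM can actually back); the tunable is read at boot, so a reboot is needed for it to take effect:

```
# /boot/loader.conf
# Example value only -- pick a limit your memory can support
kern.maxproc="1000000"
```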
And, in fact, as today's machines start to ratchet up there in both memory
capacity and core count, with fast storage (NVMe) and fast networking
(10GigE and higher), even in consumer boxes, this is actually something
that one might want to do. With AMD's Threadripper and EPYC chips now out,
the Intel<->AMD CPU wars are back on! Boasting up to 32 cores (64
threads) per socket and two sockets on EPYC, terabytes of RAM, and
motherboards with dual 10GigE built-in, the reality is that these numbers
are already achievable in a useful manner.
In any case, I've tested these changes on a dual-socket Xeon. I can in fact
start 900,000 processes. They don't get a whole lot of CPU and running
'ps' would be painful, but it works and the system is still responsive from
the shell with all of that going on.
1:42PM up 9 mins, 3 users, load averages: 890407.00, 549381.40, 254199.55
In fact, judging from the memory use, these minimal test processes only eat
around 60KB each. 900,000 of them ate only 55GB on a 128GB machine. So
even a million processes is not out of the question, depending on the cpu
requirements for those processes. Modern machines can be stuffed with
enormous amounts of memory.
Of course, our PIDs are currently limited to 6 digits, so a million is
kinda the upper limit in terms of discrete user processes (versus pthreads
which are less restricted). I'd rather not go to 7 digits (yet).
NOTE: master users, a full world + kernel compile is needed.