I tested on 2 GB of RAM, increasing the number of processes by one
order of magnitude each time. At 10M it no longer started, complaining
that it could not allocate resources. I don't know whether the limiting
factor was the RAM or my Linux file descriptors (CentOS 6.0), so I
stayed at 1M. As far as I know, Windows 7 supports at most 16M file
descriptors, so you may try raising the limit up to that value for your
test.
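For reference: on Linux the per-process file descriptor limit is what
ulimit -n reports, and the Erlang VM reads ERL_MAX_PORTS from the
environment at startup. A minimal sketch of how you might raise and
then verify the limits (the 1048576 figure is just an example value,
not a recommendation):

    %% Start the VM with a higher port limit, e.g.:
    %%   ERL_MAX_PORTS=1048576 erl
    %% Then inspect the effective limits from the Erlang shell:
    MaxProcs  = erlang:system_info(process_limit), % max concurrent Erlang processes
    OpenPorts = length(erlang:ports()),            % ports currently open
    io:format("process_limit=~p open_ports=~p~n", [MaxProcs, OpenPorts]).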
Another test I did was to check whether I could really have 1M spawned
processes interacting with the OS (an Erlang program in which I started
1M processes in parallel, each of which was supposed to fetch something
with cURL). The answer was no: the OS stopped creating new cURL
instances, citing a lack of resources. Memory was fully occupied, so I
couldn't determine whether I had reached an OS limit or was starving my
own processes.
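For the curious, a minimal sketch of that kind of test (assumptions on
my part: spawn_curl is a hypothetical module name, curl is on the PATH,
and Url is a plain URL string):

    %% Spawns N processes; each shells out to cURL and reports back.
    %% Note: os:cmd/1 uses a port internally, so each call consumes both
    %% an OS process and an Erlang port.
    -module(spawn_curl).
    -export([run/2]).

    run(N, Url) ->
        Parent = self(),
        Cmd = "curl -s -o /dev/null -w '%{http_code}' " ++ Url,
        [spawn(fun() -> Parent ! {done, os:cmd(Cmd)} end)
         || _ <- lists:seq(1, N)],
        %% Collect one reply per spawned process (HTTP status codes as strings).
        [receive {done, Code} -> Code end || _ <- lists:seq(1, N)].

That is why both the OS process table and the Erlang port limit come
into play at the same time in such a test.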
If RAM is the problem, I suppose your 8 GB can bring up a lot of
processes; but if the OS is imposing the limit, then you may want to
leave some file descriptors for other processes as well.
I would be interested in what numbers you reach. :)
CGS
On 12/07/2011 08:39 PM, Pete Vander Giessen wrote:
On Wed, Dec 7, 2011 at 12:38 AM, CGS <[email protected]> wrote:
It looks like Erlang reached its default limit of 1024 ports. That should
be solved by increasing ERL_MAX_PORTS ... To be noted: Erlang checks
the working-memory availability for each thread, so the maximum number
of ports cannot be insanely high.
Thank you for that piece of info. How high would you say is
"insanely" high for, say, a server with 8 GB of RAM?
Broader question: if I start a continuous replication, will that hold
a thread open for the (theoretically unlimited) duration of the
replication?
The key thing is that I'm trying to figure out how we hit that limit
without doing anything _too_ mean to the server. Increasing the thread
limit sounds like it might be part of the solution, but I'd like to
understand the problem a bit more, and I'm doing a bit of blind
flailing at the moment ...
~ PeteVG