On 6/28/21 13:08, Eric Robinson wrote:
-----Original Message-----
From: Christopher Schultz <>
Sent: Monday, June 28, 2021 8:54 AM
Subject: Re: 500 instances of tomcat on the same server


On 6/25/21 22:58, Eric Robinson wrote:
We can run 75 to 125 instances of tomcat on a single Linux server with
12 cores and 128GB RAM. It works great. CPU is around 25%, our JVMs
are not throwing OOMEs, iowait is minimal, and network traffic is
about 30Mbps. We're happy with the results.

Now we're upping the ante. We have a 48-core server with 1TB RAM, and
we're planning to run 600+ tomcat instances on it simultaneously.
What caveats or pitfalls should we watch out for? Are there any hard
limits that would prevent this from working as expected?
If you have the resources, I see no reason why this would present any problems.

On the other hand, what happens when you need to upgrade the OS on this
beast? You are now talking about disturbing not 75-125 clients, but 600 of them.

There are two load-balanced servers, each with adequate power to support the 
whole load. When we want to maintain Server A, we drain it at the load balancer 
and wait for the last active connection to complete. Then we reboot/maintain 
the server and add it back into the rotation gracefully.
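For reference, the drain-and-wait step can be scripted. The thread doesn't say which load balancer is in use, so this is a hypothetical sketch assuming HAProxy with its runtime stats socket enabled; the backend name "tomcats" and server name "serverA" are made up for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: put serverA into DRAIN so it takes no new connections,
# then poll the "scur" (current sessions) field of "show stat" until the last
# active connection completes. Assumes the stats socket is at
# /var/run/haproxy.sock and socat is installed.
echo "set server tomcats/serverA state drain" | socat stdio /var/run/haproxy.sock

while :; do
    # In "show stat" CSV output, field 1 is pxname, 2 is svname, 5 is scur.
    conns=$(echo "show stat" | socat stdio /var/run/haproxy.sock \
            | awk -F, '$1 == "tomcats" && $2 == "serverA" { print $5 }')
    [ "${conns:-0}" -eq 0 ] && break
    sleep 5
done
# Safe to reboot/maintain serverA now. Re-add it gracefully afterwards with:
#   echo "set server tomcats/serverA state ready" | socat stdio /var/run/haproxy.sock
```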

Sounds good. I'm curious, are you using the LoadBalancerDrainingValve for that purpose? What are you using for your load-balancer and/or reverse-proxy?
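For what it's worth, when the proxy speaks AJP/mod_jk activation states, the valve Christopher mentions is a one-line addition inside the <Host> element of server.xml. A minimal sketch (the 307 status code is the valve's usual choice, shown here for illustration):

```xml
<!-- When the balancer marks this node disabled (activation state DIS),
     the valve expires the client's session cookie and redirects requests
     that have no established session, so the drain completes sooner
     instead of waiting for every idle session to time out. -->
<Valve className="org.apache.catalina.valves.LoadBalancerDrainingValve"
       redirectStatusCode="307" />
```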

If I had a beast like this, I'd run VMWare (or similar) on it, carve it up into
virtual machines, and run fewer clients on each... just for the sheer
manageability of it.

We considered doing it that way. Performance is top priority, so we ultimately 
decided to run the instances on metal rather than introducing a few trillion 
lines of OS code into the mix. We might consider containerizing.

If this is already a virtualized/cloud environment, then I think you're doing it
wrong: don't provision one huge instance and use it for multiple clients.
Instead, provision lots of small instances and use them for fewer (or even 1)
at a time.

That makes sense until you know the environment better. It's a canned 
application and we're not the publisher. Breaking it out this way gives us the 
ability to present each customer as a unique entity to the publisher for 
support purposes. When their techs connect, the sandbox allows them to 
troubleshoot and support our mutual customer independently, which puts them in 
an environment their techs are comfortable with, and removes the risk of them 
doing something that impacts everybody on the server (or in the VM, if we used 
VMs).

Okay. I'm sure I don't understand, but if you have heterogeneous support getting involved, to me it would be even more important to isolate all those applications from each other. Maybe you mean in-application support for mutual customers and not in-OS, etc. support.

All I can tell you is we've been running it this way for 15 years and we've 
never looked back and wished we were doing it differently.

That's a good position to be in. :)

