John,

On 12/9/2010 4:04 PM, John Goodleaf wrote:
> Google is giving me too many different answers!

:(

> I need to serve a single webapp to a lot of people with acceptable latency.

That ought to be possible. What is "acceptable latency" for your system?
Do you just want to get byte 1 as fast as possible? Or do you mean that
you want total response times to be "acceptable"?

> There's no need for multiple contexts or any other funkines. Tomcat 6, JVM
> 1.6x. I have a hardware load balancer and two 64-bit machines (Windows 2003
> Server--not my choice, yes I'd have preferred Linux) each with two CPUs and
> 8GB RAM.

Sounds good. You didn't mention what kinds of CPUs you have. Since they
are 64-bit, they are probably of reasonably recent manufacture.

> I also have a consultant who insists we need to set up at least two,
> possibly more, instances of Tomcat on each machine for good performance.

At this point, I'm skeptical of that argument, though under certain
conditions it might make sense.

> I'm
> more inclined to think that a single instance with tuned Java options will
> provide the same performance, but be easier to set up and maintain.

I agree with you thus far -- without any other requirements being indicated.

> If I
> needed to serve different webapps or somehow needed to separate things for
> some reason, I could see it, but given just the one app/context, it seems
> like multiple instances really amounts to second-guessing the OS scheduler.

Not really, since the threads in one instance aren't that different, from
the OS scheduler's point of view, from the same threads spread across
multiple instances. What splitting does is limit the effectiveness of
things like VM-wide synchronization, etc.

> Also notable: the servers are VMs.

Hmm... which virtualization platform? Some of them have terrible
performance under certain conditions. For instance, we have some OpenVZ
instances with horrible I/O performance, stalls, etc., while CPU
performance seems fine.

> Anyway, I'd appreciate advice, and I don't mind being wrong if you need to
> side with the consultant. If it needs to be complicated to go fast, then
> that's what we'll do... Ideally, I'd try both ways and hit it with JMeter,
> but I lack the time and resources (because mgmt spent the money on our
> consultant). So I must beg for answers here...

Honestly, benchmarking should be at the root of every decision you make
where performance is concerned.
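
If you can scrape together even an hour, a crude probe beats guessing.
Something like the following (the URL and request count are made up) at
least gives you a number to argue about; JMeter would do the same thing
properly, with ramp-up, think times, concurrency, etc.:

import java.net.HttpURLConnection;
import java.net.URL;

public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        // assumption: point this at your load balancer / webapp URL
        URL url = new URL("http://your-balancer/yourapp/");
        int requests = 100;
        long total = 0;
        for (int i = 0; i < requests; i++) {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.getResponseCode();   // forces the full round trip
            conn.disconnect();
            total += System.nanoTime() - start;
        }
        System.out.println("mean latency (ms): " + (total / requests) / 1000000.0);
    }
}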

A couple of thoughts on the whole thing:

Each JVM has a minimum amount of memory below which it cannot operate.
Running multiple JVMs (as opposed to a single one) on a single machine
therefore inflates the amount of memory required for the whole system,
with no perceivable benefit in and of itself. If your consultant tells
you that the garbage collector will have to work less, he or she is
right in that each JVM will (likely) have fewer objects to deal with,
but then you've got two (or more) GCs operating on the same set of
CPUs, so it's basically a wash.
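
If you'd rather see that than take my word for it, drop something like
this little class (name made up) into each setup -- or just start Tomcat
with -verbose:gc / -XX:+PrintGCDetails -- and compare one big heap
against two small ones:

public class HeapSnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap  (MB): " + rt.maxMemory() / mb);
        System.out.println("committed (MB): " + rt.totalMemory() / mb);
        System.out.println("used      (MB): " + (rt.totalMemory() - rt.freeMemory()) / mb);
    }
}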

Since Java is greedy about memory, it's generally best to give the JVM
as much memory as you can afford. If you split one JVM into two, you're
lowering the ceiling of each JVM's maximum heap and potentially opening
yourself up to an OutOfMemoryError if you have certain operations that
require lots of memory. This is a tough thing to get a handle on, since
a single operation big enough to take down a "small" JVM might take
down a big one, too. But if the memory-heavy operations are relatively
infrequent, or you can limit the number of simultaneous memory-heavy
operations, you can have more of them running in one big heap than in
two (or more) smaller ones.
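
For what it's worth, here's a sketch of what I mean by limiting
simultaneous memory-heavy operations -- the class name and the cap of 4
are made up, but a plain java.util.concurrent.Semaphore is all it takes:

import java.util.concurrent.Semaphore;

public class ReportRunner {
    // assumption: at most 4 memory-heavy report generations at once
    private static final Semaphore heavyOps = new Semaphore(4);

    public byte[] generateReport(String reportId) throws InterruptedException {
        heavyOps.acquire();                 // blocks until a slot frees up
        try {
            return buildHugeReport(reportId);   // the memory-hungry part
        } finally {
            heavyOps.release();             // always give the slot back
        }
    }

    // stand-in for whatever actually eats the memory
    private byte[] buildHugeReport(String reportId) {
        return new byte[64 * 1024 * 1024];
    }
}

With one big heap you can afford a higher cap than either of two
half-sized JVMs could.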

Monitor locks (aka synchronization) can be "improved" by running
multiple JVMs. Let's say you have a shared resource that is very
popular. One obvious example is a database connection pool. If you have,
say, 1000 threads actively requesting connections from that pool,
contention is relatively high for the lock that protects the integrity
of the pool regardless of the size of the pool. If you split into two
JVMs, then you only have (on average) 500 active threads fighting for
the lock, and you may see performance improve slightly. There is another
edge to that sword, though: you are likely to give back whatever you
gained from the (somewhat) reduced contention, because something in
front of Tomcat now has to decide which JVM handles each request.
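
To make the contention point concrete, here's a toy pool (nothing you'd
use in production -- DBCP, c3p0, etc. are smarter, but the hot lock is
the same idea) showing the single monitor every borrowing thread has to
pass through:

import java.sql.Connection;
import java.util.LinkedList;

public class TinyPool {
    private final LinkedList<Connection> idle = new LinkedList<Connection>();

    public synchronized Connection borrow() throws InterruptedException {
        while (idle.isEmpty()) {
            wait();               // your 1000 threads queue up right here
        }
        return idle.removeFirst();
    }

    public synchronized void giveBack(Connection c) {
        idle.addLast(c);
        notify();                 // wake one waiter
    }
}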

If you have an unstable application, running additional JVMs (that is,
more than your hardware and/or network setup requires) can help
alleviate those stability problems: instead of 1/2 of your users losing
their sessions because the webapp and/or JVM crashed, you might get away
with only 1/4 or 1/8 of your users being interrupted.
That's only a stop-gap solution, though: you should fix your webapp :)

Finally, if you are running a fault-tolerant website with all the
bells and whistles, you are probably using either distributable sessions
or something reasonably comparable (webcache, etc.). In those cases, the
overhead of communicating session information between the JVMs will cost
you far more than any single-JVM inefficiency you could find, and every
JVM you add is one more place those session updates have to reach.
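
If you do end up with distributable sessions, remember that everything
you put into the session has to be Serializable and gets shipped to the
other node(s) whenever it changes -- that shipping is the overhead I'm
talking about. A made-up example:

import java.io.Serializable;

public class CartItem implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String sku;
    private final int quantity;

    public CartItem(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }

    public String getSku() { return sku; }
    public int getQuantity() { return quantity; }
}

// in the servlet:
//   session.setAttribute("cartItem", new CartItem("ABC-123", 2));
// and you'll need <distributable/> in web.xml for Tomcat to replicate
// the session at all.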

I'd be happy to hear specific reasons why multiple JVMs would improve
the performance of a webapp. Tell your consultant you want those
specifics, and definitely let us know so we can comment on them. Who
knows... he or she may have some good points that I hadn't considered.

Hope that helps,
-chris