Hello all,
Thanks for the great posting on Jetspeed mailing list. We are evaluating Jetspeed for use in a project within Cisco. I hope you can shed some light on some of the problems we are facing.
In a very simple test of one portlet with fixed content (no business logic, no database access, etc.), we have observed some weird behavior and suspect Jetspeed is the cause. We use LoadRunner to run 100 concurrent users that all pull the same page containing that one fixed-content portlet. It runs smoothly for about 25 minutes (it takes about 16 minutes to ramp up the 100 users), and then the response time starts to climb quite sharply, from < 1s to 8s and then to > 20s or higher, while CPU drops from 75% to 45%. We put the same content in a plain JSP and ran it under the same load, and the response time stayed flat under 1s. We have applied the recommended Jetspeed settings.
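For context, the portlet under test is just fixed markup, roughly like the following minimal sketch against the Jetspeed 1.x portlet API (the exact package and class names here are from memory and may differ slightly in 1.4b3):

    import org.apache.ecs.ConcreteElement;
    import org.apache.ecs.StringElement;
    import org.apache.jetspeed.portal.portlets.AbstractPortlet;
    import org.apache.turbine.util.RunData;

    // Trivial fixed-content portlet: no business logic, no database access.
    public class FixedContentPortlet extends AbstractPortlet {
        public ConcreteElement getContent(RunData rundata) {
            // Always return the same static markup.
            return new StringElement("<p>Hello from a fixed-content portlet.</p>");
        }
    }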
Has anyone done similar tests? Did you observe something like that? Do you have any suggestions on where to look further? This is on Solaris 8 with JDK 1.4.1_01, Tomcat 4.1.18 and Jetspeed 1.4b3.
There are serious bugs in the HotSpot compiler in 1.4.1_01 (I'm not sure about Solaris, but they led to the release of 1.4.1_02 some time ago). I've seen all the 1.4.1 VMs I have tested do very funny things (like disproportionate memory growth) under heavy load. I have stopped using 1.4.1 for ant and maven, since they sometimes claim all the memory and swap on my laptop, making it unusable for minutes until I manage to kill the java processes.
I would either try 1.3.1_07 or update 1.4.1 to the latest release. I strongly recommend using 1.3.1 for stress testing (today) ;-)
Also, in tests I have done on a dual Pentium III with 1 GB RAM, I observed that with more than 50 simultaneous requests the performance degraded significantly.
I would recommend designing your system as a cluster (with session memory) to ensure that no more than, say, 10 requests are processed in parallel by each VM. That figure is for small dual Linux Pentium III servers in the cluster (a much cheaper architecture than Solaris, BTW). If the Solaris hardware allows, the limit might be higher, but I would not try to squeeze every last cycle out of a single VM.
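One crude way to enforce such a cap at the web tier, independent of Jetspeed, is a servlet filter that rejects requests beyond a fixed parallelism limit, letting the load balancer retry on another node. A minimal sketch (a hypothetical filter, not part of Jetspeed or Tomcat; java.util.concurrent.Semaphore needs Java 5+, on 1.3/1.4 a synchronized counter would do the same job):

    import java.io.IOException;
    import java.util.concurrent.Semaphore;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical filter that caps the number of requests processed in parallel by this VM.
    public class ParallelismCapFilter implements Filter {

        private static final int MAX_PARALLEL = 10;
        private final Semaphore slots = new Semaphore(MAX_PARALLEL, true);

        public void init(FilterConfig config) throws ServletException {
        }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            if (!slots.tryAcquire()) {
                // Over the cap: signal the load balancer to send the request elsewhere.
                ((HttpServletResponse) response).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
                return;
            }
            try {
                chain.doFilter(request, response);
            } finally {
                slots.release();
            }
        }

        public void destroy() {
        }
    }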
Old (well-tested) capacity planning principles, dating from the mainframe era, say that the overall load of a system should never be planned to go over 60%.
Thanks!
- Shan
--
Santiago Gala
High Sierra Technology, S.L. (http://hisitech.com)
http://memojo.com?page=SantiagoGalaBlog
