Shasi Mitra Yarram wrote:
We have developed an application. These are the technologies we've used:
- JDK 1.4.2
- JSF 1.1 (Myfaces 1.1.6)
- Ajax4JSF 1.1.1
- Tomahawk 1.1.8
- Tiles 2.1.0
- Spring 2.5, Spring Security for security layer
- iBatis 2.0
Databases:
- SQL Server 2000, DB2 8, Sybase
Servers:
- IBM WebSphere 6.0 (JVM memory: min 64MB, max 512MB)
- IBM MQ Series 6.0
- IBM AIX UNIX, load balancing across 2 servers (clustered environment);
each UNIX box has 2 CPUs
Our application uses MQ for a lot of business transactions; it depends
more on MQ than on the databases. Among the databases, SQL Server is the
main one; DB2 and Sybase are used for a few transactions.
This application is supposed to take a load of 1000 users in production
and give us a response time of 10 secs.
As we started load testing, we saw poor response times. We profiled with
JProfiler and corrected some inherent application bugs that were causing
high JVM utilization.
As we reached a 250-user load, MyFaces started eating memory (this was
revealed by a heap dump).
We tuned MyFaces based on various websites and did the following:
1) Set the state saving method to "server"
2) Set the number of views in session to 3
3) Used the streaming resource handler
org.apache.myfaces.component.html.util.StreamingAddResource with
t:documentHead
4) Set org.apache.myfaces.SERIALIZE_STATE_IN_SESSION to false.
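For reference, the four settings above map to context parameters in web.xml roughly like this (a sketch of the MyFaces 1.1.x parameter names, not our exact file):

```xml
<!-- Sketch of the MyFaces 1.1.x tuning settings described above -->
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>server</param-value>
</context-param>
<context-param>
    <param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
    <param-value>3</param-value>
</context-param>
<context-param>
    <param-name>org.apache.myfaces.ADD_RESOURCE_CLASS</param-name>
    <param-value>org.apache.myfaces.component.html.util.StreamingAddResource</param-value>
</context-param>
<context-param>
    <param-name>org.apache.myfaces.SERIALIZE_STATE_IN_SESSION</param-name>
    <param-value>false</param-value>
</context-param>
```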
These settings improved performance only a bit.
I later wrote some filters to cache the images and CSS files, which
improved screen load performance a bit as well.
But the overall response time was still not up to the benchmark: we were
getting a response time of 20 secs at a 250-user load, and we are far from
achieving the target response time at a 1000-user load.
We took a heap dump once again and saw that MyFaces continued to eat
memory. The object in the heap causing the trouble is
*JspStateManagerImpl$SerializedViewCollection*
I read on a website that this object saves old view states in a weak hash
map that never gets garbage collected, and thought that could be the
problem. I found a fix on that site and replaced the JARs with corrected
ones. Now JspStateManagerImpl no longer stores old views in the weak hash
map. This actually helped a bit and reduced memory utilization.
But when we run with 500 users, the heap dump still shows the
JspStateManagerImpl object eating memory (approx 1.6MB).
I am not sure if 1.6MB in the heap is normal for 500 users!
I know the screen size also makes a difference, so it is hard to give a
conclusion upfront. Let me provide more information.
On average we use 25 components per screen.
Each screen has a list of SelectItems for a drop-down; the select items
are in turn referenced by a managed bean in session.
Apart from the above object, we store only 3 managed beans in the
session. These managed beans carry menu and user information.
JSCookMenu in turn reads the menu object from the session and renders the
output for every screen.
I wrote a session-size calculator JSP to find the size of each of these
session objects.
They are hardly 20-30KB, but the JspStateManagerImpl object in the
session is easily 150KB at minimum and sometimes goes above 450KB.
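The size calculation can be sketched in plain Java as serialization-based measurement, similar in spirit to my calculator JSP (class and method names here are made up for illustration; it only works for serializable objects, and JDK 1.4 style, so no generics):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Rough session-attribute size estimator: serialize the object into a
// byte buffer and report the buffer size.
public class SessionSizeEstimator {

    /** Serialized size of the object in bytes, or -1 if it cannot be serialized. */
    public static int serializedSize(Object obj) {
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buffer);
            out.writeObject(obj);
            out.close();
            return buffer.size();
        } catch (IOException e) {
            return -1; // e.g. NotSerializableException
        }
    }

    public static void main(String[] args) {
        java.util.HashMap bean = new java.util.HashMap();
        bean.put("user", "jdoe");
        System.out.println("approx size: " + serializedSize(bean) + " bytes");
    }
}
```

Note this measures the serialized form, not the live heap footprint, so it is only a rough lower bound on what the session actually holds.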
We use Tomahawk's saveState to store some object information.
I should admit that we do use EL expressions in many of our screens.
We have limited usage of data tables, but wherever we have used them, we
bind them to a managed bean (in request scope) with preserveDataModel set
to true.
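Such a binding looks roughly like this in a page (bean and property names are placeholders, not our real code):

```jsp
<t:dataTable value="#{orderBean.orders}" var="order"
             preserveDataModel="true">
    <h:column>
        <h:outputText value="#{order.id}"/>
    </h:column>
</t:dataTable>
```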
Wherever JSF components were not needed, we used pure HTML tags enclosed
within f:verbatim.
Each page contains at least 4 command buttons. Each command button calls
some other managed bean and renders its screen. This means that, apart
from the main managed bean, the beans referred to by these buttons are
instantiated when the screen is rendered.
Of course, all these managed beans are in request scope.
Most of the components on our screens use the "rendered" attribute to
perform some business function.
Now my question is: did I miss anything else in MyFaces? Did I miss
anything that could help me tune JSF further?
Our constraint is that we cannot move to JDK 1.5 (which would mean JSF
1.2 or higher), as it would be a big infrastructure cost for our clients.
I know the poor response time could also be due to the database, MQ, and
other layers; we are working on those in parallel.
But I want to eliminate all JSF-related issues from the picture.
Actually, with a 20-second response time I assume you have a problem in
the IO layer, which is sucking out your performance...
Check your code for IO-intensive work in setters and getters and move it
out (the classic problem is that somebody does IO-related work in setters
or getters).
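The fix for that classic problem can be sketched like this. JSF may call a getter several times during a single request, so cache the expensive result instead of repeating the IO on every EL evaluation (a minimal sketch; the bean, getter, and backend names are made up):

```java
// Illustrative request-scoped bean. JSF can evaluate #{customerBean.customers}
// several times per request, so the expensive lookup must not live directly
// in the getter body.
public class CustomerBean {

    private java.util.List customers; // lazily loaded (JDK 1.4 style, no generics)
    int loadCount = 0;                // counts backend hits, to show caching works

    // GOOD: the expensive call runs at most once per bean lifetime
    // (request scope, so at most once per request)
    public java.util.List getCustomers() {
        if (customers == null) {
            customers = loadFromBackend();
        }
        return customers;
    }

    // Stand-in for the real DAO/MQ lookup that should NOT run on every getter call
    private java.util.List loadFromBackend() {
        loadCount++;
        java.util.List result = new java.util.ArrayList();
        result.add("customer-1");
        return result;
    }
}
```

With this shape, ten EL evaluations in one request still cost only one backend round trip.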
Also check whether you have complex EL expressions and move them over to
the Java side, either by using EL functions or helper accessor methods.
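Moving a compound EL expression into a helper accessor can look like this (bean and property names are invented for the example):

```java
// Instead of a compound EL expression in the page such as
//   rendered="#{orderBean.itemCount > 0 && orderBean.admin}"
// expose one boolean helper property and write
//   rendered="#{orderBean.deleteVisible}"
public class OrderBean {

    private int itemCount;
    private boolean admin;

    public OrderBean(int itemCount, boolean admin) {
        this.itemCount = itemCount;
        this.admin = admin;
    }

    // Helper accessor: the logic lives in compiled Java, not in the page's EL
    public boolean isDeleteVisible() {
        return itemCount > 0 && admin;
    }
}
```

Besides being cheaper to evaluate than a parsed EL expression, the helper keeps the business rule in one testable place.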
I do not know any actual state-saving numbers, but I assume your problem
is not state-saving related; the main issue is that state saving
unfortunately takes some space in JSF 1.x. JSF 2.0 will become way better.
The issue is that the entire component tree is state-saved every time you
hit a request (that is inherent to JSF), and within the component tree
every component saves every attribute it knows into an array, which is
then serialized.
This will be resolved in JSF 2 by introducing delta states, which store
only the attributes that have changed after the initial save.
But this only drags performance down significantly if you run into RAM
issues, which you can resolve by adding RAM.
Most of the time, as I said, the main issue in a webapp is the IO layer,
and there it is mostly some locking calls that drag the entire
performance down. Keep in mind that web applications mostly operate under
the one-thread-per-request model, which means that if something in the IO
layer takes a long time, the entire request processing is slowed down
significantly.
You might also check whether your server is already running at its
limits; depending on the load you produce, you might move to a clustering
model.