Hi all,
for my article series I have tested a number of frameworks. The main
issues I have encountered are the selection of scenarios and the
provisioning of reasonable hardware.
For my use case, I have developed a number of scenarios:
a) GET requests with heavy use of resource bundles and conditional outputs
b) POSTing forms with successful or failing validations
c) Large pages versus small pages
d) Constant-throughput scenarios to measure response times
e) Growing requests-per-second approaches to find the limit of parallel
requests with reasonable response times
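Scenario (e) can be sketched as a simple search for the saturation point. This is only an illustration: in a real test the measurements would come from a tool such as JMeter, and the `simulated_measure` function and its numbers below are entirely made up.

```python
# Sketch of scenario (e): step up the request rate until mean response
# times degrade past a threshold. "measure" stands in for a real load
# tool reporting the mean response time (ms) at a given rate.

def find_capacity(measure, step=10, max_rate=1000, threshold_ms=500):
    """Increase the request rate in steps; return the last rate whose
    mean response time stayed under the threshold (or None)."""
    last_ok = None
    for rate in range(step, max_rate + 1, step):
        if measure(rate) <= threshold_ms:
            last_ok = rate
        else:
            break
    return last_ok

# Toy model: response time grows sharply once the server saturates
# at around 200 requests per second.
def simulated_measure(rate):
    return 50 + max(0, rate - 200) * 20  # ms

print(find_capacity(simulated_measure))  # -> 220 under this toy model
```

The step size trades precision against test duration; in practice each step also needs a warm-up period before measuring.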
Naturally, none of these scenarios is identical to your real-world
application, but they give an indication of performance.
The effort is very high, and I don't think you would want to compare
against many other frameworks.
The second problem is finding reasonable hardware. Neither the network
nor the client that measures response times should become saturated. If
you want to test a dual-core machine with 4 GB RAM as the server, you
will need about 2-4 dual-core machines to drive the load, plus a fast switch.
I have used JMeter for my tests.
Best Regards / Viele Grüße
Sebastian Hennebrueder
On 03.09.10 18:20, Howard Lewis Ship wrote:
These are all good notes, and I was certainly thinking about Ben's
tests when I wrote my initial email.
The scope of this can expand dramatically.
What I really want, for a first pass, is an answer to the question: Is
Tapestry 5.2 faster than 5.1? Does it use less memory?
On Thu, Sep 2, 2010 at 11:33 PM, Kalle Korhonen
<[email protected]> wrote:
Ben, what you are talking about seems to be geared more towards
testing the readiness and scalability of a production application. For
benchmarking multiple versions of the framework, I'd assume data
access would never come into play, as there should be no database
whatsoever. Maintaining a fully synthetic test should also be far
easier than maintaining a scalability test for a real-world
application. Also, standardized, guaranteed-QoS VMs are great for
scalability testing when you are mostly looking for "good enough" as
an answer, but if you are really comparing versions of the framework
over time, the differences should be measured in milliseconds even
with thousands of repeated executions, so it's critical that extra
variables are reduced to a minimum, just the same way as if we
benchmarked specific features of the JVM across different versions. On
the other hand, creating and benchmarking a close-to-real-world
blueprint application is always good marketing, which Tapestry
certainly needs.
Howard expressed a desire for doing both types of testing, but I don't
think it's a good idea to try to combine the different goals into the
same test suite. In a fully synthetic test, the absolute numbers don't
matter, only the differences do. And you certainly couldn't compare
results of a synthetic test implemented in one framework to results
implemented in another framework, only to different versions of
itself. It seems the first thing to do is to decide what exactly we
want to measure.
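Kalle's point that version-to-version differences are a matter of milliseconds can be made concrete with a small comparison sketch. The timings below are invented for illustration; real numbers would come from thousands of identical requests against each version, and the "significance" check here is deliberately crude.

```python
# Sketch of comparing two framework versions from repeated timings.
# A difference only counts if it clearly exceeds the run-to-run noise.
from statistics import mean, stdev

def compare(times_a, times_b):
    """Return (mean difference in ms, True if the difference is larger
    than the bigger of the two standard deviations)."""
    diff = mean(times_b) - mean(times_a)
    noise = max(stdev(times_a), stdev(times_b))
    return diff, abs(diff) > noise

v51 = [12.1, 12.4, 11.9, 12.2, 12.0]  # hypothetical 5.1 timings (ms)
v52 = [10.8, 11.0, 10.7, 11.1, 10.9]  # hypothetical 5.2 timings (ms)
diff, significant = compare(v51, v52)
print(round(diff, 2), significant)
```

Under these made-up numbers, 5.2 would come out roughly 1.2 ms faster per request, well above the scatter of either series. With noisy hardware the standard deviations grow and real differences disappear into the `significant == False` case, which is exactly Kalle's argument for controlled environments.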
Kalle
On Thu, Sep 2, 2010 at 10:55 PM, Ben Gidley<[email protected]> wrote:
I did a comparative load test
(http://blog.gidley.co.uk/2009/05/tapestry-load-testing-round-up.html) a
while back, and we do a lot of load testing for SeeSaw.com (mainly to
compare versions). I did create a 'comparative' Struts application for
that evaluation (source code:
http://sites.google.com/a/gidley.co.uk/tapestryloadtest/Home/test-5).
We use Grinder as our load test tool, as it handles clustered
load-generating agents well and its Jython scripting makes it easy to
share steps between 'journeys' in the application. You probably don't
need to worry about clustering, as it isn't a core Tapestry 'feature'
(rather an environmental one), but even without that feature Grinder is
easy to work with and powerful.
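The "shared steps composed into journeys" idea Ben mentions can be sketched in plain Python. Grinder scripts are really Jython against the `net.grinder.script` API, which is not shown here; the step and journey names below are hypothetical.

```python
# Sketch of sharing steps between journeys. Each step appends the
# request it would issue to a session log; a real Grinder script would
# fire HTTP requests instead.

def login(session):
    session.append("POST /login")

def view_page(session):
    session.append("GET /start")

def submit_form(session):
    session.append("POST /form")

# Journeys reuse the same steps, so when a page's flow changes the
# fix lives in exactly one place.
JOURNEYS = {
    "browse": [login, view_page],
    "purchase": [login, view_page, submit_form],
}

def run(journey_name):
    session = []
    for step in JOURNEYS[journey_name]:
        step(session)
    return session

print(run("purchase"))  # ['POST /login', 'GET /start', 'POST /form']
```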
If your goal is to test Tapestry, the first issue you will hit is data
access - in our apps, the data access piece is 90% of the page time. In
SeeSaw we 'solve' this by heavily caching data, so most pages only read
data from a cache. To get a test that focuses on Tapestry, I would
suggest loading your data from in-memory caches. That way you won't be
measuring how fast Hibernate or your database (etc.) performs.
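Ben's caching suggestion amounts to putting an in-memory map in front of the data layer. A minimal sketch, where `ProductDao` is a hypothetical stand-in for a real Hibernate-backed DAO:

```python
# Sketch: an in-memory cache in front of the data access layer, so the
# load test measures the framework rather than the database.

class ProductDao:
    """Hypothetical stand-in for a real database-backed DAO."""
    def __init__(self):
        self.db_hits = 0  # counts how often the "database" is touched

    def load(self, product_id):
        self.db_hits += 1
        return {"id": product_id, "name": f"product-{product_id}"}

class CachingDao:
    """Serves repeated loads from a dict instead of the delegate."""
    def __init__(self, delegate):
        self.delegate = delegate
        self.cache = {}

    def load(self, product_id):
        if product_id not in self.cache:
            self.cache[product_id] = self.delegate.load(product_id)
        return self.cache[product_id]

dao = CachingDao(ProductDao())
for _ in range(1000):          # simulated page renders
    dao.load(42)
print(dao.delegate.db_hits)    # -> 1: only the first render hit the "database"
```

With the database reduced to a one-time cost, the remaining per-request time is dominated by the framework under test.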
Another big issue we hit is breaking/maintaining our load test scripts -
as they effectively reverse-engineer some Tapestry behaviours
(especially AJAX ones), they can break quite easily. Our solution is to
run them single-threaded as part of our CI builds. This usually catches
obvious changes.
Another thing to watch out for is your app server. Jetty is an excellent
high-load container - but depending on how it is configured, it can
'throttle' throughput (an excellent thing in production). This can skew
your test, as you end up measuring the throttling and not the
application. The trick is to set the thread pools in Jetty far higher
than you would in production, so you can be sure you are not saturating them.
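A quick way to size the pool Ben describes is Little's law: concurrent requests ≈ throughput × mean response time. If the pool is smaller than that product, the container queues requests and the test measures the throttling. The numbers below are purely illustrative.

```python
# Little's law sanity check for thread pool sizing:
# concurrency = throughput (req/s) * mean response time (s).

def required_threads(throughput_rps, mean_response_s):
    return throughput_rps * mean_response_s

target_rps = 500   # hypothetical target load
response_s = 0.2   # hypothetical 200 ms mean response time
needed = required_threads(target_rps, response_s)
print(needed)  # -> 100.0: a pool of, say, 50 threads would throttle this load
```

Setting the pool well above this estimate gives the headroom Ben recommends, so the measured numbers reflect the application rather than the queue.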
I would advise using controlled hardware for your test - slight
environmental differences can completely skew the results. EC2 and the
like are popular for this - does Apache have anything similar? I can get
you time on (but not direct access to) the ioko cloud (a VMware vSphere
cluster), which lets you nearly guarantee QoS.
It may be a good idea to build the test as something that can easily be
deployed as a virtual machine, as that would let you deploy it anywhere.
I have previously used SUSE Studio (http://susestudio.com/) to build a
very small VM with a Maven script on it that updates/runs my app. The
VMs built there will run pretty much anywhere.
To find issues with a load test I recommend YourKit. It gives you a ton
of monitoring, and via its 'probes' you can see right inside Tapestry.
It is free for open source projects. I would also recommend enabling JMX
in your application and periodically writing the stats somewhere. You
can do this simply with VisualVM, or you can get more advanced and store
them in a database and graph them - ioko use Cacti (http://www.cacti.net/)
and http://jmxmonitor.sourceforge.net/ (though that may be a bit more
complex than what you want).
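The "periodically write the stats somewhere" step can be sketched independently of JMX. Reading real JMX beans requires a Java-side agent (VisualVM, jmxmonitor, and so on); here a hypothetical list of already-collected metric samples is simply serialized to CSV, the kind of flat file a grapher like Cacti can consume.

```python
# Sketch: turn periodic metric samples into CSV for later graphing.
# The sample values below are invented; in practice they would be
# polled from the JVM via JMX at a fixed interval.
import csv
import io

def record(samples, fieldnames):
    """Write a list of metric dicts as CSV text."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(samples)
    return out.getvalue()

samples = [
    {"t": 0, "heap_mb": 120, "threads": 35},
    {"t": 60, "heap_mb": 180, "threads": 42},
]
print(record(samples, ["t", "heap_mb", "threads"]))
```

Keeping the raw per-interval samples (rather than only averages) is what makes it possible to spot saturation and recovery after the run.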
Ben Gidley
On Thu, Sep 2, 2010 at 11:07 PM, Thiago H. de Paula Figueiredo
<[email protected]> wrote:
On Thu, 02 Sep 2010 18:30:49 -0300, Howard Lewis Ship <[email protected]>
wrote:
One of my long term and unrealized goals for Tapestry is to have a
legitimate performance testing lab.
Cool!
I'd like the ability to run a "standard" application through its
paces, collect statistics, and see how well each new Tapestry release
compares to its predecessor in terms of general performance: response
time, memory utilization, saturation and recovery. Ideally, it would
be nice to create equivalent JSP/Spring MVC/Struts/Wicket
implementations of the same app.
I've done something a little similar, but in a smaller scope, for a
presentation comparing Struts 1, Struts 2, JSF and Tapestry 5. I
implemented a simple application, DAOs and a simple business rules
layer, and then implemented its web interface using each of the frameworks.
I'm really concerned with ensuring that 5.2 outperforms 5.1 and uses
less memory. The singleton pages approach should lower the memory
utilization ... but I'm concerned that the new AOP stuff inside
ClassTransformation (which has a tendency to create lots of little
one-off classes) will overshadow the other improvements.
Someone on the mailing list mentioned Parfait
(http://code.google.com/p/parfait/wiki/IntroductionToParfait). It could
be helpful for that.
--
Thiago H. de Paula Figueiredo
Independent Java, Apache Tapestry 5 and Hibernate consultant, developer,
and instructor
Owner, Ars Machina Tecnologia da Informação Ltda.
http://www.arsmachina.com.br
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
--
Ben Gidley
www.gidley.co.uk
[email protected]
--
Best Regards / Viele Grüße
Sebastian Hennebrueder
-----
Software Developer and Trainer for Hibernate / Java Persistence
http://www.laliluna.de