Yep. It all depends on what you are trying to measure.

- Josh, on the go

On Sep 15, 2009, at 9:38 AM, Robert Casto <[email protected]>  
wrote:

> That depends of course on what you are trying to do.
>
> Joshua wants to measure average system performance while things are  
> humming along.
>
> If you want to know how long it takes to start up, then you keep the
> data. I tend to separate the two in the reports I give to companies.
> Very different work is done to speed up one or the other.
>
> On Tue, Sep 15, 2009 at 12:26 PM, Alexey Zinger  
> <[email protected]> wrote:
> Interesting, but don't you think that in certain situations,
> throwing away results that might be affected by start-up times is
> exactly the wrong thing to do?
>
> Alexey
>
>
> From: Joshua Marinacci <[email protected]>
> To: [email protected]
> Sent: Tuesday, September 15, 2009 10:43:06 AM
> Subject: [The Java Posse] Re: Problems with continuous performance  
> testing, solutions anyone?
>
>
> When performance testing the client JRE we do two things which seem to
> help:
>
> 1) check out both the latest and your older / baseline releases of
> your code. Test them *both*. This lets you plot how you have improved,
> regardless of what computer your tests are running on. It's also the
> only way to test things which might vary from computer to computer.
>
> 2) always run each test a bunch of times, throw away the first result,
> then average the rest. This gives you a more consistent result.
> Throwing away the first one lets you ignore the runs where HotSpot
> hadn't kicked in yet, or where you were still thrashing through JVM
> startup.
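>
> A minimal sketch of that idea (the harness and names below are only
> illustrative, and you would run it against both the baseline and the
> latest build):
>
>     public class SimpleBenchmark {
>
>         /**
>          * Times the task several times, discards the first (warm-up)
>          * run, and returns the average of the rest in milliseconds.
>          * Call with runs >= 2.
>          */
>         static double averageMillis(Runnable task, int runs) {
>             long total = 0;
>             for (int i = 0; i < runs; i++) {
>                 long start = System.nanoTime();
>                 task.run();
>                 long elapsed = System.nanoTime() - start;
>                 if (i > 0) {          // skip run 0: HotSpot / JVM startup
>                     total += elapsed;
>                 }
>             }
>             return (total / (double) (runs - 1)) / 1000000.0;
>         }
>     }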
>
> - Josh
>
> On Sep 15, 2009, at 12:45 AM, Fabrizio Giudici wrote:
>
> >
> > Working with imaging, I came to the conclusion more than a year ago
> > that I need continuous performance testing
> > (http://netbeans.dzone.com/news/stopwatches-anyone-or-about-co). Of
> > course, the idea is not mine; it actually seems surprisingly "old"
> > (2003, http://www.devx.com/Java/Article/16755). As you can read in my
> > article, my continuous performance testing was initially manual; 1.5
> > years ago I wrote some trivial code to at least collect the results
> > automatically (they were then inserted into an Excel sheet by hand).
> > Since Hudson can plot arbitrary data, the next step I'm going to
> > complete is to feed those data to Hudson. Given the nature of my
> > functions, I'm not going to strictly assert that a task completes
> > within a certain time; I'd be satisfied with plotting the trend over
> > time, so I can see the impact of performance optimizations and, above
> > all, make sure that performance isn't slowly but inexorably getting
> > worse refactoring after refactoring.
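> >
> > A minimal sketch of one way to hand a timing to Hudson, assuming the
> > Plot plugin's properties-file format (the file name and call site are
> > only illustrative):
> >
> >     import java.io.FileWriter;
> >     import java.io.IOException;
> >     import java.io.PrintWriter;
> >
> >     public class TimingReport {
> >
> >         /** Writes one measurement so a Hudson plot job can pick it up. */
> >         static void write(String file, double millis) throws IOException {
> >             PrintWriter out = new PrintWriter(new FileWriter(file));
> >             try {
> >                 // key read by the Plot plugin when configured for
> >                 // properties files (assumption; adapt to your setup)
> >                 out.println("YVALUE=" + millis);
> >             } finally {
> >                 out.close();
> >             }
> >         }
> >     }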
> >
> > Since the time of my article, I've run into one more problem. So far
> > the machine I take and compare timings on has been my laptop. The
> > number of tests is increasing and it has become impossible to run
> > everything on my laptop each time (otherwise I couldn't use it for
> > hours), so I've moved the tests to a Hudson slave (a good 8-processor
> > box, where I'm going to exploit the parallelism to run multiple tests
> > at the same time). At this point there's the problem: scheduling
> > parallel tasks is an excellent way to screw up measurements (whereas
> > on my laptop I made sure that everything was executed serially and
> > there were no other processes consuming CPU in the background). While
> > for at least some tasks I could strictly measure the CPU time (by
> > means of JMX, sketched below), parts of the tests involve I/O
> > (loading and decoding files), and clearly running many of them at the
> > same time makes them interfere with each other. BTW, I suspect that
> > even pure computation tests can interfere, as they work with large
> > (about 100 MB) rasters in memory, so loading several at once could
> > lead to memory swapping and cache interference. Furthermore, the fact
> > that the host is a Hudson slave means that other projects can get
> > scheduled for a build, making things even more complex.
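> >
> > For the CPU-time part, a minimal sketch using the standard JMX thread
> > bean (names are illustrative; it counts only the CPU time of the
> > calling thread, so time spent blocked on I/O or in other threads is
> > ignored):
> >
> >     import java.lang.management.ManagementFactory;
> >     import java.lang.management.ThreadMXBean;
> >
> >     public class CpuTimer {
> >
> >         /** CPU time (ms) used by the current thread while running the task. */
> >         static long cpuMillis(Runnable task) {
> >             ThreadMXBean mx = ManagementFactory.getThreadMXBean();
> >             if (!mx.isCurrentThreadCpuTimeSupported()) {
> >                 throw new UnsupportedOperationException("no thread CPU time");
> >             }
> >             long start = mx.getCurrentThreadCpuTime();  // nanoseconds
> >             task.run();
> >             return (mx.getCurrentThreadCpuTime() - start) / 1000000L;
> >         }
> >     }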
> >
> > What to do? At the moment, the only thing I can think of is to use
> > Hudson locks to properly serialize the performance tests - with a
> > multi-stage approach I can reduce the "critical section" of the
> > tests, but resorting to such a brute-force solution still hurts. I'd
> > like to know whether somebody else has done, or is doing, public work
> > in this area.
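> >
> > A cruder OS-level alternative, sketched here only as an illustration
> > (names are placeholders), would be an exclusive file lock around the
> > measured section, so two test processes on the same slave can't
> > measure concurrently:
> >
> >     import java.io.RandomAccessFile;
> >     import java.nio.channels.FileLock;
> >
> >     public class SerializedSection {
> >
> >         /** Runs the task while holding an exclusive lock on a shared file. */
> >         static void runExclusively(String lockFile, Runnable task) throws Exception {
> >             RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
> >             FileLock lock = raf.getChannel().lock();  // blocks until free
> >             try {
> >                 task.run();
> >             } finally {
> >                 lock.release();
> >                 raf.close();
> >             }
> >         }
> >     }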
> >
> > PS There is a very recent (JavaZone '09) presentation about "testing
> > in the cloud" which could address some of these problems, but I think
> > the JavaZone '09 slides are not available yet:
> >
> > http://javazone.no/incogito09/events/JavaZone%202009/sessions/Continuous%20Performance%20Testing%20in%20the%20Cloud
> >
> > In any case, it seems to mostly refer to JEE testing, where one would
> > indeed expect the most significant tests to be those with multiple
> > clients in parallel, which is not my primary case.
> >
> > PPS Yes, I know that parallelizing across 8 different computers
> > instead of 8 CPUs in a single computer would be a good idea, but I
> > can't afford it :-) In any case, that would bring the problem of
> > needing 8 perfectly identical computers.
> >
> > --
> > Fabrizio Giudici - Java Architect, Project Manager
> > Tidalwave s.a.s. - "We make Java work. Everywhere."
> > weblogs.java.net/blog/fabriziogiudici - www.tidalwave.it/blog
> > [email protected] - mobile: +39 348.150.6941
> >
>
> -- 
> Robert Casto
> www.robertcasto.com
>

