That is a good point; what stats would you recommend? Ideally it would be
good to get something we can pull from JMX.

-jay
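
[Editor's note: a minimal sketch of what pulling stats over JMX and dumping them to CSV (the approach outlined further down in the thread) could look like. The platform MBeans used here (`java.lang:type=Memory`, `java.lang:type=OperatingSystem`) are stand-ins to keep the example runnable; the actual Kafka metric MBean names would be substituted in.]

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxStatDumper {
    // Poll a few MBeans and emit one CSV row per sample. These platform
    // beans are stand-ins; a real harness would query the broker's own
    // metric MBeans instead.
    public static String sampleCsvRow() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        ObjectName mem = new ObjectName("java.lang:type=Memory");
        CompositeData heap =
            (CompositeData) server.getAttribute(mem, "HeapMemoryUsage");
        long used = (Long) heap.get("used");
        long max = (Long) heap.get("max");

        ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
        int cpus = (Integer) server.getAttribute(os, "AvailableProcessors");

        return System.currentTimeMillis() + "," + used + "," + max + "," + cpus;
    }

    public static void main(String[] args) throws Exception {
        // A real harness would append rows on a timer; one sample suffices here.
        System.out.println("timestamp,heap_used,heap_max,cpus");
        System.out.println(sampleCsvRow());
    }
}
```

Rows in this shape load directly into R with `read.csv` for the kind of analysis Jay describes below.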

On Thu, Oct 27, 2011 at 7:09 AM, Jeffrey Damick <jeffreydam...@gmail.com> wrote:

> One thought: I don't see any points related to JVM heap / GC profiling. It
> may not be a major concern at this point, but it probably makes sense to
> have a baseline.
>
>
> On Mon, Oct 24, 2011 at 3:06 PM, Jay Kreps <jay.kr...@gmail.com> wrote:
>
> > Neha and I have been doing some work on perf testing. My past experience
> > with these perf and integration testing frameworks is that
> >
> >   1. This kind of testing is extremely important. At least as important as
> >   unit testing. A lot of the bugs that are caught in production could be
> >   caught by good integration tests but likely will never be caught by
> >   unit tests.
> >   2. It is hard to get all your pieces scripted up so that you can fully
> >   automate the perf analysis you want to do and run this every day.
> >   3. It requires dedicated hardware that doesn't change from day to day.
> >   4. One of the biggest problems is that perf code is always kind of an
> >   afterthought. As a result one never gets to a framework that is good
> >   enough to use. Instead you keep re-writing the same test harnesses over
> >   and over, with little tweaks for each new test you need to run, then
> >   throwing that code away because it is so specific it can't be reused.
> >
> > To hopefully help I started a wiki where we could work out some ideas for
> > this. The idea is basically just to dump out all the stats we have now to
> > CSV and do some analysis in R. Then script this up in a way that we can
> > craft "test scenarios" and run a bunch of these different configurations.
> >
> > Wiki here:
> > https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing
> >
> > Would love to hear people's thoughts.
> >
> > -Jay
> >
>
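
[Editor's note: the GC baseline Jeffrey suggests could come straight from the standard `GarbageCollectorMXBean` interface in the JDK, without any extra tooling. A minimal sketch; sampling this periodically alongside the other perf stats would give a baseline to compare runs against:]

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcBaseline {
    // Report cumulative GC counts and times since JVM start, one line per
    // collector. Diffing two snapshots taken around a test run gives the
    // GC cost attributable to that run.
    public static String report() {
        StringBuilder sb = new StringBuilder();
        List<GarbageCollectorMXBean> gcs =
            ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            sb.append(gc.getName())
              .append(": collections=").append(gc.getCollectionCount())
              .append(" time_ms=").append(gc.getCollectionTime())
              .append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(report());
    }
}
```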
