This is helpful, thanks.  I'm still getting used to the NetBeans
profiling tool, but I hope to be able to contribute some useful
information, at least about slow points.  If you guys do end up having a
JRuby internals crash course/conference, I would definitely love to see
that.  I'm sure it would help me gain a better understanding of where
all this stuff fits in.  And living right in the Twin Cities, I couldn't
refuse.
Joe

On Dec 6, 2007 10:32 AM, Charles Oliver Nutter <[EMAIL PROTECTED]>
wrote:

> Joseph Athman wrote:
> > What kind of performance do we think is more important to focus on at
> > this point in JRuby development?  Should we find artificial benchmarks
> > and optimize any bottlenecks here, or run more "real world" type tests
> > from things like running a simple rails app?
> >
> > An example is something like bug 1660, which is an artificial
> > benchmark of the Time.at method.  It's true that this is slower in
> > JRuby than MRI, but how often is someone going to call Time.at
> > hundreds of millions of times in a row?  Still, maybe small things
> > like this are the only real bottlenecks left.
> >
> > Just wondering what kind of focus we should have.
>
> I'd say there are a few different categories at this point:
>
> - real world application benchmarks
>
> This includes things like the Pet Store rails app and similar full-app
> benchmarks. They're going to be the numbers that matter the most, but
> they take more work to improve. Here's the process I'd follow:
>
> 1. Break the app into a few sub-pieces; usually this should be easy.
> Start benchmarking what each stage of the app does.
> 2. Compare the results on those pieces with, say, MRI, under various loads.
> 3. If you can identify a piece that's slower, figure out why. If it's
> still too large a piece to analyze, recurse into that one and start at
> 1 again.
>
> I think we haven't made enough progress on the Pet Store benchmark
> because the smallest pieces we've broken it into are view, controller,
> and model/persistence, and each of those pieces has a lot of code
> behind it.
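The per-stage comparison in steps 1 and 2 could be sketched with Ruby's standard Benchmark library. The stage methods below are hypothetical placeholders; in a real Rails app they would exercise the actual view, controller, and model/persistence code:

```ruby
require 'benchmark'

# Hypothetical stand-ins for the app's stages; real measurements would
# drive the actual Rails view, controller, and persistence layers.
def render_view;    1_000.times { "item: %d" % rand(100) } end
def run_controller; 1_000.times { { id: rand(100) }.to_a } end
def query_model;    1_000.times { (1..50).inject(0) { |s, i| s + i * 2 } } end

# Time each stage separately so the slow one stands out; running the
# same script under both JRuby and MRI lets you compare per-stage numbers.
Benchmark.bm(12) do |bm|
  bm.report('view')       { 100.times { render_view } }
  bm.report('controller') { 100.times { run_controller } }
  bm.report('model')      { 100.times { query_model } }
end
```

Whichever stage shows the biggest gap against MRI is the one to recurse into.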
>
> - nontrivial but domain-specific microbenchmarks
>
> Borasky's MatrixBenchmark (part of the Cougar project) is an example of
> this, as is the pentomino benchmark from YARV. This includes benchmarks
> that are not really large enough to constitute an app, but could be a
> small piece of a larger application very easily. Finding bottlenecks
> here is easier, since usually the code is fairly localized and there's
> not as much to go through. But because these are somewhat
> domain-specific, they may or may not have a lot of general applicability.
>
> - trivial but general microbenchmarks
>
> Most of the JRuby benchmarks (under /test/bench) fall into this
> category. They're benchmarking things like method invocation, variable
> access, block dispatch, and commonly-used language features. They're
> microbenchmarks, so they're not indicative of overall application
> performance. But they're general enough that improving them does improve
> a majority of applications. They're a good, narrow area of focus,
> provided they're truly general enough.
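A trivial but general microbenchmark of this kind might look like the following sketch (the method and loop counts are placeholders, not an actual file from /test/bench):

```ruby
require 'benchmark'

# A deliberately trivial method, so the loop measures invocation
# overhead rather than the method body.
def noop(a, b)
  a
end

# Several timed runs, so JIT warm-up in later iterations shows up
# in the numbers when run under JRuby.
5.times do
  puts Benchmark.measure {
    1_000_000.times { noop(1, 2) }
  }
end
```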
>
> - trivial and domain-specific microbenchmarks
>
> This includes things like the Time.at benchmark, the strptime benchmark
> in test/bench, and individual tests in Dan Berger's suite (under
> test/externals/ruby_test/bench...but taken as a whole, these are more
> general). We'd only want to focus on these if they could be shown to
> actually help something in the real world or if there was an application
> that had some provable bottleneck that needed to be fixed.
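For reference, a domain-specific microbenchmark like the Time.at one boils down to something like this sketch (the iteration count is arbitrary):

```ruby
require 'benchmark'

# Hammer Time.at in a tight loop; this isolates a single core-class
# call, which is exactly why it says little about whole-app performance.
now = Time.now.to_i
puts Benchmark.measure {
  1_000_000.times { Time.at(now) }
}
```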
>
> So I'd say focus on app benchmarks first, general microbenchmarks
> second, and nontrivial domain-specific microbenchmarks and trivial
> domain-specific benchmarks last.
>
> - Charlie
>
> ---------------------------------------------------------------------
> To unsubscribe from this list please visit:
>
>    http://xircles.codehaus.org/manage_email
>
>
