We haven't run tests between releases precisely because it is "tricky
getting usable numbers for comparison".  The existing tests were fairly
arbitrary in how they were selected - I was mostly trying to develop a
model we could follow and wasn't really trying to do much else.

> To start out, I'm thinking there could at least be a test for each new
performance fix going forward unless that fix hits on an area that already
has performance coverage.

That sounds sensible to me.
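
To make that concrete, here's roughly what I'd picture one of those
per-fix benchmarks looking like under JMH, using Ted's addV/addVertex
example.  This is just a sketch - the AddVertexBenchmark class doesn't
exist anywhere yet and the TinkerGraph setup is only illustrative:

    import org.apache.tinkerpop.gremlin.structure.Graph;
    import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;
    import org.openjdk.jmh.annotations.*;

    import java.util.concurrent.TimeUnit;

    // Hypothetical example - not an existing test in the codebase.
    @State(Scope.Thread)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 5)
    @Measurement(iterations = 10)
    @Fork(1)
    public class AddVertexBenchmark {

        private Graph graph;

        @Setup(Level.Iteration)
        public void setup() {
            graph = TinkerGraph.open();
        }

        @TearDown(Level.Iteration)
        public void tearDown() throws Exception {
            graph.close();
        }

        @Benchmark
        public Object addVertex() {
            // return the vertex so JMH can't dead-code eliminate the call
            return graph.addVertex("name", "jmh");
        }
    }

JMH's standard output for something like that is just per-op timings, so
whatever we end up wiring into Jenkins/Travis for tracking trends would
have straightforward numbers to diff between builds.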

On Tue, Dec 1, 2015 at 9:43 AM, Ted Wilmes <[email protected]> wrote:

> I think the move to JMH is a good idea.  How have you guys been using the
> existing tests? For example, have you generated the reports at various
> points and then done comparisons between releases?  It would be neat to
> have the tests run along with the Travis build but it could be tricky
> getting usable numbers for comparison between builds since we don't have
> any control over that box (other processes running, etc.).  I think there
> could be some benefit even initially if we converted all or some set of the
> existing tests, and then did a build on a known box (one of our own) around
> code freeze time to see if we introduced any new, unintended issues since
> the last release.  We'd keep all the reports as we did these builds for
> comparison purposes and, at the same time, work towards more automation.
> Jenkins has a pretty slick-looking JMH plugin that helps you track
> performance trends and detect regressions. [1]  I haven't found one for
> Travis yet, but the JMH
> output is pretty straightforward so it might not be that hard to put one
> together.
>
> I think the other issue is how we want to build out the performance tests
> from a coverage standpoint.  I don't think it would be too much trouble
> to convert the existing tests to JMH but Stephen, I like your idea of
> evaluating them first to see if we'd like to keep/tweak them.  To start
> out, I'm thinking there could at least be a test for each new performance
> fix going forward unless that fix hits on an area that already has
> performance coverage.  For example, I would like to add one for the
> addV/addVertex work.  This would help us track areas of particular interest
> from release-to-release and eventually, build-to-build.
>
> --Ted
>
> [1] https://github.com/blackboard/jmh-jenkins
>
>
> On Tue, Dec 1, 2015 at 5:40 AM, Stephen Mallette <[email protected]>
> wrote:
>
> > Yeah, I think that's a bit too heavy and coarse-grained for our purposes.
> > I was thinking more in terms of micro-benchmarks for our internal use.
> >
> > Btw, is your implementation such that any TP3-compliant database could
> > work with the LDBC?
> >
> > On Mon, Nov 30, 2015 at 11:49 PM, Jonathan Ellithorpe <[email protected]>
> > wrote:
> >
> > > This might be far more heavy-weight than what you are looking for, but
> > > I've been working on implementing the LDBC Social Network Benchmark for
> > > TP3:
> > >
> > > http://ldbcouncil.org/developer/snb
> > >
> > > Jonathan
> > >
> > >
> > > On Mon, Nov 30, 2015 at 8:28 AM Stephen Mallette <[email protected]>
> > > wrote:
> > >
> > > > I had long ago built in a model for doing "performance tests" that
> > > > used:
> > > >
> > > > https://github.com/carrotsearch/junit-benchmarks
> > > >
> > > > I thought we would at some point in the future build those out further
> > > > but that hasn't really happened.  I'd probably just not worry about
> > > > them at this point, but while talking to Ted about it, I learned that
> > > > carrotsearch has stopped development on that project and is instead
> > > > directing folks to use JMH:
> > > >
> > > > http://openjdk.java.net/projects/code-tools/jmh/
> > > >
> > > > I think we should just consider dropping the carrotsearch tests in
> > > > light of this - perhaps do a review to see if there are any tests worth
> > > > moving to unit or integration tests.  Then we can consider a better
> > > > model for performance testing with JMH (or something else) going
> > > > forward.
> > > >
> > > > Thoughts?
> > > >
> > >
> >
>
