Hi Kirk,

What we're thinking of putting in geode-benchmarks are new, multi-host
benchmarks of the full system through the public APIs, not microbenchmarks.
We weren't planning on doing anything with the JMH benchmarks at the moment.
I agree with you that those should stay in the geode module they are testing,
since they generally microbenchmark internal APIs of that module.
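
(Purely as an illustration of the kind of module-local JMH microbenchmark
being discussed, not code from the Geode tree: a hypothetical, self-contained
class comparing two ways of building a log message might look like the
sketch below.)

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Hypothetical example; the class name and the operations measured are
// illustrative only, not taken from any Geode module.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LogMessageFormattingBenchmark {

  private final String memberName = "server-1";
  private final int connectionCount = 42;

  // Measures eager String.format, paid even when the log statement is filtered out.
  @Benchmark
  public String eagerFormat() {
    return String.format("member %s has %d connections", memberName, connectionCount);
  }

  // Plain concatenation, for comparison against the formatted version.
  @Benchmark
  public String concatenation() {
    return "member " + memberName + " has " + connectionCount + " connections";
  }
}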

I appreciate you bringing those up though - I would like to get to the
point where we are running those microbenchmarks in CI as well!

-Dan

On Fri, Nov 16, 2018 at 9:07 AM Kirk Lund <kl...@apache.org> wrote:

> That makes sense for some benchmarks but not others. For example, while
> working on the Logging changes, I wrote some benchmarks that directly use
> some new internal code to ensure that the new changes perform well.
>
> +1 to creating a benchmarks repo that has general perf tests that will be
> run in the pipelines
>
> -1 to getting rid of benchmarks from geode-core or any other submodule
> because this will discourage developers from writing benchmarks specific to
> new code as they write it -- we shouldn't be forced to write benchmarks
> AFTER we commit to the main geode repo (or worse, after a release)
>
> On Thu, Nov 15, 2018 at 10:47 AM Dan Smith <dsm...@pivotal.io> wrote:
>
> > Hi all,
> >
> > We (Naba, Sean, Brian and I) would like to add some benchmarks for geode,
> > with a goal of eventually running them as part of our Concourse build and
> > detecting performance changes.
> >
> > We think it makes sense to put these benchmarks in a separate
> > geode-benchmarks repository. That will make them less tightly coupled to a
> > specific revision of geode. What do you all think? If folks are okay with
> > this, I will go ahead and create the repository.
> >
> > We have some prototype code in this repository with a simple client/server
> > put benchmark: https://github.com/upthewaterspout/geode-performance.
> >
> > -Dan
> >
>
