Thanks for the demo, Eric.  It was nice to see.

We already have https://github.com/apache/solr/tree/main/solr/benchmark --
which has a number of useful benchmarks.  It uses JMH, which is a
lower-level framework compared to Gatling.  The foundation is strong --
modern, with IDE support, and widely used (so LLMs may know of it).  If
you've worked on something with performance implications (as I just did
with collapsing), I strongly encourage using this specific benchmark
framework to evaluate the results.  If I only have, say, X hours to invest
in benchmarking something, I am going to pick one place -- not more than
one -- to benchmark what I'm interested in.  And I recommend solr/benchmark
be it.

What solr/benchmark lacks is continuous benchmarking (i.e. longitudinal)
across commits, time, or versions (we don't need all three; versions alone
are a sufficient MVP).  I view this as a higher-level, **complementary**
concern to a foundational benchmark framework/system.  This need/requirement
need not bring about an entirely different benchmark foundation!  What's
needed is a basic high-level interface/protocol to trigger (run) a specific
benchmark (e.g. via a full terminal command) and to collect its results in
a standard way -- for an MVP, that could simply be the elapsed time.
Perhaps what you found, Eric, might be amenable to this.  I took a look; I
can imagine it maybe evolving into this.  But it presently isn't that.

~ David

On Wed, Feb 18, 2026 at 1:04 PM David Eric Pugh via dev <[email protected]>
wrote:

> Hi all, this is the repo I showed:
> https://github.com/epugh/search-benchmark-game/
> I'd love some review of the various solr setups in the
> https://github.com/epugh/search-benchmark-game/tree/master/engines directory.
> Also, if anyone is interested in bringing in the turbopuffer web UI to
> this, I'd love that.
> If we get some traction with this approach, then I'd bring up moving this
> into a proper Apache Solr repo....   Sandbox maybe?
> Eric
