Vivek Narang commented on SOLR-10317:
I am feeling better now, thanks! I took this time to think about your
suggestions, and here is my opinion.
Yes, it is a good idea to make this configurable, but there are some challenges.
Even if the backend (up to the point where the CSVs are generated) is made
dynamic (which is not as straightforward as it seems), the front end would also
have to be made dynamic, which goes against our initial agreement to keep the
front end static. Limitations on the front end are one of the reasons the
implementation is the way it is; we would need to make the front end dynamic for
this idea to be realistic. The suite is extensible even in its current state,
but to add or modify benchmarks one has to modify the classes and the front end.
That said, there are always better ways to do things, and a JSON-based
configurable suite is a good idea as an improvement!
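To illustrate what such a configurable suite could look like, here is a purely hypothetical sketch of a JSON descriptor; none of these field names exist in the current project, they are only an assumption about how benchmarks might be declared:

```json
{
  "benchmarks": [
    {
      "name": "indexing-standalone",
      "mode": "standalone",
      "operation": "indexing",
      "metrics": ["throughput", "p95-latency"],
      "outputCsv": "indexing-standalone.csv"
    },
    {
      "name": "faceting-cloud",
      "mode": "solrcloud",
      "numNodes": 3,
      "operation": "faceting",
      "metrics": ["qps", "mean-latency"],
      "outputCsv": "faceting-cloud.csv"
    }
  ]
}
```

The backend could iterate over this list to produce the CSVs, and a front end reading the same file could render charts without hardcoding benchmark names, which is exactly the dynamism discussed above.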
I want this benchmark project to be useful for the community, and I would not
recommend waiting another year to make the suggested changes. However, since I
expect these changes to be time-consuming and the deadline is just over a week
away, I recommend creating a duplicate branch of the current project and
implementing your suggestions there over the coming weeks.
> Solr Nightly Benchmarks
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
> Issue Type: Task
> Reporter: Ishan Chattopadhyaya
> Labels: gsoc2017, mentor
> Attachments: changes-lucene-20160907.json,
> changes-solr-20160907.json, managed-schema,
> Screenshot from 2017-07-30 20-30-05.png, SOLR-10317.patch, SOLR-10317.patch,
> Solr needs nightly benchmarks reporting. Similar Lucene benchmarks can be
> found here: https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr
> nodes, both in SolrCloud and standalone mode, and record timing information
> of various operations like indexing, querying, faceting, grouping,
> replication etc.
> # It should be possible to run them either as an independent suite or as a
> Jenkins job, and we should be able to report timings as graphs (Jenkins has
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr
> in some of these frameworks above. However, the benchmarks they run are very
> limited. Any of these can be a starting point, or a new framework can be used
> instead. The motivation is to be able to cover every functionality of Solr
> with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure
> [~shalinmangar] and [~markrmil...@gmail.com] would help here.
This message was sent by Atlassian JIRA
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org