[
https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16011107#comment-16011107
]
Michael Sun commented on SOLR-10317:
------------------------------------
[[email protected]] Good progress.
A couple of suggestions:
1. There is nearly a 30% difference between the max and min values in the
report. I noticed the runs were against different builds, but what is the
reason for such high variation? (See the first sketch below.)
2. In the report, it would be useful to specify the test environment and the
test parameters used. This metadata is what makes the results meaningful. (See
the metadata sketch below.)
3. Memory usage is probably not very important for this test, but it would be
good to account for it in your report. Some tests, such as faceting, can be
memory intensive. (See the heap-sampling sketch below.)
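On the first suggestion: one quick way to quantify run-to-run spread before
attributing it to build differences. A minimal sketch, nothing Solr-specific
(names are hypothetical):
{code:java}
import java.util.List;

public class SpreadCheck {
    /** Relative spread of repeated timings: (max - min) / min. */
    static double relativeSpread(List<Double> timingsMs) {
        double min = timingsMs.stream().mapToDouble(Double::doubleValue).min().getAsDouble();
        double max = timingsMs.stream().mapToDouble(Double::doubleValue).max().getAsDouble();
        return (max - min) / min;
    }

    public static void main(String[] args) {
        // 100 ms vs. 130 ms between fastest and slowest run = 30% spread,
        // roughly the variation seen in the report.
        System.out.printf("spread = %.0f%%%n",
                100 * relativeSpread(List.of(100.0, 112.0, 130.0)));
    }
}
{code}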
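On the second suggestion: the report could carry a small metadata record next
to the numbers so results stay comparable across runs. One possible shape (all
field names are hypothetical; Java 16+ record syntax):
{code:java}
import java.util.Map;

/** Hypothetical per-run metadata to attach to each benchmark report. */
public record BenchmarkMetadata(
        String solrCommit,                // commit the build under test came from
        String jvmVersion,                // e.g. System.getProperty("java.version")
        int heapMb,                       // -Xmx given to the Solr JVM
        String hardware,                  // CPU / RAM / disk of the test host
        Map<String, String> testParams) { // doc count, query set, thread count, ...
}
{code}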
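On the third suggestion: when the benchmark runs in the same JVM, heap use can
be sampled around the test as a rough proxy; for a separate Solr process the
numbers would have to come from the node itself (e.g. its metrics/JMX
endpoints). A sketch of the in-process case:
{code:java}
/** Rough heap-usage sampling around a memory-intensive test. */
public class HeapSample {
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        // ... run the faceting (or other) benchmark here ...
        long after = usedHeapBytes();
        // GC makes a single sample noisy; averaging several runs helps.
        System.out.printf("heap delta: %.1f MB%n", (after - before) / 1e6);
    }
}
{code}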
I strongly agree with [~ichattopadhyaya] that this is a 'community bonding'
project. One suggestion: a good way to start a conversation with the community
would be to write a simple design document describing the goals, the
challenges, the high-level design, and how your work addresses them. You can
share it with the community and get feedback.
> Solr Nightly Benchmarks
> -----------------------
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
> Issue Type: Task
> Reporter: Ishan Chattopadhyaya
> Labels: gsoc2017, mentor
> Attachments: Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx,
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf
>
>
> Solr needs nightly benchmark reporting. Similar benchmarks for Lucene can be
> found here: https://home.apache.org/~mikemccand/lucenebench/.
> Ideally, we need:
> # A suite of benchmarks that build Solr from a commit point, start Solr
> nodes, both in SolrCloud and standalone mode, and record timing information
> for various operations like indexing, querying, faceting, grouping,
> replication, etc. (a minimal timing sketch follows this list).
> # It should be possible to run them either as an independent suite or as a
> Jenkins job, and we should be able to report timings as graphs (Jenkins has
> some charting plugins).
> # The code should eventually be integrated into the Solr codebase, so that it
> never goes out of date.
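> As a rough illustration of the timing-capture piece from point 1 (hypothetical
> names, not an existing harness):
> {code:java}
> /** Times one benchmark operation and reports wall-clock duration. */
> public class OpTimer {
>     public static long timeMs(Runnable op) {
>         long start = System.nanoTime();
>         op.run();
>         return (System.nanoTime() - start) / 1_000_000;
>     }
>
>     public static void main(String[] args) {
>         long ms = timeMs(() -> { /* e.g. index a batch of docs via SolrJ */ });
>         System.out.println("operation took " + ms + " ms");
>     }
> }
> {code}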
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> Some of the frameworks above support building, starting, indexing/querying,
> and stopping Solr. However, the benchmarks they run are very limited. Any of
> these can serve as a starting point, or a new framework can be used instead.
> The motivation is to cover every piece of Solr functionality with a
> corresponding benchmark that runs every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure
> [~shalinmangar] and [[email protected]] would help here.