[ https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026549#comment-16026549 ]
Vivek Narang commented on SOLR-10317:
-------------------------------------
Hello [~ichattopadhyaya], inspired by the source code of SolrMeter, I have come
up with the logic for a stress test that estimates the QPS for a set of numeric
queries (as an example for now).
Please see [http://212.47.227.9/dev/NumericQueryBenchmarkCloud.html]. I am
observing a strange pattern: the QPS measured for a query matching one specific
number is the lowest, while the QPS measured for a query matching all numbers
greater than a given number is the highest. Is there an explanation for this?
QPS(Field:Number) < QPS(Field:[Number1 TO Number2]) < QPS(Field:[* TO Number]) < QPS(Field:[Number TO *])
This has been observed consistently across a set of commits, not just a single commit.
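For what it's worth, here is a minimal single-threaded sketch of this kind of QPS estimation using SolrJ. The Solr URL, field name, and query count are placeholder assumptions, and the actual benchmark linked above presumably uses concurrent query threads the way SolrMeter does; this only illustrates how queries-per-second could be measured for the four query shapes being compared.
{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class NumericQueryQpsSketch {

  // Placeholder values; a real benchmark would read these from its configuration.
  private static final String SOLR_URL = "http://localhost:8983/solr/benchmark";
  private static final String FIELD = "numeric_field";
  private static final int QUERIES_PER_SHAPE = 1000;

  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder(SOLR_URL).build()) {
      // The four query shapes whose QPS is being compared in the comment above.
      String[] shapes = {
          FIELD + ":500",            // exact numeric match
          FIELD + ":[100 TO 900]",   // bounded range
          FIELD + ":[* TO 500]",     // open lower bound
          FIELD + ":[500 TO *]"      // open upper bound
      };
      for (String q : shapes) {
        System.out.printf("%-30s %.1f QPS%n", q, measureQps(client, q));
      }
    }
  }

  // Fire the same query repeatedly and report queries per second.
  private static double measureQps(SolrClient client, String q) throws Exception {
    SolrQuery query = new SolrQuery(q);
    query.setRows(0); // only query latency matters here, not result transfer
    long start = System.nanoTime();
    for (int i = 0; i < QUERIES_PER_SHAPE; i++) {
      client.query(query);
    }
    double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
    return QUERIES_PER_SHAPE / seconds;
  }
}
{code}
One caveat with a naive loop like this: an identical query is served from Solr's queryResultCache after the first request, so a realistic test would vary the query values (or clear/disable the cache between runs); otherwise all four shapes tend toward cache-hit latency and the comparison loses meaning.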
> Solr Nightly Benchmarks
> -----------------------
>
> Key: SOLR-10317
> URL: https://issues.apache.org/jira/browse/SOLR-10317
> Project: Solr
> Issue Type: Task
> Reporter: Ishan Chattopadhyaya
> Labels: gsoc2017, mentor
> Attachments: changes-lucene-20160907.json,
> changes-solr-20160907.json, managed-schema,
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx,
> Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf,
> solrconfig.xml
>
>
> Solr needs nightly benchmark reporting. Similar benchmarks for Lucene can be
> found here: https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that builds Solr from a commit point, starts Solr
> nodes (both in SolrCloud and standalone mode), and records timing information
> for various operations such as indexing, querying, faceting, grouping,
> replication, etc.
> # It should be possible to run them either as an independent suite or as a
> Jenkins job, and we should be able to report timings as graphs (Jenkins has
> some charting plugins).
> # The code should eventually be integrated in the Solr codebase, so that it
> never goes out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md
> (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> Some of the frameworks above support building, starting, indexing/querying,
> and stopping Solr, but the benchmarks they run are very limited. Any of them
> could serve as a starting point, or a new framework could be used instead. The
> motivation is to cover every functionality of Solr with a corresponding
> benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure
> [~shalinmangar] and [[email protected]] would help here.