Re: solr perf metric for the last 1 hour (or last 10 mn)
Hi Dominique,

Unfortunately Solr doesn't support the metrics you are interested in. You can, however, have another process that makes JMX queries against the Solr process, does the required transformations, and stores the data in some kind of data store. Just make sure you are not DDoSing your Solr instances :-)

On Oct 10, 2016 11:58 AM, "Dominique De Vito" wrote:
> Hi,
> It looks like the Solr metric "avgTimePerRequest" is computed over all requests since t0 (startup time). If so, it's quite useless for, say, detecting a surge in latency within the last 10 minutes.
> Is my understanding correct?
> If so, is there a way (1) to configure Solr to compute all its metrics per period of time (say, every 10 minutes), or (2) to reset metrics through some call?
> Thanks.
> Dominique
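For illustration, a minimal sketch of such an external poller, assuming Solr was started with remote JMX enabled (e.g. -Dcom.sun.management.jmxremote.port=18983) and that the MBean object name below matches your Solr version and handler; both the port and the object name are assumptions to verify with jconsole against your own instance:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SolrJmxPoller {
    public static void main(String[] args) throws Exception {
        // Connect to the Solr JVM over remote JMX (port is an assumption).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:18983/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Hypothetical object name for the /select handler of one core;
            // the exact name varies by Solr version -- browse it with jconsole first.
            ObjectName handler = new ObjectName(
                    "solr/mycore:type=/select,id=org.apache.solr.handler.component.SearchHandler");

            // The counters are cumulative since core startup, so poll them
            // periodically and diff successive samples yourself to derive
            // per-interval (e.g. last 10 minutes) request rates and latency.
            Object requests = mbs.getAttribute(handler, "requests");
            Object avgTime  = mbs.getAttribute(handler, "avgTimePerRequest");
            System.out.println("requests=" + requests + " avgTimePerRequest=" + avgTime);
        }
    }
}
```

Run it on a schedule (cron, a timer thread, etc.) at a modest interval so the extra admin traffic stays negligible compared to real query load.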
Re: solr perf metric for the last 1 hour (or last 10 mn)
On 10/10/2016 9:58 AM, Dominique De Vito wrote:
> It looks like the Solr metric "avgTimePerRequest" is computed with
> requests from t0 (startup time).

The percentile metrics (available in 4.1 and later, if memory serves) are generally far more useful than the average time.

> If so, is there a way (1) to configure Solr to compute all its metrics
> per period of time (let's say every 10 mn)

No, all the metrics (even the percentiles) are calculated since the core started.

> (2) to reset metrics through some (?) call

Reload the core (collection if running in cloud mode). This creates a whole new SolrCore object and all stats reset to zero.

Thanks,
Shawn
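A hedged sketch of the reload call Shawn describes, using SolrJ; the base URL and core name are placeholders, and the CoreAdminRequest helper should be checked against the SolrJ release you actually run:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class ResetSolrStats {
    public static void main(String[] args) throws Exception {
        // Standalone mode: reloading the core builds a fresh SolrCore object,
        // which resets the cumulative request statistics back to zero.
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            CoreAdminRequest.reloadCore("mycore", client);  // "mycore" is a placeholder
        }
    }
}
```

The same effect can be had with a plain HTTP request to the CoreAdmin handler (action=RELOAD), or with a collection RELOAD in cloud mode.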
solr perf metric for the last 1 hour (or last 10 mn)
Hi,

It looks like the Solr metric "avgTimePerRequest" is computed over all requests since t0 (startup time). If so, it's quite useless for, say, detecting a surge in latency within the last 10 minutes.

Is my understanding correct?

If so, is there a way (1) to configure Solr to compute all its metrics per period of time (say, every 10 minutes), or (2) to reset metrics through some call?

Thanks.
Dominique
Re: solr perf
Not bad advice ;-)

2009/12/20 Walter Underwood wun...@wunderwood.org

> Here is an idea. Don't make one core per user. Use a field with a user id.
>
> wunder
>
> On Dec 20, 2009, at 12:38 PM, Matthieu Labour wrote:
>
>> Hi
>> I have a Solr instance in which I created 700 cores, 1 core per user of my application. The total size of the data indexed on disk is 35GB, with Solr cores ranging from 100KB and a few documents to 1.2GB and 50,000 documents. Searching seems very slow, and indexing as well. This is running on an EC2 extra-large instance (6 CPU, 15GB memory, RAID0 disk). I would appreciate it if anybody has some tips, articles, etc. on what to do to understand and improve performance.
>> Thank you

--
Lici
~Java Developer~
Re: solr perf
Have you tried loading Solr cores as you need them and unloading those that are not being used? I wish I could help more; I don't know many people running that many user cores.

didier

On Sun, Dec 20, 2009 at 2:38 PM, Matthieu Labour matth...@strateer.com wrote:

> Hi
> I have a Solr instance in which I created 700 cores, 1 core per user of my application. The total size of the data indexed on disk is 35GB, with Solr cores ranging from 100KB and a few documents to 1.2GB and 50,000 documents. Searching seems very slow, and indexing as well. This is running on an EC2 extra-large instance (6 CPU, 15GB memory, RAID0 disk). I would appreciate it if anybody has some tips, articles, etc. on what to do to understand and improve performance.
> Thank you
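For what it's worth, a minimal SolrJ sketch of unloading an idle per-user core, assuming a standalone setup; the core name is a placeholder and the CoreAdminRequest helper should be checked against the SolrJ version in use:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class UnloadIdleCore {
    public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // Unload a core that has not been queried recently. By default the
            // index files stay on disk, so the core can be registered again
            // later when that user becomes active.
            CoreAdminRequest.unloadCore("user-12345-core", client);  // placeholder core name
        }
    }
}
```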
solr perf
Hi

I have a Solr instance in which I created 700 cores, 1 core per user of my application. The total size of the data indexed on disk is 35GB, with Solr cores ranging from 100KB and a few documents to 1.2GB and 50,000 documents.

Searching seems very slow, and indexing as well. This is running on an EC2 extra-large instance (6 CPU, 15GB memory, RAID0 disk).

I would appreciate it if anybody has some tips, articles, etc. on what to do to understand and improve performance.

Thank you
Re: solr perf
Here is an idea. Don't make one core per user. Use a field with a user id.

wunder

On Dec 20, 2009, at 12:38 PM, Matthieu Labour wrote:

> Hi
> I have a Solr instance in which I created 700 cores, 1 core per user of my application. The total size of the data indexed on disk is 35GB, with Solr cores ranging from 100KB and a few documents to 1.2GB and 50,000 documents. Searching seems very slow, and indexing as well. This is running on an EC2 extra-large instance (6 CPU, 15GB memory, RAID0 disk). I would appreciate it if anybody has some tips, articles, etc. on what to do to understand and improve performance.
> Thank you
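A rough SolrJ illustration of Walter's suggestion: index everything into one shared core and restrict each search with a filter on a hypothetical user_id field. The base URL, core name, and field names are placeholders, not anything prescribed by the thread:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class SharedCorePerUserField {
    public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/shared").build()) {
            // Indexing: every document carries its owner's id in a user_id field.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("user_id", "user-12345");   // hypothetical field
            doc.addField("body", "some content");
            client.add(doc);
            client.commit();

            // Querying: a filter query restricts results to one user and is
            // cached in the filter cache, so repeated per-user searches stay cheap.
            SolrQuery query = new SolrQuery("body:content");
            query.addFilterQuery("user_id:user-12345");
            QueryResponse rsp = query != null ? client.query(query) : null;
            System.out.println("hits=" + rsp.getResults().getNumFound());
        }
    }
}
```

One core with a user filter avoids the per-core memory and file-handle overhead that hundreds of tiny cores impose on a single 15GB machine.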