Erick,
- I think I checked that my QueryResultsCache and DocumentCache ratios
were close to 1. I will double check that by repeating my test.
- I think checking the QTimes in the log is a very good suggestion; I will
also check that the next time I run my test.
- It is not possible, as the client is just a Java client program that
simply fires the queries using a REST client API (rough sketch below).
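(For context, the load generator is roughly the sketch below; the endpoint,
query, and thread count are placeholders here, not the exact code we run.)

    import java.net.URL;
    import java.util.Scanner;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SolrQueryLoad {
        // Placeholder URL; the real test points at our collection and query.
        static final String QUERY_URL =
            "http://localhost:8983/solr/mycollection/select?q=id:123&wt=json";

        public static void main(String[] args) throws Exception {
            int threads = Integer.parseInt(args[0]);   // e.g. 1, 10, 50
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    while (true) {
                        long start = System.nanoTime();
                        try (Scanner s = new Scanner(new URL(QUERY_URL).openStream(), "UTF-8")
                                .useDelimiter("\\A")) {
                            s.next();                  // read and discard the response body
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                        long millis = (System.nanoTime() - start) / 1_000_000;
                        System.out.println(Thread.currentThread().getName() + ": " + millis + " ms");
                    }
                });
            }
        }
    }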

Is there any way that SOLR publishes its thread pool statistics?

For example, in Cassandra you have a command like nodetool tpstats, which
prints a nice table of stats for all the thread pools involved:
how many jobs are pending, and so on.
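If the Metrics API (added in Solr 6.4, as far as I know) is available in the
version we are running, I could try dumping the "jetty" metrics group, which I
believe includes the request thread pool gauges. A rough, unverified sketch:

    public class SolrTpStats {
        public static void main(String[] args) throws Exception {
            // Assumes the /admin/metrics endpoint (Solr 6.4+) and that the
            // "jetty" group covers the Jetty request thread pool.
            String url = "http://localhost:8983/solr/admin/metrics?group=jetty&wt=json";
            try (java.util.Scanner s = new java.util.Scanner(
                    new java.net.URL(url).openStream(), "UTF-8").useDelimiter("\\A")) {
                System.out.println(s.next());          // raw JSON dump of the metrics
            }
        }
    }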


Thanks
Suresh

On 3/23/17 9:33 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:

>I'd check my I/O. Since you're firing the same query, I expect that
>you aren't I/O bound at all, since, as you say, the docs should
>already be in memory. This assumes that your document cache size is > 0.
>You can check this. Go to the admin UI, select one of your cores
>(not collection) and go to plugins/stats. You should see the
>documentCache as one of the entries and you should be hitting an
>insane hit ratio close to 100% as your test runs.
>
>Also check your queryResultCache. That also should be near 100% in
>your test. Do note that these caches really never hit this "for real",
>but as you indicated this is a highly artificial test so such high hit
>ratios are what I'd expect.
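>
>If you'd rather script those checks than click through the UI, I _think_
>hitting the core's mbeans handler with stats=true returns the same cache
>numbers (the core name below is just a placeholder):
>
>    public class SolrCacheStats {
>        public static void main(String[] args) throws Exception {
>            // Dump cache stats (lookups, hits, hitratio, ...) for one core.
>            String url = "http://localhost:8983/solr/mycore_shard1_replica1"
>                + "/admin/mbeans?stats=true&cat=CACHE&wt=json";
>            try (java.util.Scanner s = new java.util.Scanner(
>                    new java.net.URL(url).openStream(), "UTF-8").useDelimiter("\\A")) {
>                System.out.println(s.next());
>            }
>        }
>    }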
>
>Assuming that those caches are being hit near 100%, Solr really isn't
>doing any work to speak of so there almost has to be some kind of
>queueing going on.
>
>The fact that your CPU is only running 8-10% is an indication that
>your requests are queued up somewhere, but where I have no clue. The
>Jetty thread pool is quite large.  What are the QTimes reported in the
>responses? My guess is that the QTime stays pretty constant (and very
>low) even as your response time increases, another indication that
>you're queueing.
>
>Hmmm, is it possible that the queueing is on the _client_ side?
>What aggregate throughput do you get if you fire up 10 _clients_ each
>with one thread rather than 1 client and 10 threads? That's a shot in
>the dark, but worth a try I suppose. And how does your client fire
>queries? SolrJ? Http? Jmeter or the like?
>
>But yeah, this is weird. Since you're firing the same query, Solr
>isn't really doing any work at all.
>
>Best,
>Erick
>
>On Thu, Mar 23, 2017 at 7:56 PM, Aman Deep Singh
><amandeep.coo...@gmail.com> wrote:
>> You can play with the merge factor in the index config.
>> If there are no frequent updates, then make it 2; it will give you high
>> throughput and lower latency.
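>>
>> Something along these lines in solrconfig.xml, if I remember the Solr 6
>> syntax correctly (treat this as a sketch, not exact config):
>>
>>   <indexConfig>
>>     <!-- Fewer segments per tier => fewer segments to search, at the cost
>>          of more merge work when documents are updated. -->
>>     <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
>>       <int name="maxMergeAtOnce">2</int>
>>       <int name="segmentsPerTier">2</int>
>>     </mergePolicyFactory>
>>   </indexConfig>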
>>
>> On 24-Mar-2017 8:22 AM, "Zheng Lin Edwin Yeo" <edwinye...@gmail.com> wrote:
>>
>>> I also found that beyond 10 threads for an 8GB heap size, there isn't
>>> much improvement in performance. But you can increase your heap size a
>>> little if your system allows it.
>>>
>>> By the way, which Solr version are you using?
>>>
>>> Regards,
>>> Edwin
>>>
>>>
>>> On 24 March 2017 at 09:21, Matt Magnusson <magnuss...@gmail.com> wrote:
>>>
>>> > Out of curiosity, what is your index size? I'm trying to do something
>>> > similar to maximize output. I'm currently looking at streaming
>>> > expressions, which I'm seeing some interesting results for; I'm also
>>> > finding that the direct mass-query route seems to hit a wall for
>>> > performance. I'm also finding that about 10 threads seems to be the
>>> > optimum number.
>>> >
>>> > On Thu, Mar 23, 2017 at 8:10 PM, Suresh Pendap <spen...@walmartlabs.com>
>>> > wrote:
>>> > > Hi,
>>> > > I am new to SOLR search engine technology and I am trying to get
>>> > > some performance numbers to find the maximum throughput from a SOLR
>>> > > cluster of a given size.
>>> > > I am currently doing only query load testing, in which I randomly
>>> > > fire a bunch of queries at the SOLR cluster to generate the query
>>> > > load.  I understand that it is not the ideal workload, as ongoing
>>> > > ingestion and commits invalidate the Solr caches, so it is advisable
>>> > > to run the query load while some documents are also being ingested.
>>> > >
>>> > > The SOLR cluster was made up of 2 shards and 2 replicas, so there
>>> > > were a total of 4 replicas serving the queries. The SOLR nodes were
>>> > > running in an LXD container with 12 cores and 88GB RAM.
>>> > > The heap size allocated was 8g min and 8g max. All the other SOLR
>>> > > configurations were the defaults.
>>> > >
>>> > > The client node was running on an 8 core VM.
>>> > >
>>> > > I performed the test with 1 client thread, 10 client threads, and 50
>>> > > client threads.  I noticed that as I increased the number of threads,
>>> > > the query latency kept increasing drastically, which I was not
>>> > > expecting.
>>> > >
>>> > > Since my initial test was randomly picking queries from a file, I
>>> > > decided to keep things constant and ran the program so that it fired
>>> > > the same query again and again. Since it is the same query, all the
>>> > > documents will be in the cache and the query response time should be
>>> > > very fast. I was also expecting that with 10 or 50 client threads, the
>>> > > query latency should not increase.
>>> > >
>>> > > The throughput increased only up to 10 client threads; it was then the
>>> > > same for 50 threads and 100 threads, and the latency of the query kept
>>> > > increasing as I increased the number of threads.
>>> > > The query was returning only 2 documents.
>>> > >
>>> > > The table below summarizes the numbers that I was seeing with a
>>> > > single query.
>>> > >
>>> > > #Client Nodes   #Threads   99 pct Latency   95 pct Latency   Throughput     CPU
>>> > > 1               1          9 ms             7 ms             180 reqs/sec   8%
>>> > > 1               10         400 ms           360 ms           360 reqs/sec   10%
>>> > >
>>> > > Server configuration for both runs: heap size ms=8g, mx=8g; all other
>>> > > settings at the defaults.
>>> > >
>>> > > I also ran the client program on the SOLR server node itself in
>>> > > order to rule out the network latency factor. On the server node the
>>> > > response time was also higher with 10 threads, but the amplification
>>> > > was smaller.
>>> > >
>>> > > I am getting the impression that my query requests are probably
>>> > > getting queued up and throttled by some thread pool size limit on
>>> > > the server side.  However, I saw that the default jetty.xml has a
>>> > > thread pool with a min size of 10 and a max of 10000.
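>>> > >
>>> > > For reference, the section of server/etc/jetty.xml I was looking at is
>>> > > roughly the following (quoting from memory, so the exact form may
>>> > > differ by version):
>>> > >
>>> > >   <Get name="ThreadPool">
>>> > >     <!-- defaults: min 10 threads, max 10000 threads -->
>>> > >     <Set name="minThreads" type="int">
>>> > >       <Property name="solr.jetty.threads.min" default="10"/>
>>> > >     </Set>
>>> > >     <Set name="maxThreads" type="int">
>>> > >       <Property name="solr.jetty.threads.max" default="10000"/>
>>> > >     </Set>
>>> > >   </Get>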
>>> > >
>>> > > Is there any other internal SOLR thread pool configuration which
>>> > > might be limiting the query response time?
>>> > >
>>> > > I wanted to check with the community: is what I am seeing abnormal
>>> > > behavior, and what could be the issue?  Is there any configuration
>>> > > that I can tweak to get better query response times under more load?
>>> > >
>>> > > Regards
>>> > > Suresh
>>> > >
>>> >
>>>
>
