The short form is "add more replicas", assuming you're using SolrCloud.

If older-style master/slave, then "add more slaves". Solr request processing
scales pretty linearly with the number of replicas (or slaves).
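If you want to script it, here's a minimal sketch of adding a replica
through the SolrCloud Collections API (ADDREPLICA). The host, collection,
and shard names are placeholders for whatever your setup actually uses:

# Sketch: ask SolrCloud to place one more replica of a shard.
# "mycollection" and "shard1" are hypothetical names.
import requests

SOLR = "http://localhost:8983/solr"  # any node in the cluster

resp = requests.get(
    SOLR + "/admin/collections",
    params={
        "action": "ADDREPLICA",
        "collection": "mycollection",
        "shard": "shard1",
        "wt": "json",
    },
)
resp.raise_for_status()
print(resp.json())  # print the API response so you can see where the replica landed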

Note that this is _not_ adding shards (assuming SolrCloud). You usually add
shards when your response time under light load is unacceptable, indicating
that you need fewer documents in each shard.
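If you ever do need fewer documents per shard, the Collections API also has
SPLITSHARD. A sketch along the same lines (again, the names are placeholders,
and splitting rewrites the index, so do it off-peak):

# Sketch: split one shard of a SolrCloud collection into two sub-shards.
# "mycollection" and "shard1" are hypothetical names.
import requests

SOLR = "http://localhost:8983/solr"

resp = requests.get(
    SOLR + "/admin/collections",
    params={
        "action": "SPLITSHARD",
        "collection": "mycollection",
        "shard": "shard1",
        "wt": "json",
    },
)
resp.raise_for_status()
print(resp.json())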

Binoy's question needs to be answered before any but the most general
advice is possible: what is your setup? What version of Solr? How many
docs? How many shards? Etc.

Best,
Erick

On Thu, Feb 4, 2016 at 7:06 AM, Binoy Dalal <binoydala...@gmail.com> wrote:
> What is your Solr setup -- nodes/shards/specs?
> 7221 requests/min is a lot, so it's likely that your Solr setup simply isn't
> able to support this kind of load. The requests time out, which is why you
> keep seeing the timeout and connect exceptions.
>
> On Thu, 4 Feb 2016, 20:30 Tiwari, Shailendra <
> shailendra.tiw...@macmillan.com> wrote:
>
>> Hi All,
>>
>> We did our first load test on our Search (Solr) API and started to see some
>> errors after 2000 users. The errors would go away after 30 seconds but kept
>> happening frequently. The errors were "java.net.SocketTimeoutException" and
>> "org.apache.http.conn.HttpHostConnectException". We were using JMeter to
>> run the load test, and a total of 15 different search terms were used to
>> exercise the API. The total request rate was 7221/min.
>> We are using Apache/RedHat.
>> We want to scale up to 4000 users. What's the recommendation to get there?
>>
>> Thanks
>>
>> Shail
>>
> --
> Regards,
> Binoy Dalal
