Great, I must have missed that.
On Wed, Dec 23, 2015 at 9:41 AM, Jeff Wartes wrote:
> Looks like it’ll set partialResults=true on your results if you hit the
> timeout.
>
> https://issues.apache.org/jira/browse/SOLR-502
>
> https://issues.apache.org/jira/browse/SOLR-5986
Looks like it’ll set partialResults=true on your results if you hit the
timeout.
https://issues.apache.org/jira/browse/SOLR-502
https://issues.apache.org/jira/browse/SOLR-5986
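To make that concrete, here is a minimal Python sketch of checking that flag on the client side. The response body below is illustrative only (made-up values, not taken from this thread); the one behavior it relies on is that Solr adds "partialResults": true to the responseHeader when timeAllowed expires, and omits the key on complete results.

```python
import json

# Illustrative Solr JSON response body (values are made up for this sketch).
# When timeAllowed expires mid-query, Solr sets "partialResults": true in
# the responseHeader; on complete results the key is simply absent.
raw = """
{"responseHeader": {"status": 0, "QTime": 107, "partialResults": true,
                    "params": {"q": "*:*", "timeAllowed": "1000"}},
 "response": {"numFound": 42, "start": 0, "docs": []}}
"""

def is_partial(response: dict) -> bool:
    # Use .get() so complete responses (no key at all) default to False.
    return bool(response.get("responseHeader", {}).get("partialResults", False))

data = json.loads(raw)
print("partial:", is_partial(data))  # → partial: True
```

The same check works on a complete response: with no "partialResults" key in the header, `is_partial` returns False.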
On 12/22/15, 5:43 PM, "Vincenzo D'Amore" wrote:
>Well... I can write everything, but
On 12/23/2015 01:42 AM, William Bell wrote:
> I agree that when using timeAllowed in the header info there should be an
> entry that indicates timeAllowed triggered.
If I'm not mistaken, there is
=> partialResults:true
"responseHeader":{ "partialResults":true }
We need to know a LOT more about your site. Number of documents, size of index,
frequency of updates, length of queries, approximate size of server (CPUs, RAM,
type of disk), version of Solr, version of Java, and features you are using
(faceting, highlighting, etc.).
After that, we’ll have more
Hi All,
my website is under pressure; there is a large number of concurrent searches.
When too many users are connected, the searches become so slow that in
some cases users have to wait many seconds.
The queue of searches becomes so long that, in some cases, servers are
blocked trying to
Well... I can write everything, but really, is all this just to understand
when the timeAllowed
parameter triggers a partial answer? I mean, isn't there anything set in the
response when it is partial?
On Wed, Dec 23, 2015 at 2:38 AM, Walter Underwood
wrote:
> We need to know a LOT
I agree that when using timeAllowed in the header info there should be an
entry that indicates timeAllowed triggered.
This is the only reason why we have not used timeAllowed. So this is a
great suggestion. Something like: 1 ??
That would be great.
[Response snippet with XML tags stripped: 0, 1, 107, *:*, 1000 — apparently a responseHeader with status 0, QTime 107, q=*:*, and timeAllowed=1000]
On Tue, Dec 22, 2015 at
timeAllowed was designed to handle queries that by themselves consume lots
of resources, not to try to handle situations with large numbers of
requests that starve other requests from accessing CPU and I/O resources.
The usual technique for handling large numbers of requests is replication,