>> What is your data size? Is it around 1 TB like ours? Do you search or
>> index frequently? Do you use an NRT model?
My question is: why does the replica go into recovery? When the replica went
down, I checked the GC log, but the GC pause
Those are extremely large JVMs. Unless you have proven that you MUST
have 55 GB of heap, use a smaller heap.
I’ve been running Solr for a dozen years and I’ve never needed a heap
larger than 8 GB.
Also, there is usually no need to run one JVM per replica. Your
configuration is using 110 GB (two 55 GB heaps) just for Solr's heap.
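As a sketch, a smaller fixed-size heap can be set in solr.in.sh (the values here are illustrative, matching the 8 GB figure above, not taken from the original poster's setup):

```shell
# solr.in.sh -- start Solr with a modest fixed heap instead of 55 GB.
# 8 GB is the size mentioned in this thread; tune against your own GC logs.
SOLR_HEAP="8g"

# Equivalent explicit JVM flags, if you prefer to set SOLR_JAVA_MEM instead:
# SOLR_JAVA_MEM="-Xms8g -Xmx8g"
```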
I _think_ it will run all 3 and then do index hopping. But if you know one
fq is super expensive, you could assign it a cost.
A value over 100 will make Solr try to use a PostFilter, applying that
query on top of the results from the other queries.
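For example, a sketch using the filter syntax from this thread (the field name expensiveField is a made-up placeholder): giving one fq a cost above 100 asks Solr to run it as a post filter over the results of the cheaper filters:

```
fq={!cache=false cost=50}companyId:22476
fq={!cache=false cost=200}expensiveField:someValue
```

Note that only query types implementing the PostFilter interface can actually run as post filters; for the rest, cost is just a hint about filter execution order.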
On 7/8/2020 3:36 PM, gnandre wrote:
I am using the Solr Docker image 8.5.2-slim from https://hub.docker.com/_/solr.
I use it as a base image and then add some more things to it with my custom
Dockerfile. When I build the final Docker image, the build succeeds.
After that, when I try to use it
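A minimal version of that setup (the copied configset name and path are hypothetical placeholders, not from the original message) might look like:

```dockerfile
# Custom image built on top of the official Solr slim image.
FROM solr:8.5.2-slim

# Hypothetical addition: bake a custom configset into the image.
COPY --chown=solr:solr myconfig /opt/solr/server/solr/configsets/myconfig
```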
Hi all! In a collection with ~54 million documents, we've noticed something
odd when running a query with the following filter queries:
"fq":["{!cache=false}_class:taggedTickets",
"{!cache=false}taggedTickets_ticketId:100241",
"{!cache=false}companyId:22476"]
when I debugQuery I see:
Hi,
Usually, limit=-1 works as a single pass through the index, accumulating
counts; but limit > 0 causes collecting a docset per value, which might take
longer. There's a note about this effect in the uniqueBlock() description.
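For context, a JSON Facet request along these lines is where that limit setting applies (field names are borrowed from the earlier message in this digest; the facet names are made up):

```json
{
  "query": "_class:taggedTickets",
  "limit": 0,
  "facet": {
    "byCompany": {
      "type": "terms",
      "field": "companyId",
      "limit": -1,
      "facet": { "tickets": "uniqueBlock(_root_)" }
    }
  }
}
```

With "limit": -1 the terms facet returns every bucket in one pass; with a positive limit it collects per-value docsets, which is the slower path described above.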
On Wed, Jul 8, 2020 at 11:29 AM ana wrote:
> Hi Team,
> Which is more