[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236706#comment-16236706 ]
ASF subversion and git services commented on SOLR-11423:
--------------------------------------------------------
Commit 0637407ea4bf3a49ed5bdabadcfee650a8e0a200 in lucene-solr's branch
refs/heads/master from [~dragonsinth]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0637407 ]
SOLR-11423: fix typo
> Overseer queue needs a hard cap (maximum size) that clients respect
> -------------------------------------------------------------------
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: SolrCloud
> Reporter: Scott Blum
> Assignee: Scott Blum
> Priority: Major
>
> When Solr gets into pathological GC thrashing states, it can fill the
> overseer queue with literally thousands and thousands of queued state
> changes. Many of these end up being duplicated up/down state updates. Our
> production cluster has gotten to the 100k queued items level many times, and
> there's nothing useful you can do at this point except manually purge the
> queue in ZK. Recently, it hit 3 million queued items, at which point our
> entire ZK cluster exploded.
> I propose a hard cap. Any client trying to enqueue an item when the queue is
> full would throw an exception. I was thinking maybe 10,000 items would be a
> reasonable limit. Thoughts?
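
The fail-fast behavior proposed above could be sketched roughly as follows. This is a minimal illustration, not Solr's actual DistributedQueue (which is backed by ZooKeeper); the class name, cap value, and exception type are assumptions for the sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Minimal sketch (hypothetical, not Solr's implementation) of a queue with a
 * hard cap: clients attempting to enqueue past the cap fail immediately with
 * an exception instead of growing the backlog without bound.
 */
class CappedQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int maxSize; // e.g. the 10,000 suggested in the issue

    CappedQueue(int maxSize) {
        this.maxSize = maxSize;
    }

    // Reject the enqueue when the queue is at capacity.
    synchronized void offer(T item) {
        if (items.size() >= maxSize) {
            throw new IllegalStateException(
                "Queue full (" + maxSize + " items); rejecting enqueue");
        }
        items.addLast(item);
    }

    synchronized int size() {
        return items.size();
    }
}
```

The key design point is that the producer, not the queue's consumer (the Overseer), absorbs the failure, which prevents a GC-thrashing node from flooding shared ZK state.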
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]