[
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393730#comment-16393730
]
Thanks Varun. I agree 30 seconds is less. Actually, I found that with HDFS the
timeout was autoReplicaFailoverBadNodeExpiration (default 60s) +
autoReplicaFailoverWaitAfterExpiration (default 30s). We deprecated
autoReplicaFailoverBadNodeExpiration value but did not add it to the default
autoReplicaFailoverWaitAfterExpiration. So the timeout should be 90 seconds at
least. I think we should be conservative here and set this to a higher value,
say 120s.
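
For anyone who wants the longer wait before a new default ships, here is a
minimal sketch of overriding waitFor on the internal .auto_add_replicas
trigger through the autoscaling API. The host/port, the 120s value, and the
action list are illustrative assumptions, not the final defaults:

  curl -X POST -H 'Content-Type: application/json' \
    http://localhost:8983/solr/admin/autoscaling -d '{
    "set-trigger": {
      "name": ".auto_add_replicas",
      "event": "nodeLost",
      "waitFor": "120s",
      "enabled": true,
      "actions": [
        {"name": "auto_add_replicas_plan", "class": "solr.AutoAddReplicasPlanAction"},
        {"name": "execute_plan", "class": "solr.ExecutePlanAction"}
      ]
    }
  }'

Note this changes the trigger cluster-wide, so it affects every collection
created with autoAddReplicas=true, not just one.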
> AutoAddReplicas default 30 second wait time is too low
> ------------------------------------------------------
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Varun Thacker
> Assignee: Shalin Shekhar Mangar
> Priority: Major
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core
> and pointing it at the same index directory.
> But for non-shared file systems this is a very expensive operation that can
> potentially move large indexes around, so maybe we should have a higher
> default.
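
For context, autoAddReplicas is enabled per collection at creation time via
the Collections API; a minimal sketch (host, collection name, and shard and
replica counts are illustrative):

  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test_collection&numShards=2&replicationFactor=2&autoAddReplicas=true'

Collections created this way are acted on by the cluster-wide
.auto_add_replicas nodeLost trigger, whose waitFor is the 30-second default
under discussion.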