[
https://issues.apache.org/jira/browse/SOLR-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
jawlit patel updated SOLR-14235:
--------------------------------
Description:
Hello,
The nodeLost event does not delete all replicas from the nodes being removed
when there is a large number of collections & replicas. This is blocking our
implementation of Solr on autoscaling groups, where nodes are added &
removed. All events work fine with very few (3-4) collections and replicas,
but once we added more collections and replicas to the cluster, we started
seeing that the cluster no longer maintains the desired state: during scale
down, nodes are removed, but the replicas pointing to those nodes remain in
"down" status. The only way to clean them up is to delete them manually from
the admin UI.
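The leftover replicas described above can also be found programmatically instead of through the admin UI. A minimal sketch, assuming the standard shape of the Collections API CLUSTERSTATUS response (the node and collection names below are made-up examples, not from this cluster):

```python
# Find replicas still registered on nodes that have left the cluster.
# The input mirrors the shape of a CLUSTERSTATUS response; the
# collection/shard/replica/node names are hypothetical.

def stale_replicas(cluster_status):
    """Return (collection, shard, replica) for every replica whose node
    is no longer in live_nodes and whose state is 'down'."""
    live = set(cluster_status["cluster"]["live_nodes"])
    stale = []
    for coll_name, coll in cluster_status["cluster"]["collections"].items():
        for shard_name, shard in coll["shards"].items():
            for rep_name, rep in shard["replicas"].items():
                if rep["node_name"] not in live and rep["state"] == "down":
                    stale.append((coll_name, shard_name, rep_name))
    return stale

example = {
    "cluster": {
        "live_nodes": ["10.0.0.1:8983_solr"],
        "collections": {
            "coll1": {
                "shards": {
                    "shard1": {
                        "replicas": {
                            "core_node3": {"node_name": "10.0.0.1:8983_solr",
                                           "state": "active"},
                            "core_node5": {"node_name": "10.0.0.2:8983_solr",
                                           "state": "down"},
                        }
                    }
                }
            }
        }
    }
}

print(stale_replicas(example))  # only the replica left behind by the lost node
```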
I have added the following policies:
curl -X POST "http://localhost:8983/solr/admin/autoscaling" --data-binary \
'{
  "set-cluster-policy": [
    { "replica": "1", "shard": "#EACH", "node": "#EACH" }
  ]
}'
curl -X POST "http://localhost:8983/solr/admin/autoscaling" --data-binary \
'{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "preferredOperation": "ADDREPLICA"
  }
}'
curl -X POST "http://localhost:8983/solr/admin/autoscaling" --data-binary \
'{
  "set-trigger": {
    "name": "node_lost_trigger",
    "event": "nodeLost",
    "waitFor": "600s",
    "preferredOperation": "DELETENODE"
  }
}'
was:
Hello,
Node lost event does not delete all replicas from the nodes are being deleted
when a higher number of collections & replicas. This is really blocking a
implementation of solr on autoscaling group where nodes are being added &
remove. I have added the following policies
curl -X POST "http://localhost:8983/solr/admin/autoscaling" --data-binary \
'{
"set-cluster-policy": [
{ "replica": "1", "shard": "#EACH", "node": "#EACH" }
]
}'
curl -X POST "http://localhost:8983/solr/admin/autoscaling" --data-binary \
'{
"set-trigger": {
"name": "node_added_trigger",
"event": "nodeAdded",
"waitFor": "5s",
"preferredOperation": "ADDREPLICA"
}
}'
curl -X POST "localhost:8983/solr/admin/autoscaling" --data-binary \
'{
"set-trigger": {
"name": "node_lost_trigger",
"event": "nodeLost",
"waitFor": "600s",
"preferredOperation": "DELETENODE"
}
}'
> Node lost event does not delete all replicas from the nodes are being deleted
> ------------------------------------------------------------------------------
>
> Key: SOLR-14235
> URL: https://issues.apache.org/jira/browse/SOLR-14235
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 7.7.2
> Reporter: jawlit patel
> Priority: Critical
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]