[
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056906#comment-14056906
]
Mark Miller commented on SOLR-5656:
-----------------------------------
bq. So, I should assume if there is no node number it's on node 1?
Currently it defaults to 1. I was going to make it explicit, but there is not a
lot of error checking yet anyway, so I left it for further improvement later. I
figure this will be reused in a few other places that have to choose nodes
given a clusterstate.
bq. And that * attaches to the previous shard?
The * marks a replica as the one being replaced. The current replacement
algorithm looks at each replica - when it finds one that is marked, it looks
for the best place to replace it given a clusterstate.
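The "best place given a clusterstate" idea above can be sketched roughly as
picking the least-loaded live node. This is only an illustration - the class,
method, and node names below are invented and this is not the actual Overseer
code from the patch:

```java
import java.util.*;

public class ReplacePlacementSketch {

    // Hypothetical placement rule: given a count of replicas hosted per live
    // node, choose the node with the fewest replicas as the "best place" for
    // the replacement.
    static String bestNodeFor(Map<String, Integer> replicasPerLiveNode) {
        return replicasPerLiveNode.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(() -> new IllegalStateException("no live nodes"));
    }

    public static void main(String[] args) {
        // Made-up cluster: node2 carries the fewest replicas, so it wins.
        Map<String, Integer> load = new LinkedHashMap<>();
        load.put("node1", 3);
        load.put("node2", 1);
        load.put("node3", 2);
        System.out.println(bestNodeFor(load)); // node2
    }
}
```

A real implementation would also have to respect constraints such as not
co-locating two replicas of the same shard on one node.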
bq. I still don't really know what "-" does.
It just overrides a specific replica's state in clusterstate.json - so rather
than ACTIVE, you could mark it as RECOVERING or DOWN.
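For illustration, an override like that would surface in clusterstate.json
roughly as below (the collection, shard, node, and core names are invented
here, and Solr serializes the state values in lowercase):

```json
{
  "collection1": {
    "shards": {
      "shard1": {
        "replicas": {
          "core_node1": {
            "core": "collection1_shard1_replica1",
            "node_name": "127.0.0.1:8983_solr",
            "state": "active"
          },
          "core_node2": {
            "core": "collection1_shard1_replica2",
            "node_name": "127.0.0.1:7574_solr",
            "state": "recovering"
          }
        }
      }
    }
  }
}
```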
> Add autoAddReplicas feature for shared file systems.
> ----------------------------------------------------
>
> Key: SOLR-5656
> URL: https://issues.apache.org/jira/browse/SOLR-5656
> Project: Solr
> Issue Type: New Feature
> Reporter: Mark Miller
> Assignee: Mark Miller
> Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch,
> SOLR-5656.patch
>
>
> When using HDFS, the Overseer should have the ability to reassign the cores
> from failed nodes to running nodes.
> Given that the index and transaction logs are in HDFS, it's simple for
> surviving hardware to take over serving cores for failed hardware.
> There are some tricky issues around having the Overseer handle this for you,
> but it seems a simple first pass is not too difficult.
> This will add another alternative to replicating with both HDFS and Solr.
> It shouldn't be specific to hdfs, and would be an option for any shared file
> system Solr supports.
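As a usage sketch, assuming the feature lands as a collection-level flag on
the Collections API (the collection name, host, and shard counts below are
placeholders, and this requires a running SolrCloud instance on a shared
filesystem):

```shell
# Create a collection with automatic replica re-adding enabled.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=1&autoAddReplicas=true"
```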
--
This message was sent by Atlassian JIRA
(v6.2#6252)