[
https://issues.apache.org/jira/browse/SOLR-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13419015#comment-13419015
]
Jan Høydahl commented on SOLR-3167:
-----------------------------------
I was thinking "auto-everything" by default :) like ElasticSearch
# Start Solr on a node without any options other than telling it to start in cloud mode
## If -DzkHost is not specified, it will try auto-discovery (through some zero-conf protocol) and join the existing ZK ensemble
## If no existing ZK is found, spin up a local one
# Start Solr on another node; it will discover the existing one(s) without any host:port at startup
## If there are "too few" ZK servers, it will start another one and refresh the ZK list on all other nodes
## If there are "enough" ZK servers already, it will simply join. It should also be possible to auto-start ZK on another node if one member has failed.
> Allow running embedded zookeeper 1 for 1 dynamically with solr nodes
> --------------------------------------------------------------------
>
> Key: SOLR-3167
> URL: https://issues.apache.org/jira/browse/SOLR-3167
> Project: Solr
> Issue Type: Improvement
> Reporter: Mark Miller
> Assignee: Mark Miller
>
> Right now you have to decide which nodes run zookeeper up front - each node
> must know the list of all the servers in the ensemble. Growing or shrinking
> the list of nodes requires a rolling restart.
> https://issues.apache.org/jira/browse/ZOOKEEPER-1355 (Add
> zk.updateServerList(newServerList)) might be able to help us here. Perhaps the
> overseer could make a call to each replica when the list changes and use the
> updateServerList call.
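The overseer-driven refresh suggested in the issue could look like the sketch below. Note that `updateServerList` is only proposed in ZOOKEEPER-1355, so it is stubbed behind an interface here; the `Replica` class and `onEnsembleChange` are illustrative, not Solr APIs.

```java
// Sketch of the overseer pushing a new ZK server list to every replica,
// instead of requiring a rolling restart. All names are hypothetical.
import java.util.ArrayList;
import java.util.List;

public class EnsembleRefresh {

    /** Stand-in for the client-side call proposed in ZOOKEEPER-1355;
     *  a real implementation would re-point the live ZK connection. */
    interface ZkClient {
        void updateServerList(String newServerList);
    }

    /** Illustrative replica: records the last server list pushed to it. */
    static class Replica {
        final String name;
        String serverList = "";
        Replica(String name) { this.name = name; }
        ZkClient zk() { return list -> serverList = list; }
    }

    /** What the overseer might do when ensemble membership changes:
     *  call each replica so it updates its ZK connection in place. */
    static void onEnsembleChange(List<Replica> replicas, String newServerList) {
        for (Replica r : replicas) {
            r.zk().updateServerList(newServerList);
        }
    }

    public static void main(String[] args) {
        List<Replica> replicas = new ArrayList<>();
        replicas.add(new Replica("shard1_replica1"));
        replicas.add(new Replica("shard1_replica2"));
        onEnsembleChange(replicas, "zk1:2181,zk2:2181,zk3:2181");
        for (Replica r : replicas) {
            System.out.println(r.name + " -> " + r.serverList);
        }
    }
}
```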
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira