It was my hope that storing solr.xml in ZooKeeper would mean I could spin up a
Solr node, point it at a properly configured ZooKeeper ensemble, and need no
further local configuration or knowledge.

However, I’m beginning to wonder whether that’s sufficient. It looks like I may
also need each node to be preconfigured with at least a directory and a
core.properties file for every collection/core the node intends to participate
in. Is that correct?
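To be concrete, by core.properties I mean the per-core discovery file, roughly
like the sketch below (the directory name and property values are just
placeholders on my part, not anything I've confirmed the cluster expects):

    # <solr.home>/collection1_shard1_replica1/core.properties
    name=collection1_shard1_replica1
    collection=collection1
    shard=shard1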

I figured I’d test this by starting a stand-alone ZK, configuring it by issuing
a zkCli bootstrap against the solr directory under the Solr example dir, and
then manually putfile-ing the (new-style) solr.xml.
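Roughly, the commands were along these lines (treat this as a sketch; the
zkcli.sh location and local paths will vary by Solr version and install):

    # stand-alone ZooKeeper already running on localhost:2181
    cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd bootstrap \
        -solrhome /path/to/solr/example/solr
    cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile \
        /solr.xml /path/to/new-style/solr.xml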
I then attempted to connect two Solr instances that referenced that ZooKeeper
but did NOT use the Solr example dir as their base; I essentially used empty
directories for solr home. Although both connected, and ZK shows both under
/live_nodes, both report “0 cores discovered” in the logs and don’t seem to
find or participate in the collection, as happens when you follow the
SolrCloud example verbatim
(http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster).
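For reference, each node was started roughly like this, with only zkHost
pointing at the ensemble and an essentially empty solr home (the port and
paths here are just illustrative assumptions):

    cd solr/example
    java -Djetty.port=8983 -DzkHost=localhost:2181 \
         -Dsolr.solr.home=/path/to/empty/solrhome -jar start.jar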

I may have some other configuration issues at present, but I’ll be
disappointed if I need prior knowledge of what collections/cores may have been
dynamically created in a cluster just to add a node that participates in that
cluster.
It feels like I might be missing something.

Any clarifications would be appreciated.
