Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change 
notification.

The "ZooKeeperIntegration" page has been changed by NoblePaul.
http://wiki.apache.org/solr/ZooKeeperIntegration?action=diff&rev1=8&rev2=9

--------------------------------------------------

      <!-- other params go here -->
   
      <shardHandler class="ZooKeeperAwareShardHandler">
-        <str name="shardName">shard1/nodes</int>
+        <str name="shard">shard1</str>
      </shardHandler>
  </requestHandler>
  }}}
  
- With the above configuration, on initialization, the !ZooKeeperAwareShardHandler will get the ZKClient from the !SolrCore and register itself as a sequential node under the path "/myApp/solr/shard1/nodes" and value me=localhost:8983/solr/core1
+ With the above configuration, on initialization the !ZooKeeperAwareShardHandler will get the ZKClient from the !SolrCore and register itself as a sequential node under the path "/solr_domain/nodes" with the value url=localhost:8983/solr/core1.
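
Purely as an illustration (the class name, method name and node-name prefix below are assumptions for this sketch, not the actual !ZooKeeperAwareShardHandler code), such a registration could be done with the plain ZooKeeper Java client roughly like this:

{{{
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class NodeRegistrationSketch {
  /** Registers one core under nodesPath (e.g. "/solr_domain/nodes") with a
      value such as "url=localhost:8983/solr/core1". */
  public static String register(ZooKeeper zk, String nodesPath, String coreUrl)
      throws Exception {
    byte[] data = ("url=" + coreUrl).getBytes("UTF-8");
    // EPHEMERAL_SEQUENTIAL: ZooKeeper appends a unique, increasing suffix and
    // deletes the node automatically when this client's session goes away.
    return zk.create(nodesPath + "/node-", data,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
  }
}
}}}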
  
  TODO: Figure out where "me" lives - zk configuration or shard handler configuration.
  
  Shards are ephemeral and sequential nodes in ZK speak and thus go away if the node dies.
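
Because the registrations are ephemeral, a client can rediscover the live nodes at any time simply by listing the children of the nodes path. The sketch below is a hypothetical illustration using the plain ZooKeeper client, not the wiki's implementation:

{{{
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class LiveNodesSketch {
  /** Returns the children of nodesPath (one child per live core) and leaves a
      watch behind so the caller hears when a registration appears or dies. */
  public static List<String> liveNodes(ZooKeeper zk, String nodesPath)
      throws Exception {
    return zk.getChildren(nodesPath, new Watcher() {
      public void process(WatchedEvent event) {
        // Fires on NodeChildrenChanged; a real handler would re-read the
        // children here to refresh its cached node list.
      }
    });
  }
}
}}}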
  
- Then, when a query comes in, the !ShardsComponent can build the !ResponseBuilder.shards value appropriately based on what's contained in the shard group that it is participating in. This shard group approach should allow for a fanout approach to be employed.
+ !ZooKeeperAwareShardHandler always maintains a list of shard names and a list of nodes that belong to each shard. If the setup has all the slaves sitting behind a load balancer, the value of 'me' points to the load balancer instead of the node's host:port. The shard handler would automatically load balance if there are multiple nodes serving up a shard.
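
One conceivable way to turn the per-shard node lists into a shards value, picking one node per shard at random, is sketched below; the map layout and the random pick are assumptions, not necessarily the handler's actual strategy:

{{{
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ShardsParamSketch {
  private static final Random RANDOM = new Random();

  /** Picks one live node per shard at random and joins the picks into the
      comma-separated value expected by the shards parameter. */
  public static String buildShardsParam(Map<String, List<String>> nodesByShard) {
    StringBuilder shards = new StringBuilder();
    for (List<String> nodes : nodesByShard.values()) {
      if (nodes.isEmpty()) continue; // no live node for this shard right now
      if (shards.length() > 0) shards.append(',');
      shards.append(nodes.get(RANDOM.nextInt(nodes.size())));
    }
    return shards.toString();
  }
}
}}}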
  
  == Master/Slave ==
+ 
+ A master/slave setup is only valid for !ReplicationHandler, so the configuration of the master node can be delegated to !ReplicationHandler.
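
For reference, a !ReplicationHandler master/slave configuration of the kind that could be delegated to looks roughly like the snippet below (host names, file names and intervals are illustrative placeholders):

{{{
<!-- on the master -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- on a slave -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master_host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
}}}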
+ --TODO--
  
  NOT COMPLETELY IMPLEMENTED YET.
  
