Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change 
notification.

The "SolrReplication" page has been changed by FredDrake.
http://wiki.apache.org/solr/SolrReplication?action=diff&rev1=67&rev2=68

--------------------------------------------------

  
  The master is totally unaware of the slaves. The slave continuously keeps 
polling the master (at the interval set by the 'pollInterval' parameter) to 
check the current index version of the master. If the slave finds that the 
master has a newer version of the index, it initiates a replication process. 
The steps are as follows:
  
  * The slave issues a filelist command to get the list of files. This command 
returns the names of the files as well as some metadata (size, lastmodified, 
alias if any).
  * The slave checks its own index to see whether it already has any of those 
files locally. It then proceeds to download the missing files (the command name 
is 'filecontent'). This uses a custom format (akin to HTTP chunked encoding) to 
download the full content or a part of each file. If the connection breaks in 
between, the download resumes from the point where it failed. Each download is 
retried up to 5 times before the replication is abandoned altogether.
  * The files are downloaded into a temp directory, so if the slave or master 
crashes in between, nothing is corrupted; the current replication is simply 
aborted.
  * After the download completes, all the new files are 'mov'ed to the slave's 
live index directory, and the files' timestamps will match the timestamps on 
the master.
  * A 'commit' command is issued on the slave by the slave's 
!ReplicationHandler and the new index is loaded.
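
The file-comparison step above can be sketched in Python. The function name and data shapes below are illustrative (not Solr's actual internals), but the metadata fields follow the filelist response described above:

```python
# Sketch of the slave's file-comparison step: given the metadata returned
# by the master's 'filelist' command and the file names already present in
# the slave's local index, decide which files must be downloaded.
# files_to_download and the sample data are illustrative, not Solr's API.

def files_to_download(master_filelist, local_files):
    """Return the master's files that are missing from the local index.

    master_filelist: list of dicts with 'name', 'size', 'lastmodified'
                     (and an 'alias' key if any), as in the filelist reply.
    local_files:     set of file names already in the slave's index.
    """
    return [f for f in master_filelist if f["name"] not in local_files]

master_filelist = [
    {"name": "_0.cfs", "size": 1024, "lastmodified": 1262332800},
    {"name": "_1.cfs", "size": 2048, "lastmodified": 1262336400},
    {"name": "segments_2", "size": 128, "lastmodified": 1262336400},
]
local_files = {"_0.cfs"}

missing = files_to_download(master_filelist, local_files)
print([f["name"] for f in missing])  # ['_1.cfs', 'segments_2']
```

Only the missing files are then fetched via 'filecontent', which is what keeps incremental replication cheap when most segment files are unchanged.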
  
  

  
  == HTTP API ==
  These commands can be invoked over HTTP against the !ReplicationHandler:
  * Get the latest replicateable index version on the master: 
http://master_host:port/solr/replication?command=indexversion
  * Abort copying the index from master to slave: 
http://slave_host:port/solr/replication?command=abortfetch
  * Create a backup on the master if there is committed index data in the 
server; otherwise do nothing. This is useful for taking periodic backups: 
http://master_host:port/solr/replication?command=backup
  * Force a fetchindex on the slave from the master: 
http://slave_host:port/solr/replication?command=fetchindex
   * It is possible to pass an extra 'masterUrl' attribute or other attributes 
like 'compression' (or any other parameter specified in the 
{{{<lst name="slave">}}} tag) to do a one-time replication from a master. This 
obviates the need for hardcoding the master in the slave.
  * Disable polling for changes on the slave: 
http://slave_host:port/solr/replication?command=disablepoll
  * Enable polling for changes on the slave: 
http://slave_host:port/solr/replication?command=enablepoll
  * Get all the details of the configuration and the current status: 
http://slave_host:port/solr/replication?command=details
   * Get version number of the index: 
http://host:port/solr/replication?command=indexversion
  * Get the list of Lucene files present in the index: 
http://host:port/solr/replication?command=filelist&indexversion=<index-version-number> 
(the version number can be obtained using the indexversion command).
  * Disable replication on the master for all slaves: 
http://master_host:port/solr/replication?command=disablereplication
  * Enable replication on the master for all slaves: 
http://master_host:port/solr/replication?command=enablereplication
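
The commands above are plain HTTP GETs, so any HTTP client works. As a sketch, the hypothetical helper below (not part of Solr) only assembles URLs of the shape listed in this section; host names and ports are placeholders:

```python
# Assembles /solr/replication command URLs like the ones listed above.
# replication_url is a hypothetical helper, not part of Solr; actually
# issuing the request (e.g. via urllib.request.urlopen) is omitted so
# the sketch stays offline.
from urllib.parse import urlencode

def replication_url(host, port, command, **params):
    """Build a !ReplicationHandler URL for the given command.

    Extra keyword arguments become additional query parameters, e.g.
    indexversion=... for filelist, or masterUrl=... to trigger a
    one-time fetchindex from a specific master.
    """
    query = urlencode({"command": command, **params})
    return "http://%s:%s/solr/replication?%s" % (host, port, query)

print(replication_url("master_host", 8983, "indexversion"))
# http://master_host:8983/solr/replication?command=indexversion
print(replication_url("host", 8983, "filelist", indexversion=12345))
# http://host:8983/solr/replication?command=filelist&indexversion=12345
```

Note that urlencode percent-escapes parameter values, which matters when passing a full 'masterUrl' as a query parameter.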
  
  == enable/disable master/slave in a node ==
 If a server needs to be turned from a slave into a master, or if you wish to 
use the same solrconfig.xml for both master and slave, do as follows:
