Hi Team,

I need your suggestions/views on the approach I have in place for Solr
availability and recovery.
I am running *Solr 3.5* and have around *30k* documents indexed in my Solr
core. I have configured Solr to hold *5k* documents in each segment at a
time. I periodically commit and optimize my Solr index.

I have delta indexing in place to index new documents in Solr. /Very
rarely/ I face an index corruption issue; to fix it I have a *CheckIndex
-fix* job in place as well. However, this job can sometimes delete the
corrupt segment (meaning a loss of 5k documents until I fully re-index Solr).

_*I have a few follow-up questions on this case:*_
1. How can I avoid the loss of 5k documents (from CheckIndex -fix)? Shall I
reduce the number of documents per segment? Is there an alternate solution?

2. If I start taking periodic backups (snapshots) of the entire index, shall
I just replace my data/index folder with the backup folder when corruption
is found? Is this a good implementation?

3. Is there any other good solution or suggestion for having maximum index
availability at all times?

Thanks in advance for giving your time. 

Atul 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR-index-Recovery-availability-tp4088782.html
Sent from the Solr - User mailing list archive at Nabble.com.
