Hi,
I encountered a problem with snapshot and restore, and I'm checking whether anybody has run into the same issue and, if so, how they resolved it.

Here is what happened. I took a snapshot of one of our clusters and restored that snapshot into another cluster. After the restore, one index showed a discrepancy of about 7 million documents between the snapshot cluster and the restored cluster (the snapshot cluster had more documents). Three other indexes restored without any data loss.

I did a shard check (_cat/shards) and found one primary shard, along with its single replica, holding 0 documents. I deleted that index and restored it again, this time with the replica count set to zero. This time there was no data loss in the restored index: the shard that had previously shown 0 documents now showed 7 million. I then set the replica count back to 1, and after the shards were allocated again, the same 7 million documents were lost once more. Another shard check showed the same primary and its replica with 0 documents.

One more detail that may matter: the cluster I took the snapshot from runs ES 1.2.2, while the cluster I restored into runs ES 1.3.4.

Thanks,
Prateek Singhal
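For reference, the steps above correspond roughly to the following API calls (repository, snapshot, and index names are placeholders for our actual ones, and I'm reconstructing these from memory):

    # check per-shard document counts for the affected index
    curl 'localhost:9200/_cat/shards/my_index?v'

    # delete the bad index and restore it again from the snapshot
    curl -XDELETE 'localhost:9200/my_index'
    curl -XPOST 'localhost:9200/_snapshot/my_repo/my_snapshot/_restore?wait_for_completion=true' -d '{
      "indices": "my_index"
    }'

    # drop replicas to zero, then bring them back once the primaries look complete
    curl -XPUT 'localhost:9200/my_index/_settings' -d '{
      "index": { "number_of_replicas": 0 }
    }'
    curl -XPUT 'localhost:9200/my_index/_settings' -d '{
      "index": { "number_of_replicas": 1 }
    }'

It is only after that last step (re-enabling replicas) that the documents disappear from the primary and its replica.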
