No data loss here, except for one case a few months ago: an older Lucene
version combined with a wrong file descriptor limit setting, and only with test data.

No problems with shard movement here.

It helped me

- to have a robust network environment with low latency and high throughput
- to keep up with the latest ES version
- to use the latest JVM version
- to reindex the source data whenever the Lucene version changed
- to set the heap to around 4-8 GB per node
- to use G1GC, for fewer GC pauses
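
The heap and GC settings above can be sketched like this (a minimal sketch: I'm assuming the `ES_HEAP_SIZE` and `ES_JAVA_OPTS` environment variables that `bin/elasticsearch` reads in the 1.x era, and 6g is just an example value inside the 4-8 GB range, not an official recommendation):

```shell
# Heap size for the node; 6 GB is one example within the 4-8 GB range
export ES_HEAP_SIZE=6g

# Use the G1 collector for shorter GC pauses
export ES_JAVA_OPTS="-XX:+UseG1GC"

# bin/elasticsearch would pick these up when started from this shell
echo "heap=$ES_HEAP_SIZE opts=$ES_JAVA_OPTS"
```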

A node running low on free resources (heap, OS file descriptors, disk space)
is the most common cause of trouble. If the JVM cannot allocate
write buffers, all kinds of bad things happen (lost or corrupted files, etc.)
that may harm shard recovery.
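
A quick pre-flight check for the file descriptor limit can look like this (a sketch only; the 65536 threshold is a commonly recommended value, not something from this thread):

```shell
# Check the per-process open file descriptor limit before starting a node;
# too low a limit can make Lucene fail to open or write segment files
limit=$(ulimit -n)
echo "open files limit: $limit"

# Warn if it is below a commonly recommended minimum (assumed threshold)
if [ "$limit" != "unlimited" ] && [ "$limit" -lt 65536 ]; then
  echo "WARNING: open files limit is below 65536"
fi
```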

Jörg



On Thu, Feb 13, 2014 at 11:54 PM, Mohit Anchlia <[email protected]> wrote:

> Thanks for sharing all the details. Have you come across any data loss
> situation during shard allocation? It looks like most of the data loss
> issues are somewhat related to shard movement.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGinF3yX_0B2Cn1fxc1B9xCccVXTVcS-GX3uamfjvqq-Q%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.
