We're running a large 0.90.10 cluster. Due to performance problems we're 
seeing with has_parent queries (an unrelated issue), upgrading to 1.x is 
not an option for us right now.

We're trying to figure out how to back up 0.90. The following link gives 
some ideas for doing backups: 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html
Essentially, the process is (a sketch of the corresponding API calls 
follows the list):

   1. Stop indexes from being flushed to disk.
   2. Stop shard reallocation.
   3. Copy the data.
   4. Resume index flushing.
   5. Resume shard reallocation.
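
For steps 1 and 2, here's a minimal sketch of the REST calls, assuming 
the 0.90-era setting names index.translog.disable_flush and 
cluster.routing.allocation.disable_allocation (the names the linked page 
uses) and a node reachable at localhost:9200:

    import json
    import urllib.request

    ES = "http://localhost:9200"  # assumption: any node in the cluster

    def put(path, body):
        """PUT a JSON body to the cluster and return the parsed response."""
        req = urllib.request.Request(ES + path,
                                     data=json.dumps(body).encode("utf-8"),
                                     method="PUT")
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # Step 1: stop translog flushing for all indices.
    put("/_settings", {"index.translog.disable_flush": True})

    # Step 2: stop shard reallocation cluster-wide.
    put("/_cluster/settings",
        {"transient": {"cluster.routing.allocation.disable_allocation": True}})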

The concern is step 3. We have a lot of data to back up, so step 3 could 
take longer than we're willing to keep flushing disabled.

However, given that segments in the data directory are immutable, I'm 
wondering if we could change step 3 to first create a parallel directory 
structure off to the side and then hard-link every file in the data 
directory into the equivalent directory in the parallel structure. Walking 
the tree and creating the hard links should take well under a second.
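
Something like this sketch of the hard-link step (the paths are 
hypothetical, and it assumes the data path and the snapshot destination 
live on the same filesystem, which hard links require):

    import os

    def hardlink_tree(src_root, dst_root):
        """Mirror src_root under dst_root, hard-linking every file.

        Directories are created fresh; files become hard links, so no
        data is copied and the walk is nearly instantaneous.
        """
        for dirpath, dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            target_dir = os.path.join(dst_root, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                os.link(os.path.join(dirpath, name),
                        os.path.join(target_dir, name))

    # assumption: both paths are on the same filesystem
    hardlink_tree("/var/data/elasticsearch", "/var/data/es-backup-snapshot")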

Then we can resume index flushing and shard reallocation, back up the 
parallel directory structure (waiting however long that takes), and finally 
delete the parallel directory structure.
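
And the resume-and-cleanup side, with the same caveats about the setting 
names as in the first sketch:

    import json
    import shutil
    import urllib.request

    def put(path, body):  # same helper as in the first sketch
        req = urllib.request.Request("http://localhost:9200" + path,
                                     data=json.dumps(body).encode("utf-8"),
                                     method="PUT")
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # Steps 4 and 5: re-enable translog flushing and shard reallocation.
    put("/_settings", {"index.translog.disable_flush": False})
    put("/_cluster/settings",
        {"transient": {"cluster.routing.allocation.disable_allocation": False}})

    # Once the slow copy of the parallel tree has finished, drop the
    # hard-link snapshot. Deleting a hard link only decrements the link
    # count, so segment files the live cluster still references are safe.
    shutil.rmtree("/var/data/es-backup-snapshot")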

This approach is similar to how backups work in Solr.

Will that approach work, or are there any files in the data directory that 
are mutable?

Thanks
