Hello,

My company is using the ELK stack.  Right now we have a very small amount 
of data actually being sent to Elasticsearch (probably a couple hundred 
Logstash entries a day, if that); however, the data that is getting logged 
is very important.  I recently set up snapshots to help protect this data.  

I take one snapshot a day, I delete snapshots that are older than 20 days, 
and each snapshot includes all of the Logstash indexes in Elasticsearch.  
It's also a business requirement that we be able to search at least a 
year's worth of data, so I can't close Logstash indexes unless they're 
more than a year old.
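To make the setup concrete, here is a rough Python sketch of what the daily 
job amounts to.  The host, repository name, and the use of the requests 
library are just placeholders for illustration, not our exact script; the 
real job issues the equivalent snapshot REST calls.

    # Rough sketch of the daily snapshot/cleanup job described above.
    # "logstash_backup" and the localhost URL are placeholder values.
    import datetime
    import requests

    ES = "http://localhost:9200"
    REPO = "logstash_backup"   # assumed repository name
    RETENTION_DAYS = 20

    today = datetime.date.today()

    # 1. Take today's snapshot of all logstash-* indexes and wait for it.
    snap_name = "snapshot_%s" % today.isoformat()
    requests.put(
        "%s/_snapshot/%s/%s?wait_for_completion=true" % (ES, REPO, snap_name),
        json={"indices": "logstash-*"},
    )

    # 2. Delete any snapshot older than the retention window.
    snapshots = requests.get("%s/_snapshot/%s/_all" % (ES, REPO)).json()["snapshots"]
    for snap in snapshots:
        started = datetime.date.fromtimestamp(snap["start_time_in_millis"] / 1000.0)
        if (today - started).days > RETENTION_DAYS:
            requests.delete("%s/_snapshot/%s/%s" % (ES, REPO, snap["snapshot"]))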

Now, we've been using Logstash for several months, and each day it creates 
a new index.  We've found that even though there is very little data in 
these indexes, it's taking upwards of 30 minutes to snapshot all of them, 
and each day's snapshot appears to take 20 - 100 seconds longer than the 
last.  It is also taking about 30 minutes to delete a single snapshot, 
which happens each day as part of cleaning up old snapshots.  So the whole 
process is taking about an hour each day and appears to be growing longer 
very quickly.

Am I doing something wrong here, or is this kind of thing expected?  It 
seems pretty strange that it's taking so long given how little data we 
have.  I've looked through the snapshot docs several times, and there 
doesn't appear to be much discussion of how the process scales.

Thanks!
