On 24 Jun 2014, at 00:27, Mark Walkom wrote:

> but ES compresses data natively so you won't get much out of doing the same 
> on the FS level

Using a gzip-compressed FS I got a 2.27x compression ratio on my ES data. It's 
very bad for performance, but depending on your needs it can be useful (storage 
of very old indices, for example). ES reads and writes 4k blocks, so its 
compression is done on 4k chunks. A bigger block size would yield a better 
compression ratio.
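To illustrate the point about chunk size (a sketch I put together, not something from the thread): deflate-style compressors can only exploit redundancy they see inside one compressed unit, so redundancy that spans more than 4 KiB is lost when each 4 KiB block is compressed independently, as a compressed FS must do for 4k I/O.

```python
# Sketch: compress the same data in 4 KiB vs 64 KiB independent chunks
# and compare the overall compression ratios. The data is a pseudo-random
# 8 KiB pattern repeated 16 times, so the redundancy is invisible inside
# any single 4 KiB chunk but obvious inside a 64 KiB one.
import random
import zlib

def chunked_ratio(data: bytes, chunk_size: int) -> float:
    """Compress `data` in independent fixed-size chunks and return the
    overall compression ratio (original size / compressed size)."""
    compressed = sum(
        len(zlib.compress(data[i:i + chunk_size]))
        for i in range(0, len(data), chunk_size)
    )
    return len(data) / compressed

# 128 KiB of data with long-range redundancy (seeded for reproducibility).
pattern = random.Random(0).randbytes(8192)
data = pattern * 16

print(f"4 KiB chunks:  {chunked_ratio(data, 4096):.2f}x")
print(f"64 KiB chunks: {chunked_ratio(data, 65536):.2f}x")
```

With this data the 4 KiB chunks barely compress at all, while the 64 KiB chunks compress several times over; real index data won't be this extreme, but the direction of the effect is the same.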

Patrick

To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/686A2E78-B053-48A5-8E9F-357ABB7715DE%40patpro.net.