Hello, I don't know exactly how it's compressed, but it appears that the data is compressed in chunks of at most 4K; i.e. it's useless to store the data on a compressed (lz4) filesystem if the filesystem block size is 4K:
Filesystem      Size   Used   Avail   Capacity   Mounted on
zdata/ES-lz4    1.1T   1.9G   1.1T    0%         /zdata/ES-lz4
zdata/ES        1.1T   1.9G   1.1T    0%         /zdata/ES

But if the filesystem block size is greater (say 128K), filesystem compression is a huge win:

Filesystem      Size   Used   Avail   Capacity   Mounted on
zdata/ES-lz4    1.1T   1.1G   1.1T    0%         /zdata/ES-lz4   -> compressratio 1.73x
zdata/ES-gzip   1.1T   901M   1.1T    0%         /zdata/ES-gzip  -> compressratio 2.27x
zdata/ES        1.1T   1.9G   1.1T    0%         /zdata/ES

Unfortunately, a filesystem block size greater than 4K is not optimal for IO (unless you have a large amount of physical memory you can dedicate to the filesystem data cache, which would be redundant with the ES cache). A sketch of how such test datasets could be set up is at the end of this message, after the quoted thread.

On 8 June 2014, at 18:41, David Pilato wrote:

> It's compressed by default now.
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> On 8 June 2014 at 18:01, sri <[email protected]> wrote:
>
>> Hello everyone,
>>
>> I have read posts and blogs on how Elasticsearch compression can be enabled
>> in previous versions (0.17 - 0.19).
>>
>> I am currently using ES 1.2.1, and I wasn't able to find out how to enable
>> compression in this version, or whether there is any such option at all.
>>
>> I know that I can reduce the storage footprint by disabling the _source field via the
>> mapping API, but what I am interested in is compression of the stored data.
>>
>> Thanks and Regards
>> Sri
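
PS: for reference, a minimal sketch of how datasets like the ones above could be created and compared on ZFS. The pool and dataset names (zdata, ES-lz4, ES-gzip, ES) come from the df output; the exact options and commands are my assumptions, not the literal commands behind the figures quoted above.

# one dataset per compression setting, 128K recordsize (the ZFS default)
zfs create -o recordsize=128K -o compression=lz4  zdata/ES-lz4
zfs create -o recordsize=128K -o compression=gzip zdata/ES-gzip
zfs create -o recordsize=128K -o compression=off  zdata/ES

# for the 4K test, the datasets would be recreated with -o recordsize=4K;
# note that recordsize only applies to files written after it is set, it
# does not rewrite data already on the dataset

# after copying the ES data directory onto each dataset, compare:
df -h /zdata/ES-lz4 /zdata/ES-gzip /zdata/ES
zfs get compressratio,recordsize,compression zdata/ES-lz4 zdata/ES-gzip zdata/ES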
