Gzip is decently fast, but cannot take advantage of Hadoop's natural map splits because it's impossible to start decompressing a gzip stream at a random offset in the file. LZO is a wonderful compression scheme to use with Hadoop because it's incredibly fast, and (with a bit of work) it's splittable: LZO's block format makes it possible to start decompressing at certain specific offsets of the file -- those that start new LZO block boundaries.

James Neofotistos
Senior Sales Consultant, Emerging Markets East
Phone: 1-781-565-1890 | Mobile: 1-603-759-7889
Email: jim.neofotis...@oracle.com

From: Ramasubramanian Narayanan [mailto:ramasubramanian.naraya...@gmail.com]

Hi,

If a zip file (Gzip) is loaded into HDFS, will it get split into blocks and stored in HDFS? I understand that a single mapper works with Gzip, since it has to read the entire file from beginning to end... In that case, if the Gzip file size is larger than 128 MB, will it get split into blocks and stored in HDFS?

regards,
Rams
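As a rough illustration of the splittability point above, here is a minimal sketch (not from the original thread; the file path is made up and Hadoop must be on the classpath) of the check Hadoop's file-based input formats perform: a compressed input can only be divided into multiple map splits if its codec implements SplittableCompressionCodec, which GzipCodec does not.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplittabilityCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);

        // Illustrative path only -- any *.gz file name triggers GzipCodec.
        Path file = new Path("/data/logs/events.log.gz");

        CompressionCodec codec = factory.getCodec(file);
        if (codec == null) {
            System.out.println(file + ": uncompressed, splits normally");
        } else {
            // Mirrors the test in FileInputFormat/TextInputFormat: only codecs
            // implementing SplittableCompressionCodec allow a map task to start
            // reading at an arbitrary offset inside the file.
            boolean splittable = codec instanceof SplittableCompressionCodec;
            System.out.println(file + ": codec=" + codec.getClass().getSimpleName()
                    + ", splittable=" + splittable);
        }
    }
}
```

Note that plain LZO files also report as non-splittable with the stock codec; the splittable behaviour described above comes from the hadoop-lzo package, which indexes the LZO block boundaries and supplies its own input format.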
- Doubts on compressed file Ramasubramanian Narayanan
- Re: Doubts on compressed file Harsh J
- Re: Doubts on compressed file Niels Basjes
- RE: Doubts on compressed file Jim Neofotistos