Keith,

The 'hadoop fs -text' tool will decompress a file given to it, if it
recognizes the compression codec. But what you could also do is run a
distributed MapReduce job that converts the files from compressed to
decompressed; that would be much faster.
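For a single gzipped file, a minimal sketch of the -text approach could look
like this (the paths are just placeholders):

  hadoop fs -text /user/keith/input/part-0001.gz | hadoop fs -put - /user/keith/output/part-0001.txt

For the MapReduce route, one option is an identity, map-only streaming job
with output compression left off (the streaming jar location below is an
assumption and varies by distribution):

  hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
      -D mapred.output.compress=false \
      -input /user/keith/compressed \
      -output /user/keith/decompressed \
      -mapper /bin/cat \
      -numReduceTasks 0

Since gzip files aren't splittable, each input file gets its own map task, so
the speedup comes from decompressing many files in parallel.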

On Fri, Aug 5, 2011 at 4:58 AM, Keith Wiley <[email protected]> wrote:
> Instead of "hd fs -put"-ing hundreds of files of X megs each, I want to do it once with
> a gzipped (or zipped) archive: one file, much smaller total megs.  Then I want to
> decompress the archive on HDFS.  I can't figure out what "hd fs"-type command
> would do such a thing.
>
> Thanks.
>
> ________________________________________________________________________________
> Keith Wiley     [email protected]     keithwiley.com    
> music.keithwiley.com
>
> "What I primarily learned in grad school is how much I *don't* know.
> Consequently, I left grad school with a higher ignorance to knowledge ratio 
> than
> when I entered."
>                                           --  Keith Wiley
> ________________________________________________________________________________
>
>



-- 
Harsh J
