It's worth noting that although the instructions are on the Google Code site,
you should really be grabbing the LZO libraries from
http://github.com/kevinweil/hadoop-lzo -- there are quite a few bug fixes
that have gone into that branch that are not in the Google Code version.
Usage instructions are the same.
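
For reference, once the hadoop-lzo jar and native libraries are installed,
the codec still has to be registered in the Hadoop configuration. A minimal
sketch of that step (these are the standard hadoop-lzo property names; adjust
the codec list and paths to match your install):

  <!-- core-site.xml: add the LZO codecs to the compression codec list -->
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
  </property>
  <!-- tell Hadoop which class implements the lzo codec -->
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>

The native libgplcompression/liblzo libraries typically also need to be
visible on java.library.path, or the codec will fail to load.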

-D

On Wed, Sep 1, 2010 at 1:07 PM, Santhosh Srinivasan <s...@yahoo-inc.com> wrote:

> LZO is not supported out of the box in Hadoop 0.20.2. In order to enable LZO
> on Hadoop, you can follow the instructions at:
>
> http://code.google.com/p/hadoop-gpl-compression/wiki/FAQ
>
> Santhosh
>
> -----Original Message-----
> From: Saurav Datta [mailto:sda...@apple.com]
> Sent: Wednesday, September 01, 2010 12:41 PM
> To: pig-user@hadoop.apache.org
> Subject: Error: Unable to create input splits
>
> Hi ,
>
> I am getting the following error while running a Pig script:
> "ERROR 6017: org.apache.pig.backend.executionengine.ExecException:
> ERROR 2118: Unable to create input splits for: hdfs://localhost:9000"
>
> Going by the URL below, it appears to be related to the LZO compression
> codec.
>
> http://mail-archives.apache.org/mod_mbox/hadoop-pig-user/201008.mbox/%3caanlktik=5ab5zyk1bjnnozwzy-cmyifj_tzp9zzkf...@mail.gmail.com%3e
>
> Any suggestions on how to resolve this?
>
> I am using Pig 0.7.0 with Hadoop 0.20.2 on a Mac OS X 10.6.3 box.
>
> Regards,
> Saurav
>
>
>
> Below is the content of the log file:
>
> Pig Stack Trace
> ---------------
> ERROR 6017: org.apache.pig.backend.executionengine.ExecException:
> ERROR 2118: Unable to create input splits for:
> hdfs://localhost:9000/user/rahulmalviya/main_merged_File_20100804.dat.completed
>
> org.apache.pig.backend.executionengine.ExecException: ERROR 6017:
> org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
> Unable to create input splits for:
> hdfs://localhost:9000/user/rahulmalviya/main_merged_File_20100804.dat.completed
>        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:227)
>        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:308)
>        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:835)
>        at org.apache.pig.PigServer.execute(PigServer.java:828)
>        at org.apache.pig.PigServer.access$100(PigServer.java:105)
>        at org.apache.pig.PigServer$Graph.execute(PigServer.java:1080)
>        at org.apache.pig.PigServer.executeBatch(PigServer.java:288)
>        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:109)
>        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:166)
>        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:138)
>        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:89)
>        at org.apache.pig.Main.main(Main.java:391)
> ================================================================================
>
>
