Same answer as last time this was asked: http://search-hadoop.com/m/rUV9on6kWA1

You can't do this without a fully distributed setup.
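For context (a hedged sketch, not from the original thread): the `_partition.lst` file in the stack trace is the partitions file that `HFileOutputFormat.configureIncrementalLoad()` prepares for the `TotalOrderPartitioner`, and the `LocalJobRunner` in the trace shows the job is running in local mode, where that file isn't found. Local mode is what you get when `mapred.job.tracker` is left at its default of `local`; pointing it at a real JobTracker (hostname and port below are placeholders for your cluster) is what a distributed setup looks like in CDH3-era `mapred-site.xml`:

```xml
<!-- mapred-site.xml: run MapReduce against a real JobTracker instead of
     the in-process LocalJobRunner. Host/port are illustrative only. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>
</configuration>
```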

J-D

On Tue, Nov 22, 2011 at 10:33 AM, Ales Penkava
<[email protected]> wrote:
> Hello, I am on CDH3 trying to perform a bulk upload, but the following
> error occurs each time:
>
> WARN mapred.LocalJobRunner: job_local_0001
> java.lang.IllegalArgumentException: Can't read partitions file
>        at 
> org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
>        at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>        at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>        at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>        at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
> Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
>        at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:383)
>        at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>        at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:776)
>        at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>        at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>        at 
> org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
>        at 
> org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:82)
>        ... 6 more
>
> Classic upload works fine, but it is slow.
>
> Thanks for any ideas.
> Ales
>
