You can't do this without running a fully distributed setup; see this
very similar thread: http://search-hadoop.com/m/hqc6F1U9S6e
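For context: the symlink limitation is specific to LocalJobRunner, so the job has to be submitted to a real JobTracker (even a single-node pseudo-distributed one works). A minimal sketch of the relevant setting for Hadoop 0.20/1.x, the era of this thread — the hostname and port below are placeholders, not values from this cluster:

```xml
<!-- mapred-site.xml: any value other than "local" makes the job client
     submit to a JobTracker instead of running in-process with
     LocalJobRunner, which is what supports the DistributedCache
     symlinks that TotalOrderPartitioner's _partition.lst relies on. -->
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker.example.com:9001</value> <!-- placeholder host:port -->
</property>
```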

J-D

On Wed, Oct 26, 2011 at 2:30 PM, danoomistmatiste
<[email protected]> wrote:
>
> Hi, I am facing a strange problem. I am running an HBase bulk load job and
> I get the following exception. The _partition.lst is a symlink to
> /tmp/hadoop-hadoop/mapred/local/archive/-4954754809645175294_197781465_1108566742/file/Users/hadoop/dev/bulkloader/partitions_1319663526012.
>
> The weird part is, the path up to
> /tmp/hadoop-hadoop/mapred/local/archive/-4954754809645175294_197781465_1108566742/file/Users/hadoop/dev/bulkloader/
> exists, but the directory partitions_1319663526012 is created in an entirely
> different location. Any ideas on how to fix this?
>
>
>
> 11/10/26 14:12:06 WARN mapred.LocalJobRunner: LocalJobRunner does not
> support symlinking into current working dir.
> 11/10/26 14:12:06 INFO mapred.TaskRunner: Creating symlink:
> /tmp/hadoop-hadoop/mapred/local/archive/-4954754809645175294_197781465_1108566742/file/Users/hadoop/dev/bulkloader/partitions_1319663526012
> <- /tmp/hadoop-hadoop/mapred/local/localRunner/_partition.lst
> 11/10/26 14:12:07 INFO mapred.JobClient: Running job: job_local_0001
> 11/10/26 14:12:07 INFO mapred.MapTask: io.sort.mb = 100
> 11/10/26 14:12:07 INFO mapred.MapTask: data buffer = 79691776/99614720
> 11/10/26 14:12:07 INFO mapred.MapTask: record buffer = 262144/327680
> 11/10/26 14:12:07 WARN mapred.LocalJobRunner: job_local_0001
> java.lang.IllegalArgumentException: Can't read partitions file
>        at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
> Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
>        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:383)
>        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>        at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:776)
>        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>        at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
>        at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:82)
>        ... 6 more
>
> --
> View this message in context: 
> http://old.nabble.com/Caused-by%3A-java.io.FileNotFoundException%3A-File-_partition.lst-does-not-exist.-tp32727792p32727792.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>
