The stack trace you sent has:

at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
Which means it's not using your JobTracker. It means one of two things:
- you don't have one, in which case you need one
- you have one, but you run importtsv via HBase and didn't configure it to know about your JT, in which case you need to add the hadoop conf dir to HBase's classpath. Or do it the other way around following this (a rough sketch of both options follows below the quoted thread):
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath

J-D

On Tue, Nov 22, 2011 at 11:11 AM, Ales Penkava <[email protected]> wrote:
> I do have 3 servers - so I guess I do have a fully distributed setup.
>
> I found the link you sent, just not sure I have the same issue.
>
> Thx
> Ales
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Jean-Daniel Cryans
> Sent: November-22-11 2:09 PM
> To: [email protected]
> Subject: Re: importtsv bulk upload fail
>
> Same answer as last time this was asked:
> http://search-hadoop.com/m/rUV9on6kWA1
>
> You can't do this without a fully distributed setup.
>
> J-D
>
> On Tue, Nov 22, 2011 at 10:33 AM, Ales Penkava <[email protected]> wrote:
>> Hello, I am on CDH3 trying to perform a bulk upload, but the following error occurs each time:
>>
>> WARN mapred.LocalJobRunner: job_local_0001
>> java.lang.IllegalArgumentException: Can't read partitions file
>>         at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
>>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>>         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
>>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
>>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>         at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
>> Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
>>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:383)
>>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>>         at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:776)
>>         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>>         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>>         at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
>>         at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:82)
>>         ... 6 more
>>
>> Classic upload works fine, but it is slow.
>>
>> Thx for any ideas.
>> Ales
>>
>
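If it's the classpath case, here's a rough sketch of both options. Treat it as an
illustration, not a recipe: the conf dir, jar version, table name, column names and
paths are placeholders you'd swap for your own CDH3 layout; HBASE_CLASSPATH,
"hbase classpath" and the importtsv driver are the documented hooks the linked page
talks about.

# Option 1: tell HBase about your Hadoop conf dir, so jobs launched with
# "hbase ..." pick up mapred-site.xml and therefore your JobTracker.
# In conf/hbase-env.sh (path is typical for CDH, adjust to your install):
export HBASE_CLASSPATH=/etc/hadoop/conf

# Option 2 (the "other way around" from the link): launch through hadoop
# and put HBase's jars and conf on its classpath instead.
HADOOP_CLASSPATH=`hbase classpath` \
  hadoop jar /usr/lib/hbase/hbase-0.90.4-cdh3u2.jar importtsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,f1:c1 \
    -Dimporttsv.bulk.output=/tmp/bulkout \
    mytable /user/ales/input.tsv

Either way the job should run on the real cluster instead of the LocalJobRunner,
and the _partition.lst file that the bulk-load path writes should then be found
(it lives in HDFS / the distributed cache, which is presumably why the local
runner can't see it and throws the FileNotFoundException above).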
