Did you configure Hadoop to store your HDFS instance/data somewhere other than /tmp? See the single-node setup guide in the Hadoop docs.
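As a sketch of what that looks like: by default hadoop.tmp.dir (which the HDFS data and name directories derive from) lives under /tmp, which the OS may clear on reboot. The property below can be overridden in core-site.xml; the path /var/hadoop/tmp is just an example, pick any persistent directory. Note that after relocating it, the NameNode typically needs to be reformatted, which erases any existing HDFS data.

```xml
<!-- core-site.xml: move Hadoop's working directory off /tmp.
     The value shown is an example path, not a requirement. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/hadoop/tmp</value>
</property>
```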
On Tue, Jul 17, 2012 at 12:07 PM, Shrestha, Tejen [USA] <[email protected]> wrote:
> This is the error that was produced.
>
> java.io.FileNotFoundException: File /tmp/files does not exist.
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
>         at org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:509)
>         at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:644)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>         at com.bah.applefox.plugins.loader.NGramLoader.run(NGramLoader.java:302)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at com.bah.applefox.ingest.Ingest.main(Ingest.java:133)
>
> On 7/17/12 12:50 PM, "Eric Newton" <[email protected]> wrote:
>
>> You will need to look in the master/tserver logs for the reason.
>>
>> -Eric
>>
>> On Tue, Jul 17, 2012 at 11:03 AM, Shrestha, Tejen [USA]
>> <[email protected]> wrote:
>>> Below is the line I am using to do the Bulk Import:
>>>
>>>     conn.tableOperations().importDirectory(table, dir, failureDir, false);
>>>
>>> Where conn is the connector to the ZooKeeper instance. The problem is the
>>> error: "Internal error processing waitForTableOperation."
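For what it's worth, the FileNotFoundException above is RawLocalFileSystem complaining, which suggests the DistributedCache entry resolves /tmp/files against the local filesystem (i.e. fs.default.name is local, or the file was never staged). A hedged sketch of checking and staging the file before job submission; the copy step and the "/local/path/files" source are assumptions for illustration, not from the original job:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CacheFileCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Resolves against fs.default.name: HDFS when the cluster config
        // is on the classpath, the local filesystem otherwise.
        FileSystem fs = FileSystem.get(conf);

        Path cacheFile = new Path("/tmp/files"); // path from the stack trace
        if (!fs.exists(cacheFile)) {
            // Hypothetical fix: stage the file onto the job's FileSystem
            // first. "/local/path/files" is a placeholder source path.
            fs.copyFromLocalFile(new Path("/local/path/files"), cacheFile);
        }
        // Only register the file once it exists on the job's FileSystem.
        DistributedCache.addCacheFile(new URI("/tmp/files"), conf);
    }
}
```

This sketch needs a Hadoop installation and cluster configuration on the classpath to run, so it is not independently testable here.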
