Amandeep,

Does the job fail after that happens? Are there any WARN or ERROR lines in
the log nearby, or any exceptions?

Three possibilities I can think of:

You may have configured Hadoop to run under /tmp, and tmpwatch or a similar
cleanup utility decided to throw away a bunch of files in the temp space
while your job was running. In this case, you should consider moving
hadoop.tmp.dir and mapred.local.dir out from under the default /tmp.
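
For reference, the relevant properties would look something like this in
conf/core-site.xml and conf/mapred-site.xml (the /var/lib paths below are
just examples; pick any location outside /tmp with enough space):

```xml
<!-- conf/core-site.xml -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/lib/hadoop/tmp</value>
</property>

<!-- conf/mapred-site.xml -->
<property>
  <name>mapred.local.dir</name>
  <!-- Can also be a comma-separated list of directories,
       one per disk, to spread I/O across spindles. -->
  <value>/var/lib/hadoop/mapred/local</value>
</property>
```

Restart the TaskTrackers after changing these so the new paths take effect.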

You might be out of disk space?
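
A quick way to rule that out on each node (substitute whatever path your
mapred.local.dir actually points at; /tmp here is just an example):

```shell
# Show free space on the filesystem holding the task-local directories.
df -h /tmp
```

If the Use% column is at or near 100% on any TaskTracker node, that would
explain intermediate map/reduce output going missing.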

mapred.local.dir or hadoop.tmp.dir might be set to paths that Hadoop doesn't
have the privileges to write to?
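
You can check that quickly by running something like this as the user the
TaskTracker daemon runs as (the paths below are placeholders for your
configured directories):

```shell
# Report whether each configured local directory is writable by the
# current user. Substitute your hadoop.tmp.dir / mapred.local.dir values.
for d in /var/lib/hadoop/tmp /var/lib/hadoop/mapred/local; do
  if [ -w "$d" ]; then
    echo "$d: writable"
  else
    echo "$d: NOT writable"
  fi
done
```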

- A


On Thu, Jul 23, 2009 at 2:06 AM, Amandeep Khurana <[email protected]> wrote:

> Hi
>
> I get these messages in the TT log while running a job:
>
> 2009-07-23 02:03:59,091 INFO org.apache.hadoop.mapred.TaskTracker:
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
>
> taskTracker/jobcache/job_200907221738_0020/attempt_200907221738_0020_r_000000_0/output/file.out
> in any of the configured local directories
>
> What's the problem?
>
> Amandeep
>
>
>
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
>