Hello!

I had the same kind of problem. In my case it was caused by one of the nodes
of my cluster running out of memory, so to solve the problem I simply freed up
memory on that node. Check whether all of the nodes in your cluster have free
memory.
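
For reference, you can check each node with standard Linux commands (the
local-dir path below is just an example; it's also worth checking disk space,
since the spill file in your trace lives under Hadoop's configured local
directories):

    # available RAM and swap on this node
    free -m
    # free disk space under mapred.local.dir (substitute your own path)
    df -h /tmp/hadoop/mapred/local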

As for the second error, the NoClassDefFoundError for
org/apache/commons/configuration/Configuration means a library is missing from
your classpath: hadoop-core 0.20.203.0 depends on Apache Commons
Configuration, so try adding the commons-configuration jar alongside your
other Hadoop jars.
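
If you run Nutch in local mode, something like the following should do it
(the $HADOOP_HOME path and the jar version are examples; use whatever matches
your setup):

    # check whether the jar is already there
    ls $HADOOP_HOME/lib | grep commons-configuration
    # if it is missing, copy it next to the other Hadoop jars
    cp commons-configuration-1.6.jar $HADOOP_HOME/lib/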


Sent from my iPhone

On 30 Apr 2012, at 15:15, Igor Salma <[email protected]> wrote:

> Hi to all,
> 
> We're having trouble with nutch when trying to crawl. Nutch version 1.4,
> Hadoop 0.20.2. (working in local mode). After 2 days of crawling we've got:
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
> taskTracker/jobcache/job_local_0015/attempt_local_0015_m_000000_0/output/spill0.out
> in any of the configured local directories
>    at
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
>    at
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
>    at
> org.apache.hadoop.mapred.MapOutputFile.getSpillFile(MapOutputFile.java:94)
>    at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1443)
>    at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1154)
>    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:359)
>    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>    at
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
> 
> We've looked at the mailing list archives, but I'm not sure this exact issue
> is mentioned. We tried upgrading to hadoop-core-0.20.203.0.jar, but then this
> is thrown:
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/commons/configuration/Configuration
> 
> Can someone, please, shed some light on this?
> 
> Thanks.
> Igor
