On 1/28/09 7:42 PM, "Andy Liu" <andyliu1...@gmail.com> wrote:
> I'm running Hadoop 0.19.0 on Solaris (SunOS 5.10 on x86) and many jobs are
> failing with this exception:
> 
> Error initializing attempt_200901281655_0004_m_000025_0:
> java.io.IOException: Cannot run program "chmod": error=12, Not enough space
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
...
> at java.lang.UNIXProcess.forkAndExec(Native Method)
> at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
> at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
> ... 20 more
> 
> However, all the disks have plenty of disk space left (over 800 gigs).  Can
> somebody point me in the right direction?

    "Not enough space" is usually SysV kernel speak for "not enough virtual
memory to swap".  See how much mem you have free.

