Hi All,

I'm running a fairly basic map/reduce job with 10 or so map tasks. During
the job's execution, the entire stack (and my OS, for that matter) starts
failing because it is unable to fork() new processes.
It seems Hadoop (1.0.3) is creating 700+ threads and exhausting this
resource; RAM utilisation is fine, however.
This still occurs with ulimit set to unlimited.
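
For reference, here's a minimal sketch of the kind of standalone diagnostic
I can run to see where the threads in a JVM are going (the ThreadDump class
name is just illustrative; it only uses the standard java.lang.management
API):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Illustrative diagnostic only: prints the live thread count and the
    // name of each thread in the current JVM, to help spot what is
    // spawning them. Run it inside (or alongside) a task JVM.
    public class ThreadDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            System.out.println("Live threads: " + mx.getThreadCount());
            for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
                if (info != null) { // a thread may have exited in between calls
                    System.out.println(info.getThreadId() + "  " + info.getThreadName());
                }
            }
        }
    }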

Any ideas or advice would be great; it seems very sketchy for a job that
doesn't require much grunt.

Cheers!
