This doesn't solve your stderr/stdout problem, but you can always set the
timeout to a larger value if necessary:

-Dmapred.task.timeout=______ (in milliseconds)
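
For example (the jar and input/output paths below are placeholders for
whatever your install uses; 1800000 ms is 30 minutes, and a value of 0
disables the timeout entirely):

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-0.20.0-streaming.jar \
        -D mapred.task.timeout=1800000 \
        -input /path/to/input -output /path/to/output \
        -mapper mapper.py -file mapper.py \
        -reducer NONE

Note that in 0.20 the generic -D options have to come before the
streaming-specific options.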

Koji


On 10/25/09 12:00 PM, "Ryan Rosario" <uclamath...@gmail.com> wrote:

> I am using a Python script as a mapper for a Hadoop Streaming (hadoop
> 0.20.0) job, with reducer NONE. My jobs keep getting killed with "task
> failed to respond after 600 seconds." I tried sending a heartbeat
> every minute to stderr using sys.stderr.write in my mapper, but
> nothing is being output to stderr either on disk (in
> logs/userlogs/...) or in the web UI. stdout is not even recorded.
> 
> This also means I have no way of knowing what my tasks are doing at
> any given moment except to look at the counts produced in syslog.
> 
> I got it to work once, but have not had any luck since. Any
> suggestions of things to look at as to why I am not able to get any
> output? Help is greatly appreciated.
> 
> - Ryan
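
One thing worth trying for the heartbeat: rather than plain writes to
stderr, use the streaming reporter protocol. Hadoop Streaming treats
stderr lines that start with "reporter:status:<message>" as status
updates and "reporter:counter:<group>,<counter>,<amount>" as counter
increments, and both count as task progress. A minimal sketch of a
mapper doing this (the record handling here is hypothetical, and the
explicit flushes are a guess at why nothing was showing up, in case
the output was sitting in a buffer):

    #!/usr/bin/env python
    # Illustrative streaming mapper: echoes each input record and
    # reports progress to the framework every 10000 records via the
    # "reporter:status:" convention on stderr.
    import sys

    for n, line in enumerate(sys.stdin):
        record = line.rstrip("\n")
        sys.stdout.write(record + "\t1\n")
        if n % 10000 == 0:
            sys.stderr.write("reporter:status:processed %d records\n" % n)
            sys.stderr.flush()  # don't let the report sit in a buffer
            sys.stdout.flush()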
