I think you just need to write to stderr. My understanding is that
Hadoop is happy as long as input is being consumed, output is being
generated, or status is being reported.
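For what it's worth, the C++ Pipes API also seems to expose this
directly: hadoop/Pipes.hh declares TaskContext::setStatus() and
TaskContext::progress(), which should be the C++ counterparts of the
Java Reporter calls. Here is a minimal sketch, assuming those methods
exist in your Hadoop version (the class names below are just
illustrative):

    #include "hadoop/Pipes.hh"
    #include "hadoop/TemplateFactory.hh"
    #include "hadoop/StringUtils.hh"

    // Trivial pass-through mapper, only here so the factory compiles.
    class PassMapper : public HadoopPipes::Mapper {
    public:
      PassMapper(HadoopPipes::TaskContext& context) {}
      void map(HadoopPipes::MapContext& context) {
        context.emit(context.getInputKey(), context.getInputValue());
      }
    };

    // Reducer that does heavy per-value work but periodically pings
    // the TaskTracker so the framework does not kill it for
    // inactivity.
    class SlowReducer : public HadoopPipes::Reducer {
    public:
      SlowReducer(HadoopPipes::TaskContext& context) {}
      void reduce(HadoopPipes::ReduceContext& context) {
        int count = 0;
        while (context.nextValue()) {
          // ... expensive computation on context.getInputValue() ...
          if (++count % 1000 == 0) {
            // Either call should reset the "failed to report status"
            // timer; setStatus() also shows up in the job's web UI.
            context.setStatus("processed " +
                              HadoopUtils::toString(count) + " values");
            context.progress();
          }
        }
        context.emit(context.getInputKey(),
                     HadoopUtils::toString(count));
      }
    };

    int main(int argc, char** argv) {
      return HadoopPipes::runTask(
          HadoopPipes::TemplateFactory<PassMapper, SlowReducer>());
    }

Calling either method every few thousand values should be cheap and
frequent enough to stay well inside the 600-second timeout.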
Rahul Sood wrote:
Hi,
We have a Pipes C++ application whose reduce task does a lot of
computation. After some time, the task is killed by the Hadoop
framework, and the job output shows the following error:
Task task_200803051654_0001_r_000000_0 failed to report status for 604
seconds. Killing!
Is there any way to send a heartbeat to the TaskTracker from a Pipes
application? I believe this is possible in Java using
org.apache.hadoop.util.Progress, and we're looking for something
equivalent in the C++ Pipes API.
-Rahul Sood