On Mar 5, 2008, at 9:31 AM, Rahul Sood wrote:
Hi,

We have a Pipes C++ application where the reduce task does a lot of computation. After some time the task is killed by the Hadoop framework, and the job output shows the following error:

Task task_200803051654_0001_r_000000_0 failed to report status for 604 seconds. Killing!

Is there any way to send a heartbeat to the TaskTracker from a Pipes application? I believe this is possible in Java using org.apache.hadoop.util.Progress, and we are looking for something equivalent in the C++ Pipes API.
The context object has a progress() method that should be called periodically during long computations; see http://tinyurl.com/yt7hyx and search for "progress".

-- Owen
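For reference, a minimal sketch of what that looks like in a Pipes reducer: the HadoopPipes::TaskContext interface (which ReduceContext inherits) exposes progress(), so calling it every so often inside the value loop keeps the TaskTracker from timing out the task. The class name and the every-1000-values cadence below are illustrative, not from the thread, and the snippet assumes the standard hadoop/Pipes.hh header.

```cpp
#include <string>
#include "hadoop/Pipes.hh"

// Illustrative reducer that heartbeats the TaskTracker during a long
// computation by calling progress() on the context object.
class LongRunningReducer : public HadoopPipes::Reducer {
public:
  LongRunningReducer(HadoopPipes::TaskContext& /*context*/) {}

  void reduce(HadoopPipes::ReduceContext& context) {
    long processed = 0;
    while (context.nextValue()) {
      // ... expensive per-value computation on context.getInputValue() ...
      if (++processed % 1000 == 0) {
        // Report liveness so the framework does not kill the task
        // after the configured timeout (mapred.task.timeout, 600s
        // by default) elapses without a status report.
        context.progress();
      }
    }
    context.emit(context.getInputKey(), "done");
  }
};
```

The reducer is registered with runTask()/TemplateFactory as usual; context.setStatus() can additionally attach a human-readable status string alongside the heartbeat.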
