Hello

Currently I am using Hadoop version 0.19.0.

During my Map task I allocate a variety of resources: I open and read
files and, in particular, spawn a separate Java Process that executes a
system call to do some work. This process can potentially run for a
long time.

If, however, the current task attempt is killed (or fails), I need to
free the resources associated with the map, which includes killing the
child process. Is there a method or hook I could override that is always
executed on task completion (whether it succeeds, fails, or is killed)?
I have tried overriding the close() method in MapReduceBase, but when I
killed the job, it was not executed.
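One workaround I have been considering (not Hadoop-specific, just plain Java, so treat it as a sketch under assumptions): since each task attempt runs in its own child JVM, a JVM shutdown hook could destroy the spawned process when the task JVM goes down. The caveat is that shutdown hooks only run if the JVM is terminated gracefully (e.g. SIGTERM or a normal exit), not on SIGKILL; whether the TaskTracker's kill path in 0.19.0 allows hooks to run is an assumption here. The class name and spawn() helper below are my own invention for illustration:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

public class ChildProcessCleanup {

    // Holds the long-running external process spawned by the map task.
    private static final AtomicReference<Process> CHILD =
            new AtomicReference<Process>();

    static {
        // Registered once per task JVM. Runs on normal exit or SIGTERM,
        // but NOT if the JVM is killed with SIGKILL (an assumption about
        // how the TaskTracker terminates the child JVM).
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                Process p = CHILD.get();
                if (p != null) {
                    p.destroy(); // send the child process a termination signal
                }
            }
        });
    }

    // Spawn the external command and remember it for cleanup.
    public static Process spawn(String... command) throws IOException {
        Process p = new ProcessBuilder(command).start();
        CHILD.set(p);
        return p;
    }
}
```

The map task would then call ChildProcessCleanup.spawn(...) instead of starting the Process directly, so the hook can find and destroy it even if close() is never invoked.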

Any help would be greatly appreciated.

Kind regards
-- 
Andrich van Wyk
