I looked for a way to do what you want, but I couldn't find one.

As far as I know, the Hadoop framework doesn't provide a way to stop an
entire job at an arbitrary moment.

I'd like to suggest using a Counter. For example, when you want to stop
the entire job, you set the Counter. Every mapper instance should check
the value of the Counter before proceeding; if the Counter is set, the
running mapper should stop execution. When the remaining mapper
instances are picked up by the JobTracker and start running, they see
that the Counter is set and return immediately.
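
Here is a rough sketch of the mapper side of that idea. The StopFlag
enum, shouldStop(), and foundTargetValue() below are placeholders of
mine, not Hadoop API. Note also that context.getCounter() gives you the
task-local counter, so actually seeing increments made by other tasks
would require polling the job's aggregated counters through the
JobTracker (or simply having the driver kill the job):

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class StoppableMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        // Hypothetical counter used as a job-wide stop flag.
        public static enum StopFlag { STOP_REQUESTED }

        @Override
        public void run(Context context)
                throws IOException, InterruptedException {
            setup(context);
            while (context.nextKeyValue()) {
                if (shouldStop(context)) {
                    break;  // stop this task early
                }
                map(context.getCurrentKey(),
                    context.getCurrentValue(), context);
            }
            cleanup(context);
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            if (foundTargetValue(value)) {
                // Raise the flag so that other tasks can notice it.
                context.getCounter(StopFlag.STOP_REQUESTED).increment(1);
            }
        }

        private boolean shouldStop(Context context) {
            // Placeholder: this only sees this task's own increments.
            // A real implementation would ask the JobTracker for the
            // job-wide aggregated counter value.
            return context.getCounter(StopFlag.STOP_REQUESTED)
                    .getValue() > 0;
        }

        private boolean foundTargetValue(Text value) {
            return false;  // replace with your actual stop condition
        }
    }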

Look at the following sample code; it is a snippet from my mapper
class. In this code I am using the Configuration object, but you can
change the implementation to use a Counter instead.

        @Override
        public void run(Context context)
                        throws IOException, InterruptedException {
                // Run this task only when no "debug" offset is configured,
                // or when this task's input split starts at the configured
                // byte offset.
                boolean run = true;
                long start = context.getConfiguration().getLong("debug", -1);
                if (start != -1) {
                        run = (start ==
                                ((RAGZIPFileSplit) context.getInputSplit()).getStart());
                }

                if (run) {
                        setup(context);
                        while (context.nextKeyValue()) {
                                map(context.getCurrentKey(),
                                        context.getCurrentValue(), context);
                        }
                        cleanup(context);
                }
        }
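
For completeness, the driver side of this configuration trick just has
to set the split offset before submitting the job. A minimal sketch
(the offset value and job name here are only examples):

    // Driver side: restrict execution to the mapper whose input split
    // starts at the given byte offset.
    Configuration conf = new Configuration();
    conf.setLong("debug", 1048576L);  // example offset
    Job job = new Job(conf, "debug-single-split");

With a Counter-based implementation you would instead have each mapper
raise and check the flag as sketched earlier.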

2010/5/4 Ersoy Bayramoglu <[email protected]>:
> Hi,
>
> I'm a new user. I have a question about aborting an ongoing mapreduce job. If
> one of the mappers computes a particular value, I'd like to stop the entire
> job and give control back to the master. Is this possible in Hadoop?
>
