Hello Arun and all,
         I think the current Hadoop has a good capability to scale out, but 
it is not so good at scaling in. Since it was designed for dedicated clusters 
and machines, the "scale in" capability has not received much attention for a 
long time. However, I have noticed more and more users deploying Hadoop 
clusters in the cloud (EC2, Eucalyptus, etc.) or on shared infrastructure 
(VMware, Xen), where a "scale in" capability would free up resources for 
other clusters or applications. The current "scale in" solution (as you 
proposed in your previous mail) has some significant drawbacks:
         1. It doesn't handle the scale-in case in a formal way, but rather 
as a temporary workaround based on a disaster-recovery mechanism.
         2. It is not convenient. The Hadoop admin has to manually kill 
DataNodes one by one (in fact, at most N-1 at a time, where N is the replica 
number, to avoid possible data loss) and wait for the replicas to come back. 
To shrink a cluster from 1000 nodes to 500 nodes, how much time and effort 
would that take? (A rough sketch of this manual loop follows the list.)
         3. It is not efficient, because it is not well planned. Say nodes A, 
B, and C should all be removed from the cluster. At first, A and B are 
removed (suppose N = 3), and it is possible that C receives some replicas of 
blocks from A or B; that re-replication work is then wasted when C is removed 
in turn. This problem is serious when a big shrink happens.
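         To make point 2 concrete, here is a rough, hypothetical sketch of 
the manual loop an admin has to run today (the node names and batch size are 
made up, and it assumes a replica number N = 3, so at most 2 DataNodes can be 
killed per batch):

    # Kill at most N-1 = 2 DataNodes per batch to avoid data loss.
    for node in dn-0501 dn-0502; do
      ssh "$node" "$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode"
    done
    # Block until HDFS has re-replicated everything before the next batch.
    until hadoop fsck / | grep -q 'Under-replicated blocks:[[:space:]]*0 '; do
      sleep 60
    done

Repeating this batch by hand for hundreds of nodes is exactly the pain 
described above.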
         Thus, I think it is necessary to have a good discussion on how to 
give Hadoop this cool "elastic" feature. Here I volunteer to propose one 
possible solution, and I welcome better ones (sketches illustrating points 
1-4 follow the list):
         1. We can consider breaking the assumption that a DataNode and a 
TaskTracker coexist on every machine, and let some machines run only a task 
node. I think network traffic inside a rack is not so expensive, but you may 
say that a machine with only a task node wastes some local I/O capacity. Hey, 
don't look at these machines as resources dedicated to this Hadoop cluster: 
they can be used by other clusters and applications (so they should be 
removed at some point). To this cluster, these machines are better than 
nothing, right?
          2. The percentage of machines with only a task node in the whole 
cluster is an "elastic" factor for the cluster. For example, if this cluster 
should scale between 500 and 1000 nodes, the elastic factor could be 1/2: 500 
normal machines with both data and task nodes, and another 500 machines with 
a task node only.
          3. The elastic factor can be configured by the Hadoop admin, and 
the non-dedicated machines in the cluster can be marked through a script, 
similar to what has been done for rack awareness.
          4. One command is provided so the Hadoop admin can shrink the 
cluster to the target size directly. A policy can be applied here to wait (or 
not wait) for running tasks to complete. If the target size is smaller than 
elastic factor * current size, some DataNodes will be killed too, but in a 
well-planned way.
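         A sketch for point 1: a TaskTracker-only machine is just the 
existing 0.20-style slave setup without a DataNode (the hostname below is 
made up):

    # mapred-site.xml on the task-only machine points at the JobTracker:
    #   <property>
    #     <name>mapred.job.tracker</name>
    #     <value>jt-host:9001</value>
    #   </property>
    # Start only the TaskTracker daemon; no DataNode runs on this machine.
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker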
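         A sketch for points 2 and 3: the setup could mirror rack awareness. 
Note that the property names "dfs.elastic.factor" and 
"elastic.node.marker.script" below are made up for illustration; they do not 
exist in Hadoop today:

    # hdfs-site.xml (hypothetical properties):
    #   <property>
    #     <name>dfs.elastic.factor</name>
    #     <value>0.5</value>  <!-- half the machines are task-node only -->
    #   </property>
    #   <property>
    #     <name>elastic.node.marker.script</name>
    #     <value>/etc/hadoop/mark-elastic.sh</value>
    #   </property>
    # mark-elastic.sh, like a topology script, takes a hostname and prints
    # whether that machine is dedicated or non-dedicated (task node only):
    case "$1" in
      dn-0[0-4]*) echo dedicated ;;  # first 500 machines: data + task node
      *)          echo task-only ;;  # remaining machines: task node only
    esac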
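         And a sketch for point 4: the admin-facing command might look like 
the following (the syntax is entirely hypothetical, shown only to make the 
idea concrete):

    # Shrink the cluster to 600 nodes, waiting for running tasks to finish.
    # Task-only machines are released first; only if the target is below
    # elastic factor * current size are DataNodes decommissioned as well,
    # in a planned order (unlike the ad-hoc loop in point 2 above).
    hadoop mradmin -shrink 600 -policy wait-tasks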
          My 2 cents.

Thanks,

Junping


________________________________
From: Arun C Murthy <a...@hortonworks.com>
To: common-dev@hadoop.apache.org
Sent: Thursday, September 15, 2011 5:24 AM
Subject: Re: Adding Elasticity to Hadoop MapReduce


On Sep 14, 2011, at 1:27 PM, Bharath Ravi wrote:

> Hi all,
> 
> I'm a newcomer to Hadoop development, and I'm planning to work on an idea
> that I wanted to run by the dev community.
> 
> My apologies if this is not the right place to post this.
> 
> Amazon has an "Elastic MapReduce" Service (
> http://aws.amazon.com/elasticmapreduce/) that runs on Hadoop.
> The service allows dynamic/runtime changes in resource allocation: more
> specifically, varying the number of
> compute nodes that a job is running on.
> 
> I was wondering if such a facility could be added to the publicly available
> Hadoop MapReduce.

For a long while you have been able to bring up either DataNodes or 
TaskTrackers, point them (via config) at the NameNode/JobTracker, and they 
will become part of the cluster.

Similarly you can just kill the DataNode or TaskTracker and the respective 
masters will deal with their loss.
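For example (0.20-era configuration; the hostnames below are made up):

    # On the new machine, point the slave daemons at the masters:
    #   core-site.xml:    fs.default.name    = hdfs://nn-host:9000
    #   mapred-site.xml:  mapred.job.tracker = jt-host:9001
    # Then start whichever daemons this machine should run:
    $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker
    # Stopping the daemons later removes the node from the cluster:
    $HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
    $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode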

Arun
