On 15/09/11 10:14, Junping Du wrote:
Hello Arun and all,
          I think current Hadoop has a good capability to scale out, but it is not so good at
scaling in. Because it was designed for dedicated clusters and machines, the "scale in"
capability has not received much attention for a long time. However, I have noticed that more and
more users deploy Hadoop clusters in the cloud (EC2, Eucalyptus, etc.) or on shared
infrastructure (VMware, Xen), where a "scale in" capability would free up resources for other
clusters or applications. The current "scale in" solution (as you proposed in a previous mail)
has some significant drawbacks:
          1. It does not handle the scale-in case in a formal way, but rather as a
temporary workaround based on a disaster-recovery mechanism.
          2. It is not convenient: the Hadoop admin has to manually kill datanodes
one by one (in fact, at most N - 1 at a time, where N is the replica number, to avoid
possible data loss) and then wait for the replicas to come back. How much time and effort
would it take to shrink a cluster from 1000 nodes to 500 nodes? (A rough estimate is
sketched just after this list.)
          3. It is not efficient, because it is not well planned. Say nodes A, B and C
should all be eliminated from the cluster. A and B are eliminated first (suppose N = 3),
and it is possible that C receives some of the replicas of blocks that were on A or B,
only to be drained again later. This problem becomes serious when a big shrink happens.
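
To make the effort in drawback 2 concrete, here is a rough back-of-the-envelope sketch. The
N - 1 per-round limit comes from the replication argument above; the 30 minutes of
re-replication per round is just a made-up placeholder:

# Rough estimate of the manual shrink: with replication factor N, at most
# N - 1 datanodes can be taken down per round, then the admin has to wait
# for re-replication before starting the next round.
def manual_shrink_rounds(current_nodes, target_nodes, replicas=3,
                         minutes_per_round=30):
    nodes_to_remove = current_nodes - target_nodes
    per_round = replicas - 1                   # at most N - 1 at a time
    rounds = -(-nodes_to_remove // per_round)  # ceiling division
    return rounds, rounds * minutes_per_round

rounds, minutes = manual_shrink_rounds(1000, 500)
print("%d rounds, roughly %.1f days of babysitting" % (rounds, minutes / 60.0 / 24))
# -> 250 rounds, roughly 5.2 days of babysitting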
          Thus, I think it is worth a good discussion on how to give Hadoop this
cool "elastic" feature. Here I volunteer to propose one possible
solution, and I welcome better ones:
          1. We can consider breaking the assumption that a DataNode and a TaskTracker
always coexist on one machine, and let some machines run only a task node. I
think network traffic inside a rack is not so expensive, but you may object that
it wastes some local I/O on machines that only run a task node. Hey, don't
look at these machines as dedicated resources for this Hadoop cluster. They can
be used by other clusters and applications (so they will be reclaimed at some
point). To this cluster, these machines are better than nothing, right?
           2. The fraction of machines in the whole cluster that run only a task node is an "elastic" factor
for the cluster. For example, if the cluster should be able to scale between
500 and 1000 nodes, the elastic factor could be 1/2: 500 normal machines
running both data and task nodes, plus another 500 machines running a task node only.
           3. The elastic factor can be configured by the Hadoop admin, and
non-dedicated machines in the cluster can be marked through a script, similar to
what is done for rack awareness. (A sketch of such a marking script follows below.)
           4. A single command is provided to the Hadoop admin to shrink the cluster to
the target size directly. A policy can be applied here for waiting, or not
waiting, for tasks to complete. If the target size is smaller than elastic factor * current
size, some data nodes will be killed too, but in a well-planned way. (The planning
logic is also sketched below.)
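
To make point 3 concrete, the marking could work much like a rack-awareness topology
script: the admin tooling calls a script with hostnames and gets back one label per host.
A minimal sketch; the file name elastic-nodes.txt and the DEDICATED/ELASTIC labels are
just assumptions for illustration:

#!/usr/bin/env python
# Minimal sketch of a marking script in the style of a rack-awareness
# topology script: it receives hostnames as arguments and prints one
# label per host. "elastic-nodes.txt" (one non-dedicated hostname per
# line) and the DEDICATED / ELASTIC labels are made-up conventions.
import sys

def load_elastic_hosts(path="elastic-nodes.txt"):
    with open(path) as f:
        return set(line.strip() for line in f if line.strip())

def main():
    elastic = load_elastic_hosts()
    for host in sys.argv[1:]:
        print("ELASTIC" if host in elastic else "DEDICATED")

if __name__ == "__main__":
    main()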
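
And for point 4, a rough sketch of the planning behind the one-command shrink: task-only
nodes hold no HDFS blocks and can go immediately; only when the target drops below the
number of dedicated nodes do datanodes get decommissioned, in batches of at most N - 1
per round. Node names and N = 3 are made up for illustration; choosing which datanodes to
drain (and coordinating with the balancer) is the real work and is left out here.

# Rough sketch of the shrink planning: remove task-only (elastic) nodes
# first; only if the target size is smaller than the number of dedicated
# nodes do we plan datanode decommission, in batches of at most N - 1 so
# that no block can lose all of its replicas in one round.
def plan_shrink(dedicated, task_only, target_size, replicas=3):
    current = len(dedicated) + len(task_only)
    to_remove = current - target_size
    if to_remove <= 0:
        return [], []                  # nothing to shrink

    # Task-only nodes hold no HDFS blocks, so they can all go at once.
    kill_now = task_only[:to_remove]
    to_remove -= len(kill_now)

    # Remaining removals hit datanodes: batch them so at most N - 1
    # datanodes are decommissioned in any one round.
    batch = replicas - 1
    victims = dedicated[:to_remove]
    rounds = [victims[i:i + batch] for i in range(0, len(victims), batch)]
    return kill_now, rounds

# Example: shrink a toy cluster of 4 dedicated + 2 task-only nodes to 3.
kill_now, rounds = plan_shrink(["d1", "d2", "d3", "d4"], ["t1", "t2"], 3)
print(kill_now)  # ['t1', 't2']
print(rounds)    # [['d1']]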
           My 2 cents.


These are all good ideas. The other trick, which has been discussed recently in the context of the Platform Scheduler, is to run HDFS across all nodes, but switch the workload of the cluster between Hadoop jobs (MR, Graph, Hamster) and other work (Grid jobs). That way the filesystem is just a very large FS for anything. If some grid jobs don't use HDFS, the nodes can still serve up their data.
