This makes a bit of sense, but you have to worry about the inertia of the
data.  Adding compute resources is easy; adding data resources, not so
much.  And if the computation is not near the data, it is likely to be
much less effective.
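
To make the asymmetry concrete: in classic Hadoop (1.x / MRv1), compute
capacity can be grown just by starting an extra TaskTracker on a new node,
while storage lives in HDFS DataNodes whose blocks would have to be moved.
A rough sketch of the compute-only case, assuming an existing cluster whose
JobTracker runs at jt.example.com (hypothetical hostname):

```shell
# On the new compute-only node: point it at the existing JobTracker via
# mapred-site.xml, then start only the TaskTracker daemon. No DataNode is
# started, so the node adds map/reduce slots but holds no HDFS blocks --
# every input split it processes must be read over the network.
cat > "$HADOOP_HOME/conf/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- hypothetical JobTracker address -->
    <value>jt.example.com:9001</value>
  </property>
</configuration>
EOF

"$HADOOP_HOME/bin/hadoop-daemon.sh" start tasktracker
```

Growing the data side, by contrast, means adding DataNodes and running the
HDFS balancer to shuffle blocks across the network, which is why
compute-only elasticity tends to lose data locality.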

On Wed, Sep 14, 2011 at 4:27 PM, Bharath Ravi <bharathra...@gmail.com> wrote:

> Hi all,
>
> I'm a newcomer to Hadoop development, and I'm planning to work on an idea
> that I wanted to run by the dev community.
>
> My apologies if this is not the right place to post this.
>
> Amazon has an "Elastic MapReduce" Service (
> http://aws.amazon.com/elasticmapreduce/) that runs on Hadoop.
> The service allows dynamic/runtime changes in resource allocation: more
> specifically, varying the number of
> compute nodes that a job is running on.
>
> I was wondering if such a facility could be added to the publicly available
> Hadoop MapReduce.
>
> Does this idea make sense, has any previous work been done on this?
> I'd appreciate it if someone could point me in the right direction to find out more!
>
> Thanks a lot in advance!
> --
> Bharath Ravi
>
