Map/Reduce was designed primarily for parallel batch processing of text.
In this respect, it's best suited to data sets that are homogeneous and can
be divided (mapped) and merged (reduced) without affecting the algorithm's
efficacy.
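
To make that concrete, here is a minimal word-count sketch of the map/reduce pattern in plain Python (not Hadoop's actual API): each input line is mapped to partial counts independently, and because the merge step is associative, how the data is split does not change the result.

```python
from functools import reduce
from collections import Counter

def map_phase(line):
    # Emit a partial word count for one line, independent of all other lines.
    return Counter(line.split())

def reduce_phase(counts_a, counts_b):
    # Merging partial counts is associative, so the way the input was
    # divided does not affect the final result.
    return counts_a + counts_b

lines = ["the quick brown fox", "the lazy dog", "the fox"]
total = reduce(reduce_phase, map(map_phase, lines), Counter())
# total["the"] is 3, total["fox"] is 2
```

This is exactly the property that breaks down for non-uniform data or algorithms whose steps depend on global state: the merge step stops being order-independent.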

As such, it's not well suited to general-purpose algorithms or non-uniform
data sets; it's better suited to transformative batch processing, in my
opinion.


> On Tue, Aug 10, 2010 at 10:00 AM, Jackob Carlsson
> <[email protected]> wrote:
>> Hi,
>>
>> I am trying to write a thesis proposal about my PhD about usage of
>> hadoop in
>> cloud computing. I need to find some open problems in cloud computing
>> which
>> can be addressed by hadoop. I would appreciate if somebody could help me
>> to
>> find some topics.
>>
>> Thanks in advance
>> Jackob
>>
>
