Another approach is the next-generation MapReduce 2 (YARN) framework,
which moves the processing framework up into "user space" so that, in
theory, you can plug in another framework, like a Dryad implementation.
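To make that concrete: under MRv2 even classic MapReduce becomes just
one pluggable framework, selected by configuration instead of being
baked into the JobTracker. A minimal sketch of the mapred-site.xml
entry (property name per the 0.23/MRv2 line; treat it as an assumption
for whatever build you're on):

    <!-- mapred-site.xml: picks the execution framework for submitted
         jobs; "yarn" routes them through the new ResourceManager /
         ApplicationMaster model rather than the old JobTracker. -->
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>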

I think we'll see something like this happen, and then you could get
your low-latency small data chunks from HBase or from the dcache, etc.
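For the iterative case Hector is asking about below, the win Spark
claims is keeping the working set cached in memory across passes
instead of re-reading it from HDFS on every iteration. A rough sketch
against the circa-2011 Spark API (package and constructor per the
spark-project.org release of the time; the path and the toy update
rule are purely illustrative):

    import spark.SparkContext

    object IterativeSketch {
      def main(args: Array[String]) {
        // Early Spark took a master URL and a job name directly.
        val sc = new SparkContext("local[2]", "IterativeSketch")

        // Load the working set once and pin it in memory; later
        // passes read from the cache instead of rescanning HDFS.
        val points = sc.textFile("hdfs://namenode/data/points.txt")
                       .map(_.toDouble)
                       .cache()
        val n = points.count()

        // Toy fixed-point iteration: each pass reuses the cached
        // RDD, which is exactly the access pattern that stock
        // MapReduce makes expensive (one full job per pass).
        var w = 0.0
        for (i <- 1 to 10) {
          w += points.map(p => p - w).reduce(_ + _) / n
        }
        println("w = " + w)
      }
    }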

JP

On Thu, Jun 16, 2011 at 1:34 PM, Hector Yee <[email protected]> wrote:
> What do people think of using Spark for iterative jobs:
>
> http://www.spark-project.org/
>
> Or is there a new version of hadoop that supports this kind of computation?
>
> --
> Yee Yang Li Hector
> http://hectorgon.blogspot.com/ (tech + travel)
> http://hectorgon.com (book reviews)
>



-- 
Twitter: @jpatanooga
Solution Architect @ Cloudera
hadoop: http://www.cloudera.com
blog: http://jpatterson.floe.tv
