I think the whole idea of the Spark API is to simplify building iterative 
workflows/algorithms compared to Hadoop's bloated API.
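
To make that concrete, here is a rough sketch of the kind of iterative loop Spark makes easy (Scala, RDD API; the input path, parsing, and update rule are just placeholders I made up, not anything from your code): the working set is cached in memory once, and every pass is an ordinary transformation on the cached RDD rather than a separate MapReduce job rereading HDFS.

import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("iterative-sketch"))

    // Parse the input once and keep it in memory for every pass.
    val points = sc.textFile("hdfs:///path/to/points")   // placeholder path
      .map(_.split(",").map(_.toDouble))
      .cache()

    var weight = 1.0
    for (i <- 1 to 10) {
      // Each pass is just another transformation + action on the cached RDD;
      // no new job submission, no intermediate files written to HDFS.
      val gradient = points.map(p => p.sum * weight).reduce(_ + _)
      weight -= 0.01 * gradient / points.count()
    }

    println("final weight: " + weight)
    sc.stop()
  }
}

In classic MapReduce, each of those ten passes would be its own job with its own setup cost and its own intermediate files.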

I am not saying it's completely wrong or anything, although it would be clearer 
if you had a particular use case in mind that you wish to tackle.
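
For what it's worth, some of this already works without any compatibility layer: Spark can read and write data through existing Hadoop InputFormats/OutputFormats, so it is mainly the Mapper/Reducer driver code itself that would need translating. A rough word-count-style sketch (the paths are placeholders):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._   // pair-RDD implicits on older Spark versions

object HadoopInteropSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hadoop-interop-sketch"))

    // Read through a standard Hadoop (new-API) InputFormat.
    val lines = sc.newAPIHadoopFile[LongWritable, Text, TextInputFormat]("hdfs:///input")
      .map { case (_, text) => text.toString }

    // The Mapper/Reducer pair of the classic word count collapses to a few lines.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///output")   // placeholder output path
    sc.stop()
  }
}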

> On Feb 1, 2014, at 15:57, nileshc <[email protected]> wrote:
> 
> This might seem like a silly question, so please bear with me. I'm not sure
> about it myself; I'd just like to know whether you think it's utterly
> infeasible, and whether it's at all worth doing.
> 
> Does anyone feel like it'll be a good idea to build some sort of a library
> that allows us to write code for Spark using the usual bloated Hadoop API?
> This is for the people who want to run their existing MapReduce code (with
> no or minimal adjustments) on Spark to take advantage of its speed and
> its better support for iterative workflows.
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Hadoop-MapReduce-on-Spark-tp1110.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
