This might seem like a silly question, so please bear with me. I'm not sure
about it myself; I'd just like to know whether you think it's utterly
infeasible, and whether it's worth doing at all.

Does anyone think it would be a good idea to build some sort of library
that lets us write code for Spark using the usual (admittedly bloated) Hadoop
API? This would be for people who want to run their existing MapReduce code,
with no or minimal adjustments, on Spark to take advantage of its speed and
its better support for iterative workflows.
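To make the idea concrete, here is a rough sketch of what such a shim might
look like, assuming the old-style org.apache.hadoop.mapred interfaces (Mapper,
Reducer, OutputCollector, Reporter). The object name MapReduceOnSpark and the
methods runMap/runReduce are made up for illustration; a real library would
also have to handle configure(JobConf), counters, InputFormats/OutputFormats,
and so on.

import org.apache.hadoop.mapred.{Mapper, OutputCollector, Reducer, Reporter}
import org.apache.spark.rdd.RDD

import scala.collection.JavaConverters._
import scala.collection.mutable.ArrayBuffer

// Hypothetical adapter: runs old-style org.apache.hadoop.mapred Mapper/Reducer
// implementations over Spark RDDs of key/value pairs.
object MapReduceOnSpark {

  // Map phase: one Mapper instance per partition, fed record by record through
  // an OutputCollector that buffers the emitted pairs. A factory function is
  // passed instead of a Mapper instance so each task builds its own copy;
  // Scala function literals are serializable even if the Mapper class is not.
  def runMap[K1, V1, K2, V2](
      input: RDD[(K1, V1)],
      newMapper: () => Mapper[K1, V1, K2, V2]): RDD[(K2, V2)] = {
    input.mapPartitions { records =>
      val mapper = newMapper()
      val out = new ArrayBuffer[(K2, V2)]
      val collector = new OutputCollector[K2, V2] {
        override def collect(key: K2, value: V2): Unit = out += ((key, value))
      }
      // NOTE: a real shim would also call mapper.configure(jobConf) here.
      records.foreach { case (k, v) => mapper.map(k, v, collector, Reporter.NULL) }
      mapper.close()
      out.iterator
    }
  }

  // Reduce phase: expects the shuffle to have been done already, e.g. with
  // groupByKey(), so each key arrives together with all of its values.
  def runReduce[K2, V2, V3](
      grouped: RDD[(K2, Iterable[V2])],
      newReducer: () => Reducer[K2, V2, K2, V3]): RDD[(K2, V3)] = {
    grouped.mapPartitions { groups =>
      val reducer = newReducer()
      val out = new ArrayBuffer[(K2, V3)]
      val collector = new OutputCollector[K2, V3] {
        override def collect(key: K2, value: V3): Unit = out += ((key, value))
      }
      groups.foreach { case (k, vs) =>
        reducer.reduce(k, vs.iterator.asJava, collector, Reporter.NULL)
      }
      reducer.close()
      out.iterator
    }
  }
}

// Hypothetical usage, reusing existing WordCountMapper / WordCountReducer classes:
// val mapped  = MapReduceOnSpark.runMap(lines, () => new WordCountMapper)
// val counted = MapReduceOnSpark.runReduce(mapped.groupByKey(), () => new WordCountReducer)

One obvious gap in a sketch like this: groupByKey() materializes all values
for a key in memory, whereas Hadoop streams them to the reducer, so very large
key groups would behave differently under such a library.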


