I'm modifying the HDFS module inside Hadoop, and I would like to see my
changes reflected when I run Spark on top of it, but I still see the stock
Hadoop behaviour. I checked and saw that Spark builds a very fat assembly
jar that contains all of the Hadoop classes (pulled in via the hadoop
profile defined in Maven) and deploys it to all of the workers. I also
tried bigtop-dist to exclude the Hadoop classes, but it had no effect.
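
For context, my build looks roughly like this (the profile name and version
below are only illustrative, not necessarily what your branch uses):

    # roughly my current build; profile/version are examples
    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
    # the resulting assembly jar under assembly/target/ ends up containing
    # org/apache/hadoop/hdfs/** taken from the released Hadoop artifacts,
    # not from my modified source tree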

Is it possible to do this easily, for example with small modifications to
the Maven build files?
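
What I have in mind is something along these lines; this is only a sketch,
and it assumes I bump my Hadoop tree's poms to a distinct version string so
Maven can tell it apart (the 2.4.0-custom version is made up):

    # publish the patched Hadoop (with my HDFS changes) to the local Maven repo
    cd /path/to/my/hadoop
    mvn install -DskipTests

    # then build Spark against that exact version so the assembly bundles my classes
    cd /path/to/spark
    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0-custom -DskipTests clean package

Alternatively, if there is a profile that sets the Hadoop dependencies to
provided scope (I believe something like hadoop-provided exists on some
branches, but I haven't verified it for mine), the assembly would leave the
Hadoop classes out entirely and the workers would pick up the cluster's own
jars instead.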
