I recently wrote a blog post[1] sharing my experiences with using
Apache Spark to load data into Apache Fluo. One of the topics the post
covers is late binding of dependencies and the exclusion of provided
dependencies when building a shaded jar. While writing the post, I was
unsure what the expectations are around dependency isolation and
convergence in the Spark environment.
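
A quick way to see which Guava actually wins at runtime is to ask the
JVM where the class came from. Below is a minimal Java sketch of that
check (the class name is mine and purely illustrative, not something
from the blog post); run on the driver or inside a task, it shows
whether Spark's copy of Guava or the copy in the user's shaded jar was
loaded:

import com.google.common.base.Preconditions;
import java.security.CodeSource;

public class GuavaSourceCheck {
    public static void main(String[] args) {
        // Ask the JVM where this Guava class actually came from.  If
        // Spark's copy won the classpath race, this prints a jar under
        // the Spark distribution; if the user's shaded jar won, it
        // prints that jar instead.
        CodeSource source =
            Preconditions.class.getProtectionDomain().getCodeSource();
        System.out.println("Guava loaded from: "
            + (source == null ? "unknown (bootstrap)" : source.getLocation()));
    }
}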

Does Spark support any form of dependency isolation for user code?
For example, can the Spark framework use Guava version X while user
code uses Guava version Y, assuming the user packaged Guava version Y
in their shaded jar?  Or are Spark users expected to converge their
dependency versions with those used by Spark?  For example, would the
user be expected to converge their code on Guava version X because
that is the version the Spark framework uses?
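
To make the question concrete, here is a hypothetical (not from the
blog post) that assumes Spark's classpath provides an older Guava,
e.g. 14, while the user compiled against a newer Guava that added
Stopwatch.createStarted() (Guava 15+). If Spark's copy wins at
runtime, code like this compiles fine but fails:

import com.google.common.base.Stopwatch;

public class GuavaConflictExample {
    public static void main(String[] args) {
        // Stopwatch.createStarted() was added in Guava 15.  Compiling
        // against a newer Guava succeeds, but if an older Guava is the
        // one actually loaded at runtime, this line throws
        // NoSuchMethodError.
        Stopwatch watch = Stopwatch.createStarted();
        System.out.println("elapsed: " + watch);
    }
}

That failure mode is what I am trying to understand: does Spark
isolate user classes so that the shaded Guava Y wins, or is
convergence on Spark's Guava X the expectation?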

[1]: http://fluo.apache.org/blog/2016/12/22/spark-load/
