Hey Cristian,

You don’t need to remove anything. Spark ships with its own standalone mode, and
that’s actually the default: https://spark.apache.org/docs/latest/spark-standalone.html
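
Concretely, a job running in standalone (or plain local) mode never goes through
YARN or HDFS. Just as a sketch, assuming a standalone master is running at
spark://master-host:7077 (placeholder host; use "local[*]" to run everything in
one JVM with no cluster at all):

import org.apache.spark.sql.SparkSession

object StandaloneExample {
  def main(args: Array[String]): Unit = {
    // "spark://master-host:7077" is a placeholder standalone master URL;
    // swap in "local[*]" to run entirely inside this JVM.
    val spark = SparkSession.builder()
      .appName("standalone-example")
      .master("spark://master-host:7077")
      .getOrCreate()

    // A trivial job that touches neither HDFS nor YARN: sum the numbers 1..100.
    val total = spark.range(1, 101).selectExpr("sum(id) AS total").first().getLong(0)
    println(s"sum = $total")

    spark.stop()
  }
}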

When building Spark (and you should build it yourself), just pick the options 
that suit you: https://spark.apache.org/docs/latest/building-spark.html
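
For example (only a sketch based on the profiles documented on that page; 
double-check them against the Spark version you actually build), leaving out 
-Pyarn skips the YARN module, and the hadoop-provided profile keeps the Hadoop 
jars out of the packaged assembly:

# Build Spark without YARN support and without bundling Hadoop jars.
./build/mvn -DskipTests -Phadoop-provided clean package

Note that hadoop-provided only keeps Hadoop out of the packaged distribution; 
Spark still compiles against the Hadoop client libraries, so the practical goal 
is “no Hadoop cluster needed,” not “no Hadoop code at all.”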

Regards,

Yohann Jardin

On 11-Nov-17 at 6:42 PM, Cristian Lorenzetto wrote:
Considering the case where I don’t need HDFS, is there a way to remove Hadoop 
completely from Spark?
Is YARN the only Hadoop dependency in Spark?
Is there no Java or Scala (JDK language) YARN-like library to embed in a project 
instead of calling external servers?
Is the YARN library difficult to customize?

I’m asking these different questions to understand what the best approach is for me.
