I'm trying to use zeppelin.pyspark.python as the variable to set the Python
interpreter that Spark worker nodes should use for my job, but it doesn't
seem to be working.

Am I missing something, or does this variable not do that?

My goal is to change that variable to point to different conda
environments.  These environments are available on all worker nodes since
they live in a shared location, so ideally every node would have access to
the same libraries and dependencies.
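
For context, this is roughly what I'm setting in the Spark interpreter
configuration (the environment name and shared path below are just
placeholders for my actual setup):

    # Zeppelin Spark interpreter property -- path is a placeholder
    zeppelin.pyspark.python = /shared/conda/envs/myenv/bin/python

The idea is that switching the property to another env's bin/python would
switch the libraries used for the job.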

Thanks,

~/William
