Reamer commented on pull request #4097:
URL: https://github.com/apache/zeppelin/pull/4097#issuecomment-827815927


   You are right, putting the Conda environment in cloud storage would be the
best option. Do you know which integration options `spark.archives`
supports? Mounting a local filesystem is not an option in Kubernetes. I am
hoping for an HTTP endpoint, which is very flexible and should work for most
users.
   YARN should also work with an HTTP endpoint, so that the Conda environment
can be loaded dynamically by Zeppelin when the Python or PySpark interpreter
starts.
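
   For reference, one possible shape of such an integration (a sketch only:
the URL, alias, and file names are illustrative, and whether `spark.archives`
fetches HTTP(S) URIs in a given deployment would need to be verified):

   ```shell
   # Pack the Conda environment into a relocatable archive with conda-pack.
   conda pack -f -o pyspark_env.tar.gz

   # spark.archives (Spark 3.1+) takes an archive URI with an optional
   # '#alias' suffix; the archive is unpacked into each executor's working
   # directory under that alias. Point the Python binary at the unpacked env.
   spark-submit \
     --conf spark.archives=https://example.com/envs/pyspark_env.tar.gz#environment \
     --conf spark.pyspark.python=./environment/bin/python \
     app.py
   ```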


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

