zjffdu commented on pull request #4097:
URL: https://github.com/apache/zeppelin/pull/4097#issuecomment-830638631


   For the Spark interpreter, we can leverage `spark.archives` to download and 
set up a conda environment in both the driver (Spark interpreter) and the 
executors. But for the Python interpreter, I don't think there's a unified 
approach for that at the moment. However, we could introduce a unified 
configuration for it, e.g. a `python.archive` property which would be 
translated into the YARN/K8s-specific configuration. 
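   As a reference point, the `spark.archives` flow mentioned above can be 
sketched roughly like this (a hedged example based on the standard conda-pack 
workflow; the environment name `pyspark_conda_env` and archive alias 
`environment` are illustrative, not from this PR):

```shell
# Build a relocatable conda environment and pack it into an archive
# (conda-pack produces a tarball that can be unpacked on any node).
conda create -y -n pyspark_conda_env -c conda-forge conda-pack python=3.8
conda activate pyspark_conda_env
conda pack -f -o pyspark_conda_env.tar.gz

# Ship the archive to driver and executors via spark.archives.
# The "#environment" suffix is the directory name the archive is
# unpacked under on each node.
export PYSPARK_PYTHON=./environment/bin/python
spark-submit \
  --conf spark.archives=pyspark_conda_env.tar.gz#environment \
  app.py
```

   A hypothetical `python.archive` property for the Python interpreter would 
presumably map onto the analogous mechanism on each backend (e.g. YARN's 
distributed cache or a K8s volume), so users would not have to write 
cluster-specific configuration themselves.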


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

