Github user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13599#discussion_r160085499

    --- Diff: docs/submitting-applications.md ---
    @@ -218,6 +218,73 @@ These commands can be used with `pyspark`, `spark-shell`, and `spark-submit` to
     For Python, the equivalent `--py-files` option can be used to distribute `.egg`, `.zip` and `.py` libraries to executors.
     
    +# VirtualEnv for PySpark
    +For a simple PySpark application, you can use `--py-files` to add its dependencies. For a large PySpark application,
    +however, you will usually have many dependencies, which may in turn have transitive dependencies, and some dependencies
    +may even need to be compiled before they can be installed. In this case `--py-files` is not so convenient. Luckily, the
    +Python world has virtualenv/conda to help create isolated Python environments. PySpark also supports virtualenv (only
    +in YARN mode for now).
    +
    +# Prerequisites
    +- Each node has virtualenv/conda and python-devel installed
    +- Each node has internet access (for downloading packages)
    +
    +{% highlight bash %}
    +# Set up a virtualenv using native virtualenv in yarn-client mode
    +bin/spark-submit \
    +  --master yarn \
    +  --deploy-mode client \
    +  --conf "spark.pyspark.virtualenv.enabled=true" \
    +  --conf "spark.pyspark.virtualenv.type=native" \
    +  --conf "spark.pyspark.virtualenv.requirements=<local_requirement_file>" \
    +  --conf "spark.pyspark.virtualenv.bin.path=<virtualenv_bin_path>" \
    +  <pyspark_script>
    +
    +# Set up a virtualenv using conda in yarn-client mode
    +bin/spark-submit \
    +  --master yarn \
    +  --deploy-mode client \
    +  --conf "spark.pyspark.virtualenv.enabled=true" \
    +  --conf "spark.pyspark.virtualenv.type=conda" \
    +  --conf "spark.pyspark.virtualenv.requirements=<<local_requirement_file>" \
    --- End diff --
    
    nit: remove an extra `<`.
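    The quoted diff cuts off at the flagged line, so the tail of the conda example is not shown here. It presumably mirrors the native-virtualenv block above; a minimal sketch under that assumption (`<conda_bin_path>` is a hypothetical placeholder, not taken from the diff):
    
    ```bash
    # Sketch only: the remaining flags are assumed to parallel the native
    # example above; <conda_bin_path> is a hypothetical placeholder for the
    # conda executable on each node.
    bin/spark-submit \
      --master yarn \
      --deploy-mode client \
      --conf "spark.pyspark.virtualenv.enabled=true" \
      --conf "spark.pyspark.virtualenv.type=conda" \
      --conf "spark.pyspark.virtualenv.requirements=<local_requirement_file>" \
      --conf "spark.pyspark.virtualenv.bin.path=<conda_bin_path>" \
      <pyspark_script>
    ```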
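    For the native virtualenv type, `<local_requirement_file>` presumably takes a standard pip-style requirements file. An illustrative example (package names and versions are arbitrary, not from the diff):
    
    ```bash
    # Illustrative only: create a pip-style requirements file to pass via
    # spark.pyspark.virtualenv.requirements (packages/versions are arbitrary).
    cat > requirements.txt <<'EOF'
    numpy==1.14.0
    pandas==0.22.0
    EOF
    ```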