This is an automated email from the ASF dual-hosted git repository.

eladkal pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/airflow.git


The following commit(s) were added to refs/heads/main by this push:
     new 7b7661a  Fixed naming in the Spark Connection Extra field (#18469)
7b7661a is described below

commit 7b7661a8d1bc4150494be94be4a278dbefab5c9d
Author: cristian-fatu <[email protected]>
AuthorDate: Sun Sep 26 08:50:20 2021 +0200

    Fixed naming in the Spark Connection Extra field (#18469)
    
    Changed the naming of the Extra fields from spark_home/spark_binary/deploy_mode
    to spark-home/spark-binary/deploy-mode to match what is actually in the codebase
---
 docs/apache-airflow-providers-apache-spark/connections/spark.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/apache-airflow-providers-apache-spark/connections/spark.rst b/docs/apache-airflow-providers-apache-spark/connections/spark.rst
index adc88b7..733bbd9 100644
--- a/docs/apache-airflow-providers-apache-spark/connections/spark.rst
+++ b/docs/apache-airflow-providers-apache-spark/connections/spark.rst
@@ -41,9 +41,9 @@ Extra (optional)
     Specify the extra parameters (as json dictionary) that can be used in spark connection. The following parameters out of the standard python parameters are supported:
 
     * ``queue`` - The name of the YARN queue to which the application is submitted.
 -    * ``deploy_mode`` - Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client).
 -    * ``spark_home`` - If passed then build the ``spark-binary`` executable path using it (``spark-home``/bin/``spark-binary``); otherwise assume that ``spark-binary`` is present in the PATH of the executing user.
 -    * ``spark_binary`` - The command to use for Spark submit. Some distros may use ``spark2-submit``. Default ``spark-submit``.
 +    * ``deploy-mode`` - Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client).
 +    * ``spark-home`` - If passed then build the ``spark-binary`` executable path using it (``spark-home``/bin/``spark-binary``); otherwise assume that ``spark-binary`` is present in the PATH of the executing user.
 +    * ``spark-binary`` - The command to use for Spark submit. Some distros may use ``spark2-submit``. Default ``spark-submit``.
     * ``namespace`` - Kubernetes namespace (``spark.kubernetes.namespace``) to divide cluster resources between multiple users (via resource quota).
 
 When specifying the connection in environment variable you should specify
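For reference, a minimal sketch (not part of the commit itself) of what an Extra field using the corrected hyphenated keys might look like. The specific values (`root.default`, `/opt/spark`) are hypothetical placeholders:

```python
import json

# Hypothetical Extra-field JSON for a Spark connection, using the
# hyphenated key names this commit documents (deploy-mode, spark-home,
# spark-binary) rather than the old underscore variants.
extra = json.dumps({
    "queue": "root.default",        # placeholder YARN queue name
    "deploy-mode": "cluster",       # driver runs on the worker nodes
    "spark-home": "/opt/spark",     # placeholder Spark installation path
    "spark-binary": "spark-submit", # some distros may use spark2-submit
})
print(extra)
```

The resulting string is what would be pasted into the connection's Extra field in the Airflow UI.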
