mkgray commented on a change in pull request #34134:
URL: https://github.com/apache/spark/pull/34134#discussion_r718095616



##########
File path: docs/rdd-programming-guide.md
##########
@@ -241,12 +241,12 @@ For a complete list of options, run `spark-shell --help`. Behind the scenes,
 In the PySpark shell, a special interpreter-aware SparkContext is already created for you, in the
 variable called `sc`. Making your own SparkContext will not work. You can set which master the
 context connects to using the `--master` argument, and you can add Python .zip, .egg or .py files
-to the runtime path by passing a comma-separated list to `--py-files`. You can also add dependencies
+to the runtime path by passing a comma-separated list to `--py-files`. For thrid-party Python dependencies,

Review comment:
       Minor typo: "thrid-party" should be "third-party".
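       For context, a minimal sketch of the shell invocation this paragraph describes (the file names `code.py` and `libs.zip` are hypothetical):

       ```bash
       # Start the PySpark shell with extra Python code on the runtime path.
       # code.py and libs.zip are hypothetical local files.
       ./bin/pyspark --master local[4] --py-files code.py,libs.zip
       ```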

##########
File path: docs/submitting-applications.md
##########
@@ -35,7 +35,8 @@ script as shown here while passing your jar.
 
 For Python, you can use the `--py-files` argument of `spark-submit` to add `.py`, `.zip` or `.egg`
 files to be distributed with your application. If you depend on multiple Python files we recommend
-packaging them into a `.zip` or `.egg`.
+packaging them into a `.zip` or `.egg`. For thrid-party Python dependencies,

Review comment:
       Minor typo: "thrid-party" should be "third-party".
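       Likewise, a hedged sketch of the `spark-submit` form this paragraph describes (the names `deps.zip` and `my_app.py` are hypothetical):

       ```bash
       # Submit an application together with its packaged Python dependencies.
       # deps.zip and my_app.py are hypothetical file names.
       ./bin/spark-submit --py-files deps.zip my_app.py
       ```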



