GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16856#discussion_r102616268
  
    --- Diff: docs/quick-start.md ---
    @@ -211,7 +199,7 @@ a cluster, as described in the [programming guide](programming-guide.html#initia
     It may seem silly to use Spark to explore and cache a 100-line text file. The interesting part is
     that these same functions can be used on very large data sets, even when they are striped across
     tens or hundreds of nodes. You can also do this interactively by connecting `bin/pyspark` to
    -a cluster, as described in the [programming guide](programming-guide.html#initializing-spark).
    +a cluster, as described in the [programming guide](rdd-programming-guide.html#initializing-spark).
    --- End diff ---
    
    cc @rxin 
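
For context, the passage touched by this diff describes interactive use of `bin/pyspark`. A minimal sketch of that kind of session, assuming the shell's predefined SparkContext `sc`; the file path is a hypothetical stand-in:

    # `sc` is created automatically by the pyspark shell.
    # "README.md" is a stand-in path; any text file works.
    textFile = sc.textFile("README.md")
    textFile.cache()    # keep the RDD in memory across repeated actions
    textFile.count()    # total number of lines in the file
    textFile.filter(lambda line: "Spark" in line).count()  # lines containing "Spark"

The same calls run unchanged against a cluster once the shell is pointed at its master, which is what the linked "initializing Spark" section covers.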

