GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16137#discussion_r91855239
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -816,6 +833,10 @@ class SparkContext(config: SparkConf) extends Logging {
       /**
        * Read a text file from HDFS, a local file system (available on all nodes), or any
        * Hadoop-supported file system URI, and return it as an RDD of Strings.
    +   *
    +   * @param path URI to Hadoop-supported file system
    +   * @param minPartitions suggested value of the minimal splitting number for input data
    --- End diff ---
    
    I think the doc string is hard to understand. "Suggested minimum number of partitions for the resulting RDD"?
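
    For reference, a minimal sketch (not part of the PR; the input path and partition count below are placeholders) of how the hint behaves when calling `textFile`:

        import org.apache.spark.{SparkConf, SparkContext}

        val conf = new SparkConf().setAppName("textFile-minPartitions").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // minPartitions is a hint forwarded to the Hadoop InputFormat: the
        // resulting RDD has at least roughly this many partitions, but may
        // have more if the input yields more splits.
        val lines = sc.textFile("hdfs:///data/input.txt", minPartitions = 8)
        println(s"Number of partitions: ${lines.getNumPartitions}")

        sc.stop()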

