Github user falaki commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1351#discussion_r14752212
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
    @@ -130,6 +131,47 @@ class SQLContext(@transient val sparkContext: SparkContext)
         new SchemaRDD(this, JsonRDD.inferSchema(json, samplingRatio))
     
       /**
    +   * Loads a CSV file (according to RFC 4180) and returns the result as a [[SchemaRDD]].
    +   *
    +   * NOTE: If there are new line characters inside quoted fields this method may fail to
    +   * parse correctly, because the two lines may be in different partitions. Use
    +   * [[SQLContext#csvRDD]] to parse such files.
    +   *
    +   * @param path path to input file
    +   * @param delimiter Optional delimiter (default is comma)
    +   * @param quote Optional quote character or string (default is '"')
    +   * @param header Optional flag to indicate first line of each file is the header
    +   *               (default is false)
    +   */
    +  def csvFile(path: String,
    +      delimiter: String = ",",
    +      quote: String = "\"",
    --- End diff --
    
RFC 4180 only mentions double quotes. This implementation is more general: you can use any sequence of characters as the quote, but only a single, fixed sequence.
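
For context, a minimal usage sketch of the method proposed in this diff, assuming the csvFile signature shown above; the file names are made up, and the header parameter is taken from the doc comment since the signature is truncated:

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext("local", "csv-example")
    val sqlContext = new SQLContext(sc)

    // RFC 4180 style: comma-delimited, fields quoted with '"' (the defaults).
    val people = sqlContext.csvFile("people.csv")

    // The more general case described above: a pipe-delimited file whose fields
    // are wrapped in the two-character quote sequence %%.
    val logs = sqlContext.csvFile(
      path = "logs.psv",
      delimiter = "|",
      quote = "%%",
      header = true)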


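To illustrate the NOTE in the doc comment, a record like the one below is valid CSV but spans two physical lines, so a reader that splits the input on line boundaries across partitions can cut the record in half; that is why the doc comment points to csvRDD for such files (illustrative data only):

    // One logical record whose quoted "comment" field contains a newline.
    val tricky =
      """id,comment
        |1,"first line
        |second line"
        |""".stripMargin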