Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22654#discussion_r223724099
  
    --- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
 ---
    @@ -1826,4 +1826,13 @@ class CSVSuite extends QueryTest with 
SharedSQLContext with SQLTestUtils with Te
         val df = spark.read.option("enforceSchema", false).csv(input)
         checkAnswer(df, Row("1", "2"))
       }
    +
    +  test("using the backward slash as the delimiter") {
    +    val input = Seq("""abc\1""").toDS()
    --- End diff ---
    
    Isn't `\` the default escape character? This should then be read as the
string "abc1", and not delimited. It would have to be `\\`, right? I'm not
talking about Scala string escaping here, but CSV.
    
    Or is the point that delimiting takes precedence over escaping?
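    A rough way to see the two readings contrasted above: the sketch below is a hypothetical, simplified field splitter, not Spark's real parser (Spark delegates CSV parsing to the uniVocity library). It shows what `"""abc\1"""` yields when the same character (`\`) is both the escape character and the delimiter, under either precedence rule.

```scala
object CsvDelimiterSketch {
  // Minimal illustrative sketch, NOT Spark's actual CSV parser.
  // `escapeWins` selects which rule applies when the escape character
  // and the delimiter are the same character.
  def splitLine(line: String, delimiter: Char, escape: Char,
                escapeWins: Boolean): Seq[String] = {
    val fields = scala.collection.mutable.ArrayBuffer.empty[String]
    val current = new StringBuilder
    var i = 0
    while (i < line.length) {
      val c = line(i)
      if (escapeWins && c == escape && i + 1 < line.length) {
        current += line(i + 1) // escaping wins: next char is taken literally
        i += 2
      } else if (c == delimiter) {
        fields += current.toString // delimiting wins: split here
        current.clear()
        i += 1
      } else if (c == escape && i + 1 < line.length) {
        current += line(i + 1) // escape a char that is not the delimiter
        i += 2
      } else {
        current += c
        i += 1
      }
    }
    fields += current.toString
    fields.toSeq
  }

  def main(args: Array[String]): Unit = {
    // If escaping takes precedence, the backslash swallows the '1'
    // and the line is a single field, "abc1":
    println(splitLine("""abc\1""", '\\', '\\', escapeWins = true))
    // If delimiting takes precedence, the line splits into "abc" and "1":
    println(splitLine("""abc\1""", '\\', '\\', escapeWins = false))
  }
}
```

    Which of the two behaviors the test at `CSVSuite.scala` actually asserts is exactly the question raised here.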


---
