Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/20004#discussion_r157375137
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala ---
@@ -249,6 +249,8 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Lo
* `com.databricks.spark.csv`.</li>
* <li>`escape` (default `\`): sets the single character used for
escaping quotes inside
* an already quoted value.</li>
+ * <li>`escapeQuoteEscaping` (default `\0`): sets the single character used for escaping
+ * the quote-escape character.</li>
--- End diff --
Why not follow the CSV API docs?
http://docs.univocity.com/parsers/1.5.1/com/univocity/parsers/csv/CsvFormat.html
> charToEscapeQuoteEscaping (defaults to '\0' - undefined): character used for escaping the escape for the quote character
> e.g. if the quoteEscape and charToEscapeQuoteEscaping are set to '\', the value " \\\" a , b \\\" " is parsed as [ \" a , b \" ]
---