[ https://issues.apache.org/jira/browse/SPARK-21684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128960#comment-16128960 ]
Vinod KC commented on SPARK-21684:
----------------------------------

[~taransaini43] Can you try with option("quoteAll", "true")? E.g.:

{code}
udf_comma.write.option("quoteAll", "true").format("csv").save("/home/taranjits/test-df-trans-2/")
{code}

> df.write double escaping all the already escaped characters except the first one
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-21684
>                 URL: https://issues.apache.org/jira/browse/SPARK-21684
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Taran Saini
>        Attachments: SparkQuotesTest2.scala
>
> Hi,
> If we have a dataframe with a column value of {noformat}ab\,cd\,ef\,gh{noformat}
> then on write it is emitted as {noformat}"ab\,cd\\,ef\\,gh"{noformat}
> i.e. it double-escapes all the already escaped commas/delimiters except the first one.
> This is odd behaviour: it should either do this for all of them or for none.
> If I set df.option("escape", "") to empty, that solves the problem, but then any double quotes inside the value are preceded by a special character, '\u00'. Why does it do so when the escape character is set to "" (empty)?

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
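For context on why quoteAll helps: once every field is wrapped in quotes, the delimiter is protected by the quotes themselves, so the writer no longer needs to apply its escape character, and pre-existing backslashes pass through untouched. Reproducing the original report needs Spark, but the underlying CSV mechanics can be sketched with Python's standard csv module (an analogy only, not Spark's actual CSV writer):

```python
import csv
import io

# A field whose delimiters are pre-escaped with backslashes,
# matching the reported value ab\,cd\,ef\,gh.
value = r"ab\,cd\,ef\,gh"

# With every field quoted (the analogue of Spark's quoteAll=true),
# the surrounding quotes protect the commas, so no extra escaping
# is applied and the backslashes pass through untouched.
quoted = io.StringIO()
csv.writer(quoted, quoting=csv.QUOTE_ALL).writerow([value])
print(quoted.getvalue().strip())  # "ab\,cd\,ef\,gh"

# With quoting disabled and '\' as the escape character, the writer
# must escape both the commas and the existing backslashes, which
# yields the kind of doubled escapes described in the report.
escaped = io.StringIO()
csv.writer(escaped, quoting=csv.QUOTE_NONE, escapechar="\\").writerow([value])
print(escaped.getvalue().strip())  # ab\\\,cd\\\,ef\\\,gh
```

Note that Python's writer escapes consistently, unlike the mixed output in the report; the sketch only shows why quoting all fields sidesteps delimiter escaping entirely.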