AngersZhuuuu commented on a change in pull request #29438:
URL: https://github.com/apache/spark/pull/29438#discussion_r470964073



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala
##########
@@ -759,6 +759,12 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
         def entry(key: String, value: Token): Seq[(String, String)] = {
           Option(value).map(t => key -> t.getText).toSeq
         }
+
+        if (Option(c.linesSeparatedBy).map(string).getOrElse("\n") != "\n") {

Review comment:
       > IMO its okay to support any char as a line separator normally.
   
   In this way, the line separator must be a single char, since it is hard to write a high-performance reader (like BufferedReader.readLine) that can fetch one row of data when the line separator is an arbitrary string.
   
   If we only support a char as the line delimiter, we can just write a reader similar to BufferedReader, as sketched below.
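
   For illustration, here is a rough, hypothetical sketch of such a char-delimited reader (the class name and API are made up for this comment, not code in the PR):

```scala
import java.io.Reader

// Hypothetical sketch: read one record at a time, delimited by a single char.
// Scanning a buffer for one char is simple and fast; matching a multi-char
// string separator would need carry-over state across buffer boundaries,
// which is what makes a high-performance implementation harder.
class CharDelimitedReader(in: Reader, delimiter: Char, bufferSize: Int = 8192) {
  private val buf = new Array[Char](bufferSize)
  private var pos = 0
  private var limit = 0

  /** Returns the next record without the delimiter, or null at end of stream. */
  def readRecord(): String = {
    val sb = new StringBuilder
    var sawAny = false
    while (true) {
      if (pos >= limit) {
        limit = in.read(buf)
        pos = 0
        if (limit == -1) {
          return if (sawAny) sb.toString else null
        }
      }
      sawAny = true
      val c = buf(pos)
      pos += 1
      if (c == delimiter) {
        return sb.toString
      }
      sb.append(c)
    }
    sb.toString // unreachable; keeps the method well-typed
  }
}
```

   With something shaped like this, the LINES TERMINATED BY char could simply be passed as the `delimiter` argument and `readRecord()` called once per row.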
   
   > We need to follow this restriction of Hive?

   Script transform users always come from Hive, so it is OK to just follow Hive here; this behavior is straightforward and simple.
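
   For context, this is roughly how I'd expect the check quoted in the diff above to be completed (using the existing `operationNotAllowed` helper from ParserUtils to raise a ParseException; the message wording below is just a guess, not necessarily what this PR uses):

```scala
// Rough sketch only (continuation of the diffed line, not standalone code):
// reject any LINES TERMINATED BY value other than '\n', matching Hive's
// restriction. The exact error message is illustrative, not the PR's wording.
if (Option(c.linesSeparatedBy).map(string).getOrElse("\n") != "\n") {
  operationNotAllowed("LINES TERMINATED BY only supports newline '\\n' right now", c)
}
```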



