AngersZhuuuu commented on pull request #30957:
URL: https://github.com/apache/spark/pull/30957#issuecomment-754407179


   > This is really confusing, and seems Hive did it for its own backward compatibility. Can we have a clearer solution in Spark? I don't think we should inherit all the hacks from Hive for Hive compatibility.
   
   Yeah, how about just adding all characters (except those noted in the comment below) to Spark's separators?
   ```
   // use only control chars that are very unlikely to be part of the string
   // the following might/likely to be used in text files for strings
   // 9  (horizontal tab, HT, \t, ^I)
   // 10 (line feed, LF, \n, ^J)
   // 12 (form feed, FF, \f, ^L)
   // 13 (carriage return, CR, \r, ^M)
   // 27 (escape, ESC, \e [GCC only], ^[)
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


