sadikovi commented on a change in pull request #34995:
URL: https://github.com/apache/spark/pull/34995#discussion_r776119783



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogUtils.scala
##########
@@ -52,7 +52,7 @@ object ExternalCatalogUtils {
      '\n', '\u000B', '\u000C', '\r', '\u000E', '\u000F', '\u0010', '\u0011', '\u0012', '\u0013',
      '\u0014', '\u0015', '\u0016', '\u0017', '\u0018', '\u0019', '\u001A', '\u001B', '\u001C',
      '\u001D', '\u001E', '\u001F', '"', '#', '%', '\'', '*', '/', ':', '=', '?', '\\', '\u007F',
-      '{', '[', ']', '^')
+      '{', '[', ']', '^', '.')

Review comment:
       I thought the code would translate any escaped values, so it should not
matter which characters are encoded. As long as the decoding code is generic
enough to handle all escaped values, older Spark versions should still be able
to read paths written with the larger escape set. I will verify this. For now,
there is an issue in partitioning that prevents the PR from passing the tests.
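
To illustrate the point, here is a minimal sketch, loosely modeled on the
`escapePathName`/`unescapePathName` pair in this file. It is not the actual
Spark implementation: `PathEscapingSketch` is a hypothetical object, and the
`needsEscaping` predicate below stands in for the `charToEscape` bit set shown
in the diff.

```scala
object PathEscapingSketch {
  // Hypothetical stand-in for the charToEscape bit set; '.' is the newly
  // added case from this PR.
  private def needsEscaping(c: Char): Boolean =
    c <= '\u001F' || c == '\u007F' || "\"#%'*/:=?\\{[]^.".indexOf(c) >= 0

  // Escaping maps each special character to its "%XX" hex form.
  def escapePathName(path: String): String = {
    val sb = new StringBuilder
    path.foreach { c =>
      if (needsEscaping(c)) sb.append("%%%02X".format(c.toInt))
      else sb.append(c)
    }
    sb.toString
  }

  // Generic decoder: any "%XX" sequence is translated back, regardless of
  // which characters the writer's Spark version chose to escape.
  def unescapePathName(path: String): String = {
    val sb = new StringBuilder
    var i = 0
    while (i < path.length) {
      val c = path.charAt(i)
      if (c == '%' && i + 2 < path.length) {
        val code =
          try Integer.parseInt(path.substring(i + 1, i + 3), 16)
          catch { case _: NumberFormatException => -1 }
        if (code >= 0 && code < 128) { sb.append(code.toChar); i += 3 }
        else { sb.append(c); i += 1 }  // not a valid escape, keep '%' literal
      } else {
        sb.append(c)
        i += 1
      }
    }
    sb.toString
  }

  def main(args: Array[String]): Unit = {
    val escaped = escapePathName("a.b=1") // "a%2Eb%3D1" with '.' escaped
    println(escaped)
    println(unescapePathName(escaped))    // round-trips to "a.b=1"
  }
}
```

Because the decoder keys off the `%XX` pattern rather than any particular
escape set, a partition directory written with the extended set (including
`.`) should round-trip on readers built with the smaller one.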




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


