jackye1995 commented on pull request #2701: URL: https://github.com/apache/iceberg/pull/2701#issuecomment-870369949
@marton-bod @pvary I think Peter raised a good point: the current solution of replacing the delimiter does not actually solve the issue, because we have two delimiters to escape in this case. After thinking about it for a while, I believe the best way forward is to drop that approach and instead escape column names with backticks. This also fits better with the Spark SQL specification for column names. I have completely rewritten the parser to do character-by-character parsing instead of simply using `string.split`. Please let me know if there is any case not covered here, thanks!
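
For illustration, here is a minimal sketch of what the character-by-character parsing could look like. The class and method names are hypothetical, and it assumes Spark SQL style quoting, where identifiers are wrapped in backticks and a doubled backtick inside a quoted identifier stands for a literal backtick:

```java
import java.util.ArrayList;
import java.util.List;

public class ColumnNameParser {

  // Split a dot-separated column path into identifier parts,
  // honoring backtick-quoted identifiers that may contain dots.
  public static List<String> parse(String columnPath) {
    List<String> parts = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    boolean inBacktick = false;

    for (int i = 0; i < columnPath.length(); i++) {
      char c = columnPath.charAt(i);

      if (c == '`') {
        // A doubled backtick inside a quoted identifier is a literal backtick
        if (inBacktick && i + 1 < columnPath.length() && columnPath.charAt(i + 1) == '`') {
          current.append('`');
          i++; // skip the second backtick of the escape pair
        } else {
          inBacktick = !inBacktick; // toggle quoted state
        }
      } else if (c == '.' && !inBacktick) {
        // An unquoted dot ends the current identifier
        parts.add(current.toString());
        current.setLength(0);
      } else {
        current.append(c);
      }
    }

    if (inBacktick) {
      throw new IllegalArgumentException("Unclosed backtick in column name: " + columnPath);
    }

    parts.add(current.toString());
    return parts;
  }

  public static void main(String[] args) {
    // "a.`b.c`.d" -> [a, b.c, d]
    System.out.println(parse("a.`b.c`.d"));
  }
}
```

This is only a sketch of the idea, not the exact code in the PR; the point is that tracking the quoted state per character handles dots inside quoted identifiers, which a plain `string.split` on the delimiter cannot do.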
