Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17655#discussion_r111781782
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala ---
@@ -114,14 +114,14 @@ class SessionCatalog(
* Format table name, taking into account case sensitivity.
*/
protected[this] def formatTableName(name: String): String = {
- if (conf.caseSensitiveAnalysis) name else name.toLowerCase
+ if (conf.caseSensitiveAnalysis) name else name.toLowerCase(Locale.ROOT)
--- End diff ---
Yes, you are correct then: if these identifiers always contain only
alphanumeric characters, there is no case where lower-casing the table name
should be locale-sensitive.
Is the same true of column names?
It won't be true of data, and that is one of the cases I was trying to leave
alone, along with user-supplied table and column names, but maybe the latter
two aren't locale-sensitive after all.