cloud-fan commented on a change in pull request #35726:
URL: https://github.com/apache/spark/pull/35726#discussion_r820804485
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
##########
@@ -97,7 +97,15 @@ object JDBCRDD extends Logging {
* Returns None for an unhandled filter.
*/
def compileFilter(f: Filter, dialect: JdbcDialect): Option[String] = {
- def quote(colName: String): String = dialect.quoteIdentifier(colName)
+ def isEnclosedInBackticks(colName: String): Boolean =
+ colName.startsWith("`") && colName.endsWith("`")
Review comment:
I checked `V2ScanRelationPushDown`. For v2 sources, Spark always quotes
the column name if it contains special chars for filter pushdown. So I think
we can just invoke the SQL parser here to parse it:
```scala
val nameParts =
  SparkSession.active.sessionState.sqlParser.parseMultipartIdentifier(colName)
assert(nameParts.length == 1) // or throw a user-facing exception if a nested column can reach here
dialect.quoteIdentifier(nameParts.head)
```
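
To make the suggestion concrete, here is a minimal sketch of what the `quote` helper could look like with this approach, assuming an active `SparkSession` is available when filters are compiled (an illustration only, not the PR's final code):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.jdbc.JdbcDialect

// Hypothetical rewrite of the `quote` helper in `compileFilter`:
// parse the pushed-down column name (which may arrive backtick-quoted)
// into name parts, reject nested columns, and let the dialect apply
// its own quoting rules.
def quote(colName: String, dialect: JdbcDialect): String = {
  val nameParts =
    SparkSession.active.sessionState.sqlParser.parseMultipartIdentifier(colName)
  assert(nameParts.length == 1,
    s"JDBC filter pushdown does not support nested column: $colName")
  dialect.quoteIdentifier(nameParts.head)
}
```

For reference, passing the backtick-quoted string `` `a.b` `` to `parseMultipartIdentifier` returns `Seq("a.b")` (a single identifier containing a dot), while the unquoted `a.b` returns `Seq("a", "b")` (a nested column reference), which is the distinction the length check above relies on.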