cloud-fan commented on code in PR #39564:
URL: https://github.com/apache/spark/pull/39564#discussion_r1070226778


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/sources/filters.scala:
##########
@@ -91,7 +91,7 @@ case class EqualTo(attribute: String, value: Any) extends Filter {
   override def toV2: Predicate = {
     val literal = Literal(value)
     new Predicate("=",
-      Array(FieldReference(attribute), LiteralValue(literal.value, literal.dataType)))
+      Array(FieldReference.column(attribute), LiteralValue(literal.value, literal.dataType)))

Review Comment:
   The column name in a v1 filter is generated by `PushableColumn`, and we need to match its behavior here.
   
   When nested predicate pushdown is enabled, `PushableColumn` already quotes the column name if it is not a valid SQL identifier, so `FieldReference.apply`, which parses the name, is correct here. However, when it is disabled, the column name is the plain parsed name, and `FieldReference.column`, which takes the string as-is without parsing, should be used.
   
   I think what we can do is
   ```scala
   try {
     // The name parses cleanly when nested predicate pushdown is enabled,
     // since invalid identifiers are already quoted.
     FieldReference(attribute)
   } catch {
     // Otherwise, fall back to treating the whole string as one column name.
     case _: ParseException => FieldReference.column(attribute)
   }
   ```
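   
   For context, a sketch of how the two factory methods differ (based on `FieldReference` in `org.apache.spark.sql.connector.expressions`; the example names are illustrative, not from this PR):
   ```scala
   // FieldReference.apply parses the string as a multipart identifier, so
   // dots and backticks are interpreted: "a.b" becomes a nested reference.
   FieldReference("a.b")         // parts: Seq("a", "b")
   FieldReference("`a.b`")       // parts: Seq("a.b")
   
   // FieldReference.column does no parsing and treats the whole string as a
   // single column name, matching an unparsed v1 column name.
   FieldReference.column("a.b")  // parts: Seq("a.b")
   ```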



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

