HyukjinKwon commented on a change in pull request #27780: [SPARK-31026] [SQL] [test-hive1.2] Parquet predicate pushdown on columns with dots
URL: https://github.com/apache/spark/pull/27780#discussion_r389331398
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/sources/filters.scala
 ##########
 @@ -32,6 +32,7 @@ import org.apache.spark.annotation.{Evolving, Stable}
 sealed abstract class Filter {
   /**
    * List of columns that are referenced by this filter.
 +   * Note that, if a column contains `dots` in name, it will be quoted to avoid confusion.
 
 Review comment:
   This one is actually a pretty breaking change. Not all data source implementations have syntax to handle backquotes - there are many non-DBMS implementations out there, such as Elasticsearch and MongoDB, for which I see relevant tickets in Spark JIRA from time to time.
   
   In particular, this is a stable API. Can we update the migration guide at the very least?
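   
   To illustrate the concern, here is a minimal sketch (not part of the PR; the `DotColumnExample` object and the naive dot-splitting connector logic are hypothetical) of how a source that parses `Filter.references` by splitting on dots would see a column literally named `a.b` once references are backquoted:
   
   ```scala
   import org.apache.spark.sql.sources.{EqualTo, Filter}
   
   object DotColumnExample {
     def main(args: Array[String]): Unit = {
       // A pushed-down filter on a column whose name literally contains a dot.
       // Per the doc change above, its reference would now be reported
       // backquoted (e.g. "`a.b`") instead of the raw name "a.b".
       val filter: Filter = EqualTo("a.b", 1)
   
       // A connector that splits references on dots to resolve nested fields
       // (a pattern some non-DBMS sources use) would then see "`a" and "b`".
       filter.references.foreach { ref =>
         val parts = ref.split("\\.")
         println(s"reference = $ref, naive split = ${parts.mkString(" / ")}")
       }
     }
   }
   ```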

