huaxingao commented on a change in pull request #29695:
URL: https://github.com/apache/spark/pull/29695#discussion_r498569218



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
##########
@@ -643,6 +647,34 @@ object DataSourceStrategy {
     (nonconvertiblePredicates ++ unhandledPredicates, pushedFilters, handledFilters)
   }
 
+  def translateAggregate(aggregates: AggregateExpression): Option[AggregateFunc] = {
+
+    def columnAsString(e: Expression): String = e match {
+      case AttributeReference(name, _, _, _) => name
+      case Cast(child, _, _) => child match {

Review comment:
   Thanks for the example. I actually only want to strip off the casts added by Spark. For example, when doing a sum, Spark casts an integral type to long:
   ```
   case Sum(e @ IntegralType()) if e.dataType != LongType => Sum(Cast(e, LongType))
   ```
   For a cast added by Spark, I will remove the cast, push down the aggregate, and apply the same cast on the database side.
   If the cast comes from the user, I will keep the cast and NOT push down `aggregate(cast(col))` for now.
   To differentiate a user's explicit cast from a Spark-added cast, I will add a flag somewhere. Does this sound OK to you?
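   A minimal sketch of how such a flag could look, assuming a hypothetical `SPARK_ADDED_CAST` tree-node tag that the coercion rule would set on casts it inserts (the tag name and the empty-string fallback are illustrative only, not part of this PR):
   ```
   import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Cast, Expression}
   import org.apache.spark.sql.catalyst.trees.TreeNodeTag

   // Hypothetical marker; the type-coercion rule would set it on casts it adds.
   val SPARK_ADDED_CAST = TreeNodeTag[Boolean]("sparkAddedCast")

   def columnAsString(e: Expression): String = e match {
     case AttributeReference(name, _, _, _) => name
     // A Spark-added cast (e.g. Sum widening an integral column to long) is stripped,
     // the aggregate is pushed down, and the same cast is re-applied on the database side.
     case c @ Cast(AttributeReference(name, _, _, _), _, _)
         if c.getTagValue(SPARK_ADDED_CAST).contains(true) => name
     // A user-written cast is kept, so aggregate(cast(col)) is not pushed down for now.
     case _ => ""
   }
   ```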
   
   



