linhongliu-db commented on a change in pull request #31286:
URL: https://github.com/apache/spark/pull/31286#discussion_r565768669



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -1763,6 +1763,21 @@ class Analyzer(override val catalogManager: CatalogManager)
     def expandStarExpression(expr: Expression, child: LogicalPlan): Expression = {
       expr.transformUp {
         case f1: UnresolvedFunction if containsStar(f1.arguments) =>
+          // SPECIAL CASE: We want to block count(table.*) because in Spark, count(table.*) will
+          // be expanded while count(*) will be converted to count(1). They will produce different
+          // results and confuse users if there are any null values. For count(t1.*, t2.*), it is

Review comment:
       Because expanding `count(t1.*, t2.*)` is not ambiguous, I think no one will argue that `count(t1.*, t2.*)` should equal `count(1)`. Also, this usage is not supported by the common databases (MySQL, Oracle, PostgreSQL), so we are not in conflict with them.
   But `count(table.*)` is not the same: Spark SQL expands the columns, PostgreSQL converts it to `count(1)`, and MySQL and Oracle (as well as ANSI) think it should be disallowed.
   So I think blocking `count(table.*)` avoids the ambiguity, while keeping `count(t1.*, t2.*)` reduces the side effects.
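
   For context, here is a minimal sketch (not part of this PR; the table name `t` and column names `c1`/`c2` are made up) of how the two interpretations diverge once a column contains nulls: `count(*)` rewritten to `count(1)` counts every row, while the expanded form `count(c1, c2)` only counts rows where all listed columns are non-null.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only (not from this PR): illustrates why expanding count(t.*) to
// count(c1, c2) disagrees with count(*) / count(1) when nulls are present.
object CountStarSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("count-star-sketch")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical table `t` with one null in each column.
    Seq((Some(1), Some("a")), (None, Some("b")), (Some(3), None))
      .toDF("c1", "c2")
      .createOrReplaceTempView("t")

    // count(*) counts every row, nulls included -> 3
    spark.sql("SELECT count(*) FROM t").show()

    // count(c1, c2) only counts rows where ALL listed columns are
    // non-null -> 1, which is the divergence the Analyzer.scala comment
    // warns about for count(table.*).
    spark.sql("SELECT count(c1, c2) FROM t").show()

    spark.stop()
  }
}
```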




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


