kachayev commented on a change in pull request #28133: [SPARK-31156][SQL] 
DataFrameStatFunctions API to be consistent with respect to Column type
URL: https://github.com/apache/spark/pull/28133#discussion_r407295549
 
 

 ##########
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/stat/FrequentItems.scala
 ##########
 @@ -66,6 +68,19 @@ object FrequentItems extends Logging {
     }
   }
 
 +  /** Helper function to resolve a `Column` to its analyzed expression (if not already resolved). */
 +  // TODO: it might be helpful to move this helper into Dataset.scala;
 +  // e.g. the `drop` method uses exactly the same flow to handle
 +  // `Column` arguments.
+  private def resolveColumn(df: DataFrame, col: Column): Column = {
+    col match {
+      case Column(u: UnresolvedAttribute) =>
+        Column(df.queryExecution.analyzed.resolveQuoted(
+          u.name, df.sparkSession.sessionState.analyzer.resolver).getOrElse(u))
 +      case _ => col  // already resolved; pass through unchanged
+    }
+  }
 
 Review comment:
   Exactly the same approach is used in the `drop` implementation here: 
https://github.com/apache/spark/blob/22bb6b0fddb3ecd3ac0ad2b41a5024c86b8a6fc7/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala#L2500-L2505
 -- is there a better approach?
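
   For illustration only, the resolve-or-fall-back flow shared by both call sites can be sketched with a toy schema standing in for Spark's analyzer. The names below (`ColumnResolutionSketch`, `resolveName`, `Schema`) are hypothetical and are not Spark APIs:

```scala
object ColumnResolutionSketch {
  // Toy stand-in for the analyzed plan's attribute lookup: maps a
  // column name to its "resolved" form. Hypothetical; not a Spark API.
  type Schema = Map[String, String]

  // Mirrors the getOrElse flow in resolveColumn: try to resolve the
  // name against the schema, and fall back to the unresolved input
  // when the lookup fails -- just as resolveQuoted(...).getOrElse(u)
  // falls back to the UnresolvedAttribute.
  def resolveName(schema: Schema, name: String): String =
    schema.getOrElse(name, name)
}
```

   Here `resolveName(Map("a" -> "t.a"), "a")` yields the resolved form `"t.a"`, while an unknown name such as `"b"` is passed through unchanged.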

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
