kachayev commented on a change in pull request #28133: [SPARK-31156][SQL] 
DataFrameStatFunctions API to be consistent with respect to Column type
URL: https://github.com/apache/spark/pull/28133#discussion_r404548266
 
 

 ##########
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/stat/FrequentItems.scala
 ##########
 @@ -66,6 +68,19 @@ object FrequentItems extends Logging {
     }
   }
 
+  /** Helper function to resolve a column to an expression (if it is not yet resolved) */
+  // TODO: it might be helpful to have this helper in Dataset.scala,
+  // e.g. `drop` function uses exactly the same flow to deal with
+  // `Column` arguments
+  private def resolveColumn(df: DataFrame, col: Column): Column = {
+    col match {
+      case Column(u: UnresolvedAttribute) =>
+        Column(df.queryExecution.analyzed.resolveQuoted(
+          u.name, df.sparkSession.sessionState.analyzer.resolver).getOrElse(u))
+      case Column(_expr: Expression) => col
+    }
+  }
 
 Review comment:
   The code here tries to resolve the column if it wraps an `UnresolvedAttribute`. 
If that still does not yield a resolved column, I think it's fair to throw an 
exception, similar to how `Dataset.drop` handles an argument that is a column with 
an unresolved attribute. 
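
   For illustration, a minimal sketch of what that exception-on-failure behaviour 
could look like, reusing the same internal resolution call as in the diff above. The 
object name, method name and error message are hypothetical and not part of this PR; 
it also assumes the code lives under `org.apache.spark.sql` (like `FrequentItems`) so 
the internal `Column` extractor and the `AnalysisException` constructor are accessible:

```scala
package org.apache.spark.sql.execution.stat

import org.apache.spark.sql.{AnalysisException, Column, DataFrame}
import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute
import org.apache.spark.sql.catalyst.expressions.Expression

private[sql] object ColumnResolutionSketch {
  // The name `resolveColumnOrFail` is illustrative only, not part of the PR.
  def resolveColumnOrFail(df: DataFrame, col: Column): Column = col match {
    case Column(u: UnresolvedAttribute) =>
      df.queryExecution.analyzed
        .resolveQuoted(u.name, df.sparkSession.sessionState.analyzer.resolver)
        .map(Column(_))
        // Instead of falling back to the unresolved attribute (getOrElse(u)),
        // surface the problem to the caller right away.
        .getOrElse(throw new AnalysisException(
          s"Cannot resolve column name ${u.name} among (${df.columns.mkString(", ")})"))
    case Column(_: Expression) => col
  }
}
```

   Whether to throw here or to keep the `getOrElse(u)` fallback is exactly the 
behavioural choice being discussed in this thread.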

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
