cloud-fan commented on code in PR #43632:
URL: https://github.com/apache/spark/pull/43632#discussion_r1380089299
##########
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -3064,13 +3064,31 @@ class Dataset[T] private[sql](
*/
@scala.annotation.varargs
def drop(col: Column, cols: Column*): DataFrame = {
- val allColumns = col +: cols
- val expressions = (for (col <- allColumns) yield col match {
+ val expressions = (col +: cols).map {
case Column(u: UnresolvedAttribute) =>
- queryExecution.analyzed.resolveQuoted(
Review Comment:
I think the problem is that we put complicated resolution logic in the DataFrame APIs. Shall
we add a new logical plan `DropColumns`? It will be rewritten to `Project` when
the columns are all resolved, so that we can reuse all the column resolution
logic we implemented in the analyzer.
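
To illustrate the proposal, here is a minimal, self-contained sketch (toy types, not Spark's actual Catalyst classes) of an unresolved `DropColumns` node that an analyzer-style rule rewrites into a `Project` over the remaining columns once the child's output is known. All names here (`Plan`, `Relation`, `rewriteDropColumns`) are hypothetical placeholders for the corresponding Catalyst machinery:

```scala
// Toy logical plan hierarchy standing in for Catalyst's LogicalPlan.
sealed trait Plan { def output: Seq[String] }
case class Relation(output: Seq[String]) extends Plan
case class Project(cols: Seq[String], child: Plan) extends Plan {
  def output: Seq[String] = cols
}
// Unresolved node: carries the columns to drop until analysis time.
// Note `filterNot`: names absent from the child are silently ignored,
// matching drop's tolerant semantics.
case class DropColumns(toDrop: Seq[String], child: Plan) extends Plan {
  def output: Seq[String] = child.output.filterNot(toDrop.contains)
}

// Analyzer-style rewrite rule (hypothetical): once the child's output is
// resolved, replace DropColumns with a Project keeping everything else.
def rewriteDropColumns(plan: Plan): Plan = plan match {
  case DropColumns(toDrop, child) =>
    Project(child.output.filterNot(toDrop.contains), rewriteDropColumns(child))
  case Project(cols, child) => Project(cols, rewriteDropColumns(child))
  case other => other
}

val plan = DropColumns(Seq("b"), Relation(Seq("a", "b", "c")))
val analyzed = rewriteDropColumns(plan)
// analyzed keeps columns "a" and "c"
```

The point of the design is that the DataFrame API only constructs the unresolved node; all column resolution stays in the analyzer, where it can be shared with every other operator.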
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]