maropu commented on a change in pull request #27128: [SPARK-30421][SQL] Dropped columns still available for filtering
URL: https://github.com/apache/spark/pull/27128#discussion_r364098534
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
 ##########
 @@ -2376,7 +2376,18 @@ class Dataset[T] private[sql](
     if (remainingCols.size == allColumns.size) {
       toDF()
     } else {
-      this.select(remainingCols: _*)
+      val df = this.select(remainingCols: _*)
+      withAction("drop", df.queryExecution) { physicalPlan =>
 
 Review comment:
  Probably, @amanomer just reused my example code from the jira. That was only example code and I didn't check it carefully (I just copied/pasted it from another part). I'm not currently sure this is an issue worth fixing, as I said in the jira: https://issues.apache.org/jira/browse/SPARK-30421
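
  For reference, here is a minimal sketch of the behavior the jira title describes. This is an assumed reproduction for illustration only (the column names, data, and object name below are not taken from the jira or the PR):

  ```scala
  // Assumed reproduction sketch for SPARK-30421: a column removed via drop()
  // can still be referenced in a later filter, because the analyzer resolves
  // the missing attribute against the child plan.
  import org.apache.spark.sql.SparkSession

  object Spark30421Sketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("SPARK-30421 sketch")
        .getOrCreate()
      import spark.implicits._

      val df = Seq((1, "a"), (2, "b")).toDF("id", "name")

      // "name" is dropped here, so intuitively a later filter on it should fail ...
      val dropped = df.drop("name")

      // ... but filtering on the dropped column may still succeed, which is the
      // surprising behavior reported in the jira.
      dropped.filter($"name" === "a").show()

      spark.stop()
    }
  }
  ```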
