viirya commented on a change in pull request #34051:
URL: https://github.com/apache/spark/pull/34051#discussion_r715277578



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/subquery.scala
##########
@@ -130,14 +131,17 @@ case class InSubqueryExec(
     } else {
       rows.map(_.get(0, child.dataType))
     }
-    resultBroadcast = plan.session.sparkContext.broadcast(result)
+    if (needBroadcast) {
+      resultBroadcast = plan.session.sparkContext.broadcast(result)
+    }
   }
 
-  def values(): Option[Array[Any]] = Option(resultBroadcast).map(_.value)
+  // This is used only by DPP, where we don't need to broadcast the result.
+  def values(): Option[Array[Any]] = Option(result)
 
   private def prepareResult(): Unit = {

Review comment:
       Oh, it seems DPP filters are evaluated differently. `DataSourceScanExec` 
calls the DPP filters' catalyst expression evaluation directly. But for a v2 
scan, we translate them to source filters and let the scan do the filtering 
internally, so catalyst expression evaluation is never invoked.
   
   So we cannot fail in `prepareResult` now.
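
   The pattern under discussion can be sketched roughly as follows. This is a hypothetical, simplified stand-in (the class name, constructor, and the plain `Option` standing in for a real `Broadcast` are all assumptions, not Spark's actual API): the result is broadcast only when executors will consume it, while the driver-side DPP path reads the unbroadcast result through `values()`.

   ```scala
   // Hypothetical sketch: broadcast only when needed, keep driver-side access cheap.
   class SubqueryResult[T](compute: () => Array[T], needBroadcast: Boolean) {
     private var result: Array[T] = _
     // Stand-in for a real Broadcast[Array[T]]; only populated when requested.
     private var broadcasted: Option[Array[T]] = None

     def prepare(): Unit = {
       result = compute()
       if (needBroadcast) {
         // In Spark this would be plan.session.sparkContext.broadcast(result).
         broadcasted = Some(result)
       }
     }

     // Driver-side access path used by DPP: reads the plain result directly,
     // so it works even when no broadcast was created.
     def values(): Option[Array[T]] = Option(result)
   }
   ```

   Under this sketch, a DPP consumer constructs the result with `needBroadcast = false` and still gets `values()` back after `prepare()`, which mirrors why the diff moves the broadcast behind the `needBroadcast` flag.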




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


