advancedxy commented on code in PR #147:
URL: https://github.com/apache/arrow-datafusion-comet/pull/147#discussion_r1508423594
##########
spark/src/main/java/org/apache/spark/sql/comet/CometScalarSubquery.java:
##########
@@ -47,10 +47,12 @@ public static synchronized void setSubquery(long planId, ScalarSubquery subquery
   }
   public static synchronized void removeSubquery(long planId, ScalarSubquery subquery) {
-    subqueryMap.get(planId).remove(subquery.exprId().id());
+    if (subqueryMap.containsKey(planId)) {
+      subqueryMap.get(planId).remove(subquery.exprId().id());
Review Comment:
This looks correct to me.
One possible way to construct such a case:
```
val subquery =
  new Column(ScalarSubquery(spark.range(10).selectExpr("max(id)").logicalPlan)).as("subquery")
val df = spark.range(1000).select($"id", subquery).filter("id == subquery")
```
Disclaimer: this case is not verified.
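For completeness, here is a self-contained sketch of the same scenario, assuming the Spark 3.x API and a local SparkSession. The object name, the `local[1]` master, and the use of `queryExecution.logical` in place of the package-private `Dataset.logicalPlan` are my own choices, Comet-specific configuration is omitted, and the scenario is still unverified against Comet:
```
import org.apache.spark.sql.{Column, SparkSession}
import org.apache.spark.sql.catalyst.expressions.ScalarSubquery

object SharedScalarSubqueryRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("scalar-subquery-repro")
      .getOrCreate()
    import spark.implicits._

    // Build one ScalarSubquery expression and reuse the resulting column in both
    // the projection and the filter, so the same (planId, exprId) pair may be
    // registered and removed more than once on the Comet side.
    val maxIdPlan = spark.range(10).selectExpr("max(id)").queryExecution.logical
    val subquery = new Column(ScalarSubquery(maxIdPlan)).as("subquery")

    val df = spark.range(1000).select($"id", subquery).filter("id == subquery")
    df.collect()

    spark.stop()
  }
}
```
Running the final `collect()` with Comet enabled should exercise the `removeSubquery` path for a subquery expression that is shared between two operators.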