HeartSaVioR edited a comment on issue #25913: [WIP][SPARK-29221][SQL] LocalTableScanExec: handle the case where executors are accessing "null" rows
URL: https://github.com/apache/spark/pull/25913#issuecomment-534524090

I guess the approach should be changed, as there seem to be plenty of places in physical plans which are not safe to execute on executors. Something might have changed recently:

https://github.com/apache/spark/blob/7c02c143aad92823bb58d14d0e66b0128af63772/sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala#L93-L102
https://github.com/apache/spark/blob/c61270fd74a89a39e5fbbfa9402e801ec57ce34c/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala#L63-L72
https://github.com/apache/spark/blob/fff2e847c2fb58e4ef50063e70fa0053498a4109/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala#L976-L983

When values or methods of SparkPlan are referenced on the executor side, `sqlContext` in SparkPlan will be `null` (it is `@transient`, so it is not serialized to executors), and calling `sparkContext` in SparkPlan will throw an NPE. Thus HashAggregateExec cannot be initialized on the executor side. The same applies to any use of `sqlContext` or `sparkContext` while initializing a physical node.

Not sure we want to fix this up by changing them to return `Option`. (I agree that's just a band-aid.)
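The failure mode described above is plain JVM serialization behavior: a `transient` field is skipped when an object is serialized, so it comes back as `null` after deserialization on the executor. A minimal, hypothetical Java sketch (the names `Plan` and `requireContext` are stand-ins, not actual Spark code) of that round trip:

```java
import java.io.*;

// Hypothetical stand-in for SparkPlan: `context` plays the role of the
// @transient sqlContext field, populated only on the driver side.
class Plan implements Serializable {
    transient Object context = new Object();

    // Mimics SparkPlan.sparkContext: dereferences the transient field,
    // so it throws NPE once the object has been deserialized.
    Object requireContext() {
        if (context == null) {
            throw new NullPointerException("context is null after deserialization");
        }
        return context;
    }
}

public class TransientDemo {
    public static void main(String[] args) throws Exception {
        Plan driverSide = new Plan();
        driverSide.requireContext(); // fine on the "driver"

        // Round-trip through Java serialization, analogous to Spark
        // shipping a task closure to an executor.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(driverSide);
        oos.close();
        Plan executorSide = (Plan) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();

        // The transient field was never written to the stream.
        System.out.println(executorSide.context == null); // prints true
        try {
            executorSide.requireContext();
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e.getMessage());
        }
    }
}
```

This is why returning `Option` would only be a band-aid: it would move the `null` check to every call site rather than prevent the field from being absent on executors.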
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: [email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
