cloud-fan commented on issue #25095: [SPARK-28332][SQL] SQLMetric wrong initValue
URL: https://github.com/apache/spark/pull/25095#issuecomment-510357416

I think there are two kinds of SQL metrics:

1. The executor side just sets a value, and the driver side merges the values by computing some statistics (e.g. min/med/max).
2. The executor side accumulates values (by calling `add`), and the driver side merges the values by summing them.

Only the first kind needs an initial value of -1. Maybe we can refactor `SQLMetrics` a little to enforce this; a sketch of the idea follows below.

BTW, it's hard to avoid invalid accumulators. Currently Spark just sends the physical plan tree to the executor side for execution, and the accumulators are carried inside the physical plan nodes. Ideally we should have something separate from the physical plan, so that we can precisely control which parts need to go to the executor side and avoid sending unrelated accumulators. That would be a lot of work, though.
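For concreteness, here is a minimal Scala sketch of what "enforcing it" could look like. The names (`SQLMetricKind`, `SQLMetricSketch`, etc.) are hypothetical and this is not the real `SQLMetrics` API; it only illustrates encoding the two metric kinds so that the -1 sentinel is used exactly where the driver must distinguish "never set" from a real zero.

```scala
// Hypothetical sketch only -- not the actual SQLMetrics API.
sealed trait SQLMetricKind
case object SetAndAggregate extends SQLMetricKind // kind 1: executor sets, driver merges via statistics
case object SumOnAdd        extends SQLMetricKind // kind 2: executor calls add(), driver sums

class SQLMetricSketch(val kind: SQLMetricKind) {
  // Kind 1 starts at -1 so an untouched metric can be filtered out on the driver;
  // kind 2 starts at 0 because summing zeros is harmless.
  private var value: Long = kind match {
    case SetAndAggregate => -1L
    case SumOnAdd        => 0L
  }

  def set(v: Long): Unit = {
    require(kind == SetAndAggregate, "set() is only for set-and-aggregate metrics")
    value = v
  }

  def add(v: Long): Unit = {
    require(kind == SumOnAdd, "add() is only for sum metrics")
    value += v
  }

  // Driver side can drop kind-1 metrics that were never set.
  def isValid: Boolean = kind != SetAndAggregate || value >= 0

  def current: Long = value
}
```

With something like this, the initial value is decided by the metric kind rather than by each call site, so it is harder to create a sum metric that accidentally starts at -1.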
