[
https://issues.apache.org/jira/browse/SPARK-21425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095323#comment-16095323
]
Shixiong Zhu commented on SPARK-21425:
--------------------------------------
[~srowen] The issue is with static accumulators, right? They won't be
serialized/deserialized with tasks, so their updates cannot be reported back to
the driver. When running in local-cluster mode or on a real cluster, they will
always be 0 on the driver side.
The overhead comes from the memory barriers introduced by synchronization. I
tried to improve this but gave up due to the significant performance regression:
https://github.com/apache/spark/pull/15065
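For anyone unfamiliar with the underlying race: a plain {{long += 1}} is a
non-atomic read-modify-write, so concurrent writers can silently lose updates,
while an atomic counter pays the memory-barrier cost mentioned above but stays
correct. A minimal standalone illustration (plain Java, not Spark code; the
class name and thread/iteration counts are made up for the demo):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AccumRace {
    // Not thread-safe: "plain += 1" is a read-modify-write race.
    static long plain = 0;
    // Thread-safe: increments via an atomic CAS, at the cost of memory barriers.
    static final AtomicLong atomic = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        final int threads = 8, perThread = 100_000;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    plain += 1;               // lost updates likely under contention
                    atomic.incrementAndGet(); // always counts correctly
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        // "plain" frequently ends up below threads * perThread; "atomic" never does.
        System.out.println("plain  = " + plain);
        System.out.println("atomic = " + atomic.get()); // 800000
    }
}
```

This is the trade-off behind the linked PR: making LongAccumulator/DoubleAccumulator
safe this way works, but the synchronization shows up as a measurable regression on
the hot add path.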
> LongAccumulator, DoubleAccumulator not threadsafe
> -------------------------------------------------
>
> Key: SPARK-21425
> URL: https://issues.apache.org/jira/browse/SPARK-21425
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Ryan Williams
> Priority: Minor
>
> [AccumulatorV2
> docs|https://github.com/apache/spark/blob/v2.2.0/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala#L42-L43]
> acknowledge that accumulators must be concurrent-read-safe, but afaict they
> must also be concurrent-write-safe.
> The same docs imply that {{Int}} and {{Long}} meet either/both of these
> criteria, when afaict they do not.
> Relatedly, the provided
> [LongAccumulator|https://github.com/apache/spark/blob/v2.2.0/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala#L291]
> and
> [DoubleAccumulator|https://github.com/apache/spark/blob/v2.2.0/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala#L370]
> are not thread-safe, and should be expected to produce undefined results when
> multiple concurrent tasks on the same executor write to them.
> [Here is a repro repo|https://github.com/ryan-williams/spark-bugs/tree/accum]
> with some simple applications that demonstrate incorrect results from
> {{LongAccumulator}}s.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]