[GitHub] spark pull request #22586: [SPARK-25568][Core] Continue to update the remaining accumulators when failing to update one accumulator

Github user asfgit closed the pull request at: https://github.com/apache/spark/pull/22586

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user zsxwing commented on a diff in the pull request: https://github.com/apache/spark/pull/22586#discussion_r221438766

Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
@@ -1880,6 +1880,26 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
     assert(sc.parallelize(1 to 10, 2).count() === 10)
   }
+
+  test("misbehaved accumulator should not impact other accumulators") {

End diff

> Also verify the log message?

That's not in the core project.
Github user gatorsmile commented on a diff in the pull request: https://github.com/apache/spark/pull/22586#discussion_r221415820

Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
@@ -1880,6 +1880,26 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
     assert(sc.parallelize(1 to 10, 2).count() === 10)
   }
+
+  test("misbehaved accumulator should not impact other accumulators") {

End diff

Also verify the log message? See the example: https://github.com/apache/spark/blob/master/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizerLoggingSuite.scala#L41-L52
GitHub user zsxwing opened a pull request: https://github.com/apache/spark/pull/22586

[SPARK-25568][Core] Continue to update the remaining accumulators when failing to update one accumulator

## What changes were proposed in this pull request?

Since we don't fail a job when `AccumulatorV2.merge` fails, we should try to update the remaining accumulators so that they can still report correct values.

## How was this patch tested?

The new unit test.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zsxwing/spark SPARK-25568

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/22586.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #22586

commit badb4711caff6e3d10ba9037f71c1ad6515577e8
Author: Shixiong Zhu
Date: 2018-09-28T20:52:07Z

    continue to update the remaining accumulators
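The idea behind the fix can be sketched in isolation. This is not Spark's actual `DAGScheduler` code; the `Acc` trait, `updateAll` helper, and class names below are hypothetical stand-ins that show the pattern: wrap each accumulator's merge in its own try/catch so that one misbehaving accumulator cannot prevent the others from being updated.

```scala
// Hypothetical sketch, not the real Spark implementation.
object AccumulatorMergeSketch {
  // Minimal stand-in for AccumulatorV2: merge an update into local state.
  trait Acc {
    def merge(update: Long): Unit
    def value: Long
  }

  class LongAcc extends Acc {
    private var sum = 0L
    def merge(update: Long): Unit = sum += update
    def value: Long = sum
  }

  // An accumulator whose merge always throws, like the one in the new test.
  class MisbehavedAcc extends LongAcc {
    override def merge(update: Long): Unit =
      throw new UnsupportedOperationException("misbehaved merge")
  }

  // Apply the update to every accumulator; catch per-accumulator failures
  // and collect them instead of aborting the loop, so accumulators after
  // the failing one still receive the update.
  def updateAll(accs: Seq[Acc], update: Long): Seq[Throwable] =
    accs.flatMap { acc =>
      try {
        acc.merge(update)
        None
      } catch {
        case e: Throwable => Some(e) // in Spark this would be logged
      }
    }

  def main(args: Array[String]): Unit = {
    val before = new LongAcc
    val bad    = new MisbehavedAcc
    val after  = new LongAcc

    val errors = updateAll(Seq(before, bad, after), 5L)

    assert(before.value == 5L)
    assert(after.value == 5L)  // still updated despite the earlier failure
    assert(errors.length == 1) // exactly one merge failed
    println("ok")
  }
}
```

Without the per-element try/catch, the exception from the middle accumulator would propagate out of the loop and the accumulators after it would silently keep stale values, which is the bug this PR addresses.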