[ https://issues.apache.org/jira/browse/BEAM-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16774534#comment-16774534 ]
Kyle Weaver commented on BEAM-5171:
-----------------------------------
https://builds.apache.org/job/beam_PreCommit_Java_Phrase/755/testReport/junit/org.apache.beam.runners.spark.aggregators.metrics.sink/SparkMetricsSinkTest/testInBatchMode/
> org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests
> are flaky in Spark runner
> -----------------------------------------------------------------------------------------------------
>
> Key: BEAM-5171
> URL: https://issues.apache.org/jira/browse/BEAM-5171
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Reporter: Valentyn Tymofieiev
> Assignee: Etienne Chauchot
> Priority: Major
> Labels: triaged
>
> Two tests:
> org.apache.beam.sdk.io.CountingSourceTest.testUnboundedSourceSplits
> org.apache.beam.sdk.io.CountingSourceTest.testBoundedSourceSplits
> failed in a PostCommit [Spark Validates Runner test
> suite|https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark_Gradle/1277/testReport/]
> with an error that seems to be a common Spark failure. Could this be due to a
> misconfiguration of the Spark cluster?
> Task serialization failed: java.io.IOException: Failed to create local dir in /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d.
> java.io.IOException: Failed to create local dir in /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d.
> at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
> at org.apache.spark.storage.DiskStore.remove(DiskStore.scala:116)
> at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1511)
> at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1045)
> at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
> at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:841)
> at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1404)
> at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
> at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
> at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
> at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1482)
> at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1039)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:947)
> at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:891)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1780)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
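
If the failure is indeed environmental (no writable scratch space under /tmp on the build agent), one possible mitigation is to point Spark's scratch directory elsewhere via the standard spark.local.dir setting. Below is a minimal, hypothetical Java sketch only; the path and the standalone harness are assumptions, not how the Beam ValidatesRunner tests are actually wired.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalDirCheck {
  public static void main(String[] args) {
    // Hypothetical workaround sketch: point Spark's scratch space at a
    // directory known to be writable on the build machine instead of
    // relying on /tmp, where DiskBlockManager failed to create the
    // blockmgr-* directory in the trace above.
    SparkConf conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("local-dir-check")
        .set("spark.local.dir", "/var/tmp/spark-scratch"); // assumed path
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      // A trivial broadcast exercises the same code path that failed above
      // (TorrentBroadcast.writeBlocks -> BlockManager.putSingle -> DiskBlockManager).
      jsc.broadcast(new int[] {1, 2, 3});
    }
  }
}
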
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)