[jira] [Updated] (BEAM-5171) org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests are flaky in Spark runner

2020-06-08, Kenneth Knowles (Jira)


 [ https://issues.apache.org/jira/browse/BEAM-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-5171:
----------------------------------
Labels: beam-fixit flake stale-P2  (was: stale-P2)

> org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests 
> are flaky in Spark runner
> --------------------------------------------------------------------------------------------------
>
> Key: BEAM-5171
> URL: https://issues.apache.org/jira/browse/BEAM-5171
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Reporter: Valentyn Tymofieiev
> Priority: P2
> Labels: beam-fixit, flake, stale-P2
>
> Two tests:
> org.apache.beam.sdk.io.CountingSourceTest.testUnboundedSourceSplits
> org.apache.beam.sdk.io.CountingSourceTest.testBoundedSourceSplits
> failed in a PostCommit [Spark Validates Runner test suite|https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark_Gradle/1277/testReport/] with an error that seems to be common for Spark. Could this be due to a misconfiguration of the Spark cluster?
> Task serialization failed: java.io.IOException: Failed to create local dir in /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d.
> java.io.IOException: Failed to create local dir in /tmp/blockmgr-de91f449-e5d1-4be4-acaa-3ee06fdfa95b/1d.
>  at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
>  at org.apache.spark.storage.DiskStore.remove(DiskStore.scala:116)
>  at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1511)
>  at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1045)
>  at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
>  at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:841)
>  at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1404)
>  at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
>  at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
>  at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>  at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>  at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1482)
>  at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1039)
>  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:947)
>  at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:891)
>  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1780)
>  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
>  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
>  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
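
Two hedged sketches follow for readers triaging this flake; neither is taken from the Beam codebase itself.

First, what the two tests exercise: both ask a counting source to split itself and then verify that the splits are collectively equivalent to the unsplit source. This sketch of the bounded case uses public Beam APIs (CountingSource.upTo, BoundedSource.split, SourceTestUtils); the bundle size and element count are illustrative, and the real CountingSourceTest internals may differ:

    import java.util.List;
    import org.apache.beam.sdk.io.BoundedSource;
    import org.apache.beam.sdk.io.CountingSource;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.testing.SourceTestUtils;

    public class BoundedSplitSketch {
      public static void main(String[] args) throws Exception {
        PipelineOptions options = PipelineOptionsFactory.create();
        BoundedSource<Long> source = CountingSource.upTo(1000);
        // Ask the source to split into roughly 100-byte bundles, as a runner
        // would when distributing work across executors.
        List<? extends BoundedSource<Long>> splits = source.split(100, options);
        // The splits together must read back exactly the elements of the
        // unsplit source, however many splits were produced.
        SourceTestUtils.assertSourcesEqualReferenceSource(source, splits, options);
      }
    }

Second, the error itself. The trace fails on the driver while broadcasting the serialized task (SparkContext.broadcast -> TorrentBroadcast -> BlockManager -> DiskBlockManager.getFile), before any Beam code runs, which points at the environment rather than at the sources under test. If something removes Spark's blockmgr-* scratch directory mid-run (for example a periodic /tmp cleaner on the build executor), this IOException is the expected symptom. A minimal mitigation sketch, assuming plain Spark Java APIs; spark.local.dir is a standard Spark setting, but the path and app name below are illustrative, not taken from the Beam test configuration:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class LocalDirSketch {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setMaster("local[4]")                      // embedded test master
            .setAppName("counting-source-splits-repro") // illustrative name
            // Keep block-manager files out of /tmp so tmp cleaners cannot
            // delete them while a job is running (this path is an assumption).
            .set("spark.local.dir", "/var/tmp/spark-scratch");
        // JavaSparkContext implements Closeable, so try-with-resources works.
        try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
          // Run the pipeline or test under this context.
        }
      }
    }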



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (BEAM-5171) org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests are flaky in Spark runner

2020-06-08, Kenneth Knowles (Jira)


 [ https://issues.apache.org/jira/browse/BEAM-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-5171:
----------------------------------
Priority: P1  (was: P2)

> org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests 
> are flaky in Spark runner
> --------------------------------------------------------------------------------------------------
>
> Key: BEAM-5171
> URL: https://issues.apache.org/jira/browse/BEAM-5171
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Reporter: Valentyn Tymofieiev
> Priority: P1
> Labels: beam-fixit, flake, stale-P2





[jira] [Updated] (BEAM-5171) org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests are flaky in Spark runner

2020-06-01, Beam JIRA Bot (Jira)


 [ https://issues.apache.org/jira/browse/BEAM-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Beam JIRA Bot updated BEAM-5171:

Labels: stale-P2  (was: )

> org.apache.beam.sdk.io.CountingSourceTest.test[Un]boundedSourceSplits tests 
> are flaky in Spark runner
> --------------------------------------------------------------------------------------------------
>
> Key: BEAM-5171
> URL: https://issues.apache.org/jira/browse/BEAM-5171
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Reporter: Valentyn Tymofieiev
> Priority: P2
> Labels: stale-P2


