[ https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15975490#comment-15975490 ]

ilker ozsaracoglu edited comment on SPARK-16599 at 4/19/17 8:50 PM:
--------------------------------------------------------------------

[~sowen], I get this error consistently. I am currently on 2.1, but I had the 
same experience on 2.0. The error points to the "foreach" step.

This is the case (with collect) in which I do NOT experience the problem, 
regardless of my job submit mode (local, YARN-client, or YARN-cluster):
DFnodeGroup.collect().foreach(r => {
...
})

This is the case (without collect) in which I DO experience the problem when 
submitting the job to YARN (client or cluster), but not locally:
DFnodeGroup.foreach(r => {
...
})

The difference might be whether the tasks run in the same JVM or not. I tried 
workarounds, including the one suggested by [~naegelejd] above on Sep 8, with 
no success.
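
To make the contrast concrete, here is a minimal, self-contained Scala sketch 
of the two cases (the two-column DataFrame standing in for DFnodeGroup is 
hypothetical); the only difference between them is whether the foreach closure 
runs in the driver JVM or is serialized out to executor JVMs:

import org.apache.spark.sql.SparkSession

object ForeachRepro {
  def main(args: Array[String]): Unit = {
    // The master (local / yarn) is supplied by spark-submit.
    val spark = SparkSession.builder().appName("foreach-repro").getOrCreate()
    import spark.implicits._

    // Hypothetical stand-in for DFnodeGroup.
    val dfNodeGroup = Seq(("a", 1), ("b", 2)).toDF("node", "group")

    // Case 1 (works in all submit modes): collect() pulls every row into
    // the driver, so foreach is plain Scala iteration in the driver JVM.
    dfNodeGroup.collect().foreach(r => {
      println(r)
    })

    // Case 2 (fails on YARN per this report): Dataset.foreach serializes
    // the closure and runs it on executors, which are separate JVMs except
    // in local mode.
    dfNodeGroup.foreach(r => {
      println(r) // output lands in executor logs, not the driver console
    })

    spark.stop()
  }
}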

Thanks.



> java.util.NoSuchElementException: None.get at 
> org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
> ----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16599
>                 URL: https://issues.apache.org/jira/browse/SPARK-16599
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>         Environment: CentOS 6.7, Spark 2.0
>            Reporter: binde
>
> Running a Spark job with Spark 2.0 produces this error message:
> Job aborted due to stage failure: Task 0 in stage 821.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 821.0 (TID 1480, e103): 
> java.util.NoSuchElementException: None.get
>       at scala.None$.get(Option.scala:347)
>       at scala.None$.get(Option.scala:345)
>       at 
> org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
>       at 
> org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)


