[ https://issues.apache.org/jira/browse/SPARK-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487203#comment-15487203 ]

Thomas Graves commented on SPARK-17321:
---------------------------------------

Yes, that makes sense. As I stated, I think the fix for this should be that 
the Spark shuffle service doesn't use the backup database at all if NM 
recovery (and the corresponding Spark config) aren't enabled. That way you 
wouldn't hit any disk errors. If NM recovery isn't enabled, the Spark DB 
isn't going to do you any good, because the NM is going to shoot any running 
containers on restart anyway.

If you are up for making those changes, please go ahead and put up a patch.
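The gating described above could look roughly like the following. This is a minimal, hypothetical sketch, not the actual Spark code: the class and method names (`RecoveryDbGate`, `recoveryDbIfEnabled`) and the DB file name are illustrative assumptions.

```java
import java.io.File;

// Hypothetical sketch: only touch the recovery DB when NM recovery is
// enabled. Names below are illustrative, not the real Spark/YARN API.
public class RecoveryDbGate {
    /** Returns the recovery DB file, or null when NM recovery is disabled. */
    static File recoveryDbIfEnabled(boolean nmRecoveryEnabled, String recoveryPath) {
        if (!nmRecoveryEnabled) {
            // Without NM recovery, the NM kills running containers on
            // restart, so persisting executor registrations buys nothing;
            // skipping the DB also avoids touching a possibly bad disk.
            return null;
        }
        return new File(recoveryPath, "registeredExecutors.ldb");
    }

    public static void main(String[] args) {
        System.out.println(recoveryDbIfEnabled(false, "/tmp"));
        System.out.println(recoveryDbIfEnabled(true, "/tmp").getName());
    }
}
```

The point of the early return is that the service never opens (or even stats) a file on a local disk unless the persisted state could actually be used after a restart.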

> YARN shuffle service should use good disk from yarn.nodemanager.local-dirs
> --------------------------------------------------------------------------
>
>                 Key: SPARK-17321
>                 URL: https://issues.apache.org/jira/browse/SPARK-17321
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.6.2, 2.0.0
>            Reporter: yunjiong zhao
>
> We run spark on yarn, after enabled spark dynamic allocation, we notice some 
> spark application failed randomly due to YarnShuffleService.
> From log I found
> {quote}
> 2016-08-29 11:33:03,450 ERROR org.apache.spark.network.TransportContext: 
> Error while initializing Netty pipeline
> java.lang.NullPointerException
>         at 
> org.apache.spark.network.server.TransportRequestHandler.<init>(TransportRequestHandler.java:77)
>         at 
> org.apache.spark.network.TransportContext.createChannelHandler(TransportContext.java:159)
>         at 
> org.apache.spark.network.TransportContext.initializePipeline(TransportContext.java:135)
>         at 
> org.apache.spark.network.server.TransportServer$1.initChannel(TransportServer.java:123)
>         at 
> org.apache.spark.network.server.TransportServer$1.initChannel(TransportServer.java:116)
>         at 
> io.netty.channel.ChannelInitializer.channelRegistered(ChannelInitializer.java:69)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRegistered(AbstractChannelHandlerContext.java:133)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRegistered(AbstractChannelHandlerContext.java:119)
>         at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRegistered(DefaultChannelPipeline.java:733)
>         at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:450)
>         at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.access$100(AbstractChannel.java:378)
>         at 
> io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:424)
>         at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>         at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>         at java.lang.Thread.run(Thread.java:745)
> {quote} 
> This was caused by the first disk in yarn.nodemanager.local-dirs being broken.
> If we enabled spark.yarn.shuffle.stopOnFailure (SPARK-16505) we might lose 
> hundreds of nodes, which is unacceptable.
> We have 12 disks in yarn.nodemanager.local-dirs, so why not use another good 
> disk if the first one is broken?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
