As the subject says, my production cluster frequently reports
org.apache.flink.runtime.io.network.netty.exception.LocalTransportException.
The concrete causes fall into the following cases, listed from most to least frequent.
(1) Sending the partition request to '...' failed
org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: Sending the partition request to '/10.35.215.18:2065 (#0)' failed.
    at org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:145)
    at org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:133)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1017)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:878)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1071)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
    at org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
    at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.StacklessClosedChannelException
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object, ChannelPromise)(Unknown Source)
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException: writeAddress(..) failed: Connection timed out

(2) readAddress(..) failed: Connection timed out
org.apache.flink.runtime.io.network.netty.exception.LocalTransportException: readAddress(..) failed: Connection timed out (connection to '10.35.109.149/10.35.109.149:2094')
    at org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.exceptionCaught(CreditBasedPartitionRequestClientHandler.java:168)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)
    at org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.handleReadException(AbstractEpollStreamChannel.java:728)
    at org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:821)
    at org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
    at org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
    at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection timed out

Cluster info: standalone cluster, over a hundred machines. Each node is an in-house container with about 10 GB of memory, and every TM provides exactly 1 slot so that tasks do not interfere with each other.
Job info: fewer than 10 jobs in total. The most complex single job runs at about 120 parallelism with an ingress rate of roughly 160k records/s; it consists of data processing, 5-minute sliding-window processing (the job occasionally gets busy at the window stage), an async Redis operator, and so on.
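To make that topology concrete, here is a rough sketch of the job's shape. It is purely illustrative: the source, key, slide interval, and operator bodies are placeholders, and the Redis lookup is faked with a local CompletableFuture instead of a real client.

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class JobShapeSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(120); // the real job runs at ~120 parallelism

        // Stand-in for the ~160k/s ingress source.
        DataStream<Long> events = env.fromSequence(0, Long.MAX_VALUE);

        // Async "Redis" enrichment; the lookup is faked with a completed future here.
        DataStream<Long> enriched = AsyncDataStream.unorderedWait(
                events,
                new RichAsyncFunction<Long, Long>() {
                    @Override
                    public void asyncInvoke(Long input, ResultFuture<Long> resultFuture) {
                        CompletableFuture
                                .supplyAsync(() -> input) // would be the Redis call
                                .thenAccept(v -> resultFuture.complete(Collections.singletonList(v)));
                    }
                },
                1, TimeUnit.SECONDS, // lookup timeout
                100);                // max in-flight async requests

        // 5-minute sliding window; the stage that occasionally gets busy.
        enriched
                .keyBy(new KeySelector<Long, Long>() {
                    @Override
                    public Long getKey(Long value) {
                        return value % 120; // placeholder key
                    }
                })
                .window(SlidingProcessingTimeWindows.of(Time.minutes(5), Time.minutes(1)))
                .reduce((a, b) -> a + b)
                .print();

        env.execute("job-shape sketch");
    }
}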
Job status: backpressure is no longer severe and rarely shows up now. But the connection-related problems above keep occurring. The in-house containers can indeed fail and restart, but after tuning, a container itself now fails only about once every several days, while a job may report LocalTransportException many times in a single day.
Also, in my past experience, the probability of LocalTransportException is related to cluster uptime: after running continuously for a few days the frequency increases, and a full cluster restart brings it back down.

I am now weighing the possible optimization directions below; any advice would be appreciated. Since each comparative experiment has a very long cycle, a hint about which direction is most promising would help a lot; otherwise I would have to control one factor at a time, and every comparison needs days of observation, which simply takes too long.
(1) Split operators into different slot sharing groups, then lower the overall parallelism.
If the problem is caused by too many connection pairs between TM tasks, this might be effective. For example, change A=>B at 120 parallelism into A=>B at 60 parallelism, with A and B placed in different slot sharing groups (see the sketch after this item).
         From the connection-pair angle this seems to reduce complexity, but per-container traffic may grow (the high-traffic operators could end up concentrated on one set of 60 containers while the other 60 carry less).
         Also, splitting the job into 3 slot sharing groups at 60 parallelism each actually requires 180 containers, and 180 × the per-container failure probability increases, to some extent, the risk of job restarts caused by container failures.
         Note that the "connection pairs" above are, as I understand it, really a logical concept, because the TCP connections between TMs are shared (multiplexed)? So the actual number of "connection pairs" just comes down to the number of resultSubPartition instances.
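A minimal sketch of what option (1) would look like in the DataStream API, assuming a plain A=>B chain; the operator bodies and the group names "groupA"/"groupB" are placeholders:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotGroupSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // "A" at 60 parallelism, pinned to its own slot sharing group.
        DataStream<Long> a = env
                .fromSequence(0, Long.MAX_VALUE) // stand-in for the real source
                .name("A")
                .setParallelism(60)
                .slotSharingGroup("groupA");

        // "B" in a different group: its 60 subtasks must take other slots, and
        // with 1 slot per TM that forces the A=>B edge onto separate containers.
        a.map(new MapFunction<Long, Long>() {
                    @Override
                    public Long map(Long value) {
                        return value; // stand-in for the real processing
                    }
                })
                .name("B")
                .setParallelism(60)
                .slotSharingGroup("groupB")
                .print()
                .setParallelism(1);

        env.execute("slot-sharing-group sketch");
    }
}

Since every TM has exactly 1 slot, putting A and B in different groups means their subtasks can never share a container, so the A=>B edge always goes over the network.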
(2) Do taskmanager.network.netty.num-arenas, taskmanager.network.netty.server.numThreads, and taskmanager.network.netty.client.numThreads have any effect here? Since each TM has only 1 slot, all three parameters default to the slot count, i.e. 1 (example config below).
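For concreteness, these would go into flink-conf.yaml along these lines; the value 2 is only a strawman to test whether raising them matters, not a recommendation:

taskmanager.network.netty.num-arenas: 2
taskmanager.network.netty.server.numThreads: 2
taskmanager.network.netty.client.numThreads: 2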


One more question: does higher parallelism slow down the job startup process? Today I started a job at 120 parallelism, and the tasks went from DEPLOYING to INITIALIZING to RUNNING only bit by bit; it took about 1-2 minutes for all of them to reach RUNNING.
