The final solution was to add the following to the kubernetes-session.sh startup parameters: -Dakka.ask.timeout=100s -Dweb.timeout=1000000, or alternatively to edit flink-conf.yaml in the ConfigMap directly.
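For reference, the two options could look roughly like this (the cluster-id value is a placeholder I added, not from the thread; the config keys are the ones named above):

```shell
# Option 1: raise the timeouts on the kubernetes-session.sh command line
# (kubernetes.cluster-id=my-session is a placeholder value).
./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-session \
  -Dakka.ask.timeout=100s \
  -Dweb.timeout=1000000

# Option 2: set the same keys in flink-conf.yaml inside the ConfigMap, e.g.:
#   akka.ask.timeout: 100s
#   web.timeout: 1000000
```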
After repeated testing, the root cause of this problem appears to be that with this many slots the initial 10000 ms timeout is not enough. On 1.13 the exception still occurred after raising the parameters, while 1.14 showed no problem. Thanks again to huweihua.

On Apr 29 2022, at 9:49 am, Pan Junxun <i2013...@163.com> wrote:
> Thanks for the pointer; after switching to 1.14.4 the problem is solved.
>
> On Apr 28 2022, at 9:03 pm, huweihua <huweihua....@gmail.com> wrote:
> > After the SlotManager requests slots from a TaskExecutor for a job, the TaskExecutor offers those slots to the JobMaster. When the TaskExecutor receives the request from the SlotManager, it registers a timer; if the slots have still not been offered to the JobMaster by the time the timer fires, this problem is triggered.
> >
> > The slot timeout is configured via taskmanager.slot.timeout; if it is not set explicitly, the value of akka.ask.timeout (default 10s) is used.
> > You can try increasing taskmanager.slot.timeout to avoid this problem. If the problem persists, further analysis of the JobManager/TaskManager logs is needed.
> >
> > > On Apr 28 2022, at 8:04 pm, Pan Junxun <i2013...@163.com> wrote:
> > >
> > > Thanks for the reply!
> > >
> > > The log is as follows:
> > >
> > > 2022-04-28 19:58:20
> > > org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
> > >     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
> > >     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
> > >     at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
> > >     at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
> > >     at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
> > >     at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
> > >     at org.apache.flink.runtime.scheduler.UpdateSchedulerNgOnInternalFailuresListener.notifyTaskFailure(UpdateSchedulerNgOnInternalFailuresListener.java:51)
> > >     at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.notifySchedulerNgAboutInternalTaskFailure(DefaultExecutionGraph.java:1462)
> > >     at org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1140)
> > >     at org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1080)
> > >     at org.apache.flink.runtime.executiongraph.Execution.fail(Execution.java:783)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.SingleLogicalSlot.signalPayloadRelease(SingleLogicalSlot.java:195)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.SingleLogicalSlot.release(SingleLogicalSlot.java:182)
> > >     at org.apache.flink.runtime.scheduler.SharedSlot.lambda$release$4(SharedSlot.java:271)
> > >     at java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:670)
> > >     at java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:683)
> > >     at java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2010)
> > >     at org.apache.flink.runtime.scheduler.SharedSlot.release(SharedSlot.java:271)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.AllocatedSlot.releasePayload(AllocatedSlot.java:152)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.DefaultDeclarativeSlotPool.releasePayload(DefaultDeclarativeSlotPool.java:385)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.DefaultDeclarativeSlotPool.releaseSlots(DefaultDeclarativeSlotPool.java:361)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolService.internalReleaseTaskManager(DeclarativeSlotPoolService.java:249)
> > >     at org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolService.releaseTaskManager(DeclarativeSlotPoolService.java:230)
> > >     at org.apache.flink.runtime.jobmaster.JobMaster.disconnectTaskManager(JobMaster.java:497)
> > >     at sun.reflect.GeneratedMethodAccessor591.invoke(Unknown Source)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:498)
> > >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
> > >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
> > >     at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> > >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> > >     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> > >     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> > >     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> > >     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> > >     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> > >     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> > >     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> > >     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> > >     at akka.actor.Actor.aroundReceive(Actor.scala:517)
> > >     at akka.actor.Actor.aroundReceive$(Actor.scala:515)
> > >     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> > >     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> > >     at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> > >     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> > >     at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> > >     at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> > >     at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> > >     at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> > >     at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> > >     at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > > Caused by: org.apache.flink.util.FlinkException: TaskExecutor akka.tcp://flink@10.100.1.127:6122/user/rpc/taskmanager_0 has no more allocated slots for job 4e6bdfa51b6a7fd61d595db0177bc6e8.
> > >     at org.apache.flink.runtime.taskexecutor.TaskExecutor.closeJobManagerConnectionIfNoAllocatedResources(TaskExecutor.java:1941)
> > >     at org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1922)
> > >     at org.apache.flink.runtime.taskexecutor.TaskExecutor.timeoutSlot(TaskExecutor.java:1955)
> > >     at org.apache.flink.runtime.taskexecutor.TaskExecutor.access$3000(TaskExecutor.java:181)
> > >     at org.apache.flink.runtime.taskexecutor.TaskExecutor$SlotActionsImpl.lambda$timeoutSlot$1(TaskExecutor.java:2313)
> > >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> > >     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> > >     ... 21 more
> > >
> > > The job reads data from Kafka and writes it to ES, with some simple operations in between; the job graph has only two subtasks, and the parallelism is set in the frontend at submission time. After startup, the TaskManager page shows 6 slots in total with one free, but the job quickly fails and all slots are released.
> > >
> > > Best,
> > > pan junxun
> > > On Apr 28 2022, at 7:54 pm, huweihua <huweihua....@gmail.com> wrote:
> > >> Hi, Junxun
> > >>
> > >> By your description the first half is as expected: a job with parallelism 5 needs 2 TaskManagers with 3 slots each.
> > >> I don't see the concrete error log here; could you provide the complete log, as well as the Flink version being used?
> > >>
> > >>> On Apr 28 2022, at 5:28 pm, Pan Junxun <i2013...@163.com> wrote:
> > >>>
> > >>> Hello,
> > >>>
> > >>> I have recently been trying to deploy a Flink cluster in native Kubernetes mode. Following the official docs I deployed a cluster in session mode and submitted a test job with parallelism 5, setting -Dtaskmanager.numberOfTaskSlots=3. After submission the frontend showed two TaskManagers with 3 slots each, one of them showing 1 slot free. But the job failed to start, and I got:
> > >>>
> > >>> has no more allocated slots for job