Take a look at the client log; it is usually under the Flink logs directory.
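For example, pulling up the most recent client log could look like the sketch below (the FLINK_HOME path and the flink-*-client-*.log file name pattern are assumptions based on a default Flink distribution; adjust them to your installation):

    # Assumed default layout of a Flink distribution; paths may differ in your setup
    cd $FLINK_HOME/log
    # The client usually writes one log file per user/host, e.g. flink-<user>-client-<host>.log
    ls -lt flink-*-client-*.log
    # Inspect the newest client log for the full stack trace around the failed submission
    tail -n 200 "$(ls -t flink-*-client-*.log | head -n 1)"

The client-side stack trace there is typically more complete than what the CLI prints and should show what happened before the FlinkJobNotFoundException.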
On 2021-11-12 20:59:59, "sky" <[email protected]> wrote:
>I am using flink on yarn. When I ran the command: flink run -m yarn-cluster
>./examples/batch/WordCount.jar it failed with the following error:
>------------------------------------------------------------
> The program finished with the following exception:
>
>org.apache.flink.client.program.ProgramInvocationException: The main method
>caused an error: org.apache.flink.runtime.rest.util.RestClientException:
>[org.apache.flink.runtime.rest.handler.RestHandlerException:
>org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find
>Flink job (397a081a0313f462818575fc725b3582)
> at
>org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.propagateException(JobExecutionResultHandler.java:94)
> at
>org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.lambda$handleRequest$1(JobExecutionResultHandler.java:84)
> at
>java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>
> ...
>Could you tell me what the cause might be? My configuration file looks like this:
>#===============================================================================
>high-availability: zookeeper
>high-availability.storageDir: hdfs://mycluster/flink/ha/
>high-availability.zookeeper.quorum:
>hadoop201:2181,hadoop202:2181,hadoop203:2181
>high-availability.zookeeper.path.root: /flink
>high-availability.cluster-id: /default_one # important: customize per cluster
># set the checkpoint state backend
>state.backend: filesystem
>state.checkpoints.dir: hdfs://mycluster/flink/checkpoints
># set the default savepoint location
>state.savepoints.dir: hdfs://mycluster/flink/savepoints
># the cluster name must not be misspelled
>jobmanager.archive.fs.dir: hdfs://mycluster/flink/completed-jobs/
>historyserver.archive.fs.dir: hdfs://mycluster/flink/completed-jobs/
>#===============================================================================
>
>Thanks!
