cc @Rui Li <[email protected]>

李佳宸 <[email protected]> wrote on Mon, Sep 14, 2020 at 5:11 PM:
> Hi all, when I run a batch Table job that writes to Hive, I get a
> FileNotFoundException: the .staging file cannot be found.
> The version is 1.11.1.
>
> Caused by: java.io.FileNotFoundException: File hdfs://gykjcluster/user/hive/warehouse/etl_et_flink_sink.db/ods_et_es_financialestimate/.staging_1600070419144 does not exist.
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1053) ~[hadoop-client-api-3.1.3.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem.access$1000(DistributedFileSystem.java:131) ~[hadoop-client-api-3.1.3.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1113) ~[hadoop-client-api-3.1.3.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1110) ~[hadoop-client-api-3.1.3.jar:?]
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-client-api-3.1.3.jar:?]
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1120) ~[hadoop-client-api-3.1.3.jar:?]
>         at org.apache.flink.hive.shaded.fs.hdfs.HadoopFileSystem.listStatus(HadoopFileSystem.java:157) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.0.jar:1.11.0]
>         at org.apache.flink.table.filesystem.PartitionTempFileManager.headCheckpoints(PartitionTempFileManager.java:140) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>         at org.apache.flink.table.filesystem.FileSystemCommitter.commitUpToCheckpoint(FileSystemCommitter.java:98) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>         at org.apache.flink.table.filesystem.FileSystemOutputFormat.finalizeGlobal(FileSystemOutputFormat.java:95) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>         at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.finalizeOnMaster(InputOutputFormatVertex.java:132) ~[flink-dist_2.11-1.11.1.jar:1.11.1]
>         at org.apache.flink.runtime.executiongraph.ExecutionGraph.vertexFinished(ExecutionGraph.java:1286) ~[flink-dist_2.11-1.11.1.jar:1.11.1]
>
> This problem does not occur in standalone mode; in per-job mode on YARN, some jobs run into it.
>
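For context, the failing jobs are of the shape the stack trace implies: a batch INSERT into a Hive table registered through the HiveCatalog, which Flink writes via a `.staging_*` temp directory that is moved into place at commit time. A minimal sketch of such a job follows; the sink database and table names come from the path in the stack trace, while the source table and columns are hypothetical placeholders:

```sql
-- Sketch of the reported scenario (source table and columns are hypothetical).
-- Assumes a HiveCatalog named `hive` is registered and the job runs in batch mode.
USE CATALOG hive;

INSERT INTO etl_et_flink_sink.ods_et_es_financialestimate
SELECT id, estimate_value
FROM some_source_table;
```

At job finalization, `FileSystemCommitter.commitUpToCheckpoint` lists the `.staging_*` directory to find files to publish; the exception above is thrown when that directory is already gone.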
