Re: Hadoop dependency configuration issue

2024-04-01 Post by Biao Geng
Hi fengqi,

The error “Hadoop is not in the classpath/dependencies.” means that the classes HDFS needs, such as org.apache.hadoop.conf.Configuration and org.apache.hadoop.fs.FileSystem, cannot be found on the classpath.
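
For example, a quick way to confirm that a local Hadoop installation actually ships these classes is to expand the output of hadoop classpath and look for the hadoop-common and hadoop-hdfs jars. A minimal sketch, assuming the hadoop CLI is on your PATH:

# list the jars that provide org.apache.hadoop.conf.Configuration
# and org.apache.hadoop.fs.FileSystem
hadoop classpath --glob | tr ':' '\n' | grep -E 'hadoop-(common|hdfs)'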

If Hadoop is installed in your system environment, the classpath is usually set like this:
export HADOOP_CLASSPATH=`hadoop classpath`
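
On a standalone cluster the variable has to be exported before the JobManager/TaskManager daemons are started, otherwise they will not pick it up. A minimal sketch of the full flow, run from the Flink installation directory (the jar path is a placeholder for your own job):

export HADOOP_CLASSPATH=`hadoop classpath`
# restart the cluster so the daemons see the new classpath
./bin/stop-cluster.sh
./bin/start-cluster.sh
# resubmit the job
./bin/flink run /path/to/your-job.jar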

If you are submitting to a local standalone Flink cluster, you can check the log files that Flink generates; the classpath is printed there, so you can see whether the Hadoop-related classes are on it.
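
For example, something like the following shows whether any Hadoop jars made it onto the JobManager's classpath (the log file name pattern is an assumption based on a default standalone setup; adjust it to your installation):

# Flink logs its effective classpath when the JobManager starts
grep -i "classpath" log/flink-*-standalonesession-*.log | tr ':' '\n' | grep -i hadoop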

Best,
Biao Geng


ha.fen...@aisino.com wrote on Tue, Apr 2, 2024 at 10:24:

> 1. In the development environment, with the hadoop-client dependency added, the HDFS path can be reached when checkpointing.
> 2. Flink 1.19.0, Hadoop 3.3.1; when the jar is submitted to a standalone Flink cluster, the following error is reported:
> Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
> at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
> at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
> at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint storage at checkpoint coordinator side.
> at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
> ... 3 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint storage at checkpoint coordinator side.
> at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:364)
> at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:273)
> at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:503)
> at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:334)
> at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:173)
> at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:381)
> at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:224)
> at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:140)
> at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:162)
> at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
> at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:379)
> at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:356)
> at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
> at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
> at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
> at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> ... 3 more
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
> at
> 

Re: Unsubscribe

2024-04-01 Post by Biao Geng
Hi,

To unsubscribe, send an email with any content to
user-zh-unsubscr...@flink.apache.org.


Best,
Biao Geng

CloudFunny wrote on Sun, Mar 31, 2024 at 22:25:

>
>


Re: Unsubscribe

2024-04-01 Post by Biao Geng
Hi,

To unsubscribe, send an email with any content to
user-zh-unsubscr...@flink.apache.org.


Best,
Biao Geng

戴少 wrote on Mon, Apr 1, 2024 at 11:09:

> Unsubscribe
>
> --
>
> Best Regards,
>
>
>
>
>  Original message 
> | From | wangfengyang |
> | Date | 2024-03-22 17:28 |
> | To | user-zh  |
> | Subject | Unsubscribe |
> Unsubscribe


Re: Unsubscribe

2024-04-01 Post by Biao Geng
Hi,

To unsubscribe, send an email with any content to
user-zh-unsubscr...@flink.apache.org.


Best,
Biao Geng


杨东树 wrote on Sun, Mar 31, 2024 at 20:23:

> Requesting to unsubscribe from mailing-list notifications, thank you!


Re: Requesting to unsubscribe, thank you

2024-04-01 Post by Biao Geng
Hi,

To unsubscribe, send an email with any content to
user-zh-unsubscr...@flink.apache.org.


Best,
Biao Geng

 wrote on Sun, Mar 31, 2024 at 22:20:

> Requesting to unsubscribe, thank you


Unsubscribe

2024-04-01 Post by 薛礼彬
Unsubscribe