Re: [Help] Flink Hadoop dependency issue

2020-07-16, Yang Wang
You can check inside the Pod whether the /data directory is mounted correctly. You should also run ps
inside the Pod to see which classpath the launched JVM process was started with, and whether it includes the Hadoop jars.
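For example, checks along these lines (the container name here is a guess at the usual compose default; docker ps will show yours):

# Verify the Hadoop distribution is actually visible inside the container:
docker exec -it flink_jobmanager_1 ls /data/hadoop-2.9.2

# Inspect the command line (and thus the classpath) of the running JVM;
# if ps is not in the image, tr '\0' '\n' < /proc/1/cmdline is an alternative:
docker exec -it flink_jobmanager_1 sh -c "ps aux | grep '[j]ava'"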


Of course, adding flink-shaded-hadoop and putting it under $FLINK_HOME/lib, as Roc Marshal suggested, also solves the problem.
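Roughly like this; 2.8.3-10.0 is only an example version here, so pick the uber build closest to your Hadoop release:

# Download a shaded Hadoop uber jar from Maven Central:
wget https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar

# Then add it to both jobmanager and taskmanager, e.g. with an extra compose volume
# ($FLINK_HOME is /opt/flink in the official images):
#   - ./flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar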

Best,
Yang



Re: [Help] Flink Hadoop dependency issue

2020-07-15, Roc Marshal



Hi Z-Z,

You can try downloading the matching uber jar from
https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/,
placing the downloaded jar file under ${FLINK_HOME}/lib in the Flink image, and then starting the orchestrated containers.
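A minimal sketch of that, assuming the (example) 2.8.3-10.0 uber jar from the link above has already been downloaded into the build context:

# Bake the shaded Hadoop jar into a derived image:
cat > Dockerfile <<'EOF'
FROM flink:1.11.0-scala_2.12
# ${FLINK_HOME} is /opt/flink in the official images
COPY flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /opt/flink/lib/
EOF
docker build -t flink-with-shaded-hadoop:1.11.0 .

Afterwards, point the image: entries of both jobmanager and taskmanager at the new tag.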
Best regards.
Roc Marshal.

On 2020-07-15 10:47:39, "Z-Z" wrote:
>I am running Flink 1.11.0, set up with docker-compose; the docker-compose file is as follows:
>version: "2.1"
>services:
> jobmanager:
>  image: flink:1.11.0-scala_2.12
>  expose:
>   - "6123"
>  ports:
>   - "8081:8081"
>  command: jobmanager
>  environment:
>   - JOB_MANAGER_RPC_ADDRESS=jobmanager
>   - HADOOP_CLASSPATH=/data/hadoop-2.9.2/etc/hadoop:/data/hadoop-2.9.2/share/hadoop/common/lib/*:/data/hadoop-2.9.2/share/hadoop/common/*:/data/hadoop-2.9.2/share/hadoop/hdfs:/data/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/data/hadoop-2.9.2/share/hadoop/hdfs/*:/data/hadoop-2.9.2/share/hadoop/yarn:/data/hadoop-2.9.2/share/hadoop/yarn/lib/*:/data/hadoop-2.9.2/share/hadoop/yarn/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
>  volumes:
>   - ./jobmanager/conf:/opt/flink/conf
>   - ./data:/data
>
>
> taskmanager:
>  image: flink:1.11.0-scala_2.12
>  expose:
>   - "6121"
>   - "6122"
>  depends_on:
>   - jobmanager
>  command: taskmanager
>  links:
>   - "jobmanager:jobmanager"
>  environment:
>   - JOB_MANAGER_RPC_ADDRESS=jobmanager
>  volumes:
>   - ./taskmanager/conf:/opt/flink/conf
>networks:
> default:
>  external:
>   name: flink-network
>
>
>
>hadoop-2.9.2 is already in the data directory, and HADOOP_CLASSPATH has been added to the
> jobmanager's and taskmanager's environment variables, but whether the job is submitted via the
> CLI or the web UI, the jobmanager still reports "Could not find a file system implementation
> for scheme 'hdfs'". Does anyone know what is going on?