Yes, it does print output.

On 2020-06-16 15:24:29, "王松" <[email protected]> wrote:
>Then run `hadoop classpath` on the command line: does it print the Hadoop classpath?
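>
>For a quick sanity check, in the same shell session that submits the job (a minimal sketch; nothing installation-specific assumed):
>
>hadoop classpath                         # should print a long list of jar and conf paths
>export HADOOP_CLASSPATH=`hadoop classpath`
>echo $HADOOP_CLASSPATH                   # confirm the variable is actually set in this session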
>
>Zhou Zach <[email protected]> wrote on Tue, Jun 16, 2020 at 3:22 PM:
>
>>
>> In /etc/profile I have currently only added:
>> export HADOOP_CLASSPATH=`hadoop classpath`
>> I installed CDH, and I can't find the sbin directory...
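>>
>> Would the parcel layout be something like this? (The path below is my guess for a parcel-based CDH install, not verified:)
>>
>> export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
>> export PATH=$HADOOP_HOME/bin:$PATH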
>>
>> On 2020-06-16 15:05:12, "王松" <[email protected]> wrote:
>> >Did you set the HADOOP_HOME and HADOOP_CLASSPATH environment variables? For example:
>> >
>> >export HADOOP_HOME=/usr/local/hadoop-2.7.2
>> >export HADOOP_CLASSPATH=`hadoop classpath`
>> >export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
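>> >
>> >After editing /etc/profile, reload it in the shell that submits the job and check that both variables are visible there (a minimal check, assuming a bash login shell):
>> >
>> >source /etc/profile
>> >echo $HADOOP_HOME
>> >echo $HADOOP_CLASSPATH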
>> >
>> >Zhou Zach <[email protected]> wrote on Tue, Jun 16, 2020 at 2:53 PM:
>> >
>> >> Jars under flink/lib/:
>> >> flink-connector-hive_2.11-1.10.0.jar
>> >> flink-dist_2.11-1.10.0.jar
>> >> flink-jdbc_2.11-1.10.0.jar
>> >> flink-json-1.10.0.jar
>> >> flink-shaded-hadoop-2-3.0.0-cdh6.3.0-7.0.jar
>> >> flink-sql-connector-kafka_2.11-1.10.0.jar
>> >> flink-table_2.11-1.10.0.jar
>> >> flink-table-blink_2.11-1.10.0.jar
>> >> hbase-client-2.1.0.jar
>> >> hbase-common-2.1.0.jar
>> >> hive-exec-2.1.1.jar
>> >> mysql-connector-java-5.1.49.jar
>> >>
>> >> On 2020-06-16 14:48:43, "Zhou Zach" <[email protected]> wrote:
>> >> >
>> >> >high-availability.storageDir: hdfs:///flink/ha/
>> >> >high-availability.zookeeper.quorum: cdh1:2181,cdh2:2181,cdh3:2181
>> >> >state.backend: filesystem
>> >> >state.checkpoints.dir: hdfs://nameservice1:8020//user/flink10/checkpoints
>> >> >state.savepoints.dir: hdfs://nameservice1:8020//user/flink10/savepoints
>> >> >high-availability.zookeeper.path.root: /flink
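>> >> >
>> >> >(Side note: I believe these URIs are normally written with a single slash after the port; whether the extra slash matters here is something I haven't verified:)
>> >> >state.checkpoints.dir: hdfs://nameservice1:8020/user/flink10/checkpoints
>> >> >state.savepoints.dir: hdfs://nameservice1:8020/user/flink10/savepoints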
>> >> >
>> >> >On 2020-06-16 14:44:02, "王松" <[email protected]> wrote:
>> >> >>Could you paste the HA settings from your config file?
>> >> >>
>> >> >>Zhou Zach <[email protected]> wrote on Tue, Jun 16, 2020 at 1:49 PM:
>> >> >>
>> >> >>> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint YarnJobClusterEntrypoint.
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
>> >> >>>     at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:119)
>> >> >>> Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs:/flink/ha/application_1592215995564_0027)
>> >> >>>     at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:103)
>> >> >>>     at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
>> >> >>>     at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
>> >> >>>     at java.security.AccessController.doPrivileged(Native Method)
>> >> >>>     at javax.security.auth.Subject.doAs(Subject.java:422)
>> >> >>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>> >> >>>     at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>> >> >>>     at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
>> >> >>>     ... 2 more
>> >> >>> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
>> >> >>>     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:450)
>> >> >>>     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
>> >> >>>     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
>> >> >>>     at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
>> >> >>>     ... 13 more
>> >> >>> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Cannot support file system for 'hdfs' via Hadoop, because Hadoop is not in the classpath, or some classes are missing from the classpath.
>> >> >>>     at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:184)
>> >> >>>     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:446)
>> >> >>>     ... 16 more
>> >> >>> Caused by: java.lang.VerifyError: Bad return type
>> >> >>> Exception Details:
>> >> >>>   Location:
>> >> >>>     org/apache/hadoop/hdfs/DFSClient.getQuotaUsage(Ljava/lang/String;)Lorg/apache/hadoop/fs/QuotaUsage; @160: areturn
>> >> >>>   Reason:
>> >> >>>     Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[0]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage' (from method signature)
>> >> >>>   Current Frame:
>> >> >>>     bci: @160
>> >> >>>     flags: { }
>> >> >>>     locals: { 'org/apache/hadoop/hdfs/DFSClient', 'java/lang/String', 'org/apache/hadoop/ipc/RemoteException', 'java/io/IOException' }
>> >> >>>     stack: { 'org/apache/hadoop/fs/ContentSummary' }
>> >> >>>
>> >> >>> It runs fine locally in IntelliJ IDEA. The Flink job consumes from Kafka and sinks to MySQL and HBase. In the cluster's flink lib directory,
