ngk2009 commented on issue #4297:
URL: https://github.com/apache/hudi/issues/4297#issuecomment-993308474
The `s3://` scheme cannot be used right now, but `s3a://` works. In addition, with `flink-s3-fs-hadoop` under Flink, the job always stays in the INITIALIZING state after it is submitted. In addition, the
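For reference, this is roughly the shape of the table definition that works with the `s3a://` scheme. It is only a sketch: the bucket, table name, and columns are placeholders, and it assumes the Hudi Flink bundle plus the S3A jars are already on the classpath.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class HudiOnS3a {
    public static void main(String[] args) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
        TableEnvironment tableEnv = TableEnvironment.create(settings);

        // A Hudi table whose 'path' uses the s3a:// scheme; the s3:// scheme did not work in this setup.
        tableEnv.executeSql(
            "CREATE TABLE hudi_t1 ("
                + " uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,"
                + " name VARCHAR(10),"
                + " ts TIMESTAMP(3)"
                + ") WITH ("
                + " 'connector' = 'hudi',"
                + " 'path' = 's3a://my-bucket/warehouse/hudi_t1',"
                + " 'table.type' = 'MERGE_ON_READ'"
                + ")");

        // Simple write to exercise the S3A path.
        tableEnv.executeSql(
            "INSERT INTO hudi_t1 VALUES ('id1', 'Alice', TIMESTAMP '2021-12-01 00:00:01')");
    }
}
```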
ngk2009 commented on issue #4297:
URL: https://github.com/apache/hudi/issues/4297#issuecomment-993232636
> Flink uses its own plugin to support filesystems other than HDFS. Hudi adapts to different DFS implementations by extending the `FileSystem` interface directly.
ngk2009 commented on issue #4297:
URL: https://github.com/apache/hudi/issues/4297#issuecomment-993071565
> I have some tips
>
> 1. put `flink-s3-fs-hadoop` into `/opt/flink/lib`
>
> 2. add `hadoop-hdfs-client`, `hadoop-aws`, `hadoop-mapreduce-client-core` into
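As a quick way to check that those jars actually end up on the classpath, a standalone snippet along these lines can be used. This is only a sketch; the class name is made up.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3aOnClasspath {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The s3a scheme should resolve to S3AFileSystem, which is shipped in hadoop-aws.
        Class<? extends FileSystem> cls = FileSystem.getFileSystemClass("s3a", conf);
        System.out.println("s3a is served by " + cls.getName());
        // If hadoop-aws (and the matching AWS SDK) is missing, this typically fails with
        // an IOException like "No FileSystem for scheme: s3a".
    }
}
```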
ngk2009 commented on issue #4297:
URL: https://github.com/apache/hudi/issues/4297#issuecomment-993069876
> Flink uses its own plugin to support filesystems other than HDFS. Hudi adapts to different DFS implementations by extending the `FileSystem` interface directly.

How can this be solved?
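To make the quoted distinction concrete, these are the two code paths involved, written side by side. This is only an illustration with a placeholder bucket, not something from the Hudi docs; inside a Flink cluster the plugin side is set up for you, while run standalone it may report an unsupported scheme unless the S3 filesystem is on the classpath.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class TwoFileSystemStacks {
    public static void main(String[] args) throws Exception {
        URI uri = URI.create("s3a://my-bucket/warehouse/hudi_t1"); // placeholder bucket

        // 1) Flink's pluggable filesystem layer, backed by flink-s3-fs-hadoop when it is installed.
        org.apache.flink.core.fs.FileSystem flinkFs =
                org.apache.flink.core.fs.FileSystem.get(uri);

        // 2) The Hadoop FileSystem API, which Hudi talks to directly; this is why hadoop-aws and
        //    the fs.s3a.* settings must also be visible on the Hadoop side, not only to the plugin.
        org.apache.hadoop.fs.FileSystem hadoopFs =
                org.apache.hadoop.fs.FileSystem.get(uri, new Configuration());

        System.out.println(flinkFs.getClass().getName() + " / " + hadoopFs.getClass().getName());
    }
}
```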
ngk2009 commented on issue #4297:
URL: https://github.com/apache/hudi/issues/4297#issuecomment-992274434
> you need to set up the s3 config in the hadoop core-site file? do you package the s3 package in the bundle jar?

Thank you for your reply. I did not find anything in the guide about Flink
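For completeness, these are the kinds of entries the quoted reply seems to refer to. The keys are the standard `fs.s3a.*` Hadoop properties, the values are placeholders, and dumping them with `Configuration.writeXml` is just one way to see the shape the equivalent `core-site.xml` could take.

```java
import java.io.FileOutputStream;
import org.apache.hadoop.conf.Configuration;

public class WriteCoreSite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(false); // start empty, no default resources

        // Placeholder credentials/endpoint; in a real setup these come from your S3 account.
        conf.set("fs.s3a.access.key", "<ACCESS_KEY>");
        conf.set("fs.s3a.secret.key", "<SECRET_KEY>");
        conf.set("fs.s3a.endpoint", "s3.amazonaws.com");
        conf.set("fs.s3a.path.style.access", "false");

        // Write the properties as XML to show what the equivalent core-site.xml entries look like.
        try (FileOutputStream out = new FileOutputStream("core-site.xml")) {
            conf.writeXml(out);
        }
    }
}
```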
ngk2009 commented on issue #4297:
URL: https://github.com/apache/hudi/issues/4297#issuecomment-992271565
Thank you for your reply. I did not find anything in the guide about Flink setting up `core-site.xml`, and I do not plan to use a Hadoop environment; I just want to build data lake analysis on top of S3 storage.