[ https://issues.apache.org/jira/browse/FLINK-34651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17845177#comment-17845177 ]

Zhenqiu Huang commented on FLINK-34651:
---------------------------------------

Have you tried putting the hadoop-aws library below into your Flink application's 
uber jar? Hadoop's Path resolution should then be able to find its own S3 file 
system implementation.

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.7.3
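
For reference, a minimal sketch of the lookup that needs to succeed once hadoop-aws (and its AWS SDK dependency) is bundled; the s3a scheme and the explicit fs.s3a.impl setting here are only illustrative and may differ from whatever scheme the reporter's table location actually uses:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3SchemeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Map the scheme to the implementation class shipped in hadoop-aws.
        // (Illustrative: the report uses the plain "s3" scheme, so adjust the
        // key and class to match the table location.)
        conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");

        // This is the same lookup that throws "No FileSystem for scheme: s3"
        // in the stack trace below when the jar is not on the classpath.
        Class<? extends FileSystem> clazz = FileSystem.getFileSystemClass("s3a", conf);
        System.out.println("s3a resolves to " + clazz.getName());
    }
}
{code}

If that lookup throws, the implementation class is simply not visible to the job, and bundling hadoop-aws in the uber jar (or dropping it into Flink's lib/ directory) is the usual way to make it visible.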

> The HiveTableSink of Flink does not support writing to S3
> ---------------------------------------------------------
>
>                 Key: FLINK-34651
>                 URL: https://issues.apache.org/jira/browse/FLINK-34651
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Hive
>    Affects Versions: 1.13.6, 1.14.6, 1.15.4, 1.16.3, 1.17.2, 1.18.1
>            Reporter: shizhengchao
>            Priority: Blocker
>
> My Hive table is located on S3. When I try to write to Hive using Flink 
> Streaming SQL, I find that it does not support writing to S3. Furthermore, 
> this issue has not been fixed in the latest version. The error I got is as 
> follows:
> {code:java}
> java.io.IOException: No FileSystem for scheme: s3
>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2586)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>     at org.apache.flink.connectors.hive.HadoopFileSystemFactory.create(HadoopFileSystemFactory.java:44)
>     at org.apache.flink.table.filesystem.stream.StreamingSink.lambda$compactionWriter$8dbc1825$1(StreamingSink.java:95)
>     at org.apache.flink.table.filesystem.stream.compact.CompactCoordinator.initializeState(CompactCoordinator.java:102)
>     at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:118)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:441)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:585)
>     at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:565)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:650)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:540)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
>     at java.lang.Thread.run(Thread.java:750)
>  {code}



