openinx commented on issue #3079:
URL: https://github.com/apache/iceberg/issues/3079#issuecomment-915044545


   @shengkui, I don't have an AWS S3 environment at hand, but I have configured 
this Flink connector correctly against Alibaba's public object storage before 
(just using the open Hadoop distribution with the aliyun-oss HDFS implementation). 
The first thing you need to do is configure Hadoop correctly by setting the 
relevant key-values in core-site.xml, and verify it with the `hadoop fs` 
command. Then make sure your Flink cluster and Hive metastore are using the 
Hadoop classpath you configured above. In theory, you can then submit the 
Flink job successfully.
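   The steps above can be sketched as shell commands (the bucket name and path are placeholders, not from the original comment):
   
   ```shell
   # 1. After editing core-site.xml, verify the object store is reachable
   #    through Hadoop ("my-bucket" is a placeholder):
   hadoop fs -ls s3a://my-bucket/warehouse/
   
   # 2. Make sure Flink and the Hive metastore pick up the same Hadoop
   #    configuration and jars before starting them:
   export HADOOP_CLASSPATH=$(hadoop classpath)
   
   # 3. Then start the Flink cluster / SQL client and submit the job as usual.
   ```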
   
   You don't need to set any S3 properties in the Flink table 
properties. There's a [document](https://developer.aliyun.com/article/783957) 
(in Chinese) describing how to write data into Aliyun OSS. You may need to 
replace all of the OSS configurations with their S3 equivalents, following that doc.
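   As a sketch, the `fs.oss.*` keys from that doc map to S3A keys in core-site.xml roughly as follows (the endpoint and credential values are placeholders; exact keys can vary with your Hadoop version):
   
   ```xml
   <!-- core-site.xml: S3A equivalents of the fs.oss.* settings in the doc -->
   <configuration>
     <property>
       <name>fs.s3a.endpoint</name>
       <value>s3.us-east-1.amazonaws.com</value> <!-- placeholder endpoint -->
     </property>
     <property>
       <name>fs.s3a.access.key</name>
       <value>YOUR_ACCESS_KEY</value> <!-- placeholder credential -->
     </property>
     <property>
       <name>fs.s3a.secret.key</name>
       <value>YOUR_SECRET_KEY</value> <!-- placeholder credential -->
     </property>
   </configuration>
   ```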


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


