Hi, Yan.
We have met this problem too when using aliyun-pangu and have commented on 
FLINK-8801, but there has been no response yet. 
I think most file systems, including s3/s3a/s3n/azure/aliyun-oss etc., can 
encounter this problem, since they don't implement FileSystem#setTimes but the 
PR in FLINK-8801 assumes they do.
We have made a similar workaround for this problem.
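For anyone hitting this, here is a minimal, dependency-free sketch of why YARN's download check fails on object stores and what the workaround does. The class, the timestamps, and the check itself are illustrative (modeled on Hadoop's FSDownload verification), not Flink's or Hadoop's actual code:

```java
import java.io.IOException;

public class TimestampCheck {

    // Mimics the verification YARN performs when localizing a resource:
    // the timestamp recorded in the LocalResource must match the remote
    // file's reported modification time exactly.
    static void verify(long expectedTimestamp, long actualModificationTime)
            throws IOException {
        if (actualModificationTime != expectedTimestamp) {
            throw new IOException("Resource changed on src filesystem (expected "
                    + expectedTimestamp + ", was " + actualModificationTime + ")");
        }
    }

    public static void main(String[] args) throws IOException {
        long localTimestamp = 1554000000000L; // mtime of the local file we uploaded (illustrative)
        long s3ReportedTime = 1554000123000L; // S3 assigns its own mtime; setTimes() has no effect

        // Broken path: registering the local file's timestamp fails the check,
        // because the object store reports a different modification time.
        try {
            verify(localTimestamp, s3ReportedTime);
        } catch (IOException e) {
            System.out.println("FAILED: " + e.getMessage());
        }

        // Workaround: read the timestamp back from the remote file system
        // (e.g. fs.getFileStatus(path).getModificationTime()) and register
        // that value instead of the local one.
        long registered = s3ReportedTime;
        verify(registered, s3ReportedTime); // passes
        System.out.println("OK: registered timestamp matches remote mtime");
    }
}
```

The point is the direction of the fix: instead of pushing the local timestamp onto the remote store (which S3-like systems reject or ignore), pull the store's own timestamp back and use it on the local side.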

Comment link: 
https://issues.apache.org/jira/browse/FLINK-8801?focusedCommentId=16807691&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16807691

Best, 
Tao Yang

> On Apr 5, 2019, at 5:22 AM, Yan Yan <yanyan300...@gmail.com> wrote:
> 
> Hi, 
> 
> I am running into issues when trying to move from HDFS to S3 using Flink 1.6. 
> 
> I am getting an exception from Hadoop code: 
> throw new IOException("Resource " + sCopy +
>     " changed on src filesystem (expected " + resource.getTimestamp() +
>     ", was " + sStat.getModificationTime());
> 
> Digging into this, I found there was one commit 
> <https://github.com/apache/flink/commit/c90a757b29f168144b1bae99df532911ae682e63>
>  made by Nico trying to fix this issue in 2018. However, the fix did not work 
> for my case, as the fs.setTimes() method is not implemented in the 
> hadoop-aws S3AFileSystem I am using. And it seems S3 does not allow you to 
> override the last modified time for an object.
> 
> I was able to make a workaround the other way around: reading the timestamp 
> from S3 and overriding the local resource. Just wondering if anyone has seen 
> similar issues, or has actually made it work by using a different 
> implementation of S3AFileSystem? Thanks!
> 
> -- 
> Best,
> Yan
