Steve Loughran commented on HADOOP-14707:

It's not just about breaking things up into interfaces, it's about supporting 
per-instance and per-path semantics. e.g. in encryption zones and erasure-coded 
paths, things are different as you look around the FS. Look at HDFS-11644 for 
the history. You can't go around adding negative marker interfaces like 
"DoesntSupportHFlush" unless you want to deal with overriding that later with 
"SupportsHFlush2", and so on.
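
The alternative to negative marker interfaces is a string-keyed, per-instance probe, so each stream answers for itself and new capabilities need no new types. A minimal sketch of that pattern; the class names here are illustrative, not the actual Hadoop API:

```java
// Sketch of a string-keyed capability probe (the pattern behind
// Hadoop's StreamCapabilities). Class names are illustrative only.
interface CapabilityProbe {
    boolean hasCapability(String capability);
}

// A stream over a blob store: no hflush/hsync durability guarantees,
// so it answers false per instance -- no "DoesntSupportHFlush" type needed.
class BlobStoreStream implements CapabilityProbe {
    @Override
    public boolean hasCapability(String capability) {
        return false;
    }
}

// An HDFS-like stream: supports both calls today; a future capability
// is just a new string key, with no "SupportsHFlush2" interface churn.
class HdfsLikeStream implements CapabilityProbe {
    @Override
    public boolean hasCapability(String capability) {
        switch (capability.toLowerCase()) {
            case "hflush":
            case "hsync":
                return true;
            default:
                return false;
        }
    }
}

public class CapabilityDemo {
    public static void main(String[] args) {
        CapabilityProbe blob = new BlobStoreStream();
        CapabilityProbe hdfs = new HdfsLikeStream();
        System.out.println("blob hflush: " + blob.hasCapability("hflush"));
        System.out.println("hdfs hflush: " + hdfs.hasCapability("hflush"));
    }
}
```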

1. Strip out the capability probe from the rest of the changes.
2. Take the opportunity to allow FileContext to take lambda expressions in its 
map (i.e. have a base implementation of that link resolver which takes a 
functional interface, then switch to it for the new calls).
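
Point 2 could look roughly like this: a base resolve method that takes a functional interface, so new FileContext calls pass a lambda instead of subclassing an abstract resolver. This is a sketch under assumed names, not the real FileContext internals:

```java
// Sketch of a link resolver that accepts a lambda; names are hypothetical.
@FunctionalInterface
interface LinkResolveFn<T> {
    T apply(String fsScheme, String path);
}

public class Resolver {
    // Base implementation taking the functional interface. New FileContext
    // calls would delegate here rather than defining anonymous subclasses.
    // (Real code would loop, resolving symlinks and retrying on
    // UnresolvedLinkException; that is elided in this sketch.)
    static <T> T resolve(String fsScheme, String path, LinkResolveFn<T> fn) {
        return fn.apply(fsScheme, path);
    }

    public static void main(String[] args) {
        // A lambda in place of an anonymous resolver subclass.
        String uri = resolve("hdfs", "/user/alice",
            (fs, p) -> fs + "://" + p);
        System.out.println(uri);  // hdfs:///user/alice
    }
}
```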

> AbstractContractDistCpTest to test attr preservation with -p, verify 
> blobstores downgrade
> -----------------------------------------------------------------------------------------
>                 Key: HADOOP-14707
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14707
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs, fs/azure, fs/s3, test, tools/distcp
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-14707-001.patch, HADOOP-14707-002.patch, 
> HADOOP-14707-003.patch
> It *may* be that trying to use {{distcp -p}} with S3a triggers a stack trace 
> {code}
> java.lang.UnsupportedOperationException: S3AFileSystem doesn't support 
> getXAttrs 
> at org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559) 
> at 
> org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
> {code}
> Add a test to {{AbstractContractDistCpTest}} to verify that this is handled 
> better. What is "handle better" here? Either ignore the option or fail with 
> "don't do that" text

This message was sent by Atlassian JIRA
