[ https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495497#comment-17495497 ]
Steve Loughran commented on HADOOP-13294:
-----------------------------------------
Not by changing the test directly: you'd have to add a new contract option declaring
whether deleting the root directory raises an exception or returns false, then set it
in the s3a XML resource for the tests. That way, if other stores fail meaningfully,
they report it too.

FWIW, I suspect deleting file:/// and hdfs:/// fails from filesystem permissions
alone; not sure about other stores.
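As a rough sketch of the pattern (the option name and helper below are illustrative
assumptions, not the actual patch): the per-store contract XML resource, e.g.
contract/s3a.xml, would declare something like
fs.contract.root-directory-delete-raises-exception, and the contract test would branch
on that declaration instead of hard-coding one behaviour:

{code:java}
// Illustrative sketch only -- option name and class are invented for this comment,
// not part of the HADOOP-13294 patch. It shows the proposed idea: each store declares
// its root-delete behaviour in a contract option and the test checks against that.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootDeleteContractSketch {

  /** Hypothetical contract option; a store such as s3a would set it in its XML resource. */
  public static final String ROOT_DELETE_RAISES_EXCEPTION =
      "fs.contract.root-directory-delete-raises-exception";

  /** Verify root-directory delete against whatever behaviour the store declared. */
  public static void verifyRootDeleteBehaviour(FileSystem fs, Configuration conf)
      throws IOException {
    boolean expectException = conf.getBoolean(ROOT_DELETE_RAISES_EXCEPTION, false);
    try {
      boolean deleted = fs.delete(new Path("/"), true);
      if (expectException) {
        throw new AssertionError(
            "Expected root delete to raise an exception but it returned " + deleted);
      }
      if (deleted) {
        throw new AssertionError("Expected root delete to return false, got true");
      }
    } catch (IOException e) {
      if (!expectException) {
        // this store did not declare the exception behaviour, so rethrow as a failure
        throw e;
      }
      // exception is the declared behaviour for this store: nothing more to check
    }
  }
}
{code}

Stores that don't set the option keep the default "returns false" expectation, so only
filesystems that genuinely raise an exception need to opt in.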
> Test hadoop fs shell against s3a; fix problems
> ----------------------------------------------
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> There are no tests of {{hadoop fs}} commands against s3a; add some. Ideally,
> they should be generic to all object stores.