[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427857#comment-15427857
 ] 

Andras Bokor commented on HADOOP-7363:
--------------------------------------

Hi [~anu],

I was going to explain, but I realized my new patch needs some changes.

So the related test checks whether the FileSystem is S3 by scheme name.
If it is S3, we skip the test because S3 does not implement permissions.
With {{RawLocalFileSystem}} the {{getScheme}} call throws
{{UnsupportedOperationException}}, but that does not mean an error happened;
it is not a problem at all. In that block we only need to detect S3, so we do
not have to stop the test on {{UnsupportedOperationException}}.
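
Something along these lines, as a minimal sketch of the check I mean (the
helper name and the exact scheme strings are illustrative, this is not the
code from the patch):

{code:java}
import org.apache.hadoop.fs.FileSystem;

/**
 * Sketch only: decide whether the FileSystem under test is S3 by scheme name.
 * RawLocalFileSystem's getScheme() throws UnsupportedOperationException, which
 * simply means "not S3" for the purpose of this check, so the test keeps running.
 */
private static boolean isS3(FileSystem fs) {
  try {
    String scheme = fs.getScheme();
    return "s3".equals(scheme) || "s3n".equals(scheme) || "s3a".equals(scheme);
  } catch (UnsupportedOperationException e) {
    // An unimplemented getScheme() is not an error here; do not abort the test.
    return false;
  }
}
{code}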

So far the test cases have not been run with {{RawLocalFileSystem}}, which is
why they have not hit this exception before.

I am uploading patch 05.

Does it make sense?

> TestRawLocalFileSystemContract is needed
> ----------------------------------------
>
>                 Key: HADOOP-7363
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7363
>             Project: Hadoop Common
>          Issue Type: Test
>          Components: fs
>    Affects Versions: 3.0.0-alpha2
>            Reporter: Matt Foley
>            Assignee: Andras Bokor
>         Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 


