[ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13162:
--------------------------------------
    Attachment: HADOOP-13162-branch-2-004.patch

AWS endpoint:  ap-southeast-1

AWS results:
{noformat}
Results :

Tests in error:
  TestS3AContractDistCp>AbstractContractDistCpTest.largeFilesToRemote:96->AbstractContractDistCpTest.largeFiles:176 »
  TestS3ADeleteFilesOneByOne>TestS3ADeleteManyFiles.testBulkRenameAndDelete:103 »
  TestS3ADeleteManyFiles.testBulkRenameAndDelete:103 »  test timed out after 180...

Tests run: 228, Failures: 0, Errors: 3, Skipped: 7
{noformat}

Renaming d1/d2 to d1/d4 throws FileAlreadyExistsException (with or without the patch), 
although the same rename succeeds via hadoop command-line operations. Not sure whether 
it has anything to do with inconsistency. I have removed the "rename d1/d2 d1/d4" step 
from the test, since the exception surfaces earlier as well. 

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-13162
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13162
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Rajesh Balamohan
>            Priority: Minor
>         Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is a relatively expensive call, and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.
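The probe pattern described above can be sketched with a small in-memory mock (this is an illustration only, not S3AFileSystem's actual code; MockStore and its method names are hypothetical). A naive mkdirs probes every path component top-down on every call, while a reduced variant walks from the leaf upward and stops at the first ancestor that already exists, so pre-existing parents cost one probe instead of one per level:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Hypothetical in-memory stand-in for an object store; counts probes
// to show how many "getFileStatus" calls each mkdirs strategy issues.
class MockStore {
    final Set<String> dirs = new HashSet<>();
    int getFileStatusCalls = 0;

    MockStore() { dirs.add("/"); }

    boolean exists(String path) {          // stands in for getFileStatus
        getFileStatusCalls++;
        return dirs.contains(path);
    }

    // Naive mkdirs: one existence probe per path component, top-down,
    // even for ancestors created by an earlier call.
    void mkdirsNaive(String path) {
        String cur = "";
        for (String p : path.substring(1).split("/")) {
            cur = cur + "/" + p;
            if (!exists(cur)) {
                dirs.add(cur);
            }
        }
    }

    // Reduced probing: walk from the leaf upward and stop at the first
    // existing ancestor, then create the missing tail.
    void mkdirsReduced(String path) {
        Deque<String> toCreate = new ArrayDeque<>();
        String cur = path;
        while (!cur.equals("/") && !exists(cur)) {
            toCreate.push(cur);
            int i = cur.lastIndexOf('/');
            cur = (i == 0) ? "/" : cur.substring(0, i);
        }
        while (!toCreate.isEmpty()) {
            dirs.add(toCreate.pop());
        }
    }
}

public class MkdirsProbes {
    public static void main(String[] args) {
        MockStore a = new MockStore();
        a.mkdirsNaive("/a/b/c");
        a.mkdirsNaive("/a/b/c/d/e");      // re-probes /a, /a/b, /a/b/c
        System.out.println("naive probes:   " + a.getFileStatusCalls);

        MockStore b = new MockStore();
        b.mkdirsReduced("/a/b/c");
        b.mkdirsReduced("/a/b/c/d/e");    // stops at existing /a/b/c
        System.out.println("reduced probes: " + b.getFileStatusCalls);
    }
}
```

The gap widens with directory depth: each additional level of an already-created tree costs the naive strategy one extra probe per call, while the bottom-up walk still stops after a single hit on the existing ancestor.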



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
