steveloughran opened a new pull request #1944: HADOOP-16900. Very large files can be truncated when written through S3AFileSystem
URL: https://github.com/apache/hadoop/pull/1944
 
 
   
   Contributed by Steve Loughran.
   
   WriteOperationHelper now raises a PathIOException when the number of
   parts to write exceeds the multipart upload part-count limit.
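
   As a rough illustration of that guard, the sketch below fails a part
   upload once the next part number would pass the S3 maximum of 10,000
   parts per multipart upload. The class, field and method names here are
   illustrative, not the actual WriteOperationHelper code; only
   PathIOException is a real Hadoop type.

   ```java
   import org.apache.hadoop.fs.PathIOException;

   class PartCountGuard {
     // S3 rejects multipart uploads with more than 10,000 parts.
     private static final int MAX_PARTS = 10000;

     /**
      * Fail fast if the next part number would exceed the limit, rather
      * than letting the object be silently truncated at completion time.
      */
     static void checkPartLimit(String destKey, int nextPartNumber)
         throws PathIOException {
       if (nextPartNumber > MAX_PARTS) {
         throw new PathIOException(destKey,
             "Cannot upload part " + nextPartNumber
                 + ": exceeds the maximum of " + MAX_PARTS
                 + " parts for a multipart upload");
       }
     }
   }
   ```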
   
   The exception is cached during block uploads in S3ABlockOutputStream, so
   that once one has been raised there is no attempt to upload further blocks,
   or to complete() the upload as if the write had succeeded.
   
   S3ABlockOutputStream also caches any failure raised during a block write,
   so the write fails earlier (previously the failure would only surface in
   close()), and abort()s the multipart upload if anything went wrong.
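
   The sketch below shows the general failure-caching pattern being
   described: record the first block-upload failure, rethrow it before
   queuing more work, and abort rather than complete on close(). All class
   and method names here are illustrative, not the real
   S3ABlockOutputStream internals.

   ```java
   import java.io.IOException;
   import java.util.concurrent.atomic.AtomicReference;

   class FailureCachingUpload {
     // First failure seen during any block upload; null while healthy.
     private final AtomicReference<IOException> firstFailure =
         new AtomicReference<>();

     /** Record the first failure; later blocks check this before uploading. */
     void noteFailure(IOException e) {
       firstFailure.compareAndSet(null, e);
     }

     /** Called before queuing another block: rethrow any cached failure. */
     void maybeRethrow() throws IOException {
       IOException e = firstFailure.get();
       if (e != null) {
         throw e;
       }
     }

     /** On close(): abort rather than complete if anything went wrong. */
     void close() throws IOException {
       IOException e = firstFailure.get();
       if (e != null) {
         abortUpload();   // discard pending parts instead of completing
         throw e;
       }
       completeUpload();  // only complete when every block uploaded cleanly
     }

     private void abortUpload() { /* abort the multipart upload (illustrative) */ }
     private void completeUpload() { /* complete the multipart upload (illustrative) */ }
   }
   ```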
   
   This is intended to ensure that
   * if too many parts are uploaded, the operation fails
   * if anything causes a block upload to fail, the entire write is considered
   a failure
   * after a failure, the upload is never completed
   * abort() is always called after a failure to remove any pending data
   
   Change-Id: I7ee6cd1defcc4c7ccb1f95b933af2959cd14ddc2
   
   
