[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15470707#comment-15470707
 ] 

Steve Loughran commented on HADOOP-13560:
-----------------------------------------

2GB uploads fail when the output is file buffered; the AWS SDK complains that it cannot reset the request input stream:
{code}
Running org.apache.hadoop.fs.s3a.scale.STestS3AHugeFilesDiskBlocks
Tests run: 5, Failures: 0, Errors: 1, Skipped: 3, Time elapsed: 1,258.424 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.STestS3AHugeFilesDiskBlocks
test_010_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.STestS3AHugeFilesDiskBlocks)  Time elapsed: 1,256.013 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSClientIOException: Multi-part upload with id 'dZga.hig99Nxdm1S5dlcilzpg1kiav7ZF2QCJZZydN0qyE7U_pMUEYdACOavY_us3q9CgIxfKaQadXLhgUseUw--' on tests3a/scale/hugefile: com.amazonaws.ResetException: Failed to reset the request input stream;  If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int): Failed to reset the request input stream;  If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:108)
        at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:165)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:418)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:356)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:261)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
        at org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.test_010_CreateHugeFile(AbstractSTestS3AHugeFiles.java:149)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.ResetException: Failed to reset the request input stream;  If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
        at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:665)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
        at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
        at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1141)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:391)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:384)
        at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Resetting to invalid mark
        at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
        at org.apache.hadoop.fs.s3a.S3ADataBlocks$ForwardingInputStream.reset(S3ADataBlocks.java:432)
        at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
        at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
        at com.amazonaws.services.s3.internal.InputSubstream.reset(InputSubstream.java:110)
        at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
        at com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream.reset(MD5DigestCalculatingInputStream.java:76)
        at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
        at com.amazonaws.event.ProgressInputStream.reset(ProgressInputStream.java:139)
        at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
        at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:663)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
        at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
        at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1141)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:391)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:384)
        at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Results :

Tests in error: 
  STestS3AHugeFilesDiskBlocks>AbstractSTestS3AHugeFiles.test_010_CreateHugeFile:149 » AWSClientIO

{code}
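
For reference, the knob the SDK message points at is the per-request mark/reset "read limit". The fragment below is only a sketch of that mitigation, not the fix for this JIRA: it builds an upload-part request by hand and raises the read limit so the SDK can rewind the whole part on a retry. The names bucket, key, uploadId, partNumber, blockStream and partSize are illustrative placeholders, not S3A internals.

{code}
// Sketch of the mitigation suggested by the ResetException message: raise the
// SDK's mark/reset "read limit" so a retried part upload can rewind its input
// stream instead of failing with "Resetting to invalid mark".
// All identifiers (bucket, key, uploadId, partNumber, blockStream, partSize)
// are illustrative placeholders.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

import java.io.InputStream;

public class ReadLimitSketch {
  static UploadPartResult uploadPartWithReadLimit(AmazonS3 s3, String bucket,
      String key, String uploadId, int partNumber, InputStream blockStream,
      long partSize) {
    UploadPartRequest request = new UploadPartRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withUploadId(uploadId)
        .withPartNumber(partNumber)
        .withInputStream(blockStream)
        .withPartSize(partSize);
    // Let the mark/reset buffer cover the whole part (+1, as the SDK marks one
    // byte past the limit). This is a trade-off, not a free fix: for a plain
    // stream the SDK may buffer that much in memory. Handing the SDK a File
    // instead of a stream sidesteps the issue, since it can simply reopen it.
    request.getRequestClientOptions()
        .setReadLimit((int) Math.min(Integer.MAX_VALUE, partSize + 1));
    return s3.uploadPart(request);
  }
}
{code}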

> S3A to support huge file writes and operations -with tests
> ----------------------------------------------------------
>
>                 Key: HADOOP-13560
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13560
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that metadata makes it over (a sketch of such a check follows 
> below).
> Verifying large file rename is important in its own right, as it is needed 
> for the very large commit operations of committers that use rename.
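
A minimal sketch of check 2, assuming an S3A FileSystem fs bound to the test bucket and a separately constructed AmazonS3 client s3 for the same bucket; this is not the test in the attached patches, and bucket, srcKey and destKey are placeholders:

{code}
// Sketch: assert that user metadata survives the (multipart) copy behind a
// rename of a huge file. fs, s3, bucket, srcKey and destKey are assumptions.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;
import java.util.Map;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class HugeRenameMetadataCheck {

  static void assertMetadataSurvivesRename(FileSystem fs, AmazonS3 s3,
      String bucket, String srcKey, String destKey) throws IOException {
    // Record the user metadata on the already-written huge source object.
    Map<String, String> before =
        s3.getObjectMetadata(bucket, srcKey).getUserMetadata();

    // rename() triggers S3A's copy; above the multipart threshold this is the
    // multipart-copy path where aws-sdk-java issue #367 reports metadata loss.
    assertTrue("rename failed",
        fs.rename(new Path("/" + srcKey), new Path("/" + destKey)));

    ObjectMetadata destMeta = s3.getObjectMetadata(bucket, destKey);
    assertEquals("user metadata lost on multipart copy",
        before, destMeta.getUserMetadata());
  }
}
{code}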



