[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455449#comment-15455449 ]
Steve Loughran commented on HADOOP-13560:
-----------------------------------------
{code}
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 434.455 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.STestS3AHugeFileCreate
test_010_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.STestS3AHugeFileCreate)  Time elapsed: 336.774 sec <<< ERROR!
org.apache.hadoop.fs.s3a.AWSS3IOException: Completing multi-part upload on tests3a/scale/hugefile: com.amazonaws.services.s3.model.AmazonS3Exception: We encountered an internal error. Please try again. (Service: null; Status Code: 0; Error Code: InternalError; Request ID: 15ED37BFBBA92F92), S3 Extended Request ID: +BlJIb5S2QHh1j3dwJysprHTq5iGnvLRD+xWKKld4/0EE3dqt56SwLVZvE4B2100jsN8EKmvVzg=: We encountered an internal error. Please try again. (Service: null; Status Code: 0; Error Code: InternalError; Request ID: 15ED37BFBBA92F92)
    at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$CompleteMultipartUploadHandler.doEndElement(XmlResponsesSaxParser.java:1460)
    at com.amazonaws.services.s3.model.transform.AbstractHandler.endElement(AbstractHandler.java:52)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:609)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1783)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2970)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
    at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:118)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
    at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:151)
    at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseCompleteMultipartUploadResponse(XmlResponsesSaxParser.java:444)
    at com.amazonaws.services.s3.model.transform.Unmarshallers$CompleteMultipartUploadResultUnmarshaller.unmarshall(Unmarshallers.java:213)
    at com.amazonaws.services.s3.model.transform.Unmarshallers$CompleteMultipartUploadResultUnmarshaller.unmarshall(Unmarshallers.java:210)
    at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
    at com.amazonaws.services.s3.internal.ResponseHeaderHandlerChain.handle(ResponseHeaderHandlerChain.java:44)
    at com.amazonaws.services.s3.internal.ResponseHeaderHandlerChain.handle(ResponseHeaderHandlerChain.java:30)
    at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:1072)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:746)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
    at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2705)
    at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.complete(S3AFastOutputStream.java:388)
    at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.access$200(S3AFastOutputStream.java:333)
    at org.apache.hadoop.fs.s3a.S3AFastOutputStream.close(S3AFastOutputStream.java:270)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.fs.s3a.scale.STestS3AHugeFileCreate.test_010_CreateHugeFile(STestS3AHugeFileCreate.java:161)
{code}
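This looks like a transient server-side failure: the InternalError surfaces while the SDK is parsing the CompleteMultipartUpload response (status code 0, i.e. the error arrived in the response body rather than as an HTTP failure), and the completion call itself isn't retried. A minimal sketch of the kind of retry wrapper that could mask it, assuming the AWS SDK 1.x client API (the helper class and the backoff policy here are illustrative, not the actual S3A code):
{code}
// Sketch only: a hypothetical retry wrapper around completeMultipartUpload,
// not the S3A code. Assumes the AWS SDK 1.x client API; attempts >= 1.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadResult;

public class CompleteWithRetry {
  static CompleteMultipartUploadResult complete(AmazonS3 s3,
      CompleteMultipartUploadRequest request, int attempts)
      throws InterruptedException {
    AmazonServiceException last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return s3.completeMultipartUpload(request);
      } catch (AmazonServiceException e) {
        // Only "InternalError" is treated as transient; anything else is
        // rethrown immediately, as it is unlikely to go away on retry.
        if (!"InternalError".equals(e.getErrorCode())) {
          throw e;
        }
        last = e;
        Thread.sleep(1000L << i); // simple exponential backoff
      }
    }
    throw last;
  }
}
{code}
Only the InternalError case is retried here; other service exceptions are rethrown on the assumption that they are permanent.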
> S3A to support huge file writes and operations -with tests
> ----------------------------------------------------------
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really
> works.
> 2. Verify that the metadata makes it over.
> Verifying large-file rename is important in its own right, as it is needed
> for the very large commit operations performed by rename-based committers.
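For the metadata half of that test, a check along the lines below could assert that user metadata survives the copy. This is a sketch only, assuming direct HEAD calls through an AWS SDK 1.x client; the class and method names are illustrative:
{code}
// Sketch only: hypothetical assertion that object metadata survives a copy.
// Assumes an AWS SDK 1.x AmazonS3 client; names here are illustrative.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import static org.junit.Assert.assertEquals;

public class MetadataCopyCheck {
  static void assertMetadataCopied(AmazonS3 s3, String bucket,
      String srcKey, String destKey) {
    // HEAD both objects and compare the metadata the copy should preserve.
    ObjectMetadata src = s3.getObjectMetadata(bucket, srcKey);
    ObjectMetadata dest = s3.getObjectMetadata(bucket, destKey);
    // Per aws-sdk-java#367, user metadata can be dropped on large
    // (multipart-based) copies; this is exactly what the test should catch.
    assertEquals(src.getUserMetadata(), dest.getUserMetadata());
    assertEquals(src.getContentType(), dest.getContentType());
  }
}
{code}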