[
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313817#comment-16313817
]
Steve Loughran commented on JCLOUDS-1366:
-----------------------------------------
FWIW, in Hadoop S3A we dealt with this by buffering blocks on local disk by
default and handing the actual File reference to the AWS SDK transfer manager.
Given a file, it handles transient failures nicely by restarting the upload.
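To illustrate the point (this is a standalone sketch, not jclouds or S3A code, and `uploadPart` is a hypothetical stand-in for the real network call): with a file-backed payload, each multipart part can be read into a small, reusable buffer, and on a transient failure the uploader can simply seek back and re-read the part. Heap usage stays bounded no matter how large the file is, which is exactly what an unbuffered InputStream payload cannot guarantee.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;

// Sketch of bounded-memory multipart upload from a File. The 5 MB part
// size mirrors the S3 multipart minimum; uploadPart is a placeholder.
public class ChunkedUpload {
    static final int PART_SIZE = 5 * 1024 * 1024;

    static long upload(File file) throws IOException {
        long uploaded = 0;
        byte[] buffer = new byte[PART_SIZE];  // one fixed-size buffer, reused per part
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            int read;
            while ((read = raf.read(buffer)) != -1) {
                // Retryable: on failure, seek back to the part offset and re-read.
                uploadPart(buffer, read);
                uploaded += read;
            }
        }
        return uploaded;
    }

    static void uploadPart(byte[] part, int length) {
        // placeholder for the actual network upload of one part
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("big", ".bin");
        tmp.deleteOnExit();
        Files.write(tmp.toPath(), new byte[3 * 1024 * 1024]);
        System.out.println(upload(tmp));  // prints 3145728
    }
}
```

An InputStream payload, by contrast, forces the slicer to hold each part in memory (it cannot seek back into the stream), which is where the heap fills up.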
> OutOfMemory when InputStream referencing to big file is used as payload
> -----------------------------------------------------------------------
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
> Issue Type: Bug
> Components: jclouds-blobstore
> Affects Versions: 2.0.0, 2.0.3
> Environment: Linux and Windows
> Reporter: Deyan
> Priority: Critical
>
> If I use an InputStream whose source is a large file (let's say 3 GB), I get an
> OutOfMemoryError. This is with default JVM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>     Blob blob = blobStore.blobBuilder(blobName)
>             .payload(inputStream)
>             .contentLength(bigFile.length())
>             .contentDisposition(blobName)
>             .contentType(MediaType.OCTET_STREAM)
>             .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>             .build();
>     blobStore.putBlob("test", blob, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
> at org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
> at org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
> at org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
> at org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
> at org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
> at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
> If 'bigFile' itself is used as the payload, the bug is not reproducible.
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)