Re: [jclouds/jclouds-labs] JCLOUDS-1362: Proper password generation with custom constraints for each cloud (#430)

2018-01-05 Thread Jim Spring
@nacx LGTM

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds-labs/pull/430#issuecomment-355629218

[jira] [Commented] (JCLOUDS-1366) OutOfMemory when InputStream referencing to big file is used as payload

2018-01-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313817#comment-16313817
 ] 

Steve Loughran commented on JCLOUDS-1366:
-

FWIW, in Hadoop S3A we dealt with this by buffering blocks on disk by default 
and handing the actual File reference to the AWS SDK transfer manager. When 
given a file, it handles transient failures nicely by restarting the upload. 
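A minimal sketch of that pattern (names here are assumptions, not existing 
Hadoop or jclouds API): spool the non-repeatable stream to a temporary file 
first, so a failed upload can restart by re-reading from disk.

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

class SpoolSketch {
   // Copy the stream to a temp file; afterwards the payload is repeatable
   // and a failed part upload can restart by re-reading from disk.
   static File spoolToDisk(InputStream in) throws IOException {
      File tmp = File.createTempFile("upload-", ".part");
      tmp.deleteOnExit();
      Files.copy(in, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
      return tmp;
   }
}
{code}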

> OutOfMemory when InputStream referencing to big file is used as payload
> ---
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
>  Issue Type: Bug
>  Components: jclouds-blobstore
>Affects Versions: 2.0.0, 2.0.3
> Environment: Linux and Windows
>Reporter: Deyan
>Priority: Critical
>
> If I use an InputStream whose source is a large file (say 3GB), I get an
> OutOfMemoryError. This is with default Java VM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>    Blob b = blobStore.blobBuilder(blobName)
>          .payload(inputStream)
>          .contentLength(bigFile.length())
>          .contentDisposition(blobName)
>          .contentType(MediaType.OCTET_STREAM)
>          .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>          .build();
>    blobStore.putBlob("test", b, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
>   at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
>  If 'bigFile' itself is used as the payload, the bug is not reproducible.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (JCLOUDS-1371) LocalBlobStore.list enumerates entire container

2018-01-05 Thread Andrew Gaul (JIRA)
Andrew Gaul created JCLOUDS-1371:


 Summary: LocalBlobStore.list enumerates entire container
 Key: JCLOUDS-1371
 URL: https://issues.apache.org/jira/browse/JCLOUDS-1371
 Project: jclouds
  Issue Type: Improvement
  Components: jclouds-blobstore
Affects Versions: 2.0.3
Reporter: Andrew Gaul


{{LocalBlobStore.list}} with the filesystem blobstore enumerates the entire 
container even when prefix and delimiter are set.  The File API does not 
provide a way to list a subset of files except for those within a specific 
directory, and the underlying filesystem makes no guarantees about enumeration 
order.  We can still optimize the case where prefix is set and the delimiter 
is /.
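A minimal sketch of that optimization (assumed names, not the actual 
LocalBlobStore internals): when the delimiter is /, the directory component 
of the prefix can be listed directly instead of walking the whole container.

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class PrefixListSketch {
   static List<String> list(File containerRoot, String prefix) {
      int slash = prefix.lastIndexOf('/');
      String dir = slash == -1 ? "" : prefix.substring(0, slash + 1);
      String rest = prefix.substring(slash + 1);
      File base = dir.isEmpty() ? containerRoot : new File(containerRoot, dir);
      String[] names = base.list();
      List<String> results = new ArrayList<>();
      if (names != null) {
         Arrays.sort(names);  // the filesystem itself guarantees no order
         for (String name : names) {
            if (name.startsWith(rest)) {
               results.add(dir + name);
            }
         }
      }
      return results;
   }
}
{code}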



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (JCLOUDS-1366) OutOfMemory when InputStream referencing to big file is used as payload

2018-01-05 Thread Andrew Gaul (JIRA)

[ 
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312682#comment-16312682
 ] 

Andrew Gaul commented on JCLOUDS-1366:
--

42079e1392fb5b2b792f518812689854c375445f introduced this regression with the 
parallel upload feature.  Previously {{BaseBlobStore.putMultipartBlob}} 
prepared a single MPU part and uploaded it, looping until complete.  Now it 
prepares all MPU parts simultaneously and submits them to an 
{{ExecutorService}}.  Combined with JCLOUDS-814, this buffers the entire blob 
in memory and results in {{OutOfMemoryError}}.  Instead, we should limit the 
number of simultaneous uploads with {{InputStream}} payloads.  [~zack-s] 
[~dgyurdzhekliev] Could you investigate this?
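A sketch of the proposed limit (illustrative, not the eventual patch): gate 
part submission on a semaphore so only a bounded number of sliced parts are 
buffered at once.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

class BoundedSubmitSketch {
   private final ExecutorService executor = Executors.newFixedThreadPool(4);
   // At most 4 parts in flight means at most 4 part buffers in memory.
   private final Semaphore inFlight = new Semaphore(4);

   void submitPart(Runnable uploadPart) throws InterruptedException {
      inFlight.acquire();  // blocks the slicing loop while 4 parts are pending
      executor.execute(() -> {
         try {
            uploadPart.run();
         } finally {
            inFlight.release();
         }
      });
   }
}
{code}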

> OutOfMemory when InputStream referencing to big file is used as payload
> ---
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
>  Issue Type: Bug
>  Components: jclouds-blobstore
>Affects Versions: 2.0.3
> Environment: Linux and Windows
>Reporter: Deyan
>Priority: Critical
>
> If I use an InputStream whose source is a large file (say 3GB), I get an
> OutOfMemoryError. This is with default Java VM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>    Blob b = blobStore.blobBuilder(blobName)
>          .payload(inputStream)
>          .contentLength(bigFile.length())
>          .contentDisposition(blobName)
>          .contentType(MediaType.OCTET_STREAM)
>          .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>          .build();
>    blobStore.putBlob("test", b, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
>   at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
>  If 'bigFile' itself is used as the payload, the bug is not reproducible.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jclouds/jclouds] JCLOUDS-1370: Add CannedAccesPolicy constants (#1167)

2018-01-05 Thread Andrew Gaul
Also use CaseFormat instead of extra logic.
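Presumably the CaseFormat change derives each constant's wire value from its 
name, along these lines (a sketch, not the actual patch):

    import com.google.common.base.CaseFormat;

    // "PUBLIC_READ" -> "public-read"; inside the enum the input would be name().
    String wireValue = CaseFormat.UPPER_UNDERSCORE.to(
          CaseFormat.LOWER_HYPHEN, "PUBLIC_READ");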
You can view, comment on, or merge this pull request online at:

  https://github.com/jclouds/jclouds/pull/1167

-- Commit Summary --

  * JCLOUDS-1370: Add CannedAccesPolicy constants

-- File Changes --

M apis/s3/src/main/java/org/jclouds/s3/domain/CannedAccessPolicy.java (49)

-- Patch Links --

https://github.com/jclouds/jclouds/pull/1167.patch
https://github.com/jclouds/jclouds/pull/1167.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/1167


Re: [jclouds/jclouds] JCLOUDS-1370: Add CannedAccesPolicy constants (#1167)

2018-01-05 Thread stevegomez17
stevegomez17 requested changes on this pull request.

Should be use ICANN ACESS POLICY

> @@ -36,29 +38,50 @@
 /**
  * Owner gets FULL_CONTROL. No one else has access rights (default).
  */
-PRIVATE("private"),

“PRIVATE”

>  /**
  * Owner gets FULL_CONTROL and the anonymous identity is granted READ
  * access. If this policy is used on an object, it can be read from a
  * browser with no authentication.
  */
-PUBLIC_READ("public-read"),

“PUBLIC _READ”

>  /**
  * Owner gets FULL_CONTROL and the anonymous identity is granted READ
  * access. If this policy is used on an object, it can be read from a
  * browser with no authentication.
  */
-PUBLIC_READ("public-read"),
+PUBLIC_READ,
 /**
  * Owner gets FULL_CONTROL, the anonymous identity is granted READ and
  * WRITE access. This can be a useful policy to apply to a bucket, but is
  * generally not recommended.
  */

PUBLIC_READ

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/1167#pullrequestreview-86854328

[jira] [Updated] (JCLOUDS-1366) OutOfMemory when InputStream referencing to big file is used as payload

2018-01-05 Thread Andrew Gaul (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Gaul updated JCLOUDS-1366:
-
Affects Version/s: 2.0.0

> OutOfMemory when InputStream referencing to big file is used as payload
> ---
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
>  Issue Type: Bug
>  Components: jclouds-blobstore
>Affects Versions: 2.0.0, 2.0.3
> Environment: Linux and Windows
>Reporter: Deyan
>Priority: Critical
>
> If I use an InputStream whose source is a large file (say 3GB), I get an
> OutOfMemoryError. This is with default Java VM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>    Blob b = blobStore.blobBuilder(blobName)
>          .payload(inputStream)
>          .contentLength(bigFile.length())
>          .contentDisposition(blobName)
>          .contentType(MediaType.OCTET_STREAM)
>          .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>          .build();
>    blobStore.putBlob("test", b, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
>   at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
>  If 'bigFile' itself is used as the payload, the bug is not reproducible.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (JCLOUDS-1371) LocalBlobStore.list enumerates entire container

2018-01-05 Thread Andrew Gaul (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCLOUDS-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Gaul updated JCLOUDS-1371:
-
Description: 
{{LocalBlobStore.list}} with the filesystem blobstore enumerates the entire 
container even when prefix and delimiter are set.  The File API does not 
provide a way to list a subset of files except for those within a specific 
directory, and the underlying filesystem makes no guarantees about enumeration 
order.  We can still optimize the case where prefix is set and the delimiter 
is /.  Reference:

https://lists.apache.org/thread.html/72e8a101d8a8f99b6f728336633db2cecae1dc443e4c5b195eee8f0d@%3Cuser.jclouds.apache.org%3E

  was:{{LocalBlobStore.list}} with the filesystem blobstore enumerates the 
entire container even when prefix and delimiter are set.  The File API does 
not provide a way to list a subset of files except for those within a specific 
directory, and the underlying filesystem makes no guarantees about enumeration 
order.  We can still optimize the case where prefix is set and the delimiter 
is /.


> LocalBlobStore.list enumerates entire container
> ---
>
> Key: JCLOUDS-1371
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1371
> Project: jclouds
>  Issue Type: Improvement
>  Components: jclouds-blobstore
>Affects Versions: 2.0.3
>Reporter: Andrew Gaul
>  Labels: filesystem
>
> {{LocalBlobStore.list}} with the filesystem blobstore enumerates the entire 
> container even when prefix and delimiter are set.  The File API does not 
> provide a way to list a subset of files except for those within a specific 
> directory, and the underlying filesystem makes no guarantees about 
> enumeration order.  We can still optimize the case where prefix is set and 
> the delimiter is /.  Reference:
> https://lists.apache.org/thread.html/72e8a101d8a8f99b6f728336633db2cecae1dc443e4c5b195eee8f0d@%3Cuser.jclouds.apache.org%3E



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (JCLOUDS-1366) OutOfMemory when InputStream referencing to big file is used as payload

2018-01-05 Thread Andrew Gaul (JIRA)

[ 
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312682#comment-16312682
 ] 

Andrew Gaul edited comment on JCLOUDS-1366 at 1/5/18 8:18 AM:
--

42079e1392fb5b2b792f518812689854c375445f introduced this regression in jclouds 
2.0.0 with the parallel upload feature.  Previously 
{{BaseBlobStore.putMultipartBlob}} prepared a single MPU part and uploaded it, 
looping until complete.  Now it prepares all MPU parts simultaneously and 
submits them to an {{ExecutorService}}.  Combined with JCLOUDS-814, this 
buffers the entire blob in memory and results in {{OutOfMemoryError}}.  
Instead, we should limit the number of simultaneous uploads with 
{{InputStream}} payloads.  [~zack-s] [~dgyurdzhekliev] Could you investigate 
this?


was (Author: gaul):
42079e1392fb5b2b792f518812689854c375445f introduced this regression with the 
parallel upload feature.  Previously {{BaseBlobStore.putMultipartBlob}} 
prepared a single MPU part and uploaded it, looping until complete.  Now it 
prepares all MPU parts simultaneously and submits them to an 
{{ExecutorService}}.  Combined with JCLOUDS-814, this buffers the entire blob 
in memory and results in {{OutOfMemoryError}}.  Instead, we should limit the 
number of simultaneous uploads with {{InputStream}} payloads.  [~zack-s] 
[~dgyurdzhekliev] Could you investigate this?

> OutOfMemory when InputStream referencing to big file is used as payload
> ---
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
>  Issue Type: Bug
>  Components: jclouds-blobstore
>Affects Versions: 2.0.3
> Environment: Linux and Windows
>Reporter: Deyan
>Priority: Critical
>
> If I use an InputStream whose source is a large file (say 3GB), I get an
> OutOfMemoryError. This is with default Java VM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>    Blob b = blobStore.blobBuilder(blobName)
>          .payload(inputStream)
>          .contentLength(bigFile.length())
>          .contentDisposition(blobName)
>          .contentType(MediaType.OCTET_STREAM)
>          .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>          .build();
>    blobStore.putBlob("test", b, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
>   at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
>  If 'bigFile' itself is used as the payload, the bug is not reproducible.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (JCLOUDS-1370) Add bucket-owner-full-control option to the CannedAccessPolicy class

2018-01-05 Thread Andrew Gaul (JIRA)

[ 
https://issues.apache.org/jira/browse/JCLOUDS-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312723#comment-16312723
 ] 

Andrew Gaul commented on JCLOUDS-1370:
--

Opened pull request: https://github.com/jclouds/jclouds/pull/1167

> Add bucket-owner-full-control option to the CannedAccessPolicy class
> 
>
> Key: JCLOUDS-1370
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1370
> Project: jclouds
>  Issue Type: New Feature
>Affects Versions: 2.0.3
>Reporter: Timothy Anyona
>Priority: Minor
>
> The Amazon S3 [canned 
> ACL|https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl]
>  documentation lists 8 possible canned ACL options. Only 4 are currently 
> available in the 
> [CannedAccessPolicy|https://jclouds.apache.org/reference/javadoc/1.8.x/org/jclouds/s3/domain/CannedAccessPolicy.html]
>  class. In particular, it may be useful to add the 
> *bucket-owner-full-control* ACL.
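For illustration, a hedged usage sketch once such a constant exists: 
BUCKET_OWNER_FULL_CONTROL is the proposed addition, and withAcl is assumed 
here to mirror how the existing canned ACLs are applied via 
{{PutObjectOptions}}.

{code:java}
import static org.jclouds.s3.options.PutObjectOptions.Builder.withAcl;

import org.jclouds.s3.S3Client;
import org.jclouds.s3.domain.CannedAccessPolicy;
import org.jclouds.s3.domain.S3Object;

class AclUploadSketch {
   // Upload an object into another account's bucket while granting the
   // bucket owner full control over it.
   static void upload(S3Client client, S3Object object) {
      client.putObject("bucket", object,
            withAcl(CannedAccessPolicy.BUCKET_OWNER_FULL_CONTROL));
   }
}
{code}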



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (JCLOUDS-1370) Add bucket-owner-full-control option to the CannedAccessPolicy class

2018-01-05 Thread Andrew Gaul (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCLOUDS-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Gaul reassigned JCLOUDS-1370:


Assignee: Andrew Gaul

> Add bucket-owner-full-control option to the CannedAccessPolicy class
> 
>
> Key: JCLOUDS-1370
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1370
> Project: jclouds
>  Issue Type: New Feature
>Affects Versions: 2.0.3
>Reporter: Timothy Anyona
>Assignee: Andrew Gaul
>Priority: Minor
>
> The Amazon S3 [canned 
> ACL|https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl]
>  documentation lists 8 possible canned ACL options. Only 4 are currently 
> available in the 
> [CannedAccessPolicy|https://jclouds.apache.org/reference/javadoc/1.8.x/org/jclouds/s3/domain/CannedAccessPolicy.html]
>  class. In particular, it may be useful to add the 
> *bucket-owner-full-control* ACL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (JCLOUDS-1366) OutOfMemory when InputStream referencing to big file is used as payload

2018-01-05 Thread Zack Shoylev (JIRA)

[ 
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313118#comment-16313118
 ] 

Zack Shoylev commented on JCLOUDS-1366:
---

If I remember correctly, only a certain number of parts should be prepared at 
a time (instead of all of them). The ExecutorService prepare loop is supposed 
to block once the job queue is full. There could be some problem with the 
implementation, though.
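The behaviour described would look roughly like this (a sketch, not the 
actual jclouds executor wiring): a bounded work queue plus a rejection 
handler that blocks the producer instead of failing.

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BlockingExecutorSketch {
   static ThreadPoolExecutor create(int workers, int queueDepth) {
      return new ThreadPoolExecutor(workers, workers,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueDepth),
            (task, executor) -> {
               try {
                  // Block the producer until a queue slot frees up.
                  executor.getQueue().put(task);
               } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
               }
            });
   }
}
{code}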

> OutOfMemory when InputStream referencing to big file is used as payload
> ---
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
>  Issue Type: Bug
>  Components: jclouds-blobstore
>Affects Versions: 2.0.0, 2.0.3
> Environment: Linux and Windows
>Reporter: Deyan
>Priority: Critical
>
> If I use an InputStream whose source is a large file (say 3GB), I get an
> OutOfMemoryError. This is with default Java VM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>    Blob b = blobStore.blobBuilder(blobName)
>          .payload(inputStream)
>          .contentLength(bigFile.length())
>          .contentDisposition(blobName)
>          .contentType(MediaType.OCTET_STREAM)
>          .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>          .build();
>    blobStore.putBlob("test", b, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
>   at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
>  If 'bigFile' itself is used as the payload, the bug is not reproducible.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [jclouds/jclouds] JCLOUDS-1370: Add CannedAccesPolicy constants (#1167)

2018-01-05 Thread Andrew Gaul
gaul commented on this pull request.



>  /**
  * Owner gets FULL_CONTROL and the anonymous identity is granted READ
  * access. If this policy is used on an object, it can be read from a
  * browser with no authentication.
  */
-PUBLIC_READ("public-read"),
+PUBLIC_READ,
 /**
  * Owner gets FULL_CONTROL, the anonymous identity is granted READ and
  * WRITE access. This can be a useful policy to apply to a bucket, but is
  * generally not recommended.
  */

I don't understand any of your comments.  Can you use complete sentences?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/1167#discussion_r159884104

[jira] [Commented] (JCLOUDS-1366) OutOfMemory when InputStream referencing to big file is used as payload

2018-01-05 Thread Andrew Gaul (JIRA)

[ 
https://issues.apache.org/jira/browse/JCLOUDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313928#comment-16313928
 ] 

Andrew Gaul commented on JCLOUDS-1366:
--

jclouds MPU can handle arbitrarily large blobs with repeatable payloads like 
{{ByteSource}}.  This issue only concerns non-repeatable payloads like 
{{InputStream}}.
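To illustrate the distinction (assumed names; not jclouds internals): a 
file-backed {{ByteSource}} can open a fresh stream per part, while a raw 
{{InputStream}} can be consumed only once, so its parts must be buffered.

{code:java}
import com.google.common.io.ByteSource;
import com.google.common.io.Files;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;

class RepeatableSketch {
   static void demo(File bigFile) throws IOException {
      ByteSource source = Files.asByteSource(bigFile);
      // Each openStream() re-reads from the start of the file: repeatable,
      // so a failed part can simply be sliced and uploaded again.
      try (InputStream first = source.openStream();
            InputStream second = source.openStream()) {
         // consume the streams ...
      }
   }
}
{code}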

> OutOfMemory when InputStream referencing to big file is used as payload
> ---
>
> Key: JCLOUDS-1366
> URL: https://issues.apache.org/jira/browse/JCLOUDS-1366
> Project: jclouds
>  Issue Type: Bug
>  Components: jclouds-blobstore
>Affects Versions: 2.0.0, 2.0.3
> Environment: Linux and Windows
>Reporter: Deyan
>Priority: Critical
>
> If I use an InputStream whose source is a large file (say 3GB), I get an
> OutOfMemoryError. This is with default Java VM options.
> Here is the code I am using to construct the blob:
> {code:java}
> File bigFile = new File(file);
> try (InputStream inputStream = new FileInputStream(bigFile)) {
>    Blob b = blobStore.blobBuilder(blobName)
>          .payload(inputStream)
>          .contentLength(bigFile.length())
>          .contentDisposition(blobName)
>          .contentType(MediaType.OCTET_STREAM)
>          .userMetadata(ImmutableMap.of("a", "b", "test", "beta"))
>          .build();
>    blobStore.putBlob("test", b, multipart());
> }
> {code}
> Stacktrace:
> {code:java}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.getNextPayload(BasePayloadSlicer.java:101)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:90)
>   at 
> org.jclouds.io.internal.BasePayloadSlicer$InputStreamPayloadIterator.next(BasePayloadSlicer.java:63)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:363)
>   at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:349)
>   at org.jclouds.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:262)
> {code}
>  If 'bigFile' itself is used as the payload, the bug is not reproducible.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)