[ https://issues.apache.org/jira/browse/JCLOUDS-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303474#comment-17303474 ]

Blagoi Anastasov commented on JCLOUDS-1478:
-------------------------------------------

Hello guys!

Great work on releasing the new 2.3.0 version. Much appreciated! However, I
can confirm that this issue is still present in 2.3.0.
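
In case it helps anyone as a stop-gap (this is not a fix for the underlying bug), here is a minimal workaround sketch that avoids the single-shot stream by handing the blob a repeatable payload (a Guava ByteSource), which the multipart slicer can re-read for each part. The credentials, bucket name, file path and class name below are placeholders.

{noformat}
import java.io.File;

import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.options.PutOptions;

import com.google.common.io.ByteSource;
import com.google.common.io.Files;

public class MultipartUploadWorkaround {

    public static void main(String[] args) throws Exception {
        // Placeholder credentials and names; substitute your own.
        BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
                .credentials("ACCESS_KEY", "SECRET_KEY")
                .buildView(BlobStoreContext.class);
        try {
            BlobStore blobStore = context.getBlobStore();
            File file = new File("hundred-mb-file.zip");

            // A ByteSource is repeatable: jclouds can open a fresh stream for each
            // part, unlike a ByteArrayInputStream, which can only be consumed once.
            ByteSource source = Files.asByteSource(file);

            Blob blob = blobStore.blobBuilder(file.getName())
                    .payload(source)
                    .contentLength(source.size())
                    .build();

            blobStore.putBlob("BUCKET_NAME", blob, PutOptions.Builder.multipart());
        } finally {
            context.close();
        }
    }
}
{noformat}

Passing the raw byte[] (e.g. the result of java.nio.file.Files.readAllBytes(...)) directly to payload(...) instead of wrapping it in a ByteArrayInputStream should also work, since a byte array payload is repeatable as well.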

> Error with multipart upload when blob uses byteArrayInputStream as payload
> --------------------------------------------------------------------------
>
>                 Key: JCLOUDS-1478
>                 URL: https://issues.apache.org/jira/browse/JCLOUDS-1478
>             Project: jclouds
>          Issue Type: Bug
>          Components: jclouds-blobstore
>    Affects Versions: 2.3.0
>            Reporter: Blagoi Anastasov
>            Priority: Major
>
> Hello,
> When trying to multipart upload a 100MB file, for example, with a
> ByteArrayInputStream set as the blob's payload, it fails with:
> Caused by: java.io.EOFException: reached end of stream after skipping 1198905 bytes;
> 100663296 bytes expected
> Here is the error trace:
> {noformat}
> Dec 19, 2018 11:09:17 AM org.jclouds.logging.jdk.JDKLogger logError
> SEVERE: Cannot retry after server error, command is not replayable: 
> [method=org.jclouds.aws.s3.AWSS3Client.public abstract java.lang.String 
> org.jclouds.s3.S3Client.uploadPart(java.lang.String,java.lang.String,int,java.lang.String,org.jclouds.io.Payload)[xxx.xxx.xxx.bucket,
>  C:\xxx\xxx\xxx\xxx\xxx.zip, 2, 
> ycoF6GcM8sRKx2jFfdzcHnn.YSRv8ewWzrIgqPI7f85B8fcQqcnqTwaqQKajRjNwnXVNJ3VBbQ8HtB6lP8aLFMLok7kIrbJ4Ir1ipxQH.fkxNVPuTi.445aHG_aOPQQr,
>  [content=true, contentMetadata=[cacheControl=null, contentDisposition=null, 
> contentEncoding=null, contentLanguage=null, contentLength=33554432, 
> contentMD5=null, contentType=application/unknown, expires=null], 
> written=false, isSensitive=false]], request=PUT 
> https://xxx.xxx.xxx.bucket.s3-eu-central-1.amazonaws.com/C:%5Cxxx%5Cxxxv%5Cxxx%5Cxxx%5Cxxx.zip?partNumber=2&uploadId=ycoF6GcM8sRKx2jFfdzcHnn.YSRv8ewWzrIgqPI7f85B8fcQqcnqTwaqQKajRjNwnXVNJ3VBbQ8HtB6lP8aLFMLok7kIrbJ4Ir1ipxQH.fkxNVPuTi.445aHG_aOPQQr
>  HTTP/1.1]
> Dec 19, 2018 11:09:17 AM org.jclouds.logging.jdk.JDKLogger logError
> SEVERE: Cannot retry after server error, command is not replayable: 
> [method=org.jclouds.aws.s3.AWSS3Client.public abstract java.lang.String 
> org.jclouds.s3.S3Client.uploadPart(java.lang.String,java.lang.String,int,java.lang.String,org.jclouds.io.Payload)[xxx.xxx.xxx.bucket,
>  C:\xxx\xxx\xxx\xxx\xxx.zip, 1, 
> ycoF6GcM8sRKx2jFfdzcHnn.YSRv8ewWzrIgqPI7f85B8fcQqcnqTwaqQKajRjNwnXVNJ3VBbQ8HtB6lP8aLFMLok7kIrbJ4Ir1ipxQH.fkxNVPuTi.445aHG_aOPQQr,
>  [content=true, contentMetadata=[cacheControl=null, contentDisposition=null, 
> contentEncoding=null, contentLanguage=null, contentLength=33554432, 
> contentMD5=null, contentType=application/unknown, expires=null], 
> written=false, isSensitive=false]], request=PUT 
> https://xxx.xxx.xxx.bucket.s3-eu-central-1.amazonaws.com/C:%5Cxxx%5Cxxx%5Cxxx%5xxx%5Cxx.zip?partNumber=1&uploadId=ycoF6GcM8sRKx2jFfdzcHnn.YSRv8ewWzrIgqPI7f85B8fcQqcnqTwaqQKajRjNwnXVNJ3VBbQ8HtB6lP8aLFMLok7kIrbJ4Ir1ipxQH.fkxNVPuTi.445aHG_aOPQQr
>  HTTP/1.1]
> Dec 19, 2018 11:09:17 AM org.jclouds.logging.jdk.JDKLogger logError
> SEVERE: Cannot retry after server error, command is not replayable: 
> [method=org.jclouds.aws.s3.AWSS3Client.public abstract java.lang.String 
> org.jclouds.s3.S3Client.uploadPart(java.lang.String,java.lang.String,int,java.lang.String,org.jclouds.io.Payload)[xxx.xxx.xxx.bucket,
>  C:\xxx\xxx\xxx\xxx\xxx.zip, 3, 
> ycoF6GcM8sRKx2jFfdzcHnn.YSRv8ewWzrIgqPI7f85B8fcQqcnqTwaqQKajRjNwnXVNJ3VBbQ8HtB6lP8aLFMLok7kIrbJ4Ir1ipxQH.fkxNVPuTi.445aHG_aOPQQr,
>  [content=true, contentMetadata=[cacheControl=null, contentDisposition=null, 
> contentEncoding=null, contentLanguage=null, contentLength=33554432, 
> contentMD5=null, contentType=application/unknown, expires=null], 
> written=false, isSensitive=false]], request=PUT 
> https://xxx.xxx.xxx.bucket.s3-eu-central-1.amazonaws.com/C:%5Cxxx%5Cxxx%5Cxxx%5Clxxx%5Cxxx.zip?partNumber=3&uploadId=ycoF6GcM8sRKx2jFfdzcHnn.YSRv8ewWzrIgqPI7f85B8fcQqcnqTwaqQKajRjNwnXVNJ3VBbQ8HtB6lP8aLFMLok7kIrbJ4Ir1ipxQH.fkxNVPuTi.445aHG_aOPQQr
>  HTTP/1.1]
> Exception in thread "main" java.lang.RuntimeException: java.io.EOFException: 
> reached end of stream after skipping 1198905 bytes; 100663296 bytes expected
>  at com.google.common.base.Throwables.propagate(Throwables.java:160)
>  at 
> org.jclouds.io.internal.BasePayloadSlicer.doSlice(BasePayloadSlicer.java:253)
>  at 
> org.jclouds.io.internal.BasePayloadSlicer.slice(BasePayloadSlicer.java:228)
>  at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:371)
>  at 
> org.jclouds.blobstore.internal.BaseBlobStore.putMultipartBlob(BaseBlobStore.java:347)
>  at 
> org.jclouds.aws.s3.blobstore.AWSS3BlobStore.putBlob(AWSS3BlobStore.java:79)
>  at Test.uploadMultipart(Test.java:47)
>  at Test.main(Test.java:140)
> Caused by: java.io.EOFException: reached end of stream after skipping 1198905 
> bytes; 100663296 bytes expected
>  at com.google.common.io.ByteStreams.skipFully(ByteStreams.java:668)
>  at 
> org.jclouds.io.internal.BasePayloadSlicer.doSlice(BasePayloadSlicer.java:251)
>  ... 6 more
> {noformat}
>  
> Here is the code I am using:
> {noformat}
> private static S3Client mS3Client;
>
> private static ContextBuilder getJCloudContextBuilder() {
>     return ContextBuilder.newBuilder("aws-s3").credentials(ACCESS_KEY, SECRET_KEY);
> }
>
> private static File getFile() {
>     return new File(ONE_HUNDRED_MB_FILENAME);
> }
>
> private static void uploadMultipart() {
>     BlobStoreContext blobStoreContext = getJCloudContextBuilder().build(BlobStoreContext.class);
>     mS3Client = blobStoreContext.unwrapApi(S3Client.class);
>     mS3Client.bucketExists(BUCKET_NAME);
>     BlobStore blobStore = blobStoreContext.getBlobStore();
>     try (InputStream inputStream = new ByteArrayInputStream(Files.readAllBytes(getFile().toPath()))) {
>         Blob blob = blobStore.blobBuilder(ONE_HUNDRED_MB_FILENAME)
>                 .payload(inputStream)
>                 .contentLength(getFile().length())
>                 .build();
>         blobStore.putBlob(BUCKET_NAME, blob, PutOptions.Builder.multipart());
>     } catch (FileNotFoundException e) {
>         e.printStackTrace();
>     } catch (IOException e) {
>         e.printStackTrace();
>     }
> }
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
