Hi Arunagiri,

Not sure what could be going on. Just to verify whether it is an
HTTP issue, could you configure a different HTTP driver in your
application, such as the OkHttp driver [1], which handles uploads
differently? You just need to declare the module when you create
the connection and test again.
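
For example, something along these lines (a minimal sketch: the
"aws-s3" provider id and the credentials are placeholders, and the
driver ships as a separate jclouds-okhttp jar that has to be on the
classpath):

    import org.jclouds.ContextBuilder;
    import org.jclouds.blobstore.BlobStoreContext;
    import org.jclouds.http.okhttp.config.OkHttpCommandExecutorServiceModule;

    import com.google.common.collect.ImmutableSet;
    import com.google.inject.Module;

    // Build the connection as usual, but register the OkHttp driver module
    // so requests bypass the default java.net HTTP driver.
    BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
          .credentials("identity", "credential") // placeholders
          .modules(ImmutableSet.<Module>of(new OkHttpCommandExecutorServiceModule()))
          .buildView(BlobStoreContext.class);

If the upload succeeds with OkHttp, that would point at the default
java.net driver (the JavaUrlHttpCommandExecutorService in your stack
trace) rather than at your code.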

I.

[1] https://github.com/jclouds/jclouds/tree/master/drivers/okhttp

On 2 March 2017 at 19:40, Arunagiri Rajasekaran
<arunagiri.santh...@gmail.com> wrote:
> Hi,
>
> I am trying to transfer about 100 MB of data to the S3 region ap-northeast-1.
> Multipart upload is enabled, with the part size set to 32 MB. However, after
> writing approx. 8 MB of the first 32 MB part, the put fails with an
> IOException. The exception, captured with jclouds logging enabled, is pasted
> below:
>
> 2017-02-27@05:46:58.779 E            [validate-47         ] pCommandExecutorService:logError - error after writing 8785920/33554432 bytes to
> https://mcstore-ttv-validator-155b8afa-404b-499b-8708-10d393e7f313.s3.amazonaws.com/datathroughput?partNumber=1&uploadId=DeKeiT8KtfZS2gxx9R4klBrnqmnhknARLiU9Kt9DCAxpOP899o74LhgLWUys4u20ASe1dCN_H3yzqXFGTnOMI7nmwroqJU6QxI3bQ7dFy9HD5QJ3Ry7vDXOP3tz7qN0PVgRK4pZ6f5w8iPEg6dckig--
> java.io.IOException: Error writing request body to server
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3494) ~[na:1.8.0-internal]
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3477) ~[na:1.8.0-internal]
>         at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:53) ~[guava-17.0.jar:na]
>         at com.google.common.io.ByteStreams.copy(ByteStreams.java:179) ~[guava-17.0.jar:na]
>         at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.writePayloadToConnection(JavaUrlHttpCommandExecutorService.java:297) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:170) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:64) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:95) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156) [jclouds-core-1.9.1.jar:1.9.1]
>         at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123) [jclouds-core-1.9.1.jar:1.9.1]
>         at com.sun.proxy.$Proxy42.uploadPart(Unknown Source) [na:1.8.0-internal]
>         at org.jclouds.s3.blobstore.strategy.internal.SequentialMultipartUploadStrategy.prepareUploadPart(SequentialMultipartUploadStrategy.java:111) [s3-1.9.1.jar:1.9.1]
>         at org.jclouds.s3.blobstore.strategy.internal.SequentialMultipartUploadStrategy.execute(SequentialMultipartUploadStrategy.java:93) [s3-1.9.1.jar:1.9.1]
>         at org.jclouds.aws.s3.blobstore.AWSS3BlobStore.putBlob(AWSS3BlobStore.java:87) [aws-s3-1.9.1.jar:1.9.1]
>         at com.ibm.icstore.modules.BlobStoreConnection$BSCDoneHandler.run(BlobStoreConnection.java:749) [icstore-core-0.30.0-SNAPSHOT.jar:na]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0-internal]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:267) [na:1.8.0-internal]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) [na:1.8.0-internal]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:618) [na:1.8.0-internal]
>         at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
>
> The same code works fine for the default S3 region but fails when pointed
> at specific regions such as ap-northeast-1, ap-southeast-1, etc.
>
> Further debugging showed that the InputStream we used to generate the data
> was closed after only 32 MB had been read, whereas in the working case
> (default region) it was closed only after the full 100 MB had been read.
>
> Could someone throw some light on what is going wrong here?
>
> Thanks,
> Arunagiri
>
>
