Hi,

I am trying to transfer about 100 MB of data to S3 in the ap-northeast-1
region. Multipart upload is enabled with the part size set to 32 MB.
However, after writing approx. 8 MB of the first 32 MB part, the put fails
with an IOException. The exception, captured with jclouds logging enabled,
is pasted below; a simplified sketch of our setup follows further down.

2017-02-27@05:46:58.779 E [validate-47] pCommandExecutorService:logError - error after writing 8785920/33554432 bytes to https://mcstore-ttv-validator-155b8afa-404b-499b-8708-10d393e7f313.s3.amazonaws.com/datathroughput?partNumber=1&uploadId=DeKeiT8KtfZS2gxx9R4klBrnqmnhknARLiU9Kt9DCAxpOP899o74LhgLWUys4u20ASe1dCN_H3yzqXFGTnOMI7nmwroqJU6QxI3bQ7dFy9HD5QJ3Ry7vDXOP3tz7qN0PVgRK4pZ6f5w8iPEg6dckig--
java.io.IOException: Error writing request body to server
        at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3494) ~[na:1.8.0-internal]
        at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3477) ~[na:1.8.0-internal]
        at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:53) ~[guava-17.0.jar:na]
        at com.google.common.io.ByteStreams.copy(ByteStreams.java:179) ~[guava-17.0.jar:na]
        at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.writePayloadToConnection(JavaUrlHttpCommandExecutorService.java:297) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:170) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:64) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:95) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156) [jclouds-core-1.9.1.jar:1.9.1]
        at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123) [jclouds-core-1.9.1.jar:1.9.1]
        at com.sun.proxy.$Proxy42.uploadPart(Unknown Source) [na:1.8.0-internal]
        at org.jclouds.s3.blobstore.strategy.internal.SequentialMultipartUploadStrategy.prepareUploadPart(SequentialMultipartUploadStrategy.java:111) [s3-1.9.1.jar:1.9.1]
        at org.jclouds.s3.blobstore.strategy.internal.SequentialMultipartUploadStrategy.execute(SequentialMultipartUploadStrategy.java:93) [s3-1.9.1.jar:1.9.1]
        at org.jclouds.aws.s3.blobstore.AWSS3BlobStore.putBlob(AWSS3BlobStore.java:87) [aws-s3-1.9.1.jar:1.9.1]
        at com.ibm.icstore.modules.BlobStoreConnection$BSCDoneHandler.run(BlobStoreConnection.java:749) [icstore-core-0.30.0-SNAPSHOT.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0-internal]
        at java.util.concurrent.FutureTask.run(FutureTask.java:267) [na:1.8.0-internal]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) [na:1.8.0-internal]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:618) [na:1.8.0-internal]
        at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]

The same code works fine against the default S3 location but fails when
pointed at specific regions such as ap-northeast-1, ap-southeast-1, etc.
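
For context, the upload path boils down to something like this (a
simplified sketch, not our exact code; the bucket name and credentials are
placeholders, and jclouds.mpu.parts.size is the property we believe
controls the part size):

import java.util.Properties;

import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.options.PutOptions;

public class RegionalMultipartPut {
    public static void main(String[] args) {
        Properties overrides = new Properties();
        // Regional endpoint; leaving this at the default is the working case.
        overrides.setProperty("jclouds.endpoint", "https://s3-ap-northeast-1.amazonaws.com");
        // Part size for multipart uploads: 32 MB (33554432, matching the log above).
        overrides.setProperty("jclouds.mpu.parts.size", String.valueOf(32L * 1024 * 1024));

        BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
                .credentials("ACCESS_KEY", "SECRET_KEY") // placeholders
                .overrides(overrides)
                .buildView(BlobStoreContext.class);
        try {
            BlobStore blobStore = context.getBlobStore();
            Blob blob = blobStore.blobBuilder("datathroughput")
                    .payload(new byte[100 * 1024 * 1024]) // stands in for our generated data
                    .build();
            // multipart() routes putBlob through SequentialMultipartUploadStrategy,
            // which is where uploadPart fails in the trace above.
            blobStore.putBlob("my-bucket", blob, PutOptions.Builder.multipart());
        } finally {
            context.close();
        }
    }
}

The endpoint override is the only thing that changes between the working
and failing runs.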

Further debugging showed that the input stream we use to generate the data
was closed after only 32 MB had been read, whereas in the working case
against the default region it was closed only after the full 100 MB had
been read.
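
In the real code the payload is a one-shot generated stream, handed to
jclouds roughly like this (again a simplified, illustrative sketch):

import java.io.InputStream;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;

public class StreamPayloadSketch {
    // A plain InputStream payload can only be consumed once; jclouds cannot
    // rewind it, so once the stream is closed the remaining data is gone.
    static Blob fromGeneratedStream(BlobStore blobStore, InputStream generated, long totalBytes) {
        return blobStore.blobBuilder("datathroughput")
                .payload(generated)        // one-shot stream, ~100 MB in our test
                .contentLength(totalBytes) // we set the expected length up front
                .build();
    }
}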

Could someone shed some light on what is going wrong here?

Thanks,
Arunagiri
