[ 
https://issues.apache.org/jira/browse/HDDS-1213?focusedWorklogId=208134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-208134
 ]

ASF GitHub Bot logged work on HDDS-1213:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Mar/19 22:47
            Start Date: 05/Mar/19 22:47
    Worklog Time Spent: 10m 
      Work Description: bharatviswa504 commented on issue #549: HDDS-1213. 
Support plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-469889963
 
 
   Overall the patch LGTM.
   Thank you @elek for the offline discussion.
   Yes, I see that when the file size is large, `cp` uses a multipart upload request.
   
   A few minor comments:
   1. Remove the unnecessary whitespace/indentation changes.
   2. Remove platform.system(), as we don't need it now.
   
   I see the error below when running the test manually with the docker ozones3 cluster:
   
   ```
   s3g_1           | 2019-03-05 22:37:40 WARN  HttpChannel:499 - 
//localhost:9878/b12345/mm11?partNumber=3&uploadId=d51657f3-3276-49d0-977a-af9f4bab94bc-19533321172195
   s3g_1           | javax.servlet.ServletException: 
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: 
java.lang.OutOfMemoryError: Java heap space
   s3g_1           |    at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
   s3g_1           |    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
   s3g_1           |    at 
org.eclipse.jetty.server.Server.handle(Server.java:539)
   s3g_1           |    at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
   s3g_1           |    at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
   s3g_1           |    at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
   s3g_1           |    at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
   s3g_1           |    at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
   s3g_1           |    at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
   s3g_1           |    at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
   s3g_1           |    at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
   s3g_1           |    at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
   s3g_1           |    at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
   s3g_1           |    at java.lang.Thread.run(Thread.java:748)
   s3g_1           | Caused by: javax.servlet.ServletException: 
org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: 
Java heap space
   s3g_1           |    at 
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
   s3g_1           |    at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
   s3g_1           |    at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
   s3g_1           |    at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
   s3g_1           |    at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
   s3g_1           |    at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
   s3g_1           |    at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
   s3g_1           |    at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
   s3g_1           |    at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
   s3g_1           |    at 
org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
   s3g_1           |    at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
   s3g_1           |    at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
   s3g_1           |    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
   s3g_1           |    at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
   s3g_1           |    at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
   s3g_1           |    at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
   s3g_1           |    at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
   s3g_1           |    at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   s3g_1           |    at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
   s3g_1           |    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   s3g_1           |    at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
   s3g_1           |    ... 13 more
   s3g_1           | Caused by: org.glassfish.jersey.server.ContainerException: 
java.lang.OutOfMemoryError: Java heap space
   ```
   
   I see this error when allocating a buffer. By default we allocate two 64 MB buffers for all buffer lists. Any idea how to resolve this?
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 208134)
    Time Spent: 1h 40m  (was: 1.5h)

> Support plain text S3 MPU initialization request
> ------------------------------------------------
>
>                 Key: HDDS-1213
>                 URL: https://issues.apache.org/jira/browse/HDDS-1213
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: S3
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Blocker
>              Labels: pull-request-available
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> S3 Multi-Part Upload (MPU) was recently implemented in the Ozone s3 gateway.
> We have extensive testing using the 'aws s3api' application, which passes.
> But it turned out that the simpler `aws s3 cp` command fails with a _405 
> Media type not supported error_ message.
> The root cause of this issue is the JAXRS implementation of the multipart 
> upload method:
> {code}
>   @POST
>   @Produces(MediaType.APPLICATION_XML)
>   public Response multipartUpload(
>       @PathParam("bucket") String bucket,
>       @PathParam("path") String key,
>       @QueryParam("uploads") String uploads,
>       @QueryParam("uploadId") @DefaultValue("") String uploadID,
>       CompleteMultipartUploadRequest request) throws IOException, 
> OS3Exception {
>     if (!uploadID.equals("")) {
>       //Complete Multipart upload request.
>       return completeMultipartUpload(bucket, key, uploadID, request);
>     } else {
>       // Initiate Multipart upload request.
>       return initiateMultipartUpload(bucket, key);
>     }
>   }
> {code}
> Here we have a CompleteMultipartUploadRequest parameter, which is created by 
> the JAXRS framework based on the media type and the request body. With 
> _Content-Type: application/xml_ it's easy: the JAXRS framework uses the 
> built-in JAXB serialization. But with a text/plain content type it's not 
> possible, as there is no deserialization support for 
> CompleteMultipartUploadRequest from text/plain.
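One possible direction (a rough, stdlib-only sketch, not necessarily what the patch does): a custom javax.ws.rs.ext.MessageBodyReader registered for this endpoint could parse the XML body itself regardless of the declared Content-Type. The parsing step such a reader would delegate to might look like this; the Part/PartNumber/ETag element names follow the S3 CompleteMultipartUpload schema, and the MpuBodyParser class and Part helper are hypothetical names:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical sketch: parse a CompleteMultipartUpload XML body by hand, so a
// custom MessageBodyReader could accept it under any Content-Type (including
// text/plain, which the JAXB-based built-in reader rejects).
public class MpuBodyParser {

  // Minimal stand-in for one <Part> entry of the request (names assumed).
  static class Part {
    final int partNumber;
    final String eTag;
    Part(int partNumber, String eTag) {
      this.partNumber = partNumber;
      this.eTag = eTag;
    }
  }

  // Parse the request body with the JDK's DOM parser, ignoring Content-Type.
  static List<Part> parseParts(InputStream body) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder().parse(body);
    List<Part> parts = new ArrayList<>();
    NodeList nodes = doc.getElementsByTagName("Part");
    for (int i = 0; i < nodes.getLength(); i++) {
      Element e = (Element) nodes.item(i);
      int partNumber = Integer.parseInt(
          e.getElementsByTagName("PartNumber").item(0).getTextContent().trim());
      String eTag =
          e.getElementsByTagName("ETag").item(0).getTextContent().trim();
      parts.add(new Part(partNumber, eTag));
    }
    return parts;
  }

  public static void main(String[] args) throws Exception {
    String xml = "<CompleteMultipartUpload>"
        + "<Part><PartNumber>1</PartNumber><ETag>etag-1</ETag></Part>"
        + "<Part><PartNumber>2</PartNumber><ETag>etag-2</ETag></Part>"
        + "</CompleteMultipartUpload>";
    List<Part> parts =
        parseParts(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    System.out.println(parts.size() + " " + parts.get(0).partNumber
        + " " + parts.get(1).eTag);
  }
}
```

A MessageBodyReader built around this would declare isReadable for CompleteMultipartUploadRequest and simply ignore the declared media type, which is one way to make the plain-text `aws s3 cp` request deserializable.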



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
