On Thu, 2019-06-06 at 14:34 +0200, Joan grupoventus wrote:
> No ... that's the point ... we are trying all kinds of tests ... no
> success. Did you see the 'ServerHttpResponse.java' code? Do you think
> the code is correct?
> 

I cannot say with 100% certainty without seeing the complete code base.

Oleg


> We are thinking of replacing HttpClient 5 with AsyncClient 1.3 and
> leaving the rest of our code untouched, then checking again. That way
> we will at least know on which side the issue is.
> 
> We'll try to do this today ... I'll keep you informed.
> 
> Thanks,
> 
> Joan.
> 
> -----Original Message-----
> From: Oleg Kalnichevski <ol...@apache.org> 
> Sent: Thursday, June 6, 2019 2:27 PM
> To: HttpClient User Discussion <httpclient-users@hc.apache.org>
> Subject: Re: Response timeout issue
> 
> On Wed, 2019-06-05 at 16:19 +0200, Joan grupoventus wrote:
> > Hi Oleg,
> > 
> > Still dealing with these stuck connections. I could get traces from
> > HttpClient 5 and we see all connections busy:
> > 08:12:56.031        ex-00E1DFB8: acquiring connection with route {}->http://10.15.15.14:8082
> > 08:12:56.087        Connection request: [route: {}->http://10.15.15.14:8083][total kept alive: 0; route allocated: 108 of 1500; total allocated: 1500 of 1500]
> > 
> > This happened previously because we were not expanding the capacity
> > of the buffer in the 'updateCapacity' method. After your clarification
> > this was fixed:
> > 
> > @Override
> > public void updateCapacity(CapacityChannel cc) {
> >     try {
> >         cc.update(this.capacityToPerform);
> >     } catch (Exception e) {
> >         GlobalData.getInstance().getLoggerService().flushCluster(
> >                 "log.internal.error.server.http",
> >                 new Object[] { this.objCall.getNombre(), this.capacityToPerform, Utils.stackToString(e) },
> >                 true);
> >     }
> > }
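
For reference, the usual way to keep the window open is to hold on to the
CapacityChannel and give the bytes back as each chunk is consumed, rather than
updating it once with a precomputed value. Below is a minimal sketch against
the HttpCore 5 AsyncDataConsumer contract; the class, field and constant names
(BufferingDataConsumer, INITIAL_WINDOW) are illustrative, not taken from your
code, and INITIAL_WINDOW simply mirrors the 32768 default you mention later.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;

import org.apache.hc.core5.http.Header;
import org.apache.hc.core5.http.HttpException;
import org.apache.hc.core5.http.nio.AsyncDataConsumer;
import org.apache.hc.core5.http.nio.CapacityChannel;

// Illustrative body consumer that never leaves the transport with a zero-byte window.
class BufferingDataConsumer implements AsyncDataConsumer {

    private static final int INITIAL_WINDOW = 32 * 1024;

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private volatile CapacityChannel capacityChannel;

    @Override
    public void updateCapacity(CapacityChannel capacityChannel) throws IOException {
        this.capacityChannel = capacityChannel;
        // Always grant a non-zero initial window, even if no entity details were seen.
        capacityChannel.update(INITIAL_WINDOW);
    }

    @Override
    public void consume(ByteBuffer src) throws IOException {
        int n = src.remaining();
        byte[] chunk = new byte[n];
        src.get(chunk);
        buffer.write(chunk, 0, n);
        CapacityChannel channel = this.capacityChannel;
        if (channel != null) {
            channel.update(n); // hand the consumed bytes back to the I/O reactor
        }
    }

    @Override
    public void streamEnd(List<? extends Header> trailers) throws HttpException, IOException {
        // End of the body; 'buffer' now holds the complete entity.
    }

    @Override
    public void releaseResources() {
        // Nothing to release in this sketch.
    }
}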
> > 
> > But now, after several days of operation, the pool becomes full again,
> > and the only way to solve the issue is to restart it (stopping the
> > http instance and starting it again from our app). This happens at
> > 08:14:03.225 in the http log we have in this repository:
> > https://github.com/joanbalaguero/HttpClient.git
> > 
> > 08:14:03.054        Connection request: [route: {}->http://10.15.15.53:8083][total kept alive: 0; route allocated: 107 of 1500; total allocated: 1500 of 1500]
> > 08:14:03.080        Connection request: [route: {}->http://10.15.15.53:8080][total kept alive: 0; route allocated: 108 of 1500; total allocated: 1500 of 1500]
> > 08:14:03.107        Connection request: [route: {}->http://10.15.15.53:8082][total kept alive: 0; route allocated: 107 of 1500; total allocated: 1500 of 1500]
> > 08:14:03.192        ex-00E1E1D1: preparing request execution
> > 08:14:03.192        Target auth state: UNCHALLENGED
> > 08:14:03.192        Proxy auth state: UNCHALLENGED
> > 08:14:03.192        ex-00E1E1D1: acquiring connection with route {}->http://10.15.15.24:8080
> > 08:14:03.197        Shutdown connection pool GRACEFUL
> > 08:14:03.224        Connection request: [route: {}->http://10.15.15.24:8080][total kept alive: 0; route allocated: 10 of 1500; total allocated: 1500 of 1500]
> > 08:14:03.225        i/o-0041DB56: Shutdown connection GRACEFUL
> > 08:14:03.226        i/o-0041DB37: Shutdown connection GRACEFUL
> > 08:14:03.226        i/o-0041DB19: Shutdown connection GRACEFUL
> > 08:14:03.242        Connection pool shut down
> > 08:14:03.242        Shutdown GRACEFUL
> > 08:14:03.276        ex-00E1E1D2: preparing request execution
> > 08:14:03.276        Target auth state: UNCHALLENGED
> > 08:14:03.276        Proxy auth state: UNCHALLENGED
> > 08:14:03.276        ex-00E1E1D2: acquiring connection with route {}->http://10.15.15.53:8081
> > 08:14:03.277        Connection request: [route: {}->http://10.15.15.53:8081][total kept alive: 0; route allocated: 0 of 1500; total allocated: 0 of 1500]
> > 08:14:03.277        Connection leased: [route: {}->http://10.15.15.53:8081][total kept alive: 0; route allocated: 1 of 1500; total allocated: 1 of 1500]
> > 08:14:03.277        ep-00DFEDB0: acquired null
> > 08:14:03.277        ex-00E1E1D2: connection acquired
> > 08:14:03.279        ep-00DFEDB0: connected i/o-00423C70
> > 08:14:03.279        Connected to target
> > 08:14:03.279        Route fully established
> > 
> > The point is that the variable 'this.capacityToPerform' we use to
> > enlarge the buffer is only set in the 'consumeResponse' method when we
> > have a non-null EntityDetails (you can see it in ServerHttpResponse.java):
> > @Override
> > public void consumeResponse(HttpResponse response, EntityDetails entityDetails,
> >                             HttpContext context, FutureCallback<Void> fcback)
> >         throws IOException, HttpException {
> >     ( . . . .)
> >     if (entityDetails != null) {
> >         // GlobalData.getInstance().getHttpService().getBufferInInterface() is 32768
> >         this.capacityToPerform = entityDetails.getContentLength() > 0
> >                 ? (int) entityDetails.getContentLength()
> >                 : GlobalData.getInstance().getHttpService().getBufferInInterface();
> >         this.vbaos = new VentusByteArrayOutputStream(this.capacityToPerform);
> >     } else {
> >         fcback.completed(this.getResult());
> >         this.submitTask(null);
> >     }
> > }
> > 
> > So our guess (maybe wrong) is:
> > - There is some case where the 'updateCapacity' method is called
> > without 'consumeResponse' having been called first with a non-null
> > entity, so this.capacityToPerform is still 0 (a quick way to check this
> > is sketched right after this list).
> > - We are still doing something wrong when the 'EntityDetails' is null.
> > Before, we did nothing there, so control never returned to our app.
> > Now we call 'this.submitTask' and thus return control to our app (not
> > sure if completing the FutureCallback is necessary).
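
The first guess is easy to check directly in 'updateCapacity'. A minimal
diagnostic sketch, reusing the 'capacityToPerform' field from the snippets
above (the fallback constant and the System.err message are illustrative;
your internal logger would do just as well):

@Override
public void updateCapacity(CapacityChannel cc) throws IOException {
    int increment = this.capacityToPerform;
    if (increment <= 0) {
        // If this ever fires, updateCapacity ran before consumeResponse
        // assigned a capacity, which would confirm the first guess above.
        increment = 32 * 1024; // fall back to the 32768 default mentioned earlier
        System.err.println("updateCapacity invoked with zero capacity; using fallback window");
    }
    cc.update(increment);
}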
> > 
> > With the classic blocking model one must release the connection by
> > consuming the response, but with this asynchronous model we don't
> > explicitly close the HTTP connections, so we are not able to see
> > where we are leaking them.
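
On the async side, the counterpart of "consuming the response" is letting the
exchange run to completion: as far as I can tell, a leased connection is only
released (reused or discarded) once the body stream has been read to the end
(streamEnd) or the exchange fails or is cancelled. A sketch of the usual
termination paths for a custom AsyncResponseConsumer<Void>, assuming the
'fcback' callback and 'getResult()' from the snippets above have been saved
as fields:

@Override
public void streamEnd(List<? extends Header> trailers) throws HttpException, IOException {
    // Full body received: complete the exchange so the endpoint can be
    // released back to the pool.
    this.fcback.completed(this.getResult());
}

@Override
public void failed(Exception cause) {
    // An I/O or protocol error also ends the exchange.
    if (this.fcback != null) {
        this.fcback.failed(cause);
    }
}

@Override
public void releaseResources() {
    // Free any per-exchange buffers here; invoked once the exchange is over.
}

If none of these paths is ever reached for some responses (for example because
the capacity window stays at 0 and the body is never read to the end), the
connection stays leased, which would match the pool slowly filling up.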
> > 
> > We also uploaded a histogram just with the alive instances from 
> > "org.apache.hc" when the pool was exhausted.
> > 
> > Any help would be really appreciated.
> > 
> 
> Joan,
> 
> Are you able to reproduce the issue outside the PROD environment? I
> cannot glean anything useful from the log snippets you posted.
> 
> Oleg
> 
> 
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-users-unsubscr...@hc.apache.org
For additional commands, e-mail: httpclient-users-h...@hc.apache.org
