Hi Oleg,

Thank you for your response.

> Yes, it was. It was necessary in order to have a finer control over the
> state of TLS session.

I understand.
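
For anyone reading this thread in the archive later: I believe the flag in
question corresponds to the autoClose argument of
javax.net.ssl.SSLSocketFactory#createSocket(Socket, String, int, boolean).
A minimal illustration of what that flag controls follows; this is not the
actual AbstractClientTlsStrategy code, and the class name, method name,
host, port and sockets are placeholders of mine.

    import java.io.IOException;
    import java.net.Socket;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class AutoCloseSketch {
        public static SSLSocket layerTls(final Socket plainSocket,
                                         final String host,
                                         final int port) throws IOException {
            final SSLSocketFactory factory =
                    (SSLSocketFactory) SSLSocketFactory.getDefault();
            // autoClose = false: closing the SSLSocket no longer closes the
            // underlying plain socket automatically, which is what gives the
            // caller finer control over the TLS session shutdown.
            return (SSLSocket) factory.createSocket(plainSocket, host, port, false);
        }
    }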

> The connection is meant to be inactive while it sits idle in the pool.
> What is wrong however is the connection pool trying to execute a
> potentially blocking i/o operation while holding a global lock on the
> pool. This is wrong and needs to be fixed.
>
> Please raise a JIRA for this defect

I have raised a JIRA:
- https://issues.apache.org/jira/browse/HTTPCLIENT-2399
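
Purely as an illustration of the pattern you describe, and not the actual
StrictConnPool code (all of the names below are made up): I would expect the
fix to collect the expired entries while the pool lock is held and to perform
the potentially blocking close calls only after the lock has been released.

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.concurrent.locks.ReentrantLock;

    public class CloseOutsideLockSketch {
        private final ReentrantLock lock = new ReentrantLock();
        private final List<Closeable> available = new ArrayList<>();

        public void closeExpired() {
            final List<Closeable> expired = new ArrayList<>();
            lock.lock();
            try {
                // Only bookkeeping happens under the lock: decide what has
                // expired and remove it from the pool's data structures
                // (the actual expiry check is omitted in this sketch).
                for (final Iterator<Closeable> it = available.iterator(); it.hasNext();) {
                    expired.add(it.next());
                    it.remove();
                }
            } finally {
                lock.unlock();
            }
            // The potentially blocking i/o (for example an SSL close waiting
            // for a close_notify) now runs without stalling the threads that
            // need the pool lock to lease a connection.
            for (final Closeable entry : expired) {
                try {
                    entry.close();
                } catch (final IOException ignore) {
                }
            }
        }
    }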

Thank you!
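
For completeness, in case it helps anyone who runs into the same problem:
the conditions described in my original message below correspond roughly to
a client configured as in the following sketch. The class name, the proxy
host and port, and the eviction interval are placeholders, not my real
values.

    import org.apache.hc.client5.http.config.TlsConfig;
    import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
    import org.apache.hc.client5.http.impl.classic.HttpClients;
    import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
    import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
    import org.apache.hc.core5.http.HttpHost;
    import org.apache.hc.core5.http.ssl.TLS;
    import org.apache.hc.core5.util.TimeValue;

    public class EvictorSetupSketch {
        public static CloseableHttpClient build() {
            // Pool restricted to TLS 1.2, matching the conditions below.
            final PoolingHttpClientConnectionManager cm =
                    PoolingHttpClientConnectionManagerBuilder.create()
                            .setDefaultTlsConfig(TlsConfig.custom()
                                    .setSupportedProtocols(TLS.V_1_2)
                                    .build())
                            .build();
            return HttpClients.custom()
                    .setConnectionManager(cm)
                    // All traffic goes through a forward proxy (placeholder).
                    .setProxy(new HttpHost("http", "proxy.example.com", 8080))
                    // Enables the IdleConnectionEvictor thread, which
                    // periodically calls
                    // PoolingHttpClientConnectionManager#closeExpired.
                    .evictExpiredConnections()
                    .evictIdleConnections(TimeValue.ofSeconds(30))
                    .build();
        }
    }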

> -----Original Message-----
> From: Oleg Kalnichevski <[email protected]>
> Sent: Friday, September 12, 2025 4:21 AM
> To: HttpComponents Project <[email protected]>
> Subject: Re: An issue with IdleConnectionEvictor freezing while holding a
> StrictConnPool lock in HttpClient 5.4.3
>
> On Thu, 2025-09-11 at 10:46 +0000, [email protected] wrote:
> > Hi all
> >
> > Please excuse any awkward phrasing in my message, as I am not a native
> > English speaker and am using translation software.
> >
> > I'm sharing some information and asking some questions about an issue
> > I encountered where the IdleConnectionEvictor thread in HttpClient
> > 5.4.3 freezes while holding a lock on StrictConnPool, making it
> > impossible to acquire a connection.
> > This issue does not occur in HttpClient 5.3.x or in 5.4.4 and later.
> >
> > The issue occurs under the following conditions:
> > - Version: 5.4 through 5.4.3
> > - Connection Manager: Using PoolingHttpClientConnectionManager
> > - Connectivity: Connecting via a proxy
> > - TLS Protocol: Using TLS 1.2 (does not occur with TLS 1.3)
> > - Trigger: When IdleConnectionEvictor attempts to close an expired
> >   socket and fails to receive a close_notify message, it waits
> >   indefinitely in Socket.read().
> >
> > The IdleConnectionEvictor periodically attempts to close expired
> > connections. During this process, it holds a lock via the
> > StrictConnPool's enumAvailable method. If it fails to receive a
> > close_notify message, it waits forever while still holding the lock.
> >
> > The stack trace of the IdleConnectionEvictor thread when it freezes
> > while waiting for close_notify is as follows:
> >
> > "idle-connection-evictor-1" #326 daemon prio=5 ... runnable ...
> >    java.lang.Thread.State: RUNNABLE
> >     at sun.nio.ch.Net.poll([email protected]/Native Method)
> >     at sun.nio.ch.NioSocketImpl.park([email protected]/NioSocketImpl.java:186)
> >     at sun.nio.ch.NioSocketImpl.park([email protected]/NioSocketImpl.java:195)
> >     at sun.nio.ch.NioSocketImpl.implRead([email protected]/NioSocketImpl.java:319)
> >     at sun.nio.ch.NioSocketImpl.read([email protected]/NioSocketImpl.java:355)
> >     at sun.nio.ch.NioSocketImpl$1.read([email protected]/NioSocketImpl.java:808)
> >     at java.net.Socket$SocketInputStream.read([email protected]/Socket.java:966)
> >     at sun.security.ssl.SSLSocketInputRecord.read([email protected]/SSLSocketInputRecord.java:484)
> >     at sun.security.ssl.SSLSocketInputRecord.readHeader([email protected]/SSLSocketInputRecord.java:478)
> >     at sun.security.ssl.SSLSocketInputRecord.decode([email protected]/SSLSocketInputRecord.java:160)
> >     at sun.security.ssl.SSLTransport.decode([email protected]/SSLTransport.java:111)
> >     at sun.security.ssl.SSLSocketImpl.decode([email protected]/SSLSocketImpl.java:1510)
> >     at sun.security.ssl.SSLSocketImpl.waitForClose([email protected]/SSLSocketImpl.java:1847)
> >     at sun.security.ssl.SSLSocketImpl.closeSocket([email protected]/SSLSocketImpl.java:1821)
> >     at sun.security.ssl.SSLSocketImpl.shutdown([email protected]/SSLSocketImpl.java:1766)
> >     at sun.security.ssl.SSLSocketImpl.bruteForceCloseInput([email protected]/SSLSocketImpl.java:799)
> >     at sun.security.ssl.SSLSocketImpl.duplexCloseOutput([email protected]/SSLSocketImpl.java:664)
> >     at sun.security.ssl.SSLSocketImpl.close([email protected]/SSLSocketImpl.java:584)
> >     at org.apache.hc.core5.io.Closer.close(Closer.java:48)
> >     at org.apache.hc.core5.io.Closer.closeQuietly(Closer.java:71)
> >     at org.apache.hc.core5.http.impl.io.BHttpConnectionBase.close(BHttpConnectionBase.java:268)
> >     at org.apache.hc.core5.http.impl.io.DefaultBHttpClientConnection.close(DefaultBHttpClientConnection.java:71)
> >     at org.apache.hc.client5.http.impl.io.DefaultManagedHttpClientConnection.close(DefaultManagedHttpClientConnection.java:176)
> >     at org.apache.hc.core5.pool.PoolEntry.discardConnection(PoolEntry.java:180)
> >     at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager.closeIfExpired(PoolingHttpClientConnectionManager.java:650)
> >     at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager$1.lambda$closeExpired$0(PoolingHttpClientConnectionManager.java:228)
> >     at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager$1$$Lambda$3021/0x00007f03a26afb60.execute(Unknown Source)
> >     at org.apache.hc.core5.pool.StrictConnPool.enumAvailable(StrictConnPool.java:590)
> >     at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager$1.closeExpired(PoolingHttpClientConnectionManager.java:228)
> >     at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager.closeExpired(PoolingHttpClientConnectionManager.java:542)
> >     at org.apache.hc.client5.http.impl.IdleConnectionEvictor.lambda$new$0(IdleConnectionEvictor.java:61)
> >     at org.apache.hc.client5.http.impl.IdleConnectionEvictor$$Lambda$2977/0x00007f03a2686b48.run(Unknown Source)
> >     at java.lang.Thread.run([email protected]/Thread.java:840)
> >
> > In this state, attempts to get a connection from the pool fail with a
> > DeadlineTimeoutException because the StrictConnPool lock cannot be
> > acquired.
> >
> > The stack trace for the DeadlineTimeoutException is as follows:
> >
> > Caused by: org.apache.hc.core5.util.DeadlineTimeoutException:
> > Deadline: 2025-05-11T23:30:14.996+0000, 0 MILLISECONDS overdue
> >     at org.apache.hc.core5.util.DeadlineTimeoutException.from(DeadlineTimeoutException.java:49)
> >     at org.apache.hc.core5.pool.StrictConnPool.lease(StrictConnPool.java:221)
> >     at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager.lease(PoolingHttpClientConnectionManager.java:326)
> >     at org.apache.hc.client5.http.impl.classic.InternalExecRuntime.acquireEndpoint(InternalExecRuntime.java:105)
> >     ... 138 common frames omitted
> >
> > Analysis of the Cause and Fix
> >
> > The reason this bug does not occur in version 5.3.x appears to be a
> > change made in the AbstractClientTlsStrategy in a previous commit
> > (HTTPCLIENT-2328), which changed the autoClose setting for the socket
> > from true to false.
> >
> > - https://github.com/apache/httpcomponents-client/commit/ee0a10210#diff-a5e74a0fa48dd91a1e1bac65ff84722564da9e8b73fa835551c811a845aa2f2dL207-R208
> > - HTTPCLIENT-2328: Blocking i/o connections to check if the opposite
> >   TLS endpoint has been closed by the opposite endpoint while writing
> >   out request body
> > - https://issues.apache.org/jira/browse/HTTPCLIENT-2328
> >
> > The issue was resolved in version 5.4.4 by the fix for HTTPCLIENT-2364.
> > This fix ensures that when DefaultHttpClientConnectionOperator.upgrade()
> > upgrades to an SSLSocket, the baseSocket is also passed to
> > ManagedHttpClientConnection.bind().
> > This allows the close method for expired connections to be called on
> > the baseSocket, which prevents the deadlock.
> > - https://github.com/apache/httpcomponents-client/commit/f3b1536843
> > - HTTPCLIENT-2364: Fixed incorrect re-binding of the upgraded SSL socket
> >   to the HTTP connection by the #upgrade method of the
> >   DefaultHttpClientConnectionOperator
> > - https://issues.apache.org/jira/browse/HTTPCLIENT-2364
> >
> > I have two questions:
> >
> > - Given the bug I encountered, was the change to set autoClose from
> >   true to false in the HTTPCLIENT-2328 fix the correct decision?
>
> Yes, it was. It was necessary in order to have a finer control over the
> state of TLS session.
>
> > - I tried setting a SocketTimeout to prevent the indefinite wait, but
> >   it didn't work.
> >   I found that this was because
> >   PoolingHttpClientConnectionManager.release() calls
> >   DefaultManagedHttpClientConnection.passivate(), which sets the socket
> >   timeout to Timeout.ZERO_MILLISECONDS when a connection is returned to
> >   the pool.
> >   Is it standard practice or correct to reset the timeout to an
> >   indefinite value when a connection is returned to the pool?
>
> The connection is meant to be inactive while it sits idle in the pool.
> What is wrong however is the connection pool trying to execute a
> potentially blocking i/o operation while holding a global lock on the
> pool. This is wrong and needs to be fixed.
>
> Please raise a JIRA for this defect
>
> https://issues.apache.org/jira/browse/HTTPCLIENT
>
> Oleg
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
