https://bz.apache.org/bugzilla/show_bug.cgi?id=63170

--- Comment #19 from Warren D. Johnson <[email protected]> ---
Hi Stefan,

We have been running the reverse proxy with your latest code changes for several days now. We did hit two out-of-memory conditions, but I think those were simply due to the server not having enough memory; I've increased it and don't expect that to be a problem going forward.

I did my initial testing with only one website serviced by this reverse proxy.
Everything worked fine, with no out-of-memory conditions or segmentation
faults. Feeling confident, I added several more websites behind the reverse
proxy and let it run for a day or two. During that time we saw sporadic but
recurring high CPU usage. On a similar reverse proxy that uses only HTTP/1,
the load average rarely exceeds 0.5; in fact, it is usually below 0.2.

See the attachment cpu_usage.jpg: the CPU usage is clearly very high, even
though there is not much traffic (maybe a few visitors per second) during
these periods.

I recorded a trace8-level debug log on the reverse proxy, and another
trace8-level debug log on the website's server. I've attached them as

"trace8 level debugging on busier reverse proxy server http2"
and
"trace8 level debugging on website server (not reverse proxy)"

I am not an expert, but when I look through those logs I see a lot of
"Resource temporarily unavailable" entries. I also notice that the Apache
process on the reverse proxy appears to do its job properly but then stalls:
it waits about five seconds, gets no response, times out, and closes the
connection. Somewhere in that sequence it is eating up CPU. I do not mind the
process waiting and then timing out; the excessive CPU usage is the real
problem. If I stop Apache completely, the load on the server drops to 0.

After the five-second timeout, we get these lines:

[Sun Feb 24 22:09:54 2019] [trace1] [pid 17223] h2_session.c(2333): [client
1.2.3.4:56750] h2_session(138,IDLE,0): pre_close

[Sun Feb 24 22:09:54 2019] [debug] [pid 17223] h2_session.c(589): [client
1.2.3.4:56750] AH03068: h2_session(138,IDLE,0): sent FRAME[GOAWAY[error=0,
reason='timeout', last_stream=39]], frames=39/52 (r/s)

[Sun Feb 24 22:09:54 2019] [trace2] [pid 17223] h2_conn_io.c(123): [client
1.2.3.4:56750] h2_session(145)-out: heap[24] flush 

[Sun Feb 24 22:09:54 2019] [debug] [pid 17223] h2_session.c(715): [client
1.2.3.4:56750] AH03069: h2_session(138,IDLE,0): sent GOAWAY, err=0, msg=timeout

[Sun Feb 24 22:09:54 2019] [debug] [pid 17223] h2_session.c(1655): [client
1.2.3.4:56750] AH03078: h2_session(138,DONE,0): transit [IDLE] -- local goaway
--> [DONE]

[Sun Feb 24 22:09:54 2019] [trace1] [pid 17223] h2_session.c(725): [client
1.2.3.4:56750] h2_session(138,DONE,0): pool_cleanup

[Sun Feb 24 22:09:54 2019] [debug] [pid 17223] h2_session.c(1655): [client
1.2.3.4:56750] AH03078: h2_session(138,CLEANUP,0): transit [DONE] -- pre_close
--> [CLEANUP]

[Sun Feb 24 22:09:54 2019] [trace2] [pid 17223] h2_mplx.c(435): [client
1.2.3.4:56750] h2_mplx(138): start release

[Sun Feb 24 22:09:54 2019] [trace1] [pid 17223] h2_ngn_shed.c(144): [client
1.2.3.4:56750] AH03394: h2_ngn_shed(145): abort

[Sun Feb 24 22:09:54 2019] [trace1] [pid 17223] h2_mplx.c(497): [client
1.2.3.4:56750] h2_mplx(138): released


I did a little searching and found someone reporting a similar problem with
HTTP/2 (though not in a reverse proxy, as far as I can tell) on an older
version of Apache. I'm not sure if it's related, but here is the link:

https://www.redhat.com/archives/sclorg/2017-December/msg00001.html

As always, I appreciate your help!

-- 
You are receiving this mail because:
You are the assignee for the bug.