GitHub user zwoop opened an issue:

    https://github.com/apache/trafficserver/issues/1593

    Thread "hangs" on Docs

    Running the latest 7.1.x branch on Docs, with #1583 applied as well, I started
    seeing a thread consuming a *lot* of CPU: on the order of 1000x more than the
    other threads (3-5 CPU hours instead of CPU minutes). Oddly enough, the problem
    clears itself on its own; in one case it cleared just as I attached gdb to it.
    
    This has happened twice so far: once it ran for 3 CPU hours, and once for 5 CPU
    hours, before clearing. I got a stack trace from the wedged thread and examined
    it at least 10 times; each time I saw exactly the same stack trace.
    Unfortunately, I did not get a chance to run perf against it, and I had no gdb
    session before it cleared.
    
    The trace is:
    
    ```
    root@qa1 377/0 # pstack 17665
    Thread 1 (Thread 0x2aaaab7ff700 (LWP 17665)):
    #0  0x000000000073045a in MIMEFieldBlockImpl::contains (this=0x2b068ea9c158, field=0x2b0695cc3158) at MIME.cc:3717
    #1  0x000000000072a97d in mime_hdr_field_slotnum (mh=0x2aaaac81c088, field=0x2b0695cc3158) at MIME.cc:1670
    #2  0x0000000000727a5f in mime_hdr_sanity_check (mh=0x2aaaac81c088) at MIME.cc:581
    #3  0x000000000072a91e in mime_hdr_field_delete (heap=0x2aaaac81c000, mh=0x2aaaac81c088, field=0x2b0695cc2cd8, delete_all_dups=false) at MIME.cc:1655
    #4  0x00000000005c8be0 in MIMEHdr::field_delete (this=0x2b068e865050, field=0x2b0695cc2cd8, delete_all_dups=false) at /usr/local/src/trafficserver/proxy/hdrs/MIME.h:1169
    #5  0x00000000006b77f6 in HpackDynamicTable::add_header_field (this=0x2b068e81a150, field=0x2aaad602c128) at HPACK.cc:343
    #6  0x00000000006b763c in HpackIndexingTable::add_header_field (this=0x2b068e8193d0, field=0x2aaad602c128) at HPACK.cc:295
    #7  0x00000000006b7d96 in encode_literal_header_field_with_indexed_name (buf_start=0x2b068e825c01 "\276\344\325\364\360\363\356\355\064\016\305\004", buf_end=0x2b068e825e82 "\177", header=..., index=33, indexing_table=..., type=HPACK_FIELD_INDEXED_LITERAL) at HPACK.cc:520
    #8  0x00000000006b8ea9 in hpack_encode_header_block (indexing_table=..., out_buf=0x2b068e825c00 "\210\276\344\325\364\360\363\356\355\064\016\305\004", out_buf_len=642, hdr=0x2aaaab7fa340) at HPACK.cc:963
    #9  0x00000000006af1d6 in http2_encode_header_blocks (in=0x2aaaab7fa340, out=0x2b068e825c00 "\210\276\344\325\364\360\363\356\355\064\016\305\004", out_len=642, len_written=0x2aaaab7fa298, handle=...) at HTTP2.cc:603
    #10 0x00000000006c2552 in Http2ConnectionState::send_headers_frame (this=0x2aaab201ddf8, stream=0x2aaad0c0c780) at Http2ConnectionState.cc:1271
    #11 0x00000000006b28bd in Http2Stream::update_write_request (this=0x2aaad0c0c780, buf_reader=0x2aaab0606940, write_len=322, call_update=false) at Http2Stream.cc:561
    #12 0x00000000006b15d7 in Http2Stream::do_io_write (this=0x2aaad0c0c780, c=0x2aaab1cb8988, nbytes=322, abuffer=0x2aaab0606940, owner=false) at Http2Stream.cc:314
    #13 0x00000000006a867a in HttpTunnel::producer_run (this=0x2aaab1cb8988, p=0x2aaab1cb8b90) at HttpTunnel.cc:980
    #14 0x00000000006a7dfd in HttpTunnel::tunnel_run (this=0x2aaab1cb8988, p_arg=0x2aaab1cb8b90) at HttpTunnel.cc:797
    #15 0x000000000064d140 in HttpSM::handle_api_return (this=0x2aaab1cb7640) at HttpSM.cc:1663
    #16 0x000000000064ca83 in HttpSM::state_api_callout (this=0x2aaab1cb7640, event=0, data=0x0) at HttpSM.cc:1545
    #17 0x000000000065b70c in HttpSM::do_api_callout_internal (this=0x2aaab1cb7640) at HttpSM.cc:5173
    #18 0x0000000000663f4a in HttpSM::set_next_state (this=0x2aaab1cb7640) at HttpSM.cc:7430
    #19 0x0000000000662ee5 in HttpSM::call_transact_and_set_next_state (this=0x2aaab1cb7640, f=0x0) at HttpSM.cc:7206
    #20 0x000000000064cc96 in HttpSM::handle_api_return (this=0x2aaab1cb7640) at HttpSM.cc:1607
    #21 0x000000000064ca83 in HttpSM::state_api_callout (this=0x2aaab1cb7640, event=0, data=0x0) at HttpSM.cc:1545
    #22 0x000000000065b70c in HttpSM::do_api_callout_internal (this=0x2aaab1cb7640) at HttpSM.cc:5173
    #23 0x000000000066a85d in HttpSM::do_api_callout (this=0x2aaab1cb7640) at HttpSM.cc:439
    #24 0x000000000064e201 in HttpSM::state_read_server_response_header (this=0x2aaab1cb7640, event=100, data=0x2aaab2a09700) at HttpSM.cc:1963
    #25 0x0000000000650ef6 in HttpSM::main_handler (this=0x2aaab1cb7640, event=100, data=0x2aaab2a09700) at HttpSM.cc:2663
    #26 0x000000000057472a in Continuation::handleEvent (this=0x2aaab1cb7640, event=100, data=0x2aaab2a09700) at /usr/local/src/trafficserver/iocore/eventsystem/I_Continuation.h:153
    #27 0x0000000000806e72 in read_signal_and_update (event=100, vc=0x2aaab2a095d0) at UnixNetVConnection.cc:145
    #28 0x0000000000807ea6 in read_from_net (nh=0x2b068cd0ae50, vc=0x2aaab2a095d0, thread=0x2b068cd07000) at UnixNetVConnection.cc:411
    #29 0x0000000000809f95 in UnixNetVConnection::net_read_io (this=0x2aaab2a095d0, nh=0x2b068cd0ae50, lthread=0x2b068cd07000) at UnixNetVConnection.cc:1001
    #30 0x00000000007ff35b in NetHandler::mainNetEvent (this=0x2b068cd0ae50, event=5, e=0x2aaaabc059a0) at UnixNet.cc:509
    #31 0x000000000057472a in Continuation::handleEvent (this=0x2b068cd0ae50, event=5, data=0x2aaaabc059a0) at /usr/local/src/trafficserver/iocore/eventsystem/I_Continuation.h:153
    #32 0x000000000082b68e in EThread::process_event (this=0x2b068cd07000, e=0x2aaaabc059a0, calling_code=5) at UnixEThread.cc:143
    #33 0x000000000082bc6d in EThread::execute (this=0x2b068cd07000) at UnixEThread.cc:270
    #34 0x000000000082aca8 in spawn_thread_internal (a=0x2b068baaeac0) at Thread.cc:84
    #35 0x00002b0689cf4dc5 in start_thread (arg=0x2aaaab7ff700) at pthread_create.c:308
    #36 0x00002b068a50673d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
    ```
    
    One possible (but, according to Oknet, unlikely) scenario is that with #1583 we
    no longer crash, and therefore this problem is now showing up instead.
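    
    For what it's worth, frames #0-#2 are the walk over the header's chain of
    field blocks. A simplified sketch of that kind of walk (hypothetical names and
    types, not the actual proxy/hdrs/MIME.cc code) shows one way it could fail to
    terminate: if the chain of blocks ever becomes circular, the loop below never
    exits and the thread spins at 100% CPU.
    
    ```
    /* Simplified illustration only -- names and types are hypothetical,
     * not the actual proxy/hdrs/MIME.cc implementation. */
    struct Field {
      // name/value members elided for brevity
    };
    
    struct FieldBlock {
      static constexpr int SLOTS = 16;
      Field       slots[SLOTS];
      FieldBlock *next; // singly linked chain of blocks owned by one header
    
      // True if `f` points into this block's slot array.
      bool contains(const Field *f) const { return f >= &slots[0] && f <= &slots[SLOTS - 1]; }
    };
    
    // Walk the chain to find the slot number of `field`, or -1 if absent.
    // If `next` ever points back into the chain (a cycle, e.g. via a stale
    // or corrupted pointer), this loop never terminates.
    int field_slotnum(const FieldBlock *first, const Field *field)
    {
      int base = 0;
      for (const FieldBlock *b = first; b != nullptr; b = b->next) {
        if (b->contains(field)) {
          return base + static_cast<int>(field - &b->slots[0]);
        }
        base += FieldBlock::SLOTS;
      }
      return -1;
    }
    ```
    
    Even without a cycle, if the sanity check in frame #2 does a walk like this
    once per field, a pathologically long chain of blocks could burn a similar
    amount of CPU; I have not confirmed which of the two is happening here.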
