[
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Adam Holmberg updated CASSANDRA-15013:
--------------------------------------
Impacts: Clients
> Prevent client requests from blocking on executor task queue
> ------------------------------------------------------------
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
> Issue Type: Bug
> Components: Messaging/Client
> Reporter: Sumanth Pasupuleti
> Assignee: Sumanth Pasupuleti
> Priority: Normal
> Labels: pull-request-available
> Fix For: 3.0.19, 3.11.5, 4.0
>
> Attachments: 15013-3.0.txt, 15013-3.11.txt, 15013-trunk.txt,
> BlockedEpollEventLoopFromHeapDump.png,
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap
> dump showing each ImmediateFlusher taking upto 600MB.png,
> perftest2_15013_base_flamegraph.svg, perftest2_15013_patch_flamegraph.svg,
> perftest2_blocked_threadpool.png, perftest2_cpu_usage.png,
> perftest2_heap.png, perftest2_read_latency_99th.png,
> perftest2_read_latency_avg.png, perftest2_readops.png,
> perftest2_write_latency_99th.png, perftest2_write_latency_avg.png,
> perftest2_writeops.png, perftest_blockedthreads.png,
> perftest_connections_count.png, perftest_cpu_usage.png,
> perftest_heap_usage.png, perftest_readlatency_99th.png,
> perftest_readlatency_avg.png, perftest_readops.png,
> perftest_writelatency_99th.png, perftest_writelatency_avg.png,
> perftest_writeops.png
>
>
> This is a follow-up ticket to CASSANDRA-14855, to make the Flusher queue
> bounded: in the current state, items are added to the queue without any
> check on the queue size, and without consulting the Netty outbound
> buffer's isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.
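> The direction described above, a bounded queue plus a writability check
> before enqueueing, can be sketched roughly as follows. This is a
> hypothetical stand-in, not Cassandra's actual Flusher or the Netty API:
> the class name, MAX_QUEUE_SIZE, and the channelWritable flag (simulating
> Netty's channel.isWritable()) are all illustrative assumptions.
> {code:java}
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.BlockingQueue;
>
> // Hypothetical sketch of a Flusher-like queue that applies back-pressure
> // instead of growing without bound (names are illustrative only).
> public class BoundedFlusherSketch {
>     private static final int MAX_QUEUE_SIZE = 1024; // assumed bound
>
>     private final BlockingQueue<Object> queue =
>             new ArrayBlockingQueue<>(MAX_QUEUE_SIZE);
>     // Stand-in for Netty's channel.isWritable().
>     private volatile boolean channelWritable = true;
>
>     /** Returns true if the item was accepted, false under back-pressure. */
>     public boolean tryEnqueue(Object item) {
>         // Reject work when the outbound channel is not writable, and let
>         // offer() reject when the queue is at capacity, rather than
>         // queueing unboundedly.
>         if (!channelWritable)
>             return false;
>         return queue.offer(item);
>     }
>
>     public void setChannelWritable(boolean writable) {
>         channelWritable = writable;
>     }
>
>     public int pending() {
>         return queue.size();
>     }
> }
> {code}
> With such a scheme, a full queue or a non-writable channel surfaces as a
> rejected enqueue that the caller can translate into client back-pressure,
> instead of unbounded heap growth on the server.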
--
This message was sent by Atlassian Jira
(v8.3.4#803005)