[ https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888439#comment-15888439 ]

Christian Esken commented on CASSANDRA-13265:
---------------------------------------------

Here is one possibly very important observation: coalescing appears to be stuck in an infinite loop inside maybeSleep(). I checked 10 thread dumps, and in each of them the thread was at the same location. Is it possible that averageGap is 0? Then sleep would be 0, and the doubling loop below would never terminate.
{code}
    private static boolean maybeSleep(int messages, long averageGap, long maxCoalesceWindow, Parker parker)
    {
        // only sleep if we can expect to double the number of messages we're sending in the time interval
        long sleep = messages * averageGap; // TODO can averageGap be 0 ?
        if (sleep > maxCoalesceWindow)
            return false;

        // assume we receive as many messages as we expect; apply the same logic to the future batch:
        // expect twice as many messages to consider sleeping for "another" interval; this basically translates
        // to doubling our sleep period until we exceed our max sleep window
        while (sleep * 2 < maxCoalesceWindow)
            sleep *= 2;     // <<<<<<<<<<<<<<<< CoalescingStrategies:106
        parker.park(sleep);
        return true;
    }
{code}
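To make the failure mode concrete, here is a minimal standalone sketch of the doubling loop (class and method names are mine, not Cassandra's). An iteration cap is added so it can run to completion; without the cap, sleep == 0 would spin forever, because 0 * 2 is always below any positive maxCoalesceWindow.

```java
// Hypothetical reproduction of the doubling loop from maybeSleep(),
// isolated with an iteration cap so the pathological case terminates.
public class DoublingLoopDemo {
    /** Runs the doubling loop and returns how many iterations it took, capped. */
    static long loopIterations(long sleep, long maxCoalesceWindow, long cap) {
        long iterations = 0;
        while (sleep * 2 < maxCoalesceWindow && iterations < cap) {
            sleep *= 2; // with sleep == 0 this never makes progress
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // Normal case: sleep grows geometrically and the loop exits quickly.
        System.out.println(loopIterations(1, 1024, 1_000_000));   // 9
        // Pathological case: sleep == 0 never grows; only the cap stops it.
        System.out.println(loopIterations(0, 1024, 1_000_000));   // 1000000 (cap)
    }
}
```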

If sum is bigger than MEASURED_INTERVAL, then averageGap() returns 0 due to integer division. I am aware that this is highly unlikely, but I cannot otherwise explain the apparent hang in maybeSleep() at line 106.
{code}
        private long averageGap()
        {
            if (sum == 0)
                return Integer.MAX_VALUE;
            return MEASURED_INTERVAL / sum;
        }
{code}
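For illustration, the integer division and one possible guard can be sketched as follows (the constant value and the guard are my assumptions, not Cassandra's actual value or the committed fix): flooring the result at 1 would keep messages * averageGap from collapsing to 0.

```java
// Hypothetical sketch: why integer division can yield an averageGap of 0,
// and one possible guard. The constant is illustrative only.
public class AverageGapDemo {
    static final long MEASURED_INTERVAL = 1_000_000; // illustrative, not Cassandra's value

    // Mirrors the reported behaviour: sum > MEASURED_INTERVAL => result is 0.
    static long averageGap(long sum) {
        if (sum == 0)
            return Integer.MAX_VALUE;
        return MEASURED_INTERVAL / sum;
    }

    // One possible guard (my sketch, not the committed fix): never report a
    // gap below 1, so the doubling loop in maybeSleep() always makes progress.
    static long averageGapGuarded(long sum) {
        if (sum == 0)
            return Integer.MAX_VALUE;
        return Math.max(1, MEASURED_INTERVAL / sum);
    }

    public static void main(String[] args) {
        System.out.println(averageGap(2_000_000));        // 0
        System.out.println(averageGapGuarded(2_000_000)); // 1
    }
}
```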

> Communication breakdown in OutboundTcpConnection
> ------------------------------------------------
>
>                 Key: CASSANDRA-13265
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>            Reporter: Christian Esken
>            Assignee: Christian Esken
>         Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate with the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A thread dump in this situation showed 324 threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> number of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: only after 262508 locking operations can it progress with actually 
> writing to the queue.
> - Reading: also blocked, as 324 threads try to do iterator.next() and 
> fully lock the queue.
> This means: writing blocks the queue for reading, and readers might even be 
> starved, which makes the situation even worse.
> -----
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DCs
>  - high write throughput (100000 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
