[ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624948#comment-15624948
 ] 

ramkrishna.s.vasudevan commented on HBASE-16890:
------------------------------------------------

It is pulled by only one thread, but in my first patch, in RBT's onEvent()
{code}
       synchronized (waitingConsumePayloads) {
+        waitingConsumePayloads.addLast(payload);
       }
{code}
I was adding the payload inside the synchronized block. So even though onEvent 
was triggered sequentially, the order in which threads enter the synchronized 
block could change. So I thought the order could change when the payloads are 
added to waitingConsumePayloads: 100, 101 and 102 could become 102, 100, 101. 
Or does the RBT itself call onEvent one by one, so the next onEvent only happens 
once the current one is done? I have to read that documentation of RBT. 
But I think it is going to be parallel. Will verify.
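To make the concern concrete, here is a minimal standalone sketch of the pattern 
(assuming the LMAX Disruptor 3.x API; the Payload class and deque are illustrative 
stand-ins, not the actual WAL code). It publishes 100, 101, 102 and appends each 
payload to the deque from onEvent; if onEvent calls could overlap, the appended 
order would not be guaranteed to match the publish order:
{code}
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

import java.util.ArrayDeque;
import java.util.Deque;

public class OnEventOrderingSketch {
  // Illustrative payload carrying only an id.
  static final class Payload {
    long id;
  }

  public static void main(String[] args) throws Exception {
    final Deque<Long> waitingConsumePayloads = new ArrayDeque<>();

    // Single handler registered with the Disruptor; onEvent appends under the
    // lock, mirroring the patch hunk above.
    EventHandler<Payload> handler = (event, sequence, endOfBatch) -> {
      synchronized (waitingConsumePayloads) {
        waitingConsumePayloads.addLast(event.id);
      }
    };

    Disruptor<Payload> disruptor =
        new Disruptor<>(Payload::new, 1024, DaemonThreadFactory.INSTANCE);
    disruptor.handleEventsWith(handler);
    RingBuffer<Payload> ringBuffer = disruptor.start();

    // Publish 100, 101, 102 in order.
    for (long id = 100; id <= 102; id++) {
      long seq = ringBuffer.next();
      try {
        ringBuffer.get(seq).id = id;
      } finally {
        ringBuffer.publish(seq);
      }
    }

    Thread.sleep(200); // crude wait for the handler to drain the ring buffer
    synchronized (waitingConsumePayloads) {
      // If onEvent is invoked strictly one at a time this prints [100, 101, 102];
      // if invocations could overlap, the order would not be guaranteed.
      System.out.println(waitingConsumePayloads);
    }
    disruptor.shutdown();
  }
}
{code}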

> Analyze the performance of AsyncWAL and fix the same
> ----------------------------------------------------
>
>                 Key: HBASE-16890
>                 URL: https://issues.apache.org/jira/browse/HBASE-16890
>             Project: HBase
>          Issue Type: Sub-task
>          Components: wal
>    Affects Versions: 2.0.0
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 2.0.0
>
>         Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 
> (2).patch, HBASE-16890-remove-contention.patch, Screen Shot 2016-10-25 at 
> 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 PM.png, Screen Shot 
> 2016-10-25 at 7.39.48 PM.png, async.svg, classic.svg, contention.png, 
> contention_defaultWAL.png
>
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower 
> than the Default WAL. This task is to analyze and see if we could fix it.
> See some discussions in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
