[ 
https://issues.apache.org/jira/browse/YARN-6382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960175#comment-15960175
 ] 

Joep Rottinghuis commented on YARN-6382:
----------------------------------------

Clarified with [~haibochen] that he meant the race conditions for the latter 
two cases are solved in YARN-6376.
That makes sense.

Synchronizing on the writer is still a little brittle there, because the 
getWriter() method lets callers access the writer without synchronizing on it.
AppLevelTimelineCollector#AppLevelAggregator#aggregate() does exactly that at 
line 152: getWriter().write(...
In this case it doesn't flush, but if a flush were ever added there, it would 
re-introduce the race fixed in YARN-6376.
Instead of exposing the writer, perhaps it would be better to have the 
subclasses call #putEntities instead. That method defers to the private 
writeTimelineEntities, which does the same work to obtain the context:
TimelineCollectorContext context = getTimelineEntityContext();
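
As a rough sketch (not the actual code: the aggregatedEntities placeholder, the 
LOG field, and the UserGroupInformation lookup are assumptions about the 
surrounding class), AppLevelAggregator#aggregate() could then look roughly like 
this, with the write going through the inherited putEntities():

// Sketch only, inside AppLevelTimelineCollector.AppLevelAggregator.
private void aggregate() {
  try {
    // ... build the aggregated entities for this app (placeholder) ...
    TimelineEntities aggregatedEntities = new TimelineEntities();

    // Today the aggregator calls getWriter().write(...), touching the writer
    // outside the synchronization that YARN-6376 added around flush().
    // Routing through putEntities() keeps the write on the same path as
    // writeTimelineEntities()/getTimelineEntityContext(), behind the same lock.
    putEntities(aggregatedEntities, UserGroupInformation.getCurrentUser());
  } catch (IOException e) {
    LOG.error("Error aggregating timeline metrics", e);
  }
}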
Should we open a separate bug for that to enhance the fix in YARN-6376?

> Address race condition on TimelineWriter.flush() caused by buffer-sized flush
> -----------------------------------------------------------------------------
>
>                 Key: YARN-6382
>                 URL: https://issues.apache.org/jira/browse/YARN-6382
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0-alpha2
>            Reporter: Haibo Chen
>            Assignee: Haibo Chen
>
> YARN-6376 fixes the race condition between putEntities() and the periodic 
> flush() by WriterFlushThread in TimelineCollectorManager, as well as between 
> putEntities() calls in different threads.
> However, BufferedMutator can have an internal size-based flush as well. We 
> need to address the resulting race condition.



