[
https://issues.apache.org/jira/browse/HADOOP-16833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17030201#comment-17030201
]
Arpit Agarwal commented on HADOOP-16833:
----------------------------------------
Thanks for this improvement and the offline explanation [~sodonnell].
The {{check}} method changes are a bit tricky. Lockless synchronization is a
minefield! I realized that the original implementation has a bug which is
carried over with these changes. The {{getAndSet}} calls after the while loop
ends can lose changes made by another thread.
{code:java}
} while (!lastLogTime.compareAndSet(localLastLogTs, now)); // << Thread 1 successfully does compareAndSet.
                                                           // << Thread 2 sees it cannot log yet, so it
                                                           //    updates suppressed and maxWait.
long suppressed = suppressedCounter.getAndSet(0);          // << Thread 1 resets changes made by thread 2.
long maxWait = maxWaitCounter.getAndSet(0);
{code}
I don't see an easy solution: we are attempting to update multiple variables
atomically without taking a lock that covers all the updates. We could decide
that the logging does not guarantee absolute precision and accept that we may
lose a few samples.
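If we did want the updates to stay atomic without a lock, one direction would be to fold the timestamp and both counters into a single immutable state object swapped with one CAS. The sketch below is only an illustration of that idea, not the approach in the attached patches; the class and field names are hypothetical.
{code:java}
// Hypothetical sketch: pack lastLogTs, suppressed and maxWait into one
// immutable object so a single compareAndSet updates all three atomically.
import java.util.concurrent.atomic.AtomicReference;

class LogState {
  final long lastLogTs;
  final long suppressed;
  final long maxWait;

  LogState(long lastLogTs, long suppressed, long maxWait) {
    this.lastLogTs = lastLogTs;
    this.suppressed = suppressed;
    this.maxWait = maxWait;
  }
}

class CheckSketch {
  private final AtomicReference<LogState> state =
      new AtomicReference<>(new LogState(0, 0, 0));
  private final long minLoggingGapMs = 300;

  /** Returns the state to log, or null if this call should stay silent. */
  LogState check(long now, long waitMs) {
    while (true) {
      LogState cur = state.get();
      if (now - cur.lastLogTs < minLoggingGapMs) {
        // Too soon to log: fold this sample into the counters.
        LogState next = new LogState(cur.lastLogTs,
            cur.suppressed + 1, Math.max(cur.maxWait, waitMs));
        if (state.compareAndSet(cur, next)) {
          return null;
        }
      } else {
        // Time to log: reset counters and timestamp in one CAS, so no
        // concurrent update can slip in between the timestamp update and
        // the counter resets.
        if (state.compareAndSet(cur, new LogState(now, 0, 0))) {
          return cur;
        }
      }
      // CAS lost a race with another thread; retry with the fresh state.
    }
  }
}
{code}
The trade-off is an object allocation per sample, which may not be worth it on a hot path, so accepting the occasional lost sample is also a reasonable call.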
> InstrumentedLock should log lock queue time
> -------------------------------------------
>
> Key: HADOOP-16833
> URL: https://issues.apache.org/jira/browse/HADOOP-16833
> Project: Hadoop Common
> Issue Type: Improvement
> Components: util
> Affects Versions: 3.3.0
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Attachments: HADOOP-16833.001.patch, HADOOP-16833.002.patch,
> HADOOP-16833.003.patch, HADOOP-16833.004.patch
>
>
> Within the Datanode, we often see contention around the FsDatasetImpl lock.
> This can happen for various reasons, eg the DN is under IO load, or due to
> other bugs, eg HDFS-15131.
> When DN slowdowns happen, it is very difficult to debug the cause, as there
> are few messages in the logs which indicate what is happening.
> In my experience, the presence of this log is informative:
> {code}
> 2019-11-27 09:00:49,678 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Took 21540ms to process 1 commands from NN
> {code}
> This comes from the datanode heartbeat thread. It is logged when the
> commands returned from the heartbeat cannot be enqueued, usually because the
> thread cannot get the FsDatasetImpl lock due to contention with other slow
> threads.
> HDFS-14997 moved the command processing to an async thread, and hence this
> useful message will disappear.
> InstrumentedLock introduced a feature to the datanode, where it will log a
> stack trace if any thread holds the lock for over 300ms. However, this will
> not catch a scenario where 10 threads each hold the lock for 200ms, leaving
> the next thread in the queue waiting over 2 seconds.
> I believe it would be useful to extend InstrumentedLock to also log if a
> thread has to wait for over some threshold. That way, we would be able to
> catch scenarios like the one the heartbeat thread shows us, but in a much
> more obvious way, provided lock contention is involved.
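> To make the idea concrete, a minimal sketch of measuring the queue time
> around lock acquisition is below. This is illustrative only, not the
> attached patch; the class and field names are made up for the example.
> {code:java}
> // Illustrative sketch: record how long lock() blocks and warn if the wait
> // exceeds a threshold. Names here are hypothetical, not from the patch.
> import java.util.concurrent.locks.ReentrantLock;
> import java.util.function.LongSupplier;
>
> class QueueTimeLoggingLock {
>   private final ReentrantLock lock = new ReentrantLock();
>   private final LongSupplier clock = System::currentTimeMillis;
>   private final long lockWarningThresholdMs;
>
>   QueueTimeLoggingLock(long lockWarningThresholdMs) {
>     this.lockWarningThresholdMs = lockWarningThresholdMs;
>   }
>
>   public void lock() {
>     long start = clock.getAsLong();
>     lock.lock();
>     long waited = clock.getAsLong() - start;
>     if (waited >= lockWarningThresholdMs) {
>       // In InstrumentedLock this would go through the rate-limited check()
>       // path rather than logging unconditionally.
>       System.err.println("Waited " + waited + " ms to acquire the lock");
>     }
>   }
>
>   public void unlock() {
>     lock.unlock();
>   }
> }
> {code}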