Tsuyoshi Ozawa commented on YARN-4301:

[~suda] thank you for pointing that out. I have some comments about the v2 patch - could 
you address them?

1. About the synchronization of DirectoryCollection, I see the point you 
mentioned. The change, however, introduces race conditions between the states in the 
class (localDirs, fullDirs, errorDirs, and numFailures) - e.g. 
{{DirectoryCollection.concat(errorDirs, fullDirs)}}, {{createNonExistentDirs}}, 
and other methods cannot work correctly without synchronization. 

I think the root cause of the problem is calling {{DC.testDirs}} while holding the 
lock in {{DC.checkDirs}}. How about releasing the lock before calling 
{{testDirs}} and re-acquiring it afterwards?
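To illustrate the suggestion, here is a minimal sketch (a simplified stand-in, not the real DirectoryCollection; the class and the string-based failure check are invented for illustration). The slow disk probe runs with the lock released, and the lock is re-acquired only to publish the result, so synchronized getters stay responsive:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified sketch: take a snapshot under the lock, probe without the lock,
// then re-acquire the lock to merge the result back into the shared state.
class DirCollectionSketch {
  private final List<String> localDirs = new ArrayList<>();
  private final List<String> errorDirs = new ArrayList<>();

  DirCollectionSketch(List<String> dirs) {
    localDirs.addAll(dirs);
  }

  synchronized List<String> getGoodDirs() {
    // cheap: never blocked by the slow disk probe below
    return Collections.unmodifiableList(new ArrayList<>(localDirs));
  }

  void checkDirs() {
    List<String> snapshot;
    synchronized (this) {                      // consistent snapshot under the lock
      snapshot = new ArrayList<>(localDirs);
    }
    List<String> failed = testDirs(snapshot);  // slow I/O, lock NOT held
    synchronized (this) {                      // re-acquire to publish the result
      localDirs.removeAll(failed);
      errorDirs.addAll(failed);
    }
  }

  // stand-in for the real mkdir/rmdir probe
  private List<String> testDirs(List<String> dirs) {
    List<String> failed = new ArrayList<>();
    for (String d : dirs) {
      if (d.startsWith("bad")) {
        failed.add(d);
      }
    }
    return failed;
  }
}
```

The trade-off is that a getter may observe the state from before an in-flight check completes, which seems acceptable for a periodic health check.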

A synchronized DC.getFailedDirs() can be blocked by a synchronized DC.checkDirs() 
when File.mkdir() (called from DC.checkDirs() via DC.testDirs()) does not 
return within a moderate timeout.
Hence NodeHealthCheckerService.isHealthy() also gets blocked.
So I would like to make the DC.getXXXs getters unsynchronized.
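One way to make the getters safely unsynchronized is to publish each result list as a fresh immutable copy through a volatile reference; a reader then never needs the lock. A minimal sketch (class and method names invented for illustration):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: the checker swaps in a new immutable list atomically, so the
// getter can return the volatile reference without any synchronization.
class UnsyncGetterSketch {
  private volatile List<String> failedDirs = Collections.emptyList();

  List<String> getFailedDirs() {
    // no lock needed: the reference swap is atomic and the list is immutable
    return failedDirs;
  }

  void publishFailures(List<String> newFailures) {
    // build the replacement off to the side, then publish it in one write
    failedDirs = Collections.unmodifiableList(
        Arrays.asList(newFailures.toArray(new String[0])));
  }
}
```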

2. If the thread is preempted by the OS and migrated to another CPU in a multicore 
environment, the gap can be a negative value. Hence I would prefer not to abort the NodeManager here:
+    long diskCheckTime = dirsHandler.getLastDisksCheckTime();
+    long now = System.currentTimeMillis();
+    long gap = now - diskCheckTime;
+    if (gap < 0) {
+      throw new AssertionError("implementation error - now=" + now
+          + ", diskCheckTime=" + diskCheckTime);
+    }
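If a non-negative gap is still wanted, one option (an alternative sketch, not part of the patch) is to measure the elapsed time with {{System.nanoTime()}}, whose differences within a single JVM do not jump backwards the way wall-clock readings from {{System.currentTimeMillis()}} can:

```java
// Sketch: compute the time since the last disk check from a monotonic
// source, so the gap cannot be negative due to clock adjustments.
class DiskCheckClockSketch {
  private final long lastDiskCheckNanos = System.nanoTime();

  long gapMillis() {
    long gapNanos = System.nanoTime() - lastDiskCheckNanos;
    return gapNanos / 1_000_000L;  // nanoseconds -> milliseconds
  }
}
```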

3. Please move the validation of the configuration to serviceInit to avoid aborting at 
runtime:
+    long allowedGap = this.diskHealthCheckInterval + this.diskHealthCheckTimeout;
+    if (allowedGap <= 0) {
+      throw new AssertionError("implementation error - interval=" + this.diskHealthCheckInterval
+          + ", timeout=" + this.diskHealthCheckTimeout);
+    }
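A minimal sketch of what that fail-fast validation could look like (field names assumed from the patch; the class and constructor are invented stand-ins for the real service, and IllegalArgumentException is used since a bad configuration is a user error rather than an implementation error):

```java
// Sketch: validating the configured interval and timeout once at startup
// surfaces a bad configuration immediately instead of aborting a running NM.
class HealthCheckConfigSketch {
  private final long diskHealthCheckInterval;
  private final long diskHealthCheckTimeout;

  HealthCheckConfigSketch(long intervalMs, long timeoutMs) {
    this.diskHealthCheckInterval = intervalMs;
    this.diskHealthCheckTimeout = timeoutMs;
  }

  // would be called from serviceInit() in the real service
  void validate() {
    long allowedGap = diskHealthCheckInterval + diskHealthCheckTimeout;
    if (allowedGap <= 0) {
      throw new IllegalArgumentException("invalid configuration - interval="
          + diskHealthCheckInterval + ", timeout=" + diskHealthCheckTimeout);
    }
  }
}
```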

> NM disk health checker should have a timeout
> --------------------------------------------
>                 Key: YARN-4301
>                 URL: https://issues.apache.org/jira/browse/YARN-4301
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Akihiro Suda
>         Attachments: YARN-4301-1.patch, YARN-4301-2.patch, 
> concept-async-diskchecker.txt
> The disk health checker [verifies a 
> disk|https://github.com/apache/hadoop/blob/96677bef00b03057038157efeb3c2ad4702914da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DirectoryCollection.java#L371-L385]
>  by executing {{mkdir}} and {{rmdir}} periodically.
> If these operations do not return within a moderate timeout, the disk should be 
> marked bad, and thus {{nodeInfo.nodeHealthy}} should flip to {{false}}.
> I confirmed that current YARN does not have an implicit timeout (on JDK7, 
> Linux 4.2, ext4) using [Earthquake|https://github.com/osrg/earthquake], our 
> fault injector for distributed systems.
> (I'll introduce the reproduction script in a while)
> I think we can fix this issue by making 
> [{{NodeHealthCheckerService.isHealthy()}}|https://github.com/apache/hadoop/blob/96677bef00b03057038157efeb3c2ad4702914da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeHealthCheckerService.java#L69-L73]
>  return {{false}} if the value of {{this.getLastHealthReportTime()}} is too 
> old.
