[
https://issues.apache.org/jira/browse/YARN-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812294#comment-16812294
]
Peter Bacsko commented on YARN-6929:
------------------------------------
[~Prabhu Joseph] Here's my quick take on the latest patch.
1. This part of the code looks complicated. What does it do exactly? Some comments
would definitely help here.
What are {{nodeFilesPrev}} and {{nodeFilesCur}}? It's very difficult to verify the
correctness of this logic.
I assume this is some sort of iterative directory-walking logic, but it's hard
to see how we actually walk. You might consider using recursion if that's more
appropriate here.
{noformat}
291   RemoteIterator<FileStatus> nodeFilesCur = null;
292   try {
293     nodeFilesCur = getNodeFiles(conf, remoteAppLogDir, appId, appOwner);
294   } catch(IOException ex) {
295     diagnosis.append(ex.getMessage() + "\n");
296   }
297   if(isOlderPathEnabled(conf)) {
298     remoteAppLogDir = getOlderRemoteAppLogDir(appId, appOwner,
299         remoteRootLogDir, suffix);
300     try {
301       RemoteIterator<FileStatus> nodeFilesPrev = getNodeFiles(conf,
302           remoteAppLogDir, appId, appOwner);
303       if(nodeFilesCur == null) {
304         return nodeFilesPrev;
305       } else if(nodeFilesPrev != null) {
306         RemoteIterator<FileStatus> curDir = nodeFilesCur;
307         RemoteIterator<FileStatus> prevDir = nodeFilesPrev;
308         RemoteIterator<FileStatus> nodeFilesCombined = new
309             RemoteIterator<FileStatus>() {
310           @Override
311           public boolean hasNext() throws IOException {
312             return prevDir.hasNext() || curDir.hasNext();
313           }
314           @Override
315           public FileStatus next() throws IOException {
316             return prevDir.hasNext() ? prevDir.next() : curDir.next();
317           }
318         };
319         return nodeFilesCombined;
320       }
321     } catch(IOException ex) {
322       diagnosis.append(ex.getMessage() + "\n");
323     }
324   }
325   if(nodeFilesCur == null) {
326     throw new IOException(diagnosis.toString());
327   }
328   return nodeFilesCur;
329 }
{noformat}
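For what it's worth, the intent (drain the older listing first, then the current one) would be much easier to follow if the anonymous class were extracted into a small, named helper. A rough sketch of what I mean — note I'm using a simplified stand-in for Hadoop's {{RemoteIterator}} here so the snippet is self-contained, and the method names ({{concat}}, {{fromList}}) are just placeholders:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class CombinedIteratorSketch {
  // Simplified stand-in for org.apache.hadoop.fs.RemoteIterator.
  interface RemoteIterator<E> {
    boolean hasNext() throws IOException;
    E next() throws IOException;
  }

  // Concatenate two iterators: drain 'first' completely, then 'second'.
  static <E> RemoteIterator<E> concat(RemoteIterator<E> first,
                                      RemoteIterator<E> second) {
    return new RemoteIterator<E>() {
      @Override
      public boolean hasNext() throws IOException {
        return first.hasNext() || second.hasNext();
      }
      @Override
      public E next() throws IOException {
        return first.hasNext() ? first.next() : second.next();
      }
    };
  }

  // Wrap a plain List for demonstration purposes.
  static <E> RemoteIterator<E> fromList(List<E> list) {
    Iterator<E> it = list.iterator();
    return new RemoteIterator<E>() {
      public boolean hasNext() { return it.hasNext(); }
      public E next() { return it.next(); }
    };
  }

  public static void main(String[] args) throws IOException {
    RemoteIterator<String> prev = fromList(Arrays.asList("old-1", "old-2"));
    RemoteIterator<String> cur = fromList(Arrays.asList("new-1"));
    RemoteIterator<String> combined = concat(prev, cur);
    StringBuilder sb = new StringBuilder();
    while (combined.hasNext()) {
      sb.append(combined.next()).append(' ');
    }
    System.out.println(sb.toString().trim());  // old entries first, then new
  }
}
```

A named {{concat}} method like this also gives us an obvious place to hang a unit test and a Javadoc comment.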
2. I'd rename {{diagnosis}} to {{diagnosticsMessage}} or {{diagnosticsMsg}}.
3. This check here:
{noformat}
325 if(nodeFilesCur == null) {
326 throw new IOException(diagnosis.toString());
327 }
{noformat}
It tells me that having {{nodeFilesCur}} as null is a bad thing. If that's
the case, it must be stated explicitly in the exception message, in addition
to the diagnostics message.
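Something along these lines — the wording is just a suggestion, and {{noLogsFound}} is a hypothetical helper name:

```java
import java.io.IOException;

public class ExceptionMessageSketch {
  // Hypothetical helper: state explicitly that no log directory could be
  // read, rather than surfacing only the accumulated diagnostics.
  static IOException noLogsFound(StringBuilder diagnosis) {
    return new IOException(
        "Can not find remote application log directory. Diagnostics: "
        + diagnosis);
  }

  public static void main(String[] args) {
    StringBuilder diagnosis = new StringBuilder("fs unreachable");
    System.out.println(noLogsFound(diagnosis).getMessage());
  }
}
```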
4. We should not let the {{diagnosis}} string grow indefinitely. It should be
truncated after a certain limit. I don't know what the best value is - let's
discuss it.
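One way to do the capping — a minimal sketch, assuming the appends are funneled through a single helper; the 4096 limit and the marker string are placeholders pending that discussion:

```java
public class BoundedDiagnosticsSketch {
  // Hypothetical cap; the actual limit is up for discussion.
  private static final int MAX_DIAGNOSTICS_LEN = 4096;
  private static final String TRUNC_MARK = "...[truncated]";

  // Append a message to the diagnostics buffer, never letting it grow
  // past MAX_DIAGNOSTICS_LEN; mark the cut so readers know output was dropped.
  static void appendDiagnostics(StringBuilder diagnosis, String msg) {
    if (diagnosis.length() >= MAX_DIAGNOSTICS_LEN) {
      return;  // cap already reached, drop further messages
    }
    diagnosis.append(msg).append('\n');
    if (diagnosis.length() > MAX_DIAGNOSTICS_LEN) {
      // Cut back to the cap and mark the truncation.
      diagnosis.setLength(MAX_DIAGNOSTICS_LEN - TRUNC_MARK.length());
      diagnosis.append(TRUNC_MARK);
    }
  }

  public static void main(String[] args) {
    StringBuilder diagnosis = new StringBuilder();
    for (int i = 0; i < 1000; i++) {
      appendDiagnostics(diagnosis, "some repeated failure message");
    }
    System.out.println(diagnosis.length());  // never exceeds the cap
  }
}
```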
5. Pay attention to the necessary spaces after keywords like {{if}} and {{for}}.
I might add some comments later.
> yarn.nodemanager.remote-app-log-dir structure is not scalable
> -------------------------------------------------------------
>
> Key: YARN-6929
> URL: https://issues.apache.org/jira/browse/YARN-6929
> Project: Hadoop YARN
> Issue Type: Bug
> Components: log-aggregation
> Affects Versions: 2.7.3
> Reporter: Prabhu Joseph
> Assignee: Prabhu Joseph
> Priority: Major
> Attachments: YARN-6929-007.patch, YARN-6929.1.patch,
> YARN-6929.2.patch, YARN-6929.2.patch, YARN-6929.3.patch, YARN-6929.4.patch,
> YARN-6929.5.patch, YARN-6929.6.patch, YARN-6929.patch
>
>
> The current directory structure for yarn.nodemanager.remote-app-log-dir is
> not scalable. Maximum Subdirectory limit by default is 1048576 (HDFS-6102).
> With retention yarn.log-aggregation.retain-seconds of 7days, there are more
> chances LogAggregationService fails to create a new directory with
> FSLimitException$MaxDirectoryItemsExceededException.
> The current structure is
> <yarn.nodemanager.remote-app-log-dir>/<user>/logs/<job_name>. This can be
> improved with adding date as a subdirectory like
> <yarn.nodemanager.remote-app-log-dir>/<user>/logs/<date>/<job_name>
> {code}
> WARN
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService:
> Application failed to init aggregation
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
> The directory item limit of /app-logs/yarn/logs is exceeded: limit=1048576
> items=1048576
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxDirItems(FSDirectory.java:2021)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:2072)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedMkdir(FSDirectory.java:1841)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsRecursively(FSNamesystem.java:4351)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4262)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4221)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4194)
>
> at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>
> at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>
> at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.createAppDir(LogAggregationService.java:308)
>
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:366)
>
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:320)
>
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:443)
>
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:67)
>
> at
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
>
> at
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> at java.lang.Thread.run(Thread.java:745)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
> The directory item limit of /app-logs/yarn/logs is exceeded: limit=1048576
> items=1048576
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxDirItems(FSDirectory.java:2021)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:2072)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedMkdir(FSDirectory.java:1841)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsRecursively(FSNamesystem.java:4351)
>
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4262)
>
> {code}
> Thanks to Robert Mancuso for finding this issue.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]