Zhijie Shen commented on YARN-2583:

1. Be more specific: should the comment say that we want a more scalable method that writes only a single log file per LRS?
  // we find a more scalable method.

2. Make 30 and 3600 constants of AppLogAggregatorImpl? Also, "configuredRentionSize" looks like a typo for "configuredRetentionSize".
    int configuredRentionSize =
    if (configuredInterval > 0 && configuredInterval < 3600) {
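To illustrate the suggestion, here is a minimal sketch of what the extracted constants could look like; the names, values, and the helper method are assumptions for illustration, not taken from the patch:

```java
// Hypothetical sketch for AppLogAggregatorImpl; constant names and the
// sanitizeInterval helper are assumptions, not code from the patch.
public class AppLogAggregatorImpl {
  // Minimum rolling interval in seconds (one hour).
  private static final long MIN_ROLLING_INTERVAL_SECONDS = 3600;
  // Default number of retained log files per application.
  private static final int DEFAULT_RETENTION_SIZE = 30;

  static long sanitizeInterval(long configuredInterval) {
    // Clamp positive sub-hour intervals up to the minimum so that logs
    // are not uploaded too frequently.
    if (configuredInterval > 0
        && configuredInterval < MIN_ROLLING_INTERVAL_SECONDS) {
      return MIN_ROLLING_INTERVAL_SECONDS;
    }
    return configuredInterval;
  }
}
```

Naming the magic numbers also keeps the check and the default documented in one place.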

3. Should ">=" be ">" here?
      if (status.size() >= this.retentionSize) {
And should "<=" be "<" here?
        for (int i = 0 ; i <= statusList.size() - this.retentionSize; i++) {
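To make the boundary question concrete, here is a small standalone sketch of the retention logic being reviewed; plain strings stand in for the file statuses, and the method name is an assumption:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the retention check under discussion; not the
// patch code. Strings stand in for file statuses.
public class RetentionSketch {
  // Returns the oldest entries to delete so that at most retentionSize
  // entries remain.
  static List<String> prune(List<String> statusList, int retentionSize) {
    List<String> toDelete = new ArrayList<>();
    // With ">" a delete pass only runs once the list exceeds the limit;
    // with ">=" it also runs when the list is exactly at the limit.
    // The loop bound then decides how many entries go: "i < size -
    // retentionSize" removes exactly size - retentionSize entries,
    // while "i <= ..." removes one extra, leaving retentionSize - 1.
    if (statusList.size() > retentionSize) {
      for (int i = 0; i < statusList.size() - retentionSize; i++) {
        toDelete.add(statusList.get(i));
      }
    }
    return toDelete;
  }
}
```

With these bounds, a list of 4 entries and a retention size of 2 deletes the 2 oldest entries and keeps exactly 2.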

4. Why not use YarnClient instead? Because of the packaging issue?
  private ApplicationClientProtocol rmClient;
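For comparison, a sketch of the YarnClient alternative being suggested; whether the deletion service's module can take a dependency on yarn-client is exactly the open packaging question:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;

// Sketch only: shows querying the RM through YarnClient, which wraps
// ApplicationClientProtocol and handles RM proxy setup internally.
public class RmQuerySketch {
  public static ApplicationReport queryApp(Configuration conf,
      ApplicationId appId) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    try {
      client.init(conf);
      client.start();
      return client.getApplicationReport(appId);
    } finally {
      client.stop();
    }
  }
}
```

The trade-off is the extra module dependency versus re-implementing proxy creation against the raw protocol.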

> Modify the LogDeletionService to support Log aggregation for LRS
> ----------------------------------------------------------------
>                 Key: YARN-2583
>                 URL: https://issues.apache.org/jira/browse/YARN-2583
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager, resourcemanager
>            Reporter: Xuan Gong
>            Assignee: Xuan Gong
>         Attachments: YARN-2583.1.patch, YARN-2583.2.patch, 
> YARN-2583.3.1.patch, YARN-2583.3.patch
> Currently, AggregatedLogDeletionService deletes old logs from HDFS. It 
> checks the cut-off time: if all logs for an application are older than 
> the cut-off time, the app-log-dir in HDFS is deleted. This does not work 
> for LRS, since we expect an LRS application to keep running for a long 
> time. There are two scenarios: 
> 1) If we configure rollingIntervalSeconds, new log files are continually 
> uploaded to HDFS. The number of log files for the application grows 
> larger and larger, and no log files are ever deleted.
> 2) If we do not configure rollingIntervalSeconds, the log file can only 
> be uploaded to HDFS after the application finishes. It is very likely 
> that the logs are uploaded after the cut-off time, which causes a 
> problem because by then the app-log-dir for the application in HDFS has 
> already been deleted.
