[ https://issues.apache.org/jira/browse/YARN-11397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17648992#comment-17648992 ]

Steve Loughran commented on YARN-11397:
---------------------------------------

root cause is that FileContext doesn't have a close(), so it will hang on to s3a 
refs without calling close() on them.

this can leak thread pools and, if not those, the strong back-references from 
metrics.
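
to make the gap concrete, here is a minimal sketch (bucket name and class name are 
made up for illustration) contrasting the two APIs: a FileSystem instance can be 
closed, releasing the underlying S3AFileSystem, while FileContext offers no 
equivalent call today.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileContextCloseGap {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    URI logs = URI.create("s3a://some-bucket/app-logs"); // made-up bucket

    // FileSystem route: close() releases the S3AFileSystem and, with it,
    // the resources it owns.
    try (FileSystem fs = FileSystem.newInstance(logs, conf)) {
      fs.listStatus(new Path(logs));
    }

    // FileContext route: there is no close()/Closeable on FileContext, so
    // whatever s3a state was created underneath stays referenced.
    FileContext fc = FileContext.getFileContext(logs, conf);
    fc.util().listStatus(new Path(logs));
    // nothing to call here -- this is the gap described above
  }
}
{code}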

HADOOP-18476 *probably* fixes the thread pool issue, while HADOOP-18526 will 
address the instrumentation leaks.

I think we should really make FileContext closeable, do a full end-to-end close 
there, and then call close() in our app. The s3a changes are very much damage 
limitation.
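
for callers, the pattern could then look something like the sketch below -- purely 
hypothetical, since FileContext has no close() today, so this does not compile 
against current Hadoop:

{code:java}
// Hypothetical caller-side pattern if FileContext implemented java.io.Closeable.
// This does NOT compile against current Hadoop; it only sketches the proposal.
URI remoteLogs = URI.create("s3a://some-bucket/app-logs"); // made-up location
try (FileContext fc = FileContext.getFileContext(remoteLogs, conf)) {
  fc.util().listStatus(new Path(remoteLogs)); // read aggregated logs through fc
} // hypothetical close(): would tear down the wrapped s3a filesystem end to end
{code}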

> Memory leak when reading aggregated logs from s3 
> (LogAggregationTFileController::readAggregatedLogs)
> ----------------------------------------------------------------------------------------------------
>
>                 Key: YARN-11397
>                 URL: https://issues.apache.org/jira/browse/YARN-11397
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: log-aggregation
>    Affects Versions: 3.2.2
>         Environment: Remote logs dir on s3.
>            Reporter: Maciej Smolenski
>            Priority: Critical
>         Attachments: YarnLogsS3Issue.scala
>
>
> Reproduction code in the attachment.
> When collecting aggregated logs from s3 in a loop (see the reproduction code), 
> we can easily see that the number of 'S3AInstrumentation' instances keeps 
> increasing although the number of 'S3AFileSystem' instances does not. This 
> means that 'S3AInstrumentation' is not released together with 'S3AFileSystem' 
> as it should be. The root cause seems to be the missing close() on 
> S3AFileSystem.
> The issue seems similar to https://issues.apache.org/jira/browse/YARN-11039, 
> but this one is a 'memory leak' (not a 'thread leak') and the affected version 
> here is earlier (3.2.2).
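
For reference, the loop described above has roughly this shape; the attached 
YarnLogsS3Issue.scala is the authoritative reproduction, and the bucket name and 
the buildRequest() helper below are placeholders, not part of the real code.

{code:java}
import java.io.ByteArrayOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.logaggregation.ContainerLogsRequest;
import org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController;

public class ReadAggregatedLogsLoop {

  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    // remote app-log dir on s3 (made-up bucket)
    conf.set(YarnConfiguration.NM_REMOTE_APP_LOG_DIR, "s3a://some-bucket/app-logs");

    LogAggregationTFileController controller = new LogAggregationTFileController();
    controller.initialize(conf, "TFile");

    // Each pass reads the aggregated logs again. With the remote dir on s3a,
    // a heap histogram shows the S3AInstrumentation count climbing while the
    // S3AFileSystem count stays flat.
    for (int i = 0; i < 1000; i++) {
      try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        controller.readAggregatedLogs(buildRequest(), out);
      }
    }
  }

  // Placeholder: build the request (app id, owner, node, container) as done in
  // the attached YarnLogsS3Issue.scala; details omitted here.
  private static ContainerLogsRequest buildRequest() {
    throw new UnsupportedOperationException("fill in per YarnLogsS3Issue.scala");
  }
}
{code}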


