[
https://issues.apache.org/jira/browse/HADOOP-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
shanyu zhao updated HADOOP-10840:
---------------------------------
Attachment: HADOOP-10840.1.patch
[~cnauroth] thanks for the findings. I couldn't reproduce the specific failure
you posted, but I believe it is caused by NativeAzureFileSystem.close() being
called multiple times. I verified that NativeAzureFileSystemStore.close() is
safe to call multiple times, but NativeAzureFileSystem.close() is not: because
the metrics system keeps a reference count, it must not be shut down more than
once per instance. I therefore introduced a boolean isClosed flag to make
close() idempotent, and added a new test case that calls
NativeAzureFileSystem.close() twice to verify this scenario.
New patch attached.
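For illustration, here is a minimal sketch of the guard (field names here are
hypothetical; the attached patch is the authoritative change):
{code:java}
// Inside NativeAzureFileSystem (sketch only, not the actual patch):

private volatile boolean isClosed = false;  // guards against repeated close()

@Override
public void close() throws IOException {
  if (isClosed) {
    return;  // second and later close() calls become no-ops
  }
  super.close();
  // Unregister this instance's metrics source via the MetricsSystem API
  // added in HADOOP-10839, so sources stop accumulating in MetricsSystemImpl.
  // "metricsSourceName" is a hypothetical field holding the name the source
  // was registered under when the file system instance was created.
  org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.instance()
      .unregisterSource(metricsSourceName);
  isClosed = true;
}
{code}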
> Fix OutOfMemoryError caused by metrics system in Azure File System
> ------------------------------------------------------------------
>
> Key: HADOOP-10840
> URL: https://issues.apache.org/jira/browse/HADOOP-10840
> Project: Hadoop Common
> Issue Type: Bug
> Components: metrics
> Affects Versions: 2.4.1
> Reporter: shanyu zhao
> Assignee: shanyu zhao
> Attachments: HADOOP-10840.1.patch, HADOOP-10840.patch
>
>
> In Hadoop 2.x the Hadoop File System framework changed and no cache is
> implemented (refer to HADOOP-6356). This means that for every WASB access a
> new NativeAzureFileSystem is created, along with a metrics source that is
> created and added to MetricsSystemImpl. Over time these sources accumulate,
> eating memory and eventually causing a Java OutOfMemoryError.
> The fix is to utilize the unregisterSource() method added to MetricsSystem in
> HADOOP-10839.
--
This message was sent by Atlassian JIRA
(v6.2#6252)