[
https://issues.apache.org/jira/browse/HADOOP-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14062748#comment-14062748
]
shanyu zhao commented on HADOOP-10839:
--------------------------------------
Hi [~cnauroth], thanks for the review!
I pulled from trunk on GitHub before I generated the patch, so it should be
against the current trunk...
Regarding the unusual indentation, I was just trying to match the existing
indentation in that file. If you look at the other methods in the same file,
they are all indented the way I wrote it in the patch. Do you think I should
change the indentation to what you proposed above? That is the more
conventional style, but it would look odd in that source file.
> Add unregisterSource() to MetricsSystem API
> -------------------------------------------
>
> Key: HADOOP-10839
> URL: https://issues.apache.org/jira/browse/HADOOP-10839
> Project: Hadoop Common
> Issue Type: Bug
> Components: metrics
> Affects Versions: 2.4.1
> Reporter: shanyu zhao
> Assignee: shanyu zhao
> Attachments: HADOOP-10839.patch
>
>
> Currently the MetricsSystem API has a register() method to register a
> MetricsSource but doesn't have an unregister() method. This means that once a
> MetricsSource is registered with the MetricsSystem, it stays there until the
> MetricsSystem is shut down. In some cases this can cause a Java
> OutOfMemoryError.
> One such case is the file system metrics implementation. The new
> AbstractFileSystem/FileContext framework does not implement a cache, so every
> file system access can lead to the creation of a NativeFileSystem instance
> (refer to HADOOP-6356). All of these NativeFileSystem instances need to share
> the same MetricsSystemImpl instance, which means we cannot shut down the
> MetricsSystem to clean up the MetricsSources that have been registered but are
> no longer active. Over time the MetricsSource instances accumulate and
> eventually we see an OutOfMemoryError.
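To illustrate the kind of API the issue title asks for, here is a minimal usage sketch, assuming a new unregisterSource(String name) method is added next to the existing register(name, desc, source). The exact method signature and the FsInstanceMetrics helper are assumptions for illustration only, not the contents of the attached patch.

{code:java}
// Sketch only: unregisterSource(String) is the proposed method and
// FsInstanceMetrics is a hypothetical helper, not code from the patch.
import org.apache.hadoop.metrics2.MetricsSource;
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

class FsInstanceMetrics {
  private final String sourceName;

  FsInstanceMetrics(String sourceName, MetricsSource source) {
    this.sourceName = sourceName;
    MetricsSystem ms = DefaultMetricsSystem.instance();
    // Today a source registered here stays around until the whole
    // MetricsSystem is shut down, even after the owning file system
    // instance is no longer in use.
    ms.register(sourceName, "Per-instance file system metrics", source);
  }

  void close() {
    // Proposed: remove just this source instead of shutting down the
    // shared MetricsSystemImpl, so inactive sources do not accumulate.
    DefaultMetricsSystem.instance().unregisterSource(sourceName);
  }
}
{code}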