[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150496#comment-16150496 ]

ASF subversion and git services commented on KNOX-989:
------------------------------------------------------

Commit ac532bd73c5da8997f865e048b36c76fd3c11301 in knox's branch 
refs/heads/KNOX-998-Package_Restructuring from [~moresandeep]
[ https://git-wip-us.apache.org/repos/asf?p=knox.git;h=ac532bd ]

KNOX-989 - Report metrics at service level (/webhdfs/v1) instead of url with 
args (/webhdfs/v1/?op=LISTSTATUS) (Mohammad Kamrul Islam via Sandeep More)


> Revisit JMX Metrics to fix the Out of Memory issue
> --------------------------------------------------
>
>                 Key: KNOX-989
>                 URL: https://issues.apache.org/jira/browse/KNOX-989
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>            Reporter: Sandeep More
>            Assignee: Mohammad Kamrul Islam
>             Fix For: 0.14.0
>
>         Attachments: KNOX-989.1.patch, KNOX-989.2.patch, Screen Shot 
> 2017-08-16 at 1.56.16 PM.png
>
>
> Bug [KNOX-986|https://issues.apache.org/jira/browse/KNOX-986] uncovered a 
> problem with Metrics when a large number of unique URLs are accessed via 
> Knox. The problem is that Knox creates a metrics object per unique URL, and 
> these objects are never flushed out (for an obvious reason: to maintain the 
> metric state). 
> We need a proper fix that mitigates this while still allowing the JMX 
> Metrics to be used. 
> One way would be to keep Metrics objects at the service level (e.g. 
> /gateway/sandbox/webhdfs/*); another would be a reaper process that clears 
> out unused objects. Other suggestions are welcome!
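
The service-level approach described above can be sketched as follows. This is an illustrative example only, not Knox's actual implementation: the class and method names are hypothetical, and the real fix lives in Knox's gateway metrics code. The idea is simply to strip the query string and truncate the path to its service prefix before using it as a metric name, so the number of metrics objects stays bounded no matter how many unique URLs are requested.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of service-level metric naming. Instead of keying a
// metric on the full request URL (unbounded cardinality), we key it on the
// service prefix, e.g. "/webhdfs/v1/tmp/file?op=LISTSTATUS" -> "/webhdfs/v1".
public class ServiceLevelMetrics {

    private final ConcurrentHashMap<String, LongAdder> counters =
            new ConcurrentHashMap<>();

    // Drop the query string and keep only the first two path segments
    // (service name and version) as the metric name.
    static String normalize(String uri) {
        int q = uri.indexOf('?');
        String path = (q >= 0) ? uri.substring(0, q) : uri;
        StringBuilder sb = new StringBuilder();
        int kept = 0;
        for (String segment : path.split("/")) {
            if (segment.isEmpty()) {
                continue;
            }
            sb.append('/').append(segment);
            if (++kept == 2) {
                break;
            }
        }
        return sb.length() == 0 ? "/" : sb.toString();
    }

    // Record a request: all URLs under the same service share one counter,
    // so the registry cannot grow without bound.
    void mark(String uri) {
        counters.computeIfAbsent(normalize(uri), k -> new LongAdder())
                .increment();
    }

    int distinctMetrics() {
        return counters.size();
    }
}
```

With this normalization, a flood of distinct WebHDFS paths produces a single metric entry rather than one per URL, which is the cardinality bound the committed change is aiming for.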



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
