[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-09-01 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150496#comment-16150496 ]

ASF subversion and git services commented on KNOX-989:
--

Commit ac532bd73c5da8997f865e048b36c76fd3c11301 in knox's branch 
refs/heads/KNOX-998-Package_Restructuring from [~moresandeep]
[ https://git-wip-us.apache.org/repos/asf?p=knox.git;h=ac532bd ]

KNOX-989 - Report metrics at service level (/webhdfs/v1) instead of url with 
args (/webhdfs/v1/?op=LISTSTATUS) (Mohammad Kamrul Islam via Sandeep More)
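
To make the commit message concrete, the sketch below shows the general idea of collapsing a per-request URL into a bounded, service-level metric name. The helper name and the two-segment cutoff are illustrative assumptions, not the code from the actual patch.

{code:java}
// Illustrative sketch only, not the committed patch.
public final class ServiceLevelMetricNames {

  // Hypothetical helper: keep only the service portion of the request path
  // (assumed here to be the first two segments, e.g. "/webhdfs/v1") and drop
  // the remainder, including query parameters such as ?op=LISTSTATUS.
  static String serviceLevelName(String contextPath, String pathInfo) {
    StringBuilder name = new StringBuilder("client.");
    name.append(contextPath);                    // e.g. "/gateway/sandbox"
    if (pathInfo != null) {
      int kept = 0;
      for (String segment : pathInfo.split("/")) {
        if (segment.isEmpty()) {
          continue;
        }
        name.append('/').append(segment);
        if (++kept == 2) {                       // stop after "/webhdfs/v1"
          break;
        }
      }
    }
    return name.toString();
  }

  public static void main(String[] args) {
    // Both requests map to the same timer name, so the number of metric
    // objects is bounded by the number of services rather than unique URLs.
    System.out.println(serviceLevelName("/gateway/sandbox", "/webhdfs/v1/tmp/file1"));
    System.out.println(serviceLevelName("/gateway/sandbox", "/webhdfs/v1/user/foo/bar"));
  }
}
{code}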


> Revisit JMX Metrics to fix the Out of Memory issue
> --
>
> Key: KNOX-989
> URL: https://issues.apache.org/jira/browse/KNOX-989
> Project: Apache Knox
> Issue Type: Bug
> Components: Server
> Reporter: Sandeep More
> Assignee: Mohammad Kamrul Islam
> Fix For: 0.14.0
>
> Attachments: KNOX-989.1.patch, KNOX-989.2.patch, Screen Shot 
> 2017-08-16 at 1.56.16 PM.png
>
>
> Bug [KNOX-986|https://issues.apache.org/jira/browse/KNOX-986] uncovers a 
> problem with Metrics when a large number of unique URLs are accessed via Knox. 
> The problem is that Knox creates a metrics object per unique URL, and these 
> metrics objects are never flushed out (for an obvious reason - to maintain the 
> metric state). 
> We need to come up with a proper fix that mitigates this while still being able 
> to use the JMX Metrics. 
> One way of doing this would be to keep Metrics objects at the service level 
> (e.g. /gateway/sandbox/webhdfs/*); the other would be to have a reaper 
> process that clears out unused objects. Other suggestions are welcome!
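
Of the two directions above, the reaper idea could look roughly like the sketch below: a background task that periodically snapshots each timer's count and drops timers whose count has not changed since the previous sweep. This assumes the registry is a Dropwizard MetricRegistry (as the metricRegistry.timer(...) call in the gateway filter suggests); the class name, sweep interval, and the "idle" test are placeholders for illustration, not a settled design.

{code:java}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical reaper: removes timers that saw no new requests between sweeps.
public class IdleTimerReaper {

  private final MetricRegistry registry;
  private final Map<String, Long> lastCounts = new HashMap<>();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public IdleTimerReaper(MetricRegistry registry) {
    this.registry = registry;
  }

  public void start(long periodMinutes) {
    scheduler.scheduleAtFixedRate(this::sweep, periodMinutes, periodMinutes, TimeUnit.MINUTES);
  }

  private synchronized void sweep() {
    for (Map.Entry<String, Timer> entry : registry.getTimers().entrySet()) {
      String name = entry.getKey();
      long count = entry.getValue().getCount();
      Long previous = lastCounts.put(name, count);
      // An unchanged count means no requests hit this URL since the last
      // sweep, so the timer is released; its accumulated history is lost,
      // which is the trade-off with this approach.
      if (previous != null && previous == count) {
        registry.remove(name);
        lastCounts.remove(name);
      }
    }
  }
}
{code}

A timer removed this way would simply be recreated the next time the URL is accessed, so the cost is losing history rather than losing coverage.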





[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-25 Thread Sandeep More (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141823#comment-16141823 ]

Sandeep More commented on KNOX-989:
---

The patch is committed, thank you for the contribution [~kislam]!






[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-25 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141819#comment-16141819 ]

ASF subversion and git services commented on KNOX-989:
--

Commit ac532bd73c5da8997f865e048b36c76fd3c11301 in knox's branch 
refs/heads/master from [~moresandeep]
[ https://git-wip-us.apache.org/repos/asf?p=knox.git;h=ac532bd ]

KNOX-989 - Report metrics at service level (/webhdfs/v1) instead of url with 
args (/webhdfs/v1/?op=LISTSTATUS) (Mohammad Kamrul Islam via Sandeep More)







[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-17 Thread Sandeep More (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130398#comment-16130398 ]

Sandeep More commented on KNOX-989:
---

I was testing the build with Zeppelin and noticed the "service" metrics. 






[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-17 Thread Mohammad Kamrul Islam (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130080#comment-16130080 ]

Mohammad Kamrul Islam commented on KNOX-989:


Very good catch [~moresandeep]!

Btw, how/when are these "service" metrics created? What commands prompted this?

 






[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-14 Thread Sandeep More (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16125642#comment-16125642 ]

Sandeep More commented on KNOX-989:
---

Hello [~kamrul], sorry for the delayed response. I think your suggestion sounds 
like a good idea.






[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-09 Thread Mohammad Kamrul Islam (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119612#comment-16119612 ]

Mohammad Kamrul Islam commented on KNOX-989:


I found the code in *InstrumentedGatewayFilter.java* where the timer keys are 
built; the related function is pasted below. I believe removing line #7 could 
reduce the number of unnecessary timer keys.

However, I would like to get comments on this idea before I send any patch.
{code:java}
1.  private Timer timer(ServletRequest request) {
2.    StringBuilder builder = new StringBuilder();
3.    builder.append("client.");
4.    builder.append(request.getServletContext().getContextPath());
5.    if (request instanceof HttpServletRequest) {
6.      HttpServletRequest httpServletRequest = (HttpServletRequest) request;
7.      builder.append(httpServletRequest.getPathInfo());
8.      builder.append(".");
9.      builder.append(httpServletRequest.getMethod());
10.     builder.append("-requests");
11.   }
12.   return metricRegistry.timer(builder.toString());
13. }
{code}
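
For context, this is roughly what the method would look like with line #7 dropped, so timer names vary only by context path and HTTP method. It is a sketch of the idea discussed here, not the patch that was ultimately committed (that patch reports at the service level, e.g. /webhdfs/v1, instead).

{code:java}
// Sketch only: the same method with the pathInfo append (line #7) removed.
private Timer timer(ServletRequest request) {
  StringBuilder builder = new StringBuilder();
  builder.append("client.");
  builder.append(request.getServletContext().getContextPath());
  if (request instanceof HttpServletRequest) {
    HttpServletRequest httpServletRequest = (HttpServletRequest) request;
    builder.append(".");
    builder.append(httpServletRequest.getMethod());
    builder.append("-requests");
  }
  return metricRegistry.timer(builder.toString());
}
{code}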






[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-05 Thread Larry McCay (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115392#comment-16115392 ]

Larry McCay commented on KNOX-989:
--

Hi [~kamrul] - have at it, man.
Thanks!






[jira] [Commented] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

2017-08-05 Thread Mohammad Kamrul Islam (JIRA)

[ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115331#comment-16115331 ]

Mohammad Kamrul Islam commented on KNOX-989:


I would prefer to do it at the service level (such as /gateway/sandbox/webhdfs).
I can work on a patch if there is no objection.



