[jira] [Commented] (EAGLE-1095) restart eagle server

2019-05-07 Thread Zhao, Qingwen (JIRA)


[ 
https://issues.apache.org/jira/browse/EAGLE-1095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835280#comment-16835280
 ] 

Zhao, Qingwen commented on EAGLE-1095:
--

Hi [~DylanZhao],

Do you use the in-memory storage in your configuration? If so, I suggest switching to a 
persistent one, such as MySQL, so that sites and metadata survive a server restart. 

> restart eagle server
> 
>
> Key: EAGLE-1095
> URL: https://issues.apache.org/jira/browse/EAGLE-1095
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Dylan Zhao
>Priority: Major
> Fix For: v0.5.0
>
>
> After restarting the eagle server, the newly created sites and data are reset. How can 
> this be solved? Otherwise, all the work done before the service restart is lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (EAGLE-1096) getJSONArray may cause exception, optJSONArray should be used

2019-05-07 Thread Zhao, Qingwen (JIRA)


 [ 
https://issues.apache.org/jira/browse/EAGLE-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on EAGLE-1096 started by Zhao, Qingwen.

> getJSONArray may cause exception, optJSONArray should be used
> -
>
> Key: EAGLE-1096
> URL: https://issues.apache.org/jira/browse/EAGLE-1096
> Project: Eagle
>  Issue Type: Bug
>Reporter: bd2019us
>Assignee: Zhao, Qingwen
>Priority: Major
> Attachments: 1.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Hello,
> The method call jsonBeansObject.getJSONArray("beans") may throw a 
> JSONException when the key does not exist, which will crash the program. To 
> be safe, optJSONArray is recommended instead; it avoids the exception when 
> the key is missing.
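
For reference, a minimal illustration of the difference, assuming the org.json-style JSONObject API referenced above (the object and key names are taken from the description):

{code}
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class OptJsonArrayDemo {
    public static void main(String[] args) {
        JSONObject jsonBeansObject = new JSONObject("{}");        // no "beans" key present

        // optJSONArray returns null instead of throwing when the key is missing
        JSONArray beans = jsonBeansObject.optJSONArray("beans");
        if (beans != null) {
            System.out.println("got " + beans.length() + " beans");
        }

        // getJSONArray throws JSONException for the same input
        try {
            jsonBeansObject.getJSONArray("beans");
        } catch (JSONException e) {
            System.err.println("getJSONArray failed: " + e.getMessage());
        }
    }
}
{code}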



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (EAGLE-1096) getJSONArray may cause exception, optJSONArray should be used

2019-05-07 Thread Zhao, Qingwen (JIRA)


[ 
https://issues.apache.org/jira/browse/EAGLE-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835276#comment-16835276
 ] 

Zhao, Qingwen commented on EAGLE-1096:
--

Hi [~bd2019us],

Thanks for contributing the fix. It would be appreciated if you could 
continue to refine your change. 

> getJSONArray may cause exception, optJSONArray should be used
> -
>
> Key: EAGLE-1096
> URL: https://issues.apache.org/jira/browse/EAGLE-1096
> Project: Eagle
>  Issue Type: Bug
>Reporter: bd2019us
>Priority: Major
> Attachments: 1.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hello,
> The method call jsonBeansObject.getJSONArray("beans") may throw a 
> JSONException when the key does not exist, which will crash the program. To 
> be safe, optJSONArray is recommended instead; it avoids the exception when 
> the key is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (EAGLE-1096) getJSONArray may cause exception, optJSONArray should be used

2019-05-07 Thread Zhao, Qingwen (JIRA)


 [ 
https://issues.apache.org/jira/browse/EAGLE-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen reassigned EAGLE-1096:


Assignee: (was: Zhao, Qingwen)

> getJSONArray may cause exception, optJSONArray should be used
> -
>
> Key: EAGLE-1096
> URL: https://issues.apache.org/jira/browse/EAGLE-1096
> Project: Eagle
>  Issue Type: Bug
>Reporter: bd2019us
>Priority: Major
> Attachments: 1.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hello,
> The method call jsonBeansObject.getJSONArray("beans") may throw a 
> JSONException when the key does not exist, which will crash the program. To 
> be safe, optJSONArray is recommended instead; it avoids the exception when 
> the key is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (EAGLE-1096) getJSONArray may cause exception, optJSONArray should be used

2019-05-07 Thread Zhao, Qingwen (JIRA)


 [ 
https://issues.apache.org/jira/browse/EAGLE-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen reassigned EAGLE-1096:


Assignee: Zhao, Qingwen

> getJSONArray may cause exception, optJSONArray should be used
> -
>
> Key: EAGLE-1096
> URL: https://issues.apache.org/jira/browse/EAGLE-1096
> Project: Eagle
>  Issue Type: Bug
>Reporter: bd2019us
>Assignee: Zhao, Qingwen
>Priority: Major
> Attachments: 1.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hello,
> The method call jsonBeansObject.getJSONArray("beans") may throw a 
> JSONException when the key does not exist, which will crash the program. To 
> be safe, optJSONArray is recommended instead; it avoids the exception when 
> the key is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (EAGLE-1097) CorrelationSpout has a performance issue

2019-05-07 Thread Zhao, Qingwen (JIRA)


 [ 
https://issues.apache.org/jira/browse/EAGLE-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen reassigned EAGLE-1097:


Assignee: Zhao, Qingwen

> CorrelationSpout has a performance issue
> --
>
> Key: EAGLE-1097
> URL: https://issues.apache.org/jira/browse/EAGLE-1097
> Project: Eagle
>  Issue Type: Improvement
>  Components: Core::Eagle Server
>Affects Versions: v0.3.0
>Reporter: zhangchi
>Assignee: Zhao, Qingwen
>Priority: Major
> Fix For: v0.3.1
>
> Attachments: Catch(04-23-09-26-01).jpg, Catch633F(04-23-09-26-01).jpg
>
>
> I ran into a problem where performance dropped dramatically when I enabled 
> two policies. The two policies subscribe to two Kafka topics, respectively. I 
> looked at the processes that were using too much CPU and found that the 
> performance bottleneck is in the spout emit phase. Each topic I subscribe to 
> has 10 partitions, and the numOfSpoutTasks I configured is 20. How should I 
> tune away this performance problem?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (EAGLE-1097) CorrelationSpout has a performance issue

2019-05-07 Thread Zhao, Qingwen (JIRA)


 [ 
https://issues.apache.org/jira/browse/EAGLE-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on EAGLE-1097 started by Zhao, Qingwen.

> CorrelationSpout has a performance issue
> --
>
> Key: EAGLE-1097
> URL: https://issues.apache.org/jira/browse/EAGLE-1097
> Project: Eagle
>  Issue Type: Improvement
>  Components: Core::Eagle Server
>Affects Versions: v0.3.0
>Reporter: zhangchi
>Assignee: Zhao, Qingwen
>Priority: Major
> Fix For: v0.3.1
>
> Attachments: Catch(04-23-09-26-01).jpg, Catch633F(04-23-09-26-01).jpg
>
>
> I ran into a problem where performance dropped dramatically when I enabled 
> two policies. The two policies subscribe to two Kafka topics, respectively. I 
> looked at the processes that were using too much CPU and found that the 
> performance bottleneck is in the spout emit phase. Each topic I subscribe to 
> has 10 partitions, and the numOfSpoutTasks I configured is 20. How should I 
> tune away this performance problem?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (EAGLE-1097) CorrelationSpout has a performance issue

2019-05-07 Thread Zhao, Qingwen (JIRA)


[ 
https://issues.apache.org/jira/browse/EAGLE-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835273#comment-16835273
 ] 

Zhao, Qingwen commented on EAGLE-1097:
--

Hi [~zhangchi],

You can either increase the number of topic partitions or reduce numOfSpoutTasks. 

If the partition count is less than the total number of spout tasks, some tasks will be 
idle. You can check the source code here:

[https://github.com/apache/storm/blob/0.9.x-branch/external/storm-kafka/src/jvm/storm/kafka/ZkCoordinator.java#L80]
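
For context, a minimal sketch of the round-robin partition-to-task assignment that the linked code performs; it is illustrative only, not the Storm source. With 10 partitions and numOfSpoutTasks = 20, tasks 10-19 receive no partitions and stay idle:

{code}
import java.util.ArrayList;
import java.util.List;

public class PartitionAssignmentDemo {
    // Round-robin assignment: task i owns partitions i, i + totalTasks, i + 2*totalTasks, ...
    static List<Integer> partitionsForTask(int taskIndex, int totalTasks, int numPartitions) {
        List<Integer> owned = new ArrayList<>();
        for (int partition = taskIndex; partition < numPartitions; partition += totalTasks) {
            owned.add(partition);
        }
        return owned;
    }

    public static void main(String[] args) {
        int numPartitions = 10, totalTasks = 20;   // the setup described in this ticket
        for (int task = 0; task < totalTasks; task++) {
            // tasks 10..19 print an empty list, i.e. they sit idle
            System.out.println("task " + task + " -> " + partitionsForTask(task, totalTasks, numPartitions));
        }
    }
}
{code}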
 

> CorrelationSpout has a performance issue
> --
>
> Key: EAGLE-1097
> URL: https://issues.apache.org/jira/browse/EAGLE-1097
> Project: Eagle
>  Issue Type: Improvement
>  Components: Core::Eagle Server
>Affects Versions: v0.3.0
>Reporter: zhangchi
>Priority: Major
> Fix For: v0.3.1
>
> Attachments: Catch(04-23-09-26-01).jpg, Catch633F(04-23-09-26-01).jpg
>
>
> I ran into a problem where performance dropped dramatically when I enabled 
> two policies. The two policies subscribe to two Kafka topics, respectively. I 
> looked at the processes that were using too much CPU and found that the 
> performance bottleneck is in the spout emit phase. Each topic I subscribe to 
> has 10 partitions, and the numOfSpoutTasks I configured is 20. How should I 
> tune away this performance problem?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (EAGLE-1051) Check if a publisher can be deleted

2017-08-06 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1051:
-
Affects Version/s: (was: v0.5.1)
   v0.5.0

> Check if a publisher can be deleted 
> 
>
> Key: EAGLE-1051
> URL: https://issues.apache.org/jira/browse/EAGLE-1051
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.1
>
>
> A publisher can be deleted only when there are no policies using it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1051) Check if a publisher can be deleted

2017-08-06 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1051:
-
Fix Version/s: (was: v0.5.0)
   v0.5.1

> Check if a publisher can be deleted 
> 
>
> Key: EAGLE-1051
> URL: https://issues.apache.org/jira/browse/EAGLE-1051
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.1
>
>
> A publisher can be deleted only when there are no policies using it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1051) Check if a publisher can be deleted

2017-08-06 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1051:
-
Affects Version/s: (was: v0.5.0)
   v0.5.1

> Check if a publisher can be deleted 
> 
>
> Key: EAGLE-1051
> URL: https://issues.apache.org/jira/browse/EAGLE-1051
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.1
>
>
> A publisher can be deleted only when there are no policies using it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (EAGLE-1029) Wrong groupSpec is generated when more than one alert engine topology

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen closed EAGLE-1029.

Resolution: Fixed

> Wrong groupSpec is generated when more than one alert engine topology 
> --
>
> Key: EAGLE-1029
> URL: https://issues.apache.org/jira/browse/EAGLE-1029
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Critical
> Fix For: v0.5.0
>
>
> According to the schedule info returned by /rest/metadata/schedulestates: 
> in alertSpecs, the new policies are assigned to alert bolts 9, 18, 19, 1, and 12 of 
> the second alert engine topology, while in groupSpecs the worker queue (alert 
> bolts) allocated for each policy is not [9, 18, 19, 1, 12]. It actually includes 
> all worker queues, including those of the other alert engine topologies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-809) Hide Kafka sink configuration used by alert engine

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-809:

Affects Version/s: (was: v0.5.1)
   v0.5.0

> Hide Kafka sink configuration used by alert engine
> --
>
> Key: EAGLE-809
> URL: https://issues.apache.org/jira/browse/EAGLE-809
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.1
>
>
> * Hide Kafka topic creation used by alert engine
> * Hide Kafka broker list used by alert engine



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-809) Hide Kafka sink configuration used by alert engine

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-809:

Affects Version/s: (was: v0.5.0)
   v0.5.1

> Hide Kafka sink configuration used by alert engine
> --
>
> Key: EAGLE-809
> URL: https://issues.apache.org/jira/browse/EAGLE-809
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.1
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.1
>
>
> * Hide Kafka topic creation used by alert engine
> * Hide Kafka broker list used by alert engine



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1029) Wrong groupSpec is generated when more than one alert engine topology

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1029:
-
Fix Version/s: v0.5.0

> Wrong groupSpec is generated when more than one alert engine topology 
> --
>
> Key: EAGLE-1029
> URL: https://issues.apache.org/jira/browse/EAGLE-1029
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Critical
> Fix For: v0.5.0
>
>
> According to the schedule info returned by /rest/metadata/schedulestates: 
> in alertSpecs, the new policies are assigned to alert bolts 9, 18, 19, 1, and 12 of 
> the second alert engine topology, while in groupSpecs the worker queue (alert 
> bolts) allocated for each policy is not [9, 18, 19, 1, 12]. It actually includes 
> all worker queues, including those of the other alert engine topologies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1029) Wrong groupSpec is generated when more than one alert engine topology

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1029:
-
Affects Version/s: (was: v0.5.0)
   v0.5.1

> Wrong groupSpec is generated when more than one alert engine topology 
> --
>
> Key: EAGLE-1029
> URL: https://issues.apache.org/jira/browse/EAGLE-1029
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.1
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Critical
>
> According to the schedule info returned by /rest/metadata/schedulestates: 
> in alertSpecs, the new policies are assigned to alert bolts 9, 18, 19, 1, and 12 of 
> the second alert engine topology, while in groupSpecs the worker queue (alert 
> bolts) allocated for each policy is not [9, 18, 19, 1, 12]. It actually includes 
> all worker queues, including those of the other alert engine topologies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1031) add intervalMin setting in the metric preview page

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1031:
-
Affects Version/s: (was: v0.5.0)
   v0.5.1

> add intervalMin setting in the metric preview page 
> ---
>
> Key: EAGLE-1031
> URL: https://issues.apache.org/jira/browse/EAGLE-1031
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.1
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> By default, the intervalMin is calculated by 
> {code}
> intervalMin = 5min,   |queryEndTime - queryStartTime| <= 6h
>               15min,  |queryEndTime - queryStartTime| <= 24h
>               30min,  |queryEndTime - queryStartTime| <= 7d
>               60min,  |queryEndTime - queryStartTime| <= 14d
>               1day,   |queryEndTime - queryStartTime| > 14d
> {code}
> In some cases, users want to view metrics at minute-level granularity. Eagle should 
> support customizing this parameter. 
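
A minimal sketch of the default mapping above, assuming millisecond timestamps; it is illustrative only, not Eagle's actual implementation:

{code}
import java.util.concurrent.TimeUnit;

public class IntervalMinDemo {
    /** Returns the default intervalMin (in minutes) for a given query time range. */
    static long defaultIntervalMin(long queryStartTime, long queryEndTime) {
        long rangeHours = TimeUnit.MILLISECONDS.toHours(Math.abs(queryEndTime - queryStartTime));
        if (rangeHours <= 6) return 5;
        if (rangeHours <= 24) return 15;
        if (rangeHours <= 7 * 24) return 30;
        if (rangeHours <= 14 * 24) return 60;
        return 24 * 60;   // 1 day
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long threeHoursAgo = now - TimeUnit.HOURS.toMillis(3);
        System.out.println(defaultIntervalMin(threeHoursAgo, now));   // prints 5
    }
}
{code}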



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1038) Support alertDuplication customization for each policy

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1038:
-
Affects Version/s: (was: v0.6.0)
   v0.5.1

> Support alertDuplication customization for each policy 
> ---
>
> Key: EAGLE-1038
> URL: https://issues.apache.org/jira/browse/EAGLE-1038
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.5.1
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Requirements
> * compatible with old versions
> * enable alert deduplication check for each policy (see the sketch below)
> * optimize DefaultDeduplicator
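
To make the per-policy deduplication requirement above concrete, here is a minimal sketch keyed on the dedupFields/dedupIntervalMin settings that appear elsewhere in this thread; it is illustrative only, not Eagle's DefaultDeduplicator:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SimpleDeduplicator {
    private final long dedupIntervalMs;
    private final Map<String, Long> lastPublished = new HashMap<>();

    SimpleDeduplicator(long dedupIntervalMin) {
        this.dedupIntervalMs = dedupIntervalMin * 60_000L;
    }

    /** Returns true if the alert should be published, false if it is a duplicate. */
    boolean accept(Map<String, Object> alert, List<String> dedupFields, long nowMs) {
        StringBuilder key = new StringBuilder();
        for (String field : dedupFields) {
            key.append(alert.get(field)).append('|');    // dedup key built from the configured fields
        }
        Long previous = lastPublished.get(key.toString());
        if (previous != null && nowMs - previous < dedupIntervalMs) {
            return false;                                // duplicate within the dedup window, suppress it
        }
        lastPublished.put(key.toString(), nowMs);        // remember when this key was last published
        return true;
    }
}
{code}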



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1041) Support policy processing pipeline

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1041:
-
Affects Version/s: (was: v0.5.0)
   v0.5.1

> Support policy processing pipeline
> --
>
> Key: EAGLE-1041
> URL: https://issues.apache.org/jira/browse/EAGLE-1041
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.1
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> In some cases, such as increment or decrement patterns, data needs to be processed 
> in more than one stage. For example, alert if the metric value increases by 
> N. Two steps are needed to get the right alert:
> 1. sort & filter the events that meet the filter conditions;
> 2. define the data-change pattern. 
> Here is a sample policy
> {code}
> from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="hadoop.namenode.dfs.missingblocks"]#window.externalTime(timestamp,1min)
>  select * group by site,host,component, metric insert into temp;
> from every 
> a=HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="hadoop.namenode.dfs.missingblocks"],
> b=HADOOP_JMX_METRIC_STREAM_SANDBOX[b.component==a.component and 
> b.metric==a.metric and b.host==a.host and b.value>a.value and a.value>100]
> select b.site,b.host,b.component, b.metric, b.value as newNumOfMissingBlocks, 
> a.value as oldNumOfMissingBlocks, (b.value-a.value) as 
> increastedNumOfMissingBlocks insert into 
> HADOOP_JMX_METRIC_STREAM_SANDBOX_MISS_BLOCKS_LARGER_OUT;
> {code}
> There are two queries in this policy. The first one, with the time-window 
> condition, tells Eagle to sort the original events. The second one defines the 
> data pattern. Due to a constraint of the Siddhi pattern syntax 
> (https://docs.wso2.com/display/CEP420/SiddhiQL+Guide+3.1#SiddhiQLGuide3.1-Pattern),
> filtering of the input events does not work. 
> Luckily, if we feed the output stream of the first query in as the input stream 
> of the second query, it works. That is the problem this ticket tries to solve. 
> Ideally, the policy could be written as 
> {code}
> from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="hadoop.namenode.dfs.missingblocks"]#window.externalTime(timestamp,1min)
>  select * group by site,host,component, metric insert into MISSING_BLOCK_OUT;
> from every a=MISSING_BLOCK_OUT[metric=="hadoop.namenode.dfs.missingblocks"],
> b=MISSING_BLOCK_OUT[b.component==a.component and b.metric==a.metric and 
> b.host==a.host and b.value>a.value and a.value>100]
> select b.site,b.host,b.component, b.metric, b.value as newNumOfMissingBlocks, 
> a.value as oldNumOfMissingBlocks, (b.value-a.value) as 
> increastedNumOfMissingBlocks insert into 
> HADOOP_JMX_METRIC_STREAM_SANDBOX_MISS_BLOCKS_LARGER_OUT;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (EAGLE-1051) Check if a publisher can be deleted

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen resolved EAGLE-1051.
--
Resolution: Fixed

> Check if a publisher can be deleted 
> 
>
> Key: EAGLE-1051
> URL: https://issues.apache.org/jira/browse/EAGLE-1051
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> A publisher can be deleted only when there are no policies using it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1051) Check if a publisher can be deleted

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1051:
-
Fix Version/s: v0.5.0

> Check if a publisher can be deleted 
> 
>
> Key: EAGLE-1051
> URL: https://issues.apache.org/jira/browse/EAGLE-1051
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.0
>
>
> A publisher can be deleted only when there are no policies using it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1055) Improve policy prototype apis

2017-08-03 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1055:
-
Affects Version/s: (was: v0.6.0)
   v0.5.1
Fix Version/s: v0.5.1

> Improve policy prototype apis 
> --
>
> Key: EAGLE-1055
> URL: https://issues.apache.org/jira/browse/EAGLE-1055
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.1
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.1
>
>
> 1. create a policy prototype from a policy
> {code}
> API: POST /rest/policyProto/create?needPolicyCreated=true
> Payload: PolicyEntity (policyDefinition + alertPublishmentIds)
> public class PolicyEntity {
> String name;   // auto created
> PolicyDefinition definition;
> List<String> alertPublishmentIds = new ArrayList<>();
> }
> {code}
> 2. create a policy prototype by policy name
> {code}
> API: POST /rest/policyProto/create/{policyId}
> {code}
> 3. create policies for site from a list of policy prototypes
> {code}
> API: POST /rest/policyProto/export/{site}
> Payload: List
> {code}
> 4. create policies for site from a list of prototypes
> {code}
> API: POST /rest/policyProto/exportByName/{site}
> Payload: List
> {code}
> 5. delete a prototype
> {code}
> API: DELETE /rest/policyProto/{uuid}
> {code}
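
An illustrative call against endpoint 2 above, using the Java 11+ HttpClient; the host/port and the policy name "JobRpcThroughput" (borrowed from EAGLE-1046 in this thread) are assumptions, not part of the ticket:

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PolicyProtoClientDemo {
    public static void main(String[] args) throws Exception {
        // Create a policy prototype from the existing policy "JobRpcThroughput"
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9090/rest/policyProto/create/JobRpcThroughput"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
{code}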



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (EAGLE-1056) Fix a link bug in the email template

2017-07-31 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen resolved EAGLE-1056.
--
Resolution: Fixed

> Fix a link bug in the email template
> 
>
> Key: EAGLE-1056
> URL: https://issues.apache.org/jira/browse/EAGLE-1056
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Minor
> Fix For: v0.5.1
>
>
> "View Alert Details" cannot be opened in the alert email.
> The root cause is that the uuid is generated inconsistently by different 
> publisher plugins.
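
A hypothetical illustration of the idea, not the actual patch: derive the alert uuid deterministically from the alert content so that every publisher plugin renders the same "View Alert Details" link (all names below are assumptions):

{code}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class AlertUuidDemo {
    // Deterministic uuid: the same alert always maps to the same uuid,
    // no matter which publisher plugin builds the email or the link.
    static String alertUuid(String policyId, String streamId, long alertTimestamp) {
        String key = policyId + "/" + streamId + "/" + alertTimestamp;
        return UUID.nameUUIDFromBytes(key.getBytes(StandardCharsets.UTF_8)).toString();
    }

    public static void main(String[] args) {
        System.out.println(alertUuid("JobRpcThroughput", "MAP_REDUCE_JOB_STREAM", 1500000000000L));
    }
}
{code}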



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1056) Fix a link bug in the email template

2017-07-31 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1056:
-
Description: 
"View Alert Details" cannot be opened in the alert email.

The root cause is that the uuid is generated inconsistently by different 
publisher plugins.

  was:
"View Alert Details" cannot be opened in the alert email.

The root cause is the uuid is inconsistent by different publisher plugins


> Fix a link bug in the email template
> 
>
> Key: EAGLE-1056
> URL: https://issues.apache.org/jira/browse/EAGLE-1056
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Minor
> Fix For: v0.5.1
>
>
> "View Alert Details" cannot be opened in the alert email.
> The root cause is that the uuid is generated inconsistently by different 
> publisher plugins.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1056) Fix a link bug in the email template

2017-07-31 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1056:
-
Description: 
"View Alert Details" cannot be opened in the alert email.

The root cause is the uuid is inconsistent by different publisher plugins

  was:"View Alert Details" cannot be opened in the alert email


> Fix a link bug in the email template
> 
>
> Key: EAGLE-1056
> URL: https://issues.apache.org/jira/browse/EAGLE-1056
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Minor
> Fix For: v0.5.1
>
>
> "View Alert Details" cannot be opened in the alert email.
> The root cause is the uuid is inconsistent by different publisher plugins



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1056) Fix a link bug in the email template

2017-07-31 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1056:
-
Priority: Minor  (was: Major)

> Fix a link bug in the email template
> 
>
> Key: EAGLE-1056
> URL: https://issues.apache.org/jira/browse/EAGLE-1056
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Minor
> Fix For: v0.5.1
>
>
> "View Alert Details" cannot be opened in the alert email



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1059) fix a bug in policy prototype resource

2017-07-31 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1059:


 Summary: fix a bug in policy prototype resource
 Key: EAGLE-1059
 URL: https://issues.apache.org/jira/browse/EAGLE-1059
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.1
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


When a PolicyEntity (or policy prototype) is exported as a policy for a site, the 
partitionSpec of the new policy is not updated, which results in a wrong 
policy. 
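
A hypothetical sketch of the fix idea, with simplified types that are not Eagle's actual classes: when a prototype is exported to a site, every partitionSpec entry has to be rebound to the site-specific stream (e.g. MAP_REDUCE_JOB_STREAM -> MAP_REDUCE_JOB_STREAM_SANDBOX), otherwise the generated policy partitions on a stream that does not exist on that site:

{code}
import java.util.List;

class PartitionSpecFix {
    static class StreamPartition { String streamId; }

    // Rewrite the stream ids in the exported policy's partitionSpec so they
    // reference the site-specific stream instead of the prototype stream.
    static void bindToSite(List<StreamPartition> partitionSpec, String baseStream, String site) {
        for (StreamPartition sp : partitionSpec) {
            if (sp.streamId.equals(baseStream)) {
                sp.streamId = baseStream + "_" + site.toUpperCase();
            }
        }
    }
}
{code}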



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1055) Improve policy prototype apis

2017-07-10 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1055:
-
Description: 
1. create a policy prototype from a policy
{code}
API: POST /rest/policyProto/create?needPolicyCreated=true
Payload: PolicyEntity (policyDefinition + alertPublishmentIds)

public class PolicyEntity extends PersistenceEntity {
String name;   // auto created
PolicyDefinition definition;
List alertPublishmentIds = new ArrayList<>();
}
{code}

2. create a policy prototype by policy name
{code}
API: POST /rest/policyProto/create/{policyId}
{code}

3. create policies for site from a list of policy protoypes
{code}
API: POST /rest/policyProto/export/{site}
Payload: List
{code}

4. create policies for site from a list of prototypes
{code}
API: POST /rest/policyProto/exportByName/{site}
Payload: List
{code}

5. delete a prototype
{code}
API: DELETE /rest/policyProto/{uuid}
{code}

  was:
1. create a policy prototype from a policy
{code}
API: /rest/policyProto/create?needPolicyCreated=true
Payload: PolicyEntity (policy + alertPublishmentIds)
{code}

2. create a policy prototype by policy name
{code}
API: /rest/policyProto/create/\{policyId\}
{code}

3. create policies for site from a list of policy protoypes
{code}
API: /rest/policyProto/export/\{site\}
Payload: List
{code}

4. create policies for site from a list of prototypes
{code}
API: /rest/policyProto/exportByName/\{site\}
Payload: List
{code}



> Improve policy prototype apis 
> --
>
> Key: EAGLE-1055
> URL: https://issues.apache.org/jira/browse/EAGLE-1055
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> 1. create a policy prototype from a policy
> {code}
> API: POST /rest/policyProto/create?needPolicyCreated=true
> Payload: PolicyEntity (policyDefinition + alertPublishmentIds)
> public class PolicyEntity extends PersistenceEntity {
> String name;   // auto created
> PolicyDefinition definition;
> List<String> alertPublishmentIds = new ArrayList<>();
> }
> {code}
> 2. create a policy prototype by policy name
> {code}
> API: POST /rest/policyProto/create/{policyId}
> {code}
> 3. create policies for site from a list of policy prototypes
> {code}
> API: POST /rest/policyProto/export/{site}
> Payload: List
> {code}
> 4. create policies for site from a list of prototypes
> {code}
> API: POST /rest/policyProto/exportByName/{site}
> Payload: List
> {code}
> 5. delete a prototype
> {code}
> API: DELETE /rest/policyProto/{uuid}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1044) Support policy import using a policy prototype

2017-06-26 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1044:
-
Description: 
This new feature is to facilitate policy management among multiple sites. With 
a single policy prototype, it can be imported into different sites. 
* add a new entity PolicyEntity in the metadata database
* provides RESTful APIs to query/delete/update a policy template
* easy to onboard policies on a new site

  was:
* add a new entity PolicyEntity in the metadata database
* provides RESTful APIs to query/delete/update a policy template
* easy to onboard policies on a new site


> Support policy import using a policy prototype
> --
>
> Key: EAGLE-1044
> URL: https://issues.apache.org/jira/browse/EAGLE-1044
> Project: Eagle
>  Issue Type: New Feature
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Critical
>
> This new feature is to facilitate policy management among multiple sites. 
> With a single policy prototype, it can be imported into different sites. 
> * add a new entity PolicyEntity in the metadata database
> * provides RESTful APIs to query/delete/update a policy template
> * easy to onboard policies on a new site



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1056) Fix a link bug in the email template

2017-06-23 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1056:
-
Summary: Fix a link bug in the email template  (was: Fix a link bug in the 
Email template)

> Fix a link bug in the email template
> 
>
> Key: EAGLE-1056
> URL: https://issues.apache.org/jira/browse/EAGLE-1056
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0, v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> "View Alert Details" cannot be opened. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1056) Fix a link bug in the Email template

2017-06-23 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1056:


 Summary: Fix a link bug in the Email template
 Key: EAGLE-1056
 URL: https://issues.apache.org/jira/browse/EAGLE-1056
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0, v0.6.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


"View Alert Details" cannot be opened. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1055) Improve policy prototype apis

2017-06-21 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1055:
-
Description: 
1. create a policy prototype from a policy
{code}
API: /rest/policyProto/create?needPolicyCreated=true
Payload: PolicyEntity (policy + alertPublishmentIds)
{code}

2. create a policy prototype by policy name
{code}
API: /rest/policyProto/create/\{policyId\}
{code}

3. create policies for site from a list of policy protoypes
{code}
API: /rest/policyProto/export/\{site\}
Payload: List
{code}

4. create policies for site from a list of prototypes
{code}
API: /rest/policyProto/exportByName/\{site\}
Payload: List
{code}


  was:
1. create a policy prototype from a policy
{code}
API: /rest/policyProto/create?needPolicyCreated=true
Payload: PolicyEntity (policy + alertPublishmentIds)
{code}

2. create a policy prototype by policy name
{code}
API: /rest/policyProto/create/\{policyId\}
{code}

3. create policies for {site} from a list of policy protoypes
{code}
API: /rest/policyProto/export/\{site\}
Payload: List
{code}

4. create policies for {site} from a list of prototypes
{code}
API: /rest/policyProto/exportByName/\{site\}
Payload: List
{code}



> Improve policy prototype apis 
> --
>
> Key: EAGLE-1055
> URL: https://issues.apache.org/jira/browse/EAGLE-1055
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> 1. create a policy prototype from a policy
> {code}
> API: /rest/policyProto/create?needPolicyCreated=true
> Payload: PolicyEntity (policy + alertPublishmentIds)
> {code}
> 2. create a policy prototype by policy name
> {code}
> API: /rest/policyProto/create/\{policyId\}
> {code}
> 3. create policies for site from a list of policy prototypes
> {code}
> API: /rest/policyProto/export/\{site\}
> Payload: List
> {code}
> 4. create policies for site from a list of prototypes
> {code}
> API: /rest/policyProto/exportByName/\{site\}
> Payload: List
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1055) Improve policy prototype apis

2017-06-21 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1055:
-
Description: 
1. create a policy prototype from a policy
{code}
API: /rest/policyProto/create?needPolicyCreated=true
Payload: PolicyEntity (policy + alertPublishmentIds)
{code}

2. create a policy prototype by policy name
{code}
API: /rest/policyProto/create/\{policyId\}
{code}

3. create policies for {site} from a list of policy protoypes
{code}
API: /rest/policyProto/export/\{site\}
Payload: List
{code}

4. create policies for {site} from a list of prototypes
{code}
API: /rest/policyProto/exportByName/\{site\}
Payload: List
{code}


  was:
1. create a policy prototype from a policy
API: /rest/policyProto/create?needPolicyCreated=true
Payload: PolicyEntity (policy + alertPublishmentIds)

2. create a policy prototype by policy name
API: /rest/policyProto/create/{policyId}

3. create policies for {site} from a list of policy protoypes
API: /rest/policyProto/export/{site}
Payload: List

4. create policies for {site} from a list of prototypes
API: /rest/policyProto/exportByName/{site}
Payload: List




> Improve policy prototype apis 
> --
>
> Key: EAGLE-1055
> URL: https://issues.apache.org/jira/browse/EAGLE-1055
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> 1. create a policy prototype from a policy
> {code}
> API: /rest/policyProto/create?needPolicyCreated=true
> Payload: PolicyEntity (policy + alertPublishmentIds)
> {code}
> 2. create a policy prototype by policy name
> {code}
> API: /rest/policyProto/create/\{policyId\}
> {code}
> 3. create policies for {site} from a list of policy prototypes
> {code}
> API: /rest/policyProto/export/\{site\}
> Payload: List
> {code}
> 4. create policies for {site} from a list of prototypes
> {code}
> API: /rest/policyProto/exportByName/\{site\}
> Payload: List
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1053) Support typeahead in Eagle UI

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1053:
-
Fix Version/s: v0.6.0

> Support typeahead in Eagle UI
> -
>
> Key: EAGLE-1053
> URL: https://issues.apache.org/jira/browse/EAGLE-1053
> Project: Eagle
>  Issue Type: New Feature
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
> Fix For: v0.6.0
>
>
> Support typeahead in Eagle UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1053) Support typeahead in Eagle UI

2017-06-20 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1053:


 Summary: Support typeahead in Eagle UI
 Key: EAGLE-1053
 URL: https://issues.apache.org/jira/browse/EAGLE-1053
 Project: Eagle
  Issue Type: New Feature
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang


Support typeahead in Eagle UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1046:
-
Description: 
0. policy prototype entity 
{code}
  String name
  PolicyDefinition definition
  List alertPublishmentIds
{code}


1. load a list of policies to new site "sandbox" from policy prototypes by  
{{POST /rest/policyProto/export/sandbox}}

{code}
  [ { "definition": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. create a new policy prototype with an existing policy by {{ POST 
/rest/policyProto/import}}
{code}
 {   "definition": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
 }
{code}

3. update or create a policy prototype by {{POST /rest/policyProto}}
{code}
{
"policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 or 
reduceOpsPerSecond > 1000 

[jira] [Updated] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1046:
-
Description: 
0. policy prototype entity 
{code}
  String name
  PolicyDefinition definition
  List alertPublishmentIds
{code}


1. load policies to new site "sandbox" from policy prototypes by  {{POST 
/rest/policyProto/export/sandbox}}

{code}
  [ { "definition": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. create a new policy prototype with an existing policy by {{ POST 
/rest/policyProto/import}}
{code}
 {   "definition": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
 }
{code}

3. update or create a policy prototype by {{POST /rest/policyProto}}
{code}
{
"policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 or 
reduceOpsPerSecond > 1000 or 

[jira] [Updated] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1046:
-
Description: 
1. load policies to new site "sandbox" from policy prototypes by  {{POST 
/rest/policyProto/export/sandbox}}

{code}
  [ { "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. create a new policy prototype with an existing policy by {{ POST 
/rest/policyProto/import}}
{code}
 {   "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
}
 }
{code}

3. update or create a policy prototype by {{POST /rest/policyProto}}
{code}
{
"policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 or 
reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 50] 
select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": 

[jira] [Created] (EAGLE-1051) Check if a publisher can be deleted

2017-06-20 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1051:


 Summary: Check if a publisher can be deleted 
 Key: EAGLE-1051
 URL: https://issues.apache.org/jira/browse/EAGLE-1051
 Project: Eagle
  Issue Type: Sub-task
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


A publisher can be deleted only when there are no policies using it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1048) Delete an alert publisher on Eagle UI

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1048:
-
Description: 
Administer can delete  an alert publisher on Eagle UI

Request: DELETE /publishments/\{name\}

Response: 
{code}
OpResult {
public int code = 200;   // 200 = SUCCESS 
public String message = "";
}
{code}

  was:
Administer can delete  an alert publisher on Eagle UI

Request: DELETE /publishments/{name}

Response: 
{code}
OpResult {
public int code = 200;   // 200 = SUCCESS 
public String message = "";
}
{code}


> Delete an alert publisher on Eagle UI
> -
>
> Key: EAGLE-1048
> URL: https://issues.apache.org/jira/browse/EAGLE-1048
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>Priority: Minor
>
> Administrators can delete an alert publisher on the Eagle UI
> Request: DELETE /publishments/\{name\}
> Response: 
> {code}
> OpResult {
> public int code = 200;   // 200 = SUCCESS 
> public String message = "";
> }
> {code}
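
An illustrative client call for the request/response above, using the Java 11+ HttpClient; the host/port, the /rest prefix, and the publisher name "emailPublisher" are assumptions, not part of the ticket:

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeletePublisherDemo {
    public static void main(String[] args) throws Exception {
        // DELETE /publishments/{name}
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9090/rest/publishments/emailPublisher"))
                .DELETE()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // The body is expected to be an OpResult JSON such as {"code":200,"message":""}.
        System.out.println(response.body());
    }
}
{code}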



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1048) Delete an alert publisher on Eagle UI

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1048:
-
Description: 
Administer can delete  an alert publisher on Eagle UI

Request: DELETE /publishments/{name}

Response: 
{code}
OpResult {
public int code = 200;   // 200 = SUCCESS 
public String message = "";
}
{code}

  was:
Administer can delete  an alert publisher on Eagle UI

Request: DELETE /publishments/{name}

Response: 
{code}
OpResult {
public int code = 200;
public String message = "";
}
{code}


> Delete an alert publisher on Eagle UI
> -
>
> Key: EAGLE-1048
> URL: https://issues.apache.org/jira/browse/EAGLE-1048
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>Priority: Minor
>
> Administrators can delete an alert publisher on the Eagle UI
> Request: DELETE /publishments/{name}
> Response: 
> {code}
> OpResult {
> public int code = 200;   // 200 = SUCCESS 
> public String message = "";
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1048) Delete an alert publisher on Eagle UI

2017-06-20 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1048:
-
Description: 
Administer can delete  an alert publisher on Eagle UI

Request: DELETE /publishments/{name}

Response: 
{code}
OpResult {
public int code = 200;
public String message = "";
}
{code}

  was:Administer can delete  an alert publisher on Eagle UI


> Delete an alert publisher on Eagle UI
> -
>
> Key: EAGLE-1048
> URL: https://issues.apache.org/jira/browse/EAGLE-1048
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>Priority: Minor
>
> Administrators can delete an alert publisher on the Eagle UI
> Request: DELETE /publishments/{name}
> Response: 
> {code}
> OpResult {
> public int code = 200;
> public String message = "";
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1050) Resume Eagle Jenkins check

2017-06-19 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1050:


 Summary: Resume Eagle Jenkins check
 Key: EAGLE-1050
 URL: https://issues.apache.org/jira/browse/EAGLE-1050
 Project: Eagle
  Issue Type: Task
Reporter: Zhao, Qingwen
Assignee: Michael Wu


The Eagle Jenkins check seems to have stopped. Please help to check it. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1044) Support policy Import using a policy prototype

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1044:
-
Description: 
* add a new entity PolicyEntity in the metadata database
* provides RESTful APIs to query/delete/update a policy template
* easy to onboard policies on a new site

  was:
* add a new entity PolicyTemplate in the metadata database
* provides RESTful APIs to query/delete/update a policy template
* easy to onboard policies on a new site


> Support policy Import using a policy prototype
> --
>
> Key: EAGLE-1044
> URL: https://issues.apache.org/jira/browse/EAGLE-1044
> Project: Eagle
>  Issue Type: New Feature
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Critical
>
> * add a new entity PolicyEntity in the metadata database
> * provides RESTful APIs to query/delete/update a policy template
> * easy to onboard policies on a new site



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1044) Support policy Import using a policy prototype

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1044:
-
Summary: Support policy Import using a policy prototype  (was: Create a new 
policy using a policy template)

> Support policy Import using a policy prototype
> --
>
> Key: EAGLE-1044
> URL: https://issues.apache.org/jira/browse/EAGLE-1044
> Project: Eagle
>  Issue Type: New Feature
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>Priority: Critical
>
> * add a new entity PolicyTemplate in the metadata database
> * provides RESTful APIs to query/delete/update a policy template
> * easy to onboard policies on a new site



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1046:
-
Description: 
1. load policies to new site "sandbox" from policy prototypes by  {{POST 
/rest/policyProto/loadToSite/sandbox}}

{code}
  [ { "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. create a new policy prototype from an existing policy via {{POST 
/rest/policyProto/saveAsProto}}
{code}
 {   "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
}
 }
{code}

3. update or create a policy prototype via {{POST /rest/policyProto}}
{code}
{
"policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 or 
reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 50] 
select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",

[jira] [Updated] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1046:
-
Description: 
1. load policies to the new site "sandbox" from policy prototypes via {{POST 
/rest/policyProto/loadToSite/sandbox}}

{code}
  [ { "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. create a new policy prototype from an existing policy via
POST /rest/policyProto/saveAsProto
{code}
 {   "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
}
 }
{code}

3. update or create a policy prototype
{code}
{
"policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 or 
reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 50] 
select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,

[jira] [Created] (EAGLE-1049) Support metric filter in the metric preview page

2017-06-19 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1049:


 Summary: Support metric filter in the metric preview page 
 Key: EAGLE-1049
 URL: https://issues.apache.org/jira/browse/EAGLE-1049
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.6.0
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang
Priority: Minor


Support metric filter in the metric preview page 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1048) Delete an alert publisher on Eagle UI

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1048:
-
Priority: Minor  (was: Major)

> Delete an alert publisher on Eagle UI
> -
>
> Key: EAGLE-1048
> URL: https://issues.apache.org/jira/browse/EAGLE-1048
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>Priority: Minor
>
> Administrators can delete an alert publisher on the Eagle UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1048) Delete an alert publisher on Eagle UI

2017-06-19 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1048:


 Summary: Delete an alert publisher on Eagle UI
 Key: EAGLE-1048
 URL: https://issues.apache.org/jira/browse/EAGLE-1048
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.6.0
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang


Administrators can delete an alert publisher on the Eagle UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1047) View all alert publishers on Eagle UI

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1047:
-
Description: Users can view all the publisher configurations

> View all alert publishers on Eagle UI
> -
>
> Key: EAGLE-1047
> URL: https://issues.apache.org/jira/browse/EAGLE-1047
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>
> Users can view all the publisher configurations



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1047) View all alert publishers on Eagle UI

2017-06-19 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1047:


 Summary: View all alert publishers on Eagle UI
 Key: EAGLE-1047
 URL: https://issues.apache.org/jira/browse/EAGLE-1047
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.6.0
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-19 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1046:
-
Description: 
1. load policies to the new site "sandbox" from policy prototypes via `POST 
/rest/policyProto/loadToSite/sandbox`

{code}
  [ { "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. create a new policy prototype from an existing policy via
POST /rest/policyProto/saveAsProto
{code}
 {   "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
}
 }
{code}

3. update or create a policy prototype
{code}
{
"policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 or 
reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 50] 
select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,

[jira] [Created] (EAGLE-1046) Eagle supports policies import to a new site from a policy prototype

2017-06-19 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1046:


 Summary: Eagle supports policies import to a new site from a 
policy prototype
 Key: EAGLE-1046
 URL: https://issues.apache.org/jira/browse/EAGLE-1046
 Project: Eagle
  Issue Type: New Feature
Affects Versions: v0.6.0
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang


1. load policies to the new site `sandbox` (see the client sketch after this list)
POST /rest/policyProto/loadToSite/sandbox
{code}
  [ { "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": null,
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
},
"alertPublishmentIds": []
}
]
{code}

2. save an existing policy as a policy prototype
POST /rest/policyProto/saveAsProto
{code}
 {   "policyProto": {
"name": "JobRpcThroughput",
"description": "Policy for 
MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"inputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX"
],
"outputStreams": [
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT"
],
"siteId": "sandbox",
"definition": {
"type": "siddhi",
"value": "from MAP_REDUCE_JOB_STREAM_SANDBOX[mapOpsPerSecond > 1000 
or reduceOpsPerSecond > 1000 or avgOpsPerMapTask > 50 or avgOpsPerReduceTask > 
50] select * insert into MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT;",
"handlerClass": null,
"properties": {},
"inputStreams": [],
"outputStreams": []
},
"stateDefinition": null,
"policyStatus": "DISABLED",
"alertDefinition": {
"templateType": "TEXT",
"subject": "$site job rpc",
"body": "$site job rpc throughput",
"severity": "WARNING",
"category": "JPM"
},
"alertDeduplications": [
{
"outputStreamId": 
"MAP_REDUCE_JOB_STREAM_SANDBOX_RPC_THROUGHPUT_OUT",
"dedupIntervalMin": "0",
"dedupFields": []
}
],
"partitionSpec": [
{
"streamId": "MAP_REDUCE_JOB_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
}
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "JPM"
}
 }
{code}

3. get all policy prototypes 
GET /rest/policyProto

4. delete a policy prototype
DELETE /rest/policyProto/{uuid}

5. update a policy prototype
POST /rest/policyProto
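
For illustration, below is a minimal client-side sketch of how these endpoints 
could be exercised. It assumes an Eagle server reachable at http://localhost:9090 
and request bodies shaped like the JSON above; the base URL, the JSON file names, 
and the class name are illustrative assumptions, not part of this ticket.
{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class PolicyProtoClient {
    // Assumed base URL for a local Eagle server; adjust to the real deployment.
    private static final String BASE = "http://localhost:9090/rest/policyProto";
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // 1. Load policy prototypes onto the site "sandbox" (payload: array of policyProto wrappers).
        post(BASE + "/loadToSite/sandbox", Files.readString(Path.of("policy-protos.json")));

        // 2. Save an existing policy as a prototype (payload: single policyProto wrapper).
        String policyJson = Files.readString(Path.of("existing-policy.json"));
        post(BASE + "/saveAsProto", policyJson);

        // 3. Get all policy prototypes.
        HttpRequest get = HttpRequest.newBuilder(URI.create(BASE)).GET().build();
        System.out.println(CLIENT.send(get, HttpResponse.BodyHandlers.ofString()).body());

        // 4. Delete a policy prototype by uuid (placeholder value, replace with a real uuid).
        String uuid = "replace-with-a-real-uuid";
        HttpRequest delete = HttpRequest.newBuilder(URI.create(BASE + "/" + uuid)).DELETE().build();
        System.out.println(CLIENT.send(delete, HttpResponse.BodyHandlers.ofString()).statusCode());

        // 5. Update (or create) a policy prototype.
        post(BASE, policyJson);
    }

    private static void post(String url, String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(url + " -> " + response.statusCode());
    }
}
{code}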



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1044) Create a new policy using a policy template

2017-06-15 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1044:


 Summary: Create a new policy using a policy template
 Key: EAGLE-1044
 URL: https://issues.apache.org/jira/browse/EAGLE-1044
 Project: Eagle
  Issue Type: New Feature
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen
Priority: Critical


* add a new entity PolicyTemplate in the metadata database
* provides RESTful APIs to query/delete/update a policy template
* easy to onboard policies on a new site



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1041) Support policy processing pipeline

2017-06-13 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1041:


 Summary: Support policy processing pipeline
 Key: EAGLE-1041
 URL: https://issues.apache.org/jira/browse/EAGLE-1041
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


In some cases, such as increment or decrement patterns, data needs to be processed 
in more than one stage. For example, to alert when the metric value increases by N, 
two steps are needed to produce the right alert:
1. sort & filter the events that meet the filter conditions;
2. define the data change pattern.

Here is a sample policy
{code}
from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="hadoop.namenode.dfs.missingblocks"]#window.externalTime(timestamp, 1 min)
select * group by site, host, component, metric insert into temp;
from every
a=HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="hadoop.namenode.dfs.missingblocks"],
b=HADOOP_JMX_METRIC_STREAM_SANDBOX[b.component==a.component and b.metric==a.metric and b.host==a.host and b.value>a.value and a.value>100]
select b.site, b.host, b.component, b.metric, b.value as newNumOfMissingBlocks,
a.value as oldNumOfMissingBlocks, (b.value-a.value) as increasedNumOfMissingBlocks
insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_MISS_BLOCKS_LARGER_OUT;
{code}

There are two queries in this policy. The first one, with the time-window 
condition, tells Eagle to sort the original events. The second one defines the 
data pattern. Due to a constraint of the Siddhi pattern syntax 
(https://docs.wso2.com/display/CEP420/SiddhiQL+Guide+3.1#SiddhiQLGuide3.1-Pattern),
filtering the input events this way does not work.

Luckily, it does work if we use the output stream of the first query as the 
input stream of the second query. That is the problem this ticket tries to solve.

Ideally, the right policy can be written as 
{code}
from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="hadoop.namenode.dfs.missingblocks"]#window.externalTime(timestamp, 1 min)
select * group by site, host, component, metric insert into MISSING_BLOCK_OUT;
from every a=MISSING_BLOCK_OUT[metric=="hadoop.namenode.dfs.missingblocks"],
b=MISSING_BLOCK_OUT[b.component==a.component and b.metric==a.metric and b.host==a.host and b.value>a.value and a.value>100]
select b.site, b.host, b.component, b.metric, b.value as newNumOfMissingBlocks,
a.value as oldNumOfMissingBlocks, (b.value-a.value) as increasedNumOfMissingBlocks
insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_MISS_BLOCKS_LARGER_OUT;
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (EAGLE-1040) Metric

2017-06-13 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen closed EAGLE-1040.

Resolution: Not A Problem

> Metric
> --
>
> Key: EAGLE-1040
> URL: https://issues.apache.org/jira/browse/EAGLE-1040
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>
> In the metric preview page (/#/metric/preview), users can choose host and value



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (EAGLE-1040) Metric

2017-06-09 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1040:


 Summary: Metric
 Key: EAGLE-1040
 URL: https://issues.apache.org/jira/browse/EAGLE-1040
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen


In the metric preview page (/#/metric/preview), users can choose host and value



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1039) load alert de-duplication configuration in the policy edit page

2017-06-09 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1039:


 Summary: load alert de-duplication configuration in the policy edit 
page
 Key: EAGLE-1039
 URL: https://issues.apache.org/jira/browse/EAGLE-1039
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.6.0
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang


If a user wants to edit an existing policy, the alert de-duplication info should be 
loaded from the database when he/she enters that page. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-1038) Support alertDuplication customization for each policy

2017-06-07 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1038:
-
Description: 
Requirements
* compatible with old versions
* enable alertDuplication check for each policy 
* optimize DefaultDeduplicator


  was:
Requirements
* compatible with old versions
* enable alertDuplication check for each policy 



> Support alertDuplication customization for each policy 
> ---
>
> Key: EAGLE-1038
> URL: https://issues.apache.org/jira/browse/EAGLE-1038
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Requirements
> * compatible with old versions
> * enable alertDuplication check for each policy 
> * optimize DefaultDeduplicator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-1038) Support alertDuplication customization for each policy

2017-06-07 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1038:
-
Summary: Support alertDuplication customization for each policy   (was: 
enable alertDuplication check for each policy )

> Support alertDuplication customization for each policy 
> ---
>
> Key: EAGLE-1038
> URL: https://issues.apache.org/jira/browse/EAGLE-1038
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Requirements
> * compatible with old versions
> * enable alertDuplication check for each policy 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-1037) Add alertDeduplication configurations on Eagle UI

2017-06-07 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1037:
-
Description: 
Add alertDeduplication configurations on Eagle UI

Here is the sample policy
{code}
{
"name": "capacityUsage",
"description": "Policy for 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
"inputStreams": [
  "HADOOP_JMX_METRIC_STREAM_SANDBOX"
],
"outputStreams": [
  "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT"
],
"siteId": "sandbox",
"definition": {
  "type": "siddhi",
  "value": "from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
\"hadoop.namenode.fsnamesystemstate.capacityusage\" and convert(value, 
\"long\") > 90]select site, host, component, metric, convert(value, \"long\") 
as value, timestamp insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT;",
  "handlerClass": null,
  "properties": {},
  "inputStreams": [],
  "outputStreams": []
},
"stateDefinition": null,
"policyStatus": "ENABLED",
"alertDefinition": {
  "templateType": "TEXT",
  "subject": "$site capacity exceeds 90%",
  "body": "$site capacity exceeds 90%",
  "severity": "WARNING",
  "category": "HDFS"
},
"alertDeduplications": [
  {
"outputStreamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
"dedupIntervalMin": "1",
"dedupFields": [
  "site",
  "component",
  "host",
  "metric"
]
  }
],
"partitionSpec": [
  {
"streamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
  }
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "HDFS"
  }
{code}

  was:
Add alertDeduplication configurations on Eagle UI

Here is the Sample
{code}
{
"name": "capacityUsage",
"description": "Policy for 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
"inputStreams": [
  "HADOOP_JMX_METRIC_STREAM_SANDBOX"
],
"outputStreams": [
  "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT"
],
"siteId": "sandbox",
"definition": {
  "type": "siddhi",
  "value": "from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
\"hadoop.namenode.fsnamesystemstate.capacityusage\" and convert(value, 
\"long\") > 90]select site, host, component, metric, convert(value, \"long\") 
as value, timestamp insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT;",
  "handlerClass": null,
  "properties": {},
  "inputStreams": [],
  "outputStreams": []
},
"stateDefinition": null,
"policyStatus": "ENABLED",
"alertDefinition": {
  "templateType": "TEXT",
  "subject": "$site capacity exceeds 90%",
  "body": "$site capacity exceeds 90%",
  "severity": "WARNING",
  "category": "HDFS"
},
"alertDeduplications": [
  {
"outputStreamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
"dedupIntervalMin": "1",
"dedupFields": [
  "site",
  "component",
  "host",
  "metric"
]
  }
],
"partitionSpec": [
  {
"streamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
  }
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "HDFS"
  }
{code}


> Add alertDeduplication configurations on Eagle UI
> -
>
> Key: EAGLE-1037
> URL: https://issues.apache.org/jira/browse/EAGLE-1037
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>Priority: Critical
>
> Add alertDeduplication configurations on Eagle UI
> Here is the sample policy
> {code}
> {
> "name": "capacityUsage",
> "description": "Policy for 
> HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
> "inputStreams": [
>   "HADOOP_JMX_METRIC_STREAM_SANDBOX"
> ],
> "outputStreams": [
>   "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT"
> ],
> "siteId": "sandbox",
> "definition": {
>   "type": "siddhi",
>   "value": "from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
> \"hadoop.namenode.fsnamesystemstate.capacityusage\" and convert(value, 
> \"long\") > 90]select site, host, component, metric, convert(value, \"long\") 
> as value, timestamp insert into 
> HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT;",
>   "handlerClass": null,
>   "properties": {},
>   "inputStreams": [],
>   "outputStreams": []
> },
> "stateDefinition": null,
> "policyStatus": "ENABLED",
> "alertDefinition": {
>  

[jira] [Updated] (EAGLE-1037) Add alertDeduplication configurations on Eagle UI

2017-06-07 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1037:
-
Description: 
Add alertDeduplication configurations on Eagle UI

Here is the Sample
{code}
{
"name": "capacityUsage",
"description": "Policy for 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
"inputStreams": [
  "HADOOP_JMX_METRIC_STREAM_SANDBOX"
],
"outputStreams": [
  "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT"
],
"siteId": "sandbox",
"definition": {
  "type": "siddhi",
  "value": "from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
\"hadoop.namenode.fsnamesystemstate.capacityusage\" and convert(value, 
\"long\") > 90]select site, host, component, metric, convert(value, \"long\") 
as value, timestamp insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT;",
  "handlerClass": null,
  "properties": {},
  "inputStreams": [],
  "outputStreams": []
},
"stateDefinition": null,
"policyStatus": "ENABLED",
"alertDefinition": {
  "templateType": "TEXT",
  "subject": "$site capacity exceeds 90%",
  "body": "$site capacity exceeds 90%",
  "severity": "WARNING",
  "category": "HDFS"
},
"alertDeduplications": [
  {
"outputStreamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
"dedupIntervalMin": "1",
"dedupFields": [
  "site",
  "component",
  "host",
  "metric"
]
  }
],
"partitionSpec": [
  {
"streamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX",
"type": "SHUFFLE",
"columns": [],
"sortSpec": null
  }
],
"dedicated": false,
"parallelismHint": 5,
"alertSeverity": "WARNING",
"alertCategory": "HDFS"
  }
{code}

  was:Add alertDeduplication configurations on Eagle UI


> Add alertDeduplication configurations on Eagle UI
> -
>
> Key: EAGLE-1037
> URL: https://issues.apache.org/jira/browse/EAGLE-1037
> Project: Eagle
>  Issue Type: Sub-task
>Affects Versions: v0.6.0
>Reporter: Zhao, Qingwen
>Assignee: Jilin, Jiang
>Priority: Critical
>
> Add alertDeduplication configurations on Eagle UI
> Here is the Sample
> {code}
> {
> "name": "capacityUsage",
> "description": "Policy for 
> HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
> "inputStreams": [
>   "HADOOP_JMX_METRIC_STREAM_SANDBOX"
> ],
> "outputStreams": [
>   "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT"
> ],
> "siteId": "sandbox",
> "definition": {
>   "type": "siddhi",
>   "value": "from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
> \"hadoop.namenode.fsnamesystemstate.capacityusage\" and convert(value, 
> \"long\") > 90]select site, host, component, metric, convert(value, \"long\") 
> as value, timestamp insert into 
> HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT;",
>   "handlerClass": null,
>   "properties": {},
>   "inputStreams": [],
>   "outputStreams": []
> },
> "stateDefinition": null,
> "policyStatus": "ENABLED",
> "alertDefinition": {
>   "templateType": "TEXT",
>   "subject": "$site capacity exceeds 90%",
>   "body": "$site capacity exceeds 90%",
>   "severity": "WARNING",
>   "category": "HDFS"
> },
> "alertDeduplications": [
>   {
> "outputStreamId": 
> "HADOOP_JMX_METRIC_STREAM_SANDBOX_CAPACITY_USAGE_OUT",
> "dedupIntervalMin": "1",
> "dedupFields": [
>   "site",
>   "component",
>   "host",
>   "metric"
> ]
>   }
> ],
> "partitionSpec": [
>   {
> "streamId": "HADOOP_JMX_METRIC_STREAM_SANDBOX",
> "type": "SHUFFLE",
> "columns": [],
> "sortSpec": null
>   }
> ],
> "dedicated": false,
> "parallelismHint": 5,
> "alertSeverity": "WARNING",
> "alertCategory": "HDFS"
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1036) policies with deduplication settings

2017-06-07 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1036:


 Summary: policies with deduplication settings
 Key: EAGLE-1036
 URL: https://issues.apache.org/jira/browse/EAGLE-1036
 Project: Eagle
  Issue Type: New Feature
Affects Versions: v0.6.0
Reporter: Zhao, Qingwen


This improvement moves the de-duplication logic from the alert publishers to the 
policy side. Users can set the de-duplication interval and fields when creating a 
new policy. At the same time, it remains compatible with previous versions.
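
As a rough sketch of the intended behaviour (not the actual Eagle implementation; 
the class and method names are made up for illustration), a policy-side 
de-duplicator could suppress any alert that repeats the same values of the 
configured dedupFields within dedupIntervalMin minutes:
{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PolicyAlertDeduplicator {
    private final List<String> dedupFields;      // e.g. ["site", "component", "host", "metric"]
    private final long dedupIntervalMillis;      // dedupIntervalMin converted to milliseconds
    private final Map<String, Long> lastEmitted = new HashMap<>();

    public PolicyAlertDeduplicator(List<String> dedupFields, long dedupIntervalMin) {
        this.dedupFields = dedupFields;
        this.dedupIntervalMillis = dedupIntervalMin * 60_000L;
    }

    /** Returns true if the alert should be published, false if it is a duplicate. */
    public boolean accept(Map<String, Object> alert, long timestamp) {
        // Build a key from the configured de-duplication fields.
        String key = dedupFields.stream()
                .map(field -> String.valueOf(alert.get(field)))
                .collect(Collectors.joining("|"));
        Long last = lastEmitted.get(key);
        if (last != null && timestamp - last < dedupIntervalMillis) {
            return false;   // same field combination seen within the interval: suppress
        }
        lastEmitted.put(key, timestamp);
        return true;
    }
}
{code}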



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1032) Add policy duplication settings in the policy definition page

2017-06-04 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1032:


 Summary: Add policy duplication settings in the policy definition 
page
 Key: EAGLE-1032
 URL: https://issues.apache.org/jira/browse/EAGLE-1032
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen


Add policy duplication settings in the policy definition page. 

1. Field name: dedupIntervalMin   Type: string (text)
   e.g., 1, 2, 10
2. Field name: dedupFields   Type: array (checkbox)
   Description: show all stream columns of STRING type
   e.g., ["site", "component"]




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1031) add intervalMin setting in the metric preview page

2017-06-04 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1031:


 Summary: add intervalMin setting in the metric preview page 
 Key: EAGLE-1031
 URL: https://issues.apache.org/jira/browse/EAGLE-1031
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


By default, the intervalMin is calculated by 
{code}
intervalMin = 5min,   |queryEndTime - queryStartTime| <= 6h
              15min,  |queryEndTime - queryStartTime| <= 24h
              30min,  |queryEndTime - queryStartTime| <= 7d
              60min,  |queryEndTime - queryStartTime| <= 14d
              1day,   |queryEndTime - queryStartTime| > 14d
{code}

In some cases, users want to view metrics at the minute level. Eagle should allow 
this parameter to be customized.
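
For clarity, the default rule above amounts to the following selection logic 
(a sketch only; the class and method names are illustrative):
{code}
public class MetricIntervalDefaults {
    private static final long MINUTE = 60_000L;
    private static final long HOUR = 60 * MINUTE;
    private static final long DAY = 24 * HOUR;

    /** Default interval (returned in milliseconds) for a given query time range. */
    public static long defaultInterval(long queryStartTime, long queryEndTime) {
        long span = Math.abs(queryEndTime - queryStartTime);
        if (span <= 6 * HOUR)  { return 5 * MINUTE;  }
        if (span <= 24 * HOUR) { return 15 * MINUTE; }
        if (span <= 7 * DAY)   { return 30 * MINUTE; }
        if (span <= 14 * DAY)  { return 60 * MINUTE; }
        return DAY;
    }
}
{code}
Allowing users to pass intervalMin explicitly would simply bypass this default.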



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (EAGLE-1024) Monitor jobs with high RPC throughput

2017-05-24 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen resolved EAGLE-1024.
--
Resolution: Done

> Monitor jobs with high RPC throughput 
> --
>
> Key: EAGLE-1024
> URL: https://issues.apache.org/jira/browse/EAGLE-1024
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> We've identified some jobs with high RPC throughput that cause heavy NN RPC 
> overhead. These jobs have requested an extremely large number of HDFS 
> operations in a very short window (2 mins).
> So we tend to capture jobs where:
> a) the job has very high RPC throughput, computed as the job's total HDFS ops 
> divided by the job duration, and that throughput is larger than 1000;
> b) the HDFS ops per task is larger than 25.
> Then we send out an alert. Later, we will notify the users to optimize 
> their jobs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-1024) Monitor jobs with high RPC throughput

2017-05-24 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1024:
-
Fix Version/s: v0.5.0

> Monitor jobs with high RPC throughput 
> --
>
> Key: EAGLE-1024
> URL: https://issues.apache.org/jira/browse/EAGLE-1024
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.0
>
>
> We've identified some jobs with high RPC throughput that cause heavy NN RPC 
> overhead. These jobs have requested an extremely large number of HDFS 
> operations in a very short window (2 mins).
> So we tend to capture jobs where:
> a) the job has very high RPC throughput, computed as the job's total HDFS ops 
> divided by the job duration, and that throughput is larger than 1000;
> b) the HDFS ops per task is larger than 25.
> Then we send out an alert. Later, we will notify the users to optimize 
> their jobs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (EAGLE-1024) Monitor jobs with high RPC throughput

2017-05-18 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen reassigned EAGLE-1024:


Assignee: Zhao, Qingwen

> Monitor jobs with high RPC throughput 
> --
>
> Key: EAGLE-1024
> URL: https://issues.apache.org/jira/browse/EAGLE-1024
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> We've identified some jobs with high RPC throughput that cause heavy NN RPC 
> overhead. These jobs have requested an extremely large number of HDFS 
> operations in a very short window (2 mins).
> So we tend to capture jobs where:
> a) the job has very high RPC throughput, computed as the job's total HDFS ops 
> divided by the job duration, and that throughput is larger than 1000;
> b) the HDFS ops per task is larger than 25.
> Then we send out an alert. Later, we will notify the users to optimize 
> their jobs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1024) Monitor jobs with high RPC throughput

2017-05-17 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1024:


 Summary: Monitor jobs with high RPC throughput 
 Key: EAGLE-1024
 URL: https://issues.apache.org/jira/browse/EAGLE-1024
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen


We've identified some jobs with high RPC throughput that cause heavy NN RPC 
overhead. These jobs have requested an extremely large number of HDFS operations 
in a very short window (2 mins).

So we tend to capture jobs where:
a) the job has very high RPC throughput, computed as the job's total HDFS ops 
divided by the job duration, and that throughput is larger than 1000;
b) the HDFS ops per task is larger than 25.
Then we send out an alert. Later, we will notify the users to optimize their 
jobs.
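
A minimal sketch of this detection rule (the class, method, and parameter names 
are illustrative, and the duration unit is assumed to be seconds, which this 
ticket does not specify):
{code}
public class HighRpcJobRule {
    private static final double THROUGHPUT_THRESHOLD = 1000.0;  // total HDFS ops / job duration
    private static final double OPS_PER_TASK_THRESHOLD = 25.0;  // HDFS ops per task

    /** Returns true if the job should trigger a high-RPC-throughput alert. */
    public static boolean shouldAlert(long totalHdfsOps, long jobDurationSecs, long totalTasks) {
        if (jobDurationSecs <= 0 || totalTasks <= 0) {
            return false;   // not enough information to evaluate the job
        }
        double throughput = (double) totalHdfsOps / jobDurationSecs;   // condition a)
        double opsPerTask = (double) totalHdfsOps / totalTasks;        // condition b)
        return throughput > THROUGHPUT_THRESHOLD && opsPerTask > OPS_PER_TASK_THRESHOLD;
    }
}
{code}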



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1023) update Hadoop jmx metric collector scripts & fix bugs

2017-05-17 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1023:


 Summary: update Hadoop jmx metric collector scripts & fix bugs
 Key: EAGLE-1023
 URL: https://issues.apache.org/jira/browse/EAGLE-1023
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


update Hadoop jmx metric collector scripts



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1022) Configuration parsing exception in SparkHistoryJob app

2017-05-16 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1022:


 Summary: Configuration parsing exception in SparkHistoryJob app
 Key: EAGLE-1022
 URL: https://issues.apache.org/jira/browse/EAGLE-1022
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


{code}
java.lang.NumberFormatException: For input string: "2g"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
~[na:1.8.0_91]
at java.lang.Integer.parseInt(Integer.java:580) ~[na:1.8.0_91]
at java.lang.Integer.parseInt(Integer.java:615) ~[na:1.8.0_91]
at org.apache.eagle.jpm.util.Utils.parseMemory(Utils.java:78) 
~[stormjar.jar:na]
at 
org.apache.eagle.jpm.spark.history.crawl.JHFSparkEventReader.getMemoryOverhead(JHFSparkEventReader.java:490)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.spark.history.crawl.JHFSparkEventReader.clearReader(JHFSparkEventReader.java:449)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.spark.history.crawl.JHFSparkParser.parse(JHFSparkParser.java:53)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.spark.history.crawl.SparkFilesystemInputStreamReaderImpl.read(SparkFilesystemInputStreamReaderImpl.java:47)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.spark.history.storm.SparkHistoryJobParseBolt.execute(SparkHistoryJobParseBolt.java:98)
 ~[stormjar.jar:na]
at 
backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659)
 [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at 
backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415)
 [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at 
backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) 
[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at 
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
 [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at 
backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
 [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at 
backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) 
[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at 
backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794)
 [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) 
[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
{code}
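
The failure is Integer.parseInt being applied to a memory setting such as "2g". 
As an illustration only (not necessarily the fix that was committed), a tolerant 
parser could handle the unit suffix before parsing the number:
{code}
public class MemoryStringParser {
    /** Parses values such as "2048", "512m", or "2g" into megabytes. Illustrative sketch only. */
    public static long parseMemoryMb(String value) {
        String v = value.trim().toLowerCase();
        if (v.endsWith("g")) {
            return Long.parseLong(v.substring(0, v.length() - 1)) * 1024;
        }
        if (v.endsWith("m")) {
            return Long.parseLong(v.substring(0, v.length() - 1));
        }
        if (v.endsWith("k")) {
            return Long.parseLong(v.substring(0, v.length() - 1)) / 1024;
        }
        return Long.parseLong(v);   // assume a bare number is already in MB
    }

    public static void main(String[] args) {
        System.out.println(parseMemoryMb("2g"));    // 2048
        System.out.println(parseMemoryMb("512m"));  // 512
        System.out.println(parseMemoryMb("1024"));  // 1024
    }
}
{code}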



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (EAGLE-964) Add a column 'severity' left to description in policy list page

2017-05-14 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen resolved EAGLE-964.
-
Resolution: Fixed

> Add a column 'severity' left to description in policy list page
> ---
>
> Key: EAGLE-964
> URL: https://issues.apache.org/jira/browse/EAGLE-964
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.0
>
>
> Add a column 'severity' left to description in policy list page



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-964) Add a column 'severity' left to description in policy list page

2017-05-14 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-964:

Fix Version/s: v0.5.0

> Add a column 'severity' left to description in policy list page
> ---
>
> Key: EAGLE-964
> URL: https://issues.apache.org/jira/browse/EAGLE-964
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
> Fix For: v0.5.0
>
>
> Add a column 'severity' left to description in policy list page



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (EAGLE-1016) Mysql table eagle_metric_eagle_metric_schema is having issue

2017-05-14 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen reassigned EAGLE-1016:


Assignee: Zhao, Qingwen

> Mysql table eagle_metric_eagle_metric_schema is having issue
> 
>
> Key: EAGLE-1016
> URL: https://issues.apache.org/jira/browse/EAGLE-1016
> Project: Eagle
>  Issue Type: Bug
>Reporter: vikash kumar
>Assignee: Zhao, Qingwen
> Attachments: Apache Eagle Error log.txt
>
>
> Hi,
> eagle_metric_eagle_metric_schema has a column named "group", which is a 
> reserved keyword. I get a MySQL syntax error near "group" when I try to run 
> the Apache Eagle application for hadoop_metric_monitor. Initially I was 
> getting an error even when creating this table with the "group" column, but 
> I resolved that. Now it is trying to write to some of the tables that have 
> "group" as a column name, which is why it is throwing the error. Can somebody 
> help me out?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (EAGLE-1014) No data in alertEngineSpout

2017-04-28 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on EAGLE-1014 started by Zhao, Qingwen.

> No data in alertEngineSpout 
> 
>
> Key: EAGLE-1014
> URL: https://issues.apache.org/jira/browse/EAGLE-1014
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Found the following exception log
> {code}
> 2017-04-27 03:26:40 Thread-11-alertEngineSpout-EventThread 
> org.apache.eagle.alert.engine.spout.CorrelationSpout [ERROR] error applying 
> new SpoutSpec
> java.lang.RuntimeException: java.lang.RuntimeException: No leader found for 
> partition 3
> at 
> storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:92) 
> ~[stormjar.jar:na]
> at storm.kafka.trident.ZkBrokerReader.(ZkBrokerReader.java:42) 
> ~[stormjar.jar:na]
> at storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:57) 
> ~[stormjar.jar:na]
> at storm.kafka.KafkaSpoutWrapper.open(KafkaSpoutWrapper.java:79) 
> ~[stormjar.jar:na]
> at 
> org.apache.eagle.alert.engine.spout.CorrelationSpout.createKafkaSpout(CorrelationSpout.java:355)
>  ~[stormjar.jar:na]
> at 
> org.apache.eagle.alert.engine.spout.CorrelationSpout.onReload(CorrelationSpout.java:259)
>  ~[stormjar.jar:na]
> at 
> org.apache.eagle.alert.engine.spout.CorrelationSpout.onSpoutSpecChange(CorrelationSpout.java:162)
>  ~[stormjar.jar:na]
> at 
> org.apache.eagle.alert.engine.coordinator.impl.AbstractMetadataChangeNotifyService.lambda$notifySpout$0(AbstractMetadataChangeNotifyService.java:91)
>  [stormjar.jar:na]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_91]
> at 
> org.apache.eagle.alert.engine.coordinator.impl.AbstractMetadataChangeNotifyService.notifySpout(AbstractMetadataChangeNotifyService.java:91)
>  [stormjar.jar:na]
> at 
> org.apache.eagle.alert.engine.coordinator.impl.ZKMetadataChangeNotifyService.onNewConfig(ZKMetadataChangeNotifyService.java:148)
>  ~[stormjar.jar:na]
> at 
> org.apache.eagle.alert.config.ConfigBusConsumer.lambda$new$0(ConfigBusConsumer.java:44)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.recipes.cache.NodeCache$4.apply(NodeCache.java:293)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.recipes.cache.NodeCache$4.apply(NodeCache.java:287)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
>  ~[stormjar.jar:na]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
>  ~[guava-11.0.2.jar:na]
> at 
> org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.recipes.cache.NodeCache.setNewData(NodeCache.java:284)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.recipes.cache.NodeCache.processBackgroundResult(NodeCache.java:252)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.recipes.cache.NodeCache.access$300(NodeCache.java:53)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.recipes.cache.NodeCache$3.processResult(NodeCache.java:111)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:730)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:516)
>  ~[stormjar.jar:na]
> at 
> org.apache.curator.framework.imps.GetDataBuilderImpl$3.processResult(GetDataBuilderImpl.java:254)
>  ~[stormjar.jar:na]
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:561) 
> ~[zookeeper-3.4.6.2.2.0.0-2041.jar:3.4.6-2041--1]
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) 
> ~[zookeeper-3.4.6.2.2.0.0-2041.jar:3.4.6-2041--1]
> Caused by: java.lang.RuntimeException: No leader found for partition 3
> at 
> storm.kafka.DynamicBrokersReader.getLeaderFor(DynamicBrokersReader.java:131) 
> ~[stormjar.jar:na]
> at 
> storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:79) 
> ~[stormjar.jar:na]
> ... 25 common frames omitted
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-1015) Add an interface to add storm configuration in an application

2017-04-27 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-1015:
-
Summary: Add an interface to add storm configuration in an application  
(was: Support storm configuration in Eagle application configuration)

> Add an interface to add storm configuration in an application
> -
>
> Key: EAGLE-1015
> URL: https://issues.apache.org/jira/browse/EAGLE-1015
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Apart from the application-related configurations, Eagle should provide an 
> interface to add storm-related configurations, such as `workers`, 
> `topology.worker.childopts`, `topology.message.timeout.secs`, etc. 
> Here is my idea: any configuration whose key starts with 'storm.override.' 
> is treated as a Storm configuration. For example, 'storm.override.workers' 
> overrides the value of 'workers'.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (EAGLE-1015) Support storm configuration in Eagle application configuration

2017-04-27 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on EAGLE-1015 started by Zhao, Qingwen.

> Support storm configuration in Eagle application configuration
> --
>
> Key: EAGLE-1015
> URL: https://issues.apache.org/jira/browse/EAGLE-1015
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Apart from the application-related configurations, Eagle should provide an 
> interface to add storm-related configurations, such as `workers`, 
> `topology.worker.childopts`, `topology.message.timeout.secs`, etc. 
> Here is my idea: any configuration whose key starts with 'storm.override.' 
> is treated as a Storm configuration. For example, 'storm.override.workers' 
> overrides the value of 'workers'.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1015) Support storm configuration in Eagle application configuration

2017-04-27 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1015:


 Summary: Support storm configuration in Eagle application 
configuration
 Key: EAGLE-1015
 URL: https://issues.apache.org/jira/browse/EAGLE-1015
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


Apart from the application-related configurations, Eagle should provide an 
interface to add storm-related configurations, such as `workers`, 
`topology.worker.childopts`, `topology.message.timeout.secs`, etc. 

Here is my idea: any configuration whose key starts with 'storm.override.' 
is treated as a Storm configuration. For example, 'storm.override.workers' 
overrides the value of 'workers'.
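
A minimal sketch of how such a prefix-based override could work, assuming Storm 0.9.x's backtype.storm.Config; the helper class and method names are illustrative, not the actual Eagle implementation:

{code}
import java.util.Map;
import backtype.storm.Config;

public class StormOverrideConfigHelper {
    private static final String STORM_OVERRIDE_PREFIX = "storm.override.";

    // Copy every 'storm.override.*' entry from the application config into the
    // Storm topology Config, stripping the prefix so that e.g.
    // 'storm.override.workers' becomes 'workers'.
    public static Config extractStormConfig(Map<String, Object> appConfig) {
        Config stormConfig = new Config();
        for (Map.Entry<String, Object> entry : appConfig.entrySet()) {
            if (entry.getKey().startsWith(STORM_OVERRIDE_PREFIX)) {
                String stormKey = entry.getKey().substring(STORM_OVERRIDE_PREFIX.length());
                stormConfig.put(stormKey, entry.getValue());
            }
        }
        return stormConfig;
    }
}
{code}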





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1014) No data in alertEngineSpout

2017-04-27 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1014:


 Summary: No data in alertEngineSpout 
 Key: EAGLE-1014
 URL: https://issues.apache.org/jira/browse/EAGLE-1014
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


Found the following exception in the log:
{code}
2017-04-27 03:26:40 Thread-11-alertEngineSpout-EventThread 
org.apache.eagle.alert.engine.spout.CorrelationSpout [ERROR] error applying new 
SpoutSpec
java.lang.RuntimeException: java.lang.RuntimeException: No leader found for 
partition 3
at 
storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:92) 
~[stormjar.jar:na]
at storm.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:42) 
~[stormjar.jar:na]
at storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:57) 
~[stormjar.jar:na]
at storm.kafka.KafkaSpoutWrapper.open(KafkaSpoutWrapper.java:79) 
~[stormjar.jar:na]
at 
org.apache.eagle.alert.engine.spout.CorrelationSpout.createKafkaSpout(CorrelationSpout.java:355)
 ~[stormjar.jar:na]
at 
org.apache.eagle.alert.engine.spout.CorrelationSpout.onReload(CorrelationSpout.java:259)
 ~[stormjar.jar:na]
at 
org.apache.eagle.alert.engine.spout.CorrelationSpout.onSpoutSpecChange(CorrelationSpout.java:162)
 ~[stormjar.jar:na]
at 
org.apache.eagle.alert.engine.coordinator.impl.AbstractMetadataChangeNotifyService.lambda$notifySpout$0(AbstractMetadataChangeNotifyService.java:91)
 [stormjar.jar:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_91]
at 
org.apache.eagle.alert.engine.coordinator.impl.AbstractMetadataChangeNotifyService.notifySpout(AbstractMetadataChangeNotifyService.java:91)
 [stormjar.jar:na]
at 
org.apache.eagle.alert.engine.coordinator.impl.ZKMetadataChangeNotifyService.onNewConfig(ZKMetadataChangeNotifyService.java:148)
 ~[stormjar.jar:na]
at 
org.apache.eagle.alert.config.ConfigBusConsumer.lambda$new$0(ConfigBusConsumer.java:44)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.recipes.cache.NodeCache$4.apply(NodeCache.java:293)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.recipes.cache.NodeCache$4.apply(NodeCache.java:287)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
 ~[stormjar.jar:na]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
 ~[guava-11.0.2.jar:na]
at 
org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.recipes.cache.NodeCache.setNewData(NodeCache.java:284)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.recipes.cache.NodeCache.processBackgroundResult(NodeCache.java:252)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.recipes.cache.NodeCache.access$300(NodeCache.java:53)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.recipes.cache.NodeCache$3.processResult(NodeCache.java:111)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:730)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:516)
 ~[stormjar.jar:na]
at 
org.apache.curator.framework.imps.GetDataBuilderImpl$3.processResult(GetDataBuilderImpl.java:254)
 ~[stormjar.jar:na]
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:561) 
~[zookeeper-3.4.6.2.2.0.0-2041.jar:3.4.6-2041--1]
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) 
~[zookeeper-3.4.6.2.2.0.0-2041.jar:3.4.6-2041--1]
Caused by: java.lang.RuntimeException: No leader found for partition 3
at 
storm.kafka.DynamicBrokersReader.getLeaderFor(DynamicBrokersReader.java:131) 
~[stormjar.jar:na]
at 
storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:79) 
~[stormjar.jar:na]
... 25 common frames omitted
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Deleted] (EAGLE-1010) Fix bugs introduced by EAGLE-992

2017-04-18 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen deleted EAGLE-1010:
-


> Fix bugs introduced by EAGLE-992
> 
>
> Key: EAGLE-1010
> URL: https://issues.apache.org/jira/browse/EAGLE-1010
> Project: Eagle
>  Issue Type: Sub-task
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> As the changes have not been tested thoroughly, some bugs have been introduced. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1010) Fix bugs introduced by EAGLE-992

2017-04-18 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1010:


 Summary: Fix bugs introduced by EAGLE-992
 Key: EAGLE-1010
 URL: https://issues.apache.org/jira/browse/EAGLE-1010
 Project: Eagle
  Issue Type: Sub-task
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


As the changes have not been tested thoroughly, some bugs have been introduced. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-1008) java.lang.NullPointerException in JHFEventReaderBase.close

2017-04-18 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-1008:


 Summary: java.lang.NullPointerException in JHFEventReaderBase.close
 Key: EAGLE-1008
 URL: https://issues.apache.org/jira/browse/EAGLE-1008
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


{code}
2017-04-17 19:53:31 Thread-8-mrHistoryJobSpout 
org.apache.eagle.jpm.mr.history.parser.JHFMRVer2Parser [ERROR] Caught exception 
parsing history file after 99 events
java.io.IOException: java.lang.NullPointerException
at 
org.apache.eagle.jpm.mr.history.parser.JHFEventReaderBase.close(JHFEventReaderBase.java:153)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.parser.JHFMRVer2Parser.parse(JHFMRVer2Parser.java:65)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.crawler.DefaultJHFInputStreamCallback.onInputStream(DefaultJHFInputStreamCallback.java:60)
 [stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.crawler.AbstractJobHistoryDAO.readFileContent(AbstractJobHistoryDAO.java:123)
 [stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.crawler.JHFCrawlerDriverImpl.crawl(JHFCrawlerDriverImpl.java:166)
 [stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.storm.JobHistorySpout.nextTuple(JobHistorySpout.java:166)
 [stormjar.jar:na]
at 
backtype.storm.daemon.executor$fn__5629$fn__5644$fn__5673.invoke(executor.clj:585)
 [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) 
[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: java.lang.NullPointerException: null
at 
org.apache.eagle.jpm.util.jobcounter.JobCounters.getCounterValue(JobCounters.java:51)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.analyzer.mr.suggestion.MapReduceJobSuggestionContext.getMinimumIOSortMemory(MapReduceJobSuggestionContext.java:137)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.analyzer.mr.suggestion.MapReduceJobSuggestionContext.buildContext(MapReduceJobSuggestionContext.java:86)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.analyzer.mr.suggestion.MapReduceJobSuggestionContext.<init>(MapReduceJobSuggestionContext.java:64)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.analyzer.mr.suggestion.JobSuggestionEvaluator.evaluate(JobSuggestionEvaluator.java:76)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.analyzer.mr.suggestion.JobSuggestionEvaluator.evaluate(JobSuggestionEvaluator.java:38)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.analyzer.mr.MRJobPerformanceAnalyzer.analyze(MRJobPerformanceAnalyzer.java:64)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.parser.JobSuggestionListener.flush(JobSuggestionListener.java:93)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.parser.JobEntityCreationPublisher.flush(JobEntityCreationPublisher.java:40)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.parser.JHFEventReaderBase.flush(JHFEventReaderBase.java:159)
 ~[stormjar.jar:na]
at 
org.apache.eagle.jpm.mr.history.parser.JHFEventReaderBase.close(JHFEventReaderBase.java:150)
 ~[stormjar.jar:na]
... 9 common frames omitted
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-999) JobConfigSerDeser fails to serialize/deserialize data with long string

2017-04-10 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-999:
---

 Summary: JobConfigSerDeser fails to serialize/deserialize data 
with long string 
 Key: EAGLE-999
 URL: https://issues.apache.org/jira/browse/EAGLE-999
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


Sample configuration value:

{code}
INSERT OVERWRITE TABLE kylin_intermediate_KYLIN_HIVE_METRICS_QUERY_CUBE SELECT
HIVE_METRICS_QUERY_CUBE.`CUBE_NAME`
,HIVE_METRICS_QUERY_CUBE.`SEGMENT_NAME`
,HIVE_METRICS_QUERY_CUBE.`CUBOID_SOURCE`
,HIVE_METRICS_QUERY_CUBE.`CUBOID_TARGET`
,HIVE_METRICS_QUERY_CUBE.`IF_MATCH`
,HIVE_METRICS_QUERY_CUBE.`IF_SUCCESS`
,HIVE_METRICS_QUERY_CUBE.`KYEAR_BEGIN_DATE`
,HIVE_METRICS_QUERY_CUBE.`KMONTH_BEGIN_DATE`
,HIVE_METRICS_QUERY_CUBE.`KWEEK_BEGIN_DATE`
,HIVE_METRICS_QUERY_CUBE.`KDAY_DATE`
,HIVE_METRICS_QUERY_CUBE.`WEIGHT_PER_HIT`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_CALL_COUNT`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_CALL_TIME_SUM`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_CALL_TIME_MAX`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_COUNT_SKIP`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_SIZE_SCAN`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_SIZE_RETURN`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_SIZE_AGGREGATE_FILTER`
,HIVE_METRICS_QUERY_CUBE.`STORAGE_SIZE_AGGREGATE`
FROM KYLIN.HIVE_METRICS_QUERY_CUBE as HIVE_METRICS_QUERY_CUBE
WHERE (((HIVE_METRICS_QUERY_CUBE.KDAY_DATE = '2017-04-06' AND 
HIVE_METRICS_QUERY_CUBE.KDAY_TIME >= '18:00:00') OR 
(HIVE_METRICS_QUERY_CUBE.KDAY_DATE > '2017-04-06')) AND 
((HIVE_METRICS_QUERY_CUBE.KDAY_DATE = '2017-04-06' AND 
HIVE_METRICS_QUERY_CUBE.KDAY_TIME < '20:00:00') OR 
(HIVE_METRICS_QUERY_CUBE.KDAY_DATE < '2017-04-06')))
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-993) add duplicate removal settings in policy definition

2017-04-06 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-993:

Summary: add duplicate removal settings in policy definition  (was: add 
duplicate removal settings in policy Definition)

> add duplicate removal settings in policy definition
> ---
>
> Key: EAGLE-993
> URL: https://issues.apache.org/jira/browse/EAGLE-993
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Customize the de-duplication fields & interval for each policy



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-993) add duplicate removal settings in policy Definition

2017-04-06 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-993:
---

 Summary: add duplicate removal settings in policy Definition
 Key: EAGLE-993
 URL: https://issues.apache.org/jira/browse/EAGLE-993
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


Customize the de-duplication fields & interval for each policy
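
A minimal sketch of field-based alert de-duplication within a time interval; the class and method names are hypothetical and only illustrate the idea, not Eagle's actual implementation:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AlertDeduplicator {
    private final List<String> dedupFields;   // e.g. ["site", "host", "policyId"]
    private final long dedupIntervalMs;       // e.g. 10 * 60 * 1000
    private final Map<String, Long> lastSeen = new HashMap<>();

    public AlertDeduplicator(List<String> dedupFields, long dedupIntervalMs) {
        this.dedupFields = dedupFields;
        this.dedupIntervalMs = dedupIntervalMs;
    }

    // Returns true if the alert should be emitted, false if an alert with the
    // same de-dup field values was already emitted within the interval.
    public boolean accept(Map<String, Object> alert, long timestamp) {
        StringBuilder key = new StringBuilder();
        for (String field : dedupFields) {
            key.append(alert.get(field)).append('|');
        }
        Long last = lastSeen.get(key.toString());
        if (last != null && timestamp - last < dedupIntervalMs) {
            return false;
        }
        lastSeen.put(key.toString(), timestamp);
        return true;
    }
}
{code}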



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-989) Fix HA check bug in HAURLSelectorImpl

2017-04-01 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-989:
---

 Summary: Fix HA check bug in HAURLSelectorImpl 
 Key: EAGLE-989
 URL: https://issues.apache.org/jira/browse/EAGLE-989
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.4.0, v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


HAURLSelectorImpl checks which of the two resource manager URLs is the active one. 

Currently, the code does not verify that the returned stream contains the 
expected content; it simply returns true as long as no exception is caught. 
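
A minimal sketch of the kind of check the fix implies; the method name, timeouts, and the 'standby' marker tested in the response body are assumptions for illustration, not the actual HAURLSelectorImpl code:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ActiveUrlChecker {
    // Return true only when the URL answers successfully AND the response body
    // looks like an active resource manager, instead of treating "no exception"
    // as success.
    public static boolean checkUrl(String urlString) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                return false;
            }
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            // Hypothetical content check: a standby RM reports itself as standby,
            // so reject such responses explicitly.
            return !body.toString().contains("standby");
        } catch (Exception e) {
            return false;
        }
    }
}
{code}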




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (EAGLE-964) Add a column 'severity' left to description in policy list page

2017-03-28 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen reassigned EAGLE-964:
---

Assignee: Zhao, Qingwen  (was: Jilin, Jiang)

> Add a column 'severity' left to description in policy list page
> ---
>
> Key: EAGLE-964
> URL: https://issues.apache.org/jira/browse/EAGLE-964
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Add a column 'severity' left to description in policy list page



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-971) Duplicated queues are generated under a monitored stream

2017-03-23 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-971:

Description: 
This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{policiesPerBolt = 2, streamsPerBolt = 3, reuseBoltInStreams = 
true}} in server config
2. create four policies which have the same partition and consume the same stream
{code}
 from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(2) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(30) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count"]#window.length(2) select site, host, 
component, metric, timestamp, min(value) as minValue group by site, host, 
component, metric insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count.test"]#window.length(3) select site, 
host, component, metric, count(value) as cnt group by site, host, component, 
metric insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT;
{code}

After creating the four policies, the routing spec is 
{code}
routerSpecs: [
{
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
targetQueue: [
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
}
]
}
]
{code}

and the alert spec is 
{code}
boltPolicyIdsMap: {
alertBolt9: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt0: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt1: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt2: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt3: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
]
}
{code}

3. produce messages into kafka topic 'hadoop_jmx_metrics_sandbox' and trigger 
NameNodeWithOneNoResponse.

{code}
{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", 
"component": "namenode", "site": "artemislvs", "value": 0.0, "host": 
"localhost"}
{code}

Then one message is sent three times.

  was:
This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{policiesPerBolt = 2, streamsPerBolt = 3, reuseBoltInStreams = 
true}} in server config
2. create four policies which the same partition and consuming 

[jira] [Updated] (EAGLE-971) Duplicated queues are generated under a monitored stream

2017-03-23 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-971:

Description: 
This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{policiesPerBolt = 2, streamsPerBolt = 3}} in server config
2. create four policies which have the same partition and consume the same streamId
{code}
 from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(2) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(30) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count"]#window.length(2) select site, host, 
component, metric, timestamp, min(value) as minValue group by site, host, 
component, metric insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count.test"]#window.length(3) select site, 
host, component, metric, count(value) as cnt group by site, host, component, 
metric insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT;
{code}

After creating the four policies, the routing spec is 
{code}
routerSpecs: [
{
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
targetQueue: [
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
}
]
}
]
{code}

and the alert spec is 
{code}
boltPolicyIdsMap: {
alertBolt9: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt0: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt1: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt2: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt3: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
]
}
{code}

3. produce messages into kafka topic 'hadoop_jmx_metrics_sandbox' and trigger 
NameNodeWithOneNoResponse.

{code}
{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", 
"component": "namenode", "site": "artemislvs", "value": 0.0, "host": 
"localhost"}
{code}

Then one message is sent three times.

  was:
This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{{policiesPerBolt = 2, streamsPerBolt = 3}}} in server config
2. create four policies which have the same partition and consume the same streamId
{code}
 from 

[jira] [Created] (EAGLE-971) Duplicated queues are generated under a monitored stream

2017-03-23 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-971:
---

 Summary: Duplicated queues are generated under a monitored stream
 Key: EAGLE-971
 URL: https://issues.apache.org/jira/browse/EAGLE-971
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{{policiesPerBolt = 2, streamsPerBolt = 3}}} in server config
2. create four policies which have the same partition and consume the same streamId
{code}
 from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(2) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(30) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count"]#window.length(2) select site, host, 
component, metric, timestamp, min(value) as minValue group by site, host, 
component, metric insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count.test"]#window.length(3) select site, 
host, component, metric, count(value) as cnt group by site, host, component, 
metric insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT;
{code}

After creating the four policies, the routing spec is 
{code}
routerSpecs: [
{
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
targetQueue: [
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
}
]
}
]
{code}

and the alert spec is 
{code}
boltPolicyIdsMap: {
alertBolt9: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt0: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt1: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt2: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt3: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
]
}
{code}

3. produce messages into kafka topic 'hadoop_jmx_metrics_sandbox' and trigger 
NameNodeWithOneNoResponse.

{code}
{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", 
"component": "namenode", "site": "artemislvs", "value": 0.0, "host": 
"localhost"}
{code}

Then one message is sent three times.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-964) Add a column 'severity' left to description in policy list page

2017-03-17 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-964:
---

 Summary: Add a column 'severity' left to description in policy 
list page
 Key: EAGLE-964
 URL: https://issues.apache.org/jira/browse/EAGLE-964
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Jilin, Jiang


Add a column 'severity' left to description in policy list page



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-959) Add a configuration to limit the total number of apps to be returned in SparkHistoryJobApp

2017-03-15 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-959:
---

 Summary: Add a configuration to limit the total number of apps to 
be returned in SparkHistoryJobApp
 Key: EAGLE-959
 URL: https://issues.apache.org/jira/browse/EAGLE-959
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


Add a configuration to limit the total number of apps to be returned in 
SparkHistoryJobApp. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-946) Refactor MRRunningJobApp & HadoopQueueApp

2017-03-09 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-946:
---

 Summary: Refactor MRRunningJobApp & HadoopQueueApp 
 Key: EAGLE-946
 URL: https://issues.apache.org/jira/browse/EAGLE-946
 Project: Eagle
  Issue Type: Improvement
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


Requirements 

1. REST APIs to the remote cluster should be called only once.
2. For each request, the number of running apps fetched should be limited (see the sketch below).
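
A minimal sketch of the second requirement, using a hypothetical helper class and limit field; the names are illustrative only, not the actual MRRunningJobApp code:

{code}
import java.util.ArrayList;
import java.util.List;

public class RunningAppFetcher {
    // Hypothetical cap on how many running apps one request may return,
    // e.g. read from the application configuration.
    private final int maxRunningAppsPerRequest;

    public RunningAppFetcher(int maxRunningAppsPerRequest) {
        this.maxRunningAppsPerRequest = maxRunningAppsPerRequest;
    }

    // Trim the list returned by the (single) REST call so that each request
    // processes at most maxRunningAppsPerRequest apps.
    public List<String> limitRunningApps(List<String> runningApps) {
        if (runningApps.size() <= maxRunningAppsPerRequest) {
            return runningApps;
        }
        return new ArrayList<>(runningApps.subList(0, maxRunningAppsPerRequest));
    }
}
{code}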



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-940) Hdfs RPC monitoring for per cluster/user

2017-03-09 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-940:

Description: 
Monitor HDFS RPC requests at a cluster/user level. This feature leverages the 
available data from the Kafka topic produced by the JMX metric collector 
script. 

Sample queries are listed, and the values are updated every minute

* the last minute statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000

* the last 5 minutes statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000

* the last 25 minutes statistics 
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000



  was:
Monitor HDFS RPC requests at a cluster/user level. This feature leverages the 
available data from the Kafka topic produced by the JMX metric collector 
script. 

Sample queries are listed 

* the last minute statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000

* the last 5 minutes statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000

* the last 25 minutes statistics 
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000




> Hdfs RPC monitoring for per cluster/user 
> -
>
> Key: EAGLE-940
> URL: https://issues.apache.org/jira/browse/EAGLE-940
> Project: Eagle
>  Issue Type: New Feature
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Monitor HDFS RPC requests at a cluster/user level. This feature leverages the 
> available data from the Kafka topic produced by the JMX metric collector 
> script. 
> Sample queries are listed, and the values are updated every minute
> * the last minute statistics
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000
> * the last 5 minutes statistics
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000
> * the last 25 minutes statistics 
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-940) Hdfs RPC monitoring for per cluster/user

2017-03-09 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-940:

Description: 
Monitor HDFS RPC requests at a cluster/user level. This feature leverages the 
available data from the Kafka topic produced by the JMX metric collector 
script. 

Sample queries are listed 

* the last minute statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000

* the last 5 minutes statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000

* the last 25 minutes statistics 
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000



  was:
* monitor HDFS RPC requests at a cluster/user level
* refactor all apps which consume namenode jmx

Up to now, the last minute statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000

Up to now, the last 5 minutes statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000

Up to now, the last 25 minutes statistics 
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000


> Hdfs RPC monitoring for per cluster/user 
> -
>
> Key: EAGLE-940
> URL: https://issues.apache.org/jira/browse/EAGLE-940
> Project: Eagle
>  Issue Type: New Feature
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> Monitor HDFS RPC requests at a cluster/user level. This feature leverages the 
> available data from the Kafka topic produced by the JMX metric collector 
> script. 
> Sample queries are listed 
> * the last minute statistics
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000
> * the last 5 minutes statistics
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000
> * the last 25 minutes statistics 
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-940) Hdfs RPC monitoring for per cluster/user

2017-03-09 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-940:

Description: 
* monitor HDFS RPC requests at a cluster/user level
* refactor all apps which consume namenode jmx

Up to now, the last minute statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000

Up to now, the last 5 minutes statistics
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000

Up to now, the last 25 minutes statistics 
http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000

  was:
* monitor HDFS RPC requests at a cluster/user level
* refactor all apps which consume namenode jmx


> Hdfs RPC monitoring for per cluster/user 
> -
>
> Key: EAGLE-940
> URL: https://issues.apache.org/jira/browse/EAGLE-940
> Project: Eagle
>  Issue Type: New Feature
>Affects Versions: v0.5.0
>Reporter: Zhao, Qingwen
>Assignee: Zhao, Qingwen
>
> * monitor HDFS RPC requests at a cluster/user level
> * refactor all apps which consume namenode jmx
> Up to now, the last minute statistics
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.1m.count=1000
> Up to now, the last 5 minutes statistics
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.5m.count=1000
> Up to now, the last 25 minutes statistics 
> http://localhost:9090/rest/entities?query=GenericMetricService[@site=%22sandbox%22]{*}=hadoop.hdfs.auditlog.25m.count=1000



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

