[GitHub] incubator-eagle issue #751: [EAGLE-844] Fix a potential NPE
Github user wujinhu commented on the issue: https://github.com/apache/incubator-eagle/pull/751 LGTM, will merge. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] incubator-eagle issue #750: [EAGLE-843] Refactor application shared service ...
Github user wujinhu commented on the issue:

    https://github.com/apache/incubator-eagle/pull/750

    LGTM
[jira] [Created] (EAGLE-842) mr running job count in zookeeper does not match the number in hbase
wujinhu created EAGLE-842:
------------------------------

             Summary: mr running job count in zookeeper does not match the number in hbase
                 Key: EAGLE-842
                 URL: https://issues.apache.org/jira/browse/EAGLE-842
             Project: Eagle
          Issue Type: Bug
    Affects Versions: v0.5.0
            Reporter: wujinhu
            Assignee: Lingang Deng
             Fix For: v0.5.0

We save basic information about each running job in ZooKeeper, and the entry should be removed when the job finishes. However, as time goes on, entries for some long-finished jobs remain in ZooKeeper: if you run "get /apps/mr/running/sandbox" in the zk shell, the number of entries grows slowly. You can compare this number with the RUNNING count in the JPM web widget.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
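The mismatch described above can be detected mechanically by diffing the two views of "running" jobs. A minimal, framework-free Java sketch of that check (the job IDs and method names are illustrative, not Eagle's actual API):

```java
import java.util.*;

public class StaleRunningJobCheck {

    // Job IDs listed under ZooKeeper's running path (e.g. /apps/mr/running/sandbox)
    // minus the jobs JPM still reports as RUNNING gives the stale leftovers
    // that were never cleaned up after their jobs finished.
    static Set<String> staleEntries(Set<String> zkRunning, Set<String> jpmRunning) {
        Set<String> stale = new TreeSet<>(zkRunning);  // TreeSet for stable, sorted output
        stale.removeAll(jpmRunning);
        return stale;
    }

    public static void main(String[] args) {
        // Hypothetical snapshots of the two stores.
        Set<String> zk = new HashSet<>(Arrays.asList(
            "job_1479206441898_0001", "job_1479206441898_0002", "job_1479206441898_0003"));
        Set<String> jpm = new HashSet<>(Collections.singletonList("job_1479206441898_0002"));

        System.out.println(staleEntries(zk, jpm));
        // prints [job_1479206441898_0001, job_1479206441898_0003]
    }
}
```

In a real deployment the first set would come from the children of the running-jobs znode and the second from the JPM query service; any non-empty result indicates entries that should have been deleted.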
[GitHub] incubator-eagle pull request #748: [MINOR] add stream data source config for...
GitHub user wujinhu opened a pull request:

    https://github.com/apache/incubator-eagle/pull/748

    [MINOR] add stream data source config for mr history job

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-796

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/748.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #748

----
commit db8fc495d7bb2f33b2aee9de4d0f176092cf273c
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T15:53:13Z

    fix stream publisher bug in storm env

commit cba50fba5ca70460d7e6ff931940bc993379b897
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-16T02:09:41Z

    fix stream publisher bug in storm env

commit b4093e101282398e75341a9fd63581f1a69c5c8c
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-16T05:15:23Z

    add stream data source config for mr history job

commit 8cce6729dec7389f7420083b541e9596e3669254
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-16T05:17:58Z

    merge master
[GitHub] incubator-eagle pull request #747: [MINOR] fix job & task attempt stream pub...
GitHub user wujinhu opened a pull request:

    https://github.com/apache/incubator-eagle/pull/747

    [MINOR] fix job & task attempt stream publisher bug in storm env

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-796

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/747.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #747

----
commit db8fc495d7bb2f33b2aee9de4d0f176092cf273c
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T15:53:13Z

    fix stream publisher bug in storm env
[GitHub] incubator-eagle pull request #744: [EAGLE-840] add task attempt stream for b...
GitHub user wujinhu reopened a pull request:

    https://github.com/apache/incubator-eagle/pull/744

    [EAGLE-840] add task attempt stream for bad node detection

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-840

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/744.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #744

----
commit 802375f368d140620402cb7569c525c1f00badcd
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T06:15:40Z

    add task attempt for bad node detection

commit 4dc36b2dcc4e49fa26a0f5811e8cbd406c1ad9ac
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T06:19:32Z

    add task attempt for bad node detection

commit 8b650d74741b3d9196a0cdfd16a233bac3556844
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T06:54:15Z

    add task attempt for bad node detection

commit 25adab7e9eda6718ae7f090e5f3bc86ab83275b3
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T07:29:17Z

    add siddhi string empty extension

commit 8737dd8cbd219392c4d8879383d629b16451e9f6
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T08:11:06Z

    add siddhi string empty extension
[GitHub] incubator-eagle pull request #744: [EAGLE-840] add task attempt stream for b...
GitHub user wujinhu opened a pull request:

    https://github.com/apache/incubator-eagle/pull/744

    [EAGLE-840] add task attempt stream for bad node detection

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-840

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/744.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #744

----
commit 802375f368d140620402cb7569c525c1f00badcd
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T06:15:40Z

    add task attempt for bad node detection

commit 4dc36b2dcc4e49fa26a0f5811e8cbd406c1ad9ac
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-15T06:19:32Z

    add task attempt for bad node detection
[GitHub] incubator-eagle pull request #710: [EAGLE-820] add unit test for eagle-jpm-m...
Github user wujinhu commented on a diff in the pull request:

    https://github.com/apache/incubator-eagle/pull/710#discussion_r92535708

--- Diff: eagle-jpm/eagle-jpm-mr-history/src/test/resources/job_1479206441898_508949_conf.xml ---
@@ -0,0 +1,1337 @@
[Diff context omitted: Hadoop job configuration property entries (name/value/source triples such as mapreduce.map.memory.mb, dfs.client.read.shortcircuit, hive.server2.authentication); the XML markup was lost in extraction.]
[GitHub] incubator-eagle pull request #710: [EAGLE-820] add unit test for eagle-jpm-m...
Github user wujinhu commented on a diff in the pull request:

    https://github.com/apache/incubator-eagle/pull/710#discussion_r92535638

--- Diff: eagle-jpm/eagle-jpm-mr-history/src/test/resources/job_1479206441898_508949_conf.xml ---
@@ -0,0 +1,1337 @@
[Diff context omitted: same Hadoop job configuration property entries as the previous comment; the XML markup was lost in extraction.]
[GitHub] incubator-eagle pull request #739: [EAGLE-835] add task error category
GitHub user wujinhu opened a pull request:

    https://github.com/apache/incubator-eagle/pull/739

    [EAGLE-835] add task error category

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-836

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/739.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #739

----
commit c445e20837bf4300513ee30602ecac4b54ac5029
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-14T05:10:09Z

    add task error category
[jira] [Updated] (EAGLE-835) Job failure diagnostics
[ https://issues.apache.org/jira/browse/EAGLE-835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wujinhu updated EAGLE-835:
--------------------------
    Summary: Job failure diagnostics  (was: Job failure diagnostics and Bad node detection)

> Job failure diagnostics
> -----------------------
>
>                 Key: EAGLE-835
>                 URL: https://issues.apache.org/jira/browse/EAGLE-835
>             Project: Eagle
>          Issue Type: Improvement
>    Affects Versions: v0.5.0
>            Reporter: wujinhu
>            Assignee: wujinhu
>             Fix For: v0.5.0
>
>
> 1. Job failure diagnostics (task failure category)
> 2. Task failing nodes list and bad node detection, based on the same pre-aggregation as job failure diagnostics
[GitHub] incubator-eagle pull request #732: [EAGLE-837] Stream definition change does...
Github user wujinhu commented on a diff in the pull request:

    https://github.com/apache/incubator-eagle/pull/732#discussion_r92129224

--- Diff: eagle-core/eagle-alert-parent/eagle-alert/alert-engine/src/test/java/org/apache/eagle/alert/engine/router/TestAlertBolt.java ---
@@ -443,6 +447,83 @@ public void reportError(Throwable error) {
         Assert.assertTrue(recieved.get());
     }
 
+    @Test
+    public void testStreamDefinitionChange() throws IOException {
+        PolicyDefinition def = new PolicyDefinition();
+        def.setName("policy-definition");
+        def.setInputStreams(Arrays.asList(TEST_STREAM));
+
+        PolicyDefinition.Definition definition = new PolicyDefinition.Definition();
+        definition.setType(PolicyStreamHandlers.CUSTOMIZED_ENGINE);
+        definition.setHandlerClass("org.apache.eagle.alert.engine.router.CustomizedHandler");
+        definition.setValue("PT0M,plain,1,host,host1");
+        def.setDefinition(definition);
+        def.setPartitionSpec(Arrays.asList(createPartition()));
+
+        AlertBoltSpec boltSpecs = new AlertBoltSpec();
+
+        AtomicBoolean recieved = new AtomicBoolean(false);
+        OutputCollector collector = new OutputCollector(new IOutputCollector() {
+            @Override
+            public List emit(String streamId, Collection anchors, List tuple) {
+                recieved.set(true);
+                return Collections.emptyList();
+            }
+
+            @Override
+            public void emitDirect(int taskId, String streamId, Collection anchors, List tuple) {
+            }
+
+            @Override
+            public void ack(Tuple input) {
+            }
+
+            @Override
+            public void fail(Tuple input) {
+            }
+
+            @Override
+            public void reportError(Throwable error) {
+            }
+        });
+        AlertBolt bolt = createAlertBolt(collector);
+
+        boltSpecs.getBoltPoliciesMap().put(bolt.getBoltId(), Arrays.asList(def));
+        boltSpecs.setVersion("spec_" + System.currentTimeMillis());
+        // stream def map
+        Map<String, StreamDefinition> sds = new HashMap<>();
+        StreamDefinition sdTest = new StreamDefinition();
+        sdTest.setStreamId(TEST_STREAM);
+        sds.put(sdTest.getStreamId(), sdTest);
+
+        boltSpecs.addPublishPartition(TEST_STREAM, "policy-definition", "testAlertPublish", null);
+
+        bolt.onAlertBoltSpecChange(boltSpecs, sds);
+
+        // how to assert
+        Tuple t = createTuple(bolt, boltSpecs.getVersion());
+
+        bolt.execute(t);
+
+        Assert.assertTrue(recieved.get());
+
+        LOG.info("Update stream");
+        sds = new HashMap<>();
+        sdTest = new StreamDefinition();
+        sdTest.setStreamId(TEST_STREAM);
+        sds.put(sdTest.getStreamId(), sdTest);
+        sdTest.setDescription("update the stream");
+        bolt.onAlertBoltSpecChange(boltSpecs, sds);
+
+        LOG.info("No any change");
+        sds = new HashMap<>();
+        sdTest = new StreamDefinition();
+        sdTest.setStreamId(TEST_STREAM);
+        sds.put(sdTest.getStreamId(), sdTest);
+        sdTest.setDescription("update the stream");
+        bolt.onAlertBoltSpecChange(boltSpecs, sds);

--- End diff --

It seems some tests have no asserts — is that OK?
[GitHub] incubator-eagle pull request #728: [MINOR] add sortSpec to pattern match if ...
Github user wujinhu closed the pull request at:

    https://github.com/apache/incubator-eagle/pull/728
[GitHub] incubator-eagle pull request #728: [MINOR] add sortSpec to pattern match if ...
GitHub user wujinhu reopened a pull request:

    https://github.com/apache/incubator-eagle/pull/728

    [MINOR] add sortSpec to pattern match if exists corresponding StreamPartition

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-793

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/728.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #728

----
commit 4c83f8bea18fdbc35593bf4f331c88b24547aee6
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T02:22:40Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null

commit 77df16deaa581603e0fae7d0afe7e10690e924d2
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T02:51:21Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null

commit 556266acc28389959c7d69155e11582f4cedac4f
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T03:01:53Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null

commit 8303118fa776c87309bdbddc129d89bf39900702
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T03:43:29Z

    change default health check

commit 81b3e133cf51046366800e676c7237d4b4b1a465
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T03:46:10Z

    change default health check
[GitHub] incubator-eagle pull request #728: [MINOR] add sortSpec to pattern match if ...
GitHub user wujinhu opened a pull request:

    https://github.com/apache/incubator-eagle/pull/728

    [MINOR] add sortSpec to pattern match if exists corresponding StreamPartition

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-793

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/728.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #728

----
commit 4c83f8bea18fdbc35593bf4f331c88b24547aee6
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T02:22:40Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null

commit 77df16deaa581603e0fae7d0afe7e10690e924d2
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T02:51:21Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null
[GitHub] incubator-eagle pull request #727: [MINOR] fix StreamPartitions only with di...
Github user wujinhu closed the pull request at:

    https://github.com/apache/incubator-eagle/pull/727
[GitHub] incubator-eagle pull request #726: [MINOR] Use larger sortSpec of the existi...
Github user wujinhu closed the pull request at:

    https://github.com/apache/incubator-eagle/pull/726
[GitHub] incubator-eagle pull request #726: [MINOR] Use sortSpec of the stream with s...
GitHub user wujinhu reopened a pull request:

    https://github.com/apache/incubator-eagle/pull/726

    [MINOR] Use sortSpec of the stream with same StreamId & Type & Columns if sortSpec null

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-793

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/726.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #726

----
commit 4c83f8bea18fdbc35593bf4f331c88b24547aee6
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T02:22:40Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null
[GitHub] incubator-eagle pull request #726: [MINOR] Use sortSpec of the stream with s...
Github user wujinhu closed the pull request at:

    https://github.com/apache/incubator-eagle/pull/726
[GitHub] incubator-eagle pull request #726: [MINOR] Use sort spec of the stream with ...
GitHub user wujinhu opened a pull request:

    https://github.com/apache/incubator-eagle/pull/726

    [MINOR] Use sort spec of the stream with same StreamId & Type & Columns if its sort spec…

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-793

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-eagle/pull/726.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #726

----
commit 4c83f8bea18fdbc35593bf4f331c88b24547aee6
Author: wujinhu <wujinhu...@126.com>
Date:   2016-12-09T02:22:40Z

    Use sort spec of the stream with same Type & Columns if its sort spec is null
[jira] [Created] (EAGLE-827) Coordinator schedule time out
wujinhu created EAGLE-827:
------------------------------

             Summary: Coordinator schedule time out
                 Key: EAGLE-827
                 URL: https://issues.apache.org/jira/browse/EAGLE-827
             Project: Eagle
          Issue Type: Bug
    Affects Versions: v0.5.0
            Reporter: wujinhu
            Assignee: Garrett Li
             Fix For: v0.5.0

Coordinator scheduling times out (fails to acquire the exclusive lock) because of the previous failed run.
[jira] [Created] (EAGLE-821) coordinator bug in alert engine
wujinhu created EAGLE-821:
------------------------------

             Summary: coordinator bug in alert engine
                 Key: EAGLE-821
                 URL: https://issues.apache.org/jira/browse/EAGLE-821
             Project: Eagle
          Issue Type: Bug
    Affects Versions: v0.5.0
            Reporter: wujinhu
            Assignee: Zhao, Qingwen
             Fix For: v0.5.0

The coordinator has the following bugs:
1. When I disable a policy, the coordinator does not clear the allocated queue.
2. The coordinator cannot schedule policies while there are alert bolts with no resources.
3. After I create some policies with streams A and B, the coordinator cannot schedule a policy with stream C, although there are free resources.

How to reproduce: create 8 or more policies with stream A and some policies with stream B, then create a policy with stream C; this reproduces 2 and 3 (the exact number of policies may vary). Once all the alert bolts contain stream A, disable the policy with stream A; this reproduces 1.
[GitHub] incubator-eagle pull request #710: [EAGLE-820] add unit test for eagle-jpm-m...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/710#discussion_r90795468 --- Diff: eagle-jpm/eagle-jpm-mr-history/src/test/java/org/apache/eagle/jpm/mr/history/storm/JobHistorySpoutTest.java --- @@ -0,0 +1,176 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.eagle.jpm.mr.history.storm; + +import backtype.storm.spout.ISpoutOutputCollector; +import backtype.storm.spout.SpoutOutputCollector; +import backtype.storm.task.TopologyContext; +import com.google.common.collect.Lists; +import com.typesafe.config.ConfigFactory; +import org.apache.commons.io.FileUtils; +import org.apache.curator.framework.CuratorFramework; +import org.apache.curator.framework.CuratorFrameworkFactory; +import org.apache.curator.framework.imps.CuratorFrameworkState; +import org.apache.curator.retry.ExponentialBackoffRetry; +import org.apache.curator.retry.RetryNTimes; +import org.apache.curator.test.TestingServer; +import org.apache.eagle.jpm.mr.history.MRHistoryJobConfig; +import org.apache.eagle.jpm.mr.history.crawler.JobHistoryContentFilter; +import org.apache.eagle.jpm.mr.history.crawler.JobHistoryContentFilterBuilder; +import org.apache.eagle.jpm.util.Constants; +import org.apache.eagle.jpm.util.HDFSUtil; +import org.apache.hadoop.conf.Configuration; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.powermock.api.mockito.PowerMockito; +import org.powermock.core.classloader.annotations.PowerMockIgnore; +import org.powermock.core.classloader.annotations.PrepareForTest; +import org.powermock.modules.junit4.PowerMockRunner; + +import java.io.File; +import java.io.IOException; +import java.lang.reflect.Field; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Date; +import java.util.List; +import java.util.regex.Pattern; + +import static org.mockito.Mockito.*; + +/** + * Created by luokun on 12/1/16. 
+ */ +@RunWith(PowerMockRunner.class) +@PrepareForTest({CuratorFrameworkFactory.class, HDFSUtil.class}) +@PowerMockIgnore({"javax.*", "com.sun.org.*","org.apache.hadoop.conf.*"}) +public class JobHistorySpoutTest { + +private TestingServer server; +private CuratorFramework zookeeper; +private MRHistoryJobConfig appConfig; + +@Before +public void setUp() throws Exception { +this.appConfig = MRHistoryJobConfig.newInstance(ConfigFactory.load()); +createZk(); +} + +@After +public void tearDown() throws Exception { +try { +if (zookeeper != null) { +if (!zookeeper.getState().equals(CuratorFrameworkState.STOPPED)) { +zookeeper.close(); +} +} +} catch (Throwable e) { +e.printStackTrace(); +} finally { +try { +if (server != null) { +server.close(); +} +} catch (IOException e) { +e.printStackTrace(); +} +} +} + +@Test +public void testSpout() throws Exception { --- End diff -- Is this unit test meaningful? There is no assert about the test data --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
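The review comment above asks for assertions on the test data. A common way to make such a spout test assert something is to capture what the spout emits through a collector stub and assert on the captured tuples. The sketch below is a minimal, dependency-free illustration of that idea: `CollectingStub` is a hypothetical stand-in for Storm's `ISpoutOutputCollector` (in the real test it would implement that interface and be wrapped in a `SpoutOutputCollector` passed to `spout.open(...)`).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a Storm output collector, so this sketch runs
// without a Storm dependency. In JobHistorySpoutTest it would implement
// backtype.storm.spout.ISpoutOutputCollector instead.
public class CollectingStub {
    private final List<List<Object>> emitted = new ArrayList<>();

    // Record each emitted tuple so the test can assert on it later.
    public void emit(List<Object> tuple) {
        emitted.add(tuple);
    }

    public List<List<Object>> getEmitted() {
        return emitted;
    }

    public static void main(String[] args) {
        CollectingStub collector = new CollectingStub();
        // The spout under test would emit here, e.g. via spout.nextTuple()
        // after being opened with this collector wired in.
        collector.emit(Arrays.asList("job_1479206441898_508949", "SUCCEEDED"));

        // The assertions the reviewer asks for: verify what was emitted.
        System.out.println("emitted=" + collector.getEmitted().size());
    }
}
```

With this pattern, `testSpout` could call `spout.nextTuple()` against the test data and then assert the collector captured the expected job IDs, rather than merely exercising the code path.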
[GitHub] incubator-eagle pull request #708: [MINOR] support group by in siddhi patter...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/708#discussion_r90795242 --- Diff: eagle-core/eagle-alert-parent/eagle-alert/alert-engine/src/main/java/org/apache/eagle/alert/engine/interpreter/PolicyExecutionPlan.java --- @@ -32,6 +32,11 @@ private Map<String, List> inputStreams; /** + * Actual input streams alias. + */ +private Map<String, String> inputStreamAlias; --- End diff -- IMO, PolicyExecutionPlan is the data interface for users to get details about the execution plan. Besides, I think inputStreamAlias and inputStreams are information at the same level.
[GitHub] incubator-eagle issue #706: [MINOR] Sleep 1 ms after versionId was generated
Github user wujinhu commented on the issue: https://github.com/apache/incubator-eagle/pull/706 LGTM
[GitHub] incubator-eagle pull request #705: EAGLE-811 Refactor jdbcMetadataDaoImpl of...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/705#discussion_r90581912 --- Diff: eagle-core/eagle-alert-parent/eagle-alert/alert-metadata-parent/alert-metadata/src/main/java/org/apache/eagle/alert/metadata/impl/JdbcMetadataHandler.java --- @@ -0,0 +1,533 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.eagle.alert.metadata.impl; + +import org.apache.commons.collections.map.HashedMap; +import org.apache.commons.dbcp.BasicDataSource; +import org.apache.eagle.alert.coordination.model.Kafka2TupleMetadata; +import org.apache.eagle.alert.coordination.model.ScheduleState; +import org.apache.eagle.alert.coordination.model.internal.PolicyAssignment; +import org.apache.eagle.alert.coordination.model.internal.Topology; +import org.apache.eagle.alert.engine.coordinator.*; +import org.apache.eagle.alert.engine.model.AlertPublishEvent; +import org.apache.eagle.alert.metadata.MetadataUtils; +import org.apache.eagle.alert.metadata.resource.OpResult; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.DeserializationFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.typesafe.config.Config; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.sql.DataSource; +import java.io.IOException; +import java.sql.*; +import java.util.*; +import java.util.function.Function; + +public class JdbcMetadataHandler { + +private static final Logger LOG = LoggerFactory.getLogger(JdbcMetadataHandler.class); +// general model +private static final String INSERT_STATEMENT = "INSERT INTO %s(content, id) VALUES (?, ?)"; +private static final String DELETE_STATEMENT = "DELETE FROM %s WHERE id=?"; +private static final String UPDATE_STATEMENT = "UPDATE %s set content=? WHERE id=?"; +private static final String QUERY_ALL_STATEMENT = "SELECT content FROM %s"; +private static final String QUERY_CONDITION_STATEMENT = "SELECT content FROM %s WHERE id=?"; +private static final String QUERY_ORDERBY_STATEMENT = "SELECT content FROM %s ORDER BY id %s"; + +// customized model +private static final String CLEAR_SCHEDULESTATES_STATEMENT = "DELETE FROM schedule_state WHERE id NOT IN (SELECT id from (SELECT id FROM schedule_state ORDER BY id DESC limit ?) 
as states)"; +private static final String INSERT_ALERT_STATEMENT = "INSERT INTO alert_event(alertId, siteId, appIds, policyId, alertTimestamp, policyValue, alertData) VALUES (?, ?, ?, ?, ?, ?, ?)"; +private static final String QUERY_ALERT_STATEMENT = "SELECT * FROM alert_event order by alertTimestamp DESC limit ?"; +private static final String QUERY_ALERT_BY_ID_STATEMENT = "SELECT * FROM alert_event WHERE alertId=? order by alertTimestamp DESC limit ?"; +private static final String QUERY_ALERT_BY_POLICY_STATEMENT = "SELECT * FROM alert_event WHERE policyId=? order by alertTimestamp DESC limit ?"; +private static final String INSERT_POLICYPUBLISHMENT_STATEMENT = "INSERT INTO policy_publishment(policyId, publishmentName) VALUES (?, ?)"; +private static final String DELETE_PUBLISHMENT_STATEMENT = "DELETE FROM policy_publishment WHERE policyId=?"; +private static final String QUERY_PUBLISHMENT_BY_POLICY_STATEMENT = "SELECT content FROM publishment a INNER JOIN policy_publishment b ON a.id=b.publishmentName and b.policyId=?"; +private static final String QUERY_PUBLISHMENT_STATEMENT = "SELECT a.content, b.policyId FROM publishment a LEFT JOIN policy_publishment b ON a.id=b.publishmentName"; + +public enum SortType { DESC, ASC } + +private static Map<String, String> tblNameMap = new HashMap&l
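The generic statements above follow a common JDBC pattern: the table name cannot be a bind parameter, so it is substituted into the template with `String.format`, while row values stay as `?` placeholders for `PreparedStatement` binding. A minimal sketch of that pattern, using two of the templates quoted in the diff (the `forTable` helper name is my own, not from the patch):

```java
// Sketch of the SQL-template pattern in JdbcMetadataHandler: table names are
// filled in via String.format (JDBC cannot parameterize identifiers), values
// are left as '?' placeholders for PreparedStatement binding.
public class SqlTemplates {
    static final String INSERT_STATEMENT = "INSERT INTO %s(content, id) VALUES (?, ?)";
    static final String QUERY_CONDITION_STATEMENT = "SELECT content FROM %s WHERE id=?";

    // Hypothetical helper: substitute the table name into a template.
    static String forTable(String template, String table) {
        return String.format(template, table);
    }

    public static void main(String[] args) {
        // With a real DataSource, the formatted string would be passed to
        // connection.prepareStatement(...) and the '?' slots bound in order
        // (setString(1, content), setString(2, id), ...).
        String sql = forTable(INSERT_STATEMENT, "policy");
        System.out.println(sql); // INSERT INTO policy(content, id) VALUES (?, ?)
    }
}
```

Formatting only the identifier and binding all values keeps the handler safe from SQL injection through metadata content, since only fixed table names from an internal map reach the `%s` slot.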
[GitHub] incubator-eagle pull request #705: EAGLE-811 Refactor jdbcMetadataDaoImpl of...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/705#discussion_r90587141 --- Diff: eagle-core/eagle-alert-parent/eagle-alert/alert-metadata-parent/alert-metadata/src/main/java/org/apache/eagle/alert/metadata/impl/JdbcMetadataHandler.java ---
[GitHub] incubator-eagle pull request #705: EAGLE-811 Refactor jdbcMetadataDaoImpl of...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/705#discussion_r90586834 --- Diff: eagle-core/eagle-alert-parent/eagle-alert/alert-metadata-parent/alert-metadata/src/main/java/org/apache/eagle/alert/metadata/impl/JdbcMetadataHandler.java ---
[GitHub] incubator-eagle pull request #703: [MINOR] remove system property for metada...
GitHub user wujinhu opened a pull request: https://github.com/apache/incubator-eagle/pull/703 [MINOR] remove system property for metadata.metadataDAO in JDBCMetadataStore You can merge this pull request into a Git repository by running: $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-791 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-eagle/pull/703.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #703 commit 01d492e44ce87e7fde3f0cff2715ee71c5d2b964 Author: wujinhu <wujinhu...@126.com> Date: 2016-12-01T06:59:07Z remove system property for metadata.metadataDAO in JDBCMetadataStore
[GitHub] incubator-eagle pull request #700: EAGLE-805 sync some operation in RunningJ...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/700#discussion_r90199126 --- Diff: eagle-jpm/eagle-jpm-mr-running/src/test/java/org/apache/eagle/jpm/mr/running/MRRunningJobManagerTest.java --- @@ -117,4 +124,94 @@ public void testMRRunningJobManagerDelWithLock() throws Exception { } +@Test +public void testMRRunningJobManagerRecoverYarnAppWithLock() throws Exception { + +if(curator.checkExists().forPath(SHARE_RESOURCES) == null) { +curator.create() +.creatingParentsIfNeeded() +.withMode(CreateMode.PERSISTENT) +.forPath(SHARE_RESOURCES); +} + +Assert.assertTrue(curator.checkExists().forPath(SHARE_RESOURCES) != null); + +curator.setData().forPath(SHARE_RESOURCES, generateZkSetData()); + +ExecutorService service = Executors.newFixedThreadPool(QTY); +for (int i = 0; i < QTY; ++i) { +Callable task = () -> { +try { +MRRunningJobManager mrRunningJobManager = new MRRunningJobManager(zkStateConfig); +for (int j = 0; j < REPETITIONS; ++j) { +if(j % 3 == 0) { +mrRunningJobManager.delete("yarnAppId", "jobId"); +} else { + mrRunningJobManager.recoverYarnApp("yarnAppId"); +} +} +} catch (Exception e) { +e.printStackTrace(); --- End diff -- Remove useless code
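The test in the diff above fans out QTY workers over an `ExecutorService`, each performing REPETITIONS operations under a shared lock. The stdlib sketch below mirrors that harness shape; a `ReentrantLock` stands in for the ZooKeeper-backed `InterProcessMutex` that `MRRunningJobManager` actually uses, and the counter stands in for the delete/recover calls.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Stdlib sketch of the concurrency harness in the test: QTY workers each
// perform REPETITIONS operations under a shared lock. A ReentrantLock stands
// in for the ZooKeeper InterProcessMutex used by MRRunningJobManager.
public class LockHarness {
    static final int QTY = 4;
    static final int REPETITIONS = 30;

    public static int run() throws Exception {
        final ReentrantLock lock = new ReentrantLock();
        final AtomicInteger ops = new AtomicInteger();
        ExecutorService service = Executors.newFixedThreadPool(QTY);
        for (int i = 0; i < QTY; i++) {
            service.submit(() -> {
                for (int j = 0; j < REPETITIONS; j++) {
                    lock.lock();
                    try {
                        // In the real test, delete(...) or recoverYarnApp(...)
                        // runs here, alternating on j % 3.
                        ops.incrementAndGet();
                    } finally {
                        lock.unlock();
                    }
                }
            });
        }
        service.shutdown();
        service.awaitTermination(10, TimeUnit.SECONDS);
        return ops.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ops=" + run()); // ops=120
    }
}
```

Asserting that the final count equals QTY * REPETITIONS (as the full test does via `Assert`) is what proves no operation was lost to a race.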
[GitHub] incubator-eagle pull request #700: EAGLE-805 sync some operation in RunningJ...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/700#discussion_r90199003 --- Diff: eagle-jpm/eagle-jpm-util/src/main/java/org/apache/eagle/jpm/util/Utils.java --- @@ -116,6 +116,6 @@ public static long parseMemory(String memory) { public static String makeLockPath(String zkrootWithSiteId) { Preconditions.checkArgument(StringUtils.isNotBlank(zkrootWithSiteId), "zkrootWithSiteId must not be blank"); -return zkrootWithSiteId.toLowerCase() + "/locks"; +return "/locks" + zkrootWithSiteId.toLowerCase(); --- End diff -- Please refactor like: zkRoot/locks zkRoot/jobs/1 zkRoot/jobs/2 ...
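The review asks for the lock node to live under the site's zkRoot (zkRoot/locks, next to zkRoot/jobs/...), rather than prefixing "/locks" in front of the whole root as the patch does. A minimal sketch of the requested layout, using plain stdlib checks in place of Guava's `Preconditions`:

```java
// Sketch of the path layout the review asks for: the lock path is appended
// under the site's zkRoot (zkRoot/locks), keeping all of a site's nodes
// (locks, jobs, ...) under one subtree instead of a global /locks prefix.
public class LockPaths {
    public static String makeLockPath(String zkrootWithSiteId) {
        // Stand-in for Preconditions.checkArgument(StringUtils.isNotBlank(...), ...)
        if (zkrootWithSiteId == null || zkrootWithSiteId.trim().isEmpty()) {
            throw new IllegalArgumentException("zkrootWithSiteId must not be blank");
        }
        return zkrootWithSiteId.toLowerCase() + "/locks";
    }

    public static void main(String[] args) {
        System.out.println(makeLockPath("/apps/mr/running/SANDBOX"));
        // /apps/mr/running/sandbox/locks
    }
}
```

Keeping everything under one subtree also simplifies cleanup: removing a site's zkRoot removes its locks with it, which is harder when locks sit in a separate global prefix.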
[GitHub] incubator-eagle pull request #691: EAGLE-796 MRJobEntityCreationHandler flus...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/691#discussion_r89947354 --- Diff: eagle-jpm/eagle-jpm-mr-running/src/main/java/org/apache/eagle/jpm/mr/running/parser/MRJobEntityCreationHandler.java --- @@ -79,10 +79,7 @@ public boolean flush() { eagleServiceConfig.password); client.getJerseyClient().setReadTimeout(eagleServiceConfig.readTimeoutSeconds * 1000); try { -LOG.info("start to flush mr job entities, size {}", entities.size()); -client.create(entities); -LOG.info("finish flushing mr job entities, size {}", entities.size()); -entities.clear(); +return createEntities(client); } catch (Exception e) { LOG.warn("exception found when flush entities, {}", e); e.printStackTrace(); --- End diff -- Please help to remove logging like this line. Thanks!
[GitHub] incubator-eagle pull request #684: [EAGLE-800] Use InterProcessMutex to sync...
Github user wujinhu commented on a diff in the pull request: https://github.com/apache/incubator-eagle/pull/684#discussion_r89567435 --- Diff: eagle-jpm/eagle-jpm-util/src/main/java/org/apache/eagle/jpm/util/jobrecover/RunningJobManager.java --- @@ -49,12 +53,21 @@ private CuratorFramework newCurator(String zkQuorum, int zkSessionTimeoutMs, int ); } -public RunningJobManager(String zkQuorum, int zkSessionTimeoutMs, int zkRetryTimes, int zkRetryInterval, String zkRoot) { +public RunningJobManager(String zkQuorum, int zkSessionTimeoutMs, int zkRetryTimes, int zkRetryInterval, String zkRoot, String siteId) { --- End diff -- I think passing the lock path directly to the utility class would be better, since the utility class does not need to know how to construct the lock path. How the lock path is defined is decided by the applications.
[GitHub] incubator-eagle pull request #681: [MINOR] fix email format and change defau...
GitHub user wujinhu opened a pull request: https://github.com/apache/incubator-eagle/pull/681 [MINOR] fix email format and change default delay time You can merge this pull request into a Git repository by running: $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-788 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-eagle/pull/681.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #681 commit 6af39de71947a27c6738a3f65cc350bcb7be4042 Author: wujinhu <wujinhu...@126.com> Date: 2016-11-24T11:55:48Z fix email format and change default delay
[GitHub] incubator-eagle pull request #677: [MINOR] health check optimize
GitHub user wujinhu opened a pull request: https://github.com/apache/incubator-eagle/pull/677 [MINOR] health check optimize You can merge this pull request into a Git repository by running: $ git pull https://github.com/wujinhu/incubator-eagle healthCheckOptimize Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-eagle/pull/677.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #677 commit 834c45bf7c76077acbf118da1ea267ce4ab4da00 Author: wujinhu <wujinhu...@126.com> Date: 2016-11-23T07:57:41Z healthCheckOptimize
[GitHub] incubator-eagle pull request #671: [EAGLE-787] add healthy check for hadoop-...
GitHub user wujinhu opened a pull request: https://github.com/apache/incubator-eagle/pull/671 [EAGLE-787] add healthy check for hadoop-queue/topology-health/spark-history apps You can merge this pull request into a Git repository by running: $ git pull https://github.com/wujinhu/incubator-eagle EAGLE-787 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-eagle/pull/671.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #671 commit 556954f43a82c68d551972fdae6ed3128c328060 Author: wujinhu <wujinhu...@126.com> Date: 2016-11-21T07:14:10Z add healthy check for hadoop-queue/topology-health/spark-history apps