[jira] [Updated] (STORM-1311) port backtype.storm.ui.core to java
[ https://issues.apache.org/jira/browse/STORM-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-1311:
------------------------------------------
    Assignee: Hugo Louro  (was: Jark Wu)

> port backtype.storm.ui.core to java
> -----------------------------------
>
> Key: STORM-1311
> URL: https://issues.apache.org/jira/browse/STORM-1311
> Project: Apache Storm
> Issue Type: New Feature
> Components: storm-core
> Reporter: Robert Joseph Evans
> Assignee: Hugo Louro
> Labels: java-migration, jstorm-merger
>
> User Interface + REST -> java

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (STORM-1280) port backtype.storm.daemon.logviewer to java
[ https://issues.apache.org/jira/browse/STORM-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-1280:
------------------------------------------
    Assignee: Sivaguru Kannan  (was: Jark Wu)

> port backtype.storm.daemon.logviewer to java
> --------------------------------------------
>
> Key: STORM-1280
> URL: https://issues.apache.org/jira/browse/STORM-1280
> Project: Apache Storm
> Issue Type: New Feature
> Components: storm-core
> Reporter: Robert Joseph Evans
> Assignee: Sivaguru Kannan
> Labels: java-migration, jstorm-merger
>
> This is providing a UI for accessing and searching logs. hiccup will need to be replaced, possibly with just hard-coded HTML + escaping.
[jira] [Updated] (STORM-1282) port backtype.storm.LocalCluster to java
[ https://issues.apache.org/jira/browse/STORM-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-1282:
------------------------------------------
    Assignee: Sivaguru Kannan  (was: Abhishek Agarwal)

> port backtype.storm.LocalCluster to java
> ----------------------------------------
>
> Key: STORM-1282
> URL: https://issues.apache.org/jira/browse/STORM-1282
> Project: Apache Storm
> Issue Type: New Feature
> Components: storm-core
> Reporter: Robert Joseph Evans
> Assignee: Sivaguru Kannan
> Labels: java-migration, jstorm-merger
>
> https://github.com/apache/storm/blob/jstorm-import/jstorm-core/src/main/java/backtype/storm/LocalCluster.java
> as an example
[jira] [Updated] (STORM-2052) Kafka Spout New Client API - Log Improvements and Parameter Tuning for Better Performance
[ https://issues.apache.org/jira/browse/STORM-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-2052:
------------------------------------------
    Fix Version/s: 1.1.0
                   2.0.0

> Kafka Spout New Client API - Log Improvements and Parameter Tuning for Better Performance
> -----------------------------------------------------------------------------------------
>
> Key: STORM-2052
> URL: https://issues.apache.org/jira/browse/STORM-2052
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-kafka
> Affects Versions: 1.1.0, 1.0.3, 1.x
> Reporter: Hugo Louro
> Assignee: Hugo Louro
> Priority: Critical
> Labels: performance
> Fix For: 2.0.0, 1.1.0
>
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> Tune Kafka Spout parameters.
> Improve logging to show more meaningful messages, and print detail appropriate to the logging level.
[jira] [Commented] (STORM-1949) Backpressure can cause spout to stop emitting and stall topology
[ https://issues.apache.org/jira/browse/STORM-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423908#comment-15423908 ]

Sriharsha Chintalapani commented on STORM-1949:
-----------------------------------------------

[~zhuoliu] The problem we encountered is not that the storm workers are unable to add a node in ZooKeeper, but rather that it happens very fast, hence the erratic on/off behavior with backpressure. That's what we are discussing in this JIRA. It's not just the NPE; the current backpressure behavior is not ideal, and there are some proposals here for improving it.

> Backpressure can cause spout to stop emitting and stall topology
> ----------------------------------------------------------------
>
> Key: STORM-1949
> URL: https://issues.apache.org/jira/browse/STORM-1949
> Project: Apache Storm
> Issue Type: Bug
> Reporter: Roshan Naik
> Attachments: 1.x-branch-works-perfect.png
>
> The problem can be reproduced with this [Word count topology|https://github.com/hortonworks/storm/blob/perftopos1.x/examples/storm-starter/src/jvm/org/apache/storm/starter/perf/FileReadWordCountTopo.java] within an IDE.
> I ran it with 1 spout instance, 2 splitter bolt instances, and 2 counter bolt instances.
> The problem is more easily reproduced with the WC topology as it causes an explosion of tuples due to splitting a sentence tuple into word tuples. As the bolts have to process more tuples than the spout is producing, the spout needs to operate more slowly.
> The amount of time it takes for the topology to stall can vary, but it is typically under 10 minutes.
> *My theory:* I suspect there is a race condition in the way ZK is being utilized to enable/disable back pressure. When congested (i.e. pressure exceeds the high water mark), the bolt's worker records this congested situation in ZK by creating a node. Once the congestion is reduced below the low water mark, it deletes this node.
> The spout's worker has set up a watch on the parent node, expecting a callback whenever there is a change in the child nodes. On receiving the callback, the spout's worker lists the parent node to check whether there are 0 or more child nodes; it is essentially trying to figure out the nature of the state change in ZK to determine whether to throttle or not. Subsequently it sets up another watch in ZK to keep an eye on future changes.
> When there are multiple bolts, there can be rapid creation/deletion of these ZK nodes. Between the time the worker receives a callback and sets up the next watch, many changes may have occurred in ZK that will go unnoticed by the spout.
> The condition that the bolts are no longer congested may not get noticed as a result of this.
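The missed-notification race described in the theory above can be sketched with a toy model of ZooKeeper-style one-shot watches. The class and method names below are made up for illustration; this is neither Storm's nor ZooKeeper's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a one-shot watch: each registered watch fires at most once,
// so any changes between the callback and the next registration are silent.
class OneShotWatchNode {
    private final List<String> children = new ArrayList<>();
    private Runnable watch; // at most one pending one-shot watch

    void setWatch(Runnable callback) {
        this.watch = callback;
    }

    private void fireWatch() {
        if (watch != null) {
            Runnable w = watch;
            watch = null; // consumed: further changes go unnoticed
            w.run();
        }
    }

    void createChild(String name) { children.add(name); fireWatch(); }
    void deleteChild(String name) { children.remove(name); fireWatch(); }

    int childCount() { return children.size(); }
}

public class WatchGapDemo {
    public static void main(String[] args) {
        OneShotWatchNode parent = new OneShotWatchNode();
        final int[] callbacks = {0};
        parent.setWatch(() -> callbacks[0]++);

        parent.createChild("bolt-1"); // fires the watch: spout sees congestion
        // The watch is now consumed; the congestion clears before the spout
        // re-registers, and this change triggers no callback:
        parent.deleteChild("bolt-1");

        // Exactly one callback arrived even though the final state is
        // "no children"; unless the spout re-lists the node every time it
        // re-registers, it keeps throttling forever.
        System.out.println("callbacks=" + callbacks[0]
                + ", children=" + parent.childCount());
    }
}
```

Because each watch fires at most once, a node deletion that lands between the callback and the next `setWatch` is invisible, which is exactly the window the theory describes.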
[jira] [Updated] (STORM-2041) Make Java 8 as minimum requirement for 2.0 release
[ https://issues.apache.org/jira/browse/STORM-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-2041:
------------------------------------------
    Summary: Make Java 8 as minimum requirement for 2.0 release  (was: Make Java 8 as minimum requirement for master branch)

> Make Java 8 as minimum requirement for 2.0 release
> --------------------------------------------------
>
> Key: STORM-2041
> URL: https://issues.apache.org/jira/browse/STORM-2041
> Project: Apache Storm
> Issue Type: Task
> Reporter: Sriharsha Chintalapani
> Assignee: Sriharsha Chintalapani
> Fix For: 2.0.0
[jira] [Created] (STORM-2041) Make Java 8 as minimum requirement for master branch
Sriharsha Chintalapani created STORM-2041:
------------------------------------------
    Summary: Make Java 8 as minimum requirement for master branch
    Key: STORM-2041
    URL: https://issues.apache.org/jira/browse/STORM-2041
    Project: Apache Storm
    Issue Type: Task
    Reporter: Sriharsha Chintalapani
    Assignee: Sriharsha Chintalapani
    Fix For: 2.0.0
[jira] [Commented] (STORM-1240) port backtype.storm.security.auth.authorizer.DRPCSimpleACLAuthorizer-test to java
[ https://issues.apache.org/jira/browse/STORM-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417904#comment-15417904 ]

Sriharsha Chintalapani commented on STORM-1240:
-----------------------------------------------

[~abhishek.agarwal] There are a few migration-related JIRAs assigned to you; let me know if you are not actively working on them or planning to get to them in the near future. We would like to help.

> port backtype.storm.security.auth.authorizer.DRPCSimpleACLAuthorizer-test to java
> ---------------------------------------------------------------------------------
>
> Key: STORM-1240
> URL: https://issues.apache.org/jira/browse/STORM-1240
> Project: Apache Storm
> Issue Type: New Feature
> Components: storm-core
> Reporter: Robert Joseph Evans
> Assignee: Abhishek Agarwal
> Labels: java-migration, jstorm-merger
>
> junit migration
[jira] [Commented] (STORM-2021) storm-kinesis missing licenses
[ https://issues.apache.org/jira/browse/STORM-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15408228#comment-15408228 ]

Sriharsha Chintalapani commented on STORM-2021:
-----------------------------------------------

[~ptgoetz] Realized that. But we should make this the default; otherwise it is easy to miss.

> storm-kinesis missing licenses
> ------------------------------
>
> Key: STORM-2021
> URL: https://issues.apache.org/jira/browse/STORM-2021
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-kinesis
> Affects Versions: 2.0.0
> Reporter: Robert Joseph Evans
> Assignee: Priyank Shah
> Priority: Blocker
>
> {code}
> Unapproved licenses:
>
> external/storm-kinesis/src/test/java/org/apache/storm/kinesis/spout/test/KinesisBoltTest.java
> external/storm-kinesis/src/test/java/org/apache/storm/kinesis/spout/test/KinesisSpoutTopology.java
> external/storm-kinesis/src/test/java/org/apache/storm/kinesis/spout/test/TestRecordToTupleMapper.java
> {code}
[jira] [Commented] (STORM-2021) storm-kinesis missing licenses
[ https://issues.apache.org/jira/browse/STORM-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15408020#comment-15408020 ]

Sriharsha Chintalapani commented on STORM-2021:
-----------------------------------------------

[~rev...@viaserv.com] [~pshah] I apologize. I ran the local builds with unit tests and didn't run into this error.

> storm-kinesis missing licenses
> ------------------------------
>
> Key: STORM-2021
> URL: https://issues.apache.org/jira/browse/STORM-2021
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-kinesis
> Affects Versions: 2.0.0
> Reporter: Robert Joseph Evans
> Assignee: Priyank Shah
> Priority: Blocker
[jira] [Resolved] (STORM-1988) Kafka Offset not showing due to bad classpath
[ https://issues.apache.org/jira/browse/STORM-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani resolved STORM-1988.
-------------------------------------------
    Resolution: Fixed

> Kafka Offset not showing due to bad classpath
> ---------------------------------------------
>
> Key: STORM-1988
> URL: https://issues.apache.org/jira/browse/STORM-1988
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-ui
> Affects Versions: 2.0.0, 1.1.0
> Reporter: Jungtaek Lim
> Assignee: Jungtaek Lim
> Priority: Critical
>
> STORM-1950 breaks the classpath of storm-kafka-monitor.
> The classpath doesn't work with a wildcard plus a filename prefix/postfix. It was added to prevent other libs from also being included in the classpath, but it just doesn't work. My bad.
> We should fix the classpath to specify the full filename path or a directory/* pattern.
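For context on the fix above: the JVM only expands a classpath wildcard when `*` is the entire base name (`dir/*`); a pattern with a filename prefix such as `dir/storm-kafka-monitor*` is passed through literally and matches nothing. A defensive alternative, sketched here with a hypothetical helper class that is not Storm's actual code, is to build an explicit classpath from the jars in a directory:

```java
import java.io.File;
import java.util.Arrays;
import java.util.StringJoiner;

// Hypothetical helper (not Storm's actual code): join every .jar in a
// directory into an explicit classpath string, sidestepping the JVM's
// wildcard-expansion rules entirely.
public class ClasspathBuilder {
    public static String explicitClasspath(File dir) {
        File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
        if (jars == null) {
            return ""; // not a directory, or I/O error
        }
        Arrays.sort(jars); // deterministic ordering across platforms
        StringJoiner cp = new StringJoiner(File.pathSeparator);
        for (File jar : jars) {
            cp.add(jar.getPath());
        }
        return cp.toString();
    }
}
```

Either this explicit form or a bare `dir/*` entry works; only the prefix/postfix wildcard form does not.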
[jira] [Commented] (STORM-1988) Kafka Offset not showing due to bad classpath
[ https://issues.apache.org/jira/browse/STORM-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386314#comment-15386314 ]

Sriharsha Chintalapani commented on STORM-1988:
-----------------------------------------------

[~ptgoetz] This sounds like a mutually exclusive argument on the same by-laws page:
{code}
The code can be committed after the first +1
{code}
vs.
{code}
1 day from initial patch (Note: Committers should consider allowing more time for review based on the complexity and/or impact of the patch in question.)
{code}
Either way, waiting 1 day doesn't make sense for small patches, nor for critical patches. We did indeed merge some of the patches for releases. I guess that's a discussion for the mailing list. I'll wait for the 24-hr period until we get more clarity on this policy.

> Kafka Offset not showing due to bad classpath
> ---------------------------------------------
>
> Key: STORM-1988
> URL: https://issues.apache.org/jira/browse/STORM-1988
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-ui
> Affects Versions: 2.0.0, 1.1.0
> Reporter: Jungtaek Lim
> Assignee: Jungtaek Lim
> Priority: Critical
[jira] [Commented] (STORM-1988) Kafka Offset not showing due to bad classpath
[ https://issues.apache.org/jira/browse/STORM-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386252#comment-15386252 ]

Sriharsha Chintalapani commented on STORM-1988:
-----------------------------------------------

[~ptgoetz] Will keep this in mind in the future. But it's confusing what the 24-hr period is. Also, with critical patches, if we have enough +1s do we still need to wait for the 24-hr period, etc.? It would be great if we could document this on this page http://storm.apache.org/contribute/BYLAWS.html or here https://github.com/apache/storm/blob/master/DEVELOPER.md#pull-requests

> Kafka Offset not showing due to bad classpath
> ---------------------------------------------
>
> Key: STORM-1988
> URL: https://issues.apache.org/jira/browse/STORM-1988
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-ui
> Affects Versions: 2.0.0, 1.1.0
> Reporter: Jungtaek Lim
> Assignee: Jungtaek Lim
> Priority: Critical
[jira] [Assigned] (STORM-1979) Storm Druid Connector
[ https://issues.apache.org/jira/browse/STORM-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani reassigned STORM-1979:
---------------------------------------------
    Assignee: Sriharsha Chintalapani

> Storm Druid Connector
> ---------------------
>
> Key: STORM-1979
> URL: https://issues.apache.org/jira/browse/STORM-1979
> Project: Apache Storm
> Issue Type: Improvement
> Reporter: Sriharsha Chintalapani
> Assignee: Sriharsha Chintalapani
>
> Storm Bolt & Trident state implementation for Druid.
[jira] [Updated] (STORM-1979) Storm Druid Connector
[ https://issues.apache.org/jira/browse/STORM-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-1979:
------------------------------------------
    Assignee: (was: Sriharsha Chintalapani)

> Storm Druid Connector
> ---------------------
>
> Key: STORM-1979
> URL: https://issues.apache.org/jira/browse/STORM-1979
> Project: Apache Storm
> Issue Type: Improvement
> Reporter: Sriharsha Chintalapani
>
> Storm Bolt & Trident state implementation for Druid.
[jira] [Assigned] (STORM-1979) Storm Druid Connector
[ https://issues.apache.org/jira/browse/STORM-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani reassigned STORM-1979:
---------------------------------------------
    Assignee: Sriharsha Chintalapani

> Storm Druid Connector
> ---------------------
>
> Key: STORM-1979
> URL: https://issues.apache.org/jira/browse/STORM-1979
> Project: Apache Storm
> Issue Type: Improvement
> Reporter: Sriharsha Chintalapani
> Assignee: Sriharsha Chintalapani
>
> Storm Bolt & Trident state implementation for Druid.
[jira] [Created] (STORM-1978) Storm Druid Connector
Sriharsha Chintalapani created STORM-1978:
------------------------------------------
    Summary: Storm Druid Connector
    Key: STORM-1978
    URL: https://issues.apache.org/jira/browse/STORM-1978
    Project: Apache Storm
    Issue Type: Improvement
    Reporter: Sriharsha Chintalapani

Storm Bolt & Trident state implementation for Druid.
[jira] [Created] (STORM-1979) Storm Druid Connector
Sriharsha Chintalapani created STORM-1979:
------------------------------------------
    Summary: Storm Druid Connector
    Key: STORM-1979
    URL: https://issues.apache.org/jira/browse/STORM-1979
    Project: Apache Storm
    Issue Type: Improvement
    Reporter: Sriharsha Chintalapani

Storm Bolt & Trident state implementation for Druid.
[jira] [Commented] (STORM-1976) Storm Nimbus H/A has issue on cleaning corrupted topologies
[ https://issues.apache.org/jira/browse/STORM-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15381615#comment-15381615 ]

Sriharsha Chintalapani commented on STORM-1976:
-----------------------------------------------

[~kabhwan] I think the old logic seems ok to me. It assumes that the topology jar is replicated on all the nimbus nodes, which makes sense: when the user deploys the topology, we immediately sync it to all the nimbus nodes. Hence the reason we had min.num.replication; it is the criterion that a minimum number of nimbus nodes need to have the topology jar for the uploadJar to be successful. Not sure if the new code follows that.

> Storm Nimbus H/A has issue on cleaning corrupted topologies
> -----------------------------------------------------------
>
> Key: STORM-1976
> URL: https://issues.apache.org/jira/browse/STORM-1976
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-core
> Affects Versions: 1.0.0, 1.0.1
> Reporter: Raghav Kumar Gautam
> Assignee: Jungtaek Lim
> Priority: Blocker
>
> In the following scenario storm-ha runs into issues:
> 1. Kill a non-leader nimbus
> 2. Submit a topology
> 3. Bring up the non-leader nimbus
> After step 3 the expectation is that the non-leader nimbus will download the topology jar. Instead it cleans up the topology.
> {code}
> 2016-07-12 07:11:09.511 o.a.s.c.zookeeper-state-factory [WARN] Received event ::none: with disconnected Reader Zookeeper.
> 2016-07-12 07:11:09.587 o.a.s.zookeeper [INFO] Queued up for leader lock.
> 2016-07-12 07:11:09.608 o.a.s.d.nimbus [INFO] Corrupt topology JoinedNonLeaderNimbusTriesToDownloadTopologyCode-2-1468307239 has state on zookeeper but doesn't have a local dir on Nimbus. Cleaning up...
> 2016-07-12 07:11:09.932 o.a.h.m.s.s.StormTimelineMetricsReporter [INFO] Preparing Storm Metrics Reporter
> 2016-07-12 07:11:09.946 o.a.s.d.m.MetricsUtils [INFO] Using statistics reporter plugin:org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter
> {code}
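The min.num.replication criterion described in the comment above amounts to a gate on the upload path: success is only declared once enough nimbus nodes report having the jar. A minimal sketch, using a hypothetical helper rather than Storm's actual implementation:

```java
import java.util.function.IntSupplier;

// Hypothetical sketch (not Storm's actual code) of a min-replication gate:
// poll the current replica count until it reaches the minimum, or give up
// after a timeout. uploadJar would only report success if this returns true.
public class ReplicationWait {
    public static boolean waitForReplication(IntSupplier replicaCount,
                                             int minReplicas,
                                             long timeoutMillis,
                                             long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (replicaCount.getAsInt() < minReplicas) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out before enough nodes had the jar
            }
            Thread.sleep(pollMillis); // wait and re-check the replica count
        }
        return true;
    }
}
```

Under this criterion, a newly restarted non-leader that is missing a jar should be treated as "not yet replicated" and trigger a download, not a cleanup.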
[jira] [Commented] (STORM-1976) Storm Nimbus H/A has issue on cleaning corrupted topologies
[ https://issues.apache.org/jira/browse/STORM-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15381597#comment-15381597 ]

Sriharsha Chintalapani commented on STORM-1976:
-----------------------------------------------

[~kabhwan] As far as I know, the first version of nimbus doesn't do cleanup if it is a non-leader. From your description it seems that the non-leader nimbus is calling the cleanup of the topologies. Is that right?

> Storm Nimbus H/A has issue on cleaning corrupted topologies
> -----------------------------------------------------------
>
> Key: STORM-1976
> URL: https://issues.apache.org/jira/browse/STORM-1976
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-core
> Affects Versions: 1.0.0, 1.0.1
> Reporter: Raghav Kumar Gautam
> Assignee: Jungtaek Lim
> Priority: Blocker
[jira] [Resolved] (STORM-1969) Modify HiveTopology to show usage of non-partition table
[ https://issues.apache.org/jira/browse/STORM-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani resolved STORM-1969.
-------------------------------------------
    Resolution: Fixed

> Modify HiveTopology to show usage of non-partition table
> --------------------------------------------------------
>
> Key: STORM-1969
> URL: https://issues.apache.org/jira/browse/STORM-1969
> Project: Apache Storm
> Issue Type: Improvement
> Components: storm-hive
> Reporter: Raghav Kumar Gautam
> Assignee: Jungtaek Lim
> Priority: Minor
> Fix For: 2.0.0, 1.1.0
>
> There are several kinds of topologies in storm-hive, but all of them use partitions, so we don't have an example of accessing a non-partitioned table. It would be better to have one for automated tests.
> Instead of creating a new topology, we can modify HiveTopology to not use partitions at all. BucketTestHiveTopology and TridentHiveTopology also use DelimitedRecordHiveMapper.withTimeAsPartitionField(), so I think usage of withTimeAsPartitionField() is covered.
[jira] [Updated] (STORM-1969) Modify HiveTopology to show usage of non-partition table
[ https://issues.apache.org/jira/browse/STORM-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sriharsha Chintalapani updated STORM-1969:
------------------------------------------
    Fix Version/s: 1.1.0
                   2.0.0

> Modify HiveTopology to show usage of non-partition table
> --------------------------------------------------------
>
> Key: STORM-1969
> URL: https://issues.apache.org/jira/browse/STORM-1969
> Project: Apache Storm
> Issue Type: Improvement
> Components: storm-hive
> Reporter: Raghav Kumar Gautam
> Assignee: Jungtaek Lim
> Priority: Minor
> Fix For: 2.0.0, 1.1.0
[jira] [Commented] (STORM-1949) Backpressure can cause spout to stop emitting and stall topology
[ https://issues.apache.org/jira/browse/STORM-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371086#comment-15371086 ]

Sriharsha Chintalapani commented on STORM-1949:
-----------------------------------------------

[~knusbaum] Another issue that we've seen is erratic toggling of backpressure on/off. IMO, adding a time duration before turning backpressure on or off might yield a better experience here. Currently we toggle as soon as we hit the high watermark; how about we observe the pattern for a configurable amount of time before we turn on backpressure? Does this make sense?

> Backpressure can cause spout to stop emitting and stall topology
> ----------------------------------------------------------------
>
> Key: STORM-1949
> URL: https://issues.apache.org/jira/browse/STORM-1949
> Project: Apache Storm
> Issue Type: Bug
> Reporter: Roshan Naik
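The time-hysteresis idea in the comment above could look roughly like the following. This is a hypothetical class, not an actual Storm API: the throttle only flips after the watermark condition has held continuously for a configurable duration.

```java
// Hypothetical sketch (not Storm's code) of debounced backpressure: the
// throttle state only changes after the opposite condition has persisted
// for holdMillis, so brief spikes over the high watermark are ignored.
public class DebouncedBackpressure {
    private final long holdMillis;     // how long a condition must persist
    private boolean throttled = false;
    private long conditionSince = -1;  // when the opposite condition began

    public DebouncedBackpressure(long holdMillis) {
        this.holdMillis = holdMillis;
    }

    /**
     * @param congested true when the queue is above the high watermark
     *                  (or, when throttled, still above the low watermark).
     * @param nowMillis current time, passed in to keep the class testable.
     * @return whether the spout should currently be throttled.
     */
    public boolean update(boolean congested, long nowMillis) {
        if (congested == throttled) {
            conditionSince = -1;                  // no pending state change
        } else {
            if (conditionSince < 0) {
                conditionSince = nowMillis;       // change just started
            }
            if (nowMillis - conditionSince >= holdMillis) {
                throttled = congested;            // held long enough: flip
                conditionSince = -1;
            }
        }
        return throttled;
    }
}
```

With holdMillis set to a few hundred milliseconds, a queue that briefly crosses the high watermark and immediately drains no longer flips the throttle on and off, which is the erratic behavior the comment describes.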
[jira] [Commented] (STORM-1096) UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
[ https://issues.apache.org/jira/browse/STORM-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371011#comment-15371011 ]

Sriharsha Chintalapani commented on STORM-1096:
-----------------------------------------------

[~revans2] Not sure if we are talking about the same issue :). I was pointing to having nimbus.impersonation.authorizer in defaults.yaml, which seems confusing to users given that we print logs on every request. I don't think users will expect us to check impersonation in non-secure environments. Or at least let's check another config that indicates that the cluster is secure, and only then print the logs, if we want to have this as the default.

> UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
> --------------------------------------------------------------------------------------------------------------------
>
> Key: STORM-1096
> URL: https://issues.apache.org/jira/browse/STORM-1096
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-core
> Affects Versions: 0.10.0
> Reporter: Robert Joseph Evans
> Assignee: Robert Joseph Evans
> Priority: Blocker
> Fix For: 0.10.0
>
> We have started using 0.10.0 under load and found a few issues around the UI and impersonation.
> The UI, when trying to connect to nimbus, will impersonate other users. Nimbus, by default, allows impersonation and just outputs a warning message that it is allowed. We really should default to not allowing impersonation. Having the authorizer configured by default does not hurt when running insecure, because impersonation is not possible, but when security is enabled, if someone forgets to set this config we are now insecure by default.
> If you do set all of that up correctly, the UI can now impersonate the wrong user when connecting to nimbus. The UI decides which user to impersonate by pulling it from the request context. The requestContext is populated from the HttpRequest when assert-authorized-user is called. assert-authorized-user takes a topology-conf as a parameter. The only way to get this topology conf is to talk to nimbus, which will get the wrong user because the request context has not been populated yet.
> This just becomes a huge pain for users, who way too often will not be able to see pages on the UI.
[jira] [Commented] (STORM-1949) Backpressure can cause spout to stop emitting and stall topology
[ https://issues.apache.org/jira/browse/STORM-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371002#comment-15371002 ]

Sriharsha Chintalapani commented on STORM-1949:
-----------------------------------------------

[~abhishek.agarwal] We didn't have an alternative for non-acking topologies before this backpressure feature. We always told users to depend on acking & topology.max.spout.pending if they needed a way to control a fast spout.

> Backpressure can cause spout to stop emitting and stall topology
> ----------------------------------------------------------------
>
> Key: STORM-1949
> URL: https://issues.apache.org/jira/browse/STORM-1949
> Project: Apache Storm
> Issue Type: Bug
> Reporter: Roshan Naik
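The topology.max.spout.pending mechanism referred to in the comment above amounts to a cap on un-acked tuples: the spout stops emitting once the cap is reached, so the bolts' ack rate paces the spout. A minimal sketch of that idea, using a hypothetical class rather than Storm's implementation:

```java
// Hypothetical sketch (not Storm's code) of the max-spout-pending idea:
// the spout skips emitting once maxPending tuples are in flight, so the
// ack rate of the downstream bolts naturally rate-limits a fast spout.
public class PendingThrottle {
    private final int maxPending;
    private int pending = 0;

    public PendingThrottle(int maxPending) {
        this.maxPending = maxPending;
    }

    // Called before each emit; false means "skip this nextTuple call".
    public boolean tryEmit() {
        if (pending >= maxPending) {
            return false; // too many un-acked tuples: back off
        }
        pending++;
        return true;
    }

    // An ack (or a fail, which triggers a replay elsewhere) frees a slot.
    public void onAckOrFail() {
        if (pending > 0) {
            pending--;
        }
    }
}
```

Note this only applies to acking topologies, which is exactly the limitation the comment is pointing out: without acks, nothing ever frees a slot, so this mechanism cannot pace the spout.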
[jira] [Created] (STORM-1960) Add CORS support to STORM UI Rest api
Sriharsha Chintalapani created STORM-1960:
------------------------------------------
    Summary: Add CORS support to STORM UI Rest api
    Key: STORM-1960
    URL: https://issues.apache.org/jira/browse/STORM-1960
    Project: Apache Storm
    Issue Type: Improvement
    Reporter: Sriharsha Chintalapani
    Assignee: Sriharsha Chintalapani
    Fix For: 2.0.0, 1.1.0
[jira] [Commented] (STORM-1949) Backpressure can cause spout to stop emitting and stall topology
[ https://issues.apache.org/jira/browse/STORM-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370955#comment-15370955 ] Sriharsha Chintalapani commented on STORM-1949: --- [~abhishek.agarwal] [~kabhwan] As we can see, keeping this on by default is resulting in a performance degradation that we are unable to fix in the near term. Since this is a new feature, turning it off by default makes sense until we figure out more details. [~kabhwan] Of course the idea is to fix any issues and then make it the default. > Backpressure can cause spout to stop emitting and stall topology > > > Key: STORM-1949 > URL: https://issues.apache.org/jira/browse/STORM-1949 > Project: Apache Storm > Issue Type: Bug >Reporter: Roshan Naik > > The problem can be reproduced by this [Word count > topology|https://github.com/hortonworks/storm/blob/perftopos1.x/examples/storm-starter/src/jvm/org/apache/storm/starter/perf/FileReadWordCountTopo.java] > within an IDE. > I ran it with 1 spout instance, 2 splitter bolt instances, and 2 counter bolt > instances. > The problem is more easily reproduced with the WC topology because it causes an > explosion of tuples by splitting a sentence tuple into word tuples. As > the bolts have to process more tuples than the spout is producing, the spout > needs to operate more slowly. > The amount of time it takes for the topology to stall can vary, but it is > typically under 10 mins. > *My theory:* I suspect there is a race condition in the way ZK is being > utilized to enable/disable back pressure. When congested (i.e. pressure > exceeds the high water mark), the bolt's worker records this congested situation > in ZK by creating a node. Once the congestion is reduced below the low water > mark, it deletes this node. > The spout's worker has set up a watch on the parent node, expecting a callback > whenever there is a change in the child nodes.
On receiving the callback, the > spout's worker lists the parent node to check whether there are 0 or more child > nodes; it is essentially trying to figure out the nature of the state change > in ZK to determine whether to throttle or not. Subsequently it sets up > another watch in ZK to keep an eye on future changes. > When there are multiple bolts, there can be rapid creation/deletion of these > ZK nodes. Between the time the worker receives a callback and sets up the > next watch, many changes may have occurred in ZK, and these will go unnoticed by > the spout. > The condition that the bolts are no longer congested may not get noticed as a > result of this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
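The theory above hinges on ZooKeeper's one-shot watch semantics: a watch fires at most once, and any change that lands between the callback and the next watch registration is never delivered. A minimal simulation of that gap (illustrative names, not Storm code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of ZooKeeper's one-shot watches: a watch fires at most once,
// and changes made while no watch is armed are never delivered.
public class WatchGapDemo {
    static class FakeZk {
        private boolean watchArmed = false;
        private final List<String> delivered = new ArrayList<>();
        private final List<String> lost = new ArrayList<>();

        void setWatch() { watchArmed = true; }

        // A child node is created or deleted under the watched parent.
        void change(String event) {
            if (watchArmed) {
                watchArmed = false;       // one-shot: the watch is consumed
                delivered.add(event);
            } else {
                lost.add(event);          // no watch armed: change goes unnoticed
            }
        }

        List<String> delivered() { return delivered; }
        List<String> lost() { return lost; }
    }

    public static void main(String[] args) {
        FakeZk zk = new FakeZk();
        zk.setWatch();
        zk.change("bolt-1 congested (node created)");   // delivered; watch consumed
        // Before the spout's worker re-arms the watch, the congestion clears:
        zk.change("bolt-1 uncongested (node deleted)"); // lost!
        zk.setWatch();                                  // re-armed too late
        System.out.println("delivered=" + zk.delivered().size()
                + " lost=" + zk.lost().size());
    }
}
```

Listing the parent after the callback partially compensates (the spout sees the current child count at that instant), but if the last deletion happens inside the gap and nothing ever fires the re-armed watch again, the throttle can stay on indefinitely.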
[jira] [Commented] (STORM-1096) UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
[ https://issues.apache.org/jira/browse/STORM-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15370946#comment-15370946 ] Sriharsha Chintalapani commented on STORM-1096: --- [~revans2] Agree. It looks confusing, and it would be good to add some comments around that. Security is off by default, and turning it on today takes a lot of steps; we are adding yet another step, which is not ideal. But printing these messages when most of the users are in a non-secure environment does the users no good. Why would they need to see these messages in a non-secure cluster? Turning on these log messages in a secure environment makes sense if the users have not configured the impersonation authorizer. > UI tries to impersonate wrong user when getting topology conf for > authorization, impersonation is allowed by default > > > Key: STORM-1096 > URL: https://issues.apache.org/jira/browse/STORM-1096 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 0.10.0 >Reporter: Robert Joseph Evans >Assignee: Robert Joseph Evans >Priority: Blocker > Fix For: 0.10.0 > > > We have started using 0.10.0 under load and found a few issues around the UI > and impersonation. > The UI, when trying to connect to Nimbus, will impersonate other users. > Nimbus, by default, allows impersonation and just outputs a warning message > that it is allowed. We really should default to not allowing impersonation. > Having the authorizer configured by default does not hurt when running > insecure, because impersonation is not possible, but when security is enabled, > if someone forgets to set this config, we are now insecure by default.
If you do set all of that up correctly, the UI can now impersonate the wrong > user when connecting to Nimbus. > The UI decides which user to impersonate by pulling it from the request > context. The requestContext is populated from the HttpRequest when > assert-authorized-user is called. assert-authorized-user takes a > topology-conf as a parameter. The only way to get this topology conf is to > talk to Nimbus, which will get the wrong user because the request context has > not been populated yet. > This just becomes a huge pain for users, who way too often will not be able to > see pages on the UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1930) Kafka New Client API - Support for Topic Wildcards
[ https://issues.apache.org/jira/browse/STORM-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1930: -- Fix Version/s: (was: 1.1.0) > Kafka New Client API - Support for Topic Wildcards > -- > > Key: STORM-1930 > URL: https://issues.apache.org/jira/browse/STORM-1930 > Project: Apache Storm > Issue Type: New Feature > Components: storm-kafka >Affects Versions: 1.0.2 >Reporter: Hugo Louro >Assignee: Hugo Louro >Priority: Critical > Fix For: 2.0.0, 1.0.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1930) Kafka New Client API - Support for Topic Wildcards
[ https://issues.apache.org/jira/browse/STORM-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1930. --- Resolution: Fixed Fix Version/s: 1.1.0 1.0.2 2.0.0 > Kafka New Client API - Support for Topic Wildcards > -- > > Key: STORM-1930 > URL: https://issues.apache.org/jira/browse/STORM-1930 > Project: Apache Storm > Issue Type: New Feature > Components: storm-kafka >Affects Versions: 1.0.2 >Reporter: Hugo Louro >Assignee: Hugo Louro >Priority: Critical > Fix For: 2.0.0, 1.0.2, 1.1.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
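For reference, wildcard subscription in the new consumer API goes through `KafkaConsumer.subscribe(Pattern, ConsumerRebalanceListener)`; the topic matching itself is ordinary `java.util.regex`. A minimal sketch of that matching (the topic names here are made up):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicWildcardDemo {
    // Return the topics a wildcard subscription would match. The real consumer
    // call is kafkaConsumer.subscribe(pattern, rebalanceListener); here we only
    // demonstrate the pattern-matching half.
    static List<String> matching(Pattern pattern, List<String> topics) {
        return topics.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> topics = Arrays.asList("clicks-2016", "clicks-2017", "audit");
        System.out.println(matching(Pattern.compile("clicks-.*"), topics));
    }
}
```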
[jira] [Updated] (STORM-1942) Extra closing div tag in topology.html
[ https://issues.apache.org/jira/browse/STORM-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1942: -- Fix Version/s: 1.1.0 2.0.0 > Extra closing div tag in topology.html > -- > > Key: STORM-1942 > URL: https://issues.apache.org/jira/browse/STORM-1942 > Project: Apache Storm > Issue Type: Bug > Components: storm-ui >Affects Versions: 2.0.0 >Reporter: Alessandro Bellina >Assignee: Alessandro Bellina >Priority: Minor > Fix For: 2.0.0, 1.1.0 > > > An extra closing div tag in topology.html is causing styling to be strange. It appears to have > been introduced in STORM-1136. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1942) Extra closing div tag in topology.html
[ https://issues.apache.org/jira/browse/STORM-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1942. --- Resolution: Fixed > Extra closing div tag in topology.html > -- > > Key: STORM-1942 > URL: https://issues.apache.org/jira/browse/STORM-1942 > Project: Apache Storm > Issue Type: Bug > Components: storm-ui >Affects Versions: 2.0.0 >Reporter: Alessandro Bellina >Assignee: Alessandro Bellina >Priority: Minor > Fix For: 2.0.0, 1.1.0 > > > An extra closing div tag in topology.html is causing styling to be strange. It appears to have > been introduced in STORM-1136. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1934) Race condition between sync-supervisor and sync-processes raises several strange issues
[ https://issues.apache.org/jira/browse/STORM-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1934: -- Fix Version/s: 1.1.0 2.0.0 1.0.0 > Race condition between sync-supervisor and sync-processes raises several > strange issues > --- > > Key: STORM-1934 > URL: https://issues.apache.org/jira/browse/STORM-1934 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.0, 2.0.0, 1.0.1 >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim >Priority: Critical > Fix For: 1.0.0, 2.0.0, 1.1.0 > > > There are some strange issues, including STORM-1933 and others (for which I will > file issues soon), which are related to a race condition in the supervisor. > As I mentioned in STORM-1933, basically sync-supervisor relies on the ZK > assignment, and sync-processes relies on the local assignment and the local workers > directory, but in fact sync-supervisor also accesses local state and takes some > actions which affect sync-processes. Satish also left a comment on > STORM-1933 describing another issue related to the race condition, and an idea to fix > it that I'm on the same page with. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1956) Disable Backpressure by default
[ https://issues.apache.org/jira/browse/STORM-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1956. --- Resolution: Fixed > Disable Backpressure by default > --- > > Key: STORM-1956 > URL: https://issues.apache.org/jira/browse/STORM-1956 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.0, 1.0.1 >Reporter: Roshan Naik >Assignee: Roshan Naik >Priority: Blocker > Fix For: 2.0.0, 1.0.2, 1.1.0 > > > Some of the context on this is captured in STORM-1949. > In short, wait for the BP mechanism to mature some more and be production ready > before we enable it by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1934) Race condition between sync-supervisor and sync-processes raises several strange issues
[ https://issues.apache.org/jira/browse/STORM-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1934. --- Resolution: Fixed > Race condition between sync-supervisor and sync-processes raises several > strange issues > --- > > Key: STORM-1934 > URL: https://issues.apache.org/jira/browse/STORM-1934 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.0, 2.0.0, 1.0.1 >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim >Priority: Critical > Fix For: 1.0.0, 2.0.0, 1.1.0 > > > There are some strange issues, including STORM-1933 and others (for which I will > file issues soon), which are related to a race condition in the supervisor. > As I mentioned in STORM-1933, basically sync-supervisor relies on the ZK > assignment, and sync-processes relies on the local assignment and the local workers > directory, but in fact sync-supervisor also accesses local state and takes some > actions which affect sync-processes. Satish also left a comment on > STORM-1933 describing another issue related to the race condition, and an idea to fix > it that I'm on the same page with. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1956) Disable Backpressure by default
[ https://issues.apache.org/jira/browse/STORM-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1956: -- Fix Version/s: 1.1.0 2.0.0 > Disable Backpressure by default > --- > > Key: STORM-1956 > URL: https://issues.apache.org/jira/browse/STORM-1956 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.0, 1.0.1 >Reporter: Roshan Naik >Assignee: Roshan Naik >Priority: Blocker > Fix For: 2.0.0, 1.0.2, 1.1.0 > > > Some of the context on this is captured in STORM-1949. > In short, wait for the BP mechanism to mature some more and be production ready > before we enable it by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
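Concretely, disabling it by default comes down to a one-line change in defaults.yaml (a sketch of the relevant setting; the key name matches Storm 1.x's `topology.backpressure.enable`):

```yaml
# defaults.yaml (sketch): ship with back pressure off
topology.backpressure.enable: false
```

A topology that wants back pressure can still opt in through its own conf, e.g. `conf.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, true)` before submitting.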
[jira] [Commented] (STORM-1096) UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
[ https://issues.apache.org/jira/browse/STORM-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369391#comment-15369391 ] Sriharsha Chintalapani commented on STORM-1096: --- [~revans2] I think the issue here is the case of a proxy server calling the UI with the doAsUser param. Also, it seems we check impersonation twice: once in the UI and again in Nimbus https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/ui/core.clj#L92 By adding nimbus.impersonation.authorizer to the defaults we are going to generate quite a bit of logs, which could be confusing for users who are not running in secure mode. This is already addressed in another JIRA. I'll remove the double impersonation check on my side and it should be ok. A few things I would like to address: 1. Remove the impersonation check on the UI side (let me know if this check is necessary). 2. Currently we've got groups and hosts, and I would like to add a users section to restrict impersonation to a few users. > UI tries to impersonate wrong user when getting topology conf for > authorization, impersonation is allowed by default > > > Key: STORM-1096 > URL: https://issues.apache.org/jira/browse/STORM-1096 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 0.10.0 >Reporter: Robert Joseph Evans >Assignee: Robert Joseph Evans >Priority: Blocker > Fix For: 0.10.0 > > > We have started using 0.10.0 under load and found a few issues around the UI > and impersonation. > The UI, when trying to connect to Nimbus, will impersonate other users. > Nimbus, by default, allows impersonation and just outputs a warning message > that it is allowed. We really should default to not allowing impersonation. > Having the authorizer configured by default does not hurt when running > insecure, because impersonation is not possible, but when security is enabled, > if someone forgets to set this config, we are now insecure by default.
If you do set all of that up correctly, the UI can now impersonate the wrong > user when connecting to Nimbus. > The UI decides which user to impersonate by pulling it from the request > context. The requestContext is populated from the HttpRequest when > assert-authorized-user is called. assert-authorized-user takes a > topology-conf as a parameter. The only way to get this topology conf is to > talk to Nimbus, which will get the wrong user because the request context has > not been populated yet. > This just becomes a huge pain for users, who way too often will not be able to > see pages on the UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1737) storm-kafka-client has compilation errors with Apache Kafka 0.10
[ https://issues.apache.org/jira/browse/STORM-1737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369241#comment-15369241 ] Sriharsha Chintalapani commented on STORM-1737: --- The above issue is due to the API change in the new Kafka consumer from 0.9 to 0.10. Given that the Kafka Consumer interface is marked as unstable, we should make this change on our side by updating the pom.xml kafka.version to 0.10 and making the required API changes in the Kafka spout. > storm-kafka-client has compilation errors with Apache Kafka 0.10 > > > Key: STORM-1737 > URL: https://issues.apache.org/jira/browse/STORM-1737 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Hugo Louro >Priority: Blocker > Fix For: 1.0.2 > > > when compiled with Apache Kafka 0.10 branch getting following errors > {code} > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[163,51] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[166,45] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[175,51] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[177,45] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[252,41] > incompatible types: 
org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
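For context on the errors above: between the 0.9 and 0.10 new-consumer APIs, methods such as `seekToEnd` and `pause` changed from varargs `TopicPartition...` to `Collection<TopicPartition>`. The sketch below mirrors only that signature change with hypothetical stand-in classes (not the real kafka-clients types); the fix on the spout side is to wrap single partitions in a Collection:

```java
import java.util.Collection;
import java.util.Collections;

public class SeekToEndSketch {
    // Hypothetical stand-in for org.apache.kafka.common.TopicPartition.
    static class TopicPartition {
        final String topic; final int partition;
        TopicPartition(String topic, int partition) { this.topic = topic; this.partition = partition; }
    }

    // Kafka 0.10 changed seekToEnd(TopicPartition...) to seekToEnd(Collection<TopicPartition>);
    // this stand-in mirrors only the 0.10-style signature.
    static class Consumer010 {
        Collection<TopicPartition> lastSeek;
        void seekToEnd(Collection<TopicPartition> partitions) { lastSeek = partitions; }
    }

    public static void main(String[] args) {
        Consumer010 consumer = new Consumer010();
        TopicPartition tp = new TopicPartition("test", 0);
        // 0.9-style call would no longer compile: consumer.seekToEnd(tp);
        // 0.10-style fix: wrap the single partition in a Collection.
        consumer.seekToEnd(Collections.singleton(tp));
        System.out.println(consumer.lastSeek.size());
    }
}
```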
[jira] [Updated] (STORM-1737) storm-kafka-client has compilation errors with Apache Kafka 0.10
[ https://issues.apache.org/jira/browse/STORM-1737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1737: -- Priority: Blocker (was: Critical) > storm-kafka-client has compilation errors with Apache Kafka 0.10 > > > Key: STORM-1737 > URL: https://issues.apache.org/jira/browse/STORM-1737 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Hugo Louro >Priority: Blocker > Fix For: 1.0.2 > > > when compiled with Apache Kafka 0.10 branch getting following errors > {code} > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[163,51] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[166,45] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[175,51] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[177,45] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[252,41] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1737) storm-kafka-client has compilation errors with Apache Kafka 0.10
[ https://issues.apache.org/jira/browse/STORM-1737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1737: -- Fix Version/s: 1.0.2 > storm-kafka-client has compilation errors with Apache Kafka 0.10 > > > Key: STORM-1737 > URL: https://issues.apache.org/jira/browse/STORM-1737 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Hugo Louro >Priority: Blocker > Fix For: 1.0.2 > > > when compiled with Apache Kafka 0.10 branch getting following errors > {code} > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[163,51] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[166,45] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[175,51] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[177,45] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > [ERROR] > /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[252,41] > incompatible types: org.apache.kafka.common.TopicPartition cannot be > converted to java.util.Collection > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1461) Storm.py - Create Tests to Validate CLI Options
[ https://issues.apache.org/jira/browse/STORM-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1461. --- Resolution: Fixed > Storm.py - Create Tests to Validate CLI Options > --- > > Key: STORM-1461 > URL: https://issues.apache.org/jira/browse/STORM-1461 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Reporter: Hugo Louro >Assignee: Hugo Louro > Fix For: 1.0.0 > > > Typos in function names or CLI options currently go undetected and may cause > a valid CLI option not to work when invoked from the CLI. > Create a test to validate that a valid CLI invocation calls the correct method and > behaves as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1096) UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
[ https://issues.apache.org/jira/browse/STORM-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368476#comment-15368476 ] Sriharsha Chintalapani commented on STORM-1096: --- [~revans2] Thanks for the reply. Regarding your example config above: the current implementation doesn't work like that. Example: let's say I kinit as the storm user and deploy a word count topology. Now I make a call to the Storm UI to get the topology details:
{code}
http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1425844354\?doAsUser=testUSer1
{code}
In this case we need to have the following ACL in place for it to work:
{code}
nimbus.impersonation.acl:
  storm:    // super user
    users: [*]
    hosts: [*]
{code}
In ImpersonationAuthorizer we check whether ctx.realPrincipal() is in the nimbus.impersonation.acl list. So let's say I kinitted as another user, "topology-user1". Then this user needs to be added to the list. Basically we need all the topology users to be in the list here; otherwise the realPrincipal() (which holds the topology owner's principal) won't be found in the ACLs. What I was asking for is exactly what you showed in the example above: instead of listing the users in the ACL, we should have the proxy user there, and it should have a list of the users, hosts, and groups who can impersonate that user. It doesn't cause any issues in uploading the topology; the only issue is getting information about the topology using the REST APIs. If you pass doAs, even if we have nimbus.impersonation.acl set and the request passes through that authorizer, it will get an authorization exception from SimpleACLAuthorizer, because the actual user (storm) is in ctx.realPrincipal() and ctx.principal() contains ambari-server-storm, and this user doesn't have permissions on the topology.
Either we make this principal a super user, in which case we might get into another issue where one user can see another user's topology information if they are both in the impersonation ACL, or we modify SimpleACLAuthorizer to check the realPrincipal and whether it happens to be the owner of the topology. I have a patch ready; I'll post it in another JIRA. That will probably make it easier to discuss what I am suggesting. > UI tries to impersonate wrong user when getting topology conf for > authorization, impersonation is allowed by default > > > Key: STORM-1096 > URL: https://issues.apache.org/jira/browse/STORM-1096 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 0.10.0 >Reporter: Robert Joseph Evans >Assignee: Robert Joseph Evans >Priority: Blocker > Fix For: 0.10.0 > > > We have started using 0.10.0 under load and found a few issues around the UI > and impersonation. > The UI, when trying to connect to Nimbus, will impersonate other users. > Nimbus, by default, allows impersonation and just outputs a warning message > that it is allowed. We really should default to not allowing impersonation. > Having the authorizer configured by default does not hurt when running > insecure, because impersonation is not possible, but when security is enabled, > if someone forgets to set this config, we are now insecure by default. > If you do set all of that up correctly, the UI can now impersonate the wrong > user when connecting to Nimbus. > The UI decides which user to impersonate by pulling it from the request > context. The requestContext is populated from the HttpRequest when > assert-authorized-user is called. assert-authorized-user takes a > topology-conf as a parameter. The only way to get this topology conf is to > talk to Nimbus, which will get the wrong user because the request context has > not been populated yet. > This just becomes a huge pain for users, who way too often will not be able to > see pages on the UI. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1949) Backpressure can cause spout to stop emitting and stall topology
[ https://issues.apache.org/jira/browse/STORM-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368473#comment-15368473 ] Sriharsha Chintalapani commented on STORM-1949: --- +1. Given that there are a few issues, we should turn it off in defaults.yaml. My understanding of backpressure, from talking to/debugging with [~roshan_naik], is that as soon as we hit the higher or lower ceiling we take an action. I would like to see if we can add a time duration to check whether the bolt has actually hit the ceiling continuously, i.e. so that we are not constantly sending the spout signals to turn it on or off. Let's say, as a user, I set this time duration to 1 min: we check that the high watermark has been hit consistently for 1 min and only then send a signal to turn on back pressure, and similarly for the low watermark. Otherwise we can end up generating false positives when turning on back pressure. I also agree with [~roshan_naik] about the increased load on ZooKeeper. If possible, we should explore replacing it with a message transferred back to the spout, or via the acker. I am not clear whether this is possible; I need to look into it. > Backpressure can cause spout to stop emitting and stall topology > > > Key: STORM-1949 > URL: https://issues.apache.org/jira/browse/STORM-1949 > Project: Apache Storm > Issue Type: Bug >Reporter: Roshan Naik > > The problem can be reproduced by this [Word count > topology|https://github.com/hortonworks/storm/blob/perftopos1.x/examples/storm-starter/src/jvm/org/apache/storm/starter/perf/FileReadWordCountTopo.java] > within an IDE. > I ran it with 1 spout instance, 2 splitter bolt instances, and 2 counter bolt > instances. > The problem is more easily reproduced with the WC topology because it causes an > explosion of tuples by splitting a sentence tuple into word tuples. As > the bolts have to process more tuples than the spout is producing, the spout > needs to operate more slowly. > The amount of time it takes for the topology to stall can vary, but it is > typically under 10 mins. 
> *My theory:* I suspect there is a race condition in the way ZK is being > utilized to enable/disable back pressure. When congested (i.e. pressure > exceeds the high water mark), the bolt's worker records this congested situation > in ZK by creating a node. Once the congestion is reduced below the low water > mark, it deletes this node. > The spout's worker has set up a watch on the parent node, expecting a callback > whenever there is a change in the child nodes. On receiving the callback, the > spout's worker lists the parent node to check whether there are 0 or more child > nodes; it is essentially trying to figure out the nature of the state change > in ZK to determine whether to throttle or not. Subsequently it sets up > another watch in ZK to keep an eye on future changes. > When there are multiple bolts, there can be rapid creation/deletion of these > ZK nodes. Between the time the worker receives a callback and sets up the > next watch, many changes may have occurred in ZK, and these will go unnoticed by > the spout. > The condition that the bolts are no longer congested may not get noticed as a > result of this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
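The time-duration guard suggested above amounts to a debounce: only emit a throttle-on signal once the high watermark has been exceeded continuously for the configured window. A sketch (hypothetical names, not Storm's actual implementation):

```java
// Debounce for back pressure signals: only report congestion when the
// high watermark has been exceeded continuously for `windowMillis`.
public class BackpressureDebounce {
    private final long windowMillis;
    private long exceededSince = -1;   // -1 means "not currently above watermark"

    public BackpressureDebounce(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Called on every queue-size sample; returns true only when a
    // throttle-on signal should be sent.
    public boolean sample(boolean aboveHighWatermark, long nowMillis) {
        if (!aboveHighWatermark) {
            exceededSince = -1;        // dipped below: reset the window
            return false;
        }
        if (exceededSince < 0) {
            exceededSince = nowMillis; // first sample above the watermark
        }
        return nowMillis - exceededSince >= windowMillis;
    }

    public static void main(String[] args) {
        BackpressureDebounce d = new BackpressureDebounce(60_000); // 1 min window
        System.out.println(d.sample(true, 0));        // just crossed the watermark
        System.out.println(d.sample(true, 30_000));   // only 30s sustained
        System.out.println(d.sample(false, 40_000));  // dipped below: window resets
        System.out.println(d.sample(true, 50_000));   // a new window starts
        System.out.println(d.sample(true, 110_000));  // 60s sustained: signal
    }
}
```

The low-watermark (throttle-off) side would mirror this with its own window, so brief dips do not flap the signal in either direction.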
[jira] [Commented] (STORM-1096) UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
[ https://issues.apache.org/jira/browse/STORM-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362047#comment-15362047 ] Sriharsha Chintalapani commented on STORM-1096: --- [~revans2] Any thoughts on the above? This behavior seems to be the inverse of what we want.
{code}
user = storm-storm, principal = storm-st...@example.com is attempting to impersonate user = ambari-server-storm
{code}
For the above we should be listing ambari-server-storm in the ACL:
{code}
nimbus.impersonation.acl:
  ambari-server-storm:    // proxy user
    users: [storm-storm, another-user]    // wild-card can be used
    groups: [*]    // should be optional
    hosts: [*]
{code}
If the user is allowed to impersonate ambari-server-storm, as in the above example, we should allow the principal and check whether the principal (storm-storm) has access to the requested resource. Currently we check whether ambari-server-storm has access to the resources. IMO the current behavior doesn't seem right, especially in the case of proxy services. > UI tries to impersonate wrong user when getting topology conf for > authorization, impersonation is allowed by default > > > Key: STORM-1096 > URL: https://issues.apache.org/jira/browse/STORM-1096 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 0.10.0 >Reporter: Robert Joseph Evans >Assignee: Robert Joseph Evans >Priority: Blocker > Fix For: 0.10.0 > > > We have started using 0.10.0 under load and found a few issues around the UI > and impersonation. > The UI, when trying to connect to Nimbus, will impersonate other users. > Nimbus, by default, allows impersonation and just outputs a warning message > that it is allowed. We really should default to not allowing impersonation. > Having the authorizer configured by default does not hurt when running > insecure, because impersonation is not possible, but when security is enabled, > if someone forgets to set this config, we are now insecure by default.
If you do set all of that up correctly, the UI can now impersonate the wrong > user when connecting to Nimbus. > The UI decides which user to impersonate by pulling it from the request > context. The requestContext is populated from the HttpRequest when > assert-authorized-user is called. assert-authorized-user takes a > topology-conf as a parameter. The only way to get this topology conf is to > talk to Nimbus, which will get the wrong user because the request context has > not been populated yet. > This just becomes a huge pain for users, who way too often will not be able to > see pages on the UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
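The ACL shape argued for in the comment above — keyed by the impersonated (proxy) user, listing who may impersonate them — can be modeled as a simple lookup. This is a sketch of the proposed semantics only, not Storm's actual ImpersonationAuthorizer; the names are illustrative (requires Java 9+ for Map.of/Set.of):

```java
import java.util.Map;
import java.util.Set;

public class ProxyAclDemo {
    // acl: impersonated user -> set of principals allowed to impersonate them.
    // "*" acts as a wildcard entry.
    static boolean mayImpersonate(Map<String, Set<String>> acl,
                                  String realPrincipal, String doAsUser) {
        Set<String> allowed = acl.get(doAsUser);
        if (allowed == null) return false;
        return allowed.contains("*") || allowed.contains(realPrincipal);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> acl =
                Map.of("ambari-server-storm", Set.of("storm-storm", "another-user"));
        System.out.println(mayImpersonate(acl, "storm-storm", "ambari-server-storm"));
        System.out.println(mayImpersonate(acl, "eve", "ambari-server-storm"));
    }
}
```

Once this check passes, resource authorization would then be evaluated against the impersonating principal (storm-storm), which is the behavior the comment argues for.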
[jira] [Updated] (STORM-1910) One topology can't use hdfs spout to read from two locations
[ https://issues.apache.org/jira/browse/STORM-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1910: -- Assignee: Roshan Naik > One topology can't use hdfs spout to read from two locations > > > Key: STORM-1910 > URL: https://issues.apache.org/jira/browse/STORM-1910 > Project: Apache Storm > Issue Type: Bug > Components: storm-hdfs >Affects Versions: 1.0.1 >Reporter: Raghav Kumar Gautam >Assignee: Roshan Naik > Fix For: 1.1.0 > > > The hdfs uri is passed using the config: > {code} > conf.put(Configs.HDFS_URI, hdfsUri); > {code} > I see two problems with this approach: > 1. If someone wants to use two hdfs uris in the same or different spouts, then > that does not seem feasible. > https://github.com/apache/storm/blob/d17b3b9c3cbc89d854bfb436d213d11cfd4545ec/examples/storm-starter/src/jvm/storm/starter/HdfsSpoutTopology.java#L117-L117 > https://github.com/apache/storm/blob/d17b3b9c3cbc89d854bfb436d213d11cfd4545ec/external/storm-hdfs/src/main/java/org/apache/storm/hdfs/spout/HdfsSpout.java#L331-L331 > {code} > if ( !conf.containsKey(Configs.SOURCE_DIR) ) { > LOG.error(Configs.SOURCE_DIR + " setting is required"); > throw new RuntimeException(Configs.SOURCE_DIR + " setting is required"); > } > this.sourceDirPath = new Path( conf.get(Configs.SOURCE_DIR).toString() ); > {code} > 2. It does not fail fast, i.e. at the time of topology submission. We can fail > fast if the hdfs path is invalid or the credentials/permissions are not ok. Such > errors can currently only be detected at runtime by looking at the worker > logs. > https://github.com/apache/storm/blob/d17b3b9c3cbc89d854bfb436d213d11cfd4545ec/external/storm-hdfs/src/main/java/org/apache/storm/hdfs/spout/HdfsSpout.java#L297-L297 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
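A sketch of what a per-spout configuration could look like (a hypothetical builder-style API, not the current HdfsSpout): each spout instance carries its own URI instead of all instances sharing one topology-level HDFS_URI key, and a validate() step gives a natural place to fail fast before submission:

```java
// Hypothetical per-spout configuration: each spout instance carries its own
// URI instead of all instances sharing a single topology-level conf key.
public class HdfsSpoutConfigSketch {
    static class SpoutConfig {
        private String hdfsUri;
        private String sourceDir;

        SpoutConfig withUri(String uri) { this.hdfsUri = uri; return this; }
        SpoutConfig withSourceDir(String dir) { this.sourceDir = dir; return this; }

        // Fail fast at build/submit time instead of in the worker.
        void validate() {
            if (hdfsUri == null || !hdfsUri.startsWith("hdfs://"))
                throw new IllegalArgumentException("hdfsUri is required: " + hdfsUri);
            if (sourceDir == null)
                throw new IllegalArgumentException("sourceDir is required");
        }
    }

    public static void main(String[] args) {
        SpoutConfig a = new SpoutConfig().withUri("hdfs://clusterA:8020").withSourceDir("/in/a");
        SpoutConfig b = new SpoutConfig().withUri("hdfs://clusterB:8020").withSourceDir("/in/b");
        a.validate();
        b.validate();  // two spouts, two URIs: no shared conf key
        System.out.println("ok");
    }
}
```

A real implementation would additionally open the HDFS path with the supplied credentials during validate() to surface permission errors at submit time.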
[jira] [Commented] (STORM-1096) UI tries to impersonate wrong user when getting topology conf for authorization, impersonation is allowed by default
[ https://issues.apache.org/jira/browse/STORM-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348854#comment-15348854 ] Sriharsha Chintalapani commented on STORM-1096: --- [~revans2] sorry not bringing this before we merged into the repo. Few questions around the config, 1. we made nimbus.impersonation.authorizer as default but without any nimbus.impersonation.acl configs . This will immediately block anyone from submitting the topology except storm user. 2. configs right now requires username as key: hosts , groups as values. This is not easy to automate via deployment tools. Also every user who wants to submit should be listed in this config (correct me if I am wrong here). 3. I understand security should be closed by default . But here in this case we are just blocking everyone to submit a topology or look at topology metrics . Ideally we should provide wildcard support and make it default or at least have that option to user. Wildcard should be that a given user can impersonate user x . So I can just add config like any user can impersonate user x. The problem I am seeing here in case of Ambari we've a storm view that will send requests through Ambari proxy server. In this case it will send user "harsha" is trying to impersonate user "ambari-server1" (user x in above example). With current implementation, any user who is trying to access their topology needs to be added to the nimbus.impersonation.acl along with hostname from which they might be querying etc.. in a hosted platform this going to be harder as we keep adding users to the config. 
> UI tries to impersonate wrong user when getting topology conf for > authorization, impersonation is allowed by default > > > Key: STORM-1096 > URL: https://issues.apache.org/jira/browse/STORM-1096 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 0.10.0 >Reporter: Robert Joseph Evans >Assignee: Robert Joseph Evans >Priority: Blocker > Fix For: 0.10.0 > > > We have started using 0.10.0 under load and found a few issues around the UI > and impersonation. > The UI, when trying to connect to nimbus, will impersonate other users. > Nimbus by default allows impersonation and just outputs a warning message > that it is allowed. We really should default to not allowing impersonation. > Having the authorizer configured by default does not hurt when running > insecure because impersonation is not possible, but when security is enabled, > if someone forgets to set this config we are now insecure by default. > If you do set all of that up correctly, the UI can now impersonate the wrong > user when connecting to nimbus. > The UI decides which user to impersonate by pulling it from the request > context. The requestContext is populated from the HttpRequest when > assert-authorized-user is called. assert-authorized-user takes a > topology-conf as a parameter. The only way to get this topology conf is to > talk to nimbus, which will get the wrong user because the request context has > not been populated yet. > This just becomes a huge pain for users, who way too often will not be able to > see pages on the UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1925) Nimbus fails to start in secure mode
Sriharsha Chintalapani created STORM-1925: - Summary: Nimbus fails to start in secure mode Key: STORM-1925 URL: https://issues.apache.org/jira/browse/STORM-1925 Project: Apache Storm Issue Type: Bug Reporter: Sriharsha Chintalapani Assignee: Jungtaek Lim Priority: Critical We are noticing a failure in a secure cluster: nimbus fails to start with
{code}
2016-06-23 06:43:48.874 o.a.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException
	at org.apache.storm.security.auth.authorizer.SimpleACLAuthorizer.permit(SimpleACLAuthorizer.java:114)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
	at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
	at org.apache.storm.daemon.nimbus$check_authorization_BANG_.invoke(nimbus.clj:1047)
	at org.apache.storm.daemon.nimbus$check_authorization_BANG_.invoke(nimbus.clj:1051)
	at org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__11183.getClusterInfo(nimbus.clj:1772)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
	at clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:313)
	at org.apache.storm.daemon.nimbus$send_cluster_metrics_to_executors.invoke(nimbus.clj:1393)
	at org.apache.storm.daemon.nimbus$fn__11394$exec_fn__3529__auto11395$fn__11421.invoke(nimbus.clj:2254)
	at org.apache.storm.timer$schedule_recurring$this__2156.invoke(timer.clj:105)
	at org.apache.storm.timer$mk_timer$fn__2139$fn__2140.invoke(timer.clj:50)
	at org.apache.storm.timer$mk_timer$fn__2139.invoke(timer.clj:42)
	at clojure.lang.AFn.run(AFn.java:22)
	at java.lang.Thread.run(Thread.java:745)
2016-06-23 06:43:48.877 o.a.s.util [ERROR] Halting process: ("Error when processing an event")
java.lang.RuntimeException: ("Error when processing an event")
	at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341)
	at clojure.lang.RestFn.invoke(RestFn.java:423)
	at org.apache.storm.daemon.nimbus$nimbus_data$fn__10332.invoke(nimbus.clj:205)
	at org.apache.storm.timer$mk_timer$fn__2139$fn__2140.invoke(timer.clj:71)
	at org.apache.storm.timer$mk_timer$fn__2139.invoke(timer.clj:42)
	at clojure.lang.AFn.run(AFn.java:22)
	at java.lang.Thread.run(Thread.java:745)
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
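The trace shows SimpleACLAuthorizer.permit dereferencing a per-topology conf while handling getClusterInfo, a cluster-level call that carries no topology conf at all. A hedged sketch of the kind of null guard that avoids this NPE — class and method names here are illustrative stand-ins, not the actual Storm code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Illustrative sketch: default a null per-topology conf to an empty map
// before any lookup, so cluster-level operations (which have no topology
// conf) fall through to the admin/owner checks instead of throwing an NPE.
public class NullSafePermit {
    static boolean permit(Map<String, Object> topoConf, String user, List<String> admins) {
        Map<String, Object> conf =
            topoConf == null ? Collections.emptyMap() : topoConf;  // the guard
        if (admins.contains(user)) return true;
        Object owner = conf.get("topology.submitter.user");
        return user != null && user.equals(owner);
    }

    public static void main(String[] args) {
        // getClusterInfo-style call: no topology conf, no NPE
        System.out.println(permit(null, "storm", Arrays.asList("storm")));
    }
}
```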
[jira] [Updated] (STORM-1893) Support OpenTSDB for storing timeseries data.
[ https://issues.apache.org/jira/browse/STORM-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1893: -- Fix Version/s: 1.1.0 2.0.0 > Support OpenTSDB for storing timeseries data. > - > > Key: STORM-1893 > URL: https://issues.apache.org/jira/browse/STORM-1893 > Project: Apache Storm > Issue Type: New Feature >Reporter: Satish Duggana >Assignee: Satish Duggana > Fix For: 2.0.0, 1.1.0 > > > - Implement openTSDB bolt to store timeseries data. > - Trident implementation to store timeseries data in openTSDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1885) python script for squashing and merging prs
Sriharsha Chintalapani created STORM-1885: - Summary: python script for squashing and merging prs Key: STORM-1885 URL: https://issues.apache.org/jira/browse/STORM-1885 Project: Apache Storm Issue Type: Task Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1771) HiveState should flushAndClose before closing old or idle Hive connections
[ https://issues.apache.org/jira/browse/STORM-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1771: -- Fix Version/s: 1.1.0 1.0.2 > HiveState should flushAndClose before closing old or idle Hive connections > -- > > Key: STORM-1771 > URL: https://issues.apache.org/jira/browse/STORM-1771 > Project: Apache Storm > Issue Type: Bug > Components: storm-hive >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Critical > Fix For: 1.0.2, 1.1.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
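The shape of the fix the title describes — flush buffered transactions before closing old or idle connections so in-flight records are not dropped — can be sketched as follows. The Writer interface is a hypothetical stand-in for storm-hive's HiveWriter, and the retire loop is illustrative, not the actual HiveState code:

```java
import java.util.Iterator;
import java.util.Map;

// Illustrative sketch: when retiring writers that are too old or idle,
// call flush() before close() ("flushAndClose") so buffered records reach
// Hive instead of being discarded with the connection.
public class RetireIdleWriters {
    interface Writer { void flush(); void close(); long lastUsedMillis(); }

    static int retireIdle(Map<String, Writer> writers, long now, long idleTimeoutMs) {
        int retired = 0;
        Iterator<Map.Entry<String, Writer>> it = writers.entrySet().iterator();
        while (it.hasNext()) {
            Writer w = it.next().getValue();
            if (now - w.lastUsedMillis() > idleTimeoutMs) {
                w.flush();   // flushAndClose: flush first...
                w.close();   // ...then close
                it.remove();
                retired++;
            }
        }
        return retired;
    }
}
```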
[jira] [Resolved] (STORM-1164) Code cleanup for typos, warnings and conciseness
[ https://issues.apache.org/jira/browse/STORM-1164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1164. --- Resolution: Fixed > Code cleanup for typos, warnings and conciseness > > > Key: STORM-1164 > URL: https://issues.apache.org/jira/browse/STORM-1164 > Project: Apache Storm > Issue Type: Improvement >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas >Priority: Minor > Fix For: 1.0.0 > > > Cleaning up the following: > - Typos > - Javadoc > - Type inference > - Unnecessary variable initialization > - Simplified if statements > This is a mechanical cleanup suggested by the IDE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1352) Trident should support writing to multiple Kafka clusters
[ https://issues.apache.org/jira/browse/STORM-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1352. --- Resolution: Fixed > Trident should support writing to multiple Kafka clusters > - > > Key: STORM-1352 > URL: https://issues.apache.org/jira/browse/STORM-1352 > Project: Apache Storm > Issue Type: Improvement >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 1.0.0 > > > Currently it is impossible to instantiate two instances of the > {{TridentKafkaState}} class that write to different Kafka clusters. This is > because {{TridentKafkaState}} obtains the location of the Kafka > producer from configuration. Multiple instances can only get the same > configuration in the {{prepare()}} method. > This jira proposes to introduce a configuration class like > {{TridentKafkaConfig}} to allow multiple instances of {{TridentKafkaState}} > to write to different Kafka clusters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1352) Trident should support writing to multiple Kafka clusters
[ https://issues.apache.org/jira/browse/STORM-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1352: -- Fix Version/s: 1.0.0 > Trident should support writing to multiple Kafka clusters > - > > Key: STORM-1352 > URL: https://issues.apache.org/jira/browse/STORM-1352 > Project: Apache Storm > Issue Type: Improvement >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 1.0.0 > > > Currently it is impossible to instantiate two instances of the > {{TridentKafkaState}} class that write to different Kafka clusters. This is > because {{TridentKafkaState}} obtains the location of the Kafka > producer from configuration. Multiple instances can only get the same > configuration in the {{prepare()}} method. > This jira proposes to introduce a configuration class like > {{TridentKafkaConfig}} to allow multiple instances of {{TridentKafkaState}} > to write to different Kafka clusters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1164) Code cleanup for typos, warnings and conciseness
[ https://issues.apache.org/jira/browse/STORM-1164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1164: -- Fix Version/s: 1.0.0 > Code cleanup for typos, warnings and conciseness > > > Key: STORM-1164 > URL: https://issues.apache.org/jira/browse/STORM-1164 > Project: Apache Storm > Issue Type: Improvement >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas >Priority: Minor > Fix For: 1.0.0 > > > Cleaning up the following: > - Typos > - Javadoc > - Type inference > - Unnecessary variable initialization > - Simplified if statements > This is a mechanical cleanup suggested by the IDE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1765) KafkaBolt with new producer api should be part of storm-kaka-client
[ https://issues.apache.org/jira/browse/STORM-1765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288094#comment-15288094 ] Sriharsha Chintalapani commented on STORM-1765: --- [~supermonk] You can use the existing storm-kafka KafkaBolt, which uses the new producer API. You should configure SSL and pass the right configs to enable SSL in the Kafka producer. Follow the doc here: http://kafka.apache.org/documentation.html#security_ssl . > KafkaBolt with new producer api should be part of storm-kaka-client > --- > > Key: STORM-1765 > URL: https://issues.apache.org/jira/browse/STORM-1765 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Hugo Louro > > During the discussion of storm-kaka-client we agreed on following > "We talked about this some more and it seems to me it would make more sense > to leave this new spout in storm-kafka-client (or whatever you want to call > it) and move the KafkaBolt which uses the new producer api over here also. > That way this component only needs to depend on the new kafka-clients java > api and not on the entire scala kafka core. We can make the old storm-kafka > depend on this component so it still picks up the bolt so if anyone is using > that its still works. We can deprecate the old KafkaSpout but keep it around > for people using older versions of Kafka - tgravescs" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1599) Kafka dependencies all marked as provided (so storm-starter does not run)
[ https://issues.apache.org/jira/browse/STORM-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286777#comment-15286777 ] Sriharsha Chintalapani commented on STORM-1599: --- [~revans2] sure, that works. But we are inconsistent with this approach across other connectors, and we are packaging external dependencies into connectors as well. If the issue is only with running examples, then we should include it in the examples module rather than the other way around. > Kafka dependencies all marked as provided (so storm-starter does not run) > - > > Key: STORM-1599 > URL: https://issues.apache.org/jira/browse/STORM-1599 > Project: Apache Storm > Issue Type: Bug > Components: examples, Flux, storm-kafka >Affects Versions: 0.10.0, 1.0.0, 2.0.0 >Reporter: Robert Joseph Evans >Assignee: Hugo Louro > > When we mark a dependency as provided it tells the shade and assembly > plugins not to include this particular dependency in the uber topology jar > because it will be {provided} on the class path by the system. > We have been doing this for all of our kafka dependencies incorrectly. This > means that storm-starter does not have any version of kafka packaged in the > resulting jar, and any example that uses kafka, TridentKafkaWordCount, will > fail with missing class errors. > storm-starter/pom.xml should change its dependency on storm-kafka to be > compile, and it should delete dependencies on kafka and kafka-clients as > those should come from storm-kafka as transitive dependencies. > the main pom.xml should not have kafka-clients marked as provided in the > dependency management section. > storm-kafka should remove its provided tag on kafka, and flux examples + > storm-sql-kafka should remove dependencies on kafka and kafka-clients, and > storm-kafka should not be marked as provided. 
> the flux and sql code I am not as familiar with, but looking at them, and > running `mvn dependency:tree` and `mvn dependency:analyze` it looks like -- This message was sent by Atlassian JIRA (v6.3.4#6332)
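The scope changes described above can be sketched as a pom fragment. The coordinates follow the storm-kafka naming in this thread, but versions and placement are illustrative, not the exact patch:

```xml
<!-- storm-starter/pom.xml: depend on storm-kafka at compile scope so the
     shade/assembly plugins package it (and kafka/kafka-clients as
     transitive dependencies) into the uber topology jar. -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
</dependency>

<!-- By contrast, storm-core stays provided: the cluster supplies it on the
     class path, so it must NOT be shaded into the topology jar. -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>${project.version}</version>
  <scope>provided</scope>
</dependency>
```

The rule of thumb the ticket applies: provided is only for dependencies the runtime actually puts on the class path; anything else a topology needs must ride along at compile scope.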
[jira] [Created] (STORM-1843) Unified API to address micro-batching and per tuple use cases
Sriharsha Chintalapani created STORM-1843: - Summary: Unified API to address micro-batching and per tuple use cases Key: STORM-1843 URL: https://issues.apache.org/jira/browse/STORM-1843 Project: Apache Storm Issue Type: Improvement Reporter: Sriharsha Chintalapani -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1599) Kafka dependencies all marked as provided (so storm-starter does not run)
[ https://issues.apache.org/jira/browse/STORM-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285747#comment-15285747 ] Sriharsha Chintalapani commented on STORM-1599: --- [~revans2] I missed a few comments on the storm-kafka-client PR about taking kafka-clients out of provided scope. This would limit users who want to include a kafka-clients version other than the one we are shipping. We mark all the dependencies of all the connectors we have as provided. If we want to run examples and other modules, we should include kafka-clients there and mark the storm-kafka-client module as provided. > Kafka dependencies all marked as provided (so storm-starter does not run) > - > > Key: STORM-1599 > URL: https://issues.apache.org/jira/browse/STORM-1599 > Project: Apache Storm > Issue Type: Bug > Components: examples, Flux, storm-kafka >Affects Versions: 0.10.0, 1.0.0, 2.0.0 >Reporter: Robert Joseph Evans >Assignee: Hugo Louro > > When we mark a dependency as provided it tells the shade and assembly > plugins not to include this particular dependency in the uber topology jar > because it will be {provided} on the class path by the system. > We have been doing this for all of our kafka dependencies incorrectly. This > means that storm-starter does not have any version of kafka packaged in the > resulting jar, and any example that uses kafka, TridentKafkaWordCount, will > fail with missing class errors. > storm-starter/pom.xml should change its dependency on storm-kafka to be > compile, and it should delete dependencies on kafka and kafka-clients as > those should come from storm-kafka as transitive dependencies. > the main pom.xml should not have kafka-clients marked as provided in the > dependency management section. > storm-kafka should remove its provided tag on kafka, and flux examples + > storm-sql-kafka should remove dependencies on kafka and kafka-clients, and > storm-kafka should not be marked as provided. 
> the flux and sql code I am not as familiar with, but looking at them, and > running `mvn dependency:tree` and `mvn dependency:analyze` it looks like -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1839) Kinesis Spout
[ https://issues.apache.org/jira/browse/STORM-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285200#comment-15285200 ] Sriharsha Chintalapani commented on STORM-1839: --- [~erikdw] "When a Kinesis Stream is resharded, "storm rebalance" can be invoked to refresh the shard list and distribute the latest shards across the Spout tasks." My issue is with the above statement. Let's say you have 5 spout executors reading from 10 Kinesis shards; if you add 2 more shards, it looks like one needs to rebalance just to get the new shard list. In Kafka we don't need to do that; new partitions will be assigned in round-robin fashion to existing tasks. > Kinesis Spout > - > > Key: STORM-1839 > URL: https://issues.apache.org/jira/browse/STORM-1839 > Project: Apache Storm > Issue Type: Improvement >Reporter: Sriharsha Chintalapani >Assignee: Priyank Shah > > As Storm is increasingly used in cloud environments, it would be great to have a > Kinesis Spout integration in Apache Storm. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
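The round-robin behaviour described above can be sketched in a few lines: partition (or shard) i goes to task i % numTasks, so newly added partitions land on existing tasks at the next refresh, with no topology rebalance. This is a generic illustration of the scheme, not the storm-kafka assignment code:

```java
import java.util.ArrayList;
import java.util.List;

// Round-robin assignment sketch: partition p -> task (p % numTasks).
// Growing the partition count only appends new entries to existing tasks;
// previously assigned partitions keep their tasks.
public class RoundRobinAssign {
    static List<List<Integer>> assign(int numPartitions, int numTasks) {
        List<List<Integer>> byTask = new ArrayList<>();
        for (int t = 0; t < numTasks; t++) byTask.add(new ArrayList<>());
        for (int p = 0; p < numPartitions; p++) byTask.get(p % numTasks).add(p);
        return byTask;
    }

    public static void main(String[] args) {
        // 10 shards over 5 tasks, then 12 shards after a reshard:
        // the 2 new shards (10, 11) land on tasks 0 and 1 automatically.
        System.out.println(assign(10, 5));
        System.out.println(assign(12, 5));
    }
}
```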
[jira] [Commented] (STORM-1839) Kinesis Spout
[ https://issues.apache.org/jira/browse/STORM-1839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285144#comment-15285144 ] Sriharsha Chintalapani commented on STORM-1839: --- I've looked into the code, and it looks like it's inspired a bit too much by the old kafka-spout, which is another issue with it. Another main motivation to rewrite it: if they create more partitions in their Kinesis stream, one has to manually rebalance, which is not good either. Given that there isn't much interest shown from their side, it's better to rewrite and implement the spout with the new Kinesis API than to try to get this in and keep fixing the bugs and missing pieces. > Kinesis Spout > - > > Key: STORM-1839 > URL: https://issues.apache.org/jira/browse/STORM-1839 > Project: Apache Storm > Issue Type: Improvement >Reporter: Sriharsha Chintalapani >Assignee: Priyank Shah > > As Storm is increasingly used in cloud environments, it would be great to have a > Kinesis Spout integration in Apache Storm. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1840) LocalCluster is not properly shutting down
Sriharsha Chintalapani created STORM-1840: - Summary: LocalCluster is not properly shutting down Key: STORM-1840 URL: https://issues.apache.org/jira/browse/STORM-1840 Project: Apache Storm Issue Type: Bug Reporter: Sriharsha Chintalapani In 1.0, after the blobstore introduction, LocalCluster shutdown is not going through cleanly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1757) Apache Beam Runner for Storm
[ https://issues.apache.org/jira/browse/STORM-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284720#comment-15284720 ] Sriharsha Chintalapani commented on STORM-1757: --- [~revans2] can you answer my earlier question about having one API do both per-tuple and batching? With your proposal I can see we are adding more layering and more APIs, but it doesn't solve the problem that users face today. When they come to Storm to develop their application, it will be more confusing to see all these options. We can still have the storm-core API as a bottom layer without replacing anything, but have a Java 8 API to do micro-batching and per-tuple processing. All the high-level languages can integrate with the Java 8 API if some users require it. With this model there will be one API (Java 8), and if some domain-specific languages are needed we can still add them. > Apache Beam Runner for Storm > > > Key: STORM-1757 > URL: https://issues.apache.org/jira/browse/STORM-1757 > Project: Apache Storm > Issue Type: Brainstorming >Reporter: P. Taylor Goetz >Priority: Minor > > This is a call for interested parties to collaborate on an Apache Beam [1] > runner for Storm, and express their thoughts and opinions. > Given the addition of the Windowing API to Apache Storm, we should be able to > map naturally to the Beam API. If not, it may be indicative of shortcomings > of the Storm API that should be addressed. > [1] http://beam.incubator.apache.org -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1839) Kinesis Spout
Sriharsha Chintalapani created STORM-1839: - Summary: Kinesis Spout Key: STORM-1839 URL: https://issues.apache.org/jira/browse/STORM-1839 Project: Apache Storm Issue Type: Improvement Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani As Storm is increasingly used in cloud environments, it would be great to have a Kinesis Spout integration in Apache Storm. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1757) Apache Beam Runner for Storm
[ https://issues.apache.org/jira/browse/STORM-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15282988#comment-15282988 ] Sriharsha Chintalapani commented on STORM-1757: --- [~revans2] [~arunmahadevan] I agree it's a larger effort, but just adding a Beam runner at this point will only add one more API that one can use to write a topology. We shouldn't keep it delayed just because it's a larger effort. I am of the opinion that it will help adoption of Storm, make things easier for existing users, and in general keep the framework in better shape with a single API. If the Beam API has gaps, let's look into whether a Java 8 API can answer the common API needs. > Apache Beam Runner for Storm > > > Key: STORM-1757 > URL: https://issues.apache.org/jira/browse/STORM-1757 > Project: Apache Storm > Issue Type: Brainstorming >Reporter: P. Taylor Goetz >Priority: Minor > > This is a call for interested parties to collaborate on an Apache Beam [1] > runner for Storm, and express their thoughts and opinions. > Given the addition of the Windowing API to Apache Storm, we should be able to > map naturally to the Beam API. If not, it may be indicative of shortcomings > of the Storm API that should be addressed. > [1] http://beam.incubator.apache.org -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1757) Apache Beam Runner for Storm
[ https://issues.apache.org/jira/browse/STORM-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281805#comment-15281805 ] Sriharsha Chintalapani commented on STORM-1757: --- One of the main things I am interested in as part of this work is unifying the APIs. I can start another discussion around it, but here are my early thoughts. Instead of having two APIs (1. Storm core, 2. Trident), we can use Beam to unify them, i.e. use the same API for per-tuple and batching instead of asking users to write two different sets of code depending on whether they want core or Trident. It's the user's choice whether they prefer throughput or latency. We can keep the core API and deprecate Trident in upcoming versions. > Apache Beam Runner for Storm > > > Key: STORM-1757 > URL: https://issues.apache.org/jira/browse/STORM-1757 > Project: Apache Storm > Issue Type: Brainstorming >Reporter: P. Taylor Goetz >Priority: Minor > > This is a call for interested parties to collaborate on an Apache Beam [1] > runner for Storm, and express their thoughts and opinions. > Given the addition of the Windowing API to Apache Storm, we should be able to > map naturally to the Beam API. If not, it may be indicative of shortcomings > of the Storm API that should be addressed. > [1] http://beam.incubator.apache.org -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1136) Provide a bin script to check consumer lag from KafkaSpout to Kafka topic offsets
[ https://issues.apache.org/jira/browse/STORM-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281143#comment-15281143 ] Sriharsha Chintalapani commented on STORM-1136: --- [~pshah] [~kabhwan] I guess what you mean is to use the metric info in the KafkaSpout to push the metric as more of a default for a topology. I think this is a better approach than the one I had earlier. > Provide a bin script to check consumer lag from KafkaSpout to Kafka topic > offsets > - > > Key: STORM-1136 > URL: https://issues.apache.org/jira/browse/STORM-1136 > Project: Apache Storm > Issue Type: Improvement > Components: storm-kafka >Reporter: Sriharsha Chintalapani >Assignee: Priyank Shah > > We store kafkaspout offsets in the zkroot + id path in zookeeper. Kafka provides > a utility and a protocol request to fetch the latest offsets in a topic > {code} > example: > bin/kafka-run-class.sh kafka.tools.GetOffsetShell > {code} > We should provide a way for the user to check how far the kafka spout has read > into the topic and what the lag is. If we can expose this via the UI, even better. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1136) Provide a bin script to check consumer lag from KafkaSpout to Kafka topic offsets
[ https://issues.apache.org/jira/browse/STORM-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281139#comment-15281139 ] Sriharsha Chintalapani commented on STORM-1136: --- [~kabhwan] I don't want to introduce this info into nimbus, and we don't need such changes. What I had in mind is a utils class: given the topology info along with the KafkaSpout config, that should be enough to figure out the offsets and show the lag. We already query topologySummary and other details from the UI. We can use that to figure out the latest offsets from Kafka and the topology offsets that get stored in zookeeper. > Provide a bin script to check consumer lag from KafkaSpout to Kafka topic > offsets > - > > Key: STORM-1136 > URL: https://issues.apache.org/jira/browse/STORM-1136 > Project: Apache Storm > Issue Type: Improvement > Components: storm-kafka >Reporter: Sriharsha Chintalapani >Assignee: Priyank Shah > > We store kafkaspout offsets in the zkroot + id path in zookeeper. Kafka provides > a utility and a protocol request to fetch the latest offsets in a topic > {code} > example: > bin/kafka-run-class.sh kafka.tools.GetOffsetShell > {code} > We should provide a way for the user to check how far the kafka spout has read > into the topic and what the lag is. If we can expose this via the UI, even better. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
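The computation such a utility would perform is simple: per-partition lag = latest offset in the topic minus the offset the spout has committed under zkroot + id. The sketch below takes the offsets as plain maps; a real tool would fetch them from Kafka and ZooKeeper, which is omitted here:

```java
import java.util.HashMap;
import java.util.Map;

// Lag sketch: for each partition, lag = latest broker offset minus the
// offset the KafkaSpout has committed. A partition with no committed
// offset is treated as fully unread (committed = 0).
public class SpoutLag {
    static Map<Integer, Long> lag(Map<Integer, Long> latest, Map<Integer, Long> committed) {
        Map<Integer, Long> out = new HashMap<>();
        for (Map.Entry<Integer, Long> e : latest.entrySet()) {
            long done = committed.getOrDefault(e.getKey(), 0L);
            out.put(e.getKey(), e.getValue() - done);
        }
        return out;
    }
}
```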
[jira] [Created] (STORM-1771) HiveState should flushAndClose before closing old or idle Hive connections
Sriharsha Chintalapani created STORM-1771: - Summary: HiveState should flushAndClose before closing old or idle Hive connections Key: STORM-1771 URL: https://issues.apache.org/jira/browse/STORM-1771 Project: Apache Storm Issue Type: Bug Components: storm-hive Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1765) KafkaBolt with new producer api should be part of storm-kaka-client
Sriharsha Chintalapani created STORM-1765: - Summary: KafkaBolt with new producer api should be part of storm-kaka-client Key: STORM-1765 URL: https://issues.apache.org/jira/browse/STORM-1765 Project: Apache Storm Issue Type: Bug Reporter: Sriharsha Chintalapani Assignee: Hugo Louro During the discussion of storm-kaka-client we agreed on following "We talked about this some more and it seems to me it would make more sense to leave this new spout in storm-kafka-client (or whatever you want to call it) and move the KafkaBolt which uses the new producer api over here also. That way this component only needs to depend on the new kafka-clients java api and not on the entire scala kafka core. We can make the old storm-kafka depend on this component so it still picks up the bolt so if anyone is using that its still works. We can deprecate the old KafkaSpout but keep it around for people using older versions of Kafka - tgravescs" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1760) HiveState should retire idle or old writes with flushAndClose
Sriharsha Chintalapani created STORM-1760: - Summary: HiveState should retire idle or old writes with flushAndClose Key: STORM-1760 URL: https://issues.apache.org/jira/browse/STORM-1760 Project: Apache Storm Issue Type: Bug Components: storm-hive Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani Fix For: 1.0.0, 2.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (STORM-1757) Apache Beam Runner for Storm
[ https://issues.apache.org/jira/browse/STORM-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani reassigned STORM-1757: - Assignee: Sriharsha Chintalapani > Apache Beam Runner for Storm > > > Key: STORM-1757 > URL: https://issues.apache.org/jira/browse/STORM-1757 > Project: Apache Storm > Issue Type: Brainstorming >Reporter: P. Taylor Goetz >Assignee: Sriharsha Chintalapani >Priority: Minor > > This is a call for interested parties to collaborate on an Apache Beam [1] > runner for Storm, and express their thoughts and opinions. > Given the addition of the Windowing API to Apache Storm, we should be able to > map naturally to the Beam API. If not, it may be indicative of shortcomings > of the Storm API that should be addressed. > [1] http://beam.incubator.apache.org -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1757) Apache Beam Runner for Storm
[ https://issues.apache.org/jira/browse/STORM-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267952#comment-15267952 ] Sriharsha Chintalapani commented on STORM-1757: --- I am working on it here https://issues.apache.org/jira/browse/BEAM-216. Will share the design docs. > Apache Beam Runner for Storm > > > Key: STORM-1757 > URL: https://issues.apache.org/jira/browse/STORM-1757 > Project: Apache Storm > Issue Type: Brainstorming >Reporter: P. Taylor Goetz >Assignee: Sriharsha Chintalapani >Priority: Minor > > This is a call for interested parties to collaborate on an Apache Beam [1] > runner for Storm, and express their thoughts and opinions. > Given the addition of the Windowing API to Apache Storm, we should be able to > map naturally to the Beam API. If not, it may be indicative of shortcomings > of the Storm API that should be addressed. > [1] http://beam.incubator.apache.org -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1747) storm-kafka-client KafkaSpoutRetryExponentialBackoff throws exception
[ https://issues.apache.org/jira/browse/STORM-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265078#comment-15265078 ] Sriharsha Chintalapani commented on STORM-1747: --- [~flisky] can you add kafka version details as well. > storm-kafka-client KafkaSpoutRetryExponentialBackoff throws exception > - > > Key: STORM-1747 > URL: https://issues.apache.org/jira/browse/STORM-1747 > Project: Apache Storm > Issue Type: Bug >Reporter: Jifeng Yin >Assignee: Hugo Louro > > {code} > java.lang.ClassCastException: org.apache.kafka.common.TopicPartition cannot > be cast to java.lang.Comparable > at java.util.TreeMap.compare(TreeMap.java:1188) ~[?:1.7.0_80] > at java.util.TreeMap.put(TreeMap.java:531) ~[?:1.7.0_80] > at java.util.TreeSet.add(TreeSet.java:255) ~[?:1.7.0_80] > at > org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff.retriableTopicPartitions(KafkaSpoutRetryExponentialBackoff.java:170) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:245) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:236) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:202) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.daemon.executor$fn__7877$fn__7892$fn__7923.invoke(executor.clj:647) > ~[storm-core-1.0.0.jar:1.0.0] > at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) > [storm-core-1.0.0.jar:1.0.0] > at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?] > at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1747) storm-kafka-client KafkaSpoutRetryExponentialBackoff throws exception
[ https://issues.apache.org/jira/browse/STORM-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1747: -- Assignee: Hugo Louro > storm-kafka-client KafkaSpoutRetryExponentialBackoff throws exception > - > > Key: STORM-1747 > URL: https://issues.apache.org/jira/browse/STORM-1747 > Project: Apache Storm > Issue Type: Bug >Reporter: Jifeng Yin >Assignee: Hugo Louro > > {code} > java.lang.ClassCastException: org.apache.kafka.common.TopicPartition cannot > be cast to java.lang.Comparable > at java.util.TreeMap.compare(TreeMap.java:1188) ~[?:1.7.0_80] > at java.util.TreeMap.put(TreeMap.java:531) ~[?:1.7.0_80] > at java.util.TreeSet.add(TreeSet.java:255) ~[?:1.7.0_80] > at > org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff.retriableTopicPartitions(KafkaSpoutRetryExponentialBackoff.java:170) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:245) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:236) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:202) > ~[storm-kafka-client-1.0.0.jar:1.0.0] > at > org.apache.storm.daemon.executor$fn__7877$fn__7892$fn__7923.invoke(executor.clj:647) > ~[storm-core-1.0.0.jar:1.0.0] > at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) > [storm-core-1.0.0.jar:1.0.0] > at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?] > at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
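The stack trace above comes down to plain JDK behavior: a TreeSet built without a Comparator casts its elements to Comparable when inserting, and org.apache.kafka.common.TopicPartition does not implement Comparable. A stdlib-only sketch (Point here is a hypothetical stand-in for TopicPartition, not the Kafka class) reproduces the failure and shows the usual fix of supplying an explicit Comparator:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class TreeSetComparableDemo {
    // Stand-in for org.apache.kafka.common.TopicPartition: a value type
    // that does NOT implement Comparable.
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        boolean threw = false;
        TreeSet<Point> bad = new TreeSet<>();
        try {
            // With no Comparator, TreeMap.put casts the key to Comparable,
            // so inserting non-Comparable elements throws ClassCastException.
            bad.add(new Point(1, 1));
            bad.add(new Point(2, 2));
        } catch (ClassCastException e) {
            threw = true;
        }

        // Supplying an explicit Comparator avoids the cast entirely.
        TreeSet<Point> good = new TreeSet<>(
                Comparator.<Point>comparingInt(p -> p.x).thenComparingInt(p -> p.y));
        good.add(new Point(1, 1));
        good.add(new Point(2, 2));

        System.out.println(threw + " " + good.size()); // prints "true 2"
    }
}
```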
[jira] [Updated] (STORM-1736) Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api.
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1736: -- Description: 1.x-branch failing with following error [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project storm-kafka: Compilation failure [ERROR] /Users/harsha/code/harshach/incubator-storm/external/storm-kafka/src/test/org/apache/storm/kafka/KafkaTestBroker.java:[70,16] constructor KafkaConfig in class kafka.server.KafkaConfig cannot be applied to given types; [ERROR] required: java.util.Map,boolean [ERROR] found: java.util.Properties [ERROR] reason: actual and formal argument lists differ in length [ERROR] -> [Help 1] [ERROR] > Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api. > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > > 1.x-branch failing with following error > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile > (default-testCompile) on project storm-kafka: Compilation failure > [ERROR] > /Users/harsha/code/harshach/incubator-storm/external/storm-kafka/src/test/org/apache/storm/kafka/KafkaTestBroker.java:[70,16] > constructor KafkaConfig in class kafka.server.KafkaConfig cannot be applied > to given types; > [ERROR] required: java.util.Map,boolean > [ERROR] found: java.util.Properties > [ERROR] reason: actual and formal argument lists differ in length > [ERROR] -> [Help 1] > [ERROR] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1736) Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api.
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261428#comment-15261428 ] Sriharsha Chintalapani commented on STORM-1736: --- [~ptgoetz] updated the title to reflect the intended change. I hope this no longer requires reverse engineering the code to understand the intent of the JIRA. > Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api. > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1736) Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api.
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1736: -- Summary: Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api. (was: Change KafkaTestBroker.) > Change KafkaTestBroker.buildKafkaConfig to new KafkaConfig api. > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1736) Change KafkaTestBroker.
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1736: -- Summary: Change KafkaTestBroker. (was: Change storm-kafka KafkaTestBroker to newer api) > Change KafkaTestBroker. > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1736) Change storm-kafka KafkaTestBroker to newer api
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261422#comment-15261422 ] Sriharsha Chintalapani commented on STORM-1736: --- [~ptgoetz] I am not sure what context you are missing here. The title is good enough, and I posted clear enough details in my comment above. It's a one-line fix which shouldn't require a JIRA, let alone a long explanation of the fix. If someone is not willing to look at the code, they shouldn't worry about the JIRA at all. > Change storm-kafka KafkaTestBroker to newer api > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1736) Change storm-kafka KafkaTestBroker to newer api
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261319#comment-15261319 ] Sriharsha Chintalapani commented on STORM-1736: --- [~ptgoetz] https://github.com/apache/storm/blob/1.x-branch/external/storm-kafka/src/test/org/apache/storm/kafka/KafkaTestBroker.java#L70 We need to change the above line to return KafkaConfig.fromProps(p). > Change storm-kafka KafkaTestBroker to newer api > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
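The one-line fix above can be sketched as follows. KafkaConfig means kafka.server.KafkaConfig, which needs the Kafka broker jar, so the before/after construction is shown in comments; the property names here are typical broker settings for illustration, not copied from the Storm test:

```java
import java.util.Properties;

public class BuildKafkaConfigFix {
    // Builds the broker Properties that KafkaTestBroker would hand to KafkaConfig.
    static Properties buildProperties(int port, String zkConnect) {
        Properties p = new Properties();
        p.setProperty("broker.id", "0");
        p.setProperty("port", String.valueOf(port));
        p.setProperty("zookeeper.connect", zkConnect);
        return p;
    }

    public static void main(String[] args) {
        Properties p = buildProperties(9092, "localhost:2181");
        // Old (no longer compiles against newer Kafka, whose constructor
        // takes java.util.Map plus a boolean instead of Properties):
        //   return new KafkaConfig(p);
        // New:
        //   return KafkaConfig.fromProps(p);
        System.out.println(p.getProperty("port")); // prints "9092"
    }
}
```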
[jira] [Created] (STORM-1737) storm-kafka-client has compilation errors with Apache Kafka 0.10 branch
Sriharsha Chintalapani created STORM-1737: - Summary: storm-kafka-client has compilation errors with Apache Kafka 0.10 branch Key: STORM-1737 URL: https://issues.apache.org/jira/browse/STORM-1737 Project: Apache Storm Issue Type: Bug Reporter: Sriharsha Chintalapani Assignee: Hugo Louro Priority: Critical When compiled against the Apache Kafka 0.10 branch, we get the following errors: {code} [ERROR] /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[163,51] incompatible types: org.apache.kafka.common.TopicPartition cannot be converted to java.util.Collection [ERROR] /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[166,45] incompatible types: org.apache.kafka.common.TopicPartition cannot be converted to java.util.Collection [ERROR] /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[175,51] incompatible types: org.apache.kafka.common.TopicPartition cannot be converted to java.util.Collection [ERROR] /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[177,45] incompatible types: org.apache.kafka.common.TopicPartition cannot be converted to java.util.Collection [ERROR] /Users/harsha/code/hwx/storm/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java:[252,41] incompatible types: org.apache.kafka.common.TopicPartition cannot be converted to java.util.Collection {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
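The errors above reflect a signature change in the 0.10 consumer: methods such as seekToBeginning moved from varargs TopicPartition... to Collection&lt;TopicPartition&gt;, so call sites passing a single partition stop compiling. A self-contained sketch (Partition and seekToBeginning are hypothetical stand-ins for the Kafka types, not the real API) shows the usual adaptation at the call site:

```java
import java.util.Collection;
import java.util.Collections;

public class VarargsToCollection {
    // Stand-in for org.apache.kafka.common.TopicPartition.
    static class Partition {
        final String topic; final int id;
        Partition(String topic, int id) { this.topic = topic; this.id = id; }
    }

    // Stand-in for the new-style consumer method: it now takes a Collection
    // where the older API accepted varargs (Partition...).
    static int seekToBeginning(Collection<Partition> partitions) {
        return partitions.size();
    }

    public static void main(String[] args) {
        Partition tp = new Partition("events", 0);
        // seekToBeginning(tp);                            // old varargs call: no longer compiles
        int n = seekToBeginning(Collections.singleton(tp)); // wrap the single element instead
        System.out.println(n); // prints "1"
    }
}
```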
[jira] [Assigned] (STORM-1736) Change storm-kafka KafkaTestBroker to newer api
[ https://issues.apache.org/jira/browse/STORM-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani reassigned STORM-1736: - Assignee: Sriharsha Chintalapani > Change storm-kafka KafkaTestBroker to newer api > --- > > Key: STORM-1736 > URL: https://issues.apache.org/jira/browse/STORM-1736 > Project: Apache Storm > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1736) Change storm-kafka KafkaTestBroker to newer api
Sriharsha Chintalapani created STORM-1736: - Summary: Change storm-kafka KafkaTestBroker to newer api Key: STORM-1736 URL: https://issues.apache.org/jira/browse/STORM-1736 Project: Apache Storm Issue Type: Bug Reporter: Sriharsha Chintalapani -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1722) Improve Storm UI
Sriharsha Chintalapani created STORM-1722: - Summary: Improve Storm UI Key: STORM-1722 URL: https://issues.apache.org/jira/browse/STORM-1722 Project: Apache Storm Issue Type: Improvement Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1695) Create trident spout that uses the new kafka consumer API
[ https://issues.apache.org/jira/browse/STORM-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1695: -- Assignee: Hugo Louro > Create trident spout that uses the new kafka consumer API > - > > Key: STORM-1695 > URL: https://issues.apache.org/jira/browse/STORM-1695 > Project: Apache Storm > Issue Type: Improvement > Components: storm-kafka >Reporter: Thomas Graves >Assignee: Hugo Louro > > In storm-822 we added a new kafka spout > (org.apache.storm.kafka.spout.KafkaSpout) that uses the new consumer Api in > Kafka 0.9. We decided in that one to handle the Trident support separately. > So this jira is to add Trident support for it similar to what > OpaqueTridentKafkaSpout does for the kafka old consumer api. We need to > support the new consumer api to allow access to a secure Kafka cluster which > was added in Kafka 0.9. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1706) Add storm-env.ini and RELEASE to storm-dist assembly
Sriharsha Chintalapani created STORM-1706: - Summary: Add storm-env.ini and RELEASE to storm-dist assembly Key: STORM-1706 URL: https://issues.apache.org/jira/browse/STORM-1706 Project: Apache Storm Issue Type: Bug Reporter: Sriharsha Chintalapani Assignee: Priyank Shah Fix For: 1.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-822) As a storm developer I’d like to use the new kafka consumer API (0.8.3) to reduce dependencies and use long term supported kafka apis
[ https://issues.apache.org/jira/browse/STORM-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-822. -- Resolution: Fixed Fix Version/s: 1.0.0 This jira work is merged into master and 1.x branches. Closing this as resolved. > As a storm developer I’d like to use the new kafka consumer API (0.8.3) to > reduce dependencies and use long term supported kafka apis > -- > > Key: STORM-822 > URL: https://issues.apache.org/jira/browse/STORM-822 > Project: Apache Storm > Issue Type: Story > Components: storm-kafka >Reporter: Thomas Becker >Assignee: Hugo Louro > Fix For: 1.0.0, 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-822) As a storm developer I’d like to use the new kafka consumer API (0.8.3) to reduce dependencies and use long term supported kafka apis
[ https://issues.apache.org/jira/browse/STORM-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-822: - Assignee: Hugo Louro (was: Thomas Graves) > As a storm developer I’d like to use the new kafka consumer API (0.8.3) to > reduce dependencies and use long term supported kafka apis > -- > > Key: STORM-822 > URL: https://issues.apache.org/jira/browse/STORM-822 > Project: Apache Storm > Issue Type: Story > Components: storm-kafka >Reporter: Thomas Becker >Assignee: Hugo Louro > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1694) TridentKafkaSpout using new consumer API
Sriharsha Chintalapani created STORM-1694: - Summary: TridentKafkaSpout using new consumer API Key: STORM-1694 URL: https://issues.apache.org/jira/browse/STORM-1694 Project: Apache Storm Issue Type: Bug Components: storm-kafka Reporter: Sriharsha Chintalapani Fix For: 2.0.0 As part of STORM-822 we addressed the core KafkaSpout with new consumer api. This JIRA is for trident kafkaSpout with new consumer api. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1689) set request headersize for logviewer
[ https://issues.apache.org/jira/browse/STORM-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1689: -- Summary: set request headersize for logviewer (was: set request headersize for logviewere) > set request headersize for logviewer > > > Key: STORM-1689 > URL: https://issues.apache.org/jira/browse/STORM-1689 > Project: Apache Storm > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > Fix For: 1.0.0, 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1689) set request headersize for logviewer
[ https://issues.apache.org/jira/browse/STORM-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1689: -- Assignee: Priyank Shah (was: Sriharsha Chintalapani) > set request headersize for logviewer > > > Key: STORM-1689 > URL: https://issues.apache.org/jira/browse/STORM-1689 > Project: Apache Storm > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: Sriharsha Chintalapani >Assignee: Priyank Shah > Fix For: 1.0.0, 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1689) set request headersize for logviewere
Sriharsha Chintalapani created STORM-1689: - Summary: set request headersize for logviewere Key: STORM-1689 URL: https://issues.apache.org/jira/browse/STORM-1689 Project: Apache Storm Issue Type: Bug Affects Versions: 1.0.0 Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani Fix For: 1.0.0, 2.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-1684) New KafkaSpout should provide metrics
Sriharsha Chintalapani created STORM-1684: - Summary: New KafkaSpout should provide metrics Key: STORM-1684 URL: https://issues.apache.org/jira/browse/STORM-1684 Project: Apache Storm Issue Type: Bug Components: storm-kafka Reporter: Sriharsha Chintalapani Assignee: Hugo Louro >From PR ConnieYoung "The previous KafkaSpout implementation publishes a kafkaOffset metrics to track spout lag, latest time offset and earliest time offset. " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1684) New KafkaSpout should provide metrics
[ https://issues.apache.org/jira/browse/STORM-1684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1684: -- Description: >From PR ConnieYang "The previous KafkaSpout implementation publishes a kafkaOffset metrics to track spout lag, latest time offset and earliest time offset. " was: >From PR ConnieYoung "The previous KafkaSpout implementation publishes a kafkaOffset metrics to track spout lag, latest time offset and earliest time offset. " > New KafkaSpout should provide metrics > - > > Key: STORM-1684 > URL: https://issues.apache.org/jira/browse/STORM-1684 > Project: Apache Storm > Issue Type: Bug > Components: storm-kafka >Reporter: Sriharsha Chintalapani >Assignee: Hugo Louro > > From PR ConnieYang > "The previous KafkaSpout implementation publishes a kafkaOffset metrics to > track spout lag, latest time offset and earliest time offset. " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-676) Storm Trident support for sliding/tumbling windows
[ https://issues.apache.org/jira/browse/STORM-676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-676. -- Resolution: Fixed > Storm Trident support for sliding/tumbling windows > -- > > Key: STORM-676 > URL: https://issues.apache.org/jira/browse/STORM-676 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core >Reporter: Sriharsha Chintalapani >Assignee: Satish Duggana > Fix For: 1.0.0, 2.0.0 > > Attachments: StormTrident_windowing_support-676.pdf > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1544) Document Debug/Sampling of Topologies
[ https://issues.apache.org/jira/browse/STORM-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211431#comment-15211431 ] Sriharsha Chintalapani commented on STORM-1544: --- [~arunmahadevan] any update on this doc. > Document Debug/Sampling of Topologies > - > > Key: STORM-1544 > URL: https://issues.apache.org/jira/browse/STORM-1544 > Project: Apache Storm > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: P. Taylor Goetz >Assignee: Arun Mahadevan > > Currently the topology/component sampling feature is undocumented, and likely > confusing to users (the UI includes the "Debug" and "Stop Debug", but the > functionality does not provide any indication of what it does, or how to > access the sample logs). > We should document the basic functionality and configuration, as well as how > to extend it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1638) Integration tests are failing on Windows
[ https://issues.apache.org/jira/browse/STORM-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211409#comment-15211409 ] Sriharsha Chintalapani commented on STORM-1638: --- We can do this in a minor release. Moving it out of the 1.0 release. > Integration tests are failing on Windows > > > Key: STORM-1638 > URL: https://issues.apache.org/jira/browse/STORM-1638 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.0 > Environment: Windows >Reporter: Jungtaek Lim >Priority: Critical > > Though I addressed STORM-1602, STORM-1629, STORM-1630, integration tests are > still failing from Windows. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1537) Upgrade to Kryo 3
[ https://issues.apache.org/jira/browse/STORM-1537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani resolved STORM-1537. --- Resolution: Fixed > Upgrade to Kryo 3 > - > > Key: STORM-1537 > URL: https://issues.apache.org/jira/browse/STORM-1537 > Project: Apache Storm > Issue Type: Improvement >Affects Versions: 1.0.0 >Reporter: Oscar Boykin >Assignee: Abhishek Agarwal > Fix For: 1.0.0 > > > In storm, Kryo (2.21) is used for serialization: > https://github.com/apache/storm/blob/02a44c7fc1b7b3a1571b326fde7bcae13e1b5c8d/pom.xml#L231 > The user must use the same version storm does, or there will be a java class > error at runtime. > Storm depends on a quasi-abandoned library: carbonite: > https://github.com/apache/storm/blob/02a44c7fc1b7b3a1571b326fde7bcae13e1b5c8d/pom.xml#L210 > which depends on Kryo 2.21 and Twitter chill 0.3.6: > https://github.com/sritchie/carbonite/blob/master/project.clj#L8 > Chill, currently on 0.7.3, would like to upgrade to Kryo 3.0.3: > https://github.com/twitter/chill/pull/245 > because Spark, also depending on chill, would like to upgrade for performance > improvements and bugfixes. > https://issues.apache.org/jira/browse/SPARK-11416 > Unfortunately, summingbird depends on storm: > https://github.com/twitter/summingbird/blob/develop/build.sbt#L34 > so, if chill is upgraded, and that gets on the classpath, summingbird will > break at runtime. > I propose: > 1) copy the carbonite code into storm. It is likely the only consumer. > 2) bump the storm kryo dependency after chill upgrades: recall that storm > actually depends on chill-java. A dependency that could possibly be removed > after you pull carbonite in. > 3) once a new version of storm is published, summingbird (and scalding) can > upgrade to the latest chill. > Also, I hope for: > 4) we as a JVM community get better about classpath isolation and versioning. > Diamonds like this in one big classpath make large codebases very fragile. 
[jira] [Updated] (STORM-1632) Disable event logging by default
[ https://issues.apache.org/jira/browse/STORM-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated STORM-1632: -- Fix Version/s: (was: 1.0.0) > Disable event logging by default > > > Key: STORM-1632 > URL: https://issues.apache.org/jira/browse/STORM-1632 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Reporter: Roshan Naik >Assignee: Roshan Naik >Priority: Blocker > Attachments: BasicTopology.java > > > EventLogging has performance penalty. For a simple speed of light topology > with a single instances of a spout and a bolt, disabling event logging > delivers a 7% to 9% perf improvement (with acker count =1) > Event logging can be enabled when there is need to do debug, but turned off > by default. > **Update:** with acker=0 the observed impact was much higher... **25%** > faster when event loggers = 0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
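The default-off behavior described above corresponds to running zero event logger tasks. Assuming the standard Storm 1.x key `topology.eventlogger.executors`, the setting can be controlled in storm.yaml or per topology:

```yaml
# storm.yaml (cluster default) or topology config: number of event logger tasks.
# 0 disables event logging; raise to 1 or more only while debugging a topology.
topology.eventlogger.executors: 0
```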
[jira] [Commented] (STORM-979) [storm-elasticsearch] Introduces BaseQueryFunction to query to ES while using Trident
[ https://issues.apache.org/jira/browse/STORM-979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204667#comment-15204667 ] Sriharsha Chintalapani commented on STORM-979: -- [~neo20iitkgp] assigned it to you. Go for it. > [storm-elasticsearch] Introduces BaseQueryFunction to query to ES while using > Trident > - > > Key: STORM-979 > URL: https://issues.apache.org/jira/browse/STORM-979 > Project: Apache Storm > Issue Type: Improvement > Components: storm-elasticsearch >Reporter: Jungtaek Lim >Assignee: Subhankar Biswas > > storm-elasticsearch has features on storing document, not querying something. > It would be better to have BaseQueryFunction for querying to ES and emit > matched documents, as other external modules did. -- This message was sent by Atlassian JIRA (v6.3.4#6332)