[jira] [Updated] (STORM-1557) trident get repeat data from kafka
[ https://issues.apache.org/jira/browse/STORM-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

miaoyichen updated STORM-1557:
------------------------------

Description:

When we used the Trident API and set Config.TOPOLOGY_MAX_SPOUT_PENDING greater than 1 (say 100), we found that the txids were discontinuous. This has two effects:

* When using the {color:red}OpaqueTridentKafkaSpout class{color}, {color:red}different txids got the same offset, so we read duplicate data from Kafka{color}. We printed some logs here:
{noformat}
11:10:13.516 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:96,offset:1805,nextOffset:1824]
11:10:13.567 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:142,offset:1824,nextOffset:1843]
11:10:13.619 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:97,offset:1824,nextOffset:1843]
11:10:13.670 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:98,offset:1843,nextOffset:1862]
{noformat}
Please NOTICE that ids 142 and 97 got the same Kafka offset.

* When using the {color:red}TransactionalTridentKafkaSpout class{color}, we got distinct txids and contiguous offsets, but {color:red}processing was slow{color} because of unnecessary loops in MasterBatchCoordinator.sync(). The greater Config.TOPOLOGY_MAX_SPOUT_PENDING, the slower the processing.
We printed some logs here:
{noformat}
Config.TOPOLOGY_MAX_SPOUT_PENDING=100
Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50
14:05:00.337 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:135,offset:2546,nextOffset:2565]
14:05:00.483 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:136,offset:2565,nextOffset:2584]
14:05:00.495 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:197,offset:2584,nextOffset:2603]
14:05:03.550 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:198,offset:2603,nextOffset:2622]
14:05:03.593 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:199,offset:2622,nextOffset:2641]
{noformat}
Please NOTICE the timestamps and offsets from id 136 to id 198.
{noformat}
Config.TOPOLOGY_MAX_SPOUT_PENDING=1000
Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50
11:35:36.265 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:232,offset:228,nextOffset:247]
11:35:36.305 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:233,offset:247,nextOffset:266]
11:35:36.343 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:446,offset:266,nextOffset:285]
11:35:41.345 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1266,offset:285,nextOffset:304]
11:35:47.063 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1330,offset:304,nextOffset:323]
11:35:56.221 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1447,offset:323,nextOffset:342]
{noformat}
Please notice every log line's timestamp, txid, and offset.

We think the txid assignment algorithm in the MasterBatchCoordinator class should keep txids continuous: a txid should be issued ONLY when the time window and the other conditions are ready.
{color:red}I submitted a solution in pull request #1041{color} [https://github.com/apache/storm/pull/1041]
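The discontinuous txids in the logs above can be reproduced with a toy model (illustrative Java only; the class and method names are ours, not Storm's MasterBatchCoordinator code): up to TOPOLOGY_MAX_SPOUT_PENDING transaction ids are in flight at once, and when a batch other than the oldest one finishes first, its slot is refilled with a fresh txid, leaving a hole in the sequence.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of a max-pending txid window (NOT Storm's actual code).
public class TxidWindowDemo {

    // Returns the txids left in flight after the 2nd batch is acked early.
    static List<Long> simulate(int maxPending) {
        long nextTxid = 1;
        Deque<Long> inFlight = new ArrayDeque<>();

        // Fill the window: txids 1..maxPending are all pending at once.
        while (inFlight.size() < maxPending) {
            inFlight.add(nextTxid++);
        }

        // Batch 2 finishes before batch 1, so a fresh txid is issued
        // while txid 1 is still pending: the window now has a gap, and
        // emits for these ids reach the spout out of numeric order.
        inFlight.remove(2L);
        inFlight.add(nextTxid++);

        return new ArrayList<>(inFlight);
    }

    public static void main(String[] args) {
        System.out.println(simulate(3)); // [1, 3, 4] -- txid 2 is gone
    }
}
```

With maxPending = 1 the window never holds more than one txid, which is why the problem only appears when Config.TOPOLOGY_MAX_SPOUT_PENDING is greater than 1.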
[jira] [Commented] (STORM-1511) min/max operations on trident stream
[ https://issues.apache.org/jira/browse/STORM-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151755#comment-15151755 ]

ASF GitHub Bot commented on STORM-1511:
---------------------------------------

Github user vesense commented on a diff in the pull request:
https://github.com/apache/storm/pull/1118#discussion_r53272946

--- Diff: examples/storm-starter/src/jvm/org/apache/storm/starter/spout/RandomNumberGeneratorSpout.java ---
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.storm.starter.spout;
+
+import org.apache.storm.Config;
+import org.apache.storm.task.TopologyContext;
+import org.apache.storm.trident.operation.TridentCollector;
+import org.apache.storm.trident.spout.IBatchSpout;
+import org.apache.storm.tuple.Fields;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ThreadLocalRandom;
+
+/**
+ * This spout generates batches of random whole numbers, up to the given
+ * {@code maxNumber}, for the given {@code fields}.
+ */
+public class RandomNumberGeneratorSpout implements IBatchSpout {
+    private final Fields fields;
+    private final int batchSize;
+    private final int maxNumber;
+    private final Map<Long, List<List<Object>>> batches = new HashMap<>();
+
+    public RandomNumberGeneratorSpout(Fields fields, int batchSize, int maxNumber) {
+        this.fields = fields;
+        this.batchSize = batchSize;
+        this.maxNumber = maxNumber;
+    }
+
+    @Override
+    public void open(Map conf, TopologyContext context) {
+    }
+
+    @Override
+    public void emitBatch(long batchId, TridentCollector collector) {
+        List<List<Object>> values = null;
+        if (batches.containsKey(batchId)) {
+            values = batches.get(batchId);
+        } else {
+            values = new ArrayList<>();
+            for (int i = 0; i < batchSize; i++) {
+                List<Object> numbers = new ArrayList<>();
+                for (int x = 0; x < fields.size(); x++) {
+                    numbers.add(ThreadLocalRandom.current().nextInt(0, maxNumber + 1));
+                }
+                values.add(numbers);
+            }
+            batches.put(batchId, values);
+        }
+        for (List<Object> value : values) {
+            collector.emit(value);
+        }
+    }
> min/max operations on trident stream
> ------------------------------------
>
>                 Key: STORM-1511
>                 URL: https://issues.apache.org/jira/browse/STORM-1511
>             Project: Apache Storm
>          Issue Type: Improvement
>          Components: storm-core
>            Reporter: Satish Duggana
>            Assignee: Satish Duggana
>             Fix For: 1.0.0

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
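The per-batchId caching that the spout in the diff above relies on can be shown in isolation (a sketch with illustrative names, not the Storm IBatchSpout API): a batch is generated once for its id and cached, so replaying the same batchId re-emits exactly the same values, which is the property transactional Trident spouts depend on.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the per-batchId caching pattern (names are ours, not Storm's).
public class BatchCache {
    private final Map<Long, List<Integer>> batches = new HashMap<>();
    private final int batchSize;
    private final int maxNumber;

    BatchCache(int batchSize, int maxNumber) {
        this.batchSize = batchSize;
        this.maxNumber = maxNumber;
    }

    // First call for a batchId generates random values; later calls
    // (replays of the same batch) return the identical cached list.
    List<Integer> batchFor(long batchId) {
        return batches.computeIfAbsent(batchId, id -> {
            List<Integer> values = new ArrayList<>(batchSize);
            for (int i = 0; i < batchSize; i++) {
                values.add(ThreadLocalRandom.current().nextInt(maxNumber + 1));
            }
            return values;
        });
    }
}
```

Without the cache, a replayed batch would carry fresh random numbers and break exactly-once accounting downstream.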
[jira] [Closed] (STORM-1336) Evalute/Port JStorm cgroup support
[ https://issues.apache.org/jira/browse/STORM-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Boyang Jerry Peng closed STORM-1336.
------------------------------------
Resolution: Done

> Evalute/Port JStorm cgroup support
> ----------------------------------
>
>                 Key: STORM-1336
>                 URL: https://issues.apache.org/jira/browse/STORM-1336
>             Project: Apache Storm
>          Issue Type: New Feature
>          Components: storm-core
>            Reporter: Robert Joseph Evans
>            Assignee: Boyang Jerry Peng
>              Labels: jstorm-merger
>
> Supports controlling the upper limit of CPU core usage for a worker using cgroups.
> Sounds like a good start, will be nice to integrate it with RAS requests too.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (STORM-1336) Evalute/Port JStorm cgroup support
[ https://issues.apache.org/jira/browse/STORM-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Boyang Jerry Peng reopened STORM-1336:
--------------------------------------
[jira] [Resolved] (STORM-1336) Evalute/Port JStorm cgroup support
[ https://issues.apache.org/jira/browse/STORM-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Boyang Jerry Peng resolved STORM-1336.
--------------------------------------
Resolution: Done
[GitHub] storm pull request: [STORM-1336] - Evalute/Port JStorm cgroup supp...
Github user asfgit closed the pull request at:
https://github.com/apache/storm/pull/1053

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
[jira] [Commented] (STORM-1336) Evalute/Port JStorm cgroup support
[ https://issues.apache.org/jira/browse/STORM-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151742#comment-15151742 ]

ASF GitHub Bot commented on STORM-1336:
---------------------------------------

Github user asfgit closed the pull request at:
https://github.com/apache/storm/pull/1053
[jira] [Commented] (STORM-1511) min/max operations on trident stream
[ https://issues.apache.org/jira/browse/STORM-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151708#comment-15151708 ]

ASF GitHub Bot commented on STORM-1511:
---------------------------------------

GitHub user satishd opened a pull request:
https://github.com/apache/storm/pull/1118

STORM-1511 min/max operations support on a trident stream.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/satishd/storm 1.x-min-max

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/storm/pull/1118.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1118

commit 83ea6cb7ea7818b1b834cf7e448e7844db1e818e
Author: Satish Duggana
Date: 2016-01-29T07:09:26Z
min/max operators implementation in Trident streams API.

commit d2089c4047982b22cf4583bb63018a285be4d42c
Author: Satish Duggana
Date: 2016-02-08T11:19:46Z
Addressed review comments
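What the min/max operations in this pull request compute can be sketched without Storm (assumption: the class and method names below are illustrative and tuples are modeled as maps; see the PR diff for the real Trident Stream API): per batch, pick the tuple whose named field has the smallest or largest value.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of min/max-by-field over one batch of tuples (tuples as maps).
public class MinMaxByFieldDemo {

    // Tuple whose given field is smallest in the batch.
    static Map<String, Integer> minBy(List<Map<String, Integer>> batch, String field) {
        return Collections.min(batch,
                Comparator.comparing((Map<String, Integer> t) -> t.get(field)));
    }

    // Tuple whose given field is largest in the batch.
    static Map<String, Integer> maxBy(List<Map<String, Integer>> batch, String field) {
        return Collections.max(batch,
                Comparator.comparing((Map<String, Integer> t) -> t.get(field)));
    }

    public static void main(String[] args) {
        List<Map<String, Integer>> batch = Arrays.asList(
                Map.of("number", 42), Map.of("number", 7), Map.of("number", 19));
        System.out.println(minBy(batch, "number")); // the tuple with number=7
        System.out.println(maxBy(batch, "number")); // the tuple with number=42
    }
}
```

A comparator-based overload (as opposed to comparing a single field) is the natural generalization, which is what the RandomNumberGeneratorSpout example topology in the PR exercises.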
[jira] [Updated] (STORM-1557) trident get repeat data from kafka
[ https://issues.apache.org/jira/browse/STORM-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] miaoyichen updated STORM-1557: -- Description: hello, i'm Felix. When we used Trident API and set Config.TOPOLOGY_MAX_SPOUT_PENDING greater than 1, assuming 100, we found the txid was incontinuous. That phenomenon has two effects: * When using OpaqueTridentKafkaSpout class, we found that the different txid got same offset which lead that we got repeated data value from kafka. we printed some logs here: {quote} 11:10:13.516 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:96,offset:1805,nextOffset:1824] 11:10:13.567 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:142,offset:1824,nextOffset:1843] 11:10:13.619 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:97,offset:1824,nextOffset:1843] 11:10:13.670 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:98,offset:1843,nextOffset:1862]{quote} {color:red}please NOTICE that id 142 and 97 got same kafka offset.{color} * When using TransactionalTridentKafkaSpout class, we got different txid and continuance offset value but the process speed would be slow, because some unnecessary loops in function MasterBatchCoordinator.sync(). And the greater Config.TOPOLOGY_MAX_SPOUT_PENDING, the slower speed. 
we printed some logs here: {quote} Config.TOPOLOGY_MAX_SPOUT_PENDING=100 Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50 14:05:00.337 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:135,offset:2546,nextOffset:2565] 14:05:00.483 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:136,offset:2565,nextOffset:2584] 14:05:00.495 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:197,offset:2584,nextOffset:2603] 14:05:03.550 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:198,offset:2603,nextOffset:2622] 14:05:03.593 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:199,offset:2622,nextOffset:2641]{quote} {color:red}please NOTICE the timestamp from id 136 to 198 and offset.{color} {quote} Config.TOPOLOGY_MAX_SPOUT_PENDING=1000 Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50 11:35:36.265 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:232,offset:228,nextOffset:247] 11:35:36.305 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:233,offset:247,nextOffset:266] 11:35:36.343 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:446,offset:266,nextOffset:285] 11:35:41.345 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1266,offset:285,nextOffset:304] 11:35:47.063 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1330,offset:304,nextOffset:323] 11:35:56.221 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1447,offset:323,nextOffset:342]{quote} {color:red}please notice every log's timestamp,txid and offset.{color} We thought the txid's distribution algorithm needs to be with the continuous principle in MasterBatchCoordinator class. ONLY when the time windows and other condition is ready, the txid could be added. was: hello, i'm Felix. 
When we used Trident API and set Config.TOPOLOGY_MAX_SPOUT_PENDING greater than 1, assuming 100, we found the txid was incontinuous. That phenomenon has two effects: * When using OpaqueTridentKafkaSpout class, we found that the different txid got same offset which lead that we got repeated data value from kafka. we printed some logs here: 11:10:13.516 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:96,offset:1805,nextOffset:1824] 11:10:13.567 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:142,offset:1824,nextOffset:1843] 11:10:13.619 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:97,offset:1824,nextOffset:1843] 11:10:13.670 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:98,offset:1843,nextOffset:1862] please NOTICE that id 142 and 97 got same kafka offset. * When using TransactionalTridentKafkaSpout class, we got different txid and continuance offset value but the process speed would be slow, because some unnecessary loops in function MasterBatchCoordinator.sync(). And the greater Config.TOPOLOGY_MAX_SPOUT_PENDING, the slower speed. we printed some logs here: Config.TOPOLOGY_MAX_SPOUT_PENDING=100 Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50 14:05:00.337 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:135,offset:2546,nextOffset:2565] 14:05:00.483 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:136,offset:2565,nextOffset:2584] 14:05:00.495 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter -
[jira] [Updated] (STORM-1557) trident get repeat data from kafka
[ https://issues.apache.org/jira/browse/STORM-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] miaoyichen updated STORM-1557: -- Description: hello, i'm Felix. When we used Trident API and set Config.TOPOLOGY_MAX_SPOUT_PENDING greater than 1, assuming 100, we found the txid was incontinuous. That phenomenon has two effects: * When using OpaqueTridentKafkaSpout class, we found that the different txid got same offset which lead that we got repeated data value from kafka. we printed some logs here: 11:10:13.516 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:96,offset:1805,nextOffset:1824] 11:10:13.567 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:142,offset:1824,nextOffset:1843] 11:10:13.619 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:97,offset:1824,nextOffset:1843] 11:10:13.670 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:98,offset:1843,nextOffset:1862] please NOTICE that id 142 and 97 got same kafka offset. * When using TransactionalTridentKafkaSpout class, we got different txid and continuance offset value but the process speed would be slow, because some unnecessary loops in function MasterBatchCoordinator.sync(). And the greater Config.TOPOLOGY_MAX_SPOUT_PENDING, the slower speed. 
we printed some logs here: Config.TOPOLOGY_MAX_SPOUT_PENDING=100 Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50 14:05:00.337 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:135,offset:2546,nextOffset:2565] 14:05:00.483 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:136,offset:2565,nextOffset:2584] 14:05:00.495 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:197,offset:2584,nextOffset:2603] 14:05:03.550 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:198,offset:2603,nextOffset:2622] 14:05:03.593 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:199,offset:2622,nextOffset:2641] please NOTICE the timestamp from id 136 to 198 and offset. Config.TOPOLOGY_MAX_SPOUT_PENDING=1000 Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50 11:35:36.265 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:232,offset:228,nextOffset:247] 11:35:36.305 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:233,offset:247,nextOffset:266] 11:35:36.343 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:446,offset:266,nextOffset:285] 11:35:41.345 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1266,offset:285,nextOffset:304] 11:35:47.063 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1330,offset:304,nextOffset:323] 11:35:56.221 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1447,offset:323,nextOffset:342] please notice every log's timestamp,txid and offset. We thought the txid's distribution algorithm needs to be with the continuous principle in MasterBatchCoordinator class. ONLY when the time windows and other condition is ready, the txid could be added. was: hello, i'm Felix. When we used Trident API and set Config.TOPOLOGY_MAX_SPOUT_PENDING greater than 1, assuming 100, we found the txid was incontinuous. 
That phenomenon has two effects: * When using OpaqueTridentKafkaSpout class, we found that the different txid got same offset which lead that we got repeated data value from kafka. we printed some logs here: 11:10:13.516 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:96,offset:1805,nextOffset:1824] 11:10:13.567 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:142,offset:1824,nextOffset:1843] 11:10:13.619 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:97,offset:1824,nextOffset:1843] 11:10:13.670 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:98,offset:1843,nextOffset:1862] please NOTICE that id 142 and 97 got same kafka offset. * When using TransactionalTridentKafkaSpout class, we got different txid and continuance offset value but the process speed would be slow, because some unnecessary loops in function MasterBatchCoordinator.sync(). And the greater Config.TOPOLOGY_MAX_SPOUT_PENDING, the slower speed. we printed some logs here: Config.TOPOLOGY_MAX_SPOUT_PENDING=100 Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50 14:05:00.337 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:135,offset:2546,nextOffset:2565] 14:05:00.483 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:136,offset:2565,nextOffset:2584] 14:05:00.495 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:197,offset:2584,nextOffset:2603] 14:05:03.550 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter -
[jira] [Created] (STORM-1557) trident get repeat data from kafka
miaoyichen created STORM-1557:
---------------------------------

             Summary: trident get repeat data from kafka
                 Key: STORM-1557
                 URL: https://issues.apache.org/jira/browse/STORM-1557
             Project: Apache Storm
          Issue Type: Bug
          Components: storm-core
            Reporter: miaoyichen
            Assignee: miaoyichen

hello, i'm Felix.

When we used the Trident API with Config.TOPOLOGY_MAX_SPOUT_PENDING set greater than 1 (say 100), we found that the txids were not contiguous. That phenomenon has two effects:

* When using the OpaqueTridentKafkaSpout class, different txids got the same offset, so we received duplicate data from Kafka. We printed some logs here:
{noformat}
11:10:13.516 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:96,offset:1805,nextOffset:1824]
11:10:13.567 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:142,offset:1824,nextOffset:1843]
11:10:13.619 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:97,offset:1824,nextOffset:1843]
11:10:13.670 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:98,offset:1843,nextOffset:1862]
{noformat}
Please NOTICE that ids 142 and 97 got the same Kafka offset.

* When using the TransactionalTridentKafkaSpout class, we got distinct txids and contiguous offsets, but processing was slow because of unnecessary loops in the MasterBatchCoordinator.sync() function; the greater Config.TOPOLOGY_MAX_SPOUT_PENDING, the slower it runs. We printed some logs here:
{noformat}
Config.TOPOLOGY_MAX_SPOUT_PENDING=100
Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50
14:05:00.337 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:135,offset:2546,nextOffset:2565]
14:05:00.483 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:136,offset:2565,nextOffset:2584]
14:05:00.495 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:197,offset:2584,nextOffset:2603]
14:05:03.550 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:198,offset:2603,nextOffset:2622]
14:05:03.593 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:199,offset:2622,nextOffset:2641]
{noformat}
Please NOTICE the timestamps between ids 136 and 198, and the offsets.
{noformat}
Config.TOPOLOGY_MAX_SPOUT_PENDING=1000
Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS=50
11:35:36.265 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:232,offset:228,nextOffset:247]
11:35:36.305 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:233,offset:247,nextOffset:266]
11:35:36.343 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:446,offset:266,nextOffset:285]
11:35:41.345 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1266,offset:285,nextOffset:304]
11:35:47.063 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1330,offset:304,nextOffset:323]
11:35:56.221 [Thread-15-spout0] ERROR s.kafka.trident.TridentKafkaEmitter - emit:[id:1447,offset:323,nextOffset:342]
{noformat}
Please notice every log line's timestamp, txid, and offset.

We think the txid assignment algorithm in the MasterBatchCoordinator class needs to follow a contiguity principle: a new txid should be issued ONLY when the time window and the other conditions are ready.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
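The duplicate offsets in the report can be reproduced in miniature. The sketch below is a hypothetical simulation, not Storm source (the class, fields, and batch size are invented for illustration): a spout-like emitter records per-txid batch metadata and starts each batch where txid-1 ended, falling back to the latest known position when txid-1 never emitted. When the coordinator hands out non-contiguous txids (96, 142, 97, as in the log), two different txids end up starting from the same offset.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simulation of the reported symptom; names are invented.
public class TxidGapDemo {
    private static final int BATCH_SIZE = 19;   // matches the 19-record batches in the log
    private final Map<Integer, Long> nextOffsetByTxid = new HashMap<>();
    private long latestNext;                    // highest nextOffset seen so far

    public TxidGapDemo(long startOffset) {
        this.latestNext = startOffset;
    }

    /** Emit one simulated batch for txid; returns {startOffset, nextOffset}. */
    public long[] emit(int txid) {
        // Start where txid-1 ended; if txid-1 never emitted (a gap in the
        // txid sequence), fall back to the latest known position -- this
        // fallback is where two txids can collide on the same offset.
        Long fromPrev = nextOffsetByTxid.get(txid - 1);
        long start = (fromPrev != null) ? fromPrev : latestNext;
        long next = start + BATCH_SIZE;
        nextOffsetByTxid.put(txid, next);
        latestNext = Math.max(latestNext, next);
        return new long[]{start, next};
    }

    public static void main(String[] args) {
        TxidGapDemo demo = new TxidGapDemo(1805);
        long[] t96 = demo.emit(96);    // emit:[id:96,offset:1805,nextOffset:1824]
        long[] t142 = demo.emit(142);  // gap: txids 97..141 have not emitted yet
        long[] t97 = demo.emit(97);    // reads txid 96's metadata -> same offset as 142
        System.out.println("txid 142 offset=" + t142[0] + ", txid 97 offset=" + t97[0]);
    }
}
```

Running it prints the same collision as the log: txids 142 and 97 both start at offset 1824, which is exactly the duplicate-data symptom described above.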
[jira] [Commented] (STORM-822) As a storm developer I’d like to use the new kafka consumer API (0.8.3) to reduce dependencies and use long term supported kafka apis
[ https://issues.apache.org/jira/browse/STORM-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151680#comment-15151680 ] ASF GitHub Bot commented on STORM-822: -- Github user jianbzhou commented on the pull request: https://github.com/apache/storm/pull/986#issuecomment-185525958 **@hmcl** thanks a lot, I understand the complexity. Appreciate your help, and it would be great to see your patch by EOD today your time. > As a storm developer I’d like to use the new kafka consumer API (0.8.3) to > reduce dependencies and use long term supported kafka apis > -- > > Key: STORM-822 > URL: https://issues.apache.org/jira/browse/STORM-822 > Project: Apache Storm > Issue Type: Story > Components: storm-kafka >Reporter: Thomas Becker >Assignee: Hugo Louro > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] storm pull request: STORM-822 Implement Kafka 0.9 consumer API
Github user hmcl commented on the pull request: https://github.com/apache/storm/pull/986#issuecomment-185525450 @jianbzhou sorry it took me a bit longer. Trying my best to put it up within the next few hours. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (STORM-822) As a storm developer I’d like to use the new kafka consumer API (0.8.3) to reduce dependencies and use long term supported kafka apis
[ https://issues.apache.org/jira/browse/STORM-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151676#comment-15151676 ] ASF GitHub Bot commented on STORM-822: -- Github user hmcl commented on the pull request: https://github.com/apache/storm/pull/986#issuecomment-185525450 @jianbzhou sorry it took me a bit longer. Trying my best to put it up within the next few hours. > As a storm developer I’d like to use the new kafka consumer API (0.8.3) to > reduce dependencies and use long term supported kafka apis > -- > > Key: STORM-822 > URL: https://issues.apache.org/jira/browse/STORM-822 > Project: Apache Storm > Issue Type: Story > Components: storm-kafka >Reporter: Thomas Becker >Assignee: Hugo Louro > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1336) Evalute/Port JStorm cgroup support
[ https://issues.apache.org/jira/browse/STORM-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151615#comment-15151615 ] ASF GitHub Bot commented on STORM-1336: --- Github user vesense commented on the pull request: https://github.com/apache/storm/pull/1053#issuecomment-185514609 LGTM > Evalute/Port JStorm cgroup support > -- > > Key: STORM-1336 > URL: https://issues.apache.org/jira/browse/STORM-1336 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Boyang Jerry Peng > Labels: jstorm-merger > > Supports controlling the upper limit of CPU core usage for a worker using > cgroups > Sounds like a good start, will be nice to integrate it with RAS requests too. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1232) port backtype.storm.scheduler.DefaultScheduler to java
[ https://issues.apache.org/jira/browse/STORM-1232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151607#comment-15151607 ] ASF GitHub Bot commented on STORM-1232: --- Github user vesense commented on the pull request: https://github.com/apache/storm/pull/1108#issuecomment-185513804 @abhishekagarwal87 Thanks for your review. I think I have addressed all of your comments. Does anyone have any other concerns? > port backtype.storm.scheduler.DefaultScheduler to java > --- > > Key: STORM-1232 > URL: https://issues.apache.org/jira/browse/STORM-1232 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Xin Wang > Labels: java-migration, jstorm-merger > > port the DefaultScheduler to java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1541) Change scope of 'hadoop-minicluster' to test
[ https://issues.apache.org/jira/browse/STORM-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151599#comment-15151599 ] ASF GitHub Bot commented on STORM-1541: --- Github user vesense commented on the pull request: https://github.com/apache/storm/pull/1102#issuecomment-185511950 LGTM > Change scope of 'hadoop-minicluster' to test > > > Key: STORM-1541 > URL: https://issues.apache.org/jira/browse/STORM-1541 > Project: Apache Storm > Issue Type: Bug > Components: storm-hdfs >Affects Versions: 1.0.0, 2.0.0 >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > > STORM-969 added dependency 'hadoop-minicluster' but not set scope to 'test' > though it's for unit test. (and normally hadoop-minicluster is) > It may come up with other unintended dependencies. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1539) Improve Storm ACK-ing performance
[ https://issues.apache.org/jira/browse/STORM-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xin Wang resolved STORM-1539. - Resolution: Resolved Fix Version/s: 1.0.0 > Improve Storm ACK-ing performance > - > > Key: STORM-1539 > URL: https://issues.apache.org/jira/browse/STORM-1539 > Project: Apache Storm > Issue Type: Bug >Reporter: Roshan Naik >Assignee: Roshan Naik > Fix For: 1.0.0 > > Attachments: after.png, before.png > > > Profiling a simple speed-of-light topology shows that a good chunk of the time in > SpoutOutputCollector.emit() is spent in the clojure reduce() > function, which is part of the ACK-ing logic. > Re-implementing this reduce() logic in Java gives a big performance boost in > both Spout.nextTuple() and Bolt.execute() -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (STORM-1270) port backtype.storm.daemon.drpc to java
[ https://issues.apache.org/jira/browse/STORM-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Fang reassigned STORM-1270: Assignee: John Fang > port backtype.storm.daemon.drpc to java > --- > > Key: STORM-1270 > URL: https://issues.apache.org/jira/browse/STORM-1270 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: John Fang > Labels: java-migration, jstorm-merger > > DRPC server with HTTP and thrift support > https://github.com/apache/storm/blob/jstorm-import/jstorm-core/src/main/java/com/alibaba/jstorm/drpc/Drpc.java > (But missing HTTP support) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (STORM-1274) port backtype.storm.LocalDRPC to java
[ https://issues.apache.org/jira/browse/STORM-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Fang reassigned STORM-1274: Assignee: John Fang > port backtype.storm.LocalDRPC to java > - > > Key: STORM-1274 > URL: https://issues.apache.org/jira/browse/STORM-1274 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: John Fang > Labels: java-migration, jstorm-merger > > https://github.com/apache/storm/blob/jstorm-import/jstorm-core/src/main/java/backtype/storm/LocalDRPC.java > as an example -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1147) Storm JDBCBolt should add validation to ensure either insertQuery or table name is specified and not both.
[ https://issues.apache.org/jira/browse/STORM-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Parth Brahmbhatt resolved STORM-1147. - Resolution: Fixed > Storm JDBCBolt should add validation to ensure either insertQuery or table > name is specified and not both. > -- > > Key: STORM-1147 > URL: https://issues.apache.org/jira/browse/STORM-1147 > Project: Apache Storm > Issue Type: Bug > Components: storm-jdbc >Affects Versions: 0.10.0 >Reporter: Parth Brahmbhatt >Assignee: Parth Brahmbhatt >Priority: Trivial > Fix For: 1.0.0 > > > The JDBCBolt takes either an insert query or table name but does not do any > validation check to ensure only one of the two options is provided. We should > add a validation check and throw an exception with proper messaging to avoid > confusion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1258) port backtype.storm.thrift to java
[ https://issues.apache.org/jira/browse/STORM-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151359#comment-15151359 ] Sanket Reddy commented on STORM-1258: - there is a PR already > port backtype.storm.thrift to java > --- > > Key: STORM-1258 > URL: https://issues.apache.org/jira/browse/STORM-1258 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Sanket Reddy > Labels: java-migration, jstorm-merger > > helper methods for manipulating thrift objects -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151349#comment-15151349 ] ASF GitHub Bot commented on STORM-1255: --- Github user abellina closed the pull request at: https://github.com/apache/storm/pull/1114 > port backtype.storm.utils-test to java > -- > > Key: STORM-1255 > URL: https://issues.apache.org/jira/browse/STORM-1255 > Project: Apache Storm > Issue Type: New Feature > Components: rm-core, storm-core >Reporter: Robert Joseph Evans >Assignee: Alessandro Bellina > Labels: java-migration, jstorm-merger > > junit test migration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151351#comment-15151351 ] ASF GitHub Bot commented on STORM-1255: --- GitHub user abellina reopened a pull request: https://github.com/apache/storm/pull/1114 STORM-1255: port storm_utils.clj to java and split Time tests into its own test file Added a few extra unit tests. It is hard to find what needs to be tested most, so if you have a suggestion, I am happy to add more tests for Utils (perhaps a different pr). You can merge this pull request into a Git repository by running: $ git pull https://github.com/abellina/storm STORM-1255_port_utils_test_to_java Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/1114.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1114 commit a2a656ed3fdbf76fddb730bced5bfe7f2b18df72 Author: Alessandro BellinaDate: 2016-02-15T19:30:10Z STORM-1255: port storm_utils.clj to java and split Time tests into its own test file commit 20851f8b2d47bc74fa8ac36c78c43e038d56ed81 Author: Alessandro Bellina Date: 2016-02-17T18:26:02Z STORM-1255: remove extra import, adjust formatting, shorten function names by grouping assertions commit 3fe11ecc652790684010b5fdd36c53844e89ce42 Author: Alessandro Bellina Date: 2016-02-17T18:31:58Z STORM-1255: combine two tests to make things clearer commit a8edd512c903d44bf26a57cace8f848cc4660738 Author: Alessandro Bellina Date: 2016-02-17T18:36:32Z STORM-1255: fix spacing in test > port backtype.storm.utils-test to java > -- > > Key: STORM-1255 > URL: https://issues.apache.org/jira/browse/STORM-1255 > Project: Apache Storm > Issue Type: New Feature > Components: rm-core, storm-core >Reporter: Robert Joseph Evans >Assignee: Alessandro Bellina > Labels: java-migration, jstorm-merger > > junit test migration -- This message was sent by Atlassian JIRA 
(v6.3.4#6332)
[jira] [Commented] (STORM-1553) port backtype.storm.event to java
[ https://issues.apache.org/jira/browse/STORM-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151272#comment-15151272 ] ASF GitHub Bot commented on STORM-1553: --- Github user ppoulosk commented on the pull request: https://github.com/apache/storm/pull/1110#issuecomment-185425571 Very good translation. +1, NB
> port backtype.storm.event to java
> -
>
> Key: STORM-1553
> URL: https://issues.apache.org/jira/browse/STORM-1553
> Project: Apache Storm
> Issue Type: New Feature
> Reporter: John Fang
> Assignee: John Fang
>
[jira] [Commented] (STORM-1542) Taking jstack for a worker in UI results in endless empty jstack dumps
[ https://issues.apache.org/jira/browse/STORM-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151245#comment-15151245 ] Kishor Patil commented on STORM-1542: - [~abhishek.agarwal] It looks like a good option to turn this into a synchronous mode. I would be watchful during implementation:
- for multitenant setups, these operations need to be launched as the user.
- heap-dump-like actions in synchronous mode could take a long time, due to the heap size plus the time to download the dump results back to the browser.
Currently, the supervisor relaunches the profiler (in case of worker restarts), but that feature would cease to exist (as the supervisor uses ZK to remember launching profilers for that worker). The idea otherwise sounds good.
> Taking jstack for a worker in UI results in endless empty jstack dumps
> --
>
> Key: STORM-1542
> URL: https://issues.apache.org/jira/browse/STORM-1542
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-core
> Affects Versions: 1.0.0
> Reporter: Abhishek Agarwal
> Assignee: Abhishek Agarwal
> Priority: Critical
>
> The resolved path for the jstack command on the supervisor is
> /home/y/share/yjava_jdk/java/jstack, which doesn't exist, so the command returns 127
> as its exit code. When a request for a jstack dump is made from the UI, a zookeeper
> node is created. The supervisor keeps reading this node and executing the jstack
> command, and since the exit code is non-zero, it doesn't delete the node afterwards.
> Thus the supervisor keeps executing the command forever, and each invocation
> creates a new empty file.
> {noformat}
> $BINPATH/jstack $1 > "$2/${FILENAME}"
> {noformat}
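The fix the bug report implies can be sketched in shell: treat a non-zero jstack exit code as a failure, clean up the empty dump file, and propagate the code so the caller (the supervisor logic that decides whether to delete the ZooKeeper request node) can react. The `run_jstack` wrapper and its filename scheme are hypothetical; only `$BINPATH`, the pid argument, and the output directory come from the quoted script.

```shell
#!/bin/sh
# Hypothetical wrapper around the quoted one-liner. The function name,
# filename scheme, and cleanup behavior are assumptions; the point is
# that a non-zero exit code must not leave an empty dump file behind.
run_jstack() {
    pid="$1"
    out_dir="$2"
    filename="jstack-${pid}-$(date +%s).txt"
    if "${BINPATH}/jstack" "$pid" > "${out_dir}/${filename}"; then
        return 0
    else
        rc=$?
        # A failed run (e.g. exit code 127 for a missing binary) removes
        # its empty output file and propagates the failure, instead of
        # silently leaving debris behind for the next retry.
        rm -f "${out_dir}/${filename}"
        return "$rc"
    fi
}
```

With this shape, the supervisor only deletes the ZooKeeper node on success, so a broken jstack path fails loudly once instead of looping forever over empty files.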
[jira] [Commented] (STORM-1243) port backtype.storm.command.healthcheck to java
[ https://issues.apache.org/jira/browse/STORM-1243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151217#comment-15151217 ] ASF GitHub Bot commented on STORM-1243: --- Github user ppoulosk commented on the pull request: https://github.com/apache/storm/pull/#issuecomment-185411289 +1, NB
> port backtype.storm.command.healthcheck to java
> ---
>
> Key: STORM-1243
> URL: https://issues.apache.org/jira/browse/STORM-1243
> Project: Apache Storm
> Issue Type: New Feature
> Components: storm-core
> Reporter: Robert Joseph Evans
> Assignee: John Fang
> Labels: java-migration, jstorm-merger
>
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151146#comment-15151146 ] ASF GitHub Bot commented on STORM-1255: --- Github user abellina commented on the pull request: https://github.com/apache/storm/pull/1114#issuecomment-185396798 The build appears stuck. Is there a way for me to re-trigger it other than close and re-open the pull request?
[GitHub] storm pull request: STORM-1255: port storm_utils.clj to java and s...
Github user redsanket commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53209094
--- Diff: storm-core/test/jvm/org/apache/storm/utils/UtilsTest.java ---
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import org.junit.Test;
+import org.junit.Assert;
+
+import org.apache.curator.ensemble.exhibitor.ExhibitorEnsembleProvider;
+import org.apache.curator.ensemble.fixed.FixedEnsembleProvider;
+import org.apache.curator.framework.AuthInfo;
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+
+import org.apache.storm.Config;
+import org.apache.thrift.transport.TTransportException;
+
+import static org.mockito.Mockito.*;
+
+public class UtilsTest{
+@Test
+public void newCuratorUsesExponentialBackoffTest() throws InterruptedException{
+final int expectedInterval = 2400;
+final int expectedRetries = 10;
+final int expectedCeiling = 3000;
+
+Map config = Utils.readDefaultConfig();
+config.put(Config.STORM_ZOOKEEPER_RETRY_INTERVAL, expectedInterval);
+config.put(Config.STORM_ZOOKEEPER_RETRY_TIMES, expectedRetries);
+config.put(Config.STORM_ZOOKEEPER_RETRY_INTERVAL_CEILING, expectedCeiling);
+
+CuratorFramework curator = Utils.newCurator(config, Arrays.asList("bogus_server"), 42 /*port*/, "");
+StormBoundedExponentialBackoffRetry policy =
+(StormBoundedExponentialBackoffRetry) curator.getZookeeperClient().getRetryPolicy();
+Assert.assertEquals(policy.getBaseSleepTimeMs(), expectedInterval);
+Assert.assertEquals(policy.getN(), expectedRetries);
+Assert.assertEquals(policy.getSleepTimeMs(10, 0), expectedCeiling);
+}
+
+@Test(expected = RuntimeException.class)
+public void getConfiguredClientThrowsRuntimeExceptionOnBadArgsTest () throws RuntimeException, TTransportException {
+Map config = ConfigUtils.readStormConfig();
+config.put(Config.STORM_NIMBUS_RETRY_TIMES, 0);
+new NimbusClient(config, "", 65535);
+}
+
+private Map mockMap(String key, String value){
+Map map = new HashMap ();
+map.put(key, value);
+return map;
+}
+
+private Map topologyMockMap(String value){
+return mockMap(Config.STORM_ZOOKEEPER_TOPOLOGY_AUTH_SCHEME, value);
+}
+
+private Map serverMockMap(String value){
+return mockMap(Config.STORM_ZOOKEEPER_AUTH_SCHEME, value);
+}
+
+private Map emptyMockMap(){
+return new HashMap ();
+}
+
+/* isZkAuthenticationConfiguredTopology */
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsFalseOnNullConfigTest(){
+ Assert.assertFalse(Utils.isZkAuthenticationConfiguredTopology(null));
+}
+
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsFalseOnSchemeKeyMissingTest(){
+ Assert.assertFalse(Utils.isZkAuthenticationConfiguredTopology(emptyMockMap()));
+}
+
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsFalseOnSchemeValueNullTest(){
+ Assert.assertFalse(Utils.isZkAuthenticationConfiguredTopology(topologyMockMap(null)));
+}
+
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsTrueWhenSchemeSetToStringTest(){
+ Assert.assertTrue(Utils.isZkAuthenticationConfiguredTopology(topologyMockMap("foobar")));
+}
+
+/* isZkAuthenticationConfiguredStormServer */
+@Test
+public void
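The curator test in the diff above pins down the retry policy's shape: sleep starts at a base interval, grows exponentially, and is capped at a ceiling once retries run high. A minimal sketch of that bounded exponential backoff, using a hypothetical `BoundedBackoff` class (Storm's actual StormBoundedExponentialBackoffRetry adds jitter and other details not shown here):

```java
public class BoundedBackoff {
    private final long baseSleepMs;
    private final long ceilingMs;
    private final int maxRetries;

    public BoundedBackoff(long baseSleepMs, long ceilingMs, int maxRetries) {
        this.baseSleepMs = baseSleepMs;
        this.ceilingMs = ceilingMs;
        this.maxRetries = maxRetries;
    }

    // Sleep time for the given retry count: base * 2^retry, capped at
    // the ceiling; once retries are exhausted, stay at the ceiling.
    public long sleepMs(int retryCount) {
        if (retryCount >= maxRetries) {
            return ceilingMs;
        }
        double uncapped = baseSleepMs * Math.pow(2, retryCount);
        return (long) Math.min((double) ceilingMs, uncapped);
    }

    public static void main(String[] args) {
        // Values from the quoted test: base 2400 ms, ceiling 3000 ms, 10 retries.
        BoundedBackoff policy = new BoundedBackoff(2400, 3000, 10);
        System.out.println(policy.sleepMs(0));  // 2400
        System.out.println(policy.sleepMs(10)); // 3000
    }
}
```

This is why the quoted assertion expects `getSleepTimeMs(10, 0)` to hit the ceiling: with a 2400 ms base, even the second retry already exceeds a 3000 ms cap.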
[GitHub] storm pull request: STORM-1255: port storm_utils.clj to java and s...
Github user abellina commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53209305
--- Diff: storm-core/test/jvm/org/apache/storm/utils/UtilsTest.java ---
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import org.junit.Test;
+import org.junit.Assert;
+
+import org.apache.curator.ensemble.exhibitor.ExhibitorEnsembleProvider;
+import org.apache.curator.ensemble.fixed.FixedEnsembleProvider;
+import org.apache.curator.framework.AuthInfo;
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+
+import org.apache.storm.Config;
+import org.apache.thrift.transport.TTransportException;
+
+import static org.mockito.Mockito.*;
+
+public class UtilsTest{
+@Test
+public void newCuratorUsesExponentialBackoffTest() throws InterruptedException{
+final int expectedInterval = 2400;
+final int expectedRetries = 10;
+final int expectedCeiling = 3000;
+
+Map config = Utils.readDefaultConfig();
+config.put(Config.STORM_ZOOKEEPER_RETRY_INTERVAL, expectedInterval);
+config.put(Config.STORM_ZOOKEEPER_RETRY_TIMES, expectedRetries);
+config.put(Config.STORM_ZOOKEEPER_RETRY_INTERVAL_CEILING, expectedCeiling);
+
+CuratorFramework curator = Utils.newCurator(config, Arrays.asList("bogus_server"), 42 /*port*/, "");
+StormBoundedExponentialBackoffRetry policy =
+(StormBoundedExponentialBackoffRetry) curator.getZookeeperClient().getRetryPolicy();
+Assert.assertEquals(policy.getBaseSleepTimeMs(), expectedInterval);
+Assert.assertEquals(policy.getN(), expectedRetries);
+Assert.assertEquals(policy.getSleepTimeMs(10, 0), expectedCeiling);
+}
+
+@Test(expected = RuntimeException.class)
+public void getConfiguredClientThrowsRuntimeExceptionOnBadArgsTest () throws RuntimeException, TTransportException {
+Map config = ConfigUtils.readStormConfig();
+config.put(Config.STORM_NIMBUS_RETRY_TIMES, 0);
+new NimbusClient(config, "", 65535);
+}
+
+private Map mockMap(String key, String value){
+Map map = new HashMap ();
+map.put(key, value);
+return map;
+}
+
+private Map topologyMockMap(String value){
+return mockMap(Config.STORM_ZOOKEEPER_TOPOLOGY_AUTH_SCHEME, value);
+}
+
+private Map serverMockMap(String value){
+return mockMap(Config.STORM_ZOOKEEPER_AUTH_SCHEME, value);
+}
+
+private Map emptyMockMap(){
+return new HashMap ();
+}
+
+/* isZkAuthenticationConfiguredTopology */
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsFalseOnNullConfigTest(){
+ Assert.assertFalse(Utils.isZkAuthenticationConfiguredTopology(null));
+}
+
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsFalseOnSchemeKeyMissingTest(){
+ Assert.assertFalse(Utils.isZkAuthenticationConfiguredTopology(emptyMockMap()));
+}
+
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsFalseOnSchemeValueNullTest(){
+ Assert.assertFalse(Utils.isZkAuthenticationConfiguredTopology(topologyMockMap(null)));
+}
+
+@Test
+public void isZkAuthenticationConfiguredTopologyReturnsTrueWhenSchemeSetToStringTest(){
+ Assert.assertTrue(Utils.isZkAuthenticationConfiguredTopology(topologyMockMap("foobar")));
+}
+
+/* isZkAuthenticationConfiguredStormServer */
+@Test
+public void
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150940#comment-15150940 ] ASF GitHub Bot commented on STORM-1255: --- Github user redsanket commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53208394
--- Diff: storm-core/test/jvm/org/apache/storm/utils/TimeTest.java ---
@@ -0,0 +1,106 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import org.junit.Test;
+import org.junit.Assert;
+
+public class TimeTest{
+
+@Test
+public void secsToMillisLongTest(){
+Assert.assertEquals(Time.secsToMillisLong(0), 0);
--- End diff --
Please check whether parameter passing is consistent with others, I presume we do not add spaces when specifying arguments.
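The conversion the quoted TimeTest exercises is plain seconds-to-milliseconds arithmetic. A hypothetical re-implementation for illustration (the `SecsToMillis` class name is invented, and Storm's actual `Time.secsToMillisLong` may handle rounding differently):

```java
public class SecsToMillis {
    // Convert (possibly fractional) seconds to whole milliseconds,
    // truncating toward zero as the (long) cast does.
    static long secsToMillisLong(double secs) {
        return (long) (1000 * secs);
    }

    public static void main(String[] args) {
        System.out.println(secsToMillisLong(0.002)); // 2
        System.out.println(secsToMillisLong(1.08));  // 1080
        System.out.println(secsToMillisLong(10.1));  // 10100
    }
}
```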
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150935#comment-15150935 ] ASF GitHub Bot commented on STORM-1255: --- Github user redsanket commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53208097
--- Diff: storm-core/test/jvm/org/apache/storm/utils/TimeTest.java ---
@@ -0,0 +1,106 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import org.junit.Test;
+import org.junit.Assert;
+
+public class TimeTest{
--- End diff --
It would be good to add spaces throughout, as we follow an indentation convention in other files.
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150931#comment-15150931 ] ASF GitHub Bot commented on STORM-1255: --- Github user abellina commented on the pull request: https://github.com/apache/storm/pull/1114#issuecomment-185338394 @abhishekagarwal87 I have shortened names and combined some of the tests, using the assertX(String message, ...) format instead to give them more context (rather than relying on the test method name).
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150930#comment-15150930 ] ASF GitHub Bot commented on STORM-1255: --- Github user redsanket commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53207784
--- Diff: storm-core/test/jvm/org/apache/storm/utils/TimeTest.java ---
@@ -0,0 +1,106 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import org.junit.Test;
+import org.junit.Assert;
+
+public class TimeTest{
+
+@Test
+public void secsToMillisLongTest(){
+Assert.assertEquals(Time.secsToMillisLong(0), 0);
+Assert.assertEquals(Time.secsToMillisLong(0.002), 2);
+Assert.assertEquals(Time.secsToMillisLong(1), 1000);
+Assert.assertEquals(Time.secsToMillisLong(1.08), 1080);
+Assert.assertEquals(Time.secsToMillisLong(10),10000);
+Assert.assertEquals(Time.secsToMillisLong(10.1), 10100);
+}
+
+@Test
+public void ifNotSimulatingIsSimulatingReturnsFalse(){
--- End diff --
Can we have a better name? It is a bit confusing.
[jira] [Commented] (STORM-1258) port backtype.storm.thrift to java
[ https://issues.apache.org/jira/browse/STORM-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150861#comment-15150861 ] Abhishek Agarwal commented on STORM-1258: - Hi [~sanket991], are you working on this? I can take it up otherwise. > port backtype.storm.thrift to java > --- > > Key: STORM-1258 > URL: https://issues.apache.org/jira/browse/STORM-1258 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core > Reporter: Robert Joseph Evans > Assignee: Sanket Reddy > Labels: java-migration, jstorm-merger > > helper methods for manipulating thrift objects
[jira] [Assigned] (STORM-1282) port backtype.storm.LocalCluster to java
[ https://issues.apache.org/jira/browse/STORM-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Agarwal reassigned STORM-1282: --- Assignee: Abhishek Agarwal > port backtype.storm.LocalCluster to java > > > Key: STORM-1282 > URL: https://issues.apache.org/jira/browse/STORM-1282 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core > Reporter: Robert Joseph Evans > Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > > https://github.com/apache/storm/blob/jstorm-import/jstorm-core/src/main/java/backtype/storm/LocalCluster.java > as an example
[jira] [Assigned] (STORM-1313) port backtype.storm.versioned-store-test to java
[ https://issues.apache.org/jira/browse/STORM-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Agarwal reassigned STORM-1313: --- Assignee: Abhishek Agarwal > port backtype.storm.versioned-store-test to java > > > Key: STORM-1313 > URL: https://issues.apache.org/jira/browse/STORM-1313 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core > Reporter: Robert Joseph Evans > Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > > Test VersionedStore
[jira] [Commented] (STORM-1246) port backtype.storm.local-state to java
[ https://issues.apache.org/jira/browse/STORM-1246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150844#comment-15150844 ] ASF GitHub Bot commented on STORM-1246: --- GitHub user abhishekagarwal87 opened a pull request: https://github.com/apache/storm/pull/1117 STORM-1246: port backtype.storm.local-state to java
1. Conversion logic that was common across multiple files remains in local_state_converter.clj.
2. Conversion logic used exclusively in one clj file is moved to that file, e.g. the ls-local-assignments conversion has been moved to supervisor.clj. This will make it easy to port individual clojure files.
3. As suggested by Robert in the JIRA, I have ported the get/set functionality into the LocalState class.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/abhishekagarwal87/storm local-state Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/1117.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1117
commit 7363a24de22799c6488086cbc66a7b708eb3fdd0 Author: Abhishek Agarwal Date: 2016-02-17T17:29:15Z STORM-1246: port backtype.storm.local-state to java
commit 35a0052ca23c153a8818707770ad5e871829389f Author: Abhishek Agarwal Date: 2016-02-17T17:36:40Z STORM-1246: Remove extra line
> port backtype.storm.local-state to java > --- > > Key: STORM-1246 > URL: https://issues.apache.org/jira/browse/STORM-1246 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core > Reporter: Robert Joseph Evans > Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > > Wrapper around LocalState, with some helper functions for converting between storm and thrift.
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150837#comment-15150837 ] ASF GitHub Bot commented on STORM-1255: --- Github user redsanket commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53200197
--- Diff: storm-core/test/jvm/org/apache/storm/utils/TimeTest.java ---
@@ -0,0 +1,106 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import org.junit.Test;
+import org.junit.Assert;
+
+public class TimeTest{
--- End diff --
add space: TimeTest {
[jira] [Commented] (STORM-1511) min/max operations on trident stream
[ https://issues.apache.org/jira/browse/STORM-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150677#comment-15150677 ] Satish Duggana commented on STORM-1511: --- This PR is merged into master. It needs to be merged to the 1.x branch. > min/max operations on trident stream > > > Key: STORM-1511 > URL: https://issues.apache.org/jira/browse/STORM-1511 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core > Reporter: Satish Duggana > Assignee: Satish Duggana > Fix For: 1.0.0 > >
[jira] [Reopened] (STORM-1511) min/max operations on trident stream
[ https://issues.apache.org/jira/browse/STORM-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana reopened STORM-1511: ---
[jira] [Resolved] (STORM-1511) min/max operations on trident stream
[ https://issues.apache.org/jira/browse/STORM-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Satish Duggana resolved STORM-1511. --- Resolution: Fixed
[jira] [Commented] (STORM-822) As a storm developer I’d like to use the new kafka consumer API (0.8.3) to reduce dependencies and use long term supported kafka apis
[ https://issues.apache.org/jira/browse/STORM-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150650#comment-15150650 ] ASF GitHub Bot commented on STORM-822: -- Github user jianbzhou commented on the pull request: https://github.com/apache/storm/pull/986#issuecomment-185258481 Hi @hmcl, could you please share the patch? Thanks! > As a storm developer I’d like to use the new kafka consumer API (0.8.3) to > reduce dependencies and use long term supported kafka apis > -- > > Key: STORM-822 > URL: https://issues.apache.org/jira/browse/STORM-822 > Project: Apache Storm > Issue Type: Story > Components: storm-kafka > Reporter: Thomas Becker > Assignee: Hugo Louro >
[jira] [Commented] (STORM-822) As a storm developer I’d like to use the new kafka consumer API (0.8.3) to reduce dependencies and use long term supported kafka apis
[ https://issues.apache.org/jira/browse/STORM-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150649#comment-15150649 ] ASF GitHub Bot commented on STORM-822: -- Github user jianbzhou commented on the pull request: https://github.com/apache/storm/pull/986#issuecomment-185257444 Hi @hmcl, could you please share the patch? Thanks a lot!
[jira] [Commented] (STORM-676) Storm Trident support for sliding/tumbling windows
[ https://issues.apache.org/jira/browse/STORM-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150592#comment-15150592 ] ASF GitHub Bot commented on STORM-676: -- Github user satishd commented on the pull request: https://github.com/apache/storm/pull/1072#issuecomment-185238635 Upmerged, resolved and rebased. > Storm Trident support for sliding/tumbling windows > -- > > Key: STORM-676 > URL: https://issues.apache.org/jira/browse/STORM-676 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core > Reporter: Sriharsha Chintalapani > Assignee: Satish Duggana > Fix For: 1.0.0, 2.0.0 > > Attachments: StormTrident_windowing_support-676.pdf > >
[jira] [Commented] (STORM-1522) REST API throws invalid worker log links.
[ https://issues.apache.org/jira/browse/STORM-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150130#comment-15150130 ] ASF GitHub Bot commented on STORM-1522: --- Github user satishd commented on the pull request: https://github.com/apache/storm/pull/1076#issuecomment-185095359 @harshach Raised https://github.com/apache/storm/pull/1116 on the 1.x branch. > REST API throws invalid worker log links. > - > > Key: STORM-1522 > URL: https://issues.apache.org/jira/browse/STORM-1522 > Project: Apache Storm > Issue Type: Bug > Components: storm-core > Reporter: Satish Duggana > Assignee: Satish Duggana > Fix For: 1.0.0 > > > The REST API below returns a response which contains invalid worker log links > http://localhost:8080/api/v1/topology/
[jira] [Assigned] (STORM-1239) port backtype.storm.scheduler.IsolationScheduler to java
[ https://issues.apache.org/jira/browse/STORM-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xin Wang reassigned STORM-1239: --- Assignee: Xin Wang > port backtype.storm.scheduler.IsolationScheduler to java > - > > Key: STORM-1239 > URL: https://issues.apache.org/jira/browse/STORM-1239 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core > Reporter: Robert Joseph Evans > Assignee: Xin Wang > Labels: java-migration, jstorm-merger > > port the isolation scheduler to java
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150112#comment-15150112 ] ASF GitHub Bot commented on STORM-1255: --- Github user abhishekagarwal87 commented on the pull request: https://github.com/apache/storm/pull/1114#issuecomment-185092471 Please shorten the long function names in test files.
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150106#comment-15150106 ] ASF GitHub Bot commented on STORM-1255: --- Github user abhishekagarwal87 commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53132559
--- Diff: storm-core/test/jvm/org/apache/storm/utils/UtilsTest.java ---
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.HashMap;
+import org.junit.Test;
+import org.junit.Assert;
+
+import org.apache.curator.ensemble.exhibitor.ExhibitorEnsembleProvider;
+import org.apache.curator.ensemble.fixed.FixedEnsembleProvider;
+import org.apache.curator.framework.AuthInfo;
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+
+import org.apache.storm.Config;
+import org.apache.thrift.transport.TTransportException;
+
+import static org.mockito.Mockito.*;
--- End diff --
Replace with single-statement imports.
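The reviewer's point about single-statement imports can be illustrated with JDK classes (the class and method below are made up for this sketch, not part of the PR): each static member a file depends on is named explicitly rather than pulled in wholesale with a wildcard, so a reader can tell at a glance where every name comes from.

```java
// Explicit single-statement imports, rather than: import static java.lang.Math.*;
import static java.lang.Math.max;
import static java.lang.Math.min;

public class ImportStyleExample {

    // With explicit imports, it is obvious that max and min are Math's.
    static int clamp(int v, int lo, int hi) {
        return max(lo, min(v, hi));
    }

    public static void main(String[] args) {
        System.out.println(clamp(10, 0, 7)); // prints 7
        System.out.println(clamp(-3, 0, 7)); // prints 0
    }
}
```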
[GitHub] storm pull request: STORM-1255: port storm_utils.clj to java and s...
Github user abhishekagarwal87 commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53132559 --- Diff: storm-core/test/jvm/org/apache/storm/utils/UtilsTest.java --- @@ -0,0 +1,221 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.storm.utils; + +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.HashMap; +import org.junit.Test; +import org.junit.Assert; + +import org.apache.curator.ensemble.exhibitor.ExhibitorEnsembleProvider; +import org.apache.curator.ensemble.fixed.FixedEnsembleProvider; +import org.apache.curator.framework.AuthInfo; +import org.apache.curator.framework.CuratorFramework; +import org.apache.curator.framework.CuratorFrameworkFactory; + +import org.apache.storm.Config; +import org.apache.thrift.transport.TTransportException; + +import static org.mockito.Mockito.*; --- End diff -- Replace with single statement imports. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. 
If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (STORM-1255) port backtype.storm.utils-test to java
[ https://issues.apache.org/jira/browse/STORM-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150104#comment-15150104 ] ASF GitHub Bot commented on STORM-1255: --- Github user abhishekagarwal87 commented on a diff in the pull request: https://github.com/apache/storm/pull/1114#discussion_r53132456
--- Diff: storm-core/test/jvm/org/apache/storm/utils/TimeTest.java ---
@@ -0,0 +1,106 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.utils;
+
+import org.junit.Test;
+import org.junit.Assert;
+
+public class TimeTest{
+
+@Test
+public void secsToMillisLongTest(){
+Assert.assertEquals(Time.secsToMillisLong(0), 0);
+Assert.assertEquals(Time.secsToMillisLong(0.002), 2);
+Assert.assertEquals(Time.secsToMillisLong(1), 1000);
+Assert.assertEquals(Time.secsToMillisLong(1.08), 1080);
+Assert.assertEquals(Time.secsToMillisLong(10), 10000);
+Assert.assertEquals(Time.secsToMillisLong(10.1), 10100);
+}
+
+@Test
+public void ifNotSimulatingIsSimulatingReturnsFalse(){
+Assert.assertFalse(Time.isSimulating());
+}
+
+@Test
+public void ifSimulatingIsSimulatingReturnsTrue(){
+Time.startSimulating();
+Assert.assertTrue(Time.isSimulating());
+Time.stopSimulating();
+}
+
+@Test
+public void advanceTimeSimulatedTimeBy0Causes0DeltaTest(){
+Time.startSimulating();
+long current = Time.currentTimeMillis();
+Time.advanceTime(0);
+Assert.assertEquals(Time.deltaMs(current), 0);
+Time.stopSimulating();
+}
+
+@Test
+public void advanceTimeSimulatedTimeBy1000Causes1000MsDeltaTest(){
+Time.startSimulating();
+long current = Time.currentTimeMillis();
+Time.advanceTime(1000);
+Assert.assertEquals(Time.deltaMs(current), 1000);
+Time.stopSimulating();
+}
+
+@Test
+public void advanceTimeSimulatedTimeBy1500Causes1500MsDeltaTest(){
+Time.startSimulating();
+long current = Time.currentTimeMillis();
+Time.advanceTime(1500);
+Assert.assertEquals(Time.deltaMs(current), 1500);
+Time.stopSimulating();
+}
+
+@Test
+public void advanceTimeSimulatedTimeByNegative1500CausesNegative1500MsDeltaTest(){
--- End diff --
The test names are too long. There is no need to code the parameters in the test case name itself.
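One way to honor the reviewer's suggestion is to drive a single test body from a table of values instead of encoding each parameter in a method name. This is a sketch in plain Java (JUnit 4's Parameterized runner achieves the same thing); the class below and its two-line simulated clock are stand-ins for Storm's `Time.advanceTime`/`Time.deltaMs`, not the real implementation.

```java
// Sketch: the four advanceTime...Delta tests collapsed into one
// data-driven check, so no parameter needs to appear in a test name.
public class AdvanceTimeDeltaTest {

    // Minimal stand-in for Storm's simulated clock:
    // advanceTime shifts simulated time, deltaMs reports elapsed millis.
    static long simulated = 0;

    static void advanceTime(long ms) { simulated += ms; }

    static long deltaMs(long start) { return simulated - start; }

    public static void main(String[] args) {
        long[] deltas = {0, 1000, 1500, -1500};
        for (long delta : deltas) {
            long start = simulated;
            advanceTime(delta);
            if (deltaMs(start) != delta) {
                // The failure message carries the parameter instead of the name.
                throw new AssertionError("advanceTime(" + delta + ") did not produce that delta");
            }
        }
        System.out.println("all deltas ok");
    }
}
```

The failure message, not the method name, identifies the offending value, which keeps names short without losing diagnostic context.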