[jira] [Commented] (STORM-1040) SQL support for Storm

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033278#comment-15033278
 ] 

ASF GitHub Bot commented on STORM-1040:
---

GitHub user haohui opened a pull request:

https://github.com/apache/storm/pull/907

Upmerge STORM-1040 to the latest master

This PR merges the latest master into the STORM-1040 branch.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/haohui/storm STORM-1040

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/907.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #907


commit f6268a08499072e0f091df19605ee64fb3e80ba3
Author: Thomas Graves 
Date:   2015-11-03T19:54:42Z

fix typo

commit d59e936af6ed797026ad5b6a86b3b4b28f5660ff
Author: Kyle Nusbaum 
Date:   2015-11-03T20:31:51Z

Fixing spacing in pacemaker_test

commit b1fca5e65f877d1ef73e68bd6f87ad11340a01c1
Author: Thomas Graves 
Date:   2015-11-03T20:49:40Z

add note about supervisor being under supervision

commit 664b0ce39a26e92cbc7a08ebb1f099871d30ca9d
Author: Robert (Bobby) Evans 
Date:   2015-11-03T20:56:23Z

Addressed review comments

commit 0f0d238deab48178da78cb6f0645d0323b95cafa
Author: Boyang Jerry Peng 
Date:   2015-10-27T19:56:33Z

wrapped exceptions in runtime exception

commit 5c6368c91ceb24448e65a94eae57bad007ac0728
Author: Boyang Jerry Peng 
Date:   2015-11-03T21:12:07Z

fixing formating

commit 25f31a0eee02ceee75f72a8f82f9f33e98cdd192
Author: Kishor Patil 
Date:   2015-11-03T21:16:58Z

Moving Documentation to more appropriate location

commit 780a472ad2dc4e838b86f65f0c40eb9443046038
Author: Boyang Jerry Peng 
Date:   2015-11-03T23:13:08Z

[YSTORM-1433] - normalize the scales of CPU/Mem/Net when choosing the best 
node

commit 8d91ad942374108fce588d3a144cb1831d505521
Author: Kishor Patil 
Date:   2015-11-04T06:24:19Z

Update REST API documentation for profiling and debugging endpoints.

commit 77429d07d7c080962b8c414cf02ee93839b25ca2
Author: Arun Mahadevan 
Date:   2015-11-04T11:29:19Z

STORM-1167: Add windowing support for storm core

1. Added new interface IWindowedBolt and wrapper classes for bolts that 
need windowing support
2. New constants for specifying the window length and sliding interval
3. WindowManager and related classes that handles the windowing logic
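For readers unfamiliar with STORM-1167's terminology, the window length and sliding interval can be illustrated with a toy count-based window (a minimal sketch; the class and method names are hypothetical, and this is not Storm's actual WindowManager):

```java
import java.util.ArrayList;
import java.util.List;

// Toy count-based sliding window: snapshot a window of the last `length`
// events every `interval` events. Illustrative only; not Storm's WindowManager.
public class SlidingWindowSketch {
    public static <T> List<List<T>> windows(List<T> events, int length, int interval) {
        List<List<T>> out = new ArrayList<>();
        List<T> buffer = new ArrayList<>();
        int sinceLastEmit = 0;
        for (T e : events) {
            buffer.add(e);
            if (buffer.size() > length) {
                buffer.remove(0);                 // evict the oldest event
            }
            if (++sinceLastEmit == interval) {
                out.add(new ArrayList<>(buffer)); // snapshot the current window
                sinceLastEmit = 0;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Window length 3, sliding interval 2 over events 1..5
        System.out.println(windows(List.of(1, 2, 3, 4, 5), 3, 2)); // [[1, 2], [2, 3, 4]]
    }
}
```

The first emitted window is partial because fewer than `length` events have arrived by the first sliding interval.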

commit 8d6ab39ac7d6625254c9dedd217b9d0ce63531fe
Author: zhuol 
Date:   2015-11-04T16:47:19Z

[STORM-1163] using rmr rather than rmpath for remove worker-root

commit a4876b2dc972e4a05f820125da2b933233a217ef
Author: Kishor Patil 
Date:   2015-11-04T17:03:32Z

Fixing rest-api documentation and relocating it.

commit f35ce6016ad519dced9aff44bb8458cc02740f93
Author: Boyang Jerry Peng 
Date:   2015-11-04T15:57:59Z

add a +1 in the denominator to deal with the case if available resources is 
zero

commit 2b859fb02db965ea7a68bd69c667526913cf6711
Author: Derek Dagit 
Date:   2015-11-04T17:08:54Z

removes noisy log message & a TODO

commit 6e0fc9ea4da8d3d2ae919c05000bf6f2a1c709b8
Author: Robert (Bobby) Evans 
Date:   2015-11-04T17:10:20Z

Merge branch 'profiling-worker' of 
https://github.com/kishorvpatil/incubator-storm into STORM-1157

STORM-1157: Adding dynamic profiling for worker, restarting worker, jstack, 
heap dump, and profiling

commit f3ed08b9ffb32f0d0c756f176756484366f37f2c
Author: Robert (Bobby) Evans 
Date:   2015-11-04T17:10:53Z

Added STORM-1157 to Changelog

commit 686e9061fbe921fed23b81b02faeb5965556b6b8
Author: zhuol 
Date:   2015-11-04T17:11:08Z

use rmr also for worker/pids

commit b98acd3c41278870e10cc0c1123ae69f24070ad6
Author: Boyang Jerry Peng 
Date:   2015-11-04T19:45:12Z

[STORM-1158] - Storm metrics to profile various storm functions

commit 8501184e39b565888995b4262c1758e334428eff
Author: Boyang Jerry Peng 
Date:   2015-11-03T22:36:29Z

adding documentation for storm metrics

commit 3586bc3eb1d98ead97cee996f201e1b90405cb63
Author: Boyang Jerry Peng 
Date:   2015-11-04T19:07:16Z

fixing format and naming conventions

commit 4a57e500b3e0c3f3256e776357d1e1de1c7f5e49
Author: Sriharsha Chintalapani 
Date:   2015-11-04T20:04:59Z

Merge branch 'master' of https://github.com/DigitalPebble/storm

commit 77aa363ddf9de8a1bd0a11e86ee29c1c283424c2
Author: zhuol 
Date:   2015-11-04T20:24:18Z

[STORM-1170] Fix the producer alive issue in DisruptoQueueTest

commit faa33f922aa7deac0622de2d4d48c86d0cc61875
Author: Xin Wang 
Date:   2015-11-05T02:57:56Z

Check exceptions for response

commit 58f099421eeed8286d1e7729dad45a60de38f0c8
Author: Xin Wang 
Date:   2015-11-05T03:27:30Z

update TestUtils kafka configs

commit 0c2fce5bf8fa41430ca09ce6a55afef607b031d7
Author: Arun Mahadevan 
Date:   2015-11-05T09:31:38Z

Addressed review comments

commit 23f75d973bc242c4c42433cfc518bfd1435cf7d0
Author: Boyang Jerry Peng 
Date:   2015-11-05T16:56:59Z

revising documentation and adding additional metrics

commit e9c2495361b3c72d2a1ca1da
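The two resource-aware-scheduling commits above (normalizing the CPU/Mem/Net scales, and adding +1 to the denominator when available resources are zero) suggest a scoring function of roughly this shape (a hypothetical sketch, not the actual scheduler code):

```java
// Hypothetical node-scoring sketch motivated by the commits above: relate
// resource demand to availability, with a +1 in the denominator so a node
// with zero available resources never causes division by zero.
public class NodeScoreSketch {
    // Lower is better: demand relative to (availability + 1), summed per resource.
    public static double score(double cpuDemand, double cpuAvail,
                               double memDemand, double memAvail) {
        return cpuDemand / (cpuAvail + 1) + memDemand / (memAvail + 1);
    }

    public static void main(String[] args) {
        System.out.println(score(10, 0, 512, 0));     // zero availability is safe: 522.0
        System.out.println(score(10, 90, 512, 1536)); // plenty of headroom: small score
    }
}
```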

[jira] [Commented] (STORM-1075) Storm Cassandra connector

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033216#comment-15033216
 ] 

ASF GitHub Bot commented on STORM-1075:
---

Github user harshach commented on the pull request:

https://github.com/apache/storm/pull/827#issuecomment-160877676
  
@fhussonnois I am getting the following errors in the tests; can you take a look?
Tests in error: 
  
BatchCassandraWriterBoltTest.shouldBatchInsertGivenStaticTableNameAndDynamicQueryBuildFromAllTupleFields:41->executeAndAssertWith:57->BaseTopologyTest.runLocalTopologyAndWait:49
 » IllegalState
  
CassandraWriterBoltTest.shouldAsyncInsertGivenStaticTableNameAndDynamicQueryBuildFromAllTupleFields:40->executeAndAssertWith:56->BaseTopologyTest.runLocalTopologyAndWait:49
 » IllegalState


> Storm Cassandra connector
> -
>
> Key: STORM-1075
> URL: https://issues.apache.org/jira/browse/STORM-1075
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Sriharsha Chintalapani
>Assignee: Satish Duggana
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


FOSDEM 2016 - take action by 4th of December 2015

2015-11-30 Thread Roman Shaposhnik
As most of you probably know, FOSDEM 2016 (the biggest
100% free open source developer conference) is right 
around the corner:
   https://fosdem.org/2016/

We hope to have an ASF booth and we would love to see as
many ASF projects as possible present at various tracks
(AKA Developer rooms):
   https://fosdem.org/2016/schedule/#devrooms

This year, for the first time, we are running a dedicated
Big Data and HPC Developer Room, and given how much of that
open source development is done at the ASF, it would be great
to have folks submit talks to:
   https://hpc-bigdata-fosdem16.github.io

The CFPs for different Developer Rooms follow slightly 
different schedules, but if you submit by the end of this week 
you should be fine.

Finally, if you don't want to fish for the CFP submission URL,
here it is:
   https://fosdem.org/submit

If you have any questions, please email me *directly*.
Hope to see as many of you as possible in two months! 

Thanks,
Roman.


[jira] [Commented] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033115#comment-15033115
 ] 

ASF GitHub Bot commented on STORM-1357:
---

Github user haohui commented on the pull request:

https://github.com/apache/storm/pull/906#issuecomment-160856554
  
Commit 15d3c2e is cherry-picked from master. I just checked the Kafka 
website; the link is valid.


> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This JIRA proposes to add support for writing SQL results to Kafka streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033066#comment-15033066
 ] 

ASF GitHub Bot commented on STORM-1357:
---

Github user zhuoliu commented on a diff in the pull request:

https://github.com/apache/storm/pull/906#discussion_r46239771
  
--- Diff: external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java 
---
@@ -71,7 +66,7 @@ public void testKeyValue() {
 }
 
 private List generateTupleBatch(String key, String 
message, int batchsize) {
-List batch = new ArrayList();
+List batch = new ArrayList<>();
--- End diff --

We would like the deleted type declaration back.


> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This JIRA proposes to add support for writing SQL results to Kafka streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-1357. Support writing to Kafka streams i...

2015-11-30 Thread darionyaphet
Github user darionyaphet commented on a diff in the pull request:

https://github.com/apache/storm/pull/906#discussion_r46239419
  
--- Diff: external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java 
---
@@ -71,7 +66,7 @@ public void testKeyValue() {
 }
 
 private List generateTupleBatch(String key, String 
message, int batchsize) {
-List batch = new ArrayList();
+List batch = new ArrayList<>();
--- End diff --

@zhuoliu does removing the type declaration have some benefit?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
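For context on this review thread: the diff replaced an explicit type argument with Java 7's diamond operator. Both forms yield an identically typed list, so the debate is purely stylistic; a minimal illustration (the class name is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Java 7's diamond operator: the compiler infers the type argument from the
// declared type, so both lists below have the same compile-time and runtime type.
public class DiamondSketch {
    public static void main(String[] args) {
        List<String> explicit = new ArrayList<String>(); // pre-Java-7 style
        List<String> diamond  = new ArrayList<>();       // diamond: type inferred
        explicit.add("a");
        diamond.add("a");
        System.out.println(explicit.equals(diamond));                  // true
        System.out.println(explicit.getClass() == diamond.getClass()); // true
    }
}
```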


[jira] [Commented] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033052#comment-15033052
 ] 

ASF GitHub Bot commented on STORM-1357:
---

Github user zhuoliu commented on a diff in the pull request:

https://github.com/apache/storm/pull/906#discussion_r46239195
  
--- Diff: external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java 
---
@@ -71,7 +66,7 @@ public void testKeyValue() {
 }
 
 private List generateTupleBatch(String key, String 
message, int batchsize) {
-List batch = new ArrayList();
+List batch = new ArrayList<>();
--- End diff --

Could you revert this line?


> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This JIRA proposes to add support for writing SQL results to Kafka streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033047#comment-15033047
 ] 

ASF GitHub Bot commented on STORM-1357:
---

Github user zhuoliu commented on a diff in the pull request:

https://github.com/apache/storm/pull/906#discussion_r46239106
  
--- Diff: 
external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaState.java ---
@@ -66,33 +64,30 @@ public void commit(Long txid) {
 LOG.debug("commit is Noop.");
 }
 
-public void prepare(Map stormConf) {
+public void prepare(Properties options) {
 Validate.notNull(mapper, "mapper can not be null");
 Validate.notNull(topicSelector, "topicSelector can not be null");
-Map configMap = (Map) stormConf.get(KAFKA_BROKER_PROPERTIES);
-Properties properties = new Properties();
-properties.putAll(configMap);
-ProducerConfig config = new ProducerConfig(properties);
-producer = new Producer(config);
+producer = new KafkaProducer(options);
 }
 
 public void updateState(List tuples, TridentCollector 
collector) {
-String topic = null;
-for (TridentTuple tuple : tuples) {
-try {
-topic = topicSelector.getTopic(tuple);
-
-if(topic != null) {
-producer.send(new KeyedMessage(topic, 
mapper.getKeyFromTuple(tuple),
-mapper.getMessageFromTuple(tuple)));
-} else {
-LOG.warn("skipping key = " + 
mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
-}
-} catch (Exception ex) {
-String errorMsg = "Could not send message with key = " + 
mapper.getKeyFromTuple(tuple)
-+ " to topic = " + topic;
-LOG.warn(errorMsg, ex);
-throw new FailedException(errorMsg, ex);
+for (final TridentTuple tuple : tuples) {
+final String topic = topicSelector.getTopic(tuple);
+if(topic != null) {
+producer.send(new ProducerRecord(topic, 
mapper.getKeyFromTuple(tuple),
+mapper.getMessageFromTuple(tuple)),new Callback() {
+@Override
--- End diff --

Nit: lines 79 to 88 may only need a 4-space indent relative to line 77. Also, 
line 83 needs indentation.


> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This JIRA proposes to add support for writing SQL results to Kafka streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
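The pattern the diff above moves toward, an asynchronous send with a completion callback instead of a synchronous try/catch around `send()`, can be sketched with stand-in types (these are NOT the real kafka-clients classes; the real code uses `KafkaProducer`, `ProducerRecord`, and `Callback`):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types (NOT the real kafka-clients API) illustrating the diff's
// pattern: send each record asynchronously and report failures through a
// callback instead of catching exceptions synchronously around send().
public class AsyncSendSketch {
    interface Callback { void onCompletion(Exception exception); }

    static class MockProducer {
        final List<String> sent = new ArrayList<>();
        void send(String topic, String key, String value, Callback cb) {
            sent.add(topic + ":" + key + "=" + value);
            cb.onCompletion(null); // delivery succeeded in this mock
        }
    }

    public static void main(String[] args) {
        MockProducer producer = new MockProducer();
        List<String> failures = new ArrayList<>();
        for (String key : List.of("k1", "k2")) {
            // A topic selector may return null; the bolt logs and skips those tuples.
            String topic = key.equals("k2") ? null : "events";
            if (topic != null) {
                producer.send(topic, key, "v", e -> {
                    if (e != null) failures.add(key); // failures surface asynchronously
                });
            }
        }
        System.out.println(producer.sent); // [events:k1=v]
    }
}
```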


[GitHub] storm pull request: STORM-1357. Support writing to Kafka streams i...

2015-11-30 Thread darionyaphet
Github user darionyaphet commented on the pull request:

https://github.com/apache/storm/pull/906#issuecomment-160843954
  
Why not use the external storm-kafka component as the input and output 
stream?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread darion yaphets (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033019#comment-15033019
 ] 

darion yaphets commented on STORM-1357:
---

We should write a well-structured message to the Kafka broker. Maybe JSON is a 
good choice?

> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This jira proposes to add supports to write SQL results to Kafka streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
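A naive sketch of the suggestion above, serializing a SQL result row as a JSON object before writing it to Kafka (hand-rolled for illustration only; a real implementation would use a JSON library and escape string values properly):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Naive row-to-JSON serializer for illustration. No escaping, numbers
// unquoted, everything else quoted; a real implementation would use a library.
public class JsonRowSketch {
    public static String toJson(Map<String, Object> row) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append("\"").append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) sb.append(v);       // numbers unquoted
            else sb.append("\"").append(v).append("\""); // everything else quoted
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 1);
        row.put("word", "storm");
        System.out.println(toJson(row)); // {"id":1,"word":"storm"}
    }
}
```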


[jira] [Commented] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032990#comment-15032990
 ] 

ASF GitHub Bot commented on STORM-1357:
---

GitHub user haohui opened a pull request:

https://github.com/apache/storm/pull/906

STORM-1357. Support writing to Kafka streams in Storm SQL.

This PR adds support for streaming the results of Storm SQL to 
`KafkaProducer`. 

It contains two main components:

* Allow plugging in producer config in `CREATE TABLE`.
* Add serializers for primitive SQL fields.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/haohui/storm STORM-1357

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/906.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #906


commit 34c484b1ed9e4300a1d98256c1283511e70b62f2
Author: Xin Wang 
Date:   2015-09-18T05:58:06Z

use new kafka producer api

commit d054dfd2f5b76c14aa752cff2cb9a23c0070701e
Author: Xin Wang 
Date:   2015-09-18T06:05:21Z

use new kafka producer configs

commit d71fe3506264be385675c5ae8dd0ba4ac2e23ab6
Author: Xin Wang 
Date:   2015-11-05T03:27:30Z

update TestUtils kafka configs

commit 15dec2ec1901d46116c37e114909306a3b79fd85
Author: Xin Wang 
Date:   2015-09-18T07:19:44Z

correct storm-kafka readme file

correct readme file: link & configs

commit 02a19ca8ed82b860fc3e869c88a608b90ce9d790
Author: Haohui Mai 
Date:   2015-11-25T19:30:22Z

STORM-1352. Trident should support writing to multiple Kafka clusters.

commit a74f421995e83d3ac241dd674723776f434743f8
Author: Haohui Mai 
Date:   2015-12-01T02:58:29Z

STORM-1357. Support writing to Kafka streams in Storm SQL.




> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This JIRA proposes to add support for writing SQL results to Kafka streams.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
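For reference, the new Kafka producer API that these commits migrate to is configured through a `Properties` object. A minimal sketch of the config a `CREATE TABLE` statement would need to plug in (the broker address is a placeholder, and the serializer choice depends on the table's field types):

```java
import java.util.Properties;

// Minimal configuration for the new Kafka producer API. The key names follow
// the kafka-clients producer config; the serializers shown are the common
// string serializers and would vary with the table's field types.
public class ProducerConfigSketch {
    public static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps("localhost:9092"); // placeholder broker address
        System.out.println(p.getProperty("bootstrap.servers")); // localhost:9092
    }
}
```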


[jira] [Updated] (STORM-1358) Porting JStorm multi-thread mode of spout to Storm

2015-11-30 Thread Basti Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Basti Liu updated STORM-1358:
-
Description: 
There are two modes of spout in JStorm: "single-thread" and "multi-thread". The 
"single-thread" mode is similar to Storm, while the "multi-thread" mode 
separates the processing of ack/fail and nextTuple into two threads. This means 
we can stay in nextTuple for a long time without any side effect on ack/fail. 
Consider a Kafka spout as an example. We can start a consumer thread for Kafka 
during spout initialization. The consumer then pulls events from Kafka and 
publishes the retrieved events into a local queue, while nextTuple waits to 
read from this queue. When events are available, nextTuple is notified quickly 
and flushes them downstream. This model will likely deliver better performance 
than the "single-thread" mode.

In this mode, the max pending configuration of the spout might not work as 
expected, since it depends on how long we stay in nextTuple. Backpressure, 
however, is a good way to solve the flow-control problem.
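
The two-thread pattern described above can be sketched without any Storm dependencies; the class and method names below are hypothetical stand-ins for the spout's open()/nextTuple() and the Kafka consumer thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the "multi-thread" spout pattern: a background consumer thread
// publishes events into a local queue, while nextTuple() only drains the
// queue, so time spent fetching never blocks ack/fail processing.
public class MultiThreadSpoutSketch {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1024);

    // Stands in for the Kafka consumer thread started during spout init.
    void startConsumer(List<String> source) {
        Thread consumer = new Thread(() -> {
            for (String event : source) {
                try {
                    queue.put(event); // blocks when the local queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    // Stands in for nextTuple(): drain whatever is available and "emit" it.
    List<String> nextTuple() {
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        return batch;
    }

    public static void main(String[] args) {
        MultiThreadSpoutSketch spout = new MultiThreadSpoutSketch();
        spout.startConsumer(List.of("e1", "e2", "e3"));
        List<String> emitted = new ArrayList<>();
        while (emitted.size() < 3) {
            emitted.addAll(spout.nextTuple());
        }
        System.out.println(emitted);
    }
}
```

Because the consumer blocks on `put` when the queue fills, the queue capacity acts as a crude flow-control bound, which is why the description points at backpressure rather than max-pending for this mode.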

  was:
There are two modes of spout in JStorm: "single-thread" and "multi-thread". The 
"single-thread" mode is similar to Storm, while the "multi-thread" mode 
separates the processing of ack/fail and nextTuple into two threads. This means 
we can stay in nextTuple for a long time without any impact on ack/fail. 
Consider a Kafka spout as an example. We can start a consumer thread for Kafka 
during spout initialization. The consumer then pulls events from Kafka and 
publishes the retrieved events into a local queue, while nextTuple waits to 
read from this queue. When events are available, nextTuple is notified quickly 
and flushes them downstream. This model will likely deliver better performance 
than the "single-thread" mode.

In this mode, the max pending configuration of the spout might not work as 
expected, since it depends on how long we stay in nextTuple. Backpressure, 
however, is a good way to solve the flow-control problem.


> Porting JStorm multi-thread mode of spout to Storm
> --
>
> Key: STORM-1358
> URL: https://issues.apache.org/jira/browse/STORM-1358
> Project: Apache Storm
>  Issue Type: New Feature
>Reporter: Basti Liu
>Assignee: Basti Liu
>  Labels: jstorm-merger
>
> There are two modes of spout in JStorm: "single-thread" and "multi-thread". 
> The "single-thread" mode is similar to Storm, while the "multi-thread" mode 
> separates the processing of ack/fail and nextTuple into two threads. This 
> means we can stay in nextTuple for a long time without any side effect on 
> ack/fail. 
> Consider a Kafka spout as an example. We can start a consumer thread for 
> Kafka during spout initialization. The consumer then pulls events from Kafka 
> and publishes the retrieved events into a local queue, while nextTuple waits 
> to read from this queue. When events are available, nextTuple is notified 
> quickly and flushes them downstream. This model will likely deliver better 
> performance than the "single-thread" mode.
> In this mode, the max pending configuration of the spout might not work as 
> expected, since it depends on how long we stay in nextTuple. Backpressure, 
> however, is a good way to solve the flow-control problem.





[jira] [Created] (STORM-1358) Porting JStorm multi-thread mode of spout to Storm

2015-11-30 Thread Basti Liu (JIRA)
Basti Liu created STORM-1358:


 Summary: Porting JStorm multi-thread mode of spout to Storm
 Key: STORM-1358
 URL: https://issues.apache.org/jira/browse/STORM-1358
 Project: Apache Storm
  Issue Type: New Feature
Reporter: Basti Liu
Assignee: Basti Liu


There are two modes of spout in JStorm: "single-thread" and "multi-thread". The 
"single-thread" mode is similar to Storm, while the "multi-thread" mode 
separates the processing of ack/fail and nextTuple into two threads. This means 
we can stay in nextTuple for a long time without any impact on ack/fail. 
Consider a Kafka spout as an example. We can start a consumer thread for Kafka 
during spout initialization. The consumer then pulls events from Kafka and 
publishes the retrieved events into a local queue, while nextTuple waits to 
read from this queue. When events are available, nextTuple is notified quickly 
and flushes them downstream. This model will likely deliver better performance 
than the "single-thread" mode.

In this mode, the max pending configuration of the spout might not work as 
expected, since it depends on how long we stay in nextTuple. Backpressure, 
however, is a good way to solve the flow-control problem.





[jira] [Commented] (STORM-1351) Storm spouts and bolts need a way to communicate problems back to topology runner

2015-11-30 Thread Basti Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032944#comment-15032944
 ] 

Basti Liu commented on STORM-1351:
--

Yes, that is for a single spout. But I don't think we need to do this in a 
bolt, because after returning from execute(), the bolt thread will wait until 
there are tuples available in the receiving queue. Returning immediately from 
execute() in a bolt will not cause any CPU spike.

> Storm spouts and bolts need a way to communicate problems back to topology 
> runner
> 
>
> Key: STORM-1351
> URL: https://issues.apache.org/jira/browse/STORM-1351
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>
> A spout can have a problem generating a tuple in nextTuple() because 
>  a) there is currently no data to generate, or 
>  b) it is experiencing I/O issues.
> If the spout returns immediately from the nextTuple() call, nextTuple() will 
> be invoked again immediately, leading to a CPU spike. The spike lasts until 
> the situation is remedied by newly arriving data or by the I/O issues being 
> resolved.
> Currently, to work around this, spouts have to implement an exponential 
> backoff mechanism internally. There are two problems with this:
>  - each spout needs to implement the backoff logic
>  - since nextTuple() has an internal sleep and takes longer to return, the 
> latency metrics computation gets thrown off
> *Thoughts for Solution:*
> The spout should be able to indicate a 'no data', 'experiencing error', or 
> 'all good' status back to the caller of nextTuple() so that the right backoff 
> logic can kick in.
> - The most natural way to do this is via the return type of the nextTuple 
> method. Currently nextTuple() returns void. However, this would break source 
> and binary compatibility, since the new Storm would not be able to invoke the 
> methods on unmodified spouts. This breaking change can only be considered 
> prior to v1.0. 
> - Alternatively, this can be done by providing an additional method on the 
> collector to indicate the condition to the topology runner. The spout can 
> invoke it explicitly; the metrics can then also account for 'no data' and 
> 'error' attempts.
> - Alternatively, the topology runner may simply examine the collector to see 
> whether the nextTuple() call generated new data. In this case it cannot 
> distinguish between errors and no incoming data. 
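
For reference, the kind of internal backoff the issue says every spout currently has to reimplement might look like this minimal sketch (the class and method names are hypothetical, not Storm API):

```java
// Sketch of per-spout exponential backoff: the sleep time doubles on every
// empty nextTuple() call, is capped at a maximum, and resets to zero as soon
// as data flows again.
public class SpoutBackoff {
    private final long initialMs;
    private final long maxMs;
    private long currentMs;

    SpoutBackoff(long initialMs, long maxMs) {
        this.initialMs = initialMs;
        this.maxMs = maxMs;
        this.currentMs = 0;
    }

    // Called when nextTuple() produced nothing; returns how long to sleep.
    long onEmpty() {
        currentMs = (currentMs == 0) ? initialMs : Math.min(currentMs * 2, maxMs);
        return currentMs;
    }

    // Called when nextTuple() emitted data; resets the backoff.
    void onData() {
        currentMs = 0;
    }

    public static void main(String[] args) {
        SpoutBackoff b = new SpoutBackoff(1, 100);
        System.out.println(b.onEmpty()); // 1
        System.out.println(b.onEmpty()); // 2
        System.out.println(b.onEmpty()); // 4
        b.onData();
        System.out.println(b.onEmpty()); // 1 again after reset
    }
}
```

The sleep this implies inside nextTuple() is exactly what skews the latency metrics the issue complains about, which is why the proposal moves the status signal out to the topology runner.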





[jira] [Commented] (STORM-1351) Storm spouts and bolts need a way to communicate problems back to topology runner

2015-11-30 Thread Basti Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032923#comment-15032923
 ] 

Basti Liu commented on STORM-1351:
--

OK, I will create one for porting that.

> Storm spouts and bolts need a way to communicate problems back to topology 
> runner
> 
>
> Key: STORM-1351
> URL: https://issues.apache.org/jira/browse/STORM-1351
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>
> A spout can have a problem generating a tuple in nextTuple() because 
>  a) there is currently no data to generate, or 
>  b) it is experiencing I/O issues.
> If the spout returns immediately from the nextTuple() call, nextTuple() will 
> be invoked again immediately, leading to a CPU spike. The spike lasts until 
> the situation is remedied by newly arriving data or by the I/O issues being 
> resolved.
> Currently, to work around this, spouts have to implement an exponential 
> backoff mechanism internally. There are two problems with this:
>  - each spout needs to implement the backoff logic
>  - since nextTuple() has an internal sleep and takes longer to return, the 
> latency metrics computation gets thrown off
> *Thoughts for Solution:*
> The spout should be able to indicate a 'no data', 'experiencing error', or 
> 'all good' status back to the caller of nextTuple() so that the right backoff 
> logic can kick in.
> - The most natural way to do this is via the return type of the nextTuple 
> method. Currently nextTuple() returns void. However, this would break source 
> and binary compatibility, since the new Storm would not be able to invoke the 
> methods on unmodified spouts. This breaking change can only be considered 
> prior to v1.0. 
> - Alternatively, this can be done by providing an additional method on the 
> collector to indicate the condition to the topology runner. The spout can 
> invoke it explicitly; the metrics can then also account for 'no data' and 
> 'error' attempts.
> - Alternatively, the topology runner may simply examine the collector to see 
> whether the nextTuple() call generated new data. In this case it cannot 
> distinguish between errors and no incoming data. 





[jira] [Created] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread Haohui Mai (JIRA)
Haohui Mai created STORM-1357:
-

 Summary: Support writing to Kafka streams in Storm SQL
 Key: STORM-1357
 URL: https://issues.apache.org/jira/browse/STORM-1357
 Project: Apache Storm
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Haohui Mai


This jira proposes adding support for writing SQL results to Kafka streams.





[jira] [Updated] (STORM-1357) Support writing to Kafka streams in Storm SQL

2015-11-30 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated STORM-1357:
--
Component/s: storm-sql

> Support writing to Kafka streams in Storm SQL
> -
>
> Key: STORM-1357
> URL: https://issues.apache.org/jira/browse/STORM-1357
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This jira proposes adding support for writing SQL results to Kafka streams.





[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032757#comment-15032757
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user redsanket commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160803909
  
@revan2 updated docs. Let me know if you think more information needs to go 
in. Thanks


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this, a new API should be added to support uploading and 
> downloading dist cache items. storm-core.ser, storm-conf.ser, and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and needs nothing extra, but having an HDFS backend as well 
> would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs. It should also allow the blobs to be updated, switching the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process 
> of getting it ready to push back to open source shortly.
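
The atomic symlink switch described above can be sketched with java.nio (a minimal illustration, not the actual supervisor code; `Files.move` with `ATOMIC_MOVE` maps to a POSIX rename, so the worker always sees either the old blob or the new one):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Create the new symlink under a temporary name, then atomically rename it
// over the existing link, so readers never observe a missing or half-updated
// link.
public class SymlinkSwap {
    public static void swap(Path link, Path newBlob) throws Exception {
        Path tmp = link.resolveSibling(link.getFileName() + ".tmp");
        Files.deleteIfExists(tmp);
        Files.createSymbolicLink(tmp, newBlob);
        // Moves the link itself (not its target) in one atomic rename.
        Files.move(tmp, link, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("blobs");
        Path v1 = Files.writeString(dir.resolve("blob-v1"), "old");
        Path v2 = Files.writeString(dir.resolve("blob-v2"), "new");
        Path link = dir.resolve("current");
        Files.createSymbolicLink(link, v1);
        swap(link, v2);
        System.out.println(Files.readString(link)); // reads through the link: new
    }
}
```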





[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread redsanket
Github user redsanket commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160803909
  
@revan2 updated docs. Let me know if you think more information needs to go 
in. Thanks




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032497#comment-15032497
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user redsanket commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46205703
  
--- Diff: storm-core/src/jvm/backtype/storm/blobstore/ClientBlobStore.java 
---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.blobstore;
+
+import backtype.storm.daemon.Shutdownable;
+import backtype.storm.generated.AuthorizationException;
+import backtype.storm.generated.ReadableBlobMeta;
+import backtype.storm.generated.SettableBlobMeta;
+import backtype.storm.generated.KeyAlreadyExistsException;
+import backtype.storm.generated.KeyNotFoundException;
+import backtype.storm.utils.NimbusClient;
+
+import java.util.Iterator;
+import java.util.Map;
+
+public abstract class ClientBlobStore implements Shutdownable {
+    protected Map conf;
+
+    public abstract void prepare(Map conf);
+    protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException;
+    public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException;
+    protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException;
+    public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract Iterator<String> listKeys();
+    public abstract int getBlobReplication(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract int updateBlobReplication(String key, int replication) throws AuthorizationException, KeyNotFoundException;
+    public abstract boolean setClient(Map conf, NimbusClient client);
+    public abstract void createStateInZookeeper(String key);
+
+    public final AtomicOutputStream createBlob(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException {
+        if (meta != null && meta.is_set_acl()) {
+            BlobStoreAclHandler.validateSettableACLs(key, meta.get_acl());
+        }
+        return createBlobToExtend(key, meta);
+    }
+
+    public final void setBlobMeta(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException {
--- End diff --

sure will do as part of this PR itself


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this, a new API should be added to support uploading and 
> downloading dist cache items. storm-core.ser, storm-conf.ser, and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and needs nothing extra, but having an HDFS backend as well 
> would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs. It should also allow the blobs to be updated, switching the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process 
> of getting it ready to push back to open source shortly.

[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread redsanket
Github user redsanket commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46205703
  
--- Diff: storm-core/src/jvm/backtype/storm/blobstore/ClientBlobStore.java 
---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.blobstore;
+
+import backtype.storm.daemon.Shutdownable;
+import backtype.storm.generated.AuthorizationException;
+import backtype.storm.generated.ReadableBlobMeta;
+import backtype.storm.generated.SettableBlobMeta;
+import backtype.storm.generated.KeyAlreadyExistsException;
+import backtype.storm.generated.KeyNotFoundException;
+import backtype.storm.utils.NimbusClient;
+
+import java.util.Iterator;
+import java.util.Map;
+
+public abstract class ClientBlobStore implements Shutdownable {
+    protected Map conf;
+
+    public abstract void prepare(Map conf);
+    protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException;
+    public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException;
+    protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException;
+    public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract Iterator<String> listKeys();
+    public abstract int getBlobReplication(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract int updateBlobReplication(String key, int replication) throws AuthorizationException, KeyNotFoundException;
+    public abstract boolean setClient(Map conf, NimbusClient client);
+    public abstract void createStateInZookeeper(String key);
+
+    public final AtomicOutputStream createBlob(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException {
+        if (meta != null && meta.is_set_acl()) {
+            BlobStoreAclHandler.validateSettableACLs(key, meta.get_acl());
+        }
+        return createBlobToExtend(key, meta);
+    }
+
+    public final void setBlobMeta(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException {
--- End diff --

sure will do as part of this PR itself




[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160761949
  
Things look good to me.  I am +1 on this going in.  I had one comment about 
getting some documentation for the ClientBlobStore.java interface.  But that 
can go in later if we need to.




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032460#comment-15032460
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160761949
  
Things look good to me.  I am +1 on this going in.  I had one comment about 
getting some documentation for the ClientBlobStore.java interface.  But that 
can go in later if we need to.


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this, a new API should be added to support uploading and 
> downloading dist cache items. storm-core.ser, storm-conf.ser, and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and needs nothing extra, but having an HDFS backend as well 
> would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs. It should also allow the blobs to be updated, switching the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process 
> of getting it ready to push back to open source shortly.





[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032457#comment-15032457
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46203383
  
--- Diff: storm-core/src/jvm/backtype/storm/blobstore/ClientBlobStore.java 
---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.blobstore;
+
+import backtype.storm.daemon.Shutdownable;
+import backtype.storm.generated.AuthorizationException;
+import backtype.storm.generated.ReadableBlobMeta;
+import backtype.storm.generated.SettableBlobMeta;
+import backtype.storm.generated.KeyAlreadyExistsException;
+import backtype.storm.generated.KeyNotFoundException;
+import backtype.storm.utils.NimbusClient;
+
+import java.util.Iterator;
+import java.util.Map;
+
+public abstract class ClientBlobStore implements Shutdownable {
+    protected Map conf;
+
+    public abstract void prepare(Map conf);
+    protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException;
+    public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException;
+    protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException;
+    public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract Iterator<String> listKeys();
+    public abstract int getBlobReplication(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract int updateBlobReplication(String key, int replication) throws AuthorizationException, KeyNotFoundException;
+    public abstract boolean setClient(Map conf, NimbusClient client);
+    public abstract void createStateInZookeeper(String key);
+
+    public final AtomicOutputStream createBlob(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException {
+        if (meta != null && meta.is_set_acl()) {
+            BlobStoreAclHandler.validateSettableACLs(key, meta.get_acl());
+        }
+        return createBlobToExtend(key, meta);
+    }
+
+    public final void setBlobMeta(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException {
--- End diff --

BlobStore.java has great documentation, but ClientBlobStore does not. Could we 
update the documentation to describe what each command does and, just as 
importantly, how it determines who is performing the operation? In this case 
that appears to be implementation specific. This matters because 
HDFSBlobStoreClient and NimbusBlobStoreClient seem to do things differently, 
and it is important for the client to know how this works; perhaps this will 
require an update to their documentation as well.


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this, a new API should be added to support uploading and 
> downloading dist cache items. storm-core.ser, storm-conf.ser, and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and needs nothing extra, but having an HDFS backend as well 
> would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs. It should also allow the blobs to be updated, switching the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process 
> of getting it ready to push back to open source shortly.

[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46203383
  
--- Diff: storm-core/src/jvm/backtype/storm/blobstore/ClientBlobStore.java 
---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.blobstore;
+
+import backtype.storm.daemon.Shutdownable;
+import backtype.storm.generated.AuthorizationException;
+import backtype.storm.generated.ReadableBlobMeta;
+import backtype.storm.generated.SettableBlobMeta;
+import backtype.storm.generated.KeyAlreadyExistsException;
+import backtype.storm.generated.KeyNotFoundException;
+import backtype.storm.utils.NimbusClient;
+
+import java.util.Iterator;
+import java.util.Map;
+
+public abstract class ClientBlobStore implements Shutdownable {
+    protected Map conf;
+
+    public abstract void prepare(Map conf);
+    protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException;
+    public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException;
+    protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException;
+    public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract Iterator<String> listKeys();
+    public abstract int getBlobReplication(String key) throws AuthorizationException, KeyNotFoundException;
+    public abstract int updateBlobReplication(String key, int replication) throws AuthorizationException, KeyNotFoundException;
+    public abstract boolean setClient(Map conf, NimbusClient client);
+    public abstract void createStateInZookeeper(String key);
+
+    public final AtomicOutputStream createBlob(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException {
+        if (meta != null && meta.is_set_acl()) {
+            BlobStoreAclHandler.validateSettableACLs(key, meta.get_acl());
+        }
+        return createBlobToExtend(key, meta);
+    }
+
+    public final void setBlobMeta(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException {
--- End diff --

BlobStore.java has great documentation, but ClientBlobStore does not. Could we 
update the documentation to describe what each command does and, just as 
importantly, how it determines who is performing the operation? That appears 
to be implementation-specific, which matters because HDFSBlobStoreClient and 
NimbusBlobStoreClient seem to do things differently, and the client needs to 
know how this works. This will probably require updating the documentation for 
those classes as well.
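The `createBlob`/`createBlobToExtend` split in the diff above is a template method: the public `final` wrapper centralizes validation and delegates the store-specific work to a protected hook. A minimal stand-alone sketch of that pattern, with illustrative names (none of these are Storm classes):

```java
// Template-method sketch: the final wrapper owns validation, subclasses own
// the store-specific work. Names here are illustrative, not Storm's.
abstract class ValidatingStore {
    public final String create(String key) {
        if (key == null || key.isEmpty()) {           // centralized validation
            throw new IllegalArgumentException("invalid key: " + key);
        }
        return createImpl(key);                       // store-specific hook
    }

    // Subclasses implement only this; they cannot bypass the validation above.
    protected abstract String createImpl(String key);
}

class InMemoryStore extends ValidatingStore {
    @Override
    protected String createImpl(String key) {
        return "created:" + key;
    }
}
```

A subclass such as a Nimbus- or HDFS-backed client then differs only in the hook, which is also the natural place to document whose identity the operation runs under.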


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (STORM-874) Netty Threads do not handle Errors properly

2015-11-30 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-874.

   Resolution: Fixed
Fix Version/s: 0.11.0

This issue is addressed with STORM-885

> Netty Threads do not handle Errors properly
> ---
>
> Key: STORM-874
> URL: https://issues.apache.org/jira/browse/STORM-874
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.9.2-incubating, 0.10.0
>Reporter: Kishor Patil
>Assignee: Kishor Patil
> Fix For: 0.11.0
>
>
> When low on memory, netty thread could get OOM which if not handled correctly 
> can lead to unexpected behavior such as netty connection leaks.
> {code:java}
> java.lang.OutOfMemoryError: Direct buffer memory
>   at java.nio.Bits.reserveMemory(Bits.java:658) ~[?:1.8.0_25]
>   at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) 
> ~[?:1.8.0_25]
>   at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_25]
>   at 
> org.jboss.netty.buffer.ChannelBuffers.directBuffer(ChannelBuffers.java:167) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.buffer.ChannelBuffers.directBuffer(ChannelBuffers.java:151) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> backtype.storm.messaging.netty.MessageBatch.buffer(MessageBatch.java:101) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
>   at 
> backtype.storm.messaging.netty.MessageEncoder.encode(MessageEncoder.java:32) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:66)
>  ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
>  ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
>  ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
>  ~[netty-3.9.4.Final.jar:?]
>   at org.jboss.netty.channel.Channels.write(Channels.java:704) 
> ~[netty-3.9.4.Final.jar:?]
>   at org.jboss.netty.channel.Channels.write(Channels.java:671) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:248) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> backtype.storm.messaging.netty.Client.tryDeliverMessages(Client.java:226) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
>   at backtype.storm.messaging.netty.Client.send(Client.java:173) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
> {code}
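The fix discussed in this thread (see the PR title, "Add UncaughtExceptionHandler for n...") relies on the fact that an `Error` escaping a worker thread otherwise dies with the thread unobserved. A hedged, stand-alone sketch of the technique with plain JDK threads; this is not Storm's actual patch:

```java
public class UncaughtErrorDemo {
    static volatile String caught = "none";

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Simulate the failure mode from the stack trace above.
            throw new OutOfMemoryError("simulated direct buffer exhaustion");
        });
        // Without a handler, the Error silently kills the thread and leaks any
        // resources (e.g. connections) it owned. The handler is the one place
        // guaranteed to observe it and trigger cleanup or a controlled shutdown.
        worker.setUncaughtExceptionHandler((t, e) -> caught = e.getClass().getSimpleName());
        worker.start();
        worker.join();
        System.out.println("handler saw: " + caught);
    }
}
```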



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-874) Netty Threads do not handle Errors properly

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032443#comment-15032443
 ] 

ASF GitHub Bot commented on STORM-874:
--

Github user kishorvpatil closed the pull request at:

https://github.com/apache/storm/pull/597




[GitHub] storm pull request: [STORM-874] Add UncaughtExceptionHandler for n...

2015-11-30 Thread kishorvpatil
Github user kishorvpatil closed the pull request at:

https://github.com/apache/storm/pull/597




[jira] [Commented] (STORM-874) Netty Threads do not handle Errors properly

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032442#comment-15032442
 ] 

ASF GitHub Bot commented on STORM-874:
--

Github user kishorvpatil commented on the pull request:

https://github.com/apache/storm/pull/597#issuecomment-160759703
  
These changes went in as part of #838. So closing the PR and JIRA




[GitHub] storm pull request: [STORM-874] Add UncaughtExceptionHandler for n...

2015-11-30 Thread kishorvpatil
Github user kishorvpatil commented on the pull request:

https://github.com/apache/storm/pull/597#issuecomment-160759703
  
These changes went in as part of #838. So closing the PR and JIRA




[jira] [Commented] (STORM-1351) Storm spouts and bolts need a way to communicate problems back to topology runner

2015-11-30 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032431#comment-15032431
 ] 

Roshan Naik commented on STORM-1351:


Thanks [~basti.lj] for the insight on the CPU usage drop with a simple 1 ms 
timeout. I assume that's for a single spout/bolt? That seems adequate for many 
cases.

Given the prevalence of exponential back-off, there will surely be users who 
would prefer to employ it, and we could offer that as an option, as 
[~revans2] suggests.
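The back-off being discussed could look roughly like this — a hedged sketch, not Storm's implementation: the spout sleeps an exponentially growing, capped interval while nextTuple() finds no data, and resets on the next successful emit.

```java
// Hypothetical helper, not a Storm API: tracks consecutive idle calls and
// yields a capped, exponentially growing sleep interval in milliseconds.
public class Backoff {
    private final long baseMs;
    private final long maxMs;
    private int idleCalls = 0;

    public Backoff(long baseMs, long maxMs) {
        this.baseMs = baseMs;
        this.maxMs = maxMs;
    }

    /** nextTuple() produced nothing: return how long to sleep before retrying. */
    public long onIdle() {
        // Cap the shift count to avoid overflow before applying the max cap.
        long delay = Math.min(maxMs, baseMs << Math.min(idleCalls, 20));
        idleCalls++;
        return delay;
    }

    /** nextTuple() emitted a tuple: reset to the base delay. */
    public void onEmit() {
        idleCalls = 0;
    }
}
```

With `baseMs = 1` the delays run 1, 2, 4, ... ms up to `maxMs`, which fits the observation above that even a flat 1 ms sleep already removes most of the CPU spike.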



> Storm spouts and bolts need a way to communicate problems back to topology 
> runner
> 
>
> Key: STORM-1351
> URL: https://issues.apache.org/jira/browse/STORM-1351
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>
> A spout can have a problem generating a tuple in nextTuple() because
>  - a) there is no data to generate currently
>  - b) it is experiencing I/O issues
> If the spout returns immediately from nextTuple(), then nextTuple() will be 
> invoked again immediately, leading to a CPU spike. The spike lasts until the 
> situation is remedied by newly arriving data or by the I/O issues getting 
> resolved.
> Currently, to work around this, spouts have to implement an exponential 
> backoff mechanism internally. There are two problems with this:
>  - each spout needs to implement the backoff logic itself
>  - since nextTuple() sleeps internally and takes longer to return, the 
> latency metrics computation gets thrown off
> *Thoughts for Solution:*
> The spout should be able to indicate a 'no data', 'experiencing error' or 
> 'all good' status back to the caller of nextTuple() so that the right backoff 
> logic can kick in.
> - The most natural way to do this is via the return type of nextTuple(). 
> Currently nextTuple() returns void. However, changing it would break source 
> and binary compatibility, since the new Storm would not be able to invoke the 
> methods on unmodified spouts. This breaking change can only be considered 
> prior to v1.0.
> - Alternatively, an additional method could be provided on the collector for 
> the spout to invoke explicitly to indicate the condition to the topology 
> runner. The metrics could then also account for 'no data' and 'error' 
> attempts.
> - Alternatively, the topology runner could simply examine the collector to 
> see whether the nextTuple() call generated new data. In this case it cannot 
> distinguish between errors and no incoming data.





[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032424#comment-15032424
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user redsanket commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160755034
  
@d2r I have squashed the commits.
Thanks a lot to @revans2, @kishorvpatil, @unsleepy22, @ptgoetz, @knusbaum, 
and @d2r for the thorough review and for helping me fix the issues.


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this a new API should be added to support uploading and 
> downloading dist cache items.  storm-core.ser, storm-conf.ser and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and does not need anything extra.  But having an HDFS backend 
> too would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs.  It should also allow the blobs to be updated and switch the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process 
> of getting it ready to push back to open source shortly.
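The atomic symlink switch described above can be sketched with plain `java.nio.file` (a hedged illustration of the mechanism, not Storm's supervisor code): create the new link under a temporary name, then rename it over the old one, so a worker resolving the link never observes it missing.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SymlinkSwap {
    /** Repoint `link` at `newTarget` without a window where the link is absent. */
    public static void repoint(Path link, Path newTarget) throws Exception {
        Path tmp = link.resolveSibling(link.getFileName() + ".tmp");
        Files.deleteIfExists(tmp);                 // clear any stale temp link
        Files.createSymbolicLink(tmp, newTarget);  // build new link under a temp name
        // On POSIX filesystems this maps to rename(2), which atomically
        // replaces the existing link.
        Files.move(tmp, link, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

Readers that resolve the link mid-swap see either the old blob or the new one, never a dangling or missing path, which is what lets workers keep running while a blob is updated.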





[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread redsanket
Github user redsanket commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160755034
  
@d2r I have squashed the commits.
Thanks a lot to @revans2, @kishorvpatil, @unsleepy22, @ptgoetz, @knusbaum, 
and @d2r for the thorough review and for helping me fix the issues.




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032292#comment-15032292
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user d2r commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160730596
  
+1, thanks for your work and patience, @redsanket 

@revans2, @kishorvpatil , @unsleepy22 , @ptgoetz, It looks as if your 
comments have been addressed.

@redsanket , Since there are a lot of commits on this branch, would you 
squash them now before we merge?




[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread d2r
Github user d2r commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160730596
  
+1, thanks for your work and patience, @redsanket 

@revans2, @kishorvpatil , @unsleepy22 , @ptgoetz, It looks as if your 
comments have been addressed.

@redsanket , Since there are a lot of commits on this branch, would you 
squash them now before we merge?




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032268#comment-15032268
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user knusbaum commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160727100
  
+1




[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread knusbaum
Github user knusbaum commented on the pull request:

https://github.com/apache/storm/pull/845#issuecomment-160727100
  
+1




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15031927#comment-15031927
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user redsanket commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46154410
  
--- Diff: pom.xml ---
@@ -469,7 +475,11 @@
         <artifactId>curator-client</artifactId>
         <version>${curator.version}</version>
       </dependency>
-
+      <dependency>
+        <groupId>org.apache.curator</groupId>
+        <artifactId>curator-test</artifactId>
+        <version>${curator.version}</version>
--- End diff --

addressed




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15031925#comment-15031925
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user redsanket commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46154395
  
--- Diff: 
external/storm-hdfs/src/test/java/org/apache/storm/hdfs/blobstore/BlobStoreTest.java
 ---
@@ -0,0 +1,518 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.hdfs.blobstore;
+
+import backtype.storm.Config;
+import backtype.storm.blobstore.AtomicOutputStream;
+import backtype.storm.blobstore.BlobStore;
+import backtype.storm.blobstore.BlobStoreAclHandler;
+import backtype.storm.generated.AccessControl;
+import backtype.storm.generated.AuthorizationException;
+import backtype.storm.generated.KeyAlreadyExistsException;
+import backtype.storm.generated.KeyNotFoundException;
+import backtype.storm.generated.ReadableBlobMeta;
+import backtype.storm.generated.SettableBlobMeta;
+import backtype.storm.generated.AccessControlType;
+
+import backtype.storm.security.auth.NimbusPrincipal;
+import backtype.storm.security.auth.SingleUserPrincipal;
+import backtype.storm.utils.Utils;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.UUID;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.Iterator;
+import java.util.Arrays;
+import java.util.List;
+import java.util.ArrayList;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.*;
+
+public class BlobStoreTest {
+  private static final Logger LOG = 
LoggerFactory.getLogger(BlobStoreTest.class);
+  protected static MiniDFSCluster dfscluster = null;
+  protected static Configuration hadoopConf = null;
+  URI base;
+  File baseFile;
+  private static Map conf = new HashMap();
+  public static final int READ = 0x01;
+  public static final int WRITE = 0x02;
+  public static final int ADMIN = 0x04;
+
+  @Before
+  public void init() {
+initializeConfigs();
+baseFile = new File("/tmp/blob-store-test-"+UUID.randomUUID());
+base = baseFile.toURI();
+  }
+
+  @After
+  public void cleanup() throws IOException {
+FileUtils.deleteDirectory(baseFile);
+  }
+
+  @AfterClass
+  public static void cleanupAfterClass() throws IOException {
+if (dfscluster != null) {
+  dfscluster.shutdown();
+}
+  }
+
+  // Method which initializes nimbus admin
+  public static void initializeConfigs() {
+conf.put(Config.NIMBUS_ADMINS,"admin");
+conf.put(Config.NIMBUS_SUPERVISOR_USERS,"supervisor");
+  }
+
+  //Gets Nimbus Subject with NimbusPrincipal set on it
+  public static Subject getNimbusSubject() {
+Subject nimbus = new Subject();
+nimbus.getPrincipals().add(new NimbusPrincipal());
+return nimbus;
+  }
+
+  // Overload of assertStoreHasExactly that accommodates a Subject, in order to check authorization
+  public static void assertStoreHasExactly(BlobStore store, Subject who, String ... keys)
+      throws IOException, KeyNotFoundException, AuthorizationException {
+    Set<String> expected = new HashSet<String>(Arrays.asList(keys));

[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread redsanket
Github user redsanket commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46154395
  
--- Diff: 
external/storm-hdfs/src/test/java/org/apache/storm/hdfs/blobstore/BlobStoreTest.java
 ---
@@ -0,0 +1,518 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.hdfs.blobstore;
+
+import backtype.storm.Config;
+import backtype.storm.blobstore.AtomicOutputStream;
+import backtype.storm.blobstore.BlobStore;
+import backtype.storm.blobstore.BlobStoreAclHandler;
+import backtype.storm.generated.AccessControl;
+import backtype.storm.generated.AuthorizationException;
+import backtype.storm.generated.KeyAlreadyExistsException;
+import backtype.storm.generated.KeyNotFoundException;
+import backtype.storm.generated.ReadableBlobMeta;
+import backtype.storm.generated.SettableBlobMeta;
+import backtype.storm.generated.AccessControlType;
+
+import backtype.storm.security.auth.NimbusPrincipal;
+import backtype.storm.security.auth.SingleUserPrincipal;
+import backtype.storm.utils.Utils;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.UUID;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.Iterator;
+import java.util.Arrays;
+import java.util.List;
+import java.util.ArrayList;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.*;
+
+public class BlobStoreTest {
+  private static final Logger LOG = 
LoggerFactory.getLogger(BlobStoreTest.class);
+  protected static MiniDFSCluster dfscluster = null;
+  protected static Configuration hadoopConf = null;
+  URI base;
+  File baseFile;
+  private static Map conf = new HashMap();
+  public static final int READ = 0x01;
+  public static final int WRITE = 0x02;
+  public static final int ADMIN = 0x04;
+
+  @Before
+  public void init() {
+    initializeConfigs();
+    baseFile = new File("/tmp/blob-store-test-" + UUID.randomUUID());
+    base = baseFile.toURI();
+  }
+
+  @After
+  public void cleanup() throws IOException {
+    FileUtils.deleteDirectory(baseFile);
+  }
+
+  @AfterClass
+  public static void cleanupAfterClass() throws IOException {
+    if (dfscluster != null) {
+      dfscluster.shutdown();
+    }
+  }
+
+  // Method which initializes nimbus admin
+  public static void initializeConfigs() {
+    conf.put(Config.NIMBUS_ADMINS, "admin");
+    conf.put(Config.NIMBUS_SUPERVISOR_USERS, "supervisor");
+  }
+
+  //Gets Nimbus Subject with NimbusPrincipal set on it
+  public static Subject getNimbusSubject() {
+    Subject nimbus = new Subject();
+    nimbus.getPrincipals().add(new NimbusPrincipal());
+    return nimbus;
+  }
+
+  // Overloading the assertStoreHasExactly method to accommodate a Subject in order to check for authorization
+  public static void assertStoreHasExactly(BlobStore store, Subject who, String ... keys)
+      throws IOException, KeyNotFoundException, AuthorizationException {
+    Set<String> expected = new HashSet<String>(Arrays.asList(keys));
+    Set<String> found = new HashSet<String>();
+    Iterator<String> c = store.listKeys();
+    while (c.hasNext()) {
+      String keyName = c.next();
+      found.add(keyName);
+    }
+    Set<String> extra = new HashSet<String>(found);
+    extra.removeAll(
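The `extra.removeAll(` call above begins a standard set-difference check: anything in `found` but not in `expected` is "extra", and anything in `expected` but not in `found` is "missing". A self-contained sketch of that pattern (the class and helper name below are illustrative, not taken from the patch):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class StoreAssertSketch {
    // Returns true when found contains exactly the expected keys:
    // computes "extra" (found - expected) and "missing" (expected - found)
    // via removeAll, the same idiom the truncated assertion above uses.
    static boolean hasExactly(Set<String> expected, Set<String> found) {
        Set<String> extra = new HashSet<String>(found);
        extra.removeAll(expected);            // keys present that should not be
        Set<String> missing = new HashSet<String>(expected);
        missing.removeAll(found);             // keys absent that should be present
        return extra.isEmpty() && missing.isEmpty();
    }

    public static void main(String[] args) {
        Set<String> expected = new HashSet<String>(Arrays.asList("key1", "key2"));
        // Order does not matter for set equality.
        System.out.println(hasExactly(expected,
                new HashSet<String>(Arrays.asList("key2", "key1"))));   // true
        System.out.println(hasExactly(expected,
                new HashSet<String>(Arrays.asList("key1", "key2", "key3")))); // false
    }
}
```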

[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-30 Thread redsanket
Github user redsanket commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r46154410
  
--- Diff: pom.xml ---
@@ -469,7 +475,11 @@
                 <artifactId>curator-client</artifactId>
                 <version>${curator.version}</version>
             </dependency>
-
+            <dependency>
+                <groupId>org.apache.curator</groupId>
+                <artifactId>curator-test</artifactId>
+                <version>${curator.version}</version>
+            </dependency>
--- End diff --

addressed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Multiple query processing in apache storm

2015-11-30 Thread Bobby Evans
We don't currently support rewriting a topology while it is in flight.  There 
are a few tunable parameters that you can modify while the topology is in 
flight using the rebalance command, but not a lot of them. We hope to add more 
over time.  If there are specific things you would like to change dynamically 
please let us know.
 - Bobby 


On Wednesday, November 25, 2015 9:20 AM, Renya nath N  
wrote:
 

 Thank you  for the information

On Wed, Nov 25, 2015 at 6:39 PM, Nathan Leung  wrote:

> Obviously you cannot change what is in your jar file but if the bolt is
> already compiled you can try flux to change how your topology is linked.
> http://storm.apache.org/documentation/flux.html
>
> On Wed, Nov 25, 2015 at 5:03 AM, Renya nath N  wrote:
>
> > Hi Ankur,
> >
> > Thank you for valuable reply,
> > Actually I was doing a project on intelligent query placement strategy
> for
> > real time analytics. Dynamic topology evolution is the main focus. Is it
> > possible to dynamically tune a topology when new queries (in terms of
> > bolts) are submitted to the system?
> >
> >
> > Thank you
> > Renya Nath N
> >
> > On Wed, Nov 25, 2015 at 1:55 PM, Ankur Garg 
> wrote:
> >
> > > Hi Renya,
> > >
> > > Can you elaborate on what exactly you are looking for?
> > >
> > > Specifically, what is a query here? Are these queries independent of
> > > each other? Where are you getting/retrieving these queries from?
> > >
> > > Nevertheless, assuming multiple queries means simple multiprocessing, I
> > > think you are looking for the parallelism hint when defining your
> > > topologies, like:
> > >
> > > TopologyBuilder topicTopologyBuilder = new TopologyBuilder();
> > >
> > > topicTopologyBuilder.setSpout("spoutName", new TopicListenerSpout(), 1);
> > > topicTopologyBuilder.setBolt("boltName", new TopicMysqlBolt(), 10)
> > >     .shuffleGrouping("spoutName");
> > >
> > > The third parameter is what allows multiple instances/threads of spouts
> > > and bolts.
> > >
> > >
> > >
> > > On Wed, Nov 25, 2015 at 1:16 PM, Renya nath N 
> > wrote:
> > >
> > > > Sir,
> > > >
> > > >  How can I process multiple queries in Apache Storm?
> > > > Do I need to create separate topology for each query?
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Thank you
> > > > Renya Nath N
> > > >
> > >
> >
>
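Nathan's flux suggestion in the thread above can be made concrete with a minimal topology definition file. This is a sketch only: the class names are hypothetical placeholders standing in for the spout/bolt classes from Ankur's example, and the `parallelism` fields correspond to the third parameter of `setSpout`/`setBolt`:

```yaml
# Minimal flux sketch; class names are hypothetical placeholders.
name: "query-topology"

spouts:
  - id: "topic-spout"
    className: "org.example.TopicListenerSpout"   # hypothetical
    parallelism: 1

bolts:
  - id: "mysql-bolt"
    className: "org.example.TopicMysqlBolt"       # hypothetical
    parallelism: 10

streams:
  - name: "topic-spout --> mysql-bolt"
    from: "topic-spout"
    to: "mysql-bolt"
    grouping:
      type: SHUFFLE
```

Because flux reads this definition at submission time, rewiring which bolts consume which streams becomes a config edit rather than a recompile, which is the point Nathan makes about already-compiled bolts.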


  

[jira] [Commented] (STORM-1351) Storm spouts and bolts need a way to communicate problems back to topology runner

2015-11-30 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15031800#comment-15031800
 ] 

Robert Joseph Evans commented on STORM-1351:


Multi-thread spout support is interesting, and is something we could support.  
[~basti.lj] I don't think we have a JIRA for porting that functionality over 
yet.  If we don't could you file one? We can move discussion about that to the 
corresponding JIRA.

Exponential backoff makes sense if you want to avoid wasting a resource and are 
willing to sacrifice latency to do that.  I think a 1ms fixed sleep makes the 
most sense for the default, but if someone wants to donate a bounded 
exponential backoff ISpoutWaitStrategy that would be fine with me.

> Storm spouts and bolts need a way to communicate problems back to topology 
> runner
> 
>
> Key: STORM-1351
> URL: https://issues.apache.org/jira/browse/STORM-1351
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>
> A spout can have trouble generating a tuple in nextTuple() because 
>  - a) there is no data to generate currently 
>  - b) there are I/O issues it is experiencing
> If the spout returns immediately from the nextTuple() call, then 
> nextTuple() will be invoked again immediately, leading to a CPU spike. The CPU 
> spike would last until the situation is remedied by newly arriving data or the 
> I/O issues being resolved.
> Currently, to work around this, spouts have to implement an 
> exponential backoff mechanism internally. There are two problems with this:
>  - each spout needs to implement this backoff logic
>  - since nextTuple() has an internal sleep and takes longer to return, the 
> latency metrics computation gets thrown off
> *Thoughts for Solution:*
> The spout should be able to indicate a 'no data',  'experiencing error' or 
> 'all good' status back to the caller of nextTuple() so that the right backoff 
> logic can kick in.
> - The most natural way to do this is via the return type of the nextTuple 
> method. Currently nextTuple() returns void. However, this would break source 
> and binary compatibility, since the new Storm would not be able to invoke the 
> method on unmodified spouts. This breaking change can only be considered as an 
> option prior to v1.0. 
> - Alternatively, this can be done by providing an additional method on the 
> collector to indicate the condition to the topology runner. The spout can 
> invoke this explicitly. The metrics can then also account for 'no data' and 
> 'error' attempts.
> - Alternatively, the topology runner may just examine the collector to see 
> whether new data was generated by the nextTuple() call. In this case it cannot 
> distinguish between errors vs. no incoming data. 
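The bounded exponential backoff strategy discussed in the comment above might be sketched as follows. This is a standalone illustration under stated assumptions: the class and method names are hypothetical, and a real version would implement backtype.storm.spout.ISpoutWaitStrategy (reading the base and cap from the topology conf in prepare()) rather than being freestanding:

```java
// Standalone sketch of a bounded exponential backoff wait strategy.
// Hypothetical names; a real implementation would implement
// backtype.storm.spout.ISpoutWaitStrategy and read baseMs/capMs from conf.
public class BoundedBackoffWait {
    private final long baseMs;  // sleep after the first empty nextTuple()
    private final long capMs;   // upper bound on the sleep

    public BoundedBackoffWait(long baseMs, long capMs) {
        this.baseMs = baseMs;
        this.capMs = capMs;
    }

    // streak = number of consecutive nextTuple() calls that emitted nothing.
    // Doubles the delay per empty call, bounded above by capMs.
    public long delayFor(long streak) {
        if (streak <= 0) {
            return 0;
        }
        long shift = Math.min(streak - 1, 62);  // avoid overflowing the shift
        long delay = baseMs << shift;
        if (delay < 0 || delay > capMs) {       // overflow or above the cap
            delay = capMs;
        }
        return delay;
    }

    // Mirrors the emptyEmit(streak) contract of a spout wait strategy.
    public void emptyEmit(long streak) throws InterruptedException {
        Thread.sleep(delayFor(streak));
    }
}
```

With baseMs = 1 and capMs = 100 the sleeps grow 1, 2, 4, ... and plateau at 100ms, which trades a little latency for not spinning the CPU, exactly the trade-off described in the comment above.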



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (STORM-1276) port backtype.storm.daemon.nimbus to java

2015-11-30 Thread Basti Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Basti Liu updated STORM-1276:
-
Attachment: Nimbus_Diff.pdf

> port backtype.storm.daemon.nimbus to java
> -
>
> Key: STORM-1276
> URL: https://issues.apache.org/jira/browse/STORM-1276
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Basti Liu
>  Labels: java-migration, jstorm-merger
> Attachments: Nimbus_Diff.pdf
>
>
> https://github.com/apache/storm/tree/jstorm-import/jstorm-core/src/main/java/com/alibaba/jstorm/daemon/nimbus
> as a possible example





[jira] [Updated] (STORM-1323) Explore support for JStorm TopologyMaster

2015-11-30 Thread Basti Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Basti Liu updated STORM-1323:
-
Attachment: topologyMaster_design_document.pdf

Please find the design document of the topology master and the draft evaluation of 
topology master & pacemaker in the attachment. Any comments are welcome.

> Explore support for JStorm TopologyMaster
> -
>
> Key: STORM-1323
> URL: https://issues.apache.org/jira/browse/STORM-1323
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Basti Liu
>  Labels: jstorm-merger
> Attachments: topologyMaster_design_document.pdf
>
>
> JStorm uses a TopologyMaster to reduce the load on zookeeper and to provide 
> coordination within the topology.
> Need to evaluate how this impacts storm architecture especially around 
> security, and port it over, if it makes sense.


