Re: [VOTE] Move Storm from JIRA to GitHub issues

2024-02-29 Thread Kishor Patil
+1

Best regards,
Kishor Patil

> On Feb 28, 2024, at 4:37 PM, Bipin Prasad wrote:
> 
>  +1
> On Wednesday, February 28, 2024 at 02:42:21 PM CST, Richard Zowalla wrote:
> 
> +1
> 
>> On Wednesday, 28.02.2024 at 09:08 +, Julien Nioche wrote:
>> Hi,
>> 
>> Following the discussion initiated by Richard [1], please vote on
>> moving
>> Storm from JIRA to GitHub issues
>> 
>> The vote is open for at least the next 72 hours or as long as needed.
>> 
>> [ ] +1 Move Storm from JIRA to GitHub issues
>> [ ]  0 No opinion
>> [ ] -1 Keep the issues on JIRA
>> 
>> Here is my +1
>> 
>> Thanks
>> 
>> Julien
>> 
>> [1] https://lists.apache.org/thread/ty80h0kqfh2r7vh6wmzmhzh07njbq0jn
> 


Re: [VOTE] Release Apache Storm 1.2.4 (rc1)

2021-10-04 Thread Kishor Patil
+1
Built from source, unit tests passed. Manually tested UI and topology 
functionality. Release docs checksum passed.

-Kishor 

On 2021/10/01 19:19:05, Ethan Li  wrote: 
> This is a call to vote on releasing Apache Storm 1.2.4 (rc1)
> 
> Full list of changes in this release:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-1.2.4-rc1/RELEASE_NOTES.html
> 
> The tag/commit to be voted upon is v1.2.4:
> 
> https://gitbox.apache.org/repos/asf?p=storm.git;a=commit;h=1bc944091b727e9a892bb36349231fcc57d9ea30
> 
> The source archive being voted upon can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-1.2.4-rc1/apache-storm-1.2.4-src.tar.gz
> 
> Other release files, signatures and digests can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-1.2.4-rc1/
> 
> The release artifacts are signed with the following key:
> 
> https://www.apache.org/dist/storm/KEYS
> 
> The Nexus staging repository for this release is:
> 
> https://repository.apache.org/content/repositories/orgapachestorm-1099
> 
> Please vote on releasing this package as Apache Storm 1.2.4.
> 
> When voting, please list the actions taken to verify the release.
> 
> This vote will be open for at least 72 hours.
> 
> [ ] +1 Release this package as Apache Storm 1.2.4
> [ ]  0 No opinion
> [ ] -1 Do not release this package because...
> 
> Thanks to everyone who contributed to this release.
> 


Re: [VOTE] Release Apache Storm 2.1.1 (rc1)

2021-09-30 Thread Kishor Patil
+1.
Verified distribution files, built from source, and unit tests passed. 
Launched cluster and tested UI and basic topology functionality.

-Kishor 

On 2021/09/29 17:33:23, Ethan Li  wrote: 
> This is a call to vote on releasing Apache Storm 2.1.1 (rc1)
> 
> Full list of changes in this release:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.1.1-rc1/RELEASE_NOTES.html
> 
> The tag/commit to be voted upon is v2.1.1:
> 
> https://gitbox.apache.org/repos/asf?p=storm.git;a=commit;h=c5009d993ee049cd0b7c3fe0dad2fc8f700ddb5f
> 
> The source archive being voted upon can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.1.1-rc1/apache-storm-2.1.1-src.tar.gz
> 
> Other release files, signatures and digests can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.1.1-rc1/
> 
> The release artifacts are signed with the following key:
> 
> https://www.apache.org/dist/storm/KEYS
> 
> The Nexus staging repository for this release is:
> 
> https://repository.apache.org/content/repositories/orgapachestorm-1098
> 
> Please vote on releasing this package as Apache Storm 2.1.1.
> 
> When voting, please list the actions taken to verify the release.
> 
> This vote will be open for at least 72 hours.
> 
> [ ] +1 Release this package as Apache Storm 2.1.1
> [ ]  0 No opinion
> [ ] -1 Do not release this package because...
> 
> Thanks to everyone who contributed to this release.
> 


Re: [VOTE] Release Apache Storm 2.2.1 (rc1)

2021-09-30 Thread Kishor Patil
+1

Verified checksum and signature, built from source, and ran unit tests. 
Launched cluster and manually tested UI functionality.

Regards,
Kishor

On 2021/09/29 16:04:09, Ethan Li  wrote: 
> This is a call to vote on releasing Apache Storm 2.2.1 (rc1)
> 
> Full list of changes in this release:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.2.1-rc1/RELEASE_NOTES.html
> 
> The tag/commit to be voted upon is v2.2.1:
> 
> https://gitbox.apache.org/repos/asf?p=storm.git;a=commit;h=1d86ffd1adc1920e6788a76f017b2e2f873a7162
> 
> The source archive being voted upon can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.2.1-rc1/apache-storm-2.2.1-src.tar.gz
> 
> Other release files, signatures and digests can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.2.1-rc1/
> 
> The release artifacts are signed with the following key:
> 
> https://www.apache.org/dist/storm/KEYS
> 
> The Nexus staging repository for this release is:
> 
> https://repository.apache.org/content/repositories/orgapachestorm-1097
> 
> Please vote on releasing this package as Apache Storm 2.2.1.
> 
> When voting, please list the actions taken to verify the release.
> 
> This vote will be open for at least 72 hours.
> 
> [ ] +1 Release this package as Apache Storm 2.2.1
> [ ]  0 No opinion
> [ ] -1 Do not release this package because...
> 
> Thanks to everyone who contributed to this release.
> 


Re: [VOTE] Release Apache Storm 2.3.0 (rc1)

2021-09-24 Thread Kishor Patil
+1.
I built the code from source, ran unit tests, started a local cluster, and 
tested topology and UI functionality. I verified the signatures for the release 
files. It all LGTM.

Regards,
Kishor

On 2021/09/24 00:36:43, Ethan Li  wrote: 
> This is a call to vote on releasing Apache Storm 2.3.0 (rc1)
> 
> Full list of changes in this release:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.3.0-rc1/RELEASE_NOTES.html
> 
> The tag/commit to be voted upon is v2.3.0:
> 
> https://gitbox.apache.org/repos/asf?p=storm.git;a=commit;h=b5252eda18e76c4f42af58d7481ea66cf3ec8471
> 
> The source archive being voted upon can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.3.0-rc1/apache-storm-2.3.0-src.tar.gz
> 
> Other release files, signatures and digests can be found here:
> 
> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.3.0-rc1/
> 
> The release artifacts are signed with the following key:
> 
> https://www.apache.org/dist/storm/KEYS
> 
> The Nexus staging repository for this release is:
> 
> https://repository.apache.org/content/repositories/orgapachestorm-1096
> 
> Please vote on releasing this package as Apache Storm 2.3.0.
> 
> When voting, please list the actions taken to verify the release.
> 
> This vote will be open for at least 72 hours.
> 
> [ ] +1 Release this package as Apache Storm 2.3.0
> [ ]  0 No opinion
> [ ] -1 Do not release this package because...
> 
> Thanks to everyone who contributed to this release.
> 


Re: Significant Bug

2020-10-29 Thread Kishor Patil
Hello Thomas,

Apologies for the delay in responding here. I tested the topology code provided 
in the storm-issue repo. 
*only one machine gets pegged*: Although it appears to be, this is not a bug. It 
is related to Locality Awareness. Please refer to 
https://github.com/apache/storm/blob/master/docs/LocalityAwareness.md
It appears the spout-to-bolt ratio is 200, so if there are enough bolts on a 
single node to handle the events generated by the spout, Storm won't send events 
to another node unless it runs out of capacity on that node. If you do not want 
this behavior and would rather distribute events evenly, you can try disabling 
the feature: turn off LoadAwareShuffleGrouping by setting 
topology.disable.loadaware.messaging to true.
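For reference, the override above can be sketched as a one-line config entry (a 
minimal sketch, assuming it is placed in storm.yaml or in the per-topology 
Config map):

```yaml
# Disable load-aware shuffle grouping so tuples are distributed evenly
# across nodes instead of preferring executors on the local node.
topology.disable.loadaware.messaging: true
```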
-Kishor

On 2020/10/28 15:21:54, "Thomas L. Redman"  wrote: 
> What’s the word on this? I sent this out some time ago, including a GitHub 
> project that clearly demonstrates the brokenness, yet I have not heard a 
> word. Is there anybody supporting Storm?
> 
> > On Sep 30, 2020, at 9:03 AM, Thomas L. Redman  wrote:
> > 
> > I believe I have encountered a significant bug. It seems topologies 
> > employing anchored tuples do not distribute across multiple nodes, 
> > regardless of the computation demands of the bolts. It works fine on a 
> > single node, but when throwing multiple nodes into the mix, only one 
> > machine gets pegged. When we disable anchoring, it will distribute across 
> > all nodes just fine, pegging each machine appropriately.
> > 
> > This bug manifests from version 2.1 forward. I first encountered this issue 
> > with my own production cluster on an app that does significant NLP 
> > computation across hundreds of millions of documents. This topology is 
> > fairly complex, so I developed a very simple exemplar that demonstrates the 
> > issue with only one spout and bolt. I pushed this demonstration up to 
> > github to provide the developers with a mechanism to easily isolate the 
> > bug, and maybe provide some workaround. I used Gradle to build and package 
> > this simple topology. This code is well documented, so it should be fairly 
> > simple to reproduce the issue. I first encountered this issue on three 
> > 32-core nodes, but when I started experimenting, I set up a test cluster 
> > with 8-core nodes, then increased each node to 16 cores, with plenty of 
> > memory in every case.
> > 
> > The topology can be accessed from github at 
> > https://github.com/cowchipkid/storm-issue.git. Please feel free to 
> > respond to me directly if you have any questions that are beyond the scope 
> > of this mail list.
> 
> 


[jira] [Reopened] (STORM-1469) Unable to deploy large topologies on apache storm

2016-03-07 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil reopened STORM-1469:
-

Reopening, as we need to merge https://github.com/apache/storm/pull/1178 and 
https://github.com/apache/storm/pull/1179

> Unable to deploy large topologies on apache storm
> -
>
> Key: STORM-1469
> URL: https://issues.apache.org/jira/browse/STORM-1469
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Rudra Sharma
>    Assignee: Kishor Patil
> Fix For: 1.0.0
>
>
> When deploying a topology larger than 17 MB to Nimbus, we get an exception. 
> In Storm 0.9.3 this could be mitigated by using the following config in 
> storm.yaml to increase the buffer size to handle the topology size. i.e. 50MB 
> would be
> nimbus.thrift.max_buffer_size: 5000
> This configuration does not resolve the issue in the master branch of Storm, 
> and we cannot deploy topologies which are large in size.
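A hedged sketch of such an override (assumption: the value is given in bytes, 
consistent with the 16384000-byte frame limit visible in the error log below; 
the 52428800 figure is illustrative, not from the original report):

```yaml
# storm.yaml: raise the Thrift buffer limit (hypothetical value: 50 MB in bytes)
nimbus.thrift.max_buffer_size: 52428800
```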
> Here is the log on the client side when attempting to deploy to the nimbus 
> node:
> java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException
>   at 
> backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:251) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:272) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:155) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> com.trustwave.siem.storm.topology.deployer.TopologyDeployer.deploy(TopologyDeployer.java:149)
>  [siem-ng-storm-deployer-cloud.jar:]
>   at 
> com.trustwave.siem.storm.topology.deployer.TopologyDeployer.main(TopologyDeployer.java:87)
>  [siem-ng-storm-deployer-cloud.jar:]
> Caused by: org.apache.thrift7.transport.TTransportException
>   at 
> org.apache.thrift7.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>  ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
>  ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>  ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:77) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.generated.Nimbus$Client.recv_submitTopology(Nimbus.java:238) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.generated.Nimbus$Client.submitTopology(Nimbus.java:222) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:237) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   ... 4 more
> Here is the log on the server side (nimbus.log):
> 2016-01-13 10:48:07.206 o.a.s.d.nimbus [INFO] Cleaning inbox ... deleted: 
> stormjar-c8666220-fa19-426b-a7e4-c62dfb57f1f0.jar
> 2016-01-13 10:55:09.823 o.a.s.d.nimbus [INFO] Uploading file from client to 
> /var/storm-data/nimbus/inbox/stormjar-80ecdf05-6a25-4281-8c78-10062ac5e396.jar
> 2016-01-13 10:55:11.910 o.a.s.d.nimbus [INFO] Finished uploading file from 
> client: 
> /var/storm-data/nimbus/inbox/stormjar-80ecdf05-6a25-4281-8c78-10062ac5e396.jar
> 2016-01-13 10:55:12.084 o.a.t.s.AbstractNonblockingServer$FrameBuffer [WARN] 
> Exception while invoking!
> org.apache.thrift7.transport.TTransportException: Frame size (17435758) 
> larger than max length (16384000)!
>   at 
> org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
>   at 
> org.a

[jira] [Resolved] (STORM-1529) Change default worker temp directory location for workers

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1529.
-
   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0

> Change default worker temp directory location for workers
> -
>
> Key: STORM-1529
> URL: https://issues.apache.org/jira/browse/STORM-1529
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>    Reporter: Kishor Patil
>Assignee: Kishor Patil
> Fix For: 1.0.0, 2.0.0
>
>
> Allowing workers to create temp files under the /tmp/ directory creates 
> challenges for monitoring disk usage and cleanup. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-1543) DRPCSpout should always try to reconnect disconnected DRPCInvocationsClient

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1543.
-
   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0

> DRPCSpout should always try to reconnect disconnected DRPCInvocationsClient
> ---
>
> Key: STORM-1543
> URL: https://issues.apache.org/jira/browse/STORM-1543
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.10.0, 1.0.0, 0.10.1, 2.0.0
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
> Fix For: 1.0.0, 2.0.0
>
>
> It appears DRPCSpout skips pull requests from the DRPC server if it's not 
> connected, but does not attempt to reconnect.
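The intended behavior can be sketched as follows (a hypothetical Python stub 
standing in for DRPCInvocationsClient; this is not Storm's actual API):

```python
class InvocationsClient:
    """Hypothetical stub standing in for DRPCInvocationsClient."""

    def __init__(self):
        self.connected = False
        self.reconnect_attempts = 0

    def is_connected(self):
        return self.connected

    def reconnect(self):
        self.reconnect_attempts += 1
        self.connected = True

    def fetch_request(self):
        return "request" if self.connected else None


def poll(client):
    """Sketch of the fix: on a disconnected client, request a reconnect
    instead of silently skipping the poll forever."""
    if not client.is_connected():
        client.reconnect()  # the missing step described in this issue
        return None         # skip this cycle; the next poll can succeed
    return client.fetch_request()
```

The first poll triggers the reconnect and returns nothing; subsequent polls 
fetch normally.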





[jira] [Updated] (STORM-1543) DRPCSpout should always try to reconnect disconnected DRPCInvocationsClient

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-1543:

Affects Version/s: 2.0.0
   0.10.1
   1.0.0
   0.10.0

> DRPCSpout should always try to reconnect disconnected DRPCInvocationsClient
> ---
>
> Key: STORM-1543
> URL: https://issues.apache.org/jira/browse/STORM-1543
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.10.0, 1.0.0, 0.10.1, 2.0.0
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
>
> It appears DRPCSpout skips pull requests from the DRPC server if it's not 
> connected, but does not attempt to reconnect.





[jira] [Resolved] (STORM-1561) Supervisor should relaunch worker if assignments have changed

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1561.
-
   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0

> Supervisor should relaunch worker if assignments have changed
> -
>
> Key: STORM-1561
> URL: https://issues.apache.org/jira/browse/STORM-1561
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
> Fix For: 1.0.0, 2.0.0
>
>
> Currently, the supervisor validates new assignments against existing 
> assignments by port. It should also check, on the same port, whether the 
> executors have changed.
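A minimal sketch of the described check (hypothetical Python, with an 
assignment modeled as a port-to-executor-list map; not Storm's actual 
supervisor code):

```python
def needs_relaunch(existing, incoming):
    """Return True when the assignment for any port has changed.

    Comparing by port alone misses the case where the same port is
    reused but its executor set differs, so compare executors per port.
    """
    if set(existing) != set(incoming):
        return True  # a port was added or removed
    return any(set(existing[port]) != set(incoming[port])
               for port in incoming)
```

With this check, a worker on port 6700 whose executors change from [1, 2] to 
[1, 3] is relaunched even though the port itself is unchanged.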





[jira] [Updated] (STORM-1561) Supervisor should relaunch worker if assignments have changed

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-1561:

Affects Version/s: 2.0.0
   1.0.0

> Supervisor should relaunch worker if assignments have changed
> -
>
> Key: STORM-1561
> URL: https://issues.apache.org/jira/browse/STORM-1561
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
>
> Currently, the supervisor validates new assignments against existing 
> assignments by port. It should also check, on the same port, whether the 
> executors have changed.





[jira] [Resolved] (STORM-1528) Fix CsvPreparableReporter log directory

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1528.
-
   Resolution: Fixed
Fix Version/s: 2.0.0

> Fix CsvPreparableReporter log directory
> ---
>
> Key: STORM-1528
> URL: https://issues.apache.org/jira/browse/STORM-1528
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>    Reporter: Kishor Patil
>Assignee: Kishor Patil
>Priority: Minor
> Fix For: 2.0.0
>
>
> The default CSV metrics log directory location is inappropriate.





[jira] [Updated] (STORM-1528) Fix CsvPreparableReporter log directory

2016-03-04 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-1528:

Component/s: storm-core

> Fix CsvPreparableReporter log directory
> ---
>
> Key: STORM-1528
> URL: https://issues.apache.org/jira/browse/STORM-1528
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>    Reporter: Kishor Patil
>Assignee: Kishor Patil
>Priority: Minor
> Fix For: 2.0.0
>
>
> The default CSV metrics log directory location is inappropriate.





[jira] [Created] (STORM-1601) Cluster-state must check if znode exists before getting children for storm backpressure

2016-03-03 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1601:
---

 Summary: Cluster-state must check if znode exists before getting 
children for storm backpressure
 Key: STORM-1601
 URL: https://issues.apache.org/jira/browse/STORM-1601
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 1.0.0, 2.0.0
Reporter: Kishor Patil
Assignee: Kishor Patil


You see the exception below in the integration tests:
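The guard being asked for can be sketched like this (hypothetical Python stub, 
not the actual cluster-state or Curator API):

```python
class FakeZkClient:
    """Hypothetical stand-in for a ZooKeeper client."""

    def __init__(self, tree):
        self.tree = tree  # path -> list of child node names

    def exists(self, path):
        return path in self.tree

    def get_children(self, path):
        if path not in self.tree:
            raise KeyError("NoNode for " + path)  # mimics NoNodeException
        return self.tree[path]


def safe_get_children(zk, path):
    """Check that the znode exists before listing children, so a missing
    backpressure node yields an empty list instead of a NoNode error."""
    if not zk.exists(path):
        return []
    return zk.get_children(path)
```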

{panel}
15:46:23 java.lang.RuntimeException: 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /backpressure/topologytest-22ffcfa0-8992-4258-b8b6-52346a129b58-1-0
15:46:23at backtype.storm.util$wrap_in_runtime.invoke(util.clj:52) 
~[classes/:?]
15:46:23at 
backtype.storm.zookeeper$get_children.invoke(zookeeper.clj:168) ~[classes/:?]
15:46:23at 
backtype.storm.cluster_state.zookeeper_state_factory$_mkState$reify__4184.get_children(zookeeper_state_factory.clj:129)
 ~[classes/:?]
15:46:23at sun.reflect.GeneratedMethodAccessor53.invoke(Unknown Source) 
~[?:?]
15:46:23at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_60]
15:46:23at java.lang.reflect.Method.invoke(Method.java:497) 
~[?:1.8.0_60]
15:46:23at 
clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) 
~[clojure-1.6.0.jar:?]
15:46:23at 
clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) 
~[clojure-1.6.0.jar:?]
15:46:23at 
backtype.storm.cluster$mk_storm_cluster_state$reify__4091.topology_backpressure(cluster.clj:407)
 ~[classes/:?]
15:46:23at sun.reflect.GeneratedMethodAccessor210.invoke(Unknown 
Source) ~[?:?]
15:46:23at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_60]
15:46:23at java.lang.reflect.Method.invoke(Method.java:497) 
~[?:1.8.0_60]
15:46:23at 
clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) 
~[clojure-1.6.0.jar:?]
15:46:23at 
clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) 
~[clojure-1.6.0.jar:?]
15:46:23at 
backtype.storm.daemon.worker$fn__6837$exec_fn__1477__auto__$reify__6839$check_throttle_changed__6910$cb__6911.doInvoke(worker.clj:704)
 ~[classes/:?]
15:46:23at clojure.lang.RestFn.invoke(RestFn.java:408) 
~[clojure-1.6.0.jar:?]
15:46:23at 
backtype.storm.cluster$issue_map_callback_BANG_.invoke(cluster.clj:183) 
~[classes/:?]
15:46:23at 
backtype.storm.cluster$mk_storm_cluster_state$fn__4081.invoke(cluster.clj:239) 
~[classes/:?]
15:46:23at 
backtype.storm.cluster_state.zookeeper_state_factory$_mkState$fn__4166.invoke(zookeeper_state_factory.clj:45)
 ~[classes/:?]
15:46:23at 
backtype.storm.zookeeper$mk_client$reify__2993.eventReceived(zookeeper.clj:63) 
~[classes/:?]
15:46:23at 
org.apache.curator.framework.imps.CuratorFrameworkImpl$8.apply(CuratorFrameworkImpl.java:860)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
org.apache.curator.framework.imps.CuratorFrameworkImpl$8.apply(CuratorFrameworkImpl.java:853)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 [guava-16.0.1.jar:?]
15:46:23at 
org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.processEvent(CuratorFrameworkImpl.java:850)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.access$000(CuratorFrameworkImpl.java:57)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
org.apache.curator.framework.imps.CuratorFrameworkImpl$1.process(CuratorFrameworkImpl.java:138)
 [curator-framework-2.5.0.jar:?]
15:46:23at 
org.apache.curator.ConnectionState.process(ConnectionState.java:152) 
[curator-client-2.5.0.jar:?]
15:46:23at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522) 
[zookeeper-3.4.6.jar:3.4.6-1569965]
15:46:23at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) 
[zookeeper-3.4.6.jar:3.4.6-1569965]
15:46:23 Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for 
/backpressure/topologytest-22ffcfa0-8992-4258-b8b6-52346a129b58-1-0
15:46:23at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:111) 
~[zookeeper-3.4.6.jar:3.4.6-1569965]
15:46:23at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
~[zookeeper-3.4.6.jar:3.4.6-1569965]
15:46:23

[jira] [Created] (STORM-1596) Multiple Subject sharing Kerberos TGT - causes services to fail

2016-03-02 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1596:
---

 Summary: Multiple Subject sharing Kerberos TGT - causes services 
to fail
 Key: STORM-1596
 URL: https://issues.apache.org/jira/browse/STORM-1596
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.10.0, 1.0.0, 0.10.1, 2.0.0
Reporter: Kishor Patil
Assignee: Kishor Patil
Priority: Critical


With multiple threads accessing the same {{Subject}}, a {{ServiceTicket}} in use 
by one thread can be destroyed by another thread.

Running BasicDRPCTopology with high parallelism in a secure cluster reproduces 
the issue.

Here is a sample log from such a scenario:
{code}
2016-01-20 15:52:26.904 o.a.t.t.TSaslTransport [ERROR] SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
 ~[?:1.8.0_40]
at 
org.apache.thrift7.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
 ~[storm-core-0.10.1.y.jar:0.10.1.y]
at 
org.apache.thrift7.transport.TSaslTransport.open(TSaslTransport.java:271) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at 
org.apache.thrift7.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin$1.run(KerberosSaslTransportPlugin.java:195)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin$1.run(KerberosSaslTransportPlugin.java:191)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_40]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_40]
at 
backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin.connect(KerberosSaslTransportPlugin.java:190)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:54)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:109) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.drpc.DRPCInvocationsClient.reconnectClient(DRPCInvocationsClient.java:57)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.drpc.ReturnResults.reconnectClient(ReturnResults.java:113) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at backtype.storm.drpc.ReturnResults.execute(ReturnResults.java:103) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.daemon.executor$fn__6377$tuple_action_fn__6379.invoke(executor.clj:689)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.daemon.executor$mk_task_receiver$fn__6301.invoke(executor.clj:448)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.disruptor$clojure_handler$reify__6018.onEvent(disruptor.clj:40) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:437)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:416)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at 
backtype.storm.daemon.executor$fn__6377$fn__6390$fn__6441.invoke(executor.clj:801)
 [storm-core-0.10.1.y.jar:0.10.1.y]
at backtype.storm.util$async_loop$fn__742.invoke(util.clj:482) 
[storm-core-0.10.1.y.jar:0.10.1.y]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism 
level: The ticket isn't for us (35) - BAD TGS SERVER NAME)
at 
sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770) 
~[?:1.8.0_40]
at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) 
~[?:1.8.0_40]
at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) 
~[?:1.8.0_40]
at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
 ~[?:1.8.0_40]
... 23 more
Caused by: sun.security.krb5.KrbException: The ticket isn't for us (35) - BAD 
TGS SERVER NAME
at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:73) ~[?:1.8.0_40]
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:259) 
~[?:1.8.0_40]
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:270) 
~[?:1.8.0_40]
at 
sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:302)
 ~[?:1.8.0_40]
at 
sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:120)
 ~[?:1.8.0_40

[jira] [Assigned] (STORM-1469) Unable to deploy large topologies on apache storm

2016-03-01 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil reassigned STORM-1469:
---

Assignee: Kishor Patil

> Unable to deploy large topologies on apache storm
> -
>
> Key: STORM-1469
> URL: https://issues.apache.org/jira/browse/STORM-1469
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Rudra Sharma
>    Assignee: Kishor Patil
> Fix For: 1.0.0, 2.0.0
>
>
> When deploying a topology larger than 17 MB to Nimbus, we get an exception. 
> In Storm 0.9.3 this could be mitigated by using the following config in 
> storm.yaml to increase the buffer size to handle the topology size. i.e. 50MB 
> would be
> nimbus.thrift.max_buffer_size: 5000
> This configuration does not resolve the issue in the master branch of Storm, 
> and we cannot deploy topologies which are large in size.
> Here is the log on the client side when attempting to deploy to the nimbus 
> node:
> java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException
>   at 
> backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:251) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:272) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:155) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> com.trustwave.siem.storm.topology.deployer.TopologyDeployer.deploy(TopologyDeployer.java:149)
>  [siem-ng-storm-deployer-cloud.jar:]
>   at 
> com.trustwave.siem.storm.topology.deployer.TopologyDeployer.main(TopologyDeployer.java:87)
>  [siem-ng-storm-deployer-cloud.jar:]
> Caused by: org.apache.thrift7.transport.TTransportException
>   at 
> org.apache.thrift7.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>  ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
>  ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>  ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:77) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.generated.Nimbus$Client.recv_submitTopology(Nimbus.java:238) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.generated.Nimbus$Client.submitTopology(Nimbus.java:222) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   at 
> backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:237) 
> ~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
>   ... 4 more
> Here is the log on the server side (nimbus.log):
> 2016-01-13 10:48:07.206 o.a.s.d.nimbus [INFO] Cleaning inbox ... deleted: 
> stormjar-c8666220-fa19-426b-a7e4-c62dfb57f1f0.jar
> 2016-01-13 10:55:09.823 o.a.s.d.nimbus [INFO] Uploading file from client to 
> /var/storm-data/nimbus/inbox/stormjar-80ecdf05-6a25-4281-8c78-10062ac5e396.jar
> 2016-01-13 10:55:11.910 o.a.s.d.nimbus [INFO] Finished uploading file from 
> client: 
> /var/storm-data/nimbus/inbox/stormjar-80ecdf05-6a25-4281-8c78-10062ac5e396.jar
> 2016-01-13 10:55:12.084 o.a.t.s.AbstractNonblockingServer$FrameBuffer [WARN] 
> Exception while invoking!
> org.apache.thrift7.transport.TTransportException: Frame size (17435758) 
> larger than max length (16384000)!
>   at 
> org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
>   at 
> org.apache.thrift7.transport.TFramedTransport.read(TFramedTrans
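The failure above reduces to a simple comparison between the serialized topology's thrift frame size and the configured limit. A minimal sketch of that check (the frame size and default limit are taken from the nimbus log above; the 50MB value is illustrative, not an official recommendation):

```java
public class BufferSizeCheck {
    public static void main(String[] args) {
        long frameSize = 17_435_758L;   // frame size reported in the nimbus log
        long defaultMax = 16_384_000L;  // default nimbus.thrift.max_buffer_size
        System.out.println(frameSize > defaultMax);  // frame exceeds limit: rejected

        long raised = 50_000_000L;      // e.g. a raised 50MB limit
        System.out.println(frameSize > raised);      // frame fits: accepted
    }
}
```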

[jira] [Created] (STORM-1587) ThroughputVsLatency printMetrics Throws NullPointerException

2016-02-29 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1587:
---

 Summary: ThroughputVsLatency printMetrics Throws 
NullPointerException
 Key: STORM-1587
 URL: https://issues.apache.org/jira/browse/STORM-1587
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


The printMetrics method can throw a NullPointerException.

{code}
1701 [main] INFO  o.a.s.StormSubmitter - Submitting topology wc-test in 
distributed mode with conf 
{"topology.worker.metrics":{"CPU":"org.apache.storm.metrics.sigar.CPUMetric"},"storm.zookeeper.topology.auth.scheme":"digest","topology.worker.gc.childopts":"-XX:+UseConcMarkSweepGC
 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:NewSize=128m 
-XX:CMSInitiatingOccupancyFraction=70 
-XX:-CMSConcurrentMTEnabled","topology.workers":4,"topology.builtin.metrics.bucket.size.secs":10,"topology.worker.childopts":"-Xmx2g","storm.zookeeper.topology.auth.payload":"-5601074936064852696:-8332153375154710952","topology.metrics.consumer.register":[{"argument":null,"class":"org.apache.storm.metric.LoggingMetricsConsumer","parallelism.hint":1},{"argument":"http:\/\/survivedlived.corp.ir2.yahoo.com:45976\/","class":"org.apache.storm.metric.HttpForwardingMetricsConsumer","parallelism.hint":1}]}
2137 [main] INFO  o.a.s.StormSubmitter - Finished submitting topology: wc-test
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.storm.starter.ThroughputVsLatency.printMetrics(ThroughputVsLatency.java:277)
at 
org.apache.storm.starter.ThroughputVsLatency.main(ThroughputVsLatency.java:425)
{code}
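The NPE happens because a metric lookup can return null before the consumer has reported anything. A hypothetical sketch of the defensive pattern that avoids it (the "acked" key and map shape are illustrative, not the actual ThroughputVsLatency internals):

```java
import java.util.HashMap;
import java.util.Map;

public class PrintMetricsGuard {
    public static void main(String[] args) {
        // The metrics consumer may not have reported anything yet,
        // so lookups can return null; default instead of dereferencing.
        Map<String, Object> metrics = new HashMap<>();
        Object acked = metrics.get("acked");
        long ackedCount = (acked == null) ? 0L : (Long) acked;
        System.out.println(ackedCount);
    }
}
```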



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1561) Supervisor should relaunch worker if assignments have changed

2016-02-18 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1561:
---

 Summary: Supervisor should relaunch worker if assignments have 
changed
 Key: STORM-1561
 URL: https://issues.apache.org/jira/browse/STORM-1561
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


Currently, the supervisor validates new assignments against existing assignments by 
port. It should also check whether the executors assigned to the same port have 
changed.
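The check described above can be sketched as a comparison of executor sets per port. This is a hypothetical helper, not the actual supervisor code:

```java
import java.util.Set;

public class AssignmentCheck {
    // Returns true if the worker on a given port must be relaunched:
    // the port is the same, but the executor set has changed.
    static boolean needsRelaunch(Set<Integer> currentExecutors, Set<Integer> newExecutors) {
        return !currentExecutors.equals(newExecutors);
    }

    public static void main(String[] args) {
        System.out.println(needsRelaunch(Set.of(1, 2), Set.of(1, 2)));  // same executors
        System.out.println(needsRelaunch(Set.of(1, 2), Set.of(1, 3)));  // changed executors
    }
}
```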



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1542) Taking jstack for a worker in UI results in endless empty jstack dumps

2016-02-17 Thread Kishor Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151245#comment-15151245
 ] 

Kishor Patil commented on STORM-1542:
-

[~abhishek.agarwal] It looks like a good option to turn this into a synchronous 
mode. I would be watchful during implementation: 
- for multitenant, these operations need to be launched as the user. 
- heap-dump-like actions in synchronous mode could take a long time, due to the 
heap size plus the time to download the dump results back to the browser.

Currently, the supervisor relaunches the profiler (in case of worker restarts), 
but that feature would cease to exist (as the supervisor uses ZK to remember 
launching profilers for that worker).

The idea otherwise sounds good.

> Taking jstack for a worker in UI results in endless empty jstack dumps
> --
>
> Key: STORM-1542
> URL: https://issues.apache.org/jira/browse/STORM-1542
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Critical
>
> The resolved path for the jstack command on the supervisor is
> /home/y/share/yjava_jdk/java/jstack, which doesn't exist. The command returns 
> 127 as the exit code. When a request for a jstack dump is made from the UI, a 
> zookeeper node is created. The supervisor keeps reading this node and 
> executing the jstack command, and since the exit code is non-zero, it doesn't 
> delete the node afterwards. Thus the supervisor keeps executing the command 
> forever, and each invocation creates a new empty file.
> {noformat}
> $BINPATH/jstack $1 > "$2/${FILENAME}"
> {noformat}
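The endless loop exists because the request node is only cleared on success. A hypothetical sketch of a bounded retry policy that would break the loop (the real supervisor logic is Clojure; this is illustrative only):

```java
public class ProfileActionLoop {
    // Clear the ZK request node when the command succeeds, and also after a
    // bounded number of failures, so a broken jstack path cannot loop forever.
    static boolean shouldDeleteRequest(int exitCode, int attempts, int maxAttempts) {
        return exitCode == 0 || attempts >= maxAttempts;
    }

    public static void main(String[] args) {
        System.out.println(shouldDeleteRequest(127, 1, 3));  // failure, retry budget left
        System.out.println(shouldDeleteRequest(127, 3, 3));  // failure, give up: stop looping
        System.out.println(shouldDeleteRequest(0, 1, 3));    // success: clear the node
    }
}
```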



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1542) Taking jstack for a worker in UI results in endless empty jstack dumps

2016-02-16 Thread Kishor Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149134#comment-15149134
 ] 

Kishor Patil commented on STORM-1542:
-


We could change the logviewer to expose a REST API to create the jstack, and 
that might work, though it means refactoring the portions of supervisor logic 
that find which pid to launch the jstack command against. 
Not all actions are synchronous: "run profiler for the next 10 minutes" needs to 
remember to shut the profiler down and take a profiler dump before stopping it. 
The same applies to other profiling actions that require remembering how long to 
run and when to stop; we are using ZooKeeper for that. Also, it is useful to 
have the UI route all pending actions via ZK to the supervisor, so that multiple 
users requesting the same action does not result in multiple actions being 
taken. Currently, the supervisor merges those into a single profiling action.

[~abhishek.agarwal],
I don't see any special advantage to avoiding ZK here. Can you please elaborate 
if I am missing something?

> Taking jstack for a worker in UI results in endless empty jstack dumps
> --
>
> Key: STORM-1542
> URL: https://issues.apache.org/jira/browse/STORM-1542
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Critical
>
> The resolved path for the jstack command on the supervisor is
> /home/y/share/yjava_jdk/java/jstack, which doesn't exist. The command returns 
> 127 as the exit code. When a request for a jstack dump is made from the UI, a 
> zookeeper node is created. The supervisor keeps reading this node and 
> executing the jstack command, and since the exit code is non-zero, it doesn't 
> delete the node afterwards. Thus the supervisor keeps executing the command 
> forever, and each invocation creates a new empty file.
> {noformat}
> $BINPATH/jstack $1 > "$2/${FILENAME}"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1543) DRPCSpout should always try to reconnect disconnected DRPCInvocationsClient

2016-02-12 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1543:
---

 Summary: DRPCSpout should always try to reconnect disconnected 
DRPCInvocationsClient
 Key: STORM-1543
 URL: https://issues.apache.org/jira/browse/STORM-1543
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


It appears DRPCSpout skips pulling requests from the DRPC server if it is not 
connected, but never attempts to reconnect.
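One common shape for such a reconnect policy is exponential backoff with a cap, so a long DRPC outage doesn't produce a tight retry loop. A minimal sketch under that assumption (not the actual DRPCSpout fix; the base and cap values are illustrative):

```java
public class ReconnectPolicy {
    // Delay before the next reconnect attempt: exponential, capped.
    static long backoffMs(int attempt, long baseMs, long capMs) {
        long delay = baseMs * (1L << Math.min(attempt, 16));
        return Math.min(delay, capMs);
    }

    public static void main(String[] args) {
        System.out.println(backoffMs(0, 100, 10_000));   // first retry
        System.out.println(backoffMs(3, 100, 10_000));   // grows exponentially
        System.out.println(backoffMs(20, 100, 10_000));  // capped for long outages
    }
}
```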



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1529) Change default worker temp directory location for workers

2016-02-07 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1529:
---

 Summary: Change default worker temp directory location for workers
 Key: STORM-1529
 URL: https://issues.apache.org/jira/browse/STORM-1529
 Project: Apache Storm
  Issue Type: New Feature
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


Allowing workers to create temp files under the /tmp/ directory creates 
challenges for monitoring disk usage and cleanup. 
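One way to steer workers away from /tmp is to point each worker's java.io.tmpdir at a per-worker directory under the supervisor's control. A hypothetical sketch (the root path and worker id below are illustrative, not Storm's actual layout):

```java
public class WorkerTmpDir {
    // Build the JVM option that redirects a worker's temp files into a
    // supervisor-managed, per-worker directory that can be cleaned up.
    static String tmpDirOpt(String workerRoot, String workerId) {
        return "-Djava.io.tmpdir=" + workerRoot + "/" + workerId + "/tmp";
    }

    public static void main(String[] args) {
        System.out.println(tmpDirOpt("/var/storm/workers", "worker-6700"));
    }
}
```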



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1528) Fix CsvPreparableReporter log directory

2016-02-05 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1528:
---

 Summary: Fix CsvPreparableReporter log directory
 Key: STORM-1528
 URL: https://issues.apache.org/jira/browse/STORM-1528
 Project: Apache Storm
  Issue Type: Bug
Reporter: Kishor Patil
Assignee: Kishor Patil
Priority: Minor


The default CSV metrics log directory location is inappropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-1524) Make Storm daemon function statistics reporter pluggable

2016-02-05 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1524.
-
   Resolution: Fixed
Fix Version/s: 1.0.0

> Make Storm daemon function statistics reporter pluggable
> 
>
> Key: STORM-1524
> URL: https://issues.apache.org/jira/browse/STORM-1524
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Affects Versions: 0.10.0, 1.0.0, 0.10.1
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
> Fix For: 1.0.0
>
>
> We use codahale/metrics-clojure to gather daemon-side stats, but currently we 
> have only three reporters available, all of which use the builder pattern. It 
> would be useful to be able to plug in different reporters that can be 
> configured via configuration instead of the builder pattern.
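The contrast between builder-based and configuration-based reporters can be sketched as follows. This is a hypothetical interface, not the actual Storm plugin API; the "report.period.secs" key is invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class PluggableReporter {
    // A reporter configured from the daemon's config map rather than through
    // a compile-time builder chain, so implementations can be swapped by name.
    interface PreparableReporter {
        void prepare(Map<String, Object> conf);
    }

    static class ConsoleReporter implements PreparableReporter {
        long periodSecs;
        public void prepare(Map<String, Object> conf) {
            Object p = conf.getOrDefault("report.period.secs", 60);
            periodSecs = ((Number) p).longValue();
        }
    }

    public static void main(String[] args) {
        Map<String, Object> conf = new HashMap<>();
        conf.put("report.period.secs", 10);
        ConsoleReporter r = new ConsoleReporter();
        r.prepare(conf);  // settings come from configuration, not a builder
        System.out.println(r.periodSecs);
    }
}
```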



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1524) Make Storm daemon function statistics reporter pluggable

2016-02-03 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1524:
---

 Summary: Make Storm daemon function statistics reporter pluggable
 Key: STORM-1524
 URL: https://issues.apache.org/jira/browse/STORM-1524
 Project: Apache Storm
  Issue Type: New Feature
  Components: storm-core
Affects Versions: 0.10.0, 1.0.0, 0.10.1
Reporter: Kishor Patil
Assignee: Kishor Patil


We use codahale/metrics-clojure to gather daemon-side stats, but currently we 
have only three reporters available, all of which use the builder pattern. It 
would be useful to be able to plug in different reporters that can be 
configured via configuration instead of the builder pattern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1503) PacemakerClient Reconnection issue

2016-01-26 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1503:
---

 Summary: PacemakerClient Reconnection issue
 Key: STORM-1503
 URL: https://issues.apache.org/jira/browse/STORM-1503
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.10.0, 0.10.1
Reporter: Kishor Patil
Assignee: Kishor Patil


The worker should not restart on failure to send heartbeats to Pacemaker or the 
worker.

Also, PacemakerClient should attempt to reconnect on failure to write to an 
existing channel.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1485) DRPC Connectivity Issues

2016-01-19 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1485:
---

 Summary: DRPC Connectivity Issues
 Key: STORM-1485
 URL: https://issues.apache.org/jira/browse/STORM-1485
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


We need to fix the following issues for DRPC: 
1. DRPCClient should not stop trying to connect to DRPC after a fixed number of 
retries, because the DRPC server may be down for a longer duration.
2. KerberosSaslTransportPlugin uses the ZooKeeper Login session to create a 
thread that dies if the TGT has expired, which is a non-recoverable state. In 
that scenario, the JVM should be restarted gracefully.
3. The DRPC ReturnResults bolt should retry the connection to DRPC on a 
ThriftException while sending results for a particular request.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1457) Avoid Unnecessary caching of tuples by executor

2016-01-08 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1457:
---

 Summary: Avoid Unnecessary caching of tuples by executor
 Key: STORM-1457
 URL: https://issues.apache.org/jira/browse/STORM-1457
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


It looks like the pending RotatingMap is caching the list of tuples for printing 
debug messages, but this is not required if "topology.debug" is turned off.
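The fix described above amounts to making the debug cache conditional. A hypothetical sketch of that pattern (not the actual executor code; the class and field names are invented):

```java
import java.util.ArrayList;
import java.util.List;

public class PendingEntry {
    final Object msgId;
    final List<Object> tuplesForDebug;  // only populated when topology.debug is on

    PendingEntry(Object msgId, List<Object> tuples, boolean topologyDebug) {
        this.msgId = msgId;
        // Skip the copy entirely when debugging is off, so the pending
        // RotatingMap does not retain tuple references it will never print.
        this.tuplesForDebug = topologyDebug ? new ArrayList<>(tuples) : null;
    }

    public static void main(String[] args) {
        PendingEntry off = new PendingEntry("m1", List.<Object>of("t1", "t2"), false);
        System.out.println(off.tuplesForDebug == null);   // nothing cached
        PendingEntry on = new PendingEntry("m1", List.<Object>of("t1", "t2"), true);
        System.out.println(on.tuplesForDebug.size());     // cached for debug output
    }
}
```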



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-874) Netty Threads do not handle Errors properly

2015-11-30 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-874.

   Resolution: Fixed
Fix Version/s: 0.11.0

This issue is addressed with STORM-885

> Netty Threads do not handle Errors properly
> ---
>
> Key: STORM-874
> URL: https://issues.apache.org/jira/browse/STORM-874
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.9.2-incubating, 0.10.0
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
> Fix For: 0.11.0
>
>
> When low on memory, a Netty thread can hit an OutOfMemoryError which, if not 
> handled correctly, can lead to unexpected behavior such as Netty connection 
> leaks.
> {code:java}
> java.lang.OutOfMemoryError: Direct buffer memory
>   at java.nio.Bits.reserveMemory(Bits.java:658) ~[?:1.8.0_25]
>   at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) 
> ~[?:1.8.0_25]
>   at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_25]
>   at 
> org.jboss.netty.buffer.ChannelBuffers.directBuffer(ChannelBuffers.java:167) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.buffer.ChannelBuffers.directBuffer(ChannelBuffers.java:151) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> backtype.storm.messaging.netty.MessageBatch.buffer(MessageBatch.java:101) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
>   at 
> backtype.storm.messaging.netty.MessageEncoder.encode(MessageEncoder.java:32) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:66)
>  ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
>  ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
>  ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
>  ~[netty-3.9.4.Final.jar:?]
>   at org.jboss.netty.channel.Channels.write(Channels.java:704) 
> ~[netty-3.9.4.Final.jar:?]
>   at org.jboss.netty.channel.Channels.write(Channels.java:671) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:248) 
> ~[netty-3.9.4.Final.jar:?]
>   at 
> backtype.storm.messaging.netty.Client.tryDeliverMessages(Client.java:226) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
>   at backtype.storm.messaging.netty.Client.send(Client.java:173) 
> ~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
> {code}
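The key point is that catching Exception alone misses OutOfMemoryError, which extends Error. A hypothetical sketch of wrapping an I/O task so Errors are handled rather than silently killing the thread (illustrative only, not the STORM-885 patch):

```java
public class SafeIoTask implements Runnable {
    final Runnable delegate;
    SafeIoTask(Runnable delegate) { this.delegate = delegate; }

    public void run() {
        try {
            delegate.run();
        } catch (Throwable t) {  // Throwable covers Error as well as Exception
            System.err.println("fatal in I/O thread: " + t);
            // in a real worker this would trigger a controlled shutdown,
            // releasing connections instead of leaking them
        }
    }

    public static void main(String[] args) {
        new SafeIoTask(() -> { throw new OutOfMemoryError("simulated"); }).run();
        System.out.println("error was contained");
    }
}
```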



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-1215) Use Async Loggers to avoid locking and logging overhead

2015-11-19 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1215.
-
   Resolution: Fixed
Fix Version/s: 0.11.0

> Use Async Loggers to avoid locking  and logging overhead
> 
>
> Key: STORM-1215
> URL: https://issues.apache.org/jira/browse/STORM-1215
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>    Reporter: Kishor Patil
>Assignee: Kishor Patil
> Fix For: 0.11.0
>
>
> The loggers are synchronous with immediateFlush to disk, slowing some of the 
> daemons down. In some cases nimbus is slow too while holding the submit lock.
> Making the loggers asynchronous, with no need to write to disk on every log 
> event, would improve CPU usage for logging.
> {code}
> "pool-7-thread-986" #1025 prio=5 os_prio=0 tid=0x7f0f9628c800 nid=0x1b84 
> runnable [0x7f0f0fa2a000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.FileOutputStream.writeBytes(Native Method)
>   at java.io.FileOutputStream.write(FileOutputStream.java:326)
>   at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>   - locked <0x0003c00ae520> (a java.io.BufferedOutputStream)
>   at java.io.PrintStream.write(PrintStream.java:482)
>   - locked <0x0003c00ae500> (a java.io.PrintStream)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   - locked <0x0003c00ae640> (a java.io.OutputStreamWriter)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   - locked <0x0003c00ae500> (a java.io.PrintStream)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   - locked <0x0003c00ae500> (a java.io.PrintStream)
>   at 
> org.apache.logging.log4j.status.StatusConsoleListener.log(StatusConsoleListener.java:81)
>   at 
> org.apache.logging.log4j.status.StatusLogger.logMessage(StatusLogger.java:218)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:727)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:716)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:344)
>   at 
> org.apache.logging.log4j.core.appender.DefaultErrorHandler.error(DefaultErrorHandler.java:59)
>   at 
> org.apache.logging.log4j.core.appender.AbstractAppender.error(AbstractAppender.java:86)
>   at 
> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:116)
>   at 
> org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:99)
>   at 
> org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:430)
>   at 
> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:409)
>   at 
> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367)
>   at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:112)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:727)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:716)
>   at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:198)
>   at clojure.tools.logging$eval1$fn__7.invoke(NO_SOURCE_FILE:0)
>   at clojure.tools.logging.impl$fn__28$G__8__39.invoke(impl.clj:16)
>   at clojure.tools.logging$log_STAR_.invoke(logging.clj:59)
>   at backtype.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:781)
>   at clojure.lang.RestFn.invoke(RestFn.java:410)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-1208) UI: NPE seen when aggregating bolt streams stats

2015-11-18 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1208.
-
   Resolution: Fixed
Fix Version/s: 0.11.0

> UI: NPE seen when aggregating bolt streams stats
> 
>
> Key: STORM-1208
> URL: https://issues.apache.org/jira/browse/STORM-1208
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Fix For: 0.11.0
>
>
> A stack trace is seen on the UI via its thrift connection to nimbus.
> On nimbus, a stack trace similar to the following is seen:
> {noformat}
> 2015-11-09 19:26:48.921 o.a.t.s.TThreadPoolServer [ERROR] Error occurred 
> during processing of message.
> java.lang.NullPointerException
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count$iter__2219__2223$fn__2224.invoke(stats.clj:346)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
> at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
> at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.6.0.jar:?]
> at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count.invoke(stats.clj:344) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.stats$agg_pre_merge_comp_page_bolt.invoke(stats.clj:439) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at backtype.storm.stats$fn__2578.invoke(stats.clj:1093) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:241) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:628) ~[clojure-1.6.0.jar:?]
> at clojure.core$partial$fn__4230.doInvoke(core.clj:2470) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.RestFn.invoke(RestFn.java:421) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6086.invoke(protocols.clj:143) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6057$G__6052__6066.invoke(protocols.clj:19) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$aggregate_comp_stats_STAR_.invoke(stats.clj:1106) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
> at backtype.storm.stats$fn__2589.doInvoke(stats.clj:1127) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.RestFn.invoke(RestFn.java:436) ~[clojure-1.6.0.jar:?]
> at clojure.lang.MultiFn.invoke(MultiFn.java:236) 
> ~[clojure-1.6.0.jar:?]
> at backtype.storm.stats$agg_comp_execs_stats.invoke(stats.clj:1303) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.daemon.nimbus$fn__5893$exec_fn__1502__auto__$reify__5917.getComponentPageInfo(nimbus.clj:1715)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.generated.Nimbus$Processor$getComponentPageInfo.getResult(Nimbus.java:3677)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.generated.Nimbus$Processor$getComponentPageInfo.getResult(Nimbus.java:3661)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> org.apache.thrift7.ProcessFunction.process(ProcessFunction.java:39) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at org.apache.thrift7.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[storm-core-0.1

[jira] [Resolved] (STORM-831) Add Jira and Central Logging URL to UI

2015-11-18 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-831.

   Resolution: Fixed
Fix Version/s: 0.11.0

> Add Jira and Central Logging URL to UI
> --
>
> Key: STORM-831
> URL: https://issues.apache.org/jira/browse/STORM-831
> Project: Apache Storm
>  Issue Type: Documentation
>  Components: documentation
>    Reporter: Kishor Patil
>Assignee: Kishor Patil
>Priority: Trivial
> Fix For: 0.11.0
>
>
> As a user, I would like to see a link to take me to JIRA for reporting bugs. 
> Also, an optional link to splunk/logstash/kibana from the UI would be helpful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-1204) Logviewer should graceful report page-not-found instead of 500 for bad topo-id etc

2015-11-18 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1204.
-
   Resolution: Fixed
Fix Version/s: 0.11.0

> Logviewer should graceful report page-not-found instead of 500 for bad 
> topo-id etc
> --
>
> Key: STORM-1204
> URL: https://issues.apache.org/jira/browse/STORM-1204
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
> Fix For: 0.11.0
>
>
> Whenever the topology-id or filename is wrong (or, on a secure cluster, the 
> user is not authorized), the logviewer returns an HTTP 500 error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1204) Logviewer should graceful report page-not-found instead of 500 for bad topo-id etc

2015-11-13 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1204:
---

 Summary: Logviewer should graceful report page-not-found instead 
of 500 for bad topo-id etc
 Key: STORM-1204
 URL: https://issues.apache.org/jira/browse/STORM-1204
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


Whenever the topology-id or filename is wrong (or, on a secure cluster, the 
user is not authorized), the logviewer returns an HTTP 500 error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-898) Add priorities and per user resource guarantees to Resource Aware Scheduler

2015-11-02 Thread Kishor Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985607#comment-14985607
 ] 

Kishor Patil commented on STORM-898:


In my opinion, it is important to use an ordering of topologies (perhaps by 
submission time) while scheduling, so that same-user, same-priority topologies 
avoid being evicted in the wrong order or forcing each other out over recurring 
scheduling iterations.

> Add priorities and per user resource guarantees to Resource Aware Scheduler
> ---
>
> Key: STORM-898
> URL: https://issues.apache.org/jira/browse/STORM-898
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Boyang Jerry Peng
> Attachments: Resource Aware Scheduler for Storm.pdf
>
>
> In a multi-tenant environment we would like to be able to give individual 
> users a guarantee of how much CPU/Memory/Network they will be able to use in 
> a cluster.  We would also like to know which topologies a user feels are the 
> most important to keep running if there are not enough resources to run all 
> of their topologies.
> Each user should be able to specify if their topology is production, staging, 
> or development. Within each of those categories a user should be able to give 
> a topology a priority, 0 to 10 with 10 being the highest priority (or 
> something like this).
> If there are not enough resources on a cluster to run a topology, assume this 
> topology is running using resources and find the user that is most over their 
> guaranteed resources.  Shoot that user's lowest priority topology, and 
> repeat until this topology is able to run, or this topology would be the one 
> shot.  Ideally we don't actually shoot anything until we know that we would 
> have made enough room.
> If the cluster is over-subscribed and everyone is under their guarantee, and 
> this topology would not put the user over their guarantee, shoot the lowest 
> priority topology in this worker's resource pool until there is enough room to 
> run the topology or this topology is the one that would be shot.  We might 
> also want to think about what to do if we are going to shoot a production 
> topology in an oversubscribed case; perhaps we can shoot a non-production 
> topology instead even if the other user is not over their guarantee.
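Combining the priority rule above with the submission-time ordering suggested in the comment, the per-user eviction choice can be sketched as a comparator. This is a hypothetical illustration, not the Resource Aware Scheduler's actual code:

```java
import java.util.Comparator;
import java.util.List;

public class EvictionOrder {
    static class Topo {
        final String id; final int priority; final long submitTime;
        Topo(String id, int priority, long submitTime) {
            this.id = id; this.priority = priority; this.submitTime = submitTime;
        }
    }

    // Among a user's topologies, evict the lowest priority first, breaking
    // ties by evicting the most recently submitted topology.
    static Topo nextToEvict(List<Topo> topos) {
        return topos.stream()
                .min(Comparator.<Topo>comparingInt(t -> t.priority)
                        .thenComparingLong(t -> -t.submitTime))
                .orElseThrow();
    }

    public static void main(String[] args) {
        Topo victim = nextToEvict(List.of(
                new Topo("a", 5, 100), new Topo("b", 1, 200), new Topo("c", 1, 300)));
        System.out.println(victim.id);  // lowest priority, newest submission
    }
}
```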



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1157) Dynamic Worker Profiler - jmap, jstack, profiling and restarting worker

2015-11-02 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1157:
---

 Summary: Dynamic Worker Profiler - jmap, jstack, profiling and 
restarting worker
 Key: STORM-1157
 URL: https://issues.apache.org/jira/browse/STORM-1157
 Project: Apache Storm
  Issue Type: Improvement
  Components: storm-core
Reporter: Kishor Patil
Assignee: Kishor Patil


In multi-tenant mode, Storm launches long-running JVMs across the cluster 
without giving users sudo access. Self-service Java heap dumps, jstacks, and 
Java profiling of these JVMs would improve users' ability to analyze and debug 
issues while actively monitoring them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (STORM-350) Update disruptor to latest version (3.2.1)

2015-10-28 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-350:
---
Assignee: Robert Joseph Evans  (was: Boris Aksenov)

> Update disruptor to latest version (3.2.1)
> --
>
> Key: STORM-350
> URL: https://issues.apache.org/jira/browse/STORM-350
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>  Components: storm-core
>Reporter: Boris Aksenov
>Assignee: Robert Joseph Evans
>Priority: Minor
> Fix For: 0.10.0
>
> Attachments: 
> 20141117-0.9.3-rc1-3-worker-separate-1-spout-and-2-bolts-failing-tuples.png, 
> 20141117-0.9.3-rc1-one-worker-failing-tuples.png, 
> 20141117-0.9.3-rc1-three-workers-1-spout-3-bolts-failing-tuples.png, 
> 20141118-0.9.3-branch-3-worker-separate-1-spout-and-2-bolts-ok.png, 
> 20141118-0.9.3-branch-one-worker-ok.png, 
> 20141118-0.9.3-branch-three-workers-1-spout-3-bolts-ok.png, Storm UI1.pdf, 
> Storm UI2.pdf, storm-0.9.3-rc1-failing-tuples.png, 
> storm-0_9_2-incubating-failing-tuples.png, 
> storm-0_9_2-incubating-no-failing-tuples.png, 
> storm-failed-tuples-multi-node.png, storm-multi-node-without-350.png
>
>






[jira] [Resolved] (STORM-350) Update disruptor to latest version (3.3.2)

2015-10-28 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-350.

Resolution: Fixed

> Update disruptor to latest version (3.3.2)
> --
>
> Key: STORM-350
> URL: https://issues.apache.org/jira/browse/STORM-350
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>  Components: storm-core
>Reporter: Boris Aksenov
>Assignee: Robert Joseph Evans
> Fix For: 0.10.0
>
> Attachments: 
> 20141117-0.9.3-rc1-3-worker-separate-1-spout-and-2-bolts-failing-tuples.png, 
> 20141117-0.9.3-rc1-one-worker-failing-tuples.png, 
> 20141117-0.9.3-rc1-three-workers-1-spout-3-bolts-failing-tuples.png, 
> 20141118-0.9.3-branch-3-worker-separate-1-spout-and-2-bolts-ok.png, 
> 20141118-0.9.3-branch-one-worker-ok.png, 
> 20141118-0.9.3-branch-three-workers-1-spout-3-bolts-ok.png, Storm UI1.pdf, 
> Storm UI2.pdf, storm-0.9.3-rc1-failing-tuples.png, 
> storm-0_9_2-incubating-failing-tuples.png, 
> storm-0_9_2-incubating-no-failing-tuples.png, 
> storm-failed-tuples-multi-node.png, storm-multi-node-without-350.png
>
>






[jira] [Updated] (STORM-350) Update disruptor to latest version (3.2.1)

2015-10-28 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-350:
---
Priority: Major  (was: Minor)

> Update disruptor to latest version (3.2.1)
> --
>
> Key: STORM-350
> URL: https://issues.apache.org/jira/browse/STORM-350
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>  Components: storm-core
>Reporter: Boris Aksenov
>Assignee: Robert Joseph Evans
> Fix For: 0.10.0
>
> Attachments: 
> 20141117-0.9.3-rc1-3-worker-separate-1-spout-and-2-bolts-failing-tuples.png, 
> 20141117-0.9.3-rc1-one-worker-failing-tuples.png, 
> 20141117-0.9.3-rc1-three-workers-1-spout-3-bolts-failing-tuples.png, 
> 20141118-0.9.3-branch-3-worker-separate-1-spout-and-2-bolts-ok.png, 
> 20141118-0.9.3-branch-one-worker-ok.png, 
> 20141118-0.9.3-branch-three-workers-1-spout-3-bolts-ok.png, Storm UI1.pdf, 
> Storm UI2.pdf, storm-0.9.3-rc1-failing-tuples.png, 
> storm-0_9_2-incubating-failing-tuples.png, 
> storm-0_9_2-incubating-no-failing-tuples.png, 
> storm-failed-tuples-multi-node.png, storm-multi-node-without-350.png
>
>






[jira] [Updated] (STORM-350) Update disruptor to latest version (3.2.2)

2015-10-28 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-350:
---
Summary: Update disruptor to latest version (3.2.2)  (was: Update disruptor 
to latest version (3.2.1))

> Update disruptor to latest version (3.2.2)
> --
>
> Key: STORM-350
> URL: https://issues.apache.org/jira/browse/STORM-350
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>  Components: storm-core
>Reporter: Boris Aksenov
>Assignee: Robert Joseph Evans
> Fix For: 0.10.0
>
> Attachments: 
> 20141117-0.9.3-rc1-3-worker-separate-1-spout-and-2-bolts-failing-tuples.png, 
> 20141117-0.9.3-rc1-one-worker-failing-tuples.png, 
> 20141117-0.9.3-rc1-three-workers-1-spout-3-bolts-failing-tuples.png, 
> 20141118-0.9.3-branch-3-worker-separate-1-spout-and-2-bolts-ok.png, 
> 20141118-0.9.3-branch-one-worker-ok.png, 
> 20141118-0.9.3-branch-three-workers-1-spout-3-bolts-ok.png, Storm UI1.pdf, 
> Storm UI2.pdf, storm-0.9.3-rc1-failing-tuples.png, 
> storm-0_9_2-incubating-failing-tuples.png, 
> storm-0_9_2-incubating-no-failing-tuples.png, 
> storm-failed-tuples-multi-node.png, storm-multi-node-without-350.png
>
>






[jira] [Updated] (STORM-350) Update disruptor to latest version (3.3.2)

2015-10-28 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-350:
---
Summary: Update disruptor to latest version (3.3.2)  (was: Update disruptor 
to latest version (3.2.2))

> Update disruptor to latest version (3.3.2)
> --
>
> Key: STORM-350
> URL: https://issues.apache.org/jira/browse/STORM-350
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>  Components: storm-core
>Reporter: Boris Aksenov
>Assignee: Robert Joseph Evans
> Fix For: 0.10.0
>
> Attachments: 
> 20141117-0.9.3-rc1-3-worker-separate-1-spout-and-2-bolts-failing-tuples.png, 
> 20141117-0.9.3-rc1-one-worker-failing-tuples.png, 
> 20141117-0.9.3-rc1-three-workers-1-spout-3-bolts-failing-tuples.png, 
> 20141118-0.9.3-branch-3-worker-separate-1-spout-and-2-bolts-ok.png, 
> 20141118-0.9.3-branch-one-worker-ok.png, 
> 20141118-0.9.3-branch-three-workers-1-spout-3-bolts-ok.png, Storm UI1.pdf, 
> Storm UI2.pdf, storm-0.9.3-rc1-failing-tuples.png, 
> storm-0_9_2-incubating-failing-tuples.png, 
> storm-0_9_2-incubating-no-failing-tuples.png, 
> storm-failed-tuples-multi-node.png, storm-multi-node-without-350.png
>
>






[jira] [Created] (STORM-1125) Separate ZK Write Client for Nimbus

2015-10-22 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1125:
---

 Summary: Separate ZK Write Client for Nimbus
 Key: STORM-1125
 URL: https://issues.apache.org/jira/browse/STORM-1125
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Kishor Patil
Assignee: Kishor Patil


Given the volume of ZooKeeper reads issued by Nimbus, sharing a single ZK 
connection for both reads and writes quickly overwhelms that connection, and 
writes end up blocked behind reads they do not depend on. A separate write 
client would isolate the two workloads.





[jira] [Created] (STORM-1121) Improve Nimbus Topology submission time

2015-10-21 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1121:
---

 Summary: Improve Nimbus Topology submission time
 Key: STORM-1121
 URL: https://issues.apache.org/jira/browse/STORM-1121
 Project: Apache Storm
  Issue Type: Bug
Reporter: Kishor Patil
Assignee: Kishor Patil


It appears that Nimbus blocks itself as the number of active topologies grows: 
the submitTopology response time for each newly submitted topology increases 
roughly exponentially.





[jira] [Resolved] (STORM-1106) Netty Client Connection Attempts should not be limited

2015-10-15 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1106.
-
   Resolution: Fixed
Fix Version/s: 0.11.0
   0.10.0

This was merged into master and the 0.10.x branch.

> Netty Client Connection Attempts should not be limited
> --
>
> Key: STORM-1106
> URL: https://issues.apache.org/jira/browse/STORM-1106
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.10.0
>Reporter: Kishor Patil
>    Assignee: Kishor Patil
>Priority: Blocker
> Fix For: 0.10.0, 0.11.0
>
>
> Workers should not give up making connections to other workers; limiting 
> connection attempts could cause a worker to be blocked forever.





[jira] [Created] (STORM-1110) System Components Page is not available

2015-10-14 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1110:
---

 Summary: System Components Page is not available
 Key: STORM-1110
 URL: https://issues.apache.org/jira/browse/STORM-1110
 Project: Apache Storm
  Issue Type: Bug
Reporter: Kishor Patil



{code}
org.apache.thrift7.transport.TTransportException
at 
org.apache.thrift7.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
at 
org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at 
org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
at 
org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at 
org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at 
org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:69)
at 
backtype.storm.generated.Nimbus$Client.recv_getComponentPageInfo(Nimbus.java:793)
at 
backtype.storm.generated.Nimbus$Client.getComponentPageInfo(Nimbus.java:777)
at backtype.storm.ui.core$component_page.invoke(core.clj:728)
at backtype.storm.ui.core$fn__11410.invoke(core.clj:869)
at 
org.apache.storm.shade.compojure.core$make_route$fn__8404.invoke(core.clj:93)
at 
org.apache.storm.shade.compojure.core$if_route$fn__8392.invoke(core.clj:39)
at 
org.apache.storm.shade.compojure.core$if_method$fn__8385.invoke(core.clj:24)
at 
org.apache.storm.shade.compojure.core$routing$fn__8410.invoke(core.clj:106)
at clojure.core$some.invoke(core.clj:2570)
at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:632)
at 
org.apache.storm.shade.compojure.core$routes$fn__8414.invoke(core.clj:111)
at 
org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__10896.invoke(json.clj:56)
at 
org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__8938.invoke(multipart_params.clj:103)
at 
org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__9777.invoke(reload.clj:22)
at backtype.storm.ui.core$catch_errors$fn__11523.invoke(core.clj:1003)
at 
org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__8869.invoke(keyword_params.clj:27)
at 
org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__8909.invoke(nested_params.clj:65)
at 
org.apache.storm.shade.ring.middleware.params$wrap_params$fn__8840.invoke(params.clj:55)
at 
org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__8938.invoke(multipart_params.clj:103)
at 
org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__9124.invoke(flash.clj:14)
at 
org.apache.storm.shade.ring.middleware.session$wrap_session$fn__9112.invoke(session.clj:43)
at 
org.apache.storm.shade.ring.middleware.cookies$wrap_cookies$fn__9040.invoke(cookies.clj:160)
at 
org.apache.storm.shade.ring.util.servlet$make_service_method$fn__8746.invoke(servlet.clj:127)
at 
org.apache.storm.shade.ring.util.servlet$servlet$fn__8750.invoke(servlet.clj:136)
at 
org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown
 Source)
at 
org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
at 
org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320)
at 
org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247)
at 
org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210)
at 
org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at 
org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443)
at 
org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at 
org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at 
org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at 
org.apache.storm.shade.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.apache.storm.shade.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116

[jira] [Created] (STORM-1106) Netty Client Connection Attempts should not be limited

2015-10-12 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1106:
---

 Summary: Netty Client Connection Attempts should not be limited
 Key: STORM-1106
 URL: https://issues.apache.org/jira/browse/STORM-1106
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 0.10.0
Reporter: Kishor Patil
Assignee: Kishor Patil
Priority: Blocker


Workers should not give up making connections to other workers; limiting 
connection attempts could cause a worker to be blocked forever.
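The intended behavior can be sketched as a reconnection loop with no attempt cap. This is illustrative only; 'connect' and 'backoff' stand in for the real Netty client internals:

```python
import time

def reconnect_forever(connect, backoff):
    """Retry 'connect' until it succeeds, with no limit on attempts.

    'connect' returns a connection or raises OSError; 'backoff' maps the
    attempt number to a sleep in seconds. Sketch only, not Storm's code.
    """
    attempt = 1
    while True:
        try:
            return connect()
        except OSError:
            # Never give up: sleep according to the backoff policy and retry.
            time.sleep(backoff(attempt))
            attempt += 1
```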





[jira] [Assigned] (STORM-1107) Remove deprecated Config STORM_MESSAGING_NETTY_MAX_RETRIES and fix Netty Client backoff calculations

2015-10-12 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil reassigned STORM-1107:
---

Assignee: Kishor Patil

> Remove deprecated Config STORM_MESSAGING_NETTY_MAX_RETRIES and fix Netty 
> Client backoff calculations
> 
>
> Key: STORM-1107
> URL: https://issues.apache.org/jira/browse/STORM-1107
> Project: Apache Storm
>  Issue Type: Bug
>    Reporter: Kishor Patil
>Assignee: Kishor Patil
>Priority: Minor
>
> Since the Netty client should not limit retry attempts, we should not use the 
> deprecated STORM_MESSAGING_NETTY_MAX_RETRIES configuration in the backoff 
> calculation.





[jira] [Created] (STORM-1107) Remove deprecated Config STORM_MESSAGING_NETTY_MAX_RETRIES and fix Netty Client backoff calculations

2015-10-12 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-1107:
---

 Summary: Remove deprecated Config 
STORM_MESSAGING_NETTY_MAX_RETRIES and fix Netty Client backoff calculations
 Key: STORM-1107
 URL: https://issues.apache.org/jira/browse/STORM-1107
 Project: Apache Storm
  Issue Type: Bug
Reporter: Kishor Patil
Priority: Minor


Since the Netty client should not limit retry attempts, we should not use the 
deprecated STORM_MESSAGING_NETTY_MAX_RETRIES configuration in the backoff 
calculation.
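A sketch of the backoff calculation this implies: the delay grows from a base value up to a cap, and no max-retries setting enters the formula. The parameter names here are illustrative, not Storm's exact config keys:

```python
def backoff_ms(attempt, base_ms=100, max_ms=1000):
    """Sleep before reconnection attempt 'attempt' (1-based).

    Exponential backoff capped at max_ms; the attempt count itself is
    unbounded, so no max-retries value appears in the calculation.
    """
    return min(max_ms, base_ms * (2 ** (attempt - 1)))

# The delay doubles until it reaches the cap, then stays there forever.
delays = [backoff_ms(a) for a in range(1, 7)]
```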





[jira] [Resolved] (STORM-477) Incorrectly set JAVA_HOME is not detected

2015-06-26 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-477.

Resolution: Fixed

 Incorrectly set JAVA_HOME is not detected
 -

 Key: STORM-477
 URL: https://issues.apache.org/jira/browse/STORM-477
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.9.2-incubating
 Environment: Rhel6
Reporter: Paul Poulosky
Assignee: Paul Poulosky
Priority: Minor
  Labels: newbie
 Fix For: 0.11.0


 If JAVA_HOME is incorrectly set in a user's environment when launching storm, 
 it fails with an error message that is confusing to end users:
 Traceback (most recent call last):
   File "/home/y/bin/storm", line 485, in <module>
     main()
   File "/home/y/bin/storm", line 482, in main
     (COMMANDS.get(COMMAND, unknown_command))(*ARGS)
   File "/home/y/bin/storm", line 225, in listtopos
     extrajars=[USER_CONF_DIR, STORM_DIR + "/bin"])
   File "/home/y/bin/storm", line 153, in exec_storm_class
     ] + jvmopts + [klass] + list(args)
   File "/home/y/bin/storm", line 97, in confvalue
     p = sub.Popen(command, stdout=sub.PIPE)
   File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
     errread, errwrite)
   File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
     raise child_exception
 It would be nice if this were either detected and a proper error message 
 printed, or if it warned and fell back to the java found in PATH.
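A sketch of the suggested fix (illustrative, not the actual bin/storm patch): validate JAVA_HOME up front and fall back to the java on PATH with a warning:

```python
import os

def resolve_java(environ=None):
    """Return a usable 'java' executable path for launching storm classes.

    If JAVA_HOME is set but has no executable bin/java, print a clear
    warning and fall back to whatever 'java' resolves to on PATH.
    """
    environ = os.environ if environ is None else environ
    java_home = environ.get("JAVA_HOME")
    if java_home:
        candidate = os.path.join(java_home, "bin", "java")
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
        print("WARNING: JAVA_HOME=%s has no usable bin/java; "
              "falling back to java found in PATH" % java_home)
    return "java"  # let the OS resolve it from PATH
```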





[jira] [Updated] (STORM-477) Incorrectly set JAVA_HOME is not detected

2015-06-26 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-477:
---
Assignee: Paul Poulosky

 Incorrectly set JAVA_HOME is not detected
 -

 Key: STORM-477
 URL: https://issues.apache.org/jira/browse/STORM-477
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.9.2-incubating
 Environment: Rhel6
Reporter: Paul Poulosky
Assignee: Paul Poulosky
Priority: Minor
  Labels: newbie
 Fix For: 0.11.0


 If JAVA_HOME is incorrectly set in a user's environment when launching storm, 
 it fails with an error message that is confusing to end users:
 Traceback (most recent call last):
   File "/home/y/bin/storm", line 485, in <module>
     main()
   File "/home/y/bin/storm", line 482, in main
     (COMMANDS.get(COMMAND, unknown_command))(*ARGS)
   File "/home/y/bin/storm", line 225, in listtopos
     extrajars=[USER_CONF_DIR, STORM_DIR + "/bin"])
   File "/home/y/bin/storm", line 153, in exec_storm_class
     ] + jvmopts + [klass] + list(args)
   File "/home/y/bin/storm", line 97, in confvalue
     p = sub.Popen(command, stdout=sub.PIPE)
   File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
     errread, errwrite)
   File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
     raise child_exception
 It would be nice if this were either detected and a proper error message 
 printed, or if it warned and fell back to the java found in PATH.





[jira] [Created] (STORM-874) Netty Threads do not handle Errors properly

2015-06-17 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-874:
--

 Summary: Netty Threads do not handle Errors properly
 Key: STORM-874
 URL: https://issues.apache.org/jira/browse/STORM-874
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.9.2-incubating, 0.10.0
Reporter: Kishor Patil


When memory is low, a Netty thread can hit an OutOfMemoryError which, if not 
handled correctly, can lead to unexpected behavior such as Netty connection leaks.
{code:java}
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658) ~[?:1.8.0_25]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
~[?:1.8.0_25]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_25]
at 
org.jboss.netty.buffer.ChannelBuffers.directBuffer(ChannelBuffers.java:167) 
~[netty-3.9.4.Final.jar:?]
at 
org.jboss.netty.buffer.ChannelBuffers.directBuffer(ChannelBuffers.java:151) 
~[netty-3.9.4.Final.jar:?]
at 
backtype.storm.messaging.netty.MessageBatch.buffer(MessageBatch.java:101) 
~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
at 
backtype.storm.messaging.netty.MessageEncoder.encode(MessageEncoder.java:32) 
~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
at 
org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:66)
 ~[netty-3.9.4.Final.jar:?]
at 
org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
 ~[netty-3.9.4.Final.jar:?]
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
 ~[netty-3.9.4.Final.jar:?]
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
 ~[netty-3.9.4.Final.jar:?]
at org.jboss.netty.channel.Channels.write(Channels.java:704) 
~[netty-3.9.4.Final.jar:?]
at org.jboss.netty.channel.Channels.write(Channels.java:671) 
~[netty-3.9.4.Final.jar:?]
at 
org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:248) 
~[netty-3.9.4.Final.jar:?]
at 
backtype.storm.messaging.netty.Client.tryDeliverMessages(Client.java:226) 
~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
at backtype.storm.messaging.netty.Client.send(Client.java:173) 
~[storm-core-0.9.2-incubating-security.jar:0.9.2-incubating-security]
{code}






[jira] [Updated] (STORM-864) Exclude storm-kafka tests from Travis CI build

2015-06-15 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-864:
---
Fix Version/s: 0.11.0

 Exclude storm-kafka tests from Travis CI build
 --

 Key: STORM-864
 URL: https://issues.apache.org/jira/browse/STORM-864
 Project: Apache Storm
  Issue Type: Sub-task
  Components: storm-kafka
Reporter: Jungtaek Lim
Assignee: Jungtaek Lim
Priority: Minor
 Fix For: 0.11.0


 We observed that almost all test failures on Travis CI come from storm-kafka.
 It is really bad, and I adjusted the backoff timeout values upward with no 
 luck; that introduced other strange issues, which blocked me.
 IMO it seems to be an issue with slow machines, so for now we can just turn 
 off the storm-kafka tests in the Travis CI build and make it stable first. 
 We can try to resolve the underlying issue later, though I wonder whether it 
 could be solved without excluding the tests. 





[jira] [Resolved] (STORM-864) Exclude storm-kafka tests from Travis CI build

2015-06-15 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-864.

Resolution: Fixed

 Exclude storm-kafka tests from Travis CI build
 --

 Key: STORM-864
 URL: https://issues.apache.org/jira/browse/STORM-864
 Project: Apache Storm
  Issue Type: Sub-task
  Components: storm-kafka
Reporter: Jungtaek Lim
Assignee: Jungtaek Lim
Priority: Minor
 Fix For: 0.11.0


 We observed that almost all test failures on Travis CI come from storm-kafka.
 It is really bad, and I adjusted the backoff timeout values upward with no 
 luck; that introduced other strange issues, which blocked me.
 IMO it seems to be an issue with slow machines, so for now we can just turn 
 off the storm-kafka tests in the Travis CI build and make it stable first. 
 We can try to resolve the underlying issue later, though I wonder whether it 
 could be solved without excluding the tests. 





[jira] [Updated] (STORM-860) UI: while topology is transitioned to killed, Activate button is enabled but not functioning

2015-06-15 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-860:
---
Fix Version/s: 0.11.0

 UI: while topology is transitioned to killed, Activate button is enabled 
 but not functioning
 --

 Key: STORM-860
 URL: https://issues.apache.org/jira/browse/STORM-860
 Project: Apache Storm
  Issue Type: Bug
Reporter: Jungtaek Lim
Assignee: Jungtaek Lim
Priority: Minor
 Fix For: 0.11.0


 When I kill a topology from the UI, its state transitions to 'killed', but 
 the 'Activate' button is still enabled.
 When I push the button, a popup complains "Error while communicating to 
 nimbus".
 It would be better to disable the 'Activate' button while the topology is 
 being killed.





[jira] [Resolved] (STORM-860) UI: while topology is transitioned to killed, Activate button is enabled but not functioning

2015-06-15 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-860.

Resolution: Fixed

 UI: while topology is transitioned to killed, Activate button is enabled 
 but not functioning
 --

 Key: STORM-860
 URL: https://issues.apache.org/jira/browse/STORM-860
 Project: Apache Storm
  Issue Type: Bug
Reporter: Jungtaek Lim
Assignee: Jungtaek Lim
Priority: Minor
 Fix For: 0.11.0


 When I kill a topology from the UI, its state transitions to 'killed', but 
 the 'Activate' button is still enabled.
 When I push the button, a popup complains "Error while communicating to 
 nimbus".
 It would be better to disable the 'Activate' button while the topology is 
 being killed.





[jira] [Resolved] (STORM-847) Add cli to get the last storm error from the topology

2015-06-12 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-847.

Resolution: Fixed

Merged the pull request into master.

 Add cli to get the last storm error from the topology
 -

 Key: STORM-847
 URL: https://issues.apache.org/jira/browse/STORM-847
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Nikhil Singh
Assignee: Nikhil Singh
Priority: Minor
 Fix For: 0.11.0


 We often need to check the status of a topology after deployment, and a CLI 
 for fetching the last errors would be very useful. Currently the only way to 
 do this is through the UI APIs, which becomes cumbersome on secure hosted 
 Storm clusters.





[jira] [Created] (STORM-833) Logging framework logback - log4j 2.x

2015-05-22 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-833:
--

 Summary: Logging framework logback - log4j 2.x
 Key: STORM-833
 URL: https://issues.apache.org/jira/browse/STORM-833
 Project: Apache Storm
  Issue Type: Story
Reporter: Kishor Patil
Assignee: Kishor Patil


Based on the previous discussion about migrating the logging framework from 
logback to log4j 2.x, below is the set of changes we want to perform:
- Migrate from logback to log4j2
- Use the RFC5424 format for routing logs to rsyslog
- Get stdout and stderr logged and rolled
- Add username and sensitivity to the RFC5424 rsyslog message format.






[jira] [Created] (STORM-831) Add Jira and Central Logging URL to UI

2015-05-20 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-831:
--

 Summary: Add Jira and Central Logging URL to UI
 Key: STORM-831
 URL: https://issues.apache.org/jira/browse/STORM-831
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Kishor Patil
Assignee: Kishor Patil
Priority: Trivial


As a user, I would like the UI to show a link that takes me to JIRA for 
reporting bugs. Optionally, a link from the UI to a central logging system 
such as Splunk/Logstash/Kibana would also be helpful.





[jira] [Created] (STORM-797) Disruptor Queue message order issue

2015-04-23 Thread Kishor Patil (JIRA)
Kishor Patil created STORM-797:
--

 Summary: Disruptor Queue message order issue
 Key: STORM-797
 URL: https://issues.apache.org/jira/browse/STORM-797
 Project: Apache Storm
  Issue Type: Bug
Reporter: Kishor Patil


We notice that the ??DisruptorQueueTest.testMessageDisorder?? unit test fails 
intermittently:
{code}
java.lang.AssertionError: We expect to receive first published message first, 
but received null expected:<1> but was:<null>
  DisruptorQueueTest.testMessageDisorder:60 We expect to receive first 
published message first, but received null expected:<1> but was:<null>
{code}





[jira] [Resolved] (STORM-570) Switch from tablesorter to datatables jquery plugin

2015-03-19 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-570.

   Resolution: Fixed
Fix Version/s: 0.10.0

 Switch from tablesorter to datatables jquery plugin
 ---

 Key: STORM-570
 URL: https://issues.apache.org/jira/browse/STORM-570
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Minor
 Fix For: 0.10.0


 Datatables (http://www.datatables.net/) allows for more than just sorting; it 
 can do filtering and pagination if needed.  This is especially nice for 
 things like the config, which can be rather large and fill up the page.





[jira] [Updated] (STORM-570) Switch from tablesorter to datatables jquery plugin

2015-03-19 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-570:
---
Assignee: Robert Joseph Evans

 Switch from tablesorter to datatables jquery plugin
 ---

 Key: STORM-570
 URL: https://issues.apache.org/jira/browse/STORM-570
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Minor
 Fix For: 0.10.0


 Datatables (http://www.datatables.net/) allows for more than just sorting; it 
 can do filtering and pagination if needed.  This is especially nice for 
 things like the config, which can be rather large and fill up the page.





[jira] [Updated] (STORM-557) High Quality Images for presentations, etc.

2015-03-02 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated STORM-557:
---
Fix Version/s: 0.10.0

 High Quality Images for presentations, etc.
 ---

 Key: STORM-557
 URL: https://issues.apache.org/jira/browse/STORM-557
 Project: Apache Storm
  Issue Type: Documentation
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 0.10.0


 Recently I created a couple of svg diagrams for a poster I was doing about 
 secure storm.  I thought it was a complete waste to not release them as open 
 source, but I wasn't totally sure where we wanted to keep them.  I can check 
 them into git, because that would make it simple for others to find, and we 
 could link to them from the markdown documentation. But I could also put them 
 in the svn repo for the official site documentation.
 I'll throw up a very basic pull request and see what people think.





[jira] [Resolved] (STORM-557) High Quality Images for presentations, etc.

2015-03-02 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-557.

Resolution: Fixed

 High Quality Images for presentations, etc.
 ---

 Key: STORM-557
 URL: https://issues.apache.org/jira/browse/STORM-557
 Project: Apache Storm
  Issue Type: Documentation
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 0.10.0


 Recently I created a couple of svg diagrams for a poster I was doing about 
 secure storm.  I thought it was a complete waste to not release them as open 
 source, but I wasn't totally sure where we wanted to keep them.  I can check 
 them into git, because that would make it simple for others to find, and we 
 could link to them from the markdown documentation. But I could also put them 
 in the svn repo for the official site documentation.
 I'll throw up a very basic pull request and see what people think.


