[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011182#comment-15011182
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212112
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java ---
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public class BaseWorkerHook implements IWorkerHook, Serializable {
+    private static final long serialVersionUID = 2589466485198339529L;
+
+    @Override
+    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
+
+    }
+
+    @Override
+    public void shutdown() {
+    }
--- End diff --

Similar comment here.


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the TopologyBuilder level, to 
> be called when worker nodes start and stop.
> It is a very crucial functional piece that is missing from Storm. Many 
> projects have to integrate, for example, with various container-like 
> frameworks like Spring or Google Guice that need to be started and stopped in 
> a controlled fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L352
> I think the 

[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212243
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java ---
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public interface IWorkerHook extends Serializable {
+    void start(Map stormConf, WorkerTopologyContext context, List taskIds);
+    void shutdown();
+}
--- End diff --

Could you put in some javadoc comments about this interface and the methods 
in it.
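
For readers following the review, the kind of documentation and typing being
requested might look roughly like the sketch below. This is only an
illustration of the reviewer's ask (javadoc plus generic parameter types), not
the code that was eventually merged:

```java
package backtype.storm.hooks;

import backtype.storm.task.WorkerTopologyContext;

import java.io.Serializable;
import java.util.List;
import java.util.Map;

/**
 * A hook that runs once per worker process, letting topology authors execute
 * setup and teardown code (for example, starting a DI container) outside of
 * any individual bolt or spout.
 */
public interface IWorkerHook extends Serializable {
    /**
     * Called when the worker starts, before its tasks begin executing.
     *
     * @param stormConf the topology configuration for this worker
     * @param context   the worker-level topology context
     * @param taskIds   the ids of the tasks assigned to this worker
     */
    void start(Map<String, Object> stormConf, WorkerTopologyContext context, List<Integer> taskIds);

    /**
     * Called when the worker is shutting down.
     */
    void shutdown();
}
```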


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011183#comment-15011183
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212243
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java ---
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public interface IWorkerHook extends Serializable {
+    void start(Map stormConf, WorkerTopologyContext context, List taskIds);
+    void shutdown();
+}
--- End diff --

Could you put in some javadoc comments about this interface and the methods 
in it.



[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011181#comment-15011181
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212086
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java ---
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public class BaseWorkerHook implements IWorkerHook, Serializable {
+    private static final long serialVersionUID = 2589466485198339529L;
+
+    @Override
+    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
+
--- End diff --

Could we put a comment in the body here like `//NOOP`
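
What the reviewer is asking for here (an explicit `// NOOP` marker in the
otherwise empty bodies, plus the class-level javadoc requested elsewhere in
the review) would look roughly like the following. This is a sketch only, not
necessarily the code that was merged:

```java
package backtype.storm.hooks;

import backtype.storm.task.WorkerTopologyContext;

import java.io.Serializable;
import java.util.List;
import java.util.Map;

/**
 * A convenience base class so that worker hooks only need to override the
 * lifecycle methods they actually care about.
 */
public class BaseWorkerHook implements IWorkerHook, Serializable {
    private static final long serialVersionUID = 2589466485198339529L;

    @Override
    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
        // NOOP: subclasses override this to run worker start-up logic
    }

    @Override
    public void shutdown() {
        // NOOP: subclasses override this to run worker shutdown logic
    }
}
```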



[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212086
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java ---
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public class BaseWorkerHook implements IWorkerHook, Serializable {
+    private static final long serialVersionUID = 2589466485198339529L;
+
+    @Override
+    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
+
--- End diff --

Could we put a comment in the body here like `//NOOP`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011198#comment-15011198
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45213086
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java ---
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public interface IWorkerHook extends Serializable {
+    void start(Map stormConf, WorkerTopologyContext context, List taskIds);
+    void shutdown();
+}
--- End diff --

Especially describing the type that taskIds is.  It would be nice to put in 
generics for it if we could.



[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45213086
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java ---
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public interface IWorkerHook extends Serializable {
+    void start(Map stormConf, WorkerTopologyContext context, List taskIds);
+    void shutdown();
+}
--- End diff --

Especially describing the type that taskIds is.  It would be nice to put in 
generics for it if we could.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1214) Trident API Improvements

2015-11-18 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011375#comment-15011375
 ] 

Robert Joseph Evans commented on STORM-1214:


+1 for the concept.

> Trident API Improvements
> 
>
> Key: STORM-1214
> URL: https://issues.apache.org/jira/browse/STORM-1214
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> There are a few idiosyncrasies in the Trident API that can sometimes trip 
> developers up (e.g. when and how to set the parallelism of components). There 
> are also a few areas where the API could be made slightly more intuitive 
> (e.g. add Java 8 streams-like methods like {{filter()}}, {{map()}}, 
> {{flatMap()}}, etc.).
> Some of these concerns can be addressed through documentation, and some by 
> altering the API. Since we are approaching a 1.0 release, it would be good to 
> address any API changes before a major release.
> The goal of this JIRA is to identify specific areas of improvement and 
> formulate an implementation that addresses them.
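
For readers who have not used them, the java.util.stream operations that the
ticket wants Trident to mirror behave like this (plain Java 8 code, not
Trident API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamOpsExample {
    public static void main(String[] args) {
        List<String> sentences = Arrays.asList("storm trident", "api improvements");

        List<String> words = sentences.stream()
                .filter(s -> !s.isEmpty())                  // drop empty inputs
                .flatMap(s -> Arrays.stream(s.split(" ")))  // one sentence becomes many words
                .map(String::toUpperCase)                   // transform each element
                .collect(Collectors.toList());

        System.out.println(words); // prints [STORM, TRIDENT, API, IMPROVEMENTS]
    }
}
```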



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1187) Support for late and out of order events in time based windows

2015-11-18 Thread Arun Mahadevan (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011489#comment-15011489
 ] 

Arun Mahadevan commented on STORM-1187:
---

At a high level this is captured under "Tuple timestamp and out of order 
tuples" at 

https://github.com/arunmahadevan/storm/blob/cd051103bd05be52d4f621b847da8a200f7e7c79/docs/documentation/Windowing.md#tuple-timestamp-and-out-of-order-tuples

I am mostly through with the changes and can raise a PR for review once PR #855 
gets merged.

> Support for late and out of order events in time based windows
> --
>
> Key: STORM-1187
> URL: https://issues.apache.org/jira/browse/STORM-1187
> Project: Apache Storm
>  Issue Type: Sub-task
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>
> Right now the time-based windows use the timestamp at which the tuple is 
> received by the bolt. 
> However there are use cases where the tuples can be processed based on the 
> time when they are actually generated vs the time when they are received. So 
> we need to add support for processing events with a time lag and also have 
> some way to specify and read tuple timestamps.
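
Conceptually, event-time windowing with a lag means bucketing each tuple by a
timestamp it carries rather than by its arrival time, and only treating a
bucket as complete once the watermark (latest timestamp seen minus the allowed
lag) has passed the bucket's end. A minimal, Storm-free sketch of that
bookkeeping, under those assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy event-time bucketing: not Storm code, just the idea behind
// "process by generation time, tolerate a bounded lag".
public class EventTimeBuckets {
    private final long windowMillis;
    private final long lagMillis;
    private final Map<Long, List<String>> buckets = new HashMap<>();
    private long maxTimestampSeen = Long.MIN_VALUE;

    public EventTimeBuckets(long windowMillis, long lagMillis) {
        this.windowMillis = windowMillis;
        this.lagMillis = lagMillis;
    }

    /** Add an event using the timestamp it was generated with, not arrival time. */
    public void add(long eventTimeMillis, String event) {
        long bucketStart = (eventTimeMillis / windowMillis) * windowMillis;
        buckets.computeIfAbsent(bucketStart, k -> new ArrayList<>()).add(event);
        maxTimestampSeen = Math.max(maxTimestampSeen, eventTimeMillis);
    }

    /** A bucket is complete once the watermark (max seen time minus lag) passes its end. */
    public boolean isComplete(long bucketStart) {
        return maxTimestampSeen - lagMillis >= bucketStart + windowMillis;
    }
}
```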



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011345#comment-15011345
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157771089
  
@schonfeld we can make it work easily with specific components.  The 
taskIds are not needed when calling the start method of IWorkerHook; you can 
get them by calling getThisWorkerTasks on the WorkerTopologyContext, but 
that too is really minor.  You can also call getComponentId with a task id 
to get the component ID.

So if we wanted to we could have something like (probably does not compile)
```java
import backtype.storm.hooks.IWorkerHook;
import backtype.storm.task.WorkerTopologyContext;

import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ComponentWorkerHook implements IWorkerHook {
    private final IWorkerHook _wrapped;
    private final Set<String> _comps;
    private boolean _active = false;

    public ComponentWorkerHook(IWorkerHook wrapped, Set<String> comps) {
        _wrapped = wrapped;
        _comps = new HashSet<>(comps);
    }

    public void addComp(String comp) {
        _comps.add(comp);
    }

    @Override
    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
        // Activate the wrapped hook only if this worker runs at least one of
        // the components we care about.
        Set<String> found = new HashSet<>();
        for (Object id : taskIds) {
            found.add(context.getComponentId((Integer) id));
        }
        found.retainAll(_comps);
        _active = !found.isEmpty();
        if (_active) {
            _wrapped.start(stormConf, context, taskIds);
        }
    }

    @Override
    public void shutdown() {
        if (_active) {
            _wrapped.shutdown();
        }
    }
}
```

I am +1 for merging this in, but I want to hear from @Parth-Brahmbhatt if 
he really wants to change this to be config based instead of through 
TopologyBuilder.



[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011390#comment-15011390
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user schonfeld commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-15138
  
@revans2 yah, you're right.. I'll go ahead and remove the taskIds from 
@start. re ComponentWorkerHook... I can't think of a reason you'd want to run 
a WorkerHook only on specific components... And if you do want that, 
shouldn't we just implement `start` & `shutdown` methods in `IComponent`?


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the TopologyBuilder level, to 
> be called when worker nodes start and stop.
> It is a very crucial functional piece that is missing from Storm. Many 
> projects have to integrate, for example, with various container-like 
> frameworks like Spring or Google Guice that need to be started and stopped in 
> a controlled fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L352
> I think the interface for the lifecycle stuff should look something like this:
> interface WorkerHook extends Serializable {
> void start(Map conf, TopologyContext context, List taskIds);
> void shutdown();
> }
> You'll need to add a definition for worker hooks into the topology definition 
> Thrift structure:
> https://github.com/nathanmarz/storm/blob/master/src/storm.thrift#L91
> I think for the first go it's ok to make this a Java-only feature by adding 
> something like "4: list worker_hooks" to the StormTopology structure 
> (where the "binary" refers to a Java-serialized object).
> Then TopologyBuilder can have a simple "addWorkerHook" method that will 
> serialize the object and add it to the Thrift struct.
> --
> danehammer: I've started working on this. I've followed Nathan's proposed 
> design, but I keep hitting snags with calls to ThriftTopologyUtils, now that 
> there is an optional list on StormTopology.
> I would like to add some unit tests for what I change there, would it make 
> more sense for those to be in Java instead of Clojure? If so, are there any 
> strong preferences on what dependencies I add and how I go about adding Java 
> unit tests to storm-core?
> --
> nathanmarz: No... unit tests should remain in Clojure. You can run Java code 
> in 
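
The addWorkerHook idea quoted above (Java-serialize the hook object and store
the bytes in the topology's Thrift structure) can be sketched roughly as
follows. The helper class name and field handling here are hypothetical and
are not the actual TopologyBuilder code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import backtype.storm.hooks.IWorkerHook;

// Illustrative only: a builder-side helper that Java-serializes each hook so
// the resulting bytes could be placed into a "worker_hooks" list<binary> field.
public class WorkerHookSketch {
    private final List<ByteBuffer> serializedHooks = new ArrayList<>();

    public void addWorkerHook(IWorkerHook hook) {
        serializedHooks.add(ByteBuffer.wrap(javaSerialize(hook)));
    }

    public List<ByteBuffer> getSerializedHooks() {
        return serializedHooks;
    }

    private static byte[] javaSerialize(Object obj) {
        try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
            out.flush();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException("Could not serialize worker hook", e);
        }
    }
}
```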

[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011415#comment-15011415
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157780934
  
@schonfeld the reason for doing it per component is that you may have a 
framework, like Spring, that wants to be set up and cleaned up, but only a 
subset of the components use it.  I suppose having the worker hook run on 
workers that don't use Spring is probably not that bad, but I can also see that 
if it adds extra overhead some people may want to turn it off.

I am still +1 on this change.



[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011208#comment-15011208
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user schonfeld commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157751007
  
@Parth-Brahmbhatt @revans2 to be honest, the reason I implemented this as a 
worker-specific serializable hook is cos that's what @nathanmarz suggested in 
the discussion on 
[STORM-126](https://issues.apache.org/jira/browse/STORM-126)... Seemed like a 
logical approach, since it is bound to a worker and not a specific component..?



[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread schonfeld
Github user schonfeld commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157751007
  
@Parth-Brahmbhatt @revans2 to be honest, the reason I implemented this as a 
worker-specific serializable hook is cos that's what @nathanmarz suggested in 
the discussion on 
[STORM-126](https://issues.apache.org/jira/browse/STORM-126)... Seemed like a 
logical approach, since it is bound to a worker and not a specific component..?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212055
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java ---
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public class BaseWorkerHook implements IWorkerHook, Serializable {
--- End diff --

Could you add in some javadoc comments.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011200#comment-15011200
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user Parth-Brahmbhatt commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157749683
  
Thanks for the contribution. Any reason you decided to make this hook part 
of the serialized topology? If you look at some other examples, like the nimbus 
hook (though it is not really a great example as it's on the nimbus side and not 
the worker, but still close), we ask the user to just provide the fully qualified 
class name as a config option, create an instance of it using reflection, and 
invoke prepare (start in your case) or cleanup on that instance. The only 
advantage of putting this as part of the topology is that users can provide 
objects that are fully serialized, so a hook can be initialized with constructor 
args or in any other way that relies on instance variable initialization, but I 
don't see that as a huge upside. On the other hand, a consistent way to 
implement all hooks will make the code easier to read and reason about.
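
The config-plus-reflection pattern described here (mirroring how the
nimbus-side hook is registered) could be sketched like this; the config key
name and helper class are made up for illustration and are not actual Storm
options:

```java
import java.util.List;
import java.util.Map;

import backtype.storm.hooks.IWorkerHook;
import backtype.storm.task.WorkerTopologyContext;

public final class ConfigDrivenWorkerHooks {
    // Hypothetical config key listing fully qualified hook class names.
    public static final String WORKER_HOOK_CLASSES = "topology.worker.hook.classes";

    @SuppressWarnings("unchecked")
    public static void startHooks(Map stormConf, WorkerTopologyContext context, List<Integer> taskIds) {
        List<String> classNames = (List<String>) stormConf.get(WORKER_HOOK_CLASSES);
        if (classNames == null) {
            return;
        }
        for (String className : classNames) {
            try {
                // Reflection requires a public no-arg constructor, which is the
                // trade-off Parth mentions versus shipping a serialized instance.
                IWorkerHook hook = (IWorkerHook) Class.forName(className).newInstance();
                hook.start(stormConf, context, taskIds);
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException("Failed to load worker hook " + className, e);
            }
        }
    }
}
```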



[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011201#comment-15011201
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user schonfeld commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45213316
  
--- Diff: 
storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java ---
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
--- End diff --

I did that because all other test classes in `backtype.storm.utils` 
(DisruptorQueueBackpressureTest, DisruptorQueueTest, 
StormBoundedExponentialBackoffRetryTest, etc.) extend `TestCase`... Should I 
remove it?


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the topology builder 
> level, to be called when worker nodes start and stop.
> It is a crucial functional piece that is missing from Storm. Many projects 
> have to integrate, for example, with various container-like frameworks like 
> Spring or Google Guice that need to be started and stopped in a controlled 
> fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it, as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have a big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> 

[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157750106
  
Overall this looks good.  I have a few minor comments, mostly around code 
cleanup.  My biggest concern is that we seem to be installing the IWorkerHook 
on all workers, but not providing a simple way for them to selectively decide 
if they want to run on a per-bolt/spout basis.  But that can be done later on 
if we need it.




[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011179#comment-15011179
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212055
  
--- Diff: storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java ---
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public class BaseWorkerHook implements IWorkerHook, Serializable {
--- End diff --

Could you add some Javadoc comments?
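
A documented subclass illustrates the kind of Javadoc being asked for here. This 
is a hypothetical example; LoggingWorkerHook is not part of the PR, and the 
List<Integer> parameterization is assumed from the interface discussion:

```java
import java.util.List;
import java.util.Map;

import backtype.storm.hooks.BaseWorkerHook;
import backtype.storm.task.WorkerTopologyContext;

/**
 * Example hook that logs worker lifecycle transitions; shown only to
 * demonstrate the documentation style requested on the hook methods.
 */
public class LoggingWorkerHook extends BaseWorkerHook {
    /**
     * Invoked once when the worker process starts, before any task begins
     * executing; receives the topology config, the worker's context, and the
     * ids of the tasks assigned to this worker.
     */
    @Override
    public void start(Map stormConf, WorkerTopologyContext context, List<Integer> taskIds) {
        System.out.println("Worker " + context.getThisWorkerPort() + " starting tasks " + taskIds);
    }

    /** Invoked once as the worker process shuts down. */
    @Override
    public void shutdown() {
        System.out.println("Worker shutting down");
    }
}
```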


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the topology builder 
> level, to be called when worker nodes start and stop.
> It is a crucial functional piece that is missing from Storm. Many projects 
> have to integrate, for example, with various container-like frameworks like 
> Spring or Google Guice that need to be started and stopped in a controlled 
> fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it, as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have a big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L352
> I think the interface for the lifecycle stuff should look something like this:
> interface WorkerHook extends Serializable {
> void start(Map conf, TopologyContext context, List<Integer> taskIds);
> void shutdown();
> }
> You'll need to add a definition for worker hooks into the 

[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread schonfeld
Github user schonfeld commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45213316
  
--- Diff: 
storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java ---
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
--- End diff --

I did that because all other test classes in `backtype.storm.utils` 
(DisruptorQueueBackpressureTest, DisruptorQueueTest, 
StormBoundedExponentialBackoffRetryTest, etc) extend `TestCase`... Should I 
remove?




[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45221046
  
--- Diff: 
storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java ---
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
--- End diff --

It is very minor; I am fine with leaving it.




[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011301#comment-15011301
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45221046
  
--- Diff: 
storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java ---
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
--- End diff --

It is very minor; I am fine with leaving it.


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the topology builder 
> level, to be called when worker nodes start and stop.
> It is a crucial functional piece that is missing from Storm. Many projects 
> have to integrate, for example, with various container-like frameworks like 
> Spring or Google Guice that need to be started and stopped in a controlled 
> fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it, as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have a big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L352
> I think the interface for the lifecycle stuff should look something 

[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157771089
  
@schonfeld we can make it work easily with specific components.  The 
taskIds are not strictly needed when calling the start method of IWorkerHook: 
you can get them by calling getThisWorkerTasks on the WorkerTopologyContext, 
and you can map a task id to its component ID with getComponentId.  But that 
too is really minor.

So if we wanted to, we could have something like this rough sketch:
```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import backtype.storm.hooks.IWorkerHook;
import backtype.storm.task.WorkerTopologyContext;

public class ComponentWorkerHook implements IWorkerHook {
    private final IWorkerHook _wrapped;
    private final Set<String> _comps;
    private boolean _active = false;

    public ComponentWorkerHook(IWorkerHook wrapped, Set<String> comps) {
        _wrapped = wrapped;
        _comps = new HashSet<>(comps);
    }

    public void addComp(String comp) {
        _comps.add(comp);
    }

    @Override
    public void start(Map stormConf, WorkerTopologyContext context,
                      List<Integer> taskIds) {
        // Only activate the wrapped hook if this worker runs at least one
        // of the components we care about.
        Set<String> found = new HashSet<>();
        for (Integer id : taskIds) {
            found.add(context.getComponentId(id));
        }
        found.retainAll(_comps);
        _active = !found.isEmpty();
        if (_active) {
            _wrapped.start(stormConf, context, taskIds);
        }
    }

    @Override
    public void shutdown() {
        if (_active) {
            _wrapped.shutdown();
        }
    }
}
```
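
A possible way to wire such a wrapper up, assuming the addWorkerHook method this 
PR adds to TopologyBuilder; MyHook stands in for any user hook extending 
BaseWorkerHook and is purely illustrative:

```java
import java.util.Collections;

import backtype.storm.hooks.BaseWorkerHook;
import backtype.storm.topology.TopologyBuilder;

public class HookWiringExample {
    // Placeholder user hook; a real one would override start()/shutdown().
    static class MyHook extends BaseWorkerHook {}

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        // ... spouts and bolts registered as usual ...
        // ComponentWorkerHook is the wrapper sketched above (assumed on the classpath).
        // Run MyHook's lifecycle methods only on workers hosting the "counter" component.
        builder.addWorkerHook(
                new ComponentWorkerHook(new MyHook(), Collections.singleton("counter")));
    }
}
```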

I am +1 for merging this in, but I want to hear from @Parth-Brahmbhatt if 
he really wants to change this to be config based instead of through 
TopologyBuilder.




[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45211932
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/worker.clj ---
@@ -665,6 +686,8 @@
 (close-resources worker)
 
 ;; TODO: here need to invoke the "shutdown" method of 
WorkerHook
--- End diff --

Please remove this TODO, you implemented it.




[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212589
  
--- Diff: 
storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java ---
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
--- End diff --

For JUnit 4 I don't think you should extend TestCase.  Is there a reason you 
are doing it?
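
For comparison, a plain JUnit 4 test only needs the annotations and runs fine 
without the JUnit 3 base class (a generic sketch, not code from this PR):

```java
import org.junit.Assert;
import org.junit.Test;

public class NoTestCaseNeededTest {
    @Test
    public void annotatedMethodsRunWithoutExtendingTestCase() {
        Assert.assertEquals(4, 2 + 2);
    }
}
```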




[jira] [Commented] (STORM-956) When the execute() or nextTuple() hang on external resources, stop the Worker's heartbeat

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011248#comment-15011248
 ] 

ASF GitHub Bot commented on STORM-956:
--

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/647#issuecomment-157757168
  
I personally can see both sides of this.  There are situations where a 
bolt/spout may be hung because of a bug, like a thread deadlock in some 
external client, and restarting the worker will fix the issue.  But I agree 
that this should be a very rare situation.  

I don't get the argument that this is expensive if we get it wrong.  We 
already fail fast if some bolt or spout throws an unexpected exception.  I 
don't see this being that different.

A bolt or spout not able to process anything for 5 mins seems like an OK 
time to see if we can restart things, especially for a low latency framework.  
I personally am +1 on the concept of having timeouts.  I would like to see some 
changes to the implementation of this patch, but there is no reason to go into 
that if @kishorvpatil has a -1 on even the idea of it.  @kishorvpatil and 
@bastiliu have I swayed you at all with my argument?


> When the execute() or nextTuple() hang on external resources, stop the 
> Worker's heartbeat
> -
>
> Key: STORM-956
> URL: https://issues.apache.org/jira/browse/STORM-956
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Chuanlei Ni
>Assignee: Chuanlei Ni
>Priority: Minor
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> Sometimes the work threads produced by mk-threads in executor.clj hang on 
> external resources or for other unknown reasons. This makes the workers stop 
> processing tuples. I think it is better to kill such a worker to resolve the 
> "hang". I plan to:
> 1. like `setup-ticks`, send a system-tick to the receive-queue
> 2. have the tuple-action-fn handle this system-tick and record the time at 
> which it processed the tuple in the executor-data
> 3. when the worker does its local heartbeat, check the time the executor wrote 
> to executor-data. If that time is too far in the past (for example, 3 minutes), 
> the worker skips the heartbeat, so the supervisor can deal with the problem.
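
To picture the proposal outside of Clojure: each executor records when it last 
made progress, and the worker heartbeats only while that timestamp is fresh. A 
hypothetical Java sketch of the bookkeeping (the names and defaults are 
assumptions, not the actual patch):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ExecutorLiveness {
    private final AtomicLong lastProgressMs = new AtomicLong(System.currentTimeMillis());
    private final long maxSilenceMs;

    public ExecutorLiveness(long maxSilence, TimeUnit unit) {
        this.maxSilenceMs = unit.toMillis(maxSilence);
    }

    /** Called from the tuple-processing loop, e.g. whenever a system tick is handled. */
    public void recordProgress() {
        lastProgressMs.set(System.currentTimeMillis());
    }

    /** True while the executor has made progress within the allowed window. */
    public boolean isAlive() {
        return System.currentTimeMillis() - lastProgressMs.get() <= maxSilenceMs;
    }
}
```

A worker-level heartbeat loop would then skip its heartbeat whenever any 
executor's isAlive() returns false, letting the supervisor restart the worker.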





[GitHub] storm pull request: add storm-id to worker log filename

2015-11-18 Thread zhuoliu
Github user zhuoliu commented on the pull request:

https://github.com/apache/storm/pull/886#issuecomment-157761155
  
Hi jessicasco, for the versions before workers-artifacts, the following 
code in util.clj might achieve what you want:
```
(defn- logs-rootname [storm-id port]
  (str storm-id "-worker-" port))

(defn logs-filename [storm-id port]
  (str (logs-rootname storm-id port) ".log"))

(defn logs-metadata-filename [storm-id port]
  (str (logs-rootname storm-id port) ".yaml"))

(def worker-log-filename-pattern #"^((.*-\d+-\d+)-worker-(\d+))\.log")

(defn get-log-metadata-file
  ([fname]
    (if-let [[_ _ id port] (re-matches worker-log-filename-pattern fname)]
      (get-log-metadata-file id port)))
  ([id port]
    (clojure.java.io/file LOG-DIR "metadata" (logs-metadata-filename id port))))
```




[GitHub] storm pull request: [STORM-126] Add Lifecycle support API for work...

2015-11-18 Thread Parth-Brahmbhatt
Github user Parth-Brahmbhatt commented on the pull request:

https://github.com/apache/storm/pull/884#issuecomment-157749683
  
Thanks for the contribution. Any reason you decided to make this hook part 
of the serialized topology? If you look at some other examples, like the nimbus 
hook (not a great example since it is on the nimbus side rather than the worker, 
but still close), we ask the user to just provide the fully qualified classname 
as a config option, create an instance of it using reflection, and invoke 
prepare (start in your case) or cleanup on that instance. The only advantage of 
putting this in the topology is that users can provide objects that are 
completely serialized, so a hook can be initialized with constructor args or in 
any other way that relies on instance variable initialization, but I don't see 
that as a huge upside. On the other hand, a consistent way to implement all 
hooks will make the code easier to read and reason about.
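
A minimal sketch of the config-driven alternative described above, assuming a 
hypothetical config key and a hook class with a no-argument constructor (neither 
the key nor the loader class exists in the PR):

```java
import java.util.List;
import java.util.Map;

import backtype.storm.hooks.IWorkerHook;
import backtype.storm.task.WorkerTopologyContext;

public final class ConfigDrivenHookLoader {
    // Hypothetical config key; not an existing Storm config option.
    public static final String WORKER_HOOK_CLASS = "topology.worker.hook.class";

    /** Instantiates the configured hook via reflection and starts it, if any is set. */
    public static IWorkerHook loadAndStart(Map conf,
                                           WorkerTopologyContext context,
                                           List<Integer> taskIds) throws Exception {
        String className = (String) conf.get(WORKER_HOOK_CLASS);
        if (className == null) {
            return null; // no hook configured
        }
        IWorkerHook hook = (IWorkerHook) Class.forName(className).newInstance();
        hook.start(conf, context, taskIds);
        return hook;
    }
}
```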




[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011176#comment-15011176
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45211932
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/worker.clj ---
@@ -665,6 +686,8 @@
 (close-resources worker)
 
 ;; TODO: here need to invoke the "shutdown" method of 
WorkerHook
--- End diff --

Please remove this TODO, you implemented it.


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the topology builder 
> level, to be called when worker nodes start and stop.
> It is a crucial functional piece that is missing from Storm. Many projects 
> have to integrate, for example, with various container-like frameworks like 
> Spring or Google Guice that need to be started and stopped in a controlled 
> fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it, as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have a big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L352
> I think the interface for the lifecycle stuff should look something like this:
> interface WorkerHook extends Serializable {
> void start(Map conf, TopologyContext context, List<Integer> taskIds);
> void shutdown();
> }
> You'll need to add a definition for worker hooks into the topology definition 
> Thrift structure:
> https://github.com/nathanmarz/storm/blob/master/src/storm.thrift#L91
> I think for the first go it's ok to make this a Java-only feature by adding 
> something like "4: list worker_hooks" to the StormTopology structure 
> (where the "binary" refers to a Java-serialized object).
> Then TopologyBuilder can have a simple "addWorkerHook" method that will 
> serialize the object and add it to the Thrift struct.
> --
> danehammer: I've started working on this. I've followed Nathan's proposed 
> design, but I keep hitting snags with calls to ThriftTopologyUtils, now that 
> there is an optional list on StormTopology.
> I would like to add some unit tests for what I change there, would it make 
> more sense for those to be in Java instead of Clojure? If so, are there any 
> strong preferences on what dependencies I add and how I go about adding Java 
> unit tests to storm-core?
> --
> nathanmarz: No... unit tests should remain in Clojure. You can run 

[jira] [Commented] (STORM-126) Add Lifecycle support API for worker nodes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011191#comment-15011191
 ] 

ASF GitHub Bot commented on STORM-126:
--

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/884#discussion_r45212589
  
--- Diff: 
storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java ---
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
--- End diff --

For JUnit 4 I don't think you should extend TestCase.  Is there a reason you 
are doing it?


> Add Lifecycle support API for worker nodes
> --
>
> Key: STORM-126
> URL: https://issues.apache.org/jira/browse/STORM-126
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: James Xu
>Assignee: Michael Schonfeld
>Priority: Minor
> Fix For: 0.11.0
>
>
> https://github.com/nathanmarz/storm/issues/155
> Storm is already used in a variety of environments. It is important that Storm 
> provides some form of "lifecycle" API, specified at the topology builder 
> level, to be called when worker nodes start and stop.
> It is a crucial functional piece that is missing from Storm. Many projects 
> have to integrate, for example, with various container-like frameworks like 
> Spring or Google Guice that need to be started and stopped in a controlled 
> fashion before worker nodes begin or finish their work.
> I think something like a WorkerContextListener interface with two methods, 
> onStartup(SomeContextClass) and onShutdown(SomeContextClass), could go a very 
> long way toward allowing users to plug in various third-party libraries 
> easily.
> Then, the TopologyBuilder needs to be modified to accept classes that 
> implement this interface.
> SomeContextClass does not need to be much more than a Map for now. But it is 
> important to have it, as it allows propagation of information between those 
> lifecycle context listeners.
> Nathan, it would be interesting to hear your opinion.
> Thanks!
> --
> nathanmarz: I agree, this should be added to Storm. The lifecycle methods 
> should be parameterized with which tasks are running in this worker.
> Additionally, I think lifecycle methods should be added for bolt/spouts in 
> the context of workers. Sometimes there's some code you want to run for a 
> spout/bolt within a worker only one time, regardless of how many tasks for 
> that bolt are within the worker. Then individual tasks should be able to 
> access that "global state" within the worker for that spout/bolt.
> --
> kyrill007: Thank you, Nathan, I think it would be relatively simple to 
> implement and would have a big impact. Now we're forced to manage container 
> initializations through lazy static fields. You'd love to see that code. :-)
> --
> nathanmarz: Yup, this should be fairly easy to implement. I encourage you to 
> submit a patch for this.
> --
> kyrill007: Oh, well, my Clojure is unfortunately too weak for this kind of 
> work. But I am working on it... Any pointers as to where in Storm code 
> workers are started and stopped?
> --
> nathanmarz: Here's the function that's called to start a worker: 
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L315
> And here's the code in the same file that shuts down a worker:
> https://github.com/nathanmarz/storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L352
> I think the interface 

[jira] [Commented] (STORM-1187) Support for late and out of order events in time based windows

2015-11-18 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011207#comment-15011207
 ] 

Parth Brahmbhatt commented on STORM-1187:
-

[~arunmahadevan] Do you want to post your design doc for review?

> Support for late and out of order events in time based windows
> --
>
> Key: STORM-1187
> URL: https://issues.apache.org/jira/browse/STORM-1187
> Project: Apache Storm
>  Issue Type: Sub-task
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>
> Right now the time-based windows use the timestamp at which the tuple is 
> received by the bolt. 
> However, there are use cases where tuples should be processed based on the 
> time when they are actually generated rather than the time when they are 
> received. So we need to add support for processing events with a time lag and 
> also have some way to specify and read tuple timestamps.
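
One generic way to picture the lag handling (an illustrative sketch only; none 
of these names come from Storm's windowing API): track the largest event 
timestamp seen so far and treat anything older than that high-water mark minus 
the allowed lag as late.

```java
public class LagFilter {
    private final long allowedLagMs;
    private long maxSeenTimestampMs = Long.MIN_VALUE;

    public LagFilter(long allowedLagMs) {
        this.allowedLagMs = allowedLagMs;
    }

    /**
     * Returns true if a tuple generated at eventTimeMs should still be
     * processed; tuples older than the newest seen timestamp minus the
     * allowed lag are treated as late and dropped or routed elsewhere.
     */
    public boolean accept(long eventTimeMs) {
        maxSeenTimestampMs = Math.max(maxSeenTimestampMs, eventTimeMs);
        return eventTimeMs >= maxSeenTimestampMs - allowedLagMs;
    }
}
```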





[GitHub] storm pull request: (STORM-956) When the execute() or nextTuple() ...

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/647#issuecomment-157757168
  
I personally can see both sides of this.  There are situations where a 
bolt/spout may be hung because of a bug, like a thread deadlock in some 
external client, and restarting the worker will fix the issue.  But I agree 
that this should be a very rare situation.  

I don't get the argument that this is expensive if we get it wrong.  We 
already fail fast if some bolt or spout throws an unexpected exception.  I 
don't see this being that different.

A bolt or spout not able to process anything for 5 mins seems like an OK 
time to see if we can restart things, especially for a low latency framework.  
I personally am +1 on the concept of having timeouts.  I would like to see some 
changes to the implementation of this patch, but there is no reason to go into 
that if @kishorvpatil has a -1 on even the idea of it.  @kishorvpatil and 
@bastiliu have I swayed you at all with my argument?




[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-18 Thread d2r
Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45240246
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -24,9 +24,13 @@
[org.apache.commons.io FileUtils]
[java.io File])
   (:use [backtype.storm config util log timer local-state])
+  (:import [backtype.storm.generated AuthorizationException 
KeyNotFoundException WorkerResources])
+  (:import [java.util.concurrent Executors])
   (:import [backtype.storm.utils VersionInfo])
+  (:import [java.nio.file Files Path Paths StandardCopyOption])
--- End diff --

`Path` and `Paths` unused?




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011651#comment-15011651
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45241587
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -326,16 +330,60 @@
  (log-error t "Error when 
processing event")
  (exit-process! 20 "Error when 
processing an event")
  ))
+   :blob-update-timer (mk-timer :kill-fn (fn [t]
+   (log-error t "Error when 
processing event")
+   (exit-process! 20 "Error when 
processing a event")))
--- End diff --

Can we set `:timer-name` here in addition to the `:kill-fn`?
It might also be helpful for debugging to give a name here to the anonymous 
`fn`, as we have seen some mysterious hangs while logging.


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this a new API should be added to support uploading and 
> downloading dist cache items.  storm-core.ser, storm-conf.ser and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and does not need anything extra.  But having an HDFS backend 
> too would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs.  It should also allow the blobs to be updated and switch the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo! we are in the process 
> of getting it ready to push back to open source shortly.
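
The atomic symlink switch mentioned above can be done with java.nio.file by 
creating a temporary link and renaming it over the old one. A small sketch under 
the assumption of POSIX rename semantics; the helper and paths are illustrative, 
not the Yahoo! implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public final class SymlinkSwap {
    /** Repoints 'link' at 'newTarget' without readers ever seeing a missing link. */
    public static void repoint(Path link, Path newTarget) throws IOException {
        Path tmp = link.resolveSibling(link.getFileName() + ".tmp");
        Files.deleteIfExists(tmp);
        Files.createSymbolicLink(tmp, newTarget);
        // rename(2) atomically replaces the old link on POSIX file systems.
        Files.move(tmp, link, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        repoint(Paths.get("blobs/storm.jar"), Paths.get("blobcache/storm.jar.v2"));
    }
}
```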





[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-18 Thread d2r
Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45241587
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -326,16 +330,60 @@
  (log-error t "Error when 
processing event")
  (exit-process! 20 "Error when 
processing an event")
  ))
+   :blob-update-timer (mk-timer :kill-fn (fn [t]
+   (log-error t "Error when 
processing event")
+   (exit-process! 20 "Error when 
processing a event")))
--- End diff --

Can we set `:timer-name` here in addition to the `:kill-fn`?
It might also be helpful for debugging to give a name here to the anonymous 
`fn`, as we have seen some mysterious hangs while logging.




[jira] [Comment Edited] (STORM-885) Heartbeat Server (Pacemaker)

2015-11-18 Thread Kyle Nusbaum (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011663#comment-15011663
 ] 

Kyle Nusbaum edited comment on STORM-885 at 11/18/15 6:52 PM:
--

[~LongdaFeng]
I disagree with your assessment of this patch.

Addressing each of your numbered points:
1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to do extension. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straight-forward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
I see the benefit of a 'TopologyMaster', but I don't see the benefit in 
blocking the merge of Pacemaker at this point.


was (Author: knusbaum):
[~LongdaFeng]
I disagree with your assessment of this patch.

1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to do extension. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straight-forward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
I see the benefit of a 'TopologyMaster', but I don't see the benefit in 
blocking the merge of Pacemaker at this point.

> Heartbeat Server (Pacemaker)
> 
>
> Key: STORM-885
> URL: https://issues.apache.org/jira/browse/STORM-885
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Kyle Nusbaum
>
> Large, highly connected topologies and large clusters write a lot of data into 
> ZooKeeper.  The heartbeats, which make up the majority of this data, do not 
> need to be persisted to disk.  Pacemaker is intended to be a secure 
> replacement for storing the heartbeats without changing anything within the 
> heartbeats.  In the future, as more metrics are added, we may want to look 
> into switching it over to look more like Heron, where a metrics server runs 
> for each node/topology and can be used to aggregate/pre-aggregate them in a 
> more scalable manner.





[GitHub] storm pull request: STORM-904: Move bin/storm command line to java...

2015-11-18 Thread priyank5485
Github user priyank5485 commented on the pull request:

https://github.com/apache/storm/pull/662#issuecomment-157804563
  
@hustfxj  I might have missed something and it's been a long time since I 
did this. I could not find any storm command that would not need a waitFor. Can 
you give an example? If we do need that flag, it should be easy to add and I can 
change that.

@longdafeng I am not sure I understand the point about debugging. You can 
still put a breakpoint in this Java version of the storm command line. I agree 
with you that it's easier to write new functions in a script. But the whole point 
of this JIRA is to have one client that works for Unix-based systems and the 
Windows platform. The last time I was working on it we had storm.py for Unix 
and a storm batch client for Windows. Both of them also had a separate file for 
environment variables. It is hard to maintain two scripts any time we want to 
change the storm command. We can ask others about their opinion on which of the 
two approaches is better.




[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-18 Thread d2r
Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45240485
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -14,7 +14,7 @@
 ;; See the License for the specific language governing permissions and
 ;; limitations under the License.
 (ns backtype.storm.daemon.supervisor
-  (:import [java.io OutputStreamWriter BufferedWriter IOException])
+  (:import [java.io OutputStreamWriter BufferedWriter IOException 
FileOutputStream])
--- End diff --

* OutputStreamWriter and BufferedWriter appear unused, we could remove them 
now.
* Below, we import File from java.io, we can consolidate the remaining two 
on this line.




[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011619#comment-15011619
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45240485
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -14,7 +14,7 @@
 ;; See the License for the specific language governing permissions and
 ;; limitations under the License.
 (ns backtype.storm.daemon.supervisor
-  (:import [java.io OutputStreamWriter BufferedWriter IOException])
+  (:import [java.io OutputStreamWriter BufferedWriter IOException 
FileOutputStream])
--- End diff --

* OutputStreamWriter and BufferedWriter appear unused, we could remove them 
now.
* Below, we import File from java.io, we can consolidate the remaining two 
on this line.


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this a new API should be added to support uploading and 
> downloading dist cache items.  storm-core.ser, storm-conf.ser and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and does not need anything extra.  But having an HDFS backend 
> too would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs.  It should also allow the blobs to be updated and switch the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo! we are in the process 
> of getting it ready to push back to open source shortly.





[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011641#comment-15011641
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45241070
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -238,20 +241,21 @@
 (defn- rmr-as-user
   "Launches a process owned by the given user that deletes the given path
   recursively.  Throws RuntimeException if the directory is not removed."
-  [conf id user path]
+  [conf id path]
+  (let [user (Utils/getFileOwner path)]
   (worker-launcher-and-wait conf
 user
 ["rmr" path]
 :log-prefix (str "rmr " id))
   (if (exists-file? path)
-(throw (RuntimeException. (str path " was not deleted")
+(throw (RuntimeException. (str path " was not deleted"))
--- End diff --

Need to indent since we are adding the `let`.  Removing the `user` param 
was a good change here.


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this a new API should be added to support uploading and 
> downloading dist cache items.  storm-core.ser, storm-conf.ser and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and does not need anything extra.  But having an HDFS backend 
> too would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs.  It should also allow the blobs to be updated and switch the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo! we are in the process 
> of getting it ready to push back to open source shortly.





[jira] [Comment Edited] (STORM-885) Heartbeat Server (Pacemaker)

2015-11-18 Thread Kyle Nusbaum (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011663#comment-15011663
 ] 

Kyle Nusbaum edited comment on STORM-885 at 11/18/15 7:05 PM:
--

[~LongdaFeng]
I disagree with your assessment of this patch.

Addressing each of your numbered points:
1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to do extension. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straight-forward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
A TopologyMaster also does not reduce the total amount of data written to 
ZooKeeper, whereas Pacemaker does.


was (Author: knusbaum):
[~LongdaFeng]
I disagree with your assessment of this patch.

Addressing each of your numbered points:
1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to do extension. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straight-forward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
I see the benefit of a 'TopologyMaster', but I don't see the benefit in 
blocking the merge of Pacemaker at this point.

> Heartbeat Server (Pacemaker)
> 
>
> Key: STORM-885
> URL: https://issues.apache.org/jira/browse/STORM-885
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Kyle Nusbaum
>
> Large, highly connected topologies and large clusters write a lot of data into 
> ZooKeeper.  The heartbeats, which make up the majority of this data, do not 
> need to be persisted to disk.  Pacemaker is intended to be a secure 
> replacement for storing the heartbeats without changing anything within the 
> heartbeats.  In the future, as more metrics are added, we may want to look 
> into switching it over to look more like Heron, where a metrics server runs 
> for each node/topology and can be used to aggregate/pre-aggregate them in a 
> more scalable manner.





[jira] [Commented] (STORM-904) move storm bin commands to java and provide appropriate bindings for windows and linux

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011585#comment-15011585
 ] 

ASF GitHub Bot commented on STORM-904:
--

Github user priyank5485 commented on the pull request:

https://github.com/apache/storm/pull/662#issuecomment-157808005
  
@longdafeng Also please see one of the previous comments about why not to 
use python.


> move storm bin commands to java and provide appropriate bindings for windows 
> and linux
> --
>
> Key: STORM-904
> URL: https://issues.apache.org/jira/browse/STORM-904
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Sriharsha Chintalapani
>Assignee: Priyank Shah
>
> Currently we have a Python implementation and a .cmd implementation for 
> Windows. It is becoming increasingly difficult to keep both versions up to 
> date. Let's move all the main code for starting daemons etc. to Java and 
> provide wrapper scripts in shell and batch for Linux and Windows respectively. 
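
A sketch of the kind of single Java entry point this would enable, with thin 
shell and batch wrappers simply invoking it; the command names and print 
statements below are illustrative placeholders, not the actual proposal in the 
PR:

```java
import java.util.HashMap;
import java.util.Map;

public final class StormCommandLine {
    interface Command { void run(String[] args) throws Exception; }

    public static void main(String[] args) throws Exception {
        Map<String, Command> commands = new HashMap<>();
        commands.put("nimbus", a -> System.out.println("would launch the nimbus daemon"));
        commands.put("supervisor", a -> System.out.println("would launch the supervisor daemon"));
        commands.put("list", a -> System.out.println("would list running topologies"));

        if (args.length == 0 || !commands.containsKey(args[0])) {
            System.err.println("usage: storm <command> [args]; commands: " + commands.keySet());
            System.exit(1);
        }
        commands.get(args[0]).run(args);
    }
}
```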





[jira] [Commented] (STORM-876) Dist Cache: Basic Functionality

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011617#comment-15011617
 ] 

ASF GitHub Bot commented on STORM-876:
--

Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45240246
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -24,9 +24,13 @@
[org.apache.commons.io FileUtils]
[java.io File])
   (:use [backtype.storm config util log timer local-state])
+  (:import [backtype.storm.generated AuthorizationException 
KeyNotFoundException WorkerResources])
+  (:import [java.util.concurrent Executors])
   (:import [backtype.storm.utils VersionInfo])
+  (:import [java.nio.file Files Path Paths StandardCopyOption])
--- End diff --

`Path` and `Paths` unused?


> Dist Cache: Basic Functionality
> ---
>
> Key: STORM-876
> URL: https://issues.apache.org/jira/browse/STORM-876
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this a new API should be added to support uploading and 
> downloading dist cache items.  storm-core.ser, storm-conf.ser and storm.jar 
> should be written into the blob store instead of residing locally. We need a 
> default implementation of the blob store that does essentially what nimbus 
> currently does and does not need anything extra.  But having an HDFS backend 
> too would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and 
> provide a working directory for the worker process with symlinks to the 
> blobs.  It should also allow the blobs to be updated and switch the symlink 
> atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process 
> of getting it ready to push back to open source shortly.
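
To make the "switch the symlink atomically" step concrete, here is a minimal sketch using plain java.nio.file. The class and paths are hypothetical and not taken from the patch; on POSIX filesystems the final rename replaces the old link in one atomic step, so readers never observe a missing or half-updated blob.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Hypothetical helper: repoint a stable symlink at a freshly downloaded blob.
public final class BlobSymlinkSwap {

    public static void swap(Path link, Path newBlob) throws IOException {
        // Build the new link under a temporary name next to the real one...
        Path tmpLink = link.resolveSibling(link.getFileName() + ".tmp");
        Files.deleteIfExists(tmpLink);
        Files.createSymbolicLink(tmpLink, newBlob);
        // ...then rename it over the old link. ATOMIC_MOVE maps to rename(2)
        // on POSIX, which atomically replaces the existing link.
        Files.move(tmpLink, link, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        // Illustrative paths only.
        swap(Paths.get("workdir/storm.jar"), Paths.get("blobs/storm.jar.v2"));
    }
}
```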



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: making small fixes in RAS

2015-11-18 Thread jerrypeng
GitHub user jerrypeng opened a pull request:

https://github.com/apache/storm/pull/890

making small fixes in RAS



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jerrypeng/storm RAS_small_fixes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/890.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #890


commit ef716467caaada143c951fb88223dd476f72557a
Author: Boyang Jerry Peng 
Date:   2015-11-18T18:59:14Z

making small fixes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: STORM-1075 add external module storm-cassandra

2015-11-18 Thread fhussonnois
Github user fhussonnois commented on a diff in the pull request:

https://github.com/apache/storm/pull/827#discussion_r45238310
  
--- Diff: 
external/storm-cassandra/src/main/java/org/apache/storm/cassandra/Murmur3StreamGrouping.java
 ---
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.storm.cassandra;
+
+import backtype.storm.generated.GlobalStreamId;
+import backtype.storm.grouping.CustomStreamGrouping;
+import backtype.storm.task.WorkerTopologyContext;
+import backtype.storm.topology.FailedException;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Lists;
+import com.google.common.hash.Hashing;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ *
+ * Simple {@link backtype.storm.grouping.CustomStreamGrouping} that uses 
Murmur3 algorithm to choose the target task of a tuple.
+ *
+ * This stream grouping may be used to optimise writes to Apache Cassandra.
+ */
+public class Murmur3StreamGrouping implements CustomStreamGrouping {
--- End diff --

Yes, that's right! Actually, I didn't even notice that there is no method like 
customGrouping(componentId, customStreamGrouping, fields...) on a component.

So, one solution could be to pass the indexes of the partition keys as 
follows: 
myBolt.customGrouping("comp", new Murmur3StreamGrouping(0, 1)) ?
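
To sketch how that proposed constructor would be used from a topology (assuming the `Murmur3StreamGrouping(int...)` signature suggested above actually lands; the spout and bolt here are just storm-core test classes standing in for a real spout and Cassandra writer bolt):

```java
import backtype.storm.testing.TestWordCounter;
import backtype.storm.testing.TestWordSpout;
import backtype.storm.topology.TopologyBuilder;
import org.apache.storm.cassandra.Murmur3StreamGrouping;

// Hypothetical wiring: hash tuples on the field(s) that form the Cassandra
// partition key (index 0 here) so that all tuples for a given partition key
// are routed to the same downstream task.
public final class Murmur3GroupingExample {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout(), 2);
        builder.setBolt("cassandra-writer", new TestWordCounter(), 4)
               .customGrouping("words", new Murmur3StreamGrouping(0));
        // builder.createTopology() would then be submitted as usual.
    }
}
```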


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1075) Storm Cassandra connector

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011599#comment-15011599
 ] 

ASF GitHub Bot commented on STORM-1075:
---

Github user fhussonnois commented on a diff in the pull request:

https://github.com/apache/storm/pull/827#discussion_r45238310
  
--- Diff: 
external/storm-cassandra/src/main/java/org/apache/storm/cassandra/Murmur3StreamGrouping.java
 ---
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.storm.cassandra;
+
+import backtype.storm.generated.GlobalStreamId;
+import backtype.storm.grouping.CustomStreamGrouping;
+import backtype.storm.task.WorkerTopologyContext;
+import backtype.storm.topology.FailedException;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Lists;
+import com.google.common.hash.Hashing;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ *
+ * Simple {@link backtype.storm.grouping.CustomStreamGrouping} that uses 
Murmur3 algorithm to choose the target task of a tuple.
+ *
+ * This stream grouping may be used to optimise writes to Apache Cassandra.
+ */
+public class Murmur3StreamGrouping implements CustomStreamGrouping {
--- End diff --

Yes, that's right! Actually, I didn't even notice that there is no method like 
customGrouping(componentId, customStreamGrouping, fields...) on a component.

So, one solution could be to pass the indexes of the partition keys as 
follows: 
myBolt.customGrouping("comp", new Murmur3StreamGrouping(0, 1)) ?


> Storm Cassandra connector
> -
>
> Key: STORM-1075
> URL: https://issues.apache.org/jira/browse/STORM-1075
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Sriharsha Chintalapani
>Assignee: Satish Duggana
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (STORM-885) Heartbeat Server (Pacemaker)

2015-11-18 Thread Kyle Nusbaum (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011663#comment-15011663
 ] 

Kyle Nusbaum edited comment on STORM-885 at 11/18/15 6:51 PM:
--

[~LongdaFeng]
I disagree with your assessment of this patch.

1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to extend. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straightforward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
I see the benefit of a 'TopologyMaster', but I don't see the benefit in 
blocking the merge of Pacemaker at this point.


was (Author: knusbaum):
I disagree with your assessment of this patch.

1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to extend. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straightforward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
I see the benefit of a 'TopologyMaster', but I don't see the benefit in 
blocking the merge of Pacemaker at this point.

> Heartbeat Server (Pacemaker)
> 
>
> Key: STORM-885
> URL: https://issues.apache.org/jira/browse/STORM-885
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Kyle Nusbaum
>
> Large highly connected topologies and large clusters write a lot of data into 
> ZooKeeper.  The heartbeats, that make up the majority of this data, do not 
> need to be persisted to disk.  Pacemaker is intended to be a secure 
> replacement for storing the heartbeats without changing anything within the 
> heartbeats.  In the future as more metrics are added in, we may want to look 
> into switching it over to look more like Heron, where a metrics server is 
> running for each node/topology, which can be used to aggregate/pre-aggregate 
> them in a more scalable manner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-885) Heartbeat Server (Pacemaker)

2015-11-18 Thread Kyle Nusbaum (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011663#comment-15011663
 ] 

Kyle Nusbaum commented on STORM-885:


I disagree with your assessment of this patch.

1. I'm not sure what you mean by this. Will you please clarify?
2. Pacemaker may become a bottleneck in the system, yes. But in our internal 
testing, Nimbus itself becomes a bottleneck before Pacemaker. Furthermore, it 
is *not* very hard to extend. I have a prototype here for Pacemaker HA 
that only adds a small number of lines of code, which I am planning on pushing 
out in the relatively near future.
3. The logic looks fairly straightforward to me, and the pacemaker server file 
itself is only 239 lines long, with nearly 100 of those being the license, 
imports, and statistics helpers. The client is only 125 lines long. 

Pacemaker is also an optional component, and it's ready now. 
I see the benefit of a 'TopologyMaster', but I don't see the benefit in 
blocking the merge of Pacemaker at this point.

> Heartbeat Server (Pacemaker)
> 
>
> Key: STORM-885
> URL: https://issues.apache.org/jira/browse/STORM-885
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Kyle Nusbaum
>
> Large highly connected topologies and large clusters write a lot of data into 
> ZooKeeper.  The heartbeats, that make up the majority of this data, do not 
> need to be persisted to disk.  Pacemaker is intended to be a secure 
> replacement for storing the heartbeats without changing anything within the 
> heartbeats.  In the future as more metrics are added in, we may want to look 
> into switching it over to look more like Heron, where a metrics server is 
> running for each node/topology, which can be used to aggregate/pre-aggregate 
> them in a more scalable manner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Plan for Merging JStorm Code

2015-11-18 Thread Bobby Evans
Taylor and others, I was hoping to get started filing JIRAs and planning how 
we are going to do the java migration + JStorm merger.  Is anyone else starting 
to do this?  If not, would anyone object to me starting on it? - Bobby 


On Thursday, November 12, 2015 12:04 PM, P. Taylor Goetz 
 wrote:
 

 Thanks for putting this together Basti, that comparison helps a lot.

And thanks Bobby for converting it into markdown. I was going to just attach 
the spreadsheet to JIRA, but markdown is a much better solution.

-Taylor

> On Nov 12, 2015, at 12:03 PM, Bobby Evans  wrote:
> 
> I translated the excel spreadsheet into a markdown file and put up a pull 
> request for it.
> https://github.com/apache/storm/pull/877
> I did a few edits to it to make it work with Markdown, and to add in a few of 
> my own comments.  I also put in a field for JIRAs to be able to track the 
> migration.
> Overall I think your evaluation was very good.  We have a fair amount of work 
> ahead of us to decide what version of various features we want to go forward 
> with.
>  - Bobby
> 
> 
>    On Thursday, November 12, 2015 9:37 AM, 刘键(Basti Liu) 
> wrote:
> 
> 
> Hi Bobby & Jungtaek,
> 
> Thanks for your reply.
> I totally agree that compatibility is the most important thing. Actually, 
> JStorm has been compatible with the user API of Storm.
> As you mentioned below, we indeed still have some features that differ between 
> Storm and JStorm. I have tried to list them (minor updates or improvements are 
> not included).
> Please refer to the attachment for details. If any are missing, please help 
> point them out. (Features currently being worked on are probably missing here.)
> Looking at these differences: for the features missing from JStorm, I do not 
> see any obstacle that would block merging them into JStorm.
> For the features that have different solutions in Storm and JStorm, we can 
> evaluate the solutions one by one to decide which one is appropriate.
> Once that evaluation is finalized, I think the JStorm team can take on the 
> merging job and publish a stable release in 2 months.
> In any case, the detailed implementation of these features with differing 
> solutions is transparent to the user. So, from the user's point of view, there 
> is no compatibility problem.
> 
> Besides compatibility, in our experience stability is also important and is 
> not an easy job. 4 people on the JStorm team took almost one year to finish 
> porting from the "clojure core" to the "java core" and to make it stable. Of 
> course, we have many devs in the community to make the porting job faster. But 
> it still needs a long time running many complex online topologies to find bugs 
> and fix them. That is the reason why I proposed to do the merging on top of a 
> stable "java core".
> 
> -Original Message-
> From: Bobby Evans [mailto:ev...@yahoo-inc.com.INVALID]
> Sent: Wednesday, November 11, 2015 10:51 PM
> To: dev@storm.apache.org
> Subject: Re: [DISCUSS] Plan for Merging JStorm Code
> 
> +1 for doing a 1.0 release based off of the clojure 0.11.x code.  Migrating 
> the APIs to org.apache.storm is a big non-backwards compatible move, and a 
> major version bump to 2.x seems like a good move there.
> +1 for the release plan
> 
> I would like the move for user facing APIs to org.apache to be one of the 
> last things we do.  Translating clojure code into java and moving it to 
> org.apache I am not too concerned about.
> 
> Basti,
> We have two code bases that have diverged significantly from one another in 
> terms of functionality.  The storm code now or soon will have A Heartbeat 
> Server, Nimbus HA (Different Implementation), Resource Aware Scheduling, a 
> distributed cache like API, log searching, security, massive performance 
> improvements, shaded almost all of our dependencies, a REST API for 
> programmatically accessing everything on the UI, and I am sure I am missing a 
> few other things.  JStorm also has many changes including cgroup isolation, 
> restructured zookeeper layout, classpath isolation, and more too.
> No matter what we do it will be a large effort to port changes from one code 
> base to another, and from clojure to java.  I proposed this initially because 
> it can be broken up into incremental changes.  It may take a little longer, 
> but we will always have a working codebase that is testable and compatible 
> with the current storm release, at least until we move the user facing APIs 
> to be under org.apache.  This lets the community continue to build and test 
> the master branch and report problems that they find, which is incredibly 
> valuable.  I personally don't think it will be much easier, especially if we 
> are intent on always maintaining compatibility with storm. - Bobby
> 
> 
>    On Wednesday, November 11, 2015 5:42 AM, 刘键(Basti Liu) 
> wrote:
> 
> 
> Hi Taylor,
> 
> 
> 
> Thanks for the merge plan. I have a 

[GitHub] storm pull request: [STORM-1210] Set Output Stream id in KafkaSpou...

2015-11-18 Thread jerrypeng
Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/885#issuecomment-157831515
  
just a quick question. where is outputStreamId set?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: [STORM-1208] Guard against NPE, and avoid usin...

2015-11-18 Thread jerrypeng
Github user jerrypeng commented on a diff in the pull request:

https://github.com/apache/storm/pull/881#discussion_r45250004
  
--- Diff: storm-core/src/clj/backtype/storm/stats.clj ---
@@ -335,30 +358,21 @@
 (defn- agg-bolt-streams-lat-and-count
   "Aggregates number executed and process & execute latencies."
   [idk->exec-avg idk->proc-avg idk->executed]
-  {:pre (apply = (map #(set (keys %))
-  [idk->exec-avg
-   idk->proc-avg
-   idk->executed]))}
-  (letfn [(weight-avg [id avg] (let [num-e (idk->executed id)]
-   (if (and avg num-e)
- (* avg num-e)
- 0)))]
+  (letfn [(weight-avg [id avg]
+(let [num-e (idk->executed id)]
+  (product-or-0 avg num-e)))]
 (into {}
   (for [k (keys idk->exec-avg)]
-[k {:executeLatencyTotal (weight-avg k (idk->exec-avg k))
-:processLatencyTotal (weight-avg k (idk->proc-avg k))
+[k {:executeLatencyTotal (weight-avg k (get idk->exec-avg k))
+:processLatencyTotal (weight-avg k (get idk->proc-avg k))
--- End diff --

What is the point of adding the `get`, especially for `(get idk->exec-avg k)`? 
You are getting the keys from the map and then accessing it, right? The keys 
cannot possibly be missing from the map.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: making small fixes in RAS

2015-11-18 Thread jerrypeng
Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/890#issuecomment-157843989
  
@revans2 will file a JIRA to track


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1213) Remove sigar binaries from source tree

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011839#comment-15011839
 ] 

ASF GitHub Bot commented on STORM-1213:
---

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157843581
  
+1


> Remove sigar binaries from source tree
> --
>
> Key: STORM-1213
> URL: https://issues.apache.org/jira/browse/STORM-1213
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> In {{external/storm-metrics}} sigar native binaries were added to the source 
> tree. Since Apache releases are source-only, these binaries can't be included 
> in a release.
> My initial thought was just to exclude the binaries from the source 
> distribution, but that would mean that distributions built from a source 
> tarball would not match the convenience binaries from a release (the sigar 
> native binaries would not be included).
> The solution I came up with was to leverage the fact that pre-built native 
> binaries are included in the sigar maven distribution 
> ({{sigar-x.x.x-native.jar}}) and use the maven dependency plugin to unpack 
> them into place during the build, rather than check them into git. One 
> benefit is that it will ensure the versions of the sigar jar and the native 
> binaries match. Another is that maven's checksum/signature checking mechanism 
> will also be applied.
> This isn't an ideal solution since the {{sigar-x.x.x-native.jar}} only 
> includes binaries for linux, OSX, and solaris (notably missing windows DLLs), 
> whereas the non-maven sigar download includes support for a wider range of 
> OSes and architectures.
> I view this as an interim measure until we can find a better way to include 
> the native binaries in the build process, rather than checking them into the 
> source tree -- which would be a blocker for releasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (STORM-1217) making small fixes in RAS

2015-11-18 Thread Boyang Jerry Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Jerry Peng reassigned STORM-1217:


Assignee: Boyang Jerry Peng

> making small fixes in RAS
> -
>
> Key: STORM-1217
> URL: https://issues.apache.org/jira/browse/STORM-1217
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Boyang Jerry Peng
>Assignee: Boyang Jerry Peng
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-1213: Remove sigar binaries from source ...

2015-11-18 Thread jerrypeng
Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157827199
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1213) Remove sigar binaries from source tree

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011727#comment-15011727
 ] 

ASF GitHub Bot commented on STORM-1213:
---

Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157827199
  
+1


> Remove sigar binaries from source tree
> --
>
> Key: STORM-1213
> URL: https://issues.apache.org/jira/browse/STORM-1213
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> In {{external/storm-metrics}} sigar native binaries were added to the source 
> tree. Since Apache releases are source-only, these binaries can't be included 
> in a release.
> My initial thought was just to exclude the binaries from the source 
> distribution, but that would mean that distributions built from a source 
> tarball would not match the convenience binaries from a release (the sigar 
> native binaries would not be included).
> The solution I came up with was to leverage the fact that pre-built native 
> binaries are included in the sigar maven distribution 
> ({{sigar-x.x.x-native.jar}}) and use the maven dependency plugin to unpack 
> them into place during the build, rather than check them into git. One 
> benefit is that it will ensure the versions of the sigar jar and the native 
> binaries match. Another is that maven's checksum/signature checking mechanism 
> will also be applied.
> This isn't an ideal solution since the {{sigar-x.x.x-native.jar}} only 
> includes binaries for linux, OSX, and solaris (notably missing windows DLLs), 
> whereas the non-maven sigar download includes support for a wider range of 
> OSes and architectures.
> I view this as an interim measure until we can find a better way to include 
> the native binaries in the build process, rather than checking them into the 
> source tree -- which would be a blocker for releasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1208) UI: NPE seen when aggregating bolt streams stats

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011814#comment-15011814
 ] 

ASF GitHub Bot commented on STORM-1208:
---

Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/881


> UI: NPE seen when aggregating bolt streams stats
> 
>
> Key: STORM-1208
> URL: https://issues.apache.org/jira/browse/STORM-1208
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Fix For: 0.11.0
>
>
> A stack trace is seen on the UI via its thrift connection to nimbus.
> On nimbus, a stack trace similar to the following is seen:
> {noformat}
> 2015-11-09 19:26:48.921 o.a.t.s.TThreadPoolServer [ERROR] Error occurred 
> during processing of message.
> java.lang.NullPointerException
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count$iter__2219__2223$fn__2224.invoke(stats.clj:346)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
> at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
> at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.6.0.jar:?]
> at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count.invoke(stats.clj:344) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.stats$agg_pre_merge_comp_page_bolt.invoke(stats.clj:439) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at backtype.storm.stats$fn__2578.invoke(stats.clj:1093) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:241) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:628) ~[clojure-1.6.0.jar:?]
> at clojure.core$partial$fn__4230.doInvoke(core.clj:2470) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.RestFn.invoke(RestFn.java:421) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6086.invoke(protocols.clj:143) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6057$G__6052__6066.invoke(protocols.clj:19) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$aggregate_comp_stats_STAR_.invoke(stats.clj:1106) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
> at backtype.storm.stats$fn__2589.doInvoke(stats.clj:1127) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.RestFn.invoke(RestFn.java:436) ~[clojure-1.6.0.jar:?]
> at clojure.lang.MultiFn.invoke(MultiFn.java:236) 
> ~[clojure-1.6.0.jar:?]
> at backtype.storm.stats$agg_comp_execs_stats.invoke(stats.clj:1303) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.daemon.nimbus$fn__5893$exec_fn__1502__auto__$reify__5917.getComponentPageInfo(nimbus.clj:1715)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.generated.Nimbus$Processor$getComponentPageInfo.getResult(Nimbus.java:3677)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.generated.Nimbus$Processor$getComponentPageInfo.getResult(Nimbus.java:3661)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> org.apache.thrift7.ProcessFunction.process(ProcessFunction.java:39) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at org.apache.thrift7.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.security.auth.SaslTransportPlugin$TUGIWrapProcessor.process(SaslTransportPlugin.java:143)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> 

[jira] [Resolved] (STORM-1208) UI: NPE seen when aggregating bolt streams stats

2015-11-18 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1208.
-
   Resolution: Fixed
Fix Version/s: 0.11.0

> UI: NPE seen when aggregating bolt streams stats
> 
>
> Key: STORM-1208
> URL: https://issues.apache.org/jira/browse/STORM-1208
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Fix For: 0.11.0
>
>
> A stack trace is seen on the UI via its thrift connection to nimbus.
> On nimbus, a stack trace similar to the following is seen:
> {noformat}
> 2015-11-09 19:26:48.921 o.a.t.s.TThreadPoolServer [ERROR] Error occurred 
> during processing of message.
> java.lang.NullPointerException
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count$iter__2219__2223$fn__2224.invoke(stats.clj:346)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
> at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
> at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.6.0.jar:?]
> at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count.invoke(stats.clj:344) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.stats$agg_pre_merge_comp_page_bolt.invoke(stats.clj:439) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at backtype.storm.stats$fn__2578.invoke(stats.clj:1093) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:241) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:628) ~[clojure-1.6.0.jar:?]
> at clojure.core$partial$fn__4230.doInvoke(core.clj:2470) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.RestFn.invoke(RestFn.java:421) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6086.invoke(protocols.clj:143) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6057$G__6052__6066.invoke(protocols.clj:19) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$aggregate_comp_stats_STAR_.invoke(stats.clj:1106) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
> at backtype.storm.stats$fn__2589.doInvoke(stats.clj:1127) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.RestFn.invoke(RestFn.java:436) ~[clojure-1.6.0.jar:?]
> at clojure.lang.MultiFn.invoke(MultiFn.java:236) 
> ~[clojure-1.6.0.jar:?]
> at backtype.storm.stats$agg_comp_execs_stats.invoke(stats.clj:1303) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.daemon.nimbus$fn__5893$exec_fn__1502__auto__$reify__5917.getComponentPageInfo(nimbus.clj:1715)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.generated.Nimbus$Processor$getComponentPageInfo.getResult(Nimbus.java:3677)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.generated.Nimbus$Processor$getComponentPageInfo.getResult(Nimbus.java:3661)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> org.apache.thrift7.ProcessFunction.process(ProcessFunction.java:39) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at org.apache.thrift7.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.security.auth.SaslTransportPlugin$TUGIWrapProcessor.process(SaslTransportPlugin.java:143)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at 
> org.apache.thrift7.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
>  

[GitHub] storm pull request: making small fixes in RAS

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/890#issuecomment-157842860
  
The bug fixes look good, but could you file a JIRA just to track that these 
fixes went in?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: STORM-1213: Remove sigar binaries from source ...

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157843581
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1215) Use Async Loggers to avoid locking and logging overhead

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011834#comment-15011834
 ] 

ASF GitHub Bot commented on STORM-1215:
---

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/888#issuecomment-157843254
  
@d2r any other comments on the most recent changes?


> Use Async Loggers to avoid locking  and logging overhead
> 
>
> Key: STORM-1215
> URL: https://issues.apache.org/jira/browse/STORM-1215
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Kishor Patil
>Assignee: Kishor Patil
>
> The loggers are synchronous with immediateFlush to disk, which slows some of 
> the daemons down. In some other cases, nimbus is also slow while holding the 
> submit-lock.
> Making the loggers asynchronous, with no need to write to disk on every log 
> event, would improve cpu resource usage for logging.
> {code}
> "pool-7-thread-986" #1025 prio=5 os_prio=0 tid=0x7f0f9628c800 nid=0x1b84 
> runnable [0x7f0f0fa2a000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.FileOutputStream.writeBytes(Native Method)
>   at java.io.FileOutputStream.write(FileOutputStream.java:326)
>   at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>   - locked <0x0003c00ae520> (a java.io.BufferedOutputStream)
>   at java.io.PrintStream.write(PrintStream.java:482)
>   - locked <0x0003c00ae500> (a java.io.PrintStream)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
>   - locked <0x0003c00ae640> (a java.io.OutputStreamWriter)
>   at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
>   at java.io.PrintStream.write(PrintStream.java:527)
>   - locked <0x0003c00ae500> (a java.io.PrintStream)
>   at java.io.PrintStream.print(PrintStream.java:669)
>   at java.io.PrintStream.println(PrintStream.java:806)
>   - locked <0x0003c00ae500> (a java.io.PrintStream)
>   at 
> org.apache.logging.log4j.status.StatusConsoleListener.log(StatusConsoleListener.java:81)
>   at 
> org.apache.logging.log4j.status.StatusLogger.logMessage(StatusLogger.java:218)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:727)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:716)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:344)
>   at 
> org.apache.logging.log4j.core.appender.DefaultErrorHandler.error(DefaultErrorHandler.java:59)
>   at 
> org.apache.logging.log4j.core.appender.AbstractAppender.error(AbstractAppender.java:86)
>   at 
> org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:116)
>   at 
> org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:99)
>   at 
> org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:430)
>   at 
> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:409)
>   at 
> org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367)
>   at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:112)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:727)
>   at 
> org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:716)
>   at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:198)
>   at clojure.tools.logging$eval1$fn__7.invoke(NO_SOURCE_FILE:0)
>   at clojure.tools.logging.impl$fn__28$G__8__39.invoke(impl.clj:16)
>   at clojure.tools.logging$log_STAR_.invoke(logging.clj:59)
>   at backtype.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:781)
>   at clojure.lang.RestFn.invoke(RestFn.java:410)
> {code}
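
For reference, a minimal sketch of how log4j2's async loggers are switched on from plain Java. This is a standalone example, not the actual storm change; the real patch would do this through the daemons' JVM options or log4j2 configuration, and async loggers also require the LMAX Disruptor on the classpath.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Selecting the async context selector before the first logger is created makes
// every logger asynchronous: log calls enqueue events for a background thread
// instead of flushing to disk inside the calling thread.
public final class AsyncLoggingSketch {
    public static void main(String[] args) {
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

        Logger log = LogManager.getLogger(AsyncLoggingSketch.class);
        log.info("this call returns without waiting on disk I/O");
    }
}
```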



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (STORM-1217) making small fixes in RAS

2015-11-18 Thread Boyang Jerry Peng (JIRA)
Boyang Jerry Peng created STORM-1217:


 Summary: making small fixes in RAS
 Key: STORM-1217
 URL: https://issues.apache.org/jira/browse/STORM-1217
 Project: Apache Storm
  Issue Type: Bug
Reporter: Boyang Jerry Peng
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (STORM-831) Add Jira and Central Logging URL to UI

2015-11-18 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-831.

   Resolution: Fixed
Fix Version/s: 0.11.0

> Add Jira and Central Logging URL to UI
> --
>
> Key: STORM-831
> URL: https://issues.apache.org/jira/browse/STORM-831
> Project: Apache Storm
>  Issue Type: Documentation
>  Components: documentation
>Reporter: Kishor Patil
>Assignee: Kishor Patil
>Priority: Trivial
> Fix For: 0.11.0
>
>
> As a user, I would like to see a link that takes me to JIRA for reporting 
> bugs. Optionally, a link to splunk/logstash/kibana from the UI would also be 
> helpful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1210) Set Output Stream id in KafkaSpout

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011747#comment-15011747
 ] 

ASF GitHub Bot commented on STORM-1210:
---

Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/885#issuecomment-157831515
  
just a quick question. where is outputStreamId set?


> Set Output Stream id in KafkaSpout
> --
>
> Key: STORM-1210
> URL: https://issues.apache.org/jira/browse/STORM-1210
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-kafka
>Affects Versions: 0.11.0
>Reporter: Zhiqiang He
>Assignee: Zhiqiang He
>Priority: Minor
>
> topicAsStreamId can only set the output stream id to the topic name. In some 
> cases, we need to set the output stream id to a different name.
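
The core spout API already supports this in principle. Below is a minimal sketch, not the KafkaSpout change itself, of a spout that declares and emits on a caller-supplied stream id, which is essentially what a configurable stream id on KafkaSpout would wire through.

```java
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

import java.util.Map;

// Hypothetical spout: the stream id is chosen by the caller instead of being
// fixed to the default stream or the topic name.
public class NamedStreamSpout extends BaseRichSpout {
    private final String streamId;          // e.g. "kafka-events" (illustrative)
    private SpoutOutputCollector collector;

    public NamedStreamSpout(String streamId) {
        this.streamId = streamId;
    }

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // Emit on the configured stream; downstream bolts subscribe to it by id.
        collector.emit(streamId, new Values("payload"), "msg-1");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream(streamId, new Fields("value"));
    }
}
```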



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1208) UI: NPE seen when aggregating bolt streams stats

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011805#comment-15011805
 ] 

ASF GitHub Bot commented on STORM-1208:
---

Github user jerrypeng commented on a diff in the pull request:

https://github.com/apache/storm/pull/881#discussion_r45250004
  
--- Diff: storm-core/src/clj/backtype/storm/stats.clj ---
@@ -335,30 +358,21 @@
 (defn- agg-bolt-streams-lat-and-count
   "Aggregates number executed and process & execute latencies."
   [idk->exec-avg idk->proc-avg idk->executed]
-  {:pre (apply = (map #(set (keys %))
-  [idk->exec-avg
-   idk->proc-avg
-   idk->executed]))}
-  (letfn [(weight-avg [id avg] (let [num-e (idk->executed id)]
-   (if (and avg num-e)
- (* avg num-e)
- 0)))]
+  (letfn [(weight-avg [id avg]
+(let [num-e (idk->executed id)]
+  (product-or-0 avg num-e)))]
 (into {}
   (for [k (keys idk->exec-avg)]
-[k {:executeLatencyTotal (weight-avg k (idk->exec-avg k))
-:processLatencyTotal (weight-avg k (idk->proc-avg k))
+[k {:executeLatencyTotal (weight-avg k (get idk->exec-avg k))
+:processLatencyTotal (weight-avg k (get idk->proc-avg k))
--- End diff --

What is the point of adding the `get`, especially for `(get idk->exec-avg k)`? 
You are getting the keys from the map and then accessing it, right? The keys 
cannot possibly be missing from the map.


> UI: NPE seen when aggregating bolt streams stats
> 
>
> Key: STORM-1208
> URL: https://issues.apache.org/jira/browse/STORM-1208
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Derek Dagit
>Assignee: Derek Dagit
>
> A stack trace is seen on the UI via its thrift connection to nimbus.
> On nimbus, a stack trace similar to the following is seen:
> {noformat}
> 2015-11-09 19:26:48.921 o.a.t.s.TThreadPoolServer [ERROR] Error occurred 
> during processing of message.
> java.lang.NullPointerException
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count$iter__2219__2223$fn__2224.invoke(stats.clj:346)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
> at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
> at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.6.0.jar:?]
> at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count.invoke(stats.clj:344) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.stats$agg_pre_merge_comp_page_bolt.invoke(stats.clj:439) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at backtype.storm.stats$fn__2578.invoke(stats.clj:1093) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:241) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:628) ~[clojure-1.6.0.jar:?]
> at clojure.core$partial$fn__4230.doInvoke(core.clj:2470) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.RestFn.invoke(RestFn.java:421) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6086.invoke(protocols.clj:143) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6057$G__6052__6066.invoke(protocols.clj:19) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$aggregate_comp_stats_STAR_.invoke(stats.clj:1106) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at 

[GitHub] storm pull request: [STORM-1198] Web UI to show resource usages an...

2015-11-18 Thread zhuoliu
Github user zhuoliu commented on the pull request:

https://github.com/apache/storm/pull/875#issuecomment-157843055
  
Hi @kishorvpatil , your comments have been addressed. Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1198) Web UI to show resource usages and Total Resources on all supervisors

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011827#comment-15011827
 ] 

ASF GitHub Bot commented on STORM-1198:
---

Github user zhuoliu commented on the pull request:

https://github.com/apache/storm/pull/875#issuecomment-157843055
  
Hi @kishorvpatil , your comments have been addressed. Thanks.


> Web UI to show resource usages and Total Resources on all supervisors
> -
>
> Key: STORM-1198
> URL: https://issues.apache.org/jira/browse/STORM-1198
> Project: Apache Storm
>  Issue Type: Story
>  Components: storm-core
>Reporter: Zhuo Liu
>Assignee: Zhuo Liu
>Priority: Minor
> Attachments: supervisor-resources.png
>
>
> As we have resource aware scheduler (STORM-894), we want to be able to 
> display resource capacity (CPU, memory; and network in future) and scheduled 
> resource usage on each supervisor node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: [STORM-1208] Guard against NPE, and avoid usin...

2015-11-18 Thread d2r
Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/881#discussion_r45251619
  
--- Diff: storm-core/src/clj/backtype/storm/stats.clj ---
@@ -335,30 +358,21 @@
 (defn- agg-bolt-streams-lat-and-count
   "Aggregates number executed and process & execute latencies."
   [idk->exec-avg idk->proc-avg idk->executed]
-  {:pre (apply = (map #(set (keys %))
-  [idk->exec-avg
-   idk->proc-avg
-   idk->executed]))}
-  (letfn [(weight-avg [id avg] (let [num-e (idk->executed id)]
-   (if (and avg num-e)
- (* avg num-e)
- 0)))]
+  (letfn [(weight-avg [id avg]
+(let [num-e (idk->executed id)]
+  (product-or-0 avg num-e)))]
 (into {}
   (for [k (keys idk->exec-avg)]
-[k {:executeLatencyTotal (weight-avg k (idk->exec-avg k))
-:processLatencyTotal (weight-avg k (idk->proc-avg k))
+[k {:executeLatencyTotal (weight-avg k (get idk->exec-avg k))
+:processLatencyTotal (weight-avg k (get idk->proc-avg k))
--- End diff --

```
user=> (def nilmap nil)
#'user/nilmap
user=> (get nilmap :somekey)
nil
user=> (nilmap :somekey)
NullPointerException   user/eval3 (NO_SOURCE_FILE:4)
```
`get` guards against the case where the map itself is `nil`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1208) UI: NPE seen when aggregating bolt streams stats

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011844#comment-15011844
 ] 

ASF GitHub Bot commented on STORM-1208:
---

Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/881#discussion_r45251619
  
--- Diff: storm-core/src/clj/backtype/storm/stats.clj ---
@@ -335,30 +358,21 @@
 (defn- agg-bolt-streams-lat-and-count
   "Aggregates number executed and process & execute latencies."
   [idk->exec-avg idk->proc-avg idk->executed]
-  {:pre (apply = (map #(set (keys %))
-  [idk->exec-avg
-   idk->proc-avg
-   idk->executed]))}
-  (letfn [(weight-avg [id avg] (let [num-e (idk->executed id)]
-   (if (and avg num-e)
- (* avg num-e)
- 0)))]
+  (letfn [(weight-avg [id avg]
+(let [num-e (idk->executed id)]
+  (product-or-0 avg num-e)))]
 (into {}
   (for [k (keys idk->exec-avg)]
-[k {:executeLatencyTotal (weight-avg k (idk->exec-avg k))
-:processLatencyTotal (weight-avg k (idk->proc-avg k))
+[k {:executeLatencyTotal (weight-avg k (get idk->exec-avg k))
+:processLatencyTotal (weight-avg k (get idk->proc-avg k))
--- End diff --

```
user=> (def nilmap nil)
#'user/nilmap
user=> (get nilmap :somekey)
nil
user=> (nilmap :somekey)
NullPointerException   user/eval3 (NO_SOURCE_FILE:4)
```
`get` guards against the case where the map itself is `nil`.


> UI: NPE seen when aggregating bolt streams stats
> 
>
> Key: STORM-1208
> URL: https://issues.apache.org/jira/browse/STORM-1208
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Fix For: 0.11.0
>
>
> A stack trace is seen on the UI via its thrift connection to nimbus.
> On nimbus, a stack trace similar to the following is seen:
> {noformat}
> 2015-11-09 19:26:48.921 o.a.t.s.TThreadPoolServer [ERROR] Error occurred 
> during processing of message.
> java.lang.NullPointerException
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count$iter__2219__2223$fn__2224.invoke(stats.clj:346)
>  ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
> at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
> at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.6.0.jar:?]
> at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$agg_bolt_streams_lat_and_count.invoke(stats.clj:344) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 
> backtype.storm.stats$agg_pre_merge_comp_page_bolt.invoke(stats.clj:439) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at backtype.storm.stats$fn__2578.invoke(stats.clj:1093) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:241) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyToHelper(AFn.java:165) ~[clojure-1.6.0.jar:?]
> at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
> at clojure.core$apply.invoke(core.clj:628) ~[clojure-1.6.0.jar:?]
> at clojure.core$partial$fn__4230.doInvoke(core.clj:2470) 
> ~[clojure-1.6.0.jar:?]
> at clojure.lang.RestFn.invoke(RestFn.java:421) ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6086.invoke(protocols.clj:143) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6057$G__6052__6066.invoke(protocols.clj:19) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) 
> ~[clojure-1.6.0.jar:?]
> at 
> clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) 
> ~[clojure-1.6.0.jar:?]
> at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
> at 
> backtype.storm.stats$aggregate_comp_stats_STAR_.invoke(stats.clj:1106) 
> ~[storm-core-0.10.1.jar:0.10.1]
> at 

[jira] [Commented] (STORM-831) Add Jira and Central Logging URL to UI

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011860#comment-15011860
 ] 

ASF GitHub Bot commented on STORM-831:
--

Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/559


> Add Jira and Central Logging URL to UI
> --
>
> Key: STORM-831
> URL: https://issues.apache.org/jira/browse/STORM-831
> Project: Apache Storm
>  Issue Type: Documentation
>  Components: documentation
>Reporter: Kishor Patil
>Assignee: Kishor Patil
>Priority: Trivial
> Fix For: 0.11.0
>
>
> As a user, I would like to see a link that takes me to JIRA for reporting 
> bugs. Optionally, a link to splunk/logstash/kibana from the UI would also be 
> helpful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: [STORM-831] Adding jira and central logging li...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/559


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1213) Remove sigar binaries from source tree

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011926#comment-15011926
 ] 

ASF GitHub Bot commented on STORM-1213:
---

Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/887#discussion_r45257318
  
--- Diff: external/storm-metrics/pom.xml ---
@@ -55,4 +62,28 @@
       <scope>provided</scope>
     </dependency>
   </dependencies>
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <version>2.10</version>
+        <executions>
+          <execution>
+            <id>unpack-dependencies</id>
+            <phase>generate-resources</phase>
+            <goals>
+              <goal>unpack-dependencies</goal>
+            </goals>
+            <configuration>
+              <outputDirectory>${project.build.directory}/classes/resources/resources</outputDirectory>
--- End diff --

This is the line causing the issue.  It should just be 
`${project.build.directory}/classes/resources`


> Remove sigar binaries from source tree
> --
>
> Key: STORM-1213
> URL: https://issues.apache.org/jira/browse/STORM-1213
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> In {{external/storm-metrics}} sigar native binaries were added to the source 
> tree. Since Apache releases are source-only, these binaries can't be included 
> in a release.
> My initial thought was just to exclude the binaries from the source 
> distribution, but that would mean that distributions built from a source 
> tarball would not match the convenience binaries from a release (the sigar 
> native binaries would not be included).
> The solution I came up with was to leverage the fact that pre-built native 
> binaries are included in the sigar maven distribution 
> ({{sigar-x.x.x-native.jar}}) and use the maven dependency plugin to unpack 
> them into place during the build, rather than check them into git. One 
> benefit is that it will ensure the versions of the sigar jar and the native 
> binaries match. Another is that Maven's checksum/signature checking mechanism 
> will also be applied.
> This isn't an ideal solution since the {{sigar-x.x.x-native.jar}} only 
> includes binaries for linux, OSX, and solaris (notably missing windows DLLs), 
> whereas the non-maven sigar download includes support for a wider range of 
> OSes and architectures.
> I view this as an interim measure until we can find a better way to include 
> the native binaries in the build process, rather than checking them into the 
> source tree -- which would be a blocker for releasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-1213: Remove sigar binaries from source ...

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157858618
  
Sorry about the comments after giving it a +1.  I manually ran 
LatencyVsThroughput and saw that it was not working.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (STORM-1204) Logviewer should graceful report page-not-found instead of 500 for bad topo-id etc

2015-11-18 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil resolved STORM-1204.
-
   Resolution: Fixed
Fix Version/s: 0.11.0

> Logviewer should graceful report page-not-found instead of 500 for bad 
> topo-id etc
> --
>
> Key: STORM-1204
> URL: https://issues.apache.org/jira/browse/STORM-1204
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Kishor Patil
>Assignee: Kishor Patil
> Fix For: 0.11.0
>
>
> Whenever the topology-id or filename is wrong (or, in the case of a secure cluster, if 
> the user is not authorized), the logviewer shows an HTTP-500 exception.
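
Purely as an illustration of the behavior change (not the logviewer's actual handler), the fix amounts to validating the request up front and answering 404 rather than letting the failure surface as a 500; the names below are hypothetical:

```
import java.io.File;

public class LogRequestCheck {
    /** HTTP status for a requested log file under logRoot. */
    static int statusFor(File logRoot, String requestedFile, boolean authorized) {
        File candidate = new File(logRoot, requestedFile);
        if (!authorized || !candidate.isFile()) {
            return 404; // graceful page-not-found for a bad topo-id, bad file name, or no access
        }
        return 200;
    }
}
```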



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-956) When the execute() or nextTuple() hang on external resources, stop the Worker's heartbeat

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011911#comment-15011911
 ] 

ASF GitHub Bot commented on STORM-956:
--

Github user kishorvpatil commented on the pull request:

https://github.com/apache/storm/pull/647#issuecomment-157856771
  
@revans2 Yes, you have convinced me conceptually that there are cases where an 
executor could be truly deadlocked. But implementation-wise, I think we need 
better control around how and when we choose to shoot the worker itself. 


> When the execute() or nextTuple() hang on external resources, stop the 
> Worker's heartbeat
> -
>
> Key: STORM-956
> URL: https://issues.apache.org/jira/browse/STORM-956
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Chuanlei Ni
>Assignee: Chuanlei Ni
>Priority: Minor
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> Sometimes the worker threads produced by mk-threads in executor.clj hang on 
> external resources or for other unknown reasons. This makes the workers stop 
> processing tuples.  I think it is better to kill such a worker to resolve 
> the "hang". The plan is to:
> 1. like `setup-ticks`, send a system-tick to the receive-queue
> 2. have the tuple-action-fn handle this system-tick and remember the time at 
> which it processed this tuple in the executor-data
> 3. when the worker does its local heartbeat, check the time the executor wrote to 
> executor-data. If that time is far in the past (for example, 3 minutes), the 
> worker skips the heartbeat, so the supervisor can deal with the 
> problem (a sketch of this check follows below).
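
A minimal sketch of the step-3 check, assuming a hypothetical field that the executor updates whenever it handles the system tick; this is not the actual executor.clj/worker.clj code:

```
public class StalledExecutorCheck {
    // "for example, 3 minutes" from the description above
    static final long STALL_THRESHOLD_MS = 3 * 60 * 1000L;

    // Hypothetical stand-in for the timestamp the executor writes into
    // executor-data when it processes the system tick.
    static volatile long lastSystemTickProcessedMs = System.currentTimeMillis();

    /** True if the executor looks stalled and the worker should skip its local heartbeat. */
    static boolean executorStalled(long nowMs) {
        return (nowMs - lastSystemTickProcessedMs) > STALL_THRESHOLD_MS;
    }

    public static void main(String[] args) {
        if (executorStalled(System.currentTimeMillis())) {
            // Skipping the heartbeat lets the supervisor notice and restart the worker.
            System.out.println("executor stalled; skipping local heartbeat");
        } else {
            System.out.println("executor healthy; heartbeat as usual");
        }
    }
}
```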



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1213) Remove sigar binaries from source tree

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011914#comment-15011914
 ] 

ASF GitHub Bot commented on STORM-1213:
---

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157857431
  
Actually I just tried to run this, and it looks like the libraries end up under 
`resources/resources/` instead of just `resources/`. Because of this I am 
getting linking errors when trying to use the metrics.


> Remove sigar binaries from source tree
> --
>
> Key: STORM-1213
> URL: https://issues.apache.org/jira/browse/STORM-1213
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> In {{external/storm-metrics}} sigar native binaries were added to the source 
> tree. Since Apache releases are source-only, these binaries can't be included 
> in a release.
> My initial thought was just to exclude the binaries from the source 
> distribution, but that would mean that distributions built from a source 
> tarball would not match the convenience binaries from a release (the sigar 
> native binaries would not be included).
> The solution I came up with was to leverage the fact that pre-built native 
> binaries are included in the sigar maven distribution 
> ({{sigar-x.x.x-native.jar}}) and use the maven dependency plugin to unpack 
> them into place during the build, rather than check them into git. One 
> benefit is that it will ensure the versions of the sigar jar and the native 
> binaries match. Another is that Maven's checksum/signature checking mechanism 
> will also be applied.
> This isn't an ideal solution since the {{sigar-x.x.x-native.jar}} only 
> includes binaries for linux, OSX, and solaris (notably missing windows DLLs), 
> whereas the non-maven sigar download includes support for a wider range of 
> OSes and architectures.
> I view this as an interim measure until we can find a better way to include 
> the native binaries in the build process, rather than checking them into the 
> source tree -- which would be a blocker for releasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-1213: Remove sigar binaries from source ...

2015-11-18 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/887#discussion_r45257318
  
--- Diff: external/storm-metrics/pom.xml ---
@@ -55,4 +62,28 @@
       <scope>provided</scope>
     </dependency>
   </dependencies>
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <version>2.10</version>
+        <executions>
+          <execution>
+            <id>unpack-dependencies</id>
+            <phase>generate-resources</phase>
+            <goals>
+              <goal>unpack-dependencies</goal>
+            </goals>
+            <configuration>
+              <outputDirectory>${project.build.directory}/classes/resources/resources</outputDirectory>
--- End diff --

This is the line causing the issue.  It should just be 
`${project.build.directory}/classes/resources`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1213) Remove sigar binaries from source tree

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011927#comment-15011927
 ] 

ASF GitHub Bot commented on STORM-1213:
---

Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157858618
  
Sorry about the comments after giving it a +1.  I manually ran 
LatencyVsThroughput and saw that it was not working.


> Remove sigar binaries from source tree
> --
>
> Key: STORM-1213
> URL: https://issues.apache.org/jira/browse/STORM-1213
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> In {{external/storm-metrics}} sigar native binaries were added to the source 
> tree. Since Apache releases are source-only, these binaries can't be included 
> in a release.
> My initial thought was just to exclude the binaries from the source 
> distribution, but that would mean that distributions built from a source 
> tarball would not match the convenience binaries from a release (the sigar 
> native binaries would not be included).
> The solution I came up with was to leverage the fact that pre-built native 
> binaries are included in the sigar maven distribution 
> ({{sigar-x.x.x-native.jar}}) and use the maven dependency plugin to unpack 
> them into place during the build, rather than check them into git. One 
> benefit is that it will ensure the versions of the sigar jar and the native 
> binaries match. Another is that Maven's checksum/signature checking mechanism 
> will also be applied.
> This isn't an ideal solution since the {{sigar-x.x.x-native.jar}} only 
> includes binaries for linux, OSX, and solaris (notably missing windows DLLs), 
> whereas the non-maven sigar download includes support for a wider range of 
> OSes and architectures.
> I view this as an interim measure until we can find a better way to include 
> the native binaries in the build process, rather than checking them into the 
> source tree -- which would be a blocker for releasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: [STORM-1204] Fixing attempt to access to direc...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/879


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1204) Logviewer should graceful report page-not-found instead of 500 for bad topo-id etc

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011941#comment-15011941
 ] 

ASF GitHub Bot commented on STORM-1204:
---

Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/879


> Logviewer should graceful report page-not-found instead of 500 for bad 
> topo-id etc
> --
>
> Key: STORM-1204
> URL: https://issues.apache.org/jira/browse/STORM-1204
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Kishor Patil
>Assignee: Kishor Patil
>
> Whenever the topology-id or filename is wrong (or, in the case of a secure cluster, if 
> the user is not authorized), the logviewer shows an HTTP-500 exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-1213: Remove sigar binaries from source ...

2015-11-18 Thread revans2
Github user revans2 commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157862181
  
Even with changing the directory I am getting 
```
java.lang.UnsatisfiedLinkError: org.hyperic.sigar.Sigar.getPid()J
at org.hyperic.sigar.Sigar.getPid(Native Method) 
~[stormjar.jar:0.11.0-SNAPSHOT]
at org.apache.storm.metrics.sigar.CPUMetric.(CPUMetric.java:38) 
~[stormjar.jar:0.11.0-SNAPSHOT]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[?:1.8.0]
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0]
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0]
at java.lang.reflect.Constructor.newInstance(Constructor.java:408) 
~[?:1.8.0]
at java.lang.Class.newInstance(Class.java:433) ~[?:1.8.0]
at backtype.storm.utils.Utils.newInstance(Utils.java:91) 
~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
at 
backtype.storm.metric.SystemBolt.registerMetrics(SystemBolt.java:150) 
~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
at backtype.storm.metric.SystemBolt.prepare(SystemBolt.java:143) 
~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
at 
backtype.storm.daemon.executor$fn__5157$fn__5170.invoke(executor.clj:777) 
~[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
at backtype.storm.util$async_loop$fn__556.invoke(util.clj:480) 
[storm-core-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:744) [?:1.8.0]
```

We need to do some more work to understand the difference between this 
patch and what we were doing before.
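
A quick way to reproduce the linking problem outside of a topology is to force the native call directly; a minimal sketch, assuming sigar.jar and its matching native library are visible to the JVM (`Sigar.getPid()` is the same native call that fails in the trace above):

```
import org.hyperic.sigar.Sigar;

public class SigarLinkCheck {
    public static void main(String[] args) {
        Sigar sigar = new Sigar();
        // getPid() is implemented natively, so it throws UnsatisfiedLinkError if the
        // JVM cannot locate the sigar native library on java.library.path (for example
        // because it was unpacked under resources/resources instead of resources).
        System.out.println("sigar linked, pid = " + sigar.getPid());
    }
}
```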


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: [STORM-885] Heartbeat Server (Pacemaker)

2015-11-18 Thread knusbaum
Github user knusbaum commented on the pull request:

https://github.com/apache/storm/pull/838#issuecomment-157862156
  
For the record, the netty messaging layer changes are only to achieve 2 
things that were necessary to make the heartbeats work.
 - Add Kerberos SASL plugin for the Netty pipeline
 - Facilitate generic serialization with INettySerializable


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] 1.0 Release (was Re: [DISCUSS] Initial 0.11.0 Release)

2015-11-18 Thread 封仲淹(纪君祥LongdaFeng)
+1 as well.

(1) What we have been calling “0.11.0” will become the Storm 1.0 release.
[Longda] Agree. We have almost stopped developing new features after 0.11, so 0.11 is one 
big version of the Clojure core, and I prefer to give this version the big milestone 
number 1.0. In fact, this version will bring in some big features.

(2) Switch from "backtype.storm" to "org.apache.storm".
[Longda] Agree, but I am concerned about three points: 2.1 Can the storm-compat solution 
resolve all package-name problems? Has any other system used this technique before? 2.2 The 
development speed; I wish we could finish this feature as fast as we can. 2.3 This will make 
it a little hard to trace every file's history, because all files will be renamed and modified.

Regards,
Longda
--
From: Hugo Da Cruz Louro
Sent: Wednesday, November 18, 2015 07:57
To: dev@storm.apache.org
Subject: Re: [DISCUSS] 1.0 Release (was Re: [DISCUSS] Initial 0.11.0 Release)
+1 as well. I support moving to the org.apache.storm package as early as 
possible and I am OK with storm-compat. My only concern with using storm-compat 
is if we are going to have to support it forever, or plan on dropping it after 
a certain release. Backwards compatibility is a valid concern but it can become 
very difficult to maintain older versions and at some point we must say that 
only topologies built after version X will run for version Y onwards (Y > X)

Hugo

> On Nov 16, 2015, at 5:23 PM, Harsha  wrote:
> 
> +1 on Bobby's suggestion on moving the packages to storm-compat and have
> it part of lib folder. 
> Moving 1.0 with org.apache.storm will make it easier in the future
> rather than wait for 2.0 release we should make 
> this change now and in 2.0 we can remove the storm-compat jar.
> 
> Thanks,
> Harsha
> 
> 
> On Thu, Nov 12, 2015, at 07:15 AM, Bobby Evans wrote:
>> I agree that having the old package names for a 1.0 release is a little
>> odd, but I also am nervous about breaking backwards compatibility for our
>> customers in a very significant way.  The upgrade for us from 0.9.x to
>> 0.10.x has gone fairly smoothly.  Most customers read the announcement
>> and recompiled their code against 0.10, but followed our instructions so
>> that their topologies could run on both 0.9.x and 0.10.x.  Having the
>> ability for a topology to run on both an old cluster and a new cluster is
>> very important for us, because of failover.  If we want to minimize
>> downtime we need to have the ability to fail a topology over from one
>> cluster to another, usually running on the other side of the
>> country/world.  For stability reasons we do not want to upgrade both
>> clusters at once, so we need to have confidence that a topology can run
>> on both clusters.  Maintaining two versions of a topology is a huge pain
>> as well.
>> Perhaps what we can do for 1.0 is to move all of the packages to
>> org.apache.storm, but provide a storm-compat package that will still have
>> the old user facing APIs in it, that are actually (for the most part)
>> subclasses/interfaces of the org.apache versions.  I am not sure this
>> will work perfectly, but I think I will give it a try.
>>  - Bobby 
>> 
>> 
>> On Thursday, November 12, 2015 9:04 AM, Aaron. Dossett
>>  wrote:
>> 
>> 
>> As a user/developer this sounds great.  The only item that gives me
>> pause
>> is #2.  Still seeing backtype.* package names would be unexpected (for
>> me)
>> for a 1.0 release.  That small reservation aside, I am +1 (non-binding).
>> 
>> On 11/11/15, 2:45 PM, "임정택"  wrote:
>> 
>>> +1
>>> 
>>> Jungtaek Lim (HeartSaVioR)
>>> 
>>> 2015-11-12 7:21 GMT+09:00 P. Taylor Goetz :
>>> 
 Changing subject in order to consolidate discussion of a 1.0 release in
 one thread (there was some additional discussion in the thread regarding
 the JStorm code merge).
 
 I just want to make sure I’m accurately capturing the sentiment of the
 community with regard to a 1.0 release. Please correct me if my summary
 seems off-base or jump in with an opinion.
 
 In summary:
 
 1. What we have been calling “0.11.0” will become the Storm 1.0 release.
 2. We will NOT be migrating package names for this release (i.e.
 “backtype.storm” —> “org.apache.storm”).
 3. Post 1.0 release we will go into feature freeze for core
 functionality
 to facilitate the JStorm merge.
 4. During the feature freeze only fixes for high priority bugs in core
 functionality will be accepted (no new features).
 5. During the feature freeze, enhancements to “external” modules can be
 accepted.
 6. We will stop using the “beta” flag in favor of purely numeric version
 numbers. Stable vs. non-stable (development) releases can be indicated
 on
 the download page.
 
 Do we all agree?
 
 -Taylor

[GitHub] storm pull request: STORM-904: Move bin/storm command line to java...

2015-11-18 Thread hustfxj
Github user hustfxj commented on a diff in the pull request:

https://github.com/apache/storm/pull/662#discussion_r45188898
  
--- Diff: storm-core/src/jvm/backtype/storm/utils/StormCommandExecutor.java 
---
@@ -0,0 +1,868 @@
+package backtype.storm.utils;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.nio.charset.StandardCharsets;
+import java.util.*;
+
+import clojure.lang.IFn;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.lang.SystemUtils;
+
+/**
+ * Created by pshah on 7/17/15.
+ */
+abstract class StormCommandExecutor {
+final String NIMBUS_CLASS = "backtype.storm.daemon.nimbus";
+final String SUPERVISOR_CLASS = "backtype.storm.daemon.supervisor";
+final String UI_CLASS = "backtype.storm.ui.core";
+final String LOGVIEWER_CLASS = "backtype.storm.daemon.logviewer";
+final String DRPC_CLASS = "backtype.storm.daemon.drpc";
+final String REPL_CLASS = "clojure.main";
+final String ACTIVATE_CLASS = "backtype.storm.command.activate";
+final String DEACTIVATE_CLASS = "backtype.storm.command.deactivate";
+final String REBALANCE_CLASS = "backtype.storm.command.rebalance";
+final String LIST_CLASS = "backtype.storm.command.list";
+final String DEVZOOKEEPER_CLASS = 
"backtype.storm.command.dev_zookeeper";
+final String VERSION_CLASS = "backtype.storm.utils.VersionInfo";
+final String MONITOR_CLASS = "backtype.storm.command.monitor";
+final String UPLOADCREDENTIALS_CLASS = "backtype.storm.command" +
+".upload_credentials";
+final String GETERRORS_CLASS = "backtype.storm.command.get_errors";
+final String SHELL_CLASS = "backtype.storm.command.shell_submission";
+String stormHomeDirectory;
+String userConfDirectory;
+String stormConfDirectory;
+String clusterConfDirectory;
+String stormLibDirectory;
+String stormBinDirectory;
+String stormLog4jConfDirectory;
+String configFile = "";
+String javaCommand;
+List configOptions = new ArrayList();
+String stormExternalClasspath;
+String stormExternalClasspathDaemon;
+String fileSeparator;
+final List COMMANDS = Arrays.asList("jar", "kill", "shell",
+"nimbus", "ui", "logviewer", "drpc", "supervisor",
+"localconfvalue",  "remoteconfvalue", "repl", "classpath",
+"activate", "deactivate", "rebalance", "help",  "list",
+"dev-zookeeper", "version", "monitor", "upload-credentials",
+"get-errors");
+
+public static void main (String[] args) {
+for (String arg : args) {
+System.out.println("Argument ++ is " + arg);
+}
+StormCommandExecutor stormCommandExecutor;
+if (System.getProperty("os.name").startsWith("Windows")) {
+stormCommandExecutor = new WindowsStormCommandExecutor();
+} else {
+stormCommandExecutor = new UnixStormCommandExecutor();
+}
+stormCommandExecutor.initialize();
+stormCommandExecutor.execute(args);
+}
+
+StormCommandExecutor () {
+
+}
+
+abstract void initialize ();
+
+abstract void execute (String[] args);
+
+void callMethod (String command, List args) {
+Class implementation = this.getClass();
+String methodName = command.replace("-", "") + "Command";
+try {
+Method method = implementation.getDeclaredMethod(methodName, 
List
+.class);
+method.invoke(this, args);
+} catch (NoSuchMethodException ex) {
+System.out.println("No such method exception occured while 
trying" +
+" to run storm method " + command);
+} catch (IllegalAccessException ex) {
+System.out.println("Illegal access exception occured while 
trying" +
+" to run storm method " + command);
+} catch (IllegalArgumentException ex) {
+System.out.println("Illegal argument exception occured while " 
+
+"trying" + " to run storm method " + command);
+} catch (InvocationTargetException ex) {
+System.out.println("Invocation target exception occured while 
" +
+"trying" + " to run storm method " + command);
+}
+}
+}
+
+class UnixStormCommandExecutor extends StormCommandExecutor {
+
+UnixStormCommandExecutor () {
+
+}
+
+   

[jira] [Commented] (STORM-904) move storm bin commands to java and provide appropriate bindings for windows and linux

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010812#comment-15010812
 ] 

ASF GitHub Bot commented on STORM-904:
--

Github user hustfxj commented on a diff in the pull request:

https://github.com/apache/storm/pull/662#discussion_r45188898
  
--- Diff: storm-core/src/jvm/backtype/storm/utils/StormCommandExecutor.java 
---
@@ -0,0 +1,868 @@
+package backtype.storm.utils;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.nio.charset.StandardCharsets;
+import java.util.*;
+
+import clojure.lang.IFn;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.lang.SystemUtils;
+
+/**
+ * Created by pshah on 7/17/15.
+ */
+abstract class StormCommandExecutor {
+final String NIMBUS_CLASS = "backtype.storm.daemon.nimbus";
+final String SUPERVISOR_CLASS = "backtype.storm.daemon.supervisor";
+final String UI_CLASS = "backtype.storm.ui.core";
+final String LOGVIEWER_CLASS = "backtype.storm.daemon.logviewer";
+final String DRPC_CLASS = "backtype.storm.daemon.drpc";
+final String REPL_CLASS = "clojure.main";
+final String ACTIVATE_CLASS = "backtype.storm.command.activate";
+final String DEACTIVATE_CLASS = "backtype.storm.command.deactivate";
+final String REBALANCE_CLASS = "backtype.storm.command.rebalance";
+final String LIST_CLASS = "backtype.storm.command.list";
+final String DEVZOOKEEPER_CLASS = 
"backtype.storm.command.dev_zookeeper";
+final String VERSION_CLASS = "backtype.storm.utils.VersionInfo";
+final String MONITOR_CLASS = "backtype.storm.command.monitor";
+final String UPLOADCREDENTIALS_CLASS = "backtype.storm.command" +
+".upload_credentials";
+final String GETERRORS_CLASS = "backtype.storm.command.get_errors";
+final String SHELL_CLASS = "backtype.storm.command.shell_submission";
+String stormHomeDirectory;
+String userConfDirectory;
+String stormConfDirectory;
+String clusterConfDirectory;
+String stormLibDirectory;
+String stormBinDirectory;
+String stormLog4jConfDirectory;
+String configFile = "";
+String javaCommand;
+List configOptions = new ArrayList();
+String stormExternalClasspath;
+String stormExternalClasspathDaemon;
+String fileSeparator;
+final List COMMANDS = Arrays.asList("jar", "kill", "shell",
+"nimbus", "ui", "logviewer", "drpc", "supervisor",
+"localconfvalue",  "remoteconfvalue", "repl", "classpath",
+"activate", "deactivate", "rebalance", "help",  "list",
+"dev-zookeeper", "version", "monitor", "upload-credentials",
+"get-errors");
+
+public static void main (String[] args) {
+for (String arg : args) {
+System.out.println("Argument ++ is " + arg);
+}
+StormCommandExecutor stormCommandExecutor;
+if (System.getProperty("os.name").startsWith("Windows")) {
+stormCommandExecutor = new WindowsStormCommandExecutor();
+} else {
+stormCommandExecutor = new UnixStormCommandExecutor();
+}
+stormCommandExecutor.initialize();
+stormCommandExecutor.execute(args);
+}
+
+StormCommandExecutor () {
+
+}
+
+abstract void initialize ();
+
+abstract void execute (String[] args);
+
+void callMethod (String command, List args) {
+Class implementation = this.getClass();
+String methodName = command.replace("-", "") + "Command";
+try {
+Method method = implementation.getDeclaredMethod(methodName, 
List
+.class);
+method.invoke(this, args);
+} catch (NoSuchMethodException ex) {
+System.out.println("No such method exception occured while 
trying" +
+" to run storm method " + command);
+} catch (IllegalAccessException ex) {
+System.out.println("Illegal access exception occured while 
trying" +
+" to run storm method " + command);
+} catch (IllegalArgumentException ex) {
+System.out.println("Illegal argument exception occured while " 
+
+"trying" + " to run storm method " + command);
+} catch (InvocationTargetException ex) {
+System.out.println("Invocation target exception occured while 
" +
+ 

Re: [jira] [Commented] (STORM-1216) button to kill all topologies in Storm UI

2015-11-18 Thread Cody Innowhere
And I'd like to see another button to restart all topologies :-)

On Wed, Nov 18, 2015 at 6:55 PM, Longda Feng (JIRA)  wrote:

>
> [
> https://issues.apache.org/jira/browse/STORM-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010754#comment-15010754
> ]
>
> Longda Feng commented on STORM-1216:
> 
>
> Assigning this issue to me, but I won't resolve it in 0.11. The current UI makes
> this hard to do, but the JStorm UI can do it.
> So please wait until JStorm is merged into Storm; we will start on it then.
>
> We have several times had to migrate one cluster's topologies to another
> cluster, so this feature would be useful for that case.
>
> > button to kill all topologies in Storm UI
> > -
> >
> > Key: STORM-1216
> > URL: https://issues.apache.org/jira/browse/STORM-1216
> > Project: Apache Storm
> >  Issue Type: Wish
> >  Components: storm-core
> >Affects Versions: 0.11.0
> >Reporter: Erik Weathers
> >Assignee: Longda Feng
> >Priority: Minor
> >
> > In the Storm-on-Mesos project we had a [request to have an ability to
> "shut down the storm cluster" via a UI button|
> https://github.com/mesos/storm/issues/46].   That could be accomplished
> via a button in the Storm UI to kill all of the topologies.
> > I understand if this is viewed as an undesirable feature, but I just
> wanted to document the request.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>


[jira] [Commented] (STORM-885) Heartbeat Server (Pacemaker)

2015-11-18 Thread Longda Feng (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010807#comment-15010807
 ] 

Longda Feng commented on STORM-885:
---

@d2r

Can we defer this patch? Why not do this like Heron: every topology has one 
component, "TopologyMaster", which collects all heartbeats and then sends 
them to Nimbus.

1. Adding Pacemaker will make a cluster harder to maintain, especially with 
respect to HA.
2. It is hard to scale: Pacemaker will become the cluster's bottleneck, and 
once it is the bottleneck it is very hard to extend. If something goes wrong 
with Pacemaker, it will impact all topologies, not only one topology.
3. The logic is a little complicated. JStorm's TopologyMaster logic is simpler 
and clearer; the heartbeat-handling code is no more than 300 lines 
(worker --> topologymaster --> nimbus). Because the TopologyMaster is a component 
of the topology, it is easy to do HA, and even if the TopologyMaster is down it 
won't impact any other topology. 

> Heartbeat Server (Pacemaker)
> 
>
> Key: STORM-885
> URL: https://issues.apache.org/jira/browse/STORM-885
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Robert Joseph Evans
>Assignee: Kyle Nusbaum
>
> Large, highly connected topologies and large clusters write a lot of data into 
> ZooKeeper.  The heartbeats, which make up the majority of this data, do not 
> need to be persisted to disk.  Pacemaker is intended to be a secure 
> replacement for storing the heartbeats without changing anything within the 
> heartbeats.  In the future, as more metrics are added in, we may want to look 
> into switching it over to look more like Heron, where a metrics server runs 
> for each node/topology and can be used to aggregate/pre-aggregate 
> them in a more scalable manner.
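
As a rough illustration of what storing heartbeats without persisting them to disk amounts to, an in-memory key/value store along these lines captures the core of the idea; this is a sketch, not Pacemaker's actual interface or wire protocol:

```
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryHeartbeatStore {
    // Heartbeat path (e.g. per executor) -> serialized heartbeat bytes, kept only in memory.
    private final ConcurrentHashMap<String, byte[]> beats = new ConcurrentHashMap<>();

    public void setHeartbeat(String path, byte[] serializedHeartbeat) {
        beats.put(path, serializedHeartbeat); // nothing is written to disk or ZooKeeper
    }

    public byte[] getHeartbeat(String path) {
        return beats.get(path);
    }

    public void removeHeartbeat(String path) {
        beats.remove(path);
    }
}
```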



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-904) move storm bin commands to java and provide appropriate bindings for windows and linux

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010827#comment-15010827
 ] 

ASF GitHub Bot commented on STORM-904:
--

Github user longdafeng commented on the pull request:

https://github.com/apache/storm/pull/662#issuecomment-157686065
  
@priyank5485 

I would prefer not to switch storm.py from Python to Java. I often add 
debug points in storm.py, and it is very easy to debug on a live 
system (especially when the environment is bad or a JVM parameter is wrong). If we 
change to a Java version, it becomes much harder to debug. 

Another point is that Python code makes it easier to add new functions to 
the script. 




> move storm bin commands to java and provide appropriate bindings for windows 
> and linux
> --
>
> Key: STORM-904
> URL: https://issues.apache.org/jira/browse/STORM-904
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Reporter: Sriharsha Chintalapani
>Assignee: Priyank Shah
>
> Currently we have python and .cmd implementation for windows. This is 
> becoming increasing difficult upkeep both versions. Lets make all the main 
> code of starting daemons etc. to java and provider wrapper scripts in shell 
> and batch for linux and windows respectively. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-904: Move bin/storm command line to java...

2015-11-18 Thread longdafeng
Github user longdafeng commented on the pull request:

https://github.com/apache/storm/pull/662#issuecomment-157686065
  
@priyank5485 

I would prefer not to switch storm.py from Python to Java. I often add 
debug points in storm.py, and it is very easy to debug on a live 
system (especially when the environment is bad or a JVM parameter is wrong). If we 
change to a Java version, it becomes much harder to debug. 

Another point is that Python code makes it easier to add new functions to 
the script. 




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: STORM-1213: Remove sigar binaries from source ...

2015-11-18 Thread kishorvpatil
Github user kishorvpatil commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157705671
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1213) Remove sigar binaries from source tree

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010934#comment-15010934
 ] 

ASF GitHub Bot commented on STORM-1213:
---

Github user kishorvpatil commented on the pull request:

https://github.com/apache/storm/pull/887#issuecomment-157705671
  
+1


> Remove sigar binaries from source tree
> --
>
> Key: STORM-1213
> URL: https://issues.apache.org/jira/browse/STORM-1213
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>
> In {{external/storm-metrics}} sigar native binaries were added to the source 
> tree. Since Apache releases are source-only, these binaries can't be included 
> in a release.
> My initial thought was just to exclude the binaries from the source 
> distribution, but that would mean that distributions built from a source 
> tarball would not match the convenience binaries from a release (the sigar 
> native binaries would not be included.
> The solution I came up with was to leverage the fact that pre-built native 
> binaries are included in the sigar maven distribution 
> ({{sigar-x.x.x-native.jar}}) and use the maven dependency plugin to unpack 
> them into place during the build, rather than check them into git. One 
> benefit is that it will ensure the versions of the sigar jar and the native 
> binaries match. Another is that mavens checksum/signature checking mechanism 
> will also be applied.
> This isn't an ideal solution since the {{sigar-x.x.x-native.jar}} only 
> includes binaries for linux, OSX, and solaris (notably missing windows DLLs), 
> whereas the non-maven sigar download includes support for a wider range of 
> OSes and architectures.
> I view this as an interim measure until we can find a better way to include 
> the native binaries in the build process, rather than checking them into the 
> source tree -- which would be a blocker for releasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (STORM-1216) button to kill all topologies in Storm UI

2015-11-18 Thread Longda Feng (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Longda Feng reassigned STORM-1216:
--

Assignee: Longda Feng

> button to kill all topologies in Storm UI
> -
>
> Key: STORM-1216
> URL: https://issues.apache.org/jira/browse/STORM-1216
> Project: Apache Storm
>  Issue Type: Wish
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Erik Weathers
>Assignee: Longda Feng
>Priority: Minor
>
> In the Storm-on-Mesos project we had a [request to have an ability to "shut 
> down the storm cluster" via a UI 
> button|https://github.com/mesos/storm/issues/46].   That could be 
> accomplished via a button in the Storm UI to kill all of the topologies.
> I understand if this is viewed as an undesirable feature, but I just wanted 
> to document the request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1216) button to kill all topologies in Storm UI

2015-11-18 Thread Longda Feng (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010754#comment-15010754
 ] 

Longda Feng commented on STORM-1216:


Assigning this issue to me, but I won't resolve it in 0.11. The current UI makes this 
hard to do, but the JStorm UI can do it. 
So please wait until JStorm is merged into Storm; we will start on it then.

We have several times had to migrate one cluster's topologies to another cluster, so 
this feature would be useful for that case.

> button to kill all topologies in Storm UI
> -
>
> Key: STORM-1216
> URL: https://issues.apache.org/jira/browse/STORM-1216
> Project: Apache Storm
>  Issue Type: Wish
>  Components: storm-core
>Affects Versions: 0.11.0
>Reporter: Erik Weathers
>Assignee: Longda Feng
>Priority: Minor
>
> In the Storm-on-Mesos project we had a [request to have an ability to "shut 
> down the storm cluster" via a UI 
> button|https://github.com/mesos/storm/issues/46].   That could be 
> accomplished via a button in the Storm UI to kill all of the topologies.
> I understand if this is viewed as an undesirable feature, but I just wanted 
> to document the request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1075) Storm Cassandra connector

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010617#comment-15010617
 ] 

ASF GitHub Bot commented on STORM-1075:
---

Github user satishd commented on a diff in the pull request:

https://github.com/apache/storm/pull/827#discussion_r4512
  
--- Diff: 
external/storm-cassandra/src/main/java/org/apache/storm/cassandra/Murmur3StreamGrouping.java
 ---
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.storm.cassandra;
+
+import backtype.storm.generated.GlobalStreamId;
+import backtype.storm.grouping.CustomStreamGrouping;
+import backtype.storm.task.WorkerTopologyContext;
+import backtype.storm.topology.FailedException;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Lists;
+import com.google.common.hash.Hashing;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ *
+ * Simple {@link backtype.storm.grouping.CustomStreamGrouping} that uses 
Murmur3 algorithm to choose the target task of a tuple.
+ *
+ * This stream grouping may be used to optimise writes to Apache Cassandra.
+ */
+public class Murmur3StreamGrouping implements CustomStreamGrouping {
--- End diff --

@fhussonnois Cassandra partitions rows based on a row's partition key, 
according to the configured partitioner. But here, the grouping is 
done based on a hash generated from the tuple values, which may contain values other 
than the partition key. Can you elaborate on this?  Please add more details in the 
javadoc. 

It may be helpful for users to have information about this grouping in 
README.md.
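
To make the concern concrete: for this grouping to help Cassandra writes, it would need to hash only the fields that form the partition key, roughly as in the sketch below (illustrative names, not the connector's actual API; it reuses the Guava Hashing utility the class already imports):

```
import com.google.common.hash.Hashing;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class Murmur3TaskChooser {
    /** Pick a target task from murmur3 over the partition-key fields only. */
    public static int chooseTask(List<Integer> targetTasks, List<String> partitionKeyValues) {
        StringBuilder key = new StringBuilder();
        for (String value : partitionKeyValues) {
            key.append(value).append('\0'); // simple order-sensitive concatenation
        }
        long hash = Hashing.murmur3_128()
                .hashString(key.toString(), StandardCharsets.UTF_8)
                .asLong();
        int index = (int) Math.floorMod(hash, (long) targetTasks.size());
        return targetTasks.get(index);
    }
}
```

Whether the chosen task really lines up with Cassandra's token ownership also depends on matching the Murmur3Partitioner's exact byte representation, which is part of what the question above is asking.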


> Storm Cassandra connector
> -
>
> Key: STORM-1075
> URL: https://issues.apache.org/jira/browse/STORM-1075
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Sriharsha Chintalapani
>Assignee: Satish Duggana
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request: STORM-1075 add external module storm-cassandra

2015-11-18 Thread satishd
Github user satishd commented on a diff in the pull request:

https://github.com/apache/storm/pull/827#discussion_r4512
  
--- Diff: 
external/storm-cassandra/src/main/java/org/apache/storm/cassandra/Murmur3StreamGrouping.java
 ---
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.storm.cassandra;
+
+import backtype.storm.generated.GlobalStreamId;
+import backtype.storm.grouping.CustomStreamGrouping;
+import backtype.storm.task.WorkerTopologyContext;
+import backtype.storm.topology.FailedException;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Lists;
+import com.google.common.hash.Hashing;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ *
+ * Simple {@link backtype.storm.grouping.CustomStreamGrouping} that uses 
Murmur3 algorithm to choose the target task of a tuple.
+ *
+ * This stream grouping may be used to optimise writes to Apache Cassandra.
+ */
+public class Murmur3StreamGrouping implements CustomStreamGrouping {
--- End diff --

@fhussonnois Cassandra partitions rows based on a row's partition key, 
according to the configured partitioner. But here, the grouping is 
done based on a hash generated from the tuple values, which may contain values other 
than the partition key. Can you elaborate on this?  Please add more details in the 
javadoc. 

It may be helpful for users to have information about this grouping in 
README.md.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request: STORM-1075 add external module storm-cassandra

2015-11-18 Thread satishd
Github user satishd commented on the pull request:

https://github.com/apache/storm/pull/827#issuecomment-157723266
  
@fhussonnois Can you add an example (or an integration test, which can be 
disabled for now)? It would be helpful for users trying out the Cassandra connector. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (STORM-1075) Storm Cassandra connector

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011040#comment-15011040
 ] 

ASF GitHub Bot commented on STORM-1075:
---

Github user satishd commented on the pull request:

https://github.com/apache/storm/pull/827#issuecomment-157723266
  
@fhussonnois Can you add an example (or an integration test, which can be 
disabled for now)? It would be helpful for users trying out the Cassandra connector. 


> Storm Cassandra connector
> -
>
> Key: STORM-1075
> URL: https://issues.apache.org/jira/browse/STORM-1075
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Reporter: Sriharsha Chintalapani
>Assignee: Satish Duggana
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-1187) Support for late and out of order events in time based windows

2015-11-18 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011136#comment-15011136
 ] 

Robert Joseph Evans commented on STORM-1187:


Yes, event time is just another dimension that we can put into buckets and 
aggregate on, but the hard part is knowing when you can consider a bucket 
complete, and when you can consider a bucket completely dead so that no more late data 
will be accepted.  A lot of this depends on your use case and on where the 
aggregated data is stored.  I really think the Cloud Dataflow API captures most 
of what an API like this should support.  If we could make it cleaner that 
would be great, but what we are doing is complex enough that I am not sure we 
really can make it that much cleaner.
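
As a toy sketch of the trade-off being discussed — bucket by event time, accept late tuples only within a configured lag, and treat older buckets as dead — something like the following illustrates the moving parts (hypothetical, not Storm's windowing API or the Cloud Dataflow one):

```
import java.util.Map;
import java.util.TreeMap;

public class EventTimeBuckets {
    private final long bucketSizeMs;
    private final long allowedLagMs;
    private final TreeMap<Long, Long> counts = new TreeMap<>(); // bucket start -> tuple count

    public EventTimeBuckets(long bucketSizeMs, long allowedLagMs) {
        this.bucketSizeMs = bucketSizeMs;
        this.allowedLagMs = allowedLagMs;
    }

    /** Returns false if the tuple arrived later than the allowed lag and was dropped. */
    public boolean add(long eventTimeMs, long maxEventTimeSeenMs) {
        if (eventTimeMs < maxEventTimeSeenMs - allowedLagMs) {
            return false; // its bucket is considered dead; the late tuple is rejected
        }
        long bucketStart = (eventTimeMs / bucketSizeMs) * bucketSizeMs;
        counts.merge(bucketStart, 1L, Long::sum);
        return true;
    }

    /** Buckets starting before (watermark - allowed lag) are treated as complete. */
    public Map<Long, Long> completeBuckets(long watermarkMs) {
        return counts.headMap(watermarkMs - allowedLagMs, false);
    }
}
```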

> Support for late and out of order events in time based windows
> --
>
> Key: STORM-1187
> URL: https://issues.apache.org/jira/browse/STORM-1187
> Project: Apache Storm
>  Issue Type: Sub-task
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>
> Right now the time-based windows use the timestamp at which the tuple is 
> received by the bolt. 
> However, there are use cases where tuples should be processed based on the 
> time when they were actually generated rather than the time when they were received. So 
> we need to add support for processing events with a time lag and also have 
> some way to specify and read tuple timestamps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] 1.0 Release (was Re: [DISCUSS] Initial 0.11.0 Release)

2015-11-18 Thread Bobby Evans
My concern for backwards compatibility is really to provide a clean upgrade 
path to my users.  So for me it is mostly maintaining some form of backwards 
compatibility for a few months until all the users have upgraded.  This is 
because we have multiple clusters and having users maintain two versions of 
their code so they can run on multiple clusters is something they would 
complain a lot about.  I know that for other groups it will likely be similar, 
but their time-frame and the versions they will be going from/to will be 
different.

In general I am in favor of maintaining API, wire, and binary 
compatibility as long as is easily possible, and if we cannot it needs to be a 
conscious decision that involves changing the major version number of storm.  
Changing the package names is very much a non-backwards compatible change.  But 
in this case the backwards compatibility I put in is a real hack.  That is why 
I put it in a jar named storm-rename-hack in a java package named 
org.apache.storm.hack. Additionally it is off by default, and when it is on 
and it does anything to the jar, it warns you repeatedly that you need to 
upgrade your namespaces.

I personally think that this is a one off situation.  
We are not going to break compatibility that often, because we have the tools 
we need to maintain compatibility, and we have been using them fairly well 
lately.  Now that being said compatibility is theoretical until someone 
actually tests it.  Small bugs can always creep in so if you do want to go from 
one version to another you need to be sure that you have tested it, like any 
other software.
 - Bobby 


On Tuesday, November 17, 2015 5:57 PM, Hugo Da Cruz Louro 
 wrote:
 

 +1 as well. I support moving to the org.apache.storm package as early as 
possible and I am OK with storm-compat. My only concern with using storm-compat 
is if we are going to have to support it forever, or plan on dropping it after 
a certain release. Backwards compatibility is a valid concern but it can become 
very difficult to maintain older versions and at some point we must say that 
only topologies built after version X will run for version Y onwards (Y > X)

Hugo

> On Nov 16, 2015, at 5:23 PM, Harsha  wrote:
> 
> +1 on Bobby's suggestion on moving the packages to storm-compat and have
> it part of lib folder. 
> Moving 1.0 with org.apache.storm will make it easier in the future
> rather than wait for 2.0 release we should make 
> this change now and in 2.0 we can remove the storm-compat jar.
> 
> Thanks,
> Harsha
> 
> 
> On Thu, Nov 12, 2015, at 07:15 AM, Bobby Evans wrote:
>> I agree that having the old package names for a 1.0 release is a little
>> odd, but I also am nervous about breaking backwards compatibility for our
>> customers in a very significant way.  The upgrade for us from 0.9.x to
>> 0.10.x has gone fairly smoothly.  Most customers read the announcement
>> and recompiled their code against 0.10, but followed our instructions so
>> that their topologies could run on both 0.9.x and 0.10.x.  Having the
>> ability for a topology to run on both an old cluster and a new cluster is
>> very important for us, because of failover.  If we want to minimize
>> downtime we need to have the ability to fail a topology over from one
>> cluster to another, usually running on the other side of the
>> country/world.  For stability reasons we do not want to upgrade both
>> clusters at once, so we need to have confidence that a topology can run
>> on both clusters.  Maintaining two versions of a topology is a huge pain
>> as well.
>> Perhaps what we can do for 1.0 is to move all of the packages to
>> org.apache.storm, but provide a storm-compat package that will still have
>> the old user facing APIs in it, that are actually (for the most part)
>> subclasses/interfaces of the org.apache versions.  I am not sure this
>> will work perfectly, but I think I will give it a try.
>>  - Bobby 
>> 
>> 
>>    On Thursday, November 12, 2015 9:04 AM, Aaron. Dossett
>>     wrote:
>> 
>> 
>> As a user/developer this sounds great.  The only item that gives me
>> pause
>> is #2.  Still seeing backtype.* package names would be unexpected (for
>> me)
>> for a 1.0 release.  That small reservation aside, I am +1 (non-binding).
>> 
>> On 11/11/15, 2:45 PM, "임정택"  wrote:
>> 
>>> +1
>>> 
>>> Jungtaek Lim (HeartSaVioR)
>>> 
>>> 2015-11-12 7:21 GMT+09:00 P. Taylor Goetz :
>>> 
 Changing subject in order to consolidate discussion of a 1.0 release in
 one thread (there was some additional discussion in the thread regarding
 the JStorm code merge).
 
 I just want to make sure I’m accurately capturing the sentiment of the
 community with regard to a 1.0 release. Please correct me if my summary
 seems off-base or jump in with an opinion.
 
 In summary:
 
 1. What we have been calling “0.11.0” will 

[GitHub] storm pull request: [STORM-876] Blobstore API

2015-11-18 Thread d2r
Github user d2r commented on a diff in the pull request:

https://github.com/apache/storm/pull/845#discussion_r45264867
  
--- Diff: storm-core/src/clj/backtype/storm/daemon/supervisor.clj ---
@@ -927,31 +1133,32 @@
first ))
 
 (defmethod download-storm-code
-:local [conf storm-id master-code-dir supervisor download-lock]
-(let [stormroot (supervisor-stormdist-root conf storm-id)]
-  (locking download-lock
-(FileUtils/copyDirectory (File. master-code-dir) (File. 
stormroot))
-(let [classloader (.getContextClassLoader 
(Thread/currentThread))
-  resources-jar (resources-jar)
-  url (.getResource classloader RESOURCES-SUBDIR)
-  target-dir (str stormroot file-path-separator 
RESOURCES-SUBDIR)]
-  (cond
-   resources-jar
-   (do
- (log-message "Extracting resources from jar at " 
resources-jar " to " target-dir)
- (extract-dir-from-jar resources-jar RESOURCES-SUBDIR 
stormroot))
-   url
-   (do
- (log-message "Copying resources at " (URI. (str url)) " 
to " target-dir)
- (if (= (.getProtocol url) "jar" )
-   (extract-dir-from-jar (.getFile (.getJarFileURL 
(.openConnection url))) RESOURCES-SUBDIR stormroot)
-   (FileUtils/copyDirectory (File. (.getPath (URI. (str 
url (File. target-dir)))
- )
-   )
-  )
-)))
-
-(defmethod mk-code-distributor :local [conf] nil)
+  :local [conf storm-id master-code-dir localizer]
+  (let [tmproot (str (supervisor-tmp-dir conf) file-path-separator (uuid))
+stormroot (supervisor-stormdist-root conf storm-id)
+blob-store (Utils/getNimbusBlobStore conf master-code-dir nil)]
+(try
+  (FileUtils/forceMkdir (File. tmproot))
+  (.readBlobTo blob-store (master-stormcode-key storm-id) 
(FileOutputStream. (supervisor-stormcode-path tmproot)) nil)
+  (.readBlobTo blob-store (master-stormconf-key storm-id) 
(FileOutputStream. (supervisor-stormconf-path tmproot)) nil)
+  (finally
+(.shutdown blob-store)))
+(FileUtils/moveDirectory (File. tmproot) (File. stormroot))
+(setup-storm-code-dir conf (read-supervisor-storm-conf conf storm-id) 
stormroot)
+(let [classloader (.getContextClassLoader (Thread/currentThread))
+  resources-jar (resources-jar)
+  url (.getResource classloader RESOURCES-SUBDIR)
+  target-dir (str stormroot file-path-separator RESOURCES-SUBDIR)]
+  (cond
+resources-jar
--- End diff --

param `localizer` is not used
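
For readers less familiar with the Clojure, the :local branch in the diff above boils down to roughly the following; this is a hedged Java paraphrase with a minimal stand-in `BlobStore` interface, not the pull request's actual API (and, as noted, the `localizer` parameter is unused in this branch):

```
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.io.FileUtils;

public class LocalCodeDownloadSketch {
    /** Minimal stand-in for the blob store calls made in the diff above. */
    interface BlobStore {
        void readBlobTo(String key, OutputStream out, Object who) throws Exception;
        void shutdown();
    }

    static void downloadStormCode(BlobStore blobStore, File tmproot, File stormroot,
                                  String stormcodeKey, String stormconfKey) throws Exception {
        try {
            // Download into a temp dir first, then move the whole directory into place.
            FileUtils.forceMkdir(tmproot);
            // File names follow the supervisor-stormcode-path / supervisor-stormconf-path
            // helpers used in the diff.
            try (OutputStream code = new FileOutputStream(new File(tmproot, "stormcode.ser"));
                 OutputStream conf = new FileOutputStream(new File(tmproot, "stormconf.ser"))) {
                blobStore.readBlobTo(stormcodeKey, code, null);
                blobStore.readBlobTo(stormconfKey, conf, null);
            }
        } finally {
            blobStore.shutdown();
        }
        FileUtils.moveDirectory(tmproot, stormroot);
    }
}
```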


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

