[jira] [Commented] (STORM-2629) Can't build site on Windows due to Nokogiri failing to install

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165935#comment-16165935
 ] 

ASF GitHub Bot commented on STORM-2629:
---

Github user srdo commented on the issue:

https://github.com/apache/storm-site/pull/1
  
@HeartSaVioR I've tried it out on Windows 10 and ruby 2.3.4p301. I just 
went and tested it on a fresh install of Ubuntu 17.04 and ruby 2.4.1p111.

Did you run `bundler install` after switching to this branch?

I'm wondering if the README is a little misleading; it's my understanding 
that people should use Bundler to update dependencies and run Jekyll, but the 
README doesn't mention it. 

Here are the commands I ran to start the local server:
`bundler install`
`bundler exec jekyll serve -w`


> Can't build site on Windows due to Nokogiri failing to install
> --
>
> Key: STORM-2629
> URL: https://issues.apache.org/jira/browse/STORM-2629
> Project: Apache Storm
>  Issue Type: Bug
>  Components: asf-site
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
> Attachments: STORM-2629.patch
>
>
> I'm using Windows 10's bash support, and I'm having some trouble building the 
> site since Nokogiri won't install. 
> {code}
> Running 'configure' for libxml2 2.9.2... ERROR, review
> '/tmp/bundler20170714-31-159r6j1nokogiri-1.6.7.2/gems/nokogiri-1.6.7.2/ext/nokogiri/tmp/x86_64-pc-linux-gnu/ports/libxml2/2.9.2/configure.log'
> to see what happened. Last lines are:
> 
> checking build system type... ./config.guess: line 4: $'\r': command not found
> ./config.guess: line 6: $'\r': command not found
> ./config.guess: line 33: $'\r': command not found
> {code}
> Upgrading Nokogiri fixes this issue, so I'd like to upgrade the gemfile to 
> the latest version of github-pages, i.e. run "bundler update". As far as I 
> can tell, we only need to make a small number of changes to get it working.
> * It seems like the meaning of the {{page}} variable in a layout has changed 
> in Jekyll. _layouts/about.html should use {{layout}} to refer to its own 
> variables instead of {{page}} (which would belong to the concrete page being 
> rendered). The other layouts don't refer to their own front matter, so there 
> shouldn't be any issue there
> * Jekyll has made redcarpet an optional dependency, so the gemfile should 
> list that dependency explicitly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2741) Add in config options for metrics consumer cpu and memory

2017-09-15 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2741:
--
Labels: pull-request-available  (was: )

> Add in config options for metrics consumer cpu and memory
> -
>
> Key: STORM-2741
> URL: https://issues.apache.org/jira/browse/STORM-2741
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> Similar to STORM-2730, we want to add configurations for metrics consumer 
> CPU and memory requirements instead of just using 
> topology.component.resources.onheap.memory.mb etc.
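> A sketch of what that could look like at topology submission time (the metrics 
> consumer config keys below are hypothetical placeholders, not finalized names):
> {code:java}
> Map<String, Object> conf = new HashMap<>();
> // Hypothetical keys: give the metrics consumer its own resource requirements
> // instead of inheriting topology.component.resources.onheap.memory.mb.
> conf.put("topology.metrics.consumer.resources.onheap.memory.mb", 128.0);
> conf.put("topology.metrics.consumer.cpu.pcore.percent", 50.0);
> {code}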



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2740) Add caching of some blobs in nimbus to improve performance

2017-09-15 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2740:
--
Labels: pull-request-available  (was: )

> Add caching of some blobs in nimbus to improve performance
> --
>
> Key: STORM-2740
> URL: https://issues.apache.org/jira/browse/STORM-2740
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> In nimbus we read the topology and the topology config from the blob store 
> all the time, even multiple times for a single thrift call.  It would really 
> be great if we could cache these instead of rereading them all the time. (We 
> have found it is a cause of the UI being slow in some cases.)
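> A minimal sketch of the idea (readTopologyFromBlobStore() and the cache shape are 
> illustrative assumptions, not actual Nimbus code; cache sizing and invalidation 
> are left out):
> {code:java}
> private final ConcurrentHashMap<String, StormTopology> topoCache = new ConcurrentHashMap<>();
> 
> private StormTopology readTopology(String stormId) {
>     // Only the first request for a given storm id hits the blob store;
>     // later thrift calls are served from the in-memory cache.
>     return topoCache.computeIfAbsent(stormId, id -> readTopologyFromBlobStore(id));
> }
> {code}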



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2549) The fix for STORM-2343 is incomplete, and the spout can still get stuck on failed tuples

2017-09-13 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2549:
--
Labels: pull-request-available  (was: )

> The fix for STORM-2343 is incomplete, and the spout can still get stuck on 
> failed tuples
> 
>
> Key: STORM-2549
> URL: https://issues.apache.org/jira/browse/STORM-2549
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Example:
> Say maxUncommittedOffsets is 10, maxPollRecords is 5, and the committedOffset 
> is 0.
> The spout will initially emit up to offset 10, because it is allowed to poll 
> until numNonRetriableTuples is >= maxUncommittedOffsets.
> The spout will be allowed to emit another 5 tuples if offset 10 fails, so if 
> that happens, offsets 10-14 will get emitted. If offset 1 fails and 2-14 get 
> acked, the spout gets stuck because it will count the "extra tuples" 11-14 in 
> numNonRetriableTuples.
> A similar case is the one where maxPollRecords doesn't divide 
> maxUncommittedOffsets evenly. If it were 3 in the example above, the spout 
> might just immediately emit offsets 1-12. If 2-12 get acked, offset 1 cannot 
> be reemitted.
> The proposed solution is the following:
> * Enforce maxUncommittedOffsets on a per partition basis (i.e. actual limit 
> will be multiplied by the number of partitions) by always allowing poll for 
> retriable tuples that are within maxUncommittedOffsets tuples of the 
> committed offset. Pause any non-retriable partitions if the partition has 
> passed the maxUncommittedOffsets limit, and some other partition is polling 
> for retries while also at the maxUncommittedOffsets limit. 
> Example of this functionality:
> MaxUncommittedOffsets is 100
> MaxPollRecords is 10
> Committed offset for partition 0 and 1 is 0.
> Partition 0 has emitted 0
> Partition 1 has emitted 0...95, 97, 99, 101, 103 (some offsets compacted away)
> Partition 1, message 99 is retriable
> We check that message 99 is within 100 emitted tuples of offset 0 (it is the 
> 97th tuple after offset 0, so it is)
> We do not pause partition 0 because that partition isn't at the 
> maxUncommittedOffsets limit.
> Seek to offset 99 on partition 1 and poll
> We get back offset 99, 101, 103 and potentially 7 new tuples. Say the lowest 
> of these is at offset 104.
> The spout emits offset 99, filters out 101 and 103 because they were already 
> emitted, and emits the 7 new tuples.
> If offset 104 (or later) become retriable, they are not retried until the 
> committed offset moves. This is because offset 104 is the 101st tuple emitted 
> after offset 0, so it isn't allowed to retry until the committed offset moves.
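> A minimal sketch of the proposed per-partition check (the method shape and names 
> are illustrative, not the actual storm-kafka-client code):
> {code:java}
> // A failed offset may only be retried if no more than maxUncommittedOffsets tuples
> // were emitted between the committed offset (exclusive) and it (inclusive). Offsets
> // that were compacted away are not in emittedOffsets, so they don't count.
> private boolean isAllowedToRetry(long retriableOffset, long committedOffset,
>                                  NavigableSet<Long> emittedOffsets, int maxUncommittedOffsets) {
>     int emittedInBetween = emittedOffsets
>         .subSet(committedOffset, false, retriableOffset, true)
>         .size();
>     return emittedInBetween <= maxUncommittedOffsets;
> }
> {code}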



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2733) Make Load Aware Shuffle much better at really bad situations

2017-09-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2733:
--
Labels: pull-request-available  (was: )

> Make Load Aware Shuffle much better at really bad situations
> 
>
> Key: STORM-2733
> URL: https://issues.apache.org/jira/browse/STORM-2733
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We recently had an issue where some bolts got really backed up and started to 
> die from OOMs.  The issue ended up being twofold.
> First, GC slowed down the worker so much that it could not keep up even with 
> < 1% of the traffic that was still being sent to it, which made it almost 
> impossible to recover.
> The second issue was that the serialization of the tuples took a lot longer 
> than the processing, which resulted in the send queue filling up much more 
> quickly than the receive queue.
> To help fix this I plan to address it in two ways.  First, we need a better 
> algorithm that can actually shut off the flow entirely to a very slow bolt, 
> and second, we need to take the send queue into account when shuffling.
> This is not a full set of changes needed by STORM-2686 but it is a step in 
> that direction.  I am going to try and set it up so that the two algorithms 
> would work nicely together.
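> A rough sketch of the shuffling side of that plan (CUTOFF, receiveQueueLoad() and 
> sendQueueLoad() are illustrative assumptions, not the real storm-client API):
> {code:java}
> // Weight each downstream task by how empty its queues are; once a task's combined
> // receive/send load passes the cut-off, its weight drops to zero and the shuffle
> // stops sending to it entirely until it recovers.
> private double[] computeWeights(List<Integer> targetTasks) {
>     double[] weights = new double[targetTasks.size()];
>     for (int i = 0; i < targetTasks.size(); i++) {
>         int task = targetTasks.get(i);
>         double load = Math.max(receiveQueueLoad(task), sendQueueLoad(task)); // in [0.0, 1.0]
>         weights[i] = load >= CUTOFF ? 0.0 : 1.0 - load;
>     }
>     return weights;
> }
> {code}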



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2607) [kafka-client] Consumer group every time with lag 1

2017-09-18 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2607:
--
Labels: pull-request-available  (was: )

> [kafka-client] Consumer group every time with lag 1
> ---
>
> Key: STORM-2607
> URL: https://issues.apache.org/jira/browse/STORM-2607
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.1.1
>Reporter: Rodolfo Ribeiro SIlva
>Assignee: Hugo Louro
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> When I put a message in a partition, storm-kafka-client consumes the message,
> but commits the offset -1.
> storm-kafka-client: 1.1.0
> storm-core: 1.1.0
> kafka: 0.10.2.0
> Steps to reproduce:
> #1 - Insert a message into Kafka
> #2 - Read the topic with the Storm spout
> #3 - Get the offset for the consumer group; the offset is always -1
> The KafkaSpoutConfig:
> protected static KafkaSpoutConfig newKafkaSpoutConfig() {
>     return KafkaSpoutConfig.builder("192.168.57.11:9092", "topic")
>         .setGroupId("storm")
>         .setOffsetCommitPeriodMs(10_000)
>         .setMaxUncommittedOffsets(100_)
>         .setRetry(newRetryService())
>         .build();
> }



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2742) Logviewer leaking file descriptors

2017-09-18 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2742:
--
Labels: pull-request-available  (was: )

> Logviewer leaking file descriptors
> --
>
> Key: STORM-2742
> URL: https://issues.apache.org/jira/browse/STORM-2742
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Kyle Nusbaum
>Assignee: Kyle Nusbaum
>  Labels: pull-request-available
>
> The logviewer leaks file descriptors from the search module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2731) Simple checks in Storm Windowing

2017-09-19 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2731:
--
Labels: pull-request-available  (was: )

> Simple checks in Storm Windowing
> 
>
> Key: STORM-2731
> URL: https://issues.apache.org/jira/browse/STORM-2731
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Boyang Jerry Peng
>Assignee: Boyang Jerry Peng
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2713) when the connection to the first zkserver is timeout,storm-kafka's kafkaspout will throw a exception

2017-09-19 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2713:
--
Labels: pull-request-available  (was: )

> when the connection to the first zkserver is timeout,storm-kafka's kafkaspout 
> will throw a exception
> 
>
> Key: STORM-2713
> URL: https://issues.apache.org/jira/browse/STORM-2713
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: liuzhaokun
>Assignee: liuzhaokun
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the connection to the first zkserver times out, storm-kafka's kafkaspout 
> throws an exception without attempting to connect to the other zkservers, even 
> though zk can still work with one node down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2745) Hdfs Open Files problem

2017-09-19 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2745:
--
Labels: features pull-request-available starter  (was: features starter)

> Hdfs Open Files problem
> ---
>
> Key: STORM-2745
> URL: https://issues.apache.org/jira/browse/STORM-2745
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-hdfs
>Affects Versions: 2.0.0, 1.x
>Reporter: Shoeb
>  Labels: features, pull-request-available, starter
> Fix For: 2.0.0, 1.x
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Issue:
> The problem exists when there are multiple HDFS writers in the writersMap. Each 
> writer keeps an open HDFS handle to its file. In case of an inactive writer (i.e. 
> one which has not consumed any data for a long period), the files are never 
> closed and always remain open.
> Ideally, these files should get closed and the HDFS writers removed from the 
> writersMap.
> Solution:
> Implement a ClosingFilesPolicy based on tick tuple intervals. At each tick tuple, 
> all writers are checked and closed if they have been idle for too long.
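> A minimal sketch of what such a policy could look like (the Writer type, its 
> getLastUsedTime()/close() methods, and the tick-driven call site are assumptions 
> for illustration, not the existing storm-hdfs API):
> {code:java}
> // Called from the bolt's execute() when TupleUtils.isTick(tuple) is true.
> private void closeIdleWriters(Map<String, Writer> writersMap, long maxIdleMs) {
>     long now = System.currentTimeMillis();
>     Iterator<Map.Entry<String, Writer>> it = writersMap.entrySet().iterator();
>     while (it.hasNext()) {
>         Map.Entry<String, Writer> entry = it.next();
>         if (now - entry.getValue().getLastUsedTime() > maxIdleMs) {
>             entry.getValue().close(); // releases the open HDFS file handle
>             it.remove();              // drop the idle writer from the writersMap
>         }
>     }
> }
> {code}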



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2743) Add logging to monitor how long scheduling is taking

2017-09-19 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2743:
--
Labels: pull-request-available  (was: )

> Add logging to monitor how long scheduling is taking
> 
>
> Key: STORM-2743
> URL: https://issues.apache.org/jira/browse/STORM-2743
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-server
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Trivial
>  Labels: pull-request-available
>
> Add logging to monitor how long scheduling is taking



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2084) after supervisor v2 merge async localizer and localizer

2017-09-21 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2084:
--
Labels: pull-request-available  (was: )

> after supervisor v2 merge async localizer and localizer
> ---
>
> Key: STORM-2084
> URL: https://issues.apache.org/jira/browse/STORM-2084
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> Once we merge in STORM-2018 
> https://github.com/apache/storm/pull/1642 
> we should look into merging the two localizers into a single class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2753) Avoid shutting down netty server on netty exception

2017-09-21 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2753:
--
Labels: pull-request-available  (was: )

> Avoid shutting down netty server on netty exception
> ---
>
> Key: STORM-2753
> URL: https://issues.apache.org/jira/browse/STORM-2753
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> We should avoid shutting down netty server on netty exception



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2748) TickTupleTest is useless

2017-09-20 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2748:
--
Labels: pull-request-available  (was: )

> TickTupleTest is useless
> 
>
> Key: STORM-2748
> URL: https://issues.apache.org/jira/browse/STORM-2748
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> The test starts up a small topology on a simulated time cluster with 
> TOPOLOGY_TICK_TUPLE_FREQ_SECS set to 1.  Then it simulates 2 seconds of 
> cluster time.  This is not enough time to even launch the topology.  How do I 
> know this?  Because the Bolt and Spout in the topology override `writeObject` 
> so the resulting serialized bolt and spout are empty and trying to 
> deserialize them results in an exception.
> Just running a topology that does nothing and never verifies that the ticks 
> showed up is a really horrible test.  We should either delete it entirely or 
> actually verify that ticks are showing up once a second.  I am leaning 
> towards just removing it totally.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2744) Add in "restart timeout" for backpressure

2017-09-21 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2744:
--
Labels: pull-request-available  (was: )

> Add in "restart timeout" for backpressure
> -
>
> Key: STORM-2744
> URL: https://issues.apache.org/jira/browse/STORM-2744
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> Instead of stopping indefinitely we want to add a timeout value to the 
> backpressure mechanism so that spouts won't get stuck if bolts fail to switch 
> back on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2738) The number of ackers should default to the number of actual running workers on RAS cluster

2017-09-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2738:
--
Labels: pull-request-available  (was: )

> The number of ackers should default to the number of actual running workers 
> on RAS cluster
> --
>
> Key: STORM-2738
> URL: https://issues.apache.org/jira/browse/STORM-2738
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: Screen Shot 2017-09-13 at 11.13.41 AM.png
>
>
> *Problem*:
> If topology.acker.executors is not set, the number of ackers will be equal 
> to topology.workers. But on a RAS cluster, we don't set topology.workers 
> because the number of workers will be determined by the scheduler. So in this 
> case, the number of ackers will always be 1 (see attached screenshot).
> *Analysis*:
> The number of ackers has to be computed before scheduling happens, so it 
> knows how to schedule the topology. The number of workers is not set until 
> the topology is scheduled, so it is a bit of a chicken and egg problem.
> *Solution*:
> We could probably use the total amount of requested memory when the topology 
> is submitted divided by the memory per worker to get an estimate that is 
> better than 1.
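> A rough sketch of that estimate (the method and variable names are illustrative, 
> not actual Nimbus code):
> {code:java}
> // Estimate the worker count from total requested memory divided by per-worker memory,
> // and use it as the default acker count when topology.acker.executors is unset on RAS.
> static int estimateAckers(double totalRequestedMemoryMb, double memoryPerWorkerMb) {
>     // One acker per estimated worker, never fewer than one.
>     return (int) Math.max(1, Math.ceil(totalRequestedMemoryMb / memoryPerWorkerMb));
> }
> {code}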



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (STORM-2629) Can't build site on Windows due to Nokogiri failing to install

2017-09-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165544#comment-16165544
 ] 

ASF GitHub Bot commented on STORM-2629:
---

Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm-site/pull/1
  
Which pair(s) of OS and Ruby version are you trying out?
I'm trying out the change but I'm experiencing a crash in redcarpet 
(redcarpet.rb has `require 'redcarpet.so'` but there's no such file) on macOS 
Sierra (10.12.6) and Ruby 2.4.1. The odd thing is that the standalone 
'redcarpet' command works.


> Can't build site on Windows due to Nokogiri failing to install
> --
>
> Key: STORM-2629
> URL: https://issues.apache.org/jira/browse/STORM-2629
> Project: Apache Storm
>  Issue Type: Bug
>  Components: asf-site
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
> Attachments: STORM-2629.patch
>
>
> I'm using Windows 10's bash support, and I'm having some trouble building the 
> site since Nokogiri won't install. 
> {code}
> Running 'configure' for libxml2 2.9.2... ERROR, review
> '/tmp/bundler20170714-31-159r6j1nokogiri-1.6.7.2/gems/nokogiri-1.6.7.2/ext/nokogiri/tmp/x86_64-pc-linux-gnu/ports/libxml2/2.9.2/configure.log'
> to see what happened. Last lines are:
> 
> checking build system type... ./config.guess: line 4: $'\r': command not found
> ./config.guess: line 6: $'\r': command not found
> ./config.guess: line 33: $'\r': command not found
> {code}
> Upgrading Nokogiri fixes this issue, so I'd like to upgrade the gemfile to 
> the latest version of github-pages, i.e. run "bundler update". As far as I 
> can tell, we only need to make a small number of changes to get it working.
> * It seems like the meaning of the {{page}} variable in a layout has changed 
> in Jekyll. _layouts/about.html should use {{layout}} to refer to its own 
> variables instead of {{page}} (which would belong to the concrete page being 
> rendered). The other layouts don't refer to their own front matter, so there 
> shouldn't be any issue there
> * Jekyll has made redcarpet an optional dependency, so the gemfile should 
> list that dependency explicitly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2629) Can't build site on Windows due to Nokogiri failing to install

2017-09-13 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2629:
--
Labels: pull-request-available  (was: )

> Can't build site on Windows due to Nokogiri failing to install
> --
>
> Key: STORM-2629
> URL: https://issues.apache.org/jira/browse/STORM-2629
> Project: Apache Storm
>  Issue Type: Bug
>  Components: asf-site
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
> Attachments: STORM-2629.patch
>
>
> I'm using Windows 10's bash support, and I'm having some trouble building the 
> site since Nokogiri won't install. 
> {code}
> Running 'configure' for libxml2 2.9.2... ERROR, review
> '/tmp/bundler20170714-31-159r6j1nokogiri-1.6.7.2/gems/nokogiri-1.6.7.2/ext/nokogiri/tmp/x86_64-pc-linux-gnu/ports/libxml2/2.9.2/configure.log'
> to see what happened. Last lines are:
> 
> checking build system type... ./config.guess: line 4: $'\r': command not found
> ./config.guess: line 6: $'\r': command not found
> ./config.guess: line 33: $'\r': command not found
> {code}
> Upgrading Nokogiri fixes this issue, so I'd like to upgrade the gemfile to 
> the latest version of github-pages, i.e. run "bundler update". As far as I 
> can tell, we only need to make a small number of changes to get it working.
> * It seems like the meaning of the {{page}} variable in a layout has changed 
> in Jekyll. _layouts/about.html should use {{layout}} to refer to its own 
> variables instead of {{page}} (which would belong to the concrete page being 
> rendered). The other layouts don't refer to their own front matter, so there 
> shouldn't be any issue there
> * Jekyll has made redcarpet an optional dependency, so the gemfile should 
> list that dependency explicitly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2648) Kafka spout can't show acks/fails and complete latency when auto commit is enabled

2017-09-22 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2648:
--
Labels: pull-request-available  (was: )

> Kafka spout can't show acks/fails and complete latency when auto commit is 
> enabled
> --
>
> Key: STORM-2648
> URL: https://issues.apache.org/jira/browse/STORM-2648
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The storm-kafka-client spout currently emits tuples with no message ids if 
> auto commit is enabled. This causes the ack/fail/complete latency counters in 
> Storm UI to be 0. In some cases this is desirable because the user may not 
> care, and doesn't want the overhead of Storm tracking tuples. 
> [~avermeerbergen] expressed a desire to be able to use auto commit without 
> these counters being disabled, presumably to monitor topology performance.
> We should add a toggle that allows users to enable/disable tuple anchoring in 
> the auto commit case. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2722) JMsSpout test fails way too often

2017-09-18 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2722:
--
Labels: pull-request-available  (was: )

> JMsSpout test fails way too often
> -
>
> Key: STORM-2722
> URL: https://issues.apache.org/jira/browse/STORM-2722
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-jms
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:92)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at org.junit.Assert.assertTrue(Assert.java:54)
>   at 
> org.apache.storm.jms.spout.JmsSpoutTest.testFailure(JmsSpoutTest.java:62)
> {code}
> Which corresponds to 
> https://github.com/apache/storm/blob/d6e5e6d4e0a20c4c9f0ce0e3000e730dcb4700da/external/storm-jms/src/test/java/org/apache/storm/jms/spout/JmsSpoutTest.java?utf8=%E2%9C%93#L62



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2698) Upgrade to newest Mockito and Hamcrest versions

2017-10-08 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2698:
--
Labels: pull-request-available  (was: )

> Upgrade to newest Mockito and Hamcrest versions
> ---
>
> Key: STORM-2698
> URL: https://issues.apache.org/jira/browse/STORM-2698
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> We are currently depending on Mockito 1.9.5, which is from 2012. I think we 
> should upgrade to the latest version, since some APIs have become a little 
> nicer to work with.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2769) Fast-fail if output stream Id is null

2017-10-03 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2769:
--
Labels: pull-request-available  (was: )

> Fast-fail if output stream Id is null 
> --
>
> Key: STORM-2769
> URL: https://issues.apache.org/jira/browse/STORM-2769
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> If we accidentally set the output stream Id to null and end up with code like:
> {code:java}
> @Override
> public void declareOutputFields(OutputFieldsDeclarer declarer) {
>   declarer.declareStream(null, new Fields("word", "count"));
> }
> {code}
> We could get the following exception:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:200)
>   at 
> org.apache.storm.generated.ComponentCommon$ComponentCommonStandardScheme.write(ComponentCommon.java:739)
>   at 
> org.apache.storm.generated.ComponentCommon$ComponentCommonStandardScheme.write(ComponentCommon.java:636)
>   at 
> org.apache.storm.generated.ComponentCommon.write(ComponentCommon.java:556)
>   at 
> org.apache.storm.generated.Bolt$BoltStandardScheme.write(Bolt.java:477)
>   at 
> org.apache.storm.generated.Bolt$BoltStandardScheme.write(Bolt.java:427)
>   at org.apache.storm.generated.Bolt.write(Bolt.java:362)
>   at 
> org.apache.storm.generated.StormTopology$StormTopologyStandardScheme.write(StormTopology.java:1483)
>   at 
> org.apache.storm.generated.StormTopology$StormTopologyStandardScheme.write(StormTopology.java:1254)
>   at 
> org.apache.storm.generated.StormTopology.write(StormTopology.java:1110)
>   at 
> org.apache.storm.generated.Nimbus$submitTopology_args$submitTopology_argsStandardScheme.write(Nimbus.java:7676)
>   at 
> org.apache.storm.generated.Nimbus$submitTopology_args$submitTopology_argsStandardScheme.write(Nimbus.java:7601)
>   at 
> org.apache.storm.generated.Nimbus$submitTopology_args.write(Nimbus.java:7528)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:71)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
>   at 
> org.apache.storm.generated.Nimbus$Client.send_submitTopology(Nimbus.java:304)
>   at 
> org.apache.storm.generated.Nimbus$Client.submitTopology(Nimbus.java:293)
>   at 
> org.apache.storm.StormSubmitter.submitTopologyInDistributeMode(StormSubmitter.java:332)
>   at 
> org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:266)
>   at 
> org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:393)
>   at 
> org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:165)
>   at 
> org.apache.storm.topology.ConfigurableTopology.submit(ConfigurableTopology.java:94)
>   at 
> org.apache.storm.starter.WordCountTopology.run(WordCountTopology.java:100)
>   at 
> org.apache.storm.topology.ConfigurableTopology.start(ConfigurableTopology.java:70)
>   at 
> org.apache.storm.starter.WordCountTopology.main(WordCountTopology.java:79)
> {code}
> This is because a null key in a map is not supported by thrift. We should check 
> that the stream Id is not null 
> [here|https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/topology/OutputFieldsGetter.java#L42]
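> A hedged sketch of the proposed fast-fail (the exact exception type and message are 
> illustrative; the point is to reject a null stream id in declareStream before the 
> topology is serialized with thrift):
> {code:java}
> if (streamId == null) {
>     throw new IllegalArgumentException(
>         "Stream id must not be null: thrift cannot serialize a map with a null key");
> }
> {code}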



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2686) Add Locality Aware ShuffleGrouping

2017-10-10 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2686:
--
Labels: pull-request-available  (was: )

> Add Locality Aware ShuffleGrouping
> --
>
> Key: STORM-2686
> URL: https://issues.apache.org/jira/browse/STORM-2686
> Project: Apache Storm
>  Issue Type: Sub-task
>  Components: storm-client
>Reporter: Ethan Li
>Assignee: Ethan Li
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2759) Let users indicate if a worker should restart on blob download

2017-10-09 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2759:
--
Labels: pull-request-available  (was: )

> Let users indicate if a worker should restart on blob download
> --
>
> Key: STORM-2759
> URL: https://issues.apache.org/jira/browse/STORM-2759
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> Some blobs (like jar files) really should be tied to the life cycle of a 
> worker.  If a new blob is ready the worker should be restarted.  Otherwise 
> there is no way to pick up the contents of the newly downloaded blob.
> STORM-2438 already sets the ground work for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2777) The number of ackers doesn't default to the number of workers

2017-10-16 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2777:
--
Labels: pull-request-available  (was: )

> The number of ackers doesn't default to the number of workers
> -
>
> Key: STORM-2777
> URL: https://issues.apache.org/jira/browse/STORM-2777
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>  Labels: pull-request-available
> Attachments: Screen Shot 2017-10-13 at 9.53.45 PM.png
>
>
> It's a bug from the code change of https://github.com/apache/storm/pull/2325. 
> The number of ackers doesn't default to the number of workers when not on a 
> RAS cluster.
> [^Screen Shot 2017-10-13 at 9.53.45 PM.png]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2780) MetricsConsumer record unnecessary timestamp

2017-10-17 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2780:
--
Labels: pull-request-available  (was: )

> MetricsConsumer record unnecessary timestamp
> 
>
> Key: STORM-2780
> URL: https://issues.apache.org/jira/browse/STORM-2780
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-client
>Reporter: Xin Liu
>Assignee: Xin Liu
>  Labels: pull-request-available
>
> In a topology I could call 
> conf.registerMetricsConsumer(LoggingMetricsConsumer.class,2);
> to generate a file named worker.log.metrics with metrics data, like this:
> 2017-10-17 10:16:13,272 307174   1508206573   host1:6702 16:count 
>   __fail-count{}
> 2017-10-17 10:16:13,272 307174   1508206573   host1:6702 16:count 
>   __emit-count{default=41700}
> 2017-10-17 10:16:13,272 307174   1508206573   host1:6702 16:count 
>   __execute-count {split:default=41700}
> 2017-10-17 10:29:20,898 126900   1508207360   host1:6702 29:spout 
>   __ack-count {}
> 2017-10-17 10:29:20,906 126908   1508207360   host1:6702 29:spout 
>   __sendqueue {sojourn_time_ms=0.0, write_pos=299526, 
> read_pos=299526, overflow=1, arrival_rate_secs=2643.9024390243903, 
> capacity=1024, population=0}
> But it records both the date-and-time timestamp (2017-10-17 10:29:20,906 126908) 
> and the seconds-only timestamp (1508207360); I think the seconds-only timestamp 
> should be dropped because it's redundant.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2546) Kafka spout can stall / get stuck due to edge case with failing tuples

2017-10-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2546:
--
Labels: pull-request-available  (was: )

> Kafka spout can stall / get stuck due to edge case with failing tuples
> --
>
> Key: STORM-2546
> URL: https://issues.apache.org/jira/browse/STORM-2546
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.x
>Reporter: Prasanna Ranganathan
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The mechanism for replaying a failed tuple involves seeking the kafka 
> consumer to the failing offset and then re-emitting it into the topology. A 
> tuple, when emitted the first time, will have an entry created in 
> OffsetManager. This entry will be removed only after the tuple is 
> successfully acknowledged and its offset successfully committed. Till then, 
> commits for offsets beyond the failing offset for that TopicPartition will be 
> blocked.
> It is possible that when the spout seeks the consumer to the failing offset, 
> the corresponding kafka message is not returned in the poll response. This 
> can happen due to that offset being deleted or compacted away. In this 
> scenario that partition will be blocked from committing and progressing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2779) NPE on shutting down WindowedBoltExecutor

2017-10-16 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2779:
--
Labels: pull-request-available  (was: )

> NPE on shutting down WindowedBoltExecutor
> -
>
> Key: STORM-2779
> URL: https://issues.apache.org/jira/browse/STORM-2779
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 2.0.0, 1.2.0, 1.1.2
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
>  Labels: pull-request-available
>
> STORM-2724 introduced a bug in WindowedBoltExecutor: an NPE is thrown when 
> shutting down a WindowedBoltExecutor whose waterMarkEventGenerator field is 
> null.
> https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/topology/WindowedBoltExecutor.java#L330
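> A minimal sketch of the guard (the shutdown() call is assumed from the cleanup path 
> the link above points at; treat it as illustrative):
> {code:java}
> // Only shut the generator down if the executor was actually prepared with one.
> if (waterMarkEventGenerator != null) {
>     waterMarkEventGenerator.shutdown();
> }
> {code}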



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (STORM-2629) Can't build site on Windows due to Nokogiri failing to install

2017-09-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160046#comment-16160046
 ] 

ASF GitHub Bot commented on STORM-2629:
---

GitHub user srdo opened a pull request:

https://github.com/apache/storm-site/pull/1

STORM-2629: Upgrade to latest github-pages to allow Windows build

See https://issues.apache.org/jira/browse/STORM-2629

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/srdo/storm-site STORM-2629

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm-site/pull/1.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1


commit 5601e679786871cff4affd6a98e07f74c7b0a6cd
Author: Stig Rohde Døssing 
Date:   2017-09-09T18:34:19Z

STORM-2629: Upgrade to latest github-pages to allow Windows build




> Can't build site on Windows due to Nokogiri failing to install
> --
>
> Key: STORM-2629
> URL: https://issues.apache.org/jira/browse/STORM-2629
> Project: Apache Storm
>  Issue Type: Bug
>  Components: asf-site
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
> Attachments: STORM-2629.patch
>
>
> I'm using Windows 10's bash support, and I'm having some trouble building the 
> site since Nokogiri won't install. 
> {code}
> Running 'configure' for libxml2 2.9.2... ERROR, review
> '/tmp/bundler20170714-31-159r6j1nokogiri-1.6.7.2/gems/nokogiri-1.6.7.2/ext/nokogiri/tmp/x86_64-pc-linux-gnu/ports/libxml2/2.9.2/configure.log'
> to see what happened. Last lines are:
> 
> checking build system type... ./config.guess: line 4: $'\r': command not found
> ./config.guess: line 6: $'\r': command not found
> ./config.guess: line 33: $'\r': command not found
> {code}
> Upgrading Nokogiri fixes this issue, so I'd like to upgrade the gemfile to 
> the latest version of github-pages, i.e. run "bundler update". As far as I 
> can tell, we only need to make a small number of changes to get it working.
> * It seems like the meaning of the {{page}} variable in a layout has changed 
> in Jekyll. _layouts/about.html should use {{layout}} to refer to its own 
> variables instead of {{page}} (which would belong to the concrete page being 
> rendered). The other layouts don't refer to their own front matter, so there 
> shouldn't be any issue there
> * Jekyll has made redcarpet an optional dependency, so the gemfile should 
> list that dependency explicitly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2693) Topology submission or kill takes too much time when topologies grow to a few hundred

2017-09-13 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2693:
--
Labels: pull-request-available  (was: )

> Topology submission or kill takes too much time when topologies grow to a few 
> hundred
> -
>
> Key: STORM-2693
> URL: https://issues.apache.org/jira/browse/STORM-2693
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Affects Versions: 0.9.6, 1.0.2, 1.1.0, 1.0.3
>Reporter: Yuzhao Chen
>  Labels: pull-request-available
> Attachments: 2FA30CD8-AF15-4352-992D-A67BD724E7FB.png, 
> D4A30D40-25D5-4ACF-9A96-252EBA9E6EF6.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Now, for a storm cluster with 40 hosts [with 32 cores/128G memory] and 
> hundreds of topologies, nimbus submission and killing take minutes to finish. 
> For example, for a cluster with 300 topologies, it takes about 8 minutes to 
> submit a topology, which seriously affects our efficiency.
> So, I checked the nimbus code and found two factors that affect nimbus 
> submission/killing time for a scheduling round:
> * reading existing assignments from zookeeper for every topology [takes 
> about 4 seconds for a 300-topology cluster]
> * reading all the workers' heartbeats and updating the state in the nimbus 
> cache [takes about 30 seconds for a 300-topology cluster]
> The key here is that Storm currently uses zookeeper to collect heartbeats [not 
> RPC], and also keeps the physical plan [assignments] in zookeeper, which could 
> be kept entirely local to nimbus.
> So, I think we should make some changes to storm's heartbeats and assignments 
> management.
> For assignment improvements:
> 1. nimbus will keep the assignments on local disk
> 2. on restart or an HA leader transition, nimbus will recover assignments from 
> zk to local disk
> 3. nimbus will tell each supervisor its assignment through RPC every 
> scheduling round
> 4. supervisors will sync assignments at a fixed interval
> For heartbeat improvements:
> 1. workers will report whether their executors are healthy to the supervisor 
> at a fixed interval
> 2. supervisors will report worker heartbeats to nimbus at a fixed interval
> 3. if a supervisor dies, it will tell nimbus through a runtime hook, 
> or nimbus will discover it by checking whether the supervisor is still alive
> 4. the supervisor will decide whether a worker is running OK or is invalid, 
> and will tell nimbus which executors of every topology are OK



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2675) KafkaTridentSpoutOpaque not committing offsets to Kafka

2017-09-13 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2675:
--
Labels: pull-request-available  (was: )

> KafkaTridentSpoutOpaque not committing offsets to Kafka
> ---
>
> Key: STORM-2675
> URL: https://issues.apache.org/jira/browse/STORM-2675
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.1.0
>Reporter: Preet Puri
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Every time I restart the topology the spout picks up from the earliest message, 
> even though the poll strategy is set to UNCOMMITTED_EARLIEST.  I looked at Kafka's 
> __consumer_offsets topic to see if the spout (consumer) is committing the offsets, 
> but did not find any commits. I am not even able to locate the code in the 
> KafkaTridentSpoutEmitter class where the commits are updated.
> conf.put(Config.TOPOLOGY_DEBUG, true);
> conf.put(Config.TOPOLOGY_WORKERS, 1);
> conf.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 4); // tried with 1 as well
> conf.put(Config.TRANSACTIONAL_ZOOKEEPER_ROOT, "/aggregate");
> conf.put(Config.TRANSACTIONAL_ZOOKEEPER_SERVERS, Arrays.asList(new String[]{"localhost"}));
> conf.put(Config.TRANSACTIONAL_ZOOKEEPER_PORT, 2181);
> 
> protected static KafkaSpoutConfig getPMStatKafkaSpoutConfig() {
>     ByTopicRecordTranslator byTopic =
>         new ByTopicRecordTranslator<>((r) -> new Values(r.topic(), r.key(), r.value()),
>             new Fields(TOPIC, PARTITION_KEY, PAYLOAD), SENSOR_STREAM);
>     return new KafkaSpoutConfig.Builder<String, String>(Utils.getBrokerHosts(),
>             StringDeserializer.class, null, Utils.getKafkaEnrichedPMSTopicName())
>         .setMaxPartitionFectchBytes(10 * 1024) // 10 KB
>         .setRetry(getRetryService())
>         .setOffsetCommitPeriodMs(10_000)
>         .setFirstPollOffsetStrategy(FirstPollOffsetStrategy.UNCOMMITTED_EARLIEST)
>         .setMaxUncommittedOffsets(250)
>         .setProp("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer")
>         .setProp("schema.registry.url", "http://localhost:8081")
>         .setProp("specific.avro.reader", true)
>         .setGroupId(AGGREGATION_CONSUMER_GROUP)
>         .setRecordTranslator(byTopic)
>         .build();
> }
> 
> Stream pmStatStream =
>     topology.newStream("statStream", new KafkaTridentSpoutOpaque<>(getPMStatKafkaSpoutConfig())).parallelismHint(1);
> storm-version - 1.1.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2736) o.a.s.b.BlobStoreUtils [ERROR] Could not update the blob with key

2017-09-12 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2736:
--
Labels: pull-request-available  (was: )

> o.a.s.b.BlobStoreUtils [ERROR] Could not update the blob with key
> -
>
> Key: STORM-2736
> URL: https://issues.apache.org/jira/browse/STORM-2736
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.1.1
>Reporter: Heather McCartney
>Assignee: Heather McCartney
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Sometimes, after our topologies have been running for a while, Zookeeper does 
> not respond within an appropriate time and we see
> {code}
> 2017-08-16 10:18:38.859 o.a.s.zookeeper [INFO] ip-10-181-20-70.ec2.internal 
> lost leadership.
> 2017-08-16 10:21:31.144 o.a.s.zookeeper [INFO] ip-10-181-20-70.ec2.internal 
> gained leadership, checking if it has all the topology code locally.
> 2017-08-16 10:21:46.201 o.a.s.zookeeper [INFO] Accepting leadership, all 
> active topology found localy.
> {code}
> That's fine, and we probably need to allocate more resources. But after a new 
> leader is chosen, we then see:
> {code}
> o.a.s.b.BlobStoreUtils [ERROR] Could not update the blob with key
> {code}
> over and over.
> I can't figure out yet how to cause the conditions that lead to Zookeeper 
> becoming unresponsive, but it is possible to reproduce the {{BlobStoreUtils}} 
> error by restarting Zookeeper.
> The problem, I think, is that the loop 
> [here|https://github.com/apache/storm/blob/v1.1.1/storm-core/src/jvm/org/apache/storm/blobstore/BlobStoreUtils.java#L175]
>  never executes because the {{nimbusInfos}} list is empty. If I add a check 
> similar to 
> [this|https://github.com/apache/storm/blob/v1.1.1/storm-core/src/jvm/org/apache/storm/blobstore/BlobStoreUtils.java#L244]
>  for a node which exists but has no children, the error goes away.
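> A minimal sketch of that guard (illustrative only; nimbusInfos is the list named 
> above, and the early return stands in for whatever the surrounding loop should do 
> when the node has no children):
> {code:java}
> if (nimbusInfos == null || nimbusInfos.isEmpty()) {
>     // No nimbus currently advertises this blob in ZooKeeper, so there is nothing
>     // to sync from yet; skip instead of falling through to the error path.
>     return;
> }
> {code}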



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2758) logviewer_search page not found

2017-09-26 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2758:
--
Labels: pull-request-available  (was: )

> logviewer_search page not found
> ---
>
> Key: STORM-2758
> URL: https://issues.apache.org/jira/browse/STORM-2758
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: Screen Shot 2017-09-26 at 11.35.03 AM.png, Screen Shot 
> 2017-09-26 at 11.35.17 AM.png
>
>
> I was able to search logs using search/deepSearch. 
> For example, 
> [^Screen Shot 2017-09-26 at 11.35.17 AM.png]
> But the link doesn't work. I got 404 not found when I clicked the result.
>  [^Screen Shot 2017-09-26 at 11.35.03 AM.png]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2760) Add Blobstore Migration Scripts

2017-09-26 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2760:
--
Labels: pull-request-available  (was: )

> Add Blobstore Migration Scripts
> ---
>
> Key: STORM-2760
> URL: https://issues.apache.org/jira/browse/STORM-2760
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Kyle Nusbaum
>Assignee: Kyle Nusbaum
>  Labels: pull-request-available
>
> Add code and helper scripts for migrating active Storm clusters from a 
> locally-backed BlobStore to an HDFS-backed BlobStore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2438) on-demand resource requirement scaling

2017-09-26 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2438:
--
Labels: pull-request-available  (was: )

> on-demand resource requirement scaling
> --
>
> Key: STORM-2438
> URL: https://issues.apache.org/jira/browse/STORM-2438
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-core
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> As a first step towards true elasticity in a storm topology we propose 
> allowing rebalance to also modify the resource requirements for each 
> bolt/spout in the topology.  It will not be automatic, but it will let users 
> scale up and down the CPU/memory needed for a component.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2757) Links are broken when logviewer https port is used

2017-09-26 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2757:
--
Labels: pull-request-available  (was: )

> Links are broken when logviewer https port is used
> --
>
> Key: STORM-2757
> URL: https://issues.apache.org/jira/browse/STORM-2757
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> Some links are broken when logviewer.https.port is configured.  
> For example,  with the configuration:
> {code:java}
> logviewer.https.port: 9093
> logviewer.https.keystore.type: "JKS"
> logviewer.https.keystore.path: "/keystore-path"
> logviewer.https.keystore.password: "xx"
> logviewer.https.key.password: "xx"
> {code}
> We will get:
> [^screenshot-1.png]
> The logLink still uses the http port, which is not reachable. 
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2083) Blacklist Scheduler

2017-09-25 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2083:
--
Labels: blacklist pull-request-available scheduling  (was: blacklist 
scheduling)

> Blacklist Scheduler
> ---
>
> Key: STORM-2083
> URL: https://issues.apache.org/jira/browse/STORM-2083
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Reporter: Howard Lee
>  Labels: blacklist, pull-request-available, scheduling
>  Time Spent: 15h 10m
>  Remaining Estimate: 0h
>
> My company has gone through a fault in production in which a critical switch 
> caused an unstable network for a set of machines, with a packet loss rate of 
> 30%-50%. In such a fault, the supervisors and workers on the machines are not 
> completely dead (which would be easy to handle); instead they are still alive 
> but very unstable. They occasionally lose their heartbeat to nimbus. Nimbus, in 
> such circumstances, will still assign jobs to these machines, but will soon 
> find them invalid again, resulting in very slow convergence to a stable state.
> To deal with such unstable cases, we intend to implement a blacklist 
> scheduler, which will add the unstable nodes (supervisors, slots) to the 
> blacklist temporarily, and resume them later. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2756) STORM-2548 on 1.x-branch broke setting key/value deserializers with the now deprecated setKey/setValue methods

2017-09-24 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2756:
--
Labels: pull-request-available  (was: )

> STORM-2548 on 1.x-branch broke setting key/value deserializers with the now 
> deprecated setKey/setValue methods
> --
>
> Key: STORM-2756
> URL: https://issues.apache.org/jira/browse/STORM-2756
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>
> When STORM-2548 was backported, the setKey/setValue methods on 
> KafkaSpoutConfig.Builder were deprecated, and users were directed to use 
> setProp along with the relevant ConsumerConfig constants for setting 
> deserializers instead.
> As part of this change, the KafkaConsumerFactoryDefault switched from using 
> the KafkaConsumer(props, keyDes, valDes) constructor to using the 
> KafkaConsumer(props) constructor. Unfortunately I forgot to update the 
> KafkaSpoutConfig.Builder constructor properly, so if the user configures the 
> deserializer via either the Builder constructor parameters or 
> setKey/setValue, the setting is not put in the kafkaProps map and the 
> deserializer is not used.
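> For reference, a sketch of the setProp-based configuration users are directed to 
> (the broker address and topic are placeholders, and the exact builder shape may 
> differ slightly between versions):
> {code:java}
> KafkaSpoutConfig.Builder<String, String> builder =
>     KafkaSpoutConfig.builder("broker:9092", "topic")
>         // Configure deserializers via ConsumerConfig keys instead of setKey/setValue.
>         .setProp(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class)
>         .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
> {code}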



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2764) HDFSBlobStore leaks file system objects

2017-09-28 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2764:
--
Labels: pull-request-available  (was: )

> HDFSBlobStore leaks file system objects
> ---
>
> Key: STORM-2764
> URL: https://issues.apache.org/jira/browse/STORM-2764
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-hdfs
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> This impacts all of the releases.  Each time we create a new HDFSBlobStore 
> instance we call 
> https://github.com/apache/storm/blob/v1.0.0/external/storm-hdfs/src/main/java/org/apache/storm/hdfs/blobstore/HdfsBlobStore.java#L140
> loginUserFromKeytab.
> This results in a new Subject being created each time, which ends up causing a 
> FileSystem object to leak each time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2767) Surefire now truncates too much of the stack trace

2017-09-30 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2767:
--
Labels: pull-request-available  (was: )

> Surefire now truncates too much of the stack trace
> --
>
> Key: STORM-2767
> URL: https://issues.apache.org/jira/browse/STORM-2767
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> Surefire is truncating so much of the stack trace when tests fail that we 
> often can't easily spot the error. As an example I manually threw an NPE from 
> storm-kafka-client's KafkaSpout.commit() method, and here are the stack 
> traces with trimStackTrace enabled and disabled:
> trimmed
> {code}
> testCommitSuccessWithOffsetVoids(org.apache.storm.kafka.spout.KafkaSpoutCommitTest)
>   Time elapsed: 0.714 sec  <<< ERROR!
> java.lang.NullPointerException: This is an NPE from inside nextTuple
>   at 
> org.apache.storm.kafka.spout.KafkaSpoutCommitTest.testCommitSuccessWithOffsetVoids(KafkaSpoutCommitTest.java:87)
> {code}
> not trimmed
> {code}
> testCommitSuccessWithOffsetVoids(org.apache.storm.kafka.spout.KafkaSpoutCommitTest)
>   Time elapsed: 0.78 sec  <<< ERROR!
> java.lang.NullPointerException: This is an NPE from inside nextTuple
>   at org.apache.storm.kafka.spout.KafkaSpout.commit(KafkaSpout.java:266)
>   at 
> org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:235)
>   at 
> org.apache.storm.kafka.spout.KafkaSpoutCommitTest.testCommitSuccessWithOffsetVoids(KafkaSpoutCommitTest.java:87)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runners.Suite.runChild(Suite.java:127)
>   at org.junit.runners.Suite.runChild(Suite.java:26)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:161)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> {code}
> Note how the trimmed stack trace is also removing the trace lines from inside 
> KafkaSpout.
> As part of fixing 

[jira] [Updated] (STORM-2666) Storm-kafka-client spout can sometimes emit messages that were already committed.

2017-10-02 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2666:
--
Labels: pull-request-available  (was: )

> Storm-kafka-client spout can sometimes emit messages that were already 
> committed. 
> --
>
> Key: STORM-2666
> URL: https://issues.apache.org/jira/browse/STORM-2666
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.1.1, 1.2.0
>Reporter: Guang Du
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Under a certain heavy load, tuples that fail or time out the maximum number of 
> times are acked by the retry service, and the Kafka client spout commits once the 
> commit interval is reached. However, it seems some 'on the way' tuples fail again, 
> the retry service causes the spout to emit them again, and they are eventually 
> acked to the OffsetManager.
> In some cases there are too many such offsets, exceeding the max-uncommitted limit, 
> so org.apache.storm.kafka.spout.internal.OffsetManager#findNextCommitOffset is 
> unable to find the next commit point, and the spout will not poll that partition 
> any more.
> By the way, I've applied STORM-2549 PR#2156 from Stig Døssing to fix STORM-2625, 
> and I'm using a Python shell bolt as the processing bolt, in case this information 
> helps.
> The resulting logs are below. I'm not sure if the issue has already been 
> raised/fixed; I'd be glad if anyone could point out an existing JIRA. Thank you.
> 2017-07-27 22:23:48.398 o.a.s.k.s.KafkaSpout Thread-23-spout-executor[248 
> 248] [INFO] Successful ack for tuple message 
> [{topic-partition=kafka_bd_trigger_action-20, offset=18204, numFails=0}].
> 2017-07-27 22:23:49.203 o.a.s.k.s.i.OffsetManager 
> Thread-23-spout-executor[248 248] [WARN] topic-partition 
> [kafka_bd_trigger_action-18] has unexpected offset [16002]. Current committed 
> Offset [16003]
> Edit:
> See 
> https://issues.apache.org/jira/browse/STORM-2666?focusedCommentId=16125893=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16125893
>  for the current best guess at the root cause of this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2452) Storm Metric classes are not thread safe

2017-09-27 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2452:
--
Labels: pull-request-available  (was: )

> Storm Metric classes are not thread safe
> 
>
> Key: STORM-2452
> URL: https://issues.apache.org/jira/browse/STORM-2452
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Kyle Nusbaum
>Assignee: Kyle Nusbaum
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Classes in org.apache.storm.metric.api are not thread-safe.
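> For context, the counters in that package mutate plain fields, so concurrent updates 
> from executor threads can be lost. A hedged sketch of the problem and one possible 
> remedy (an illustration, not the committed fix):
> {code:java}
> import java.util.concurrent.atomic.AtomicLong;
>
> // Illustration of the issue: value++ on a plain long is a read-modify-write and is not
> // atomic, so concurrent increments can be dropped.
> class UnsafeCountSketch {
>     private long value = 0;
>     public void incr() { value++; }
>     public long getValueAndReset() { long v = value; value = 0; return v; }
> }
>
> // One possible remedy: back the counter with an AtomicLong.
> class SafeCountSketch {
>     private final AtomicLong value = new AtomicLong();
>     public void incr() { value.incrementAndGet(); }
>     public long getValueAndReset() { return value.getAndSet(0); }
> }
> {code}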



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2153) New Metrics Reporting API

2017-09-25 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2153:
--
Labels: pull-request-available  (was: )

> New Metrics Reporting API
> -
>
> Key: STORM-2153
> URL: https://issues.apache.org/jira/browse/STORM-2153
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> This is a proposal to provide a new metrics reporting API based on [Coda 
> Hale's metrics library | http://metrics.dropwizard.io/3.1.0/] (AKA 
> Dropwizard/Yammer metrics).
> h2. Background
> In a [discussion on the dev@ mailing list | 
> http://mail-archives.apache.org/mod_mbox/storm-dev/201610.mbox/%3ccagx0urh85nfh0pbph11pmc1oof6htycjcxsxgwp2nnofukq...@mail.gmail.com%3e]
>   a number of community and PMC members recommended replacing Storm’s metrics 
> system with a new API as opposed to enhancing the existing metrics system. 
> Some of the objections to the existing metrics API include:
> # Metrics are reported as an untyped Java object, making it very difficult to 
> reason about how to report it (e.g. is it a gauge, a counter, etc.?)
> # It is difficult to determine if metrics coming into the consumer are 
> pre-aggregated or not.
> # Storm’s metrics collection occurs through a specialized bolt, which in 
> addition to potentially affecting system performance, complicates certain 
> types of aggregation when the parallelism of that bolt is greater than one.
> In the discussion on the developer mailing list, there is growing consensus 
> for replacing Storm’s metrics API with a new API based on Coda Hale’s metrics 
> library. This approach has the following benefits:
> # Coda Hale’s metrics library is very stable, performant, well thought out, 
> and widely adopted among open source projects (e.g. Kafka).
> # The metrics library provides many existing metric types: Meters, Gauges, 
> Counters, Histograms, and more.
> # The library has a pluggable “reporter” API for publishing metrics to 
> various systems, with existing implementations for: JMX, console, CSV, SLF4J, 
> Graphite, Ganglia.
> # Reporters are straightforward to implement, and can be reused by any 
> project that uses the metrics library (i.e. would have broader application 
> outside of Storm)
> As noted earlier, the metrics library supports pluggable reporters for 
> sending metrics data to other systems, and implementing a reporter is fairly 
> straightforward (an example reporter implementation can be found here). For 
> example if someone develops a reporter based on Coda Hale’s metrics, it could 
> not only be used for pushing Storm metrics, but also for any system that used 
> the metrics library, such as Kafka.
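> As a concrete, hedged illustration of the library's model (independent of how Storm 
> would wire it in; the metric name is a placeholder), registering a typed metric and 
> attaching a reporter looks roughly like this:
> {code:java}
> import com.codahale.metrics.ConsoleReporter;
> import com.codahale.metrics.Meter;
> import com.codahale.metrics.MetricRegistry;
> import java.util.concurrent.TimeUnit;
>
> // Plain Dropwizard/Coda Hale metrics usage, shown only to illustrate the typed-metric
> // and pluggable-reporter model the proposal refers to.
> public class MetricsLibraryExample {
>     public static void main(String[] args) throws InterruptedException {
>         MetricRegistry registry = new MetricRegistry();
>         Meter emitted = registry.meter("topology.spout.emitted");
>
>         // Any Reporter implementation can be attached to the same registry.
>         ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
>             .convertRatesTo(TimeUnit.SECONDS)
>             .convertDurationsTo(TimeUnit.MILLISECONDS)
>             .build();
>         reporter.start(1, TimeUnit.SECONDS);
>
>         for (int i = 0; i < 100; i++) {
>             emitted.mark();   // record one emitted tuple
>             Thread.sleep(50);
>         }
>         reporter.stop();
>     }
> }
> {code}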
> h2. Scope of Effort
> The effort to implement a new metrics API for Storm can be broken down into 
> the following development areas:
> # Implement API for Storms internal worker metrics: latencies, queue sizes, 
> capacity, etc.
> # Implement API for user defined, topology-specific metrics (exposed via the 
> {{org.apache.storm.task.TopologyContext}} class)
> # Implement API for storm daemons: nimbus, supervisor, etc.
> h2. Relationship to Existing Metrics
> This would be a new API that would not affect the existing metrics API. Upon 
> completion, the old metrics API would presumably be deprecated, but kept in 
> place for backward compatibility.
> Internally the current metrics API uses Storm bolts for the reporting 
> mechanism. The proposed metrics API would not depend on any of Storm's 
> messaging capabilities and instead use the [metrics library's built-in 
> reporter mechanism | 
> http://metrics.dropwizard.io/3.1.0/manual/core/#man-core-reporters]. This 
> would allow users to use existing {{Reporter}} implementations which are not 
> Storm-specific, and would simplify the process of collecting metrics. 
> Compared to Storm's {{IMetricCollector}} interface, implementing a reporter 
> for the metrics library is much more straightforward (an example can be found 
> [here | 
> https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/ConsoleReporter.java].
> The new metrics capability would not use or affect the ZooKeeper-based 
> metrics used by Storm UI.
> h2. Relationship to JStorm Metrics
> [TBD]
> h2. Target Branches
> [TBD]
> h2. Performance Implications
> [TBD]
> h2. Metrics Namespaces
> [TBD]
> h2. Metrics Collected
> *Worker*
> || Namespace || Metric Type || Description ||
> *Nimbus*
> || Namespace || Metric Type || Description ||
> *Supervisor*
> || Namespace || Metric Type || Description ||
> h2. User-Defined Metrics
> [TBD]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2237) Nimbus reports bad supervisor heartbeat - unknown version or thrift exception

2017-09-27 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2237:
--
Labels: pull-request-available  (was: )

> Nimbus reports bad supervisor heartbeat - unknown version or thrift exception
> -
>
> Key: STORM-2237
> URL: https://issues.apache.org/jira/browse/STORM-2237
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Kyle Nusbaum
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2772) In the DRPCSpout class, when the fetch from the DRPC server fails, the log should return to get the DRPC request failed instead of getting the DRPC result failed

2017-10-09 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2772:
--
Labels: pull-request-available  (was: )

> In the DRPCSpout class, when the fetch from the DRPC server fails, the log 
> should return to get the DRPC request failed instead of getting the DRPC 
> result failed
> -
>
> Key: STORM-2772
> URL: https://issues.apache.org/jira/browse/STORM-2772
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 2.0.0
>Reporter: hantiantian
>  Labels: pull-request-available
>
> In the DRPCSpout class, when the fetch from the DRPC server fails, the error log 
> should say that fetching the DRPC request failed, instead of saying that fetching 
> the DRPC result failed.
> for example, in line 216 of DRPCSpout class,
>   LOG.error("Not authorized to fetch DRPC result from DRPC server", aze);
> this should be modified to 
>  LOG.error("Not authorized to fetch DRPC request from DRPC server", aze);



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2773) If a drpcserver node in cluster is down,drpc cluster won't work if we don't modify the drpc.server configuration and restart the cluster

2017-10-09 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2773:
--
Labels: pull-request-available  (was: )

> If a drpcserver node in cluster is down,drpc cluster won't work if we don't 
> modify the drpc.server configuration and restart the cluster
> 
>
> Key: STORM-2773
> URL: https://issues.apache.org/jira/browse/STORM-2773
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 2.0.0
>Reporter: liuzhaokun
>Assignee: liuzhaokun
>  Labels: pull-request-available
>
> There is a cluster which includes three nodes named storm1, storm2, and storm3. 
> There is a drpcserver on every node, and a worker has been started on storm1. 
> When storm1 went down with a hardware failure, my drpc topology stopped working 
> when I sent a request from the drpc client.
> As storm1 was down, the worker was restarted on another node, but it can't 
> initialize successfully because the call method of Adder throws a RuntimeException 
> when the drpcspout tries to connect to storm1, so the worker keeps restarting.
> In conclusion, if a drpcserver node in the cluster is down, the drpc cluster won't 
> work until we modify the drpc.server configuration and restart the cluster, but in 
> production it's difficult to restart the whole cluster.
> So I think we should catch the RuntimeException and log it, and the drpc topology 
> will work normally.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2770) Add fragmentation metrics for CPU and Memory

2017-10-04 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2770:
--
Labels: pull-request-available  (was: )

> Add fragmentation metrics for CPU and Memory
> 
>
> Key: STORM-2770
> URL: https://issues.apache.org/jira/browse/STORM-2770
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-client, storm-core
>Reporter: Kishor Patil
>Assignee: Kishor Patil
>  Labels: pull-request-available
>
> While using RAS, it is necessary to see cluster fragmentation for CPU/memory.
> It would be useful to see it both as metrics being reported and as a display in 
> the UI under the cluster summary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2716) Storm-webapp tests don't work on Windows

2017-10-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2716:
--
Labels: pull-request-available  (was: )

> Storm-webapp tests don't work on Windows
> 
>
> Key: STORM-2716
> URL: https://issues.apache.org/jira/browse/STORM-2716
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
> Environment: Windows 10
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>
> Several storm-webapp tests don't work on Windows because file paths like 
> "/tmp" are used, and paths are sometimes constructed by String concatenation 
> instead of using Path.resolve. The logviewer also doesn't seem to work on 
> Windows, probably for the same reason.
> I think there might be a few similar issues in other parts of the code, which 
> I'd like to also fix as part of this. 
> I haven't checked whether this is a problem in 1.x, but it's likely.
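> A small hedged example of the path-construction point (paths and names are 
> illustrative only):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
>
> // String concatenation bakes in '/' and absolute roots like "/tmp", which breaks on
> // Windows; Path.resolve and createTempDirectory stay portable.
> public class PortablePathsExample {
>     public static void main(String[] args) throws IOException {
>         String fragile = "/tmp" + "/" + "worker-artifacts" + "/" + "worker.log";
>
>         Path tempRoot = Files.createTempDirectory("storm-test");
>         Path portable = tempRoot.resolve("worker-artifacts").resolve("worker.log");
>
>         System.out.println(fragile);
>         System.out.println(portable);
>     }
> }
> {code}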



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2775) Improve KafkaPartition Metric Names

2017-10-12 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2775:
--
Labels: pull-request-available  (was: )

> Improve KafkaPartition Metric Names
> ---
>
> Key: STORM-2775
> URL: https://issues.apache.org/jira/browse/STORM-2775
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka
>Affects Versions: 1.1.2
>Reporter: Kevin Conaway
>  Labels: pull-request-available
>
> The _storm-kafka_ `KafkaSpout` emits a metric group called _kafkaPartition_
> These metric names are prefixed with 
> _Partition{host=some.broker.host.mycompany.com:9092,-topic=some/topic/name,-partition=40}
> _
> This makes for ugly, difficult-to-discover metrics on systems like Graphite.
> The metric prefix should match the metrics emitted by the _kafkaOffset_ 
> metric group that look like:
> _topicName/partition__



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2706) Nimbus stuck in exception and does not fail fast

2017-10-18 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2706:
--
Labels: nimbus pull-request-available  (was: nimbus)

> Nimbus stuck in exception and does not fail fast
> 
>
> Key: STORM-2706
> URL: https://issues.apache.org/jira/browse/STORM-2706
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: Bijan Fahimi Shemrani
>Assignee: Stig Rohde Døssing
>  Labels: nimbus, pull-request-available
>
> We experience a problem in nimbus which leads it to get stuck in a retry and 
> fail loop. When I manually restart the nimbus it works again as expected. 
> However, it would be great if nimbus would shut down so our monitoring can 
> automatically restart the nimbus. 
> The nimbus log. 
> {noformat}
> 24.8.2017 15:39:1913:39:19.804 [pool-13-thread-51] ERROR 
> org.apache.storm.thrift.server.AbstractNonblockingServer$FrameBuffer - 
> Unexpected throwable while invoking!
> 24.8.2017 
> 15:39:19org.apache.storm.shade.org.apache.zookeeper.KeeperException$NoNodeException:
>  KeeperErrorCode = NoNode for /storm/leader-lock
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:230)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:219)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:109)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:216)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:207)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:40)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.recipes.locks.LockInternals.getSortedChildren(LockInternals.java:151)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.recipes.locks.LockInternals.getParticipantNodes(LockInternals.java:133)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.shade.org.apache.curator.framework.recipes.leader.LeaderLatch.getLeader(LeaderLatch.java:453)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown 
> Source) ~[?:?]
> 24.8.2017 15:39:19at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_131]
> 24.8.2017 15:39:19at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_131]
> 24.8.2017 15:39:19at 
> clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) 
> ~[clojure-1.7.0.jar:?]
> 24.8.2017 15:39:19at 
> clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:313) 
> ~[clojure-1.7.0.jar:?]
> 24.8.2017 15:39:19at 
> org.apache.storm.zookeeper$zk_leader_elector$reify__1043.getLeader(zookeeper.clj:296)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown 
> Source) ~[?:?]
> 24.8.2017 15:39:19at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_131]
> 24.8.2017 15:39:19at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_131]
> 24.8.2017 15:39:19at 
> clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) 
> ~[clojure-1.7.0.jar:?]
> 24.8.2017 15:39:19at 
> clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:313) 
> ~[clojure-1.7.0.jar:?]
> 24.8.2017 15:39:19at 
> org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__10780.getLeader(nimbus.clj:2412)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 24.8.2017 15:39:19at 
> org.apache.storm.generated.Nimbus$Processor$getLeader.getResult(Nimbus.java:3944)
>  ~[storm-core-1.1.1.jar:1.1.1]
> 

[jira] [Updated] (STORM-2820) validateTopologyWorkerMaxHeapSizeConfigs function never picks up the value set by nimbus

2017-11-15 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2820:
--
Labels: pull-request-available  (was: )

> validateTopologyWorkerMaxHeapSizeConfigs function never picks up the value 
> set by nimbus 
> -
>
> Key: STORM-2820
> URL: https://issues.apache.org/jira/browse/STORM-2820
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>  Labels: pull-request-available
>
> https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L2548
> {code:java}
> @VisibleForTesting
> static void validateTopologyWorkerMaxHeapSizeConfigs(
>         Map<String, Object> stormConf, StormTopology topology) {
>     double largestMemReq = getMaxExecutorMemoryUsageForTopo(topology, stormConf);
>     double topologyWorkerMaxHeapSize =
>         ObjectReader.getDouble(stormConf.get(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB), 768.0);
>     if (topologyWorkerMaxHeapSize < largestMemReq) {
>         throw new IllegalArgumentException(
>             "Topology will not be able to be successfully scheduled: Config "
>             + "TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB="
>             + topologyWorkerMaxHeapSize
>             + " < " + largestMemReq + " (Largest memory requirement of a component in the topology)."
>             + " Perhaps set TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB to a larger amount");
>     }
> }
> {code}
> The topologyWorkerMaxHeapSize in the above code is either the value from the 
> topology configuration (set by the topology) or 768.0. It never picks up the 
> value set by nimbus.
> A test:
> I set 
> {code:java}
> topology.worker.max.heap.size.mb: 2000.0
> {code}
> in storm.yaml.
> And I modified the WordCountTopology so that the WordCount bolt has a memory load 
> of 1024MB.
> {code:java}
> builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new 
> Fields("word")).setMemoryLoad(1024);
> {code}
> I got an error when submitting this topology. The nimbus log shows 
> {code:java}
> 2017-11-15 19:46:43.085 o.a.s.d.n.Nimbus pool-14-thread-2 [WARN] Topology 
> submission exception. (topology name='wc')
> java.lang.IllegalArgumentException: Topology will not be able to be 
> successfully scheduled: Config TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB=768.0 < 
> 1024.0 (Largest memory requirement of a component in the topology). Perhaps 
> set TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB to a larger amount
> {code}
>  
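> One way to look at the fix direction, sketched under the assumption that the 
> nimbus-side value should be consulted before falling back to the hard-coded 768.0 
> (method and variable names here are made up for illustration):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> public class ConfMergeSketch {
>     // Sketch only: merge the nimbus daemon config with the topology config before the
>     // max-heap check, so a nimbus-level topology.worker.max.heap.size.mb is seen when
>     // the topology itself sets nothing. Topology-level settings still win.
>     public static Map<String, Object> merge(Map<String, Object> nimbusConf, Map<String, Object> topoConf) {
>         Map<String, Object> merged = new HashMap<>(nimbusConf);
>         merged.putAll(topoConf);
>         return merged;
>     }
> }
> {code}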



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2815) UI HTTP server should return 403 if the user is unauthorized

2017-11-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2815:
--
Labels: pull-request-available  (was: )

> UI HTTP server should return 403 if the user is unauthorized
> 
>
> Key: STORM-2815
> URL: https://issues.apache.org/jira/browse/STORM-2815
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> The Storm UI HTTP server returns 500 for all exceptions. It would probably be 
> better to return 403 if it's an AuthorizationException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2833) Cached Netty Connections can have different keys for the same thing.

2017-11-27 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2833:
--
Labels: pull-request-available  (was: )

> Cached Netty Connections can have different keys for the same thing.
> 
>
> Key: STORM-2833
> URL: https://issues.apache.org/jira/browse/STORM-2833
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> It turns out that if you set {{storm.local.hostname}} on your supervisors, the 
> netty caching code might not work.  The issue is that when we go to add a netty 
> connection to the cache we use the host name provided by the scheduling, which 
> ultimately comes from the {{storm.local.hostname}} setting on each of the nodes.  
> But when we go to remove it from the cache, we use the resolved InetSocketAddress 
> for the destination.  If the two do not match exactly then we can close a 
> connection but not have it removed from the cache, so when we go to try and use 
> it again, the connection is already closed.
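> A hedged sketch of the mismatch, using a hypothetical normalize step so the key used 
> on insert and the key used on removal agree (the Connection type and method names are 
> placeholders, not the real netty client code):
> {code:java}
> import java.net.InetSocketAddress;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> // Illustration only: if connections are cached under the scheduler-provided hostname
> // but removed under the resolved address, the keys can differ and closed connections
> // linger in the cache. Normalizing both sides to one canonical key form avoids that.
> public class ConnectionCacheSketch {
>     static class Connection { /* placeholder for the real client connection */ }
>
>     private final ConcurrentMap<String, Connection> cache = new ConcurrentHashMap<>();
>
>     static String key(String host, int port) {
>         return host.toLowerCase() + ":" + port;
>     }
>
>     static String key(InetSocketAddress addr) {
>         return key(addr.getHostString(), addr.getPort());
>     }
>
>     void add(String schedulerHost, int port, Connection c) {
>         cache.put(key(schedulerHost, port), c);
>     }
>
>     void remove(InetSocketAddress resolved) {
>         cache.remove(key(resolved));   // same key shape as the one used on insert
>     }
> }
> {code}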



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2834) getOwnerResourceSummaries not working properly because scheduler is wrapped as BlacklistScheduler

2017-11-28 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2834:
--
Labels: pull-request-available  (was: )

> getOwnerResourceSummaries not working properly because scheduler is wrapped 
> as BlacklistScheduler
> -
>
> Key: STORM-2834
> URL: https://issues.apache.org/jira/browse/STORM-2834
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L4101
> {code:java}
> if (clusterSchedulerConfig.containsKey(theOwner)) {
>     if (scheduler instanceof ResourceAwareScheduler) {
>         Map schedulerConfig = (Map) clusterSchedulerConfig.get(theOwner);
>         if (schedulerConfig != null) {
>             ownerResourceSummary.set_memory_guarantee((double) schedulerConfig.getOrDefault("memory", 0));
>             ownerResourceSummary.set_cpu_guarantee((double) schedulerConfig.getOrDefault("cpu", 0));
>             ownerResourceSummary.set_memory_guarantee_remaining(ownerResourceSummary.get_memory_guarantee()
>                 - ownerResourceSummary.get_memory_usage());
>             ownerResourceSummary.set_cpu_guarantee_remaining(ownerResourceSummary.get_cpu_guarantee()
>                 - ownerResourceSummary.get_cpu_usage());
>         }
>     } else if (scheduler instanceof MultitenantScheduler) {
>         ownerResourceSummary.set_isolated_node_guarantee((int) clusterSchedulerConfig.getOrDefault(theOwner, 0));
>     }
> }
> {code}
> Because the scheduler is wrapped as a BlacklistScheduler 
> (https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L474),
>  these two "instanceof" checks will never be true.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2835) storm-kafka-client KafkaSpout can fail to remove all tuples from waitingToEmit

2017-11-28 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2835:
--
Labels: pull-request-available  (was: )

> storm-kafka-client KafkaSpout can fail to remove all tuples from waitingToEmit
> --
>
> Key: STORM-2835
> URL: https://issues.apache.org/jira/browse/STORM-2835
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Hugo Louro
>Assignee: Hugo Louro
>Priority: Critical
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2838) Replace log4j-over-slf4j with log4j-1.2-api

2017-11-30 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2838:
--
Labels: pull-request-available  (was: )

> Replace log4j-over-slf4j with log4j-1.2-api
> ---
>
> Key: STORM-2838
> URL: https://issues.apache.org/jira/browse/STORM-2838
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>  Labels: pull-request-available
>
> I tried to set up HdfsBlobStore and an exception shows up when I launch the 
> nimbus.
> {code:java}
> Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, 
> preempting StackOverflowError.
> {code}
> Found an explanation: https://www.slf4j.org/codes.html#log4jDelegationLoop
> This is because storm and hadoop use different logging systems:
> {code:java}
> Storm:  log4j-over-slf4j --> slf4j --> log4j2  or slf4j --> log4j2
> Hadoop:  slf4j --> log4j1.2  or   log4j1.2 
> (note: --> means redirecting)
> {code}
> When we add the hadoop common lib classpath to nimbus, log4j-over-slf4j.jar and 
> slf4j-log4j12.jar coexist. 
> One way to let storm work with hadoop is to replace log4j-over-slf4j in storm 
> with the log4j-1.2-api bridge.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2837) RAS Constraint Solver Strategy

2017-11-30 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2837:
--
Labels: pull-request-available  (was: )

> RAS Constraint Solver Strategy
> --
>
> Key: STORM-2837
> URL: https://issues.apache.org/jira/browse/STORM-2837
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> We have a use case where a user has some old native code and they need it to 
> work with storm, but sadly the code is not thread safe so they need to be 
> sure that each instance of a specific bolt is in a worker without other 
> instances of the same bolt.  It also cannot co-exist with other bolts for a 
> similar reason.  I know that this is a fairly strange use case, but to help 
> fix the issue we wrote a strategy for RAS that can do a simple search of the 
> state space trying to honor these constraints, and we thought it best to push 
> it back rather than keep it internal.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2842) Fixed links for YARN Integration

2017-12-05 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2842:
--
Labels: pull-request-available  (was: )

> Fixed links for YARN Integration
> ---
>
> Key: STORM-2842
> URL: https://issues.apache.org/jira/browse/STORM-2842
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2844) KafkaSpout Throws IllegalStateException After Committing to Kafka When First Poll Strategy Set to EARLIEST

2017-12-16 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2844:
--
Labels: pull-request-available  (was: )

> KafkaSpout Throws IllegalStateException After Committing to Kafka When First 
> Poll Strategy Set to EARLIEST
> --
>
> Key: STORM-2844
> URL: https://issues.apache.org/jira/browse/STORM-2844
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Hugo Louro
>Assignee: Hugo Louro
>Priority: Critical
>  Labels: pull-request-available
>
> This 
> [code|https://github.com/apache/storm/blob/1.x-branch/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java#L407-L409],
>  which was committed to fix 
> [STORM-2666|https://issues.apache.org/jira/browse/STORM-2666] throws 
> IllegalStateException when the KafkaSpout commits to Kafka and is restarted 
> with the same consumer group id and first poll strategy is set to EARLIEST.
> For example consider the following sequence:
> # KafkaSpout with consumer_group_id=TEST polls and commits offsets 1-5 
> # KafkaSpout with consumer_group_id=TEST is restarted with first poll 
> strategy set to EARLIEST
> ==> IllegalStateException will be thrown
> This bug could be a blocker. I am setting it to Critical because assigning a 
> different consumer id serves as a workaround to the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2860) Add Kerberos support to Solr bolt

2017-12-17 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2860:
--
Labels: pull-request-available  (was: )

> Add Kerberos support to Solr bolt
> -
>
> Key: STORM-2860
> URL: https://issues.apache.org/jira/browse/STORM-2860
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>  Labels: pull-request-available
>
> Update Solr bolt to work with Kerberized Solr clusters. 
> Instructions for the SolrJ clients are here:
> https://lucene.apache.org/solr/guide/6_6/kerberos-authentication-plugin.html#KerberosAuthenticationPlugin-UsingSolrJwithaKerberizedSolr



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2859) NormalizedResources is leaking static state in tests, and has some other bugs in special cases where 0 of a resource is available

2017-12-15 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2859:
--
Labels: pull-request-available  (was: )

> NormalizedResources is leaking static state in tests, and has some other bugs 
> in special cases where 0 of a resource is available
> -
>
> Key: STORM-2859
> URL: https://issues.apache.org/jira/browse/STORM-2859
> Project: Apache Storm
>  Issue Type: Sub-task
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2857) Loosen some constraints on validation to support running topologies of older version

2017-12-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2857:
--
Labels: pull-request-available  (was: )

> Loosen some constraints on validation to support running topologies of older 
> version
> 
>
> Key: STORM-2857
> URL: https://issues.apache.org/jira/browse/STORM-2857
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> We want topologies of an older version (for example 0.10) to be able to run on a 
> 2.x cluster. Users might submit topologies with 0.10 configs which include 
> "backtype.storm" class names. They will fail the validation process on the nimbus 
> because the "backtype.storm.XXX" class is not found. For now it's really just 
> about the TOPOLOGY_SCHEDULER_STRATEGY config. We might want to loosen the 
> constraints for @isImplementationOfClass so that if "backtype.storm.XXX" is not 
> found, we try "org.apache.storm.XXX", and if that passes, we let it pass the 
> validation.
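> A hedged sketch of what such a fallback could look like (the helper name is made up; 
> the real change would live in the config validation annotations):
> {code:java}
> // Sketch only: when a configured class name uses the pre-1.0 "backtype.storm" package,
> // retry the lookup under "org.apache.storm" before failing validation.
> public class LegacyClassNameSketch {
>     public static Class<?> loadWithLegacyFallback(String className) throws ClassNotFoundException {
>         try {
>             return Class.forName(className);
>         } catch (ClassNotFoundException e) {
>             if (className.startsWith("backtype.storm.")) {
>                 return Class.forName(className.replaceFirst("^backtype\\.storm\\.", "org.apache.storm."));
>             }
>             throw e;
>         }
>     }
> }
> {code}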



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2690) resurrect invocation of ISupervisor.assigned() & make Supervisor.launchDaemon() accessible

2017-12-18 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2690:
--
Labels: pull-request-available  (was: )

> resurrect invocation of ISupervisor.assigned() & make 
> Supervisor.launchDaemon() accessible
> --
>
> Key: STORM-2690
> URL: https://issues.apache.org/jira/browse/STORM-2690
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.1.0, 1.0.3, 1.0.4, 1.1.1, 1.1.2, 1.0.5
>Reporter: Erik Weathers
>Assignee: Erik Weathers
>Priority: Minor
>  Labels: pull-request-available
>
> As [discussed in 
> STORM-2018|https://issues.apache.org/jira/browse/STORM-2018?focusedCommentId=16108307=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16108307],
>  that change subtly broke the storm-mesos integration framework because of 
> the removal of the invocation of 
> [{{ISupervisor.assigned()}}|https://github.com/apache/storm/blob/v1.0.4/storm-core/src/jvm/org/apache/storm/scheduler/ISupervisor.java#L44].
> So this ticket is tracking the reinstatement of that invocation from the 
> supervisor core code.
> Also, the 
> [{{launchDaemon()}}|https://github.com/apache/storm/blob/v1.0.4/storm-core/src/jvm/org/apache/storm/daemon/supervisor/Supervisor.java#L248]
>  method of the {{Supervisor}} is not public, so we had to use reflection to 
> allow calling it from the storm-mesos integration.  That should be changed 
> too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2858) Fix worker-launcher build

2017-12-15 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2858:
--
Labels: pull-request-available  (was: )

> Fix worker-launcher build
> -
>
> Key: STORM-2858
> URL: https://issues.apache.org/jira/browse/STORM-2858
> Project: Apache Storm
>  Issue Type: Sub-task
>  Components: storm-core
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>
> I got an error when building with -Pnative because GCC has marked asprintf as 
> a function where you shouldn't ignore the return value. I'm guessing it's the 
> same issue that's preventing Travis from building storm-core.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2854) Expose IEventLogger to make event logging pluggable

2017-12-13 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2854:
--
Labels: pull-request-available  (was: )

> Expose IEventLogger to make event logging pluggable
> ---
>
> Key: STORM-2854
> URL: https://issues.apache.org/jira/browse/STORM-2854
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-client
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
>  Labels: pull-request-available
>
> From the start, the "Event Logger" feature was designed with a pluggable 
> implementation in mind, which is why IEventLogger exists, but at that time we had 
> no actual use case other than just writing events to a file, so we simplified the 
> case.
> Now we have a use case which also writes events to a file, but with awareness of 
> the structure of the event so that it can be easily parsed by a log feeder. In 
> this case we would want a custom IEventLogger to represent the event in our own 
> format.
> There's another issue as well: EventInfo has `ts` which stores epoch but it's 
> defined as String, not long.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2862) More flexible logging in multilang (Python, Ruby, JS)

2017-12-20 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2862:
--
Labels: pull-request-available  (was: )

> More flexible logging in multilang (Python, Ruby, JS)
> -
>
> Key: STORM-2862
> URL: https://issues.apache.org/jira/browse/STORM-2862
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-client, storm-multilang
>Affects Versions: 2.0.0, 1.1.1
>Reporter: Heather McCartney
>Assignee: Heather McCartney
>Priority: Trivial
>  Labels: pull-request-available
>
> We're running a Storm topology written in Python, using storm.py from 
> storm-multilang. As well as human-readable logs, the topology is also 
> configured to write JSON logs which are sent to ELK.
> At the moment, when storm-core receives a "log" command, it outputs the pid, 
> component name, and the message it received, like so:
> {{ShellLog pid:, name: }}
> The code that does this is (currently) in [ShellBolt line 
> 254|https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/task/ShellBolt.java#L254]
>  and [ShellSpout line 
> 227|https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/spout/ShellSpout.java#L227].
> As well as the pid and component name, it would be great to have the task ID, 
> tuple ID, and the ID of the originating tuple - but this would make parsing 
> the string even more laborious than it is now, and would make the default log 
> message too long. 
> Would it be possible to put contextual information like this in the 
> [ThreadContext|https://logging.apache.org/log4j/2.x/manual/thread-context.html]
>  instead? Then our JSON layout could read from the context instead of parsing 
> the string, and human-readable logs could use "%mdc" in the PatternLayout 
> format string.
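> For illustration, the ThreadContext idea could look roughly like this on the Java side 
> (the keys and the helper are placeholders, not a proposed patch):
> {code:java}
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> import org.apache.logging.log4j.ThreadContext;
>
> // Sketch: put contextual fields into the Log4j 2 ThreadContext before logging the
> // multilang message, so a JSON layout or a "%mdc" pattern can pick them up without
> // parsing the message string.
> public class ShellLogContextSketch {
>     private static final Logger LOG = LogManager.getLogger(ShellLogContextSketch.class);
>
>     static void logShellMessage(long pid, String componentName, int taskId, String msg) {
>         ThreadContext.put("pid", Long.toString(pid));
>         ThreadContext.put("component", componentName);
>         ThreadContext.put("taskId", Integer.toString(taskId));
>         try {
>             LOG.info(msg);
>         } finally {
>             ThreadContext.remove("pid");
>             ThreadContext.remove("component");
>             ThreadContext.remove("taskId");
>         }
>     }
> }
> {code}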



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2525) Fix flaky integration tests

2017-11-11 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2525:
--
Labels: pull-request-available  (was: )

> Fix flaky integration tests
> ---
>
> Key: STORM-2525
> URL: https://issues.apache.org/jira/browse/STORM-2525
> Project: Apache Storm
>  Issue Type: Bug
>  Components: integration-test
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The integration tests fail fairly often, e.g. 
> https://travis-ci.org/apache/storm/jobs/233690012. The tests should be fixed 
> so they're more reliable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2796) Flux: Provide means for invoking static factory methods

2017-11-10 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2796:
--
Labels: pull-request-available  (was: )

> Flux: Provide means for invoking static factory methods
> ---
>
> Key: STORM-2796
> URL: https://issues.apache.org/jira/browse/STORM-2796
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: Flux
>Affects Versions: 2.0.0, 1.1.1, 1.2.0, 1.0.6
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
>  Labels: pull-request-available
>
> Provide a means to invoke static factory methods for flux components. E.g:
> Java signature:
> {code}
> public static MyComponent newInstance(String... params)
> {code}
> Yaml:
> {code}
> className: "org.apache.storm.flux.test.MyComponent"
> factory: "newInstance"
> factoryArgs: ["a", "b", "c"]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2811) Nimbus may throw NPE if the same topology is killed multiple times, and the integration test kills the same topology multiple times

2017-11-12 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2811:
--
Labels: pull-request-available  (was: )

> Nimbus may throw NPE if the same topology is killed multiple times, and the 
> integration test kills the same topology multiple times
> ---
>
> Key: STORM-2811
> URL: https://issues.apache.org/jira/browse/STORM-2811
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>
> {quote}
> 2017-11-12 08:45:50.353 o.a.s.d.n.Nimbus pool-14-thread-47 [WARN] Kill 
> topology exception. (topology name='SlidingWindowTest-window20-slide10')
> java.lang.NullPointerException: null
>   at 
> org.apache.storm.cluster.IStormClusterState.getTopoId(IStormClusterState.java:171)
>  ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
>   at 
> org.apache.storm.daemon.nimbus.Nimbus.tryReadTopoConfFromName(Nimbus.java:1970)
>  ~[storm-server-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
>   at 
> org.apache.storm.daemon.nimbus.Nimbus.killTopologyWithOpts(Nimbus.java:2760) 
> ~[storm-server-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
>   at 
> org.apache.storm.generated.Nimbus$Processor$killTopologyWithOpts.getResult(Nimbus.java:3226)
>  ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
>   at 
> org.apache.storm.generated.Nimbus$Processor$killTopologyWithOpts.getResult(Nimbus.java:3210)
>  ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[libthrift-0.10.0.jar:0.10.0]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[libthrift-0.10.0.jar:0.10.0]
>   at 
> org.apache.storm.security.auth.SimpleTransportPlugin$SimpleWrapProcessor.process(SimpleTransportPlugin.java:167)
>  ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
>   at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:518)
>  ~[libthrift-0.10.0.jar:0.10.0]
>   at org.apache.thrift.server.Invocation.run(Invocation.java:18) 
> ~[libthrift-0.10.0.jar:0.10.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_144]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_144]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2805) Clean up configs in topology builders

2017-11-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2805:
--
Labels: pull-request-available  (was: )

> Clean up configs in topology builders
> -
>
> Key: STORM-2805
> URL: https://issues.apache.org/jira/browse/STORM-2805
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: pull-request-available
>
> There are a lot of topology builders that are storing an array of maps for 
> configs.  But then the array gets smashed together into a single map when the 
> topology is actually submitted.  This makes the code really confusing and is 
> completely unnecessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2827) Logviewer search returns incorrect logviewerUrl

2017-11-20 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2827:
--
Labels: pull-request-available  (was: )

> Logviewer search returns incorrect logviewerUrl
> ---
>
> Key: STORM-2827
> URL: https://issues.apache.org/jira/browse/STORM-2827
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> Code in LogviewerLogSearchHandler
> {code:java}
>     @VisibleForTesting
>     String urlToMatchCenteredInLogPage(byte[] needle, Path canonicalPath, int offset, Integer port) throws UnknownHostException {
>         final String host = Utils.hostname();
>         final Path truncatedFilePath = truncatePathToLastElements(canonicalPath, 3);
>         Map<String, Object> parameters = new HashMap<>();
>         parameters.put("file", truncatedFilePath.toString());
>         parameters.put("start", Math.max(0, offset - (LogviewerConstant.DEFAULT_BYTES_PER_PAGE / 2) - (needle.length / -2)));
>         parameters.put("length", LogviewerConstant.DEFAULT_BYTES_PER_PAGE);
>         return UrlBuilder.build(String.format("http://%s:%d/api/v1/log", host, port), parameters);
>     }
>
>     @VisibleForTesting
>     String urlToMatchCenteredInLogPageDaemonFile(byte[] needle, Path canonicalPath, int offset, Integer port) throws UnknownHostException {
>         final String host = Utils.hostname();
>         final Path truncatedFilePath = truncatePathToLastElements(canonicalPath, 1);
>         Map<String, Object> parameters = new HashMap<>();
>         parameters.put("file", truncatedFilePath.toString());
>         parameters.put("start", Math.max(0, offset - (LogviewerConstant.DEFAULT_BYTES_PER_PAGE / 2) - (needle.length / -2)));
>         parameters.put("length", LogviewerConstant.DEFAULT_BYTES_PER_PAGE);
>         return UrlBuilder.build(String.format("http://%s:%d/api/v1/daemonlog", host, port), parameters);
>     }
> {code}
> only returns an http URL. This URL will be invalid if a logviewer https port is 
> configured, in which case the http URL will not be found.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2826) KafkaSpoutConfig.builder doesn't set key/value deserializer properties in storm-kafka-client

2017-11-19 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2826:
--
Labels: pull-request-available  (was: )

> KafkaSpoutConfig.builder doesn't set key/value deserializer properties in 
> storm-kafka-client
> 
>
> Key: STORM-2826
> URL: https://issues.apache.org/jira/browse/STORM-2826
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Blocker
>  Labels: pull-request-available
>
> STORM-2548 replaced the KafkaSpoutConfig.builder() implementations with ones 
> that don't set the key/value deserializer fields in KafkaSpoutConfig, but 
> instead just sets the corresponding property in the kafkaProps map. This is a 
> breaking change for applications that assume those properties are set after 
> the builder is created.
> Code like the following would break.
> {quote}
> this.keyDeserializer = config.getKeyDeserializer().getClass();
> this.valueDeserializer = config.getValueDeserializer().getClass();
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2825) storm-kafka-client configuration fails with a ClassCastException if "enable.auto.commit" is present in the consumer config map, and the value is a string

2017-11-19 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2825:
--
Labels: pull-request-available  (was: )

> storm-kafka-client configuration fails with a ClassCastException if 
> "enable.auto.commit" is present in the consumer config map, and the value is 
> a string
> -
>
> Key: STORM-2825
> URL: https://issues.apache.org/jira/browse/STORM-2825
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Blocker
>  Labels: pull-request-available
>
> {quote}
> Exception in thread "main" java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Boolean
> at org.apache.storm.kafka.spout.KafkaSpoutConfig.setAutoCommitMode(KafkaSpoutConfig.java:721)
> at org.apache.storm.kafka.spout.KafkaSpoutConfig.<init>(KafkaSpoutConfig.java:97)
> at org.apache.storm.kafka.spout.KafkaSpoutConfig$Builder.build(KafkaSpoutConfig.java:671)
> {quote}
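> The shape of the problem, sketched (not the actual fix): the consumer config map may 
> carry the value as a Boolean or as a String, so the read has to tolerate both:
> {code:java}
> // Sketch only: tolerate both Boolean and String values for "enable.auto.commit", which
> // is what triggers the ClassCastException when the value arrives as a String.
> public class AutoCommitParseSketch {
>     static boolean isAutoCommitEnabled(Object configValue) {
>         if (configValue == null) {
>             return false;
>         }
>         if (configValue instanceof Boolean) {
>             return (Boolean) configValue;
>         }
>         return Boolean.parseBoolean(configValue.toString());
>     }
> }
> {code}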



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2829) Logviewer deepSearch not working

2017-11-21 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2829:
--
Labels: pull-request-available  (was: )

> Logviewer deepSearch not working
> 
>
> Key: STORM-2829
> URL: https://issues.apache.org/jira/browse/STORM-2829
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> 2017-11-21 21:06:19.369 o.e.j.s.HttpChannel qtp1471948789-17 [WARN] 
> /api/v1/deepSearch/wc-1-1511188542
> javax.servlet.ServletException: java.lang.RuntimeException: 
> com.fasterxml.jackson.databind.JsonMappingException: Direct self-reference 
> leading to cycle (through reference chain: 
> org.apache.storm.daemon.logviewer.handler.Matched["matches"]->java.util.ArrayList[0]->java.util.HashMap["port"]->sun.nio.fs.UnixPath["fileSystem"]->sun.nio.fs.LinuxFileSystem["rootDirectories"]->sun.nio.fs.UnixPath["root"])
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489) 
> ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427) 
> ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
>  ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
>  ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
>  ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:841) 
> ~[jetty-servlet-9.4.7.v20170914.jar:9.4.7.v20170914]
> {code}
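> The reference chain above suggests a java.nio.file.Path ended up as a value 
> in the JSON result map. A hedged sketch of the kind of change that avoids the 
> cycle, converting paths and ports to plain strings/numbers before 
> serialization (key and class names below are illustrative):
> {code:java}
> import java.nio.file.Path;
> import java.util.HashMap;
> import java.util.Map;
>
> class MatchEntry {
>     static Map<String, Object> toJsonFriendly(Path logFile, int port) {
>         Map<String, Object> m = new HashMap<>();
>         // Only JSON-friendly types go into the map handed to Jackson.
>         m.put("fileName", logFile.getFileName().toString());
>         m.put("port", port);
>         return m;
>     }
> }
> {code}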



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2814) Logviewer HTTP server should return 403 instead of 200 if the user is unauthorized

2017-11-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2814:
--
Labels: pull-request-available  (was: )

> Logviewer HTTP server should return 403 instead of 200 if the user is 
> unauthorized
> --
>
> Key: STORM-2814
> URL: https://issues.apache.org/jira/browse/STORM-2814
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> public static Response buildResponseUnautohrizedUser(String user) {
> String entity = buildUnauthorizedUserHtml(user);
> return Response.status(OK)
> .entity(entity)
> .type(MediaType.TEXT_HTML_TYPE)
> .build();
> }
> {code}
> It returns OK which is confusing.
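> A minimal sketch of the suggested change, keeping the existing HTML body but 
> reporting 403 (assuming javax.ws.rs.core.Response is in use as above):
> {code:java}
> public static Response buildResponseUnautohrizedUser(String user) {
>     String entity = buildUnauthorizedUserHtml(user);
>     return Response.status(Response.Status.FORBIDDEN)
>         .entity(entity)
>         .type(MediaType.TEXT_HTML_TYPE)
>         .build();
> }
> {code}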



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2809) Integration test is failing consistently and topologies sometimes fail to start workers

2017-11-13 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2809:
--
Labels: pull-request-available  (was: )

> Integration test is failing consistently and topologies sometimes fail to 
> start workers
> ---
>
> Key: STORM-2809
> URL: https://issues.apache.org/jira/browse/STORM-2809
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Robert Joseph Evans
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> The integration test has been failing fairly consistently since 
> https://github.com/apache/storm/pull/2363. I tried running the test outside a 
> VM with a locally installed Storm setup, and it has failed every time for me.
> Most runs seem to fail in ways that make it look like the integration test is 
> just flaky (e.g. tuple windows not matching the calculated window), but in at 
> least a few tests I saw the topology get submitted to Nimbus followed by 
> about 3 minutes of nothing happening. The workers never started and the 
> supervisor didn't seem aware of the scheduling. The only evidence that the 
> topology was submitted was in the Nimbus log. This still happens even if the 
> test topologies are killed with a timeout of 0, so there should be slots free 
> for the next test immediately.
> I tried reverting https://github.com/apache/storm/pull/2363 and it seems to 
> make the integration test pass much more often. Over 5 runs there was still 
> an instance of a supervisor failing to start the workers, but the other 4 
> passed.
> We should try to fix whatever is causing the supervisor to fail to start 
> workers, and get the integration test more stable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2535) test-reset-timeout is flaky. Replace with a more reliable test.

2017-11-11 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2535:
--
Labels: pull-request-available  (was: )

> test-reset-timeout is flaky. Replace with a more reliable test.
> ---
>
> Key: STORM-2535
> URL: https://issues.apache.org/jira/browse/STORM-2535
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> test-reset-timeout is flaky, because the Time.sleep calls in the test bolt 
> can race with the calls to advanceClusterTime in the main thread. Also the 
> test breaks if the spout's pending map gets rotated at an unlucky time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2792) Clean up RAS and remove possible loops

2017-11-03 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2792:
--
Labels: pull-request-available  (was: )

> Clean up RAS and remove possible loops
> --
>
> Key: STORM-2792
> URL: https://issues.apache.org/jira/browse/STORM-2792
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>Priority: Major
>  Labels: pull-request-available
>
> The code for RAS is rather complex, and we have found that if there is a 
> mismatch between the priority strategy and the eviction strategy, it can 
> result in infinite loops.  To make this simpler there should really be just 
> one strategy that prioritizes all topologies: scheduling happens for the 
> highest priority topologies and eviction happens for the lowest priority 
> ones.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2799) Ensure jdk.tools is not being included transitively since tools.jar doesn't exist in JDK 9 and we don't need it.

2017-11-05 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2799:
--
Labels: pull-request-available  (was: )

> Ensure jdk.tools is not being included transitively since tools.jar doesn't 
> exist in JDK 9 and we don't need it.
> 
>
> Key: STORM-2799
> URL: https://issues.apache.org/jira/browse/STORM-2799
> Project: Apache Storm
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> A few of our dependencies are leaking a jdk.tools dependency to us from 
> hbase-annotations and hadoop-annotations. It seems like those projects use 
> jdk.tools to run a custom doclet for generating their own Javadoc. We 
> shouldn't need jdk.tools since we don't run custom doclets, and tools.jar 
> doesn't exist in JDK 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2800) Use JAXB api dependency from Maven instead of relying on that API being available in the standard JDK

2017-11-05 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2800:
--
Labels: pull-request-available  (was: )

> Use JAXB api dependency from Maven instead of relying on that API being 
> available in the standard JDK
> -
>
> Key: STORM-2800
> URL: https://issues.apache.org/jira/browse/STORM-2800
> Project: Apache Storm
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> JDK 9 doesn't expose the javax.xml.bind package by default anymore. We should 
> use the Maven package to get the APIs instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-1464) storm-hdfs should support writing to multiple files

2017-11-06 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-1464:
--
Labels: avro pull-request-available  (was: avro)

> storm-hdfs should support writing to multiple files
> ---
>
> Key: STORM-1464
> URL: https://issues.apache.org/jira/browse/STORM-1464
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-hdfs
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>  Labels: avro, pull-request-available
> Fix For: 2.0.0, 1.1.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Examples of when this is needed include:
> - One avro bolt writing multiple schemas, each of which requires a different 
> file. Schema evolution is a common use of avro and the avro bolt should 
> support that seamlessly.
> - Partitioning output to different directories based on the tuple contents.  
> For example, if the tuple contains a "USER" field, it should be possible to 
> partition based on that value (see the sketch below).
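> A hedged, hypothetical partitioner sketch for the second case; the class and 
> method shape are assumptions for illustration, not necessarily the storm-hdfs API:
> {code:java}
> import org.apache.storm.tuple.Tuple;
>
> class UserFieldPartitioner {
>     // Hypothetical hook: derive the output subdirectory from a tuple field,
>     // e.g. tuples with USER=alice land under /user=alice/.
>     public String getPartitionPath(Tuple tuple) {
>         return "/user=" + tuple.getStringByField("USER");
>     }
> }
> {code}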



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2795) Race in downloading resources can cause failure

2017-11-01 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2795:
--
Labels: pull-request-available  (was: )

> Race in downloading resources can cause failure
> ---
>
> Key: STORM-2795
> URL: https://issues.apache.org/jira/browse/STORM-2795
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>Priority: Major
>  Labels: pull-request-available
>
> Recently had a failure/hang in the async localizer test.  It turns out that 
> when downloading dependencies there is a race in trying to create the parent 
> directory.
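> A minimal sketch of an idempotent way to create the parent directory, which 
> tolerates two downloads racing on the same path; whether this matches the 
> actual fix is an assumption:
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
>
> class ParentDirs {
>     static void ensureParentExists(Path target) throws IOException {
>         Path parent = target.getParent();
>         if (parent != null) {
>             // Safe to call concurrently; no-op if the directory already exists.
>             Files.createDirectories(parent);
>         }
>     }
> }
> {code}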



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2806) Give users the option to disable the login cache

2017-11-09 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2806:
--
Labels: pull-request-available  (was: )

> Give users the option to disable the login cache
> 
>
> Key: STORM-2806
> URL: https://issues.apache.org/jira/browse/STORM-2806
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> Currently we cache logins using 
> [LoginCacheKey|https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/KerberosSaslTransportPlugin.java#L57].
> But the LoginCacheKey fails to work correctly when we use a TGT cache instead 
> of a keytab.
> The proposed solution is to add an option to jaas.conf so that the user can 
> disable the login cache if needed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2807) Integration test should shut down topologies immediately after the test

2017-11-10 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2807:
--
Labels: pull-request-available  (was: )

> Integration test should shut down topologies immediately after the test
> ---
>
> Key: STORM-2807
> URL: https://issues.apache.org/jira/browse/STORM-2807
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> The integration test kills topologies with the default 30 second timeout. 
> This is unnecessary and delays the following tests, because the killed 
> topology is still occupying worker slots.
> When the integration test kills topologies, it tries sending the kill message 
> to Nimbus once, and may fail quietly. This breaks following tests, because 
> the default Storm install has only 4 worker slots, and the test topologies 
> each take up 3. When a topology is not shut down, it prevents the following 
> topologies from being assigned.
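> A hedged sketch of killing a test topology with no wait time via the Nimbus 
> client; retrying on failure (rather than failing quietly) is left to the 
> caller, and the helper name is illustrative:
> {code:java}
> import org.apache.storm.generated.KillOptions;
> import org.apache.storm.generated.Nimbus;
>
> class TestTeardown {
>     static void killNow(Nimbus.Iface client, String topologyName) throws Exception {
>         KillOptions opts = new KillOptions();
>         // Shut down immediately instead of waiting the default 30 seconds.
>         opts.set_wait_secs(0);
>         client.killTopologyWithOpts(topologyName, opts);
>     }
> }
> {code}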



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2793) Add transferred byte count metrics

2017-11-01 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2793:
--
Labels: pull-request-available  (was: )

> Add transferred byte count metrics
> --
>
> Key: STORM-2793
> URL: https://issues.apache.org/jira/browse/STORM-2793
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-metrics
>Reporter: Joshua Martell
>Assignee: Joshua Martell
>Priority: Major
>  Labels: pull-request-available
>
> Existing Storm metrics track tuple counts, but don't account for the size of 
> the tuple payload.  If a task always gets large tuples, it can be slower than 
> a task that gets small tuples without showing any reason why.  It would be 
> good to track the byte counts as well so data skew can be observed directly. 
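> An illustrative sketch only: counting payload bytes alongside tuple counts 
> with a Dropwizard meter. The registry wiring and metric name are assumptions, 
> not necessarily what the pull request uses:
> {code:java}
> import com.codahale.metrics.Meter;
> import com.codahale.metrics.MetricRegistry;
>
> class TransferMetrics {
>     private final Meter transferredBytes;
>
>     TransferMetrics(MetricRegistry registry) {
>         this.transferredBytes = registry.meter("transferred-bytes");
>     }
>
>     void onSend(byte[] serializedTuple) {
>         // Record the serialized payload size, not just the tuple count.
>         transferredBytes.mark(serializedTuple.length);
>     }
> }
> {code}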



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2803) SlotTest failing on travis frequently

2017-11-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2803:
--
Labels: pull-request-available  (was: )

> SlotTest failing on travis frequently
> -
>
> Key: STORM-2803
> URL: https://issues.apache.org/jira/browse/STORM-2803
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Robert Joseph Evans
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>
> I have seen SlotTest fail way too frequently on travis, but it never fails 
> off of travis.
> My guess is that there is some kind of race condition, and that on 
> slower hardware (i.e. VMs or containers on overloaded build machines) the 
> tests tend to fail.
> I'll try to find some time to look at this, but if someone else wants to 
> steal it from me feel free to.  I don't know exactly when I will find time to 
> do it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2850) ManualPartitionSubscription assigns new partitions before calling onPartitionsRevoked

2017-12-08 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2850:
--
Labels: pull-request-available  (was: )

> ManualPartitionSubscription assigns new partitions before calling 
> onPartitionsRevoked
> -
>
> Key: STORM-2850
> URL: https://issues.apache.org/jira/browse/STORM-2850
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>  Labels: pull-request-available
>
> ManualPartitionSubscription does partition assignment updates in the wrong 
> order. It calls KafkaConsumer.assign, then onPartitionsRevoked and last 
> onPartitionsAssigned. The order should be onPartitionsRevoked, then assign, 
> then onPartitionsAssigned.
> onPartitionsRevoked has to be called before we reassign partitions, because 
> we try to commit offsets for the revoked partitions. If we try to commit to a 
> partition the consumer is not assigned, it will throw an exception. The 
> onRevoke, assign, onAssign order is also more in line with the javadoc for 
> ConsumerRebalanceListener, which specifies that onRevoke should be called 
> before the partition rebalance begins.
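> A hedged sketch of the proposed ordering, with the listener committing while 
> the old partitions are still assigned before the assignment changes (class 
> and method names below follow the description, not necessarily the patch):
> {code:java}
> import java.util.Set;
> import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.TopicPartition;
>
> class ManualAssignmentUpdate {
>     static <K, V> void refreshAssignment(KafkaConsumer<K, V> consumer,
>             Set<TopicPartition> newAssignment, ConsumerRebalanceListener listener) {
>         // 1. Let the listener commit offsets while the old partitions are still assigned.
>         listener.onPartitionsRevoked(consumer.assignment());
>         // 2. Only then change the assignment.
>         consumer.assign(newAssignment);
>         // 3. Finally notify about the newly assigned partitions.
>         listener.onPartitionsAssigned(newAssignment);
>     }
> }
> {code}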



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2846) Add extra classpath for nimbus and supervisor

2017-12-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2846:
--
Labels: pull-request-available  (was: )

> Add extra classpath for nimbus and supervisor
> -
>
> Key: STORM-2846
> URL: https://issues.apache.org/jira/browse/STORM-2846
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> Currently we have STORM_EXT_CLASSPATH_DAEMON and STORM_EXT_CLASSPATH for 
> extra classpath entries. We might want to have an extra classpath for nimbus and 
> supervisor specifically. One of the issues I am facing is setting 
> up the HdfsBlobstore. I point STORM_EXT_CLASSPATH_DAEMON to the classpath of 
> an existing hadoop cluster. But because the hadoop classpath includes 
> jersey-core-1.9, the storm logviewer and drpc server fail to run (since storm 
> uses jersey-2.x).
> {code:java}
> java.lang.Error: java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
> at 
> org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:568) 
> ~[storm-client-2.0.0.y.jar:2.0.0.y]
> at 
> org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:547) 
> ~[storm-client-2.0.0.y.jar:2.0.0.y]
> at org.apache.storm.utils.Utils$5.uncaughtException(Utils.java:877) 
> ~[storm-client-2.0.0.y.jar:2.0.0.y]
> at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1057) 
> ~[?:1.8.0_131]
> at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052) 
> ~[?:1.8.0_131]
> at java.lang.Thread.dispatchUncaughtException(Thread.java:1959) 
> [?:1.8.0_131]
> Caused by: java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
> at 
> org.glassfish.jersey.server.ApplicationHandler.(ApplicationHandler.java:331)
>  ~[jersey-server-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.WebComponent.(WebComponent.java:392) 
> ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177) 
> ~[jersey-container-servlet-core-2.24.1.jar:?]
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369) 
> ~[jersey-container-servlet-core-2.24.1.jar:?]
> at javax.servlet.GenericServlet.init(GenericServlet.java:244) 
> ~[javax.servlet-api-3.1.0.jar:3.1.0]
> at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:637) 
> ~[jetty-servlet-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:421) 
> ~[jetty-servlet-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:760) 
> ~[jetty-servlet-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
>  ~[jetty-servlet-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:785)
>  ~[jetty-server-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
>  ~[jetty-servlet-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  ~[jetty-util-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>  ~[jetty-util-9.4.7.v20170914.jar:9.4.7.v20170914]
> at org.eclipse.jetty.server.Server.start(Server.java:449) 
> ~[jetty-server-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
>  ~[jetty-util-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>  ~[jetty-server-9.4.7.v20170914.jar:9.4.7.v20170914]
> at org.eclipse.jetty.server.Server.doStart(Server.java:416) 
> ~[jetty-server-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  ~[jetty-util-9.4.7.v20170914.jar:9.4.7.v20170914]
> at 
> org.apache.storm.daemon.logviewer.LogviewerServer.start(LogviewerServer.java:129)
>  ~[storm-webapp-2.0.0.y.jar:2.0.0.y]
> at 
> org.apache.storm.daemon.logviewer.LogviewerServer.main(LogviewerServer.java:170)
>  ~[storm-webapp-2.0.0.y.jar:2.0.0.y]
> {code}
> Adding the jvm option -verbose:class, we have the following message:
> {code:java}
> [Loaded 

[jira] [Updated] (STORM-2406) [Storm SQL] Change underlying API to Streams API (for 2.0.0)

2017-12-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2406:
--
Labels: pull-request-available  (was: )

> [Storm SQL] Change underlying API to Streams API (for 2.0.0)
> 
>
> Key: STORM-2406
> URL: https://issues.apache.org/jira/browse/STORM-2406
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-sql
>Affects Versions: 2.0.0
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
>  Labels: pull-request-available
>
> Since we dropped the features which conform to the Trident semantics, Storm SQL 
> no longer needs to rely on Trident, which is micro-batch.
> Both the core API and the Streams API are candidates, but we would have to implement some 
> bolts if we decided to rely on the core API, whereas we don't need to do that 
> for the Streams API. (If we do need to, that's the point at which to improve the Streams API.)
> The Streams API also provides a windowing feature with tuple-by-tuple semantics, so 
> it's ready for STORM-2405 too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2790) Add nimbus admin groups

2017-10-30 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2790:
--
Labels: pull-request-available  (was: )

> Add nimbus admin groups
> ---
>
> Key: STORM-2790
> URL: https://issues.apache.org/jira/browse/STORM-2790
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-client, storm-server
>Reporter: Kishor Patil
>Assignee: Kishor Patil
>  Labels: pull-request-available
>
> Add "nimbus.admins.groups" to help enabling group level permissions part from 
> individuals using "nimbus.admins"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2787) storm-kafka-client should set initialized flag independently of processing guarantees

2017-10-25 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2787:
--
Labels: pull-request-available  (was: )

> storm-kafka-client should set initialized flag independently of processing 
> guarantees
> -
>
> Key: STORM-2787
> URL: https://issues.apache.org/jira/browse/STORM-2787
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Hugo Louro
>Assignee: Hugo Louro
>Priority: Critical
>  Labels: pull-request-available
>
> Currently the method 
> {code:java}
> public void onPartitionsRevoked(Collection partitions) {
> {code}
> has the following condition
> {code:java}
> if (isAtLeastOnceProcessing() && initialized) {
> initialized = false;
> ...
> }
> {code}
> initialized should be set to false independently of the processing guarantee
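> A minimal sketch of the suggested change, clearing the flag regardless of the 
> guarantee and keeping the offset bookkeeping for at-least-once only (the body 
> of the at-least-once branch is elided):
> {code:java}
> public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
>     initialized = false;
>     if (isAtLeastOnceProcessing()) {
>         // the existing commit/retry bookkeeping for revoked partitions stays here
>     }
> }
> {code}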



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-2794) Translate backtype.storm to org.apache for topology.scheduler.strategy if scheduling older version topologies

2017-10-31 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2794:
--
Labels: pull-request-available  (was: )

> Translate backtype.storm to org.apache for topology.scheduler.strategy if 
> scheduling older version topologies
> -
>
> Key: STORM-2794
> URL: https://issues.apache.org/jira/browse/STORM-2794
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> We want to support running workers of older versions on a 2.x cluster. Most 
> of the work is done by STORM-2448.  We need one more change.
> When we schedule a topology of an older version, the confs in TopologyDetails 
> could still use "backtype.storm". To make the scheduler work, we need to translate 
> "backtype.storm" to "org.apache.storm" for Config.TOPOLOGY_SCHEDULER_STRATEGY
>  
> https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/resource/ResourceAwareScheduler.java#L98
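> A minimal sketch of the translation, assuming the strategy class name is 
> rewritten before the scheduler tries to load it (the helper name is illustrative):
> {code:java}
> static String translateStrategyName(String strategyClass) {
>     if (strategyClass != null && strategyClass.startsWith("backtype.storm")) {
>         // Old-style topologies shipped strategy class names under backtype.storm.
>         return strategyClass.replaceFirst("^backtype\\.storm", "org.apache.storm");
>     }
>     return strategyClass;
> }
> {code}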



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (STORM-3055) never refresh connection

2018-05-04 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3055:
--
Labels: pull-request-available  (was: )

> never refresh connection
> 
>
> Key: STORM-3055
> URL: https://issues.apache.org/jira/browse/STORM-3055
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.1.1
>Reporter: zhangbiao
>Priority: Major
>  Labels: pull-request-available
>
> In our environment, some workers' connections to other workers get closed and 
> never reconnect.
> The log shows: 
> 2018-05-02 10:28:49.302 o.a.s.m.n.Client 
> Thread-90-disruptor-worker-transfer-queue [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.31.1:6800 is being closed
> ..
> 2018-05-02 11:00:29.540 o.a.s.m.n.Client 
> Thread-90-disruptor-worker-transfer-queue [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.31.1:6800 is being closed
> The log shows that it can never reconnect again. I can only fix it by 
> restarting the topology.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (STORM-3072) Frequent test failures in storm-sql-core

2018-05-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-3072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3072:
--
Labels: pull-request-available  (was: )

> Frequent test failures in storm-sql-core
> 
>
> Key: STORM-3072
> URL: https://issues.apache.org/jira/browse/STORM-3072
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-sql
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Major
>  Labels: pull-request-available
>
> Seeing test failures in storm-sql-core, sometimes regular test failures, 
> other times JVM crashes.
> {code}
> testExternalUdf(org.apache.storm.sql.TestStormSql)  Time elapsed: 8.177 sec  
> <<< ERROR!
> java.lang.RuntimeException: java.lang.RuntimeException: not a leader, current 
> leader is NimbusInfo{host='DESKTOP-AGC8TKM.localdomain', port=6627, 
> isLeader=true}
> at 
> org.apache.storm.daemon.nimbus.Nimbus.submitTopologyWithOpts(Nimbus.java:2952)
> at 
> org.apache.storm.daemon.nimbus.Nimbus.submitTopology(Nimbus.java:2761)
> at org.apache.storm.LocalCluster.submitTopology(LocalCluster.java:378)
> at 
> org.apache.storm.sql.StormSqlLocalClusterImpl.runLocal(StormSqlLocalClusterImpl.java:68)
> at 
> org.apache.storm.sql.TestStormSql.testExternalUdf(TestStormSql.java:214)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runners.Suite.runChild(Suite.java:127)
> at org.junit.runners.Suite.runChild(Suite.java:26)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:161)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> Caused by: java.lang.RuntimeException: not a leader, current leader is 
> NimbusInfo{host='DESKTOP-AGC8TKM.localdomain', port=6627, isLeader=true}
> at 
> 

[jira] [Updated] (STORM-3061) Upgrade Dependencies before 2.x release

2018-05-14 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3061:
--
Labels: pull-request-available  (was: )

> Upgrade Dependencies before 2.x release
> ---
>
> Key: STORM-3061
> URL: https://issues.apache.org/jira/browse/STORM-3061
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>Priority: Major
>  Labels: pull-request-available
>
> Storm has a lot of dependencies.  It would be great to upgrade many of them 
> to newer versions ahead of a 2.x release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (STORM-2472) kafkaspout should work normally in kerberos mode with kafka 0.9.x API

2018-05-15 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2472:
--
Labels: pull-request-available  (was: )

> kafkaspout should work normally in kerberos mode with kafka 0.9.x API
> -
>
> Key: STORM-2472
> URL: https://issues.apache.org/jira/browse/STORM-2472
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka-client
>Reporter: liuzhaokun
>Assignee: liuzhaokun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The storm 1.0.x branch didn't support the kafkaspout consuming from kafka in 
> kerberos mode, because we can't set kafka's parameter 
> 'java.security.auth.login.config' in storm's process. So I solve it by 
> preparing the kafkaspout accordingly (see the sketch below).
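> A hedged sketch of one workaround with the 0.9.x-style client: point the JVM 
> at a JAAS file before the consumer is created. The path below is illustrative, 
> and this is not necessarily the approach taken in the pull request:
> {code:java}
> static void configureJaas() {
>     if (System.getProperty("java.security.auth.login.config") == null) {
>         // Illustrative location; real deployments would pass this in via config.
>         System.setProperty("java.security.auth.login.config",
>             "/etc/storm/conf/storm_jaas.conf");
>     }
> }
> {code}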



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (STORM-2988) "Error on initialization of server mk-worker" when using org.apache.storm.metrics2.reporters.JmxStormReporter on worker

2018-05-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-2988:
--
Labels: pull-request-available  (was: )

> "Error on initialization of server mk-worker" when using 
> org.apache.storm.metrics2.reporters.JmxStormReporter on worker
> ---
>
> Key: STORM-2988
> URL: https://issues.apache.org/jira/browse/STORM-2988
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.2.1
> Environment: CentOS 7.4
> java.version=1.8.0_161
> java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre
>Reporter: Federico Chiacchiaretta
>Assignee: Artem Ervits
>Priority: Major
>  Labels: pull-request-available
>
> As per documentation, I configured metrics v2 in my storm.yaml using the 
> following configuration:
>  
> {code:yaml}
> storm.metrics.reporters:
>   - class: "org.apache.storm.metrics2.reporters.JmxStormReporter"
> daemons:
> - "supervisor"
> - "nimbus"
> - "worker"
> report.period: 10
> report.period.units: "SECONDS"
> {code}
> When I start nimbus and supervisors everything works properly, I can see 
> metrics reported to JMX, and logs (for nimbus in this example) report:
> {code}
> 2018-03-07 15:35:22.201 o.a.s.d.m.MetricsUtils main [INFO] Using statistics 
> reporter 
> plugin:org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter
> 2018-03-07 15:35:22.203 o.a.s.d.m.r.JmxPreparableReporter main [INFO] 
> Preparing...
> 2018-03-07 15:35:22.221 o.a.s.d.common main [INFO] Started statistics report 
> plugin...
> {code}
> When I submit a topology, workers cannot initialize and report this error
> {code:java}
> 2018-03-07 15:39:19.136 o.a.s.d.worker main [INFO] Launching worker for 
> stp_topology-1-1520433551 on [... cut ...]
> 2018-03-07 15:39:19.169 o.a.s.m.StormMetricRegistry main [INFO] Starting 
> metrics reporters...
> 2018-03-07 15:39:19.172 o.a.s.m.StormMetricRegistry main [INFO] Attempting to 
> instantiate reporter class: 
> org.apache.storm.metrics2.reporters.JmxStormReporter
> 2018-03-07 15:39:19.175 o.a.s.m.r.JmxStormReporter main [INFO] Preparing...
> 2018-03-07 15:39:19.182 o.a.s.d.worker main [ERROR] Error on initialization 
> of server mk-worker
> java.lang.IllegalArgumentException: Don't know how to convert {"class" 
> "org.apache.storm.metrics2.reporters.JmxStormReporter", "daemons" 
> ["supervisor" "nimbus" "worker"], "report.period" 10, "report.period.units" 
> "SECONDS"} + to String
>   at org.apache.storm.utils.Utils.getString(Utils.java:848) 
> ~[storm-core-1.2.1.jar:1.2.1]
>   at 
> org.apache.storm.metrics2.reporters.JmxStormReporter.getMetricsJMXDomain(JmxStormReporter.java:70)
>  ~[storm-core-1.2.1.jar:1.2.1]
>   at 
> org.apache.storm.metrics2.reporters.JmxStormReporter.prepare(JmxStormReporter.java:51)
>  ~[storm-core-1.2.1.jar:1.2.1]
>   at 
> org.apache.storm.metrics2.StormMetricRegistry.startReporter(StormMetricRegistry.java:119)
>  ~[storm-core-1.2.1.jar:1.2.1]
>   at 
> org.apache.storm.metrics2.StormMetricRegistry.start(StormMetricRegistry.java:102)
>  ~[storm-core-1.2.1.jar:1.2.1]
>   at 
> org.apache.storm.daemon.worker$fn__5545$exec_fn__1369__auto5546.invoke(worker.clj:611)
>  ~[storm-core-1.2.1.jar:1.2.1]
>   at clojure.lang.AFn.applyToHelper(AFn.java:178) ~[clojure-1.7.0.jar:?]
>   at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.7.0.jar:?]
>   at clojure.core$apply.invoke(core.clj:630) ~[clojure-1.7.0.jar:?]
>   at 
> org.apache.storm.daemon.worker$fn__5545$mk_worker__5636.doInvoke(worker.clj:598)
>  [storm-core-1.2.1.jar:1.2.1]
>   at clojure.lang.RestFn.invoke(RestFn.java:512) [clojure-1.7.0.jar:?]
>   at org.apache.storm.daemon.worker$_main.invoke(worker.clj:787) 
> [storm-core-1.2.1.jar:1.2.1]
>   at clojure.lang.AFn.applyToHelper(AFn.java:165) [clojure-1.7.0.jar:?]
>   at clojure.lang.AFn.applyTo(AFn.java:144) [clojure-1.7.0.jar:?]
>   at org.apache.storm.daemon.worker.main(Unknown Source) 
> [storm-core-1.2.1.jar:1.2.1]
> 2018-03-07 15:39:19.195 o.a.s.util main [ERROR] Halting process: ("Error on 
> initialization")
> java.lang.RuntimeException: ("Error on initialization")
>   at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) 
> [storm-core-1.2.1.jar:1.2.1]
>   at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
>   at 
> org.apache.storm.daemon.worker$fn__5545$mk_worker__5636.doInvoke(worker.clj:598)
>  [storm-core-1.2.1.jar:1.2.1]
>   at clojure.lang.RestFn.invoke(RestFn.java:512) [clojure-1.7.0.jar:?]
>   at org.apache.storm.daemon.worker$_main.invoke(worker.clj:787) 
> 

[jira] [Updated] (STORM-3060) Configuration mapping between storm-kafka & storm-kafka-client

2018-05-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-3060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3060:
--
Labels: pull-request-available storm-kafka-client  (was: storm-kafka-client)

> Configuration mapping between storm-kafka & storm-kafka-client
> --
>
> Key: STORM-3060
> URL: https://issues.apache.org/jira/browse/STORM-3060
> Project: Apache Storm
>  Issue Type: Documentation
>  Components: storm-kafka, storm-kafka-client
>Reporter: Srishty Agrawal
>Assignee: Srishty Agrawal
>Priority: Minor
>  Labels: pull-request-available, storm-kafka-client
>
> A document which contains mapping of configurations from {{storm-kafka 
> SpoutConfig}} to {{storm-kafka-client KafkaSpoutConfig}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (STORM-3065) Very frequent test failures in storm-server

2018-05-09 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-3065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3065:
--
Labels: pull-request-available  (was: )

> Very frequent test failures in storm-server
> ---
>
> Key: STORM-3065
> URL: https://issues.apache.org/jira/browse/STORM-3065
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> I'm seeing the following intermittent test failures in storm-server when I 
> run locally
> {code}
> 2018-05-09 16:32:23.377 [main] ERROR 
> org.apache.storm.blobstore.KeySequenceNumber - Exception {}
> java.lang.NullPointerException: null
>   at java.lang.String.contains(String.java:2133) ~[?:1.8.0_152]
>   at 
> org.apache.storm.blobstore.KeySequenceNumber.checkIfStateContainsCurrentNimbusHost(KeySequenceNumber.java:206)
>  ~[classes/:?]
>   at 
> org.apache.storm.blobstore.KeySequenceNumber.getKeySequenceNumber(KeySequenceNumber.java:159)
>  [classes/:?]
>   at 
> org.apache.storm.daemon.nimbus.Nimbus.getVersionForKey(Nimbus.java:655) 
> [classes/:?]
>   at 
> org.apache.storm.blobstore.LocalFsBlobStore.createBlob(LocalFsBlobStore.java:223)
>  [classes/:?]
>   at 
> org.apache.storm.blobstore.LocalFsBlobStore$MockitoMock$1067706995.createBlob$accessor$Ub7aO1Cr(Unknown
>  Source) [classes/:?]
>   at 
> org.apache.storm.blobstore.LocalFsBlobStore$MockitoMock$1067706995$auxiliary$x0GVZISq.call(Unknown
>  Source) [classes/:?]
>   at 
> org.mockito.internal.invocation.RealMethod$FromCallable.invoke(RealMethod.java:48)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.creation.bytebuddy.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:129)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:43)
>  [mockito-core-2.10.0.jar:?]
>   at org.mockito.Answers.answer(Answers.java:100) 
> [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:97) 
> [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:35)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:65)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:51)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.mockito.internal.creation.bytebuddy.MockMethodInterceptor$DispatcherDefaultingToRealMethod.interceptSuperCallable(MockMethodInterceptor.java:135)
>  [mockito-core-2.10.0.jar:?]
>   at 
> org.apache.storm.blobstore.LocalFsBlobStore$MockitoMock$1067706995.createBlob(Unknown
>  Source) [classes/:?]
>   at 
> org.apache.storm.blobstore.LocalFsBlobStoreTest.testBasic(LocalFsBlobStoreTest.java:325)
>  [test-classes/:?]
>   at 
> org.apache.storm.blobstore.LocalFsBlobStoreTest.testBasicLocalFs(LocalFsBlobStoreTest.java:114)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_152]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_152]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_152]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_152]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  [junit-4.11.jar:?]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.11.jar:?]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  [junit-4.11.jar:?]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.11.jar:?]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> [junit-4.11.jar:?]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> [junit-4.11.jar:?]
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) 
> [junit-4.11.jar:?]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  [junit-4.11.jar:?]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  [junit-4.11.jar:?]

[jira] [Updated] (STORM-3044) AutoTGT should ideally check if a TGT is specific to IP addresses and reject

2018-04-27 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3044:
--
Labels: pull-request-available  (was: )

> AutoTGT should ideally check if a TGT is specific to IP addresses and reject
> 
>
> Key: STORM-3044
> URL: https://issues.apache.org/jira/browse/STORM-3044
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Ethan Li
>Assignee: Ethan Li
>Priority: Major
>  Labels: pull-request-available
>
> There are several options that a TGT can have: one of them is being 
> forwardable, another is having one or more IP addresses in it so that 
> it cannot be used anywhere else. If the ticket is forwardable but is tied to 
> IP addresses, it will likely not work for storm, so we should reject it.
> It looks like we can call getClientAddresses() on the ticket and if it 
> returns something then we should reject it. We should also include 
> instructions about how to get a proper ticket in the error message:
> `kinit -A -f`
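> A minimal sketch of the suggested check, assuming access to the user's 
> KerberosTicket; the error text and class name are illustrative:
> {code:java}
> import java.net.InetAddress;
> import javax.security.auth.kerberos.KerberosTicket;
>
> class TgtChecks {
>     static void rejectAddressBoundTgt(KerberosTicket tgt) {
>         InetAddress[] addresses = tgt.getClientAddresses();
>         if (addresses != null && addresses.length > 0) {
>             throw new IllegalArgumentException(
>                 "TGT is bound to specific client IP addresses and will likely not work "
>                 + "for Storm; please re-run kinit with -A -f to get an addressless, "
>                 + "forwardable ticket.");
>         }
>     }
> }
> {code}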



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

