[GitHub] storm issue #2066: [STORM-2472] kafkaspout should work normally in kerberos ...

2018-05-15 Thread aniketalhat
Github user aniketalhat commented on the issue:

https://github.com/apache/storm/pull/2066
  
@liu-zhaokun how did you manage to solve this? Currently the new 
storm-kafka-client isn't very stable and doesn't support the transactional 
spout. I have to connect to Kafka (0.11.0.2) via Kerberos on Storm (1.0.6).

Any ideas? It would be a great help!


---


[GitHub] storm issue #2066: [STORM-2472] kafkaspout should work normally in kerberos ...

2018-05-15 Thread liu-zhaokun
Github user liu-zhaokun commented on the issue:

https://github.com/apache/storm/pull/2066
  
@aniketalhat
You can look through my PR. My intention was to export 
java.security.auth.login.config=<the path to your keytab>, and it works. You 
can try it.
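
For readers landing here later, the idea described above roughly corresponds to 
pointing the client JVM at a JAAS login configuration and enabling SASL/Kerberos 
on the Kafka consumer. The sketch below is only illustrative: the file path and 
service name are assumptions, not taken from the PR, and in a real deployment the 
property is usually passed via the worker JVM options (e.g. worker.childopts) 
rather than set in code.

```
import java.util.Properties;

// Hypothetical sketch: wiring Kerberos settings for a Kafka consumer used from a Storm worker.
// The JAAS file path below is a placeholder; the file itself (not shown) would contain a
// KafkaClient section referencing the keytab and principal.
public class KerberosKafkaProps {
    public static Properties kerberosConsumerProps(String bootstrapServers) {
        // Point the JVM at a JAAS login configuration file.
        System.setProperty("java.security.auth.login.config", "/etc/storm/kafka_client_jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("security.protocol", "SASL_PLAINTEXT");   // or SASL_SSL, depending on the brokers
        props.put("sasl.kerberos.service.name", "kafka");    // must match the broker's service principal
        return props;
    }
}
```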


---


[GitHub] storm-site pull request #5: Style tables so they appear similar to how they ...

2018-05-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/storm-site/pull/5


---


[GitHub] storm-site issue #5: Style tables so they appear similar to how they look on...

2018-05-15 Thread srdo
Github user srdo commented on the issue:

https://github.com/apache/storm-site/pull/5
  
Thanks for the reviews, I'll merge and push this to the site.

@erikdw I probably should have thought to include a screenshot, thanks :)


---


[GitHub] storm pull request #2674: STORM-3072: Reduce fork count for storm-sql-core t...

2018-05-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/2674


---


[GitHub] storm pull request #2676: STORM-3073: Uncap pendingEmits for bolt executors,...

2018-05-15 Thread srdo
GitHub user srdo opened a pull request:

https://github.com/apache/storm/pull/2676

STORM-3073: Uncap pendingEmits for bolt executors, and prevent LoadSpout 
from overflowing pendingEmits in spout executors

https://issues.apache.org/jira/browse/STORM-3073

The first commit contains the changes I made to ExclamationTopology to 
provoke the error. I'll remove it after review; it is only included to make the 
error easier to understand.

There are two changes in this PR. The first is to uncap the pendingEmits 
queue for bolt executors. It's currently capped at 1024 elements, which makes 
it dangerous for bolts to emit more than 1024 tuples in an execute invocation. 
If the bolt executor is experiencing backpressure and tries to add the tuples 
to pendingEmits, the queue size will be exceeded and the worker will crash.
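
As an aside, a minimal hypothetical bolt (not part of this PR) illustrating the 
scenario the first change guards against might look like the sketch below: a 
single execute call fans out far more than 1024 emits, which under backpressure 
previously had to fit into the capped pendingEmits queue.

```
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Hypothetical fan-out bolt: one input tuple produces several thousand output tuples.
// With pendingEmits capped at 1024, a burst like this under backpressure could overflow
// the queue and crash the worker; with the cap removed it is simply buffered.
public class FanOutBolt extends BaseBasicBolt {
    private static final int FAN_OUT = 5_000; // deliberately larger than the old 1024 cap

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String word = input.getString(0);
        for (int i = 0; i < FAN_OUT; i++) {
            collector.emit(new Values(word + "-" + i));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```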

The second change is to make LoadSpout emit failed tuples from nextTuple 
instead of doing it from fail. Since the spout executor is also limited to 1024 
tuples in the pending queue, it is likely that the spout executor will exceed 
the queue limit and crash if a bunch of tuples fail at the same time (e.g. due 
to timeout) while the spout is adding tuples to pendingEmits. Since the spout 
won't call nextTuple if there are tuples in pendingEmits, we can just move the 
retries to that method to prevent the spout from exceeding the pendingEmits 
limit.
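
A rough sketch of the retry pattern described for LoadSpout follows (names and 
payloads here are made up; this is not the actual LoadSpout code): fail() only 
records the message id, and nextTuple() replays it, so retries only enter 
pendingEmits when the executor has room for them.

```
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// Hypothetical spout illustrating the retry pattern: fail() only records the message id,
// and the replay happens from nextTuple(), which the executor only calls when pendingEmits
// has room. Emitting directly from fail() could push a large burst of retries into the
// capped pending queue. (fail() and nextTuple() run on the same executor thread, so the
// plain queue needs no synchronization.)
public class RetryFromNextTupleSpout extends BaseRichSpout {
    private final Queue<Long> toRetry = new ArrayDeque<>();
    private SpoutOutputCollector collector;
    private long nextId = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        Long retryId = toRetry.poll();
        if (retryId != null) {
            collector.emit(new Values("retry-" + retryId), retryId); // replay one failed tuple per call
            return;
        }
        long id = nextId++;
        collector.emit(new Values("msg-" + id), id);
    }

    @Override
    public void fail(Object msgId) {
        toRetry.add((Long) msgId); // just record the failure; do not emit from here
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("payload"));
    }
}
```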

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/srdo/storm STORM-3073

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/2676.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2676


commit 538adbbe2d6129ab36a596d612870cef80f3af5a
Author: Stig Rohde Døssing 
Date:   2018-05-15T13:09:54Z

WIP test

commit 9d5adc57bd3aa5dcdf4945196d7b7843fbddf2d1
Author: Stig Rohde Døssing 
Date:   2018-05-15T13:13:09Z

STORM-3073: Uncap pendingEmits for bolt executors, and prevent LoadSpout 
from overflowing pendingEmits in spout executors




---


[GitHub] storm issue #2066: [STORM-2472] kafkaspout should work normally in kerberos ...

2018-05-15 Thread aniketalhat
Github user aniketalhat commented on the issue:

https://github.com/apache/storm/pull/2066
  
Hello Liu,

Thanks for your quick reply. My one concern is that your PR is for
storm-kafka-client (the new consumer), whereas I'm looking for a storm-kafka
(old consumer) solution. What do you think?


---


[GitHub] storm pull request #2669: [STORM-3055] remove context connection cache

2018-05-15 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/2669#discussion_r188377925
  
--- Diff: storm-client/src/jvm/org/apache/storm/messaging/IContext.java ---
@@ -38,6 +38,7 @@
 
 /**
  * This method establishes a server side connection
+ * implementation should return a new connection every call
--- End diff --

Bind doesn't always return a new connection; the local Context will return 
a cached one if possible. Consider changing it to something like "This method 
returns a server side connection. If one does not exist for the given ID and 
port, a new one will be established."
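
For reference, the suggested wording could sit on the method roughly like this 
(the signature and parameter names are recalled from the current IContext 
interface and should be treated as approximate, not as the final patch):

```
import org.apache.storm.messaging.IConnection;

// Sketch only: shows where the suggested javadoc wording would go.
public interface IContextDocSketch {
    /**
     * This method returns a server side connection. If one does not exist for the
     * given ID and port, a new one will be established.
     *
     * @param storm_id the topology ID
     * @param port the port the server listens on
     * @return a server side connection
     */
    IConnection bind(String storm_id, int port);
}
```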


---


[GitHub] storm issue #2669: [STORM-3055] remove context connection cache

2018-05-15 Thread srdo
Github user srdo commented on the issue:

https://github.com/apache/storm/pull/2669
  
As far as I can tell all connections are still being shut down with this 
change. The connections cached by WorkerState are closed during worker shutdown 
in 
https://github.com/apache/storm/blob/14b0b4fc5e0945456769fd58a3595188e3dea234/storm-client/src/jvm/org/apache/storm/daemon/worker/Worker.java#L449,
 and the context is shut down a few lines further down. 

The WorkerState.refreshConnections method also makes sure to never create a 
connection for a NodeInfo that is already present, so I don't think we're 
leaking connections there that would need to be picked up by context.term. 

I would like to see the cleanup @revans2 mentioned, as well as a change to 
the IContext docs specifying that the created connections are new.
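
To make the "never create a connection for a NodeInfo that is already present" 
point concrete, here is a rough, generic sketch of that invariant (not the actual 
WorkerState.refreshConnections code; names and types are invented for illustration):

```
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Rough illustration of the invariant described above: a refresh only opens connections
// for endpoints that are needed but not already cached, so existing connections are never
// silently replaced and leaked.
class ConnectionRefreshSketch<E, C> {
    private final Map<E, C> cached = new HashMap<>();

    void refresh(Set<E> needed, Function<E, C> connect) {
        // Drop entries that are no longer needed (a real implementation would also close them).
        cached.keySet().removeIf(endpoint -> !needed.contains(endpoint));
        // Open connections only for endpoints we do not already have.
        for (E endpoint : needed) {
            cached.computeIfAbsent(endpoint, connect);
        }
    }
}
```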


---


[GitHub] storm pull request #2669: [STORM-3055] remove context connection cache

2018-05-15 Thread pczb
Github user pczb commented on a diff in the pull request:

https://github.com/apache/storm/pull/2669#discussion_r188355473
  
--- Diff: storm-client/src/jvm/org/apache/storm/messaging/netty/Client.java 
---
@@ -451,7 +451,6 @@ public int getPort() {
 public void close() {
 if (!closing) {
 LOG.info("closing Netty Client {}", dstAddressPrefixedName);
-context.removeClient(dstHost, dstAddress.getPort());
--- End diff --

done


---


[GitHub] storm issue #2669: [STORM-3055] remove context connection cache

2018-05-15 Thread pczb
Github user pczb commented on the issue:

https://github.com/apache/storm/pull/2669
  
@srdo Doc specification added. I was wondering whether we need to remove the 
synchronized when binding the port.
@revans2 As srdo says, the connections will be closed by the worker before 
worker shutdown.
I will squash all commits after review.


---


[GitHub] storm issue #2677: STORM-3075 fix NPE

2018-05-15 Thread Ethanlm
Github user Ethanlm commented on the issue:

https://github.com/apache/storm/pull/2677
  
There might be a bug in the code: 
https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L490-L497
```
if (leaderElector == null) {
leaderElector = Zookeeper.zkLeaderElector(conf, zkClient, 
blobStore, topoCache, stormClusterState, getNimbusAcls(conf));
}
this.leaderElector = leaderElector;
this.blobStore.setLeaderElector(this.leaderElector);
if (topoCache == null) {
topoCache = new TopoCache(blobStore, conf);
}
```
The `topoCache` initialization should happen before the `leaderElector` initialization.
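
One way to read the suggested fix is to simply swap the two null checks so the 
cache exists before it is handed to the leader elector. A hedged sketch reusing 
the names from the quoted snippet (not a patch against the real constructor):

```
// Initialize topoCache first, since zkLeaderElector(...) receives it as an argument.
if (topoCache == null) {
    topoCache = new TopoCache(blobStore, conf);
}
if (leaderElector == null) {
    leaderElector = Zookeeper.zkLeaderElector(conf, zkClient,
        blobStore, topoCache, stormClusterState, getNimbusAcls(conf));
}
this.leaderElector = leaderElector;
this.blobStore.setLeaderElector(this.leaderElector);
```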


---


[GitHub] storm issue #2673: STORM-3070: Rewind buffer position if MessageDecoder enco...

2018-05-15 Thread srdo
Github user srdo commented on the issue:

https://github.com/apache/storm/pull/2673
  
Okay, I'm pretty sure I've seen this break now. I ran the TVL topology and got 
occasional worker crashes, e.g.

```
2018-05-15 16:36:01.018 o.a.s.m.n.StormClientHandler client-worker-1 [INFO] Connection to DESKTOP-AGC8TKM/10.0.75.1:6700 failed:
io.netty.handler.codec.DecoderException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to org.apache.storm.messaging.netty.BackPressureStatus
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-all-4.1.24.Final.jar:4.1.24.Final]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to org.apache.storm.messaging.netty.BackPressureStatus
    at org.apache.storm.messaging.netty.BackPressureStatus.read(BackPressureStatus.java:56) ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
    at org.apache.storm.messaging.netty.MessageDecoder.decode(MessageDecoder.java:121) ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-all-4.1.24.Final.jar:4.1.24.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-all-4.1.24.Final.jar:4.1.24.Final]
    ... 15 more
2018-05-15 16:36:01.040 o.a.s.m.n.StormClientHandler client-worker-1 [INFO] Connection to DESKTOP-AGC8TKM/10.0.75.1:6700 failed:
com.esotericsoftware.kryo.KryoException: Unable to find class: :
    at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156) ~[kryo-3.0.3.jar:?]
    at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133) ~[kryo-3.0.3.jar:?]
    at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670) ~[kryo-3.0.3.jar:?]
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781) ~[kryo-3.0.3.jar:?]
    at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134) ~[kryo-3.0.3.jar:?]
    at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40) ~[kryo-3.0.3.jar:?]
    at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:689) ~[kryo-3.0.3.jar:?]
    at org.apache.storm.serialization.KryoValuesDeserializer.deserializeFrom(KryoValuesDeserializer.java:31) ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
    at org.apache.storm.serialization.KryoValuesDeserializer.deserialize(KryoValuesDeserializer.java:37) ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
    at org.apache.storm.messaging.netty.StormClientHandler.channelRead(StormClientHandler.java:69) ~[storm-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
```


---


[GitHub] storm issue #2677: STORM-3075 fix NPE

2018-05-15 Thread agresch
Github user agresch commented on the issue:

https://github.com/apache/storm/pull/2677
  
@Ethanlm - I see tc being null whenever I start nimbus (at least with no 
topologies).


---


[GitHub] storm pull request #2677: STORM-3075 fix NPE

2018-05-15 Thread agresch
GitHub user agresch opened a pull request:

https://github.com/apache/storm/pull/2677

STORM-3075 fix NPE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agresch/storm agresch_storm-3075

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/2677.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2677


commit 6a3150edef5ea58ddd8218df91e2b93ad9f90cd3
Author: Aaron Gresch 
Date:   2018-05-15T20:06:53Z

STORM-3075 fix NPE




---


[GitHub] storm issue #2677: STORM-3075 fix NPE

2018-05-15 Thread Ethanlm
Github user Ethanlm commented on the issue:

https://github.com/apache/storm/pull/2677
  
It looks reasonable, but I would like to understand more about it. Could you 
elaborate on the cases in which tc will be null?


---


[GitHub] storm issue #2677: STORM-3075 fix NPE

2018-05-15 Thread agresch
Github user agresch commented on the issue:

https://github.com/apache/storm/pull/2677
  
@Ethanlm - nice catch. I will fix the source of the problem instead.


---


[GitHub] storm issue #2677: STORM-3075 fix NPE

2018-05-15 Thread agresch
Github user agresch commented on the issue:

https://github.com/apache/storm/pull/2677
  
@Ethanlm - please take a look.  Thanks.


---