[jira] [Commented] (STORM-4003) storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind dependency

2023-11-15 Thread Alexandre Vermeerbergen (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17786382#comment-17786382
 ] 

Alexandre Vermeerbergen commented on STORM-4003:


Without this fix, when running Storm UI with a Java 17 runtime, we get this kind 
of exception in ui.log whenever a topology consumes Kafka topics:

 

2023-11-15 10:54:27.473 o.e.j.s.AbstractConnector main [INFO] Started
ServerConnector@e3c9c02f{HTTP/1.1, (http/1.1)}{0.0.0.0:8070}
2023-11-15 10:54:27.474 o.e.j.s.Server main [INFO] Started @4256ms
2023-11-15 10:55:02.781 o.a.s.u.NimbusClient qtp-1075191296-25 [INFO]
Found leader nimbus : ec23-1-251-0-65.eu-west-1.compute.amazonaws.com:6627
2023-11-15 10:59:31.128 o.a.s.u.ShellUtils qtp-1075191296-24 [INFO]
Failed running command
[/usr/local/Storm/storm-stable/bin/storm-kafka-monitor, -t, audit, -g,
StormAuditPublisherTopology_SbxRealTimeSupervisionAVEEEZZeuw1, -b,
ec23-1-251-0-66.eu-west-1.compute.amazonaws.com,ec23-1-251-0-65.eu-west-1.compute.amazonaws.com,ec2-52-18-172-77,
-s, SASL_SSL, -c, /tmp/kafka-consumer-extra6421423768198678110props]
org.apache.storm.utils.ShellUtils$ExitCodeException: SLF4J: Failed to
load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
further details.
Exception in thread "main" java.lang.NoClassDefFoundError:
javax.xml.bind.DatatypeConverter
 at 
org.apache.kafka.common.security.scram.ScramMessages$ServerFirstMessage.<init>(ScramMessages.java:143)
 at 
org.apache.kafka.common.security.scram.ScramSaslClient.evaluateChallenge(ScramSaslClient.java:112)
 at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:280)
 at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:278)
 at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:784)
 at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
 at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:278)
 at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslToken(SaslClientAuthenticator.java:215)
 at 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:189)
 at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:76)
 at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:385)
 at org.apache.kafka.common.network.Selector.poll(Selector.java:334)
 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
 at 
org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:314)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1386)
 at 
org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.getOffsetLags(KafkaOffsetLagUtil.java:165)
 at 
org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.main(KafkaOffsetLagUtil.java:74)
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.DatatypeConverter
 at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:827)
 at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
 at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:1095)
 ... 20 more

The planned fix is to make jakarta.xml.bind-api.jar available in 
lib-tools/storm-kafka-monitor/ of the runtime view.
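
For reference, a minimal sketch of the kind of Maven change involved, assuming 
the standard jakarta.xml.bind:jakarta.xml.bind-api coordinates get added to the 
storm-kafka-monitor packaging (the version below is an assumption, not taken 
from the actual pull request):

{code:xml}
<!-- Sketch: jakarta.xml.bind-api 2.3.x still provides the javax.xml.bind
     package (including DatatypeConverter) that Java 11+/17 removed from
     the JDK itself. -->
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.3</version>
</dependency>
{code}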

 

 

> storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind 
> dependency
> ---
>
> Key: STORM-4003
> URL: https://issues.apache.org/jira/browse/STORM-4003
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka-monitor
>Affects Versions: 2.5.0
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Major
> Fix For: 2.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (STORM-4003) storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind dependency

2023-11-15 Thread Alexandre Vermeerbergen (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Vermeerbergen updated STORM-4003:
---
Priority: Blocker  (was: Major)

> storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind 
> dependency
> ---
>
> Key: STORM-4003
> URL: https://issues.apache.org/jira/browse/STORM-4003
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>  Components: storm-kafka-monitor
>Affects Versions: 2.5.0
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Blocker
> Fix For: 2.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (STORM-4003) storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind dependency

2023-11-15 Thread Alexandre Vermeerbergen (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Vermeerbergen updated STORM-4003:
---
Issue Type: Bug  (was: Dependency upgrade)

> storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind 
> dependency
> ---
>
> Key: STORM-4003
> URL: https://issues.apache.org/jira/browse/STORM-4003
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-monitor
>Affects Versions: 2.5.0
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Blocker
> Fix For: 2.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4003) storm-kafka-monitor fails with Java 17 runtime, missing jakarta.xml.bind dependency

2023-11-15 Thread Alexandre Vermeerbergen (Jira)
Alexandre Vermeerbergen created STORM-4003:
--

 Summary: storm-kafka-monitor fails with Java 17 runtime, missing 
jakarta.xml.bind dependency
 Key: STORM-4003
 URL: https://issues.apache.org/jira/browse/STORM-4003
 Project: Apache Storm
  Issue Type: Dependency upgrade
  Components: storm-kafka-monitor
Affects Versions: 2.5.0
Reporter: Alexandre Vermeerbergen
Assignee: Alexandre Vermeerbergen
 Fix For: 2.6.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-3986) Get rid of BlacklistScheduler timer [INFO] logs

2023-10-01 Thread Alexandre Vermeerbergen (Jira)
Alexandre Vermeerbergen created STORM-3986:
--

 Summary: Get rid of BlacklistScheduler timer [INFO] logs
 Key: STORM-3986
 URL: https://issues.apache.org/jira/browse/STORM-3986
 Project: Apache Storm
  Issue Type: Dependency upgrade
  Components: storm-server
Affects Versions: 2.5.0
Reporter: Alexandre Vermeerbergen
Assignee: Alexandre Vermeerbergen


Every 10 seconds, we get the following INFO-level log line:

2023-10-01 08:02:01.607 o.a.s.s.b.BlacklistScheduler timer [INFO]
 Supervisors [] are blacklisted.

I propose changing the current behavior so that the list of blocklisted 
supervisors is printed only when at least one supervisor is blocklisted 
(using "blocklisted" as a more neutral wording than "blacklisted").
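
A minimal sketch of the proposed guard, assuming the scheduler keeps the 
blocklisted supervisors in a collection and logs through an SLF4J logger 
(names below are illustrative, not taken from the actual patch):

{code:java}
// Only log when the set is non-empty, instead of printing
// "Supervisors [] are blacklisted." every 10 seconds.
if (!blacklistedSupervisors.isEmpty()) {
    LOG.info("Supervisors {} are blacklisted.", blacklistedSupervisors);
}
{code}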



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-3985) Upgrade dependencies to ones based on Jakarta EE APIs

2023-09-29 Thread Alexandre Vermeerbergen (Jira)
Alexandre Vermeerbergen created STORM-3985:
--

 Summary: Upgrade dependencies to ones based on Jakarta EE APIs
 Key: STORM-3985
 URL: https://issues.apache.org/jira/browse/STORM-3985
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Alexandre Vermeerbergen






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (STORM-3957) Replace static Community/People page by a link to ASF's https://projects.apache.org/committee.html?storm

2023-08-24 Thread Alexandre Vermeerbergen (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Vermeerbergen resolved STORM-3957.

Resolution: Fixed

> Replace static Community/People page by a link to ASF's 
> https://projects.apache.org/committee.html?storm
> 
>
> Key: STORM-3957
> URL: https://issues.apache.org/jira/browse/STORM-3957
> Project: Apache Storm
>  Issue Type: Documentation
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Trivial
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-3958) Capacity to set Storm UI's title in conf/storm.yaml

2023-08-19 Thread Alexandre Vermeerbergen (Jira)
Alexandre Vermeerbergen created STORM-3958:
--

 Summary: Capacity to set Storm UI's title in conf/storm.yaml
 Key: STORM-3958
 URL: https://issues.apache.org/jira/browse/STORM-3958
 Project: Apache Storm
  Issue Type: New Feature
  Components: storm-ui
Reporter: Alexandre Vermeerbergen


I have several Storm clusters to manage (integration test cluster, 
pre-production ones, QA, several productions), and I always find it disturbing 
to have the same "Storm UI" title in my web browser's tabs for the Storm UI of 
all these different environments.

I know it's trivial to update the content of {{public/index.html}} to change 
the {{Storm UI}} title line to whatever I'd like, but I feel it would be 
cleaner to have the possibility to set this title in the {{conf/storm.yaml}} 
file, for example with a {{ui.title}} key.

Of course {{conf/defaults.yaml}} would need to have an extra line with:

{{ui.title: "Storm UI"}}

to avoid any regression for anyone not willing to customize Storm UI's title.
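
Each environment could then override the default in its own {{conf/storm.yaml}}, 
for example (the value below is purely illustrative):

{{ui.title: "Storm UI - preproduction EU"}}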

Also, I think that, for consistency, this other place in {{public/index.html}} 
where 'Storm UI' can be found should also get its value from the {{ui.title}} key:

{{Storm UI}}

If there is no objection to this proposal, then I could self-assign this task.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-3957) Replace static Community/People page by a link to ASF'shttps://projects.apache.org/committee.html?storm

2023-08-18 Thread Alexandre Vermeerbergen (Jira)
Alexandre Vermeerbergen created STORM-3957:
--

 Summary: Replace static Community/People page by a link to 
ASF'shttps://projects.apache.org/committee.html?storm
 Key: STORM-3957
 URL: https://issues.apache.org/jira/browse/STORM-3957
 Project: Apache Storm
  Issue Type: Documentation
Reporter: Alexandre Vermeerbergen
Assignee: Alexandre Vermeerbergen






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (STORM-3957) Replace static Community/People page by a link to ASF's https://projects.apache.org/committee.html?storm

2023-08-18 Thread Alexandre Vermeerbergen (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Vermeerbergen updated STORM-3957:
---
Summary: Replace static Community/People page by a link to ASF's 
https://projects.apache.org/committee.html?storm  (was: Replace static 
Community/People page by a link to 
ASF'shttps://projects.apache.org/committee.html?storm)

> Replace static Community/People page by a link to ASF's 
> https://projects.apache.org/committee.html?storm
> 
>
> Key: STORM-3957
> URL: https://issues.apache.org/jira/browse/STORM-3957
> Project: Apache Storm
>  Issue Type: Documentation
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Trivial
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-3918) Bump snakeyaml from 1.32 to 2.0

2023-05-13 Thread Alexandre Vermeerbergen (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Vermeerbergen closed STORM-3918.
--
Resolution: Fixed

Sorry for the duplicate JIRA.

> Bump snakeyaml from 1.32 to 2.0
> ---
>
> Key: STORM-3918
> URL: https://issues.apache.org/jira/browse/STORM-3918
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.4.0
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Critical
>
> The current snakeyaml version is vulnerable to 
> [CVE-2022-1471|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1471], 
> which is rated [9.8 
> CRITICAL|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2022-1471&vector=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H&version=3.1&source=NIST]
>  by NIST.
> The trivial fix is to update to snakeyaml 2.0.
> I tried manually replacing the existing snakeyaml JAR with the 2.0 version 
> (but keeping the same JAR file name to avoid issues with a potentially 
> hard-coded CLASSPATH), then restarted all Storm-related processes (Nimbus, 
> logviewer, Supervisor, Nimbus UI...) and deployed some topologies => 
> everything worked fine.
> So it looks like a trivial task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (STORM-3918) Bump snakeyaml from 1.32 to 2.0

2023-05-12 Thread Alexandre Vermeerbergen (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17722248#comment-17722248
 ] 

Alexandre Vermeerbergen commented on STORM-3918:


Strange: in master, I see that the snakeyaml version in pom.xml has already 
been set to 2.0.

Yet I find no Jira about this upgrade; the last Jira about a SnakeYAML update 
is: https://issues.apache.org/jira/browse/STORM-3889

I can close this Jira as "already done", but it's a shame not to have 
tracking...

> Bump snakeyaml from 1.32 to 2.0
> ---
>
> Key: STORM-3918
> URL: https://issues.apache.org/jira/browse/STORM-3918
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.4.0
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Critical
>
> The current snakeyaml version is vulnerable to 
> [CVE-2022-1471|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1471], 
> which is rated [9.8 
> CRITICAL|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2022-1471&vector=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H&version=3.1&source=NIST]
>  by NIST.
> The trivial fix is to update to snakeyaml 2.0.
> I tried manually replacing the existing snakeyaml JAR with the 2.0 version 
> (but keeping the same JAR file name to avoid issues with a potentially 
> hard-coded CLASSPATH), then restarted all Storm-related processes (Nimbus, 
> logviewer, Supervisor, Nimbus UI...) and deployed some topologies => 
> everything worked fine.
> So it looks like a trivial task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (STORM-3918) Bump snakeyaml from 1.32 to 2.0

2023-05-12 Thread Alexandre Vermeerbergen (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Vermeerbergen reassigned STORM-3918:
--

Assignee: Alexandre Vermeerbergen

> Bump snakeyaml from 1.32 to 2.0
> ---
>
> Key: STORM-3918
> URL: https://issues.apache.org/jira/browse/STORM-3918
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.4.0
>Reporter: Alexandre Vermeerbergen
>Assignee: Alexandre Vermeerbergen
>Priority: Critical
>
> The current snakeyaml version is vulnerable to 
> [CVE-2022-1471|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1471], 
> which is rated [9.8 
> CRITICAL|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2022-1471&vector=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H&version=3.1&source=NIST]
>  by NIST.
> The trivial fix is to update to snakeyaml 2.0.
> I tried manually replacing the existing snakeyaml JAR with the 2.0 version 
> (but keeping the same JAR file name to avoid issues with a potentially 
> hard-coded CLASSPATH), then restarted all Storm-related processes (Nimbus, 
> logviewer, Supervisor, Nimbus UI...) and deployed some topologies => 
> everything worked fine.
> So it looks like a trivial task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-3918) Bump snakeyaml from 1.32 to 2.0

2023-05-12 Thread Alexandre Vermeerbergen (Jira)
Alexandre Vermeerbergen created STORM-3918:
--

 Summary: Bump snakeyaml from 1.32 to 2.0
 Key: STORM-3918
 URL: https://issues.apache.org/jira/browse/STORM-3918
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 2.4.0
Reporter: Alexandre Vermeerbergen


The current snakeyaml version is vulnerable to 
[CVE-2022-1471|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1471], 
which is rated [9.8 
CRITICAL|https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?name=CVE-2022-1471&vector=AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H&version=3.1&source=NIST]
 by NIST.

The trivial fix is to update to snakeyaml 2.0.

I tried manually replacing the existing snakeyaml JAR with the 2.0 version (but 
keeping the same JAR file name to avoid issues with a potentially hard-coded 
CLASSPATH), then restarted all Storm-related processes (Nimbus, logviewer, 
Supervisor, Nimbus UI...) and deployed some topologies => everything worked 
fine.

So it looks like a trivial task.
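
A sketch of the manual swap described above (the paths and file names are 
assumptions based on a typical installation, not taken from this report):

{code}
# Overwrite the bundled snakeyaml 1.32 JAR with the 2.0 JAR, keeping the
# old file name so any hard-coded CLASSPATH entries still resolve.
cp snakeyaml-2.0.jar /usr/local/Storm/storm-stable/lib/snakeyaml-1.32.jar
{code}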

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (STORM-2428) Flux-core jar contains unpacked dependencies

2020-03-23 Thread Alexandre Vermeerbergen (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17064887#comment-17064887
 ] 

Alexandre Vermeerbergen commented on STORM-2428:


Hello,

+1: This issue causes our attempt to use the Apache SolrJ 8.4.1 client in our 
topologies to fail with the latest 1.2.x Storm release.
Is there any plan to fix it in Storm 1.x?
Has it been fixed in Storm 2.x?
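
For anyone checking their own build, a quick way to confirm the overlap is to 
list the jar contents (the jar file name/version below are illustrative):

{code}
# Unpacked http-client classes show up as org/apache/http/... entries
# inside the flux-core jar itself.
jar tf flux-core-1.2.2.jar | grep '^org/apache/http/'
{code}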


Kind regards,
Alexandre Vermeerbergen

> Flux-core jar contains unpacked dependencies
> 
>
> Key: STORM-2428
> URL: https://issues.apache.org/jira/browse/STORM-2428
> Project: Apache Storm
>  Issue Type: Bug
>  Components: Flux
>Affects Versions: 1.1.0, 1.0.3, 1.2.1, 1.2.2
>Reporter: Julien Nioche
>Priority: Major
>
> The jar file for flux-core contains classes from /org/apache/http/. This was 
> not the case before and causes problems with projects which rely on a 
> different version of http-client. 
> I can't see any references to http-client in the pom though.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (STORM-3032) NullPointerException in KafkaSpout emitOrRetryTuple

2019-09-25 Thread Alexandre Vermeerbergen (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937469#comment-16937469
 ] 

Alexandre Vermeerbergen commented on STORM-3032:


Important detail: we reproduce this issue when using Storm 1.2.3 with the 
OpenJ9 flavor of AdoptOpenJDK 8 update 222 to run the Supervisor (supervisor, 
worker, log analyzer) processes.

We do not reproduce this issue when using Storm 1.2.3 with the HotSpot flavor 
of AdoptOpenJDK 8 update 222 to run the Supervisor processes.

Hope it helps,
Alexandre Vermeerbergen

> NullPointerException in KafkaSpout emitOrRetryTuple
> ---
>
> Key: STORM-3032
> URL: https://issues.apache.org/jira/browse/STORM-3032
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.2.1
> Environment: Linux
>Reporter: Ajeesh B
>Priority: Major
>
> {code:java}
> java.lang.NullPointerException: null
> at clojure.lang.Numbers.ops(Numbers.java:1013) ~[clojure-1.7.0.jar:?]
> at clojure.lang.Numbers.multiply(Numbers.java:148) ~[clojure-1.7.0.jar:?]
> at org.apache.storm.stats$transferred_tuples_BANG_.invoke(stats.clj:131) 
> ~[storm-core-1.2.1.jar:1.2.1]
> at org.apache.storm.daemon.task$mk_tasks_fn$fn__4656.invoke(task.clj:167) 
> ~[storm-core-1.2.1.jar:1.2.1]
> at org.apache.storm.daemon.task$send_unanchored.invoke(task.clj:119) 
> ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.daemon.executor$fn__4975$fn__4990$send_spout_msg__4996.invoke(executor.clj:594)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.daemon.executor$fn__4975$fn$reify__5006.emit(executor.clj:618)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.spout.SpoutOutputCollector.emit(SpoutOutputCollector.java:50)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.kafka.spout.KafkaSpout.emitOrRetryTuple(KafkaSpout.java:496) 
> ~[stormjar.jar:?]
> at 
> org.apache.storm.kafka.spout.KafkaSpout.emitIfWaitingNotEmitted(KafkaSpout.java:440)
>  ~[stormjar.jar:?]
> at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:308) 
> ~[stormjar.jar:?]
> at 
> org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:654)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) 
> [storm-core-1.2.1.jar:1.2.1]
> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> at java.lang.Thread.run(Thread.java:785) [?:?]
> {code}
> Error in KafkaSpout emitOrRetry method



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (STORM-3032) NullPointerException in KafkaSpout emitOrRetryTuple

2019-09-18 Thread Alexandre Vermeerbergen (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16932498#comment-16932498
 ] 

Alexandre Vermeerbergen commented on STORM-3032:


Hello,


We are getting the same exception with Storm 1.2.3, using the Kafka client at 
version 2.0.1, against Kafka brokers at version 2.1.1.

Since this is a NullPointerException that seems related to some statistics 
computation, isn't there a way to catch it so as to limit the noise in the 
Storm logs? Unless it hints at a severe issue, in which case some guidance to 
understand the issue and recommendations to solve it would be welcome.

 

Kind regards,

Alexandre Vermeerbergen

> NullPointerException in KafkaSpout emitOrRetryTuple
> ---
>
> Key: STORM-3032
> URL: https://issues.apache.org/jira/browse/STORM-3032
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.2.1
> Environment: Linux
>Reporter: Ajeesh B
>Priority: Major
>
> {code:java}
> java.lang.NullPointerException: null
> at clojure.lang.Numbers.ops(Numbers.java:1013) ~[clojure-1.7.0.jar:?]
> at clojure.lang.Numbers.multiply(Numbers.java:148) ~[clojure-1.7.0.jar:?]
> at org.apache.storm.stats$transferred_tuples_BANG_.invoke(stats.clj:131) 
> ~[storm-core-1.2.1.jar:1.2.1]
> at org.apache.storm.daemon.task$mk_tasks_fn$fn__4656.invoke(task.clj:167) 
> ~[storm-core-1.2.1.jar:1.2.1]
> at org.apache.storm.daemon.task$send_unanchored.invoke(task.clj:119) 
> ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.daemon.executor$fn__4975$fn__4990$send_spout_msg__4996.invoke(executor.clj:594)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.daemon.executor$fn__4975$fn$reify__5006.emit(executor.clj:618)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.spout.SpoutOutputCollector.emit(SpoutOutputCollector.java:50)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at 
> org.apache.storm.kafka.spout.KafkaSpout.emitOrRetryTuple(KafkaSpout.java:496) 
> ~[stormjar.jar:?]
> at 
> org.apache.storm.kafka.spout.KafkaSpout.emitIfWaitingNotEmitted(KafkaSpout.java:440)
>  ~[stormjar.jar:?]
> at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:308) 
> ~[stormjar.jar:?]
> at 
> org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:654)
>  ~[storm-core-1.2.1.jar:1.2.1]
> at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) 
> [storm-core-1.2.1.jar:1.2.1]
> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> at java.lang.Thread.run(Thread.java:785) [?:?]
> {code}
> Error in KafkaSpout emitOrRetry method



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (STORM-3386) Set minimum Maven version for build to 3.5.0

2019-05-03 Thread Alexandre Vermeerbergen (JIRA)


[ 
https://issues.apache.org/jira/browse/STORM-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832454#comment-16832454
 ] 

Alexandre Vermeerbergen commented on STORM-3386:


Glad to help, and thank you very much for fixing this issue!

> Set minimum Maven version for build to 3.5.0
> 
>
> Key: STORM-3386
> URL: https://issues.apache.org/jira/browse/STORM-3386
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Alexandre Vermeerbergen found that the build doesn't work on Maven 3.3.9. 
> This is likely because some plugin requires Maven version 3.5.0, but they 
> forgot to specify that requirement in their POM.
> We might as well bump our Maven version check to 3.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (STORM-3386) Set minimum Maven version for build to 3.5.0

2019-05-01 Thread Alexandre Vermeerbergen (JIRA)


[ 
https://issues.apache.org/jira/browse/STORM-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831241#comment-16831241
 ] 

Alexandre Vermeerbergen edited comment on STORM-3386 at 5/1/19 8:48 PM:


I feel good to see that my suggestion led to this Jira being created so quickly :)


was (Author: avermeerbergen):
I feel good to see that my suggestion lead to this Jira being created so quickly :)

> Set minimum Maven version for build to 3.5.0
> 
>
> Key: STORM-3386
> URL: https://issues.apache.org/jira/browse/STORM-3386
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Alexandre Vermeerbergen found that the build doesn't work on Maven 3.3.9. 
> This is likely because some plugin requires Maven version 3.5.0, but they 
> forgot to specify that requirement in their POM.
> We might as well bump our Maven version check to 3.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (STORM-3386) Set minimum Maven version for build to 3.5.0

2019-05-01 Thread Alexandre Vermeerbergen (JIRA)


[ 
https://issues.apache.org/jira/browse/STORM-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831241#comment-16831241
 ] 

Alexandre Vermeerbergen commented on STORM-3386:


I feel good to see that my suggestion lead to this Jira being created so quickly :)

> Set minimum Maven version for build to 3.5.0
> 
>
> Key: STORM-3386
> URL: https://issues.apache.org/jira/browse/STORM-3386
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Alexandre Vermeerbergen found that the build doesn't work on Maven 3.3.9. 
> This is likely because some plugin requires Maven version 3.5.0, but they 
> forgot to specify that requirement in their POM.
> We might as well bump our Maven version check to 3.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-02-03 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351308#comment-16351308
 ] 

Alexandre Vermeerbergen commented on STORM-2914:


Hello [~hmclouro]

+1 for your proposal to get things decided quickly!

In your list of 7 questions, my vote is:

+1 for item #5, because I want to trust you guys on the meaning of "identical" 
in this sentence: "_Document that ProcessingGuarantee.NO_GUARANTEES has a 
behavior identical to auto.commit.enable=true but disregards this property 
altogether (if auto.commit.enable is set the KafkaSpoutConfig will throw an 
exception)_".

And I have no problem changing the way I configure the Kafka spout to get this 
"identical" behavior if the documentation is crystal clear about it. It needs 
to be very clear about how to replace the legacy use of auto.commit.enable and 
related settings with the newer ones.

+0 for the other items, meaning I am neutral there, as I'm selfishly concerned 
with my need to keep a behavior "identical" to auto.commit.enable=true.

I just hope that "identical" means "no noticeable difference in performance".
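
As a sketch of what the replacement configuration might look like with the 
storm-kafka-client builder API of this era (the broker address, topic name, and 
exact enum constant spelling are assumptions, not taken from the final patch):

{code:java}
// Instead of setting the legacy auto.commit.enable consumer property,
// ask the spout itself for the equivalent processing guarantee.
KafkaSpoutConfig<String, String> spoutConfig = KafkaSpoutConfig
    .builder("kafkabroker:9092", "metricsTopic")
    .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.NO_GUARANTEE)
    .build();
{code}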

Best regards,

Alexandre Vermeerbergen

PS: I also hope that in the next RC, the toollib/ directory will be cleaned of 
the *sources* and *javadoc* artifacts, which seem to break the display of Kafka 
spout statistics in Nimbus UI in 1.2.0 RC2 (see my previous comment about this 
in this thread).
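
A sketch of the corresponding cleanup on an affected installation (file names 
taken from the toollib/ listing quoted earlier in this thread):

{code}
# Remove the sources/javadoc artifacts so the spout-lag monitor picks up
# the real storm-kafka-monitor jar instead of the -javadoc one.
rm toollib/storm-kafka-monitor-1.2.0-javadoc.jar \
   toollib/storm-kafka-monitor-1.2.0-sources.jar
{code}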

> Remove enable.auto.commit support from storm-kafka-client
> -
>
> Key: STORM-2914
> URL: https://issues.apache.org/jira/browse/STORM-2914
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The enable.auto.commit option causes the KafkaConsumer to periodically commit 
> the latest offsets it has returned from poll(). It is convenient for use 
> cases where messages are polled from Kafka and processed synchronously, in a 
> loop. 
> Due to https://issues.apache.org/jira/browse/STORM-2913 we'd really like to 
> store some metadata in Kafka when the spout commits. This is not possible 
> with enable.auto.commit. I took at look at what that setting actually does, 
> and it just causes the KafkaConsumer to call commitAsync during poll (and 
> during a few other operations, e.g. close and assign) with some interval. 
> Ideally I'd like to get rid of ProcessingGuarantee.NONE, since I think 
> ProcessingGuarantee.AT_MOST_ONCE covers the same use cases, and is likely 
> almost as fast. The primary difference between them is that AT_MOST_ONCE 
> commits synchronously.
> If we really want to keep ProcessingGuarantee.NONE, I think we should make 
> our ProcessingGuarantee.NONE setting cause the spout to call commitAsync 
> after poll, and never use the enable.auto.commit option. This allows us to 
> include metadata in the commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-02-01 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349315#comment-16349315
 ] 

Alexandre Vermeerbergen edited comment on STORM-2914 at 2/1/18 9:46 PM:


Hello [~pshah] and [~Srdo],

Regarding storm-kafka-monitor*.jar: thanks for your explanation, it was 
indeed already in our toollib/ directory from Storm 1.2.0 RC2:

{{[root@ip-172-31-18-84 storm-stable]# ls toollib/}}
 {{storm-kafka-monitor-1.2.0.jar  storm-kafka-monitor-1.2.0-javadoc.jar  
storm-kafka-monitor-1.2.0-sources.jar}}

But I was no longer seeing Kafka spout stats in Nimbus UI, and it seems this 
was broken because of a missing class issue (very strange that it tries to 
load a class from the *-javadoc.jar* file, isn't it?):

{{org.apache.storm.utils.ShellUtils$ExitCodeException: Error: Could not find or 
load main class 
.usr.local.Storm.storm-stable.toollib.storm-kafka-monitor-1.2.0-javadoc.jar}}

{{    at org.apache.storm.utils.ShellUtils.runCommand(ShellUtils.java:231) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at org.apache.storm.utils.ShellUtils.run(ShellUtils.java:161) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.utils.ShellUtils$ShellCommandExecutor.execute(ShellUtils.java:371)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.utils.ShellUtils.execCommand(ShellUtils.java:461) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.utils.ShellUtils.execCommand(ShellUtils.java:444) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.utils.TopologySpoutLag.getLagResultForKafka(TopologySpoutLag.java:163)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.utils.TopologySpoutLag.getLagResultForNewKafkaSpout(TopologySpoutLag.java:189)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.utils.TopologySpoutLag.lag(TopologySpoutLag.java:57) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at org.apache.storm.ui.core$topology_lag.invoke(core.clj:805) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at org.apache.storm.ui.core$fn__9572.invoke(core.clj:1165) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.compojure.core$make_route$fn__5965.invoke(core.clj:100) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.compojure.core$if_route$fn__5953.invoke(core.clj:46) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.compojure.core$if_method$fn__5946.invoke(core.clj:31) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.compojure.core$routing$fn__5971.invoke(core.clj:113) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at clojure.core$some.invoke(core.clj:2570) ~[clojure-1.7.0.jar:?]}}
 {{    at 
org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:113) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at clojure.lang.RestFn.applyTo(RestFn.java:139) 
~[clojure-1.7.0.jar:?]}}
 {{    at clojure.core$apply.invoke(core.clj:632) ~[clojure-1.7.0.jar:?]}}
 {{    at 
org.apache.storm.shade.compojure.core$routes$fn__5975.invoke(core.clj:118) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.cors$wrap_cors$fn__8880.invoke(cors.clj:149)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__8827.invoke(json.clj:56)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__6607.invoke(multipart_params.clj:118)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__7890.invoke(reload.clj:22)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.ui.helpers$requests_middleware$fn__6860.invoke(helpers.clj:52) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.ui.core$catch_errors$fn__9747.invoke(core.clj:1428) 
~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__6527.invoke(keyword_params.clj:35)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__6570.invoke(nested_params.clj:84)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.params$wrap_params$fn__6499.invoke(params.clj:64)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__6607.invoke(multipart_params.clj:118)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__6822.invoke(flash.clj:35)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 
org.apache.storm.shade.ring.middleware.session$wrap_session$fn__6808.invoke(session.clj:98)
 ~[storm-core-1.2.0.jar:1.2.0]}}
 {{    at 

[jira] [Commented] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-02-01 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349315#comment-16349315
 ] 

Alexandre Vermeerbergen commented on STORM-2914:


Hello [~Srdo]

Regarding storm-kafka-monitor*.jar: thanks for your explanation, it was 
indeed already in our toollib/ directory from Storm 1.2.0 RC2:

{{[root@ip-172-31-18-84 storm-stable]# ls toollib/}}
{{storm-kafka-monitor-1.2.0.jar  storm-kafka-monitor-1.2.0-javadoc.jar  
storm-kafka-monitor-1.2.0-sources.jar}}

But I was no longer seeing Kafka spout stats in Nimbus UI, and it seems this 
was broken because of a missing class issue (very strange that it tries to 
load a class from the *-javadoc.jar* file, isn't it?):

{{org.apache.storm.utils.ShellUtils$ExitCodeException: Error: Could not find or 
load main class 
.usr.local.Storm.storm-stable.toollib.storm-kafka-monitor-1.2.0-javadoc.jar}}

{{    at org.apache.storm.utils.ShellUtils.runCommand(ShellUtils.java:231) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at org.apache.storm.utils.ShellUtils.run(ShellUtils.java:161) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.utils.ShellUtils$ShellCommandExecutor.execute(ShellUtils.java:371)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at org.apache.storm.utils.ShellUtils.execCommand(ShellUtils.java:461) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at org.apache.storm.utils.ShellUtils.execCommand(ShellUtils.java:444) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.utils.TopologySpoutLag.getLagResultForKafka(TopologySpoutLag.java:163)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.utils.TopologySpoutLag.getLagResultForNewKafkaSpout(TopologySpoutLag.java:189)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.utils.TopologySpoutLag.lag(TopologySpoutLag.java:57) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at org.apache.storm.ui.core$topology_lag.invoke(core.clj:805) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at org.apache.storm.ui.core$fn__9572.invoke(core.clj:1165) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.compojure.core$make_route$fn__5965.invoke(core.clj:100) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.compojure.core$if_route$fn__5953.invoke(core.clj:46) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.compojure.core$if_method$fn__5946.invoke(core.clj:31) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.compojure.core$routing$fn__5971.invoke(core.clj:113) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at clojure.core$some.invoke(core.clj:2570) ~[clojure-1.7.0.jar:?]}}
{{    at 
org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:113) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at clojure.lang.RestFn.applyTo(RestFn.java:139) 
~[clojure-1.7.0.jar:?]}}
{{    at clojure.core$apply.invoke(core.clj:632) ~[clojure-1.7.0.jar:?]}}
{{    at 
org.apache.storm.shade.compojure.core$routes$fn__5975.invoke(core.clj:118) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.cors$wrap_cors$fn__8880.invoke(cors.clj:149)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__8827.invoke(json.clj:56)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__6607.invoke(multipart_params.clj:118)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__7890.invoke(reload.clj:22)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.ui.helpers$requests_middleware$fn__6860.invoke(helpers.clj:52) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.ui.core$catch_errors$fn__9747.invoke(core.clj:1428) 
~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__6527.invoke(keyword_params.clj:35)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__6570.invoke(nested_params.clj:84)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.params$wrap_params$fn__6499.invoke(params.clj:64)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__6607.invoke(multipart_params.clj:118)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__6822.invoke(flash.clj:35)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.middleware.session$wrap_session$fn__6808.invoke(session.clj:98)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 
org.apache.storm.shade.ring.util.servlet$make_service_method$fn__6357.invoke(servlet.clj:127)
 ~[storm-core-1.2.0.jar:1.2.0]}}
{{    at 

[jira] [Commented] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-02-01 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348563#comment-16348563
 ] 

Alexandre Vermeerbergen commented on STORM-2914:


[~Srdo]

Also, I noticed that among the artifacts built by Maven from your 1.x branch, 
there's this new JAR file:

storm-kafka-monitor-1.2.1-SNAPSHOT.jar

What is it meant for? Should I include it in my topologies' "BigJars"?

Worth mentioning: in Storm UI, the stats on Kafka consumption are no longer 
displayed... is it related to this new JAR?

Best regards,

Alexandre

 

> Remove enable.auto.commit support from storm-kafka-client
> -
>
> Key: STORM-2914
> URL: https://issues.apache.org/jira/browse/STORM-2914
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The enable.auto.commit option causes the KafkaConsumer to periodically commit 
> the latest offsets it has returned from poll(). It is convenient for use 
> cases where messages are polled from Kafka and processed synchronously, in a 
> loop. 
> Due to https://issues.apache.org/jira/browse/STORM-2913 we'd really like to 
> store some metadata in Kafka when the spout commits. This is not possible 
> with enable.auto.commit. I took at look at what that setting actually does, 
> and it just causes the KafkaConsumer to call commitAsync during poll (and 
> during a few other operations, e.g. close and assign) with some interval. 
> Ideally I'd like to get rid of ProcessingGuarantee.NONE, since I think 
> ProcessingGuarantee.AT_MOST_ONCE covers the same use cases, and is likely 
> almost as fast. The primary difference between them is that AT_MOST_ONCE 
> commits synchronously.
> If we really want to keep ProcessingGuarantee.NONE, I think we should make 
> our ProcessingGuarantee.NONE setting cause the spout to call commitAsync 
> after poll, and never use the enable.auto.commit option. This allows us to 
> include metadata in the commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-02-01 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348558#comment-16348558
 ] 

Alexandre Vermeerbergen commented on STORM-2914:


Hello [~Srdo]

I have built storm-kafka-client.jar from your 1.x branch; tests are starting 
now.

While it's too early to say whether it's OK or not, I noticed the following 
warning in the log of one of our topologies:

2018-02-01 13:08:25.016 o.a.k.c.u.AppInfoParser Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [INFO] Kafka version : 0.10.2.1
2018-02-01 13:08:25.016 o.a.k.c.u.AppInfoParser Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [INFO] Kafka commitId : e89bffd6b2eff799
2018-02-01 13:08:25.076 o.a.s.k.s.KafkaSpout Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [INFO] Partitions revoked. [consumer-group=StormPodsyncTopology_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ, consumer=org.apache.kafka.clients.consumer.KafkaConsumer@5cb084d9, topic-partitions=[]]
2018-02-01 13:08:25.077 o.a.s.k.s.KafkaSpout Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [INFO] Partitions reassignment. [task-ID=2, consumer-group=StormPodsyncTopology_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ, consumer=org.apache.kafka.clients.consumer.KafkaConsumer@5cb084d9, topic-partitions=[podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-15, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-4, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-3, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-6, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-5, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-0, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-2, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-1, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-12, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-11, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-14, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-13, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-8, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-7, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-10, podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-9]]
2018-02-01 13:08:25.083 o.a.k.c.c.i.AbstractCoordinator Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [INFO] Discovered coordinator ec2-34-242-207-227.eu-west-1.compute.amazonaws.com:9092 (id: 2147483644 rack: null) for group StormPodsyncTopology_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ.
2018-02-01 13:08:25.116 o.a.s.k.s.i.CommitMetadata Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [WARN] Failed to deserialize [OffsetAndMetadata{offset=63683, metadata='{topic-partition=podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-15, offset=63682, numFails=0, thread='Thread-4-podOrTenantsFromKafkaSpout-executor[2 2]'}'}]. Error likely occurred because the last commit for this topic-partition was done using an earlier version of Storm. Defaulting to behavior compatible with earlier version
2018-02-01 13:08:25.125 o.a.s.k.s.i.CommitMetadata Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [WARN] Failed to deserialize [OffsetAndMetadata{offset=67701, metadata='{topic-partition=podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-4, offset=67700, numFails=0, thread='Thread-4-podOrTenantsFromKafkaSpout-executor[2 2]'}'}]. Error likely occurred because the last commit for this topic-partition was done using an earlier version of Storm. Defaulting to behavior compatible with earlier version
2018-02-01 13:08:25.130 o.a.s.k.s.i.CommitMetadata Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [WARN] Failed to deserialize [OffsetAndMetadata{offset=69382, metadata='{topic-partition=podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-3, offset=69381, numFails=0, thread='Thread-4-podOrTenantsFromKafkaSpout-executor[2 2]'}'}]. Error likely occurred because the last commit for this topic-partition was done using an earlier version of Storm. Defaulting to behavior compatible with earlier version
2018-02-01 13:08:25.134 o.a.s.k.s.i.CommitMetadata Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [WARN] Failed to deserialize [OffsetAndMetadata{offset=78943, metadata='{topic-partition=podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-6, offset=78942, numFails=0, thread='Thread-4-podOrTenantsFromKafkaSpout-executor[2 2]'}'}]. Error likely occurred because the last commit for this topic-partition was done using an earlier version of Storm. Defaulting to behavior compatible with earlier version
2018-02-01 13:08:25.135 o.a.s.k.s.i.CommitMetadata Thread-4-podOrTenantsFromKafkaSpout-executor[2 2] [WARN] Failed to deserialize [OffsetAndMetadata{offset=76603, metadata='{topic-partition=podsync_RealTimeSupervision_zDHZGCMKTHGySBqVw9DiqQ-5, offset=76602, numFails=0, thread='Thread-4-podOrTenantsFromKafkaSpout-executor[2 2]'}'}]. Error likely occurred because the last commit for this 

[jira] [Commented] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-01-28 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16342987#comment-16342987
 ] 

Alexandre Vermeerbergen commented on STORM-2914:


[~Srdo]

We do set topology.max.spout.pending, and we already discussed in the past 
that, until Storm 2.x, we have to use our own bypass to overcome the 
limitations of Storm 1.x back-pressure:

See: 
[https://mail-archives.apache.org/mod_mbox/storm-user/201705.mbox/%3CCADeKz6qZG12mN-=gf+mpta1jwxdk8_wwz1npbcgslx7fga4...@mail.gmail.com%3E]

(and subsequent posts). We observe no OOM, just that consumption from the 
spouts stops, regardless of whether it is our own Kafka spout, the old Storm 
Kafka client, or the newer storm-kafka-client.

Back to Storm 1.2.0: any idea when I could test a fix which removes the 
recurring WARN message about metadata?

Thanks!

> Remove enable.auto.commit support from storm-kafka-client
> -
>
> Key: STORM-2914
> URL: https://issues.apache.org/jira/browse/STORM-2914
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The enable.auto.commit option causes the KafkaConsumer to periodically commit 
> the latest offsets it has returned from poll(). It is convenient for use 
> cases where messages are polled from Kafka and processed synchronously, in a 
> loop. 
> Due to https://issues.apache.org/jira/browse/STORM-2913 we'd really like to 
> store some metadata in Kafka when the spout commits. This is not possible 
> with enable.auto.commit. I took at look at what that setting actually does, 
> and it just causes the KafkaConsumer to call commitAsync during poll (and 
> during a few other operations, e.g. close and assign) with some interval. 
> Ideally I'd like to get rid of ProcessingGuarantee.NONE, since I think 
> ProcessingGuarantee.AT_MOST_ONCE covers the same use cases, and is likely 
> almost as fast. The primary difference between them is that AT_MOST_ONCE 
> commits synchronously.
> If we really want to keep ProcessingGuarantee.NONE, I think we should make 
> our ProcessingGuarantee.NONE setting cause the spout to call commitAsync 
> after poll, and never use the enable.auto.commit option. This allows us to 
> include metadata in the commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-01-27 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16342221#comment-16342221
 ] 

Alexandre Vermeerbergen edited comment on STORM-2914 at 1/27/18 4:47 PM:
-

Hello [~Srdo],

We use auto-commit when consuming Kafka messages in our "real-time alerting" 
topology.

This topology doesn't use acking nor anchoring.

Indeed, the purpose of our "real-time alerting" topology is to evaluate metrics 
read from Kafka through expressions (called triggers, like in Zabbix) into 
"alerting severities", which we write both to a Redis database (for rendering 
in an alerting web app) and to other Kafka topics (for notification / 
persistence purposes handled by other topologies).

We observed that sometimes our alerting topology is flooded by an excessive 
rate of metrics (= Kafka messages read by the Kafka spout), because sometimes 
a remote Kafka cluster which was temporarily unable to replicate data to our 
central Kafka cluster (the one plugged into our topologies) comes "back to 
life" and sends all metrics at once, including metrics which are too old to be 
relevant for real-time alerting.

And because Storm 1.x back-pressure is kind of ... well, say "limited", the 
result of such flooding is that our topology is just stuck and no longer 
consumes any metric.

So we have a simple self-healer "cron" which periodically checks whether or not 
our topology consumes metrics over a sliding window of 10 minutes, and when 
consumption has stopped for 10 minutes, this cron restarts our topology with 
the "LATEST" strategy.

Now, with this context: after trying many combinations, we found out that with 
Storm 1.1.0 and with our own Kafka spout (working in auto-commit mode, but not 
as powerful as storm-kafka-client), we had to enable auto-commit, otherwise our 
topology would get stuck far too often.

We had been waiting for Storm 1.2.0 to use the official storm-kafka-client 
with auto-commit support, because our own spout was quite limited.

Now, in order to answer your question about the removal of auto-commit support 
from storm-kafka-client, I am unsure what I should answer: what would guarantee 
me that my real-time alerting topology's stability won't be worse than it was 
when I used storm-kafka-client before the following commit broke it: 
[https://github.com/apache/storm/commit/a3899b75a79781602fa58b90de6c8aa784af5332#diff-7d7cbc8f5444fa7ada7962033fc31c5e]?

Best regards,

Alexandre Vermeerbergen

 


was (Author: avermeerbergen):
Hello [~Srdo],

We use auto-commit when consuming Kafka messages in our "real-time alerting" 
topology.

This topology doesn't use acking not anchoring.

Indeed, the purpose of our "real-time alerting" topology is to evaluate metrics 
read from Kafka through expressions (called triggers, like in Zabbix) into 
"alerting severities", which we write both to a Redis database (for rendering 
in an alerting web app) and to other Kafka topics (for notification / 
persistence purposes handled by other topologies).

We observed that sometimes our alerting topology is flooded by an excessive 
rate of metrics (= Kafka messages read by the Kafka spout), because sometimes 
a remote Kafka cluster which was temporarily unable to replicate data to our 
central Kafka cluster (the one plugged into our topologies) comes "back to 
life" and sends all metrics at once, including metrics which are too old to be 
relevant for real-time alerting.

And because Storm 1.x back-pressure is kind of ... well, say "limited", the 
result of such flooding is that our topology is just stuck and no longer 
consumes any metric.

So we have a simple self-healer "cron" which periodically checks whether or not 
our topology consumes metrics over a sliding window of 10 minutes, and when 
consumption has stopped for 10 minutes, this cron restarts our topology with 
the "LATEST" strategy.

Now, with this context: after trying many combinations, we found out that with 
Storm 1.1.0 and with our own Kafka spout (working in auto-commit mode, but not 
as powerful as storm-kafka-client), we had to enable auto-commit, otherwise our 
topology would get stuck far too often.

We had been waiting for Storm 1.2.0 to use the official storm-kafka-client 
with auto-commit support, because our own spout was quite limited.

Now, in order to answer your question about the removal of auto-commit support 
from storm-kafka-client, I am unsure what I should answer: what would guarantee 
me that my real-time alerting topology's stability won't be worse than it was 
when I used storm-kafka-client before the following commit broke it: 
[https://github.com/apache/storm/commit/a3899b75a79781602fa58b90de6c8aa784af5332#diff-7d7cbc8f5444fa7ada7962033fc31c5e
 

[jira] [Commented] (STORM-2914) Remove enable.auto.commit support from storm-kafka-client

2018-01-27 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16342221#comment-16342221
 ] 

Alexandre Vermeerbergen commented on STORM-2914:


Hello [~Srdo],

We use auto-commit when consuming Kafka messages in our "real-time alerting" 
topology.

This topology doesn't use acking not anchoring.

Indeed, the purpose of our "real-time alerting" topology is to evaluate metrics 
read from Kafka through expressions (called triggers, like in Zabbix) into 
"alerting severities", which we write both to a Redis database (for rendering 
in an alerting web app) and to other Kafka topics (for notification / 
persistence purposes handled by other topologies).

We observed that sometimes our alerting topology is flooded by an excessive 
rate of metrics (= Kafka messages read by the Kafka spout), because sometimes 
a remote Kafka cluster which was temporarily unable to replicate data to our 
central Kafka cluster (the one plugged into our topologies) comes "back to 
life" and sends all metrics at once, including metrics which are too old to be 
relevant for real-time alerting.

And because Storm 1.x back-pressure is kind of ... well, say "limited", the 
result of such flooding is that our topology is just stuck and no longer 
consumes any metric.

So we have a simple self-healer "cron" which periodically checks whether or not 
our topology consumes metrics over a sliding window of 10 minutes, and when 
consumption has stopped for 10 minutes, this cron restarts our topology with 
the "LATEST" strategy.

Now, with this context: after trying many combinations, we found out that with 
Storm 1.1.0 and with our own Kafka spout (working in auto-commit mode, but not 
as powerful as storm-kafka-client), we had to enable auto-commit, otherwise our 
topology would get stuck far too often.

We had been waiting for Storm 1.2.0 to use the official storm-kafka-client 
with auto-commit support, because our own spout was quite limited.

Now, in order to answer your question about the removal of auto-commit support 
from storm-kafka-client, I am unsure what I should answer: what would guarantee 
me that my real-time alerting topology's stability won't be worse than it was 
when I used storm-kafka-client before the following commit broke it: 
[https://github.com/apache/storm/commit/a3899b75a79781602fa58b90de6c8aa784af5332#diff-7d7cbc8f5444fa7ada7962033fc31c5e]?

Best regards,

Alexandre Vermeerbergen

 

> Remove enable.auto.commit support from storm-kafka-client
> -
>
> Key: STORM-2914
> URL: https://issues.apache.org/jira/browse/STORM-2914
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-kafka-client
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Major
>
> The enable.auto.commit option causes the KafkaConsumer to periodically commit 
> the latest offsets it has returned from poll(). It is convenient for use 
> cases where messages are polled from Kafka and processed synchronously, in a 
> loop. 
> Due to https://issues.apache.org/jira/browse/STORM-2913 we'd really like to 
> store some metadata in Kafka when the spout commits. This is not possible 
> with enable.auto.commit. I took a look at what that setting actually does, 
> and it just causes the KafkaConsumer to call commitAsync during poll (and 
> during a few other operations, e.g. close and assign) with some interval. 
> Ideally I'd like to get rid of ProcessingGuarantee.NONE, since I think 
> ProcessingGuarantee.AT_MOST_ONCE covers the same use cases, and is likely 
> almost as fast. The primary difference between them is that AT_MOST_ONCE 
> commits synchronously.
> If we really want to keep ProcessingGuarantee.NONE, I think we should make 
> our ProcessingGuarantee.NONE setting cause the spout to call commitAsync 
> after poll, and never use the enable.auto.commit option. This allows us to 
> include metadata in the commit.
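
[Editor's note: for illustration only, not the spout's actual code. Below is a 
minimal sketch of the proposal above against the plain kafka-clients API; the 
topic, group id, and bootstrap servers are made up, and poll(Duration) assumes 
a recent kafka-clients version. With enable.auto.commit=false, the caller 
commits right after poll via commitAsync, and can attach metadata to each 
committed offset, which auto-commit cannot do.]

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");  // illustrative
        props.put("group.id", "alerting-topology");     // illustrative
        props.put("enable.auto.commit", "false");       // caller commits itself
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("metrics")); // illustrative
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(200));
                Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
                for (ConsumerRecord<String, String> r : records) {
                    process(r); // hand the record to the topology
                    // Commit the *next* offset to consume, plus arbitrary
                    // metadata -- the part enable.auto.commit cannot provide.
                    toCommit.put(new TopicPartition(r.topic(), r.partition()),
                            new OffsetAndMetadata(r.offset() + 1, "spout metadata"));
                }
                if (!toCommit.isEmpty()) {
                    consumer.commitAsync(toCommit, null); // async, like NONE would be
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> r) {
        // placeholder for real processing
    }
}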



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (STORM-2851) org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions sometimes throws ConcurrentModificationException

2017-12-10 Thread Alexandre Vermeerbergen (JIRA)
Alexandre Vermeerbergen created STORM-2851:
--

 Summary: 
org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions 
sometimes throws ConcurrentModificationException
 Key: STORM-2851
 URL: https://issues.apache.org/jira/browse/STORM-2851
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-kafka-client
Affects Versions: 1.2.0
 Environment: Using Storm 1.2.0 preview binaries shared by Stig Rohde 
Døssing & Jungtaek Lim through the "[Discuss] Release Storm 1.2.0" discussion 
on the Storm developers' mailing list

With one Nimbus VM, 6 Supervisor VMs, 3 Zookeeper VMs, 15 topologies, talking 
to a 5-VM Kafka broker set (based on Kafka 0.10.2), all with Oracle Server 
JRE 8 update 152.

About 15 topologies, handling around 1 million Kafka messages per minute, and 
connected to Redis, OpenTSDB & HBase.
Reporter: Alexandre Vermeerbergen


Hello,

We have been running Storm 1.2.0 preview on our pre-production supervision 
system.
We noticed in the logs of our topology that persists logs to HBase the 
following exceptions (about 4 times over a 48-hour period):

java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
at java.util.HashMap$KeyIterator.next(HashMap.java:1466)
at org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:347)
at org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:320)
at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:245)
at org.apache.storm.daemon.executor$fn__4963$fn__4978$fn__5009.invoke(executor.clj:647)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)

It looks like there's something to fix here, such as making the map 
thread-safe, or managing exclusive modification of this map at the caller 
level.
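
[Editor's note: the sketch below is not the actual KafkaSpout code, just an 
illustration of the general pattern behind this exception; the map name and 
values are made up. Removing entries from a HashMap while iterating its live 
key set throws ConcurrentModificationException; iterating over a snapshot of 
the keys (or using a ConcurrentHashMap) avoids it.]

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class CmeSketch {

    public static void main(String[] args) {
        Map<String, Long> retriable = new HashMap<>();
        retriable.put("logs-0", 42L);
        retriable.put("logs-1", 7L);

        // Broken pattern: removing from the map while iterating its live key
        // set makes the next iterator step throw ConcurrentModificationException.
        //
        //   for (String tp : retriable.keySet()) {
        //       if (retriable.get(tp) < 10L) {
        //           retriable.remove(tp);
        //       }
        //   }

        // Safe variant: iterate over a snapshot of the keys and mutate the
        // real map (a ConcurrentHashMap would be another option).
        for (String tp : new ArrayList<>(retriable.keySet())) {
            if (retriable.get(tp) < 10L) {
                retriable.remove(tp);
            }
        }
        System.out.println(retriable); // prints {logs-0=42}
    }
}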

Note: this topology is using the Storm Kafka Client spout with default 
properties (unlike our other topologies, which are based on auto-commit). 
However, it's the one which deals with the highest rate of messages (lines of 
logs coming from about 1 VMs, a nice scale test for Storm :))

Could it be fixed in the Storm 1.2.0 final version?

Best regards,
Alexandre Vermeerbergen




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (STORM-1757) Apache Beam Runner for Storm

2017-08-29 Thread Alexandre Vermeerbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145626#comment-16145626
 ] 

Alexandre Vermeerbergen commented on STORM-1757:


Hello,

Yes, I am still interested in a Beam Runner for Storm, because I think this 
feature would help compare the various stream processing engines against the 
same application (without any need to rewrite the application).

Such a comparison could benefit the Storm community, because I feel that Storm 
is appropriate for lots of throughput-driven use cases.

But maybe it could also be an incentive for Storm developers to try matching 
other engines' capabilities, as proposed in Beam's runner capability matrix 
at: https://beam.apache.org/documentation/runners/capability-matrix.

Alexandre Vermeerbergen




> Apache Beam Runner for Storm
> 
>
> Key: STORM-1757
> URL: https://issues.apache.org/jira/browse/STORM-1757
> Project: Apache Storm
>  Issue Type: Brainstorming
>Reporter: P. Taylor Goetz
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a call for interested parties to collaborate on an Apache Beam [1] 
> runner for Storm, and express their thoughts and opinions.
> Given the addition of the Windowing API to Apache Storm, we should be able to 
> map naturally to the Beam API. If not, it may be indicative of shortcomings 
> of the Storm API that should be addressed.
> [1] http://beam.incubator.apache.org



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (STORM-2707) Nimbus loops forever with getClusterInfo error if it loses storm.local.dir contents

2017-08-26 Thread Alexandre Vermeerbergen (JIRA)
Alexandre Vermeerbergen created STORM-2707:
--

 Summary: Nimbus loops forever with getClusterInfo error if it 
loses storm.local.dir contents
 Key: STORM-2707
 URL: https://issues.apache.org/jira/browse/STORM-2707
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 1.1.0, 1.1.1
 Environment: Storm deployed with only one Nimbus node (no HA).
Probably unrelated, but in our case we have:
*  3 Zookeeper VMs, and 4 to 8 Supervisor VMs. 
* Everything is running with Java 8 update 131 (Oracle Server JRE package)
* Everything is running on EC2 VMs / instances
Reporter: Alexandre Vermeerbergen


Hello,

Short issue description:
* Remove the storm.local.dir directory
* Storm UI is no longer able to query anything from Nimbus
* The Nimbus process prints getClusterInfo exceptions in its log whenever it 
gets a query from the Nimbus UI or from the "storm" command line
* To fix this issue, we have to stop all Storm processes, clean up the content 
of the Zookeeper nodes, then restart & redeploy our topologies (see the sketch 
below)

Expected behavior:
* In such a case, Storm should clean up the content of Zookeeper and recover 
in a mode that allows killing & restarting all topologies
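
[Editor's note: a minimal sketch of the manual Zookeeper cleanup step above, 
assuming Storm's default storm.zookeeper.root of /storm and Apache Curator as 
the client; the connect string is illustrative, and this must only run with 
all Storm daemons stopped.]

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class StormZkCleanup {

    public static void main(String[] args) throws Exception {
        // Illustrative ensemble address; run only with all Storm daemons stopped.
        try (CuratorFramework zk = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",
                new ExponentialBackoffRetry(1000, 3))) {
            zk.start();
            // Recursively delete Storm's state root (the default
            // storm.zookeeper.root is /storm; adjust if configured otherwise).
            if (zk.checkExists().forPath("/storm") != null) {
                zk.delete().deletingChildrenIfNeeded().forPath("/storm");
            }
        }
    }
}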

More details:
===
Sometimes we lose the content of storm.local.dir on our single-node Nimbus 
production cluster.

We haven't yet considered deploying Nimbus in HA because this is a relatively 
modest deployment with budget constraints on the number of IaaS resources 
which can be used for this application. So far so good, because in our 
environment Nimbus & Nimbus UI (hosted on the same VM) are supervised, and we 
also have self-healing crons to automatically kill & restart topologies that 
are blocked in Kafka consumption or have too many failed tuples (because Storm 
back-pressure has some fuzzy limits, so we use this by-pass, as approved by 
Roshan in a past discussion, but that's not the point here).
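
[Editor's note: a minimal sketch of such a self-healing check, assuming the 
Storm UI REST endpoint /api/v1/topology/summary; the host and port are 
illustrative, and the real cron job would parse the JSON counters over a 
sliding window before deciding to kill & redeploy a topology.]

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TopologyHealthCheck {

    public static void main(String[] args) {
        try {
            // Illustrative host/port; Storm UI serves a REST API, e.g.
            // /api/v1/topology/summary for the list of running topologies.
            URL url = new URL("http://storm-ui-host:8080/api/v1/topology/summary");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                for (String line; (line = in.readLine()) != null; ) {
                    body.append(line);
                }
            }
            // A real check would parse the JSON and compare emitted/acked
            // counters across runs (e.g. over a 10-minute sliding window);
            // here we only verify that the UI answers at all.
            System.out.println("Storm UI reachable: " + body.length() + " bytes");
        } catch (Exception e) {
            // UI/Nimbus unreachable or stuck: let cron trigger the kill &
            // redeploy of the topology (e.g. via the storm CLI).
            System.err.println("Health check failed: " + e.getMessage());
            System.exit(1);
        }
    }
}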

Our problem is that sometimes we lose the content of storm.local.dir.

When it happens, our supervision detects the issue because it can no longer 
query the Nimbus REST services on the Nimbus-UI process.

In such a case it tries to restart Storm-UI, but this doesn't help because 
queries to Storm-UI fail with the following stack trace when it tries to list 
all topologies:

org.apache.storm.thrift.TApplicationException: Internal error processing getClusterInfo
at org.apache.storm.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.storm.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.storm.generated.Nimbus$Client.recv_getClusterInfo(Nimbus.java:1168)
at org.apache.storm.generated.Nimbus$Client.getClusterInfo(Nimbus.java:1156)
at org.apache.storm.ui.core$cluster_summary.invoke(core.clj:356)
at org.apache.storm.ui.core$fn__9556.invoke(core.clj:1113)
at org.apache.storm.shade.compojure.core$make_route$fn__5976.invoke(core.clj:100)
at org.apache.storm.shade.compojure.core$if_route$fn__5964.invoke(core.clj:46)
at org.apache.storm.shade.compojure.core$if_method$fn__5957.invoke(core.clj:31)
at org.apache.storm.shade.compojure.core$routing$fn__5982.invoke(core.clj:113)
at clojure.core$some.invoke(core.clj:2570)
at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:113)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:632)
at org.apache.storm.shade.compojure.core$routes$fn__5986.invoke(core.clj:118)
at org.apache.storm.shade.ring.middleware.cors$wrap_cors$fn__8891.invoke(cors.clj:149)
at org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__8838.invoke(json.clj:56)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__6618.invoke(multipart_params.clj:118)
at org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__7901.invoke(reload.clj:22)
at org.apache.storm.ui.helpers$requests_middleware$fn__6871.invoke(helpers.clj:50)
at org.apache.storm.ui.core$catch_errors$fn__9758.invoke(core.clj:1428)
at org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__6538.invoke(keyword_params.clj:35)
at org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__6581.invoke(nested_params.clj:84)
at org.apache.storm.shade.ring.middleware.params$wrap_params$fn__6510.invoke(params.clj:64)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__6618.invoke(multipart_params.clj:118)
at org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__6833.invoke(flash.clj:35)
at org.apache.storm.shade.ring.middleware.session$wrap_session$fn__6819.invoke(session.clj:98)
at