[jira] [Closed] (STORM-4058) Provide ClusterMetrics via a Prometheus Preparable Reporter

2024-06-10 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4058.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Provide ClusterMetrics via a Prometheus Preparable Reporter
> ---
>
> Key: STORM-4058
> URL: https://issues.apache.org/jira/browse/STORM-4058
> Project: Apache Storm
>  Issue Type: New Feature
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0, 2.6.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As the title says: provide basic metrics via a Pushgateway client for 
> Prometheus.
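For context, pushing metrics to a Prometheus Pushgateway amounts to rendering gauge values in the Prometheus text exposition format and HTTP-PUTting them to the gateway. The sketch below is illustrative only: the metric names and the `render` helper are assumptions, not Storm's actual reporter code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: renders gauge values in the Prometheus text
// exposition format, which is what a Pushgateway expects as request body.
// Metric names are illustrative, not Storm's actual cluster metric names.
class PrometheusPayloadSketch {

    static String render(Map<String, Double> gauges) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Double> e : gauges.entrySet()) {
            // One TYPE comment plus one sample line per gauge.
            sb.append("# TYPE ").append(e.getKey()).append(" gauge\n");
            sb.append(e.getKey()).append(' ').append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Double> gauges = new LinkedHashMap<>();
        gauges.put("storm_cluster_supervisors", 3.0);
        gauges.put("storm_cluster_slots_used", 12.0);
        // The rendered body would then be sent via HTTP PUT to
        // http://<pushgateway>/metrics/job/<job-name>
        System.out.print(render(gauges));
    }
}
```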



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4059) Provide Topology / Other Metrics via a Prometheus Preparable Reporter

2024-06-10 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4059:
--

 Summary: Provide Topology / Other Metrics via a Prometheus 
Preparable Reporter
 Key: STORM-4059
 URL: https://issues.apache.org/jira/browse/STORM-4059
 Project: Apache Storm
  Issue Type: New Feature
Reporter: Richard Zowalla
 Fix For: 2.7.0, 2.6.3


As the title says.





[jira] [Created] (STORM-4058) Provide ClusterMetrics via a Prometheus Preparable Reporter

2024-06-07 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4058:
--

 Summary: Provide ClusterMetrics via a Prometheus Preparable 
Reporter
 Key: STORM-4058
 URL: https://issues.apache.org/jira/browse/STORM-4058
 Project: Apache Storm
  Issue Type: New Feature
Reporter: Richard Zowalla
Assignee: Richard Zowalla
 Fix For: 2.6.3


As the title says: provide basic metrics via a Pushgateway client for Prometheus.





[jira] [Created] (STORM-4057) Fix Worker Termination in K8S with Security Context

2024-06-07 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4057:
--

 Summary: Fix Worker Termination in K8S with Security Context
 Key: STORM-4057
 URL: https://issues.apache.org/jira/browse/STORM-4057
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 2.6.1
Reporter: Richard Zowalla
 Fix For: 2.6.2


https://lists.apache.org/thread/p79msxdkpzt3d57ycf1cpl8gn3j6tqkg





[jira] [Commented] (STORM-3768) Zero Values for Assigned Mem (MB) in Topology Summary and Used Mem (MB) in Supervisor Summary

2024-05-23 Thread Richard Zowalla (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848910#comment-17848910
 ] 

Richard Zowalla commented on STORM-3768:


Is this still an issue? Have you tried 2.6.2?

> Zero Values for Assigned Mem (MB) in Topology Summary and Used Mem (MB) in 
> Supervisor Summary
> -
>
> Key: STORM-3768
> URL: https://issues.apache.org/jira/browse/STORM-3768
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-ui
>Affects Versions: 2.3.0
> Environment: Linux storm 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 
> (2021-03-08) x86_64
> OpenJDK 64-Bit Server VM (build 13.0.2+8, mixed mode, sharing)
> Python 2.7.13 / Python3 3.5.3
>Reporter: Jonas Krauß
>Priority: Major
> Attachments: supervisor_zero_values_overview.jpg, 
> topology_values_overview.jpg, worker_values_topology.jpg, 
> zero_value_topology.jpg
>
>
> We have recently built storm 2.3.0-SNAPSHOT from git with commit hash 
> 4a1eea700766da2f175ac7eaba6064f0d7f0ff03 because we needed the bugfix in 
> STORM-3652.
> It seems in this build information about the sum of the used memory is lost 
> (it was working before in the 2.2.0 release version). Individual 
> topologies/components still get their memory assignment displayed, but the 
> total is missing. This affects the root view (*index.html*) of the UI, where 
> information is missing in the *supervisor section* (displayed value for used 
> mem is zero), as well as the topology view (*topology.html*, displayed value 
> for assigned mem in *topology summary* is zero).





[jira] [Resolved] (STORM-4037) confluent maven resolver may not be necessary any more

2024-05-23 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla resolved STORM-4037.

Fix Version/s: 2.6.2
   Resolution: Fixed

> confluent maven resolver may not be necessary any more
> --
>
> Key: STORM-4037
> URL: https://issues.apache.org/jira/browse/STORM-4037
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: PJ Fanning
>Priority: Major
> Fix For: 2.6.2
>
>
> After https://github.com/apache/storm/pull/3627, the repository ref at
> https://github.com/apache/storm/blob/d8ab31aa5c296d018c783cb159e1ccaf288ecc1c/external/storm-hdfs/pom.xml#L41
> may not be necessary.
> It is best to only use non-standard Maven repositories when absolutely necessary.
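For context, the kind of repository reference in question typically looks like the following pom.xml fragment (illustrative: the id and URL are assumptions, see the linked pom for the actual entry):

```xml
<!-- Illustrative non-standard repository entry in a pom.xml;
     the id and URL are assumptions, not copied from storm-hdfs. -->
<repositories>
  <repository>
    <id>confluent</id>
    <url>https://packages.confluent.io/maven/</url>
  </repository>
</repositories>
```

Removing such an entry lets the build resolve all artifacts from Maven Central, which is preferable once the dependency is published there.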





[jira] [Created] (STORM-4056) Build with -Pnative on MacOS

2024-05-23 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4056:
--

 Summary: Build with -Pnative on MacOS
 Key: STORM-4056
 URL: https://issues.apache.org/jira/browse/STORM-4056
 Project: Apache Storm
  Issue Type: Dependency upgrade
Affects Versions: 2.6.2
Reporter: Richard Zowalla


Building on macOS is currently not supported if -Pnative is added to the list 
of profiles.





[jira] [Commented] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-05-20 Thread Anthony Castrati (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847973#comment-17847973
 ] 

Anthony Castrati commented on STORM-4055:
-

Hi [~rzo1], I would be happy to provide a PR for a fix. It would be my first 
contribution to this code base, so I will want to take some time with it to 
ensure there are no other regressions. At the moment it is just difficult to 
find the time to pick this up, now that we have a workaround.

> ConcurrentModificationException on KafkaConsumer when running topology with 
> Metrics Reporters
> -
>
> Key: STORM-4055
> URL: https://issues.apache.org/jira/browse/STORM-4055
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.6.1
>Reporter: Anthony Castrati
>Priority: Major
>
> After a recent upgrade to storm-kafka-client on storm server 2.6.1, we are 
> seeing ConcurrentModificationException in our topology at runtime. I believe 
> this is due to the re-use of a KafkaConsumer instance between the KafkaSpout 
> and the 
> KafkaOffsetPartitionMetrics which were added some time between 2.4.0 and 
> 2.6.1.
>  
> h3. Steps to Reproduce:
> Configure a topology with a basic KafkaSpout. Configure the topology with one 
> of the metrics loggers. We used our own custom one, but reproduced it with 
> ConsoleStormReporter as well. The JMXReporter did not reproduce the issue for 
> us, but we did not dig into why.
> *reporter config:*
> {{topology.metrics.reporters: [}}
> {{  {}}
> {{    "filter": {}}
> {{      "expression": ".*",}}
> {{      "class": "org.apache.storm.metrics2.filters.RegexFilter"}}
> {{    },}}
> {{    "report.period": 15,}}
> {{    "report.period.units": "SECONDS",}}
> {{    "class": "org.apache.storm.metrics2.reporters.ConsoleStormReporter"}}
> {{  }}}
> {{]}}
> h3. Stacktrace:
> {quote}[ERROR] Exception thrown from NewRelicReporter#report. Exception was 
> suppressed.
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access. currentThread(name: 
> metrics-newRelicReporter-1-thread-1, id: 24) otherThread(id: 40)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2484)
>  ~[stormjar.jar:?]
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2465)
>  ~[stormjar.jar:?]
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2144)
>  ~[stormjar.jar:?]
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2123)
>  ~[stormjar.jar:?]
>     at 
> org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics.getBeginningOffsets(KafkaOffsetPartitionMetrics.java:181)
>  ~[stormjar.jar:?]
>     at 
> org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:93)
>  ~[stormjar.jar:?]
>     at 
> org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:90)
>  ~[stormjar.jar:?]
>     at 
> com.codahale.metrics.newrelic.transformer.GaugeTransformer.transform(GaugeTransformer.java:60)
>  ~[stormjar.jar:?]
>     at 
> com.codahale.metrics.newrelic.NewRelicReporter.lambda$transform$0(NewRelicReporter.java:154)
>  ~[stormjar.jar:?]
>     at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Unknown
>  Source) ~[?:?]
>     at java.base/java.util.TreeMap$EntrySpliterator.forEachRemaining(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet$UnmodifiableEntrySetSpliterator.forEachRemaining(Unknown
>  Source) ~[?:?]
>     at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
> ~[?:?]
>     at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown
>  Source) ~[?:?]
>     at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
> ~[?:?]
>     at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) 
> ~[?:?]
>     at java.b
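The root cause reported above, KafkaConsumer's single-threaded-access guard firing when the metrics reporter thread calls into the consumer while the spout thread owns it, can be illustrated with a self-contained sketch. Class, field, and method names here are made up; the real check lives in KafkaConsumer.acquire():

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the single-threaded-access guard that
// KafkaConsumer.acquire() implements; names are illustrative,
// not Kafka's actual internals.
class SingleThreadGuardSketch {
    private static final long NO_OWNER = -1L;
    private final AtomicLong ownerThreadId = new AtomicLong(NO_OWNER);

    void acquire() {
        long me = Thread.currentThread().getId();
        // Re-entrant for the owning thread; any other thread fails fast
        // instead of corrupting the consumer's internal state.
        if (ownerThreadId.get() != me && !ownerThreadId.compareAndSet(NO_OWNER, me)) {
            throw new ConcurrentModificationException(
                "KafkaConsumer is not safe for multi-threaded access");
        }
    }

    void release() {
        ownerThreadId.set(NO_OWNER);
    }

    public static void main(String[] args) throws InterruptedException {
        SingleThreadGuardSketch guard = new SingleThreadGuardSketch();
        guard.acquire(); // the "spout" thread owns the consumer
        Thread reporter = new Thread(() -> {
            try {
                guard.acquire(); // simulates the metrics reporter thread
            } catch (ConcurrentModificationException e) {
                System.out.println("CME: " + e.getMessage());
            }
        });
        reporter.start();
        reporter.join();
    }
}
```

This is why the gauges registered by KafkaOffsetPartitionMetrics must not call the spout's consumer from the reporter's scheduler thread.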

[jira] [Commented] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-05-02 Thread Richard Zowalla (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843001#comment-17843001
 ] 

Richard Zowalla commented on STORM-4055:


Thanks for the report. Since you have already dived into the code, would you be 
willing to provide a PR to fix it?

> ConcurrentModificationException on KafkaConsumer when running topology with 
> Metrics Reporters
> -
>
> Key: STORM-4055
> URL: https://issues.apache.org/jira/browse/STORM-4055
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 2.6.1
>Reporter: Anthony Castrati
>Priority: Major
>
> After a recent upgrade to storm-kafka-client on storm server 2.6.1, we are 
> seeing ConcurrentModificationException in our topology at runtime. I believe 
> this is due to the re-use of a KafkaConsumer instance between the KafkaSpout 
> and the 
> KafkaOffsetPartitionMetrics which were added some time between 2.4.0 and 
> 2.6.1.
>  
> h3. Steps to Reproduce:
> Configure a topology with a basic KafkaSpout. Configure the topology with one 
> of the metrics loggers. We used our own custom one, but reproduced it with 
> ConsoleStormReporter as well. The JMXReporter did not reproduce the issue for 
> us, but we did not dig into why.
> *reporter config:*
> {{topology.metrics.reporters: [}}
> {{  {}}
> {{    "filter": {}}
> {{      "expression": ".*",}}
> {{      "class": "org.apache.storm.metrics2.filters.RegexFilter"}}
> {{    },}}
> {{    "report.period": 15,}}
> {{    "report.period.units": "SECONDS",}}
> {{    "class": "org.apache.storm.metrics2.reporters.ConsoleStormReporter"}}
> {{  }}}
> {{]}}
> h3. Stacktrace:
> {quote}[ERROR] Exception thrown from NewRelicReporter#report. Exception was 
> suppressed.
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access. currentThread(name: 
> metrics-newRelicReporter-1-thread-1, id: 24) otherThread(id: 40)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2484)
>  ~[stormjar.jar:?]
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2465)
>  ~[stormjar.jar:?]
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2144)
>  ~[stormjar.jar:?]
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2123)
>  ~[stormjar.jar:?]
>     at 
> org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics.getBeginningOffsets(KafkaOffsetPartitionMetrics.java:181)
>  ~[stormjar.jar:?]
>     at 
> org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:93)
>  ~[stormjar.jar:?]
>     at 
> org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:90)
>  ~[stormjar.jar:?]
>     at 
> com.codahale.metrics.newrelic.transformer.GaugeTransformer.transform(GaugeTransformer.java:60)
>  ~[stormjar.jar:?]
>     at 
> com.codahale.metrics.newrelic.NewRelicReporter.lambda$transform$0(NewRelicReporter.java:154)
>  ~[stormjar.jar:?]
>     at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Unknown
>  Source) ~[?:?]
>     at java.base/java.util.TreeMap$EntrySpliterator.forEachRemaining(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet$UnmodifiableEntrySetSpliterator.forEachRemaining(Unknown
>  Source) ~[?:?]
>     at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
> ~[?:?]
>     at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown
>  Source) ~[?:?]
>     at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
> ~[?:?]
>     at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) 
> ~[?:?]
>     at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown 
> Source) ~[?:?]
>     at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRema

[jira] [Updated] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-04-30 Thread Anthony Castrati (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Castrati updated STORM-4055:

Description: 
After a recent upgrade to storm-kafka-client on storm server 2.6.1, we are 
seeing ConcurrentModificationException in our topology at runtime. I believe 
this is due to the re-use of a KafkaConsumer instance between the KafkaSpout 
and the 
KafkaOffsetPartitionMetrics which were added some time between 2.4.0 and 2.6.1.
 
h3. Steps to Reproduce:

Configure a topology with a basic KafkaSpout. Configure the topology with one 
of the metrics loggers. We used our own custom one, but reproduced it with 
ConsoleStormReporter as well. The JMXReporter did not reproduce the issue for 
us, but we did not dig into why.

*reporter config:*

{{topology.metrics.reporters: [}}
{{  {}}
{{    "filter": {}}
{{      "expression": ".*",}}
{{      "class": "org.apache.storm.metrics2.filters.RegexFilter"}}
{{    },}}
{{    "report.period": 15,}}
{{    "report.period.units": "SECONDS",}}
{{    "class": "org.apache.storm.metrics2.reporters.ConsoleStormReporter"}}

{{  }}}
{{]}}
h3. Stacktrace:
{quote}[ERROR] Exception thrown from NewRelicReporter#report. Exception was 
suppressed.
java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
multi-threaded access. currentThread(name: metrics-newRelicReporter-1-thread-1, 
id: 24) otherThread(id: 40)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2484)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2465)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2144)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2123)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics.getBeginningOffsets(KafkaOffsetPartitionMetrics.java:181)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:93)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:90)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.transformer.GaugeTransformer.transform(GaugeTransformer.java:60)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.lambda$transform$0(NewRelicReporter.java:154)
 ~[stormjar.jar:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Unknown
 Source) ~[?:?]
    at java.base/java.util.TreeMap$EntrySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet$UnmodifiableEntrySetSpliterator.forEachRemaining(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source) 
~[?:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.report(NewRelicReporter.java:138)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:243) 
~[metrics-core-3.2.6.jar:3.2.6]
    at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:182) 
[metrics-core-3.2.6.jar:3.2.6]
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) [?:?]
    at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source) 
[?:?]
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
 Source) [?:?]
    at

[jira] [Updated] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-04-30 Thread Anthony Castrati (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Castrati updated STORM-4055:

Description: 
After a recent upgrade to storm-kafka-client on storm server 2.6.1, we are 
seeing ConcurrentModificationException in our topology at runtime. I believe 
this is due to the re-use of a KafkaConsumer instance between the KafkaSpout 
and the 
KafkaOffsetPartitionMetrics which were added some time between 2.4.0 and 2.6.1.
 
h3. Steps to Reproduce:

Configure a topology with a basic KafkaSpout. Configure the topology with one 
of the metrics loggers. We used our own custom one, but reproduced it with 
ConsoleStormReporter as well. The JMXReporter did not reproduce the issue for 
us, but we did not dig into why.

*reporter config:*

{{topology.metrics.reporters: [}}
{{  {}}
{{    "filter": {}}
{{      "expression": ".*",}}
{{      "class": "org.apache.storm.metrics2.filters.RegexFilter"}}
{{    },}}
{{    "report.period": 15,}}
{{    "report.period.units": "SECONDS",}}
{{    "class": "org.apache.storm.metrics2.reporters.ConsoleStormReporter"}}
{{  }}}
{{]}}
h3. Stacktrace:
{quote}[ERROR] Exception thrown from NewRelicReporter#report. Exception was 
suppressed.
java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
multi-threaded access. currentThread(name: metrics-newRelicReporter-1-thread-1, 
id: 24) otherThread(id: 40)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2484)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2465)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2144)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2123)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics.getBeginningOffsets(KafkaOffsetPartitionMetrics.java:181)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:93)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:90)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.transformer.GaugeTransformer.transform(GaugeTransformer.java:60)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.lambda$transform$0(NewRelicReporter.java:154)
 ~[stormjar.jar:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Unknown
 Source) ~[?:?]
    at java.base/java.util.TreeMap$EntrySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet$UnmodifiableEntrySetSpliterator.forEachRemaining(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source) 
~[?:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.report(NewRelicReporter.java:138)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:243) 
~[metrics-core-3.2.6.jar:3.2.6]
    at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:182) 
[metrics-core-3.2.6.jar:3.2.6]
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) [?:?]
    at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source) 
[?:?]
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
 Source) [?:?]
    at

[jira] [Updated] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-04-30 Thread Anthony Castrati (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Castrati updated STORM-4055:

Description: 
After a recent upgrade to storm-kafka-client on storm server 2.6.1, we are 
seeing ConcurrentModificationException in our topology at runtime. I believe 
this is due to the re-use of a KafkaConsumer instance between the KafkaSpout 
and the 
KafkaOffsetPartitionMetrics which were added some time between 2.4.0 and 2.6.1.
 
h3. Steps to Reproduce:

Configure a topology with a basic KafkaSpout. Configure the topology with one 
of the metrics loggers. We used our own custom one, but reproduced it with 
ConsoleStormReporter as well. The JMXReporter did not reproduce the issue for 
us, but we did not dig into why.

*reporter config:*

{{topology.metrics.reporters: [}}
{{  {}}
{{    "filter": {}}
{{      "expression": ".*",}}
{{      "class": "org.apache.storm.metrics2.filters.RegexFilter"}}
{{    },}}
{{    "report.period": 15,}}
{{    "report.period.units": "SECONDS",}}
{{    "class": "org.apache.storm.metrics2.reporters.ConsoleStormReporter"}}
{{  }}}
{{]}}
h3. Stacktrace:
{quote}[ERROR] Exception thrown from NewRelicReporter#report. Exception was 
suppressed.
java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
multi-threaded access. currentThread(name: metrics-newRelicReporter-1-thread-1, 
id: 24) otherThread(id: 40)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2484)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2465)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2144)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2123)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics.getBeginningOffsets(KafkaOffsetPartitionMetrics.java:181)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:93)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:90)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.transformer.GaugeTransformer.transform(GaugeTransformer.java:60)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.lambda$transform$0(NewRelicReporter.java:154)
 ~[stormjar.jar:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Unknown
 Source) ~[?:?]
    at java.base/java.util.TreeMap$EntrySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet$UnmodifiableEntrySetSpliterator.forEachRemaining(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source) 
~[?:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.report(NewRelicReporter.java:138)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:243) 
~[metrics-core-3.2.6.jar:3.2.6]
    at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:182) 
[metrics-core-3.2.6.jar:3.2.6]
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) [?:?]
    at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source) 
[?:?]
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
 Source) [?:?]
    at

[jira] [Updated] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-04-30 Thread Anthony Castrati (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Castrati updated STORM-4055:

Description: 
After a recent upgrade to storm-kafka-client on storm server 2.6.1, we are 
seeing ConcurrentModificationException in our topology at runtime. I believe 
this is due to the re-use of a KafkaConsumer instance between the KafkaSpout 
and the 
KafkaOffsetPartitionMetrics which were added some time between 2.4.0 and 2.6.1.
 
h3. Steps to Reproduce:

Configure a topology with a basic KafkaSpout. Configure the topology with one 
of the metrics loggers. We used our own custom one, but reproduced it with 
ConsoleStormReporter as well. The JMXReporter did not reproduce the issue for 
us, but we did not dig into why.

reporter config:
{quote}topology.metrics.reporters: [
  {
    "filter": {
      "expression": ".*",
      "class": "org.apache.storm.metrics2.filters.RegexFilter"
    },
    "report.period": 15,
    "report.period.units": "SECONDS",
    "class": "org.apache.storm.metrics2.reporters.ConsoleStormReporter"
  }
]
{quote}
h3. Stacktrace:
{quote}[ERROR] Exception thrown from NewRelicReporter#report. Exception was 
suppressed.
java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
multi-threaded access. currentThread(name: metrics-newRelicReporter-1-thread-1, 
id: 24) otherThread(id: 40)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2484)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2465)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2144)
 ~[stormjar.jar:?]
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:2123)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics.getBeginningOffsets(KafkaOffsetPartitionMetrics.java:181)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:93)
 ~[stormjar.jar:?]
    at 
org.apache.storm.kafka.spout.metrics2.KafkaOffsetPartitionMetrics$2.getValue(KafkaOffsetPartitionMetrics.java:90)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.transformer.GaugeTransformer.transform(GaugeTransformer.java:60)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.lambda$transform$0(NewRelicReporter.java:154)
 ~[stormjar.jar:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Unknown
 Source) ~[?:?]
    at java.base/java.util.TreeMap$EntrySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet$UnmodifiableEntrySetSpliterator.forEachRemaining(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown
 Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.forEach(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline$7$1.accept(Unknown Source) 
~[?:?]
    at 
java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source) ~[?:?]
    at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source) 
~[?:?]
    at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source) 
~[?:?]
    at 
com.codahale.metrics.newrelic.NewRelicReporter.report(NewRelicReporter.java:138)
 ~[stormjar.jar:?]
    at 
com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:243) 
~[metrics-core-3.2.6.jar:3.2.6]
    at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:182) 
[metrics-core-3.2.6.jar:3.2.6]
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
Source) [?:?]
    at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source) 
[?:?]
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
 Source) [?:?]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorke

[jira] [Updated] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-04-30 Thread Anthony Castrati (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Castrati updated STORM-4055:


[jira] [Created] (STORM-4055) ConcurrentModificationException on KafkaConsumer when running topology with Metrics Reporters

2024-04-30 Thread Anthony Castrati (Jira)
Anthony Castrati created STORM-4055:
---

 Summary: ConcurrentModificationException on KafkaConsumer when 
running topology with Metrics Reporters
 Key: STORM-4055
 URL: https://issues.apache.org/jira/browse/STORM-4055
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-kafka-client
Affects Versions: 2.6.1
Reporter: Anthony Castrati



[jira] [Closed] (STORM-4054) Add Worker CPU Metric

2024-04-16 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4054.
--
Resolution: Fixed

> Add Worker CPU Metric
> -
>
> Key: STORM-4054
> URL: https://issues.apache.org/jira/browse/STORM-4054
> Project: Apache Storm
>  Issue Type: Improvement
>Reporter: Richard Zowalla
>Assignee: Aaron Gresch
>Priority: Minor
> Fix For: 2.6.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Allow reporting of worker cpu usage if no cgroups are configured.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4053) Add Hadoop client API dependency back to storm-hdfs

2024-04-16 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4053.
--
Fix Version/s: 2.6.3
 Assignee: Richard Zowalla
   Resolution: Fixed

> Add Hadoop client API dependency back to storm-hdfs
> ---
>
> Key: STORM-4053
> URL: https://issues.apache.org/jira/browse/STORM-4053
> Project: Apache Storm
>  Issue Type: Task
>  Components: storm-hdfs
>Affects Versions: 2.6.2
>Reporter: Julien Nioche
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.3
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> See https://github.com/apache/incubator-stormcrawler/pull/1189



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4054) Add Worker CPU Metric

2024-04-15 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4054:
--

 Summary: Add Worker CPU Metric
 Key: STORM-4054
 URL: https://issues.apache.org/jira/browse/STORM-4054
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Richard Zowalla
Assignee: Aaron Gresch
 Fix For: 2.6.3


Allow reporting of worker cpu usage if no cgroups are configured.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4053) Add Hadoop client API dependency back to storm-hdfs

2024-04-10 Thread Julien Nioche (Jira)
Julien Nioche created STORM-4053:


 Summary: Add Hadoop client API dependency back to storm-hdfs
 Key: STORM-4053
 URL: https://issues.apache.org/jira/browse/STORM-4053
 Project: Apache Storm
  Issue Type: Task
  Components: storm-hdfs
Affects Versions: 2.6.2
Reporter: Julien Nioche


See https://github.com/apache/incubator-stormcrawler/pull/1189



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4052) Simplify/Remove double delete/lookup in heartbeat cleanup code

2024-04-07 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4052.
--
Resolution: Fixed

> Simplify/Remove double delete/lookup in heartbeat cleanup code
> --
>
> Key: STORM-4052
> URL: https://issues.apache.org/jira/browse/STORM-4052
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.6.1, 2.6.2
>Reporter: Richard Zowalla
>Priority: Major
> Fix For: 2.6.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This change slightly simplifies the heartbeat cleanup code so that it no 
> longer tries to delete the heartbeat files twice. It also removes an unneeded 
> directory listing (and a possible race) by truncating the versions list and 
> using it for removal instead of for keeping.
> Removing the double delete attempt is important because it removes a lookup 
> for now non-existent files. Looking up non-existent files, especially highly 
> unique (e.g. timestamped) ones, can adversely affect many operating systems, 
> as these lookups are cached as negative dentries.
> [https://lwn.net/Articles/814535/]
> When cleanup runs, it iterates over the heartbeat directory, which contains a 
> token and a version file for each heartbeat. It calls deleteVersion for each 
> file in the directory, which attempts to delete both files associated with the 
> heartbeat. As deleteVersion already deletes both files when it first iterates 
> over the token file, the iteration for the version file has nothing to do.
> Before removing, the deleteVersion code checks for the existence of these now 
> non-existent files. On Linux (and other OSs), a lookup for a non-existent path 
> creates a negative dentry in the operating system's cache. In some 
> configurations this cache can grow effectively unbounded, leading to 
> performance issues. On newer systems the cache is better managed, but it will 
> still dilute an otherwise useful OS cache with useless entries.
>  
> Copied from [https://github.com/apache/storm/pull/3635] (Author: sammac)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4052) Simplify/Remove double delete/lookup in heartbeat cleanup code

2024-04-05 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4052:
--

 Summary: Simplify/Remove double delete/lookup in heartbeat cleanup 
code
 Key: STORM-4052
 URL: https://issues.apache.org/jira/browse/STORM-4052
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 2.6.2, 2.6.1
Reporter: Richard Zowalla
 Fix For: 2.6.3


This change slightly simplifies the heartbeat cleanup code so that it no longer 
tries to delete the heartbeat files twice. It also removes an unneeded 
directory listing (and a possible race) by truncating the versions list and 
using it for removal instead of for keeping.

Removing the double delete attempt is important because it removes a lookup for 
now non-existent files. Looking up non-existent files, especially highly unique 
(e.g. timestamped) ones, can adversely affect many operating systems, as these 
lookups are cached as negative dentries.
[https://lwn.net/Articles/814535/]

When cleanup runs, it iterates over the heartbeat directory, which contains a 
token and a version file for each heartbeat. It calls deleteVersion for each 
file in the directory, which attempts to delete both files associated with the 
heartbeat. As deleteVersion already deletes both files when it first iterates 
over the token file, the iteration for the version file has nothing to do.

Before removing, the deleteVersion code checks for the existence of these now 
non-existent files. On Linux (and other OSs), a lookup for a non-existent path 
creates a negative dentry in the operating system's cache. In some 
configurations this cache can grow effectively unbounded, leading to 
performance issues. On newer systems the cache is better managed, but it will 
still dilute an otherwise useful OS cache with useless entries.
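The pattern described above can be sketched as follows; the file naming (`<id>.token` / `<id>.version`) and the helper class are hypothetical, not Storm's actual heartbeat code. Deduplicating on the heartbeat id first means each token/version pair is deleted exactly once, so no delete or existence check ever runs against a path a previous iteration already removed:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of the cleanup pattern described above. Layout assumed:
// each heartbeat leaves two files, "<id>.token" and "<id>.version".
class HeartbeatCleanup {
    static void cleanup(Path heartbeatDir) throws IOException {
        // Collect each heartbeat id once, even though it appears twice in the
        // listing (once per file), so the delete loop runs once per pair.
        Set<String> ids = new LinkedHashSet<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(heartbeatDir)) {
            for (Path f : files) {
                String name = f.getFileName().toString();
                int dot = name.lastIndexOf('.');
                if (dot > 0) {
                    ids.add(name.substring(0, dot));
                }
            }
        }
        // Each file is deleted exactly once; no second pass looks up an
        // already-removed path, so no negative dentries are created.
        for (String id : ids) {
            Files.deleteIfExists(heartbeatDir.resolve(id + ".token"));
            Files.deleteIfExists(heartbeatDir.resolve(id + ".version"));
        }
    }
}
```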

 

Copied from [https://github.com/apache/storm/pull/3635] (Author: sammac)




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4051) Scheduler needs to include acker memory for topology resources

2024-04-05 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4051.
--
Resolution: Fixed

> Scheduler needs to include acker memory for topology resources
> --
>
> Key: STORM-4051
> URL: https://issues.apache.org/jira/browse/STORM-4051
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Aaron Gresch
>Assignee: Aaron Gresch
>Priority: Major
> Fix For: 2.6.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The scheduler has a bug where acker memory is not considered in the 
> scheduling estimate. The case I found was where a topology should fit on two 
> supervisors, but the cluster has 1 available and 2 blacklisted. The scheduler 
> thinks the topology should fit on one supervisor and fails to schedule, but 
> also fails to release a supervisor from the blacklist, resulting in the 
> topology never getting scheduled.
> With this fix, the scheduler properly detects that the topology will need to 
> be scheduled on two supervisors, releases one from the blacklist, and 
> schedules successfully.
> Switched some scheduling logs from trace to debug to make debugging 
> scheduling issues easier.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4051) Scheduler needs to include acker memory for topology resources

2024-04-05 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4051:
--

 Summary: Scheduler needs to include acker memory for topology 
resources
 Key: STORM-4051
 URL: https://issues.apache.org/jira/browse/STORM-4051
 Project: Apache Storm
  Issue Type: Bug
Reporter: Aaron Gresch
Assignee: Aaron Gresch
 Fix For: 2.6.3


The scheduler has a bug where acker memory is not considered in the scheduling 
estimate. The case I found was where a topology should fit on two supervisors, 
but the cluster has 1 available and 2 blacklisted. The scheduler thinks the 
topology should fit on one supervisor and fails to schedule, but also fails to 
release a supervisor from the blacklist, resulting in the topology never 
getting scheduled.

With this fix, the scheduler properly detects that the topology will need to be 
scheduled on two supervisors, releases one from the blacklist, and schedules 
successfully.

Switched some scheduling logs from trace to debug to make debugging scheduling 
issues easier.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4050) rocksdbjni:8.10.0

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4050.
--
Resolution: Fixed

> rocksdbjni:8.10.0 
> --
>
> Key: STORM-4050
> URL: https://issues.apache.org/jira/browse/STORM-4050
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4048) netty 4.1.107.Final

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4048.
--
Resolution: Fixed

> netty 4.1.107.Final 
> 
>
> Key: STORM-4048
> URL: https://issues.apache.org/jira/browse/STORM-4048
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4048) netty 4.1.107.Final

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4048:
--

 Summary: netty 4.1.107.Final 
 Key: STORM-4048
 URL: https://issues.apache.org/jira/browse/STORM-4048
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche
 Fix For: 2.6.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4050) rocksdbjni:8.10.0

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4050:
--

 Summary: rocksdbjni:8.10.0 
 Key: STORM-4050
 URL: https://issues.apache.org/jira/browse/STORM-4050
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche
 Fix For: 2.6.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4047) jackson 2.16.1

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4047.
--
Resolution: Fixed

> jackson 2.16.1
> --
>
> Key: STORM-4047
> URL: https://issues.apache.org/jira/browse/STORM-4047
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4049) snakeyaml:2.2

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4049:
--

 Summary: snakeyaml:2.2
 Key: STORM-4049
 URL: https://issues.apache.org/jira/browse/STORM-4049
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche
 Fix For: 2.6.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4049) snakeyaml:2.2

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4049.
--
Resolution: Fixed

> snakeyaml:2.2
> -
>
> Key: STORM-4049
> URL: https://issues.apache.org/jira/browse/STORM-4049
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4047) jackson 2.16.1

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4047:
--

 Summary: jackson 2.16.1
 Key: STORM-4047
 URL: https://issues.apache.org/jira/browse/STORM-4047
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche
 Fix For: 2.6.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4046) caffeine:3.1.8

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4046:
--

 Summary: caffeine:3.1.8
 Key: STORM-4046
 URL: https://issues.apache.org/jira/browse/STORM-4046
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4045) log4j2 2.23.0

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4045:
--

 Summary: log4j2 2.23.0 
 Key: STORM-4045
 URL: https://issues.apache.org/jira/browse/STORM-4045
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4045) log4j2 2.23.0

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4045.
--
Fix Version/s: 2.6.2
   Resolution: Fixed

> log4j2 2.23.0 
> --
>
> Key: STORM-4045
> URL: https://issues.apache.org/jira/browse/STORM-4045
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4044) commons-lang3:3.14.0

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4044.
--
Resolution: Fixed

> commons-lang3:3.14.0
> 
>
> Key: STORM-4044
> URL: https://issues.apache.org/jira/browse/STORM-4044
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4046) caffeine:3.1.8

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4046.
--
Fix Version/s: 2.6.2
   Resolution: Fixed

> caffeine:3.1.8
> --
>
> Key: STORM-4046
> URL: https://issues.apache.org/jira/browse/STORM-4046
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4044) commons-lang3:3.14.0

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4044:
--

 Summary: commons-lang3:3.14.0
 Key: STORM-4044
 URL: https://issues.apache.org/jira/browse/STORM-4044
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche
 Fix For: 2.6.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4043) commons-io:2.14.0

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4043.
--
Resolution: Fixed

> commons-io:2.14.0
> -
>
> Key: STORM-4043
> URL: https://issues.apache.org/jira/browse/STORM-4043
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4043) commons-io:2.14.0

2024-04-02 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4043:
--

 Summary: commons-io:2.14.0
 Key: STORM-4043
 URL: https://issues.apache.org/jira/browse/STORM-4043
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Julien Nioche
 Fix For: 2.6.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4030) Dependency upgrades

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4030.
--
Resolution: Fixed

> Dependency upgrades
> ---
>
> Key: STORM-4030
> URL: https://issues.apache.org/jira/browse/STORM-4030
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Minor
> Fix For: 2.6.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> mvn versions:display-dependency-updates 
> "-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
>  | grep '\->' | sort | uniq{code}
> {{shows a large number of upgradable dependencies.}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (STORM-4030) Dependency upgrades

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla reopened STORM-4030:


> Dependency upgrades
> ---
>
> Key: STORM-4030
> URL: https://issues.apache.org/jira/browse/STORM-4030
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Minor
> Fix For: 2.7.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> mvn versions:display-dependency-updates 
> "-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
>  | grep '\->' | sort | uniq{code}
> {{shows a large number of upgradable dependencies.}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4038) Cleanup Hadoop Dependencies

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4038.
--
Resolution: Fixed

> Cleanup Hadoop Dependencies
> ---
>
> Key: STORM-4038
> URL: https://issues.apache.org/jira/browse/STORM-4038
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop pulls in a lot of transitive dependencies we do not actually need; it 
> might be good to prune / exclude them.





[jira] [Closed] (STORM-4029) Bump org.apache.commons:commons-compress from 1.21 to 1.26.

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4029.
--
Resolution: Fixed

> Bump org.apache.commons:commons-compress from 1.21 to 1.26.
> ---
>
> Key: STORM-4029
> URL: https://issues.apache.org/jira/browse/STORM-4029
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Reopened] (STORM-4029) Bump org.apache.commons:commons-compress from 1.21 to 1.26.

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla reopened STORM-4029:


> Bump org.apache.commons:commons-compress from 1.21 to 1.26.
> ---
>
> Key: STORM-4029
> URL: https://issues.apache.org/jira/browse/STORM-4029
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Reopened] (STORM-4038) Cleanup Hadoop Dependencies

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla reopened STORM-4038:


> Cleanup Hadoop Dependencies
> ---
>
> Key: STORM-4038
> URL: https://issues.apache.org/jira/browse/STORM-4038
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop pulls in a lot of transitive dependencies we do not actually need; it 
> might be good to prune / exclude them.





[jira] [Updated] (STORM-4030) Dependency upgrades

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla updated STORM-4030:
---
Fix Version/s: 2.6.2
   (was: 2.7.0)

> Dependency upgrades
> ---
>
> Key: STORM-4030
> URL: https://issues.apache.org/jira/browse/STORM-4030
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Minor
> Fix For: 2.6.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> mvn versions:display-dependency-updates 
> "-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
>  | grep '\->' | sort | uniq{code}
> {{shows a large number of upgradable dependencies.}}





[jira] [Updated] (STORM-4038) Cleanup Hadoop Dependencies

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla updated STORM-4038:
---
Fix Version/s: 2.6.2
   (was: 2.7.0)

> Cleanup Hadoop Dependencies
> ---
>
> Key: STORM-4038
> URL: https://issues.apache.org/jira/browse/STORM-4038
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop pulls in a lot of transitive dependencies we do not actually need; it 
> might be good to prune / exclude them.





[jira] [Updated] (STORM-4029) Bump org.apache.commons:commons-compress from 1.21 to 1.26.

2024-04-02 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla updated STORM-4029:
---
Fix Version/s: 2.6.2
   (was: 2.7.0)

> Bump org.apache.commons:commons-compress from 1.21 to 1.26.
> ---
>
> Key: STORM-4029
> URL: https://issues.apache.org/jira/browse/STORM-4029
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Closed] (STORM-4042) Clojure 1.11.2

2024-03-28 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4042.
--
Resolution: Fixed

> Clojure 1.11.2
> --
>
> Key: STORM-4042
> URL: https://issues.apache.org/jira/browse/STORM-4042
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Closed] (STORM-4041) Zookeeper 3.9.2

2024-03-28 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4041.
--
Resolution: Fixed

> Zookeeper 3.9.2
> ---
>
> Key: STORM-4041
> URL: https://issues.apache.org/jira/browse/STORM-4041
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Created] (STORM-4042) Clojure 1.11.2

2024-03-28 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4042:
--

 Summary: Clojure 1.11.2
 Key: STORM-4042
 URL: https://issues.apache.org/jira/browse/STORM-4042
 Project: Apache Storm
  Issue Type: Dependency upgrade
Affects Versions: 2.6.1
Reporter: Richard Zowalla
Assignee: Richard Zowalla
 Fix For: 2.6.2








[jira] [Created] (STORM-4041) Zookeeper 3.9.2

2024-03-28 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4041:
--

 Summary: Zookeeper 3.9.2
 Key: STORM-4041
 URL: https://issues.apache.org/jira/browse/STORM-4041
 Project: Apache Storm
  Issue Type: Dependency upgrade
Reporter: Richard Zowalla
Assignee: Richard Zowalla
 Fix For: 2.6.2








[jira] [Closed] (STORM-4040) Nimbus fails to start up on older CPUs (RocksDB v7.x.x onwards)

2024-03-28 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4040.
--
Fix Version/s: 2.6.2
   Resolution: Fixed

> Nimbus fails to start up on older CPUs (RocksDB v7.x.x onwards)
> ---
>
> Key: STORM-4040
> URL: https://issues.apache.org/jira/browse/STORM-4040
> Project: Apache Storm
>  Issue Type: Documentation
>  Components: documentation
>Affects Versions: 2.5.0
> Environment: CPU is pre-Haswell
>Reporter: Scott Moore
>Assignee: Scott Moore
>Priority: Minor
> Fix For: 2.6.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When Nimbus starts up and storm_rocks already contains data, the JVM will 
> encounter an illegal-instruction exception if the CPU is from the pre-Haswell 
> era.
> This can be seen on such a CPU by deploying a running topology and then 
> restarting Nimbus. It will fail to start up because the JVM will have crashed 
> when RocksDB reads the state from the storm_rocks folder. (You can recover 
> from this by deleting the storm_rocks folder - Nimbus will then start up OK 
> ... until the next time it restarts!)
> The issue is that RocksDB v7.x.x or higher (which applies to Storm 2.5.0 
> onwards, from commit 
> [https://github.com/apache/storm/commit/d7b4c084a89961a4060edb2e755491a21015c200])
> is a C++ component built for modern CPUs. The fix is to downgrade RocksDB to 
> a pre-v7.x.x version (or to build v7+ from source yourself with certain 
> compiler options set).
> You can find other reports of this - e.g. 
> [https://github.com/facebook/rocksdb/issues/11096] or 
> [https://github.com/trezor/blockbook/issues/684]
> This is not a bug in Storm, of course - it is more a matter of 'minimum 
> requirements'.
> So I have opened a pull request suggesting workarounds for this in the 
> Troubleshooting guide.





[jira] [Updated] (STORM-4040) Nimbus fails to start up on older CPUs (RocksDB v7.x.x onwards)

2024-03-28 Thread Scott Moore (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Moore updated STORM-4040:
---
External issue URL: https://github.com/apache/storm/pull/3632

> Nimbus fails to start up on older CPUs (RocksDB v7.x.x onwards)
> ---
>
> Key: STORM-4040
> URL: https://issues.apache.org/jira/browse/STORM-4040
> Project: Apache Storm
>  Issue Type: Documentation
>  Components: documentation
>Affects Versions: 2.5.0
> Environment: CPU is pre-Haswell
>Reporter: Scott Moore
>Assignee: Scott Moore
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When Nimbus starts up and storm_rocks already contains data, the JVM will 
> encounter an illegal-instruction exception if the CPU is from the pre-Haswell 
> era.
> This can be seen on such a CPU by deploying a running topology and then 
> restarting Nimbus. It will fail to start up because the JVM will have crashed 
> when RocksDB reads the state from the storm_rocks folder. (You can recover 
> from this by deleting the storm_rocks folder - Nimbus will then start up OK 
> ... until the next time it restarts!)
> The issue is that RocksDB v7.x.x or higher (which applies to Storm 2.5.0 
> onwards, from commit 
> [https://github.com/apache/storm/commit/d7b4c084a89961a4060edb2e755491a21015c200])
> is a C++ component built for modern CPUs. The fix is to downgrade RocksDB to 
> a pre-v7.x.x version (or to build v7+ from source yourself with certain 
> compiler options set).
> You can find other reports of this - e.g. 
> [https://github.com/facebook/rocksdb/issues/11096] or 
> [https://github.com/trezor/blockbook/issues/684]
> This is not a bug in Storm, of course - it is more a matter of 'minimum 
> requirements'.
> So I have opened a pull request suggesting workarounds for this in the 
> Troubleshooting guide.





[jira] [Created] (STORM-4040) Nimbus fails to start up on older CPUs (RocksDB v7.x.x onwards)

2024-03-28 Thread Scott Moore (Jira)
Scott Moore created STORM-4040:
--

 Summary: Nimbus fails to start up on older CPUs (RocksDB v7.x.x 
onwards)
 Key: STORM-4040
 URL: https://issues.apache.org/jira/browse/STORM-4040
 Project: Apache Storm
  Issue Type: Documentation
  Components: documentation
Affects Versions: 2.5.0
 Environment: CPU is pre-Haswell
Reporter: Scott Moore
Assignee: Scott Moore


When Nimbus starts up and storm_rocks already contains data, the JVM will 
encounter an illegal-instruction exception if the CPU is from the pre-Haswell 
era.

This can be seen on such a CPU by deploying a running topology and then 
restarting Nimbus. It will fail to start up because the JVM will have crashed 
when RocksDB reads the state from the storm_rocks folder. (You can recover from 
this by deleting the storm_rocks folder - Nimbus will then start up OK ... 
until the next time it restarts!)

The issue is that RocksDB v7.x.x or higher (which applies to Storm 2.5.0 
onwards, from commit 
[https://github.com/apache/storm/commit/d7b4c084a89961a4060edb2e755491a21015c200])
 is a C++ component built for modern CPUs. The fix is to downgrade RocksDB to a 
pre-v7.x.x version (or to build v7+ from source yourself with certain compiler 
options set).

You can find other reports of this - e.g. 
[https://github.com/facebook/rocksdb/issues/11096] or 
[https://github.com/trezor/blockbook/issues/684]

This is not a bug in Storm, of course - it is more a matter of 'minimum 
requirements'.

So I have opened a pull request suggesting workarounds for this in the 
Troubleshooting guide.
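The 'minimum requirements' point above can be turned into a quick pre-flight 
check. This is only a sketch under assumptions: it expects a Linux-style 
/proc/cpuinfo, and uses AVX2/BMI2 as representative post-Haswell 
instruction-set flags (stock RocksDB 7+ JNI builds may depend on other 
instructions as well):

```shell
# check_cpu FILE: succeed only if the cpuinfo-style FILE advertises both
# avx2 and bmi2 (representative, not exhaustive, post-Haswell flags).
check_cpu() {
  grep -q -w avx2 "$1" && grep -q -w bmi2 "$1"
}

if check_cpu /proc/cpuinfo 2>/dev/null; then
  echo "CPU advertises AVX2/BMI2: stock RocksDB 7.x+ should run"
else
  echo "CPU may be too old: downgrade RocksDB below 7.x or rebuild it from source"
fi
```

Running such a check before starting Nimbus would surface the problem before 
RocksDB ever reads storm_rocks.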





[jira] [Resolved] (STORM-4039) Bump org.apache.commons:commons-configuration2 from 2.9.0 to 2.10.1

2024-03-22 Thread Julien Nioche (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche resolved STORM-4039.
--
  Assignee: Julien Nioche
Resolution: Fixed

> Bump org.apache.commons:commons-configuration2 from 2.9.0 to 2.10.1
> ---
>
> Key: STORM-4039
> URL: https://issues.apache.org/jira/browse/STORM-4039
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/storm/pull/3631





[jira] [Updated] (STORM-4039) Bump org.apache.commons:commons-configuration2 from 2.9.0 to 2.10.1

2024-03-22 Thread Julien Nioche (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche updated STORM-4039:
-
Fix Version/s: 2.6.2

> Bump org.apache.commons:commons-configuration2 from 2.9.0 to 2.10.1
> ---
>
> Key: STORM-4039
> URL: https://issues.apache.org/jira/browse/STORM-4039
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Major
> Fix For: 2.6.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/storm/pull/3631





[jira] [Created] (STORM-4039) Bump org.apache.commons:commons-configuration2 from 2.9.0 to 2.10.1

2024-03-22 Thread Julien Nioche (Jira)
Julien Nioche created STORM-4039:


 Summary: Bump org.apache.commons:commons-configuration2 from 2.9.0 
to 2.10.1
 Key: STORM-4039
 URL: https://issues.apache.org/jira/browse/STORM-4039
 Project: Apache Storm
  Issue Type: Task
Affects Versions: 2.6.1
Reporter: Julien Nioche


https://github.com/apache/storm/pull/3631





[jira] [Closed] (STORM-4038) Cleanup Hadoop Dependencies

2024-03-15 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4038.
--
Resolution: Fixed

> Cleanup Hadoop Dependencies
> ---
>
> Key: STORM-4038
> URL: https://issues.apache.org/jira/browse/STORM-4038
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hadoop pulls in a lot of transitive dependencies we do not actually need; it 
> might be good to prune / exclude them.





[jira] [Created] (STORM-4038) Cleanup Hadoop Dependencies

2024-03-07 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4038:
--

 Summary: Cleanup Hadoop Dependencies
 Key: STORM-4038
 URL: https://issues.apache.org/jira/browse/STORM-4038
 Project: Apache Storm
  Issue Type: Improvement
Affects Versions: 2.6.1
Reporter: Richard Zowalla
Assignee: Richard Zowalla
 Fix For: 2.7.0


Hadoop pulls in a lot of transitive dependencies we do not actually need; it 
might be good to prune / exclude them.





[jira] [Commented] (STORM-4036) Apache Storm 2.5.0 not able to start with Kafka 3.6.1

2024-03-07 Thread Richard Zowalla (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824427#comment-17824427
 ] 

Richard Zowalla commented on STORM-4036:


Can you test with Storm 2.6.1? We updated the Kafka version; cf. 
https://storm.apache.org/2024/02/02/storm261-released.html

> Apache Storm 2.5.0 not able to start with Kafka 3.6.1 
> --
>
> Key: STORM-4036
> URL: https://issues.apache.org/jira/browse/STORM-4036
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 2.5.0
> Environment: Linux
>Reporter: Adarsh Shukla
>Priority: Major
>
> Hi Team,
> We are trying to use Apache Storm to pump data into another component using 
> Kafka. But as soon as we add the Kafka.properties to the storm conf folder, 
> NMStormTopology and other processes fail with the following error:
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org.apache.kafka.clients.producer.Producer
> at 
> com.ibm.csi.nm.storm.app.NMStormTopology.validateKafkaConnection(NMStormTopology.java:126)
> at com.ibm.csi.nm.storm.app.NMStormTopology.main(NMStormTopology.java:170)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.clients.producer.Producer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:610)
> at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:948)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:893)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:353)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:876)
> We first start the Kafka 3.6.1 server and then start Storm 2.5.0.
> We are trying to understand which version of Kafka is compatible with Storm 
> 2.5.0, or whether this is a bug in Storm that prevents it from integrating 
> with Kafka 3.6.1.
> With an earlier version we didn't see this issue, but with this version we 
> do. Please help us understand how to resolve it.





[jira] [Created] (STORM-4037) confluent maven resolver may not be necessary any more

2024-02-28 Thread PJ Fanning (Jira)
PJ Fanning created STORM-4037:
-

 Summary: confluent maven resolver may not be necessary any more
 Key: STORM-4037
 URL: https://issues.apache.org/jira/browse/STORM-4037
 Project: Apache Storm
  Issue Type: Bug
Reporter: PJ Fanning


After https://github.com/apache/storm/pull/3627, the repository reference at

https://github.com/apache/storm/blob/d8ab31aa5c296d018c783cb159e1ccaf288ecc1c/external/storm-hdfs/pom.xml#L41

may no longer be necessary.

It is best to use non-standard Maven repositories only when absolutely 
necessary.





[jira] [Created] (STORM-4036) Apache Storm 2.5.0 not able to start with Kafka 3.6.1

2024-02-28 Thread Adarsh Shukla (Jira)
Adarsh Shukla created STORM-4036:


 Summary: Apache Storm 2.5.0 not able to start with Kafka 3.6.1 
 Key: STORM-4036
 URL: https://issues.apache.org/jira/browse/STORM-4036
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-client
Affects Versions: 2.5.0
 Environment: Linux
Reporter: Adarsh Shukla


Hi Team,

We are trying to use Apache Storm to pump data into another component using 
Kafka. But as soon as we add the Kafka.properties to the storm conf folder, 
NMStormTopology and other processes fail with the following error:

Exception in thread "main" java.lang.NoClassDefFoundError: 
org.apache.kafka.clients.producer.Producer
at 
com.ibm.csi.nm.storm.app.NMStormTopology.validateKafkaConnection(NMStormTopology.java:126)
at com.ibm.csi.nm.storm.app.NMStormTopology.main(NMStormTopology.java:170)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.clients.producer.Producer
at java.net.URLClassLoader.findClass(URLClassLoader.java:610)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:948)
at java.lang.ClassLoader.loadClass(ClassLoader.java:893)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:353)
at java.lang.ClassLoader.loadClass(ClassLoader.java:876)

We first start the Kafka 3.6.1 server and then start Storm 2.5.0.

We are trying to understand which version of Kafka is compatible with Storm 
2.5.0, or whether this is a bug in Storm that prevents it from integrating with 
Kafka 3.6.1.

With an earlier version we didn't see this issue, but with this version we do. 
Please help us understand how to resolve it.
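A NoClassDefFoundError like the one above usually means the Kafka client 
classes are simply not on the classpath. One common remedy, sketched here 
under assumptions (the extlib directory that Storm distributions scan for 
extra jars, plus an illustrative STORM_HOME path and kafka-clients version - 
none of these values come from this thread), is to drop the kafka-clients jar 
into Storm's extlib directory:

```shell
# Sketch: make org.apache.kafka.clients.* visible to Storm by copying the
# kafka-clients jar into Storm's extlib directory. STORM_HOME and the jar
# version here are illustrative assumptions.
STORM_HOME="${STORM_HOME:-$PWD/storm}"   # typically something like /opt/storm
JAR="kafka-clients-3.6.1.jar"

mkdir -p "$STORM_HOME/extlib"
if [ -f "$JAR" ]; then
  cp "$JAR" "$STORM_HOME/extlib/"
  echo "installed $JAR into $STORM_HOME/extlib"
else
  echo "fetch $JAR first (e.g. from Maven Central), then re-run"
fi
```

Shading the Kafka dependency into the topology's uber-jar (e.g. with the 
maven-shade-plugin) avoids relying on the server-side classpath at all.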





[jira] [Resolved] (STORM-4035) Remove ConfluentAvroSerializer (storm-hdfs)

2024-02-26 Thread Julien Nioche (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche resolved STORM-4035.
--
Resolution: Fixed

> Remove ConfluentAvroSerializer (storm-hdfs)
> ---
>
> Key: STORM-4035
> URL: https://issues.apache.org/jira/browse/STORM-4035
> Project: Apache Storm
>  Issue Type: Improvement
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConfluentAvroSerializer relies on a very old external library. It isn't 
> covered with unit tests. 
>  
> We can remove this class as well as the related dependency.
>  
> People who are using it can still copy the class content and add it into 
> their topology.





[jira] [Created] (STORM-4035) Remove ConfluentAvroSerializer (storm-hdfs)

2024-02-26 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4035:
--

 Summary: Remove ConfluentAvroSerializer (storm-hdfs)
 Key: STORM-4035
 URL: https://issues.apache.org/jira/browse/STORM-4035
 Project: Apache Storm
  Issue Type: Improvement
Affects Versions: 2.6.1
Reporter: Richard Zowalla
Assignee: Richard Zowalla


ConfluentAvroSerializer relies on a very old external library. It isn't 
covered with unit tests.

We can remove this class as well as the related dependency.

People who are using it can still copy the class content and add it into 
their topology.





[jira] [Created] (STORM-4034) Use package manager for 3rd party JS in Storm-UI

2024-02-23 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4034:
--

 Summary: Use package manager for 3rd party JS in Storm-UI
 Key: STORM-4034
 URL: https://issues.apache.org/jira/browse/STORM-4034
 Project: Apache Storm
  Issue Type: Task
  Components: build
Affects Versions: 2.6.1
Reporter: Richard Zowalla


We have JS libraries for Storm UI added directly into VCS. It would be better 
to use a build process (webpack / gulp with the frontend-maven-plugin) to 
automatically create a minimized asset file.

This would allow us to control versions and update those JS libs.

https://github.com/apache/storm/tree/master/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/js





[jira] [Created] (STORM-4033) Inform about official docker images on downloads

2024-02-23 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4033:
--

 Summary: Inform about official docker images on downloads
 Key: STORM-4033
 URL: https://issues.apache.org/jira/browse/STORM-4033
 Project: Apache Storm
  Issue Type: Task
Affects Versions: 2.6.1
Reporter: Richard Zowalla


as the title says





[jira] [Created] (STORM-4032) Add information about nightlies to website

2024-02-23 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4032:
--

 Summary: Add information about nightlies to website
 Key: STORM-4032
 URL: https://issues.apache.org/jira/browse/STORM-4032
 Project: Apache Storm
  Issue Type: Task
Affects Versions: 2.6.1
Reporter: Richard Zowalla


Provide a link to the nightlies on the download site. Add a disclaimer similar 
to the following to indicate that the binaries are untested:

"Apache Storm provides untested nightly builds as a convenience for users who 
wish to access the latest developments in our project. These nightly builds are 
generated automatically from the latest codebase and are not subjected to the 
rigorous testing processes applied to official releases.

Users are hereby advised that these nightly builds may contain bugs, errors, or 
other issues that could affect the stability and functionality of the software. 
These builds are not recommended for production use, critical systems, or any 
environment where system stability is paramount."





[jira] [Created] (STORM-4031) Fix broken Talks & Video Webpage

2024-02-23 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4031:
--

 Summary: Fix broken Talks & Video Webpage
 Key: STORM-4031
 URL: https://issues.apache.org/jira/browse/STORM-4031
 Project: Apache Storm
  Issue Type: Task
Affects Versions: 2.6.1
Reporter: Richard Zowalla
 Fix For: 2.7.0


External content seems to be blocked now:

[https://storm.apache.org/talksAndVideos.html]

We should either host that content ourselves or simply link to it.





[jira] [Resolved] (STORM-4030) Dependency upgrades

2024-02-22 Thread Julien Nioche (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche resolved STORM-4030.
--
Resolution: Fixed

> Dependency upgrades
> ---
>
> Key: STORM-4030
> URL: https://issues.apache.org/jira/browse/STORM-4030
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Minor
> Fix For: 2.7.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> mvn versions:display-dependency-updates 
> "-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
>  | grep '\->' | sort | uniq{code}
> {{shows a large number of upgradable dependencies.}}





[jira] [Comment Edited] (STORM-4030) Dependency upgrades

2024-02-22 Thread Julien Nioche (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819676#comment-17819676
 ] 

Julien Nioche edited comment on STORM-4030 at 2/22/24 2:44 PM:
---

commons-io 2.11.0 -> 2.14.0
commons-lang 3.13.0 -> 3.14.0
log4j 2.21.1 -> 2.23.0
caffeine 2.3.5 -> 3.1.8
guava 32.1.3-jre -> 33.0.0-jre
jackson 2.15.2 -> 2.16.1
netty 4.1.100.Final -> 4.1.107.Final
junit 5.10.0 -> 5.10.2
snakeyaml 2.0 -> 2.2
RocksDB JNI 8.5.4 -> 8.10.0
checker-qual 3.37.0 -> 3.42.0
error_prone_annotations 2.21.1 -> 2.25.0
testcontainers 1.19.1 -> 1.19.6


was (Author: jnioche):
commons-io 2.11.0 -> 2.14.0
commons-lang 3.13.0 -> 3.14.0
log4j 2.21.1 -> 2.23.0
caffeine 2.3.5 -> 3.1.8
error_prone_annotations 2.21.1 -> 2.25.0
guava 32.1.3-jre -> 33.0.0-jre
jackson 2.15.2 -> 2.16.1
netty 4.1.100.Final -> 4.1.107.Final
snakeyaml 2.0 -> 2.2
RocksDB JNI 8.5.4 -> 8.10.0
checker-qual 3.37.0 -> 3.42.0

> Dependency upgrades
> ---
>
>     Key: STORM-4030
> URL: https://issues.apache.org/jira/browse/STORM-4030
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Minor
> Fix For: 2.7.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> mvn versions:display-dependency-updates 
> "-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
>  | grep '\->' | sort | uniq{code}
> {{shows a large number of upgradable dependencies.}}





[jira] [Commented] (STORM-4030) Dependency upgrades

2024-02-22 Thread Julien Nioche (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819676#comment-17819676
 ] 

Julien Nioche commented on STORM-4030:
--

commons-io 2.11.0 -> 2.14.0
commons-lang 3.13.0 -> 3.14.0
log4j 2.21.1 -> 2.23.0
caffeine 2.3.5 -> 3.1.8
error_prone_annotations 2.21.1 -> 2.25.0
guava 32.1.3-jre -> 33.0.0-jre
jackson 2.15.2 -> 2.16.1
netty 4.1.100.Final -> 4.1.107.Final
snakeyaml 2.0 -> 2.2
RocksDB JNI 8.5.4 -> 8.10.0
checker-qual 3.37.0 -> 3.42.0

> Dependency upgrades
> ---
>
> Key: STORM-4030
> URL: https://issues.apache.org/jira/browse/STORM-4030
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.1
>Reporter: Julien Nioche
>Assignee: Julien Nioche
>Priority: Minor
> Fix For: 2.7.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> mvn versions:display-dependency-updates 
> "-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
>  | grep '\->' | sort | uniq{code}
> {{shows a large number of upgradable dependencies.}}





[jira] [Created] (STORM-4030) Dependency upgrades

2024-02-22 Thread Julien Nioche (Jira)
Julien Nioche created STORM-4030:


 Summary: Dependency upgrades
 Key: STORM-4030
 URL: https://issues.apache.org/jira/browse/STORM-4030
 Project: Apache Storm
  Issue Type: Task
Affects Versions: 2.6.1
Reporter: Julien Nioche
Assignee: Julien Nioche
 Fix For: 2.7.0


{code:java}
mvn versions:display-dependency-updates 
"-Dmaven.version.ignore=.*-M.*,.*-alpha.*,.*-beta.*,.*-BETA.*,.*-b.*,.*.ALPHA.*"
 | grep '\->' | sort | uniq{code}
{{shows a large number of upgradable dependencies.}}





[jira] [Closed] (STORM-4029) Bump org.apache.commons:commons-compress from 1.21 to 1.26.

2024-02-21 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4029.
--
Resolution: Fixed

> Bump org.apache.commons:commons-compress from 1.21 to 1.26.
> ---
>
> Key: STORM-4029
> URL: https://issues.apache.org/jira/browse/STORM-4029
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.1
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Created] (STORM-4029) Bump org.apache.commons:commons-compress from 1.21 to 1.26.

2024-02-21 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4029:
--

 Summary: Bump org.apache.commons:commons-compress from 1.21 to 
1.26.
 Key: STORM-4029
 URL: https://issues.apache.org/jira/browse/STORM-4029
 Project: Apache Storm
  Issue Type: Dependency upgrade
Affects Versions: 2.6.1
Reporter: Richard Zowalla
Assignee: Richard Zowalla
 Fix For: 2.7.0








[jira] (STORM-4020) Why not Apache Storm gives an prometheus end point for metrics scraping?

2024-02-11 Thread Harish Madineni (Jira)


[ https://issues.apache.org/jira/browse/STORM-4020 ]


Harish Madineni deleted comment on STORM-4020:


was (Author: JIRAUSER303758):
Sure
I have just started familiarizing myself with the Storm codebase. I'll work on 
it as and when I'm comfortable with the code base.

> Why not Apache Storm gives an prometheus end point for metrics scraping?
> 
>
> Key: STORM-4020
> URL: https://issues.apache.org/jira/browse/STORM-4020
> Project: Apache Storm
>  Issue Type: Question
>Reporter: Harish Madineni
>Priority: Major
>
> Why does Apache Storm not provide a Prometheus endpoint for metrics scraping 
> for all the daemons, or at least for the supervisor and nimbus?
> The issue with reporter-based mechanisms is that we have to run an external 
> service (like a push gateway), which can become a bottleneck.
> Is there a recommended way, suggested by the Apache Storm team, to scrape 
> metrics into external systems like Prometheus?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (STORM-3960) Read keystore/truststore files in jks or pkcs12 prompt based on extension

2024-01-30 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla updated STORM-3960:
---
Fix Version/s: 2.7.0
   (was: 2.6.1)

> Read keystore/truststore files in jks or pkcs12 prompt based on extension
> -
>
> Key: STORM-3960
> URL: https://issues.apache.org/jira/browse/STORM-3960
> Project: Apache Storm
>  Issue Type: Task
>  Components: storm-client
>Reporter: Bipin Prasad
>Assignee: Bipin Prasad
>Priority: Major
> Fix For: 2.7.0
>
>
> Storm reads JKS (Java KeyStore) format files for keys and certificates. Add 
> support for reading PKCS12-format keystores and truststores for TLS 
> connections, inferring the file type from the extension without needing to 
> specify the keystore/truststore type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-1836) An official Docker image

2024-01-30 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-1836.
--
Resolution: Fixed

> An official Docker image
> 
>
> Key: STORM-1836
> URL: https://issues.apache.org/jira/browse/STORM-1836
> Project: Apache Storm
>  Issue Type: New Feature
>Reporter: Elisey Zanko
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.6.1, 2.6.0
>
>
> Here is the [PR|https://github.com/docker-library/official-images/pull/1641] 
> which contains an official image for Storm. It would be great if someone could 
> take a look at it and provide some feedback. Or maybe somebody is interested 
> in collaborating on the image? Thanks!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (STORM-4020) Why not Apache Storm gives an prometheus end point for metrics scraping?

2024-01-26 Thread Harish Madineni (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811223#comment-17811223
 ] 

Harish Madineni commented on STORM-4020:


Sure. I have just started to familiarize myself with the Storm codebase. I'll 
work on it as and when I'm comfortable with the code base.

> Why not Apache Storm gives an prometheus end point for metrics scraping?
> 
>
> Key: STORM-4020
> URL: https://issues.apache.org/jira/browse/STORM-4020
> Project: Apache Storm
>  Issue Type: Question
>Reporter: Harish Madineni
>Priority: Major
>
> Why does Apache Storm not provide a Prometheus endpoint for metrics scraping 
> for all the daemons, or at least for the supervisor and nimbus?
> The issue with reporter-based mechanisms is that we have to run an external 
> service (like a push gateway), which can become a bottleneck.
> Is there a recommended way, suggested by the Apache Storm team, to scrape 
> metrics into external systems like Prometheus?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4028) Curator 5.6.0

2024-01-26 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4028.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Curator 5.6.0
> -
>
> Key: STORM-4028
> URL: https://issues.apache.org/jira/browse/STORM-4028
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.0
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314425&version=12353185



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4026) Thrift 0.19.0

2024-01-26 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4026.
--
Resolution: Fixed

> Thrift 0.19.0
> -
>
> Key: STORM-4026
> URL: https://issues.apache.org/jira/browse/STORM-4026
> Project: Apache Storm
>  Issue Type: Task
>Affects Versions: 2.6.0
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> https://github.com/apache/thrift/blob/master/CHANGES.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4027) Kryo 5.6.0

2024-01-26 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4027.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Kryo 5.6.0
> --
>
> Key: STORM-4027
> URL: https://issues.apache.org/jira/browse/STORM-4027
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.6.0
>Reporter: Richard Zowalla
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/EsotericSoftware/kryo/releases



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-3713) Possible race condition between zookeeper sync-up and killing topology

2024-01-25 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-3713.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Possible race condition between zookeeper sync-up and killing topology
> --
>
> Key: STORM-3713
> URL: https://issues.apache.org/jira/browse/STORM-3713
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Fix For: 2.7.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When nimbus re-gains leadership, the leaderCallback will sync-up with 
> zookeeper:
> [https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/nimbus/LeaderListenerCallback.java#L106]
>  
> [https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/cluster/StormClusterStateImpl.java#L212]
>   
> When killing topology, both zookeeper and in-memory assignments map get 
> cleaned up.
> [https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L313]
>   
> However, the syncRemoteAssignments call first reads the topology information 
> from zookeeper into stormIds. Then, after some processing (including 
> deserialization), it puts the result into the local in-memory assignments 
> backend. If the zookeeper deletion happens between these two steps, there 
> will be a mismatch between remote zookeeper and the local backend.
> We found this issue because we observed an NPE when making assignments.
> {code:java}
> 2020-11-04 19:56:17.703 o.a.s.d.n.Nimbus timer [ERROR] Error while processing 
> event java.lang.RuntimeException: java.lang.NullPointerException at
> org.apache.storm.daemon.nimbus.Nimbus.lambda$launchServer$17(Nimbus.java:1419)
>  ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.StormTimer$1.run(StormTimer.java:110) 
> ~[storm-client-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.StormTimer$StormTimerTask.run(StormTimer.java:226) 
> [storm-client-2.3.0.y.jar:2.3.0.y] Caused by: java.lang.NullPointerException 
> at 
> org.apache.storm.daemon.nimbus.HeartbeatCache.getAliveExecutors(HeartbeatCache.java:199)
>  ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.aliveExecutors(Nimbus.java:2029) 
> ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.computeTopologyToAliveExecutors(Nimbus.java:2109)
>  ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.computeNewSchedulerAssignments(Nimbus.java:2272)
>  ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.lockingMkAssignments(Nimbus.java:2467) 
> ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.mkAssignments(Nimbus.java:2453) 
> ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.mkAssignments(Nimbus.java:2397) 
> ~[storm-server-2.3.0.y.jar:2.3.0.y] at 
> org.apache.storm.daemon.nimbus.Nimbus.lambda$launchServer$17(Nimbus.java:1415)
>  ~[storm-server-2.3.0.y.jar:2.3.0.y] ... 2 more 2020-11-04 19:56:17.703 
> o.a.s.u.Utils timer [ERROR] Halting process: Error while processing event  
> {code}
> [https://github.com/apache/storm/blob/fe2f7102e244336e288d26f2dde8089198ee4c33/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L2108]
>   
> The existingAssignment comes from the in-memory backend, while the 
> topologyToExecutors comes from zookeeper, which does not include a deleted 
> topology id. 
> [https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L2108]
>  
> [https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L2111|https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java#L2108]
>  
> [https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/daemon/nimbus/HeartbeatCache.java#L199]
> So the NPE happens.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
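The race described in STORM-3713 can be illustrated with a small, hypothetical sketch (plain Java, not Storm's actual HeartbeatCache API; the class and method names here are invented for illustration): a cache lookup keyed by a topology id that was deleted concurrently returns null, and guarding that lookup avoids the NPE.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the race: the in-memory assignment backend still lists
// a topology id that the heartbeat cache (synced from ZooKeeper) no longer
// knows, so an unchecked lookup dereferences null. Guarding the lookup avoids
// the NullPointerException seen in computeTopologyToAliveExecutors.
public class StaleAssignmentSketch {
    static Map<String, Set<Integer>> heartbeatCache = new HashMap<>();

    static Set<Integer> aliveExecutors(String topoId) {
        Set<Integer> alive = heartbeatCache.get(topoId);
        if (alive != null) {
            return alive;
        }
        // Topology was killed between the ZooKeeper read and this lookup:
        // treat it as having no alive executors instead of throwing an NPE.
        return Collections.emptySet();
    }

    public static void main(String[] args) {
        heartbeatCache.put("topo-1", Set.of(1, 2));
        System.out.println(aliveExecutors("topo-1").size());      // present topology
        System.out.println(aliveExecutors("deleted-topo").size()); // concurrently killed
    }
}
```

Storm's actual fix addresses the ordering between the ZooKeeper sync and the in-memory backend update; this sketch only shows the defensive-lookup side of the problem.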


[jira] [Created] (STORM-4028) Curator 5.6.0

2024-01-25 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4028:
--

 Summary: Curator 5.6.0
 Key: STORM-4028
 URL: https://issues.apache.org/jira/browse/STORM-4028
 Project: Apache Storm
  Issue Type: Dependency upgrade
Affects Versions: 2.6.0
Reporter: Richard Zowalla
Assignee: Richard Zowalla


https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314425&version=12353185



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4027) Kryo 5.6.0

2024-01-25 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4027:
--

 Summary: Kryo 5.6.0
 Key: STORM-4027
 URL: https://issues.apache.org/jira/browse/STORM-4027
 Project: Apache Storm
  Issue Type: Dependency upgrade
Affects Versions: 2.6.0
Reporter: Richard Zowalla
Assignee: Richard Zowalla


https://github.com/EsotericSoftware/kryo/releases



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4025) ClassCastException when changing log level in Storm UI

2024-01-25 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4025.
--
Resolution: Fixed

> ClassCastException when changing log level in Storm UI
> --
>
> Key: STORM-4025
> URL: https://issues.apache.org/jira/browse/STORM-4025
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.6.0
>Reporter: mleger
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
> Attachments: NimbusUI_error.png, error_stack.json
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A ClassCastException is raised in Storm UI when trying to change the log 
> level for a topology (cf. attached screenshot and full error stack).
>  * POST request payload sent to the logconfig web service:
> {code:java}
> {"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
>  * Error message received:
> {noformat}
> "error": "500 Server Error",
> "errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
> with java.lang.Long\n\tat 
> org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
>  
> org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
> [...]
> {noformat}
> The timeout parameter seems to be parsed as an Integer whereas it is cast 
> into a Long in the code, then raising a ClassCastException:
> cf. 
> [https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]
> This issue could be related to the recent change of the JSON parser having a 
> different behavior when parsing numbers:
> cf. 
> [https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
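The STORM-4025 failure mode can be reproduced in isolation: JSON parsers commonly deserialize small numbers as `Integer`, so a blind `(Long)` cast on the parsed value throws a `ClassCastException` at runtime, while widening through `Number` works for any numeric boxed type. This is a minimal demonstration of the pattern, not Storm's actual `UIHelpers` code.

```java
public class CastDemo {
    // Widening through Number accepts Integer, Long, or any other boxed
    // numeric type the JSON parser happens to produce.
    static long asLong(Object parsed) {
        return ((Number) parsed).longValue();
    }

    public static void main(String[] args) {
        Object timeout = Integer.valueOf(30); // what a JSON parser may hand back
        boolean directCastFails = false;
        try {
            Long bad = (Long) timeout; // fails: an Integer is not a Long
        } catch (ClassCastException e) {
            directCastFails = true;
        }
        System.out.println(asLong(timeout) + " " + directCastFails); // prints "30 true"
    }
}
```

The same defensive conversion makes the code robust to a parser switch that changes which boxed type small integers deserialize to.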


[jira] [Commented] (STORM-4025) ClassCastException when changing log level in Storm UI

2024-01-25 Thread Richard Zowalla (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810798#comment-17810798
 ] 

Richard Zowalla commented on STORM-4025:


Thanks for reporting. This is indeed related to the JSON parser switch! The 
next Storm release will contain a fix.

> ClassCastException when changing log level in Storm UI
> --
>
> Key: STORM-4025
> URL: https://issues.apache.org/jira/browse/STORM-4025
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.6.0
>Reporter: mleger
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
> Attachments: NimbusUI_error.png, error_stack.json
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A ClassCastException is raised in Storm UI when trying to change the log 
> level for a topology (cf. attached screenshot and full error stack).
>  * POST request payload sent to the logconfig web service:
> {code:java}
> {"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
>  * Error message received:
> {noformat}
> "error": "500 Server Error",
> "errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
> with java.lang.Long\n\tat 
> org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
>  
> org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
> [...]
> {noformat}
> The timeout parameter seems to be parsed as an Integer whereas it is cast 
> into a Long in the code, then raising a ClassCastException:
> cf. 
> [https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]
> This issue could be related to the recent change of the JSON parser having a 
> different behavior when parsing numbers:
> cf. 
> [https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4024) Bolt Input Stats are blank if topology.acker.executors is null or 0

2024-01-25 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4024.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Bolt Input Stats are blank if topology.acker.executors is null or 0
> ---
>
> Key: STORM-4024
> URL: https://issues.apache.org/jira/browse/STORM-4024
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Scott Moore
>Assignee: Scott Moore
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: Storm2-input-stats-notworking-with-no-ackers.png, 
> Storm2-input-stats-working-with-ackers.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> On the Storm UI (and via the API), the bolt Input Stats do not work when 
> topology.acker.executors is null or 0 (see attachments showing the difference 
> with and without ackers).
> In addition, some of the per-bolt-instance Executed and latency fields are 
> not working.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (STORM-4023) Background periodic Kerberos re-login should use same JAAS configuration as initial login

2024-01-25 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla closed STORM-4023.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Background periodic Kerberos re-login should use same JAAS configuration as 
> initial login
> -
>
> Key: STORM-4023
> URL: https://issues.apache.org/jira/browse/STORM-4023
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-client
>Affects Versions: 2.6.0
>Reporter: Andrew Olson
>Priority: Major
>  Labels: jaas, kerberos, messaging, netty
> Fix For: 2.7.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the 
> [Login|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java]
>  class, a background thread is started that periodically performs a re-login 
> to the Kerberos Ticket Granting Server.
> For the initial login, a custom Configuration instance is 
> [created|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L257]
>  and [supplied to the LoginContext 
> constructor|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L300]
>  potentially using a custom JAAS file location.
> However, the background refresh thread does not then subsequently provide the 
> JAAS file location or Configuration to the [reLogin 
> method|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L222],
>  so it tries to construct a LoginContext with [just a context name and 
> subject|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L409]
>  but no Configuration parameter, which means that the underlying 
> {{Configuration.getConfiguration()}} call has to load one from [system 
> defaults|https://github.com/AdoptOpenJDK/openjdk-jdk11/blob/jdk-11%2B28/src/java.base/share/classes/javax/security/auth/login/LoginContext.java#L242],
>  which could possibly specify a different file or none at all.
> In our application where this issue was found, we had set the 
> {{java.security.auth.login.config}} value to a valid JAAS file location as a 
> Storm client property along with other standard connectivity properties, 
> since the [Netty client 
> framework|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyClient.java#L61]
>  loads it [from the topology 
> configuration|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/security/auth/ClientAuthUtils.java#L64].
>  It looks like the Netty server framework [does the 
> same|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyServer.java#L55]
>  as well. The initial login succeeded and the following Storm Nimbus 
> interactions were successful, but a while later it lost the ability to 
> communicate with Storm with this error being logged,
> {noformat}
> ERROR [Refresh-TGT] org.apache.storm.messaging.netty.Login Could not refresh 
> TGT for principal: 
> javax.security.auth.login.LoginException: No LoginModules configured for 
> StormClient
>at 
> java.base/javax.security.auth.login.LoginContext.init(LoginContext.java:267)
>at 
> java.base/javax.security.auth.login.LoginContext.(LoginContext.java:385)
>at org.apache.storm.messaging.netty.Login.reLogin(Login.java:409)
>at org.apache.storm.messaging.netty.Login$1.run(Login.java:222)
>at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}
> It appears that a viable workaround for this issue is to also set the system 
> property,
> {{-Djava.security.auth.login.config=/some/path/jaas.conf}}
> for the application, referencing the same JAAS file location as was set in 
> the Storm configuration. After doing so the background refresh thread was 
> able to correctly function in our situation.
> To address this, we can update the {{reLogin}} method to use the same JAAS 
> configuration. Furthermore, it should also use the same callback handler 
> instance that was originally provided, instead of creating a new default one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
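The core of the STORM-4023 problem is that `new LoginContext(name, subject)` falls back to the JVM-wide `Configuration.getConfiguration()`, which may resolve to no login modules at all, whereas the initial login was built against an explicit `Configuration` instance. A minimal sketch of the intended behavior (the `StormJaasConfig` class here is hypothetical, standing in for the Configuration that Storm's `Login` class builds; it is not Storm's real code):

```java
import java.util.HashMap;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Sketch: keep the Configuration used for the initial login and consult the
// SAME instance on every re-login, instead of letting LoginContext fall back
// to Configuration.getConfiguration(), which may contain no "StormClient"
// entry and fail with "No LoginModules configured for StormClient".
public class RelogingSketch {
    static class StormJaasConfig extends Configuration {
        @Override
        public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
            if (!"StormClient".equals(name)) {
                return null; // unknown context: no modules configured
            }
            return new AppConfigurationEntry[] {
                new AppConfigurationEntry(
                    "com.sun.security.auth.module.Krb5LoginModule",
                    AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                    new HashMap<String, String>())
            };
        }
    }

    public static void main(String[] args) {
        Configuration initial = new StormJaasConfig();
        // A re-login using this instance sees the StormClient modules; the
        // JVM default might not, which is exactly the reported failure.
        AppConfigurationEntry[] entries =
            initial.getAppConfigurationEntry("StormClient");
        System.out.println(entries.length); // prints "1"
    }
}
```

In the real fix, the `reLogin` method would pass this retained `Configuration` (and the original callback handler) to the `LoginContext` constructor overload that accepts them.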


[jira] [Updated] (STORM-4025) ClassCastException when changing log level in Storm UI

2024-01-25 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla updated STORM-4025:
---
Fix Version/s: 2.7.0

> ClassCastException when changing log level in Storm UI
> --
>
> Key: STORM-4025
> URL: https://issues.apache.org/jira/browse/STORM-4025
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.6.0
>Reporter: mleger
>Priority: Major
> Fix For: 2.7.0
>
> Attachments: NimbusUI_error.png, error_stack.json
>
>
> A ClassCastException is raised in Storm UI when trying to change the log 
> level for a topology (cf. attached screenshot and full error stack).
>  * POST request payload sent to the logconfig web service:
> {code:java}
> {"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
>  * Error message received:
> {noformat}
> "error": "500 Server Error",
> "errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
> with java.lang.Long\n\tat 
> org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
>  
> org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
> [...]
> {noformat}
> The timeout parameter seems to be parsed as an Integer whereas it is cast 
> into a Long in the code, then raising a ClassCastException:
> cf. 
> [https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]
> This issue could be related to the recent change of the JSON parser having a 
> different behavior when parsing numbers:
> cf. 
> [https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (STORM-4025) ClassCastException when changing log level in Storm UI

2024-01-25 Thread Richard Zowalla (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zowalla reassigned STORM-4025:
--

Assignee: Richard Zowalla

> ClassCastException when changing log level in Storm UI
> --
>
> Key: STORM-4025
> URL: https://issues.apache.org/jira/browse/STORM-4025
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.6.0
>Reporter: mleger
>Assignee: Richard Zowalla
>Priority: Major
> Fix For: 2.7.0
>
> Attachments: NimbusUI_error.png, error_stack.json
>
>
> A ClassCastException is raised in Storm UI when trying to change the log 
> level for a topology (cf. attached screenshot and full error stack).
>  * POST request payload sent to the logconfig web service:
> {code:java}
> {"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
>  * Error message received:
> {noformat}
> "error": "500 Server Error",
> "errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
> with java.lang.Long\n\tat 
> org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
>  
> org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
> [...]
> {noformat}
> The timeout parameter seems to be parsed as an Integer whereas it is cast 
> into a Long in the code, then raising a ClassCastException:
> cf. 
> [https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]
> This issue could be related to the recent change of the JSON parser having a 
> different behavior when parsing numbers:
> cf. 
> [https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4026) Thrift 0.19.0

2024-01-25 Thread Richard Zowalla (Jira)
Richard Zowalla created STORM-4026:
--

 Summary: Thrift 0.19.0
 Key: STORM-4026
 URL: https://issues.apache.org/jira/browse/STORM-4026
 Project: Apache Storm
  Issue Type: Task
Affects Versions: 2.6.0
Reporter: Richard Zowalla
Assignee: Richard Zowalla
 Fix For: 2.7.0


https://github.com/apache/thrift/blob/master/CHANGES.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (STORM-4025) ClassCastException when changing log level in Storm UI

2024-01-25 Thread mleger (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mleger updated STORM-4025:
--
Description: 
A ClassCastException is raised in Storm UI when trying to change the log level 
for a topology (cf. attached screenshot and full error stack).
 * POST request payload sent to the logconfig web service:

{code:java}
{"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
 * Error message received:

{noformat}
"error": "500 Server Error",
"errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
with java.lang.Long\n\tat 
org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
 
org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
[...]
{noformat}
The timeout parameter seems to be parsed as an Integer whereas it is cast into 
a Long in the code, then raising a ClassCastException:

cf. 
[https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]

This issue could be related to the recent change of the JSON parser having a 
different behavior when parsing numbers:

cf. 
[https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]

 
 
 

  was:
A ClassCastException is raised in Storm UI when trying to change the log level 
for a topology (cf. attached screenshot and full error stack).
 * POST request payload sent to the logconfig web service:

 
{code:java}
{"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
 
 * Error message received:

 
{noformat}
"error": "500 Server Error",
"errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
with java.lang.Long\n\tat 
org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
 
org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
[...]
{noformat}
 

The timeout parameter seems to be parsed as an Integer whereas it is cast into 
a Long in the code, then raising a ClassCastException:

cf. 
[https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]

This issue could be related to the recent change of the JSON parser having a 
different behavior when parsing numbers:

cf. 
[https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]

 
 
 


> ClassCastException when changing log level in Storm UI
> --
>
> Key: STORM-4025
> URL: https://issues.apache.org/jira/browse/STORM-4025
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.6.0
>Reporter: mleger
>Priority: Major
> Attachments: NimbusUI_error.png, error_stack.json
>
>
> A ClassCastException is raised in Storm UI when trying to change the log 
> level for a topology (cf. attached screenshot and full error stack).
>  * POST request payload sent to the logconfig web service:
> {code:java}
> {"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
>  * Error message received:
> {noformat}
> "error": "500 Server Error",
> "errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
> with java.lang.Long\n\tat 
> org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
>  
> org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
> [...]
> {noformat}
> The timeout parameter seems to be parsed as an Integer whereas it is cast 
> into a Long in the code, then raising a ClassCastException:
> cf. 
> [https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]
> This issue could be related to the recent change of the JSON parser having a 
> different behavior when parsing numbers:
> cf. 
> [https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (STORM-4025) ClassCastException when changing log level in Storm UI

2024-01-25 Thread mleger (Jira)
mleger created STORM-4025:
-

 Summary: ClassCastException when changing log level in Storm UI
 Key: STORM-4025
 URL: https://issues.apache.org/jira/browse/STORM-4025
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-webapp
Affects Versions: 2.6.0
Reporter: mleger
 Attachments: NimbusUI_error.png, error_stack.json

A ClassCastException is raised in Storm UI when trying to change the log level 
for a topology (cf. attached screenshot and full error stack).
 * POST request payload sent to the logconfig web service:

 
{code:java}
{"namedLoggerLevels":{"com.example":{"target_level":"DEBUG","reset_level":"INFO","timeout":30}}}{code}
 
 * Error message received:

 
{noformat}
"error": "500 Server Error",
"errorMessage": "java.lang.ClassCastException: java.lang.Integer incompatible 
with java.lang.Long\n\tat 
org.apache.storm.daemon.ui.UIHelpers.putTopologyLogLevel(UIHelpers.java:2422)\n\tat
 
org.apache.storm.daemon.ui.resources.StormApiResource.putTopologyLogconfig(StormApiResource.java:469)\n\tat
[...]
{noformat}
 

The timeout parameter seems to be parsed as an Integer whereas it is cast into 
a Long in the code, then raising a ClassCastException:

cf. 
[https://github.com/apache/storm/blob/ae3a96e762095553311d9e335f7505c0b351d810/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java#L2422C13-L2422C67]

This issue could be related to the recent change of the JSON parser having a 
different behavior when parsing numbers:

cf. 
[https://github.com/apache/storm/commit/1406f680c8d65de591c997066d2ca2cd80e56c4f#diff-67de3adeec3548f570568d351b76a4b3a936ee9ed0f3f59445ff3def0505f247]
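A minimal, self-contained sketch of the failure mode (not Storm's actual code; the class and variable names here are hypothetical): a JSON parser may box the literal 30 as an Integer, so a direct cast to Long fails, while widening through Number does not.

```java
import java.util.HashMap;
import java.util.Map;

public class NumberCastDemo {
    public static void main(String[] args) {
        // A JSON parser may deserialize the small literal 30 as an Integer,
        // even though the consuming code expects a Long.
        Map<String, Object> loggerSettings = new HashMap<>();
        loggerSettings.put("timeout", 30);

        Object timeout = loggerSettings.get("timeout");
        // Long t = (Long) timeout;  // would throw ClassCastException (Integer -> Long)

        // Robust alternative: widen through the Number supertype.
        long timeoutSecs = ((Number) timeout).longValue();
        System.out.println(timeoutSecs); // prints 30
    }
}
```

A fix in UIHelpers would presumably follow the same pattern: treat the parsed value as a Number rather than assuming the parser's concrete boxed type.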

 
 
 





[jira] [Updated] (STORM-4023) Background periodic Kerberos re-login should use same JAAS configuration as initial login

2024-01-24 Thread Andrew Olson (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated STORM-4023:

Description: 
In the 
[Login|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java]
 class, a background thread is started that periodically performs a re-login to 
the Kerberos Ticket Granting Server.

For the initial login, a custom Configuration instance is 
[created|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L257]
 and [supplied to the LoginContext 
constructor|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L300]
 potentially using a custom JAAS file location.

However, the background refresh thread does not subsequently provide the 
JAAS file location or Configuration to the [reLogin 
method|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L222],
 so it tries to construct a LoginContext with [just a context name and 
subject|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L409]
 but no Configuration parameter, which means that the underlying 
{{Configuration.getConfiguration()}} call has to load one from [system 
defaults|https://github.com/AdoptOpenJDK/openjdk-jdk11/blob/jdk-11%2B28/src/java.base/share/classes/javax/security/auth/login/LoginContext.java#L242],
 which could possibly specify a different file or none at all.

In our application where this issue was found, we had set the 
{{java.security.auth.login.config}} value to a valid JAAS file location as a 
Storm client property along with other standard connectivity properties, since 
the [Netty client 
framework|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyClient.java#L61]
 loads it [from the topology 
configuration|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/security/auth/ClientAuthUtils.java#L64].
 It looks like the Netty server framework [does the 
same|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyServer.java#L55]
 as well. The initial login succeeded and the following Storm Nimbus 
interactions were successful, but a while later it lost the ability to 
communicate with Storm with this error being logged,
{noformat}
ERROR [Refresh-TGT] org.apache.storm.messaging.netty.Login Could not refresh 
TGT for principal: 
javax.security.auth.login.LoginException: No LoginModules configured for 
StormClient
   at 
java.base/javax.security.auth.login.LoginContext.init(LoginContext.java:267)
   at 
java.base/javax.security.auth.login.LoginContext.<init>(LoginContext.java:385)
   at org.apache.storm.messaging.netty.Login.reLogin(Login.java:409)
   at org.apache.storm.messaging.netty.Login$1.run(Login.java:222)
   at java.base/java.lang.Thread.run(Thread.java:829)
{noformat}
It appears that a viable workaround for this issue is to also set the system 
property,

{{-Djava.security.auth.login.config=/some/path/jaas.conf}}

for the application, referencing the same JAAS file location as was set in the 
Storm configuration. After doing so the background refresh thread was able to 
correctly function in our situation.

To address this, we can update the {{reLogin}} method to use the same JAAS 
configuration. Furthermore, it should use the same callback handler instance 
that was originally provided, instead of creating a new default one.
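A minimal sketch of the proposed direction (not Storm's actual code; the in-memory Configuration and comments are illustrative stand-ins for the one built from the custom JAAS file): constructing the LoginContext with an explicit Configuration avoids the fallback to `Configuration.getConfiguration()`.

```java
import java.util.HashMap;
import javax.security.auth.Subject;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class ReloginSketch {
    // Hypothetical in-memory stand-in for the Configuration that the initial
    // login builds from the custom JAAS file location.
    static final Configuration CUSTOM_CONFIG = new Configuration() {
        @Override
        public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
            return new AppConfigurationEntry[] {
                new AppConfigurationEntry(
                    "com.sun.security.auth.module.Krb5LoginModule",
                    AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                    new HashMap<String, String>())
            };
        }
    };

    public static void main(String[] args) throws LoginException {
        Subject subject = new Subject();

        // What reLogin effectively does today: no Configuration argument, so
        // the JDK falls back to Configuration.getConfiguration(), which may
        // contain no "StormClient" entry and then fails with
        // "No LoginModules configured for StormClient":
        //   new LoginContext("StormClient", subject);

        // Proposed direction: pass the original Configuration (and callback
        // handler) explicitly, as the initial login already does.
        LoginContext ctx = new LoginContext("StormClient", subject,
                callbacks -> { /* reuse the handler from the initial login */ },
                CUSTOM_CONFIG);
        System.out.println("constructed LoginContext with explicit Configuration");
    }
}
```

Note that construction alone does not contact the KDC; `login()` would still need valid Kerberos credentials, which is why the sketch stops at the constructor call.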

  was:
In the 
[Login|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java]
 class, a background thread is started that periodically performs a re-login to 
the Kerberos Ticket Granting Server.

For the initial login, a custom Configuration instance is 
[created|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L257]
 and [supplied to the LoginContext 
constructor|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L300]
 potentially using a custom JAAS file location.

However, the background refresh thread does not then subsequently provide the 
JAAS file location or Configuration to the [reLogin 
method|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L222],
 so it tries to construct a LoginContext with [just a context name and 
subject|https://github.com/apache/storm/blob/v2.6.0/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java#L409]
 but no Configuration parameter, which means that the underlying 
{{Configuration.getConfiguration()}} call has to load one from [system 
defaults|https://github.com/AdoptOpenJDK

[jira] [Assigned] (STORM-4024) Bolt Input Stats are blank if topology.acker.executors is null or 0

2024-01-24 Thread Scott Moore (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Moore reassigned STORM-4024:
--

Assignee: Scott Moore

> Bolt Input Stats are blank if topology.acker.executors is null or 0
> ---
>
> Key: STORM-4024
> URL: https://issues.apache.org/jira/browse/STORM-4024
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Scott Moore
>Assignee: Scott Moore
>Priority: Minor
> Attachments: Storm2-input-stats-notworking-with-no-ackers.png, 
> Storm2-input-stats-working-with-ackers.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> On StormUI (and via the API) the bolt Input Stats do not work when 
> topology.acker.executors is null or 0 (see attachments showing the difference 
> with and without ackers).
> Some of the per-bolt instance Executed and latency fields are also not 
> working.





[jira] [Commented] (STORM-4024) Bolt Input Stats are blank if topology.acker.executors is null or 0

2024-01-24 Thread Scott Moore (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810503#comment-17810503
 ] 

Scott Moore commented on STORM-4024:


Pull request with the fix can be found here: 
https://github.com/apache/storm/pull/3620

> Bolt Input Stats are blank if topology.acker.executors is null or 0
> ---
>
> Key: STORM-4024
> URL: https://issues.apache.org/jira/browse/STORM-4024
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Scott Moore
>Priority: Minor
> Attachments: Storm2-input-stats-notworking-with-no-ackers.png, 
> Storm2-input-stats-working-with-ackers.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> On StormUI (and via the API) the bolt Input Stats do not work when 
> topology.acker.executors is null or 0 (see attachments showing the difference 
> with and without ackers).
> Some of the per-bolt instance Executed and latency fields are also not 
> working.





[jira] [Commented] (STORM-4024) Bolt Input Stats are blank if topology.acker.executors is null or 0

2024-01-24 Thread Scott Moore (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810481#comment-17810481
 ] 

Scott Moore commented on STORM-4024:


I have a fix for this. Will submit pull request shortly

> Bolt Input Stats are blank if topology.acker.executors is null or 0
> ---
>
> Key: STORM-4024
> URL: https://issues.apache.org/jira/browse/STORM-4024
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Scott Moore
>Priority: Minor
> Attachments: Storm2-input-stats-notworking-with-no-ackers.png, 
> Storm2-input-stats-working-with-ackers.png
>
>
> On StormUI (and via the API) the bolt Input Stats do not work when 
> topology.acker.executors is null or 0 (see attachments showing the difference 
> with and without ackers).
> Some of the per-bolt instance Executed and latency fields are also not 
> working.





[jira] [Updated] (STORM-4024) Bolt Input Stats are blank if topology.acker.executors is null or 0

2024-01-24 Thread Scott Moore (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Moore updated STORM-4024:
---
Description: 
On StormUI (and via the API) the bolt Input Stats do not work when 
topology.acker.executors is null or 0 (see attachments showing the difference 
with and without ackers).

Some of the per-bolt instance Executed and latency fields are also not 
working.
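Until a fix lands, the report implies a workaround: configure a non-zero number of ackers. A minimal sketch (the config key comes from this ticket; the surrounding topology-submission code is omitted and the map stands in for a Storm Config object):

```java
import java.util.HashMap;
import java.util.Map;

public class AckerConfigSketch {
    public static void main(String[] args) {
        // Storm topology configuration as a plain map. With
        // topology.acker.executors unset (null) or 0, acking is disabled and,
        // per this report, the UI's bolt Input Stats stay blank.
        Map<String, Object> conf = new HashMap<>();
        conf.put("topology.acker.executors", 2); // non-zero -> acking enabled

        System.out.println(conf.get("topology.acker.executors")); // prints 2
    }
}
```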

> Bolt Input Stats are blank if topology.acker.executors is null or 0
> ---
>
> Key: STORM-4024
> URL: https://issues.apache.org/jira/browse/STORM-4024
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Scott Moore
>Priority: Minor
> Attachments: Storm2-input-stats-notworking-with-no-ackers.png, 
> Storm2-input-stats-working-with-ackers.png
>
>
> On StormUI (and via the API) the bolt Input Stats do not work when 
> topology.acker.executors is null or 0 (see attachments showing the difference 
> with and without ackers).
> Some of the per-bolt instance Executed and latency fields are also not 
> working.





[jira] [Updated] (STORM-4024) Bolt Input Stats are blank if topology.acker.executors is null or 0

2024-01-24 Thread Scott Moore (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Moore updated STORM-4024:
---
Attachment: Storm2-input-stats-working-with-ackers.png

> Bolt Input Stats are blank if topology.acker.executors is null or 0
> ---
>
> Key: STORM-4024
> URL: https://issues.apache.org/jira/browse/STORM-4024
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-server
>Affects Versions: 2.0.0
>Reporter: Scott Moore
>Priority: Minor
> Attachments: Storm2-input-stats-notworking-with-no-ackers.png, 
> Storm2-input-stats-working-with-ackers.png
>
>






  1   2   3   4   5   6   7   8   9   10   >