This is an automated email from the ASF dual-hosted git repository.
rzo1 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/storm.git
The following commit(s) were added to refs/heads/master by this push:
new bab0ed3ea STORM-4058 - Provide ClusterMetrics via a Prometheus Preparable Reporter
bab0ed3ea is described below
commit bab0ed3ea00f7d692f34317e5c8ae2251101acd0
Author: Richard Zowalla <[email protected]>
AuthorDate: Fri Jun 7 16:39:14 2024 +0200
STORM-4058 - Provide ClusterMetrics via a Prometheus Preparable Reporter
---
DEPENDENCY-LICENSES | 11 +
external/storm-metrics-prometheus/README.md | 25 ++
external/storm-metrics-prometheus/pom.xml | 90 +++++++
.../prometheus/PrometheusPreparableReporter.java | 140 +++++++++++
.../prometheus/PrometheusReporterClient.java | 259 +++++++++++++++++++++
.../PrometheusPreparableReporterTest.java | 189 +++++++++++++++
.../src/test/resources/pushgateway-basicauth.yaml | 21 ++
.../src/test/resources/pushgateway-ssl.yaml | 103 ++++++++
pom.xml | 1 +
9 files changed, 839 insertions(+)
diff --git a/DEPENDENCY-LICENSES b/DEPENDENCY-LICENSES
index 21bfd45d5..0999e1fa2 100644
--- a/DEPENDENCY-LICENSES
+++ b/DEPENDENCY-LICENSES
@@ -378,7 +378,18 @@ List of third-party dependencies grouped by their license type.
     * Plexus Interpolation API (org.codehaus.plexus:plexus-interpolation:1.25 - http://codehaus-plexus.github.io/plexus-interpolation/)
     * Plexus Security Dispatcher Component (org.sonatype.plexus:plexus-sec-dispatcher:1.3 - http://spice.sonatype.org/plexus-sec-dispatcher)
     * Plexus Security Dispatcher Component (org.sonatype.plexus:plexus-sec-dispatcher:1.4 - http://spice.sonatype.org/plexus-sec-dispatcher)
+    * Prometheus Metrics Config (io.prometheus:prometheus-metrics-config:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-config)
+    * Prometheus Metrics Core (io.prometheus:prometheus-metrics-core:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-core)
+    * Prometheus Metrics Exporter - Common (io.prometheus:prometheus-metrics-exporter-common:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-exporter-common)
+    * Prometheus Metrics Exporter - Pushgateway (io.prometheus:prometheus-metrics-exporter-pushgateway:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-exporter-pushgateway)
+    * Prometheus Metrics Exposition Formats (io.prometheus:prometheus-metrics-exposition-formats:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-exposition-formats)
+    * Prometheus Metrics Model (io.prometheus:prometheus-metrics-model:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-model)
+    * Prometheus Metrics Tracer Common (io.prometheus:prometheus-metrics-tracer-common:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-tracer/prometheus-metrics-tracer-common)
+    * Prometheus Metrics Tracer Initializer (io.prometheus:prometheus-metrics-tracer-initializer:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-tracer/prometheus-metrics-tracer-initializer)
+    * Prometheus Metrics Tracer OpenTelemetry (io.prometheus:prometheus-metrics-tracer-otel:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-tracer/prometheus-metrics-tracer-otel)
+    * Prometheus Metrics Tracer OpenTelemetry Agent (io.prometheus:prometheus-metrics-tracer-otel-agent:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-tracer/prometheus-metrics-tracer-otel-agent)
     * rest (org.elasticsearch.client:elasticsearch-rest-client:7.17.13 - https://github.com/elastic/elasticsearch.git)
+    * Shaded Protobuf (io.prometheus:prometheus-metrics-shaded-protobuf:1.3.0 - http://github.com/prometheus/client_java/prometheus-metrics-shaded-dependencies/prometheus-metrics-shaded-protobuf)
     * sigar (org.fusesource:sigar:1.6.4 - http://fusesource.com/sigar/)
     * Sisu - Guice (org.sonatype.sisu:sisu-guice:2.1.7 - http://forge.sonatype.com/sisu-guice/)
     * Sisu - Inject (JSR330 bean support) (org.sonatype.sisu:sisu-inject-bean:1.4.2 - http://sisu.sonatype.org/sisu-inject/guice-bean/sisu-inject-bean/)
diff --git a/external/storm-metrics-prometheus/README.md b/external/storm-metrics-prometheus/README.md
new file mode 100644
index 000000000..351742c1a
--- /dev/null
+++ b/external/storm-metrics-prometheus/README.md
@@ -0,0 +1,25 @@
+# Storm Metrics Prometheus
+
+This module contains a reporter to push [Cluster Metrics](https://storm.apache.org/releases/current/ClusterMetrics.html) to
+a [Prometheus Pushgateway](https://github.com/prometheus/pushgateway), from which they can be scraped by a Prometheus instance.
+
+## Usage
+
+To use, edit your `storm.yaml` config file:
+
+```yaml
+storm.daemon.metrics.reporter.plugins:
+ - "org.apache.storm.metrics.prometheus.PrometheusPreparableReporter"
+storm.daemon.metrics.reporter.interval.secs: 10
+
+# Configuration for the Prometheus Pushgateway
+storm.daemon.metrics.reporter.plugin.prometheus.job: "job_name"
+storm.daemon.metrics.reporter.plugin.prometheus.endpoint: "localhost:9091"
+storm.daemon.metrics.reporter.plugin.prometheus.scheme: "http"
+storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_user: ""
+storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_password: ""
+storm.daemon.metrics.reporter.plugin.prometheus.skip_tls_validation: false
+```
+
+In addition, make sure to place this jar, together with the required transitive
+Prometheus dependencies, into `/lib` of your Storm installation.
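A side note for reviewers of this commit: the reporter maps Dropwizard metric names such as `summary.cluster:num-supervisors` onto Prometheus-safe names such as `summary_cluster_num_supervisors`. The names are registered explicitly in `PrometheusReporterClient` below (a few supervisor histograms also drop the `summary_` prefix), but the general rule most of them follow can be sketched like this; the class and method names here are hypothetical:

```java
// Illustrative sketch only: PrometheusReporterClient registers its Prometheus
// metric names explicitly rather than deriving them, but most follow this rule.
public class MetricNameSketch {

    // Prometheus metric names are restricted to [a-zA-Z0-9_:]; the names
    // chosen in this module replace '.', ':' and '-' with '_'.
    static String toPrometheusName(String dropwizardName) {
        return dropwizardName.replaceAll("[^a-zA-Z0-9_]", "_");
    }

    public static void main(String[] args) {
        System.out.println(toPrometheusName("summary.cluster:num-supervisors"));
        // prints summary_cluster_num_supervisors
    }
}
```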
diff --git a/external/storm-metrics-prometheus/pom.xml b/external/storm-metrics-prometheus/pom.xml
new file mode 100644
index 000000000..55d1096bb
--- /dev/null
+++ b/external/storm-metrics-prometheus/pom.xml
@@ -0,0 +1,90 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <parent>
+ <artifactId>storm</artifactId>
+ <groupId>org.apache.storm</groupId>
+ <version>2.6.3-SNAPSHOT</version>
+ <relativePath>../../pom.xml</relativePath>
+ </parent>
+
+ <artifactId>storm-metrics-prometheus</artifactId>
+ <packaging>jar</packaging>
+
+ <name>storm-metrics-prometheus</name>
+
+ <properties>
+ <prometheus.client.version>1.3.0</prometheus.client.version>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.apache.storm</groupId>
+ <artifactId>storm-server</artifactId>
+ <version>${project.version}</version>
+ <scope>provided</scope>
+ </dependency>
+ <dependency>
+ <groupId>io.prometheus</groupId>
+ <artifactId>prometheus-metrics-core</artifactId>
+ <version>${prometheus.client.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>io.prometheus</groupId>
+ <artifactId>prometheus-metrics-exporter-pushgateway</artifactId>
+ <version>${prometheus.client.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter</artifactId>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter-api</artifactId>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter-engine</artifactId>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.testcontainers</groupId>
+ <artifactId>testcontainers</artifactId>
+ <version>${testcontainers.version}</version>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.testcontainers</groupId>
+ <artifactId>junit-jupiter</artifactId>
+ <version>${testcontainers.version}</version>
+ <scope>test</scope>
+ </dependency>
+ </dependencies>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-pmd-plugin</artifactId>
+ </plugin>
+ </plugins>
+ </build>
+</project>
diff --git a/external/storm-metrics-prometheus/src/main/java/org/apache/storm/metrics/prometheus/PrometheusPreparableReporter.java b/external/storm-metrics-prometheus/src/main/java/org/apache/storm/metrics/prometheus/PrometheusPreparableReporter.java
new file mode 100644
index 000000000..21c0e34eb
--- /dev/null
+++ b/external/storm-metrics-prometheus/src/main/java/org/apache/storm/metrics/prometheus/PrometheusPreparableReporter.java
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.  The ASF licenses this file to you under the Apache License, Version
+ * 2.0 (the "License"); you may not use this file except in compliance with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the License for the specific language governing permissions
+ * and limitations under the License.
+ */
+package org.apache.storm.metrics.prometheus;
+
+import java.security.KeyManagementException;
+import java.security.NoSuchAlgorithmException;
+import java.security.cert.X509Certificate;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import com.codahale.metrics.MetricRegistry;
+import io.prometheus.metrics.exporter.pushgateway.HttpConnectionFactory;
+import io.prometheus.metrics.exporter.pushgateway.PushGateway;
+import io.prometheus.metrics.exporter.pushgateway.Scheme;
+import org.apache.storm.DaemonConfig;
+import org.apache.storm.daemon.metrics.reporters.PreparableReporter;
+import org.apache.storm.utils.ObjectReader;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.net.ssl.HttpsURLConnection;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManager;
+import javax.net.ssl.X509TrustManager;
+
+public class PrometheusPreparableReporter implements PreparableReporter {
+
+    private static final Logger LOG = LoggerFactory.getLogger(PrometheusPreparableReporter.class);
+
+    private static final TrustManager INSECURE_TRUST_MANAGER = new X509TrustManager() {
+
+        @Override
+        public java.security.cert.X509Certificate[] getAcceptedIssuers() {
+            return null;
+        }
+
+        @Override
+        public void checkClientTrusted(X509Certificate[] chain, String authType) {
+        }
+
+        @Override
+        public void checkServerTrusted(X509Certificate[] chain, String authType) {
+        }
+    };
+
+    private static final HttpConnectionFactory INSECURE_CONNECTION_FACTORY = url -> {
+        try {
+            final SSLContext sslContext = SSLContext.getInstance("TLS");
+            sslContext.init(null, new TrustManager[]{INSECURE_TRUST_MANAGER}, null);
+            SSLContext.setDefault(sslContext);
+
+            final HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
+            connection.setHostnameVerifier((hostname, session) -> true);
+            return connection;
+        } catch (NoSuchAlgorithmException | KeyManagementException e) {
+            throw new RuntimeException(e);
+        }
+    };
+
+ private PrometheusReporterClient reporter;
+ private Integer reportingIntervalSecs;
+
+ protected PrometheusReporterClient getReporter() {
+ return reporter;
+ }
+
+    @Override
+    public void prepare(MetricRegistry metricsRegistry, Map<String, Object> daemonConf) {
+        if (daemonConf != null) {
+
+            final String jobName = (String) daemonConf.getOrDefault("storm.daemon.metrics.reporter.plugin.prometheus.job", "storm");
+            final String endpoint = (String) daemonConf.getOrDefault("storm.daemon.metrics.reporter.plugin.prometheus.endpoint", "localhost:9091");
+            final String schemeAsString = (String) daemonConf.getOrDefault("storm.daemon.metrics.reporter.plugin.prometheus.scheme", "http");
+
+            Scheme scheme = Scheme.HTTP;
+
+            try {
+                scheme = Scheme.fromString(schemeAsString);
+            } catch (IllegalArgumentException iae) {
+                LOG.warn("Unsupported scheme. Expecting 'http' or 'https'. Was: {}", schemeAsString);
+            }
+
+            final String basicAuthUser = (String) daemonConf.getOrDefault("storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_user", "");
+            final String basicAuthPassword = (String) daemonConf.getOrDefault("storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_password", "");
+            final boolean skipTlsValidation = (boolean) daemonConf.getOrDefault("storm.daemon.metrics.reporter.plugin.prometheus.skip_tls_validation", false);
+
+            final PushGateway.Builder builder = PushGateway.builder();
+
+            builder.address(endpoint).job(jobName);
+
+            if (!basicAuthUser.isBlank() && !basicAuthPassword.isBlank()) {
+                builder.basicAuth(basicAuthUser, basicAuthPassword);
+            }
+
+            builder.scheme(scheme);
+
+            if (scheme == Scheme.HTTPS && skipTlsValidation) {
+                builder.connectionFactory(INSECURE_CONNECTION_FACTORY);
+            }
+
+            final PushGateway pushGateway = builder.build();
+
+            reporter = new PrometheusReporterClient(metricsRegistry, pushGateway);
+            reportingIntervalSecs = ObjectReader.getInt(daemonConf.get(DaemonConfig.STORM_DAEMON_METRICS_REPORTER_INTERVAL_SECS), 10);
+        } else {
+            LOG.warn("No daemonConfiguration was supplied. Don't initialize.");
+        }
+    }
+
+
+    @Override
+    public void start() {
+        if (reporter != null) {
+            LOG.debug("Starting...");
+            reporter.start(reportingIntervalSecs, TimeUnit.SECONDS);
+        } else {
+            throw new IllegalStateException("Attempt to start without preparing " + getClass().getSimpleName());
+        }
+    }
+
+    @Override
+    public void stop() {
+        if (reporter != null) {
+            LOG.debug("Stopping...");
+            reporter.report();
+            reporter.stop();
+        } else {
+            throw new IllegalStateException("Attempt to stop without preparing " + getClass().getSimpleName());
+        }
+    }
+}
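A note on the `prepare()` method above: when the configured scheme string is neither `http` nor `https`, the reporter logs a warning and falls back to HTTP. That fallback can be sketched standalone like this, with a stand-in enum instead of io.prometheus's `Scheme`; the class and method names here are hypothetical:

```java
// Sketch of the scheme fallback in PrometheusPreparableReporter#prepare,
// using a local enum in place of io.prometheus's Scheme type.
public class SchemeFallbackSketch {

    enum Scheme { HTTP, HTTPS }

    // Unknown scheme strings fall back to HTTP, mirroring the reporter's
    // catch of IllegalArgumentException around Scheme.fromString(...).
    static Scheme parseScheme(String value) {
        try {
            return Scheme.valueOf(value.toUpperCase());
        } catch (IllegalArgumentException iae) {
            return Scheme.HTTP;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseScheme("https")); // HTTPS
        System.out.println(parseScheme("ftp"));   // falls back to HTTP
    }
}
```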
diff --git a/external/storm-metrics-prometheus/src/main/java/org/apache/storm/metrics/prometheus/PrometheusReporterClient.java b/external/storm-metrics-prometheus/src/main/java/org/apache/storm/metrics/prometheus/PrometheusReporterClient.java
new file mode 100644
index 000000000..20a8ddd20
--- /dev/null
+++ b/external/storm-metrics-prometheus/src/main/java/org/apache/storm/metrics/prometheus/PrometheusReporterClient.java
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.  The ASF licenses this file to you under the Apache License, Version
+ * 2.0 (the "License"); you may not use this file except in compliance with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the License for the specific language governing permissions
+ * and limitations under the License.
+ */
+package org.apache.storm.metrics.prometheus;
+
+import com.codahale.metrics.Counter;
+import com.codahale.metrics.Gauge;
+import com.codahale.metrics.Histogram;
+import com.codahale.metrics.Meter;
+import com.codahale.metrics.MetricFilter;
+import com.codahale.metrics.MetricRegistry;
+import com.codahale.metrics.ScheduledReporter;
+import com.codahale.metrics.Snapshot;
+import com.codahale.metrics.Timer;
+import io.prometheus.metrics.exporter.pushgateway.PushGateway;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * This reporter pushes common cluster metrics towards a Prometheus Pushgateway.
+ */
+public class PrometheusReporterClient extends ScheduledReporter {
+    private static final Logger LOG = LoggerFactory.getLogger(PrometheusReporterClient.class);
+
+    private static final Map<String, Object> CLUSTER_SUMMARY_METRICS = new HashMap<>();
+
+    private static final TimeUnit DURATION_UNIT = TimeUnit.MILLISECONDS;
+    private static final TimeUnit RATE_UNIT = TimeUnit.SECONDS;
+
+    private final PushGateway prometheus;
+
+    /**
+     * Creates a new {@link PrometheusReporterClient} instance.
+     *
+     * @param registry   the {@link MetricRegistry} containing the metrics this
+     *                   reporter will report
+     * @param prometheus the {@link PushGateway} which is responsible for sending metrics
+     *                   via a transport protocol
+     */
+    protected PrometheusReporterClient(MetricRegistry registry, PushGateway prometheus) {
+        super(registry, "prometheus-reporter", MetricFilter.ALL, RATE_UNIT, DURATION_UNIT, null, true, Collections.emptySet());
+        this.prometheus = prometheus;
+    }
+
+    @Override
+    public void report(SortedMap<String, Gauge> gauges, SortedMap<String, Counter> counters, SortedMap<String, Histogram> histograms,
+                       SortedMap<String, Meter> meters, SortedMap<String, Timer> timers) {
+        try {
+            if (CLUSTER_SUMMARY_METRICS.isEmpty()) {
+                initClusterMetrics();
+            }
+
+            for (Map.Entry<String, Gauge> e : gauges.entrySet()) {
+
+                final io.prometheus.metrics.core.metrics.Gauge pGauge = (io.prometheus.metrics.core.metrics.Gauge) CLUSTER_SUMMARY_METRICS.get(e.getKey());
+                if (pGauge != null) {
+                    try {
+                        pGauge.set(toDouble(e.getValue().getValue()));
+                    } catch (NumberFormatException ignored) {
+                        LOG.warn("Invalid type for Gauge {}: {}", e.getKey(), e.getValue().getClass().getName());
+                    }
+                }
+            }
+
+            for (Map.Entry<String, Histogram> e : histograms.entrySet()) {
+                final io.prometheus.metrics.core.metrics.Histogram pHisto = (io.prometheus.metrics.core.metrics.Histogram) CLUSTER_SUMMARY_METRICS.get(e.getKey());
+                if (pHisto != null) {
+                    final Snapshot s = e.getValue().getSnapshot();
+                    for (double d : s.getValues()) {
+                        pHisto.observe(d);
+                    }
+                }
+            }
+
+            // Counters, Timers and Meters are not implemented (yet),
+            // since we don't need them for Cluster Summary Metrics, cf. https://storm.apache.org/releases/current/ClusterMetrics.html
+
+            prometheus.push();
+        } catch (IOException e) {
+            LOG.warn("Failed to push metrics to configured Prometheus Pushgateway.", e);
+        }
+    }
+
+ private double toDouble(Object obj) {
+ double value;
+ if (obj instanceof Number) {
+ value = ((Number) obj).doubleValue();
+ } else if (obj instanceof Boolean) {
+ value = ((Boolean) obj) ? 1 : 0;
+ } else {
+ value = Double.parseDouble(obj.toString());
+ }
+ return value;
+
+ }
+
+    private static void initClusterMetrics() {
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:num-nimbus-leaders", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_num_nimbus_leaders")
+                .help("Number of nimbuses marked as a leader. This should really only ever be 1 in a healthy cluster, or 0 for a short period of time while a fail over happens.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:num-nimbuses", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_num_nimbuses")
+                .help("Number of nimbuses, leader or standby.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:num-supervisors", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_num_supervisors")
+                .help("Number of supervisors.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:num-topologies", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_num_topologies")
+                .help("Number of topologies.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:num-total-used-workers", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_num_total_used_workers")
+                .help("Number of used workers/slots.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:num-total-workers", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_num_total_workers")
+                .help("Number of workers/slots.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:total-fragmented-cpu-non-negative", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_total_fragmented_cpu_non_negative")
+                .help("Total fragmented CPU (% of core). This is CPU that the system thinks it cannot use because other resources on the node are used up.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.cluster:total-fragmented-memory-non-negative", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("summary_cluster_total_fragmented_memory_non_negative")
+                .help("Total fragmented memory (MB). This is memory that the system thinks it cannot use because other resources on the node are used up.")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("nimbus:available-cpu-non-negative", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("nimbus_available_cpu_non_negative")
+                .help("Available cpu on the cluster (% of a core).")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("nimbus:total-cpu", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("nimbus_total_cpu")
+                .help("total CPU on the cluster (% of a core)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("nimbus:total-memory", io.prometheus.metrics.core.metrics.Gauge.builder()
+                .name("nimbus_total_memory")
+                .help("total memory on the cluster MB")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:assigned-cpu", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_assigned_cpu")
+                .help("CPU scheduled per topology (% of a core)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:assigned-mem-off-heap", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_assigned_mem_off_heap")
+                .help("Off heap memory scheduled per topology (MB)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:assigned-mem-on-heap", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_assigned_mem_on_heap")
+                .help("On heap memory scheduled per topology (MB)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:num-executors", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_num_executors")
+                .help("Number of executors per topology")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:num-tasks", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_num_tasks")
+                .help("Number of tasks per topology")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:num-workers", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_num_workers")
+                .help("Number of workers per topology")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:replication-count", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_replication_count")
+                .help("Replication count per topology")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:requested-cpu", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_requested_cpu")
+                .help("CPU requested per topology (% of a core)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:requested-mem-off-heap", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_requested_mem_off_heap")
+                .help("Off heap memory requested per topology (MB)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:requested-mem-on-heap", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_requested_mem_on_heap")
+                .help("On heap memory requested per topology (MB)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.topologies:uptime-secs", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("summary_topologies_uptime_secs")
+                .help("Uptime per topology (seconds)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:fragmented-cpu", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_fragmented_cpu")
+                .help("fragmented CPU per supervisor (% of a core)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:fragmented-mem", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_fragmented_mem")
+                .help("fragmented memory per supervisor (MB)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:num-used-workers", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_num_used_workers")
+                .help("workers used per supervisor")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:num-workers", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_num_workers")
+                .help("number of workers per supervisor")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:uptime-secs", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_uptime_secs")
+                .help("uptime of supervisors")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:used-cpu", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_used_cpu")
+                .help("CPU used per supervisor (% of a core)")
+                .register());
+
+        CLUSTER_SUMMARY_METRICS.put("summary.supervisors:used-mem", io.prometheus.metrics.core.metrics.Histogram.builder()
+                .name("supervisors_used_mem")
+                .help("memory used per supervisor (MB)")
+                .register());
+
+    }
+
+}
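For readers of this commit: the `toDouble` helper above accepts `Number`, `Boolean`, and numeric strings (which is why the test below can register a gauge holding the string `"500"`). The same conversion rule, as a standalone sketch with a hypothetical class name:

```java
// Mirrors PrometheusReporterClient#toDouble: Numbers pass through,
// Booleans map to 1/0, anything else is parsed as a decimal string
// (a non-numeric string would throw NumberFormatException, which the
// reporter catches and logs).
public class ToDoubleSketch {

    static double toDouble(Object obj) {
        if (obj instanceof Number) {
            return ((Number) obj).doubleValue();
        } else if (obj instanceof Boolean) {
            return ((Boolean) obj) ? 1 : 0;
        }
        return Double.parseDouble(obj.toString());
    }

    public static void main(String[] args) {
        System.out.println(toDouble(5));      // 5.0
        System.out.println(toDouble(true));   // 1.0
        System.out.println(toDouble("500"));  // 500.0
    }
}
```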
diff --git a/external/storm-metrics-prometheus/src/test/java/org/apache/storm/metrics/prometheus/PrometheusPreparableReporterTest.java b/external/storm-metrics-prometheus/src/test/java/org/apache/storm/metrics/prometheus/PrometheusPreparableReporterTest.java
new file mode 100644
index 000000000..a49a808d7
--- /dev/null
+++ b/external/storm-metrics-prometheus/src/test/java/org/apache/storm/metrics/prometheus/PrometheusPreparableReporterTest.java
@@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.  The ASF licenses this file to you under the Apache License, Version
+ * 2.0 (the "License"); you may not use this file except in compliance with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the License for the specific language governing permissions
+ * and limitations under the License.
+ */
+package org.apache.storm.metrics.prometheus;
+
+import com.codahale.metrics.MetricRegistry;
+import org.apache.storm.metrics2.SimpleGauge;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.wait.strategy.Wait;
+import org.testcontainers.junit.jupiter.Testcontainers;
+import org.testcontainers.utility.MountableFile;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Arrays;
+import java.util.Base64;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@Testcontainers(disabledWithoutDocker = true)
+public class PrometheusPreparableReporterTest {
+
+ private GenericContainer<?> pushGatewayContainer;
+
+    @BeforeEach
+    public void setUp() {
+        pushGatewayContainer = new GenericContainer<>("prom/pushgateway:v1.8.0")
+                .withExposedPorts(9091)
+                .waitingFor(Wait.forListeningPort());
+    }
+
+    @AfterEach
+    public void tearDown() {
+        pushGatewayContainer.stop();
+    }
+
+    @Test
+    public void testSimple() throws IOException {
+        pushGatewayContainer.start();
+
+        final PrometheusPreparableReporter sut = new PrometheusPreparableReporter();
+
+        final Map<String, Object> daemonConf = Map.of(
+                "storm.daemon.metrics.reporter.plugin.prometheus.job", "test_simple",
+                "storm.daemon.metrics.reporter.plugin.prometheus.endpoint", "localhost:" + pushGatewayContainer.getMappedPort(9091),
+                "storm.daemon.metrics.reporter.plugin.prometheus.scheme", "http"
+        );
+
+        runTest(sut, daemonConf);
+
+    }
+
+    @Test
+    public void testBasicAuth() throws IOException {
+        pushGatewayContainer
+                .withCopyFileToContainer(MountableFile.forClasspathResource("/pushgateway-basicauth.yaml"), "/pushgateway/pushgateway-basicauth.yaml")
+                .withCommand("--web.config.file", "pushgateway-basicauth.yaml")
+                .start();
+
+        final PrometheusPreparableReporter sut = new PrometheusPreparableReporter();
+
+        final Map<String, Object> daemonConf = Map.of(
+                "storm.daemon.metrics.reporter.plugin.prometheus.job", "test_simple",
+                "storm.daemon.metrics.reporter.plugin.prometheus.endpoint", "localhost:" + pushGatewayContainer.getMappedPort(9091),
+                "storm.daemon.metrics.reporter.plugin.prometheus.scheme", "http",
+                "storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_user", "my_user",
+                "storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_password", "secret_password"
+        );
+
+        runTest(sut, daemonConf);
+    }
+
+    @Test
+    public void testTls() throws IOException {
+        pushGatewayContainer
+                .withCopyFileToContainer(MountableFile.forClasspathResource("/pushgateway-ssl.yaml"), "/pushgateway/pushgateway-ssl.yaml")
+                .withCommand("--web.config.file", "pushgateway-ssl.yaml")
+                .start();
+
+        final PrometheusPreparableReporter sut = new PrometheusPreparableReporter();
+
+        final Map<String, Object> daemonConf = Map.of(
+                "storm.daemon.metrics.reporter.plugin.prometheus.job", "test_simple",
+                "storm.daemon.metrics.reporter.plugin.prometheus.endpoint", "localhost:" + pushGatewayContainer.getMappedPort(9091),
+                "storm.daemon.metrics.reporter.plugin.prometheus.scheme", "https",
+                "storm.daemon.metrics.reporter.plugin.prometheus.skip_tls_validation", true
+        );
+
+        runTest(sut, daemonConf);
+    }
+
+
+ private void runTest(PrometheusPreparableReporter sut, Map<String, Object>
daemonConf) throws IOException {
+ // We fake the metrics here. In a real Storm environment, these
metrics are generated.
+ final MetricRegistry r = new MetricRegistry();
+ final SimpleGauge<Integer> supervisor = new SimpleGauge<>(5);
+ r.register("summary.cluster:num-supervisors", supervisor);
+ r.register("nimbus:total-memory", new SimpleGauge<>(5.6));
+ r.register("nimbus:total-cpu", new SimpleGauge<>("500"));
+
+ sut.prepare(r, daemonConf);
+
+ //manually trigger a reporting here, in a real Storm environment, this
is called by a scheduled executor.
+ sut.getReporter().report();
+
+        assertMetrics(
+                List.of(
+                        "# HELP summary_cluster_num_supervisors Number of supervisors.",
+                        "# TYPE summary_cluster_num_supervisors gauge",
+                        "summary_cluster_num_supervisors{instance=\"\",job=\"test_simple\"} 5",
+                        "# HELP nimbus_total_memory total memory on the cluster MB",
+                        "# TYPE nimbus_total_memory gauge",
+                        "nimbus_total_memory{instance=\"\",job=\"test_simple\"} 5.6",
+                        "# HELP nimbus_total_cpu total CPU on the cluster (% of a core)",
+                        "# TYPE nimbus_total_cpu gauge",
+                        "nimbus_total_cpu{instance=\"\",job=\"test_simple\"} 500"
+                ),
+                daemonConf.get("storm.daemon.metrics.reporter.plugin.prometheus.scheme") + "://" + daemonConf.get("storm.daemon.metrics.reporter.plugin.prometheus.endpoint") + "/metrics", daemonConf);
+
+ //update a metric
+ supervisor.set(100);
+
+        //manually trigger a reporting here, in a real Storm environment, this is called by a scheduled executor.
+ sut.getReporter().report();
+
+        assertMetrics(
+                List.of(
+                        "# HELP summary_cluster_num_supervisors Number of supervisors.",
+                        "# TYPE summary_cluster_num_supervisors gauge",
+                        "summary_cluster_num_supervisors{instance=\"\",job=\"test_simple\"} 100",
+                        "# HELP nimbus_total_memory total memory on the cluster MB",
+                        "# TYPE nimbus_total_memory gauge",
+                        "nimbus_total_memory{instance=\"\",job=\"test_simple\"} 5.6",
+                        "# HELP nimbus_total_cpu total CPU on the cluster (% of a core)",
+                        "# TYPE nimbus_total_cpu gauge",
+                        "nimbus_total_cpu{instance=\"\",job=\"test_simple\"} 500"
+                ),
+                daemonConf.get("storm.daemon.metrics.reporter.plugin.prometheus.scheme") + "://" + daemonConf.get("storm.daemon.metrics.reporter.plugin.prometheus.endpoint") + "/metrics", daemonConf);
+ }
+
+    private void assertMetrics(List<String> elements, String endpoint, Map<String, Object> conf) throws IOException {
+        final String content = readContent(endpoint, conf);
+        Assertions.assertNotNull(content);
+        final Set<String> contentLinesSet = new HashSet<>(Arrays.asList(content.split("\n")));
+        elements.forEach(find -> Assertions.assertTrue(contentLinesSet.contains(find), "Did not find: " + find));
+    }
+ }
+
+    private String readContent(String url, Map<String, Object> conf) throws IOException {
+        final URL obj = new URL(url);
+        final HttpURLConnection con = (HttpURLConnection) obj.openConnection();
+        con.setRequestMethod("GET");
+
+        if (conf.containsKey("storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_user")) {
+
+            String auth = conf.get("storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_user") + ":" + conf.get("storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_password");
+            String encodedAuth = Base64.getEncoder().encodeToString(auth.getBytes());
+            String authHeaderValue = "Basic " + encodedAuth;
+            con.setRequestProperty("Authorization", authHeaderValue);
+        }
+
+
+        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
+ String inputLine;
+ StringBuilder response = new StringBuilder();
+
+ while ((inputLine = in.readLine()) != null) {
+ response.append(inputLine).append("\n");
+ }
+ return response.toString();
+ }
+
+ }
+
+}
diff --git a/external/storm-metrics-prometheus/src/test/resources/pushgateway-basicauth.yaml b/external/storm-metrics-prometheus/src/test/resources/pushgateway-basicauth.yaml
new file mode 100644
index 000000000..f36b4c3bc
--- /dev/null
+++ b/external/storm-metrics-prometheus/src/test/resources/pushgateway-basicauth.yaml
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+basic_auth_users:
+  # Note: The bcrypt hash of the password was generated with the following command line:
+  # python -c 'import bcrypt; print(bcrypt.hashpw(b"secret_password", bcrypt.gensalt(rounds=10)).decode("ascii"))'
+ my_user: $2b$10$kmIxr/4wpcORDXnKLvTMC.WPGqT8nqjBm8AI3MqGkzcSrWJioTfUG
\ No newline at end of file
diff --git a/external/storm-metrics-prometheus/src/test/resources/pushgateway-ssl.yaml b/external/storm-metrics-prometheus/src/test/resources/pushgateway-ssl.yaml
new file mode 100644
index 000000000..8a275938c
--- /dev/null
+++ b/external/storm-metrics-prometheus/src/test/resources/pushgateway-ssl.yaml
@@ -0,0 +1,103 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+tls_server_config:
+ # cert and key have been generated with the following command:
+  # openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 36500 -nodes -subj "/CN=localhost"
+ cert: |
+ -----BEGIN CERTIFICATE-----
+ MIIFCzCCAvOgAwIBAgIUPwSov6+heI4uY6+fvB1N+1EN3FwwDQYJKoZIhvcNAQEL
+ BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MCAXDTI0MDUxMDE0MzY1NloYDzIxMjQw
+ NDE2MTQzNjU2WjAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwggIiMA0GCSqGSIb3DQEB
+ AQUAA4ICDwAwggIKAoICAQCnrySCLrYo5ad6w2/Tp/toGKnN4YW9C74eqmlqntht
+ VbGNkBha4FpEuuadjB64bH3dTSwXWg3ZMxUnjZplRCdosM7+beApzh8ZR+/Ju/qk
+ CPEw4N1J+NVZKzE7brt9rLKT+Cttjf4K8luUWnlVdOIl2UjqejCougV29TerctlD
+ svx7jUAoyauPhxbVC8Fmww8rCEox3xbv0MEe2bsc0hQP2opTRfXHwRKBq0hmA9x4
+ 1FEqSl7gzQwbg7hHH/AfgxXSsQqRazIDzZEhDNePF0O9PYALrTJxonBrQ4uGBwJE
+ NQmlyIDnmynYmp5dOuXY8nspansOq7pQkubsg0qYiQ9VwLLZ/ApmFnvcY2uqSpta
+ TwqLalDYbUMqK61DtG6kHx3rHTokuLRFiXAqP++QdUkajqKtoH7quvKBAfzgrtHj
+ WJhNkDbaGaYlpyIlekyrxFdp25T06BEHzpFjylRGhuAaz2tbo1n7ynQ6KusdHEAf
+ l51JXQSS6yU/1Wy0yXo5yuKOykj0ey15s0AoH5yMHhEhAUVG0SKxtcWnnzAFcA9k
+ DYagco+IjQ+wRuX4jdM5S/l2Kmu8tvW0O7olNyxdWh2gzH/gLmt8ZthNEyTmb4mM
+ 7kPNYjjcwrbs+oNc/Qfwk66+pn+vwYmRuZJolTZvGAhp9Es3OC19suDEgwz2bLYI
+ 4wIDAQABo1MwUTAdBgNVHQ4EFgQUMx8SJAoEhboJXjRR4iz+/I2tUhkwHwYDVR0j
+ BBgwFoAUMx8SJAoEhboJXjRR4iz+/I2tUhkwDwYDVR0TAQH/BAUwAwEB/zANBgkq
+ hkiG9w0BAQsFAAOCAgEASkE0xnofeUBGTZQK4BdRbqYgSaL9XSKi0UBH7Jw7a+nr
+ vdNu1VqOYRuSjI1FH2aYFKIaKEOipd8Z/nb1LjYArCerC51Mf/pl1mEDiUVyxECL
+ 8F/IRj4xWwglbMMHpZw9wGYKAyG/QIpU/skbKEptAfUNb25kAVqhjuQ2vBb8w1kz
+ GdLf9pGXRCUefYtJIhgLVMDLhR7XVI8tsL2KfBE9fAMeSO/YAr1sa1wVKdqsmxQD
+ StQoecib3IhspO8QbRSJ10pb6p0sffTyU3jxDonv6b+E1jAslS0FQOxCUHnjwqG/
+ TuwW1MPxl4QeOpX00cI9ReZd2qla6+aaxZDccbpDmHtJJ3nKoFVwknUYEqTu+B7y
+ qZF2iyBtIaPJmdouMISMvlEsFdR4vkcD+2eCWMLlZDkOfDifIF1ny+ni8xU7UELa
+ XoDOKdIey7A/ddKi86mUdvjp5DRD85ghpSByn20UTdSmvbjHqpdlTPuFSyvix9nk
+ 3KGJxS7Ra0hqOdGE8JHTmIXAFIjkOEEliNnmGRfd2l6EmDBEXufymlVgvz/DHlgF
+ krIsjl2SB9AUJckwmj17LdYN6pq9cUdaq7+7SIr12XCxPXyomIBSjlNyPCCy7u5L
+ nxDNTKImHmmupjoLCJ8MKpZ4fuva+kI372R47l2zMkwBWiEwn95C6+JB2kaFEiQ=
+ -----END CERTIFICATE-----
+ key: |
+ -----BEGIN PRIVATE KEY-----
+ MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCnrySCLrYo5ad6
+ w2/Tp/toGKnN4YW9C74eqmlqnthtVbGNkBha4FpEuuadjB64bH3dTSwXWg3ZMxUn
+ jZplRCdosM7+beApzh8ZR+/Ju/qkCPEw4N1J+NVZKzE7brt9rLKT+Cttjf4K8luU
+ WnlVdOIl2UjqejCougV29TerctlDsvx7jUAoyauPhxbVC8Fmww8rCEox3xbv0MEe
+ 2bsc0hQP2opTRfXHwRKBq0hmA9x41FEqSl7gzQwbg7hHH/AfgxXSsQqRazIDzZEh
+ DNePF0O9PYALrTJxonBrQ4uGBwJENQmlyIDnmynYmp5dOuXY8nspansOq7pQkubs
+ g0qYiQ9VwLLZ/ApmFnvcY2uqSptaTwqLalDYbUMqK61DtG6kHx3rHTokuLRFiXAq
+ P++QdUkajqKtoH7quvKBAfzgrtHjWJhNkDbaGaYlpyIlekyrxFdp25T06BEHzpFj
+ ylRGhuAaz2tbo1n7ynQ6KusdHEAfl51JXQSS6yU/1Wy0yXo5yuKOykj0ey15s0Ao
+ H5yMHhEhAUVG0SKxtcWnnzAFcA9kDYagco+IjQ+wRuX4jdM5S/l2Kmu8tvW0O7ol
+ NyxdWh2gzH/gLmt8ZthNEyTmb4mM7kPNYjjcwrbs+oNc/Qfwk66+pn+vwYmRuZJo
+ lTZvGAhp9Es3OC19suDEgwz2bLYI4wIDAQABAoICADi6JJSx7sgZITaDxWIKMx/9
+ L/zJbbANt+yx4+XBBSC/28gzVjnwKjmULQ5hZ8cmVNI4GFFyErtG78IownG9w8NE
+ BVLHow0hgR3RW0qZAGrb55SMjfBHcQ2wcgBULrOOZ/9s9mwinC3h3Z9rmB6T4ynA
+ v00rtyhtfgnHXWTv/pZLh+TYXTsvNo3gupWqW2xDUu9Q56DFgwHwUlT4fbd7TnQq
+ j58qTMKeC3+4jU6NwdlSon63GC/ezljEj+Pn5xkSBKD5acTWSd5FffJ7YLU0vqLX
+ mmjY1/bfaD6xZBMcbeTbOH9QPGOd92Mis66AjV9+cLILJsRIzkgR2nNq2yKNQ5VG
+ rtpV7Vf9Mw3eI9+VKa2vjfdLlQUK9sSoBDOyyoDBYrWj9uL4E118wDTVeq6Sgk1T
+ Wxit6EDsyirNeax3VLj25KAiO0LWAp5kqvDLljMNgOkgKNAxSX5eM18ibKGWv2an
+ VTQ0TeurHEQidUo6SJA3XJJ0yeFgAN4hHSJB/OGkeCCyjAq4aieCMh7ziuXGJ48h
+ /g9hC9LcfIm8nOwPt/hrMVj4bpgIYGLlaNjLWvMoCoT/TwUdigqUSA4iYbR0bbVr
+ /So7tv5RH1QxRRWLvF1Y6bE87LzHjd4TpuZdE2Nh/Q8tBRuda0WBZucF/7fRo1Sv
+ +WPopY/3eXLhsrEmWsZxAoIBAQDsLxrag0KsOxyYB9J1z4fRrgpts8Af56ThxFOj
+ /X1fdW2lkblCeZZO2B1LS9vEoMmp1M9culX1uj82ST8VZNlrIaCwa5YzofbpyF6X
+ U2yVG+g8BxOREaMN0+V1hzrvI+lFFqx8Tl+3ex+pKYANOv+i4LfN5D+y5imlLzaT
+ M9l3gqMXZk0ZrejZBKJa9sWm949WmDDyXaKj6qC9XXLLXKbdRE9tAkPf5CMe3aLm
+ 2pbt1+nvLXgsAnsz2dt65uZEXnlubdFJr3KGK5+5BzaV0W6+8J+LSfyqUxz+AcWZ
+ +zrwRKzQ3VWx/3fuc+lkdQUJyXDz40Mk5Y2wqA9X8g5qfOcJAoIBAQC1wMI6XkyG
+ ufQHdV/B8ALVKnN1mG82t35rxCdNpfWSqsyTuLxkem1F9ZgzK4Q6CmIgEW1TxbP9
+ 74eTdtPTuP0vP9cRMomRUREblCmsYZv5/c42DbJ7hfBPDiSJHB3JKpMT4yPfEuhN
+ 9DR3gnITV0L9QqxO9TVH2sqO7lM77l9LQpQt3xJARMKCqtGTWGLdEG6skUnXHKz3
+ VBFFt1x4hT5noLVLh4M/df1A1nB2DRm5pEcyOaH1wSTMIGnCvUNgrq2xVoP3TIJs
+ RvWak9h2RO2MHPFf13Nhai3L7gxsdodpswH650Qmk4YO6og7iwUxHXL3B4GCZnK6
+ PDDOzm9Ptp+LAoIBAQDiZHS1KETsmuzZvgW67+cc0lsktLxg2MZvsqUJ+J4ItqMX
+ pguS8MFnajkKR/itDgLATEFIfUSQeqrE+okBlN3jlyRUd4xOid4IUgx5uXnHpCyD
+ /bR/xgwp4Qd+FNYlDKM5mnZT4TxWwCqlGCaqh/cqxYTqUvPMJFue/xatG3JE4HA8
+ qc8V4mHkRFDsKMdlOL+pHdEtQRv5S5owajbzQCiiyCvqLdWp8yDHIWRZLQanjeOr
+ ZEZgyTAXj6iWsmXe+0Ai3hlTLF32xjIgRg3Ipiwl0rjb51vOWETeJgyngO4KCYot
+ 2zudl2f6phj+Nj1SGEmxPhLKd0/OGgo7Hsc6w+chAoIBAHhdwcN58+A9ghj2aIY9
+ dwLI7FHys6Re/QBNlWHdCLcrGfSyoUFBuuBb94HbzePKQJXQNMEH612+peDJDxvm
+ JPaHptyixWxRba0AAGFC+1Mh/NDbXVpkp3MTgKq0zh0Nbv36rSTslqAZnC2RXA7m
+ +VxULVzVE4YUpZTmzISiJsXmv89pLeMWJmL20XhtTnvsh/8M8QPe38WkDRRIjJrc
+ Uym5ypbMleUPNLsdyLjFkEXbP7NJa7MfSElPJftr8BU1WZ5aF2dNagpfLARE6VPZ
+ 7h+eg1PfkW/wK4gkjGHAVYlwnV0Wj5GknWF/fN1CAhw2zo4+kExVoKEpf4FWQW1f
+ GmUCggEBAKcJPOSdg26e0fBtTJ3CUXYwCyTDmvxVg5z9JCEuyziPoj5QqH0tmce3
+ SWH/4bHOSWwnv1y3KwlAqGCzpa2VVvnXsfSn3dPcuQmrkXTtYv+zztMbXAMOyJxp
+ 4KEWow7AU2hKg9TrPXW6Sn3/0au3ejP5QBZBwp+1hbDqUawtSKTPmPIZkBsaGy8W
+ 6m1T5E5KtsNueMK97pH80jKc73zQtx6DjkhEnHRwMc/2EdGf/CB5k+BQ4TeXSUej
+ 2JftHmpMPwFWdtpTJ2jOqBNFps+ULuaX+4H6Bb7vOUKkAn3zhJfh4wFpvYVtnXE+
+ v8yTmHG7BmzLiznu3uPyiEfAPQ3KQ9E=
+ -----END PRIVATE KEY-----
\ No newline at end of file
diff --git a/pom.xml b/pom.xml
index de54eef23..b14b036c5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -495,6 +495,7 @@
<module>external/storm-redis</module>
<module>external/storm-elasticsearch</module>
<module>external/storm-metrics</module>
+ <module>external/storm-metrics-prometheus</module>
<module>external/storm-kafka-client</module>
<module>external/storm-kafka-migration</module>
<module>external/storm-kafka-monitor</module>
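For anyone wanting to try the new module: going only by the configuration keys exercised in the tests above, a storm.yaml snippet for the daemon metrics reporter could look roughly like the sketch below. The hostname and credential values are placeholders, and the key used to register the reporter itself is not shown here, as it does not appear in this excerpt; consult external/storm-metrics-prometheus/README.md for the authoritative configuration.

```yaml
# Hypothetical storm.yaml sketch. Only the prometheus.* keys below are taken
# from the tests in this commit; all values are illustrative placeholders.
storm.daemon.metrics.reporter.plugin.prometheus.job: "storm_cluster"
storm.daemon.metrics.reporter.plugin.prometheus.endpoint: "pushgateway.example.org:9091"
storm.daemon.metrics.reporter.plugin.prometheus.scheme: "https"
storm.daemon.metrics.reporter.plugin.prometheus.skip_tls_validation: false
storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_user: "my_user"
storm.daemon.metrics.reporter.plugin.prometheus.basic_auth_password: "secret_password"
```

As the tests suggest, scheme, skip_tls_validation, and the basic_auth_* keys are each optional and can be combined to match how the target Pushgateway is secured.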