[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 I mean, in general it is probably not the best idea to rely on a port being unavailable as the consensus mechanism for deciding who should claim which port. Then again, I could not think of a straightforward way to coordinate across Job-/TaskManagers without using something external, which would probably get too involved for a metrics-related component – maybe there is something I am missing? ---
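For illustration, the first-free-port strategy under discussion (each Job-/TaskManager scans the configured range and keeps the first port it can bind) can be sketched as follows. This is a hypothetical sketch, not the PrometheusReporter implementation; in a real reporter the bound server socket itself would be kept open, since closing it immediately reintroduces the race.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortClaimDemo {
    /** Returns the first port in [from, to] that can be bound, or -1 if none is free. */
    static int claimFirstFreePort(int from, int to) {
        for (int port = from; port <= to; port++) {
            try (ServerSocket socket = new ServerSocket()) {
                socket.bind(new InetSocketAddress(port));
                // bind succeeded: in a real reporter we would keep this socket open
                // and serve metrics from it instead of closing it again
                return port;
            } catch (IOException e) {
                // port already claimed by another JobManager/TaskManager, try the next one
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("claimed port: " + claimFirstFreePort(9249, 9252));
    }
}
```

The weakness raised in the comment is visible here: "bind failed" is the only coordination signal, so two processes racing for the same port are serialized only by the OS, and a transiently occupied port is indistinguishable from one claimed by a peer.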
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 Also, there should have been the warning "Could not start PrometheusReporter HTTP server on any configured port. Ports: ..." – wasn't this logged? ---
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 Did you configure a port range with sufficiently many (i.e., three) ports? By default, only one port is used. I added a sentence about this to the README, but maybe we can make it more explicit? ---
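For context, a port-range configuration for this reporter would look roughly like the following `flink-conf.yaml` fragment. The exact key names for the version under review are an assumption based on Flink's metrics-reporter configuration scheme; `prom` is an arbitrary reporter name.

```yaml
metrics.reporters: prom
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
# a range wide enough for all co-located JobManagers/TaskManagers,
# not the single default port 9249
metrics.reporter.prom.port: 9250-9260
```

With only the default single port, a second reporter on the same host has nothing left to bind, which is the failure mode discussed above.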
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 [Green Travis](https://travis-ci.org/mbode/flink/builds/290468452) ---
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 I implemented your comments and assembled a [small setup](https://github.com/mbode/flink-prometheus-example) to test the reporter again. It currently clones *master* and builds the reporter from there. In order to test another revision, you can build *flink-metrics-prometheus-1.4-SNAPSHOT.jar* locally, put it into the *flink* subdirectory of the project and add `COPY flink-metrics-prometheus-1.4-SNAPSHOT.jar /opt/flink/lib` to *flink/Dockerfile*. ---
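For reference, the Dockerfile modification described above would look roughly like this; the base image line is an assumption about the example project's layout, while the `COPY` line is the one given in the comment.

```dockerfile
# hypothetical sketch of flink/Dockerfile in the example project
FROM flink:1.4
# put the locally built reporter jar on Flink's classpath
COPY flink-metrics-prometheus-1.4-SNAPSHOT.jar /opt/flink/lib
```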
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 @zentol *ping* ---
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 @zentol It would be great if you could have another look when you get a chance; I added better handling for metrics that are registered multiple times, e.g. by different subtasks. [green travis](https://travis-ci.org/mbode/flink/builds/274685138) ---
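The deduplication mentioned here (several subtasks registering the same metric name must not create duplicate Prometheus collectors, and the collector may only be removed once the last registration is gone) is essentially reference counting. A hypothetical, simplified sketch follows; class and method names are invented, not the PR's actual code.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: tolerate the same metric name being registered by several subtasks. */
public class SharedCollectorRegistry {
    private final Map<String, Integer> referenceCount = new HashMap<>();

    /** Returns true if this name is seen for the first time (a collector must be created). */
    public boolean register(String metricName) {
        return referenceCount.merge(metricName, 1, Integer::sum) == 1;
    }

    /** Returns true if the last reference was dropped (the collector can be removed). */
    public boolean unregister(String metricName) {
        Integer remaining = referenceCount.computeIfPresent(metricName, (k, v) -> v - 1);
        if (remaining != null && remaining == 0) {
            referenceCount.remove(metricName);
            return true;
        }
        return false;
    }
}
```

The point of the counter is that a second `register` for an already-known name is a no-op rather than an exception, matching the `registeringSameMetricTwiceDoesNotThrowException` test added in the PR.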
[GitHub] flink pull request #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on a diff in the pull request: https://github.com/apache/flink/pull/4586#discussion_r136503621
--- Diff: flink-metrics/flink-metrics-prometheus/src/test/java/org/apache/flink/metrics/prometheus/PrometheusReporterTest.java ---
@@ -160,6 +151,43 @@ public void invalidCharactersAreReplacedWithUnderscore() {
 		assertThat(PrometheusReporter.replaceInvalidChars("a,=;:?'b,=;:?'c"), equalTo("a___:__b___:__c"));
 	}
 
+	@Test
+	public void registeringSameMetricTwiceDoesNotThrowException() {
+		Counter counter = new SimpleCounter();
+		counter.inc();
+		String counterName = "testCounter";
+		final FrontMetricGroup group = new FrontMetricGroup<>(0, new TaskManagerMetricGroup(registry, HOST_NAME, TASK_MANAGER));
+
+		reporter.notifyOfAddedMetric(counter, counterName, group);
+		reporter.notifyOfAddedMetric(counter, counterName, group);
+	}
+
+	@Test
+	public void cannotStartTwoReportersOnSamePort() {
+		final MetricRegistry fixedPort1 = new MetricRegistry(MetricRegistryConfiguration.fromConfiguration(createConfigWithOneReporter("test1", "12345")));
+		final MetricRegistry fixedPort2 = new MetricRegistry(MetricRegistryConfiguration.fromConfiguration(createConfigWithOneReporter("test2", "12345")));
+		assertThat(fixedPort1.getReporters(), hasSize(1));
+		assertThat(fixedPort2.getReporters(), hasSize(0));
+	}
+
+	@Test
+	public void canStartTwoReportersWhenUsingPortRange() {
+		final MetricRegistry portRange1 = new MetricRegistry(MetricRegistryConfiguration.fromConfiguration(createConfigWithOneReporter("test1", "9249-9252")));
+		final MetricRegistry portRange2 = new MetricRegistry(MetricRegistryConfiguration.fromConfiguration(createConfigWithOneReporter("test2", "9249-9252")));
+		assertThat(portRange1.getReporters(), hasSize(1));
+		assertThat(portRange2.getReporters(), hasSize(1));
+	}
+
+	@Test
+	public void cannotStartThreeReportersWhenPortRangeIsTooSmall() {
+		final MetricRegistry smallPortRange1 = new MetricRegistry(MetricRegistryConfiguration.fromConfiguration(createConfigWithOneReporter("test1", "9253-9254")));
--- End diff --
done
--- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] flink issue #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4586 [Green Travis](https://travis-ci.org/mbode/flink/builds/268258386) ---
[GitHub] flink pull request #4586: [FLINK-7502] [metrics] Improve PrometheusReporter
GitHub user mbode opened a pull request: https://github.com/apache/flink/pull/4586 [FLINK-7502] [metrics] Improve PrometheusReporter

## What is the purpose of the change
*This pull request addresses several issues that came up when testing PrometheusReporter in a YARN environment. Most significantly, it is now possible to specify a port range, similarly to how it is done for JmxReporter.*

## Brief change log
* Do not throw an exception when the same metric is added twice
* Add the possibility to configure a port range
* Bump prometheus.version 0.0.21 -> 0.0.26
* Use simpleclient_httpserver instead of nanohttpd

## Verifying this change
* The changed code is covered by the existing test `PrometheusReporterTest`; test cases for the new functionality were added.
* Tested building the jar and adding it to the Flink lib directory, then configuring the Prometheus reporter and checking the endpoints in the configured port range for metrics. In particular, tested the case where a JobManager and one TaskManager are co-located – there should now be separately reported metrics endpoints.

## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): **yes**, upgrade of the Prometheus client
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: **yes**, `PrometheusReporter` is `@PublicEvolving`, but it has not been released as of now.
- The serializers: **no**
- The runtime per-record code paths (performance sensitive): **don't know**
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: **don't know**

## Documentation
- Does this pull request introduce a new feature? **yes**, a configurable port range
- If yes, how is the feature documented? **docs**

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mbode/flink prometheus_reporter_improvements

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/4586.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4586

commit ef4e1aa79db1dca361f2a7e274e2355008881cfe
Author: Maximilian Bode <maximilian.b...@tngtech.com>
Date: 2017-08-24T13:59:24Z

[FLINK-7502] Improve PrometheusReporter
* Do not throw exception when same metric is added twice
* Add possibility to configure port range
* Bump prometheus.version 0.0.21 -> 0.0.26
* Use simpleclient_httpserver instead of nanohttpd
---
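The "specify a port range" feature in the change log above implies turning a string such as `"9249"` or `"9249-9252"` into candidate ports to try. Flink has its own utilities for this, so the following is only an illustrative sketch with invented names, not the PR's code.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical parser for port specs such as "9249" or "9249-9252". */
public class PortRangeDemo {
    static List<Integer> parsePortRange(String spec) {
        List<Integer> ports = new ArrayList<>();
        int dash = spec.indexOf('-');
        if (dash < 0) {
            // single port, e.g. "9249"
            ports.add(Integer.parseInt(spec.trim()));
        } else {
            // inclusive range, e.g. "9249-9252"
            int from = Integer.parseInt(spec.substring(0, dash).trim());
            int to = Integer.parseInt(spec.substring(dash + 1).trim());
            for (int port = from; port <= to; port++) {
                ports.add(port);
            }
        }
        return ports;
    }
}
```

A reporter would then walk this list and bind the first free port, which is what the `canStartTwoReportersWhenUsingPortRange` test exercises.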
[GitHub] flink issue #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/3833 I see, makes sense. Finally got around to fixing the dependency and getting a [Green Travis](https://travis-ci.org/mbode/flink/builds/240926382). ---
[GitHub] flink issue #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/3833 Oh, I broke the stricter checkstyle. To me, it feels a bit weird to have to put javadoc on tests, is that intended? ---
[GitHub] flink pull request #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on a diff in the pull request: https://github.com/apache/flink/pull/3833#discussion_r119661118
--- Diff: flink-metrics/flink-metrics-prometheus/pom.xml ---
@@ -0,0 +1,129 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+		xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-metrics</artifactId>
+		<version>1.3-SNAPSHOT</version>
--- End diff --
Done
---
[GitHub] flink issue #4036: [FLINK-6781] Make statement fetch size configurable in JD...
Github user mbode commented on the issue: https://github.com/apache/flink/pull/4036 [Green Travis](https://travis-ci.org/mbode/flink/builds/238207585) ---
[GitHub] flink pull request #4036: [FLINK-6781] Make statement fetch size configurabl...
Github user mbode commented on a diff in the pull request: https://github.com/apache/flink/pull/4036#discussion_r119528882
--- Diff: flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormatTest.java ---
@@ -101,6 +105,33 @@ public void testIncompleteConfiguration() throws IOException {
 	}
 
 	@Test
+	public void defaultFetchSizeIsUsedIfNotConfiguredOtherwise() throws SQLException {
+		jdbcInputFormat = JDBCInputFormat.buildJDBCInputFormat()
+				.setDrivername(DRIVER_CLASS)
+				.setDBUrl(DB_URL)
+				.setQuery(SELECT_ALL_BOOKS)
+				.setRowTypeInfo(ROW_TYPE_INFO)
+				.finish();
+		jdbcInputFormat.openInputFormat();
+		assertThat(jdbcInputFormat.getStatement().getFetchSize(), equalTo(1));
--- End diff --
The idea was to make sure that existing behavior is not broken. I have made it explicit that this test depends on Derby being used.
---
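As background for the fetch-size setting the test above exercises: `java.sql.Statement#setFetchSize` rejects negative values, so a builder for an input format would typically validate the configured value up front rather than failing later at open time. A hypothetical sketch, with invented names, of such a guard:

```java
/** Sketch of up-front fetch-size validation, mirroring the JDBC contract. */
public class FetchSizeDemo {
    static int validateFetchSize(int fetchSize) {
        // java.sql.Statement#setFetchSize throws for negative values,
        // so reject them before any connection is opened
        if (fetchSize < 0) {
            throw new IllegalArgumentException(
                    "Fetch size must not be negative, but was " + fetchSize);
        }
        return fetchSize;
    }
}
```

Zero is deliberately allowed: in JDBC, a fetch size of 0 means "let the driver decide", which is also why the default observed in the test is driver-specific (Derby reports 1).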
[GitHub] flink pull request #4036: [FLINK-6781] Make statement fetch size configurabl...
GitHub user mbode opened a pull request: https://github.com/apache/flink/pull/4036 [FLINK-6781] Make statement fetch size configurable in JDBCInputFormat You can merge this pull request into a Git repository by running: $ git pull https://github.com/mbode/flink JDBCInputFormat_fetchsize Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/4036.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4036 commit 69d9b9faa580be626fdef970c9b2f83ed67d798a Author: Maximilian Bode <maximilian.b...@tngtech.com> Date: 2017-05-31T16:46:55Z [FLINK-6781] Make statement fetch size configurable in JDBCInputFormat. commit 34bc56b31990907a73f146ccc21e58980e214fac Author: Maximilian Bode <maximilian.b...@tngtech.com> Date: 2017-05-31T16:48:51Z [FLINK-6781] Update DataSet docs concerning JDBCInputFormat. The way to provide type information was outdated. ---
[GitHub] flink issue #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/3833 @zentol Would you mind checking that I got the shading right? ---
[GitHub] flink issue #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/3833 Okay, guava is not used anymore. I am not sure about the shading part. Would you expect either prometheus clients or nanohttpd to be used in Flink user code? Or are there other advantages to shading? If so, could you point me to a module I could copy the "Flink way of shading" from? ---
[GitHub] flink pull request #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on a diff in the pull request: https://github.com/apache/flink/pull/3833#discussion_r116376242
--- Diff: flink-metrics/flink-metrics-prometheus/src/main/java/org/apache/flink/metrics/prometheus/PrometheusReporter.java ---
@@ -0,0 +1,256 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics.prometheus;
+
+import com.google.common.base.CharMatcher;
+import com.google.common.collect.ImmutableList;
+import fi.iki.elonen.NanoHTTPD;
+import io.prometheus.client.Collector;
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.exporter.common.TextFormat;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.metrics.CharacterFilter;
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.Gauge;
+import org.apache.flink.metrics.Histogram;
+import org.apache.flink.metrics.HistogramStatistics;
+import org.apache.flink.metrics.Meter;
+import org.apache.flink.metrics.Metric;
+import org.apache.flink.metrics.MetricConfig;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.metrics.reporter.MetricReporter;
+import org.apache.flink.runtime.metrics.groups.AbstractMetricGroup;
+import org.apache.flink.runtime.metrics.groups.FrontMetricGroup;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.StringWriter;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+import static com.google.common.collect.Iterables.toArray;
+
+@PublicEvolving
+public class PrometheusReporter implements MetricReporter {
+	private static final Logger log = LoggerFactory.getLogger(PrometheusReporter.class);
+
+	static final String ARG_PORT = "port";
+	private static final int DEFAULT_PORT = 9249;
+
+	private static final Pattern UNALLOWED_CHAR_PATTERN = Pattern.compile("[^a-zA-Z0-9:_]");
+	private static final CharacterFilter CHARACTER_FILTER = new CharacterFilter() {
+		@Override
+		public String filterCharacters(String input) {
+			return replaceInvalidChars(input);
+		}
+	};
+
+	private static final char SCOPE_SEPARATOR = '_';
+
+	private PrometheusEndpoint prometheusEndpoint;
+	private Map<String, Collector> collectorsByMetricName = new HashMap<>();
+
+	@VisibleForTesting
+	static String replaceInvalidChars(final String input) {
+		// https://prometheus.io/docs/instrumenting/writing_exporters/
+		// Only [a-zA-Z0-9:_] are valid in metric names, any other characters should be sanitized to an underscore.
+		return UNALLOWED_CHAR_PATTERN.matcher(input).replaceAll("_");
+	}
+
+	@Override
+	public void open(MetricConfig config) {
+		int port = config.getInteger(ARG_PORT, DEFAULT_PORT);
+		log.info("Using port {}.", port);
+		prometheusEndpoint = new PrometheusEndpoint(port);
+		try {
+			prometheusEndpoint.start(NanoHTTPD.SOCKET_READ_TIMEOUT, true);
+		} catch (IOException e) {
+			log.error("Could not start PrometheusEndpoint on port " + port, e);
+		}
+	}
+
+	@Override
+	public void close() {
+		prometheusEndpoint.stop();
+		CollectorRegistry.defaultRegistry.clear();
+	}
+
+	@Override
+	public void notifyOfAddedMetric(final Metric metric, final String metricName, final MetricGroup group) {
+		final String scope = ((FrontMetricGroup<AbstractMetricGroup>) group).getL
[GitHub] flink pull request #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on a diff in the pull request: https://github.com/apache/flink/pull/3833#discussion_r116375209
--- Diff: flink-metrics/flink-metrics-prometheus/src/main/java/org/apache/flink/metrics/prometheus/PrometheusReporter.java ---
@@ -0,0 +1,256 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics.prometheus;
+
+import com.google.common.base.CharMatcher;
+import com.google.common.collect.ImmutableList;
+import fi.iki.elonen.NanoHTTPD;
+import io.prometheus.client.Collector;
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.exporter.common.TextFormat;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.metrics.CharacterFilter;
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.Gauge;
+import org.apache.flink.metrics.Histogram;
+import org.apache.flink.metrics.HistogramStatistics;
+import org.apache.flink.metrics.Meter;
+import org.apache.flink.metrics.Metric;
+import org.apache.flink.metrics.MetricConfig;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.metrics.reporter.MetricReporter;
+import org.apache.flink.runtime.metrics.groups.AbstractMetricGroup;
+import org.apache.flink.runtime.metrics.groups.FrontMetricGroup;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.StringWriter;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+import static com.google.common.collect.Iterables.toArray;
+
+@PublicEvolving
+public class PrometheusReporter implements MetricReporter {
+	private static final Logger log = LoggerFactory.getLogger(PrometheusReporter.class);
+
+	static final String ARG_PORT = "port";
+	private static final int DEFAULT_PORT = 9249;
+
+	private static final Pattern UNALLOWED_CHAR_PATTERN = Pattern.compile("[^a-zA-Z0-9:_]");
+	private static final CharacterFilter CHARACTER_FILTER = new CharacterFilter() {
+		@Override
+		public String filterCharacters(String input) {
+			return replaceInvalidChars(input);
+		}
+	};
+
+	private static final char SCOPE_SEPARATOR = '_';
+
+	private PrometheusEndpoint prometheusEndpoint;
+	private Map<String, Collector> collectorsByMetricName = new HashMap<>();
+
+	@VisibleForTesting
+	static String replaceInvalidChars(final String input) {
+		// https://prometheus.io/docs/instrumenting/writing_exporters/
+		// Only [a-zA-Z0-9:_] are valid in metric names, any other characters should be sanitized to an underscore.
+		return UNALLOWED_CHAR_PATTERN.matcher(input).replaceAll("_");
+	}
+
+	@Override
+	public void open(MetricConfig config) {
+		int port = config.getInteger(ARG_PORT, DEFAULT_PORT);
+		log.info("Using port {}.", port);
+		prometheusEndpoint = new PrometheusEndpoint(port);
+		try {
+			prometheusEndpoint.start(NanoHTTPD.SOCKET_READ_TIMEOUT, true);
+		} catch (IOException e) {
+			log.error("Could not start PrometheusEndpoint on port " + port, e);
+		}
+	}
+
+	@Override
+	public void close() {
+		prometheusEndpoint.stop();
--- End diff --
I don't think so, `NanoHTTPD.stop()` seems to catch everything.
---
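The `replaceInvalidChars` logic quoted in the diff can be exercised on its own. The sketch below extracts just the sanitization (the regex and the rule are copied from the diff; the class name here is invented): Prometheus metric names may only contain `[a-zA-Z0-9:_]`, so everything else becomes an underscore.

```java
import java.util.regex.Pattern;

/** Standalone extract of the metric-name sanitization shown in the diff above. */
public class MetricNameSanitizer {
    // https://prometheus.io/docs/instrumenting/writing_exporters/
    // Only [a-zA-Z0-9:_] are valid in metric names; everything else becomes '_'.
    private static final Pattern UNALLOWED_CHAR_PATTERN = Pattern.compile("[^a-zA-Z0-9:_]");

    static String replaceInvalidChars(String input) {
        return UNALLOWED_CHAR_PATTERN.matcher(input).replaceAll("_");
    }

    public static void main(String[] args) {
        // the same example the PR's unit test uses
        System.out.println(replaceInvalidChars("a,=;:?'b,=;:?'c"));
    }
}
```

Note that each invalid character maps to its own underscore (no collapsing), which is why the test in `PrometheusReporterTest` expects `a___:__b___:__c` rather than a single separator.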
[GitHub] flink issue #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/3833 @hadronzoo I tried to keep it as general as possible by exporting all variables available to the metric group as labels. I am not sure if this might lead to [label overuse](https://prometheus.io/docs/practices/instrumentation/#do-not-overuse-labels) at some point, did you ever run into difficulties in that context? Unfortunately I do not have a lot of experience running Prometheus in a production environment yet. ---
[GitHub] flink issue #3833: [FLINK-6221] Add PrometheusReporter
Github user mbode commented on the issue: https://github.com/apache/flink/pull/3833 Thanks for looking at it so quickly! I somewhat had the same instinct as far as your first point is concerned and thought about pulling out a `DropwizardReporter` without Scheduling but decided against it to not have to touch too many places. I like your suggestion of converting metrics directly without using Dropwizard as an intermediate step and am going to try that. ---
[GitHub] flink pull request #3833: [FLINK-6221] Add PrometheusReporter
GitHub user mbode opened a pull request: https://github.com/apache/flink/pull/3833 [FLINK-6221] Add PrometheusReporter You can merge this pull request into a Git repository by running: $ git pull https://github.com/mbode/flink PrometheusReporter Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/3833.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3833 commit 9c1889abcd5591d89dde3d5b032b6c54d4d518ba Author: Maximilian Bode <maximilian.b...@tngtech.com> Date: 2017-05-06T00:49:42Z [FLINK-6221] Add PrometheusReporter ---
[GitHub] flink pull request: [FLINK-3836] Add LongHistogram accumulator
Github user mbode commented on the pull request: https://github.com/apache/flink/pull/1966#issuecomment-222155560 Sorry guys, botched the PR :/ ---
[GitHub] flink pull request: [FLINK-3836] Add LongHistogram accumulator
Github user mbode closed the pull request at: https://github.com/apache/flink/pull/1966 ---
[GitHub] flink pull request: [FLINK-3836] Add LongHistogram accumulator
Github user mbode commented on the pull request: https://github.com/apache/flink/pull/1966#issuecomment-220277425 Hi Stephan, sounds good. 1. I wanted to avoid too much duplication, but I see your point. 2. OK, throwing a new exception breaks the API. So should I just mark `Histogram` as deprecated? I guess the proper way would be to make `Histogram` generic, enabling users to instantiate it with the counter type they need. Again, this breaks the API, so we would have to wait for the next major release – what is the process for cases like that? ---
[GitHub] flink pull request: [FLINK-3836] Add LongHistogram accumulator
GitHub user mbode reopened a pull request: https://github.com/apache/flink/pull/1966 [FLINK-3836] Add LongHistogram accumulator

New accumulator `LongHistogram`; the `Histogram` accumulator now throws an `IllegalArgumentException` instead of letting the int overflow.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the JIRA id)
- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added
- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis build has passed

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mbode/flink master

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1966.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1966

commit f457319481701a1234c9ea7d29da24f857ae4241
Author: Maximilian Bode <maximilian.b...@tngtech.com>
Date: 2016-04-27T15:19:16Z

[Flink-3836] Add LongHistogram accumulator
---
[GitHub] flink pull request: [FLINK-3836] Add LongHistogram accumulator
Github user mbode closed the pull request at: https://github.com/apache/flink/pull/1966 ---
[GitHub] flink pull request: [Flink-3836] Add LongHistogram accumulator
GitHub user mbode opened a pull request: https://github.com/apache/flink/pull/1966 [Flink-3836] Add LongHistogram accumulator

New accumulator `LongHistogram`; the `Histogram` accumulator now throws an `IllegalArgumentException` instead of letting the int overflow.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the JIRA id)
- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added
- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis build has passed

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mbode/flink master

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1966.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1966

commit f457319481701a1234c9ea7d29da24f857ae4241
Author: Maximilian Bode <maximilian.b...@tngtech.com>
Date: 2016-04-27T15:19:16Z

[Flink-3836] Add LongHistogram accumulator
---
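As background for the overflow this PR guards against: incrementing an `int` count past `Integer.MAX_VALUE` wraps around silently in Java, which motivates both the `LongHistogram` accumulator and the new `IllegalArgumentException` in `Histogram`. A small demonstration (helper names invented, not the accumulator code):

```java
/** Demonstrates silent int wrap-around versus long headroom and an explicit guard. */
public class OverflowDemo {
    static int addInt(int a, int b) {
        return a + b; // wraps around on overflow, no error is raised
    }

    static long addLong(long a, long b) {
        return a + b; // long gives 2^63 - 1 of headroom for counts
    }

    /** The Histogram-style guard: fail loudly instead of wrapping. */
    static int addIntChecked(int a, int b) {
        long sum = (long) a + b;
        if (sum != (int) sum) {
            throw new IllegalArgumentException("int overflow: " + a + " + " + b);
        }
        return (int) sum;
    }
}
```

`addInt(Integer.MAX_VALUE, 1)` wraps to a negative number, while the checked variant surfaces the problem as an exception, matching the behavior change described in the PR.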