> On Sept. 23, 2015, 12:16 a.m., Sumit Mohanty wrote:
> > ambari-metrics/ambari-metrics-storm-sink/pom.xml, line 35
> > <https://reviews.apache.org/r/38571/diff/3/?file=1081842#file1081842line35>
> >
> >     This is needed?
The Apache build is broken since we checked in a build version that is no longer present. Checked with Sriharsha and picked the 2.3.0 build, which has the changes.


- Sid


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/38571/#review100101
-----------------------------------------------------------


On Sept. 23, 2015, 12:08 a.m., Sid Wagle wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/38571/
> -----------------------------------------------------------
> 
> (Updated Sept. 23, 2015, 12:08 a.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez, Sumit Mohanty, and Sriharsha Chintalapani.
> 
> 
> Bugs: AMBARI-13173
>     https://issues.apache.org/jira/browse/AMBARI-13173
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> - Kafka writes 1100 metrics per topic, producing a very high load on the metrics system.
> - The Ganglia metrics reporter provided a regex to filter out metrics that are not needed.
> - This patch provides configuration knobs to filter unnecessary metrics:
>   - Exclude list:
> {noformat}
> kafka.network.RequestMetrics.*
> kafka.server.DelayedOperationPurgatory.*
> kafka.server.BrokerTopicMetrics.BytesRejectedPerSec.*
> {noformat}
>   - Include list:
> {noformat}
> kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile
> kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile
> kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile
> kafka.network.RequestMetrics.RequestsPerSec.request.*
> {noformat}
> 
> 
> Diffs
> -----
> 
>   ambari-metrics/ambari-metrics-kafka-sink/src/main/java/org/apache/hadoop/metrics2/sink/kafka/KafkaTimelineMetricsReporter.java a259864 
>   ambari-metrics/ambari-metrics-kafka-sink/src/test/java/org/apache/hadoop/metrics2/sink/kafka/KafkaTimelineMetricsReporterTest.java 67c61e1 
>   ambari-metrics/ambari-metrics-kafka-sink/src/test/java/org/apache/hadoop/metrics2/sink/kafka/ScheduledReporterTest.java 41f9126 
>   ambari-metrics/ambari-metrics-storm-sink/pom.xml c666de0 
>   ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java 36339c5 
>   ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog212.java 610ab14 
>   ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/service-metrics/KAFKA.txt 228a5bc 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/KAFKA/configuration/kafka-broker.xml e621ddf 
> 
> Diff: https://reviews.apache.org/r/38571/diff/
> 
> 
> Testing
> -------
> 
> Unit tests pass: org.apache.hadoop.metrics2.sink.kafka.*, org.apache.ambari.server.upgrade.UpgradeCatalog212
> Manual verification done: went from 1102 unique rows to 187.
> 
> 
> Thanks,
> 
> Sid Wagle
> 
>
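For context on how exclude/include lists like the ones in the description are typically applied, below is a minimal sketch. It is not the actual KafkaTimelineMetricsReporter code from the diff; the class name, constructor, and the assumption that the list entries are Java regexes are illustrative only. The idea it demonstrates: a metric name matching an exclude pattern is dropped unless it also matches an include pattern, which is what lets a broad group such as kafka.network.RequestMetrics.* be filtered while a handful of specific percentiles survive.

{noformat}
// Hypothetical sketch of exclude/include metric filtering; names and regex
// handling are assumptions, not the patch's actual implementation.
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class MetricFilterSketch {

  private final List<Pattern> excludePatterns;
  private final List<Pattern> includePatterns;

  public MetricFilterSketch(List<String> excludeRegexes, List<String> includeRegexes) {
    this.excludePatterns = compile(excludeRegexes);
    this.includePatterns = compile(includeRegexes);
  }

  private static List<Pattern> compile(List<String> regexes) {
    return regexes.stream().map(Pattern::compile).collect(Collectors.toList());
  }

  // A metric is dropped if it matches an exclude pattern and no include
  // pattern; the include list overrides the exclude list.
  public boolean isExcluded(String metricName) {
    return matchesAny(excludePatterns, metricName)
        && !matchesAny(includePatterns, metricName);
  }

  private static boolean matchesAny(List<Pattern> patterns, String name) {
    for (Pattern p : patterns) {
      if (p.matcher(name).matches()) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    MetricFilterSketch filter = new MetricFilterSketch(
        Arrays.asList("kafka\\.network\\.RequestMetrics\\..*",
                      "kafka\\.server\\.DelayedOperationPurgatory\\..*"),
        Arrays.asList("kafka\\.network\\.RequestMetrics\\.RequestsPerSec\\.request\\..*"));

    // Dropped: matches the exclude list, no include pattern applies.
    System.out.println(filter.isExcluded(
        "kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Produce.50percentile"));
    // Kept: the include list overrides the broader exclusion.
    System.out.println(filter.isExcluded(
        "kafka.network.RequestMetrics.RequestsPerSec.request.Fetch"));
  }
}
{noformat}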
