Hi Chandu,

Which mode does your Flink cluster run in? Also, can you check whether flink-metrics-core is included on the classpath of the Flink runtime environment?
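One way to check is to scan the distribution's lib directory for the jar that actually ships MetricGroup — a leftover 1.3.x flink-metrics-core there would explain the NoSuchMethodError. A minimal sketch (the install path is an assumption; adjust FLINK_HOME to your setup):

```shell
# Sketch: look for metrics-related jars in the Flink runtime classpath.
# /opt/flink is only an assumed default; set FLINK_HOME to your install.
FLINK_LIB="${FLINK_HOME:-/opt/flink}/lib"
if [ -d "$FLINK_LIB" ]; then
  # Any flink-metrics-core jar with a 1.3.x version here is the likely culprit.
  ls "$FLINK_LIB" | grep -i metrics || echo "no metrics jars in $FLINK_LIB"
else
  echo "directory $FLINK_LIB not found; set FLINK_HOME to your Flink install"
fi
```

If both a 1.3.2 and a 1.6.1 copy show up (in lib or inside the job's fat jar), the old one can win class loading and produce exactly this error.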
Thanks, vino.

Chandu Kempaiah <chandu.kempa...@reflektion.com> wrote on Thursday, October 11, 2018 at 9:51 AM:
>
> Hello,
>
> I have a job that reads messages from Kafka, processes them, and writes
> back to Kafka. This job works fine on Flink 1.3.2. I upgraded the cluster
> to 1.6.1 but now see the error below. Has anyone faced a similar issue?
>
> I have updated all the dependencies to use
>
> <flink.version>1.6.1</flink.version>
>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-kafka-0.10_${scala.version}</artifactId>
>     <version>${flink.version}</version>
> </dependency>
>
> java.lang.NoSuchMethodError: org.apache.flink.metrics.MetricGroup.addGroup(Ljava/lang/String;Ljava/lang/String;)Lorg/apache/flink/metrics/MetricGroup;
>     at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.registerOffsetMetrics(AbstractFetcher.java:622)
>     at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.<init>(AbstractFetcher.java:200)
>     at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.<init>(Kafka09Fetcher.java:91)
>     at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.<init>(Kafka010Fetcher.java:64)
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010.createFetcher(FlinkKafkaConsumer010.java:209)
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:647)
>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
>     at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>     at java.lang.Thread.run(Thread.java:748)
>
> Thanks
> Chandu
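If a transitive dependency of the job jar is pulling in an older flink-metrics-core, one sketch of a fix is to pin it explicitly in the job pom (this assumes the same `flink.version` property as in the quoted pom fragment; verify the actual versions with `mvn dependency:tree` first):

```xml
<!-- Sketch: pin flink-metrics-core to the cluster version so a transitive
     1.3.x copy cannot win Maven's dependency mediation. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-metrics-core</artifactId>
    <version>${flink.version}</version>
</dependency>
```

The two-argument addGroup(String, String) in the stack trace exists only in newer MetricGroup versions, which is why a stale metrics jar surfaces as NoSuchMethodError at runtime rather than at compile time.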