[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390117#comment-15390117 ]
ASF GitHub Bot commented on FLINK-4035:
---------------------------------------
Github user radekg commented on the issue:
https://github.com/apache/flink/pull/2231
Merged with `upstream/master` and I'm getting this when running `mvn clean verify`:
```
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[30,69] cannot find symbol
  symbol:   class DefaultKafkaMetricAccumulator
  location: package org.apache.flink.streaming.connectors.kafka.internals.metrics
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[105,17] constructor AbstractFetcher in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher<T,KPH> cannot be applied to given types;
  required: org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T>,java.util.List<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T>>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T>>,org.apache.flink.streaming.api.operators.StreamingRuntimeContext,boolean
  found: org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T>,java.util.List<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T>>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T>>,org.apache.flink.streaming.api.operators.StreamingRuntimeContext
  reason: actual and formal argument lists differ in length
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[192,49] cannot find symbol
  symbol:   class DefaultKafkaMetricAccumulator
  location: class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher<T>
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[193,65] cannot find symbol
  symbol:   variable DefaultKafkaMetricAccumulator
  location: class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher<T>
[INFO] 4 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] force-shading ...................................... SUCCESS [  1.210 s]
[INFO] flink .............................................. SUCCESS [  4.416 s]
[INFO] flink-annotations .................................. SUCCESS [  1.551 s]
[INFO] flink-shaded-hadoop ................................ SUCCESS [  0.162 s]
[INFO] flink-shaded-hadoop2 ............................... SUCCESS [  6.451 s]
[INFO] flink-shaded-include-yarn-tests .................... SUCCESS [  7.929 s]
[INFO] flink-shaded-curator ............................... SUCCESS [  0.110 s]
[INFO] flink-shaded-curator-recipes ....................... SUCCESS [  0.986 s]
[INFO] flink-shaded-curator-test .......................... SUCCESS [  0.200 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [  0.111 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [  2.417 s]
[INFO] flink-core ......................................... SUCCESS [ 37.825 s]
[INFO] flink-java ......................................... SUCCESS [ 23.620 s]
[INFO] flink-runtime ...................................... SUCCESS [06:25 min]
[INFO] flink-optimizer .................................... SUCCESS [ 12.698 s]
[INFO] flink-clients ...................................... SUCCESS [  9.795 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 43.709 s]
[INFO] flink-test-utils ................................... SUCCESS [  9.363 s]
[INFO] flink-scala ........................................ SUCCESS [ 37.639 s]
[INFO] flink-runtime-web .................................. SUCCESS [ 19.749 s]
[INFO] flink-examples ..................................... SUCCESS [  1.006 s]
[INFO] flink-examples-batch ............................... SUCCESS [ 14.276 s]
[INFO] flink-contrib ...................................... SUCCESS [  0.104 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [ 10.938 s]
[INFO] flink-tests ........................................ SUCCESS [07:34 min]
[INFO] flink-streaming-scala .............................. SUCCESS [ 33.365 s]
[INFO] flink-streaming-connectors ......................... SUCCESS [  0.106 s]
[INFO] flink-connector-flume .............................. SUCCESS [  5.626 s]
[INFO] flink-libraries .................................... SUCCESS [  0.100 s]
[INFO] flink-table ........................................ SUCCESS [02:31 min]
[INFO] flink-connector-kafka-base ......................... SUCCESS [ 10.033 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [02:06 min]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [02:12 min]
[INFO] flink-connector-kafka-0.10 ......................... FAILURE [  0.197 s]
[INFO] flink-connector-elasticsearch ...................... SKIPPED
[INFO] flink-connector-elasticsearch2 ..................... SKIPPED
[INFO] flink-connector-rabbitmq ........................... SKIPPED
[INFO] flink-connector-twitter ............................ SKIPPED
[INFO] flink-connector-nifi ............................... SKIPPED
[INFO] flink-connector-cassandra .......................... SKIPPED
[INFO] flink-connector-redis .............................. SKIPPED
[INFO] flink-connector-filesystem ......................... SKIPPED
[INFO] flink-batch-connectors ............................. SKIPPED
[INFO] flink-avro ......................................... SKIPPED
[INFO] flink-jdbc ......................................... SKIPPED
[INFO] flink-hadoop-compatibility ......................... SKIPPED
[INFO] flink-hbase ........................................ SKIPPED
[INFO] flink-hcatalog ..................................... SKIPPED
[INFO] flink-examples-streaming ........................... SKIPPED
[INFO] flink-gelly ........................................ SKIPPED
[INFO] flink-gelly-scala .................................. SKIPPED
[INFO] flink-gelly-examples ............................... SKIPPED
[INFO] flink-python ....................................... SKIPPED
[INFO] flink-ml ........................................... SKIPPED
[INFO] flink-cep .......................................... SKIPPED
[INFO] flink-cep-scala .................................... SKIPPED
[INFO] flink-scala-shell .................................. SKIPPED
[INFO] flink-quickstart ................................... SKIPPED
[INFO] flink-quickstart-java .............................. SKIPPED
[INFO] flink-quickstart-scala ............................. SKIPPED
[INFO] flink-storm ........................................ SKIPPED
[INFO] flink-storm-examples ............................... SKIPPED
[INFO] flink-streaming-contrib ............................ SKIPPED
[INFO] flink-tweet-inputformat ............................ SKIPPED
[INFO] flink-operator-stats ............................... SKIPPED
[INFO] flink-connector-wikiedits .......................... SKIPPED
[INFO] flink-yarn ......................................... SKIPPED
[INFO] flink-dist ......................................... SKIPPED
[INFO] flink-metrics ...................................... SKIPPED
[INFO] flink-metrics-dropwizard ........................... SKIPPED
[INFO] flink-metrics-ganglia .............................. SKIPPED
[INFO] flink-metrics-graphite ............................. SKIPPED
[INFO] flink-metrics-statsd ............................... SKIPPED
[INFO] flink-fs-tests ..................................... SKIPPED
[INFO] flink-java8 ........................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:46 min
[INFO] Finished at: 2016-07-22T21:52:55+02:00
[INFO] Final Memory: 159M/1763M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project flink-connector-kafka-0.10_2.10: Compilation failure: Compilation failure:
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[30,69] cannot find symbol
[ERROR]   symbol:   class DefaultKafkaMetricAccumulator
[ERROR]   location: package org.apache.flink.streaming.connectors.kafka.internals.metrics
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[105,17] constructor AbstractFetcher in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher<T,KPH> cannot be applied to given types;
[ERROR]   required: org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T>,java.util.List<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T>>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T>>,org.apache.flink.streaming.api.operators.StreamingRuntimeContext,boolean
[ERROR]   found: org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T>,java.util.List<org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T>>,org.apache.flink.util.SerializedValue<org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T>>,org.apache.flink.streaming.api.operators.StreamingRuntimeContext
[ERROR]   reason: actual and formal argument lists differ in length
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[192,49] cannot find symbol
[ERROR]   symbol:   class DefaultKafkaMetricAccumulator
[ERROR]   location: class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher<T>
[ERROR] /Users/rad/dev/twc/flink/flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka010Fetcher.java:[193,65] cannot find symbol
[ERROR]   symbol:   variable DefaultKafkaMetricAccumulator
[ERROR]   location: class org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher<T>
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :flink-connector-kafka-0.10_2.10
```
Any advice?
> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
> Issue Type: Bug
> Components: Kafka Connector
> Affects Versions: 1.0.3
> Reporter: Elias Levy
> Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps, and compressed messages now include
> relative offsets. As it stands, brokers must decompress producer-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.
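To make the recompression point concrete, here is a toy sketch (not Kafka's actual implementation, and all names are hypothetical) of why relative offsets remove the rewrite: if records in a compressed batch store offsets relative to the batch's base, the broker only assigns the base offset on append, and consumers resolve absolute offsets on read, so the batch payload never has to be decompressed and rewritten.

```java
// Toy illustration of relative offsets inside a compressed batch.
// NOT Kafka's real code: in Kafka 0.10 this logic lives in the broker's
// log append path and the consumer's record deserialization.
public class RelativeOffsets {

    // Resolve absolute offsets from a broker-assigned base offset plus the
    // producer-written relative offsets. The relative offsets never change,
    // so the (compressed) batch payload never needs rewriting.
    static long[] absoluteOffsets(long baseOffset, int[] relativeOffsets) {
        long[] resolved = new long[relativeOffsets.length];
        for (int i = 0; i < relativeOffsets.length; i++) {
            resolved[i] = baseOffset + relativeOffsets[i]; // resolved on read
        }
        return resolved;
    }

    public static void main(String[] args) {
        int[] relative = {0, 1, 2};      // written once by the producer
        long brokerBase = 1042L;         // assigned by the broker on append
        long[] resolved = absoluteOffsets(brokerBase, relative);
        System.out.println(resolved[0] + "," + resolved[1] + "," + resolved[2]);
        // prints "1042,1043,1044"
    }
}
```

With absolute offsets (pre-0.10), each record in the batch would carry `brokerBase + i` directly, forcing the broker to decompress, rewrite every record, and recompress on every append.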
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)