[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429127#comment-15429127 ]
ASF GitHub Bot commented on FLINK-4035:
---------------------------------------
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/2369#discussion_r75569347
--- Diff: flink-streaming-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/AbstractFetcher.java ---
@@ -207,15 +207,14 @@ public void restoreOffsets(HashMap<KafkaTopicPartition, Long> snapshotState) {
     // ------------------------------------------------------------------------
     /**
-     *
      * <p>Implementation Note: This method is kept brief to be JIT inlining friendly.
      * That makes the fast path efficient, the extended paths are called as separate methods.
-     *
      * @param record The record to emit
      * @param partitionState The state of the Kafka partition from which the record was fetched
-     * @param offset The offset from which the record was fetched
+     * @param offset The offset of the record
+     * @param kafkaRecord The original Kafka record
      */
-    protected final void emitRecord(T record, KafkaTopicPartitionState<KPH> partitionState, long offset) {
+    protected <R> void emitRecord(T record, KafkaTopicPartitionState<KPH> partitionState, long offset, R kafkaRecord) throws Exception {
--- End diff ---
Is there a reason we need the extra `kafkaRecord` parameter? It doesn't seem
to be used in the method.
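
For context, one plausible reason for a generic parameter that the base class
ignores (a hypothetical sketch, not code from this PR; the stand-in class names
below are invented, and only the two `emitRecord` signatures mirror the diff):
a Kafka-0.10-specific fetcher could downcast `kafkaRecord` to `ConsumerRecord`
and read the per-record timestamp that 0.10 introduced.

import org.apache.kafka.clients.consumer.ConsumerRecord;

// Stand-in for the abstract fetcher; Flink's real class carries much more state.
abstract class FetcherSketch<T> {
    // The hook from the diff: generic, and `kafkaRecord` is unused in the base class.
    protected <R> void emitRecord(T record, Object partitionState, long offset, R kafkaRecord)
            throws Exception {
        System.out.printf("emit %s at offset %d%n", record, offset);
    }
}

// One plausible subclass use of the extra parameter: a 0.10 fetcher downcasts it
// to ConsumerRecord and reads the per-record timestamp added in Kafka 0.10,
// something 0.8/0.9 records do not carry.
class Fetcher010Sketch<T> extends FetcherSketch<T> {
    @Override
    protected <R> void emitRecord(T record, Object partitionState, long offset, R kafkaRecord)
            throws Exception {
        long timestamp = ((ConsumerRecord<?, ?>) kafkaRecord).timestamp();
        System.out.printf("emit %s at offset %d, timestamp %d%n", record, offset, timestamp);
    }
}

If that is the intent, it would also explain the added `throws Exception`,
since subclasses may end up doing real work in the override.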
> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
> Issue Type: Bug
> Components: Kafka Connector
> Affects Versions: 1.0.3
> Reporter: Elias Levy
> Assignee: Robert Metzger
> Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps, and compressed messages now carry
> relative offsets. As it stands, brokers must decompress publisher-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.
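
To make the protocol change concrete, here is a minimal sketch (not part of
the issue) against the 0.10.0.0 kafka-clients API, which added ProducerRecord
constructors that accept an explicit timestamp; the broker address, topic
name, and settings are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimestampedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Batches are compressed on the client; with 0.10 they carry relative
        // offsets, so brokers no longer decompress/recompress to assign offsets.
        props.put("compression.type", "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // New in 0.10: a constructor variant with an explicit timestamp
            // (topic, partition, timestamp, key, value); null lets the
            // partitioner choose the partition.
            producer.send(new ProducerRecord<>(
                    "test-topic", null, System.currentTimeMillis(), "key", "value"));
        }
    }
}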
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)