[
https://issues.apache.org/jira/browse/BEAM-12008?focusedWorklogId=620181&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-620181
]
ASF GitHub Bot logged work on BEAM-12008:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 07/Jul/21 19:44
Start Date: 07/Jul/21 19:44
Worklog Time Spent: 10m
Work Description: boyuanzz commented on a change in pull request #15090:
URL: https://github.com/apache/beam/pull/15090#discussion_r665658872
##########
File path:
sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
##########
@@ -838,6 +850,16 @@ public void setTimestampPolicy(String timestampPolicy) {
}
}
+ /**
+  * Sets nullKeyFlag to indicate the presence of null keys.
+  *
+  * <p>By default, nullKeyFlag is {@code false} and {@link KafkaRecordCoder} is used; when
+  * nullKeyFlag is {@code true}, {@link NullableKeyKafkaRecordCoder} is used instead.
+  */
+ public Read<K, V> withNullKeyFlag() {
Review comment:
I think the major pushback against making this the default is update safety and
backward compatibility. Say we make it the default starting in Beam 2.33.0:
customers with pipelines built on versions prior to 2.33.0 would not be able to
update their streaming pipelines to 2.33.0 because of coder incompatibility.
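As an illustration of the opt-in approach discussed above, here is a minimal sketch of how the
read path could look from the user side. withNullKeyFlag() is only the method proposed in this
PR's diff (not a released KafkaIO API), and the broker address and topic name are placeholders.

{code:java}
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class NullKeyReadSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // withNullKeyFlag() is the opt-in proposed in PR #15090; without it, the existing
    // KafkaRecordCoder (which rejects null keys) keeps being used, preserving update
    // compatibility for streaming pipelines started on older Beam versions.
    PCollection<KafkaRecord<byte[], byte[]>> records =
        p.apply(
            KafkaIO.<byte[], byte[]>read()
                .withBootstrapServers("localhost:9092") // placeholder broker
                .withTopic("my-topic")                  // placeholder topic
                .withKeyDeserializer(ByteArrayDeserializer.class)
                .withValueDeserializer(ByteArrayDeserializer.class)
                .withNullKeyFlag());                    // opt in to null-key handling

    p.run().waitUntilFinish();
  }
}
{code}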
Issue Time Tracking
-------------------
Worklog Id: (was: 620181)
Time Spent: 2h 40m (was: 2.5h)
> KafkaIO does not handle null keys
> ---------------------------------
>
> Key: BEAM-12008
> URL: https://issues.apache.org/jira/browse/BEAM-12008
> Project: Beam
> Issue Type: Bug
> Components: io-java-kafka
> Reporter: Daniel Collins
> Assignee: Weiwen Xu
> Priority: P2
> Labels: stale-P2
> Time Spent: 2h 40m
> Remaining Estimate: 0h
>
> Kafka
> [ConsumerRecord|https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/consumer/ConsumerRecord.html#key--]
> and
> [ProducerRecord|https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html#key--]
> 'key' fields are explicitly allowed to be null. In addition, on the producer
> side, setting a null key is the way that the user indicates that they want a
> [random partition for their
> message|https://github.com/apache/kafka/blob/9adfac280392da0837cfd8d582bc540951e94087/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67].
>
> Beam KafkaIO does not support null keys in byte[] mode (read side:
> [https://github.com/apache/beam/blob/9e0997760cf3320f1a1d0c4342d3dff559a25775/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java#L727],
> write side:
> [https://github.com/apache/beam/blob/9e0997760cf3320f1a1d0c4342d3dff559a25775/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaRecordCoder.java#L58]),
> since it would defer to ByteArrayCoder, which does not support null arrays.
>
> BeamKafkaTable suffers from the same issue:
> https://github.com/apache/beam/blob/9e0997760cf3320f1a1d0c4342d3dff559a25775/sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/meta/provider/kafka/BeamKafkaTable.java#L144
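A minimal sketch of the coder behavior described above, assuming the Beam Java SDK coders:
ByteArrayCoder rejects a null byte[], while wrapping the key coder in NullableCoder is one
possible way to tolerate null keys (not necessarily the approach taken in PR #15090).

{code:java}
import java.io.ByteArrayOutputStream;
import org.apache.beam.sdk.coders.ByteArrayCoder;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.coders.CoderException;
import org.apache.beam.sdk.coders.NullableCoder;

public class NullKeyCoderSketch {
  public static void main(String[] args) throws Exception {
    byte[] nullKey = null;

    // ByteArrayCoder refuses null values, which is why a null Kafka key fails inside KafkaRecordCoder.
    try {
      ByteArrayCoder.of().encode(nullKey, new ByteArrayOutputStream());
    } catch (CoderException e) {
      System.out.println("ByteArrayCoder rejected the null key: " + e.getMessage());
    }

    // Wrapping the key coder in NullableCoder adds a presence byte and accepts null values.
    Coder<byte[]> nullableKeyCoder = NullableCoder.of(ByteArrayCoder.of());
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    nullableKeyCoder.encode(nullKey, out);
    System.out.println("NullableCoder encoded the null key in " + out.size() + " byte(s)");
  }
}
{code}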