boyuanzz commented on a change in pull request #15090:
URL: https://github.com/apache/beam/pull/15090#discussion_r667122540



##########
File path: sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
##########
@@ -847,6 +858,18 @@ public void setTimestampPolicy(String timestampPolicy) {
       }
     }
 
+    /**
+     * Update SupportsNullKeys for present of null keys
+     *
+     * <p>By default, withSupportsNullKeys is {@code false} and will invoke {@link KafkaRecordCoder}
+     * as normal. In this case, {@link KafkaRecordCoder} will not be able to handle null keys.
+     * When nullKeyFlag is {@code true}, it wraps the key coder with a {@link NullableCoder} before
+     * invoking {@link KafkaRecordCoder}. In this case, it can handle null keys.
+     */
+    public Read<K, V> withSupportsNullKeys() {

Review comment:
       `withNullableKeys`?
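
For context, the coder wrapping described in the quoted javadoc corresponds to something like the sketch below. `NullableCoder`, `KafkaRecordCoder`, and the coders shown are real Beam classes, but the resolution logic inside `KafkaIO` is simplified here, so treat this as an illustration rather than the PR's actual implementation:

```java
import org.apache.beam.sdk.coders.ByteArrayCoder;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.coders.NullableCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;

// When null keys are expected, wrap only the key coder in NullableCoder so
// that KafkaRecordCoder can encode/decode records whose key is null.
Coder<byte[]> keyCoder = NullableCoder.of(ByteArrayCoder.of());
Coder<String> valueCoder = StringUtf8Coder.of();
KafkaRecordCoder<byte[], String> recordCoder = KafkaRecordCoder.of(keyCoder, valueCoder);
```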

##########
File path: sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
##########
@@ -847,6 +858,18 @@ public void setTimestampPolicy(String timestampPolicy) {
       }
     }
 
+    /**
+     * Update SupportsNullKeys for present of null keys

Review comment:
       ```suggestion
        * Indicates whether the key of {@link KafkaRecord} could be null.
       ```

##########
File path: sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/NullableKeyKafkaRecordCoderTest.java
##########
@@ -0,0 +1,76 @@
+/*

Review comment:
       Please remove this file if it's no longer needed.

##########
File path: sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
##########
@@ -768,6 +778,8 @@ private static Coder resolveCoder(Class deserializer) {
         }
        throw new RuntimeException("Couldn't resolve coder for Deserializer: " + deserializer);
       }
+
+

Review comment:
       Please remove additional whitespace changes.

##########
File path: sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/NullableKeyKafkaRecordCoder.java
##########
@@ -0,0 +1,160 @@
+/*

Review comment:
       Please remove this file if it's not needed anymore.

##########
File path: sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOIT.java
##########
@@ -258,6 +301,17 @@ private void cancelIfTimeouted(PipelineResult readResult, PipelineResult.State r
         .withTopic(options.getKafkaTopic());
   }
 
+  private KafkaIO.Read<byte[], String> readFromKafkaNullKey() {
+    return KafkaIO.<byte[], String>read()
+        .withSupportsNullKeys()
+        .withBootstrapServers(options.getKafkaBootstrapServerAddresses())
+        .withConsumerConfigUpdates(ImmutableMap.of("auto.offset.reset", "earliest"))
+        .withTopic(options.getKafkaTopic())
+        .withMaxNumRecords(100)

Review comment:
       Usually hardcoding a number in a util function is not preferred. If your test is the only place using this, it might be better to have the test construct the Kafka read directly.
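
A hedged sketch of that inlined alternative, using only the options shown in this diff (`withSupportsNullKeys()` is the method proposed in this PR; the record count becomes a local variable next to the test logic that depends on it):

```java
// Inside the test method: build the Kafka read directly so the hardcoded
// record count is local to the one test that uses it.
int numRecords = 100;
PCollection<KafkaRecord<byte[], String>> records =
    readPipeline.apply(
        "Read from bounded Kafka",
        KafkaIO.<byte[], String>read()
            .withSupportsNullKeys() // proposed in this PR
            .withBootstrapServers(options.getKafkaBootstrapServerAddresses())
            .withConsumerConfigUpdates(ImmutableMap.of("auto.offset.reset", "earliest"))
            .withTopic(options.getKafkaTopic())
            .withMaxNumRecords(numRecords));
```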

##########
File path: sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOIT.java
##########
@@ -166,6 +174,41 @@ public void testKafkaIOReadsAndWritesCorrectlyInStreaming() throws IOException {
     }
   }
 
+  @Test
+  public void testKafkaIOReadsAndWritesCorrectlyInBatchNullKey() throws IOException {
+    List<String> values = new ArrayList<>();
+    for (int i = 0; i < 100; i++) {
+      values.add("value" + Integer.toString(i));
+    }
+    PCollection<String> writeInput =
+        writePipeline.apply(Create.of(values)).setCoder(StringUtf8Coder.of());
+
+    writeInput.apply(
+        KafkaIO.<byte[], String>write()
+            .withBootstrapServers(options.getKafkaBootstrapServerAddresses())
+            .withTopic(options.getKafkaTopic())
+            .withValueSerializer(StringSerializer.class)
+            .values());
+
+    PCollection<String> readOutput =
+        readPipeline
+            .apply("Read from bounded Kafka", readFromKafkaNullKey())
+            .apply("Materialize input", Reshuffle.viaRandomKey())
+            .apply(
+                "Map records to strings", MapElements.via(new 
MapKafkaRecordsToStringsNullKey()));

Review comment:
       You can use a lambda here, like: 
https://github.com/apache/beam/blob/243128a8fc52798e1b58b0cf1a271d95ee7aa241/sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/meta/provider/kafka/PayloadSerializerKafkaTable.java#L53-L54
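
Applied here, the linked pattern could look like the sketch below. `MapElements.into(...).via(...)` is the standard Beam API; the lambda body assumes `MapKafkaRecordsToStringsNullKey` simply extracts the record value, which may not match that helper's actual formatting:

```java
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

// into(...) supplies the output type that a lambda alone cannot infer.
PCollection<String> readOutput =
    readPipeline
        .apply("Read from bounded Kafka", readFromKafkaNullKey())
        .apply("Materialize input", Reshuffle.viaRandomKey())
        .apply(
            "Map records to strings",
            MapElements.into(TypeDescriptors.strings())
                .via((KafkaRecord<byte[], String> record) -> record.getKV().getValue()));
```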

##########
File path: sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaIO.java
##########
@@ -847,6 +858,18 @@ public void setTimestampPolicy(String timestampPolicy) {
       }
     }
 
+    /**
+     * Update SupportsNullKeys for present of null keys
+     *
+     * <p>By default, withSupportsNullKeys is {@code false} and will invoke {@link KafkaRecordCoder}
+     * as normal. In this case, {@link KafkaRecordCoder} will not be able to handle null keys.
+     * When nullKeyFlag is {@code true}, it wraps the key coder with a {@link NullableCoder} before
+     * invoking {@link KafkaRecordCoder}. In this case, it can handle null keys.
+     */

Review comment:
       ```suggestion
        * <p>By specifying {@link withNullableKeys}, {@link KafkaIO.Read} is able to handle
        * {@link KafkaRecord} with nullable keys. Otherwise, {@link KafkaIO.Read} will assume
        * the key from {@link KafkaRecord} is not null all the time. Reading {@link KafkaRecord}
        * with nullable keys but without specifying {@link withNullableKeys} may result in
        * pipeline failures.
        */
       ```
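
To make the suggested javadoc concrete, a usage sketch (hedged: `withNullableKeys()` is the name floated in this review for the PR's proposed `withSupportsNullKeys()`, and `input`, `bootstrapServers`, and `topic` are placeholders):

```java
// KafkaIO.write().values() publishes records whose Kafka keys are null.
input.apply(
    KafkaIO.<byte[], String>write()
        .withBootstrapServers(bootstrapServers)
        .withTopic(topic)
        .withValueSerializer(StringSerializer.class)
        .values());

// Reading that topic without opting in to nullable keys can fail when the
// key coder tries to decode a null key; opting in wraps the key coder in
// NullableCoder so decoding succeeds.
PCollection<KafkaRecord<byte[], String>> records =
    pipeline.apply(
        KafkaIO.<byte[], String>read()
            .withNullableKeys() // hypothetical name suggested in review
            .withBootstrapServers(bootstrapServers)
            .withTopic(topic)
            .withMaxNumRecords(100L));
```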




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

