brucearctor commented on code in PR #28865:
URL: https://github.com/apache/beam/pull/28865#discussion_r1350738119


##########
sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/KafkaReadSchemaTransformConfiguration.java:
##########
@@ -48,8 +48,14 @@ public void validate() {
     assert startOffset == null || VALID_START_OFFSET_VALUES.contains(startOffset)
         : "Valid Kafka Start offset values are " + VALID_START_OFFSET_VALUES;
     final String dataFormat = this.getFormat();
-    assert dataFormat == null || VALID_DATA_FORMATS.contains(dataFormat)
-        : "Valid data formats are " + VALID_DATA_FORMATS;
+    assert dataFormat == null || isValidDataFormat(dataFormat)
+        : "Valid data formats are " + VALID_FORMATS_STR;
+  }
+
+  private boolean isValidDataFormat(String dataFormat) {
+    // Convert the input dataFormat to lowercase for case-insensitive comparison
+    String lowercaseDataFormat = dataFormat.toLowerCase();
+    return VALID_DATA_FORMATS.contains(lowercaseDataFormat);
+  }

Review Comment:
   I'd actually aim to settle on one case or the other [either upper or lower] and stick with it consistently. Otherwise we need to be careful and dig in here to make sure there isn't a case where a value passes validation but fails later. I'm not looking at the code right now, but would there be problems with 'RaW' or 'jSOn' as the data format? Either would pass this check, but what happens if that raw, un-normalized value is piped through elsewhere?
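   One way to avoid that partial-pass problem is to canonicalize the format string once at the validation boundary and only ever pass the canonical value downstream. A minimal sketch of the idea (the class name, method, and format set here are hypothetical illustrations, not Beam's actual code):

   ```java
   import java.util.Locale;
   import java.util.Set;

   public class FormatValidation {
     // Hypothetical set of supported formats, stored in one canonical case.
     private static final Set<String> VALID_DATA_FORMATS = Set.of("RAW", "JSON", "AVRO");

     // Normalize once, validate against the canonical set, and return the
     // canonical value so 'RaW' or 'jSOn' can never leak past this point.
     public static String canonicalize(String dataFormat) {
       String canonical = dataFormat.toUpperCase(Locale.ROOT);
       if (!VALID_DATA_FORMATS.contains(canonical)) {
         throw new IllegalArgumentException(
             "Valid data formats are " + VALID_DATA_FORMATS + ", got: " + dataFormat);
       }
       return canonical;
     }
   }
   ```

   With that shape, case-insensitivity is a property of the boundary, and every consumer sees exactly one spelling of each format.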



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
