kirksw commented on issue #32620:
URL: https://github.com/apache/beam/issues/32620#issuecomment-2443748961

   Agreed to some extent. With a raw Kafka consumer implementation I would simply pass these settings into the consumer config along with the schema registry config, which would give me specific records that I could then parse based on their type.
   
   ```java
   import io.confluent.kafka.serializers.KafkaAvroDeserializer;
   import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
   import org.apache.kafka.clients.consumer.ConsumerConfig;

   // Use the Confluent Avro deserializer and request specific (generated) records.
   avroConsumeConfigs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
   avroConsumeConfigs.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
   ```
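   
   For context, a minimal sketch of the surrounding raw-consumer setup. The bootstrap servers, group id, topic, and registry URL are placeholders, `MyRecord` stands for a hypothetical generated Avro SpecificRecord class, and `schema.registry.url` is the standard Confluent client setting.
   
   ```java
   import java.time.Duration;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   import io.confluent.kafka.serializers.KafkaAvroDeserializer;
   import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
   import org.apache.kafka.clients.consumer.ConsumerConfig;
   import org.apache.kafka.clients.consumer.ConsumerRecords;
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   import org.apache.kafka.common.serialization.StringDeserializer;

   Map<String, Object> avroConsumeConfigs = new HashMap<>();
   avroConsumeConfigs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
   avroConsumeConfigs.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");        // placeholder
   avroConsumeConfigs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
   avroConsumeConfigs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
   avroConsumeConfigs.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
   avroConsumeConfigs.put("schema.registry.url", "http://localhost:8081");             // placeholder

   // MyRecord is a hypothetical generated Avro SpecificRecord class; values arrive
   // already deserialized to that type, no manual schema handling needed.
   try (KafkaConsumer<String, MyRecord> consumer = new KafkaConsumer<>(avroConsumeConfigs)) {
       consumer.subscribe(List.of("my-topic"));                                        // placeholder
       ConsumerRecords<String, MyRecord> records = consumer.poll(Duration.ofSeconds(1));
       records.forEach(r -> System.out.println(r.value()));
   }
   ```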
   
   But Beam introduces a wrapper, I guess because it also wants to build the Beam schema alongside parsing the message, and that wrapper uses a separate (and, I suspect, outdated) schema registry integration which doesn't support this setting.
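   
   For reference, a minimal sketch of reading through that wrapper with KafkaIO today. The topic, subject, and URLs are placeholders; note the values surface as GenericRecord rather than specific records.
   
   ```java
   import org.apache.avro.generic.GenericRecord;
   import org.apache.beam.sdk.io.kafka.ConfluentSchemaRegistryDeserializerProvider;
   import org.apache.beam.sdk.io.kafka.KafkaIO;
   import org.apache.kafka.common.serialization.LongDeserializer;

   KafkaIO.Read<Long, GenericRecord> read =
       KafkaIO.<Long, GenericRecord>read()
           .withBootstrapServers("localhost:9092")   // placeholder
           .withTopic("my-topic")                    // placeholder
           .withKeyDeserializer(LongDeserializer.class)
           // The provider resolves the Avro schema from the registry and supplies
           // KafkaIO with a matching coder, but only for generic records.
           .withValueDeserializer(
               ConfluentSchemaRegistryDeserializerProvider.of(
                   "http://localhost:8081", "my-topic-value")); // placeholders
   ```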
   
   I see that the Flink Kafka source connector has a thinner shim around the Kafka consumer, so it looks to function much the same as the raw Kafka consumer client.
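   
   For comparison, a sketch of the Flink side, assuming the Confluent Avro format module is on the classpath. `MyRecord` again stands for a hypothetical generated SpecificRecord class, and the endpoints are placeholders.
   
   ```java
   import org.apache.flink.connector.kafka.source.KafkaSource;
   import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
   import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroDeserializationSchema;

   KafkaSource<MyRecord> source =
       KafkaSource.<MyRecord>builder()
           .setBootstrapServers("localhost:9092")   // placeholder
           .setTopics("my-topic")                   // placeholder
           .setStartingOffsets(OffsetsInitializer.earliest())
           // Specific-record reading is supported directly by the deserialization schema.
           .setValueOnlyDeserializer(
               ConfluentRegistryAvroDeserializationSchema.forSpecific(
                   MyRecord.class, "http://localhost:8081")) // placeholder
           .build();
   ```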

