greyp9 commented on code in PR #9604:
URL: https://github.com/apache/nifi/pull/9604#discussion_r1904441425
##########
nifi-extension-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/kafka/processors/producer/convert/RecordWrapperStreamKafkaRecordConverter.java:
##########
@@ -107,7 +107,10 @@ public KafkaRecord next() {
         final RecordFieldConverter converter = new RecordFieldConverter(record, flowFile, logger);
         final byte[] key = converter.toBytes(WrapperRecord.KEY, keyWriterFactory);
         final byte[] value = converter.toBytes(WrapperRecord.VALUE, writerFactory);
-        ProducerUtils.checkMessageSize(maxMessageSize, value.length);
+
+        if (value != null) {
Review Comment:
Yes, there is some ambiguity here.
Given the other changes in this PR, it seems reasonable to let a null "wrapper
record" value pass through, which Kafka implicitly interprets as a tombstone.
The confusion is that with the FlowFile strategy it is not possible to set the
value to null, so there we need to flex based on a FlowFile attribute; with
this strategy that is unnecessary.
I think this change is good...
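
To illustrate the idea (a minimal sketch, not the PR's actual code; `MAX_MESSAGE_SIZE`, `checkMessageSize`, and `publish` are hypothetical stand-ins for the processor's logic): the size check is skipped when the serialized value is null, so the record can flow through and be treated by Kafka as a tombstone.

```java
public class TombstoneSketch {
    // Hypothetical 1 MB limit, standing in for the processor's maxMessageSize
    static final int MAX_MESSAGE_SIZE = 1_048_576;

    // Mirrors the intent of ProducerUtils.checkMessageSize(...)
    static void checkMessageSize(int maxSize, int length) {
        if (length > maxSize) {
            throw new IllegalArgumentException(
                    "message of " + length + " bytes exceeds limit of " + maxSize);
        }
    }

    // Returns true when the record would be published as a tombstone.
    static boolean publish(byte[] value) {
        if (value == null) {
            // No size check: Kafka interprets a record with a null value
            // as a tombstone (a deletion marker for the key).
            return true;
        }
        checkMessageSize(MAX_MESSAGE_SIZE, value.length);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(publish(null));             // tombstone path
        System.out.println(publish(new byte[]{1, 2})); // regular record path
    }
}
```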
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]