Ewen Cheslack-Postava commented on KAFKA-6207:

RecordTooLargeException is thrown as that specific exception type precisely so 
the application can decide what to do with it. Why wouldn't your app just catch 
that exception and log the relevant info? The application still holds the record 
it tried to send, so it can log as much or as little of it as it wants. It's 
unclear to me why this should be addressed at the Kafka level rather than at the 
application level.
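The application-level pattern suggested above could be sketched roughly as follows. This is a minimal, hypothetical sketch: `OversizedRecordLogger` and `logPrefix` are illustrative names, not part of the Kafka API, and the commented-out callback shows where such a helper would plug into a real producer (the exception may also surface synchronously from `send()`).

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical application-side helper: when a send fails with
// RecordTooLargeException, the app still has the original record,
// so it can log a bounded prefix of the value itself.
public class OversizedRecordLogger {

    // Return at most maxBytes of the value, decoded for logging.
    static String logPrefix(byte[] value, int maxBytes) {
        if (value == null) return "<null>";
        int n = Math.min(value.length, maxBytes);
        return new String(value, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Illustrative usage inside a producer send callback:
        // producer.send(record, (metadata, e) -> {
        //     if (e instanceof RecordTooLargeException)
        //         log.warn("record too large; first 1KB: {}",
        //                  logPrefix(record.value(), 1024));
        // });
        byte[] big = new byte[2048];
        Arrays.fill(big, (byte) 'x');
        System.out.println(logPrefix(big, 16)); // prints "xxxxxxxxxxxxxxxx"
    }
}
```

This keeps the decision of how much payload to expose (and to which log) entirely in the application's hands.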

Logging data from the key/value/headers is generally problematic for Kafka to do 
on the user's behalf. This wouldn't affect all users, but logging *any* payload 
data can be problematic for certain users/use cases. For example, if the data 
contained sensitive information (credentials, personal data), echoing it back in 
an exception message or log would itself be a leak.

> Include start of record when RecordIsTooLarge
> ---------------------------------------------
>                 Key: KAFKA-6207
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6207
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>    Affects Versions:
>            Reporter: Tadhg Pearson
>            Priority: Minor
> When a message is too large to be sent (at 
> org.apache.kafka.clients.producer.KafkaProducer#doSend), the 
> RecordTooLargeException should carry the start of the record (for example, 
> the first 1KB) so that the calling application can debug which message caused 
> the error. 
> For example: one common use case of Kafka is logging. The 
> RecordTooLargeException is thrown due to a large log message being sent by 
> the application. How do you know which statement in your application logged 
> this large message? If your application has thousands of logging statements, it 
> will be very tough to find which one is the cause today... but if you include 
> the start of the message, it could prove a very strong hint as to the cause!

This message was sent by Atlassian JIRA
