Jiangjie Qin commented on KAFKA-2200:

[~andrew.musselman] Yep, still a problem. 

[~omkreddy] There was some discussion on this before. The idea is to let all 
the API exceptions be handled by the callback code. There are two issues 
with the current code:

1. TimeoutException is supposed to be used only for error code 7, which 
indicates the specific case where a producer sends messages with acks=all and 
the replication on the broker side does not finish within the specified 
replication timeout. However, we have misused this exception as a general 
timeout exception in quite a few places on the client side.

2. The RecordTooLargeException in the producer is not that accurate: 
producer.send() will throw that exception if the uncompressed message size is 
greater than the max request size or the memory buffer size. However, if the 
message can be compressed to less than the max message size, we should still 
send it (see the sketch after this list).
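
For context, here is a minimal sketch of how both issues surface to user code 
today. The broker address, topic name, and record size are placeholders; the 
comments mark the two behaviors described above.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.common.errors.TimeoutException;

public class CurrentBehaviorSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        KafkaProducer<byte[], byte[]> producer =
            new KafkaProducer<byte[], byte[]>(props);

        byte[] value = new byte[2 * 1024 * 1024]; // may exceed max.request.size
        try {
            // Issue 2: send() checks the uncompressed record size up front and
            // throws synchronously, even if the record would compress to below
            // the max message size.
            producer.send(new ProducerRecord<byte[], byte[]>("test-topic", value),
                new Callback() {
                    public void onCompletion(RecordMetadata metadata,
                                             Exception exception) {
                        // Issue 1: a TimeoutException here may mean error code 7
                        // (replication did not finish in time under acks=all) or
                        // one of the client-side timeouts that currently reuse
                        // the same exception type.
                        if (exception instanceof TimeoutException)
                            System.err.println("Ambiguous timeout: " + exception);
                    }
                });
        } catch (RecordTooLargeException e) {
            // Thrown from send() itself instead of being handed to the callback.
            System.err.println("Rejected before batching: " + e);
        } finally {
            producer.close();
        }
    }
}
{code}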

So in terms of the solution, I am thinking of doing the following:
To solve issue (1), add a new ClientTimeoutException and use it instead of 
TimeoutException (error code 7) where the latter is currently misused on the 
client side. 
To solve issue (2), if the message size is less than the memory buffer size, 
let it be appended to the accumulator and let the sender thread handle the 
RecordTooLargeException if the batch is too big. If the message size is bigger 
than the memory buffer size, we can consider throwing IllegalArgumentException.
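
To make that concrete, a rough sketch of what this could look like. Note that 
ClientTimeoutException does not exist yet; the class name, its parent type, 
and the buffer-size check below are all assumptions about the proposal, not 
actual code.

{code:java}
import org.apache.kafka.common.errors.TimeoutException;

public class ProposalSketch {

    // Hypothetical exception for issue (1): client-side timeouts get their own
    // type, so TimeoutException can be reserved for error code 7 alone.
    public static class ClientTimeoutException extends TimeoutException {
        public ClientTimeoutException(String message) {
            super(message);
        }
    }

    // Hypothetical send-time check for issue (2): only reject a record that
    // cannot possibly fit in the accumulator's memory buffer. Anything smaller
    // is appended to the accumulator, and the sender thread raises
    // RecordTooLargeException later if the compressed batch is still too big.
    static void ensureRecordFitsBuffer(long uncompressedSize, long bufferMemory) {
        if (uncompressedSize > bufferMemory)
            throw new IllegalArgumentException("Record size " + uncompressedSize
                + " is larger than the total memory buffer size " + bufferMemory);
    }
}
{code}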

> kafkaProducer.send() should not call callback.onCompletion()
> ------------------------------------------------------------
>                 Key: KAFKA-2200
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2200
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Jiangjie Qin
>              Labels: newbie
> KafkaProducer.send() should not call callback.onCompletion() because this 
> might break the callback firing order.
