[ https://issues.apache.org/jira/browse/KAFKA-1026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059044#comment-14059044 ]

Jay Kreps commented on KAFKA-1026:
----------------------------------

I believe this is fixed in the new producer, no? We will allocate up to the 
maximum memory size for messages larger than the batch size.
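For reference, a minimal sketch of the behaviour Jay describes, using the new
producer API (org.apache.kafka.clients.producer). The broker address, topic
name, and sizes below are placeholders for illustration, not values taken from
this issue:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    public class LargeRecordExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);          // 16 KB batches
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L);   // 32 MB total buffer
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2097152);  // allow 2 MB requests

            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
            try {
                // A 1 MB record is much larger than batch.size; rather than failing,
                // the new producer allocates a buffer sized to the record itself,
                // bounded by buffer.memory, and sends it in its own batch.
                byte[] value = new byte[1024 * 1024];
                producer.send(new ProducerRecord<>("test-topic", value)).get();
            } finally {
                producer.close();
            }
        }
    }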

> Dynamically Adjust Batch Size Upon Receiving MessageSizeTooLargeException
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-1026
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1026
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Guozhang Wang
>            Assignee: Guozhang Wang
>              Labels: newbie++
>             Fix For: 0.9.0
>
>
> Among the exceptions that can possibly be received in Producer.send(), 
> MessageSizeTooLargeException is currently not recoverable, since the producer 
> does not change the batch size but still retries the send. It would be better 
> to have a dynamic batch-size adjustment mechanism based on 
> MessageSizeTooLargeException.
> This is related to KAFKA-998.
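
For comparison, a hedged sketch of the kind of client-side adjustment the
description above proposes, assuming the 0.8 producer API
(kafka.javaapi.producer.Producer). Splitting the batch in half on
MessageSizeTooLargeException stands in for whatever adjustment policy the fix
would actually use; the class and method names here are illustrative only:

    import java.util.List;
    import java.util.Properties;
    import kafka.common.MessageSizeTooLargeException;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class AdaptiveBatchSender {
        private final Producer<String, String> producer;

        public AdaptiveBatchSender(Properties props) {
            // props must carry the usual producer settings (metadata.broker.list, serializer.class, ...)
            this.producer = new Producer<>(new ProducerConfig(props));
        }

        // Send the batch; on MessageSizeTooLargeException, retry with smaller
        // sub-batches instead of retrying the same oversized batch.
        public void sendAdaptive(List<KeyedMessage<String, String>> batch) {
            try {
                producer.send(batch);
            } catch (MessageSizeTooLargeException e) {
                if (batch.size() <= 1) {
                    throw e; // a single message is already too large; nothing left to split
                }
                int mid = batch.size() / 2;
                sendAdaptive(batch.subList(0, mid));
                sendAdaptive(batch.subList(mid, batch.size()));
            }
        }
    }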



--
This message was sent by Atlassian JIRA
(v6.2#6252)
