[ https://issues.apache.org/jira/browse/KAFKA-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma updated KAFKA-5032:
-------------------------------
    Description: 
It's worth noting that the new behaviour for uncompressed messages is the same 
as the existing behaviour for compressed messages.

A few things to think about:

1. Do the producer settings max.request.size and batch.size still make sense 
and do we need to update the documentation? My conclusion is that things are 
still fine, but we may need to revise the docs.
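For reference, a minimal sketch of the two settings (the 1 MB and 16 KB values are the documented defaults; plain Properties rather than real client code):

```java
import java.util.Properties;

public class ProducerSizingSketch {
    // Documented defaults; with message format V2 both settings effectively
    // bound record batches rather than individual messages.
    static final int MAX_REQUEST_SIZE = 1_048_576; // max.request.size: cap on one produce request
    static final int BATCH_SIZE = 16_384;          // batch.size: per-partition batching target

    public static Properties sketchConfig() {
        Properties props = new Properties();
        props.setProperty("max.request.size", String.valueOf(MAX_REQUEST_SIZE));
        props.setProperty("batch.size", String.valueOf(BATCH_SIZE));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(sketchConfig());
    }
}
```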

2. (Seems like we don't need to do this) Consider changing the default max 
message set size to include the record batch overhead. It is currently defined as:

{code}
val MessageMaxBytes = 1000000 + MessageSet.LogOverhead
{code}

We should consider changing it to (I haven't thought it through though):

{code}
val MessageMaxBytes = 1000000 + DefaultRecordBatch.RECORD_BATCH_OVERHEAD
{code}
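For concreteness, the two overhead constants differ as follows (the 12-byte and 61-byte values are assumptions copied by hand from the old log format and the V2 batch header, not read from Kafka source):

```java
public class MessageMaxBytesSketch {
    // Assumed values: MessageSet.LogOverhead is an 8-byte offset plus a
    // 4-byte size field; DefaultRecordBatch.RECORD_BATCH_OVERHEAD is the
    // 61-byte V2 record batch header.
    static final int LOG_OVERHEAD = 8 + 4;           // 12
    static final int RECORD_BATCH_OVERHEAD = 61;

    static final int CURRENT_DEFAULT = 1_000_000 + LOG_OVERHEAD;           // 1000012
    static final int PROPOSED_DEFAULT = 1_000_000 + RECORD_BATCH_OVERHEAD; // 1000061

    public static void main(String[] args) {
        // The proposed change would raise the default by 49 bytes.
        System.out.println(PROPOSED_DEFAULT - CURRENT_DEFAULT);
    }
}
```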

3. When a record batch is too large, we throw RecordTooLargeException, which is 
confusing because there's also a RecordBatchTooLargeException. We should 
consider renaming these exceptions to make the behaviour clearer.
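To make the naming overlap concrete, here is a sketch using local stand-ins for the two exception classes (the real ones live in org.apache.kafka.common.errors; the hierarchy here is illustrative only):

```java
// Stand-ins for the real Kafka exception classes, defined locally so the
// sketch is self-contained.
class RecordTooLargeException extends RuntimeException {
    RecordTooLargeException(String msg) { super(msg); }
}
class RecordBatchTooLargeException extends RuntimeException {
    RecordBatchTooLargeException(String msg) { super(msg); }
}

public class TooLargeNamingSketch {
    // The confusion: under V2 an oversized *batch* is rejected with
    // RecordTooLargeException, even though a separately named
    // RecordBatchTooLargeException also exists.
    public static String classify(RuntimeException e) {
        if (e instanceof RecordBatchTooLargeException) return "batch-named";
        if (e instanceof RecordTooLargeException) return "record-named";
        return "other";
    }

    public static void main(String[] args) {
        System.out.println(classify(new RecordTooLargeException("batch exceeds limit")));
    }
}
```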

4. We should consider deprecating message.max.bytes (server config) and 
max.message.bytes (topic config) in favour of configs that make it clear that 
we are talking about record batches instead of individual messages.
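For reference, the two configs as they stand today, sketched as plain properties (the 1000012 default value is an assumption):

```java
import java.util.Properties;

public class MaxBytesConfigsSketch {
    // Under message format V2 both limits apply to the whole record batch,
    // despite the "message" in their names.
    public static Properties brokerConfig() {
        Properties p = new Properties();
        p.setProperty("message.max.bytes", "1000012"); // broker-wide (server) config
        return p;
    }

    public static Properties topicConfig() {
        Properties p = new Properties();
        p.setProperty("max.message.bytes", "1000012"); // per-topic override
        return p;
    }

    public static void main(String[] args) {
        System.out.println(brokerConfig() + " / " + topicConfig());
    }
}
```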

Part of the work in this JIRA is working out what should be done for 0.11.0.0 
and what can be done later.

  was:
It's worth noting that the new behaviour for uncompressed messages is the same 
as the existing behaviour for compressed messages.

A few things to think about:

1. Do the producer settings max.request.size and batch.size still make sense 
and do we need to update the documentation? My conclusion is that things are 
still fine, but we may need to revise the docs.

2. Consider changing default max message set size to include record batch 
overhead. This is currently defined as:

{code}
val MessageMaxBytes = 1000000 + MessageSet.LogOverhead
{code}

We should consider changing it to (I haven't thought it through though):

{code}
val MessageMaxBytes = 1000000 + DefaultRecordBatch.RECORD_BATCH_OVERHEAD
{code}

3. When a record batch is too large, we throw RecordTooLargeException, which is 
confusing because there's also a RecordBatchTooLargeException. We should 
consider renaming these exceptions to make the behaviour clearer.

4. We should consider deprecating message.max.bytes (server config) and 
max.message.bytes (topic config) in favour of configs that make it clear that 
we are talking about record batches instead of individual messages.

Part of the work in this JIRA is working out what should be done for 0.11.0.0 
and what can be done later.


> Think through implications of max.message.size affecting record batches in 
> message format V2
> --------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-5032
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5032
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: clients, core, producer 
>            Reporter: Ismael Juma
>            Priority: Critical
>              Labels: documentation, exactly-once
>             Fix For: 0.11.0.0
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
