Hi Kirk,

Thanks for the questions; let me answer them below:

1. This is handled by the FileRecords class: open() now uses slice()
<https://github.com/apache/kafka/pull/11842/files#diff-27b1f2e66462b1f8ef9b21da31ae9bdc76576b590458cd92d3b7d8b4042a2e10R253>,
which takes care of the end bytes. There may be some trailing bytes, which
is totally fine; as you can see here
<https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/DumpLogSegments.scala#L309>,
that was already expected before my proposal. I added an extra conditional
to avoid printing the warning in the main script here
<https://github.com/apache/kafka/pull/11842/files#diff-27b1f2e66462b1f8ef9b21da31ae9bdc76576b590458cd92d3b7d8b4042a2e10R310>.
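To make the trailing-bytes point concrete, here is a minimal stand-alone sketch (not the actual Kafka code; batchesWithin is a hypothetical stand-in for what slice() achieves): when a byte limit cuts through a batch, only fully covered batches are returned, and the partial batch's trailing bytes are simply left out.

```java
import java.util.ArrayList;
import java.util.List;

public class SliceDemo {
    // Hypothetical stand-in for slicing a file of record batches at a byte
    // limit: keep only the batches that fit completely; the trailing bytes
    // of a partially covered batch are ignored rather than treated as an error.
    static List<Integer> batchesWithin(List<Integer> batchSizes, int maxBytes) {
        List<Integer> result = new ArrayList<>();
        int consumed = 0;
        for (int size : batchSizes) {
            if (consumed + size > maxBytes) {
                break; // this batch would not fit entirely -> trailing bytes
            }
            result.add(size);
            consumed += size;
        }
        return result;
    }

    public static void main(String[] args) {
        // Batches of 100, 120 and 80 bytes with a 250-byte limit: the third
        // batch would only be partially covered, so it is dropped.
        System.out.println(batchesWithin(List.of(100, 120, 80), 250));
    }
}
```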

2. The FileRecords class already checks the value
<https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/FileRecords.java#L167>,
so we do the same; it is a similar case to how maxMessageSizeOpt is handled
<https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/DumpLogSegments.scala#L428>.
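As an illustration of the check (a hedged sketch only; parseMaxBytes and the error message are hypothetical, not the actual DumpLogSegments code), the idea is to reject a negative value at argument-parsing time before it reaches the slicing logic:

```java
public class ArgCheck {
    // Hypothetical sketch of validating a byte-limit CLI option: parse the
    // raw string and fail fast on negative values, mirroring the kind of
    // bounds check FileRecords performs internally.
    static int parseMaxBytes(String raw) {
        int value = Integer.parseInt(raw);
        if (value < 0) {
            throw new IllegalArgumentException(
                "--max-batches-size must be non-negative, got: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(parseMaxBytes("1024"));
    }
}
```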


3. I have just added the unit test 😀




On Fri, 4 Mar 2022 at 20:21, Kirk True <k...@mustardgrain.com> wrote:

> Hi Sergio,
>
> Thanks for the KIP. I don't know anything about the log segment internals,
> but the logic and implementation seem sound.
>
> Three questions:
>  1. Since the --max-batches-size unit is bytes, does it matter if that
> size doesn't align to a record boundary?
>  2. Can you add a check to make sure that --max-batches-size doesn't allow
> the user to pass in a negative number?
>  3. Can you add/update any unit tests related to the DumpLogSegments
> arguments?
> Thanks,
> Kirk
>
> On Thu, Mar 3, 2022, at 1:32 PM, Sergio Daniel Troiano wrote:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-824%3A+Allowing+dumping+segmentlogs+limiting+the+batches+in+the+output
> >
>
