Joel Koshy updated KAFKA-4293:
    Assignee: radai rosenblatt

It turns out we should be able to handle all of our current codecs by 
re-implementing the {{available()}} method correctly. We would still want to 
continue to catch EOF as a safety net for any future codecs we may add.
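
The suggestion above can be sketched as a wrapper stream. This is a minimal, hypothetical sketch ({{ReliableEofInputStream}} is an illustrative name, not an actual Kafka class): it buffers one read-ahead byte so that {{available() == 0}} reliably signals end of stream, which lets the iteration loop avoid exception-driven control flow.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: buffer one read-ahead byte so that available() == 0
// reliably means end-of-stream, even for codecs whose available() is unreliable.
class ReliableEofInputStream extends FilterInputStream {
    private int peeked = -2; // -2: nothing buffered, -1: EOF seen, else: a buffered byte

    ReliableEofInputStream(InputStream in) { super(in); }

    @Override
    public int available() throws IOException {
        if (peeked == -2)
            peeked = in.read(); // probe the underlying stream for EOF
        return peeked == -1 ? 0 : 1;
    }

    @Override
    public int read() throws IOException {
        if (peeked == -2)
            return in.read();
        if (peeked == -1)
            return -1;
        int b = peeked;
        peeked = -2;
        return b;
    }

    // FilterInputStream.read(byte[], int, int) bypasses read(), so we must
    // override it too, handing back the buffered byte first.
    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (len == 0)
            return 0;
        if (peeked == -1)
            return -1;
        if (peeked >= 0) {
            b[off] = (byte) peeked;
            peeked = -2;
            int n = in.read(b, off + 1, len - 1);
            return n == -1 ? 1 : n + 1;
        }
        return in.read(b, off, len);
    }
}
```

The deep-iteration loop could then check {{available() > 0}} instead of reading until EOFException, keeping the catch only as a safety net for future codecs.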

> ByteBufferMessageSet.deepIterator burns CPU catching EOFExceptions
> ------------------------------------------------------------------
>                 Key: KAFKA-4293
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4293
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions:
>            Reporter: radai rosenblatt
>            Assignee: radai rosenblatt
> around line 110:
> {noformat}
> try {
>     while (true)
>         innerMessageAndOffsets.add(readMessageFromStream(compressed))
> } catch {
>     case eofe: EOFException =>
>         // we don't do anything at all here, because the finally
>         // will close the compressed input stream, and we simply
>         // want to return the innerMessageAndOffsets
> }
> {noformat}
> the only indication the code gets that the end of the iteration has been 
> reached is the EOFException thrown inside readMessageFromStream().
> profiling runs performed at LinkedIn show 10% of the total broker CPU time 
> taken up by Throwable.fillInStackTrace() because of this behaviour.
> unfortunately InputStream.available() cannot be relied upon (concrete example 
> - GZIPInputStream will not reliably return 0 at end of stream), so the fix 
> would probably require a wire format change to also encode the number of 
> messages.
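
The GZIP caveat in the description is easy to reproduce in a standalone demo: InflaterInputStream (which GZIPInputStream extends) only discovers end-of-stream when a read actually returns -1, so depending on the JDK version available() can still report 1 after every decompressed byte has been consumed. This sketch is illustrative only (the class name is made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipAvailableDemo {
    public static void main(String[] args) throws IOException {
        // gzip a 5-byte payload
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write("hello".getBytes(StandardCharsets.UTF_8));
        }
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            byte[] buf = new byte[5];
            int total = 0, n;
            while (total < buf.length && (n = in.read(buf, total, buf.length - total)) != -1)
                total += n;
            // every decompressed byte has been consumed here, yet on many JDK
            // versions available() may still report 1 -- the stream only
            // learns about EOF from the next read()
            System.out.println("available() after consuming payload: " + in.available());
            System.out.println("next read(): " + in.read());                         // -1
            System.out.println("available() after the EOF read: " + in.available()); // 0
        }
    }
}
```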

This message was sent by Atlassian JIRA
