[
https://issues.apache.org/jira/browse/KAFKA-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15878775#comment-15878775
]
Armin Braun commented on KAFKA-1895:
------------------------------------
[~ijuma] maybe you have an opinion on the viability of the above? :)
> Investigate moving deserialization and decompression out of KafkaConsumer
> -------------------------------------------------------------------------
>
> Key: KAFKA-1895
> URL: https://issues.apache.org/jira/browse/KAFKA-1895
> Project: Kafka
> Issue Type: Sub-task
> Components: consumer
> Reporter: Jay Kreps
>
> The consumer implementation in KAFKA-1760 decompresses fetch responses and
> deserializes them into ConsumerRecords which are then handed back as the
> result of poll().
> There are several downsides to this:
> 1. It is impossible to scale serialization and decompression work beyond the
> single thread running the KafkaConsumer.
> 2. The results can come back during the processing of other calls such as
> commit(), which means these records may be cached longer than necessary.
> An alternative would be to have ConsumerRecords wrap the actual compressed
> serialized MemoryRecords chunks and do the deserialization during iteration.
> This way you could scale this over a thread pool if needed.
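The lazy-iteration idea above could look roughly like the following sketch. This is only an illustration of the proposal, not Kafka's actual API: the `LazyRecords` class and its fields are hypothetical, standing in for a ConsumerRecords that holds raw (already-fetched) byte chunks and defers deserialization to whichever thread iterates, so the work can be spread over a pool.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of the proposal: records stay as raw bytes until
// iterated, so the deserialization cost is paid by the iterating thread
// (which could be a worker in a thread pool) rather than the poll() thread.
class LazyRecords<T> implements Iterable<T> {
    private final List<byte[]> rawChunks;           // stand-in for MemoryRecords payloads
    private final Function<byte[], T> deserializer; // stand-in for a Deserializer

    LazyRecords(List<byte[]> rawChunks, Function<byte[], T> deserializer) {
        this.rawChunks = rawChunks;
        this.deserializer = deserializer;
    }

    @Override
    public Iterator<T> iterator() {
        Iterator<byte[]> raw = rawChunks.iterator();
        return new Iterator<T>() {
            public boolean hasNext() { return raw.hasNext(); }
            // Deserialization happens here, on demand, per record.
            public T next() { return deserializer.apply(raw.next()); }
        };
    }
}
```

A caller could then hand separate `LazyRecords` instances (e.g. one per partition) to different executor tasks, scaling deserialization beyond the single consumer thread as the description suggests.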
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)