[ https://issues.apache.org/jira/browse/KAFKA-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879483#comment-15879483 ]

Jason Gustafson commented on KAFKA-1895:
----------------------------------------

I don't know, supporting two deserializers sounds a lot easier than supporting 
two poll() methods ;). No new configurations, no new APIs; we just need an 
instanceof check to tell which one to use. There might be better ideas out 
there, though.
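
Roughly, the dispatch might look like the sketch below. ByteBufferDeserializer 
and RecordParser are made-up names for illustration here, not actual or 
proposed Kafka API:

import java.nio.ByteBuffer;

import org.apache.kafka.common.serialization.Deserializer;

// Hypothetical zero-copy variant of Deserializer (not a real Kafka
// interface): takes a ByteBuffer instead of a byte[].
interface ByteBufferDeserializer<T> {
    T deserialize(String topic, ByteBuffer data);
}

// Hypothetical helper showing the instanceof dispatch inside the consumer.
class RecordParser<T> {
    private final Deserializer<T> deserializer;

    RecordParser(Deserializer<T> deserializer) {
        this.deserializer = deserializer;
    }

    @SuppressWarnings("unchecked")
    T parse(String topic, ByteBuffer value) {
        if (deserializer instanceof ByteBufferDeserializer) {
            // Zero-copy path: hand the user a read-only view of the buffer.
            return ((ByteBufferDeserializer<T>) deserializer)
                    .deserialize(topic, value.asReadOnlyBuffer());
        }
        // Compatibility path: copy into a byte[] for the existing API.
        byte[] bytes = new byte[value.remaining()];
        value.duplicate().get(bytes);
        return deserializer.deserialize(topic, bytes);
    }
}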

Making the buffers read-only is necessary but not sufficient, since the 
consumer also needs to know when it can reuse them. It seems like you would 
need a way for the user to increment a reference count so that the consumer 
knows not to reuse the memory (something like what Netty's ByteBuf provides 
with retain() and release()).
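
Something along these lines, say. RefCountedBuffer is a made-up sketch, 
loosely modeled on Netty's retain()/release(), not an actual proposal:

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical reference-counted wrapper around a consumer-owned buffer.
final class RefCountedBuffer {
    private final ByteBuffer buffer;
    private final Runnable onRelease; // e.g. return the buffer to the pool
    private final AtomicInteger refCount = new AtomicInteger(1);

    RefCountedBuffer(ByteBuffer buffer, Runnable onRelease) {
        this.buffer = buffer;
        this.onRelease = onRelease;
    }

    // Users only ever see a read-only view of the underlying memory.
    ByteBuffer buffer() {
        return buffer.asReadOnlyBuffer();
    }

    // A user calls retain() before keeping a reference past poll(),
    // e.g. when handing the buffer to another thread.
    RefCountedBuffer retain() {
        if (refCount.getAndIncrement() <= 0) {
            refCount.getAndDecrement();
            throw new IllegalStateException("buffer already released");
        }
        return this;
    }

    // The consumer may only reuse the memory once the count hits zero.
    void release() {
        if (refCount.decrementAndGet() == 0)
            onRelease.run();
    }
}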

> Investigate moving deserialization and decompression out of KafkaConsumer
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-1895
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1895
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: consumer
>            Reporter: Jay Kreps
>
> The consumer implementation in KAFKA-1760 decompresses fetch responses and 
> deserializes them into ConsumerRecords, which are then handed back as the 
> result of poll().
> There are several downsides to this:
> 1. It is impossible to scale deserialization and decompression work beyond 
> the single thread running the KafkaConsumer.
> 2. The results can come back during the processing of other calls, such as 
> commit(), which means these records may end up being cached longer than 
> necessary.
> An alternative would be to have ConsumerRecords wrap the actual compressed, 
> serialized MemoryRecords chunks and do the deserialization during iteration, 
> as sketched below. That way the work could be spread across a thread pool if 
> needed.
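
For concreteness, here is a rough sketch of that alternative. RawChunk, 
LazyRecords, and decode() are illustrative names only, not the KAFKA-1760 
code:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Hypothetical: one compressed, serialized MemoryRecords chunk plus the
// work needed to decompress and deserialize it.
interface RawChunk<T> {
    List<T> decode(); // CPU-bound: decompress, then deserialize
}

// Hypothetical records container that defers decoding until asked.
class LazyRecords<T> {
    private final List<RawChunk<T>> chunks;

    LazyRecords(List<RawChunk<T>> chunks) {
        this.chunks = chunks;
    }

    // Decode chunks on a caller-supplied pool, so the CPU-heavy work is
    // no longer pinned to the single thread that called poll().
    List<T> decodeAll(ExecutorService pool) throws Exception {
        List<Future<List<T>>> futures = new ArrayList<>();
        for (RawChunk<T> chunk : chunks)
            futures.add(pool.submit(chunk::decode));
        List<T> out = new ArrayList<>();
        for (Future<List<T>> f : futures)
            out.addAll(f.get());
        return out;
    }
}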



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
