[ https://issues.apache.org/jira/browse/KAFKA-1430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990819#comment-13990819 ]

Jay Kreps commented on KAFKA-1430:
----------------------------------

The way we garbage collect obsolete entries was fundamentally a bit sketchy due 
to keeping both an LRU list and a hash.
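
For context, here is a minimal sketch (hypothetical names, not the actual Kafka
code) of why that dual bookkeeping is awkward: every watched entry lives in both
a hash (for key lookup) and an LRU-style list (for expiration order), and since
unlinking from the list on completion is expensive, obsolete entries tend to sit
there until a periodic purge walks the whole list.

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.Map;

    // Hypothetical sketch of the dual LRU-list-plus-hash bookkeeping.
    class DualStructureWatcherStore<K> {
        static class Entry<K> {
            final K key;
            volatile boolean satisfied;
            Entry(K key) { this.key = key; }
        }

        private final Map<K, Entry<K>> byKey = new HashMap<>();
        private final LinkedList<Entry<K>> expirationOrder = new LinkedList<>();

        synchronized void watch(K key) {
            Entry<K> e = new Entry<>(key);
            byKey.put(key, e);           // structure 1: hash, for lookup
            expirationOrder.addLast(e);  // structure 2: list, for expiration order
        }

        synchronized void satisfy(K key) {
            Entry<K> e = byKey.remove(key);
            if (e != null) {
                // Unlinking from the list here would be O(n), so the entry is only
                // marked and left in place -- which is exactly how obsolete entries
                // accumulate until the periodic purge runs.
                e.satisfied = true;
            }
        }

        // Periodic purge: walk the whole list and drop already-satisfied entries.
        synchronized int purgeSatisfied() {
            int purged = 0;
            Iterator<Entry<K>> it = expirationOrder.iterator();
            while (it.hasNext()) {
                if (it.next().satisfied) {
                    it.remove();
                    purged++;
                }
            }
            return purged;
        }
    }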

> Purgatory redesign
> ------------------
>
>                 Key: KAFKA-1430
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1430
>             Project: Kafka
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 0.8.2
>            Reporter: Jun Rao
>
> We have seen 2 main issues with the Purgatory.
> 1. There is no atomic checkAndWatch functionality. A client typically first 
> checks whether a request is satisfied and then registers the watcher. However, 
> by the time the watcher is registered, the item could already be satisfiable. 
> Such an item won't be completed until the next update happens or the delay 
> expires, which means the watched item can be unnecessarily delayed. 
> 2. FetchRequestPurgatory doesn't quite work. This is because the current 
> design tries to incrementally maintain the accumulated bytes ready for fetch. 
> However, this is difficult since the right time to check whether a fetch 
> request (for a regular consumer) is satisfied is when the high watermark 
> moves, and at that point it's hard to figure out how many bytes we should 
> incrementally add to each pending fetch request.
> The problem has been reported in KAFKA-1150 and KAFKA-703.
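
A minimal sketch of the atomic check-and-watch idea behind issue 1 above (the
class and method names here are made up for illustration, not Kafka's actual
API): the satisfaction check and the watcher registration happen under the same
lock, so a request that is already satisfiable is completed immediately instead
of sitting in purgatory until the next update or its expiration. The checkAll
path also shows the "re-evaluate from scratch on state change" approach that
sidesteps the incremental byte accounting described in issue 2.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of an atomic check-and-watch.
    abstract class DelayedOperation {
        // Returns true if the operation is satisfied and has now been completed.
        abstract boolean tryComplete();
    }

    class Watchers {
        private final List<DelayedOperation> watching = new ArrayList<>();

        // The racy pattern: the caller checks satisfaction first, then calls this.
        // The operation can become satisfiable in between, and then nothing will
        // complete it until the next update or its expiration.
        synchronized void watch(DelayedOperation op) {
            watching.add(op);
        }

        // The atomic alternative: check and register under one lock, so a
        // concurrent update cannot slip in between the two steps.
        synchronized boolean checkAndWatch(DelayedOperation op) {
            if (op.tryComplete()) {
                return true;          // already satisfied; never parked in purgatory
            }
            watching.add(op);         // not yet satisfied; safe to watch
            return false;
        }

        // Called when state changes (e.g. the high watermark advances): re-evaluate
        // every watched operation from scratch rather than trying to maintain
        // incremental byte counts.
        synchronized void checkAll() {
            watching.removeIf(DelayedOperation::tryComplete);
        }
    }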



--
This message was sent by Atlassian JIRA
(v6.2#6252)
