[ https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625674#comment-14625674 ]

Ewen Cheslack-Postava commented on KAFKA-1835:
----------------------------------------------

[~becket_qin] I agree that for a user this doesn't look great when they first 
start using the API and switch it to be fully non-blocking. Although in a 
perverse way it may be pretty good behavior for those users -- it forces them 
to actually handle that exception properly, because they have to handle it to 
get any data sent out at all. This means they should be robust, to some 
degree, to both a failed metadata fetch and a buffer-full condition. And I'm 
not convinced this behavior is unreasonable. Right now we continue to use the 
metadata we have for partitioning even after it has passed the max age. I'd 
argue that as soon as the metadata exceeds metadata.max.age.ms and we can't 
get an update, we could reasonably start throwing the same error on send, 
because we can't be certain the partitioning is still valid: the number of 
partitions could have changed.
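
To make that concrete, here is a minimal sketch of what a fully non-blocking 
caller has to handle, assuming the 0.8.2-era config names 
(block.on.buffer.full, metadata.fetch.timeout.ms) and assuming send() surfaces 
metadata unavailability as a TimeoutException -- treat the specifics as 
illustrative, not settled API:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.BufferExhaustedException;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;

    public class NonBlockingSendSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            // Throw instead of blocking when the accumulator is full.
            props.put("block.on.buffer.full", "false");
            // Bound how long send() may wait on the initial metadata fetch.
            props.put("metadata.fetch.timeout.ms", "100");

            Producer<String, String> producer = new KafkaProducer<>(props);
            try {
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            } catch (BufferExhaustedException e) {
                // Accumulator full: drop, spill locally, or push back on
                // the caller.
            } catch (TimeoutException e) {
                // Metadata never became available: same decision applies.
            } finally {
                producer.close();
            }
        }
    }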

In fact, perhaps that's another bug? Given a connectivity issue with the 
cluster, the producer could partition incorrectly for arbitrarily long. The 
impact is also limited by the buffer size, so in most cases it probably 
wouldn't be an issue, but it seems like bad behavior nonetheless.
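
As a toy illustration of that risk (the hash here is simplified; the real 
default partitioner hashes the serialized key with murmur2):

    import java.util.Arrays;

    public class StalePartitioningSketch {
        // Keyed partitioning depends on the partition count taken from
        // cached metadata, so a stale count silently routes keys to the
        // wrong partition.
        static int partitionFor(byte[] keyBytes, int numPartitions) {
            return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
        }

        public static void main(String[] args) {
            byte[] key = "some-key".getBytes();
            // Cached metadata still says the topic has 4 partitions...
            int stale = partitionFor(key, 4);
            // ...but the topic has since been expanded to 8.
            int fresh = partitionFor(key, 8);
            // Whenever stale != fresh, records for this key land on the
            // wrong partition, breaking per-key ordering and co-location.
            System.out.println(stale + " vs " + fresh);
        }
    }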

> Kafka new producer needs options to make blocking behavior explicit
> -------------------------------------------------------------------
>
>                 Key: KAFKA-1835
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1835
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>    Affects Versions: 0.8.2.0, 0.8.3, 0.9.0
>            Reporter: Paul Pearcy
>             Fix For: 0.8.3
>
>         Attachments: KAFKA-1835-New-producer--blocking_v0.patch, 
> KAFKA-1835.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> The new (0.8.2 standalone) producer will block the first time it attempts to 
> retrieve metadata for a topic. This is not the desired behavior in use cases 
> where async, non-blocking guarantees are required and message loss is 
> acceptable in known cases. Also, most developers will assume an API that 
> returns a future is safe to call in a critical request path. 
> From discussion on the mailing list, the most viable option is to add the 
> following settings:
>  pre.initialize.topics=x,y,z
>  pre.initialize.timeout=x
>  
> This moves the potential blocking to producer init and out of some random 
> request. The potential for blocking will still exist in the corner case where 
> connectivity with Kafka is lost and a message is sent for the first time to a 
> topic not included in pre-init. 
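> A sketch of how the proposed knobs might be set (these keys are the proposal 
> in this ticket, not existing producer configs; the timeout unit is assumed 
> to be milliseconds):
>
>     Properties props = new Properties();
>     // Proposed: fetch metadata for the listed topics at construction time
>     // instead of blocking on the first send() to each topic.
>     props.put("pre.initialize.topics", "x,y,z");
>     props.put("pre.initialize.timeout", "5000");
>     Producer<String, String> producer = new KafkaProducer<>(props);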
> There is the question of what to do when initialization fails. There are a 
> couple of options that I'd like available:
> - Fail creation of the client 
> - Fail all sends until the meta is available 
> Open to input on how the above options should be expressed. 
> It is also worth noting that more nuanced solutions exist that could work 
> without the extra settings; they just end up adding extra complications 
> without much value at the end of the day. For instance, the producer could 
> accept and queue messages (note: more complicated than I am making it sound, 
> since all accepted messages must be stored in pre-partitioned compact binary 
> form), but you're still going to be forced to choose between blocking and 
> dropping messages at some point. 
> I have some test cases I am going to port over to the Kafka producer 
> integration tests and start from there. My current impl is in Scala, but 
> porting to Java shouldn't be a big deal (I was using a promise to track init 
> status, but will likely need to make that an atomic bool). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
