[ https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628122#comment-14628122 ]
David Hay commented on KAFKA-1835:
----------------------------------

It seems to me that there are two aspects to sending a message: (a) selecting a partition and (b) sending the message to the partition leader. Currently, (b) is performed asynchronously, but (a) is not, and it contains a potentially blocking call to fetch (or refresh) partition metadata. Maybe I'm missing something, but why couldn't the producer just queue the ProducerRecord and have a separate thread take care of querying for broker metadata, serializing the key and message, selecting a partition, and pushing the result onto the send queue? In other words, put the current implementation of {{send()}} in a separate thread?

This is essentially what we've had to do in order to work around the possibility of {{send()}} blocking when we don't want it to. In addition, it puts the serialization process in a separate thread, which is important when we need to send a message and respond to user input as quickly as possible.

> Kafka new producer needs options to make blocking behavior explicit
> -------------------------------------------------------------------
>
>                 Key: KAFKA-1835
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1835
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>    Affects Versions: 0.8.2.0, 0.8.3, 0.9.0
>            Reporter: Paul Pearcy
>             Fix For: 0.8.3
>
>         Attachments: KAFKA-1835-New-producer--blocking_v0.patch, KAFKA-1835.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> The new (0.8.2 standalone) producer will block the first time it attempts to retrieve metadata for a topic. This is not the desired behavior in use cases where asynchronous, non-blocking guarantees are required and message loss is acceptable in known cases. Also, most developers will assume that an API returning a future is safe to call in a critical request path.
> From discussion on the mailing list, the most viable option is to add the following settings:
> pre.initialize.topics=x,y,z
> pre.initialize.timeout=x
>
> This moves the potential blocking to producer initialization and out of some arbitrary request. Blocking is still possible in the corner case where connectivity with Kafka is lost and a message is sent for the first time to a topic not included in the pre-init list.
> There is the question of what to do when initialization fails. There are a couple of options that I'd like available:
> - Fail creation of the client
> - Fail all sends until the metadata is available
> Open to input on how the above options should be expressed.
> It is also worth noting that more nuanced solutions exist that could work without the extra settings; they just end up adding complications without much value. For instance, the producer could accept and queue messages (note: more complicated than I am making it sound, because all accepted messages are stored in pre-partitioned compact binary form), but you're still going to be forced to choose between blocking and dropping messages at some point.
> I have some test cases that I am going to port over to the Kafka producer integration tests and start from there. My current implementation is in Scala, but porting to Java shouldn't be a big deal (I was using a promise to track init status, but will likely need to make that an atomic boolean).
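For illustration, here is a minimal sketch of the workaround described in the comment above: the caller only enqueues records, and a dedicated background thread invokes {{send()}}, so metadata fetches and key/value serialization happen off the caller's thread. The class name, the bounded queue size, and the drop-on-full policy are assumptions for this sketch, not code from any patch attached to this issue.

{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class NonBlockingSender<K, V> {

    private final Producer<K, V> producer;
    // Bounded queue: when it is full, trySend() drops the record instead of
    // blocking the caller (message loss is acceptable per the description).
    private final BlockingQueue<ProducerRecord<K, V>> queue =
            new LinkedBlockingQueue<ProducerRecord<K, V>>(10000);
    private volatile boolean running = true;

    public NonBlockingSender(Properties config) {
        this.producer = new KafkaProducer<K, V>(config);
        Thread sender = new Thread(new Runnable() {
            public void run() {
                while (running) {
                    try {
                        // send() may block on the first metadata fetch for a
                        // topic, and it also serializes the key and value, but
                        // only this background thread is affected.
                        producer.send(queue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }, "kafka-sender");
        sender.setDaemon(true);
        sender.start();
    }

    /** Never blocks; returns false if the record had to be dropped. */
    public boolean trySend(ProducerRecord<K, V> record) {
        return queue.offer(record);
    }

    public void close() {
        running = false;
        producer.close();
    }
}
{code}

Dropping when the queue fills up is only one policy; blocking with a timeout or failing the send outright are the other choices the description alludes to when it notes you are eventually forced to either block or drop.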
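Similarly, a sketch of the "move blocking to init" idea from the description. The proposed pre.initialize.* settings do not exist yet, so this uses the existing {{partitionsFor()}} call to force a metadata fetch per topic at startup; the factory class, bootstrap address, and topic names are placeholder assumptions.

{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class PreInitializedProducerFactory {

    public static Producer<byte[], byte[]> create(Properties config, List<String> topics) {
        Producer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(config);
        for (String topic : topics) {
            // partitionsFor() blocks until metadata for the topic is available
            // (bounded by metadata.fetch.timeout.ms in the 0.8.2 producer), so
            // any blocking happens here at startup rather than inside the
            // first send() for that topic.
            producer.partitionsFor(topic);
        }
        return producer;
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        config.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        // "x", "y", "z" stand in for the topics from pre.initialize.topics=x,y,z.
        Producer<byte[], byte[]> producer = create(config, Arrays.asList("x", "y", "z"));
        producer.close();
    }
}
{code}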