[ 
https://issues.apache.org/jira/browse/KAFKA-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010498#comment-14010498
 ] 

Jay Kreps edited comment on KAFKA-1316 at 5/27/14 11:38 PM:
------------------------------------------------------------

For (2) I think there are two solutions:
1. Change the server semantics to allow processing multiple requests at the 
same time and out of order. 
2. Use two connections

Quick discussion:
1. Allowing out-of-order requests complicates things in the clients a bit, as 
you can no longer reason that the Nth response is for the Nth request you made. 
It also isn't clear what we would even guarantee on the server side. The two 
things we have to do are handle produce requests in order and apply back 
pressure when too much data is sent. Back pressure means the socket server needs 
to stop reading requests, but to make that decision it needs to have parsed the 
request and know it is a produce request...
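To make the ordering assumption concrete, here is a minimal sketch (not Kafka's actual API; the class and method names are illustrative) of the FIFO correlation a client can rely on today: requests are queued as they are sent, and each response simply completes the oldest in-flight request. Out-of-order completion on the server would break exactly this pairing.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: a client tracks in-flight requests in FIFO order,
// so the Nth response on a connection is matched to the Nth request sent.
public class InFlightRequests {
    private final Deque<Integer> inFlight = new ArrayDeque<>(); // correlation ids, oldest first

    public void send(int correlationId) {
        inFlight.addLast(correlationId);
    }

    // On receiving a response we pop the oldest request; the response
    // itself needs no id for matching because order is guaranteed.
    public int completeNext() {
        return inFlight.removeFirst();
    }

    public static void main(String[] args) {
        InFlightRequests reqs = new InFlightRequests();
        reqs.send(1);
        reqs.send(2);
        System.out.println(reqs.completeNext()); // oldest request completes first
        System.out.println(reqs.completeNext());
    }
}
```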

2. Using two connections might work, though it is a bit hacky. The consumer 
would need to create a Node object for the host-port of the current coordinator 
and then things would work from there on (I think). The node id would need to 
be some negative number or something. I'm not really sure if there is a clean 
generalization of this.
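A minimal sketch of that negative-id idea (the Node class and sentinel here are hypothetical, not Kafka's actual API): give the coordinator connection a synthetic node id that cannot collide with real broker ids, which are non-negative, so the existing connection machinery can manage it like any other node.

```java
// Illustrative sketch: a synthetic Node for the coordinator connection.
// Real broker ids are non-negative, so a negative sentinel id never collides.
public class CoordinatorNode {
    static final int COORDINATOR_ID = -1; // sentinel: never a real broker id

    final int id;
    final String host;
    final int port;

    CoordinatorNode(String host, int port) {
        this.id = COORDINATOR_ID;
        this.host = host;
        this.port = port;
    }

    boolean isCoordinator() {
        return id < 0;
    }

    public static void main(String[] args) {
        CoordinatorNode node = new CoordinatorNode("broker1", 9092);
        System.out.println(node.id + " coordinator=" + node.isCoordinator());
    }
}
```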

> Refactor Sender
> ---------------
>
>                 Key: KAFKA-1316
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1316
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: producer 
>            Reporter: Jay Kreps
>            Assignee: Jay Kreps
>         Attachments: KAFKA-1316.patch
>
>
> Currently most of the logic of the producer I/O thread is in Sender.java.
> However we will need to do a fair number of similar things in the new 
> consumer. Specifically:
>  - Track in-flight requests
>  - Fetch metadata
>  - Manage connection lifecycle
> It may be possible to refactor some of this into a helper class that can be 
> shared with the consumer. This will require some detailed thought.



--
This message was sent by Atlassian JIRA
(v6.2#6252)