On Apr 8, 2010, at 11:35 PM, Bruce Mitchener wrote:

> Doug,
> 
> I'm happy to hear that you like this approach!
> 
> Allocation of channels seems to be something specific to an application.  In
> my app, I'd have a channel for the streaming data that is constantly
> arriving and a channel for making requests on and getting back answers
> immediately.  Others could have a channel per object or whatever.

If this is all on one TCP connection, then channels will interfere with one 
another somewhat -- the transport layer delivers bytes in the order they were 
sent.  If one packet of your streaming data stalls, both channels stall behind 
it (head-of-line blocking).  Depending on the application's requirements, this 
may be fine.  But it should be made clear that channels are not independent; 
they are merely interleaved over one ordered data stream.  How each 
implementation orders its sends on one end determines arrival order on the 
other.
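To make the interleaving concrete, here is a minimal sketch of two channels 
multiplexed over one ordered stream.  The frame layout (4-byte channel id plus 
4-byte length) is purely illustrative, not any actual wire format:

```python
# Hypothetical sketch: two channels interleaved over one ordered byte stream.
# The frame layout here is an assumption for illustration only.
import io
import struct

def write_frame(buf, channel_id, payload):
    # 4-byte channel id, 4-byte payload length, then the payload bytes.
    buf.write(struct.pack(">II", channel_id, len(payload)) + payload)

def read_frames(buf):
    # Frames can only be read in the order they were written.
    buf.seek(0)
    while True:
        header = buf.read(8)
        if len(header) < 8:
            break
        channel_id, length = struct.unpack(">II", header)
        yield channel_id, buf.read(length)

stream = io.BytesIO()
write_frame(stream, 1, b"streaming chunk")  # channel 1: streaming data
write_frame(stream, 2, b"request #1")       # channel 2: request/response
write_frame(stream, 1, b"another chunk")

# The reader sees frames strictly in send order: if delivery of the first
# frame stalls, the request on channel 2 stalls behind it too.
order = [cid for cid, _ in read_frames(stream)]
print(order)  # [1, 2, 1]
```

The point is that channel 2's request cannot be parsed until every byte 
written before it has arrived, regardless of which channel those bytes belong 
to.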

> 
> I definitely agree about not wanting a handshake per request. For my
> application that would add a lot of overhead in terms of the data
> transmitted.  (I'm sending a lot of small requests, hopefully many thousands
> per second...)  I would be much much happier being able to have a handshake
> per connection (or per channel open).
> 

A handshake per request will limit WAN usage.  Doubling request latency isn't 
a problem on a local network with sub-0.1 ms RTTs, but it is a problem with 
25 ms RTTs.  Round trips aren't free on the processing or bandwidth side 
either.  If there is a way to meet most goals and limit extra handshakes to 
specific cases, that would be a significant performance improvement.
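The back-of-envelope arithmetic behind this, with illustrative (assumed) RTT 
values rather than measurements:

```python
# Added latency from an extra handshake round trip per request.
# RTT figures are illustrative assumptions, not measurements.
def request_time_ms(rtt_ms, handshake_per_request):
    # One RTT for the request itself, plus one more if every
    # request must first complete its own handshake exchange.
    rtts = 2 if handshake_per_request else 1
    return rtts * rtt_ms

for label, rtt in (("LAN", 0.1), ("WAN", 25.0)):
    with_hs = request_time_ms(rtt, True)
    without = request_time_ms(rtt, False)
    print(f"{label}: {with_hs} ms vs {without} ms")
# On the 25 ms WAN link, per-request handshakes double latency: 50 ms vs 25 ms;
# on the LAN the absolute cost (0.1 ms) is negligible.
```

The relative cost is the same everywhere (2x), but only on the WAN does the 
absolute penalty dominate per-request time.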

> - Bruce
> 
> On Thu, Apr 8, 2010 at 4:43 PM, Doug Cutting <cutt...@apache.org> wrote:
> 
>> Bruce,
>> 
>> Overall this looks like a good approach to me.
>> 
>> How do you anticipate allocating channels?  I'm guessing this would be one
>> per client object, that a pool of open connections to servers would be
>> maintained, and creating a new client object would allocate a new channel.
>> 
>> Currently we perform a handshake per request.  This is fairly cheap and
>> permits things like routing through proxy servers.  Different requests over
>> the same connection can talk to different backend servers running different
>> versions of the protocol.  Also consider the case where, between calls on an
>> object, the connection times out, and a new session is established and a new
>> handshake must take place.
>> 
>> That said, having a session where the handshake can be assumed vastly
>> simplifies one-way messages.  Without a response or error on which to prefix
>> a handshake response, a one-way client has no means to know that the server
>> was able to even parse its request.  Yet we'd still like a handshake for
>> one-way messages, so that clients and servers need not be versioned in
>> lockstep.  So the handshake-per-request model doesn't serve one-way messages
>> well.
>> 
>> How can we address both of these needs: to permit flexible payload routing
>> and efficient one-way messaging?
>> 
>> Doug
>> 
>> 
>> Bruce Mitchener wrote:
>> 
>>> * Think about adding something for true one-way messages, but an empty
>>> reply frame is probably sufficient, since that still allows reporting
>>> errors
>>> if needed (or desired).
>>> 
>> 
