To your other question, there are no deep-seated reasons for not doing this. 
It has been talked about often, but has yet to happen.

I think the main reasons are simply that it is quite a complex implementation, 
and relatively few people have a real need for it (most client applications 
typically *need* to block until they get a response to do something with).

-----Original Message-----
From: Mayan Moudgill [mailto:[email protected]] 
Sent: Wednesday, March 17, 2010 10:39 AM
To: [email protected]; [email protected]
Subject: Re: async c++ client / THRIFT-1

I'm a little curious why you would expect to get responses out of order. 
If you're using TCP/IP (or some other inherently FIFO transport) + a 
FIFO server, then RPC responses will appear in the order in which the 
requests were sent. Why would you need to reorder cseqids? Are you 
planning on using UDP or something else? Or do you want a 
non-order-preserving server? Or are you communicating with multiple 
servers? In the last case, I'm assuming you'll have a different socket 
for each server - but since the communication with each server will be 
FIFO, you still won't need to reorder cseqids.
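
To make that concrete, here is a minimal sketch (the class and names are 
mine, not anything in Thrift) of how little bookkeeping a FIFO connection 
needs: outstanding cseqids go into a queue, and each incoming rseqid is 
checked against the front of that queue.

    // Sketch only: with a FIFO transport, matching responses to requests
    // is just a queue of the cseqids that are still in flight.
    #include <cstdint>
    #include <deque>
    #include <stdexcept>

    class PipelinedCalls {
      std::deque<int32_t> outstanding_;  // cseqids in the order they were sent
      int32_t next_ = 0;
    public:
      int32_t sent() {                   // call this right after send_foo()
        outstanding_.push_back(next_);
        return next_++;
      }
      void received(int32_t rseqid) {    // call this with the rseqid just read
        if (outstanding_.empty() || outstanding_.front() != rseqid)
          throw std::runtime_error("unexpected or out-of-order response");
        outstanding_.pop_front();        // FIFO transport => always the front
      }
    };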

As for recv being blocking:
1. You can use an O_NONBLOCK socket (maybe not in TSocket.py, but 
cthrift uses O_NONBLOCK sockets).
2. But that doesn't solve the problem: if you're talking to multiple 
sockets and the size of a response is > PIPE_BUF, then it is possible for 
an RPC response to be split up into multiple packets. So you'll read the 
data from the first packet, then get blocked waiting for the second 
packet to come in - UNLESS you can set things up so that when you would 
block, you switch to a different context (thread/coroutine) to process 
any other data (see the sketch below).
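
For what it's worth, here is a rough sketch of point 1 (mine, not 
TSocket's actual code): put the fd into O_NONBLOCK mode and read whatever 
has arrived, handing control back to the caller instead of blocking when 
the rest of the response isn't there yet.

    // Sketch only, not Thrift's TSocket.
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cerrno>
    #include <vector>

    // Switch an already-connected socket to non-blocking mode.
    bool setNonBlocking(int fd) {
      int flags = fcntl(fd, F_GETFL, 0);
      return flags != -1 && fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
    }

    // Append whatever is readable right now to 'buf'. Returns true if the
    // caller should try again later (EAGAIN/EWOULDBLOCK), false on EOF or
    // a real error. Never blocks.
    bool readAvailable(int fd, std::vector<char>& buf) {
      char chunk[4096];
      for (;;) {
        ssize_t n = recv(fd, chunk, sizeof(chunk), 0);
        if (n > 0) { buf.insert(buf.end(), chunk, chunk + n); continue; }
        if (n == 0) return false;                      // peer closed
        if (errno == EAGAIN || errno == EWOULDBLOCK)
          return true;                                 // nothing more yet
        return false;                                  // error
      }
    }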

I can't speak for the Thrift team, but I don't think splitting a client 
call into a send part and a receive part is enough to make a fully 
non-blocking client, and they may be trying to figure out how to get there.

I'm working on some ideas for non-coroutine/context-switch based, 
partial-progress receivers [they involve treating the RPC response 
specification as a CFG {which it is}, deriving a table-driven parser 
from it, and then saving the intermediate parse state whenever a read 
would block] - these are for the server, but it should be trivial to 
add them to the client. However, I will only be implementing them in 
cthrift, not Thrift.
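
To illustrate just the "save the intermediate parse state" part, here is 
a toy sketch (mine, not cthrift's; it parses a simple 4-byte 
length-prefixed frame rather than driving a table built from the real 
response CFG): the parser keeps its position in explicit state, so a read 
that runs out of bytes simply returns and is resumed later.

    // Resumable parsing sketch: the struct itself is the saved parse state.
    #include <cstdint>
    #include <cstddef>
    #include <vector>

    struct FrameParser {
      uint32_t length = 0;     // decoded frame length so far
      size_t   lenBytes = 0;   // how many of the 4 length bytes were consumed
      std::vector<uint8_t> payload;

      bool complete() const { return lenBytes == 4 && payload.size() == length; }

      // Consume whatever bytes are available; returns how many were used.
      // If 'n' runs out mid-frame, the next call resumes exactly where
      // this one stopped.
      size_t feed(const uint8_t* data, size_t n) {
        size_t used = 0;
        while (used < n && lenBytes < 4) {            // resume the length prefix
          length = (length << 8) | data[used++];
          ++lenBytes;
        }
        while (used < n && payload.size() < length)   // resume the body
          payload.push_back(data[used++]);
        return used;
      }
    };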

Mayan

Daniel Kluesing wrote:
> Well, what I'm really curious about is a fully non-blocking c++ client.
> What I'm doing now is sending the send_'s, setting cseqids myself, and
> then sorting out the responses based on the rseqid. That's ok, but the
> recv call in TSocket is still a blocking call. Since it's been an open
> issue for so long, I'm curious if there are others working on it, or
> deep-seated reasons why it's not been done that I just don't know about.
> 
> -----Original Message-----
> From: Mayan Moudgill [mailto:[email protected]] 
> Sent: Tuesday, March 16, 2010 7:41 PM
> To: [email protected]; Daniel Kluesing
> Subject: Re: async c++ client / THRIFT-1
> 
> I'm guessing you want the ability to separate the calls and receives on 
> a client; is that correct?
> 
> Thanks
> 
> Mayan
> 
> Daniel Kluesing wrote:
> 
> 
>>I wanted/needed something approximating async requests on the client - 
>>directly call send_ stuff, do unrelated stuff that might fail, directly call 
>>recv_ stuff - so I hacked a bit more support for cseqid and rseqid into the 
>>c++ generator on 0.2.0. I did this somewhat quickly, and while it works for 
>>my purposes, it isn't the 'right' thing to do. Is there any news on 
>>http://issues.apache.org/jira/browse/THRIFT-1, or is anyone working on a 
>>proper nonblocking c++ client? (I saw the spotify ASIO patch)
>>
>>(and if anyone out there wants similar functionality, I can port my changes 
>>to trunk and make a patch; I didn't bother since this has lain dormant for 
>>quite a while, so I'm guessing interest is low)
>>
