I tried for a while (in Ruby) to make some thread-safe client wrappers, but found that maintaining the synchronization logic (and other logic as well; for example, at the time there were some issues with clearing out buffers after a failed call) was more trouble than it was worth. I ended up just opting to create a new connection on every call, and for my purposes the slight performance hit was well worth the reliability gain.
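A connection-per-call wrapper along those lines might look like the sketch below. `make_client` and `EchoClient` are hypothetical stand-ins for your Thrift-generated client and its transport setup; the point is just that nothing is shared between threads, so no locking is needed.

```python
import threading

class EchoClient:
    # Hypothetical stand-in for a Thrift-generated client; a real one
    # would wrap a socket/transport/protocol stack.
    def echo(self, msg):
        return msg
    def close(self):
        pass

def make_client():
    # In real code this would open a fresh transport for each call.
    return EchoClient()

def call_with_fresh_client(method, *args):
    # Open a new connection, make exactly one call, and always close
    # the connection afterwards -- no shared state across threads.
    client = make_client()
    try:
        return getattr(client, method)(*args)
    finally:
        client.close()

results = []
threads = [threading.Thread(target=lambda i=i: results.append(
    call_with_fresh_client("echo", "msg-%d" % i))) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```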

If you use just a single client, threadB will have to wait for threadA to finish, whereas using two clients will allow them to run in parallel, which should be a performance boost. If you want to get around this with just a single client, you can try the Twisted support in the Python client (which uses the reactor pattern), or make the calls to send_xxx and recv_xxx yourself (synchronizing the calls, of course).
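If you do share one client, the minimal synchronization is a lock around every call, so the send/recv pairs from different threads never interleave on the transport. A sketch, with a hypothetical `DummyClient` standing in for the generated one:

```python
import threading

class DummyClient:
    # Hypothetical stand-in; a real Thrift client's send_xxx/recv_xxx
    # pairs must not interleave across threads on one transport.
    def add(self, a, b):
        return a + b

class LockedClient:
    """Serializes all calls on a single underlying client."""
    def __init__(self, client):
        self._client = client
        self._lock = threading.Lock()

    def call(self, method, *args):
        # Only one thread at a time may use the shared transport, so
        # threadB simply blocks until threadA's call completes.
        with self._lock:
            return getattr(self._client, method)(*args)

shared = LockedClient(DummyClient())
out = []
threads = [threading.Thread(target=lambda i=i: out.append(shared.call("add", i, 1)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This is safe but serial: you lose the parallelism that two independent clients would give you.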

-Ben

On May 22, 2009, at 10:05 AM, Oscar wrote:

Hi all,

In our project we need to call RPCs from different threads.

//pseudo code

void init()
{
    // initialize the transport and protocol
}

void threadA()
{
    // call RPC a using the initialized client
}

void threadB()
{
    // call RPC b using the initialized client
}


The above code has a race condition.

My question is: lock the client directly or make a client socket pool?
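The pool option can be sketched with a blocking queue of pre-built clients; a thread borrows one, makes its call, and hands it back (again with a hypothetical `DummyClient` in place of the Thrift-generated one):

```python
import queue
import threading

class DummyClient:
    # Hypothetical stand-in for a Thrift-generated client.
    def ping(self):
        return "pong"

class ClientPool:
    """A fixed-size pool: borrow a client, call it, return it."""
    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def call(self, method, *args):
        client = self._pool.get()   # blocks while all clients are busy
        try:
            return getattr(client, method)(*args)
        finally:
            self._pool.put(client)  # hand the client back to the pool

pool = ClientPool(DummyClient, size=2)
results = []
threads = [threading.Thread(target=lambda: results.append(pool.call("ping")))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Unlike a single locked client, a pool of size N lets up to N calls proceed in parallel.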

What's your opinion?
