Thank you, Ben, for the response. Sorry for the missing data.
- Version: Thrift 0.9.1
- OS: Windows 8
- Language: C++
As additional data: when I call interrupt() to stop the server (TThreadedServer), the dead threads cause a deadlock. When the server, on its shutdown path, waits for all task threads to finish, it blocks forever, because the dead tasks can never erase themselves from the task set and never notify the main server thread.

Thanks,
Rodolfo

> -----Original Message-----
> From: Ben Craig [mailto:[email protected]]
> Sent: Wednesday, June 04, 2014 10:18 AM
> To: [email protected]
> Subject: Re: Thread dies in TThreadedServer task
>
> Which OS? Broken-pipe signal handling differs wildly between Windows,
> Mac, Linux, BSD, etc.
> Which version of Thrift?
>
> If you are using Mac and Thrift 0.9.1, then you may want to look at
> THRIFT-2019 and its associated .patch.
> https://issues.apache.org/jira/browse/THRIFT-2019
>
> "Kohn, Rodolfo" <[email protected]> wrote on 06/03/2014 05:44:07 PM:
>
> > From: "Kohn, Rodolfo" <[email protected]>
> > To: "[email protected]" <[email protected]>
> > Date: 06/03/2014 05:45 PM
> > Subject: Thread dies in TThreadedServer task
> >
> > Hello,
> > I'm working with C++ Thrift, using TThreadedServer with
> > TBinaryProtocolFactory and TBufferedTransportFactory.
> > When a new connection is accepted, a task is created and
> > executed as a runnable.
> > When the connection with the client is broken for some reason (I don't
> > know whether it received a RST, but I suppose so), the following line
> > makes the thread die:
> >
> > !input_->getTransport()->peek()
> >
> > I found that the problem occurs inside TBufferTransports.h, in method
> > bool peek(), at the following line:
> >
> > setReadBuffer(rBuf_.get(), transport_->read(rBuf_.get(), rBufSize_));
> >
> > I suppose this is because the thread is receiving a broken-pipe signal
> > that is not properly handled, but I would like to ask the list whether
> > this makes sense and how it could be solved.
> >
> > Thanks,
> > Rodolfo
