[ https://issues.apache.org/jira/browse/THRIFT-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387838#comment-16387838 ]

James E. King, III commented on THRIFT-4488:
--------------------------------------------

I am not aware of an issue in this area; it's possible we have a flush() in a 
bad place, though I would expect that to have been caught by someone by now, 
as it would represent a pretty bad performance hit.

As for re-enabling Nagle, I looked over the TServerSocket class in C++ and I 
don't see an easy way to do this without modifying the library in some way, 
because the TServerSocket class is mostly non-virtual.  For example, if you 
made acceptImpl virtual, you could override it, call the base method, and 
then set Nagle there.
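
Roughly, the override might look like the sketch below. This is not working code against the current library (it assumes acceptImpl could be overridden as described above, and the exact shared_ptr flavor and signatures vary by Thrift version); it only shows where the Nagle change would go. TSocket::setNoDelay(false) clears TCP_NODELAY on the accepted connection, i.e. turns Nagle back on.

{code:cpp}
#include <boost/shared_ptr.hpp>
#include <thrift/transport/TServerSocket.h>
#include <thrift/transport/TSocket.h>

using namespace apache::thrift::transport;

// Hypothetical subclass: this only compiles if acceptImpl() can be
// overridden, which is exactly the library change being discussed.
// boost::shared_ptr matches 0.10.x; newer releases alias a different
// shared_ptr, so adjust the return type to the version in use.
class TNagleServerSocket : public TServerSocket {
public:
  using TServerSocket::TServerSocket;

protected:
  boost::shared_ptr<TTransport> acceptImpl() /* override */ {
    boost::shared_ptr<TTransport> client = TServerSocket::acceptImpl();
    // Clear TCP_NODELAY on the accepted connection, i.e. turn Nagle back on.
    // TSSLSocket derives from TSocket, so this also covers the SSL case.
    if (TSocket* sock = dynamic_cast<TSocket*>(client.get())) {
      sock->setNoDelay(false);
    }
    return client;
  }
};
{code}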

So the use case is that you are using Thrift in a oneway-only mode and you 
want to re-enable Nagle to cut the per-packet overhead?  It would be possible 
to add an option for this to the library; we already set a number of other 
socket options.
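
For context, such an option would come down to making the per-connection TCP_NODELAY call conditional. A minimal sketch of the underlying socket call follows; the helper name is illustrative only, not an existing Thrift API.

{code:cpp}
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Illustrative helper (not an existing Thrift API): TCP_NODELAY = 1 turns
// Nagle off, which is what the library does on each connection today;
// TCP_NODELAY = 0 leaves Nagle enabled, which is what the reporter's
// experiment restored.  Error handling omitted for brevity.
void configureNagle(int fd, bool nagleEnabled) {
  int noDelay = nagleEnabled ? 0 : 1;
  setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &noDelay, sizeof(noDelay));
}
{code}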

> Performance impact of Nagle disabled
> ------------------------------------
>
>                 Key: THRIFT-4488
>                 URL: https://issues.apache.org/jira/browse/THRIFT-4488
>             Project: Thrift
>          Issue Type: Bug
>          Components: C++ - Library
>    Affects Versions: 0.10.0
>            Reporter: Rocco Corsi
>            Priority: Major
>
> Running a SUSE 12 SP2 x86_64 C++ Thrift server that is using OpenSSL.  Our 
> Thrift service uses oneway methods exclusively: the Java client sends its 
> requests as oneway calls and the C++ server responds with oneway calls too.
> We noticed that the Java client's API calls were mostly contained within one 
> or two packets, but the C++ server's responses are being split over many 
> packets, often with 1 data byte per packet.  This is not a good use of the 
> SSL protocol.  Under high load, the extra packets can exhaust the random 
> data cache and stall the SSL library.
> As an experiment, we re-enabled Nagle's algorithm on the C++ Thrift server 
> (modified TServerSocket.cpp) and ran tests at various load levels with 
> various numbers of Java clients.  Comparing the results with Nagle disabled 
> and enabled, the difference ranged from -10% to +40%, with most of the 
> results on the plus side.
> Additionally, we are working with a Wireshark developer to decode Thrift 
> traffic, and the large number of packets that need to be reassembled is 
> causing huge headaches in programming the dissector.  Hopefully he can fix 
> that, but it seems very difficult from what he tells me.
> Our C++ Thrift server is based on TBufferedTransport and TBinaryProtocol.
> We briefly tried changing to TFramed, but that didn't appear to make any 
> difference, and the client no longer worked (we did try to change it to 
> match the server; maybe we did something wrong).
> Is there a problem with the way we are creating our C++ Thrift server 
> (TBuffered + TBinary, see further below for more complete info)?  Shouldn't 
> the TBufferedTransport send complete API messages and prevent the large 
> number of packets?  Is TBinaryProtocol the problem?
> Would it be asking too much to give the Thrift server user the choice of 
> whether to enable Nagle during server creation?  Or is there a problem with 
> TBufferedTransport or TBinaryProtocol, or something else we are doing wrong?
> Thanks for your time.
>  This is how we create our C++ Thrift server
> {code:cpp}
>     shared_ptr<toNappIfFactory> handlerFactory(new NappServiceHandlerFactory());
>     shared_ptr<TProcessorFactory> processorFactory(new toNappProcessorFactory(handlerFactory));
>     shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
>     shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());
>     shared_ptr<ThreadManager> threadManager(ThreadManager::newSimpleThreadManager(NUMBER_OF_SERVER_THREADS));
>     shared_ptr<PlatformThreadFactory> threadFactory(new PlatformThreadFactory());
>     threadManager->threadFactory(threadFactory);
>     threadManager->start();
>     shared_ptr<TServerSocket> socket(nappServerSocketBuilder->buildSSLServerSocket(nappServerSocketBuilder->getPortNumber(), s_sslConfig));
>     shared_ptr<TServerTransport> serverTransport(socket);
>     shared_ptr<TServer> server(new TThreadPoolServer(processorFactory,
>                                                      serverTransport,
>                                                      transportFactory,
>                                                      protocolFactory,
>                                                      threadManager));
> {code}
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
