[ https://issues.apache.org/jira/browse/THRIFT-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386745#comment-16386745 ]

Rocco Corsi commented on THRIFT-4488:
-------------------------------------

This issue was discussed some time ago here:
https://issues.apache.org/jira/browse/THRIFT-904

The discussion in THRIFT-904 mentions that TFramed might put two packets on 
the wire per message (because two separate writes are used: one for the frame 
length, one for the payload), which would lead one to conclude that a 
TBuffered message is written with a single call.  I don't know whether this 
applies to both C# and C++, but I assume both now implement the same behavior 
and that the TFramed two-write issue is fixed.
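
To make the difference concrete, here is a minimal sketch (plain POSIX 
sockets; the buffer layout and function names are mine, not Thrift's) of a 
two-write flush versus a coalesced single-write flush.  With TCP_NODELAY set, 
each write() can leave the kernel as its own segment, which is the two-packet 
behavior described in THRIFT-904.  Error handling is omitted.

{code:cpp}
#include <cstdint>
#include <cstring>
#include <vector>
#include <arpa/inet.h>  // htonl
#include <unistd.h>     // write

// Two writes: the 4-byte frame length and the payload can leave the
// kernel as two separate packets when Nagle is disabled.
void flushTwoWrites(int fd, const uint8_t* payload, uint32_t len) {
    uint32_t frameLen = htonl(len);
    write(fd, &frameLen, sizeof(frameLen));  // possible packet 1
    write(fd, payload, len);                 // possible packet 2
}

// One write: header and payload are copied into a single buffer and
// handed to the kernel together, so they share a packet (up to MSS).
void flushOneWrite(int fd, const uint8_t* payload, uint32_t len) {
    std::vector<uint8_t> buf(sizeof(uint32_t) + len);
    uint32_t frameLen = htonl(len);
    std::memcpy(buf.data(), &frameLen, sizeof(frameLen));
    std::memcpy(buf.data() + sizeof(frameLen), payload, len);
    write(fd, buf.data(), buf.size());
}
{code}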

I am wondering whether something in our OS or application could be causing 
these packets to be broken up into many very small packets (usually one data 
byte per packet), or whether this is a real Thrift issue.
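
One way to narrow that down is to check whether Nagle is actually off on the 
connected socket.  A minimal sketch using the standard POSIX getsockopt call 
(the helper name is mine):

{code:cpp}
#include <cstdio>
#include <netinet/in.h>   // IPPROTO_TCP
#include <netinet/tcp.h>  // TCP_NODELAY
#include <sys/socket.h>   // getsockopt

// Returns true when TCP_NODELAY is set, i.e. Nagle's algorithm is disabled.
bool isNagleDisabled(int fd) {
    int flag = 0;
    socklen_t len = sizeof(flag);
    if (getsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, &len) != 0) {
        std::perror("getsockopt(TCP_NODELAY)");
        return false;
    }
    return flag != 0;
}
{code}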

> Performance impact of Nagle disabled
> ------------------------------------
>
>                 Key: THRIFT-4488
>                 URL: https://issues.apache.org/jira/browse/THRIFT-4488
>             Project: Thrift
>          Issue Type: Bug
>          Components: C++ - Library
>    Affects Versions: 0.10.0
>            Reporter: Rocco Corsi
>            Priority: Major
>
> Running a SUSE 12 SP2 x86_64 C++ Thrift server that uses OpenSSL.  Our 
> Thrift service uses oneway methods exclusively: a Java client sends requests 
> as oneway calls and the C++ server responds with oneway method calls too.
> Noticed that the packets from the Java client's API method calls are mostly 
> contained within one or two packets, but the C++ server's responses are being 
> split over many packets, often one data byte per packet.  This is not a good 
> use of the SSL protocol: under high load, the extra packets can exhaust the 
> random data cache and stall the SSL library.
> As an experiment, re-enabled Nagle's algorithm on the C++ Thrift server 
> (modified TServerSocket.cpp) and ran tests at various load levels with 
> various numbers of Java clients.  Comparing results with Nagle disabled and 
> enabled, the performance difference ranged from -10% to +40%, with most of 
> the results on the plus side.
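> The edit was along these lines (a sketch of the kind of change described, 
> not the actual TServerSocket.cpp diff):
> {code:cpp}
> #include <netinet/in.h>   // IPPROTO_TCP
> #include <netinet/tcp.h>  // TCP_NODELAY
> #include <sys/socket.h>   // setsockopt
>
> // Re-enable Nagle's algorithm on an accepted client socket by clearing
> // TCP_NODELAY (the server code sets it to 1 by default).
> void reenableNagle(int clientFd) {
>     int flag = 0;  // 0 = Nagle on, 1 = Nagle off
>     setsockopt(clientFd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
> }
> {code}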
> Additionally, we are working with a Wireshark developer to decode Thrift 
> traffic, and the large number of packets that need to be reassembled is 
> making the dissector very difficult to program.  Hopefully he can fix that, 
> but it seems very hard from what he tells me.
> Our C++ Thrift server is based on TBufferedTransport and TBinaryProtocol.
> Briefly tried changing to TFramed, but that didn't appear to make any 
> difference, and the client no longer worked (we did try to change the client 
> to match the server; maybe we did something wrong).
> Is there a problem with the way we are creating our C++ Thrift server 
> (TBuffered + TBinary)?  See further below for more complete info.  Shouldn't 
> TBufferedTransport send complete API messages and prevent the large number 
> of packets?  Is TBinaryProtocol the problem?
> Would it be asking too much to give the Thrift server user the choice of 
> whether to enable Nagle during server creation?  (A sketch of such an option 
> follows these questions.)
> or
> Is there a problem with TBufferedTransport or TBinaryProtocol, or is there 
> something else we are doing wrong?
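> For example, something like this would do (a hypothetical sketch; setNoDelay 
> on the server socket is an imagined option, not an existing Thrift API):
> {code:cpp}
> // Hypothetical: let the caller decide during server creation whether
> // accepted connections disable Nagle (TCP_NODELAY).
> shared_ptr<TServerSocket> socket(new TServerSocket(port));
> socket->setNoDelay(false);  // imagined setter: keep Nagle enabled
> {code}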
> Thanks for your time.
> This is how we create our C++ Thrift server:
> {code:cpp}
>     // Factories for our service handler and processor (the toNapp/Napp
>     // names are from our application).
>     shared_ptr<toNappIfFactory> handlerFactory(new NappServiceHandlerFactory());
>     shared_ptr<TProcessorFactory> processorFactory(new toNappProcessorFactory(handlerFactory));
>     // Buffered transport and binary protocol.
>     shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
>     shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());
>     // Worker thread pool.
>     shared_ptr<ThreadManager> threadManager(
>         ThreadManager::newSimpleThreadManager(NUMBER_OF_SERVER_THREADS));
>     shared_ptr<PlatformThreadFactory> threadFactory(new PlatformThreadFactory());
>     threadManager->threadFactory(threadFactory);
>     threadManager->start();
>     // SSL server socket, built by our own helper.
>     shared_ptr<TServerSocket> socket(nappServerSocketBuilder->buildSSLServerSocket(
>         nappServerSocketBuilder->getPortNumber(), s_sslConfig));
>     shared_ptr<TServerTransport> serverTransport(socket);
>     shared_ptr<TServer> server(new TThreadPoolServer(processorFactory,
>                                                      serverTransport,
>                                                      transportFactory,
>                                                      protocolFactory,
>                                                      threadManager));
> {code}
>   


