[ https://issues.apache.org/jira/browse/THRIFT-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14491624#comment-14491624 ]

Randy Abernethy commented on THRIFT-3084:
-----------------------------------------

Hey James,

Thanks for the thorough response. I +1 your implementation; it is a big 
improvement over what we have now, so take this for what it is: just a 
collegial chat.

My real focus here is that we do not want many servers that do the same or 
very similar things. Redundancy adds little value, confuses users, and creates 
an additional maintenance burden on the maintainers (a topic close to my 
heart). Some of the Apache Thrift languages have a crazy number of (more or 
less redundant) servers; people write blog posts trying to compare them. Your 
patch was a lightning rod for removing code (TThreadedServer), which is the 
best thing you can do to a code base. Thanks for that!

If you subscribe to the above minimum-server axiom (which perhaps not all do), 
then a server should be in the code base only if it brings something genuinely 
unique to the table. The nonblocking server is event driven, the thread pool 
server is thread-per-client, an I/O completion port server is based on 
completion ports, an async server returns packets out of order, and so on. So 
do we really want another synchronous, blocking, thread-per-client server? 
Shouldn’t we just have one good one? And if we want just one good one, is 
there a need for a shared framework base class that bakes a particular 
processing model into its implementation? The last thing I want to see is 
additional micro-variations on the thread-per-client theme. I also don’t want 
people to think they have to derive from TServerFramework (it sounds pretty 
official). A server should implement TServer, and that is really the only 
requirement.
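
To make that concrete, here is a minimal sketch of what “implement TServer 
and nothing else” might look like. This is illustrative only: it assumes the 
0.9.x-era C++ library API (boost::shared_ptr, the four-argument TServer 
constructor, and the protected factory members on TServer), and TTinyServer 
is a hypothetical name, not a class in the tree:

    #include <thrift/server/TServer.h>
    #include <thrift/transport/TServerTransport.h>
    #include <thrift/transport/TTransportException.h>

    using namespace apache::thrift;
    using namespace apache::thrift::transport;

    // Hypothetical minimal server: TServer is its only base class.
    class TTinyServer : public server::TServer {
    public:
      TTinyServer(const boost::shared_ptr<TProcessor>& processor,
                  const boost::shared_ptr<TServerTransport>& serverTransport,
                  const boost::shared_ptr<TTransportFactory>& transportFactory,
                  const boost::shared_ptr<protocol::TProtocolFactory>& protocolFactory)
        : TServer(processor, serverTransport, transportFactory, protocolFactory) {}

      // Synchronous, blocking, one client at a time.
      void serve() {
        serverTransport_->listen();
        for (;;) {
          boost::shared_ptr<TTransport> client = serverTransport_->accept();
          boost::shared_ptr<TTransport> transport =
              inputTransportFactory_->getTransport(client);
          boost::shared_ptr<protocol::TProtocol> protocol =
              inputProtocolFactory_->getProtocol(transport);
          try {
            // NULL connection context; event handlers omitted in this sketch.
            while (getProcessor(protocol, protocol, client)
                       ->process(protocol, protocol, NULL)) {
            }
          } catch (const TTransportException&) {
            // Client went away; accept the next one.
          }
          transport->close();
        }
      }
    };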

So what about TSimpleServer? I see it as a special case: its one goal is to 
be simple. At the end of the day, if it is only marginally different from 
TThreadPoolServer, maybe we should delete it too. It seems silly to implement 
TSimpleServer in terms of TThreadPoolServer, yet with the proposed framework 
classes we sort of are. It is not adding unique value anymore, and it isn’t 
any simpler than TThreadPoolServer. That leaves us with a framework class 
with a single derivative, TThreadPoolServer, and takes me back to wanting to 
implement the throttling directly in TThreadPoolServer.

I won’t bug you any more on this; as I mentioned, I am +1 on the patch. It is 
great work, and though I am lobbying to package it a little differently, 
diverse views are a natural part of a vibrant community, and it is your patch.

Best,
Randy

> C++ add concurrent client limit to threaded servers
> ---------------------------------------------------
>
>                 Key: THRIFT-3084
>                 URL: https://issues.apache.org/jira/browse/THRIFT-3084
>             Project: Thrift
>          Issue Type: Improvement
>          Components: C++ - Library
>    Affects Versions: 0.8, 0.9, 0.9.1, 0.9.2
>            Reporter: James E. King, III
>         Attachments: THRIFT-3084-on-3083.v2.patch
>
>
> The TThreadedServer and TThreadPoolServer do not impose limits on the number 
> of simultaneous connections. This is a problem in production, where 
> misbehaving clients can drive a server to consume too many file descriptors 
> or spawn too many threads.
> With TThreadPoolServer one can limit the number of worker threads; however, 
> the server still consumes one extra file descriptor, because the serve() 
> routine does not block until after it has accepted threadManager-size + 1 
> sockets.
> With TThreadedServer there was no built-in way to throttle at all.
> Given that the serve() loop is the only code capable of adding a client, the 
> solution is to add a Monitor to TServerFramework, check the number of 
> concurrent clients immediately before calling TServerTransport::accept() to 
> get another client, and track the number of clients that are still alive 
> (i.e. whose smart pointer has not yet been destroyed).
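
A rough sketch of the throttling mechanism described above, for illustration 
only (this is not the attached patch; ClientThrottle is a hypothetical helper 
name, while Monitor and Synchronized are the real types from 
thrift/concurrency/Monitor.h in the 0.9.x-era C++ library):

    #include <thrift/concurrency/Monitor.h>
    #include <stdint.h>

    using apache::thrift::concurrency::Monitor;
    using apache::thrift::concurrency::Synchronized;

    // Hypothetical helper: a Monitor guards a live-client counter so that
    // serve() blocks before accept() once the limit is reached.
    class ClientThrottle {
    public:
      explicit ClientThrottle(int64_t limit) : limit_(limit), clients_(0) {}

      // Called immediately before TServerTransport::accept(); blocks while
      // the number of live clients is at the limit.
      void waitForSlot() {
        Synchronized sync(monitor_);
        while (clients_ >= limit_) {
          monitor_.wait();  // mutex is released/reacquired around the wait
        }
        ++clients_;
      }

      // Called when a client's smart pointer is destroyed (connection gone).
      void releaseSlot() {
        Synchronized sync(monitor_);
        --clients_;
        monitor_.notify();  // wake serve() if it is waiting for a slot
      }

    private:
      Monitor monitor_;
      int64_t limit_;
      int64_t clients_;
    };

In this sketch, serve() would call waitForSlot() just before each 
TServerTransport::accept(), and the per-connection cleanup path (for example, 
a custom smart-pointer deleter) would call releaseSlot().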



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
