On Wed, 2009-04-29 at 08:50 +1000, mcto...@gmail.com wrote:
> A few follow-ups on this.  It's only now that I have realised that the 
> Tomcat APR handler in JBoss is subtly different from the one in 
> Tomcat... i.e. that whole piece of Tomcat has been forked into JBossWeb, 
> and they're starting to diverge. Be that as it may, my comments cover 
> the current design as it exists in both of them at trunk level.

The accept code is the same, and that part of the poller code is
equivalent. JBoss Web uses my new connector code (which I was working on
in the comet branch here before moving it).

> Why is poller performance bad in Windows? Is that a consequence of the 
> way the APR interfaces to WinSock? I'm guessing that APR uses a 
> Unix-style approach to polling the sockets. Or is it to do with the 
> performance of the poll inside Windows itself?

I've been told it uses select, which works on only 64 sockets at a time,
so if you have a large poller, the poll call performance degrades.
Mladen is really the only one who can answer Windows questions.
(Personally, I think it is a lost cause for really high scalability.)

> 
> Be that as it may, at our end we still have to make Windows work as well 
> as possible, so if there are simple tweaks we can do to cause 
> performance to degrade more gracefully under conditions of peaky load, 
> can we not discuss it?
> 
> Also I couldn't see where it specifically sets a high number of pollers 
> for Windows? (And in which fork?? :-)

Both. It's pretty easy: there are Windows-specific flags in the code.
Look there. If the poller size is larger than 1024, it creates multiple
pollers, each holding 1024 sockets (my testing showed this to be a magic
value: not too slow to poll, yet large enough).
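
Schematically, the splitting looks something like the sketch below. This
is just an illustration of the idea, not the actual AprEndpoint code; the
PollerGroup/SubPoller names are made up, and how sockets are spread over
the sub-pollers here is only one possible policy:

    // Rough sketch only: split one big logical poller into several
    // fixed-size pollers so no single select()/poll() set gets huge.
    import java.util.ArrayList;
    import java.util.List;

    class PollerGroup {
        private static final int MAX_PER_POLLER = 1024; // empirically a good size

        private final List<SubPoller> pollers = new ArrayList<>();

        PollerGroup(int pollerSize) {
            // If the configured poller size exceeds 1024, create several
            // sub-pollers instead of one large one (assumes pollerSize > 0).
            int count = (pollerSize + MAX_PER_POLLER - 1) / MAX_PER_POLLER;
            for (int i = 0; i < count; i++) {
                int size = Math.min(MAX_PER_POLLER, pollerSize - i * MAX_PER_POLLER);
                pollers.add(new SubPoller(size));
            }
        }

        void add(long socket) {
            // Pick the sub-poller with the most free slots so each poll call
            // only has to scan a bounded socket set.
            SubPoller target = pollers.get(0);
            for (SubPoller p : pollers) {
                if (p.free() > target.free()) {
                    target = p;
                }
            }
            target.add(socket);
        }
    }

    class SubPoller {
        private final int capacity;
        private int used;

        SubPoller(int capacity) { this.capacity = capacity; }
        int free() { return capacity - used; }
        void add(long socket) { used++; /* register socket with the native pollset */ }
    }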

> And could you elaborate please on that last statement? "I'm not sure 
> this whole thing will still be relevant then".

I think APR is by far the best technology for implementing Servlet 3.0,
but I'm not so sure beyond that. The Vista+ enhancements need a new
major version of APR, so it will take some time (too much, IMO).

> There seem to be two distinct aspects to this deferAccept thing. One is 
> what happens with the socket options (and as I understand it this 
> option is only supported in Linux 2.6 anyway). The other, which is in 
> the AprEndpoint code, concerns the processing of the new connection. 
> Just on that note, I have a question about this bit of code:
> 
> if (!deferAccept) {
>     if (setSocketOptions(socket)) {
>         getPoller().add(socket);
>     } else {
>         // Close socket and pool
>         Socket.destroy(socket);
>         socket = 0;
>     }
> } else {
>     // Process the request from this socket
>     if (!setSocketOptions(socket)
>             || handler.process(socket) == Handler.SocketState.CLOSED) {
>         // Close socket and pool
>         Socket.destroy(socket);
>         socket = 0;
>     }
> }
> 
> 
> The default value of deferAccept is true, but on Windows this option is 
> not supported in the TCP/IP stack, so there is code that forces the 
> flag to false in that case. The socket is then added straight to the 
> poller, and I'm happy with that approach anyway. But the act of getting 
> it across to the poller, which should be a relatively quick 
> operation(?), requires the use of a worker thread from the common pool. 
> This gets back to my original point. If the new connection could be 
> pushed across to the poller asap (without handling the request), and 
> without having to rely on the worker threads, then surely this is going 
> to degrade more gracefully than the current situation, where a busy 
> server is going to leave things in the backlog for quite some time, 
> which is a problem with a relatively small backlog.

Yes, but (as I said in my previous email):
- Poller.add synchronizes. This is bad for the Acceptor thread, but it
might be ok.
- Setting the socket options also does the SSL handshake, which is not
doable in the Acceptor thread.

So (as I was also saying) if there is no SSL and deferAccept is false,
it would be possible to add a configuration option to have the Acceptor
thread set the options and put the socket in the poller without using a
thread from the pool. That is, if you test it and find it works better
for you.
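
Roughly, that option could look like the sketch below. This is purely
hypothetical: the useAcceptorToPoll flag and the stub methods are
invented for illustration and stand in for the real endpoint, poller and
JNI calls; it is not actual Tomcat or JBoss Web code:

    // Hypothetical sketch of an "accept -> poller on the Acceptor thread" option.
    abstract class AcceptorSketch {
        volatile boolean running = true;
        boolean sslEnabled;        // assumed to mirror SSLEnabled on the endpoint
        boolean deferAccept;
        boolean useAcceptorToPoll; // the proposed new configuration option

        void acceptLoop() throws Exception {
            while (running) {
                long socket = accept();
                if (!sslEnabled && !deferAccept && useAcceptorToPoll) {
                    // Cheap path: set the options and hand the socket straight
                    // to the poller on the Acceptor thread, no worker involved.
                    // (Poller.add still synchronizes, which is the remaining cost.)
                    if (setSocketOptions(socket)) {
                        addToPoller(socket);
                    } else {
                        destroy(socket);
                    }
                } else {
                    // Current behaviour: dispatch to a worker thread, which sets
                    // the options, does the SSL handshake if needed, and may
                    // process the first request when deferAccept is true.
                    dispatchToWorker(socket);
                }
            }
        }

        // Stubs standing in for the real endpoint/poller/JNI operations.
        abstract long accept() throws Exception;
        abstract boolean setSocketOptions(long socket);
        abstract void addToPoller(long socket);
        abstract void dispatchToWorker(long socket);
        abstract void destroy(long socket);
    }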

Also, you are supposed to have a large maxThreads in the pool if you
want to scale. Otherwise, although it might work well 99% of the time,
since thread use is normally limited by the poller handling keepalive,
it's very easy to DoS your server. BTW, threads are rather cheap (on
Linux, at least ;) ).

> In the Tomcat branch, there is code to have multiple acceptor threads, 
> with a remark that it doesn't seem to work that well if you do. So that 
> being the case, why not push it straight across to the poller in the 
> context of the acceptor thread?

That stuff wasn't really implemented. I don't think it's such a good
idea to be too efficient at accepting and polling, if all it's going to
do is blow up the app server (which would probably be even more
challenged by the burst than the connector). The network stack is
actually a decent place to smooth things out a little (or you could
implement a backlog as part of the accept process).
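
An application-level accept backlog might look something like this
(purely hypothetical sketch; the AcceptBacklog class is invented here and
is not something the connector has):

    // Hypothetical sketch: a bounded hand-off queue between the Acceptor
    // thread and the workers, so a burst queues here instead of piling up
    // only in the kernel's accept backlog. Not actual connector code.
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class AcceptBacklog {
        private final BlockingQueue<Long> pending;

        AcceptBacklog(int capacity) {
            pending = new ArrayBlockingQueue<>(capacity);
        }

        // Called by the Acceptor thread; blocks (throttling accepts) when full.
        void offer(long socket) throws InterruptedException {
            pending.put(socket);
        }

        // Called by a worker thread when it becomes free.
        long take() throws InterruptedException {
            return pending.take();
        }
    }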

Rémy


