Hello, last week I upgraded to the new Pound version 2.6 (on Linux kernel 2.6.39.3 x86_64). I'm now seeing many log messages like "pound: NULL get_thr_arg". According to this mailing list entry http://www.apsis.ch/pound/pound_list/archive/2011/2011-01/1293940338000 the message was not supposed to show up in the final release, but I'm getting plenty of them (roughly 3-5 per second). Was the message forgotten, or is it still necessary? It generates a huge logging overhead ...
In addition, I have some questions about the new threading model in version 2.6.

To determine the optimal number of threads you have to "experiment" (i.e. trial and error). This is not feasible in a production environment, because every change to the configuration file requires a restart (and thus a small downtime). Furthermore, there is currently no way to see whether all worker threads are busy and new requests are being queued. So how can I determine whether the number of threads is sufficient? Only by "noticing" that the response time of the website is slightly slower (we are talking about milliseconds)?

Couldn't Pound log an error as soon as all threads are busy and new requests have to be queued (something like: "No worker threads left -> queuing requests. Consider raising the 'Threads' setting")? Even a "nice to have" would be a poundctl option to show the number of threads currently in use (very useful for monitoring, and for taking measures in advance to avoid thread bottlenecks).

Besides the issues described above: wouldn't it be better to combine the "old" threading model (dynamic thread creation at runtime) and the "new" one (a fixed number of threads created at startup) via two config directives, one for the number of threads created at startup and one for the maximum number of threads Pound is allowed to create dynamically at runtime (as Apache does)? What do you think?

Kind regards,
Leo
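The hybrid proposal could look something like the following in the config file. To be clear, `MinThreads` and `MaxThreads` are hypothetical directive names I'm making up for illustration; Pound 2.6 only has the single `Threads` setting:

```
# Hypothetical directives, for illustration only -- not in Pound 2.6
MinThreads  64    # worker threads created at startup
MaxThreads  512   # upper bound for threads created dynamically at runtime
```

This would keep the predictable baseline of the new model while letting the pool grow under load, the way Apache's worker MPM does.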
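To make the saturation-logging idea concrete, here is a minimal sketch of the behaviour I'm asking for. This is my own illustration in Python, not Pound's actual C code; the names (`WorkerPool`, `submit`, the `log` list standing in for syslog) are all invented for the example. A fixed pool of workers drains a request queue, and a warning is emitted the moment a request arrives while every worker is busy:

```python
import queue
import threading
import time

# Sketch only -- not Pound's implementation. A fixed pool of worker
# threads drains a request queue; when a request is submitted while all
# workers are busy, a single warning is logged until capacity frees up.

class WorkerPool:
    def __init__(self, threads):
        self.jobs = queue.Queue()
        self.threads = threads
        self.busy = 0                  # workers currently handling a job
        self.lock = threading.Lock()
        self.warned = False            # avoid flooding the log
        self.log = []                  # stands in for syslog here
        for _ in range(threads):
            threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, job):
        with self.lock:
            if self.busy == self.threads and not self.warned:
                self.log.append("No worker threads left -> queuing requests. "
                                "Consider raising the 'Threads' setting")
                self.warned = True
        self.jobs.put(job)

    def _worker(self):
        while True:
            job = self.jobs.get()
            with self.lock:
                self.busy += 1
            try:
                job()
            finally:
                with self.lock:
                    self.busy -= 1
                    self.warned = False   # free capacity again, re-arm warning
                self.jobs.task_done()
```

The `warned` flag rate-limits the message to one entry per saturation episode instead of one per queued request, which keeps the logging overhead small even under sustained overload. The `busy` counter is also exactly what a poundctl-style status command could report.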
