Hi, I've been browsing the naviserver code, learning the differences from the aolserver code that I'm more familiar with, and checking for a few bugs that I've found and/or fixed previously. Here's what I've come across so far. These are all pretty unusual cases with straightforward workarounds, but they're still bugs.
fastpath stale cache - with fastpath caching enabled, it is possible to serve stale data with [ns_returnfile] if a constant filename is overwritten and served repeatedly. The problem is much less serious than in aolserver, where the file's inode rather than its name is used as the cache key; since naviserver keys on the file name, using [ns_tmpnam] or [ns_mktemp] for a temporary file instead of a constant name avoids the issue completely (sketch in the P.S. below). It is still a bug, though, and the fix of not caching files with ctime == time() works.

ns_returnfile filename encoding - ns_returnfile doesn't decode the filename passed from Tcl correctly. As a result, [file exists $file] can report that a file exists while [ns_returnfile 200 text/html $file] returns a "not found" error. The circumstances needed to trigger this are uncommon (changing Tcl's system encoding, and actually having file names in unusual encodings), but it is possible (hypothetical reproduction in the P.S. below). The fix should be to pass the ns_returnfile argument through Tcl_UtfToExternalDString to get a 'native' file name.

ADP nested delimiters - ADP parsing doesn't handle nested <% %> sequences correctly. There is a fix available.

conn thread starvation - it is possible to get into a situation where connections are queued but no conn threads are running to handle them, so nothing happens until a new connection comes in. When this happens the server will also not shut down cleanly. As far as I can tell, this can only happen if the connection queue is larger than connsperthread and the load is very bursty (i.e. a load test): all of the existing conn threads can hit their connsperthread limit and exit, but a new conn thread is only started when a new connection is queued. I think the solution is to limit maxconnections to no more than connsperthread (config sketch in the P.S. below). Doing so exposes a less severe problem where connections waiting in the driver thread don't get queued for some time; it's less of an issue because there is a timeout and the driver thread will typically wake up on a closing socket fairly soon, but it can still result in a simple request taking ~3s to complete. I don't know how to fix this latter problem.

Obsolete commands in manpages - several pages still have examples that use ns_share: ns_mutex, ns_register, and ns_register_filter. The examples in ns_mutex and ns_sockcallback also refer to 'detach', which I've never heard of; from the usage I'm guessing it is the predecessor of ns_chan.

-J
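
P.S. A few sketches for the items above, in case they help. For the fastpath item, this is roughly what I mean by the temp-file workaround; the path template and the $html variable are made up, and cleanup of the temp file is omitted:

    # write each response body to a fresh temp file so the fastpath
    # cache never sees the same file name twice
    set file [ns_mktemp /tmp/report-XXXXXX]
    set chan [open $file w]
    puts -nonewline $chan $html
    close $chan
    ns_returnfile 200 text/html $file
    # (temp file cleanup left out here)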
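
For the filename decoding item, a hypothetical reproduction; the path is invented, and it assumes the system encoding is something other than utf-8 with a matching file actually present on disk:

    # a file name containing non-ASCII characters
    set file "/web/pages/r\u00e9sum\u00e9.html"
    file exists $file                    ;# => 1, Tcl converts to the native encoding
    ns_returnfile 200 text/html $file    ;# => "not found", the name isn't converted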
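
And for the conn thread starvation item, the config change I have in mind is just keeping maxconnections at or below connsperthread; the server name and numbers here are made up:

    ns_section ns/server/server1
    # keep the connection queue no larger than connsperthread, so the
    # existing conn threads can't all exit while connections are still queued
    ns_param connsperthread 100
    ns_param maxconnections 100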