On Wed, 2005-11-09 at 13:38 +0100, ness wrote:
> Jonathan S. Shapiro wrote:
> > No no. The file system can no longer make any specification of latency
> > for *any* file, because the act of locating *other* files may require a
> > name comparison on an arbitrarily long name along the way.
> >
> > shap
>
> Why shouldn't the thread of execution and scheduling time be provided by
> the caller, too?
Something in this whole conversation has bothered me, and it took ness's
note to help me see precisely what it was.

In my entire professional life, I have seen only ONE program that was
impeded by PATH_MAX, and that one really needed to be fixed; the problem
was the program, not the PATH_MAX value. Alfred came forward with another
example a while back. That makes two.

But in the present discussion, a bunch of people have said, in effect,
"we can work around the unbounded path problems by introducing shared
memory multithreading into the server."

Sometimes shared memory multithreading is a necessary evil, but it really
needs to be viewed as a solution of last resort in a robust system. There
are three reasons for this:

1. Formally, we have no idea how to model the behavior of such programs
   in general.
2. As developers, we have no idea how to debug such programs.
3. Pragmatically, we really do observe in practice that this type of
   programming idiom is a source of real, recurring, durable, and
   hard-to-find bugs in real systems.

So the proposal to multithread the file server in order to support
arbitrary path lengths basically amounts to: "in order to better serve
the occasional exceptionally rare program, we should abandon robustness
for *all* programs in a fundamental and unengineerable way."

This doesn't strike me as the right outcome. There is no such thing as a
free lunch, folks.

shap

_______________________________________________
L4-hurd mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/l4-hurd
