I've been profiling the pthreaded fileserver for a while now, and one thing continues to bother me. We call the time() syscall so frequently that it often accounts for 20% of our total execution time. There are plenty of hacks to reduce this overhead, but every one I know of introduces some amount of non-determinism. Is that an acceptable risk?
Having a thread in a nanosleep; gettimeofday loop might be acceptable, but during periods of high load you couldn't guarantee that your global epoch-time variable would be _monotonically_ increasing. I suppose a combination of a SIGALRM handler and gettimeofday; setitimer might mitigate that problem to some extent.

Does anyone have better suggestions for reducing this overhead? Certainly there are parts of the code that should continue to call time() directly, but how much do we care if we accept a Kerberos ticket that expired a few milliseconds ago, when acceptable clock drift is many orders of magnitude larger?

--
Tom Keiser
[EMAIL PROTECTED]

_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel
