On Tue, 5 Aug 2003, David Schultz wrote:
> On Tue, Aug 05, 2003, Roderick van Domburg wrote:
> > There's this Linux kernel patch that allows for timeslice tuning. It's
> > got the following rules of thumb:
> >
> > * The optimal setting is your CPU's speed in MHz / 2
> > * ...but there is no point in going above 1000 Hz
> > * ...and be sure to use multiples of 100 Hz
> >
> > I am anything but an expert at scheduling, but somehow this makes
> > sense: i.e. one jiffy for the scheduler and one jiffy for the process.
> >
> > Does all of this make any sense or is it just a load of hooey?
>
> There might be some rationale behind that suggestion, but my first
> guess would be that someone pulled those numbers out of a hat. In
> general, doing a context switch has negative cache effects, in
> addition to the overhead that you pay up front. For optimum
> throughput, you want to set HZ to the smallest number that still
> gives acceptable resolution. 100 Hz works just fine for
> interactive jobs; humans can't tell the difference.[1] For many
> real-time applications, a higher value is needed.
>
> [1] In terms of overhead, I think 100 Hz is well into the noise
> these days, so bumping that up a little bit would result in a
> negligible difference. I measured 100 vs. 500 a little while
> ago, and couldn't find a realistic benchmark that was negatively
> impacted by the higher frequency.
I used to run my old Pentium I (200 MHz) laptop at 1000 Hz without any
problems. I ran it this way for years until I retired it a few months
ago. I'd support raising our default rate from 100 Hz to 1000 Hz.

--
Dan Eischen
_______________________________________________
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"

