On Thu, Dec 05, 2002 at 01:58:24AM -0800, Terry Lambert wrote:
> Stijn Hoop wrote:
> > I'd argue it isn't flawed for the measuring it is supposed to do - namely
> > the overhead for the various _sleep functions. Care to tell me why it is
> > flawed according to you?
> 
> Because it measures the API one way, but the code uses it another.
> The results you get are not predictive of the code that you are
> going to be running.

But the code is going to use the _sleep functions exactly as the benchmark
does -- to sleep for less than 10 ms (which, as the results show, evidently
isn't possible on a default FreeBSD system).

> Well, really, something that requires RT performance should be in
> the kernel.  That's why we put interrupt handlers there.  8-).

/me ponders having an option XMAME in the kernel... nah, let's not go there :)

> Probably the place to do this is in the POSIX RT scheduling; if
> the RT scheduling is active (meaning a process has called it, and
> that process is still running), it's probably a reasonable thing
> to crank up the Hz.  This would make it self-adjusting, and also
> self-healing, so that you could safely degrade the overall system
> performance by intentionally running your application, but not
> otherwise.

That's a good suggestion, but how many OSes implement POSIX RT scheduling?
Where can I learn more about it? Any open standards?

> Note that if this were implemented, it would mean your benchmark
> is still broken, because it doesn't call the necessary interfaces.

? I don't get this.

> Another alternative would be a nanosleep call with an argument below
> a certain value.  I would hesitate to do it that way, though, since
> I think that it ought to take a privileged program to do the evil
> deed, given the impact on the rest of the system.

And that would sleep less than 10 ms on average?

--Stijn

-- 
SIGSIG -- signature too long (core dumped)
