Marcus Meissner <[EMAIL PROTECTED]> writes:

> Critical Sections take way too much time. I have added the appended patch
> to EnterCriticalSection():
> 
> It prints an S every 20 seconds on my K6-200, meaning it did at least 
> 3*1000 (wine->wineserver->wine) process context switches (not counting
> the switches caused by the owning thread).

If I understand your test correctly, it means we do 1000 server calls
every 20 seconds, or 50 calls per second, because of critical
sections.  On my PII-266 a server call takes approx. 60us, so critical
section waits eat 50 x 60us = 3ms every second, or 0.3% of the total
time. Even if we add the calls made by the thread leaving the section,
I doubt this would cause a dramatic performance impact.
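
(For reference, the counting described above could be done with
something like the following; the function name, the threshold and the
exact placement are guesses on my part, since the patch itself is not
shown:

    #include <stdio.h>

    /* Hypothetical sketch of the instrumentation: call this from the
       contended path of EnterCriticalSection(), just before blocking
       in the wineserver.  The name and threshold are assumptions. */
    static void count_server_wait( void )
    {
        static int server_waits;
        if (++server_waits % 1000 == 0)
            fprintf( stderr, "S" );  /* one 'S' per 1000 server waits */
    }

so each printed 'S' corresponds to 1000 trips through the slow path.)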

> I can't say for sure, but this might be sucking up way too much CPU
> power ;)
> 
> A solution would be to change the handling of local critical sections
> to use either UNIX IPC semaphores directly or a 'busy wait' using a
> local primitive (like usleep(), select() and/or sched_yield()).

We could do a couple of busy-wait iterations before sleeping;
I believe recent Windows versions do something like this.
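
A rough sketch of that combined approach, spinning briefly with an
interlocked test before falling back to sleeping; the GCC-style atomic
builtins, the spin count, the yield interval, and the usleep() standing
in for the wineserver wait are all assumptions for illustration:

    #include <sched.h>
    #include <unistd.h>

    #define SPIN_COUNT 4000   /* assumed value; would need tuning */

    typedef struct { volatile int locked; } section;

    static int try_enter( section *s )
    {
        /* atomically set the flag; succeed if it was previously clear */
        return !__sync_lock_test_and_set( &s->locked, 1 );
    }

    static void enter_section( section *s )
    {
        int i;
        for (i = 0; i < SPIN_COUNT; i++)
        {
            if (try_enter( s )) return;        /* got it cheaply */
            if (i % 100 == 99) sched_yield();  /* let the owner run */
        }
        while (!try_enter( s ))   /* still contended: sleep instead */
            usleep( 1000 );
    }

    static void leave_section( section *s )
    {
        __sync_lock_release( &s->locked );
    }

The real fallback would of course be the existing server wait rather
than usleep(); the point is only that the server round-trip is skipped
whenever the section frees up within the spin window.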

But I think a better performance improvement would be to reduce the
number of threads we use, and in particular to avoid processing X events
in the service thread. That would eliminate all contention on the X11
critical section, at least for non-threaded programs.

-- 
Alexandre Julliard
[EMAIL PROTECTED]
