On 17 April 2012 13:09, Michal Suchanek <[email protected]> wrote:
> On 17 April 2012 02:45, Peter Hutterer <[email protected]> wrote:
>> On Fri, Apr 13, 2012 at 09:33:38AM -0700, Chase Douglas wrote:
>>> On 04/13/2012 01:34 AM, Daniel Kurtz wrote:
>>> > On Fri, Apr 13, 2012 at 9:09 AM, Peter Hutterer
>>> > <[email protected]> wrote:
>>> >> On Wed, Apr 11, 2012 at 06:33:26PM -0700, Chase Douglas wrote:
>>> >>> On 04/11/2012 04:53 PM, Peter Hutterer wrote:
>>> >>>> On Mon, Apr 09, 2012 at 11:17:26AM -0700, Chase Douglas wrote:
>>> >>>>> This new patch set includes a few fixes to the first and many more
>>> >>>>> fixed logging paths. All paths in the server that occur when it
>>> >>>>> encounters a fatal signal are now handled properly.
>>> >>>>>
>>> >
>>> > Hi Chase,
>>> >
>>> > I just started looking at os/log.c in the X server again to fix an
>>> > unrelated issue, and realized that my patches from last year did not,
>>> > in fact, make logging safe to do from signal handlers. I am really
>>> > glad you are looking into fixing this!
>>> >
>>> > However, I have a more basic question. Why does the X server process
>>> > input events in signal handlers in the first place?
>>> > Why not just add the event device files to a read file set in the main
>>> > "WaitForSomething" select loop, and then call the appropriate driver
>>> > ReadInput from normal process context?
>>> >
>>> > The X server must be using SigIO for some really good reasons, what
>>> > are they?
>>>
>>> To reduce visual input latency. Signal handlers are run immediately when
>>> the signal occurs. The process is interrupted no matter where it is,
>>> including when it is 500 stack frames deep from the event loop.
>>>
>>> The reduced latency is an effect that some people don't notice, some
>>> people notice but it's not a big deal, and some people can't live
>>> without, based on what I remember from a discussion at XDC 2010.
>>
>> yes, that's the reason. pointer movement should be instantaneous (or
>> closest thereof) when moving the device.
>>
>> the whole thing exploded because it went from
>>
>> input event -> move the cursor
>>
>> to
>>
>> input event -> emulate or filter buttons if needed, interpret touch
>> gestures, rescale to the screen area depending on the current multiscreen
>> setup, pointer acceleration, emulate scroll events if applicable -> move
>> the cursor
>>
>> except for the last, they all need to be in that path since you can't
>> really move the cursor until you know where to move it to.
>>
>>> At that time, some devs from Nokia were looking at changing the signal
>>> context to a pthread, but I guess it stalled. I haven't heard anything
>>> about it for quite some time.
>>
>> I remember two problems with it: the input thread was too big to prevent
>> it from being swapped out (and thus lose the response times). Not sure
>> if that was still a problem in the last set of patches.
>
> This is not affected by threading. The code and data is swapped out
> when there is no input and it is large and separated enough to
> create unused pages.
>
> Code cleanup may lead to the code being swappable by the system, but
> that has nothing to do with threading.
>
>> The bigger problem was that the input event generation code doesn't lock
>> properly because it expects to never be interrupted.
>
> Maybe running the input code in a higher priority thread might simulate
> the signal behaviour closely enough?
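
To make that concrete, by a higher priority thread I mean roughly the
sketch below. This is only an illustration, not code from any patch set:
the function names, the priority value and the missing error handling are
placeholders, and SCHED_FIFO needs the right privileges (root or
CAP_SYS_NICE), which the server traditionally has anyway.

    #include <pthread.h>
    #include <sched.h>

    /* Placeholder: would block on the evdev fds and run the event
     * generation code that currently runs from the SIGIO handler. */
    static void *input_thread(void *arg)
    {
        return NULL;
    }

    static int spawn_input_thread(pthread_t *tid)
    {
        pthread_attr_t attr;
        struct sched_param prio = { .sched_priority = 10 }; /* made-up value */

        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &prio);
        /* Without this the thread silently inherits the creator's
         * scheduling attributes and the two calls above do nothing. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        return pthread_create(tid, &attr, input_thread, NULL);
    }
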
Hmm, there is probably no pthread standard scheduling model that would
guarantee that lower priority threads do not run concurrently with a
higher priority thread on a multicore system. Not that the pthread
standard even requires any scheduling model other than 'implementation
defined' anyway.

Thanks

Michal
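
PS: for comparison, the SIGIO arrangement such a thread would replace
looks roughly like the sketch below (again only an illustration; the
device path handling, handler name and buffer size are made up). The point
is that everything reachable from the handler, the logging paths included,
has to stay async-signal-safe, which is the root of the problem this patch
set is addressing.

    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static int evdev_fd = -1;

    /* Runs in signal context: only async-signal-safe functions may be
     * called from here or from anything this calls. */
    static void sigio_handler(int sig)
    {
        char buf[256];

        while (read(evdev_fd, buf, sizeof(buf)) > 0)
            ;   /* event generation would happen here */
    }

    static int arm_sigio(const char *devnode)
    {
        struct sigaction sa = { .sa_handler = sigio_handler };

        evdev_fd = open(devnode, O_RDONLY | O_NONBLOCK);
        if (evdev_fd < 0)
            return -1;

        sigaction(SIGIO, &sa, NULL);
        fcntl(evdev_fd, F_SETOWN, getpid());        /* deliver SIGIO to us */
        fcntl(evdev_fd, F_SETFL,
              fcntl(evdev_fd, F_GETFL) | O_ASYNC);  /* enable async notify */
        return 0;
    }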
