On Mon, 2009-02-02 at 09:16 -0800, Jesse Barnes wrote:
> On Monday, February 2, 2009 8:43 am Michel Dänzer wrote:
> > On Sat, 2009-01-31 at 08:15 -0800, Jesse Barnes wrote:
> > > I don't want to add another test, just to replace the existing one. 
> > > Below is what I'd like to push, since it matches other wraparound
> > > handling in the kernel.  How did you settle on 1<<23 originally?
> >
> > It should allow waiting for up to about one or two days (38.8 hours at
> > 60 Hz to be exact :), anything else is considered a miss. From today's
> > perspective, it might make sense to further restrict that to about a
> > minute or so. (A minor complication being that the same logic exists in
> > userspace code, not sure what would happen if they didn't match anymore)
> >
> > However, that isn't what your proposed change does, is it? Doesn't it
> > even increase the maximum wait span to the positive range of an int,
> > which equals more than a year at 60 Hz?

Actually, I'm afraid I was getting confused... Looking at the current
code again, it does about the opposite of what I said above: it
terminates the wait if the target sequence is up to a day behind the
counter, so it actually allows waiting for more than two years.
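
For reference, here is that arithmetic spelled out as a trivial
standalone program (nothing DRM-specific, just the numbers, assuming a
60 Hz refresh rate):

#include <stdio.h>

int main(void)
{
    const double hz = 60.0;            /* assumed refresh rate */
    const double window = 1 << 23;     /* 8388608 frames */
    const double range = 4294967296.0; /* 2^32, full 32-bit counter range */

    /* 2^23 frames at 60 Hz: about 38.8 hours */
    printf("1<<23 frames       = %.1f hours\n", window / hz / 3600.0);

    /* if the wait only bails out when the target is at most 2^23 behind
     * the counter, the remaining 2^32 - 2^23 frames can still be waited
     * for: about 2.3 years */
    printf("2^32 - 2^23 frames = %.2f years\n",
           (range - window) / hz / 3600.0 / 24.0 / 365.0);

    return 0;
}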

However, if we are to change this test, I'd suggest going much further
than your proposal (which btw is really just a fancy way of replacing
the shift of 23 with 31 :): e.g. something like

(vblwait->request.sequence - drm_vblank_count(dev, crtc) - 1) > (1 << 23)

should do what I said above. Then if we reduce the right hand side to
e.g. 3600, only waiting for up to 3600 frames (one minute at 60 Hz) is
possible, anything else is considered a miss.

(I hope I didn't make any stupid mistake in the math again, it's getting
late... but I'm quite confident the principle is sound)


> > > It's mostly harmless, we probably see it on Intel because we do an IRQ
> > > disable across VT switch, which would trigger the update_vblank_count
> > > call at the first post-VT switch drm_vblank_get.  Should be easy to
> > > fix up though, we can just capture last_vblank_count at IRQ disable
> > > time too.
> >
> > If this was fixed, would any of the other changes you proposed in this
> > thread be necessary? Wouldn't they just be papering over the real issue,
> > which is the jumping counter?
> 
> The real issue is that passing in a vblank sequence that's already passed 
> doesn't return immediately like it should.

Well, the problem is exactly what 'already passed' means.

> The jumping counter exposed that problem and should probably be fixed
> too (just to keep the counter values more friendly) but the core
> problem should be fixed first, imo.

I still disagree there: if the counter doesn't jump, even the current
code works just fine as long as userspace calls the ioctl at least once
a day. We can try to contain the damage caused by a jumping counter, but
we can't completely eliminate it in all cases. Even if we can usually
handle it sanely in the kernel, userspace libraries / apps may not be so
robust. From my POV the core problem is clearly the jumping counter;
everything else discussed in this thread is just a workaround to paper
over it.
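
For what it's worth, the last_vblank_count capture you describe could
look roughly like the following. This is just a standalone sketch of the
bookkeeping with made-up names, not the actual drm_irq.c code:

#include <stdint.h>

/* Rough sketch of the idea: remember what the hardware frame counter
 * was when vblank interrupts get disabled, so the software count can be
 * advanced by exactly the frames missed while they were off instead of
 * jumping by a bogus amount at the next drm_vblank_get(). */

struct vblank_state {
    uint32_t sw_count;      /* counter exposed to userspace */
    uint32_t hw_at_disable; /* hardware frame counter at IRQ disable */
    int irqs_enabled;
};

/* hypothetical hook called when vblank IRQs are turned off */
static void vblank_irq_disable(struct vblank_state *v, uint32_t hw_now)
{
    v->hw_at_disable = hw_now;
    v->irqs_enabled = 0;
}

/* hypothetical hook called when vblank IRQs are turned back on */
static void vblank_irq_enable(struct vblank_state *v, uint32_t hw_now)
{
    /* advance by however many frames passed while IRQs were off;
     * unsigned arithmetic handles a hardware counter wraparound */
    v->sw_count += hw_now - v->hw_at_disable;
    v->irqs_enabled = 1;
}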


-- 
Earthling Michel Dänzer           |                http://www.vmware.com
Libre software enthusiast         |          Debian, X and DRI developer
