On Wed, Nov 24, 2010 at 03:39:44PM +0530, ext Hiremath, Vaibhav wrote:
> 
> > -----Original Message-----
> > From: Tomi Valkeinen [mailto:[email protected]]
> > Sent: Wednesday, November 24, 2010 2:28 PM
> > To: Hiremath, Vaibhav
> > Cc: [email protected]
> > Subject: Re: OMAP:DSS: possible bug in WAITFOR_VSYNC ioctl
> > 
> > On Tue, 2010-11-23 at 23:46 +0530, ext Hiremath, Vaibhav wrote:
> > > Hi,
> > >
> > > While supporting one of our customers I came across an interesting
> > > issue and finding with the WAITFORVSYNC ioctl -
> > >
> > > Problem Statement -
> > > -------------------
> > > With WAITFORVSYNC lots of tearing artifacts are visible on the LCD
> > > output, but WAITFORGO works fine without any issues.
> > >
> > > Debugging/Findings -
> > > --------------------
> > >
> > > Technically, both WAITFORVSYNC and WAITFORGO wait on the VSYNC
> > > interrupt - but there is a slight difference in their
> > > implementation/processing.
> > 
> > No, that's not quite right.
> > 
> > WAITFORVSYNC waits for the next vsync.
> > 
> [Hiremath, Vaibhav] Yes, certainly.
> 
> > WAITFORGO waits until the config changes for the particular overlay
> > have been applied to the HW, which may take multiple vsyncs if there are
> > already pending config changes. Or, if there are no changes to be
> > applied, it doesn't wait at all.
> > 
> [Hiremath, Vaibhav] Yes, completely agreed. But in the panning use-case, 
> where we can assume there will always be a config change when you enter the 
> WAITFORGO ioctl, it waits for the next VSYNC, and as you mentioned it may 
> wait for multiple vsyncs to make sure that the config change gets applied 
> to the HW.
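
To make the panning case concrete, here is a minimal userspace sketch of
that flow, assuming a double-buffered /dev/fb0 (yres_virtual >= 2 * yres)
and the OMAP-specific OMAPFB_WAITFORGO ioctl from <linux/omapfb.h>; error
handling is abbreviated:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <linux/omapfb.h>

int main(void)
{
        struct fb_var_screeninfo var;
        int fd = open("/dev/fb0", O_RDWR);
        int frame = 0;

        if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0)
                return 1;

        for (;;) {
                int back = frame & 1;   /* index of the back buffer */

                /* ... render the next frame into buffer 'back' here ... */

                /* Flip: ask the hardware to scan out the freshly drawn buffer. */
                var.yoffset = back * var.yres;
                if (ioctl(fd, FBIOPAN_DISPLAY, &var) < 0)
                        break;

                /* Block until the new scan-out address has really been taken
                 * into use by the HW (possibly several vsyncs later), so the
                 * other buffer is safe to render into again. */
                if (ioctl(fd, OMAPFB_WAITFORGO) < 0)
                        break;

                frame++;
        }
        return 0;
}
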
> 
> > > The WAITFORGO ioctl ensures that the dirty & shadow_dirty flags
> > > (software flags) are cleared, making sure that the hardware state and
> > > the software state always stay in sync. It makes 4 attempts to do so -
> > > inside a loop it checks the dirty flags and calls the wait_for_vsync
> > > API. In the ideal use-case scenario it should come out in a single
> > > iteration.
> <snip>
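
For illustration, here is a paraphrased sketch of the loop described above -
not the actual omapfb/DSS code. mgr_config_pending() and mgr_wait_for_vsync()
are made-up stand-ins for the driver's dirty/shadow_dirty bookkeeping and its
vsync wait:

#include <errno.h>
#include <stdbool.h>

/* Illustrative stand-ins for the driver's internal state and helpers. */
struct manager {
        bool dirty;             /* change not yet written to shadow registers */
        bool shadow_dirty;      /* written to shadow registers, not yet latched */
};

static bool mgr_config_pending(const struct manager *mgr)
{
        return mgr->dirty || mgr->shadow_dirty;
}

/* Assume this blocks until the next vsync; in the real driver the flags
 * are cleared from the vsync interrupt handler. */
int mgr_wait_for_vsync(struct manager *mgr);

static int wait_for_go(struct manager *mgr)
{
        int i, r;

        for (i = 0; i < 4; i++) {
                if (!mgr_config_pending(mgr))
                        return 0;       /* nothing pending: don't wait at all */

                r = mgr_wait_for_vsync(mgr);
                if (r)
                        return r;
        }

        /* Ideally one iteration is enough; reaching the limit means the
         * config still hasn't been latched by the hardware. */
        return mgr_config_pending(mgr) ? -ETIMEDOUT : 0;
}
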
> 
> > > Suggestions/Recommendation -
> > > --------------------------
> > >
> > > From the user application point of view, the user won't care about the
> > > driver's internal implementation. The application believes that
> > > WAITFORVSYNC will only return after the previous buffer has been
> > > displayed (DMAed), and now, with the addition of the standard
> > > FBIO_WAITFORVSYNC ioctl interface, this is very much expected by user
> > > applications as standard behavior.
> > >
> > > I would recommend having a WAITFORGO-like implementation under
> > > WAITFORVSYNC, merging WAITFORGO with WAITFORVSYNC, and killing
> > > WAITFORGO (since we have the standard FBIO_WAITFORVSYNC ioctl).
> > > Also, the WAITFORGO ioctl is an OMAP-specific custom ioctl.
> > 
> > I have to say that I'm not quite sure what WAITFORVSYNC should do. 
> [Hiremath, Vaibhav] Me neither.
> 
> > The
> > name implies that it should wait for the next vsync, which is what it
> > does for omapfb.
> > 
> 
> > Changing it to WAITFORGO would alter the behaviour. Sometimes it would
> > not wait at all, sometimes it could wait for multiple vsyncs.
> [Hiremath, Vaibhav] I am not quite sure whether it makes sense, from the 
> application point of view, to wait for vsync if the config has not changed. 
> What could be the use-case for such a requirement, where the application 
> won't change anything but still would like to wait on vsync?
> 
> And as far as waiting on multiple vsyncs is concerned, it does make sense to 
> state that "the WAITFORVSYNC ioctl makes sure that your change got applied 
> to the HW".
> 
> I am not aware of other architectures, but I believe this is something 
> OMAP-specific. And for other platforms, WAITFORVSYNC means the changes have 
> been applied to the HW (why would apps care about whether it is vsync or 
> anything else?).

WAITFORVSYNC waits for the next vsync (or actually vblank in many
cases). That's it. I don't see any point in trying to shoehorn
other functionality into it.

If you want to standardise WAITFORGO type of stuff then just add
another standard ioctl for it.
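
To make that concrete, a minimal userspace sketch of those semantics - the
argument is the display/CRTC index (0 here), and the call returns after the
next vertical blanking period whether or not a config change is pending. The
fallback #define should match the value exported by <linux/fb.h> on kernels
that already have it:

#include <sys/ioctl.h>
#include <linux/fb.h>

#ifndef FBIO_WAITFORVSYNC
#define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
#endif

int wait_next_vsync(int fbfd)
{
        __u32 crtc = 0;

        /* Returns after the next vertical blanking period, regardless of
         * whether any configuration change is pending. */
        return ioctl(fbfd, FBIO_WAITFORVSYNC, &crtc);
}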

-- 
Ville Syrjälä