> From: Haehnle, Nicolai
> Sent: Tuesday, April 10, 2018 13:48
> On 10.04.2018 19:25, Cyr, Aric wrote:
> >> -----Original Message-----
> >> From: Michel Dänzer [mailto:mic...@daenzer.net]
> >> Sent: Tuesday, April 10, 2018 13:16
> >>
> >> On 2018-04-10 07:13 PM, Cyr, Aric wrote:
> >>>> -----Original Message-----
> >>>> From: Michel Dänzer [mailto:mic...@daenzer.net]
> >>>> Sent: Tuesday, April 10, 2018 13:06
> >>>> On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> >>>>> From: Koenig, Christian
> >>>>> Sent: Tuesday, April 10, 2018 11:43
> >>>>>
> >>>>>> For video games we have a similar situation where a frame is rendered
> >>>>>> for a certain world time and in the ideal case we would actually
> >>>>>> display the frame at this world time.
> >>>>>
> >>>>> That seems like it would be a poorly written game that flips like
> >>>>> that, unless they are explicitly trying to throttle the framerate for
> >>>>> some reason.  When a game presents a completed frame, they’d like
> >>>>> that to happen as soon as possible.
> >>>>
> >>>> What you're describing is what most games have been doing traditionally.
> >>>> Croteam's research shows that this results in micro-stuttering, because
> >>>> frames may be presented too early. To avoid that, they want to
> >>>> explicitly time each presentation as described by Christian.
> >>>
> >>> Yes, I agree completely.  However that's only truly relevant for fixed
> >>> refreshed rate displays.
> >>
> >> No, it also affects variable refresh; possibly even more in some cases,
> >> because the presentation time is less predictable.
> >
> > Yes, and that's why you don't want to do it when you have variable refresh.
> > The hardware in the monitor and GPU will do it for you, so why bother?
> 
> I think Michel's point is that the monitor and GPU hardware *cannot*
> really do this, because there's synchronization with audio to take into
> account, which the GPU or monitor don't know about.

How does it work fine today, given that all the kernel seems to know is 
'current' or 'current+1' vsyncs?
Presumably the applications somehow schedule all this just fine.
If this works without variable refresh for 60Hz, will it not work for a 
fixed-rate "48Hz" monitor (assuming a 24Hz video)?

> Also, as I wrote separately, there's the case of synchronizing multiple
> monitors.

For multi-monitor to work with VRR, the displays will have to be timing- and 
flip-synchronized.
This is impossible for an application to manage; it needs driver/HW control, 
or you end up with one display flipping before the other, which looks terrible.
And definitely forget about multi-GPU without the professional workstation-type 
support needed to sync the displays across adapters.

> > The input to their algorithms will be noisy, causing worse estimates.  If
> > you just present as fast as you can, it'll just work (within reason).
> > The majority of gamers want maximum FPS for their games, and there's quite
> > frequently outrage at a particular game when they are limited to something
> > lower than what their monitor could otherwise support (i.e. I don't want my
> > game limited to 30Hz if I have a shiny 144Hz gaming display I paid good
> > money for).  Of course, there are always exceptions... but in our
> > experience those are few and far between.
> 
> I agree that games most likely shouldn't try to be smart. I'm curious
> about the Croteam findings, but even if they did a really clever thing
> that works better than just telling the display driver "display ASAP
> please", chances are that *most* developers won't do that. And they'll
> most likely get it wrong, so our guidance should really be "games should
> ask for ASAP presentation, and nothing else".

Right, I think this is the 'easy' case and is covered in Harry's initial 
proposal when target_frame_duration_ns = 0.

> However, there *are* legitimate use cases for requesting a specific
> presentation time, and there *is* precedent of APIs that expose such
> features.
>
> Are there any real problems with exposing an absolute target present time?

Realistically, how far into the future are you requesting a presentation time? 
Won't it almost always be something like current_time+1000/video_frame_rate?
If so, why not just tell the driver to set 1000/video_frame_rate and have the 
GPU/monitor create nicely spaced VSYNCs for you that match the source content?
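
To make that concrete, here's a minimal sketch of the value a video player 
would compute and pass once per clip rather than recomputing an absolute 
deadline every flip (target_frame_duration_ns is the property name from 
Harry's proposal; the frame rate below is just an example):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double fps = 24000.0 / 1001.0;        /* 23.976 Hz example source */
    uint64_t duration_ns = (uint64_t)(1e9 / fps + 0.5);

    /* ~41708333 ns; the player passes the same value on every flip instead
     * of computing "now + one frame" as an absolute deadline each time. */
    printf("target_frame_duration_ns = %" PRIu64 "\n", duration_ns);
    return 0;
}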

In fact, you probably wouldn't even need to change your video player at all, 
other than having it pass the target_frame_duration_ns.  You could consider 
this a 'hint' as you suggested, since it cannot be guaranteed in cases where 
the driver or HW doesn't support variable refresh.  If the 
target_frame_duration_ns hint is supported/applied, then the video app has 
nothing extra to do that it wouldn't already do for any arbitrary 
fixed-refresh-rate display.  If it is not supported (say the drm_atomic_check 
fails with -EINVAL or something), the video app can simply fall back to not 
requesting a fixed target_frame_duration_ns.
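
For illustration, the probe/fallback could look something like this using 
libdrm's atomic API; the duration property itself is an assumption taken from 
the proposal, not anything that exists upstream today:

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* crtc_id and duration_prop_id are assumed to have been looked up already
 * (e.g. via drmModeObjectGetProperties); the property is hypothetical. */
static int try_duration_hint(int fd, uint32_t crtc_id,
                             uint32_t duration_prop_id, uint64_t duration_ns)
{
    drmModeAtomicReqPtr req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -ENOMEM;

    drmModeAtomicAddProperty(req, crtc_id, duration_prop_id, duration_ns);

    /* TEST_ONLY runs the atomic check phase without touching the hardware. */
    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
    drmModeAtomicFree(req);

    return ret;  /* 0: hint usable; negative: fall back to plain flips */
}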

A fundamental problem I have with a target present time, though, is how to 
accommodate present times that are longer than one VSYNC period.  If my monitor 
has a 40Hz-60Hz variable refresh range, it's easy to translate "my content is 
24Hz, repeat this next frame an integer multiple number of times so that it 
lands within the monitor range".  The driver fixes the display to an even 48Hz 
and everything is good (no worse than a 30Hz clip on a traditional 60Hz display 
anyway).  This frame-doubling is all hardware based and doesn't require any 
polling.
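
The repeat-count selection the driver/HW is effectively doing amounts to the 
following (the numbers are just the example above):

#include <stdio.h>

int main(void)
{
    const double content_hz = 24.0;
    const double min_hz = 40.0, max_hz = 60.0;   /* panel's VRR range */
    int n;

    /* Smallest integer multiple of the content rate inside the panel range. */
    for (n = 1; n * content_hz <= max_hz; n++) {
        if (n * content_hz >= min_hz) {
            /* 24 Hz * 2 = 48 Hz: every frame is simply scanned out twice. */
            printf("repeat each frame %dx -> %.0f Hz\n", n, n * content_hz);
            break;
        }
    }
    return 0;
}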

Now if you change that to "show my content in at least X nanoseconds", it can 
work on all displays, but the intent of the app is gone and the 
driver/GPU/display cannot optimize.  For example, the HDMI VRR spec defines a 
"CinemaVRR" mode where the target refresh rate error is accounted for based on 
a 0.1% deviation from the requested rate, and the v_total lines are 
incremented/decremented to compensate.  If we don't know the target rate, we 
will not be able to comply with this industry-standard specification.
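
For reference, the v_total arithmetic behind this is roughly the following; 
the mode numbers are made up and the CinemaVRR tolerance handling is 
simplified:

#include <stdio.h>

int main(void)
{
    const double pixel_clock_hz = 148500000.0;  /* example 1080p pixel clock */
    const double h_total = 2200.0;               /* example horizontal total */
    const double target_hz = 48.0;               /* 24 Hz content, doubled */

    /* Refresh rate = pixel_clock / (h_total * v_total), so the driver picks
     * the v_total closest to the target and can nudge it by a line or two to
     * stay within a small deviation of the requested rate -- but only if it
     * actually knows the requested rate. */
    int v_total = (int)(pixel_clock_hz / (h_total * target_hz) + 0.5);
    double actual_hz = pixel_clock_hz / (h_total * v_total);
    double error_pct = 100.0 * (actual_hz - target_hz) / target_hz;

    printf("v_total = %d lines, actual %.3f Hz, error %.4f%%\n",
           v_total, actual_hz, error_pct);
    return 0;
}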

Also, how would you manage an absolute target present time in the kernel?  I 
guess the app and driver need to use a common system clock or tick count, but 
when would you know to 'wake up' and execute the flip?  If you wait for VSYNC, 
then you'll always time out at v_total_max (i.e. the minimum refresh rate), 
check your time, see "yup, need to present now" and then flip.  Now your 
monitor has just jumped from its lowest refresh rate to something else, which 
can cause other problems.  If you use some timer, then you're burning needless 
power polling some counter and still wouldn't have the same accuracy you could 
achieve with a fixed duration.
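
A userspace-flavored sketch of that first option, just to illustrate the 
problem (this is not kernel code, and target_present_ns is hypothetical):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Called from a vblank/timeout handler: if vblanks only arrive when
 * v_total_max expires, this check is only made at the panel's minimum
 * refresh rate, and the flip lands on whatever boundary happens to be next. */
static bool deadline_reached(uint64_t target_present_ns)
{
    return now_ns() >= target_present_ns;
}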

Regards,
  Aric
