Am 10.04.2018 um 17:08 schrieb Harry Wentland:
On 2018-04-10 03:37 AM, Michel Dänzer wrote:
On 2018-04-10 08:45 AM, Christian König wrote:
Am 09.04.2018 um 23:45 schrieb Manasi Navare:
Thanks for initiating the discussion. Find my comments below:
On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
On 2018-04-09 03:56 PM, Harry Wentland wrote:
=== A DRM render API to support variable refresh rates ===

In order to benefit from adaptive sync and VRR, userland needs a way
to let us know whether to vary frame timings or to target a
different frame time. These can be provided as atomic properties on the CRTC:
   * bool    variable_refresh_compatible
   * int     target_frame_duration_ns (frame duration in nanoseconds)

This gives us the following cases:

variable_refresh_compatible = 0, target_frame_duration_ns = 0
   * drive monitor at timing's normal refresh rate

variable_refresh_compatible = 1, target_frame_duration_ns = 0
   * send new frame to monitor as soon as it's available, if within
min/max of monitor's reported capabilities

variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
   * send new frame to monitor with the specified frame duration

When a target_frame_duration_ns or variable_refresh_compatible
setting cannot be supported, the atomic check will reject the commit.
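The decision table above can be sketched as a small helper. This is purely illustrative; the enum and function name are hypothetical, only the property names come from the proposal:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the proposed property semantics; not an
 * existing kernel API. */
enum vrr_mode {
    VRR_FIXED_REFRESH,      /* drive monitor at the timing's normal rate */
    VRR_VARIABLE_REFRESH,   /* flip as soon as a new frame is available */
    VRR_TARGET_DURATION,    /* pace flips to the requested frame duration */
};

static enum vrr_mode
vrr_mode_from_props(bool variable_refresh_compatible,
                    uint64_t target_frame_duration_ns)
{
    /* A non-zero duration takes precedence over the compat flag. */
    if (target_frame_duration_ns > 0)
        return VRR_TARGET_DURATION;
    if (variable_refresh_compatible)
        return VRR_VARIABLE_REFRESH;
    return VRR_FIXED_REFRESH;
}
```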

What I would like is two sets of properties on a CRTC or preferably on
a connector:

KMD properties that UMD can query:
* vrr_capable -  This will be an immutable property for exposing
hardware's capability of supporting VRR. This will be set by the
kernel after
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
refresh rates supported.
These properties are optional and will be created and attached to the
DP/eDP connector when the connector is initialized.
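For reference, the min/max vertical rates the kernel would read come from the EDID display range limits descriptor. A minimal parsing sketch, using the EDID 1.4 byte layout (descriptor tag 0xFD); the helper name is hypothetical and the range-offset flags in byte 4 are ignored here:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: pull vrr_vrefresh_min/max out of an 18-byte EDID display
 * range limits descriptor (tag 0xFD).  A real driver also has to
 * honor the range-offset flags in byte 4, which this ignores. */
static bool
edid_range_limits(const uint8_t desc[18], int *vmin_hz, int *vmax_hz)
{
    /* Display descriptors start with 0x00 0x00 0x00 <tag>. */
    if (desc[0] || desc[1] || desc[2] || desc[3] != 0xFD)
        return false;
    *vmin_hz = desc[5];   /* minimum vertical rate, Hz */
    *vmax_hz = desc[6];   /* maximum vertical rate, Hz */
    return true;
}
```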
Mhm, aren't those properties actually per mode and not per CRTC/connector?

Properties that you mentioned above that the UMD can set before the
kernel can enable VRR functionality:
* bool vrr_enable or vrr_compatible
Yeah, that certainly makes sense. But target_frame_duration_ns is a bad idea.

We should use an absolute timestamp where the frame should be presented,
otherwise you could run into a bunch of trouble with IOCTL restarts or
missed blanks.
Also, a fixed target frame duration isn't suitable even for video
playback, due to drift between the video and audio clocks.
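The drift argument can be illustrated with a quick calculation: pacing 24000/1001 fps ("23.976") content with a fixed 1/24 s frame duration shortens every frame by roughly 41.7 µs, and the error accumulates against the audio clock. The helper below is hypothetical, for illustration only:

```c
/* Accumulated A/V drift when "23.976" fps content is paced with a
 * fixed 1/24 s target frame duration. */
static double av_drift_after(double playback_seconds)
{
    const double content_fps   = 24000.0 / 1001.0; /* NTSC film rate */
    const double fixed_dur_s   = 1.0 / 24.0;       /* fixed target duration */
    const double content_dur_s = 1.0 / content_fps;

    /* Each displayed frame is short by this much. */
    double per_frame_err_s = content_dur_s - fixed_dur_s;
    return playback_seconds * content_fps * per_frame_err_s;
}
```

After an hour the video is about 3.6 s ahead of the audio, which is why players need to adjust presentation per frame rather than rely on one fixed duration.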

Time-based presentation seems to be the right approach for preventing
micro-stutter in games as well; Croteam developers have been researching this.

I'm not sure the driver can ever guarantee the exact time a flip
occurs. What we have control over with our HW is the frame duration.

Sounds like you misunderstood what we mean here.

The driver does not need to give an exact guarantee that a flip happens at that time. It should just not flip before that specific time.

E.g. when we missed a VBLANK, your approach would still wait for the specified amount of time, while an absolute timestamp would mean flipping as soon as possible after that timestamp has passed.

As Michel noted that is also exactly what video players need.
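The "do not flip before the timestamp" semantics can be sketched as picking the first vblank at or after the target time. This assumes a fixed vblank period for simplicity (with VRR the period is stretched); names are illustrative:

```c
#include <stdint.h>

/* Sketch of absolute-timestamp flip semantics: a flip must not be
 * scanned out before target_ns, so it lands on the first vblank at
 * or after that time.  Purely illustrative; real drivers work from
 * hardware vblank events, not a fixed period. */
static uint64_t
flip_vblank_for_target(uint64_t next_vblank_ns,
                       uint64_t vblank_period_ns,
                       uint64_t target_ns)
{
    if (target_ns <= next_vblank_ns)
        return next_vblank_ns;  /* timestamp already passed: flip ASAP */
    uint64_t delta = target_ns - next_vblank_ns;
    uint64_t periods = (delta + vblank_period_ns - 1) / vblank_period_ns;
    return next_vblank_ns + periods * vblank_period_ns;
}
```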

Are the Croteam devs trying to predict render times? I'm not sure how that
would work. We've had bad experiences in the past with games that try to do
frame pacing, as that's usually not accurate and tends to lead to more
problems than benefits.

As far as I understand, it is just a regulated feedback system: e.g. the application records the timestamps of the last three frames (or so) and then uses that plus a margin as the world time for the 3D rendering.

When the application has finished sending all rendering commands, it submits the frame to be displayed with exactly that timestamp.

The timestamp when the frame was actually displayed is then used again as input to the algorithm.
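That feedback loop could look something like the sketch below: keep the display timestamps of the last few frames, derive an average frame time, and add a margin to pick the target presentation time for the frame about to be rendered. The structure, names, and margin handling are all hypothetical:

```c
#include <stdint.h>

#define HISTORY 3

/* Sketch of the regulated feedback system described above.
 * Illustrative only; real frame pacing has more state. */
struct frame_pacer {
    uint64_t displayed_ns[HISTORY]; /* newest first */
    int count;
};

/* Feed back the timestamp at which a frame was actually displayed. */
static void pacer_record(struct frame_pacer *p, uint64_t displayed_ns)
{
    for (int i = HISTORY - 1; i > 0; i--)
        p->displayed_ns[i] = p->displayed_ns[i - 1];
    p->displayed_ns[0] = displayed_ns;
    if (p->count < HISTORY)
        p->count++;
}

/* Average frame time over the history plus a margin gives the
 * target presentation timestamp (and world time) for the next frame. */
static uint64_t pacer_next_target(const struct frame_pacer *p,
                                  uint64_t margin_ns)
{
    if (p->count < 2)
        return p->displayed_ns[0] + margin_ns;
    uint64_t span = p->displayed_ns[0] - p->displayed_ns[p->count - 1];
    uint64_t avg = span / (p->count - 1);
    return p->displayed_ns[0] + avg + margin_ns;
}
```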



amd-gfx mailing list
