Re: RFC for a render API to support adaptive sync and VRR

2018-08-17 Thread Ernst Sjöstrand
It would be really nice to have support for the automatic
extension-less fullscreen game scenario. Maybe you don't have to solve
everything in the first implementation...
So a friendly ping here!

Regards
//Ernst

On Tue, 24 Apr 2018 at 23:58, Daniel Vetter wrote:
>
> On Tue, Apr 24, 2018 at 4:28 PM, Harry Wentland  
> wrote:
> >
> >
> > On 2018-04-24 08:09 AM, Daniel Vetter wrote:
> >> On Mon, Apr 23, 2018 at 02:19:44PM -0700, Manasi Navare wrote:
> >>> On Mon, Apr 23, 2018 at 10:40:06AM -0400, Harry Wentland wrote:
>  On 2018-04-20 04:32 PM, Manasi Navare wrote:
> > On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
> >> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  
> >> wrote:
> >>> Michel Dänzer  writes:
>  Time-based presentation seems to be the right approach for preventing
>  micro-stutter in games as well, Croteam developers have been 
>  researching
>  this.
> >>>
> >>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
> >>> extension offer the ability to specify the desired display time in
> >>> seconds.
> >>>
> >>> Similarly, I'd suggest that the min/max display refresh rate values be
> >>> advertised as time between frames rather than frames per second.
> >
> > So there is a global min and max refresh rate as advertised by the 
> > monitor
> > range descriptor. That I guess can be exposed as a global range in 
> > terms of
> > min and max time between frames as a global property of the connector.
> >
> > We don't need the per-mode min and max refresh rate to be exposed, right?
> 
>  If I understand VRR right, with CinemaVRR acceptable refresh rates might 
>  fall outside the range advertised by the monitor. Would we
>   1) advertise 24/1.001 as a lower bound,
>   2) expect media apps to use the lower bound simply for informational 
>  purposes,
>   3) or simply not support CinemaVRR?
> 
>  (1) has the added caveat that not all reported rates would be supported.
> 
>  Alternatively a bit could indicate that CinemaVRR is supported, but I'm
>  not sure if user mode would need all these details.
> 
>  Harry
> >>>
> >>> Are there monitors with special CinemaVRR support? In that case we need to
> >>> understand how those monitors advertise the monitor range, and whether they
> >>> have a bit in the EDID that indicates they are CinemaVRR-capable as opposed
> >>> to just Adaptive Sync/VRR.
> >>> Harry, if you have one of those monitors, could you send the EDID dump for
> >>> that?
> >>
> >> As long as some multiple of the 24/1.001 refresh rate is within the
> >> officially supported refresh rate range this should work out. Maybe we'll
> >> end up uploading 2x (to run at ~48Hz), maybe the kernel only uploads at
> >> 24Hz. But should all be fine.
> >>
> >
> > Would kernel driver upload 48Hz when UMD asks for 24Hz or would UMD be 
> > expected to submit double frames?
> >
> > If kernel driver supports frame doubling (like our DC driver) we would 
> > probably report half of monitor-reported min-refresh (or rather double of 
> > monitor-reported max frame time).
>
> Your driver (amdgpu) already supports frame doubling, except only for
> vblank seqno instead of timestamps. Whether VRR can get down to 24Hz
> or not is totally irrelevant from userspace's point of view. By
> default the kernel is expected to keep displaying the current frame
> until userspace gives it a new one. There's no expectation that
> userspace provides a new buffer for every vblank (whether that's a
> fixed or variable refresh rate doesn't matter).
> -Daniel
>
> >
> > Harry
> >
> >> Ofc if we have CinemaVRR screens which don't fit this, then maybe we need
> >> to figure out something ...
> >> -Daniel
> >>
> >>>
> >>> Manasi
> >>>
> 
> >
> >>>
> >>> I'd also encourage using a single unit for all of these values,
> >>> preferably nanoseconds. Absolute times should all be referenced to
> >>> CLOCK_MONOTONIC.
> >>
> >> +1 on everything Keith said. I somehow got dragged into khr vk
> >> discussions around preventing micro-stuttering, and consensus seems to
> >> be that timestamps for scheduling frames is the way to go, most likely
> >> absolute ones (not everything is running Linux unfortunately, so can't
> >> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
> >> -Daniel
> >
> > And yes I also got consensus from Mesa and media folks about using the
> > absolute timestamp for scheduling the frames and then the driver will
> > modify the vblank logic to "present no earlier than the timestamp"
> >
> > Manasi
> >
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> >> ___
> >> dri-devel mailing list
> >> dri-de...@lists.

Re: RFC for a render API to support adaptive sync and VRR

2018-04-24 Thread Daniel Vetter
On Tue, Apr 24, 2018 at 4:28 PM, Harry Wentland  wrote:
>
>
> On 2018-04-24 08:09 AM, Daniel Vetter wrote:
>> On Mon, Apr 23, 2018 at 02:19:44PM -0700, Manasi Navare wrote:
>>> On Mon, Apr 23, 2018 at 10:40:06AM -0400, Harry Wentland wrote:
 On 2018-04-20 04:32 PM, Manasi Navare wrote:
> On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
>> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
>>> Michel Dänzer  writes:
 Time-based presentation seems to be the right approach for preventing
 micro-stutter in games as well, Croteam developers have been 
 researching
 this.
>>>
>>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
>>> extension offer the ability to specify the desired display time in
>>> seconds.
>>>
>>> Similarly, I'd suggest that the min/max display refresh rate values be
>>> advertised as time between frames rather than frames per second.
>
> So there is a global min and max refresh rate as advertised by the monitor
> range descriptor. That I guess can be exposed as a global range in terms 
> of
> min and max time between frames as a global property of the connector.
>
> We don't need the per-mode min and max refresh rate to be exposed, right?

 If I understand VRR right, with CinemaVRR acceptable refresh rates might 
 fall outside the range advertised by the monitor. Would we
  1) advertise 24/1.001 as a lower bound,
  2) expect media apps to use the lower bound simply for informational 
 purposes,
  3) or simply not support CinemaVRR?

 (1) has the added caveat that not all reported rates would be supported.

Alternatively a bit could indicate that CinemaVRR is supported, but I'm not
sure if user mode would need all these details.

 Harry
>>>
>>> Are there monitors with special CinemaVRR support? In that case we need to
>>> understand how those monitors advertise the monitor range, and whether they
>>> have a bit in the EDID that indicates they are CinemaVRR-capable as opposed
>>> to just Adaptive Sync/VRR.
>>> Harry, if you have one of those monitors, could you send the EDID dump for
>>> that?
>>
>> As long as some multiple of the 24/1.001 refresh rate is within the
>> officially supported refresh rate range this should work out. Maybe we'll
>> end up uploading 2x (to run at ~48Hz), maybe the kernel only uploads at
>> 24Hz. But should all be fine.
>>
>
> Would kernel driver upload 48Hz when UMD asks for 24Hz or would UMD be 
> expected to submit double frames?
>
> If kernel driver supports frame doubling (like our DC driver) we would 
> probably report half of monitor-reported min-refresh (or rather double of 
> monitor-reported max frame time).

Your driver (amdgpu) already supports frame doubling, except only for
vblank seqno instead of timestamps. Whether VRR can get down to 24Hz
or not is totally irrelevant from userspace's point of view. By
default the kernel is expected to keep displaying the current frame
until userspace gives it a new one. There's no expectation that
userspace provides a new buffer for every vblank (whether that's a
fixed or variable refresh rate doesn't matter).
-Daniel

>
> Harry
>
>> Ofc if we have CinemaVRR screens which don't fit this, then maybe we need
>> to figure out something ...
>> -Daniel
>>
>>>
>>> Manasi
>>>

>
>>>
>>> I'd also encourage using a single unit for all of these values,
>>> preferably nanoseconds. Absolute times should all be referenced to
>>> CLOCK_MONOTONIC.
>>
>> +1 on everything Keith said. I somehow got dragged into khr vk
>> discussions around preventing micro-stuttering, and consensus seems to
>> be that timestamps for scheduling frames is the way to go, most likely
>> absolute ones (not everything is running Linux unfortunately, so can't
>> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
>> -Daniel
>
> And yes I also got consensus from Mesa and media folks about using the
> absolute timestamp for scheduling the frames and then the driver will
> modify the vblank logic to "present no earlier than the timestamp"
>
> Manasi
>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
>>



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mail

Re: RFC for a render API to support adaptive sync and VRR

2018-04-24 Thread Harry Wentland


On 2018-04-24 08:09 AM, Daniel Vetter wrote:
> On Mon, Apr 23, 2018 at 02:19:44PM -0700, Manasi Navare wrote:
>> On Mon, Apr 23, 2018 at 10:40:06AM -0400, Harry Wentland wrote:
>>> On 2018-04-20 04:32 PM, Manasi Navare wrote:
 On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
>> Michel Dänzer  writes:
>>> Time-based presentation seems to be the right approach for preventing
>>> micro-stutter in games as well, Croteam developers have been researching
>>> this.
>>
>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
>> extension offer the ability to specify the desired display time in
>> seconds.
>>
>> Similarly, I'd suggest that the min/max display refresh rate values be
>> advertised as time between frames rather than frames per second.

 So there is a global min and max refresh rate as advertised by the monitor
 range descriptor. That I guess can be exposed as a global range in terms of
 min and max time between frames as a global property of the connector.

We don't need the per-mode min and max refresh rate to be exposed, right?
>>>
>>> If I understand VRR right, with CinemaVRR acceptable refresh rates might 
>>> fall outside the range advertised by the monitor. Would we
>>>  1) advertise 24/1.001 as a lower bound,
>>>  2) expect media apps to use the lower bound simply for informational 
>>> purposes,
>>>  3) or simply not support CinemaVRR?
>>>
>>> (1) has the added caveat that not all reported rates would be supported.
>>>
>>> Alternatively a bit could indicate that CinemaVRR is supported, but I'm not
>>> sure if user mode would need all these details.
>>>
>>> Harry
>>
>> Are there monitors with special CinemaVRR support? In that case we need to
>> understand how those monitors advertise the monitor range, and whether they
>> have a bit in the EDID that indicates they are CinemaVRR-capable as opposed
>> to just Adaptive Sync/VRR.
>> Harry, if you have one of those monitors, could you send the EDID dump for
>> that?
> 
> As long as some multiple of the 24/1.001 refresh rate is within the
> officially supported refresh rate range this should work out. Maybe we'll
> end up uploading 2x (to run at ~48Hz), maybe the kernel only uploads at
> 24Hz. But should all be fine.
> 

Would the kernel driver upload at 48Hz when UMD asks for 24Hz, or would UMD be
expected to submit double frames?

If the kernel driver supports frame doubling (like our DC driver) we would
probably report half of the monitor-reported min refresh rate (or rather
double the monitor-reported max frame time).

Harry

> Ofc if we have CinemaVRR screens which don't fit this, then maybe we need
> to figure out something ...
> -Daniel
> 
>>
>> Manasi
>>
>>>

>>
>> I'd also encourage using a single unit for all of these values,
>> preferably nanoseconds. Absolute times should all be referenced to
>> CLOCK_MONOTONIC.
>
> +1 on everything Keith said. I somehow got dragged into khr vk
> discussions around preventing micro-stuttering, and consensus seems to
> be that timestamps for scheduling frames is the way to go, most likely
> absolute ones (not everything is running Linux unfortunately, so can't
> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
> -Daniel

 And yes I also got consensus from Mesa and media folks about using the
 absolute timestamp for scheduling the frames and then the driver will
 modify the vblank logic to "present no earlier than the timestamp"

 Manasi

> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

> 


RE: RFC for a render API to support adaptive sync and VRR

2018-04-24 Thread Cyr, Aric
> From: Daniel Vetter [mailto:daniel.vet...@ffwll.ch] On Behalf Of Daniel Vetter
> Sent: Tuesday, April 24, 2018 08:10
> On Mon, Apr 23, 2018 at 02:19:44PM -0700, Manasi Navare wrote:
> > On Mon, Apr 23, 2018 at 10:40:06AM -0400, Harry Wentland wrote:
> > > On 2018-04-20 04:32 PM, Manasi Navare wrote:
> > > > On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
> > > >> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  
> > > >> wrote:
> > > >>> Michel Dänzer  writes:
> > >  Time-based presentation seems to be the right approach for preventing
> > >  micro-stutter in games as well, Croteam developers have been 
> > >  researching
> > >  this.
> > > >>>
> > > >>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
> > > >>> extension offer the ability to specify the desired display time in
> > > >>> seconds.
> > > >>>
> > > >>> Similarly, I'd suggest that the min/max display refresh rate values be
> > > >>> advertised as time between frames rather than frames per second.
> > > >
> > > > So there is a global min and max refresh rate as advertised by the 
> > > > monitor
> > > > range descriptor. That I guess can be exposed as a global range in 
> > > > terms of
> > > > min and max time between frames as a global property of the connector.
> > > >
> > > We don't need the per-mode min and max refresh rate to be exposed, right?
> > >
> > > If I understand VRR right, with CinemaVRR acceptable refresh rates might 
> > > fall outside the range advertised by the monitor. Would
> we
> > >  1) advertise 24/1.001 as a lower bound,
> > >  2) expect media apps to use the lower bound simply for informational 
> > > purposes,
> > >  3) or simply not support CinemaVRR?
> > >
> > > (1) has the added caveat that not all reported rates would be supported.
> > >
> > > Alternatively a bit could indicate that CinemaVRR is supported, but I'm
> > > not sure if user mode would need all these details.
> > >
> > > Harry
> >
> > Are there monitors with special CinemaVRR support? In that case we need to
> > understand how those monitors advertise the monitor range, and whether they
> > have a bit in the EDID that indicates they are CinemaVRR-capable as opposed
> > to just Adaptive Sync/VRR.
> > Harry, if you have one of those monitors, could you send the EDID dump for
> > that?
> 
> As long as some multiple of the 24/1.001 refresh rate is within the
> officially supported refresh rate range this should work out. Maybe we'll
> end up uploading 2x (to run at ~48Hz), maybe the kernel only uploads at
> 24Hz. But should all be fine.

Ya, I think this makes the most sense.  An app can really only know when it
ideally wants to present the next frame.
We should let drivers figure out how best to accommodate that time, whether
with Adaptive Sync or not, frame doubling, etc.
Various hardware may have different capabilities whose complexity we really
don't want to expose at the app level.

All an app needs to know is when it wants to present a frame (input to the
kernel), and at what time the frame was actually presented (feedback from the
kernel).  Anything else is superfluous and likely overcomplicates things.
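That two-field contract can be sketched as follows; this is my own illustrative model, not a proposed kernel ABI, and the names (`PresentRequest`, `PresentFeedback`, `presentation_error_ns`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PresentRequest:
    target_present_ns: int   # input: when the app wants the frame shown

@dataclass
class PresentFeedback:
    actual_present_ns: int   # output: when the kernel actually flipped it

def presentation_error_ns(req: PresentRequest, fb: PresentFeedback) -> int:
    # The only derived quantity an app needs for its pacing decisions:
    # how late (positive) or early (negative) the frame actually landed.
    return fb.actual_present_ns - req.target_present_ns
```

Everything else (frame doubling, VRR range fitting, Adaptive Sync on/off) stays driver-internal, which is exactly the point being made.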

Regards,

--
ARIC CYR 
PMTS Software Engineer | SW - Display Technologies




Re: RFC for a render API to support adaptive sync and VRR

2018-04-24 Thread Daniel Vetter
On Mon, Apr 23, 2018 at 02:19:44PM -0700, Manasi Navare wrote:
> On Mon, Apr 23, 2018 at 10:40:06AM -0400, Harry Wentland wrote:
> > On 2018-04-20 04:32 PM, Manasi Navare wrote:
> > > On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
> > >> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
> > >>> Michel Dänzer  writes:
> >  Time-based presentation seems to be the right approach for preventing
> >  micro-stutter in games as well, Croteam developers have been 
> >  researching
> >  this.
> > >>>
> > >>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
> > >>> extension offer the ability to specify the desired display time in
> > >>> seconds.
> > >>>
> > >>> Similarly, I'd suggest that the min/max display refresh rate values be
> > >>> advertised as time between frames rather than frames per second.
> > > 
> > > So there is a global min and max refresh rate as advertised by the monitor
> > > range descriptor. That I guess can be exposed as a global range in terms 
> > > of
> > > min and max time between frames as a global property of the connector.
> > > 
> > We don't need the per-mode min and max refresh rate to be exposed, right?
> > 
> > If I understand VRR right, with CinemaVRR acceptable refresh rates might 
> > fall outside the range advertised by the monitor. Would we
> >  1) advertise 24/1.001 as a lower bound,
> >  2) expect media apps to use the lower bound simply for informational 
> > purposes,
> >  3) or simply not support CinemaVRR?
> > 
> > (1) has the added caveat that not all reported rates would be supported.
> > 
> > Alternatively a bit could indicate that CinemaVRR is supported, but I'm not
> > sure if user mode would need all these details.
> > 
> > Harry
> 
> Are there monitors with special CinemaVRR support? In that case we need to
> understand how those monitors advertise the monitor range, and whether they
> have a bit in the EDID that indicates they are CinemaVRR-capable as opposed
> to just Adaptive Sync/VRR.
> Harry, if you have one of those monitors, could you send the EDID dump for
> that?

As long as some multiple of the 24/1.001 refresh rate is within the
officially supported refresh rate range this should work out. Maybe we'll
end up uploading 2x (to run at ~48Hz), maybe the kernel only uploads at
24Hz. But should all be fine.
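The multiple-picking described here can be sketched like so; this is my own illustration (with a made-up helper name), not kernel code: flip each content frame k times, choosing the smallest k whose per-flip duration fits the panel's VRR frame-time range.

```python
NSEC_PER_SEC = 1_000_000_000

def pick_flip_multiplier(frame_ns, vrr_min_ns, vrr_max_ns):
    """Smallest k such that presenting each content frame k times gives a
    per-flip duration inside the panel's [vrr_min_ns, vrr_max_ns] range,
    or None if no integer multiple fits."""
    k = 1
    while frame_ns // k > vrr_max_ns:   # flips too far apart -> double again
        k += 1
    flip_ns = frame_ns // k
    return k if flip_ns >= vrr_min_ns else None

# 24/1.001 fps content (~41.7 ms frames) on a 40-60 Hz VRR panel:
frame_ns = round(NSEC_PER_SEC * 1.001 / 24)        # ~41_708_333 ns
k = pick_flip_multiplier(frame_ns,
                         round(NSEC_PER_SEC / 60),  # 16_666_667 ns
                         round(NSEC_PER_SEC / 40))  # 25_000_000 ns
# k == 2: each frame is flipped twice, i.e. the panel runs at ~47.95 Hz
```

This also shows Harry's CinemaVRR case (1) concretely: 24/1.001 Hz itself falls below the 40 Hz lower bound, yet its 2x multiple lands inside the range.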

Ofc if we have CinemaVRR screens which don't fit this, then maybe we need
to figure out something ...
-Daniel

> 
> Manasi
> 
> > 
> > > 
> > >>>
> > >>> I'd also encourage using a single unit for all of these values,
> > >>> preferably nanoseconds. Absolute times should all be referenced to
> > >>> CLOCK_MONOTONIC.
> > >>
> > >> +1 on everything Keith said. I somehow got dragged into khr vk
> > >> discussions around preventing micro-stuttering, and consensus seems to
> > >> be that timestamps for scheduling frames is the way to go, most likely
> > >> absolute ones (not everything is running Linux unfortunately, so can't
> > >> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
> > >> -Daniel
> > > 
> > > And yes I also got consensus from Mesa and media folks about using the
> > > absolute timestamp for scheduling the frames and then the driver will
> > > modify the vblank logic to "present no earlier than the timestamp"
> > > 
> > > Manasi
> > > 
> > >> -- 
> > >> Daniel Vetter
> > >> Software Engineer, Intel Corporation
> > >> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: RFC for a render API to support adaptive sync and VRR

2018-04-23 Thread Manasi Navare
On Mon, Apr 23, 2018 at 10:40:06AM -0400, Harry Wentland wrote:
> On 2018-04-20 04:32 PM, Manasi Navare wrote:
> > On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
> >> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
> >>> Michel Dänzer  writes:
>  Time-based presentation seems to be the right approach for preventing
>  micro-stutter in games as well, Croteam developers have been researching
>  this.
> >>>
> >>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
> >>> extension offer the ability to specify the desired display time in
> >>> seconds.
> >>>
> >>> Similarly, I'd suggest that the min/max display refresh rate values be
> >>> advertised as time between frames rather than frames per second.
> > 
> > So there is a global min and max refresh rate as advertised by the monitor
> > range descriptor. That I guess can be exposed as a global range in terms of
> > min and max time between frames as a global property of the connector.
> > 
> > We don't need the per-mode min and max refresh rate to be exposed, right?
> 
> If I understand VRR right, with CinemaVRR acceptable refresh rates might fall 
> outside the range advertised by the monitor. Would we
>  1) advertise 24/1.001 as a lower bound,
>  2) expect media apps to use the lower bound simply for informational 
> purposes,
>  3) or simply not support CinemaVRR?
> 
> (1) has the added caveat that not all reported rates would be supported.
> 
> Alternatively a bit could indicate that CinemaVRR is supported, but I'm not
> sure if user mode would need all these details.
> 
> Harry

Are there monitors with special CinemaVRR support? In that case we need to
understand how those monitors advertise the monitor range, and whether they
have a bit in the EDID that indicates they are CinemaVRR-capable as opposed to
just Adaptive Sync/VRR.
Harry, if you have one of those monitors, could you send the EDID dump for that?

Manasi

> 
> > 
> >>>
> >>> I'd also encourage using a single unit for all of these values,
> >>> preferably nanoseconds. Absolute times should all be referenced to
> >>> CLOCK_MONOTONIC.
> >>
> >> +1 on everything Keith said. I somehow got dragged into khr vk
> >> discussions around preventing micro-stuttering, and consensus seems to
> >> be that timestamps for scheduling frames is the way to go, most likely
> >> absolute ones (not everything is running Linux unfortunately, so can't
> >> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
> >> -Daniel
> > 
> > And yes I also got consensus from Mesa and media folks about using the
> > absolute timestamp for scheduling the frames and then the driver will
> > modify the vblank logic to "present no earlier than the timestamp"
> > 
> > Manasi
> > 
> >> -- 
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> +41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: RFC for a render API to support adaptive sync and VRR

2018-04-23 Thread Harry Wentland
On 2018-04-20 04:32 PM, Manasi Navare wrote:
> On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
>> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
>>> Michel Dänzer  writes:
 Time-based presentation seems to be the right approach for preventing
 micro-stutter in games as well, Croteam developers have been researching
 this.
>>>
>>> Both the Vulkan GOOGLE_display_timing extension and X11 Present
>>> extension offer the ability to specify the desired display time in
>>> seconds.
>>>
>>> Similarly, I'd suggest that the min/max display refresh rate values be
>>> advertised as time between frames rather than frames per second.
> 
> So there is a global min and max refresh rate as advertised by the monitor
> range descriptor. That I guess can be exposed as a global range in terms of
> min and max time between frames as a global property of the connector.
> 
> We don't need the per-mode min and max refresh rate to be exposed, right?

If I understand VRR right, with CinemaVRR acceptable refresh rates might fall 
outside the range advertised by the monitor. Would we
 1) advertise 24/1.001 as a lower bound,
 2) expect media apps to use the lower bound simply for informational purposes,
 3) or simply not support CinemaVRR?

(1) has the added caveat that not all reported rates would be supported.

Alternatively a bit could indicate that CinemaVRR is supported, but I'm not
sure if user mode would need all these details.

Harry

> 
>>>
>>> I'd also encourage using a single unit for all of these values,
>>> preferably nanoseconds. Absolute times should all be referenced to
>>> CLOCK_MONOTONIC.
>>
>> +1 on everything Keith said. I somehow got dragged into khr vk
>> discussions around preventing micro-stuttering, and consensus seems to
>> be that timestamps for scheduling frames is the way to go, most likely
>> absolute ones (not everything is running Linux unfortunately, so can't
>> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
>> -Daniel
> 
> And yes I also got consensus from Mesa and media folks about using the
> absolute timestamp for scheduling the frames and then the driver will
> modify the vblank logic to "present no earlier than the timestamp"
> 
> Manasi
> 
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> +41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: RFC for a render API to support adaptive sync and VRR

2018-04-21 Thread Daniel Stone
Hi,

On 20 April 2018 at 21:32, Manasi Navare  wrote:
> On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
>> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
>> > I'd also encourage using a single unit for all of these values,
>> > preferably nanoseconds. Absolute times should all be referenced to
>> > CLOCK_MONOTONIC.
>>
>> +1 on everything Keith said. I somehow got dragged into khr vk
>> discussions around preventing micro-stuttering, and consensus seems to
>> be that timestamps for scheduling frames is the way to go, most likely
>> absolute ones (not everything is running Linux unfortunately, so can't
>> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
>
> And yes I also got consensus from Mesa and media folks about using the
> absolute timestamp for scheduling the frames and then the driver will
> modify the vblank logic to "present no earlier than the timestamp"

Whilst we're all piling in, here's another AOL. We didn't yet
implement this for Wayland because of the fun involved in adding a
FIFO mode to a very mailbox window system, but at some point we'll
have to suck it up and push it.

Cheers,
Daniel


Re: RFC for a render API to support adaptive sync and VRR

2018-04-20 Thread Manasi Navare
On Wed, Apr 18, 2018 at 09:39:02AM +0200, Daniel Vetter wrote:
> On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
> > Michel Dänzer  writes:
> >> Time-based presentation seems to be the right approach for preventing
> >> micro-stutter in games as well, Croteam developers have been researching
> >> this.
> >
> > Both the Vulkan GOOGLE_display_timing extension and X11 Present
> > extension offer the ability to specify the desired display time in
> > seconds.
> >
> > Similarly, I'd suggest that the min/max display refresh rate values be
> > advertised as time between frames rather than frames per second.

So there is a global min and max refresh rate as advertised by the monitor
range descriptor. That I guess can be exposed as a global range in terms of
min and max time between frames as a global property of the connector.

We don't need the per-mode min and max refresh rate to be exposed, right?

> >
> > I'd also encourage using a single unit for all of these values,
> > preferably nanoseconds. Absolute times should all be referenced to
> > CLOCK_MONOTONIC.
> 
> +1 on everything Keith said. I somehow got dragged into khr vk
> discussions around preventing micro-stuttering, and consensus seems to
> be that timestamps for scheduling frames is the way to go, most likely
> absolute ones (not everything is running Linux unfortunately, so can't
> go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
> -Daniel

And yes I also got consensus from Mesa and media folks about using the
absolute timestamp for scheduling the frames and then the driver will
modify the vblank logic to "present no earlier than the timestamp"
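A toy model of that "present no earlier than the timestamp" rule on a VRR panel (my own sketch, not driver code): the flip lands at the requested absolute timestamp, clamped into the window that the panel's min/max frame times allow since the previous flip.

```python
def scheduled_flip_ns(target_ns: int, last_flip_ns: int,
                      vrr_min_frame_ns: int, vrr_max_frame_ns: int) -> int:
    """Absolute time at which the flip actually happens."""
    earliest = last_flip_ns + vrr_min_frame_ns  # panel cannot refresh faster
    latest = last_flip_ns + vrr_max_frame_ns    # panel must refresh by then
    return min(max(target_ns, earliest), latest)

# Target 20 ms after the last flip on a 40-144 Hz panel: honoured exactly,
# since 20 ms lies inside the ~6.94-25 ms frame-time window.
flip = scheduled_flip_ns(20_000_000, 0, 6_944_444, 25_000_000)
```

If the target lies beyond `latest`, a real driver would repeat the previous frame at the timeout and the window would restart from that repeat; the sketch ignores that refill step.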

Manasi

> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: RFC for a render API to support adaptive sync and VRR

2018-04-18 Thread Daniel Vetter
On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
> Michel Dänzer  writes:
>> Time-based presentation seems to be the right approach for preventing
>> micro-stutter in games as well, Croteam developers have been researching
>> this.
>
> Both the Vulkan GOOGLE_display_timing extension and X11 Present
> extension offer the ability to specify the desired display time in
> seconds.
>
> Similarly, I'd suggest that the min/max display refresh rate values be
> advertised as time between frames rather than frames per second.
>
> I'd also encourage using a single unit for all of these values,
> preferably nanoseconds. Absolute times should all be referenced to
> CLOCK_MONOTONIC.

+1 on everything Keith said. I somehow got dragged into khr vk
discussions around preventing micro-stuttering, and consensus seems to
be that timestamps for scheduling frames is the way to go, most likely
absolute ones (not everything is running Linux unfortunately, so can't
go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: RFC for a render API to support adaptive sync and VRR

2018-04-17 Thread Keith Packard
Michel Dänzer  writes:

> Time-based presentation seems to be the right approach for preventing
> micro-stutter in games as well, Croteam developers have been researching
> this.

Both the Vulkan GOOGLE_display_timing extension and X11 Present
extension offer the ability to specify the desired display time in
seconds.

Similarly, I'd suggest that the min/max display refresh rate values be
advertised as time between frames rather than frames per second.

I'd also encourage using a single unit for all of these values,
preferably nanoseconds. Absolute times should all be referenced to
CLOCK_MONOTONIC.

-- 
-keith




Re: RFC for a render API to support adaptive sync and VRR

2018-04-13 Thread Harry Wentland
On 2018-04-12 05:38 PM, Stéphane Marchesin wrote:
> On Tue, Apr 10, 2018 at 12:37 AM, Michel Dänzer  wrote:
>> On 2018-04-10 08:45 AM, Christian König wrote:
>>> Am 09.04.2018 um 23:45 schrieb Manasi Navare:
 Thanks for initiating the discussion. Find my comments below:
 On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
> On 2018-04-09 03:56 PM, Harry Wentland wrote:
>>
>> === A DRM render API to support variable refresh rates ===
>>
>> In order to benefit from adaptive sync and VRR userland needs a way
>> to let us know whether to vary frame timings or to target a
>> different frame time. These can be provided as atomic properties on
>> a CRTC:
>>   * bool variable_refresh_compatible
>>   * int  target_frame_duration_ns (nanosecond frame duration)
>>
>> This gives us the following cases:
>>
>> variable_refresh_compatible = 0, target_frame_duration_ns = 0
>>   * drive monitor at timing's normal refresh rate
>>
>> variable_refresh_compatible = 1, target_frame_duration_ns = 0
>>   * send new frame to monitor as soon as it's available, if within
>> min/max of monitor's reported capabilities
>>
>> variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
>>   * send new frame to monitor with the specified
>> target_frame_duration_ns
>>
>> When a target_frame_duration_ns or variable_refresh_compatible
>> cannot be supported the atomic check will reject the commit.
>>
 What I would like is two sets of properties on a CRTC or preferably on
 a connector:

 KMD properties that UMD can query:
 * vrr_capable -  This will be an immutable property for exposing
 hardware's capability of supporting VRR. This will be set by the
 kernel after
 reading the EDID mode information and monitor range capabilities.
 * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
 refresh rates supported.
 These properties are optional and will be created and attached to the
 DP/eDP connector when the connector
is getting initialized.
>>>
>>> Mhm, aren't those properties actually per mode and not per CRTC/connector?
>>>
 Properties that you mentioned above that the UMD can set before kernel
 can enable VRR functionality
 *bool vrr_enable or vrr_compatible
 target_frame_duration_ns
>>>
>>> Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
>>> name/semantics.
>>>
>>> We should use an absolute timestamp where the frame should be presented,
>>> otherwise you could run into a bunch of trouble with IOCTL restarts or
>>> missed blanks.
>>
>> Also, a fixed target frame duration isn't suitable even for video
>> playback, due to drift between the video and audio clocks.
>>
>> Time-based presentation seems to be the right approach for preventing
>> micro-stutter in games as well, Croteam developers have been researching
>> this.
> 
> Another case that you can handle with time-based presentation but not
> with refresh-based API is the use of per-scanline flips in conjunction
> with damage rects. For example if you know that the damage rect covers
> a certain Y range, you can flip when you're outside that range if the
> time that you were given allows it. That's even independent from VRR
> displays.
> 

That's an interesting use-case. I don't think we have given much thought to 
damage rects before.

Harry

> Stéphane
> 
> 
>>
>>
>> --
>> Earthling Michel Dänzer   |   http://www.amd.com
>> Libre software enthusiast | Mesa and X developer


Re: RFC for a render API to support adaptive sync and VRR

2018-04-13 Thread Harry Wentland
On 2018-04-13 12:04 PM, Daniel Vetter wrote:
> On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
>> Adding dri-devel, which I should've included from the start.
> 
> Top posting, because I'm lazy and was out sick ...
> 
> Few observations:
> - Stéphane has a great point which seems to have been ignored thus far.
> - Where's the VK extension for this - there must be one :-) Starting with
>   a full implementation for that (based on radv or anv or something like
>   that) might help.

Good point. The intention of this very early RFC was to understand if we're 
sort of thinking along the same lines as the rest of the community before going 
ahead and prototyping something that's not going to work well in the end. The 
focus here was on the kernel API. We haven't done any investigation of VK, GL, 
or MM APIs on this yet and were hoping for some guidance on that. That guidance 
seems to be that from VK and MM API perspectives frame_duration doesn't cut it 
and we should rather pursue an absolute presentation time.

Harry
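For concreteness, here is a sketch of how the two CRTC properties from the quoted RFC might be driven through libdrm's atomic API. The property names come from the RFC; the property-ID lookup is elided, and nothing here reflects a merged kernel interface:

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* vrr_prop and duration_prop would be found by walking the
 * drmModeObjectGetProperties() output for the CRTC and matching the
 * property names proposed in the RFC. */
int request_variable_refresh(int fd, uint32_t crtc_id,
                             uint32_t vrr_prop, uint32_t duration_prop)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -1;

    /* variable_refresh_compatible = 1, target_frame_duration_ns = 0:
     * flip as soon as each frame is ready, within the monitor's range */
    drmModeAtomicAddProperty(req, crtc_id, vrr_prop, 1);
    drmModeAtomicAddProperty(req, crtc_id, duration_prop, 0);

    /* TEST_ONLY first: the RFC says atomic check rejects unsupported
     * combinations, so probe before committing for real */
    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
    if (ret == 0)
        ret = drmModeAtomicCommit(fd, req, 0, NULL);

    drmModeAtomicFree(req);
    return ret;
}
```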

> - Imo if we do a conversion between the vk api and what we feed into the
>   hw, then let's not do a midlayer mistake: That conversion should happen
>   at the bottom, in the kernel driver, maybe assisted with some helpers.
>   Not somewhere in-between, like in libdrm of all places!
> 
> Cheers, Daniel
> 
>>
>> On 2018-04-09 03:56 PM, Harry Wentland wrote:
>>> === What is adaptive sync and VRR? ===
>>>
>>> Adaptive sync has been part of the DisplayPort spec for a while now and 
>>> allows graphics adapters to drive displays with varying frame timings. VRR 
>>> (variable refresh rate) is essentially the same, but defined for HDMI.
>>>
>>>
>>>
>>> === Why allow variable frame timings? ===
>>>
>>> Variable render times don't align with fixed refresh rates, leading to
>>> stuttering, tearing, and/or input lag.
>>>
>>> e.g. (rc = render completion, dr = display refresh)
>>>
>>> rc   B  C       D   E     F
>>> dr  A   B   C   C   D   E   F
>>>
>>>                 ^       ^
>>>              frame  missed
>>>            repeated display
>>>              twice  refresh
>>>
>>>
>>>
>>> === Other use cases of adaptive sync ===
>>>
>>> Beside the variable render case, adaptive sync also allows adjustment of 
>>> refresh rates without a mode change. One such use case would be 24 Hz video.
>>>
>>>
>>>
>>> === A DRM render API to support variable refresh rates ===
>>>
>>> In order to benefit from adaptive sync and VRR userland needs a way to let 
>>> us know whether to vary frame timings or to target a different frame time. 
>>> These can be provided as atomic properties on a CRTC:
>>>  * bool variable_refresh_compatible
>>>  * int  target_frame_duration_ns (nanosecond frame duration)
>>>
>>> This gives us the following cases:
>>>
>>> variable_refresh_compatible = 0, target_frame_duration_ns = 0
>>>  * drive monitor at timing's normal refresh rate
>>>
>>> variable_refresh_compatible = 1, target_frame_duration_ns = 0
>>>  * send new frame to monitor as soon as it's available, if within min/max 
>>> of monitor's reported capabilities
>>>
>>> variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
>>>  * send new frame to monitor with the specified target_frame_duration_ns
>>>
>>> When a target_frame_duration_ns or variable_refresh_compatible cannot be 
>>> supported the atomic check will reject the commit.
>>>
>>>
>>>
>>> === Previous discussions ===
>>>
>>> https://lists.freedesktop.org/archives/dri-devel/2017-October/155207.html
>>>
>>>
>>>
>>> === Feedback and moving forward ===
>>>
>>> I'm hoping to get some feedback on this or continue the discussion on how 
>>> adaptive sync / VRR might look like in the DRM ecosystem. Once there are no 
>>> major concerns or objections left we'll probably start creating some 
>>> patches to sketch this out a bit better and see how it looks in practice.
>>>
>>>
>>>
>>> Cheers,
>>> Harry


Re: RFC for a render API to support adaptive sync and VRR

2018-04-13 Thread Daniel Vetter
On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
> Adding dri-devel, which I should've included from the start.

Top posting, because I'm lazy and was out sick ...

Few observations:
- Stéphane has a great point which seems to have been ignored thus far.
- Where's the VK extension for this - there must be one :-) Starting with
  a full implementation for that (based on radv or anv or something like
  that) might help.
- Imo if we do a conversion between the vk api and what we feed into the
  hw, then let's not do a midlayer mistake: That conversion should happen
  at the bottom, in the kernel driver, maybe assisted with some helpers.
  Not somewhere in-between, like in libdrm of all places!

Cheers, Daniel

> 
> On 2018-04-09 03:56 PM, Harry Wentland wrote:
> > === What is adaptive sync and VRR? ===
> > 
> > Adaptive sync has been part of the DisplayPort spec for a while now and 
> > allows graphics adapters to drive displays with varying frame timings. VRR 
> > (variable refresh rate) is essentially the same, but defined for HDMI.
> > 
> > 
> > 
> > === Why allow variable frame timings? ===
> > 
> > Variable render times don't align with fixed refresh rates, leading to
> > stuttering, tearing, and/or input lag.
> > 
> > e.g. (rc = render completion, dr = display refresh)
> > 
> > rc   B  C       D   E     F
> > dr  A   B   C   C   D   E   F
> >
> >                 ^       ^
> >              frame  missed
> >            repeated display
> >              twice  refresh
> > 
> > 
> > 
> > === Other use cases of adaptive sync ===
> > 
> > Beside the variable render case, adaptive sync also allows adjustment of 
> > refresh rates without a mode change. One such use case would be 24 Hz video.
> > 
> > 
> > 
> > === A DRM render API to support variable refresh rates ===
> > 
> > In order to benefit from adaptive sync and VRR userland needs a way to let 
> > us know whether to vary frame timings or to target a different frame time. 
> > These can be provided as atomic properties on a CRTC:
> >  * bool variable_refresh_compatible
> >  * int  target_frame_duration_ns (nanosecond frame duration)
> > 
> > This gives us the following cases:
> > 
> > variable_refresh_compatible = 0, target_frame_duration_ns = 0
> >  * drive monitor at timing's normal refresh rate
> > 
> > variable_refresh_compatible = 1, target_frame_duration_ns = 0
> >  * send new frame to monitor as soon as it's available, if within min/max 
> > of monitor's reported capabilities
> > 
> > variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
> >  * send new frame to monitor with the specified target_frame_duration_ns
> > 
> > When a target_frame_duration_ns or variable_refresh_compatible cannot be 
> > supported the atomic check will reject the commit.
> > 
> > 
> > 
> > === Previous discussions ===
> > 
> > https://lists.freedesktop.org/archives/dri-devel/2017-October/155207.html
> > 
> > 
> > 
> > === Feedback and moving forward ===
> > 
> > I'm hoping to get some feedback on this or continue the discussion on how 
> > adaptive sync / VRR might look like in the DRM ecosystem. Once there are no 
> > major concerns or objections left we'll probably start creating some 
> > patches to sketch this out a bit better and see how it looks in practice.
> > 
> > 
> > 
> > Cheers,
> > Harry
> > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: RFC for a render API to support adaptive sync and VRR

2018-04-12 Thread Stéphane Marchesin
On Tue, Apr 10, 2018 at 12:37 AM, Michel Dänzer  wrote:
> On 2018-04-10 08:45 AM, Christian König wrote:
>> Am 09.04.2018 um 23:45 schrieb Manasi Navare:
>>> Thanks for initiating the discussion. Find my comments below:
>>> On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
 On 2018-04-09 03:56 PM, Harry Wentland wrote:
>
> === A DRM render API to support variable refresh rates ===
>
> In order to benefit from adaptive sync and VRR userland needs a way
> to let us know whether to vary frame timings or to target a
> different frame time. These can be provided as atomic properties on
> a CRTC:
>   * bool variable_refresh_compatible
>   * int  target_frame_duration_ns (nanosecond frame duration)
>
> This gives us the following cases:
>
> variable_refresh_compatible = 0, target_frame_duration_ns = 0
>   * drive monitor at timing's normal refresh rate
>
> variable_refresh_compatible = 1, target_frame_duration_ns = 0
>   * send new frame to monitor as soon as it's available, if within
> min/max of monitor's reported capabilities
>
> variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
>   * send new frame to monitor with the specified
> target_frame_duration_ns
>
> When a target_frame_duration_ns or variable_refresh_compatible
> cannot be supported the atomic check will reject the commit.
>
>>> What I would like is two sets of properties on a CRTC or preferably on
>>> a connector:
>>>
>>> KMD properties that UMD can query:
>>> * vrr_capable -  This will be an immutable property for exposing
>>> hardware's capability of supporting VRR. This will be set by the
>>> kernel after
>>> reading the EDID mode information and monitor range capabilities.
>>> * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
>>> refresh rates supported.
>>> These properties are optional and will be created and attached to the
>>> DP/eDP connector when the connector
>>> is getting initialized.
>>
>> Mhm, aren't those properties actually per mode and not per CRTC/connector?
>>
>>> Properties that you mentioned above that the UMD can set before kernel
>>> can enable VRR functionality
>>> *bool vrr_enable or vrr_compatible
>>> target_frame_duration_ns
>>
>> Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
>> name/semantics.
>>
>> We should use an absolute timestamp where the frame should be presented,
>> otherwise you could run into a bunch of trouble with IOCTL restarts or
>> missed blanks.
>
> Also, a fixed target frame duration isn't suitable even for video
> playback, due to drift between the video and audio clocks.
>
> Time-based presentation seems to be the right approach for preventing
> micro-stutter in games as well, Croteam developers have been researching
> this.

Another case that you can handle with time-based presentation but not
with refresh-based API is the use of per-scanline flips in conjunction
with damage rects. For example if you know that the damage rect covers
a certain Y range, you can flip when you're outside that range if the
time that you were given allows it. That's even independent from VRR
displays.

Stéphane


>
>
> --
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer


Re: RFC for a render API to support adaptive sync and VRR

2018-04-12 Thread Harry Wentland
On 2018-04-12 07:39 AM, Nicolai Hähnle wrote:
> On 12.04.2018 01:30, Cyr, Aric wrote:
>>> At least with VDPAU, video players are already explicitly specifying the
>>> target presentation time, so no changes should be required at that
>>> level. Don't know about other video APIs.
>>>
>>> The X11 Present extension protocol is also prepared for specifying the
>>> target presentation time already, the support for it just needs to be
>>> implemented.
>>
>> I'm perfectly OK with presentation time-based *API*.  I get it from a user 
>> mode/app perspective, and that's fine.  We need that feedback and would like 
>> help defining that portions of the stack.
>> However, I think it doesn't make as much sense as a *DDI* because it doesn't 
>> correspond to any hardware real or logical (i.e. no one would implement it 
>> in HW this way) and the industry specs aren't defined that way.
>> You can have libdrm or some other usermode component translate your 
>> presentation time into a frame duration and schedule it.  What's the 
>> advantage of having this in kernel besides the fact we lose the intent of 
>> the application and could prevent features and optimizations.  When it gets 
>> to kernel, I think it is much more elegant for the flip structure to contain 
>> a simple duration that says "hey, show this frame on the screen for this 
>> long".  Then we don't need any clocks or timers just some simple math and 
>> program the hardware.
> 
> There isn't necessarily an inherent advantage to having this translation in 
> the kernel. However, we *must* do this translation in a place that is owned 
> by display experts (i.e., you guys), because only you guys know how to 
> actually do that translation reliably and correctly.
> 
> Since your work is currently limited to the kernel, it makes sense to do it 
> in the kernel.

We're actively trying to change this. I want us (the display team) to 
eventually own anything display related across the stack, or at least closely 
work with the owners of those components.

> 
> If the translation doesn't happen in a place that you feel comfortable 
> working on, we're setting ourselves up for a future where this hypothetical 
> future UMD component will get this wrong, and there'll be a lot of 
> finger-pointing between you guys and whoever writes that UMD, with likely 
> little willingness to actually go into the respective other codebase to fix 
> what's wrong. And that's a pretty sucky future.
> 

If finger-pointing happened I'd like to apologize. Again, this is something we 
actively try to change.

Ultimately I'm looking for a solution that works for everyone and is not owned 
on a SW component basis, but rather expertise basis.

Harry

> Cheers,
> Nicolai
> 
> P.S.: I'm also a little surprised that you seem to be saying that requesting 
> a target present time is basically impossible (at least, that's kind of 
> implied by your statement about mGPUs), and yet there's precedent for such 
> APIs in both Vulkan and VDPAU.
> 
> 
>>
>> In short,
>>   1) We can simplify media players' lives by helping them get really, really 
>> close to their content rate, so they wouldn't need any frame rate conversion.
>>   They'll still need A/V syncing though, and variable refresh cannot 
>> solve this and thus is way out of scope of what we're proposing.
>>
>>   2) For gaming, don't even try to guess a frame duration, the 
>> driver/hardware will do a better job every time, just specify duration=0 and 
>> flip as fast as you can.
>>
>> Regards,
>>    Aric
>>
>> P.S. Thanks for the Croteam link.  Interesting, but basically nullified by 
>> variable refresh rate displays.  You won't have 
>> stuttering/microstuttering/juddering/tearing if your display's refresh rate 
>> matches the render/present rate of the game.  Maybe I should grab The Talos 
>> Principle to see how well it works with FreeSync display :)
>>
>> -- 
>> ARIC CYR
>> PMTS Software Engineer | SW – Display Technologies
>>
>>
>>
>>
> 


Re: RFC for a render API to support adaptive sync and VRR

2018-04-12 Thread Michel Dänzer
On 2018-04-12 01:39 PM, Nicolai Hähnle wrote:
> On 12.04.2018 01:30, Cyr, Aric wrote:
>>> At least with VDPAU, video players are already explicitly specifying the
>>> target presentation time, so no changes should be required at that
>>> level. Don't know about other video APIs.
>>>
>>> The X11 Present extension protocol is also prepared for specifying the
>>> target presentation time already, the support for it just needs to be
>>> implemented.
>>
>> I'm perfectly OK with presentation time-based *API*.  I get it from a
>> user mode/app perspective, and that's fine.  We need that feedback and
>> would like help defining that portions of the stack.
>> However, I think it doesn't make as much sense as a *DDI* because it
>> doesn't correspond to any hardware real or logical (i.e. no one would
>> implement it in HW this way) and the industry specs aren't defined
>> that way.
>> You can have libdrm or some other usermode component translate your
>> presentation time into a frame duration and schedule it.  What's the
>> advantage of having this in kernel besides the fact we lose the intent
>> of the application and could prevent features and optimizations.  When
>> it gets to kernel, I think it is much more elegant for the flip
>> structure to contain a simple duration that says "hey, show this frame
>> on the screen for this long".  Then we don't need any clocks or timers
>> just some simple math and program the hardware.
> 
> There isn't necessarily an inherent advantage to having this translation
> in the kernel.

One such advantage is that it doesn't require userspace to predict the
future, where at least in the Vulkan case there is no information to
base the prediction on. I fail to see how that can work at all.


> P.S.: I'm also a little surprised that you seem to be saying that
> requesting a target present time is basically impossible (at least,
> that's kind of implied by your statement about mGPUs), and yet there's
> precedent for such APIs in both Vulkan and VDPAU.

Keep in mind that the constraint is "present no earlier than", which can
be satisfied e.g. by waiting for the target time to pass before
programming the flip to the hardware.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: RFC for a render API to support adaptive sync and VRR

2018-04-12 Thread Nicolai Hähnle

On 12.04.2018 01:30, Cyr, Aric wrote:
>> At least with VDPAU, video players are already explicitly specifying the
>> target presentation time, so no changes should be required at that
>> level. Don't know about other video APIs.
>>
>> The X11 Present extension protocol is also prepared for specifying the
>> target presentation time already, the support for it just needs to be
>> implemented.
>
> I'm perfectly OK with presentation time-based *API*.  I get it from a user
> mode/app perspective, and that's fine.  We need that feedback and would like
> help defining that portions of the stack.
> However, I think it doesn't make as much sense as a *DDI* because it doesn't
> correspond to any hardware real or logical (i.e. no one would implement it in
> HW this way) and the industry specs aren't defined that way.
> You can have libdrm or some other usermode component translate your
> presentation time into a frame duration and schedule it.  What's the
> advantage of having this in kernel besides the fact we lose the intent of
> the application and could prevent features and optimizations.  When it gets
> to kernel, I think it is much more elegant for the flip structure to contain
> a simple duration that says "hey, show this frame on the screen for this
> long".  Then we don't need any clocks or timers just some simple math and
> program the hardware.

There isn't necessarily an inherent advantage to having this translation
in the kernel. However, we *must* do this translation in a place that is
owned by display experts (i.e., you guys), because only you guys know
how to actually do that translation reliably and correctly.

Since your work is currently limited to the kernel, it makes sense to do
it in the kernel.

If the translation doesn't happen in a place that you feel comfortable
working on, we're setting ourselves up for a future where this
hypothetical future UMD component will get this wrong, and there'll be a
lot of finger-pointing between you guys and whoever writes that UMD,
with likely little willingness to actually go into the respective other
codebase to fix what's wrong. And that's a pretty sucky future.

Cheers,
Nicolai

P.S.: I'm also a little surprised that you seem to be saying that
requesting a target present time is basically impossible (at least,
that's kind of implied by your statement about mGPUs), and yet there's
precedent for such APIs in both Vulkan and VDPAU.


> In short,
>   1) We can simplify media players' lives by helping them get really, really
> close to their content rate, so they wouldn't need any frame rate conversion.
>   They'll still need A/V syncing though, and variable refresh cannot solve
> this and thus is way out of scope of what we're proposing.
>
>   2) For gaming, don't even try to guess a frame duration, the
> driver/hardware will do a better job every time, just specify duration=0 and
> flip as fast as you can.
>
> Regards,
>    Aric
>
> P.S. Thanks for the Croteam link.  Interesting, but basically nullified by
> variable refresh rate displays.  You won't have
> stuttering/microstuttering/juddering/tearing if your display's refresh rate
> matches the render/present rate of the game.  Maybe I should grab The Talos
> Principle to see how well it works with FreeSync display :)
>
> --
> ARIC CYR
> PMTS Software Engineer | SW – Display Technologies








Re: RFC for a render API to support adaptive sync and VRR

2018-04-12 Thread Michel Dänzer
On 2018-04-12 01:30 AM, Cyr, Aric wrote:
>> From: Michel Dänzer [mailto:mic...@daenzer.net]
>> Sent: Wednesday, April 11, 2018 05:50
>> On 2018-04-11 08:57 AM, Nicolai Hähnle wrote:
>>> On 10.04.2018 23:45, Cyr, Aric wrote:
 How does it work fine today given that all kernel seems to know is
 'current' or 'current+1' vsyncs.
 Presumably the applications somehow schedule all this just fine.
 If this works without variable refresh for 60Hz, will it not work for
 a fixed-rate "48Hz" monitor (assuming a 24Hz video)?
>>>
>>> You're right. I guess a better way to state the point is that it
>>> *doesn't* really work today with fixed refresh, but if we're going to
>>> introduce a new API, then why not do so in a way that can fix these
>>> additional problems as well?
>>
>> Exactly. With a fixed frame duration, we'll still have fundamentally the
>> same issues as we currently do without variable refresh, not making use
>> of the full potential of variable refresh.
> 
> I see.  Well then, that makes this sort of orthogonal to the discussion.
> If you say that there are no media players on Linux today that can maintain 
> audio/video sync with a 60Hz display, then that problem is much larger than 
> the one we're trying to solve here.  
> By the way, I don't believe that is a true statement :)

Indeed, that's not what we're saying:

With fixed refresh rate, audio/video sync cannot be maintained without
occasional visual artifacts, due to skipped / repeated frames.


>>> How about what I wrote in an earlier mail of having attributes:
>>>
>>> - target_present_time_ns
>>> - hint_frame_time_ns (optional)
>>>
>>> ... and if a video player set both, the driver could still do the
>>> optimizations you've explained?
>>
>> FWIW, I don't think a property would be a good mechanism for the target
>> presentation time.
>>
>> At least with VDPAU, video players are already explicitly specifying the
>> target presentation time, so no changes should be required at that
>> level. Don't know about other video APIs.
>>
>> The X11 Present extension protocol is also prepared for specifying the
>> target presentation time already, the support for it just needs to be
>> implemented.
> 
> I'm perfectly OK with presentation time-based *API*.  I get it from a user 
> mode/app perspective, and that's fine.  We need that feedback and would like 
> help defining that portions of the stack.
> However, I think it doesn't make as much sense as a *DDI* because it doesn't 
> correspond to any hardware real or logical (i.e. no one would implement it in 
> HW this way) and the industry specs aren't defined that way.

Which specs are you referring to? There are at least two specs (VDPAU
and VK_GOOGLE_display_timing) which are defined that way.

> You can have libdrm or some other usermode component translate your 
> presentation time into a frame duration and schedule it.

This cuts both ways.

> What's the advantage of having this in kernel besides the fact we lose the 
> intent of the application and could prevent features and optimizations.

To me, presentation time is much clearer as intent of the application.
It can express all the same things frame duration can, but not the other
way around.


> When it gets to kernel, I think it is much more elegant for the flip structure
> to contain a simple duration that says "hey, show this frame on the screen for
> this long".

A game cannot know this in advance, can it? Per the Croteam
presentation, it depends on when this frame is actually presented (among
other things).


>  1) We can simplify media players' lives by helping them get really, really 
> close to their content rate, so they wouldn't need any frame rate conversion. 
>  

At least with VDPAU, media players shouldn't need any changes at all, as
they're already explicitly specifying the presentation times.


>  They'll still need A/V syncing though, and variable refresh cannot solve 
> this

I've been trying to explain that it can, perfectly. Can you explain why
you think it can't, or ask if something isn't clear about what I've been
explaining?


> P.S. Thanks for the Croteam link.  Interesting, but basically nullified by 
> variable refresh rate displays.

According to whom / what? I don't see why it wouldn't apply to variable
refresh as well. Without time-based presentation, the game cannot
prevent a frame from being presented too early.

There is no doubt that the artifacts of not doing this properly will be
less noticeable with variable refresh, but that doesn't mean they don't
exist.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


RE: RFC for a render API to support adaptive sync and VRR

2018-04-11 Thread Cyr, Aric
> From: Michel Dänzer [mailto:mic...@daenzer.net]
> Sent: Wednesday, April 11, 2018 05:50
> On 2018-04-11 08:57 AM, Nicolai Hähnle wrote:
> > On 10.04.2018 23:45, Cyr, Aric wrote:
> >> How does it work fine today given that all kernel seems to know is
> >> 'current' or 'current+1' vsyncs.
> >> Presumably the applications somehow schedule all this just fine.
> >> If this works without variable refresh for 60Hz, will it not work for
> >> a fixed-rate "48Hz" monitor (assuming a 24Hz video)?
> >
> > You're right. I guess a better way to state the point is that it
> > *doesn't* really work today with fixed refresh, but if we're going to
> > introduce a new API, then why not do so in a way that can fix these
> > additional problems as well?
> 
> Exactly. With a fixed frame duration, we'll still have fundamentally the
> same issues as we currently do without variable refresh, not making use
> of the full potential of variable refresh.

I see.  Well then, that makes this sort of orthogonal to the discussion.
If you say that there are no media players on Linux today that can maintain 
audio/video sync with a 60Hz display, then that problem is much larger than the 
one we're trying to solve here.  
By the way, I don't believe that is a true statement :)

> > Say you have a multi-GPU system, and each GPU has multiple displays
> > attached, and a single application is driving them all. The application
> > queues flips for all displays with the same target_present_time_ns
> > attribute. Starting at some time T, the application simply asks for the
> > same present time T + i * 1667 (or whatever) for frame i from all
> > displays.
[snip]
> > Why would that not work to sync up all displays almost perfectly?
> 
> Seconded.

It doesn't work that way unfortunately.  In theory it sounds great, but if you 
ask anyone who's worked with framelock/genlock, it is a complicated problem.  
The easiest explanation is that you need to be able to atomically program 
registers across multiple GPUs at the same time; this is not possible without 
hardware assist (see AMD's S400 module for example).
We have enough to discuss without this, so let's leave mGPU for another day 
since we can't solve it here anyway.
 
> > Okay, that's interesting. Does this mean that the display driver still
> > programs a refresh rate to some hardware register?

Yes, the driver can, in some cases, update the minimum and maximum vertical 
total on each flip.
In the fixed-rate example, you would set them equal to achieve your desired 
refresh rate.
We don't program the refresh rate directly, just the vertical total min/max 
(which translates to the refresh rate, given a constant pixel clock and 
horizontal total).
Thus, our refresh rate granularity is one line-time, on the order of 
microsecond accuracy.

> > How about what I wrote in an earlier mail of having attributes:
> >
> > - target_present_time_ns
> > - hint_frame_time_ns (optional)
> >
> > ... and if a video player set both, the driver could still do the
> > optimizations you've explained?
> 
> FWIW, I don't think a property would be a good mechanism for the target
> presentation time.
> 
> At least with VDPAU, video players are already explicitly specifying the
> target presentation time, so no changes should be required at that
> level. Don't know about other video APIs.
> 
> The X11 Present extension protocol is also prepared for specifying the
> target presentation time already, the support for it just needs to be
> implemented.

I'm perfectly OK with a presentation time-based *API*.  I get it from a user 
mode/app perspective, and that's fine.  We need that feedback and would like 
help defining those portions of the stack.
However, I think it doesn't make as much sense as a *DDI*, because it doesn't 
correspond to any hardware, real or logical (i.e. no one would implement it in 
HW this way), and the industry specs aren't defined that way.
You can have libdrm or some other usermode component translate your 
presentation time into a frame duration and schedule it.  What's the advantage 
of having this in the kernel, besides the fact that we lose the intent of the 
application and could prevent features and optimizations?  When it gets to the 
kernel, I think it is much more elegant for the flip structure to contain a 
simple duration that says "hey, show this frame on the screen for this long".  
Then we don't need any clocks or timers, just some simple math to program the 
hardware.

In short, 
 1) We can simplify media players' lives by helping them get really, really 
close to their content rate, so they wouldn't need any frame rate conversion.  
 They'll still need A/V syncing though; variable refresh cannot solve that, 
and it is thus well out of scope of what we're proposing.  

 2) For gaming, don't even try to guess a frame duration; the driver/hardware 
will do a better job every time.  Just specify duration=0 and flip as fast as 
you can.

Regards,
  Aric

P.S. Thanks for the Croteam link.  Interesting, but basically nullified by 
variable refresh rate displays.

Re: RFC for a render API to support adaptive sync and VRR

2018-04-11 Thread Michel Dänzer
On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> My guess is they prefer to “do nothing” and let driver/HW manage it,
> otherwise you exempt all existing games from supporting adaptive sync
> without a rewrite or update.
Nobody is saying adaptive sync should only work with explicit target
presentation times provided by the application. We're just arguing that
target presentation time as a mechanism is superior to target refresh
rate, both for video and game use cases. It also trivially allows
emulating "as early as possible" (target presentation time = 0) and
"fixed refresh rate" (target presentation time = start + i * target
frame duration) behaviour, even transparently for the application.




Re: RFC for a render API to support adaptive sync and VRR

2018-04-11 Thread Michel Dänzer
On 2018-04-10 07:25 PM, Cyr, Aric wrote:
>> From: Michel Dänzer [mailto:mic...@daenzer.net]
>> On 2018-04-10 07:13 PM, Cyr, Aric wrote:
 From: Michel Dänzer [mailto:mic...@daenzer.net]
 On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43
>
>> For video games we have a similar situation where a frame is rendered
>> for a certain world time and in the ideal case we would actually
>> display the frame at this world time.
>
> That seems like it would be a poorly written game that flips like
> that, unless they are explicitly trying to throttle the framerate for
> some reason.  When a game presents a completed frame, they’d like
> that to happen as soon as possible.

 What you're describing is what most games have been doing traditionally.
 Croteam's research shows that this results in micro-stuttering, because
 frames may be presented too early. To avoid that, they want to
 explicitly time each presentation as described by Christian.
>>>
>>> Yes, I agree completely.  However that's only truly relevant for fixed
>>> refreshed rate displays.
>>
>> No, it also affects variable refresh; possibly even more in some cases,
>> because the presentation time is less predictable.
> 
> Yes, and that's why you don't want to do it when you have variable refresh.  
> The hardware in the monitor and GPU will do it for you, so why bother?
> The input to their algorithms will be noisy, causing worse estimates.  If 
> you just present as fast as you can, it'll just work (within reason).

If a frame is presented earlier than the time corresponding to the state
of the world as displayed in the frame, it results in stutter, just as
when it's presented too late.


> The majority of gamers want maximum FPS for their games, and there's quite 
> frequently outrage at a particular game when they are limited to something 
> lower than what their monitor could otherwise support (i.e. I don't want my 
> game limited to 30Hz if I have a shiny 144Hz gaming display I paid good money 
> for).

That doesn't (have to) happen.


See
https://www.gdcvault.com/play/1025407/Advanced-Graphics-Techniques-Tutorial-The
for Croteam's talk about this at this year's GDC. It says the best API
available so far is the Vulkan extension VK_GOOGLE_display_timing, which
(among other things) allows specifying the earliest desired presentation
time via VkPresentTimeGOOGLE::desiredPresentTime . (The talk also
mentions that they previously experimented with VDPAU, because it allows
specifying the target presentation time)




Re: RFC for a render API to support adaptive sync and VRR

2018-04-11 Thread Michel Dänzer
On 2018-04-11 08:57 AM, Nicolai Hähnle wrote:
> On 10.04.2018 23:45, Cyr, Aric wrote:
> For video games we have a similar situation where a frame is
> rendered
> for a certain world time and in the ideal case we would actually
> display the frame at this world time.

 That seems like it would be a poorly written game that flips like
 that, unless they are explicitly trying to throttle the
 framerate for
 some reason.  When a game presents a completed frame, they’d like
 that to happen as soon as possible.
>>>
>>> What you're describing is what most games have been doing
>>> traditionally.
>>> Croteam's research shows that this results in micro-stuttering,
>>> because
>>> frames may be presented too early. To avoid that, they want to
>>> explicitly time each presentation as described by Christian.
>>
>> Yes, I agree completely.  However that's only truly relevant for
>> fixed
>> refreshed rate displays.
>
> No, it also affects variable refresh; possibly even more in some
> cases,
> because the presentation time is less predictable.

 Yes, and that's why you don't want to do it when you have variable
 refresh.  The hardware in the monitor and GPU will do it for you,
>>> so why bother?
>>>
>>> I think Michel's point is that the monitor and GPU hardware *cannot*
>>> really do this, because there's synchronization with audio to take into
>>> account, which the GPU or monitor don't know about.
>>
>> How does it work fine today, given that all the kernel seems to know is
>> 'current' or 'current+1' vsyncs?
>> Presumably the applications somehow schedule all this just fine.
>> If this works without variable refresh for 60Hz, will it not work for
>> a fixed-rate "48Hz" monitor (assuming a 24Hz video)?
> 
> You're right. I guess a better way to state the point is that it
> *doesn't* really work today with fixed refresh, but if we're going to
> introduce a new API, then why not do so in a way that can fix these
> additional problems as well?

Exactly. With a fixed frame duration, we'll still have fundamentally the
same issues as we currently do without variable refresh, not making use
of the full potential of variable refresh.


> Say you have a multi-GPU system, and each GPU has multiple displays
> attached, and a single application is driving them all. The application
> queues flips for all displays with the same target_present_time_ns
> attribute. Starting at some time T, the application simply asks for the
> same present time T + i * 1667 (or whatever) for frame i from all
> displays.

BTW, this is an interesting side point I've wanted to make: Any
applications / use cases which really do want a fixed refresh rate can
trivially do it with time-based presentation like this.


> Of course it's to be expected that some (or all) of the displays will
> not be able to hit the target time on the first bunch of flips due to
> hardware limitations, but as long as the range of supported frame times
> is wide enough, I'd expect all of them to drift towards presenting at
> the correct time eventually, even across multiple GPUs, with this simple
> scheme.
> 
> Why would that not work to sync up all displays almost perfectly?

Seconded.


> How about what I wrote in an earlier mail of having attributes:
> 
> - target_present_time_ns
> - hint_frame_time_ns (optional)
> 
> ... and if a video player set both, the driver could still do the
> optimizations you've explained?

FWIW, I don't think a property would be a good mechanism for the target
presentation time.

At least with VDPAU, video players are already explicitly specifying the
target presentation time, so no changes should be required at that
level. Don't know about other video APIs.

The X11 Present extension protocol is also prepared for specifying the
target presentation time already, the support for it just needs to be
implemented.




Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Nicolai Hähnle

On 10.04.2018 23:45, Cyr, Aric wrote:

For video games we have a similar situation where a frame is rendered
for a certain world time and in the ideal case we would actually
display the frame at this world time.


That seems like it would be a poorly written game that flips like
that, unless they are explicitly trying to throttle the framerate for
some reason.  When a game presents a completed frame, they’d like
that to happen as soon as possible.


What you're describing is what most games have been doing traditionally.
Croteam's research shows that this results in micro-stuttering, because
frames may be presented too early. To avoid that, they want to
explicitly time each presentation as described by Christian.


Yes, I agree completely.  However that's only truly relevant for fixed
refreshed rate displays.


No, it also affects variable refresh; possibly even more in some cases,
because the presentation time is less predictable.


Yes, and that's why you don't want to do it when you have variable refresh.  
The hardware in the monitor and GPU will do it for you,

so why bother?

I think Michel's point is that the monitor and GPU hardware *cannot*
really do this, because there's synchronization with audio to take into
account, which the GPU or monitor don't know about.


How does it work fine today, given that all the kernel seems to know is 'current' or 
'current+1' vsyncs?
Presumably the applications somehow schedule all this just fine.
If this works without variable refresh for 60Hz, will it not work for a fixed-rate 
"48Hz" monitor (assuming a 24Hz video)?


You're right. I guess a better way to state the point is that it 
*doesn't* really work today with fixed refresh, but if we're going to 
introduce a new API, then why not do so in a way that can fix these 
additional problems as well?




Also, as I wrote separately, there's the case of synchronizing multiple
monitors.


For multimonitor to work with VRR, the displays will have to be timing and flip 
synchronized.
This is impossible for an application to manage; it needs driver/HW control, or 
you end up with one display flipping before the other, which looks terrible.
And definitely forget about multiGPU without professional workstation-type 
support needed to sync the displays across adapters.


I'm not a display expert, but I find it hard to believe that it's that 
difficult. Perhaps you can help us understand?


Say you have a multi-GPU system, and each GPU has multiple displays 
attached, and a single application is driving them all. The application 
queues flips for all displays with the same target_present_time_ns 
attribute. Starting at some time T, the application simply asks for the 
same present time T + i * 1667 (or whatever) for frame i from all 
displays.


Of course it's to be expected that some (or all) of the displays will 
not be able to hit the target time on the first bunch of flips due to 
hardware limitations, but as long as the range of supported frame times 
is wide enough, I'd expect all of them to drift towards presenting at 
the correct time eventually, even across multiple GPUs, with this simple 
scheme.


Why would that not work to sync up all displays almost perfectly?


[snip]

Are there any real problems with exposing an absolute target present time?


Realistically, how far into the future are you requesting a presentation time? 
Won't it almost always be something like current_time+1000/video_frame_rate?
If so, why not just tell the driver to set 1000/video_frame_rate and have the 
GPU/monitor create nicely spaced VSYNCs for you that match the source content?

In fact, you probably wouldn't even need to change your video player at all, 
other than having it pass the target_frame_duration_ns.  You could consider 
this a 'hint' as you suggested, since it cannot be guaranteed in cases where your 
driver or HW doesn't support variable refresh.  If the target_frame_duration_ns 
hint is supported/applied, then the video app should have nothing extra to do 
that it wouldn't already do for any arbitrary fixed-refresh rate display.  If 
not supported (say the drm_atomic_check fails with -EINVAL or something), the 
video app can fall back and stop requesting a fixed target_frame_duration_ns.

A fundamental problem I have with a target present time, though, is how to 
accommodate present times that are larger than one VSYNC time.  If my monitor has 
a 40Hz-60Hz variable refresh range, it's easy to translate "my content is 24Hz, 
repeat this next frame an integer number of times so that it lands within the 
monitor range".  The driver fixes the display to an even 48Hz and everything is 
good (no worse than a 30Hz clip on a traditional 60Hz display anyway).  This 
frame-doubling is all hardware-based and doesn't require any polling.

Now if you change that to "show my content in at least X nanoseconds" it can work on all 
displays, but the intent of the app is gone and driver/GPU/display cannot optimize.  For example, 
the HDMI VRR spec defines a "CinemaVRR

RE: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Cyr, Aric
> From: Manasi Navare [mailto:manasi.d.nav...@intel.com]
> Sent: Tuesday, April 10, 2018 17:37
> To: Wentland, Harry 
> Cc: amd-gfx mailing list ; Daniel Vetter 
> ; Haehnle, Nicolai
> ; Daenzer, Michel ; Deucher, 
> Alexander ;
> Koenig, Christian ; dri-devel 
> ; Cyr, Aric ; Koo,
> Anthony 
> Subject: Re: RFC for a render API to support adaptive sync and VRR
> 
> On Tue, Apr 10, 2018 at 11:03:02AM -0400, Harry Wentland wrote:
> > Adding Anthony and Aric who've been working on Freesync with DC on other 
> > OSes for a while.
> >
> > On 2018-04-09 05:45 PM, Manasi Navare wrote:
> > > Thanks for initiating the discussion. Find my comments below:
> > >
> > > On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
> > >> Adding dri-devel, which I should've included from the start.
> > >>
> > >> On 2018-04-09 03:56 PM, Harry Wentland wrote:
> > >>> === What is adaptive sync and VRR? ===
> > >>>
> > >>> Adaptive sync has been part of the DisplayPort spec for a while now and 
> > >>> allows graphics adapters to drive displays with varying
> frame timings. VRR (variable refresh rate) is essentially the same, but 
> defined for HDMI.
> > >>>
> > >>>
> > >>>
> > >>> === Why allow variable frame timings? ===
> > >>>
> > >>> Variable render times don't align with fixed refresh rates, leading to
> > >>> stuttering, tearing, and/or input lag.
> > >>>
> > >>> e.g. (rc = render completion, dr = display refresh)
> > >>>
> > >>> rc      B      C          D    E     F
> > >>> dr   A    B    C    C     D     E     F
> > >>>
> > >>>                 ^                 ^
> > >>>              frame             missed
> > >>>            repeated            display
> > >>>              twice             refresh
> > >>>
> > >>>
> > >>>
> > >>> === Other use cases of adaptive sync 
> > >>>
> > >>> Beside the variable render case, adaptive sync also allows adjustment 
> > >>> of refresh rates without a mode change. One such use
> case would be 24 Hz video.
> > >>>
> > >
> > > One of the advantages here, when the render speed is slower than the 
> > > display refresh rate: since we are stretching the vertical blanking 
> > > interval, the display adapters will follow the "draw fast and then go 
> > > idle" approach. This gives power savings when the render rate is lower 
> > > than the display refresh rate.
> >
> > Are you talking about a use case, such as an idle desktop, where the 
> > renders are quite sporadic?
> >
> 
> I was referring to a case where the render rate is lower, say 24Hz, but the 
> display rate is fixed at 60Hz, which means we are pretty much displaying the 
> same frame twice. But with Adaptive Sync, the display rate would be lowered 
> to 24Hz and the vertical blanking time stretched, so that instead of drawing 
> the same frame twice, the system is idle in that extra blanking time, thus 
> giving some power savings.

Hi Manasi,

Assuming the panel could go down to 24Hz, this would be possible.  
If it was a game, it'd naturally do this since the refresh rate would track the 
render rate. 

For a video where you have an adaptive sync capable player, it could request a 
fixed duration to achieve the same thing.
Most panels do not support as low as 24Hz however, so usually in the video case 
at least you'd end up with say 48Hz with the driver/HW providing automatic 
frame doubling.

> > >
> > >>>
> > >>>
> > >>> === A DRM render API to support variable refresh rates ===
> > >>>
> > >>> In order to benefit from adaptive sync and VRR userland needs a way to 
> > >>> let us know whether to vary frame timings or to target
> a different frame time. These can be provided as atomic properties on a CRTC:
> > >>>  * bool variable_refresh_compatible
> > >>>  * int  target_frame_duration_ns (nanosecond frame duration)
> > >>>
> > >>> This gives us the following cases:
> > >>>
> > >>> variable_refresh_compatible = 0, target_frame_duration_ns = 0
> > >>>  * drive monitor at timing's normal refresh rate
> > >>>
> > >>> variable_refresh_compatible = 1,

RE: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Cyr, Aric
> From: Haehnle, Nicolai
> Sent: Tuesday, April 10, 2018 13:48
> On 10.04.2018 19:25, Cyr, Aric wrote:
> >> -Original Message-
> >> From: Michel Dänzer [mailto:mic...@daenzer.net]
> >> Sent: Tuesday, April 10, 2018 13:16
> >>
> >> On 2018-04-10 07:13 PM, Cyr, Aric wrote:
>  -Original Message-
>  From: Michel Dänzer [mailto:mic...@daenzer.net]
>  Sent: Tuesday, April 10, 2018 13:06
>  On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> > From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43
> >
> >> For video games we have a similar situation where a frame is rendered
> >> for a certain world time and in the ideal case we would actually
> >> display the frame at this world time.
> >
> > That seems like it would be a poorly written game that flips like
> > that, unless they are explicitly trying to throttle the framerate for
> > some reason.  When a game presents a completed frame, they’d like
> > that to happen as soon as possible.
> 
>  What you're describing is what most games have been doing traditionally.
>  Croteam's research shows that this results in micro-stuttering, because
>  frames may be presented too early. To avoid that, they want to
>  explicitly time each presentation as described by Christian.
> >>>
> >>> Yes, I agree completely.  However that's only truly relevant for fixed
> >>> refreshed rate displays.
> >>
> >> No, it also affects variable refresh; possibly even more in some cases,
> >> because the presentation time is less predictable.
> >
> > Yes, and that's why you don't want to do it when you have variable refresh. 
> >  The hardware in the monitor and GPU will do it for you,
> so why bother?
> 
> I think Michel's point is that the monitor and GPU hardware *cannot*
> really do this, because there's synchronization with audio to take into
> account, which the GPU or monitor don't know about.

How does it work fine today, given that all the kernel seems to know is 'current' or 
'current+1' vsyncs?
Presumably the applications somehow schedule all this just fine.
If this works without variable refresh for 60Hz, will it not work for a 
fixed-rate "48Hz" monitor (assuming a 24Hz video)?

> Also, as I wrote separately, there's the case of synchronizing multiple
> monitors.

For multimonitor to work with VRR, the displays will have to be timing and flip 
synchronized.
This is impossible for an application to manage; it needs driver/HW control, or 
you end up with one display flipping before the other, which looks terrible.
And definitely forget about multiGPU without professional workstation-type 
support needed to sync the displays across adapters.

> > The input to their algorithms will be noisy causing worst estimations.  If 
> > you just present as fast as you can, it'll just work (within
> reason).
> > The majority of gamers want maximum FPS for their games, and there's quite 
> > frequently outrage at a particular game when they are
> limited to something lower that what their monitor could otherwise support 
> (i.e. I don't want my game limited to 30Hz if I have a shiny
> 144Hz gaming display I paid good money for).   Of course, there's always 
> exceptions... but in our experience those are few and far
> between.
> 
> I agree that games most likely shouldn't try to be smart. I'm curious
> about the Croteam findings, but even if they did a really clever thing
> that works better than just telling the display driver "display ASAP
> please", chances are that *most* developers won't do that. And they'll
> most likely get it wrong, so our guidance should really be "games should
> ask for ASAP presentation, and nothing else".

Right, I think this is the 'easy' case and is covered in Harry's initial 
proposal when target_frame_duration_ns = 0.

> However, there *are* legitimate use cases for requesting a specific
> presentation time, and there *is* precedent of APIs that expose such
> features.
>
> Are there any real problems with exposing an absolute target present time?

Realistically, how far into the future are you requesting a presentation time? 
Won't it almost always be something like current_time+1000/video_frame_rate?
If so, why not just tell the driver to set 1000/video_frame_rate and have the 
GPU/monitor create nicely spaced VSYNCs for you that match the source content?

In fact, you probably wouldn't even need to change your video player at all, 
other than having it pass the target_frame_duration_ns.  You could consider 
this a 'hint' as you suggested, since it cannot be guaranteed in cases where your 
driver or HW doesn't support variable refresh.  If the target_frame_duration_ns 
hint is supported/applied, then the video app should have nothing extra to do 
that it wouldn't already do for any arbitrary fixed-refresh rate display.  If 
not supported (say the drm_atomic_check fails with -EINVAL or something), the 
video app can fall back and stop requesting a fixed target_frame_duration_ns.

A fundamental problem 

Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Manasi Navare
On Tue, Apr 10, 2018 at 11:03:02AM -0400, Harry Wentland wrote:
> Adding Anthony and Aric who've been working on Freesync with DC on other OSes 
> for a while.
> 
> On 2018-04-09 05:45 PM, Manasi Navare wrote:
> > Thanks for initiating the discussion. Find my comments below:
> > 
> > On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
> >> Adding dri-devel, which I should've included from the start.
> >>
> >> On 2018-04-09 03:56 PM, Harry Wentland wrote:
> >>> === What is adaptive sync and VRR? ===
> >>>
> >>> Adaptive sync has been part of the DisplayPort spec for a while now and 
> >>> allows graphics adapters to drive displays with varying frame timings. 
> >>> VRR (variable refresh rate) is essentially the same, but defined for HDMI.
> >>>
> >>>
> >>>
> >>> === Why allow variable frame timings? ===
> >>>
> >>> Variable render times don't align with fixed refresh rates, leading to
> >>> stuttering, tearing, and/or input lag.
> >>>
> >>> e.g. (rc = render completion, dr = display refresh)
> >>>
> >>> rc      B      C          D    E     F
> >>> dr   A    B    C    C     D     E     F
> >>>
> >>>                 ^                 ^
> >>>              frame             missed
> >>>            repeated            display
> >>>              twice             refresh
> >>>
> >>>
> >>>
> >>> === Other use cases of adaptive sync 
> >>>
> >>> Beside the variable render case, adaptive sync also allows adjustment of 
> >>> refresh rates without a mode change. One such use case would be 24 Hz 
> >>> video.
> >>>
> > 
> > One of the the advantages here when the render speed is slower than the 
> > display refresh rate, since we are stretching the vertical blanking interval
> > the display adapters will follow "draw fast and then go idle" approach. 
> > This gives power savings when render rate is lower than the display refresh 
> > rate.
> 
> Are you talking about a use case, such as an idle desktop, where the renders 
> are quite sporadic?
>

I was referring to a case where the render rate is lower, say 24Hz, but the 
display rate is fixed at 60Hz, which means we are pretty much displaying the 
same frame twice. But with Adaptive Sync, the display rate would be lowered to 
24Hz and the vertical blanking time stretched, so that instead of drawing the 
same frame twice, the system is idle in that extra blanking time, thus giving 
some power savings.
 
> >  
> >>>
> >>>
> >>> === A DRM render API to support variable refresh rates ===
> >>>
> >>> In order to benefit from adaptive sync and VRR userland needs a way to 
> >>> let us know whether to vary frame timings or to target a different frame 
> >>> time. These can be provided as atomic properties on a CRTC:
> >>>  * bool   variable_refresh_compatible
> >>>  * int    target_frame_duration_ns (nanosecond frame duration)
> >>>
> >>> This gives us the following cases:
> >>>
> >>> variable_refresh_compatible = 0, target_frame_duration_ns = 0
> >>>  * drive monitor at timing's normal refresh rate
> >>>
> >>> variable_refresh_compatible = 1, target_frame_duration_ns = 0
> >>>  * send new frame to monitor as soon as it's available, if within min/max 
> >>> of monitor's reported capabilities
> >>>
> >>> variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
> >>>  * send new frame to monitor with the specified target_frame_duration_ns
> >>>
> >>> When a target_frame_duration_ns or variable_refresh_compatible cannot be 
> >>> supported the atomic check will reject the commit.
> >>>
> > 
> > What I would like is two sets of properties on a CRTC or preferably on a 
> > connector:
> > 
> > KMD properties that UMD can query:
> > * vrr_capable -  This will be an immutable property for exposing hardware's 
> > capability of supporting VRR. This will be set by the kernel after 
> > reading the EDID mode information and monitor range capabilities.
> > * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max refresh 
> > rates supported.
> > These properties are optional and will be created and attached to the 
> > DP/eDP connector when the connector
> > is getting intialized.
> > 
> 
> If we're talking about the properties from the EDID these might not 
> necessarily align with a currently selected mode, which might have a refresh 
> rate lower than the vrr_refresh_max, requiring us to cap it at that. In some 
> scenarios we also might do low framerate compensation [1] where we do magic 
> to allow the framerate to drop below the supported range.

Actually, the way I have coded that currently is to scan through all the EDID 
modes and, for each mode with the same resolution but different supported 
refresh rates, create a VRR field as part of the drm_mode_config structure that 
holds refresh_max and min. That way we store the max and min per mode, as 
opposed to a per-CRTC/connector property.

> 
> I think if a vrr_refresh_max/min are exposed to UMD these should really be 
> only for informational purposes, in whic

Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Harry Wentland
On 2018-04-10 07:44 AM, Chris Wilson wrote:
> Quoting Christian König (2018-04-10 07:45:04)
>> Am 09.04.2018 um 23:45 schrieb Manasi Navare:
>>> Properties that you mentioned above that the UMD can set before kernel can 
>>> enable VRR functionality
>>> *bool vrr_enable or vrr_compatible
>>> target_frame_duration_ns
>>
>> Yeah, that certainly makes sense. But target_frame_duration_ns is a bad 
>> name/semantics.
>>
>> We should use an absolute timestamp where the frame should be presented, 
>> otherwise you could run into a bunch of trouble with IOCTL restarts or 
>> missed blanks.
> 
> Hear, hear. I was disappointed not to see this be the starting point of
> the conversation. Imo, the uABI should be in terms of absolutes, with the
> drivers mapping that onto HW and reporting back the discrepancies.

I think it's just that some of us who work on KMD display drivers have had our 
work primarily guided by different use cases, such as gaming, which has then 
been extended to provide a better experience for video as well. We might not be as 
intimately aware of some of the work that's been done on video APIs and the 
pains involved in it but are always happy to learn and work together toward the 
best solution.

Harry

> -Chris
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
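
The preference for an absolute presentation timestamp over a per-flip duration, argued in the message above, can be illustrated with a small sketch. This is a hypothetical Python model, not kernel or uAPI code; all names are made up for the example:

```python
# Why absolute presentation times survive an interrupted/restarted submission
# while relative durations do not: an absolute target is fixed once computed,
# whereas a relative one is re-evaluated against "now" on every resubmission.
def next_target_abs(last_present_ns, frame_duration_ns):
    # Absolute semantics: resubmitting after an ioctl restart still names
    # the same point in time.
    return last_present_ns + frame_duration_ns

def next_target_rel(now_ns, frame_duration_ns):
    # Relative semantics: a restart that happens later yields a different,
    # drifted target.
    return now_ns + frame_duration_ns

last = 1_000_000_000
dur = 16_666_667  # ~60Hz frame duration in nanoseconds
first_try = next_target_abs(last, dur)
retry = next_target_abs(last, dur)  # identical after a restart
assert first_try == retry
# A 2ms-later retry with relative semantics misses the intended vblank:
assert next_target_rel(last, dur) != next_target_rel(last + 2_000_000, dur)
```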


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Harry Wentland
On 2018-04-10 01:52 PM, Harry Wentland wrote:
> On 2018-04-10 12:37 PM, Nicolai Hähnle wrote:
>> On 10.04.2018 18:26, Cyr, Aric wrote:
>>> That presentation time doesn’t need to come to kernel as such and actually 
>>> is fine as-is completely decoupled from adaptive sync.  As long as the 
>>> video player provides the new target_frame_duration_ns on the flip, then 
>>> the driver/HW will target the correct refresh rate to match the source 
>>> content.  This simply means that more often than not the video presents 
>>> will  align very close to the monitor’s refresh rate, resulting in a smooth 
>>> video experience.  For example, if you have 24Hz content, and an adaptive 
>>> sync monitor with a range of 40-60Hz, once the target_frame_duration_ns is 
>>> provided, driver can configure the monitor to a fixed refresh rate of 48Hz 
>>> causing all video presents to be frame-doubled in hardware without further 
>>> application intervention.
>>
>> What about multi-monitor displays, where you want to play an animation that 
>> spans multiple monitors. You really want all monitors to flip at the same 
>> time.
>>
> 
> Syncing two monitors is what we currently do with our timing sync feature 
> where we drive two monitors from the same clock source if they use the same 
> timing. That, along with VSync, guarantees all monitors flip at the same 
> time. I'm not sure if it works with adaptive sync.
> 
> Are you suggesting to use adaptive sync to do an in-SW sync of multiple 
> displays?
> 
>> I understand where you're coming from, but the perspective of refusing a 
>> target presentation time is a rather selfish one of "we're the display, 
>> we're the most important, everybody else has to adjust to us" (e.g. to get 
>> perfect sync between video and audio). I admit I'm phrasing it in a bit of 
>> an extreme way, but perhaps this phrasing helps to see why that's just not a 
>> very good attitude to have.
>>
> 
> I really dislike arguing on an emotional basis and would rather not use words 
> such as "selfish" in this discussion. I believe all of us want to come to the 
> best possible solution based on technical merit.
> 
>> All devices (whether video or audio or whatever) should be able to receive a 
>> target presentation time.
>>
> 
> I'm not sure I understand the full extent of the problem as I'm not really 
> familiar with how this is currently done, but isn't the problem the same 
> without variable refresh rates (or targeted refresh rates)? A Video API would 
> still have to somehow synchronize audio and video to 60Hz on most monitors 
> today. What would change if we gave user mode the ability to suggest we flip 
> at video frame rates (24/48Hz)?
> 

Never mind. Just saw Michel's reply to an earlier message.

Harry

> Harry
> 
>> If the application can make your life a bit easier by providing the 
>> targeted refresh rate as an additional *hint-only* parameter (like in your 24 
>> Hz --> 48 Hz doubling example), then maybe we should indeed consider that.
>>
>> Cheers,
>> Nicolai
>>
>>
>>>
>>>
>>> For video games we have a similar situation where a frame is rendered for a 
>>> certain world time and in the ideal case we would actually display the 
>>> frame at this world time.
>>>
>>> That seems like it would be a poorly written game that flips like that, 
>>> unless they are explicitly trying to throttle the framerate for some 
>>> reason.  When a game presents a completed frame, they’d like that to happen 
>>> as soon as possible.  This is why non-VSYNC modes of flipping exist and 
>>> many games leverage this.  Adaptive sync gives you the lower latency of 
>>> immediate flips without the tearing imposed by using non-VSYNC flipping.
>>>
>>>
>>> I mean we have the guys from Valve on this mailing list so I think we 
>>> should just get the feedback from them and see what they prefer.
>>>
>>> We have thousands of Steam games on other OSes that work great already, but 
>>> we’d certainly be interested in any additional feedback.  My guess is they 
>>> prefer to “do nothing” and let driver/HW manage it, otherwise you exempt 
>>> all existing games from supporting adaptive sync without a rewrite or 
>>> update.
>>>
>>>
>>> Regards,
>>> Christian.
>>>
>>>
>>>     -Aric
>>>
>>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Harry Wentland
On 2018-04-10 12:37 PM, Nicolai Hähnle wrote:
> On 10.04.2018 18:26, Cyr, Aric wrote:
>> That presentation time doesn’t need to come to kernel as such and actually 
>> is fine as-is completely decoupled from adaptive sync.  As long as the video 
>> player provides the new target_frame_duration_ns on the flip, then the 
>> driver/HW will target the correct refresh rate to match the source content.  
>> This simply means that more often than not the video presents will  align 
>> very close to the monitor’s refresh rate, resulting in a smooth video 
>> experience.  For example, if you have 24Hz content, and an adaptive sync 
>> monitor with a range of 40-60Hz, once the target_frame_duration_ns is 
>> provided, driver can configure the monitor to a fixed refresh rate of 48Hz 
>> causing all video presents to be frame-doubled in hardware without further 
>> application intervention.
> 
> What about multi-monitor displays, where you want to play an animation that 
> spans multiple monitors. You really want all monitors to flip at the same 
> time.
> 

Syncing two monitors is what we currently do with our timing sync feature where 
we drive two monitors from the same clock source if they use the same timing. 
That, along with VSync, guarantees all monitors flip at the same time. I'm not 
sure if it works with adaptive sync.

Are you suggesting to use adaptive sync to do an in-SW sync of multiple 
displays?

> I understand where you're coming from, but the perspective of refusing a 
> target presentation time is a rather selfish one of "we're the display, we're 
> the most important, everybody else has to adjust to us" (e.g. to get perfect 
> sync between video and audio). I admit I'm phrasing it in a bit of an extreme 
> way, but perhaps this phrasing helps to see why that's just not a very good 
> attitude to have.
> 

I really dislike arguing on an emotional basis and would rather not use words 
such as "selfish" in this discussion. I believe all of us want to come to the 
best possible solution based on technical merit.

> All devices (whether video or audio or whatever) should be able to receive a 
> target presentation time.
> 

I'm not sure I understand the full extent of the problem as I'm not really 
familiar with how this is currently done, but isn't the problem the same 
without variable refresh rates (or targeted refresh rates)? A Video API would 
still have to somehow synchronize audio and video to 60Hz on most monitors 
today. What would change if we gave user mode the ability to suggest we flip at 
video frame rates (24/48Hz)?

Harry

> If the application can make your life a bit easier by providing the targeted 
> refresh rate as additional *hint-only* parameter (like in your 24 Hz --> 48 
> Hz doubling example), then maybe we should indeed consider that.
> 
> Cheers,
> Nicolai
> 
> 
>>
>>
>> For video games we have a similar situation where a frame is rendered for a 
>> certain world time and in the ideal case we would actually display the frame 
>> at this world time.
>>
>> That seems like it would be a poorly written game that flips like that, 
>> unless they are explicitly trying to throttle the framerate for some reason. 
>>  When a game presents a completed frame, they’d like that to happen as soon 
>> as possible.  This is why non-VSYNC modes of flipping exist and many games 
>> leverage this.  Adaptive sync gives you the lower latency of immediate flips 
>> without the tearing imposed by using non-VSYNC flipping.
>>
>>
>> I mean we have the guys from Valve on this mailing list so I think we should 
>> just get the feedback from them and see what they prefer.
>>
>> We have thousands of Steam games on other OSes that work great already, but 
>> we’d certainly be interested in any additional feedback.  My guess is they 
>> prefer to “do nothing” and let driver/HW manage it, otherwise you exempt all 
>> existing games from supporting adaptive sync without a rewrite or update.
>>
>>
>> Regards,
>> Christian.
>>
>>
>>     -Aric
>>
> 


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Nicolai Hähnle

On 10.04.2018 19:25, Cyr, Aric wrote:

-Original Message-
From: Michel Dänzer [mailto:mic...@daenzer.net]
Sent: Tuesday, April 10, 2018 13:16

On 2018-04-10 07:13 PM, Cyr, Aric wrote:

-Original Message-
From: Michel Dänzer [mailto:mic...@daenzer.net]
Sent: Tuesday, April 10, 2018 13:06
On 2018-04-10 06:26 PM, Cyr, Aric wrote:

From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43


For video games we have a similar situation where a frame is rendered
for a certain world time and in the ideal case we would actually
display the frame at this world time.


That seems like it would be a poorly written game that flips like
that, unless they are explicitly trying to throttle the framerate for
some reason.  When a game presents a completed frame, they’d like
that to happen as soon as possible.


What you're describing is what most games have been doing traditionally.
Croteam's research shows that this results in micro-stuttering, because
frames may be presented too early. To avoid that, they want to
explicitly time each presentation as described by Christian.


Yes, I agree completely.  However that's only truly relevant for fixed
refresh rate displays.


No, it also affects variable refresh; possibly even more in some cases,
because the presentation time is less predictable.


Yes, and that's why you don't want to do it when you have variable refresh.  
The hardware in the monitor and GPU will do it for you, so why bother?


I think Michel's point is that the monitor and GPU hardware *cannot* 
really do this, because there's synchronization with audio to take into 
account, which the GPU or monitor don't know about.


Also, as I wrote separately, there's the case of synchronizing multiple 
monitors.




The input to their algorithms will be noisy, causing worse estimates.  If you 
just present as fast as you can, it'll just work (within reason).
The majority of gamers want maximum FPS for their games, and there's quite 
frequently outrage at a particular game when they are limited to something 
lower than what their monitor could otherwise support (i.e. I don't want my 
game limited to 30Hz if I have a shiny 144Hz gaming display I paid good money 
for).   Of course, there's always exceptions... but in our experience those are 
few and far between.


I agree that games most likely shouldn't try to be smart. I'm curious 
about the Croteam findings, but even if they did a really clever thing 
that works better than just telling the display driver "display ASAP 
please", chances are that *most* developers won't do that. And they'll 
most likely get it wrong, so our guidance should really be "games should 
ask for ASAP presentation, and nothing else".


However, there *are* legitimate use cases for requesting a specific 
presentation time, and there *is* precedent of APIs that expose such 
features.


Are there any real problems with exposing an absolute target present time?

Cheers,
Nicolai



RE: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Cyr, Aric
> -Original Message-
> From: Michel Dänzer [mailto:mic...@daenzer.net]
> Sent: Tuesday, April 10, 2018 13:16
> 
> On 2018-04-10 07:13 PM, Cyr, Aric wrote:
> >> -Original Message-
> >> From: Michel Dänzer [mailto:mic...@daenzer.net]
> >> Sent: Tuesday, April 10, 2018 13:06
> >> On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> >>> From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43
> >>>
>  For video games we have a similar situation where a frame is rendered
>  for a certain world time and in the ideal case we would actually
>  display the frame at this world time.
> >>>
> >>> That seems like it would be a poorly written game that flips like
> >>> that, unless they are explicitly trying to throttle the framerate for
> >>> some reason.  When a game presents a completed frame, they’d like
> >>> that to happen as soon as possible.
> >>
> >> What you're describing is what most games have been doing traditionally.
> >> Croteam's research shows that this results in micro-stuttering, because
> >> frames may be presented too early. To avoid that, they want to
> >> explicitly time each presentation as described by Christian.
> >
> > Yes, I agree completely.  However that's only truly relevant for fixed
> > refresh rate displays.
> 
> No, it also affects variable refresh; possibly even more in some cases,
> because the presentation time is less predictable.

Yes, and that's why you don't want to do it when you have variable refresh.  
The hardware in the monitor and GPU will do it for you, so why bother?
The input to their algorithms will be noisy, causing worse estimates.  If you 
just present as fast as you can, it'll just work (within reason).
The majority of gamers want maximum FPS for their games, and there's quite 
frequently outrage at a particular game when they are limited to something 
lower than what their monitor could otherwise support (i.e. I don't want my 
game limited to 30Hz if I have a shiny 144Hz gaming display I paid good money 
for).   Of course, there's always exceptions... but in our experience those are 
few and far between.



Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Michel Dänzer
On 2018-04-10 07:13 PM, Cyr, Aric wrote:
>> -Original Message-
>> From: Michel Dänzer [mailto:mic...@daenzer.net]
>> Sent: Tuesday, April 10, 2018 13:06
>> On 2018-04-10 06:26 PM, Cyr, Aric wrote:
>>> From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43
>>>
 For video games we have a similar situation where a frame is rendered
 for a certain world time and in the ideal case we would actually
 display the frame at this world time.
>>>
>>> That seems like it would be a poorly written game that flips like
>>> that, unless they are explicitly trying to throttle the framerate for
>>> some reason.  When a game presents a completed frame, they’d like
>>> that to happen as soon as possible.
>>
>> What you're describing is what most games have been doing traditionally.
>> Croteam's research shows that this results in micro-stuttering, because
>> frames may be presented too early. To avoid that, they want to
>> explicitly time each presentation as described by Christian.
> 
> Yes, I agree completely.  However that's only truly relevant for fixed
> refresh rate displays.

No, it also affects variable refresh; possibly even more in some cases,
because the presentation time is less predictable.


I have to leave for today, I'll look up the Croteam video on Youtube
explaining this tomorrow if nobody beats me to it.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Michel Dänzer
On 2018-04-10 05:35 PM, Cyr, Aric wrote:
>> On 2018-04-10 03:37 AM, Michel Dänzer wrote:
>>> On 2018-04-10 08:45 AM, Christian König wrote:
 On 09.04.2018 at 23:45, Manasi Navare wrote:
> Thanks for initiating the discussion. Find my comments
> below: On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry
> Wentland wrote:
>> On 2018-04-09 03:56 PM, Harry Wentland wrote:
>>> 
>>> === A DRM render API to support variable refresh rates
>>> ===
>>> 
>>> In order to benefit from adaptive sync and VRR userland
>>> needs a way to let us know whether to vary frame timings
>>> or to target a different frame time. These can be
>>> provided as atomic properties on a CRTC: * bool
>>> variable_refresh_compatible * int
>>> target_frame_duration_ns (nanosecond frame duration)
>>> 
>>> This gives us the following cases:
>>> 
>>> variable_refresh_compatible = 0, target_frame_duration_ns
>>> = 0 * drive monitor at timing's normal refresh rate
>>> 
>>> variable_refresh_compatible = 1, target_frame_duration_ns
>>> = 0 * send new frame to monitor as soon as it's
>>> available, if within min/max of monitor's reported
>>> capabilities
>>> 
>>> variable_refresh_compatible = 0/1,
>>> target_frame_duration_ns = > 0 * send new frame to
>>> monitor with the specified target_frame_duration_ns
>>> 
>>> When a target_frame_duration_ns or
>>> variable_refresh_compatible cannot be supported the
>>> atomic check will reject the commit.
>>> 
> What I would like is two sets of properties on a CRTC or
> preferably on a connector:
> 
> KMD properties that UMD can query: * vrr_capable -  This will
> be an immutable property for exposing hardware's capability
> of supporting VRR. This will be set by the kernel after 
> reading the EDID mode information and monitor range
> capabilities. * vrr_vrefresh_max, vrr_vrefresh_min - To
> expose the min and max refresh rates supported. These
> properties are optional and will be created and attached to
> the DP/eDP connector when the connector is getting
> initialized.
 
 Mhm, aren't those properties actually per mode and not per
 CRTC/connector?
 
> Properties that you mentioned above that the UMD can set
> before kernel can enable VRR functionality *bool vrr_enable
> or vrr_compatible target_frame_duration_ns
 
 Yeah, that certainly makes sense. But target_frame_duration_ns
 is a bad name/semantics.
 
 We should use an absolute timestamp where the frame should be
 presented, otherwise you could run into a bunch of trouble with
 IOCTL restarts or missed blanks.
>>> 
>>> Also, a fixed target frame duration isn't suitable even for
>>> video playback, due to drift between the video and audio clocks.
> 
> Why?  Even if they drift, you know you want to show your 24Hz video
> frame for 41.67ms and adaptive sync can ensure that with reasonable
> accuracy.

Due to the drift, the video player has to occasionally either skip a
frame or present it twice to prevent audio and video going out of sync,
resulting in visual artifacts.

With time-based presentation and variable refresh rate, audio and video
can stay in sync without occasional visual artifacts.

It would be a pity to create a "variable refresh rate API" which doesn't
allow harnessing this strength of variable refresh rate.
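
The clock-drift arithmetic behind this argument can be made concrete. The sketch below is illustrative only; the drift figure is an assumed typical crystal tolerance, not a measured value:

```python
# With a fixed frame duration and an audio clock that runs slightly fast or
# slow, the player must periodically drop or repeat a video frame to stay in
# sync -- each such correction is a potential visible artifact.
def corrections_needed(num_frames, drift_ppm):
    """Frames that must be skipped/doubled over num_frames of playback,
    given audio/video clock drift in parts per million."""
    return int(num_frames * drift_ppm / 1_000_000)

# One hour of 24fps video with 100 ppm drift:
frames = 24 * 60 * 60
print(corrections_needed(frames, 100))  # 8 visible frame corrections/hour
```

Time-based presentation lets the display absorb this drift continuously instead of in whole-frame jumps.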


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


RE: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Cyr, Aric
> -Original Message-
> From: Michel Dänzer [mailto:mic...@daenzer.net]
> Sent: Tuesday, April 10, 2018 13:06
> On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> > From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43
> >
> >> For video games we have a similar situation where a frame is rendered
> >> for a certain world time and in the ideal case we would actually
> >> display the frame at this world time.
> >
> > That seems like it would be a poorly written game that flips like
> > that, unless they are explicitly trying to throttle the framerate for
> > some reason.  When a game presents a completed frame, they’d like
> > that to happen as soon as possible.
> 
> What you're describing is what most games have been doing traditionally.
> Croteam's research shows that this results in micro-stuttering, because
> frames may be presented too early. To avoid that, they want to
> explicitly time each presentation as described by Christian.

Yes, I agree completely.  However that's only truly relevant for fixed 
refresh rate displays.
This is the primary reason for having Adaptive Sync.  
There is no perfect way to solve this without Adaptive Sync, but yes they can 
come up with better algorithms to improve fixed refresh rate displays.

> 
> Maybe we should try getting the Croteam guys researching this involved
> directly here.

I'd be interested in any research they could share, for sure.  
We also have years of experience and research here, but not distilled into any 
readily available format.

> 
> 
> --
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Michel Dänzer
On 2018-04-10 06:26 PM, Cyr, Aric wrote:
> From: Koenig, Christian Sent: Tuesday, April 10, 2018 11:43
> 
>> For video games we have a similar situation where a frame is rendered
>> for a certain world time and in the ideal case we would actually
>> display the frame at this world time.
> 
> That seems like it would be a poorly written game that flips like
> that, unless they are explicitly trying to throttle the framerate for
> some reason.  When a game presents a completed frame, they’d like
> that to happen as soon as possible.

What you're describing is what most games have been doing traditionally.
Croteam's research shows that this results in micro-stuttering, because
frames may be presented too early. To avoid that, they want to
explicitly time each presentation as described by Christian.


Maybe we should try getting the Croteam guys researching this involved
directly here.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Nicolai Hähnle

On 10.04.2018 18:26, Cyr, Aric wrote:
That presentation time doesn’t need to come to kernel as such and 
actually is fine as-is completely decoupled from adaptive sync.  As long 
as the video player provides the new target_frame_duration_ns on the 
flip, then the driver/HW will target the correct refresh rate to match 
the source content.  This simply means that more often than not the 
video presents will  align very close to the monitor’s refresh rate, 
resulting in a smooth video experience.  For example, if you have 24Hz 
content, and an adaptive sync monitor with a range of 40-60Hz, once the 
target_frame_duration_ns is provided, driver can configure the monitor 
to a fixed refresh rate of 48Hz causing all video presents to be 
frame-doubled in hardware without further application intervention.
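
The frame-doubling choice Aric describes (24Hz content on a 40-60Hz panel driven at 48Hz) follows a simple rule: pick the lowest integer multiple of the content rate that falls inside the monitor's adaptive-sync range. A sketch, with a hypothetical helper name:

```python
# Pick a fixed refresh rate so that each video frame is shown a whole number
# of times: the smallest integer multiple of the content rate that fits the
# monitor's adaptive-sync range.
def pick_fixed_rate(content_hz, vrr_min_hz, vrr_max_hz):
    multiple = 1
    while content_hz * multiple < vrr_min_hz:
        multiple += 1
    rate = content_hz * multiple
    return rate if rate <= vrr_max_hz else None  # None: no multiple fits

print(pick_fixed_rate(24, 40, 60))  # 48 -> every frame doubled in HW
print(pick_fixed_rate(30, 40, 60))  # 60 -> every frame doubled in HW
print(pick_fixed_rate(25, 40, 48))  # None: 50Hz exceeds the 48Hz maximum
```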


What about multi-monitor displays, where you want to play an animation 
that spans multiple monitors. You really want all monitors to flip at 
the same time.


I understand where you're coming from, but the perspective of refusing a 
target presentation time is a rather selfish one of "we're the display, 
we're the most important, everybody else has to adjust to us" (e.g. to 
get perfect sync between video and audio). I admit I'm phrasing it in a 
bit of an extreme way, but perhaps this phrasing helps to see why that's 
just not a very good attitude to have.


All devices (whether video or audio or whatever) should be able to 
receive a target presentation time.


If the application can make your life a bit easier by providing the 
targeted refresh rate as an additional *hint-only* parameter (like in your 
24 Hz --> 48 Hz doubling example), then maybe we should indeed consider 
that.


Cheers,
Nicolai





For video games we have a similar situation where a frame is rendered 
for a certain world time and in the ideal case we would actually display 
the frame at this world time.


That seems like it would be a poorly written game that flips like that, 
unless they are explicitly trying to throttle the framerate for some 
reason.  When a game presents a completed frame, they’d like that to 
happen as soon as possible.  This is why non-VSYNC modes of flipping 
exist and many games leverage this.  Adaptive sync gives you the lower 
latency of immediate flips without the tearing imposed by using 
non-VSYNC flipping.



I mean we have the guys from Valve on this mailing list so I think we 
should just get the feedback from them and see what they prefer.


We have thousands of Steam games on other OSes that work great already, 
but we’d certainly be interested in any additional feedback.  My guess 
is they prefer to “do nothing” and let driver/HW manage it, otherwise 
you exempt all existing games from supporting adaptive sync without a 
rewrite or update.



Regards,
Christian.


-Aric





RE: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Cyr, Aric
From: Koenig, Christian
Sent: Tuesday, April 10, 2018 11:43

On 10.04.2018 at 17:35, Cyr, Aric wrote:

-----Original Message-----
From: Wentland, Harry
Sent: Tuesday, April 10, 2018 11:08
To: Michel Dänzer; Koenig, Christian; Manasi Navare
Cc: Haehnle, Nicolai; Daniel Vetter; Daenzer, Michel; dri-devel; amd-gfx 
mailing list; Deucher, Alexander; Cyr, Aric; Koo, Anthony
Subject: Re: RFC for a render API to support adaptive sync and VRR



On 2018-04-10 03:37 AM, Michel Dänzer wrote:

On 2018-04-10 08:45 AM, Christian König wrote:

On 09.04.2018 at 23:45, Manasi Navare wrote:

Thanks for initiating the discussion. Find my comments below:

On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:

On 2018-04-09 03:56 PM, Harry Wentland wrote:



=== A DRM render API to support variable refresh rates ===



In order to benefit from adaptive sync and VRR userland needs a way

to let us know whether to vary frame timings or to target a

different frame time. These can be provided as atomic properties on

a CRTC:

  * bool variable_refresh_compatible

  * int target_frame_duration_ns (nanosecond frame duration)



This gives us the following cases:



variable_refresh_compatible = 0, target_frame_duration_ns = 0

  * drive monitor at timing's normal refresh rate



variable_refresh_compatible = 1, target_frame_duration_ns = 0

  * send new frame to monitor as soon as it's available, if within

min/max of monitor's reported capabilities



variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0

  * send new frame to monitor with the specified

target_frame_duration_ns



When a target_frame_duration_ns or variable_refresh_compatible

cannot be supported the atomic check will reject the commit.



What I would like is two sets of properties on a CRTC or preferably on

a connector:



KMD properties that UMD can query:

* vrr_capable -  This will be an immutable property for exposing

hardware's capability of supporting VRR. This will be set by the

kernel after

reading the EDID mode information and monitor range capabilities.

* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max

refresh rates supported.

These properties are optional and will be created and attached to the

DP/eDP connector when the connector

is getting initialized.



Mhm, aren't those properties actually per mode and not per CRTC/connector?



Properties that you mentioned above that the UMD can set before kernel

can enable VRR functionality

*bool vrr_enable or vrr_compatible

target_frame_duration_ns



Yeah, that certainly makes sense. But target_frame_duration_ns is a bad

name/semantics.



We should use an absolute timestamp where the frame should be presented,

otherwise you could run into a bunch of trouble with IOCTL restarts or

missed blanks.



Also, a fixed target frame duration isn't suitable even for video

playback, due to drift between the video and audio clocks.



Why?  Even if they drift, you know you want to show your 24Hz video frame for 
41.67ms and adaptive sync can ensure that with reasonable accuracy.

All we're doing is eliminating the need for frame rate converters from the 
application and offloading that to hardware.



Time-based presentation seems to be the right approach for preventing

micro-stutter in games as well, Croteam developers have been researching

this.





I'm not sure if the driver can ever give a guarantee of the exact time a flip 
occurs. What we have control over with our HW is frame

duration.



Are Croteam devs trying to predict render times? I'm not sure how that would 
work. We've had bad experiences in the past with

games that try to do framepacing as that's usually not accurate and tends to 
lead to more problems than benefits.



For gaming, it doesn't make sense nor is it feasible to know exactly how 
long a render will take with microsecond precision, very coarse guesses at 
best.  The point of adaptive sync is that it works *transparently* for the 
majority of cases, within the capability of the HW and driver.  We don't want 
to have every game re-write their engine to support this, but we do want the 
majority to "just work".



The only exception is the video case where an application may want to request a 
fixed frame duration aligned to the video content.  This requires an explicit 
interface for the video app, and our proposal is to keep it simple:  app knows 
how long a frame should be presented for, an

Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Christian König

On 10.04.2018 at 17:35, Cyr, Aric wrote:

-----Original Message-----
From: Wentland, Harry
Sent: Tuesday, April 10, 2018 11:08
To: Michel Dänzer; Koenig, Christian; Manasi Navare
Cc: Haehnle, Nicolai; Daniel Vetter; Daenzer, Michel; dri-devel; amd-gfx 
mailing list; Deucher, Alexander; Cyr, Aric; Koo, Anthony
Subject: Re: RFC for a render API to support adaptive sync and VRR

On 2018-04-10 03:37 AM, Michel Dänzer wrote:

On 2018-04-10 08:45 AM, Christian König wrote:

On 09.04.2018 at 23:45, Manasi Navare wrote:

Thanks for initiating the discussion. Find my comments below:
On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:

On 2018-04-09 03:56 PM, Harry Wentland wrote:

=== A DRM render API to support variable refresh rates ===

In order to benefit from adaptive sync and VRR userland needs a way
to let us know whether to vary frame timings or to target a
different frame time. These can be provided as atomic properties on
a CRTC:
   * bool    variable_refresh_compatible
   * int    target_frame_duration_ns (nanosecond frame duration)

This gives us the following cases:

variable_refresh_compatible = 0, target_frame_duration_ns = 0
   * drive monitor at timing's normal refresh rate

variable_refresh_compatible = 1, target_frame_duration_ns = 0
   * send new frame to monitor as soon as it's available, if within
min/max of monitor's reported capabilities

variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
   * send new frame to monitor with the specified
target_frame_duration_ns

When a target_frame_duration_ns or variable_refresh_compatible
cannot be supported the atomic check will reject the commit.
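The four property combinations above amount to a decision table. A minimal sketch of that table, purely illustrative (the helper name and the frame-duration bounds are assumptions, not actual DRM code):

```python
def atomic_check_policy(variable_refresh_compatible, target_frame_duration_ns,
                        min_frame_ns, max_frame_ns):
    """Illustrative decision table for the proposed CRTC properties.
    min_frame_ns/max_frame_ns are the monitor's reported frame-duration
    range (1e9 / max_refresh and 1e9 / min_refresh)."""
    if target_frame_duration_ns > 0:
        # Fixed-duration case (e.g. video): valid for either value of
        # the compatible flag, but only inside the monitor's range.
        if not (min_frame_ns <= target_frame_duration_ns <= max_frame_ns):
            return "reject"          # atomic check fails the commit
        return "fixed_duration"
    if variable_refresh_compatible:
        # Flip as soon as a new frame is ready, within min/max.
        return "flip_when_ready"
    # Neither property set: drive the timing's normal refresh rate.
    return "fixed_refresh"
```

For example, a 40-144 Hz panel has a frame-duration range of roughly 6.94-25 ms, so a 41.7 ms (24 Hz) target would be rejected by the check.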


What I would like is two sets of properties on a CRTC or preferably on
a connector:

KMD properties that UMD can query:
* vrr_capable -  This will be an immutable property for exposing
hardware's capability of supporting VRR. This will be set by the
kernel after
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
refresh rates supported.
These properties are optional and will be created and attached to the
DP/eDP connector when the connector
is getting initialized.

Mhm, aren't those properties actually per mode and not per CRTC/connector?


Properties that you mentioned above that the UMD can set before kernel
can enable VRR functionality
*bool vrr_enable or vrr_compatible
target_frame_duration_ns

Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
name/semantics.

We should use an absolute timestamp where the frame should be presented,
otherwise you could run into a bunch of trouble with IOCTL restarts or
missed blanks.

Also, a fixed target frame duration isn't suitable even for video
playback, due to drift between the video and audio clocks.

Why?  Even if they drift, you know you want to show your 24Hz video frame for 
~41.7ms and adaptive sync can ensure that with reasonable accuracy.
All we're doing is eliminating the need for frame rate converters from the 
application and offloading that to hardware.


Time-based presentation seems to be the right approach for preventing
micro-stutter in games as well, Croteam developers have been researching
this.


I'm not sure if the driver can ever give a guarantee of the exact time a flip 
occurs. What we have control over with our HW is frame
duration.

Are Croteam devs trying to predict render times? I'm not sure how that would 
work. We've had bad experience in the past with
games that try to do framepacing as that's usually not accurate and tends to 
lead to more problems than benefits.

For gaming, it doesn't make sense nor is it feasible to know exactly how long a 
render will take with microsecond precision; very coarse guesses are the best we 
can do.  The point of adaptive sync is that it works *transparently* for the 
majority of cases, within the capability of the HW and driver.  We don't want to 
have every game re-write their engine to support this, but we do want the 
majority to "just work".

The only exception is the video case where an application may want to request a 
fixed frame duration aligned to the video content.  This requires an explicit 
interface for the video app, and our proposal is to keep it simple:  app knows 
how long a frame should be presented for, and we try to honour that.


Well I strongly disagree on that.

See VDPAU for example: 
https://http.download.nvidia.com/XFree86/vdpau/doxygen/html/group___vdp_presentation_queue.html#ga5bd61ca8ef5d1bc54ca6921aa57f835a

[in]

	earliest_presentation_time 	The timestamp associated with the 
surface. The presentation queue will not display the surface until the 
presentation queue's current time is at least this value.




Especially video players want an interface where they can specify when 
exactly a frame should show up on the display and then get the feedback 
when it was actually displayed.

RE: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Cyr, Aric
> -Original Message-
> From: Wentland, Harry
> Sent: Tuesday, April 10, 2018 11:08
> To: Michel Dänzer ; Koenig, Christian 
> ; Manasi Navare
> 
> Cc: Haehnle, Nicolai ; Daniel Vetter 
> ; Daenzer, Michel
> ; dri-devel ; 
> amd-gfx mailing list ;
> Deucher, Alexander ; Cyr, Aric ; 
> Koo, Anthony 
> Subject: Re: RFC for a render API to support adaptive sync and VRR
> 
> On 2018-04-10 03:37 AM, Michel Dänzer wrote:
> > On 2018-04-10 08:45 AM, Christian König wrote:
> >> Am 09.04.2018 um 23:45 schrieb Manasi Navare:
> >>> Thanks for initiating the discussion. Find my comments below:
> >>> On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
> >>>> On 2018-04-09 03:56 PM, Harry Wentland wrote:
> >>>>>
> >>>>> === A DRM render API to support variable refresh rates ===
> >>>>>
> >>>>> In order to benefit from adaptive sync and VRR userland needs a way
> >>>>> to let us know whether to vary frame timings or to target a
> >>>>> different frame time. These can be provided as atomic properties on
> >>>>> a CRTC:
> >>>>>   * bool    variable_refresh_compatible
> >>>>>   * int    target_frame_duration_ns (nanosecond frame duration)
> >>>>>
> >>>>> This gives us the following cases:
> >>>>>
> >>>>> variable_refresh_compatible = 0, target_frame_duration_ns = 0
> >>>>>   * drive monitor at timing's normal refresh rate
> >>>>>
> >>>>> variable_refresh_compatible = 1, target_frame_duration_ns = 0
> >>>>>   * send new frame to monitor as soon as it's available, if within
> >>>>> min/max of monitor's reported capabilities
> >>>>>
> >>>>> variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
> >>>>>   * send new frame to monitor with the specified
> >>>>> target_frame_duration_ns
> >>>>>
> >>>>> When a target_frame_duration_ns or variable_refresh_compatible
> >>>>> cannot be supported the atomic check will reject the commit.
> >>>>>
> >>> What I would like is two sets of properties on a CRTC or preferably on
> >>> a connector:
> >>>
> >>> KMD properties that UMD can query:
> >>> * vrr_capable -  This will be an immutable property for exposing
> >>> hardware's capability of supporting VRR. This will be set by the
> >>> kernel after
> >>> reading the EDID mode information and monitor range capabilities.
> >>> * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
> >>> refresh rates supported.
> >>> These properties are optional and will be created and attached to the
> >>> DP/eDP connector when the connector
> >>> is getting initialized.
> >>
> >> Mhm, aren't those properties actually per mode and not per CRTC/connector?
> >>
> >>> Properties that you mentioned above that the UMD can set before kernel
> >>> can enable VRR functionality
> >>> *bool vrr_enable or vrr_compatible
> >>> target_frame_duration_ns
> >>
> >> Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
> >> name/semantics.
> >>
> >> We should use an absolute timestamp where the frame should be presented,
> >> otherwise you could run into a bunch of trouble with IOCTL restarts or
> >> missed blanks.
> >
> > Also, a fixed target frame duration isn't suitable even for video
> > playback, due to drift between the video and audio clocks.

Why?  Even if they drift, you know you want to show your 24Hz video frame for 
~41.7ms and adaptive sync can ensure that with reasonable accuracy.
All we're doing is eliminating the need for frame rate converters from the 
application and offloading that to hardware.

> > Time-based presentation seems to be the right approach for preventing
> > micro-stutter in games as well, Croteam developers have been researching
> > this.
> >
> 
> I'm not sure if the driver can ever give a guarantee of the exact time a flip 
> occurs. What we have control over with our HW is frame
> duration.
> 
> Are Croteam devs trying to predict render times? I'm not sure how that would 
> work. We've had bad experience in the past with
> games that try to do framepacing as that's usually not accurate and tends to 
> lead to more problems than benefits.

For gaming, it doesn't make sense nor is it feasible to know exactly how long 
a render will take with microsecond precision; very coarse guesses are the 
best we can do.  The point of adaptive sync is that it works *transparently* for 
the majority of cases, within the capability of the HW and driver.  We don't want 
to have every game re-write their engine to support this, but we do want the 
majority to "just work".

The only exception is the video case where an application may want to request a 
fixed frame duration aligned to the video content.  This requires an explicit 
interface for the video app, and our proposal is to keep it simple:  app knows 
how long a frame should be presented for, and we try to honour that.

-Aric
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Christian König

Am 10.04.2018 um 17:08 schrieb Harry Wentland:

On 2018-04-10 03:37 AM, Michel Dänzer wrote:

On 2018-04-10 08:45 AM, Christian König wrote:

Am 09.04.2018 um 23:45 schrieb Manasi Navare:

Thanks for initiating the discussion. Find my comments below:
On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:

On 2018-04-09 03:56 PM, Harry Wentland wrote:

=== A DRM render API to support variable refresh rates ===

In order to benefit from adaptive sync and VRR userland needs a way
to let us know whether to vary frame timings or to target a
different frame time. These can be provided as atomic properties on
a CRTC:
   * bool    variable_refresh_compatible
   * int    target_frame_duration_ns (nanosecond frame duration)

This gives us the following cases:

variable_refresh_compatible = 0, target_frame_duration_ns = 0
   * drive monitor at timing's normal refresh rate

variable_refresh_compatible = 1, target_frame_duration_ns = 0
   * send new frame to monitor as soon as it's available, if within
min/max of monitor's reported capabilities

variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
   * send new frame to monitor with the specified
target_frame_duration_ns

When a target_frame_duration_ns or variable_refresh_compatible
cannot be supported the atomic check will reject the commit.


What I would like is two sets of properties on a CRTC or preferably on
a connector:

KMD properties that UMD can query:
* vrr_capable -  This will be an immutable property for exposing
hardware's capability of supporting VRR. This will be set by the
kernel after
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
refresh rates supported.
These properties are optional and will be created and attached to the
DP/eDP connector when the connector
is getting initialized.

Mhm, aren't those properties actually per mode and not per CRTC/connector?


Properties that you mentioned above that the UMD can set before kernel
can enable VRR functionality
*bool vrr_enable or vrr_compatible
target_frame_duration_ns

Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
name/semantics.

We should use an absolute timestamp where the frame should be presented,
otherwise you could run into a bunch of trouble with IOCTL restarts or
missed blanks.

Also, a fixed target frame duration isn't suitable even for video
playback, due to drift between the video and audio clocks.

Time-based presentation seems to be the right approach for preventing
micro-stutter in games as well, Croteam developers have been researching
this.


I'm not sure if the driver can ever give a guarantee of the exact time a flip 
occurs. What we have control over with our HW is frame duration.


Sounds like you misunderstood what we mean here.

The driver does not need to give an exact guarantee that a flip happens 
at that time. It should just not flip before that specific time.


E.g. when we missed a VBLANK your approach would still wait for the 
specific amount of time, while an absolute timestamp would mean to flip 
as soon as possible after that timestamp passed.
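The difference between the two semantics can be sketched in a few lines (hypothetical helper names, durations in nanoseconds; this is an illustration of the argument, not driver code):

```python
def flip_time_duration_semantics(last_flip_ns, frame_duration_ns):
    # target_frame_duration_ns semantics: the next flip always waits a
    # full frame duration after the previous flip, even when we are
    # already running late because a vblank was missed.
    return last_flip_ns + frame_duration_ns

def flip_time_deadline_semantics(target_ns, now_ns):
    # Absolute-timestamp semantics: never flip before target_ns, but a
    # late frame goes out at the first opportunity after the deadline.
    return max(target_ns, now_ns)
```

If the previous flip slipped to t=45 ms instead of t=40 ms, duration semantics pushes the whole cadence back, while deadline semantics lets the pipeline catch up at the next target.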


As Michel noted that is also exactly what video players need.



Are Croteam devs trying to predict render times? I'm not sure how that would 
work. We've had bad experience in the past with games that try to do 
framepacing as that's usually not accurate and tends to lead to more problems 
than benefits.


As far as I understand that is just a regulated feedback system, e.g. 
the application records the timestamps of the last three frames (or so) 
and then uses that + a margin as the world time for the 3D rendering.


When the application has finished sending all rendering commands it 
sends the frame to be displayed exactly with that timestamp as well.


The timestamp when the frame was actually displayed is then used again 
as input to the algorithm.
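A minimal sketch of that feedback loop, assuming the application can sample the timestamps of actual flips (class name, margin value, and history depth are all illustrative):

```python
from collections import deque

class FramePacer:
    """Sketch of the regulated feedback loop described above: predict
    the next presentation time from the last few actual display
    timestamps plus a safety margin."""
    def __init__(self, margin_ns=2_000_000, history=3):
        self.displayed = deque(maxlen=history)
        self.margin_ns = margin_ns

    def record_displayed(self, ts_ns):
        # Feedback input: timestamp when a frame actually hit the display.
        self.displayed.append(ts_ns)

    def predict_next_present_ns(self, now_ns):
        # With too little history, fall back to "as soon as possible".
        if len(self.displayed) < 2:
            return now_ns + self.margin_ns
        d = list(self.displayed)
        # Average flip-to-flip interval of the recorded frames.
        avg = sum(b - a for a, b in zip(d, d[1:])) // (len(d) - 1)
        return d[-1] + avg + self.margin_ns
```

The predicted timestamp is what the application would use as world time for rendering and then attach to the frame when presenting it.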


Regards,
Christian.



Harry



Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Harry Wentland
On 2018-04-10 03:37 AM, Michel Dänzer wrote:
> On 2018-04-10 08:45 AM, Christian König wrote:
>> Am 09.04.2018 um 23:45 schrieb Manasi Navare:
>>> Thanks for initiating the discussion. Find my comments below:
>>> On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
 On 2018-04-09 03:56 PM, Harry Wentland wrote:
>
> === A DRM render API to support variable refresh rates ===
>
> In order to benefit from adaptive sync and VRR userland needs a way
> to let us know whether to vary frame timings or to target a
> different frame time. These can be provided as atomic properties on
> a CRTC:
>   * bool    variable_refresh_compatible
>   * int    target_frame_duration_ns (nanosecond frame duration)
>
> This gives us the following cases:
>
> variable_refresh_compatible = 0, target_frame_duration_ns = 0
>   * drive monitor at timing's normal refresh rate
>
> variable_refresh_compatible = 1, target_frame_duration_ns = 0
>   * send new frame to monitor as soon as it's available, if within
> min/max of monitor's reported capabilities
>
> variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
>   * send new frame to monitor with the specified
> target_frame_duration_ns
>
> When a target_frame_duration_ns or variable_refresh_compatible
> cannot be supported the atomic check will reject the commit.
>
>>> What I would like is two sets of properties on a CRTC or preferably on
>>> a connector:
>>>
>>> KMD properties that UMD can query:
>>> * vrr_capable -  This will be an immutable property for exposing
>>> hardware's capability of supporting VRR. This will be set by the
>>> kernel after
>>> reading the EDID mode information and monitor range capabilities.
>>> * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
>>> refresh rates supported.
>>> These properties are optional and will be created and attached to the
>>> DP/eDP connector when the connector
>>> is getting initialized.
>>
>> Mhm, aren't those properties actually per mode and not per CRTC/connector?
>>
>>> Properties that you mentioned above that the UMD can set before kernel
>>> can enable VRR functionality
>>> *bool vrr_enable or vrr_compatible
>>> target_frame_duration_ns
>>
>> Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
>> name/semantics.
>>
>> We should use an absolute timestamp where the frame should be presented,
>> otherwise you could run into a bunch of trouble with IOCTL restarts or
>> missed blanks.
> 
> Also, a fixed target frame duration isn't suitable even for video
> playback, due to drift between the video and audio clocks.
> 
> Time-based presentation seems to be the right approach for preventing
> micro-stutter in games as well, Croteam developers have been researching
> this.
> 

I'm not sure if the driver can ever give a guarantee of the exact time a flip 
occurs. What we have control over with our HW is frame duration.

Are Croteam devs trying to predict render times? I'm not sure how that would 
work. We've had bad experience in the past with games that try to do 
framepacing as that's usually not accurate and tends to lead to more problems 
than benefits.

Harry

> 


Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Harry Wentland
Adding Anthony and Aric who've been working on Freesync with DC on other OSes 
for a while.

On 2018-04-09 05:45 PM, Manasi Navare wrote:
> Thanks for initiating the discussion. Find my comments below:
> 
> On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
>> Adding dri-devel, which I should've included from the start.
>>
>> On 2018-04-09 03:56 PM, Harry Wentland wrote:
>>> === What is adaptive sync and VRR? ===
>>>
>>> Adaptive sync has been part of the DisplayPort spec for a while now and 
>>> allows graphics adapters to drive displays with varying frame timings. VRR 
>>> (variable refresh rate) is essentially the same, but defined for HDMI.
>>>
>>>
>>>
>>> === Why allow variable frame timings? ===
>>>
>>> Variable render times don't align with fixed refresh rates, leading to
>>> stuttering, tearing, and/or input lag.
>>>
>>> e.g. (rc = render completion, dr = display refresh)
>>>
>>> rc      B       C  D  E       F
>>> dr   A     B    C    C    D    E    F
>>>                      ^         ^
>>>                   frame      missed
>>>                  repeated    display
>>>                   twice      refresh
>>>
>>>
>>>
>>> === Other use cases of adaptive sync 
>>>
>>> Beside the variable render case, adaptive sync also allows adjustment of 
>>> refresh rates without a mode change. One such use case would be 24 Hz video.
>>>
> 
One of the advantages here is that when the render speed is slower than the 
display refresh rate, since we are stretching the vertical blanking interval 
the display adapters will follow a "draw fast and then go idle" approach. This 
gives power savings when the render rate is lower than the display refresh rate.

Are you talking about a use case, such as an idle desktop, where the renders 
are quite sporadic?

>  
>>>
>>>
>>> === A DRM render API to support variable refresh rates ===
>>>
>>> In order to benefit from adaptive sync and VRR userland needs a way to let 
>>> us know whether to vary frame timings or to target a different frame time. 
>>> These can be provided as atomic properties on a CRTC:
>>>  * bool variable_refresh_compatible
>>>  * int  target_frame_duration_ns (nanosecond frame duration)
>>>
>>> This gives us the following cases:
>>>
>>> variable_refresh_compatible = 0, target_frame_duration_ns = 0
>>>  * drive monitor at timing's normal refresh rate
>>>
>>> variable_refresh_compatible = 1, target_frame_duration_ns = 0
>>>  * send new frame to monitor as soon as it's available, if within min/max 
>>> of monitor's reported capabilities
>>>
>>> variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
>>>  * send new frame to monitor with the specified target_frame_duration_ns
>>>
>>> When a target_frame_duration_ns or variable_refresh_compatible cannot be 
>>> supported the atomic check will reject the commit.
>>>
> 
> What I would like is two sets of properties on a CRTC or preferably on a 
> connector:
> 
> KMD properties that UMD can query:
> * vrr_capable -  This will be an immutable property for exposing hardware's 
> capability of supporting VRR. This will be set by the kernel after 
> reading the EDID mode information and monitor range capabilities.
> * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max refresh 
> rates supported.
> These properties are optional and will be created and attached to the DP/eDP 
> connector when the connector
> is getting intialized.
> 

If we're talking about the properties from the EDID these might not necessarily 
align with a currently selected mode, which might have a refresh rate lower 
than the vrr_refresh_max, requiring us to cap it at that. In some scenarios we 
also might do low framerate compensation [1] where we do magic to allow the 
framerate to drop below the supported range.

I think if a vrr_refresh_max/min are exposed to UMD these should really be only 
for informational purposes, in which case it might make more sense to expose 
them through sysfs or even debugfs entries.

[1] https://www.amd.com/Documents/freesync-lfc.pdf
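The low framerate compensation idea can be sketched as: repeat each content frame enough times that every individual flip lands inside the panel's supported range. A hedged sketch of that arithmetic, not AMD's actual algorithm:

```python
def lfc_flip_plan(content_frame_ns, min_frame_ns, max_frame_ns):
    """If the content's frame duration is longer than the panel's
    maximum (i.e. the content rate is below the supported minimum
    refresh), split it into n equal flips that each fit the range.
    Names and approach are illustrative only."""
    if content_frame_ns <= max_frame_ns:
        return 1, content_frame_ns
    # Smallest repeat count that brings the per-flip duration into range.
    n = -(-content_frame_ns // max_frame_ns)   # ceil division
    per_flip = content_frame_ns // n
    if per_flip < min_frame_ns:
        raise ValueError("duration cannot be factored into supported range")
    return n, per_flip
```

For a 40-144 Hz panel, 24 fps content (41.7 ms frames) would be shown as two flips of ~20.8 ms each, both inside the supported range.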

> Properties that you mentioned above that the UMD can set before kernel can 
> enable VRR functionality
> *bool vrr_enable or vrr_compatible
> target_frame_duration_ns
> 
> The monitor only specifies the monitor range through EDID. Apart from this 
> should we also need to scan the modes and check
> if there are modes that have the same pixel clock and horizontal timings but 
> variable vertical totals?
> 

I'm not sure about the VRR spec, but for adaptive sync we should only consider 
the range limits specified in the EDID and allow adaptive sync for modes within 
that range.

> I have RFC patches for all the above mentioned. If we get a 
> consensus/agreement on the above properties and method to check
> monitor's VRR capability, I can submit those patches at least as RFC.
> 

That sounds great. I wouldn't mind trying those patches and then working 
together to arrive at something.

Re: RFC for a render API to support adaptive sync and VRR

2018-04-10 Thread Michel Dänzer
On 2018-04-10 08:45 AM, Christian König wrote:
> Am 09.04.2018 um 23:45 schrieb Manasi Navare:
>> Thanks for initiating the discussion. Find my comments below:
>> On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
>>> On 2018-04-09 03:56 PM, Harry Wentland wrote:

 === A DRM render API to support variable refresh rates ===

 In order to benefit from adaptive sync and VRR userland needs a way
 to let us know whether to vary frame timings or to target a
 different frame time. These can be provided as atomic properties on
 a CRTC:
   * bool    variable_refresh_compatible
   * int    target_frame_duration_ns (nanosecond frame duration)

 This gives us the following cases:

 variable_refresh_compatible = 0, target_frame_duration_ns = 0
   * drive monitor at timing's normal refresh rate

 variable_refresh_compatible = 1, target_frame_duration_ns = 0
   * send new frame to monitor as soon as it's available, if within
 min/max of monitor's reported capabilities

 variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
   * send new frame to monitor with the specified
 target_frame_duration_ns

 When a target_frame_duration_ns or variable_refresh_compatible
 cannot be supported the atomic check will reject the commit.

>> What I would like is two sets of properties on a CRTC or preferably on
>> a connector:
>>
>> KMD properties that UMD can query:
>> * vrr_capable -  This will be an immutable property for exposing
>> hardware's capability of supporting VRR. This will be set by the
>> kernel after
>> reading the EDID mode information and monitor range capabilities.
>> * vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max
>> refresh rates supported.
>> These properties are optional and will be created and attached to the
>> DP/eDP connector when the connector
>> is getting initialized.
> 
> Mhm, aren't those properties actually per mode and not per CRTC/connector?
> 
>> Properties that you mentioned above that the UMD can set before kernel
>> can enable VRR functionality
>> *bool vrr_enable or vrr_compatible
>> target_frame_duration_ns
> 
> Yeah, that certainly makes sense. But target_frame_duration_ns is a bad
> name/semantics.
> 
> We should use an absolute timestamp where the frame should be presented,
> otherwise you could run into a bunch of trouble with IOCTL restarts or
> missed blanks.

Also, a fixed target frame duration isn't suitable even for video
playback, due to drift between the video and audio clocks.

Time-based presentation seems to be the right approach for preventing
micro-stutter in games as well, Croteam developers have been researching
this.
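The drift argument can be quantified with back-of-the-envelope arithmetic (assuming a fixed nominal frame duration and a small clock-rate mismatch; the numbers are illustrative):

```python
def av_desync_after_ns(play_seconds, clock_error_ppm):
    """How far audio and video drift apart when video frames are shown
    for a fixed nominal duration while the audio clock runs off by
    clock_error_ppm parts per million."""
    return int(play_seconds * 1e9 * clock_error_ppm / 1e6)
```

With a 100 ppm mismatch, an hour of playback accumulates 360 ms of desync, well beyond typical lip-sync tolerances, so a player must keep nudging frame durations rather than rely on one fixed value.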


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: RFC for a render API to support adaptive sync and VRR

2018-04-09 Thread Christian König

Am 09.04.2018 um 23:45 schrieb Manasi Navare:

Thanks for initiating the discussion. Find my comments below:

On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:

Adding dri-devel, which I should've included from the start.

On 2018-04-09 03:56 PM, Harry Wentland wrote:

=== What is adaptive sync and VRR? ===

Adaptive sync has been part of the DisplayPort spec for a while now and allows 
graphics adapters to drive displays with varying frame timings. VRR (variable 
refresh rate) is essentially the same, but defined for HDMI.



=== Why allow variable frame timings? ===

Variable render times don't align with fixed refresh rates, leading to
stuttering, tearing, and/or input lag.

e.g. (rc = render completion, dr = display refresh)

rc      B       C  D  E       F
dr   A     B    C    C    D    E    F
                     ^         ^
                  frame      missed
                 repeated    display
                  twice      refresh



=== Other use cases of adaptive sync 

Beside the variable render case, adaptive sync also allows adjustment of 
refresh rates without a mode change. One such use case would be 24 Hz video.


One of the advantages here is that when the render speed is slower than the display 
refresh rate, since we are stretching the vertical blanking interval 
the display adapters will follow a "draw fast and then go idle" approach. This 
gives power savings when the render rate is lower than the display refresh rate.
  


=== A DRM render API to support variable refresh rates ===

In order to benefit from adaptive sync and VRR userland needs a way to let us 
know whether to vary frame timings or to target a different frame time. These 
can be provided as atomic properties on a CRTC:
  * boolvariable_refresh_compatible
  * int target_frame_duration_ns (nanosecond frame duration)

This gives us the following cases:

variable_refresh_compatible = 0, target_frame_duration_ns = 0
  * drive monitor at timing's normal refresh rate

variable_refresh_compatible = 1, target_frame_duration_ns = 0
  * send new frame to monitor as soon as it's available, if within min/max of 
monitor's reported capabilities

variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
  * send new frame to monitor with the specified target_frame_duration_ns

When a target_frame_duration_ns or variable_refresh_compatible cannot be 
supported the atomic check will reject the commit.


What I would like is two sets of properties on a CRTC or preferably on a 
connector:

KMD properties that UMD can query:
* vrr_capable -  This will be an immutable property for exposing hardware's 
capability of supporting VRR. This will be set by the kernel after
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max refresh rates 
supported.
These properties are optional and will be created and attached to the DP/eDP 
connector when the connector
is getting initialized.


Mhm, aren't those properties actually per mode and not per CRTC/connector?


Properties that you mentioned above that the UMD can set before kernel can 
enable VRR functionality
*bool vrr_enable or vrr_compatible
target_frame_duration_ns


Yeah, that certainly makes sense. But target_frame_duration_ns is a bad 
name/semantics.


We should use an absolute timestamp where the frame should be presented, 
otherwise you could run into a bunch of trouble with IOCTL restarts or 
missed blanks.


Regards,
Christian.



The monitor only specifies the monitor range through EDID. Apart from this 
should we also need to scan the modes and check
if there are modes that have the same pixel clock and horizontal timings but 
variable vertical totals?

I have RFC patches for all the above mentioned. If we get a consensus/agreement 
on the above properties and method to check
monitor's VRR capability, I can submit those patches at least as RFC.

Regards
Manasi



=== Previous discussions ===

https://lists.freedesktop.org/archives/dri-devel/2017-October/155207.html



=== Feedback and moving forward ===

I'm hoping to get some feedback on this or continue the discussion on how 
adaptive sync / VRR might look like in the DRM ecosystem. Once there are no 
major concerns or objections left we'll probably start creating some patches to 
sketch this out a bit better and see how it looks in practice.



Cheers,
Harry


Re: RFC for a render API to support adaptive sync and VRR

2018-04-09 Thread Manasi Navare
Thanks for initiating the discussion. Find my comments below:

On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
> Adding dri-devel, which I should've included from the start.
> 
> On 2018-04-09 03:56 PM, Harry Wentland wrote:
> > === What is adaptive sync and VRR? ===
> > 
> > Adaptive sync has been part of the DisplayPort spec for a while now and 
> > allows graphics adapters to drive displays with varying frame timings. VRR 
> > (variable refresh rate) is essentially the same, but defined for HDMI.
> > 
> > 
> > 
> > === Why allow variable frame timings? ===
> > 
> > Variable render times don't align with fixed refresh rates, leading to
> > stuttering, tearing, and/or input lag.
> > 
> > e.g. (rc = render completion, dr = display refresh)
> > 
> > rc      B       C  D  E       F
> > dr   A     B    C    C    D    E    F
> >                      ^         ^
> >                   frame      missed
> >                  repeated    display
> >                   twice      refresh
> > 
> > 
> > 
> > === Other use cases of adaptive sync 
> > 
> > Beside the variable render case, adaptive sync also allows adjustment of 
> > refresh rates without a mode change. One such use case would be 24 Hz video.
> >

One of the advantages here is that when the render speed is slower than the display 
refresh rate, since we are stretching the vertical blanking interval 
the display adapters will follow a "draw fast and then go idle" approach. This 
gives power savings when the render rate is lower than the display refresh rate.
 
> > 
> > 
> > === A DRM render API to support variable refresh rates ===
> > 
> > In order to benefit from adaptive sync and VRR userland needs a way to let 
> > us know whether to vary frame timings or to target a different frame time. 
> > These can be provided as atomic properties on a CRTC:
> >  * bool variable_refresh_compatible
> >  * int  target_frame_duration_ns (nanosecond frame duration)
> > 
> > This gives us the following cases:
> > 
> > variable_refresh_compatible = 0, target_frame_duration_ns = 0
> >  * drive monitor at timing's normal refresh rate
> > 
> > variable_refresh_compatible = 1, target_frame_duration_ns = 0
> >  * send new frame to monitor as soon as it's available, if within min/max 
> > of monitor's reported capabilities
> > 
> > variable_refresh_compatible = 0/1, target_frame_duration_ns = > 0
> >  * send new frame to monitor with the specified target_frame_duration_ns
> > 
> > When a target_frame_duration_ns or variable_refresh_compatible cannot be 
> > supported the atomic check will reject the commit.
> >

What I would like is two sets of properties on a CRTC or preferably on a 
connector:

KMD properties that UMD can query:
* vrr_capable -  This will be an immutable property for exposing hardware's 
capability of supporting VRR. This will be set by the kernel after 
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max refresh rates 
supported.
These properties are optional and will be created and attached to the DP/eDP 
connector when the connector
is getting initialized.

Properties that you mentioned above that the UMD can set before kernel can 
enable VRR functionality
*bool vrr_enable or vrr_compatible
target_frame_duration_ns

The monitor only specifies the monitor range through EDID. Apart from this 
should we also need to scan the modes and check
if there are modes that have the same pixel clock and horizontal timings but 
variable vertical totals?

I have RFC patches for all of the above. If we get a consensus/agreement on 
the above properties and the method to check the monitor's VRR capability, 
I can submit those patches at least as RFC.

Regards
Manasi

> > 
> > 
> > === Previous discussions ===
> > 
> > https://lists.freedesktop.org/archives/dri-devel/2017-October/155207.html
> > 
> > 
> > 
> > === Feedback and moving forward ===
> > 
> > I'm hoping to get some feedback on this or continue the discussion on how 
> > adaptive sync / VRR might look like in the DRM ecosystem. Once there are no 
> > major concerns or objections left we'll probably start creating some 
> > patches to sketch this out a bit better and see how it looks in practice.
> > 
> > 
> > 
> > Cheers,
> > Harry
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > 


Re: RFC for a render API to support adaptive sync and VRR

2018-04-09 Thread Harry Wentland
Adding dri-devel, which I should've included from the start.

On 2018-04-09 03:56 PM, Harry Wentland wrote:
> === What is adaptive sync and VRR? ===
> 
> Adaptive sync has been part of the DisplayPort spec for a while now and 
> allows graphics adapters to drive displays with varying frame timings. VRR 
> (variable refresh rate) is essentially the same, but defined for HDMI.
> 
> 
> 
> === Why allow variable frame timings? ===
> 
> Variable render times don't align with fixed refresh rates, leading to
> stuttering, tearing, and/or input lag.
> 
> e.g. (rc = render completion, dr = display refresh)
> 
> rc       B     C           D     E      F
> dr  A    B     C     C     D     E      F
> 
>                ^           ^
>             frame       missed
>             repeated    display
>             twice       refresh
> 
> 
> 
> === Other use cases of adaptive sync ===
> 
> Beside the variable render case, adaptive sync also allows adjustment of 
> refresh rates without a mode change. One such use case would be 24 Hz video.
> 
> 
> 
> === A DRM render API to support variable refresh rates ===
> 
> In order to benefit from adaptive sync and VRR userland needs a way to let us 
> know whether to vary frame timings or to target a different frame time. These 
> can be provided as atomic properties on a CRTC:
>  * bool variable_refresh_compatible
>  * int  target_frame_duration_ns (nanosecond frame duration)
> 
> This gives us the following cases:
> 
> variable_refresh_compatible = 0, target_frame_duration_ns = 0
>  * drive monitor at timing's normal refresh rate
> 
> variable_refresh_compatible = 1, target_frame_duration_ns = 0
>  * send new frame to monitor as soon as it's available, if within min/max of 
> monitor's reported capabilities
> 
> variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
>  * send new frame to monitor with the specified target_frame_duration_ns
> 
> When a target_frame_duration_ns or variable_refresh_compatible cannot be 
> supported the atomic check will reject the commit.
> 
> 
> 
> === Previous discussions ===
> 
> https://lists.freedesktop.org/archives/dri-devel/2017-October/155207.html
> 
> 
> 
> === Feedback and moving forward ===
> 
> I'm hoping to get some feedback on this or continue the discussion on how 
> adaptive sync / VRR might look like in the DRM ecosystem. Once there are no 
> major concerns or objections left we'll probably start creating some patches 
> to sketch this out a bit better and see how it looks in practice.
> 
> 
> 
> Cheers,
> Harry