On Thu, Aug 29, 2013 at 7:26 PM, Greg Hackmann <ghackmann at google.com> wrote:
> On Thu, Aug 29, 2013 at 12:36 AM, Ville Syrjälä
> <ville.syrjala at linux.intel.com> wrote:
>>
>> On Wed, Aug 28, 2013 at 11:51:59PM -0400, Rob Clark wrote:
>> > On Wed, Aug 28, 2013 at 9:51 PM, Greg Hackmann <ghackmann at google.com>
>> > wrote:
>>
>> > > 1.  The API is geared toward updating one object at a time.  Android's
>> > > graphics stack needs the entire screen updated atomically to avoid
>> > > tearing, and on some SoCs to avoid wedging the display hardware.  Rob
>> > > Clark's atomic modeset patchset worked, but the copy/update/commit
>> > > design meant the driver had to keep a lot more internal state.
>> > >
>> >
>> > I'm not entirely sure how to avoid that, because on at least some hw
>> > we need to have the entire new state in order to validate whether it
>> > is possible.
>>
>> I guess the only reason adf is a bit different is that there can only be
>> one custom (driver specific!) blob in the ioctl, so the driver is just
>> free to dump that directly into whatever internal structure it uses to
>> store the full state. So it just frees you from the per-prop state
>> buildup process.
>
>
> Right, the difference is that clients send the complete state they want
> rather than deltas against the current state.  This means the driver doesn't
> have to track its current state and duplicate it at the beginning of the
> flip operation, which is a minor pain on hardware with a ton of knobs to
> twist across different hardware blocks.
>
> Maybe the important question is whether incremental atomic updates are a
> use-case that clients need in practice.  SurfaceFlinger tells the HW
> composer each frame "here's a complete description of what I want onscreen,
> make it so" and IIRC Weston works the same way.

Weston works this way (although per-display; it handles independent
displays, each with its own display loop).

But X does things more independently.  Effective use of overlays is a
bit difficult with X, although at least a couple of drivers for hw
without a dedicated hw cursor do use overlays/planes to implement the
hw cursor.

> I used a blob rather than property/value pairs because the composition is
> going to be inherently device specific anyway.  Display controllers have
> such different features and constraints that you'd end up with each driver
> exposing a bunch of hardware-specific properties, and I'm not convinced
> that's any better than just letting the driver dictate how the requests are
> structured (modulo a handful of hardware-agnostic properties).  I'm not
> strongly tied to blobs over properties but I think the former's easier on
> driver developers.

Weston (or other upcoming wayland compositors) use KMS in a relatively
generic way, so you don't need a driver-specific userspace component
handling the display.  This gets rid of a lot of KMS code which is
currently duplicated in each xf86-video-foo.

The idea w/ property-based "atomic" KMS is that you would have
standard properties for all the generic/core KMS fields (mode, x/y,
w/h, etc), and driver-custom or semi-custom properties for things that
are more hw specific.  I.e. if multiple different pieces of hw support
some particular feature, for example a solid-fill bg color, they would
align on the same property name.  In userspace you could query the
properties on the plane/crtc/etc to see which custom things are
supported.  I guess you could think of it as the display/kms
equivalent of GL extensions.

There are some things which are hard to express, like
upscale/downscale/bandwidth limits.  So possibly we eventually need to
define some userspace plugin API where some hw specific module can
help make better decisions about which surfaces to assign to which
planes.  But I think we want to try to share as much code in common as
possible.

BR,
-R

>>
>> But if the idea would be totally driver specific anyway, I wouldn't
>> even bother with any of this fancy framework stuff. Just specify some
>> massive driver specific structure with a custom ioctl and call it a
>> day.
>
>
> I disagree -- this is basically what vendors do today to support Android,
> and there's a lot of common scaffolding that could go into a framework.  The
> custom ioctl handlers all look reasonably close to this:
>
> 1) import dma-bufs and fences from their respective fds
> 2) map the buffers into the display device
> 3) validate the buffer sizes against their formats and width/stride/height
> 4) validate the requested layout doesn't violate hardware constraints
> 5) hand everything off to a worker that waits for the buffers' sync fences
> to fire
> 6) commit the requested layout to hardware
> 7) unmap and release all the buffers that just left the screen
> 8) advance the sync timeline
>
> with some leeway on the ordering of (2)-(4) and (7)-(8).  ADF handles all of
> this except for (4) and (6), which are inherently hardware-specific and
> delegated to driver ops.
