On 2018-04-12 06:30 AM, Michel Dänzer wrote:
On 2018-04-11 11:26 PM, Leo Li wrote:
On 2018-04-11 04:39 AM, Michel Dänzer wrote:

Hmm. So either legacy or non-legacy clients won't work at all, or
they'll step on each other's toes, each clobbering the HW gamma LUT
set by the other.

I'm afraid neither of those alternatives sound like a good user
experience to me.

Consider on the one hand something like Night Light / redshift,
using legacy APIs to adjust colour temperature to the time of day.
On the other hand, another client using the non-legacy API for, say,
fine-tuning a display's advanced gamma capabilities.

Ideally, in this case the gamma LUTs from the legacy and non-legacy
APIs should be combined, such that the hardware LUT reflects both
the colour temperature set by Night Light / redshift and the
fine-tuning set by the non-legacy client. Is that feasible? If not,
can you explain a little why?

Hmm, combining LUTs could be feasible, but I don't think that's the
right approach.

LUTs can be combined through composition h(x) = f(g(x)), with some
interpolation involved. Once combined, the result can be set using the
non-legacy API, since that'll likely have the larger LUT size.
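
To sketch what I mean (just an illustration; the helper names, 16-bit
entries and linear interpolation are my own assumptions, not anything
taken from the driver):

#include <stdint.h>
#include <stddef.h>

/* Sketch only: compose two 1D LUTs, h(x) = f(g(x)), with linear
 * interpolation between the entries of f. */
static uint16_t lut_sample(const uint16_t *lut, size_t size, uint16_t x)
{
    /* Map x (0..65535) onto the LUT index range, ~16.16 fixed point. */
    uint32_t pos  = (uint32_t)x * (uint32_t)(size - 1);
    size_t   i    = pos >> 16;
    uint32_t frac = pos & 0xffff;
    int64_t  delta;

    if (i >= size - 1)
        return lut[size - 1];

    delta = (int64_t)lut[i + 1] - lut[i];
    return (uint16_t)(lut[i] + ((delta * frac) >> 16));
}

/* out has g_size entries: out[i] = f(g[i]). */
static void lut_compose(uint16_t *out,
                        const uint16_t *f, size_t f_size,
                        const uint16_t *g, size_t g_size)
{
    for (size_t i = 0; i < g_size; i++)
        out[i] = lut_sample(f, f_size, g[i]);
}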

But I think what you've mentioned raises a bigger issue of color
management conflicts in general. It doesn't have to be redshift vs.
monitor correction; what if more than two applications are contending
to set gamma? xrandr's brightness already conflicts with redshift,
as do some apps running on WINE.

What you're describing is an X11 design flaw, which we cannot do
anything about in the DDX driver.

What I want to avoid is repeating the situation we had before
xserver 1.19, where one could have either working per-window colormaps
and global gamma, or per-CRTC gamma via RandR, but not all at the same
time. (Before xserver 1.7, they would clobber the HW LUT from each
other.) I fixed this in
https://cgit.freedesktop.org/xorg/xserver/commit/?id=b4e46c0444bb09f4af59d9d13acc939a0fbbc6d6
by composing the LUTs.


For the small example at hand, the ideal solution is to have
redshift use the color transformation matrix (CTM), which will be
exposed as part of the non-legacy color API. It'll require modifying
redshift, but at least it won't conflict with anything gamma-related.
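
To illustrate why the CTM would be enough for redshift's use case: a
color-temperature shift reduces to per-channel gains, i.e. a diagonal
matrix. The gain values below are made up; redshift would derive the
real ones from the requested temperature.

/* Sketch only: a color-temperature adjustment as a diagonal CTM. */
static const double ctm_warm[3][3] = {
    { 1.00, 0.00, 0.00 },   /* red gain   */
    { 0.00, 0.85, 0.00 },   /* green gain */
    { 0.00, 0.00, 0.60 },   /* blue gain  */
};

static void apply_ctm(const double m[3][3], const double in[3], double out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r][0] * in[0] + m[r][1] * in[1] + m[r][2] * in[2];
}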

From past experience, it can take a long time (up to forever) for some
clients to be updated like this. E.g. it's unlikely at this point that
libraries such as SDL1 will ever be updated for the new API, so I'm
afraid users would run into things like
https://bugs.freedesktop.org/show_bug.cgi?id=27222 again.

(Besides, it would conflict with anything else using the same API, as
you described above, so it's not even obviously the ideal solution)


Fair enough, it's better to have the common redshift + monitor-adjustment
use case working now rather than pushing an entirely new API.


Jumping back on some patch 1 topics:

Are there ways to get arbitrarily sized arrays (up to 4096 elements)
from a client to the DDX driver?

Seems like the RRChangeOutputProperty request could work.
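
Roughly like this on the client side, assuming a hypothetical
"GAMMA_LUT" output property of 16-bit values (the property name and
layout are made up for the sketch):

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xrandr.h>

/* Push a LUT to the driver as a RandR output property. */
static void set_lut_property(Display *dpy, RROutput output,
                             const unsigned short *lut, int nelements)
{
    Atom prop = XInternAtom(dpy, "GAMMA_LUT", False);

    XRRChangeOutputProperty(dpy, output, prop,
                            XA_INTEGER, 16, PropModeReplace,
                            (const unsigned char *)lut, nelements);
    XFlush(dpy);
}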


I agree, it would definitely be nicer for clients not to have to worry
about DRM blobs at all.

Indeed. E.g. as a bonus, it would allow this to work even with a remote
display connection.
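
On the DDX side, the property data could then be turned into the blob
internally, something like the following sketch (error handling and the
actual property plumbing omitted; drm_fd, crtc_id and gamma_lut_prop_id
stand in for whatever state the driver already tracks):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Take the LUT received via the RandR property and hand it to the
 * kernel, so the client never sees a DRM blob ID. */
static int ddx_set_gamma_lut(int drm_fd, uint32_t crtc_id,
                             uint32_t gamma_lut_prop_id,
                             const struct drm_color_lut *lut,
                             size_t nelements)
{
    uint32_t blob_id = 0;
    int ret;

    ret = drmModeCreatePropertyBlob(drm_fd, lut,
                                    nelements * sizeof(*lut), &blob_id);
    if (ret)
        return ret;

    ret = drmModeObjectSetProperty(drm_fd, crtc_id, DRM_MODE_OBJECT_CRTC,
                                   gamma_lut_prop_id, blob_id);

    /* The kernel keeps its own reference while the property points
     * at the blob, so our handle can be dropped right away. */
    drmModeDestroyPropertyBlob(drm_fd, blob_id);
    return ret;
}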


I'm holding off on the more detailed discussion about the other patches
until there is a plan for this fundamental issue.



I'll take another look at RRChangeOutputProperty. Seems I missed the
'length' argument when I first went through it.

Once the blobs are gone, the other issues should be much simpler to
solve. I'll see if I can come up with some v2's.

A side question: how does xrandr display lengthy properties, like LUTs?
Can users still set them via --set?

Leo

