On 11/17/2014 10:46 AM, Simon Budig wrote:
> The following is my understanding; when pippin answers, his answers have
> more authority than mine.
I appreciate your answers, but the points you make aren't actually
relevant to the questions that I wanted to ask Pippin. This is my fault.
In an effort to clarify what I think he's saying, I seem to have just
opened the door for more miscommunication.
> Elle Stone (ellest...@ninedegreesbelow.com) wrote:
>> Putting aside coding considerations that might affect other software that
>> uses babl and GEGL, here's my understanding of your current plan:
> A slight preface here. I don't consider it important to focus on the
> *storage* of the pixel data, as in the actual bulk memory for the pixel
> data.
If you choose to *store* the user's RGB data using chromaticities not of
the user's choosing, that suggests that you also intend to *edit* the
user's RGB data using chromaticities not of the user's choosing.
>> 1. Upon opening an image, the image will be converted to unbounded sRGB.
> I don't think that this is decided yet; I actually consider it unlikely
> at the moment. I think it might be more likely to have it sitting in
> memory as userRGB. But again, this is an implementation detail, where I
> assume that the floating point math is sufficiently precise to avoid
> visible rounding errors.
This is not just an implementation detail. The user has the right to
control what RGB working space is used when performing RGB edits on the
user's own RGB data.
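Simon's floating point point can be made concrete. Below is a minimal sketch (illustrative Python, not babl code) using the commonly published D65 sRGB and AdobeRGB 3x3 matrices: a color outside the sRGB gamut survives a trip through *unbounded* linear sRGB as negative channel values rather than being clipped, and converts back intact up to float rounding.

```python
# Illustrative sketch, NOT babl code. Matrices are the commonly
# published D65 linear-RGB-to-XYZ matrices for sRGB and AdobeRGB.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
ADOBE_TO_XYZ = [
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6273, 0.0753],
    [0.0270, 0.0707, 0.9911],
]

def mat_vec(m, v):
    # 3x3 matrix times 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inverse3(m):
    # Cofactor-expansion inverse of a 3x3 matrix.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ]

XYZ_TO_SRGB = inverse3(SRGB_TO_XYZ)
XYZ_TO_ADOBE = inverse3(ADOBE_TO_XYZ)

# Pure AdobeRGB green lies outside the sRGB gamut: in *unbounded* sRGB
# it simply picks up negative components instead of being clipped.
adobe_green = [0.0, 1.0, 0.0]
srgb = mat_vec(XYZ_TO_SRGB, mat_vec(ADOBE_TO_XYZ, adobe_green))
back = mat_vec(XYZ_TO_ADOBE, mat_vec(SRGB_TO_XYZ, srgb))

print(srgb)  # red and blue components go negative, nothing is clipped
print(back)  # recovers [0, 1, 0] up to float rounding
```

The sketch only shows that the round trip is numerically lossless; it says nothing about which space the *edits* should run in, which is the point in dispute.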
>> 2. For all chromaticity dependent RGB editing operations:
>> * Either the image will be re-encoded using the user's chosen RGB working
>> space chromaticities, and then the operation will be performed.
>> * Or else the image will be converted to CIELAB and then the operation
>> will be performed.
> Conceptually the image won't be converted as a whole. A
> pixel-data-flow-graph will be set up that considers the region of
> interest. This really is an implementation detail.
> For all chromaticity dependent RGB editing operations the pixels will be
> re-encoded in the format the operation requests, which most likely will
> be userRGB.
If the user opens a BetaRGB image or an AdobeRGB image or whatever, the
user really does expect that *all* RGB edits will be done using the
user's chosen RGB working space chromaticities.
>> 3. For all chromaticity *in*dependent RGB editing operations, the image will
>> be converted to unbounded sRGB for processing.
> For all chromaticity *in*dependent RGB editing operations the pixels will be
> converted to the format the operation requests, which most likely will
> be sRGB or XYZ.
This brings me to the question that I was planning on asking Pippin.
If Pippin *doesn't* intend to convert the image to unbounded sRGB before
performing any RGB editing operations, then my question is irrelevant.
So the "pre question" is: Does Pippin intend that the image will be
converted to unbounded sRGB before performing chromaticity *in*dependent
RGB editing operations?
If Pippin's answer to the "pre question" is yes, here's the question I
wanted to ask Pippin:
1. We've agreed that many RGB editing operations really are chromaticity
dependent and therefore should be done using the user's chosen RGB
working space chromaticities.
2. We've agreed that for chromaticity *in*dependent editing operations,
the resulting colors as located in the XYZ reference color space are the
same *regardless* of the chromaticities used to encode the RGB data
before performing the operation.
Given the above two points of agreement, what is the point of EVER
converting the image to unbounded sRGB?
I can list a whole bunch of disadvantages. But I don't understand what
the advantages are supposed to be.
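The two points of agreement can be checked numerically. Below is an illustrative sketch (Python, not GEGL code, using the commonly published D65 sRGB/AdobeRGB matrices): a linear per-channel operation such as exposure scaling commutes with the 3x3 conversion and so lands on the same XYZ color no matter which RGB encoding it runs in, while a nonlinear per-channel operation does not.

```python
# Illustrative sketch, NOT GEGL code. Matrices are the commonly
# published D65 linear-RGB-to-XYZ matrices for sRGB and AdobeRGB.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
ADOBE_TO_XYZ = [
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6273, 0.0753],
    [0.0270, 0.0707, 0.9911],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inverse3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ]

def adobe_to_srgb(v):
    return mat_vec(inverse3(SRGB_TO_XYZ), mat_vec(ADOBE_TO_XYZ, v))

pix = [0.2, 0.7, 0.4]  # a pixel in linear AdobeRGB

# Chromaticity-INdependent op: per-channel exposure scaling. Scaling
# commutes with the matrix, so the XYZ result is identical whether the
# op runs in AdobeRGB or in sRGB.
scale = lambda v: [0.5 * c for c in v]
xyz_in_adobe = mat_vec(ADOBE_TO_XYZ, scale(pix))
xyz_in_srgb = mat_vec(SRGB_TO_XYZ, scale(adobe_to_srgb(pix)))
indep_diff = max(abs(a - b) for a, b in zip(xyz_in_adobe, xyz_in_srgb))

# Chromaticity-DEPENDENT op: per-channel squaring (e.g. multiplying a
# layer by itself). Squaring does NOT commute with the matrix, so the
# XYZ result depends on which encoding the op ran in.
square = lambda v: [c * c for c in v]
xyz_sq_adobe = mat_vec(ADOBE_TO_XYZ, square(pix))
xyz_sq_srgb = mat_vec(SRGB_TO_XYZ, square(adobe_to_srgb(pix)))
dep_diff = max(abs(a - b) for a, b in zip(xyz_sq_adobe, xyz_sq_srgb))

print(indep_diff)  # essentially zero (float rounding only)
print(dep_diff)    # clearly nonzero
```

In other words: for the independent operations the sRGB detour changes nothing except adding conversions, and for the dependent operations it would change the answer, which is exactly why those must run in userRGB.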
I thought Jon Nordby had made it clear (more than once) that for all RGB
editing operations, the user's chosen RGB working space's chromaticities
would be used.
But it's hard to tell whether Pippin agrees with Jon Nordby.
So I'm asking Pippin as directly as possible whether he ever intends
that RGB editing will be done using sRGB chromaticities instead of the
user's chosen RGB working space's chromaticities.
If Pippin's answer is "yes", then the next question is "Why?".
> (Sorry for repeating, but this is an important point: in a given image
> context "userRGB" really is not conceptually different from sRGB or XYZ,
My apologies, but this statement makes no sense unless you mean the
trivially obvious point that RGB color data can be losslessly converted
to various unbounded color spaces including the XYZ reference color
space, of course putting aside floating point precision limitations and
sundry other sources of computational loss of precision.
> it gets slightly more complicated when two images with different
> user chromaticities are to be combined.)
Again, let the *user* decide what RGB working space to use for combining
images. Except for the few compositing operations that are chromaticity
independent, results will be different in different RGB working spaces.
>> 4. When converting to XYZ/CIELAB/etc., the image will first be converted to
>> unbounded sRGB.
Hmm, based on what Pippin has said previously, he wants to use linear
gamma sRGB as a sort of pseudo-PCS.
This seems to be for coding convenience, to take advantage of the
current hard-coded babl conversion from sRGB to XYZ to LAB, and probably
also from sRGB to Y.
So the "not necessarily" is a bit surprising.
And again, the babl hard-coded conversions that are specific to sRGB can
be generalized to use the user's chosen RGB working space's Y and XYZ
values, which automatically covers color-managed sRGB images.
> If specific userRGB to XYZ/whatever transforms are
> available to babl it will make use of them. This will likely be the case
> for e.g. AdobeRGB.
OK, if I understand what you just said, this is getting silly.
I think you are indicating a willingness to write hard-coded AdobeRGB
editing operations and put them right alongside the hard-coded sRGB
editing operations.
And maybe you'd also add hard-coded ProPhotoRGB, hard-coded
WideGamutRGB, hard-coded BetaRGB, and so forth.
And then you'd be faced with the problem that there are multiple
variants of these profiles floating around in open source image editing
software.
Which would create the problem of deciding whether the user's chosen RGB
working space was "close enough" to one of your hard-coded versions to
warrant using the hard-coded paths.
And you'd still need generalized code for the RGB working spaces for
which you haven't provided hard-coded paths.
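To make the "close enough" problem concrete, here is a hypothetical sketch (this is NOT babl code; the table, variant values, and tolerances are all invented for illustration): deciding whether a profile's primaries match a hard-coded fast path comes down to an arbitrary tolerance, and either answer changes which code path, and potentially which numbers, the user gets.

```python
# Hypothetical illustration of fuzzy-matching a profile's primaries
# against a hard-coded fast path. NOT babl code; the tolerance values
# are invented for the example.

HARDCODED_PRIMARIES = {
    # (red xy, green xy, blue xy) -- the commonly published AdobeRGB values
    "AdobeRGB": ((0.6400, 0.3300), (0.2100, 0.7100), (0.1500, 0.0600)),
}

def matches(primaries, candidate, tol):
    # True if every chromaticity coordinate agrees within tol.
    return all(
        abs(a - b) <= tol
        for prim, cand in zip(primaries, candidate)
        for a, b in zip(prim, cand)
    )

# A profile variant whose red primary differs in the fourth decimal:
variant = ((0.6397, 0.3305), (0.2100, 0.7100), (0.1500, 0.0600))

for tol in (1e-4, 1e-3):
    hit = matches(HARDCODED_PRIMARIES["AdobeRGB"], variant, tol)
    print(f"tol={tol}: fast path {'taken' if hit else 'skipped'}")
```

With a tolerance of 1e-4 the variant misses the fast path; with 1e-3 it takes it. Every hard-coded space would need such a threshold, and the generalized path would still be required as a fallback, which is the argument for having only the generalized path.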
You really do *not* want to walk down that particular coding path.
The only sensible thing to do is use LCMS to retrieve the user's chosen
RGB working space's Y and XYZ values for the handful of editing
operations that currently use hard-coded sRGB Y and XYZ values, and then
generalize those operations, even for color-managed sRGB images.
I've already submitted the code to retrieve the user's RGB working space
Y and XYZ values. I've already told you where in the code base all the
hard-coded sRGB values that need to be generalized are located.
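The generalization being described is standard colorimetry, and small. A minimal sketch (illustrative Python, not the submitted patch): build the RGB-to-XYZ matrix, and hence the per-channel Y luminance weights, from *any* working space's primaries and white point, rather than hard-coding the sRGB numbers; sRGB primaries and a D65 white are used here only to check the result.

```python
# Illustrative sketch, NOT the submitted patch: derive RGB->XYZ (and the
# Y weights) from a working space's chromaticities and white point.

def xy_to_xyz(x, y):
    # A chromaticity (x, y) as an XYZ vector scaled so that Y = 1.
    return [x / y, 1.0, (1.0 - x - y) / y]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inverse3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ]

def rgb_to_xyz_matrix(red_xy, green_xy, blue_xy, white_xyz):
    # Columns are the primaries' XYZ directions; scale each column so
    # that RGB = (1, 1, 1) maps exactly to the white point.
    cols = [xy_to_xyz(*red_xy), xy_to_xyz(*green_xy), xy_to_xyz(*blue_xy)]
    prim = [[cols[j][i] for j in range(3)] for i in range(3)]
    s = mat_vec(inverse3(prim), white_xyz)
    return [[prim[i][j] * s[j] for j in range(3)] for i in range(3)]

# Sanity check with sRGB primaries and a D65 white (Y normalized to 1):
# the middle row reproduces the familiar sRGB luminance weights.
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                      [0.9505, 1.0, 1.0890])
y_weights = M[1]
print(y_weights)  # close to 0.2126, 0.7152, 0.0722
```

Feed the same function the primaries and white point retrieved from the user's profile (e.g. via LCMS) and the "hard-coded sRGB Y and XYZ" operations generalize to every matrix working space at once, color-managed sRGB included.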
Sometimes I'm tempted to think that maybe the real problem is that the
babl/GEGL/GIMP devs don't know how to code up generalized functions
using Y and XYZ retrieved from the user's chosen RGB working space. But
the devs are too smart; this can't possibly be the problem.
gimp-user-list mailing list
List address: firstname.lastname@example.org
List membership: https://mail.gnome.org/mailman/listinfo/gimp-user-list
List archives: https://mail.gnome.org/archives/gimp-user-list