@Phyllis
I have added a section in the manual on Color Management. See if it's okay.
@All
I also attach the txt file for those who would like to correct and improve it.
Overview of color management

CinGG has no support for ICC color profiles or a global color management 
system to standardize and simplify the handling of the various files it 
works with. It does, however, have its own way of managing color spaces 
and conversions; let's see how.


Color Space

A color space is a subspace of the absolute CIE XYZ color space, which 
includes all possible, human-visible color coordinates. The absolute color 
space is device independent, while color subspaces are mapped to each 
individual device. A color space consists of primaries (gamut), a transfer 
function (gamma) and matrix coefficients (scaler).

- Color primaries: the gamut of the color space associated with the media, 
sensor or device (display, for example).
- Transfer characteristic function: converts linear values to non-linear values 
(e.g. logarithmic). It is also called Gamma correction.
- Color matrix function (scaler): converts from one color model to another. RGB 
<--> YUV; RGB <--> YCbCr; etc.
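
As a sketch of what the transfer characteristic does, here is the sRGB 
transfer curve (per IEC 61966-2-1) in Python; the function names are only 
illustrative, not CinGG API:

```python
# Sketch of the sRGB transfer function ("gamma"), per IEC 61966-2-1.
# Function names are illustrative, not CinGG API.

def srgb_encode(linear: float) -> float:
    """Linear light in [0, 1] -> non-linear sRGB-encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """Inverse transfer: non-linear sRGB value -> linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# 18% gray in linear light is stored as a much brighter code value:
print(round(srgb_encode(0.18), 3))  # ~0.461
```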

Camera sensors are always RGB and linear. Generally, those values get 
converted to YUV in the files they produce, because YUV is a more efficient 
format thanks to chroma subsampling and yields smaller files (even if of 
lower quality, i.e. part of the color data is lost). The conversion is 
nonlinear, so it involves the "transfer characteristic", or gamma. The 
encoder receives YUV input and compresses it, storing the transfer function 
as metadata, if provided.
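
As an illustrative sketch of the matrix (scaler) step described above, the 
following converts full-range 8-bit R'G'B' to Y'CbCr with the standard 
BT.709 luma coefficients; the helper functions themselves are hypothetical:

```python
# Sketch of the "color matrix" (scaler) step: full-range 8-bit R'G'B'
# to Y'CbCr using BT.709 luma coefficients. The coefficients are the
# standard ones; the helpers are only illustrative.

def clamp8(v: float) -> int:
    """Round and clamp to the 8-bit range 0..255."""
    return max(0, min(255, round(v)))

def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple:
    """Full-range BT.709 R'G'B' (0..255) -> Y'CbCr (0..255)."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556 + 128   # 1.8556 = 2 * (1 - 0.0722)
    cr = (r - y) / 1.5748 + 128   # 1.5748 = 2 * (1 - 0.2126)
    return clamp8(y), clamp8(cb), clamp8(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_ycbcr(255, 0, 0))      # red   -> (54, 99, 255)
```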


CMS

A color management system describes how the colors of images/videos are 
translated from their current color space to the color space of other 
devices, i.e. monitors. The basic problem is to display the same colors on 
every device we use for editing and on every device on which our work will 
be watched. Calibrating and keeping our own hardware under control is 
feasible; but once our work goes out to the internet, DVD, etc., it is 
impossible to guarantee the same colors we worked with, and we can only 
hope the alterations are not too many or too severe. If the basis we have 
set is consistent, however, the alterations should be acceptable, because 
they do not result from the sum of further errors at each step. There are 
two types of color management: Display referred (DRC) and Scene referred 
(SRC).
- DRC is based on having a calibrated monitor. What it displays is 
considered correct and becomes the basis of our color grading. The hope is 
that the colors of the final render won't change too much when displayed 
on other hardware or in other contexts. Be careful though: there must be a 
color profile for each type of color space you choose for your monitor. If 
we work for the web we have to set the monitor to sRGB with its color 
profile; if we output for HDTV we have to set the monitor to Rec.709 with 
its color profile; for 4K, to Rec.2020; for Cinema, to DCI-P3; etc.
- SRC instead uses three steps: 1. The input color space: whatever it is, 
it can be converted manually or automatically to a color space of your 
choice. 2. The color space of the timeline: we can choose and set the 
color space in which to work. 3. The color space of the output: we can 
choose the color space of the output (on other monitors or of the final 
rendering). ACES and OpenColorIO have an SRC workflow. NB: the monitor 
must still be calibrated to avoid unwanted color shifts.
- There is also a third type of CMS: the one based on LUTs. In practice, 
the SRC workflow is followed through the appropriate 3D LUTs, instead of 
relying on the internal (automatic) management of the program. A LUT 
matched to the camera is used to display its footage correctly in the 
timeline, and another LUT is used for the final output. Using LUTs, 
however, always involves preparation, selection of the appropriate LUT and 
post-correction. Also, since they are fixed conversion tables, they can 
always result in clipping and banding.


Monitors

Since CinGG has no CMS, it becomes essential to have a monitor calibrated 
and set to sRGB, which is exactly the output displayed in the program's 
timeline. These are the possible cases:

timeline: sRGB --> monitor: sRGB    (we get correct color reproduction)
timeline: sRGB --> monitor: Rec.709 (we get slightly dark colors, because 
of the different gamma)
timeline: sRGB --> monitor: DCI-P3  (we get over-saturated, dark colors, 
because of the different gamma and the bigger gamut)
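
A small numeric sketch of the second case, assuming an idealized display 
with a pure 2.4 gamma and ignoring black-level handling (function names 
are illustrative only):

```python
# Numeric sketch of the sRGB-timeline-on-a-Rec.709-monitor case,
# assuming an idealized pure gamma-2.4 display and ignoring the
# display's black-level handling.

def srgb_decode(encoded: float) -> float:
    """Non-linear sRGB value -> intended linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

def display_gamma24(encoded: float) -> float:
    """Light actually emitted by a pure gamma-2.4 display."""
    return encoded ** 2.4

v = 0.5  # a mid-tone code value stored in the sRGB timeline
print(round(srgb_decode(v), 3))      # intended light: ~0.214
print(round(display_gamma24(v), 3))  # displayed light: ~0.189, darker
```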


CMS Pipeline

INPUT --> DECODING/PROCESSING --> OUTPUT/PLAYBACK --> DISPLAY --> ENCODING

Input: the color space and color depth of the source file; better if 
combined with an ICC profile.
Decoding: how CinGG transforms and uses the input file (it is a temporary 
transformation, for use by the internal/ffmpeg engine and plugins).
Output: our project settings for the output. In CinGG such a signal is 
8-bit sRGB, but it can also be 8-bit YUV during continuous playback.
Display: how the monitor, equipped with its own color space (and profiled 
with ICC or LUT), displays the signal that reaches it, which is what we 
see. The signal reaching the display is also mediated by the graphics card 
and the operating system's CMS, if any.
Encoding: the final rendering stage, where we set not only formats and 
codecs but also color space, color depth and color range.


How CinGG works

decoding/playback:
Video is decoded to an internal representation (see Settings/Format/Color 
model). The internal format is unpacked, 3..4 values per pixel. CinGG has 
six internal pixel formats: RGB(A) 8-bit, YUV(A) 8-bit and RGB(A)_FLOAT 
32-bit. CinGG will configure the frame buffer for your resulting video to 
be able to hold data in that color model. Then, for each plugin, it will 
pick the variant of the algorithm coded for that model.
CinGG automatically converts the source file to the set color model (in a 
buffer; the original is not touched!). Even if the input color model 
matches what we set in Settings/Format/Color model, there will always be a 
first conversion, because CinGG works internally (in the buffer) at 32-bit 
in RGB. For playback, CinGG has to convert the frame to a format 
acceptable to the output device, i.e. 8-bit sRGB. In practice, the decoded 
file follows two separate paths: conversion to FLOAT for all internal 
calculations in the temporary (including other conversions for plugins, 
etc.), while at the same time the result in the temporary is converted to 
8-bit sRGB for on-screen display. CinGG uses X11, and X11 is RGB only; it 
is used to draw the "refresh frame", so a single step is always drawn in 
RGB. Continuous playback, on the other hand, can also be YUV for 
efficiency reasons.

Color range:
One problem with the YUV color model is the "YUV color range". This can 
create a visible color shift in the Compositor, usually showing up as 
grayish versus over-bright. The cause of the issue is that X11 is RGB only 
and is used to draw the "refresh" frame, so a single step is always drawn 
in RGB. To turn a YUV frame into RGB, a color model transfer function is 
used. The math equations are based on color_space and color_range. In this 
case, color_range is the cause of the "gray" offset. The YUV MPEG color 
range (limited, or TV) is 16..235 for Y and 16..240 for UV, while the YUV 
JPEG color range (full) is 0..255. As a result, 16-16-16 is seen as pure 
black in MPEG but as gray in JPEG, and all playback comes out brighter and 
more grayish. This can be fixed by forcing appropriate conversions via the 
ColorSpace plugin.
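
A sketch of the kind of remap such a conversion performs for luma; the 
helper name is hypothetical, but the 16..235 to 0..255 arithmetic is the 
standard limited-to-full expansion:

```python
# Sketch of the limited-to-full range expansion for luma: Y in 16..235
# is remapped to 0..255. Skipping this step is what makes limited-range
# material look grayish (16-16-16 drawn as dark gray instead of black).

def limited_to_full_luma(y: int) -> int:
    """Expand limited-range luma (16..235) to full range (0..255)."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

print(limited_to_full_luma(16))   # limited black -> 0 (true black)
print(limited_to_full_luma(235))  # limited white -> 255
```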

Plugins:
On the timeline, all plugins see the frames only in the internal pixel 
format and modify it as needed (in the temporary). Some effects work 
differently depending on the colorspace: sometimes pixel values are 
converted to float, sometimes to 8-bit, for an effect. In addition, 
"playback single step" and "plugins" cause the render to be in the session 
color model, while continuous playback with no plugins tries to use the 
file's best color model for the display (for speed). As mentioned, each 
plugin we add converts and uses the color information in its own way. Some 
limit the gamut and depth of color by clipping (i.e. Histogram); others 
convert and reconvert color spaces for their convenience; others introduce 
artifacts and posterization; etc. For example, the Chroma Key (HSV) plugin 
converts any signal to HSV for its operation.
If we want to better control and target this color management in CinGG, we 
can take advantage of its internal ffmpeg engine: there is an optional 
feature that can be used via .opts lines for ffmpeg-decoded files. This is 
done via the video_filter=colormatrix=... ffmpeg plugin. There may be 
other good plugins (lut3d...) that can also accomplish a desired color 
transform. This .opts feature affects the file colorspace on a 
file-by-file basis, although in principle it should be possible to set up 
a histogram plugin or any of the F_lut* plugins to remap the colortable, 
either by table or by interpolation.
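
As a sketch only (the right source and destination matrices depend on the 
footage), such an .opts line could use the standard ffmpeg colormatrix 
filter syntax, for example to force a BT.601-to-BT.709 matrix conversion:

```
video_filter=colormatrix=src=bt601:dst=bt709
```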

Conversion:
Any conversion is done with approximate mathematical calculations and 
always involves a loss of data, more or less visible, because an exact 
value must be interpolated when mapping it into the other color space. 
Obviously, when we use floating-point numbers to represent values, these 
losses become small, close to negligible. So the choice comes down to 
either keeping the source color model while processing or converting to 
FLOAT, which in addition to producing fewer errors should also minimize 
the number of conversions, being congruent with the program's internal 
model. The use of FLOAT, however, is much heavier on the system than the 
streamlined YUV. Color conversions are mathematical operations; for 
example, to turn a YUV frame into RGB, a color model matrix function is 
used. The math equations are based on color_space and color_range. Since 
the majority of sources are YUV, this conversion is very common, and it is 
important to set these parameters to optimize playback speed and correct 
color representation.
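
A tiny sketch of why FLOAT round trips are nearly lossless while 8-bit 
ones are not (the gamma exponent 2.2 and the helper are illustrative 
only): quantizing to 8 bits after each conversion discards data.

```python
# Sketch: the same conversion round trip through an 8-bit intermediate
# vs. in floating point. Quantizing after each step loses data.

def to_8bit(x: float) -> float:
    """Quantize a 0..1 value to the nearest 8-bit level and back."""
    return round(x * 255) / 255

x = 0.5
# Gamma conversion and back through an 8-bit intermediate:
y = to_8bit(x ** 2.2)
back_8bit = to_8bit(y ** (1 / 2.2))
# The same round trip in floating point:
back_float = (x ** 2.2) ** (1 / 2.2)

print(back_8bit)   # ~0.498: the original 0.5 is gone
print(back_float)  # 0.5 within float precision
```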

Encoding:
Finally, encoding converts to the colorspace required by the codec.

Workflow

Let us give an example of a color workflow in CinGG. We start with a 
source of YUV type (probably YCbCr); this is decoded and converted to the 
color model chosen for the project, resulting in a temporary. Various jobs 
and conversions are done in FLOAT math, and the result remains in the 
chosen color model until further action. In addition, the temporary is 
always converted to 8-bit sRGB for monitor display only. If we apply the 
ChromaKey (HSV) plugin, the temporary is converted to HSV (in FLOAT math) 
and the result in the temporary becomes HSV. If we do other jobs, the 
temporary is again converted to the set color model (or others, if 
required) to perform the other actions. At the end of all jobs, the 
resulting temporary will be the basis of the rendering, which will be 
carried out according to the choice of codecs in the render window 
(wrench), therefore regardless of the color model set in the project. If 
we have worked well, the final temporary will retain as much of the source 
color data as possible and will be a good basis for encoding of whatever 
type.

Attachment: cms.tar.gz
Description: application/gzip

-- 
Cin mailing list
[email protected]
https://lists.cinelerra-gg.org/mailman/listinfo/cin
