\chapter{How some stuff works}%
\label{cha:how_stuff_works}

This section describes in detail some areas of \CGG{} to help explain how things work.

\section{Copy/Paste and Highlight Usage}%
\label{sec:copy_paste_highlight_usage}
\index{copy/paste}

There are 3 types of copy/cut and paste methods which exist in the X Window System, and most modern programs use 2 of them.  The 3 cases are:

\begin{description}
    \item[cut\_buffers 0-7] these are obsolete but they still work and are the simplest to use.
    \item[highlighting] called primary selection; almost all clipboard programs only use this.
    \item[copy and paste] called secondary (or clipboard) selection. Some more modern programs use Ctrl-C/Ctrl-X and Ctrl-V for this (some use other keys or qualifiers, like Shift).
\end{description}

\subsection*{How Copy/Paste works:}%
\label{sub:how_copy_paste_works}

All of the methods use window \textit{properties} to attach data, called a selection, to a source window.  The program advertises the selection by using the X server.  The window property used determines which selection type is set/advertised by the new selection.

When a paste is used in a target window, the target program requests the advertised selection data.  This may access one of two buffers depending on which type of load/paste action is used.  The user loads the \textit{primary} selection via drag select and pastes it via middle mouse press; the \textit{clipboard} selection is loaded via Ctrl-C/Ctrl-X and pasted via Ctrl-V.

\subsection*{\CGG{} cut and paste:}%
\label{sub:cinelerra_cut_paste}

\subsubsection*{1. Text cut and paste operations}%
\label{ssub:text_cut_paste_operations}

To use a text selection, create a drag selection in textboxes by pressing and holding the left mouse button with the pointer over the beginning of the text selection, then move the pointer over the desired selection to the selection end, then release the mouse button.  This continuously reloads the \textit{primary} clipboard buffer and highlights the text selection.  It can then be pasted to most programs by pressing the middle mouse button with the pointer over the text insertion point.  Some examples of these programs are \textit{xterm}, \textit{gnome-terminal}, \textit{cinelerra}, and \textit{browser} input text boxes.  After you create a text selection, if you then press Ctrl-C, the selection will also be copied to the \textit{secondary} (or \textit{clipboard}) selection buffer.  This second paste buffer can be used for a more lasting save effect, since it will not be lost until you again press Ctrl-C (copy).  Using Ctrl-X (cut) will also copy the selection to the secondary clipboard buffer, and then delete the selection from the textbox.  If you press Ctrl-V (paste) in a target window, the secondary selection will be inserted at the target window cursor.  If a text selection exists in the target window, it is replaced by the pasted text.

\subsubsection*{2. Media cut and paste operations}%
\label{ssub:media_cut_paste_operations}

To create a media selection, highlight a region on the \CGG{} media timeline, then use the main menubar or compositor/viewer edit panel to operate the clip cut, copy, or copy-keyframe menu buttons.  This selection can then be pasted to a target selection on the timeline using the main menubar or compositor/viewer edit panel to operate the clip paste or paste-keyframe operation.  Also, by using the resource window you can select the \textit{Clips} folder and right mouse the resources list box, then use the \textit{Paste Clip} menu item to paste the selection to a named clip.  Additionally, these methods work between running instances of \CGG{}, which means you can move media clips between the \CGG{} program instances.  The clip data is also copied to the secondary clipboard buffer.  This makes it possible to examine the clip content directly if so desired.

\subsubsection*{3. The older cut\_buffer method}%
\label{ssub:older_cut_buffer_method}

\begin{itemize}
    \item For text, if there is an active selection when a window closes, it uses \texttt{cut\_buffer0}.  Normally when a paste is performed, the target window \textit{notifies} the selection owner to \textit{send it now}, but if the window has closed there is no source window, so no pasting.  Some programs, like \CGG{}, use \texttt{cut\_buffer0} as a fallback.  This makes it possible to paste data from a closed window.
    \item To move media clip data, \texttt{cut\_buffer2} is used because it does not require the selection owner interface, and it works simply and reliably.  This buffer is not normally in use by other programs.
\end{itemize}
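The selection-ownership and cut-buffer fallback behavior described above can be sketched as a toy model.  This is illustrative only: the class names and simplified logic below are invented for the example and are not real X11 or \CGG{} code.

```python
# Toy model of X11 paste mechanisms: primary/clipboard selections
# owned by a window, with cut_buffer0 as a fallback after close.
# Illustrative sketch only -- not real X11 code.

class Display:
    def __init__(self):
        self.selections = {}   # "PRIMARY"/"CLIPBOARD" -> owning window
        self.cut_buffers = {}  # cut_buffer index -> plain data

class Window:
    def __init__(self, display):
        self.display = display
        self.data = None

    def highlight(self, text):          # drag select: sets PRIMARY
        self.data = text
        self.display.selections["PRIMARY"] = self

    def copy(self):                     # Ctrl-C: sets CLIPBOARD
        self.display.selections["CLIPBOARD"] = self

    def close(self):                    # save selection to cut_buffer0
        if self.display.selections.get("PRIMARY") is self:
            self.display.cut_buffers[0] = self.data
            del self.display.selections["PRIMARY"]

def paste(display, which="PRIMARY"):
    owner = display.selections.get(which)
    if owner is not None:
        return owner.data               # owner "sends it now"
    return display.cut_buffers.get(0)   # fallback: cut_buffer0

d = Display()
w = Window(d)
w.highlight("hello")
w.close()            # source window gone...
print(paste(d))      # ...but cut_buffer0 still yields "hello"
```

The point of the sketch is the last two lines: with no selection owner left, a program that checks \texttt{cut\_buffer0} can still paste.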

\subsection*{Final note}%
\label{sub:final_note}

When a text selection is set, the selected text is redrawn using selected-highlight color when the textbox loses focus.  This convenience feature shows the active text selection as you move the pointer to the new target window.  When a new selection is set anywhere else on your screen, the current text selection will be redrawn using the inactive-highlight color as the textbox loses selection ownership.  In most \CGG{} themes, the drag selection text-highlight color is BLUE ($\#0000FF$), the selected-highlight color is SLBLUE ($\#6040C0$) -- really sort of purple, and the inactive-highlight color is MEGREY ($\#AFAFAF$).

\section{Playing is Different than Seeking/Positioning!}%
\label{sec:playing_seeking_positioning}

\subsection{Playing/Seeking}%
\label{sub:playing_seeking}
\index{seek!playing}

\textit{Seeking} targets and displays the next frame.  The next frame is targeted because frame zero has no previous.  When you seek, you reposition to just before the target frame, and since the play direction has not been established (there is no direction when seeking) it shows you the next frame.  This produces the expected behavior when you seek to frame zero; you see the first frame.  Seeking displays in the compositor what you are getting ready to work with/edit/etc; always showing the next frame in relation to the cursor. Technically, since seeking just resets the position, it would be correct not to update the compositor, but it is best to seek and show the next frame to confirm that it is the frame you expected to see.

\textit{Playing} shows you what has just been played in the compositor window.  It is not the same as seeking. When you use Keypad 1 to play frame forward, then the 1st frame that is played and shown in the compositor window is frame zero (which was already displayed).  The position is incremented to 1.  Press Keypad 1 yet again, and the next frame displayed is 1, and the new position is 2, and so on.  According to the implemented strategy, the insertion point moves to track playback.  When playback stops, the insertion point stays where playback stopped.  Thus, with playback you change the position of the insertion point.

Simple explanation of what you will be seeing in the compositor when playing:

\begin{description}
    \item[Play forward] the frame to the right of the cursor in the timeline gets displayed.
    \item[Play backward] the frame to the left of the cursor in the timeline gets displayed.
\end{description}

The reason behind this \textit{play} methodology is that you want to know what you just played, so that you can match it to what you just saw/heard in case that is the desired material.   You don't want the compositor to show you what you have not yet played -- you need to see the frame just played to analyze/check whether it is what you want.  This behavior applies to any playing operation, such as the \textit{keypad} or \textit{Frame forward / Frame reverse} buttons.  You can still easily see the actual insertion point in the zoombar at the bottom of the timeline -- the 7th button over, or the 3rd button from the right side.   Also note the following:

\begin{description}
    \item[Blinking insertion point on the timeline] seeking/positioning was the last operation.
    \item[Solid non-blinking insertion point on timeline] playing was the last operation.
\end{description}

\subsubsection*{Example and explanation}%
\label{ssub:example_explanation}

\begin{enumerate}
    \item open a small example of 10 numbered frames (or use the \texttt{Title} plugin to add a timestamp)
    \item press "f" to \textit{fit} the timeline
    \item make sure \texttt{settings $\rightarrow$ align\_cursor\_on\_frames} is set
    \item seek to frame 4 by clicking on the timeline at position 4; the compositor shows the 5th frame, since the
    media counts from 1 and the timeline counts from 0.  This is correct behavior.
    \item press KP1 to play the next frame.  According to playback strategy: \textit{When play is forwards, the next unit is displayed, and the position is advanced one unit}. So the next frame is 4 (the $5^{th}$ frame) and it is displayed. The position is advanced from 4 to 5.  This is correct behavior.
    \item press KP4 to play the previous frame. According to playback strategy: \textit{When play is in reverse, the previous unit is displayed, and the position is reduced one unit}. So the previous frame is 4 (the $5^{th}$ frame) and it is displayed. The position is reduced from 5 to 4. This is correct behavior.
\end{enumerate}

If you watch the zoombar (bottom of main window) position, it shows the current position is just before the next frame to be displayed when going forwards, and just after the frame to be displayed when going backward.

To recap, position is usually set in the program as a location that is between a previous and next frame/sample unit such that the next unit equals the seek target.  After position is reset using a \textit{seek} operation, the next unit is displayed, which is the seek target.  When \textit{play is forward}, the next unit is shown, and the position is advanced one unit.  When \textit{play is in reverse}, the previous unit is shown, and the position is reduced one unit.  At the beginning, there is no previous, and at the end, there is no next, but silence is rendered at the end.
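The rules recapped above can be sketched in a few lines of code.  This is an illustrative model, not the actual \CGG{} implementation; the class and method names are invented for the example.

```python
# Sketch of the seek/play position rules: the position sits between
# a previous and a next frame; seek shows the next frame, forward
# play displays the next unit and advances, reverse play displays
# the previous unit and steps back.  Illustrative model only.

class Transport:
    def __init__(self, length):
        self.length = length
        self.position = 0       # between previous and next frame
        self.displayed = 0      # frame currently in the compositor

    def seek(self, frame):      # reposition just before the target
        self.position = frame
        self.displayed = frame  # show the next frame as confirmation

    def play_forward(self):     # display next unit, advance one unit
        if self.position < self.length:
            self.displayed = self.position
            self.position += 1

    def play_reverse(self):     # display previous unit, step back one
        if self.position > 0:
            self.position -= 1
            self.displayed = self.position

t = Transport(10)
t.seek(4)            # compositor shows frame 4 (the 5th frame)
t.play_forward()     # KP1: displays frame 4, position becomes 5
t.play_reverse()     # KP4: displays frame 4, position back to 4
print(t.displayed, t.position)   # -> 4 4
```

Note how the model reproduces the numbered example above: after the seek, one forward play and one reverse play both display frame 4 while the position moves 4 → 5 → 4.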

\subsection{Always Show Next Frame}%
\label{sub:always_show_next_frame}
\index{always show next frame}

Since some users prefer the insertion pointer to reflect the same frame as the Compositor, a choice is available.  For playing forward, there is a preference option which results in what looks like 1 added to the frame displayed in the Compositor window.  To enable this mode, check the box \texttt{Always show next frame}; the setting will be saved to \texttt{.bcast5}.  The option checkbox is in the \texttt{Settings $\rightarrow$ Preferences $\rightarrow$ Appearance} tab and, when checked, any forward \textit{play} in the Compositor window shows the same frame as a seek would.  Reverse plays, and plays using a selection or In/Out pointers (with Ctrl), work the same as without this preference set.  But you will no longer see the odd behavior where a frame advance forward followed by a frame advance backward leaves the displayed frame unchanged -- instead it will change and look more natural.
A color indicator that shows in the main track canvas timeline and the compositor timeline reminds the user which mode is currently active.  The cursor in the compositor turns \textit{red} for default mode and \textit{white} for \textit{Always show next frame} mode.  The top portion of the insertion cursor in the track canvas mirrors this, with red for default and white otherwise.

Figure~\ref{fig:cursor01} shows the default \textit{playing} method, where the frame in the compositor is the one that was just played; in this case play was in the forward direction.  Note that the insertion pointer in the main track canvas shows 03:16 but the compositor shows 03:15, so you know what you last saw.  Also, the cursor/cursor tops in both windows are red.

\begin{figure}[htpb]
	\centering
	%\includegraphics[width=0.8\linewidth]{name.ext}
	\begin{tikzpicture}[scale=1, transform shape]
	\node (img1) [yshift=0cm, xshift=0cm, rotate=0] {\includegraphics[width=0.6\linewidth]{cursor01.png}};    
	\node [yshift=-29mm, xshift=-1cm,anchor=east] at (img1.north west) (Compositor) {Red cursor in Compositor};
	\node [yshift=-40mm, xshift=-1cm,anchor=east] at (img1.north west) (Timeline) {red cursor in Timeline};
	\draw [->, line width=1mm] (Compositor) edge  ([yshift=-29mm] img1.north west);
	\draw [->, line width=1mm] (Timeline) edge  ([yshift=-40mm] img1.north west);   
	\end{tikzpicture}    
	\caption{"Default" mode with red cursors}
	\label{fig:cursor01}
\end{figure}

Figure~\ref{fig:cursor02} shows the \textit{Always show next frame} method, where the frame in the compositor is the same one that would have shown with a seek; in this case play was in the forward direction.  Note that the insertion pointer in the main track canvas shows 03:16 and the compositor shows 03:16.  Also, the cursor/cursor tops in both windows are white.

\begin{figure}[htpb]
	\centering
	%\includegraphics[width=0.8\linewidth]{name.ext}
	\begin{tikzpicture}[scale=1, transform shape]
	\node (img1) [yshift=0cm, xshift=0cm, rotate=0] {\includegraphics[width=0.6\linewidth]{cursor02.png}};    
	\node [yshift=-29mm, xshift=-1cm,anchor=east] at (img1.north west) (Compositor) {White cursor in Compositor};
	\node [yshift=-40mm, xshift=-1cm,anchor=east] at (img1.north west) (Timeline) {White cursor in Timeline};
	\draw [->, line width=1mm] (Compositor) edge  ([yshift=-29mm] img1.north west);
	\draw [->, line width=1mm] (Timeline) edge  ([yshift=-40mm] img1.north west);    
	\end{tikzpicture}    
	\caption{"Always show next frame" mode with white cursors}
	\label{fig:cursor02}
\end{figure}

\subsection{Seeking Issues}%
\label{sub:seeking_issue}
\index{seek!issue}

If you have an issue playing a video and not seeing it in the Compositor (you just see a black screen), it is most likely because the media was not designed to be \textit{editable}, not because it is damaged.  Generally it just does not have the keyframes which are needed for seeking, which is what is done when you move around the media and start playing in the middle.  The media plays just fine in the compositor if you always play from the beginning, because then you don't need keyframes to seek.  You can get around this problem if you proxy the media.  A good choice for the proxy would be \textit{use scaler}, \textit{ffmpeg/mp4}, and a size of $\frac{1}{2}$.  The proxied media can then seek and you will see it play in the compositor because keyframes exist.

\section{Color Space and Color Range Affecting Playback}%
\label{sec:color_space_range_playback}
\index{color!space}
\index{color!range}

Playback \textit{single step} and \textit{plugins} cause the render to be in the session color model, while continuous playback with no plugins tries to use the file’s best color model for the display (for speed).
This can create a visible effect of a switch in color in the Compositor, usually shown as grayish versus over-bright.

The cause of the issue is that X11 is RGB only and it is used to draw the \textit{refresh frame}.  So single step is always drawn in RGB.  To make a YUV frame into RGB, a color model transfer function is used.  The math equations are based on color\_space and color\_range.  In this case, color\_range is the cause of the \textit{gray} offset.  The \textit{YUV mpeg} color range is $[16..235]$ for Y, $[16..240]$ for UV, and the color range used by \textit{YUV jpeg} is $[0..255]$ for YUV.
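The transfer function mentioned above can be sketched with the standard BT601 limited-range equations.  This is a plain illustration of the math, not \CGG{}'s actual conversion code.

```python
# Standard BT601 limited-range ("YUV mpeg") to RGB transfer,
# shown as a plain-Python sketch of the math referred to above.
# Illustrative only; not the program's conversion code.

def yuv_mpeg_to_rgb(y, u, v):
    # Expand Y from [16..235] and UV from [16..240] around 128,
    # then apply the BT601 matrix; clamp results to [0..255].
    c = (y - 16) * 255.0 / 219.0
    d = (u - 128) * 255.0 / 224.0
    e = (v - 128) * 255.0 / 224.0
    r = c + 1.402 * e
    g = c - 0.344136 * d - 0.714136 * e
    b = c + 1.772 * d
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

print(yuv_mpeg_to_rgb(16, 128, 128))    # mpeg black -> (0, 0, 0)
print(yuv_mpeg_to_rgb(235, 128, 128))   # mpeg white -> (255, 255, 255)
```

If the $(y-16) \cdot 255/219$ expansion is skipped (i.e. jpeg-range math is applied to mpeg-range data), black lands at 16 and white at 235, which is exactly the grayish "old TV" look described below.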

\begin{wrapfigure}[11]{O}{0.5\textwidth} 
    \vspace{-2ex}
    \centering
    \includegraphics[width=0.5\textwidth,keepaspectratio]{color.png}
    \caption{Color space and Color range}
    \label{fig:color}
\end{wrapfigure} 

The mpeg YUV color range $[16..235]$ looks sort of like an old TV if it is viewed on a display with jpeg range $[0..255]$.  A common expression for the short \textit{mpeg} color range is \textit{compressed} color range.  If you are using color compressed data with no display decompression, or using uncompressed data and the display is configured to use compression, the color range will appear \textit{squished} or \textit{stretched}, as too gray or too much contrast.

The \textit{use X11 direct when possible} preference, with X11 as your video driver, generally means that you value speed over color range accuracy.  When you use this feature and the color range preference is mismatched, the color switch offsets will still appear.  This is a personal choice made solely for improved speed.

There is now program code to look for RGB versus YUV color model mismatches.  You can override the default setup, which mirrors the original code, via the following.

\texttt{Settings $\rightarrow$ Preferences $\rightarrow$ Appearance} tab in the lower left hand corner (Figure~\ref{fig:color}):

\begin{description}
    \item[YUV color space] default choice is BT601; alternates are BT709 (High Definition) and BT2020 (UHD)
    \item[YUV color range] default choice is JPEG; alternate is MPEG
\end{description}

Some general tips (See also \ref{sec:video_attributes} \textit{Color model}):
\begin{itemize}
	\item If your hardware allows it use RGB-Float (in \texttt{Settings $\rightarrow$ Format}); this format does not lead to transfer errors from one model to another, but it uses more cpu.
	\item Use RGB-8 if the source is RGB and YUV-8 if the source is YUV (most commonly used).
	\item If you notice alterations in color/brightness representation, try playing with color models in \texttt{Settings $\rightarrow$ Format} and with \textit{YUV color space} and \textit{YUV color range} in the \texttt{Settings $\rightarrow$ Preferences $\rightarrow$ Appearance} tab. Another possibility is to check whether the display color model conforms to the project color model. A practical case that may arise is as follows\protect\footnote{thanks to DeJay}: a YUV source with limited color range (MPEG or TV), with \CGG{} set to the extended color range (JPEG or PC); the colors in the compositor will be flattened. If we set the color range to MPEG the colors will be correct, but hard clipping will occur. In this case the best result is obtained by setting the color range to JPEG and then converting the source to the JPEG color range via the \texttt{ColorSpace} plugin. Summary table:
\end{itemize}

\begin{center}
	\begin{tabular}{ |c|c|c| } 
		\hline
		\textbf{Source} & \textbf{YUV Color Range} & \textbf{Colors} \\ 
		\hline
		MPEG & JPEG & Flat colors \\ 
		MPEG & MPEG & Hard clipping \\
		MPEG + conversion to JPEG & JPEG & Colors OK \\
		\hline
	\end{tabular}
\end{center}
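The rows of the summary table can be reproduced numerically.  The sketch below only rescales luma from the MPEG range $[16..235]$ onto the JPEG range $[0..255]$; it is an illustration of the idea, not the \texttt{ColorSpace} plugin code.

```python
# Numeric illustration of the summary table above.  Illustrative
# sketch only; not the ColorSpace plugin implementation.

def mpeg_to_jpeg_luma(y):
    """Rescale limited-range Y [16..235] to full range [0..255]."""
    return max(0, min(255, round((y - 16) * 255.0 / 219.0)))

# Row 1: MPEG source shown unconverted on a JPEG-range display --
# black stays at 16 and white at 235, hence the washed-out "flat"
# colors; the full [0..255] range is never reached.
print(16, 235)

# Row 3: after conversion the full range is restored.
print(mpeg_to_jpeg_luma(16), mpeg_to_jpeg_luma(235))   # -> 0 255
```

Values that fall outside $[16..235]$ in the source would be clipped by the clamp, which is the "hard clipping" of the middle row.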

\section{Automatic "Best Model" Media Load}%
\label{sec:conform_the_project}
\index{color!model}
When you load media with the insertion strategy of \textit{replace current project}, the program code will
automatically use the "best model" for the render based on the media's codec.  The best model is pretty
much going to be what works well for television.  This automation was added to make \CGG{} easier to use:
it is difficult for a new or occasional user to set all of the
necessary parameters as well as possible, so the program does it for you.  This means you do not have to
\textit{conform your project}, which ordinarily would have been done in the Resources window with a RMB
click on the highlighted media and choosing \textit{Match project size}.

However, this automatic method leads to a dilemma: you may have a 10-bit media file which
would get loaded as RGBA-8 when you would prefer it to be RGBA-Float.  So instead of using \textit{replace current
project} when loading your media, you have to make sure the project is first set to your desired
Format.  This can be done with \texttt{File$\rightarrow$New project}, setting your Color
Model to RGBA-Float and whatever other parameters you want.  Next, when doing a \texttt{File$\rightarrow$Load}, use
\textit{Append in new tracks} or \textit{Create resources only}. This avoids the "best model"
technique and instead uses what you have designated, so that if you set the Color Model to RGBA-Float, that
will be in effect.

It is important to note that even when using the "best model", no bits are lost if the input media is 10-bit
and the Color Model is RGBA-8. This is because the media will be loaded using the "case BC\_RGB16161616",
where 16 stands for 16 bits; the 6 bits not needed for the 10-bit data are filled with zeros.
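A minimal sketch of this bit packing, assuming the 10 significant bits are kept in the high-order position of the 16-bit value (the helper names are invented for the example):

```python
# Sketch of how a 10-bit sample fits a 16-bit channel losslessly:
# the value is shifted left, padding the 6 unused bits with zeros.
# Illustrative only; assumes high-order alignment of the data.

def pack_10_in_16(v10):
    assert 0 <= v10 < 1 << 10
    return v10 << 6          # 10 significant bits + 6 zero bits

def unpack_16_to_10(v16):
    return v16 >> 6          # original value recovered exactly

v = 1023                     # maximum 10-bit value
print(pack_10_in_16(v))      # -> 65472
print(unpack_16_to_10(pack_10_in_16(v)) == v)   # -> True
```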

\section{Simple Animation (Festival)}%
\label{sec:simple_animation_festival}

This functionality was added to \CGG{} by the original author to create simple animation.  The file type for this animation is \textit{Scene}.

To get started making a simple animated movie copy from the directory:\\
\texttt{<cin\_path>/cinelerra/tests} the \texttt{text2movie} and \texttt{text2movie.xml}. \\
You can see what this does via \texttt{File $\rightarrow$ Load\dots $\rightarrow$ text2movie.xml}.  The file text2movie acts like a normal asset, except changes to it are immediately reflected on the timeline without reloading and the length is infinite.  You can just edit the text2movie file to change the script.  If the length of the movie increases, drag the right edit handle to extend the edit or use the pulldown \texttt{Edit $\rightarrow$ edit length}. There is one audio channel created for every character.  The frame rate, sample rate, frame size, and camera angles are fixed.  To see these values, right click on the asset and look at the \textit{Asset info}.

Currently the functionality that is implemented focuses on dialog between two people.  The models are defined in model files saved in \CGG{}'s executable directory (for example, \texttt{/opt/cinelerra/models}).  The character model and voice are selected separately in the script.  The model files have the same name that appears in the script and are usually saved in the directory the script is in, but there is a defined search path if they are not.  You can create new models for the script without affecting the entire system.  These models define the total size of the model along with the images used -- the model images are 2D png images because all the animations are baked.  Since there is no 3D renderer, no custom movement is supported.

There are currently 2 actions implemented:

\begin{enumerate}
    \item Character2 can cut off character1 if character1's dialog ends in ``\dots''
    \item Inserting ``[pause]'' anywhere causes the character to pause.  This is useful for adjusting the timing of dialog.
\end{enumerate}

This is \textit{simple} animation so you can expect speech synthesis not to be that good.  And you will have to adjust punctuation and spelling based on the sound.  Since the dialog is rendered on-demand, there is a delay when each character starts to speak but you can split dialog into shorter blocks to reduce the delay.

You can see a step-by-step example on the website \url{http://www.g-raffa.eu/Cinelerra/HOWTO/animations.html}.

\section{Textbox Non-std Character / Unicode Insertion}%
\label{sec:textbox_non_std_character_unicode}
\index{unicode insertion}

If you want to enter a special character -- like a bullet, an accent grave character, or a mathematical summation symbol -- you can use its unicode equivalent in a textbox.  In the textbox, key in Ctrl-Shift-U, which puts you into single character unicode mode, then key in the numerical value for the intended character followed by the carriage return.  For a voluminous list of possible special characters, you can go to {\small \url{https://unicode-table.com/en/}} on the internet and highlight a character to get its numerical equivalent.  For example, \texttt{U+2022} is a bullet.  If you make a mistake, you can use the \textit{backspace} key, or if you want to exit unicode-insert-mode, use the \textit{ESC} key.  This feature is especially useful with the \textit{Title} plugin and for naming Tracks in the main window.
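The numerical values are standard Unicode codepoints; a few examples of the value-to-character mapping, shown here with Python's \texttt{chr} for reference:

```python
# A few Unicode codepoints and the characters they name, matching
# the examples in the text.  chr() maps a codepoint to a character.

print(chr(0x2022))   # U+2022 -> bullet
print(chr(0x00E8))   # U+00E8 -> e with accent grave
print(chr(0x2211))   # U+2211 -> n-ary summation
```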

However, it is worth mentioning that some special characters are available via the \textit{compose} key in the current distribution. {\small \url{https://en.wikipedia.org/wiki/Compose_key}}

\chapter{Project and Media Attributes}%
\label{cha:project_and_media_attributes}
\index{project attributes}
\index{format}
\index{settings}

When you play media files in \CGG{}, the media files have a certain
number of tracks, frame size, sample size, and so on.  No matter
what attributes the media file has, it is played back according to
the project attributes.  So, if an audio file's sample rate is
different than the project attributes, it is resampled.  Similarly,
if a video file's frame size is different than the project
attributes, the video is composited on a black frame, either cropped
or bordered with black.

The project attributes are adjusted in \texttt{Settings $\rightarrow$
Format} (figure~\ref{fig:set-format}) or can be created in
\texttt{File $\rightarrow$ New}.  When you adjust project settings
in \texttt{File $\rightarrow$ New}, a new empty timeline is created.
Every timeline created from this point on uses the same settings.
When you adjust settings in \texttt{Settings $\rightarrow$ Format},
media on the timeline is left unchanged.  But every timeline created
from this point uses the same settings.

\begin{figure}[htpb]\centering
\includegraphics[width=0.6\linewidth]{set-format.png}
  \caption{Set Format window - note the Audio Channel positions}
  \label{fig:set-format}
\end{figure}

In addition to the standard settings for sample rate, frame rate,
and frame size, \CGG{} uses some less traditional settings like
channel positions, color model, and aspect ratio.  The aspect ratio
refers to the screen aspect ratio (SAR).

Edit decision lists, the EDL \index{EDL} stored in XML, save the project
settings.  Formats which contain media but no edit decisions just
add data to the tracks.  Keep in mind details such as: if your
project sample rate is 48\,kHz and you load a sound file at
96\,kHz, it will still play at 48\,kHz.  Or if you load an
EDL file at 96\,kHz and the current project sample rate is 48\,kHz,
the project will change to 96\,kHz.
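The resampling arithmetic implied here is simple: the duration is preserved while the number of samples changes.  A sketch (the helper name is invented for the example, not program code):

```python
# Sketch of project-rate resampling: a 96 kHz file loaded into a
# 48 kHz project keeps its duration but plays back half as many
# samples.  Illustrative helper only.

def resampled_count(n_samples, src_rate, project_rate):
    return round(n_samples * project_rate / src_rate)

n = 96000 * 10                            # ten seconds at 96 kHz
print(resampled_count(n, 96000, 48000))   # -> 480000, still ten seconds
```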

The New Project window has some options that are different than the
Set Format window as you can see by comparing
figure~\ref{fig:set-format} above with this
figure~\ref{fig:new-project}.  Most notable are the fields for a
directory path and a Project Name.

\begin{figure}[htpb] \centering
\includegraphics[width=0.7\linewidth]{new-project.png}
  \caption{New Project dialog window}
  \label{fig:new-project}
\end{figure}

The various fields are described next.

\section{Audio attributes}%
\label{sec:audio_attributes}
\index{audio!attributes}


\begin{description}
\item[Presets:] select an option from this menu to have all the
project settings set to one of the known standards.  Some of the
options are 1080P/24, 1080I, 720P/60, PAL, NTSC, YouTube, and CD
audio.

\item[Tracks:] (in New Project menu only) sets the number of audio
tracks for the new project. Tracks can be added or deleted later,
but this option is on the New Project menu for convenience.

\item[Samplerate:] \index{sample rate} sets the samplerate of the audio. The project
samplerate does not have to be the same as the media sample rate
that you load. Media is resampled to match the project sample rate.

\item[Channels:] \index{audio!channels} sets the number of audio channels for the new
project. The number of audio channels does not have to be the same
as the number of tracks.

\item[Channel positions:] the currently enabled audio channels and
their positions in the audio panning boxes in the track patchbay are
displayed in the channel position widget in the Set Format window.
You can see this display on the left side in
figure~\ref{fig:set-format} above.  Channel positions are not in New
Project window.

  The channels are numbered.  When rendered, the output from channel
1 is rendered to the first output track in the file or the first
sound card channel of the sound card.  Later channels are rendered
to output tracks numbered consecutively.  The audio channel
positions correspond to where in the panning widgets each of the
audio outputs is located.  The closer the panning position is to one
of the audio outputs, the more signal that speaker gets.  Click on a
speaker icon and drag to change the audio channel location.  The
speakers can be in any orientation.  A different speaker arrangement
is stored for every number of audio channels since normally you do
not want the same speaker arrangement for different numbers of
channels.

  Channel positions is the only setting that does not necessarily
affect the output.  It is merely a convenience, so that when more
than two channels are used, the pan controls on the timeline can
distinguish between them.  It has nothing to do with the actual
arrangement of speakers.  Different channels can be positioned very
close together to make them have the same output.
\end{description}
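The idea that a pan position closer to a speaker sends that speaker more signal can be sketched as a toy distance-based model.  This is only an illustration of the principle; it is not \CGG{}'s actual panning law, and the function names are invented.

```python
# Toy model of position-based panning: each speaker's gain falls
# off with its distance from the pan point.  Illustration of the
# principle only, not Cinelerra's actual panning law.

import math

def gains(pan_xy, speakers):
    """Return one normalized gain per speaker, higher when closer."""
    raw = []
    for sx, sy in speakers:
        d = math.hypot(pan_xy[0] - sx, pan_xy[1] - sy)
        raw.append(1.0 / (1.0 + d))
    total = sum(raw)
    return [g / total for g in raw]      # gains sum to 1

stereo = [(-1.0, 0.0), (1.0, 0.0)]       # left and right speakers
left, right = gains((-1.0, 0.0), stereo) # pan hard left
print(left > right)                      # -> True
```

Placing two speakers at the same coordinates gives them equal gain, which mirrors the remark above that channels positioned very close together produce the same output.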


\section{Video attributes}%
\label{sec:video_attributes}
\index{video!attributes}

\begin{description}
\item[Tracks:] (in New Project menu only) sets the number of video
tracks the new project is assigned.  Tracks can be added or deleted
later, but options are provided here for convenience.

\item[Framerate:] \index{framerate} sets the framerate of the video.  The project
framerate does not have to be the same as an individual media file
frame rate that you load.  Media is reframed to match the project
framerate.

\item[Canvas size:] \index{canvas size} sets the size of the video output \index{output size}.  In addition,
each track also has its own frame size.  Initially, the New Project
dialog creates video tracks whose size match the video output.  The
video track sizes can be changed later without changing the video
output.

\item[Aspect ratio:] \index{aspect ratio} sets the aspect ratio; this aspect ratio refers
to the screen aspect ratio.  The aspect ratio is applied to the
video output.  The aspect ratio can be different than the ratio that
results from the formula: $\dfrac{h}{v}$ (the number of horizontal
pixels divided by the number of vertical pixels).  If the aspect
ratio differs from the results of the formula above, your output
will be in non-square pixels.

\item[Auto aspect ratio:] if this option is checked, the Set Format
dialog always recalculates the Aspect ratio setting based upon the
given Canvas size. This ensures pixels are always square.

\item[Color model:] \index{color!model} the internal color space of \CGG{} is X11 sRGB
without color profile. \CGG{} always switches to sRGB when applying
filters or using the compositing engine. The case is different for
decoding/playback or encoding/output: the project will be stored in
the video color model that is selected in the dropdown.  Color model
is important for video playback because video has the disadvantage
of being slow compared to audio.  Video is stored on disk in one
colormodel, usually a YUV derivative.  When played back, \CGG{}
decompresses it from the file format directly into the format of the
output device.  If effects are processed, the program decompresses
the video into an intermediate colormodel first and then converts it
to the format of the output device.  The selection of an
intermediate colormodel determines how fast and accurate the effects
are.  A list of the current colormodel choices follows.

  \begin{description}
  \item[RGB-8 bit] Allocates 8\,bits for the R, G, and B channels
and no alpha. This is normally used for uncompressed media with low
dynamic range.
  \item[RGBA-8 bit] Allocates an alpha channel to the 8\,bit RGB
colormodel. It can be used for overlaying multiple tracks.
  \item[RGB-Float] Allocates a 32\,bit float for the R, G, and B
channels and no alpha. This is used for high dynamic range
processing with no transparency.
  \item[RGBA-Float] This adds a 32\,bit float for alpha to
RGB-Float. It is used for high dynamic range processing with
transparency. Or when we don't want to lose data during workflow,
for example in color correction, key extraction and motion
tracking.
  \item[YUV-8 bit] Allocates 8\,bits for Y, U, and V. This is used
for low dynamic range operations in which the media is compressed in
the YUV color space. Most compressed media is in YUV and this
derivative allows video to be processed fast with the least color
degradation.
  \item[YUVA-8 bit] Allocates an alpha channel to the 8\,bit YUV
colormodel for transparency.
  \end{description}

In order to do effects which involve alpha
channels \index{alpha channel}, a colormodel with an alpha channel must be selected.
These are RGBA-8 bit, YUVA-8 bit, and RGBA-Float.  The 4\,channel
colormodels are slower than 3\,channel colormodels, with the slowest
being RGBA-Float.  Some effects, like fade, work around the need for
alpha channels while other effects, like chromakey, require an alpha
channel in order to be functional.  So in order to get faster
results, it is always a good idea to try the effect without alpha
channels to see if it works before settling on an alpha channel and
slowing it down.
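To see why transparency needs a fourth channel, here is a minimal sketch of the standard \textit{over} compositing operator (illustrative only; this is not \CGG{}'s internal code):

```python
def over(fg, fg_alpha, bg):
    """Composite a foreground sample over a background sample
    with the standard (non-premultiplied) 'over' operator.
    All values are floats in the range 0..1."""
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

# A half-transparent white pixel over black yields mid grey:
print(over(1.0, 0.5, 0.0))  # 0.5
# With alpha == 1 the foreground fully covers the background:
print(over(0.7, 1.0, 0.1))  # 0.7
```

Without an alpha channel there is no per-pixel \texttt{fg\_alpha} to feed this blend, which is why effects such as chromakey require a 4\,channel colormodel.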

  When using compressed footage, YUV colormodels \index{yuv} are usually faster
than RGB colormodels \index{RGB}.  They also destroy fewer colors than RGB
colormodels.  If footage stored as JPEG or MPEG is processed many
times in RGB, the colors will fade whereas they will not fade if
processed in YUV\@.  Years of working with high dynamic range footage
has shown floating point RGB to be the best format for high dynamic
range.  16 bit integers were used in the past and were too lossy and
slow for the amount of improvement.  RGB float does not destroy
information when used with YUV source footage and also supports
brightness above 100\,\%.  Be aware that some effects, like
Histogram, still clip above 100\,\% when in floating point. See also \ref{sec:color_space_range_playback} and \ref{sec:conform_the_project}.
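The fading described above comes from rounding in repeated 8\,bit RGB~$\leftrightarrow$~YUV conversions. A minimal sketch, assuming full-range BT.601 coefficients purely for illustration:

```python
# Round-trip an RGB triple through 8-bit YUV repeatedly to show the
# cumulative rounding loss.  Full-range BT.601 coefficients are
# assumed here for illustration only.
def rgb_to_yuv8(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b + 128
    v = 0.615 * r - 0.51499 * g - 0.10001 * b + 128
    return round(y), round(u), round(v)

def yuv8_to_rgb(y, u, v):
    u, v = u - 128, v - 128
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return round(r), round(g), round(b)

rgb = (180, 90, 30)
for _ in range(100):
    rgb = yuv8_to_rgb(*rgb_to_yuv8(*rgb))
print(rgb)  # no longer exactly (180, 90, 30)
```

In floating point the intermediate values are not quantized to 8\,bits, so the same round trips do not accumulate this error.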

\item[Interlace mode:] \index{interlacing} this is mostly obsolete in the modern digital
age, but may be needed for older media such as that from broadcast
TV\@.  Interlacing uses two fields to create a frame. One field
contains all odd-numbered lines in the image; the other contains all
even-numbered lines.  The two fields are stored in alternating
lines of the source footage, and the alternating lines missing
from each output frame are interpolated.
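The interpolation of the missing lines can be sketched as a simple linear average of the neighbouring kept lines (an illustrative sketch, not \CGG{}'s deinterlacer):

```python
# Sketch: rebuild a full frame from one interlaced field by
# averaging neighbouring kept lines to fill the missing ones.
# Each "line" here is just a list of pixel values.
def interpolate_field(field_lines):
    frame = []
    for i, line in enumerate(field_lines):
        frame.append(line)
        if i + 1 < len(field_lines):
            nxt = field_lines[i + 1]
            frame.append([(a + b) / 2 for a, b in zip(line, nxt)])
        else:
            frame.append(line[:])  # last missing line: repeat
    return frame

field = [[0, 0], [10, 10]]      # two kept lines of one field
print(interpolate_field(field))  # [[0, 0], [5.0, 5.0], [10, 10], [10, 10]]
```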
\end{description}

\section{Best practice in pre-editing}%
\label{sec:best_practice_pre_editing}

\CGG{} supports the simultaneous presence in the Timeline of sources with different frame sizes and frame rates. However, audio/video synchronization problems may occur due to their different timing.\protect\footnote{credit to sge and Andrew Randrianasulu}
Plugins that rely on the timing of each frame, for example the \textit{Motion} and \textit{Interpolate} plugins, may have problems when used together with engines that increase the frame rate. By definition, the frame rate cannot be increased without either duplicating some frames or generating them in some intelligent way, but to work reliably the \textit{Motion} plugin requires access to all actual frames. These kinds of plugins (and also the rare cases of audio/video desync) explicitly require the \textit{Play every frame} option.

There is no problem as long as the source fps, project fps, and destination fps are identical. In most cases, high frame rates such as 120 or 144\,fps will be just fine for \textit{Motion}, provided that all source footage has the same frame rate.

But when \textit{project} and \textit{source} frame rates are different (or \textit{project} and
\textit{rendered} fps), then the \CGG{} engine has to either duplicate (interpolate) some frames or throw some away. Because of this, the audio tracks and the timeline can get out of sync with such accelerated (or slowed down) video. To make the \textit{Motion} plugins calculate interframe changes reliably, you have to ensure consistent frame numbers and frame properties.
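The duplicate-or-drop behaviour can be sketched with a naive nearest-frame retimer (a simplified illustration, not the actual \CGG{} engine):

```python
# Sketch of naive frame-rate conversion: for each output frame time,
# pick the nearest earlier source frame.  Raising the rate duplicates
# frames; lowering it drops frames.
def retime(n_src, src_fps, dst_fps):
    n_dst = round(n_src * dst_fps / src_fps)
    return [min(int(t * src_fps / dst_fps), n_src - 1) for t in range(n_dst)]

print(retime(4, 25, 50))  # [0, 0, 1, 1, 2, 2, 3, 3] - every frame duplicated
print(retime(4, 50, 25))  # [0, 2] - every other frame dropped
```

The duplicated frames are exactly what confuses \textit{Motion}-style plugins: consecutive output frames can be identical, so the interframe change they measure is zero.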

Generally, best practice is to perform the following sequence of preparations for video editing.

\begin{enumerate}
	\item Motion stabilization, and maybe some other preparations to improve the quality of the source video, is best done with properties identical to those of the original video; the codec may differ, but the frame size and frame rate should be the same.
	\item If you need to alter the frame rate, for example because different source clips have different frame rates, then recode all the necessary clips to the future project frame rate. At this stage frame sizes can still differ, but frame rates should all be the same.
	\item The actual editing: if you need to change the frame rate of some restricted part, particularly when smooth acceleration/deceleration is needed, it can be done here. But if the frame rate has to be changed only because of a different source fps, it is better to do it during the preparation stage.
\end{enumerate}

\CGG{} does not have color management \index{color management}, but we can still give some general advice on how to set color spaces:

\begin{enumerate}
	\item Profiling and setting the monitor: \\
	source: \textit{sRGB} $\rightarrow$ monitor: \textit{sRGB}  (we get a correct color reproduction) \\
	source: \textit{sRGB} $\rightarrow$ monitor: \textit{rec709} (we get slightly dark colors) \\
	source: \textit{sRGB} $\rightarrow$ monitor: \textit{DCI-P3} (we get over-saturated colors) \\
	
	source: \textit{rec709} $\rightarrow$ monitor: \textit{rec709} (we get a correct color reproduction) \\
	source: \textit{rec709} $\rightarrow$ monitor: \textit{sRGB} (we get slightly faded colors) \\
	source: \textit{rec709} $\rightarrow$ monitor: \textit{DCI-P3} (we get over-saturated colors)
	\item It is better to set the project to RGB(A)-FLOAT, if system performance allows, because it keeps all available data and does not introduce rounding errors. If we can't afford it, when starting from YUV type media it is better to set the project to YUV(A)-8 bit, so as not to get a darker rendering in the timeline; conversely, if we start from RGB sources, it is better to use RGB(A)-8 bit. If the timeline is not displayed correctly, we will make adjustments from the wrong base (metamerism) and get false results.
	\item Having a correct color representation in the Compositor can be complicated. We can convert the input \textit{YUV color range} to a new YUV color range that provides more correct results (i.e.\ MPEG to JPEG). The \texttt{Colorspace} plugin can be used for the conversion.
	\item Among the rendering options always set the values \\	
	\texttt{color\_trc=...} (gamma correction) \\
	\texttt{color\_primaries=...} (gamut) \\
	\texttt{colorspace=...} (color spaces conversion, more depth-color); \\
	or \\
	\texttt{colormatrix=...} (color spaces conversion, faster).
	
	These are only metadata that do not affect rendering, but when the file is later read by a player they are used to reproduce the colors without errors.
\end{enumerate}
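When rendering through ffmpeg outside \CGG{}, the same metadata can be attached with ffmpeg's per-stream output options; an illustrative command (the codec, values, and file names are examples only, adjust them to your actual footage):

```shell
# Tag an H.264 render with Rec.709 colour metadata.  These options
# set the stream's colour fields; they do not convert the pixels.
ffmpeg -i input.mp4 -c:v libx264 \
       -color_primaries bt709 -color_trc bt709 -colorspace bt709 \
       -c:a copy output.mp4
```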

For more tips on how \CGG{} processes colors on the timeline see  \nameref{sec:color_space_range_playback} and \nameref{sec:conform_the_project}.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../CinelerraGG_Manual"
%%% End: