Although that’s not a bad idea, it’s also not correct. When you add a new sample
between red and black to accommodate B’s blue, you’re assuming
that there’s actually “data” in that space, and moreover that the red and black
samples describe some sort of start and end of a volume.
This sounds like the same problem as this post from Ivan:
[Nuke-users] How does DeepRecolor distribute a target opacity across
multiple samples?
If you want to fade deep data so its combined alphas reach a certain
value, first convert the alpha values from opacity to transparency:
transparency = 1 - alpha
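For concreteness, here's a minimal Python sketch of that fade. It assumes the usual over-compositing relationship (the combined transparency of a deep pixel is the product of its per-sample transparencies); the function name and structure are mine, not anything from DeepRecolor itself:

```python
import math

def fade_deep_alphas(alphas, target_alpha):
    # Convert each sample's opacity to transparency.
    transparencies = [1.0 - a for a in alphas]
    combined_t = 1.0
    for t in transparencies:
        combined_t *= t
    # Exponent that maps the combined transparency onto the target.
    # Assumes 0 < target_alpha < 1 and combined_t < 1 (at least one
    # sample has non-zero alpha), so both logs are defined and non-zero.
    k = math.log(1.0 - target_alpha) / math.log(combined_t)
    # Raising every transparency to the same power preserves each
    # sample's relative contribution while hitting the target overall.
    return [1.0 - t ** k for t in transparencies]
```

Because (t1 * t2 * ...)^k equals t1^k * t2^k * ..., the recombined alpha lands exactly on target_alpha regardless of how many samples there are.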
Thanks for the info Colin!
On 19 June 2014 05:42, Colin Alway colin.al...@gmail.com wrote:
It's more difficult than it initially seems.
The obvious thing is to use a DeepMerge set to holdout to punch a hole
in your A input, then invert the matte and punch the inverse hole in the B
input. But when you merge these you get dark fringing where your
matte is semi-transparent, which is wrong.
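A quick back-of-the-envelope number shows where the darkening comes from. Treating a single pixel in flat terms (a toy example of mine, plain Python):

```python
def over_alpha(front, back):
    # Alpha of "front over back" compositing.
    return front + back * (1.0 - front)

m = 0.5        # semi-transparent matte value at an edge pixel
a = 1.0        # identical, fully opaque pixel in both A and B
held_a = a * (1.0 - m)  # A with the matte hole punched in
held_b = a * m          # B with the inverse hole punched in
merged = over_alpha(held_a, held_b)
# merged is 0.75, not the 1.0 you started with: the two pieces
# composite "over" each other instead of summing back to the
# original, so the matte edge darkens.
```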
If all the samples are at the same depth in both A and B, it's easy: the
output samples are a simple mix between each sample of A and B. This is
the case for, say, keymixing in a DeepColorCorrect (where only the
existing sample values will be changed).
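That aligned-depth case can be sketched in a few lines. This is illustrative only (samples as plain dicts of channel values, not real deep pixel objects, and the helper name is mine):

```python
def keymix_samples(a_samples, b_samples, mask):
    # Per-sample linear mix; assumes the two sample lists are already
    # aligned in depth and share the same channel set.
    out = []
    for sa, sb in zip(a_samples, b_samples):
        out.append({ch: sa[ch] * (1.0 - mask) + sb[ch] * mask
                    for ch in sa})
    return out
```

The hard case is when A and B have samples at different depths, which is exactly where the naive holdout approach falls over.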
I managed to get a deep volumetric keymix working
Thanks Michael!
I think I got reasonably close with my expression hackiness yesterday
and with a little help from somebody else we got even closer (basically
by doing the soft part of the mask in flat image space).
Hopefully we are good to go now.
On 18/06/14 3:25 am, Michael Garrett wrote:
On 18 June 2014 01:25, Michael Garrett michaeld...@gmail.com wrote:
This was specific to Mantra. I have yet to extensively use the deep output
from other renderers, but I believe Mantra's deep output has its quirks and
what I did may not translate exactly to another renderer. Also we were
Sounds like you're sorted, but I found this note on the basic method I
used. So you could use a flat bezier mask as you say, promoted to
deep and then used for the deep holdout.
The deep image is unpremultiplied by its original deep opacity alpha and
premultiplied by the new mask. Redundant
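As a rough sketch of that unpremult/premult step on a single sample (plain Python, function name mine, not any Nuke API):

```python
def reweight_sample(rgb, alpha, new_alpha):
    # Unpremultiply by the sample's original deep opacity alpha...
    if alpha == 0.0:
        # Nothing to recover from a zero-alpha sample.
        return (0.0, 0.0, 0.0), 0.0
    straight = tuple(c / alpha for c in rgb)
    # ...then premultiply by the new mask-derived alpha.
    return tuple(c * new_alpha for c in straight), new_alpha
```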
Hi peeps,
I'm just trying to figure out how to merge two deep images based on a
deep mask channel, without getting fringing.
I've been playing with DeepExpression but don't know if I can reference
samples in there (the documentation is rather sparse, to say the least).
Basically I need a true deep keymix.