I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and
ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would
appear that the correct solution would be to divide aSum by 3.
Isn't it unusual to define an alpha for each color component? Generally
you have a single A associated with a combined RGB. So averaging the
three alphas might make sense here, because I think they should all be
the same value.
In addition, there's no scaling of the individual red, green and blue
values by their channel's alpha. If the input were two index-color
images, each of which had different alphas, the code should multiply
the r, g and b values by the alphas before summing and then divide by
the total alpha in the end. The alpha in this case *should* be the sum
of alphas divided by the number of channels.
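A minimal standalone sketch of that alpha-weighted merge (the class and method names here are mine for illustration, not imglib2 API):

```java
// Hypothetical sketch of the alpha-weighted merge described above:
// scale each channel value by its layer's alpha, divide by the total
// alpha, and set the composite alpha to sum(alphas) / channel count.
public class AlphaWeightedMerge
{
	/** Merges one channel (e.g. red) across layers, weighting by alpha. */
	static int mergeChannel( final int[] values, final int[] alphas )
	{
		long weighted = 0, alphaSum = 0;
		for ( int i = 0; i < values.length; i++ )
		{
			weighted += ( long ) values[ i ] * alphas[ i ]; // scale by that channel's alpha
			alphaSum += alphas[ i ];
		}
		return alphaSum == 0 ? 0 : ( int ) ( weighted / alphaSum ); // divide by total alpha
	}

	/** Composite alpha: the sum of alphas divided by the number of channels. */
	static int mergeAlpha( final int[] alphas )
	{
		int sum = 0;
		for ( final int a : alphas )
			sum += a;
		return sum / alphas.length;
	}

	public static void main( final String[] args )
	{
		// Two index-color layers with different alphas, 255 and 128.
		final int[] reds = { 200, 100 };
		final int[] alphas = { 255, 128 };
		System.out.println( mergeChannel( reds, alphas ) ); // prints 166
		System.out.println( mergeAlpha( alphas ) ); // prints 191
	}
}
```

With equal alphas of 255 this degenerates to a plain average, which matches the divide-by-3 fix above.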
I think alpha processing is more cumulative, done layer by layer in some
defined layer order. For a given pixel, say the current output pixel
value is ARGB1 and you are compositing a second image with value ARGB2
on top of it: for the red channel, the output color should be ((255 -
alpha(ARGB2)) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255. The
alpha of ARGB1 is not involved.
In other words, if you add a layer that is completely opaque you no
longer have to consider any of the colors or alpha values underneath it.
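That per-channel rule can be sketched as a few lines of standalone Java (names are mine, not imglib2's; simple non-premultiplied alpha, bottom layer's alpha ignored as described):

```java
// Sketch of layer-by-layer "over" compositing for one channel,
// using only the top layer's alpha (0-255), as described above.
public class OverCompositing
{
	/** Blends one channel of src over dst using src's alpha. */
	static int over( final int dstChannel, final int srcChannel, final int srcAlpha )
	{
		return ( ( 255 - srcAlpha ) * dstChannel + srcAlpha * srcChannel ) / 255;
	}

	public static void main( final String[] args )
	{
		// Fully opaque top layer: the underlying color no longer matters.
		System.out.println( over( 40, 200, 255 ) ); // prints 200
		// Half-transparent top layer mixes the two colors.
		System.out.println( over( 40, 200, 128 ) ); // prints 120
	}
}
```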
I think the bigger issue here is that this code is specifically designed
to composite red, green and blue image layers. It's a special case,
since for a given pixel the red comes from the red layer, the blue from
the blue layer, and the green from the green layer. These layers
shouldn't be completely opaque, since then the colors wouldn't combine
at all, or completely transparent, since then they wouldn't contribute
any color. I don't think transparency is useful here.
It's also possible that a multichannel image with more than 3 channels
is being displayed with additional color channels, namely cyan, magenta
and yellow. The clamping code here is designed to prevent overflow, but
I'm not convinced those extended color channels would combine
meaningfully.
Aivar
I think alpha processing is cumulative layer by layer.
This brings up some interesting questions:
1) If the first, bottom-most layer is transparent, what color should
show through? Black, white? Or perhaps it's best to ignore this base
layer transparency.
2) If you wanted to composite several transparent images, how do you
calculate the transparency of the composite? I'm not sure this is
something we need to do.
Aivar
On 7/15/13 10:31 AM, Lee Kamentsky wrote:
Hi all,
I'm looking at the code for net.imglib2.display.CompositeXYProjector
and as I step through it, it's clear that the alpha calculation isn't
being handled correctly. Here's the code as it stands now, line 190
roughly:
for ( int i = 0; i < size; i++ )
{
    sourceRandomAccess.setPosition( currentPositions[ i ], dimIndex );
    currentConverters[ i ].convert( sourceRandomAccess.get(), bi );
    // accumulate converted result
    final int value = bi.get();
    final int a = ARGBType.alpha( value );
    final int r = ARGBType.red( value );
    final int g = ARGBType.green( value );
    final int b = ARGBType.blue( value );
    aSum += a;
    rSum += r;
    gSum += g;
    bSum += b;
}
if ( aSum > 255 )
    aSum = 255;
if ( rSum > 255 )
    rSum = 255;
if ( gSum > 255 )
    gSum = 255;
if ( bSum > 255 )
    bSum = 255;
targetCursor.get().set( ARGBType.rgba( rSum, gSum, bSum, aSum ) );
I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and
ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would
appear that the correct solution would be to divide aSum by 3. In
addition, there's no scaling of the individual red, green and blue
values by their channel's alpha. If the input were two index-color
images, each of which had different alphas, the code should multiply
the r, g and b values by the alphas before summing and then divide by
the total alpha in the end. The alpha in this case *should* be the sum
of alphas divided by the number of channels.
However, I think the problem is deeper than that. For an RGB ImgPlus,
there are three LUTs and each of them has an alpha of 255, but that
alpha only applies to one of the colors in the LUT. When you're
compositing images and weighting them equally, if two are black and one
is white, then the result is 1/3 of the white intensity - if you
translate that to red, green and blue images, the resulting intensity
will be 1/3 of the desired value. This might sound weird, but the only
solution that works out mathematically is for the defaultLUTs in the
DefaultDatasetView to use color tables that return values that are 3x
those of ColorTables.RED, GREEN and BLUE. Thinking about it, I'm
afraid this *is* the correct model and each channel really does need to
be 3x brighter than is currently possible.
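The 1/3-intensity arithmetic can be checked with a trivial standalone snippet (this is back-of-the-envelope illustration, not the actual CompositeXYProjector code):

```java
// Back-of-the-envelope check of the 1/3-intensity argument: with an
// equal-weight average over three layers, a channel fed by only one
// layer loses 2/3 of its intensity unless the LUT values are tripled.
public class ThirdIntensity
{
	/** Equal-weight average of one channel across three layers. */
	static int average( final int a, final int b, final int c )
	{
		return ( a + b + c ) / 3;
	}

	public static void main( final String[] args )
	{
		// A fully bright red pixel: the red value comes only from the
		// red layer, so equal-weight averaging yields 255 / 3.
		System.out.println( average( 255, 0, 0 ) ); // prints 85
		// With 3x LUT values, the average recovers full intensity.
		System.out.println( average( 3 * 255, 0, 0 ) ); // prints 255
	}
}
```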
It took me quite a bit of back and forth to come up with the above...
I hope you all understand what I'm saying and understand the problem
and counter-intuitive solution and have the patience to follow it.
Dscho, if you made it this far - you're the mathematician, what's your
take?
--Lee
_______________________________________________
ImageJ-devel mailing list
ImageJ-devel@imagej.net
http://imagej.net/mailman/listinfo/imagej-devel