Hi all,

My suspicion is that there is no One True Solution™. So from my point of view 
it would be nice to have a way to support different options.

The recent projectors pull request https://github.com/imagej/imglib/pull/23 by 
Michael Zinsmaier (KNIME) has potential to provide this extensibility.
Their DimProjector2D, which is a possible replacement for 
net.imglib2.display.CompositeXYProjector, uses a final Converter< 
ProjectedDimSampler< A >, B > to convert from a set of A-values in the 
"composite dimension" to the output B-value. There could be different 
converters for different alpha-compositing algorithms, and it would be easy 
to add new options for imglib2 users.
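To make that concrete, here is a rough, self-contained sketch of the idea. The interface and names below are simplified stand-ins for illustration, not the PR's actual API:

```java
// Simplified stand-in for the Converter< ProjectedDimSampler< A >, B > idea:
// pluggable strategies that collapse all channel values at one pixel into a
// single output ARGB value. Names here are illustrative, not imglib2 API.
public class CompositeSketch
{
	interface ChannelComposer
	{
		int compose( int[] channelValues );
	}

	// One strategy: additive sum with clamping, as CompositeXYProjector does now.
	static final ChannelComposer CLAMPING_SUM = values -> {
		int r = 0, g = 0, b = 0;
		for ( final int v : values )
		{
			r += ( v >> 16 ) & 0xff;
			g += ( v >> 8 ) & 0xff;
			b += v & 0xff;
		}
		return 0xff000000 | ( Math.min( r, 255 ) << 16 ) | ( Math.min( g, 255 ) << 8 ) | Math.min( b, 255 );
	};

	// Another strategy: plain average, avoiding overflow at the cost of brightness.
	static final ChannelComposer AVERAGE = values -> {
		int r = 0, g = 0, b = 0;
		for ( final int v : values )
		{
			r += ( v >> 16 ) & 0xff;
			g += ( v >> 8 ) & 0xff;
			b += v & 0xff;
		}
		final int n = values.length;
		return 0xff000000 | ( ( r / n ) << 16 ) | ( ( g / n ) << 8 ) | ( b / n );
	};

	public static void main( final String[] args )
	{
		final int red = 0xffff0000, green = 0xff00ff00;
		System.out.printf( "sum:     %08x%n", CLAMPING_SUM.compose( new int[] { red, green } ) ); // ffffff00
		System.out.printf( "average: %08x%n", AVERAGE.compose( new int[] { red, green } ) ); // ff7f7f00
	}
}
```

Swapping in a different ChannelComposer (or, in the real API, a different Converter) is all it would take to change the compositing behavior.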

The projectors branch / pull request requires some work to make it a 
replacement for the current projectors instead of opening up a parallel 
hierarchy. If someone wants to work on the compositing issues, I think that 
would be a good place to direct efforts.

best regards,
Tobias


On Jul 16, 2013, at 12:28 AM, Aivar Grislis <gris...@wisc.edu> wrote:

>> I believe ImageJ1 treats it [RGBCMY] as additive. Look at the sample "Organ 
>> of Corti" -- the current behavior of ImageJ2 causes that sample to appear 
>> the same as it does in IJ1. Before we added the bounds-checking code, it 
>> erroneously wrapped pixel values.
> By not being additive I meant C is a secondary color composed of primaries G 
> & B, etc.  In the sense of http://en.wikipedia.org/wiki/Additive_color .
> 
> Okay, "Organ of Corti" uses RGBK (and K is even worse than my example of C, 
> since it has all three RGB components, not just G & B) and yet it works as 
> an image.  It's useful because the areas lit up in each channel are fairly 
> distinct.  If these areas overlapped, the bounds-checking code would come 
> into play in the overlapping pixels, and some highlights would get squashed 
> and some colors distorted (when one component is squashed but not the others).  
> But even if the code did a better job of combining the colors of overlapping 
> areas you'd still have visual ambiguity in these areas (since eyes can't 
> distinguish C from G + B).  So now I'm thinking the code works well as is.
>> It was intended to be more general than only the cases Aivar mentioned, and 
>> instead provided additive support for *any* color table per channel you 
>> throw at it, the same as ImageJ1's CompositeImages do.
> Sure, it shouldn't crash and burn if you put Fire on one channel and Ice on 
> another but that's not usable visually unless the areas lit up in each 
> channel are distinct.  If you have a lot of overlap and you want the colors 
> to add up meaningfully you're better off sticking with primary additive 
> colors for your channel LUTs.
> 
> On 7/15/13 3:53 PM, Curtis Rueden wrote:
>> Hi all,
>> 
>> > the bigger issue is RGBCMY is not an additive color system.
>> 
>> I believe ImageJ1 treats it as additive. Look at the sample "Organ of Corti" 
>> -- the current behavior of ImageJ2 causes that sample to appear the same as 
>> it does in IJ1. Before we added the bounds-checking code, it erroneously 
>> wrapped pixel values.
>> 
>> As for the alpha stuff, I will try to digest and reply soon but I am way too 
>> tired at this moment. I just wanted to clarify why the code is the way it 
>> is. It was intended to be more general than only the cases Aivar mentioned, 
>> and instead provided additive support for *any* color table per channel you 
>> throw at it, the same as ImageJ1's CompositeImages do.
>> 
>> Regards,
>> Curtis
>> 
>> 
>> On Mon, Jul 15, 2013 at 3:46 PM, Aivar Grislis <gris...@wisc.edu> wrote:
>> I think CompositeXYProjector is meant to handle the following cases:
>> 
>> 1) Rendering LUT images: a single converter is used.  Grayscale images are 
>> included here.
>> 
>> 2) Rendering RGB images: three converters are used.  These use red-only, 
>> green-only, and blue-only LUTs.
>> 
>> 3) I believe it's also intended to work with images with > 3 channels, 
>> using C, M, and Y for the excess channels.
>> 
>> The existing code works well for cases 1 & 2.  Case 3 adds the possibility 
>> of overflow: if your red converter gives you a value of 255 for the red 
>> component, your magenta converter might add another 255.  Currently the 
>> code just clamps the value to 255 in that case.  Some sort of blending 
>> might work better here, but the bigger issue is that RGBCMY is not an 
>> additive color system.  If you see a cyan blotch you don't know if it's in 
>> both the G & B channels or just the C channel.
>> 
>> Aivar
>> 
>> 
>> 
>> On 7/15/13 2:40 PM, Lee Kamentsky wrote:
>>> Thanks for answering, Aivar,
>>> 
>>> Your reply made me take a step back and consider what we're modeling. If 
>>> you look at my replies below, I think that 
>>> the best solution is to use a model where the background is white and each 
>>> successive layer filters out some of that background, like a gel. A layer 
>>> attenuates the underlying layer by a fraction of (1 - alpha/255 * (1 - 
>>> red/255)), resulting in no attenuation for 255 and attenuation of alpha/255 
>>> for zero. We can then use a red converter that returns a value of 255 for 
>>> the blue and green channels and the model and math work correctly.
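A minimal numeric sketch of the gel model described above, for a single channel (the class and method names are made up for illustration):

```java
// Gel-filter model sketch: a layer attenuates the underlying value by the
// factor 1 - alpha/255 * (1 - value/255). Illustrative only, not imglib2 code.
public class GelModel
{
	// All arguments are 0..255 single-channel values.
	static int attenuate( final int under, final int layerValue, final int layerAlpha )
	{
		final double factor = 1.0 - ( layerAlpha / 255.0 ) * ( 1.0 - layerValue / 255.0 );
		return ( int ) Math.round( under * factor );
	}

	public static void main( final String[] args )
	{
		// A layer value of 255 leaves the background untouched...
		System.out.println( attenuate( 255, 255, 255 ) ); // 255
		// ...a layer value of 0 at full alpha filters it out entirely...
		System.out.println( attenuate( 255, 0, 255 ) ); // 0
		// ...and at alpha 128, a zero layer value attenuates by about half.
		System.out.println( attenuate( 255, 0, 128 ) ); // 127
	}
}
```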
>>> 
>>> On Mon, Jul 15, 2013 at 1:59 PM, Aivar Grislis <gris...@wisc.edu> wrote:
>>>> I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and 
>>>> ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would 
>>>> appear that the correct solution would be to divide aSum by 3.
>>> Isn't it unusual to define an alpha for each color component?  Generally 
>>> you have a single A associated with a combined RGB.  So averaging the 
>>> three alphas might make sense here, because I think they should all be 
>>> the same value.
>>> I think you're right, the model is always that each pixel has an alpha 
>>> value that applies to R, G and B. The image I was using was the Clown 
>>> example image. DefaultDatasetView.initializeView constructs three 
>>> RealLUTConverters for the projector, one for red, one for green and one for 
>>> blue which sends you down this rabbit hole.
>>>> In addition, there's no scaling of the individual red, green and blue 
>>>> values by their channel's alpha. If the input were two index-color images, 
>>>> each of which had different alphas, the code should multiply the r, g and 
>>>> b values by the alphas before summing and then divide by the total alpha 
>>>> in the end. The alpha in this case *should* be the sum of alphas divided 
>>>> by the number of channels.
>>> I think alpha processing is more cumulative, done layer by layer in some 
>>> defined layer order.  For a given pixel say the current output pixel value 
>>> is ARGB1 and you are compositing a second image with value ARGB2 on top of 
>>> it:  For the red channel the output color should be ((255 - alpha(ARGB2)) * 
>>> red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255.  The alpha of ARGB1 is not 
>>> involved.
>>> I think that's a valid interpretation. I've always used (alpha(ARGB1) * 
>>> red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / (alpha(ARGB1) + alpha(ARGB2)) 
>>> because I assumed the alpha indicated the
>>> strength of the blending of each source. In any case, the code as it stands 
>>> doesn't do either of these.
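To make the two interpretations concrete, here is a rough single-channel sketch of both (the names are illustrative; as noted above, neither is what the code currently does):

```java
// Two compositing interpretations, single channel, 0..255 values.
// Illustrative sketch only; zero total alpha is not handled.
public class BlendModes
{
	// Layered ("over") compositing: only the top layer's alpha matters.
	static int over( final int under, final int top, final int topAlpha )
	{
		return ( ( 255 - topAlpha ) * under + topAlpha * top ) / 255;
	}

	// Alpha-weighted average: both alphas act as blend weights.
	static int weighted( final int c1, final int a1, final int c2, final int a2 )
	{
		return ( a1 * c1 + a2 * c2 ) / ( a1 + a2 );
	}

	public static void main( final String[] args )
	{
		// An opaque top layer completely hides the underlying color under "over"...
		System.out.println( over( 200, 50, 255 ) ); // 50
		// ...but under the weighted average both layers still contribute equally.
		System.out.println( weighted( 200, 255, 50, 255 ) ); // 125
	}
}
```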
>>> 
>>> In other words, if you add a layer that is completely opaque you no longer 
>>> have to consider any of the colors or alpha values underneath it. 
>>> 
>>> I think the bigger issue here is this code is specifically designed to 
>>> composite red, green and blue image layers.  It's a special case since for 
>>> a given pixel the red comes from the red layer, blue from the blue layer, 
>>> and green from the green layer.  These layers shouldn't be completely 
>>> opaque, since the colors wouldn't combine at all then, or completely 
>>> transparent, since then they wouldn't contribute any color.  I don't think 
>>> transparency is useful here.
>>> So this is an argument for blending instead of layering - transparency 
>>> would be useful if the images were blended and treated as if on a par with 
>>> each other, allowing the user to emphasize one channel or the other. 
>>> 
>>> It's also possible that a multichannel image with > 3 channels is being 
>>> displayed with more color channels, namely cyan, magenta, and yellow.  The 
>>> code here is designed to stop overflow, but I'm not convinced those 
>>> extended color channels would combine meaningfully.
>>> 
>>> Aivar
>>> 
>>>> In addition, there's no scaling of the individual red, green and blue 
>>>> values by their channel's alpha. If the input were two index-color images, 
>>>> each of which had different alphas, the code should multiply the r, g and 
>>>> b values by the alphas before summing and then divide by the total alpha 
>>>> in the end. The alpha in this case *should* be the sum of alphas divided 
>>>> by the number of channels.
>>> I think alpha processing is cumulative layer by layer.  
>>> 
>>> This brings up some interesting questions:
>>> 
>>> 1) If the first, bottom-most layer is transparent, what color should show 
>>> through?  Black, white?  Or perhaps it's best to ignore this base layer 
>>> transparency.
>>> Maybe the model should be that the background is white and successive 
>>> layers are like gel filters on top. In that case, you'd have:
>>> red = (255 - alpha(ARGB2) * (255 - red(ARGB2)) / 255) * red(ARGB1) / 255 
>>> 
>>> And maybe that points to what the true solution is. For the default, we 
>>> could change things so that the red channel would have blue = 255 and 
>>> green = 255, and the first composition would change only the red channel.
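A sketch of that default under the gel model: a "red" LUT that returns 255 for green and blue leaves those channels of a white background untouched (class and method names are made up for illustration):

```java
// White-background gel compositing, per channel. A red LUT with G = B = 255
// only ever darkens the red channel. Illustrative sketch, not imglib2 code.
public class GelComposite
{
	// Transmission of the gel is 255 - alpha * (255 - layer) / 255.
	static int filter( final int under, final int layer, final int alpha )
	{
		return under * ( 255 - alpha * ( 255 - layer ) / 255 ) / 255;
	}

	public static void main( final String[] args )
	{
		// Half-intensity red sample through a red LUT with green = blue = 255:
		final int r = filter( 255, 128, 255 );
		final int g = filter( 255, 255, 255 );
		final int b = filter( 255, 255, 255 );
		System.out.printf( "%d %d %d%n", r, g, b ); // 128 255 255
	}
}
```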
>>> 
>>> 2) If you wanted to composite several transparent images, how do you 
>>> calculate the transparency of the composite?  I'm not sure this is 
>>> something we need to do.
>>> 
>>> Aivar
>>> 
>>> 
>>> On 7/15/13 10:31 AM, Lee Kamentsky wrote:
>>>> Hi all, 
>>>> I'm looking at the code for net.imglib2.display.CompositeXYProjector and 
>>>> as I step through it, it's clear that the alpha calculation isn't being 
>>>> handled correctly. Here's the code as it stands now, line 190 roughly:
>>>> 
>>>> for ( int i = 0; i < size; i++ )
>>>> {
>>>> 	sourceRandomAccess.setPosition( currentPositions[ i ], dimIndex );
>>>> 	currentConverters[ i ].convert( sourceRandomAccess.get(), bi );
>>>> 	// accumulate converted result
>>>> 	final int value = bi.get();
>>>> 	final int a = ARGBType.alpha( value );
>>>> 	final int r = ARGBType.red( value );
>>>> 	final int g = ARGBType.green( value );
>>>> 	final int b = ARGBType.blue( value );
>>>> 	aSum += a;
>>>> 	rSum += r;
>>>> 	gSum += g;
>>>> 	bSum += b;
>>>> }
>>>> if ( aSum > 255 )
>>>> 	aSum = 255;
>>>> if ( rSum > 255 )
>>>> 	rSum = 255;
>>>> if ( gSum > 255 )
>>>> 	gSum = 255;
>>>> if ( bSum > 255 )
>>>> 	bSum = 255;
>>>> targetCursor.get().set( ARGBType.rgba( rSum, gSum, bSum, aSum ) );
>>>> 
>>>> I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and 
>>>> ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would 
>>>> appear that the correct solution would be to divide aSum by 3. In 
>>>> addition, there's no scaling of the individual red, green and blue values 
>>>> by their channel's alpha. If the input were two index-color images, each 
>>>> of which had different alphas, the code should multiply the r, g and b 
>>>> values by the alphas before summing and then divide by the total alpha in 
>>>> the end. The alpha in this case *should* be the sum of alphas divided by 
>>>> the number of channels.
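A sketch of that proposal, alpha-weighting each channel and averaging the alphas (illustrative code, not the actual imglib2 method):

```java
// Alpha-weighted accumulation sketch: weight each channel's RGB by its alpha,
// normalize by the total alpha, and report the mean alpha. Not imglib2 code.
public class WeightedAccumulate
{
	// values: one packed ARGB value per channel at this pixel.
	static int composite( final int[] values )
	{
		long aSum = 0, rSum = 0, gSum = 0, bSum = 0;
		for ( final int v : values )
		{
			final int a = ( v >>> 24 ) & 0xff;
			aSum += a;
			rSum += a * ( ( v >> 16 ) & 0xff );
			gSum += a * ( ( v >> 8 ) & 0xff );
			bSum += a * ( v & 0xff );
		}
		if ( aSum == 0 )
			return 0; // fully transparent
		final int r = ( int ) ( rSum / aSum );
		final int g = ( int ) ( gSum / aSum );
		final int b = ( int ) ( bSum / aSum );
		final int a = ( int ) ( aSum / values.length ); // mean alpha across channels
		return ( a << 24 ) | ( r << 16 ) | ( g << 8 ) | b;
	}

	public static void main( final String[] args )
	{
		// Three fully opaque channels: one white, two black.
		final int out = composite( new int[] { 0xffffffff, 0xff000000, 0xff000000 } );
		System.out.printf( "%08x%n", out ); // ff555555
	}
}
```

Note the white channel comes out at 0x55 = 255/3: exactly the 1/3-intensity effect discussed in the next paragraph.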
>>>> 
>>>> However, I think the problem is deeper than that. For an RGB ImgPlus, 
>>>> there are three LUTs and each of them has an alpha of 255, but that alpha 
>>>> only applies to one of the colors in the LUT. When you're compositing 
>>>> images and weighing them equally, if two are black and one is white, then 
>>>> the result is 1/3 of the white intensity - if you translate that to red, 
>>>> green and blue images, the resulting intensity will be 1/3 of that 
>>>> desired. This might sound weird, but the only solution that works out 
>>>> mathematically is for the defaultLUTs in the DefaultDatasetView to use 
>>>> color tables that return values that are 3x those of ColorTables.RED, 
>>>> GREEN and BLUE. Thinking about it, I'm afraid this *is* the correct model 
>>>> and each channel really is 3x brighter than possible.
>>>> 
>>>> It took me quite a bit of back and forth to come up with the above... I 
>>>> hope you all understand what I'm saying and understand the problem and 
>>>> counter-intuitive solution and have the patience to follow it. Dscho, if 
>>>> you made it this far - you're the mathematician, what's your take?
>>>> 
>>>> --Lee
>>>> 
>>>> 
>>>> _______________________________________________
>>>> ImageJ-devel mailing list
>>>> ImageJ-devel@imagej.net
>>>> http://imagej.net/mailman/listinfo/imagej-devel
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> 
> 

