Hi Greg,

I am in the midst of trying to get Evalglare to accurately process HDR
images.  To account for light fall-off (vignetting) in a Canon + fish-eye
generated HDR, I have tried to multiply the original HDR image by a
correction-factor image using the pcomb command.  However, the resulting
image does not preserve the luminance values of the original (as it should,
particularly in the center part of the image, which is only multiplied by a
factor of 1).

I am using the following command line and would appreciate any guidance on
why the luminance values are so affected:
pcomb -e "ro=ri(1)*ri(2);go=gi(1)*gi(2);bo=bi(1)*bi(2);" vignette_283.pic
Chauhaus3cr.hdr > Chauhaus3Bvg.hdr
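To illustrate the intended behavior, here is a toy Python sketch (not Radiance code; the map values are made up rather than taken from vignette_283.pic).  A vignetting-correction map should equal 1.0 at the image center, so the multiply leaves center luminance untouched and only boosts the fall-off toward the rim:

```python
# Toy sketch of per-pixel vignetting correction. The correction map is 1.0
# at the center, >1 toward the corners; values are illustrative only.

def apply_vignette_correction(channel, correction):
    """Multiply one color channel, pixel by pixel, by a correction map."""
    return [[p * c for p, c in zip(prow, crow)]
            for prow, crow in zip(channel, correction)]

# 3x3 toy channel with uniform luminance, and a radial correction map
channel = [[2.0, 2.0, 2.0],
           [2.0, 2.0, 2.0],
           [2.0, 2.0, 2.0]]
correction = [[1.4, 1.2, 1.4],
              [1.2, 1.0, 1.2],
              [1.4, 1.2, 1.4]]

out = apply_vignette_correction(channel, correction)
# The center pixel is unchanged; the corners are boosted to undo fall-off.
```

If the center of the output differs from the center of the input, something other than the multiply (e.g. an exposure adjustment) is being applied.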

My initial attempt actually used the -o option to normalize the values of
the image before processing, but the resulting vignetted image was very dark
and the luminance values had dropped drastically:

pcomb -e "ro=ri(1)*ri(2);go=gi(1)*gi(2);bo=bi(1)*bi(2);" -o vignette_283.pic
Chauhaus3cr.hdr > Chauhaus3vg.hdr
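If I am reading pcomb's behavior correctly (an assumption worth checking against the man page), -o requests the original pixel values of the picture it precedes, dividing out any EXPOSURE recorded in that picture's header.  A toy calculation, with a made-up exposure value, of why that could darken every pixel uniformly:

```python
# Toy arithmetic (assumption: pcomb's -o divides out the EXPOSURE of the
# picture it precedes before the expression is evaluated; the numbers here
# are invented for illustration, not read from any actual file).

exposure = 4.0        # hypothetical EXPOSURE line in the vignette picture
stored_center = 1.0   # stored pixel value at the center of the map
pixel = 2.0           # a pixel value from the scene HDR

# Without -o: the stored map value is used directly, so the center of the
# scene image passes through unchanged.
without_o = pixel * stored_center

# With -o: the map value is divided by the exposure first, so every pixel
# (including the center) ends up scaled down by 1/exposure.
with_o = pixel * (stored_center / exposure)
```

If that reading is right, a map whose stored values are already in the desired 1.0-at-center form should be combined without -o.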


As an aside to something Rob mentioned earlier, I do not believe that
Evalglare currently restricts its evaluation to the circular view of the
fish-eye HDR automatically.  To get accurate results, the rectangular
images should be cropped (using pcompos), and the view type should be
verified before processing.  I have discovered that several of my
Photosphere-generated HDRs were being treated as a perspective view rather
than an angular fish-eye (vtv instead of vta, as I had assumed), and I had
to adjust this setting manually before getting accurate and meaningful
Evalglare results.  (Greg -- is there any way to specify the lens type in
Photosphere before compiling the HDR so that it carries through from the
beginning?)
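For intuition, here is a small Python sketch (my own illustration, not evalglare's code) of why the view type matters: under an angular fish-eye (-vta with a 180 degree view), the angle off the view axis grows linearly with distance from the image center, so a pixel halfway to the rim looks 45 degrees off-axis.  A perspective view (-vtv) assigns that same pixel a quite different direction, which throws off the solid-angle weighting.

```python
# Angular fisheye (vta) mapping: the off-axis angle of a pixel is simply
# proportional to its radial distance from the image center.
# (Sketch only; view_angle_deg=180 corresponds to a full-hemisphere -vh 180.)

def off_axis_angle_deg(r, rim_radius, view_angle_deg=180.0):
    """Off-axis angle for an angular fisheye, given radial pixel distance."""
    return (r / rim_radius) * (view_angle_deg / 2.0)

# center pixel -> straight ahead; rim pixel -> 90 degrees off-axis
```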

Best,
Rashida


-- 
Rashida Mogri | LEED AP
Harvard Graduate School of Design
MDesS, 2011 | Sustainable Design


On Mon, Sep 27, 2010 at 7:35 PM, Gregory J. Ward <[email protected]> wrote:

> Try:
>
> pcomb -e
> 's(x):x*x;m=if(xmax*ymax/4-s(x-xmax/2)-s(y-ymax/2),1,0);ro=m*ri(1);go=m*gi(1);bo=m*bi(1)'
> input.hdr > output.hdr
>
> -Greg
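(For intuition: Greg's mask expression keeps only the pixels inside the circle inscribed in the image, using xmax*ymax/4 as the squared radius, which assumes a square image.  A rough Python equivalent of the per-pixel test, for illustration only -- pcomb evaluates this itself:)

```python
# Rough Python equivalent of Greg's pcomb mask expression:
#   m = if(xmax*ymax/4 - s(x-xmax/2) - s(y-ymax/2), 1, 0)
# Pixels inside the inscribed circle get weight 1, the corners get 0.

def mask(x, y, xmax, ymax):
    def s(v):          # s(x):x*x in the cal expression
        return v * v
    inside = xmax * ymax / 4 - s(x - xmax / 2) - s(y - ymax / 2)
    return 1 if inside > 0 else 0   # if(cond,1,0): 1 when cond is positive
```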
>
> > From: Rob Guglielmetti <[email protected]>
> > Date: September 27, 2010 4:00:11 PM PDT
> >
> > Oh, they are, very low (~2 nits in the HDR), I was just curious because
> these corner areas are not truly in the field of view that we're evaluating.
> I ASSume evalglare only is looking at the hemisphere anyway. (?)
> >
> > Having said all that, does anyone have a masking tip or something like
> that to clean up those corners, just out of aesthetic curiosity?
> >
> > On Mon, Sep 27, 2010 at 4:42 PM, Gregory J. Ward <[email protected]>
> wrote:
> > Hi Rob,
> >
> > You need to check what the values of those border pixels are.  Chances
> are, they are quite low compared to the circular image, and you are only
> seeing them because of the tone-mapping compression going on in Photosphere,
> assuming that's what you're using.
> >
> > Best,
> > -Greg
>
> _______________________________________________
> HDRI mailing list
> [email protected]
> http://www.radiance-online.org/mailman/listinfo/hdri
>
