On Wed, Aug 31, 2011 at 6:08 PM, Larry Gritz <l...@larrygritz.com> wrote:
> There is no specific goal to match any other application, but the filtering 
> is tricky and easy to get wrong (as I have clearly botched it in this case, 
> despite having written it many times over 25 years!).  So when we match the 
> results of another application that is written by smart people (including, 
> sometimes us, in other products or companies), it raises our confidence that 
> we've done it correctly this time.  When several allegedly-smart products 
> each gets a completely different answer, it makes me very nervous about 
> assuming that we are somehow the one group to get it right.

Oh I know that one...

>> I notice the definition for Lanczos3 for instance is not my expected
>> windowing function which is something more like:
>>
>> sinc(position) * sinc(position / size)
>>
>> where size = 3 for a 3 lobed window.
>
> Um... I think we're correct, and Jeremy says we match several other smart 
> apps now...
>
> Glancing at the code myself, I clearly see our lanczos3 implementation 
> returning
>
>        a/(pi*x)^2 * sin(pi*x)*sin(pi*x/a)
>        = sinc(x) * sinc(x/3)    (for "normalized" sinc, a=3)

Having looked at this, I guess I should have tried expanding the
formula myself... I agree it is the same. I've just never had to expand
the sinc() functions for either accuracy or speed: when re-sampling an
image I just cache the filter results. That won't work for arbitrary
transforms, but it works well for typical resizing usage.
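
(For what it's worth, here's the kind of thing I mean, as a rough
sketch rather than anything taken from the OIIO source -- the names and
structure are mine. The expanded form quoted above and the sinc product
are the same kernel, and for a plain fixed-ratio resize each output
column's weights can be computed once and reused for every row.)

    // Illustrative sketch only -- not OIIO's code; names are mine.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Normalized sinc: sin(pi x) / (pi x), with the limit 1 at x = 0.
    static float sinc(float x)
    {
        if (std::fabs(x) < 1.0e-6f)
            return 1.0f;
        float px = 3.14159265358979f * x;
        return std::sin(px) / px;
    }

    // Lanczos kernel with 'a' lobes (a = 3 for lanczos3).
    // a/(pi*x)^2 * sin(pi*x)*sin(pi*x/a) and sinc(x)*sinc(x/a) are the
    // same expression, just factored differently.
    static float lanczos(float x, float a)
    {
        x = std::fabs(x);
        return (x < a) ? sinc(x) * sinc(x / a) : 0.0f;
    }

    // For a fixed-ratio resize, an output pixel's weights depend only on
    // where its center falls relative to the source grid, so they can be
    // computed once per output column and reused for every row (and the
    // same again for the vertical pass).  Edge clamping is left out.
    struct FilterTaps { int first; std::vector<float> weights; };

    static FilterTaps
    make_taps(int outx, float scale /* src/dst, e.g. 2 for halving */, float a)
    {
        FilterTaps t;
        float srcx   = (outx + 0.5f) * scale - 0.5f;   // pixel-center mapping
        float radius = a * std::max(scale, 1.0f);
        t.first  = (int)std::ceil(srcx - radius);
        int last = (int)std::floor(srcx + radius);
        float wsum = 0.0f;
        for (int j = t.first; j <= last; ++j) {
            float w = lanczos((j - srcx) / std::max(scale, 1.0f), a);
            t.weights.push_back(w);
            wsum += w;
        }
        for (float& w : t.weights)                     // normalize to sum 1
            w /= wsum;
        return t;
    }
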

>> Also I wondered what 'phase' the implementation samples the filters
>> at - does it always result in all pixels being filtered when exactly
>> halving/doubling, or does the filter only effectively sample at 0.0
>> and thus return a single sample for half the outputs? (Once I have it
>> compiled, I can obviously run it to find out, there appears to be a
>> lot of +/- 0.5s in the code and I'm not as good as the compiler :-).
>
> Sorry, I'm not exactly sure I understand the question.
>

Jeremy's answer suggests that the implementation samples the filters
the same way Shake or Nuke would. I ask because we recently came
across a production that was handing us both '4K' EXR and '2K' DPX
files; unfortunately they had used a tool that did not apply the pi/4
offset when producing the 2K, so to reproduce the 2K DPX files from
the EXRs we had to factor that into our system. Visually, sampling on
the pi/4 offset looks much nicer, especially if the original was not
correctly band-limited in the first place (in this exact-halving
situation you can end up with alternate pixels being 'impulse'
filtered).
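
(To illustrate what I mean by the offset -- again just a sketch of
mine, not OIIO's code, and using a radius-1 triangle filter only to
keep the numbers obvious -- compare where the filter centers land in
the two conventions when exactly halving:)

    // Illustrative sketch only (not OIIO's code).
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    static float triangle(float x)      // radius-1 triangle (tent) filter
    {
        return std::max(0.0f, 1.0f - std::fabs(x));
    }

    int main()
    {
        for (int i = 0; i < 3; ++i) {
            // No offset: the filter is centered exactly on source pixel
            // 2*i, so w(2*i)=1 and w(2*i +/- 1)=0 -- alternate source
            // pixels are 'impulse' sampled and the odd ones are ignored.
            float srcx0 = 2.0f * i;
            // Pixel-center mapping: srcx = (i+0.5)*2 - 0.5 = 2*i + 0.5, so
            // the center falls midway between two source pixels and both
            // get weight 0.5 -- every source pixel contributes somewhere.
            float srcx1 = (i + 0.5f) * 2.0f - 0.5f;
            std::printf("dst %d: no offset  w(%d)=%.1f  w(%d)=%.1f\n", i,
                        2*i, triangle(2*i - srcx0),
                        2*i + 1, triangle(2*i + 1 - srcx0));
            std::printf("dst %d: centered   w(%d)=%.1f  w(%d)=%.1f\n", i,
                        2*i, triangle(2*i - srcx1),
                        2*i + 1, triangle(2*i + 1 - srcx1));
        }
        return 0;
    }
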

Anyway, I finally beat CMake into submission and built the code against
our preferred libraries. I'll move on to testing it and then
OpenColorIO.

Thanks

Kevin