No comments on this in a week. Is everybody ok with the compatibility break in
maketx SHA-1 fingerprinting?
-- lg
On Feb 23, 2013, at 10:25 AM, Larry Gritz wrote:
> Usually pull requests just go to people who "follow" the GitHub project, but
> sometimes I forward them to oiio-dev if the proposed changes have important
> implications for users that they may want to weigh in on, even if they are
> usually unconcerned about the details of the code. This is one of those
> kinds of changes.
>
> ---
>
>
> More work to speed up maketx. My profiles show that computing the hash is
> becoming a significant cost, especially when running maketx on a machine with
> many cores, because the SHA-1 hash is computed serially. SHA-1 is inherently
> serial, but we can SHA-1 hash pieces of the image independently, then combine
> the results.
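>
> To make the "combine the results" idea concrete, here's a minimal sketch of
> the hash-of-hashes scheme. It uses OpenSSL's SHA1() purely for illustration;
> the patch itself presumably goes through OIIO's own SHA-1 code and operates
> on ImageBuf pixel data rather than raw byte blocks:
>
>     #include <openssl/sha.h>
>     #include <string>
>     #include <vector>
>
>     // SHA-1 a buffer, returning the 20-byte digest as a std::string.
>     static std::string sha1 (const std::string &data)
>     {
>         unsigned char md[SHA_DIGEST_LENGTH];
>         SHA1 (reinterpret_cast<const unsigned char *>(data.data()),
>               data.size(), md);
>         return std::string (reinterpret_cast<const char *>(md),
>                             SHA_DIGEST_LENGTH);
>     }
>
>     // Hash each block independently, then return the hash of the
>     // concatenated per-block digests. Each block's digest depends
>     // only on that block, so the per-block hashes may be computed
>     // in any order, or in parallel.
>     std::string hash_of_hashes (const std::vector<std::string> &blocks)
>     {
>         std::string digests;
>         for (size_t i = 0; i < blocks.size(); ++i)
>             digests += sha1 (blocks[i]);
>         return sha1 (digests);
>     }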
>
> So this patch refactors computePixelHashSHA1 for parallelism:
>
> Allow ROI to designate a section of the image.
>
> Add an option to computePixelHashSHA1 to hash in blocks, then return the hash
> of the hashes (versus hashing the entire image as one block).
>
> When hashing in blocks, this can be multithreaded, since the computations are
> independent! (A rough sketch follows this list.)
>
> Change maketx to use the block-based hashing, in blocks of 256 scanlines.
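>
> And for the curious, a rough sketch of what the multithreaded block hashing
> might look like, reusing sha1() from the sketch above and assuming a simple
> contiguous scanline layout. The real code works through ImageBuf and ROI, so
> treat the names and signatures here as illustrative only:
>
>     #include <algorithm>
>     #include <string>
>     #include <thread>
>     #include <vector>
>
>     // Hash scanlines [ybegin, yend) of an image stored as contiguous
>     // rows of 'rowbytes' bytes each.
>     static std::string hash_block (const unsigned char *pixels,
>                                    size_t rowbytes, int ybegin, int yend)
>     {
>         const char *start =
>             reinterpret_cast<const char *>(pixels) + ybegin * rowbytes;
>         return sha1 (std::string (start, (yend - ybegin) * rowbytes));
>     }
>
>     std::string parallel_pixel_hash (const unsigned char *pixels,
>                                      size_t rowbytes, int height,
>                                      int blocksize = 256, int nthreads = 4)
>     {
>         int nblocks = (height + blocksize - 1) / blocksize;
>         std::vector<std::string> digests (nblocks);
>
>         // Each worker takes every nthreads-th block. Blocks are
>         // independent, so no locking is needed.
>         std::vector<std::thread> pool;
>         for (int t = 0; t < nthreads; ++t) {
>             pool.emplace_back ([&, t] {
>                 for (int b = t; b < nblocks; b += nthreads) {
>                     int ybegin = b * blocksize;
>                     int yend = std::min (height, ybegin + blocksize);
>                     digests[b] = hash_block (pixels, rowbytes,
>                                              ybegin, yend);
>                 }
>             });
>         }
>         for (size_t t = 0; t < pool.size(); ++t)
>             pool[t].join ();
>
>         // The final fingerprint is the hash of the ordered digests.
>         std::string all;
>         for (int b = 0; b < nblocks; ++b)
>             all += digests[b];
>         return sha1 (all);
>     }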
>
> This will be incompatible with old maketx-generated hashes (both will be
> accepted by the texture system, but old will not match against new, since the
> hashes differ), but it significantly speeds up maketx on multi-core machines.
>
> I'm open to discussion/complaints about the compatibility issue. I think that
> overall, we'll be happier with a much faster maketx that can be fully
> multithreaded (as will anything else that uses computePixelHashSHA1). But in
> the short term, it means that a texture made with the new maketx will not
> properly recognize itself as a duplicate of an identical texture made with
> the old maketx. (Old will still match against old, and new against new.) Is
> this an actual problem in the real world? Probably only if you have LOTS of
> duplicate textures, split close to evenly between new and old maketx, and
> you're unwilling to re-maketx the old textures to re-align the hashes.
>
> My intuition is that this isn't a significant problem in a production
> environment: as shows come and go, assets from old shows made with the old
> maketx fall out of use, new shows will use the new maketx, and there will be
> very little mixture between the two. (And if there is a small amount of
> mixing, who cares? It won't hurt much, and the faster maketx times will be
> worth it.) But if you think I'm wrong about this, please let me know.
>
> You can merge this Pull Request by running
>
> git pull https://github.com/lgritz/oiio lg-parallel-sha1
> Or view, comment on, or merge it at:
>
> https://github.com/OpenImageIO/oiio/pull/530
>
> Commit Summary
>
> Clarify comments and add ROI::All()
> Refactor computePixelHashSHA1 for parallelism:
> File Changes
>
> M src/include/imagebuf.h (11)
> M src/include/imagebufalgo.h (21)
> M src/libOpenImageIO/imagebufalgo.cpp (115)
> M src/libOpenImageIO/maketexture.cpp (8)
> M testsuite/maketx/ref/out.txt (8)
> Patch Links:
>
> https://github.com/OpenImageIO/oiio/pull/530.patch
> https://github.com/OpenImageIO/oiio/pull/530.diff
>
> --
> Larry Gritz
> [email protected]
>
--
Larry Gritz
[email protected]
_______________________________________________
Oiio-dev mailing list
[email protected]
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org