Thanks Larry! I will definitely check out ImageBuf::Iterator and do my best. Common or not, if you check some 3D format parsers you will find that they normalize vertex normals by default. And even if most 3D DCC apps do the same with normal map inputs, and most normal map generators output normalized data, it would still be good to have this function in OpenImageIO, well, just in case. :D

Hi, Vlad.

That's probably the simplest approach with the existing set of IBA functions, yes. Maybe there is some optimizing you can do around the edges -- like, `mul(img,img)` may be faster than `pow(img,2.0)`, I'm not sure, and you definitely want to use the variety of IBA functions that take a destination image rather than returning an ImageBuf, in order to minimize needless buffer copying. But those obvious tricks will only get you so far.
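
For instance, those two tricks might look like this (a tiny, untested sketch; it assumes `vec` is a float ImageBuf that already holds the [-1,1] vector data):

    // Assumes: #include <OpenImageIO/imagebufalgo.h> and using namespace OIIO;
    ImageBuf sq, mag2;
    ImageBufAlgo::mul(sq, vec, vec);      // square each channel; may beat pow(vec, 2.0f)
    ImageBufAlgo::channel_sum(mag2, sq);  // 1-channel per-pixel sum of squares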
If you're doing this a lot and it's performance critical, a better way would be to write it as a single function that uses ImageBuf::Iterator to traverse the image and do all the operations at once for each pixel, with no extra buffer copies. Looking at the source code of any of the usual IBA functions that take one input image and produce one output image will give you a good example to copy; just change the guts to make your new function.
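
As a rough illustration of that iterator pattern (an untested sketch, not actual OIIO code; it assumes a 3-channel float normal map encoded in [0,1] and uses a made-up function name, normalize_normals):

    #include <cmath>
    #include <OpenImageIO/imagebuf.h>
    #include <OpenImageIO/imagebufalgo_util.h>

    using namespace OIIO;

    // Remap [0,1] -> [-1,1], normalize each pixel's 3-vector, remap back to
    // [0,1].  One pass over the pixels, no temporary ImageBufs.
    ImageBuf normalize_normals(const ImageBuf& src, int nthreads = 0)
    {
        ImageBuf dst(src.spec());
        ROI roi = src.roi();
        ImageBufAlgo::parallel_image(roi, nthreads, [&](ROI roi) {
            ImageBuf::ConstIterator<float> s(src, roi);
            for (ImageBuf::Iterator<float> d(dst, roi); !d.done(); ++d, ++s) {
                float x   = s[0] * 2.0f - 1.0f;
                float y   = s[1] * 2.0f - 1.0f;
                float z   = s[2] * 2.0f - 1.0f;
                float len = std::sqrt(x * x + y * y + z * z);
                float inv = len > 0.0f ? 1.0f / len : 0.0f;
                d[0] = x * inv * 0.5f + 0.5f;
                d[1] = y * inv * 0.5f + 0.5f;
                d[2] = z * inv * 0.5f + 0.5f;
            }
        });
        return dst;
    }

The real IBA sources additionally handle type dispatch, IBAprep setup, ROI/channel checks, and so on, which this sketch skips.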
I don't recall anybody asking for this particular thing before, but if you think it is a commonly needed operation, then by all means propose a PR to add this new function after you've implemented it.

On Sun, Jun 4, 2023 at 6:28 PM Vladlen Erium
<v...@hdri.xyz> wrote:

What would be the most efficient way to implement normalizing vector-data images (normals) using ImageBufAlgo? At the moment I only see a way to do it in several steps (roughly the chain sketched below):

- ImageBufAlgo::mad to remap [0.0, 1.0] -> [-1.0, 1.0]
- ImageBufAlgo::pow to raise each channel to the power of 2
- ImageBufAlgo::channel_sum for the vector magnitude
- ImageBufAlgo::div (source and magnitude) to normalize the vector length
- ImageBufAlgo::mad to remap back to the [0.0, 1.0] range
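
A sketch of that chain (untested; it assumes a 3-channel float ImageBuf named src with normals encoded in [0,1], adds a square-root step for the true magnitude, and replicates the 1-channel magnitude with ImageBufAlgo::channels so the division is per-channel):

    // Assumes: #include <OpenImageIO/imagebufalgo.h> and using namespace OIIO;
    ImageBuf vec  = ImageBufAlgo::mad(src, 2.0f, -1.0f);         // [0,1] -> [-1,1]
    ImageBuf sq   = ImageBufAlgo::pow(vec, 2.0f);                 // square each channel
    ImageBuf mag  = ImageBufAlgo::pow(ImageBufAlgo::channel_sum(sq), 0.5f);  // sqrt(x^2+y^2+z^2)
    ImageBuf mag3 = ImageBufAlgo::channels(mag, 3, { 0, 0, 0 });  // replicate to 3 channels
    ImageBuf norm = ImageBufAlgo::div(vec, mag3);                 // unit-length vectors
    ImageBuf dst  = ImageBufAlgo::mad(norm, 0.5f, 0.5f);          // back to [0,1]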
This not only requires many steps but also a lot of temporary buffers, which for huge textures can take a lot of memory, even though all the steps could be done per pixel and perfectly parallelized (shaders, CUDA). Maybe I missed some OIIO functions? 🤔

Best regards,
Vlad

PS: By the way, it looks like Google completely filters out all of Larry's messages if Gmail is used to subscribe to this mailing list. They are not even in the spam folder, where mailing list messages quite often end up 😒
--
Larry Gritz
lg@imageworks.com
_______________________________________________
Oiio-dev mailing list
Oiio-dev@lists.openimageio.org
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
