A few more thoughts on the ride home...

When I wrote this:

- My *really* simple (incorrect?) mental model of these approaches is that
> you convert an rgb image into a 2d array of spatial derivatives, manipulate
> the derivatives in clever ways, and then solve for a final rgb image (with
> an input seed?) that best approximates your munged derivatives.
>

My intent is for you to correct this with the right model of how the
operations work. :)
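To make the question concrete, here's roughly the pipeline I have in mind, as a toy single-channel numpy sketch with a naive Jacobi solver (all function names and the solver choice are illustrative, nothing to do with any existing OIIO API). If the model is right, the "input seed" shows up as the boundary condition (and, for an iterative solver, the initial guess):

```python
import numpy as np

def gradients(img):
    # Forward differences; the last column/row of each field stays zero.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def divergence(gx, gy):
    # Backward differences of the forward-difference fields, so that an
    # unmodified gradient field reproduces the discrete Laplacian of img.
    div = gx + gy
    div[:, 1:] -= gx[:, :-1]
    div[1:, :] -= gy[:-1, :]
    return div

def poisson_solve(div, seed, iters=1000):
    # Solve the discrete Poisson equation lap(u) = div with Dirichlet
    # boundary taken from `seed`, via plain Jacobi iteration.
    u = seed.astype(np.float64).copy()
    for _ in range(iters):
        u[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                         u[1:-1, :-2] + u[1:-1, 2:] -
                         div[1:-1, 1:-1]) / 4.0
    return u
```

With unmodified gradients and the original image's boundary, the solve should recover the original image; "cleverly manipulating" gx/gy before the divergence step is where the actual editing would happen.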


- Is Poisson image editing amenable to scanline processing?  Or tile
processing?  If so, we'll probably want to structure the implementation
accordingly.  Even if only one portion of the transform supports scanlines
or tiles (i.e., rgb->poisson can work on chunks, but poisson->rgb requires
the whole image), it may still make sense to consider it.
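To make the chunking question concrete: the forward (rgb->derivatives) direction is a small local stencil, so it bands trivially given a one-pixel apron, while the inverse solve couples every pixel. A toy numpy illustration of scanline-banded processing for the stencil half (names are mine; the apron bookkeeping is the point):

```python
import numpy as np

def laplacian(img):
    # Whole-image 5-point Laplacian with edge-replicated boundary.
    p = np.pad(img, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4 * img)

def laplacian_chunked(img, band=3):
    # Same result, computed a few scanlines at a time: each band only
    # needs its own rows plus a one-row apron above and below.
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for y0 in range(0, h, band):
        y1 = min(y0 + band, h)
        sl = p[y0:y1 + 2]          # band rows + apron, in padded coords
        out[y0:y1] = (sl[:-2, 1:-1] + sl[2:, 1:-1] +
                      sl[1:-1, :-2] + sl[1:-1, 2:] - 4 * img[y0:y1])
    return out
```

The solve direction has no such locality in general (it's an elliptic problem), though multigrid-style solvers at least access the image in a cache-friendlier pattern.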

- Rather than basing the algorithm on ImageBufAlgo, we may want to expose a
version that processes raw pixels (i.e., described by an ImageSpec*).  We
can *also* expose an API for processing existing ImageBufs, but if it's
simpler to expose a version that processes pixels in a different underlying
container, it will make sense to allow it.  What I'm thinking of is how we
could implement the processing in a comp package such as Nuke, which doesn't
rely on ImageBufAlgo.  (And while we could certainly construct an ImageBuf
temporarily, there's no reason to do so if not needed.)
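A sketch of the layering I mean, in Python rather than C++ for brevity (the hypothetical `sharpen_raw` stands in for the real op; the point is only that the core consumes a flat buffer plus dimensions, ImageSpec-style, and a thin wrapper adapts richer containers):

```python
import numpy as np

def sharpen_raw(pixels, width, height, nchannels):
    # Hypothetical raw-pixel core: flat float buffer in, flat buffer out.
    # No ImageBuf/ImageBufAlgo dependency, so a host like Nuke could hand
    # us its own pixel storage directly.
    img = np.asarray(pixels, dtype=np.float64).reshape(height, width, nchannels)
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4 * img)
    return (img - 0.25 * lap).ravel()

def sharpen_wrapped(img):
    # Thin convenience wrapper for callers who already hold an array-like
    # container; this is where an ImageBuf adapter would live.
    h, w, c = img.shape
    return sharpen_raw(img.ravel(), w, h, c).reshape(h, w, c)
```

The design choice is just that the container-specific code stays in the wrapper, so the core can be reused by hosts that never see an ImageBuf.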

-- Jeremy
_______________________________________________
Oiio-dev mailing list
Oiio-dev@lists.openimageio.org
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org