Hello,

Since my GSoC proposal was accepted, I have described the behavior and
the technical parts of my tool in more detail.

I'd like to get opinions and advice from competent people on the
different parts (UI, reconstruction step, ...).

You can find the details either in a public wave or in the rest of this
mail.



Michael Muré

Green coordinates





Preparation step

   - Tool activation
   - Creation of the cage and setup of the tool (UI)
   - the user draws a closed cage in the image
   - the user can change the setup of the tool, if parameters available (not
   sure yet)
   - Intersection of the selection and the cage to obtain the pixels to be
   processed (my GSoC covers the interior of the cage only, see my proposal)
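As a sketch of that intersection step: deciding whether a selected pixel lies inside the closed cage can be done with a standard even-odd ray-casting test (function names and the example cage are illustrative, not from this mail):

```python
def point_in_cage(px, py, cage):
    """Even-odd ray casting: count how often a horizontal ray from
    (px, py) crosses a cage edge; an odd count means 'inside'."""
    inside = False
    n = len(cage)
    for i in range(n):
        x1, y1 = cage[i]
        x2, y2 = cage[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x where this edge crosses the horizontal line y = py
            xc = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < xc:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

Running this over the bounding box of the selection would give the set of pixels to bind.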

Binding step

   - For each pixel processed, the Green coordinates need a set of
   coefficients (one per vertex of the cage, one per edge of the cage)
   - the number of coefficients is: Pixel_number x (Cage_vertex_number +
   Cage_edge_number)
   - in 2D, Cage_vertex_number = Cage_edge_number
   - the computation of these coefficients is described at the end of the
   Green Coordinates paper
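The shape of the per-pixel coefficient table can be sketched as follows. The real vertex and edge coefficients are the ones from the Green Coordinates paper; as a stand-in with the same table layout (one coefficient per cage vertex per pixel), this sketch uses mean value coordinates, a related scheme whose vertex weights also reproduce the point exactly for an undeformed cage:

```python
import math

def _half_tan(p, a, b):
    """tan(theta/2) for the angle at p between the directions to a and b."""
    ux, uy = a[0] - p[0], a[1] - p[1]
    vx, vy = b[0] - p[0], b[1] - p[1]
    dot = ux * vx + uy * vy
    cross = ux * vy - uy * vx
    return (math.hypot(ux, uy) * math.hypot(vx, vy) - dot) / cross

def vertex_weights(p, cage):
    """One coefficient per cage vertex for point p (mean value
    coordinates, as a stand-in: real Green coordinates add one more
    coefficient per edge, filling a second table of the same size)."""
    n = len(cage)
    w = []
    for i in range(n):
        prev, cur, nxt = cage[i - 1], cage[i], cage[(i + 1) % n]
        r = math.hypot(cur[0] - p[0], cur[1] - p[1])
        w.append((_half_tan(p, prev, cur) + _half_tan(p, cur, nxt)) / r)
    s = sum(w)
    return [x / s for x in w]
```

During the bind, something like `vertex_weights` would run once per processed pixel and the results would be stored, giving the Pixel_number x Cage_vertex_number table (plus the same-sized edge table for real Green coordinates).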

Real time deformation step

   - For each user action: processing, then display of the result
   - the pixels processed are transformed by the Green coordinates
   - discretization: the result of the transformation is not an exact
   pixel position (the coordinates are not integers). One target pixel
   can also receive several "information" contributions, or none at all.
   - I need a way to handle this problem, see below
   - It could be necessary to compute only a part of the pixels to
   achieve real-time performance
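Once the coefficients are stored, each interactive update is only a weighted sum over the current cage vertex positions (a sketch of the vertex part only; full Green coordinates add an analogous sum over the edge normals):

```python
def deform(weights, cage_now):
    """weights: per-pixel lists of per-vertex coefficients, computed
    once at bind time.  cage_now: current (deformed) vertex positions.
    Returns the (float) target position of every bound pixel."""
    out = []
    for w in weights:
        x = sum(wi * vx for wi, (vx, vy) in zip(w, cage_now))
        y = sum(wi * vy for wi, (vx, vy) in zip(w, cage_now))
        out.append((x, y))
    return out
```

Because the weights are fixed after the bind, this inner loop is all that has to run per user action, which is what makes the real-time step feasible.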

Final action

   - maybe some filtering and post-processing
   - end of the tool

Image reconstruction

I see 2 ways to handle the problem

Solution 1:

   - each source pixel is sent to a target pixel
   - For target pixels with more than one contribution, the value is
   computed as an average of the colors, weighted by the distance to the
   pixel
   - For target pixels with no contribution, the value is computed as a
   weighted average of the closest neighbours
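A minimal sketch of Solution 1 (simplified: a plain average instead of the distance weighting proposed above, and a single grey channel):

```python
def scatter(w, h, mapped):
    """Forward-map reconstruction: mapped is a list of ((tx, ty), colour)
    with float target coordinates.  Colours are averaged per target
    pixel; pixels that received nothing are then filled from the average
    of their already-known neighbours."""
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for (tx, ty), c in mapped:
        ix, iy = round(tx), round(ty)
        if 0 <= ix < w and 0 <= iy < h:
            acc[iy][ix] += c
            cnt[iy][ix] += 1
    img = [[acc[y][x] / cnt[y][x] if cnt[y][x] else None
            for x in range(w)] for y in range(h)]
    # fill the holes from known 8-neighbours (collected first, so a
    # filled hole does not feed a later one)
    fills = []
    for y in range(h):
        for x in range(w):
            if img[y][x] is None:
                near = [img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dx or dy)
                        and 0 <= x + dx < w and 0 <= y + dy < h
                        and img[y + dy][x + dx] is not None]
                fills.append((y, x, sum(near) / len(near) if near else 0.0))
    for y, x, v in fills:
        img[y][x] = v
    return img
```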

Solution 2:

   - when a source pixel is sent to the target, it affects not only one
   pixel but an area of pixels (3x3 or 4x4) with a Gaussian or similar
   weighting
   - For pixels with no information (which should be rare), the value is
   computed as a weighted average of the closest neighbours
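Solution 2 could look like this sketch: each mapped pixel splats into the 3x3 neighbourhood around its floating-point target position with Gaussian weights (the sigma value and names are illustrative):

```python
import math

def splat(w, h, mapped, sigma=0.5):
    """Each ((tx, ty), colour) pair deposits weighted colour into the
    3x3 neighbourhood of its target position; the weight is Gaussian in
    the distance from (tx, ty) to each pixel centre, and the result is
    normalised by the accumulated weight."""
    acc = [[0.0] * w for _ in range(h)]
    wsum = [[0.0] * w for _ in range(h)]
    for (tx, ty), c in mapped:
        cx, cy = round(tx), round(ty)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                x, y = cx + dx, cy + dy
                if 0 <= x < w and 0 <= y < h:
                    d2 = (x - tx) ** 2 + (y - ty) ** 2
                    g = math.exp(-d2 / (2 * sigma * sigma))
                    acc[y][x] += g * c
                    wsum[y][x] += g
    return [[acc[y][x] / wsum[y][x] if wsum[y][x] else 0.0
             for x in range(w)] for y in range(h)]
```

The normalisation step means overlapping splats blend smoothly, which is why holes should be much rarer than with Solution 1.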


   - compute the transformation for all the pixels, store them in a huge
   table with their colors, and perform a sampling on it to compute the
   final image

Data structure

For the coefficients, two big tables should be enough, since they are
computed during the bind and don't change afterwards.

Is the reverse transform mathematically too difficult?

I'm not sure yet, but I think the transformation is not bijective, so a
reverse transformation cannot be achieved.
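A tiny numeric illustration of the problem: when the user folds the cage, two distinct source pixels can be sent to the same target, so the forward map is not injective and no inverse exists for that pose (all numbers below are hypothetical, x coordinate only):

```python
# Two distinct pixels, bound with different vertex weights to a
# three-vertex cage.
w1 = [0.5, 0.5, 0.0]
w2 = [0.0, 0.5, 0.5]

# The user folds the cage: the middle vertex ends up at x = 2,
# the two outer vertices at x = 0.
cage_x = [0.0, 2.0, 0.0]

p1 = sum(w * x for w, x in zip(w1, cage_x))
p2 = sum(w * x for w, x in zip(w2, cage_x))
# p1 == p2: two different sources land on the same target.
```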

Ways of improving

   - multithreading
   - computing only a fraction of the pixels during the real-time phase
   (1/9, 1/16, ...)
   - automatic adaptation of the proportion of the pixels computed (based on
   computing time)
   - automatic creation of the cage, based on the shape of the selection (no
   idea how to do that)
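The 1/9, 1/16, ... idea and its automatic adaptation could be sketched like this (the stride bounds and the frame budget are made-up values):

```python
def preview_pixels(w, h, k):
    """Process only 1/k^2 of the pixels: every k-th in x and in y.
    k=3 gives the 1/9 preview, k=4 gives 1/16, and so on."""
    return [(x, y) for y in range(0, h, k) for x in range(0, w, k)]

def adapt(k, frame_ms, budget_ms=33.0):
    """Grow the stride when a frame went over budget, shrink it again
    when the frame was comfortably under budget."""
    if frame_ms > budget_ms and k < 8:
        return k + 1
    if frame_ms < budget_ms / 2 and k > 1:
        return k - 1
    return k
```

Measuring each frame and feeding the time back into `adapt` gives the "automatic adaptation based on computing time" from the list above.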


   - the following URL shows an interesting UI for cage-based deformation



   - the handles can be moved as a group (selected with a rectangle or
   otherwise)
   - the handles, or a group of handles, can be moved with hotkeys, in a
   similar way to Blender (R to rotate, M to move, S to scale)
   - the handles can be moved by clicking on any part of the image that is
   not a handle; the influence on each handle is computed as a function of
   the distance to the cursor. See
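That last idea, moving handles with an influence based on the distance to the cursor, might look like this (the linear falloff and the radius are arbitrary choices for the sketch):

```python
import math

def drag_handles(handles, cursor, delta, radius=50.0):
    """Move every handle by `delta`, scaled by a falloff based on its
    distance to the cursor; handles beyond `radius` do not move."""
    out = []
    for hx, hy in handles:
        d = math.hypot(hx - cursor[0], hy - cursor[1])
        f = max(0.0, 1.0 - d / radius)
        out.append((hx + f * delta[0], hy + f * delta[1]))
    return out
```

A smoother kernel (Gaussian, for instance) could replace the linear falloff without changing the interface.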
Gimp-developer mailing list
