Hi everyone, and thanks for the replies.  Most wavelet work seems to have 
concentrated on the mathematical structure of the wavelets themselves, in 
one dimension (for instance, orthogonal vs. continuous, lifting vs. 
frames, etc.).  I guess this comes from thinking about how the Fourier 
transform is extended to higher dimensions: in an image, say, you 
transform along the x axis, then along the y axis.  This is mathematically 
consistent with a Cartesian arrangement of the data in an image, but a 
Cartesian arrangement usually has nothing to do with the arrangement of the 
scene contrast elements contained in the image - the Cartesian arrangement 
is just a human imposition of structure where it doesn't really exist.  
Blah, blah (sorry, I know I talk too much).   The "separability" comes 
from the separability of the transform into the x and y (say) coordinates.
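To make that separability concrete, here is a minimal sketch (my own toy illustration, not from any particular paper): a single-level 2D Haar transform built entirely from a 1D step applied along every row and then every column, in NumPy.

```python
import numpy as np

def haar_1d(x):
    """Single-level 1D Haar step: scaled averages, then differences."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass half (averages)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass half (differences)
    return np.concatenate([a, d])

def haar_2d(img):
    """Separable 2D transform: 1D Haar along rows, then along columns."""
    rows = np.apply_along_axis(haar_1d, 1, img)   # transform each row
    return np.apply_along_axis(haar_1d, 0, rows)  # then each column

# The transform is orthogonal, so total energy is preserved.
img = np.random.default_rng(0).normal(size=(8, 8))
coeffs = haar_2d(img)
```

The x and y directions are baked into the construction: the four subbands (LL, LH, HL, HH) are aligned with the pixel axes, which is exactly where the axis-dependent artifacts discussed below come from.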

Anyway, the result of this is artifacts - for instance, in a 2D Daubechies 
wavelet transform of a smooth circle, the scene contrast elements at 45 
degrees to the pixel coordinate system are "enhanced."

Ridgelets and curvelets, as well as things like matching pursuit and 
entropy treeing, are ways to partly get around these artifacts - but since 
they are still built on the coordinate grid, you can't really get rid of 
them.  Two fixes seem to be buried deep in the literature: (1) 
constructing wavelets of the same dimension as the dataset (true 2D 
wavelets for images) - wavelets that are not simple combinations of 1D 
wavelets; and (2) performing "tricks" with the underlying grid (at least, 
that's how I understand it) - the "quincunx" grid transforms rotate and 
decimate the grid so there's no preferred direction in, say, a 2D 
transform.  I think these two may actually be different aspects of the 
same thing.
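A minimal sketch of the quincunx decimation step (again just a toy illustration of the idea): instead of subsampling by 2 along each axis separately, keep one checkerboard coset of the pixels. That halves the sample count and corresponds to the sampling matrix D = [[1, 1], [1, -1]], which rotates the grid by 45 degrees and scales it by sqrt(2) - so neither image axis is preferred.

```python
import numpy as np

def quincunx_downsample(img):
    """Keep the checkerboard coset where (row + col) is even.

    This is decimation on the quincunx sublattice generated by
    D = [[1, 1], [1, -1]] (|det D| = 2): the retained sites form a
    grid rotated 45 degrees and scaled by sqrt(2), treating the two
    diagonal directions symmetrically instead of favoring x or y.
    """
    i, j = np.indices(img.shape)
    mask = (i + j) % 2 == 0
    return img[mask], mask

img = np.arange(16.0).reshape(4, 4)
kept, mask = quincunx_downsample(img)
```

Iterating this step (with a filter in between) alternates between the original and the rotated lattice, which is how quincunx filter banks avoid building a preferred direction into the decomposition.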
