For some time I have been playing around with local equalization of
images but as camera resolution has improved the problem has become
appreciably worse.

I have what currently appears to be a fairly efficient equalization
verb ... that is not so much my problem. What I would like to find is
a more efficient general approach to my overall process.

A typical image is now approximately 3000 x 4000 pixels; if I am solely
looking at (8 bit) intensity then I am working with a 12 MB file.

The verb runs very fast on a small core/frame but (of course) slows
down progressively as the frame expands and I read and write to a
larger area. What I am looking for is a core roughly 25% of the
image's linear dimensions (say roughly 800 x 800). This size is the
crux of my problem.

I am currently processing on a line-by-line, col-by-col basis, which
means that many pixels may be processed as many as 640000 (800 x 800)
times as the frame passes over them. Processing this way, at this
size, is simply impractical on my hardware.

I know that some work on this problem has been done involving
pre-evaluating histograms and storing data on a line-by-line and
col-by-col basis prior to the actual calculation. I would be
interested to know if anybody has experience with this, or with
solutions to similar problems.
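For what it's worth, the incremental scheme I have seen described (usually credited to Huang's fast median filter) keeps one 256-bin histogram for the current window and, on each one-pixel move, subtracts the column that left and adds the column that entered, so the per-pixel cost drops from O(k^2) to O(k). A rough sketch of the bookkeeping, in Python/NumPy rather than J and with details of my own:

```python
import numpy as np

def local_equalize(img, radius):
    """Local histogram equalization with an incrementally updated
    window histogram.  Sliding the window one pixel changes only one
    leaving column and one entering column, so each step costs
    O(window height) rather than O(window area)."""
    h, w = img.shape
    k = 2 * radius + 1                       # window edge length
    pad = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img)
    for i in range(h):
        rows = pad[i:i + k, :]               # the k padded rows under the window
        # build the full histogram only for the first window of the row
        hist = np.bincount(rows[:, :k].ravel(), minlength=256).astype(np.int64)
        for j in range(w):
            if j > 0:
                # slide right: drop the column that left, add the one that entered
                np.subtract.at(hist, rows[:, j - 1], 1)
                np.add.at(hist, rows[:, j + k - 1], 1)
            v = img[i, j]
            rank = hist[:v + 1].sum()        # pixels in window <= centre value
            out[i, j] = rank * 255 // (k * k)
    return out
```

A further refinement along the lines of the line-by-line/col-by-col storage mentioned above keeps one histogram per image column and merges/splits them as the window moves down a row, so the row-to-row update is also cheap per bin.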

David

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
