Elphel Development Blog
October 26, 2010 11:08 PM

"Zoom in. Now... enhance."

Elphel cameras provide raw image data, isn’t that enough?

Until recently we never bothered to do anything about image post-processing – we believed our job was done when we provided our customers with the ability to acquire "raw" images from the camera. And so far that worked OK for everybody, but when we built the new Eyesis camera we realized that it is not the case any more. We tried to develop the best hardware in its class, but some of our customers were not impressed by the raw images we posted. I tried to explain that the images are really raw – in that mode we un-apply all the in-camera gamma correction (used to match the output encoding to the sensor noise performance) and color balance, so the decoded JP4 images are virtually what the sensor pixels provide, with none of the sharpness enhancement that most cameras apply to the output data. But my explanations did not seem to persuade them, and that was a valuable hint for us – maybe we should really do something about image post-processing?

As sensors improve faster than lenses, software image correction becomes more important

There were several factors suggesting that it would not be a waste of time to do a job for one particular application (generally we try to stay clear of getting too deep into any particular area, because we believe that the GPL-ed products provide enough freedom to our users that they can build a lot of nice derivative products on their own, without any involvement from us whatsoever). The most important factor is that as sensor resolution increases, sensors have already got ahead of lens resolution. Of course, there are different ways to specify lens resolution (and most manufacturers just limit that data to "megapixel", "multi-megapixel", "super-megapixel" and the like; more detailed data is often unavailable). But our experience shows that for small-format sensors like the ones we use (1/2.5 inch Aptina MT9P031, 5 MPix) even the best lenses can match full sensor resolution (and modern de-mosaic algorithms can use it all) only in the center of the image area, especially for wide-angle optics. Such unequal lens resolution is common, and for many applications it is tolerable – the focus of interest is normally in the center part of the image, and some blurring of the peripheral areas is OK (or even desirable). But panoramic applications require the same good (angular) resolution over most of the image area – there is no single center in a 360-degree stitched image – so it would be nice to correct aberrations, especially in the worst parts of the image. Just applying uniform "sharpening" to the whole image would not help much: the required correction varies over the area, being very small (if any) in the center and gradually increasing towards the edges, and it has to account for the anisotropy of the aberrations that is usually present in the off-center areas. And we can anticipate that such image post-processing will become even more important when we move from the 5 MPix sensors we currently use to higher-resolution ones.

As the sensor resolution gets ahead of that of the lens, the focus in image post-processing shifts from the de-mosaic algorithms that are made to "guess" the missing colors (i.e. the red value at the location of a green pixel) to programs that try to correct lens flaws in software. In some cases the lens aberrations (like axial chromatic aberration) are even put to work: by transferring the sharpness of the one primary color that is in focus at a given distance to the remaining two, camera manufacturers are able to implement digital focal distance adjustment that replaces the mechanical focus ring in mobile phone cameras.

What are we trying to correct? Aberration vs. distortion

There are different effects responsible for images being not as good as the originals, but here we focus only on the optical aberrations that make images unsharp, blurred, or striped with color bands along the edges – not the optical distortion that makes objects have wrong shapes (i.e. straight lines becoming curved), even though the linked Wikipedia article calls distortion "a form of optical aberration". Distortions are handled in a different way and are generally much easier to correct in post-processing software, so when selecting a good lens, distortion itself is not a problem (some kinds of distortion are even useful, and so desirable, for particular applications).

Universal way to handle image aberrations in the system

Measuring the lens performance and un-applying the aberrations is an old trick – one of the best known applications was the "glasses" made for the Hubble Space Telescope to compensate for the flaw in its main mirror. It is also commonly used in microscopy, where in most cases it is impossible to perfectly focus on the object because the depth of field is very small.

Using such a method with consumer cameras is difficult for several reasons: lenses are interchangeable and have an iris and adjustable focus, and each of these factors influences the overall aberrations of the system and would require re-measuring the aberrations, which need to be known with great precision – otherwise the "correction" can in fact make images even worse. An additional challenge is presented by the small pixel "full well capacity" (caused by the small physical pixel size), which leads to relatively large pixel shot noise that is unavoidably amplified when the aberration correction is applied.

On the other hand, when the lens is fixed, the focus is not adjusted and there is no movable iris in the lens (an iris has very little use for small-format high-resolution CMOS image sensors anyway, because of the diffraction limit and the very large depth of field), it is possible to calibrate each individual lens once and then apply the calculated correction to all subsequently acquired images.

PSF measurement

The first step in correcting the lens aberrations is to precisely measure the point-spread function (PSF) of the system in different areas of the image. Then, if there are no zeros in the spatial frequency range of interest, it is possible to invert the PSF and use the result to correct the image – at least to the extent limited by the PSF itself (no zeros: you cannot compensate, by multiplication, for a loss at frequencies that yield exactly zero), by the precision of the PSF measurement, and by the system noise (unfortunately rather high for sensors with small pixels). Measuring the PSF directly would require real point light sources – something that is common in astronomy, where stars are nearly perfect approximations of ideal point sources, with the additional advantage that their locations are precisely known. In our case we need something brighter (among other problems, the long integration of starlight would cause too much noise in the sensor if we tried to use stars). The small PSF size (still just a few pixels wide in the "bad" areas) combined with the high pixel noise requires a different kind of test target.
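
To illustrate what "inverting the PSF" amounts to in practice, here is a minimal NumPy sketch of a Wiener-style inverse filter (my own illustration, not our actual code); the constant k stands in for the noise-to-signal ratio and keeps the frequencies where the PSF response is near zero from amplifying the noise without bound:

    import numpy as np

    def wiener_correct(image, psf, k=1e-2):
        # Pad the PSF to the image size and center it at the origin,
        # so it acts as a shift-free filter
        h = np.zeros_like(image, dtype=float)
        h[:psf.shape[0], :psf.shape[1]] = psf
        h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(h)
        # Wiener filter: conj(H) / (|H|^2 + k) instead of a bare 1/H -
        # where |H| is small the gain is damped instead of exploding
        W = np.conj(H) / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * W))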

What should the test target look like? The PSF is calculated by deconvolving the image data with the ideal test pattern: Fourier transforms of both are calculated, then the result of the first (the image) is divided by that of the second (the test pattern), and an inverse Fourier transform is performed. For this operation to be successful the test pattern needs to meet the following requirements:

  1. It should include all the spatial frequencies of interest
  2. The pattern should be easy to detect in the image even when geometric distortions are present
  3. The period of the pattern (if it is periodic) should be large enough that the measured PSF instances do not overlap
  4. The period of the pattern should be small enough that the PSF measurement grid (the image plane points at which the aberrations are measured) has sufficient resolution
  5. The phases at which the pattern crosses the pixel grid should be uniformly distributed, so that sub-pixel resolution can be achieved

The first requirement is needed because otherwise we would have to divide zeros by zeros, or at least very small numbers by very small numbers, with the results being buried in the noise. It is OK to have some zeros, but the non-zero values should cover most of the spectral area without large gaps.
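
As a sketch of the deconvolution step itself (again an illustration under my own assumptions, not the actual implementation), the division can be regularized so that the frequencies where the model spectrum is small – the "very small numbers divided by very small numbers" above – are suppressed instead of dominating the result:

    import numpy as np

    def estimate_psf(measured, model, eps=1e-3):
        # Remove the mean so the DC term does not dominate the spectra
        M = np.fft.fft2(measured - measured.mean())
        P = np.fft.fft2(model - model.mean())
        # Regularized spectral division: equivalent to M/P where |P| is
        # large, gracefully tending to zero where |P| is (near) zero
        H = M * np.conj(P) / (np.abs(P) ** 2 + eps * np.abs(P).max() ** 2)
        # The PSF is the inverse transform; fftshift puts its center
        # in the middle of the array for easier inspection
        return np.fft.fftshift(np.real(np.fft.ifft2(H)))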

The second requirement is important because in the calculations we need to know precisely the ideal image – what the sensor would register with no aberrations applied. Keeping a precise, known orientation of the target and the camera is not practical, the image is additionally subject to geometric distortions, and we need to know the undistorted image with sub-pixel resolution. With a pattern built on a regular periodic grid it should not be too difficult for the software to "recognize" the pattern even in the presence of geometric distortion and to locally compensate that distortion, so the simulated (model) image matches the measured one to a fraction of a pixel – and stays so over a large enough area to make reliable measurements even in the presence of noise.
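
One crude but workable way to "recognize" such a grid – my simplification, not the actual plugin algorithm – is to locate the two strongest non-DC peaks of the tile's amplitude spectrum: whatever the local rotation and scale, they give the two wave vectors of the pattern, from which the local period and phase follow:

    import numpy as np

    def grid_wave_vectors(tile):
        n = tile.shape[0]
        # Window the tile to suppress edge effects in the FFT
        win = np.hanning(n)[:, None] * np.hanning(n)[None, :]
        spec = np.abs(np.fft.fftshift(np.fft.fft2(tile * win)))
        spec[n // 2, n // 2] = 0.0  # suppress the DC peak
        vecs = []
        for _ in range(2):
            iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
            vecs.append((iy - n // 2, ix - n // 2))  # wave vector, in FFT bins
            # Blank this peak and its mirror so the next argmax finds
            # the second (roughly orthogonal) grid direction
            for cy, cx in ((iy, ix), (n - iy, n - ix)):
                spec[max(cy - 2, 0):cy + 3, max(cx - 2, 0):cx + 3] = 0.0
        return vecs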

When using a periodic (or actually nearly periodic, as distortion breaks the exact periods) pattern and calculating the PSF, the result contains PSF clones repeating with the period of the grid itself, because (neglecting distortions) there is no way to tell which of the object grid cells was recorded as which pattern cell in the image. The "wings" of the PSF instances should therefore be far enough apart to prevent overlapping, and that defines the minimal period of the pattern grid.

On the other hand, we cannot make the cells too big, otherwise the PSF measurement itself will not have enough resolution – the PSF varies over the image area and we have to know it in all those locations, which puts an upper limit on the pattern grid period.
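
Putting the two constraints together with some made-up numbers (mine, not values from our measurements): if the PSF "wings" die out within about w pixels, the clones must be at least 2w apart, while the desired density of the PSF sampling points caps the period from above:

    # Illustrative numbers only
    w_psf = 12      # pixels: radius beyond which the PSF is negligible
    d_grid = 128    # pixels: desired spacing of the PSF measurement points

    t_min = 2 * w_psf   # smaller periods would make the PSF clones overlap
    t_max = d_grid      # larger periods would undersample the PSF variation
    assert t_min < t_max, "no period satisfies both constraints"
    print("usable pattern period: %d .. %d pixels" % (t_min, t_max))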

The last requirement is similar to the procedure described in ISO 12233 for measuring the spatial frequency response (SFR) with resolution higher than a pixel, by following a black/white edge that crosses the pixel grid at different distances from the pixel centers – a short description of such measurements is on the page about the SE-MTF ImageJ plugin.

I started with a slanted checker board pattern – it was easy to recognize in the registered image, and it was easy to generate a model with the same local geometric distortions. The local distortions were approximated by second-degree polynomials and measured by finding the pattern phases in half-sized areas inside the full selection – the full selection was 512×512 sensor pixels (256×256 for the individual Bayer components), so these were 128×128-pixel squares: one in the center and eight shifted by 64 pixels in the 8 directions.

[Figures: slanted checker board pattern; windowed checker board pattern; amplitude spectrum of the checker board pattern; filtered spectrum of the checker board pattern]

After working with such a pattern for some time I realized that it had a major flaw – the spectrum was very anisotropic: two lines forming a cross that extended to very high frequencies (even rolling over the margins), with very little energy between those directions. The net effect of such anisotropy was that the measured PSF also had cross-like artifacts. I first tried to compensate for them in the code, but that did not work well enough – it was possible to cut off the long legs of the cross, but boosting the gain between them led to increased noise and artifacts in the measured PSF.
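
The cross is easy to reproduce: a checker board is the product of two square waves, so its harmonics hug the two grid directions (falling off only slowly along them) with almost nothing in between. A minimal sketch, with arbitrarily chosen parameters:

    import numpy as np

    def slanted_checkerboard(n=512, period=32.0, angle_deg=5.0):
        y, x = np.mgrid[0:n, 0:n].astype(float)
        a = np.deg2rad(angle_deg)
        # Rotate the coordinates, then index the cells
        u = (x * np.cos(a) + y * np.sin(a)) / period
        v = (-x * np.sin(a) + y * np.cos(a)) / period
        return ((np.floor(u) + np.floor(v)) % 2).astype(float)

    tile = slanted_checkerboard()
    win = np.hanning(512)[:, None] * np.hanning(512)[None, :]
    # The amplitude spectrum shows the two long "legs" of the cross
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2((tile - 0.5) * win)))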

Then I tried to improve the pattern itself to reduce the anisotropy of its spectrum while keeping it simple to generate and detect in the images, preserving the "inversion" feature of the plain checker board: if the pattern is shifted by one cell in either direction it becomes its own negative – white cells are replaced by black ones of the same shape and vice versa. The new pattern was generated by replacing the straight sides of the squares with pairs of equal arcs, so the squares had circular segments added/removed. Such patterns worked much better – the calculated PSF did not have the artifacts without any additional tweaking, and at the same time even the unmodified code could recognize the pattern in the image and calculate the grid parameters to generate a simulated model matching the measured data.
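
For illustration, here is a simplified stand-in for that pattern – a sinusoidal warp of the cell borders instead of the actual circular arcs, an approximation I chose because it is compact, not what our generator does. Because the warp is periodic with the cell grid, shifting by one cell still inverts the pattern, while the curved borders spread the spectral energy away from the cross:

    import numpy as np

    def curved_checkerboard(n=512, period=32.0, bulge=0.25):
        y, x = np.mgrid[0:n, 0:n].astype(float)
        u = x / period
        v = y / period
        # Bend each cell border; the warp has the same period as the
        # grid, so shifting by one cell still flips black and white
        uc = u + bulge * np.sin(2 * np.pi * v)
        vc = v + bulge * np.sin(2 * np.pi * u)
        return ((np.floor(uc) + np.floor(vc)) % 2).astype(float)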

[Figures: curved pattern; windowed curved pattern; amplitude spectrum of the curved pattern; filtered spectrum of the curved pattern]
