Hello,

First, let me introduce our use case: we are using Elphel cameras (353 + Computar 4-8mm 1/2") and want to capture video at maximum resolution (> Full HD) at 25 fps.
Our current problem is image quality when zooming in on details (blurred images); in other words, we are trying to improve rendering quality as much as possible to get cleaner images. In this context, last year we implemented http://code.google.com/p/gst-plugins-elphel/ , but changing only the debayering algorithm did not improve quality enough for our application (at least not relative to the processing overhead).

I was wondering about the method described in the excellent article "Zoom in ... now enhance":

- Are there any specifics that make the method Eyesis-only (my guess is that it is not)?
- Regarding the calibration, which factors are invariant? Is calibration required for:
  - every camera model/generation (depending on camera/sensor manufacturing design/process variations)?
  - every lens model?
  - every lens setting (zoom level / focus / iris ...)?
  - climatic condition changes (temperature, ...)?
  The hidden question behind this is: how can this technique be used in production?
- For a given camera/lens combination, could a public database of tuning data reduce the calibration requirement (similar to A-GPS, which downloads correction data from the network to improve performance on low-quality reception and/or chips: http://en.wikipedia.org/wiki/Assisted_GPS)?
- Is there any hope of having such a feature integrated (in the long term) into the camera itself (i.e. grabbing an MJPEG stream whose corrections were applied right before encoding)?

Thanks
Florent
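For context, the kind of debayering we experimented with can be sketched roughly as follows. This is a minimal, illustrative bilinear demosaic in pure Python, not the actual gst-plugins-elphel code; the function name, the RGGB layout assumption, and the naive 3x3 averaging are all my own simplifications:

```python
def debayer_bilinear(mosaic):
    """Naive bilinear demosaic of a raw RGGB mosaic.

    mosaic: 2D list of raw sensor values (RGGB Bayer pattern).
    Returns a 2D list of (r, g, b) tuples of the same size.
    """
    h, w = len(mosaic), len(mosaic[0])

    def color_at(y, x):
        # RGGB layout: even row/even col = R, even/odd = G,
        # odd/even = G, odd/odd = B.
        if y % 2 == 0:
            return 'R' if x % 2 == 0 else 'G'
        return 'G' if x % 2 == 0 else 'B'

    def avg(y, x, c):
        # Average every sample of color c in the 3x3 neighborhood
        # (clamped at the image borders).
        vals = [mosaic[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))
                if color_at(j, i) == c]
        return sum(vals) / len(vals)

    return [[(avg(y, x, 'R'), avg(y, x, 'G'), avg(y, x, 'B'))
             for x in range(w)]
            for y in range(h)]
```

Even fancier interpolation along these lines only reduces Bayer-interpolation artifacts; it cannot recover detail lost to lens aberrations, which is why we are interested in the calibration-based approach from the article.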
_______________________________________________ Support-list mailing list [email protected] http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com
