Re: [Elphel-support] Flash on a two camera system

2011-04-13 Thread Andrey Filippov
Andreas,

Why do you need a pulse _before_ the trigger? With this sensor, exposure of the
first line starts 8 scan lines after the trigger, and the last line's exposure
starts 1/15 sec later. So if you want to use a flash, you need to set the
exposure time to 1/15 sec and trigger the flash 1/15 sec after the trigger.
The FPGA has a delay timer. When the camera is triggered externally, this
programmable delay is applied between the arrival of the sync pulse and the
sensor triggering. When the camera is set to trigger directly from the FPGA
(trigger condition = 0), the sensor is triggered immediately, and the
output sync is delayed by the programmed value - you may use it for the
flash. Unfortunately, this works for a single camera only.
If you still need a pulse before the sensor trigger, you may just reverse the
trigger source: use an external pulse, apply it to multiple cameras in
parallel - each camera can be individually delayed by programming its
TRIG_DELAY parameter.
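
To make the timing concrete, here is a small sketch (not Elphel firmware - the 1/15 sec sweep and the 5 ms exposure figure come from this thread, the helper names are mine) of when a short flash must fire and how long a continuous LED must stay on:

```python
# Rolling-shutter timing sketch. With this sensor, row exposure start sweeps
# from the first to the last row over ~1/15 s after the trigger.

FRAME_SWEEP_S = 1.0 / 15.0  # first-row to last-row exposure-start delay


def short_flash_time_s(exposure_s):
    """Earliest instant (after the trigger) a short flash can fire so that
    every row is exposing: when the last row begins its exposure. This
    requires the exposure to span the whole sweep."""
    if exposure_s < FRAME_SWEEP_S:
        raise ValueError("exposure must be >= the frame sweep time")
    return FRAME_SWEEP_S


def led_on_s(exposure_s):
    """Total LED 'on' time needed so every row is lit for its full exposure
    when a continuous LED, rather than a discharge flash, is the light
    source: the sweep time plus the per-row exposure."""
    return FRAME_SWEEP_S + exposure_s


# A 5 ms per-row exposure needs the LED on for ~72 ms in total.
assert abs(led_on_s(0.005) - (1.0 / 15.0 + 0.005)) < 1e-12
```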

Andrey
___
Support-list mailing list
Support-list@support.elphel.com
http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com


Re: [Elphel-support] Flash on a two camera system

2011-04-13 Thread Andreas Bean
Andrey,

In fact we are developing an LED light, not a discharge flash, so the
light isn't delivered in a short burst. You say the exposure of the
last scanline starts 67 ms after the first line. I'm trying to get the
light intensity high enough for an exposure time of 5 ms. Does this mean
that the LEDs have to be turned on for 72 ms?

Additionally, I don't know how long they need to reach full
intensity; that's why I asked if it's possible to get the signal before
the trigger.
How would you connect the LED driver circuit to the camera?

Does it matter that the signal is only for one camera? They are mounted
close together, so the signal can be used for both, can't it?

Do consumer digital cameras also have this delay between the exposure of
the first and last line? Does that mean that during fast movement the
first and last lines in the picture differ significantly?

Andreas






Re: [Elphel-support] Flash on a two camera system

2011-04-13 Thread Sebastian Pichelhofer
On Wed, Apr 13, 2011 at 21:03, Andreas Bean off...@beanbox.com wrote:
 Andrey,
 [...]
 Do consumer digital cameras also have this delay between the exposure of
 the first and last line? Does that mean that during fast movement the
 first and last lines in the picture differ significantly?

ERS (Electronic Rolling Shutter) has been used in all consumer as well
as professional CMOS-based camera systems to date.

Regards Sebastian



Re: [Elphel-support] Flash on a two camera system

2011-04-13 Thread Andrey Filippov
Andreas,

That depends on what you are trying to capture. If ERS distortion is not a
problem in your case, and the LED is just to provide more light, not to limit
the exposure - yes, you can do that (the same as if the LEDs were always ON
for the camera). If, on the other hand, you need the LED to act as a snapshot
shutter, then you have to reduce the ambient light or increase the LED
brightness and simultaneously add neutral-density filters to the camera lens.
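
As a rough illustration of why the ambient light has to be cut (hypothetical numbers and helper names, not camera measurements):

```python
# LED-as-snapshot-shutter sketch. Each row's electronic exposure stays open
# for the whole ~72 ms LED window, so ambient light integrates over all of
# it, while the LED only matters during the intended 5 ms effective
# exposure per row.


def ambient_to_led_ratio(electronic_exposure_s, led_exposure_s,
                         ambient_level, led_level):
    """Relative per-row light contribution of ambient vs. the LED
    (arbitrary intensity units)."""
    return (ambient_level * electronic_exposure_s) / (led_level * led_exposure_s)


# Equal ambient and LED intensity, 72 ms vs. 5 ms: ambient dominates 14.4x.
r = ambient_to_led_ratio(0.072, 0.005, ambient_level=1.0, led_level=1.0)

# The fix described above: an ND filter on the lens cuts both terms equally,
# then extra LED power restores only the LED term. E.g. 8x attenuation plus
# 8x LED brightness leaves the LED signal unchanged and ambient down 8x.
r_fixed = ambient_to_led_ratio(0.072, 0.005, ambient_level=1.0 / 8, led_level=1.0)
```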

Andrey

On Wed, Apr 13, 2011 at 1:32 PM, Andreas Bean off...@beanbox.com wrote:

 Andrey,

 I can't set the exposure time anywhere near 67 ms. We have difficult lighting
 conditions. For example, we have an indoor room with a bright window
 where the camera is moving.
 Setting the exposure time to 72 ms would give me a blurred image of the
 window. Everything else may be sharp, due to the fact that it is only
 lit for 5 ms.
 Is the only option to set the exposure time to 5 ms and turn the LEDs on
 for 72 ms?

 Andreas

 Andrey Filippov schrieb:
  Sebastian,
 
  Most consumer cameras with ERS have an additional mechanical shutter - it
  closes 1/15 sec (if they use the same sensor) after the sensor starts
  exposing the first line, and readout starts after the shutter closes. When
  using a bright LED, the LED "on" state is virtually the same as the
  mechanical shutter being open.
 
  Andrey
 
  ERS (Electronic Rolling Shutter) has been used in all consumer as well
  as professional CMOS based camera systems to date.
 




Re: [Elphel-support] Questions regarding zoom in ... now enhance blog post

2011-04-13 Thread Andrey Filippov
On Tue, Apr 12, 2011 at 7:28 AM, Florent Thiery
florent.thi...@ubicast.eu wrote:

 Hello,

 First, please let me introduce our use case: we are trying to use Elphel
 cameras (353 + Computar 4-8mm 1/2) to get maximum-resolution (FullHD), 25
 fps video.

 Our current problems are about image quality when zooming in on details
 (blurred images); in other words, we are trying to improve the rendering
 quality as much as possible to get cleaner images. In this context, last
 year we implemented http://code.google.com/p/gst-plugins-elphel/ but only
 changing the debayering algorithm did not improve the quality enough for our
 application (at least not when compared to the processing overhead).

 I was wondering about the method described in the awesome article "Zoom in
 ... now enhance":

- are there any specifics about the method being for Eyesis only (my
guess is it's not)?

 Florent, sure, it can be used with a single camera too. But it may be too
slow for processing videos - it currently takes 2-3 minutes/frame on an i7
with 8 GB of RAM (in multi-threaded mode).


- regarding the calibration, what are the invariable factors? Is the
calibration required for:
   - every camera model/generation (depending on camera/sensor
   manufacturing design/process variations) ?
   - every lens model (depending on lens model) ?
   - every lens tuning (zoom level / focus / iris ...) ?
   - climatic condition changes (temperature, ...) ?

 This was designed with everything fixed (though we did not notice
degradation with changing temperatures). But there is a significant
difference between lenses, even of the same model. The lenses we used do not
have zoom, and an iris does not make much sense for such lenses: with 2.2 um
pixels you cannot stop the lens down beyond ~f/4-5.6 because of diffraction,
small sensors do not provide much control over DoF, and using the iris to
limit light is not really needed with ERS sensors - they can handle very
short exposures perfectly. And of course, the focus setting would influence
the results too, as well as the lens orientation.
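
The f/4-5.6 figure can be sanity-checked with the standard Airy-disk formula (this is my arithmetic with an assumed green wavelength, not a computation from the article):

```python
# Airy-disk diameter vs. a 2.2 um pixel pitch, to sanity-check the
# ~f/4-5.6 diffraction limit. Standard formula: d = 2.44 * lambda * N.

WAVELENGTH_UM = 0.55  # ~green light, roughly the middle of the visible band
PIXEL_UM = 2.2


def airy_diameter_um(f_number):
    """Diameter of the Airy disk (out to the first dark ring), in microns."""
    return 2.44 * WAVELENGTH_UM * f_number


for n in (2.8, 4.0, 5.6, 8.0):
    d = airy_diameter_um(n)
    print(f"f/{n}: Airy disk {d:.1f} um = {d / PIXEL_UM:.1f} px")

# At f/4 the blur spot already exceeds two 2.2 um pixels, and at f/5.6 it
# spans more than three, so stopping down further costs resolution.
```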


- the hidden question behind this is: how can this technique be used in
production?


Working with wide-angle lenses (45x60 degrees FOV) we do not have enough
room to capture the test pattern in a single shot, so the software is able to
combine multiple shots where the test pattern covers just a fraction of the
frame. We did not work on optimizing the computational time of the
calibration, so it takes several hours to process data from 8 camera modules.
The current calibration is only designed for aberration correction, but I'm
now working on precise distortion calibration (with ~1/10 pixel precision).
We plan to use it for panorama stitching, and it can also be used for making
measurements with the camera. This calibration will use the same pattern we
use for aberration correction, just at closer range, so the pattern will
cover the whole FOV (minor defocus is not a problem here).


- For a given camera/lens combination, could a public database of
   tuning data reduce the calibration requirement (in a similar fashion to
   A-GPSes which download correction data from the network to increase
   performance on low-quality reception and/or chips
   http://en.wikipedia.org/wiki/Assisted_GPS) ?

 Our goal was to have precise individual lens correction; we did not
experiment with correcting a generic lens model - it probably is possible,
but with less correction, of course. The software has multiple tuning
parameters, so it should be possible to do that.


- is there a hope of having such a feature (in the long term)
integrated in the camera itself (i.e. grabbing an MJPEG stream which had
the corrections applied right before the encoding)?


Not in the near future, at least. We now rely heavily on post-processing;
the camera's role is just to capture everything the sensor can provide, in
as raw a form as possible.

Andrey


 Thanks

 Florent
