Hi,

Thank you for the information you provided; it was very useful, and we
will use the imgsrv tool. Now we would like to ask you about the camera
settings.

We captured images with two cameras and got different timestamps for each
frame. Could this depend on the trig_cond and trig_out parameters (both
currently set to 0)? We have trig = 4 mode with trig_period = 4800000 in
both cameras. In the master camera external trig = 0 and in the slave it
is 1; trig_delay is 0 in both.

Which parameters would we have to modify to get the same timestamps from
the two cameras at a frame rate of, for example, 25 fps?
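
If I understand correctly that trig_period is counted in 96 MHz
pixel-clock cycles (an assumption on our side, based on the 96 MHz figure
in your mail below), our current setting and the 25 fps target would work
out as follows:

    PIXEL_CLOCK_HZ = 96_000_000        # 96 MHz pixel clock, per your mail below

    print(4_800_000 / PIXEL_CLOCK_HZ)  # current trig_period -> 0.05 s, i.e. 20 fps
    print(PIXEL_CLOCK_HZ // 25)        # 3840000 -> trig_period for 25 fps?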

Kind regards,

Jennifer



2016-01-11 3:13 GMT+01:00 support-list <support-list@support.elphel.com>:

> Jennifer,
>
> In our records I found that your cameras all have 10369 I/O boards,
> ordered specifically for synchronization of the cameras, so I suppose you
> or others on your team know how to use them.
>
> Cameras should be placed in triggered mode, where each camera performs as
> two independent devices: one just generates periodic trigger pulses (with
> attached timestamps), and the other (the camera itself) waits for the
> trigger and then initiates the frame acquisition sequence (so if individual
> cameras have different exposures, the _beginning_ of each exposure will
> match, not the center). The synchronization cable wiring defines the
> "master": the output pair of wires on its connector is soldered to all the
> input pairs in parallel (including the input on the master itself). That
> means all the cameras can be programmed identically as "masters"; the
> outputs of all but the real "master" will simply not be used.
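>
> For illustration, here is a minimal sketch of programming every camera
> identically over HTTP. It assumes the parsedit.php interface of the
> 353-series firmware; the TRIG_CONDITION and TRIG_OUT values are left as
> placeholders, since the correct ones depend on the 10369 wiring:
>
>     import urllib.request
>
>     CAMERAS = ["192.168.0.9", "192.168.0.10"]   # hypothetical camera IPs
>
>     # Identical triggered-mode settings for every camera; TRIG_CONDITION
>     # and TRIG_OUT are placeholders - set them to match the 10369 wiring.
>     params = "TRIG=4&TRIG_PERIOD=3840000&TRIG_DELAY=0&TRIG_CONDITION=0&TRIG_OUT=0"
>
>     for ip in CAMERAS:
>         url = "http://%s/parsedit.php?immediate&%s" % (ip, params)
>         urllib.request.urlopen(url).read()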
>
> The electronic rolling shutter sensors have two pixel pointers running (in
> scanline order): an erase pointer and a readout pointer, and the delay
> between the erase pointer and the readout one defines the exposure time. In
> free-running (not synchronized) mode the pointers can be in different
> frames: while the readout is in frame #0, the erase pointer may already be
> in frame #1 (the next one). The longest exposure time in that mode is just
> the frame period - as soon as a pixel is read out, it is erased and
> exposure starts for the next frame.
>
> In triggered mode this is not possible, so when the trigger arrives the
> sensor starts erasing line by line, starting from the first line. One
> exposure time later the readout pointer starts, and erasing of the next
> frame cannot start until the full frame is read out. This makes the minimal
> frame period (the value programmed into the FPGA sync pulse generator)
> equal to the sum of the exposure time (or the maximal anticipated exposure
> time if you use the default autoexposure mode) and the frame readout time.
>
> Sensor readout time can be calculated using the sensor datasheet available
> for download from the ON Semiconductor web site. The pixel clock is 96 MHz
> (96 MPix/s). This time includes not just the pixel readout, but also the
> vertical (rather small) and horizontal (large) blanking.
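>
> As a worked example of that arithmetic (the resolution and blanking
> values below are illustrative placeholders - take the real ones from the
> datasheet):
>
>     PIXEL_CLOCK_HZ = 96_000_000        # 96 MHz pixel clock
>
>     # Illustrative geometry - substitute the datasheet numbers.
>     width, height = 2592, 1944         # active pixels (full 5 MPix frame)
>     hblank, vblank = 900, 8            # placeholder blanking values
>
>     readout_s = (width + hblank) * (height + vblank) / PIXEL_CLOCK_HZ
>     exposure_s = 0.010                 # maximal anticipated exposure, e.g. 10 ms
>
>     min_period_s = exposure_s + readout_s   # minimal triggered frame period
>     print(readout_s, min_period_s, 1 / min_period_s)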
>
> The FPGA processes 53.33 MPix/s in JPEG mode and 80 MPix/s in JP4 mode (we
> recommend this mode, which provides you with raw Bayer mosaic data), and
> the FPGA has the full frame period to compress the image - there is no
> "blanking", so for the FPGA the minimal synchronization period is just
> slightly above the total number of pixels divided by the FPGA pixel rate
> (53.33M or 80M). JP4 mode is always faster than the sensor; JPEG is slower
> for large frames, but faster for the smaller ones.
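>
> For example, for a full 5 MPix frame the compression times come out as:
>
>     pixels = 2592 * 1944               # ~5.04 MPix per frame
>
>     print(pixels / 80_000_000)         # JP4:  ~63 ms per frame
>     print(pixels / 53_330_000)         # JPEG: ~94.5 ms per frame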
>
> Another frame-rate-limiting factor is the network bandwidth: while the
> sensor readout and FPGA processing do not depend on the compression
> quality, the bandwidth certainly does. Sending images over 100 Mbps
> Ethernet provides approximately 10 MB/s of data. If you are trying to push
> as much data as you can, you can tune the image quality by acquiring a test
> image of the scene and looking at the file size - the maximal frame rate
> will be just 10 MB/s divided by the image size.
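>
> A quick sanity check of that rule of thumb (the image size is a
> placeholder - measure your own test image):
>
>     bandwidth = 10_000_000             # ~10 MB/s over 100 Mbps Ethernet
>     image_size = 350_000               # placeholder: test image size, bytes
>
>     print(bandwidth / image_size)      # ~28.6 fps -> 25 fps fits with margin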
>
> 10 MB/s is achieved if the network can handle 100 Mbps from each camera.
> As you have 16 of them, you will need to split them into 2 groups (if you
> are using GigE switches). On the host PC you need to run either a
> multi-threaded application or simply one script (reading from imgsrv) per
> camera simultaneously - see the sketch below.
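>
> A minimal per-camera reader sketch, assuming imgsrv listens on its
> default port 8081 and that "towp/wait/img" waits for and returns the next
> frame (check the imgsrv command documentation for your firmware):
>
>     import threading, urllib.request
>
>     CAMERAS = ["192.168.0.%d" % n for n in range(9, 25)]  # hypothetical IPs
>
>     def reader(ip, frames=100):
>         # One independent loop per camera, so a slow camera never stalls
>         # the others.
>         for i in range(frames):
>             data = urllib.request.urlopen("http://%s:8081/towp/wait/img" % ip).read()
>             with open("%s_%05d.jp4" % (ip, i), "wb") as f:
>                 f.write(data)
>
>     threads = [threading.Thread(target=reader, args=(ip,)) for ip in CAMERAS]
>     for t in threads: t.start()
>     for t in threads: t.join()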
>
> This will provide you with sets of precisely timestamped images: each
> channel will have images with exactly the same timestamp value, so you
> should not have any problems matching images from different channels.
>
> There are multiple programs available to process the JP4 format (and
> perform client-side demosaicking) - in C, Java, and even JavaScript code
> that converts images using just the HTML5 canvas.
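>
> Once the JP4 pixel rearrangement is undone you are left with a plain
> Bayer mosaic; here is a minimal bilinear demosaic sketch over such an
> array (the GRBG pattern is an assumption - check your sensor's actual
> Bayer order):
>
>     import numpy as np
>     from scipy.signal import convolve2d
>
>     def demosaic_grbg(raw):
>         # raw: 2-D Bayer mosaic, GRBG order assumed (G R / B G).
>         h, w = raw.shape
>         r, g, b = (np.zeros((h, w)) for _ in range(3))
>         g[0::2, 0::2] = raw[0::2, 0::2]    # green on the red rows
>         g[1::2, 1::2] = raw[1::2, 1::2]    # green on the blue rows
>         r[0::2, 1::2] = raw[0::2, 1::2]
>         b[1::2, 0::2] = raw[1::2, 0::2]
>
>         # Bilinear interpolation kernels for the missing samples;
>         # borders darken slightly due to zero padding.
>         k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
>         k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
>         return np.dstack([convolve2d(r, k_rb, mode="same"),
>                           convolve2d(g, k_g,  mode="same"),
>                           convolve2d(b, k_rb, mode="same")])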
>
> Andrey
>
>