Re: [Elphel-support] Sensor Synchronization and Memory

2018-01-18 Thread Oleg
Fabjan,

> I already reviewed and tested the driver and it works well. For converting
> the raw data we use OpenCV with numpy. Feel free to copy the snippet below
> to your wiki page. If there are any problems or open questions I will
> write you an e-mail.
>
Thanks, I added your code to that wiki page.

I made a few changes which are now available at
https://community.elphel.com/files/393/20180118/

Raw image info is accessible in sysfs. Examples are in demo scripts:
a. /usr/bin/raw.py - demo for mmap
b. http://192.168.0.9/raw.php - demo for getting images via HTTP requests

The scripts do not interrupt triggering, so images may not be in sync
(different frame numbers).
Also note: if the exposure ever gets longer than the trigger period, the frame
numbers will diverge.

More info about the scripts:
https://wiki.elphel.com/wiki/Working_with_raw_image_data#Capturing_and_downloading

So, do you transfer raw images over the network?
I can add a small header to the data streamed by raw.php.
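
For reference, here is a rough sketch of reading such a stream on the client
side. It assumes raw.php currently sends the bare 8 bpp Bayer pixel bytes with
no header; the 2608x1940 window and the Bayer conversion code are taken from
your snippet below and may need adjusting:

#!/usr/bin/env python
# Sketch: fetch one raw frame from the camera over HTTP and demosaic it.
# Assumes raw.php streams bare 8 bpp Bayer data with no header (yet).
from urllib.request import urlopen   # Python 3; use urllib2.urlopen on Python 2
import numpy as np
import cv2

CAMERA_URL = "http://192.168.0.9/raw.php"
width, height = 2608, 1940            # sensor window, same as in the snippet below

raw = urlopen(CAMERA_URL).read()
if len(raw) < width * height:
    raise RuntimeError("short read: got %d bytes" % len(raw))

img = np.frombuffer(raw[:width * height], dtype=np.uint8).reshape(height, width)
colimg = cv2.cvtColor(img, cv2.COLOR_BAYER_GB2BGR)   # same Bayer code as below
cv2.imwrite("frame.jpeg", colimg)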

Best regards,
Oleg Dzhimiev
Electronics Engineer
phone: +1 801 783  x124
Elphel, Inc.
___
Support-list mailing list
Support-list@support.elphel.com
http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com


Re: [Elphel-support] Sensor Synchronization and Memory

2018-01-18 Thread Fabjan Sukalia

Hello Oleg,

thank you for the driver. I already reviewed and tested the driver and 
it works well. For converting the raw data we use OpenCV with numpy. 
Feel free to copy the snippet below to your wiki page. If there are any 
problems or open questions I will write you an e-mail.


Kind regards,

Fabjan Sukalia


#!/usr/bin/env python
import cv2
import numpy as np

# Sensor window size in pixels
width = 2608
height = 1940

with open("test.raw", "rb") as rawimg:
    # Read width*height unsigned 8-bit samples and reshape into a 2D Bayer image
    img = np.fromfile(rawimg, np.dtype('u1'), width * height).reshape(height, width)

    img.tofile("test2.raw")
    # Demosaic the Bayer mosaic to BGR and save as JPEG
    colimg = cv2.cvtColor(img, cv2.COLOR_BAYER_GB2BGR)
    cv2.imwrite("test.jpeg", colimg)
    #cv2.imshow("color", colimg)
    #cv2.waitKey(0)


On 2018-01-16 at 23:49, Oleg wrote:

Hi,

I modified the videomem driver so it is able to get raw pixels; the 
firmware is available here - built in the rocko branch.


So, the basic functionality is implemented: read and mmap. There are a 
few more things to be done:


* in the FPGA memory the buffer for pixel data is 2 frames long (for 
each port) - right now there is no way to tell which image in that buffer 
to start from, based on the absolute frame number or anything else. Once 
started, it is easy to follow, of course.
Alternatively, those memory channels and counters can be reset, so the 
start would always be at the beginning - I need to test it.


* finish a demo script with setting proper sizes, frame waiting, and 
getting frames from 2 channels (only a single port can be accessed for 
raw data at a time)


* test 16bpp

*demo:*

root@elphel393:~# raw.py


*instructions:*

https://wiki.elphel.com/wiki/Working_with_raw_image_data#Downloading


Let me know if you have any comments/wishes.

Best regards,
Oleg Dzhimiev
Electronics Engineer
phone: +1 801 783  x124
Elphel, Inc.


--
qinematiq GmbH
-
Fabjan Sukalia   Millergasse 21/5   A-1060 Vienna
www.qinematiq.com

___
Support-list mailing list
Support-list@support.elphel.com
http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com


Re: [Elphel-support] Sensor Synchronization and Memory

2018-01-16 Thread Oleg
Hi,

I modified the videomem driver so it is able to get raw pixels; the
firmware is available here - built in the rocko branch.

So, the basic functionality is implemented: read and mmap. There are a few
more things to be done:

* in the FPGA memory the buffer for pixel data is 2 frames long (for each
port) - right now there is no way to tell which image in that buffer to start
from, based on the absolute frame number or anything else. Once started,
it is easy to follow, of course.
Alternatively, those memory channels and counters can be reset, so the
start would always be at the beginning - I need to test it.

* finish a demo script with setting proper sizes, frame waiting, and
getting frames from 2 channels (only a single port can be accessed for raw
data at a time)

* test 16bpp

*demo:*

> root@elphel393:~# raw.py


*instructions:*

> https://wiki.elphel.com/wiki/Working_with_raw_image_data#Downloading


Let me know if you have any comments/wishes.

Best regards,
Oleg Dzhimiev
Electronics Engineer
phone: +1 801 783  x124
Elphel, Inc.
___
Support-list mailing list
Support-list@support.elphel.com
http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com


Re: [Elphel-support] Sensor Synchronization and Memory

2018-01-09 Thread Elphel Support
Hello Fabjan,

Yes - I just forgot that we did not yet port that functionality that was 
available in the NC353. Oleg is working on it right now. Do you have any 
preferences for the interface?

Andrey

 On Tue, 09 Jan 2018 08:09:38 -0800 Fabjan Sukalia 
fabjan.suka...@qinematiq.com wrote  

  Hello Andrey,
 thanks for your help. The issue with the trigger is currently on hold and I 
concentrate on reading out the raw sensor data from the video memory. 
 
  
One of the drivers provides raw access to the whole video memory as a large 
contiguous file. The other driver provides access to the actual captured frame 
data. Do you mean the x393_videomem.c driver? It seems this driver does not 
have that functionality implemented 
(https://git.elphel.com/Elphel/linux-elphel/blob/master/src/drivers/elphel/x393_videomem.c#L417).
 
 
  
In memory the frame width is rounded up, so there are gaps between sensor 
pixel data. This means that every scanline, independent of the width, fits in an 
8192-byte region and the next scanline starts at the next 8192-byte boundary. 
The two frames for each sensor are also consecutive in the video memory 
without any gap, apart from the rounding up to 8192 bytes. Is this correct?
 
 
 Kind regards,
 Fabjan Sukalia
 
 
 Am 2017-12-15 um 17:57 schrieb Elphel Support:
 
Hello Fabjan,
 
The sensors used in the 393 have 2 major operational modes - free running and 
triggered (there are more details in 
https://blog.elphel.com/2016/10/using-a-flash-with-a-cmos-image-sensor-ers-and-grr-modes/
 and in the sensor datasheets). In free running mode the maximal frame rate does 
not depend on exposure time (exposure can be up to the full frame period). In 
the triggered mode (from the sensor "point of view", so it does not matter if 
the trigger is received over the cable or generated by the FPGA timer) exposure 
and readout cannot be overlapped, so the maximal frame rate is limited to 
1/(T_readout + T_exposure). That means that the trigger can be missed if the 
exposure is set too high (for example by the autoexposure daemon). Please 
describe what trigger problems you had so we can try to reproduce them.
 
 If your exposure time  is short compared to readout time, you just need to 
slightly increase the frame period (so it will accommodate both T_readout and 
T_exposure) and either use manual exposure or specify maximal exposure time in 
autoexposure settings.
 
 If your exposure time is high (not enough light) it is possible to try the 
following trick.
 1) Run the camera in triggered mode (FPS < 1/(T_readout + T_exposure))
 2) Make sure the parameters that define the frame rate in free running mode 
are the same for all the participating sensors.
 3) Limit or set exposure time so it will never exceed frame period in free 
running mode
 4) Simultaneously (using broadcast mask) switch all sensors to the free 
running mode
 
 Sensors should stay in sync as they use the same source clock and all other 
parameters are the same.
 
 As for uncompressed data - it should be possible (it is tested with the Python 
test_mcntrl.py) as there is a DMA-based bridge between the video memory and the 
system memory. There are drivers ported from the 353 camera that provide access 
to this memory, but we did not use them and need to check operation.
 One of the drivers provides raw access to the whole video memory as a large 
contiguous file. The other driver provides access to the actual captured frame 
data. In memory the frame width is rounded up, so there are gaps between sensor 
pixel data.
 
 The next thing depends on the 8/16 bpp modes. In normal JPEG/JP4 modes the data 
in the video memory is 8 bpp (after the gamma conversion), so it is possible to 
simultaneously get both compressed and uncompressed output. In 16 bpp mode the 
12-bit sensor data is shifted left by 3 bits, so different sensors use the full 
range of a positive short int. In that mode it is not possible to simultaneously 
get compressed and raw data.
 
 Video memory buffering can be programmed to use a variable number of frames for 
each channel; by default it is set to 2, working as a ping-pong buffer. When 
using compressed output, the operation of the data acquisition channel (writing 
video memory in scan-line order) and reading data to the compressors (20x20 
overlapping tiles in JPEG mode, non-overlapping 16x16 in JP4 mode) are 
synchronized in the FPGA (the read channel waits for sufficient lines to be 
acquired for the next row of tiles), but that is not so for the raw data read 
from the video memory. The FPGA provides 8 individual interrupts for the imaging 
subsystem - 4 for the sensor acquisition channels (frame sync signals also 
internally advance the command sequencers described here - 
https://blog.elphel.com/2016/09/nc393-development-progress-and-the-future-plans/)
 and 4 compressor_done interrupts. And there are userland ways to wait for the 
next frame (e.g. from the PHP extension - 
https://wiki.elphel.com/wiki/PHP_in_Elphel_cameras).
 
 

Re: [Elphel-support] Sensor Synchronization and Memory

2018-01-09 Thread Fabjan Sukalia

Hello Andrey,

thanks for your help. The issue with the trigger is currently on hold 
and I concentrate on reading out the raw sensor data from the video memory.


One of the drivers provides raw access to the whole video memory as a 
large contiguous file. The other driver provides access to the actual 
captured frame data.
Do you mean the x393_videomem.c driver? It seems this driver does not 
have that functionality implemented 
(https://git.elphel.com/Elphel/linux-elphel/blob/master/src/drivers/elphel/x393_videomem.c#L417). 



In memory the frame width is rounded up, so there are gaps between 
sensor pixel data.
This means that every scanline, independent of the width, fits in an 8192-byte 
region and the next scanline starts at the next 8192-byte boundary. 
The two frames for each sensor are also consecutive in the video 
memory without any gap, apart from the rounding up to 8192 bytes. Is this correct?
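
If that layout is correct, I would expect reading one frame out of a raw 
video-memory dump to look roughly like the sketch below. The dump file name, 
the frame offset and the 8192-byte stride are assumptions on my side, not 
tested against the actual driver:

#!/usr/bin/env python
# Sketch: extract one frame from a raw video-memory dump, assuming each
# scanline is padded to an 8192-byte boundary as described above.
import numpy as np

width, height = 2608, 1940      # sensor window used elsewhere in this thread
LINE_STRIDE = 8192              # assumed per-scanline allocation in bytes
frame_index = 0                 # 0 or 1 in the assumed two-frame buffer

with open("videomem.dump", "rb") as f:          # hypothetical dump file
    f.seek(frame_index * height * LINE_STRIDE)  # frames assumed back-to-back
    data = np.fromfile(f, np.uint8, height * LINE_STRIDE)

# View as height x LINE_STRIDE and drop the padding columns
img = data.reshape(height, LINE_STRIDE)[:, :width]
img.tofile("frame0.raw")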



Kind regards,

Fabjan Sukalia


On 2017-12-15 at 17:57, Elphel Support wrote:

Hello Fabjan,

The sensors used in the 393 have 2 major operational modes - free running 
and triggered (there are more details in 
https://blog.elphel.com/2016/10/using-a-flash-with-a-cmos-image-sensor-ers-and-grr-modes/ 
and in the sensor datasheets). In free running mode the maximal frame rate 
does not depend on exposure time (exposure can be up to the full frame 
period). In the triggered mode (from the sensor "point of view", so it 
does not matter if the trigger is received over the cable or generated 
by the FPGA timer) exposure and readout cannot be overlapped, so the 
maximal frame rate is limited to 1/(T_readout + T_exposure). That 
means that the trigger can be missed if the exposure is set too high (for 
example by the autoexposure daemon). Please describe what trigger 
problems you had so we can try to reproduce them.


If your exposure time  is short compared to readout time, you just 
need to slightly increase the frame period (so it will accommodate 
both T_readout and T_exposure) and either use manual exposure or 
specify maximal exposure time in autoexposure settings.


If your exposure time is high (not enough light) it is possible to try 
the following trick.

1) Run the camera in triggered mode (FPS < 1/(T_readout + T_exposure))
2) Make sure the parameters that define the frame rate in free running 
mode are the same for all the participating sensors.
3) Limit or set exposure time so it will never exceed frame period in 
free running mode
4) Simultaneously (using broadcast mask) switch all sensors to the 
free running mode


Sensors should stay in sync as they use the same source clock and all 
other parameters are the same.


As for uncompressed data - it should be possible (it is tested with the 
Python test_mcntrl.py) as there is a DMA-based bridge between the video 
memory and the system memory. There are drivers ported from the 353 
camera that provide access to this memory, but we did not use them and 
need to check operation.
One of the drivers provides raw access to the whole video memory as a 
large contiguous file. The other driver provides access to the actual 
captured frame data. In memory the frame width is rounded up, so there 
are gaps between sensor pixel data.


The next thing depends on the 8/16 bpp modes. In normal JPEG/JP4 modes the 
data in the video memory is 8 bpp (after the gamma conversion), so it is 
possible to simultaneously get both compressed and uncompressed output. 
In 16 bpp mode the 12-bit sensor data is shifted left by 3 bits, so 
different sensors use the full range of a positive short int. In that 
mode it is not possible to simultaneously get compressed and raw data.


Video memory buffering can be programmed to use a variable number of 
frames for each channel; by default it is set to 2, working as a 
ping-pong buffer. When using compressed output, the operation of the 
data acquisition channel (writing video memory in scan-line order) and 
reading data to the compressors (20x20 overlapping tiles in JPEG mode, 
non-overlapping 16x16 in JP4 mode) are synchronized in the FPGA (the read 
channel waits for sufficient lines to be acquired for the next row 
of tiles), but that is not so for the raw data read from the video 
memory. The FPGA provides 8 individual interrupts for the imaging 
subsystem - 4 for the sensor acquisition channels (frame sync 
signals also internally advance the command sequencers described here - 
https://blog.elphel.com/2016/09/nc393-development-progress-and-the-future-plans/) 
and 4 compressor_done interrupts. And there are userland ways to wait 
for the next frame (e.g. from the PHP extension - 
https://wiki.elphel.com/wiki/PHP_in_Elphel_cameras).


We will check (update if needed) the drivers that provide access to 
the video memory.


Andrey




 On Fri, 15 Dec 2017 05:48:27 -0800 Fabjan Sukalia wrote 


Dear Elphel-Team,

currently I'm working with the synchronization and readout of the
sensor on the 393. My first 

Re: [Elphel-support] Sensor Synchronization and Memory

2017-12-15 Thread Elphel Support
Hello Fabjan,

The sensors used in the 393 have 2 major operational modes - free running and 
triggered (there are more details in 
https://blog.elphel.com/2016/10/using-a-flash-with-a-cmos-image-sensor-ers-and-grr-modes/
 and in the sensor datasheets). In free running mode the maximal frame rate does 
not depend on exposure time (exposure can be up to the full frame period). In 
the triggered mode (from the sensor "point of view", so it does not matter if 
the trigger is received over the cable or generated by the FPGA timer) exposure 
and readout cannot be overlapped, so the maximal frame rate is limited to 
1/(T_readout + T_exposure). That means that the trigger can be missed if the 
exposure is set too high (for example by the autoexposure daemon). Please 
describe what trigger problems you had so we can try to reproduce them.
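
As a purely illustrative calculation (the readout time below is a placeholder, 
not a measured value for this sensor), the limit works out like this:

# Illustrative only: T_readout is a hypothetical number, not a measured figure.
T_readout  = 0.014   # seconds, placeholder full-frame readout time
T_exposure = 0.016   # seconds, the 16 ms exposure mentioned in this thread

max_fps_triggered = 1.0 / (T_readout + T_exposure)
print(max_fps_triggered)   # ~33 fps - below 60 fps, so triggers would be missed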

If your exposure time  is short compared to readout time, you just need to 
slightly increase the frame period (so it will accommodate both T_readout and 
T_exposure) and either use manual exposure or specify maximal exposure time in 
autoexposure settings.

If your exposure time is high (not enough light) it is possible to try the 
following trick.
1) Run the camera in triggered mode (FPS < 1/(T_readout + T_exposure))
2) Make sure the parameters that define the frame rate in free running mode are 
the same for all the participating sensors.
3) Limit or set exposure time so it will never exceed frame period in free 
running mode
4) Simultaneously (using broadcast mask) switch all sensors to the free running 
mode

Sensors should stay in sync as they use the same source clock and all other 
parameters are the same.

As for uncompressed data - it should be possible (it is tested with the Python 
test_mcntrl.py) as there is a DMA-based bridge between the video memory and the 
system memory. There are drivers ported from the 353 camera that provide access 
to this memory, but we did not use them and need to check operation.
One of the drivers provides raw access to the whole video memory as a large 
contiguous file. The other driver provides access to the actual captured frame 
data. In memory the frame width is rounded up, so there are gaps between sensor 
pixel data.

The next thing depends on the 8/16 bpp modes. In normal JPEG/JP4 modes the data 
in the video memory is 8 bpp (after the gamma conversion), so it is possible to 
simultaneously get both compressed and uncompressed output. In 16 bpp mode the 
12-bit sensor data is shifted left by 3 bits, so different sensors use the full 
range of a positive short int. In that mode it is not possible to simultaneously 
get compressed and raw data.
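
A small sketch of unpacking the 16 bpp data under the assumption above (12-bit 
samples shifted left by 3 bits); the file name and the little-endian byte order 
here are assumptions:

# Sketch: unpack a 16 bpp raw dump, assuming the 12-bit sensor samples are
# shifted left by 3 bits (as described above). File name and little-endian
# byte order are assumptions.
import numpy as np

width, height = 2608, 1940
img16 = np.fromfile("test16.raw", np.dtype('<u2'), width * height).reshape(height, width)

samples12 = img16 >> 3                      # undo the 3-bit left shift -> 0..4095
img8 = (samples12 >> 4).astype(np.uint8)    # crude 8-bit preview (drop 4 LSBs)
img8.tofile("preview8.raw")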

Video memory buffering can be programmed to use a variable number of frames for 
each channel; by default it is set to 2, working as a ping-pong buffer. When 
using compressed output, the operation of the data acquisition channel (writing 
video memory in scan-line order) and reading data to the compressors (20x20 
overlapping tiles in JPEG mode, non-overlapping 16x16 in JP4 mode) are 
synchronized in the FPGA (the read channel waits for sufficient lines to be 
acquired for the next row of tiles), but that is not so for the raw data read 
from the video memory. The FPGA provides 8 individual interrupts for the imaging 
subsystem - 4 for the sensor acquisition channels (frame sync signals also 
internally advance the command sequencers described here - 
https://blog.elphel.com/2016/09/nc393-development-progress-and-the-future-plans/)
 and 4 compressor_done interrupts. And there are userland ways to wait for the 
next frame (e.g. from the PHP extension - 
https://wiki.elphel.com/wiki/PHP_in_Elphel_cameras).

We will check (update if needed) the drivers that provide access to the video 
memory.

Andrey




 On Fri, 15 Dec 2017 05:48:27 -0800 Fabjan Sukalia 
fabjan.suka...@qinematiq.com wrote  

  Dear Elphel-Team,
 currently I'm working with the synchronization and readout of the sensor on 
the 393. My first goal is to synchronize two or more sensors so that the 
pictures are taken at the exact same time. To my understanding the internal 
trigger could be used for this purpose but I'm unsure if this produces a stable 
video stream with the maximum frame rate. Currently I'm unable to confirm this 
as the firmware that is provided by my colleagues has issues with the trigger. 
Therefore I'm asking you if the internal trigger can synchronize the sensors 
and still provide a video stream with the highest frame rate possible. Also the 
maximal exposure time would be 16 ms for a 60 fps video. 
 
 My second task is to access the uncompressed data of the sensors. These data 
reside on the memory chip dedicated to the FPGA-part of the Zynq. Is there some 
example code on how to access the uncompressed data from a user-space program?
 Thanks in advance.
 Kind regards,
 Fabjan Sukalia
 -- 
 qinematiq GmbH 
 - 
 Fabjan Sukalia    Millergasse 21/5    A-1060 Vienna 
 

[Elphel-support] Sensor Synchronization and Memory

2017-12-15 Thread Fabjan Sukalia

Dear Elphel-Team,

currently I'm working with the synchronization and readout of the sensor 
on the 393. My first goal is to synchronize two or more sensors so that 
the pictures are taken at the exact same time. To my understanding the 
internal trigger could be used for this purpose but I'm unsure if this 
produces a stable video stream with the maximum frame rate. Currently 
I'm unable to confirm this as the firmware that is provided by my 
colleagues has issues with the trigger. Therefore I'm asking you if the 
internal trigger can synchronize the sensors and still provide a video 
stream with the highest frame rate possible. Also the maximal exposure 
time would be 16 ms for a 60 fps video.


My second task is to access the uncompressed data of the sensors. These 
data reside on the memory chip dedicated to the FPGA-part of the Zynq. 
Is there some example code on how to access the uncompressed data from a 
user-space program?


Thanks in advance.

Kind regards,

Fabjan Sukalia

--
qinematiq GmbH
-
Fabjan Sukalia    Millergasse 21/5  A - 1060 Vienna
Tel: +43 1 595 11 21-11   Mobil: +43 664 926 9277
www.qinematiq.com

___
Support-list mailing list
Support-list@support.elphel.com
http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com