Re: [Elphel-support] Re: Quote for 1 NC393-Stereo camera
Hello, Here's an updated firmware image: https://community.elphel.com/files/393/20180109/ The instructions are in the readme.md in the zip archive. Let me know if you have any questions. To update the camera's NAND flash you will need to boot from the recovery uSD card. Best regards, Oleg Dzhimiev Electronics Engineer phone: +1 801 783 x124 Elphel, Inc. ___ Support-list mailing list Support-list@support.elphel.com http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com
Re: [Elphel-support] Re: Quote for 1 NC393-Stereo camera
Winston, that was really a software bug that influenced only some lighting conditions, so we did not notice it. It is now fixed in the git repository https://git.elphel.com/Elphel/elphel-apps-autoexposure/commit/4a215f5ce69d3953585874fa0645bffde44dbafc ; we will post the binary image for the uSD card shortly. Andrey On Mon, 08 Jan 2018 23:47:18 -0800 Elphel Support support-list@support.elphel.com wrote On Mon, 08 Jan 2018 20:58:46 -0800 Winston Zhang winston.zh...@blacksesame.com.cn wrote Hi Andrey, I have set all the same parameters; please look at the attachments. How do I set the 245 as shown on your screenshot? This page on mine did not show any object (please look at the attachments), and the two camera images change asynchronously. How can I make the two camera images change synchronously? Winston, to set the same parameter as "245" - just open and press "Apply": http://192.168.0.9/parsedit.php?sensor_port=0&AEXP_LEVEL=0xf500&refresh http://192.168.0.9/parsedit.php?sensor_port=1&AEXP_LEVEL=0xf500&refresh Or, in the user interface from the video you sent last, press "more details..." (it is just above the sliders) and then the yellow tab (#4) - that will open the AE controls like in Fig. 4 here: https://wiki.elphel.com/wiki/Autoexposure Andrey
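The same AEXP_LEVEL setting can be applied to both sensor ports programmatically instead of pasting the URLs into a browser. A minimal sketch below builds the parsedit.php URLs from the message above; the host address 192.168.0.9 and the parameter names are taken from the email, and the helper function name is hypothetical:

```python
# Sketch: construct the parsedit.php URLs for setting the same AEXP_LEVEL
# (0xf500) on both sensor ports, as in the email above. To actually apply
# them, fetch each URL (e.g. with urllib.request.urlopen) from a host that
# can reach the camera.
from urllib.parse import urlencode

def aexp_level_url(host: str, sensor_port: int, level: int) -> str:
    """Return the parsedit.php URL that applies AEXP_LEVEL to one port."""
    query = urlencode({"sensor_port": sensor_port, "AEXP_LEVEL": hex(level)})
    # 'refresh' appears as a bare flag (no value) in the original URLs
    return f"http://{host}/parsedit.php?{query}&refresh"

for port in (0, 1):
    print(aexp_level_url("192.168.0.9", port, 0xF500))
```

Broadcasting the same value to both ports this way keeps the autoexposure targets identical, which is a precondition for the two images changing in step.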
Re: [Elphel-support] Sensor Synchronization and Memory
Hello Fabian, Yes - I just forgot that we did not yet port that functionality that was available in the NC353. Oleg is working on it right now. Do you have any preferences for the interface? Andrey On Tue, 09 Jan 2018 08:09:38 -0800 Fabjan Sukalia fabjan.suka...@qinematiq.com wrote Hello Andrey, thanks for your help. The issue with the trigger is currently on hold and I am concentrating on reading out the raw sensor data from the video memory. One of the drivers provides raw access to the whole video memory as a large contiguous file. The other driver provides access to the actual captured frame data. Do you mean the x393_videomem.c driver? It seems this driver does not have that functionality implemented (https://git.elphel.com/Elphel/linux-elphel/blob/master/src/drivers/elphel/x393_videomem.c#L417). In memory the frame width is rounded up, so there are gaps between sensor pixel data. This means that every scanline, independent of the width, is in an 8192-byte region and the next scanline starts at the next 8192-byte boundary. Also, the two frames for each sensor are consecutive in the video memory without any gap, besides the round-up to 8192 bytes. Is this correct? Kind regards, Fabjan Sukalia On 2017-12-15 17:57, Elphel Support wrote: Hello Fabian, The sensors used in the 393 have 2 major operational modes - free running and triggered (there are more details in https://blog.elphel.com/2016/10/using-a-flash-with-a-cmos-image-sensor-ers-and-grr-modes/ and in the sensor datasheets). In free running mode the maximal frame rate does not depend on exposure time (exposure can be up to the full frame period). In triggered mode (from the sensor's "point of view", so it does not matter if the trigger is received over the cable or generated by the FPGA timer) exposure and readout can not be overlapped, so the maximal frame rate is limited to 1/(T_readout + T_exposure). That means that a trigger can be missed if the exposure is set too high (for example by the autoexposure daemon).
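The scanline layout Fabjan describes (each line padded up to an 8192-byte boundary, frames back to back) implies a simple address calculation for locating a pixel in the raw video-memory file. The sketch below models that indexing only; the function name and the assumption that the formula holds for both 8 bpp and 16 bpp modes are mine, not the driver's API:

```python
# Sketch based on the layout described above: each scanline starts on an
# 8192-byte boundary, so pixel addresses follow a fixed per-line stride.
# Helper names are hypothetical; this is not the x393_videomem.c interface.
SCANLINE_ALIGN = 8192  # bytes, per the description of the video memory layout

def pixel_offset(x: int, y: int, frame: int, width: int, height: int,
                 bpp: int = 1) -> int:
    """Byte offset of pixel (x, y) in the given frame of one channel's buffer.

    Assumes bpp=1 (8 bpp mode) or bpp=2 (16 bpp mode), and that the frames
    of a ping-pong pair follow each other with no gap beyond the alignment.
    """
    stride = -(-width * bpp // SCANLINE_ALIGN) * SCANLINE_ALIGN  # round up
    frame_size = stride * height
    return frame * frame_size + y * stride + x * bpp

# Example: a 2592-pixel line at 8 bpp fits in one 8192-byte slot, so the
# second scanline (y=1) starts exactly at byte 8192.
print(pixel_offset(0, 1, 0, 2592, 1936))
```

Note the gap between the end of the pixel data and the next boundary contains no image data, which is why the width must be rounded up before any seek into the raw file.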
Please describe what trigger problems you had so we can try to reproduce them. If your exposure time is short compared to the readout time, you just need to slightly increase the frame period (so it will accommodate both T_readout and T_exposure) and either use manual exposure or specify a maximal exposure time in the autoexposure settings. If your exposure time is high (not enough light) it is possible to try the following trick: 1) Run the camera in triggered mode (FPS < 1/(T_readout + T_exposure)). 2) Make sure the parameters that define the frame rate in free running mode are the same for all participating sensors. 3) Limit or set the exposure time so it will never exceed the frame period in free running mode. 4) Simultaneously (using the broadcast mask) switch all sensors to free running mode. The sensors should stay in sync as they use the same source clock and all other parameters are the same. As for uncompressed data - it should be possible (it is tested with the Python test_mcntrl.py) as there is a DMA-based bridge between the video memory and the system memory. There are drivers ported from the 353 camera that provide access to this memory, but we did not use them and need to check their operation. One of the drivers provides raw access to the whole video memory as a large contiguous file. The other driver provides access to the actual captured frame data. In memory the frame width is rounded up, so there are gaps between sensor pixel data. The next thing depends on the 8/16 bpp modes. In normal JPEG/JP4 modes the data in the video memory is 8 bpp (after the gamma conversion), so it is possible to simultaneously get both compressed and uncompressed output. In 16 bpp mode the 12-bit sensor data is shifted left by 3 bits, so different sensors use the full range of a positive short int. In that mode it is not possible to simultaneously get compressed and raw data.
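The frame-rate limits described above can be made concrete with a small calculation. A sketch, with illustrative timing numbers (not sensor specifications) and hypothetical function names:

```python
# Sketch of the frame-rate limits described above. In triggered mode,
# exposure and readout cannot overlap, so the trigger period must cover
# both; in free-running mode exposure overlaps readout of the previous
# frame. Times are in seconds; the example values are illustrative only.
def max_triggered_fps(t_readout: float, t_exposure: float) -> float:
    """Upper bound on the frame rate when each frame is externally triggered."""
    return 1.0 / (t_readout + t_exposure)

def max_free_running_fps(t_readout: float) -> float:
    """Free-running limit: readout alone sets the rate, for exposures up to
    the full frame period."""
    return 1.0 / t_readout

t_read, t_exp = 0.030, 0.010  # 30 ms readout, 10 ms exposure (illustrative)
print(round(max_triggered_fps(t_read, t_exp), 1))  # triggered-mode ceiling
print(round(max_free_running_fps(t_read), 1))      # free-running ceiling
```

With these numbers the triggered ceiling is 25 fps against 33.3 fps free-running, which shows why a trigger arriving faster than 1/(T_readout + T_exposure) gets missed when the autoexposure daemon raises the exposure.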
Video memory buffering can be programmed to use a variable number of frames for each channel; by default it is set to 2, working as a ping-pong buffer. When using compressed output the operation of the data acquisition channel (writing video memory in scan-line order) and reading data to the compressors (20x20 overlapping tiles in JPEG mode, non-overlapping 16x16 tiles in JP4 mode) are synchronized in the FPGA (the read channel waits for sufficient lines to be acquired for the next row of tiles), but that is not so for raw data read from the video memory. The FPGA provides 8 individual interrupts for the imaging subsystem - 4 for the sensor acquisition channels (frame sync signals also internally advance the command sequencers described here - https://blog.elphel.com/2016/09/nc393-development-progress-and-the-future-plans/) and 4 compressor_done interrupts. And there are userland ways to wait for the next frame (e.g. from the PHP extension - https://wiki.elphel.com/wiki/PHP_in_Elphel_cameras).
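The ping-pong scheme mentioned above can be summarized with a small model: with the default of 2 frames per channel, acquisition fills one slot while the reader consumes the other, toggling on each frame sync. This is only an indexing sketch under that assumption, not the FPGA or driver implementation:

```python
# Minimal model of the per-channel ping-pong buffering described above:
# the write side advances to the next frame slot on every frame-sync, and
# the slot it just finished becomes safe to read. Class and method names
# are hypothetical.
class PingPongBuffer:
    def __init__(self, num_frames: int = 2):
        self.num_frames = num_frames   # programmable per channel, default 2
        self.write_frame = 0           # slot currently being acquired

    def frame_done(self) -> int:
        """Advance to the next slot; return the slot now safe to read."""
        readable = self.write_frame
        self.write_frame = (self.write_frame + 1) % self.num_frames
        return readable

buf = PingPongBuffer()
print([buf.frame_done() for _ in range(4)])  # slots alternate 0, 1, 0, 1
```

The caveat in the text follows from this model: a raw reader that is not synchronized with acquisition (unlike the compressor path) must wait for a frame-sync interrupt before touching the slot that was just written, or it may read a half-filled frame.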
Re: [Elphel-support] Sensor Synchronization and Memory
Hello Andrey, thanks for your help. The issue with the trigger is currently on hold and I am concentrating on reading out the raw sensor data from the video memory. One of the drivers provides raw access to the whole video memory as a large contiguous file. The other driver provides access to the actual captured frame data. Do you mean the x393_videomem.c driver? It seems this driver does not have that functionality implemented (https://git.elphel.com/Elphel/linux-elphel/blob/master/src/drivers/elphel/x393_videomem.c#L417). In memory the frame width is rounded up, so there are gaps between sensor pixel data. This means that every scanline, independent of the width, is in an 8192-byte region and the next scanline starts at the next 8192-byte boundary. Also, the two frames for each sensor are consecutive in the video memory without any gap, besides the round-up to 8192 bytes. Is this correct? Kind regards, Fabjan Sukalia On 2017-12-15 17:57, Elphel Support wrote: Hello Fabian, The sensors used in the 393 have 2 major operational modes - free running and triggered (there are more details in https://blog.elphel.com/2016/10/using-a-flash-with-a-cmos-image-sensor-ers-and-grr-modes/ and in the sensor datasheets). In free running mode the maximal frame rate does not depend on exposure time (exposure can be up to the full frame period). In triggered mode (from the sensor's "point of view", so it does not matter if the trigger is received over the cable or generated by the FPGA timer) exposure and readout can not be overlapped, so the maximal frame rate is limited to 1/(T_readout + T_exposure). That means that a trigger can be missed if the exposure is set too high (for example by the autoexposure daemon). Please describe what trigger problems you had so we can try to reproduce them.
If your exposure time is short compared to the readout time, you just need to slightly increase the frame period (so it will accommodate both T_readout and T_exposure) and either use manual exposure or specify a maximal exposure time in the autoexposure settings. If your exposure time is high (not enough light) it is possible to try the following trick: 1) Run the camera in triggered mode (FPS < 1/(T_readout + T_exposure)). 2) Make sure the parameters that define the frame rate in free running mode are the same for all participating sensors. 3) Limit or set the exposure time so it will never exceed the frame period in free running mode. 4) Simultaneously (using the broadcast mask) switch all sensors to free running mode. The sensors should stay in sync as they use the same source clock and all other parameters are the same. As for uncompressed data - it should be possible (it is tested with the Python test_mcntrl.py) as there is a DMA-based bridge between the video memory and the system memory. There are drivers ported from the 353 camera that provide access to this memory, but we did not use them and need to check their operation. One of the drivers provides raw access to the whole video memory as a large contiguous file. The other driver provides access to the actual captured frame data. In memory the frame width is rounded up, so there are gaps between sensor pixel data. The next thing depends on the 8/16 bpp modes. In normal JPEG/JP4 modes the data in the video memory is 8 bpp (after the gamma conversion), so it is possible to simultaneously get both compressed and uncompressed output. In 16 bpp mode the 12-bit sensor data is shifted left by 3 bits, so different sensors use the full range of a positive short int. In that mode it is not possible to simultaneously get compressed and raw data. Video memory buffering can be programmed to use a variable number of frames for each channel; by default it is set to 2, working as a ping-pong buffer.
When using compressed output the operation of the data acquisition channel (writing video memory in scan-line order) and reading data to the compressors (20x20 overlapping tiles in JPEG mode, non-overlapping 16x16 tiles in JP4 mode) are synchronized in the FPGA (the read channel waits for sufficient lines to be acquired for the next row of tiles), but that is not so for raw data read from the video memory. The FPGA provides 8 individual interrupts for the imaging subsystem - 4 for the sensor acquisition channels (frame sync signals also internally advance the command sequencers described here - https://blog.elphel.com/2016/09/nc393-development-progress-and-the-future-plans/) and 4 compressor_done interrupts. And there are userland ways to wait for the next frame (e.g. from the PHP extension - https://wiki.elphel.com/wiki/PHP_in_Elphel_cameras). We will check (and update if needed) the drivers that provide access to the video memory. Andrey On Fri, 15 Dec 2017 05:48:27 -0800 *Fabjan Sukalia* wrote Dear Elphel-Team, currently I'm working with the synchronization and readout of the sensor on the 393. My first