Re: [Elphel-support] video camera for drone project

2017-06-14 Thread Oleg
Hello,


> This is a student project with the idea of developing a real-time game,
> and for this reason we need low-latency, good-quality video.


Currently, for 1080p the latency is roughly 40 ms - the camera needs to
compress a whole frame before sending it out over the network.
There is also network latency, plus the time it takes the PC to display an image.

There is room for improvement - down to ~0.7 ms, if one does not wait
for the compressor to finish the whole frame. That just needs an FPGA modification.
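As a rough sanity check, the end-to-end budget can be sketched like this (only the ~40 ms camera-side figure comes from the measurement above; the network and display numbers are illustrative assumptions):

```python
# Rough end-to-end latency budget for 1080p streaming.
# Only the ~40 ms camera figure is from the measurement above;
# the network and display terms are illustrative assumptions.
camera_latency_ms = 40    # full-frame compression before sending
network_latency_ms = 2    # assumed: local 5 GHz Wi-Fi hop
display_latency_ms = 17   # assumed: about one 60 Hz refresh period

total_ms = camera_latency_ms + network_latency_ms + display_latency_ms
print(f"end-to-end: ~{total_ms} ms")  # -> end-to-end: ~59 ms
```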

Answering your initial letter...


> Communications: Wireless operating at 5 GHz, with transmitter built-in or
> external HDMI transmitter, to be processed/displayed in a local computer.
>
> Range: >50 feet

Available options would be:

USB dongles (USB 2.0):
* https://www.amazon.com/Mailiya-AC600Mbps-Wireless-Adapter-Mini/dp/B01MYQW7IR
* https://www.amazon.com/dp/B06XRG9QDV

There are more options at 2.4 GHz, of course.
In terms of speed, a 300 Mbps link should be enough for 1080p@30fps.
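A back-of-the-envelope check of that claim (the bits-per-pixel figure for compressed JPEG is an assumption, not a camera spec):

```python
# Estimated MJPEG bitrate for 1080p@30fps.
width, height, fps = 1920, 1080, 30
bpp_compressed = 2.0   # assumed bits/pixel for high-quality JPEG

mbps = width * height * fps * bpp_compressed / 1e6
print(f"~{mbps:.0f} Mbps")  # well under a 300 Mbps nominal link rate
```

Real-world Wi-Fi and USB 2.0 throughput is well below the nominal rate, so the headroom matters.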

> Resolution Video: 1080p at 30 frames/s
>

Yes


> View angle: > 120
>

Yes
Image sensor's format is 1/2.5".
Possible options (small lenses, M12x0.5 thread):
* hfov = ~135: http://www.optics-online.com/OOL/DSL/DSL315.PDF
* hfov = ~180: http://www.optics-online.com/OOL/DSL/DSL227.PDF

Note: for 1080p (cropped from the full 2592x1936 resolution) the FoV is smaller.
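To put a number on the crop, here is a sketch assuming a rectilinear lens model (the fisheye DSL227 is closer to f-theta, where FoV scales roughly linearly with image width instead):

```python
import math

# Horizontal FoV after cropping the 2592 px full width to a 1920 px wide
# 1080p window; rectilinear model: image half-width ~ tan(hfov/2).
full_width_px, crop_width_px = 2592, 1920
full_hfov_deg = 135.0  # DSL315-class wide lens from the list above

half_tan = math.tan(math.radians(full_hfov_deg / 2))
crop_hfov_deg = 2 * math.degrees(
    math.atan(half_tan * crop_width_px / full_width_px))
print(f"cropped hfov ~ {crop_hfov_deg:.0f} deg")
```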

> Power: Battery operated with a duration of an hour before recharging
>
> Weight: as light as possible, < 1 pound
>
> Do you have a video camera that we can use?


An NC393-DEV (12V) with a single SFE (sensor front end) will be the lightest - 150-200 g.

The power consumption is 6-8 W.
The input voltage range is 9-36 V. We recommend getting a 14.8 V battery;
0.5-1 Ah should be enough for 1 hour of operation. (Of course, if possible,
try to use a single battery for the whole system.)
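The capacity figure checks out; here is the sizing arithmetic (the derating margin is an assumption, not a recommendation from the thread):

```python
# Battery sizing for 1 hour of operation, from the figures above
# (6-8 W draw, 14.8 V 4S pack); the margin is an assumption.
power_w = 8.0     # worst-case camera consumption
pack_v = 14.8
hours = 1.0
margin = 1.25     # assumed: keep ~20% of LiPo capacity in reserve

required_ah = power_w / pack_v * hours * margin
print(f"need >= {required_ah:.2f} Ah")  # falls inside the 0.5-1 Ah range
```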

Example (240 g): https://www.dollarhobbyz.com/collections/batteries-4s-14-8v-lipo/products/venom-lipo-battery-4s-1p-2100-mah-20c-starter-box-15004

The total weight comes to about 1 lb.

Best regards,
Oleg Dzhimiev
Electronics Engineer
phone: +1 801 783  x124
Elphel, Inc.

On 14 June 2017 at 16:28, Maria Gonzalez  wrote:

> Thanks for the detailed answer covering the internal electronics. This is
> a student project with the idea of developing a real-time game, and for
> this reason we need low-latency, good-quality video. The video will be
> processed in the computer, adding some additional elements, which is where
> we will put our effort. The other elements, such as the drone, camera, and
> communications to the computer/google, we are hoping to purchase as parts
> ready to be operational.
> Frankly, we do not know if our specifications are feasible to realize
> with existing ready-to-go equipment.
>
> Maria C. Gonzalez
Re: [Elphel-support] video camera for drone project

2017-06-14 Thread Maria Gonzalez
Thanks for the detailed answer covering the internal electronics. This is
a student project with the idea of developing a real-time game, and for
this reason we need low-latency, good-quality video. The video will be
processed in the computer, adding some additional elements, which is where
we will put our effort. The other elements, such as the drone, camera, and
communications to the computer/google, we are hoping to purchase as parts
ready to be operational.
Frankly, we do not know if our specifications are feasible to realize
with existing ready-to-go equipment.

Maria C. Gonzalez


On Tue, Jun 13, 2017 at 4:57 PM, Elphel Support <
support-list@support.elphel.com> wrote:

> Hello Maria,
>
> The camera uses MT9P031 sensors; this datasheet
> http://www.onsemi.com/pub/Collateral/MT9P031-D.PDF lists some of the
> frame rate/resolution combinations - the camera supports everything the
> sensor can provide.
> Regarding latency: with the default firmware the camera sends an acquired
> frame immediately after it is acquired - when the frame sync pulse comes
> from the sensor, it starts sending the previous frame.
> It is possible to reduce the latency even more, but that would require
> some software development, plus custom software on the receiving side: for
> sending standard files you need to know the file size, and the file size
> is available only after the full frame is compressed.
>
> The FPGA code that acquires and compresses image frames has a latency of
> 18 scan lines plus under 1000 pixels - this is the lag between pixels
> being acquired from the sensor and compressed data being stored in system
> memory. So it is possible to start sending without waiting for the full
> frame to be compressed.
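That pixel-count lag can be converted to time; the pixel clock and line length below are assumptions (not from this thread), chosen to be plausible for the sensor at full resolution:

```python
# "18 scan lines plus under 1000 pixels" of compressor lag, in time.
pixel_clock_hz = 96e6   # assumed sensor pixel clock
line_len_px = 2700      # assumed active width + horizontal blanking

lag_px = 18 * line_len_px + 1000
lag_us = lag_px / pixel_clock_hz * 1e6
print(f"compressor lag ~ {lag_us:.0f} us")  # sub-millisecond
```

Under these assumptions the lag is about half a millisecond, consistent with the ~0.7 ms figure quoted earlier in the thread.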
>
> Oleg will reply to you about the power and weight, but yes - it is
> possible. And at the same weight you can have up to 4 sensors operating
> simultaneously.
>
> Andrey
>
>
>  On Tue, 13 Jun 2017 16:34:23 -0700 *Maria Gonzalez
> >* wrote 
>
> Hello
>
> We are looking for a camera with the following characteristics:
>
> *Desired Camera and Communications Specifications*
>
> *Communications*: Wireless operating at 5 GHz, with transmitter built-in
> or external HDMI transmitter,
>
> to be processed/displayed in a local computer.
>
> *Resolution Video*: 1080p at 30 frames/s
>
> *System Communications Latency (From Camera to Computer Display)*: <
> order of ms
>
> *Range* : >50 feet
>
> *View angle*: > 120
>
> *Power*: Battery operated with duration of an hour before recharged
> *Weight*: as light as possible, < 1 pound
>
> Do you have a camera that meets these characteristics? The most critical
> parameter is the latency, and we are open to suggestions for the
> communications scheme.
>
> Thanks,
>
> Maria C. Gonzalez
> ___
> Support-list mailing list
> Support-list@support.elphel.com
> http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com
>
>
>
>


Re: [Elphel-support] auto exposure on multi sensor setup & binning question.

2017-06-14 Thread Oleg
Hi,

> Jorick, with the limited dynamic range of the small-format sensors it is
> a challenge to capture maximal data from the image, and there is no
> universal fits-all set of autoexposure parameters. We probably need to
> create some tutorial about it.


Here are a few screenshots of the GUI and the parameters you might need to
control the histogram window and the autoexposure program:
https://wiki.elphel.com/wiki/Autoexposure


> I could probably also modify the streamer to output a lower resolution.
> h264/h265 support would be even better. The idea is to run the analytics
> and streaming and only grab the interesting frames in raw from the circular
> buffer.
> How difficult would a color space conversion to YUV be on the Elphel?
> We're streaming so much of the same color (green ;-) and this would save
> bandwidth; it would also be easier to change the brightness on parts of
> the image.
>

JPEGs are YCbCr 4:2:0 encoded.
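For reference, the RGB-to-YCbCr conversion JPEG uses internally looks like this (a BT.601 full-range sketch, not camera code):

```python
# BT.601 full-range RGB -> YCbCr, the color space inside JPEG.
def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# Pure green carries most of its energy in luma and relatively
# little in the chroma channels.
y, cb, cr = rgb_to_ycbcr(0, 255, 0)
print(round(y), round(cb), round(cr))
```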

There are a few other formats:
https://wiki.elphel.com/wiki/JP4#Format_variations
Most of the time we use JP4 to preserve pixel data, though it does not save
bandwidth.
I will look into 'jp4diff' - check whether it still works and try to decode
it. I will let you know.

Best regards,
Oleg Dzhimiev
Electronics Engineer
phone: +1 801 783  x124
Elphel, Inc.


Re: [Elphel-support] auto exposure on multi sensor setup & binning question.

2017-06-14 Thread Elphel Support
Jorick,

Yes, binning/decimation is broken - we do not use it ourselves (for the
reasons I explained before), so I did not notice when I broke it - it did
work initially. I'll look into it, but it may take some time.

Andrey

 On Tue, 13 Jun 2017 03:34:30 -0700 Jorick Astrego
jor...@theidiotcompany.eu wrote 

 On 05/31/2017 05:53 PM, Elphel Support wrote:
 

  Hi,
  Currently we are testing a panoramic NC393 camera and are having
problems with the autoexposure on multiple image sensors.

  When there is half shadow, half bright light in the collective images,
the image is dark in the shadow part. Does autoexposure apply globally, or
should every sensor apply its own autoexposure settings?
 
 Jorick, with the limited dynamic range of the small format sensors it is a 
challenge to capture maximal data from the image, and there is no universal 
fits-all set of autoexposure parameters. We probably need to create some 
tutorial about it.
 
 The overall strategy is to keep maximal data from each channel - the EXIF
data in each image contains the acquisition settings, so it is possible to
match individual channels after acquisition - that may just require more
than 8 bits of intensity for the intermediate data.
 
 Each channel operates its autoexposure independently, so the output may
not match in raw form - it needs post-processing. For example, channels can
be combined into a single 16-bit-per-color panorama and then a high-pass
filter (with a low cutoff frequency of ~1/2000 px) applied to reduce the
difference between the bright and dark parts of the panorama.
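A sketch of that equalization step (a box blur stands in for a proper low-pass filter; `radius` plays the role of the ~1/2000 px cutoff, and edge handling is ignored here):

```python
import numpy as np

# Subtract a heavily smoothed copy of each row so large-scale exposure
# differences between channels flatten out, keeping fine detail.
def highpass_rows(img, radius=1000):
    img = img.astype(np.float64)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    low = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return img - low + img.mean()  # re-center around the global mean

# Two "channels" exposed differently: a step in brightness
pano = np.hstack([np.full((4, 32), 100.0), np.full((4, 32), 200.0)])
flat = highpass_rows(pano, radius=8)
```

Away from the seam, both halves end up near the panorama's global mean, which is exactly the balancing effect described above.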
 
 
  Thanks for the explanation. The problem we have with post-processing is
that for this phase of the project we are doing realtime image analytics
with CUDA on an MJPEG stream.

 We will be doing some raw capturing, but we're looking to trigger this on
demand.

 Maybe we could set the exposure parameters from the algorithm that does
the analytics, but I don't know if that will respond fast enough, and it
will take up processing power. We will have to mess about with it some
more ;-)
 
What would be a way to get a more balanced exposure?
 
 There are multiple parameters that control autoexposure; the main ones
are window, level, and fraction. The daemons (one per each of the 4
channels) build histograms for all pixels inside the selected rectangular
area, and then calculate the required exposure so that the specified
fraction of all pixels have values below the specified level.
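That rule can be sketched in a few lines (illustrative names; the real daemon works on hardware histograms rather than pixel arrays):

```python
import numpy as np

# Scale exposure so that `fraction` of pixels ends up below `level`,
# assuming pixel values respond roughly linearly to exposure time.
def adjust_exposure(pixels, exposure, level=200, fraction=0.95):
    current = np.quantile(pixels, fraction)  # where the fraction sits now
    if current == 0:
        return exposure                      # black frame: leave unchanged
    return exposure * level / current

# Too-bright frame: the 95th percentile sits above the target level,
# so the suggested exposure gets shorter.
bright = np.random.default_rng(0).uniform(0, 250, 10_000)
new_exp = adjust_exposure(bright, exposure=10.0)
```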
 
 The default settings are for images where there are no very bright
objects (like the Sun) in the field of view, so if the camera is pointed
there, the picture will become all dark. If you change the fraction to,
say, 95%, then up to 5% of the pixels are allowed to be above the level -
that level does not have to be very high, so exceeding it does not
necessarily mean overexposure. The camvc program allows you to adjust this
pair (fraction/level) graphically. You can turn off autoexposure, set it
manually to the desired level, and then move the slider for the fraction -
the level value will be adjusted to match it.
 
 
  
 Ok understood, we will do some more testing.
 
Second question: is binning supported, and do we have to do any additional
steps for this? We'd like to reduce the resolution while still using the
whole sensor. When I set it to 1/2 horizontally and 1/2 vertically, the
output gets corrupted.
 
 Binning is supported; it should all be the same as in the 353 camera. Can
you please describe precisely how you got corrupted images? We will try to
reproduce (and address) the problem.
 
 On the other hand, binning and decimation are not really good on any of
the color mosaic sensors, as these modes reduce resolution by more than a
factor of two. Because of the Bayer mosaic (one row):
 R1 G1 R2 G2 R3 G3 R4 G4

 R1 will be merged with R2, R3 with R4, and similarly G1 with G2:
 R1+R2, G1+G2, R3+R4, G3+G4, ...
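The merge pattern above can be demonstrated on one Bayer row (a sketch of the idea, not camera code):

```python
import numpy as np

# 1/2 horizontal binning on one Bayer row: same-color neighbors are
# summed (R1+R2, G1+G2, ...), so the sampling grid gets coarser than a
# simple 2x downscale - which is why resolution drops by more than half.
row = np.array([10, 20, 12, 22, 14, 24, 16, 26])  # R1 G1 R2 G2 R3 G3 R4 G4
r = row[0::2]                  # R1 R2 R3 R4
g = row[1::2]                  # G1 G2 G3 G4
binned_r = r[0::2] + r[1::2]   # R1+R2, R3+R4
binned_g = g[0::2] + g[1::2]   # G1+G2, G3+G4

# interleave back into a half-length Bayer row
binned = np.empty(4, dtype=row.dtype)
binned[0::2], binned[1::2] = binned_r, binned_g
print(binned)  # [22 42 30 50]
```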
 
  For the realtime analytics the resolution is too high; currently we
scale it down while processing. I was under the impression that with
binning you would have all the light but half the resolution (I tried 1/2
binning).
 So I just set horizontal and vertical to "1/2" and got the corrupted
image. When I set them back to the original setting, the image remains
corrupted. I haven't had time to check whether it stores the image
uncorrupted yet.
 
 This could be due to outdated firmware, so we'll start with upgrading that.
 
 I could probably also modify the streamer to output a lower resolution. 
h264/h265 support would be even better. The idea is to run the analytics and 
streaming and only grab the interesting frames in raw from the circular buffer.
 
 How difficult would a color space conversion to YUV be on the Elphel?
We're streaming so much of the same color (green ;-) and this would save
bandwidth; it would also be easier to change the brightness on parts of the
image.
 
   
  
  
  Also we have a problem in the Camogm recording application, we use fast 
recording