David,

The 353 has room and hardware for only one compressor :-(

The 393 FPGA block diagram is on the project git page - 
https://git.elphel.com/Elphel/x393  - you can see all the channels and 
FPGA-CPU/system memory connections there.

And this is why the 393 has 12x the performance of the 353 - 4 channels instead 
of one, each 3 times faster.

Andrey

---- On Tue, 25 Jul 2017 22:26:48 -0700 David McPike 
<davidmcp...@gmail.com> wrote ---- 

I started setting up a 393 build environment and realized that we're talking 
about the new cameras.  The vehicle we're testing right now is still stuck with 
the 353s.  Is the same thing possible in those cameras?

We are restarting development now, so I'll begin working on integrating the new 
cameras soon.


Thanks much,
David


On Tue, Jul 25, 2017 at 10:18 PM David McPike <davidmcp...@gmail.com> 
wrote:

Thanks, Andrey.  Can you direct me to documentation or provide some additional 
pointers regarding the adjustments necessary to achieve multiple compression 
parameters on a single sensor?  I'd like to get a better picture before 
deciding which route we'll take.
Best,
David


 

On Sun, Jul 23, 2017 at 9:32 PM Elphel Support 
<support-list@support.elphel.com> wrote:

David, 

gstreamer seems to have the lowest latency - 
https://blog.elphel.com/2017/07/current-video-stream-latency-and-a-way-to-reduce-it
 - and it is probably easy to add pixel reordering after the JPEG decoder. 
Adding it to Chrome may be trickier. The JavaScript viewer at 
http://community.elphel.com/jp4viewer/ may not be fast enough.
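
For example, a minimal gstreamer receiver in C could look like this (an 
untested sketch - the RTSP URL is a placeholder for your camera, and a JP4 
pixel-reordering step would still have to be inserted after jpegdec):

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    /* rtspsrc/rtpjpegdepay/jpegdec are stock GStreamer elements;
       a JP4 reordering element would go right after jpegdec */
    GstElement *pipeline = gst_parse_launch(
        "rtspsrc location=rtsp://192.168.0.9:554 latency=0 ! "
        "rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink",
        &err);
    if (pipeline == NULL) {
        g_printerr("pipeline error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* block until error or end of stream */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

Setting latency=0 on rtspsrc disables the receive jitter buffer, which is what 
keeps the glass-to-glass delay low.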

Image size for the same quality is about the same, JP4 just preserves more 
"raw" data and does not introduce de-Bayer artifacts that are difficult to undo.

Two compressors use two different circbuf devices, sure - circbuf data comes 
after the compressor, so different quality settings would result in different data.
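
A minimal reader sketch (the per-channel device names and buffer length are my 
assumptions - check them against the driver):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define CIRCBUF_LEN (64 * 1024 * 1024)   /* placeholder buffer length */

int main(void)
{
    /* assumed device names for compressor channels 0 and 1 */
    int fd_hq = open("/dev/circbuf0", O_RDONLY);  /* high-quality channel */
    int fd_lq = open("/dev/circbuf1", O_RDONLY);  /* streaming channel */
    if (fd_hq < 0 || fd_lq < 0) {
        perror("open circbuf");
        return 1;
    }
    /* each buffer holds the output of its own compressor, so the two
       streams carry independently set JPEG qualities */
    unsigned char *hq = mmap(NULL, CIRCBUF_LEN, PROT_READ, MAP_SHARED, fd_hq, 0);
    unsigned char *lq = mmap(NULL, CIRCBUF_LEN, PROT_READ, MAP_SHARED, fd_lq, 0);
    if (hq == MAP_FAILED || lq == MAP_FAILED) {
        perror("mmap circbuf");
        return 1;
    }
    /* ... parse frame headers/metadata from each buffer here ... */
    return 0;
}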

Andrey



---- On Sun, 23 Jul 2017 19:07:12 -0700 David 
McPike<davidmcp...@gmail.com> wrote ---- 

I will need to set up my dev environment (Ubuntu) again.  I haven't built Elphel 
firmware for a long time.  If we set up two compressors, does that mean we would 
have two circbuf devices as well?

We will have different network characteristics this time around, so I don't 
have an exact answer on our bandwidth limits just yet.  The bigger concerns are 
ease of use and latency.  I am interested in adding functionality to the 
OpenROV project and using it for our piloting and telemetry platform on the 
surface.  The server and video feeds will be accessed via Chrome, making it 
very easy to use, and most users will be running Windows.  I would be a bit 
concerned about trying to use Gstreamer and getting it running nicely inside an 
HTML5 app in a browser, or about running Gstreamer separately outside of the 
telemetry and control applications.  I don't suppose you know of a Chrome 
extension that supports decoding JP4 images?  Can you tell me some average file 
sizes of JP4 compressed images compared to, say, 80% quality JPEGs?


We usually use an ACK'd layer 2 protocol like Homeplug AV, which already adds a 
bit of delay for every frame.  In the past, we have run our piloting camera at 
1/4 resolution to minimize transmission latency.  I'm happy to keep more sensor 
information in a higher resolution frame with bigger compression to keep 
latency lower.


Thanks!
David


On Sun, Jul 23, 2017 at 8:47 PM Elphel Support 
<support-list@support.elphel.com> wrote:

David,

That will need modification of the drivers ( 
https://git.elphel.com/Elphel/linux-elphel ): while in the FPGA the sensor and 
compressor channels can be paired in any combination, the driver currently has 
them one-to-one.


Do you have the Eclipse-based development environment installed and set up? All 
the header files responsible for communication with the FPGA are missing from 
the git repository - they are generated/updated from the Verilog source files.

And what is your bottleneck? What is the bandwidth of the cable? Maybe you can 
stream JP4 and decode it on the fly with gstreamer? 


Andrey


---- On Sun, 23 Jul 2017 17:44:50 -0700 David McPike 
<davidmcp...@gmail.com> wrote ---- 




Andrey, can you give me some advice or point me to documentation on how I would 
accomplish the following:

- Run two compressors
- The compressor used by camogm would write JP4 raw or 100% JPEG quality MJPEG 
images to disk
- Imgsrv would serve images at various JPEG qualities as we set them


Do I need to read circbuf frame metadata to figure this out?  What's the right 
way?  Running multiple sensors in each camera isn't achievable at this time.
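
For context, on the recording side I am picturing something like this (the pipe 
path and command syntax are from my memory of the 353 camogm docs, so they may 
need correcting):

#include <stdio.h>

int main(void)
{
    /* assumed path of the named pipe camogm was started with,
       e.g. "camogm /var/state/camogm_cmd" */
    FILE *cmd = fopen("/var/state/camogm_cmd", "w");
    if (cmd == NULL) {
        perror("camogm command pipe");
        return 1;
    }
    /* record the high-quality channel to disk; imgsrv keeps serving
       the other, more compressed channel independently */
    fprintf(cmd, "format=mov;prefix=/var/hdd/;start\n");
    fclose(cmd);
    return 0;
}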


Thanks!
David




On Sat, Jul 22, 2017 at 10:12 PM David McPike <davidmcp...@gmail.com> 
wrote:

Hi Andrey,


Our use case would be one sensor per camera, running around 10-12 fps, recording 
video to local disk, with the surface ROV pilot viewing streamed images.  I 
would like to maximize the image quality stored on disk while minimizing latency 
for images delivered to the surface over a limited-bandwidth network path.  
Using a single fps is totally fine.


I can't recall our typical resolution.  I don't think we ran the camera at 
full resolution in the past, so I'll keep your input in mind and try to 
finalize our requirements.


Thanks much!






On Sat, Jul 22, 2017 at 9:53 PM Elphel Support 
<support-list@support.elphel.com> wrote:

David,

Do you mean resolution or JPEG quality?

Different JPEG quality can be achieved with the same FPGA code but different 
software, if you use less than all 4 channels - run 2 compressors from the same 
sensor channel.
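
As a rough illustration (parsedit.php and QUALITY are standard camera 
parameters, but the per-channel addressing via sensor_port is my assumption 
here), the two compressors would simply get different quality settings:

#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    if (h == NULL)
        return 1;
    /* archive channel: near-lossless */
    curl_easy_setopt(h, CURLOPT_URL,
                     "http://192.168.0.9/parsedit.php?QUALITY=100");
    curl_easy_perform(h);
    /* streaming channel: heavier compression (per-channel addressing
       is an assumption - verify against the driver) */
    curl_easy_setopt(h, CURLOPT_URL,
                     "http://192.168.0.9/parsedit.php?sensor_port=1&QUALITY=60");
    curl_easy_perform(h);
    curl_easy_cleanup(h);
    curl_global_cleanup();
    return 0;
}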

Different resolutions simultaneously can only be achieved if the sensor runs at 
full resolution and the FPGA reduces it. That mode is not implemented, as it 
would be limited to the same (lowest of the 2) frame rate defined by the sensor. 
So it is not possible to combine low-fps/high-resolution and 
high-fps/low-resolution from the same sensor; the only thing that can be done 
is, similarly to the 353, short interruptions of the stream to get 
full-resolution images (a simple application can record them to the SSD).

Probably the most practical solution (having 4 sensor channels on the same 
camera) is to dedicate one sensor to streaming and the others to high-res recording.

Andrey

---- On Sat, 22 Jul 2017 18:45:23 -0700 David McPike 
<davidmcp...@gmail.com> wrote ---- 

Hi All,

I can't seem to find a clear explanation of this.  We are still primarily using 
the 353 cameras.  Is there a way to locally record high-quality images via 
camogm and at the same time retrieve JPEG images with higher compression via 
imgsrv?
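
On the retrieval side I imagine something as simple as this (port 8081 and the 
imgsrv commands are from the documentation; the towp/wait prefix and the IP are 
my guesses):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    CURL *h = curl_easy_init();
    if (h == NULL)
        return 1;
    FILE *out = fopen("frame.jpg", "wb");
    if (out == NULL) {
        curl_easy_cleanup(h);
        return 1;
    }
    /* /towp/wait/img should block until a new frame is compressed,
       then return it */
    curl_easy_setopt(h, CURLOPT_URL, "http://192.168.0.9:8081/towp/wait/img");
    curl_easy_setopt(h, CURLOPT_WRITEDATA, out);
    CURLcode rc = curl_easy_perform(h);
    fclose(out);
    curl_easy_cleanup(h);
    return rc == CURLE_OK ? 0 : 1;
}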


Thanks much,
David
_______________________________________________
Support-list mailing list
Support-list@support.elphel.com
http://support.elphel.com/mailman/listinfo/support-list_support.elphel.com
