Re: [ANNOUNCE DRAFT] Kernel Summit Media Workshop 2015 report - Seoul

2015-11-18 Thread Mauro Carvalho Chehab
On Thu, 12 Nov 2015 17:53:29 -0200
Mauro Carvalho Chehab  wrote:

> That's the first draft of the report on the KS workshop that we had in Seoul.
> 
> It is based on the notes we took on Etherpad, but I had to add several things
> from my memory and from Hans' slide deck.
> 
> A graphical version of this draft is at:
>   http://linuxtv.org/news.php?entry=2015-11-12.mchehab
> 
> TODO:
>   Add group photo and links to the presentations
>   Fix some visual issues on the web page.
> 

I updated the LinuxTV entry today with the second version, adding the
group photo and a link to Hans' slide deck. The visual issues from
v1 were fixed too.

The only pending item is the slide deck from Pawel.

Regards,
Mauro


[ANNOUNCE DRAFT] Kernel Summit Media Workshop 2015 report - Seoul

2015-11-12 Thread Mauro Carvalho Chehab
That's the first draft of the report on the KS workshop that we had in Seoul.

It is based on the notes we took on Etherpad, but I had to add several things
from my memory and from Hans' slide deck.

A graphical version of this draft is at:
http://linuxtv.org/news.php?entry=2015-11-12.mchehab

TODO:
Add group photo and links to the presentations
Fix some visual issues on the web page.

---

Linux Kernel Summit Media Workshop – Seoul 2015

Attendees list:

Arnd Bergmann – Linaro – a...@arndb.de
David Howells – Red Hat – dhowe...@redhat.com
Geunyoung Kim – Samsung Visual Display Business – nenggun@samsung.com
Hans Verkuil – Cisco Systems Norway – hverk...@xs4all.nl
Ikjoon Kim – Samsung Visual Display Business – ikjoon@samsung.com
Inki Dae – Samsung Software R&D Center – inki@samsung.com
Javier Martinez Canillas – Samsung OSG – jav...@osg.samsung.com
Junghak Sung – Samsung Visual Display Business – jh1009.s...@samsung.com
Laurent Pinchart – Ideas on Board – laurent.pinch...@ideasonboard.com
Luis de Bethencourt – Samsung OSG – lui...@osg.samsung.com
Mark Brown – Linaro – broo...@kernel.org
Mauro Carvalho Chehab – Samsung OSG – mche...@osg.samsung.com
Minsong Kim – Samsung Visual Display Business – ms17@samsung.com
Pawel Osciak – Google – pa...@osciak.com
Rany Kwon – Samsung Visual Display Business – rany.k...@samsung.com
Reynaldo – Samsung OSG – reyna...@osg.samsung.com
Seung-Woo Kim – Samsung Software R&D Center – sw0312@samsung.com
Shuah Khan – Samsung OSG – shua...@osg.samsung.com
Thiago – Samsung OSG – thiag...@osg.samsung.com
Tomasz Figa – Google – tf...@chromium.org
Vinod Koul – Intel – vinod.k...@intel.com

1. Codec API


Stream API
==========

The original V4L2 codec API was developed along with the Exynos codec driver.
As the device implements high-level operations in hardware, the resulting API
was high-level as well, with drivers accepting unprocessed raw streams. This
matches older ARM SoCs, where CPU power wasn’t deemed sufficient to implement
stream parsing.

Drivers implement two V4L2 buffer queues, one on the uncompressed side and one
on the compressed side. The two queues operate independently, without a 1:1
correspondence between consumed and produced buffers (for instance, when
starting video encoding, reference frames need to be accumulated before any
output is produced, and on the decoding side the bitstream header must be
parsed before CAPTURE buffers can be allocated). The mem2mem V4L2 kernel
framework thus can’t be used to implement such codec drivers, as it hardcodes
a 1:1 correspondence between the queues. This is a kernel framework issue, not
a V4L2 userspace API issue.

(For stream API fundamentals see Pawel’s slides)
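
To make the two-queue model concrete, here is a minimal userspace sketch of a
stream-API decoder setup. The device node, H.264 pixel format, multi-planar
buffer types and buffer counts are illustrative assumptions, and all error
handling, mapping and queueing loops are omitted:

/* Sketch: the two independent queues of a stream-API (stateful) decoder.
 * /dev/video0, the formats and the buffer counts are placeholders. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);

	/* OUTPUT queue: compressed bitstream fed by the application. */
	struct v4l2_format out_fmt = { .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE };
	out_fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
	ioctl(fd, VIDIOC_S_FMT, &out_fmt);

	struct v4l2_requestbuffers out_reqs = {
		.count = 4,
		.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
		.memory = V4L2_MEMORY_MMAP,
	};
	ioctl(fd, VIDIOC_REQBUFS, &out_reqs);

	/* Queue bitstream buffers and start streaming on the OUTPUT side,
	 * then wait (e.g. for a source-change event) until the hardware has
	 * parsed the headers and the coded resolution is known. */

	/* CAPTURE queue: decoded frames. It can only be set up after the
	 * header parsing above, since the format depends on the bitstream. */
	struct v4l2_format cap_fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE };
	ioctl(fd, VIDIOC_G_FMT, &cap_fmt);

	struct v4l2_requestbuffers cap_reqs = {
		.count = 8,
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
		.memory = V4L2_MEMORY_MMAP,
	};
	ioctl(fd, VIDIOC_REQBUFS, &cap_reqs);

	/* From here on the two queues run independently: there is no 1:1
	 * mapping between queued bitstream buffers and dequeued frames. */
	return 0;
}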

Frame API (Slice API)
=====================

CPUs are getting faster in the ARM world. The trend is to implement lower-level
hardware codecs that require stream parsing on the CPU. CPU code needs to slice
the stream, extract information, process it and pass the results to a
shader-like device. This is the model used on Intel platforms and implemented
in the VA-API library.

Drivers still implement two V4L2 buffer queues, but the encoded stream is split 
into frames of slices, and a large number of codec-specific controls need to be 
set from parsed stream information.

Stream parsing and parameter calculation are better done in userspace.
Userspace is responsible for managing reference frames and their lifetime, and
for passing data to the codec in such a way that an input buffer will always
produce an output buffer. The two queues operate together, with a 1:1
correspondence between buffers. The mem2mem framework is thus usable.

Source buffers contain only slice data (macroblocks + coefficient data).
Controls carry the information extracted from stream parsing, the list of
reference frames and the DPB (Decoded Picture Buffer). The request API can be
used to associate controls with source buffers.
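
As an illustrative sketch of one decode iteration under this model: the
slice-parameters control ID and its payload layout below are hypothetical,
since the codec-specific compound controls and the request API were still
under discussion at the time; error handling is omitted.

/* Sketch of one frame-API decode iteration. The slice-parameters control
 * (ID and payload) is hypothetical; with the request API these controls
 * would be bound to the source buffer queued below. */
#include <sys/ioctl.h>
#include <linux/videodev2.h>

struct hypothetical_slice_params {
	/* Information extracted from stream parsing in userspace: */
	unsigned int frame_num;
	unsigned int ref_frame_list[16];	/* indices into the DPB */
	unsigned int dpb_size;
};

static void decode_one_slice(int fd, struct v4l2_buffer *src,
			     struct v4l2_buffer *dst,
			     struct hypothetical_slice_params *parsed)
{
	/* 1. Pass the parsed parameters to the driver as an extended
	 *    (compound) control. */
	struct v4l2_ext_control ctrl = {
		.id = 0,	/* hypothetical V4L2_CID_..._SLICE_PARAMS */
		.size = sizeof(*parsed),
		.ptr = parsed,
	};
	struct v4l2_ext_controls ctrls = {
		.count = 1,
		.controls = &ctrl,
	};
	ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);

	/* 2. Queue the source buffer, which holds only slice data
	 *    (macroblocks + coefficients, no headers). */
	ioctl(fd, VIDIOC_QBUF, src);

	/* 3. Queue a capture buffer and dequeue both sides: with the frame
	 *    API every input buffer produces exactly one output buffer,
	 *    which is the 1:1 model mem2mem expects. */
	ioctl(fd, VIDIOC_QBUF, dst);
	ioctl(fd, VIDIOC_DQBUF, dst);
	ioctl(fd, VIDIOC_DQBUF, src);
}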

Keeping references to reference frames is one of the remaining problems
(especially with DMABUF, and possibly with MMAP in the future, once we have
the ability to destroy MMAP buffers while streaming). This problem also exists
for the stream API. More discussion is needed to design a solution.

Pawel promised to upstream the ChromeOS codec drivers code this year (and you
should also be nagging Tomasz to do it…).

For encoders, header generation should probably be done in userspace as well,
since the code is complex and doesn’t require a kernel implementation.

Userspace code should be implemented as libv4l2 plugins to interface between 
the frame API exposed by the kernel and a stream API exposed to applications.
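
A skeleton for such a plugin is sketched below. It assumes the existing
libv4l plugin hooks (libv4l-plugin.h, an exported struct libv4l_dev_ops named
libv4l2_plugin); the stream parsing and translation logic is only indicated
by comments:

/* Skeleton of a libv4l2 plugin translating between the kernel frame
 * (slice) API and a stream API exposed to applications. Assumes the
 * libv4l plugin interface from libv4l-plugin.h; all parsing logic elided. */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <libv4l-plugin.h>

struct codec_plugin_state {
	int fd;
	/* parser state, reference-frame bookkeeping, ... */
};

static void *codec_plugin_init(int fd)
{
	struct codec_plugin_state *state;

	/* Only claim devices that expose the frame API; returning NULL
	 * here makes libv4l2 pass all ioctls through untouched. */
	state = calloc(1, sizeof(*state));
	if (state)
		state->fd = fd;
	return state;
}

static void codec_plugin_close(void *dev_ops_priv)
{
	free(dev_ops_priv);
}

static int codec_plugin_ioctl(void *dev_ops_priv, int fd,
			      unsigned long cmd, void *arg)
{
	/* Intercept the buffer and format ioctls here: parse the incoming
	 * stream, split it into slices, fill the codec controls and issue
	 * the corresponding frame-API ioctls to the kernel. Everything
	 * else is passed through unchanged. */
	return ioctl(fd, cmd, arg);
}

const struct libv4l_dev_ops libv4l2_plugin = {
	.init = codec_plugin_init,
	.close = codec_plugin_close,
	.ioctl = codec_plugin_ioctl,
};

This keeps applications on the familiar stream semantics while the kernel
driver only needs to implement the simpler frame/slice interface.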

Status
======

See Pawel’s slides.

Discussion points
=================

Further discussion should happen during the workshop week (scheduled for
Tuesday at 11:30).

References:
===========

Intel libVA SDK: https://bugs.freedesktop.org/show_bug.cgi?id=92533
Request API: