Re: [FFmpeg-devel] [PATCH v4 2/2] rtp: rfc4175: add handler for YCbCr-4:2:2

2017-10-30 Thread Éloi Bail
> I think you misunderstood "unpublished" for some other word. Those
> specs are actually available for anyone that wants to read them (if
> they are willing to pay for them). 2110-20 isn't available on any of
> the usual sources for obtaining SMPTE standards documents.

> - Hendrik

Hi Hendrik. 

Yes, I see. SMPTE 2110-20 should be published very soon. I would then suggest 
adding "partial support of RFC 4175: RTP Payload Format for Uncompressed Video". 
I don't want to be pushy about changing the Changelog; the goal is to get 
people contributing on this topic. 

Eloi 


Re: [FFmpeg-devel] [PATCH v4 2/2] rtp: rfc4175: add handler for YCbCr-4:2:2

2017-10-26 Thread Éloi Bail
> As an open source project we cannot cite unpublished documents.

> Kieran

Well... I don't buy this explanation at all. 
Since when can we not even reference a non-open technology in an open source project? 

Dolby, DTS and NewTek technologies are already cited in the document, and those 
aren't open. 

Furthermore, official SMPTE documents are already referenced five times: 

Dolby E decoder and SMPTE 337M demuxer 
SMPTE VC-2 HQ profile support for the Dirac decoder 
SMPTE VC-2 native encoder supporting the HQ profile 
SMPTE 302M audio encoder 
SMPTE 302M AES3 audio decoder 

Eloi 


Re: [FFmpeg-devel] [PATCH v4 2/2] rtp: rfc4175: add handler for YCbCr-4:2:2

2017-10-24 Thread Éloi Bail
- On Apr 5, 2017, at 6:11 PM, Rostislav Pehlivanov  
wrote: 

> On 31 March 2017 at 16:36, Damien Riegel  > wrote:

> > This adds partial support for RFC 4175 (raw video over RTP). The
> > only supported formats are YCbCr-4:2:2 8-bit, because it's natively
> > supported by FFmpeg with the UYVY pixel format, and 10-bit, which requires
> > the vrawdepay codec to convert the payload into a format handled by
> > FFmpeg.

> > Signed-off-by: Damien Riegel 
> > ---
> > Changes in v4:
> > - use strncmp for string comparisons
> > - use AVERROR_INVALIDDATA instead of custom error codes

> > Changes in v3:
> > - rename rawvideo to rfc4175
> > - set pixel format in codec parameters
> > - add additional check to prevent buffer overflow

> > libavformat/Makefile | 1 +
> > libavformat/rtpdec.c | 1 +
> > libavformat/rtpdec_formats.h | 1 +
> > libavformat/rtpdec_rfc4175.c | 236 ++++++++++++++++++++++++++++++++++++++
> > 4 files changed, 239 insertions(+)
> > create mode 100644 libavformat/rtpdec_rfc4175.c

> > diff --git a/libavformat/Makefile b/libavformat/Makefile
> > index f56ef16532..a1dae894fe 100644
> > --- a/libavformat/Makefile
> > +++ b/libavformat/Makefile
> > @@ -55,6 +55,7 @@ OBJS-$(CONFIG_RTPDEC) += rdt.o \
> > rtpdec_qcelp.o \
> > rtpdec_qdm2.o \
> > rtpdec_qt.o \
> > + rtpdec_rfc4175.o \
> > rtpdec_svq3.o \
> > rtpdec_vc2hq.o \
> > rtpdec_vp8.o \
> > diff --git a/libavformat/rtpdec.c b/libavformat/rtpdec.c
> > index 53cdad7396..4acb1ca629 100644
> > --- a/libavformat/rtpdec.c
> > +++ b/libavformat/rtpdec.c
> > @@ -114,6 +114,7 @@ void ff_register_rtp_dynamic_payload_handlers(void)
> > ff_register_dynamic_payload_handler(&ff_qt_rtp_vid_handler);
> > ff_register_dynamic_payload_handler(&ff_quicktime_rtp_aud_handler);
> > ff_register_dynamic_payload_handler(&ff_quicktime_rtp_vid_handler);
> > + ff_register_dynamic_payload_handler(&ff_rfc4175_rtp_handler);
> > ff_register_dynamic_payload_handler(&ff_svq3_dynamic_handler);
> > ff_register_dynamic_payload_handler(&ff_theora_dynamic_handler);
> > ff_register_dynamic_payload_handler(&ff_vc2hq_dynamic_handler);
> > diff --git a/libavformat/rtpdec_formats.h b/libavformat/rtpdec_formats.h
> > index 3292a3d265..a436c9d62c 100644
> > --- a/libavformat/rtpdec_formats.h
> > +++ b/libavformat/rtpdec_formats.h
> > @@ -82,6 +82,7 @@ extern RTPDynamicProtocolHandler ff_qt_rtp_aud_handler;
> > extern RTPDynamicProtocolHandler ff_qt_rtp_vid_handler;
> > extern RTPDynamicProtocolHandler ff_quicktime_rtp_aud_handler;
> > extern RTPDynamicProtocolHandler ff_quicktime_rtp_vid_handler;
> > +extern RTPDynamicProtocolHandler ff_rfc4175_rtp_handler;
> > extern RTPDynamicProtocolHandler ff_svq3_dynamic_handler;
> > extern RTPDynamicProtocolHandler ff_theora_dynamic_handler;
> > extern RTPDynamicProtocolHandler ff_vc2hq_dynamic_handler;
> > diff --git a/libavformat/rtpdec_rfc4175.c b/libavformat/rtpdec_rfc4175.c
> > new file mode 100644
> > index 0000000000..498381dfd3
> > --- /dev/null
> > +++ b/libavformat/rtpdec_rfc4175.c
> > @@ -0,0 +1,236 @@
> > +/*
> > + * RTP Depacketization of RAW video (TR-03)
> > + * Copyright (c) 2016 Savoir-faire Linux, Inc
> > + *
> > + * This file is part of FFmpeg.
> > + *
> > + * FFmpeg is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * FFmpeg is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with FFmpeg; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
> > 02110-1301 USA
> > + */
> > +
> > +/* Development sponsored by CBC/Radio-Canada */
> > +
> > +#include "avio_internal.h"
> > +#include "rtpdec_formats.h"
> > +#include "libavutil/avstring.h"
> > +#include "libavutil/pixdesc.h"
> > +
> > +struct PayloadContext {
> > + char *sampling;
> > + int depth;
> > + int width;
> > + int height;
> > +
> > + uint8_t *frame;
> > + unsigned int frame_size;
> > + unsigned int pgroup; /* size of the pixel group in bytes */
> > + unsigned int xinc;
> > +
> > + uint32_t timestamp;
> > +};
> > +
> > +static int rfc4175_parse_format(AVStream *stream, PayloadContext *data)
> > +{
> > + enum AVPixelFormat pixfmt = AV_PIX_FMT_NONE;
> > + int bits_per_sample = 0;
> > + int tag = 0;
> > +
> > + if (!strncmp(data->sampling, "YCbCr-4:2:2", 11)) {
> > + tag = MKTAG('U', 'Y', 'V', 'Y');
> > + data->xinc = 2;
> > +
> > + if (data->depth == 8) {
> > + data->pgroup = 4;
> > + bits_per_sample = 16;
> > + pixfmt = 

Re: [FFmpeg-devel] TR-03 implementation

2017-02-16 Thread Éloi Bail
Hi, 

In November, we wrote to the mailing list about implementing support for TR-03 
in FFmpeg [1]. 
There were some doubts in the FFmpeg community about whether FFmpeg could 
handle demuxing 3 Gbps of RTP input without significantly modifying the 
RTP demuxer and/or doing kernel bypassing. 

CBC/Radio-Canada contracted us to test what was possible and to try to 
implement TR-03 in FFmpeg. Using two servers connected by a 10 Gbps fibre 
optic link and a switch, we performed several tests with various tools, which 
showed that it should be possible to receive and demux 3 Gbps of RTP raw 
video with a large enough RX queue in the NIC and in the socket. We then 
patched FFmpeg to support depayloading 8- and 10-bit raw video [2] and to 
process the input stream on a separate thread [3]. This allowed us to 
successfully receive a 3 Gbps raw video stream in FFmpeg and write the raw 
video to disk. We were also able to transcode it into H.264. 

Thus it seems to us that FFmpeg should be able to support TR-03 without 
significant modifications or kernel bypassing. 

Below is a more detailed description of our testing and development process: 

1. In the Linux kernel: using the iperf tool, we verified that the Linux 
kernel is able to handle 3 Gbps of UDP streams with payload sizes of 800 to 
1450 bytes. 

2. Using a simple RTP demuxer, we ensured that a user-space program is able 
to handle a 3 Gbps stream without dropping packets. When adding an 
increasing amount of processing per packet, we observed that packets are 
eventually dropped. We concluded that per-packet processing must be kept 
minimal to achieve reception of a 3 Gbps video stream. 

3. We played with GStreamer, which already implements an RTP raw video muxer 
/ demuxer. We were able to send a 3 Gbps video stream without dropping any 
packets. On reception, we experienced around 20% packet loss with a 3 Gbps 
video stream because the thread in charge of reading the socket was at 100% 
CPU. The GStreamer team is aware of that and has ideas to significantly 
reduce CPU usage by batching the per-packet processing with the recvmmsg 
syscall (see the first sketch after this list). 

4. We implemented an RTP demuxer compatible with RFC 4175 for the 4:2:2 8-bit 
and 4:2:2 10-bit pixel formats [2]. 

* Checking the ffmpeg tool code, we saw that separate input threads are used 
only if there is more than one input. With a minimal pipeline that reads 
an RTP stream from a socket and writes the raw video into a file, we 
observed that packets were dropped because too much time was spent on 
packet processing. 

We modified the ffmpeg tool to force the use of a dedicated input thread. 

5. Several queues are involved between packet reception and packet processing. 
Tuning each of them allowed us to reach zero packets dropped: 

* In the NIC queue: using ethtool, we increased the queue size from 453 
to its maximum (4078) to avoid packets being dropped in the NIC queue. 

* In the kernel queue: we observed no packets dropped after increasing the 
socket receive buffer size to 16 MB (see the second sketch after this list). 

* In the jitter buffer queue (FFmpeg): by default the jitter buffer is 
sized for 500 packets. With 1080p raw video (RFC 4175), we calculated 
that a single video frame leads to around 3000 packets (for example, a 
4:2:2 8-bit frame is 1920 x 1080 x 2 bytes, about 4.1 MB, which at payload 
sizes of roughly 1400 bytes gives close to 3000 packets). 

To be more resilient to packet reordering, we could increase the size of 
the jitter buffer, but we observed that a big jitter buffer adds significant 
per-packet processing and thus leads to packets being dropped in the kernel. 
In addition, the RFC 4175 payload header carries line numbers and offsets, 
which already makes frame reassembly resilient to packet reordering. 
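
Two of the points above may be easier to follow with a concrete illustration. 
First, the batching idea from point 3: a minimal, untested sketch of reading 
many UDP datagrams per syscall with recvmmsg(), so that the fixed syscall cost 
is paid once per batch instead of once per packet. The function name and the 
BATCH / PKT_MAX values are ours, chosen only for illustration. 

/* Sketch only: batch UDP reception with recvmmsg(). The per-datagram
 * sizes end up in msgs[i].msg_len after the call returns. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define BATCH   64
#define PKT_MAX 1500

static int read_batch(int sock, unsigned char bufs[BATCH][PKT_MAX])
{
    struct mmsghdr msgs[BATCH];
    struct iovec   iovs[BATCH];
    int i;

    memset(msgs, 0, sizeof(msgs));
    for (i = 0; i < BATCH; i++) {
        iovs[i].iov_base           = bufs[i];
        iovs[i].iov_len            = PKT_MAX;
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    /* Block until at least one datagram arrives, then take up to BATCH;
     * returns the number of datagrams received, or -1 on error. */
    return recvmmsg(sock, msgs, BATCH, MSG_WAITFORONE, NULL);
}

Second, the kernel queue tuning from point 5: a sketch of how an application 
can request a 16 MB socket receive buffer. The 16 MB value mirrors what we 
used; everything else is illustrative. Note that the kernel clamps SO_RCVBUF 
to net.core.rmem_max, so that sysctl also has to be raised for the full size 
to take effect. 

/* Sketch only: enlarge the UDP socket receive buffer to 16 MB. */
#include <stdio.h>
#include <sys/socket.h>

static int set_rcvbuf_16mb(int sock)
{
    int size = 16 * 1024 * 1024;

    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}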

Results: 
* With: 
- our test setup composed of two servers running CentOS 7 linked by a 10 Gbps 
switch, 
- our modified FFmpeg, handling RFC 4175 and with improved reading 
performance, 
- NIC and kernel queues tuned and the FFmpeg jitter buffer disabled, 

we were able to: 
- send a 3 Gbps video stream with GStreamer, 
- receive with FFmpeg a 3 Gbps 4:2:2 8-bit video stream without dropping any 
packets or producing any video artifacts. 

* However, using the packed 4:2:2 10-bit pixel format, we encountered a 
performance degradation. Packed 4:2:2 10-bit is not supported in FFmpeg, so 
we decided to convert it into a planar 4:2:2 10-bit format. We believe that 
this conversion adds too much per-packet processing and thus leads to packets 
being dropped. 
We are still able to stream (and live transcode) 1080p 60 fps 4:2:2 10-bit 
without dropping packets; on reception the bandwidth is around 2.2 Gbps. 

[1]: http://ffmpeg.org/pipermail/ffmpeg-devel/2016-November/202554.html 
[2]: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-February/207253.html 


[FFmpeg-devel] TR-03 implementation

2016-11-10 Thread Éloi Bail
Hi all, 

Media broadcasters are moving toward carriage of live professional media 
over IP. A group has been set up to recommend a standard for that, and it 
produced TR-03 [1]. It is only a recommendation, but it should be close 
enough to what SMPTE will eventually adopt to start working on it. 

Radio-Canada/CBC, a major Canadian broadcaster, wants to make FFmpeg 
compatible with TR-03 and the future SMPTE standard. 

Video pixel formats are defined in RFC 4175 (section 4.3, Payload Data) for 
YCbCr video [2]. According to the discussion we had on #ffmpeg-devel 
yesterday, many of those pixel formats are not supported in FFmpeg. 
These formats are designed to pack together all the samples of a 
pixel group. 

One solution would be to repack the samples into a compatible format, e.g. 
planar. However, repacking does not seem appropriate because it has an 
impact on performance, especially when dealing with high resolution 
uncompressed streams. 
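
For illustration only, here is a rough sketch of the kind of repacking that 
would be needed: unpacking one YCbCr-4:2:2 10-bit pixel group (5 bytes 
carrying 2 pixels, samples packed big-endian as Cb0, Y0, Cr0, Y1 per RFC 4175 
section 4.3) into 16-bit planar samples. The function and variable names are 
ours; nothing here comes from FFmpeg. Doing this for every pgroup of a 
multi-gigabit stream is exactly the per-packet cost we would like to avoid. 

/* Sketch only: unpack one RFC 4175 YCbCr-4:2:2 10-bit pixel group
 * (5 bytes = 2 pixels) into 16-bit planar samples. */
#include <stdint.h>

static void unpack_pgroup_422_10(const uint8_t *src,
                                 uint16_t *y, uint16_t *cb, uint16_t *cr)
{
    /* 5 bytes -> 40 bits -> four 10-bit samples */
    uint64_t bits = ((uint64_t)src[0] << 32) | ((uint64_t)src[1] << 24) |
                    ((uint64_t)src[2] << 16) | ((uint64_t)src[3] <<  8) |
                     (uint64_t)src[4];

    cb[0] = (bits >> 30) & 0x3ff;   /* Cb0, shared by the two pixels */
    y[0]  = (bits >> 20) & 0x3ff;   /* Y0  */
    cr[0] = (bits >> 10) & 0x3ff;   /* Cr0, shared by the two pixels */
    y[1]  =  bits        & 0x3ff;   /* Y1  */
}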

We would like to contribute to FFmpeg by adding support for those pixel 
formats and thus make FFmpeg usable for the next generation of 
broadcasting products. 

Could you tell us whether such a contribution would make sense, and possibly 
advise us on the best way to address it? 

Best, 

Eloi 

[1] http://www.videoservicesforum.org/download/technical_recommendations/VSF_TR-03_DRAFT_2015-10-19.pdf 
[2] https://tools.ietf.org/html/rfc4175 