[Libav-user] FFmpeg RTP payload 96 instead of 97
I want to create an RTP audio stream with FFmpeg. When I run my application I get the following SDP file configuration:

    Output #0, rtp, to 'rtp://127.0.0.1:8554':
        Stream #0:0: Audio: pcm_s16be, 8000 Hz, stereo, s16, 256 kb/s

    SDP:
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 57.25.101
    m=audio 8554 RTP/AVP 96
    b=AS:256
    a=rtpmap:96 L16/8000/2

However, when I try to read it with `ffplay -i test.sdp -protocol_whitelist file,udp,rtp`, it fails and shows the following:

    ffplay version N-78598-g98a0053 Copyright (c) 2003-2016 the FFmpeg developers
      built with gcc 5.3.0 (GCC)
      configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
      libavutil      55. 18.100 / 55. 18.100
      libavcodec     57. 24.103 / 57. 24.103
      libavformat    57. 25.101 / 57. 25.101
      libavdevice    57.  0.101 / 57.  0.101
      libavfilter     6. 34.100 /  6. 34.100
      libswscale      4.  0.100 /  4.  0.100
      libswresample   2.  0.101 /  2.  0.101
      libpostproc    54.  0.100 / 54.  0.100
        nan : 0.000 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0

    (...waits indefinitely.)

The only way to make it work is to modify the payload type in the SDP file from 96 to 97. Can someone tell me why? Where and how is this number defined? Here is my source.
    #include <stdio.h>
    #include <stdlib.h>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavformat/avio.h>
    #include <libavutil/opt.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/frame.h>
    #include <libavutil/samplefmt.h>
    #include <libavutil/mathematics.h>
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base,
                           AVStream *st, AVPacket *pkt)
    {
        /* rescale output packet timestamp values from codec to stream timebase */
        av_packet_rescale_ts(pkt, *time_base, st->time_base);
        /* Write the compressed frame to the media file. */
        return av_interleaved_write_frame(fmt_ctx, pkt);
    }

    /*
     * Audio encoding example
     */
    static void audio_encode_example(const char *filename)
    {
        AVPacket pkt;
        int i, j, k, ret, got_output;
        int buffer_size;
        uint16_t *samples;
        float t, tincr;
        AVCodec *outCodec = NULL;
        AVCodecContext *outCodecCtx = NULL;
        AVFormatContext *outFormatCtx = NULL;
        AVStream *outAudioStream = NULL;
        AVFrame *outFrame = NULL;

        ret = avformat_alloc_output_context2(&outFormatCtx, NULL, "rtp", filename);
        if (!outFormatCtx || ret < 0) {
            fprintf(stderr, "Could not allocate output context\n");
        }
        outFormatCtx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;
        outFormatCtx->oformat->audio_codec = AV_CODEC_ID_PCM_S16BE;

        /* find the encoder */
        outCodec = avcodec_find_encoder(outFormatCtx->oformat->audio_codec);
        if (!outCodec) {
            fprintf(stderr, "Codec not found\n");
            exit(1);
        }
        outAudioStream = avformat_new_stream(outFormatCtx, outCodec);
        if (!outAudioStream) {
            fprintf(stderr, "Cannot add new audio stream\n");
            exit(1);
        }
        outAudioStream->id = outFormatCtx->nb_streams - 1;
        outCodecCtx = outAudioStream->codec;
        outCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;

        /* select other audio parameters supported by the encoder */
        outCodecCtx->sample_rate = 8000;
        outCodecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
        outCodecCtx->channels = 2;

        /* open it */
        if (avcodec_open2(outCodecCtx, outCodec, NULL) < 0) {
            fprintf(stderr, "Could not open codec\n");
            exit(1);
        }

        // PCM has no frame size, so we have to specify one explicitly
        outCodecCtx->frame_size = 1152;

        av_dump_format(outFormatCtx, 0, filename, 1);

        char buff[2048] = { 0 };  /* was char buff[1]: far too small for an SDP */
        ret = av_sdp_create(&outFormatCtx, 1, buff, sizeof(buff));
        printf("%s", buff);

        ret = avio_open2(&outFormatCtx->pb, filename, AVIO_FLAG_WRITE, NULL, NULL);
        ret = avformat_write_header(outFormatCtx, NULL);
        printf("ret = %d\n", ret);
        if (ret < 0) {
            exit(1);
        }

        /* frame containing input audio */
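For context on the 96 in `m=audio 8554 RTP/AVP 96`: RTP/AVP payload types 0-95 are static assignments from RFC 3551, while 96-127 are dynamic. FFmpeg's RTP muxer falls back to the first dynamic number, 96, whenever the stream has no static assignment; the static L16 entries (10 and 11) are only defined at 44100 Hz, so L16/8000/2 gets a dynamic type. A minimal sketch of that selection logic (not FFmpeg's actual code, just the RFC 3551 rule):

```c
#include <string.h>

/* Returns a static RTP/AVP payload type for the given codec/clock/channel
 * combination, or -1 if RFC 3551 defines none (a few audio entries shown). */
static int static_payload_type(const char *codec, int rate, int channels) {
    if (!codec)
        return -1;
    if (rate == 8000 && channels == 1) {
        if (!strcmp(codec, "PCMU")) return 0;
        if (!strcmp(codec, "PCMA")) return 8;
    }
    if (!strcmp(codec, "L16") && rate == 44100) {
        if (channels == 2) return 10;
        if (channels == 1) return 11;
    }
    return -1;
}

/* Pick the wire payload type: the static one if it exists,
 * otherwise the first dynamic type, 96. */
int pick_payload_type(const char *codec, int rate, int channels) {
    int pt = static_payload_type(codec, rate, channels);
    return pt >= 0 ? pt : 96;
}
```

If you need a specific number on the wire, the rtp muxer exposes a `payload_type` private option, e.g. `av_opt_set_int(outFormatCtx->priv_data, "payload_type", 97, 0)` before writing the header; verify the option name against your FFmpeg version.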
[Libav-user] Generated H264 video is only 1 second long with only 1 frame shown (Probably a PTS/DTS problem)
Hi,

After encoding an H264 video, the output shows only 1 frame with a duration of 1 second (that's in VLC media player; it doesn't play at all in WMP, which reports a corrupt file). I also get a "[libx264 @ 01f60060] non-strictly-monotonic PTS" warning while encoding frames, so I guess it is a PTS problem. How do I set the PTS/DTS for a packet in H264? Here is my code (http://pastebin.com/WibnmKLV):

    #include "libavformat/avformat.h"
    #include "libavcodec/avcodec.h"
    #include "libavutil/avutil.h"
    #include "libavutil/rational.h"
    #include <stdio.h>

    int main() {
        av_register_all();
        //avcodec_register_all();
        av_log_set_level(AV_LOG_DEBUG);
        AVFormatContext *ps = avformat_alloc_context();
        AVFormatContext *ps2 = NULL; //avformat_alloc_context();
        AVOutputFormat *oF = av_guess_format("mp4", NULL, "video/mp4");
        if (avformat_open_input(&ps, "vid.mp4", NULL, NULL) != 0) {
            printf("Failed to open input file.\n");
            return -1;
        }
        avformat_alloc_output_context2(&ps2, oF, NULL, "vid2.mp4");
        avformat_find_stream_info(ps, NULL);
        ps2->metadata = ps->metadata;
        AVCodecContext **pC  = (AVCodecContext**)malloc(ps->nb_streams * sizeof(AVCodecContext*)),
                       **p2C = (AVCodecContext**)malloc(ps->nb_streams * sizeof(AVCodecContext*));
        AVStream *oStream = NULL;
        AVStream *iStream = NULL;
        AVCodec *encoder = NULL;
        AVCodec *decoder = NULL;
        AVCodecContext *strCtx = NULL;
        unsigned int i;
        avio_open(&ps2->pb, "vid2.mp4", AVIO_FLAG_WRITE);
        for (i = 0; i < ps->nb_streams; i++) {
            printf("%d\n", i);
            iStream = ps->streams[i];
            pC[i] = iStream->codec;
            if (pC[i]->codec_type == AVMEDIA_TYPE_UNKNOWN) {
                printf("Skipping bad stream\n");
                continue;
            }
            if (pC[i]->codec_type == AVMEDIA_TYPE_VIDEO || pC[i]->codec_type == AVMEDIA_TYPE_AUDIO) {
                encoder = avcodec_find_encoder(pC[i]->codec_id);
                //encoder->init(p2C[i]);
                if (!encoder) {
                    av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
                    return AVERROR_INVALIDDATA;
                }
                oStream = avformat_new_stream(ps2, encoder);
                strCtx = oStream->codec;
                // We have to set oStream->codec parameters for write_header to work,
                // since write_header only relies on the stream parameters.
                p2C[i] = avcodec_alloc_context3(encoder);
                AVDictionary *param = NULL;
                if (pC[i]->codec_type == AVMEDIA_TYPE_VIDEO) {
                    p2C[i]->width = pC[i]->width;
                    p2C[i]->height = pC[i]->height;
                    if (encoder->pix_fmts)
                        p2C[i]->pix_fmt = encoder->pix_fmts[0];
                    else
                        p2C[i]->pix_fmt = pC[i]->pix_fmt;
                    p2C[i]->sample_rate = pC[i]->sample_rate;
                    p2C[i]->sample_aspect_ratio = pC[i]->sample_aspect_ratio;
                    p2C[i]->frame_size = pC[i]->frame_size;
                    p2C[i]->time_base = pC[i]->time_base;
                    strCtx->width = pC[i]->width;
                    strCtx->height = pC[i]->height;
                    if (encoder->pix_fmts)
                        strCtx->pix_fmt = encoder->pix_fmts[0];
                    else
                        strCtx->pix_fmt = pC[i]->pix_fmt;
                    strCtx->sample_rate = pC[i]->sample_rate;
                    strCtx->sample_aspect_ratio = pC[i]->sample_aspect_ratio;
                    strCtx->time_base = pC[i]->time_base;
                    strCtx->frame_size = pC[i]->frame_size;
                } else {
                    p2C[i]->sample_rate = pC[i]->sample_rate;
                    p2C[i]->channel_layout = pC[i]->channel_layout;
                    p2C[i]->channels = av_get_channel_layout_nb_channels(p2C[i]->channel_layout);
                    // take first format from list of supported formats
                    p2C[i]->sample_fmt = encoder->sample_fmts[0];
                    p2C[i]->time_base = (AVRational){1, p2C[i]->sample_rate};
                    strCtx->sample_rate = pC[i]->sample_rate;
                    strCtx->channel_layout = pC[i]->channel_layout;
                    strCtx->channels = av_get_channel_layout_nb_channels(strCtx->channel_layout);
                    // take first format from list of supported formats
                    strCtx->sample_fmt = encoder->sample_fmts[0];
                    strCtx->time_base = (AVRational){1, strCtx->sample_rate};
                }
                //AVCodecParameters *par = avcodec_parameters_alloc();
                //avcodec_parameters_from_context(par, pC[i]);
                //avcodec_parameters_to_context(p2C[i], par);
                decoder = avcodec_find_decoder(pC[i]->codec_id);
                if (decoder == NULL)
                    printf("Couldn't find decoder\n");
                if (pC[i]->codec_type == AVMEDIA_TYPE_VIDEO) {
                    int ret1 = avcodec_open2(pC[i], decoder, &param);
                    int
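On the "non-strictly-monotonic PTS" warning: the muxer requires each packet's pts, expressed in the stream time base, to strictly increase. Timestamps produced in the encoder time base must be rescaled before `av_interleaved_write_frame()`, which is what `av_packet_rescale_ts()` / `av_rescale_q()` do. A standalone sketch of that conversion (same arithmetic as av_rescale_q, minus its rounding and overflow handling):

```c
#include <stdint.h>

typedef struct { int num, den; } Rational;

/* Convert a timestamp expressed in time base bq into time base cq.
 * This mirrors the core arithmetic of FFmpeg's av_rescale_q:
 * a * bq / cq, computed as a * bq.num * cq.den / (bq.den * cq.num). */
int64_t rescale_q(int64_t a, Rational bq, Rational cq) {
    return a * bq.num * cq.den / ((int64_t)bq.den * cq.num);
}
```

For example, frame n of a 25 fps stream (codec time base 1/25) maps to pts n * 3600 in an MP4 stream time base of 1/90000; feeding every packet the same pts, by contrast, is exactly what triggers the libx264 warning.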
Re: [Libav-user] How to achieve 30 FPS using APIs on Android?
On Tue, Sep 20, 2016 at 1:54 PM, Amber Beriwal wrote:
> Hi,
>
> Decode only. We are not generating PNG. These are the timings of obtaining
> YUV data from a frame.
>
> Regards,
> Amber Beriwal
> Newgen Software Technologies Ltd.

You cannot achieve 30 FPS for 720p video in software on the specified platform. Luckily, most Android devices have hardware decoders that can easily handle high-resolution video, but they are not very flexible. The bottleneck is usually the encoder and fine-tuning its communication with the Android camera; see e.g. http://stackoverflow.com/a/19923966/192373.

I did some experiments with sliced multithreading for x264, and (with the fastest profile) I could produce 25 FPS for 720p on similar hardware with a software encoder, using 2 cores (with NEON optimizations enabled). But not 30 FPS. And this was a severe strain on the battery.

Once you have a sliced video stream (or file), you can decode it on two cores, and your own benchmark suggests that the goal is reachable: 60 ms/frame / 2 threads => ~30 fps!

BR,
Alex

___
Libav-user mailing list
Libav-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user
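The "60 ms/frame / 2 threads => ~30 fps" estimate above is plain throughput arithmetic; a sketch, assuming perfectly parallel slice decoding:

```c
/* Frames per second achievable when the per-frame decode cost is split
 * evenly across independent slice-decoding threads. Idealized: assumes
 * zero synchronization overhead, so real numbers will come in lower. */
double achievable_fps(double ms_per_frame, int threads) {
    return 1000.0 * threads / ms_per_frame;
}
```

With the numbers from this thread, achievable_fps(60.0, 2) is about 33 fps, which is why the goal looks reachable on two cores but not on one (about 16 fps).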
Re: [Libav-user] Memory leak while using ff_get_buffer ?
After some searching, I found out that in some decoders the incoming AVFrame pointer is unreferenced with the av_frame_unref() API, and only then is ff_get_buffer() called. Can someone please explain why this is done? I am quite confused.
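The pattern exists because an AVFrame owns a reference to its data buffer: av_frame_unref() drops the frame's old reference before ff_get_buffer() attaches a new one, so a reused frame doesn't leak its previous buffer. A toy reference-counting model (not FFmpeg's implementation) showing the leak the unref step prevents:

```c
#include <stdlib.h>

/* Toy model of a reference-counted buffer, illustrating why decoders
 * call av_frame_unref() before ff_get_buffer(): the frame must release
 * its old buffer before a new one is attached. */
typedef struct Buf   { int refcount; } Buf;
typedef struct Frame { Buf *buf; } Frame;

int live_buffers = 0;   /* how many buffers currently exist */

Buf *buf_alloc(void) {
    Buf *b = malloc(sizeof(*b));
    b->refcount = 1;
    live_buffers++;
    return b;
}

void frame_unref(Frame *f) {
    if (f->buf && --f->buf->refcount == 0) {
        free(f->buf);
        live_buffers--;
    }
    f->buf = NULL;
}

/* analogous to ff_get_buffer(): attach a fresh buffer to the frame */
void get_buffer(Frame *f) {
    frame_unref(f);   /* without this, a reused frame would leak its old buffer */
    f->buf = buf_alloc();
}
```

Calling get_buffer() twice on the same frame leaves exactly one live buffer, because the first one is released inside the unref step.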
Re: [Libav-user] How to achieve 30 FPS using APIs on Android?
On Wed, Sep 21, 2016 at 7:17 AM, Amber Beriwal wrote:
> Hi,
>
> Yes, we built the FFmpeg libs ourselves. What is NEON support?
>
> Regards,
> Amber Beriwal
> Newgen Software Technologies Ltd.

From https://en.wikipedia.org/wiki/ARM_Cortex-A7 and https://en.wikipedia.org/wiki/ARM_architecture#Advanced_SIMD_.28NEON.29:

The Advanced SIMD extension (aka NEON, or "MPE", Media Processing Engine) is a combined 64- and 128-bit SIMD instruction set that provides standardized acceleration for media and signal processing applications. NEON is included in all Cortex-A8 devices but is optional in Cortex-A9 devices. NEON can execute MP3 audio decoding on CPUs running at 10 MHz and can run the GSM adaptive multi-rate (AMR) speech codec at no more than 13 MHz. It features a comprehensive instruction set, separate register files and independent execution hardware. NEON supports 8-, 16-, 32- and 64-bit integer and single-precision (32-bit) floating-point data and SIMD operations for handling audio and video processing as well as graphics and gaming processing. In NEON, the SIMD supports up to 16 operations at the same time. The NEON hardware shares the same floating-point registers as used in VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors but will execute with 64 bits at a time, whereas newer Cortex-A15 devices can execute 128 bits at a time.

Regards,
WM
[Libav-user] av_read_frame is blocking when I remove the camera
Hi all,

I use av_read_frame() to read packets. When I remove the camera, it blocks. Mac OS X 10.11.6.
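There is no timeout parameter on av_read_frame() itself; the usual way to break out of a blocked read is to install AVFormatContext.interrupt_callback, which FFmpeg polls during blocking I/O and which aborts the operation when it returns nonzero. A sketch of a deadline-style callback in the shape that API expects (hooking it up to a real AVFormatContext is left out, and whether a given device demuxer honors the callback varies, so treat this as a pattern to test, not a guarantee):

```c
#include <time.h>

/* Deadline context for an interrupt callback matching FFmpeg's
 * AVIOInterruptCB signature: int (*callback)(void *opaque).
 * In real code you would set fmt_ctx->interrupt_callback.callback
 * and .opaque before avformat_open_input(). */
typedef struct { time_t deadline; } TimeoutCtx;

/* Return nonzero to make the blocking call (e.g. av_read_frame) abort. */
int interrupt_cb(void *opaque) {
    TimeoutCtx *ctx = opaque;
    return time(NULL) >= ctx->deadline;
}
```

Before each read you would push the deadline forward (e.g. now + 5 seconds), so a camera that disappears mid-stream causes av_read_frame() to return an error instead of blocking forever.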