Re: [Libav-user] list all available formats of webcam

2019-04-17 Thread Hristo Ivanov
Hi. The implementation of 'avdevice_capabilities_create' (https://ffmpeg.org/doxygen/trunk/avdevice_8c_source.html#l00143) starts with the following lines: int ret; av_assert0(s && caps); av_assert0(s->iformat || s->oformat); if ((s->oformat &&

Re: [Libav-user] list all available formats of webcam

2019-04-17 Thread Hristo Ivanov
Hi. Sorry, I think I got it wrong in my previous reply: > I think there is no way to retrieve the information using the FFMPEG API. There is this: https://ffmpeg.org/doxygen/trunk/structAVDeviceCapabilitiesQuery.html Quoting the docs: > Following API allows user to probe device capabilities
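A minimal sketch of how that query API is meant to be called, assuming an already-opened device context; the "dshow" format name and the probe_caps helper are placeholders, and note the call returns AVERROR(ENOSYS) when the device does not implement the capability query, which is exactly what the source linked above checks first:

#include <errno.h>
#include <stdio.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>

/* Sketch: open a capture device and ask it for its capabilities. */
int probe_caps(const char *device_name)
{
    AVFormatContext *ctx = NULL;
    AVDeviceCapabilitiesQuery *caps = NULL;
    AVInputFormat *fmt;
    int ret;

    avdevice_register_all();
    fmt = av_find_input_format("dshow");   /* placeholder device format */
    if (!fmt)
        return AVERROR(EINVAL);

    ret = avformat_open_input(&ctx, device_name, fmt, NULL);
    if (ret < 0)
        return ret;

    ret = avdevice_capabilities_create(&caps, ctx, NULL);
    if (ret == AVERROR(ENOSYS))
        fprintf(stderr, "device does not implement the capabilities API\n");
    else if (ret >= 0)
        avdevice_capabilities_free(&caps, ctx);

    avformat_close_input(&ctx);
    return ret;
}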

Re: [Libav-user] Can't decode more than 5 video streams using libav+hwaccel.

2019-04-17 Thread Hristo Ivanov
Hi. > Is it because of NVIDIA GPUs have concurrent session limitation? There is a session limitation for encoding for sure, but I think there is no such thing for decoding: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix > By the way, I don't understand where these messages

Re: [Libav-user] Using ffmpeg for a software

2019-04-17 Thread Hristo Ivanov
Hi. I think this is what you need: https://www.ffmpeg.org/legal.html Regards.

Re: [Libav-user] list all available formats of webcam

2019-04-16 Thread Hristo Ivanov
Hi. I faced a similar problem: http://ffmpeg.org/pipermail/libav-user/2018-October/011413.html I think there is no way to retrieve the information using the FFmpeg API. As a workaround you can use a log callback to store the printed format information and later parse it. Regards.
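A rough sketch of that workaround, not taken from the original mail: install a custom log callback that copies everything into a buffer you can parse afterwards. The fixed-size buffer and the lack of locking are deliberate simplifications:

#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <libavutil/log.h>

static char captured[65536];    /* crude global capture buffer */
static size_t captured_len;

static void capture_log(void *avcl, int level, const char *fmt, va_list vl)
{
    char line[1024];
    va_list vl2;

    /* Format a private copy of the arguments into our buffer... */
    va_copy(vl2, vl);
    vsnprintf(line, sizeof(line), fmt, vl2);
    va_end(vl2);

    if (captured_len + strlen(line) < sizeof(captured)) {
        strcpy(captured + captured_len, line);
        captured_len += strlen(line);
    }

    /* ...and still print normally so nothing is lost on stderr. */
    av_log_default_callback(avcl, level, fmt, vl);
}

/* Call this before opening the device that prints its formats,
 * then scan 'captured' for the lines you are interested in. */
void install_capture(void)
{
    av_log_set_callback(capture_log);
}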

Re: [Libav-user] Using libavcodec and libavformat to encode live audio streams?

2019-04-11 Thread Hristo Ivanov
Hi Ethin. > The interface to libav also looks unnecessarily complicated, or is that just me. When learning libav, at first, I felt the same way. The interface is complicated, but not unnecessarily so. Concerning the rest of your questions: they are too vague to give a definitive answer. For a conferencing

[Libav-user] Get the #EXT-X-TARGETDURATION from hls demuxer.

2019-03-06 Thread Hristo Ivanov
Hi. I have an HLS input and I want to be able to get the value of the #EXT-X-TARGETDURATION tag (the segments' target duration) from the .m3u8 file. Is that possible? Can that information be recovered from the AVFormatContext struct? The reason why I need the target duration of the HLS segments is the
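As far as I can tell the demuxer does not expose that tag directly, so here is a sketch of a manual workaround (my own, not part of the HLS demuxer API): fetch the playlist with avio and scan it for the tag. It assumes the playlist fits in one local buffer:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libavformat/avformat.h>

/* Returns the #EXT-X-TARGETDURATION value of a playlist URL, or -1. */
int read_target_duration(const char *m3u8_url)
{
    AVIOContext *pb = NULL;
    char buf[65536];
    int n, target = -1;
    char *p;

    avformat_network_init();                 /* needed for http(s) inputs */
    if (avio_open(&pb, m3u8_url, AVIO_FLAG_READ) < 0)
        return -1;

    n = avio_read(pb, (unsigned char *)buf, sizeof(buf) - 1);
    avio_closep(&pb);
    if (n <= 0)
        return -1;
    buf[n] = '\0';

    p = strstr(buf, "#EXT-X-TARGETDURATION:");
    if (p)
        target = atoi(p + strlen("#EXT-X-TARGETDURATION:"));
    return target;
}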

Re: [Libav-user] Possible memory leak in avformat_find_stream_info, how to deal with it?

2019-02-26 Thread Hristo Ivanov
Hi Wodzu. From the ffmpeg docs about avformat_find_stream_info(): - Read packets of a media file to get stream information. - The logical file position is not changed by this function; examined packets may be buffered for later processing. * Read packets of a media file to get stream
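For what it is worth, a minimal sketch of the pairing I rely on: whatever avformat_find_stream_info() buffers internally belongs to the format context and is released by avformat_close_input(), so the context must always be closed:

#include <libavformat/avformat.h>

int probe_file(const char *url)
{
    AVFormatContext *fmt = NULL;
    int ret;

    ret = avformat_open_input(&fmt, url, NULL, NULL);
    if (ret < 0)
        return ret;

    ret = avformat_find_stream_info(fmt, NULL);

    /* Frees the context and every packet buffered during probing;
     * skipping this close is the usual source of the apparent leak. */
    avformat_close_input(&fmt);
    return ret;
}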

Re: [Libav-user] Syncing multiple sources.

2019-02-01 Thread Hristo Ivanov
Hi Carl, > But the setpts filter is changing the framerate to 60 fps, no? I am pretty sure it does not; it changes the timebase in use. In a stream with a 1/30 tb and pts values [0,1,2,3,4,5], using a 1/60 tb those pts values would become [0,2,4,6,8,10]. > WMP only supports yuv420p, this is
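The same arithmetic written out with libavutil, as a tiny sketch using the values from the example above:

#include <inttypes.h>
#include <stdio.h>
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>

int main(void)
{
    AVRational tb_src = {1, 30}, tb_dst = {1, 60};
    int64_t pts;

    /* pts 0..5 in a 1/30 timebase become 0,2,4,...,10 in a 1/60 timebase;
     * the presentation instants themselves do not move. */
    for (pts = 0; pts <= 5; pts++)
        printf("%" PRId64 " -> %" PRId64 "\n",
               pts, av_rescale_q(pts, tb_src, tb_dst));
    return 0;
}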

Re: [Libav-user] Syncing multiple sources.

2019-01-31 Thread Hristo Ivanov
Hi Carl, > In this example, it looks as if the input already has 60fps. The input is 30 fps, set by the '-r' flag. Maybe the settb filter is throwing you off, but it only changes the timebase to 1/60; the framerate is kept at 30/1. > Your original email gave the impression that vstack somehow

Re: [Libav-user] Syncing multiple sources.

2019-01-31 Thread Hristo Ivanov
Hi Carl. > How can I reproduce this (major?) issue with ffmpeg (the application)? .\ffmpeg.exe -y -loop 1 -r 30 -t 5 -i .\frame.png -filter_complex "[0]settb=expr=1/60[tb];[tb]split[s0][s1];[s1]setpts=PTS+1[offset];[s0][offset]vstack[out]" -map [out] -c:v h264 -vsync 0 -f mp4 out.mp4 Here is

Re: [Libav-user] Syncing multiple sources.

2019-01-30 Thread Hristo Ivanov
Hi. I forgot to mention the timebases in use, and those are important in this case. For my two inputs, after decoding, my AVCodecContexts have the following framerates and timebases: in0 => 30/1, 1/60; in1 => 359/12, 6/359 (not exactly 30 fps, but close). The common selected

[Libav-user] Syncing multiple sources.

2019-01-30 Thread Hristo Ivanov
Hi. In my program I have the following filter: " [in0]format=pix_fmts=yuv420p,scale=-1:540[s0]; [in1]format=pix_fmts=yuv420p,scale=-1:540[s1]; [s0][s1]vstack[stacked]; [stacked]pad=1920:1080:(ow-iw)/2:0[out0] " My problem comes from the 'vstack' filter. The stack filters assign the same

Re: [Libav-user] Audio resampling changes slightly the speed of the music

2018-10-25 Thread Hristo Ivanov
Hi Yurii. On Thu, Oct 25, 2018 at 11:18 AM Yurii Monakov wrote: > Resampling can introduce additional samples in the output (because > out_samples is rounded). > You should keep track of input time and output time to calculate number of > output samples. > > Yurii > I am not sure if that is true
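To illustrate the bookkeeping Yurii describes, here is a self-contained sketch (pure arithmetic, no SwrContext involved; the sample rates and chunk size are made up for the example). The rounding error is absorbed over time instead of drifting:

#include <inttypes.h>
#include <stdio.h>
#include <libavutil/mathematics.h>

int main(void)
{
    int64_t in_rate = 44100, out_rate = 48000;
    int64_t in_total = 0, out_total = 0;
    int i;

    for (i = 0; i < 100; i++) {
        int in_samples = 1024;
        /* Output samples owed after this chunk, minus what was already
         * produced, so rounding never accumulates into a speed change. */
        int out_samples = av_rescale_rnd(in_total + in_samples, out_rate,
                                         in_rate, AV_ROUND_DOWN) - out_total;
        in_total  += in_samples;
        out_total += out_samples;
        printf("chunk %d: %d out samples\n", i, out_samples);
    }
    printf("in=%" PRId64 " out=%" PRId64 "\n", in_total, out_total);
    return 0;
}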

Re: [Libav-user] Inconsistent dts/pts in packets read by av_read_frame from rtsp stream. Is it expected behavior?

2018-10-24 Thread Hristo Ivanov
Hi Yang, > While I'm using av_read_frame to extract frames from some rtsp stream from IP cameras, the dts/pts of the next frame occasionally will be smaller than the previous frames. Currently I am facing the same problem. > Is it a bug of libav or a P frame or maybe there's a bug in the IP
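A small diagnostic sketch I would use for this (my own, just for spotting the problem): read packets and flag any dts that goes backwards per stream. The 64-stream cap is an arbitrary simplification:

#include <inttypes.h>
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;
    AVPacket pkt;
    int64_t last_dts[64];
    int i;

    avformat_network_init();   /* rtsp inputs need the network layer */
    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(fmt, NULL);

    for (i = 0; i < 64; i++)
        last_dts[i] = AV_NOPTS_VALUE;

    while (av_read_frame(fmt, &pkt) >= 0) {
        int s = pkt.stream_index;
        if (s < 64 && pkt.dts != AV_NOPTS_VALUE) {
            if (last_dts[s] != AV_NOPTS_VALUE && pkt.dts < last_dts[s])
                printf("stream %d: dts went backwards (%" PRId64 " < %" PRId64 ")\n",
                       s, pkt.dts, last_dts[s]);
            last_dts[s] = pkt.dts;
        }
        av_packet_unref(&pkt);
    }
    avformat_close_input(&fmt);
    return 0;
}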

Re: [Libav-user] msvc compiling issue

2018-10-24 Thread Hristo Ivanov
Hi Anton, Recently I built ffmpeg with VC 2017 and had no problems. I had to use the following flags with the configure script: --arch=x86_64 --target-os=win64 --toolchain=msvc. Maybe you are missing the --arch flag; hope this helps you.

[Libav-user] Libavdevice. Reading formats info.

2018-10-23 Thread Hristo Ivanov
Hi. The different devices ffmpeg supports have different options; I am interested in the 'list_formats' option of the decklink devices. Using the ffmpeg.exe tool, the following command prints the formats that can be used with the device: ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro'. I
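A sketch of the API-side equivalent, assuming the CLI flag maps straight to the decklink demuxer's private 'list_formats' option (the device name 'Intensity Pro' is the one from the command above). The formats are printed through av_log while the device is opened, so this pairs with the log-callback workaround from the later thread:

#include <errno.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

int list_decklink_formats(void)
{
    AVFormatContext *ctx = NULL;
    AVInputFormat *fmt;
    AVDictionary *opts = NULL;
    int ret;

    avdevice_register_all();
    fmt = av_find_input_format("decklink");
    if (!fmt)
        return AVERROR(EINVAL);

    /* Same effect as "-list_formats 1" on the command line. */
    av_dict_set(&opts, "list_formats", "1", 0);
    ret = avformat_open_input(&ctx, "Intensity Pro", fmt, &opts);

    av_dict_free(&opts);
    avformat_close_input(&ctx);
    return ret;
}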

[Libav-user] What happens if AV_CODEC_FLAG_GLOBAL_HEADER is set when not needed.

2018-06-19 Thread Hristo Ivanov
Hi. I am looking into encoding once but writing to multiple files with different formats. For some formats the encoder must be configured with the AV_CODEC_FLAG_GLOBAL_HEADER flag active, like this: if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) enc_ctx->flags |=
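For reference, the standard pattern this snippet comes from (as used in the ffmpeg muxing/transcoding examples); the flag must be set before avcodec_open2():

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Set AV_CODEC_FLAG_GLOBAL_HEADER only when the container needs it. */
static void set_global_header_if_needed(const AVFormatContext *ofmt_ctx,
                                        AVCodecContext *enc_ctx)
{
    if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
        enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}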

[Libav-user] Resampling pcm audio.

2018-06-13 Thread Hristo Ivanov
Hi. I am capturing video+audio from an IP camera (RTSP), transcoding it, and saving it to a file. The input looks like this: Input #0, rtsp, from 'rtsp://10.1.1.22/?line=1=1_line=1': Metadata: title : LIVE VIEW Duration: N/A, start: -0.001000, bitrate: N/A Stream #0:0: Video:

Re: [Libav-user] Changing time_bases for decoder(mpeg2video) when sending packets with avcodec_send_packet().

2018-06-04 Thread Hristo Ivanov
Hi. I am trying to implement my first video filter. Now I am looking at the following example: https://www.ffmpeg.org/doxygen/trunk/filtering_video_8c-example.html In the function 'static int init_filters(const char *filters_descr)' (line 79) we have this at line 87: AVRational time_base =

[Libav-user] Changing time_bases for decoder(mpeg2video) when sending packets with avcodec_send_packet().

2018-05-30 Thread Hristo Ivanov
Hi, I am writing (in C++) a simple transcoding program. At first I did not understand the concept of a timebase and how to work with it, until I found this stackoverflow response: https://stackoverflow.com/a/40278283/3022945 The response from the above link can be summarized as: - Copy the
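A tiny sketch of the rescaling step that summary leads to, mirroring what the ffmpeg transcoding example does before avcodec_send_packet(); the stream and decoder contexts are assumed to be set up elsewhere, and the helper name is just for illustration:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Move a packet's pts/dts/duration from the demuxer stream's time_base
 * into the decoder's time_base before avcodec_send_packet(). */
static void rescale_packet_for_decoder(AVPacket *pkt,
                                       const AVStream *in_stream,
                                       const AVCodecContext *dec_ctx)
{
    av_packet_rescale_ts(pkt, in_stream->time_base, dec_ctx->time_base);
}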