I encountered an unusual issue while upgrading from the ffmpeg 2.5 av
libraries to 3.x (currently trying with 3.4.1). I am running into
this on Windows using an msys2 mingw64 build from source.
As I'm not sure exactly what would be helpful since it's not a command-line
issue, I'll just describe it.
>
> if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
>     if (mov->video_track_timescale) {
>         track->timescale = mov->video_track_timescale;
>     } else {
>         track->timescale = st->time_base.den;
>         while (tr
> I set the time_base value in the output AVStream to 1/9, then stepped
> through the code to write the header, some frames, and then the trailer.
> Watching the time_base value in the debugger as I stepped through, I noticed
> that the values change to the 1/15360 value after the invocation of
av_packet_rescale_ts(AVPacket *pkt, AVRational tb_src, AVRational tb_dst);
corey
On Tue, Jan 9, 2018 at 2:50 PM, Corey Taylor wrote:
>> I set the time_base value in the output AVStream to 1/9, then stepped
>> through the code to write the header, some frames, and then the trailer.
>
What error do you see?
corey
On Thu, Jan 11, 2018 at 2:02 PM, Davood Falahati
wrote:
> Dear all,
>
> I read this description in avformat.h
>
> /**
>  * Wrap an existing array as stream side data.
>  *
>  * @param st   stream
>  * @param type side information type
>  * @param data the si
On Mon, Jan 15, 2018 at 4:51 PM, Davood Falahati
wrote:
> Dear all,
>
> I am using the transcoding.c example. I want to know how I should rotate
> the video by 90 degrees, the same thing the cli tool does with
> $ ffmpeg -i input_video -vf "transpose=1" output_video
There doesn't seem to b
On Mon, Jan 22, 2018 at 7:17 PM, kckang wrote:
> My mux combines two streams: audio is stream 1, video is stream 0.
> When combining the two types of packets I just ignore and write to the
> output context, but that makes the output half the size.
> Frankly, I was very confused, because I want it to be naturally recognized
> and au
On Tue, Jan 23, 2018 at 2:04 AM, Tobias Rapp wrote:
> Hi,
>
> in my application I have problems getting the last frame out of a filter
> graph when the filter chain includes the yadif filter. The application is
> based on the filtering_video.c example and I can reproduce it when adding
> the yadif
On Tue, Jan 23, 2018 at 1:19 AM, kckang wrote:
> // This code does not work.
>
> int64_t dts = av_gettime() / 1000;   /* microseconds -> milliseconds */
> dts = dts * 25;
> printf("DTS:%" PRId64 "\n", dts);    /* "%l" is invalid; needs <inttypes.h> */
> dts = av_gettime();                  /* note: back to microseconds here */
> int duration = 20;                   /* default duration, 20 */
> if (m_prevAudioDts > 0LL) {
>     duration = dts - m_prevAudioDts;
> }
>
On Tue, Jan 30, 2018 at 1:20 PM, Allan Rosner wrote:
> Hi,
>
> What ffmpeg API calls do I need to make to ensure constant chunk duration,
> video frame rate, audio frame rate and key frame distance for each video
> chunk.
>
I would start by reading the encode_video.c and encode_audio.c files in
On Wed, Jan 31, 2018 at 5:14 PM, wrote:
> Hi,
>
> If I write an application that performs hardware decoding of H264 using
> FFmpeg/FFmpeg libraries and the NVIDIA NDEC SDK, what license agreement
> applies?
>
> For example, (as advised by https://developer.nvidia.com/ffmpeg), I would
> have to co
On Fri, Feb 2, 2018 at 12:06 PM, Dan Edwards wrote:
>
> Does ffmpeg support Linux and if so does it support such a conversion
> method?
>
I think you want to email the ffmpeg tool mailing list.
https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-user/
Take a look at the downloads page.
http://f
On Fri, Feb 2, 2018 at 7:35 PM, srinivas gudumasu
wrote:
> Hi,
>
> My windows playback application uses FFMPEG APIs for decoding HEVC video
> (AVCodec), resizing (swscale) and resampling (swresample).
>
> Till now, I am using avcodec-57.dll, swscale-4.dll and swresample-2.dll
> and my application
On Fri, Feb 2, 2018 at 9:30 PM, srinivas gudumasu
wrote:
> Hi Corey,
>
> I am using the latest nightly builds from ffmpeg zeranoe builds.
>
I can only guess that swresample wasn't linked properly somehow.
The build from zeranoe seems ok.
corey
On Sat, Feb 3, 2018 at 4:48 AM, Anton Shekhovtsov
wrote:
> Are you linking with msvc? Sounds like you need to refresh import libs.
> Zeranoe does not provide them.
>
The "dev" build seems to provide them. They look ok to me.
I don't know of any different requirements for newer ffmpeg builds.
On Wed, Feb 7, 2018 at 3:55 AM, Liyong (R) wrote:
> Dear libav-users,
>
> When I used "avcodec_decode_video2" for real-time MPEG-2 decoding
> (one-stream-in-one-frame-out), I found different behaviors between
> version 3.4 and version 3.3.3.
>
> When I used ffmpeg-3.3.3, it could work as I e
On Mon, Feb 19, 2018 at 8:58 AM, Pedro Pereira <1995pedropere...@gmail.com>
wrote:
> However I am only interested in loading an image (or a frame of a video)
> and applying the scaling operation.
> I searched for a way to load an image into an AVFrame, apply the sws_scale
> and then write the resulting
On Tue, Feb 20, 2018 at 6:48 AM, Valeriy Shtoma
wrote:
> Hi to all,
>
> Tell me, please, how to change libx264 profile and level? I tried:
>
> av_dict_set(&libx264opt, "profile", "high", 0);
> av_dict_set(&libx264opt, "level", "41", 0);
This looks like the correct option for libx264 according