I encountered an unusual issue while upgrading from the ffmpeg 2.5 av
libraries to 3.x (currently trying 3.4.1) and am running into this on
Windows using an msys2 mingw64 build from source.
As I'm not sure exactly what would be helpful since it's not a
command-line issue, I'll just describe it:
> I set the time_base value in the output AVStream to 1/9, then stepped
> through the code to write the header, some frames, and then the trailer.
> Watching the time_base value in the debugger as I stepped through, I noticed
> that the values change to the 1/15360 value after the
> av_packet_rescale_ts(AVPacket *pkt, AVRational tb_src, AVRational tb_dst);
corey
On Tue, Jan 9, 2018 at 2:50 PM, Corey Taylor <corey.taylor...@gmail.com> wrote:
>> I set the time_base value in the output AVStream to 1/9, then stepped
>> through the code to write the h
>
> if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
>     if (mov->video_track_timescale) {
>         track->timescale = mov->video_track_timescale;
>     } else {
>         track->timescale = st->time_base.den;
>
On Mon, Jan 15, 2018 at 4:51 PM, Davood Falahati wrote:
> Dear all,
>
> I am using the transcoding.c example. I want to know how I should rotate the
> video by 90 degrees, the same thing that
> $ ffmpeg -i input_video -vf "transpose=1" output_video
> does on the command line.
On Tue, Jan 23, 2018 at 2:04 AM, Tobias Rapp wrote:
> Hi,
>
> in my application I have problems getting the last frame out of a filter
> graph when the filter chain includes the yadif filter. The application is
> based on the filtering_video.c example and I can reproduce
On Tue, Jan 23, 2018 at 1:19 AM, kckang wrote:
> // this code does not work
>
> dts = av_gettime() / 1000;
> dts = dts * 25;
> printf( "DTS:%l",dts);
> dts = av_gettime();
> int duration = 20; // 20
> if(m_prevAudioDts > 0LL) {
> duration = dts -
On Wed, Jan 31, 2018 at 5:14 PM, wrote:
> Hi,
>
> If I write an application that performs hardware decoding of H264 using
> FFmpeg/FFmpeg libraries and the NVIDIA NVDEC SDK, what license agreement
> applies?
>
> For example, (as advised by https://developer.nvidia.com/ffmpeg),
On Tue, Jan 30, 2018 at 1:20 PM, Allan Rosner wrote:
> Hi,
>
> What ffmpeg API calls do I need to make to ensure constant chunk duration,
> video frame rate, audio frame rate and key frame distance for each video
> chunk?
>
I would start by reading the encode_video.c and
On Wed, Feb 7, 2018 at 3:55 AM, Liyong (R) wrote:
> Dear libav-users,
>
> When I used “avcodec_decode_video2” for real-time MPEG-2
> decoding (one stream in, one frame out), I found different behaviors
> between version 3.4 and version 3.3.3.
>
> When I used
On Mon, Feb 19, 2018 at 8:58 AM, Pedro Pereira <1995pedropere...@gmail.com> wrote:
> However I am only interested in loading an image (or a frame of a video)
> and applying the scaling operation.
> I searched for a way to load an image to an AVFrame, apply the sws_scale
> and then write the
On Tue, Feb 20, 2018 at 6:48 AM, Valeriy Shtoma wrote:
> Hi to all,
>
> Could you please tell me how to change the libx264 profile and level? I tried:
>
> av_dict_set(, "profile", "high", 0);
> av_dict_set(, "level", "41", 0);
This looks like the correct option for libx264.
What error do you see?
corey
On Thu, Jan 11, 2018 at 2:02 PM, Davood Falahati wrote:
> Dear all,
>
> I read this description in avformat.h:
>
> /**
>  * Wrap an existing array as stream side data.
>  *
>  * @param st stream
>  * @param type side information
On Mon, Jan 22, 2018 at 7:17 PM, kckang wrote:
> The mux combines 2 streams: audio is stream 1, video is stream 0.
> When combining the two types of packets I just write them to the output
> context, but that makes the output half the size.
> Frankly, I was very confused, because I want it to become