Re: [Libav-user] Create a muxer without enc_ctx = out_stream->codec

2016-08-22 Thread Charles

On 08/22/2016 06:32 PM, Charles wrote:

I seem to be lost as to where the values get copied.
I want to create a new H.264 file without an input stream to copy from.

Steps
1 - av_encode_codec = avcodec_find_encoder( codec_id )
2 - av_encode_codec_ctx = avcodec_alloc_context3( av_encode_codec )
 av_encode_codec_ctx->width   = w_;   /// \note multiple of 2
 av_encode_codec_ctx->height  = h_;   /// \note multiple of 2
3 - avcodec_open2( av_encode_codec_ctx, av_encode_codec, &av_dict_opts )
4 - avformat_alloc_output_context2( &av_out_fmt_ctx, NULL, "mp4", fname )
5 - video_st = avformat_new_stream( av_out_fmt_ctx, av_encode_codec )
 av_out_fmt_ctx->streams[ 0 ]->codecpar->width = w_;
 av_out_fmt_ctx->streams[ 0 ]->codecpar->height = h_;
6 - avcodec_parameters_to_context( av_encode_codec_ctx, av_out_fmt_ctx->streams[ 0 ]->codecpar )
7 - avio_open( &av_out_fmt_ctx->pb, fname, AVIO_FLAG_WRITE )
8 - avformat_write_header( av_out_fmt_ctx, NULL )

[mp4 @ 0x29b4060] dimensions not set

transcoding.c uses enc_ctx = out_stream->codec;
which should be the same as av_encode_codec_ctx.

What am I missing here?

Thanks
cco
___
Libav-user mailing list
Libav-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user


Posted the code at :
https://gist.github.com/LinuxwitChdoCtOr/74c1721dd7688cf1d16509ea2a52d231

Current output:
Output #0, mp4, to 'enc.mp4':
Stream #0:0: Unknown: none (h264_nvenc) ([33][0][0][0] / 0x0021)
[mp4 @ 0x1a08a60] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[mp4 @ 0x1a08a60] dimensions not set
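For reference, a minimal C sketch of the step list above (untested, error handling mostly elided). The mp4 muxer reads width/height from AVStream->codecpar at avformat_write_header() time, so the dimensions have to end up there; note the direction of the copy: avcodec_parameters_from_context() fills codecpar from the opened encoder context, while avcodec_parameters_to_context() (step 6 above) goes the other way.

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static int open_output(const char *fname, enum AVCodecID codec_id,
                       int w, int h, AVFormatContext **fmt_ctx)
{
    AVCodec *enc = avcodec_find_encoder(codec_id);
    AVCodecContext *enc_ctx = avcodec_alloc_context3(enc);
    enc_ctx->width     = w;                    /* multiple of 2 */
    enc_ctx->height    = h;                    /* multiple of 2 */
    enc_ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    enc_ctx->time_base = (AVRational){ 1, 25 };
    if (avcodec_open2(enc_ctx, enc, NULL) < 0)
        return -1;

    avformat_alloc_output_context2(fmt_ctx, NULL, "mp4", fname);
    AVStream *st = avformat_new_stream(*fmt_ctx, NULL);

    /* copy the encoder parameters (including width/height) INTO codecpar;
     * this is what the muxer looks at in avformat_write_header() */
    avcodec_parameters_from_context(st->codecpar, enc_ctx);

    if (avio_open(&(*fmt_ctx)->pb, fname, AVIO_FLAG_WRITE) < 0)
        return -1;
    return avformat_write_header(*fmt_ctx, NULL);
}
```

Also worth checking: when the output format has AVFMT_GLOBALHEADER set (mp4 does), the encoder usually wants AV_CODEC_FLAG_GLOBAL_HEADER in its flags before avcodec_open2().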




[Libav-user] Create a muxer without enc_ctx = out_stream->codec

2016-08-22 Thread Charles

I seem to be lost as to where the values get copied.
I want to create a new H.264 file without an input stream to copy from.

Steps
1 - av_encode_codec = avcodec_find_encoder( codec_id )
2 - av_encode_codec_ctx = avcodec_alloc_context3( av_encode_codec )
 av_encode_codec_ctx->width   = w_;   /// \note multiple of 2
 av_encode_codec_ctx->height  = h_;   /// \note multiple of 2
3 - avcodec_open2( av_encode_codec_ctx, av_encode_codec, &av_dict_opts )
4 - avformat_alloc_output_context2( &av_out_fmt_ctx, NULL, "mp4", fname )
5 - video_st = avformat_new_stream( av_out_fmt_ctx, av_encode_codec )
 av_out_fmt_ctx->streams[ 0 ]->codecpar->width = w_;
 av_out_fmt_ctx->streams[ 0 ]->codecpar->height = h_;
6 - avcodec_parameters_to_context( av_encode_codec_ctx, av_out_fmt_ctx->streams[ 0 ]->codecpar )
7 - avio_open( &av_out_fmt_ctx->pb, fname, AVIO_FLAG_WRITE )
8 - avformat_write_header( av_out_fmt_ctx, NULL )

[mp4 @ 0x29b4060] dimensions not set

transcoding.c uses enc_ctx = out_stream->codec;
which should be the same as av_encode_codec_ctx.

What am I missing here?

Thanks
cco


Re: [Libav-user] AVStream->codec deprecation

2016-08-22 Thread salsaman
Perette,
as far as I can tell there are two different things relating to codecs:

old API:
AVCodecContext - obtained from stream->codec (this may be the source of
confusion), a pointer which is passed into functions dealing with the codec
and which carries fields like width, height, stream type etc.
AVCodec - obtained from avcodec_find_decoder(), describing the codec itself


new API:
AVCodecParameters - obtained from stream->codecpar, taking over the
parameter-carrying role that stream->codec had in the old API
AVCodecContext - allocated separately, then filled in from codecpar



I could be wrong; I have not experimented with the new API yet, just
looked at the code.
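As a concrete sketch of the new-API decoder setup described above (untested, based on my reading of the FFmpeg 3.x headers; the function and field names are the real API, the helper below is hypothetical):

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static AVCodecContext *open_decoder_for_stream(AVStream *st)
{
    /* new API: parameters live in st->codecpar, not st->codec */
    AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    if (!dec)
        return NULL;

    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    /* copy width, height, extradata, etc. from the stream parameters */
    if (avcodec_parameters_to_context(ctx, st->codecpar) < 0 ||
        avcodec_open2(ctx, dec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}
```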

http://lives-video.com
https://www.openhub.net/accounts/salsaman

On Mon, Aug 22, 2016 at 10:04 AM, Perette Barella wrote:

> Salsaman,
>
> I reviewed the doc/examples and my existing, working code some more, and
> I’m going to step back and ask some architectural questions to validate
> changing assumptions based on your assertions.
>
> * AVFormat provides the I/O and multiplexing for media
> * AVStream is an abstraction for the separate audio/video/subtitle
> components of the media.  It is associated with an AVFormatContext.
> * AVCodec provides the encoder/decoders for a particular AV type.
>
> My assumption has been that since AVStream->codec exists and was filled in
> by avformat_new_stream and other functions, *there was an
> association between an AVStream and its AVCodec* so that all these
> different components worked together.
>
> You’re implying that no such association exists, and that it’s entirely my
> code that’s pushing packets through the Codec, then moving the results onto
> the Stream.  I find this a little surprising, although I find nothing in my
> code or the examples/muxing.c to contradict it.  And it would explain why
> it takes so much code to do anything with lav as opposed to gstreamer or
> other libraries.
>
> Am I going in the right direction now?
>
> And with that in mind, the purpose of codecpar is to give the stream a way
> to *provide* parameters for a codec that I’m supposed to create; the codec
> structure was there in the past only to provide those parameters, not as an
> indication of an association between the codec and the stream (because no
> such association exists).
>
> > Just use avctx from the code above. I don' t see what the problem is.
>
>
> I think one of lav’s problems is lack of a good architectural diagram to
> explain how it’s *supposed* to work.  Yes, I can read the code and the
> examples, and the Doxygen is a big help… but some sense of the intent and
> design behind the code would help significantly.  It’s like the difference
> between diagnosing an electrical problem with vs. without a
> blueprint/schematic: you can do without, but having the picture makes the
> work easier, faster and more accurate.
>
> Perette
>


Re: [Libav-user] questions about indefinite waiting for incoming rtmp stream

2016-08-22 Thread Chen Fisher
Yes, you should use *interrupt_callback*:
https://www.ffmpeg.org/doxygen/3.0/structAVFormatContext.html#a5b37acfe4024d92ee510064e80920b40
https://www.ffmpeg.org/doxygen/3.0/structAVIOInterruptCB.html

This callback function is called repeatedly while a blocking operation
such as av_read_frame() is in progress.
Just return 1 from it when you want to abort the blocking operation.
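A minimal sketch of that wiring (untested; the deadline bookkeeping is plain C, the libavformat field names come from the AVIOInterruptCB documentation linked above):

```c
#include <time.h>

/* deadline state passed through AVIOInterruptCB.opaque */
struct deadline { time_t until; };

/* Return 1 to abort the current blocking operation, 0 to let it continue.
 * libavformat calls this repeatedly while blocked inside
 * avformat_open_input(), avformat_find_stream_info() or av_read_frame(). */
static int interrupt_cb(void *opaque)
{
    const struct deadline *d = opaque;
    return time(NULL) >= d->until;
}

/* Wiring it up (AVFormatContext fields, shown here as a comment so this
 * sketch stays self-contained):
 *
 *     AVFormatContext *fmt_ctx = avformat_alloc_context();
 *     struct deadline d = { time(NULL) + 10 };      // 10 s budget
 *     fmt_ctx->interrupt_callback.callback = interrupt_cb;
 *     fmt_ctx->interrupt_callback.opaque   = &d;
 *     ret = avformat_open_input(&fmt_ctx, url, NULL, NULL);
 *     // reset d.until before each av_read_frame() for a per-read timeout
 */
```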

On Sun, Jul 31, 2016 at 1:25 PM, qw  wrote:

> Hi,
>
> I use avformat_open_input() and avformat_find_stream_info() to open an rtmp
> stream, and av_read_frame() to read AV packets. If there is no incoming
> rtmp stream, avformat_open_input() and avformat_find_stream_info() will
> wait indefinitely. If the incoming rtmp stream is terminated in the middle,
> av_read_frame() will wait indefinitely.
>
> Is there some method for the ffmpeg lib to effectively handle this sort of
> indefinite waiting in case of network issues?
>
> Thanks!
>
> Regards
>
> Andrew


[Libav-user] avformat_find_stream_info

2016-08-22 Thread Chen Fisher
I'd like to get rid of calling avformat_find_stream_info() (it's slow).
I have a specific stream source, so I know exactly the codec, resolution,
frame rate, etc. of the stream.

How can I pre-populate the AVFormatContext with all the needed information
so I don't have to call avformat_find_stream_info()?

Thanks
Chen
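One approach, as a sketch (untested; the stream description below is a hypothetical H.264 source, substitute whatever your source actually produces):

```c
#include <libavformat/avformat.h>

/* Skip avformat_find_stream_info() and describe the stream yourself. */
static int open_known_stream(AVFormatContext **fmt_ctx, const char *url)
{
    int ret = avformat_open_input(fmt_ctx, url, NULL, NULL);
    if (ret < 0)
        return ret;

    /* no avformat_find_stream_info() call; fill the parameters in by hand */
    AVStream *st = (*fmt_ctx)->streams[0];
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_H264;
    st->codecpar->width      = 1280;               /* known in advance */
    st->codecpar->height     = 720;
    st->avg_frame_rate       = (AVRational){ 30, 1 };
    return 0;
}
```

Caveat: some decoders still need extradata (e.g. H.264 SPS/PPS) in codecpar. An alternative is to keep avformat_find_stream_info() but shrink its cost by passing small "probesize" and "analyzeduration" values as format options to avformat_open_input().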


Re: [Libav-user] imgutils.h decode dst buffer going from avpicture_fill to av_image_fill_arrays

2016-08-22 Thread ssshukla26
For software scaling, an example follows; it might help you.


#include <stdio.h>
#include <libavutil/imgutils.h>
#include <libavutil/parseutils.h>
#include <libavutil/pixdesc.h>
#include <libswscale/swscale.h>

int main(int argc, char **argv)
{
const char *src_filename = NULL;
const char *src_resolution = NULL;
const char *src_pix_fmt_name = NULL;
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_NONE;
uint8_t *src_data[4];
int src_linesize[4];
int src_w=0, src_h=0;
FILE *src_file;
int src_bufsize;

const char *dst_filename = NULL;
const char *dst_resolution = NULL;
const char *dst_pix_fmt_name = NULL;
enum AVPixelFormat dst_pix_fmt = AV_PIX_FMT_NONE;
uint8_t *dst_data[4];
int dst_linesize[4];
int dst_w=0, dst_h=0;
FILE *dst_file;
int dst_bufsize;

struct SwsContext *sws_ctx;
int ret;
int frame_count = 0;
if (argc != 7) {
    fprintf(stderr, "Usage: %s src_file src_resolution src_pix_fmt"
            " dst_file dst_resolution dst_pix_fmt\n"
            "API example program to show how to scale a video/image"
            " with libswscale.\n"
            "This program reads raw frames, rescales them to the given"
            " resolution and saves them to an output file.\n"
            "\n", argv[0]);
    exit(1);
}

src_filename = argv[1];
src_resolution   = argv[2];
src_pix_fmt_name = argv[3];
dst_filename = argv[4];
dst_resolution = argv[5];
dst_pix_fmt_name = argv[6];

if(AV_PIX_FMT_NONE == (src_pix_fmt = av_get_pix_fmt(src_pix_fmt_name)))
{
fprintf(stderr,
"Invalid source pixel format '%s'\n",
src_pix_fmt_name);
exit(1);
}

if(AV_PIX_FMT_NONE == (dst_pix_fmt = av_get_pix_fmt(dst_pix_fmt_name)))
{
fprintf(stderr,
"Invalid destination pixel format '%s'\n",
dst_pix_fmt_name);
exit(1);
}

if (av_parse_video_size(&src_w, &src_h, src_resolution) < 0) {
    fprintf(stderr,
            "Invalid source resolution '%s', must be in the form WxH"
            " or a valid size abbreviation\n",
            src_resolution);
    exit(1);
}

if (av_parse_video_size(&dst_w, &dst_h, dst_resolution) < 0) {
    fprintf(stderr,
            "Invalid destination resolution '%s', must be in the form WxH"
            " or a valid size abbreviation\n",
            dst_resolution);
    exit(1);
}

src_file = fopen(src_filename, "rb");
if (!src_file) {
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}

dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n",
dst_filename);
exit(1);
}

/* create scaling context */
sws_ctx = sws_getContext(src_w, src_h, src_pix_fmt,
                         dst_w, dst_h, dst_pix_fmt,
                         SWS_BILINEAR, NULL, NULL, NULL);
if (!sws_ctx) {
fprintf(stderr,
"Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(src_pix_fmt), src_w, src_h,
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h);
ret = AVERROR(EINVAL);
goto end;
}

/* allocate source and destination image buffers */
if ((ret = av_image_alloc(src_data, src_linesize,
                          src_w, src_h, src_pix_fmt, 16)) < 0) {
fprintf(stderr, "Could not allocate source image\n");
goto end;
}
src_bufsize = ret;

/* buffer is going to be written to rawvideo file, no alignment */
if ((ret = av_image_alloc(dst_data, dst_linesize,
                          dst_w, dst_h, dst_pix_fmt, 1)) < 0) {
fprintf(stderr, "Could not allocate destination image\n");
goto end;
}
dst_bufsize = ret;

/* read image from source file */
while(src_bufsize == fread(src_data[0], 1, src_bufsize, src_file))
{
/* convert to destination format */
    sws_scale(sws_ctx, (const uint8_t * const*)src_data, src_linesize,
              0, src_h, dst_data, dst_linesize);

/* write scaled image to file */
fwrite(dst_data[0], 1, dst_bufsize, dst_file);

printf("No of frames converted = %d\r",++frame_count);
fflush(stdout);
}

printf("\n");

fprintf(stderr, "Scaling succeeded. Play the output file with the command:\n"
        "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
        av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);

end:
fclose(src_file);
fclose(dst_file);
av_freep(&src_data[0]);
av_freep(&dst_data[0]);
sws_freeContext(sws_ctx);
return ret;
}




--
View this message in context: 
http://libav-users.943685.n4.nabble.com/Libav-user-imgutils-h-decode-dst-buffer-going-from-avpicture-fill-to-av-image-fill-arrays-tp4662419p4662429.html
Sent from the libav-users mailing list archive at Nabble.com.