On Mon, Jan 19, 2009 at 4:08 PM, Corbin Simpson
<mostawesomed...@gmail.com> wrote:
> Well, it occurs to me that almost all video decoding's done in a
> pipeline, and we don't want to do any steps in software between hardware
> steps. It also occurs to me that there's a big disconnect between the
> actual video format, and the "bitstream" that encapsulates it, which
> could be anything from MPEG to Matroska (my favorite) to OGM to AVI, etc.

The bitstream in this case is the video bitstream (MPEG2, H264, etc)
once it's been demuxed from the container. A full hardware decoder
would simply take it as input into a FIFO, parse it, generate control
signals for the rest of the pipeline, and eventually spit out YUV
pixels on the other side, so that's one of the logical entry points.
Another would be after the bitstream has been parsed and you have a
list of macroblocks that make up a frame, which is the classic XvMC
entrypoint.

Most of what you said still applies, however.

> - Gallium is only responsible for the formats themselves, and not the
> containers. Any data required to decompress a raw frame, that's normally
> stored in the container, should be passed alongside the frame.

Right, it's all in the bitstream. Each video format has a spec that
defines what a valid bitstream is.

> - Drivers declare all the formats they can handle.

A driver would implement create_video_pipe(profile, width, height,
...) by either initializing its decoding hardware for
profile,width,height,etc (e.g. mpeg2main,720,480; h264simple,1280,720)
or by initializing software and shader fallbacks if it could not
support it in hardware.

> - Drivers have one hook for taking in video frames, maybe in a context,
> maybe in a video_context.
> - Drivers are responsible for all steps of decoding frames, but are free
> to use methods in Util or Video or whatever auxiliary module we decide
> to put them in. Things like, say, video_do_idct() or video_do_huffman()
> might be possible.
> - Drivers probably shouldn't mix'n'match hardware and software steps,
> although this is a driver preference, e.g.
>
> video_do_foo();
> nouveau_do_bar();
> video_do_baz();
>
> I would guess that the migration setup would take longer than just doing
> video_do_bar() instead, but that's just my opinion. I'm sure that not
> all chipsets are quite like that.

I doubt this will be a problem; I can't think of any reason to fall
back to software for a stage if the preceding stage was handled in
hardware or shaders. There may be such an oddball case, but there's
no point in mixing hardware, software, and shaders if it means we
have to read back from the GPU.

> - I think that once a frame's decompressed, we can use the normal
> methods for putting the frame to a buffer, although I'm sure that people
> are going to reply and tell me why that's not a good idea. :3
>
> So from this perspective, support for new formats needs to be explicitly
> added per-driver. PIPE_FORMAT_MPEG2, PIPE_FORMAT_THEORA,
> PIPE_FORMAT_XVID, PIPE_FORMAT_H264, etc.

There are also profile variations to consider, but yeah, that's the gist of it.

_______________________________________________
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev
