I've been taking a look at VDPAU and how to support it on cards with
and without dedicated decoding hardware, and I have some thoughts.
The VDPAU API lets the client hand off the entire video bitstream and
takes care of the rest of the decoding pipeline. This is fine if you
have hardware that can handle that, but if not, you have to do at
least part of the pipeline in software. Even for MPEG2, most cards
don't have hardware to decode the bitstream, so supporting VDPAU on
them would require a software fallback. This is probably why Nvidia
isn't currently supporting VDPAU on pre-NV50 cards.

It seems to me that all of this software fallback business is outside
the scope of a state tracker. I can see this state tracker getting
very large and ugly if it has to deal with fallbacks, and uglier
still if anyone wants to support fixed-function decoding in the
future. I think the much better solution is to extend Gallium with a
very minimal video decoding interface, something along the lines of:

> picture_desc_mpeg12;
> picture_desc_h264;
> picture_desc_vc1;
> ...
>
> pipe_video_context
> {
>    set_picture_desc(...)
>    render_picture(bitstream, ..., surface)
>    put_picture(src, dst)
>    ...
> };
>
> create_video_pipe(profile, width, height, ...)
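
To make that a bit more concrete, here's a rough C rendering of the
sketch above. Everything in it is hypothetical: the names, the
signatures, and what a picture description actually carries are
placeholders, and the real parameter lists would need reference
surfaces, quant matrices, and so on:

> /* Hypothetical sketch only; all names and signatures are
>  * placeholders, not a final interface. */
>
> enum pipe_video_profile
> {
>    PIPE_VIDEO_PROFILE_MPEG12_MAIN,
>    PIPE_VIDEO_PROFILE_H264_MAIN,
>    PIPE_VIDEO_PROFILE_VC1_MAIN
>    /* ... */
> };
>
> struct pipe_picture_desc_mpeg12
> {
>    /* picture_coding_type, f_code, quant matrices,
>     * reference surfaces, ... */
> };
>
> struct pipe_video_context
> {
>    /* Set per-picture state for the next render_picture() call. */
>    void (*set_picture_desc)(struct pipe_video_context *vpipe,
>                             const void *desc);
>
>    /* Decode one picture's worth of bitstream into a surface. */
>    void (*render_picture)(struct pipe_video_context *vpipe,
>                           const void *bitstream, unsigned num_bytes,
>                           struct pipe_surface *surface);
>
>    /* Colour space convert/scale a decoded picture for display. */
>    void (*put_picture)(struct pipe_video_context *vpipe,
>                        struct pipe_surface *src,
>                        struct pipe_surface *dst);
>
>    /* ... */
> };
>
> struct pipe_video_context *
> create_video_pipe(enum pipe_video_profile profile,
>                   unsigned width, unsigned height);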

The driver would then implement the above any way it chooses. Going
along with that would be a set of generic fallback modules, like the
current draw module, that can be arranged in a pipeline to implement
things like software bitstream decoding for various formats, software
and shader-based IDCT, shader-based motion compensation, colour space
conversion, and so on.
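
As a sketch of how a driver might string those modules together (all
of the vl_* names here are invented for illustration, by analogy with
the draw module):

> /* Illustrative only; these vl_* modules don't exist yet. */
> struct vl_stage *bs, *idct, *mc, *csc;
>
> bs   = vl_mpeg12_bitstream_decoder_create(pipe);
> idct = vl_shader_idct_create(pipe);
> mc   = vl_shader_mc_create(pipe);
> csc  = vl_shader_csc_create(pipe);
>
> /* bitstream decode -> IDCT -> motion compensation -> CSC */
> vl_stage_connect(bs, idct);
> vl_stage_connect(idct, mc);
> vl_stage_connect(mc, csc);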

An NV50 driver might implement pipe_video_context mostly in hardware,
along with shader-based colour space conversion. An NV40 driver might
instantiate a software bitstream decoder for MPEG2 and implement the
rest in hardware, whereas for MPEG4 it might instantiate software
bitstream decoding and IDCT along with shader-based motion
compensation and CSC. As far as I know, most fixed-function decoding
hardware is single-context, so a driver might instantiate a
software+shader pipeline if another stream is already playing and
using the hardware, or it might use the fallback modules as a basis
for managing state and let the DRM arbitrate access from multiple
contexts. A driver might also instantiate a fallback pipeline if it
has no hardware support at all for a particular type of video, e.g.
Theora. Lots of variations are possible.
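
For instance, a hypothetical NV40 implementation of create_video_pipe
might end up looking something like this (again, the nv40_* and vl_*
helpers are invented; CALLOC_STRUCT is the usual Gallium allocation
macro):

> /* Purely illustrative; the nv40_* and vl_* helpers are invented. */
> struct pipe_video_context *
> nv40_create_video_pipe(enum pipe_video_profile profile,
>                        unsigned width, unsigned height)
> {
>    struct nv40_video_context *vctx =
>       CALLOC_STRUCT(nv40_video_context);
>
>    if (profile == PIPE_VIDEO_PROFILE_MPEG12_MAIN &&
>        !nv40_video_hw_busy()) {
>       /* Fixed-func IDCT/MC is free: only bitstream decode needs a
>        * software fallback. */
>       vctx->bs = vl_mpeg12_bitstream_decoder_create(width, height);
>       vctx->hw_idct_mc = TRUE;
>    } else {
>       /* No HW support (e.g. Theora), or the single fixed-func
>        * context is already claimed by another stream: software
>        * bitstream decode plus shader-based IDCT/MC. */
>       vctx->bs = vl_sw_bitstream_decoder_create(profile,
>                                                 width, height);
>       vctx->idct = vl_shader_idct_create(width, height);
>       vctx->mc = vl_shader_mc_create(width, height);
>       vctx->hw_idct_mc = FALSE;
>    }
>
>    vctx->base.set_picture_desc = nv40_set_picture_desc;
>    vctx->base.render_picture = nv40_render_picture;
>    vctx->base.put_picture = nv40_put_picture;
>    return &vctx->base;
> }

Either way, the state tracker sees the same pipe_video_context and
never has to know which path was taken.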

Keeping all of this in the state tracker would make using dedicated
hardware or supporting VDPAU and other APIs unpleasant, and would
create a mess going forward; many of these decisions should be made
by driver-side code anyway, which would simplify the state tracker
greatly.

Comments would be appreciated.
