Keith Whitwell wrote:
> On Mon, 2009-01-19 at 05:39 -0800, Younes Manton wrote:
>> I've been taking a look at VDPAU and how to support it on cards with
>> and without hardware support, and I have some thoughts. The VDPAU API
>> lets the client pass off the entire video bitstream and takes care of
>> the rest of the decoding pipeline. This is fine if you have hardware
>> that can handle that, but if not you have to do at least parts of it
>> in software. Even for MPEG2, most cards don't have hardware to decode
>> the bitstream, so to support VDPAU there would need to be a software
>> fallback. This is probably why Nvidia isn't currently supporting
>> VDPAU for pre-NV50 cards.
>>
>> It seems to me that all of this software fallback business is outside
>> the scope of a state tracker. I can see this state tracker getting
>> very large and ugly if we have to deal with fallbacks and if anyone
>> wants to support fixed-function decoding in the future. I think the
>> much better solution is to extend Gallium to support a very minimal
>> video decoding interface. The idea would be something along the lines
>> of:
>>
>>> picture_desc_mpeg12;
>>> picture_desc_h264;
>>> picture_desc_vc1;
>>> ...
>>>
>>> pipe_video_context
>>> {
>>>     set_picture_desc(...)
>>>     render_picture(bitstream, ..., surface)
>>>     put_picture(src, dst)
>>>     ...
>>> };
>>>
>>> create_video_pipe(profile, width, height, ...)
>>
>> The driver would then implement the above any way it chooses. Going
>> along with that would be some generic fallback modules, like the
>> current draw module, that can be arranged in a pipeline to implement
>> things like software bitstream decode for various formats, software
>> and shader-based IDCT, shader-based mocomp, colour space conversion,
>> etc.
>>
>> An NV50 driver might implement pipe_video_context mostly in hardware,
>> along with shader-based colour space conversion. An NV40 driver for
>> MPEG2 might instantiate a software bitstream decoder and implement
>> the rest in hardware, whereas for MPEG4 it might instantiate software
>> bitstream and IDCT along with shader-based MC and CSC. As far as I
>> know, most fixed-function decoding HW is single-context, so a driver
>> might instantiate a software+shader pipeline if another stream is
>> already playing and using the HW, or it might use it as a basis for
>> managing states and letting DRM arbitrate access from multiple
>> contexts. A driver might instantiate a fallback pipeline if it had
>> no hardware support for a particular type of video, e.g. Theora.
>> Lots of variations are possible.
>>
>> Having things in the state tracker makes using dedicated hardware or
>> supporting VDPAU and others unpleasant and would create a mess going
>> forward; many of these decisions should be made by driver-side code
>> anyway, which will simplify the state tracker greatly.
>>
>> Comments would be appreciated.
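(Fleshing the pseudocode above out into C, since it helps me see it;
every name and signature here is my guess at what Younes means, and
none of it exists:)

    struct pipe_screen;
    struct pipe_surface;

    struct pipe_picture_desc {
       unsigned profile;              /* e.g. PIPE_VIDEO_PROFILE_MPEG2 */
    };

    struct pipe_mpeg12_picture_desc {
       struct pipe_picture_desc base;
       unsigned picture_coding_type;  /* I/P/B */
       struct pipe_surface *forward_ref, *backward_ref;
       /* ... quant matrices, field/frame flags, etc. ... */
    };

    struct pipe_video_context {
       void (*set_picture_desc)(struct pipe_video_context *vpipe,
                                const struct pipe_picture_desc *desc);

       /* Consume a picture's worth of bitstream and render the
        * decoded result into 'surface'. */
       void (*render_picture)(struct pipe_video_context *vpipe,
                              const void *bitstream, unsigned num_bytes,
                              struct pipe_surface *surface);

       /* Colour space conversion / scaled blit to the final target. */
       void (*put_picture)(struct pipe_video_context *vpipe,
                           struct pipe_surface *src,
                           struct pipe_surface *dst);

       void (*destroy)(struct pipe_video_context *vpipe);
    };

    /* The driver creates this per codec profile and size, so it can
     * pick hardware, shader-based, or software paths up front. */
    struct pipe_video_context *
    create_video_pipe(struct pipe_screen *screen, unsigned profile,
                      unsigned width, unsigned height);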
Well, it occurs to me that almost all video decoding is done in a
pipeline, and we don't want to do any steps in software between
hardware steps. It also occurs to me that there's a big disconnect
between the actual video format and the container that encapsulates
it, which could be anything from MPEG to Matroska (my favorite) to
OGM to AVI, etc. With those in mind, we could look at the big picture
like so:

- Gallium is only responsible for the formats themselves, and not the
  containers. Any data required to decompress a raw frame that's
  normally stored in the container should be passed alongside the
  frame.
- Drivers declare all the formats they can handle.
- Drivers have one hook for taking in video frames, maybe in a
  context, maybe in a video_context.
- Drivers are responsible for all steps of decoding frames, but are
  free to use helpers in util or video or whatever auxiliary module
  we decide to put them in. Things like, say, video_do_idct() or
  video_do_huffman() might be possible; there's a rough sketch of
  what I mean at the bottom of this mail.
- Drivers probably shouldn't mix and match hardware and software
  steps, although this is up to the driver. In a sequence like
  video_do_foo(); nouveau_do_bar(); video_do_baz(); I would guess
  that migrating the data to the hardware and back would take longer
  than just doing video_do_bar() in software instead, but that's just
  my opinion, and I'm sure that not all chipsets are quite like that.
- I think that once a frame's decompressed, we can use the normal
  methods for putting the frame to a buffer, although I'm sure that
  people are going to reply and tell me why that's not a good
  idea. :3

So from this perspective, support for new formats needs to be
explicitly added per driver: PIPE_FORMAT_MPEG2, PIPE_FORMAT_THEORA,
PIPE_FORMAT_XVID, PIPE_FORMAT_H264, etc.

Not sure how much of this makes sense, as I'm still waking up, but I
think I'm reasonably coherent.

~ C.
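PS: here's the rough sketch promised above of how the one decode hook
and the auxiliary helpers could fit together in a driver, assuming
the per-format PIPE_FORMAT_* entries. Every function and type name
below is made up for illustration; even video_do_idct() /
video_do_huffman(), which I floated above, don't exist yet.

    /* Hypothetical sketch -- nothing here exists yet.  The driver
     * declares which codec formats it handles and exposes one decode
     * hook; inside, it can call software helpers from a shared
     * auxiliary video module or hand work to the hardware. */

    #include <assert.h>
    #include "pipe/p_context.h"  /* struct pipe_context, pipe_surface */

    /* Software helpers from the auxiliary module: */
    void video_do_huffman(const void *bits, unsigned size,
                          short *coeffs);
    void video_do_idct(const short *coeffs, unsigned n_blocks,
                       unsigned char *pels);

    /* Hardware steps the driver implements itself: */
    void nv40_hw_mocomp_csc(struct pipe_context *pipe,
                            const unsigned char *pels,
                            struct pipe_surface *dst);

    /* Formats this driver claims to decode (new pipe_format
     * entries): */
    static const enum pipe_format nv40_video_formats[] = {
       PIPE_FORMAT_MPEG2,
    };

    /* The single decode hook.  Container-level parsing already
     * happened in the state tracker; only the raw frame plus the
     * side data needed to decompress it arrive here. */
    static void
    nv40_decode_frame(struct pipe_context *pipe,
                      enum pipe_format format,
                      const void *frame, unsigned size,
                      struct pipe_surface *dst)
    {
       short coeffs[6 * 64];            /* one macroblock, for brevity */
       unsigned char pels[16 * 16 * 3 / 2];

       assert(format == PIPE_FORMAT_MPEG2);
       video_do_huffman(frame, size, coeffs);  /* software step */
       video_do_idct(coeffs, 6, pels);         /* software step */
       nv40_hw_mocomp_csc(pipe, pels, dst);    /* hardware finishes,
                                                  keeping HW steps
                                                  together so we don't
                                                  bounce CPU<->GPU */
    }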