Great!
libXvMCg3dvl.so could probably load a device-dependent XvMC driver
dynamically, based on the current device: for example, libnouveau_dri.so
or libradeon_dri.so.
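
Roughly the usual dlopen() dance, I guess. A minimal sketch, assuming a
made-up entry point name ("vl_create_screen") and a made-up install path,
just to illustrate the idea:

    #include <dlfcn.h>
    #include <stdio.h>

    struct vl_screen;  /* whatever the g3dvl screen type ends up being */

    /* hypothetical entry point each device-dependent driver exports */
    typedef struct vl_screen *(*vl_create_screen_t)(int fd);

    static struct vl_screen *
    load_video_driver(const char *name, int fd)
    {
       char path[256];
       void *handle;
       vl_create_screen_t create;

       /* e.g. "nouveau" -> /usr/lib/dri/libnouveau_dri.so */
       snprintf(path, sizeof(path), "/usr/lib/dri/lib%s_dri.so", name);

       handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
       if (!handle)
          return NULL;

       create = (vl_create_screen_t)dlsym(handle, "vl_create_screen");
       if (!create) {
          dlclose(handle);
          return NULL;
       }

       return create(fd);
    }
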
btw, the context and screen code are still tied to DRI; I am changing them
to DRI2 in order to use the Gallium3D functions.
Cooper
On Mon, Sep 21, 2009 at 10:22 AM, Younes Manton <youne...@gmail.com> wrote:
> On Mon, Jan 19, 2009 at 9:39 AM, Younes Manton <youne...@gmail.com> wrote:
> > I've been taking a look at VDPAU and how to support it on cards with
> > and without hardware support and I have some thoughts. The VDPAU API
> > lets the client pass off the entire video bitstream and takes care of
> > the rest of the decoding pipeline. This is fine if you have hardware
> > that can handle that, but if not you have to do at least parts of it
> > in software. Even for MPEG2 most cards don't have hardware to decode
> > the bitstream so to support VDPAU there would need to be a software
> > fallback. This is probably why Nvidia isn't currently supporting VDPAU
> > for pre-NV50 cards.
> >
> > It seems to me that all of this software fallback business is outside
> > the scope of a state tracker. I can see this state tracker getting
> > very large and ugly if we have to deal with fallbacks and if anyone
> > wants to support fixed function decoding in the future. I think the
> > much better solution is to extend Gallium to support a very minimal
> > video decoding interface. The idea would be something along the lines
> > of:
> >
> >> picture_desc_mpeg12;
> >> picture_desc_h264;
> >> picture_desc_vc1;
> >> ...
> >>
> >> pipe_video_context
> >> {
> >>    set_picture_desc(...)
> >>    render_picture(bitstream, ..., surface)
> >>    put_picture(src, dst)
> >> ...
> >> };
> >>
> >> create_video_pipe(profile, width, height, ...)
> >
> > The driver would then implement the above any way it chooses. Going
> > along with that would be some generic fallback modules like the
> > current draw module that can be arranged in a pipeline, to implement
> > things like software bitstream decode for various formats, software
> > and shader-based IDCT, shader-based mocomp, and colour space conv,
> > etc.
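> >
> > In sketch form, the fallback modules would chain up something like
> > this (module names invented here purely for illustration):
> >
> >> /* e.g. an all-software + shader pipeline for MPEG2 on older HW */
> >> pipe = vl_sw_bitstream_create(MPEG2);
> >> vl_pipe_append(pipe, vl_shader_idct_create(...));
> >> vl_pipe_append(pipe, vl_shader_mocomp_create(...));
> >> vl_pipe_append(pipe, vl_shader_csc_create(...));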
> >
> > An NV50 driver might implement pipe_video_context mostly in hardware,
> > along with shader based colour space conv. An NV40 driver for MPEG2
> > might instantiate a software bitstream decoder and implement the rest
> > in hardware, whereas for MPEG4 it might instantiate software
> > bitstream and IDCT along with shader-based MC and CSC. As far as I
> > know most fixed func decoding HW is single context, so a driver might
> > instantiate a software+shader pipeline if another stream is already
> > playing and using the HW, or it might use it as a basis for managing
> > states and letting DRM arbitrate access from multiple contexts. A
> > driver might instantiate a fallback pipeline if it had no hardware
> > support for a particular type of video, e.g. Theora. Lots of
> > variations are possible.
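> >
> > Driver-side, the create path might then boil down to something like
> > this (again just pseudocode, all names made up):
> >
> >> create_video_pipe(profile, width, height, ...)
> >> {
> >>    if (hw_decoder_supports(profile) && !hw_decoder_busy())
> >>       return fixed_func_pipe(profile, width, height);
> >>
> >>    /* fixed func HW missing or already in use by another stream:
> >>       build a pipeline out of the generic fallback modules */
> >>    return fallback_pipe(profile, width, height);
> >> }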
> >
> > Having things in the state tracker makes using dedicated hardware or
> > supporting VDPAU and others unpleasant and would create a mess going
> > forward; many of these decisions should be made by driver-side code
> > anyway, which will simplify the state tracker greatly.
> >
> > Comments would be appreciated.
> >
>
> Ok, I think this is far enough along that I may as well push it out
> soon. I'm still chasing down a bug or two, but the interface as far as
> XvMC is concerned looks ok to me. Apologies for the time it took, not
> much spare time these days.
>
> The interface lives in p_video_context.h [1] and the associated state
> objects are in p_video_state.h [2]. The
> pipe_video_context::decode_macroblocks() call is there for XvMC, since
> without the VIA extension XvMC works at the macroblock level. The
> ::decode_bitstream() call is for everyone else. Both take a fence
> param and there's no ::flush() call since we have to consider fixed
> func HW that doesn't work like that. The third function
> ::render_picture() takes decoded video surfaces and does
> scaling+colour conversion and compositing with other surfaces to form
> a final frame.
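>
> In outline the context ends up looking like this (argument lists elided
> here; the real prototypes are in p_video_context.h [1]):
>
>    struct pipe_video_context
>    {
>       /* XvMC-style input: the client hands us parsed macroblocks */
>       void (*decode_macroblocks)(struct pipe_video_context *vpipe, ...);
>
>       /* VDPAU/VA-style input: the client hands us the raw bitstream */
>       void (*decode_bitstream)(struct pipe_video_context *vpipe, ...);
>
>       /* scale + colour convert + composite decoded surfaces
>          into the final frame */
>       void (*render_picture)(struct pipe_video_context *vpipe, ...);
>    };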
>
> I wanted to stick to pipe_surfaces and pipe_textures, but textures and
> fixed func HW probably don't play well together, so there's a
> pipe_video_surface that allows for all of the weird formats that are
> out there (planar NV12, YUYV, etc). Besides, I doubt most of these
> formats can be used as textures by the 3D engine, so there's no point
> in having pipe_texture satisfy two separate uses. On the other hand
> pipe_surfaces are still views into textures and ultimately that's
> where the rendered content ends up. There are
> pipe_screen::video_surface_create and ::video_surface_destroy
> functions to get access to them [3].
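>
> Usage is the obvious thing; a sketch (the exact parameter list is in
> p_screen.h [3], and the format argument here is just a stand-in):
>
>    struct pipe_video_surface *vsfc =
>       screen->video_surface_create(screen, chroma_format,
>                                    width, height);
>    /* ... decode into it, hand it to render_picture() ... */
>    screen->video_surface_destroy(vsfc);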
>
> All of the g3dvl state tracker stuff has been moved into
> src/gallium/auxiliary/vl, mainly in the form of vl_mpeg12_mc_renderer
> [4] (does mocomp only currently) and vl_compositor [5], suitable for
> use in implementing decode_macroblocks() and render_picture(). There
> wasn't much left after that to do in a state tracker and it would have
> ended up as a very light wrapper, so there won't be one for XvMC
> unless someone has a reason for wanting one. VDPAU on the other hand
> might need a state tracker, since there's more to wrap (needs to be
> thread-safe, has more interfaces, etc).
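>
> So a 3D-based implementation of decode_macroblocks() mostly just
> forwards to the shared code, roughly like this (paraphrasing what the
> Softpipe version does, not the literal code):
>
>    static void
>    sp_decode_macroblocks(struct pipe_video_context *vpipe, ...)
>    {
>       struct sp_video_context *ctx = sp_video_context(vpipe);
>       /* hand the macroblock batch to the shared MPEG2 renderer */
>       vl_mpeg12_mc_renderer_render_macroblocks(&ctx->mc_renderer, ...);
>    }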
>
> Just a side note, in case someone is considering an Xv tracker. The
> ::render_picture() API should be good for use by an Xv state tracker,
> and vl_compositor basically implements textured video. The only thing
> currently missing is CPU access to video surfaces, which I don't need
> for XvMC. I'm thinking pipe_transfers or something equivalent to fill
> the gap.
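>
> If that pans out it would mirror the texture transfer path, i.e.
> something like this (hypothetical, none of these video-surface calls
> exist yet):
>
>    struct pipe_transfer *xfer =
>       screen->get_video_surface_transfer(screen, vsfc,
>                                          PIPE_TRANSFER_WRITE,
>                                          0, 0, width, height);
>    void *map = screen->transfer_map(screen, xfer);
>    /* copy the client's YUV data in */
>    screen->transfer_unmap(screen, xfer);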
>
> You can look at the additions to Softpipe [6,7] to see how I implemented
> the interface and how to use the auxiliary libs in about 250 lines with
> plenty of whitespace. Most of the work is done by the vl libs so
> unless you have fixed func HW it's not too hard to get 3D decoding up
> and running.
>
> You can also take a look at the XvMC context [8] and surface [9] code
> to see how to use the interface; it's similarly short. The last part
> of the interface is the winsys [10].
>
> Chances are I forgot something in the interface, so if you spot
> anything please bring it up. Barring any issues, I'll push what I have
> so far once I hunt down the last bug or two, at which point it should
> be roughly equivalent to the current code for Softpipe decoding. The
> only thing I haven't kept up with is the Nouveau winsys, which has
> changed since we moved to GEM/TTM, so I'll have to get back on that
> later to get HW decoding back, which is also broken in the current
> code. Ignore the indentation, I'll run it through indent before
> pushing anything.
>
> Younes