On Thu, 2009-04-23 at 09:18 -0700, Clark Snowdall wrote:
> On Apr 22, 2009, at 10:17 AM, Rusty Lynch wrote:
> 
> > On Tue, 2009-04-21 at 09:48 -0700, Clark Snowdall wrote:
> >> Hello all,
> >>
> >> The project my team and I are working on requires lots of video
> >> embedded in clutter.  We are developing on Acer Aspire, but the  
> >> target
> >> platform is currently the Compal.  I wrote a test app using gstreamer
> >> and clutter that works reasonably well on Aspire, but brings the
> >> Compal to its knees.  I have heard that the Compal (being Menlow) has
> >> video hardware acceleration.  I'm assuming that I'm not tapping into
> >> that power, but would like to.
> >
> > Definitely not.  You have to have codec wrappers that know about the
> > platform hardware acceleration capabilities.
> >
> > Today there is a RealNetworks product (RealPlayer for MID) and a  
> > Fluendo
> > product (not sure what they call it, but it's a GStreamer codec  
> > bundle)
> > that provide this.
> >
> >> From what I understand, the easiest
> >> (with regards to licensing) way to get hardware acceleration goodness
> >> is to get Helix.
> >
> > To be clear... you would need to install the RealNetworks product and
> > then link against those libraries in order to avoid having to obtain
> > your own IP licenses.
> 
> Everyone mentions that I need RealPlayer for MID.  But I don't know  
> where to get this.  Would it be included in the standard RealPlayer  
> for Linux?  Or is it hidden somewhere within the Helix community  
> site?  Furthermore, if I were to use the codecs, which libraries  
> specifically, and where can I get the headers?

There is an email address to ask for rp4mid for personal use on the
rp4mid announcement page:
https://rp4mid.helixcommunity.org/

There is also a private project set up for developers, which requires
establishing a business relationship.  I don't know exactly how that
works, but you could ask about that in the same email.

> > But... if you are deployed on a solution that has the Fluendo codecs
> > installed then, just like the RealPlayer approach, you can use the
> > codecs without having to re-license the codecs.
> 
> I'm starting to contact Fluendo to follow that path as well.
> 
> >
> >
> >> So here are the questions I'd like to pose:
> >>
> >> 1.  Is there some sort of binding like clutter-gst that is required  
> >> to
> >> get Helix into clutter?  Documentation says that Helix is supported  
> >> in
> >> clutter, but there isn't any explanation of how.
> >
> > This capability is currently a work in progress (watch the helix
> > client-dev mailing list for updates).
> >
> > We do this by implementing what we call a media sink, which acts a lot
> > like a gstreamer sink element.  When combined with the ClutterHelix
> > object (which implements ClutterMedia), the helix infrastructure
> > parses/decodes the media and passes the data to ClutterHelix, which
> > then renders it into the backing 2D texture.
> >
> > The ClutterHelix code exists in the clutter git repo, but has not made
> > it into a release since it depends on some helix code that will likely
> > change as we go through the helix review process.
> >
> > http://git.clutter-project.org/cgit.cgi?url=clutter-helix/tree/
> 
> It's starting to sound like a better course would be to follow the  
> gstreamer/Fluendo route in order to get it into clutter.  But with  
> regard to the Helix code, how stable is unstable?  A previous poster  
> mentioned a release is being worked on, but our schedule just got  
> shortened to get a demo up by mid-June.  Do you think we should wait,  
> or go with gstreamer/Fluendo?

I would say that technically both approaches are pretty equivalent,
although the development methodologies and basic framework designs are
very different.  Which one you prefer is largely a matter of taste.

I have found that the business side tends to be the forcing function
that pushes you in one direction or the other.

> >> 2.  Is libva required in any of this?
> >
> > Yes, both the helix and fluendo solutions are built on top of libva.
> >
> >>
> >> 3.  I have downloaded the Helix source code.  Is there anything
> >> special to compile it to use hardware acceleration?
> >
> > Yes, a bunch of compile options... which I don't recall right now.
> > Perhaps some of the other guys can remind me what build configuration
> > options need to be set.
> 
> Anyone else out there know this one?
> 
> >
> >
> > But... like I mentioned above, the code has not landed that allows you
> > to use helix to render hardware accelerated video into a clutter
> > texture.
> >
> >> 4.  If one is going to put a video in a moving, rotating clutter
> >> actor, is it best to decode the video elsewhere in memory and then
> >> just keep updating the pixmap on the actor (one post I read suggests
> >> this gets the most performance out of the hw accelerator)?
> >
> > When using the libva interface you normally don't get the data back
> > out of the hardware; instead you just get a ref id and then perform
> > operations on that id, so that the number of copy operations is
> > limited.  In our helix changes we pass video buffers to the media
> > sink that are really just the ref id plus a colorspace id that lets
> > the buffer consumer know that this is not a real video buffer.
> >
> > When you initialize the libva surface, you pass in a window or a  
> > pixmap
> > for the graphics hardware to render into.  In order to render into a
> > clutter texture, we pass in a pixmap to libva, and then use the
> > texture-from-pixmap operation to construct the backing 2d texture used
> > for the clutter texture.
> >
> > But... the initial graphics driver delivered on menlow products only
> > supported libva operations on a window drawable, so you will need
> > access to the new drivers that have not been released yet.
> 
> Does clutter-gst using the Fluendo codecs allow one to put hardware  
> accelerated video in a clutter actor?  Or will I need to do this  
> pixmap transfer business myself?  For that matter, if neither of these  
> codecs work for us, I assume we can use libva to do this sort of thing  
> ourselves?

clutter-gst currently only supports software decoding.  

The fluendo bundle provides replacement ximagesink and xvimagesink
elements that understand how to deal with libva based blitting, but I
haven't been able to get the patches to that code.  Yes, it's LGPL code,
so it should just be a matter of asking for the patches... and perhaps
they are posted some place that I overlooked.

If somebody could submit those changes back to the gstreamer project
then we would have enough information to implement the equivalent
functionality for the clutter gst sink.

Caveat:
Again, just like with helix, we would still need texture-from-pixmap
support and support for passing in a pixmap to libva.  Without this,
we would have to use the libva method to copy the data out of the
hardware and then write it into the 2d texture.  I don't know how much
of a performance impact that would have.

> As for your helix implementation, you mention using new graphics  
> drivers.  I assume that is what allows you to do this  
> texture-from-pixmap operation?

Correct.

> So even if we got the Helix codec, and got the  
> unstable helix-clutter bindings, we STILL couldn't get the hardware  
> acceleration working because we don't currently have these drivers?   
> Any schedule on these?

All I know is it's almost ready.

> >> Any example code that pertains to this would be most appreciated.
> >
> > Unfortunately the clutter-helix code only implements the
> > software-based rendering (i.e. you can see in
> > clutter-helix-video-texture.c where we check for the special
> > colorspace that indicates that the data in the buffer is really just
> > a libva id, but we haven't added the libva calls to blit the data
> > into the texture).
> 
> Again, sounds like gstreamer/Fluendo is going to get me further along  
> with this.

Well... for software decoding the released clutter-gst just works.
There is still a gap for hardware acceleration.

   --rusty

_______________________________________________
Moblin dev Mailing List
[email protected]

To manage or unsubscribe from this mailing list visit:
https://lists.moblin.org/mailman/listinfo/dev or your user account on 
http://moblin.org once logged in.

For more information on the Moblin Developer Mailing lists visit:
http://moblin.org/community/mailing-lists
