On Mon, 28 Jan 2002, Curtis Veit wrote:

> On Sun, Jan 27, 2002 at 11:21:55AM +0100, Christoph Egger wrote:
> >
> > Curtis wrote:
> > > [...] I may actually get enough background to start fixing up and
> > > adding pieces to GGI.  I'll be playing with libbuf and libblt and
> > > libovl quite a bit I think.
> >
> > The development of all these three libs is mainly bottlenecked by two
> > things:
> >
> > 1) libgalloc needs to reach a more mature state
>
> Is discussion about this ongoing?  Perhaps we should discuss
> what is needed on the GGI list.

Not really, sorry. Only Brian and I are involved in the todo list (and
perhaps Eric a little). Brian left his latest work on libgalloc unfinished
(i.e. the resource-release issue and some missing bugfixes). On top of
that, the targets need a major update.

Brian: Could you be a bit more verbose on these points, please?


> > 2) libovl, libbuf and libblt need to interact with each other transparently
> >    to the user
> >
> Yes, I'll see how well these work in my situations soon.

Any ideas are welcome. There are no stupid ideas, only stupid ways of
implementing them... :-))


> > Getting these points done would mean a BIG milestone in the development
> > of the whole GGI project.
> >
> >
> > > I also plan to use the batch-ops.
> >
> > Having at least a working framework for the batch-ops is a requirement
> > for getting point 2) done.
> >
>
> I think batch-ops done correctly will be a key element in the
> foundation.  I'll start rereading the docs from Brian to see if I
> really get it.

Yep, that's right.
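
For reference, the rough idea behind the batch-ops is to queue several
drawing operations and submit them to the accelerator in one go. A minimal
sketch, with invented function names (this is not the real API; see
Brian's docs for the actual design):

    /* Hypothetical batch-op usage -- every name below is
     * illustrative only, not the real LibGGI API. */
    ggi_batch_t *b;

    b = ggiBatchBegin(vis);                     /* start collecting ops   */
    ggiBatchDrawBox(b, 0, 0, 64, 64);           /* queued, not executed   */
    ggiBatchCopyBox(b, 100, 100, 64, 64, 0, 0); /* queued, not executed   */
    ggiBatchEnd(b);                             /* submit the whole batch */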


> > On libgpf: I am close to getting it working again. The new pipeline
> > management makes it possible to auto-optimize the data transfer for
> > speed/memory usage. That makes realtime stuff (e.g. video playback) much
> > easier to realize. I plan to write a target that uses GStreamer's
> > plugins, which would allow libgpf to transfer videos quickly. Perhaps I'll
> > also write an SDL and a DirectFB target to show how flexible libgpf is.
> > Past this point, it should be possible to play videos hw-assisted through
> > SDL/DirectFB (hopefully even with sound, but I can't promise that yet).
> >
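> > As a rough sketch of what that could look like from the user side (every
> > name below is invented for illustration; the real API may differ):
> >
> >     /* Hypothetical libgpf usage -- illustrative names only. */
> >     gpf_handle_t h;
> >
> >     gpfOpen(&h, "input-file:movie.mpg", "output-libggi", vis);
> >     gpfOptimizePipeline(h, GPF_OPT_SPEED);  /* or GPF_OPT_MEMORY */
> >     while (gpfTransferFrame(h) == 0)
> >         ;                                   /* pump frames until EOF */
> >     gpfClose(h);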
>
> I have been very interested in the gstreamer project. They are doing
> some great work, so this sounds excellent to me!
> Is there anything specific I can do to help on libgpf?

Anybody can help out by writing targets, protocols and pipe sublibs
for it. MooZ (a guy on IRC) is working on a pcx target. He is also
interested in writing gif and jpeg targets.
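
To give a feel for the scope of such a target: schematically, an input
target is just a small set of entry points. All names below are invented
for illustration; the real libgpf plugin interface may differ:

    /* Schematic pcx input target -- illustrative names only. */
    #include <stddef.h>

    typedef struct gpf_target gpf_target_t;  /* opaque, assumed */

    static int pcx_open(gpf_target_t *t, const char *path)
    {
        /* open the .pcx file, parse the header, record width,
         * height and pixelformat in the target state */
        return 0;
    }

    static int pcx_read_scanline(gpf_target_t *t, void *buf, size_t len)
    {
        /* RLE-decode one scanline of the image into buf */
        return 0;
    }

    static void pcx_close(gpf_target_t *t)
    {
        /* free the state and close the file */
    }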

This week, I have been busy fixing bugs and cleaning up the code.

> > > I'll play with all of this running under fbdev and kgi on matrox
> > > cards. I'll share any fix ups for system architecture as they become
> > > necessary.
> >
> > Have I already given you CVS write access?
>
> No, I don't think so.  I haven't contributed enough to be worth access
> up to now.  I hope to change that, though.
> If you feel confident in my ability, go ahead and set it up.  (Do I need
> a specific password?)

First, you must be a registered user at SourceForge. Then I need your
login name to add you to the developers' list.


> > Here's the latest doc about LibVideo:
> >
> > Introduction
> >
> > This is more of a "mission statement" than a specification for the
> > LibVideo GGI extension and its associated extension LibGPF.  Though an
> > API has been presented for LibVideo, it is only a straw man, intended for
> > improvement and revision.  In addition, the LibGPF API is relatively new
> > and will be adapting to better accommodate LibVideo.
> >
> > LibVideo is intended to provide video support on LibGGI visuals, with the
> > capability for various levels of fallback to software implementations
> > (including 3rd party software), depending on the level of hardware support
> > available in the target.
> >
> > LibVideo tries to keep itself as far separated from audio processing as
> > it can.  However, since audio processing is often entangled with video
> > processing, LibVideo does contain some audio-related code.  This
> > functionality is limited to simple informational elements containing the
> > information necessary to attach a third-party sound library or
> > application.  Callback function support will be provided in cases where
> > the audio is hopelessly entangled.
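> >
> > Schematically, that informational part might amount to no more than
> > something like this (hypothetical declarations, for illustration only):
> >
> >     /* Hypothetical -- illustrative only, not a committed API. */
> >     #include <stddef.h>
> >
> >     typedef struct {
> >         int sample_rate;  /* Hz */
> >         int channels;     /* mono, stereo, ... */
> >         int format;       /* sample format tag */
> >     } video_audioinfo_t;
> >
> >     /* Invoked with decoded audio, for a 3rd-party sound
> >      * library or application to consume. */
> >     typedef void (*video_audiocb_t)(const void *samples,
> >                                     size_t nbytes, void *user);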
>
> How does this relate to specific audio implementations such as ALSA?

I guess you're wondering how it will interact with sound libraries,
concretely? Well, that hasn't been thought out in depth yet; only very
roughly, as explained above.

> There is a project 'jack' which provides a callback-style API on top of
> ALSA. Perhaps this is needed, or at least handy, wherever audio and video
> functionality mesh?  (I don't know, but will look into it as time
> permits.)

If its interface fits, we can probably use it directly.
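
From a quick look, jack's model is indeed a pull-style process callback.
A minimal sketch using the real JACK client API (no ports set up here,
just the callback registration):

    #include <jack/jack.h>

    static int process(jack_nframes_t nframes, void *arg)
    {
        /* jack calls this from its realtime thread once per period;
         * a LibVideo audio callback could hand decoded samples over here */
        return 0;
    }

    int main(void)
    {
        jack_client_t *c = jack_client_open("libvideo", JackNullOption, NULL);
        if (c == NULL)
            return 1;
        jack_set_process_callback(c, process, NULL);
        jack_activate(c);
        /* ... run until playback ends ... */
        jack_client_close(c);
        return 0;
    }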


> >
> > Let us look at some of the operational modes that LibVideo needs to
> > support:
> >
> > 1) Full software fallback
> >
> > In this instance there is no actual hardware to accelerate video overlays
> > or stream control operations.  An example is playback of an encoded video
> > file on disk, when the graphics card contains no decoder or overlay
> > functionality.  LibVideo simply finds either A) a way to use an
> > accelerated operation (e.g. bitblt from VRAM or a DMA texture load) to
> > render video data, or B) a way to access a window of the framebuffer
> > directly.
> > Information about this area is given to LibGPF, which loads either an
> > output-libblt or output-libggi sublib, an input sublib that handles
> > reading the file from the disk, and a conversion sublib to do the
> > decoding.  The latter two may be simply wrappers around 3rd party
> > software.  Information about the LibGGI display is passed from the output
> > sublib to the conversion sublib as necessary to locate buffers and choose
> > intermediate formats.
> >
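> > As a straw-man sketch of the chain for this case (all names invented
> > for illustration):
> >
> >     /* Mode 1: full software fallback -- illustrative names only. */
> >     gpfLoadInput(pipe, "input-file", "movie.mpg");
> >     gpfLoadConvert(pipe, "convert-mpeg");          /* may wrap 3rd-party code */
> >     if (have_accel)
> >         gpfLoadOutput(pipe, "output-libblt", vis); /* case A: blit/DMA path   */
> >     else
> >         gpfLoadOutput(pipe, "output-libggi", vis); /* case B: direct window   */
> >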
> > 2) Software fallback with overlay:
> >
> > When the chipset supports overlay, LibVideo allocates the overlay using
> > LibOvl and instructs LibGPF to load an output-libovl sublib instead;
> > otherwise everything is kept pretty much the same.
> >
> > 3) Hardware decoders:
> >
> > In the case of hardware decoders, LibVideo attempts to match the output
> > of the decoder as closely as it can to an overlay, or, if overlays are
> > not available, to the BitBlt texture format or native framebuffer format,
> > so that minimal color/buffer conversion is required.  LibGPF provides a
> > dummy
> > conversion library in the event that the decoder can write directly to the
> > graphics hardware, or otherwise a simple colorspace/pixelformat conversion
> > sublib.  LibVideo hands the libggi (or libblt or libovl) output sublib
> > information about the LibGGI display as it would normally do above.  If it
> > is loaded, the dummy conversion library takes this information and simply
> > passes it on to the input sublib, allowing the decoder to access the
> > graphics resource directly.
> >
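> > The matching step itself can be quite simple. A sketch (invented names;
> > formats are treated as opaque tags here):
> >
> >     /* Prefer the decoder's own output order; an exact match lets
> >      * LibGPF load the dummy conversion sublib, otherwise a real
> >      * colorspace/pixelformat converter is needed. */
> >     static int pick_format(const int *dec, int ndec,
> >                            const int *out, int nout)
> >     {
> >         int i, j;
> >
> >         for (i = 0; i < ndec; i++)
> >             for (j = 0; j < nout; j++)
> >                 if (dec[i] == out[j])
> >                     return dec[i];   /* exact match */
> >         return -1;                   /* no match: convert */
> >     }
> >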
> > 4) Fully integrated hardware:
> >
> > LibVideo negotiates an overlay and instructs LibGPF to load a do-nothing
> > sublib for input and output.  A conversion sublib is loaded which performs
> > no actual conversion, but rather simply acts as a point of control over
> > the hardware.
> >
> > 5) Accelerator integrated software decoders:
> >
> > On the long-term TODO list of LibGPF will be converters implementing
> > decoders which bypass the libblt output sublib and use LibBlt more
> > vigorously, allowing the Blt engine to perform the final stage of the
> > decoding using stretching/pattern fill/etc.  LibBlt is a perfect fit for
> > this task, as it provides a virtualized accelerator command queue.
> >
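> > A sketch of what such a converter might queue (invented names only):
> >
> >     /* Final decoding stage done by the Blt engine -- illustrative. */
> >     bltQueueStretchBlit(q, &block, &dst);  /* upscaling/interpolation */
> >     bltQueuePatternFill(q, &run, pattern); /* fill of repeated runs   */
> >     bltQueueFlush(q);                      /* submit the whole batch  */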
> >
> > Information about libgpf can be found in libgpf/doc/libgpf-documentation.
> >
> >
> > Overlay control:
> >
> > Aside from the absence of blend factors and keying, which we are working
> > to add (because they belong there anyway), we feel the LibOvl API covers
> > the bases as far as determining the positioning/scaling/priority of a
> > video overlay window goes.  When an overlay is used which supports the
> > functionality described in the LibOvl documentation, LibOvl will be used
> > to perform these functions.  The decision to be made now is whether to
> > provide a full analog to LibOvl's positioning API in LibVideo, or whether
> > a simpler API is desirable.
> >
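> > To make the decision concrete, the two options might look like this
> > (invented names only, nothing is committed):
> >
> >     /* Full analog to LibOvl's positioning API: */
> >     videoSetPos(v, x, y);
> >     videoSetSize(v, w, h);
> >     videoSetZOrder(v, prio);
> >
> >     /* ...or one simpler combined call: */
> >     videoPlaceWindow(v, x, y, w, h, prio);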
>
> The above actually looks like a pretty good summary to me. The gear
> that I am (mostly) looking at is fully integrated HW. So as I see it, the
> trick is to provide an API which works for understanding and specifying
> the available actions (like: connect the output of Mpeg decoder 0 to
> Screen Layer 3), and then being able to implement it in all the cases you
> have given above.  I am going back through the docs for the GGI libs and
> will send some comments about the specific APIs.

Exactly! You got it.
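
Put as code, your example might look like this one day (invented names,
nothing like this exists yet):

    /* "Connect the output of Mpeg decoder 0 to Screen Layer 3" --
     * purely illustrative, no such API exists yet. */
    videoConnect(vis,
                 VIDEO_SOURCE(VIDEO_MPEG_DECODER, 0),
                 VIDEO_SINK(VIDEO_SCREEN_LAYER, 3));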

> The fact is that for the highly integrated HW, GGI really does not do
> much.  It is more an issue of tracking what can be done and providing
> an interface to do it.  Then we fall back to using the 'regular'
> portions of GGI for graphics on a specific layer (or layers).
>
> Of course my desire is to see the same interface when the HW is not
> as integrated.

That is exactly the full intention.


CU,

Christoph Egger
E-Mail: [EMAIL PROTECTED]
