On Thursday 03 May 2007 10:21, Frantisek Dufka wrote:

> Siarhei Siamashka wrote:
> >   If decoding time for
> > each frame will never exceed 28-29ms (which is a tough limitation, cpu
> > usage is not uniform), video playback without dropping any frames will be
> > possible even with tearsync enabled.
>
> Would a double or multiple buffering help with this? 

Yes, most likely it will. The N800 has an 800x480 virtual framebuffer size and a
new, enhanced screen update ioctl. It should now be possible (I have not tried it
yet, but will have some results very soon) to specify the output position and size
of the rectangle as it gets displayed on the screen:

struct omapfb_update_window {
        __u32 x, y;                   /* source rectangle position in the framebuffer */
        __u32 width, height;          /* source rectangle size */
        __u32 format;                 /* pixel format of the source data */
        __u32 out_x, out_y;           /* output position on the display */
        __u32 out_width, out_height;  /* output size on the display (scaling) */
        __u32 reserved[8];
};
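
For reference, a minimal sketch of how a single update might be issued through
this API. The ioctl name OMAPFB_UPDATE_WINDOW, the OMAPFB_COLOR_YUV420 constant
and the header path are assumptions based on the omapfb headers and may need
adjusting for the actual N800 kernel:

/* Sketch only: header path and constant names may differ between kernel
 * versions; fb_fd is assumed to be an open descriptor for /dev/fb0. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/omapfb.h>

static int push_frame(int fb_fd)
{
        struct omapfb_update_window win;

        memset(&win, 0, sizeof(win));
        win.x = 0;                    /* source rectangle inside the framebuffer */
        win.y = 0;
        win.width = 800;
        win.height = 480;
        win.format = OMAPFB_COLOR_YUV420;
        win.out_x = 0;                /* position on the physical screen */
        win.out_y = 0;
        win.out_width = 800;          /* 1:1, full screen */
        win.out_height = 480;

        return ioctl(fb_fd, OMAPFB_UPDATE_WINDOW, &win);
}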

This theoretically allows some kind of double buffering: we can split the
framebuffer into two 400x480 halves and, while one half is being displayed, freely
fill the other one with the data for the next frame. This effectively removes the
need for OMAPFB_SYNC_GFX and improves the peak framerate.

But this solution will require support for arbitrary downscaling of YUV420 frames
so that each video frame fits into a 400x480 box. Quality will also be reduced a
bit, but on the other hand the graphics bus should have no performance problems
pushing 400x480 frames through it.
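
A rough sketch of what the flip loop could look like under this scheme; the
decode_frame_into() helper is hypothetical and just stands in for the decoder
writing the next frame into the chosen half of the framebuffer:

/* Double-buffering sketch: the 800x480 framebuffer is treated as two 400x480
 * halves; while one half is being transferred/scaled to the LCD, the decoder
 * fills the other one. Constant names are assumptions, as above. */
extern void decode_frame_into(void *fb_mem, int x_offset);  /* hypothetical hook */

static void play_loop(int fb_fd, void *fb_mem)
{
        struct omapfb_update_window win;
        int page = 0;

        memset(&win, 0, sizeof(win));
        win.y = 0;
        win.width = 400;
        win.height = 480;
        win.format = OMAPFB_COLOR_YUV420;
        win.out_x = 0;
        win.out_y = 0;
        win.out_width = 800;            /* controller scales back to full width */
        win.out_height = 480;

        for (;;) {
                /* decode the next frame into the half that is not on screen */
                decode_frame_into(fb_mem, page ? 400 : 0);

                /* kick off the transfer/scaling of that half */
                win.x = page ? 400 : 0;
                ioctl(fb_fd, OMAPFB_UPDATE_WINDOW, &win);

                page = !page;           /* flip halves for the next frame */
        }
}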

If the virtual framebuffer size could be extended to 800x960, this would allow us
to use double buffering without sacrificing resolution. Anyway, I'll try to fix the
MPlayer framebuffer output module to work properly with the latest version of the
N800 firmware and implement this form of double buffering. It should provide the
fastest video output performance possible.
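
With an 800x960 virtual framebuffer, the flip would simply toggle the vertical
source offset between two full-resolution pages, with no scaling involved (again
only a sketch, assuming the driver accepts a source rectangle anywhere in the
virtual area):

/* Two full 800x480 pages stacked vertically; flip by toggling win.y. */
static void flip_fullres(int fb_fd, int page)
{
        struct omapfb_update_window win;

        memset(&win, 0, sizeof(win));
        win.x = 0;
        win.y = page ? 480 : 0;       /* pick the page that was just decoded */
        win.width = 800;
        win.height = 480;
        win.format = OMAPFB_COLOR_YUV420;
        win.out_x = 0;
        win.out_y = 0;
        win.out_width = 800;          /* 1:1, no downscaling of the video needed */
        win.out_height = 480;

        ioctl(fb_fd, OMAPFB_UPDATE_WINDOW, &win);
}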

Regarding the Nokia 770, it currently uses an 800x600 virtual framebuffer size
(some extra waste of RAM?). Anyway, if the hwa742 kernel driver could be extended
to support this improved screen update API and respect the 'out_x' and 'out_y'
arguments, we could keep four video pages in framebuffer memory for 400x240
pixel-doubled video output. That would allow a very efficient double buffering
implementation for the accelerated Nokia 770 SDL project, if it ever gets off the
ground :)

> Does mplayer use different threads for displaying and decoding and decode
> frames in advance? 

No, it doesn't have any extra threads now. But video playback on the Nokia 770
is already parallel, splitting tasks between the following pieces of hardware,
each working simultaneously:
1. ARM core (demuxing and decoding video into the framebuffer)
2. DMA + graphics controller (screen update, transferring data from the framebuffer
into video memory and performing YUV->RGB conversion on the fly)
3. C55x DSP core (mp3 audio decoding and playback)

There is not much point in creating many threads on the ARM side, as we only have a
single ARM core and splitting work into several threads will not improve overall
performance. Threads could be useful for doing something extra while waiting for
other hardware components to finish their work (waiting for a screen update, for
example), but decoding ahead also requires storing the decoded data somewhere. The
only sensible place for frames decoded ahead of time is some extra space in
framebuffer memory; otherwise we would lose performance (and battery power) moving
this data into the framebuffer later. As framebuffer space is limited, we would not
be able to store many frames ahead, and decoding CPU usage most likely varies not
between adjacent frames but between different scenes (a complicated action scene
would make us run out of decode-ahead buffer pretty fast). Anyway, this is probably
worth trying later; there is even a thread-based MPlayer fork:
http://mplayerxp.sourceforge.net/