Re: [maemo-developers] maemo mplayer development and its possible future use on Nokia 770
On Tuesday 12 December 2006 01:57, you wrote:
> My original goal of posting the previous message was an attempt to find
> a volunteer who would like to try developing such a frontend.
>
> I don't have that much time to devote to mplayer development myself. Up
> until this moment I could not even concentrate on solving one specific
> task, but tried some bits with MP3 audio output, decoder improvements,
> GUI and user interface, fixing ARM-specific bugs, and now video output
> code with hardware YUV support. Some management work, integration of
> useful patches and support for users in the forums also takes time. I
> would like to concentrate on a task such as video decoder optimizations
> for ARM, but seeing that other parts are not in good shape distracts
> attention somewhat :-)

Well, after reading this part again today, it sounds a bit controversial, and I'm sorry about that. I'm actually not the only one who took part in porting and improving mplayer for maemo. Ed Bartosh created deb packages and Josep Torra optimized the MPEG decoder for the Nokia 770. Some patches were also taken from AGAWA Koji's Zaurus port of mplayer. Not to mention the numerous upstream developers of mplayer and ffmpeg who created this nice piece of software and were very helpful (especially Mans Rullgard, who developed the initial version of the armv5te-optimized IDCT code). There were also many people who provided useful information and valuable comments.

Consider this just my regret at not being able to contribute to mplayer development as much as I should, not a rant about 'nobody is helping' :-) I think we still really need a proper credits tab in the mplayer GUI.

But I expect that having more people contributing to maemo mplayer development can produce some very nice results. So feel free to join.

___
maemo-developers mailing list
maemo-developers@maemo.org
https://maemo.org/mailman/listinfo/maemo-developers
Re: [maemo-developers] maemo mplayer development and its possible future use on Nokia 770
On Monday 11 December 2006 11:26, Frantisek Dufka wrote:
> Yes, the result would look like video overlay works in windows or linux
> on PC - overlay draws over different windows when it shouldn't :-) We
> can live with that.

I thought that was actually not a problem but quite a good thing :-) For mplayer as a standalone video player, supporting the keyboard and initializing its own window is certainly important. But if it just outputs video into some rectangular screen area (provided by some other application) and is controlled by issuing commands through a pipe, it becomes possible to develop advanced frontends which use mplayer as a video rendering engine. For example, a twin of the standard Nokia 770 video player which simulates all its GUI controls could be created.

My original goal of posting the previous message was an attempt to find a volunteer who would like to try developing such a frontend.

I don't have that much time to devote to mplayer development myself. Up until this moment I could not even concentrate on solving one specific task, but tried some bits with MP3 audio output, decoder improvements, GUI and user interface, fixing ARM-specific bugs, and now video output code with hardware YUV support. Some management work, integration of useful patches and support for users in the forums also takes time. I would like to concentrate on a task such as video decoder optimizations for ARM, but seeing that other parts are not in good shape distracts attention somewhat :-)

> As for framebuffer permissions, it may be better to
> relax device permissions than to run mplayer as root.

The cleanest way to solve this issue is probably to add 'user' to the 'video' group. Alternative solutions involve messing with the mplayer binary's ownership and suid/sgid bits. I wonder what can be done automatically, in the least intrusive way, when installing the mplayer package?
> Well, the conversion is done on the fly while the data is transferred
> to the internal Epson video buffer. I guess it would be hard to do
> planar YUV -> RGB without additional memory. I still don't understand
> how it is done on the fly even in those packed formats, since some
> color information (U,V) is common to several lines. Seems like a tough
> task. There needs to be additional memory for remembering the U,V parts
> from the previous line.

YUV422 is a good format as it matches the 16-bit RGB format quite well. Both of them use 16 bits per pixel, and YUV422 encodes each pair of pixels into a stride of 4 bytes (16-bit RGB encodes each pixel into 2 bytes, but you can also treat it as 2 pixels in 4 bytes). So we can mix YUV422 and RGB data in a framebuffer quite conveniently.

> > Another interesting possibility is to relay video scaling and color
> > conversion (planar -> packed YUV) to the DSP.
>
> I'm not sure, is there some math involved in this or is it just memory
> shuffling? I guess the DSP would be really bad at memory shuffling.
> From previous discussions it looks like all kinds of bottlenecks appear
> when you add the DSP to the mix. I wonder if gstreamer/dspfbsink could
> keep up with mplayer's speed doing just conversion and video output.

Actually the DSP may be a good choice for scaling; if you check the same spru098.pdf you will find the "Pixel interpolation Hardware Extension" part :-) It also looks like dspfbsink uses the DSP for scaling, as it provides a mapped memory area for planar YV12 data (or a variant of it) and accepts a command to do the rendering.
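To illustrate why YUV422 mixes so well with 16-bit RGB, here is a minimal sketch of how a pair of pixels is packed into 4 bytes in the common YUYV byte order (the function name is mine for illustration, not mplayer code):

```c
#include <stdint.h>

/* Pack two horizontally adjacent pixels into one 4-byte YUV422 (YUYV)
 * unit: each pixel has its own luma (Y0, Y1), the pair shares one U and
 * one V sample, giving 16 bits per pixel on average - the same budget
 * as RGB565. Byte order in memory: Y0, U, Y1, V (little-endian word). */
static uint32_t pack_yuyv(uint8_t y0, uint8_t u, uint8_t y1, uint8_t v)
{
    return (uint32_t)y0 | ((uint32_t)u << 8) |
           ((uint32_t)y1 << 16) | ((uint32_t)v << 24);
}
```

Since both formats occupy 4 bytes per pixel pair, a YUYV region and an RGB565 region can sit side by side in the same 16bpp framebuffer without any stride tricks.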
I looked through the xserver sources and the gst plugins to dig for information, and I think I got some impression of how they work, but this all deserves a separate post along with some additional inquiries addressed to Nokia developers :-)

The ARM core can perform YV12->YUV422 conversion quite fast if properly optimized; I even suspect that it can provide a throughput comparable to memcpy (as memory controller/write buffer performance is the limiting factor here, and some data shuffling will not make much difference). The benchmarks in my previous message use the standard color conversion/scaling code from mplayer, which is not optimized for ARM.

But plain color format conversion is a special case; sometimes scaling is required, and the mplayer scaler is rather slow. Scaling performed by mplayer was completely unusable for the RGB target colorspace with the x11 driver, which is why the maemo build of mplayer had a fallback to SDL when playback of scaled video was required. Now, with YUV422 as the target colorspace, it is slow but still usable and a bit better than SDL. If we want a fast scaler for ARM, using a JIT is a good option (and I have some experience in developing a JIT translator for x86).

Anyway, I hope that by using the DSP for scaling and running it asynchronously, it is possible to reduce ARM core usage to almost zero and keep all the resources for video decoding. A related interesting observation is that the screen update ioctl does not seem to affect performance at all (commenting it out does not improve performance and naturally w
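For reference, here is an unoptimized C sketch of the unscaled YV12 -> YUV422 (YUYV) conversion discussed above. Each chroma sample from the 4:2:0 planes is reused for two output lines, so the inner loop is essentially just data shuffling; this is my illustration, not the actual mplayer code (in YV12 the V plane is stored before the U plane, but the function takes the plane pointers separately, so plane order does not matter here):

```c
#include <stdint.h>

/* Convert planar YV12 (4:2:0) to packed YUYV (4:2:2), no scaling.
 * Width and height are assumed even; each U/V sample covers a 2x2
 * luma block, so one chroma row serves two luma rows. */
static void yv12_to_yuyv(const uint8_t *y, const uint8_t *u,
                         const uint8_t *v, uint8_t *dst,
                         int width, int height)
{
    for (int j = 0; j < height; j++) {
        const uint8_t *urow = u + (j / 2) * (width / 2);
        const uint8_t *vrow = v + (j / 2) * (width / 2);
        for (int i = 0; i < width; i += 2) {
            *dst++ = y[i];        /* Y0 */
            *dst++ = urow[i / 2]; /* U, shared by the pixel pair */
            *dst++ = y[i + 1];    /* Y1 */
            *dst++ = vrow[i / 2]; /* V, shared by the pixel pair */
        }
        y += width;
    }
}
```

An ARM-optimized version would read and write whole words instead of single bytes, which is why a throughput close to memcpy seems plausible.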
Re: [maemo-developers] maemo mplayer development and its possible future use on Nokia 770
Siarhei Siamashka wrote:
> Surely all these problems can be fixed by implementing hybrid
> x11/framebuffer code where x11 is responsible for keyboard input and
> sets the video mode so that no other application draws over the screen
> area used by mplayer.

Yes, the result would look like video overlay works in windows or linux on PC - the overlay draws over different windows when it shouldn't :-) We can live with that. As for framebuffer permissions, it may be better to relax the device permissions than to run mplayer as root.

> I needed a confirmation that the Epson chip supports only packed YUV
> formats and no planar formats are really available

Well, the conversion is done on the fly while the data is transferred to the internal Epson video buffer. I guess it would be hard to do planar YUV -> RGB without additional memory. I still don't understand how it is done on the fly even in those packed formats, since some color information (U,V) is common to several lines. Seems like a tough task. There needs to be additional memory for remembering the U,V parts from the previous line.

> Another interesting possibility is to relay video scaling and color
> conversion (planar -> packed YUV) to the DSP.

I'm not sure, is there some math involved in this or is it just memory shuffling? I guess the DSP would be really bad at memory shuffling. From previous discussions it looks like all kinds of bottlenecks appear when you add the DSP to the mix. I wonder if gstreamer/dspfbsink could keep up with mplayer's speed doing just conversion and video output.

Oh BTW, it is off topic, but I finally found what that cryptic 'Video hardware accelerators for DCT, iDCT, pixel interpolation, and motion estimation for video compression' feature on the OMAP 1710 page means. It looks like the DSP really has some special instructions for performing those operations; google for spru098.pdf. It is sad that the default video player is still so bad with such features implemented in hardware.
Frantisek
[maemo-developers] maemo mplayer development and its possible future use on Nokia 770
Hello All,

I have just uploaded a new build of mplayer (mplayer_1.0rc1-maemo.3) to garage, which implements an experimental and not yet clean video output method using hardware YUV colorspace and direct access to the framebuffer. It's not quite usable yet, as framebuffer access is not allowed when running mplayer as an ordinary user (the framebuffer device is owned by root:video with 660 permissions). Also, right now there are problems with keyboard input, and other applications may cause some flicker (for example, the clock applet or the google search applet overlap fullscreen video when they are redrawn). Surely all these problems can be fixed by implementing hybrid x11/framebuffer code where x11 is responsible for keyboard input and sets the video mode so that no other application draws over the screen area used by mplayer.

But having plain framebuffer access creates an interesting possibility. It looks like mplayer can coexist with other applications nicely, and if they provide some rectangular area for video output, mplayer can be used from them in slave mode (http://www.mplayerhq.hu/DOCS/tech/slave.txt). Mplayer just needs to be extended to accept a command line option or a slave command to specify/change the screen region that it will use for video playback. I also noticed that there is an initiative to control video players using d-bus:
http://lists.mplayerhq.hu/pipermail/mplayer-dev-eng/2006-December/048067.html

So maybe someone can develop a more advanced gui frontend for mplayer with all the eye candy and convenient gui controls. Let me know if you are interested in this idea and need some help or more information from me.

Also thanks to Frantisek Dufka for the Epson chip documentation; having full and reliable information adds a certain level of confidence and encourages development (I needed a confirmation that the Epson chip supports only packed YUV formats and that no planar formats are really available).
Next come the current benchmark results (mplayer_1.0rc1-maemo.3) showing some recent improvements; you can skip this part if you are not interested. Tested with the following video file (its first 100 seconds):

VIDEO: [DIV3] 640x480 24bpp 23.976 fps 779.1 kbps (95.1 kbyte/s)

Without hardware YUV colorspace support (-vo x11):

# mplayer -endpos 100 -benchmark -vo x11 -nosound -quiet
...
SwScaler: using unscaled yuv420p -> bgr565 special converter
BENCHMARKs: VC: 122.215s VO: 90.458s A: 0.000s Sys: 1.769s = 214.442s
BENCHMARK%: VC: 56.9918% VO: 42.1831% A: 0.% Sys: 0.8250% = 100.%

Now using framebuffer output with YUV422 colorspace (-vo nokia770):

# mplayer -endpos 100 -benchmark -vo nokia770 -nosound -quiet testfile.avi
...
SwScaler: using unscaled yuv420p -> yuyv422 special converter
BENCHMARKs: VC: 121.282s VO: 31.538s A: 0.000s Sys: 1.577s = 154.397s
BENCHMARK%: VC: 78.5517% VO: 20.4267% A: 0.% Sys: 1.0216% = 100.%

VC - the raw time it took to decode this video fragment
VO - the time it took to display it on screen (performing color conversion)
=  - the total time including decoding and displaying (preferably this number should stay below 100 seconds if we want to play this video in realtime)

So no realtime playback for 640x480 videos yet, but we got a bit closer to it (and lower bitrate/resolution videos should be supported better, with less battery power consumption).

Another interesting possibility is to relay video scaling and color conversion (planar -> packed YUV) to the DSP. This method is used by the dspfbsink gstreamer plugin, and another nice feature is that DSP tasks are accessible to an ordinary user. But I'll post some more details and thoughts a bit later in another message.