On Sat, Feb 12, 2011 at 12:55 AM, Luke Kenneth Casson Leighton
<[email protected]> wrote:

>> People wanting something for their own purposes is
>> not a compelling argument for why a trade association
>> should fund this work.

>> Like: reduced maintenance, easier bug fixing,
>> more control over SoC-specific optimizations, etc.
>> that result in reduced cost or greater customer
>> satisfaction.
> ok, now i'm out of ideas, honest.

 ok i did actually think of another one.  best illustrated if i go through it.

 let's imagine that someone considers VP8 (google's upcoming
alternative to MPEG) to be really really important to get going on
embedded CPUs, that 1080p30 is really really important, but the ARM
CPU is only a 600mhz Cortex A8, has a 400mhz DSP, and funnily enough
has a PowerVR SGX engine on it (yes, that sounds very much like the
OMAP3530, doesn't it?  bear with me, everyone else gets a say, later
on...)

 so it would be really important, say, for this SoC manufacturer
(hypothetically Texas Instruments), to be able to position their
top-of-the-line but older CPU as being capable of doing Google VP8
when nobody else can, because everyone else has tied themselves to the
"MPEG-in-hardware" mast, and these hardware-MPEG-only 2008-9 CPUs are
stuffed because they only do 720p, if you're really lucky.  to get
1080p30 or the upcoming 1080p60 - which will be increasingly important
as the ISDB-T standard becomes common across brazil, europe and china
(even the tiny market comprising the USA is thinking of adopting
ISDB-T) - all these SoC vendors are *forced* to create entirely new
ICs (18 months, $USD 50 million).  but TI's approach is software (DSP)
so they *might* be able to do more (VP8), ahead of everyone else.

 ... with me so far? :)

so - let's see.  how could this VP8 algorithm be implemented on this
600mhz multi-talented CPU?  ooo, i dunno - let's sponsor a GSoC
student, just like was done for Ogg/Theora.  that would get it
up-and-running, right?  yes, but only at about 720p30 if you're very
lucky and used the 720mhz OMAP3530 ($45) instead of the 600mhz one
($23) - double the price.  but, it cost the SoC vendor nothing -
google paid for the hybrid ARM+DSP software development, work was done
and dusted in 6-8 weeks, everybody's happy.

from what i understand, the horrendously expensive bit of video CODECs
isn't the DCT decode (in MPEG, for example), it's the YUV-to-RGB
conversion: the most truly dreadful bit-level munging and mathematical
approximation algorithms, which even the world's top DSPs from Texas
Instruments *simply* cannot handle efficiently.  AMD's bought-in
500mhz GEODE LX CPU (from National) solved this simply by ensuring
that this lowly x86 CPU had the most basic of MMX instructions, backed
by a YUV-to-RGB hardware macro which took over.
 consequently, at 500mhz, i've seen this lowly CPU do 720x534 MPEG
decode at 25fps, perfectly, and even 1280x1024 at a watchable 20fps if
you didn't mind the lack of triple-buffering occasionally jolting half
the screen at a time.  ok, to be honest, i put up with it because i knew
that this was only a 500mhz low-power CPU, i was more impressed with
that than the film :)
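for anyone who hasn't stared at one of these before, here's what the
conversion being complained about actually looks like - a minimal
sketch of the standard BT.601 studio-range YUV-to-RGB equations in
plain floating point (real codecs use fixed-point approximations of
exactly this, which is where the "bit-level munging" comes in):

```python
def yuv_to_rgb(y, u, v):
    """BT.601 studio-range YUV -> 8-bit RGB, plain floating point."""
    c = y - 16     # luma, offset from studio black (16)
    d = u - 128    # chroma blue, centred on 128
    e = v - 128    # chroma red, centred on 128
    r = 1.164 * c + 1.596 * e
    g = 1.164 * c - 0.392 * d - 0.813 * e
    b = 1.164 * c + 2.017 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# studio-range black (16) and white (235) map to full-range 0 and 255:
print(yuv_to_rgb(16, 128, 128))   # -> (0, 0, 0)
print(yuv_to_rgb(235, 128, 128))  # -> (255, 255, 255)
```

now imagine doing that three-multiply-per-channel dance for every
single pixel, 124 million times a second, on a general-purpose core -
that's why a hardware macro (or a texturing engine) wins.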

now, what the hell has this got to do with PowerVR SGX??

start with their specification: it states that the 3D texturing engine
has, yep, you guessed it - programmable YUV-to-RGB conversion, and
that *even* the version of the PowerVR SGX that's on the 600mhz
OMAP3530 is capable of something like 500 million pixels per second -
can't remember the details, but i found it somewhere "on da
in'ur'neh".

hmm, 500 million pixels per second, let's see... what's 1920 times
1080 times 60?  oh look, that's 124 million pixels per second...
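the arithmetic, for anyone who wants to check it (the 500 million
figure is the ballpark quoted above, not a datasheet number):

```python
width, height, fps = 1920, 1080, 60
pixel_rate = width * height * fps    # pixels per second for 1080p60
print(pixel_rate)                    # -> 124416000

sgx_claimed = 500_000_000            # rough figure from the spec, as quoted
print(sgx_claimed / pixel_rate)      # roughly 4x headroom
```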

so wait... am i seriously suggesting that it could well be the case
that this lowly 600mhz processor, conceived and designed somewhere in
2006/2007, could well *accidentally* have enough processing power to
do pretty much any modern video algorithm, at *full* HD
specifications - not just MPEG or h264 but also the strategically
important Google VP8 algorithms that were designed and written two to
three years _after_ the CPU's specs were finalised?

hmm, i guess i am.

so what's blocking us from finding out, and being able to make a full
technical analysis??

ahhh, now we get to the crunch :)

let's say that you _can_ do a basic analysis, by doing the following:

* the DSP performs the majority of decompression, all but the YUV-RGB
* the DSP "hands over" the results to the ARM CPU
* the ARM CPU calls the proprietary PowerVR SGX 3D Libraries
* a 3D "texture" is created, handing over a big chunk of memory
* the PowerVR engine converts this "texture" from YUV to RGB
* an even larger chunk of memory gets handed back.
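to get a feel for the sizes of those chunks of memory in the steps
above: VP8 (like MPEG) works in YUV 4:2:0, so the "texture" handed to
the SGX is half the size of the 24-bit RGB frame that comes back - a
back-of-the-envelope sketch, assuming 4:2:0 in and RGB888 out:

```python
w, h = 1920, 1080
# YUV 4:2:0: full-size Y plane plus quarter-size U and V planes
yuv420_frame = w * h * 3 // 2
# RGB888 coming back: 3 bytes per pixel
rgb888_frame = w * h * 3

print(yuv420_frame)   # -> 3110400 bytes handed to the SGX
print(rgb888_frame)   # -> 6220800 bytes handed back - twice as big
```

which is why the last step in the list is "an even larger chunk of
memory gets handed back".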

this might even actually result in "success", even before you have a
Free Software PowerVR driver :)

however, the reason for mentioning the memory is that there is going
to be a limit, on the internal bus architecture of the (hypothetical)
OMAP3530 processor, which *could* get in the way (coming and going,
three maybe four times: DSP->ARM, ARM->PowerVR, PowerVR->ARM,
ARM->Video).  bear in mind, 124 million *pixels* per second is
actually 373 million BYTES per second (24-bits per pixel).
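putting rough numbers on that bus traffic (a sketch, assuming 24-bit
pixels and that the full frame crosses the bus on each hand-off):

```python
pixels_per_sec = 1920 * 1080 * 60     # 1080p60
bytes_per_pixel = 3                   # 24-bit RGB
rgb_rate = pixels_per_sec * bytes_per_pixel
print(rgb_rate)                       # -> 373248000 bytes/s, one trip

# three or four trips (DSP->ARM, ARM->PowerVR, PowerVR->ARM, ARM->Video)
# and you're into the gigabyte-per-second range:
for trips in (3, 4):
    print(trips, rgb_rate * trips)
```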

so, what could be done, here, to optimise the situation?  well....
mayyybeeee... instead of having the DSP perform all the work, you
could mayyybeee... ooo, just hypothetically, implement the algorithm
mostly using the PowerVR SGX Floating-Point vector processor and the
SIMD features present in the SGX on-board RISC core?

maybe?

and the critical thing, there, is that you have now eliminated the
dependence on the DSP: suddenly, all the *other* SoC vendors should be
sitting up bolt-upright and going "ah HA!  that's us!  we just have a
plain ARM / MIPS processor with PowerVR SGX, we could do this too,
yaay!"

but, before you get all excited, allow me to deflate things by asking
this simple question: how the bloody hell are you going to get this
software video experimentation done - possibly even paid for by a
Google Summer of Code student and by google themselves - to
potentially position your SoC CPU into very large current and future
market segments that you *believed* your product simply wasn't capable
of, if you *don't* know how to program the PowerVR SGX engine, because
it's bloody well proprietary and utterly controlled by Imagination
Technologies Ltd???

you see how your business has become utterly, utterly beholden to some
2-bit little proprietary software vendor now??

oh dearie me.

and are you going to pay ImgTec to get google VP8 done?  well....
sure, yes you could - but there's a twist - a sting in the tail.  take
a look at the patent licensing on google VP8 algorithm.  free software
implementations are granted an unlimited royalty-free license, but
*proprietary* implementations have to pay monnneyyyyy.  the MPEGLA
crack-heads are even calling for people to give them patents so that
they can control VP8 in exactly the same way that they presently
control MPEG.

and what do you think the chances are of asking Imagination
Technologies to create a free software implementation of VP8, complete
with full source code, *and* complying with say the LGPL license to
provide the full software tool-chain required to *build* that
LGPL-licensed PowerVR SGX optimised version of Google VP8, eh?

you think they'd go for it, eh? eh?

... naah, i didn't think so, either.

also, think about this: you think that PowerVR have time and engineers
to learn _your_ specific SoC CPU?  you think PowerVR engineers have
time to learn TI's DSP??  but if there was a free software compiler
and toolchain for PowerVR, you would get weirdos like me, people in
poland with more time on their hands than sense, and strange PhD
students just.. throwing stuff together because they want to learn
several new technologies all at once and it's part of their doctorate,
they'll just _do_ it.

The Internet shows us that "Brownian Motion" - random increments - can
apply to software development, but you *can't* use the "Internet
Effect" for free software development when there is a whopping great
turd of proprietary technology in the way, stinking to high heaven.

so, i appreciate that this is a bit long, but it simply has to be that
way, to illustrate that it really _is_ becoming more and more
financially important for businesses to adopt _full_ free software
toolchains and libraries, right across the entire hardware.  you
_can't_ just take bits of free software like "da linuxx kurnull" and
"hope for the best" that it will all happen to be a-okay when combined
with some bits of proprietary crud - these SoCs are complex and much
more powerful than your marketing departments are telling you,
okay?? :)

i leave you with this: cooperation *always* achieves more than
competition; scientific progress is made when working by increments,
standing on the shoulders of giants before us (someone famous said
that), and it is only the cultural ethic handed down from Ancient
Greece and then the Romans who worshipped their Olympic Games
"Champions" who won (but stood alone, just as the "losers" did, being
mocked and derided) that is telling us otherwise (it was my mum who
noticed this - she researched ancient cooperative celtic society which
was wiped out by the greece-emulating highly-competitive romans).

l.
_______________________________________________
Celinux-dev mailing list
[email protected]
http://tree.celinuxforum.org/mailman/listinfo/celinux-dev