If I am following the idea correctly, the hardware MPEG decoder chip
would be embedded in the Sun Ray. The primitives in the Appliance Link
Protocol would need to be augmented to support streaming the raw MPEG
data, feeding it to the decoder chip, and then displaying it at some
location in the screen buffer. SRSS would need to know when and how to
invoke the new primitives.
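To make the idea concrete, here is a minimal sketch of what one such new primitive's wire framing might look like. Everything here is invented for illustration (the opcode value, field names, and layout are assumptions, not the real Appliance Link Protocol format): a header that tells the terminal where in the screen buffer the decoded video should land and how many bytes of coded MPEG data follow.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical opcode for a "raw MPEG data" ALP primitive.
   Value and layout are invented for illustration only. */
#define ALP_OP_MPEG_DATA 0x40

/* Framing header: destination rectangle in the screen buffer,
   plus the length of the coded payload that follows. */
typedef struct {
    uint8_t  opcode;
    uint16_t x, y;          /* top-left corner on the terminal */
    uint16_t width, height; /* destination rectangle */
    uint32_t payload_len;   /* bytes of coded MPEG data to follow */
} mpeg_frame_hdr;

static void put16(uint8_t *b, uint16_t v) { b[0] = (uint8_t)(v >> 8); b[1] = (uint8_t)v; }
static void put32(uint8_t *b, uint32_t v) {
    b[0] = (uint8_t)(v >> 24); b[1] = (uint8_t)(v >> 16);
    b[2] = (uint8_t)(v >> 8);  b[3] = (uint8_t)v;
}
static uint16_t get16(const uint8_t *b) { return (uint16_t)((b[0] << 8) | b[1]); }
static uint32_t get32(const uint8_t *b) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* Serialize the header big-endian; returns bytes written. */
size_t hdr_pack(const mpeg_frame_hdr *h, uint8_t *buf)
{
    buf[0] = h->opcode;
    put16(buf + 1, h->x);
    put16(buf + 3, h->y);
    put16(buf + 5, h->width);
    put16(buf + 7, h->height);
    put32(buf + 9, h->payload_len);
    return 13;
}

/* Parse it back; returns bytes consumed, or 0 on a bad opcode. */
size_t hdr_unpack(const uint8_t *buf, mpeg_frame_hdr *h)
{
    if (buf[0] != ALP_OP_MPEG_DATA)
        return 0;
    h->opcode      = buf[0];
    h->x           = get16(buf + 1);
    h->y           = get16(buf + 3);
    h->width       = get16(buf + 5);
    h->height      = get16(buf + 7);
    h->payload_len = get32(buf + 9);
    return 13;
}
```

The point is only that the server sends coded data plus a screen position; the Sun Ray's decoder chip would do the rest.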
There would probably need to be a custom codec or driver installed in
Solaris and/or Windows. Right now, SRSS does not "know" that an MPEG
stream is being rendered. The custom codec could tell SRSS when and
where the video is being displayed. It would also prevent the "real"
decoding and rendering from taking place on the server. This highlights
one of the benefits of this solution: An SRSS server today can render
only a very limited number of MPEG streams due to high CPU utilization
of the decoder. By offloading the decoding to each terminal, that same
SRSS server could simultaneously stream the coded MPEG data to dozens
(or perhaps hundreds) of terminals.
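The interception idea above can be sketched as a hook table in which the custom codec swaps out the server-side decode path. All names here are invented for illustration and do not correspond to any real SRSS, Solaris, or Windows codec API; the sketch just shows the shape of the offload: once the shim is installed, frames are forwarded still-coded and the server does no decoding.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical codec hook table (illustrative names only). */
typedef struct {
    int (*handle_frame)(const uint8_t *coded, uint32_t len,
                        int x, int y); /* returns 0 on success */
} codec_ops;

static int server_decodes = 0;    /* frames decoded on the server */
static int terminal_forwards = 0; /* frames passed through coded */

/* Default path: software decode on the SRSS server (CPU heavy). */
static int decode_on_server(const uint8_t *coded, uint32_t len, int x, int y)
{
    (void)coded; (void)len; (void)x; (void)y;
    server_decodes++;
    return 0;
}

/* Shim installed by the custom codec: instead of decoding, tell
   SRSS where the video rectangle is and forward the still-coded
   frame to the terminal's decoder chip. */
static int forward_to_terminal(const uint8_t *coded, uint32_t len, int x, int y)
{
    (void)coded; (void)len; (void)x; (void)y;
    terminal_forwards++;
    return 0;
}

codec_ops ops = { decode_on_server };

/* Installing the custom codec swaps the expensive path out. */
void install_offload_codec(void) { ops.handle_frame = forward_to_terminal; }
```

Per-stream server cost then drops to copying coded bytes onto the network, which is why one server could plausibly feed dozens of terminals at once.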
All in all, this is an interesting idea.
-jerry
David Hunnisett wrote:
> it would have just needed an mpeg decoder chip in the sunray and way
> of sending/controlling the streams
_______________________________________________
SunRay-Users mailing list
[email protected]
http://www.filibeto.org/mailman/listinfo/sunray-users