On 03/25/2011 11:23 AM, Edward Hervey wrote:
> On Fri, 2011-03-25 at 10:58 -0400, Cory T. Tusar wrote:
>> On 03/17/2011 06:57 AM, Stefan Kost wrote:
>> 
>> <snip>
>> 
>>> In "7 Transparency" you need to highlight what your proposal adds to the
>>> existing features.
>>> * Transport protocol: handled e.g. by gstreamer already, standards like
>>> DLNA specify subsets for interoperability already
>>> * Transparent Encapsulation and Multiplexing: could you please elaborate
>>> why one would need the non-automatic mode. I think it does not make
>>> sense to let the application specify what format the stream is in, if
>>> the media-framework can figure it (in almost all of the cases). In some
>>> corner cases one can e.g. use custom pipelines and specify the format
>>> (e.g. a ringtone playback service might do that if it knows the format
>>> already).
>> 
>> As a possible example (pulled from recent experience), automagic
>> determination of stream parameters takes time (and CPU cycles).  A
>> "non-automatic" mode would be (was) helpful in instances where the
>> application knows exactly what type of stream to expect, and there is
>> a requirement for an absolute minimum of start-up time between the user
>> pressing the "Play" button and video appearing on the screen.
>> 
>   A lot of improvement has gone into GStreamer over the past year to
> speed up the pre-roll/typefinding/setup of playback pipelines. This was
> mainly to get gst-discoverer to be faster than exiftool to get
> information about media files, which it now is ... considering it also
> decodes the first audio/video frame(s).
>   The only case I can think of where you would gain time would be for
> live mpeg-ts streams where you could provide the PAT/PMT information
> which you would have cached previously (in order not to have to wait for
> the next occurrence). But that would still require you to wait for the
> next keyframe to appear unless you already have a few seconds live
> back-buffer on the machine (in which case you would also have cached
> PAT/PMT).
>   Did you have another use-case in mind?

Pretty much the above, or slight variations thereof.

Short version: there were product requirements regarding startup time
and display of the first keyframe received over the network within N
milliseconds.  Explicit knowledge of stream type when constructing the
decode pipeline proved helpful in meeting those requirements (this
particular case was with a GStreamer pipeline on Moblin).

I'm not arguing against automatic detection - it's what works, and works
well, in the vast majority of cases - just leave the "power-user" option
of explicitly specifying codec choice, buffer sizing, etc. available for
the times when it's needed.
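
To make that concrete, the kind of "non-automatic" pipeline I have in
mind looks roughly like the sketch below (C, against GStreamer 0.10).
The stream format, element names, and port number are assumptions for a
known MPEG-2-in-transport-stream feed, not a recommendation:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* The format is known up front, so build a fixed pipeline and skip
   * typefinding/autoplugging entirely. */
  pipeline = gst_parse_launch (
      "udpsrc port=5004 caps=\"video/mpegts\" ! "
      "mpegtsdemux ! mpeg2dec ! xvimagesink sync=false",
      &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}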

>>> * Transparent Target: What's the role of the UMMS here? How does the URI
>>> make sense here? Are you suggesting to use something like
>>> opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where
>>> a local renderer would render well locally and one could e.g. have a
>>> UPnP DLNA renderer or a media recorder.
>>> * Transparent Resource Management: That makes a lot of sense and so far
>>> was planned to be done on QT MultimediaKit
>>> * Attended and Non Attended execution: This sounds like having a media
>>> recording service in the platform.
>>>
>>> "8 Audio Video Control"
>>> This is a media player interface. Most of the things make sense. Below
>>> are those that might need more thinking:
>>> * Codec Selection: please don't. This is something that we need to solve
>>> below and not push to the application or even to the user.
>> 
>> Agreed, in part.  As a general rule, the underlying detection and codec
>> selection should be transparent to an application; however, there are
>> corner cases where this may not be desirable, and specific selection of
>> a codec may be necessary.
>> 
>> Consider a system which has an external (to the main CPU)
>> PowerDrain-5000(tm) video processor capable of both MPEG-2 and MPEG-4
>> decode.  If the system is in a commanded low-power state, it may be
>> more prudent to decode standard-definition MPEG-2 content in software on
>> the main CPU and leave the external video processor powered-down.
>> However, when decode of MPEG-4 content is desired, soft-decode may not
>> be feasible and the external video hardware needs to be used.
>> 
>> In instances such as the above, where the system has multiple codecs
>> (hardware and software) capable of decoding the given content, is some
>> method of specifying codec priority envisioned, so that a given decode
>> path is used preferentially?
>> 
>   Yes, with playbin2/decodebin2 you can change the order of
> codecs/plugins being used. By default it will use the one with the
> highest rank matching the stream to decode, but you can connect to the
> 'autoplug-factories' signal and reorder those plugins to have it use the
> software one or the hardware one.
>   Another way to work around that problem would be to have the software
> plugin only accept SD streams in input (via its pad template caps) and
> have a higher rank than the hardware one, which would make the system
> automatically pick up the SW plugin for SD content, but use the HW one
> for HD content.
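
For concreteness, I take it that maps to something roughly like the
sketch below at the GStreamer level.  The decoder element names
("ffdec_mpeg2video" for software MPEG-2 decode, "pd5000dec" for the
hypothetical hardware decoder) are placeholders, and a real handler
would want to fall back to the default factory list:

#include <gst/gst.h>

/* Prefer the software decoder for SD content and the hardware decoder
 * otherwise, by returning the factories in the order decodebin2 should
 * try them. */
static GValueArray *
on_autoplug_factories (GstElement *bin, GstPad *pad, GstCaps *caps,
    gpointer user_data)
{
  GValueArray *factories = g_value_array_new (2);
  GValue val = { 0, };
  GstElementFactory *sw = gst_element_factory_find ("ffdec_mpeg2video");
  GstElementFactory *hw = gst_element_factory_find ("pd5000dec");
  GstElementFactory *first, *second;
  gint height = 0;

  /* Crude SD check; height may not always be present on the caps yet. */
  gst_structure_get_int (gst_caps_get_structure (caps, 0), "height",
      &height);
  if (height > 0 && height <= 576) {
    first = sw;
    second = hw;
  } else {
    first = hw;
    second = sw;
  }

  g_value_init (&val, GST_TYPE_ELEMENT_FACTORY);
  if (first != NULL) {
    g_value_set_object (&val, first);
    g_value_array_append (factories, &val);
  }
  if (second != NULL) {
    g_value_set_object (&val, second);
    g_value_array_append (factories, &val);
  }
  g_value_unset (&val);

  if (sw != NULL)
    gst_object_unref (sw);
  if (hw != NULL)
    gst_object_unref (hw);

  return factories;
}

/* When building the pipeline (e.g. on a decodebin2/uridecodebin):
 *   g_signal_connect (decodebin, "autoplug-factories",
 *       G_CALLBACK (on_autoplug_factories), NULL);
 */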

Understood, but the above are implementation details specific to one
possible low-level UMMS foundation (GStreamer).

If I build my user application to use UMMS (with a GStreamer
foundation), how do I, using the UMMS APIs, perform something similar?

Should my application even be able to specify codec preference via UMMS
APIs?  Stefan argued against it; I provided a use case (not entirely
theoretical) that I believe argues for it.

>>> * Buffer Strategy: same as before. Buffering strategy depends on the
>>> use-case and media. The application needs to express whether it's a
>>> media-player/media-editor/... and from that we need to derive this.
>> 
>> But not all use-cases may have the same requirements.  Again, from
>> recent experience, my system's requirements for low-latency may or may
>> not match yours.  That's not to say that providing some sane defaults
>> that cover a majority of expected use cases isn't a good idea, just
>> don't restrict the application to those and those alone.
>> 
>   Latency/buffering are configurable in playbin2. That can be adjusted
> on a per-system basis.
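
Noted.  Presumably that amounts to something like the following on a
playbin2-based system (the URI and the numbers here are examples only,
and are exactly the kind of knob I'd like to see reachable through the
UMMS APIs):

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *playbin;

  gst_init (&argc, &argv);

  playbin = gst_element_factory_make ("playbin2", NULL);
  g_object_set (playbin,
      "uri", "udp://239.192.0.1:5004",                  /* placeholder   */
      "buffer-duration", (gint64) (200 * GST_MSECOND),  /* target ~200ms */
      "buffer-size", 128 * 1024,                        /* byte limit    */
      NULL);

  gst_element_set_state (playbin, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}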

As above... I'm less concerned with the capabilities of the underlying
foundation than with the capabilities available to my application via
the UMMS APIs.

I don't want to get too far off into the weeds regarding what is or is
not possible with GStreamer, but rather focus on what /should/ be
possible for an application built on Dominig's proposed UMMS APIs.

-Cory


-- 
Cory T. Tusar
Senior Software Engineer
Videon Central, Inc.
2171 Sandy Drive
State College, PA 16803
(814) 235-1111 x316
(814) 235-1118 fax


"There are two ways of constructing a software design.  One way is to
 make it so simple that there are obviously no deficiencies, and the
 other way is to make it so complicated that there are no obvious
 deficiencies."  --Sir Charles Anthony Richard Hoare
