On 17/02/13 05:48 AM, Nils Dagsson Moskopp wrote:
If one cares to that extent, and is
already handling format differences, dealing with vendor
variation on top isn't that much more effort.
I disagree, strongly.
Ok, thanks for the feedback. Do
On 12-12-11 5:23 PM, Ralph Giles wrote:
That said, I'm not convinced this is an issue given the primary
use-case, which is pretty much that web content wants to do more
sophisticated things with the metadata than the user-agent's
standardized parsing allows. If one cares to that extent
On 12-12-11 4:58 PM, Ian Hickson wrote:
This seems reasonable.
Thanks for the feedback. Anyone else? :-)
I don't want to be the one to maintain the mapping from media formats to
metadata schema, because this isn't my area of expertise, and it isn't
trivial work.
Good point. This would
On 12-11-26 4:18 PM, Ralph Giles wrote:
interface HTMLMediaElement {
...
object getMetadata();
};
After the loadedmetadata event fires, this method would return a new
object containing a copy of the metadata read from the resource, in
whatever format the decoder implementation
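A minimal sketch of how a page might consume the proposed method, keyed off the standard loadedmetadata event. Note getMetadata() is only the proposal above, not a shipped browser API, and the helper name is invented for illustration:

```javascript
// Sketch only: getMetadata() is the proposed method above, not a
// shipped browser API. readTagsOnce() works with any object exposing
// addEventListener() and getMetadata(), e.g. a media element.
function readTagsOnce(media, callback) {
  media.addEventListener('loadedmetadata', () => {
    // Per the proposal, this returns a fresh copy of the resource's tags.
    callback(media.getMetadata());
  });
}
```

In a page this would be called as `readTagsOnce(document.querySelector('audio'), tags => console.log(tags.title));`.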
On 12-11-27 9:19 AM, Gordon P. Hemsley wrote:
Is it sufficient to sniff just for application/ogg and then let the
UA's Ogg library determine whether or not the contents of the file can
be handled? (I'm sensing the consensus is yes.)
I think so.
Defining a codec enumerating algorithm and mime
On 12-09-27 1:44 AM, Philip Jägenstedt wrote:
I'm skeptical that all that we want from ID3v2 or common VorbisComment
tags can be mapped to Dublin Core, it seems better to define mappings
directly from the underlying format to the WebIDL interface.
You're right.
Given the open-endedness of
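A direct format-to-interface mapping of the kind Philip suggests might look like this sketch. The field list and the target property names are assumptions for illustration, not a specified schema; VorbisComment field names are case-insensitive, which the sketch accounts for:

```javascript
// Illustrative sketch: map common VorbisComment fields to a flat
// metadata object. The mapping table is an assumption, not a spec.
function mapVorbisComment(comments) {
  const mapping = { TITLE: 'title', ARTIST: 'creator', DATE: 'date', ALBUM: 'album' };
  const out = {};
  for (const line of comments) {
    const eq = line.indexOf('=');
    if (eq < 0) continue; // malformed comment, skip it
    const key = line.slice(0, eq).toUpperCase(); // field names are case-insensitive
    if (key in mapping) out[mapping[key]] = line.slice(eq + 1);
  }
  return out;
}
```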
Recently, we've been considering adding a 'tags' or 'metadata' attribute
to HTML media elements in Firefox, to allow web content access to
metadata from the playing media resource. In particular we're interested
in tag data like creator, title, date, and so on.
My recollection is that this has
On 02/12/11 05:38 AM, David Singer wrote:
Very. Correct rendering of some text requires that it be correctly
labelled with a BCP-47 tag, as I understand.
For me, file-level default/overall setting, with the possibility of
span-based
On 11-11-17 12:29 PM, Jonas Sicking wrote:
Authors do however know how loud the volume of the media is though. If
a video is encoded with a very loud volume, or a very quiet volume, it
can be quite useful to be able to adjust that up or down when linking
to it.
I was going to say, that
On 15/11/11 10:32 AM, Aaron Colwell wrote:
Yeah it looks to me like starting at the requested position is the
only option. I saw the text about media engine triggered seeks, but
it seems like users would be very surprised to see the seeking
On 14/11/11 03:49 PM, Aaron Colwell wrote:
Does this mean the user agent must resume playback at the exact
location specified?
Maybe you can muck with the 'media.seekable' TimeRanges object to only
show keyframes?
Otherwise, it kind of sounds
On 10/10/11 12:19 AM, Simon Pieters wrote:
0 negative intervals
0 cues skipped because field counts were different
That will teach me to proofread after posting. The real counts should be:
2227 negative intervals
6822 cues skipped because field counts were different
From which I conclude
On 06/10/11 01:58 AM, Simon Pieters wrote:
I don't know how many have negative interval, I'd need to run a new
script over the 52,000,000 lines to figure out. (If you want me to check
this, please contact me with details about what you want to count as
negative interval.)
I had in mind
On Thu, Oct 6, 2011 at 10:51 AM, Ralph Giles gi...@mozilla.com wrote:
On 05/10/11 04:36 PM, Glenn Maynard wrote:
If the files don't work in VTT in any major implementation, then
probably
not many. It's the fault of overly-lenient parsers that these things
happen
On 05/10/11 11:37 AM, Ashley Sheridan wrote:
I would assume the part that the Skype plugin is being used for, as the
only other part of the chat that isn't HTML/Javascript code is the
Jabber connectivity, which isn't strictly a plugin per se, more an
additional interface to the raw data that
On 05/10/11 10:22 AM, Simon Pieters wrote:
I did some research on authoring errors in SRT timestamps to inform
whether WebVTT parsing of timestamps should be changed.
This is completely awesome, thanks for doing it.
hours too many '(^|\s|)\d{3,}[:\.,]\d+[:\.,]\d+'
834
As Silvia mentioned,
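For reference, the quoted "hours too many" pattern behaves like this in JavaScript (the test strings below are invented examples, not from Simon's data set):

```javascript
// The "hours too many" pattern quoted above: it flags timestamps whose
// hour field has three or more digits. Example strings are invented.
const tooManyHours = /(^|\s|)\d{3,}[:\.,]\d+[:\.,]\d+/;
tooManyHours.test('123:00:01.000'); // matches: three-digit hour field
tooManyHours.test('01:00:01.000');  // no match: two-digit hour field
```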
On 05/10/11 04:36 PM, Glenn Maynard wrote:
If the files don't work in VTT in any major implementation, then probably
not many. It's the fault of overly-lenient parsers that these things happen
in the first place.
A point Philip Jägenstedt has made is that it's sufficiently tedious to
verify
On 21/09/11 04:04 AM, Anne van Kesteren wrote:
I have an additional point. Can we maybe consider naming it just VTT?
At least as far as file signatures, media types, and other
developer-facing identifiers are concerned.
Three ASCII characters is a little sparse for a file signature.
On Thu, Nov 5, 2009 at 6:10 AM, Brian Campbell
brian.p.campb...@dartmouth.edu wrote:
As implemented by Safari and Chrome (which is the minimum rate allowed by
the spec), it's not really useful for that purpose, as 4 updates per second
makes any sort of synchronization feel jerky and laggy.
It
On Thu, Jul 9, 2009 at 3:34 PM, David Gerarddger...@gmail.com wrote:
Anyone got ideas on the iPhone problem?
I think this is off topic, and I am not an iPhone developer, but:
Assuming the app store terms allow video players, it should be
possible to distribute some sort of dedicated player
On Thu, Jul 9, 2009 at 9:22 PM, Maciej Stachowiakm...@apple.com wrote:
I think at one point I suggested that canPlayType should return one of
boolean false, true or maybe, so that naive boolean tests would work. Or
in any case, make the no option something that tests as boolean false.
We
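For context, canPlayType() as eventually specified returns one of the strings "", "maybe", or "probably", so the empty string is the only answer that tests as boolean false. The stub below is an assumption standing in for a media element, with hard-coded answers for illustration:

```javascript
// Stub standing in for HTMLMediaElement.canPlayType(); the return
// values mirror the specified ones: "", "maybe", or "probably".
// The per-type answers here are hard-coded for illustration.
function canPlayTypeStub(type) {
  if (type === 'video/ogg; codecs="theora"') return 'probably';
  if (type.startsWith('video/')) return 'maybe';
  return ''; // the "no" answer: falsy, so naive boolean tests work
}
```

Usage: `if (canPlayTypeStub('video/ogg; codecs="theora"')) { /* truthy */ }`.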
On Tue, Apr 7, 2009 at 1:26 AM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
For example, take a video that is a subpart of a larger video and has
been delivered through a media fragment URI
(http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-reqs/).
When a user watches both,
On Tue, Feb 10, 2009 at 1:54 AM, Michael A. Puls II
shadow2...@gmail.com wrote:
Flash has low, medium and high quality that the user can change (although a
lot of sites/players seem to rudely disable that option in the menu for some
reason). This helps out a lot and can allow a video to play
On Mon, Dec 8, 2008 at 9:20 PM, Martin Atkins [EMAIL PROTECTED] wrote:
My concern is that if the only thing linking the various streams together is
the HTML document then the streams are less useful outside of a web browser
context.
Absolutely. This proposal places an additional burden on the
On Mon, Dec 8, 2008 at 6:08 PM, Martin Atkins [EMAIL PROTECTED] wrote:
What are the advantages of doing this directly in HTML rather than having
the src attribute point at some sort of compound media document?
The general point here is that subtitle data is in current practice
often created
On 10-Nov-08, at 7:49 PM, Maciej Stachowiak wrote:
1) Allow unrestricted cross-origin video/audio
2) Allow cross-origin video/audio but carefully restrict the
API to limit the information a page can get about media loaded
from a different origin
3) Disallow cross-origin video/audio unless
On Thu, Nov 6, 2008 at 9:46 AM, Eric Carlson [EMAIL PROTECTED] wrote:
Instead of seeking to the end of the file to calculate an exact
duration as you describe, it is much cheaper to estimate the duration
by processing a fixed portion of the file and extrapolating to the
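The estimate Eric describes amounts to simple linear extrapolation, assuming a roughly constant bitrate. A sketch (the function and parameter names are invented):

```javascript
// Sketch of duration-by-extrapolation: decode a fixed prefix of the
// file, measure its duration, and scale up by total file size.
// Assumes a roughly constant bitrate; names are invented.
function estimateDuration(totalBytes, prefixBytes, prefixSeconds) {
  return totalBytes * (prefixSeconds / prefixBytes);
}
```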
There aren't so many aspect ratios in common use--you're welcome
to choose the one nearest to the floating point value given if you
think it's important.
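Choosing the nearest common ratio could be as simple as this sketch; the candidate list is illustrative, not exhaustive:

```javascript
// Snap a floating-point ratio to the nearest of a few common display
// aspect ratios. The candidate list is illustrative only.
function nearestAspect(ratio) {
  const common = [[4, 3], [16, 9], [16, 10], [3, 2], [21, 9]];
  let best = common[0];
  for (const c of common) {
    if (Math.abs(c[0] / c[1] - ratio) < Math.abs(best[0] / best[1] - ratio)) best = c;
  }
  return best.join(':');
}
```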
-r
--
Ralph Giles
Xiph.org Foundation
On Thu, Aug 7, 2008 at 1:57 AM, Philip Jägenstedt [EMAIL PROTECTED] wrote:
I suggest that the spec allows raising the NOT_SUPPORTED_ERR exception
in response to any playback rate which it cannot provide for the current
configuration.
That sounds reasonable. It is a special effect.
With a
On 10-Jun-08, at 9:31 AM, Philip Jägenstedt wrote:
The default value, if the attribute is omitted or cannot be parsed,
is the media resource's self-described pixel ratio, or 1.0 for
media resources that do not self-describe their pixel ratio.
This is actually how I read the original, but
On Mon, Jan 07, 2008 at 01:50:09PM -0800, Dave Singer wrote:
I get the impression that this is not an openly-specified codec,
which I rather think is a problem. That is, there is neither a
publicly available spec. nor publicly-available source, which means
that it is controlled by one
On Mon, Dec 10, 2007 at 09:14:39AM -0800, James Justin Harrell wrote:
The language could be improved. Ogg Theora refers to Theora-encoded
video enclosed in an Ogg container, not the Theora codec. Similar for
Vorbis. Theora and Vorbis should be used without Ogg to refer to the actual
Thanks for adding to the discussion. We're very interested in
implementing support for presentations as well, so it's good
to hear from someone with experience.
Since we work on streaming media formats, I always assumed things would
have to be broken up by the server and the various components
On Wed, Apr 11, 2007 at 05:45:34PM -0700, Dave Singer wrote:
But [video/*] does at least indicate that we have a time-based multimedia
container on our hands, and that it might contain visual
presentation. application/ suffers that it does not say even that,
and it raises the concern that
On Tue, Apr 10, 2007 at 11:21:10AM -0700, Dave Singer wrote:
# application/ogg; disposition=moving-image; codecs=theora, vorbis
# application/ogg; disposition=sound; codecs=speex
what is the 'disposition' parameter?
The idea of a 'disposition-type' is to mark content with presentational
On Mon, Apr 02, 2007 at 11:12:07AM -0700, Maciej Stachowiak wrote:
I don't think Theora (or Dirac) are inherently more interoperable
than other codecs. There's only one implementation of each so far, so
there's actually less proof of this than for other codecs.
Just to clarify, there are
On Fri, Mar 23, 2007 at 04:33:39PM -0700, Eric Carlson wrote:
Yes, the UA needs the offset/chunking table in order to calculate
a file offset for a time, but this is efficient in the case of
container formats in which the table is stored together with other
information that's needed
On Sat, Mar 24, 2007 at 01:57:45AM -0700, Kevin Marks wrote:
How does one seek a Vorbis file with video in and recover framing?
It looks like you skip to an arbitrary point and scan for 'OggS' then
do a 64kB CRC to make sure this isn't a fluke. Then you have some
packets that correspond to
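The capture-pattern scan described above, minus the CRC verification step, looks roughly like this sketch (the function name is invented, and a real demuxer would go on to check the page CRC to rule out a fluke match):

```javascript
// Rough sketch of the seek-recovery scan: from an arbitrary byte
// offset, look for the 'OggS' page capture pattern. Real demuxers
// then verify the page CRC; that step is omitted here.
function findOggPage(buf, start) {
  const magic = [0x4f, 0x67, 0x67, 0x53]; // ASCII 'OggS'
  for (let i = start; i + 4 <= buf.length; i++) {
    if (magic.every((b, j) => buf[i + j] === b)) return i;
  }
  return -1; // no capture pattern found
}
```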