On Thu, 16 Jul 2009 07:58:30 +0200, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
Hi Ian,
Great to see the new efforts to move the subtitle/caption/karaoke
issues forward!
I actually have a contract with Mozilla starting this month to help
solve this, so I am more than grateful that you
Thanks for the analysis, but two pieces of feedback:
1) Though sub-titles and captions are the most common accessibility
issue for audio/video content, they are not the only one. There are
people:
-- who cannot see, and need audio description of video
-- who cannot hear, and prefer sign
On Thu, Jul 16, 2009 at 10:31 PM, David Singer sin...@apple.com wrote:
Thanks for the analysis, but two pieces of feedback:
1) Though sub-titles and captions are the most common accessibility issue
for audio/video content, they are not the only one. There are people:
-- who cannot see, and
On Thu, Jul 16, 2009 at 6:28 PM, Philip Jägenstedt phil...@opera.com wrote:
On Thu, 16 Jul 2009 07:58:30 +0200, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
3. Timed text stored in a separate file, which is then parsed by the
user agent and rendered as part of the video automatically by
At 23:28 +1000 16/07/09, Silvia Pfeiffer wrote:
2) I think the environment can and should help select and configure type-1
resources, where it can. It shouldn't need to be always a manual step by
the user interacting with the media player. That is, I don't see why we
cannot have the
On Thu, Jul 16, 2009 at 11:56 PM, David Singer sin...@apple.com wrote:
At 23:28 +1000 16/07/09, Silvia Pfeiffer wrote:
2) I think the environment can and should help select and configure
type-1 resources, where it can. It shouldn't need to be always a manual
step by the user
On Sat, 27 Dec 2008, Calogero Alex Baldacchino wrote:
A flying thought: why not also think about a further option for
embedding everything in a sort of all-in-one HTML page generated on
the fly when downloading, making it a global container for video and
text to be consumed by UAs
On Sat, 27 Dec 2008, Silvia Pfeiffer wrote:
6. Timed text stored in a separate file, which is then fetched and
parsed by the Web page, and which is then rendered by the Web page.
For case 6, while it works for deaf people, we actually create an
accessibility nightmare for blind people
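The "case 6" model quoted above (the Web page itself fetches, parses, and renders the timed text) can be sketched in a few lines of JavaScript. This is only an illustration of the technique under discussion, not Jan's actual jquery.srt code; the function name `parseSrt` and the `timeupdate` wiring shown in the trailing comment are my assumptions.

```javascript
// Parse SRT text into an array of {start, end, text} cues, times in seconds.
// Minimal sketch: ignores positioning and styling, handles plain SRT only.
function parseSrt(srt) {
  const toSeconds = (t) => {
    // SRT timestamps look like "00:00:01,000" (hours:minutes:seconds,millis).
    const [h, m, rest] = t.split(":");
    const [s, ms] = rest.split(",");
    return (+h) * 3600 + (+m) * 60 + (+s) + (+ms) / 1000;
  };
  return srt
    .replace(/\r/g, "")     // normalize Windows line endings
    .split(/\n\n+/)         // cues are separated by blank lines
    .map((block) => block.trim())
    .filter(Boolean)
    .map((block) => {
      const lines = block.split("\n"); // [index, timing line, ...text lines]
      const [start, end] = lines[1].split(" --> ").map((t) => toSeconds(t.trim()));
      return { start, end, text: lines.slice(2).join("\n") };
    });
}

// In a browser, the cues would then be rendered over the video on "timeupdate":
//   video.addEventListener("timeupdate", () => {
//     const cue = cues.find((c) => video.currentTime >= c.start &&
//                                  video.currentTime < c.end);
//     overlay.textContent = cue ? cue.text : "";
//   });
```

This is exactly the pattern the thread worries about for accessibility: the text lives only in a styled overlay `div`, so a screen reader has no way to know it is a caption track.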
Hi Ian,
Great to see the new efforts to move the subtitle/caption/karaoke
issues forward!
I actually have a contract with Mozilla starting this month to help
solve this, so I am more than grateful that you have proposed some
ideas in this space.
On Thu, Jul 16, 2009 at 9:38 AM, Ian
I have carefully read all the feedback in this thread concerning
associating text with video, for various purposes such as captions,
annotations, etc.
Taking a step back, as far as I can tell there are two axes: where the
timed text comes from, and how it is rendered.
Where it comes from, it
Hi Ian,
Thanks for taking the time to go through all the options, analyse and
understand them - especially on your birthday! :-) Much appreciated!
I agree with your analysis and the 6 options you have identified.
However, I disagree slightly with the conclusions you have come to -
mostly from a
Silvia Pfeiffer wrote:
Hi Ian,
Thanks for taking the time to go through all the options, analyse and
understand them - especially on your birthday! :-) Much appreciated!
Then, happy birthday to Ian!
[...]
The only real issue that we have with separate files is that the
captions may
Another implementation comes from the W3C TimedText working group:
They have a test suite for DFXP files at
http://www.w3.org/2008/12/dfxp-testsuite/web-framework/START.html .
Philippe just announced that he added HTML5 video tag support using
the JavaScript file that Jan had written for SRT
And now we have a first demo of the proposed syntax in action. Michael
Dale implemented SRT support like this:
<video src="sample_fish.ogg" poster="sample_fish.jpg" duration="26">
  <text category="SUB" lang="en" type="text/x-srt" default="true"
        title="english SRT subtitles"
Yeah, as Silvia outlines in the intro to this thread, we will likely
continue to see external timed text files winning out over muxed timed
text.
It's just more flexible ... JavaScript embedding libraries which are
widely used today for Flash video will be even more widely used with the
emerging
On Wed, Dec 10, 2008 at 5:56 PM, Dave Singer [EMAIL PROTECTED] wrote:
At 21:33 +1300 9/12/08, Robert O'Callahan wrote:
For what it's worth, loading an intermediate document of some new type
which references other streams to be loaded adds a lot of complexity to the
browser implementation.
At 14:40 +1300 11/12/08, Robert O'Callahan wrote:
On Wed, Dec 10, 2008 at 5:56 PM, Dave Singer
[EMAIL PROTECTED] wrote:
At 21:33 +1300 9/12/08, Robert O'Callahan wrote:
For what it's worth, loading an intermediate document of some new
type which references other
On Mon, Dec 8, 2008 at 9:20 PM, Martin Atkins [EMAIL PROTECTED] wrote:
My concern is that if the only thing linking the various streams together is
the HTML document then the streams are less useful outside of a web browser
context.
Absolutely. This proposal places an additional burden on the
I heard some complaints about there not being any implementation of
the suggestions I made.
So here goes:
1. out-of-band
There is an example of using SRT with Ogg in an out-of-band approach here:
http://v2v.cc/~j/jquery.srt/
You will need Firefox 3.1 to play it.
The syntax of what Jan implemented
Silvia Pfeiffer wrote:
I heard some complaints about there not being any implementation of
the suggestions I made.
So here goes:
1. out-of-band
There is an example of using SRT with Ogg in an out-of-band approach here:
http://v2v.cc/~j/jquery.srt/
You will need Firefox 3.1 to play it.
The
On Wed, Dec 10, 2008 at 6:59 AM, Calogero Alex Baldacchino
[EMAIL PROTECTED] wrote:
Anyway, the use of subtitles in conjunction with screen readers might be
problematic: a deeper synchronization with the media might be needed in
order to have the text read just during voice pauses, to describe
Also, for those interested, metavid and mv_embed are examples of use of ROE:
http://metavid.org/w/index.php/Mv_embed
Metavid uses
<video roe="my_roe_file.xml">
for clean remote embedding of multiple text/video/audio tracks in a
single xml encapsulation.
An example of such embeds is here:
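As a rough sketch of the ROE idea (not the Metavid example itself): a script consuming a ROE-style XML manifest might pull the track URLs out like this. The element names `mediaSource` and `text`, and the naive regex scan, are assumptions for illustration only; a real implementation would use a proper XML parser and the actual ROE vocabulary.

```javascript
// Extract the src URLs of media and text tracks from a ROE-style XML
// manifest. Naive sketch: scans for src="..." attributes on assumed
// <mediaSource> and <text> elements rather than parsing the XML properly.
function extractTrackSources(roeXml) {
  const sources = [];
  const re = /<(?:mediaSource|text)\b[^>]*\bsrc="([^"]+)"/g;
  let m;
  while ((m = re.exec(roeXml)) !== null) {
    sources.push(m[1]); // capture group 1 is the attribute value
  }
  return sources;
}
```

The point of the manifest approach is that this list of tracks travels with the media resource, so it stays useful outside the HTML page that happens to embed it.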
Silvia Pfeiffer wrote:
On Wed, Dec 10, 2008 at 6:59 AM, Calogero Alex Baldacchino
[EMAIL PROTECTED] wrote:
Anyway, the use of subtitles in conjunction with screen readers might be
problematic: a deeper synchronization with the media might be needed in
order to have the text read just
At 21:33 +1300 9/12/08, Robert O'Callahan wrote:
For what it's worth, loading an intermediate document of some new
type which references other streams to be loaded adds a lot of
complexity to the browser implementation. It creates new states that
the decoder can be in, and introduces new
Silvia Pfeiffer wrote:
Take this as an example:
<video src="http://example.com/video.ogv" controls>
  <text category="CC" lang="en" type="text/x-srt" src="caption.srt"></text>
  <text category="SUB" lang="de" type="application/ttaf+xml"
        src="german.dfxp"></text>
  <text category="SUB" lang="jp" type="application/smil"
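A sketch of the selection step implied by markup like the above: given descriptors mirroring the category/lang/default attributes of the proposed text elements, a UA (or a script shim) might pick one track from user preferences. The fallback policy here is my assumption for illustration, not part of the proposal.

```javascript
// Pick one track from the declared set, preferring an exact category+lang
// match, then a category match, then whichever track is marked default.
// Assumed policy; the proposal itself does not specify selection rules.
function selectTrack(tracks, { category, lang }) {
  return (
    tracks.find((t) => t.category === category && t.lang === lang) ||
    tracks.find((t) => t.category === category) ||
    tracks.find((t) => t.default) ||
    null
  );
}
```

Declaring tracks in markup is precisely what makes this kind of automatic, preference-driven selection possible, instead of forcing a manual step in the media player.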
On Mon, Dec 8, 2008 at 6:08 PM, Martin Atkins [EMAIL PROTECTED] wrote:
What are the advantages of doing this directly in HTML rather than having
the src attribute point at some sort of compound media document?
The general point here is that subtitle data is, in current practice,
often created
On Tue, Dec 9, 2008 at 1:08 PM, Martin Atkins [EMAIL PROTECTED] wrote:
Silvia Pfeiffer wrote:
Take this as an example:
<video src="http://example.com/video.ogv" controls>
  <text category="CC" lang="en" type="text/x-srt" src="caption.srt"></text>
  <text category="SUB" lang="de" type="application/ttaf+xml"
Silvia Pfeiffer wrote:
I'm interested to hear people's opinions on these ideas. I agree with
Ralph and think having a simple, explicit mechanism at the html level
is worthwhile - and very open and explicit to a web author. Having a
redirection through a ROE-type file on the server is more
On Monday, 2008-12-08 at 21:20 -0800, Martin Atkins wrote:
My concern is that if the only thing linking the various streams
together is the HTML document then the streams are less useful outside
of a web browser context. If there is a separate resource containing the
description of how