On Thu, 06 Oct 2011 07:36:00 +0200, Silvia Pfeiffer <[email protected]> wrote:

On Thu, Oct 6, 2011 at 10:51 AM, Ralph Giles <[email protected]> wrote:
On 05/10/11 04:36 PM, Glenn Maynard wrote:

If the files don't work in VTT in any major implementation, then probably not many. It's the fault of overly-lenient parsers that these things happen
in the first place.

A point Philip Jägenstedt has made is that verifying correct subtitle
playback is tedious enough that authors are unlikely to do so with any
vigilance. Therefore the better trade-off is to make the parser
forgiving, rather than inflicting the occasional missing cue on viewers.

That's a slippery slope to go down. If authors cannot see the
consequence, they assume it's legal. It's not like we are totally
screwing up the display - only the one mis-authored cue goes missing.
But if we accept one type of mis-authoring, where do we stop accepting
weirdness? How can implementations stay compatible if each of them
decides for itself which out-of-spec weirdness to accept?

I'd rather we have strict parsing and recover from brokenness. It's
the job of validators to identify broken cues. We should teach authors
to use validators before they decide that their files are ok.
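To make that division of labour concrete, here is a minimal sketch in
TypeScript (mine, not anyone's shipping code; the function names are made
up, though the timestamp grammar follows the WebVTT spec: optional hours,
two-digit minutes and seconds, three millisecond digits). The same strict
check can back both a parser that drops a broken cue and a validator that
tells the author exactly what is wrong:

  // Strict WebVTT timestamp: optional hours (two or more digits),
  // two-digit minutes and seconds (00-59), exactly three millisecond digits.
  const TIMESTAMP = /^(?:(\d{2,}):)?([0-5]\d):([0-5]\d)\.(\d{3})$/;

  function parseTimestamp(text: string): number | null {
    const m = TIMESTAMP.exec(text);
    if (m === null) return null; // strict: no guessing at "1:2.5" and friends
    const hours = m[1] !== undefined ? parseInt(m[1], 10) : 0;
    return ((hours * 60 + parseInt(m[2], 10)) * 60 + parseInt(m[3], 10)) * 1000
      + parseInt(m[4], 10);
  }

  // Check a cue timing line; returns null if well-formed, otherwise a
  // message a validator could report. (Simplified: the real grammar also
  // allows tabs and repeated spaces around "-->", and cue settings may
  // follow the end time.)
  function checkTimingLine(line: string): string | null {
    const parts = line.split(" --> ");
    if (parts.length !== 2) return "timing line must contain ' --> '";
    if (parseTimestamp(parts[0]) === null) return "bad start time: " + parts[0];
    const end = parts[1].split(" ")[0];
    if (parseTimestamp(end) === null) return "bad end time: " + end;
    return null;
  }

  console.log(checkTimingLine("00:01.000 --> 00:05.000")); // null (valid)
  console.log(checkTimingLine("0:01.000 --> 00:05.000"));  // "bad start time: 0:01.000"

A strict parser would simply discard the cue when checkTimingLine()
returns a message; a validator would print that message with a line
number for the author to fix.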

As for the more common mis-authorings: we can accept them as correct
authoring, but then they have to be made part of the specification and
thereby legalized.

To clarify, I have certainly never suggested that implementations do anything other than follow the spec to the letter. I *have* suggested that the parsing spec be more tolerant of certain errors, but looking at the extremely low error rates in our sample, I have to conclude that either (1) the data is biased or (2) most of these errors are not common enough to need handling.

--
Philip Jägenstedt
Core Developer
Opera Software
