Patrick H. Lauke writes:

> Smylers wrote:
> 
> > Well it's very close to being useless.  In that if browsers don't do
> > anything with some mark-up, there's no point in having it (and
> > indeed no incentive for authors to provide it).
> 
> Assistive technology is certainly a valid use case here.

Would it work well enough?  Is not being able to distinguish
abbreviations from words a significant problem for developers of such
software?

> > Yes, that is potentially ambiguous.  But it's the same in books,
> > newspapers, and so on, where it turns out not to be much of a
> > problem.
> 
> But books etc don't have any other way of providing
> disambiguation/structure. Under that reasoning, you could argue that
> there's no need for heading elements etc, as simply having text bigger
> works fine in print, so all we need is a font sizing markup option.

Not quite the same, since in order to make the text bigger we obviously
need _some_ mark-up (so there's no advantage in it not being
meaningful).

But here we're discussing having mark-up versus not having any mark-up
at all.

> > What in practice would you expect AT to do with this knowledge?
> > Remember that most abbreviations that aren't being tagged with
> > expansions won't be marked up, so AT is going to have to deal
> > sensibly with that case anyway.
> 
> So you'd prefer hit and miss heuristics over unambiguous
> interpretation?

We're going to have heuristics anyway.  Humans can generally distinguish
abbreviations from words, so it isn't too far-fetched to expect AT to be
able to do likewise.
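As a purely illustrative sketch (not anything a real screen reader is known to use), even a crude heuristic catches the common shapes of unmarked abbreviations -- all-caps tokens and dotted initialisms:

```python
import re

def looks_like_abbreviation(word):
    """Naive guess: all-caps runs of 2+ letters (e.g. "HTML"),
    or dotted initialisms (e.g. "e.g."), are treated as abbreviations."""
    return bool(re.fullmatch(r"[A-Z]{2,}", word) or
                re.fullmatch(r"(?:[A-Za-z]\.){2,}", word))

print(looks_like_abbreviation("HTML"))   # True
print(looks_like_abbreviation("e.g."))   # True
print(looks_like_abbreviation("Hello"))  # False
```

It's hit and miss, as Patrick says ("US" the country versus "us" the pronoun at the start of a sentence, say), but AT has to cope with exactly this case for the vast majority of pages that carry no abbreviation mark-up at all.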

I can see that an unambiguous specification is preferable (if it would
work).  But I don't understand why, of all the problems in trying to
pronounce human languages correctly, this particular one is the one that
gets additional help from HTML.

Smylers
