On 8/11/2025 6:02 PM, John Was wrote:
Dear Hans,
Excellent. I'm convinced myself that the optimal solution will be two
separate documents, one an exact replica of the physical publication,
the other tailored to the needs of the visually impaired. The latter
should have the same pagination as the former for purposes of referring
to a work, and would also ideally be customizable - I've encountered one
classicist who doesn't mind footnotes being read out at the foot of each
page, another who would like the option of reading the main text without
the interruption of footnotes, and no doubt there are further permutations.
At all events, I think it's essential to work with a selection of
visually impaired readers while this is being developed, rather than
just guessing what they might want.
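As a rough sketch of the kind of per-reader customization described above, ConTeXt's mode mechanism could switch footnote handling on or off in the accessible rendition. The mode name "nofootnotes" is a hypothetical label chosen for illustration, not an existing interface:

```tex
% Hypothetical sketch: a reader-selectable mode that suppresses
% footnotes for those who prefer uninterrupted main text.
% \enablemode[nofootnotes]  % uncomment to omit the notes

\starttext

Some main text.\doifmodeelse {nofootnotes}
    {}                      % mode enabled: skip the note entirely
    {\footnote {A note otherwise read out at the foot of the page.}}

More main text.

\stoptext
```

A real implementation would of course need more care (for instance, keeping the pagination of the tailored rendition in sync with the print replica), which is exactly why testing with visually impaired readers matters.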
Indeed! We should also keep in mind that those for whom this matters have
tools that we don't know about, tools that might not even depend on
tagging. And, as some of these machine-learning language models actually
scan the content stream, I bet that one of these days decent tagging
(without mapping to some unsuitable PDF sub-model) will win over an
obscure, buggy, badly defined, evolving ... whatever (which then makes
it kind of obsolete, as has happened with other technologies). So robust
basic PDF, combined with dedicated, well-designed alternatives, is then
the best long-term approach.
We just have to prove that we can do it. Best to spend the time where it
is worth spending (and most effective).
Hans
-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------
___________________________________________________________________________________
If your question is of interest to others as well, please add an entry to the
Wiki!
maillist : ntg-context@ntg.nl /
https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage : https://www.pragma-ade.nl / https://context.aanhet.net (mirror)
archive : https://github.com/contextgarden/context
wiki : https://wiki.contextgarden.net
___________________________________________________________________________________