Duplicate xml:ids have a way of creeping into my 60,000 documents. The duplicate 
ids keep a document from parsing, which is helpful in drawing attention to 
errors, but it makes it harder to correct them. I work with documents in the 
TEI namespace, and I have a very kludgy workaround: I comment out the reference 
to the schema and change 'xml:id' to 'xmlom'. Then I can loop through the 
document and fix the errors with a script.

There must be a more elegant way to do this. Is there a way of telling lxml: 
"never mind the duplicate IDs, just carry on"? Then I could toggle between a 
script that cares and one that doesn't care about duplicate IDs.

With thanks in advance for any help

Martin Mueller
Professor emeritus of English and Classics
Northwestern University


_______________________________________________
lxml - The Python XML Toolkit mailing list -- lxml@python.org
To unsubscribe send an email to lxml-le...@python.org
https://mail.python.org/mailman3/lists/lxml.python.org/
