Alas, setting the parser's collect_ids option to False does not solve the problem.
It produces the same error message:

File "/users/martinmueller/dropbox/earlyprint/ecco-evans/evans-2020-03-02/N01868.xml", line 3663
lxml.etree.XMLSyntaxError: ID N01868-0011-3105 already defined, line 3663, column 74
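
For the record, a parser with recover=True does seem to get past the error on a small test case, though recovery may also mask other problems in the file; a minimal sketch:

```python
# Sketch: a recovering parser tolerates the duplicate-ID error.
# recover=True tells libxml2 to continue after errors, so use with
# care -- it can paper over other problems in the document as well.
from lxml import etree

s = '<a><b xml:id="id1"/><c xml:id="id1"/></a>'
recovering = etree.XMLParser(recover=True)
tree = etree.XML(s, parser=recovering)
print(etree.tostring(tree).decode())
```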

From: Jens Quade <[email protected]>
Date: Sunday, July 31, 2022 at 4:52 PM
To: Martin Mueller <[email protected]>
Cc: lxml mailing list <[email protected]>
Subject: Re: [lxml] can I tell lxml to ignore xmlids?
Hi Martin,

Unique IDs are a constraint in the XML specification itself (section 3.3.1,
Attribute Types).

However, you can tell the XML parser not to care about IDs. I'm not sure
whether this is a useful option.

With renaming, processing, and renaming back, you at least know exactly what is
going on.


If you have a document like this:

>>> from lxml import etree
>>>

>>> s = '<a><b xml:id="id1"/><c xml:id="id1"/></a>'
>>>

containing the same ID twice, this fails:

>>> etree.XML(s)
Traceback (most recent call last):
…
lxml.etree.XMLSyntaxError: ID id1 already defined, line 1, column 36

But you can define a parser with the option collect_ids set to False, like this:

>>> myparser = etree.XMLParser(collect_ids=False)

use it to parse my document s:

>>> tree = etree.XML(s, parser=myparser)

and everything seems fine:

>>> etree.dump(tree)
<a>
  <b xml:id="id1"/>
  <c xml:id="id1"/>
</a>
>>>

As I said, this is a path not often taken, proceed with caution.
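
If the goal is to go on and fix the duplicates after parsing, the tree can be rewritten in place; a rough sketch (the suffix scheme here is made up, adjust to taste):

```python
from lxml import etree

# Clark notation for the xml:id attribute (the xml namespace is predefined).
XML_ID = '{http://www.w3.org/XML/1998/namespace}id'

def dedupe_xml_ids(root):
    """Append a numeric suffix to every repeated xml:id value."""
    seen = {}
    for el in root.iter():
        xid = el.get(XML_ID)
        if xid is None:
            continue
        if xid in seen:
            seen[xid] += 1
            el.set(XML_ID, f'{xid}-dup{seen[xid]}')
        else:
            seen[xid] = 0

s = '<a><b xml:id="id1"/><c xml:id="id1"/></a>'
tree = etree.XML(s, parser=etree.XMLParser(collect_ids=False))
dedupe_xml_ids(tree)
print(etree.tostring(tree).decode())
# → <a><b xml:id="id1"/><c xml:id="id1-dup1"/></a>
```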


jens



On 31. Jul 2022, at 23:56, Martin Mueller
<[email protected]> wrote:

Duplicate xml:ids have a way of creeping into my 60,000 documents. The
duplicate IDs keep a document from parsing, which is helpful in drawing
attention to errors, but it also makes the errors harder to correct. I work
with documents in the TEI namespace, and I have a very kludgy workaround: I
comment out the reference to the schema and change 'xml:id' to 'xmlom'. Then I
can loop through the document and fix errors with a script.
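
That rename/fix/rename-back round trip can be scripted end to end; a crude sketch using plain string replacement (which assumes the string 'xmlom' never occurs in the files for any other reason):

```python
from lxml import etree

def roundtrip_fix(xml_text, fix):
    """Hide xml:id from the parser, apply a fix, then restore the attribute name."""
    hidden = xml_text.replace('xml:id=', 'xmlom=')  # kludge: parser no longer sees IDs
    tree = etree.XML(hidden)
    fix(tree)                                       # caller edits the tree in place
    out = etree.tostring(tree).decode()
    return out.replace('xmlom=', 'xml:id=')

# Example fix: drop the attribute from later elements that repeat an ID.
def drop_dupes(root):
    seen = set()
    for el in root.iter():
        xid = el.get('xmlom')
        if xid in seen:
            del el.attrib['xmlom']
        elif xid is not None:
            seen.add(xid)

print(roundtrip_fix('<a><b xml:id="id1"/><c xml:id="id1"/></a>', drop_dupes))
# → <a><b xml:id="id1"/><c/></a>
```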

There must be a more elegant way to do this. Is there a way of telling lxml:
"never mind the duplicate IDs, just carry on"? Then I could toggle between a
script that cares and one that doesn't care about duplicate IDs.

With thanks in advance for any help

Martin Mueller
Professor emeritus of English and Classics
Northwestern University


_______________________________________________
lxml - The Python XML Toolkit mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/lxml.python.org/
Member address: [email protected]

