On 2005-01-28 12:31:33 -0500, Daniel Veillard wrote:
> Since libxslt generate-id() is based on node pointer values. Moreover,
> each document references the DocBook DTD, which is far from small. So
> it's not surprising at all to me if memory grows very fast. The only
> doubt I have is that the DTD from those documents can probably be
> removed once parsed, since all entity values should have been replaced
> at that point, but I'm not 100% sure that's a safe thing to do.
Couldn't the DTD structure be shared, since it is read-only? The
advantage would be that the DTD wouldn't have to be reparsed. It would
also be useful for batch transformations of XML files that use the same
DTD.

> Using a DTD which uses 2.5 MB of memory for each blog item, which
> should be around a kilobyte each, sounds like a very heavy design to
> me. You're paying the cost of that design, I would say.

I don't think the design is really bad. IMHO, xsltproc/libxslt is
sub-optimal.

-- 
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: <http://www.vinc17.org/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.org/blog/>
Work: CR INRIA - computer arithmetic / SPACES project at LORIA
_______________________________________________
xslt mailing list, project page http://xmlsoft.org/XSLT/
[email protected]
http://mail.gnome.org/mailman/listinfo/xslt
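[Editorial note on the batch-transformation point: until DTD sharing exists, the per-document DTD cost can be sidestepped at the xsltproc level with the real `--novalid` option, which skips DTD loading entirely. A minimal sketch; the file and directory names are hypothetical, and this is only safe when the stylesheet does not depend on entities or DTD-defaulted attributes from the DocBook DTD.]

```shell
#!/bin/sh
# Batch-transform small blog items with one stylesheet, without
# loading the DocBook DTD for each document. --novalid tells
# xsltproc to skip the DTD loading phase, avoiding the ~2.5 MB
# per-document cost described in the thread.
# NOTE: unsafe if the documents rely on DTD-declared entities or
# defaulted attributes; items/, out/ and blog.xsl are hypothetical.
for f in items/*.xml; do
    xsltproc --novalid -o "out/$(basename "$f" .xml).html" blog.xsl "$f"
done
```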
