Hi

One of the systems I'm working on parses files of about the same size as the ones you mention, using exactly the technique you exemplify. This works without any problem.
The only problem we have had with this is code outside the "parsing part", i.e. creating too many objects so that the garbage collector becomes a problem, and so on.
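For example, something as innocent as creating a heavyweight helper object per entry adds up over millions of entries. A hypothetical sketch of what I mean (processEntry and the "created" element are made up for illustration, not taken from your code):

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import org.dom4j.Element;

    public class EntryProcessor {
        // Hoisted out of processEntry: one instance instead of one per entry.
        // (SimpleDateFormat is not thread-safe, so this assumes the parse is
        // single-threaded, as it is with one SAXReader.)
        private final SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");

        public void processEntry(Element entry) throws ParseException {
            // Bad: new SimpleDateFormat(...) here would create and discard
            // millions of short-lived objects and keep the garbage collector busy.
            Date created = dateFormat.parse(entry.element("created").getText());
            // ... aggregate or store the result instead of keeping objects around ...
        }
    }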
We greatly increased our performance by optimizing the parsing of the XML and only retrieving the elements that we really needed.
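Roughly along these lines (just a sketch; the "/root/entry" and "name" paths are borrowed from your example, the rest is assumed): register handlers for exactly the elements you need, and detach each entry as soon as it has been processed so the partial document never grows.

    import java.io.File;
    import org.dom4j.DocumentException;
    import org.dom4j.ElementHandler;
    import org.dom4j.ElementPath;
    import org.dom4j.io.SAXReader;

    public class SelectiveParsing {
        public static void main(String[] args) throws DocumentException {
            SAXReader reader = new SAXReader();
            // Handle only the one child element we actually need.
            reader.addHandler("/root/entry/name", new ElementHandler() {
                public void onStart(ElementPath path) {}
                public void onEnd(ElementPath path) {
                    // Use the value right away instead of keeping the node.
                    System.out.println(path.getCurrent().getText());
                }
            });
            // Prune each completed entry so memory stays flat.
            reader.addHandler("/root/entry", new ElementHandler() {
                public void onStart(ElementPath path) {}
                public void onEnd(ElementPath path) {
                    path.getCurrent().detach();
                }
            });
            reader.read(new File(args[0]));
        }
    }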
Good luck!
Cheers
Christian
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Peter Venø
Sent: 16 December 2005 09:59
To: dom4j-user@lists.sourceforge.net
Subject: [dom4j-user] Parsing large files

Hello all,

I have just joined the list, so forgive me if the questions have been asked before.

I'm parsing large XML files using DOM4J's event system:

private void addEntryHandler(SAXReader saxReader) {
    saxReader.addHandler("/root/entry",
        new ElementHandler() {
            public void onStart(ElementPath path) {}
            public void onEnd(ElementPath path) {
                Element entry = path.getCurrent();
                processEntry(entry);
                entry.detach();
            }
        }
    );
}

which I believe is the standard way. The processEntry(entry) method extracts info by means of

entry.element("name").getText()

Now this works well, but the time used to parse the records increases linearly as the parsing progresses. The memory consumption of the parser increases, but only slightly. I can parse the first 10,000 records in approx. 10 seconds, whereas parsing entries 2,160,000 to 2,170,000 takes more than 5 minutes.

According to this article, http://www.devx.com/Java/Article/29161/0/page/2, parsing of 'extremely large files' should not be a problem. However, my file is significantly larger than the 'extremely large' file used in the article (14 MB); it is ludicrously large, approx. 850 MB gzipped.

Have any of you experienced similar problems with the parsing of large files? Any input is appreciated.

Thanks
Peter