Are there any special techniques for handling large hashes? When
building an XML tree as a Perl data structure, more and more memory is
consumed until the process eventually runs out of memory. Other than
partitioning the problem, is there any known technique?

The problem occurs when we use Any2XML.pm, which uses Ximple.pm to
build the XML tree recursively as an array of hashes. While this works
well for most documents, it breaks down when parsing flat files
containing thousands of records.
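For context, a minimal sketch of the failure mode described above (the field names and sample data are invented, not taken from Ximple.pm): each record of the flat file becomes its own anonymous hash, and every one of them is held in a single in-memory array until the whole tree can be emitted, so memory grows linearly with record count.

```perl
use strict;
use warnings;

# Stand-in for thousands of flat-file lines; real input would come
# from a filehandle.
my @records = map { "emp$_,2024-01-0$_,8" } 1 .. 5;

my @tree;
for my $line (@records) {
    my ( $id, $date, $hours ) = split /,/, $line;

    # One anonymous hash per record. Besides the data itself, each
    # hash carries per-structure bookkeeping overhead, so thousands
    # of records multiply both costs.
    push @tree, { id => $id, date => $date, hours => $hours };
}

printf "built %d record hashes\n", scalar @tree;
```

Nothing is freed until @tree goes out of scope, which is why documents with many repeated records hit the memory ceiling even though each individual record is small.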

---Closing information gaps-----
Ranga Nathan, Reliance Technology
>>Live demo at http://any2xml.com/docs/timesheet_demo.shtml<<
>>Get free COBOLExplorer at http://goreliance.com/download-products <<

_______________________________________________
Boston-pm mailing list
[EMAIL PROTECTED]
http://mail.pm.org/mailman/listinfo/boston-pm
