https://bugzilla.wikimedia.org/show_bug.cgi?id=28146
--- Comment #4 from Brion Vibber <[email protected]> 2011-04-01 00:29:59 UTC ---

It might also be wise to divide up the giant DjVu data set better. It looks like the *entire* page text metadata for all pages in the file is read in one batch in DjVuImage::retrieveMetadata. That entire output is run through UtfNormal::cleanUp() in one piece -- which is where the above error occurs -- then divided up into pages, and then put back into a giant XML string which gets saved as the file's metadata.

That giant string later gets read back in and parsed into an XML DOM for access, but in the meantime it sits around bloating up the image table record, memcached, and the responses for anybody fetching the document info via InstantCommons.
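The per-page alternative suggested above -- normalize each page's text individually instead of running the whole blob through cleanup at once -- can be sketched as follows. This is a hypothetical illustration, not the MediaWiki code: it is written in Python with `unicodedata.normalize` plus a control-character filter standing in for `UtfNormal::cleanUp()`, and the `pages` list standing in for whatever DjVuImage::retrieveMetadata extracts.

```python
import unicodedata
import xml.etree.ElementTree as ET

def build_metadata(pages):
    """Normalize each page's text individually, then assemble the XML.

    `pages` is a hypothetical list of per-page text strings, standing in
    for the output of DjVuImage::retrieveMetadata. Cleaning page by page
    keeps peak memory proportional to one page rather than the whole
    file, and a bad byte sequence only affects the page it occurs on.
    """
    root = ET.Element('mw-djvu')
    for i, text in enumerate(pages, start=1):
        # Stand-in for UtfNormal::cleanUp(): normalize to NFC and drop
        # control characters that are invalid in XML 1.0.
        clean = unicodedata.normalize('NFC', text)
        clean = ''.join(c for c in clean
                        if c in '\t\n\r' or ord(c) >= 0x20)
        page = ET.SubElement(root, 'page', {'n': str(i)})
        page.text = clean
    return ET.tostring(root, encoding='unicode')

xml = build_metadata(['Page one text', 'Page two \x00text'])
```

Note this only addresses the cleanup step; the resulting XML string is still one big blob, so the storage-bloat concern would need a separate fix (e.g. lazy per-page retrieval).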
