To comment on the following update, log in, then open the issue:
http://www.openoffice.org/issues/show_bug.cgi?id=63500

------- Additional comments from [EMAIL PROTECTED] Thu Mar 23 02:51:12 -0800 2006 -------
Rumour has it this saves e.g. tens of MB with large (typical) data-pilot/filtered
table-type sheets, since (of course) many, many of the strings are identical.

Perhaps a better place to do this work, though, is in the importer: checking
against the last N rows, and/or keeping a hash of the last N (100?) strings,
plus a 'frequently re-used' strings table, could be useful.
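The "hash of the last N strings" idea could be sketched roughly like this (Python purely for illustration; the class name, the capacity of 100, and the LRU eviction policy are my assumptions, not anything in Calc's importer):

```python
from collections import OrderedDict

class RecentStringCache:
    """Dedupe incoming cell strings against the last N distinct ones seen.

    Hypothetical sketch: a string that repeats within the window is returned
    as the single cached instance, so identical cells share one allocation.
    """

    def __init__(self, capacity=100):
        self.capacity = capacity
        self._cache = OrderedDict()  # string value -> shared instance

    def intern(self, s):
        hit = self._cache.get(s)
        if hit is not None:
            self._cache.move_to_end(s)  # frequently re-used strings stay hot
            return hit
        self._cache[s] = s
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict least recently used
        return s
```

A 'frequently re-used' strings table would then just be a second, unbounded map that strings get promoted into once they hit the windowed cache often enough.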

Either way, I guess a small code change can give a huge memory saving here. I
imagine any small performance loss from the hashing will be more than
compensated for on most sheets by the reduced memory-allocation & paging
overhead.
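On the "huge memory saving" point, a back-of-envelope calculation shows how tens of MB can arise. The cell count and string length below are invented for illustration, and CPython object sizes only stand in for Calc's own string representation; the shape of the saving is the point:

```python
import sys

# Hypothetical figures: 500,000 cells all holding copies of one ~20-char
# status string, as in a large filtered / data-pilot sheet.
n_cells = 500_000
bytes_per_copy = sys.getsizeof("order_status_normal")  # size of one copy

duplicated = n_cells * bytes_per_copy      # every cell owns its own copy
interned = bytes_per_copy + n_cells * 8    # one shared string + 8-byte refs

print(f"duplicated: {duplicated / 1e6:.1f} MB")
print(f"interned:   {interned / 1e6:.1f} MB")
```

With these numbers the duplicated layout costs tens of MB while the interned one costs a few, which is consistent with the savings claimed above.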

---------------------------------------------------------------------
Please do not reply to this automatically generated notification from
Issue Tracker. Please log onto the website and enter your comments.
http://qa.openoffice.org/issue_handling/project_issues.html#notification

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

