Kenneth Lakin wrote:
So, the memory we consume with the unbuffered read should be released
when each revision gets written out. Right?
Along those lines: aren't all of the data structures in Dumpfile flushed after
each revision, except for those in SanityChecker?
As far as I know, both counts are correct. So the fact that the memory
footprint keeps growing over time seems to indicate that either the sanity
checker is indeed at fault, or your Perl interpreter itself has a memory
leak. That's why I asked about trying the AS binary; I didn't realize
you had already tried it, with the same results... so I agree that the
sanity checker seems to be the main culprit.
You may want to try running the old 0.11.0-alpha1 version[1] to at least
test this theory, since the sanity checker was much less ambitious and
therefore kept much less state data at that point.
[1]http://www.pumacode.org/download/vss2svn/vss2svn-0.11.0-alpha1.zip
The patch is attached. It does two things:
1) It patches output_node to take a reference to the incoming node, and output_content to take a
reference to the data it's going to write out.
2) It uses syswrite instead of print to write out that data.
Both of these changes reduce the memory footprint, and they let me process another database that
had previously required 1GB of RAM very early in the IMPORTSVN phase.
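For anyone following along, here's a minimal sketch of the idea, not the actual
diff; the filehandle $fh and the write loop are illustrative, while the
output_content name and the reference-passing match the description above:

    use strict;
    use warnings;

    # Sketch: the caller now passes \$data, so only a reference is
    # copied into the subroutine, never the payload itself.
    sub output_content {
        my ($fh, $dataref) = @_;

        my $len     = length $$dataref;
        my $written = 0;
        while ($written < $len) {
            # syswrite bypasses perlio's buffering, so no second
            # buffered copy of the data is kept around; the 4-arg
            # form lets the loop resume after a partial write.
            my $n = syswrite($fh, $$dataref, $len - $written, $written);
            die "syswrite failed: $!" unless defined $n;
            $written += $n;
        }
    }

    # Call sites change from output_content($fh, $content)
    # to output_content($fh, \$content).

One caveat: once a filehandle is written with syswrite, you shouldn't mix in
buffered print on the same handle, since the two don't share a buffer.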
Thanks. Looking at that again, I don't know why on earth I was passing
the whole text contents directly; that doesn't make any sense at all with
Perl's pass-by-value strings!
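
To spell that out for the list: Perl actually passes aliases in @_, so the
copy happens at the usual my (...) = @_ unpacking rather than at the call
itself; either way, every call was duplicating the full file contents. A tiny
illustration (sub names hypothetical):

    sub takes_string {
        my ($data) = @_;     # this assignment copies the whole string
        return length $data;
    }

    sub takes_ref {
        my ($dataref) = @_;  # copies only a small, fixed-size reference
        return length $$dataref;
    }

    my $content = 'x' x (50 * 1024 * 1024);  # ~50 MB of text
    takes_string($content);  # peak memory roughly doubles during the call
    takes_ref(\$content);    # the payload is never duplicated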
toby