Emmanuel Lécharny wrote:
Hi guys,
many thanks Kiran for the OOM fix!
That's one step toward a fast load of big databases.
The next steps are also critical. We are currently limited by the memory
size, as we store in memory the DNs we load. In order to go one step
farther, we need to implement a system where we can process an LDIF file
with no limitation due to the available memory.
That supposes we process the LDIF file in chunks; once the chunks are
sorted, we process them as a whole, pulling one element from each of the
sorted lists of DNs and picking the smallest to inject into the BTree.
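(The merge step described above is a classic k-way merge of sorted runs. A minimal sketch in Java, assuming each chunk has already been sorted and is small enough to hold its head elements; `ChunkMerge` and its in-memory `List` chunks are hypothetical illustrations, not ApacheDS code. A real implementation would stream chunks from disk rather than hold them in lists.)

```java
import java.util.*;

// Hypothetical sketch of the chunked-merge idea: each chunk is a sorted
// list of DNs; a priority queue always exposes the smallest head element
// across all chunks, yielding one globally sorted stream to feed the BTree.
public class ChunkMerge {

    public static List<String> merge(List<List<String>> sortedChunks) {
        // Queue entry: {chunkIndex, offsetInChunk}, ordered by the DN it points at.
        PriorityQueue<int[]> heads = new PriorityQueue<>(
            Comparator.comparing((int[] e) -> sortedChunks.get(e[0]).get(e[1])));
        for (int i = 0; i < sortedChunks.size(); i++) {
            if (!sortedChunks.get(i).isEmpty()) {
                heads.add(new int[] { i, 0 });
            }
        }
        List<String> result = new ArrayList<>();
        while (!heads.isEmpty()) {
            int[] e = heads.poll();
            List<String> chunk = sortedChunks.get(e[0]);
            result.add(chunk.get(e[1]));                 // smallest DN across all chunks
            if (e[1] + 1 < chunk.size()) {
                heads.add(new int[] { e[0], e[1] + 1 }); // advance within that chunk
            }
        }
        return result;
    }
}
```

Only one element per chunk needs to be resident at a time, which is why the memory ceiling disappears: the heap size is bounded by the number of chunks, not the number of DNs.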
Why do you store the DNs in memory? Why are you sorting them?
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/