On Fri, Jun 20, 2014 at 6:20 PM, Howard Chu <[email protected]> wrote:
> Emmanuel Lécharny wrote:
>> Hi guys,
>>
>> many thanks Kiran for the OOM fix!
>>
>> That's one step toward a fast load of a big database.
>>
>> The next steps are also critical. We are currently limited by the
>> memory size, as we store in memory the DNs we load. To go one step
>> further, we need to implement a system where we can process an LDIF
>> file with no limitation due to the available memory.
>>
>> That supposes we process the LDIF file in chunks; once the chunks are
>> sorted, we process them as a whole, pulling one element from each of
>> the sorted lists of DNs and picking the smallest to inject into the
>> BTree.
>
> Why do you store the DNs in memory? Why are you sorting them?

We sort the DNs on the assumption that the input LDIF may contain entries in random order, and in ApacheDS each entry carries an 'entryParentID' attribute linking it to its parent entry's ID. The DNs are held in memory briefly, until this relationship is built using the DN and the generated entryUUIDs.
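The merge step Emmanuel describes above (pulling one element from each sorted chunk and picking the smallest) is a classic k-way merge. A minimal Java sketch, under the assumption that each chunk is already sorted; the name `mergeSortedChunks` is illustrative, not an ApacheDS API. A min-heap keeps only one DN per chunk resident at a time:

```java
import java.util.*;

public class ChunkMerge {

    // k-way merge of pre-sorted DN chunks using a min-heap keyed on the DN.
    // In the real loader the smallest DN would be injected into the BTree
    // instead of being collected into a list.
    static List<String> mergeSortedChunks(List<Iterator<String>> chunks) {
        // Heap entries pair the current DN with the index of its source chunk.
        PriorityQueue<Map.Entry<String, Integer>> heap =
                new PriorityQueue<>(Map.Entry.comparingByKey());

        // Seed the heap with the head element of every non-empty chunk.
        for (int i = 0; i < chunks.size(); i++) {
            if (chunks.get(i).hasNext()) {
                heap.add(new AbstractMap.SimpleEntry<>(chunks.get(i).next(), i));
            }
        }

        List<String> merged = new ArrayList<>();
        while (!heap.isEmpty()) {
            // Pop the globally smallest DN, then refill from the same chunk.
            Map.Entry<String, Integer> smallest = heap.poll();
            merged.add(smallest.getKey());
            Iterator<String> source = chunks.get(smallest.getValue());
            if (source.hasNext()) {
                heap.add(new AbstractMap.SimpleEntry<>(source.next(), smallest.getValue()));
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<String> merged = mergeSortedChunks(Arrays.asList(
                Arrays.asList("ou=a", "ou=c").iterator(),
                Arrays.asList("ou=b", "ou=d").iterator()));
        System.out.println(merged); // prints: [ou=a, ou=b, ou=c, ou=d]
    }
}
```

Memory stays bounded by the number of chunks, not the number of entries, which is the point of the proposal.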
> --
> -- Howard Chu
>   CTO, Symas Corp.           http://www.symas.com
>   Director, Highland Sun     http://highlandsun.com/hyc/
>   Chief Architect, OpenLDAP  http://www.openldap.org/project/

--
Kiran Ayyagari
http://keydap.com
