Kiran, thanks.
Yes, the out-of-memory issue is on the consumer; the provider is fine and its 
memory hardly grows at all. 

At or around line 1148 of ServiceBuilder, the SyncReplConfiguration size limit 
is not being set from the replBean. 
When I add the line below: 

                config.setSearchSizeLimit( replBean.getReplSearchSizeLimit() );


The consumer processes only 5000 entries. That takes care of the JVM OOM, but 
the consumer does not appear to repeatedly process another 5000 at a time. 
Should it? Can it? 
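To make the question concrete, here is a toy sketch of the batched pull we were expecting: fetch at most `sizeLimit` entries per round, carry state between rounds, and stop on a short batch. All names (`fetch`, `pullAll`, the integer "cookie") are made up for illustration; this is not the ApacheDS API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of batched consumption: pull at most sizeLimit entries per
// search, carry a cursor between rounds, and stop when a round returns
// fewer than sizeLimit entries. Illustrative only, NOT the ApacheDS API.
public class BatchedPull {

    // Simulated provider: returns up to sizeLimit entry ids starting at cursor.
    static List<Integer> fetch(int total, int cursor, int sizeLimit) {
        List<Integer> batch = new ArrayList<>();
        for (int i = cursor; i < total && batch.size() < sizeLimit; i++) {
            batch.add(i);
        }
        return batch;
    }

    // Consumer loop: keep fetching until a short (or empty) batch signals the end.
    static int pullAll(int total, int sizeLimit) {
        int cursor = 0;        // persisted between rounds in real syncrepl
        int processed = 0;
        while (true) {
            List<Integer> batch = fetch(total, cursor, sizeLimit);
            processed += batch.size();
            cursor += batch.size();
            if (batch.size() < sizeLimit) {
                break;         // provider has no more entries
            }
        }
        return processed;
    }

    public static void main(String[] args) {
        // 85k entries in rounds of 5000, never more than one batch in memory
        System.out.println(pullAll(85000, 5000)); // prints 85000
    }
}
```

With this shape, a 5000-entry size limit would bound memory per round while still draining all 85k entries; the question is whether the consumer actually loops this way or stops after the first batch.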
Thanks.


-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of 
Kiran Ayyagari
Sent: Wednesday, November 06, 2013 11:43 PM
To: [email protected]
Subject: Re: Options to get a replica consumer initially loaded with large db

Carlo,

On Thu, Nov 7, 2013 at 3:33 AM, <[email protected]> wrote:

> Hi,
>
> We've successfully setup replication (m16 snapshot) and it's working 
> great on our test instances with ~10K entries.
> With  ~85K entries, we cannot get a consumer (slave) instance 
> initially loaded without running out of memory.
>
this error is on the consumer, correct?

> We're on 32 bit so the most we can set the JVM memory to is ~ 1.34 GB
>
> 2 questions:
> Is it possible to batch the replication changes 5000 at a time to give 
> the jvm time to catch up?
>
it should be possible with the config setting you mentioned below

> We boot up an empty instance and it starts replicating the 85k 
> entries. It runs out of memory before it's done.
> We're not sure what the state of the database is once this occurs.
> Again, when there are only 10k, it completes no problem. We tried 
> setting ads-replSearchSizeLimit to 5k but this doesn't seem to make any 
> difference.
>
this is strange; the value is indeed used while sending the replication 
request

> Also,  Is it possible to configure a consumer to begin replicating 
> from a certain time?  For example, if I wanted the consumer to Pull 
> everything in from 2 pm yesterday? We want to replay changes from a 
> certain point to the slaves.
>
the server already takes care of this based on the previously stored state 
(using what is known as a 'cookie'), 
so after a restart a consumer should be able to pick up from where it left off
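As a rough illustration of the cookie idea: the consumer persists the last replication state it received, and on restart resumes from that state instead of re-pulling everything. The file location and cookie format below are invented for the sketch; they are not what ApacheDS actually writes.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Toy cookie store: persist the last sync state, reload it on restart.
// File name and cookie format are made up, not ApacheDS internals.
public class CookieStore {

    static void save(Path file, String cookie) {
        try {
            Files.writeString(file, cookie); // persist latest sync state
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String load(Path file) {
        try {
            // No cookie file yet => null => consumer starts with a full refresh
            return Files.exists(file) ? Files.readString(file) : null;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path f = Path.of(System.getProperty("java.io.tmpdir"), "repl.cookie");
        save(f, "csn=20131107120000.000000Z#000000#001#000000");
        System.out.println(load(f)); // prints the stored cookie back
    }
}
```

The point is only that "replicate from 2 pm yesterday" is not a knob you turn; the resume point is whatever state the cookie recorded last.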

>
> Thanks,
>
> Here's how our consumers are configured.
>
> dn:
> ads-replConsumerId=cpro,ou=replConsumers,ads-serverId=ldapServer,ou=se
> rvers,ads-directoryServiceId=default,ou=config
> objectClass: top
> objectClass: ads-base
> objectClass: ads-replConsumer
> ads-replSearchSizeLimit: 5000
> ads-replAttributes: *
> ads-replConsumerId: cpro
> ads-replRefreshInterval: 60000
> ads-replUserPassword: secret
> ads-replStrictCertValidation: FALSE
> ads-replUserDn: uid=admin,ou=system
> ads-replUseTls: FALSE
> ads-replProvPort: 10389
> ads-replProvHostName: localhost
> ads-replRefreshNPersist: TRUE
> ads-replSearchScope: sub
> ads-replSearchTimeOut: 0
> ads-searchBaseDN: o=cpro
> ads-replSearchFilter: (objectClass=*)
> ads-enabled: TRUE
> ads-replAliasDerefMode: never
>
> Java OOM
> jvm 1    | Exception in thread "Thread-4"
> jvm 1    | java.lang.OutOfMemoryError: Java heap space
> jvm 1    |      at java.lang.Class.getDeclaredMethods0(Native Method)
> jvm 1    |      at
> java.lang.Class.privateGetDeclaredMethods(Class.java:2436)
> jvm 1    |      at java.lang.Class.getDeclaredMethod(Class.java:1937)
> jvm 1    |      at
> java.io.ObjectStreamClass.getInheritableMethod(ObjectStreamClass.java:1344)
> jvm 1    |      at
> java.io.ObjectStreamClass.access$2200(ObjectStreamClass.java:50)
> jvm 1    |      at
> java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:446)
> jvm 1    |      at java.security.AccessController.doPrivileged(Native
> Method)
> jvm 1    |      at
> java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:411)
> jvm 1    |      at
> java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
> jvm 1    |      at
> java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:407)
> jvm 1    |      at
> java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
> jvm 1    |      at
> java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:407)
> jvm 1    |      at
> java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
> jvm 1    |      at
> java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:407)
> jvm 1    |      at
> java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:308)
> jvm 1    |      at
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1114)
> jvm 1    |      at
> java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
> jvm 1    |      at jdbm.btree.BTree.writeExternal(BTree.java:580)
> jvm 1    |      at
> java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1429)
> jvm 1    |      at
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1398)
> jvm 1    |      at
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
> jvm 1    |      at
> java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
> jvm 1    |      at
> jdbm.helper.Serialization.serialize(Serialization.java:74)
> jvm 1    |      at
> jdbm.helper.DefaultSerializer.serialize(DefaultSerializer.java:81)
> jvm 1    |      at
> jdbm.recman.BaseRecordManager.update(BaseRecordManager.java:274)
> jvm 1    |      at
> jdbm.recman.CacheRecordManager.updateCacheEntries(CacheRecordManager.java:417)
> jvm 1    |      at
> jdbm.recman.CacheRecordManager.commit(CacheRecordManager.java:349)
> jvm 1    |      at
> org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.sync(JdbmTable.java:977)
> jvm 1    |      at
> org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.commit(JdbmTable.java:1183)
> jvm 1    |      at
> org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.remove(JdbmTable.java:829)
> jvm 1    |      at
> org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmIndex.drop(JdbmIndex.java:435)
> jvm 1    |      at
> org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmIndex.drop(JdbmIndex.java:1)
> jvm 1    | Server daemon died!
>
this definitely looks like a JDBM cache-related issue; can you file a bug, 
please?

> jvm 1    | Exception in thread "pool-4-thread-1"
> jvm 1    | java.lang.OutOfMemoryError: Java heap space
> jvm 1    |      at java.util.HashMap.newKeyIterator(HashMap.java:840)
> jvm 1    |      at java.util.HashMap$KeySet.iterator(HashMap.java:874)
> jvm 1    |      at java.util.HashSet.iterator(HashSet.java:153)
> jvm 1    |      at
> java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1005)
> jvm 1    |      at
> java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1004)
> jvm 1    |      at
> org.apache.mina.transport.socket.nio.NioProcessor$IoSessionIterator.<init>(NioProcessor.java:321)
> jvm 1    |      at
> org.apache.mina.transport.socket.nio.NioProcessor$IoSessionIterator.<init>(NioProcessor.java:311)
> jvm 1    |      at
> org.apache.mina.transport.socket.nio.NioProcessor.allSessions(NioProcessor.java:93)
> jvm 1    |      at
> org.apache.mina.core.polling.AbstractPollingIoProcessor.notifyIdleSessions(AbstractPollingIoProcessor.java:760)
> jvm 1    |      at
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$900(AbstractPollingIoProcessor.java:67)
> jvm 1    |      at
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:1135)
> jvm 1    |      at
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
> jvm 1    |      at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> jvm 1    |      at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> jvm 1    |      at java.lang.Thread.run(Thread.java:662)
> jvm 1    | java.lang.OutOfMemoryError: Java heap space
> jvm 1    |      at java.lang.String.toCharArray(String.java:2725)
> jvm 1    |      at
> sun.nio.cs.SingleByteDecoder.<init>(SingleByteDecoder.java:27)
> jvm 1    |      at sun.nio.cs.MS1252$Decoder.<init>(MS1252.java:72)
> jvm 1    |      at sun.nio.cs.MS1252.newDecoder(MS1252.java:39)
> jvm 1    |      at
> java.lang.StringCoding$StringDecoder.<init>(StringCoding.java:116)
> jvm 1    |      at
> java.lang.StringCoding$StringDecoder.<init>(StringCoding.java:108)
> jvm 1    |      at java.lang.StringCoding.decode(StringCoding.java:167)
> jvm 1    |      at java.lang.StringCoding.decode(StringCoding.java:185)
> jvm 1    |      at java.lang.String.<init>(String.java:570)
> jvm 1    |      at
> org.tanukisoftware.wrapper.WrapperManager.handleSocket(WrapperManager.java:3759)
> jvm 1    |      at
> org.tanukisoftware.wrapper.WrapperManager.run(WrapperManager.java:4084)
> jvm 1    |      at java.lang.Thread.run(Thread.java:662)
> jvm 1    | [15:47:16] WARN
> [org.apache.directory.ldap.client.api.LdapNetworkConnection] - Java 
> heap space
> jvm 1    | java.lang.OutOfMemoryError: Java heap space
> wrapper  | JVM appears hung: Timed out waiting for signal from JVM.
> wrapper  | JVM did not exit on request, terminated
> wrapper  | Launching a JVM...
> jvm 2    | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
> jvm 2    |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights
> Reserved.
> jvm 2    |
>
was the error below thrown when you attempted a restart after the process was 
killed by the OOM?

> jvm 2    | [15:54:56] ERROR
> [org.apache.directory.server.core.authz.GroupCache]
> - Exception while initializing the groupCache:  {}
> jvm 2    |
> org.apache.directory.api.ldap.model.exception.LdapOperationErrorException
> jvm 2    |      at
> org.apache.directory.server.core.partition.impl.btree.AbstractBTreePartition.fetch(AbstractBTreePartition.java:1120)
> jvm 2    |      at
> org.apache.directory.server.xdbm.search.evaluator.EqualityEvaluator.evaluate(EqualityEvaluator.java:97)
> jvm 2    |      at
> org.apache.directory.server.xdbm.search.evaluator.AndEvaluator.evaluate(AndEvaluator.java:110)
> jvm 2    |      at
> org.apache.directory.server.core.partition.impl.btree.EntryCursorAdaptor.get(EntryCursorAdaptor.java:167)
> jvm 2    |      at
> org.apache.directory.server.core.partition.impl.btree.EntryCursorAdaptor.get(EntryCursorAdaptor.java:1)
> jvm 2    |      at
> org.apache.directory.server.core.api.filtering.BaseEntryFilteringCursor.next(BaseEntryFilteringCursor.java:377)
> jvm 2    |      at
> org.apache.directory.server.core.authz.GroupCache.initialize(GroupCache.java:164)
> jvm 2    |      at
> org.apache.directory.server.core.authz.GroupCache.<init>(GroupCache.java:122)
> jvm 2    |      at
> org.apache.directory.server.core.authz.AciAuthorizationInterceptor.init(AciAuthorizationInterceptor.java:286)
> jvm 2    |      at
> org.apache.directory.server.core.DefaultDirectoryService.initInterceptors(DefaultDirectoryService.java:688)
> jvm 2    |      at
> org.apache.directory.server.core.DefaultDirectoryService.initialize(DefaultDirectoryService.java:1836)
> jvm 2    |      at
> org.apache.directory.server.core.DefaultDirectoryService.startup(DefaultDirectoryService.java:1247)
> jvm 2    |      at
> org.apache.directory.server.ApacheDsService.initDirectoryService(ApacheDsService.java:323)
> jvm 2    |      at
> org.apache.directory.server.ApacheDsService.start(ApacheDsService.java:182)
> jvm 2    |      at
> org.apache.directory.server.wrapper.ApacheDsTanukiWrapper.start(ApacheDsTanukiWrapper.java:72)
> jvm 2    |      at
> org.tanukisoftware.wrapper.WrapperManager$12.run(WrapperManager.java:2788)
> jvm 2    | Caused by:
> org.apache.directory.api.ldap.model.exception.LdapOperationErrorException
> jvm 2    |      at
> org.apache.directory.server.core.partition.impl.btree.AbstractBTreePartition.fetch(AbstractBTreePartition.java:1196)
> jvm 2    |      at
> org.apache.directory.server.core.partition.impl.btree.AbstractBTreePartition.fetch(AbstractBTreePartition.java:1116)
> jvm 2    |      ... 15 more
>
>


--
Kiran Ayyagari
http://keydap.com
