[
https://issues.apache.org/jira/browse/DIRSERVER-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Emmanuel Lecharny updated DIRSERVER-2093:
-----------------------------------------
Component/s: jdbm
> ApacheDS M19 got stuck and no longer returns all entries
> --------------------------------------------------------
>
> Key: DIRSERVER-2093
> URL: https://issues.apache.org/jira/browse/DIRSERVER-2093
> Project: Directory ApacheDS
> Issue Type: Bug
> Components: jdbm, ldap
> Affects Versions: 2.0.0-M19
> Reporter: John Peter
> Priority: Blocker
>
> We are using ApacheDS M19 on DC1 and DC2. DC1 is the primary, and all editing is
> done on DC1.
> We use a custom replication scheme to push updates to DC2 once an hour.
> (MultiMaster replication has caused us issues...)
> The DC2 ApacheDS stopped responding with the error message:
> INFO | jvm 1 | 2015/09/12 03:28:58 | Exception in thread
> "pool-2-thread-341" Exception in thread "pool-2-thread-528" Exception in
> thread "pool-2-thread-802" Exception in thread "pool-7-thread-1716"
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> This happened when the custom-built replication tried searching for all the
> entries with the query (objectClass=*).
> On DC1 it returns 28469 entries. On DC2 the search timed out.
> After restarting the DC2 ApacheDS, it seemed to start up normally, but searching
> for all entries with Directory Studio returns only 6665 entries.
> Under ApacheDS\instances\default\partitions:
> On DC1 the partition is 778 MB; master.db is 329 MB.
> On DC2 the partition is 2.56 GB; master.db is 2.46 GB.
> Might the DC2 ApacheDS database be corrupt? Any idea why it is so large?
> Any suggestions on how to recover and how to keep the issue from recurring?
> We had similar issues in August, which were solved by reinstalling the DC2
> ApacheDS from an LDIF dump.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]