[
https://issues.apache.org/jira/browse/RANGER-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Pavi Subenderan updated RANGER-3442:
------------------------------------
Description:
We have many keys in our KMS keystore, and we recently found that creating new
keys drives the KMS instances up against the memory limit.
We can reproduce this with a script that calls KMS createKey and then
getMetadata for new keys in a loop. After we restart our instances, memory
usage is approximately 1.5GB out of 8GB, but after running the script for a
short time (1-5 minutes), usage climbs close to the 8GB limit and does not go
back down afterwards.
I took a heap dump and saw that most of the memory was being retained by
XXRangerKeystore and EclipseLink's EntityManagerImpl:
* org.eclipse.persistence.internal.jpa.EntityManagerImpl
* org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork
The object with the largest shallow size was a char[] at over 4GB.
*My fix*
I was ultimately able to solve this issue by adding a
getEntityManager().clear() call in BaseDao.java's getAllKeys() method.
With this fix in place, we can run as many KMS createKey / getMetadata calls
as we want without any increase in memory usage; it now stays constant below
1.7GB.
My understanding is that Ranger KMS has many instances of ThreadLocal
EntityManager (160+ according to my heap dump), each of which held a cache of
the results of getAllKeys. Since we have so many keys in our KMS, this quickly
puts us at the memory limit.
I am not sure whether there are any drawbacks to clearing the EntityManager in
BaseDao.getAllKeys(), but we are seeing greatly improved performance in our
case since we are no longer constantly hitting the memory limit.
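The fix described above can be sketched as follows. This is a minimal illustration, not the actual Ranger patch: the StubEntityManager class below is a hypothetical stand-in for a JPA EntityManager, and all names in it are illustrative. The point is the pattern — getAllKeys() copies the query results out and then calls clear(), so the per-thread persistence context no longer retains every key entity it has ever loaded.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a JPA EntityManager: its "persistence
// context" retains every entity a query returns until clear() detaches them.
class StubEntityManager {
    private final List<Object> persistenceContext = new ArrayList<>();

    // Simulates a query: each result becomes a managed (retained) entity.
    List<Object> findAllKeys(int keyCount) {
        List<Object> results = new ArrayList<>();
        for (int i = 0; i < keyCount; i++) {
            Object key = new Object();
            persistenceContext.add(key); // retained until clear()
            results.add(key);
        }
        return results;
    }

    // Mirrors EntityManager.clear(): detach everything, freeing the memory.
    void clear() {
        persistenceContext.clear();
    }

    int managedEntityCount() {
        return persistenceContext.size();
    }
}

public class GetAllKeysSketch {
    // The shape of the fix: copy the results out, then clear the context
    // so per-thread EntityManagers stop accumulating key entities.
    static List<Object> getAllKeys(StubEntityManager em, int keyCount) {
        List<Object> keys = new ArrayList<>(em.findAllKeys(keyCount));
        em.clear(); // the added call from the fix
        return keys;
    }

    public static void main(String[] args) {
        StubEntityManager em = new StubEntityManager();
        List<Object> keys = getAllKeys(em, 1000);
        System.out.println("keys returned: " + keys.size());
        System.out.println("entities retained: " + em.managedEntityCount());
    }
}
```

The trade-off noted above is visible here: clearing detaches all entities in the context, so any entity the caller expected to stay managed (for lazy loading or dirty tracking) would need to be re-fetched.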
> Ranger KMS DAO memory issues when many new keys are created
> -----------------------------------------------------------
>
> Key: RANGER-3442
> URL: https://issues.apache.org/jira/browse/RANGER-3442
> Project: Ranger
> Issue Type: Bug
> Components: kms
> Affects Versions: 2.0.0
> Reporter: Pavi Subenderan
> Priority: Major
>
> We have many keys in our KMS keystore, and we recently found that creating
> new keys drives the KMS instances up against the memory limit.
> We can reproduce this with a script that calls KMS createKey and then
> getMetadata for new keys in a loop. After we restart our instances, memory
> usage is approximately 1.5GB out of 8GB, but after running the script for a
> short time (1-5 minutes), usage climbs close to the 8GB limit and does not
> go back down afterwards.
> I took a heap dump and saw that most of the memory was being retained by
> XXRangerKeystore and EclipseLink's EntityManagerImpl:
> * org.eclipse.persistence.internal.jpa.EntityManagerImpl
> * org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork
> The object with the largest shallow size was a char[] at over 4GB.
>
> *My fix*
> I was ultimately able to solve this issue by adding a
> getEntityManager().clear() call in BaseDao.java's getAllKeys() method.
> With this fix in place, we can run as many KMS createKey / getMetadata calls
> as we want without any increase in memory usage; it now stays constant below
> 1.7GB.
> My understanding is that Ranger KMS has many instances of ThreadLocal
> EntityManager (160+ according to my heap dump), each of which held a cache
> of the results of getAllKeys. Since we have so many keys in our KMS, this
> quickly puts us at the memory limit.
> I am not sure whether there are any drawbacks to clearing the EntityManager
> in BaseDao.getAllKeys(), but we are seeing greatly improved performance in
> our case since we are no longer constantly hitting the memory limit.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)