[ https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sankar Hariappan updated HIVE-20192:
------------------------------------
    Description: 
On some HiveServer2 instances, HS2 is observed in an unresponsive state every 3-4 days, with full GC (FGC) cycles running regularly.

The JXray report shows that pmCache (a collection of JDOPersistenceManager objects) occupies about 84% of the heap, and there are around 16,000 references to UDFClassLoader.
{code:java}
10,759,230K (84.7%) Object tree for GC root(s) Java Static org.apache.hadoop.hive.metastore.ObjectStore.pmf
- org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K (84.6%), 1 reference(s)
  - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
    - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 16,872 reference(s)
      - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K (84.5%), 16,872 reference(s)
        ... 3 more references together retaining 4,933K (< 0.1%)
    - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
      ... 2 more references together retaining 48b (< 0.1%)
- org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 14,810K (0.1%), 1 reference(s)
... 3 more references together retaining 96b (< 0.1%){code}
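This growth pattern matches PersistenceManager instances that are obtained from the factory but never closed: a JDO PersistenceManagerFactory keeps a reference to every open PersistenceManager until close() is called on it. A minimal sketch of that behaviour with the plain javax.jdo API (placeholder connection properties, not Hive code):
{code:java}
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

public class PmLeakSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
        "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
    // ... datastore connection properties would go here ...
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);

    // Leaky pattern: this manager is never closed, so the factory keeps a
    // reference to it (DataNucleus tracks it in JDOPersistenceManagerFactory.pmCache).
    PersistenceManager leaked = pmf.getPersistenceManager();

    // Correct pattern: close the manager once the work (or the owning thread)
    // is done, which removes it from the factory's cache and releases its
    // ExecutionContext.
    PersistenceManager pm = pmf.getPersistenceManager();
    try {
      // ... metadata operations ...
    } finally {
      pm.close();
    }
  }
}
{code}
Each of the 16,872 pmCache entries in the dump above would correspond to one such unclosed manager.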
When the RawStore object is re-created, the new instance is not updated in ThreadWithGarbageCleanup.threadRawStoreMap, so it is never cleaned up when the thread exits.
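The cleanup relies on a per-thread registry: the worker thread records its RawStore in threadRawStoreMap and shuts it down when the thread finishes, so an instance that is re-created later on the same thread but never re-registered is invisible to that cleanup. A rough sketch of the pattern (schematic names and an AutoCloseable-style resource, not the actual HiveServer2 classes):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified illustration of the per-thread cleanup described above; the
// class and method names are schematic, not the real Hive APIs.
public class CleanupThread extends Thread {
  // thread name -> the RawStore-like resource that must be shut down on exit
  private static final Map<String, AutoCloseable> threadRawStoreMap =
      new ConcurrentHashMap<>();

  public CleanupThread(Runnable work) {
    super(work);
  }

  // Must be called every time the resource is (re-)created on this thread.
  // If a re-created resource is not put back into the map, only the old
  // instance gets closed on exit and the new one leaks, which is the bug
  // reported here.
  public static void cacheThreadLocalResource(AutoCloseable store) {
    threadRawStoreMap.put(Thread.currentThread().getName(), store);
  }

  @Override
  public void run() {
    try {
      super.run();
    } finally {
      // On thread exit, close whatever resource is currently registered.
      AutoCloseable store = threadRawStoreMap.remove(getName());
      if (store != null) {
        try {
          store.close();
        } catch (Exception e) {
          // best-effort cleanup; log and ignore in a real implementation
        }
      }
    }
  }
}
{code}
The fix direction implied above is to make sure the registration step runs again whenever the RawStore is re-created on a live thread, so the map always points at the instance that actually needs to be shut down.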

 

  was:
On some HiveServer2 instances, HS2 is observed in an unresponsive state every 3-4 days, with full GC (FGC) cycles running regularly.

The JXray report shows that pmCache (a collection of JDOPersistenceManager objects) occupies about 84% of the heap, and there are around 16,000 references to UDFClassLoader.

When the RawStore object is re-created, the new instance is not updated in ThreadWithGarbageCleanup.threadRawStoreMap, so it is never cleaned up when the thread exits.

 


> HS2 is leaking JDOPersistenceManager objects.
> ---------------------------------------------
>
>                 Key: HIVE-20192
>                 URL: https://issues.apache.org/jira/browse/HIVE-20192
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 3.0.0, 3.1.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>              Labels: HiveServer2
>
> On some HiveServer2 instances, HS2 is observed in an unresponsive state every 3-4 days, with full GC (FGC) cycles running regularly.
> The JXray report shows that pmCache (a collection of JDOPersistenceManager objects) occupies about 84% of the heap, and there are around 16,000 references to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
>     - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 16,872 reference(s)
>       - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K (84.5%), 16,872 reference(s)
>         ... 3 more references together retaining 4,933K (< 0.1%)
>     - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>       ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, the new instance is not updated in ThreadWithGarbageCleanup.threadRawStoreMap, so it is never cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
