[
https://issues.apache.org/jira/browse/AMBARI-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Siddharth Wagle updated AMBARI-13517:
-------------------------------------
Description:
- Max Heap restriction problem: the Ehcache library that we use limits how far
into the object graph it traverses to find the size of a cached reference; this
is to keep sizing performant. Default limit = 1000.
- The user gets the following warning, which is harmless unless the data cannot
fit in memory and eviction does not kick in:
{code}
WARN [qtp-client-70] ObjectGraphWalker:209 - The configured limit of 1,000
object references was reached while attempting to calculate the size of the
object graph. Severe performance degradation could occur
if the sizing operation continues. This can be avoided by setting the
CacheManger or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to
"abort" or adding stop points with @IgnoreSizeOf annotations. If performance
degradation
is NOT an issue at the configured limit, raise the limit value using the
CacheManager or Cache <sizeOfPolicy> elements maxDepth attribute. For more
information, see the Ehcache configuration documentation.
{code}
_Workaround in 2.1.2_:
- Add "server.timeline.metrics.cache.disabled=true" to the
/etc/ambari-server/conf/ambari.properties file and restart the server,
OR
- If you have spare memory on the Ambari Server host, increase the heap size in
/var/lib/ambari-server/ambari-env.sh. *Temporary until upgrade to 2.1.3*
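The property-based workaround above can be applied from a shell roughly as follows (path and property name as given in this issue; `ambari-server restart` is the standard restart command):

```shell
# 2.1.2 workaround: disable the timeline metrics cache,
# then restart Ambari Server so the property takes effect.
echo 'server.timeline.metrics.cache.disabled=true' >> /etc/ambari-server/conf/ambari.properties
ambari-server restart
```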
_Objective of the patch_:
- Provide a custom sizing engine for Ehcache that gives a close approximation
of the size of the data in the cache with a significant performance gain.
Premise: since we know the data structures being cached, we can make better
estimates.
- Expectation from the sizing engine: a discrepancy of less than 10 KB for a
data structure of size 10 MB (proven with a unit test)
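The premise can be sketched with a self-contained, hypothetical estimator (this is not the actual patch code; the class name and per-entry byte constants are illustrative assumptions for a 64-bit JVM): because a metrics series has a known shape, e.g. a TreeMap of timestamp to value, its footprint can be approximated in O(1) from the entry count instead of walking every object reference.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the sizing-engine premise (not the actual Ambari
// patch): since the cached data structure is known, estimate its footprint
// from the entry count using fixed per-entry costs instead of a graph walk.
public class MetricSizeEstimator {

    // Assumed per-entry costs in bytes on a 64-bit JVM (illustrative only).
    private static final long TREEMAP_ENTRY_OVERHEAD = 40; // TreeMap.Entry header + fields
    private static final long BOXED_LONG_SIZE = 24;        // Long timestamp key
    private static final long BOXED_DOUBLE_SIZE = 24;      // Double metric value

    /** O(1) size estimate: entryCount * fixed per-entry cost. */
    public static long estimateSeriesSize(int entryCount) {
        return entryCount * (TREEMAP_ENTRY_OVERHEAD + BOXED_LONG_SIZE + BOXED_DOUBLE_SIZE);
    }

    public static void main(String[] args) {
        Map<Long, Double> series = new TreeMap<>();
        for (long t = 0; t < 1_000; t++) {
            series.put(t, (double) t);
        }
        // One multiplication instead of traversing 1,000+ object references.
        System.out.println("estimated bytes: " + estimateSeriesSize(series.size()));
    }
}
```

The real engine would feed such estimates into Ehcache's sizing hooks; the point is that a known data shape turns an expensive object-graph traversal into constant-time arithmetic.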
was:
- Max Heap restriction problem: the Ehcache library that we use limits how far
into the object graph it traverses to find the size of a cached reference; this
is to keep sizing performant. Default limit = 1000.
- The user gets the following warning, which is harmless unless the data cannot
fit in memory and eviction does not kick in:
{code}
WARN [qtp-client-70] ObjectGraphWalker:209 - The configured limit of 1,000
object references was reached while attempting to calculate the size of the
object graph. Severe performance degradation could occur
if the sizing operation continues. This can be avoided by setting the
CacheManger or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to
"abort" or adding stop points with @IgnoreSizeOf annotations. If performance
degradation
is NOT an issue at the configured limit, raise the limit value using the
CacheManager or Cache <sizeOfPolicy> elements maxDepth attribute. For more
information, see the Ehcache configuration documentation.
{code}
_Objective of the patch_:
- Provide a custom sizing engine for Ehcache that gives a close approximation
of the size of the data in the cache with a significant performance gain.
Premise: since we know the data structures being cached, we can make better
estimates.
- Expectation from the sizing engine: a discrepancy of less than 10 KB for a
data structure of size 10 MB (proven with a unit test)
> Ambari Server JVM crashed after several clicks in Web UI to navigate graph
> timerange
> ------------------------------------------------------------------------------------
>
> Key: AMBARI-13517
> URL: https://issues.apache.org/jira/browse/AMBARI-13517
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.1.2
> Reporter: Siddharth Wagle
> Assignee: Siddharth Wagle
> Priority: Critical
> Fix For: 2.1.3
>
>
> - Max Heap restriction problem: the Ehcache library that we use limits how
> far into the object graph it traverses to find the size of a cached reference;
> this is to keep sizing performant. Default limit = 1000.
> - The user gets the following warning, which is harmless unless the data
> cannot fit in memory and eviction does not kick in:
> {code}
> WARN [qtp-client-70] ObjectGraphWalker:209 - The configured limit of 1,000
> object references was reached while attempting to calculate the size of the
> object graph. Severe performance degradation could occur
> if the sizing operation continues. This can be avoided by setting the
> CacheManger or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to
> "abort" or adding stop points with @IgnoreSizeOf annotations. If performance
> degradation
> is NOT an issue at the configured limit, raise the limit value using the
> CacheManager or Cache <sizeOfPolicy> elements maxDepth attribute. For more
> information, see the Ehcache configuration documentation.
> {code}
> _Workaround in 2.1.2_:
> - Add "server.timeline.metrics.cache.disabled=true" to the
> /etc/ambari-server/conf/ambari.properties file and restart the server,
> OR
> - If you have spare memory on the Ambari Server host, increase the heap size
> in /var/lib/ambari-server/ambari-env.sh. *Temporary until upgrade to 2.1.3*
> _Objective of the patch_:
> - Provide a custom sizing engine for Ehcache that gives a close approximation
> of the size of the data in the cache with a significant performance gain.
> Premise: since we know the data structures being cached, we can make better
> estimates.
> - Expectation from the sizing engine: a discrepancy of less than 10 KB for a
> data structure of size 10 MB (proven with a unit test)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)