Hi Eirik,

Thank you for reporting this issue. Unless the Ambari server hits an OutOfMemoryError because Ehcache gave up calculating the true size of the in-memory cache, the warning below is harmless.
I have created a Jira, AMBARI-13411, to address this problem in the next minor version release. Please feel free to comment on the Jira.

- Sid

________________________________________
From: Eirik Thorsnes <[email protected]>
Sent: Tuesday, October 13, 2015 8:28 AM
To: [email protected]
Subject: Warning from Ehcache in ambari-server log, and a socket error

Hi,

I get a lot of the following warning messages in the ambari-server logs:

WARN [qtp-client-60803] ObjectGraphWalker:209 - The configured limit of 1,000 object references was reached while attempting to calculate the size of the object graph. Severe performance degradation could occur if the sizing operation continues. This can be avoided by setting the CacheManger or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to "abort" or adding stop points with @IgnoreSizeOf annotations. If performance degradation is NOT an issue at the configured limit, raise the limit value using the CacheManager or Cache <sizeOfPolicy> elements maxDepth attribute. For more information, see the Ehcache configuration documentation.

ERROR [qtp-client-24] MetricsRequestHelper:87 - Error getting timeline metrics. Can not connect to collector, socket error.

Any pointers on what I can do to fix this issue? The error above looks to be related to some of the metric graphs showing "no data available", though it is not the same graphs that show this each time. Perhaps it is a limit on the number of connections to the Metrics collector?

Ambari version is 2.1.2, Java 1.8 u60, Linux x86_64.

Regards,
Eirik

--
Eirik Thorsnes
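For anyone hitting the same warning before the Jira lands: the two remedies the Ehcache message itself suggests can be expressed as a <sizeOfPolicy> element in the cache configuration. This is only a sketch based on the attribute names quoted in the warning (maxDepth, maxDepthExceededBehavior); the file location and the depth value of 100000 are illustrative assumptions, not a verified Ambari setting.

```xml
<!-- Sketch of an ehcache.xml tweak, per the options named in the warning. -->
<ehcache>
  <!-- Option 1: raise the object-graph walk limit so sizing can complete.
       100000 is an arbitrary example value, not a recommendation. -->
  <sizeOfPolicy maxDepth="100000" maxDepthExceededBehavior="continue"/>

  <!-- Option 2 (alternative): keep the limit but abort sizing when it is
       exceeded, avoiding the performance-degradation risk the warning
       describes:

  <sizeOfPolicy maxDepth="1000" maxDepthExceededBehavior="abort"/>
  -->
</ehcache>
```

Either form silences the ObjectGraphWalker warning; the "Can not connect to collector, socket error" message is a separate issue between ambari-server and the Metrics collector and is not affected by this setting.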
