Although we're still using Ehcache, it's configured to persist overflow to disk. Looks like Hazelcast might have a similar feature:
http://blog.hazelcast.com/overflow-queue-store/

We do approx. 80k TGTs and 170k STs a day. The ticket registry is configured to hold up to 40k entries before spilling to disk, the idea being to keep the in-memory cache small and tight. We don't really notice a cache 'miss' when it has to go to disk to pull in a TGT/ST. The on-disk cache can become a decent size, maybe 100 MB, unless a client/browser starts looping on ST generation, in which case it can get much bigger.
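In case it helps, the relevant pieces of an ehcache.xml for that kind of setup look roughly like this. The cache name, sizes, and timeouts below are just an illustration, not our exact values:

  <ehcache>
    <!-- where the overflow/disk store lives; java.io.tmpdir is the usual default -->
    <diskStore path="java.io.tmpdir/cas"/>

    <!-- keep up to 40k tickets in memory, spill the rest to disk -->
    <cache name="ticketGrantingTicketsCache"
           maxElementsInMemory="40000"
           memoryStoreEvictionPolicy="LRU"
           overflowToDisk="true"
           diskPersistent="false"
           eternal="false"
           timeToIdleSeconds="7200"
           timeToLiveSeconds="28800"/>
  </ehcache>

The same pattern applies to the service ticket cache, just with much shorter timeouts since STs are short-lived.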
Tom.

On 08/31/2015 12:58 PM, Bryan Wooten wrote:
> Hi all,
>
> So twice in the past few months CAS (3.5.2) has gotten really slow. A
> restart of the Tomcat servers makes the issue go away.
>
> There are no errors in either cas.log or catalina.out, it is just really
> slow.
>
> Because the issue occurs only in production and not in test, I have never
> had time to attempt any kind of root cause analysis.
>
> Now our Hazelcast is configured to use 85% of the heap, which is set to
> 2048 MB. We get about 200k logins a day.
>
> I think I may be running into a Tomcat/JVM tuning issue (heap size or
> garbage collection).
>
> Does anyone have suggestions on how I should monitor this, or what config
> settings for Tomcat I should be using?
>
> Thanks,
>
> Bryan Wooten
> Tel: (801) 585-9323
> Email: [email protected]
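P.S. On the Hazelcast side, "85% of heap" normally maps to a per-map eviction setting along these lines in hazelcast.xml. The map name and numbers here are only an illustration, not necessarily what your deployment uses:

  <map name="tickets">
    <max-idle-seconds>28800</max-idle-seconds>
    <eviction-policy>LRU</eviction-policy>
    <!-- start evicting once the map's share of the heap reaches 85% -->
    <max-size policy="USED_HEAP_PERCENTAGE">85</max-size>
  </map>

If eviction isn't keeping up, a nearly full 2 GB heap tends to show up as long GC pauses rather than errors, which would match "slow until restart". Turning on GC logging on the Tomcat JVMs (-verbose:gc -Xloggc:<file> -XX:+PrintGCDetails -XX:+PrintGCDateStamps) and watching old-gen occupancy with jstat -gcutil or VisualVM is probably the quickest way to confirm whether it's a heap/GC problem.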
