On 8/7/2013 11:50 AM, Danner, Mearl wrote:
> Using ehcache on a test cluster. It is what we will implement in production.
Cool. Could I trouble you to share your ticketRegistry.xml and ehcache.xml? I pieced mine together from the outdated wiki and various mailing list and blog postings, and I don't have complete confidence in it :).
I originally instantiated the EhCacheManagerFactoryBean with an externalized configLocation of /etc/cas/ehcache.xml. It found that configuration and appeared to create the CAS-specific cache pieces, but then it also complained that it couldn't find classpath:ehcache.xml and said it was configuring based on ehcache-failsafe.xml. I ended up moving my configuration to classpath:ehcache.xml, which got rid of the failsafe warning, but it still looks like the configuration is being read twice instead of just once.
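For reference, the wiring I'm describing looks roughly like this; the bean id and file location are from my own setup, not a canonical CAS example, so treat it as a sketch:

```xml
<!-- Sketch of the Spring wiring under discussion. The bean id
     "cacheManager" and the classpath location are assumptions from
     my setup; configLocation and shared are standard
     EhCacheManagerFactoryBean properties. -->
<bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
  <property name="configLocation" value="classpath:ehcache.xml"/>
  <!-- shared=true reuses one singleton CacheManager, which may help
       avoid the configuration being read/created more than once -->
  <property name="shared" value="true"/>
</bean>
```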
It also seems to only be bootstrapping the service ticket cache on startup, not the ticket granting ticket cache.
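In case it helps illustrate the symptom: each replicated cache needs its own bootstrapCacheLoaderFactory element, or it won't pull existing entries from peers at startup. A sketch of what I mean (cache names follow the usual CAS convention, and the timeouts and replicator properties here are placeholders, not my actual values):

```xml
<!-- Sketch only: both caches declare a bootstrap loader so each one
     is primed from the peers on startup, not just the service
     ticket cache. -->
<cache name="serviceTicketsCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="300">
  <cacheEventListenerFactory
      class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
      properties="replicateAsynchronously=false"/>
  <bootstrapCacheLoaderFactory
      class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>

<cache name="ticketGrantingTicketsCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToIdleSeconds="28800"
       timeToLiveSeconds="28800">
  <cacheEventListenerFactory
      class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
      properties="replicateAsynchronously=true"/>
  <bootstrapCacheLoaderFactory
      class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>
```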
Have you/are you planning to do any tuning of the garbage collection parameters?
> We chose ehcache because we wanted replicated ticket registries.
That does seem preferable, barring any negatives that might outweigh the increased fault tolerance.
> Our implementation will be self-contained to our datacenter/dmz so we are not concerned with securing the replication traffic.
Two of the nodes will be in our local data center, but we also plan to have a third at our DR site on the other side of the state. In general, even though our local network can for the most part be trusted, I try not to have sensitive data flow across it unencrypted, so I ended up configuring ssh port-forwarding tunnels to secure the ehcache replication traffic. It seems to be working reasonably well, although RMI is a pain: you have to tell java it's running on "localhost" so it doesn't tell the remote client to connect to it directly rather than through the tunnel. I've also been unable to get the local RMI listening ports to bind to loopback rather than wildcard; ideally you would only be able to connect to them from the local machine. We do have a host-based firewall preventing access, but still, ideally :). It looks like that might only be possible with custom coding.
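Roughly, the tunnel setup looks like this; the peer hostname and ports below are placeholders for illustration, not my actual values:

```shell
# Sketch of the SSH tunnel approach, assuming a hypothetical peer
# "cas2.example.edu" and RMI ports 40001 (registry) and 40002
# (remote object). Adjust to match your ehcache peer configuration.

# Forward local ports to the replication ports on the remote peer,
# so ehcache can reach it via "localhost":
ssh -f -N -L 40001:localhost:40001 -L 40002:localhost:40002 cas2.example.edu

# Tell the local JVM to advertise itself as "localhost" in the RMI
# stubs it hands out, so peers connect back through their own
# tunnels instead of trying to reach this host directly:
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=localhost"
```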
> Replication traffic isn't a particular issue for us.
Not for us either: our local nodes will either be gigabit connected, or in our vmware cluster connected with virtual 10G, and our remote DR node will be at our sister campus Sacramento State. Both of us have 10G connections to the CENIC backbone network that most educational sites in California use, so I don't think the remote traffic will have any problems either.
Thanks…

--
Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst | [email protected]
California State Polytechnic University | Pomona CA 91768
