Adam, 
For the services registry we settled on using JSON; the caveat is that it is 
not read/write through the services GUI. We decided that eliminating the 
RDBMS makes the CAS server a little more reliable, with one less component 
to care for and one less connection to keep working.  
https://github.com/Unicon/cas-addons/wiki/Configuring-JSON-Service-Registry 
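
For reference, a minimal registry file for the cas-addons JSON service 
registry looks roughly like this (the field names follow the wiki page above; 
the service name and URL pattern are made-up examples, so check the wiki for 
the exact set your version supports):

```json
{
  "services": [
    {
      "id": 1,
      "serviceId": "https://portal.example.edu/**",
      "name": "Campus Portal",
      "description": "Example service entry",
      "enabled": true,
      "ssoEnabled": true,
      "allowedToProxy": false
    }
  ]
}
```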


You could still use the database option for the services, as you wouldn't have 
to worry about the JPA performance issues there the way you do with the 
tickets.  


 
We weren't too concerned about the services registry and decided that just 
editing the conf file, with an rsync job to check for and overwrite with the 
newer file, was simple enough. 
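
As a sketch of that rsync approach (the hostname and paths here are 
hypothetical), a cron entry on each secondary node could pull the file from 
the primary:

```shell
# Every 5 minutes, pull the registry file from the primary CAS node,
# updating the local copy only when the remote one is newer (-u).
*/5 * * * * rsync -au cas1.example.edu:/etc/cas/services/servicesRegistry.conf /etc/cas/services/
```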



Scott Massari
Owens Community College
Senior Systems Administrator 
Phone: (567) 661-2059
Cell: (567) 277-0638
Fax: (567) 661-7643




>>> On 5/16/2013 at 09:10 AM, in message 
>>> <can6mv5otwtdrogsmk-lv0bddh7mlyy8bjtmw2ral2o0imrx...@mail.gmail.com>, Adam 
>>> Causey <[email protected]> wrote:


Scott, do you still use a database to store information about the services and 
service URLs? Is this database failover-capable or otherwise replicated, and 
if so, does it need master-master replication the way the ticket repository 
would? 



Our current setup is a db on each application server with master-master 
replication, but I'd like to switch to a ticket repository using ehcache and a 
centralized database for everything else. I am wondering if this would work. 



I assume the only reason for needing realtime replication is in cases where the 
primary server goes down in the middle of someone logging in; the failover 
server would then still recognize and accept the ticket. If the service URLs in 
the database are not realtime replicated it shouldn't disrupt service 
(possible, but not likely). 



Any thoughts? 



thanks, 

Adam


On Wed, May 8, 2013 at 11:08 AM, Scott Massari <[email protected]> wrote:



Adam, yes, that is how it works: the tickets are distributed, stored in memory, 
and replicated between the cluster members. An additional benefit is not 
needing a ticket registry cleaner, which has caused performance problems for 
some institutions.

The Jasig wiki has a good entry on this: 
https://wiki.jasig.org/display/CASUM/EhcacheTicketRegistry 
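
For anyone following along, that wiki page boils down to wiring beans roughly 
like the following in ticketRegistry.xml (the class and property names are as 
documented for CAS 3.x; double-check them against your version, and the 
replicated ehcache-replicated.xml is a separate file you supply):

```xml
<!-- Replicated Ehcache-backed ticket registry (CAS 3.x sketch) -->
<bean id="ticketRegistry"
      class="org.jasig.cas.ticket.registry.EhCacheTicketRegistry"
      p:serviceTicketsCache-ref="serviceTicketsCache"
      p:ticketGrantingTicketsCache-ref="ticketGrantingTicketsCache" />

<bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
      p:configLocation="classpath:ehcache-replicated.xml" />

<bean id="serviceTicketsCache"
      class="org.springframework.cache.ehcache.EhCacheFactoryBean"
      p:cacheManager-ref="cacheManager"
      p:cacheName="serviceTicketsCache" />

<bean id="ticketGrantingTicketsCache"
      class="org.springframework.cache.ehcache.EhCacheFactoryBean"
      p:cacheManager-ref="cacheManager"
      p:cacheName="ticketGrantingTicketsCache" />
```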






Scott Massari
Owens Community College
Senior Systems Administrator
Phone: (567) 661-2059
Cell: (567) 277-0638
Fax: (567) 661-7643





>>> On 5/8/2013 at 09:48 AM, in message 
>>> <CAN6MV5NZVWUsFrC8Hwu51LN1RmUem=aDQgh9p6sEadaHiB6r=a...@mail.gmail.com>, 
>>> Adam Causey <[email protected]> wrote:



Scott - Do you all use distributed ehcache for this? Otherwise how would each 
server in the cluster be aware of the tickets for the callback? I'd be 
interested in switching to an in-memory solution if others have had positive 
results. 



The tricky thing about CAS is making sure that when a client calls back, the 
server it reaches in the cluster recognizes the ticket. 



We are currently using two app servers - one primary and one failover (in 
separate locations) behind a Cisco LB. We are using a database for each app 
server in a master-master setup to replicate tickets. Having two DBs guarantees 
complete replication in case of failure at one physical location. We have not 
had IP issues with this approach, but I am not sure how it works in a 
round-robin type setup. The only issue I've run into recently is getting the 
originating IP into the audit logs. 
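
(For context, the master-master piece is plain database-level replication. 
Assuming MySQL — the thread doesn't say which RDBMS — the relevant my.cnf 
fragment on node A would look something like this, with node B mirroring it 
using server-id = 2 and auto_increment_offset = 2:)

```ini
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
# Interleave auto-increment values so the two masters never collide
auto_increment_increment = 2
auto_increment_offset    = 1
```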





-Adam




On Wed, May 8, 2013 at 9:33 AM, Scott Massari <[email protected]> wrote:



Geoff, 
Ehcache seems to perform better than using a database (JPA); there are also 
JBoss Cache, memcached, etc. I am sure you will get lots of replies, as quite 
a few people seem to have set it up this way. We are using Cisco ACE LB 
hardware for our setup (ehcache for tickets, JSON for the service registry), 
which we are currently rolling into production. 






Scott Massari
Owens Community College
Senior Systems Administrator
Phone: (567) 661-2059
Cell: (567) 277-0638
Fax: (567) 661-7643





>>> On 5/8/2013 at 09:24 AM, in message 
>>> <ff549b8a8619594b88cc9f92fc3be5d37ee2af1...@jupiter.unfcsd.unf.edu>, 
>>> "Whittaker, Geoffrey" <[email protected]> wrote:






Good morning,

We have a hardware load balancer/proxy that we'd like to use for a distributed 
deployment of CAS using a central database. While we were discussing it this 
morning, we stumbled on the question of whether or not CAS will have a problem 
with all of the connections having the same source IP (the balancer/proxy).

Has anyone ever configured CAS like this... with a proxy/load balancer in front 
of two servers and a central database? Is this a terrible idea, fraught with 
peril and heartache? Is there a better way to ensure redundancy?


Geoff

--

You are currently subscribed to [email protected] as: 
[email protected] 


To unsubscribe, change settings or access archives, see 
http://www.ja-sig.org/wiki/display/JSG/cas-user




