We did an implementation in Cassandra where we set the TTL (time-to-live)
for the row containing the ticket based on the remaining time to expire for
that ticket.  When the last-use time of the ticket was updated, we updated
the row to push out the TTL.  In order to do this, I believe we had to
create a new set of expiration policy classes to be able to calculate the
TTL properly (I don't remember the exact reason now; it may have been
because some important attributes in the TTL calculation were private or
something).  In the end, I believe we achieved the property that a ticket
would not expire in Cassandra until it was no longer valid.
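The approach above can be sketched roughly as follows. This is a minimal illustration, not the actual code from that implementation; the function name, parameters, and the CQL shown in the comment are hypothetical, and it assumes a simple sliding-expiration policy (ticket dies `time_to_kill_ms` after its last use).

```python
import time

# Hypothetical helper: compute the Cassandra row TTL (in seconds) for a
# ticket from its remaining validity, mirroring a sliding expiration
# policy. Names and values are illustrative, not CAS's actual defaults.
def remaining_ttl_seconds(last_used_ms, time_to_kill_ms, now_ms=None):
    """TTL = time left before the expiration policy would kill the
    ticket, with a floor of 1 second (in Cassandra a TTL of 0 means
    'never expire', which is exactly what we want to avoid)."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    remaining_ms = (last_used_ms + time_to_kill_ms) - now_ms
    return max(1, (remaining_ms + 999) // 1000)  # round up to whole seconds

# On each ticket use, the row would be rewritten with the refreshed TTL,
# e.g. (illustrative CQL):
#   UPDATE tickets USING TTL ? SET ticket = ?, last_used = ? WHERE id = ?;
```

For example, a ticket last used at t=1s with a 20s time-to-kill, observed at t=6s, has 15 seconds of validity left, so the row gets a 15-second TTL.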

 

Note, however, that Cassandra is not memory-limited in the way that at
least some configurations of memcached and Ehcache would be (I'm no expert
in either of them).  Certainly, if they are going to expire entries for
space limitations, then Scott's case is going to come up.

 

David Ohsie

ASD Arch. and Advanced Dev.

410-929-2092

 

 

 

From: Scott Battaglia [mailto:[email protected]] 
Sent: Wednesday, May 08, 2013 11:15 AM
To: [email protected]
Subject: Re: [cas-user] Load balancing of CAS

 

Just to be clear: any registry that doesn't rely on a registry cleaner may
not exactly respect your expiration policies, since its internal cleanup
mechanisms may clear out your ticket before it has actually time-expired
(this means memcached, Ehcache, etc.).  In practice, it's not a huge issue
unless you undersize your caches.
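The caveat above can be demonstrated with a toy size-capped LRU cache: once capacity is exceeded, the least-recently-used ticket is silently evicted regardless of whether its expiration policy considers it still valid. This is purely illustrative, not CAS or Ehcache code.

```python
from collections import OrderedDict

# Toy cache illustrating the point: an undersized store (like a small
# memcached/Ehcache region) evicts tickets for space, ignoring any TTL.
class TinyLruCache:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)          # mark as most recently used
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)   # evict LRU entry, TTL or not

    def get(self, key):
        return self._data.get(key)

cache = TinyLruCache(max_entries=2)
cache.put("TGT-1", "ticket-1")  # still 'valid' per its expiration policy
cache.put("TGT-2", "ticket-2")
cache.put("TGT-3", "ticket-3")  # over capacity: TGT-1 is silently dropped
# cache.get("TGT-1") now returns None even though the ticket never
# time-expired -- exactly the behavior described above.
```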




-Scott Battaglia
PGP Public Key Id: 0x383733AA
LinkedIn: http://www.linkedin.com/in/scottbattaglia

 

On Wed, May 8, 2013 at 11:08 AM, Scott Massari <[email protected]>
wrote:

Adam, yes, that is how it works: the tickets are distributed, stored in
memory, and replicated between the cluster members. An additional benefit
is not needing a ticket registry cleaner, which seems to have caused
performance problems for some institutions.

The jasig wiki has a good entry on this:
https://wiki.jasig.org/display/CASUM/EhcacheTicketRegistry 

 




Scott Massari
Owens Community College
Senior Systems Administrator 
Phone: (567) 661-2059
Cell: (567) 277-0638
Fax: (567) 661-7643





>>> On 5/8/2013 at 09:48 AM, in message
<CAN6MV5NZVWUsFrC8Hwu51LN1RmUem=aDQgh9p6sEadaHiB6r=a...@mail.gmail.com>, Adam
Causey <[email protected]> wrote:


Scott - Do you all use distributed Ehcache for this? Otherwise, how would
each server in the cluster be aware of the tickets for the callback? I'd be
interested in switching to an in-memory solution if others have had
positive results.

 

The tricky thing about CAS is making sure that when a client calls back,
the server it connects to in the cluster recognizes the ticket.

 

We are currently using two app servers, one primary and one failover (in
separate locations), behind a Cisco load balancer. We are using a database
for each app server in a master-master setup to replicate tickets. Having
two DBs keeps the tickets fully replicated in case of failure at one
physical location. We have not had IP issues with this approach, but I am
not sure how it works in a round-robin setup. The only issue I've run into
recently is getting the originating IP into the audit logs.

 

 

-Adam

 

On Wed, May 8, 2013 at 9:33 AM, Scott Massari <[email protected]>
wrote:

Geoff, 

Ehcache seems to perform better than using a database (JPA); there are also
JBoss Cache, memcached, etc. I am sure you will get lots of replies, as
there seem to be quite a few who have set it up this way. We are using
Cisco ACE load-balancer hardware for our setup (Ehcache for tickets, JSON
for the service registry), which we are currently rolling into production.
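For anyone setting this up, the cache region in ehcache.xml is where the size limit and TTL mentioned earlier in the thread come together. The element below is only an illustrative sketch; the cache name and values are examples, not CAS defaults, and the full replicated configuration is on the EhcacheTicketRegistry wiki page linked above.

```xml
<!-- Illustrative ehcache.xml cache element; names and values are
     examples. If maxElementsInMemory is set too small, LRU eviction
     will drop tickets before timeToLiveSeconds is reached. -->
<cache name="ticketGrantingTicketsCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToIdleSeconds="7200"
       timeToLiveSeconds="28800"
       overflowToDisk="false"
       memoryStoreEvictionPolicy="LRU"/>
```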

 




Scott Massari
Owens Community College
Senior Systems Administrator
Phone: (567) 661-2059
Cell: (567) 277-0638
Fax: (567) 661-7643





>>> On 5/8/2013 at 09:24 AM, in message
<ff549b8a8619594b88cc9f92fc3be5d37ee2af1...@jupiter.unfcsd.unf.edu>,
"Whittaker, Geoffrey" <[email protected]> wrote:

 


 

Good morning,

We have a hardware load balancer/proxy that we'd like to use for a
distributed deployment of CAS using a central database. While we were
discussing it this morning, we stumbled on the question of whether or not
CAS will have a problem with all of the connections having the same source
IP (the balancer/proxy).

Has anyone ever configured CAS like this... with a proxy/load balancer in
front of two servers and a central database? Is this a terrible idea,
fraught with peril and heartache? Is there a better way to ensure
redundancy?


Geoff

--

You are currently subscribed to [email protected] as:
[email protected] 


To unsubscribe, change settings or access archives, see
http://www.ja-sig.org/wiki/display/JSG/cas-user

 
