Scott,
I have run into an issue with MemCacheTicketRegistry and was wondering
if you have any thoughts. I didn't want to create a new thread for
this note. Anyone else with comments should feel free to reply, too.
;-)
My tests have shown that a ticket generated on one CAS cluster member
may sometimes fail to validate. This is apparently because memcached's
asynchronous replication did not manage to deliver the ticket replica in
time. Fast as repcached may be, under a relatively light load ST
validation failed in 0.1% of cases, or once in 1000 attempts. It would
seem that the following steps are involved enough to give replication
time to catch up:
- Browser accesses a CAS-protected service
- Service redirects to CAS for authentication
- CAS validates the TGT
- CAS issues the ST for the service
- CAS redirects the browser to the service
- Service sends the ST for validation
But they are fast! My JMeter testing showed the whole sequence taking
28 milliseconds under light load on the CAS server, which is amazingly
fast. Note that in real life it can be just as fast, because the
browser, CAS, and the service perform these steps without the user
slowing them down. CAS is indeed a lightweight system, and memcached
does nothing to slow it down. So in about 0.1% of cases this round trip
outruns repcached's replication even under light load. The bad news is
that under heavy load the failure rate increases; I've seen it as bad
as 8%.
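To make the window concrete, here is a toy sketch in plain Java (not
memcached or repcached code; the 30 ms replication lag is an assumption
I picked just for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Toy illustration of the race: the ST lands on the primary node, the peer
// receives the replica asynchronously a moment later, and a validation read
// served from the peer in between misses the ticket.
public class ReplicationRaceSketch {
    public static void main(String[] args) throws Exception {
        final Map<String, String> primary = new ConcurrentHashMap<String, String>();
        final Map<String, String> replica = new ConcurrentHashMap<String, String>();
        ScheduledExecutorService async = Executors.newSingleThreadScheduledExecutor();

        // CAS issues the ST and stores it on its memcached node.
        primary.put("ST-1-example", "serialized service ticket");

        // Assumed 30 ms replication lag before the peer sees the entry.
        async.schedule(new Runnable() {
            public void run() {
                replica.put("ST-1-example", primary.get("ST-1-example"));
            }
        }, 30, TimeUnit.MILLISECONDS);

        // The service validates almost immediately; if that lookup is served
        // by the peer, the ticket is not there yet and validation fails.
        System.out.println("validate via peer right away: " + replica.get("ST-1-example"));

        Thread.sleep(60);
        System.out.println("validate after the lag:       " + replica.get("ST-1-example"));
        async.shutdown();
    }
}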
Have you or anyone else seen this? Have you had to work around this?
Thanks,
Adam
Scott Battaglia wrote:
On Tue, Oct 14, 2008 at 11:15 AM, Andrew Ralph Feller,
afelle1 <[EMAIL PROTECTED]> wrote:
Hey Scott,
Thanks for answering some questions; really appreciate it. Just a
handful more:
- What happens when the server it intends to replicate with is down?
It doesn't replicate :-) The client sends its request to the primary
server, and if the primary server is down it fails over to the
secondary. The repcached server itself will not replicate to the other
server if it can't reach it.
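Just to sketch what I mean in code (this is an assumption about how a
spymemcached client would be wired up, with made-up hostnames, not our
exact configuration):

import net.spy.memcached.AddrUtil;
import net.spy.memcached.ConnectionFactoryBuilder;
import net.spy.memcached.FailureMode;
import net.spy.memcached.MemcachedClient;

// Sketch: a spymemcached client that knows both repcached nodes and, with
// FailureMode.Redistribute, sends operations to the surviving node when the
// primary for a key is down. Hostnames are placeholders.
public class FailoverClientSketch {
    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(
                new ConnectionFactoryBuilder()
                        .setFailureMode(FailureMode.Redistribute)
                        .build(),
                AddrUtil.getAddresses("cas-memcached-a:11211 cas-memcached-b:11211"));

        // Normal operation: the write goes to whichever node owns the key.
        client.set("ST-1-example", 300, "serialized service ticket");
        System.out.println(client.get("ST-1-example"));

        client.shutdown();
    }
}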
- What happens when it comes back up?
The repcached servers will sync with each other. The memcached clients
will continue to function as they should.
- Does the newly recovered machine synchronize itself with the other servers?
The newly recovered machine will synchronize with its paired memcached
server.
-Scott
Thanks,
Andrew
Memcached, as far as I know, uses a hash of the key to determine which
server to write to (and then with repcached, it's replicated to its
pair, which you configure).
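For illustration only, the node-selection idea looks roughly like this
(this is not the client's actual hash function; spymemcached's native
and ketama hashing differ in detail):

// Conceptual sketch of key-based node selection; the real client hashing
// (native or consistent/ketama) differs in detail, but the idea is the same.
// With repcached, whichever node receives the write then pushes the entry
// to its configured pair.
public class KeyHashingSketch {
    public static void main(String[] args) {
        String[] nodes = { "cas-memcached-a:11211", "cas-memcached-b:11211" };
        String key = "ST-1-example";

        // Non-negative hash modulo the node count picks the owning node.
        int index = (key.hashCode() & 0x7fffffff) % nodes.length;
        System.out.println(key + " -> " + nodes[index]);
    }
}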
-Scott
-Scott Battaglia
PGP Public Key Id: 0x383733AA
LinkedIn: http://www.linkedin.com/in/scottbattaglia
On Tue, Oct 14, 2008 at 10:38 AM, Andrew Ralph Feller, afelle1 <[EMAIL PROTECTED]>
wrote:
Scott,
I've looked at the sample configuration file on the JA-SIG wiki;
however, I was curious how memcached handles cluster membership, for
lack of a better word. One of the things we are getting burned by with
JBoss/JGroups is how frequently the cluster gets fragmented.
Thanks,
Andrew
We've disabled the registry cleaners
since memcached has explicit timeouts (which are configurable on the
registry). We've configured it by default with 1 GB of RAM, I think,
though I doubt we need that much.
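The expiration behavior we're relying on looks like this in spymemcached
terms (a sketch with made-up hostname and timeout values):

import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

// Sketch of why no registry cleaner is needed: each entry is stored with its
// own expiration (third argument to set(), in seconds), and memcached ages it
// out on its own. Hostname and timeouts are placeholders.
public class ExpirationSketch {
    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(
                AddrUtil.getAddresses("cas-memcached-a:11211"));

        client.set("ST-2-example", 10, "serialized service ticket"); // lives 10 s
        System.out.println(client.get("ST-2-example"));              // still there

        Thread.sleep(11000);
        System.out.println(client.get("ST-2-example"));              // null: expired

        client.shutdown();
    }
}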
-Scott
-Scott Battaglia
PGP Public Key Id: 0x383733AA
LinkedIn: http://www.linkedin.com/in/scottbattaglia
I've been working on updating from 3.2 to 3.3 and wanted to give
memcached a try instead of JBoss. I read Scott's message about
performance, and we've had good success here with memcached for other
applications. It also looks like using memcached instead of JBoss will
simplify the configuration changes for the CAS server.
I do have JBoss replication working with CAS 3.2, but pounding the
heck out of it with JMeter causes some not-so-nice things to happen.
I'm using VMware VI3 and have configured an isolated switch for the
clustering and Linux-HA traffic. I do see higher traffic levels coming
to my cluster in the future, but I'm not sure they'll reach the levels
from my JMeter test. (I'm just throwing this out there because of the
recent best-practices thread.)
If I use memcached, is the ticketRegistryCleaner no longer needed? I
left those beans in the ticketRegistry.xml file and saw all kinds of
errors. After taking them out it seems to load fine and appears to
work, but I wasn't sure what the behavior is, and I haven't tested it
further.
What if memcached fills up all the way? Does anyone have a general
idea of how much memory to allocate to memcached with regard to
concurrent logins and tickets stored?
Thanks,
Pat
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Patrick Hennessy ([EMAIL PROTECTED])
Senior Systems Specialist
Division of Information and Educational Technology
Delaware Technical and Community College
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
--
Andrew R. Feller, Analyst
Information Technology Services
200 Fred Frey Building
Louisiana State University
Baton Rouge, LA 70803
(225) 578-3737 (Office)
(225) 578-6400 (Fax)
_______________________________________________
Yale CAS mailing list
[email protected]
http://tp.its.yale.edu/mailman/listinfo/cas