> Our main problem was the full SCAN of tickets to check if a ticket is a TGT 
> and if the principal is the right one ("count/get SSO sessions").
> For this one, I would create a "double-key indexing" — big words for a 
> simple thing ;-)
> On 6.5.x, we stored tickets this way: key=CAS_TICKET:ticketId => 
> VALUE=serialized ticket
> I propose that the "add ticket" operation check whether the ticket is a 
> TGT: in that case, it would also add to Redis: 
> key=CAS_TICKET_USER:ticketId:userId => VALUE=nothing.
> This way, we would "only" SCAN keys of TGT sessions to find the right user 
> (CAS_TICKET_USER:*:userId) and we would retrieve all the TGT identifiers for 
> a given user.
> Then, a multi GET on these identifiers would find the SSO sessions of the 
> user.
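To make the quoted proposal concrete, here is a rough model of the key scheme with a plain dict standing in for Redis and `fnmatch` standing in for SCAN MATCH. The `CAS_TICKET`/`CAS_TICKET_USER` prefixes come from the proposal; everything else (function names, payloads) is illustrative:

```python
import fnmatch

# In-memory stand-in for Redis; keys and values are plain strings.
store = {}

def add_ticket(ticket_id, serialized, principal=None):
    """Store the ticket; for a TGT, also write the secondary index key."""
    store[f"CAS_TICKET:{ticket_id}"] = serialized
    if principal is not None:  # TGT: add the marker key with an empty value
        store[f"CAS_TICKET_USER:{ticket_id}:{principal}"] = ""

def sso_sessions(principal):
    """SCAN only the index keys for this user, then multi-GET the tickets."""
    pattern = f"CAS_TICKET_USER:*:{principal}"
    tgt_ids = [k.split(":")[1] for k in store if fnmatch.fnmatch(k, pattern)]
    return [store[f"CAS_TICKET:{t}"] for t in tgt_ids]

add_ticket("TGT-1", "ticket-data-1", principal="casuser")
add_ticket("TGT-2", "ticket-data-2", principal="casuser")
add_ticket("ST-1", "service-ticket")  # not a TGT: no index key written
print(sso_sessions("casuser"))  # only the two TGT payloads for casuser
```

The point of the scheme is that the SCAN only ever walks the small set of `CAS_TICKET_USER:*` keys rather than every serialized ticket.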

That's quite clever. But it's not without complications. These are not
strictly blockers, but we should likely take these into account:

Doing the double indexing for a TGT implies that the same thing would,
or could, be done for OIDC codes, access tokens and refresh tokens. For
example, think of operations such as "get me all the access tokens
issued to user X" or "all the refresh tokens issued to user Y". This
would mean the registry has to somehow be tied to the modules that
provide those extra ticket types, though I imagine this can be somewhat
solved with the ticket catalog concept. And of course, the registry
size grows: 10 TGTs for unique users would actually mean 20 entries,
not to mention that every update/remove operation would issue double
queries. So it's double the index and double the number of operations.
At scale, I am not so sure this would actually be all that better, but
I have not run any conclusive tests.
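The write amplification is easy to see in a sketch: removing a TGT now has to delete both the ticket key and the index key, and since the index key embeds the principal, the remove path either has to know the principal or scan for the key. Again a dict models Redis, each dict mutation counts as one Redis command, and all names are illustrative:

```python
import fnmatch

store = {}
commands = 0  # rough count of "Redis commands" issued

def add_tgt(ticket_id, principal, serialized):
    global commands
    store[f"CAS_TICKET:{ticket_id}"] = serialized
    store[f"CAS_TICKET_USER:{ticket_id}:{principal}"] = ""
    commands += 2  # one SET per key: double the writes

def remove_tgt(ticket_id):
    global commands
    store.pop(f"CAS_TICKET:{ticket_id}", None)
    commands += 1
    # The index key embeds the principal, so it has to be found by
    # pattern (or the principal must be tracked separately) to delete it.
    pattern = f"CAS_TICKET_USER:{ticket_id}:*"
    for key in [k for k in store if fnmatch.fnmatch(k, pattern)]:
        store.pop(key)
        commands += 1

add_tgt("TGT-1", "casuser", "data")
remove_tgt("TGT-1")
print(commands, len(store))  # 4 commands for one ticket's lifecycle
```

One add plus one remove costs four commands instead of two, which is the doubling described above.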

I would also be interested to see an actual test that showcases the
slowness. For example, I ran a test against a basic local Redis
cluster, 7.0.5. My test added 1000 TGTs to the registry, fetched them
all, and then looped through the result set asking the registry again
for each ticket that was fetched. This operation completed in
approximately 13 seconds for non-encrypted tickets and 14 seconds for
encrypted tickets. Then I reverted the pattern back to what 6.5 used to
do, ran the same test, and saw more or less the same execution time. I
double-checked to make sure there are no obvious mistakes. Is this also
what you see? If so, can you share some sort of test that actually
demonstrates the problem?
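For anyone wanting to reproduce this, the test described above has roughly the following shape. The stub registry and its method names are placeholders; swap in the actual Redis-backed registry to get meaningful timings:

```python
import time

class StubTicketRegistry:
    """Placeholder; replace with the real Redis-backed registry to measure."""
    def __init__(self):
        self._tickets = {}
    def add_ticket(self, ticket_id, payload):
        self._tickets[ticket_id] = payload
    def get_tickets(self):
        return list(self._tickets)
    def get_ticket(self, ticket_id):
        return self._tickets.get(ticket_id)

registry = StubTicketRegistry()
start = time.perf_counter()
for i in range(1000):                    # 1. add 1000 TGTs
    registry.add_ticket(f"TGT-{i}", f"payload-{i}")
all_ids = registry.get_tickets()         # 2. fetch them all
fetched = [registry.get_ticket(t) for t in all_ids]  # 3. re-fetch each one
elapsed = time.perf_counter() - start
print(f"{len(fetched)} tickets in {elapsed:.3f}s")
```

Running this against both key layouts (6.5-style single keys versus the double-indexed scheme) on the same Redis instance would show whether the difference is actually measurable.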

-- 
You received this message because you are subscribed to the Google Groups "CAS 
Developer" group.
To view this discussion on the web visit 
https://groups.google.com/a/apereo.org/d/msgid/cas-dev/CAGSBKkebhxy--%3DP8PMXHz3o%2B%2BsYN0dbg0RQJND_e5YyrNa_0EQ%40mail.gmail.com.