Hi,

The commit "support a second-level in-memory cache for redis" implements a memory cache [ ticketId => Ticket ]. That's a nice speed-up, since getting a new ST requires a lot of "getTicket" calls to fetch the TGT (~8 calls on CAS 5.3, ~15 calls on CAS 6.5), hence the 0 ms vs 2 ms with one Java node.
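To illustrate the idea (this is only a sketch with made-up class names, not the actual CAS TicketRegistry API): a first-level in-memory map sits in front of Redis, so repeated getTicket calls for the same TGT skip the Redis round-trip.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the [ ticketId => Ticket ] cache in front of Redis.
class CachedTicketRegistry {
    record Ticket(String id, String principal) {}

    private final Map<String, Ticket> memoryCache = new ConcurrentHashMap<>();
    private final Map<String, Ticket> redis; // stand-in for the Redis backend
    int redisReads = 0;                      // counter, for illustration only

    CachedTicketRegistry(Map<String, Ticket> redis) {
        this.redis = redis;
    }

    Ticket getTicket(String id) {
        Ticket cached = memoryCache.get(id);
        if (cached != null) {
            return cached;                   // fast path: no network call
        }
        redisReads++;
        Ticket fromRedis = redis.get(id);    // slow path: Redis round-trip
        if (fromRedis != null) {
            memoryCache.put(id, fromRedis);
        }
        return fromRedis;
    }
}
```

With ~8-15 getTicket calls per ST, only the first call pays the Redis round-trip; the rest hit the in-memory map.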
But I wonder what happens when you cache tickets in memory with multiple Java nodes. Is the cache shared between nodes? If not, what happens in cases like these:

- get ST-1 on nodeA, logout on nodeB, get ST-2 on nodeA => still allowed, since the cached TGT on nodeA is not invalidated!?
- get ST-1 on nodeA, get ST-2 on nodeB, get ST-3 on nodeA => the "services" attribute of the cached TGT on nodeA will not contain ST-2!? => SLO will be half broken
- get an ST on nodeA, validate it on nodeB, validate it again on nodeA => the second validation is allowed!?

I was afraid of these issues; that's why I suggested a simpler memory cache [ ticketId => redisKey ]. But hopefully I missed something :-)

cu

On 14/11/2022 13:58, Jérôme LELEU wrote:
[...] I have launched my previous scenario again (10 000 logins).

CAS v6.5: average time: 2 ms
CAS v7.0.0 fix REDIS: average time: 0 ms

Things are now blazing fast with the new implementation, but I see you have added a memory cache, so this is expected on a single node. So I have created a 2-node scenario with 10 000 login?service + service ticket validations, each call (GET /login, POST /login, GET /serviceValidate) being performed on a different node than the previous call (round robin).

CAS v6.5: average time node 1: 1 ms / average time node 2: 1 ms
CAS v7.0.0 fix REDIS: average time node 1: 2 ms / average time node 2: 2 ms

While it performs better on CAS v6.5, it now performs very well on CAS v7 as well. [...]
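For context, the round-robin client described in that scenario can be sketched roughly as follows (class name and node URLs are made up; each call simply targets the node after the one that served the previous call):

```java
// Hypothetical sketch of a round-robin node selector for the 2-node benchmark:
// every request (GET /login, POST /login, GET /serviceValidate) goes to the
// node following the one used for the previous request.
class RoundRobinNodes {
    private final String[] nodes;
    private int next = 0;

    RoundRobinNodes(String... nodes) {
        this.nodes = nodes;
    }

    String nextNode() {
        String node = nodes[next];
        next = (next + 1) % nodes.length; // alternate between the nodes
        return node;
    }
}
```

This alternation is what defeats a purely node-local cache: a ticket created on node 1 is always read back on node 2, so every lookup still has to reach Redis.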
-- You received this message because you are subscribed to the Google Groups "CAS Developer" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion on the web visit https://groups.google.com/a/apereo.org/d/msgid/cas-dev/e322058b-5a7e-26dc-fed1-71359d017eb9%40univ-paris1.fr.
