No problem - glad to help.

Shiro works this way (delegating to a distributed cache for
clustering operations) very much on purpose:  the distributed caching
mechanisms available today (Terracotta, Memcache, Apache ZooKeeper,
MongoDB, Cassandra, Hadoop, etc.) are specifically designed for
network-based data segmentation and access.  It's best to let Shiro
be a security framework and delegate cluster state operations to
something designed specifically for that job.
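
For a concrete feel of what that delegation looks like in code - this
is just a sketch, assuming the shiro-ehcache module is on the
classpath, an ehcache.xml configured for (optionally
Terracotta-clustered) caches, and a throwaway SimpleAccountRealm
standing in for whatever Realm you actually use - the only
Shiro-specific wiring is a single setCacheManager call:

    import org.apache.shiro.SecurityUtils;
    import org.apache.shiro.cache.ehcache.EhCacheManager;
    import org.apache.shiro.mgt.DefaultSecurityManager;
    import org.apache.shiro.realm.SimpleAccountRealm;

    public class ClusteredCacheBootstrap {
        public static void main(String[] args) {
            // Ehcache (optionally clustered by Terracotta) does the heavy
            // lifting; Shiro only reads/writes entries through the
            // CacheManager SPI.
            EhCacheManager cacheManager = new EhCacheManager();
            // hypothetical cluster-aware ehcache.xml on the classpath
            cacheManager.setCacheManagerConfigFile("classpath:ehcache.xml");

            // placeholder realm/user - use your real Realm here
            SimpleAccountRealm realm = new SimpleAccountRealm();
            realm.addAccount("testuser", "changeit");

            DefaultSecurityManager securityManager =
                    new DefaultSecurityManager(realm);
            securityManager.setCacheManager(cacheManager);
            SecurityUtils.setSecurityManager(securityManager);
        }
    }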

Cheers,

Les

On Thu, Jan 26, 2012 at 11:05 AM, Navid Mohaghegh <[email protected]> wrote:
> Les,  thank you very much for the comprehensive explanation here.
>
> We are going to use LVS (Linux Virtual Server) as our LB, with tools
> like ipvsadm, keepalived and heartbeat to ensure HA, and in
> conjunction we plan to use Memcached.  I don't have any plans for
> persistent disk storage yet, but from what you mentioned I see that
> the clustering question should not really be targeted at Shiro, and
> should instead be addressed by the distributed caching products.
>
> Thank you,
> Navid
>
>
> On 2012-01-26, at 1:51 PM, Les Hazlewood wrote:
>
>> Most production Shiro environments will delegate state management to a
>> quality CacheManager implementation.  For authentication and
>> authorization caching, this is straightforward - those cache entries
>> can reside in the memory of the local machine only, or with a
>> distributed caching mechanism (Memcache, Ehcache+Terracotta, etc), in
>> the memory of an accessible cache node.
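>>
>> As a sketch of that realm-level wiring (the realm type and cache name
>> below are just placeholders - any AuthorizingRealm and any
>> CacheManager implementation will do), enabling a named authorization
>> cache is a couple of setter calls, and the CacheManager you plug in
>> decides whether those entries stay local or live on a cache node:
>>
>>     import org.apache.shiro.cache.MemoryConstrainedCacheManager;
>>     import org.apache.shiro.mgt.DefaultSecurityManager;
>>     import org.apache.shiro.realm.AuthorizingRealm;
>>
>>     public class CachedRealmConfig {
>>
>>         public static DefaultSecurityManager build(AuthorizingRealm realm) {
>>             // cache authorization lookups under an app-specific cache name
>>             realm.setCachingEnabled(true);
>>             realm.setAuthorizationCacheName("myApp-authorizationCache");
>>
>>             DefaultSecurityManager securityManager =
>>                     new DefaultSecurityManager(realm);
>>             // local-JVM-memory caching; swap in EhCacheManager(+Terracotta),
>>             // a Memcache-backed CacheManager, etc. to distribute the entries
>>             securityManager.setCacheManager(new MemoryConstrainedCacheManager());
>>             return securityManager;
>>         }
>>     }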
>>
>> For actual Subject sessions (e.g. subject.getSession()) this same
>> cache mechanism is often used, so long as sessions are saved to a
>> persistent store if memory becomes constrained.  This persistent store
>> can be a disk or disk array, an RDBMS, a NoSQL data store, or anything
>> similar.  The SessionDAO implementation (e.g. CachingSessionDAO) can
>> be used for an if-not-in-memory-then-query-the-store approach, or you
>> can use the EnterpriseCacheSessionDAO which assumes that the cache
>> itself knows how to overflow to a persistent store (e.g.
>> Terracotta+Ehcache, Coherence, GigaSpaces, etc.) and you don't need to
>> tell Shiro what that persistent store is.
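>>
>> Here is a sketch of that second option (the class names are real
>> Shiro/Ehcache types, but the realm and the underlying
>> ehcache.xml/Terracotta configuration are up to you); if you prefer the
>> query-the-store approach instead, subclass CachingSessionDAO and
>> implement doCreate/doReadSession/doUpdate/doDelete against your RDBMS
>> or NoSQL store:
>>
>>     import org.apache.shiro.cache.ehcache.EhCacheManager;
>>     import org.apache.shiro.mgt.DefaultSecurityManager;
>>     import org.apache.shiro.realm.Realm;
>>     import org.apache.shiro.session.mgt.DefaultSessionManager;
>>     import org.apache.shiro.session.mgt.eis.EnterpriseCacheSessionDAO;
>>
>>     public class ClusteredSessionConfig {
>>
>>         public static DefaultSecurityManager build(Realm realm) {
>>             // the cache product (Ehcache+Terracotta, Coherence, etc.) is
>>             // expected to overflow to its own persistent store; Shiro is
>>             // never told what that store is
>>             DefaultSessionManager sessionManager = new DefaultSessionManager();
>>             sessionManager.setSessionDAO(new EnterpriseCacheSessionDAO());
>>
>>             DefaultSecurityManager securityManager =
>>                     new DefaultSecurityManager(realm);
>>             securityManager.setSessionManager(sessionManager);
>>             // set last so the CacheManager also propagates to the session DAO
>>             securityManager.setCacheManager(new EhCacheManager());
>>             return securityManager;
>>         }
>>     }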
>>
>> So, this question is really about cache management - how much cache
>> memory will you enable for your application cluster?  Is your cache
>> distributed across multiple machine nodes?
>>
>> The cache does not have to be distributed if you use load-balancing
>> with sticky routing.  That is, if each application host has a local
>> cache running, and all requests from a particular client can be routed
>> to a particular machine, you'll see good performance benefits.  The
>> tricky part is ensuring that once a cache instance starts to fill up
>> on a given host (e.g. ~80% high watermark) you direct new client
>> requests to another cluster node.
>>
>> This implies coordination between the load balancer(s) and each
>> application node so the LBs know when to direct new clients to a new
>> node.  A distributed cache mechanism, however, allows you to use
>> 'dumb' load balancers, and any local-vs-remote data segmentation can
>> be managed by the caching product itself.
>>
>> HTH,
>>
>> --
>> Les Hazlewood
>> CTO, Katasoft | http://www.katasoft.com | 888.391.5282
>> twitter: @lhazlewood | http://twitter.com/lhazlewood
>> katasoft blog: http://www.katasoft.com/blogs/lhazlewood
>> personal blog: http://leshazlewood.com
>>
>> On Thu, Jan 26, 2012 at 6:20 AM, vdzhuvinov <[email protected]> wrote:
>>>
>>> Navid Mohaghegh wrote
>>>>
>>>> Thank you Vladimir.  I will try to be as specific as I can: imagine a
>>>> cluster of 4 servers, each with quad AMD Opteron 6272 processors (i.e. a
>>>> total of 64 cores per server running at 2.1 GHz, sharing 16 MB of L3
>>>> cache). Each server has 64 GB of ECC registered DDR3 memory clocked at
>>>> 1333 MHz. The servers will be connected using 40 Gb/s InfiniBand links. We
>>>> can add SSD/HDD for caching on disk or persisting sessions. I want to know
>>>> how many concurrent sessions in total can be tolerated here, and how fast
>>>> we can expect to get authentication done (e.g. an average of 15-20 ms for
>>>> the authentication request plus persisting the session ...?). Thank you.
>>>>
>>>
>>> I cannot give you an answer, but here is how you can get a feeling of what
>>> to expect.
>>>
>>> In terms of memory, if a single session object is 1 kByte on average, you
>>> would be able to store 1 million sessions in 1 GB. So memory space is not
>>> likely to be an issue for sessions that store just a few strings of info.
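>>>
>>> To put a number on it: 1,000,000 sessions x 1 kByte/session = ~1 GB, so
>>> the 64 GB of RAM per node you mention leaves room for tens of millions of
>>> sessions before overflowing to disk even becomes a consideration.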
>>>
>>> The other factor is processing load and this will depend on the number of
>>> HTTP requests you get per second. You may look at benchmarks to get a
>>> feeling for that.
>>>
>>> If you're planning to use Terracotta here is one useful guide:
>>>
>>> http://docs.terracotta.org/confluence/display/docs35/Deployment+Guide#DeploymentGuide-MemorySizing
>>>
>>> My rule of thumb is not to worry too much about raw hardware performance,
>>> but to make sure that fault tolerance, disaster recovery and the ability
>>> to scale and stay fluid are well thought out in advance.
>>>
>>> Vladimir
>>>
>>> -----
>>> Vladimir Dzhuvinov
>>> --
>>> View this message in context: 
>>> http://shiro-user.582556.n2.nabble.com/Shiro-on-Cluster-tp7225939p7227093.html
>>> Sent from the Shiro User mailing list archive at Nabble.com.
