> Performance Targets for SessionPermissions:
> 
> 1. Average response time (in test): 7ms
> 2. Average response time (in prod) w/out SSL: 400ms 
> 3. Average response time (in prod) w/ SSL:  ?
> 
> We also need throughput (# of requests / second):
> 
> 4. Average throughput (in test):   ?
> 5. Average throughput (in prod) w/out SSL: ?
> 6. Average throughput (in prod) w/ SSL: ?
> 
> And the targets (SLA’s):
> 
> 7. Average response time
> 8. Average throughput
> 
> Assumptions:
> Test is local instance of apacheds
> Prod is remote instance of apacheds with long latency times
> 
> Shawn

The response time is the same with and without SSL (for all practical
purposes), both in the local test and on the remote AWS instance. You can find
plenty of results online confirming that the SSL penalty is, and should be,
minimal.
The result is constant: there is no meaningful change per request whether I
make 10 or 100 requests in one test. Only the first request usually takes a
little longer. This is a single thread, which reflects the delay one user will
see.
I think that when AWS states that the performance should be 20% of one core,
it must be a recycled Nokia 3310 CPU they are referring to. Compared to the
result on the local i7-3770, the AWS performance is more like 2% of my
single-thread test. Even the ping time to this "thing" in AWS is at least 50%
higher than the expected and reported average for CPH-Frankfurt: we get around
26ms where 16ms is expected.
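
For reference, the measurement is nothing more than a single-threaded loop
like the sketch below. It is illustrative only: the timed call is a
placeholder for the actual lookup (in our case the sessionPermissions
request), not the real Fortress API.

    import java.util.function.Supplier;

    public class ResponseTimeProbe {

        /**
         * Times a call repeatedly on one thread and prints the average,
         * skipping the first call, which usually carries warm-up cost.
         */
        static void measure(String label, int iterations, Supplier<Object> call) {
            call.get(); // warm-up: the first request is typically slower
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                call.get();
            }
            long avgMs = (System.nanoTime() - start) / iterations / 1_000_000;
            System.out.println(label + ": avg " + avgMs + " ms over "
                    + iterations + " requests");
        }

        public static void main(String[] args) {
            // Placeholder workload; replace with the call you want to time.
            measure("dummy call", 100, () -> Integer.valueOf(42));
        }
    }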

When I first asked this question here I did not know what hardware our prod
ApacheDS was deployed to - hence my surprise at the poor result. I conclude
that it is not a real problem, because if we needed more performance the
operations department could simply upgrade (the t2.micro is close to free).
Hosting and running things in AWS is a decision they have made, and it has
benefits in scaling, backup, etc.

I'm not going to say that it is critical that we implement caching now (we
have fixes that suit our needs now), but I find it important for Fortress as a
product.
1) It gives the OPS team freedom of choice and lets them largely disregard CPU
performance and network latency
1a) Network latency is an issue for international applications (think
Australia - Germany), but can of course be reduced by setting up
local/national ApacheDS instances in slave mode - which means more new stuff
to learn and more config to maintain
2) It saves money if there is a price on CPU, network speed or network traffic
3) It shields an application against network issues and still allows for
almost immediate updates (if push events are implemented)
4) It is mobile-friendly by reducing network traffic (I know that we do not
yet have mobile APIs; I'm just envisioning a future development, and the cache
here is completely different and only concerns one user)
4a) A similar situation exists for SPAs
5) On any server it reduces network traffic and does not contribute to network
congestion
6) It is just plain simpler, with a less steep learning curve and fewer
customisations and workarounds

I'm not sure that the mobile and SPA argument is real - I would never trust a
client, and would always implement server-side validation.

One of my sales arguments for Fortress was the performance measurements you
have published. They are impressive - multi-threaded and all. I also read that
caching was built in. At the time I was also researching Shiro, which also has
caching, and they estimate around 3 API calls for each authenticated user
session (https://stormpath.com/pricing/). I imagined that Fortress would be
similar. I did not have enough experience to see through this information and
gauge real-life performance.
But a typical request to a web site runs in a single thread. That is where we
hit the first huge network latency penalty plus lookup time - at a time when
we were running locally on "good" hardware and ping was less than 3ms. Our
first approach was to check each permission when needed. But when you render a
list of 200 items where ten fields on each are guarded by view permissions,
then even 5ms per request is a looong time (200 x 10 x 5ms is 10 seconds) -
and that is a small list. So we went with the sessionPermissions list, which
is the first caching we made. Everyone will at least have to build this
mechanism: request the session permissions once, keep the list around, and
look up permissions in it locally. We thought that was fine: 5ms extra per
page, cool! No need to keep the list longer than the servlet request lasts,
and instant response to changes from Commander.
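
Sketched out, that per-request approach is roughly the following. The
Permission type here is a simplified stand-in, not the actual Fortress class,
and the list is whatever the sessionPermissions call returned:

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    /**
     * Per-request approach: fetch the session permissions once, keep them for
     * the duration of the servlet request, and answer all checks locally.
     */
    public class RequestScopedPermissions {

        /** Simplified stand-in for a permission (object name + operation). */
        record Permission(String objName, String opName) { }

        private final Set<String> keys;

        RequestScopedPermissions(List<Permission> sessionPermissions) {
            // One directory round trip per request; everything below is local.
            this.keys = sessionPermissions.stream()
                    .map(p -> p.objName() + ":" + p.opName())
                    .collect(Collectors.toSet());
        }

        /** Local check - no network call, so guarding 2000 fields costs next to nothing. */
        boolean isAuthorized(String objName, String opName) {
            return keys.contains(objName + ":" + opName);
        }
    }
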
Then came the real home of our ApacheDS in AWS, and each page request and each
small ajax request took 400ms extra. So now we have implemented a cache of
sessionPermissions for each user. It is simply time based: it gets populated
when the user logs in, and gets purged after a certain delay, which results in
a new request and an update of the data in the cache.
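
In sketch form, the time-based cache is just a map from user to permissions
with an expiry, roughly as below. The loader is a placeholder for the actual
sessionPermissions request, and the names and TTL handling are illustrative,
not our exact code:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    /**
     * Simplified time-based cache of session permissions per user. An entry is
     * reused until it expires; the next lookup after expiry triggers a fresh
     * request via the supplied loader.
     */
    public class SessionPermissionCache<P> {

        private record Entry<T>(List<T> permissions, Instant loadedAt) { }

        private final ConcurrentHashMap<String, Entry<P>> cache = new ConcurrentHashMap<>();
        private final Function<String, List<P>> loader; // e.g. userId -> sessionPermissions(...)
        private final Duration ttl;

        public SessionPermissionCache(Function<String, List<P>> loader, Duration ttl) {
            this.loader = loader;
            this.ttl = ttl;
        }

        public List<P> permissionsFor(String userId) {
            Entry<P> entry = cache.compute(userId, (id, current) ->
                    current != null && current.loadedAt().plus(ttl).isAfter(Instant.now())
                            ? current                                        // still fresh, no network call
                            : new Entry<P>(loader.apply(id), Instant.now())); // expired or missing: reload
            return entry.permissions();
        }

        /** Call on logout (or from a push event) to force a reload on next access. */
        public void evict(String userId) {
            cache.remove(userId);
        }
    }

Populate it at login and read from it on every request; changes made in
Commander then show up within one TTL at worst, or near-immediately if a push
event evicts the entry.
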
Compared to what the web servers are already doing and caching, I am quite
convinced that a permission cache is negligible in terms of memory usage - a
few hundred permission entries per user is on the order of tens of kilobytes.
When it comes to performance here, it's usually about network and I/O.
That's not to say that you can't find applications where memory is more
crucial than performance. But will they have a scale that requires RBAC?

I apologise for a long post and any sarcasm - it's only because I am involved
and dedicated ... aaand slightly tired and pressured, with hard deadlines
coming up while new projects start up, organisational changes lurk, meetings
must be attended, vacation gets coordinated and support must be handled - in
other words, just another day in the life of a software professional.
We fixed it, and it wasn't that hard. To sum up my point of view and my
current experience with RBAC: there is real value in smooth out-of-the-box
performance when using the API as it is intended to be used, regardless of
network configuration and cost-conscious OPS teams.
Should I prioritize improvements to Fortress, then Commander comes before
caching, and so does a reduction in transitive dependencies when pulling
Fortress Core into an existing code base. There are a lot of things I would
like from Shiro (caching, session handling, annotations), but I must admit
that I don't have any real experience with Shiro yet. A good start could be to
implement an integration with Fortress Core and/or Enmasse - that's probably a
completely different story, but also a viable alternative. Had we known what
we know now, we would have done so. As it stands we have had to implement
session handling and caching, and would have liked annotation support.

// Jan
