Andrew,
it's pretty much how you describe it. I think all services had the
evaluationOrder 100, and there might have been one or two services with
possible multiple matches that required a special ordering, like an /admin
or /download area embedded in a certain domain/app which needed a different
setup than the rest of the app/domain. But this was a rare occurrence.
The average number of regexps evaluated would be ~150 for a total of ~300
services... definitely O(N)! We couldn't see any real performance
problems yet, but I was definitely concerned about the number and the
growth of services we had.
I was contemplating ordering them by overall usage at some point to
drive down the average. But since performance was not an issue yet, I
simply accepted the performance hit to keep everything as simple as
possible.
With 300 "overlapping" and ordered services I would get confused anyway ;)
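As a rough sketch of that linear scan (invented names, not actual CAS registry code), assuming each registration is just an evaluationOrder plus a regexp:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

// Illustration only -- names are hypothetical, this is not CAS internals.
public class LinearRegistryScan {

    // One registration per application/institution, matched by regexp.
    record Registration(int evaluationOrder, Pattern servicePattern) {}

    // Linear scan in evaluationOrder: first match wins, so with N services
    // roughly N/2 regexps get evaluated on average -- the O(N) behaviour above.
    static Registration findFirstMatch(List<Registration> sorted, String serviceId) {
        for (Registration r : sorted) {
            if (r.servicePattern().matcher(serviceId).matches()) {
                return r;
            }
        }
        return null; // unregistered service
    }

    public static void main(String[] args) {
        List<Registration> regs = new ArrayList<>();
        // The common case: everything at the default order 100.
        regs.add(new Registration(100, Pattern.compile("https://.*\\.example\\.edu/.*")));
        // The rare special case: an /admin area that must win over the wildcard.
        regs.add(new Registration(1, Pattern.compile("https://app\\.example\\.edu/admin/.*")));
        regs.sort(Comparator.comparingInt(Registration::evaluationOrder));

        Registration hit = findFirstMatch(regs, "https://app.example.edu/admin/stats");
        System.out.println(hit.evaluationOrder()); // the /admin entry, not the wildcard
    }
}
```

Ordering by usage would just mean sorting the list by hit count instead of evaluationOrder, so the popular services sit at the front of the scan.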
Regards,
Joachim
On 30.12.2011 16:29, Andrew Petro wrote:
Joachim,
Interesting.
So, for your usage, am I correct in understanding that for any given service
identifier presented at runtime, at most one registration could match? That
is, evaluationOrder wasn't breaking ties between otherwise overlapping
registrations?
And, for your usage, the registry needed to compare a service identifier
presented at runtime against O(N) registrations, that is, on average 300
registrations, to hit the registration that happened to match?
Andrew
On Dec 23, 2011, at 1:10 PM, Joachim Fritschi wrote:
I had around 300 services. The policy was one service entry per application and
institution.
This had various reasons:
- very strong data protection laws in Germany around handing out personal data
(per-app attribute settings)
- operational stability: each app is applied for, so we can register an
admin to help in abuse/misconfiguration cases, generate stats, etc.
- a very diverse landscape: pretty much every institution had its own subdomain
- a very strong wildcard policy because of the "open" network and DNS setup
This might however not be the typical use case.
On 23.12.2011 18:39, Scott Battaglia wrote:
I think you bring up a question we need answered though: on average, how
many services do people have in the tool?
On Fri, Dec 23, 2011 at 12:31 PM, Joachim Fritschi <jfrits...@freenet.de> wrote:
I agree with Scott on the point. Having some really complicated
automatic algorithm won't help much because you would still need a
manual override. And people need to understand the automagic ;)
The simple ordering in the GUI is fine, but once you hit 100s of
services I would rather have some kind of hierarchical assessment
possibility. I think it could be helpful to have this option
because it would allow easy ordering of services, clean up the
admin GUI by allowing "grouping", and also increase performance
during evaluation.
This would allow separation of subdomains, domains, directories, etc.
One could even think about an additional client level (different
user pool, design for different clients). Every top-level service would
get checked until there is a match. A subservice would only get
evaluated once its top-level service has matched. The deepest service
"wins".
This can be coupled with permission inheritance and many more
features ;)
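A minimal sketch of that deepest-match evaluation (hypothetical names, not an existing CAS feature), where children are only visited when their parent matched:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the hierarchical idea -- invented names, not CAS code.
public class HierarchicalRegistry {

    // A service node: children are only evaluated after the parent matched.
    static class Node {
        final String prefix;  // a domain, subdomain, directory, ...
        final List<Node> children = new ArrayList<>();
        Node(String prefix) { this.prefix = prefix; }
        Node add(String childPrefix) {
            Node child = new Node(childPrefix);
            children.add(child);
            return child;
        }
    }

    // Descend only into matching branches; the deepest matching service "wins".
    static Node deepestMatch(Node node, String serviceId) {
        if (!serviceId.startsWith(node.prefix)) {
            return null; // no match at this level: the whole subtree is skipped
        }
        for (Node child : node.children) {
            Node deeper = deepestMatch(child, serviceId);
            if (deeper != null) {
                return deeper;
            }
        }
        return node; // matched here, but nothing deeper did
    }

    public static void main(String[] args) {
        Node app = new Node("https://app.example.edu");
        Node admin = app.add("https://app.example.edu/admin");

        System.out.println(deepestMatch(app, "https://app.example.edu/admin/users") == admin);
        System.out.println(deepestMatch(app, "https://app.example.edu/home") == app);
    }
}
```

The performance win is that non-matching subtrees are never descended into, so most registrations are skipped wholesale instead of each being tested in turn.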
Just a crazy thought...
Regards,
Joachim
On 23.12.2011 05:14, Scott Battaglia wrote:
Again, why don't we just make it so you can drag and drop the order from
the management screen UI?
The last thing you want to do is give people the option of ordering and
then change the order that they gave you.
--
You are currently subscribed to cas-dev@lists.jasig.org as: ape...@unicon.net
To unsubscribe, change settings or access archives, see
http://www.ja-sig.org/wiki/display/JSG/cas-dev