(Sorry to come late into this thread..)
On Thu, Mar 24, 2016 at 02:49:39PM +0100, Jan Pazdziora wrote:
> On Thu, Mar 24, 2016 at 02:30:06PM +0100, Petr Spacek wrote:
> > I really do not like 'excludes'... Was an approach with longest prefix match
> > considered as an option? I do not see it in the design page.
> > E.g. imagine we have rules:
> > / -> allow anyone
> > /users -> allow all authenticated users
> > /users/edit -> allow admins
> > If the matching engine always selects the rule with the longest matching
> > prefix and evaluates only that rule, it would nicely express who is
> > allowed to access what and would not require deny rules (or even rule
> > merging).
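To make the idea concrete, here is a minimal Python sketch of Petr's scheme
(the names RULES and matching_rule are illustrative, not from the design
page): the engine picks the single rule whose path prefix is the longest
match for the request URI and evaluates only that rule, so no deny rules or
rule merging are needed.

```python
# Hypothetical longest-prefix-match rule selection.
RULES = {
    "/": "anyone",                # allow anyone
    "/users": "authenticated",    # allow all authenticated users
    "/users/edit": "admin",       # allow admins only
}

def matching_rule(uri):
    """Return the rule prefix with the longest match for the URI."""
    best = None
    for prefix in RULES:
        # A prefix matches if the URI equals it or continues past a "/".
        if uri == prefix or uri.startswith(prefix.rstrip("/") + "/"):
            if best is None or len(prefix) > len(best):
                best = prefix
    return best

print(matching_rule("/users/edit/42"))  # -> /users/edit
print(matching_rule("/users/profile"))  # -> /users
print(matching_rule("/index.html"))     # -> /
```

With this, "/users/admin"-style carve-outs fall out naturally: the longer
prefix simply shadows the shorter one for URIs under it.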
This is more or less what was proposed in another discussion, so I tend to
agree with Petr.
> The "Prefix" Evaluation item talks about it.
> The perceived issue is, if for some reason you miss the longest
What would be the reasons? During an IRC conversation Jan mentioned
operational reasons (such as an exceeded size limit or a timeout during the
search), which is something that Jan's automatic excludes would solve.
I don't agree with what the design page says about the inability to make
stricter matches work:
    The problem is, in web applications, the longer URI usually means
    stricter access rules: it would be hard to make rules such that
    "/users" is accessible by all users and "/users/admin" is only
    accessible by admin (we can't exclude a subset of some matching [...]
I think the longest-match-wins would solve this; what am I missing?
> record when evaluating, you will use the previous shorter one and
> allow more access than intended. So from certain POV it's similar to
> DENY rules -- if you miss the DENY rule for some reason, you will go
> with the allow rule.
> If the excludes are kept up-to-date automatically for each URI
> record, matching the next longer prefix, whatever record you find will
> include in some attribute information about limits of its validity.
> That might address the concern of security implication of exclude /
> deny / longest record not found.
> I don't like manual excludes either.
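For the record, my reading of Jan's automatic excludes is something like the
following Python sketch (RECORDS and evaluate are hypothetical names, and
the record layout is my assumption, not the actual schema): each record
carries the longer prefixes carved out of it, so a lookup that returns a
record whose excludes cover the request URI is known to be incomplete and
can fail closed instead of over-allowing.

```python
# Hypothetical automatically-maintained excludes: each rule record lists
# the longer prefixes that override it.
RECORDS = {
    "/users": {"who": "authenticated", "excludes": ["/users/edit"]},
    "/users/edit": {"who": "admin", "excludes": []},
}

def evaluate(uri, found_prefix):
    """Evaluate a record; fail closed if the URI falls inside an exclude."""
    record = RECORDS[found_prefix]
    for ex in record["excludes"]:
        if uri == ex or uri.startswith(ex + "/"):
            # The lookup must have missed a longer record; refuse access
            # rather than apply the weaker shorter-prefix rule.
            raise LookupError("lookup missed a longer record; failing closed")
    return record["who"]

print(evaluate("/users/profile", "/users"))  # -> authenticated
# evaluate("/users/edit/42", "/users") raises instead of over-allowing.
```

That is what would address the "missed the longest record" concern, at the
cost of keeping the exclude attributes in sync.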
My preference would be either the longest match or, alternatively, the
automatic excludes. My only complaint about the automatic excludes is
that they add additional complexity, so the question is whether that
additional complexity is worth spending time on. If we could make the
excludes work in a reliable and simple way, then sure.
But to be honest, I don't like regular expressions either, they are too
fragile and a nightmare to set up and maintain IMO.
Manage your subscription for the Freeipa-devel mailing list:
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code