Dear Frank,

Thanks for positively reviewing my papers. I agree, we share a common vision of policies, with some reconcilable differences. Moreover, I'm open to collaborating on the I2NSF capability layer security policy if you think some of the concepts in my papers can be useful.

More answers in-line...

On 20/06/2016 05:36, Xialiang (Frank) wrote:
Hi Aldo,
Thanks for sharing your research with us. I have reviewed your 2 papers and think 
they are useful for I2NSF work. My understanding and response are as follows:
1. I really like the definition of conflict resolution strategy of security 
policy in your papers, it should be part of the I2NSF capability layer security 
policy. We can work together on it;

Sure.

2. I think we share a very similar idea of what a security policy is: a set of 
security rules modeled by "event-condition-action". By directly 
incorporating the definition of default action and conflict resolution strategy, we can 
improve the current information/data model;

Thanks.

3. About the definition of "event", I think it's necessary. It's not simply another kind 
of "condition". It represents something that happens, which may or may not be related to 
the flow/traffic that the NSFs should be enabled to control according to the security policy;

I agree, an event is something that happens and triggers a process that must take a decision and, in the end, determine one or more actions to perform/enforce.

In my view of the problem, also reflected in the SECURED model (which I don't want to propose; I just want to explain my way of abstracting the IETF PEP/PDP model and separating their functions), the event happens at the PEP.

When asking the PDP to take a decision, the PEP also passes stateful information that may also include data about the event that triggered the evaluation (which is just another kind of stateful information, like information about the successfully established connections for stateful firewalls).

If an NSF needs to react differently depending on the event that triggered the evaluation, the policy must contain rules with conditions that target the event information, and the policy model must contain conditions on events. In this sense, these conditions are normal conditions (set-based, regex, or custom match, depending on the actual function).

This model also covers the case of implicit events (like receiving a packet or reassembling a PDU) that do not need to be treated separately nor mentioned explicitly.

However, any ECA model should not conflict with that idea.
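To make the point concrete, here is a minimal sketch (hypothetical rule structure and field names, not SECURED or I2NSF code) of treating the triggering event as just another condition field in the context passed to the PDP:

```python
# Minimal sketch (hypothetical rule structure, not SECURED or I2NSF code):
# the event that triggered the evaluation travels to the PDP as one more
# piece of stateful context, and rules match on it like any other field.

def matches(rule, context):
    """A condition is a dict mapping a field name to a set of allowed values."""
    return all(context.get(field) in allowed
               for field, allowed in rule["condition"].items())

def decide(policy, context):
    """First-matching-rule evaluation; returns the action to enforce."""
    for rule in policy:
        if matches(rule, context):
            return rule["action"]
    return "default-deny"   # the policy's default action

policy = [
    # A rule that reacts to a specific event: a plain set-based
    # condition on the 'event' field of the context.
    {"condition": {"event": {"session-start"}, "dst_port": {80, 443}},
     "action": "inspect"},
    {"condition": {"dst_port": {80, 443}}, "action": "allow"},
]

print(decide(policy, {"event": "session-start", "dst_port": 443}))  # inspect
print(decide(policy, {"event": "packet-arrival", "dst_port": 80}))  # allow
```

An implicit event (packet arrival) simply needs no matching 'event' condition, which is why it requires no special treatment.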


4. For the other security capabilities, such as URL filtering, anti-virus, file 
filtering, they have the following features:
        1) they perform not only content checking, but also various actions on 
the objects (e.g., URLs, files, programs);

I agree with you. Indeed, in the model we use in SECURED, objects are just another kind of data on which we state conditions that refer to specific parts of the content. We have to find a more elegant way to represent this, though.

If you mean that in some cases, NSFs can also perform actions, these have been modelled as 'Rewriting Actions' (NATting, URL rewriting, dropping attachments from emails, etc.).

    2) their specific implementations are different for every vendor and 
should not be known to or programmed by the third party;

Agreed. 100%.

    3) due to the complexity and differentiation of the other security 
capabilities for every vendor, they are considered and treated as an atomic 
function in our information model. The third party does not need to know their 
inner logic.
But for more fine-grained control of them, we can specify some general input/output parameters to customize their matches and actions to a certain level. I think that's what you called "custom match".

Understood, atomic representation of vendor-specific features greatly simplifies management and reduces the burden on third parties. However, this atomic approach poses limits on the high-level features and reasoning capabilities that can be added to the infrastructure. Adding or ignoring support for fine-grained options depends on the I2NSF objectives.

In SECURED, we had to deal with policy refinement. A refinement process cannot work without (at least) a black-box modelling of the vendor-specific features; otherwise, it does not understand how to use them. In some cases, even the black-box modelling of vendor-specific features is insufficient, although we did not cope with these cases (mainly due to limited time and resources; SECURED will finish in a few months).

And yes, custom match conditions provide I/O parameters for a more fine-grained representation of conditions for refinement purposes.

However, "custom match" only covers conditions, for the actions we have the rewriting actions that can also be customized. But we didn't dig into them very much as we didn't have time and resources to also cover these cases. However, they are interesting for future research.

That is, the SECURED model still needs to be extended to cover these cases based on real use cases. But it is better (in my opinion) to do it under the bigger and more comprehensive umbrella of the I2NSF WG.

5. I also think Sue's data model captures our basic idea of what it should be; 
let's start with it and keep improving it in parallel with the information model.


I also approve this approach. Sue's data model is already very detailed and already contains a lot of information about several protocols we were too lazy to try to collect. And in general, it does not conflict with the extension of the information model.

Regards,
Aldo



-----Original Message-----
From: I2nsf [mailto:[email protected]] On Behalf Of Aldo Basile
Sent: June 19, 2016 17:22
To: Susan Hares; 'DIEGO LOPEZ GARCIA'
Cc: [email protected]; [email protected]; Linda Dunbar
Subject: Re: [I2nsf] Help on turning I2NSF Information Models to Data Models

Dear Susan,

further answers to your answers...
I listed the condition match algorithms we support, tried to clarify the custom 
match, pointed out the risk of non-interoperability if custom match algorithms 
are too opaque, and added some more hints on the resolution strategies.
Finally, I presented my idea of capability templates (and an algebra) to reduce 
the backhaul needed to describe NSFs.

And sorry for the delay in answering...


On 16/06/2016 04:06, Susan Hares wrote:
Aldo:

I forgot to give a summary of my email.  I agree things are missing from
hares-pkt-eca-policy.   One question is whether we have a simple filter yang
model and a security filter model or how these are linked.


------------
Aldo:
I am not sure I understood the question. What we have in SECURED is a main 
model (which should match your security filter model), which defines the 
concepts (conditions, actions, resolution strategies, matching conditions, etc.), 
and several sub-models that define the concepts in the main model (which should 
match your simple YANG filter model). Intuitively, sub-models mainly perform 
sub-classing and add relations among these subclasses and other main-model 
classes.
Sub-models can be nested (to form a tree).
------------


I could not access your papers. Would you send them (to me directly)?
I think we have a lot of good work we can do together.

Sue Hares

-----Original Message-----
From: Susan Hares [mailto:[email protected]]
Sent: Wednesday, June 15, 2016 10:02 PM
To: 'Aldo Basile'; 'DIEGO LOPEZ GARCIA'
Cc: '[email protected]'; '[email protected]'; 'Linda Dunbar'
Subject: RE: [I2nsf] Help on turning I2NSF Information Models to Data
Models

Aldo:

I'm going to answer in summary, and then with "sue:" in the text
below.  On, hares-i2rs-pkt-eca-policy - I agree it is missing

-----Original Message-----
From: I2nsf [mailto:[email protected]] On Behalf Of Aldo Basile
Sent: Friday, June 10, 2016 6:46 AM
To: Susan Hares; 'DIEGO LOPEZ GARCIA'
Cc: [email protected]; [email protected]; 'Linda Dunbar'
Subject: Re: [I2nsf] Help on turning I2NSF Information Models to Data
Models

Dear Susan,

more comments in-line...



On 09/06/2016 21:30, Susan Hares wrote:
Aldo:

The minor changes to getting IPTable deployments are why I started
this thread.

On 1) I turned the packet matching into an ordered list where a
match causes the processing to stop.
Resolution{Rx,Ry,Rt,...}.   You can see some of the ideas in my I2RS
draft
(draft-hares-i2rs-pkt-eca-policy), but it needed changes for I2RS and for
the policy.   The draft-hares-i2rs-fb-rib-data-model shows how you can
use
this policy on specific interfaces, and have a default RIB for
things which do not match.  I am interested to discuss your ideas on
the more complex policy model.

I have checked your draft, and I see that you can easily model iptables
rules. I also noticed that you mapped an incredibly high number of
conditions on the most important (if not all) protocols at the first four ISO layers.
Excellent work that we may want to reuse.

Thank you.

Compared to our model, there is a missing piece of information: the condition
matching algorithm, i.e., the types of operations allowed when
specifying a condition on a given field. That is, when you express
conditions on the IP Protocol Type field you specify a list of values and
don't need to use ranges (= set-based); for source or destination ports you
may want to use inequalities or ranges (= range-based); and for IP
source and destination you may want to use prefixes (= prefix-based ~ range-based).
For URLs and other upper-protocol conditions on strings you may want
to use string match (like squid's dstdomain acl) or regex match (like
squid's url_regex acl).

Sue: I agree.  I am waiting for the WG LC to be completed to release a
version with more matching conditions.  However, I'd be very
interested in yours.


------------
Aldo:
The model we have developed in SECURED uses four types of matching conditions:
- set-based (with =, !=)
- range-based (<, =, !=, >, <=, >=)
- regex match (we didn't go into the details of the actual regex engine; we 
use Java Regex only because our prototype is written in Java)
- custom match (as explained in the previous email, it needs to be 
characterized with information on the examined fields and an optional
algorithm)
------------
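For illustration, a toy version of these four match types (illustrative names only, not the SECURED implementation; the custom match is reduced to its black-box I/O characterization):

```python
# Toy versions of the four match types (illustrative names only); the
# custom match is reduced to its black-box I/O characterization.
import re

def set_match(value, allowed, negate=False):
    """Set-based: = / != against an enumerated set of values."""
    return (value in allowed) != negate

def range_match(value, low, high):
    """Range-based: <, <=, =, >=, > expressed as interval membership."""
    return low <= value <= high

def regex_match(value, pattern):
    """Regex match (the actual regex engine is an implementation detail)."""
    return re.search(pattern, value) is not None

def custom_match(fields, verifier):
    """Custom match: only the examined fields and a verifier are declared;
    the verifier's internal algorithm stays opaque."""
    return verifier(fields)

assert set_match(6, {6, 17})                      # IP protocol TCP or UDP
assert range_match(443, 1, 1023)                  # well-known dst port
assert regex_match("http://example.com/x", r"example\.com")
assert custom_match({"payload": b"MZ"}, lambda f: f["payload"][:2] == b"MZ")
```

The last call shows the interoperability issue discussed below: a third party can see which fields the verifier examines, but not how it decides.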

Finally, to describe cases like antivirus or malware detection
conditions, we have introduced custom match conditions (i.e., we
describe the list of fields that the condition match verifier will take
into account when taking the decision, without explicitly referring to
the algorithm that maps these inputs into a Boolean answer).
'Other capabilities' in the Capabilities draft will likely be Custom Match
conditions.

Sue: You are correct, this is missing.  My struggle was where to draw
the line between simple filtering (aka IPTables or netfilter) and the
security custom match.
I was trying to create a base model for simple filtering and augment it.
Do you think I have a good base?

------------
Aldo:
Yes, it is definitely a good base.
I fear that in general, several products (like classifiers, parental control, 
IDS, IPS, etc.) will have their own custom match.
This is a risk from the interoperability point of view. To reason about the 
policy and to deduce how a device can be used to enforce a policy and what kind 
of traffic it must receive to work correctly, additional info must be 
associated with the custom match. The exact amount of information is not 
completely clear to me, which is why I am assuming a black-box I/O definition of 
the custom match.
------------



Since our objective in SECURED was also to model policy specification needs
at a higher level (and to perform refinement), we concentrated on the
definition of resolution strategies that may be useful for administrators,
regardless of whether they are actually implemented in some security
controls.

Sue: Excellent! I would love to see this policy.  I tried to get work
on the node filters (I2RS and I2NSF).  Is this the user to I2NSF Manager?

For instance, we considered the Deny Take Precedence (DTP) resolution
strategy, borrowed from the DB world, where if more than one rule matches,
the action to enforce will be Deny (drop in your model) if at least one of
the matching rules enforces Deny.
This is a way to implement conservative policies.

Sue: An interesting strategy for top-down (management).  Is that what
you were doing?

------------
Aldo:
We support policies that can use DTP as the resolution strategy, but we didn't 
officially select one resolution strategy; every policy can use its own. Then 
we have an algorithm that exports policies in FMR (First Matching Rule) form to 
connect with the real world, which in most cases uses FMR.
------------
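A minimal sketch of DTP next to FMR (illustrative code, not the SECURED export algorithm), given the actions of all rules that matched a PDU:

```python
# Sketch (not the SECURED algorithm) of Deny Take Precedence next to
# First Matching Rule, given the actions of all rules that matched a PDU.

def resolve_dtp(matching_actions):
    """DTP: Deny wins if at least one matching rule enforces Deny."""
    if not matching_actions:
        return "default"            # fall back to the policy default action
    return "deny" if "deny" in matching_actions else "allow"

def resolve_fmr(matching_actions):
    """FMR: the first matching rule (in priority order) decides."""
    return matching_actions[0] if matching_actions else "default"

actions = ["allow", "deny", "allow"]    # three rules matched the same PDU
print(resolve_dtp(actions))             # deny  (conservative)
print(resolve_fmr(actions))             # allow (order-dependent)
```

Exporting a DTP policy to an FMR device then amounts to reordering/rewriting rules so the FMR evaluation produces the same decisions.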



Moreover, there is also Most Specific Takes Precedence (MSTP), which
applies the rule that most precisely matches the PDU (e.g., if there
is a rule for a subnet and one specific to a given IP, the one for
the single IP prevails).

This MSTP is often the default in routing.

------------
Aldo:
Yes, MSTP is the first level of decision (prefix length), then administrative 
distance, then metrics (then, even if not specified officially, there should be 
some internal table order or priority to remove the last ambiguities).
This is another ad-hoc resolution strategy for routing.
------------
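That decision order can be sketched as a toy lookup (illustrative route entries and field names, not a real routing table):

```python
# Toy lookup for the decision order above: longest prefix first, then
# lower administrative distance, then lower metric (illustrative values).
import ipaddress

def best_route(routes, dst):
    """routes: list of (prefix, admin_distance, metric, next_hop) tuples."""
    addr = ipaddress.ip_address(dst)
    candidates = [r for r in routes
                  if addr in ipaddress.ip_network(r[0])]
    if not candidates:
        return None
    # Longer prefix wins; ties broken by lower AD, then lower metric.
    return min(candidates,
               key=lambda r: (-ipaddress.ip_network(r[0]).prefixlen,
                              r[1], r[2]))

routes = [
    ("10.0.0.0/8",  120, 10, "hopA"),
    ("10.1.0.0/16", 110, 20, "hopB"),
    ("10.1.2.0/24", 110,  5, "hopC"),
]
print(best_route(routes, "10.1.2.3")[3])  # hopC (most specific prefix)
print(best_route(routes, "10.9.9.9")[3])  # hopA
```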


Note that not all devices use prioritized rules (= lists of rules) to
make decisions. A simple example: the ipsec tools use the {Default,
Use, Require, Unique} attributes, which slightly change the ordered-list
behaviour. Therefore, the concept of resolution strategy needs to
be explicitly reported in a data model (in my opinion).

Yes - the resolution strategy does need to indicate whether it is
ordered or "all rules match", or something else

If you are interested in my policy model, a more detailed (and formal)
description of resolution strategies and the policy model
for our work on capabilities can be found here:

(for packet filters)
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6111329

(extension on application filters)
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6690252

Could you send me a copy of these documents via email (outside the
mailing list)? I cannot get access.


On 2) My packet filters can be expanded to include application
filters, or we can create another grouping of filters.

I agree it is a very easy step in your model.

Thank you for your input.


On 3) security controls in network security control - I agree that the
matching is complex or the actions are complex.  In many
cases, these actions are specific to a vendor.   However, let's start with
simple URL filtering as an example.  I assumed that the URL filtering
was pointing to a URL filtering list.  There could be public lists or vendor-
generated lists.   Therefore in high-level YANG, the first part of the I2NSF
security content might be:

Module: i2nsf-sec-content
    +--rw i2nsf-sec-content-cap* [order-id rule]
     +--rw order-id
     +--rw rule-name
     +--rw anti-virus
     |  +--rw public-anti-virus* [name]
     |  | ....
     |  +--rw vendor-anti-virus* [vendor]
     |  | ....
     +--rw ips
     |  +--rw snort-rules* [date]
     |  | ...
     |  +--rw vendor-ips-rule* [vendor]
     |  | ...
     +--rw url-filtering
     |  +--rw public-url* [name]
     |  | ...
     |  +--rw vendor-url* [vendor]
     |  | ....

I think this matches your work.   However, I think the result of the filter
process can be match or no match.  The question is: once you match, what do
you do?

Aldo's comments:
If I understand correctly, you implicitly assume an action when using URL
filters.
For me, URL filtering is just a condition of regex/string-based type:
if (the URL in the PDU under examination matches some_URL_strings) then
return TRUE.

Sue: Agreed

some_URL_strings can be either manually written by the policy editor or
obtained from some vendor or public source of URLs.

Then this condition on URLs can be mixed with other conditions to form
rules:

if (URLcondition) ^ (other conditions) then ACTION

You can see a simple example of what I'm describing in squid:

acl goodurl url_regex myurl1 myurl2 myurl3
acl goodip dst 1.1.1.1
http_access allow goodurl goodip

If the list {myurl1 myurl2 myurl3} is downloaded from a site, the matching
does not change; only the way to retrieve the data changes.
I follow the same approach (i.e., I model them as conditions) also for
snort rules, VendorIPS rules, etc.

This was my assumption in my code.   The reason I had two places was to
allow the user a mount point for the public rules (e.g. snort filters
from a public site), and vendor-developed ones (from IDS/IPS).  I am
missing your point, but I have not read your documents yet.


------------
Aldo:
I support the idea of having different mount points for public info to be used 
by rules and their sub-entities (e.g., conditions, as in this case).
But for me, this mount point is a management aspect that is needed to 
dynamically build the conditions that will actually be evaluated, while the 
condition, the operations that can be used, and the data that it will evaluate 
in the packets/PDUs remain the same.

Hope this better clarifies my point.
------------


The packet ECA actions can have many types:
  |        +--rw eca-actions
  |        |  +--rw eca-ingress-act*
  |        |  | ... (permit, deny, mirror)
  |        |  +--rw eca-fwd-actions*
  |        |  | ... (invoke, tunnel encap, fwd, drop)
  |        |  +--rw eca-egress-act*
  |        |  | ... (rate limit output, ...)
  |        |  +--rw eca-qos-actions*
  |        |  | ... (set bits in packet)
  |        |  +--rw eca-security-actions*
  |        |     uses i2nsf-sec-content-cap

I was starting to work on this complex set of actions when I felt
I needed to make sure I was on the right track.

I agree on these action types; I have considered all of them in my model.

Regarding the URL filter example before: for me, the eca-fwd-actions drop is
the one that is actually applied for blacklists of URLs, and fwd
(~allow/accept) is the one used for whitelists.

Hope this has clarified my idea on the complex actions that I saw in
the Capabilities draft.

Yes, it has clarified our work.


4) PDP and PEP is a valuable concept.   I think that ordered lists of
packet-based ECA policy are simply a means for the PDP to send the PEP a
specific set of ordered filters.   Similarly, the exchange of capabilities
is simply the I2NSF sending the NSF its capabilities as ordered rules.

+--rw i2nsf-policy-list
   +--rw policy-list-name string
   +--rw policy-rule [name]
       +--rw name  string
       +--rw net-sec-ctl-rules*
          uses ietf-pkt-eca-policy-cfg
       +--rw net-sec-content
          uses i2nsf-content-rules*
       +--rw net-attack-mitigate*
            uses i2nsf-mitigate-rules

Have I misunderstood previous conversations on this topic?

Yes, you interpreted my idea of events correctly. However, in our
model we see the PDP-PEP exchanges and capability interfaces in a
slightly different way.

The PEP sends the PDP the stateful info (i.e., the context) together
with the flow/packet/PDU data to evaluate (note that some additional
info can be obtained from a PIP, but this is a minor detail).
This context info also contains the events that triggered the
evaluation from the PEP.

This is a lot of backhaul for a simple security device (iptables); for
more sophisticated devices, it may be the best way.

------------
Aldo:
I agree; for iptables you don't need to specify this structure. However, in my 
opinion the policy model should be compliant with it.
------------



The PDP looks into the policy in the PolicyDB, checks the matching rules,
and decides the actions (some of these rules will contain conditions on the
events); then the PDP sends the PEP the action (+ provisional data).
[Maybe it is not 100% compliant with IETF, but this is just for lack of
room; it's already a long email.]

I understand the concept.  It is the original PDP/PEP work.
Is there a benefit in this for security work that I am missing?

------------
Aldo:
Not for iptables, but as said before, the underlying policy model should
be compliant with it.
------------


In a SECURED-equivalent approach, the I2NSF controller knows (from a
repository) or asks the NSF for its capabilities (through an API);
capabilities include the list of actions the NSF can enforce, the
conditions that allow determining the traffic on which to enforce
actions, plus all the other info needed to build a valid configuration
for that device.
Then the SECURED infrastructure uses this info to derive a
configuration for the NSF (i.e., a policy composed of rules for the
specific NSF), based on high-level policies (i.e., vendor-/NSF-agnostic
security requirements).

Sue: This is a reasonable approach; the only problem is how much "backhaul"
of data needs to be done.  In modern switch fabrics, you can get a lot of
logic done on site.  (Age-old trade-off: tunnel to process or process on the
edge of the network.)

------------
Aldo:
I agree, a lot of backhaul, but it should be done only once. Then I see
the need for ad hoc templates (written by us / other I2NSF experts) to
describe the main categories of functions (packet filter, L7 filter,
VPN, etc.) to allow vendors / NSF developers to easily tag their NSFs.
Moreover, we are developing a capability algebra for this (a nice name for
starting from a known template and adding some delta rules to specify the
differences between your NSF and the base template) and for composing
templates (a switch that filters and creates VPNs and authenticates and ...).
------------



That is, the policy is built on the NSF capabilities (capabilities that
describe what an NSF can do if the infrastructure wants to use it for
enforcing security policies), but it is not a capability.

Hope I am not being cryptic or boring with these long emails...

Not cryptic, and I really appreciate your long email.

Thanks.

Regards,
Aldo




_______________________________________________
I2nsf mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/i2nsf
