Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Jakub Kicinski via iovisor-dev
On Thu, 5 Apr 2018 22:51:33 +0200, Jesper Dangaard Brouer wrote:
> > What about nfp in terms of XDP
> > offload capabilities, should they be included as well, or is probing to load
> > the program and seeing whether it loads/JITs, as we do today, just fine (e.g. you'd
> > otherwise end up with extra flags on a per-BPF-helper basis)?  
> 
> No, not flags on a per-BPF-helper basis. As I've described above, helpers belong
> to the BPF core, not the driver.  Here I want to know what the specific
> driver supports.

I think Daniel meant for nfp offload.  The offload restrictions are
quite involved; will we be able to express those?

This is a bit simpler but reminds me of the TC flower capability
discussion.  Expressing features and capabilities gets messy quickly.

I have a gut feeling that a good starting point would be defining and
building a test suite or a set of probing tests to check that things work
at the system level (incl. redirects to different ports etc.).  I think
having a concrete set of litmus tests that confirm the meaning of a given
feature/capability would go a long way in making people more comfortable
with accepting any form of BPF driver capability.  And serious BPF
projects already do probing, so this would just centralize it in the
kernel.
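
For illustration, a minimal sketch of the kind of probe such projects do
today: load a trivial XDP_PASS program and try to attach it in native
driver mode.  bpf_load_program() and bpf_set_link_xdp_fd() are the real
tools/lib/bpf helpers of this era; the probe flow around them is just an
assumption of how one such litmus test could look.  Note it briefly
attaches a program to the live device and needs CAP_NET_ADMIN.

 #include <linux/bpf.h>
 #include <linux/if_link.h>      /* XDP_FLAGS_DRV_MODE */
 #include <bpf/bpf.h>            /* tools/lib/bpf */
 #include <unistd.h>

 static int probe_xdp_drv_mode(int ifindex)
 {
         /* mov r0, XDP_PASS; exit */
         struct bpf_insn prog[] = {
                 { .code = BPF_ALU64 | BPF_MOV | BPF_K,
                   .dst_reg = BPF_REG_0, .imm = 2 /* XDP_PASS */ },
                 { .code = BPF_JMP | BPF_EXIT },
         };
         int fd, err;

         fd = bpf_load_program(BPF_PROG_TYPE_XDP, prog, 2, "GPL",
                               0, NULL, 0);
         if (fd < 0)
                 return 0;       /* kernel lacks XDP program support */

         /* Native driver mode only; fails if the driver has no XDP */
         err = bpf_set_link_xdp_fd(ifindex, fd, XDP_FLAGS_DRV_MODE);
         if (!err)       /* detach again (fd -1 removes the program) */
                 bpf_set_link_xdp_fd(ifindex, -1, XDP_FLAGS_DRV_MODE);
         close(fd);
         return !err;
 }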

That's my two cents.


Re: [iovisor-dev] [Oisf-devel] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Jesper Dangaard Brouer via iovisor-dev
On Thu, 5 Apr 2018 09:47:37 +0200
Victor Julien  wrote:

> > Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.  
> 
> Do you have an example of how this is queried?

The code for querying should not be too difficult.

It would likely be similar to how we currently "set"/attach an XDP
program, via its BPF file descriptor, to an ifindex.  Eric Leblond
chose to hide this in the kernel's libbpf library; see:

 function bpf_set_link_xdp_fd()
 https://github.com/torvalds/linux/blob/master/tools/lib/bpf/bpf.c#L456-L575

Given that Suricata already depends on libbpf for its eBPF and XDP support,
it might make sense to add an API call to "get" XDP link info, e.g.
bpf_get_link_xdp_features(int ifindex)?
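
A hypothetical sketch of what that could look like next to the existing
setter.  Only bpf_set_link_xdp_fd() exists in libbpf today; the getter,
the XDP_FEAT_* bits and the device name are invented for illustration:

 #include <net/if.h>             /* if_nametoindex() */
 #include <linux/types.h>        /* __u32 */

 /* Existing libbpf attach call (tools/lib/bpf/bpf.h): */
 int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags);

 /* Hypothetical query counterpart -- names and bits are made up: */
 #define XDP_FEAT_BASIC     (1U << 0)    /* XDP_DROP/PASS/TX */
 #define XDP_FEAT_REDIRECT  (1U << 1)    /* XDP_REDIRECT */
 #define XDP_FEAT_CPUMAP    (1U << 2)    /* redirect into cpumap */

 int bpf_get_link_xdp_features(int ifindex, __u32 *features);

 /* Usage sketch, e.g. at Suricata config-validation time: */
 static int check_cpu_redirect(const char *dev)
 {
         __u32 feats;

         if (bpf_get_link_xdp_features(if_nametoindex(dev), &feats))
                 return -1;      /* query not supported */
         return !!(feats & XDP_FEAT_CPUMAP);
 }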

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [iovisor-dev] [Oisf-devel] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Michał Purzyński via iovisor-dev
Extending the ethtool mechanism seems like a clean solution here. It is,
by design, half a reporting tool already, and the XDP feature set would be
just yet another feature it reports.

> On Apr 4, 2018, at 5:28 AM, Jesper Dangaard Brouer  wrote:
> 
> Hi Suricata people,
> 
> When Eric Leblond (with my help) integrated XDP in Suricata, we ran
> into the issue that, at Suricata load/start time, we cannot determine
> whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
> this HW (e.g. it requires driver XDP_REDIRECT support and bpf cpumap).
> 
> We would have liked a way to report that the suricata.yaml config was
> invalid for this hardware/setup.  Now it just loads, and packets get
> silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).
> 
> My question to suricata developers: (Q1) Do you already have code that
> queries the kernel or drivers for features?
> 
> 
> At the IOvisor call (2 weeks ago), we discussed two options for exposing
> the XDP features available in a given driver.
> 
> Option#1: Extend the existing ethtool -k/-K "offload and other features"
> list with some XDP features that userspace can query. (Do you already query
> offloads, regarding Q1?)
> 
> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
> 
> (Q2) Do Suricata devs have any preference (or other options/ideas) for
> the way the kernel exposes this info to userspace?
> 
> 
> 
> [1] 
> http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case


Re: [iovisor-dev] [Oisf-devel] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Peter Manev via iovisor-dev

> On 5 Apr 2018, at 09:47, Victor Julien  wrote:
> 
>> On 04-04-18 14:28, Jesper Dangaard Brouer wrote:
>> Hi Suricata people,
>> 
>> When Eric Leblond (with my help) integrated XDP in Suricata, we ran
>> into the issue that, at Suricata load/start time, we cannot determine
>> whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
>> this HW (e.g. it requires driver XDP_REDIRECT support and bpf cpumap).
>> 
>> We would have liked a way to report that the suricata.yaml config was
>> invalid for this hardware/setup.  Now it just loads, and packets get
>> silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).
>> 
>> My question to suricata developers: (Q1) Do you already have code that
>> queries the kernel or drivers for features?
>> 
>> 
>> At the IOvisor call (2 weeks ago), we discussed two options for exposing
>> the XDP features available in a given driver.
>> 
>> Option#1: Extend the existing ethtool -k/-K "offload and other features"
>> list with some XDP features that userspace can query. (Do you already query
>> offloads, regarding Q1?)
> 
> I think if it used the ioctl ETHTOOL interface it'd be easiest for
> us, as we already have code in place to check offloading
> settings. See [1].
> 
> 
>> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
> 
> Do you have an example of how this is queried?
> 
> 
>> (Q2) Do Suricata devs have any preference (or other options/ideas) for
>> the way the kernel exposes this info to userspace?
> 
> Right now I think extending the ethtool logic is best for us.
> 

+1
I would prefer that approach too.


> 
> [1] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326
> 


Re: [iovisor-dev] [Oisf-devel] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Victor Julien via iovisor-dev
On 04-04-18 14:28, Jesper Dangaard Brouer wrote:
> Hi Suricata people,
> 
> When Eric Leblond (with my help) integrated XDP in Suricata, we ran
> into the issue that, at Suricata load/start time, we cannot determine
> whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
> this HW (e.g. it requires driver XDP_REDIRECT support and bpf cpumap).
> 
> We would have liked a way to report that the suricata.yaml config was
> invalid for this hardware/setup.  Now it just loads, and packets get
> silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).
> 
> My question to suricata developers: (Q1) Do you already have code that
> queries the kernel or drivers for features?
> 
> 
> At the IOvisor call (2 weeks ago), we discussed two options for exposing
> the XDP features available in a given driver.
> 
> Option#1: Extend the existing ethtool -k/-K "offload and other features"
> list with some XDP features that userspace can query. (Do you already query
> offloads, regarding Q1?)

I think if it used the ioctl ETHTOOL interface it'd be easiest for
us, as we already have code in place to check offloading
settings. See [1].
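
A stripped-down sketch of that SIOCETHTOOL flow, extended to look up an
XDP feature string: the ETHTOOL_GSSET_INFO / ETHTOOL_GSTRINGS /
ETHTOOL_GFEATURES plumbing is the standard ioctl interface that
'ethtool -k' uses, while the "xdp-native-full" name passed in is
hypothetical.

 #include <sys/ioctl.h>
 #include <sys/socket.h>
 #include <net/if.h>
 #include <linux/ethtool.h>
 #include <linux/sockios.h>
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>

 /* Returns 1 if 'name' is active on 'dev', 0 if not, -1 on error. */
 static int ethtool_feature_active(const char *dev, const char *name)
 {
         struct ethtool_gstrings *strs = NULL;
         struct ethtool_gfeatures *feats = NULL;
         struct ifreq ifr = {};
         __u32 i, n, blocks;
         int fd, ret = -1;

         fd = socket(AF_INET, SOCK_DGRAM, 0);
         if (fd < 0)
                 return -1;
         strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);

         /* 1. How many strings are in the device's feature string set? */
         struct {
                 struct ethtool_sset_info hdr;
                 __u32 count;    /* overlays hdr.data[0] */
         } sset = { .hdr = { .cmd = ETHTOOL_GSSET_INFO,
                             .sset_mask = 1ULL << ETH_SS_FEATURES } };
         ifr.ifr_data = (void *)&sset;
         if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                 goto out;
         n = sset.count;

         /* 2. Fetch the names and find the one we care about */
         strs = calloc(1, sizeof(*strs) + n * ETH_GSTRING_LEN);
         if (!strs)
                 goto out;
         strs->cmd = ETHTOOL_GSTRINGS;
         strs->string_set = ETH_SS_FEATURES;
         strs->len = n;
         ifr.ifr_data = (void *)strs;
         if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                 goto out;
         for (i = 0; i < n; i++)
                 if (!strncmp((char *)strs->data + i * ETH_GSTRING_LEN,
                              name, ETH_GSTRING_LEN))
                         break;
         if (i == n)
                 goto out;

         /* 3. Fetch the feature bitmaps and test the 'active' bit */
         blocks = (n + 31) / 32;
         feats = calloc(1, sizeof(*feats) +
                           blocks * sizeof(feats->features[0]));
         if (!feats)
                 goto out;
         feats->cmd = ETHTOOL_GFEATURES;
         feats->size = blocks;
         ifr.ifr_data = (void *)feats;
         if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
                 ret = !!(feats->features[i / 32].active &
                          (1U << (i % 32)));
 out:
         free(strs);
         free(feats);
         close(fd);
         return ret;
 }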


> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.

Do you have an example of how this is queried?


> (Q2) Do Suricata devs have any preference (or other options/ideas) for
> the way the kernel exposes this info to userspace?

Right now I think extending the ethtool logic is best for us.


[1] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326

-- 
-
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
-



Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Daniel Borkmann via iovisor-dev
On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
> Hi Suricata people,
> 
> When Eric Leblond (with my help) integrated XDP in Suricata, we ran
> into the issue that, at Suricata load/start time, we cannot determine
> whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
> this HW (e.g. it requires driver XDP_REDIRECT support and bpf cpumap).
> 
> We would have liked a way to report that the suricata.yaml config was
> invalid for this hardware/setup.  Now it just loads, and packets get
> silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).
> 
> My question to suricata developers: (Q1) Do you already have code that
> queries the kernel or drivers for features?
> 
> At the IOvisor call (2 weeks ago), we discussed two options for exposing
> the XDP features available in a given driver.
> 
> Option#1: Extend the existing ethtool -k/-K "offload and other features"
> list with some XDP features that userspace can query. (Do you already query
> offloads, regarding Q1?)
> 
> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.

I don't really mind if you go via ethtool, as long as we handle this
generically from there and e.g. call the dev's ndo_bpf handler, so that
we keep all the information in one place. This could be a new ndo_bpf
command, e.g. XDP_QUERY_FEATURES or similar.
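
A rough sketch of that shape, for concreteness: ndo_bpf, struct
netdev_bpf and the existing XDP_SETUP_PROG/XDP_QUERY_PROG commands are
real kernel interfaces, whereas the XDP_QUERY_FEATURES command, the
query_features member and the XDP_FEAT_* bits (from the earlier sketch
in this thread) are hypothetical:

 /* Hypothetical extension of include/linux/netdevice.h: */
 enum bpf_netdev_command {
         XDP_SETUP_PROG,
         XDP_QUERY_PROG,
         /* ... */
         XDP_QUERY_FEATURES,     /* new: driver fills a feature mask */
 };

 struct netdev_bpf {
         enum bpf_netdev_command command;
         union {
                 /* ... existing members for setup/query ... */
                 struct {        /* XDP_QUERY_FEATURES (hypothetical) */
                         u64 features;
                 } query_features;
         };
 };

 /* Driver side -- the driver's ndo_bpf callback reports what it
  * actually implements, so the info stays in one place: */
 static int foo_ndo_bpf(struct net_device *dev, struct netdev_bpf *bpf)
 {
         switch (bpf->command) {
         case XDP_QUERY_FEATURES:
                 bpf->query_features.features = XDP_FEAT_BASIC |
                                                XDP_FEAT_REDIRECT;
                 return 0;
         /* ... XDP_SETUP_PROG etc. ... */
         default:
                 return -EINVAL;
         }
 }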

More specifically, what would such a feature mask look like? How fine-grained
would it be? When you add a new minor feature to, say, cpumap that not
all drivers support yet, we'd need a new flag each time, no? Same for metadata,
then potentially for the redirect memory return work, or the af_xdp bits; the
xdp_rxq_info would have needed it, etc. What about nfp in terms of XDP
offload capabilities, should they be included as well, or is probing to load
the program and seeing whether it loads/JITs, as we do today, just fine (e.g. you'd
otherwise end up with extra flags on a per-BPF-helper basis)? To make a
somewhat reliable assertion about whether feature xyz would work, this would
explode into new feature bits long term. Additionally, if we end up with a
lot of feature flags, it will be very hard for users to determine whether
the particular set of features a driver supports actually represents a
fully supported native XDP driver.

What about keeping this high-level for users? E.g. say you have two options
that drivers can expose as netdev_features_strings: 'xdp-native-full' or
'xdp-native-partial'. If a driver truly supports all XDP features for a
given kernel, e.g. v4.16, then a query like 'ethtool -k foo' will say
'xdp-native-full'; if at least one feature from e.g. the above list is
missing, then ethtool will report 'xdp-native-partial'; and if not even an
ndo_bpf callback exists, then no 'xdp-native-*' is reported at all.
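
On the command line that could look like today's feature strings; the
output below is hypothetical, modeled on the existing 'ethtool -k'
format:

 $ ethtool -k eth0 | grep xdp
 xdp-native-full: on

or, on a driver that is missing at least one feature:

 $ ethtool -k eth0 | grep xdp
 xdp-native-partial: on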

A side effect might be that it would give an incentive to keep drivers in
the 'xdp-native-full' state instead of being downgraded to 'xdp-native-partial'.
Potentially, in the 'xdp-native-partial' state, we can expose a high-level
list of missing features that the driver does not support yet, which would
over time converge towards 'zero' and thus 'xdp-native-full' again. ethtool
itself could get a new XDP-specific query option that, based on this info,
can then dump the full list of supported and unsupported features, as
sketched below. In order for this not to explode, such features would need
to be kept high-level, meaning that if e.g. cpumap gets extended along with
support for a number of drivers, then those that missed out would need to
be temporarily re-flagged with e.g. 'cpumap not supported' until it also
gets implemented there. That way, we don't explode in adding too
fine-grained feature bit combinations long term, and we make it easier to
tell whether a driver supports the full set in native XDP or not. Thoughts?
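
To make that concrete, such a dump might look like this (the option name
and the feature names are entirely made up):

 $ ethtool --show-xdp eth0
 native XDP: partial
 missing features:
         cpumap-redirect
         xdp-meta-data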

> (Q2) Do Suricata devs have any preference (or other options/ideas) for
> the way the kernel exposes this info to userspace?
> 
> [1] 
> http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case