Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-10 Thread Daniel Borkmann via iovisor-dev
On 04/09/2018 01:26 PM, Jesper Dangaard Brouer wrote:
> On Fri, 6 Apr 2018 12:36:18 +0200
> Daniel Borkmann  wrote:
[...]
>> [...]
>> The underlying problem is effectively the decoupling of program
>> verification, which doesn't have/know the context of where the program
>> is being attached in this case.
> 
> Yes, exactly, that's the underlying problem. (Plus, the first XDP prog
> gets verified and driver-attached, and subsequently added bpf tail calls
> cannot be "rejected" (at "driver attachment time") as it's too late.)
> 
>> Thinking out loud for a sec on a couple of other options aside
>> from feature bits, what about i) providing the target ifindex to the
>> verifier for XDP programs, such that at verification time you have the
>> full context, similar to the nfp offload case today,
> 
> I do like that we could provide the target ifindex earlier.  But
> userspace still needs some way to express that it e.g. needs the
> XDP_REDIRECT feature (as the verifier cannot reliably detect the action
> return codes used, as discussed before, due to bpf tail calls and maps
> used as return values).

(See below on how to detect this.)

>> or ii) populating some
>> XDP-specific auxiliary data to the BPF program at verification time such
>> that the driver can check at program attach time whether the requested
>> features are possible and, if not, reject and respond with a netlink
>> extack message to the user (as we do in various XDP attach cases already
>> through XDP_SETUP_PROG{,_HW}).
> 
> I like proposal ii) better.  But how do I specify/input that I need
> e.g. the XDP_REDIRECT feature, such that it is available to "the BPF
> program at verification time"?
> 
> My proposal iii): what about, at XDP attachment time, creating a new
> netlink attribute that describes the XDP action codes the program
> needs/wants? If the info is provided, the ndo_bpf call can check and
> reject, and respond with a netlink extack message.
>   If I want to query for available action codes, then I can use the same
> netlink attribute format, and the kernel will return it "populated" with
> the info.
> 
> It is very useful that the program gets rejected at attachment time (to
> avoid loading an XDP prog that silently drops packets). BUT I also
> want a query option/functionality (reusing the netlink attribute format).
> 
> Specifically for Suricata, the same "bypass" feature can be implemented
> at different levels (XDP, XDP-generic or cls_bpf), and the XDP program
> contains multiple features.  Thus, depending on what the NIC driver
> supports, we want to load different XDP and/or cls_bpf/TC BPF programs,
> and still support the same user-requested feature/functionality, even
> if XDP_REDIRECT is not available, just slightly slower.

Makes sense to have such fallbacks.

>> This would, for example, avoid the need for feature bits, and do actual
>> rejection of the program while retaining flexibility (and avoiding the
>> exposure of bits that will hopefully deprecate over time anyway once all
>> XDP-aware drivers implement them). For both cases i) and ii), it
>> would mean we make the verifier a bit smarter with regards to keeping
>> track of driver-related (XDP) requirements. Program return code checking
>> is already present in the verifier's check_return_code() and we could
>> extend it for XDP as well, for example. Seems cleaner and more extensible
>> than feature bits, imho.
> 
> I thought the verifier's check_return_code() couldn't see/verify
> whether the XDP return action code is provided as a map lookup result(?).
> How is that handled?

For the latter, I would just add a BPF_F_STRICT_CONST_VERDICT load flag
which e.g. enforces a constant return code in all prog types. It also needs
to check for helpers like bpf_xdp_redirect() and track R0 from there, which
then contains either XDP_REDIRECT or XDP_ABORTED.
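Roughly, as a sketch of the verifier side (nothing of this exists today;
BPF_F_STRICT_CONST_VERDICT, the load_flags field and record_seen_verdict()
are all made-up names), the extension to check_return_code() could look
like:

    /* Sketch only: reject non-constant verdicts under the new flag. */
    if (env->prog->aux->load_flags & BPF_F_STRICT_CONST_VERDICT) {
            struct bpf_reg_state *reg = cur_regs(env) + BPF_REG_0;

            if (reg->type != SCALAR_VALUE || !tnum_is_const(reg->var_off))
                    return -EINVAL;
            record_seen_verdict(env, reg->var_off.value);
    }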

For the bpf_prog_aux, we need a prog-type-dependent void *private_data
pointer that the verifier populates; e.g. we could already migrate some
of the prog-type-specific fields into it, like kprobe_override, cb_access
and dst_needed, which are non-generic anyway. For XDP, the verifier would
in your case record all seen return codes into private_data. When the flag
BPF_F_STRICT_CONST_VERDICT is not used and we notice cases where the
verdict is not a verifiable constant, then we e.g. mark all XDP return
codes as seen. Potentially the same is needed for tail calls.
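As a sketch, the XDP private_data could be as small as this (both the
pointer and the struct are hypothetical, just to illustrate):

    /* Hypothetical state hanging off a new bpf_prog_aux::private_data,
     * populated by the verifier while walking the program. */
    struct xdp_prog_private {
            u32  seen_actions;  /* bitmask, one bit per XDP_* verdict */
            bool verdict_const; /* false: all actions marked as seen  */
    };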

We can add an ndo_bpf query to drivers that returns their supported XDP
return codes and compare them, e.g. in dev_change_xdp_fd(), from generic
code and reject with extack.
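The generic-code side could then look roughly like this (XDP_QUERY_ACTIONS
and the seen_actions/supported_actions fields are invented here; only the
shape of the check matters):

    /* Sketch of an attach-time check in dev_change_xdp_fd(). */
    struct netdev_bpf query = { .command = XDP_QUERY_ACTIONS };
    struct xdp_prog_private *priv = prog->aux->private_data;
    int err;

    err = ops->ndo_bpf(dev, &query);
    if (err)
            return err;
    if (priv->seen_actions & ~query.supported_actions) {
            NL_SET_ERR_MSG(extack,
                           "XDP program uses actions the driver does not support");
            return -EOPNOTSUPP;
    }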

For tail calls, the only way that comes to mind right now where you could
lift the requirement of having to mark all return codes as seen is that
you'd need to pass the ifindex as in the offload case at load time, such
that the program becomes tied to the device. Then you also need to record
the dev pointer in the private_data such that you can add a new callback to
bpf_prog_ops for prog dependent compare in

Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-09 Thread Jesper Dangaard Brouer via iovisor-dev
On Fri, 6 Apr 2018 12:36:18 +0200
Daniel Borkmann  wrote:

> On 04/05/2018 10:51 PM, Jesper Dangaard Brouer wrote:
> > On Thu, 5 Apr 2018 12:37:19 +0200
> > Daniel Borkmann  wrote:
> >   
> >> On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:  
> >>> Hi Suricata people,
> >>>
> >>> When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
> >>> into the issue that, at Suricata load/start time, we cannot determine
> >>> whether the chosen XDP config options, like xdp-cpu-redirect[1], are
> >>> valid on this HW (they e.g. require driver XDP_REDIRECT support and bpf
> >>> cpumap).
> >>>
> >>> We would have liked a way to report that the suricata.yaml config was
> >>> invalid for this hardware/setup.  Now, it just loads, and packets get
> >>> silently dropped by XDP (well, there is a WARN_ONCE, catchable via
> >>> tracepoints).
> >>>
> >>> My question to suricata developers: (Q1) Do you already have code that
> >>> queries the kernel or drivers for features?
> >>>
> >>> At the IOvisor call (2 weeks ago), we discussed two options for exposing
> >>> the XDP features available in a given driver.
> >>>
> >>> Option#1: Extend the existing ethtool -k/-K "offload and other features"
> >>> with some XDP features that userspace can query. (Do you already query
> >>> offloads, regarding Q1?)
> >>>
> >>> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
> >>>
> >>
>> I don't really mind if you go via ethtool, as long as we handle this
>> generically from there and e.g. call the dev's ndo_bpf handler, such that
>> we keep all the information in one place. This can be a new ndo_bpf
>> command, e.g. XDP_QUERY_FEATURES or such.
> > 
> > Just to be clear: notice, as Victor points out[2], they are going
> > programmatically through the IOCTL (SIOCETHTOOL) and not using cmdline
> > tools.
> 
> Sure, that was perfectly clear. (But at the same time, if you extend the
> ioctl, it's obvious to also add support to the actual ethtool cmdline tool.)
> 
> > [2] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326
> > 
> > If you want everything to go through the driver's ndo_bpf call anyway
> > (whose userspace API is netlink-based), then at what point do you
> 
> Not really, that's the front end. ndo_bpf itself is a plain netdev op
> and has no real tie to netlink.
> 
> > want drivers to call their own ndo_bpf, when activated through their
> > ethtool_ops? (Sorry, but I don't follow the flow you are proposing.)
> > 
> > Notice, I'm not directly against using the driver's ndo_bpf call.  I can
> > see it does provide the kernel more flexibility than the ethtool IOCTL.
> 
> What I was saying is that even if you go via the ethtool ioctl API, where
> you end up in dev_ethtool() and have some new ETHTOOL_* query command,
> then instead of adding a new ethtool_ops callback, we can and should
> reuse ndo_bpf from there.

Okay, you want to catch this in dev_ethtool(); that connected the dots
for me, thanks.  BUT it does feel a little fake and confusing to do
this, as driver developers, seeing the info via ethtool, would
expect to have to implement an ethtool_ops callback.

My original idea behind going through the ethtool APIs was that every
driver is already using them, and they are familiar to driver
developers.  And there are existing userspace tools that sysadmins
already know, which we can leverage.

Thinking about this over the weekend, I have changed my mind a bit
based on your feedback. I do buy into your idea of forcing things
through the driver's ndo_bpf call.  I'm no longer convinced this should
go through ethtool, but as you describe, it can if we find that this is
the easiest way to export it and leverage an existing userspace tool.



> [...]
> > Here, I want to discuss how drivers expose/tell userspace that they
> > support a given feature: specifically, a bit for XDP_REDIRECT action
> > support.
> >   
> >> Same for meta data,  
> > 
> > Well, not really.  It would be a "nice-to-have", but not strictly
> > needed as a feature bit.  XDP meta-data is controlled via a helper.
> > And the BPF prog can detect/see at runtime that the helper
> > bpf_xdp_adjust_meta returns -ENOTSUPP (and it needs to check the return
> > value anyhow).  Thus, there is not much gained by exposing this for
> > detection at setup time, as all drivers should eventually support this,
> > and we can detect it at runtime.
> > 
> > The missing XDP_REDIRECT action feature bit is different, as the
> > BPF prog cannot detect at runtime that this is an unsupported action.
> > Plus, at setup time we cannot query the driver for supported XDP actions.
> 
> Ok, so with the example of meta data, you're arguing that it's okay
> to load a native XDP program onto a driver, and run actual traffic on
> the NIC in order to probe for the availability of the feature, when you're
> saying that it "can detect/see [at] runtime". I totally agree with you
> that all drivers should eventually support this (same with XDP_REDIRECT),
> but

[iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-06 Thread Daniel Borkmann via iovisor-dev
On 04/05/2018 10:51 PM, Jesper Dangaard Brouer wrote:
> On Thu, 5 Apr 2018 12:37:19 +0200
> Daniel Borkmann  wrote:
> 
>> On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
>>> Hi Suricata people,
>>>
>>> When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
>>> into the issue that, at Suricata load/start time, we cannot determine
>>> whether the chosen XDP config options, like xdp-cpu-redirect[1], are
>>> valid on this HW (they e.g. require driver XDP_REDIRECT support and bpf
>>> cpumap).
>>>
>>> We would have liked a way to report that the suricata.yaml config was
>>> invalid for this hardware/setup.  Now, it just loads, and packets get
>>> silently dropped by XDP (well, there is a WARN_ONCE, catchable via
>>> tracepoints).
>>>
>>> My question to suricata developers: (Q1) Do you already have code that
>>> queries the kernel or drivers for features?
>>>
>>> At the IOvisor call (2 weeks ago), we discussed two options for exposing
>>> the XDP features available in a given driver.
>>>
>>> Option#1: Extend the existing ethtool -k/-K "offload and other features"
>>> with some XDP features that userspace can query. (Do you already query
>>> offloads, regarding Q1?)
>>>
>>> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
>>
>> I don't really mind if you go via ethtool, as long as we handle this
>> generically from there and e.g. call the dev's ndo_bpf handler, such that
>> we keep all the information in one place. This can be a new ndo_bpf
>> command, e.g. XDP_QUERY_FEATURES or such.
> 
> Just to be clear: notice, as Victor points out[2], they are going
> programmatically through the IOCTL (SIOCETHTOOL) and not using cmdline
> tools.

Sure, that was perfectly clear. (But at the same time, if you extend the
ioctl, it's obvious to also add support to the actual ethtool cmdline tool.)

> [2] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326
> 
> If you want everything to go through the driver's ndo_bpf call anyway
> (whose userspace API is netlink-based), then at what point do you

Not really, that's the front end. ndo_bpf itself is a plain netdev op
and has no real tie to netlink.

> want drivers to call their own ndo_bpf, when activated through their
> ethtool_ops? (Sorry, but I don't follow the flow you are proposing.)
> 
> Notice, I'm not directly against using the driver's ndo_bpf call.  I can
> see it does provide the kernel more flexibility than the ethtool IOCTL.

What I was saying is that even if you go via the ethtool ioctl API, where
you end up in dev_ethtool() and have some new ETHTOOL_* query command,
then instead of adding a new ethtool_ops callback, we can and should
reuse ndo_bpf from there.
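As a rough sketch of what I mean (ETHTOOL_GXDPFEAT, XDP_QUERY_FEATURES and
the features member are invented names; only the dispatch path is the
point):

    /* Sketch: dev_ethtool() forwards the new query to ndo_bpf instead
     * of growing ethtool_ops. */
    static int ethtool_get_xdp_features(struct net_device *dev,
                                        void __user *useraddr)
    {
            struct netdev_bpf xdp = { .command = XDP_QUERY_FEATURES };
            int err;

            if (!dev->netdev_ops->ndo_bpf)
                    return -EOPNOTSUPP;
            err = dev->netdev_ops->ndo_bpf(dev, &xdp);
            if (err)
                    return err;
            if (copy_to_user(useraddr, &xdp.features, sizeof(xdp.features)))
                    return -EFAULT;
            return 0;
    }

That way drivers keep a single entry point for everything XDP.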

[...]
> Here, I want to discuss how drivers expose/tell userspace that they
> support a given feature: specifically, a bit for XDP_REDIRECT action
> support.
> 
>> Same for meta data,
> 
> Well, not really.  It would be a "nice-to-have", but not strictly
> needed as a feature bit.  XDP meta-data is controlled via a helper.
> And the BPF prog can detect/see at runtime that the helper
> bpf_xdp_adjust_meta returns -ENOTSUPP (and it needs to check the return
> value anyhow).  Thus, there is not much gained by exposing this for
> detection at setup time, as all drivers should eventually support this,
> and we can detect it at runtime.
> 
> The missing XDP_REDIRECT action feature bit is different, as the
> BPF prog cannot detect at runtime that this is an unsupported action.
> Plus, at setup time we cannot query the driver for supported XDP actions.

Ok, so with the example of meta data, you're arguing that it's okay
to load a native XDP program onto a driver, and run actual traffic on
the NIC in order to probe for the availability of the feature, when you're
saying that it "can detect/see [at] runtime". I totally agree with you
that all drivers should eventually support this (same with XDP_REDIRECT),
but today there are even differences between drivers on bpf_xdp_adjust_meta()/
bpf_xdp_adjust_head() with regards to how much headroom they have available,
etc. (e.g. some of them have none), so right now you can either go and
read the code or do a runtime test by running actual traffic through
the NIC to check whether your BPF prog is supported or not. Theoretically,
you can do the same runtime test with XDP_REDIRECT (leaving the warn in
bpf_warn_invalid_xdp_action() aside for a moment), and you do have the
trace_xdp_exception() tracepoint to figure it out. Yes, it's a painful
hassle, but overall it's not as different as you were trying to argue
here. For /both/ cases it would be nice to know at setup time whether
this would be supported or not. Hence, such a query is not just limited to
XDP_REDIRECT alone. Once such an interface is agreed upon, the list of
feature bits will undoubtedly grow, is what I'm trying to say; arguing
only about XDP_REDIRECT here would be short-term.
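Just to make the runtime detection we both refer to concrete, this is a
minimal sketch of such a probe inside an XDP program (bpf_xdp_adjust_meta()
is the existing helper; the SEC() boilerplate comes from libbpf's helper
headers):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_meta_probe(struct xdp_md *ctx)
    {
            /* Grow the meta data area in front of the packet; drivers
             * without support fail the call (e.g. -ENOTSUPP). */
            if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(__u32)) < 0)
                    return XDP_PASS; /* no meta data support, fall back */

            /* ... meta data usable from here on ... */
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";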

[...]
>> What about keeping this high level to users? E.g. say you have 2 options
>> that drivers can expose as 

Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-06 Thread Jesper Dangaard Brouer via iovisor-dev
On Thu, 5 Apr 2018 14:47:16 -0700
Jakub Kicinski  wrote:

> On Thu, 5 Apr 2018 22:51:33 +0200, Jesper Dangaard Brouer wrote:
> > > What about nfp in terms of XDP offload capabilities, should they be
> > > included as well, or is probing to load the program and seeing if it
> > > loads/JITs as we do today just fine (e.g. you'd otherwise end up with
> > > extra flags on a per-BPF-helper basis)?
> > 
> > No, not flags on a per-BPF-helper basis. As I've described above, helpers
> > belong to the BPF core, not the driver.  Here I want to know what the
> > specific driver supports.
> 
I think Daniel meant for nfp offload.  The offload restrictions are
quite involved; are we going to be able to express those?

Let's keep things separate.

I'm requesting something really simple.  I want the driver to tell me
what XDP actions it supports.  We/I can implement an XDP_QUERY_ACTIONS
via ndo_bpf, problem solved.  It is small, specific and simple.
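The driver side would be a few lines in the existing ndo_bpf switch; a
sketch (XDP_QUERY_ACTIONS, the action_mask field and the foo_* driver are
made up):

    /* Sketch: driver's ndo_bpf answering the proposed action query. */
    static int foo_ndo_bpf(struct net_device *dev, struct netdev_bpf *bpf)
    {
            switch (bpf->command) {
            case XDP_SETUP_PROG:
                    return foo_xdp_setup(dev, bpf); /* existing path */
            case XDP_QUERY_ACTIONS:
                    bpf->action_mask = BIT(XDP_ABORTED) | BIT(XDP_DROP) |
                                       BIT(XDP_PASS) | BIT(XDP_TX) |
                                       BIT(XDP_REDIRECT);
                    return 0;
            default:
                    return -EINVAL;
            }
    }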

For my other use-case of enabling XDP-xmit on a device, I can
implement another ndo_bpf extension. The current approach today is
loading a dummy XDP prog via ndo_bpf anyway (which is awkward). Again,
a specific change that lets us move one step further.


For your nfp offload use-case, you/we have to find a completely
different solution.  You have hit a design choice made by BPF:
BPF is part of the core kernel, and helpers cannot be loaded as kernel
modules, as we cannot remove or add helpers after the verifier has
certified the program.  And your nfp offload driver basically comes as
a kernel module.
 (Details: you basically already solved your issue by modifying the
core verifier to call back into bpf_prog_offload_verify_insn().)
The point being, this is very different from what I'm requesting.  Thus,
for offloading you already have a solution to my setup-time detection
problem, as your program gets rejected at setup/load time by the verifier.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Jakub Kicinski via iovisor-dev
On Thu, 5 Apr 2018 22:51:33 +0200, Jesper Dangaard Brouer wrote:
> > What about nfp in terms of XDP offload capabilities, should they be
> > included as well, or is probing to load the program and seeing if it
> > loads/JITs as we do today just fine (e.g. you'd otherwise end up with
> > extra flags on a per-BPF-helper basis)?
> 
> No, not flags on a per-BPF-helper basis. As I've described above, helpers
> belong to the BPF core, not the driver.  Here I want to know what the
> specific driver supports.

I think Daniel meant for nfp offload.  The offload restrictions are
quite involved; are we going to be able to express those?

This is a bit simpler, but it reminds me of the TC flower capability
discussion.  Expressing features and capabilities gets messy quickly.

I have a gut feeling that a good starting point would be defining and
building a test suite or a set of probing tests to check that things work
at the system level (incl. redirects to different ports, etc.).  I think
having a concrete set of litmus tests that confirm the meaning of a given
feature/capability would go a long way in making people more comfortable
with accepting any form of BPF driver capability.  And serious BPF
projects already do probing, so this would just centralize it in the
kernel.
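(The probing I mean is along these lines: load a minimal program and see
whether the kernel accepts it. A sketch using only the existing bpf()
syscall; note this only probes program load, not driver attach:)

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Probe support by loading a 2-insn XDP prog: "return XDP_PASS". */
    static int probe_xdp_prog_load(void)
    {
            struct bpf_insn insns[] = {
                    { .code = BPF_ALU64 | BPF_MOV | BPF_K,
                      .dst_reg = BPF_REG_0, .imm = XDP_PASS },
                    { .code = BPF_JMP | BPF_EXIT },
            };
            union bpf_attr attr;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.prog_type = BPF_PROG_TYPE_XDP;
            attr.insns     = (unsigned long)insns;
            attr.insn_cnt  = 2;
            attr.license   = (unsigned long)"GPL";

            fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
            if (fd < 0)
                    return 0;       /* XDP prog load not supported */
            close(fd);
            return 1;
    }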

That's my two cents.


Re: [iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-05 Thread Daniel Borkmann via iovisor-dev
On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
> Hi Suricata people,
> 
> When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
> into the issue that, at Suricata load/start time, we cannot determine
> whether the chosen XDP config options, like xdp-cpu-redirect[1], are
> valid on this HW (they e.g. require driver XDP_REDIRECT support and bpf
> cpumap).
> 
> We would have liked a way to report that the suricata.yaml config was
> invalid for this hardware/setup.  Now, it just loads, and packets get
> silently dropped by XDP (well, there is a WARN_ONCE, catchable via
> tracepoints).
> 
> My question to suricata developers: (Q1) Do you already have code that
> queries the kernel or drivers for features?
> 
> At the IOvisor call (2 weeks ago), we discussed two options for exposing
> the XDP features available in a given driver.
> 
> Option#1: Extend the existing ethtool -k/-K "offload and other features"
> with some XDP features that userspace can query. (Do you already query
> offloads, regarding Q1?)
> 
> Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.

I don't really mind if you go via ethtool, as long as we handle this
generically from there and e.g. call the dev's ndo_bpf handler, such that
we keep all the information in one place. This can be a new ndo_bpf
command, e.g. XDP_QUERY_FEATURES or such.

More specifically, what would such a feature mask look like? How fine-grained
would it be? When you add a new minor feature to, say, cpumap that not
all drivers support yet, we'd need a new flag each time, no? The same goes
for meta data, then potentially for the redirect memory return work, or the
af_xdp bits; the xdp_rxq_info would have needed it, etc. What about nfp in
terms of XDP offload capabilities, should they be included as well, or is
probing to load the program and seeing if it loads/JITs as we do today just
fine (e.g. you'd otherwise end up with extra flags on a per-BPF-helper
basis)? To make a somewhat reliable assertion about whether feature xyz
would work, this would explode into new feature bits long term. Additionally,
if we end up with a lot of feature flags, it will be very hard for users to
determine whether the particular set of features a driver supports actually
represents a fully supported native XDP driver.

What about keeping this high-level for users? E.g. say you have 2 options
that drivers can expose as netdev_features_strings: 'xdp-native-full' or
'xdp-native-partial'. If a driver truly supports all XDP features for a
given kernel, e.g. v4.16, then a query like 'ethtool -k foo' will say
'xdp-native-full'; if at least one feature from e.g. the above list is
missing for it to be feature-complete, then ethtool will tell
'xdp-native-partial'; and if not even an ndo_bpf callback exists, then no
'xdp-native-*' is reported.
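Concretely, this would just be two new entries in netdev_features_strings[]
in net/core/ethtool.c (the bit names are proposals, of course):

    /* Sketch: coarse-grained XDP feature strings. */
    [NETIF_F_XDP_NATIVE_FULL_BIT]    = "xdp-native-full",
    [NETIF_F_XDP_NATIVE_PARTIAL_BIT] = "xdp-native-partial",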

A side effect might be that this would give an incentive to keep drivers in
the 'xdp-native-full' state instead of being downgraded to
'xdp-native-partial'. Potentially, in the 'xdp-native-partial' state, we can
expose a high-level list of missing features that the driver does not
support yet, which would over time converge towards 'zero' and thus
'xdp-native-full' again. ethtool itself could get a new XDP-specific query
option that, based on this info, can then dump the full list of supported
and unsupported features. In order for this to not explode, such features
would need to be kept on a high-level basis, meaning if e.g. cpumap gets
extended along with support for a number of drivers, then those that missed
out would need to be temporarily re-flagged with e.g. 'cpumap not supported'
until it gets implemented there as well. That way, we don't explode into
too-fine-grained feature bit combinations long term, and it is easier to
tell whether a driver supports the full set in native XDP or not. Thoughts?

> (Q2) Do Suricata devs have any preference (or other options/ideas) for
> the way the kernel exposes this info to userspace?
> 
> [1] 
> http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case


[iovisor-dev] Best userspace programming API for XDP features query to kernel?

2018-04-04 Thread Jesper Dangaard Brouer via iovisor-dev
Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that, at Suricata load/start time, we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are
valid on this HW (they e.g. require driver XDP_REDIRECT support and bpf
cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup.  Now, it just loads, and packets get
silently dropped by XDP (well, there is a WARN_ONCE, catchable via
tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?


At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1? A sketch of such a query is below the options.)

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
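(For Option#1, the sketch mentioned above: feature flags are already
queryable programmatically over SIOCETHTOOL, which is what Suricata does
for offloads today. This uses the existing ETHTOOL_GFEATURES command; the
XDP feature bits themselves would still have to be defined:)

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <unistd.h>

    /* Minimal programmatic ethtool features query for one interface. */
    static int query_features(const char *ifname)
    {
            struct {
                    struct ethtool_gfeatures hdr;
                    struct ethtool_get_features_block blocks[2];
            } gfeat = { .hdr = { .cmd = ETHTOOL_GFEATURES, .size = 2 } };
            struct ifreq ifr;
            int fd, err;

            fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0)
                    return -1;
            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
            ifr.ifr_data = (void *)&gfeat;
            err = ioctl(fd, SIOCETHTOOL, &ifr);
            close(fd);
            if (err)
                    return -1;
            /* A future XDP feature bit would be tested as:
             * gfeat.blocks[bit / 32].active & (1U << (bit % 32)) */
            printf("feature word 0: 0x%08x\n", gfeat.blocks[0].active);
            return 0;
    }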

(Q2) Do Suricata devs have any preference (or other options/ideas) for
the way the kernel exposes this info to userspace?



[1] 
http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer