On Tue, Apr 10, 2018 at 03:41:53PM +0100, Quentin Monnet wrote:
> Add documentation for eBPF helper functions to bpf.h user header file.
> This documentation can be parsed with the Python script provided in
> another commit of the patch series, in order to provide a RST document
> that can later be converted into a man page.
>
> The objective is to make the documentation easily understandable and
> accessible to all eBPF developers, including beginners.
>
> This patch contains descriptions for the following helper functions, all
> written by Daniel:
>
> - bpf_get_prandom_u32()
> - bpf_get_smp_processor_id()
> - bpf_get_cgroup_classid()
> - bpf_get_route_realm()
> - bpf_skb_load_bytes()
> - bpf_csum_diff()
> - bpf_skb_get_tunnel_opt()
> - bpf_skb_set_tunnel_opt()
> - bpf_skb_change_proto()
> - bpf_skb_change_type()
>
> Cc: Daniel Borkmann
> Signed-off-by: Quentin Monnet
> ---
> include/uapi/linux/bpf.h | 125 +++
> 1 file changed, 125 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index f3ea8824efbc..d147d9dd6a83 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -473,6 +473,14 @@ union bpf_attr {
> * The number of bytes written to the buffer, or a negative error
> * in case of failure.
> *
> + * u32 bpf_prandom_u32(void)
> + * Return
> + * A random 32-bit unsigned value.
There is no such helper; it's called bpf_get_prandom_u32().
I'd also add a note that it uses its own random state and cannot be
used to infer the seed of other random functions in the kernel.
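To illustrate the point, here is a minimal sketch (not from the patch; section
name, threshold, and program logic are all hypothetical) of how the correctly
named helper might be used from a tc classifier, e.g. for random packet
sampling:

```c
/* Hypothetical example: randomly drop ~1% of packets in a tc/BPF
 * classifier.  bpf_get_prandom_u32() draws from BPF's own prandom
 * state, so observing its output cannot be used to infer the seed
 * of other random number users in the kernel. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>

static __u32 (*bpf_get_prandom_u32)(void) =
	(void *) BPF_FUNC_get_prandom_u32;

__attribute__((section("classifier"), used))
int sample_drop(struct __sk_buff *skb)
{
	/* 2^32 / 100 ~= 42949673, i.e. a ~1% probability */
	if (bpf_get_prandom_u32() < 42949673U)
		return TC_ACT_SHOT;	/* drop this packet */
	return TC_ACT_OK;		/* pass everything else */
}
```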
> + *
> + * u32 bpf_get_smp_processor_id(void)
> + * Return
> + * The SMP (Symmetric multiprocessing) processor id.
Probably worth adding a note to explain that all BPF programs run
with preemption disabled, so the processor id is stable for the
duration of the program's run.
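A short sketch (hypothetical, not from the patch) of why that stability
matters, using the cpu id to index a plain array map instead of a per-cpu one:

```c
/* Hypothetical example: because BPF programs run with preemption
 * disabled, the value returned by bpf_get_smp_processor_id() cannot
 * change mid-program, so it can safely key per-CPU bookkeeping in an
 * ordinary BPF_MAP_TYPE_ARRAY. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>

static __u32 (*bpf_get_smp_processor_id)(void) =
	(void *) BPF_FUNC_get_smp_processor_id;

int count_on_cpu(struct __sk_buff *skb)
{
	__u32 cpu = bpf_get_smp_processor_id();

	/* ... look up element 'cpu' in an array map and bump a
	 * counter; no other program instance on this CPU can race
	 * with us while we run ... */
	return TC_ACT_OK;
}
```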
> + *
> * int bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void
> *from, u32 len, u64 flags)
> * Description
> * Store *len* bytes from address *from* into the packet
> @@ -604,6 +612,13 @@ union bpf_attr {
> * Return
> * 0 on success, or a negative error in case of failure.
> *
> + * u32 bpf_get_cgroup_classid(struct sk_buff *skb)
> + * Description
> + * Retrieve the classid for the current task, i.e. for the
> + * net_cls (network classifier) cgroup to which *skb* belongs.
Please add that the kernel should be configured with
CONFIG_NET_CLS_CGROUP=y|m, and mention Documentation/cgroup-v1/net_cls.txt;
otherwise 'network classifier' is way too generic.
I'd also mention that placing a task into the net_cls controller
disables all of cgroup-bpf.
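For context, a sketch (hypothetical, not part of the patch) of how the helper
is typically consumed in a tc classifier once net_cls is configured:

```c
/* Hypothetical example: classify traffic by net_cls cgroup classid.
 * Requires CONFIG_NET_CLS_CGROUP=y|m; see
 * Documentation/cgroup-v1/net_cls.txt for how classids are assigned. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>

static __u32 (*bpf_get_cgroup_classid)(void *skb) =
	(void *) BPF_FUNC_get_cgroup_classid;

int cls_by_cgroup(struct __sk_buff *skb)
{
	__u32 classid = bpf_get_cgroup_classid(skb);

	if (!classid)	/* 0: task not in a configured net_cls cgroup */
		return TC_ACT_OK;

	skb->tc_classid = classid;	/* hand the class over to tc */
	return TC_ACT_OK;
}
```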
> + * Return
> + * The classid, or 0 for the default unconfigured classid.
> + *
> * int bpf_skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16
> vlan_tci)
> * Description
> * Push a *vlan_tci* (VLAN tag control information) of protocol
> @@ -703,6 +718,14 @@ union bpf_attr {
> * are **TC_ACT_REDIRECT** on success or **TC_ACT_SHOT** on
> * error.
> *
> + * u32 bpf_get_route_realm(struct sk_buff *skb)
> + * Description
> + * Retrieve the realm or the route, that is to say the
> + * **tclassid** field of the destination for the *skb*.
Similarly, this only works if CONFIG_IP_ROUTE_CLASSID is enabled.
> + * Return
> + * The realm of the route for the packet associated to *sdb*, or 0
> + * if none was found.
> + *
> * int bpf_perf_event_output(struct pt_reg *ctx, struct bpf_map *map, u64
> flags, void *data, u64 size)
> * Description
> * Write perf raw sample into a perf event held by *map* of type
> @@ -779,6 +802,21 @@ union bpf_attr {
> * Return
> * 0 on success, or a negative error in case of failure.
> *
> + * int bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to,
> u32 len)
> + * Description
> + * This helper was provided as an easy way to load data from a
> + * packet. It can be used to load *len* bytes from *offset* from
> + * the packet associated to *skb*, into the buffer pointed by
> + * *to*.
> + *
> + * Since Linux 4.7, this helper is deprecated in favor of
> + * "direct packet access", enabling packet data to be manipulated
> + * with *skb*\ **->data** and *skb*\ **->data_end** pointing
> + * respectively to the first byte of packet data and to the byte
> + * after the last byte of packet data.
I wouldn't call it deprecated; it's still useful when a programmer
wants to read large quantities of data from the packet.
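To make the contrast concrete, a hypothetical sketch (program logic and buffer
size are illustrative only) showing both access styles side by side:

```c
/* Hypothetical example contrasting the two access styles.  Direct
 * packet access requires an explicit bounds check against data_end
 * before every dereference; bpf_skb_load_bytes() remains convenient
 * for pulling in a larger chunk of data with a single helper call. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>

static int (*bpf_skb_load_bytes)(const void *skb, __u32 offset,
				 void *to, __u32 len) =
	(void *) BPF_FUNC_skb_load_bytes;

int parse(struct __sk_buff *skb)
{
	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;
	struct ethhdr *eth = data;
	__u8 buf[64];

	/* direct packet access: the verifier insists on this check */
	if ((void *)(eth + 1) > data_end)
		return TC_ACT_OK;

	/* bulk read: one checked helper call instead of many
	 * individually bounds-checked loads */
	if (bpf_skb_load_bytes(skb, 0, buf, sizeof(buf)) < 0)
		return TC_ACT_OK;

	return TC_ACT_OK;
}
```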
> + * Return
> + * 0 on success, or a negative error in case of failure.
> + *
> * int bpf_get_stackid(struct pt_reg *ctx, struct bpf_map *map, u64 flags)
> * Description
> * Walk a user or a kernel stack and return its id. To a