On Fri, Aug 23, 2024 at 12:47:56PM -0700, Andrii Nakryiko wrote:
> On Thu, Aug 22, 2024 at 2:25 AM Tony Ambardar <[email protected]> wrote:
> >
> > From: Tony Ambardar <[email protected]>
> >
> > Track target endianness in 'struct bpf_gen' and process in-memory data in
> > native byte-order, but on finalization convert the embedded loader BPF
> > insns to target endianness.
> >
> > The light skeleton also includes a target-accessed data blob which is
> > heterogeneous and thus difficult to convert to target byte-order on
> > finalization. Add support functions to convert data to target endianness
> > as it is added to the blob.
> >
> > Also add additional debug logging for data blob structure details and
> > skeleton loading.
> >
> > Signed-off-by: Tony Ambardar <[email protected]>
> > ---
> > tools/lib/bpf/bpf_gen_internal.h | 1 +
> > tools/lib/bpf/gen_loader.c | 187 +++++++++++++++++++++++--------
> > tools/lib/bpf/libbpf.c | 1 +
> > tools/lib/bpf/skel_internal.h | 3 +-
> > 4 files changed, 147 insertions(+), 45 deletions(-)
> >
>
> [...]
>
> > +/*
> > + * Fields of bpf_attr are set to values in native byte-order before being
> > + * written to the target-bound data blob, and may need endian conversion.
> > + * This macro allows setting the correct value in situ and is simpler than
> > + * writing a separate converter for *all fields* of *all records* included
> > + * in union bpf_attr.
> > + */
> > +#define move_tgt_endian(lval, rval) { \
> > + if (!gen->swapped_endian) \
> > + lval = (rval); \
>
> add {} here and for else branch, please
>
> > + else \
> > + switch (sizeof(lval)) { \
> > + case 2: \
> > + lval = bswap_16(rval); \
> > + break; \
> > + case 4: \
> > + lval = bswap_32(rval); \
> > + break; \
> > + case 8: \
> > + lval = bswap_64(rval); \
> > + break; \
>
> I'd also go for more compact:
>
>
> case 2: lval = bswap_16(rval); break;
> case 4: lval = bswap_32(rval); break;
The irony is that I had this originally but checkpatch complained loudly.
>
> > + default: \
> > + lval = (rval); \
> > + pr_warn("unsupported bswap size!\n"); \
>
> case 1: is unsupported? It just doesn't need a byte swap, so please
> add it explicitly and avoid unnecessary warnings
Good point.
>
> > + } \
> > + }
> > +
> > void bpf_gen__load_btf(struct bpf_gen *gen, const void *btf_raw_data,
> > __u32 btf_raw_size)
> > {
>
> [...]
>
> > emit(gen, BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_8,
> > offsetofend(struct bpf_insn, code)));
> > emit(gen, BPF_ALU32_IMM(BPF_AND, BPF_REG_9, reg_mask));
> > emit(gen, BPF_STX_MEM(BPF_B, BPF_REG_8, BPF_REG_9,
> > offsetofend(struct bpf_insn, code)));
> > @@ -931,11 +971,34 @@ static void cleanup_relos(struct bpf_gen *gen, int insns)
> > cleanup_core_relo(gen);
> > }
> >
> > +/* Covert func, line, and core relo info records to target endianness,
>
> typo: convert
Fixed, thanks.
>
> > + * checking the blob size is consistent with 32-bit fields.
> > + */
> > +static void info_blob_bswap(struct bpf_gen *gen, int info_off, __u32 size)
> > +{
> > + __u32 *field = gen->data_start + info_off;
> > + int i, cnt = size / sizeof(__u32);
> > +
> > + if (size && size % sizeof(__u32)) {
>
> nit: () around mod operation
Done.
>
> > + pr_warn("info records not using 32-bit fields!\n");
> > + return;
> > + }
> > + if (gen->swapped_endian)
> > + for (i = 0; i < cnt; i++, field++)
> > + *field = bswap_32(*field);
> > +}
> > +
> > void bpf_gen__prog_load(struct bpf_gen *gen,
> > enum bpf_prog_type prog_type, const char *prog_name,
> > const char *license, struct bpf_insn *insns, size_t insn_cnt,
> > struct bpf_prog_load_opts *load_attr, int prog_idx)
> > {
> > + int func_info_tot_sz = load_attr->func_info_cnt *
> > + load_attr->func_info_rec_size;
> > + int line_info_tot_sz = load_attr->line_info_cnt *
> > + load_attr->line_info_rec_size;
> > + int core_relo_tot_sz = gen->core_relo_cnt *
> > + sizeof(struct bpf_core_relo);
> > int prog_load_attr, license_off, insns_off, func_info, line_info, core_relos;
> > int attr_size = offsetofend(union bpf_attr, core_relo_rec_size);
> > union bpf_attr attr;
> > @@ -947,32 +1010,60 @@ void bpf_gen__prog_load(struct bpf_gen *gen,
> > license_off = add_data(gen, license, strlen(license) + 1);
> > /* add insns to blob of bytes */
> > insns_off = add_data(gen, insns, insn_cnt * sizeof(struct bpf_insn));
> > + pr_debug("gen: prog_load: license off %d insn off %d\n",
> > + license_off, insns_off);
> >
> > - attr.prog_type = prog_type;
> > - attr.expected_attach_type = load_attr->expected_attach_type;
> > - attr.attach_btf_id = load_attr->attach_btf_id;
> > - attr.prog_ifindex = load_attr->prog_ifindex;
> > - attr.kern_version = 0;
> > - attr.insn_cnt = (__u32)insn_cnt;
> > - attr.prog_flags = load_attr->prog_flags;
> > -
> > - attr.func_info_rec_size = load_attr->func_info_rec_size;
> > - attr.func_info_cnt = load_attr->func_info_cnt;
> > - func_info = add_data(gen, load_attr->func_info,
> > - attr.func_info_cnt * attr.func_info_rec_size);
> > + /* convert blob insns to target endianness */
> > + if (gen->swapped_endian) {
> > + struct bpf_insn *insn = gen->data_start + insns_off;
> > + int i;
> >
> > - attr.line_info_rec_size = load_attr->line_info_rec_size;
> > - attr.line_info_cnt = load_attr->line_info_cnt;
> > - line_info = add_data(gen, load_attr->line_info,
> > - attr.line_info_cnt * attr.line_info_rec_size);
> > + for (i = 0; i < insn_cnt; i++, insn++)
> > + bpf_insn_bswap(insn);
> > + }
> >
> > - attr.core_relo_rec_size = sizeof(struct bpf_core_relo);
> > - attr.core_relo_cnt = gen->core_relo_cnt;
> > - core_relos = add_data(gen, gen->core_relos,
> > - attr.core_relo_cnt * attr.core_relo_rec_size);
> > + move_tgt_endian(attr.prog_type, prog_type);
> > + move_tgt_endian(attr.expected_attach_type, load_attr->expected_attach_type);
> > + move_tgt_endian(attr.attach_btf_id, load_attr->attach_btf_id);
> > + move_tgt_endian(attr.prog_ifindex, load_attr->prog_ifindex);
> > + attr.kern_version = 0;
> > + move_tgt_endian(attr.insn_cnt, (__u32)insn_cnt);
> > + move_tgt_endian(attr.prog_flags, load_attr->prog_flags);
> > +
> > + move_tgt_endian(attr.func_info_rec_size, load_attr->func_info_rec_size);
> > + move_tgt_endian(attr.func_info_cnt, load_attr->func_info_cnt);
>
> this is quite intrusive, maybe instead of imperative move_tgt_endian()
> macro, develop the one that could be used as
>
> attr.func_info_cnt = tgt_endian(load_attr->func_info_cnt);
I had considered this, but it's not reliable: the source variable's size may not
match the destination's, so the bswap would be improperly sized. Note above
that move_tgt_endian() uses the _destination_ variable to size the bswap.
While I completely agree this is intrusive, it's still safe and correct.
The other idea I played with is to leave the assignments alone and fix up
struct fields' endianness afterwards via macro. Something like:
attr.map_type = map_type;
attr.key_size = key_size;
attr.value_size = value_size;
attr.map_flags = map_attr->map_flags;
attr.map_extra = map_attr->map_extra;
BSWAP_FIELDS(attr, map_type, key_size, value_size, map_flags, map_extra);
But this would require some funky macro magic, possibly in a small header.
What do you think? Does something similar exist already in kernel sources?
>
> ? I.e., working as an expression, taking into account the need to swap
> and byte size of the argument. Should be doable.
>
> > + func_info = add_data(gen, load_attr->func_info, func_info_tot_sz);
> > + pr_debug("gen: prog_load: func_info: off %d cnt %d rec size %d\n",
> > + func_info, load_attr->func_info_cnt,
> > + load_attr->func_info_rec_size);
>
> [...]