On Wed, 4 Sept 2024 at 13:48, Fabiano Rosas <faro...@suse.de> wrote:
>
> In preparation for adding new payload types to multifd, move most of
> the no-compression code into multifd-nocomp.c. Let's try to keep a
> semblance of layering by not mixing general multifd control flow with
> the details of transmitting pages of ram.
>
> There are still some pieces leftover, namely the p->normal, p->zero,
> etc variables that we use for zero page tracking and the packet
> allocation which is heavily dependent on the ram code.
>
> Reviewed-by: Peter Xu <pet...@redhat.com>
> Signed-off-by: Fabiano Rosas <faro...@suse.de>

I know Coverity has only flagged this up because the
code has moved, but it seems like a good place to ask
the question:

> +void multifd_ram_fill_packet(MultiFDSendParams *p)
> +{
> +    MultiFDPacket_t *packet = p->packet;
> +    MultiFDPages_t *pages = &p->data->u.ram;
> +    uint32_t zero_num = pages->num - pages->normal_num;
> +
> +    packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
> +    packet->normal_pages = cpu_to_be32(pages->normal_num);
> +    packet->zero_pages = cpu_to_be32(zero_num);
> +
> +    if (pages->block) {
> +        strncpy(packet->ramblock, pages->block->idstr, 256);

Coverity points out that when we copy the RAMBlock::idstr into
packet->ramblock here, if the source idstr fills all 256 bytes with
no NUL terminator then strncpy() will leave packet->ramblock
unterminated as well (CID 1560071).

Is this really what is intended?

Perhaps
         pstrcpy(packet->ramblock, sizeof(packet->ramblock),
                 pages->block->idstr);

would be better?

(pstrcpy() will always NUL-terminate, and won't pointlessly
zero-fill the space after the string in the destination the way
strncpy() does.)
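
For reference, a minimal sketch of the semantic difference
(illustration only -- the real pstrcpy() lives in util/cutils.c,
declared in include/qemu/cutils.h):

    /* Sketch of pstrcpy()-style copying: copy at most buf_size - 1
     * bytes, stop at the source NUL, and always terminate. */
    static void pstrcpy_sketch(char *buf, int buf_size, const char *str)
    {
        int i;

        if (buf_size <= 0) {
            return;
        }
        for (i = 0; i < buf_size - 1 && str[i] != '\0'; i++) {
            buf[i] = str[i];
        }
        buf[i] = '\0'; /* guaranteed, unlike strncpy() */
    }

whereas strncpy(dst, src, n) leaves dst unterminated whenever
strlen(src) >= n, and zero-fills the rest of dst up to n bytes
otherwise.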

> +    }
> +
> +    for (int i = 0; i < pages->num; i++) {
> +        /* there are architectures where ram_addr_t is 32 bit */
> +        uint64_t temp = pages->offset[i];
> +
> +        packet->offset[i] = cpu_to_be64(temp);
> +    }
> +
> +    trace_multifd_send_ram_fill(p->id, pages->normal_num,
> +                                zero_num);
> +}

thanks
-- PMM
