Hello Ben,

Thank you for the review. I have been thinking about possible solutions.

1) We could use non-temporal (non-cached) stores when backing up data. That 
would bypass the CPU cache and make the writes faster. In turn, reading the 
data back right after a write would be slower, but that should not be the 
regular case. However, non-temporal stores might not be available on all 
hardware platforms.
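
For illustration, a minimal sketch of such a copy using SSE2 non-temporal 
stores; the helper name and the assumption that both buffers are 16-byte 
aligned and the size is a multiple of 16 are mine:

#include <emmintrin.h>
#include <stddef.h>

static void
nontemporal_copy(void *dst, const void *src, size_t n)
{
    __m128i *d = dst;
    const __m128i *s = src;
    size_t i;

    /* Stream 16-byte chunks straight to memory, bypassing the cache.
     * Both 'dst' and 'src' must be 16-byte aligned. */
    for (i = 0; i < n / 16; i++) {
        _mm_stream_si128(&d[i], _mm_load_si128(&s[i]));
    }
    /* Order the streamed stores before any subsequent loads. */
    _mm_sfence();
}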

2) Sugesh came up with the idea of doing conditional backup and restore of 
data. We would keep track of whether the xlate flow, the wildcards and the 
base_flow are modified during translation and only copy the fields that 
changed. I think this would be the best choice, but it has a large code 
impact and would make code maintenance more complicated.
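
A hypothetical sketch of what the save side might look like, reusing the 
types from the code below; the dirty flags, the 'dirty' argument and the 
code that would set it during translation are illustrative only, and that 
tracking is where the large code impact would be:

enum xlate_dirty_flags {
    XLATE_DIRTY_FLOW      = 1 << 0,
    XLATE_DIRTY_WC        = 1 << 1,
    XLATE_DIRTY_BASE_FLOW = 1 << 2,
};

static void
store_ctx_xlate_data__(struct xlate_backup_data *data,
                       const struct xlate_ctx *ctx, uint32_t dirty)
{
    data->odp_actions_size = ctx->odp_actions->size;
    if (dirty & XLATE_DIRTY_WC) {
        data->wc_masks = ctx->wc->masks;
    }
    if (dirty & XLATE_DIRTY_FLOW) {
        data->flow = ctx->xin->flow;
    }
    if (dirty & XLATE_DIRTY_BASE_FLOW) {
        data->base_flow = ctx->base_flow;
    }
}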

3) Another option would be to split up the flow structure into partitions. 
Then, instead of copying the whole structure, each partition could be checked 
for changes using memcmp() and copied only when it differs. (The memcmp() 
still reads both copies, but the writes are skipped when nothing has 
changed.) We could use something like this:

#define FLOW_PART_LEN(start, end) \
    (offsetof(struct flow, end) - offsetof(struct flow, start))

#define COMPARE_FLOW_PART(dst, src, start, end) \
    memcmp(&(dst)->start, &(src)->start, FLOW_PART_LEN(start, end))

#define COPY_FLOW_PART(dst, src, start, end) \
    memcpy(&(dst)->start, &(src)->start, FLOW_PART_LEN(start, end))

static void
copy_altered_flow_data(struct flow *dst, const struct flow *src)
{
    BUILD_ASSERT_DECL(FLOW_WC_SEQ == 40);

    /* Metadata 1.  memcmp() returns nonzero when the parts differ, so
     * copy only in that case. */
    if (COMPARE_FLOW_PART(dst, src, tunnel, metadata)) {
        COPY_FLOW_PART(dst, src, tunnel, metadata);
    }

    /* Metadata 2 */
    if (COMPARE_FLOW_PART(dst, src, metadata, dl_dst)) {
        COPY_FLOW_PART(dst, src, metadata, dl_dst);
    }

    /* L2 */
    if (COMPARE_FLOW_PART(dst, src, dl_dst, nw_src)) {
        COPY_FLOW_PART(dst, src, dl_dst, nw_src);
    }

    /* L3 */
    if (COMPARE_FLOW_PART(dst, src, nw_src, tp_src)) {
        COPY_FLOW_PART(dst, src, nw_src, tp_src);
    }

    /* L4 */
    if (COMPARE_FLOW_PART(dst, src, tp_src, pad3)) {
        COPY_FLOW_PART(dst, src, tp_src, pad3);
    }
}

static void
store_ctx_xlate_data(struct xlate_backup_data *data,
                     const struct xlate_ctx *ctx, bool force)
{
    data->odp_actions_size = ctx->odp_actions->size;
    if (force) {
        data->wc_masks = ctx->wc->masks;
        data->flow = ctx->xin->flow;
        data->base_flow = ctx->base_flow;
    } else {
        copy_altered_flow_data(&data->wc_masks, &ctx->wc->masks);
        copy_altered_flow_data(&data->flow, &ctx->xin->flow);
        copy_altered_flow_data(&data->base_flow, &ctx->base_flow);
    }
}

What do you think?

Best regards,
Zoltan


> -----Original Message-----
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Tuesday, October 31, 2017 9:30 PM
> To: Zoltán Balogh <zoltan.bal...@ericsson.com>
> Cc: d...@openvswitch.org
> Subject: Re: [ovs-dev] [PATCH 2/2] xlate: normalize the actions after 
> translation
> 
> On Mon, Oct 30, 2017 at 12:22:50PM +0100, Zoltan Balogh wrote:
> > When all OF actions have been translated, there could be actions at
> > the end of list of odp actions which are not needed to be executed.
> > So, the list can be normalized at the end of xlate_actions().
> >
> > Signed-off-by: Zoltan Balogh <zoltan.bal...@ericsson.com>
> > Signed-off-by: Sugesh Chandran <sugesh.chand...@intel.com>
> > Co-authored-by: Sugesh Chandran <sugesh.chand...@intel.com>
> > Tested-by: Sugesh Chandran <sugesh.chand...@intel.com>
> 
> Thanks for working on this.  I wasn't previously aware that there was a
> problem, but now I see what you mean.
> 
> The description in 0/2 is necessary to properly understand the problem,
> but that description will get lost when the patches are committed.  I
> recommend adding all or most of it to patch 1.
> 
> This approach to saving and restoring is going to be super-expensive
> during translation.  On i386, struct xlate_backup_data is 1996 bytes,
> and as I read the patch, that's getting copied every time we visit a new
> OpenFlow table.  Do you have any idea about how to make it cheaper, or
> how to make fewer backups?
> 
> Thanks,
> 
> Ben.