On Mon, Jun 13, 2022 at 11:40 AM Ihar Hrachyshka <[email protected]>
wrote:
>
> On Thu, Jun 9, 2022 at 3:01 AM Han Zhou <[email protected]> wrote:
> >
> >
> >
> > On Tue, Jun 7, 2022 at 7:06 PM Ihar Hrachyshka <[email protected]> wrote:
> > >
> > > When options:activation-strategy is set to "rarp" for an LSP, in
> > > combination with multiple chassis names listed in
> > > options:requested-chassis, additional chassis will install special
> > > flows that block all ingress and egress traffic for the port until a
> > > special activation event happens.
> > >
> > > For "rarp" strategy, an observation of a RARP packet sent from the
port
> > > on the additional chassis is such an event. When it occurs, a special
> > > flow passes control to a controller() action handler that eventually
> > > removes the installed blocking flows and also marks the port as
> > > options:additional-chassis-activated in southbound db.
> > >
> > > This feature is useful in live migration scenarios where it's not
> > > advisable to unlock the destination port location prematurely, to
> > > avoid duplicate packets originating from the port.
> > >
> > > Signed-off-by: Ihar Hrachyshka <[email protected]>
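
For anyone following the thread, my understanding of the intended usage
(a sketch with made-up names, assuming the comma-separated
requested-chassis syntax introduced earlier in this series):

    ovn-nbctl lsp-set-options migrator \
        requested-chassis=hv1,hv2 \
        activation-strategy=rarp

With this, hv2 blocks all traffic for the port until it observes a RARP
sent from the port, as described in the commit message above.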
> >
> > Thanks for the revision. Please see my comments below:
> >
> > > +
> > > +static void
> > > +en_activated_ports_run(struct engine_node *node, void *data_)
> > > +{
> > > +    struct ed_type_activated_ports *data = data_;
> > > +    enum engine_node_state state = EN_UNCHANGED;
> > > +
> > > +    if (!data->activated_ports) {
> > > +        get_activated_ports(&data->activated_ports);
> > > +        if (data->activated_ports) {
> > > +            state = EN_UPDATED;
> > > +        }
> > > +    } else {
> >
> > This is not possible because en_activated_ports_clear_tracked_data()
> > will always be called before engine_run(), so this branch is not
> > really needed.
> >
> > > +        struct ovs_list *new_activated_ports;
> > > +        bool ap_changed = get_activated_ports(&new_activated_ports);
> > > +        if (ap_changed && new_activated_ports) {
> > > +            en_activated_ports_cleanup(data);
> > > +            data->activated_ports = new_activated_ports;
> > > +            state = EN_UPDATED;
> > > +        }
> > > +    }
> > > +    engine_set_node_state(node, state);
> > > +}
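
To illustrate the point: assuming clear_tracked_data() always resets
activated_ports to NULL before engine_run(), the handler could be
reduced to something like this (an untested sketch):

    static void
    en_activated_ports_run(struct engine_node *node, void *data_)
    {
        struct ed_type_activated_ports *data = data_;
        enum engine_node_state state = EN_UNCHANGED;

        /* Tracked data was cleared before engine_run(), so
         * data->activated_ports is always NULL at this point. */
        get_activated_ports(&data->activated_ports);
        if (data->activated_ports) {
            state = EN_UPDATED;
        }
        engine_set_node_state(node, state);
    }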
> > > +
> > > +static bool
> > > +runtime_data_activated_ports_handler(struct engine_node *node, void
*data)
> >
> > nit: better to put this function together with the runtime_data_xxx
> > functions, since this is how we organize the functions for each
> > engine node.
> >
>
> Hm, isn't it already? runtime_data_ovs_interface_shadow_handler comes
> right after this function.
>
That's because you added the new en_activated_ports data structures and
functions in the middle of the en_runtime_data functions :)
But I think there is no need for another revision just for this. Anyone
who applies the change can move the block of code to a proper place.

> > ...
> > > +static void
> > > +setup_rarp_activation_strategy(const struct sbrec_port_binding *binding,
> > > +                               ofp_port_t ofport, struct zone_ids *zone_ids,
> > > +                               struct ovn_desired_flow_table *flow_table,
> > > +                               struct ofpbuf *ofpacts_p)
> > > +{
> > > +    struct match match = MATCH_CATCHALL_INITIALIZER;
> > > +
> > > +    /* Unblock the port on ingress RARP. */
> > > +    match_set_dl_type(&match, htons(ETH_TYPE_RARP));
> > > +    match_set_in_port(&match, ofport);
> > > +    ofpbuf_clear(ofpacts_p);
> > > +
> > > +    load_logical_ingress_metadata(binding, zone_ids, ofpacts_p);
> > > +
> > > +    size_t ofs = ofpacts_p->size;
> > > +    struct ofpact_controller *oc = ofpact_put_CONTROLLER(ofpacts_p);
> > > +    oc->max_len = UINT16_MAX;
> > > +    oc->reason = OFPR_ACTION;
> > > +
> > > +    struct action_header ah = {
> > > +        .opcode = htonl(ACTION_OPCODE_ACTIVATION_STRATEGY_RARP)
> > > +    };
> > > +    ofpbuf_put(ofpacts_p, &ah, sizeof ah);
> > > +
> > > +    ofpacts_p->header = oc;
> > > +    oc->userdata_len = ofpacts_p->size - (ofs + sizeof *oc);
> > > +    ofpact_finish_CONTROLLER(ofpacts_p, &oc);
> > > +    put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, ofpacts_p);
> > > +
> > > +    ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 1010,
> > > +                    binding->header_.uuid.parts[0],
> > > +                    &match, ofpacts_p, &binding->header_.uuid);
> > > +    ofpbuf_clear(ofpacts_p);
> > > +
> > > +    /* Block all non-RARP traffic for the port, both directions. */
> > > +    match_init_catchall(&match);
> > > +    match_set_in_port(&match, ofport);
> > > +
> > > +    ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 1000,
> > > +                    binding->header_.uuid.parts[0],
> > > +                    &match, ofpacts_p, &binding->header_.uuid);
> > > +
> > > +    match_init_catchall(&match);
> > > +    uint32_t dp_key = binding->datapath->tunnel_key;
> > > +    uint32_t port_key = binding->tunnel_key;
> > > +    match_set_metadata(&match, htonll(dp_key));
> > > +    match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, port_key);
> > > +
> > > +    ofctrl_add_flow(flow_table, OFTABLE_LOG_TO_PHY, 1000,
> > > +                    binding->header_.uuid.parts[0],
> > > +                    &match, ofpacts_p, &binding->header_.uuid);
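
For other reviewers, the three flows installed above boil down to the
following (my paraphrase; table numbers as in the flow dump quoted
further down the thread):

    OFTABLE_PHY_TO_LOG (0),  prio 1010: in_port=<ofport>, rarp
        -> load ingress metadata, controller(ACTIVATION_STRATEGY_RARP),
           resubmit to the logical ingress pipeline
    OFTABLE_PHY_TO_LOG (0),  prio 1000: in_port=<ofport> -> drop
    OFTABLE_LOG_TO_PHY (65), prio 1000: metadata=<dp_key>,
                                        reg15=<port_key> -> drop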
> >
> > Recall my comment for v6:
> > >> What if another LSP on the same chassis (an additional chassis of
> > >> the migrating LSP) wants to send a packet to the migrating LSP on
> > >> the main chassis? Would it be blocked, too?
> >
> > > Packets are cloned to all chassis, delivered to all of them, and
> > > each of them decides on its own if it's ok to deliver it (depending
> > > on whether activation flows are still present). I'll update the test
> > > case to cover the scenario you describe.
> >
> > What I meant was this: the flow is added to a datapath of the LS,
> > with the outport matching the LSP that is being migrated. Consider
> > the example below: LSP1 is migrating from HV1 to HV2. Now LSP2, on
> > the same datapath and bound on HV2, sends a packet destined to LSP1;
> > it is expected that the packet is delivered to HV1 at this moment,
> > right? However, because of the above flow, it would match (both
> > dp_key and port_key), so the packet would be dropped. Did I
> > misunderstand?
> >
>
> There are two drop flows set up on HV2 related to activation strategy:
>
> cookie=0xc2723b3d, duration=2.150s, table=0, n_packets=0, n_bytes=0,
>     idle_age=2, priority=1000,in_port=3 actions=drop
> cookie=0xc2723b3d, duration=2.151s, table=65, n_packets=3, n_bytes=126,
>     idle_age=0, priority=1000,reg15=0x1,metadata=0x1 actions=drop
>
> The first flow is for traffic that originates from the unactivated
> port. The second flow is for traffic that is directed to the port.
> Your scenario involves the second flow. In this case, the table=37-39
> flows are triggered first, which will clone packets to HV1, among
> other things; then, once we reach table=65, delivery to the HV2
> endpoint will be blocked.
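
In other words, my paraphrase of the path for the scenario above:

    HV2: LSP2 ingress -> logical pipelines
         table=37..39: clone the packet over the tunnel to HV1
         table=65:     local delivery to Migrator dropped (blocking flow)
    HV1: packet arrives over the tunnel
         table=65:     delivered to Migrator (no blocking flows on the
                       main chassis)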
>
> I believe this scenario is already covered in
> "options:activation-strategy for logical port", specifically see the
> following check:
>
> # Packet from hv2:Second arrives to hv1:Migrator
> # hv2:Second cannot reach hv2:Migrator because it is blocked by RARP strategy
> request=$(send_arp hv2 second 000000000002 000000000010 $second_spa $migrator_spa)
> echo $request >> hv1/migrator.expected
>

You are right. Thanks for explaining, and sorry that my memory misled
me.

> > ...
> > >
> > > +static bool activated_ports_changed = false;
> > > +static struct ovs_list activated_ports = OVS_LIST_INITIALIZER(
> > > +    &activated_ports);
> > > +
> > > +bool
> > > +get_activated_ports(struct ovs_list **ap)
> > > +{
> > > +    ovs_mutex_lock(&pinctrl_mutex);
> > > +    if (ovs_list_is_empty(&activated_ports)) {
> > > +        activated_ports_changed = false;
> > > +        ovs_mutex_unlock(&pinctrl_mutex);
> > > +        *ap = NULL;
> > > +        return false;
> > > +    }
> > > +
> > > +    struct activated_port *pp;
> > > +    *ap = xmalloc(sizeof **ap);
> > > +    ovs_list_init(*ap);
> > > +
> > > +    LIST_FOR_EACH (pp, list, &activated_ports) {
> > > +        struct activated_port *pp_copy = xmalloc(sizeof *pp_copy);
> > > +        pp_copy->port_key = pp->port_key;
> > > +        pp_copy->dp_key = pp->dp_key;
> > > +        ovs_list_push_front(*ap, &pp_copy->list);
> > > +    }
> > > +    bool ap_changed = activated_ports_changed;
> >
> > Instead of a global flag, I guess we will need a per-entry flag in
> > the activated_ports list, to mark the entries as "consumed by I-P
> > engine" once they are returned by this function. And this function
> > should only return the entries that have not been returned before.
> > (Alternatively, just use two different lists.)
> > Otherwise, the list may contain entries that were consumed by the I-P
> > engine but have not yet made it to the SB DB, as well as entries that
> > have not been consumed by the I-P engine yet and may be reprocessed
> > by it. Is that desired?
>
> Yes, there's some redundancy in how this is implemented in this
> revision. I'll split it into two separate, initially duplicate, lists
> for the engine and the db, as you suggested.
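
A rough sketch of what the two-list variant could look like (names and
details are mine, not from the patch):

    /* Each activation event is queued twice: once for the SB DB update
     * and once for the I-P engine, so that each consumer can drain its
     * own copy independently. */
    static struct ovs_list activated_ports_for_db =
        OVS_LIST_INITIALIZER(&activated_ports_for_db);
    static struct ovs_list activated_ports_for_engine =
        OVS_LIST_INITIALIZER(&activated_ports_for_engine);

    static void
    queue_activated_port(uint32_t dp_key, uint32_t port_key)
        OVS_REQUIRES(pinctrl_mutex)
    {
        struct ovs_list *lists[] = {&activated_ports_for_db,
                                    &activated_ports_for_engine};
        for (size_t i = 0; i < ARRAY_SIZE(lists); i++) {
            struct activated_port *pp = xmalloc(sizeof *pp);
            pp->dp_key = dp_key;
            pp->port_key = port_key;
            ovs_list_push_front(lists[i], &pp->list);
        }
    }

get_activated_ports() would then simply hand the engine list over to the
caller (e.g. by splicing it into a caller-owned list) instead of copying
entries and tracking a global "changed" flag.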
>
Thanks,
Han

>
> >
> > > +    ovs_mutex_unlock(&pinctrl_mutex);
> > > +    return ap_changed;
> > > +}
> > > +
> >
> > Thanks,
> > Han
>