On Wed, Feb 5, 2020 at 10:11 AM Dincer Beken <dbe...@blackned.de> wrote:
>
> Hello all,
>
> I am not entirely new to Open vSwitch and am implementing new actions to
> realize the TS 25.446 SYNC protocol from 3GPP, in order to build a BM-SC
> component for LTE multicast (eMBMS). I lack the experience to judge the
> following issue myself and would kindly ask for your opinion. Excuse me in
> advance if this e-mail does not follow the conventions of the mailing list.
> Here we go..
>
> When a MISS_UPCALL is triggered by the kernel, handling it causes two
> netlink operations: first, the installation of a new flow in the datapath
> via OVS_FLOW_CMD_NEW, and then the execution of the missed packet via
> OVS_PACKET_CMD_EXECUTE.
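>
> To make that sequence concrete, here is a minimal sketch of the two
> messages, written against libnl-3 and the uapi constants from
> <linux/openvswitch.h> (this is not the actual ovs-vswitchd code, which
> lives in lib/dpif-netlink.c and uses OVS's own netlink helpers instead of
> libnl; attribute payloads and error handling are trimmed):
>
> #include <netlink/netlink.h>
> #include <netlink/genl/genl.h>
> #include <linux/openvswitch.h>
>
> static int handle_miss(struct nl_sock *sk, int flow_family,
>                        int packet_family, int dp_ifindex,
>                        const void *frame, size_t frame_len)
> {
>     struct nl_msg *msg;
>     struct ovs_header *hdr;
>     int err;
>
>     /* 1) OVS_FLOW_CMD_NEW: install the flow in the datapath. */
>     msg = nlmsg_alloc();
>     hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, flow_family,
>                       sizeof(*hdr), NLM_F_REQUEST | NLM_F_ACK,
>                       OVS_FLOW_CMD_NEW, OVS_FLOW_VERSION);
>     hdr->dp_ifindex = dp_ifindex;
>     /* OVS_FLOW_ATTR_KEY, OVS_FLOW_ATTR_MASK, OVS_FLOW_ATTR_ACTIONS and
>      * OVS_FLOW_ATTR_UFID would be filled in here with nla_put(). */
>     err = nl_send_auto(sk, msg);
>     nlmsg_free(msg);
>     if (err < 0)
>         return err;
>
>     /* 2) OVS_PACKET_CMD_EXECUTE: run the missed packet itself through
>      *    the actions. */
>     msg = nlmsg_alloc();
>     hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, packet_family,
>                       sizeof(*hdr), NLM_F_REQUEST,
>                       OVS_PACKET_CMD_EXECUTE, OVS_PACKET_VERSION);
>     hdr->dp_ifindex = dp_ifindex;
>     nla_put(msg, OVS_PACKET_ATTR_PACKET, frame_len, frame);
>     /* OVS_PACKET_ATTR_KEY and OVS_PACKET_ATTR_ACTIONS go here too. */
>     err = nl_send_auto(sk, msg);
>     nlmsg_free(msg);
>     return err;
> }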
>
> From the code, it seems to me that these two operations always happen in
> this order. Given that, why does ovs_packet_cmd_execute not check for the
> flow that was just installed with OVS_FLOW_CMD_NEW and, if it exists, use
> it directly, instead of creating a temporary flow every time? One issue
> this causes is that the installed flow lacks the statistics of the first
> packet (its packet and byte counters).
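>
> For reference, what ovs_packet_cmd_execute does today in
> net/openvswitch/datapath.c looks roughly like this (paraphrased from the
> kernel source, error handling elided):
>
> /* Paraphrased sketch of the current ovs_packet_cmd_execute(). */
> flow = ovs_flow_alloc();                          /* temporary flow */
> ovs_flow_key_extract_userspace(net, a[OVS_PACKET_ATTR_KEY],
>                                packet, &flow->key, log);
> ovs_nla_copy_actions(net, a[OVS_PACKET_ATTR_ACTIONS],
>                      &flow->key, &acts, log);
> rcu_assign_pointer(flow->sf_acts, acts);
>
> rcu_read_lock();
> dp = get_dp_rcu(net, ovs_header->dp_ifindex);
> err = ovs_execute_actions(dp, packet, rcu_dereference(flow->sf_acts),
>                           &flow->key);
> rcu_read_unlock();
>
> ovs_flow_free(flow, false);   /* the temporary flow, and with it the
>                                * first packet's stats, is thrown away */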
>
> By also adding the UFID to the OVS_PACKET_CMD_EXECUTE message, it is
> possible to look up and reuse the flow installed by OVS_FLOW_CMD_NEW, and
> effectively to update that flow's statistics.
>
Another option would be to add a new command that installs the flow and
executes the packet in the same netlink msg. That would save us a netlink
msg when handling a miss upcall. What do you think about it?
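
To illustrate, the new command could look something like this at the uapi
level (the name and exact semantics here are hypothetical, just to show
the shape of the idea):

/* Hypothetical addition to enum ovs_packet_cmd in
 * include/uapi/linux/openvswitch.h. */
enum ovs_packet_cmd {
        OVS_PACKET_CMD_UNSPEC,

        /* Kernel-to-user notifications. */
        OVS_PACKET_CMD_MISS,    /* Flow table miss. */
        OVS_PACKET_CMD_ACTION,  /* OVS_ACTION_ATTR_USERSPACE action. */

        /* Userspace commands. */
        OVS_PACKET_CMD_EXECUTE, /* Apply actions to a packet. */

        /* New: install the flow described by the key/mask/actions
         * attributes, then execute the packet against it, all in a
         * single netlink transaction. */
        OVS_PACKET_CMD_EXECUTE_AND_INSTALL
};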


> So as an example, I added the following code, with which I could reuse the 
> flow generated via OVS_FLOW_CMD_NEW. Nothing new here.
>
> if (a[OVS_FLOW_ATTR_UFID]) {
>     /* Extract the flow identifier sent along with the execute
>      * request. */
>     struct sw_flow_id sw_flow_id;
>     struct sw_flow_key sw_flow_key;
>
>     err = ovs_nla_get_identifier(&sw_flow_id, a[OVS_FLOW_ATTR_UFID],
>                                  &sw_flow_key, log);
>     if (err)
>         goto err_kfree_skb;
>
>     rcu_read_lock();
>     dp = get_dp_rcu(net, ovs_header->dp_ifindex);
>     err = -ENODEV;
>     if (!dp)
>         goto err_unlock;
>
>     /* Check whether this flow was already installed via
>      * OVS_FLOW_CMD_NEW. */
>     flow = NULL;
>     if (ovs_identifier_is_ufid(&sw_flow_id))
>         flow = ovs_flow_tbl_lookup_ufid(&dp->table, &sw_flow_id);
>
>     if (flow) {
>         struct sw_flow_actions *sf_acts;
>         struct dp_stats_percpu *stats;
>         struct sw_flow_key packet_key = {{ 0 }};
>
>         /* Extract this packet's key, so its TCP flags are accounted
>          * correctly. */
>         err = ovs_flow_key_extract_userspace(net, a[OVS_PACKET_ATTR_KEY],
>                                              packet, &packet_key, log);
>         if (err)
>             goto err_unlock;
>
>         /* Credit the packet to the installed flow instead of a
>          * temporary one, so the first packet shows up in the flow's
>          * statistics. */
>         ovs_flow_stats_update(flow, packet_key.tp.flags, packet);
>
>         sf_acts = rcu_dereference(flow->sf_acts);
>         err = ovs_execute_actions(dp, packet, sf_acts, &flow->key);
>
>         /* Update the per-CPU datapath statistics. */
>         stats = this_cpu_ptr(dp->stats_percpu);
>         u64_stats_update_begin(&stats->syncp);
>         stats->n_hit++;
>         u64_stats_update_end(&stats->syncp);
>
>         pr_err("Successfully handled the packet via the existing flow of the UFID.\n");
>
>         rcu_read_unlock();
>         return err;
>     }
>
>     /* No flow with this UFID installed yet: fall through to the
>      * existing code path, which builds a temporary flow. */
>     rcu_read_unlock();
> }
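>
> The userspace counterpart of this experiment is small: when encoding the
> OVS_PACKET_CMD_EXECUTE request, also emit the UFID that was just used
> for the OVS_FLOW_CMD_NEW. A sketch against OVS's lib/netlink.h helpers
> (the exact spot in lib/dpif-netlink.c, and whether one would rather
> define a proper OVS_PACKET_ATTR_UFID instead of reusing the flow
> attribute number, is open for discussion):
>
> /* 'buf' is the ofpbuf holding the execute request; 'ufid' is the
>  * ovs_u128 already generated for the flow install. */
> nl_msg_put_unspec(buf, OVS_FLOW_ATTR_UFID, &ufid, sizeof ufid);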
>
> I assume that ovs_packet_cmd_execute creates those temporary flows for a
> reason, since this seems like a rather obvious issue. So what I would
> basically like to know is whether there are performance-related or other
> aspects (such as the cost of adding the 128-bit UFID as a netlink
> attribute) behind always creating a new flow.
>
> Regards,
> Dincer Beken
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
