Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
On 11/23/22 15:32, Lorenzo Bianconi wrote:
> On Nov 23, Dumitru Ceara wrote:
>> On 11/23/22 15:26, Lorenzo Bianconi wrote:
>>> /* Called with in the pinctrl_handler thread context. */
>>> static int
>>> pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>>                                 const struct match *md, bool is_arp)
>>>     OVS_REQUIRES(pinctrl_mutex)
>>> {
>>> -    struct buffered_packets *bp;
>>> -    struct dp_packet *clone;
>>> -    struct in6_addr addr;
>>> -
>>> -    if (is_arp) {
>>> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>>> -    } else {
>>> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>>> -        memcpy(&addr, &ip6, sizeof addr);
>>> -    }
>>> -
>>> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
>>> -    bp = pinctrl_find_buffered_packets(&addr, hash);
>>> +    uint64_t dp_key = ntohll(md->flow.metadata);
>>> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
>>> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
>>> +    struct buffered_packets *bp
>>> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>>>      if (!bp) {
>>>          if (hmap_count(&buffered_packets_map) >= 1000) {
>>>
>>>> Before your patch we would hit this hard-coded limit if there were 1000
>>>> next hops for which we were buffering packets.
>>>>
>>>> Now, with your change, we will hit this limit only if there are 1000
>>>> ports for which we are buffering packets. This seems very unlikely and
>>>> I wonder if this will ever happen in practice. Do we need to change its
>>>> value? Also, let's add a define for it somewhere to make it a bit less
>>>> "magic".
>>>
>>> ack. I would prefer to be a bit more conservative and keep the condition.
>>> I'll leave the right value to use up to you.
>>
>> Well, if you want to keep the exact same condition you'd have to count
>> how many individual next hops we're buffering packets for. Is that an
>> option?
>
> I am fine to just keep a condition that limits the max size of
> buffered_packets_map, but if you think that is really not important we
> can drop it.

I think I wasn't clear, sorry.

I think we still need that condition to protect ovn-controller. We just
need to adapt the code a bit so that we stop buffering packets if we're
already buffering for a max (1000) number of next hops. This is what we
had before your patch.

Regards,
Dumitru

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
On Nov 23, Dumitru Ceara wrote:
> On 11/23/22 15:26, Lorenzo Bianconi wrote:
>> /* Called with in the pinctrl_handler thread context. */
>> static int
>> pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>                                 const struct match *md, bool is_arp)
>>     OVS_REQUIRES(pinctrl_mutex)
>> {
>> -    struct buffered_packets *bp;
>> -    struct dp_packet *clone;
>> -    struct in6_addr addr;
>> -
>> -    if (is_arp) {
>> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>> -    } else {
>> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>> -        memcpy(&addr, &ip6, sizeof addr);
>> -    }
>> -
>> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
>> -    bp = pinctrl_find_buffered_packets(&addr, hash);
>> +    uint64_t dp_key = ntohll(md->flow.metadata);
>> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
>> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
>> +    struct buffered_packets *bp
>> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>>      if (!bp) {
>>          if (hmap_count(&buffered_packets_map) >= 1000) {
>>
>>> Before your patch we would hit this hard-coded limit if there were 1000
>>> next hops for which we were buffering packets.
>>>
>>> Now, with your change, we will hit this limit only if there are 1000
>>> ports for which we are buffering packets. This seems very unlikely and
>>> I wonder if this will ever happen in practice. Do we need to change its
>>> value? Also, let's add a define for it somewhere to make it a bit less
>>> "magic".
>>
>> ack. I would prefer to be a bit more conservative and keep the condition.
>> I'll leave the right value to use up to you.
>
> Well, if you want to keep the exact same condition you'd have to count
> how many individual next hops we're buffering packets for. Is that an
> option?

I am fine to just keep a condition that limits the max size of
buffered_packets_map, but if you think that is really not important we can
drop it.

Regards,
Lorenzo

> Thanks,
> Dumitru
Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
On 11/23/22 15:26, Lorenzo Bianconi wrote:
> /* Called with in the pinctrl_handler thread context. */
> static int
> pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>                                 const struct match *md, bool is_arp)
>     OVS_REQUIRES(pinctrl_mutex)
> {
> -    struct buffered_packets *bp;
> -    struct dp_packet *clone;
> -    struct in6_addr addr;
> -
> -    if (is_arp) {
> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
> -    } else {
> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
> -        memcpy(&addr, &ip6, sizeof addr);
> -    }
> -
> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
> -    bp = pinctrl_find_buffered_packets(&addr, hash);
> +    uint64_t dp_key = ntohll(md->flow.metadata);
> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
> +    struct buffered_packets *bp
> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>      if (!bp) {
>          if (hmap_count(&buffered_packets_map) >= 1000) {
>
>> Before your patch we would hit this hard-coded limit if there were 1000
>> next hops for which we were buffering packets.
>>
>> Now, with your change, we will hit this limit only if there are 1000
>> ports for which we are buffering packets. This seems very unlikely and
>> I wonder if this will ever happen in practice. Do we need to change its
>> value? Also, let's add a define for it somewhere to make it a bit less
>> "magic".
>
> ack. I would prefer to be a bit more conservative and keep the condition.
> I'll leave the right value to use up to you.

Well, if you want to keep the exact same condition you'd have to count
how many individual next hops we're buffering packets for. Is that an
option?

Thanks,
Dumitru
Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
> On 11/15/22 10:44, Lorenzo Bianconi wrote:
>> On Mon, Oct 24, 2022 at 11:29 AM Lorenzo Bianconi
>> <lorenzo.bianc...@redhat.com> wrote:
>>
>>> Improve buffered packet management in ovn-controller, avoiding useless
>>> loops in the run_buffered_binding routine and using the datapath key
>>> and output port key as the buffered_packets_map hashmap hash. Add a
>>> new selftest for buffered packets.
>>>
>>> Signed-off-by: Lorenzo Bianconi
>>> ---
>
> Hi Lorenzo,
>
> Thanks for the patch! I have a few comments below.
>
>>> Changes since v2:
>>> - improve hash function
>>> Changes since v1:
>>> - improve code readability
>>> ---
>>>  controller/pinctrl.c | 118
>>>  tests/system-ovn.at  | 124 +++
>>>  2 files changed, 208 insertions(+), 34 deletions(-)
>>>
>>> diff --git a/controller/pinctrl.c b/controller/pinctrl.c
>>> index 8859cb080..dfcd0cea8 100644
>>> --- a/controller/pinctrl.c
>>> +++ b/controller/pinctrl.c
>>> @@ -182,6 +182,7 @@ static void destroy_buffered_packets_map(void);
>>>  static void
>>>  run_buffered_binding(struct ovsdb_idl_index *sbrec_mac_binding_by_lport_ip,
>>>                       struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
>>> +                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
>>>                       const struct hmap *local_datapaths)
>>>      OVS_REQUIRES(pinctrl_mutex);
>>>
>>> @@ -1430,6 +1431,9 @@ struct buffered_packets {
>>>      struct in6_addr ip;
>>>      struct eth_addr ea;
>>>
>>> +    uint64_t dp_key;
>>> +    uint64_t port_key;
>>> +
>>>      long long int timestamp;
>>>
>>>      struct buffer_info data[BUFFER_QUEUE_DEPTH];
>>> @@ -1556,38 +1560,38 @@ buffered_packets_map_gc(void)
>>>  }
>>>
>>>  static struct buffered_packets *
>>> -pinctrl_find_buffered_packets(const struct in6_addr *ip, uint32_t hash)
>>> +pinctrl_find_buffered_packets(uint64_t dp_key, uint64_t port_key,
>>> +                              uint32_t hash)
>>>  {
>>>      struct buffered_packets *qp;
>>> -
>>> -    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash,
>>> -                             &buffered_packets_map) {
>>> -        if (IN6_ARE_ADDR_EQUAL(&qp->ip, ip)) {
>>> +    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash, &buffered_packets_map) {
>>> +        if (qp->dp_key == dp_key && qp->port_key == port_key) {
>>>              return qp;
>>>          }
>>>      }
>>>      return NULL;
>>>  }
>>>
>>> +static uint32_t
>>> +pinctrl_buffer_apcket_hash(uint64_t dp_key, uint64_t port_key)
>>
>> nit: typo in the function name "pinctrl_buffer_apcket_hash" ->
>> "pinctrl_buffer_packet_hash".
>> I guess this can be fixed during merge, let's see if others agree.
>>
>>> +{
>>> +    uint32_t hash = 0;
>>> +    hash = hash_add64(hash, port_key);
>>> +    hash = hash_add64(hash, dp_key);
>>> +    return hash_finish(hash, 16);
>>> +}
>>> +
>>>  /* Called with in the pinctrl_handler thread context. */
>>>  static int
>>>  pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>>                                  const struct match *md, bool is_arp)
>>>      OVS_REQUIRES(pinctrl_mutex)
>>>  {
>>> -    struct buffered_packets *bp;
>>> -    struct dp_packet *clone;
>>> -    struct in6_addr addr;
>>> -
>>> -    if (is_arp) {
>>> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>>> -    } else {
>>> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>>> -        memcpy(&addr, &ip6, sizeof addr);
>>> -    }
>>> -
>>> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
>>> -    bp = pinctrl_find_buffered_packets(&addr, hash);
>>> +    uint64_t dp_key = ntohll(md->flow.metadata);
>>> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
>>> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
>>> +    struct buffered_packets *bp
>>> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>>>      if (!bp) {
>>>          if (hmap_count(&buffered_packets_map) >= 1000) {
>
> Before your patch we would hit this hard-coded limit if there were 1000
> next hops for which we were buffering packets.
>
> Now, with your change, we will hit this limit only if there are 1000
> ports for which we are buffering packets. This seems very unlikely and I
> wonder if this will ever happen in practice. Do we need to change its
> value? Also, let's add a define for it somewhere to make it a bit less
> "magic".

ack. I would prefer to be a bit more conservative and keep the condition.
I'll leave the right value to use up to you. :)

>>>              COVERAGE_INC(pinctrl_drop_buffered_packets_map);
>>> @@ -1597,12 +1601,20 @@ pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>>      bp = xmalloc(sizeof *bp);
>>>      hmap_insert(&buffered_packets_map, &bp->hmap_node,
Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
On 11/15/22 10:44, Lorenzo Bianconi wrote:
>> On Mon, Oct 24, 2022 at 11:29 AM Lorenzo Bianconi
>> <lorenzo.bianc...@redhat.com> wrote:
>>
>>> Improve buffered packet management in ovn-controller, avoiding useless
>>> loops in the run_buffered_binding routine and using the datapath key
>>> and output port key as the buffered_packets_map hashmap hash. Add a
>>> new selftest for buffered packets.
>>>
>>> Signed-off-by: Lorenzo Bianconi
>>> ---

Hi Lorenzo,

Thanks for the patch! I have a few comments below.

>>> Changes since v2:
>>> - improve hash function
>>> Changes since v1:
>>> - improve code readability
>>> ---
>>>  controller/pinctrl.c | 118
>>>  tests/system-ovn.at  | 124 +++
>>>  2 files changed, 208 insertions(+), 34 deletions(-)
>>>
>>> diff --git a/controller/pinctrl.c b/controller/pinctrl.c
>>> index 8859cb080..dfcd0cea8 100644
>>> --- a/controller/pinctrl.c
>>> +++ b/controller/pinctrl.c
>>> @@ -182,6 +182,7 @@ static void destroy_buffered_packets_map(void);
>>>  static void
>>>  run_buffered_binding(struct ovsdb_idl_index *sbrec_mac_binding_by_lport_ip,
>>>                       struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
>>> +                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
>>>                       const struct hmap *local_datapaths)
>>>      OVS_REQUIRES(pinctrl_mutex);
>>>
>>> @@ -1430,6 +1431,9 @@ struct buffered_packets {
>>>      struct in6_addr ip;
>>>      struct eth_addr ea;
>>>
>>> +    uint64_t dp_key;
>>> +    uint64_t port_key;
>>> +
>>>      long long int timestamp;
>>>
>>>      struct buffer_info data[BUFFER_QUEUE_DEPTH];
>>> @@ -1556,38 +1560,38 @@ buffered_packets_map_gc(void)
>>>  }
>>>
>>>  static struct buffered_packets *
>>> -pinctrl_find_buffered_packets(const struct in6_addr *ip, uint32_t hash)
>>> +pinctrl_find_buffered_packets(uint64_t dp_key, uint64_t port_key,
>>> +                              uint32_t hash)
>>>  {
>>>      struct buffered_packets *qp;
>>> -
>>> -    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash,
>>> -                             &buffered_packets_map) {
>>> -        if (IN6_ARE_ADDR_EQUAL(&qp->ip, ip)) {
>>> +    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash, &buffered_packets_map) {
>>> +        if (qp->dp_key == dp_key && qp->port_key == port_key) {
>>>              return qp;
>>>          }
>>>      }
>>>      return NULL;
>>>  }
>>>
>>> +static uint32_t
>>> +pinctrl_buffer_apcket_hash(uint64_t dp_key, uint64_t port_key)
>>
>> nit: typo in the function name "pinctrl_buffer_apcket_hash" ->
>> "pinctrl_buffer_packet_hash".
>> I guess this can be fixed during merge, let's see if others agree.
>>
>>> +{
>>> +    uint32_t hash = 0;
>>> +    hash = hash_add64(hash, port_key);
>>> +    hash = hash_add64(hash, dp_key);
>>> +    return hash_finish(hash, 16);
>>> +}
>>> +
>>>  /* Called with in the pinctrl_handler thread context. */
>>>  static int
>>>  pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>>                                  const struct match *md, bool is_arp)
>>>      OVS_REQUIRES(pinctrl_mutex)
>>>  {
>>> -    struct buffered_packets *bp;
>>> -    struct dp_packet *clone;
>>> -    struct in6_addr addr;
>>> -
>>> -    if (is_arp) {
>>> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>>> -    } else {
>>> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>>> -        memcpy(&addr, &ip6, sizeof addr);
>>> -    }
>>> -
>>> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
>>> -    bp = pinctrl_find_buffered_packets(&addr, hash);
>>> +    uint64_t dp_key = ntohll(md->flow.metadata);
>>> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
>>> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
>>> +    struct buffered_packets *bp
>>> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>>>      if (!bp) {
>>>          if (hmap_count(&buffered_packets_map) >= 1000) {

Before your patch we would hit this hard-coded limit if there were 1000
next hops for which we were buffering packets.

Now, with your change, we will hit this limit only if there are 1000
ports for which we are buffering packets. This seems very unlikely and I
wonder if this will ever happen in practice. Do we need to change its
value? Also, let's add a define for it somewhere to make it a bit less
"magic".

>>>              COVERAGE_INC(pinctrl_drop_buffered_packets_map);
>>> @@ -1597,12 +1601,20 @@ pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>>      bp = xmalloc(sizeof *bp);
>>>      hmap_insert(&buffered_packets_map, &bp->hmap_node, hash);
>>>      bp->head = bp->tail = 0;
>>> -    bp->ip = addr;
>>> +    if (is_arp) {
>>> +        bp->ip = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>>> +    } else {
>>> +        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>>> +        memcpy(&bp->ip, &ip6, sizeof bp->ip);
>>> +    }
>>> +    bp->dp_key = dp_key;
>>> +    bp->port_key
Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
> On Mon, Oct 24, 2022 at 11:29 AM Lorenzo Bianconi
> <lorenzo.bianc...@redhat.com> wrote:
>
>> Improve buffered packet management in ovn-controller, avoiding useless
>> loops in the run_buffered_binding routine and using the datapath key
>> and output port key as the buffered_packets_map hashmap hash. Add a
>> new selftest for buffered packets.
>>
>> Signed-off-by: Lorenzo Bianconi
>> ---
>> Changes since v2:
>> - improve hash function
>> Changes since v1:
>> - improve code readability
>> ---
>>  controller/pinctrl.c | 118
>>  tests/system-ovn.at  | 124 +++
>>  2 files changed, 208 insertions(+), 34 deletions(-)
>>
>> diff --git a/controller/pinctrl.c b/controller/pinctrl.c
>> index 8859cb080..dfcd0cea8 100644
>> --- a/controller/pinctrl.c
>> +++ b/controller/pinctrl.c
>> @@ -182,6 +182,7 @@ static void destroy_buffered_packets_map(void);
>>  static void
>>  run_buffered_binding(struct ovsdb_idl_index *sbrec_mac_binding_by_lport_ip,
>>                       struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
>> +                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
>>                       const struct hmap *local_datapaths)
>>      OVS_REQUIRES(pinctrl_mutex);
>>
>> @@ -1430,6 +1431,9 @@ struct buffered_packets {
>>      struct in6_addr ip;
>>      struct eth_addr ea;
>>
>> +    uint64_t dp_key;
>> +    uint64_t port_key;
>> +
>>      long long int timestamp;
>>
>>      struct buffer_info data[BUFFER_QUEUE_DEPTH];
>> @@ -1556,38 +1560,38 @@ buffered_packets_map_gc(void)
>>  }
>>
>>  static struct buffered_packets *
>> -pinctrl_find_buffered_packets(const struct in6_addr *ip, uint32_t hash)
>> +pinctrl_find_buffered_packets(uint64_t dp_key, uint64_t port_key,
>> +                              uint32_t hash)
>>  {
>>      struct buffered_packets *qp;
>> -
>> -    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash,
>> -                             &buffered_packets_map) {
>> -        if (IN6_ARE_ADDR_EQUAL(&qp->ip, ip)) {
>> +    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash, &buffered_packets_map) {
>> +        if (qp->dp_key == dp_key && qp->port_key == port_key) {
>>              return qp;
>>          }
>>      }
>>      return NULL;
>>  }
>>
>> +static uint32_t
>> +pinctrl_buffer_apcket_hash(uint64_t dp_key, uint64_t port_key)
>
> nit: typo in the function name "pinctrl_buffer_apcket_hash" ->
> "pinctrl_buffer_packet_hash".
> I guess this can be fixed during merge, let's see if others agree.
>
>> +{
>> +    uint32_t hash = 0;
>> +    hash = hash_add64(hash, port_key);
>> +    hash = hash_add64(hash, dp_key);
>> +    return hash_finish(hash, 16);
>> +}
>> +
>>  /* Called with in the pinctrl_handler thread context. */
>>  static int
>>  pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>                                  const struct match *md, bool is_arp)
>>      OVS_REQUIRES(pinctrl_mutex)
>>  {
>> -    struct buffered_packets *bp;
>> -    struct dp_packet *clone;
>> -    struct in6_addr addr;
>> -
>> -    if (is_arp) {
>> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>> -    } else {
>> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>> -        memcpy(&addr, &ip6, sizeof addr);
>> -    }
>> -
>> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
>> -    bp = pinctrl_find_buffered_packets(&addr, hash);
>> +    uint64_t dp_key = ntohll(md->flow.metadata);
>> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
>> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
>> +    struct buffered_packets *bp
>> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>>      if (!bp) {
>>          if (hmap_count(&buffered_packets_map) >= 1000) {
>>              COVERAGE_INC(pinctrl_drop_buffered_packets_map);
>> @@ -1597,12 +1601,20 @@ pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>>      bp = xmalloc(sizeof *bp);
>>      hmap_insert(&buffered_packets_map, &bp->hmap_node, hash);
>>      bp->head = bp->tail = 0;
>> -    bp->ip = addr;
>> +    if (is_arp) {
>> +        bp->ip = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
>> +    } else {
>> +        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
>> +        memcpy(&bp->ip, &ip6, sizeof bp->ip);
>> +    }
>> +    bp->dp_key = dp_key;
>> +    bp->port_key = oport_key;
>>  }
>>  bp->timestamp = time_msec();
>>  /* clone the packet to send it later with correct L2 address */
>> -    clone = dp_packet_clone_data(dp_packet_data(pkt_in),
>> -                                 dp_packet_size(pkt_in));
>> +    struct dp_packet *clone
>> +        = dp_packet_clone_data(dp_packet_data(pkt_in),
>> +                               dp_packet_size(pkt_in));
>>  buffered_push_packet(bp, clone, md);
>>
>>  return 0;
>> @@ -3586,6 +3598,7 @@ pinctrl_run(struct
Re: [ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
On Mon, Oct 24, 2022 at 11:29 AM Lorenzo Bianconi
<lorenzo.bianc...@redhat.com> wrote:

> Improve buffered packet management in ovn-controller, avoiding useless
> loops in the run_buffered_binding routine and using the datapath key
> and output port key as the buffered_packets_map hashmap hash. Add a
> new selftest for buffered packets.
>
> Signed-off-by: Lorenzo Bianconi
> ---
> Changes since v2:
> - improve hash function
> Changes since v1:
> - improve code readability
> ---
>  controller/pinctrl.c | 118
>  tests/system-ovn.at  | 124 +++
>  2 files changed, 208 insertions(+), 34 deletions(-)
>
> diff --git a/controller/pinctrl.c b/controller/pinctrl.c
> index 8859cb080..dfcd0cea8 100644
> --- a/controller/pinctrl.c
> +++ b/controller/pinctrl.c
> @@ -182,6 +182,7 @@ static void destroy_buffered_packets_map(void);
>  static void
>  run_buffered_binding(struct ovsdb_idl_index *sbrec_mac_binding_by_lport_ip,
>                       struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
>                       const struct hmap *local_datapaths)
>      OVS_REQUIRES(pinctrl_mutex);
>
> @@ -1430,6 +1431,9 @@ struct buffered_packets {
>      struct in6_addr ip;
>      struct eth_addr ea;
>
> +    uint64_t dp_key;
> +    uint64_t port_key;
> +
>      long long int timestamp;
>
>      struct buffer_info data[BUFFER_QUEUE_DEPTH];
> @@ -1556,38 +1560,38 @@ buffered_packets_map_gc(void)
>  }
>
>  static struct buffered_packets *
> -pinctrl_find_buffered_packets(const struct in6_addr *ip, uint32_t hash)
> +pinctrl_find_buffered_packets(uint64_t dp_key, uint64_t port_key,
> +                              uint32_t hash)
>  {
>      struct buffered_packets *qp;
> -
> -    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash,
> -                             &buffered_packets_map) {
> -        if (IN6_ARE_ADDR_EQUAL(&qp->ip, ip)) {
> +    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash, &buffered_packets_map) {
> +        if (qp->dp_key == dp_key && qp->port_key == port_key) {
>              return qp;
>          }
>      }
>      return NULL;
>  }
>
> +static uint32_t
> +pinctrl_buffer_apcket_hash(uint64_t dp_key, uint64_t port_key)

nit: typo in the function name "pinctrl_buffer_apcket_hash" ->
"pinctrl_buffer_packet_hash".
I guess this can be fixed during merge, let's see if others agree.

> +{
> +    uint32_t hash = 0;
> +    hash = hash_add64(hash, port_key);
> +    hash = hash_add64(hash, dp_key);
> +    return hash_finish(hash, 16);
> +}
> +
>  /* Called with in the pinctrl_handler thread context. */
>  static int
>  pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>                                  const struct match *md, bool is_arp)
>      OVS_REQUIRES(pinctrl_mutex)
>  {
> -    struct buffered_packets *bp;
> -    struct dp_packet *clone;
> -    struct in6_addr addr;
> -
> -    if (is_arp) {
> -        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
> -    } else {
> -        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
> -        memcpy(&addr, &ip6, sizeof addr);
> -    }
> -
> -    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
> -    bp = pinctrl_find_buffered_packets(&addr, hash);
> +    uint64_t dp_key = ntohll(md->flow.metadata);
> +    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
> +    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
> +    struct buffered_packets *bp
> +        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
>      if (!bp) {
>          if (hmap_count(&buffered_packets_map) >= 1000) {
>              COVERAGE_INC(pinctrl_drop_buffered_packets_map);
> @@ -1597,12 +1601,20 @@ pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
>      bp = xmalloc(sizeof *bp);
>      hmap_insert(&buffered_packets_map, &bp->hmap_node, hash);
>      bp->head = bp->tail = 0;
> -    bp->ip = addr;
> +    if (is_arp) {
> +        bp->ip = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
> +    } else {
> +        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
> +        memcpy(&bp->ip, &ip6, sizeof bp->ip);
> +    }
> +    bp->dp_key = dp_key;
> +    bp->port_key = oport_key;
>  }
>  bp->timestamp = time_msec();
>  /* clone the packet to send it later with correct L2 address */
> -    clone = dp_packet_clone_data(dp_packet_data(pkt_in),
> -                                 dp_packet_size(pkt_in));
> +    struct dp_packet *clone
> +        = dp_packet_clone_data(dp_packet_data(pkt_in),
> +                               dp_packet_size(pkt_in));
>  buffered_push_packet(bp, clone, md);
>
>  return 0;
> @@ -3586,6 +3598,7 @@ pinctrl_run(struct ovsdb_idl_txn *ovnsb_idl_txn,
>      sbrec_ip_multicast_opts);
>      run_buffered_binding(sbrec_mac_binding_by_lport_ip,
>                           sbrec_port_binding_by_datapath,
> +                         sbrec_port_binding_by_name,
>
[ovs-dev] [PATCH v3 ovn] controller: improve buffered packets management
Improve buffered packet management in ovn-controller, avoiding useless
loops in the run_buffered_binding routine and using the datapath key and
output port key as the buffered_packets_map hashmap hash. Add a new
selftest for buffered packets.

Signed-off-by: Lorenzo Bianconi
---
Changes since v2:
- improve hash function
Changes since v1:
- improve code readability
---
 controller/pinctrl.c | 118
 tests/system-ovn.at  | 124 +++
 2 files changed, 208 insertions(+), 34 deletions(-)

diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index 8859cb080..dfcd0cea8 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -182,6 +182,7 @@ static void destroy_buffered_packets_map(void);
 static void
 run_buffered_binding(struct ovsdb_idl_index *sbrec_mac_binding_by_lport_ip,
                      struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
+                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
                      const struct hmap *local_datapaths)
     OVS_REQUIRES(pinctrl_mutex);

@@ -1430,6 +1431,9 @@ struct buffered_packets {
     struct in6_addr ip;
     struct eth_addr ea;

+    uint64_t dp_key;
+    uint64_t port_key;
+
     long long int timestamp;

     struct buffer_info data[BUFFER_QUEUE_DEPTH];
@@ -1556,38 +1560,38 @@ buffered_packets_map_gc(void)
 }

 static struct buffered_packets *
-pinctrl_find_buffered_packets(const struct in6_addr *ip, uint32_t hash)
+pinctrl_find_buffered_packets(uint64_t dp_key, uint64_t port_key,
+                              uint32_t hash)
 {
     struct buffered_packets *qp;
-
-    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash,
-                             &buffered_packets_map) {
-        if (IN6_ARE_ADDR_EQUAL(&qp->ip, ip)) {
+    HMAP_FOR_EACH_WITH_HASH (qp, hmap_node, hash, &buffered_packets_map) {
+        if (qp->dp_key == dp_key && qp->port_key == port_key) {
             return qp;
         }
     }
     return NULL;
 }

+static uint32_t
+pinctrl_buffer_apcket_hash(uint64_t dp_key, uint64_t port_key)
+{
+    uint32_t hash = 0;
+    hash = hash_add64(hash, port_key);
+    hash = hash_add64(hash, dp_key);
+    return hash_finish(hash, 16);
+}
+
 /* Called with in the pinctrl_handler thread context. */
 static int
 pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
                                 const struct match *md, bool is_arp)
     OVS_REQUIRES(pinctrl_mutex)
 {
-    struct buffered_packets *bp;
-    struct dp_packet *clone;
-    struct in6_addr addr;
-
-    if (is_arp) {
-        addr = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
-    } else {
-        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
-        memcpy(&addr, &ip6, sizeof addr);
-    }
-
-    uint32_t hash = hash_bytes(&addr, sizeof addr, 0);
-    bp = pinctrl_find_buffered_packets(&addr, hash);
+    uint64_t dp_key = ntohll(md->flow.metadata);
+    uint64_t oport_key = md->flow.regs[MFF_LOG_OUTPORT - MFF_REG0];
+    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, oport_key);
+    struct buffered_packets *bp
+        = pinctrl_find_buffered_packets(dp_key, oport_key, hash);
     if (!bp) {
         if (hmap_count(&buffered_packets_map) >= 1000) {
             COVERAGE_INC(pinctrl_drop_buffered_packets_map);
@@ -1597,12 +1601,20 @@ pinctrl_handle_buffered_packets(struct dp_packet *pkt_in,
     bp = xmalloc(sizeof *bp);
     hmap_insert(&buffered_packets_map, &bp->hmap_node, hash);
     bp->head = bp->tail = 0;
-    bp->ip = addr;
+    if (is_arp) {
+        bp->ip = in6_addr_mapped_ipv4(htonl(md->flow.regs[0]));
+    } else {
+        ovs_be128 ip6 = hton128(flow_get_xxreg(&md->flow, 0));
+        memcpy(&bp->ip, &ip6, sizeof bp->ip);
+    }
+    bp->dp_key = dp_key;
+    bp->port_key = oport_key;
 }
 bp->timestamp = time_msec();
 /* clone the packet to send it later with correct L2 address */
-    clone = dp_packet_clone_data(dp_packet_data(pkt_in),
-                                 dp_packet_size(pkt_in));
+    struct dp_packet *clone
+        = dp_packet_clone_data(dp_packet_data(pkt_in),
+                               dp_packet_size(pkt_in));
 buffered_push_packet(bp, clone, md);

 return 0;
@@ -3586,6 +3598,7 @@ pinctrl_run(struct ovsdb_idl_txn *ovnsb_idl_txn,
                            sbrec_ip_multicast_opts);
     run_buffered_binding(sbrec_mac_binding_by_lport_ip,
                          sbrec_port_binding_by_datapath,
+                         sbrec_port_binding_by_name,
                          local_datapaths);
     sync_svc_monitors(ovnsb_idl_txn, svc_mon_table, sbrec_port_binding_by_name,
                       chassis);
@@ -4354,15 +4367,28 @@ run_put_mac_bindings(struct ovsdb_idl_txn *ovnsb_idl_txn,
     }
 }

+static struct buffered_packets *
+pinctrl_get_buffered_packets(uint64_t dp_key, uint64_t port_key)
+{
+    uint32_t hash = pinctrl_buffer_apcket_hash(dp_key, port_key);
+    return pinctrl_find_buffered_packets(dp_key, port_key, hash);
+}
+
 static void