I don't understand what you're asking for. Daniele or Pravin, I think that you know the DPDK datapath well. Do you understand what David wants or why?
On Thu, Oct 15, 2015 at 01:15:11PM -0500, David Evans wrote:
> Thanks Ben,
>
> If that’s the case, then it would be better to be adding custom action that
> applies prior to this group action, to update the refcnt.
>
> I expect it just has to happen some time before the first PMD has finished
> processing the packet so that the packet does not get deleted by the tx
> routine before other PMD’s have seen the packet.
>
> Cheers
> Dave.
>
>
>
> > On Oct 12, 2015, at 12:23 PM, Ben Pfaff <[email protected]> wrote:
> >
> > Your change isn't going to have much effect because most packets don't
> > go through the translation process. If you try to force all packets
> > through translation, it will kill performance.
> >
> > I think that you should read this paper that describes the various
> > caching layers in Open vSwitch:
> > http://openvswitch.org/support/papers/nsdi2015.pdf
> >
> > On Mon, Oct 12, 2015 at 11:56:03AM -0500, David Evans wrote:
> >> Hi Ben,
> >>
> >> When i use the OFPGT11_ALL group action, the packets for a flow will be
> >> sent out all buckets in a group. (in my case all the buckets are ports to
> >> transmit out)
> >>
> >> I added a group_bucket_count to the context
> >> and
> >> in xlate_all_group fn the following.
> >>
> >>     group_dpif_get_buckets(group, &buckets);
> >> +   if(ctx->group_bucket_count == 0){
> >> +       LIST_FOR_EACH (bucket, list_node, buckets) {
> >> +           ctx->group_bucket_count++;
> >> +       }
> >> +   }
> >> +   if(ctx->xin->packet)
> >> +       if(ctx->xin->packet->source == DPBUF_DPDK)
> >> +           rte_pktmbuf_refcnt_update(&ctx->xin->packet->mbuf, ctx->group_bucket_count);
> >>     LIST_FOR_EACH (bucket, list_node, buckets) {
> >>
> >> this stops the transmit pmd’s attempting to free the packet until all the
> >> buckets (ports) have transmitted it.
> >> My switch also does reassembly on rx - this refcnt is necessary for
> >> handling multi-segment dpdk buffers too.
> >> I also changed the segment free to rte_pktmbuf_free in netdev-dpdk.c for
> >> this purpose.
> >> I’m expecting it will also be important for tso or the possibility of
> >> using rte_ipv4_fragment_packet on an outgoing port.
> >>
> >> i have between 6 and 12 PMD’s depending on the number of dpdk ports
> >> running at any time, and if i use OFPGT11_ALL with many output
> >> buckets (ports), buffers will disappear from under some pmd’s and cause
> >> segfaults etc..
> >>
> >> Cheers,
> >>
> >> Dave.
> >>
> >>> On Oct 12, 2015, at 11:38 AM, Ben Pfaff <[email protected]> wrote:
> >>>
> >>> On Wed, Oct 07, 2015 at 05:36:18PM -0500, David Evans wrote:
> >>>> While using netdev-dpdk - When i add a rule for which the action is to
> >>>> send to a group (type=all) containing (x) output buckets (ports) how
> >>>> can i increment the dp_packet->pkt_mbuf’s refcnt to (x) so that the
> >>>> packet is not deleted before it has transmitted all ports (buckets) in
> >>>> the group.
> >>>>
> >>>> Perhaps in ofproto-dpif-xlate.c function xlate_all_group find the
> >>>> packet and apply the ctx->xin->packet->mbuf->refcnt ? Will that work
> >>>> for all packets for a ctx?
> >>>
> >>> I don't understand what relationship you expect here. A group has no
> >>> direct relationship to a packet. Translation produces a flat list of
> >>> simple actions that don't refer back to the group.
> >>
>
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
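For readers following the thread, here is a minimal standalone sketch of the general DPDK refcount pattern David is describing: to transmit one mbuf out of several ports, the buffer's reference count has to match the number of pending transmits, because each PMD TX path eventually calls rte_pktmbuf_free() on it. This is not OVS or netdev-dpdk code; the function name, port array, and use of TX queue 0 are illustrative assumptions, and it assumes a DPDK release where port ids are 16-bit.

```c
/* Illustrative sketch only: fan one mbuf out to several ports.
 * Assumes "tx_port" and "n_out_ports" are supplied by the caller. */
#include <rte_mbuf.h>
#include <rte_ethdev.h>

static void
tx_to_all_ports(struct rte_mbuf *m, const uint16_t *tx_port, int n_out_ports)
{
    if (n_out_ports > 1) {
        /* The mbuf already carries one reference, which covers one of the
         * transmits; add the rest so no PMD frees it prematurely. */
        rte_pktmbuf_refcnt_update(m, n_out_ports - 1);
    }

    for (int i = 0; i < n_out_ports; i++) {
        /* Queue 0 is assumed here; a real datapath would pick the TX queue
         * owned by the current PMD thread. */
        if (rte_eth_tx_burst(tx_port[i], 0, &m, 1) == 0) {
            /* TX failed: drop the reference we reserved for this port. */
            rte_pktmbuf_free(m);
        }
    }
}
```

The point of the (n_out_ports - 1) update is that rte_pktmbuf_free() only releases the buffer once the reference count reaches zero, so the last transmitting PMD is the one that actually returns it to the mempool.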
