On 08/24/2017 09:51 AM, Darrell Ball wrote:
> 
> On 8/23/17, 6:34 AM, "Kevin Traynor" <[email protected]> wrote:
> 
>     Up to this point rxqs are sorted by processing cycles they
>     consumed and assigned to pmds in a round robin manner.
>     
>     Ian pointed out that on wrap around the most loaded pmd will be
>     the next one to be assigned an additional rxq and that it would be
>     better to reverse the pmd order when wraparound occurs.
>     
>     In other words, change from assigning by rr to assigning in a forward
>     and reverse cycle through pmds.
>     
>     Also, now that the algothim has finalised, document an example.
> 
> Typo: algothim

changed, and US'ified finalised

>     Suggested-by: Ian Stokes <[email protected]>
>     Signed-off-by: Kevin Traynor <[email protected]>
>     ---
>      Documentation/howto/dpdk.rst | 16 ++++++++++++++++
>      lib/dpif-netdev.c            | 21 ++++++++++++++++++++-
>      tests/pmd.at                 |  2 +-
>      3 files changed, 37 insertions(+), 2 deletions(-)
>     
>     diff --git a/Documentation/howto/dpdk.rst b/Documentation/howto/dpdk.rst
>     index 44737e4..493e215 100644
>     --- a/Documentation/howto/dpdk.rst
>     +++ b/Documentation/howto/dpdk.rst
>     @@ -124,4 +124,20 @@ will be used where known to assign rxqs with the highest consumption of
>      processing cycles to different pmds.
>      
>     +For example, in the case where there are 5 rxqs and 3 cores (e.g. 3,7,8)
>     +available, and the measured usage of core cycles per rxq over the last
>     +interval is seen to be:
>     +
>     +- Queue #0: 30%
>     +- Queue #1: 80%
>     +- Queue #3: 60%
>     +- Queue #4: 70%
>     +- Queue #5: 10%
>     +
>     +The rxqs will be assigned to cores 3,7,8 in the following order:
>     +
>     +Core 3: Q1 (80%) |
>     +Core 7: Q4 (70%) | Q5 (10%)
>     +core 8: Q0 (60%) | Q0 (30%)
> 
> [Darrell]
> There is a typo in the example:
> +core 8: Q0 (60%) …..
> should be
> +core 8: Q3 (60%) …..

oh, that's annoying :( thanks for spotting it. fixed it now.

>     +
>      Rxq to pmds assignment takes place whenever there are configuration changes.
>     
>     diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>     index afbf591..6cc0a1e 100644
>     --- a/lib/dpif-netdev.c
>     +++ b/lib/dpif-netdev.c
>     @@ -3285,4 +3285,5 @@ struct rr_numa {
>      
>          int cur_index;
>     +    bool idx_inc;
>      };
>      
>     @@ -3341,4 +3342,7 @@ rr_numa_list_populate(struct dp_netdev *dp, struct rr_numa_list *rr)
>              numa->pmds = xrealloc(numa->pmds, numa->n_pmds * sizeof *numa->pmds);
>              numa->pmds[numa->n_pmds - 1] = pmd;
>     +        /* At least one pmd so initialise curr_idx and idx_inc. */
>     +        numa->cur_index = 0;
>     +        numa->idx_inc = true;
>          }
>      }
>      
>     @@ -3347,5 +3351,20 @@ static struct dp_netdev_pmd_thread *
>      rr_numa_get_pmd(struct rr_numa *numa)
>      {
>     -    return numa->pmds[numa->cur_index++ % numa->n_pmds];
>     +    int numa_idx = numa->cur_index;
>     +
>     +    if (numa->idx_inc == true) {
>     +        if (numa->cur_index == numa->n_pmds-1) {
>     +            numa->idx_inc = false;
>     +        } else {
>     +            numa->cur_index++;
>     +        }
>     +    } else {
> 
> Maybe one sentence comment for others benefit?

yeah, good point. I added a few comments to make it clear what's
happening here.

>     +        if (numa->cur_index == 0) {
>     +            numa->idx_inc = true;
>     +        } else {
>     +            numa->cur_index--;
>     +        }
>     +    }
>     +    return numa->pmds[numa_idx];
>      }
>      
>     diff --git a/tests/pmd.at b/tests/pmd.at
>     index b6732ea..e39a23a 100644
>     --- a/tests/pmd.at
>     +++ b/tests/pmd.at
>     @@ -54,5 +54,5 @@ m4_define([CHECK_PMD_THREADS_CREATED], [
>      
>      m4_define([SED_NUMA_CORE_PATTERN], ["s/\(numa_id \)[[0-9]]*\( core_id \)[[0-9]]*:/\1<cleared>\2<cleared>:/"])
>     -m4_define([SED_NUMA_CORE_QUEUE_PATTERN], ["s/\(numa_id \)[[0-9]]*\( core_id \)[[0-9]]*:/\1<cleared>\2<cleared>:/;s/\(queue-id: \)0 2 4 6/\1<cleared>/;s/\(queue-id: \)1 3 5 7/\1<cleared>/"])
>     +m4_define([SED_NUMA_CORE_QUEUE_PATTERN], ["s/\(numa_id \)[[0-9]]*\( core_id \)[[0-9]]*:/\1<cleared>\2<cleared>:/;s/\(queue-id: \)1 2 5 6/\1<cleared>/;s/\(queue-id: \)0 3 4 7/\1<cleared>/"])
>      m4_define([DUMMY_NUMA], [--dummy-numa="0,0,0,0"])
>     
>     -- 
>     1.8.3.1
> 
> _______________________________________________
> dev mailing list
> [email protected]
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
