Re: [openstack-dev] [Performance][Shaker]
Thanks Ilya!

On Mon, Jan 23, 2017 at 6:56 AM, Ilya Shakhat wrote:
> Hi Sai,
>
> In UDP testing, PPS represents the packets sent by the iperf client to the
> server. Loss is the percentage of packets that were not received by the
> server (more specifically, the server tracks packets and sums the gaps
> between them,
> https://github.com/esnet/iperf/blob/3.0.7/src/iperf_udp.c#L64).
>
> While the reported PPS depends on bandwidth and concurrency, it makes sense
> to increase them until loss starts going up, meaning that the communication
> channel is near its limit.
>
> Thanks,
> Ilya

--
Sai Sindhur Malleni
Software Engineer
Red Hat Inc.
100 East Davie Street
Raleigh, NC, USA
Work: (919) 754-4557 | Cell: (919) 985-1055

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Performance][Shaker]
Hi Sai,

In UDP testing, PPS represents the packets sent by the iperf client to the server. Loss is the percentage of packets that were not received by the server (more specifically, the server tracks packets and sums the gaps between them, https://github.com/esnet/iperf/blob/3.0.7/src/iperf_udp.c#L64).

While the reported PPS depends on bandwidth and concurrency, it makes sense to increase them until loss starts going up, meaning that the communication channel is near its limit.

Thanks,
Ilya

2017-01-21 1:19 GMT+04:00 Sai Sindhur Malleni:
> Hey,
>
> When using the "iperf3" class in Shaker to look at UDP small-packet
> performance, we see that as we scale up the concurrency the average PPS
> goes up and the loss % also increases. Is the loss % a percentage of the
> PPS, or does the PPS only represent successful transmissions? Thanks!
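The gap-counting accounting Ilya describes can be sketched roughly as follows. This is a simplified model of the logic referenced above, not the actual iperf3 code; the function name and the treatment of out-of-order packets are illustrative only:

```python
def udp_loss_percent(received_seqs):
    """Estimate UDP loss the way a gap-counting receiver would.

    received_seqs: packet sequence numbers in arrival order (1-based).
    A jump in sequence number counts the skipped packets as lost;
    out-of-order arrivals are ignored here for simplicity.
    """
    expected_next = 1
    lost = 0
    received = 0
    for seq in received_seqs:
        if seq >= expected_next:
            lost += seq - expected_next  # gap: these packets never arrived
            expected_next = seq + 1
        received += 1
    total_sent = lost + received  # the client-side PPS covers all of these
    return 100.0 * lost / total_sent

# Packets 3, 4, 7, 8, 9 never arrive: 5 lost out of 10 sent.
print(udp_loss_percent([1, 2, 5, 6, 10]))  # 50.0
```

Under this model the answer to Sai's question follows directly: PPS counts every packet the client sent, and loss is the share of those that the server never saw.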
[openstack-dev] [Performance][Shaker]
Hey,

When using the "iperf3" class in Shaker to look at UDP small-packet performance, we see that as we scale up the concurrency the average PPS goes up and the loss % also increases. Is the loss % a percentage of the PPS, or does the PPS only represent successful transmissions? Thanks!

--
Sai Sindhur Malleni
Software Engineer
Red Hat Inc.
Re: [openstack-dev] [Performance][shaker] Triangular topology
----- Original Mail -----
> From: "Ilya Shakhat" <ishak...@mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Sent: Tuesday, December 6, 2016 14:39:28
> Subject: Re: [openstack-dev] [Performance][shaker] Triangular topology
>
> Hi Matt,
>
> I would suggest letting users specify a custom topology in a Shaker
> scenario via graphs (e.g. a directed triangle would look like: A -> B,
> B -> C, C -> A), where every pair of nodes is a pair of VMs and every
> edge corresponds to a traffic flow. The above example would be deployed
> as 6 VMs, 2 per compute node (since we need to separate ingress and
> egress flows).

I totally agree, as it could cover a lot of use cases.

> I already have a patch that allows deploying a graph-based topology:
> https://review.openstack.org/#/c/407495/ but it does not configure
> concurrency properly yet (concurrency still increments by pairs,
> solution tbd)

I'm guessing that changing the semantics of concurrency with regard to the other scenarios is maybe not a good thing. As far as I understand, a concurrency of 3 with the following graph

  graph:
    - [A, B]
    - [B, C]
    - [C, A]

will lead to 3 flows (potentially bi-directional) being active. So without changing the current semantics of concurrency we could have all flows active, with a concurrency of 6, for the following:

  graph:
    - [A, B]
    - [B, C]
    - [C, A]
    - [A, B]
    - [B, C]
    - [C, A]

In that case, what would a concurrency of 3 mean with the above graph? In other words, can we make sure that [A, B], [B, C] and [C, A] are active?

More generally, for a custom graph, maybe we can find a way to specify in the YAML which pairs should be active for a given concurrency level. In the above case this could be (pseudo-yaml):

  graph:
    - [A, B],1
    - [B, C],2
    - [C, A],3
    - [A, B],4
    - [B, C],5
    - [C, A],6

All pairs with a number less than or equal to the concurrency would be considered active.

> Please check whether my approach suits your use case, feedback
> appreciated :)

I like it!

> Thanks,
> Ilya
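Matt's numbering rule can be sketched as a small helper. This is hypothetical, not part of Shaker; the annotated-graph structure just mirrors the pseudo-yaml from the message above:

```python
def active_pairs(annotated_graph, concurrency):
    """Return the edges considered active at a given concurrency level.

    annotated_graph: list of ((src, dst), rank) entries, where rank is the
    number attached to each pair in the pseudo-yaml (e.g. "[A, B],1").
    An edge is active when its rank is <= the concurrency level.
    """
    return [pair for pair, rank in annotated_graph if rank <= concurrency]

graph = [
    (("A", "B"), 1), (("B", "C"), 2), (("C", "A"), 3),
    (("A", "B"), 4), (("B", "C"), 5), (("C", "A"), 6),
]

# With concurrency 3, exactly one flow per edge of the triangle is active.
print(active_pairs(graph, 3))  # [('A', 'B'), ('B', 'C'), ('C', 'A')]
```

At concurrency 6 all six flows run, so the existing "increment by pairs" semantics would be preserved while still guaranteeing which pairs are active at each intermediate level.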
Re: [openstack-dev] [Performance][shaker] Triangular topology
Hi Matt,

I would suggest letting users specify a custom topology in a Shaker scenario via graphs (e.g. a directed triangle would look like: A -> B, B -> C, C -> A), where every pair of nodes is a pair of VMs and every edge corresponds to a traffic flow. The above example would be deployed as 6 VMs, 2 per compute node (since we need to separate ingress and egress flows).

I already have a patch that allows deploying a graph-based topology: https://review.openstack.org/#/c/407495/ but it does not configure concurrency properly yet (concurrency still increments by pairs, solution tbd).

Please check whether my approach suits your use case, feedback appreciated :)

Thanks,
Ilya

2016-11-24 19:57 GMT+04:00 Matthieu Simonin <matthieu.simo...@inria.fr>:
> Hi Ilya,
>
> Thanks for your answer, let me know your findings.
> In any case I'll be glad to help if needed.
>
> Matt
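Ilya's VM accounting for a graph scenario can be modeled like this. The code is illustrative, not Shaker's implementation: the assumption is that each directed edge gets a client VM on its source compute node and a server VM on its destination, which is why the triangle yields 6 VMs, 2 per node:

```python
from collections import Counter

def vms_per_node(edges):
    """Count VMs placed on each compute node for a directed traffic graph.

    edges: list of (src, dst) compute-node pairs, one per traffic flow.
    Each edge deploys one client VM on src and one server VM on dst,
    keeping ingress and egress flows on separate VMs.
    """
    placement = Counter()
    for src, dst in edges:
        placement[src] += 1  # client VM sending on this edge
        placement[dst] += 1  # server VM receiving on this edge
    return placement

triangle = [("A", "B"), ("B", "C"), ("C", "A")]
counts = vms_per_node(triangle)
print(sum(counts.values()), dict(counts))  # 6 VMs total, 2 per node
```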
Re: [openstack-dev] [Performance][shaker]
Hi Ilya,

Thanks for your answer, let me know your findings.
In any case I'll be glad to help if needed.

Matt

ps: I just realized that I missed giving a proper subject to the thread :(.
If this thread continues it's maybe better to change that.

----- Original Mail -----
> From: "Ilya Shakhat" <ishak...@mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Sent: Thursday, November 24, 2016 13:03:33
> Subject: Re: [openstack-dev] [Performance][shaker]
>
> Hi Matt,
>
> Out of the box Shaker doesn't support such a topology.
> It shouldn't be hard to implement though. Let me check what needs to
> be done.
>
> Thanks,
> Ilya
Re: [openstack-dev] [Performance][shaker]
Hi Matt,

Out of the box Shaker doesn't support such a topology. It shouldn't be hard to implement though. Let me check what needs to be done.

Thanks,
Ilya

2016-11-24 13:49 GMT+03:00 Matthieu Simonin:
> Hello,
>
> I'm looking at Shaker's capabilities and I'm wondering if this kind
> of accommodation (see attachment also) can be achieved.
>
> ASCII (flat) version:
>
> CN1 (2n VMs) <- n flows -> CN2 (2n VMs)
> CN1 (2n VMs) <- n flows -> CN3 (2n VMs)
> CN2 (2n VMs) <- n flows -> CN3 (2n VMs)
>
> In this situation concurrency could be mapped to the number of
> simultaneous flows in use per link.
>
> Best,
>
> Matt
[openstack-dev] [Performance][shaker]
Hello,

I'm looking at Shaker's capabilities and I'm wondering if this kind of accommodation (see attachment also) can be achieved.

ASCII (flat) version:

CN1 (2n VMs) <- n flows -> CN2 (2n VMs)
CN1 (2n VMs) <- n flows -> CN3 (2n VMs)
CN2 (2n VMs) <- n flows -> CN3 (2n VMs)

In this situation concurrency could be mapped to the number of simultaneous flows in use per link.

Best,

Matt
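The sizing above can be checked with a quick sketch. This is illustrative only: for three pairwise-connected compute nodes, each link carrying n flows contributes n VMs to each of its two endpoints, which is where the 2n VMs per node come from:

```python
from collections import Counter
from itertools import combinations

def triangle_sizing(nodes, n):
    """Total flows and per-node VM count when every pair of compute
    nodes exchanges n concurrent flows."""
    vms = Counter()
    links = list(combinations(nodes, 2))
    for a, b in links:
        # n flows per link: each endpoint hosts n of the client/server VMs
        vms[a] += n
        vms[b] += n
    return len(links) * n, vms

total_flows, vms = triangle_sizing(["CN1", "CN2", "CN3"], n=4)
print(total_flows, dict(vms))  # 12 flows; each node hosts 2n = 8 VMs
```

Each node sits on two of the three links, so its VM count is 2n, matching the diagram; concurrency per link then ranges from 1 to n.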