To add to my last reply, I have just created the following compute offering examples; all tests were run with the VM launched on the same hypervisor:
------------------------------------------------------------------------
=== Offering with 50Mbps ===

    <target dev='vnet2'/>
    <model type='e1000'/>
    <bandwidth>
      <inbound average='6400' peak='6400'/>
      <outbound average='6400' peak='6400'/>
    </bandwidth>

# tc class show dev vnet2
class htb 1:1 root prio 0 rate 51200Kbit ceil 51200Kbit burst 1593b cburst 1593b

Phy -> VM = 53Mbits/sec
VM -> Phy = 1.92Mbits/sec
------------------------------------------------------------------------
=== Offering with 100Mbps ===

    <target dev='vnet2'/>
    <model type='e1000'/>
    <bandwidth>
      <inbound average='12800' peak='12800'/>
      <outbound average='12800' peak='12800'/>
    </bandwidth>

# tc class show dev vnet2
class htb 1:1 root prio 0 rate 102400Kbit ceil 102400Kbit burst 1587b cburst 1587b

Phy -> VM = 101Mbits/sec
VM -> Phy = 2.04Mbits/sec
------------------------------------------------------------------------
=== Offering with 500Mbps ===

    <target dev='vnet2'/>
    <model type='e1000'/>
    <bandwidth>
      <inbound average='64000' peak='64000'/>
      <outbound average='64000' peak='64000'/>
    </bandwidth>

# tc class show dev vnet2
class htb 1:1 root prio 0 rate 512000Kbit ceil 512000Kbit burst 1536b cburst 1536b

Phy -> VM = 8.24Mbits/sec
VM -> Phy = 2.60Mbits/sec
------------------------------------------------------------------------
=== Offering with 1000Mbps ===

    <target dev='vnet2'/>
    <model type='e1000'/>
    <bandwidth>
      <inbound average='128000' peak='128000'/>
      <outbound average='128000' peak='128000'/>
    </bandwidth>

# tc class show dev vnet2
class htb 1:1 root prio 0 rate 1024Mbit ceil 1024Mbit burst 1408b cburst 1408b

Phy -> VM = 8.88Mbits/sec
VM -> Phy = 2.39Mbits/sec
------------------------------------------------------------------------
=== Offering with 0Mbps (in the Network Rate field) ===

    <source bridge='brbond0-152'/>
    <target dev='vnet2'/>
    <model type='e1000'/>
    <alias name='net0'/>

# tc class show dev vnet2
(no output)

Phy -> VM = 280Mbits/sec
VM -> Phy = 693Mbits/sec
------------------------------------------------------------------------

It appears that at <=100Mbps the inbound shaping works as expected, but beyond that the results are rather odd, and outbound from the VM stays stuck at around 2Mbits/sec whatever the offering. Setting 0Mbps has also given varied results: Phy -> another VM (on the same hypervisor) only gets 57-160Mbits/sec, with ~799Mbits/sec outbound from that VM.
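For reference, these are the checks I have been running on the hypervisor. The domain and interface names are just the ones used in this thread, and the note about the outbound direction being handled by an ingress policing filter is only my understanding of how libvirt applies it, so treat this as a rough sketch rather than gospel:

# virsh dumpxml i-2-15-VM | egrep "inbound|outbound"
    (confirm the limits libvirt actually applied to the guest NIC)
# tc qdisc show dev vnet2
# tc -s class show dev vnet2
    (inbound-to-VM shaping: the htb class on the vnet device; -s also shows drop counters)
# tc filter show dev vnet2 parent ffff:
    (outbound-from-VM shaping: should list a policing filter on the ingress qdisc, if I have understood the libvirt implementation correctly)
# tc qdisc del dev vnet2 root
# tc qdisc del dev vnet2 ingress
    (testing only: strips all shaping from the interface until it gets re-applied)

The arithmetic at least checks out against the figures above: offering Mbps x 1024 / 8 gives the average/peak in the XML (50 -> 6400, 1000 -> 128000), and that value x 8 is the Kbit rate tc prints (6400 -> 51200Kbit). So the conversion looks right; it is the actual throughput above 100Mbps, and the outbound direction in general, that misbehave.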
Thanks,
Marty

On Mon, Oct 28, 2013 at 9:00 PM, Marty Sweet <msweet....@gmail.com> wrote:
> Yeah, from the libvirt website (http://libvirt.org/cgroups.html):
>
> Network tuning
>
> The net_cls is not currently used. Instead traffic filter policies are
> set directly against individual virtual network interfaces.
>
> However, when bandwidth limiting is applied, I can't see any obvious rules
> with any of the 'tc' commands.
> Thanks for your help on this,
> Marty
>
>
> On Mon, Oct 28, 2013 at 8:36 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>
>> It just uses the libvirt XML, which uses cgroups, which uses tc rules.
>>
>> On Mon, Oct 28, 2013 at 2:25 PM, Marty Sweet <msweet....@gmail.com> wrote:
>> > Hi Marcus,
>> >
>> > My earlier email mentioned those configurations, which unfortunately do not
>> > really comply with what was set out in the compute offering.
>> > After setting the compute offering network limit and stop/starting the VMs,
>> > the lines do not appear and outbound traffic has returned to normal speeds,
>> > but inbound is proving an issue.
>> >
>> > I also tried rebooting the hypervisor hosts, with no success.
>> >
>> > How is this traffic shaping implemented - is it just via KVM and virsh, or
>> > does CloudStack run custom tc rules?
>> >
>> > Thanks,
>> > Marty
>> >
>> > On Mon, Oct 28, 2013 at 8:19 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> >
>> >> Check the XML that was generated when the VM in question was started:
>> >>
>> >> # virsh dumpxml i-2-15-VM | egrep "inbound|outbound"
>> >> <inbound average='2560000' peak='2560000'/>
>> >> <outbound average='2560000' peak='2560000'/>
>> >>
>> >> See if the settings match what you put in your network offering or
>> >> properties (whichever applies to your situation).
>> >>
>> >> On Oct 28, 2013 1:44 PM, "Marty Sweet" <msweet....@gmail.com> wrote:
>> >>
>> >> > Thanks for the links. While I have set 0 for all the properties, the
>> >> > following results still occur:
>> >> >
>> >> > Guest -> Other Server: >900Mbps (as expected)
>> >> > Other Server -> Guest (so inbound to the VM): varies depending on the
>> >> > hypervisor host: 121, 405, 233, 234Mbps
>> >> >
>> >> > Each hypervisor has 2 NICs in an LACP bond; this was working perfectly
>> >> > before 4.2.0 :(
>> >> >
>> >> > Thanks,
>> >> > Marty
>> >> >
>> >> > On Mon, Oct 28, 2013 at 2:44 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> >> >
>> >> > > Yeah, the bandwidth limiting for KVM was dropped into 4.2. You just
>> >> > > need to tweak your settings, whether it's on network offerings or
>> >> > > global.
>> >> > >
>> >> > > On Mon, Oct 28, 2013 at 8:25 AM, Wei ZHOU <ustcweiz...@gmail.com> wrote:
>> >> > > > Please read this article: http://support.citrix.com/article/CTX132019
>> >> > > > Hope this helps you.
>> >> > > >
>> >> > > > 2013/10/28 Marty Sweet <msweet....@gmail.com>
>> >> > > >
>> >> > > >> Hi Guys,
>> >> > > >>
>> >> > > >> Following my upgrade from 4.1.1 -> 4.2.0, I have noticed that VM
>> >> > > >> traffic is now limited to 2Mbits.
>> >> > > >> My compute offerings were already set to 1000 for the network limit,
>> >> > > >> and I have created new offerings to ensure this wasn't the issue
>> >> > > >> (this fixed it for someone on the mailing list).
>> >> > > >>
>> >> > > >> Is there anything that I am missing? I can't remember reading about
>> >> > > >> this as a bug fix or new feature.
>> >> > > >> If there is a way to resolve or disable it, it would be most
>> >> > > >> appreciated - I have been going round in circles for hours.
>> >> > > >>
>> >> > > >> Thanks,
>> >> > > >> Marty
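(Re: the virsh dumpxml check quoted above - on a reasonably recent libvirt you should also be able to query the per-NIC limits directly rather than grepping the full XML, e.g.:

# virsh domiftune i-2-15-VM vnet2

which should print the inbound/outbound average, peak and burst values currently applied to that NIC. The domain and interface names are just the ones used earlier in the thread - I have not double-checked this against the exact libvirt version on my hosts.)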