Robert,

As you've deliberately picked on me, I feel compelled to reply! Jokes apart, I am going to retire that patch and push the new default into neutron.

Regardless of considerations about real loads vs. gate loads, I think it is correct to assume that the default configuration should be one that allows the gate tests to pass. A sort of maximum common denominator, if you will. I think, however, that the discussion of whether our gate tests are representative of real-world deployments is outside the scope of this thread, even if it is very interesting.
On the specific matter of this patch: we've been noticing the CPU on the gate tests with neutron easily reaching 100%, and this is not because of (b). I can replicate the same behaviour on any other VM, even with twice as many vCPUs. I've never tried bare metal, though. However, the fact that 'just' the gate tests push the CPU of a single host to 100% should make us suspect that deployers might easily face the same problem in a real environment (your point (a)), regardless of how the components are split.

Thankfully, Armando found a related issue with the DHCP agent, which was causing it to use a lot of CPU as well as terribly stressing ovsdb-server, and fixed it. Since then we've been seeing a lot fewer timeout errors on the gate.

Salvatore

On 12 December 2013 20:23, Robert Collins <[email protected]> wrote:
> A few times now we've run into patches for devstack-gate / devstack
> that change default configuration to handle 'tempest load'.
>
> For instance - https://review.openstack.org/61137 (Sorry Salvatore, I'm
> not picking on you, really!)
>
> So there appears to be a meme that the gate is particularly stressful
> - a bad environment - and that real-world situations have less load.
>
> This could happen a few ways: (a) deployers might separate out
> components more; (b) they might have faster machines; (c) they might
> have less concurrent activity.
>
> (a) - unlikely! Deployers will cram stuff together as much as they can
> to save overheads. Big clouds will have components split out, yes,
> but they will also have correspondingly more load to drive that split.
>
> (b) Perhaps, but not orders of magnitude faster: the clouds we run on
> are running on fairly recent hardware, and by using big instances we
> don't get crammed in with that many other tenants.
>
> (c) Almost certainly not. Tempest currently does a maximum of four
> concurrent requests. A small business cloud could easily have 5 or 6
> people making concurrent requests from time to time, and bigger but
> not huge clouds will certainly have that. Their /average/ rate of API
> requests may be much lower, but when they point service orchestration
> tools at it - particularly tools that walk their dependencies in
> parallel - load is going to be much, much higher than what we generate
> with Tempest.
>
> tl;dr: if we need to change a config file setting in devstack-gate or
> devstack *other than* to set up the specific scenario, think thrice -
> should it be a production default, set in the relevant project's
> default config setting?
>
> Cheers,
> Rob
> --
> Robert Collins <[email protected]>
> Distinguished Technologist
> HP Converged Cloud
>
> _______________________________________________
> OpenStack-dev mailing list
> [email protected]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
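[Editor's note] Robert's point about parallel dependency walking can be made concrete with a toy sketch. The graph, resource names, and helper below are invented for illustration, not taken from the thread or from any real orchestration tool: an orchestrator that creates every "ready" resource at once fires one API request per resource in a level, so the widest level of the graph is the burst of concurrent requests the cloud sees. Even a tiny eight-resource template exceeds Tempest's four.

```python
# Toy illustration (invented example): how wide the concurrent-request burst
# gets when an orchestration tool walks a dependency graph level by level,
# creating every ready resource in parallel.

# A small made-up template: each resource maps to the resources it depends on.
DEPS = {
    "network": [],
    "subnet": ["network"],
    "router": ["network"],
    "server-1": ["subnet"],
    "server-2": ["subnet"],
    "server-3": ["subnet"],
    "server-4": ["subnet"],
    "floating-ip": ["router"],
}

def max_parallel_requests(deps):
    """Return the width of the widest level: resources whose dependencies
    are all satisfied become ready together and are created concurrently."""
    done, peak = set(), 0
    while len(done) < len(deps):
        ready = {r for r, d in deps.items()
                 if r not in done and all(x in done for x in d)}
        if not ready:  # nothing can progress: the template has a cycle
            raise ValueError("dependency cycle in template")
        peak = max(peak, len(ready))
        done |= ready
    return peak

print(max_parallel_requests(DEPS))  # 5: four servers plus the floating IP
```

So even this eight-resource template briefly generates five concurrent create calls, more than the four Tempest issues, and real templates are far larger.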
