On Sun, Feb 8, 2015 at 12:14 PM, Tyler Wilson <[email protected]> wrote:
> Try adjusting url_timeout in nova.conf; this is a new setting to allow more
> time to connect to neutron.
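[For reference, a minimal sketch of where that setting lives, assuming the Juno option-group layout (the neutron client options moved into a [neutron] group in Juno; the value shown is just the one discussed below):

```ini
# nova.conf -- sketch, assuming Juno's [neutron] option group
[neutron]
# Seconds to wait on a neutron API call before timing out (default is 30)
url_timeout = 120
```
]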
I increased the url_timeout from 30 to 120, but this seems to have no
effect: it still fails at 30 instances, and in much less than 120 seconds,
so I don't think we're getting as far as trying to talk to neutron yet.

Thanks,
-Jon

> On Sun, Feb 8, 2015 at 9:37 AM, Xinyuan Huang <[email protected]> wrote:
>>
>> Hi Jon,
>>
>> Could you please also paste the messages in the nova-scheduler logs, if
>> possible?
>>
>> It is true that in Icehouse instances might get launched individually
>> before the scheduler realized not enough hosts were available, but this
>> has since been changed: the number of available hosts is now checked
>> before launching any.
>>
>> Thanks,
>> Xinyuan
>>
>> From: Jonathan Proulx <[email protected]>
>> Date: Fri, Feb 6, 2015 at 7:25 AM
>> Subject: [Openstack] nova-scheduler timeouts in juno ...
>> To: "[email protected]" <[email protected]>
>>
>> > Hi All,
>> >
>> > After upgrading from Icehouse to Juno I get timeouts when trying to
>> > schedule >20 instances at once (20 deterministically works, 30 always
>> > fails; I didn't bother to go finer than that).
>> >
>> > Note this is a single call with --max-count on the CLI, or setting the
>> > number of instances in Horizon. If I do parallel batches of 10 or 20
>> > instances I can get hundreds to launch successfully, as each set gets
>> > handled by a different scheduler process.
>> >
>> > I know under Icehouse I could get 100 or so to launch at once (then
>> > neutron would start to trip over itself, so kudos on that not
>> > happening anymore). I seem to recall that with Icehouse, instances in
>> > a scheduling batch would launch individually as each was scheduled; in
>> > Juno it seems none get launched until all are scheduled.
>> >
>> > In any case, timeouts appear in nova-conductor.log, one set for each
>> > instance in the batch, all with the same message ID
>> > (http://paste.openstack.org/show/168157/).
>> >
>> > Is anyone else seeing this / know a workaround?
>> >
>> > Thanks,
>> > -Jon

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
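[The parallel-batch workaround Jon describes above (hundreds of instances launched successfully as batches of 10-20, each batch a separate scheduler request) can be sketched roughly as follows. This is an illustrative dry run, not Jon's actual commands: the boot command is echoed rather than executed, and the image/flavor names are placeholders to substitute for a real "nova boot" invocation.

```shell
# Split a request for TOTAL instances into batches of at most BATCH,
# so each "nova boot --max-count" call stays under the size that was
# observed to trip the Juno scheduler (~30 in one request).
TOTAL=100
BATCH=20
launched=0
while [ "$launched" -lt "$TOTAL" ]; do
  n=$(( TOTAL - launched ))
  [ "$n" -gt "$BATCH" ] && n=$BATCH
  # Dry run: echo the command; drop "echo" to really boot instances.
  echo nova boot --image cirros --flavor m1.tiny \
       --min-count "$n" --max-count "$n" "batch-at-$launched"
  launched=$(( launched + n ))
done
echo "total launched: $launched"
```

Running the real batches in parallel (backgrounding each boot call) is what spreads them across different scheduler workers, per Jon's observation.]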
