[Openstack-operators] Reminder: User Committee Meeting - Monday July 2nd @1400UTC
Hi everyone,

Please be sure to join us - if not getting ready for firecrackers - on Monday July 2nd @1400UTC in #openstack-uc for the weekly User Committee meeting. You can also freely add to the meeting agenda here:

Governance/Foundation/UserCommittee - OpenStack - WIKI.OPENSTACK.ORG ( https://share.polymail.io/v1/z/b/NWIzNjZlYzY4YmFm/a4uka23nadCqVfJyXGVUd8seO-2TVguxjo5CXjMRfv6BfwuOHgsFdwaD5gqm42rq7S0EQJ2MIGcgH9AtUVfEieePkQzsFoAt1OaUgaIp0NtZpZK4dWfyHXTS3KuBASt50Uw1EdlADr41wcc2nQVQpFf9trzWdTHt9_ZjAc0PQrBJvTlG2nXDmvunA1m2N-H8jMIRsejqbpleDwqc7eXzV-xJPvCinnzRWGeMohmiMraUGS3wlftXrtqhmmXCWh0aW0Xrr-GB2aoJBOwSodyJl5DisHXFxMlnk_z6OYrHfl2rU_ByIO4rhUL9zYxT )

--
Kind regards,
Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Re: [Openstack-operators] Neutron not adding iptables rules for metadata agent
Well, right now I've managed to manually add those rules. For now, I'll assume it was caused by the RabbitMQ upgrade I did a few weeks ago. If the issue reappears, I'll make sure to file a bug report.

Thanks,
Radu

> On Jun 29, 2018, at 3:55 PM, Saverio Proto wrote:
>
> Hello,
>
> I would suggest opening a bug on Launchpad to track this issue.
>
> Thank you,
> Saverio
>
> 2018-06-18 12:19 GMT+02:00 Radu Popescu | eMAG, Technology:
>> Hi,
>>
>> We're using OpenStack Ocata, deployed using OpenStack Ansible v15.1.7.
>> Neutron server is v10.0.3.
>> As far as I can see, enable_isolated_metadata and enable_metadata_network
>> are only used for isolated networks that don't have a router, which is not
>> our case.
>> Also, I checked all namespaces on all our novas: only 6 out of 66 were
>> affected, and only 1 namespace per nova. It seems like an isolated case
>> that doesn't happen very often.
>>
>> Could it be RabbitMQ? I'm not sure where to check.
>>
>> Thanks,
>> Radu
>>
>> On Fri, 2018-06-15 at 17:11 +0200, Saverio Proto wrote:
>>> Hello Radu,
>>>
>>> Yours looks more or less like a bug report. Did you check existing
>>> open bugs for Neutron? Also, what version of OpenStack are you running?
>>>
>>> How did you configure the enable_isolated_metadata and
>>> enable_metadata_network options?
>>>
>>> Saverio
>>>
>>> 2018-06-13 12:45 GMT+02:00 Radu Popescu | eMAG, Technology:
>>>> Hi all,
>>>>
>>>> So, I'm having the following issue. I'm creating a VM with a floating IP.
>>>> Everything is fine: the namespace is there, and the POSTROUTING and
>>>> PREROUTING rules from the internal IP to the floating IP are there.
>>>> The only rules missing are the rules to access the metadata service:
>>>>
>>>> -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
>>>> -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x1/0x
>>>>
>>>> (this is taken from another working namespace with iptables-save)
>>>>
>>>> Forgot to mention: the VM boots OK, and I have both the default route and
>>>> the one for the metadata service (cloud-init is running at boot time):
>>>>
>>>> cloud-init[892] ci-info:
>>>> +--------+------+--------------+---------------+-------+-------------------+
>>>> | Device |  Up  |   Address    |      Mask     | Scope |     Hw-Address    |
>>>> +--------+------+--------------+---------------+-------+-------------------+
>>>> |  lo:   | True |  127.0.0.1   |   255.0.0.0   |   .   |         .         |
>>>> | eth0:  | True | 10.240.9.186 | 255.255.252.0 |   .   | fa:16:3e:43:d1:c2 |
>>>> +--------+------+--------------+---------------+-------+-------------------+
>>>>
>>>> cloud-init[892] ci-info: Route IPv4 info:
>>>> +-------+-----------------+------------+-----------------+-----------+-------+
>>>> | Route |   Destination   |  Gateway   |     Genmask     | Interface | Flags |
>>>> +-------+-----------------+------------+-----------------+-----------+-------+
>>>> |   0   |     0.0.0.0     | 10.240.8.1 |     0.0.0.0     |    eth0   |   UG  |
>>>> |   1   |    10.240.1.0   |  0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
>>>> |   2   |    10.240.8.0   |  0.0.0.0   |  255.255.252.0  |    eth0   |   U   |
>>>> |   3   | 169.254.169.254 | 10.240.8.1 | 255.255.255.255 |    eth0   |  UGH  |
>>>> +-------+-----------------+------------+-----------------+-----------+-------+
>>>>
>>>> The extra route is there because the tenant has 2 subnets.
>>>>
>>>> Before adding those 2 rules manually, I had this coming from cloud-init:
>>>>
>>>> [  192.451801] cloud-init[892]: 2018-06-13 12:29:26,179 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [('Connection aborted.', error(113, 'No route to host'))]
>>>> [  193.456805] cloud-init[892]: 2018-06-13 12:29:27,184 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: request error [('Connection aborted.', error(113, 'No route to host'))]
>>>> [  194.461592] cloud-init[892]: 2018-06-13 12:29:28,189 -
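[Editor's note] The missing-rule condition described in this thread can be checked for programmatically. The sketch below is not from the thread: given captured `iptables-save` output from a qrouter namespace, it reports whether the l3-agent's metadata REDIRECT rule is present. The rule fragments are the ones quoted above; the function name and sample strings are hypothetical.

```python
# Sketch only: detect the metadata REDIRECT rule in iptables-save output.
# Rule fragments match the rules quoted in this thread.

def has_metadata_redirect(iptables_save_text):
    """True if some line redirects 169.254.169.254:80 to proxy port 9697."""
    for line in iptables_save_text.splitlines():
        if ("-d 169.254.169.254/32" in line
                and "--dport 80" in line
                and "-j REDIRECT --to-ports 9697" in line):
            return True
    return False

# Hypothetical samples: one namespace missing the rule, one with it present.
broken = "-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-float-snat\n"
working = broken + (
    "-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ "
    "-p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697\n"
)

print(has_metadata_redirect(broken))   # False
print(has_metadata_redirect(working))  # True
```

Matching on fragments rather than the whole line keeps the check tolerant of rule-order and counter differences between iptables versions.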
Re: [Openstack-operators] Neutron not adding iptables rules for metadata agent
Hello,

I would suggest opening a bug on Launchpad to track this issue.

Thank you,
Saverio

2018-06-18 12:19 GMT+02:00 Radu Popescu | eMAG, Technology:
> Hi,
>
> We're using OpenStack Ocata, deployed using OpenStack Ansible v15.1.7.
> Neutron server is v10.0.3.
> As far as I can see, enable_isolated_metadata and enable_metadata_network
> are only used for isolated networks that don't have a router, which is not
> our case.
> Also, I checked all namespaces on all our novas: only 6 out of 66 were
> affected, and only 1 namespace per nova. It seems like an isolated case
> that doesn't happen very often.
>
> Could it be RabbitMQ? I'm not sure where to check.
>
> Thanks,
> Radu
>
> On Fri, 2018-06-15 at 17:11 +0200, Saverio Proto wrote:
>> Hello Radu,
>>
>> Yours looks more or less like a bug report. Did you check existing
>> open bugs for Neutron? Also, what version of OpenStack are you running?
>>
>> How did you configure the enable_isolated_metadata and
>> enable_metadata_network options?
>>
>> Saverio
>>
>> 2018-06-13 12:45 GMT+02:00 Radu Popescu | eMAG, Technology:
>>> Hi all,
>>>
>>> So, I'm having the following issue. I'm creating a VM with a floating IP.
>>> Everything is fine: the namespace is there, and the POSTROUTING and
>>> PREROUTING rules from the internal IP to the floating IP are there.
>>> The only rules missing are the rules to access the metadata service:
>>>
>>> -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
>>> -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x1/0x
>>>
>>> (this is taken from another working namespace with iptables-save)
>>>
>>> Forgot to mention: the VM boots OK, and I have both the default route and
>>> the one for the metadata service (cloud-init is running at boot time):
>>>
>>> cloud-init[892] ci-info:
>>> +--------+------+--------------+---------------+-------+-------------------+
>>> | Device |  Up  |   Address    |      Mask     | Scope |     Hw-Address    |
>>> +--------+------+--------------+---------------+-------+-------------------+
>>> |  lo:   | True |  127.0.0.1   |   255.0.0.0   |   .   |         .         |
>>> | eth0:  | True | 10.240.9.186 | 255.255.252.0 |   .   | fa:16:3e:43:d1:c2 |
>>> +--------+------+--------------+---------------+-------+-------------------+
>>>
>>> cloud-init[892] ci-info: Route IPv4 info:
>>> +-------+-----------------+------------+-----------------+-----------+-------+
>>> | Route |   Destination   |  Gateway   |     Genmask     | Interface | Flags |
>>> +-------+-----------------+------------+-----------------+-----------+-------+
>>> |   0   |     0.0.0.0     | 10.240.8.1 |     0.0.0.0     |    eth0   |   UG  |
>>> |   1   |    10.240.1.0   |  0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
>>> |   2   |    10.240.8.0   |  0.0.0.0   |  255.255.252.0  |    eth0   |   U   |
>>> |   3   | 169.254.169.254 | 10.240.8.1 | 255.255.255.255 |    eth0   |  UGH  |
>>> +-------+-----------------+------------+-----------------+-----------+-------+
>>>
>>> The extra route is there because the tenant has 2 subnets.
>>>
>>> Before adding those 2 rules manually, I had this coming from cloud-init:
>>>
>>> [  192.451801] cloud-init[892]: 2018-06-13 12:29:26,179 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [('Connection aborted.', error(113, 'No route to host'))]
>>> [  193.456805] cloud-init[892]: 2018-06-13 12:29:27,184 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: request error [('Connection aborted.', error(113, 'No route to host'))]
>>> [  194.461592] cloud-init[892]: 2018-06-13 12:29:28,189 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: request error [('Connection aborted.', error(113, 'No route to host'))]
>>> [  195.466441] cloud-init[892]: 2018-06-13 12:29:29,194 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]: request error [('Connection aborted.', error(113, 'No route to host'))]
>>>
>>> I can see no errors in either the nova or neutron services.
>>> In the mean time, I've searched all our nova
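[Editor's note] Since the thread reports that only 6 of 66 namespaces were affected, a quick audit of every router namespace would show the scope of the problem. This is a hypothetical sketch, not from the thread: the `qrouter-` naming matches the default l3-agent convention, but verify it (and run as root) in your own deployment. The live loop is left commented out; the grep test it relies on is demonstrated below it on sample text.

```shell
# Hypothetical audit loop for a node running the neutron-l3-agent.
# Commented out because it touches live namespaces and needs root:
#
#   for ns in $(ip netns list | awk '/^qrouter-/{print $1}'); do
#     ip netns exec "$ns" iptables-save \
#       | grep -q -- '--to-ports 9697' || echo "missing metadata rule: $ns"
#   done
#
# The grep test itself, demonstrated on sample iptables-save output
# (hypothetical) that lacks the metadata REDIRECT rule:
printf '%s\n' '-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-float-snat' \
  | grep -q -- '--to-ports 9697' || echo 'missing metadata rule'
```

The `--` after `grep` keeps the leading dashes of the pattern from being parsed as options.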