[openstack-dev] How does tempest read in the variables defined in tempest.conf?
Hi,

There are variables defined in tempest.conf. How does tempest read them and use them in the tests?

I'm trying to write scenario tests for multiple regions. Under tempest.conf:

[identity]
Region = RegionOne

[compute]
image_ref = b6f85abb-c582-40e4-ad18-5a01431a6bfd
image_ref_alt = b6f85abb-c582-40e4-ad18-5a01431a6bfd

[network]
public_network_id = 51efe3a5-390f-4a40-a480-8aa41d704c69

I'm thinking of changing these variables on the fly within the test, so the test runs against a particular region (region name, image id, public network id). My question is: which tempest variables use these conf values? Is this the right approach, or is there a better way to do it?

Thanks,
Danny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
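For reference, tempest parses tempest.conf via oslo.config and exposes each [section] option as CONF.<section>.<option> (e.g. CONF.identity.region, CONF.compute.image_ref, CONF.network.public_network_id). A minimal sketch of the same lookup, using only the stdlib configparser as a stand-in for oslo.config:

```python
import configparser

# The tempest.conf snippet from the message above, inlined for the sketch.
TEMPEST_CONF = """
[identity]
Region = RegionOne

[compute]
image_ref = b6f85abb-c582-40e4-ad18-5a01431a6bfd
image_ref_alt = b6f85abb-c582-40e4-ad18-5a01431a6bfd

[network]
public_network_id = 51efe3a5-390f-4a40-a480-8aa41d704c69
"""

conf = configparser.ConfigParser()
conf.read_string(TEMPEST_CONF)

# tempest itself would surface these as CONF.identity.region,
# CONF.compute.image_ref and CONF.network.public_network_id.
# (configparser lowercases option names, so "Region" is read as "region".)
region = conf["identity"]["region"]
image_ref = conf["compute"]["image_ref"]
public_network_id = conf["network"]["public_network_id"]
print(region)  # RegionOne
```

Because tempest reads these values once at startup, mutating them per-test is awkward; a commonly suggested alternative is to keep one tempest.conf per region and select it per run via the TEMPEST_CONFIG / TEMPEST_CONFIG_DIR environment variables.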
[openstack-dev] [keystone] Are all CLI commands supported in V3?
Hi,

I'm running keystone V3. It does not seem to support all CLI commands; e.g., there is no "subnet create" command available. Is this expected? How do I create a subnet in this case?

Thanks,
Danny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [devstack] openstack install error with stable/kilo
Hi,

I'm trying to run a devstack install of stable/kilo on Ubuntu 14.04 and getting the following error. Any suggestion on how to resolve it?

2015-07-06 13:25:08.670 | + recreate_database glance
2015-07-06 13:25:08.670 | + local db=glance
2015-07-06 13:25:08.670 | + recreate_database_mysql glance
2015-07-06 13:25:08.670 | + local db=glance
2015-07-06 13:25:08.670 | + mysql -uroot -pmysql -h127.0.0.1 -e 'DROP DATABASE IF EXISTS glance;'
2015-07-06 13:25:08.678 | + mysql -uroot -pmysql -h127.0.0.1 -e 'CREATE DATABASE glance CHARACTER SET utf8;'
2015-07-06 13:25:08.684 | + /usr/local/bin/glance-manage db_sync
2015-07-06 13:25:09.067 | Traceback (most recent call last):
2015-07-06 13:25:09.067 |   File "/usr/local/bin/glance-manage", line 6, in <module>
2015-07-06 13:25:09.067 |     from glance.cmd.manage import main
2015-07-06 13:25:09.067 |   File "/opt/stack/glance/glance/cmd/manage.py", line 47, in <module>
2015-07-06 13:25:09.067 |     from glance.common import config
2015-07-06 13:25:09.067 |   File "/opt/stack/glance/glance/common/config.py", line 31, in <module>
2015-07-06 13:25:09.067 |     from paste import deploy
2015-07-06 13:25:09.067 | ImportError: cannot import name deploy

Thanks,
Danny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
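An ImportError like this usually means the PasteDeploy package is missing (or shadowed by a stale "paste" distribution) in the environment glance-manage runs in; `pip install PasteDeploy` is the usual fix, though whether that is the root cause here is an assumption. A small stdlib-only sketch of checking importability without triggering the full import chain:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be imported, without importing it.

    find_spec raises ModuleNotFoundError when a dotted name's parent
    package is missing, so we treat that as "not available" too.
    """
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

# A stdlib module is always present:
print(module_available("json"))          # True
# The package glance needs here; if this prints False in your devstack
# environment, `pip install PasteDeploy` is the likely fix:
print(module_available("paste.deploy"))
```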
[openstack-dev] Using DevStack in Kilo
When using DevStack with Kilo, after stack.sh the following tracebacks are logged:

localadmin@qa4:/opt/stack/logs$ grep -r Traceback *
g-api.log:2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log:2015-04-16 14:16:20.237 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log:2015-04-16 14:16:21.973 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log:2015-04-16 14:16:24.538 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log.2015-04-16-141127:2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log.2015-04-16-141127:2015-04-16 14:16:20.237 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log.2015-04-16-141127:2015-04-16 14:16:21.973 TRACE glance.registry.client.v1.client Traceback (most recent call last):
g-api.log.2015-04-16-141127:2015-04-16 14:16:24.538 TRACE glance.registry.client.v1.client Traceback (most recent call last):
n-api.log:2015-04-16 14:18:28.163 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log:2015-04-16 14:19:34.470 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log:2015-04-16 14:20:39.624 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log:2015-04-16 14:21:44.879 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log:2015-04-16 14:22:49.676 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log:2015-04-16 14:23:55.475 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log.2015-04-16-141127:2015-04-16 14:18:28.163 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log.2015-04-16-141127:2015-04-16 14:19:34.470 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log.2015-04-16-141127:2015-04-16 14:20:39.624 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log.2015-04-16-141127:2015-04-16 14:21:44.879 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log.2015-04-16-141127:2015-04-16 14:22:49.676 TRACE nova.api.openstack Traceback (most recent call last):
n-api.log.2015-04-16-141127:2015-04-16 14:23:55.475 TRACE nova.api.openstack Traceback (most recent call last):

Traceback #1:

2015-04-16 14:16:17.996 ERROR glance.registry.client.v1.client [req-dc54f751-eb4b-4a03-9146-52623ed733a9 d0feb86a66d54df8b4aff1848c35729e 8afd7feb78ee445b8d642a66c96d49d5] Registry client request GET /images/cirros-0.3.3-x86_64-uec-kernel raised NotFound
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client Traceback (most recent call last):
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File "/opt/stack/glance/glance/registry/client/v1/client.py", line 117, in do_request
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client     **kwargs)
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File "/opt/stack/glance/glance/common/client.py", line 71, in wrapped
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client     return func(self, *args, **kwargs)
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File "/opt/stack/glance/glance/common/client.py", line 376, in do_request
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client     headers=copy.deepcopy(headers))
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File "/opt/stack/glance/glance/common/client.py", line 88, in wrapped
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client     return func(self, method, url, body, headers)
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File "/opt/stack/glance/glance/common/client.py", line 523, in _do_request
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client     raise exception.NotFound(res.read())
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client NotFound: 404 Not Found
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client
2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client The resource could not be found.

Traceback #2:
==
2015-04-16 14:17:28.159 INFO oslo_messaging._drivers.impl_rabbit [req-78c0070a-2a89-452f-8f3b-7aa549b9a3ed admin admin] Connected to AMQP server on 172.29.172.161:5672
2015-04-16 14:18:28.163 ERROR nova.api.openstack [req-78c0070a-2a89-452f-8f3b-7aa549b9a3ed admin admin] Caught error: Timed out waiting for a reply to message ID 79c4306de3fc46aa986664735573100b
2015-04-16 14:18:28.163 TRACE nova.api.openstack Traceback (most recent call last):
2015-04-16 14:18:28.163 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 125, in __call__
2015-04-16 14:18:28.163 TRACE nova.api.openstack     return req.get_response(self.application)
2015-04-16 14:18:28.163 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
[openstack-dev] [qa] Using DevStack for multi-node setup
Hi,

I'm using DevStack to deploy OpenStack in a multi-node setup: Controller, Network, and Compute as 3 separate nodes.

Since the Controller node is stacked first, while the Network node is not yet ready, it fails to create the router instance and the public network; both have to be created manually.

Is this the expected behavior? Is there a workaround to have DevStack create them?

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] What does it mean when a network's admin_state_up = false?
Hi,

I have a VM with an interface attached to network "provider_net-1" and assigned IP 66.0.0.8.

localadmin@qa4:~/devstack$ nova list
| ID                                   | Name | Status | Task State | Power State | Networks                |
| d4815a38-ea64-4189-95b2-fefe82a07b72 | vm-1 | ACTIVE | -          | Running     | provider_net-1=66.0.0.8 |

I verify that ping 66.0.0.8 from the router namespace is successful. Then I set admin_state_up = false for the network:

localadmin@qa4:~/devstack$ neutron net-update --admin_state_up=false provider_net-1
Updated network: provider_net-1
localadmin@qa4:~/devstack$ neutron net-show provider_net-1
| Field                     | Value                                |
| admin_state_up            | False                                |
| id                        | 9532b759-68a2-4dc0-bcd4-b372fccabe3c |
| name                      | provider_net-1                       |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 399                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 8e75c110-9b31-4268-ba5c-e130fa139d32 |
| tenant_id                 | e217fbc20a3b4f4fab49ec580e9b6a15     |

Afterwards, the ping is still successful. I expected the ping to fail since the network's admin_state_up is false.

What is the expected behavior? What does it mean when a network's admin_state_up = false?

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] host aggregate's availability zone
Hi Joe,

No, I did not. I'm not aware of this. Can you tell me exactly what needs to be done?

Thanks,
Danny

--
Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper cropper@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: b36d2234-bee0-4c7b-a2b2-a09cc9098...@gmail.com
Content-Type: text/plain; charset=utf-8

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? And enable the FilterScheduler? These are two common issues related to this.

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

Hi,

I have a multi-node setup with 2 compute hosts, qa5 and qa6. I created 2 host aggregates, each with its own availability zone, and assigned one compute host to each:

localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |

My intent is to control at which compute host to launch a VM via the host aggregate's availability-zone parameter.
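The advice in the reply above boils down to two settings in the scheduler node's nova.conf. A sketch for a Juno/Kilo-era configuration; the exact filter list below is illustrative (keep whatever filters you already have, and just make sure AvailabilityZoneFilter is among them):

```ini
[DEFAULT]
# Use the filter scheduler (this was already the default in Juno-era releases)
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# AvailabilityZoneFilter must be in the active filter list for
# "nova boot --availability-zone az-1 ..." placement to be honored
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```

Restart nova-scheduler after editing for the change to take effect.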
Re: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded
Hi Timur,

When you said the "global" counts, I assumed you were referring to the admin tenant. BTW, I'm launching VMs in the demo tenant.

localadmin@qa4:~/devstack$ keystone --os-tenant-name admin --os-username admin tenant-list
| id                               | name               | enabled |
| 84827057a7444354b0bff11566ccb80b | admin              | True    |
| 5977ba64a5734395a7dc1f8f1dbbac7c | alt_demo           | True    |
| 1b2e5efaeeeb46f2922849b483f09ec1 | demo               | True    |
| 7dbe65974f144993ad3fb165ced85a0e | invisible_to_admin | True    |
| eef9dee7066f4a30be32eaa67f2e40c9 | service            | True    |

localadmin@qa4:~/devstack$ keystone --os-tenant-name admin --os-username admin user-list
| id                               | name     | enabled | email                |
| 9d5fd9947d154a2db396fce177f1f83c | admin    | True    |                      |
| bf51d29350b04a00aef1e701f1f6bb81 | alt_demo | True    | alt_d...@example.com |
| 668cf3505aba4e45b965cf2963942df9 | cinder   | True    |                      |
| 4ddc6d36192c4c34bea3865b4286c90d | demo     | True    | d...@example.com     |
| f37bf45d6d0e4168bb3c18d07dbb39fc | glance   | True    |                      |
| 20376173b10147b6a2111f976bf4e397 | heat     | True    |                      |
| cf8bf98325964d04a4a3708e36d5f09d | neutron  | True    |                      |
| fec102e33eb64c9e8866a5bd0f718d37 | nova     | True    |                      |

localadmin@qa4:~/devstack$ neutron --os-tenant-name admin --os-username admin quota-show --tenant-id 84827057a7444354b0bff11566ccb80b
| Field               | Value |
| floatingip          | 50    |
| network             | 10    |
| port                | 50    |
| router              | 10    |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |

localadmin@qa4:~/devstack$ nova --os-tenant-name admin --os-username admin quota-show --tenant 84827057a7444354b0bff11566ccb80b --user 9d5fd9947d154a2db396fce177f1f83c
| Quota                       | Limit |
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |

Danny

------
Date: Sun, 21 Dec 2014 07:02:05 +0300
From: Timur Nurlygayanov tnurlygaya...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded
Message-ID: cahcyybnapbhywfvkjxghn7fckg1wp4sdzf8tujqbjj1lgk2...@mail.gmail.com
Content-Type: text/plain; charset=utf-8

Hi Danny,

what about the global ports count and quotas?

On Sun, Dec 21, 2014 at 1:32 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

Hi,

The default quota for port is 50.

localadmin@qa4:~/devstack$ neutron quota-show --tenant-id 1b2e5efaeeeb46f2922849b483f09ec1
| Field               | Value |
| floatingip          | 50    |
| network             | 10    |
| port                | 50    |
| router              | 10    |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |

Total number of ports used so far is 40.

localadmin@qa4:~/devstack$ nova list
[openstack-dev] [qa] host aggregate's availability zone
Hi,

I have a multi-node setup with 2 compute hosts, qa5 and qa6. I created 2 host aggregates, each with its own availability zone, and assigned one compute host to each:

localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |

My intent is to control at which compute host to launch a VM via the host aggregate's availability-zone parameter. To test, I specify --availability-zone=az-1 for vm-1, and --availability-zone=az-2 for vm-2:

localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1
| Property                             | Value                                                          |
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0066                                                  |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | -                                                              |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | kxot3ZBZcBH6                                                   |
| config_drive                         |                                                                |
| created                              | 2014-12-21T15:59:03Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | 854acae9-b718-4ea5-bc28-e0bc46378b60                           |
| image                                | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | vm-1                                                           |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | BUILD                                                          |
| tenant_id                            | 84827057a7444354b0bff11566ccb80b                               |
| updated                              | 2014-12-21T15:59:03Z                                           |
| user_id                              | 9d5fd9947d154a2db396fce177f1f83c
[openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded
Hi,

The default quota for port is 50.

localadmin@qa4:~/devstack$ neutron quota-show --tenant-id 1b2e5efaeeeb46f2922849b483f09ec1
| Field               | Value |
| floatingip          | 50    |
| network             | 10    |
| port                | 50    |
| router              | 10    |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |

Total number of ports used so far is 40.

localadmin@qa4:~/devstack$ nova list
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                   |
| 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1                                         | ACTIVE | -          | Running     | private_net30=30.0.0.44                                                    |
| 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2                                         | ACTIVE | -          | Running     | private_net30=30.0.0.45                                                    |
| 10ad850e-ed9d-42d9-8743-b8eda4107edc | cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | -          | Running     | private_net20=20.0.0.38; private=10.0.0.52                                 |
| 18209b40-09e7-4718-b04f-40a01a8e5993 | cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | -          | Running     | private_net20=20.0.0.40; private=10.0.0.54                                 |
| 1ededa1e-c820-4915-adf2-4be8eedaf012 | cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | -          | Running     | private_net20=20.0.0.41; private=10.0.0.55                                 |
| 3688262e-d00f-4263-91a7-785c40f4ae0f | cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | -          | Running     | private_net20=20.0.0.34; private=10.0.0.49                                 |
| 4620663f-e6e0-4af2-84c0-6108279cbbed | cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | -          | Running     | private_net20=20.0.0.37; private=10.0.0.51                                 |
| 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | -          | Running     | private_net20=20.0.0.39; private=10.0.0.53                                 |
| a228f33b-0388-464e-af49-b55af9601f56 | cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | -          | Running     | private_net20=20.0.0.42; private=10.0.0.56                                 |
| def5a255-0c9d-4df0-af02-3944bf5af2db | cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | -          | Running     | private_net20=20.0.0.36; private=10.0.0.50                                 |
| e1470813-bf4c-4989-9a11-62da47a5c4b4 | cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | -          | Running     | private_net20=20.0.0.33; private=10.0.0.48                                 |
| f63390fa-2169-45c0-bb02-e42633a08b8f | cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | -          | Running     | private_net20=20.0.0.35; private=10.0.0.47                                 |
| 2c34956d-4bf9-45e5-a9de-84d3095ee719 | vm--2c34956d-4bf9-45e5-a9de-84d3095ee719     | ACTIVE | -          | Running     | private_net30=30.0.0.39; private_net50=50.0.0.29; private_net40=40.0.0.29  |
| 680c55f5-527b-49e3-847c-7794e1f8e7a8 | vm--680c55f5-527b-49e3-847c-7794e1f8e7a8     | ACTIVE | -          | Running     | private_net30=30.0.0.41; private_net50=50.0.0.30; private_net40=40.0.0.31  |
| ade4c14b-baf7-4e57-948e-095689f73ce3 | vm--ade4c14b-baf7-4e57-948e-095689f73ce3     | ACTIVE | -          | Running     | private_net30=30.0.0.43; private_net50=50.0.0.32; private_net40=40.0.0.33  |
| c91e426a-ed68-4659-89f6-df6d1154bb16 | vm--c91e426a-ed68-4659-89f6-df6d1154bb16     | ACTIVE | -          | Running     | private_net30=30.0.0.42; private_net50=50.0.0.33; private_net40=40.0.0.32  |
| cedd9984-79f0-46b3-897d-b301cfa74a1a | vm--cedd9984-79f0-46b3-897d-b301cfa74a1a     | ACTIVE | -          | Running     | private_net30=30.0.0.40; private_net50=50.0.0.31; private_net40=40.0.0.30  |
| ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28     | ACTIVE | -          | Running     | private_net30=30.0.0.38; private_net50=50.0.0.28; private_net40=40.0.0.28  |
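As a back-of-the-envelope check against the port quota (the three-ports-per-VM figure below is an assumption drawn from the vm--* rows above, which each attach net30, net40 and net50):

```python
# Hypothetical capacity check against the neutron port quota.
# port_quota and ports_in_use come from the message above; ports_per_vm
# assumes each new VM attaches three networks, i.e. needs three ports.
port_quota = 50
ports_in_use = 40
ports_per_vm = 3

remaining_ports = port_quota - ports_in_use
vms_that_still_fit = remaining_ports // ports_per_vm
print(remaining_ports, vms_that_still_fit)  # 10 3
```

Under these assumptions only three more 3-NIC VMs fit before a boot fails with the "maximum number of ports exceeded" error, which is consistent with hitting the quota rather than a bug.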
[openstack-dev] [qa] Very first VM launched won't respond to ARP request
Hi,

I have seen this issue consistently. I freshly install Ubuntu 14.04 onto Cisco UCS and use devstack to deploy OpenStack (stable Juno) to make it a Compute node.

The very first VM launched at this node won't respond to ARP requests (I ping from the router namespace). The Linux bridge tap interface shows it's sending packets to the VM, and tcpdump confirms it.

qbr8a29c673-4f Link encap:Ethernet HWaddr b2:76:d7:47:c2:fe
  inet6 addr: fe80::98ac:73ff:fea8:8be1/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:1137 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:49528 (49.5 KB) TX bytes:648 (648.0 B)

qvb8a29c673-4f Link encap:Ethernet HWaddr b2:76:d7:47:c2:fe
  inet6 addr: fe80::b076:d7ff:fe47:c2fe/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
  RX packets:1132 errors:0 dropped:0 overruns:0 frame:0
  TX packets:22 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:63592 (63.5 KB) TX bytes:3228 (3.2 KB)

qvo8a29c673-4f Link encap:Ethernet HWaddr 9a:2b:5e:e4:22:f9
  inet6 addr: fe80::982b:5eff:fee4:22f9/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
  RX packets:22 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1132 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:3228 (3.2 KB) TX bytes:63592 (63.5 KB)

tap8a29c673-4f Link encap:Ethernet HWaddr fe:16:3e:12:49:10
  inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:7 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1143 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:500
  RX bytes:2022 (2.0 KB) TX bytes:64490 (64.4 KB)

localadmin@qa6:~/devstack$ ifconfig tap8a29c673-4f
tap8a29c673-4f Link encap:Ethernet HWaddr fe:16:3e:12:49:10
  inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:7 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1236 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:500
  RX bytes:2022 (2.0 KB) TX bytes:69698 (69.6 KB)

localadmin@qa6:~/devstack$ ifconfig tap8a29c673-4f
tap8a29c673-4f Link encap:Ethernet HWaddr fe:16:3e:12:49:10
  inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:7 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1239 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:500
  RX bytes:2022 (2.0 KB) TX bytes:69866 (69.8 KB)

localadmin@qa6:~/devstack$ sudo tcpdump -i tap8a29c673-4f
tcpdump: WARNING: tap8a29c673-4f: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap8a29c673-4f, link-type EN10MB (Ethernet), capture size 65535 bytes
13:07:31.678751 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42
13:07:32.678813 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42
13:07:32.678838 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42
13:07:33.678778 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42
13:07:34.678840 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42

Usually I would reboot the VM, and the ping works fine afterwards.

localadmin@qa6:~/devstack$ sudo tcpdump -i tap8a29c673-4f
tcpdump: WARNING: tap8a29c673-4f: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap8a29c673-4f, link-type EN10MB (Ethernet), capture size 65535 bytes
13:13:18.154711 IP 10.0.0.1 > 10.0.0.14: ICMP echo request, id 25711, seq 32, length 64
13:13:18.154996 IP 10.0.0.14 > 10.0.0.1: ICMP echo reply, id 25711, seq 32, length 64
13:13:19.156244 IP 10.0.0.1 > 10.0.0.14: ICMP echo request, id 25711, seq 33, length 64
13:13:19.156502 IP 10.0.0.14 > 10.0.0.1: ICMP echo reply, id 25711, seq 33, length 64

Looking for suggestions on how to debug this issue.

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] Question about nova boot --min-count number
Hi,

According to the help text, "--min-count number" boots at least that number of servers (limited by quota):

--min-count number  Boot at least number servers (limited by quota).

I used devstack to deploy OpenStack (version Kilo) in a multi-node setup: 1 Controller/Network + 2 Compute nodes.

I updated the tenant demo default quotas "instances" and "cores" from 10 and 20 to 100 and 200:

localadmin@qa4:~/devstack$ nova quota-show --tenant 62fe9a8a2d58407d8aee860095f11550 --user eacb7822ccf545eab9398b332829b476
| Quota                       | Limit |
| instances                   | 100   |
| cores                       | 200   |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |

When I boot 50 VMs using "--min-count 50", only 48 VMs come up:

localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5b464333-bad0-4fc1-a2f0-310c47b77a17 --min-count 50 vm-

There is no error in the logs, and it happens consistently. I also tried "--min-count 60" and only 48 VMs come up.

In Horizon, the left pane "Admin" > "System" > "Hypervisors" shows both Compute hosts, each with 32 total VCPUs for a grand total of 64, but only 48 used.

Is this normal behavior, or is there any other setting to change in order to use all 64 VCPUs?

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
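The scheduler admits instances until the first resource limit binds, so one way to reason about the 48-VM ceiling is to compute each candidate limit and take the minimum. The sketch below uses the numbers from the message plus an assumed CPU overcommit ratio of 1.0; since 48 is below every limit computed here, something not visible in the message (e.g. per-host RAM with the default RamFilter) is probably the binding constraint:

```python
# Hypothetical capacity estimate; quota numbers and the 64-vCPU total
# come from the message above, cpu_ratio = 1.0 is an assumption (nova's
# default overcommit ratio is actually higher).
flavor = {"vcpus": 1, "ram_mb": 512}                       # m1.tiny
quota = {"instances": 100, "cores": 200, "ram_mb": 51200}  # updated demo quota

total_vcpus = 64      # 2 hosts x 32 VCPUs, per Horizon
cpu_ratio = 1.0

by_quota_instances = quota["instances"]
by_quota_cores = quota["cores"] // flavor["vcpus"]
by_quota_ram = quota["ram_mb"] // flavor["ram_mb"]
by_host_vcpus = int(total_vcpus * cpu_ratio) // flavor["vcpus"]

capacity = min(by_quota_instances, by_quota_cores, by_quota_ram, by_host_vcpus)
print(capacity)  # 64 under these assumptions
```

Because 48 < 64, none of the limits listed here explains the observed cap; checking each host's free RAM against the flavor's memory footprint would be the next step.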
[openstack-dev] [devstack] localrc for multi-node setup
Hi,

I would like to use devstack to deploy OpenStack on a multi-node setup, i.e. separate Controller, Network and Compute nodes.

What is the localrc for each node? I would assume, for example, that we don't need to enable the neutron service at the Controller node, etc.

Does anyone have a localrc file for each node type that they can share?

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
Both "delete" and "force-delete" did not work for me; they failed to remove the VM.

Danny

Date: Sun, 7 Dec 2014 21:17:30 +0530
From: foss geek thefossg...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
Message-ID: cadxhynxtvakcg58s2_ym5koozm6yufk9urok7wxtceya7oa...@mail.gmail.com
Content-Type: text/plain; charset=utf-8

Also try with nova force-delete after reset:

$ nova help force-delete
usage: nova force-delete server

Force delete a server.

Positional arguments:
  server  Name or ID of server.

--
Thanks & Regards
E-Mail: thefossg...@gmail.com IRC: neophy Blog: http://lmohanphy.livejournal.com/

On Sun, Dec 7, 2014 at 9:10 PM, foss geek thefossg...@gmail.com wrote:

Have you tried to delete after reset?

# nova reset-state --active <Name or ID of server>
# nova delete <Name or ID of server>

It works well for me if the VM is in ERROR state.

On Sun, Dec 7, 2014 at 7:17 PM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

That does not work. It puts the VM in ACTIVE status, but in NOSTATE power state. A subsequent delete still won't remove the VM.

| ID                                   | Name                                         | Status | Task State | Power State | Networks |
| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ACTIVE | -          | NOSTATE     |          |

Regards,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] How to delete a VM which is in ERROR state?
Hi,

I have a VM which is in ERROR state.

| ID                                   | Name                                         | Status | Task State | Power State | Networks |
| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |          |

I tried both the CLI "nova delete" and Horizon "terminate instance". Both accepted the delete command without any error; however, the VM never got deleted.

Is there a way to remove the VM?

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM?
Hi Andrea,

Though both interfaces come up, only one will respond to a ping from the neutron router. When I disable it, the second one will respond to ping instead. So it looks like only one interface is usable at a time.

My question is: is there any useful case for this, i.e., why would you do this?

Thanks,
Danny

Date: Tue, 2 Dec 2014 10:44:57 +
From: Andrea Frittoli andrea.fritt...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM?
Message-ID: cab7wygv4+ji-tj5jkvg98kw0hxot8zlnuk+nzvjywfdijio...@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Hello Danny,

I think so. Any special concern with a VM using more than one port on a subnet?

andrea

On 2 December 2014 at 02:04, Danny Choi (dannchoi) dannc...@cisco.com wrote:

Hi,

When I attach 2 interfaces from the same subnet to a VM, there is no error returned and both interfaces come up.
lab@tme211:/opt/stack/logs$ nova interface-attach --net-id e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1
lab@tme211:/opt/stack/logs$ nova list
| ID                                   | Name     | Status | Task State | Power State | Networks          |
| 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | -          | Running     | private=10.0.0.10 |
lab@tme211:/opt/stack/logs$ nova interface-attach --net-id e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1
lab@tme211:/opt/stack/logs$ nova list
| ID                                   | Name     | Status | Task State | Power State | Networks                     |
| 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | -          | Running     | private=10.0.0.10, 10.0.0.11 |

$ ifconfig
eth0 Link encap:Ethernet HWaddr FA:16:3E:92:2D:2B
  inet addr:10.0.0.10 Bcast:10.0.0.255 Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe92:2d2b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:514 errors:0 dropped:0 overruns:0 frame:0
  TX packets:307 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:48342 (47.2 KiB) TX bytes:41750 (40.7 KiB)

eth1 Link encap:Ethernet HWaddr FA:16:3E:EF:55:BC
  inet addr:10.0.0.11 Bcast:10.0.0.255 Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:feef:55bc/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:49 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:3556 (3.4 KiB) TX bytes:1120 (1.0 KiB)

Should this operation be allowed?

Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] How to perform VMs live migration?
Hi,

When I try "nova host-evacuate-live", I got the following error:

Error while live migrating instance: tme209 is not on shared storage: Live migration can not be used without shared storage.

What do I need to do to create a shared storage?

Thanks, Danny
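The simplest answer to the question above is usually an NFS export of the instances directory, mounted at the same path on every compute node so libvirt sees the same disk files everywhere. A minimal sketch, assuming the devstack default instances path and a hypothetical host named nfs-server; the nova/libvirt UID and GID must match across nodes, and this is a config fragment to adapt, not a drop-in script:

```shell
# On the storage host (hypothetical name: nfs-server)
echo "/opt/stack/data/nova/instances *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra

# On every compute node: mount the export at the SAME path nova uses
# (instances_path in nova.conf; devstack default shown here)
mount -t nfs nfs-server:/opt/stack/data/nova/instances /opt/stack/data/nova/instances
```

Once all compute nodes share instances_path, "nova host-evacuate-live" and "nova live-migration" should stop complaining about shared storage.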
[openstack-dev] Launching VM in multi-node setup
Hi,

In a multi-node setup with multiple Compute nodes, is there a way to control which Compute node a VM lands on when launching it? E.g. I would like to have VM-1 at Compute-1, VM-2 at Compute-2, etc.

Thanks, Danny
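Yes: with admin credentials you can bypass the scheduler using the availability-zone "zone:host" syntax on boot. A sketch (the hostnames compute-1/compute-2 are hypothetical; "nova" is the default zone name) that builds the command string so the syntax is visible:

```shell
# Admin-only: pin an instance to a specific hypervisor host with
# --availability-zone <zone>:<host>. Hostname below is hypothetical.
target_host="compute-1"
boot_cmd="nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --availability-zone nova:${target_host} vm-1"
echo "$boot_cmd"

# After boot, verify placement (admin-only field):
#   nova show vm-1 | grep OS-EXT-SRV-ATTR:host
```

For non-admin or policy-driven placement, host aggregates plus flavor extra_specs are the usual alternative.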
[openstack-dev] [api] Networking API Create network missing Request parameters
Hi,

In neutron, a user with the "admin" role can specify the provider network parameters when creating a network:

--provider:network_type
--provider:physical_network
--provider:segmentation_id

localadmin@qa4:~/devstack$ neutron net-create test-network --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 400
Created a new network:
| Field | Value |
| admin_state_up | True |
| id | 389caa09-da54-4713-b869-12f7389cb9c6 |
| name | test-network |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 400 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 92edf0cd20bf4085bb9dbe1b9084aadb |

However, the Networking API v2.0 reference (http://developer.openstack.org/api-ref-networking-v2.html) "Create network" does not list them as request parameters. Is this a documentation error?

Thanks, Danny
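Most likely it is a documentation gap rather than a missing feature: the provider:* attributes come from the "provider networks" API extension, not the core network resource, so the core "Create network" table omits them even though the API accepts them. The request body for POST /v2.0/networks with those attributes looks like the sketch below (values copied from the CLI example above):

```python
import json

# Body for POST /v2.0/networks using the provider extension attributes
# (admin-only); values match the neutron net-create example above.
body = {
    "network": {
        "name": "test-network",
        "provider:network_type": "vlan",
        "provider:physical_network": "physnet1",
        "provider:segmentation_id": 400,
    }
}

# This is what a client would send as the JSON payload.
print(json.dumps(body, sort_keys=True))
```

The same three keys appear in the response body, as the CLI output above shows.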
[openstack-dev] [devstack] Question about the OVS_PHYSICAL_BRIDGE attribute defined in localrc
Hi,

When I have OVS_PHYSICAL_BRIDGE=br-p1p1 defined in localrc, devstack creates the OVS bridge br-p1p1:

localadmin@qa4:~/devstack$ sudo ovs-vsctl show
5f845d2e-9647-47f2-b92d-139f6faaf39e
    Bridge br-p1p1
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
        Port br-p1p1
            Interface br-p1p1
                type: internal

However, no physical port is added to it. I have to do it manually:

localadmin@qa4:~/devstack$ sudo ovs-vsctl add-port br-p1p1 p1p1
localadmin@qa4:~/devstack$ sudo ovs-vsctl show
5f845d2e-9647-47f2-b92d-139f6faaf39e
    Bridge br-p1p1
        Port phy-br-p1p1
            Interface phy-br-p1p1
                type: patch
                options: {peer=int-br-p1p1}
        Port br-p1p1
            Interface br-p1p1
                type: internal
        Port "p1p1"
            Interface "p1p1"

Is this expected behavior?

Thanks, Danny
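As far as I know this is expected: devstack creates the bridge named by OVS_PHYSICAL_BRIDGE and wires up the patch ports, but attaching the physical NIC is left to the operator, since devstack cannot safely guess which interface to steal. A localrc sketch for VLAN provider networking, with the manual step as a comment (the VLAN range is an example value, and this is a config fragment, not a runnable script):

```shell
# localrc (Neutron/OVS with VLAN provider networks)
OVS_PHYSICAL_BRIDGE=br-p1p1
PHYSICAL_NETWORK=physnet1
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=400:499   # example range

# After stack.sh, attach the physical NIC to the bridge yourself:
#   sudo ovs-vsctl add-port br-p1p1 p1p1
```

Some deployments script the add-port into a local.sh hook so it survives re-stacking.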
[openstack-dev] [qa] nova get-password does not seem to work
Hi,

I used devstack to deploy Juno OpenStack. I spun up an instance with cirros-0.3.2-x86_64-uec. By default, the username/password is cirros/cubswin:). When I execute the command "nova get-password", nothing is returned.

localadmin@qa4:/etc/nova$ nova show vm1
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-10-15T14:48:04.00 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-10-15T14:47:56Z |
| flavor | m1.tiny (1) |
| hostId | ea715752b11cf96b95f9742513a351d2d6571c4fdb76f497d64ecddb |
| id | 1a3c487e-c3a3-4783-bd0b-e3c87bf22c3f |
| image | cirros-0.3.2-x86_64-uec (1dda953b-9319-4c43-bd20-1ef75b491553) |
| key_name | cirros-key |
| metadata | {} |
| name | vm1 |
| os-extended-volumes:volumes_attached | [] |
| private network | 10.0.0.11 |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | c8daf9bd6dda40a982b074322c08da7d |
| updated | 2014-10-15T14:48:04Z |
| user_id | 2cbbafae01404d4ebeb6e6fbacfa6546 |

localadmin@qa4:/etc/nova$ nova help get-password
usage: nova get-password server [private-key]

Get password for a server.

Positional arguments:
  server       Name or ID of server.
  private-key  Private key (used locally to decrypt password) (Optional).
               When specified, the command displays the clear (decrypted) VM
               password. When not specified, the ciphered VM password is
               displayed.

localadmin@qa4:/etc/nova$ nova get-password vm1
[NOTHING RETURNED]
localadmin@qa4:/etc/nova$

Am I missing something?

Thanks, Danny
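Empty output here is most likely expected rather than a bug: nova get-password only returns data if an agent inside the guest encrypts a password with the instance keypair's public key and posts it back through the metadata service (cloudbase-init on Windows images does this; a stock cirros image does not), so for cirros the stored ciphertext is simply empty. For a guest that does support it, the usual invocations are sketched below (the key path is an example, matching the cirros-key keypair shown in nova show):

```shell
# Prints the base64 ciphertext stored by the in-guest agent
# (empty when the guest never posted a password, as with cirros)
nova get-password vm1

# Pass the keypair's private key so the client decrypts locally
nova get-password vm1 ~/.ssh/cirros-key.pem
```

For cirros, just log in on the console with cirros/cubswin:) or over SSH with the keypair.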
[openstack-dev] [QA] How to attach multiple NICs to an instance VM?
Hi,

"nova help boot" shows the following:

--nic net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid
    Create a NIC on the server. Specify option multiple times to create multiple NICs. net-id: attach NIC to network with this UUID (either port-id or net-id must be provided), v4-fixed-ip: IPv4 fixed address for NIC (optional), v6-fixed-ip: IPv6 fixed address for NIC (optional), port-id: attach NIC to port with this UUID (either port-id or net-id must be provided). NOTE: Specify option multiple times to create multiple NICs.

I have two private networks and one public network (for floating IPs) configured.

localadmin@qa4:~/devstack$ nova net-list
| ID | Label | CIDR |
| 6905cf7d-74d7-455b-b9d0-8cea972ec522 | private | None |
| 8c25e33b-47be-47eb-a945-e0ac2ad6756a | Private_net20 | None |
| faa138e6-4774-41ad-8b5f-9795788eca43 | public | None |

When I launch an instance, I specify the "--nic" option twice:

localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=6905cf7d-74d7-455b-b9d0-8cea972ec522 --nic net-id=8c25e33b-47be-47eb-a945-e0ac2ad6756a vm10

And then I associate a floating IP to the instance.
localadmin@qa4:~/devstack$ nova list +--+--+++-+--+ | ID | Name | Status | Task State | Power State | Networks | +--+--+++-+--+ | e6a13d2e-756b-4b96-bf0c-438c2c875675 | vm10 | ACTIVE | - | Running | Private_net20=20.0.0.10; private=10.0.0.7, 172.29.173.13 | localadmin@qa4:~/devstack$ nova show vm10 +--++ | Property | Value | +--++ | OS-DCF:diskConfig| MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state| - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-10-15T20:22:50.00 | | OS-SRV-USG:terminated_at | - | | Private_net20 network| 20.0.0.10 | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-10-15T20:21:54Z | | flavor | m1.tiny (1) | | hostId | 4660a679d319992f764bcb245b71048212fe8cd67b769400d82382b7 | | id | e6a13d2e-756b-4b96-bf0c-438c2c875675 | | image| cirros-0.3.2-x86_64-uec (feaec710-c1cc-4071-aefa-c3dc2b915ab1) | | key_name | - | | metadata | {} | | name | vm10 | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.7, 172.29.173.13 | |
Re: [openstack-dev] [QA] How to attach multiple NICs to an instance VM?
Hi Salvatore, eth1 is not configured in /etc/network/interfaces. After I manually added eth1 and bounced it, it came up with the 2nd private address. $ sudo vi /etc/network/interfaces # Configure Loopback auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet dhcp ~ ~ ~ $ sudo ifdown eht1 sudo ifup eth1 ifdown: interface eht1 not configured udhcpc (v1.20.1) started Sending discover... Sending select for 20.0.0.10... Lease of 20.0.0.10 obtained, lease time 86400 deleting routers adding dns 8.8.4.4 adding dns 8.8.8.8 $ ifconfig -a eth0 Link encap:Ethernet HWaddr FA:16:3E:7A:49:1E inet addr:10.0.0.7 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe7a:491e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:707 errors:0 dropped:0 overruns:0 frame:0 TX packets:446 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:66680 (65.1 KiB) TX bytes:57968 (56.6 KiB) eth1 Link encap:Ethernet HWaddr FA:16:3E:73:C7:F0 inet addr:20.0.0.10 Bcast:20.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe73:c7f0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:39 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3354 (3.2 KiB) TX bytes:1098 (1.0 KiB) loLink encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:336 (336.0 B) TX bytes:336 (336.0 B) $ ping 10.0.0.7 PING 10.0.0.7 (10.0.0.7): 56 data bytes 64 bytes from 10.0.0.7: seq=0 ttl=64 time=0.138 ms 64 bytes from 10.0.0.7: seq=1 ttl=64 time=0.041 ms 64 bytes from 10.0.0.7: seq=2 ttl=64 time=0.066 ms ^C --- 10.0.0.7 ping statistics --- 3 packets transmitted, 3 packets received, 0% packet loss round-trip min/avg/max = 
0.041/0.081/0.138 ms $ ping 20.0.0.10 PING 20.0.0.10 (20.0.0.10): 56 data bytes 64 bytes from 20.0.0.10: seq=0 ttl=64 time=0.078 ms 64 bytes from 20.0.0.10: seq=1 ttl=64 time=0.041 ms ^C --- 20.0.0.10 ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.041/0.059/0.078 ms $ Thanks, Danny === Date: Thu, 16 Oct 2014 00:10:20 +0200 From: Salvatore Orlando sorla...@nicira.commailto:sorla...@nicira.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [QA] How to attach multiple NICs to an instance VM? Message-ID: CAGR=i3jeuz6-peghjze-hnh2yvn8ykmbn4ies4dtqc3b2xl...@mail.gmail.commailto:CAGR=i3jeuz6-peghjze-hnh2yvn8ykmbn4ies4dtqc3b2xl...@mail.gmail.com Content-Type: text/plain; charset=utf-8 I think you did everything right. Are you sure cirros images by default are configured to boostrap interfaces different from eth0? Perhaps all you need to do is just ifup the interface... have you already tried that? Salvatore On 15 October 2014 23:07, Danny Choi (dannchoi) dannc...@cisco.commailto:dannc...@cisco.com wrote: Hi, ?nova help boot? shows the following: --nic net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid Create a NIC on the server. Specify option multiple times to create multiple NICs. net- id: attach NIC to network with this UUID (either port-id or net-id must be provided), v4-fixed-ip: IPv4 fixed address for NIC (optional), v6-fixed-ip: IPv6 fixed address for NIC (optional), port-id: attach NIC to port with this UUID (either port-id or net-id must be provided). NOTE: Specify option multiple times to create multiple NICs. I have two private networks and one public network (for floating IPs) configured. 
localadmin@qa4:~/devstack$ nova net-list +--+---+--+ | ID | Label | CIDR | +--+---+--+ | 6905cf7d-74d7-455b-b9d0-8cea972ec522 | private | None | | 8c25e33b-47be-47eb-a945-e0ac2ad6756a | Private_net20 | None | | faa138e6-4774-41ad-8b5f-9795788eca43 | public| None
Re: [openstack-dev] [qa] Cannot start the VM console when VM is launched at Compute node
nova.console.websocketproxy File "/usr/lib/python2.7/BaseHTTPServer.py", line 340, in handle
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy     self.handle_one_request()
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy   File "/usr/lib/python2.7/BaseHTTPServer.py", line 328, in handle_one_request
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy     method()
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy   File "/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line 506, in do_GET
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy     if not self.handle_websocket():
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy   File "/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line 494, in handle_websocket
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy     self.new_websocket_client()
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy   File "/opt/stack/nova/nova/console/websocketproxy.py", line 74, in new_websocket_client
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy     raise exception.InvalidToken(token=token)
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy InvalidToken: The token '9ced0dd0-f146-42eb-9b26-c64a29443936' is invalid or has expired
2014-10-15 15:11:06.652 TRACE nova.console.websocketproxy
2014-10-15 15:11:06.658 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 7 from (pid=21242) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2014-10-15 15:11:50.404 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 4 from (pid=21242) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2014-10-15 15:11:50.405 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 0 from (pid=21242) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2014-10-15 15:11:50.405 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 0 from (pid=21242)
vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824

Devstack is used to deploy OpenStack. I enabled the "n-novnc" service at the Compute node. Below is a snippet of the localrc.

Compute:
# Services
disable_all_services
ENABLED_SERVICES=neutron,n-cpu,rabbit,q-api,q-agt,n-novnc

Controller:
# Services
disable_service n-net
disable_service n-cpu
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron

The process n-novnc is running at both the Controller and Compute nodes. Is this a misconfiguration issue?

Thanks, Danny

---
Date: Wed, 15 Oct 2014 08:10:00 -0700
From: Vishvananda Ishaya <vishvana...@gmail.com>
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [qa] Cannot start the VM console when VM is launched at Compute node

No this is not expected and may represent a misconfiguration or a bug. Something is returning a 404 when it shouldn't. You might get more luck running the nova command with --debug to see what specifically is 404ing. You could also see if anything is reporting NotFound in the nova-consoleauth or nova-api or nova-compute logs.

Vish

On Oct 14, 2014, at 10:45 AM, Danny Choi (dannchoi) <dannc...@cisco.com> wrote: Hi, I used devstack to deploy multi-node OpenStack, with Controller + nova-compute + Network on one physical node (qa4), and Compute on a separate physical node (qa5). When I launch a VM which spun up on the Compute node (qa5), I cannot launch the VM console, in both CLI and Horizon.
localadmin@qa4:~/devstack$ nova hypervisor-servers q +--+---+---+-+ | ID | Name | Hypervisor ID | Hypervisor Hostname | +--+---+---+-+ | 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | instance-000e | 1 | qa4 | | 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | instance-000f | 1 | qa4 | | 056d4ad2-e081-4706-b7d1-84ee281e65fc | instance-0010 | 2 | qa5 | +--+---+---+-+ localadmin@qa4:~/devstack$ nova list +--+--+++-+-+ | ID | Name | Status | Task State | Power State | Networks
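Running n-novnc on both nodes is a plausible culprit for the InvalidToken trace: with two proxies, a console token minted through one can land on the other and look expired. The usual multi-node layout runs the novnc proxy only on the controller and points the compute node's VNC settings back at it. A compute-node localrc sketch under that assumption (172.29.172.161 is the controller address from the URL in this thread; the VNCSERVER_* variables are devstack's knobs for the nova.conf vncserver_listen/vncserver_proxyclient_address options):

```shell
# Compute-node localrc: note NO n-novnc here; it runs on the controller only
ENABLED_SERVICES=neutron,n-cpu,rabbit,q-agt

# Let the controller's proxy reach this hypervisor's VNC servers
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

# Equivalent nova.conf setting on the compute node:
#   [DEFAULT]
#   novncproxy_base_url = http://172.29.172.161:6080/vnc_auto.html
```

After restacking the compute node this way, "nova get-vnc-console" should hand out tokens that the single proxy can validate.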
[openstack-dev] [qa] Cannot start the VM console when VM is launched at Compute node
Hi,

I used devstack to deploy multi-node OpenStack, with Controller + nova-compute + Network on one physical node (qa4), and Compute on a separate physical node (qa5). When I launch a VM which spun up on the Compute node (qa5), I cannot launch the VM console, in both CLI and Horizon.

localadmin@qa4:~/devstack$ nova hypervisor-servers q
| ID | Name | Hypervisor ID | Hypervisor Hostname |
| 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | instance-000e | 1 | qa4 |
| 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | instance-000f | 1 | qa4 |
| 056d4ad2-e081-4706-b7d1-84ee281e65fc | instance-0010 | 2 | qa5 |

localadmin@qa4:~/devstack$ nova list
| ID | Name | Status | Task State | Power State | Networks |
| 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | vm1 | ACTIVE | - | Running | private=10.0.0.17 |
| 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | vm2 | ACTIVE | - | Running | private=10.0.0.16, 172.29.173.4 |
| 056d4ad2-e081-4706-b7d1-84ee281e65fc | vm3 | ACTIVE | - | Running | private=10.0.0.18, 172.29.173.5 |

localadmin@qa4:~/devstack$ nova get-vnc-console vm3 novnc
ERROR (CommandError): No server with a name or ID of 'vm3' exists. [ERROR]

This does not happen if the VM resides at the Controller (qa4).

localadmin@qa4:~/devstack$ nova get-vnc-console vm2 novnc
| Type | Url |
| novnc | http://172.29.172.161:6080/vnc_auto.html?token=f556dea2-125d-49ed-bfb7-55a9a7714b2e |

Is this expected behavior?

Thanks, Danny
[openstack-dev] [qa] How to troubleshoot why a VM at Compute node won't respond to ARP request from Neutron router
Hi, Using devstack to deploy OpenStack, I have Controller + Network running at one physical node and Compute at a separate node. I launched a VM at the Compute node with a private address 10.0.0.2 (Neutron router interface is 10.0.0.1). At the Controller node, in the qrouter namespace, I could not ping the VM private address 10.0.0.2. At the Compute node, tcpdump of the tap interface indicated ARP requests were received. However, it did not show any ARP response. My understanding is that the VM’s virtual interface is directly connected to this tap interface. Since the VM is unreachable, I cannot launch its console to see if the ARP requests are received at the virtual interface. Any suggestions on how to troubleshoot this? localadmin@qa4:~/devstack$ nova show vm1 +--++ | Property | Value | +--++ | OS-DCF:diskConfig| MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state| - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-10-12T14:25:15.00 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-10-12T14:23:30Z | | flavor | m1.tiny (1) | | hostId | 00ac69883737ebd290ad4f38cae979a6e268902333261ba6bfbade44 | | id | 04b5a345-cadf-4dee-9209-5bcf589b6a3c | | image| cirros-0.3.2-x86_64-uec (14a55982-a093-4850-80c8-7b2ae3a7eaba) | | key_name | - | | metadata | {} | | name | vm1 | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.2, 172.29.173.5 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id| 90058797dddc49efae4d1f45aa5ab982 | | updated | 2014-10-12T14:23:39Z | | user_id | 5ab6344540974a1fbda5b539778ebe35 | +--++ localadmin@qa4:~/devstack$ localadmin@qa4:~/devstack$ ip netns qdhcp-f55f0683-830f-4523-82cb-46d638f91d47 qrouter-62aecbdd-d58d-4b33-a743-b16ca38544c5 localadmin@qa4:~/devstack$ localadmin@qa4:~/devstack$ localadmin@qa4:~/devstack$ sudo ip netns exec qrouter-62aecbdd-d58d-4b33-a743-b16ca38544c5 
ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
From 10.0.0.1 icmp_seq=4 Destination Host Unreachable
From 10.0.0.1 icmp_seq=5 Destination Host Unreachable
From 10.0.0.1 icmp_seq=6 Destination Host Unreachable

localadmin@qa5:~/devstack$ sudo tcpdump -i tapade47169-57
tcpdump: WARNING: tapade47169-57: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapade47169-57, link-type EN10MB (Ethernet), capture size 65535
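When ARP requests reach the tap but nothing answers, the usual suspects are a guest that never brought up its NIC, or a drop along the OVS/iptables path on the compute node. A troubleshooting sketch for the compute node (the tap name is the one from this thread; the grep patterns are illustrative, and all commands are read-only):

```shell
# 1. Confirm the tap is wired into the integration bridge with a VLAN tag
sudo ovs-vsctl show | grep -A 2 tapade47169-57

# 2. Watch ARP only, so a guest reply (if any) stands out
sudo tcpdump -n -e -i tapade47169-57 arp

# 3. Check the security-group iptables chains for drops on this port
sudo iptables -L -n -v | grep ade47169

# 4. Read the guest's boot output without needing a working console
nova console-log vm1 | tail -n 40
```

Step 4 is often the decisive one here: since the VM console cannot be opened, the console log is the only view into whether the guest's interface ever got configured.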
Re: [openstack-dev] [Openstack] [qa] How to troubleshoot why a VM at Compute node won't respond to ARP request from Neutron router
I do have a security rule configured to allow ICMP.

localadmin@qa4:~/devstack$ nova secgroup-list-rules default
| IP Protocol | From Port | To Port | IP Range  | Source Group |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |

Danny

From: Remo Mattei <r...@italy1.com>
Date: Sunday, October 12, 2014 at 1:00 PM
To: Danny Choi <dannc...@cisco.com>
Cc: <openst...@lists.openstack.org>, <openstack-dev@lists.openstack.org>
Subject: Re: [Openstack] [qa] How to troubleshoot why a VM at Compute node won't respond to ARP request from Neutron router

By default icmp is not allowed

Sent from iPhone

Il giorno 12/ott/2014, alle ore 09:25 [On 12 Oct 2014, at 09:25], Danny Choi (dannchoi) <dannc...@cisco.com> wrote: Hi, Using devstack to deploy OpenStack, I have Controller + Network running at one physical node and Compute at a separate node. I launched a VM at the Compute node with a private address 10.0.0.2 (Neutron router interface is 10.0.0.1). At the Controller node, in the qrouter namespace, I could not ping the VM private address 10.0.0.2. At the Compute node, tcpdump of the tap interface indicated ARP requests were received. However, it did not show any ARP response. My understanding is that the VM's virtual interface is directly connected to this tap interface. Since the VM is unreachable, I cannot launch its console to see if the ARP requests are received at the virtual interface. Any suggestions on how to troubleshoot this?
localadmin@qa4:~/devstack$ nova show vm1 +--++ | Property | Value | +--++ | OS-DCF:diskConfig| MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state| - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-10-12T14:25:15.00 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-10-12T14:23:30Z | | flavor | m1.tiny (1) | | hostId | 00ac69883737ebd290ad4f38cae979a6e268902333261ba6bfbade44 | | id | 04b5a345-cadf-4dee-9209-5bcf589b6a3c | | image| cirros-0.3.2-x86_64-uec (14a55982-a093-4850-80c8-7b2ae3a7eaba) | | key_name | - | | metadata | {} | | name | vm1 | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.2, 172.29.173.5 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id