[Openstack] Openstack maintenance updates - how to?
Hi colleagues,

we're using OpenStack from the Ubuntu repositories. Everything is perfect except for the cases when I manually apply patches before the supplier (e.g. Canonical) issues updated packages. The problem is that this doesn't happen immediately, and not necessarily with the next update, so all patches I applied manually get overwritten during the next package update.

How do you, friends, deal with this? Is the "manual" way (as described above) the safest and most reliable? Or maybe there is a "stable" branch of the OpenStack components which can be used for maintenance? Or is the "master" branch a good and safe source for updating OpenStack components in such a way? Any thoughts on this?

Thanks!

--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
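[Editor's note: one common way to keep locally patched packages from being silently replaced is an apt pin. This is a sketch; the package name and version pattern below are examples, not taken from the thread:]

```
# /etc/apt/preferences.d/pin-patched-package  (illustrative)
# Keep the locally patched package at its current version;
# a priority above 1000 prevents even upgrades to newer versions.
Package: nova-compute
Pin: version 2:17.0.*
Pin-Priority: 1001
```

A simpler alternative is `sudo apt-mark hold <package>` after patching, and `sudo apt-mark unhold <package>` once Canonical publishes a release containing the fix.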
Re: [Openstack] specify endpoint in API calls
Sorry, found the way immediately:

ks = identity.Client(session=auth, interface='public')

On 11/5/18 6:12 PM, Volodymyr Litovka wrote:
> [...]

--
Volodymyr Litovka
[Openstack] specify endpoint in API calls
Dear colleagues,

I have the following configuration of endpoints:

$ openstack endpoint list --service identity
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| 68a4eabc27474beeb6f08d986cca3263 | RegionOne | keystone     | identity     | True    | public    | http://controller-ext:5000/v3/ |
| 6fab7abe61e84463a05b4e58d8f7bb60 | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/     |
| eb378df5949046a49661dad3c887677f | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/     |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+

and I want to explicitly use the public endpoint (calling controller-ext, NOT controller) when doing API calls. Example code:

from keystoneauth1.identity import v3
from keystoneauth1 import session as authsession
from keystoneclient.v3 import client as identity

os_domain = 'default'
auth_url = 'http://controller-ext:5000/v3'
os_username = 'admin'
os_password = 'adminpass'
project_name = 'admin'

password = v3.Password(auth_url=auth_url,
                       username=os_username,
                       password=os_password,
                       user_domain_name=os_domain,
                       project_name=project_name,
                       project_domain_name=os_domain)
auth = authsession.Session(auth=password)
ks = identity.Client(session=auth)

for ep in ks.endpoints.list():
    pass

This returns an error, since it tries to call 'controller' (which is an internal address and isn't resolvable from the client):

keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://controller:5000/v3/endpoints?endpoint_filter=service_type_filter=interface: HTTPConnectionPool(host='controller', port=5000): Max retries exceeded with url: /v3/endpoints?endpoint_filter=service_type_filter=interface (Caused by NewConnectionError('object at 0x1063f1940>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))

The question is: are there ways to explicitly point to the 'public' (or whatever else) endpoint when working with the identity service?

Thank you.

--
Volodymyr Litovka
Re: [Openstack] boot order with multiple attachments
Hi again,

there is a similar case - https://bugs.launchpad.net/nova/+bug/1570107 - but I get the same result (booting from VOLUME2) regardless of whether I use the device_type/disk_bus properties in the BDM description or not.

Any ideas on how to solve this issue? Thanks.

On 9/11/18 10:58 AM, Volodymyr Litovka wrote:
> [...]

--
Volodymyr Litovka
[Openstack] boot order with multiple attachments
Hi colleagues,

is there any mechanism to ensure the boot disk when attaching more than two volumes to a server? At the moment, I can't find a way to make it predictable.

I have two bootable images with the following properties:

1) hw_boot_menu='true', hw_disk_bus='scsi', hw_qemu_guest_agent='yes', hw_scsi_model='virtio-scsi', img_hide_hypervisor_id='true', locations='[{u'url': u'swift+config:...', u'metadata': {}}]'

which corresponds to the following volume:
- attachments: [{u'server_id': u'...', u'attachment_id': u'...', u'attached_at': u'...', u'host_name': u'...', u'volume_id': u'', u'device': u'/dev/sda', u'id': u'...'}]
- volume_image_metadata: {u'checksum': u'...', u'hw_qemu_guest_agent': u'yes', u'disk_format': u'raw', u'image_name': u'bionic-Qpub', u'hw_scsi_model': u'virtio-scsi', u'image_id': u'...', u'hw_boot_menu': u'true', u'min_ram': u'0', u'container_format': u'bare', u'min_disk': u'0', u'img_hide_hypervisor_id': u'true', u'hw_disk_bus': u'scsi', u'size': u'...'}

and the second image:

2) hw_disk_bus='scsi', hw_qemu_guest_agent='yes', hw_scsi_model='virtio-scsi', img_hide_hypervisor_id='true', locations='[{u'url': u'cinder://...', u'metadata': {}}]'

which corresponds to the following volume:
- attachments: [{u'server_id': u'...', u'attachment_id': u'...', u'attached_at': u'...', u'host_name': u'...', u'volume_id': u'', u'device': u'/dev/sdb', u'id': u'...'}]
- volume_image_metadata: {u'checksum': u'...', u'hw_qemu_guest_agent': u'yes', u'disk_format': u'raw', u'image_name': u'xenial', u'hw_scsi_model': u'virtio-scsi', u'image_id': u'...', u'min_ram': u'0', u'container_format': u'bare', u'min_disk': u'0', u'img_hide_hypervisor_id': u'true', u'hw_disk_bus': u'scsi', u'size': u'...'}

Using Heat, I'm creating the following block_device_mapping_v2 scheme:

block_device_mapping_v2:
  - volume_id:
    delete_on_termination: false
    device_type: disk
    disk_bus: scsi
    boot_index: 0
  - volume_id:
    delete_on_termination: false
    device_type: disk
    disk_bus: scsi
    boot_index: -1

which maps to the following nova-api debug log:

Action: 'create', calling method: ServersController.create of 0x7f6b08dd4890>>, body: {"server": {"name": "jex-n1", "imageRef": "", "block_device_mapping_v2": [{"boot_index": 0, "uuid": "", "disk_bus": "scsi", "source_type": "volume", "device_type": "disk", "destination_type": "volume", "delete_on_termination": false}, {"boot_index": -1, "uuid": "", "disk_bus": "scsi", "source_type": "volume", "device_type": "disk", "destination_type": "volume", "delete_on_termination": false}], "flavorRef": "4b3da838-3d81-461a-b946-d3613fb6f4b3", "user_data": "...", "max_count": 1, "min_count": 1, "networks": [{"port": "9044f884-1a3d-4dc6-981e-f585f5e45dd1"}], "config_drive": true}} _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:604

Regardless of the boot_index value, the server boots from VOLUME2 (/dev/sdb), while VOLUME1 is attached as well, as /dev/sda. I'm using Queens. Where am I wrong?

Thank you.

--
Volodymyr Litovka
Re: [openstack-dev] [cinder] making volume available without stopping VM
[...]eak dependencies. If this is a fully controlled environment (nobody else can modify it in any way, reattach it to another instance, or do anything else with the volume), which other kinds of problems can appear in this case? Thank you.

You can get some details from the Cinder spec:
https://specs.openstack.org/openstack/cinder-specs/specs/pike/extend-attached-volume.html

And the corresponding Nova spec:
http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/nova-support-attached-volume-extend.html

You may also want to read through the mailing list thread if you want to get into some of the nitty-gritty details behind why certain design choices were made:
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115292.html

--
Volodymyr Litovka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [cinder] making volume available without stopping VM
Hi Jay,

> We have had similar issues with extending attached volumes that are iSCSI based. In that case the VM has to be forced to rescan the SCSI bus. In this case I am not sure if there needs to be a change to libvirt or to rbd or something else. I would recommend reaching out to John Bernard for help.

In fact, I'm OK with a delayed resize (upon power-cycle), and it's not an issue for me that the VM doesn't detect the changes immediately. What I want to understand is whether the changes to Cinder (and, thus, the underlying changes to CEPH) are safe for the VM while it's in the active state. Hopefully, Jon will help with this question. Thank you!

On 6/23/18 8:41 PM, Jay Bryant wrote:
> On Sat, Jun 23, 2018, 9:39 AM Volodymyr Litovka <mailto:doka...@gmx.com> wrote:
> [...]

--
Volodymyr Litovka
[openstack-dev] [cinder] making volume available without stopping VM
Dear friends,

I did some tests with making a volume available without stopping the VM. I'm using CEPH, and these steps produce the following results:

1) openstack volume set --state available [UUID]
- nothing changed inside either the VM (volume is still connected) or CEPH

2) openstack volume set --size [new size] --state in-use [UUID]
- nothing changed inside the VM (volume is still connected and has the old size)
- the size of the CEPH volume changed to the new value

3) during these operations I was copying a lot of data from an external source, and all md5 sums are the same on both the VM and the source

4) changes on the VM happen upon any kind of power-cycle, e.g. reboot (either soft or hard): openstack server reboot [--hard] [VM uuid]
- note: NOT after 'reboot' from inside the VM

It seems that all these manipulations with Cinder just update internal parameters of the Cinder/CEPH subsystems, without immediate effect on the VMs. Is it safe to use this mechanism in this particular environment (i.e. with CEPH as backend)?

From a practical point of view, it's useful when somebody, for example, updates a project in batch mode and will then manually reboot every VM affected by the update at an appropriate time, with minimized downtime (it's just a reboot, not a manual stop/update/start).

Thank you.

--
Volodymyr Litovka
Re: [Openstack] can't resize server
[...]ception: Bad or unexpected response from the storage volume backend API: Terminate volume connection failed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3.

Any thoughts?

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au

--
Volodymyr Litovka
Re: [Openstack] which SDK to use?
Hi Adrian,

> Then we have the odd thing where most of the client list commands return lists of objects, while some (I'm looking at you, glance) return a generator.

After a short period of use, I completely agree with this and your other statements re client libraries. Thanks for pointing out ways to do actions on a stack using the SDK client. I'll think again about the Unified SDK - if it's more unified than the clients, some annoyances are acceptable :)

--
Volodymyr Litovka
[Openstack] volume state (in-use/available) vs real work
Hi colleagues,

in order to change (increase) a boot disk's size "on the fly", I can do the following sequence of commands without stopping the VM:

: openstack volume set --state available
: openstack volume set --state in-use --size 32

and, if properly configured, the disk will be automatically resized by cloud-init during the next reboot.

Is it dangerous to change the volume state to "available" while the VM is actively working? Which side effects can I face while doing this?

Thank you.

--
Volodymyr Litovka
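[Editor's note: the "if properly configured" part usually refers to cloud-init's growpart/resize_rootfs handling inside the guest. A sketch of the relevant cloud-config; these are standard cloud-init keys, but whether a given image enables them depends on how it was built:]

```yaml
#cloud-config
# Grow the root partition to fill the (now larger) disk on next boot ...
growpart:
  mode: auto
  devices: ['/']
# ... and then grow the root filesystem to fill the partition.
resize_rootfs: true
```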
Re: [Openstack] which SDK to use?
Hi Adrian,

at the moment, the "wildly different" python clients provide more than the Unified SDK. I'm not sure about all the clients, but what I found, and what finally turned me to the client libraries, is the inability to do actions on a stack (e.g. suspend/resume) using the Unified SDK (neither the docs nor the source code contain any mention of this, while python-heatclient describes this and can do it). It's far from bleeding edge - it's a huge gap in feature consistency.

On 4/20/18 6:19 AM, Adrian Turjak wrote:

As someone who used to use all the standalone clients, I'm leaning very heavily these days to using only the SDK, and I think we should encourage most projects to treat the SDK as their first point of implementation rather than all the wildly different python clients. So if you are new to OpenStack, the SDK is the best and most consistent option right now for interacting with OpenStack from python. Sadly the docs are lacking, but the docs for the other libraries aren't that much better anyway half the time.

On 20/04/18 01:46, Chris Friesen wrote:

On 04/19/2018 07:01 AM, Jeremy Stanley wrote:

On 2018-04-19 12:24:48 +1000 (+1000), Joshua Hesketh wrote:
There is also nothing stopping you from using both. For example, you could use the OpenStack SDK for most things, but if you hit an edge case where you need something specific you can then import the particular client lib. [...]

Or, for that matter, leverage OpenStackSDK's ability to pass arbitrary calls to individual service APIs when you need something not exposed by the porcelain layer.

Is that documented somewhere? I spent some time looking at https://docs.openstack.org/openstacksdk/latest/ and didn't see anything that looked like that.

Not that I believe, but basically it amounts to this: on any service proxy object you can call .get, .post, etc. So if the SDK doesn't yet support a given feature, you can still use the feature yourself, but you need to do some raw requests work, which honestly isn't that bad.

servers = list(conn.compute.servers())
vs
servers_resp = conn.compute.get("/servers")

The direct calls on the proxy object use your current session (auth and scope) against the endpoint specific to that service, and just return the raw request itself when called directly. This works even for Swift, where the url has to include details about your account. It's surprisingly elegant. Ideally, when people use the SDK like this they should also submit a patch to fill in the missing functionality. Adding to the SDK isn't that bad and the codebase is much better than it used to be.

Thanks, Chris

--
Volodymyr Litovka
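[Editor's note: the raw-request fallback pattern described above can be sketched with a stand-in for the SDK's service proxy. `FakeProxy`, the endpoint path and the payload here are invented for illustration; in real code the proxy is something like `conn.compute`, backed by an authenticated keystoneauth session:]

```python
# Illustration of the "raw request" fallback pattern: prefer the SDK's
# high-level method when it exists, otherwise issue a raw GET on the same
# authenticated proxy object and unpack the JSON body yourself.

class FakeResponse:
    """Stand-in for the requests.Response-like object a raw proxy call returns."""
    def __init__(self, payload):
        self._payload = payload

    def json(self):
        return self._payload

class FakeProxy:
    """Stand-in for an SDK service proxy: high-level servers() is missing,
    but the raw .get() escape hatch is available."""
    def get(self, path):
        # Pretend the compute service answered GET /servers with two servers.
        return FakeResponse({"servers": [{"name": "vm1"}, {"name": "vm2"}]})

def list_servers(proxy):
    # Preferred path: the porcelain method, if this SDK version has it ...
    if hasattr(proxy, "servers"):
        return list(proxy.servers())
    # ... fallback: raw call through the same session/endpoint.
    resp = proxy.get("/servers")
    return resp.json()["servers"]

names = [s["name"] for s in list_servers(FakeProxy())]
print(names)  # ['vm1', 'vm2']
```

The point of the pattern is that the fallback reuses the proxy's existing auth and endpoint discovery, so nothing extra needs configuring when the SDK lags behind the API.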
Re: [Openstack] which SDK to use?
Hi Chris and colleagues,

based on your experience, can you give an average delay between a new OpenStack release / new feature introduction and the appearance of the corresponding support in the Unified OpenStack SDK, if you were experiencing such issues? Thanks.

On 4/17/18 7:23 PM, Chris Friesen wrote:

On 04/17/2018 07:13 AM, Jeremy Stanley wrote:
The various "client libraries" (e.g. python-novaclient, python-cinderclient, et cetera) can also be used to that end, but are mostly for service-to-service communication these days, aren't extremely consistent with each other, and tend to eventually drop support for older OpenStack APIs, so if you're going to be interacting with a variety of different OpenStack deployments built on different releases you may need multiple versions of the client libraries (depending on what it is you're trying to do).

The above is all good information. I'd like to add that if you need bleeding-edge functionality in nova, it will often be implemented first in python-novaclient.

Chris

--
Volodymyr Litovka
[Openstack] which SDK to use?
Hi colleagues,

I need to write a client app (Python v3) to work with OpenStack. At the moment, I need to work with Keystone (of course), Heat, Nova and Cinder. Support for other modules may be required later. Keeping in mind direct API calls, I nevertheless prefer to use an SDK, and there are two choices:

1) OpenStack SDK (https://docs.openstack.org/openstacksdk/latest )
2) OpenStack Clients (https://wiki.openstack.org/wiki/OpenStackClients )

The question is which one to use, in terms of OpenStack API support, development longevity and consistency with OpenStack development?

Thank you.

--
Volodymyr Litovka
[Openstack] [HEAT] order in attributes list
Hi colleagues,

I have the following HOT configuration of a port:

  n1-wan:
    type: OS::Neutron::Port
    properties:
      fixed_ips:
        - { subnet: e-subnet1, ip_address: 51.x.x.x }
        - { subnet: e-subnet2, ip_address: 25.x.x.x }

When I try to extract these values in the template using get_attr, then, regardless of the fixed_ips order in the port definition (either "subnet1, subnet2" or "subnet2, subnet1"), the value of { get_attr: [n1-wan, fixed_ips] } always gives the following result:

  output_value:
    - ip_address: 25.x.x.x
      subnet_id: ...
    - ip_address: 51.x.x.x
      subnet_id: ...

and, thus, { get_attr: [n1-wan, fixed_ips, 1, ip_address] } gives me the 51.x.x.x value.

So, the question is - how is the list of fixed_ips ordered? Is there a way to know for sure the index of the entry I'm interested in?

Thank you.

--
Volodymyr Litovka
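[Editor's note: since the ordering of fixed_ips is not guaranteed, code that consumes the attribute can select the entry by subnet_id instead of by list position. A minimal sketch; the IPs and subnet IDs below are invented, but the structure mirrors the output shown above:]

```python
# Pick a fixed IP out of a Neutron port's fixed_ips list by subnet,
# rather than relying on the (unordered) list index.

def ip_for_subnet(fixed_ips, subnet_id):
    """Return the ip_address of the entry matching subnet_id, or None."""
    for entry in fixed_ips:
        if entry["subnet_id"] == subnet_id:
            return entry["ip_address"]
    return None

# Structure as returned by { get_attr: [n1-wan, fixed_ips] }
fixed_ips = [
    {"ip_address": "25.0.0.10", "subnet_id": "e-subnet2-id"},
    {"ip_address": "51.0.0.10", "subnet_id": "e-subnet1-id"},
]

print(ip_for_subnet(fixed_ips, "e-subnet1-id"))  # 51.0.0.10
```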
[openstack-dev] [neutron] route metrics inside VR
Dear colleagues,

for some reasons (see the explanation below), I'm trying to deploy the following network configuration:

[ASCII diagram: one Network containing Subnet-1 and Subnet-2, a virtual router (VR) connected to both, and a VM attached to Subnet-1]

where VR is Neutron's virtual router, connected to two subnets which belong to the same network:
- Subnet-1 is the "LAN" interface (25.0.0.1/8) connected to qr-64c53cf8-d9
- Subnet-2 is the external gateway (51.x.x.x) connected to qg-16bdddb1-d5, with SNAT enabled

The reason why I'm trying to use this configuration is pretty simple - it allows switching the VM between different address scopes (e.g. "grey" and "white") while preserving the port/MAC (which is created in the "Network" and remains there while I'm switching the VM between different subnets).

Such a configuration produces the following command list when creating the VR:

14:45:18.043 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'addr', 'add', '25.0.0.1/8', 'scope', 'global', 'dev', 'qr-64c53cf8-d9', 'brd', '25.255.255.255']
14:45:19.815 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'addr', 'add', '51.x.x.x/24', 'scope', 'global', 'dev', 'qg-16bdddb1-d5', 'brd', '51.x.x.255']
14:45:20.283 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'route', 'replace', '25.0.0.0/8', 'dev', 'qg-16bdddb1-d5', 'scope', 'link']
14:45:20.919 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'route', 'replace', 'default', 'via', '51.x.x.254', 'dev', 'qg-16bdddb1-d5']

Since 25/8 is an extra subnet of "Network", Neutron installs this entry (using 'ip route replace') despite the fact that there should be a connected route (via qr-64c53cf8-d9). Due to the current implementation, all traffic from the VR to the directly connected Subnet-1 goes over Subnet-2 (through NAT), and, thus, the VM in Subnet-1 can't access the VR - it "pings" the local address (25.0.0.1) while the replies return from another (NAT) address.

Can this behaviour be safely changed by using "ip route add [...] metric" instead of "ip route replace"?

Thank you.

--
Volodymyr Litovka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [Openstack] diskimage-builder: prepare ubuntu 17.x images
Hi Tony, this patch works for me:

--- diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball.orig	2018-02-09 12:20:02.117793234 +
+++ diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball	2018-02-09 13:25:48.654868263 +
@@ -14,7 +14,9 @@
 DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud-images.ubuntu.com}
 DIB_RELEASE=${DIB_RELEASE:-trusty}
-BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz}
+SUFFIX="-root"
+[[ $DIB_RELEASE =~ (artful|bionic) ]] && SUFFIX=""
+BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-${ARCH}${SUFFIX}.tar.gz}
 SHA256SUMS=${SHA256SUMS:-https://${DIB_CLOUD_IMAGES##http?(s)://}/$DIB_RELEASE/current/SHA256SUMS}
 CACHED_FILE=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
 CACHED_FILE_LOCK=$DIB_LOCKFILES/$BASE_IMAGE_FILE.lock
@@ -45,9 +47,25 @@
         fi
         popd
     fi
-    # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between
-    # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host)
-    sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
+    if [ -n "$SUFFIX" ]; then
+        # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between
+        # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host)
+        sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
+    else
+        # Unpack image to IDIR, mount it on MDIR, copy it to TARGET_ROOT
+        IDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`"
+        MDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`"
+        sudo mkdir $IDIR $MDIR
+        sudo tar -C $IDIR --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
+        sudo mount -o loop -t auto $IDIR/$DIB_RELEASE-server-cloudimg-${ARCH}.img $MDIR
+        pushd $PWD 2>/dev/null
+        cd $MDIR
+        sudo tar c . | sudo tar x -C $TARGET_ROOT -k --numeric-owner 2>/dev/null
+        popd
+        # Clean up
+        sudo umount $MDIR
+        sudo rm -rf $IDIR $MDIR
+    fi
 }

On 2/9/18 1:03 PM, Volodymyr Litovka wrote:
Hi Tony,

On 2/9/18 6:01 AM, Tony Breeds wrote:
On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote:
Hi colleagues, does anybody here know how to prepare an Ubuntu Artful (17.10) image using diskimage-builder? diskimage-builder uses the following naming style for downloads - $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz - and while "-root" names are there for the trusty/amd64 and xenial/amd64 distros, these archives are absent for artful (and bionic) on cloud-images.ubuntu.com. There are just different kinds of images, not a source tree as in the -root archives. I will appreciate any ideas on how to customize a 17.10-based image using diskimage-builder or in a diskimage-builder-like fashion.

You might like to investigate the ubuntu-minimal DIB element which will build your ubuntu image from apt rather than starting with the pre-built image.

good idea, but with

export DIST="ubuntu-minimal"
export DIB_RELEASE=artful

diskimage-builder fails with the following debug:

2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 | + source /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 | dirname /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.428 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..'
2018-02-09 10:33:22.428 | +++ dib-init-system
2018-02-09 10:33:22.429 | + set -eu
2018-02-09 10:33:22.429 | + set -o pipefail
2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']'
2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]]
2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]]
2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]]
2018-02-09 10:33:22.429 | + echo 'Unknown init system'
2018-02-09 10:36:54.852 | + exit 1
2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system

while earlier it found systemd:

2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 | + source /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 | dirname /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.224 | +++ PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/..
2018-02-09 10:33:22.224 | +++ dib-init-system
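For context, the failing check in the log above comes from the dib-init-system probe, which looks for well-known init binaries inside the build tree. A simplified sketch of that detection logic follows; the function name and the root-dir parameter are mine (added so the sketch can be exercised against a fake tree), and the labels on the non-systemd branches are assumptions — the log only confirms "systemd" and the "Unknown init system" fallback:

```shell
# Approximate re-implementation of the init-system probe seen in the log.
detect_init() {
    local root="${1:-}"
    if [ -f "$root/usr/bin/systemctl" ] || [ -f "$root/bin/systemctl" ]; then
        echo systemd
    elif [ -f "$root/sbin/initctl" ]; then
        echo upstart        # assumed label
    elif [ -f "$root/etc/gentoo-release" ]; then
        echo openrc         # assumed label
    elif [ -f "$root/sbin/init" ]; then
        echo sysvinit       # assumed label
    else
        echo 'Unknown init system' >&2
        return 1
    fi
}
```

Since the "Unknown init system" branch fires here even though the earlier pass found systemd, something between the two passes removed (or never installed) systemctl in the chroot being probed.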
Re: [Openstack] diskimage-builder: prepare ubuntu 17.x images
Hi Tony,

On 2/9/18 6:01 AM, Tony Breeds wrote:
On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote:
Hi colleagues, does anybody here know how to prepare an Ubuntu Artful (17.10) image using diskimage-builder? diskimage-builder uses the following naming style for downloads - $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz - and while "-root" names are there for the trusty/amd64 and xenial/amd64 distros, these archives are absent for artful (and bionic) on cloud-images.ubuntu.com. There are just different kinds of images, not a source tree as in the -root archives. I will appreciate any ideas on how to customize a 17.10-based image using diskimage-builder or in a diskimage-builder-like fashion.

You might like to investigate the ubuntu-minimal DIB element which will build your ubuntu image from apt rather than starting with the pre-built image.

good idea, but with

export DIST="ubuntu-minimal"
export DIB_RELEASE=artful

diskimage-builder fails with the following debug:

2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 | + source /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 | dirname /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.428 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..'
2018-02-09 10:33:22.428 | +++ dib-init-system
2018-02-09 10:33:22.429 | + set -eu
2018-02-09 10:33:22.429 | + set -o pipefail
2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']'
2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]]
2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]]
2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]]
2018-02-09 10:33:22.429 | + echo 'Unknown init system'
2018-02-09 10:36:54.852 | + exit 1
2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system

while earlier it found systemd:

2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 | + source /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 | dirname /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.224 | +++ PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/..
2018-02-09 10:33:22.224 | +++ dib-init-system
2018-02-09 10:33:22.225 | + set -eu
2018-02-09 10:33:22.225 | + set -o pipefail
2018-02-09 10:33:22.225 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']'
2018-02-09 10:33:22.225 | + echo systemd
2018-02-09 10:33:22.226 | ++ DIB_INIT_SYSTEM=systemd
2018-02-09 10:33:22.226 | ++ export DIB_INIT_SYSTEM

it seems that somewhere in the middle something happens to the systemd package

In the meantime I'll look at how we can consume the .img file, which is similar to what we'd need to do for Fedora

the script diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball contains the function get_ubuntu_tarball() which, after all checks, does the following:

sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE

probably, the easiest hack around the issue is to change the above to something like

sudo ( mount -o loop tar cv | tar xv -C $TARGET_ROOT ... )

Will try this.

-- Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[Openstack] diskimage-builder: prepare ubuntu 17.x images
Hi colleagues, does anybody here know how to prepare an Ubuntu Artful (17.10) image using diskimage-builder? diskimage-builder uses the following naming style for downloads - $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz - and while "-root" names are there for the trusty/amd64 and xenial/amd64 distros, these archives are absent for artful (and bionic) on cloud-images.ubuntu.com. There are just different kinds of images, not a source tree as in the -root archives. I will appreciate any ideas on how to customize a 17.10-based image using diskimage-builder or in a diskimage-builder-like fashion. Thanks!

-- Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
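The naming convention in question, together with the suffix workaround later proposed in this thread, can be sketched as pure string construction (no downloads; the helper function name is mine):

```shell
# Reproduce the file name diskimage-builder requests from
# cloud-images.ubuntu.com; artful and bionic dropped the "-root" tarballs.
ARCH=amd64

build_name() {
    local release="$1" suffix="-root"
    case "$release" in
        artful|bionic) suffix="" ;;
    esac
    echo "${release}-server-cloudimg-${ARCH}${suffix}.tar.gz"
}

build_name xenial   # xenial-server-cloudimg-amd64-root.tar.gz
build_name artful   # artful-server-cloudimg-amd64.tar.gz
```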
Re: [Openstack-operators] Octavia LBaaS - networking requirements
Hi Flint, I think Octavia expects reachability between components over the management network, regardless of the network's technology.

On 2/6/18 11:41 AM, Flint WALRUS wrote:
Hi guys, I'm wondering if the Octavia lb-mgmt-net can be a L2 provider network instead of a neutron L3 vxlan? Is Octavia specifically relying on L3 networking, or can it operate without neutron L3 features? I didn't find anything specifically related to the network requirements except for the network itself. Thanks guys!

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
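For what it's worth, the management network Octavia cares about is simply whatever is configured as the amphora boot network. A minimal, hedged octavia.conf sketch (the UUID is a placeholder, and the option name should be checked against your release's documentation):

```
[controller_worker]
# Network(s) whose ports are plugged into each amphora for management
# traffic (health heartbeats, controller-to-amphora-agent API calls)
amp_boot_network_list = <lb-mgmt-net-uuid>
```

As long as the controller components and the amphorae can reach each other over that network, whether it is a provider L2 segment or a vxlan tenant network should not matter to Octavia itself.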
Re: [Openstack] OpenVSwitch inside Instance no ARP passthrough
can only see ARP requests on ens4 but not on the OVSbr1 bridge ... But now I see some LLDP packets on ens4 and OVSbr1.

Then I tried the following ... I stopped the ping from the source to the TestNFV VM and started pinging from the TestNFV (192.168.120.5) to the source (192.168.120.10). Again I didn't get any response ... and again looked at the tcpdump of OVSbr1 and ens4 ...

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on OVSbr1, link-type EN10MB (Ethernet), capture size 262144 bytes
13:59:18.245528 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 286, length 64
13:59:19.253513 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 287, length 64
13:59:20.261487 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 288, length 64
13:59:21.269499 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 289, length 64
13:59:21.680458 LLDP, length 110: openflow:214083694506308
13:59:22.277524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 290, length 64
13:59:23.285531 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 291, length 64
13:59:24.293631 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 292, length 64
13:59:25.301529 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 293, length 64
13:59:26.309588 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 294, length 64
13:59:26.680238 LLDP, length 110: openflow:214083694506308
13:59:27.317591 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 295, length 64
13:59:28.325524 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 296, length 64
13:59:29.333618 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 297, length 64
13:59:30.341515 IP 192.168.120.5 > 192.168.120.10: ICMP echo request, id 1839, seq 298, length 64

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on 
ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
13:59:16.680452 LLDP, length 99: openflow:214083694506308
13:59:21.680791 LLDP, length 99: openflow:214083694506308
13:59:26.680532 LLDP, length 99: openflow:214083694506308
13:59:31.680503 LLDP, length 99: openflow:214083694506308
13:59:36.680681 LLDP, length 99: openflow:214083694506308
13:59:41.391777 ARP, Request who-has 192.168.120.10 tell 192.168.120.5, length 28
13:59:41.392345 ARP, Reply 192.168.120.10 is-at fa:16:3e:84:5c:29 (oui Unknown), length 28
13:59:41.680626 LLDP, length 99: openflow:214083694506308
13:59:46.680692 LLDP, length 99: openflow:214083694506308

This is a bit confusing for me ... First, why does the echo request only appear on the OVSbr1 bridge and not also on ens4 ... is this correct behaviour? And second, why did I suddenly get an ARP reply on ens4, which is indeed the correct MAC of the VM1 interface ... and why are the LLDP packets shown on both interfaces ... Is something now wrong with the FlowController? I use ODL with the odl-l2switch-all feature enabled ... puhhh ... what am I missing??? I don't get this ...

Thx a lot, Mathias.

On 2018-02-01 23:49, Volodymyr Litovka wrote:
Hi Mathias, I'm not so fluent with OVS, but I would recommend to join the bridges using special "ports" like

Port ovsbr1-patch
    Interface ovsbr1-patch
        type: patch
        options: {peer=ovsbr2-patch}

and vice versa, keeping the "native" configuration of "port OVSbr1" and "port OVSbr2". And keep in mind that ARP scope is the broadcast domain and, if using just ARP (not routing), from VM1 you will be able to ping hosts belonging to OVSbr1, particularly OVSbr1's IP.

On 2/1/18 4:11 PM, Mathias Strufe (DFKI) wrote:
Dear Benjamin, Volodymyr, good question ;) ... I like to experiment with some kind of "Firewall NFV" ... but in the first step, I want to build a Router VM between two networks (and later extend it with some flow rules) ... OpenStack, in my case, is more a foundation to build a "test environment" for my "own" application ...

please find attached a quick sketch of the current network ... I did this already before with iptables inside the middle instance ... worked quite well ... but now I like to achieve the same with OVS ... I didn't expect that it is so much more difficult ;) ... I'm currently checking Volodymyr's answer ... I think the first point is now solved ... I "patched" OVSbr1 and OVSbr2 inside the VM together (see the OVpatch file) ... but I think this is important later when I really like to ping from VM1 to VM2 ... at the moment I only ping from VM1 to the TestNFV ... but the ARP requests only reach ens4, not OVSbr1 (according to tcpdump) ... May it have to do with port security and the (for OpenStack) unknown MAC address of the OVS bridge?

Thanks so far ... Mathias.

On 2018-02-01 14:28, Benjamin Dia
Re: [Openstack] OpenVSwitch inside Instance no ARP passthrough
Hi Mathias, I'm not so fluent with OVS, but I would recommend to join the bridges using special "ports" like

Port ovsbr1-patch
    Interface ovsbr1-patch
        type: patch
        options: {peer=ovsbr2-patch}

and vice versa, keeping the "native" configuration of "port OVSbr1" and "port OVSbr2". And keep in mind that ARP scope is the broadcast domain and, if using just ARP (not routing), from VM1 you will be able to ping hosts belonging to OVSbr1, particularly OVSbr1's IP.

On 2/1/18 4:11 PM, Mathias Strufe (DFKI) wrote:
Dear Benjamin, Volodymyr, good question ;) ... I like to experiment with some kind of "Firewall NFV" ... but in the first step, I want to build a Router VM between two networks (and later extend it with some flow rules) ... OpenStack, in my case, is more a foundation to build a "test environment" for my "own" application ... please find attached a quick sketch of the current network ... I did this already before with iptables inside the middle instance ... worked quite well ... but now I like to achieve the same with OVS ... I didn't expect that it is so much more difficult ;) ... I'm currently checking Volodymyr's answer ... I think the first point is now solved ... I "patched" OVSbr1 and OVSbr2 inside the VM together (see the OVpatch file) ... but I think this is important later when I really like to ping from VM1 to VM2 ... at the moment I only ping from VM1 to the TestNFV ... but the ARP requests only reach ens4, not OVSbr1 (according to tcpdump) ... May it have to do with port security and the (for OpenStack) unknown MAC address of the OVS bridge? Thanks so far ... Mathias.

On 2018-02-01 14:28, Benjamin Diaz wrote:
Dear Mathias, Could you attach a diagram of your network configuration and of what you are trying to achieve? Are you trying to install OVS inside a VM? If so, why?
Greetings, Benjamin

On Thu, Feb 1, 2018 at 8:30 AM, Volodymyr Litovka <doka...@gmx.com> wrote:
Dear Mathias, if I correctly understand your configuration, you're using bridges inside the VM and its configuration looks a bit strange:

1) you use two different bridges (OVSbr1/192.168.120.x and OVSbr2/192.168.110.x) and there is no patch between them, so they're separate

2) while ARP requests for an address in OVSbr1 arrive from OVSbr2:

18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28

but on the OVS bridge nothing arrives ...

listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 bytes

While these bridges are separate, ARP requests and answers will not be passed between them.

Regarding your devstack configuration - unfortunately, I don't have experience with devstack, so I don't know where it stores configs. In OpenStack, ml2_conf.ini points to openvswitch in ml2's mechanism_drivers parameter; in my case it looks like the following:

[ml2]
mechanism_drivers = l2population,openvswitch

and the rest of the openvswitch config is described in /etc/neutron/plugins/ml2/openvswitch_agent.ini

Second - I see an ambiguity in your br-tun configuration, where patch_int is the same as patch-int without a corresponding remote peer config; probably you should check this issue.

And third - note that Mitaka is quite an old release; perhaps you can give the latest release of devstack a chance? :-)

On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote:
Dear Volodymyr, all, thanks for your fast answer ... but I'm still facing the same problem, I still can't ping the instance with the OVS bridge configured and up ... maybe because I'm quite new to OpenStack and OpenVswitch and didn't see the problem ;) My setup is devstack Mitaka in single machine config ... first of all, I didn't find the openvswitch_agent.ini there anymore; I remember in previous versions it was in the neutron/plugin folder ... Is this config now done in the ml2 config file in the [OVS] section? I'm really wondering ... So, I can ping between the 2 instances without any problem, but as soon as I bring up the OVS bridge inside the VM, the ARP requests are only visible at the ens interface, not reaching the OVSbr ... please find attached two files which may help for troubleshooting. One is some network information from inside the instance that runs the OVS, and the other is ovs-vsctl info from the OpenStack host. If you need more info/logs please let me know! Thanks for your help! BR Mathias.

On 2018-01-27 22:44, Volodymyr Litovka wrote:
Hi Mathias, do you have all the corresponding bridges and patches between them, as described in openvswitch_agent.ini by the integration_bridge, tunnel_bridge, int_peer_patch_port, tun_peer_patch_port and bridge_mappings parameters? And make sure that the service "neutron-ovs-cleanup" is in use during system boot. You can check these bridges and patches using the "ovs-vsctl show" command.

On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote:
Dear all, I'm quite new to openstack an
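The patch-port pairing recommended in this thread can also be expressed directly with ovs-vsctl. A sketch using the thread's bridge names (this needs a running Open vSwitch inside the VM, so it is shown as commands only; port names are arbitrary as long as each side names the other as its peer):

```shell
# Join OVSbr1 and OVSbr2 with a patch-port pair
sudo ovs-vsctl add-port OVSbr1 ovsbr1-patch -- \
    set interface ovsbr1-patch type=patch options:peer=ovsbr2-patch
sudo ovs-vsctl add-port OVSbr2 ovsbr2-patch -- \
    set interface ovsbr2-patch type=patch options:peer=ovsbr1-patch

# Verify both patch ports and their peers appear under each bridge
sudo ovs-vsctl show
```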
Re: [Openstack] OpenVSwitch inside Instance no ARP passthrough
Dear Mathias, if I correctly understand your configuration, you're using bridges inside the VM and its configuration looks a bit strange:

1) you use two different bridges (OVSbr1/192.168.120.x and OVSbr2/192.168.110.x) and there is no patch between them, so they're separate

2) while ARP requests for an address in OVSbr1 arrive from OVSbr2:

> 18:50:58.080478 ARP, Request who-has *192.168.120.10* tell 192.168.120.6, length 28
>
> but on the OVS bridge nothing arrives ...
>
> listening on *OVSbr2*, link-type EN10MB (Ethernet), capture size
> 262144 bytes

While these bridges are separate, ARP requests and answers will not be passed between them.

Regarding your devstack configuration - unfortunately, I don't have experience with devstack, so I don't know where it stores configs. In OpenStack, ml2_conf.ini points to openvswitch in ml2's mechanism_drivers parameter; in my case it looks like the following:

[ml2]
mechanism_drivers = l2population,openvswitch

and the rest of the openvswitch config is described in /etc/neutron/plugins/ml2/openvswitch_agent.ini

Second - I see an ambiguity in your br-tun configuration, where patch_int is the same as patch-int without a corresponding remote peer config; probably you should check this issue.

And third - note that Mitaka is quite an old release; perhaps you can give the latest release of devstack a chance? :-)

On 1/31/18 10:49 PM, Mathias Strufe (DFKI) wrote:
Dear Volodymyr, all, thanks for your fast answer ... but I'm still facing the same problem, I still can't ping the instance with the OVS bridge configured and up ... maybe because I'm quite new to OpenStack and OpenVswitch and didn't see the problem ;) My setup is devstack Mitaka in single machine config ... first of all, I didn't find the openvswitch_agent.ini there anymore; I remember in previous versions it was in the neutron/plugin folder ... Is this config now done in the ml2 config file in the [OVS] section? I'm really wondering ... So, I can ping between the 2 instances without any problem. 
But as soon as I bring up the OVS bridge inside the VM, the ARP requests are only visible at the ens interface, not reaching the OVSbr ... please find attached two files which may help for troubleshooting. One is some network information from inside the instance that runs the OVS, and the other is ovs-vsctl info from the OpenStack host. If you need more info/logs please let me know! Thanks for your help! BR Mathias.

On 2018-01-27 22:44, Volodymyr Litovka wrote:
Hi Mathias, do you have all the corresponding bridges and patches between them, as described in openvswitch_agent.ini by the integration_bridge, tunnel_bridge, int_peer_patch_port, tun_peer_patch_port and bridge_mappings parameters? And make sure that the service "neutron-ovs-cleanup" is in use during system boot. You can check these bridges and patches using the "ovs-vsctl show" command.

On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote:
Dear all, I'm quite new to OpenStack and would like to install OpenVSwitch inside one instance of our Mitaka OpenStack Lab Environment ... But it seems that ARP packets get lost between the network interface of the instance and the OVS bridge ... With tcpdump on the interface I see the ARP packets ...

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens6, link-type EN10MB (Ethernet), capture size 262144 bytes
18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28
18:50:58.125009 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28
18:50:59.077315 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28
18:50:59.121369 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28
18:51:00.077327 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28
18:51:00.121343 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28

but on the OVS bridge nothing arrives ...

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 bytes

I disabled port_security and removed the security group but nothing changed:

+-----------------------+----------------------------------------------------------------------+
| Field                 | Value                                                                |
+-----------------------+----------------------------------------------------------------------+
| admin_state_up        | True                                                                 |
| allowed_address_pairs |                                                                      |
| binding:host_id       | node11                                                               |
| binding:profile       | {}                                                                   |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                       |
| binding:vif_type      | ovs                                                                  |
| binding:vnic_type     | normal                                                               |
| created_at            | 2018-01-27T16:45:48Z                                                 |
| description           |                                                                      |
| device_id             | 74916967-984c-4617-ae33-b847de73de13                                 |
| device_owner          | compute:nova                                                         |
| extra_dhcp_opts       |                                                                      |
| fixed_ips             | {"subnet_id": "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": "192.168.120.10
Re: [Openstack] OpenVSwitch inside Instance no ARP passthrough
Hi Mathias, do you have all the corresponding bridges and patches between them, as described in openvswitch_agent.ini by the integration_bridge, tunnel_bridge, int_peer_patch_port, tun_peer_patch_port and bridge_mappings parameters? And make sure that the "neutron-ovs-cleanup" service is in use during system boot. You can check these bridges and patches using the "ovs-vsctl show" command. On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: Dear all, I'm quite new to OpenStack and would like to install Open vSwitch inside one instance of our Mitaka OpenStack lab environment ... But it seems that ARP packets get lost between the network interface of the instance and the OVS bridge ... With tcpdump on the interface I see the ARP packets ... tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on ens6, link-type EN10MB (Ethernet), capture size 262144 bytes 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 but on the OVS bridge nothing arrives ... 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 bytes

I disabled port_security and removed the security group, but nothing changed:

+-----------------------+------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | node11 |
| binding:profile | {} |
| binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} |
| binding:vif_type | ovs |
| binding:vnic_type | normal |
| created_at | 2018-01-27T16:45:48Z |
| description | |
| device_id | 74916967-984c-4617-ae33-b847de73de13 |
| device_owner | compute:nova |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": "192.168.120.10"} |
| id | 74b754d6--4c2e-bfd1-87f640154ac9 |
| mac_address | fa:16:3e:af:90:0c |
| name | |
| network_id | 917254cb-9721-4207-99c5-8ead9f95d186 |
| port_security_enabled | False |
| project_id | c48457e73b664147a3d2d36d75dcd155 |
| revision_number | 27 |
| security_groups | |
| status | ACTIVE |
| tenant_id | c48457e73b664147a3d2d36d75dcd155 |
| updated_at | 2018-01-27T18:54:24Z |
+-----------------------+------------------------------------------------+

maybe the port_filter still causes the problem? But how do I disable it? Any other idea? Thanks and BR Mathias. ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
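For reference, the two usual ways to stop Neutron from filtering traffic that an in-guest OVS bridge answers for are to disable port security entirely or to whitelist the extra addresses. A hedged sketch using the Mitaka-era neutron client — <port-id> is a placeholder for the instance port's UUID, and exact option spelling may vary by client version:

```shell
# Option 1 (sketch): fully disable filtering on the instance's port, so
# frames bridged through the in-guest OVS are not dropped by the
# hybrid-plug iptables rules. Security groups must be removed first.
neutron port-update <port-id> --no-security-groups
neutron port-update <port-id> --port-security-enabled=False

# Option 2 (sketch): keep port security, but whitelist the additional
# addresses the in-guest bridge will answer for.
neutron port-update <port-id> --allowed-address-pairs type=dict list=true \
    ip_address=192.168.120.0/24
```

Note that the port-show output above already reports port_security_enabled False; if traffic is still filtered, re-plugging the VIF (e.g. a hard reboot of the instance) may be needed for the change to take effect.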
Re: [Openstack] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
Hi Ata, when you use Octavia, you don't need agents, as specified in the documentation: "Ensure that the LBaaS v1 and v2 service providers are removed from the [service_providers] section. They are not used with Octavia. *Verify that all LBaaS agents are stopped.*" Also, the neutron lbaas CLI is deprecated in favor of the openstack loadbalancer CLI, which talks to Octavia directly, using the corresponding endpoints. On 1/21/18 11:01 PM, Ata Abdollahi wrote: Hello everybody, I'm a beginner in OpenStack and I have installed OpenStack Ocata successfully. I used the link below to install LBaaS: https://docs.openstack.org/ocata/networking-guide/config-lbaas.html When I try to install lbaasv2, I encounter the error below: openstack@ubuntu:~$ sudo neutron-lbaasv2-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/lbaas_agent.ini Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports. 2018-01-21 22:43:12.408 10772 INFO neutron.common.config [-] Logging enabled! 2018-01-21 22:43:12.409 10772 INFO neutron.common.config [-] /usr/bin/neutron-lbaasv2-agent version 10.0.4 2018-01-21 22:43:12.411 10772 WARNING stevedore.named [req-6ebf45ef-7ff4-43c2-8c9a-d9b1f3acc839 - - - - -] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver ^C2018-01-21 22:43:19.697 10772 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting I entered the commands on the controller node. Thanks a lot. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
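The advice above — LBaaS providers removed, no agents running, neutron pointing straight at the Octavia driver — corresponds roughly to the following neutron.conf fragment. This is a minimal sketch assuming an Ocata/Pike-era layout, not a complete configuration:

```ini
# Sketch: neutron.conf pieces when Octavia replaces the agent-based
# haproxy driver. No neutron-lbaas agent (neutron-lbaasv2-agent) is
# configured or started at all.
[DEFAULT]
service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

[service_providers]
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
```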
Re: [Openstack] compute nodes down
2 node-9.mydom.com <http://node-9.mydom.com> sudo[40591]: nova : TTY=unknown ; PWD=/var/lib/nova ; USER=root ; COMMAND=/usr/bin/nova-rootwrap /etc/nova/rootwrap.conf touch -c /mnt/MSA_FC_Vol1/nodes/_base/a49721a231fdd7b45293b29dd13c34207c9c891b Dec 18 22:00:32 node-9.mydom.com <http://node-9.mydom.com> sudo[40591]: pam_unix(sudo:session): session opened for user root by (uid=0) Dec 18 22:00:32 node-9.mydom.com <http://node-9.mydom.com> sudo[40591]: pam_unix(sudo:session): session closed for user root lines 1-22/22 (END) root@node-9:~# Does anything in the status messages show what could be wrong? What do the "nova : TTY=unknown" messages mean? thanks!! -- Jim ___________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Re: [Openstack-operators] Newton LBaaS v2 settings
Hi Grant, in the case of Octavia, when you create a healthmonitor, these are the monitoring parameters: $ openstack loadbalancer healthmonitor create usage: openstack loadbalancer healthmonitor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--max-width ] [--fit-width] [--print-empty] [--noindent] [--prefix PREFIX] [--name ] --delay [--expected-codes ] [--http_method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}] --timeout --max-retries [--url-path ] --type {PING,HTTP,TCP,HTTPS,TLS-HELLO} [--max-retries-down ] [--enable | --disable] Octavia pushes these parameters into the haproxy config on the Amphora agent (/var/lib/octavia//haproxy.cfg) like this:

backend f30f2586-a387-40f4-a7b7-9718aebf49d4
    mode tcp
    balance roundrobin
    timeout check 1s
    server 26ae7b5c-4ec4-4bb3-ba21-6c8bccd9cdf8 10.1.4.11:80 weight 1 check inter 5s fall 3 rise 3
    server 611a645e-9b47-40cd-a26a-b0b2a6348959 10.1.4.14:80 weight 1 check inter 5s fall 3 rise 3

So, if you suspect a problem with the backend servers, you can tune the health monitor parameters to set appropriate timeouts for the backend servers in this pool. On 12/15/17 12:11 PM, Grant Morley wrote: Hi All, I wonder if anyone would be able to help with some settings I might be obviously missing for LBaaS. We have a client that uses the service but they are coming across issues with their app randomly not working. Basically, if their app takes longer than 20 seconds to process a request, it looks like LBaaS times out the connection. I have had a look and I can't seem to find any default settings for either "server" or "tunnel" and wondered if there was a way I could increase or see any default timeout settings through the neutron CLI? I can only see timeout settings for the "Health Monitor". Any help will be much appreciated. 
Regards, -- Grant Morley Senior Cloud Engineer Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io <http://www.absolutedevops.io/> gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580 ___ OpenStack-operators mailing list OpenStack-operators@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison ___ OpenStack-operators mailing list OpenStack-operators@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
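The mapping above (delay → "inter", max-retries → "fall"/"rise", timeout → "timeout check") determines how long it takes before a dead member is actually marked DOWN. A rough back-of-the-envelope sketch, not Octavia code:

```python
# Rough sketch (not Octavia code): worst-case time before a failed pool
# member is marked DOWN by haproxy-style checks. A member is declared
# DOWN only after `max_retries` consecutive failed checks, spaced
# `delay` seconds apart, and each check may take up to `timeout`
# seconds before it counts as failed.

def worst_case_detection_seconds(delay: int, timeout: int, max_retries: int) -> int:
    """Approximate upper bound on failure-detection time, in seconds."""
    return max_retries * delay + timeout

# Example: delay=5, timeout=1, max_retries=3 (the haproxy config above)
print(worst_case_detection_seconds(5, 1, 3))  # -> 16
```

In other words, with the defaults shown above it can take around 16 seconds before traffic stops being sent to a dead backend; tightening delay/timeout shortens that window at the cost of more health-check traffic.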
Re: [Openstack] Certifying SDKs
Hi Melvin, isn't an SDK essentially the same as the OpenStack REST API? In my opinion (which can be erroneous, though), an SDK should just support everything the API supports, providing some basic parameter checks (e.g. verifying that a passed parameter complies with the IP address format, etc.) before calling the API, in order to decrease the load on OpenStack by eliminating obviously broken requests. Thanks. On 12/11/17 8:35 AM, Melvin Hillsman wrote: Hey everyone, On the path to potentially certifying SDKs we would like to gather a list of scenarios folks would like to see "guaranteed" by an SDK. Some examples - boot instance from image, boot instance from volume, attach volume to instance, reboot instance; very much like InterOp works to ensure OpenStack clouds provide specific functionality. Here is a document we can share to do this - https://docs.google.com/spreadsheets/d/1cdzFeV5I4Wk9FK57yqQmp5JJdGfKzEOdB3Vtt9vnVJM/edit#gid=0 -- Kind regards, Melvin Hillsman mrhills...@gmail.com <mailto:mrhills...@gmail.com> mobile: (832) 264-2646 -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
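The kind of cheap client-side check described above can be sketched in a few lines of Python. This is an illustrative example using only the standard library; `validate_fixed_ip` is a hypothetical helper, not part of any OpenStack SDK:

```python
# Illustrative sketch: reject an obviously malformed IP address locally,
# before any REST call is made, instead of sending a doomed request to
# the cloud and waiting for the API to return an error.
import ipaddress

def validate_fixed_ip(value: str) -> str:
    """Return the normalized IP string, or raise ValueError before any API call."""
    try:
        return str(ipaddress.ip_address(value))
    except ValueError:
        raise ValueError("not a valid IPv4/IPv6 address: %r" % (value,))

print(validate_fixed_ip("192.168.120.10"))  # -> 192.168.120.10
try:
    validate_fixed_ip("192.168.120.999")    # out-of-range octet
except ValueError as exc:
    print("rejected locally:", exc)
```

The point is that such checks only catch obviously broken input; the API remains the authority on whether the value is acceptable in context.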
Re: [Openstack-operators] [Openstack] Certifying SDKs
Hi Melvin, isn't an SDK essentially the same as the OpenStack REST API? In my opinion (which can be erroneous, though), an SDK should just support everything the API supports, providing some basic parameter checks (e.g. verifying that a passed parameter complies with the IP address format, etc.) before calling the API, in order to decrease the load on OpenStack by eliminating obviously broken requests. Thanks. On 12/11/17 8:35 AM, Melvin Hillsman wrote: Hey everyone, On the path to potentially certifying SDKs we would like to gather a list of scenarios folks would like to see "guaranteed" by an SDK. Some examples - boot instance from image, boot instance from volume, attach volume to instance, reboot instance; very much like InterOp works to ensure OpenStack clouds provide specific functionality. Here is a document we can share to do this - https://docs.google.com/spreadsheets/d/1cdzFeV5I4Wk9FK57yqQmp5JJdGfKzEOdB3Vtt9vnVJM/edit#gid=0 -- Kind regards, Melvin Hillsman mrhills...@gmail.com <mailto:mrhills...@gmail.com> mobile: (832) 264-2646 -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
[openstack-dev] [Heat] [Octavia] Re: [Bug 1737567] Re: Direct support for Octavia LBaaS API
Hi Rabi, see below. On 12/13/17 11:03 AM, Rabi Mishra wrote: if Heat will provide a way to choose provider (neutron-lbaas or octavia), then customers will continue to use neutron-lbaas as long as it will be required, with their specific drivers (haproxy, F5, A10, etc), gradually migrating to Octavia when time will come. Heat already provides that, though it uses the neutron lbaas API extensions and not the Octavia API (you have to set the service provider in the lbaas config, e.g. service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default). As said: Octavia is properly returning a 409 HTTP status code telling the caller that the load balancer is in an immutable state and the user should try again. The issue is neutron-lbaas has some fundamental issues with its object locking that would require a full re-write to correct. neutron-lbaas is not using transactions and locking correctly, so it is allowing your second request through even though the load balancer is/should be locked on the first request. This means that neutron-lbaas is unsuitable for automated operations (high operation rates) on the same objects if, for some reason, the provider asks for a delay. I confirm the issue described here - http://lists.openstack.org/pipermail/openstack-dev/2017-July/120145.html. It's not Heat's issue, it's a neutron-lbaas issue, and while neutron-lbaas has this kind of problem, relying on it is undesirable. We would probably not like to have the logic in the resources to call two different api endpoints based on the 'provider' choice in resource properties and then provide more functionality for the ones using 'octavia'. What I'm talking about is not replacing existing resources and not expanding functionality, but providing the same functionality, with the same set of resources, via two different providers. It's, IMHO, the easiest and fastest way to start supporting Octavia and work around the current neutron-lbaas issues. 
Yes, Octavia provides a superset of the current neutron-lbaas API, and what I said above doesn't cancel the idea of creating another set of resources. If Heat provides a basic set of functions within the basic LBaaS framework and, eventually, a richer set within an "NG LBaaS" framework, all I can say is: it will be great. Thanks. https://bugs.launchpad.net/heat/+bug/1737567 -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
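The behaviour described in the thread — Octavia returning HTTP 409 while a load balancer is in an immutable (PENDING_*) state, with the caller expected to back off and retry — can be sketched as follows. `Conflict`, `call_with_retry`, and the fake API are illustrative stand-ins, not real octavia or heat classes:

```python
# Sketch of retry-on-409 with exponential backoff. When the load
# balancer is immutable, the API answers 409 (modelled here by the
# Conflict exception); the caller waits and retries instead of failing.
import time

class Conflict(Exception):
    """Stand-in for an HTTP 409 'object is immutable' error."""

def call_with_retry(do_call, attempts=5, base_delay=0.01):
    """Call do_call(), retrying on Conflict with exponential backoff."""
    for attempt in range(attempts):
        try:
            return do_call()
        except Conflict:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))

# Simulated API call that is immutable for the first two invocations:
state = {"calls": 0}
def fake_member_create():
    state["calls"] += 1
    if state["calls"] < 3:
        raise Conflict("load balancer is in PENDING_UPDATE")
    return "ACTIVE"

print(call_with_retry(fake_member_create))  # -> ACTIVE
```

This is the client-side discipline that neutron-lbaas's broken locking makes impossible to rely on: with correct locking, a 409 is a clear, retryable signal rather than a race.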
[openstack-dev] [octavia] API v2 or v1 or both?
Hi colleagues, when I use in [api_settings] section of octavia.conf api_v1_enabled = False api_v2_enabled = True I'm getting member creation failed: 2017-12-04 22:20:37.326 7199 INFO neutron_lbaas.services.loadbalancer.plugin [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] Calling driver operation LoadBalancerManager.create 2017-12-04 22:20:37.326 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] url = http://lagavulin:9876/v1/loadbalancers request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:138 2017-12-04 22:20:37.327 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] args = {"vip": {"subnet_id": "1748a90d-25bc-46a0-b623-d8450db83ed1", "port_id": "f773321c-eb34-41fa-8434-45358e6232fb", "ip_address": "10.1.1.12"}, "name": "nbt-balancer", "project_id": "c1114776e144400da17d8e060856be8c", "enabled": true, "id": "25604d3c-1714-4837-ad98-8ea2d2f03bc7", "description": ""} request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:139 2017-12-04 22:20:37.337 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] Octavia Response Code: 405 request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:144 2017-12-04 22:20:37.338 7199 DEBUG neutron_lbaas.drivers.octavia.driver [req-2c3d3185-9440-4f3d-90de-dff87de783b9 e6406606bd9d48aabc413468f9703cf6 c1114776e144400da17d8e060856be8c - default default] Octavia Response Body: 405 Method Not Allowed 405 Method Not Allowed The method POST is not allowed for this resource. 
request /usr/lib/python2.7/dist-packages/neutron_lbaas/drivers/octavia/driver.py:145 When I have both v1 and v2 enabled, I see ALL calls to the API going to http://...:9876/_v1_/... and corresponding log records like 2017-12-04 18:12:01.676 27192 INFO octavia.api._v1_.controllers.* Does "v1" mean that neutron uses LBaaS API v1 instead of v2? Neutron itself is configured to use LBaaS v2: [DEFAULT] service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2 [service_providers] service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default Is this enough, or how do I configure both neutron and octavia to use API v2 instead of v1? I'm under Ubuntu 16 and OpenStack Pike, and there is just the "python-neutron-lbaas" package installed (no lbaasv2-agent, lbaas-common, etc.) plus octavia itself (installed via "pip install octavia" in a _virtual environment_). Thus, endpoints are configured unversioned in this way:

+----------------------------------+-----------+--------------+---------------+---------+-----------+------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+---------------+---------+-----------+------------------------+
| 18862b1bd4c643aca207f8b2d9066895 | RegionOne | octavia | load-balancer | True | internal | http://lagavulin:9876/ |
| 8cb62ab73fb2431dbc3a0def744852ea | RegionOne | octavia | load-balancer | True | public | http://lagavulin:9876/ |
| 909e9fc434cb4667bb828828bf49f906 | RegionOne | octavia | load-balancer | True | admin | http://lagavulin:9876/ |
+----------------------------------+-----------+--------------+---------------+---------+-----------+------------------------+

and there is no neutron lbaas agent in the list of agents (since there is no lbaasv2 agent installed and configured in the system). It seems I'm missing something. And two related questions: * which method of providing the API is preferable - WSGI or a standalone octavia-api process? * how can I be sure that the Octavia code matches the Pike version? I'm using "pip install octavia" and it's v1.0.1 at the moment, but I'm not sure it matches the package "python-neutron-lbaas" version 2:11.0.0-0ubuntu1~cloud0. Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." 
-- Thomas Edison __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [octavia] [heat] errors during loadbalancer creation
heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155 2017-11-29 12:04:36.017 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214 2017-11-29 12:04:38.763 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155 2017-11-29 12:04:39.763 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214 2017-11-29 12:04:39.891 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155 2017-11-29 12:04:40.892 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214 2017-11-29 12:04:41.013 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:155 2017-11-29 12:04:42.013 
6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:214 2017-11-29 12:04:42.277 6286 DEBUG heat.engine.scheduler [req-346cc302-ec69-4781-9afb-1dc1474c6ebc - bush - default default] Task create from PoolMember "pm1" Stack "nbt" [b8beca77-19c7-49e5-94a7-ec079d841277] complete step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:220 The Heat template for the load balancer is the following:

balancer:
  type: OS::Neutron::LBaaS::LoadBalancer
  properties:
    name: nbt-balancer
    vip_subnet: { get_resource: lan-subnet }
listener:
  type: OS::Neutron::LBaaS::Listener
  properties:
    name: nbt-listener
    protocol: TCP
    protocol_port: { get_param: lb_port }
    loadbalancer: { get_resource: balancer }
pool:
  type: OS::Neutron::LBaaS::Pool
  properties:
    name: nbt-pool
    protocol: TCP
    lb_algorithm: ROUND_ROBIN
    listener: { get_resource: listener }
pm1:
  type: OS::Neutron::LBaaS::PoolMember
  properties:
    address: { get_attr: [ n1, first_address ] }
    pool: { get_resource: pool }
    protocol_port: { get_param: pool_port }
    subnet: { get_resource: lan-subnet }
pm2:
  type: OS::Neutron::LBaaS::PoolMember
  properties:
    address: { get_attr: [ n2, first_address ] }
    pool: { get_resource: pool }
    protocol_port: { get_param: pool_port }
    subnet: { get_resource: lan-subnet }

and, of course, servers n1 and n2 exist and are operational. I would appreciate it if you could take a look at the issue and give some feedback. I can provide any related information to clarify this issue. Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
[openstack-dev] [octavia] openstack CLI output don't match neutron lbaas-* output
7956-7c54-448a-96a7-709905c2bf4f neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

+------------------+------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------+
| admin_state_up | True |
| delay | 5 |
| id | 71cc7956-7c54-448a-96a7-709905c2bf4f |
| max_retries | 3 |
| max_retries_down | 3 |
| name | |
| pools | {"id": "e106e039-af27-4cfa-baa2-7238acd3078e"} |
| tenant_id | c1114776e144400da17d8e060856be8c |
| timeout | 1 |
| type | PING |
+------------------+------------------------------------------------+

but the openstack CLI extension thinks differently: doka@lagavulin(admin@bush):~/heat$ openstack loadbalancer healthmonitor list doka@lagavulin(admin@bush):~/heat$ openstack loadbalancer healthmonitor show 71cc7956-7c54-448a-96a7-709905c2bf4f Unable to locate 71cc7956-7c54-448a-96a7-709905c2bf4f in healthmonitors Just letting you know - and could this impact something else? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [openstack-dev] [octavia] connection to external network
Hi colleagues, found the solution, it need to be done manually. No corresponding Octavia configuration responsible for this. Everything works, thank you :-) On 11/27/17 11:30 AM, Volodymyr Litovka wrote: Hello colleagues, I think I'm missing something architectural in LBaaS / Octavia, thus asking there - how to connect Amphora agent to external network? My current lab topology is the following: + | | ++ + ++ n1 | | +-+ | ++ ++ Amphora ++ | +-+ | ++ m | n ++ n2 | g | b | ++ + e m | t | | x t | | ++ | t | s ++ vR ++ e | u | ++ | r ++ b | | n | Controller | n | ++ | a ++ e |+ n3 | + l t | ++ + where "Amphora" is agent which loadbalances requests between "n1" and "n2": * openstack loadbalancer create --name lb1 --vip-subnet-id nbt-subnet --project bush * openstack loadbalancer listener create --protocol TCP --protocol-port 80 --name lis1 lb1 * openstack loadbalancer pool create --protocol TCP --listener lis1 --name lpool1 --lb-algorithm ROUND_ROBIN * openstack loadbalancer member create --protocol-port 80 --name n1 --address 1.1.1.11 lpool1 * openstack loadbalancer member create --protocol-port 80 --name n2 --address 1.1.1.14 lpool1 Everything works (n3-sourced connections to Amphora-agent return answers from n1 and n2 respectively in round robin way) and the question is how to connect Amphora-agent to external network in order to service requests from outside? In example above, nbt-subnet (which is VIP network) has a virtual router which is connected to external network and has all abilities to provide e.g. floating ip to Amphora, but I see nothing in octavia config files regarding floating ip functions. Am I missing something? Any ways on connect Web-servers in closed (project's) networks with Internet using Octavia / LBaaS? Thank you! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -- Volodymyr Litovka "Vision without Execution is Hallucination." 
-- Thomas Edison __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
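The "manual" solution alluded to above is usually a floating IP attached to the load balancer's VIP port, since Octavia itself has no floating-IP configuration. A hedged sketch — "lb1" and "ext-net" are illustrative names, and column/flag spellings should be checked against your client version:

```shell
# Sketch: expose a load balancer externally by attaching a floating IP
# to its VIP port. Octavia creates the VIP port; Neutron does the NAT.
VIP_PORT=$(openstack loadbalancer show lb1 -f value -c vip_port_id)
openstack floating ip create ext-net
openstack floating ip set --port "$VIP_PORT" <floating-ip-address>
```

After this, traffic to the floating IP is DNATed by the virtual router to the VIP on the project network, so nothing Octavia-specific is involved.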
Re: [Openstack] prepare UEFI image using diskimage-builder
On 11/23/17 5:17 PM, Volodymyr Litovka wrote: Hi colleagues, we've faced an issue, described in https://bugzilla.redhat.com/show_bug.cgi?id=1020622 - a VM doesn't see virtio-scsi devices if there are 2+ connected devices (in our case, the block device itself and a config drive). We're under Ubuntu 16.04 and, while Red Hat released a fixed version of seabios (1.10.2-3), there is no update for Ubuntu (still 1.10.2-1). We see two ways to work around this problem: * try to replace the seabios binaries (bios-256k.bin and bios.bin) with the ones from the Red Hat RPM on the compute nodes This works for Ubuntu 16. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
[openstack-dev] [octavia] connection to external network
Hello colleagues, I think I'm missing something architectural in LBaaS / Octavia, thus asking there - how to connect Amphora agent to external network? My current lab topology is the following: + | | ++ + ++ n1 | | +-+ | ++ ++ Amphora ++ | +-+ | ++ m | n ++ n2 | g | b | ++ + e m | t | | x t | | ++ | t | s ++ vR ++ e | u | ++ | r ++ b | | n | Controller | n | ++ | a ++ e |+ n3 | + l t | ++ + where "Amphora" is agent which loadbalances requests between "n1" and "n2": * openstack loadbalancer create --name lb1 --vip-subnet-id nbt-subnet --project bush * openstack loadbalancer listener create --protocol TCP --protocol-port 80 --name lis1 lb1 * openstack loadbalancer pool create --protocol TCP --listener lis1 --name lpool1 --lb-algorithm ROUND_ROBIN * openstack loadbalancer member create --protocol-port 80 --name n1 --address 1.1.1.11 lpool1 * openstack loadbalancer member create --protocol-port 80 --name n2 --address 1.1.1.14 lpool1 Everything works (n3-sourced connections to Amphora-agent return answers from n1 and n2 respectively in round robin way) and the question is how to connect Amphora-agent to external network in order to service requests from outside? In example above, nbt-subnet (which is VIP network) has a virtual router which is connected to external network and has all abilities to provide e.g. floating ip to Amphora, but I see nothing in octavia config files regarding floating ip functions. Am I missing something? Any ways on connect Web-servers in closed (project's) networks with Internet using Octavia / LBaaS? Thank you! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [Openstack] Implementation docs ?
Hi Guru, in general, there is no difference between the UI and CLI methods - both use the API, which is available through the endpoints described in Keystone. If you want to see details of what happens when you perform an operation, try switching debug=True and verbose=True in Neutron's config files (neutron.conf and the corresponding plugins, e.g. neutron/plugins/ml2/*) - this will give you a comprehensive view of the operations themselves and their sequence. On 11/25/17 7:11 PM, Guru Desai wrote: Thanks Melvin.. I had a look there. Unfortunately, that link does not provide much info on the details. I am curious to understand the operations that happen in the backend when some operation is done using the UI or CLI, i.e. say when a security group is created, what happens in the backend with Open vSwitch or Linux bridge etc.. Regards, Guru On Sat, Nov 25, 2017 at 9:52 PM, Melvin Hillsman <mrhills...@gmail.com <mailto:mrhills...@gmail.com>> wrote: Hey Guru, You can find some information here - https://docs.openstack.org/ocata/networking-guide/ <https://docs.openstack.org/ocata/networking-guide/> - I am not sure how dated the document is but it offers some good introductory information at least. Additionally you may find some help in the #openstack-neutron IRC channel. Hope this helps you get started. On Sat, Nov 25, 2017 at 4:44 AM, Guru Desai <guru...@gmail.com <mailto:guru...@gmail.com>> wrote: Hello All, I am the new kid on the block!! Looking for info on a few things in OpenStack. Apologies if this is not the place to post such questions. Is there any doc on security group / general network operation (creating networks, deleting, etc.) implementations, any flow-chart kind of thing? I mean, how would an SG function with OVS or Linux bridge? Any pointers would be very helpful. 
Thanks GD ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack> Post to : openstack@lists.openstack.org <mailto:openstack@lists.openstack.org> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack> -- Kind regards, Melvin Hillsman mrhills...@gmail.com <mailto:mrhills...@gmail.com> mobile: (832) 264-2646 ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
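The debug advice above boils down to a small neutron.conf change. A minimal sketch (note: in newer releases the separate "verbose" option has been deprecated/removed, so "debug" alone may be sufficient):

```ini
# Sketch: enable debug logging in neutron.conf (and, similarly, in the
# plugin config files under neutron/plugins/) to trace what the backend
# does for each UI/CLI operation.
[DEFAULT]
debug = True
verbose = True
```

Restart the neutron services after the change, then watch the logs (e.g. /var/log/neutron/) while performing an operation such as creating a security group.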
[Openstack] prepare UEFI image using diskimage-builder
Hi colleagues, we've faced an issue, described in https://bugzilla.redhat.com/show_bug.cgi?id=1020622 - a VM doesn't see virtio-scsi devices if there are 2+ connected devices (in our case, the block device itself and a config drive). We're under Ubuntu 16.04 and, while Red Hat released a fixed version of seabios (1.10.2-3), there is no update for Ubuntu (still 1.10.2-1). We see two ways to work around this problem: 1. try to replace the seabios binaries (bios-256k.bin and bios.bin) with the ones from the Red Hat RPM on the compute nodes 2. or use UEFI for booting images Have we missed any others? We will try the first way, but I guess UEFI is the preferable long-term solution, since it is better than legacy BIOS. While googling, I didn't find any recommendations on how to prepare UEFI images using diskimage-builder. Can anybody here point to documentation or give some recommendations on how to create UEFI images with diskimage-builder? Thank you! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
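For what it's worth, recent diskimage-builder versions grew an element for building images with an EFI system partition. A heavily hedged sketch — verify that the element names below exist in your DIB version before relying on this:

```shell
# Sketch (assumes a diskimage-builder version that ships the
# block-device-efi element): build an Ubuntu image with a GPT layout and
# an EFI system partition so it can boot under OVMF/UEFI.
export DIB_RELEASE=xenial
disk-image-create -o ubuntu-uefi ubuntu vm block-device-efi
```

On the Nova side, the image typically also needs the hw_firmware_type=uefi image property (and OVMF firmware installed on the compute nodes) for the guest to actually boot via UEFI.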
Re: [openstack-dev] [Octavia] networking issues
Please disregard this message - I've found that part of the networking resides in a network namespace.

On 11/7/17 5:54 PM, Volodymyr Litovka wrote: Dear colleagues, while trying to set up Octavia, I faced a problem with connecting the amphora agent to the VIP network.

Environment: Octavia 1.0.1 (installed using "pip install"), OpenStack Pike:
- Nova 16.0.1
- Neutron 11.0.1
- Keystone 12.0.0

Topology of testbed: [ASCII diagram mangled in archiving; it showed the Amphora VM attached both to the management network (lb-mgmt-net, towards the Controller) and to the project network (lbt-net), where the backend nodes n1 and n2 and the virtual router vR reside]

Summary:
$ openstack loadbalancer create --name nlb2 --vip-subnet-id lbt-subnet
$ openstack loadbalancer list
| id                                   | name | project_id                       | vip_address | provisioning_status | provider |
| 93facca0-d39a-44e0-96b6-28efc1388c2d | nlb2 | d8051a3ff3ad4c4bb380f828992b8178 | 1.1.1.16    | ACTIVE              | octavia  |
$ openstack server list --all
| ID                                   | Name                                         | Status | Networks                                    | Image   | Flavor |
| 98ae591b-0270-4625-95eb-a557c1452eef | amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab | ACTIVE | lb-mgmt-net=172.16.252.28; lbt-net=1.1.1.11 | amphora |        |
| cc79ca78-b036-4d55-a4bd-5b3803ed2f9b | lb-n1                                        | ACTIVE | lbt-net=1.1.1.18                            |         | B-cup  |
| 6c43ccca-c808-44cf-974d-acdbdb4b26db | lb-n2                                        | ACTIVE | lbt-net=1.1.1.19                            |         | B-cup  |

This output shows that the amphora agent is active with two interfaces, connected to the management and project networks (lb-mgmt-net and lbt-net respectively). BUT in fact there is no interface to lbt-net on the agent's VM:

ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 [ ... 
] 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   link/ether d0:1c:a0:58:e0:02 brd ff:ff:ff:ff:ff:ff
   inet 172.16.252.28/22 brd 172.16.255.255 scope global eth0
ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$ ls /sys/class/net/
eth0 lo
ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$

The issue is that eth1 exists during start of the agent's VM and then it magically disappears (snipped from syslog, note the timing):

Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPREQUEST of 1.1.1.11 on eth1 to 255.255.255.255 port 67 (xid=0x1c65db9b)
Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPOFFER of 1.1.1.11 from 1.1.1.10
Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPACK of 1.1.1.11 from 1.1.1.10
Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: bound to 1.1.1.11 -- renewal in 38793 seconds.
[ ... ]
Nov 7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1116]: receive_packet failed on eth1: Network is down
Nov 7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab systemd[1]: Stopping ifup for eth1...
Nov 7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Killed old client process
Nov 7 12:00:45 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Error getting hardware address for "eth1": No suc
[openstack-dev] [Octavia] networking issues
200}

Octavia-worker.log is available at the following link: https://pastebin.com/44rwshKZ

Questions are: any ideas on what is happening, and what further information and debug output should I gather in order to resolve this issue? Thank you.

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [Openstack] file injection problem
Answer: as written in the source code, "config-drive = true" and file injection using personality are mutually exclusive mechanisms.

On 10/25/17 2:14 AM, Volodymyr Litovka wrote: Also, the python-guestfs package is installed, so Nova is able to use it; at least a quick check (snipped from the Nova sources) passes:

# python2.7
Python 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from oslo_utils import importutils
>>> g = importutils.import_module('guestfs')
>>> print g
>>> from eventlet import tpool
>>> t = tpool.Proxy(g.GuestFS())
>>> t.add_drive("/dev/null")
>>> t.launch()
>>> print t

No idea why I'm facing this problem. Can anybody comment on this? Thanks again.

On 10/25/17 1:24 AM, Volodymyr Litovka wrote: Hi colleagues, it's driving me crazy - how do I make file injection into an instance work? nova.conf is already configured with:

[DEFAULT]
debug=true
[libvirt]
inject_partition = -1
[guestfs]
debug=true
[quota]
injected_files = 5
injected_file_content_bytes = 10240
injected_file_path_length = 255

libguestfs and libguestfs-tools are installed (on the host machine):
libguestfs-hfsplus:amd64 1:1.32.2-4ubuntu2
libguestfs-perl 1:1.32.2-4ubuntu2
libguestfs-reiserfs:amd64 1:1.32.2-4ubuntu2
libguestfs-tools 1:1.32.2-4ubuntu2
libguestfs-xfs:amd64 1:1.32.2-4ubuntu2
libguestfs0:amd64 1:1.32.2-4ubuntu2

and, finally, nova --debug boot --config-drive true --image --flavor --security-groups --key-name --file /etc/qqq=/dTest.txt --nic [...] 
dtest makes a correct request (note the personality parameter):

REQ: curl -g -i -X POST http://controller:8774/v2.1/servers -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "OpenStack-API-Version: compute 2.53" -H "X-OpenStack-Nova-API-Version: 2.53" -H "X-Auth-Token: {SHA1}11e6bac1ea20a124903ff967873c186a179d545e" -H "Content-Type: application/json" -d '{"server": {"name": "dtest", "imageRef": "12c86830-8d76-4159-a6bc-81966d7a220e", "key_name": "xxx", "flavorRef": "d0ff4bc5-df38-4f20-8908-afc516d594e6", "max_count": 1, "min_count": 1, "personality": [{"path": "/etc/qqq", "contents": "ZG9rYSB0ZXN0CmRva2EgdGVzdApkb2thIHRlc3QK"}], "networks": [{"uuid": "9cc72002-fe24-44a5-aa04-1ac0470f"}], "security_groups": [{"name": "dfc7d642-b55f-465c-84c2-9d95c9c565bf"}], "config_drive": true}}'

but nothing appears anywhere - no '/etc/qqq' in the guest VM and no logs (despite guestfs.debug=true) on the host machine. It's Pike on Ubuntu 16.04.3. What am I doing wrong? Thanks.
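As a side note, the opaque "contents" blob in the personality entry above is just the injected file encoded in base64. A minimal sketch of building such an entry (the helper name is mine, not a novaclient API):

```python
import base64

# Build one "personality" entry for a server-create request body.
# personality_entry is an illustrative helper, not part of novaclient.
def personality_entry(path, text):
    return {"path": path,
            "contents": base64.b64encode(text.encode("utf-8")).decode("ascii")}

# Hypothetical file contents; the request above carries a similar blob.
entry = personality_entry("/etc/qqq", "doka test\ndoka test\ndoka test\n")
```

Decoding the entry's "contents" field recovers the original file text, which is exactly what Nova does before injecting the file (or writing it to the config drive).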
[Openstack] launch Amphora (Octavia) image
Hi colleagues, I have an issue with launching the 'amphora' image. I'm using Octavia's diskimage-create.sh tool on Ubuntu 16.04.3, which reports the following at start:

> Building elements: base base vm ubuntu haproxy-octavia-ubuntu rebind-sshd amphora-agent-ubuntu keepalived-octavia-ubuntu pip-cache certs-ramfs

(note: no-resolvconf is omitted for debugging purposes). I then upload the image to Glance using the following command:

openstack image create \
--container-format bare --disk-format qcow2 --public \
--property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_cdrom_bus=scsi \
--property hw_qemu_guest_agent=yes \
--property img_hide_hypervisor_id=true \
--tag amphora \
--file amphora-x64-haproxy.qcow2 amphora

but none of Octavia's services start on launch, due to the absence of amphora-agent.conf:

Oct 20 13:45:57 lbt-n1 systemd[1]: Starting Creates an encrypted ramfs for Octavia certs...
Oct 20 13:45:57 lbt-n1 sh[1779]: awk: fatal: cannot open file `/etc/octavia/amphora-agent.conf' for reading (No such file or directory)
[ ... 
] Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: Traceback (most recent call last):
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: File "/usr/local/bin/amphora-agent", line 11, in
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: sys.exit(main())
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: File "/usr/local/lib/python2.7/dist-packages/octavia/cmd/agent.py", line 56, in main
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: service.prepare_service(sys.argv)
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: File "/usr/local/lib/python2.7/dist-packages/octavia/common/service.py", line 24, in prepare_service
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: config.init(argv[1:])
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: File "/usr/local/lib/python2.7/dist-packages/octavia/common/config.py", line 576, in init
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: **kwargs)
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2469, in __call__
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: raise ConfigFilesNotFoundError(self._namespace._files_not_found)
Oct 20 13:45:57 lbt-n1 amphora-agent[1683]: oslo_config.cfg.ConfigFilesNotFoundError: Failed to find some config files: /etc/octavia/amphora-agent.conf

I inspected /usr/local/share/octavia/elements and again found nothing about amphora-agent.conf, and nothing in the git tree either (except elements/amphora-agent/init-scripts/upstart/amphora-agent.conf, which isn't a config file but an upstart script). So the question is: where can I find amphora-agent.conf in order to launch the amphora-agent on the agent host? Thank you.
[Openstack] Octavia (LBaaS) and Designate
Hi colleagues, is it mandatory to have Designate running in order for Octavia (LBaaSv2) to run as well? I see the following messages in the logs:

2017-10-12 12:01:21.915 16382 DEBUG neutronclient.v2_0.client [-] Error message: {"message": "The resource could not be found.\nExtension with alias dns-integration does not exist\n\n", "code": "404 Not Found", "title": "Not Found"} _handle_fault_response /usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:258
2017-10-12 12:01:21.916 16382 DEBUG octavia.network.drivers.neutron.base [-] Neutron extension dns-integration is not enabled _check_extension_enabled /usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/base.py:65

while I have never seen any mention of Designate in either the documentation or the config files. Thank you.
Re: [Openstack] extend attached volumes
As far as I understand, the chain is the same - the admin extends the volume via Cinder but, unlike the earlier implementation, if the volume is in-use, Cinder asks Nova whether it supports extending attached volumes (by checking Nova's API microversion) and, if yes, resizes the volume and informs Nova via the "volume-extended" call. On Nova's side, it informs QEMU about the change... then either the user resizes manually or it happens on the next reboot. I can be wrong, of course, but there is no "nova volume-extend", nor the word "extend" anywhere in "nova --help" :-)

On 9/26/17 5:35 PM, John Petrini wrote: I think this feature is actually implemented in nova. So you have to use the nova volume-extend option to do what you want. This is just my interpretation of the release notes though. I haven't tried it.

On Tue, Sep 26, 2017 at 10:20 AM, Volodymyr Litovka <doka...@gmx.com> wrote: [ ... ]
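The gate described in this thread boils down to a microversion comparison against compute API 2.51. A simplified sketch of that check (the real logic lives in Cinder/Nova; the function name here is mine):

```python
# Online extend of an in-use volume needs compute API microversion >= 2.51.
# Simplified illustration of the version gate, not the actual Cinder code.
def supports_online_extend(microversion):
    major, minor = (int(part) for part in microversion.split("."))
    # Compare as (major, minor) tuples: "2.9" < "2.51" numerically,
    # even though it sorts after it as a plain string.
    return (major, minor) >= (2, 51)
```

Note the tuple comparison: microversions are not decimal fractions, so a plain string or float comparison would mis-order e.g. 2.9 and 2.51.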
Re: [Openstack] extend attached volumes
Hi Jay, I know about this way :-) but Pike introduced the ability to resize attached volumes: "It is now possible to signal and perform an online volume size change as of the 2.51 microversion using the volume-extended external event. Nova will perform the volume extension so the host can detect its new size. It will also resize the device in QEMU so the instance can detect the new disk size without rebooting." -- https://docs.openstack.org/releasenotes/nova/pike.html

On 9/26/17 5:04 PM, Jay Pipes wrote: Detach the volume, then resize it, then re-attach. Best, -jay

On 09/26/2017 09:22 AM, Volodymyr Litovka wrote: [ ... ]
[Openstack] extend attached volumes
Colleagues, I can't find a way to resize an attached volume. I'm on Pike. As far as I understand, this requires support in Nova, because Cinder needs to check with Nova whether it's possible to extend the volume. Well:
- Nova's API microversion is 2.51, which seems to be enough to support the "volume-extended" API call
- the image properties are hw_disk_bus='scsi' and hw_scsi_model='virtio-scsi', type bare/raw, located in Cinder
- the hypervisor is KVM
- the volume is bootable, mounted as root, created as a snapshot from a Cinder volume
- Cinder's backend is CEPH/Bluestore
and both "cinder extend" and "openstack volume set --size" return "Volume status must be '{'status': 'available'}' to extend, currently in-use". I did not find any configuration options in either the nova or cinder config files which could help with this functionality. What am I doing wrong? Thank you.
Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?
Hi Matt,

On 9/22/17 7:10 PM, Matt Riedemann wrote: while this approach is OK in general, some comments from my side:

1. For a new instance, if the neutron network has a dns_domain set, use it. I'm not totally sure how we tell from the metadata API if it's a new instance or not, except when we're building the config drive, but that could be sorted out.

In some scenarios, ports can be created for a VM but stay detached until the "right" time. For example, at the moment Nova doesn't reflect Neutron's port admin state to the VM (it was going to for a long time; thanks to this discussion I just filed a bug https://bugs.launchpad.net/nova/+bug/1719261 ). So, if you need a VM with predefined port roles (with corresponding iptables rules) but, for some reason, these ports should be DOWN, you need to:
- create them before the VM is created
- pass their MAC addresses to the VM in order to create the corresponding udev naming rules and subsequent configuration
- but not attach them
In such a scenario, a network with the "dns_domain" parameter can be unavailable to the VM, since there are no attached ports from this network at VM creation time. And a second point: "dns_domain" is a property which takes effect only when Designate is in use. It could instead be an intrinsic property of the network, used with dnsmasq's "--domain" parameter to give useful responses to DHCP "domain" queries. Not too critical, but full DNS integration isn't always required where simple DHCP functionality is enough.

2. Otherwise use the dhcp_domain config option in nova.

A crazy idea is to allow customization right here - if the instance's "name" is itself an FQDN (e.g. myhost.some.domain.here), then:
- ignore "dhcp_domain" and pass "name" unchanged as the hostname to the VM
- but use the "hostname" part of the name (e.g. myhost) to register the VM in OpenStack

Thank you.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
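The "crazy idea" above can be sketched in a few lines. This is illustrative only (the function name and defaults are mine, not Nova's): if the instance name is already an FQDN, use it verbatim as the guest hostname and take only the leading label as the OpenStack-side name; otherwise fall back to dhcp_domain.

```python
# Illustrative sketch of the proposed naming rule, not actual Nova code.
def split_instance_name(name, dhcp_domain="novalocal"):
    if "." in name:
        # Name is already an FQDN: pass it unchanged to the guest,
        # register only the leading label in OpenStack.
        hostname = name
        short_name = name.split(".", 1)[0]
    else:
        # Plain name: qualify it with dhcp_domain, as Nova does today.
        hostname = "%s.%s" % (name, dhcp_domain) if dhcp_domain else name
        short_name = name
    return hostname, short_name
```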
Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?
Hi Stephen, I think it's useful to have the hostname in Nova's metadata - it provides initial information for cloud-init to configure a newly created VM, so I would not drop this method. A bit confusing is the domain part of the hostname (novalocal), which derives from the now-deprecated OpenStack-wide parameter "dhcp_domain":

$ curl http://169.254.169.254/latest/meta-data/hostname
jex-n1.novalocal

cloud-init qualifies this as an FQDN and prepares the configuration accordingly. Not too critical, but if there were a way to use a user-defined domain part in the metadata, it would not break backward compatibility with cloud-init and would reduce the bustle of initial VM configuration :)

And another topic, in Neutron, regarding the domain name. Any DHCP server created by Neutron returns a "domain" derived from the system-wide "dns_domain" parameter (defined in neutron.conf and explicitly used in the "--domain" argument of dnsmasq). There is no way to customize this parameter on a per-network basis (the per-network "dns_domain" attribute takes effect only with Designate; there is no other way to use it). Again, it would be great to be able to set a per-network domain name in order to handle DHCP / DNS queries from connected VMs. Thank you.

On 9/8/17 12:12 PM, Stephen Finucane wrote: [Re-posting (in edited form) from openstack-dev] Nova has a feature whereby it will provide instance host names that cloud-init can extract and use inside the guest, i.e. this won't happen without cloud-init. These host names are fully qualified domain names (FQDNs) based upon the instance name and local domain name. However, as noted in bug #1698010 [1], the domain name part of this is based on nova's 'dhcp_domain' option, which is a nova-network option that has been deprecated [2]. My idea to fix this bug was to start consuming this information from neutron instead, via the . 
However, per the feedback in the (WIP) fix [3], this requires that the 'DNS Integration' extension works, and will introduce a regression for users currently relying on the 'dhcp_domain' option. This suggests it might not be the best approach to take but, alas, I don't have any cleverer ones yet. My initial question to openstack-dev was "are FQDNs a valid thing to use as a hostname in a guest" and it seems they definitely are, even if they're not consistently used [4][5]. However, based on other comments [6], it seems there are alternative approaches and even openstack-infra don't use this functionality (preferring instead to configure hostnames using their orchestration software, if that's what nodepool could be seen as?). As a result, I have a new question: "should nova be in the business of providing this information (via cloud-init and the metadata service) at all"? I don't actually have any clever ideas regarding how we can solve this. As such, I'm open to any and all input. Cheers, Stephen

[1] https://bugs.launchpad.net/nova/+bug/1698010
[2] https://review.openstack.org/#/c/395683/
[3] https://review.openstack.org/#/c/480616/
[4] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121948.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121794.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121877.html
[Openstack] Ocata -> Pike security groups changed default behaviour?
Hi colleagues, after upgrading from Ocata to Pike I noticed a change in security group behaviour. In Ocata, I was using a combination of the default security group plus a custom group (which matches ingress ethertype for both IPv4 and IPv6) on a port, and this allowed ingress traffic to the VM. In Pike this doesn't work anymore. I.e., with two security groups in the project:

$ openstack security group list
[ ... ]
| 53ede63e-b08f-4c95-b5fe-29cd21ed442a | default | Default security group | d8051a3ff3ad4c4bb380f828992b8178 |
| cd0bd222-78e1-42b2-b8a5-51d655c49a8f | jex-esg |                        | d8051a3ff3ad4c4bb380f828992b8178 |

using both on a port blocks any traffic from outside (e.g. ping):

$ openstack port show jex-n1-wan
[ ... ]
| fixed_ips          | ip_address='x.x.x.246', subnet_id='5cfcb94e-5865-4cbd-83e3-56e397a436ec'   |
| security_group_ids | 53ede63e-b08f-4c95-b5fe-29cd21ed442a, cd0bd222-78e1-42b2-b8a5-51d655c49a8f |

while keeping only the custom group allows traffic from outside:

$ openstack port show jex-n1-wan
| fixed_ips          | ip_address='x.x.x.246', subnet_id='5cfcb94e-5865-4cbd-83e3-56e397a436ec' |
| security_group_ids | cd0bd222-78e1-42b2-b8a5-51d655c49a8f |

I didn't find any notices about this in the Pike release notes. 
Can anybody point me to the place where I can find information on this and, possibly, other implicit changes? For additional information, the rules of jex-esg are these:

$ openstack security group show jex-esg
| Field           | Value                                                                                   |
| created_at      | 2017-09-21T13:25:53Z                                                                    |
| description     |                                                                                         |
| id              | cd0bd222-78e1-42b2-b8a5-51d655c49a8f                                                    |
| name            | jex-esg                                                                                 |
| project_id      | d8051a3ff3ad4c4bb380f828992b8178                                                        |
| revision_number | 4                                                                                       |
| rules           | created_at='2017-09-21T13:25:53Z', direction='ingress', ethertype='IPv4', id='1b979cd7- |
|                 | created_at='2017-09-21T13:25:53Z', direction='ingress', ethertype='IPv6', id='906ac4e2- |
|                 | created_at='2017-09-21T13:25:53Z', direction='egress', ethertype='IPv6', id='c8cc2114-  |
|                 | created_at='2017-09-21T13:25:53Z', direction='egress', ethertype='IPv4', id='ebb060f5-  |
| updated_at      | 2017-09-21T13:25:53Z                                                                    |

Thank you.
[Openstack] [heat] default collectors
Hi colleagues, when deploying VMs using Heat, os-collect-config is automatically configured with three collectors: heat, ec2 and local. I certainly don't need ec2, and there is a corresponding bug https://bugs.launchpad.net/tripleo/+bug/1669842 which suggests removing the unconditional addition of this collector to the list. What is the 'local' collector needed for? There are lots of error messages every 30 seconds in syslog:

Sep 13 11:39:12 vm os-collect-config[1570]: No local metadata found (['/var/lib/os-collect-config/local-data'])
Sep 13 11:39:42 vm os-collect-config[1570]: /var/lib/os-collect-config/local-data not found. Skipping

which impacts nothing, except that I understand the process tries to access this data every 30 seconds. According to the source code, it is also added unconditionally to the end of the list, like this:

collectors = ['ec2']
if heat then collectors.append('heat')
if zaqar then collectors.append('zaqar')
if cfn then collectors.append('cfn')
[ ... ]
collectors.append('local')

Is it safe to remove it from the code, as proposed in the bug above? Thank you.
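The change proposed in that bug amounts to making every collector conditional. A hedged sketch (names mirror the pseudocode above, not the real os-collect-config source):

```python
# Illustrative only: register collectors solely when their backing
# service is configured, instead of adding 'ec2' and 'local' always.
def build_collectors(heat=False, zaqar=False, cfn=False,
                     ec2=False, local=False):
    collectors = []
    for name, enabled in (("ec2", ec2), ("heat", heat),
                          ("zaqar", zaqar), ("cfn", cfn),
                          ("local", local)):
        if enabled:
            collectors.append(name)
    return collectors
```

With this shape, a Heat-only deployment would poll just the heat collector and the 30-second "No local metadata found" noise would disappear.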
Re: [Openstack] No ping to Openstack instance
Have you enabled a DHCP server on the subnet the VM belongs to? If yes, check the logs of the DHCP server (dnsmasq) on the node where it resides: whether it receives DHCP requests and whether it sends answers, or why it rejects requests. If the DHCP server doesn't receive requests, or if it sends answers that the VM doesn't receive, use tcpdump along the entire chain of ports between the VM and the DHCP server to find where the DHCP exchange is dropped.

On 9/7/17 11:42 AM, wahi wrote: When I accessed the cirros instance and checked the network file, it was in dhcp mode, but "ip a" doesn't show any assigned IP, only the local IP. Does this mean that the instance cannot get a DHCP IP?
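To make the tcpdump step concrete, filters along these lines can be run at both ends of the chain. The interface name and network ID are placeholders for your environment, and the commands are printed as a dry run rather than executed:

```shell
# Dry-run sketch: DHCP capture points between the VM and dnsmasq.
# tapXXXX and <network-id> are placeholders, not real identifiers.
VM_SIDE="tcpdump -ni tapXXXX -e port 67 or port 68"
DHCP_SIDE="ip netns exec qdhcp-<network-id> tcpdump -ni any port 67 or port 68"
printf '%s\n%s\n' "$VM_SIDE" "$DHCP_SIDE"
```

Neutron runs each dnsmasq inside a `qdhcp-<network-id>` namespace on the network node, which is why the second capture has to be wrapped in `ip netns exec`.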
[Openstack] instance snapshot
Hi, in my installation I'm going to boot instances from volumes, not ephemeral disks. And I faced unexpected (for me, at least) behaviour when trying to implement snapshotting: an image created with "openstack server image create" can't be used to boot from volume. I.e., I create an image using "openstack server image create --name jsnap jex-n1" and then:

- creating a server using an ephemeral disk:
  * openstack server create jTest [ ... ] --image jsnap
  is OK

- creating a server using a volume populated from the image:
  * openstack volume create jVol --size 8 --image jsnap --bootable
  * openstack server create jTest [ ... ] --volume jVol
  FAILS with the following error: "Invalid image metadata. Error: A list is required in field img_block_device_mapping, not a unicode (HTTP 400)".

- creating a server using a volume populated from the snapshot (which corresponds to the image):
  * openstack volume create jVol --size 8 --snapshot f0ad0bf0-97f4-49df-b334-71b9eb639567 --bootable
  * openstack server create jTest [ ... ] --volume jVol
  is OK

Assuming this is correct (oh, really? I can't find this topic in the documentation), I don't need the image, as it's a senseless entity (I can't use it for booting from volumes), and just need the snapshot. But I still like https://blueprints.launchpad.net/nova/+spec/quiesced-image-snapshots-with-qemu-guest-agent so there are two questions regarding this: 1) will the following sequence do exactly the same as "server image create" does? - "manually" freeze the VM filesystem (using the "guest-fsfreeze-freeze" command) - create a snapshot using "openstack volume snapshot create --volume --force " - "manually" unfreeze the VM filesystem (using the "guest-fsfreeze-thaw" command) 2) and, when using the Cinder API, is there a way to synchronously wait for the end of snapshot creation? It's useful in order to thaw the filesystem immediately after the snapshot is done, not a few seconds before or after. Thanks. -- Volodymyr Litovka "Vision without Execution is Hallucination." 
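On question 2: as far as I know the Cinder API itself is asynchronous, so the usual approach is to poll the snapshot status until it leaves 'creating', and only then thaw the filesystem. A minimal polling helper, sketched with the status source injected as a callable (in real code that would be a Cinder call such as cinderclient's `volume_snapshots.get(id).status`, an assumption here; the demo uses a fake sequence of statuses):

```python
import time

def wait_for_snapshot(get_status, timeout=60.0, interval=1.0):
    """Poll get_status() until the snapshot becomes 'available' or fails."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == 'available':
            return True
        if status == 'error':
            raise RuntimeError('snapshot creation failed')
        time.sleep(interval)
    raise TimeoutError('snapshot did not become available in time')

# Demo with a fake status source (real code would query the Cinder API):
statuses = iter(['creating', 'creating', 'available'])
print(wait_for_snapshot(lambda: next(statuses), timeout=5, interval=0.01))  # True
```

The freeze/thaw calls would bracket this wait: freeze, create the snapshot, wait, thaw.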
-- Thomas Edison
Re: [Openstack] transfer of IP address between ports
Hi Andrew, sorry for the delay in responding: it was Ukrainian Independence Day and we were on holidays, spending time with family and friends :) On 8/24/17 6:36 PM, 공용준 wrote: There is another scenario. It's going to be a public cloud and there can be a few reasons to allow a customer to move a public IP address between his VMs, e.g. he built another VM using another OS for the same role and needs to move this role from the old VM to the new VM without changing the rest of the infrastructure's configuration. Five or ten seconds of cooldown time isn't a problem itself, since it's not for high availability. Did you consider LBaaS for this purpose? I think the floating IP concept is good, but I think we need to rethink the implementation, and I think OpenStack's Octavia also does the job. Yes, I'm considering LBaaS, but as another service in my public cloud :) So I don't want to provide it as part of the base set of services. I will check what you did in order to solve this issue, but preliminarily I think that you're right and a floating IP is the best solution for this (since it doesn't require modifying OpenStack). The only concern I have regarding floating IPs is performance, since NAT is involved and this can lead to performance degradation. I think I will provide two kinds of IP addresses: fixed and transferable. And if somebody needs to preserve an IP address between two different instances, he will choose a transferable IP for an additional cost. This will prevent massive NAT on the one hand, and compensate for the additional resource consumption on the other. Thank you! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack] transfer of IP address between ports
Hi Clint, see inline, please. On 8/24/17 2:21 AM, Clint Byrum wrote: This is precisely the reason floating IPs that NAT to other IPs exist (not, as we think, to provide public IP access... we can do that with fixed IPs). Moving ports, moving the IP — they all involve a few layers of cache invalidation and complex manipulation at the lower networking layers. But changing a NAT destination is relatively instant. I'd recommend using a floating IP for this. If you can't, please explain. It's going to be a public cloud and there can be a few reasons to allow a customer to move a public IP address between his VMs, e.g. he built another VM using another OS for the same role and needs to move this role from the old VM to the new VM without changing the rest of the infrastructure's configuration. Thanks. Excerpts from Volodymyr Litovka's message of 2017-08-23 16:58:32 +0300: Hi colleagues, imagine somebody (e.g. me :-) ) needs to transfer an IP address between two ports. The straightforward way is: release the IP address and then assign it to another port. The possible problem with this approach is the time between release and assignment: during this time, the IP address is in the DHCP pool and can be automatically assigned to another port upon request. Any ideas how to prevent leasing this IP address during this time? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack] transfer of IP address between ports
Hi Andrew, thanks for the prompt reply. I'm using fixed IP addresses, not floating IPs. In terms of Heat it looks like this:

n1-wan:
  type: OS::Neutron::Port
  properties:
    name: n1-wan
    network: e-net
    fixed_ips: [ { subnet: e-subnet, ip_address: X.X.X.X } ]

n1:
  type: OS::Nova::Server
  properties:
    name: n1
    networks:
      - port: { get_resource: n1-wan }

and there are some constraints in my installation:
1. I can't move ports between VMs (in order to support predictable naming according to port roles, their MAC addresses are stored in udev rules inside the VM, and if I change a port, the rules/roles will break)
2. I don't want to use floating IPs due to possible performance degradation when using massive NAT

Another idea I have is to move ports between VMs, changing their MACs accordingly, and I will try it if no other way is found :) Thanks again. On 8/23/17 5:17 PM, 공용준 wrote: Hi, you can use a fixed IP port for this: create a neutron port and attach it to the VM. Or you can use a floating IP for this purpose as well. Regards, Andrew On 8/23/2017, 10:58 PM, Volodymyr Litovka <doka...@gmx.com> wrote: Hi colleagues, imagine somebody (e.g. me :-) ) needs to transfer an IP address between two ports. The straightforward way is: release the IP address and then assign it to another port. The possible problem with this approach is the time between release and assignment: during this time, the IP address is in the DHCP pool and can be automatically assigned to another port upon request. Any ideas how to prevent leasing this IP address during this time? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." 
-- Thomas Edison
[Openstack] transfer of IP address between ports
Hi colleagues, imagine somebody (e.g. me :-) ) needs to transfer an IP address between two ports. The straightforward way is: release the IP address and then assign it to another port. The possible problem with this approach is the time between release and assignment: during this time, the IP address is in the DHCP pool and can be automatically assigned to another port upon request. Any ideas how to prevent leasing this IP address during this time? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
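One way to shrink (not fully eliminate) the window is to request the address explicitly on the new port instead of relying on pool allocation: Neutron accepts an explicit ip-address at port creation even for addresses inside the allocation pool, as long as they are free. A sketch that just builds the two CLI calls, to be run back to back (port names, subnet and IP are placeholders; `<net>` is left for the operator to fill in):

```python
def transfer_ip_commands(old_port, new_port, subnet, ip):
    """Build the CLI sequence: delete the old port, then immediately
    recreate the address on a new port with an explicit fixed IP."""
    return [
        f'openstack port delete {old_port}',
        f'openstack port create --network <net> '
        f'--fixed-ip subnet={subnet},ip-address={ip} {new_port}',
    ]

for cmd in transfer_ip_commands('n1-wan', 'n2-wan', 'e-subnet', '192.0.2.10'):
    print(cmd)
```

The race window then lasts only between the two commands; whether that is acceptable depends on how aggressively other ports are being created on the same subnet.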
Re: [Openstack] Solaris on openstack
Hi Shyam, OpenStack controls infrastructure (hypervisors, networking, storage, etc.), not guest VMs. So it's rather a question to the hypervisor (e.g. KVM) whether it supports Solaris as a guest OS. On 8/14/17 11:22 AM, Shyam Biradar wrote: Hi, I am trying to install Solaris on OpenStack. I am facing many issues during the OS installation process. Does OpenStack support Solaris in any release/distribution? Thanks & Regards, Shyam Biradar, Email: shyambiradarsgg...@gmail.com, Contact: +91 8600266938. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)
On 8/8/17 7:18 PM, Kevin Benton wrote: The best way to completely ensure the instance isn't trying to use it is to detach it from the instance using the 'nova interface-detach' command. Sure, but this introduces additional complexity in complex environments where predictable interface naming according to roles is required (e.g. eth0 is always the WAN connection, eth1 is always LAN1, eth2 is always control/mgmt, etc.). Attaching/detaching interfaces changes this and requires managing udev rules, which adds issues when creating a new VM from a snapshot, ... :-) Not too critical, everything can be handled using more or less complex workarounds, but, since libvirt supports setting the interface link state (using 'virsh domif-setlink domain interface-device state'), why not use this call to reflect the interface state according to OpenStack's settings? Thanks. On Tue, Aug 8, 2017 at 7:49 AM, Volodymyr Litovka <doka...@gmx.com> wrote: Hi Kevin, see below. On 8/8/17 1:06 AM, Kevin Benton wrote: What backend are you using? That bug is about the port showing ACTIVE when admin_state_up=False but it's still being disconnected from the dataplane. If you are seeing dataplane traffic with admin_state_up=False, then that is a separate bug. I'm using OVS. Also, keep in mind that marking the port down will still not be reflected inside of the VM via ifconfig or ethtool. It will always show active in there. So even after we fix bug 1672629, you are going to see the port as connected inside of the VM. Is there a way to disconnect a port, thus putting it into the DOWN state on the VM, using the OpenStack API? This is important for public clouds, where it can be necessary to shut down a port of an unmanaged (customer's) VM. The only idea I have is to set admin_state_up to False and, actually, it's the only command which controls port state. As I mentioned earlier, it seems it was working in Kilo ("I have checked the behavior of admin_state_up of Kilo version, when port admin-state-up is set to False, the port status will be DOWN.") but Ocata shows different behaviour, ignoring this parameter. So, any ideas on how to shut down a port on a VM using the OpenStack API? Thank you! On Mon, Aug 7, 2017 at 5:21 AM, Volodymyr Litovka <doka...@gmx.com> wrote: Hi colleagues, am I the only one who cares about this case? - https://bugs.launchpad.net/neutron/+bug/1672629 The problem is that when I set a port's admin_state_up to False, it is still UP on the VM, thus continuing to route statically configured networks (e.g. received from DHCP host_routes), sending DHCP requests, etc. As people discovered, in Kilo everything was OK - "I have checked the behavior of admin_state_up of Kilo version, when port admin-state-up is set to False, the port status will be DOWN." - but at least in Ocata it is broken. Is anybody facing this problem too? Any ideas on how to work around it? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
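The hypervisor-side workaround mentioned above ('virsh domif-setlink') can be wrapped trivially. A sketch with a dry-run mode that returns the argv instead of executing, so the call can be inspected without libvirt present (domain and tap-device names below are made-up examples):

```python
import subprocess

def set_link_state(domain, interface, state, dry_run=False):
    """Set a guest interface's link state via libvirt's domif-setlink."""
    assert state in ('up', 'down')
    argv = ['virsh', 'domif-setlink', domain, interface, state]
    if dry_run:
        return argv
    return subprocess.run(argv, check=True)

print(set_link_state('instance-0000002a', 'tap1234abcd-ef', 'down', dry_run=True))
```

Note this acts behind Neutron's back: Neutron would not know the link was forced down, which is exactly why the thread asks for an API-level equivalent.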
Re: [Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)
Hi Kevin, see below. On 8/8/17 1:06 AM, Kevin Benton wrote: What backend are you using? That bug is about the port showing ACTIVE when admin_state_up=False but it's still being disconnected from the dataplane. If you are seeing dataplane traffic with admin_state_up=False, then that is a separate bug. I'm using OVS. Also, keep in mind that marking the port down will still not be reflected inside of the VM via ifconfig or ethtool. It will always show active in there. So even after we fix bug 1672629, you are going to see the port as connected inside of the VM. Is there a way to disconnect a port, thus putting it into the DOWN state on the VM, using the OpenStack API? This is important for public clouds, where it can be necessary to shut down a port of an unmanaged (customer's) VM. The only idea I have is to set admin_state_up to False and, actually, it's the only command which controls port state. As I mentioned earlier, it seems it was working in Kilo ("I have checked the behavior of admin_state_up of Kilo version, when port admin-state-up is set to False, the port status will be DOWN.") but Ocata shows different behaviour, ignoring this parameter. So, any ideas on how to shut down a port on a VM using the OpenStack API? Thank you! On Mon, Aug 7, 2017 at 5:21 AM, Volodymyr Litovka <doka...@gmx.com> wrote: Hi colleagues, am I the only one who cares about this case? - https://bugs.launchpad.net/neutron/+bug/1672629 The problem is that when I set a port's admin_state_up to False, it is still UP on the VM, thus continuing to route statically configured networks (e.g. received from DHCP host_routes), sending DHCP requests, etc. As people discovered, in Kilo everything was OK - "I have checked the behavior of admin_state_up of Kilo version, when port admin-state-up is set to False, the port status will be DOWN." - but at least in Ocata it is broken. Is anybody facing this problem too? Any ideas on how to work around it? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack-operators] [nova]
If you don't recreate the Neutron ports (just destroying the VM, creating it anew and attaching the old ports), then you can distinguish between interfaces by MAC address and store this in udev rules. You can do this on first boot (e.g. in cloud-init's "startcmd" command), using information from the /sys/class/net directory. On 7/31/17 9:14 PM, Morgenstern, Chad wrote: Hi, I am trying to programmatically rebuild a nova instance that has a persistent volume for its root device. I am specifically trying to rebuild an instance that has multiple network interfaces and a floating IP. I have observed that the order in which the network interfaces are attached matters; the floating IP attaches to eth0. How do I figure out which of the currently attached interfaces is associated with eth0? -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
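The udev-rule approach above can be sketched as a small generator: pin interface names to MAC addresses gathered from /sys/class/net on first boot. The rule syntax is standard udev; the MAC-to-name mapping below is example data (on a real VM it would be read from each /sys/class/net/<dev>/address):

```python
def udev_rules(name_by_mac):
    """Render persistent-naming udev rules from a {mac: name} mapping."""
    line = ('SUBSYSTEM=="net", ACTION=="add", '
            'ATTR{{address}}=="{mac}", NAME="{name}"')
    return '\n'.join(line.format(mac=mac, name=name)
                     for mac, name in sorted(name_by_mac.items()))

# Example mapping; real code would walk /sys/class/net and read 'address'.
rules = udev_rules({'fa:16:3e:aa:bb:cc': 'eth0', 'fa:16:3e:dd:ee:ff': 'eth1'})
print(rules)
```

The output would typically be written to something like /etc/udev/rules.d/70-persistent-net.rules (path is the conventional one, adapt to your distribution).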
[Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)
Hi colleagues, am I the only one who cares about this case? - https://bugs.launchpad.net/neutron/+bug/1672629 The problem is that when I set a port's admin_state_up to False, it is still UP on the VM, thus continuing to route statically configured networks (e.g. received from DHCP host_routes), sending DHCP requests, etc. As people discovered, in Kilo everything was OK - "I have checked the behavior of admin_state_up of Kilo version, when port admin-state-up is set to False, the port status will be DOWN." - but at least in Ocata it is broken. Is anybody facing this problem too? Any ideas on how to work around it? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack-operators] Ocata diskimage-builder heat issues
Hi Amit, please check whether this can be the issue - https://bugs.launchpad.net/openstack-manuals/+bug/1661759 - you should use the 'v2.0' path in both the ec2authtoken and keystone_authtoken sections of heat.conf. On 6/30/17 11:21 AM, Amit Kumar wrote: Hello, yes, my instance had the os-collect-config service running. Unfortunately, I don't have the same setup anymore to see if /var/lib/heat-config/ contains deployment scripts or not, but I remember I checked that /var/lib/cloud/instance/scripts/ didn't have anything. Regards, Amit On Thu, Jun 29, 2017 at 7:54 PM, Ignazio Cassano <ignaziocass...@gmail.com> wrote: Hello Amit, tomorrow I'll try with trusty. Centos7 is working. Some questions: does your instance created by Heat with SoftwareDeployment have os-collect-config running? If yes, launching os-refresh-config and going under /var/lib/heat-config/ on your instance, you should see deployment scripts. Regards, Ignazio 2017-06-29 13:47 GMT+02:00 Amit Kumar <ebiib...@gmail.com>: Hi, I tried to create a Ubuntu Trusty image using diskimage-builder tag 1.28.0; dib-run-parts got included in the VM, so os-refresh-config should have worked, but SoftwareDeployment still didn't work with the cloud image. Regards, Amit On Jun 29, 2017 5:08 PM, "Matteo Panella" <matteo.pane...@cnaf.infn.it> wrote: On 29/06/2017 12:11, Ignazio Cassano wrote: > Hello all, > the new version of diskimage-builder (I am testing for centos 7) does > not install dib-utils and jq in the image. > The above are required by os-refresh-config. Yup, I reverted diskimage-builder to 2.2.0 (the last tag before dib-run-parts stopped being injected) and os-refresh-config works correctly. os-refresh-config should probably be modified to depend on dib-run-parts, however: a) dib-run-parts provides package-style installation with an RPM-specific package name b) the package does not exist for Ubuntu 14.04 and there are no source-style installation scripts Regards, -- Matteo Panella INFN CNAF Via Ranzani 13/2 c - 40127 Bologna, Italy Phone: +39 051 609 2903 -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
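For reference, the 'v2.0' advice at the top of this message would correspond to heat.conf entries along these lines (the controller hostname and port are examples used elsewhere in this archive; verify the exact option names against your Heat release before applying):

```ini
[ec2authtoken]
auth_uri = http://controller:5000/v2.0

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
```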
Re: [Openstack-operators] Ocata diskimage-builder heat issues
Hi Ignazio, recently I opened a bug - https://bugs.launchpad.net/heat/+bug/1691964 - with instructions on how to build images with Software Deployment using diskimage-builder. Hope this helps. On 6/28/17 7:52 PM, Ignazio Cassano wrote: Hi All, last March I used the diskimage-builder tool to generate a centos7 image with tools for Heat software deployment. That image worked fine with Heat software deployment in Newton and works today with Ocata. After upgrading diskimage-builder and creating the image again, Heat software deployment does not work because of some errors in os-refresh-config. I do not remember what version of diskimage-builder I used in March. At this time I am trying 2.6.1 and 2.6.2, with the same issues. Does anyone know what has changed? Regards, Ignazio -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack] [heat] Heat::SoftwareDeployment not working
Please check whether this can be the issue - https://bugs.launchpad.net/openstack-manuals/+bug/1661759 - you should use the 'v2.0' path in both the ec2authtoken and keystone_authtoken sections of heat.conf. On 6/23/17 3:47 PM, Amit Kumar wrote: Hi All, I have installed OpenStack Mitaka using OpenStack-Ansible 13.3.13. I am trying to use the Heat::SoftwareDeployment resource similar to what is described in https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/example-script-template.yaml but it is not working as expected. The SoftwareDeployment resource is always in the in-progress state once the heat stack is created from the command line. Here is /var/log/cloud-init-output.log: http://paste.openstack.org/show/613502/ /var/log/os-collect-config.log shows these logs: http://paste.openstack.org/show/613503/. Can they cause any harm? /var/run/heat-config/heat-config is showing the script and the input parameters which I want to run on the VM. Here are the logs: http://paste.openstack.org/show/613504/ but in spite of the script and its input being here, the /var/lib/cloud/instances/i-003a/scripts/userdata/ file is empty. Here is /var/lib/cloud/instances/i-003a/user-data.txt: http://paste.openstack.org/show/613505/ With the help of the above logs, please see if you can point out if I am missing anything here. Thanks. Regards, Amit -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
Re: [Openstack] host routes on provider subnet not working.
.noarch openstack-neutron-common-9.2.0-1.el7.noarch openstack-neutron-linuxbridge-9.2.0-1.el7.noarch python-openstackclient-3.2.1-1.el7.noarch openstack-neutron-9.2.0-1.el7.noarch -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[Openstack] restrict access for users between domains
Hi friends, is there a way to define a domain's admin and restrict this person to access only his domain? At the moment (Ocata release), if I:
- create a domain by 'openstack domain create devtest'
- create a user in the domain by 'openstack user create udevtest --domain devtest --password xx'
- create a project in the domain by 'openstack project create devmin --domain devtest'
- assign the role 'admin' to the user on both the domain and the project:
  * 'openstack role add admin --user udevtest --domain devtest'
  * 'openstack role add admin --project-domain devtest --project devmin --user udevtest'
then, using user 'udevtest' credentials:

OS_REGION_NAME=RegionOne
OS_DEFAULT_DOMAIN=devtest
OS_USER_DOMAIN_NAME=devtest
OS_PROJECT_DOMAIN_NAME=devtest
OS_PROJECT_NAME=devmin
OS_USERNAME=udevtest
OS_PASSWORD=x
OS_AUTH_STRATEGY=keystone
OS_IDENTITY_API_VERSION=3
OS_AUTH_URL=http://controller:5000/v3
OS_INTERFACE=internal

I'm able to get a list of all users and projects in the 'default' domain and, even more, add/delete users and projects in the 'default' domain. In fact, user 'udevtest' has nothing to do with domain 'default', but is assigned the global role 'admin' - probably that is the problem, because policy.json's rule 'admin_required' is just a check for 'role:admin', which is true. On the other hand, if I create a role 'admin' specific to domain 'devtest' and assign it to the user on both the domain and a project in the domain, then I get the error "User f1c1cd3438c24255a2baa85f326dfc40 (which is udevtest) has no access to project 1dbbaf2fb0bc4d5da270e48d4a92bc62 (which is devmin)", so it seems local roles don't matter. Is changing policy.json (as some pages on the Internet suggest) the only way (I hope it's a legacy way :-) ), or am I doing something wrong? Thank you! -- Volodymyr Litovka "Vision without Execution is Hallucination." 
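One direction worth checking (my assumption, not from this thread): Keystone ships a sample v3 policy, policy.v3cloudsample.json, which distinguishes a cloud-wide admin from a domain-scoped admin by binding the admin role to a domain. Paraphrased, the relevant rules look roughly like this; compare against the sample file shipped with your release before adapting it:

```json
{
    "cloud_admin": "rule:admin_required and domain_id:admin_domain_id",
    "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
    "identity:list_users": "rule:cloud_admin or rule:admin_and_matching_domain_id"
}
```

With rules of this shape, 'admin' on domain devtest lets udevtest list users only where the target domain_id matches his token's scope, while the plain 'admin_required' rule in the default policy.json matches any admin anywhere, which is exactly the behaviour observed above.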
-- Thomas Edison
[Openstack-operators] control guest VMs in isolated network
Hi colleagues, are there ways to control guest VMs which reside in an isolated network? In general, two methods are available:
1. use Heat's SoftwareDeployment method
2. use the QEMU Guest Agent
The first method requires the accessibility of Keystone/Heat (os-collect-config authorizes on Keystone, receives the endpoint list and uses Heat's public endpoint to deploy changes), but, since the network is isolated, these addresses are inaccessible. It could work if Neutron could proxy these requests as it does for the metadata server, but I didn't find this kind of functionality either in Neutron's documentation or in other sources. And I don't want to attach another NIC to the VM for access to Keystone/Heat, since it violates the customer's rules (this is, by design, an isolated network with just a VPN connection to the premises). So the first question is: can Neutron proxy requests to Keystone/Heat like it does for metadata?
The second method (using the QEMU guest agent) gives some control of the VM but, again, I wasn't able to find how this can be achieved using Nova. There are some mentions of this functionality but no details and examples. So, the second question: does Nova support the QEMU guest agent and allow using the available calls of the QGA protocol, including 'guest-exec'?
And, maybe, there are other methods, or ways to use the methods mentioned above, to bypass isolation while keeping it? Thank you! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison
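On the second question, a partial data point (my understanding, not confirmed in this thread): Nova wires up the guest-agent channel when the image carries the property hw_qemu_guest_agent=yes, and uses the agent itself for quiesced snapshots, but it does not expose guest-exec through its own API. At the hypervisor level the call can be issued with 'virsh qemu-agent-command <domain> <json>', where the JSON payload is a standard QGA message like this (the path is an example):

```json
{"execute": "guest-exec",
 "arguments": {"path": "/usr/bin/uptime", "capture-output": true}}
```

The command returns a pid, which is then polled with a 'guest-exec-status' call to collect the exit code and the captured output. Note this path goes around Nova entirely, so it is an operator-level tool rather than a cloud API.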