Re: [openstack-dev] [openstack-ansible] mount ceph block from an instance

2017-05-29 Thread fabrice grelaud
Thanks for the answer.

My use case is a file-hosting application like "Seafile", which can use a Ceph 
backend (Swift too, but we don't deploy Swift on our infra).

Our infra's network configuration is identical to the one in your OSA 
documentation. So, on our compute nodes we have two bonded interfaces (bond0 and bond1).
The Ceph VLAN is currently propagated on bond0 (where br-storage is attached) to 
provide the Ceph backend for our OpenStack.
And on bond1, among others, we have br-vlan for our provider VLANs.

If I understood correctly, the solution is to also propagate the Ceph VLAN on 
bond1 on our switches, and to create the provider network with Neutron so that it 
is reachable in the tenant by our file-hosting software.

For security, could using the Neutron RBAC tool to share this provider network 
only with the tenant in question be sufficient? Something like the sketch below 
is what I have in mind.
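
A sketch only: the network name, the VLAN ID 30 and the "vlan" physical network 
label are placeholders to be checked against our ml2 configuration:

neutron net-create ceph-net --provider:network_type vlan \
  --provider:physical_network vlan --provider:segmentation_id 30
neutron subnet-create ceph-net <ceph-cidr> --name ceph-subnet
neutron rbac-create --target-tenant <seafile-tenant-id> \
  --action access_as_shared --type network <ceph-net-id>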

I’m all ears ;-) if you have another alternative.

Regards,
Fabrice


> On 25 May 2017 at 14:01, Jean-Philippe Evrard 
> <jean-philippe.evr...@rackspace.co.uk> wrote:
> 
> I doubt many people have tried this, because 1) cinder/nova/glance probably 
> do the job well in a multi-tenant fashion, and 2) you're poking holes in your 
> ceph cluster security.
> 
> Anyway, if you still want it, you would (I guess) need to create a 
> provider network that is allowed to access your ceph network.
> 
> You can either route it from your current public network, or create another 
> network. It's 100% up to you, and not OSA specific.
> 
> Best regards,
> JP
> 
> On 24/05/2017, 15:02, "fabrice grelaud" <fabrice.grel...@u-bordeaux.fr> wrote:
> 
>Hi osa team,
> 
> I have a multi-node openstack-ansible deployment, Ocata 15.1.3, with Ceph as 
> a backend for cinder (with our own Ceph infra).
> 
> After creating an instance with a root volume, I would like to mount a Ceph 
> block device or CephFS directly in the VM (not a Cinder volume). So I want to 
> attach a new interface to the VM that sits in the Ceph VLAN.
> How can I do that?
> 
> We have our Ceph VLAN propagated on the bond0 interface (bond0.xxx and 
> br-storage configured as documented) for the OpenStack infrastructure.
> 
> Should I propagate this VLAN on the bond1 interface where my br-vlan is 
> attached?
> Or should I use the existing br-storage where the Ceph VLAN is already 
> propagated (bond0.xxx)? And how do I create the Ceph VLAN network in Neutron 
> (through Neutron directly or through Horizon)?
> 
> Has anyone ever experienced this?
> 


[openstack-dev] [openstack-ansible] mount ceph block from an instance

2017-05-24 Thread fabrice grelaud
Hi osa team,

I have a multi-node openstack-ansible deployment, Ocata 15.1.3, with Ceph as 
a backend for cinder (with our own Ceph infra).

After creating an instance with a root volume, I would like to mount a Ceph 
block device or CephFS directly in the VM (not a Cinder volume). So I want to 
attach a new interface to the VM that sits in the Ceph VLAN.
How can I do that?

We have our Ceph VLAN propagated on the bond0 interface (bond0.xxx and 
br-storage configured as documented) for the OpenStack infrastructure.

Should I propagate this VLAN on the bond1 interface where my br-vlan is 
attached?
Or should I use the existing br-storage where the Ceph VLAN is already 
propagated (bond0.xxx)? And how do I create the Ceph VLAN network in Neutron 
(through Neutron directly or through Horizon)?

Has anyone ever experienced this?
 


Re: [openstack-dev] [openstack-ansible] Live migration issue

2017-01-25 Thread fabrice grelaud
Thanks for the reply.

But a priori the log says "this error can be safely ignored". And anyway, that 
log entry comes from the live migration that succeeded (compute 2 to compute 1).

The ERROR that puzzles me (live migration from compute 1 to 2) is, on compute 1:
2017-01-25 11:03:58.475 113231 ERROR nova.virt.libvirt.driver 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Live Migration failure: Requested 
operation is not valid: domain 'instance-0187' is already active
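
In the meantime, a check I plan to run on compute 2 (the destination), assuming 
the usual libvirt tooling on the compute hosts:

# does libvirt on the destination still know about the domain,
# e.g. left over from a previous migration attempt?
virsh list --all | grep instance-0187
# a stale, shut-off definition could then be removed (disks are untouched):
# virsh undefine instance-0187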



> On 25 January 2017 at 13:24, Lenny Verkhovsky <len...@mellanox.com> wrote:
> 
> Hi,
> 
> What domain name are you using?
> Check for 'Traceback' and ' ERROR ' in the logs; maybe you will get a hint.
> 
> 
> 2017-01-25 11:00:21.215 28309 INFO nova.compute.manager 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] You may see the error "libvirt: QEMU 
> error: Domain not found: no domain with matching name." This error can be 
> safely ignored.
> 
> 
> -----Original Message-----
> From: fabrice grelaud [mailto:fabrice.grel...@u-bordeaux.fr] 
> Sent: Wednesday, January 25, 2017 1:06 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [openstack-ansible] Live migration issue
> 
> Hi osa team,
> 
> I've got a live migration issue in one direction but not the other.
> I deployed OpenStack with OSA, Ubuntu Trusty, stable/newton branch, 14.0.5 tag.
> 
> My 2 compute nodes are the same host type and run the nova-compute and 
> cinder-volume services (our Ceph cluster as backend).
> 
> No problem live-migrating an instance from compute 2 to compute 1, whereas 
> the reverse fails.
> See the logs below:
> 
> Live migration instance Compute 2 to 1: OK
> 
> Compute 2 log
> 2017-01-25 11:00:15.621 28309 INFO nova.virt.libvirt.migration 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Increasing downtime to 46 ms after 0 
> sec elapsed time
> 2017-01-25 11:00:15.787 28309 INFO nova.virt.libvirt.driver 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration running for 0 secs, memory 
> 100% remaining; (bytes processed=0, remaining=0, total=0)
> 2017-01-25 11:00:17.737 28309 INFO nova.compute.manager [-] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] VM Paused (Lifecycle Event)
> 2017-01-25 11:00:17.794 28309 INFO nova.virt.libvirt.driver 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Migration operation has completed
> 2017-01-25 11:00:17.795 28309 INFO nova.compute.manager 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] _post_live_migration() is started..
> 2017-01-25 11:00:17.815 28309 INFO oslo.privsep.daemon 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] Running privsep helper: ['sudo', 
> 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', 
> '--config-file', '/etc/nova/nova.conf', '--privsep_context', 
> 'os_brick.privileged.default', '--privsep_sock_path', 
> '/tmp/tmpfL96lI/privsep.sock']
> 2017-01-25 11:00:18.387 28309 INFO oslo.privsep.daemon 
> [req-6f21e4a4-28a8-48e3-bf2f-2e1ad3b52470 0329776bd1634978a7fed35a70c77479 
> 7531f209e3514e3f98eb58aafa480285 - - -] Spawned new privsep daemon via 
> rootwrap
> 2017-01-25 11:00:18.395 28309 INFO oslo.privsep.daemon [-] privsep daemon 
> starting
> 2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
> running with uid/gid: 0/0
> 2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process 
> running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
> 2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep daemon 
> running as pid 28815
> 2017-01-25 11:00:18.397 28309 INFO nova.compute.manager 
> [req-aa0997d7-bf5f-480f-abc5-beadd2d03409 - - - - -] [instance: 
> c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] During sync_power_state the instance 
> has a pending task (migrating). Skip.
> 2017-01-25 11:00:18.538 28309 INFO nova.compute.manager 
> [req-115a99b8-48ef-43d5-908b-5ff7aadc3df4 - - - - -] R

[openstack-dev] [openstack-ansible] Live migration issue

2017-01-25 Thread fabrice grelaud
 "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", "osxsave", 
"cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm", "popcnt", "mca", 
"pdpe1gb", "apic", "sse", "f16c", "pse", "ds", "invtsc", "pni", "rdtscp", 
"avx2", "aes", "sse2", "ss", "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", 
"pse36", "mtrr", "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 
10, "cells": 2, "threads": 2, "sockets": 1}}
2017-01-25 11:03:56.849 28309 INFO os_vif 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Successfully plugged vif 
VIFBridge(active=True,address=fa:16:3e:d2:7c:83,bridge_name='brqc434ace8-45',has_traffic_filtering=True,id=dff20b91-a654-437d-8a74-dc55aeac8ab7,network=Network(c434ace8-45f6-4bb1-95bc-d52dadb557c7),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tapdff20b91-a6')
2017-01-25 11:03:58.981 28309 INFO nova.compute.manager 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Detach volume 
f40efa24-4992-49ac-8d75-4ace88d9ecf7 from mountpoint /dev/vda
2017-01-25 11:03:58.984 28309 WARNING nova.compute.manager 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Detaching volume from unknown instance
2017-01-25 11:03:58.986 28309 WARNING nova.virt.libvirt.driver 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] During detach_volume, instance 
disappeared.
2017-01-25 11:04:00.033 28309 INFO nova.virt.libvirt.driver [-] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] During wait destroy, instance disappeared.
2017-01-25 11:04:00.034 28309 INFO os_vif 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] Successfully unplugged vif 
VIFBridge(active=True,address=fa:16:3e:d2:7c:83,bridge_name='brqc434ace8-45',has_traffic_filtering=True,id=dff20b91-a654-437d-8a74-dc55aeac8ab7,network=Network(c434ace8-45f6-4bb1-95bc-d52dadb557c7),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tapdff20b91-a6')
2017-01-25 11:04:00.049 28309 INFO nova.virt.libvirt.driver 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Deleting instance files 
/var/lib/nova/instances/c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3_del
2017-01-25 11:04:00.050 28309 INFO nova.virt.libvirt.driver 
[req-7bd352bf-8818-4f71-9fa0-04fabccebf9c 0329776bd1634978a7fed35a70c77479 
7531f209e3514e3f98eb58aafa480285 - - -] [instance: 
c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3] Deletion of 
/var/lib/nova/instances/c7a5e5d1-bc22-4143-a85a-ee5c3b6777b3_del complete 

I need some help or some hints to resolve/debug this issue. It's giving me a 
headache ;-) because the compute node hardware is identical and the nova/libvirt 
config was deployed with OSA.

Regards,

Fabrice Grelaud
Université de Bordeaux


Re: [openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-16 Thread Fabrice Grelaud
On 16/09/2016 12:18, Jesse Pretorius wrote:
>> I found via Google a bug about using open-iscsi inside an LXC container
>> (https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), a bug
>> commented on by Kevin Carter (openstack-ansible core team) as a "blocking
>> issue" (in May 2015).
>> Is that bug still relevant?
> Yes, unfortunately it is still relevant. We implemented clarification 
> patches in Newton:
> https://github.com/openstack/openstack-ansible/commit/a06d93daa9c0228abd46b1af462fb00651942b7e
> https://github.com/openstack/openstack-ansible-os_cinder/commit/d8daff7691de60ffc6bcc4faa851d9a90712d556
>
> So the documentation now makes it clearer in the note at the top of the 
> page:
> http://docs.openstack.org/developer/openstack-ansible-os_cinder/configure-cinder.html
OK. It might be great to backport this to the Mitaka documentation
(http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-cinder.html).
>> Do I rather need to deploy my cinder-volume on a compute host (metal) to
>> solve my problem?
> Yes, that is a known good configuration that is very stable.
Indeed. I redeployed cinder-volume on my compute hosts and everything is
functional.
Thanks again.
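
For anyone finding this thread later: the fix was simply restoring the stock 
container_skel entry in env.d/cinder.yml (a sketch; check the cinder.yml shipped 
with your release for the exact skeleton):

container_skel:
  cinder_volumes_container:
    properties:
      # run cinder-volume on the host itself rather than in an LXC
      # container, so that iscsid works
      is_metal: true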


-- 
Fabrice Grelaud
Secteur Infrastructure et Production
DI - Univ. Bordeaux 1
05 40 00 - 65 92
message...@u-bordeaux1.fr




[openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-14 Thread Fabrice Grelaud
Hi,

I need recommendations for setting up block storage with the Dell Storage 
Center iSCSI driver.

As seen in the docs
(http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-cinder.html),
iSCSI block storage does not need a separate host.
So I modified env.d/cinder.yml to remove "is_metal: true", and configured 
openstack_user_config.yml as follows
(http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/dell-storagecenter-driver.html):

storage_hosts:
  p-osinfra01:
    ip: 172.29.236.11
    container_vars:
      cinder_storage_availability_zone: Dell_SC
      cinder_default_availability_zone: Dell_SC
      cinder_default_volume_type: delliscsi
      cinder_backends:
        limit_container_types: cinder_volume
        delliscsi:
          volume_driver: cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
          volume_backend_name: dell_iscsi
          san_ip: 172.x.y.z
          san_login: admin
          san_password: 
          iscsi_ip_address: 10.a.b.c
          dell_sc_ssn: 46247
          dell_sc_api_port: 3033
          dell_sc_server_folder: Openstack
          dell_sc_volume_folder: Openstack
          iscsi_port: 3260

Same for p-osinfra02 and p-osinfra03.

I ran the os-cinder-install.yml playbook and I have 3 cinder-volume containers, 
one on each of my infra hosts.
Everything is OK.

In Horizon, I can create a volume (visible on the Storage Center) and can 
attach this volume to an instance. Perfect!

But now, if I launch an instance with "Boot from image (create a new 
volume)", I get an error from Nova: "Block Device Mapping is Invalid".
I checked my cinder-volume.log and I see:
ERROR cinder.volume.flows.manager.create_volume
FailedISCSITargetPortalLogin: Could not login to any iSCSI portal
ERROR cinder.volume.manager ImageCopyFailure: Failed to copy image to
volume: Could not login to any iSCSI portal.

I tested the iSCSI connection from one container:
root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m
discovery -t sendtargets -p 10.a.b.c
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a724
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a728
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a723
10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a727

But when logging in, I get:
root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m node
-T iqn.2002-03.com.compellent:5000d31000b4a724 --login
Logging in to [iface: default, target:
iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260]
(multiple)
iscsiadm: got read error (0/0), daemon died?
iscsiadm: Could not login to [iface: default, target:
iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260].
iscsiadm: initiator reported error (18 - could not communicate to iscsid)
iscsiadm: Could not log into all portals

I found via Google a bug about using open-iscsi inside an LXC container
(https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), a bug
commented on by Kevin Carter (openstack-ansible core team) as a "blocking
issue" (in May 2015).

Is that bug still relevant?
Do I rather need to deploy my cinder-volume on a compute host (metal) to
solve my problem?
Or do you have other suggestions?

Thanks.
Regards,

-- 
Fabrice Grelaud
Université de Bordeaux




Re: [openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-19 Thread fabrice grelaud

> On 7 July 2016 at 15:02, Jesse Pretorius <jesse.pretor...@rackspace.co.uk> 
> wrote:
> 
> On 7/6/16, 3:24 PM, "fabrice grelaud" <fabrice.grel...@u-bordeaux.fr> wrote:
> 
>> 
>> I would like to know the best approach to customizing our 
>> openstack-ansible deployment if we want to use our existing ELK solution 
>> and centralized rsyslog server.
>> 
>> We deployed openstack-ansible (Mitaka 13.1.2 release) on our infrastructure 
>> with, for convenience and to avoid risk, a VM in the syslog server role. 
>> That is OK. But what if I now want to use our centralized syslog server?
>> 
>> What I need is to point the rsyslog clients (containers + metal) at the IP 
>> address of our existing server and, of course, configure our rsyslog.conf 
>> to handle the OpenStack templates.
>> So:
>> - no need to create an LXC container on the log server (setup-hosts.yml: 
>> lxc-hosts-setup, lxc-containers-create)
>> - no need to install the syslog server (setup-infrastructure.yml: 
>> rsyslog-install.yml)
> 
> To add more rsyslog targets for logs you can see in 
> https://github.com/openstack/openstack-ansible-rsyslog_client/blob/stable/mitaka/defaults/main.yml#L56-L73
>  that there is an example of the changes you need to make to 
> /etc/openstack_deploy/user_variables.yml to include additional targets.

Really great… ;-)
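
For the archives, this is the kind of override it allows in 
/etc/openstack_deploy/user_variables.yml (the target name, host and port below 
are made up for our setup; double-check the variable and its keys against the 
role defaults linked above):

rsyslog_client_user_defined_targets:
  - name: "central-syslog"
    proto: "udp"
    port: "514"
    host: "10.0.0.10"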

> 
> You may be able to do away with the log host altogether as you desire by 
> simply leaving the ‘log_hosts’ group out of the 
> /etc/openstack_deploy/openstack_user_config.yml and 
> /etc/openstack_deploy/conf.d/*.yml files. This is an untested code path so 
> you may find that we make assumptions about the presence of the log_host so 
> please register bugs for any issues you find so that we can eradicate those 
> assumptions. To my mind the log host is not required for a deployment if the 
> deployer so chooses (and especially if the deployer has alternative syslog 
> targets in-place).

No issues found.
I ran the setup-hosts.yml playbook; everything went well.
I then ran the rsyslog-install.yml playbook and got "skipping: no hosts 
matched", as expected.

Your assumptions look good.

And an "openstack-ansible setup-everything.yml" run with the "rsyslog_client" 
tag did the rest...
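
That is, something like:

cd /opt/openstack-ansible/playbooks
openstack-ansible setup-everything.yml --tags rsyslog_client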

> 
>> How can I modify my openstack-ansible environment (/etc/openstack_deploy, 
>> env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbooks?) 
>> in the most transparent manner, one that keeps minor release updates simple?
> 
> As long as you’re only editing things in user space (i.e. In 
> /etc/openstack_deploy/) and not in-tree (i.e. In /opt/openstack-ansible/) 
> then the minor upgrade process is documented here: 
> http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/app-minorupgrade.html
> 
> I hope that this answers your questions!
> 

Perfect! Thanks a lot…





Re: [openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-06 Thread fabrice grelaud
Hi Michael,

> On 6 July 2016 at 17:07, Michael Gugino <michael.gug...@walmart.com> wrote:
> 
> Hello Fabrice,
> 
>  I think the easiest way would be to set the affinity for each
> controller/infrastructure host using 'rsyslog_container: 0' as seen in the
> following section of the install guide:
> http://docs.openstack.org/developer/openstack-ansible/install-guide-revised-draft/configure-initial.html#affinity
> 
OK, I'll look at this.

Currently (first deployment, with a dedicated log server), I have:
./scripts/inventory-manage.py -G
rsyslog_container   |   log1_rsyslog_container-81441bbb

So, from what you write, to avoid creating a rsyslog container on the log 
server, I can modify my openstack_user_config.yml like this:

log_hosts:
  log1:
    affinity:
      rsyslog_container: 0
    ip: 172.29.236.240

Is that right?

>  Next, you should add your actual logging hosts to your
> openstack_user_config as seen here:
> https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L122-L124
> 
>  Be sure to comment out the rsyslog-install.yml line of
> setup-infrastructure.yml, and be sure that you make any necessary
> modifications to the openstack-ansible-rsyslog_client role.  Modifications
> may not be necessary, depending on your needs, and you may be able to
> specify certain variables in user_variables.yml to achieve the desired
> results.
> 
In openstack-ansible-rsyslog_client, the 99-rsyslog.conf.j2 template uses:
*.* @{{ hostvars[server]['ansible_ssh_host'] }}:{{ rsyslog_client_udp_port }};RFC3164fmt

I will test to make sure that IP is the one of log1.
If so, no further modification is needed.

Thanks again.
Regards

>  As always, make sure you test these modifications in a non-production
> environment to ensure you achieve the desired results.
> 
> 
> Michael Gugino
> Cloud Powered
> (540) 846-0304 Mobile
> 
> Walmart ✻
> Saving people money so they can live better.
> 
> 
> 
> 
> 
> On 7/6/16, 10:24 AM, "fabrice grelaud" <fabrice.grel...@u-bordeaux.fr>
> wrote:
> 
>> Hi,
>> 
>> I would like to know the best approach to customizing our
>> openstack-ansible deployment if we want to use our existing ELK solution
>> and centralized rsyslog server.
>> 
>> We deployed openstack-ansible (Mitaka 13.1.2 release) on our infrastructure
>> with, for convenience and to avoid risk, a VM in the syslog server role.
>> That is OK. But what if I now want to use our centralized syslog server?
>> 
>> What I need is to point the rsyslog clients (containers + metal) at the IP
>> address of our existing server and, of course, configure our rsyslog.conf
>> to handle the OpenStack templates.
>> So:
>> - no need to create an LXC container on the log server (setup-hosts.yml:
>> lxc-hosts-setup, lxc-containers-create)
>> - no need to install the syslog server (setup-infrastructure.yml:
>> rsyslog-install.yml)
>> 
>> How can I modify my openstack-ansible environment (/etc/openstack_deploy,
>> env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbooks?)
>> in the most transparent manner, one that keeps minor release updates simple?
>> 
>> Thanks.
>> 
>> Fabrice Grelaud
>> Université de Bordeaux
>> 
>> 
>> 
>> 



[openstack-dev] [openstack-ansible] existing centralized syslog server

2016-07-06 Thread fabrice grelaud
Hi,

I would like to know the best approach to customizing our openstack-ansible 
deployment if we want to use our existing ELK solution and centralized rsyslog 
server.

We deployed openstack-ansible (Mitaka 13.1.2 release) on our infrastructure 
with, for convenience and to avoid risk, a VM in the syslog server role. That 
is OK. But what if I now want to use our centralized syslog server?

What I need is to point the rsyslog clients (containers + metal) at the IP 
address of our existing server and, of course, configure our rsyslog.conf to 
handle the OpenStack templates.
So:
- no need to create an LXC container on the log server (setup-hosts.yml: 
lxc-hosts-setup, lxc-containers-create)
- no need to install the syslog server (setup-infrastructure.yml: 
rsyslog-install.yml)

How can I modify my openstack-ansible environment (/etc/openstack_deploy, 
env.d, conf.d, openstack_user_config.yml, user_variables.yml, playbooks?) in 
the most transparent manner, one that keeps minor release updates simple?

Thanks.

Fabrice Grelaud
Université de Bordeaux






Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-24 Thread fabrice grelaud

> On 22 June 2016 at 19:40, Assaf Muller <as...@redhat.com> wrote:
> 
> On Wed, Jun 22, 2016 at 12:02 PM, fabrice grelaud
> <fabrice.grel...@u-bordeaux.fr> wrote:
>> 
>> On 22 June 2016 at 17:35, fabrice grelaud <fabrice.grel...@u-bordeaux.fr>
>> wrote:
>> 
>> 
>> On 22 June 2016 at 15:45, Assaf Muller <as...@redhat.com> wrote:
>> 
>> On Wed, Jun 22, 2016 at 9:24 AM, fabrice grelaud
>> <fabrice.grel...@u-bordeaux.fr> wrote:
>> 
>> Hi,
>> 
>> we deployed our openstack infrastructure with your « exciting » project
>> openstack-ansible (mitaka 13.1.2) but we have some problems with L3HA after
>> create router.
>> 
>> Our infra (closer to the doc):
>> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan,
>> br-vlan))
>> 2 compute nodes (same for network)
>> 
>> We create an external network (vlan type), an internal network (vxlan type)
>> and a router connected to both networks.
>> And when we launch an instance (cirros), we can’t receive an ip on the vm.
>> 
>> We have:
>> 
>> root@p-osinfra03-utility-container-783041da:~# neutron
>> l3-agent-list-hosting-router router-bim
>> +--+---++---+--+
>> | id   | host
>> | admin_state_up | alive | ha_state |
>> +--+---++---+--+
>> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 |
>> p-osinfra01-neutron-agents-container-f1ab9c14 | True | :-)   |
>> active   |
>> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 |
>> p-osinfra02-neutron-agents-container-48142ffe | True  | :-)   |
>> active   |
>> | 55350fac-16aa-488e-91fd-a7db38179c62 |
>> p-osinfra03-neutron-agents-container-2f6557f0 | True  | :-)   |
>> active   |
>> +--+---++---+—+
>> 
>> I know, i got a problem now because i should have :-) active, :-) standby,
>> :-) standby… Snif...
>> 
>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
>> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
>> 
>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec
>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>> default
>>   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>   inet 127.0.0.1/8 scope host lo
>>  valid_lft forever preferred_lft forever
>>   inet6 ::1/128 scope host
>>  valid_lft forever preferred_lft forever
>> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
>> pfifo_fast state UP group default qlen 1000
>>   link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
>>   inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>>  valid_lft forever preferred_lft forever
>>   inet 169.254.0.1/24 scope global ha-4a5f0287-91
>>  valid_lft forever preferred_lft forever
>>   inet6 fe80::f816:3eff:fec2:67a9/64 scope link
>>  valid_lft forever preferred_lft forever
>> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
>> pfifo_fast state UP group default qlen 1000
>>   link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
>>   inet 192.168.100.254/24 scope global qr-44804d69-88
>>  valid_lft forever preferred_lft forever
>>   inet6 fe80::f816:3eff:fea5:8cf2/64 scope link
>>  valid_lft forever preferred_lft forever
>> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> pfifo_fast state UP group default qlen 1000
>>   link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
>>   inet 147.210.240.11/23 scope global qg-c5c7378e-1d
>>  valid_lft forever preferred_lft forever
>>   inet 147.210.240.12/32 scope global qg-c5c7378e-1d
>>  valid_lft forever preferred_lft forever
>>   inet6 fe80::f816:3eff:feb6:4c97/64 scope link
>>  valid_lft forever preferred_lft forever
>> 
>> Same result on infra02 and infra03, qr and qg interfaces have the same ip,
>> and ha interfaces the address 169.254.0.1.
>> 
>> If we stop 2 neutron agent containers (p-osinfra02, p-osinfra03) and we
>> restart the first (p-osinfra01), we can reboot the instance and we got an

Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread fabrice grelaud

> On 22 June 2016 at 17:35, fabrice grelaud <fabrice.grel...@u-bordeaux.fr> 
> wrote:
> 
>> 
>> On 22 June 2016 at 15:45, Assaf Muller <as...@redhat.com> wrote:
>> 
>> On Wed, Jun 22, 2016 at 9:24 AM, fabrice grelaud
>> <fabrice.grel...@u-bordeaux.fr> wrote:
>>> Hi,
>>> 
>>> we deployed our openstack infrastructure with your « exciting » project
>>> openstack-ansible (mitaka 13.1.2) but we have some problems with L3HA after
>>> create router.
>>> 
>>> Our infra (closer to the doc):
>>> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan,
>>> br-vlan))
>>> 2 compute nodes (same for network)
>>> 
>>> We create an external network (vlan type), an internal network (vxlan type)
>>> and a router connected to both networks.
>>> And when we launch an instance (cirros), we can’t receive an ip on the vm.
>>> 
>>> We have:
>>> 
>>> root@p-osinfra03-utility-container-783041da:~# neutron
>>> l3-agent-list-hosting-router router-bim
>>> +--+---++---+--+
>>> | id   | host
>>> | admin_state_up | alive | ha_state |
>>> +--+---++---+--+
>>> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 |
>>> p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   |
>>> active   |
>>> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 |
>>> p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   |
>>> active   |
>>> | 55350fac-16aa-488e-91fd-a7db38179c62 |
>>> p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   |
>>> active   |
>>> +--+---++---+—+
>>> 
>>> I know, i got a problem now because i should have :-) active, :-) standby,
>>> :-) standby… Snif...
>>> 
>>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
>>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
>>> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
>>> 
>>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec
>>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>>> default
>>>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>inet 127.0.0.1/8 scope host lo
>>>   valid_lft forever preferred_lft forever
>>>inet6 ::1/128 scope host
>>>   valid_lft forever preferred_lft forever
>>> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
>>> pfifo_fast state UP group default qlen 1000
>>>link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
>>>inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>>>   valid_lft forever preferred_lft forever
>>>inet 169.254.0.1/24 scope global ha-4a5f0287-91
>>>   valid_lft forever preferred_lft forever
>>>inet6 fe80::f816:3eff:fec2:67a9/64 scope link
>>>   valid_lft forever preferred_lft forever
>>> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
>>> pfifo_fast state UP group default qlen 1000
>>>link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
>>>inet 192.168.100.254/24 scope global qr-44804d69-88
>>>   valid_lft forever preferred_lft forever
>>>inet6 fe80::f816:3eff:fea5:8cf2/64 scope link
>>>   valid_lft forever preferred_lft forever
>>> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> pfifo_fast state UP group default qlen 1000
>>>link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
>>>inet 147.210.240.11/23 scope global qg-c5c7378e-1d
>>>   valid_lft forever preferred_lft forever
>>>inet 147.210.240.12/32 scope global qg-c5c7378e-1d
>>>   valid_lft forever preferred_lft forever
>>>inet6 fe80::f816:3eff:feb6:4c97/64 scope link
>>>   valid_lft forever preferred_lft forever
>>> 
>>> Same result on infra02 and infra03, qr and qg interfaces have the same ip,
>>> and ha interfaces the address 169.254.0.1.
>>

Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread fabrice grelaud
Thanks. I will test…

Do you think trusty-backports is enough (1:1.2.13-1~ubuntu14.04.1)?
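
For the record, what I plan to run in each neutron agents container (assuming 
trusty-backports is already enabled in the container's sources.list):

apt-get update
apt-get install -t trusty-backports keepalived
keepalived -v    # should then report 1.2.13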


> On 22 June 2016 at 16:21, Anna Kamyshnikova <akamyshnik...@mirantis.com> 
> wrote:
> 
> Keepalived 1.2.7 is a bad version. Please see the comments in this bug: 
> https://bugs.launchpad.net/neutron/+bug/1497272. I suggest you try one of 
> the latest versions of Keepalived.
> 
> On Wed, Jun 22, 2016 at 5:03 PM, fabrice grelaud 
> <fabrice.grel...@u-bordeaux.fr> wrote:
> Hi,
> 
> keepalived 1:1.2.7-1ubuntu
> 
> 
>> On 22 June 2016 at 15:41, Anna Kamyshnikova <akamyshnik...@mirantis.com>
>> wrote:
>> 
>> Hi!
>> 
>> What Keepalived version is used?
>> 
>> On Wed, Jun 22, 2016 at 4:24 PM, fabrice grelaud 
>> <fabrice.grel...@u-bordeaux.fr> wrote:
>> Hi,
>> 
>> we deployed our OpenStack infrastructure with your "exciting" 
>> openstack-ansible project (Mitaka 13.1.2), but we have some problems with 
>> L3HA after creating a router.
>> 
>> Our infra (closer to the doc):
>> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan, 
>> br-vlan))
>> 2 compute nodes (same for network)
>> 
>> We create an external network (vlan type), an internal network (vxlan type) 
>> and a router connected to both networks.
>> And when we launch an instance (CirrOS), the VM doesn't receive an IP.
>> 
>> We have:
>> 
>> root@p-osinfra03-utility-container-783041da:~# neutron 
>> l3-agent-list-hosting-router router-bim
>> +--+---++---+--+
>> | id   | host
>>   | admin_state_up | alive | ha_state |
>> +--+---++---+--+
>> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 | 
>> p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   | 
>> active   |
>> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 | 
>> p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   | 
>> active   |
>> | 55350fac-16aa-488e-91fd-a7db38179c62 | 
>> p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   | 
>> active   |
>> +--+---++---+—+
>> 
>> I know I have a problem here, because I should see :-) active, :-) standby, 
>> :-) standby… Sniff...
>> 
>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
>> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
>> 
>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec 
>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
>> default 
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host 
>>valid_lft forever preferred_lft forever
>> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc 
>> pfifo_fast state UP group default qlen 1000
>> link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
>> inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>>valid_lft forever preferred_lft forever
>> inet 169.254.0.1/24 scope global ha-4a5f0287-91
>>valid_lft forever preferred_lft forever
>> inet6 fe80::f816:3eff:fec2:67a9/64 scope link 
>>valid_lft forever preferred_lft forever
>> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc 
>> pfifo_fast state UP group default qlen 1000
>> link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.100.254/24 scope global qr-44804d69-88
>>valid_lft forever preferred_lft forever
>> inet6 fe80::f816:3eff:fea5:8cf2/64 scope link 
>>valid_lft forever preferred_lft forever
>> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
>> pfifo_fast state UP group default qlen 1000
&

Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread fabrice grelaud

> On 22 June 2016 at 15:45, Assaf Muller <as...@redhat.com> wrote:
> 
> On Wed, Jun 22, 2016 at 9:24 AM, fabrice grelaud
> <fabrice.grel...@u-bordeaux.fr> wrote:
>> Hi,
>> 
>> we deployed our openstack infrastructure with your « exciting » project
>> openstack-ansible (mitaka 13.1.2) but we have some problems with L3HA after
>> create router.
>> 
>> Our infra (closer to the doc):
>> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan,
>> br-vlan))
>> 2 compute nodes (same for network)
>> 
>> We create an external network (vlan type), an internal network (vxlan type)
>> and a router connected to both networks.
>> And when we launch an instance (cirros), we can’t receive an ip on the vm.
>> 
>> We have:
>> 
>> root@p-osinfra03-utility-container-783041da:~# neutron
>> l3-agent-list-hosting-router router-bim
>> +--+---++---+--+
>> | id   | host
>> | admin_state_up | alive | ha_state |
>> +--+---++---+--+
>> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 |
>> p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   |
>> active   |
>> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 |
>> p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   |
>> active   |
>> | 55350fac-16aa-488e-91fd-a7db38179c62 |
>> p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   |
>> active   |
>> +--+---++---+—+
>> 
>> I know, i got a problem now because i should have :-) active, :-) standby,
>> :-) standby… Snif...
>> 
>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
>> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
>> 
>> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec
>> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>> default
>>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>inet 127.0.0.1/8 scope host lo
>>   valid_lft forever preferred_lft forever
>>inet6 ::1/128 scope host
>>   valid_lft forever preferred_lft forever
>> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
>> pfifo_fast state UP group default qlen 1000
>>link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
>>inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>>   valid_lft forever preferred_lft forever
>>inet 169.254.0.1/24 scope global ha-4a5f0287-91
>>   valid_lft forever preferred_lft forever
>>inet6 fe80::f816:3eff:fec2:67a9/64 scope link
>>   valid_lft forever preferred_lft forever
>> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
>> pfifo_fast state UP group default qlen 1000
>>link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
>>inet 192.168.100.254/24 scope global qr-44804d69-88
>>   valid_lft forever preferred_lft forever
>>inet6 fe80::f816:3eff:fea5:8cf2/64 scope link
>>   valid_lft forever preferred_lft forever
>> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> pfifo_fast state UP group default qlen 1000
>>link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
>>inet 147.210.240.11/23 scope global qg-c5c7378e-1d
>>   valid_lft forever preferred_lft forever
>>inet 147.210.240.12/32 scope global qg-c5c7378e-1d
>>   valid_lft forever preferred_lft forever
>>inet6 fe80::f816:3eff:feb6:4c97/64 scope link
>>   valid_lft forever preferred_lft forever
>> 
>> Same result on infra02 and infra03, qr and qg interfaces have the same ip,
>> and ha interfaces the address 169.254.0.1.
>> 
>> If we stop 2 neutron agent containers (p-osinfra02, p-osinfra03) and we
>> restart the first (p-osinfra01), we can reboot the instance and we got an
>> ip, a floating ip and we can access by ssh from internet to the vm. (Note:
>> after few time, we loss our connectivity too).
>> 
>> But if we restart the two containers, we got a ha_state to « standby » until
>> the three become « active » and finally we have the problem again.
>> 

Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread fabrice grelaud
Hi,

keepalived 1:1.2.7-1ubuntu


> On 22 June 2016 at 15:41, Anna Kamyshnikova <akamyshnik...@mirantis.com> 
> wrote:
> 
> Hi!
> 
> What Keepalived version is used?
> 
> On Wed, Jun 22, 2016 at 4:24 PM, fabrice grelaud 
> <fabrice.grel...@u-bordeaux.fr> wrote:
> Hi,
> 
> we deployed our OpenStack infrastructure with your "exciting" 
> openstack-ansible project (Mitaka 13.1.2), but we have some problems with 
> L3HA after creating a router.
> 
> Our infra (closer to the doc):
> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan, 
> br-vlan))
> 2 compute nodes (same for network)
> 
> We create an external network (vlan type), an internal network (vxlan type) 
> and a router connected to both networks.
> And when we launch an instance (CirrOS), the VM doesn't receive an IP.
> 
> We have:
> 
> root@p-osinfra03-utility-container-783041da:~# neutron 
> l3-agent-list-hosting-router router-bim
> +--+---++---+--+
> | id   | host 
>  | admin_state_up | alive | ha_state |
> +--+---++---+--+
> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 | 
> p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   | 
> active   |
> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 | 
> p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   | 
> active   |
> | 55350fac-16aa-488e-91fd-a7db38179c62 | 
> p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   | 
> active   |
> +--+---++---+—+
> 
> I know I have a problem here, because I should see :-) active, :-) standby, 
> :-) standby… Sniff...
> 
> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
> 
> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec 
> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
> default 
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host 
>valid_lft forever preferred_lft forever
> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc 
> pfifo_fast state UP group default qlen 1000
> link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
> inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>valid_lft forever preferred_lft forever
> inet 169.254.0.1/24 scope global ha-4a5f0287-91
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fec2:67a9/64 scope link 
>valid_lft forever preferred_lft forever
> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc 
> pfifo_fast state UP group default qlen 1000
> link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
> inet 192.168.100.254/24 scope global qr-44804d69-88
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fea5:8cf2/64 scope link 
>valid_lft forever preferred_lft forever
> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast state UP group default qlen 1000
> link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
> inet 147.210.240.11/23 scope global qg-c5c7378e-1d
>valid_lft forever preferred_lft forever
> inet 147.210.240.12/32 scope global qg-c5c7378e-1d
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:feb6:4c97/64 scope link 
>valid_lft forever preferred_lft forever
> 
> Same result on infra02 and infra03: the qr and qg interfaces have the same 
> IPs, and the ha interfaces the address 169.254.0.1.
> 
> If we stop 2 neutron agents containers (p-osinfra02, p-osinfra03) and we 
> restart the first (p-osinfra01), we can reboot the instance and we get an 
> IP, a floating IP, and we can access the VM over SSH from the internet. 
> (Note: after a while, we lose that connectivity too).
> 
> But if we restart the two containers, ha_state goes to "standby" until 
&

[openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread fabrice grelaud
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-4a5f0287-91, link-type EN10MB (Ethernet), capture size 65535 
bytes
IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20

root@p-osinfra02-neutron-agents-container-48142ffe:~# ip netns exec 
qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 tcpdump -nt -i ha-4ee5f8d0-7f
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-4ee5f8d0-7f, link-type EN10MB (Ethernet), capture size 65535 
bytes
IP 169.254.192.3 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.3 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.3 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.3 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20
IP 169.254.192.3 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype 
simple, intvl 2s, length 20


Could someone tell me if they have already encountered this problem?
The infra and compute nodes are connected to a Nexus 9000 switch.

Thank you in advance for taking the time to study my request.

Fabrice Grelaud
Université de Bordeaux



Re: [openstack-dev] [openstack-ansible] network question and documentation

2016-02-19 Thread fabrice grelaud

> On 19 February 2016 at 14:20, Major Hayden <ma...@mhtx.net> wrote:
> 
> On 02/17/2016 09:00 AM, Fabrice Grelaud wrote:
>> So, I would like to know if I'm going in the right direction.
>> We want to use both existing VLANs from our physical architecture inside 
>> OpenStack (provider VLANs) and "private tenant networks" with a floating 
>> IP offering (from a flat network).
>> 
>> My question is about switch configuration:
>> 
>> On bond0:
>> the switch port connected to bond0 needs to be configured as a trunk with:
>> - the host management network (VLAN untagged, but can it be tagged?)
>> - the container (mgmt) network (vlan-container)
>> - the storage network (vlan-storage)
>> 
>> On bond1:
>> the switch port connected to bond1 needs to be configured as a trunk with:
>> - the vxlan network (vlan-vxlan)
>> - VLAN X (existing VLAN in our network infra)
>> - VLAN Y (existing VLAN in our network infra)
>> 
>> Is that right?
> 
> You have a good plan here, Fabrice.  Although I don't have bonding configured 
> in my own production environment, I'm doing much the same as you are with 
> individual network interfaces.
> 
>> And do I have to define a new network (a new VLAN, or a flat network) that 
>> offers floating IPs for private tenants (not using existing VLAN X or Y)? 
>> Does that new VLAN have to be connected to bond1 and/or bond0?
>> Could the host management network play this role?
> 
> You *could* use the host management network as your floating IP pool network, 
> but you'd need to create a flat network in OpenStack for that (unless your 
> host management network is tagged).  I prefer to use a specific VLAN for 
> those public-facing, floating IP addresses.  

Thanks a lot for your answer.
I prefer to use a specific VLAN too. Could you confirm that this new VLAN has 
to be part of the trunk between the switch port and the bond1 interface 
(where we have br-vlan)?

> You'll need routers between your internal networks and that floating IP VLAN 
> to make the floating IP addresses work (if I remember correctly).

Absolutely.
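
For later readers, the kind of external network I expect to create once that 
VLAN is trunked to bond1 (a sketch only; the VLAN ID and the "vlan" physnet 
label are made up and must match our provider_networks/ml2 configuration):

neutron net-create public-float --router:external \
  --provider:network_type vlan --provider:physical_network vlan \
  --provider:segmentation_id 240
neutron subnet-create public-float <float-cidr> --name public-float-subnet \
  --disable-dhcp --allocation-pool start=<first-ip>,end=<last-ip>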

> 
>> PS: otherwise, about the documentation, for better understanding and 
>> perhaps consistency:
>> In GitHub (https://github.com/openstack/openstack-ansible), in the file 
>> openstack_interface.cfg.example, you point out that for br-vxlan and 
>> br-storage, "only compute nodes have an IP on this bridge. When used by 
>> infra nodes, IPs exist in the containers and inet should be set to manual".
>> 
>> I think it would be good (but I may be wrong ;-) ) if, in chapter 3 of the 
>> install guide ("Configuring the network on target hosts"), you proposed the 
>> /etc/network/interfaces for both the controller node (br-vxlan, br-storage: 
>> manual, without an IP) and the compute node (br-vxlan, br-storage: static, 
>> with an IP).
> 
> That makes sense.  Would you be able to open a bug for us?  I'll be glad to 
> help you write some documentation if you're interested in learning that 
> process.
> 
> Our bug tracker is here in LaunchPad:
> 
>  https://bugs.launchpad.net/openstack-ansible

I opened a bug (https://bugs.launchpad.net/openstack-ansible/+bug/1547598).

I'll be delighted to contribute to the documentation, at my level, so I'm 
interested in learning that process.
We (my project team) plan to follow your guide, and I'll gladly report back 
anything that might be misunderstood, to help improve the guide.

Regards,

Fabrice Grelaud




Re: [openstack-dev] [openstack-ansible] network question and documentation

2016-02-19 Thread Fabrice Grelaud
On 19/02/2016 00:31, Ian Cordasco wrote:
>  
>
> -----Original Message-----
> From: Fabrice Grelaud <fabrice.grel...@u-bordeaux.fr>
> Reply: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Date: February 17, 2016 at 09:02:49
> To: openstack-dev@lists.openstack.org <openstack-dev@lists.openstack.org>
> Subject:  [openstack-dev] [openstack-ansible] network question and 
> documentation
>
>> Hi,
>>  
>> after a first test architecture of openstack (juno then upgrade to kilo), 
>> installed  
>> from scratch, and because we use Ansible in our organization, we decided to 
>> deploy our  
>> next openstack generation architecture from the project openstack-ansible.
>>  
>> I studied your documentation (very good work and very appreciate, 
>> http://docs.openstack.org/developer/openstack-ansible/[kilo|liberty]/install-guide/index.html)
>>   
>> and i will need some more clarification compared to network architecture.
>>  
>> I'm not sure to be on the good mailing-list because it 's dev oriented here, 
>> for all that,  
>> i fear my request to be embedded in the openstack overall list, because it's 
>> very specific  
>> to the architecture proposed by your project (bond0 (br-mngt, br-storage), 
>> bond1 (br-vxlan,  
>> br-vlan)).
>>  
>> I'm sorry about that if that is the case...
>>  
>> So, i would like to know if i'm going in the right direction.
>> We want to use both, existing vlan from our existing physical architecture 
>> inside openstack  
>> (vlan provider) and "private tenant network" with IP floating offer (from a 
>> flat network).  
>>  
>> My question is about switch configuration:
>>  
>> On Bond0:
>> the switch port connected to bond0 need to be configured as trunks with:
>> - the host management network (vlan untagged but can be tagged ?)
>> - container(mngt) network (vlan-container)
>> - storage network (vlan-storage)
>>  
>> On Bond1:
>> the switch port connected to bond1 need to be configured as trunks with:
>> - vxlan network (vlan-vxlan)
>> - vlan X (existing vlan in our existing network infra)
>> - vlan Y (existing vlan in our existing network infra)
>>  
>> Is that right ?
>>  
>> And do i have to define a new network (a new vlan, flat network) that offer 
>> floatting IP  
>> for private tenant (not using existing vlan X or Y)? Is that new vlan have 
>> to be connected  
>> to bond1 and/or bond0 ?
>> Is that host management network could play this role ?
>>  
>> Thank you to consider my request.
>> Regards
>>  
>> ps: otherwise, about the documentation, for great understanding and perhaps 
>> consistency  
>> In Github (https://github.com/openstack/openstack-ansible), in the file 
>> openstack_interface.cfg.example,  
>> you point out that for br-vxlan and br-storage, "only compute node have an 
>> IP on this bridge.  
>> When used by infra nodes, IPs exist in the containers and inet should be set 
>> to manual".  
>>  
>> I think it will be good (but i may be wrong ;-) ) that in chapter 3 of the 
>> "install guide: configuring  
>> the network on target host", you propose the /etc/network/interfaces for 
>> both controller  
>> node (br-vxlan, br-storage: manual without IP) and compute node (br-vxlan, 
>> br-storage:  
>> static with IP).
> Hi Fabrice,
>
> Has anyone responded to your questions yet?
>
> --  
> Ian Cordasco
>
>
Hi Ian,

Alas! Not at the moment...

Thanks,

-- 
Fabrice Grelaud
Université de Bordeaux




[openstack-dev] [openstack-ansible] network question and documentation

2016-02-17 Thread Fabrice Grelaud
Hi,

After a first OpenStack test architecture (Juno, then upgraded to Kilo), 
installed from scratch, and because we use Ansible in our organization, we 
decided to deploy our next-generation OpenStack architecture with the 
openstack-ansible project.

I studied your documentation (very good work, much appreciated: 
http://docs.openstack.org/developer/openstack-ansible/[kilo|liberty]/install-guide/index.html)
and I need some clarification regarding the network architecture.

I'm not sure this is the right mailing list, since it is dev-oriented here; 
still, I fear my request would get buried in the general OpenStack list, 
because it is very specific to the architecture proposed by your project 
(bond0 (br-mgmt, br-storage), bond1 (br-vxlan, br-vlan)).

I'm sorry if that is the case...

So, I would like to know if I'm going in the right direction.
We want to use both existing VLANs from our physical architecture inside 
OpenStack (provider VLANs) and "private tenant networks" with a floating IP 
offering (from a flat network).

My question is about switch configuration:

On bond0:
the switch port connected to bond0 needs to be configured as a trunk with:
- the host management network (VLAN untagged, but can it be tagged?)
- the container (mgmt) network (vlan-container)
- the storage network (vlan-storage)

On bond1:
the switch port connected to bond1 needs to be configured as a trunk with:
- the vxlan network (vlan-vxlan)
- VLAN X (existing VLAN in our network infra)
- VLAN Y (existing VLAN in our network infra)

Is that right?

And do I have to define a new network (a new VLAN, or a flat network) that 
offers floating IPs for private tenants (not using existing VLAN X or Y)? 
Does that new VLAN have to be connected to bond1 and/or bond0?
Could the host management network play this role?

Thank you for considering my request.
Regards,

PS: otherwise, about the documentation, for better understanding and perhaps 
consistency:
In GitHub (https://github.com/openstack/openstack-ansible), in the file 
openstack_interface.cfg.example, you point out that for br-vxlan and 
br-storage, "only compute nodes have an IP on this bridge. When used by infra 
nodes, IPs exist in the containers and inet should be set to manual".

I think it would be good (but I may be wrong ;-) ) if, in chapter 3 of the 
install guide ("Configuring the network on target hosts"), you proposed the 
/etc/network/interfaces for both the controller node (br-vxlan, br-storage: 
manual, without an IP) and the compute node (br-vxlan, br-storage: static, 
with an IP); something like the sketch below.
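
Something like this sketch (based on openstack_interface.cfg.example; the 
VLAN ID 20 and the addressing below are made up):

# Controller/infra node: the host itself carries no IP on this bridge
auto br-storage
iface br-storage inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.20

# Compute node: the host needs an IP on the storage bridge
auto br-storage
iface br-storage inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.20
    address 172.29.244.20
    netmask 255.255.252.0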


Fabrice GRELAUD
Université de Bordeaux
