[Yahoo-eng-team] [Bug 1863213] [NEW] Spawning of DHCP processes fail: invalid netcat options

2020-02-13 Thread Carlos Goncalves
Public bug reported:

Devstack master, ML2/OVS, CentOS 7, Python 3.6.

No DHCP servers running. Instances fail to get a DHCP offer.

$ ps aux | egrep "dhcp|dnsmasq"
vagrant591  4.7  0.7 459196 114056 ?   Ss   07:26   0:33 
/usr/bin/python3.6 /usr/local/bin/neutron-dhcp-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini
root  1057  0.0  0.0 102896  5472 ?S06:14   0:00 /sbin/dhclient 
-d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid -lf 
/var/lib/NetworkManager/dhclient-5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03-eth0.lease
 -cf /var/lib/NetworkManager/dhclient-eth0.conf eth0
root  1219 14.9  0.4 684168 77988 ?Sl   07:26   1:43 
/usr/bin/python3.6 /usr/local/bin/privsep-helper --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini 
--privsep_context neutron.privileged.default --privsep_sock_path 
/tmp/tmpxg0wq6j2/privsep.sock
root 14783  0.0  0.0 102896  2632 ?Ss   07:29   0:00 dhclient -v 
o-hm0 -cf /etc/dhcp/octavia/dhclient.conf
vagrant  18136  0.0  0.0 112716   988 pts/2S+   07:37   0:00 grep -E 
--color=auto dhcp|dnsmasq


Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Building initial lease file: 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/leases 
{{(pid=591) _output_init_lease_file 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:681}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Done building initial lease file 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/leases with 
contents:
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: 1581752053 
fa:16:3e:31:b7:ea 192.168.233.2 * *
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]:  {{(pid=591) 
_output_init_lease_file /opt/stack/neutron/neutron/agent/linux/dhcp.py:708}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Building host file: 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/host 
{{(pid=591) _output_hosts_file 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:739}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Done building host file 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/host 
{{(pid=591) _output_hosts_file 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:780}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.utils [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Unable to access 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid; Error: 
[Errno 2] No such file or directory: 
'/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid' 
{{(pid=591) get_value_from_file 
/opt/stack/neutron/neutron/agent/linux/utils.py:262}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.utils [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qdhcp-06d0ae0b-d730-4871-bef3-fa52e8638214', 'dnsmasq', '--no-hosts', '', 
'--pid-file=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid',
 
'--dhcp-hostsfile=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/host',
 
'--addn-hosts=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/addn_hosts',
 
'--dhcp-optsfile=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/opts',
 
'--dhcp-leasefile=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/leases',
 '--dhcp-match=set:ipxe,175', '--dhcp-userclass=set:ipxe6,iPXE', 
'--local-service', '--bind-dynamic', 
'--dhcp-range=set:subnet-879783df-943d-486d-8447-8730b9f3051a,192.168.233.0,static,255.255.255.0,86400s',
 '--dhcp-option-force=option:mtu,1450', '--dhcp-lease-max=256', '--conf-file=', 
'--domain=openstacklocal'] {{(pid=591) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:103}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: ERROR 
neutron.agent.linux.utils [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Exit code: 2; Stdin: ; Stdout: ; Stderr: /bin/ncat: unrecognized option 
'--no-hosts'
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: Ncat: Try 
`--help' or man(1) ncat for more information, usage options and help. QUITTING.
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: 
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Spawning DHCP process for network 

[Yahoo-eng-team] [Bug 1863209] [NEW] [openstacksdk] image name is not set if filename is not passed to create_image method

2020-02-13 Thread Tushar Patil
Public bug reported:

I want to create an image without uploading image data using the
openstacksdk create_image method.

sdkconnection.image.create_image(name, allow_duplicates=True, **fields)

fields = {"min_disk": min_disk, "min_ram": min_ram,
          "disk_format": "qcow2",
          "container_format": "bare",
          "sha256": ,
          "visibility": "private"}

The image is created successfully, but no name is set on it.
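
A minimal self-contained reproducer (a sketch; the cloud name "devstack"
and the image parameters here are illustrative):

import openstack

conn = openstack.connect(cloud='devstack')

# No filename/data is passed, so only the image record is created; the
# reported bug is that the record ends up without a name.
image = conn.image.create_image(
    'my-empty-image',
    allow_duplicates=True,
    disk_format='qcow2',
    container_format='bare',
    visibility='private',
    min_disk=1,
    min_ram=64,
)
print(image.name)  # expected 'my-empty-image'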

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- [openstacksdk] image name is not set if filename is not used during 
create_image
+ [openstacksdk] image name is not set if filename is not passed to 
create_image method

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1863209

Title:
  [openstacksdk] image name is not set if filename is not passed to
  create_image method

Status in Glance:
  New

Bug description:
  I want to create an image without uploading image data using the
  openstacksdk create_image method.

  sdkconnection.image.create_image(name, allow_duplicates=True,
  **fields)

  fields = {"min_disk": min_disk, "min_ram": min_ram,
            "disk_format": "qcow2",
            "container_format": "bare",
            "sha256": ,
            "visibility": "private"}

  The image is created successfully, but no name is set on it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1863209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863206] [NEW] Port is reported with 'port_security_enabled=True' without port-security extension

2020-02-13 Thread Yang Youseok
Public bug reported:

By default, if the admin does not enable the 'port_security' extension,
all ports are shown with 'port_security_enabled=False'.

However, the L2 agent incorrectly receives ports with
'port_security_enabled=True', because when the attribute is missing from
the port object the plugin returns the wrong default value
(https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L162).


I think that when there is no 'port_security_enabled' attribute, the
default should be False; a sketch of the pattern is shown below.

Thanks.
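
Illustrative sketch only (simplified, not the exact neutron code): the
device dict is built with a dict.get() default, and the report argues
the fallback should be False when the extension is not loaded:

# current pattern (simplified)
port_security_enabled = port.get('port_security_enabled', True)

# suggested default when the attribute is absent
port_security_enabled = port.get('port_security_enabled', False)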

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863206

Title:
  Port is reported with 'port_security_enabled=True'  without port-
  security extension

Status in neutron:
  New

Bug description:
  By default, if the admin does not enable the 'port_security' extension,
  all ports are shown with 'port_security_enabled=False'.

  However, the L2 agent incorrectly receives ports with
  'port_security_enabled=True', because when the attribute is missing
  from the port object the plugin returns the wrong default value
  (https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L162).

  I think that when there is no 'port_security_enabled' attribute, the
  default should be False.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863203] [NEW] [cloud-init] "/etc/resolv.conf" is out of control of netconfig after done cloud-init GOSC on SLES15SP1

2020-02-13 Thread Cuixiang Zhang
Public bug reported:

[cloud-init] "/etc/resolv.conf" is out of control of netconfig after
done cloud-init GOSC on SLES15SP1

Before GOSC, "/etc/resolv.conf" is a symlink to
"/run/netconfig/resolv.conf", controlled by netconfig. After cloud-init
GOSC runs, this file becomes a regular file rather than a symlink, with
contents totally different from "/run/netconfig/resolv.conf".

cloud-init version is 19.4.
```
pek2-gosv-16-dhcp83:~ # cloud-init -v
/usr/bin/cloud-init 19.4
```

Before GOSC:
```
pek2-gosv-16-dhcp83:~ # ll /etc/resolv.conf
lrwxrwxrwx 1 root root 26 Feb 10 23:33 /etc/resolv.conf -> 
/run/netconfig/resolv.conf
```

After GOSC:
```
pek2-gosv-16-dhcp83:~ # ll /etc/resolv.conf
-rw-r--r-- 1 root root 153 Feb 11 09:46 /etc/resolv.conf
pek2-gosv-16-dhcp83:~ # ll /run/netconfig/resolv.conf
-rw-r--r-- 1 root root 707 Feb 14 06:10 /run/netconfig/resolv.conf

```

cloud-init.log is attached.

Steps:
1. Deploy SLES15SP1 VM
2. Power off VM and Deploy GOSC
```
2020-02-11 08:46:34,797 - config_file.py[DEBUG]: FOUND CATEGORY = 'NETWORK'
2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|NETWORKING' = 'yes'
2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|BOOTPROTO' = 'dhcp'
2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|HOSTNAME' = 'gosc-dhcp-vm-01'
2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|DOMAINNAME' = 'eng.vmware.com'
...

```
3. Power on the VM and check the GOSC result. Expected: /etc/resolv.conf is 
still a symlink to /run/netconfig/resolv.conf and the DNS settings are saved 
into that file. (A workaround sketch follows.)
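
If the symlink has been replaced, a hedged workaround sketch using
standard SLES netconfig commands (verify on the affected guest first):
```
# Check whether /etc/resolv.conf is still the netconfig-managed symlink
readlink /etc/resolv.conf        # expected: /run/netconfig/resolv.conf

# Restore the symlink and let netconfig rewrite the file
ln -sf /run/netconfig/resolv.conf /etc/resolv.conf
netconfig update -f
```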

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.log"
   
https://bugs.launchpad.net/bugs/1863203/+attachment/5328104/+files/cloud-init.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1863203

Title:
  [cloud-init] "/etc/resolv.conf" is out of control of netconfig after
  done cloud-init GOSC on SLES15SP1

Status in cloud-init:
  New

Bug description:
  [cloud-init] "/etc/resolv.conf" is out of control of netconfig after
  done cloud-init GOSC on SLES15SP1

  Before GOSC, "/etc/resolv.conf" is a symlink to
  "/run/netconfig/resolv.conf", controlled by netconfig. After cloud-init
  GOSC runs, this file becomes a regular file rather than a symlink, with
  contents totally different from "/run/netconfig/resolv.conf".

  cloud-init version is 19.4.
  ```
  pek2-gosv-16-dhcp83:~ # cloud-init -v
  /usr/bin/cloud-init 19.4
  ```

  Before GOSC:
  ```
  pek2-gosv-16-dhcp83:~ # ll /etc/resolv.conf
  lrwxrwxrwx 1 root root 26 Feb 10 23:33 /etc/resolv.conf -> 
/run/netconfig/resolv.conf
  ```

  After GOSC:
  ```
  pek2-gosv-16-dhcp83:~ # ll /etc/resolv.conf
  -rw-r--r-- 1 root root 153 Feb 11 09:46 /etc/resolv.conf
  pek2-gosv-16-dhcp83:~ # ll /run/netconfig/resolv.conf
  -rw-r--r-- 1 root root 707 Feb 14 06:10 /run/netconfig/resolv.conf

  ```

  cloud-init.log is attached.

  Steps:
  1. Deploy SLES15SP1 VM
  2. Power off VM and Deploy GOSC
  ```
  2020-02-11 08:46:34,797 - config_file.py[DEBUG]: FOUND CATEGORY = 'NETWORK'
  2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|NETWORKING' = 'yes'
  2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|BOOTPROTO' = 'dhcp'
  2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|HOSTNAME' = 'gosc-dhcp-vm-01'
  2020-02-11 08:46:34,797 - config_file.py[DEBUG]: ADDED KEY-VAL :: 
'NETWORK|DOMAINNAME' = 'eng.vmware.com'
  ...

  ```
  3. Power on the VM and check the GOSC result. Expected: /etc/resolv.conf is 
still a symlink to /run/netconfig/resolv.conf and the DNS settings are saved 
into that file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1863203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863201] [NEW] stein regression listing security group rules

2020-02-13 Thread Sam Morrison
Public bug reported:

After upgrading neutron from Rocky to Stein we see a considerable slowdown when 
listing all security group rules for a project: it goes from ~2 seconds to 
almost 2 minutes. Looking into the code, it appears very inefficient because it 
fetches all rules from the DB and then filters after the fact (see the 
illustrative sketch below).
We have around 7000 rules in our QA env.
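
Illustrative sketch only, using an assumed minimal SQLAlchemy model (not
neutron's actual code) to contrast the two query patterns:

# requires SQLAlchemy >= 1.4 for this import path
from sqlalchemy import Column, String
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class SecurityGroupRule(Base):
    __tablename__ = 'securitygrouprules'
    id = Column(String(36), primary_key=True)
    project_id = Column(String(36), index=True)

def list_rules_slow(session: Session, project_id: str):
    # Pattern the report describes: load every rule, filter in Python.
    return [r for r in session.query(SecurityGroupRule).all()
            if r.project_id == project_id]

def list_rules_fast(session: Session, project_id: str):
    # Push the filter into the database instead.
    return session.query(SecurityGroupRule).filter_by(
        project_id=project_id).all()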

Very keen to get this sorted but don't know the neutron code base that
well so can offer testing of patches if there are any out there already.

It looks like the same thing happened with listing ports in Stein; I
found https://bugzilla.redhat.com/show_bug.cgi?id=1737012, so I wonder
if this is related.

With Rocky:
time openstack security group rule list
+--------------------------------------+-------------+-----------+--------------------+------------+--------------------------------------+--------------------------------------+
| ID                                   | IP Protocol | Ethertype | IP Range           | Port Range | Remote Security Group                | Security Group                       |
+--------------------------------------+-------------+-----------+--------------------+------------+--------------------------------------+--------------------------------------+
| 01b877cc-1621-44cd-8e69-1345ab01a1ef | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 3dcbd4fa-d017-4361-b0b0-b7508e923087 |
| 0c744788-6319-42e5-931a-5e7b0df166c4 | None        | IPv6      | ::/0               |            | None                                 | 3dcbd4fa-d017-4361-b0b0-b7508e923087 |
| 0fc6b79d-d211-4201-ac76-60fb8ea40c9c | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 8f55c18b-cd8c-4d84-afef-f8b83d5eb128 |
| 17d6c8a3-7894-42a6-92f2-1bd56a30ef1d | tcp         | IPv4      | 0.0.0.0/0          | 80:80      | None                                 | ed257fd7-d825-4014-96a8-c16adfea70f0 |
| 19d3ba79-65f1-4c89-a1c2-b32049ceb25a | None        | IPv6      | ::/0               |            | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 21f1d173-b99f-47a7-9983-6926f7bc58f3 | icmp        | IPv4      | 0.0.0.0/0          |            | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 3321d5ff-11c3-4104-be13-107c789e4bf8 | None        | IPv6      | ::/0               |            | None                                 | 57cb14de-dd5f-4f0c-b0cf-a7effc36fca5 |
| 381c6816-9b5c-42b7-9dd3-dae12a49c08b | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 3f63cfbb-87ee-4aa2-8193-7e86cb542881 |
| 3886ad10-99ea-4f60-a36c-ffbe80d92907 | None        | IPv6      | ::/0               |            | None                                 | ed257fd7-d825-4014-96a8-c16adfea70f0 |
| 5be4853a-75d1-435c-87ca-56c54a243f70 | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 57cb14de-dd5f-4f0c-b0cf-a7effc36fca5 |
| 71656249-4454-410e-8e7d-24910df127ba | None        | IPv6      | ::/0               |            | None                                 | 8f55c18b-cd8c-4d84-afef-f8b83d5eb128 |
| 783324ac-6844-4d4d-985c-936015bcb66e | icmp        | IPv4      | 0.0.0.0/0          |            | None                                 | 3f63cfbb-87ee-4aa2-8193-7e86cb542881 |
| 7ca7f0cc-b4df-401f-aaa4-662f17afcfb0 | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 825a33ff-b693-456d-811e-a0b494e8e308 | None        | IPv6      | ::/0               |            | 008510a7-d176-4ee5-87e2-e74da06c55ba | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 89fd2d18-45d3-4a86-a020-09d240912e5c | tcp         | IPv4      | 128.250.116.173/32 | 22:22      | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 8a1f45f1-e4c8-41e4-b6f3-80ab48b7e38d | None        | IPv6      | ::/0               |            | None                                 | bf7abb53-e5ca-428d-9fce-6a2e37a25ee0 |
| 9ebc6d15-e3eb-4d20-88d4-6737367ffc08 | None        | IPv4      | 0.0.0.0/0          |            | None                                 | ed257fd7-d825-4014-96a8-c16adfea70f0 |
| 9f29f539-a80a-4a8d-89cc-f714224b5f8c | icmp        | IPv4      | 0.0.0.0/0          |            | None                                 | 8f55c18b-cd8c-4d84-afef-f8b83d5eb128 |
| a1bc8f05-3a20-48c2-bae5-a60f4ffed514 | None        | IPv4      | 0.0.0.0/0          |            | 008510a7-d176-4ee5-87e2-e74da06c55ba | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| bef999d6-669a-47f6-988c-e69bab6df87a | tcp         | IPv4      | 0.0.0.0/0          | 22:22      | 57cb14de-dd5f-4f0c-b0cf-a7effc36fca5 | bf7abb53-e5ca-428d-9fce-6a2e37a25ee0 |
| c5ce339b-cd92-492c-9af4-6eab875027ce | tcp         | IPv4      | 0.0.0.0/0          | 80:80      |

[Yahoo-eng-team] [Bug 1863190] [NEW] Server group anti-affinity no longer works

2020-02-13 Thread Michael Johnson
Public bug reported:

Server group anti-affinity is no longer working, at least in the simple
case. I am able to boot two VMs in an anti-affinity server group on a
devstack that has only one compute node. Previously this would fail
and/or require soft-anti-affinity to be enabled.

$ openstack host list
+---+---+--+
| Host Name | Service   | Zone |
+---+---+--+
| devstack2 | scheduler | internal |
| devstack2 | conductor | internal |
| devstack2 | conductor | internal |
| devstack2 | compute   | nova |
+---+---+--+

$ openstack compute service list
+++---+--+-+---++
| ID | Binary | Host  | Zone | Status  | State | Updated At 
|
+++---+--+-+---++
|  3 | nova-scheduler | devstack2 | internal | enabled | up| 
2020-02-14T00:59:15.00 |
|  6 | nova-conductor | devstack2 | internal | enabled | up| 
2020-02-14T00:59:16.00 |
|  1 | nova-conductor | devstack2 | internal | enabled | up| 
2020-02-14T00:59:19.00 |
|  3 | nova-compute   | devstack2 | nova | enabled | up| 
2020-02-14T00:59:17.00 |
+++---+--+-+---++

$ openstack server list
+--+--++---+-++
| ID   | Name   
  | Status | Networks  | Image  
 | Flavor |
+--+--++---+-++
| a44febef-330c-4db5-b220-959cbbff8f8c | 
amphora-1bc97ddb-80da-446a-bce3-0c867c1fc258 | ACTIVE | 
lb-mgmt-net=192.168.0.58; public=172.24.4.200 | amphora-x64-haproxy | 
m1.amphora |
| de776347-0cf4-47d5-bb37-17fb37d79f2e | 
amphora-433abe98-fd8e-4e4f-ac11-4f76bbfc7aba | ACTIVE | 
lb-mgmt-net=192.168.0.199; public=172.24.4.11 | amphora-x64-haproxy | 
m1.amphora |
+--+--++---+-++

$ openstack server group show ddbc8544-c664-4da4-8fd8-32f6bd01e960
+--++
| Field| Value  
|
+--++
| id   | ddbc8544-c664-4da4-8fd8-32f6bd01e960   
|
| members  | a44febef-330c-4db5-b220-959cbbff8f8c, 
de776347-0cf4-47d5-bb37-17fb37d79f2e |
| name | octavia-lb-cc40d031-6ce9-475f-81b4-0a6096178834
|
| policies | anti-affinity  
|
+--++

Steps to reproduce:
1. Boot a devstack.
2. Create an anti-affinity server group.
3. Boot two VMs in that server group (see the CLI sketch below).
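
A hedged CLI sketch of the reproduction (flavor and image names
illustrative):

# create the anti-affinity group and note its ID
openstack server group create --policy anti-affinity test-anti-affinity

# boot two servers into the group; on a one-compute devstack the second
# boot is expected to fail with NoValidHost
openstack server create --flavor m1.tiny --image cirros \
    --hint group=<server-group-id> vm1
openstack server create --flavor m1.tiny --image cirros \
    --hint group=<server-group-id> vm2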

Expected Behavior:

The second VM boot should fail with an error similar to "not enough
hosts".

Actual Behavior:

The second VM boots with no error, and the two instances in the server
group end up on the same host.

Environment:
Nova version (current Ussuri): 0d3aeb0287a0619695c9b9e17c2dec49099876a5
commit 0d3aeb0287a0619695c9b9e17c2dec49099876a5 (HEAD -> master, origin/master, 
origin/HEAD)
Merge: 1fcd74730d 65825ebfbd
Author: Zuul 
Date:   Thu Feb 13 14:25:10 2020 +

Merge "Make RBD imagebackend flatten method idempotent"

Fresh devstack install, however I have another devstack from August that
is also showing this behavior.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863190

Title:
  Server group anti-affinity no longer works

Status in OpenStack Compute (nova):
  New

Bug description:
  Server group anti-affinity is no longer working, at least in the
  simple case. I am able to boot two VMs in an anti-affinity server
  group on a devstack that has only one compute node. Previously this
  would fail and/or require soft-anti-affinity to be enabled.

  $ openstack host list
  +---+---+--+
  | Host Name | Service   | Zone |
  +---+---+--+
  | devstack2 | scheduler | internal |
  | devstack2 | conductor | internal |
  | devstack2 | conductor | internal |
  | devstack2 | compute   | nova |
  

[Yahoo-eng-team] [Bug 1860990] Re: RBD image backend tries to flatten images even if they are already flat

2020-02-13 Thread melanie witt
** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/train
   Importance: Undecided => Medium

** Changed in: nova/train
   Status: New => In Progress

** Changed in: nova/train
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1860990

Title:
  RBD image backend tries to flatten images even if they are already
  flat

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  In Progress

Bug description:
  When the [DEFAULT]show_multiple_locations option is not set in glance,
  and both glance and nova use Ceph as their backend, with properly
  configured access, nova will fail with the following exception:

  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager 
[req-8021fd76-d5ab-4a9b-bd17-f5eb4d4faf62 0e96a04f360644818632b7e46fe8d3e7 
ac01daacc7424a40b8b464a163902dcb - default default] [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] Instance failed to spawn: 
rbd.InvalidArgument: [errno 22] error flattening 
b'fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6_disk'
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] Traceback (most recent call last):
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/compute/manager.py", line 
5757, in _unshelve_instance
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] block_device_info=block_device_info)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3457, in spawn
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] block_device_info=block_device_info)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3832, in _create_image
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] fallback_from_host)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3923, in _create_and_inject_local_root
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] instance, size, fallback_from_host)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 9267, in _try_fetch_image_cache
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] image.flatten()
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py",
 line 983, in flatten
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] self.driver.flatten(self.rbd_name, 
pool=self.driver.pool)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
 line 290, in flatten
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] vol.flatten()
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/eventlet/tpool.py", line 190, 
in doit
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/eventlet/tpool.py", line 148, 
in proxy_call
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] rv = execute(f, *args, **kwargs)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/eventlet/tpool.py", line 129, 
in execute
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1863126] [NEW] Missing Integration tests for Hypervisors validation

2020-02-13 Thread Akshay
Public bug reported:

An integration test validating Hypervisors is missing, so a new test
needs to be added.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1863126

Title:
  Missing Integration tests for Hypervisors validation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  An integration test validating Hypervisors is missing, so a new test
  needs to be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1863126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863127] [NEW] Missing Integration tests for Overview Validation

2020-02-13 Thread Akshay
Public bug reported:

An integration test validating the Overview is missing, so a new test
needs to be added.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1863127

Title:
  Missing Integration tests for Overview Validation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  An integration test validating the Overview is missing, so a new test
  needs to be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1863127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1860990] Re: RBD image backend tries to flatten images even if they are already flat

2020-02-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/704330
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=65825ebfbd58920adac5e8594891eec8e9cec41f
Submitter: Zuul
Branch:master

commit 65825ebfbd58920adac5e8594891eec8e9cec41f
Author: Vladyslav Drok 
Date:   Mon Jan 27 15:31:53 2020 +0100

Make RBD imagebackend flatten method idempotent

If glance and nova are both configured with RBD backend, but glance
does not return location information from the API, nova will fail to
clone the image from glance pool and will download it from the API.
In this case, image will be already flat, and subsequent flatten call
will fail.

This commit makes flatten call idempotent, so that it ignores already
flat images by catching ImageUnacceptable when requesting parent info
from ceph.

Closes-Bug: 1860990
Change-Id: Ia6c184c31a980e4728b7309b2afaec4d9f494ac3
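
A hedged sketch of the idempotent-flatten idea described above, written
as the flatten method in nova's rbd_utils driver (parent_info,
RBDVolumeProxy, exception.ImageUnacceptable and LOG already exist in
that module; the exact merged code may differ):

def flatten(self, volume, pool=None):
    try:
        # parent_info() raises ImageUnacceptable when the image has no
        # parent, i.e. it is already flat.
        self.parent_info(volume, pool=pool)
    except exception.ImageUnacceptable:
        LOG.debug('Image %s is already flat, skipping flatten', volume)
        return
    with RBDVolumeProxy(self, volume, pool=pool) as vol:
        vol.flatten()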


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1860990

Title:
  RBD image backend tries to flatten images even if they are already
  flat

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the [DEFAULT]show_multiple_locations option is not set in glance,
  and both glance and nova use Ceph as their backend, with properly
  configured access, nova will fail with the following exception:

  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager 
[req-8021fd76-d5ab-4a9b-bd17-f5eb4d4faf62 0e96a04f360644818632b7e46fe8d3e7 
ac01daacc7424a40b8b464a163902dcb - default default] [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] Instance failed to spawn: 
rbd.InvalidArgument: [errno 22] error flattening 
b'fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6_disk'
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] Traceback (most recent call last):
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/compute/manager.py", line 
5757, in _unshelve_instance
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] block_device_info=block_device_info)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3457, in spawn
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] block_device_info=block_device_info)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3832, in _create_image
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] fallback_from_host)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3923, in _create_and_inject_local_root
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] instance, size, fallback_from_host)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 9267, in _try_fetch_image_cache
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] image.flatten()
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py",
 line 983, in flatten
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] self.driver.flatten(self.rbd_name, 
pool=self.driver.pool)
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
 line 290, in flatten
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] vol.flatten()
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6]   File 
"/var/lib/openstack/lib/python3.6/site-packages/eventlet/tpool.py", line 190, 
in doit
  2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [instance: 
fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2020-01-23 

[Yahoo-eng-team] [Bug 1863113] [NEW] [RFE] Introduce new testing framework for Neutron - OVN integration - a.k.a George

2020-02-13 Thread Jakub Libosvar
Public bug reported:

Currently there is a testing framework in the Neutron tree called
fullstack that has proven very useful over its lifetime; it has
discovered multiple issues that were not revealed by any other test
suite.

With networking-ovn, there is a new POC of a similar tool, where
multiple environments can run on a single host in parallel, simulating a
multi-node network and injecting failures. The tool uses containers
managed by podman to isolate Neutron processes; essentially, each
container represents one node in the cluster. The host network provides
the underlying networking between containers via podman networks, which
in practice use Linux bridges on the hypervisor.

There is already a WIP patch [1] sent to upstream gerrit to prove its
functionality on Ubuntu boxes.

The goal of this RFE is to deliver the framework to the Neutron tree;
later we can expand the test coverage or copy tests from the fullstack
suite, as lots of things are common between them.

[1] https://review.opendev.org/#/c/696926/

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: New


** Tags: ovn rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863113

Title:
  [RFE] Introduce new testing framework for Neutron - OVN integration -
  a.k.a George

Status in neutron:
  New

Bug description:
  Currently there is a testing framework in the Neutron tree called
  fullstack that has proven very useful over its lifetime; it has
  discovered multiple issues that were not revealed by any other test
  suite.

  With networking-ovn, there is a new POC of a similar tool, where
  multiple environments can run on a single host in parallel, simulating
  a multi-node network and injecting failures. The tool uses containers
  managed by podman to isolate Neutron processes; essentially, each
  container represents one node in the cluster. The host network provides
  the underlying networking between containers via podman networks, which
  in practice use Linux bridges on the hypervisor.

  There is already a WIP patch [1] sent to upstream gerrit to prove its
  functionality on Ubuntu boxes.

  The goal of this RFE is to deliver the framework to the Neutron tree;
  later we can expand the test coverage or copy tests from the fullstack
  suite, as lots of things are common between them.

  [1] https://review.opendev.org/#/c/696926/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863110] [NEW] 2/3 snat namespace transitions to master

2020-02-13 Thread Marek Grudzinski
Public bug reported:

neutron version: 14.0.2
general deployment version: stein
deployment method: kolla-ansible
neutron configuration:
 - l3 = ha
 - agent_mode = dvr_snat
 - ovs
general info: multi node deployment, ca ~100 computes

when spawning larger heat stacks with multiple instances (think k8s
infrastructure) sometimes (roughly 50%) we get a "split brain" on snat
namespaces.

logs looks like this on one of the three controller/network nodes.

11:53:43.402  Handling notification for router 
2a218a31-2ef6-406a-a719-17965600e182, state master enqueue 
/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/ha.py:50
11:53:43.403  Router 2a218a31-2ef6-406a-a719-17965600e182 transitioned to master

and then this happens on another of the three controller/network nodes.

11:53:57.582  Handling notification for router 
2a218a31-2ef6-406a-a719-17965600e182, state master enqueue 
/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/ha.py:50
11:53:57.583  Router 2a218a31-2ef6-406a-a719-17965600e182 transitioned to master

so neutron sets up all routes on both controller nodes and wreaks havoc on 
sessions that instances create to the outside. obviously deleting the 
routes from the faulty namespace solves the issue.
i can't really find the reason for it being promoted to master even when 
looking through the debug logs. would greatly appreciate any helpful pointers 
(a diagnosis sketch follows).
the only thing i can think of is some kind of race condition happening and 
therefore everything in neutron looks fine.
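
a hedged diagnosis sketch (classic neutron CLI, deprecated but still
available in stein; the namespace name assumes the dvr_snat convention):

# which agent does the server side consider active for the router?
neutron l3-agent-list-hosting-router 2a218a31-2ef6-406a-a719-17965600e182

# on each controller, only the active node's snat namespace should hold
# the routes:
ip netns exec snat-2a218a31-2ef6-406a-a719-17965600e182 ip route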

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863110

Title:
  2/3 snat namespace transitions to master

Status in neutron:
  New

Bug description:
  neutron version: 14.0.2
  general deployment version: stein
  deployment method: kolla-ansible
  neutron configuration:
   - l3 = ha
   - agent_mode = dvr_snat
   - ovs
  general info: multi node deployment, ca ~100 computes

  when spawning larger heat stacks with multiple instances (think k8s
  infrastructure) sometimes (roughly 50%) we get a "split brain" on snat
  namespaces.

  logs looks like this on one of the three controller/network nodes.

  11:53:43.402  Handling notification for router 
2a218a31-2ef6-406a-a719-17965600e182, state master enqueue 
/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/ha.py:50
  11:53:43.403  Router 2a218a31-2ef6-406a-a719-17965600e182 transitioned to 
master

  and then this happens on another of the three controller/network
  nodes.

  11:53:57.582  Handling notification for router 
2a218a31-2ef6-406a-a719-17965600e182, state master enqueue 
/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/ha.py:50
  11:53:57.583  Router 2a218a31-2ef6-406a-a719-17965600e182 transitioned to 
master

  so neutron sets up all routes on both controller nodes and wreaks havoc on 
sessions that instances create to the outside. obviously deleting the routes 
from the faulty namespace solves the issue.
  i can't really find the reason for it being promoted to master even when 
looking through the debug logs. would greatly appreciate any helpful pointers.
  the only thing i can think of is some kind of race condition happening and 
therefore everything in neutron looks fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862927] Re: "ncat" rootwrap filter is missing

2020-02-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/707368
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0ef4233d891f8fa42a073901051bf0310f61eebb
Submitter: Zuul
Branch:master

commit 0ef4233d891f8fa42a073901051bf0310f61eebb
Author: Rodolfo Alonso Hernandez 
Date:   Wed Feb 12 11:43:27 2020 +

Add "ncat" rootwrap filter for debug

In [1], new tests to check "ncat" tool were added. The missing piece
of this patch was to add a new rootwrap filter to allow to execute
"ncat" binary as root and inside a namespace.

Closes-Bug: #1862927

[1]https://review.opendev.org/#/q/If8cf47a01dc353734ad07ca6cd4db7bec6c90fb6

Change-Id: I8e8e5cd8c4027cce58c7073002120d14f251463d
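
For reference, an oslo.rootwrap filter of this kind looks roughly as
follows (a hedged sketch of the filter-file format, not a verbatim copy
of the merged change):

# etc/neutron/rootwrap.d/debug.filters
[Filters]
ncat: CommandFilter, ncat, root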


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862927

Title:
  "ncat" rootwrap filter is missing

Status in neutron:
  Fix Released

Bug description:
  "ncat" rootwrap filter is missing, as we can see in [1].

  Log:
  RuntimeError: Process ['ncat', '0.0.0.0', '1234', '-l', '-k'] hasn't been 
spawned in 20 seconds. Return code: 99, stdout: , sdterr: 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap:
 Unauthorized command: ip netns exec nc-2aefd97b-cf51-4404-804b-b61dc17ce59f 
ncat 0.0.0.0 1234 -l -k (no filter matched)

  
  [1] 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b89/701733/28/check/neutron-functional/b89805d/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863098] [NEW] Install and configure in keystone

2020-02-13 Thread Assassins!
Public bug reported:

- [x] I have a fix to the document that I can paste below including
example: input and output.

When setting up MySQL before installing and configuring the Identity
service, the commands listed in the documentation only work on MySQL
versions before 8.0:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

But in MySQL 8.0 and later, where GRANT can no longer create users, this
must be done in two separate steps.

First, create the users:
[mysql]> CREATE USER 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
[mysql]> CREATE USER 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Second, grant permissions:
[mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost';
[mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';


If you have a troubleshooting or support issue, use the following  resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2019-09-18 18:54:05
SHA: 3d26cffc2393ae0270b5d073397ffaccc7dde20b
Source: 
https://opendev.org/openstack/keystone/src/doc/source/install/keystone-install-ubuntu.rst
URL: 
https://docs.openstack.org/keystone/latest/install/keystone-install-ubuntu.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1863098

Title:
  Install and configure in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:
  - [x] I have a fix to the document that I can paste below including
  example: input and output.

  When setting up MySQL before installing and configuring the Identity
  service, the commands listed in the documentation only work on MySQL
  versions before 8.0:

  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';

  GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';

  But in MySQL 8.0 and later, where GRANT can no longer create users,
  this must be done in two separate steps.

  First, create the users:
  [mysql]> CREATE USER 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
  [mysql]> CREATE USER 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

  Second, grant permissions:
  [mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost';
  [mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%';

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-09-18 18:54:05
  SHA: 3d26cffc2393ae0270b5d073397ffaccc7dde20b
  Source: 
https://opendev.org/openstack/keystone/src/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/latest/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1863098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863091] [NEW] IPVS setup fails with openvswitch firewall driver, works with iptables_hybrid

2020-02-13 Thread Dr. Jens Harbott
Public bug reported:

We have an IPVS setup deployed according to
https://cloudbau.github.io/openstack/loadbalancing/networking/ipvs/2017/03/20/ipvs-direct-routing-on-top-of-openstack.html
which stopped working after upgrading from Queens to Rocky and switching
from the iptables_hybrid firewall driver to the native openvswitch
firewall driver.

The issue can be resolved by reverting to the iptables_hybrid driver on
the compute node hosting the LB instance (see the config sketch below).
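
The workaround in config form (a hedged sketch; the agent config path
may vary by deployment):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini on the compute node
# hosting the LB instance; restart neutron-openvswitch-agent afterwards
[securitygroup]
firewall_driver = iptables_hybrid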

This is on Ubuntu Bionic using the Rocky UCA, Neutron version
13.0.6-0ubuntu1~cloud0.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863091

Title:
  IPVS setup fails with openvswitch firewall driver, works with
  iptables_hybrid

Status in neutron:
  New

Bug description:
  We have an IPVS setup deployed according to
  https://cloudbau.github.io/openstack/loadbalancing/networking/ipvs/2017/03/20/ipvs-direct-routing-on-top-of-openstack.html
  which stopped working after upgrading from Queens to Rocky and
  switching from the iptables_hybrid firewall driver to the native
  openvswitch firewall driver.

  The issue can be resolved by reverting to the iptables_hybrid driver
  on the compute node hosting the LB instance.

  This is on Ubuntu Bionic using the Rocky UCA, Neutron version
  13.0.6-0ubuntu1~cloud0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863068] [NEW] Duplicated Neutron Meter Rules in different projects kill metering

2020-02-13 Thread Merlin
Public bug reported:

I want to use Neutron Meter with gnocchi to report the egress bandwidth used 
for public traffic.
So I created neutron meter labels and neutron meter rules to include all IPv4 
traffic:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| direction         | egress                                           |
| id                | f2c9b9a8-0af3-40a5-a718-6e841bad111d             |
| is_excluded       | False                                            |
| location          | cloud='', project.domain_id='default',           |
|                   | project.domain_name=,                            |
|                   | project.id='80120067cd7949908e44dce45aeb7712',   |
|                   | project.name='billing', region_name='xxx', zone= |
| metering_label_id | d0068fc8-4a3e-4108-aa11-e3c171d4d1e1             |
| name              | None                                             |
| project_id        | None                                             |
| remote_ip_prefix  | 0.0.0.0/0                                        |
+-------------------+--------------------------------------------------+

And excluded all private nets:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| direction         | egress                                           |
| id                | 838c9631-665b-42b6-b1e9-539983a38573             |
| is_excluded       | True                                             |
| location          | cloud='', project.domain_id='default',           |
|                   | project.domain_name=,                            |
|                   | project.id='80120067cd7949908e44dce45aeb7712',   |
|                   | project.name='billing', region_name='xxx', zone= |
| metering_label_id | 435652e6-e985-4351-a31a-954bace9eea0             |
| name              | None                                             |
| project_id        | None                                             |
| remote_ip_prefix  | 10.0.0.0/8                                       |
+-------------------+--------------------------------------------------+
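
For reference, rules like the above can be created per project with OSC
along these lines (a hedged sketch; the label name is illustrative):

openstack network meter create --description "public egress" public-egress
openstack network meter rule create --egress --remote-ip-prefix 0.0.0.0/0 public-egress
openstack network meter rule create --egress --exclude --remote-ip-prefix 10.0.0.0/8 public-egress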

It works fine for just one project, but if I apply it to all projects it
fails and no measures are recorded in gnocchi.

The neutron-metering-agent.log shows the following warning:
Feb 13 09:14:18 xxx_host neutron-metering-agent: 2020-02-13 09:14:09.648 4732 
WARNING neutron.agent.linux.iptables_manager 
[req-4c38f1f5-2db4-4d4a-9c1f-9585b1b50427 65c6d4bdcbc7469a910f6361b7f70f27 
80120067cd7949908e44dce45aeb7712 - - -] Duplicate iptables rule detected. This 
may indicate a bug in the iptables rule generation code. Line: -A 
neutron-meter-r-28155d45-d16 -s 10.0.0.0/8 -o qg-c61bafef-ea -j RETURN

I would expect that it is possible to have similar rules for different
projects.

What do you think? Is the problem in the rule generation code?

In the iptables_manager code the function itself is flagged as a
workaround:
https://github.com/openstack/neutron/blob/86e4f141159072421a19080455caba1b0efef776/neutron/agent/linux/iptables_manager.py
# TODO(kevinbenton): remove this function and the next one. They are
# just oversized brooms to sweep bugs under the rug!!! We generate the
# rules and we shouldn't be generating duplicates.
def _weed_out_duplicates(line):
if line in seen_lines:
thing = 'chain' if line.startswith(':') else 'rule'

[Yahoo-eng-team] [Bug 1863058] [NEW] Arm64 CI for Nova

2020-02-13 Thread Kevin Zhao
Public bug reported:

Linaro has donated a cluster for OpenStack CI on Arm64.
The cluster is now ready:
https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl03.openstack.org.yaml#L414

We'd like to set up CI for Nova first.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863058

Title:
  Arm64 CI for Nova

Status in OpenStack Compute (nova):
  New

Bug description:
  Linaro has donated a cluster for OpenStack CI on Arm64.
  The cluster is now ready:
https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl03.openstack.org.yaml#L414

  We'd like to set up CI for Nova first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp