[Yahoo-eng-team] [Bug 1806504] Re: userdata runcmd overwrites runcmd under cloud.cfg.d

2019-08-14 Thread Tim Penhey
** Changed in: juju
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1806504

Title:
  userdata runcmd overwrites runcmd under cloud.cfg.d

Status in cloud-init:
  Incomplete
Status in juju:
  Invalid
Status in MAAS:
  Invalid

Bug description:
  I'm using Juju to deploy machines. I added a runcmd section in
  /etc/cloud.cfg.d/60-my-conf.cfg:

  ```
  merge_how: 'list(append)+dict(recurse_array)+str()'

  runcmd:
   - echo "run in 60 my cloudinit cfg"
   - usermod -aG docker ubuntu

  merge_how: 'list(append)+dict(recurse_array)+str()'
  ```

  This configuration works when deploying a machine with MAAS, but it
  didn't work when deploying with Juju; merge_how seems to have no
  effect.
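  For reference, the merge semantics that merge_how string requests can be
  sketched in plain Python. This is only an illustration of the
  list(append)/dict(recurse_array)/str() behaviour, not cloud-init's actual
  merger code:

```python
# Sketch of 'list(append)+dict(recurse_array)+str()' semantics:
# dicts merge recursively, lists are appended, and for plain strings
# the existing (base) value wins. Illustration only.
def merge(base, new):
    if isinstance(base, dict) and isinstance(new, dict):
        out = dict(base)
        for key, value in new.items():
            out[key] = merge(out[key], value) if key in out else value
        return out
    if isinstance(base, list) and isinstance(new, list):
        return base + new
    return base  # str(): keep the existing value

system_cfg = {"runcmd": ['echo "run in 60 my cloudinit cfg"',
                         'usermod -aG docker ubuntu']}
user_data = {"runcmd": ['echo "from user-data"']}

# With append semantics both runcmd lists survive, instead of the
# user-data list overwriting the one from cloud.cfg.d.
print(merge(system_cfg, user_data)["runcmd"])
```

  Under these semantics the merged runcmd contains all three commands.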

  I wanted to use "cloud-init single" to see how runcmd executes, but it
  seems to display only the last execution's status:

  ```
  cloud-init single --name cc_runcmd --frequency always
  Cloud-init v. 18.4-0ubuntu1~16.04.2 running 'single' at Tue, 04 Dec 2018 00:01:54 +. Up 1338.24 seconds
  ```

  I haven't found a good way to debug this problem; I'd appreciate any
  debugging commands or tips for this kind of issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1806504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831958] Re: Soft-reboot fails after live migrate VM

2019-08-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1831958

Title:
  Soft-reboot fails after live migrate VM

Status in OpenStack Compute (nova):
  Expired

Bug description:
  After live-migrating a VM, I get the running config with "virsh
  domblklist" and compare it with the domain XML file; they differ.
  Example:

  - using virsh domblklist:

  # virsh domblklist instance-290a
  Target     Source
  ---------------------------------
  vda        /dev/sdcf

  - reading the XML file (most of the disk XML was stripped from this
  message; only the UUID survived):

  1afefa91-17db-4c1d-af2f-defdd993ddbc
  When I soft-reboot this VM (a Linux guest) and open its console, it is,
  surprisingly, Windows.
  Please tell me the cause and how to fix it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1831958/+subscriptions



[Yahoo-eng-team] [Bug 1840200] Re: Misuse of 'assert_has_calls' in unit tests

2019-08-14 Thread Takashi NATSUME
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/pike
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/queens
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/rocky
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/stein
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/stein
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840200

Title:
  Misuse of 'assert_has_calls' in unit tests

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  In unit tests, the 'has_calls' method is used to assert mock calls,
  but 'has_calls' is not one of Mock's assertion methods; Mock silently
  auto-creates it as an attribute, so the check never fails. It should
  be 'assert_has_calls'.
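  The failure mode is easy to demonstrate with the standard library:
  accessing an unknown attribute on a Mock just creates a child mock, so the
  misspelled "assertion" never fails.

```python
from unittest import mock

m = mock.Mock()
m(1)
m(2)

# Typo: 'has_calls' is not an assertion method. Mock auto-creates it as
# a child mock, so this "check" silently passes regardless of the calls.
result = m.has_calls([mock.call(99)])
print(isinstance(result, mock.Mock))  # True: no assertion happened

# Correct spelling: this actually verifies the recorded calls and
# raises AssertionError on a mismatch.
m.assert_has_calls([mock.call(1), mock.call(2)])
```

  This is why such tests pass even when the code under test makes the wrong
  calls.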

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840200/+subscriptions



[Yahoo-eng-team] [Bug 1840200] [NEW] Misuse of 'assert_has_calls' in unit tests

2019-08-14 Thread Takashi NATSUME
Public bug reported:

In unit tests, the 'has_calls' method is used to assert mock calls,
but 'has_calls' is not one of Mock's assertion methods; Mock silently
auto-creates it as an attribute, so the check never fails. It should
be 'assert_has_calls'.

** Affects: nova
 Importance: Medium
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840200

Title:
  Misuse of 'assert_has_calls' in unit tests

Status in OpenStack Compute (nova):
  New

Bug description:
  In unit tests, the 'has_calls' method is used to assert mock calls,
  but 'has_calls' is not one of Mock's assertion methods; Mock silently
  auto-creates it as an attribute, so the check never fails. It should
  be 'assert_has_calls'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840200/+subscriptions



[Yahoo-eng-team] [Bug 1836015] Re: [neutron-fwaas]firewall group status is inactive when updating policy in fwg

2019-08-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/670010
Committed: https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=3817119959f34ea2002608a43b350f3dd65ae26d
Submitter: Zuul
Branch: master

commit 3817119959f34ea2002608a43b350f3dd65ae26d
Author: zhanghao2 
Date:   Tue Jul 23 06:30:24 2019 -0400

Fix bug when updating policy in firewall group

When updating only the policy in firewall group, the 'del-port-ids'
and 'add-port-ids' return empty list, which causes the fwg status
to be inactive and iptables in the router namespace are not changed.
This patch fixes the above problem.

Change-Id: I1a4bc0a8258fbbc340825cccb6d287c94304d3c5
Closes-Bug: #1836015


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836015

Title:
  [neutron-fwaas]firewall group status is inactive when updating policy
  in fwg

Status in neutron:
  Fix Released

Bug description:
  [root@controller neutron]# openstack firewall group show fwg1
  +-------------------+-------------------------------------------+
  | Field             | Value                                     |
  +-------------------+-------------------------------------------+
  | Description       |                                           |
  | Egress Policy ID  | 57a7506f-f841-4679-bf90-e1e33ccc9dc7      |
  | ID                | f4558994-d207-4183-a077-ea7837574ccf      |
  | Ingress Policy ID | 57a7506f-f841-4679-bf90-e1e33ccc9dc7      |
  | Name              | fwg1                                      |
  | Ports             | [u'139e9560-9b72-4135-a3d4-94bf7cafbd6a'] |
  | Project           | 8c91479bacc64574b828d4809e2d23c2          |
  | Shared            | False                                     |
  | State             | UP                                        |
  | Status            | ACTIVE                                    |
  | project_id        | 8c91479bacc64574b828d4809e2d23c2          |
  +-------------------+-------------------------------------------+

  openstack firewall group set fwg1 --no-ingress-firewall-policy

  [root@controller neutron]# openstack firewall group show fwg1
  +-------------------+-------------------------------------------+
  | Field             | Value                                     |
  +-------------------+-------------------------------------------+
  | Description       |                                           |
  | Egress Policy ID  | 57a7506f-f841-4679-bf90-e1e33ccc9dc7      |
  | ID                | f4558994-d207-4183-a077-ea7837574ccf      |
  | Ingress Policy ID | None                                      |
  | Name              | fwg1                                      |
  | Ports             | [u'139e9560-9b72-4135-a3d4-94bf7cafbd6a'] |
  | Project           | 8c91479bacc64574b828d4809e2d23c2          |
  | Shared            | False                                     |
  | State             | UP                                        |
  | Status            | INACTIVE                                  |
  | project_id        | 8c91479bacc64574b828d4809e2d23c2          |
  +-------------------+-------------------------------------------+

  iptables in the router namespace has not changed.
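  A sketch of the shape of the fix described in the commit message above.
  All names here are illustrative assumptions, not neutron-fwaas's actual
  code: when an update touches only the policy, the add/del port lists come
  back empty, and the agent should fall back to the group's existing ports
  instead of concluding there is nothing to configure.

```python
# Hypothetical helper, names assumed for illustration only.
def ports_to_configure(fwg_ports, add_port_ids, del_port_ids):
    if not add_port_ids and not del_port_ids:
        # Policy-only update: reapply rules to the ports already in the
        # group rather than treating the group as having no ports
        # (which is what left the fwg INACTIVE here).
        return list(fwg_ports)
    keep = [p for p in fwg_ports if p not in del_port_ids]
    return keep + [p for p in add_port_ids if p not in keep]

# Policy-only update on a group with one port still returns that port:
print(ports_to_configure(["139e9560"], [], []))
```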

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1836015/+subscriptions



[Yahoo-eng-team] [Bug 1839560] Related fix merged to nova (master)

2019-08-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/675705
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=89dd74ac7f1028daadf86cb18948e27fe9d1d411
Submitter: Zuul
Branch: master

commit 89dd74ac7f1028daadf86cb18948e27fe9d1d411
Author: Matt Riedemann 
Date:   Fri Aug 9 17:24:07 2019 -0400

Add functional regression recreate test for bug 1839560

This adds a functional test which recreates bug 1839560
where the driver reports a node, then no longer reports
it so the compute manager deletes it, and then the driver
reports it again later (this can be common with ironic
nodes as they undergo maintenance). The issue is that since
Ia69fabce8e7fd7de101e291fe133c6f5f5f7056a in Rocky, the
ironic node uuid is re-used for the compute node uuid but
there is a unique constraint on the compute node uuid so
when trying to create the compute node once the ironic node
is available again, the compute node create fails with a
duplicate entry error due to the duplicate uuid. To recreate
this in the functional test, a new fake virt driver is added
which provides a predictable uuid per node like the ironic
driver. The test also shows that archiving the database is
a way to workaround the bug until it's properly fixed.

Change-Id: If822509e906d5094f13a8700b2b9ed3c40580431
Related-Bug: #1839560


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1839560

Title:
  ironic: moving node to maintenance makes it unusable afterwards

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  If you use the Ironic API to set a node into a maintenance (for
  whatever reason), it will no longer be included in the list of
  available nodes to Nova.

  When Nova refreshes its resources periodically, it will find that it
  is no longer in the list of available nodes and delete it from the
  database.

  Once you enable the node again and Nova attempts to create the
  ComputeNode again, it fails due to the duplicate UUID in the database,
  because the old record is soft deleted and had the same UUID.
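  The constraint collision can be reproduced in miniature with SQLite. The
  schema below is a simplified assumption modeled on the description: a
  unique index on uuid alone rejects the re-insert even though the old row
  is only soft-deleted.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Simplified stand-in for nova's compute_nodes table: the unique
# constraint covers uuid only, not (uuid, deleted).
db.execute("CREATE TABLE compute_nodes ("
           " id INTEGER PRIMARY KEY,"
           " uuid TEXT,"
           " deleted INTEGER DEFAULT 0,"
           " UNIQUE (uuid))")
db.execute("INSERT INTO compute_nodes (uuid) VALUES (?)",
           ("77788ad5-f1a4-46ac-8132-2d88dbd4e594",))

# Soft delete: the row stays in the table, only a flag flips.
db.execute("UPDATE compute_nodes SET deleted = 1")

# Re-enabling the node and re-creating the ComputeNode record fails,
# because the soft-deleted row still occupies the unique uuid slot.
try:
    db.execute("INSERT INTO compute_nodes (uuid) VALUES (?)",
               ("77788ad5-f1a4-46ac-8132-2d88dbd4e594",))
except sqlite3.IntegrityError as exc:
    print("re-create fails:", exc)
```

  Archiving (actually removing) the soft-deleted row is why database
  archiving works around the bug.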

  ref:
  
https://github.com/openstack/nova/commit/9f28727eb75e05e07bad51b6eecce667d09dfb65
  - this made computenode.uuid match the baremetal uuid

  
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L8304-L8316
  - this soft-deletes the computenode record when it doesn't see it in the list 
of active nodes

  
  traces:
  2019-08-08 17:20:13.921 6379 INFO nova.compute.manager 
[req-c71e5c81-eb34-4f72-a260-6aa7e802f490 - - - - -] Deleting orphan compute 
node 31 hypervisor host is 77788ad5-f1a4-46ac-8132-2d88dbd4e594, nodes are 
set([u'6d556617-2bdc-42b3-a3fe-b9218a1ebf0e', 
u'a634fab2-ecea-4cfa-be09-032dce6eaf51', 
u'2dee290d-ef73-46bc-8fc2-af248841ca12'])
  ...
  2019-08-08 22:21:25.284 82770 WARNING nova.compute.resource_tracker 
[req-a58eb5e2-9be0-4503-bf68-dff32ff87a3a - - - - -] No compute node record for 
ctl1-:77788ad5-f1a4-46ac-8132-2d88dbd4e594: ComputeHostNotFound_Remote: 
Compute host ctl1- could not be found.
  
  Remote error: DBDuplicateEntry (pymysql.err.IntegrityError) (1062, 
u"Duplicate entry '77788ad5-f1a4-46ac-8132-2d88dbd4e594' for key 
'compute_nodes_uuid_idx'")
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1839560/+subscriptions



[Yahoo-eng-team] [Bug 1840068] Re: (lxc) Instance failed to spawn: TypeError: object of type 'filter' has no len() - python3

2019-08-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/676263
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=fc9fb383c16ecb98b1b546f21e7fabb5f00a42ac
Submitter: Zuul
Branch: master

commit fc9fb383c16ecb98b1b546f21e7fabb5f00a42ac
Author: Sean Mooney 
Date:   Tue Aug 13 18:58:41 2019 +0100

lxc: make use of filter python3 compatible

_detect_nbd_devices uses the filter
builtin internally to filter valid devices.

In python 2, filter returns a list. In python 3,
filter returns an iterable or generator function.
This change eagerly converts the result of calling filter
to a list to preserve the python 2 behaviour under python 3.

Closes-Bug: #1840068

Change-Id: I25616c5761ea625a15d725777ae58175651558f8
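The Python 2 vs 3 difference the commit describes, in isolation (device
names here are placeholders):

```python
devices = ["nbd0", "loop0", "nbd1"]

# On Python 3, filter() returns a lazy iterator, which has no length.
filtered = filter(lambda d: d.startswith("nbd"), devices)
try:
    len(filtered)
except TypeError as exc:
    print(exc)  # object of type 'filter' has no len()

# The fix: eagerly convert to a list, restoring the Python 2 behaviour.
nbd_devices = list(filter(lambda d: d.startswith("nbd"), devices))
print(len(nbd_devices))  # 2
```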


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840068

Title:
  (lxc) Instance failed to spawn: TypeError: object of type 'filter' has
  no len() - python3

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  Seen in the nova-lxc CI job here:

  https://logs.opendev.org/24/676024/2/experimental/nova-lxc/f9a892c/controller/logs/screen-n-cpu.txt.gz#_Aug_12_23_31_05_043911

  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [None req-55d6dd1b-96ca-4afe-9a0c-cec902d3bd87 
tempest-ServerAddressesTestJSON-1311986476 
tempest-ServerAddressesTestJSON-1311986476] [instance: 
842a9704-3700-42ef-adb3-b038ca525127] Instance failed to spawn: TypeError: 
object of type 'filter' has no len()
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
Traceback (most recent call last):
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/opt/stack/nova/nova/compute/manager.py", line 2495, in _build_resources
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
yield resources
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/opt/stack/nova/nova/compute/manager.py", line 2256, in 
_build_and_run_instance
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
block_device_info=block_device_info)
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3231, in spawn
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
destroy_disks_on_failure=True)
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5823, in 
_create_domain_and_network
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
destroy_disks_on_failure)
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
self.force_reraise()
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127] 
six.reraise(self.type_, self.value, self.tb)
  Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]: 
ERROR nova.compute.manager [instance: 842a9704-3700-42ef-adb3-b038ca525127]   
File "/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
  Aug 12 23:31:05.043911 

[Yahoo-eng-team] [Bug 1839814] Re: Docs: No info about Jinja templates require text/jinja mime type

2019-08-14 Thread StephenKing
Oh..

> If you're choosing the specify a mime type, then you do have to choose
the correct mime type.

By that you obviously mean the MIME type is optional. D'oh... I thought
it was required, but it is actually optional in Terraform as well, and
the one who put the wrong value there was obviously me.
Sorry for that mistake, thanks for helping me!

Steffen

** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1839814

Title:
  Docs: No info about Jinja templates require text/jinja mime type

Status in cloud-init:
  Invalid

Bug description:
  The "Instance Metadata" documentation [1] describes nicely, how to use Jinja 
in cloud-config templates as well as shell scripts.
  It states that the first line must be

  > ## template: jinja

  But only after reading the code did I find out that parsing only
  happens when the MIME type is "text/jinja". Otherwise, the template
  is not rendered by Jinja.
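  If a MIME type is attached explicitly, it has to be the right one. A
  minimal sketch of building such a user-data part with the standard
  library (the template content is illustrative):

```python
from email.mime.text import MIMEText

# A cloud-config template; the '## template: jinja' header marks it
# for Jinja rendering, and the part is labeled text/jinja.
template = ("## template: jinja\n"
            "#cloud-config\n"
            "runcmd:\n"
            " - echo {{ v1.cloud_name }}\n")

part = MIMEText(template, _subtype="jinja")
print(part["Content-Type"])  # text/jinja; charset="us-ascii"
```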

  Running cloud-init 19.1 on Ubuntu 18.04 on AWS.

  
  [1] 
https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#using-instance-data

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1839814/+subscriptions



[Yahoo-eng-team] [Bug 1784874] Re: ResourceTracker doesn't clean up compute_nodes or stats entries

2019-08-14 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
   Importance: Undecided => Low

** Changed in: nova/ocata
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784874

Title:
  ResourceTracker doesn't clean up compute_nodes or stats entries

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  This was noted in review:

  https://review.openstack.org/#/c/587636/4/nova/compute/resource_tracker.py@141

  That the ResourceTracker.compute_nodes and ResourceTracker.stats (and
  old_resources) entries only grow and are never cleaned up as we
  rebalance nodes or nodes are deleted, which means it just takes up
  memory over time.

  When we cleanup compute nodes here:

  
https://github.com/openstack/nova/blob/47ef500f4492c731ebfa33a12822ef6b5db4e7e2/nova/compute/manager.py#L7759

  We should probably call a cleanup hook into the ResourceTracker to
  cleanup those entries as well.
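  The suggested hook might look roughly like this. The class shape and
  method name are assumptions for illustration, not nova's actual code:

```python
# Illustrative-only sketch of a ResourceTracker cleanup hook.
class ResourceTracker:
    def __init__(self):
        self.compute_nodes = {}   # nodename -> compute node record
        self.stats = {}           # nodename -> per-node stats
        self.old_resources = {}   # nodename -> last reported resources

    def remove_node(self, nodename):
        # Called when the compute manager deletes an orphan node, so the
        # per-node caches stop growing without bound as nodes rebalance
        # or are deleted.
        self.compute_nodes.pop(nodename, None)
        self.stats.pop(nodename, None)
        self.old_resources.pop(nodename, None)

rt = ResourceTracker()
rt.compute_nodes["node1"] = {"uuid": "abc"}
rt.stats["node1"] = {"vcpus_used": 2}
rt.remove_node("node1")
print(rt.compute_nodes, rt.stats)  # {} {}
```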

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784874/+subscriptions



[Yahoo-eng-team] [Bug 1784874] Re: ResourceTracker doesn't clean up compute_nodes or stats entries

2019-08-14 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784874

Title:
  ResourceTracker doesn't clean up compute_nodes or stats entries

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  This was noted in review:

  https://review.openstack.org/#/c/587636/4/nova/compute/resource_tracker.py@141

  That the ResourceTracker.compute_nodes and ResourceTracker.stats (and
  old_resources) entries only grow and are never cleaned up as we
  rebalance nodes or nodes are deleted, which means it just takes up
  memory over time.

  When we cleanup compute nodes here:

  
https://github.com/openstack/nova/blob/47ef500f4492c731ebfa33a12822ef6b5db4e7e2/nova/compute/manager.py#L7759

  We should probably call a cleanup hook into the ResourceTracker to
  cleanup those entries as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784874/+subscriptions



[Yahoo-eng-team] [Bug 1839860] Re: cloud-init error while MAAS commissioning (PXE phase) P9 Witherspoon

2019-08-14 Thread Frank Heimes
So after restarting bind9 things work again: DNS lookups in the petitboot 
shell, netboot installs (including download of the installer files), and 
MAAS commissioning.
Not sure what caused bind9 to run wild.
It would be nice to have an option to restart selected services (like DNS) 
from the MAAS GUI (as admin).

Thanks for your help - closing ticket.

** Changed in: maas
   Status: Incomplete => Fix Released

** Changed in: ubuntu-power-systems
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1839860

Title:
  cloud-init error while MAAS commissioning (PXE phase) P9 Witherspoon

Status in cloud-init:
  Invalid
Status in MAAS:
  Fix Released
Status in The Ubuntu-power-systems project:
  Fix Released

Bug description:
  While trying to commission bobone (a P9 Witherspoon machine with
  OpenBMC in the Server team's MAAS), the PXE phase ended with the
  following cloud-init error (shown on the SOL console):

   Starting Wait until snapd is fully seeded...

  Ubuntu 18.04.3 LTS ubuntu hvc0

  ubuntu login: [  131.162174] cloud-init[5497]: Can not apply stage config, no 
datasource found! Likely bad things to come!
  [  131.162320] cloud-init[5497]: 

  [  131.162414] cloud-init[5497]: Traceback (most recent call last):
  [  131.162512] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 485, in 
main_modules
  [  131.162614] cloud-init[5497]: init.fetch(existing="trust")
  [  131.162678] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 351, in fetch
  [  131.162776] cloud-init[5497]: return 
self._get_data_source(existing=existing)
  [  131.162851] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 261, in 
_get_data_source
  [  131.162934] cloud-init[5497]: pkg_list, self.reporter)
  [  131.163005] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 741, in 
find_source
  [  131.163104] cloud-init[5497]: raise DataSourceNotFoundException(msg)
  [  131.163177] cloud-init[5497]: 
cloudinit.sources.DataSourceNotFoundException: Did not find any data source, 
searched classes: ()
  [  131.163269] cloud-init[5497]: 

  [  131.53] cloud-init[5551]: Can not apply stage final, no datasource 
found! Likely bad things to come!
  [  131.566820] cloud-init[5551]: 

  [  131.566922] cloud-init[5551]: Traceback (most recent call last):
  [  131.567004] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 485, in 
main_modules
  [  131.567116] cloud-init[5551]: init.fetch(existing="trust")
  [  131.567193] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 351, in fetch
  [  131.567274] cloud-init[5551]: return 
self._get_data_source(existing=existing)
  [  131.567348] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 261, in 
_get_data_source
  [  131.567438] cloud-init[5551]: pkg_list, self.reporter)
  [  131.567508] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 741, in 
find_source
  [  131.567598] cloud-init[5551]: raise DataSourceNotFoundException(msg)
  [  131.567679] cloud-init[5551]: 
cloudinit.sources.DataSourceNotFoundException: Did not find any data source, 
searched classes: ()
  [  131.567779] cloud-init[5551]: 


  Ubuntu 18.04.3 LTS ubuntu hvc0

  ubuntu login:


  MAAS log (from UI):

  TIME                       EVENT
    Mon, 12 Aug. 2019 11:10:45 PXE Request - commissioning
    Mon, 12 Aug. 2019 11:08:43 Node powered on
    Mon, 12 Aug. 2019 11:08:14 Powering node on
    Mon, 12 Aug. 2019 11:08:14 Node - Started commissioning on 'bobone'.
    Mon, 12 Aug. 2019 11:08:14 Node changed status - From 'New' to 
'Commissioning'
    Mon, 12 Aug. 2019 11:08:14 User starting node commissioning - (jfh)
    Mon, 12 Aug. 2019 11:07:04 Node powered off


  With that, the system cannot complete the commissioning phase.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1839860/+subscriptions



[Yahoo-eng-team] [Bug 1840159] [NEW] nova-grenade-live-migration intermittently fails with "Error monitoring migration: Timed out during operation: cannot acquire state change lock (held by remoteDisp

2019-08-14 Thread Matt Riedemann
Public bug reported:

Seen here:

https://logs.opendev.org/21/655721/14/check/nova-grenade-live-migration/2ee634d/logs/subnode-2/screen-n-cpu.txt.gz?level=TRACE#_Aug_13_10_03_49_974378

Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 
nova-compute[25863]: WARNING nova.virt.libvirt.driver [-] [instance: 
a1637e8b-6f2d-4127-9799-31cefb3f43a6] Error monitoring migration: Timed out 
during operation: cannot acquire state change lock (held by 
remoteDispatchDomainMigratePerform3Params): libvirtError: Timed out during 
operation: cannot acquire state change lock (held by 
remoteDispatchDomainMigratePerform3Params)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 
nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: 
a1637e8b-6f2d-4127-9799-31cefb3f43a6] Traceback (most recent call last):
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 
nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: 
a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File 
"/opt/stack/old/nova/nova/virt/libvirt/driver.py", line 8052, in _live_migration
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 
nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: 
a1637e8b-6f2d-4127-9799-31cefb3f43a6] finish_event, disk_paths)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 
nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: 
a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File 
"/opt/stack/old/nova/nova/virt/libvirt/driver.py", line 7857, in 
_live_migration_monitor
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 
nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: 
a1637e8b-6f2d-4127-9799-31cefb3f43a6] info = guest.get_job_info()
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File "/opt/stack/old/nova/nova/virt/libvirt/guest.py", line 709, in get_job_info
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] stats = self._domain.jobStats()
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 190, in doit
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] result = proxy_call(self._autowrap, f, *args, **kwargs)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 148, in proxy_call
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] rv = execute(f, *args, **kwargs)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 129, in execute
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] six.reraise(c, e, tb)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] rv = meth(*args, **kwargs)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6]   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1403, in jobStats
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] if ret is None: raise libvirtError ('virDomainGetJobStats() failed', dom=self)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 nova-compute[25863]: ERROR nova.virt.libvirt.driver [instance: a1637e8b-6f2d-4127-9799-31cefb3f43a6] libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePerform3Params)
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920 

[Yahoo-eng-team] [Bug 1839860] Re: cloud-init error while MAAS commissioning (PXE phase) P9 Witherspoon

2019-08-14 Thread Ryan Harper
Looks like there's nothing for cloud-init here.  Please reopen the
cloud-init task if that changes.

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1839860

Title:
  cloud-init error while MAAS commissioning (PXE phase) P9 Witherspoon

Status in cloud-init:
  Invalid
Status in MAAS:
  Incomplete
Status in The Ubuntu-power-systems project:
  New

Bug description:
  While trying to commission bobone (a P9 Witherspoon machine with
  OpenBMC in the Server team's MAAS), the PXE phase ended with the following
  cloud-init error (shown on the SOL console):

   Starting Wait until snapd is fully seeded...

  Ubuntu 18.04.3 LTS ubuntu hvc0

  ubuntu login: [  131.162174] cloud-init[5497]: Can not apply stage config, no 
datasource found! Likely bad things to come!
  [  131.162320] cloud-init[5497]: 

  [  131.162414] cloud-init[5497]: Traceback (most recent call last):
  [  131.162512] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 485, in 
main_modules
  [  131.162614] cloud-init[5497]: init.fetch(existing="trust")
  [  131.162678] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 351, in fetch
  [  131.162776] cloud-init[5497]: return 
self._get_data_source(existing=existing)
  [  131.162851] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 261, in 
_get_data_source
  [  131.162934] cloud-init[5497]: pkg_list, self.reporter)
  [  131.163005] cloud-init[5497]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 741, in 
find_source
  [  131.163104] cloud-init[5497]: raise DataSourceNotFoundException(msg)
  [  131.163177] cloud-init[5497]: 
cloudinit.sources.DataSourceNotFoundException: Did not find any data source, 
searched classes: ()
  [  131.163269] cloud-init[5497]: 

  [  131.53] cloud-init[5551]: Can not apply stage final, no datasource 
found! Likely bad things to come!
  [  131.566820] cloud-init[5551]: 

  [  131.566922] cloud-init[5551]: Traceback (most recent call last):
  [  131.567004] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 485, in 
main_modules
  [  131.567116] cloud-init[5551]: init.fetch(existing="trust")
  [  131.567193] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 351, in fetch
  [  131.567274] cloud-init[5551]: return 
self._get_data_source(existing=existing)
  [  131.567348] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 261, in 
_get_data_source
  [  131.567438] cloud-init[5551]: pkg_list, self.reporter)
  [  131.567508] cloud-init[5551]:   File 
"/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 741, in 
find_source
  [  131.567598] cloud-init[5551]: raise DataSourceNotFoundException(msg)
  [  131.567679] cloud-init[5551]: 
cloudinit.sources.DataSourceNotFoundException: Did not find any data source, 
searched classes: ()
  [  131.567779] cloud-init[5551]: 


  Ubuntu 18.04.3 LTS ubuntu hvc0

  ubuntu login:


  MAAS log (from UI):

  TIME  EVENT
    Mon, 12 Aug. 2019 11:10:45 PXE Request - commissioning
    Mon, 12 Aug. 2019 11:08:43 Node powered on
    Mon, 12 Aug. 2019 11:08:14 Powering node on
    Mon, 12 Aug. 2019 11:08:14 Node - Started commissioning on 'bobone'.
    Mon, 12 Aug. 2019 11:08:14 Node changed status - From 'New' to 
'Commissioning'
    Mon, 12 Aug. 2019 11:08:14 User starting node commissioning - (jfh)
    Mon, 12 Aug. 2019 11:07:04 Node powered off


  With that, the system cannot complete the Commissioning phase.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1839860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1839491] Re: Manually performed partitioning changes get reverted on reboot

2019-08-14 Thread Ryan Harper
Moving cloud-init task to invalid, no bug/work for cloud-init.

** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1839491

Title:
  Manually performed partitioning changes get reverted on reboot

Status in cloud-init:
  Invalid
Status in MAAS:
  Triaged

Bug description:
  Hello,

  I am facing an issue where I need to make changes to the initially
  deployed partition layout, but upon making those changes and
  rebooting, the partition layout gets reverted.

  My env:
  MAAS version: 2.6.0 (7802-g59416a869-0ubuntu1~18.04.1)
  System vendor: HP
  System product: ProLiant DL360 Gen9 (780021-S01)
  System version: Unknown
  Mainboard product: ProLiant DL360 Gen9
  Mainboard firmware version: P89
  Mainboard firmware date: 12/27/2015
  CPU model: Intel(R) Xeon(R) CPU E5-2690 v3
  Deployed (16.04 LTS "Xenial Xerus")
  Kernel: xenial (ga-16.04)
  Power type: ipmi
  Power driver: LAN_2_0 [IPMI 2.0]
  Power boot type: EFI boot
  Architecture amd64/generic
  Minimum Kernel: no minimum kernel
  Interfaces: eno1, eno2, eno3, eno4, eno49, eno50. Only eno49 is used.
  Storage: sda Physical 1TB, sdb Physical 1TB.

  
  Steps to reproduce:

  1. Deploy MAAS with the following partition configuration:
  sda-part1 536.9 MB Partition fat32 formatted filesystem mounted at /boot/efi
  sda-part2 100.0 GB Partition ext4 formatted filesystem mounted at /

  2. Check the partitions on the node:

  $ lsblk

  NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
  sda      8:0    0 931.5G  0 disk 
  |-sda1   8:1    0   512M  0 part /boot/efi
  `-sda2   8:2    0   931G  0 part /
  sdb      8:16   0 931.5G  0 disk 

  
  Here we notice the initial partitioning scheme is not respected. This could 
be related to the main issue of partitioning changes being reverted, but could 
also be a separate issue.

  3. Boot an ubuntu ISO and go into rescue mode. I used
  ubuntu-16.04.6-server-amd64.iso

  4. Choose "Do not use a root filesystem" and "Execute a shell in the
  installer environment".

  5. Run the following commands:

  $ e2fsck -f /dev/sda2

  $ resize2fs /dev/sda2 150G

  $ e2fsck -f /dev/sda2

  $ sudo parted /dev/sda

  (parted) unit GiB print

  (parted) resizepart

  Partition number? 2

  End? 200GiB

  (parted) print

  You should see partition 2 resized.

  (parted) quit

  $ e2fsck -f /dev/sda2

  6. Confirm

  $ fdisk -l

  7. Sync writes

  $ sync

  8. Reboot the node. Remove ISO image.

  9. Let system boot, check partitions again:

  $ lsblk

  NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
  sda      8:0    0 931.5G  0 disk 
  |-sda1   8:1    0   512M  0 part /boot/efi
  `-sda2   8:2    0   931G  0 part /
  sdb      8:16   0 931.5G  0 disk 

  We can see that the changes were reverted.

  If I remove cloud-init, I can successfully re-partition and reboot,
  without the changes being reverted.

  Attached logs before and after repartition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1839491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840147] [NEW] network-vif-plugged and friends are not well documented

2019-08-14 Thread YAMAMOTO Takashi
Public bug reported:

network-vif-plugged and friends are not well documented.
the timing when those events are sent varies among drivers.
an inter-project api like this should be defined and documented super-clearly.

the behaviour isn't consistent even within reference implementations.
nova folks call them bind-time events vs plug-time events.
https://review.opendev.org/#/c/667177/

networking-midonet always makes compute ports ACTIVE. (create-time events?)
https://github.com/openstack/networking-midonet/blob/stable/stein/midonet/neutron/ml2/mech_driver.py#L178
it stopped working recently.
https://bugs.launchpad.net/networking-midonet/+bug/1839169

networking-ovn has a relevant dirty hack.
https://review.opendev.org/#/c/673803/
it has a nice bug description.
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1834045

references:
https://wiki.openstack.org/wiki/Nova/ExternalEventAPI

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1840147

Title:
  network-vif-plugged and friends are not well documented

Status in neutron:
  New

Bug description:
  network-vif-plugged and friends are not well documented.
  the timing when those events are sent varies among drivers.
  an inter-project api like this should be defined and documented super-clearly.

  the behaviour isn't consistent even within reference implementations.
  nova folks call them bind-time events vs plug-time events.
  https://review.opendev.org/#/c/667177/

  networking-midonet always makes compute ports ACTIVE. (create-time events?)
  
https://github.com/openstack/networking-midonet/blob/stable/stein/midonet/neutron/ml2/mech_driver.py#L178
  it stopped working recently.
  https://bugs.launchpad.net/networking-midonet/+bug/1839169

  networking-ovn has a relevant dirty hack.
  https://review.opendev.org/#/c/673803/
  it has a nice bug description.
  https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1834045

  references:
  https://wiki.openstack.org/wiki/Nova/ExternalEventAPI

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1840147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840139] [NEW] Libvirt: Wrong usage for mem_stats_period_seconds

2019-08-14 Thread Dongcan Ye
Public bug reported:

From the code, in function _guest_add_memory_balloon in [1], if 
mem_stats_period_seconds is set to 0 or a negative value, the memory balloon 
device will be disabled.
Can mem_stats_period_seconds control whether the virtual memory balloon device 
is added? Shouldn't it only control memory usage statistics?

But when I test with mem_stats_period_seconds=0, the virtual memory
balloon device is still added.

[1]
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
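
For reference, the distinction plays out in the libvirt domain XML. The
following is a hedged sketch (not nova's verbatim output; the virtio model and
the 10-second period are assumptions) of a balloon device with statistics
enabled versus one explicitly disabled:

```xml
<!-- Balloon device present, with memory statistics collected every 10s
     (what a positive mem_stats_period_seconds would be expected to produce) -->
<memballoon model='virtio'>
  <stats period='10'/>
</memballoon>

<!-- Balloon device explicitly disabled. Without model='none', QEMU adds a
     default memballoon device, which may explain why the device still shows
     up when mem_stats_period_seconds=0 -->
<memballoon model='none'/>
```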

** Affects: nova
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840139

Title:
  Libvirt: Wrong usage for mem_stats_period_seconds

Status in OpenStack Compute (nova):
  New

Bug description:
  From the code, in function _guest_add_memory_balloon in [1], if 
mem_stats_period_seconds is set to 0 or a negative value, the memory balloon 
device will be disabled.
  Can mem_stats_period_seconds control whether the virtual memory balloon 
device is added? Shouldn't it only control memory usage statistics?

  But when I test with mem_stats_period_seconds=0, the virtual memory
  balloon device is still added.

  [1]
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840136] [NEW] [RFE] Add new config option to enable IGMP snooping in ovs

2019-08-14 Thread Slawek Kaplonski
Public bug reported:

This is a proposal to add a new config option to the OVS agent's section to 
have neutron-ovs-agent enable multicast snooping in br-int.
This option can be useful for users who want support for multicast traffic, 
which will then be treated as "real" multicast instead of broadcast 
delivered to all ports.

Some details about this are described in
https://gist.github.com/djoreilly/a22ca4f38396e8867215fca0ad67fa28

From neutron's point of view it would require only a config option in 
neutron-ovs-agent and some code changes on the agent's side.
No changes to the API or DB layers are required by this RFE.
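
At the OVS level, the agent-side change would presumably boil down to
something like the following commands (a sketch only; whether the agent also
tunes flood behaviour is an assumption, not part of the RFE text):

```shell
# Enable IGMP snooping on the integration bridge
ovs-vsctl set Bridge br-int mcast_snooping_enable=true

# Optionally stop flooding unregistered multicast traffic to all ports
ovs-vsctl set Bridge br-int \
    other_config:mcast-snooping-disable-flood-unregistered=true
```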

** Affects: neutron
 Importance: Wishlist
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1840136

Title:
  [RFE] Add new config option to enable IGMP snooping in ovs

Status in neutron:
  New

Bug description:
  This is a proposal to add a new config option to the OVS agent's section to 
have neutron-ovs-agent enable multicast snooping in br-int.
  This option can be useful for users who want support for multicast traffic, 
which will then be treated as "real" multicast instead of broadcast 
delivered to all ports.

  Some details about this are described in
  https://gist.github.com/djoreilly/a22ca4f38396e8867215fca0ad67fa28

  From neutron's point of view it would require only a config option in 
neutron-ovs-agent and some code changes on the agent's side.
  No changes to the API or DB layers are required by this RFE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1840136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840094] Re: api-ref: layout of descriptions of host_status is broken

2019-08-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/676301
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=abfb28291afb3b1ae0345a4f3cbb3d68c925a90b
Submitter: Zuul
Branch: master

commit abfb28291afb3b1ae0345a4f3cbb3d68c925a90b
Author: Takashi NATSUME 
Date:   Wed Aug 14 10:40:31 2019 +0900

api-ref: Fix collapse of 'host_status' description

Fix collapse of 'host_status' description in the following APIs
in the compute API reference.

- PUT /servers/{server_id}
- POST /servers/{server_id}/action (rebuild)

Change-Id: I003f9a81ac6f7e0ec13a24db3fda1b7ff6612bc5
Closes-Bug: #1840094


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840094

Title:
  api-ref: layout of descriptions of host_status is broken

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The layout of descriptions of 'host_status' is out of shape in the
  following APIs in the compute API reference.

  - PUT /servers/{server_id} (Update Server)
  - POST /servers/{server_id}/action (Rebuild Server (rebuild Action))

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1839491] Re: Manually performed partitioning changes get reverted on reboot

2019-08-14 Thread Björn Tillenius
Sounds like this is indeed an issue in MAAS then. MAAS should turn off
growpart, since we know how big the disks are already and can set up the
right partition size during installation.
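
Until MAAS does that, a hedged workaround sketch for affected deployments is
to drop a config fragment that disables cloud-init's growpart/resizefs
modules (the filename below is illustrative, not mandated):

```yaml
# e.g. /etc/cloud/cloud.cfg.d/99-disable-growpart.cfg (hypothetical name)
# Tell cc_growpart not to resize any partitions on boot
growpart:
  mode: off
# Tell cc_resizefs not to grow the root filesystem
resize_rootfs: false
```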

** Changed in: maas
   Status: Invalid => Triaged

** Changed in: maas
   Importance: Undecided => High

** Changed in: maas
Milestone: None => 2.7.0alpha1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1839491

Title:
  Manually performed partitioning changes get reverted on reboot

Status in cloud-init:
  Incomplete
Status in MAAS:
  Triaged

Bug description:
  Hello,

  I am facing an issue where I need to make changes to the initially
  deployed partition layout, but upon making those changes and
  rebooting, the partition layout gets reverted.

  My env:
  MAAS version: 2.6.0 (7802-g59416a869-0ubuntu1~18.04.1)
  System vendor: HP
  System product: ProLiant DL360 Gen9 (780021-S01)
  System version: Unknown
  Mainboard product: ProLiant DL360 Gen9
  Mainboard firmware version: P89
  Mainboard firmware date: 12/27/2015
  CPU model: Intel(R) Xeon(R) CPU E5-2690 v3
  Deployed (16.04 LTS "Xenial Xerus")
  Kernel: xenial (ga-16.04)
  Power type: ipmi
  Power driver: LAN_2_0 [IPMI 2.0]
  Power boot type: EFI boot
  Architecture amd64/generic
  Minimum Kernel: no minimum kernel
  Interfaces: eno1, eno2, eno3, eno4, eno49, eno50. Only eno49 is used.
  Storage: sda Physical 1TB, sdb Physical 1TB.

  
  Steps to reproduce:

  1. Deploy MAAS with the following partition configuration:
  sda-part1 536.9 MB Partition fat32 formatted filesystem mounted at /boot/efi
  sda-part2 100.0 GB Partition ext4 formatted filesystem mounted at /

  2. Check the partitions on the node:

  $ lsblk

  NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
  sda      8:0    0 931.5G  0 disk 
  |-sda1   8:1    0   512M  0 part /boot/efi
  `-sda2   8:2    0   931G  0 part /
  sdb      8:16   0 931.5G  0 disk 

  
  Here we notice the initial partitioning scheme is not respected. This could 
be related to the main issue of partitioning changes being reverted, but could 
also be a separate issue.

  3. Boot an ubuntu ISO and go into rescue mode. I used
  ubuntu-16.04.6-server-amd64.iso

  4. Choose "Do not use a root filesystem" and "Execute a shell in the
  installer environment".

  5. Run the following commands:

  $ e2fsck -f /dev/sda2

  $ resize2fs /dev/sda2 150G

  $ e2fsck -f /dev/sda2

  $ sudo parted /dev/sda

  (parted) unit GiB print

  (parted) resizepart

  Partition number? 2

  End? 200GiB

  (parted) print

  You should see partition 2 resized.

  (parted) quit

  $ e2fsck -f /dev/sda2

  6. Confirm

  $ fdisk -l

  7. Sync writes

  $ sync

  8. Reboot the node. Remove ISO image.

  9. Let system boot, check partitions again:

  $ lsblk

  NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
  sda      8:0    0 931.5G  0 disk 
  |-sda1   8:1    0   512M  0 part /boot/efi
  `-sda2   8:2    0   931G  0 part /
  sdb      8:16   0 931.5G  0 disk 

  We can see that the changes were reverted.

  If I remove cloud-init, I can successfully re-partition and reboot,
  without the changes being reverted.

  Attached logs before and after repartition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1839491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp