[Yahoo-eng-team] [Bug 1727602] [NEW] nova-specs: tests.test_titles.TestTitles.test_template fails

2017-10-25 Thread Takashi NATSUME
Public bug reported:

In nova-specs, tests.test_titles.TestTitles.test_template fails when
'tox -e py27' is executed.

stack@devstack-master:/tmp/nova-specs$ tox -e py27
py27 develop-inst-nodeps: /tmp/nova-specs
py27 installed: 
alabaster==0.7.10,Babel==2.5.1,certifi==2017.7.27.1,chardet==3.0.4,cssselect==1.0.1,docutils==0.14,extras==1.0.0,fixtures==3.0.0,idna==2.6,imagesize==0.7.1,Jinja2==2.9.6,linecache2==1.0.0,lxml==4.1.0,MarkupSafe==1.0,-e
 
git+https://git.openstack.org/openstack/nova-specs.git@a081ad1f0a6028d0fcb9f1a383fc96217f5c3ddf#egg=nova_specs,oslosphinx==4.17.0,pbr==3.1.1,pkg-resources==0.0.0,Pygments==2.2.0,pyquery==1.3.0,python-mimeparse==1.6.0,python-subunit==1.2.0,pytz==2017.2,requests==2.18.4,six==1.11.0,snowballstemmer==1.2.1,Sphinx==1.6.5,sphinxcontrib-websupport==1.0.1,testrepository==0.0.20,testtools==2.3.0,traceback2==1.4.0,typing==3.6.2,unittest2==1.1.0,urllib3==1.22,yasfb==0.6.1
py27 runtests: PYTHONHASHSEED='2547028128'
py27 runtests: commands[0] | find . -type f -name *.pyc -delete
py27 runtests: commands[1] | python setup.py testr --slowest --testr-args=
running testr
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 
${PYTHON:-python} -m subunit.run discover -t ./ . --list 
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 
${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmp5_llTX
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 
${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpW704PS
==
FAIL: tests.test_titles.TestTitles.test_template
tags: worker-0
--
Traceback (most recent call last):
  File "tests/test_titles.py", line 131, in test_template
self._check_titles(filename, template_titles, titles)
  File "tests/test_titles.py", line 69, in _check_titles
% (filename, "\n  ".join(msgs)))
  File 
"/tmp/nova-specs/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: While checking 'specs/backlog/approved/instance-tasks.rst':
  Section 'Proposed change' is missing subsections: [u'Upgrade impact']
Ran 2 tests in 0.101s (-0.022s)
FAILED (id=1, failures=1)
error: testr failed (1)
ERROR: InvocationError: '/tmp/nova-specs/.tox/py27/bin/python setup.py testr 
--slowest --testr-args='

 summary 

ERROR:   py27: commands failed
stack@devstack-master:/tmp/nova-specs$ git log -1
commit a081ad1f0a6028d0fcb9f1a383fc96217f5c3ddf
Merge: 65dabe0 7af4f1f
Author: Zuul 
Date:   Wed Oct 25 23:23:50 2017 +

Merge "Spec for API extensions policy removal"

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress
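
The failing test compares each spec's section tree against the template; the core of that kind of check can be sketched as a set difference (data layout and names here are hypothetical, not nova-specs' actual test code):

```python
# Sketch of a template-title check like the one failing above.
# The dict layout and function name are illustrative only.

def check_subsections(template, spec, section):
    """Return template subsections of `section` that `spec` is missing."""
    required = set(template.get(section, []))
    present = set(spec.get(section, []))
    return sorted(required - present)

template = {"Proposed change": ["Alternatives", "Upgrade impact"]}
spec = {"Proposed change": ["Alternatives"]}  # 'Upgrade impact' was never added

print(check_subsections(template, spec, "Proposed change"))
# ['Upgrade impact'] -- matching the assertion message in the traceback
```

Adding the missing subsection to the spec empties the difference, which is the shape of the fix for this kind of failure.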

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727602

Title:
  nova-specs: tests.test_titles.TestTitles.test_template fails

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In nova-specs, tests.test_titles.TestTitles.test_template fails when
  'tox -e py27' is executed.

  stack@devstack-master:/tmp/nova-specs$ tox -e py27
  py27 develop-inst-nodeps: /tmp/nova-specs
  py27 installed: 
alabaster==0.7.10,Babel==2.5.1,certifi==2017.7.27.1,chardet==3.0.4,cssselect==1.0.1,docutils==0.14,extras==1.0.0,fixtures==3.0.0,idna==2.6,imagesize==0.7.1,Jinja2==2.9.6,linecache2==1.0.0,lxml==4.1.0,MarkupSafe==1.0,-e
 
git+https://git.openstack.org/openstack/nova-specs.git@a081ad1f0a6028d0fcb9f1a383fc96217f5c3ddf#egg=nova_specs,oslosphinx==4.17.0,pbr==3.1.1,pkg-resources==0.0.0,Pygments==2.2.0,pyquery==1.3.0,python-mimeparse==1.6.0,python-subunit==1.2.0,pytz==2017.2,requests==2.18.4,six==1.11.0,snowballstemmer==1.2.1,Sphinx==1.6.5,sphinxcontrib-websupport==1.0.1,testrepository==0.0.20,testtools==2.3.0,traceback2==1.4.0,typing==3.6.2,unittest2==1.1.0,urllib3==1.22,yasfb==0.6.1
  py27 runtests: PYTHONHASHSEED='2547028128'
  py27 runtests: commands[0] | find . -type f -name *.pyc -delete
  py27 runtests: commands[1] | python setup.py testr --slowest --testr-args=
  running testr
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 
${PYTHON:-python} -m subunit.run discover -t ./ . --list 
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 
${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmp5_llTX
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 
${PYTHON:-python} -m subunit.run discover -t ./ .  --load-list /tmp/tmpW704PS
  

[Yahoo-eng-team] [Bug 1727598] [NEW] API to list free ips

2017-10-25 Thread machi
Public bug reported:

An API is needed that can show the unused IP addresses in an IP pool range.
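
For illustration, the stdlib `ipaddress` module is enough to compute the free addresses in a range given the allocated ones (a sketch of the computation, not the requested neutron API; the function name is mine):

```python
import ipaddress

def free_ips(start, end, allocated):
    """List addresses in [start, end] that are not in `allocated`."""
    used = {ipaddress.ip_address(a) for a in allocated}
    first = int(ipaddress.ip_address(start))
    last = int(ipaddress.ip_address(end))
    return [str(ipaddress.ip_address(i))
            for i in range(first, last + 1)
            if ipaddress.ip_address(i) not in used]

print(free_ips("10.0.0.2", "10.0.0.6", ["10.0.0.3", "10.0.0.5"]))
# ['10.0.0.2', '10.0.0.4', '10.0.0.6']
```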

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727598

Title:
  API to list free ips

Status in neutron:
  New

Bug description:
  An API is needed that can show the unused IP addresses in an IP pool range.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1727342] Re: Failed to delete lbaas-pool if L7 policy/rules attached to that pool

2017-10-25 Thread Akihiro Motoki
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727342

Title:
  Failed to delete lbaas-pool if L7 policy/rules attached to that pool

Status in octavia:
  New

Bug description:
  Created LoadBalancer, Listener, Pool, members and healthmonitor.
  After this, I created an L7 policy using the pool created above, and also
  added an L7 rule to the policy.
  When I try to delete the pool directly without deleting the L7 policy, an
  exception is raised on the prompt.

  $ neutron lbaas-pool-delete pool2
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Driver error: Bad lbaas-pool request: Failed to delete lb pool

  q-svc.log file logs pasted in link:
  http://paste.openstack.org/show/624607/

  ##
  Commands used:
  ##
  neutron lbaas-loadbalancer-create --name lb1  public-subnet
  neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTPS 
--protocol-port 443 --name listener1
  neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 
--protocol HTTPS --name web_pool
  neutron lbaas-member-create --subnet private-subnet --address 10.0.0.7 
--protocol-port 443 web_pool
  #L7 policy redirect to pool
  neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --name policy1 
--redirect-pool web_pool --listener listener1
  neutron lbaas-l7rule-create --type PATH --compare-type=STARTS_WITH  --value 
/api policy1

  Why do we impose the condition that a pool can only be deleted after its
  L7 rule has been deleted?

  
  Steps:
  
  1) Create LB, listener, pool, member and healthmonitor.
  2) Create an L7 policy using the pool created in step 1.
  3) Add a rule inside the L7 policy.
  4) Try to delete the pool. It fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1727342/+subscriptions



[Yahoo-eng-team] [Bug 1727328] Re: Inspite of "Listener protocol HTTPS and pool protocol HTTP are not compatible" pool entry is getting added to neutron db

2017-10-25 Thread Akihiro Motoki
neutron-lbaas bugs are tracked in the octavia project, not in neutron

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727328

Title:
  Inspite of "Listener protocol HTTPS and pool protocol HTTP are not
  compatible" pool entry is getting added to neutron db

Status in octavia:
  New

Bug description:
  I created an lbaas-listener using the HTTPS protocol and an lbaas-pool
  using HTTP.
  I got the exception "Listener protocol HTTPS and pool protocol HTTP are not
  compatible", which is acceptable.
  But after this, when I check neutron lbaas-pool-list, it shows the pool was
  added.
  Since the exception was raised for incompatibility, why are we adding this
  pool to the neutron db?

  Please check this link for commands executed:
  http://paste.openstack.org/show/624603/

  Steps to reproduce:-
  1) Create LB
  2) Create listener of type HTTPS.
  3) Create pool of protocol type HTTP.
  4) The exception appears, but if we check neutron lbaas-pool-list, the pool
  entry is visible.

  Can we stop adding the pool to the neutron db when an exception is raised
  while adding it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1727328/+subscriptions



[Yahoo-eng-team] [Bug 1726871] Re: AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'

2017-10-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/514825
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1ca191fcc4d809c991c23dedc951bbe7206edf1d
Submitter: Zuul
Branch:master

commit 1ca191fcc4d809c991c23dedc951bbe7206edf1d
Author: Matt Riedemann 
Date:   Tue Oct 24 16:45:57 2017 -0400

Fix AttributeError in BlockDeviceMapping.obj_load_attr

The BDM has no uuid attribute so the debug message in here
would result in an AttributeError. This has been around since
the creation of this object, and the debug log message was
probably copied from the Instance object.

This was only exposed in Pike when this code started
lazy-loading the instance field:

  I1dc54a38f02bb48921bcbc4c2fdcc2c946e783c1

So this change fixes that bug and adds tests for obj_load_attr.

Change-Id: I8b55227b1530a76c2f396c035384abd89237d936
Closes-Bug: #1726871
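
The bug class is easy to reproduce in isolation: an error-handling path that itself touches a missing attribute masks the intended error with an AttributeError (illustrative object, not nova's actual BlockDeviceMapping):

```python
# Minimal reproduction of the failure mode described in the commit message.

class BlockDeviceMappingLike(object):
    def obj_load_attr(self, name):
        # Bug: this object has no 'uuid', so formatting the message raises
        # AttributeError instead of the intended NotImplementedError.
        raise NotImplementedError(
            "Cannot load %s on %s" % (name, self.uuid))

bdm = BlockDeviceMappingLike()
try:
    bdm.obj_load_attr("instance")
except AttributeError as e:
    print(e)  # AttributeError about 'uuid', masking the intended error
```

The fix is simply not to reference the nonexistent attribute in the log/error message.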


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1726871

Title:
  AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  running Pike, Cinder is configured with disabled NAS security and
  enabled snapshots in NFS driver.

  While trying to make a snapshot nova-api reports:

  2017-10-24 13:36:16.558 39 INFO nova.osapi_compute.wsgi.server 
[req-e6c3dec2-40ea-4858-af91-3d77395d6978 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] 
10.196.245.222,10.196.245.203 "GET /v2.1/ HTTP/1.1" status: 200 len: 763 time: 
0.0097859
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
[req-cb2a54c0-6103-41ca-a5dc-587e301bfbb1 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] Unexpected exception in API 
method: AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/extensions.py",
 line 336, in wrapped
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 108, in wrapper
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/compute/assisted_volume_snapshots.py",
 line 52, in create
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
create_info)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/api.py", line 
4165, in volume_snapshot_create
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
do_volume_snapshot_create(self, context, bdm.instance)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 67, in getter
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
self.obj_load_attr(name)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/objects/block_device.py", 
line 288, in obj_load_attr
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 'uuid': 
self.uuid,
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
  2017-10-24 13:36:17.317 39 INFO nova.api.openstack.wsgi 
[req-cb2a54c0-6103-41ca-a5dc-587e301bfbb1 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1726871/+subscriptions



[Yahoo-eng-team] [Bug 1727578] [NEW] [RFE]Support apply qos policy in VPN service

2017-10-25 Thread zhaobo
Public bug reported:


Issue
-----
For site-to-site VPN, we need to limit the bandwidth of the VPN services,
as the VPN tunnel consumes outside public bandwidth provided by the ISP or
other organizations. That means it is not free: the OpenStack provider or
users must pay for the limited bandwidth.


Propose
-------
So VPNaaS needs a way to meet this requirement by applying a Neutron QoS
policy to a VPN service; the associated ServiceConnection would then be
subject to that QoS policy, limiting east-west traffic.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727578

Title:
  [RFE]Support apply qos policy in VPN service

Status in neutron:
  New

Bug description:
  
  Issue
  -----
  For site-to-site VPN, we need to limit the bandwidth of the VPN services,
  as the VPN tunnel consumes outside public bandwidth provided by the ISP or
  other organizations. That means it is not free: the OpenStack provider or
  users must pay for the limited bandwidth.

  
  Propose
  -------
  So VPNaaS needs a way to meet this requirement by applying a Neutron QoS
  policy to a VPN service; the associated ServiceConnection would then be
  subject to that QoS policy, limiting east-west traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727578/+subscriptions



[Yahoo-eng-team] [Bug 1691274] Re: Error on adding duplicated secgroup

2017-10-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/465173
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=083bc89f99e007c400b9f89c63ac0459da910df8
Submitter: Zuul
Branch:master

commit 083bc89f99e007c400b9f89c63ac0459da910df8
Author: Hongbin Lu 
Date:   Tue May 16 20:54:25 2017 +

Handle exception on adding secgroup

If user adds security group to an instance and the instance already
has that security group, neutron will return a 400 response. Nova
should handle the 400 response properly. In before, Nova doesn't
seem to handle this case and end-user gets a 500 response. This
commit fixed it.

Closes-Bug: #1691274
Change-Id: I58b19ef6b537d690df90e542b6af3c64773ecc87
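
The shape of the fix can be sketched as catching the backend's 400 and re-raising it as a client error instead of letting it surface as a 500 (exception names here are illustrative, not nova's actual classes):

```python
# Sketch: translate a backend 400 into a client-facing 400, not a 500.

class NeutronClientException(Exception):
    def __init__(self, status_code, message):
        super().__init__(message)
        self.status_code = status_code

class HTTPBadRequest(Exception):
    pass

def add_security_group(call_backend):
    try:
        call_backend()
    except NeutronClientException as e:
        if e.status_code == 400:
            raise HTTPBadRequest(str(e))  # surface a 400 to the user
        raise                             # anything else still propagates

def duplicate_add():
    raise NeutronClientException(400, "Security group already associated")

try:
    add_security_group(duplicate_add)
except HTTPBadRequest as e:
    print("client error:", e)
```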


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691274

Title:
  Error on adding duplicated secgroup

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  Steps to reproduce:

  openstack server create --flavor m1.tiny --security-group default --image 
cirros-0.3.5-x86_64-disk myinstance
  $ openstack server add security group myinstance default
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-b5f07687-7b4d-4ff3-90b4-0835dbeef9c4)

  The error in n-api: http://paste.openstack.org/show/609722/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1691274/+subscriptions



[Yahoo-eng-team] [Bug 1720873] Re: config drive guide was not migrated from openstack-manuals

2017-10-25 Thread OpenStack Infra
*** This bug is a duplicate of bug 1714017 ***
https://bugs.launchpad.net/bugs/1714017

Reviewed:  https://review.openstack.org/514723
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=59bd2f6adc1777c7c8d7a2f442be965fecb573a0
Submitter: Zuul
Branch:master

commit 59bd2f6adc1777c7c8d7a2f442be965fecb573a0
Author: Matt Riedemann 
Date:   Tue Oct 24 11:51:47 2017 -0400

Import the config drive docs from openstack-manuals

As part of the docs migration from openstack-manuals to
nova in the pike release we missed the config-drive docs.

This change does the following:

1. Imports the config-drive doc into the user guide.
2. Fixes a broken link to the metadata service in the doc.
3. Removes a note about liberty being the current release.
4. Adds a link in the API reference parameters to actually
   point at the document we have in tree now, which is
   otherwise not very discoverable as the main index does
   not link to this page (or the user index for that matter).

Partial-Bug: #1714017
Closes-Bug: #1720873

Change-Id: I1d54e1f5a1a94e9821efad99b7fa430bd8fece0a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1720873

Title:
  config drive guide was not migrated from openstack-manuals

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  It seems that this guide in openstack-manuals:

  https://github.com/openstack/openstack-manuals/blob/stable/ocata/doc
  /user-guide/source/cli-config-drive.rst

  Was not migrated to the nova repo in pike.

  The compute API reference refers to it though in the "config_drive"
  parameter for the "Create Server" action:

  https://developer.openstack.org/api-ref/compute/#create-server

  "Read more in the OpenStack End User Guide."

  Part of that guide is for end users and part of it is for operators,
  so we should probably split it up when we migrate it into nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1720873/+subscriptions



[Yahoo-eng-team] [Bug 1727558] [NEW] libvirt driver ignores 'disk_cachemodes' configuration setting

2017-10-25 Thread melanie witt
Public bug reported:

The libvirt driver is ignoring the 'disk_cachemodes' configuration
setting and is always setting "cache='none'" in the device xml.

For example, with a setting in nova.conf of
"disk_cachemodes=network=writeback":

Expected result:

# virsh dumpxml <instance> | grep cache
      <driver ... cache='writeback'/>

Actual result:

# virsh dumpxml <instance> | grep cache
      <driver ... cache='none'/>

This is a regression in pike [1] that was also backported to ocata and
newton.

[1] https://review.openstack.org/#/c/485752
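
For reference, the option value maps a disk type to a cache mode (e.g. "network=writeback"); a sketch of parsing such a value (illustrative only, not nova's actual config parsing code):

```python
# Parse entries like "network=writeback" into a {disk_type: cache_mode} map.

def parse_disk_cachemodes(values):
    modes = {}
    for entry in values:
        disk_type, _, cache_mode = entry.partition("=")
        modes[disk_type.strip()] = cache_mode.strip()
    return modes

modes = parse_disk_cachemodes(["network=writeback"])
print(modes.get("network", "none"))  # writeback -- the expected cache mode
```

The bug is that the configured mode is being ignored downstream, so the device XML always ends up with cache='none'.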

** Affects: nova
 Importance: Undecided
 Assignee: melanie witt (melwitt)
 Status: In Progress


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727558

Title:
  libvirt driver ignores 'disk_cachemodes' configuration setting

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The libvirt driver is ignoring the 'disk_cachemodes' configuration
  setting and is always setting "cache='none'" in the device xml.

  For example, with a setting in nova.conf of
  "disk_cachemodes=network=writeback":

  Expected result:

  # virsh dumpxml <instance> | grep cache
        <driver ... cache='writeback'/>

  Actual result:

  # virsh dumpxml <instance> | grep cache
        <driver ... cache='none'/>

  This is a regression in pike [1] that was also backported to ocata and
  newton.

  [1] https://review.openstack.org/#/c/485752

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727558/+subscriptions



[Yahoo-eng-team] [Bug 1727527] Re: cloud-init 0.7.5 doesn't honor the apt: proxy: setting, needs apt_proxy instead

2017-10-25 Thread Scott Moser
Older versions of cloud-init only supported the apt_proxy setting. Newer
versions support the newer 'apt' top-level key, and also convert the old
format for backward compatibility.

The documentation installed with the package mentions the older format in
its examples. We just did not use readthedocs for 0.7.5, so you won't find
documentation on it there.

Sorry, but this is just kind of the way it is.

Not all functionality has always been supported.

You're welcome to use the old format with 16.04.
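
The conversion described above can be sketched as folding the legacy key into the newer schema (simplified and hypothetical; this is not cloud-init's actual converter code):

```python
# Fold a legacy top-level 'apt_proxy' key into the newer 'apt' schema.

def convert_apt_config(cfg):
    cfg = dict(cfg)
    apt = dict(cfg.get("apt", {}))
    if "apt_proxy" in cfg and "proxy" not in apt:
        apt["proxy"] = cfg.pop("apt_proxy")  # legacy key wins only if unset
    if apt:
        cfg["apt"] = apt
    return cfg

old = {"apt_proxy": "http://1.2.3.4:3142"}
print(convert_apt_config(old))  # {'apt': {'proxy': 'http://1.2.3.4:3142'}}
```

Writing the legacy `apt_proxy` form is therefore the portable choice when a config must also work on 0.7.5.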

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1727527

Title:
  cloud-init 0.7.5 doesn't honor the apt: proxy: setting, needs
  apt_proxy instead

Status in cloud-init:
  Invalid

Bug description:
  When passing a profile with an apt proxy defined in cloud-config data,
  it isn't honored for Ubuntu 14.04, but it is for 16.04 and 17.10 (I've
  only tried those).

  Steps to reproduce:

  This uses lxd as I initially ran into this while trying to have a
  profile that would automatically set up the proxy for my containers.

  Assuming you have an apt-cacher-ng or similar at 1.2.3.4:

  lxc profile create aptcache2
  cat << EOF | lxc profile edit aptcache2
  name: aptcache
  description: set up apt caching via 1.2.3.4
  config:
    user.vendor-data: |
      #cloud-config
      apt:
        proxy: "http://1.2.3.4:3142"
  EOF

  lxc launch -p default -p aptcache2 ubuntu:14.04 y-u-no-cache
  lxc launch -p default -p aptcache2 ubuntu:16.04 i-do-cache

  lxc exec y-u-no-cache -- grep -r 3142 /etc/apt/apt.conf.d
  lxc exec i-do-cache -- grep -r 3142 /etc/apt/apt.conf.d

  Expected result:
  Both lxc exec commands should show something like:

  /etc/apt/apt.conf.d/90cloud-init-aptproxy:Acquire::http::Proxy "http://1.2.3.4:3142";

  Actual result:
  Only the command on the 16.04 container shows this, the other container has 
no proxy settings.

  The 14.04 image has cloud-init Installed: 0.7.5-0ubuntu1.22
  The 16.04 image has Installed: 0.7.9-233-ge586fe35-0ubuntu1~16.04.2

  
  Stéphane Graber pointed out that 0.7.5 supports this syntax:

  user.vendor-data: |
    #cloud-config
    apt_proxy: "http://1.2.3.4:3142"

  I didn't find any documentation on that when I went looking, current
  cloud-init documentation only mentions the apt: proxy: thing:

  http://cloudinit.readthedocs.io/en/latest/topics/examples.html#additional-apt-configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1727527/+subscriptions



[Yahoo-eng-team] [Bug 1727527] [NEW] cloud-init 0.7.5 doesn't honor the apt: proxy: setting, needs apt_proxy instead

2017-10-25 Thread Daniel Manrique
Public bug reported:

When passing a profile with an apt proxy defined in cloud-config data,
it isn't honored for Ubuntu 14.04, but it is for 16.04 and 17.10 (I've
only tried those).

Steps to reproduce:

This uses lxd as I initially ran into this while trying to have a
profile that would automatically set up the proxy for my containers.

Assuming you have an apt-cacher-ng or similar at 1.2.3.4:

lxc profile create aptcache2
cat << EOF | lxc profile edit aptcache2
name: aptcache
description: set up apt caching via 1.2.3.4
config:
  user.vendor-data: |
    #cloud-config
    apt:
      proxy: "http://1.2.3.4:3142"
EOF

lxc launch -p default -p aptcache2 ubuntu:14.04 y-u-no-cache
lxc launch -p default -p aptcache2 ubuntu:16.04 i-do-cache

lxc exec y-u-no-cache -- grep -r 3142 /etc/apt/apt.conf.d
lxc exec i-do-cache -- grep -r 3142 /etc/apt/apt.conf.d

Expected result:
Both lxc exec commands should show something like:

/etc/apt/apt.conf.d/90cloud-init-aptproxy:Acquire::http::Proxy "http://1.2.3.4:3142";

Actual result:
Only the command on the 16.04 container shows this, the other container has no 
proxy settings.

The 14.04 image has cloud-init Installed: 0.7.5-0ubuntu1.22
The 16.04 image has Installed: 0.7.9-233-ge586fe35-0ubuntu1~16.04.2


Stéphane Graber pointed out that 0.7.5 supports this syntax:

user.vendor-data: |
  #cloud-config
  apt_proxy: "http://1.2.3.4:3142"

I didn't find any documentation on that when I went looking, current
cloud-init documentation only mentions the apt: proxy: thing:

http://cloudinit.readthedocs.io/en/latest/topics/examples.html#additional-apt-configuration

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1727527

Title:
  cloud-init 0.7.5 doesn't honor the apt: proxy: setting, needs
  apt_proxy instead

Status in cloud-init:
  New

Bug description:
  When passing a profile with an apt proxy defined in cloud-config data,
  it isn't honored for Ubuntu 14.04, but it is for 16.04 and 17.10 (I've
  only tried those).

  Steps to reproduce:

  This uses lxd as I initially ran into this while trying to have a
  profile that would automatically set up the proxy for my containers.

  Assuming you have an apt-cacher-ng or similar at 1.2.3.4:

  lxc profile create aptcache2
  cat << EOF | lxc profile edit aptcache2
  name: aptcache
  description: set up apt caching via 1.2.3.4
  config:
    user.vendor-data: |
      #cloud-config
      apt:
        proxy: "http://1.2.3.4:3142"
  EOF

  lxc launch -p default -p aptcache2 ubuntu:14.04 y-u-no-cache
  lxc launch -p default -p aptcache2 ubuntu:16.04 i-do-cache

  lxc exec y-u-no-cache -- grep -r 3142 /etc/apt/apt.conf.d
  lxc exec i-do-cache -- grep -r 3142 /etc/apt/apt.conf.d

  Expected result:
  Both lxc exec commands should show something like:

  /etc/apt/apt.conf.d/90cloud-init-aptproxy:Acquire::http::Proxy "http://1.2.3.4:3142";

  Actual result:
  Only the command on the 16.04 container shows this, the other container has 
no proxy settings.

  The 14.04 image has cloud-init Installed: 0.7.5-0ubuntu1.22
  The 16.04 image has Installed: 0.7.9-233-ge586fe35-0ubuntu1~16.04.2

  
  Stéphane Graber pointed out that 0.7.5 supports this syntax:

  user.vendor-data: |
    #cloud-config
    apt_proxy: "http://1.2.3.4:3142"

  I didn't find any documentation on that when I went looking, current
  cloud-init documentation only mentions the apt: proxy: thing:

  http://cloudinit.readthedocs.io/en/latest/topics/examples.html#additional-apt-configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1727527/+subscriptions



[Yahoo-eng-team] [Bug 1727358] Re: cloud-init is slow to complete init on minimized images

2017-10-25 Thread Scott Moser
Could you run
  cloud-init collect-logs
And then attach the cloud-init.tar.gz

Also, can you provide some information on what you were running?
"cloud-init is slow to complete init on minimized images"
How can I recreate this?

One curious thing there is:
 2017-10-25 13:22:07,157 - util.py[WARNING]: did not find either path 
/sys/class/dmi/id or dmidecode command

I suspect you have a kernel without CONFIG_DMI, which seems unfortunate, or
possibly you're not on Intel or arm64.

For large jumps in your log, it took ~ 9 seconds (13:22:07,337 ->
13:22:16,112) from the exit of cloud-init-local.service to get to
'cloud-init.service' printing its hello message.  That is basically the
time it took the network to come up.

Then we have a big jump (~142 seconds):
 13:22:16,264 main.py[DEBUG]: no di_report found in config.
 13:24:38,088 stages.py[DEBUG]: Using distro class 

Those two lines in a vm I have on a openstack look like this:
2017-10-11 15:08:26,685 - main.py[DEBUG]: no di_report found in config.
2017-10-11 15:08:27,031 - stages.py[DEBUG]: Using distro class 
2017-10-11 15:08:27,032 - stages.py[DEBUG]: Running module migrator () with 
frequency always


That is ~ .5 seconds, which is not fast, but not 120 seconds either.
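
The gaps quoted above can be measured by parsing the timestamps of consecutive log lines (a small stdlib sketch; the helper name is mine):

```python
from datetime import datetime

def log_gap(line_a, line_b):
    """Seconds elapsed between two 'YYYY-MM-DD HH:MM:SS,mmm - ...' log lines."""
    fmt = "%Y-%m-%d %H:%M:%S,%f"
    t_a = datetime.strptime(line_a[:23], fmt)  # first 23 chars are the timestamp
    t_b = datetime.strptime(line_b[:23], fmt)
    return (t_b - t_a).total_seconds()

a = "2017-10-25 13:22:16,264 - main.py[DEBUG]: no di_report found in config."
b = "2017-10-25 13:24:38,088 - stages.py[DEBUG]: Using distro class"
print(log_gap(a, b))  # 141.824
```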


** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Incomplete

** Changed in: cloud-init
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1727358

Title:
  cloud-init is slow to complete init on minimized images

Status in cloud-init:
  Incomplete
Status in cloud-init package in Ubuntu:
  Incomplete

Bug description:
  http://paste.ubuntu.com/25816789/ for the full logs.

  cloud-init is very slow to complete its initialization steps.
  Specifically, the 'init' takes over 150 seconds.

  Cloud-init v. 17.1 running 'init-local' at Wed, 25 Oct 2017 13:22:07 +. 
Up 2.39 seconds.
  2017-10-25 13:22:07,157 - util.py[WARNING]: did not find either path 
/sys/class/dmi/id or dmidecode command
  Cloud-init v. 17.1 running 'init' at Wed, 25 Oct 2017 13:22:16 +. Up 
11.37 seconds.
  ci-info: ++++++++++++++++++++++++++++++ Net device info ++++++++++++++++++++++++++++++++
  ci-info: +--------+-------+-----------------+---------------+-------+-------------------+
  ci-info: | Device |   Up  |     Address     |      Mask     | Scope |     Hw-Address    |
  ci-info: +--------+-------+-----------------+---------------+-------+-------------------+
  ci-info: | ens3:  |  True | 192.168.100.161 | 255.255.255.0 |   .   | 52:54:00:bb:ad:fb |
  ci-info: | ens3:  |  True |        .        |       .       |   d   | 52:54:00:bb:ad:fb |
  ci-info: |  lo:   |  True |    127.0.0.1    |   255.0.0.0   |   .   |         .         |
  ci-info: |  lo:   |  True |        .        |       .       |   d   |         .         |
  ci-info: | sit0:  | False |        .        |       .       |   .   |         .         |
  ci-info: +--------+-------+-----------------+---------------+-------+-------------------+
  ci-info: Route IPv4 
info
  ci-info: 
+---+---+---+-+---+---+
  ci-info: | Route |  Destination  |Gateway| Genmask | 
Interface | Flags |
  ci-info: 
+---+---+---+-+---+---+
  ci-info: |   0   |0.0.0.0| 192.168.100.1 | 0.0.0.0 |ens3  
 |   UG  |
  ci-info: |   1   | 192.168.100.0 |0.0.0.0|  255.255.255.0  |ens3  
 |   U   |
  ci-info: |   2   | 192.168.100.1 |0.0.0.0| 255.255.255.255 |ens3  
 |   UH  |
  ci-info: 
+---+---+---+-+---+---+
  2017-10-25 13:24:38,187 - util.py[WARNING]: Failed to resize filesystem (cmd=('resize2fs', '/dev/root'))
  2017-10-25 13:24:38,193 - util.py[WARNING]: Running module resizefs () failed
  Generating public/private rsa key pair.
  Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
  Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
  The key fingerprint is:
  SHA256:LKNlCqqOgPB8KBKGfPhFO5Rs6fDMnAvVet/W9i4vLxY root@cloudimg
  The key's randomart image is:
  +---[RSA 2048]+
  | |
  |. +  |
  |   . O . |
  |o . % +. |
  |++.o %=.S|
  |+=ooo=+o. . .E   |
  |* +.+.   . o o.  |
  |=. .  . .=.  |
  |+.  . B= |
  +[SHA256]-+
  Generating public/private dsa key pair.
  Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
  Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
  The key fingerprint is:
  SHA256:dNWNyBHqTUCl820/vL0dEhOVDFYJzqr1WeuqV1PAmjk root@cloudimg
  The key's randomart image is:
  +---[DSA 1024]+
  | .oo=X==o|

[Yahoo-eng-team] [Bug 1715994] Re: mount-image-callback lxd: is broken with lxd 2.17

2017-10-25 Thread Scott Moser
** No longer affects: cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1715994

Title:
  mount-image-callback lxd: is broken with lxd 2.17

Status in cloud-utils:
  Confirmed
Status in cloud-utils package in Ubuntu:
  Confirmed

Bug description:
  mostly described at https://github.com/lxc/lxd/issues/3784
  lxd no longer keeps stopped containers mounted, so this is broken.

  ProblemType: Bug
  DistroRelease: Ubuntu 17.10
  Package: cloud-image-utils 0.30-0ubuntu2
  ProcVersionSignature: Ubuntu 4.12.0-11.12-generic 4.12.5
  Uname: Linux 4.12.0-11-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl zcommon znvpair
  ApportVersion: 2.20.7-0ubuntu1
  Architecture: amd64
  CurrentDesktop: GNOME
  Date: Fri Sep  8 09:59:02 2017
  EcryptfsInUse: Yes
  InstallationDate: Installed on 2015-07-23 (778 days ago)
  InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Alpha amd64 (20150722.1)
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: cloud-utils
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-utils/+bug/1715994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607345] Re: Collect all logs needed to debug curtin/cloud-init for each deployment

2017-10-25 Thread Chad Smith
=== cloud-init SRU Verification output ===
--- xenial 
root@test-xenial:~# ubuntu-bug cloud-init

*** Collecting problem information

The collected information can be sent to the developers to improve the
application. This might take a few minutes.
.
*** Your device details (lshw) may be useful to developers when addressing this 
bug, but gathering it requires admin privileges. Would you like to include this 
info?


What would you like to do? Your options are:
  Y: Yes
  N: No
  C: Cancel
Please choose (Y/N/C): y
  
*** Is this machine running in a cloud environment?


What would you like to do? Your options are:
  Y: Yes
  N: No
  C: Cancel
Please choose (Y/N/C): y

*** Please select the cloud vendor or environment in which this instance
is running


Choices:
  1: Amazon - Ec2
  2: AliYun
  3: AltCloud
  4: Azure
  5: Bigstep
  6: CloudSigma
  7: CloudStack
  8: DigitalOcean
  9: GCE - Google Compute Engine
  10: MAAS
  11: NoCloud
  12: OpenNebula
  13: OpenStack
  14: OVF
  15: Scaleway
  16: SmartOS
  17: VMware
  18: Other
  C: Cancel
Please choose (1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/C): 6
.
*** Your user-data or cloud-config file can optionally be provided from 
/var/lib/cloud/instance/user-data.txt and could be useful to developers when 
addressing this bug. Do you wish to attach user-data to this bug?


What would you like to do? Your options are:
  Y: Yes
  N: No
  C: Cancel
Please choose (Y/N/C): y
..

*** Send problem report to the developers?

After the problem report has been sent, please fill out the form in the
automatically opened web browser.

What would you like to do? Your options are:
  S: Send report (85.6 KB)
  V: View report
  K: Keep report file for sending later or copying to somewhere else
  I: Cancel and ignore future crashes of this program version
  C: Cancel
Please choose (S/V/K/I/C): v

What would you like to do? Your options are:
  S: Send report (85.6 KB)
  V: View report
  K: Keep report file for sending later or copying to somewhere else
  I: Cancel and ignore future crashes of this program version
  C: Cancel
Please choose (S/V/K/I/C): k
Problem report file: /tmp/apport.cloud-init.7_ilbqj5.apport


root@test-xenial:~# egrep 'user_data|Cloud|lshw' /tmp/apport.cloud-init.7_ilbqj5.apport
CloudName: CloudSigma
 Cloud-init v. 17.1 running 'init-local' at Mon, 16 Oct 2017 20:21:45 +. Up 0.00 seconds.
 Cloud-init v. 17.1 running 'init' at Mon, 16 Oct 2017 20:21:49 +. Up 4.00 seconds.
 Cloud-init v. 17.1 running 'modules:config' at Mon, 16 Oct 2017 20:21:50 +. Up 5.00 seconds.
 Cloud-init v. 17.1 running 'modules:final' at Mon, 16 Oct 2017 20:21:51 +. Up 6.00 seconds.
 Cloud-init v. 17.1 finished at Mon, 16 Oct 2017 20:22:26 +. Datasource DataSourceNoCloud [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net].  Up 41.00 seconds
lshw.txt:
user_data.txt:

---zesty
root@test-zesty:~# ubuntu-bug cloud-init

*** Collecting problem information

The collected information can be sent to the developers to improve the
application. This might take a few minutes.

*** Your device details (lshw) may be useful to developers when addressing this 
bug, but gathering it requires admin privileges. Would you like to include this 
info?


What would you like to do? Your options are:
  Y: Yes
  N: No
  C: Cancel
Please choose (Y/N/C): y

*** Is this machine running in a cloud environment?


What would you like to do? Your options are:
  Y: Yes
  N: No
  C: Cancel
Please choose (Y/N/C): y

*** Please select the cloud vendor or environment in which this instance
is running


Choices:
  1: Amazon - Ec2
  2: AliYun
  3: AltCloud
  4: Azure
  5: Bigstep
  6: CloudSigma
  7: CloudStack
  8: DigitalOcean
  9: GCE - Google Compute Engine
  10: MAAS
  11: NoCloud
  12: OpenNebula
  13: OpenStack
  14: OVF
  15: Scaleway
  16: SmartOS
  17: VMware
  18: Other
  C: Cancel
Please choose (1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/C): 5

*** Your user-data or cloud-config file can optionally be provided from
/var/lib/cloud/instance/user-data.txt and could be useful to developers
when addressing this bug. Do you wish to attach user-data to this bug?


What would you like to do? Your options are:
  Y: Yes
  N: No
  C: Cancel
Please choose (Y/N/C): y
..

*** Send problem report to the developers?

After the problem report has been sent, please fill out the form in the
automatically opened web browser.

What would you like to do? Your options are:
  S: Send report (102.7 KB)
  V: View report
  K: Keep report file for sending later or copying to somewhere else
  I: Cancel and ignore future crashes of this program version
  C: Cancel
Please choose (S/V/K/I/C): k
Problem report file: /tmp/apport.cloud-init.c2rk91_3.apport
root@test-zesty:~# egrep 'user_data|cloud|lshw' /tmp/apport.cloud-init.c2rk91_3.apport
 cloud-guest-utils 0.30-0ubuntu2
 Oct 16 20:25:24 hostname systemd[1]: cloud-init-local.service: 

[Yahoo-eng-team] [Bug 1685333] Re: Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job

2017-10-25 Thread Matt Riedemann
** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: nova/pike
   Status: Confirmed => In Progress

** Changed in: nova/pike
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685333

Title:
  Fatal Python error: Cannot recover from stack overflow. - in py35 unit
  test job

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  Seeing this in the py35 job, looks like it's related to an infinite
  recursion in oslo.config:

  Fatal Python error: Cannot recover from stack overflow.

  http://logs.openstack.org/34/458834/2/check/gate-nova-
  python35/c55b003/console.html#_2017-04-21_16_36_11_981505

  I'm not entirely sure which test it is, but I suspect this one which
  is still in progress when the job dies:

  {0} nova.tests.unit.test_rpc.TestRPC.test_add_extra_exmods [] ...
  inprogress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1685333/+subscriptions



[Yahoo-eng-team] [Bug 1727369] [NEW] _supports_direct_io() check fails on shared storage

2017-10-25 Thread Pavel Gluschak
Public bug reported:

When instances are deployed on multiple compute nodes concurrently and
instance_path is set to shared storage (i.e. Virtuozzo Storage), that
doesn't support concurrent write operations on the same file,
_supports_direct_io() fails with OSError: [Errno 16] Device or resource
busy: '/var/lib/nova/instances/.directio.test'.
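
The probe that fails is essentially an open() with O_DIRECT on a shared
test file. A simplified sketch of the pattern (not nova's exact code) shows
why a filesystem that rejects concurrent writers can raise EBUSY here:

```python
import errno
import os

def supports_direct_io(dirpath):
    """Try to open a probe file with O_DIRECT, roughly as nova's libvirt
    driver does with '.directio.test'. Simplified sketch, not nova code."""
    if not hasattr(os, "O_DIRECT"):
        return False  # O_DIRECT is Linux-only
    testfile = os.path.join(dirpath, ".directio.test")
    try:
        fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        os.close(fd)
        return True
    except OSError as e:
        # EINVAL: filesystem has no O_DIRECT support.
        # EBUSY (errno 16): another writer already holds the file --
        # the failure reported in this bug on shared storage.
        if e.errno in (errno.EINVAL, errno.EBUSY):
            return False
        raise
    finally:
        try:
            os.remove(testfile)
        except OSError:
            pass

print(supports_direct_io("/tmp"))
```

Because every compute node probes the same file name under a shared
instances_path, two concurrent probes can collide, which is the race this
bug describes.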

2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0] Traceback (most recent call last):
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     yield resources
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     block_device_info=block_device_info)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2941, in spawn
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     write_to_disk=True)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4909, in _get_guest_xml
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     context)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4768, in _get_guest_config
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     flavor, guest.os_type)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3806, in _get_guest_storage_config
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     inst_type)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3748, in _get_guest_disk_config
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     self.disk_cachemode,
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 739, in disk_cachemode
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     if not self._supports_direct_io(CONF.instances_path):
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3128, in _supports_direct_io
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     {'path': dirpath, 'ex': e})
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     self.force_reraise()
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     six.reraise(self.type_, self.value, self.tb)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3110, in _supports_direct_io
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0]     f = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
2017-10-19 21:11:29.030 160580 ERROR nova.compute.manager [instance: 8ce8f6f4-a3a5-457e-bd18-72b453efe2e0] OSError: [Errno 16] Device or resource busy: '/var/lib/nova/instances/.directio.test'

** Affects: nova
 Importance: Undecided
 Assignee: Pavel Gluschak (scsnow)
 

[Yahoo-eng-team] [Bug 1727342] [NEW] Failed to delete lbaas-pool if L7 policy/rules attached to that pool

2017-10-25 Thread Puneet Arora
Public bug reported:

Created LoadBalancer, Listener, Pool, members and healthmonitor.
After this, I created an L7 policy using the pool above and added an L7 rule to the policy.
When I try to delete the pool directly, without deleting the L7 policy first, an exception is raised:

$ neutron lbaas-pool-delete pool2
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Driver error: Bad lbaas-pool request: Failed to delete lb pool

q-svc.log file logs pasted in link:
http://paste.openstack.org/show/624607/

##
Commands used:
##
neutron lbaas-loadbalancer-create --name lb1 public-subnet
neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTPS --protocol-port 443 --name listener1
neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS --name web_pool
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.7 --protocol-port 443 web_pool
#L7 policy redirect to pool
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --name policy1 --redirect-pool web_pool --listener listener1
neutron lbaas-l7rule-create --type PATH --compare-type=STARTS_WITH --value /api policy1

Why do we impose the condition that a pool can only be deleted after its
L7 rules have been deleted?


Steps:

1) Create LB, listener, pool, member and healthmonitor.
2) Create an L7 policy using the pool created in step 1.
3) Add a rule inside the L7 policy.
4) Try to delete the pool. It fails.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727342

Title:
  Failed to delete lbaas-pool if L7 policy/rules attached to that pool

Status in neutron:
  New

Bug description:
  Created LoadBalancer, Listener, Pool, members and healthmonitor.
  After this, I created an L7 policy using the pool above and added an L7 rule to the policy.
  When I try to delete the pool directly, without deleting the L7 policy first, an exception is raised:

  $ neutron lbaas-pool-delete pool2
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Driver error: Bad lbaas-pool request: Failed to delete lb pool

  q-svc.log file logs pasted in link:
  http://paste.openstack.org/show/624607/

  ##
  Commands used:
  ##
  neutron lbaas-loadbalancer-create --name lb1 public-subnet
  neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTPS --protocol-port 443 --name listener1
  neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS --name web_pool
  neutron lbaas-member-create --subnet private-subnet --address 10.0.0.7 --protocol-port 443 web_pool
  #L7 policy redirect to pool
  neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --name policy1 --redirect-pool web_pool --listener listener1
  neutron lbaas-l7rule-create --type PATH --compare-type=STARTS_WITH --value /api policy1

  Why do we impose the condition that a pool can only be deleted after its
  L7 rules have been deleted?

  
  Steps:
  
  1) Create LB, listener, pool, member and healthmonitor.
  2) Create an L7 policy using the pool created in step 1.
  3) Add a rule inside the L7 policy.
  4) Try to delete the pool. It fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727342/+subscriptions



[Yahoo-eng-team] [Bug 1723856] Re: lbaasv2 tests fail with error

2017-10-25 Thread Rico Lin
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723856

Title:
  lbaasv2 tests fail with error

Status in neutron:
  In Progress

Bug description:
  Noticed at:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-
  functional-convg-mysql-lbaasv2/dcd512d/job-output.txt.gz

  
  lbaasv2 agent log:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-
  functional-convg-mysql-
  lbaasv2/dcd512d/logs/screen-q-lbaasv2.txt.gz?#_Oct_16_02_26_51_171646

  
  May be due to https://review.openstack.org/#/c/505701/

  traceback:

  2017-10-16 02:45:43.838922 | primary | 2017-10-16 02:45:43.838 | ==
  2017-10-16 02:45:43.840365 | primary | 2017-10-16 02:45:43.840 | Failed 2 tests - output below:
  2017-10-16 02:45:43.842320 | primary | 2017-10-16 02:45:43.841 | ==
  2017-10-16 02:45:43.843926 | primary | 2017-10-16 02:45:43.843 |
  2017-10-16 02:45:43.845738 | primary | 2017-10-16 02:45:43.845 | heat_integrationtests.functional.test_lbaasv2.LoadBalancerv2Test.test_create_update_loadbalancer
  2017-10-16 02:45:43.847384 | primary | 2017-10-16 02:45:43.846 |
  2017-10-16 02:45:43.848836 | primary | 2017-10-16 02:45:43.848 |
  2017-10-16 02:45:43.850193 | primary | 2017-10-16 02:45:43.849 | Captured traceback:
  2017-10-16 02:45:43.851909 | primary | 2017-10-16 02:45:43.851 | ~~~
  2017-10-16 02:45:43.853340 | primary | 2017-10-16 02:45:43.852 | Traceback (most recent call last):
  2017-10-16 02:45:43.855053 | primary | 2017-10-16 02:45:43.854 |   File "/opt/stack/new/heat/heat_integrationtests/functional/test_lbaasv2.py", line 109, in test_create_update_loadbalancer
  2017-10-16 02:45:43.856727 | primary | 2017-10-16 02:45:43.856 |     parameters=parameters)
  2017-10-16 02:45:43.858396 | primary | 2017-10-16 02:45:43.857 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 437, in update_stack
  2017-10-16 02:45:43.859969 | primary | 2017-10-16 02:45:43.859 |     self._wait_for_stack_status(**kwargs)
  2017-10-16 02:45:43.861455 | primary | 2017-10-16 02:45:43.861 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 368, in _wait_for_stack_status
  2017-10-16 02:45:43.862957 | primary | 2017-10-16 02:45:43.862 |     fail_regexp):
  2017-10-16 02:45:43.864506 | primary | 2017-10-16 02:45:43.864 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 327, in _verify_status
  2017-10-16 02:45:43.866142 | primary | 2017-10-16 02:45:43.865 |     stack_status_reason=stack.stack_status_reason)
  2017-10-16 02:45:43.867842 | primary | 2017-10-16 02:45:43.867 | heat_integrationtests.common.exceptions.StackBuildErrorException: Stack LoadBalancerv2Test-1022777367/f0a78a75-c1ed-4921-a7f7-c4028f3f60c3 is in UPDATE_FAILED status due to 'Resource UPDATE failed: ResourceInError: resources.loadbalancer: Went to status ERROR due to "Unknown"'
  2017-10-16 02:45:43.869183 | primary | 2017-10-16 02:45:43.868 |
  2017-10-16 02:45:43.870571 | primary | 2017-10-16 02:45:43.870 |
  2017-10-16 02:45:43.872501 | primary | 2017-10-16 02:45:43.872 | heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.874213 | primary | 2017-10-16 02:45:43.873 |
  2017-10-16 02:45:43.875784 | primary | 2017-10-16 02:45:43.875 |
  2017-10-16 02:45:43.877352 | primary | 2017-10-16 02:45:43.876 | Captured traceback:
  2017-10-16 02:45:43.878767 | primary | 2017-10-16 02:45:43.878 | ~~~
  2017-10-16 02:45:43.880302 | primary | 2017-10-16 02:45:43.879 | Traceback (most recent call last):
  2017-10-16 02:45:43.881941 | primary | 2017-10-16 02:45:43.881 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 97, in test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.883543 | primary | 2017-10-16 02:45:43.883 |     self.check_num_responses(lb_url, 1)
  2017-10-16 02:45:43.884968 | primary | 2017-10-16 02:45:43.884 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 51, in check_num_responses
  2017-10-16 02:45:43.886354 | primary | 2017-10-16 02:45:43.885 |     self.assertEqual(expected_num, len(resp))
  2017-10-16 02:45:43.887791 | primary | 2017-10-16 02:45:43.887 |   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 411, in assertEqual
  2017-10-16 02:45:43.889172 | primary | 2017-10-16 02:45:43.888 |     self.assertThat(observed, matcher, message)
  

[Yahoo-eng-team] [Bug 1727328] [NEW] Inspite of "Listener protocol HTTPS and pool protocol HTTP are not compatible" pool entry is getting added to neutron db

2017-10-25 Thread Puneet Arora
Public bug reported:

I created an lbaas-listener using the HTTPS protocol and an lbaas-pool using HTTP.
I got the exception "Listener protocol HTTPS and pool protocol HTTP are not compatible", which is expected.
But after this, when I check neutron lbaas-pool-list, it shows the pool was added anyway.
If the exception was raised for incompatibility, why is the pool still added to the neutron db?

Please check this link for commands executed:
http://paste.openstack.org/show/624603/

Steps to reproduce:
1) Create LB.
2) Create listener of type HTTPS.
3) Create pool of protocol type HTTP.
4) An exception appears, but neutron lbaas-pool-list still shows the pool entry.

Can we stop adding the pool to the neutron db when this exception is
raised?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727328

Title:
  Inspite of "Listener protocol HTTPS and pool protocol HTTP are not
  compatible" pool entry is getting added to neutron db

Status in neutron:
  New

Bug description:
  I created an lbaas-listener using the HTTPS protocol and an lbaas-pool using HTTP.
  I got the exception "Listener protocol HTTPS and pool protocol HTTP are not compatible", which is expected.
  But after this, when I check neutron lbaas-pool-list, it shows the pool was added anyway.
  If the exception was raised for incompatibility, why is the pool still added to the neutron db?

  Please check this link for commands executed:
  http://paste.openstack.org/show/624603/

  Steps to reproduce:
  1) Create LB.
  2) Create listener of type HTTPS.
  3) Create pool of protocol type HTTP.
  4) An exception appears, but neutron lbaas-pool-list still shows the pool entry.

  Can we stop adding the pool to the neutron db when this exception is
  raised?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727328/+subscriptions



[Yahoo-eng-team] [Bug 1727323] [NEW] A procedure is missing in the article 'Install and configure (Red Hat) in glance'

2017-10-25 Thread Ryoga Saito
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: This document has no procedure to 
create database 'glance'.
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
SHA: 9091d262afb120fd077bae003d52463f833a4fde
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
URL: https://docs.openstack.org/glance/pike/install/install-rdo.html
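
For reference, the step the reporter says is missing follows the pattern
used elsewhere in the OpenStack install guides. A sketch of it (run in a
mysql session as root; GLANCE_DBPASS is the guides' placeholder, not a
real value):

```sql
-- Create the glance database and grant access to the glance user.
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
```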

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1727323

Title:
  A procedure is missing in the article 'Install and configure (Red Hat)
  in glance'

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: This document has no procedure to 
create database 'glance'.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
  SHA: 9091d262afb120fd077bae003d52463f833a4fde
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
  URL: https://docs.openstack.org/glance/pike/install/install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1727323/+subscriptions



[Yahoo-eng-team] [Bug 1727324] [NEW] Install and configure (SUSE) in glance

2017-10-25 Thread Ryoga Saito
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: This document has no procedure to 
create 
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
SHA: 9091d262afb120fd077bae003d52463f833a4fde
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-obs.rst
URL: https://docs.openstack.org/glance/pike/install/install-obs.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1727324

Title:
  Install and configure (SUSE) in glance

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: This document has no procedure to 
create 
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
  SHA: 9091d262afb120fd077bae003d52463f833a4fde
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-obs.rst
  URL: https://docs.openstack.org/glance/pike/install/install-obs.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1727324/+subscriptions



[Yahoo-eng-team] [Bug 1727325] [NEW] Not found a procedure to create database

2017-10-25 Thread Ryoga Saito
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: This document has no procedure to 
create 
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
SHA: 9091d262afb120fd077bae003d52463f833a4fde
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst
URL: https://docs.openstack.org/glance/pike/install/install-ubuntu.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1727325

Title:
  Not found a procedure to create database

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: This document has no procedure to 
create 
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
  SHA: 9091d262afb120fd077bae003d52463f833a4fde
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst
  URL: https://docs.openstack.org/glance/pike/install/install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1727325/+subscriptions



[Yahoo-eng-team] [Bug 1726364] Re: Quota calculated when instance poweroff

2017-10-25 Thread LiweiWang
Rather than deleting a powered-off instance, I may want to keep it
(because I will use it later). But I could not start another instance,
even when there are in fact free CPUs or RAM. Isn't that a waste?

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1726364

Title:
  Quota calculated when instance poweroff

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  When I powered off an instance, I hoped to start another, but a quota
  limit told me that I cannot build more instances because there is no
  more RAM or CPU available. I think this is not reasonable; it leads
  to a waste of physical resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1726364/+subscriptions



[Yahoo-eng-team] [Bug 1727266] [NEW] archive_deleted_instances is not atomic for insert/delete

2017-10-25 Thread Belmiro Moreira
Public bug reported:

Description
===
Archive deleted instances first moves deleted rows to the shadow
tables and then deletes the rows from the original tables.
However, because it does 2 different selects (to get the rows to insert
and to delete) we can have the case that a row is not inserted in the
shadow table but removed from the original.
This can happen when there are new deleted rows between the insert and
delete.
Shouldn't we explicitly delete only the IDs that were inserted?


See:
insert = shadow_table.insert(inline=True).\
    from_select(columns,
                sql.select([table],
                           deleted_column != deleted_column.default.arg).
                order_by(column).limit(max_rows))
query_delete = sql.select([column],
                          deleted_column != deleted_column.default.arg).\
    order_by(column).limit(max_rows)

delete_statement = DeleteFromSelect(table, query_delete, column)

(...)

conn.execute(insert)
result_delete = conn.execute(delete_statement)
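
The proposed fix can be sketched as follows. This is a minimal illustration using sqlite3 rather than Nova's SQLAlchemy helpers, and the `instances`/`shadow_instances` tables and `deleted` column here are simplified stand-ins for the real schema: the candidate IDs are selected once, and then both the shadow-table insert and the source-table delete reuse exactly that ID list inside a single transaction, so a row soft-deleted between the two statements can never be removed without being archived.

```python
import sqlite3


def archive_deleted_rows(conn, max_rows):
    """Move up to max_rows soft-deleted rows to the shadow table.

    Both statements reuse the same explicit ID list, so rows that are
    soft-deleted concurrently cannot be deleted without being archived.
    """
    with conn:  # one transaction covering both insert and delete
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM instances WHERE deleted != 0 "
            "ORDER BY id LIMIT ?", (max_rows,))]
        if not ids:
            return 0
        marks = ",".join("?" * len(ids))
        # Archive exactly these rows...
        conn.execute(
            f"INSERT INTO shadow_instances "
            f"SELECT * FROM instances WHERE id IN ({marks})", ids)
        # ...and delete exactly these rows, not a fresh SELECT's result.
        conn.execute(f"DELETE FROM instances WHERE id IN ({marks})", ids)
        return len(ids)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER)")
conn.execute("CREATE TABLE shadow_instances (id INTEGER PRIMARY KEY, deleted INTEGER)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [(1, 1), (2, 0), (3, 1), (4, 1)])
moved = archive_deleted_rows(conn, max_rows=2)
print(moved)  # 2: rows 1 and 3 were archived
```

With the ID list pinned up front, a row soft-deleted between the insert and the delete is simply left for the next archive run instead of being silently dropped.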

** Affects: nova
 Importance: Undecided
 Assignee: Belmiro Moreira (moreira-belmiro-email-lists)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Belmiro Moreira (moreira-belmiro-email-lists)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727266

Title:
  archive_deleted_instances is not atomic for insert/delete

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Archive deleted instances first moves deleted rows to the shadow
  tables and then deletes the rows from the original tables.
  However, because it does 2 different selects (to get the rows to insert
  and to delete) we can have the case that a row is not inserted in the
  shadow table but removed from the original.
  This can happen when there are new deleted rows between the insert and
  delete.
  Shouldn't we explicitly delete only the IDs that were inserted?

  
  See:
  insert = shadow_table.insert(inline=True).\
      from_select(columns,
                  sql.select([table],
                             deleted_column != deleted_column.default.arg).
                  order_by(column).limit(max_rows))
  query_delete = sql.select([column],
                            deleted_column != deleted_column.default.arg).\
      order_by(column).limit(max_rows)

  delete_statement = DeleteFromSelect(table, query_delete, column)

  (...)

  conn.execute(insert)
  result_delete = conn.execute(delete_statement)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727266/+subscriptions



[Yahoo-eng-team] [Bug 1727262] [NEW] L3 agent config option to pass provider:physical_network while creating HA network

2017-10-25 Thread venkata anil
Public bug reported:

In a customer environment, they are using SR-IOV and Open vSwitch as below:
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = ovs:1300:1500,sriov:1300:1500
but SR-IOV is enabled only on compute nodes and Open vSwitch only on
controllers.
When the first HA router is created for a tenant, the L3 agent creates a new
HA tenant network with a segmentation_id from the ovs range.
But when no more VLAN IDs are available on ovs, it picks a segmentation_id
from the sriov range for later HA tenant networks.
As the OVS agent on the controllers supports only ovs and not SR-IOV, binding
the HA router network port (i.e. device_owner router_ha_interface) fails.
Since HA network creation and HA router creation succeed but keepalived is
never spawned, admins are left confused about why it was not spawned.

So we need to enhance the L3 agent to pass "provider:physical_network" while
creating the HA network.
In that case, if the L3 agent passes provider:physical_network=ovs and no
free VLAN IDs are available, then both HA network and HA router creation
will fail, and the admin can debug the failure easily.
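
A sketch of what such a deployment could look like with the requested knob. The `[ml2]`/`[ml2_type_vlan]` values are taken from the report; the `ha_network_physical_network` option name under `[DEFAULT]` is hypothetical, since this bug is requesting that such an option be added:

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini (existing, from the report)
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

[ml2_type_vlan]
network_vlan_ranges = ovs:1300:1500,sriov:1300:1500

; l3 agent config (hypothetical option this bug requests)
[DEFAULT]
; Pin auto-created HA networks to the physnet the controllers can bind,
; so VLAN exhaustion on "ovs" fails HA network creation loudly instead
; of silently allocating from the unbinding "sriov" range:
ha_network_physical_network = ovs
```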


Binding failure errors

2017-10-24 04:47:04.835 411054 DEBUG 
neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver 
[req-c50a05a9-d889-4abc-a267-dc3098ad854c - - - - -] Attempting to bind port 
95f5d893-3490-410b-8c3e-cd82e4831f34 on network 
d439f80d-ce31-496b-b048-a1056ed3f8b7 bind_port 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:111
2017-10-24 04:47:04.836 411054 DEBUG 
neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver 
[req-c50a05a9-d889-4abc-a267-dc3098ad854c - - - - -] Refusing to bind due to 
unsupported vnic_type: normal bind_port 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:116
2017-10-24 04:47:04.836 411054 ERROR neutron.plugins.ml2.managers 
[req-c50a05a9-d889-4abc-a267-dc3098ad854c - - - - -] Failed to bind port 
95f5d893-3490-410b-8c3e-cd82e4831f34 on host corea-controller0.mtcelab.com for 
vnic_type normal using segments [{'segmentation_id': 1321, 'physical_network': 
u'sriov_a', 'id': u'1ec5240e-84c1-4fad-b9ff-a40cea8ec14e', 'network_type': 
u'vlan'}]


[stack@txwlvcpdirector04 ~]$ neutron net-show d439f80d-ce31-496b-b048-a1056ed3f8b7
+---------------------------+----------------------------------------------------+
| Field                     | Value                                              |
+---------------------------+----------------------------------------------------+
| admin_state_up            | True                                               |
| availability_zone_hints   |                                                    |
| availability_zones        |                                                    |
| created_at                | 2017-10-17T22:17:29Z                               |
| description               |                                                    |
| id                        | d439f80d-ce31-496b-b048-a1056ed3f8b7               |
| ipv4_address_scope        |                                                    |
| ipv6_address_scope        |                                                    |
| mtu                       | 9200                                               |
| name                      | HA network tenant ca7ceaf971014d01992b455119ca5990 |
| port_security_enabled     | True                                               |
| project_id                |                                                    |
| provider:network_type     | vlan                                               |
| provider:physical_network | sriov_a                                            |
| provider:segmentation_id  | 1321                                               |
| qos_policy_id             |                                                    |
| revision_number           | 5                                                  |
| router:external           | False                                              |
| shared                    | False                                              |
| status                    | ACTIVE                                             |
| subnets                   | 4bdaed16-43c7-4b5e-acf2-0e9e55f44304               |
| tags                      |                                                    |
| tenant_id                 |                                                    |
| updated_at                | 2017-10-17T22:17:29Z                               |
+---------------------------+----------------------------------------------------+

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727262

Title:
  L3 agent config option to pass 

[Yahoo-eng-team] [Bug 1727260] [NEW] Nova assumes that a volume is fully detached from the compute if the volume is not defined in the instance's libvirt definition

2017-10-25 Thread sahid
Public bug reported:

During a volume detach operation, Nova compute attempts to remove the
volume from libvirt for the instance before proceeding to remove the
storage lun from the underlying compute host. If Nova discovers that the
volume was not found in the instance's libvirt definition then it
ignores that error condition and returns (after issuing a warning
message "Ignoring DiskNotFound exception while detaching").

However, under certain failure scenarios it may be that although the
libvirt definition for the volume has been removed for the instance that
the associated storage lun on the compute server may not have been fully
cleaned up yet.
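
The failure mode can be sketched as follows. This is a simplified illustration, not Nova's actual code: `DiskNotFound`, the guest, and the host objects are stand-ins. The key point is that a missing disk in the libvirt definition must not short-circuit the host-side LUN cleanup, which may still be pending.

```python
class DiskNotFound(Exception):
    """Stand-in for nova.exception.DiskNotFound."""


def detach_volume(guest, host, connection_info):
    """Detach a volume: remove it from the guest, then clean up the host."""
    try:
        guest.detach_device(connection_info)
    except DiskNotFound:
        # The disk is already gone from the libvirt definition; warn and
        # fall through -- the storage LUN may still exist on the host.
        print("Ignoring DiskNotFound exception while detaching")
    # Always disconnect the volume from the compute host, even when the
    # guest no longer defined the disk.
    host.disconnect_volume(connection_info)


class FakeGuest:
    def detach_device(self, connection_info):
        raise DiskNotFound()  # volume already absent from the domain XML


class FakeHost:
    def __init__(self):
        self.disconnected = []

    def disconnect_volume(self, connection_info):
        self.disconnected.append(connection_info)


host = FakeHost()
detach_volume(FakeGuest(), host, {"volume_id": "vol-1"})
print(host.disconnected)  # [{'volume_id': 'vol-1'}]
```

Returning early from the `except` branch instead of falling through is exactly the behavior this bug describes: the warning is logged, but the LUN is left behind on the compute host.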

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt ocata-backport-potential

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727260

Title:
   Nova assumes that a volume is fully detached from the compute if the
  volume is not defined in the instance's libvirt definition

Status in OpenStack Compute (nova):
  New

Bug description:
  During a volume detach operation, Nova compute attempts to remove the
  volume from libvirt for the instance before proceeding to remove the
  storage lun from the underlying compute host. If Nova discovers that
  the volume was not found in the instance's libvirt definition then it
  ignores that error condition and returns (after issuing a warning
  message "Ignoring DiskNotFound exception while detaching").

  However, under certain failure scenarios it may be that although the
  libvirt definition for the volume has been removed for the instance
  that the associated storage lun on the compute server may not have
  been fully cleaned up yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727260/+subscriptions
