Public bug reported:
When deleting a VM whose status is VERIFY_RESIZE, the resize is confirmed
first and the VM's task_state becomes None. This causes the VM to enter the
periodic power-state synchronization process, and it cannot be deleted.
** Affects: nova
Importance: Undecided
Status: New
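A minimal sketch of the guard involved (hypothetical helper, not nova's actual code): the periodic power-state sync skips instances with a task in flight, so resetting task_state to None after the implicit confirm makes the instance eligible again and lets the sync race with the delete:

```python
# Hypothetical sketch of the eligibility check in a periodic
# power-state sync task. Confirming the resize resets task_state to
# None, so the instance stops being skipped and the sync can interfere
# with the in-progress delete.

def should_sync_power_state(instance):
    """Periodic sync only touches instances with no task in flight."""
    return instance["task_state"] is None

resizing = {"uuid": "fake-uuid", "task_state": "resize_confirming"}
confirmed = {"uuid": "fake-uuid", "task_state": None}

assert not should_sync_power_state(resizing)   # skipped while busy
assert should_sync_power_state(confirmed)      # eligible again -> race
```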
It is, yes. I think the duplicate was created during one of those times
when launchpad was doing timeouts and I didn't notice that I created it
twice.
** Changed in: nova
Status: Triaged => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering
Reviewed: https://review.openstack.org/541497
Committed:
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=b3cf2228fd8cf41d539aa62ae6ff99c1870a67af
Submitter: Zuul
Branch: master
commit b3cf2228fd8cf41d539aa62ae6ff99c1870a67af
Author: Michal Kelner Mishali
Public bug reported:
Description
===
The allocation_ratio set via aggregate metadata does not work in NUMATopologyFilter.
NUMATopologyFilter always takes the allocation_ratio from the compute node's
configuration file.
We do not provide a consistent way to set the allocation_ratio when user
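A toy illustration of the inconsistency (function and key names here are illustrative, not nova's actual code): the aggregate-aware filters let aggregate metadata override the configured ratio, while the behaviour this bug describes reads only the host's configuration:

```python
# Hypothetical comparison of the two behaviours. AggregateCoreFilter
# honours a per-aggregate cpu_allocation_ratio; the bug is that
# NUMATopologyFilter only ever reads the compute node's configured value.

def aggregate_aware_ratio(host_config_ratio, aggregate_metadata):
    """Aggregate metadata wins when present (AggregateCoreFilter style)."""
    value = aggregate_metadata.get("cpu_allocation_ratio")
    return float(value) if value is not None else host_config_ratio

def config_only_ratio(host_config_ratio, aggregate_metadata):
    """What the bug describes: the aggregate value is ignored."""
    return host_config_ratio

meta = {"cpu_allocation_ratio": "4.0"}
assert aggregate_aware_ratio(16.0, meta) == 4.0
assert config_only_ratio(16.0, meta) == 16.0  # aggregate setting ignored
```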
Reviewed: https://review.openstack.org/527105
Committed:
https://git.openstack.org/cgit/openstack/neutron/commit/?id=43d3e88a07b4275ad814c6875fa037efd94223bb
Submitter: Zuul
Branch: master
commit 43d3e88a07b4275ad814c6875fa037efd94223bb
Author: Ahmed Zaid
Date: Wed
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => Medium
** Changed in: nova/pike
Status: New => Confirmed
** Changed in: nova/pike
Importance: Undecided => Medium
** Also affects: nova/queens
Importance:
For containerized services deployed with tripleo, it's addressed in
https://review.openstack.org/#/c/542858/
** Also affects: tripleo
Importance: Undecided
Status: New
** Changed in: tripleo
Status: New => In Progress
** Changed in: tripleo
Milestone: None => queens-rc1
**
Public bug reported:
When a virtual router is bound to an L3 agent (during router migration or new
router creation), the synchronization function sync_routers() tries to get the
MTU for all router interfaces by network ID. This runs in the function
'_get_mtus_by_network_list' in the l3_db.py file. But when the database query
is formed,
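The lookup that function performs can be sketched as building a network-id to MTU map for the routers' interfaces (names here are illustrative stand-ins, not neutron's actual API):

```python
# Hypothetical sketch of resolving interface MTUs via their network IDs,
# the job _get_mtus_by_network_list performs against the database.

def mtus_by_network(interfaces, networks):
    """Return {network_id: mtu} for every network an interface uses."""
    wanted = {iface["network_id"] for iface in interfaces}
    return {net["id"]: net["mtu"] for net in networks if net["id"] in wanted}

nets = [{"id": "net-1", "mtu": 1500}, {"id": "net-2", "mtu": 9000}]
ports = [{"network_id": "net-2"}]
assert mtus_by_network(ports, nets) == {"net-2": 9000}
```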
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Changed in: nova/queens
Importance: Undecided => Medium
Public bug reported:
Hi,
We have observed instances failing to get a DHCP reply, either when booting for
the first time, or after a reboot.
By tcpdumping the traffic on the tap, then qvb and qvo interfaces, we can see
the DHCP request leaving, but it doesn't reach the neutron-gateway node
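The triage technique above, capturing at each hop of the tap -> qvb -> qvo path, can be summarized as finding the last capture point that saw the packet (a toy helper, not part of any neutron tooling):

```python
# Hypothetical helper for the capture-at-each-hop triage: given which
# interfaces along the path saw the DHCP request, report the last one,
# i.e. where the packet was dropped after.

def last_seen(path, seen):
    """Return the last interface on `path` that captured the packet."""
    drop_after = None
    for iface in path:
        if iface in seen:
            drop_after = iface
        else:
            break
    return drop_after

path = ["tap", "qvb", "qvo", "gateway"]
# The bug's observation: the request leaves qvo but never reaches the
# neutron-gateway node.
assert last_seen(path, {"tap", "qvb", "qvo"}) == "qvo"
```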
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Triaged
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: horizon (Ubuntu)
Status: Triaged => Fix Released
https://bugs.launchpad.net/bugs/1702466
Title:
Subnet details page fails when
Public bug reported:
- [x] This doc is inaccurate in this way: __
https://developer.openstack.org/api-ref/image/v2/index.html#create-an-image
Where it says:
"Additionally, you may include additional properties specified as
key:value pairs, where the value must be a string data type. Keys
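A sketch of what the quoted doc text means for a create-image request body (the helper and property names are hypothetical; only the "custom values must be strings" rule comes from the doc):

```python
import json

# Hedged sketch of building a Glance v2 create-image body: custom
# properties ride alongside the standard fields as key:value pairs,
# and each custom value must be a string.

def build_image_body(name, **custom_properties):
    body = {"name": name,
            "container_format": "bare",
            "disk_format": "qcow2"}
    for key, value in custom_properties.items():
        if not isinstance(value, str):
            raise TypeError("custom property %r must be a string" % key)
        body[key] = value
    return json.dumps(body)

payload = build_image_body("cirros", os_distro="cirros", tier="gold")
assert json.loads(payload)["os_distro"] == "cirros"
```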
*** This bug is a duplicate of bug 1734625 ***
https://bugs.launchpad.net/bugs/1734625
Sure, although this bug had some related patches merged, so it gets
weird, but yeah we can duplicate it.
** This bug has been marked a duplicate of bug 1734625
placement: Request IDs are not passed to
Public bug reported:
On modern servers we see dozens of cores, which makes the default number of
Glance workers total overkill, since each worker has an eventlet pool of 1000
threads. Maybe we should consider limiting the total number of workers unless
specifically configured by the deployer.
** Affects: glance
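The proposed default could be as simple as capping the per-core count (the cap value below is an assumption for illustration, not a decided glance default):

```python
# Sketch of a capped worker default: one worker per core on small
# machines, but never more than a fixed cap on many-core servers.
# The cap of 8 is an illustrative assumption.

def default_workers(cpu_count, cap=8):
    """Worker count bounded by a cap instead of one per core."""
    return min(cpu_count, cap)

assert default_workers(4) == 4    # small machines keep one per core
assert default_workers(64) == 8   # dozens of cores no longer mean
                                  # dozens of workers x 1000 greenthreads
```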
(1:29:28 PM) mgoddard: mriedem: the scheduling works, but any flavor-requested
traits won't be pushed to the ironic node's instance_info. These are not yet
used by ironic, but will be used in future for some capabilities-like things
(1:29:49 PM) mriedem: mgoddard: if they aren't used in ironic
Public bug reported:
During the Queens release, keystone added support for a new scope type
called system. This extended the support for users and groups to not
only have roles on projects and domains, but also on a different entity
called the "system". This is an effort to make RBAC support more
Public bug reported:
As of queens, the ironic virt driver pushes traits set on the flavor to
the ironic node's instance_info during instance spawn. This list of
traits is currently encoded as a JSON string, inside the JSON-encoded
instance_info. We should not use this double layer of JSON
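The double layer can be shown directly: because instance_info is JSON-encoded as a whole, a traits value that is itself a JSON string forces consumers to decode twice (trait names below are illustrative):

```python
import json

# Illustration of the double encoding the bug describes: the traits
# list is json.dumps'ed before being placed into instance_info, which
# is then json.dumps'ed again.

traits = ["CUSTOM_GPU", "HW_CPU_X86_VMX"]
double = json.dumps({"traits": json.dumps(traits)})  # current behaviour
single = json.dumps({"traits": traits})              # plain list instead

assert json.loads(json.loads(double)["traits"]) == traits  # two decodes
assert json.loads(single)["traits"] == traits              # one decode
```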
Reviewed: https://review.openstack.org/543257
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=4f9667b7a92ffef4329380e39c64cf314203b06e
Submitter: Zuul
Branch: master
commit 4f9667b7a92ffef4329380e39c64cf314203b06e
Author: Matt Riedemann
Date: Sun
Public bug reported:
The PowerVM virt driver is passing the instance as the first argument to
the Task init in a few places [1] [2]. The init method [3] expects the
first arg to be the task name. This results in the task name being the
string representation of the instance. The instance should no
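A minimal reproduction of the argument mix-up with a stand-in Task class (taskflow's real Task takes the task name as its first argument; the task name "plug_vifs" and the Instance class here are hypothetical):

```python
# Stand-in for taskflow.task.Task: the first positional argument is
# the task name. Passing the instance there, as the bug describes,
# makes the task name the instance's string representation.

class Task:
    def __init__(self, name=None, **kwargs):
        self.name = str(name) if name is not None else self.__class__.__name__

class Instance:
    def __repr__(self):
        return "<Instance fake-uuid>"

inst = Instance()
buggy = Task(inst)         # instance lands in the name slot
fixed = Task("plug_vifs")  # the task name belongs first

assert buggy.name == "<Instance fake-uuid>"  # name is repr(instance)
assert fixed.name == "plug_vifs"
```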
I feel that, due to the design of the FilterScheduler that makes every filter
independent of the others, it's necessarily up to the operator to make sure that
if they need to use both AggregateCoreFilter and/or AggregateRamFilter *and*
NUMATopologyFilter for NUMA placement, they necessarily need
Reviewed: https://review.openstack.org/543039
Committed:
https://git.openstack.org/cgit/openstack/glance/commit/?id=3712dccfdb7ee30243e4efc09fcad1c3197bdb51
Submitter: Zuul
Branch: master
commit 3712dccfdb7ee30243e4efc09fcad1c3197bdb51
Author: Brian Rosmaita
** Changed in: glance
Status: Fix Committed => Fix Released
https://bugs.launchpad.net/bugs/1747304
Title:
Stable maint check of stable/pike failing
Status in
Public bug reported:
The api-ref describes how the sorting works in general:
https://developer.openstack.org/api-ref/network/v2/#sorting . However,
it doesn't explicitly list all the valid sort_keys for each API
resource.
Listing all the sort_keys for each API resource is necessary because
the
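The general mechanism the api-ref does document is repeating sort_key/sort_dir pairs on the list call; which keys each resource actually accepts is exactly what the report asks to be enumerated (the helper below is an illustrative sketch, not a neutron client API):

```python
from urllib.parse import urlencode

# Sketch of building the documented sort query string for a list call:
# sort_key and sort_dir may each be given multiple times.

def list_query(sort_keys, sort_dirs):
    pairs = [("sort_key", key) for key in sort_keys]
    pairs += [("sort_dir", direction) for direction in sort_dirs]
    return urlencode(pairs)

query = list_query(["name", "status"], ["asc", "desc"])
assert query == "sort_key=name&sort_key=status&sort_dir=asc&sort_dir=desc"
```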
Public bug reported:
Neutron server will return 500 if we try to list ports with 'created_at'
as filter. For example:
$ curl -g -i -X GET -H "X-Auth-Token: $TOKEN" \
  "http://10.0.0.19:9696/v2.0/ports?created_at=test"
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
@Nisha, both patches have been abandoned, so I'm assuming that this RFE
is no longer of interest. I'm going to change the status to Won't Fix.
Feel free to change the status. Thanks.
** Changed in: ironic
Status: Confirmed => Won't Fix
** Changed in: ironic
Status: Won't Fix =>
** Changed in: glance
Status: Fix Committed => Fix Released
https://bugs.launchpad.net/bugs/1747305
Title:
Stable maint check of stable/ocata failing
Status in
Public bug reported:
In the api-ref, the parameter 'tags' is not documented in either the
response parameters or the response sample. This applies to all API
resources that support tags (e.g. network, subnet). It is better to add
this missing documentation.
** Affects: neutron
Importance: Undecided
Public bug reported:
Description
===
Booting an instance from a volume failed; the instance was then deleted
successfully, but the volume is still in-use.
Steps to reproduce
==
A chronological list of steps which will bring off the
issue you noticed:
* I did boot an instance from
Public bug reported:
The used memory in numa_topology does not include memory used by instances
that do not set mem_page_size.
For example:
An instance that does not set hw:mem_page_size but does set hw:cpu_policy
will consume memory with a 4K page size, but at this time the used 4K
memory in the numa_topology of
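The accounting gap can be shown with toy numbers (a simplified model, not nova's actual NUMA accounting): an instance with no explicit page size still consumes 4K pages, but a counter that only sums instances with the property set misses it.

```python
# Toy model of the accounting gap: 4K page usage is undercounted when
# instances without an explicit mem_page_size are skipped.

PAGE_4K = 4  # page size in KiB

def used_4k_pages(instances, count_implicit):
    """Sum 4K pages used; optionally include implicit (unset) users."""
    total = 0
    for inst in instances:
        explicit = inst.get("mem_page_size") == PAGE_4K
        implicit = inst.get("mem_page_size") is None
        if explicit or (count_implicit and implicit):
            total += inst["memory_kb"] // PAGE_4K
    return total

insts = [{"memory_kb": 1024, "mem_page_size": 4},
         {"memory_kb": 2048}]  # e.g. pinned, but no explicit page size
assert used_4k_pages(insts, count_implicit=False) == 256  # 2048 KiB missed
assert used_4k_pages(insts, count_implicit=True) == 768   # full usage
```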
Reviewed: https://review.openstack.org/542257
Committed:
https://git.openstack.org/cgit/openstack/neutron/commit/?id=02cc3ca30733c88003331af26fbd364d703dd552
Submitter: Zuul
Branch: master
commit 02cc3ca30733c88003331af26fbd364d703dd552
Author: Sławek Kapłoński
Date: