Public bug reported:
We want a way to change a VM port’s network when the VM’s operating
system does not support online removal and addition of devices.
These are the steps to accomplish this:
Given a running VM with no floating IP associated with it, we want to
change the network of .
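Today's workaround, which the request above wants to avoid, is a detach/re-attach cycle that needs guest hotplug support (server, port, and network names here are hypothetical):

```shell
# Detach the port on the old network, then attach a new one -- this only
# works when the guest OS supports online removal/addition of devices.
openstack server remove port vm1 old-port
openstack port create --network new-net new-port
openstack server add port vm1 new-port
```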
Public bug reported:
In the devstack environment (master branch), stack.sh created the
private network. It has two subnets (an IPv4 and an IPv6).
A port is created using the "private" network.
Then the port is updated using the "openstack port set --no-fixed-ip"
command. The IPv4 IP
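The report's steps can be sketched with the OpenStack client (the port name is hypothetical; the network is devstack's default "private"):

```shell
# Create a port on the "private" network, which has an IPv4 and an IPv6 subnet
openstack port create --network private bug-port
# Remove all fixed IPs from the port
openstack port set --no-fixed-ip bug-port
# Check which addresses actually remain
openstack port show bug-port -c fixed_ips
```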
Public bug reported:
The neutron-vpn-netns-wrapper always assumes the rootwrap.conf lives in
the default location of /etc/neutron/ because it is not executed with
the --rootwrap_config parameter. If rootwrap.conf is not in the default
location, then execution will fail with a message like:
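A sketch of the missing invocation, assuming a non-default install path (the path below is hypothetical; only the --rootwrap_config parameter name is taken from the report):

```shell
# Point the wrapper at the actual rootwrap.conf instead of letting it
# assume the default /etc/neutron/rootwrap.conf
neutron-vpn-netns-wrapper --rootwrap_config=/opt/stack/neutron/etc/rootwrap.conf
```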
Public bug reported:
The network demo-net, owned by user demo, is shared with tenant demo-2.
The sharing is created by demo using the command
neutron rbac-create --type network --action access_as_shared
--target-tenant demo-2 demo-net
A user in the demo-2 tenant can see the network demo-net:
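The check from the demo-2 side can be sketched as (sourcing the demo-2 credentials is assumed):

```shell
# As a demo-2 user, the shared network demo-net now shows up
neutron net-list
# The owner (demo) can list the RBAC entry that grants access
neutron rbac-list
```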
Public bug reported:
My environment has a compute node and a controller node.
On the compute node the L3-agent mode is 'dvr'.
On the controller node the L3-agent mode is 'dvr-snat'.
Nova-compute is only running on the compute node.
Start: the compute node has no VMs running, there
Public bug reported:
My environment has a compute node and a controller node. On the compute
node the L3-agent mode is 'dvr'; on the controller node the L3-agent
mode is 'dvr-snat'. Nova-compute is only running on the compute node.
Start: the compute node has no VMs running, there
didn't check whether the router can be removed from an L3-agent.
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Tags: l3-dvr-backlog
** Changed in: neutron
Assignee: (unassigned) => Stephen Ma (stephen-ma)
** Tags added: l3-dvr
Public bug reported:
In https://review.openstack.org/#/c/195223/ (a backport to stable/juno)
the jenkins check job failed at the check-neutron-lbaasv1-dsvm-api.
Looking at the console.html log, the failure occurred during the test
setup phase. The console.log is:
Public bug reported:
On my single-node devstack setup, there are 2 VMs hosted. VM1 has no floating
IP assigned. VM2 has a floating IP assigned. From VM1, ping VM2 using the
floating IP. Ping output reports the replies come from VM2's fixed IP address.
The reply should be from VM2's
Public bug reported:
On my single-node devstack setup running the latest neutron code, there
is one AgentNotFoundByTypeHost exception found for the L3-agent.
However, the AgentNotFoundByTypeHost exception is not logged for the
DHCP, OVS, or metadata agents. This fact would point to a problem
Public bug reported:
When using DVR, the fg- device on a compute node is needed to access VMs on
that node. If for any reason the fg- device is deleted, users will not be
able to access the VMs on the compute node.
On a single node system where the L3-agent is running in 'dvr-snat'
mode, a VM is
** Changed in: neutron
Status: New => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445202
Title:
Bug #1414218 is not fixed on the stable/juno branch
Status in
is the
performance overhead due to the LOG.debug statements in the for-loop of
the _output_hosts_file() function.
This problem is only found on the stable/juno branch of neutron.
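The pattern behind the overhead can be sketched with stdlib logging (hypothetical host data; this is a simplified sketch, not the actual neutron code): issuing one debug call per host inside the loop pays a per-iteration cost even when debug logging is disabled, so aggregating to a single call after the loop keeps the hot path cheap.

```python
import logging

LOG = logging.getLogger("dhcp-agent")
logging.basicConfig(level=logging.INFO)  # debug disabled, as in production

def output_hosts_file(hosts):
    """Build dnsmasq hosts-file contents (simplified sketch)."""
    buf = []
    for mac, ip, hostname in hosts:
        buf.append("%s,%s,%s" % (mac, hostname, ip))
    # One deferred-format debug call after the loop instead of one per
    # host keeps the per-host logging cost out of the hot path.
    LOG.debug("Building host file with %d entries", len(buf))
    return "\n".join(buf)

entries = [("fa:16:3e:00:00:01", "10.0.0.11", "host-1"),
           ("fa:16:3e:00:00:02", "10.0.0.12", "host-2")]
print(output_hosts_file(entries))
```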
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Changed
** Changed in: neutron
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1394043
Title:
KeyError: 'gw_port_host' seen for DVR router removal
Public bug reported:
As the administrator, I created a DVR router and attached it to a shared
network, which I also created. As a non-admin tenant, a VM is created with a
port on the shared network. The only VM using the shared network is scheduled
to a compute
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Stephen Ma (stephen-ma)
Public bug reported:
An in-use dhcp-port can be deleted by a tenant:
For example:
stack@Controller:~/DEVSTACK/user-1$ neutron net-list
+--+---+--+
| id | name |
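The rest of the repro can be sketched as (the port ID placeholder is hypothetical):

```shell
# List ports and find the one with device_owner network:dhcp
neutron port-list
# A plain tenant can delete the in-use DHCP port
neutron port-delete <dhcp-port-id>
```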
Public bug reported:
On a controller node, with L3 agent mode of 'dvr_snat', the snat
namespace remains on the node even after the router is deleted.
This problem is reproduced on a 3 node setup with 2 compute nodes and
one controller node setup using devstack. L3 agent mode on compute nodes
is
Assignee: Stephen Ma (stephen-ma)
Status: New
** Tags: l3-dvr-backlog
** Changed in: neutron
Assignee: (unassigned) => Stephen Ma (stephen-ma)
** Changed in: neutron
Status: Incomplete => Invalid
https://bugs.launchpad.net/bugs/1351416
Title:
neutron agent-list reports incorrect binary
Status in OpenStack
the VM boots up, check the VM is pingable
3. Delete the VM.
The router's namespaces remain on the node. They should have been
deleted.
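Whether the namespaces were actually removed can be checked on the node (namespace name prefixes as created by the L3 agent):

```shell
# After the VM is deleted these should return nothing for the router
ip netns | grep -E 'qrouter-|snat-'
```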
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Tags: l3-dvr-backlog
** Tags added: l3-dvr
Public bug reported:
In an environment set up with devstack where the neutron-vpn-agent is
used, 'neutron agent-list' reports the binary for the L3 agent type as
neutron-l3-agent. Neutron-vpn-agent is running, not neutron-l3-agent.
The binary column should list neutron-vpn-agent as
Public bug reported:
The traceback below was found in the neutron API server log. It happened
about one minute after the rabbitmq server was restarted following an
outage of the message-queue service. Afterwards, I noticed that the API
server is no longer responding to requests from python-neutronclient.
However,
Public bug reported:
Given this ml2_conf.ini file on a controller node that also runs the
neutron API server, DHCP and L3 agents:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,l2population
tenant_network_types = vxlan
[ml2_type_flat]
[ml2_type_vlan]
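For context, a minimal sketch of what typically follows in a devstack-generated ml2_conf.ini (the values below are assumptions, not from the report):

```ini
[ml2_type_vxlan]
# VNI range is an assumed example value
vni_ranges = 1001:2000

[ovs]
# local_ip is an assumed example value
local_ip = 192.168.0.10
```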
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Stephen Ma (stephen-ma)
[-]
log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1923
2014-02-27 13:19:20.257 29288 INFO neutron.common.config [-] Config paste file:
/etc/neutron/api-paste.ini
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Description changed:
When
** Affects: neutron
Importance: Undecided
Assignee: Stephen Ma (stephen-ma)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Stephen Ma (stephen-ma)