[Yahoo-eng-team] [Bug 1897637] [NEW] ovs firewall: mac learning of dest VM mac not working

2020-09-28 Thread Moshe Levi
Public bug reported:

I am using Neutron master with the OVS firewall driver and OVS 2.13.
I have 2 compute nodes with a VM on each of them.
Both VMs are configured with security groups that allow ingress and egress TCP traffic.
I am running iperf to test TCP connection tracking.
When traffic starts I see the following datapath rule:

ufid:58ea9ecf-9fe5-4662-ae46-be4b7540d9c5,
skb_priority(0/0),skb_mark(0/0),ct_state(0x2/0x2),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0x15),dp_hash(0/0),in_port(p4p2_11),packet_type(ns=0/0,id=0/0),eth(src=fa:16:3e:9e:77:5c,dst=fa:16:3e:35:c0:68),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no),
packets:7151296, bytes:64459680961, used:0.420s, offloaded:yes, dp:tc,
actions:set(tunnel(tun_id=0x1,src=172.16.0.148,dst=172.16.0.147,ttl=64,tp_dst=4789,flags(key))),vxlan_sys_4789

This is the FDB table of br-int, from "ovs-appctl fdb/show br-int":

 port  VLAN  MAC                Age
    5     3  fa:16:3e:35:c0:68   97
    6     3  fa:16:3e:9e:77:5c    0

As you can see, the age of the remote VM's destination MAC keeps
increasing; when it reaches 300s, which is the default aging time in
OVS, the MAC disappears from the FDB and the rule above changes to a
flood rule.
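The aging can be observed and, as a temporary workaround, extended; a
sketch, assuming the standard OVS bridge aging knob (the 3600s value is
arbitrary):

```shell
# Show FDB entries and their ages on the integration bridge
ovs-appctl fdb/show br-int

# Workaround only: raise the MAC aging time from the default 300s
ovs-vsctl set bridge br-int other_config:mac-aging-time=3600
```

This only postpones the expiry; it does not restore the missing MAC
learning.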

ufid:b2967a14-aa26-433a-8df1-1cc00ef662e7,
skb_priority(0/0),skb_mark(0/0),ct_state(0x2/0x2),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0x12),dp_hash(0/0),in_port(p4p2_11),packet_type(ns=0/0,id=0/0),eth(src=fa:16:3e:9e:77:5c,dst=fa:16:3e:35:c0:68),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no),
packets:23004560, bytes:204398890734, used:0.000s, dp:tc,
actions:push_vlan(vid=1,pcp=0),br-int,set(tunnel(tun_id=0x1,src=172.16.0.148,dst=172.16.0.147,ttl=64,tp_dst=4789,flags(key))),pop_vlan,vxlan_sys_4789


This is the FDB table of br-int, from "ovs-appctl fdb/show br-int":
 port  VLAN  MAC                Age
    9     1  fa:16:3e:9e:77:5c    0

The flood rule breaks hardware offload.

It seems that RULES_INGRESS_TABLE (table 82) outputs directly to the
destination port without going through the NORMAL action. If we change
the OpenFlow rule in this table from:

table=82, n_packets=147206831, n_bytes=11772233989,
priority=50,ct_state=+est-rel+rpl,ct_zone=1,ct_mark=0,reg5=0x9
actions=output:"p4p2_11"

to:

cookie=0x1e1cc3048de6c562, duration=196.708s, table=82, n_packets=145661342,
n_bytes=11670250023,
priority=50,ct_state=+est-rel+rpl,ct_zone=1,ct_mark=0,reg5=0x9
actions=mod_vlan_vid:1,NORMAL

the problem is solved: the NORMAL action performs MAC learning, so the
destination MAC is refreshed in the FDB.
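For testing, the change can be applied by hand; a sketch, with the match
fields copied from the dump above (--strict is needed so mod-flows
honours the priority, and OpenFlow 1.3 is assumed since that is what the
Neutron OVS agent configures):

```shell
ovs-ofctl -O OpenFlow13 --strict mod-flows br-int \
  'table=82,priority=50,ct_state=+est-rel+rpl,ct_zone=1,ct_mark=0,reg5=0x9,actions=mod_vlan_vid:1,NORMAL'
```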

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1897637

Title:
  ovs firewall: mac learning of dest VM mac not working

Status in neutron:
  In Progress


[Yahoo-eng-team] [Bug 1855888] [NEW] ovs-offload with vxlan is broken due to adding skb mark

2019-12-10 Thread Moshe Levi
Public bug reported:

The following patch [1] added the use of egress_pkt_mark, which is not
supported with OVS hardware offload.
This causes a regression in OpenStack when using OVS hardware offload
with VXLAN.


[1] - https://review.opendev.org/#/c/675054/
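Whether datapath flows are actually offloaded can be checked from
userspace; a sketch:

```shell
# Flows offloaded to hardware (requires other_config:hw-offload=true)
ovs-appctl dpctl/dump-flows type=offloaded

# Flows still handled by the kernel OVS datapath
ovs-appctl dpctl/dump-flows type=ovs
```

After this regression, the VXLAN flows are expected to show up only in
the second list.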

** Affects: neutron
 Importance: High
 Assignee: Moshe Levi (moshele)
 Status: In Progress

https://bugs.launchpad.net/bugs/1855888

Title:
  ovs-offload with vxlan is broken due to adding skb mark

Status in neutron:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1855888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627987] Re: [RFE] SR-IOV accelerated OVS integration

2018-11-25 Thread Moshe Levi
** Changed in: neutron
   Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1627987

Title:
  [RFE] SR-IOV accelerated OVS integration

Status in neutron:
  Fix Released

Bug description:
  SR-IOV accelerated Open vSwitch significantly improves the performance of OVS 
while maintaining its core functionality.
  The idea is to leverage SR-IOV technology with OVS control plane management.

  Change on the neutron side:
  1. extend the OVS mechanism driver to bind direct port

  The vif changes are:
  The mechanism driver will send a new vif type to Nova, which will
create a hostdev interface (SR-IOV VF passthrough), look up the SR-IOV
VF representor, and plug it into br-int.

  Kernel changes for TC offloading support were merged in 4.8.
  OVS patches for using TC offloading can be found here:
https://patchwork.ozlabs.org/patch/738176/
  references:
  [1] 
http://www.netdevconf.org/1.1/proceedings/slides/efraim-virtual-switch-hw-acceleration.pdf
  [2] http://netdevconf.org/1.2/papers/efraim-gerlitz-sriov-ovs-final.pdf
  [3] http://netdevconf.org/1.2/session.html?or-gerlitz
  [4] http://netdevconf.org/1.2/session.html?rony-efraim-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627987/+subscriptions



[Yahoo-eng-team] [Bug 1789074] [NEW] failed to boot guest with vnic_type direct when rx_queue_size and tx_queue_size are set

2018-08-26 Thread Moshe Levi
Public bug reported:

Description of problem:

Nova compute forces the virtio RX/TX queue size also on SR-IOV devices.
This makes the VM spawn fail. The configurable RX/TX queue size code is
similar all the way from OSP10 to OSP13, so the issue may also be
present in other releases.

Version-Release number of selected component (if applicable):
OSP13 z3

How reproducible:

(quick and dirty way)
Change the nova config file:

# crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt rx_queue_size 1024
# crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf libvirt tx_queue_size 1024

# restart the nova_compute container
docker restart nova_compute

# boot a VM with an SRIOV (PF or VF) interface

Actual results:
Nova adds rx_queue_size to the SR-IOV port's interface section, and the
guest fails to boot.

Expected results:
The SR-IOV interface section is generated without rx_queue_size and
tx_queue_size, and the guest boots.

Additional info:
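The interface XML was stripped from the archived message; a plausible
reconstruction of the failing definition, assuming libvirt's hostdev vif
(the PCI address and MAC are invented for illustration):

```xml
<!-- Actual (broken): nova emits the virtio queue sizes on an SR-IOV
     hostdev interface, which makes the spawn fail -->
<interface type='hostdev' managed='yes'>
  <driver rx_queue_size='1024' tx_queue_size='1024'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x4'/>
  </source>
  <mac address='fa:16:3e:00:00:01'/>
</interface>

<!-- Expected: no queue-size attributes on the hostdev interface -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x4'/>
  </source>
  <mac address='fa:16:3e:00:00:01'/>
</interface>
```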

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1789074

Title:
  failed to boot guest with vnic_type direct when rx_queue_size and
  tx_queue_size are set

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1789074/+subscriptions



[Yahoo-eng-team] [Bug 1785608] [NEW] [RFE] neutron ovs agent support baremetal port using smart nic

2018-08-06 Thread Moshe Levi
Public bug reported:

Problem description
===

While Ironic today supports Neutron provisioned network connectivity for
Bare-Metal servers through ML2 mechanism driver, the existing support
is based largely on configuration of TORs through vendor-specific mechanism
drivers, with limited capabilities.

Proposed change
===

There is a wide range of smart/intelligent NICs emerging on the market.
These NICs generally incorporate one or more general purpose CPU cores along
with data-plane packet processing accelerations, and can efficiently run
virtual switches such as OVS, while maintaining the existing interfaces to the
SDN layer.

The goal is to enable
running the standard Neutron Open vSwitch L2 agent, providing a generic,
vendor-agnostic bare metal networking service with feature parity compared
to the virtualization use-case.

* Neutron ml2 ovs changes:
  Update the neutron ML2 OVS mechanism driver to bind a bare metal port
  with the smart NIC flag in the binding profile.

* Neutron ovs agent changes:

Example of SmartNIC model::

  +---+
  |Server |
  |   |
  |  +A   |
  +--|+
 |
 |
  +--|+
  |SmartNIC   |
  |+-+B-+ |
  ||OVS | |
  |+-+C-+ |
  +--|+
 |

  A - port on the baremetal
  B - port that represent the baremetal port in the SmartNIC
  C - port to the wire

  Add/remove port B to the OVS br-int with external-ids.
  This part mimics nova-compute, which plugs the port into the OVS bridge.
  The external-ids information is:

'external-ids:iface-id=%s' % port_id
'external-ids:iface-status=active'
'external-ids:attached-mac=%s' % ironic_port.address
'external-ids:node-uuid=%s' % node_uuid
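The plug operation described above can be sketched as a single ovs-vsctl
call; the representor name "rep0" and the shell variables are
placeholders, not names from the proposal:

```shell
# Plug representor port B (here called "rep0") into br-int with the
# external-ids the agent needs, mimicking what nova-compute does for VMs
ovs-vsctl --may-exist add-port br-int rep0 -- \
    set Interface rep0 \
    external-ids:iface-id="$PORT_ID" \
    external-ids:iface-status=active \
    external-ids:attached-mac="$IRONIC_PORT_MAC" \
    external-ids:node-uuid="$NODE_UUID"
```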

** Affects: neutron
 Importance: Undecided
 Status: New


https://bugs.launchpad.net/bugs/1785608

Title:
  [RFE] neutron ovs agent support baremetal port using smart nic

Status in neutron:
  New


[Yahoo-eng-team] [Bug 1719327] [NEW] nova compute overwrites binding-profile when updating a direct port

2017-09-25 Thread Moshe Levi
Public bug reported:

If a user creates a direct port with a binding profile such as
--binding-profile '{"capabilities": ["switchdev"]}', nova-compute will
overwrite that info with pci_vendor_info and pci_slot, which are used by
the SR-IOV mechanism driver and agent, and on delete it will clear
binding:profile.

This change is important for OVS hardware offload, because we
distinguish between legacy SR-IOV and switchdev SR-IOV by the
{"capabilities": ["switchdev"]} entry in the port profile, and with this
info we know which mechanism driver will bind the direct port: ports
with {"capabilities": ["switchdev"]} are bound by the OVS mechanism
driver, and all others are bound by the SR-IOV mechanism driver.
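The effect can be reproduced with the CLI usage from the report; a
sketch (network name and port id are placeholders):

```shell
# Create a direct port marked for switchdev / OVS hardware offload
neutron port-create private --vnic-type direct \
    --binding-profile '{"capabilities": ["switchdev"]}'

# After nova-compute plugs the port, binding:profile has been
# overwritten with pci_vendor_info/pci_slot and the switchdev
# capability is gone
neutron port-show <port-id> -c binding:profile
```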

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1719327

Title:
  nova compute overwrites binding-profile when updating a direct port

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719327/+subscriptions



[Yahoo-eng-team] [Bug 1658078] [NEW] AttributeError: 'NoneType' object has no attribute 'support_requests'

2017-01-20 Thread Moshe Levi
Public bug reported:

When the compute uses the ironic driver and the scheduler is configured
with the PCI passthrough filter, the VM goes into an ERROR state and we
see the following error in the scheduler:
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server [req-d627c45c-a5cf-47bc-a8d1-fe4669516380 admin admin] Exception during message handling
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 155, in _process_incoming
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 222, in dispatch
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 192, in _do_dispatch
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/manager.py", line 84, in select_destinations
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     dests = self.driver.select_destinations(ctxt, spec_obj)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 51, in select_destinations
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     selected_hosts = self._schedule(context, spec_obj)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 103, in _schedule
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     spec_obj, index=num)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/host_manager.py", line 572, in get_filtered_hosts
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     hosts, spec_obj, index)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/filters.py", line 89, in get_filtered_objects
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     list_objs = list(objs)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/filters.py", line 44, in filter_all
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     if self._filter_one(obj, spec_obj):
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/filters/__init__.py", line 26, in _filter_one
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     return self.host_passes(obj, filter_properties)
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/scheduler/filters/pci_passthrough_filter.py", line 48, in host_passes
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server     if not host_state.pci_stats.support_requests(pci_requests.requests):
2017-01-19 06:00:35.887 105364 ERROR oslo_messaging.rpc.server AttributeError: 'NoneType' object has no attribute 'support_requests'

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1658078

Title:
  AttributeError: 'NoneType' object has no attribute 'support_requests'

Status in OpenStack Compute (nova):
  New


[Yahoo-eng-team] [Bug 1618382] Re: test_update_instance_port_admin_state fails sometimes with DB update error

2016-12-21 Thread Moshe Levi
We can't reproduce it, so moving it to Invalid.

** Changed in: neutron
   Status: Incomplete => Invalid

https://bugs.launchpad.net/bugs/1618382

Title:
  test_update_instance_port_admin_state fails sometimes with DB update
  error

Status in neutron:
  Invalid

Bug description:
  Sometimes in Mellanox CI we see that
  test_update_instance_port_admin_state[1] fails with error [2]
  "StaleDataError: UPDATE statement on table 'standardattributes'
  expected to update 1 row(s); 0 were matched." [3]

  [1] 
http://13.69.151.247/Test_Neutron_SRIOV_cloudx25/233_cloudx-25//testr_results.html.gz
  [2] http://paste.openstack.org/show/564796/
  [3] 
http://13.69.151.247/Test_Neutron_SRIOV_cloudx25/233_cloudx-25/logs/q-svc.log.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618382/+subscriptions



[Yahoo-eng-team] [Bug 1597208] Re: Failed to create an instance using macvtap because of "_" in interface vf_name (regex miss)

2016-12-20 Thread Moshe Levi
** Changed in: neutron
   Status: Fix Committed => Fix Released

https://bugs.launchpad.net/bugs/1597208

Title:
  Failed to create an instance using macvtap because of "_" in
  interface vf_name (regex miss)

Status in neutron:
  Fix Released

Bug description:
  Steps to reproduce:

  1) neutron port-create --name port1 --binding:vnic_type=macvtap private
  2) nova boot --flavor 2 --image  --nic port-id= vm_1

  Interface Name on Compute = "p2p1_0"

  From n-cpu.log :

  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Failed to allocate network(s)
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Traceback (most recent call last):
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/compute/manager.py", line 2064, in _build_and_run_instance
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     block_device_info=block_device_info)
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2780, in spawn
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     block_device_info=block_device_info)
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4946, in _create_domain_and_network
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     raise exception.VirtualInterfaceCreateException()
  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] VirtualInterfaceCreateException: Virtual Interface creation failed
  2016-06-29 06:01:04.593 1265 DEBUG nova.compute.utils [req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Virtual Interface creation failed notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:284
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Build of instance 7a3063f5-43a7-4d25-b23e-335a2a3274ab aborted: Failed to allocate the network(s), not rescheduling.
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Traceback (most recent call last):
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/compute/manager.py", line 1926, in _do_build_and_run_instance
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     filter_properties)
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/compute/manager.py", line 2102, in _build_and_run_instance
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     reason=msg)
  2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] BuildAbortException: Build of instance 7a3063f5-43a7-4d25-b23e-335a2a3274ab aborted: Failed to allocate the network(s), not rescheduling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597208/+subscriptions



[Yahoo-eng-team] [Bug 1627987] [NEW] [RFE] SR-IOV accelerated OVS integration

2016-09-27 Thread Moshe Levi
Public bug reported:

SR-IOV accelerated Open vSwitch significantly improves the performance of OVS 
while maintaining its core functionality.
The idea is to leverage SR-IOV technology with OVS control plane management.

Change on the neutron side:
1. extend the OVS mechanism driver to bind the direct port
2. on the agent side we will add a new bridge with datapath_type=hw_acc
3. add a check to the OVS mechanism driver so that when the agent
datapath_type is hw_acc and the port is direct, it will bind the port

The vif changes are:
The mechanism driver will send a new vif type to Nova, which will create
a hostdev interface (SR-IOV VF passthrough), look up the SR-IOV VF
representor, and plug it into br-int.

I will soon add references to the kernel and OVS upstream changes.
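Step 2 above can be sketched with ovs-vsctl; the hw_acc datapath type is
the value proposed in this RFE, not an existing OVS datapath, and the
bridge name is illustrative:

```shell
# Create the bridge with the proposed hardware-accelerated datapath
# type so the mechanism driver can detect it from the agent report
ovs-vsctl --may-exist add-br br-int -- \
    set Bridge br-int datapath_type=hw_acc
```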

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1627987

Title:
  [RFE] SR-IOV accelerated OVS integration

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627987/+subscriptions



[Yahoo-eng-team] [Bug 1622854] [NEW] pci: double pci migration is putting vm in ERROR

2016-09-13 Thread Moshe Levi
Public bug reported:

nova master
devstack multinode with 2 compute nodes
1. booting vm with direct port
2. nova migrate 128a2ba4-fb6e-49f4-a6e0-45cde1c60215
3. nova resize-confirm 128a2ba4-fb6e-49f4-a6e0-45cde1c60215
4. nova migrate 128a2ba4-fb6e-49f4-a6e0-45cde1c60215
5. nova resize-confirm 128a2ba4-fb6e-49f4-a6e0-45cde1c60215

The second migration failed with this error:

2016-09-12 13:12:45.750 8388 DEBUG oslo_concurrency.lockutils 
[req-a4a0126a-215a-489a-b043-ad38d3b5e28d - -] Lock "compute_resources" 
released by "nova.compute.resource_tracker._update_available_resource" :: held 
0.143s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
[req-a4a0126a-215a-489a-b043-ad38d3b5e28d - -] Error updating resources for 
node r-dcs224.mtr.labs.mlnx.
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager Traceback (most recent 
call last):
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/compute/manager.py", line 
6408, in update_available_resource_for_node
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
rt.update_available_resource(context)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/compute/resource_tracker.py",
 line 526, in update_available_resource
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager return f(*args, 
**kwargs)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/compute/resource_tracker.py",
 line 580, in _update_available_resource
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
self.pci_tracker.clean_usage(instances, migrations, orphans)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/pci/manager.py", line 326, 
in clean_usage
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
self._free_device(dev)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/pci/manager.py", line 270, 
in _free_device
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager freed_devs = 
dev.free(instance)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/objects/pci_device.py", 
line 397, in free
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
hopestatus=ok_statuses)
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager PciDeviceInvalidStatus: 
PCI device 3::03:00.5 is available instead of ('allocated', 'claimed')
2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager
2016-09-12 13:12:46.220 8388 DEBUG oslo_service.periodic_task 
[req-a4a0126a-215a-489a-b043-ad38d3b5e28d - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
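
The PciDeviceInvalidStatus failure above is a status guard: freeing a PCI device is only legal from the 'allocated' or 'claimed' state, and after the double migration the device is already 'available'. A toy model of that guard (an illustration, not nova's actual implementation):

```python
class PciDeviceInvalidStatus(Exception):
    pass

def free_device(status):
    # Toy model of the guard in nova's PCI device free path: freeing is
    # only allowed from 'allocated' or 'claimed'; a second free finds the
    # device already 'available' and raises, as in the traceback above.
    ok_statuses = ("allocated", "claimed")
    if status not in ok_statuses:
        raise PciDeviceInvalidStatus(
            "PCI device is %s instead of %s" % (status, (ok_statuses,)))
    return "available"
```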

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622854

Title:
  pci: double pci migration is putting vm in ERROR

Status in OpenStack Compute (nova):
  New

Bug description:
  nova master
  devstack multinode with 2 compute nodes
  1. booting vm with direct port
  2. nova migrate 128a2ba4-fb6e-49f4-a6e0-45cde1c60215
  3. nova resize-confirm 128a2ba4-fb6e-49f4-a6e0-45cde1c60215
  4. nova migrate 128a2ba4-fb6e-49f4-a6e0-45cde1c60215
  5. nova resize-confirm 128a2ba4-fb6e-49f4-a6e0-45cde1c60215

  The second migration failed with this error:

  2016-09-12 13:12:45.750 8388 DEBUG oslo_concurrency.lockutils 
[req-a4a0126a-215a-489a-b043-ad38d3b5e28d - -] Lock "compute_resources" 
released by "nova.compute.resource_tracker._update_available_resource" :: held 
0.143s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
  2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
[req-a4a0126a-215a-489a-b043-ad38d3b5e28d - -] Error updating resources for 
node r-dcs224.mtr.labs.mlnx.
  2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 
"/.autodirect/mtrswgwork/moshele/openstack/nova/nova/compute/manager.py", line 
6408, in update_available_resource_for_node
  2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-09-12 13:12:45.750 8388 ERROR nova.compute.manager   File 

[Yahoo-eng-team] [Bug 1614092] Re: SRIOV - PF / VM that assign to PF does not get vlan tag

2016-08-22 Thread Moshe Levi
@Eran,

Please disregard my previous comment.
It is not possible to configure the VLAN, because the whole PF is passed 
through to the guest.
The user has to create the VLAN interface inside the guest, either manually or 
via cloud-init.
I am not aware of an easy solution for this.
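
Creating the VLAN interface inside the guest can be done with cloud-init's network configuration. A sketch, where `eth0` and VLAN ID 100 are placeholder values for the PF interface name and the Neutron network's segmentation ID:

```yaml
# cloud-init network config (version 2) -- a sketch; eth0 and VLAN 100
# are placeholders for the PF interface and the internal VLAN tag
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
  vlans:
    eth0.100:
      id: 100
      link: eth0
      dhcp4: true
```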

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Won't Fix => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614092

Title:
  SRIOV - PF / VM that assign to PF  does not get vlan tag

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  While testing the "Manage SR-IOV PFs as Neutron ports" RFE, I found that a VM 
booted with a Neutron port of vnic_type direct-physical does not get access to 
the DHCP server. 
  The problem is that the PF / VM does not get tagged with the internal VLAN.
  Workaround: 
  Enter the VM via the console and set up a VLAN interface. 


  version RHOS 10 
  python-neutronclient-4.2.1-0.20160721230146.3b1c538.el7ost.noarch
  openstack-neutron-common-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  python-neutron-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-fwaas-9.0.0-0.20160720211704.c3e491c.el7ost.noarch
  openstack-neutron-metering-agent-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-openvswitch-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  puppet-neutron-9.1.0-0.20160725142451.4061b39.el7ost.noarch
  python-neutron-lib-0.2.1-0.20160726025313.405f896.el7ost.noarch
  openstack-neutron-ml2-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-sriov-nic-agent-9.0.0-0.20160726001729.6a23add.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614092/+subscriptions



[Yahoo-eng-team] [Bug 1611302] [NEW] SR-IOV: deprecate supported_pci_vendor_devs

2016-08-09 Thread Moshe Levi
Public bug reported:

To reduce the complexity of configuring SR-IOV, I want to deprecate
the supported_pci_vendor_devs option. This option performs extra
validation that the PCI vendor ID and product ID provided by
nova in the neutron port binding profile match
the vendor ID and product ID in supported_pci_vendor_devs.
This is redundant, because nova-scheduler is the place to do this
validation and select a suitable hypervisor, and the compute node
already validates this through the pci_passthrough_whitelist
option in nova.conf.


see http://lists.openstack.org/pipermail/openstack-dev/2016-August/101108.html
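
As an illustration of the redundancy, the same device class ends up validated in two places today (device names and the 15b3:1004 vendor/product pair below are example values):

```ini
# nova.conf on the compute node -- already restricts which devices may be
# passed through (example whitelist entry)
[DEFAULT]
pci_passthrough_whitelist = {"devname": "ens2f0", "physical_network": "physnet1"}

# ml2_conf_sriov.ini -- the extra validation this bug proposes to deprecate
# (15b3:1004 is an example Mellanox VF vendor:product ID)
[ml2_sriov]
supported_pci_vendor_devs = 15b3:1004
```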

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress


** Tags: sriov-pci-pt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611302

Title:
  SR-IOV: deprecate supported_pci_vendor_devs

Status in neutron:
  In Progress

Bug description:
  To reduce the complexity of configuring SR-IOV, I want to deprecate
  the supported_pci_vendor_devs option. This option performs extra
  validation that the PCI vendor ID and product ID provided by
  nova in the neutron port binding profile match
  the vendor ID and product ID in supported_pci_vendor_devs.
  This is redundant, because nova-scheduler is the place to do this
  validation and select a suitable hypervisor, and the compute node
  already validates this through the pci_passthrough_whitelist
  option in nova.conf.

  
  see http://lists.openstack.org/pipermail/openstack-dev/2016-August/101108.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611302/+subscriptions



[Yahoo-eng-team] [Bug 1607219] Re: revert-resize doesn't drop new pci devices

2016-07-28 Thread Moshe Levi
duplicate to https://bugs.launchpad.net/nova/+bug/1594230

** Changed in: nova
   Status: In Progress => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607219

Title:
  revert-resize doesn't drop new pci devices

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This commit https://review.openstack.org/#/c/307124/ fixes resize for PCI 
devices, but drop_move_claim always takes the old PCI device for the migration 
context.
  It should get the PCI device according to the prefix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607219/+subscriptions



[Yahoo-eng-team] [Bug 1607219] [NEW] revert-resize doesn't drop new pci devices

2016-07-28 Thread Moshe Levi
Public bug reported:

This commit https://review.openstack.org/#/c/307124/ fixes resize for PCI 
devices, but drop_move_claim always takes the old PCI device for the migration 
context.
It should get the PCI device according to the prefix.
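
A minimal sketch of the intended behavior, using a hypothetical helper (nova's real MigrationContext is an object, not a dict): drop_move_claim should select the PCI devices recorded under the prefix that matches the move operation, instead of hard-coding the old side.

```python
def pci_devices_for_claim(migration_context, prefix):
    # Hypothetical helper: migration_context is modeled as a plain dict;
    # nova records resources under old_/new_ prefixed fields. The point of
    # the fix is to select by prefix instead of always taking the old side.
    return migration_context.get(prefix + "pci_devices", [])

# toy migration context with both sides of the move recorded
ctx = {"old_pci_devices": ["0000:03:00.5"],
       "new_pci_devices": ["0000:81:00.2"]}
```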

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607219

Title:
  revert-resize doesn't drop new pci devices

Status in OpenStack Compute (nova):
  New

Bug description:
  This commit https://review.openstack.org/#/c/307124/ fixes resize for PCI 
devices, but drop_move_claim always takes the old PCI device for the migration 
context.
  It should get the PCI device according to the prefix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607219/+subscriptions



[Yahoo-eng-team] [Bug 1606941] [NEW] nova hypervisor-show is broken when hypervisor_type is ironic

2016-07-27 Thread Moshe Levi
Public bug reported:

OpenStack master branch, configured to use Ironic.

Running:
stack@r-dcs88:~/ironic-inspector$ nova hypervisor-show 
98f78cb6-a157-4580-bbc7-7b0f9ea03245
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-0820f738-e07b-47f7-8f11-1399554e22d2)

The nova-api log shows:

2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 132, in detail
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     return self._detail(req)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 148, in _detail
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     True, req) for hyp in compute_nodes]
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 72, in _view_hypervisor
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     hyp_dict['cpu_info'] = jsonutils.loads(hypervisor.cpu_info)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 235, in loads
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     return _default_decoder.decode(s)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     raise ValueError("No JSON object could be decoded")
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions ValueError: No JSON object could be decoded
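
The root of the traceback is that the ironic compute node reports an empty cpu_info string, which json.loads rejects. A minimal sketch of a guard (a hypothetical helper for illustration, not nova's actual fix):

```python
import json

def load_cpu_info(cpu_info):
    # Hypothetical guard: an ironic hypervisor may report cpu_info as an
    # empty string, which json.loads rejects with ValueError ("No JSON
    # object could be decoded" on Python 2).
    if not cpu_info:
        return {}
    return json.loads(cpu_info)
```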

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606941

Title:
  nova hypervisor-show is broken when hypervisor_type is ironic

Status in OpenStack Compute (nova):
  New

Bug description:
  OpenStack master branch, configured to use Ironic.

  Running:
  stack@r-dcs88:~/ironic-inspector$ nova hypervisor-show 
98f78cb6-a157-4580-bbc7-7b0f9ea03245
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-0820f738-e07b-47f7-8f11-1399554e22d2)

  The nova-api log shows:

  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 132, in detail
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     return self._detail(req)
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 148, in _detail
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions     True, req) for hyp in compute_nodes]
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions 

[Yahoo-eng-team] [Bug 1590556] [NEW] race condition with resize causing old resources not to be free

2016-06-08 Thread Moshe Levi
Public bug reported:

While I was working on fixing resize for PCI passthrough [1], I noticed
the following issue in resize.


If you are using a small image and you run resize-confirm very quickly,
the old resources are not freed.


After debugging this issue I found its root cause.


A good run of resize proceeds as follows:


When doing a resize, _update_usage_from_migration in the resource
tracker is called twice.

1.   The first call returns the instance type of the new flavor and
enters this case:

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L718

2.   It then stores the migration and the new instance_type in
tracked_migrations:

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

3.   The second call returns the old instance_type and enters this
case:

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L725

4.   It then overwrites the tracked_migrations entry with the migration
and the old instance type:

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

5.   When doing resize-confirm, drop_move_claim is called with the old
instance type:

https://github.com/openstack/nova/blob/9a05d38f48ef0f630c5e49e332075b273cee38b9/nova/compute/manager.py#L3369

6.   drop_move_claim compares the instance_type['id'] from
tracked_migrations with instance_type.id (which is the old one).

7.   Because they are equal, it removes the old resource usage:

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L328


But with a small image like CirrOS, if the resize-confirm is issued
quickly, the second call of _update_usage_from_migration is never
executed.

The result is that when we enter drop_move_claim it compares against
the new instance_type, so this expression is false:
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L314

This means that this code block is not executed:
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L326
and therefore the old resources are not freed.
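
The overwrite-then-compare sequence can be sketched as follows (a toy model using plain dicts; nova's real resource tracker and objects differ):

```python
# Toy model of the race: tracked_migrations maps migration id -> the last
# instance_type recorded for it. In the good run the second call overwrites
# the new type with the old one, so drop_move_claim's id comparison succeeds.
tracked_migrations = {}

def update_usage_from_migration(migration_id, instance_type):
    tracked_migrations[migration_id] = instance_type

def drop_move_claim(migration_id, old_instance_type):
    tracked = tracked_migrations.get(migration_id)
    # old resources are freed only when the tracked type matches the old one
    return tracked is not None and tracked["id"] == old_instance_type["id"]

old_type, new_type = {"id": 1}, {"id": 2}

# good run: called twice, the second call overwrites with the old type
update_usage_from_migration("m1", new_type)
update_usage_from_migration("m1", old_type)
good_run_freed = drop_move_claim("m1", old_type)

# racy run: resize-confirm arrives before the second call -> nothing freed
tracked_migrations.clear()
update_usage_from_migration("m1", new_type)
racy_run_freed = drop_move_claim("m1", old_type)
```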

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590556

Title:
  race condition with resize causing old resources not to be  free

Status in OpenStack Compute (nova):
  New

Bug description:
  While I was working on fixing resize for PCI passthrough [1], I
  noticed the following issue in resize.


  If you are using a small image and you run resize-confirm very
  quickly, the old resources are not freed.


  After debugging this issue I found its root cause.


  A good run of resize proceeds as follows:


  When doing a resize, _update_usage_from_migration in the resource
  tracker is called twice.

  1.   The first call returns the instance type of the new flavor and
  enters this case:

  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L718

  2.   It then stores the migration and the new instance_type in
  tracked_migrations:

  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

  3.   The second call returns the old instance_type and enters this
  case:

  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L725

  4.   It then overwrites the tracked_migrations entry with the
  migration and the old instance type:

  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

  5.   When doing resize-confirm, drop_move_claim is called with the
  old instance type:

  https://github.com/openstack/nova/blob/9a05d38f48ef0f630c5e49e332075b273cee38b9/nova/compute/manager.py#L3369

  6.   drop_move_claim compares the instance_type['id'] from
  tracked_migrations with instance_type.id (which is the old one).

  7.   Because they are equal, it removes the old resource usage:

  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L328


  But with a small image like CirrOS, if the resize-confirm is issued
  quickly, the second call of _update_usage_from_migration is never
  executed.

  The result is that when we enter drop_move_claim it compares against
  the new instance_type, so this expression is false:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L314

  This means that this code block is not executed:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L326
  and therefore the old resources are not freed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590556/+subscriptions


[Yahoo-eng-team] [Bug 1532534] Re: [RFE] InfiniBand support

2016-04-07 Thread Moshe Levi
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532534

Title:
  [RFE] InfiniBand support

Status in Ironic:
  In Progress
Status in Ironic Inspector:
  In Progress

Bug description:
  Today Ironic doesn't support InfiniBand interfaces.
  This RFE adds support for the following:
  1. Hardware inspection for InfiniBand - by increasing the address to 60 
characters
  2. PXE boot over an InfiniBand interface - by adding the GID (port address) 
as client-id to the neutron port

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1532534/+subscriptions



[Yahoo-eng-team] [Bug 1397675] Re: Updating admin_state_up for port with vnic_type has no effect when not using sriov nic agent

2016-04-03 Thread Moshe Levi
agent_required is deprecated, so I am marking this as invalid.

** Changed in: neutron
   Status: Confirmed => New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397675

Title:
  Updating admin_state_up for port with vnic_type has no effect
  when not using sriov nic agent

Status in neutron:
  Invalid

Bug description:
  Updating admin_state_up for a port with binding:vnic_type='direct'
  has no effect:

  Version
  ===
  RHEL7.0
  openstack-neutron-2014.2-11.el7ost

  How to reproduce
  ===

  1. Make sure you have a connectivity to the Instance with the port
  attached

  2. Run
  #neutron port-update --admin_state_up=False 

  3. Check connectivity - there is still connectivity to the Instance.

  Expected result
  ==
  When updating the port admin_state_up to False there should be no 
connectivity to the instance and when updating
  admin_state_up to True there should be connectivity to the Instance.

  If the change of the admin_state_up is not possible (E.g. when using
  SR-IOV the NIC doesn't support VF's link state change) the operation
  should fail with an error.

  We also need to document this behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397675/+subscriptions



[Yahoo-eng-team] [Bug 1565466] [NEW] pci detach failed with 'PciDevice' object has no attribute '__getitem__'

2016-04-03 Thread Moshe Levi
Public bug reported:

When doing suspend with a PCI device, nova tries to detach the PCI device from 
the libvirt dom.
After calling guest.detach_device, nova checks the dom to ensure the detach 
has finished.
If that detach fails (because of an old qemu, in my case), the 
_detach_pci_devices method fails with the following error instead of raising 
PciDeviceDetachFailed:


2016-03-31 08:50:46.727 10338 DEBUG nova.objects.instance 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Lazy-loading 
'pci_devices' on Instance uuid 7114fa62-10bb-45dc-b64e-b301bfce4dfa 
obj_load_attr /opt/stack/nova/nova/objects/instance.py:895
2016-03-31 08:50:46.727 10338 DEBUG oslo_messaging._drivers.amqpdriver 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] CALL msg_id: 
c96a579643054867adc0e119d93cc6a9 exchange 'nova' topic 'conductor' _send 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:454
2016-03-31 08:50:46.745 10338 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: c96a579643054867adc0e119d93cc6a9 __call__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:302
2016-03-31 08:50:46.751 10338 DEBUG nova.virt.libvirt.config 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Generated XML ('\n  \n\n  \n\n',)  
to_xml /opt/stack/nova/nova/virt/libvirt/config.py:82
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] Setting instance vm_state to ERROR
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] Traceback (most recent call last):
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/compute/manager.py", line 6588, in 
_error_out_instance_on_exception
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] yield
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4196, in suspend_instance
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self.driver.suspend(context, instance)
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2641, in suspend
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self._detach_sriov_ports(context, 
instance, guest)
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3432, in _detach_sriov_ports
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self._detach_pci_devices(guest, 
sriov_devs)
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3350, in _detach_pci_devices
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] dbsf = 
pci_utils.parse_address(dev['address'])
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] TypeError: 'PciDevice' object has no 
attribute '__getitem__'
2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]
2016-03-31 08:50:51.792 10338 DEBUG oslo_messaging._drivers.amqpdriver 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] CALL msg_id: 
b5353aecfd4a44aa8735c46a0427a12d exchange 'nova' topic 'conductor' _send 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:454
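
The TypeError at the bottom of the traceback comes from subscripting a PciDevice object (dev['address']) instead of using attribute access. A minimal stand-in illustrating the failure mode and the fix (the real nova object has many more fields):

```python
class PciDevice:
    # minimal stand-in for nova.objects.PciDevice; like the real object it
    # supports attribute access but not subscripting (no __getitem__)
    def __init__(self, address):
        self.address = address

def parse_address_buggy(dev):
    return dev["address"]   # TypeError: object has no __getitem__

def parse_address_fixed(dev):
    return dev.address      # attribute access works

dev = PciDevice("0000:03:00.5")
```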

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565466

Title:
  pci detach failed with 'PciDevice' object has no attribute
  '__getitem__'

Status in OpenStack Compute (nova):
  New

Bug description:
  When doing suspend with a PCI device, nova tries to detach the PCI device 
from the libvirt dom.
  After calling guest.detach_device, nova checks the dom to ensure the detach 
has finished.
  If that detach fails (because of an old qemu, in my case), the 
_detach_pci_devices method fails with the following error instead of raising 
PciDeviceDetachFailed:

  
  2016-03-31 08:50:46.727 10338 DEBUG nova.objects.instance 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Lazy-loading 
'pci_devices' on Instance uuid 7114fa62-10bb-45dc-b64e-b301bfce4dfa 
obj_load_attr /opt/stack/nova/nova/objects/instance.py:895
  

[Yahoo-eng-team] [Bug 1532534] Re: [RFE] InfiniBand support

2016-01-12 Thread Moshe Levi
** Changed in: ironic-inspector
 Assignee: (unassigned) => Moshe Levi (moshele)

** Changed in: ironic-inspector
   Status: Confirmed => In Progress

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Moshe Levi (moshele)

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532534

Title:
  [RFE] InfiniBand support

Status in Ironic:
  In Progress
Status in Ironic Inspector:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Today Ironic doesn't support InfiniBand interfaces.
  This RFE adds support for the following:
  1. Hardware inspection for InfiniBand - by increasing the address to 60 
characters
  2. PXE boot over an InfiniBand interface - by adding the GID (port address) 
as client-id to the neutron port

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1532534/+subscriptions



[Yahoo-eng-team] [Bug 1524643] Re: port 'binding:profile' can't be removed when VM is deleted

2016-01-12 Thread Moshe Levi
** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524643

Title:
  port 'binding:profile' can't be removed when VM is deleted

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  reproduce this problem:
  1. create a sriov port
  2. use this port to boot a VM
  3. delete this VM
  4. we can see the port still exists, but the 'binding:profile' is not removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524643/+subscriptions



[Yahoo-eng-team] [Bug 1525076] Re: delete vm, port mac is not be reset

2016-01-12 Thread Moshe Levi
You can work around the issue by setting the MACs of all the VFs to 
00:00:00:00:00:01 before you start to use them; then libvirt will clean the VF.
The problem is with some drivers that don't accept 00:00:00:00:00:00.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525076

Title:
  delete vm, port mac is not be reset

Status in neutron:
  Invalid

Bug description:
  1. Create a VM with an SR-IOV port that uses a fixed MAC.
  2. Delete the VM.
  3. Use "ip link show" to look at the SR-IOV port's VF MAC; we found it does 
not change.

  We think that if we delete a VM which used a fixed MAC, the port's MAC
  should be cleared.

  def unplug_hw_veb(self, instance, vif):
  if vif['vnic_type'] == network_model.VNIC_TYPE_MACVTAP:
  # The ip utility doesn't accept the MAC 00:00:00:00:00:00.
  # Therefore, keep the MAC unchanged.  Later operations on
  # the same VF will not be affected by the existing MAC.
  linux_net.set_vf_interface_vlan(vif['profile']['pci_slot'],
  mac_addr=vif['address'])

  mac_addr=vif['address'] should be a random MAC instead.
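
A hedged sketch of generating such a random MAC (locally administered, unicast, never all-zero; an illustration only, not nova's implementation):

```python
import random

def random_unicast_mac():
    # First octet 0x02: locally-administered bit set, multicast bit clear,
    # so the address is valid for a VF and can never be 00:00:00:00:00:00,
    # which some VF drivers reject.
    octets = [0x02] + [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join("%02x" % o for o in octets)

mac = random_unicast_mac()
```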

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525076/+subscriptions



[Yahoo-eng-team] [Bug 1499269] Re: cannot attach direct type port (sr-iov) to existing instance

2016-01-12 Thread Moshe Levi
** No longer affects: nova

** Project changed: neutron => nova

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499269

Title:
  cannot attach direct type port (sr-iov) to existing instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Whenever I try to attach a direct port to an existing instance It
  fails:

  #neutron port-create Management --binding:vnic_type direct
  Created a new port:
  
  +-----------------------+-------------------------------------------------------------------------------------+
  | Field                 | Value                                                                               |
  +-----------------------+-------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                |
  | allowed_address_pairs |                                                                                     |
  | binding:host_id       |                                                                                     |
  | binding:profile       | {}                                                                                  |
  | binding:vif_details   | {}                                                                                  |
  | binding:vif_type      | unbound                                                                             |
  | binding:vnic_type     | direct                                                                              |
  | device_id             |                                                                                     |
  | device_owner          |                                                                                     |
  | fixed_ips             | {"subnet_id": "6c82ff4c-124e-469a-8444-1446cc5d979f", "ip_address": "10.92.29.123"} |
  | id                    | ce455654-4eb5-4b89-b868-b426381951c8                                                |
  | mac_address           | fa:16:3e:6b:15:e8                                                                   |
  | name                  |                                                                                     |
  | network_id            | 5764ca50-1f30-4daa-8c86-a21fed9a679c                                                |
  | security_groups       | 5d2faf7b-2d32-49a8-978e-a91f57ece17d                                                |
  | status                | DOWN                                                                                |
  | tenant_id             | d5ecb0eea96f4996b565fd983a768b11                                                    |
  +-----------------------+-------------------------------------------------------------------------------------+

  # nova interface-attach --port-id ce455654-4eb5-4b89-b868-b426381951c8 
voicisc4srv1
  ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID: 
req-11516bf4-7ab7-414c-a4ee-63e44aaf00a5)

  nova-compute.log:

  0a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Exception 
during message handling: Failed to attach network adapter device to 
056d455a-314d-4853-839e-70229a56dfcd
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6632, in 
attach_interface
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher 
port_id, requested_ip)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 443, in 
decorated_function
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1527991] [NEW] SR-IOV port doesn't reach router internal port when they on the same physical server

2015-12-20 Thread Moshe Levi
Public bug reported:

When an instance with an SR-IOV port and the router port reside on the same
physical server, I can't use the floating IP to access the VM.
It works if the SR-IOV port instance is on a different physical server.

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New


** Tags: sriov-pci-pt

** Tags added: sriov-pci-pt

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527991

Title:
  SR-IOV port doesn't reach router internal  port when they on the same
  physical server

Status in neutron:
  New

Bug description:
  When an instance with an SR-IOV port and the router port reside on the same
physical server, I can't use the floating IP to access the VM.
  It works if the SR-IOV port instance is on a different physical server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527991/+subscriptions



[Yahoo-eng-team] [Bug 1527307] [NEW] SR-IOV Agent doesn't expose loaded extenstions

2015-12-17 Thread Moshe Levi
Public bug reported:

Using the SR-IOV L2 agent with extensions=qos configured, running

# neutron agent-show 

should show the loaded extensions.

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New


** Tags: sriov-pci-pt

** Tags added: sriov-pci-pt

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527307

Title:
  SR-IOV Agent doesn't expose loaded extenstions

Status in neutron:
  New

Bug description:
  Using the SR-IOV L2 agent with extensions=qos configured, running

  # neutron agent-show 

  should show the loaded extensions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527307/+subscriptions



[Yahoo-eng-team] [Bug 1523083] [NEW] launch a vm with macvtap port is not working with krenel < 3.13

2015-12-05 Thread Moshe Levi
Public bug reported:

The SR-IOV agent checks whether a VF is assigned to a macvtap device by the
existence of an upper_macvtap symbolic link:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/eswitch_manager.py#L85-L86

The upper_macvtap symlink exists only in kernel 3.13 and above; see
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/net/macvlan.c?id=5831d66e8097aedfa3bc35941cf265ada2352317
"net: create sysfs symlinks for neighbour devices".

This is a problem when using RHEL 7.1, which ships with kernel 3.10.

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: Confirmed


** Tags: sriov-pci-pt

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

** Changed in: neutron
   Status: New => Confirmed

** Description changed:

  the sriov agent check if vf is assigned to macvtap by the exists of
  upper_macvtap symbolic
  
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/eswitch_manager.py#L85-L86
  
- The upper_macvtap symbolic exists only in kernel 3.13 and above 
+ The upper_macvtap symbolic exists only in kernel 3.13 and above
  see 
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/net/macvlan.c?id=5831d66e8097aedfa3bc35941cf265ada2352317
- "net: create sysfs symlinks for neighbour devices"
+ "net: create sysfs symlinks for neighbour devices".
+ 
+ This is a problem when using rhel 7.1 which comes with kernel 3.10

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523083

Title:
  launch a vm with  macvtap port is not working with krenel < 3.13

Status in neutron:
  Confirmed

Bug description:
  The SR-IOV agent checks whether a VF is assigned to a macvtap device by
  the existence of an upper_macvtap symbolic link:

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/eswitch_manager.py#L85-L86

  The upper_macvtap symlink exists only in kernel 3.13 and above; see
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/net/macvlan.c?id=5831d66e8097aedfa3bc35941cf265ada2352317
  "net: create sysfs symlinks for neighbour devices".

  This is a problem when using RHEL 7.1, which ships with kernel 3.10.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523083/+subscriptions



[Yahoo-eng-team] [Bug 1511234] Re: Decompose fully mlnx ml2 driver

2015-12-04 Thread Moshe Levi
** Changed in: networking-mlnx
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511234

Title:
  Decompose fully mlnx ml2 driver

Status in Mellanox backend  integration with Neutron (networking-mlnx):
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Nothing requires the mlnx ml2 driver to be defined in neutron, so let's
  move it to networking-mlnx.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-mlnx/+bug/1511234/+subscriptions



[Yahoo-eng-team] [Bug 1453410] Re: mlnx_direct removal

2015-12-04 Thread Moshe Levi
** Changed in: networking-mlnx
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453410

Title:
  mlnx_direct removal

Status in Mellanox backend  integration with Neutron (networking-mlnx):
  Invalid
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  mlnx_direct vif type is not used since Juno release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-mlnx/+bug/1453410/+subscriptions



[Yahoo-eng-team] [Bug 1513467] [NEW] resource tracker incorrect log of pci stats

2015-11-05 Thread Moshe Levi
Public bug reported:

In nova-compute, the resource tracker logs the PCI stats as
pci_stats=PciDevicePoolList(objects=[PciDevicePool]
without showing the PciDevicePool values.

This is on the nova master branch.

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513467

Title:
  resource tracker incorrect log of  pci stats

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In nova-compute, the resource tracker logs the PCI stats as
pci_stats=PciDevicePoolList(objects=[PciDevicePool]
  without showing the PciDevicePool values.

  This is on the nova master branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1513467/+subscriptions



[Yahoo-eng-team] [Bug 1499204] [NEW] wrong check for physical function in pci utils

2015-09-24 Thread Moshe Levi
Public bug reported:

In pci utils, the is_physical_function function checks this based on the
existence of virtfn* symbolic links. The check is incorrect because
if the PF doesn't have SR-IOV enabled (sriov_numvfs set to zero) there are no
virtfn* links, and nova-compute recognizes it as a VF.

see: 
root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent
class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor
commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem vpd
configdriver   infiniband_madlocal_cpus net 
   remove resource0_wc  subsystem_device
consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor
root@r-ufm160:/opt/stack/logs# cat 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs 
0


root@r-ufm160:/opt/stack/logs# echo 4 > 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs
root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent   virtfn3
class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor   vpd
commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem virtfn0
configdriver   infiniband_madlocal_cpus net 
   remove resource0_wc  subsystem_device  virtfn1
consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor  virtfn2

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress


** Tags: passthrough pci

** Tags added: pci-passthogth

** Tags removed: pci-passthogth
** Tags added: passthrough pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499204

Title:
  wrong check for physical function in pci utils

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In pci utils, the is_physical_function function checks this based on the
  existence of virtfn* symbolic links. The check is incorrect because
  if the PF doesn't have SR-IOV enabled (sriov_numvfs set to zero) there are
  no virtfn* links, and nova-compute recognizes it as a VF.

  see: 
  root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
  broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent
  class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor
  commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem vpd
  configdriver   infiniband_madlocal_cpus 
netremove resource0_wc  subsystem_device
  consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor
  root@r-ufm160:/opt/stack/logs# cat 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs 
  0

  
  root@r-ufm160:/opt/stack/logs# echo 4 > 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs
  root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
  broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent   virtfn3
  class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor   vpd
  commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem virtfn0
  configdriver   infiniband_madlocal_cpus 
netremove resource0_wc  subsystem_device  virtfn1
  consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor  virtfn2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499204/+subscriptions



[Yahoo-eng-team] [Bug 1492909] [NEW] QoS: Sr-IOV Agent doens't clear VF rate when deleteing VM

2015-09-07 Thread Moshe Levi
Public bug reported:

When launching a VM with a port that has a QoS policy and deleting the VM
after a while, the SR-IOV agent doesn't clear the VF max rate.
The expected behavior is to clear the VF max rate upon VM deletion.
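
The expected cleanup can be sketched as building `ip link set <pf> vf <idx>
rate 0`, since a rate of 0 disables the VF's max TX rate limit (the helper
name is hypothetical; only the command is built, not executed):

```python
def build_vf_rate_reset_cmd(pf_ifname, vf_index):
    """Build `ip link set <pf> vf <idx> rate 0`; per the ip-link man
    page, a rate of 0 disables VF TX rate limiting."""
    return ["ip", "link", "set", pf_ifname, "vf", str(vf_index), "rate", "0"]
```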

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492909

Title:
  QoS: Sr-IOV Agent  doens't clear VF rate when deleteing VM

Status in neutron:
  In Progress

Bug description:
  When launching a VM with a port that has a QoS policy and deleting the VM
after a while, the SR-IOV agent doesn't clear the VF max rate.
  The expected behavior is to clear the VF max rate upon VM deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492909/+subscriptions



[Yahoo-eng-team] [Bug 1488807] [NEW] SR-IOV: deprecate agent_required option

2015-08-26 Thread Moshe Levi
Public bug reported:

When SR-IOV was introduced in Juno, the agent supported only link state
changes. Some Intel cards don't support setting link state, so to
resolve this the SR-IOV mech driver supports agent and agentless modes.
Since Liberty the SR-IOV agent brings more functionality, such as
QoS and port security, so we want to make the agent mandatory.

This patch deprecates the agent_required option in Liberty
and updates the agent_required default to True.

IRC log:
http://eavesdrop.openstack.org/meetings/pci_passthrough/2015/pci_passthrough.2015-06-23-13.09.log.txt
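
The resulting configuration could look like this (a sketch; the [ml2_sriov]
section name is taken from the ml2 SR-IOV driver's config, and the file path
is an assumption):

```ini
# /etc/neutron/plugins/ml2/ml2_conf_sriov.ini (path assumed)
[ml2_sriov]
# Deprecated option; after this change the default becomes True, i.e.
# the sriovnicswitch mech driver expects a running SR-IOV agent.
agent_required = True
```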

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488807

Title:
  SR-IOV: deprecate agent_required option

Status in neutron:
  In Progress

Bug description:
  When SR-IOV was introduced in Juno, the agent supported only link state
  changes. Some Intel cards don't support setting link state, so to
  resolve this the SR-IOV mech driver supports agent and agentless modes.
  Since Liberty the SR-IOV agent brings more functionality, such as
  QoS and port security, so we want to make the agent mandatory.

  This patch deprecates the agent_required option in Liberty
  and updates the agent_required default to True.

  IRC log:
  
http://eavesdrop.openstack.org/meetings/pci_passthrough/2015/pci_passthrough.2015-06-23-13.09.log.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488807/+subscriptions



[Yahoo-eng-team] [Bug 1486053] [NEW] get get_policy_bandwidth_limit_rules_for_policy return all bw rules and not fillter by qos polciy

2015-08-18 Thread Moshe Levi
Public bug reported:

when listing qos-bandwidth-limit-rule-list, we get all the rules as
below instead of filtering them by policy

moshele@r-ufm183:/.autodirect/mtrswgwork/moshele/openstack/devstack$
neutron qos-bandwidth-limit-rule-list test2
 +--------------------------------------+----------------+----------+
 | id                                   | max_burst_kbps | max_kbps |
 +--------------------------------------+----------------+----------+
 | 79899489-f93b-488d-9dbc-178a109e8a34 | 0              | 1        |
 | 85ae8d6e-a73b-4016-b769-85f48b76c1a5 | 0              | 100      |
 +--------------------------------------+----------------+----------+
moshele@r-ufm183:/.autodirect/mtrswgwork/moshele/openstack/devstack$
neutron qos-bandwidth-limit-rule-list test
 +--------------------------------------+----------------+----------+
 | id                                   | max_burst_kbps | max_kbps |
 +--------------------------------------+----------------+----------+
 | 79899489-f93b-488d-9dbc-178a109e8a34 | 0              | 1        |
 | 85ae8d6e-a73b-4016-b769-85f48b76c1a5 | 0              | 100      |
 +--------------------------------------+----------------+----------+
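
The expected server-side behaviour is a simple filter by policy; as a sketch
(the qos_policy_id field name on the rule dict is an assumption):

```python
def rules_for_policy(rules, policy_id):
    """Return only the bandwidth-limit rules belonging to the given QoS
    policy -- the filtering the listing above is missing."""
    return [r for r in rules if r.get("qos_policy_id") == policy_id]
```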

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486053

Title:
  get get_policy_bandwidth_limit_rules_for_policy return all bw rules
  and  not fillter by qos polciy

Status in neutron:
  New

Bug description:
  when listing qos-bandwidth-limit-rule-list, we get all the rules as
  below instead of filtering them by policy

  moshele@r-ufm183:/.autodirect/mtrswgwork/moshele/openstack/devstack$
  neutron qos-bandwidth-limit-rule-list test2
   +--------------------------------------+----------------+----------+
   | id                                   | max_burst_kbps | max_kbps |
   +--------------------------------------+----------------+----------+
   | 79899489-f93b-488d-9dbc-178a109e8a34 | 0              | 1        |
   | 85ae8d6e-a73b-4016-b769-85f48b76c1a5 | 0              | 100      |
   +--------------------------------------+----------------+----------+
  moshele@r-ufm183:/.autodirect/mtrswgwork/moshele/openstack/devstack$
  neutron qos-bandwidth-limit-rule-list test
   +--------------------------------------+----------------+----------+
   | id                                   | max_burst_kbps | max_kbps |
   +--------------------------------------+----------------+----------+
   | 79899489-f93b-488d-9dbc-178a109e8a34 | 0              | 1        |
   | 85ae8d6e-a73b-4016-b769-85f48b76c1a5 | 0              | 100      |
   +--------------------------------------+----------------+----------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486053/+subscriptions



[Yahoo-eng-team] [Bug 1479694] [NEW] unable to change port state when using sriov agent

2015-07-30 Thread Moshe Levi
Public bug reported:

openstack version 
openstack-neutron-sriov-nic-agent-2015.1.0-1.el7.noarch
python-neutron-2015.1.0-1.el7.noarch
openstack-neutron-2015.1.0-1.el7.noarch
openstack-neutron-common-2015.1.0-1.el7.noarch

when changing the port state, this error appears in the log


#neutron port-update --admin_state_up=False/True port_name
The VF state should change to disable/enable - vf 1 MAC fa:16:3e:9b:59:2e,
vlan 3, spoof checking off, link-state enable/disable
From sriov-nic-agent.log:
Stderr: RTNETLINK answers: Operation not permitted
2015-07-29 17:54:33.714 2082 ERROR neutron.plugins.sriovnicagent.pci_lib 
[req-c232ddc4-c065-4459-8552-6c7af2d3ad10 ] Failed executing ip command
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib 
Traceback (most recent call last):
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib File 
/usr/lib/python2.7/site-packages/neutron/plugins/sriovnicagent/pci_lib.py, 
line 102, in set_vf_state
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib 
str(vf_index), state, status_str))
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 78, in 
_execute
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib 
log_fail_as_error=log_fail_as_error)
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 137, in 
execute
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib raise 
RuntimeError(m)
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib 
RuntimeError:
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib 
Command: ['ip', 'link', 'set', 'p2p1', 'vf', '2', 'state', 'disable']
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib Exit 
code: 2
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib Stdin:
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib Stdout:
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib 
Stderr: RTNETLINK answers: Operation not permitted
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib
2015-07-29 17:54:33.714 2082 TRACE neutron.plugins.sriovnicagent.pci_lib
2015-07-29 17:54:33.715 2082 ERROR 
neutron.plugins.sriovnicagent.sriov_nic_agent [req-c232ddc4-c065-4459-8552-6c7af2d3ad10 ] Failed to set device fa:16:3e:29:d8:78 state
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent Traceback (most recent call last):
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent File 
/usr/lib/python2.7/site-packages/neutron/plugins/sriovnicagent/sriov_nic_agent.py,
 line 175, in treat_device
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent admin_state_up)
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent File 
/usr/lib/python2.7/site-packages/neutron/plugins/sriovnicagent/eswitch_manager.py,
 line 251, in set_device_state
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent admin_state_up)
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent File 
/usr/lib/python2.7/site-packages/neutron/plugins/sriovnicagent/eswitch_manager.py,
 line 163, in set_device_state
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent return 
self.pci_dev_wrapper.set_vf_state(vf_index, state)
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent File 
/usr/lib/python2.7/site-packages/neutron/plugins/sriovnicagent/pci_lib.py, 
line 106, in set_vf_state
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent reason=e)
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent IpCommandError: ip command failed 
on device p2p1:
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent Command: ['ip', 'link', 'set', 
'p2p1', 'vf', '2', 'state', 'disable']
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent Exit code: 2
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent Stdin:
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent Stdout:
2015-07-29 17:54:33.715 2082 TRACE 
neutron.plugins.sriovnicagent.sriov_nic_agent Stderr: RTNETLINK answers: 
Operation not permitted

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479694

Title:
  unable to change port state when using sriov agent

[Yahoo-eng-team] [Bug 1460430] [NEW] refactor mlnx mechnism driver for infiniband only

2015-05-31 Thread Moshe Levi
Public bug reported:

Refactor the mlnx mechanism driver to be InfiniBand only.

The old mlnx mechanism driver was used for SR-IOV in Ethernet
and InfiniBand, but the PCI allocation wasn't done by nova.
Since Juno the sriovnicswitch mechanism driver was introduced for
SR-IOV in Ethernet, and Mellanox recommends using it.
The new mlnx mechanism driver will be used for SR-IOV InfiniBand.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460430

Title:
  refactor mlnx mechnism driver for infiniband only

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Refactor the mlnx mechanism driver to be InfiniBand only.

  The old mlnx mechanism driver was used for SR-IOV in Ethernet
  and InfiniBand, but the PCI allocation wasn't done by nova.
  Since Juno the sriovnicswitch mechanism driver was introduced for
  SR-IOV in Ethernet, and Mellanox recommends using it.
  The new mlnx mechanism driver will be used for SR-IOV InfiniBand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460430/+subscriptions



[Yahoo-eng-team] [Bug 1453418] [NEW] ebrctl should be in compute.filters

2015-05-09 Thread Moshe Levi
Public bug reported:

currently ebrctl is in network.filters, but the utility is used by libvirt in
nova-compute.
It needs to be moved to compute.filters.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453418

Title:
  ebrctl should be in compute.filters

Status in OpenStack Compute (Nova):
  New

Bug description:
  currently ebrctl is in network.filters, but the utility is used by libvirt
in nova-compute.
  It needs to be moved to compute.filters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453418/+subscriptions



[Yahoo-eng-team] [Bug 1453410] [NEW] mlnx_direct removal

2015-05-09 Thread Moshe Levi
Public bug reported:

mlnx_direct vif type is not used since Juno release.

** Affects: networking-mlnx
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: networking-mlnx
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453410

Title:
  mlnx_direct removal

Status in Mellanox backend  integration with Neutron (networking-mlnx):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  mlnx_direct vif type is not used since Juno release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-mlnx/+bug/1453410/+subscriptions



[Yahoo-eng-team] [Bug 1451018] Re: allow nova rootwarp to search executable in /usr/local/bin

2015-05-02 Thread Moshe Levi
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451018

Title:
  allow nova rootwarp to search executable in /usr/local/bin

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  when using devstack, some utilities are installed in /usr/local/bin.
  I think we need to include /usr/local/bin in the search dirs of
nova-rootwrap

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451018/+subscriptions



[Yahoo-eng-team] [Bug 1451018] [NEW] allow nova rootwarp to search executable in /usr/local/bin

2015-05-02 Thread Moshe Levi
Public bug reported:

when using devstack, some utilities are installed in /usr/local/bin.
I think we need to include /usr/local/bin in the search dirs of nova-rootwrap
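
A sketch of the corresponding rootwrap configuration (the exec_dirs option in
rootwrap.conf controls the executable search path; the file location shown is
a typical one, not verified for every deployment):

```ini
# /etc/nova/rootwrap.conf (typical location; may vary per deployment)
[DEFAULT]
# Directories searched, in order, for the executables named in the
# filter files; /usr/local/bin added for devstack-installed utilities.
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin
```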

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451018

Title:
  allow nova rootwarp to search executable in /usr/local/bin

Status in OpenStack Compute (Nova):
  New

Bug description:
  When using devstack, some utilities are installed in /usr/local/bin.
  We need to include /usr/local/bin in the search directories of
nova-rootwrap.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451018/+subscriptions



[Yahoo-eng-team] [Bug 1447105] [NEW] dhcp agent doesn't support client identifier option

2015-04-22 Thread Moshe Levi
Public bug reported:

According to the dnsmasq man page (http://linux.die.net/man/8/dnsmasq),
the client id should be written to the dhcp-hostsfile in the form
[hwaddr][,id:client_id|*][,net:netid][,ipaddr][,hostname][,lease_time][,ignore]
but in the current implementation all the options are written to the dhcp-optsfile.
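For reference, a dhcp-hostsfile entry carrying a client identifier would look roughly like this (the MAC, id bytes, address, and hostname are illustrative values, not taken from a real deployment):

```
# One dhcp-hostsfile line per host, per the format above:
# [hwaddr][,id:client_id|*][,net:netid][,ipaddr][,hostname][,lease_time][,ignore]
fa:16:3e:aa:bb:cc,id:01:fa:16:3e:aa:bb:cc,10.0.0.5,host-10-0-0-5
```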

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447105

Title:
  dhcp agent doesn't support client identifier option

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  According to the dnsmasq man page (http://linux.die.net/man/8/dnsmasq),
  the client id should be written to the dhcp-hostsfile in the form
  [hwaddr][,id:client_id|*][,net:netid][,ipaddr][,hostname][,lease_time][,ignore]
  but in the current implementation all the options are written to the dhcp-optsfile.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447105/+subscriptions



[Yahoo-eng-team] [Bug 1407721] Re: Add range support to all address fields in pci_passthrough_whitelist

2015-04-20 Thread Moshe Levi
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407721

Title:
  Add range support to all address fields in pci_passthrough_whitelist

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This way a user will be able to exclude a specific VF.

  Example of func range:
  1. pci_passthrough_whitelist = {"address": "*:02:00.2-4", "physical_network": "physnet1"}
  2. pci_passthrough_whitelist = {"address": "*:02:00.2-*", "physical_network": "physnet1"}

  Example of slot range:
  1. pci_passthrough_whitelist = {"address": "*:02:02-04.*", "physical_network": "physnet1"}
  2. pci_passthrough_whitelist = {"address": "*:02:02-*.*", "physical_network": "physnet1"}

  The same applies to bus and domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407721/+subscriptions



[Yahoo-eng-team] [Bug 1437790] [NEW] vm stack in migration state when ssh is not configure correctly

2015-03-29 Thread Moshe Levi
Public bug reported:

When doing a VM migration with the libvirt driver and ssh is not configured
properly, the VM is stuck in the migrate/resize state forever.
This happens when ssh prompts for a password; to solve it we need to make
sure the ssh command never prompts.

This can be done by adding BatchMode=yes (default: no) and also
StrictHostKeyChecking=yes (default: ask).
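A minimal sketch of the suggested command line (a hypothetical helper, not nova's actual migration code; the function name and host are illustrative):

```python
def migration_ssh_cmd(dest_host, remote_cmd):
    """Build an ssh command that can never block on an interactive prompt.

    Illustrative sketch of the proposed fix: BatchMode=yes makes ssh fail
    immediately instead of prompting for a password, and
    StrictHostKeyChecking=yes refuses unknown host keys instead of asking,
    so a misconfigured migration errors out fast rather than hanging.
    """
    return [
        "ssh",
        "-o", "BatchMode=yes",              # default "no": would prompt for a password
        "-o", "StrictHostKeyChecking=yes",  # default "ask": would prompt to accept a key
        dest_host,
    ] + list(remote_cmd)

print(" ".join(migration_ssh_cmd("compute-2", ["true"])))
```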

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437790

Title:
  vm stack in migration state when ssh is not configure correctly

Status in OpenStack Compute (Nova):
  New

Bug description:
  When doing a VM migration with the libvirt driver and ssh is not configured
  properly, the VM is stuck in the migrate/resize state forever.
  This happens when ssh prompts for a password; to solve it we need to make
  sure the ssh command never prompts.

  This can be done by adding BatchMode=yes (default: no) and also
  StrictHostKeyChecking=yes (default: ask).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437790/+subscriptions



[Yahoo-eng-team] [Bug 1435483] [NEW] [pci-passthrough] Failed to launch vm after restarting nova compute

2015-03-23 Thread Moshe Levi
Public bug reported:

My nova.conf contains the pci_passthrough_whitelist parameter.

Launching a VM right after the OpenStack installation, the VM boots
successfully. After restarting nova-compute and then trying to launch a VM
again, I see the following error:
An object of type PciDevicePoolList is required

Please note it doesn't matter whether the VM has a normal port or a direct
port.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: pci-passthrough

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435483

Title:
   [pci-passthrough] Failed to launch vm after restarting nova compute

Status in OpenStack Compute (Nova):
  New

Bug description:
  My nova.conf contains the pci_passthrough_whitelist parameter.

  Launching a VM right after the OpenStack installation, the VM boots
  successfully. After restarting nova-compute and then trying to launch a VM
  again, I see the following error:
  An object of type PciDevicePoolList is required

  Please note it doesn't matter whether the VM has a normal port or a direct
  port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1435483/+subscriptions



[Yahoo-eng-team] [Bug 1416807] Re: wrong version of oslo oslo.rootwrap in requirement.txt

2015-02-02 Thread Moshe Levi
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416807

Title:
  wrong version of oslo oslo.rootwrap in requirement.txt

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  oslo.rootwrap 1.5.0 has a namespace change from oslo.rootwrap to
  oslo_rootwrap, but requirements.txt allows installing oslo.rootwrap
  1.3.0.

  This causes openvswitch not to start in the Mellanox CI.

  2015-02-01 09:07:31.259 28417 TRACE neutron Stderr: Traceback (most recent call last):
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/bin/neutron-rootwrap", line 9, in <module>
  2015-02-01 09:07:31.259 28417 TRACE neutron     load_entry_point('neutron==2015.1.dev507', 'console_scripts', 'neutron-rootwrap')()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 519, in load_entry_point
  2015-02-01 09:07:31.259 28417 TRACE neutron     return get_distribution(dist).load_entry_point(group, name)
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2630, in load_entry_point
  2015-02-01 09:07:31.259 28417 TRACE neutron     return ep.load()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2310, in load
  2015-02-01 09:07:31.259 28417 TRACE neutron     return self.resolve()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2316, in resolve
  2015-02-01 09:07:31.259 28417 TRACE neutron     module = __import__(self.module_name, fromlist=['__name__'], level=0)
  2015-02-01 09:07:31.259 28417 TRACE neutron ImportError: No module named oslo_rootwrap.cmd
  2015-02-01 09:07:31.259 28417 TRACE neutron
  2015-02-01 09:07:31.259 28417 TRACE neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416807/+subscriptions



[Yahoo-eng-team] [Bug 1416807] [NEW] wrong version of oslo oslo.rootwrap in requirement.txt

2015-01-31 Thread Moshe Levi
Public bug reported:

oslo.rootwrap 1.5.0 has a namespace change from oslo.rootwrap to
oslo_rootwrap, but requirements.txt allows installing oslo.rootwrap
1.3.0.

This causes openvswitch not to start in the Mellanox CI.

2015-02-01 09:07:31.259 28417 TRACE neutron Stderr: Traceback (most recent call last):
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/bin/neutron-rootwrap", line 9, in <module>
2015-02-01 09:07:31.259 28417 TRACE neutron     load_entry_point('neutron==2015.1.dev507', 'console_scripts', 'neutron-rootwrap')()
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 519, in load_entry_point
2015-02-01 09:07:31.259 28417 TRACE neutron     return get_distribution(dist).load_entry_point(group, name)
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2630, in load_entry_point
2015-02-01 09:07:31.259 28417 TRACE neutron     return ep.load()
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2310, in load
2015-02-01 09:07:31.259 28417 TRACE neutron     return self.resolve()
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2316, in resolve
2015-02-01 09:07:31.259 28417 TRACE neutron     module = __import__(self.module_name, fromlist=['__name__'], level=0)
2015-02-01 09:07:31.259 28417 TRACE neutron ImportError: No module named oslo_rootwrap.cmd
2015-02-01 09:07:31.259 28417 TRACE neutron
2015-02-01 09:07:31.259 28417 TRACE neutron
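A minimal sketch of the corresponding requirements.txt fix (the exact lower bound chosen upstream may differ; the point is to require a release that ships the new namespace):

```
# requirements.txt: require a release that ships the oslo_rootwrap namespace
oslo.rootwrap>=1.5.0
```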

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416807

Title:
  wrong version of oslo oslo.rootwrap in requirement.txt

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  oslo.rootwrap 1.5.0 has a namespace change from oslo.rootwrap to
  oslo_rootwrap, but requirements.txt allows installing oslo.rootwrap
  1.3.0.

  This causes openvswitch not to start in the Mellanox CI.

  2015-02-01 09:07:31.259 28417 TRACE neutron Stderr: Traceback (most recent call last):
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/bin/neutron-rootwrap", line 9, in <module>
  2015-02-01 09:07:31.259 28417 TRACE neutron     load_entry_point('neutron==2015.1.dev507', 'console_scripts', 'neutron-rootwrap')()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 519, in load_entry_point
  2015-02-01 09:07:31.259 28417 TRACE neutron     return get_distribution(dist).load_entry_point(group, name)
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2630, in load_entry_point
  2015-02-01 09:07:31.259 28417 TRACE neutron     return ep.load()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2310, in load
  2015-02-01 09:07:31.259 28417 TRACE neutron     return self.resolve()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2316, in resolve
  2015-02-01 09:07:31.259 28417 TRACE neutron     module = __import__(self.module_name, fromlist=['__name__'], level=0)
  2015-02-01 09:07:31.259 28417 TRACE neutron ImportError: No module named oslo_rootwrap.cmd
  2015-02-01 09:07:31.259 28417 TRACE neutron
  2015-02-01 09:07:31.259 28417 TRACE neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416807/+subscriptions



[Yahoo-eng-team] [Bug 1414902] [NEW] Thin the in-tree MLNX Ml2 Driver and agent

2015-01-26 Thread Moshe Levi
Public bug reported:

This bug tracks the thinning of the in-tree MLNX ML2 Driver and agent

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414902

Title:
  Thin the in-tree MLNX Ml2 Driver and agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This bug tracks the thinning of the in-tree MLNX ML2 Driver and agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414902/+subscriptions



[Yahoo-eng-team] [Bug 1408572] Re: Launch VM is having issue when using sriov pci passthrough feature

2015-01-18 Thread Moshe Levi
For 1-3 the answer is yes.
For 4, you don't need this section either.
I am moving it to Invalid.


** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408572

Title:
  Launch VM is having issue when using sriov pci passthrough feature

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Launched VM :
  External network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| 35d70304-cbb6-4b1d-b73e-334c47148f32 |
  | name  | public   |
  | provider:network_type | vlan |
  | provider:physical_network | physnet1 |
  | provider:segmentation_id  | 101  |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   | f2025c22-a90d-4576-803a-b93274f5ef45 |
  | tenant_id | 5623e92cd94c4741b4b9c117ecebd0be |
  +---+--+
  subnet:
  +---++
  | Field | Value  |
  +---++
  | allocation_pools  | {start: 172.24.4.2, end: 172.24.4.254} |
  | cidr  | 172.24.4.0/24  |
  | dns_nameservers   ||
  | enable_dhcp   | False  |
  | gateway_ip| 172.24.4.1 |
  | host_routes   ||
  | id| f2025c22-a90d-4576-803a-b93274f5ef45   |
  | ip_version| 4  |
  | ipv6_address_mode ||
  | ipv6_ra_mode  ||
  | name  | public-subnet  |
  | network_id| 35d70304-cbb6-4b1d-b73e-334c47148f32   |
  | tenant_id | 5623e92cd94c4741b4b9c117ecebd0be   |
  +---++
  private network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| a82c3db3-1bb6-4b17-9c48-e508a49f5f4b |
  | name  | private  |
  | provider:network_type | vlan |
  | provider:physical_network | physnet1 |
  | provider:segmentation_id  | 100  |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 6fed0e5f-8037-4637-ae7e-1c413415172a |
  | tenant_id | 90e3e6153f444a07986c3c12d3129852 |
  +---+--+
  subnet:
  +---++
  | Field | Value  |
  +---++
  | allocation_pools  | {start: 10.0.0.2, end: 10.0.0.254} |
  | cidr  | 10.0.0.0/24|
  | dns_nameservers   ||
  | enable_dhcp   | True   |
  | gateway_ip| 10.0.0.1   |
  | host_routes   ||
  | id| 6fed0e5f-8037-4637-ae7e-1c413415172a   |
  | ip_version| 4  |
  | ipv6_address_mode ||
  | ipv6_ra_mode  ||
  | name  | private-subnet |
  | network_id| a82c3db3-1bb6-4b17-9c48-e508a49f5f4b   |
  | tenant_id | 

[Yahoo-eng-team] [Bug 1407721] [NEW] Add slot and func range to pci_passthrough_whitelist

2015-01-05 Thread Moshe Levi
Public bug reported:

This way a user will be able to exclude a specific VF.

Example of func range:
1. pci_passthrough_whitelist = {"address": "*:02:00.2-4", "physical_network": "physnet1"}
2. pci_passthrough_whitelist = {"address": "*:02:00.2-*", "physical_network": "physnet1"}

Example of slot range:
1. pci_passthrough_whitelist = {"address": "*:02:02-04.*", "physical_network": "physnet1"}
2. pci_passthrough_whitelist = {"address": "*:02:02-*.*", "physical_network": "physnet1"}
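The proposed range syntax could be expanded along these lines (a hypothetical sketch, not nova's actual pci_passthrough_whitelist parser; the function name and the max-function default are illustrative, assuming standard PCI function numbers 0-7):

```python
def expand_func_range(spec, max_func=7):
    """Expand a function-range spec like '2-4' or '2-*' into a list of ints.

    Illustrative sketch of the proposed whitelist syntax, not nova code.
    PCI slot/function fields are hexadecimal, hence the base-16 parsing.
    """
    if "-" not in spec:
        return [int(spec, 16)]          # plain value, no range
    lo, hi = spec.split("-")
    end = max_func if hi == "*" else int(hi, 16)  # '*' means "to the end"
    return list(range(int(lo, 16), end + 1))

# '*:02:00.2-4' would match functions 2, 3 and 4 on bus 02, slot 00:
print(expand_func_range("2-4"))   # [2, 3, 4]
print(expand_func_range("2-*"))   # [2, 3, 4, 5, 6, 7]
```

The same expansion could be applied to the slot, bus, and domain fields, each with its own maximum.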

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New


** Tags: pci-passthrough

** Tags removed: pci-passthrouth
** Tags added: pci-passthrough

** Changed in: nova
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407721

Title:
  Add slot and func range to pci_passthrough_whitelist

Status in OpenStack Compute (Nova):
  New

Bug description:
  This way a user will be able to exclude a specific VF.

  Example of func range:
  1. pci_passthrough_whitelist = {"address": "*:02:00.2-4", "physical_network": "physnet1"}
  2. pci_passthrough_whitelist = {"address": "*:02:00.2-*", "physical_network": "physnet1"}

  Example of slot range:
  1. pci_passthrough_whitelist = {"address": "*:02:02-04.*", "physical_network": "physnet1"}
  2. pci_passthrough_whitelist = {"address": "*:02:02-*.*", "physical_network": "physnet1"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407721/+subscriptions
