[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-07-02 Thread Frode Nordahl
Executive summary for the kernel team:
What makes both libvirt and Nova unhappy about the Cavium ThunderX NIC is
that attempts to read the sysfs node phys_port_id on its virtual functions
are denied with "Operation not supported".

Example:
'/sys/devices/pci0003:00/0003:00:00.0/0003:01:00.0/0003:02:09.0/0003:09:00.0/net/enP3p9s0f0/phys_port_id':
 Operation not supported
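
For reference, the denial can be reproduced with a plain read of the sysfs
attribute. Below is a minimal probe (a hypothetical helper, not code from this
bug report) that distinguishes "attribute absent" from "read denied by the
driver"; drivers that do not implement the attribute fail the read with
EOPNOTSUPP, which is the "Operation not supported" libvirt and Nova trip over:

```python
import errno


def probe_sysfs_attr(path):
    """Try to read a sysfs attribute such as .../net/<ifname>/phys_port_id.

    Returns ("ok", value), ("missing", None) when the node does not exist,
    or ("denied", errno-name) when the driver rejects the read (e.g.
    EOPNOTSUPP, "Operation not supported").
    """
    try:
        with open(path) as f:
            return ("ok", f.read().strip())
    except OSError as e:
        if e.errno == errno.ENOENT:
            return ("missing", None)
        return ("denied", errno.errorcode.get(e.errno, str(e.errno)))
```

On an affected host, probing the VF path quoted above would return
`("denied", "EOPNOTSUPP")` rather than a value.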

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1771662

Title:
  libvirtError: Node device not found: no node device with matching name

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1771662/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-07-02 Thread Ryan Beisner
** Changed in: charm-nova-compute
 Assignee: (unassigned) => Frode Nordahl (fnordahl)

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-29 Thread Frode Nordahl
1) The 'No compute node record for host phanpy:
ComputeHostNotFound_Remote: Compute host phanpy could not be found.'
message is benign; it appears on the first start of the `nova-compute`
service.  It keeps appearing in the log here because the service fails
to register its available resources.  See 3)


2) Technically, the compute hosts are partially registered with `nova`:
$ nova service-list
+----+----------------+---------------------+----------+---------+-------+------------------------+-----------------+
| Id | Binary         | Host                | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+----------------+---------------------+----------+---------+-------+------------------------+-----------------+
| 1  | nova-conductor | juju-302a0a-2-lxd-2 | internal | enabled | up    | 2018-06-29T10:28:00.00 | -               |
| 14 | nova-scheduler | juju-302a0a-2-lxd-2 | internal | enabled | up    | 2018-06-29T10:28:01.00 | -               |
| 15 | nova-compute   | phanpy              | nova     | enabled | up    | 2018-06-29T10:28:01.00 | -               |
| 16 | nova-compute   | aurorus             | nova     | enabled | up    | 2018-06-29T10:28:05.00 | -               |
| 26 | nova-compute   | zygarde             | nova     | enabled | up    | 2018-06-29T10:28:05.00 | -               |
+----+----------------+---------------------+----------+---------+-------+------------------------+-----------------+


3) However, the compute hosts do not have any resources.  No resources
appear in `nova` because the `nova-compute` service hits a traceback
during initial host registration:

2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 7277, in update_available_resource_for_node
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 664, in update_available_resource
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6438, in get_available_resource
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     self._get_pci_passthrough_devices()
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5945, in _get_pci_passthrough_devices
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     pci_info.append(self._get_pcidev_info(name))
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5906, in _get_pcidev_info
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     device.update(_get_device_capabilities(device, address))
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5877, in _get_device_capabilities
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     pcinet_info = self._get_pcinet_info(address)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5820, in _get_pcinet_info
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     virtdev = self._host.device_lookup_by_name(devname)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 838, in device_lookup_by_name
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     return self.get_connection().nodeDeviceLookupByName(name)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     rv = execute(f, *args, **kwargs)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     six.reraise(c, e, tb)
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2018-06-29 06:25:57.161 35528 ERROR nova.compute.manager     rv = meth(*args, **kwargs)
2018-06-29 06:25:57.161 35528 ERROR 
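
The tail of the traceback is cut off here, but per the bug title the lookup
evidently raises libvirtError ("Node device not found: no node device with
matching name"). The toy sketch below (all names illustrative; this is not
Nova's actual code or fix) shows the failing call shape and how a tolerant
lookup could treat a missing node device as "no net capability" instead of
aborting the whole resource update:

```python
class LibvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""


def device_lookup_by_name(known_devices, devname):
    # Mimics virConnect.nodeDeviceLookupByName(): raises when the device
    # name is unknown to libvirtd (the case hit on the ThunderX VFs).
    if devname not in known_devices:
        raise LibvirtError(
            "Node device not found: no node device with matching name %r"
            % devname)
    return known_devices[devname]


def get_pcinet_info(known_devices, devname):
    # Hedged sketch of a tolerant lookup: return None ("no net capability")
    # for a missing node device rather than letting the error propagate
    # out of update_available_resource, as the traceback shows happening.
    try:
        return device_lookup_by_name(known_devices, devname)
    except LibvirtError:
        return None


known = {"net_enP3p9s0f0": {"capability": "net"}}
assert get_pcinet_info(known, "net_enP3p9s0f0") == {"capability": "net"}
assert get_pcinet_info(known, "net_missing_vf") is None
```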

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-29 Thread Frode Nordahl
** Attachment added: "libvirt-debug.log"
   
https://bugs.launchpad.net/charm-nova-compute/+bug/1771662/+attachment/5157735/+files/libvirt-debug.log

** Changed in: charm-nova-compute
   Status: Incomplete => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-28 Thread Ryan Beisner
To be clear, on our lab machines (gigabyte arm64), we don't observe this
issue with Bionic + Queens, hence the request to try to triage on the
specific kit involved.  Thanks!

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-27 Thread David Britton
Incomplete in libvirt pending debug from live system by openstack team.

** Changed in: libvirt (Ubuntu)
   Status: New => Incomplete

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-27 Thread Chris Gregan
Escalated due to delay in triage and fix given our contract with ARM

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-07 Thread Ryan Beisner
@raharper - I concur, there is a workflow gap in the nova-compute charm
with regard to hypervisor registration success with nova, and I've
raised a separate bug to address that generically.  However, that won't
fix this bug; it will just make it more visible by blocking the juju
charm unit and juju charm application states.

https://bugs.launchpad.net/charm-nova-compute/+bug/1775690

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-06-07 Thread Ryan Beisner
In order to make progress from the charm front, I would need access to
at least one machine with the hardware which is specific to this bug,
plus two adjacent machines for control/data plane.  Can we arrange that
access for openstack charms engineering?

** Changed in: charm-nova-compute
   Status: Opinion => Incomplete

Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-29 Thread Ryan Harper
I'm not certain we can rule out the charm; the observed behavior is
that the compute nodes do not get enrolled.
Certainly the lack of a registered nova-compute node has some touch
point with the charms.
The follow-up, I think, falls to the OpenStack team: walk through where
the charm hands off to the nova-compute package, then how nova-compute
interacts with libvirt, and what ultimately triggers the registration
of a compute node with the cloud.

Christian and I have looked at the logs, and while libvirt and
nova-compute are noisy w.r.t. the virtual functions, the node does not
appear to be prevented from launching a guest.  Confirming that would
help rule out where the failure to register the compute node is
happening.

@Beisner thoughts?

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-29 Thread Chris Gregan
This defect seems to have stalled somewhat. Is there more information we
can gather for this to move forward again?

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-23 Thread Ryan Beisner
If this is a bug on the OpenStack side, it's not in the charm.  It would
be in nova proper.

** Changed in: charm-nova-compute
   Status: New => Opinion

Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-23 Thread Ryan Harper
After comparing the sysfs data, I don't see any differences w.r.t. the
physical paths in sysfs for the Thunder NIC.

I wonder if there is something that detects "xenial" and does one
thing vs. "bionic", despite the xenial host running the same kernel
level.
The AppArmor denials on the namespaces only show up under bionic, but
both kernels are at the same level, so we should be seeing the same
errors if both stacks were using the same cgroups.

Can we check charms, juju, or lxd w.r.t. how those cgroups are
mounted?  That may not be related, but we're running out of
differences.
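
One way to answer the cgroup question mechanically is to compare what each
host actually has mounted. A small sketch (a hypothetical helper; feed it the
contents of /proc/mounts from each host) that flags unified-hierarchy
(cgroup2) mounts:

```python
def cgroup2_mounts(proc_mounts_text):
    """Return mount points whose filesystem type is cgroup2.

    Expects text in /proc/mounts format: 'source target fstype opts 0 0'.
    On a host using the unified hierarchy this typically includes
    /sys/fs/cgroup/unified -- the path named in the AppArmor denials.
    """
    points = []
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "cgroup2":
            points.append(fields[1])
    return points


sample = """\
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
"""
assert cgroup2_mounts(sample) == ["/sys/fs/cgroup/unified"]
```

Running it against /proc/mounts on both the xenial and bionic hosts (and
inside the lxd containers) would show whether only one side mounts the
unified hierarchy.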


Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Jason Hobbs
ls -alR /sys on bionic http://paste.ubuntu.com/p/nrxyRGP3By/

The bionic kernel has also bumped:
Linux aurorus 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:14:36 UTC
2018 aarch64 aarch64 aarch64 GNU/Linux


Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Ryan Harper
Looks like the ls -aLR contains more data;  we can compare bionic.


Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Jason Hobbs
cd /sys/bus/pci/devices && grep -nr . *

xenial:
http://paste.ubuntu.com/p/F5qyvN2Qrr/
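
To compare the two `grep -nr . *` dumps mechanically rather than by eye, here
is a small sketch (a hypothetical helper, not part of the bug report) that
parses each dump into a dict of attribute path to value and reports entries
whose presence or value differs between the hosts:

```python
import re


def parse_grep_dump(text):
    """Parse `grep -nr . *` output (path:lineno:value) into {path: value}.

    PCI device paths themselves contain colons, so anchor on the last
    ':<number>:' separator; assumes attribute values don't embed one.
    """
    attrs = {}
    for line in text.splitlines():
        m = re.match(r"(.*):(\d+):(.*)$", line)
        if m:
            attrs[m.group(1)] = m.group(3)
    return attrs


def diff_dumps(a, b):
    """Attributes whose presence or value differs between two dumps."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}


xenial = parse_grep_dump(
    "0002:01:00.2/phys_port_id:1:n/a\n0002:01:00.2/vendor:1:0x177d")
bionic = parse_grep_dump("0002:01:00.2/vendor:1:0x177d")
assert diff_dumps(xenial, bionic) == {
    "0002:01:00.2/phys_port_id": ("n/a", None)}
```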

>> On Tue, May 22, 2018 at 4:58 PM, Ryan Harper <1771...@bugs.launchpad.net> 
>> wrote:
>>> Comparing the kernel logs, on Xenial, the second nic comes up:
>>>
>>> May 22 15:00:27 aurorus kernel: [   24.840500] IPv6:
>>> ADDRCONF(NETDEV_UP): enP2p1s0f2: link is not ready
>>> May 22 15:00:27 aurorus kernel: [   25.472391] thunder-nicvf
>>> 0002:01:00.2 enP2p1s0f2: Link is Up 1 Mbps Full duplex
>>>
>>> But on bionic, we only ever have f3 up.  Note this isn't a network
>>> configuration issue, but rather the state of the NIC and the switch.
>>> It doesn't appear to matter, 0f3 is what gets bridged by juju anyhow.
>>> But it does suggest that something is different.
>>>
>>> There is a slight kernel version variance as well:
>>>
>>> Xenial:
>>> May 22 15:00:27 aurorus kernel: [0.00] Linux version
>>> 4.15.0-22-generic (buildd@bos02-arm64-038) (gcc version 5.4.0 20160609
>>> (Ubuntu/Lin
>>>
>>> Bionic:
>>> May 17 18:03:47 aurorus kernel: [0.00] Linux version
>>> 4.15.0-20-generic (buildd@bos02-arm64-029) (gcc version 7.3.0
>>> (Ubuntu/Linaro 7.3.
>>>
>>> Looks like Xenial does not use unified cgroup namespaces, not sure
>>> what effect this may have on what's running in those lxd juju
>>> containers.
>>>
>>> % grep DENIED *.log
>>> bionic.log:May 17 18:19:33 aurorus kernel: [  983.592228] audit:
>>> type=1400 audit(1526581173.043:70): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-1_"
>>> name="/sys/fs/cgroup/unified/" pid=24143 comm="systemd"
>>> fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
>>> bionic.log:May 17 18:19:33 aurorus kernel: [  983.592476] audit:
>>> type=1400 audit(1526581173.043:71): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-1_"
>>> name="/sys/fs/cgroup/unified/" pid=24143 comm="systemd"
>>> fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
>>> bionic.log:May 17 18:19:41 aurorus kernel: [  991.818402] audit:
>>> type=1400 audit(1526581181.267:88): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-1_"
>>> name="/run/systemd/unit-root/var/lib/lxcfs/" pid=24757
>>> comm="(networkd)" flags="ro, nosuid, nodev, remount, bind"
>>> bionic.log:May 17 18:19:46 aurorus kernel: [  997.271203] audit:
>>> type=1400 audit(1526581186.719:90): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-2_"
>>> name="/sys/fs/cgroup/unified/" pid=25227 comm="systemd"
>>> fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
>>> bionic.log:May 17 18:19:46 aurorus kernel: [  997.271425] audit:
>>> type=1400 audit(1526581186.723:91): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-2_"
>>> name="/sys/fs/cgroup/unified/" pid=25227 comm="systemd"
>>> fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
>>> bionic.log:May 17 18:19:55 aurorus kernel: [ 1006.285863] audit:
>>> type=1400 audit(1526581195.735:108): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-2_"
>>> name="/run/systemd/unit-root/" pid=26209 comm="(networkd)" flags="ro,
>>> remount, bind"
>>> bionic.log:May 17 18:20:12 aurorus kernel: [ 1022.760512] audit:
>>> type=1400 audit(1526581212.211:110): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-0_"
>>> name="/sys/fs/cgroup/unified/" pid=28344 comm="systemd"
>>> fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
>>> bionic.log:May 17 18:20:12 aurorus kernel: [ 1022.760713] audit:
>>> type=1400 audit(1526581212.211:111): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-0_"
>>> name="/sys/fs/cgroup/unified/" pid=28344 comm="systemd"
>>> fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
>>> bionic.log:May 17 18:20:20 aurorus kernel: [ 1031.256448] audit:
>>> type=1400 audit(1526581220.707:128): apparmor="DENIED"
>>> operation="mount" info="failed flags match" error=-13
>>> profile="lxd-juju-657fe9-1-lxd-0_"
>>> name="/run/systemd/unit-root/" pid=29205 comm="(networkd)" flags="ro,
>>> remount, bind"
>>> bionic.log:May 17 18:30:03 aurorus kernel: [ 1613.787782] audit:
>>> type=1400 audit(1526581803.277:151): apparmor="DENIED"
>>> 

Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Jason Hobbs
Do you really want a tar? How about ls -alR? xenial:

http://paste.ubuntu.com/p/wyQ3kTsyBB/


Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Jason Hobbs
ok; looks like that 4.15.0-22-generic just released and wasn't what I
used in the first reproduction... I doubt that's it.


Re: [Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Ryan Harper
Comparing the kernel logs, on Xenial, the second nic comes up:

May 22 15:00:27 aurorus kernel: [   24.840500] IPv6:
ADDRCONF(NETDEV_UP): enP2p1s0f2: link is not ready
May 22 15:00:27 aurorus kernel: [   25.472391] thunder-nicvf
0002:01:00.2 enP2p1s0f2: Link is Up 1 Mbps Full duplex

But on bionic, we only ever have f3 up.  Note this isn't a network
configuration issue, but rather the state of the NIC and the switch.
It doesn't appear to matter; 0f3 is what gets bridged by juju anyhow.
But it does suggest that something is different.

There is a slight kernel version variance as well:

Xenial:
May 22 15:00:27 aurorus kernel: [0.00] Linux version
4.15.0-22-generic (buildd@bos02-arm64-038) (gcc version 5.4.0 20160609
(Ubuntu/Lin

Bionic:
May 17 18:03:47 aurorus kernel: [0.00] Linux version
4.15.0-20-generic (buildd@bos02-arm64-029) (gcc version 7.3.0
(Ubuntu/Linaro 7.3.

Looks like Xenial does not use unified cgroup namespaces; not sure
what effect this may have on what's running in those lxd juju
containers.

% grep DENIED *.log
bionic.log:May 17 18:19:33 aurorus kernel: [  983.592228] audit:
type=1400 audit(1526581173.043:70): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-1_"
name="/sys/fs/cgroup/unified/" pid=24143 comm="systemd"
fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
bionic.log:May 17 18:19:33 aurorus kernel: [  983.592476] audit:
type=1400 audit(1526581173.043:71): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-1_"
name="/sys/fs/cgroup/unified/" pid=24143 comm="systemd"
fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
bionic.log:May 17 18:19:41 aurorus kernel: [  991.818402] audit:
type=1400 audit(1526581181.267:88): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-1_"
name="/run/systemd/unit-root/var/lib/lxcfs/" pid=24757
comm="(networkd)" flags="ro, nosuid, nodev, remount, bind"
bionic.log:May 17 18:19:46 aurorus kernel: [  997.271203] audit:
type=1400 audit(1526581186.719:90): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-2_"
name="/sys/fs/cgroup/unified/" pid=25227 comm="systemd"
fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
bionic.log:May 17 18:19:46 aurorus kernel: [  997.271425] audit:
type=1400 audit(1526581186.723:91): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-2_"
name="/sys/fs/cgroup/unified/" pid=25227 comm="systemd"
fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
bionic.log:May 17 18:19:55 aurorus kernel: [ 1006.285863] audit:
type=1400 audit(1526581195.735:108): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-2_"
name="/run/systemd/unit-root/" pid=26209 comm="(networkd)" flags="ro,
remount, bind"
bionic.log:May 17 18:20:12 aurorus kernel: [ 1022.760512] audit:
type=1400 audit(1526581212.211:110): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-0_"
name="/sys/fs/cgroup/unified/" pid=28344 comm="systemd"
fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
bionic.log:May 17 18:20:12 aurorus kernel: [ 1022.760713] audit:
type=1400 audit(1526581212.211:111): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-0_"
name="/sys/fs/cgroup/unified/" pid=28344 comm="systemd"
fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
bionic.log:May 17 18:20:20 aurorus kernel: [ 1031.256448] audit:
type=1400 audit(1526581220.707:128): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-0_"
name="/run/systemd/unit-root/" pid=29205 comm="(networkd)" flags="ro,
remount, bind"
bionic.log:May 17 18:30:03 aurorus kernel: [ 1613.787782] audit:
type=1400 audit(1526581803.277:151): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-0_" name="/bin/"
pid=91926 comm="(arter.sh)" flags="ro, remount, bind"
bionic.log:May 17 18:30:03 aurorus kernel: [ 1613.832621] audit:
type=1400 audit(1526581803.321:152): apparmor="DENIED"
operation="mount" info="failed flags match" error=-13
profile="lxd-juju-657fe9-1-lxd-0_" name="/bin/"
pid=91949 comm="(y-helper)" flags="ro, remount, bind"


xenial.log:May 22 15:15:10 aurorus kernel: [  918.311740] audit:
type=1400 audit(1527002110.131:109): apparmor="DENIED"
operation="file_mmap"
namespace="root//lxd-juju-878ab5-1-lxd-1_"
profile="/usr/lib/lxd/lxd-bridge-proxy"
name="/usr/lib/lxd/lxd-bridge-proxy" pid=40973 comm="lxd-bridge-prox"
requested_mask="m" denied_mask="m" fsuid=10 ouid=10
xenial.log:May 22 15:15:11 aurorus kernel: [  919.605481] audit:
type=1400 audit(1527002111.427:115): apparmor="DENIED"

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Jason Hobbs
marked new on nova-compute-charm due to rharper's comment #18, and new
on libvirt because I've posted all the requested logs now.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1771662

Title:
  libvirtError: Node device not found: no node device with matching name

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1771662/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-22 Thread Jason Hobbs
@rharper, here are the logs you requested from the xenial deploy.

** Attachment added: "xenial-logs-1771662.tgz"
   
https://bugs.launchpad.net/charm-nova-compute/+bug/1771662/+attachment/5142976/+files/xenial-logs-1771662.tgz

** Changed in: charm-nova-compute
   Status: Invalid => New

** Changed in: libvirt (Ubuntu)
   Status: Incomplete => New


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-18 Thread Jason Hobbs
Christian, thanks for digging in. Yes, I really just set up base
openstack and hit this condition. I'm not doing anything to set up
devices as passthrough or anything along those lines, and I'm not trying
to start instances.


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-18 Thread  Christian Ehrhardt 
Newly deployed a Cavium system with 18.04 to get my own view on this
(without openstack/charms in the way)

1. start a basic guest
   $ sudo apt install uvtool-libvirt qemu-efi-aarch64
   $ uvt-simplestreams-libvirt --verbose sync --source 
http://cloud-images.ubuntu.com/daily arch=arm64 label=daily release=bionic
   $ uvt-kvm create --password=ubuntu b1 release=bionic arch=arm64 label=daily

=> Just works, nothing special in logs
Since it was stated that the special VF/PF devices are not used, this already
breaks the argument made in the bug report - my guest just works on this system.

2. check the odd PF/VF situation

Please note that I had only the initial renames to the new naming scheme, but 
no others:
dmesg | grep renamed
[   10.450002] thunder-nicvf 0002:01:00.2 enP2p1s0f2: renamed from eth1
[   10.489989] thunder-nicvf 0002:01:00.1 enP2p1s0f1: renamed from eth0
[   10.629936] thunder-nicvf 0002:01:00.4 enP2p1s0f4: renamed from eth3
[   10.877936] thunder-nicvf 0002:01:00.3 enP2p1s0f3: renamed from eth2
[   10.957933] thunder-nicvf 0002:01:00.5 enP2p1s0f5: renamed from eth4

None of the devices has phys_port_id, but that is not fatal, because on
other platforms (e.g. ppc64el) I found the same - some have it, some
don't:
/sys/devices/pci0003:00/0003:00:00.0/0003:01:00.0/0003:02:09.0/0003:09:00.0/net/enP3p9s0f0/phys_port_id':
 Operation not supported
/sys/devices/pci0005:00/0005:00:00.0/0005:01:00.3/net/enP5p1s0f3/phys_port_id 
04334233343130363730453131

It will just use NULL, which essentially means there is just one phys
port, and that is fine.
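
The tolerant handling of phys_port_id described above can be sketched as
follows (a minimal Python illustration of the behaviour, not libvirt's
actual C code; the function name and layout are made up):

```python
import errno
from pathlib import Path

def read_phys_port_id(netdev_sysfs_dir):
    """Read phys_port_id the way the thread describes it being consumed:
    a missing or unsupported attribute is tolerated and treated as
    'only one physical port' (None); other errors propagate.
    Illustrative sketch only."""
    path = Path(netdev_sysfs_dir) / "phys_port_id"
    try:
        return path.read_text().strip()
    except FileNotFoundError:
        return None          # attribute not present at all
    except OSError as e:
        if e.errno == errno.EOPNOTSUPP:
            return None      # driver answers "Operation not supported"
        raise
```

With this behaviour the ThunderX "Operation not supported" result simply
collapses to "single physical port" instead of being fatal.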

It is more interesting that it later checks physfn which exists on Cavium (but 
not on ppc64 for example)
ll /sys/devices/pci0002:00/0002:00:02.0/0002:01:01.4/physfn
lrwxrwxrwx 1 root root 0 May 18 06:23 
/sys/devices/pci0002:00/0002:00:02.0/0002:01:01.4/physfn -> ../0002:01:00.0/

If this did NOT exist, it would give up here.
But it does exist, so it tries to go on with it and then fails, as it doesn't
find anything.
That would match what we read in the reported upstream mail discussion.
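
The physfn-based lookup that then comes back empty can be sketched like
this (again an illustrative Python sketch of the sequence described
above, not libvirt's code; the function name is invented):

```python
from pathlib import Path

def pf_netdev_name(vf_pci_sysfs_dir):
    """Resolve the PF's network device name for a VF:
    - no 'physfn' link  -> not a VF, give up quietly (None)
    - 'physfn' present  -> look for a name under <PF>/net/
    Returns "" when the PF exists but exposes no netdev, which is
    the case that turns into the reported libvirt error."""
    physfn = Path(vf_pci_sysfs_dir) / "physfn"
    if not physfn.is_symlink():
        return None                    # plain device, nothing to do
    net_dir = physfn.resolve() / "net"
    if not net_dir.is_dir():
        return ""                      # PF has no network device name
    names = sorted(p.name for p in net_dir.iterdir())
    return names[0] if names else ""
```

On ppc64el the physfn link is absent, so the code gives up quietly; on
ThunderX the link exists but the PF has no net/ entry, and the empty
result becomes the error.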

But none of this matters as per jhobbs it should not use those devices
at all.

FYI code in libvirt around that:
virNetDevGetPhysicalFunction
-> virNetDevGetPhysPortID
   -> virNetDevSysfsFile
   This gives you something like
   /sys/devices/pci0002:00/0002:00:02.0/0002:01:00.4/net/enP2p1s0f4/phys_port_id
-> virNetDevSysfsDeviceFile
-> virPCIGetNetName
If none of these functions failed BUT returned no path then the reported 
message appears.
On other HW it either works OR just doesn't find the paths and gives up before 
the error message.


3. check libvirt capabilities and status
As I asked before, we would need to know the libvirt action that fails, as all 
I tried just works.

Also general probing like one would expect on an initial nova node setup:
  $ virsh capabilities
  $ virsh domcapabilities
  $ virsh sysinfo
  $ virsh nodeinfo
works just fine without the reported errors.

4. Let's even use those devices now
The host uses enP2p1s0f1, that is:
0002:01:00.1 Ethernet controller: Cavium, Inc. THUNDERX Network Interface 
Controller virtual function (rev 09)
So let's use its siblings:
As passthrough host-interface
  0002:01:00.2 Ethernet controller: Cavium, Inc. THUNDERX Network Interface 
Controller virtual function (rev 09)
  [interface XML stripped by the list archive]
As passthrough generic hostdev:
  0002:01:00.3 Ethernet controller: Cavium, Inc. THUNDERX Network Interface 
Controller virtual function (rev 09)
  [hostdev XML stripped by the list archive]
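
The two XML bodies above were stripped by the list archive. Based on the
surrounding text, the attachments were presumably along these lines (a
reconstruction for illustration, not the original files; PCI addresses
copied from the lspci output above):

```xml
<!-- VF attached as a network interface: libvirt manages it as an
     SR-IOV VF and tries to resolve the PF (the variant that fails): -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0002' bus='0x01' slot='0x00' function='0x2'/>
  </source>
</interface>

<!-- VF attached as a generic PCI hostdev: libvirt does not try to
     resolve the PF, so the attach succeeds: -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0002' bus='0x01' slot='0x00' function='0x3'/>
  </source>
</hostdev>
```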

Note: please follow the upstream mailing list discussion on the
difference of those.

$ virsh attach-device b1 interface.xml
error: Failed to attach device from interface.xml
error: internal error: The PF device for VF /sys/bus/pci/devices/0002:01:00.2 
has no network device name
And in Log:
4624: error : virPCIGetVirtualFunctionInfo:3016 : internal error: The PF device 
for VF /sys/bus/pci/devices/0002:01:00.2 has no network device name

As outlined in the mail-thread these special devices can still be attached, if 
you let libvirt handle it not as VFs but as generic PCI.
$ virsh attach-device b1 hostdev.xml 
Device attached successfully
My guest can work fine with this now.

Et voila - when you attach it as hostdev, then (due to unplugging/plugging on
the host) you get the device renames you have seen.
[ 3222.919212] vfio-pci 0002:01:00.3: enabling device (0004 -> 0006)
[ 3229.172142] thunder-nicvf 0002:01:00.3: enabling device (0004 -> 0006)
[ 3229.219106] thunder-nicvf 0002:01:00.3 enP2p1s0f3: renamed from eth0


This is your error IMHO, but you said multiple times you are not doing that.
I assume you really want to use the VFs as passthrough devices - which is a 
whole other story than "just set up openstack".

If you really just set up the base nova node, then total +1 on Ryans:
"At this point, we can compare the logs to Xenial, but I think the next
step is back to the charms/nova-compute to determine how a node reports
back to openstack that 

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Harper
Thanks for the logs.

I generally don't see anything *fatal* to libvirt.  In the nova logs, I
can see that virsh capabilities returns host information.  It certainly
is failing to find the VFs on the SRIOV device; it's not clear whether
that's because the device is misbehaving (we can see the kernel events
indicating the driver is being reset - enP2p1s0f1 renamed to eth0, eth0
renamed back to enP2p1s0f1 - which can only happen if the driver has
been reset) or whether the probing of the device's PCI address space is
triggering a reset.

Note that netplan has no skin in this game; it applies a DHCP and DNS
config to enP2p1s0f3 which stays up the whole time, juju even bridges
en..f3 etc.  The other interfaces found during boot are set to "manual"
config; that is netplan writes a .link file for setting the name, but
note that the name is the predictable name it gets from the default udev
policy anyhow.

At this point, we can compare the logs to Xenial, but I think the next
step is back to the charms/nova-compute to determine how a node reports
back to openstack that a compute node is ready.


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
all of /var/log and /etc from the bionic deploy.

** Attachment added: "bionic-var-log-and-etc.tgz"
   
https://bugs.launchpad.net/charm-nova-compute/+bug/1771662/+attachment/5141000/+files/bionic-var-log-and-etc.tgz


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
@rharper here are the logs you asked for from the bionic deploy

** Attachment added: "bionic-logs.tgz"
   
https://bugs.launchpad.net/charm-nova-compute/+bug/1771662/+attachment/5140998/+files/bionic-logs.tgz


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Harper
Some package level deltas that may be relevant:

ii  linux-firmware 1.173   
ii  linux-firmware 1.157.18

ii  pciutils   1:3.3.1-1.1ubuntu1.2
ii  pciutils   1:3.5.2-1ubuntu

libvirt0:arm644.0.0-1ubuntu7~cloud0
libvirt0:arm644.0.0-1ubuntu8

Less likely to have an impact, guest firmware, but nonetheless a delta:

qemu-efi-aarch64  0~20180205.c0d9813c-2
qemu-efi  0~20160408.ffea0a2c-2


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
@rharper still working on getting the other stuff you've asked for, but here is 
the uname -a output from xenial vs bionic:
http://paste.ubuntu.com/p/rJDpK5SyW9/


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Harper
To make it more clear: the hardware SRIOV device is different than
normal:

 TL;DR: this special device has VFs that have NO PF associated;
 software doesn't understand this.

Though per comment #3, it seems odd that a Xenial/Queens deployment with
the same kernel (HWE) works OK. So some tracing in libvirt/nova to
confirm the different paths, I think.


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Harper
And for the xenial deployment version, can we get what's in
/etc/network/interfaces* (including the .d)?

I'm generally curious w.r.t. which interfaces are managed by the OS, and
which ones are being delegated to the guests.


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Harper
Please capture:

1) cloud-init collect-logs (writes cloud-init.tar to $CWD)
2) the journal /var/log/journal
3) /etc/netplan and /run/systemd
4) /etc/udev/rules.d


[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
steve captured what I meant in #8 better than I did: 17:46 < slangasek>
one could as accurately say "I'm suspicious this is related to us
replacing the whole networking stack in Ubuntu" ;-)

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Steve Langasek
> I'm suspicious of netplan here.

netplan is only the messenger here, between cloud-init+juju and
networkd.  Can you show the complete netplan yaml as it's been laid down
on the system in question?

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
This looks like it is specific to this hardware and the way it does VFs
and PFs, so I'm removing field-high.

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
given it works with the same libvirt and kernel on 16.04 but not 18.04,
I'm suspicious of netplan here.

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
The deploy works fine with juju 2.4 beta 2 and xenial/queens.

package versions: http://paste.ubuntu.com/p/PF7Jb7gxnX/

We do see this in nova-compute.log, but it's not fatal:
http://paste.ubuntu.com/p/Dh4ZGVTtH8/

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Andrew McLeod
Further information: Using juju 2.4 beta2 I was able to deploy magpie on
bionic in lxd and baremetal via MAAS.

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Beisner
We think this is an issue in libvirt, related to how it handles the
SR-IOV hardware in these machines.
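For reference, the node device name in the traceback appears to follow the pattern `net_<ifname>_<mac-with-underscores>`; the following sketch infers that pattern from the error string in this bug alone, not from libvirt source:

```python
# Assumed naming scheme: "net_" + interface name + "_" + MAC with ':' -> '_'.
# Inferred from the error string 'net_enP2p1s0f1_40_8d_5c_ba_b8_d2' reported
# in this bug; treat it as illustrative, not authoritative.
def nodedev_name(ifname, mac):
    return "net_{}_{}".format(ifname, mac.replace(":", "_"))

print(nodedev_name("enP2p1s0f1", "40:8d:5c:ba:b8:d2"))
# net_enP2p1s0f1_40_8d_5c_ba_b8_d2
```

One plausible failure mode, then, is the VF disappearing or being renamed between libvirt enumerating devices and Nova looking the cached name up, so the lookup fails with "no node device with matching name".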

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Ryan Beisner
** Changed in: charm-nova-compute
   Status: New => Invalid

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread Jason Hobbs
** Description changed:

  After deploying openstack on arm64 using bionic and queens, no
  hypervisors show up. On my compute nodes, I have an error like:
  
  2018-05-16 19:23:08.165 282170 ERROR nova.compute.manager libvirtError:
  Node device not found: no node device with matching name
  'net_enP2p1s0f1_40_8d_5c_ba_b8_d2'
  
  In my /var/log/nova/nova-compute.log
  
  I'm not sure why this is happening - I don't use enP2p1s0f1 for
  anything.
  
  There are a lot of interesting messages about that interface in syslog:
  http://paste.ubuntu.com/p/8WT8NqCbCf/
  
  Here is my bundle: http://paste.ubuntu.com/p/fWWs6r8Nr5/
  
  The same bundle works fine for xenial-queens, with the source changed to
  the cloud-archive, and using stable charms rather than -next. I hit this
  same issue on bionic queens using either stable or next charms.
  
  This thread has some related info, I think:
  https://www.spinics.net/linux/fedora/libvir/msg160975.html
  
  This is with juju 2.4 beta 2.
+ 
+ Package versions on affected system:
+ http://paste.ubuntu.com/p/yfQH3KJzng/

[Bug 1771662] Re: libvirtError: Node device not found: no node device with matching name

2018-05-17 Thread  Christian Ehrhardt 
What puzzles me is xenial-queens working while bionic shows issues.
It seems like libvirt is unable to cope with this type of HW, but
since it works in one but not the other ...
Yet versions are:
- xenial-queens
libvirt 4.0.0-1ubuntu7~cloud0
qemu 1:2.11+dfsg-1ubuntu7~cloud0
- bionic
libvirt 4.0.0-1ubuntu8
qemu 1:2.11+dfsg-1ubuntu7.1

Which are the same except a minor bump which UCA will sync in a bit.

And jhobbs reports even the kernels are the same (Xenial with HWE).
So for now, ?!?

** Also affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New
