[Bug 1630891] Re: unable to start lxd container instances after host reboot
The versions were LXD 2.3.0 and OpenStack Mitaka. The nova-compute-lxd nodes have since been removed, so I am not able to reproduce the exact issue.

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1630891

Title: unable to start lxd container instances after host reboot

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1630891/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1639932] Re: cpu constraints not being applied
Feedback on why this bug was filed: UnixBench parallel-test final "System Benchmarks Index Score", run as root on different-size flavor instances/containers, with different numbers of UnixBench "copies":

  parallel tests    c20       c40
  x10               2967.7    2844.6
  x20               3802.4    3152.3
  x40               4304.5    4185.0
  x60               4312.2    4321.5

- On an otherwise empty node, a c20 and a c40 exhibited almost identical performance on a 60-copy UnixBench run; the c20 must have used more than its allocated resources to match the performance of an instance double its size.
- During a simultaneous identical (x40) test, the node with a c40 had a 21-36% higher load average than the node with a c20. This suggests the larger container is using more node resources, as it should (but contradicts the benchmark results).
- For some reason the larger instance did worse on the x10, x20, and x40 tests; this should be verified and investigated.
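To make the comparison above concrete, the c40/c20 score ratio can be computed per copy count; a ratio near 1.00 means the container with half the cores matched the larger one. The values below are taken from the table in this comment; the awk one-liner is illustrative only:

```shell
# Compute the c40/c20 UnixBench index ratio for each copy count.
awk 'BEGIN {
  split("2967.7 3802.4 4304.5 4312.2", c20);   # c20 scores: x10 x20 x40 x60
  split("2844.6 3152.3 4185.0 4321.5", c40);   # c40 scores: x10 x20 x40 x60
  n = split("x10 x20 x40 x60", t);
  for (i = 1; i <= n; i++)
    printf "%s  c40/c20 = %.2f\n", t[i], c40[i] / c20[i];
}'
```

At x60 the ratio comes out to 1.00, i.e. the c20 fully matched the c40, which is exactly the anomaly this bug describes.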
[Bug 1639932] [NEW] cpu constraints not being applied
Public bug reported:

When LXD containers are launched, CPU constraints are either not being applied or not being honored by the server/hypervisor.

** Affects: nova-lxd (Ubuntu)
   Importance: Undecided
   Status: New
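A quick way to see whether a CPU limit reached the container at all is to compare what the kernel exposes inside it (a generic sketch, not nova-lxd-specific; run inside the container):

```shell
# If a flavor's vCPU limit were applied, these counts should reflect the
# flavor, not the host. On an unconstrained container they match the host.
nproc                                # CPUs usable by the current process
grep -c '^processor' /proc/cpuinfo   # CPUs the container's /proc reports
```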
[Bug 1630891] Re: unable to start lxd container instances after host reboot
This issue still exists. It also affects instances that start automatically after a reboot.
[Bug 1637620] Re: cannot delete lxd instances in horizon
Tested the package nova-compute-lxd 13.0.0.0b3.dev712.201610311523.xenial-0ubuntu1 via the PPA. I have tested deleting, starting, and stopping instances, etc., and everything appears to be working.

** Tags removed: verification-needed
** Tags added: verification-done
[Bug 1637620] [NEW] cannot delete lxd instances in horizon
Public bug reported:

After deploying OpenStack and spinning up LXD instances, we are unable to delete those instances. The following error appears in Horizon and in the nova-compute logs:

  Exception during message handling: Failed to communicate with LXD API instance-005f: Error 400 - Profile is currently in use

As a workaround, it is possible to delete the LXC container, then delete the profile, and then delete the instance from Horizon.

** Affects: nova-lxd (Ubuntu)
   Importance: Undecided
   Status: New
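The manual cleanup can be sketched as shell commands (printed here as a dry run; the name instance-005f is taken from the error above, and nova-lxd naming the profile after the instance is an assumption to verify on your deployment; drop the echos to actually execute):

```shell
name="instance-005f"
# Order matters: LXD refuses to delete a profile while a container still
# uses it, so the container has to go first.
echo "lxc delete --force $name"   # remove the container (force-stops it if running)
echo "lxc profile delete $name"   # profile is now unused and can be removed
```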
[Bug 1624014] Re: Wrong bit set in klibc PXE dhcp/bootp flags
The fix has been tested and is working. Several machines have been booted and commissioned successfully.

** Tags removed: verification-needed
** Tags added: verification-done-xenial verification-needed-trusty
[Bug 1271144] Re: br0 not brought up by cloud-init script with MAAS provider
I've found a workaround for the issue that lets me launch LXC containers via Juju:

1. After the first container fails to launch, ssh to the hypervisor.
2. Edit /var/lib/lxc/juju-trusty-lxc-template/config.
3. Change br0 to lxcbr0.

After editing the template config, all subsequent LXC containers will launch via Juju.

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1271144

Title: br0 not brought up by cloud-init script with MAAS provider

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1271144/+subscriptions
--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
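The edit in the workaround can be scripted. This sketch demonstrates the substitution on a temporary copy rather than on the real /var/lib/lxc/juju-trusty-lxc-template/config, and the sample line assumes the template uses the usual lxc.network.link key:

```shell
tmp=$(mktemp)
# Stand-in for the template config; only the bridge line matters here.
printf 'lxc.network.type = veth\nlxc.network.link = br0\n' > "$tmp"
# Point the template at lxcbr0 instead of the missing br0 bridge.
sed -i 's/\(lxc\.network\.link *= *\)br0/\1lxcbr0/' "$tmp"
grep 'network.link' "$tmp"   # -> lxc.network.link = lxcbr0
rm -f "$tmp"
```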
[Bug 1271144] Re: br0 not brought up by cloud-init script with MAAS provider
Using 14.04, with MAAS in a KVM and Juju in a KVM. Same issue: the LXC container launches with br0 instead of lxcbr0.

juju status: https://pastebin.canonical.com/119755/
(unable to find failed-container log output)
juju machine agent log: https://pastebin.canonical.com/119746/
cloud-init log: https://pastebin.canonical.com/119748/
ifconfig: https://pastebin.canonical.com/119752/
container config: https://pastebin.canonical.com/119753/

Same issue; the LXC container does not launch.
[Bug 1160272] Re: Cannot add IM account (empty setup dialog)
This is still a bug. Much like Yionel, I had to kill everything related to telepathy/keyring/empathy, and then when I restarted Empathy it worked. I'd say this is quite a serious bug, and one that has made Empathy unusable for me personally.
[Bug 1096954] Re: Enabling Xinerama causes Unity Panel/Dash to become all black
xdpyinfo:

  name of display:    :0
  version number:    11.0
  vendor string:    The X.Org Foundation
  vendor release number:    11103000
  X.Org version: 1.11.3
  maximum request size:  16777212 bytes
  motion buffer size:  256
  bitmap unit, bit order, padding:    32, LSBFirst, 32
  image byte order:    LSBFirst
  number of supported pixmap formats:    7
  supported pixmap formats:
      depth 1, bits_per_pixel 1, scanline_pad 32
      depth 4, bits_per_pixel 8, scanline_pad 32
      depth 8, bits_per_pixel 8, scanline_pad 32
      depth 15, bits_per_pixel 16, scanline_pad 32
      depth 16, bits_per_pixel 16, scanline_pad 32
      depth 24, bits_per_pixel 32, scanline_pad 32
      depth 32, bits_per_pixel 32, scanline_pad 32
  keycode range:    minimum 8, maximum 255
  focus:  window 0x365, revert to Parent
  number of extensions:    25
      BIG-REQUESTS DAMAGE DPMS DRI2 GLX Generic Event Extension
      MIT-SCREEN-SAVER MIT-SHM NV-CONTROL NV-GLX RECORD RENDER
      SECURITY SHAPE SYNC X-Resource XC-MISC XFIXES
      XFree86-DGA XFree86-VidModeExtension XINERAMA XInputExtension
      XKEYBOARD XTEST XVideo
  default screen number:    0
  number of screens:    1

  screen #0:
    dimensions:    7680x1600 pixels (2294x473 millimeters)
    resolution:    85x86 dots per inch
    depths (7):    24, 1, 4, 8, 15, 16, 32
    root window id:    0x225
    depth of root window:    24 planes
    number of colormaps:    minimum 1, maximum 1
    default colormap:    0x20
    default number of colormap cells:    256
    preallocated pixels:    black 0, white 16777215
    options:    backing-store NO, save-unders NO
    largest cursor:    256x256
    current input event mask:    0xfac033
        KeyPressMask KeyReleaseMask EnterWindowMask LeaveWindowMask
        KeymapStateMask ExposureMask StructureNotifyMask SubstructureNotifyMask
        SubstructureRedirectMask FocusChangeMask PropertyChangeMask ColormapChangeMask
    number of visuals:    56
    default visual id:  0x21
    visual id 0x21:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x22:  class DirectColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x23:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x24:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x25:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x26:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x27:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x28:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x29:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x2a:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x2b:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual id 0x2c:  class TrueColor, depth 24 planes, available colormap entries 256 per subfield, red/green/blue masks 0xff/0xff00/0xff, significant bits in color specification 8 bits
    visual: [output truncated]
[Bug 1096954] Re: Enabling Xinerama causes Unity Panel/Dash to become all black
** Attachment added: Screenshot with 'patched' Launcher.qml
   https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1096954/+attachment/3478147/+files/screenshot-with-patched-Launcher.qml.png
[Bug 1096954] Re: Enabling Xinerama causes Unity Panel/Dash to become all black
** Attachment added: Panel now white after taking screenshot
   https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1096954/+attachment/3478148/+files/screenshot-of-white-panel.png
[Bug 1005027] Re: MAAS does not add the correct IP subnet to the squid-deb-proxy ACL
I can confirm the bug as well. I followed all the defaults from the wiki, except that I am using public IPs rather than private IPs. Upon launch, all nodes got the same 'bad mirror archive' error. After editing the squid-deb-proxy ACL, the nodes were able to connect and update.

It would be great if the wiki could be updated as well, since in theory this isn't a bug, just an item not mentioned in the wiki.
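For reference, the manual fix amounts to dropping the nodes' subnet into squid-deb-proxy's allowed-networks drop-in directory and restarting the proxy. The sketch below runs against a temporary directory so nothing real is touched; the path /etc/squid-deb-proxy/allowed-networks-src.acl.d/ and the 203.0.113.0/24 subnet are assumptions for illustration, so substitute your real values:

```shell
# Demo in a temp dir; on a real region controller the target would be
# /etc/squid-deb-proxy/allowed-networks-src.acl.d/ (verify on your system).
acl_d=$(mktemp -d)
echo '203.0.113.0/24' > "$acl_d/99-maas-nodes"   # one CIDR per line
cat "$acl_d/99-maas-nodes"                        # -> 203.0.113.0/24
rm -rf "$acl_d"
# afterwards, on the real box: sudo service squid-deb-proxy restart
```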