[Bug 1871869] Re: Nautilus copy task status is not accurate.
Ubuntu 20.04 here. I found reports of the file manager's transfer-progress information being unreliable dating back 15 years (https://alt.os.linux.ubuntu.narkive.com/RxvTnprS/still-no-progress-bar-when-copying-files-to-flash-drive), and I cannot understand why, for the love of god, this is still the case today! At the very least the user should be shown progress information when attempting to cleanly eject a USB stick! On Windows and macOS the transfer progress is what one expects: a reality-based depiction of the data actually being transferred, and when it says the copy is complete, it is complete. On Ubuntu (all of Linux?) the progress information is more of a lie than the truth. And this in a PC OS that is supposed to be intuitive for the average user ... why bother with most of the GUI if, to see something as rudimentary as this, you have to resort to terminal commands!?

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1871869

Title:
  Nautilus copy task status is not accurate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nautilus/+bug/1871869/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
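For readers who do want the "terminal commands" route in the meantime, the kernel exposes the real flush state that the progress bar hides. A minimal sketch, using only standard Linux procfs and coreutils (nothing specific to this bug):

```shell
# Data "copied" to a USB stick may still sit in the page cache. The kernel
# reports how much is still waiting to be flushed in /proc/meminfo; the copy
# has really finished only once Dirty/Writeback drop back to ~0 kB.
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Block until every buffered byte has actually reached the device; only
# after sync returns is it safe to pull the stick.
sync
echo "flush complete"
```

Re-running the grep in a loop (or under `watch -n1`) while a copy is in flight makes the real transfer progress visible.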
[Bug 1940665] Re: Lenovo Carbon X1 9th gen no longer powers off
I can confirm that with the generic 5.11 kernel shutdown works correctly: the laptop powers off completely.

Workaround:

- Install the generic kernel:
  sudo apt install linux-generic-hwe-20.04
  Note: it may be necessary to also manually install the corresponding linux-modules and linux-modules-extra packages.
- Reboot into the generic (5.11) kernel from the GRUB menu.
- Remove and purge the OEM kernel:
  dpkg -l | grep linux-oem
  Perform a 'sudo apt remove --purge' on all listed packages (in my case there were three) as well as on the associated linux-modules package.

--
https://bugs.launchpad.net/bugs/1940665
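The steps above can be sketched as commands. This is a hedged sketch, not the poster's exact procedure: only linux-generic-hwe-20.04 is named in the comment, and the sample dpkg lines below are illustrative stand-ins for real `dpkg -l` output:

```shell
# 1) Install the generic HWE kernel (from the workaround above):
#      sudo apt install linux-generic-hwe-20.04
# 2) Reboot and pick the generic 5.11 kernel in the GRUB menu.
# 3) Extract the OEM package names from dpkg's "ii  name  version ..."
#    lines so they can be fed to `sudo apt remove --purge`:
printf 'ii  linux-oem-20.04  1.0  amd64  oem kernel\nii  linux-modules-oem  1.0  amd64  modules\n' |
  awk '$1 == "ii" && $2 ~ /oem/ {print $2}'
```

On a real system the printf stand-in would be `dpkg -l | grep linux-oem`.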
[Bug 1940665] Re: Lenovo Carbon X1 9th gen no longer powers off
Same problem here with a ThinkPad T14s Gen 2 and the OEM kernel.

--
https://bugs.launchpad.net/bugs/1940665
[Bug 1841826] Re: Going to sleep instead of logging in while lid closed & external display
@Erik Jackson, it seems you confirmed my suspicions (see comment #26).

--
https://bugs.launchpad.net/bugs/1841826
[Bug 1841826] Re: Going to sleep instead of logging in while lid closed & external display
@Tim Wetzel, or maybe it's not just a matter of the laptop being docked, but of the lid being closed while logging in!? I never tried this, but suppose an external monitor, mouse and keyboard are connected without the laptop being docked, the laptop is powered on, and the lid is closed before the OS even starts booting, mimicking the way the laptop boots while docked but without the dock. It would be interesting to see whether the same problem occurs. If it did, that would point to the problem stemming from the lid being closed during login, regardless of whether the laptop is docked. I cannot recall the problem occurring while the laptop is docked with the lid open.

--
https://bugs.launchpad.net/bugs/1841826
[Bug 1841826] Re: Going to sleep instead of logging in while lid closed & external display
I can confirm that the same behavior occurs with the ThinkPad USB-C Dock Gen 2 (FRU P/N: 03X7609, Type: 40AS).

--
https://bugs.launchpad.net/bugs/1841826
[Bug 1841826] Re: Going to sleep instead of logging in while lid closed & external display
I guess this won't get fixed before the next LTS. Importance is still "Undecided" and it hasn't been assigned to anyone ... :/

--
https://bugs.launchpad.net/bugs/1841826
[Bug 1841826] Re: Going to sleep instead of logging in while lid closed & external display
My work setup doesn't change: same laptop (ThinkPad P14s), UltraDock, peripherals, screens, etc. ... and yet the behavior is very erratic. Just locking the screen will put the laptop into this useless zombie state. Most of the time the laptop then has to be removed from the dock, woken from sleep, have its lid closed again, and be re-docked before it's usable on the dock again. I cannot believe how badly broken this is! And in an LTS no less ...

--
https://bugs.launchpad.net/bugs/1841826
[Bug 1875015] Re: Ubuntu 20.04 and Displaylink is extremely slow
ThinkPad P14s running 20.04 with an Ultra Dock and three monitors connected (DP+DP+HDMI). The system is unusable without the Nvidia GPU: constant slowdowns, especially with anything visually intensive going on, like web pages with embedded video.

--
https://bugs.launchpad.net/bugs/1875015
[Bug 1899435] Re: laptop (docked w/ lid closed) suspends after login
I just set up a ThinkPad P14s with Ubuntu 20.04 and have the exact same problem as with the T440p. Here is the journalctl output.

** Attachment added: "P14s_journalctl.txt"
   https://bugs.launchpad.net/ubuntu/+source/gnome-settings-daemon/+bug/1899435/+attachment/5434190/+files/P14s_journalctl.txt

--
https://bugs.launchpad.net/bugs/1899435
[Bug 1899435] Re: laptop (docked w/ lid closed) suspends after login
Why is this bug still marked as "Incomplete"?

--
https://bugs.launchpad.net/bugs/1899435
[Bug 1899435] Re: laptop (docked w/ lid closed) suspends after login
"Does it suspend if you start with the screen open and close the lid after login?"

-> Laptop off and docked, with lid open
-> Power on, boot into Ubuntu and log in
-> Close laptop lid
-> Laptop suspends
-> Open laptop lid
-> Laptop wakes up and the login screen is presented

--
https://bugs.launchpad.net/bugs/1899435
[Bug 1899435] Re: laptop (docked w/ lid closed) suspends after login
Attaching journalctl output.

** Attachment added: "journalctl.txt"
   https://bugs.launchpad.net/ubuntu/+source/gnome-settings-daemon/+bug/1899435/+attachment/5422020/+files/journalctl.txt

--
https://bugs.launchpad.net/bugs/1899435
[Bug 1899435] [NEW] laptop (docked w/ lid closed) suspends after login
Public bug reported:

I installed 20.04 on my ThinkPad T440p laptop (previously 18.04). As with 18.04, I boot up the laptop docked and with the lid closed; however, with 20.04 something new happens right after user login ... the laptop goes into suspend and has to be woken up by pressing the power button on the dock. This behavior is new and did not happen with 18.04.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: gnome-settings-daemon 3.36.1-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-48.52-generic 5.4.60
Uname: Linux 5.4.0-48-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.9
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Mon Oct 12 09:32:53 2020
InstallationDate: Installed on 2020-09-22 (19 days ago)
InstallationMedia: Ubuntu 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
SourcePackage: gnome-settings-daemon
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: gnome-settings-daemon (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: amd64 apport-bug focal

--
https://bugs.launchpad.net/bugs/1899435
[Bug 1850927] [NEW] freeradius not starting on boot
Public bug reported:

1) lsb_release -rd
   Description: Ubuntu 18.04.3 LTS
   Release: 18.04

2) apt-cache policy freeradius
   freeradius:
     Installed: 3.0.16+dfsg-1ubuntu3.1
     Candidate: 3.0.16+dfsg-1ubuntu3.1
     Version table:
    *** 3.0.16+dfsg-1ubuntu3.1 500
          500 http://ch.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
          500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
          100 /var/lib/dpkg/status
        3.0.16+dfsg-1ubuntu3 500
          500 http://ch.archive.ubuntu.com/ubuntu bionic/main amd64 Packages

3) What you expected to happen

The freeradius service to start on system boot, and to start with 'service freeradius start'.

4) What happened instead

The freeradius service doesn't start, because /tmp/radiusd is missing and is not created automatically.

Oct 30 14:14:56 radius systemd[1]: Starting FreeRADIUS multi-protocol policy server...
Oct 30 14:14:56 radius freeradius[5524]: FreeRADIUS Version 3.0.16
Oct 30 14:14:56 radius freeradius[5524]: Copyright (C) 1999-2017 The FreeRADIUS server project and contributors
Oct 30 14:14:56 radius freeradius[5524]: There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Oct 30 14:14:56 radius freeradius[5524]: PARTICULAR PURPOSE
Oct 30 14:14:56 radius freeradius[5524]: You may redistribute copies of FreeRADIUS under the terms of the
Oct 30 14:14:56 radius freeradius[5524]: GNU General Public License
Oct 30 14:14:56 radius freeradius[5524]: For more information about these matters, see the file named COPYRIGHT
Oct 30 14:14:56 radius freeradius[5524]: Starting - reading configuration files ...
Oct 30 14:14:56 radius freeradius[5524]: Debugger not attached
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
Oct 30 14:14:56 radius freeradius[5524]: Creating attribute SQL-Group
Oct 30 14:14:56 radius freeradius[5524]: Creating attribute Unix-Group
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql_mysql: libmysql version: 5.7.27
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Attempting to connect to database "radiusdb"
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Initialising connection pool
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Processing generate_sql_clients
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql) in generate_sql_clients: query is SELECT id, nasname, shortname, type, secret, server FROM nas
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): 0 of 0 connections in use. You may need to increase "spare"
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Opening additional connection (0), 1 of 1 pending slots used
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql_mysql: Starting connect to MySQL server
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Reserved connection (0)
Oct 30 14:14:56 radius freeradius[5524]: rlm_sql (sql): Released connection (0)
Oct 30 14:14:56 radius freeradius[5524]: rlm_cache (cache_eap): Driver rlm_cache_rbtree (module rlm_cache_rbtree) loaded and linked
Oct 30 14:14:56 radius freeradius[5524]: [/etc/freeradius/3.0/mods-config/attr_filter/access_reject]:11 Check item "FreeRADIUS-Response-Delay" found in filter list for realm "DEFAULT".
Oct 30 14:14:56 radius freeradius[5524]: [/etc/freeradius/3.0/mods-config/attr_filter/access_reject]:11 Check item "FreeRADIUS-Response-Delay-USec" found in filter list for realm "DEFAULT".
Oct 30 14:14:56 radius freeradius[5524]: rlm_detail (auth_log): 'User-Password' suppressed, will not appear in detail output
Oct 30 14:14:56 radius freeradius[5524]: rlm_mschap (mschap): using internal authentication
Oct 30 14:14:56 radius freeradius[5524]: TLS section "tls" missing, trying to use legacy configuration
Oct 30 14:14:56 radius freeradius[5524]: tls: Failed changing permissions on /tmp/radiusd: No such file or directory
Oct 30 14:14:56 radius freeradius[5524]: rlm_eap_tls: Failed initializing SSL context
Oct 30 14:14:56 radius freeradius[5524]: rlm_eap (EAP): Failed to initialise rlm_eap_tls
Oct 30 14:14:56 radius freeradius[5524]: /etc/freeradius/3.0/mods-enabled/eap[2]: Instantiation failed for module "eap"
Oct 30 14:14:56 radius systemd[1]: freeradius.service: Control process exited, code=exited status=1
Oct 30 14:14:56 radius systemd[1]: freeradius.service: Failed with result 'exit-code'.
Oct 30 14:14:56 radius systemd[1]: Failed to start FreeRADIUS multi-protocol policy server.
Oct 30 14:15:01 radius systemd[1]: freeradius.service: Service hold-off time over, scheduling restart.
Oct 30 14:15:01 radius systemd[1]: freeradius.service: Scheduled restart job, restart counter is at 173.
Oct 30 14:15:01 radius systemd[1]: Stopped FreeRADIUS multi-protocol policy server.

5) Fix: Create the following file ...

nano /etc/tmpfiles.d/radius.conf

... with the following content ...

d /tmp/radiusd 0700 freerad freerad - -

... save and exit, then execute the following
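The report ends mid-sentence, so the command meant to follow is not in the text. A hedged sketch of how a tmpfiles.d entry like this is normally applied (the `systemd-tmpfiles --create` step is the standard mechanism, not the reporter's words):

```shell
# The tmpfiles.d line from the fix above: have systemd recreate
# /tmp/radiusd on every boot, mode 0700, owned by user/group freerad.
echo 'd /tmp/radiusd 0700 freerad freerad - -' > radius.conf
# On a real system this file lives at /etc/tmpfiles.d/radius.conf and is
# applied without a reboot via:  sudo systemd-tmpfiles --create
awk '{print $1, $2, $3}' radius.conf   # sanity-check: type, path, mode
```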
[Bug 1816653] [NEW] Memory Leak
Public bug reported:

Gthumb version: 3.6.1
Ubuntu version: 18.04

What you expected to happen: browse/edit photos without all 16 GB of my RAM being used up and my system freezing.

What happened instead: Gthumb uses all memory available on the system.

** Affects: gthumb (Ubuntu)
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1816653
[Bug 1808047] [NEW] Gthumb blurred edges effect
Public bug reported:

Gthumb 3.6.1 running in Ubuntu 18.04

The blurred edges effect, when applied, is nothing like the preview. When applied it is almost entirely unnoticeable.

** Affects: gthumb (Ubuntu)
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1808047
[Bug 1799084] [NEW] Gthumb Memory Leak
Public bug reported:

Ubuntu 18.04 x64
Gthumb 3.6.1

Same issue as was reported here: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=893661

Specifically this part: 'if I use the editing mode like "automatic contrast adjustment" and "rotate". From 500 Mb it grows till all my 8 Gb RAM are occupied and the computer freezes.'

This means one has to manually keep an eye on Gthumb's memory usage and close it periodically in order not to end up with a completely frozen system. Just looking through photos increases the memory used by Gthumb; editing photos increases it even faster.

** Affects: gthumb (Ubuntu)
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1799084
[Bug 1515288] Re: Thunderbird attachment from Samba share won't send
Yes, with TB 52.9.1 64-bit under Ubuntu 18.04 the issue seems to be resolved.

--
https://bugs.launchpad.net/bugs/1515288
[Bug 1722478] Re: Two-finger scrolling no longer works after resuming from suspend
Ubuntu 18.04, kernel 4.15.0-22-generic, T440p: same problem with two-finger scrolling. Will wait until this is fixed in an update ... how long will that be?

--
https://bugs.launchpad.net/bugs/1722478
[Bug 698149] [NEW] os-prober lvm destroys VM
Public bug reported:

Binary package hint: os-prober

os-prober mounts the file systems of running VMs, which leads the kernel to think the FS needs repair, and it then repairs it. It is not documented that this happens when using GRUB. Looking through /var/log/ it seems 30+ file systems in LVM volumes were damaged.

Debian seems to have this fixed: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=556739
Related: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/384973

** Affects: os-prober (Ubuntu)
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/698149
[Bug 662998] Re: System freezes when running on battery
Dell Studio XPS 1647 laptop with ATI Mobility Radeon HD 4670. When using a Huawei Vodafone USB dongle the system freezes within 30 seconds. After disabling almost all applications plus Bluetooth/wireless, changing the resolution from HD to 1280x720 and reducing the backlight to the minimum, I am able to use the dongle for some time.

--
https://bugs.launchpad.net/bugs/662998
[Bug 533493] Re: lvremove fails
I came across a PDF document from HP, a recent white paper on LVM snapshots:
http://h2.www2.hp.com/bizsupport/TechSupport/CoreRedirect.jsp?redirectReason=DocIndexPDF&prodSeriesId=4296010&targetPage=http%3A%2F%2Fbizsupport2.austin.hp.com%2Fbc%2Fdocs%2Fsupport%2FSupportManual%2Fc02054539%2Fc02054539.pdf

Most interesting part:

  In very low system memory conditions, deletion of a single snapshot can hang indefinitely waiting for memory to become available. Ensure that sufficient memory is available during deletion of a single snapshot that requires data to be copied to its predecessor. If the lvremove command hangs in these cases, increase the system memory or free some existing system memory to proceed with the snapshot deletion.

No further explanation is given.

Our host contains 64GB RAM and two 6-core Intel CPUs. We're using Munin to graph memory usage, but the graphs are updated only every 5 minutes, so we don't have real numbers on usage at the moment the snapshot was removed. At the moment the removal of the snapshot was initiated, the host used approximately 51GB RAM, 6GB buffers, 10GB unused and 3GB swap.

I'm also thinking about some NUMA issues I researched in recent weeks; it probably has nothing to do with this issue. Some memory statistics:

# free -m
             total       used       free     shared    buffers     cached
Mem:         64549      64062        487          0      23579        780
-/+ buffers/cache:      39702      24847
Swap:         7627        377       7250

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 32768 MB
node 0 free: 63 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 32758 MB
node 1 free: 437 MB
node distances:
node   0   1
  0:  10  20

The host is swapping a little now, but every day it swaps out 4GB of RAM. vm.swappiness=0 is set, and

  swapoff -a
  swapon -a

is run a couple of times every day. It should not swap, but it seems to be an issue with multiple CPU sockets and processes not using the same NUMA node (CPU pinning). It seems that hosts with multiple sockets (not cores) swap out a lot more.

It could be that the lvremove action thinks there is not enough RAM and hangs indefinitely. Hopefully someone can confirm some of this.

--
https://bugs.launchpad.net/bugs/533493
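The numactl figures above (node 0 down to 63 MB free while node 1 still has 437 MB) show the kind of per-node imbalance worth flagging automatically. A small sketch, run here on the numbers from this comment rather than on a live host:

```shell
# Flag NUMA nodes whose free memory falls below a threshold (100 MB here).
# Input lines mimic the "node N free: M MB" lines of `numactl --hardware`.
printf 'node 0 free: 63 MB\nnode 1 free: 437 MB\n' |
  awk '$3 == "free:" && $4 < 100 {print "node " $2 " low: " $4 " MB free"}'
# -> node 0 low: 63 MB free
```

On a real host, pipe `numactl --hardware` itself into the awk filter.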
[Bug 533493] Re: lvremove fails
Ah, by the way, an accidental reboot fixed the locked LV.

--
https://bugs.launchpad.net/bugs/533493
[Bug 533493] Re: lvremove fails
Ubuntu 10.04.1 LTS
Kernel 2.6.32-25-server and 2.6.32-24-server x86_64
lvm2 2.02.54-1ubuntu4.1
udev 151-12.2
libvirt-bin 0.7.5-5ubuntu27.7
kvm 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubun

We're running VMs with KVM on several hosts, using one LV per VM. Every night the LVM volumes are snapshotted one by one. After saving the snapshotted volume to another server we need to remove the snapshot:

  /sbin/lvremove -f /dev/someVG/snap-VMvolumeXYZ

This hangs sometimes; on the 4 hosts where we use LVM this is the second time in 4 days. It's impossible to kill the process, and from that moment on all other LVM commands freeze too. I have to remove /dev/mapper/someVG-VMvolumeXYZ manually to be able to use other LVM commands again. It seems that the device is SUSPENDED:

# dmsetup info someVG-VMvolumeXYZ
Name:              someVG-VMvolumeXYZ
State:             SUSPENDED
/dev/mapper/hl-someVG-VMvolumeXYZ: open failed: No such file or directory
Tables present:    None
Open count:        2
Event number:      0
Major, minor:      251, 9
Number of targets: 0
UUID: LVM-R6AybI9pE2adk8jBZuRc837oZl9Kh2k3p8WzNdpuyQT7zb1xfFb0pJ3CbdkNyx4K

Is there a way to remove this lock or suspended state? Rebooting the host would probably solve it, but that's not an option twice a week; there are 15 to 30 VMs running on these hosts. Searching on this matter gives a lot of results going back to 2006 that seem to describe exactly the same thing. Link: http://readlist.com/lists/redhat.com/linux-lvm/0/422.html

--
https://bugs.launchpad.net/bugs/533493
[Bug 662998] Re: System freezes when running on battery
Same problem here: Dell Studio XPS 1647 laptop. It's definitely something with power management when running on battery. When I enable the built-in webcam the system freezes within 10 seconds; even attaching my tiny iPod Shuffle freezes it. I managed to keep my system stable by doing two things:

- Reducing the backlight
- Using the Catalyst Control Center and setting PowerPlay to Maximum Battery

--
https://bugs.launchpad.net/bugs/662998
[Bug 614322] Re: libvirt not recognizing NUMA architecture
I can confirm this. Most of our hosts have 64GB RAM and two Intel Nehalem hexa-cores, with 3 VMs using 16GB RAM each. I noticed performance degradation, and the host also swapped out a lot of memory; the swapping degraded performance dramatically. vm.swappiness=0 in /etc/sysctl.conf did not help.

It seems that NUMA on Intel CPUs can be expensive because RAM needs to be transferred from other nodes. With only 1 node (socket) there is no problem; with 2 or more nodes you see slowdowns. Even without the capabilities info you can prevent this behavior by pinning the vCPUs. You should spread your VMs over the available nodes:

# numactl --hardware | grep 'node 0 cpus'
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22

Your XML should contain something like this:

<vcpu cpuset='0,2,4,6,8,10,12,14,16,18,20,22'>4</vcpu>

The next VM should use the other node:

# numactl --hardware | grep 'node 1 cpus'
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23

<vcpu cpuset='1,3,5,7,9,11,13,15,17,19,21,23'>4</vcpu>

So you don't need the NUMA info from virsh capabilities. We now split up our hosts by the number of NUMA nodes to prevent performance degradation and swapping.

--
https://bugs.launchpad.net/bugs/614322
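The alternating cpuset strings above are tedious to type by hand. For a 2-node, 24-thread host like this one (even CPU numbers on node 0, odd on node 1, as in the numactl output), they can be generated with coreutils; a minimal sketch:

```shell
# node 0: even CPU numbers 0..22 -> comma-joined cpuset string
seq 0 2 22 | paste -sd, -
# -> 0,2,4,6,8,10,12,14,16,18,20,22

# node 1: odd CPU numbers 1..23
seq 1 2 23 | paste -sd, -
# -> 1,3,5,7,9,11,13,15,17,19,21,23
```

On another topology, read the actual CPU lists from `numactl --hardware` instead of assuming the even/odd split.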
[Bug 667986] Re: Disk image type defaults to raw since 0.7.5-5ubuntu27.5
We now use the type= part. Something different, but also related to this new behavior: live migration seems to crash VMs that were started before the upgrades. I can't reproduce it because all hosts are upgraded by now, and I don't do live migrations anymore after 3 failures out of 3. The live migration only crashes the VM when we use qcow2 and migrate to another host which is also upgraded. The destination host reads the qcow2 as raw, ouch! The VM on the destination has <driver type='raw'/> in virsh dumpxml; on the source host it was <driver type='qcow2'/>. -- Disk image type defaults to raw since 0.7.5-5ubuntu27.5 https://bugs.launchpad.net/bugs/667986 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 667986] Re: Disk image type defaults to raw since 0.7.5-5ubuntu27.5
My bad. libvirt-migrate-qemu-disks is unable to migrate my disks: the dumpxml output is the same after running libvirt-migrate-qemu-disks. That's why a live migration fails. -- Disk image type defaults to raw since 0.7.5-5ubuntu27.5 https://bugs.launchpad.net/bugs/667986 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 667986] [NEW] Disk image type defaults to raw since 0.7.5-5ubuntu27.5
Public bug reported: Ubuntu 10.04.1 LTS, libvirt-bin 0.7.5-5ubuntu27.6. Since 0.7.5-5ubuntu27.5 (http://www.ubuntuupdates.org/packages/show/253540) the default type of disk images is RAW. Before this version the disk image type was automatically detected. This new behavior results in boot failures and headaches. After upgrading libvirt-bin and stopping and starting a VM, it looked like the qcow2 image was completely unrecoverable. All kinds of recovery tools were not able to recover most of the data, only some snippets. Converting the qcow2 to RAW worked and it booted directly, so there was nothing wrong with the qcow2 image. When I checked the kvm process with ps I found type=raw defined for the qcow2 image.

Snippet from virsh dumpxml someVM before:

<disk type='file' device='disk'>
  <driver name='qemu' cache='writethrough'/>
  <source file='/etc/libvirt/qemu/disks/somediskimage.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

Snippet from virsh dumpxml someVM now:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writethrough'/>
  <source file='/etc/libvirt/qemu/disks/somediskimage.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

If a file is used (qcow2 or raw), detecting the image type is very easy:

r...@kvm:# file disk0.qcow2
disk0.qcow2: Qemu Image, Format: Qcow , Version: 2
r...@kvm:# qemu-img info disk0.qcow2
image: disk0.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 8.0G
cluster_size: 4096

Maybe it's better to try to detect the type first, and if that fails use RAW as the default (for block devices).

** Affects: libvirt (Ubuntu) Importance: Undecided Status: New ** Tags: libvirt-bin qcow2 raw virsh -- Disk image type defaults to raw since 0.7.5-5ubuntu27.5 https://bugs.launchpad.net/bugs/667986 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. 
-- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
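The detection the report asks for is cheap: a qcow2 file starts with the 4-byte magic "QFI" followed by 0xfb, which is what file and qemu-img look at. A minimal sketch of that check, using a fabricated header written to a scratch file for demonstration rather than a real disk image:

```shell
# Write the first bytes of a qcow2 header to a scratch file; \373 is octal
# for 0xfb, completing the qcow2 magic "QFI\xfb".
printf 'QFI\373' > /tmp/fake-header.img

# Classify by magic: images starting with "QFI" are qcow2, anything else is
# treated as raw (the fallback the report suggests for block devices).
magic=$(head -c 3 /tmp/fake-header.img)
if [ "$magic" = "QFI" ]; then echo qcow2; else echo raw; fi
```

This prints "qcow2" for the fabricated file; a zeroed or arbitrary file would fall through to "raw".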
[Bug 667986] Re: Disk image type defaults to raw since 0.7.5-5ubuntu27.5
It seems I missed that security notice. ;) Without the security notice it is impossible to understand what changed by reading only the changelog. There is no link to the security notice in the changelog, and the links that are in the changelog don't say anything about the changed default behavior. So for future upgrades I simply have to search the security notices to learn of any changes too? -- Disk image type defaults to raw since 0.7.5-5ubuntu27.5 https://bugs.launchpad.net/bugs/667986 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 556732] Re: blkid not used in fstab
Does it? The UUIDs are known from the beginning, when the disks are partitioned. -- blkid not used in fstab https://bugs.launchpad.net/bugs/556732 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to vm-builder in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 556732] Re: blkid not used in fstab
Snippet from menu.lst:

title Ubuntu 9.10, kernel 2.6.31-20-server
uuid 618c7d1f-ef8c-423b-9f32-e8731d15daf2
kernel /boot/vmlinuz-2.6.31-20-server root=UUID=618c7d1f-ef8c-423b-9f32-e8731d15daf2 ro quiet splash
initrd /boot/initrd.img-2.6.31-20-server

/etc/fstab (after I fixed it):

UUID=618c7d1f-ef8c-423b-9f32-e8731d15daf2 / xfs defaults 0 0
UUID=409917ac-9244-4ae1-a5f8-d54b3b1665c6 swap swap defaults 0 0

The goal is to prevent issues when adding disks. When a new disk is added and recognized before the current disk, the root and swap partitions are not there (on the newly added disk): /dev/sda is the new disk, while /dev/sdb1 is root and /dev/sdb2 is swap in this new situation. UUIDs in fstab have been used by default since 8.04. I'm only asking to put this on a wishlist.

plugins/ubuntu/dapper.py line 260:

self.install_from_template('/etc/fstab', 'dapper_fstab', { 'parts' : disk.get_ordered_partitions(self.vm.disks), 'prefix' : self.disk_prefix })

And the template should use UUIDs. -- blkid not used in fstab https://bugs.launchpad.net/bugs/556732 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to vm-builder in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
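The fstab lines the template should emit can be sketched as plain string construction. The UUID values here are the ones quoted above; in vmbuilder they would come from the partitioning step (or from blkid) rather than being hard-coded:

```shell
# Emit UUID-based fstab lines instead of device names, mirroring the fixed
# /etc/fstab shown above. Hard-coded UUIDs are for illustration only.
root_uuid=618c7d1f-ef8c-423b-9f32-e8731d15daf2
swap_uuid=409917ac-9244-4ae1-a5f8-d54b3b1665c6
printf 'UUID=%s / xfs defaults 0 0\n' "$root_uuid"
printf 'UUID=%s swap swap defaults 0 0\n' "$swap_uuid"
```

Referring to partitions by UUID this way makes the entries immune to device-name reshuffling when a new disk enumerates first.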
[Bug 556732] [NEW] blkid not used in fstab
Public bug reported: vmbuilder uses blkid in /boot/grub/menu.lst. It should use it in /etc/fstab too. As it is now, one has to rewrite /etc/fstab (automated) with UUIDs to prevent issues when adding disks. ** Affects: vm-builder (Ubuntu) Importance: Undecided Status: New -- blkid not used in fstab https://bugs.launchpad.net/bugs/556732 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to vm-builder in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 555115] [NEW] Can't use XFS as filesystem
Public bug reported: Ubuntu 9.10 Karmic. It's impossible to use XFS as the filesystem. It seems that EXT3 is hardcoded in /usr/lib/python2.6/dist-packages/VMBuilder/plugins/cli/__init__.py. It's not possible to use XFS without changing this file, which is bad (sed 's/ext3/xfs/g' /usr/lib/python2.6/dist-packages/VMBuilder/plugins/cli/__init__.py). Option --rootsize=SIZE only accepts the size of the root filesystem; option --part=PATH only accepts mountpoints and sizes. ** Affects: vm-builder (Ubuntu) Importance: Undecided Status: New -- Can't use XFS as filesystem https://bugs.launchpad.net/bugs/555115 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to vm-builder in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Mark, I fully agree. It should be fixed in Karmic. I'm not going to use Lucid in production for the next 3-4 months; it has to prove to be stable first. Is it so hard to fix this bug? Probably it's just not high on the list of things to be fixed. -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
In Karmic there is a workaround. In Lucid this problem is not reproducible. I also think it doesn't meet the SRU criteria. 6 months ago, when I reported this bug, I hoped it would be fixed within a couple of weeks, but now I'd rather wait a month and test all needed features again and again in Lucid (I've been doing so for some months already). The migrate feature in Karmic works 9 times out of 10 with the workaround (the failures come with random errors). In Lucid I migrated 6 VMs hundreds of times without failures, so KVM/QEMU/libvirt is much more stable in Lucid. Why do I also want to upgrade to Lucid? Because Lucid is LTS and has features like KSM: http://www.linux-kvm.com/content/using-ksm-kernel-samepage-merging-kvm It's absolutely worth waiting now. -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Ah, my bad. It's indeed not working without the suspend/resume workaround. I used a bash script which contained the suspend/resume workaround; I was not aware of that. -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Migrating from Karmic to Karmic seems to have been working for some time now. This bug can be closed. -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Migrating between these CPU types:

Testserver01: 2 x Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
Productionserver01: 16 x Intel(R) Xeon(R) CPU X5570 @ 2.93GHz

Works for me with:
Karmic to Lucid
Lucid to Lucid

This was on 2010-01-13. Now migration from Karmic to Lucid fails. There have been a lot of KVM/QEMU updates on Lucid in the last few days. -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Finished some new tests. The test is pretty much the same as in the bug description and comment 2 (https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/448674/comments/2), only hostB is Lucid.

Brought HostA up-to-date: Ubuntu Karmic 9.10, libvirt-bin 0.7.0-1ubuntu13.1, qemu-kvm 0.11.0-0ubuntu6.3, 2.6.31-16-server.
Upgraded HostB to: Ubuntu Lucid 10.04 (development branch), libvirt-bin 0.7.2-4ubuntu5, qemu-kvm 0.11.0-0ubuntu6.3, 2.6.32-10-server.
VM running Ubuntu Jaunty 9.04.

- Karmic to Lucid: migration works without the suspend/resume workaround.
- Lucid to Lucid: migration works without the suspend/resume workaround.

For fun:
- Lucid to Karmic (so back): migration works but the suspend/resume workaround is needed. The instance is migrated but all partitions are gone, so I/O errors and everything crashes ;) -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
I tested migrations on Karmic with guest OSes Ubuntu Hardy, Ubuntu Jaunty, and Ubuntu Karmic. Guests hang, and suspend+resume fixes this. -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Seems to be a known issue and patches are available: https://www.redhat.com/archives/libvir-list/2009-October/msg00019.html -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Hosts: CPU: Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz, RAM: 2GB, Disk: Gbit NFS mount on a NetApp FAS3040 (/etc/libvirt/qemu):

10.0.40.100:/vol/hl/disk_images /etc/libvirt/qemu/disks nfs rsize=32768,wsize=32768,hard,intr,tcp,timeo=600,rw 0 0

Installed both hosts with Ubuntu Jaunty 9.04:

aptitude install libvirt-bin qemu kvm host sysstat iptraf iptables portmap nfs-common realpath bridge-utils vlan ubuntu-virt-server python-vm-builder whois postfix hdparm

After some testing with migration (all attempts failed because of several errors/bugs) I upgraded to Ubuntu Karmic 9.10 Beta.

cat /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth1
iface eth1 inet manual
    up ifconfig eth1 0.0.0.0 up
    up ip link set eth1 promisc on

auto eth1.1503
iface eth1.1503 inet manual
    up ifconfig eth1.1503 0.0.0.0 up
    up ip link set eth1.1503 promisc on

auto br_extern
iface br_extern inet static
    address 123.123.32.252 # HOSTA
    address 123.123.32.253 # HOSTB
    network 123.123.32.0
    netmask 255.255.252.0
    broadcast 123.123.35.255
    gateway 123.123.32.1
    bridge_ports eth0.1503
    bridge_stp off

/etc/resolv.conf is correct, /etc/hosts is correct, hostnames are correct and resolvable.

VM running Ubuntu Jaunty 9.04, fqdn.com.xml:

<?xml version='1.0'?>
<domain type='kvm'>
  <name>fqdn.com</name>
  <uuid>70a1c1f2-9a3e-4ee5-9f95-69e7e2682e15</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/etc/libvirt/qemu/disks/1378/fqdn.com/disk0.qcow2'/>
      <target dev='hda' bus='ide'/>
      <driver cache='writethrough'/>
    </disk>
    <interface type='bridge'>
      <mac address='56:16:43:76:ab:09'/>
      <source bridge='br_extern'/>
    </interface>
    <disk type='file' device='cdrom'>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' listen='127.0.0.1'/>
  </devices>
</domain>

Define the instance: /usr/bin/virsh define /etc/libvirt/qemu/xml/1378/fqdn.com.xml
Start the instance: /usr/bin/virsh start fqdn.com

ps auxf | grep kvm:

/usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name fqdn.com -uuid 70a1c1f2-9a3e-4ee5-9f95-69e7e2682e15 -monitor unix:/var/run/libvirt/qemu/fqdn.com.monitor,server,nowait -boot dc -drive file=/etc/libvirt/qemu/disks/1378/fqdn.com/disk0.qcow2,if=ide,index=0,boot=on -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=56:16:43:76:ab:09,vlan=0,name=nic.0 -net tap,fd=17,vlan=0,name=tap.0 -serial none -parallel none -usb -vnc 127.0.0.1:0 -vga cirrus

Migrate the instance: /usr/bin/virsh migrate fqdn.com qemu+ssh://hostb.fqdn.com/system

The migration completes, but the instance seems to be suspended. On HostB, to resume the instance:

/usr/bin/virsh suspend fqdn.com
/usr/bin/virsh resume fqdn.com

Running only "virsh resume fqdn.com" does nothing. The hosts were initially installed as Ubuntu Jaunty 9.04 and upgraded to Ubuntu Karmic 9.10 Beta. Maybe this is the problem? -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu. 
-- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
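The migrate-then-suspend/resume sequence above can be wrapped in one step. In this sketch virsh is stubbed out with a shell function so the flow can be shown without a libvirt host; on a real host the stub would be dropped and the real virsh binary used:

```shell
# Stub for demonstration only: echoes each virsh command instead of running it.
virsh() { echo "virsh $*"; }

# Migrate a VM and immediately apply the suspend/resume workaround on the
# destination, as described in this bug report.
migrate_with_workaround() {
  vm=$1
  dest=$2
  virsh migrate "$vm" "$dest" &&
  virsh --connect "$dest" suspend "$vm" &&
  virsh --connect "$dest" resume "$vm"
}

migrate_with_workaround fqdn.com qemu+ssh://hostb.fqdn.com/system
```

Chaining with && means the suspend/resume is only attempted when the migration itself reports success.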
[Bug 448674] Re: VM is suspended after live migrate in Karmic
Hosts: CPU: Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz RAM: 2GB Disk: Gbit NFS-mount on NetApp FAS3040 (/etc/libvirt/qemu) 10.0.40.100:/vol/hl/disk_images /etc/libvirt/qemu/disks nfs rsize=32768,wsize=32768,hard,intr,tcp,timeo=600,rw0 0 Installed both hosts with Ubuntu Jaunty 9.04. aptitude install libvirt-bin qemu kvm host sysstat iptraf iptables portmap nfs-common realpath bridge-utils vlan ubuntu-virt-server python-vm-builder whois postfix hdparm After some testing with migration (all failed because of several errors/bugs) I upgraded to Ubuntu Karmic 9.10 Beta. cat /etc/network/interfaces: auto lo iface lo inet loopback auto eth1 iface eth1 inet manual up ifconfig eth1 0.0.0.0 up up ip link set eth1 promisc on auto eth1.1503 iface eth1.1503 inet manual up ifconfig eth1.1503 0.0.0.0 up up ip link set eth1.1503 promisc on auto br_extern iface br_extern inet static address 123.123.32.252 # HOSTA address 123.123.32.253 # HOSTB network 123.123.32.0 netmask 255.255.252.0 broadcast 123.123.35.255 gateway 123.123.32.1 bridge_ports eth0.1503 bridge_stp off /etc/resolv.conf is correct /etc/hosts is correct Hostnames are correct and resolvable VM running Ubuntu Jaunty 9.04: fqdn.com.xml: ?xml version=1.0? 
domain type=kvm namefqdn.com/name uuid70a1c1f2-9a3e-4ee5-9f95-69e7e2682e15/uuid memory1048576/memory currentMemory1048576/currentMemory vcpu1/vcpu features acpi/ apic/ pae/ /features os typehvm/type boot dev=cdrom/ boot dev=hd/ /os clock offset=utc/ on_poweroffdestroy/on_poweroff on_rebootrestart/on_reboot on_crashrestart/on_crash devices emulator/usr/bin/kvm/emulator disk type=file device=disk source file=/etc/libvirt/qemu/disks/1378/fqdn.com/disk0.qcow2/ target dev=hda bus=ide/ driver cache=writethrough/ /disk interface type=bridge mac address=56:16:43:76:ab:09/ source bridge=br_extern/ /interface disk type=file device=cdrom target dev=hdc bus=ide/ readonly/ /disk input type=mouse bus=ps2/ graphics type=vnc port=-1 listen=127.0.0.1/ /devices /domain Define instance: /usr/bin/virsh define /etc/libvirt/qemu/xml/1378/fqdn.com.xml Start instance: /usr/bin/virsh start fqdn.com ps auxf | grep kvm: /usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name fqdn.com -uuid 70a1c1f2-9a3e-4ee5-9f95-69e7e2682e15 -monitor unix:/var/run/libvirt/qemu/fqdn.com.monitor,server,nowait -boot dc - drive file=/etc/libvirt/qemu/disks/1378/fqdn.com/disk0.qcow2,if=ide,index=0,boot=on -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=56:16:43:76:ab:09,vlan=0,name=nic.0 -net tap,fd=17,vlan=0 ,name=tap.0 -serial none -parallel none -usb -vnc 127.0.0.1:0 -vga cirrus Migrate instance: /usr/bin/virsh migrate fqdn.com qemu+ssh://hostb.fqdn.com/system Migration will complete but the instance seems to be suspended. On HostB to resume the instance: /usr/bin/virsh suspend fqdn.com /usr/bin/virsh resume fqdn.com Only running resume fqdn.com does nothing. The Hosts were initialy installed as Ubuntu Jaunty 9.04 and upgraded to Ubuntu Karmic 9.10 Beta. Maybe this is the problem? -- VM is suspended after live migrate in Karmic https://bugs.launchpad.net/bugs/448674 You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. 
-- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
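The migrate-then-suspend/resume sequence above can be sketched as a small wrapper. This is only an illustration, assuming a POSIX shell and the virsh CLI on the source host; the function name and the VIRSH override are hypothetical, not part of the original report:

#!/bin/sh
# Sketch of the manual workaround for bug 448674: migrate the domain,
# then cycle suspend/resume on the destination so the guest does not
# stay frozen. VIRSH is overridable so the sequence can be dry-run.
VIRSH=${VIRSH:-virsh}

migrate_and_unfreeze() {
    domain=$1      # e.g. fqdn.com
    dest=$2        # e.g. hostb.fqdn.com
    uri="qemu+ssh://$dest/system"

    $VIRSH migrate "$domain" "$uri" || return 1
    # A plain "resume" on the destination does nothing (per the report);
    # cycling suspend -> resume unfreezes the guest.
    $VIRSH --connect "$uri" suspend "$domain"
    $VIRSH --connect "$uri" resume "$domain"
}

Usage would be along the lines of: migrate_and_unfreeze fqdn.com hostb.fqdn.com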
[Bug 448674] [NEW] VM is suspended after live migrate in Karmic
Public bug reported:

Ubuntu Karmic 9.10
libvirt-bin 0.7.0-1ubuntu10
qemu-kvm 0.11.0-0ubuntu1
2.6.31-13-server
VM running Ubuntu Jaunty 9.04

On hostA:
virsh migrate fqdn.com qemu+ssh://hostb.fqdn.com/system

Migration completed in about 8 seconds. Virsh tells me the VM is running:

virsh list | grep fqdn.com
Connecting to uri: qemu:///system
  1 fqdn.com             running

The VM seems to be frozen after migration on hostB. After executing the following on hostB the VM is working fine:

virsh suspend fqdn.com
virsh resume fqdn.com

It's expected behavior that the VM is suspended before migration, but it needs to be resumed when the migration is completed.

** Affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New

--
VM is suspended after live migrate in Karmic
https://bugs.launchpad.net/bugs/448674
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in ubuntu.

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
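Until the resume happens automatically, the frozen state can at least be detected rather than applying the workaround blindly. A minimal sketch, assuming `virsh domstate` reports "paused" for a frozen domain; the function name and the VIRSH override are assumptions added for illustration:

#!/bin/sh
# Sketch: after migration, query the domain state on the destination
# and apply the suspend/resume workaround only if the guest is paused.
VIRSH=${VIRSH:-virsh}

ensure_running() {
    domain=$1
    uri=$2
    state=$($VIRSH --connect "$uri" domstate "$domain")
    if [ "$state" = "paused" ]; then
        $VIRSH --connect "$uri" suspend "$domain"
        $VIRSH --connect "$uri" resume "$domain"
    fi
    echo "$domain: $state"
}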
[Bug 179068] Re: Installation hangs on nameserver typo
Tested it with 8.04 and the 8.04 Minimal CD Image (Server), and it seems to be fixed. It now fails within a minute with a more friendly message.

--
Installation hangs on nameserver typo
https://bugs.launchpad.net/bugs/179068
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
[Bug 179068] Installation hangs on nameserver typo
Public bug reported:

Gutsy (7.10)

The installation hangs on a wrong nameserver. Even with a very fast CPU and a 1000 Mbit network connection it takes about 30 minutes to progress only 20%. Correcting /etc/resolv.conf doesn't resolve the issue. There seems to be no check of the configured nameserver.

** Affects: ubuntu
   Importance: Undecided
   Status: New

--
Installation hangs on nameserver typo
https://bugs.launchpad.net/bugs/179068
You received this bug notification because you are a member of Ubuntu Bugs, which is the bug contact for Ubuntu.
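The missing check could look something like the following sketch: probe each nameserver listed in a resolv.conf-style file with a short-timeout query before relying on it, so a typo'd server fails fast instead of stalling the install. The function name and the DIG override are illustrative assumptions; `+time` and `+tries` are standard dig query options:

#!/bin/sh
# Sketch: probe every nameserver in a resolv.conf-style file and report
# unreachable ones instead of hanging on repeated DNS timeouts.
DIG=${DIG:-dig}

check_nameservers() {
    conf=$1
    while read -r key ns _; do
        [ "$key" = "nameserver" ] || continue
        # One quick query with a short timeout and a single retry.
        if $DIG +time=2 +tries=1 @"$ns" archive.ubuntu.com >/dev/null 2>&1; then
            echo "ok $ns"
        else
            echo "unreachable $ns"
        fi
    done < "$conf"
}

For example, "check_nameservers /etc/resolv.conf" would print one "ok" or "unreachable" line per configured nameserver.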