Ubuntu 20.04 here.
I found reports of the file manager data transfer progress information being
unreliable dating back 15 years
(https://alt.os.linux.ubuntu.narkive.com/RxvTnprS/still-no-progress-bar-when-copying-files-to-flash-drive)
and I cannot understand, for the love of god, why this is
I can confirm that with the generic 5.11 kernel shutdown works correctly
- the laptop powers off completely.
Workaround:
- Installation of Generic kernel
sudo apt install linux-generic-hwe-20.04
Note: It may be necessary to also manually install the corresponding
linux-modules and
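A hedged sketch of that step, assuming Ubuntu's usual kernel package naming; the release string below is a placeholder, not something reported in this thread:

```shell
# Placeholder release string - check `uname -r` after booting the new
# kernel and substitute the real value.
KREL="5.11.0-46-generic"
CMD="sudo apt install linux-modules-${KREL} linux-modules-extra-${KREL}"
echo "$CMD"
```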
Same problem here with Thinkpad T14s Gen2 and OEM kernel.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940665
Title:
Lenovo Carbon X1 9th gen no longer powers off
To manage notifications about
@Erik Jackson, it seems you confirmed my suspicions (see comment #26).
--
https://bugs.launchpad.net/bugs/1841826
Title:
Going to sleep instead of logging in while lid closed & external
@Tim Wetzel,
Or maybe it's not just a matter of it being docked, but rather of the
laptop lid being closed while logging in!? I never tried this, but maybe
if an external monitor, mouse and keyboard are connected without the laptop
being docked, and the laptop is booted and
I can confirm that the same behavior occurs with the Thinkpad USB-C Dock
Gen2 (FRU PN: 03X7609 Type: 40AS).
--
https://bugs.launchpad.net/bugs/1841826
Title:
Going to sleep instead of
I guess this won't get fixed before the next LTS. Importance is still
"undecided" and it hasn't been assigned to anyone ... :/
--
https://bugs.launchpad.net/bugs/1841826
Title:
Going to
My work setup doesn't change. Same laptop (Thinkpad P14s), Ultradock,
peripherals, screens etc ... and yet the behavior is very erratic. Just
locking the screen will set the laptop into this useless zombie state.
Most of the time the laptop then has to be removed from the dock,
woken out of
Thinkpad P14s running 20.04 with Ultra Dock and three monitors connected
(DP+DP+HDMI). The system is unusable without using the Nvidia GPU -
unusable because of constant slowdowns, especially with anything
visually "intensive" going on ... like web pages with embedded video etc
...
--
You
I just set up a Thinkpad P14s with Ubuntu 20.04 and have the exact same
problem as with the T440p. Here is the journalctl output
** Attachment added: "P14s_journalctl.txt"
Why is this bug still marked as "Incomplete"?
--
https://bugs.launchpad.net/bugs/1899435
Title:
laptop (docked w/ lid closed) suspends after login
"Does it suspend if you start with the screen open and close the lid
after login?"
-> Laptop off and docked with lid open
-> Power on, boot into Ubuntu and login
-> Close lid on laptop
-> Laptop suspends
-> Open lid on laptop
-> Laptop wakes up and login screen is presented
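A commonly suggested mitigation for this class of problem (a sketch, not something confirmed in this thread): tell systemd-logind to ignore the lid switch while docked or on external power. The keys below are real logind.conf options; check `man logind.conf` for your release:

```shell
# /etc/systemd/logind.conf (edit as root, then restart logind):
#   HandleLidSwitch=suspend
#   HandleLidSwitchExternalPower=ignore
#   HandleLidSwitchDocked=ignore
# Apply with: sudo systemctl restart systemd-logind
```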
--
Attaching journalctl output.
** Attachment added: "journalctl.txt"
https://bugs.launchpad.net/ubuntu/+source/gnome-settings-daemon/+bug/1899435/+attachment/5422020/+files/journalctl.txt
--
Public bug reported:
I installed 20.04 on my Thinkpad T440p laptop (previously 18.04). As
with 18.04 I boot up the laptop with it docked and with the lid closed,
however with 20.04 something new happens right after user login ... the
laptop goes into suspend and has to be woken up by pressing the
Public bug reported:
1) lsb_release -rd
Description:Ubuntu 18.04.3 LTS
Release:18.04
2) apt-cache policy freeradius
freeradius:
Installed: 3.0.16+dfsg-1ubuntu3.1
Candidate: 3.0.16+dfsg-1ubuntu3.1
Version table:
*** 3.0.16+dfsg-1ubuntu3.1 500
500
Public bug reported:
Gthumb version: 3.6.1
Ubuntu version: 18.04
What you expected to happen: Browse/Edit photos without all my 16GB of
RAM being used up and my system freezing up.
What happened instead: Gthumb uses all memory available on the system.
** Affects: gthumb (Ubuntu)
Public bug reported:
Gthumb 3.6.1
running in Ubuntu 18.04
The blurred-edges effect, when applied, is nothing like the preview:
the applied result is almost entirely unnoticeable.
** Affects: gthumb (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug
Public bug reported:
Ubuntu 18.04 x64
Gthumb 3.6.1
Same issue as was reported here: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=893661
Specifically this part: 'if I use the editing mode like "automatic
contrast adjustment" and "rotate". From 500 Mb it grows till all my 8 Gb RAM
are
Yes, with TB 52.9.1 64-bit under Ubuntu 18.04 the issue seems to be
resolved.
--
https://bugs.launchpad.net/bugs/1515288
Title:
Thunderbird attachment from Samba share won't send
Ubuntu 18.04, kernel 4.15.0-22-generic, T440p - same problem with
two-finger scrolling. Will wait until this is fixed in an update ... how
long will that be?
--
Public bug reported:
Binary package hint: os-prober
os-prober mounts the file systems of running VMs, which leads the kernel to
think that the FS needs repair, and then it repairs it.
It's not documented that this happens when using GRUB. After looking in
/var/log/ it seems 30+ file systems in
Dell Studio XPS Laptop 1647 with ATI Mobility Radeon HD 4670
When using a Huawei Vodafone USB dongle the system freezes within 30 seconds.
After disabling almost all applications and Bluetooth/wireless, changing the
resolution from HD to 1280x720, and reducing the backlight to minimum, I'm able to use
I came across a PDF-document from HP. A recent white-paper on LVM
snapshots.
Ah, by the way, an accidental reboot fixed the locked LV.
--
lvremove fails
https://bugs.launchpad.net/bugs/533493
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
Ubuntu 10.04.1 LTS
Kernel 2.6.32-25-server and 2.6.32-24-server x86_64
lvm2 2.02.54-1ubuntu4.1
udev 151-12.2
libvirt-bin 0.7.5-5ubuntu27.7
kvm 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubun
We're running VMs with KVM on several hosts, using one LV per VM.
Every night the LVM-volumes are
Same problem here: Dell Studio XPS Laptop 1647
It's definitely something with power management when using the battery.
When I enable my built-in webcam the system freezes within 10 seconds.
Even when I attach my tiny iPod Shuffle the system freezes.
I managed to keep my system stable by doing 2
I can confirm this.
Most of our hosts have 64GB RAM and 2 Intel Nehalem hexa-core CPUs,
with 3 VMs using 16GB RAM each.
I noticed performance degradation and the host also swapped out a lot of
memory. The swapping degraded performance dramatically.
vm.swappiness=0 in /etc/sysctl.conf did not help.
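For context, a sketch of how that setting is normally applied (shown on a scratch file so it runs unprivileged; the real target is /etc/sysctl.conf as in the comment above):

```shell
# Runtime change (immediate, lost on reboot):  sudo sysctl vm.swappiness=0
# Persistent change: add the line to /etc/sysctl.conf, then `sudo sysctl -p`.
# Demonstrated on a scratch copy:
F=./sysctl.conf.test
touch "$F"
grep -q '^vm.swappiness' "$F" || echo 'vm.swappiness=0' >> "$F"
grep '^vm.swappiness' "$F"   # → vm.swappiness=0
```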
We now use the type= part.
Something different but also related to this new behavior.
Live migration seems to crash VMs that were started before the upgrades. I
can't reproduce it because all hosts are upgraded, and I don't do live
migrations anymore after 3 failures out of 3.
The live
My bad.
libvirt-migrate-qemu-disks is unable to migrate my disks. The dumpxml output is
the same after using libvirt-migrate-qemu-disks.
That's why a live migration fails.
--
Disk image type defaults to raw since 0.7.5-5ubuntu27.5
https://bugs.launchpad.net/bugs/667986
Public bug reported:
Ubuntu 10.04.1 LTS
libvirt-bin 0.7.5-5ubuntu27.6
Since 0.7.5-5ubuntu27.5 (http://www.ubuntuupdates.org/packages/show/253540) the
default disk image type is RAW.
Before this version the disk image type was automatically detected.
This new behavior results in boot failures
It seems I missed that security notice. ;)
Without the security notice it is impossible to understand what changed by
reading only the changelog.
There is no link to the security notice in the changelog, and the available
links in the changelog don't say anything about the changed default
Does it?
The UUIDs are known from the beginning, when the disks are partitioned.
--
blkid not used in fstab
https://bugs.launchpad.net/bugs/556732
--
Snippet from menu.lst:
title Ubuntu 9.10, kernel 2.6.31-20-server
uuid 618c7d1f-ef8c-423b-9f32-e8731d15daf2
kernel /boot/vmlinuz-2.6.31-20-server root=UUID=618c7d1f-ef8c-423b-9f32-e8731d15daf2 ro quiet splash
initrd /boot/initrd.img-2.6.31-20-server
Public bug reported:
vmbuilder uses blkid UUIDs in /boot/grub/menu.lst.
It should use them in /etc/fstab too.
As it is now, /etc/fstab has to be rewritten (automated) with UUIDs to prevent
issues when adding disks.
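A sketch of the UUID-based /etc/fstab entry being asked for. The UUID is reused from the menu.lst snippet in this thread; the mount point, filesystem type, and options are illustrative assumptions:

```shell
# blkid prints a device's UUID, e.g.: sudo blkid /dev/sda1
UUID="618c7d1f-ef8c-423b-9f32-e8731d15daf2"   # from the menu.lst snippet
printf 'UUID=%s / ext3 errors=remount-ro 0 1\n' "$UUID" > ./fstab.test
cat ./fstab.test
```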
** Affects: vm-builder (Ubuntu)
Importance: Undecided
Status: New
--
blkid not
Public bug reported:
Ubuntu 9.10 Karmic
It's impossible to use XFS as the filesystem.
It seems that EXT3 is hardcoded in:
/usr/lib/python2.6/dist-packages/VMBuilder/plugins/cli/__init__.py
It's not possible to use XFS without changing this file, which is bad. (sed
's/ext3/xfs/g'
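A hedged completion of the sed workaround the report trails off with; editing files under dist-packages is fragile (the change is lost on package upgrade), so back up first. The plugin path is taken from the report; the sample line is illustrative:

```shell
PLUGIN=/usr/lib/python2.6/dist-packages/VMBuilder/plugins/cli/__init__.py
# sudo cp "$PLUGIN" "$PLUGIN.bak"        # keep a backup
# sudo sed -i 's/ext3/xfs/g' "$PLUGIN"   # swap the hardcoded default
# The substitution itself, demonstrated on a sample line:
echo "filesystem = 'ext3'" | sed 's/ext3/xfs/g'   # → filesystem = 'xfs'
```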
Mark, I fully agree.
It should be fixed in Karmic.
I'm not going to use Lucid in production for the next 3-4 months.
It has to prove to be stable first.
Is it so hard to fix this bug? Probably it's not high on the list to be
fixed.
--
VM is suspended after live migrate in Karmic
In Karmic there is a workaround.
In Lucid this problem is not reproducible.
I also think it doesn't meet the SRU criteria.
6 months ago, when I reported this bug, I hoped it would be fixed within a
couple of weeks, but now I'd rather wait a month and test all needed features
again and again in Lucid (doing it
Ah, my bad.
It's indeed not working without the suspend-resume workaround.
I used a bash script which contained the suspend-resume workaround; I was not
aware of that.
--
VM is suspended after live migrate in Karmic
https://bugs.launchpad.net/bugs/448674
Migrating from Karmic to Karmic seems to work for some time now.
This bug can be closed.
--
VM is suspended after live migrate in Karmic
https://bugs.launchpad.net/bugs/448674
Migrating between these CPU types:
Testserver01: 2 X Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
Productionserver01: 16 X Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
Works for me with:
Karmic - Lucid
Lucid - Lucid
This was on 2010-01-13.
Now migration from Karmic to Lucid fails. Lots of
Finished some new tests.
Test is pretty much the same as the bug description and comment 2
(https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/448674/comments/2)
only hostB is Lucid.
Brought HostA up-to-date:
Ubuntu Karmic 9.10
libvirt-bin 0.7.0-1ubuntu13.1
qemu-kvm 0.11.0-0ubuntu6.3
I tested migrations on Karmic with guest OSes Ubuntu Hardy, Ubuntu Jaunty,
and Ubuntu Karmic.
Guests hang, and suspend+resume fixes this.
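Sketched as commands, with "myvm" as a hypothetical domain name (not one from this thread):

```shell
# After the guest hangs post-migration, pause and unpause it:
#   virsh suspend myvm
#   virsh resume myvm
# The guest then resumes execution on the destination host.
```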
--
VM is suspended after live migrate in Karmic
https://bugs.launchpad.net/bugs/448674
Seems to be a known issue and patches are available:
https://www.redhat.com/archives/libvir-list/2009-October/msg00019.html
--
VM is suspended after live migrate in Karmic
https://bugs.launchpad.net/bugs/448674
Hosts:
CPU: Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
RAM: 2GB
Disk: Gbit NFS-mount on NetApp FAS3040 (/etc/libvirt/qemu)
10.0.40.100:/vol/hl/disk_images /etc/libvirt/qemu/disks nfs rsize=32768,wsize=32768,hard,intr,tcp,timeo=600,rw 0 0
Installed both
Public bug reported:
Ubuntu Karmic 9.10
libvirt-bin 0.7.0-1ubuntu10
qemu-kvm 0.11.0-0ubuntu1
2.6.31-13-server
VM running Ubuntu Jaunty 9.04
On hostA:
virsh migrate fqdn.com qemu+ssh://hostb.fqdn.com/system
Migration completed in about 8 seconds.
Virsh tells me the VM is running:
virsh list |
Tested it with 8.04 and the 8.04 Minimal CD Image (Server), and it seems to be fixed.
It now fails within a minute with a friendlier message.
--
Installation hangs on nameserver typo
https://bugs.launchpad.net/bugs/179068
Public bug reported:
Gutsy (7.10)
The installation hangs on a wrong nameserver.
Even on a very fast CPU with a 1000Mbit network connection it takes about 30
minutes to get through only 20%.
Correcting /etc/resolv.conf doesn't resolve the issue.
There seems to be no check of the configured nameserver.
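A sketch of the kind of check the installer could perform, using a scratch resolv.conf so it runs self-contained; 192.0.2.1 is a documentation-reserved address, and the actual probe is left commented since it needs network access:

```shell
printf 'nameserver 192.0.2.1\n' > ./resolv.conf.test   # stand-in for /etc/resolv.conf
NS=$(awk '/^nameserver/ {print $2; exit}' ./resolv.conf.test)
echo "configured nameserver: $NS"   # → configured nameserver: 192.0.2.1
# Real reachability probe (commented; needs network):
#   host -W 2 ubuntu.com "$NS"
```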
**