shim boot-loader problem
I have an ACER ASPIRE 514 laptop with an internal hard disk holding both Windows 10 & Ubuntu 20.04 on separate partitions (which I use only occasionally), but I have been running the machine primarily from a USB stick with Debian 11.6:

Linux cpe-67-241-65-193 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux

The problem:
> I can boot Debian with no problems;
> I can boot Windows with no problems;
> Through about May of 2022 I was also able to boot Ubuntu with no problems... but some time in the last half of 2022 I updated Debian, & now, although the Ubuntu option still appears in the GRUB boot-loader menu, when I select it I get the error message 'bad shim signature' & I cannot boot Ubuntu any more.
> To boot Ubuntu, I have to disable Secure Boot in the BIOS/UEFI setup (F2 on my computer). With earlier kernels I think one had to disable Secure Boot to boot Debian, but from kernel 5.10 one could boot with Secure Boot enabled, as my experience through the middle of 2022 showed.
> The APPARENT reason is that on the Debian boot volume, the /boot/efi/EFI/debian/ directory contains fbx64.efi, grubx64.efi, mmx64.efi, shimx64.efi, BOOTX64.CSV & grub.cfg. I think the relevant file is shimx64.efi. On the Ubuntu volume, the /boot/efi/ directory is completely empty, & I've not been able to find any files with names containing 'shim'.

MY QUESTION: can I simply copy the /EFI/debian/... directory & files to the Ubuntu volume to enable the machine to boot when Secure Boot is enabled? My worry is that the Ubuntu OS uses different kernel versions; the two most recent kernels on each volume are:

DEBIAN 11.6     | UBUNTU
5.10.0-20-amd64 | 5.15.0-67-generic
5.10.0-21-amd64 | 5.19.0-35-generic

so the shimx64.efi may work for the Debian OS but not for Ubuntu, though this shim 'boot loader' is, I think, used before the kernel is loaded.
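[Editor's note: the usual remedy on the Ubuntu side is not to copy Debian's signed files, but to let Ubuntu reinstall its own signed boot chain into its own ESP vendor directory. A sketch, assuming the Ubuntu install is still intact and can be booted with Secure Boot temporarily disabled; the package names are Ubuntu's standard signed-boot packages:]

```shell
# From the booted Ubuntu system (Secure Boot temporarily disabled):
# reinstall Ubuntu's signed shim and signed GRUB, which repopulates
# the EFI/ubuntu/ directory on the EFI System Partition.
sudo apt-get install --reinstall shim-signed grub-efi-amd64-signed
sudo update-grub          # regenerate Ubuntu's own grub.cfg

# Check the current Secure Boot state before and after:
mokutil --sb-state
```

After re-enabling Secure Boot, Ubuntu's own shim should verify Ubuntu's Canonical-signed kernels; Debian's shim does not trust Canonical's signing key, which is presumably the source of the 'bad shim signature' error.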
I would be most appreciative of any advice, or suggestions for a better place to submit this question, if this forum's not appropriate. With many thanks, Ken (I have not subscribed to the list, but will try to check it; I would be very grateful if replies could be cc to my e-mail address: kcbl2...@yahoo.co.uk.)
Culling old versions of Kernel from /usr/lib/modules/
This message is related to the 'Re: solution to / full' thread. I am running my computer from a Debian 11.6 OS on a 25GB partition on a USB stick. The root partition is now 70% full, with over 4GB (16%) of the volume occupied by /usr/lib/modules/ (3.5GB) & /usr/lib/x86_64-linux-gnu (1.4GB) - as far as I can tell, the latter directory only has essential, current files.

I have been using this volume for over a year, & the modules directory now holds over a dozen kernels from previous versions of the operating system. I can see the need to retain the last couple of versions, but within my space constraints I really cannot afford to keep all of these old kernels, each consuming 307-323MB.

MY QUESTIONS:
- Is there some utility that pares these files, or must one do this manually?
- I have been reluctant to do this manually because I'm not too familiar with the structure of the operating system & do not want to delete a file which may be required by some other part of the system that I've not also removed. Specifically, the /boot/ directory also contains files related to these older kernel versions:
    config-5.10.0-*-amd64     (236 kB each)
    initrd.img-5.10.0-*-amd64 (72.7 MB each)
    System.map-5.10.0-*-amd64 (83 bytes each)
    vmlinuz-5.10.0-*-amd64    (6.8 MB each)
  corresponding to each of the dozen or so old kernels in /usr/lib/modules/. I would think the older versions of these files should also be removed when the kernels are, no? And are there any other files related to these kernels that should also be deleted?

I think there is some file that contains the list of possible boot entries displayed by GRUB: I believe I found it somewhere in the distant past, but can't recall whether it contains a list of these older kernels, or whether that list is generated dynamically from what is found on the boot volume. I would appreciate any references that might give more information about this, or any advice.
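[Editor's note: on Debian each kernel is a linux-image-* package, and purging the package removes both /usr/lib/modules/VERSION and the matching config/initrd/System.map/vmlinuz files in /boot; update-grub then regenerates /boot/grub/grub.cfg (the GRUB menu list) from whatever kernels remain. A sketch; the specific version numbers are illustrative:]

```shell
# Show the running kernel (never remove this one) and all installed
# kernel packages:
uname -r
dpkg --list 'linux-image-*' | awk '/^ii/ {print $2}'

# The standard utility: apt removes old kernel packages automatically,
# keeping the running and newest kernels.
sudo apt autoremove --purge

# Or purge one specific old version by hand, e.g.:
# sudo apt purge linux-image-5.10.0-14-amd64

# A small helper to pick removal candidates by hand: version-sort the
# installed versions and drop the N newest, leaving the old ones.
old_kernels() {            # usage: ... | old_kernels N
  sort -V | head -n -"$1"
}
printf '5.10.0-19-amd64\n5.10.0-21-amd64\n5.10.0-20-amd64\n' | old_kernels 2
# → 5.10.0-19-amd64
```

Purging via the package manager (rather than deleting files in /usr/lib/modules/ directly) is what keeps /boot, the module tree, and the GRUB menu consistent.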
With thanks in advance, Ken (I have not subscribed to the list, but will try to check it; I would be very grateful if replies could be cc to my e-mail address: kcbl2...@yahoo.co.uk.)
Re: Installation fails to recognize SSD
Message-id: <alpine.DEB.2.21.2205121632130.1910@Asus1>
In-reply-to: <CAP1wdQsgJZY9x=8+olbyxjgguxjfdastne0yyyg1mh3jic8...@mail.gmail.com>
References: <CAP1wdQs=a4HhZV4k8PG=_rjoxlht4pnm9h69ungatahxusy...@mail.gmail.com>
 <d93c2fb1-7ec4-2935-7855-0b6f68bf0...@holgerdanske.com>
 <CAP1wdQsgJZY9x=8+olbyxjgguxjfdastne0yyyg1mh3jic8...@mail.gmail.com>

I really want to thank you all for this advice: it solved a problem with which I've been struggling for MONTHS!!

My Computer: ACER ASPIRE 514-54
BIOS/UEFI SETUP: INSYDE vers. 1.17
Internal Storage:
- HDD0: 256GB Western Digital WDC PC SN530 SDBPNPZ-256G-1114 NVMe solid-state drive (SSD)
- HDD1: 1000GB Western Digital Blue WDC WD10SPZX-00Z10T0 hard disk (HDD)
Operating System, installed on USB stick:
- Linux cpe-67-241-65-193 5.10.0-14-amd64 #1 SMP Debian 5.10.113-1 (2022-04-29) x86_64 GNU/Linux (Debian 11, installed from the CD image with non-free firmware)

MY PROBLEM was somewhat different to Mr Bruno Schneider's: whilst the internal NVMe drive on my machine WAS visible to the Debian installer & to a Debian OS on external drives connected to USB ports, the INTERNAL HARD DISK was not visible to either, & booting the machine with any Debian OS took a long time (the delay occurred before the fsck step of the boot process).
The reason the boot process took so long is that the machine could not access the PCI bridge (1:e0:1c.4) &, hence, could not access the SATA controller (1:e0:17.0):

Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.138746] pcieport 1:e0:1c.4: can't derive routing for PCI INT A
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.138747] nvme 1:e1:00.0: PCI INT A: no GSI
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141351] ahci 1:e0:17.0: version 3.0
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141360] ahci 1:e0:17.0: can't derive routing for PCI INT A
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141360] ahci 1:e0:17.0: PCI INT A: no GSI
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141446] ahci 1:e0:17.0: AHCI 0001.0301 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141448] ahci 1:e0:17.0: flags: 64bit ncq sntf pm clo only pio slum part deso sadm sds
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141618] scsi host0: ahci
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 1.141701] scsi host1: ahci

resulting in repeated unsuccessful attempts to access the drive:

Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 6.656323] ata1.00: qc timeout (cmd 0xec)
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 6.657434] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 6.972824] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 17.152287] ata1.00: qc timeout (cmd 0xec)
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 17.153480] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 17.153495] ata1: limiting SATA link speed to 3.0 Gbps
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 17.468760] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
Mar 5 12:53:33 cpe-67-241-65-193 kernel: [ 48.128329] ata1.00: qc timeout (cmd 0xec)

After reading your advice, I went to the MAIN menu of the InsydeH2O setup utility & typed Control-S as Mr Felmon Davis
suggested. This caused the option 'VMD Controller [Enabled]' (Intel Volume Management Device) to appear. This is a different option to the one described by Mr David Christiansen, but when I disabled it, the internal hard disk was visible to the Debian OS & there were no delays during the boot process!

I had noticed that the Ubuntu OS WAS able to access the internal hard disk, because it used some method to access the PCI bridge 'behind VMD':

UBUNTU: 1:e0:1c.4: enable ASPM for pci bridge behind vmd

but I did not know how to access the VMD setting until I read your posts. I am VERY GRATEFUL FOR YOUR HELP!! With many thanks, Ken
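[Editor's note: for anyone hitting the same symptom, a quick check for whether a storage controller is sitting behind Intel VMD is to look at the PCI domain prefix in the device list. A sketch; the exact device strings vary by machine, but a controller behind VMD appears under a separate non-zero PCI domain (here 1:e0:...) rather than the usual 0000: domain:]

```shell
# -D prints the PCI domain in front of each address; filter for the
# storage-related devices and the VMD controller itself.
lspci -D | grep -Ei 'vmd|sata|ahci|nvme'
```

If the SATA/AHCI controller only shows up under such a separate domain and the kernel logs 'can't derive routing for PCI INT A', the BIOS/UEFI VMD setting is a likely suspect.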
Re: Re: Cannot login to my user?
I experienced this same problem with Debian 11 with GNOME 3.38.5: I had tried to change Settings > Accessibility > Zoom Options > Magnifier Position from 'Magnifier cursor moves with contents'. The moment I selected the option 'Keep Magnifier Cursor centred', I was logged out & the login screen appeared... & whenever I entered the password for that account, the screen went black & the login screen re-appeared. The accounts of other users, also using GNOME, were completely unaffected.

I resolved the problem by logging in as another user, removing the file /home/(username of corrupted account)/.config/dconf/user, & logging out. On entering the password on the login screen for (username of corrupted account), that file was re-created & it was again possible to log in to that account. I'm not sure what caused this, or whether it is relevant to your problem, though. Regards, Ken
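[Editor's note: the recovery described above, as commands run from another working account. Moving the dconf database aside, rather than deleting it, is a slightly safer variant because it keeps a backup of the other settings stored there; USER is a placeholder for the affected account name:]

```shell
# Run from a different, working account. GNOME/dconf recreates the file
# with default settings on the affected user's next login.
sudo mv /home/USER/.config/dconf/user /home/USER/.config/dconf/user.broken
```

All of the affected user's dconf settings (not just the magnifier option) revert to defaults; the .broken file can be restored if the fix turns out not to help.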
Re: Re: systemd user@###.service failure causing 90 sec delays during boot, login
The problem occurred after I installed the ufw firewall package. I finally figured out (as Mr Richard Hector wrote to me) that the problem was caused by ufw blocking network connections on the loopback interface. Removing the ufw package resolved the problem.
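[Editor's note: a sketch of an alternative for anyone who wants to keep ufw rather than remove it. Stock ufw normally allows loopback traffic out of the box via its built-in before.rules, so this only applies if those rules were lost or overridden; check the active rules first:]

```shell
sudo ufw status verbose        # inspect the rules currently in force
# Explicitly allow all traffic on the loopback interface:
sudo ufw allow in on lo
sudo ufw allow out on lo
```

With loopback traffic permitted, the local connections that the user@.service startup depends on should no longer time out.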
systemd user@###.service failure causing 90 sec delays during boot, login
I installed Debian 11 (Bullseye) with GNOME 3.38.5 (Wayland), kernel Linux version 5.10.0-11-amd64 (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld 2.35.2) #1 SMP Debian 5.10.92-1 (2022-01-18), on a USB stick, and am using it with an ACER Aspire 514 laptop. This operating system has worked excellently for months, but for the last 2 days it has suddenly been taking a very long time to boot. The cause of the delay can be seen from the syslog:

Feb 28 10:09:30 cpe-67-241-65-193 systemd[1]: Started GNOME Display Manager.

(The above is the last line of the verbose boot log printed on screen during the boot process; I have omitted the next lines, from the network manager & the kernel, about setting up the network, loading audio firmware, etc.)

Feb 28 10:09:31 cpe-67-241-65-193 systemd[1]: Created slice User Slice of UID 119.
Feb 28 10:09:31 cpe-67-241-65-193 systemd[1]: Starting User Runtime Directory /run/user/119...
Feb 28 10:09:31 cpe-67-241-65-193 systemd[1]: Finished User Runtime Directory /run/user/119.
Feb 28 10:09:31 cpe-67-241-65-193 systemd[1]: Starting User Manager for UID 119...
...
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Main process exited, code=exited, status=1/FAILURE
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Killing process 1144 (gpgconf) with signal SIGKILL.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Killing process 1145 (awk) with signal SIGKILL.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Killing process 1174 (dirmngr) with signal SIGKILL.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Killing process 1144 (gpgconf) with signal SIGKILL.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Killing process 1145 (awk) with signal SIGKILL.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Killing process 1174 (dirmngr) with signal SIGKILL.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Failed with result 'exit-code'.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: user@119.service: Unit process 1174 (dirmngr) remains running after unit stopped.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: Failed to start User Manager for UID 119.
Feb 28 10:11:01 cpe-67-241-65-193 systemd[1]: Started Session c1 of user Debian-gdm.

The login screen appeared at 10:11:09:

Feb 28 10:11:09 cpe-67-241-65-193 systemd[1]: Startup finished in 51.017s (kernel) + 1min 48.624s (userspace) = 2min 39.642s.

The same 90 sec delay then occurs again after any user enters his password (at 10:11:46):

Feb 28 10:11:46 cpe-67-241-65-193 systemd[1]: Created slice User Slice of UID 1003.
Feb 28 10:11:46 cpe-67-241-65-193 systemd[1]: Starting User Runtime Directory /run/user/1003...
Feb 28 10:11:46 cpe-67-241-65-193 systemd[1]: Finished User Runtime Directory /run/user/1003.
Feb 28 10:11:46 cpe-67-241-65-193 systemd[1]: Starting User Manager for UID 1003...
Feb 28 10:13:16 cpe-67-241-65-193 systemd[1]: user@1003.service: Main process exited, code=exited, status=1/FAILURE
(as above)
Feb 28 10:13:16 cpe-67-241-65-193 systemd[1]: Failed to start User Manager for UID 1003.
Feb 28 10:13:16 cpe-67-241-65-193 systemd[1]: Started Session 2 of user kcl.

The first 90 sec delay occurs only on initial startup, & the second only when a user logs in (the problem is not particular to UID 1003, but occurs for all UIDs 100[0-5]). Once the machine has booted & the user is logged in, it functions normally with no observable problems.

I do not know what caused this, but it occurred right after I:
> allowed the installation of the latest software update (some lib files, the names of which I unfortunately did not record);
> installed the ufw firewall package.

I would be very grateful for any information about how to resolve this (apart from re-installing the system), or even any reference that might give information about how I might resolve this problem.
(I have looked at the systemd.service man page, but although I have experience with the unix command line, I have only been using Debian since Nov. 2021 & am not familiar with its system administration, & I could find no information there about this problem.)
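[Editor's note: for boot and login delays like this, the standard systemd diagnostics usually localize which unit is slow or failing. A sketch, using the UID from the log above; no root is needed for the analyze commands:]

```shell
systemd-analyze blame | head -n 10    # slowest units of the last boot
systemd-analyze critical-chain       # what the boot actually waited on
journalctl -b -u user@1003.service   # full log of the failing unit, this boot
systemctl status user@1003.service   # current state and most recent log lines
```

The journalctl output for the failing user@ unit is what would show gpgconf/dirmngr stalling on a blocked loopback connection, pointing at the firewall.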