Re: /boot too small
On Tue, May 14, 2024, at 7:27 PM, richard emberson wrote:
> Poking about I see that the default workstation disk layout:
> https://docs.fedoraproject.org/en-US/workstation-docs/disk-config/
> has /boot on an ext4 partition and everything else on btrfs.
> Also, the replacement for Anaconda will not happen until Fedora 41:

Now Fedora 42.
https://lists.fedoraproject.org/archives/list/de...@lists.fedoraproject.org/message/EIKMTS3SYSKH7R2VIJ37NMSYDCIUK466/

> So, are you saying that with Fedora 41, it will be the default for the
> /boot partition to use btrfs?

There are some unresolved issues to settle before we do this. Top of the list, I think, is fscrypt support for Btrfs (it's in final code review prior to being merged, so hopefully this year). That way we can have an unencrypted /boot on a Btrfs subvolume for GRUB2, and not have to do the dirty work of configuring GRUB for LUKS, but still get per-directory encryption. For example, we could eventually have / encrypted with a key sealed in the TPM, so it's automatically unlocked during startup but protected from tampering at rest (powered off). And then use a different key, based on the user's login passphrase or a hardware key like a FIDO device (e.g. systemd-homed), per user home. A lot of this is being worked on already, but I can't tell you which future Fedora version it'll land in until it's further along.

The GRUB hidden boot menu feature depends on a really curious file called grubenv in order to know whether the previous boot failed, and if it failed, to show the otherwise hidden boot menu (the kernel list). The grubenv file was conceived in ancient times, when it was acceptable for GRUB to find the file (via its read-only file system drivers) and overwrite the contents of its blocks in place, typically modifying just one byte, in order to reset the boot counter. Then later, if the boot succeeds, grubenv is modified in (Linux) user space.
But when this file is on Btrfs, modifying the contents of a file outside the kernel code (by GRUB just changing bytes on disk in a single sector) is indistinguishable from corruption as far as Btrfs is concerned. GRUB does not have a Btrfs write driver (its driver is read-only), so it doesn't recompute checksums or rewrite the leaf and node blocks needed to correctly record the change in the file system. The file therefore fails checksum verification by Btrfs and can't be read, and thus it would need to be replaced in user space every time... though we don't really need the data in it, probably? I have to think about it some more. Or alternatively we rethink the hidden GRUB menu feature. There have been a bunch of ideas about where the grubenv data should go instead, because really none of the file system developers like the current approach of non-upstream (non-kernel) code writing inside the file system area. It's frowned on these days.

Another consideration is that /boot on Btrfs makes it harder to support other bootloaders like sd-boot, which don't contain file system drivers the way GRUB's read-only drivers do. There are a couple of different projects to create an EFI file system driver for Btrfs so that the UEFI pre-boot environment can read Btrfs natively; any bootloader would then be able to read it. The drawback is that such a driver needs to be UEFI Secure Boot signed, and we need some plan and policy for how that would work: per-distribution Btrfs drivers? Or is there a way to make it generically supportable across distributions with a single signature?

That's off the top of my head; there might be some other things.
-- Chris Murphy -- ___ users mailing list -- users@lists.fedoraproject.org To unsubscribe send an email to users-le...@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
Re: /boot too small
On Tue, May 14, 2024, at 2:12 PM, Tim via users wrote:
> On Tue, 2024-05-14 at 08:27 -0700, richard emberson wrote:
>> Back on 05/03/2024 I posted the question:
>> "How to increase size of /boot partition"
>> I had the same problem.
>>
>> As was noted by some, I had not upgraded for a long, long time:
>> "This type of layout and partition sizes is ancient. /tmp isn't even a
>> partition now."
>
> Does /boot still need to be its own partition, these days?

Indirectly, yes, because the installer doesn't configure GRUB2 to unlock a LUKS encrypted partition (so that GRUB can find the kernel and initramfs, load them, and start the kernel). Therefore, on Fedora, /boot is not encrypted, and the LUKS unlock for root is done in the initramfs. Otherwise it's not necessary; GRUB2 has had Btrfs support since forever.

-- Chris Murphy
Re: fstab
On Sat, Jun 1, 2024, at 11:06 AM, Patrick Dupre via users wrote:
> Hello,
>
> With ext4 filesystems, I used to set
>
> ext4 noauto,errors=remount-ro 1 2
>
> in the fstab.
>
> With btrfs FS
> it does not like the option errors=remount-ro

That is an ext4-specific mount option. Btrfs already does this by default.

-- Chris Murphy
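For contrast, here is what a minimal Btrfs fstab entry might look like, with no error-handling option needed at all (the UUID and subvolume name below are placeholders, not taken from the poster's system):

```
UUID=1234abcd-0000-0000-0000-000000000000  /  btrfs  subvol=root,compress=zstd:1  0 0
```

On metadata errors Btrfs aborts the transaction and forces the file system read-only on its own, which is why errors=remount-ro has no Btrfs equivalent.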
Re: Installed Rawhide with btrfs; dnf system update to v40 has errors ; re-install from Live USB without affecting a non-root subvolume?
On Mon, Jun 3, 2024, at 3:42 PM, Philip Rhoades via users wrote:
> So, if I try to reinstall from the current f40 live usb - can I do that
> without touching the backup subvolume? ie:
>
> /dev/nvme0n1p3 btrfs 1,951,850,496 464,659,184 1,486,438,224 24% /
> /dev/nvme0n1p3 btrfs 1,951,850,496 464,659,184 1,486,438,224 24% /backup

Yes. You need to do a custom installation. This is not official documentation; it's intended for Fedora QA purposes, and it has macros that show Rawhide versions because, well, it's for testing :D So just stick with the version you have and adapt it for your use case.

https://fedoraproject.org/wiki/QA:Testcase_partitioning_custom_btrfs_preserve_home

It sounds like you would just create a new root mountpoint, and thus a new root subvolume, and not create a /home mountpoint. You can optionally click on the existing backup subvolume and assign it the mountpoint /backup to have the installer do this for you and add it to fstab. Or you can do it yourself post-install.

Note that unless you explicitly delete the current (broken?) root in the installer, it will live on. And you also won't be able to navigate to it from /. You can either mount the top level of the file system and delete it there, or delete it directly by subvolume ID without mounting the top level; see man btrfs subvolume.

If you have questions, find me (cmurf) on Matrix: https://matrix.to/#/#fedora:fedoraproject.org

-- Chris Murphy
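A rough sketch of the "mount the top level and delete it" option. The device node matches the poster's df output, but the old root's subvolume name is an assumption; check the output of subvolume list before deleting anything:

```shell
# Mount the top level of the Btrfs volume (subvolid=5 is always the top level).
sudo mount -o subvolid=5 /dev/nvme0n1p3 /mnt

# List subvolumes and note the name/ID of the abandoned root.
sudo btrfs subvolume list /mnt

# Delete the old root by path (the name 'root' here is a guess; use the
# name you actually see in the listing above).
sudo btrfs subvolume delete /mnt/root

sudo umount /mnt
```

Newer btrfs-progs can also delete by ID without this top-level mount; see the --subvolid option described in man btrfs-subvolume.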
Re: is cockpit able to handle btrfs mirror's ?
On Fri, Mar 10, 2023, at 1:57 PM, old sixpack13 wrote:
> more to read:
> https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org/thread/MNAGV2XFQOLXQAXGP2CBHOQRGVYDXD2O/

This is expected behavior right now from a development perspective (certainly not expected by a reasonable user). There's a similar effect with multiple-device Btrfs in KDE and GNOME, so it's not a Cockpit issue.

https://github.com/storaged-project/udisks/issues/802

-- Chris Murphy
Re: Adding file swap on btrfs
FWIW, there are some fixes and enhancements coming in btrfs-progs 6.1, now in koji for Rawhide, including making it easier to get the hibernation offset for a swapfile. I haven't messed with the new subcommand, but I personally prefer putting swapfiles in their own subvolume so that I can still snapshot the root subvolume. Snapshotting a subvolume containing a swapfile will render the swapfile invalid for use (or maybe the snapshot fails; I'm not sure, I haven't tried recently).

The way I do this is to mount the top level of the file system (just do a normal mount without any options), and inside you'll see what appear to be two directories: root and home. Those are the subvolumes the installer creates by default. Create a new subvolume in here and add it to fstab such that:

UUID=$fsuuid /var/swap btrfs noatime,subvol=swap 0 0

I use chattr +C on this swap subvolume, so that any new files created inside will inherit the attribute. This is something the new subcommand will do for you. Then an additional entry in fstab:

/var/swap/swapfile1 none swap defaults 0 0

You could certainly make a nested /var/swap subvolume instead, avoiding the need for the first fstab entry. But note that snapshots still don't include nested subvolumes, so if you do a rollback, it won't have the nested swap subvolume or file, and your boot probably hangs because fstab is looking for a swapfile to activate and never finds it. So I just do it the way I describe; that way I can more or less forget about it. An alternative, if you really prefer nesting, is s/defaults/nofail/ for the swapfile entry, and now a missing swap won't cause boot to fail, *but* you also may one day forget all this and come to realize that there's no swap activated because you once did a rollback way back when...
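The one-time setup described above could look roughly like this. This is a sketch, run as root; the device node and the 4 GiB size are examples, and it uses the classic manual method rather than the new btrfs-progs 6.1 subcommand:

```shell
# Mount the top level of the Btrfs volume and create the swap subvolume.
mount /dev/nvme0n1p3 /mnt
btrfs subvolume create /mnt/swap
chattr +C /mnt/swap        # new files created inside inherit NOCOW
umount /mnt

# Mount the new subvolume where fstab will expect it.
mkdir -p /var/swap
mount -o noatime,subvol=swap /dev/nvme0n1p3 /var/swap

# Create the swapfile; it inherits NOCOW from the subvolume directory.
dd if=/dev/zero of=/var/swap/swapfile1 bs=1M count=4096
chmod 600 /var/swap/swapfile1
mkswap /var/swap/swapfile1
swapon /var/swap/swapfile1
```

After that, the two fstab entries above make it persistent across reboots.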
:D

-- Chris Murphy
Re: Can't create msdos partition table without advanced partitioning?
On Sat, Nov 19, 2022, at 9:18 PM, Tom Horsley wrote:
> Installing fedora 37 from workstation live iso to a virtual machine.
>
> I couldn't find any way to partition a blank disk with a msdos
> partition table without using the advanced manual partitioning.
> Did I miss something, or is that the way it works now?

Yeah, it's in the change set but is somehow missing from the release notes. I've let the docs folks know.

https://fedoraproject.org/wiki/Changes/GPTforBIOSbyDefault

You can force MBR by using a boot parameter at boot time: inst.mbr

I only advise doing this if there's a problem (firmware confusion) with GPT.

-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Sat, Oct 29, 2022, at 1:18 AM, Tim via users wrote:
> Here's some more advice you probably won't like: Multi-booting (any
> computer, any OS) can be a pain, and it may be best to only attempt
> that after you've learnt how a system works. Your safest approach to
> learning a new system is to get a second hard drive, unplug your first
> one, install onto a fresh drive in isolation, and learn how the system
> works.

Multiboot is probably fragile, or at least inclined toward chaos, in that we cannot account for everything. Fedora only tests, and blocks releases, against configurations where Windows or macOS exists first and Fedora is installed second. As there's no manifestation of support beyond community support, is anything supported? We more or less say that the things we're willing to block a release on are supported, as in they have to work at the time a version is released. But there are lots of buts. And one of those buts is: if you're setting up a system in a way we aren't testing, we certainly aren't going to block a release on edge cases that affect few users. Triboot is definitely one of those.

Case in point: let's say you have a completely clean system. Install Fedora copy A, and then you want to dual boot two Fedoras. Say, Fedora Workstation and KDE. Or Fedora 35 and 36. Dual boot Fedoras. Supported? Nope. We have no release criterion for that. Only bugs that happen independent of that configuration would be blockers, not bugs that only manifest as part of a dual boot Fedora installation.

-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Sat, Oct 29, 2022, at 12:54 AM, Slade Watkins via users wrote:
> On 10/28/22 4:27 PM, Jake D wrote:
>> I really can't believe that these Linux systems are so fragile and the ONLY
>> option is to start over
>
> Wanted to hop in here real fast and say:
> Pop!_OS, which is my primary distro (with Fedora being my secondary),
> has the option to go into recovery (has a small partition just for
> up-to-date recovery media) and reinstall your OS without losing any
> personal files.
>
> AFAIK, it's one of the only distributions that has something like it.

Fedora doesn't have a recovery partition. But there is a (sort of hidden, or at least non-obvious) way of doing a custom installation while preserving home. There isn't detailed documentation for it; it's just a test case.

https://fedoraproject.org/wiki/QA:Testcase_partitioning_custom_btrfs_preserve_home

This is more official than the draft dual boot one I posted previously, in that I try to keep this one up to date. But it's just a test case, intended for a test setup to make sure installer functionality isn't lost. It's not really intended as installation advice, though it could be adapted into a Quick Doc for that purpose.

My understanding of the original poster's issue, though, is that he now has a bunch of system-level customizations, not so much user-level customizations. Therefore the reuse-home-directory method wouldn't help much.

-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Fri, Oct 28, 2022, at 7:09 PM, Jake D wrote:
> No, it's not a troll.
>
> Thank-you for your otherwise completely irrelevant, unsolicited and
> entirely unhelpful opinion piece. I'm sorry for not realising Windows
> upsets you so much and is therefore inferior, and for daring to ask if
> Linux has similar recovery functionality, after being told the only
> answer is to completely start from scratch. Clearly, I should have
> realised immediately that this means it's the more resilient system.
>
> I will immediately stop using my working windows partition and stare at
> my non booting "grub>" prompt instead. Also can you advise which bridge
> you wish for me to jump off as punishment for my question?

Multiboot systems are inherently complex. What you've had is really a train derailment, and you're asking for help with it as if there's a button you can push to fix it. There's no button for train derailments. It's a customized recovery every time, requiring esoteric knowledge, and everything about it is manual. Are trains fragile? I'm not sure that's the best description, but yeah, once they start going off the rails, the failure is catastrophic.

-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Fri, Oct 28, 2022, at 5:53 PM, Jonathan Billings wrote:
> It sounds like you only wiped the /boot partition, which should be
> fairly easy to get back.

But /boot/grub2 and /boot/loader/entries are on /boot, so the real grub.cfg and the BLS snippets are gone. They're trivial to create if you're an expert; if you're not, it might as well be Ouija board antics. There are so many tiny things like this that the installer does that it sort of makes the manual fix a grand experiment in "what the hell did we miss". We can just:

(a) reformat the boot partition, and get its new fsUUID put into
(b) /boot/efi/EFI/fedora/grub.cfg and
(c) /etc/fstab on the existing root
(d) mount everything in the proper order so we can do a chroot
(e) run grub2-mkconfig
(f) do a dnf reinstall kernel so that we for sure have a vmlinuz on /boot, modules of the same version in /usr, and a BLS snippet

But uhh, what am I missing? I'd just reboot at this point and wait for the failure to give me a hint about what I'm missing. But in this case, the user gets stuck in a way they maybe can't get out of or describe, and then we're worse off, having wasted the time.

> Reinstalling the kernel and grub2 packages
> will get you the packaged bits, and running dracut like you ran should
> get you the initrd, although only after you've got the kernel.

Reinstalling the kernel will run dracut; we don't need to run it again.

> After everything is reinstalled, then run the grub2-mkconfig command to
> create the grub config file in your new /boot partition.
>
> Just make sure you do everything from the chroot in an EFI booted
> rescue environment, either on a livecd or booting the NetBoot iso with
> the rescue option.

Unfortunately, Live images don't have the Anaconda rescue environment. And while the netinstaller does, it will fail to find /boot by UUID in /etc/fstab, because that UUID doesn't exist anymore, so the assembly and chroot will fail.
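For the record, steps (d) through (f) above look roughly like this from a rescue environment. The device nodes are examples and must match the actual layout; this is a sketch of the order, not a tested recipe:

```shell
# (d) Assemble the installed system under /mnt, innermost mounts last.
mount -o subvol=root /dev/nvme0n1p3 /mnt    # the Btrfs root subvolume
mount /dev/nvme0n1p2 /mnt/boot              # the freshly formatted /boot
mount /dev/nvme0n1p1 /mnt/boot/efi          # the EFI system partition

# Bind the pseudo-filesystems so tools inside the chroot work.
for d in dev proc sys run; do mount --bind /$d /mnt/$d; done

chroot /mnt

# (e) Regenerate the static grub.cfg on the new /boot.
grub2-mkconfig -o /boot/grub2/grub.cfg

# (f) Reinstall the kernel: restores vmlinuz, runs dracut, writes the BLS snippet.
dnf reinstall kernel
```

Steps (a) through (c), updating the new /boot fsUUID in grub.cfg on the ESP and in /etc/fstab, have to happen before the reboot as well.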
The manual method means a high likelihood of missing a step or getting one in the wrong order, or asking someone to test all the steps in a VM. Hence reinstallation is easier. Since it's Btrfs, we can keep the old root and swap roots. We need to fix two things:

(a) change the rootflags=subvol argument in the BLS snippets, so they mount the root subvolume instead of the root00 subvolume
(b) update the /boot fsUUID in /etc/fstab

Right? Is that it? I think that's it.

OH, OK, there is one more problem. The kernel in the new /boot is probably old, and that version's modules don't exist anymore in the (old) root subvolume's /usr, therefore we can't boot. So that has to get fixed somehow... If the Btrfs volume is mounted normally (without any options) at /mnt, we can copy the old kernel modules from the new root to the old root, and then the old root will boot:

cp -r /mnt/root00/usr/lib/modules/$theonlydirpresent /mnt/root/usr/lib/modules/

Right? That will return fairly instantly because it'll be a reflink copy, so it might lead some folks to think it didn't work because it was too fast :P

What should be true is that in the first path, with root00, if you hit TAB after the last /, it should autocomplete the only directory present, which is the shipping kernel for Fedora 36. Or hey, I have that kernel in a Btrfs snapshot I created after an F36 clean install. It should be 5.17.5-300.fc36.x86_64, so the actual command ought to be:

cp -r /mnt/root00/usr/lib/modules/5.17.5-300.fc36.x86_64 /mnt/root/usr/lib/modules/

OK, I think that's everything?
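Step (a), the rootflags edit, is a one-line change per BLS snippet in /boot/loader/entries. A sketch of what it does, on a made-up options line (the UUID here is a placeholder, not from any real system):

```shell
# A hypothetical BLS 'options' line pointing at the new root00 subvolume.
line='options root=UUID=1234-abcd ro rootflags=subvol=root00'

# Point it back at the old root subvolume, as described in step (a).
echo "$line" | sed 's/subvol=root00/subvol=root/'
# prints: options root=UUID=1234-abcd ro rootflags=subvol=root
```

In practice you'd run the equivalent sed -i against each snippet file in /boot/loader/entries after backing it up.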
-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Fri, Oct 28, 2022, at 4:27 PM, Jake D wrote:
>> My opinion: This is probably easier in a live discussion on IRC or Matrix.
>> There's just too much back and forth required.
>
> Are these different forums? I just googled 'Fedora matrix' and I'm
> getting a lot of very varied hits

https://chat.fedoraproject.org

IRC and Matrix are chat protocols. There's a bridge in place so that, in effect, you're in the same channel whether you join #fedora on libera.chat (IRC) or #fedora on fedora.im (Matrix). You can get a native Matrix account via element.io, or you can get a Fedora account and use that to connect to Fedora's Matrix home server. Which one you pick depends on how big your Matrix presence is, I guess. You're finding multiple options because of multiple eras: IRC, then matrix.org, then Fedora got its own Matrix home servers. But you can access them all with any method due to the behind-the-scenes bridging. I personally use cmurf:fedora.im (Matrix, using the Fedora Matrix home servers).

>> But the absolute easiest thing to do is mount the encrypted btrfs, make a
>> backup of the home directory, and then clean install followed by restoring
>> the home directory files from the backup.
>
> I understand what you're saying but as I mentioned in a previous
> comment, I have 4 weeks of setup already in place on this system.
> Theres not really any files on there worth covering but more dayd and
> days of fiddling to get things working, much of which im sturggling to
> remember,
>
> Reinstall is essentially a complete writeoff. THeres no way I'll have
> it setup up again by Monday, I'll have to withdraw from the class and
> thats a large course fee forfeiture I'd rather avoid.

OK, there is another option, which is a clean install alongside the existing installation. Following this clean install, you'll abandon the new root and switch to the old root that has all your customizations.
Basically, you'll be doing a clean install just to get all the nuggets in /boot and /boot/efi into the proper shape. Some post-install surgery will still be necessary, though... Is it more complicated to do a modified dual boot, where we have two roots, then switch the roots and make the necessary changes to /etc/fstab and the BLS snippets in /boot/loader/entries? Or to just manually create every file we need? Uhhh, I think it's probably easier to do the side-by-side installation, preserving the existing root, and then abandon the new root while keeping the replacement /boot/efi/EFI/fedora and /boot. But that's a guess.

This is a draft I helped write for having dual boot Fedoras, i.e. two roots:

https://fedoraproject.org/wiki/User:Sumantrom/Draft/dualboot_f33_btrfs

It needs a number of modifications for your situation, though. So before going down that road, we should discuss whether it's really the best option.

> I really can't believe that these Linux systems are so fragile and the
> ONLY option is to start over, is there nothing like Resotre Points in
> windows?

Not automatically, and only for the root (/) and home (/home). You've toasted the /boot volume, which is functionally like the system volume Windows boots from initially; if you wiped that, you'd be totally screwed on Windows too. And these days on Windows you don't get automatic restore points either; you'd have to have created one before the mistake.

-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Fri, Oct 28, 2022, at 4:21 PM, Jake D wrote:
>> HyperKitty is a web front-end for what is really a mailing list. Most
>> people here access the list via a standard mailer rather than the web
>> interface, which (IMHO) gives better results, including proper quoting.
>
> I'm not really sure what a "standard mailer" is or what you mean by
> 'mailing list', and honestly I'm a bit more focused on getting the
> computer to boot at all by monday. This was one of the resources on
> the course material and I'm asking everywhere I can, I'm a bit frantic
> frankly.

Panic is one of the best-known paths to data loss. Before making any further changes, I advise a backup of important user data. Do not attempt repair or reinstallation until there are at least two independent copies of your important data.

-- Chris Murphy
Re: How do I rebuild Grub/Boot/initramfs from a Live USB?
On Fri, Oct 28, 2022, at 2:31 PM, Jake D wrote:
> Hello all.
>
> I need some help.

My opinion: This is probably easier in a live discussion on IRC or Matrix. There's just too much back and forth required.

But the absolute easiest thing to do is mount the encrypted Btrfs, make a backup of the home directory, and then do a clean install followed by restoring the home directory files from the backup.

Otherwise, you and at least one other person have to have a fairly intense conversation about very low-level esoterics. Boot is very distro specific. There are maybe two or three dozen steps to repair this setup, and they all have to be exactly correct or it won't work. Much of this logic is in the installer.

-- Chris Murphy
Re: Silverblue user mailing list??
On Fri, Sep 9, 2022, at 5:44 AM, Jack Craig wrote:
> hi all,
>
> I've been subscribed to this list for a while and I have never seen any
> traffic regarding Silverblue, is there a separate mailing list for
> Silverblue??

Pretty sure most of it happens on Discourse.

https://discussion.fedoraproject.org/tag/silverblue

-- Chris Murphy
Re: /etc/grub2.cfg Flagged a Potenially missing
On Sun, Aug 28, 2022, at 7:38 PM, Stephen Morris wrote:
> Hi,
> /etc/extlinux.conf is flagged as missing

Flagged as missing by what? This file is normally not created on any Fedora variant I'm aware of. It could be a legacy file.

> /etc/grub2.cfg and /etc/grub2-efi.cfg both of which point to the
> same file also display the same way as /etc/extlinux.conf, but in this
> case the file pointed to actually does exist, and is linking to
> /boot/grub2/grub.cfg, which I regularly write to with sudo and
> grub2-mkconfig,

There is no reason to regularly regenerate grub.cfg; it's a static file these days. The files that change are the drop-in files found in /boot/loader/entries, and they can be modified with grubby per the examples at:

https://fedoraproject.org/wiki/GRUB_2#Changing_kernel_command-line_parameters_with_grubby

-- Chris Murphy
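For example, grubby can edit all of the drop-in entries at once. A sketch; the kernel argument here is just a placeholder, substitute whatever you actually need:

```shell
# Add a kernel command-line argument to every installed boot entry.
sudo grubby --update-kernel=ALL --args="log_buf_len=1M"

# Inspect the resulting entries (these mirror /boot/loader/entries).
sudo grubby --info=ALL
```

No grub2-mkconfig run is needed afterward; grubby rewrites the BLS snippets directly.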
Re: opinions: backups
On Mon, Aug 15, 2022, at 2:32 PM, Bill Cunningham wrote:
> On 8/15/2022 12:17 PM, Chris Murphy wrote:
>> On Sun, Aug 14, 2022, at 5:08 PM, Bill Cunningham wrote:
>>> I just thought I would ask for opinions on backups that people use.
>>> I have thought about the old fashioned dump/restore; IDK if that would
>>> be good for modern use or not. My system isn't really that big. My
>>> allotted size is 30 Gig, and it's not full. There's dar and xar and
>>> fsarchiver. There's backing up with btrfs too.
>> I mainly backup just /home because I consider everything else replaceable.
>> So for that it's
>
> I want to keep my valuable info and get rid of everything else. But not
> have to go through downloading and manually running dnf every time for
> the rpms I individually install. There's quite a few of them. Are you
> uploading to a server online? Or copying to another partition formatted
> with btrfs?

The destination in this command is an Intel NUC on the local network, and /srv/backups is a Btrfs-formatted volume:

$ sudo btrfs send -p home.20220810 home.20220815 | ssh chris@fnuc.local "sudo btrfs receive /srv/backups/fovo/"

Although when I'm traveling it looks a bit different, because I'll use a locally attached USB stick:

$ sudo btrfs send -p home.20220810 home.20220815 | sudo btrfs receive /run/chris/backups/fovo/

Because the "root" subvolume contains everything, including logs, VM images if you use virt-manager, and databases, it can actually go through quite a bit of churn. Probably more than the typical "home".

-- Chris Murphy
Re: opinions: backups
On Sun, Aug 14, 2022, at 5:08 PM, Bill Cunningham wrote: > I just thought I would ask for opinions on backups that people use. > I have thought about the old fashioned dump/restore; IDK if that would > be good for modern use or not. My system isn't really that big. My > allotted size is 30 Gig, and it's not full. There's dar and xar and > fsarchiver. There's backing up with btrfs too. I mainly backup just /home because I consider everything else replaceable. So for that it's # mount /dev/nvme0n1p5 /mnt ##mount top-level of btrfs # cd /mnt # btrfs sub snap -r home home.20220815 $ sudo btrfs send -p home.20220810 home.20220815 | ssh chris@fnuc.local "sudo btrfs receive /srv/backups/fovo/" That's it. Incremental backup using btrfs send/receive over ssh, where /srv/backups/fovo is also btrfs. These are quite cheap because no deep traversal is required on either side. Btrfs keeps a generation number on each file, so it's cheap for it to locate files that differ. This cheapness also applies to rename and moving files, if you move a big file, most other methods require delete in one location and copy to another location, whereas btrfs sees that it's moved on the source and merely moves it on the destination. Ok that's not entirely it. There is clean up. You can keep them around as long as you want with no ill impact, but when you decide to clean up, you can remove all the snapshots except the most recently sent, i.e. you want a common snapshot on the local and remote systems, in order to preserve the ability to do an incremental send/receive. The same goes for the remote - you can keep those indefinitely, limited only by space available. Or clean them up leaving just one in common with send and receive sides. > > I am thinking about back ups of the whole system and rpms I have > installed. And maybe not backing up logs, old settings like are stored > in the root directory of the user(s) and root account. 
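The snapshot and send/receive commands above can be strung together into a small script. This is a hedged sketch, not from the original post: the paths, snapshot names, and the fnuc.local destination are illustrative examples, and the script only builds and prints the commands so you can review them before running anything as root.

```shell
# Sketch of the snapshot + incremental send workflow described above.
# All names and paths are examples; review the printed commands before running.
TOP=/mnt                      # top-level of the btrfs file system
PREV=home.20220810            # last snapshot common to source and destination
TODAY=home.20220815           # today's new read-only snapshot
DEST='chris@fnuc.local'       # hypothetical backup host

SNAP_CMD="btrfs subvolume snapshot -r $TOP/home $TOP/$TODAY"
SEND_CMD="btrfs send -p $TOP/$PREV $TOP/$TODAY | ssh $DEST 'sudo btrfs receive /srv/backups/fovo/'"

echo "$SNAP_CMD"
echo "$SEND_CMD"
```

Remember to keep $PREV around on both sides until the next incremental send succeeds, since it's the common parent.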
You could also apply the same technique to the "root" subvolume. But it'll include logs unless you split those out with a separate subvolume in the top-level alongside the root and home subvolumes, and add it to fstab so it mounts at /var/log. I really don't often back up / though. What I do tend to do semi-often is replicate a root between systems, virtual and real. The part I like about send/receive here, even though it's not any faster than rsync or even cp -a, is that everything is preserved: permissions, owner, selinux labels, all of the time stamps (otime, atime, mtime, ctime); I just don't have to think about it. I could use btrbk to help automate all of this, but what can I say, I'm a bit lazy and I'd have to think about it a little bit. It really is cheap enough you could kick off a backup every 10 minutes if you want. It'd take a couple seconds for it to figure out what's changed and whether the parent snapshot is already on the destination. And then it's just the time to transfer the data that's changed. So for a single file changed in 10 minutes, it could take just a couple seconds. -- Chris Murphy
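For reference, splitting logs out looks roughly like this. A sketch only: the subvolume name "varlog" is my example, not from the post, and the UUID placeholder must be replaced with the file system's real UUID from blkid.

```shell
# One-time setup sketch (run as root, with the btrfs top-level mounted at /mnt):
#   btrfs subvolume create /mnt/varlog
#   cp -a /var/log/. /mnt/varlog/     # migrate existing logs, e.g. from a rescue boot
# Then an fstab line along these lines:
#   UUID=<fs-uuid>  /var/log  btrfs  noatime,subvol=varlog  0 0
```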
Re: New error message when selecting Windows Boot Loader from the GRUB menu
On Thu, Aug 4, 2022, at 6:39 PM, Scott Beamer wrote: > Greetings, > > After a recent Fedora 36 update, I'm getting an error message when > selecting "Windows Boot Manager" from the GRUB menu. Instead of booting > like it had previously, it gives me an almost blank screen with the > following text in the upper left: > > /EndEntire > > And all I can do at that point is shut down and restart my computer, go > to it's boot menu, and sekect > Windows Boot Manager, in order to boot into Windows. > > I've been dual-booting Fedora 36 and Windows for weeks prior to this one > without issue. What version? Current is 2.06-45.fc36 and there are a few complaints. https://bodhi.fedoraproject.org/updates/FEDORA-2022-8ffd58c713#comment-2667330 https://bugzilla.redhat.com/show_bug.cgi?id=2115202 dnf downgrade will get you the -29 version which is a ways back, but also easy and will get you working. But to avoid it getting updated and breaking again in the near future you'll need to add an exclude in the dnf.conf (man dnf.conf) for "grub2-*" Set a reminder though, you'll eventually want to update it and it might even prevent a system upgrade from getting a new version (or even failing, I'm not really sure). An alternative is to: rpm -qa | grep grub2 That'll get you a list of all the grub2 packages you'll need to download from here: https://koji.fedoraproject.org/koji/buildinfo?buildID=1977609 That's the -42 version which should work. Then you have the packages locally and you can just put them in a directory and if you do an update that steps on this version again, you can just cd to that dir and do "dnf downgrade *rpm" and it'll use the local rpm files. If you have a Fedora account or RHBZ account you can add yourself to the bug.
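For reference, the exclude mentioned above goes in /etc/dnf/dnf.conf. A sketch; the one-off --disableexcludes invocation is how you'd update grub2 later without removing the line:

```shell
# /etc/dnf/dnf.conf
# [main]
# exclude=grub2-*
#
# Later, to update grub2 once despite the exclude:
#   dnf --disableexcludes=main update 'grub2-*'
```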
-- Chris Murphy
Re: EFI
On Wed, Jul 27, 2022, at 12:13 PM, Patrick Dupre wrote: > OK, > This is correct, but is is absolutely confusing !!! > Same name 2 different files ! > > Are there both generated by grub2-mkconfig -o /boot/grub2/grub.cfg ? > I do not think so, if I check the date of creation. > How is generated /boot/grub2/grub.cfg ? grub2-mkconfig creates /boot/grub2/grub.cfg There's a script in grub2-common that creates /boot/efi/EFI/fedora/grub.cfg, hence the reinstallation instructions to delete it so it's properly recreated if accidentally stepped on: https://fedoraproject.org/wiki/GRUB_2#Instructions_for_UEFI-based_systems But ordinarily that grub.cfg is static; you don't ever need to interact with it anymore. All the Fedora boot menu entries are in /boot/loader/entries in BLS format. One file per kernel. -- Chris Murphy
Re: /boot/efi
On Wed, Jul 27, 2022, at 11:20 AM, stan via users wrote: > On Wed, 27 Jul 2022 10:54:51 +0200 > Patrick Dupre wrote: > >> If I have several distributions on a single machine with several >> disks, should I have a single /boot/efi ? > > It will work as long as none of the installed distributions are > duplicated, because that will duplicate the label. e.g. default fedora > label is fedora, so if more than one is installed there will be > problems, because efi expects to find fedora at /boot/efi/EFI/fedora. This should still be valid: https://fedoraproject.org/wiki/User:Sumantrom/Draft/dualboot_f33_btrfs The gist is you can share a single ESP for two Fedoras. Since this location has static configuration it doesn't really matter that the two Fedoras will occasionally step on the bootloaders there, but yeah you could choose to configure the older version to not update shim and grub, thereby only ever using a single newer base shim and grub from the newer Fedora. I imagine a conflict can arise if these are two Fedora variants of the *same* release, e.g. Workstation and KDE. It's possible an update on one results in removal of kernels referred to by the other. It really depends on how stale one of the variants is allowed to get, e.g. if they're months apart it might not be possible to boot the older variant, since its boot menu entries only refer to old kernels long since removed from /boot. There is a vmlinuz copy in /usr along with that version's modules, so you could get out of this situation with some effort, but you pretty much would want them on approximately the same update cadence to avoid the problem. Another way you end up with multiple Fedoras with shared ESP and /boot is snapshotting, and significantly branching the snapshots, e.g. install Fedora 34, snapshot it and upgrade the snapshot to 35, snapshot that and upgrade the snapshot to 36, snapshot again and upgrade to Rawhide. Four Fedoras.
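To make the branching step concrete, here's a hedged sketch (the subvolume names are my examples following a root$N scheme); it only builds and prints the snapshot command and the boot parameter you'd use the first time you boot the branched copy:

```shell
# Sketch: branch a new root subvolume, and the rootflags needed to boot it.
# Names are examples; run the printed command as root after reviewing it.
TOP=/mnt          # top-level of the btrfs file system
SRC=root          # currently booted root subvolume
NEW=root36        # the branched copy you'll then upgrade

SNAP_CMD="btrfs subvolume snapshot $TOP/$SRC $TOP/$NEW"
# Edit the kernel command line in GRUB (press 'e') to point at the new subvolume:
BOOT_ARG="rootflags=subvol=$NEW"

echo "$SNAP_CMD"
echo "$BOOT_ARG"
```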
There's more to it than this, you probably want some naming scheme for the snapshots like root35, root36, root37; you might want to adjust fstab in each, and you'll definitely need to edit the rootflags argument to boot a root for which a boot loader snippet doesn't yet exist (it will exist if you do a system upgrade, because that installs a new kernel for that version, with a BLS snippet containing a rootflags entry pointing to the currently active subvolume used as root). Anyway, it's slightly tricky but not endlessly so. -- Chris Murphy
Re: EFI
On Wed, Jul 27, 2022, at 5:22 AM, Patrick Dupre wrote: > In addition to my previous question. > Should I have a grub.cfg in /boot/efi/EFI/fedora ? There should be two: /boot/efi/EFI/fedora/grub.cfg /boot/grub2/grub.cfg The first one has a few lines to find and load the 2nd one. grubx$arch.efi assumes a grub.cfg in the same directory, hence the need for the 1st one; and this change made the actual grub.cfg reside in /boot/grub2 regardless of arch or firmware type. https://fedoraproject.org/wiki/Changes/UnifyGrubConfig For what it's worth, there are many valid ways to do this. Therefore it gets confusing because you have to evaluate all the tradeoffs of each layout. -- Chris Murphy
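For illustration, the small EFI-side grub.cfg is just a redirect to the real one; it looks something like this (paraphrased from memory, so treat it as a sketch; the UUID is a placeholder for the file system holding /boot):

```shell
# /boot/efi/EFI/fedora/grub.cfg (sketch)
# search --no-floppy --fs-uuid --set=dev <uuid-of-fs-holding-/boot>
# set prefix=($dev)/grub2
# export $prefix
# configfile $prefix/grub.cfg
```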
Are there typical ways Fedora tends to break? part 2
OK thanks for the responses so far. I have followup questions for everyone, even if you didn't previously respond. Do you think a graphical rescue environment would be helpful in troubleshooting system problems? Do you think a graphical rescue environment using volatile storage would be useful? i.e. similar to a Live boot, by default no changes to the system or Live user environment would be written to persistent media; e.g. Firefox cache files and history, or even installing software, would be entirely lost on reboot from this graphical rescue environment. Do you think a mechanism for system snapshots and rollbacks would be useful in troubleshooting system problems? Do you think a snapshot+rollback mechanism would be more or less useful than a graphical rescue environment, for troubleshooting system problems? Thanks! -- Chris Murphy
Re: are there typical ways Fedora tends to break?
This is the current Fedora GRUB doc. https://fedoraproject.org/wiki/GRUB_2 This doc needs updating but skimming it I'm not finding outright bad advice. https://docs.fedoraproject.org/en-US/fedora/latest/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/ The wiki doc contains examples of using grubby to modify the kernel command line; grubby is the preferred tool for such modifications because changes are applied correctly universally: regardless of BIOS or UEFI, Fedora version, architecture, or whether the user might have at one time opted out of BootLoaderSpec conversion. In particular the section https://fedoraproject.org/wiki/GRUB_2#Instructions_for_UEFI-based_systems contains more thorough instructions resulting in a more complete reinstallation that should be equivalent to a clean installed system. -- Chris Murphy
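Typical grubby invocations, for reference (run as root; the mitigations=off parameter here is just an example argument, not a recommendation):

```shell
# Show all boot entries:
#   grubby --info=ALL
# Add a kernel command line parameter to every installed kernel:
#   grubby --update-kernel=ALL --args="mitigations=off"
# Remove it again:
#   grubby --update-kernel=ALL --remove-args="mitigations=off"
```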
are there typical ways Fedora tends to break?
Hi, I have a request for list regulars. The Fedora Workstation working group is curious whether there's any pattern or categorization of how Fedora installations typically break. i.e. the installation is successful, the system has been updated multiple times successfully, and then for whatever reason it breaks. Are most failures hardware related? This could be broken down into hard failure (drive or logic board failed) and soft failure (some hardware configuration changed and reverting the change resolves the problem). What portion of the failures are early boot failures? (Defined as bootloader, kernel, or early initramfs failures, but excluding being landed at a dracut prompt.) What portion of the failures land the user at a dracut shell? What portion of the failures get the user to a graphical shell but unable to log in? What portion of the failures let the user log in but with some sort of anomalous behavior? What portion of all failures are fixable without reinstalling? Is the GRUB "rescue" menu entry ever useful in resolving problems? Could everyone reading this try booting the "rescue" menu entry and describe what happens? How does the actual behavior compare to what you thought would happen? The question list is not complete; feel free to add your own categorizations / failure patterns that you tend to see. Thanks! -- Chris Murphy
Re: hardware RAID1 NVMe card?
On Thu, Jun 23, 2022 at 11:05 PM ToddAndMargo via users wrote: > > Hi All, > > Any of you guys know of a PCIe card that will do > hardware RAID 1 with two NVMe drives? > > I have found some, but they are way to elaborate, > and as such, way too expensive. I'm really not certain how sophisticated or reliable either PCIe or NVMe is with respect to error reporting. Or even if it varies by make/model. My understanding is that internally it has to be good because your data isn't really stored in any recognizable form on solid state drives, it's a "probabilistic representation of your data" and requires really sophisticated encoding/decoding to "almost certainly" return your data. But when that doesn't happen, curiously it (anecdotally) seems rare to get discrete read errors like we see with hard drives. Instead, it's common for the drive to return garbage or zeros in place of your data. This is where btrfs shines, in general, but really shines in the raid1 configuration. In the normal single drive configuration, Btrfs will verbosely complain. It has limited ability to correct when the metadata profile is dup (two copies of the file system metadata on one drive), which is the mkfs default since btrfs-progs ~5.15. For various reasons, even dup might have two bad copies on a single SSD. But in the raid1 configuration (two copies on different devices), Btrfs can unambiguously determine on every read whether data or metadata is wrong, and grab the good copy from the other drive, and overwrite the bad copy. And this is all automatic. You can see the same scary verbose message in dmesg, but you'll see additional messages for the fixups. Fixup also happens during scrub, useful for the areas that aren't regularly read. Conversely, any hardware, mdadm, or LVM RAID depends on the hardware reporting a read error. If garbage or zeros are returned, the RAID can't do anything about it. [1] Sounds great. So why not btrfs raid1?
Well, right now the code that handles degraded mdadm RAID is all in dracut (in the initramfs). The initramfs contains dracut scripts that try to assemble the RAID, and if a drive is missing it won't assemble, so the scripts know to start a loop to wait for about 3 minutes and then attempt a degraded assemble. But dracut doesn't handle Btrfs in the same situation, and no one has done the work so far to make it possible. If a drive flat out dies, what happens at boot time is you get an indefinite wait for the device to appear, because of a udev rule that requires waiting for all Btrfs devices to appear before mount is attempted. That's good because we don't want to prematurely try to do a normal or degraded mount. Anyway, this area needs development work. So if your use case requires unattended boot when a drive has failed, this setup is not for you. So those are the current tradeoffs. [1] There's experimental dm-integrity support via cryptsetup. It works rather differently than Btrfs, but has the ability to detect such corruption problems and report them to the upper layer as a read error, where the normal RAID error correction can then work properly. -- Chris Murphy
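For reference, creating the btrfs raid1 setup discussed above is a one-liner. A hedged sketch: the device names and mountpoint are examples, and the script only builds and prints the commands rather than running them:

```shell
# Sketch: two-device btrfs raid1 for both data and metadata, plus a scrub.
# Device names and mountpoint are examples; review before running as root.
DEV1=/dev/nvme0n1
DEV2=/dev/nvme1n1

MKFS_CMD="mkfs.btrfs -d raid1 -m raid1 $DEV1 $DEV2"
SCRUB_CMD="btrfs scrub start /mnt"   # periodic scrub finds and fixes bad copies

echo "$MKFS_CMD"
echo "$SCRUB_CMD"
```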
Re: EXT4 Data Loss Error on Resume from Suspend
On Tue, Jul 5, 2022 at 8:40 PM Joseph D Wagner wrote: > > Should this go here, or the devel list? Is this filtered in any way? Can you post a complete dmesg? -- Chris Murphy
Re: Dual booting
It can be a bit tricky if this is BIOS firmware rather than UEFI. I'm guessing it's probably UEFI, in which case you can do the installations in any order. For BIOS, it really should be Windows first, Fedora last, or else you get into a situation where Windows steps on GRUB and you'll have to repair it by reinstalling GRUB (i.e. grub2-install). Back up first. From Windows, download the installation media creation tool and have it make installation media. Do not install. You can also create Fedora USB stick media with Fedora Media Writer for Windows. https://www.microsoft.com/en-us/software-download/windows10 https://www.microsoft.com/en-us/software-download/windows11 Then boot a Fedora Live ISO and obliterate the entire disk. If it's an HDD, you can use the wipefs tool. If it's an SSD or NVMe, you can use the blkdiscard command. Now install Windows first if the firmware is BIOS. Fedora last. If firmware is UEFI, the order doesn't matter. I agree with the suggestions to use Windows' tools to shrink NTFS and its partition. But more importantly make sure to disable Fast startup. https://dev.to/xeroxism/how-to-disable-fast-start-in-ubuntu-windows-dual-booting-setup-4akn -- Chris Murphy
Re: System is locking up a lot
What version of Firefox are you using? There was a version a couple weeks ago that snuck past updates-testing into stable that has a nasty memory leak. I ran into it with oomd killing it off, so I masked oomd. Haha, bad idea: the memory leak locked the whole setup, and I had to force power off (I gave it a few minutes, which is really too long). I think the FF bug got fixed but maybe not entirely? So as it turns out oomd is doing the right thing in this case, just that we're not getting a desktop notification about what happened. -- Chris Murphy
Re: auto unlock encrypted disks using clevis/tang works for ext4 but not btrfs?
On Sun, Jun 5, 2022 at 2:28 PM Barry Scott wrote: > > I have setup a tang server to offer up the unlock key for > by fedora systems that uses encrypted disks. > > This works great with my file server that uses LVM and ext4. > > But my desktop system that uses the btrfs does not unlock the > disk automatically. I see the logs on the tang server that show > that there are transactions to ask for the key but it does not work. > > I'm jumping to the difference being btrfs, but admit that I'm far > from having evidence to show that is the problem. > > I used the exact same setup steps for both systems so I'm reasonably > confident that the config is good. > > Anyone else see this issue? Need logs. And it might help to have the exact same binaries available in the A vs B config, i.e. LUKS LVM ext4 vs LUKS Btrfs, but both are Fedora 36. I don't know that much about tang or clevis, but my understanding is the central aspect is `clevis luks unlock`, and once the LUKS volume is unlocked, libblkid should see the btrfs volume on it, and then it's possible to mount it. While the LUKS volume is locked, the Btrfs volume is in effect invisible (all ciphertext), so the fact it's btrfs is obscured and can't be a factor until the LUKS volume is unlocked. So I'm thinking unlock problems are unrelated to the fs selection, and it's some other factor (package versions, network latency, race condition). -- Chris Murphy
Re: maybe OT
On Fri, Mar 18, 2022 at 4:47 PM Paolo Galtieri wrote: > > I'm having issues with a VM. > > The VM was originally created under VMware and has worked fine for a > while. Today when I booted it up instead of seeing the usual MATE login > screen I get a login prompt: > > f34-01-vm: > > no matter what I enter, root or pgaltieri as login it never asks for > password and immediately says login incorrect. While it's booting I see > several [FAILED]... messages, e.g. [FAILED] to start CUPS Scheduler > > I booted the system again and this time it dropped into emergency mode. > In emergency mode I see the following messages in dmesg: > > BTRFS info (device sda2): flagging fs with big metadata feature > BTRFS info (device sda2): disk space caching is enabled > BTRFS info (device sda2): has skinny extents > BTRFS info (device sda2): start tree-log replay > BTRFS info (device sda2): parent transid verify failed on 61849600 > wanted 145639 fount 145637 > BTRFS info (device sda2): parent transid verify failed on 61849600 > wanted 145639 fount 145637 > BTRFS: error (device sda2) in btrfs_replay_log:2423 errno=-5 IO failure > (Failed to recover log tree) > BTRFS error (device sda2) open_ctree failed That's not good. The tree-log is used during fsync as an optimization to avoid having to do full file system metadata updates. Since the tree-log exists, we know this file system was undergoing some fsync write operations which were then interrupted. Either the VM or host crashed, or one of them was forced to shut down, or there's a bug that otherwise prevented the guest operations from completing. Further, the parent transid verification failure messages indicate some out of order writes, as if the virtual drive+controller+cache is occasionally ignoring flush/FUA requests. I regularly use a qemu-kvm VM with cache mode "unsafe". The VM can crash all day long and at most I lose ~30s of the most recent writes, depending on the fsync policy of the application doing the writes.
But the file system mounts normally otherwise following the crash. However if the host crashes while the guest is writing, that file system can be irreparably damaged. This is expected. So you might want to check the cache policy being used, and make sure that the guest VM is really shutting down properly before rebooting/shutting down the host. > > I ran btrfs check in emergency mode and it came up with a lot of errors. > > How do i recover the partition(s) so I can boot the system, or at least > mount them? I'd start with mount -o ro,nologreplay,rescue=usebackuproot Followed by mount -o ro,nologreplay,rescue=all The second one is a bit of a heavy hammer but it's safe insofar as it's mounting the fs read only and making no changes. It also disables csum checking, so any corrupt files still get copied out, and without any corruption warnings. You can check man 5 btrfs to read a bit more about the other options and vary the selection. This is pretty much a recovery operation, i.e. get the important data out. The repair sequence for this particular set of errors: btrfs rescue zero-log, then btrfs check --repair --init-extent-tree, then btrfs check --repair. I have somewhat low confidence that it can be repaired rather than make things worse. So you should start out with the earlier mount commands to get anything important out of the fs first. If those don't work and there's important information to get out, you need to use btrfs restore. -- Chris Murphy
Re: Fedora 35 swapon: /swapfile: swapon failed: Invalid argument
On Thu, Mar 24, 2022 at 6:36 AM Brad Bell wrote: > > I am doing algorithmic differentiation with very large tapes and my jobs > sometimes run out of memory. So the workload is producing a substantial amount of anonymous pages. It's kinda hard to tell what to do to optimize without a decent amount of knowledge about the workload's behavior. So you'd have to just change some things and see if the performance improves. I tentatively expect that you'd be better off disabling zram-generator, setting up a swap partition or file, and optionally enabling zswap (which is a different thing than zram). On the plus side, this frees up quite a lot of memory (roughly half), but on the negative side it might increase swap thrashing - it really depends on the workload. But zswap has the benefit of using Least Recently Used (LRU) to evict pages from the in-memory compressed cache pool to the conventional swap file. That way it's the stale pages going to disk and the active ones being compressed in memory. Also, for what it's worth, on Btrfs I use /var/swap/swapfile1 /var/swap/swapfile2 ... and so on. Where "varswap" is a subvolume located on the top-level of the file system (next to the install time default subvolumes "root" and "home") and has chattr +C set on it. That way should I take snapshots of root (or even var in some custom configurations) I'm not snapshotting the swapfiles. Snapshotting the swap files ends up making them subject to COW again, and that's incompatible with using them as swapfiles. The entry in fstab looks like this: UUID=$uuid /var/swap btrfs noatime,subvol=varswap 0 0 man 5 btrfs has a SWAPFILE SUPPORT section with fairly detailed steps.
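Those man page steps boil down to roughly this (a sketch from memory of the SWAPFILE SUPPORT section; the 8G size and paths are examples):

```shell
# Create a btrfs-safe swapfile inside the no-COW subvolume (run as root):
#   truncate -s 0 /var/swap/swapfile1
#   chattr +C /var/swap/swapfile1        # disable COW before writing any data
#   fallocate -l 8G /var/swap/swapfile1
#   chmod 600 /var/swap/swapfile1
#   mkswap /var/swap/swapfile1
#   swapon /var/swap/swapfile1
```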
-- Chris Murphy
Re: enabling hibernate on a new F35 installation
On Wed, Mar 2, 2022 at 4:02 PM Samuel Sieb wrote: > > On 3/2/22 13:56, Ranjan Maitra wrote: > > My approach to enabling hibernate on Fedora since F20 has been to create a > > swap partition and then do the following: > > > > sudo vi /etc/default/grub > > > > add --> resume=UUID="" <-- to the line GRUB_CMDLINE_LINUX= > > > > where the uuid is obtained using blkid, and then for efi-based systems do: > > > > sudo bash -x grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg > > > > and then use: > > > > systemctl hibernate > > > > However, this approach no longer works for me. It goes down all right, but > > comes back into a newly booted system. > > > > Reading up, it appears that things changed in F34, but I have been caught > > napping since I have been upgrading from previous versions for a while (I > > guess this was sort of grandfathered in). > > > > I tried a few things, but what do I do to get hibernate going on a new > > (clean) F35 installation. > > I just tried this out in a VM. I did an install of F35 with a swap > partition and it setup everything for hibernating including the kernel > command line parameter. "systemctl hibernate" does the full hibernating > process, but resuming doesn't work. This seems like a rather > unfortunate bug. Why set everything up so that you can hibernate, but > not resume? > > The fix is to run "dracut -a resume -f". This will update the initramfs > to include the bits that let resume work. In order for this to continue > working with kernel updates, you need to add a dracut config file with > the module. Sounds like a dracut bug to me. It should see the resume parameter on the kernel command line and just add the resume dracut module to the initramfs without having to request it explicitly. 
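The persistent fix mentioned above looks something like this (the file name is my example; the spaces inside the quotes are required by dracut.conf syntax):

```shell
# /etc/dracut.conf.d/resume.conf
# add_dracutmodules+=" resume "
# Then rebuild the initramfs:
#   dracut -f
```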
-- Chris Murphy
Re: Bad workloads for RAID0?
On Sat, Feb 12, 2022 at 8:32 AM Jamie Fargen wrote: > > Not familiar with DejaDup, but with this setup on RAID0 do an rsync every 15 > minutes to the backup system. rsync has some advantages: the destination does not need to be Btrfs, and the --inplace option helps with VM images. But for such very frequent backups, this is really a replication use case, and btrfs send/receive is very efficient at it because, unlike rsync, no deep traversal of either the source or destination is required. Btrfs increments the generation any time a file is modified, along with the generation of the leaf containing the inode, and of the node referencing that leaf, all the way up to the file tree root. This makes it very cheap, when diffing two snapshots, for Btrfs to figure out what has changed without having to look at every inode. It just skips all the parts of the tree that haven't changed; in effect it creates a "replay" list between the two generations. An incremental send contains just the changes, and it knows when files are renamed or moved, so their data doesn't need to be sent again. So if you were to change just one file in 15 minutes, a btrfs send -p stream (an incremental stream produced as a "diff" between two snapshots) and receive will take a few seconds, even if the snapshot contains millions of files. There'd be a straight line following the nodes and leaves with the incremented generation, leading straight to the only changed file. (You could use 'btrfs send -f' to place the stream as a file on a non-btrfs file system, but you can't really look inside it the way you can with a snapshot received on a Btrfs file system.)
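A minimal incremental replication loop along these lines might look like the following. The paths, snapshot names, and ssh destination are examples, not anything from the thread:

```shell
# Take a new read-only snapshot (send requires read-only snapshots).
btrfs subvolume snapshot -r /data /data/.snapshots/new

# Send only the delta between the previous snapshot and the new one;
# the receiving side must already have the parent snapshot.
btrfs send -p /data/.snapshots/old /data/.snapshots/new | \
    ssh backuphost 'btrfs receive /srv/backup'

# Rotate: the new snapshot becomes the parent for the next incremental send.
btrfs subvolume delete /data/.snapshots/old
mv /data/.snapshots/new /data/.snapshots/old
```

Run from cron or a systemd timer every 15 minutes, each iteration transfers only what changed since the last snapshot.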
-- Chris Murphy
Re: Bad workloads for RAID0?
Mostly trivia, but it might help someone one day... It's true that raid0 is basically for data you don't care about: if any drive in the array dies, you lose everything. Except on Btrfs... If the metadata profile is raid1 (or raid1c3/raid1c4), you will still lose all the data on the failed drive. But you will be able to mount the remaining drive(s) using `mount -o ro,degraded`, thanks to the raid1 metadata profile. The file system itself is not striped but mirrored (two copies for raid1, no matter how many drives). You can't mount it read-write because the raid0 data profile is below its minimum number of drives. If you copy the files out, you'll have quite a mess, because obviously most files are missing or damaged (swiss cheese). You'll need a tool that tolerates I/O errors by continuing to read the rest of the file rather than giving up on the first error. ddrescue does this (it works on block devices or files; in this case you'd use it on files). The mkfs-time default profile for metadata is raid1 if you include two or more disks in the mkfs command. Otherwise you get the DUP profile for metadata and the single profile for data. If you add a second drive to a single-drive Btrfs, you need to convert manually, e.g. `btrfs balance start -mconvert=raid1 -dconvert=raid0 <mountpoint>`. The same behavior applies to, e.g., a 2-disk Btrfs with single profile data and raid1 profile metadata: you can mount it ro,degraded and get the files off the surviving drive(s). In this case, you'll get both more completely lost files and more completely intact files, because the single profile doesn't stripe data.
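A recovery session for the raid1-metadata/raid0-data case described above might look like this. The device node and file paths are placeholders:

```shell
# Mount the surviving member read-only and degraded; raid1 metadata makes
# the filesystem structure readable even though raid0 data is incomplete.
mount -o ro,degraded /dev/sdb1 /mnt

# Copy a damaged file out while tolerating I/O errors. ddrescue keeps
# reading past unreadable regions instead of aborting, and records the
# bad spans in a map file for later inspection.
ddrescue /mnt/vm/disk.img /recovery/disk.img /recovery/disk.img.map
```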
-- Chris Murphy
Re: why are / and /home the same filesystem?
On Tue, Feb 8, 2022 at 6:30 AM Peter Boy wrote: > > > > > On 08.02.2022 at 12:11, Patrick O'Callaghan wrote: > > > > On Tue, 2022-02-08 at 16:48 +1030, Tim via users wrote: > >> You may actually want hard size limits on different partitions. > > > > You can still have this with subvolumes. See btrfs-quota(8). > > Yes, a sentence beginning with "You can have this with ..." is probably true > for every IT topic. > > The question is rather whether you can realistically have it in everyday > practice. Yes, (open)SUSE has enabled qgroups by default for years. Fedora doesn't enable them, but it's worth checking out 'man btrfs quota'. They're pretty cool, and the docs consider the dilemmas raised by snapshots and how that affects accounting. There are some performance concerns, but desktop users don't need to worry: you can enable them, play around, and disable them. The whole quota btree is then removed; no residue remains on the file system. > Workstation WG made BTRFS default with F33. Even now with F35 one year later, > where is easily accessible documentation for a user who wants to install > Workstation? Neither the current Installation Guide nor the Administrator's > Guide give any information about how to handle BTFRS. The complete text is up > to date with Fedora 25 or perhaps a bit later, only minimally updated to > subsequent versions. ? https://docs.fedoraproject.org/en-US/fedora/f35/install-guide/install/Installing_Using_Anaconda/ Those are Fedora 33 screenshots. "btrfs" appears 34 times in the document. There's an entire section on creating a btrfs layout. https://docs.fedoraproject.org/en-US/fedora/f35/install-guide/install/Installing_Using_Anaconda/#sect-installation-gui-manual-partitioning-btrfs Since I wrote up the lightweight changes for docs when btrfs became the default, I'm aware that the documentation has weaknesses. It is still LVM-centric, and doesn't have hints for Btrfs nuances.
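The enable-play-disable quota experiment described above is cheap to try. A sketch; the mountpoint is an example:

```shell
# Create the quota btree and start accounting for all subvolumes.
btrfs quota enable /home

# Show per-subvolume referenced and exclusive usage (qgroup accounting).
btrfs qgroup show /home

# Remove the quota btree entirely; no residue remains on the filesystem.
btrfs quota disable /home
```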
In particular, with how to get the installer to reuse the "home" subvolume for the /home mountpoint. It is super easy to do, but totally non-obvious. The part most folks run into is not reusing the "home" subvolume itself, which is just a matter of clicking on the previous installation's "home" and assigning it to the /home mountpoint. It's rather how to install to /, because the installer won't let you reuse the "root" subvolume. This is due to the installer requiring a new, clean filesystem for root. Ext4 and XFS require a reformat, but Btrfs gets a partial exemption: you don't have to reformat, but you do need a new "root" subvolume. So you just create a new mountpoint with the + button, specify the mountpoint as /, and leave the capacity field blank. It'll add the / mountpoint *and* create a new subvolume in the process. You can either delete the old "root" subvolume or keep it - it's a matter of space available, but as all subvolumes share space it's a pretty simple calculation whether you have room for it or not. > And I see no Workstation doc listet on docs.fp.o, unlike the other Fedora > editions, again, after a year. 1. https://docs.fedoraproject.org/en-US/docs/ 2. click on engineering teams, https://docs.fedoraproject.org/en-US/engineering/ 3. click on workstation working group, https://docs.fedoraproject.org/en-US/workstation-working-group/ > And is there an adapted installation step in Anaconda to expose an option to > set a max. limit (e.g. like to handle the root login - deactivated, key only, > . . .) and probably some other valuable capabilities? I can’t remember to > have seen something like that. Workstation is a different installation experience than Server. Workstation does a Live install, using rsync, and users are set up in GNOME Initial Setup rather than in the installer. > Therefore, a user is dependent on clear and informative terminology. And, > well, sub-„volume“ after 32 Fedora releases has a specific meaning. There are only so many words. 
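As a sanity check after an install done this way, the resulting subvolume layout can be inspected. These commands are generic, not from the thread:

```shell
# List subvolumes on the installed system; expect the reused "home"
# plus the freshly created root subvolume.
sudo btrfs subvolume list /

# Show which subvolume is mounted where (see the subvol= mount option).
findmnt -t btrfs -o TARGET,SOURCE,OPTIONS
```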
I'm reminded of the word "chunk". You see this word quite a bit in computer storage. If you specialize in one thing or another, you might get the idea that chunk is a specialized word with a pretty specific meaning. And then you're surprised when you come across the same term in another context, where it means something quite different. Chunk in mdadm is what the SNIA dictionary calls "strip" or "stripe element".[1] On Btrfs, chunk is a different thing entirely; there's no SNIA equivalent term. So it's certainly easy to get confused when terms get reused. [1] Can you believe those two terms are synonyms? [2] [2] Part of the problem might be the English language, really. If you've ever been confused about strip and stripe [3], it's not you, it's the words themselves. [3] These two terms are not synonyms. [4] [4] It really could m
Re: Failed installation of F 35 Workstation
On Sun, Jan 30, 2022 at 8:12 PM WMU Bavaria wrote: > > > > On 01/30/2022 10:00 PM Aaron wrote: > > > If your computer was shipped with Windows 8 (release date 2012) or newer it > > most likely has an UEFI capable bios. > > My laptop shipped with Windows Vista installed, long before UEFI was invented.

Intel EFI - ~1998
Tiano - 2004
UEFI 2.0 - 2005
Windows Vista - 2007
Windows 7 - 2009

In fact Vista was the first version of Windows to have EFI support, but the vast majority of (U)EFI systems of this era shipped with a Compatibility Support Module (CSM) enabled by default to present a faux-BIOS to the operating system. A 10-year-old laptop is from ~2011, which would put it in the Windows 7 era, which had substantially improved UEFI support. Anyway, I'm willing to bet this laptop has an MBR with a 1st partition starting at an LBA less than 2048, so there isn't enough room for modern GRUB to embed in the small MBR gap of this era. You can remedy this by offering up the 1st partition to Anaconda for deletion. When a new 1st partition is created as part of the installation, probably for /boot, it will start at LBA 2048, which provides a large enough MBR gap for GRUB to be installed.

fdisk -l /dev/sdX

-- Chris Murphy
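Checking the first partition's starting LBA before deciding is straightforward; a start below 2048 leaves less than 1 MiB before the first partition for GRUB's core image. The device name is a placeholder:

```shell
# Print the partition table in sector units; inspect the Start value
# of partition 1 (below 2048 means a too-small MBR gap for GRUB).
sudo parted /dev/sda unit s print

# fdisk shows start sectors by default, too.
sudo fdisk -l /dev/sda
```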
Re: Bad USB drive?
On Fri, Dec 31, 2021 at 9:10 AM Robert Moskowitz wrote: > > > > On 12/31/21 00:22, Chris Murphy wrote: > > On Wed, Dec 22, 2021 at 2:03 PM Robert Moskowitz > > wrote: > >> so something killed it and since tomorrow is garbage day... > >> > >> Sigh. that was a good $8 down the drain. > > Return it. Don't let people steal from you. Used a credit card? Report > > it. Do a charge back. You have 60 days from the date the statement > > containing the charge was mailed to you. If it's after 60 days, card > > might have an extended warranty on everything, give that a shot. > > > It has been sitting in my "backed up" bin for over a year. I backed up > something to it and now it is dead. > > Typically I back up something important with a double backup, like tax > records, so I doubt I really lost anything, but it is way past > warranty. Sigh. > > It is from Microcomputer Center over in Troy MI. They have these bins > of USB sticks very low price. No packaging, just take a couple out of > the bin at check out. I got a Dec present from them in a free 128GB > stick and bought 4 16GB for $5 each while I was restocking my DVD blanks > (stack of 500 Verbatim DVDs). They are a good place to pick up stuff > after a run to Costco; just a couple miles up the road. :) Yeah, there's a Microcomputer Center near me. I'd sooner suspect a bad-luck defect than fake flash. But also, some flash out there will lose data if it sits on the shelf too long.

-- Chris Murphy
Re: Bad USB drive?
On Wed, Dec 22, 2021 at 2:03 PM Robert Moskowitz wrote: > > so something killed it and since tomorrow is garbage day... > > Sigh. that was a good $8 down the drain. Return it. Don't let people steal from you. Used a credit card? Report it. Do a charge back. You have 60 days from the date the statement containing the charge was mailed to you. If it's after 60 days, card might have an extended warranty on everything, give that a shot.

-- Chris Murphy
Re: Problems with disk? apps failed, then reboot failed. Needed fsck
On Sun, Dec 26, 2021 at 8:07 AM Robert Moskowitz wrote: > > Quick note: > > I am using SSD: > > fdisk -l /dev/sda > Disk /dev/sda: 465.76 GiB, 500107862016 bytes, 976773168 sectors > Disk model: WDC WDBNCE5000PN > Units: sectors of 1 * 512 = 512 bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > Disklabel type: dos > > But there may be a connection problem. I have been bouncing the > notebook in my backpack these passed days. > > This Lenovo x140e may be old, but I think clean inside. Sensors is > reporting CPU temp of 55C. > > This is a new, current install of F35. I always start clean then move > over /home from my old SSD with rsync. > > I am still using ext4. I have heard of poor performance reviews still > with btrfs. I guess it is time to read up on it. > > Nothing much I can do until tonight when I get back home. Sounds like SSD failure. It'd help to see some logs. If you can boot from Fedora installer media, use 'mount -o ro' to read-only mount the ext4 rootfs somewhere, and then point journalctl at the journal location, something like 'journalctl -k -D /mnt/var/log/journal/$machineid/ --no-pager', where you tab-complete to fill in $machineid. That should get us a bunch of dmesg-like output from the most recent boot. Which, now that I think about it, might not have made it to disk if the SSD is failing. But it could still provide a hint... Also, including the dmesg from the LiveOS boot might show issues when you do the above mount. I would look into doing a backup of at least /home sooner rather than later, too. Btrfs is faster at some things, slower at others. But it'll also detect SSD pre-failure symptoms before anything else, including the drive's SMART reporting, by showing transient corruption. All such messages appear in dmesg. Btrfs is more sensitive to pre-failure because it's checksumming everything, not just the file system metadata. So it'll detect even a bit flip. 
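The journal-reading procedure sketched above, with a placeholder device node and a glob in place of the machine-id directory:

```shell
# From the live environment: read-only mount the installed root filesystem.
sudo mount -o ro /dev/sda3 /mnt

# Point journalctl at the on-disk journal. The machine-id directory name
# differs per system; tab-complete it, or glob it as here (works when
# there's exactly one machine-id directory).
sudo journalctl -k -D /mnt/var/log/journal/*/ --no-pager
```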
-- Chris Murphy
Re: F35 install doesn't find existing Windows installation
On Sat, Nov 6, 2021 at 2:12 PM Matthew Saltzman wrote: > > I have a Lenovo Yoga X1 (2nd generation), on which I was happily > running a dual boot of the original Windows 10 installation that came > with the machine and Fedora 34, which I had upgraded through a few > versions of Fedora. > > This time, I decided to do a fresh install of Fedora 35 (to try out > BTRFS), but when the installation finished, there was no option to boot > Windows. Did you install Fedora on the same physical drive as Windows? Was a 2nd drive involved in the installation? And what do you get for 'efibootmgr -v'? There are some edge cases where you get what you describe, and it suggests the initial grub2-mkconfig at the end of installation didn't find the Windows boot loader. There's also a gotcha with recent installations of Windows 10, which automatically encrypt the Windows installation and sequester the encryption key in the TPM. The key is only revealed when measured boot indicates the system isn't compromised. The problem is that booting shim+GRUB results in a measured boot failure when choosing the Windows boot entry, and while Windows does boot, it also asks for the drive's recovery key. The alternative is to use the firmware's built-in boot manager (boot selection menu) to choose Windows. On my Lenovo laptop you can get to this menu with F9 (or press Enter at the logo screen, which gets you a function key lookup chart superimposed on the splash, showing F9). And from there choose the Windows Boot Manager. 
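Checking for the Windows entry and regenerating the GRUB config might look like the following. The os-prober step is my assumption for a GRUB 2.06-era install, where OS detection is disabled by default:

```shell
# List UEFI boot entries; look for a "Windows Boot Manager" entry.
efibootmgr -v

# GRUB 2.06 disables os-prober by default; re-enable it so other OSes
# are detected when regenerating the config.
echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub

# Regenerate the config. On Fedora 34 and later, UEFI systems use the
# unified config at /boot/grub2/grub.cfg.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```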
-- Chris Murphy
Re: Fedora 35 using XFS as default for Network iso / XFCE?
On Sun, Nov 21, 2021 at 9:30 AM Ed Greshko wrote: > But, never mind, I'll just consider it an oddity. The particular ISO is what determines the default layout, not the package set chosen. That's why you can get XFS by default if you use the Server net install but choose the Workstation package set, or Btrfs if you use the Everything netinstall but choose the Server package set. I think the slightly confusing part is that it ends up being an attribute of the install media. The other oddity is that Everything isn't really "owned" by any one working group or SIG; it's sort of a catch-all media, and due to some esoteric aspects of how images are created it was just a lot easier to let the Everything netinstaller default to Btrfs. It's also the only way to do netinstalls for any of the desktop spins, since they don't each have their own. Workstation once had a netinstaller, but it was dropped maybe a half dozen releases ago.

-- Chris Murphy
Re: Fedora 35 using XFS as default for Network iso / XFCE?
On Sat, Nov 20, 2021 at 9:42 PM Ed Greshko wrote: > > On 21/11/2021 07:09, Peter Boy wrote: > > > >> Am 20.11.2021 um 23:14 schrieb Ed Greshko : > >> > >> On 21/11/2021 01:10, Doug H. wrote: > >>> Shortly after the release of Fedora 35 I downloaded: > >>> > >>> Fedora-Server-netinst-x86_64-35-1.2.iso > >>> > >>> ... > >>> I have stated the install and it is currently downloading the packages. > >>> The interesting part is that I see that it has used XFS instead of > >>> BTRFS. > >>> > >>> Might this be expected? > >>> > >>> ... > >>> > >> I installed the Fedora Sever from > >> Fedora-Everything-netinst-x86_64-35-1.2.iso. > >> > >> ….. > >> > >> No sign of XFS. > > The Everything ISOs storage configuration defaults to Workstation. > > > > If you use the Server iso you get the „real“ Server that defaults to XFS/LVM > > > > Configuration of the filesystem is not part of the package set you choose > > from the dvd, but part of Anaconda configuration which differs between the > > various deliverables / isos > > Yes. Sorry to not be clear. I was just pointing out that depending on the > install media you use you'll get different > disk layouts. I wonder why the inconsistency is permitted. Each edition (working group) and spin (special interest group) can choose their own default layout and filesystem. For the btrfs by default change, all the desktop spins were consulted in advance with a preview of the proposal, to address any concerns they had and how to opt-out of the change if they wished. All the desktops went with btrfs. Server, Cloud, IoT editions were given a heads up, but at the time of the original proposal for Fedora 33 it wasn't expected any would join right away anyway. Fedora 35 Cloud edition switched to Btrfs by default for its images. 
-- Chris Murphy
Re: How did I get a flatpack installed?
On Tue, Oct 19, 2021 at 10:30 AM Chris Murphy wrote: > > Hi, > > I think it's a bug, so I filed one. > https://bugzilla.redhat.com/show_bug.cgi?id=2015569 Yep. So the bug is that the RPM and flatpak versions use different IDs for the same application, therefore Software lists them in search results as separate apps. There should be only one result each for Thunderbird and Firefox, and then a drop-down menu on the upper right side that lets you choose whether to install the RPM or the flatpak, and if flatpak, from which source, since it could come from flathub.org or fedoraproject.org. I think any confusion in this area is at least suggestive of a bug (or suboptimal UI/UX), so I recommend constructive criticism with bug reporting for this. My understanding is the intent is to favor RPM on conventional desktop editions/spins, and to favor flatpak on rpm-ostree spins (Silverblue, Kinoite).

-- Chris Murphy
Re: How did I get a flatpack installed?
Hi, I think it's a bug, so I filed one. https://bugzilla.redhat.com/show_bug.cgi?id=2015569

-- Chris Murphy
Re: LUKS on shutdown.
On Thu, Sep 30, 2021 at 4:13 PM Gordon Messmer wrote: > > On 9/30/21 12:41, Chris Murphy wrote: > > Seems likely to me some service is not quitting properly, preventing / > > from being unmounted > > > If that were the case, there might be information about an exit failure > in the log. On the next boot, "journalctl -b -1" might have useful info. > > I also wonder if enabling sysrq and using sysrq+'t' would help determine > which process is stuck: It might, but it's also really verbose, so for starters "systemctl list-jobs" is the path of least resistance, I suspect. The report mentions the systemd cylon eye, which suggests a job is running and not exiting, as you say. So list the jobs, and then run systemctl status on each of them to find out what they're up to.

-- Chris Murphy
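Concretely, during the hang (from a debug shell or an ssh session that's still alive), the sequence is something like this; the unit name is a placeholder:

```shell
# Which jobs is systemd still waiting on?
systemctl list-jobs

# For each job listed as "running", inspect what the unit is doing.
systemctl status some-stuck.service
```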
Re: LUKS on shutdown.
On Wed, Sep 29, 2021 at 2:18 PM murph nj wrote: > > I'm having an issue shutting down. Fedora 34, just updated to 35, same > problem. (It was around in 33 as well, I've been putting up with it.) I > know that there is another list for beta versions, but this issue was there > in the current version as well. > > Somewhere around 1 out of every 10 shutdowns, the system goes down to > "[ OK ] Reached target System Shutdown", but then, > "[ *** ] a stop job is running for Cryptography setup for luks- LUKS volumes>" > > It typically goes for about 1/2 hour if I don't get fed up, and hold the > power button down. > > I just saw after that 1/2 hour: > [Time] Timed out starting System Reboot > Forcibly Rebooting: job timed out. > audit: type=1334. > > I still had to power it off. On reboot, it seems OK. > > > I haven't found anything interesting in /var/log/messages regarding it. > > Any suggestions to further troubleshooting? Seems likely to me some service is not quitting properly, preventing / from being unmounted, which prevents cryptsetup from closing the dm-crypt device. And as tedious as it is, the trick is to find out what that service is. You might also get a clue by comparing the shutdown sequences between successful and unsuccessful variants. The successful log will list all kinds of services being quit, whereas the unsuccessful one will be missing two or more services. Maybe from that you can infer what the problem is. Another idea: boot using systemd.log_level=debug to get more information in the journal (a *lot* more information). And also consider adding systemd.debug-shell=1 as well, but note that this is a security risk because it will persistently put a root level shell on tty9, which is what you'll switch to with control-alt-F9 when you get the shutdown hang. 
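One way to add those kernel command line options without hand-editing the config is grubby; remember to remove them afterwards, since the debug shell is a persistent root shell on tty9:

```shell
# Add debug options to the default boot entry (persists until removed).
sudo grubby --update-kernel=DEFAULT \
    --args="systemd.log_level=debug systemd.debug-shell=1"

# Once done troubleshooting, undo it.
sudo grubby --update-kernel=DEFAULT \
    --remove-args="systemd.log_level=debug systemd.debug-shell=1"
```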
And then do:

systemctl list-jobs
df
lsof

-- Chris Murphy
Re: KDE not starting
On Mon, Sep 20, 2021 at 3:44 AM GianPiero Puccioni wrote: > After every possible thing I could think of , I removed all KDE and > reinstalled > it, no joy, exactly the same. Time to reinstall F34, and this is when I > discovered that it is nearly impossible to reinstall F34 on a btrfs system > without reformatting /home too (!?) It is possible; it's just a bit non-obvious how to do it. This test case explains how to do it step by step. You already have a Btrfs installation, so Setup steps 1 and 2 are done, and you can go straight to the "how to test" steps. The critical steps are 9 and 10. In particular, step 10 is not obvious: it creates a new subvolume for '/', which is why the Btrfs file system isn't reformatted, and hence the existing home subvolume is retained and just assigned to /home in the new installation. https://fedoraproject.org/wiki/QA:Testcase_partitioning_custom_btrfs_preserve_home And yeah, we need a better way to document this than a test case.

-- Chris Murphy
Fedora Server docs
Hi, A new section in https://docs.fedoraproject.org/en-US/docs/ has appeared specifically for Fedora Server. https://docs.fedoraproject.org/en-US/fedora-server/

-- Chris Murphy
Re: machine suspending
On Wed, Aug 11, 2021 at 1:04 PM George Avrunin wrote: > > On Mon, 9 Aug 2021 12:15:20 -0600, Chris Murphy wrote: > > > Yeah I have a bit of a gripe with systemd that it doesn't, by default, > > insert the sleep request in the log. What exactly requested it? User > > hit the power button? User closed the lid? Some service like apcuspd > > requested it? I dunno, seems like an obvious thing that needs to go in > > the log, one line. And for NetworkManager to be the first indication > > that S3 or s2idle was requested is not helpful at all, I see this too > > in cases when I close the lid or the GNOME Shell power save timeout is > > reached (screen dim or whatever it's called). > > > > I don't know all the different ways sleep can be requested but we need > > the logs to indicate where this request is coming from. I don't know > > for sure but maybe you can boot with systemd.log_level=debug and get > > more detail, probably way too much detail because it's very verbose. > > But I'm not sure how to narrow it down. But either way, I think it's > > worth a thread on systemd-devel@ and ask if a single line of info > > about sleep being initiated can be dumped into the log by default? > > Thanks. I've asked our staff to let me know when they're going to replace > the switch so I can reboot with systemd.log_level=debug and I'll see if I > can get more information then. > > I assume "systemd-devel@" is the list at freedesktop.org? As soon as I can > find a little time, I'll subscribe and start a thread there. 
Yep systemd-de...@lists.freedesktop.org

-- Chris Murphy
Re: Kernel Panic with Kernel 5.13.6-200
5.13.9 is now in koji; you might give that a shot. The 5.13.5 changelog shows commit c3eb534eae09de6cdd3e0ff63897b20e1b079cdb "vmxnet3: fix cksum offload issues for tunnels with non-default udp ports". The 5.13.6 changelog shows several vmware-related commits https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.13.6 but I don't know if any are related to the problem. And I don't see followup patches in 5.13.7 through 5.13.9. I suggest filing a bug against the kernel and filling out the template provided. I also suggest testing 5.13.5 to see if that's the first version that broke things, or if it was in fact first broken with 5.13.6 (and still broken in 5.13.9). But with such a new kernel, I also suggest making sure vmware is up to date. -- Chris Murphy
Re: machine suspending
On Mon, Aug 9, 2021 at 10:53 AM George Avrunin wrote: > > Aug 06 17:47:32 ext.math.umass.edu kernel: Lockdown: systemd-logind: > hibernation is restricted; see man kernel_lockdown.7 > Aug 06 17:47:32 ext.math.umass.edu kernel: Lockdown: systemd-logind: > hibernation is restricted; see man kernel_lockdown.7 > Aug 06 17:47:32 ext.math.umass.edu NetworkManager[2111]: > [1628286452.1872] manager: sleep: sleep requested (sleeping: no enabled: yes) > Aug 06 17:47:32 ext.math.umass.edu NetworkManager[2111]: Yeah, I have a bit of a gripe with systemd that it doesn't, by default, insert the sleep request in the log. What exactly requested it? User hit the power button? User closed the lid? Some service like apcupsd requested it? I dunno, seems like an obvious thing that needs to go in the log, one line. And for NetworkManager to be the first indication that S3 or s2idle was requested is not helpful at all; I see this too in cases when I close the lid or the GNOME Shell power save timeout is reached (screen dim or whatever it's called). I don't know all the different ways sleep can be requested, but we need the logs to indicate where this request is coming from. I don't know for sure, but maybe you can boot with systemd.log_level=debug and get more detail; probably way too much detail, because it's very verbose. But I'm not sure how to narrow it down. Either way, I think it's worth a thread on systemd-devel@ to ask if a single line of info about sleep being initiated can be dumped into the log by default? -- Chris Murphy
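As a rough illustration of the kind of after-the-fact digging this thread is about: export the journal to a file, then search it for sleep-related keywords. The sample lines below are paraphrased from the log quoted above, and the keyword list is just a starting guess, not exhaustive.

```shell
# Real use would be: journalctl -b -o short-monotonic --no-hostname > /tmp/journal-sample.txt
# Here we fake the export with lines paraphrased from the log quoted above.
cat > /tmp/journal-sample.txt <<'EOF'
[  462.187] kernel: Lockdown: systemd-logind: hibernation is restricted; see man kernel_lockdown.7
[  462.188] NetworkManager[2111]: <info> manager: sleep: sleep requested (sleeping: no enabled: yes)
[  462.190] systemd[1]: Reached target Sleep.
EOF
# Pull out anything that hints at who initiated sleep.
grep -iE 'sleep|suspend|hibernat' /tmp/journal-sample.txt
```

This only shows what already landed in the log; as noted above, the real gap is that the initiator itself often isn't logged at all.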
Re: how to replace a disk in a raid-1 array by a larger one?
On Thu, Aug 5, 2021 at 4:50 AM José María Terry Jiménez wrote: > > On 5/8/21 at 12:35, François Patte wrote: > > > Hello, > > I have a raid-1 array of 2 disks (1 TB) and I want to replace these disks by > 2 2TB disks. Just as a point of translation/familiarization/comparison of the same task with btrfs...

btrfs replace start 1 /dev/sdc /mnt
btrfs replace start 2 /dev/sdd /mnt
btrfs filesystem resize 1:max /mnt
btrfs filesystem resize 2:max /mnt

* btrfs uses a concept of "devid" to keep track of devices; devices also each have a device item uuid that's totally unambiguous among all other btrfs file systems, but the devid is unambiguous within a specific btrfs, unlike a /dev/ node, which isn't always the same between reboots.
* use 'btrfs filesystem show' to see devids
* replace requires that the replacement drive is equal to or bigger than the device being replaced; replace is so much better than "btrfs device add" followed by "btrfs device remove" (which does file system resizes automatically) that you're best off shrinking the source a bit in order to be allowed to use "btrfs replace".
* file system resize is per device, specified by devid; if you don't specify a devid, then devid 1 is assumed
* btrfs replace is derived from the btrfs scrub kernel code; in effect it makes a virtual and temporary "mirror" between the to-be-replaced device and the replacement device, and does a scrub to quickly replicate in-use blocks from source to destination. 
* btrfs replace works with non-raid profiles, including live migration from a single drive to a new drive
* you can use 'btrfs replace' even if the drive is missing, even if you're mounted degraded
* writes go to both the outgoing and replacement devices
* following a crash/power fail, the replace will resume
* reminder that there's a built-in shortcut for all btrfs commands; you don't need to configure it or set it up. As long as you enter an unambiguous command, it'll be accepted; if you enter something ambiguous, it'll make suggestions:

$ sudo btrfs rep sta
btrfs replace: ambiguous token 'sta'
Did you mean one of these ?
        start
        status

'btrfs rep star' is unambiguous for 'btrfs replace start'; just make up your own shorthand... -- Chris Murphy
Re: Unable to delete BTRFS snapshot
On Tue, Aug 3, 2021 at 1:51 AM Sreyan Chakravarty wrote: > > Hi, > > I am faced with a strange problem. I have a BTRFS snapshot that I want to > delete but just can't. > > So I always manipulate BTRFS snapshots from a live-cd environment which I > setup using the following commands: > > sudo cryptsetup open /dev/sda3 dm-crypt > <> > sudo mount /dev/mapper/dm-crypt /mnt > > Now I can query the snapshots: > > $ sudo btrfs subvolume list /mnt > ID 448 gen 235989 top level 5 path root > ID 449 gen 0 top level 5 path before_live_cd_exp > > I want to delete the snapshot "before_live_cd_exp", which I am unable to do: > > $ sudo btrfs subvolume delete /mnt/before_live_cd_exp/ > ERROR: Not a Btrfs subvolume: Invalid argument > > What does it mean it's not a Btrfs subvolume ? It was picked up by the list > command. > > I also tried via the subvolid: > > $ sudo btrfs subvolume delete --subvolid 449 /mnt > Delete subvolume (no-commit): '/mnt/before_live_cd_exp' > ERROR: Could not destroy subvolume/snapshot: No such file or directory What messages appear in dmesg and coincide with the failed command? I haven't heard of anyone being unable to delete snapshots before. But also suspicious about this particular snapshot (which I also haven't ever seen) is its generation is 0. Even brand new subvolumes on a brand new file system have non-zero generation. > This is even weirder since as you can clearly see the directory exists in the > given path. > > Any suggestions on what is going wrong ? > > I should tell you that a while back I had a huge BTRFS file system crash, and > it took a lot of targeted help from the community to get my system to boot. This could be a side effect of an incomplete repair. 
-- Chris Murphy
Re: Fw: Kernel update again (solved)
On Sat, Jul 31, 2021 at 9:19 AM jarmo wrote: > > Now working. Just changed screen resolution from 1280x720 16:9 > which was good for my old eyes :) to 1920x1080. Only bad thing > is, that texts are so small, have to try increasing font size... > But, booting and running, so SOLVED... It's a workaround that suggests there's still a bug here, if an older kernel can do 1280x720 but a new kernel cannot. Sounds somewhat like this, but the kernel version doesn't match up: https://gitlab.freedesktop.org/drm/amd/-/issues/1589 Consider filing a new upstream bug there. Attach dmesg for both non-working and working kernels, and also lspci -vvnn (attached as a file). The best way to get this resolved, though, is tedious: kernel bisect. You know the working kernel is the 5.12 series and the non-working kernel is the 5.13 series. If you decide to bisect, it's easiest to test already-built kernels in koji to narrow down when the problem started. Some of the early 5.13-rc kernels for fc35 were built against a new glibc which had an inadvertent ABI break (as I understand it), and those kernels wouldn't install for me on Fedora 34. But I don't remember when they stopped working or resumed working... so this particular kernel series might be harder to test with prebuilt kernels in koji than it ordinarily would be. But the gist of the strategy is to test 5.13-rc7, rc5, rc3, rc1 to see if the problem goes away as you work backward. You are trying to find the last known working kernel and first known broken kernel. That'll help upstream figure out what commit broke it. Or you could start with the first 5.13 build in Fedora, which was kernel-5.13.0-0.rc0.20210428gitacd3d2859453.2.fc35. Note that this contains a partial git hash, acd3d2859453, which helps distinguish between these "not really rc builds", because officially there's no such thing as rc0. Next you'll work your way up the rc0 kernels and see which one breaks (has the problem). Voila, first known bad kernel. 
It is also possible to do 'git bisect' on the kernel's mainline git, and git bisect does most of the work. But it's slow, because you have to rebuild the kernel (just 'make -j4' is fine) between each git checkout step. But then all you have to do is boot it to test: either it works or it doesn't, you report this as "git bisect good" or "git bisect bad", and git tracks this as it continues doing checkouts to narrow down exactly which commit is the bad one. Everyone has their own preference I suppose, and mine is to just use the upstream kernel mainline git, and build the kernel the upstream way. I do use Fedora config files, e.g. cp /boot/config-5.12... .config to create a .config in the kernel source directory. https://kernelnewbies.org/KernelBuild Clone the latest rc tree. That's mainline. For this kind of problem you don't need to test any of the stable updates, because the problem must have been introduced during the 5.13 development cycle and just didn't get caught for some reason. That's the basics. It's not that hard. But there are lots of tricks and personal preferences that are all non-obvious, so if you get stuck, head to irc.libera.chat or matrix.org and ask in #fedora-qa or #fedora-kernel. -- Chris Murphy
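The 'git bisect run' flow described above can be tried risk-free on a throwaway repo before committing to kernel rebuilds. Everything below is made up for the demo: the repo is temporary, and grepping a file for the string BUG stands in for "build and boot the kernel".

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
# Eight commits; commit 6 introduces the "regression".
for i in 1 2 3 4 5 6 7 8; do
    echo "change $i" >> file.txt
    if [ "$i" -eq 6 ]; then echo "BUG" >> file.txt; fi
    git add file.txt
    git commit -qm "commit $i"
done
# HEAD is known bad, the root commit is known good.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
# Exit 0 = good, nonzero = bad; for the kernel this is your boot test.
git bisect run sh -c '! grep -q BUG file.txt' > /dev/null
git bisect log | grep 'first bad commit' | tee /tmp/bisect-result.txt
git bisect reset > /dev/null
```

The printed "first bad commit" line names commit 6, the one that introduced the bug; with the kernel tree the mechanics are identical, only the test step is expensive.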
Re: Kernel update again
On Fri, Jul 30, 2021 at 12:28 AM jarmo wrote: > > Got new kernel and that still won't work with > my HP laptop. > It boots so far, that I get login window, I give passwd > after that, black window, nothing else happens, except ctl+alt+del > works. > What I have last working kernel in that laptop, is > 5.12.15-300.fc34.x86_64 > I'm not any programmer nor bugzilla familiar. > I have sent to Chris Murphy both kernel boots, working and not > working ones. Hopefully I sent them right address... Select the new (problem) kernel version, but don't boot it. Edit instead, find the "resume=UUID" parameter, and delete it. Now boot with control-x or f10. See if you can boot now. There is a long-standing dbus-broker related bug some folks are hitting, and because the problem happens in the initramfs, the version of dbus-broker you have installed is not necessarily the one being used. i.e. the older kernel can have the older dbus-broker that doesn't have the problem. Updating to the latest dbus-broker that fixes the problem isn't enough; you have to rebuild the initramfs so that it gets the fixed copy baked into it. My understanding is that it's related to having the resume= boot parameter (which has not been the default for a couple of Fedora releases, which is why not everyone is hitting it at once). Gory details (long): https://bugzilla.redhat.com/show_bug.cgi?id=1976653 If that's not the problem, I'm not sure what the issue is. -- Chris Murphy
Re: Maybe offline updates aren't a bad idea
On Fri, Jul 30, 2021 at 2:00 PM Roger Heflin wrote: > > If it was just a plasma crash, then ssh and/or the alt keys would have > worked to switch terminals. > > Details said neither worked. The kernel and/or a significant part of > userspace was deadlocked and/or crashed. I wonder if the logs contain anything... i.e. from the boot following the failed update, use journalctl -b-1, and if it's 5 boots back use -b-5. It might have the start of the problem anyway. I also suspect a deadlock. It can make it seem like ssh is dead when it's just super slow. Or it may even time out unless a session has already started. Workstation edition and the KDE spin have improved resource control, which is a work in progress (on KDE you will also need to install uresourced). This attempts to ensure minimum resources are available for the desktop to be responsive. One possible limitation is IO pressure; we're not quite there yet implementing IO isolation. A deadlock, though, is a different problem, so the resource control work wouldn't help. If you ever see "task xxx:yyy blocked for more than 120 seconds" it's best to issue sysrq+w (i.e. echo w > /proc/sysrq-trigger) to dump extra debugging information into the kernel message buffer, and then file a bug attaching dmesg. -- Chris Murphy
Re: Extreme startup delay on F34
On Tue, Jul 27, 2021 at 3:36 PM old sixpack13 wrote: > > ... > > is your GPU from intel ? > > if so: > > - I get it too, sometimes while browsing with FF. > > - Crtl+Alt+F3 to get a console (?) and do dmesg => ...GPU Crash dump ... > > GPU hang... > > +++ EDIT +++ > I should have read the first thread again: it's an Intel GPU. > > anyway, after Crtl+Alt+F3 you should be able to do > "sync && sync && sudo systemctl reboot" > > saves the headache about a possible (?) btrfs filesystem corruption when > doing a "hardcore power off" > IIRC, a btrfs scrub ... afterwards could help There shouldn't be such a thing as file system corruption following forced power off. It's sufficiently well tested on ext4, xfs, and btrfs that if there's corruption, it's almost certainly a drive firmware bug getting write order wrong by not honoring flush/FUA when it should. Btrfs has a bit of an advantage in these cases because it's got a pretty simple update order: data + metadata -> flush/FUA -> superblock -> flush/FUA. So in theory, the superblock only points to trees that are definitely valid. All changes, data and metadata, get written into free space (copy-on-write, no overwrites), and therefore the worst case is that data being written is simply lost during a crash because a superblock update didn't happen before the crash. A superblock that points to bad/stale/missing trees means a new superblock made it to disk before the metadata; the metadata was lost. That's a firmware bug. We know that because there are astronomical amounts of tests done on all the file systems, including btrfs, using xfstests. And a number of those tests use dm-log-writes, which expressly tests for proper write ordering by the file system. Even in the case of such a firmware bug, Btrfs can sometimes recover by mounting with: mount -o usebackuproot mount -o rescue=usebackuproot (same thing) This picks an older root to mount instead of the one the super says should be the most recent. 
But this still implies the drive firmware did something wrong. btrfs scrub checks integrity: it compares the information in data and metadata blocks with the checksum for each block; this can only be done with the file system mounted. btrfs check checks the consistency of the file system; it's a metadata-only check, and it's not just checking that there's a checksum match but whether the metadata is actually correct; the file system needs to be unmounted. There are also the write-time and read-time tree checkers. Not everything is included in these checks, but they do catch certain kinds of corruption at either read time (it's already happened and is on disk, so let's stop here and not make it worse) or write time (it's not yet on disk, let's stop here). A common cause of write-time tree check errors is memory bit flips, but also sometimes kernel bugs and even btrfs bugs. I guess you could call it a nascent online fsck, but without repair capability. Currently it flips the file system read-only to stop further confusion and keep data safe. -- Chris Murphy
Re: resizing qemu image
On Wed, Jul 21, 2021 at 3:23 PM Chris Adams wrote: > And you have more space! I do this all the time with libvirt-managed > Linux VMs. I haven't yet gone through the necessary steps for the more > recent btrfs setup.

Guest:
# lsblk -o NAME,SIZE
NAME    SIZE
vda     100G
├─vda1  600M
├─vda2    1G
└─vda3 98.4G

Host:
$ sudo virsh blockresize uefivm /var/lib/libvirt/images/f34w-uefi-defaultbtrfs.raw 200G
Block device '/var/lib/libvirt/images/f34w-uefi-defaultbtrfs.raw' is resized

Guest:
# lsblk -o NAME,SIZE
NAME    SIZE
vda     200G
├─vda1  600M
├─vda2    1G
└─vda3 98.4G

(gdisk requires three changes: move the secondary GPT to the end of the disk, then delete the vda3 partition, and recreate it at max size, which is the default behavior)

# partprobe
# lsblk -o NAME,SIZE
NAME     SIZE
vda      200G
├─vda1   600M
├─vda2     1G
└─vda3 198.4G

# btrfs fi resize max /
Resize device id 1 (/dev/vda3) from 98.41GiB to max

Single-device btrfs resize is straightforward. But with multiple-device Btrfs, you need to specify the devid you want resized, otherwise it defaults to devid 1. -- Chris Murphy
Re: usb port enumeration changed?
On Mon, Jul 26, 2021 at 3:25 AM Eyal Lebedinsky wrote: > > Is this an intentional change? You might search https://lore.kernel.org/linux-usb/ and see if anything pops up; and if not then ask on the same list, linux-...@vger.kernel.org. -- Chris Murphy
Re: New kernel
On Fri, Jul 23, 2021 at 1:56 PM Chris Murphy wrote: > > Even better is if you can reproduce the high load and try to capture > one or more of the following: sysrq+l which will dump the result into Actually, sysrq+w is also useful. The root user can use: echo l > /proc/sysrq-trigger echo w > /proc/sysrq-trigger And their result is dumped into dmesg/journal. Some of them, like sysrq+t, are so huge they fill up the kernel message buffer, but journalctl will show the output since it's writing to a persistent log. -- Chris Murphy
Re: New kernel
On Fri, Jul 23, 2021 at 12:48 PM Patrick O'Callaghan wrote: > > On Fri, 2021-07-23 at 21:09 +0300, jarmo wrote: > > Something went wrong. Got updates of new kernel, > > kernel-5.13.4-200.fc34.x86_64. > > > > Now my HP laptop won't start. I can write to login window, > > asked passwd. After that everything hangs. > > with kernel 5.12.15-300.fc34.x86_64 works. I have system in legacy > > mode, no LVM no UEFI. > > Only partitions there are /swap and / FS is EXT4... > > And my wife's Toshiba Satellite: after I disabled /dev/zram0 from > > system, because from boot.log I could see that it tried to create > > SWAP on /dev/zram0. That Toshiba has also partitions /swap and / > > EXT4, no LVM, no UEFI. > > After disabling, it boots, but takes terrible time, "fedora circle" > > stops and takes long time to start XFCE4 VM. > > I have that kernel and had to hard-reset my system after load average > went to over 30, apparently due to a BTRFS cache flush process. May or > may not be related. It has never happened before. If you can find the boot this happened in and file a bug against the kernel, that would be great. This can help find the boot that it happened with: journalctl --list-boots I also will use this method to iterate to find the boot, with the kernel appearing in the first line: journalctl -b-1 journalctl -b-2 and so on, going back each boot. For the log to attach, you can filter just for kernel messages with journalctl -b-3 -k -o short-monotonic --no-hostname > dmesg.txt So that's the 3rd boot back, kernel messages only, monotonic time stamps, and no hostname, directed to a file that you can attach to the bug. Even better is if you can reproduce the high load and try to capture one or more of the following: sysrq+l, which will dump the result into dmesg and will end up in the journal, which you can filter similarly: journalctl -b -k -o short-monotonic --no-hostname > dmesg-cpustack.txt And attach to the bug report. 
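The incantation above is easy to mistype, so here's a tiny helper that just prints the journalctl command for "N boots back". The function name is made up for illustration; it isn't a standard tool.

```shell
# Hypothetical helper: print the journalctl command that captures kernel-only
# messages from N boots back, with monotonic timestamps and no hostname.
dmesg_for_boot() {
    n=$1    # 1 = previous boot, 2 = two boots ago, ...
    out=$2  # file the log should go to
    echo "journalctl -b-${n} -k -o short-monotonic --no-hostname > ${out}"
}
dmesg_for_boot 3 dmesg.txt | tee /tmp/journalctl-cmd.txt
```

Printing the command first lets you sanity-check the boot offset against 'journalctl --list-boots' before running it for real.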
Also, you can cc me: bugzilla at colorremedies dot com -- Chris Murphy
Re: Extreme startup delay on F34
lddecodes=io+mem,decodes=io+mem:owns=io+mem hmmm. Chris Murphy
Re: Extreme startup delay on F34
> On Sun, Jul 18, 2021 at 11:12 AM Joe Zeff wrote: > > > > On 7/18/21 7:38 AM, John Mellor wrote: Oh nice, the previous email is meant for John, not Joe. -- Chris Murphy
Re: Extreme startup delay on F34
On Sun, Jul 18, 2021 at 11:12 AM Joe Zeff wrote: > > On 7/18/21 7:38 AM, John Mellor wrote: > > Here is the top of the current systemd-analyze blame output: > > > > $ systemd-analyze blame > > 27.625s plymouth-quit-wait.service This counts the time to get to the login window; plymouth itself doesn't have a meaningful amount of startup time cost. It's just getting dinged because it's waiting for everything else required for startup prior to it being able to exit. So you can ignore it. > > 21.726s udisks2.service > > 13.932s libvirtd.service > > 13.711s systemd-journal-flush.service All three of these are suspiciously long. Post /etc/fstab, /proc/cmdline, and the output from 'journalctl -b -o short-monotonic --no-hostname > journal.log' -- Chris Murphy
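Since plymouth-quit-wait.service mostly reflects everything it waited on, one way to read blame output is to drop it first. A small illustration, using the numbers quoted above as sample data:

```shell
# Sample data: the blame output quoted in this thread.
cat > /tmp/blame-sample.txt <<'EOF'
27.625s plymouth-quit-wait.service
21.726s udisks2.service
13.932s libvirtd.service
13.711s systemd-journal-flush.service
EOF
# Real use would be: systemd-analyze blame | grep -v plymouth-quit-wait | head
grep -v plymouth-quit-wait /tmp/blame-sample.txt | tee /tmp/blame-filtered.txt
```

What's left is the list of services actually worth investigating, in descending time order as systemd-analyze already prints it.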
Re: Editing the Fedora Help documentation
On Fri, Jul 16, 2021 at 6:38 PM Devin Prater via users wrote: > > Hi all. I’m a person who is blind, and have used Fedora off and on for years, > but may have found my forever home in Fedora, after finding Arch to be too > advanced for me, but Debian to have too old of packages, especially in > regards to accessibility. > Now, I looked at the accessibility parts of Fedora documentation, like the > page at: > > https://docs.fedoraproject.org/en-US/Fedora/12/html/Accessibility_Guide/index.html > > There are parts that talk about using Emacs with Emacspeak to do things like > browsing the web or doing email. These days, Firefox and Thunderbird with > Orca are good enough to do that. So, is there any way I can help? Hi Devin, you might check with the docs team via the docs mailing list? https://lists.fedoraproject.org/archives/list/d...@lists.fedoraproject.org/ And the docs project list of sub projects https://pagure.io/group/fedora-docs Which includes the accessibility guide sub project https://pagure.io/accessibility-guide -- Chris Murphy
Re: failed to mount /boot/efi
On Fri, Jul 16, 2021 at 1:20 PM Patrick Dupre wrote: > > I collected the maximum information that I could > > Failed Mounting /boot/efi > See "systemctl boot-efi" for details > Dependency failed for local file system > Dependency failed for mark the ... to relabel after reboot > Stopped > You are in emergency mode. After logging in type "journalctl -xb" to view > system logs > "systemctl reboot" to reboot > "systemctl default" to exit > cannot open access to console, the root account is locked > see sulogin(8) man page for more details > Press Enter to continue > Rebooting system manager confirmation > Starting default target > You are in emergency mode > > What are your suggestions ? > > > I'm not sure why /boot/efi is failing to mount need to see logs for > > > that, but you can also add nofail to the fstab for it so at least boot > > > won't hang. Did you add nofail to the fstab options for the /boot/efi line? Can you post 'journalctl -b -o short-monotonic --no-hostname' somewhere? -- Chris Murphy
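For reference, adding nofail to the /boot/efi line discussed here would look something like this. The UUID is the one quoted earlier in this thread; adjust it to your own system:

```
UUID=B2EF-0CE4  /boot/efi  vfat  umask=0077,shortname=winnt,nofail  0 2
```

With nofail, a mount failure no longer fails local-fs.target, so boot continues instead of dropping to emergency mode.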
Re: failed to mount /boot/efi
On Thu, Jul 15, 2021 at 12:31 PM Patrick Dupre wrote: > > I have been able to boot on a rescue kernel! > Linux localhost.localdomain 5.6.13-100.fc30.x86_64 #1 SMP Fri May 15 00:36:06 > UTC 2020 x86_64 x86_64 x86_64 GNU/Linux > > bootctl > systemd-boot not installed in ESP. > System: > Firmware: n/a (n/a) > Secure Boot: disabled > Setup Mode: user > TPM2 Support: yes > Boot into FW: supported > > Current Boot Loader: > Product: n/a > Features: ✗ Boot counting > ✗ Menu timeout control > ✗ One-shot menu timeout control > ✗ Default entry control > ✗ One-shot entry control > ✗ Support for XBOOTLDR partition > ✗ Support for passing random seed to OS > ✗ Boot loader sets ESP partition information > ESP: n/a > File: └─n/a > > > Is this normal ? Yes, bootctl is for the systemd-boot bootloader, which Fedora doesn't use right now. -- Chris Murphy
Re: failed to mount /boot/efi
On Thu, Jul 15, 2021 at 12:01 PM Patrick Dupre wrote: > > Hello, > > On a dual boot machine (one fc32, and one fc34). > after I upgraded the fc32 to fc34, I cannot boot on the new fc34. What does happen? I'm not sure what "cannot boot" means because it doesn't tell me how it's failing. > I get > failed to mount /boot/efi > /boot/efi is shared by both machines > > Here is the fstab of the fc34 machine > > # > UUID=a5b809ae-61c2-4d67-b0f3-109e99faad39 / ext4 > defaults1 1 > UUID=B2EF-0CE4 /boot/efi vfat > umask=0077,shortname=winnt 0 2 > > > The /boot/efi seem correct (it is currently mounted by the fc32). > > blkid |grep B2EF > /dev/sdb1: UUID="B2EF-0CE4" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI > System Partition" PARTUUID="0fe1d405-6683-456c-a306-140e882e322c" > blkid |grep 9faad39 > /dev/sda6: LABEL="fedora_rescue" UUID="a5b809ae-61c2-4d67-b0f3-109e99faad39" > BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="OS" > PARTUUID="41b9d134-b477-4ef5-8602-15fdc482a51e" > > What should I do ? > It seems that the installer create /boot/efi/EFI/fedora/grub.cfg I'm not really certain the best way to share a single /boot/efi/EFI/fedora between two Fedoras. My idea would be for Fedora 34 to be primary, and exclude shim and all of grub2 from being updated on Fedora 32. Well, ha, as I think about it, that doesn't matter anymore because there are no more updates, it's EOL. What you really want to do though is share a single /boot because the single bootloader needs to see a single set of configuration files found in /boot/loader/entries. And the f32 snippets will point to the f32 kernels and f32 system root. And the f34 snippets will point to the f34 kernels and system root. 
It definitely works, except for one problem: https://bugzilla.redhat.com/show_bug.cgi?id=1874724

> with
> search --no-floppy --fs-uuid --set=dev a5b809ae-61c2-4d67-b0f3-109e99faad39
> set prefix=($dev)/boot/grub2
> export $prefix
> configfile $prefix/grub.cfg
>
> (I could not boot either)
> Then I run (from the fc32)
> grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg.new
>
> and used this file to boot (by renamed).

OK, I can't tell where the f32 /boot is, but you need all the snippets in one /boot/loader/entries that both f32 and f34 share. I'm not sure why /boot/efi is failing to mount; I'd need to see logs for that. But you can also add nofail to its fstab entry so at least boot won't hang. The gist of what you need is one f34 bootloader in /boot/efi/EFI/fedora/ whose grub.cfg points to the real grub.cfg at /boot/grub2, which in turn loads blscfg.mod, which finds and reads /boot/loader/entries and creates a GRUB menu from all the snippets found there.

-- Chris Murphy
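As a sketch of the pieces involved: here is what the nofail fstab entry and a BLS snippet might look like. The UUIDs are the ones quoted earlier in the thread; the kernel version, title, and machine-id placeholder are hypothetical:

```
# /etc/fstab: add nofail so a missing ESP doesn't hang boot
UUID=B2EF-0CE4  /boot/efi  vfat  umask=0077,shortname=winnt,nofail  0 2

# /boot/loader/entries/<machine-id>-5.12.8-300.fc34.x86_64.conf (hypothetical
# kernel version): one snippet per installed kernel, all sharing one /boot
title Fedora 34 (Workstation Edition)
version 5.12.8-300.fc34.x86_64
linux /vmlinuz-5.12.8-300.fc34.x86_64
initrd /initramfs-5.12.8-300.fc34.x86_64.img
options root=UUID=a5b809ae-61c2-4d67-b0f3-109e99faad39 ro
```

With both sets of snippets in one /boot/loader/entries, blscfg.mod builds a single menu listing the f32 and f34 kernels side by side.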
Re: GRUB_DEFAULT entry
On Tue, Jul 6, 2021 at 8:11 AM Robert McBroom via users wrote:
> grub seems to now ignore the settings in /etc/default/grub. What is
> needed to restore the function of setting a specific boot option?

Whatever is in /etc/default/grub will be set in each BLS snippet if you run grub2-mkconfig -o /etc/grub2.cfg. New kernels will get whatever is currently in /proc/cmdline. If you use grubby with --update-kernel=ALL then all possibilities are modified regardless of Fedora release version (grubenv or BLS snippets or grub.cfg as the case may be). https://fedoraproject.org/wiki/GRUB_2?rd=Grub2#Changing_kernel_command-line_parameters_with_grubby

-- Chris Murphy
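A typical grubby session for the above looks something like this (the example argument is illustrative, not a recommendation):

```shell
# Inspect the arguments currently attached to each boot entry
sudo grubby --info=ALL

# Add or remove a kernel command-line argument on all entries at once,
# wherever they happen to be stored (grubenv, BLS snippets, or grub.cfg)
sudo grubby --update-kernel=ALL --args="audit=0"
sudo grubby --update-kernel=ALL --remove-args="rhgb quiet"
```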
Re: Clonezilla.
On Sat, Jul 3, 2021 at 10:18 AM Ger van Dijck wrote:
> Hi Fedora ,
>
> Thanks to George N. Withe 111 , Bob Marcom, Erik P. Olsen and Klaus Peter Schrage.
>
> I did install rpmshere-release with dnf : Runs fine.
>
> I did install clonezilla-2.3.1-1.noarch.rpm with dnf and got following message: Nothing provides drbl-partimage >=0.6.7 needed by clonezilla-2.3.1-1 noarch.
> Nothing provides mkswap-uuid needed by clonezilla-2.3.1-1.noarch.
>
> So , What now ?

Hmm, I'm not finding clonezilla in the Fedora repositories, or in RPM Fusion. And at https://clonezilla.org/downloads.php I'm not finding it packaged for rhel/centos/fedora. And 2.7.2 is current at clonezilla.org, so I'm not sure about the provenance of clonezilla-2.3.1-1.noarch.rpm, but I suspect it's a stale package given the version.

I'm not sure what to recommend without knowing the use case. For a generic use case, sync or backup, I'd use something either rsync or borg based. I tend to consider block-based backups (dd, ddrescue, dd_rescue) pretty much for scraping, i.e. for emergencies and recovery operations where you must have an identical copy. That's usually not what you want for a backup.

Btrfs is compatible with the above options but adds some unique capabilities of its own:

* snapshot send+receive
Makes an essentially identical copy of a snapshot. There are advantages (immediately accessible just by mounting the file system; full checksumming; simpler incrementals management) if you use Btrfs for the destination file system, but it is possible to use 'btrfs send -f' to create a file on any file system. But that file must be "received" on a Btrfs file system to navigate it.

* seed+sprout
Setting the "seed" flag on a Btrfs file system makes it read-only. Mount it, 'btrfs device add' a 2nd device, followed by 'btrfs device remove' of the 1st device, and it will kick off replication at a block group level.
The copy is identical in every meaningful way, but the new destination device can be any size. Of course, it needs to be at least as large as the data usage on the seed. [1]

-- Chris Murphy

[1] Sorta :) Once the 2nd device is added, you are allowed to delete things you don't want replicated. They are retained on the read-only seed, but are not replicated when you remove the seed. This might all seem a bit confusing but it's all a direct function of copy-on-write. The "delete" is just writing an updated portion of the file system tree that lacks references for the files/dirs you've deleted, and this updated portion of the tree is only written to the 2nd device (all writes get directed to this 2nd device because the 1st device, the "seed", is read only). There's more, but if I don't stop here even I will get confused. The feature is quite nuanced and powerful, including even stacked seed devices. Considering devices could be loop mounted files, it's perhaps an underrated feature so far.
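The two mechanisms described above look roughly like this on the command line (device names and mount points are hypothetical, and this is a sketch, not a tested procedure):

```shell
# snapshot send/receive: replicate a read-only snapshot
sudo btrfs subvolume snapshot -r /home /home/.snap-today
sudo btrfs send /home/.snap-today | sudo btrfs receive /run/media/backup
# or serialize to a file on any filesystem; the stream must later be
# 'received' onto a Btrfs filesystem to be navigable
sudo btrfs send -f /mnt/anyfs/snap-today.stream /home/.snap-today

# seed/sprout: flag a filesystem as a read-only seed, then replicate it
sudo btrfstune -S 1 /dev/sdb1                  # filesystem must be unmounted
sudo mount /dev/sdb1 /mnt/seed
sudo btrfs device add /dev/sdc1 /mnt/seed
sudo mount -o remount,rw /mnt/seed             # writes now go to the new device
sudo btrfs device remove /dev/sdb1 /mnt/seed   # kicks off block group replication
```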
Re: slow startup process and dmesg times
[ 8.611345] psmouse serio1: Failed to enable mouse on isa0060/serio1
[ 17.872494] IPv6: ADDRCONF(NETDEV_CHANGE): wlp2s0: link becomes ready

I'm not sure exactly how it works, but NetworkManager-wait-online.service is intended to slow things down in case there are other services (like automounts, FreeIPA or AD) that depend on networking being up. I don't have such a setup, so I disable NetworkManager-wait-online.service. I have one computer where this makes a difference in boot time. With it off, other services can be started while networking comes up whenever it's going to come up, improving overall boot time.

[ 21.458981] Bluetooth: RFCOMM ver 1.11
[ 30.323705] nouveau :01:00.0: Enabling HDA controller

The 2nd one strikes me as a bug because it was reported almost 11 seconds earlier. There was a report on reddit of someone having really slow boot times that was resolved by installing the proprietary nvidia driver; the consensus there also seemed to be to install it. https://rpmfusion.org/Howto/NVIDIA

It is a bit frustrating to recommend something proprietary, but nvidia has generally been uncooperative when it comes to helping make the open source driver better. It also takes some effort from users to give good bug reports (I don't know the status of nouveau development and whether good bug reports help).

Ahh, and here it is a 3rd time...

[ 47.987079] nouveau :01:00.0: Enabling HDA controller

This is a bug. Maybe there's already an upstream bug report and a workaround is published in the bug. Or just install the proprietary driver via the RPM Fusion repo.
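For anyone wanting to try the same thing, the check and the change are one-liners (only sensible if nothing on the machine orders itself after network-online.target):

```shell
# See which units contribute most to boot time
systemd-analyze blame | head -n 10

# Stop waiting for the network during boot
sudo systemctl disable NetworkManager-wait-online.service
```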
-- Chris Murphy
Re: DNF not Installing all Updates?
PackageKit uses libdnf. libdnf is the core library for dnf, PackageKit, and rpm-ostree. You should generally get the same results using either dnf or PackageKit.

-- Chris Murphy
Re: packagekitd Hogging CPU
On Wed, Jun 23, 2021, 3:44 AM Patrick O'Callaghan wrote:
> Interesting. That sounds superficially similar to Android's A/B system
> update method. Is there work being done on getting this into Fedora?

Folks are looking at multiple ways of doing it. All options imply some kind of layout change, and we need to consider upgrades. It has to work for dnf and PackageKit, etc.

-- Chris Murphy
Re: Long wait for start job
On Tue, Jun 22, 2021 at 4:05 AM Patrick O'Callaghan wrote: > > One other data point and I'll leave it unless anything else turns up: I > switched the two drives in the dock and got this from dmesg: > > [Tue Jun 22 10:52:03 2021] usb 4-3: USB disconnect, device number 2 > [Tue Jun 22 10:52:03 2021] sd 6:0:0:0: [sdd] Synchronizing SCSI cache > [Tue Jun 22 10:52:03 2021] sd 6:0:0:0: [sdd] Synchronize Cache(10) failed: > Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK > [Tue Jun 22 10:52:03 2021] sd 6:0:0:1: [sde] Synchronizing SCSI cache > [Tue Jun 22 10:52:03 2021] sd 6:0:0:1: [sde] Synchronize Cache(10) failed: > Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK > [Tue Jun 22 10:52:27 2021] usb 4-3: new SuperSpeed Gen 1 USB device number 4 > using xhci_hcd > [Tue Jun 22 10:52:27 2021] usb 4-3: New USB device found, idVendor=174c, > idProduct=55aa, bcdDevice= 1.00 > [Tue Jun 22 10:52:27 2021] usb 4-3: New USB device strings: Mfr=2, Product=3, > SerialNumber=1 > [Tue Jun 22 10:52:27 2021] usb 4-3: Product: ASM1156-PM > [Tue Jun 22 10:52:27 2021] usb 4-3: Manufacturer: ASMT > [Tue Jun 22 10:52:27 2021] usb 4-3: SerialNumber: > [Tue Jun 22 10:52:27 2021] scsi host6: uas > [Tue Jun 22 10:52:27 2021] scsi 6:0:0:0: Direct-Access ASMT > ASM1156-PM 0PQ: 0 ANSI: 6 > [Tue Jun 22 10:52:28 2021] scsi 6:0:0:1: Direct-Access ASMT > ASM1156-PM 0PQ: 0 ANSI: 6 > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: Attached scsi generic sg4 type 0 > [Tue Jun 22 10:52:28 2021] sd 6:0:0:1: Attached scsi generic sg5 type 0 > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] 1953525168 512-byte logical > blocks: (1.00 TB/932 GiB) > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] 4096-byte physical blocks > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] Write Protect is off > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] Mode Sense: 43 00 00 00 > [Tue Jun 22 10:52:28 2021] sd 6:0:0:1: [sde] 1953525168 512-byte logical > blocks: (1.00 TB/932 GiB) > [Tue Jun 22 10:52:28 2021] sd 6:0:0:1: [sde] 4096-byte physical 
blocks > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] Write cache: enabled, read > cache: enabled, doesn't support DPO or FUA > [Tue Jun 22 10:52:28 2021] sd 6:0:0:1: [sde] Write Protect is off > [Tue Jun 22 10:52:28 2021] sd 6:0:0:1: [sde] Mode Sense: 43 00 00 00 > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] Optimal transfer size 33553920 > bytes not a multiple of physical block size (4096 bytes) > [Tue Jun 22 10:52:28 2021] sd 6:0:0:0: [sdd] Attached SCSI disk > [Tue Jun 22 10:52:58 2021] sd 6:0:0:1: tag#26 uas_eh_abort_handler 0 uas-tag > 2 inflight: IN <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<*** > [Tue Jun 22 10:52:58 2021] sd 6:0:0:1: tag#26 CDB: Mode Sense(6) 1a 00 08 00 > 04 00 > [Tue Jun 22 10:52:58 2021] scsi host6: uas_eh_device_reset_handler start > [Tue Jun 22 10:52:58 2021] usb 4-3: reset SuperSpeed Gen 1 USB device number > 4 using xhci_hcd > [Tue Jun 22 10:52:58 2021] scsi host6: uas_eh_device_reset_handler success > [Tue Jun 22 10:52:58 2021] sd 6:0:0:1: [sde] Write cache: enabled, read > cache: enabled, doesn't support DPO or FUA > [Tue Jun 22 10:52:58 2021] sd 6:0:0:1: [sde] Optimal transfer size 33553920 > bytes not a multiple of physical block size (4096 bytes) > [Tue Jun 22 10:52:58 2021] sd 6:0:0:1: [sde] Attached SCSI disk > > The uas message is again from device 6:0:0:1 as before, even though the > disks have been swapped. IOW the issue definitely comes from the dock, > not from the physical drives themselves. I don't know if it's coming from the dock's usb chipset or the usb-sata adapter on the drive. That adapter has a chipset on it also. These are exactly the sorts of problems often resolved by putting both drives on a USB hub, and then the hub into the dock. 
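One commonly suggested experiment for uas trouble is to tell the kernel to skip the uas driver for that chipset (VID:PID 174c:55aa from the log above) and fall back to usb-storage. This is a sketch of the standard quirks mechanism; whether it helps with this particular dock is untested:

```shell
# Persistent: disable uas for this device via a modprobe option
echo 'options usb-storage quirks=174c:55aa:u' | sudo tee /etc/modprobe.d/no-uas-asmt.conf
# The storage drivers load from the initramfs, so regenerate it, then reboot
sudo dracut -f

# One-off test instead: add this to the kernel command line for a single boot
#   usb-storage.quirks=174c:55aa:u
```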
-- Chris Murphy
Re: Long wait for start job
On Mon, Jun 21, 2021 at 4:10 PM Patrick O'Callaghan wrote:
> There is a single dock with two slots and no other type of enclosure.
> The disks are internal SATA units inserted directly into the slots. The
> dock has a single dedicated USB-3 connection direct to the system
> motherboard with no intervening hub or splitter. It is independently
> powered via a wall socket and power block.

Do the error messages I referred to happen when the system is booted with the drives attached separately? Or do they happen only when connected to a particular port on the dock?

-- Chris Murphy
Re: packagekitd Hogging CPU
On Tue, Jun 22, 2021 at 11:24 AM Anil Felipe Duggirala wrote:
> I don't know a lot about Packagekit, or anything else really.
> But I will take this chance to complain again. When rebooting or shutting down my laptop, many times the process is delayed (up to 1.5 minutes) and it displays its waiting for a Packagekit job to finish. Thats really annoying and I have not suffered from anything similar on Linux before.
> Just saying, if anyone knows of a solution for this, Im all ears.

It is annoying, and a known problem. I'm not sure if it's given a quit or terminate signal at shutdown, but it's become sufficiently busy that it ignores it. And then systemd hits a timeout 1m30s later and kills it anyway. There is a Workstation ticket about shortening shutdown times: https://pagure.io/fedora-workstation/issue/163

-- Chris Murphy
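The 1m30s figure is systemd's default stop timeout. You can confirm what applies to the unit and, as a workaround, cap it with a drop-in; the value below is illustrative, and shortening it risks killing PackageKit mid-transaction:

```shell
# What stop timeout currently applies to packagekit.service
systemctl show packagekit.service -p TimeoutStopUSec

# Illustrative drop-in to cap it (use with care: interrupting an rpm
# transaction is exactly what you don't want)
sudo systemctl edit packagekit.service
#   [Service]
#   TimeoutStopSec=15s
```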
Re: packagekitd Hogging CPU
On Tue, Jun 22, 2021 at 10:58 AM Joe Zeff wrote:
> On 6/22/21 10:29 AM, George N. White III wrote:
> > The Gnome software manager has the added advantages that it a) forces a reboot and b) offers flatpak versions of major applications.
>
> The forced reboot is only an advantage if some of the upgrades require a reboot to get them started. Most upgrades only need to have their package restarted, and that only if it was running when the upgrade occurs. This is what needs-restarting is for, but if you don't know how to use dnf (and don't want to) it's not going to do you any good. And, for that matter, what do people like that do if they're not set up with Gnome? My personal opinion is that people like that should be using Ubuntu, as that distro is specifically designed for Windows refugees. (I've set two people up with Linux because they wanted to get away from Windows, and both of them are happily running Xubuntu.)
>
> Sorry for ranting, but forced reboots are a pet peeve of mine and you just petted it.

https://lwn.net/Articles/702629/

Kind of an old argument at this point. One of the things I'm curious about right now:
https://pagure.io/libdnf-plugin-txnupd
https://kubic.opensuse.org/documentation/transactional-update-guide/transactional-update.html

It's a more sophisticated variation on one I came up with by (rw) snapshotting the 'root' subvolume, mounting it, and using chroot to do a full system update (and upgrade). It's an out-of-band or side-car update. No reboot to a special environment. If it goes wrong, just delete it. If there's a crash or power failure, you still boot the untouched current root. Only once it completes, and optionally passes some tests, would the root be switched to the updated snapshot, followed by a reboot. And the user can choose when that happens.
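The snapshot-and-chroot approach described above can be sketched roughly like this, assuming a Fedora-style Btrfs layout with a 'root' subvolume on /dev/sda3 (names and paths are hypothetical, error handling omitted, and dnf --installroot stands in for a full chroot with bind mounts):

```shell
# Mount the top level of the Btrfs volume so subvolumes are visible
sudo mount -o subvolid=5 /dev/sda3 /mnt/top

# Writable snapshot of the running root; the current root is never touched
sudo btrfs subvolume snapshot /mnt/top/root /mnt/top/root.next

# Update the snapshot out of band while the system keeps running
sudo dnf --installroot=/mnt/top/root.next upgrade -y

# On failure or cold feet, throw it away:
#   sudo btrfs subvolume delete /mnt/top/root.next
# On success, point the bootloader's rootflags=subvol= at root.next and reboot.
```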
-- Chris Murphy
Re: packagekitd Hogging CPU
On Mon, Jun 21, 2021 at 7:18 AM Tim Evans wrote:
> $ uname -a
> Linux harrier 5.12.11-300.fc34.x86_64 #1 SMP Wed Jun 16 15:47:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
>
> As I sit here, my Lenovo T530 laptop is reporting packagekitd is taking anywhere from 20 to 40 percent of CPU, per 'top.' There is continuous disk activity. Nothing going on with the system other than Thunderbird e-mail and Chrome browser.
>
> This seems to go on, with CPU percentage growing over time, and rebooting cures this, but it comes back after the system has slept overnight (lid closed).

PackageKit and dnf keep separate metadata in /var/cache, and they update it periodically. PackageKit seems to do this on login, but I've also noticed it trigger an update when I switch networks. And dnf is on a timer. Either of them can use a lot of cpu; it just depends on how much updating they need.

Recently I've been experimenting with cgroups to restrict the amount of cpu packagekit gets via the packagekit.service unit, i.e. a restriction specific to that service unit, not on all instances of packagekit. Thus it doesn't affect offline updates, where it can still use 100% cpu if need be. But it's possible GNOME Software could be a bit slower since it uses packagekit, though I haven't noticed any ill effect so far.

$ sudo systemctl edit packagekit.service

Read the file that appears and insert these two lines where it says to:

[Service]
CPUQuota=25%

Save it out, and when the unit restarts (logout and login, or do the daemon-reload followed by service restart dance) you'll see packagekit uses this value as a maximum.
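To confirm the quota took effect after the restart, systemd exposes it as a per-second CPU time budget (a 25% quota should be reported as 250ms):

```shell
sudo systemctl daemon-reload
sudo systemctl restart packagekit.service
systemctl show packagekit.service -p CPUQuotaPerSecUSec
```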
-- Chris Murphy
Re: No Swap Allocation in FSTAB
On Mon, Jun 21, 2021 at 1:25 PM Samuel Sieb wrote:
> On 2021-06-21 1:05 a.m., Bill Shirley wrote:
> > The server is running on Raid-1 SSDs with 64GB of RAM
> >
> > Bill
> >
> > On 6/21/2021 3:41 AM, Samuel Sieb wrote:
> >> On 6/20/21 7:25 PM, Bill Shirley wrote:
> >>> One of the first things I did after installing F34 is disable swap-on-zram:
> >>>    touch /etc/systemd/zram-generator.conf
> >>> and define a swap partition in fstab.
> >>
> >> Why?
>
> I don't see how that's an answer to why you would disable zram. Especially when your later reply shows that you're not really even using the disk swap anyway.

Lots of folks don't realize that zram devices don't use any memory up front (a small amount of overhead based on the size of the zram device and the driver, less than 0.1% of the zram device size), and that allocation is dynamic. But it's true that swap efficacy as a percentage cannot be 100% like it is with disk or file based swap. That it's so much faster makes up for the lower efficacy.

By that I mean: a 4 KiB page being swapped out to disk means you free 4 KiB of RAM and consume 4 KiB on disk. With zram based swap, it's still memory, but it's compressed. So you free up 4 KiB of uncompressed memory for other things, and consume ~2 KiB of RAM for that compressed page in the zram device. The efficacy is related to the compression ratio you get, which is anywhere from 2:1 to 3:1, so an efficacy of 50% to about 67%. For sure it's better than no swap, which has an efficacy of 0% :)

In fact that's misleading, because when a system can't evict dirty pages at all, it's forced to do file page reclaim, i.e. libraries, executables, and configurations that exist as files on disk can be removed from memory via reclaim, because they're already on disk and can just be read back in. But under memory pressure, reclaim can look a lot like swap thrashing and even compete with it. So some swap is better, and also due to SSDs, we're probably better off with a higher swappiness value, i.e. give equal weight to paging out anonymous pages as to reclaim.

But some workloads are different, and you can actually get a kind of in-memory swap thrashing, same as on disk. It's so fast that it's normally not a problem. Until it is. So we definitely want to keep an eye on reports of folks having issues that sound like hangs or lockups; any time the desktop becomes unresponsive we want to find out what's going on. The first thing to check in those cases is whether the system is running uresourced. It's only enabled by default on GNOME right now, but it's considered safe to run as an opt-in for other desktops, though we want to keep an eye on possible regressions.

There is still more work to do in this area, in particular wiring up the IO isolation. Any time there's memory pressure it'll quickly lead to IO pressure: reclaim and swap.

-- Chris Murphy
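The efficacy arithmetic above can be checked with a few lines of shell: at 2:1 a swapped page nets you back half its size, and at 3:1 about two-thirds:

```shell
# Net RAM freed by compressing one 4 KiB page into zram at a given ratio
page=4096
for ratio in 2 3; do
  kept=$((page / ratio))     # RAM still held by the compressed copy
  freed=$((page - kept))     # net RAM returned to the system
  echo "ratio ${ratio}:1 frees ${freed} of ${page} bytes ($((100 * freed / page))%)"
done
```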
Re: No Swap Allocation in FSTAB
On Mon, Jun 21, 2021 at 10:51 AM Barry Scott wrote:
> The SSDs are a lot slower than compressing a page into RAM.
>
> There was extensive discussion on the Fedora Devel list when this change was proposed.
>
> Personally I was convinced that this change is an improvement for any system that is under memory pressure. I'm not going to try to recall the discussion as I may get some details wrong.

At a high level, zram is a ram disk that has transparent compression. You can format it with mkswap or any other file system and use it as a block device. But the nuts and bolts of memory management, reclaim, and paging in and out are quite complicated. There's work happening since kernel 5.8 to make swap a lot more effective, and ongoing work to make mm and zswap do the right thing.

zswap is a different thing altogether: it's a front cache that uses a compressed memory pool as a cache for a conventional swap file or partition. It works on a least-recently-used basis, so it has a way of determining what's stale and pushing that out to disk, while keeping recent things in the (memory) cache. In this case we don't have the concerns with priority inversions that can happen when a particular sequence of events occurs:

1. zram based swap has higher priority
2. conventional swap has lower priority
3. early workloads fill up zram with stale things not used again later
4. the general workload ends up using disk based swap

So this is not really any worse than before at this point, except it is consuming some memory just to keep stale things available in case they get used. And if they do get used, it'll be quite fast. That's not obviously a bad thing, except it is taking a limited resource off the table. That's atypical for desktop workloads.
But you can imagine that the more resources a system has, the more variable the workload can be, and you could see early swap fill up a zram device which then can't be reused until the programs that created those anonymous pages quit, in the extreme case. Anyway, how to optimize was the whole point of moving to zram based swap, and it won't stop there. There is still more work happening to make zswap cgroup-aware. Neither zram nor zswap is at the moment, so for resource control purposes we actually need a plain swap partition or swap file; it can't even be on dm-crypt at the moment. And one of the nice things about zram based swap is that it's volatile, so we have fewer security concerns about questionable things ending up on persistent storage that don't even go in a user's ~/home. Anything could be evicted to swap.

> I switched it on in F33 and have for over a year seen no down side for my work loads.
> My work loads are file+email server, firewall, KDE desktops, Kodi music server.
>
> At my work once we get on to Centos 8 I'm planning to performance test with zram swap. We have a work load that is very sensitive to disk I/O spikes and there is some sad code that uses swap for 10s to 15s every 20mins of so that I want to make go away. We have RAID-10 SSD where we see issues.

With centos 8 kernels you'd probably use zram based swap because it's more mature in the older kernels. If you are able to use an elrepo kernel, you could try changing nothing else and see if the kernel 5.8+ changes help your workload all by themselves. And if not, you could look at either zram based swap or zswap.
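For reference, zram swap on Fedora 33+ is configured through zram-generator, the same mechanism Bill used to disable it. A minimal config sketch (sizes illustrative; field names from the generator's documented format):

```
# /etc/systemd/zram-generator.conf
# An empty file disables the default zram swap entirely;
# a [zram0] section configures it instead.
[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```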
-- Chris Murphy
Re: Long wait for start job
On Mon, Jun 21, 2021 at 10:58 AM Chris Murphy wrote:
>
> Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: tag#2 uas_eh_abort_handler 0 uas-tag 2 inflight: IN
> Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: tag#2 CDB: Mode Sense(6) 1a 00 08 00 18 00

Yeah, and in the install-boot log it happens again:

Jun 20 15:45:20 Bree kernel: sd 6:0:0:1: tag#16 uas_eh_abort_handler 0 uas-tag 1 inflight: IN
Jun 20 15:45:20 Bree kernel: sd 6:0:0:1: tag#16 CDB: Mode Sense(6) 1a 00 08 00 18 00
Jun 20 15:45:20 Bree kernel: scsi host6: uas_eh_device_reset_handler start
Jun 20 15:45:20 Bree kernel: usb 4-3: reset SuperSpeed Gen 1 USB device number 2 using xhci_hcd
Jun 20 15:45:20 Bree kernel: scsi host6: uas_eh_device_reset_handler success
Jun 20 15:45:20 Bree kernel: sd 6:0:0:1: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 15:45:20 Bree kernel: sd 6:0:0:1: [sde] Optimal transfer size 33553920 bytes not a multiple of physical block size (4096 bytes)
Jun 20 15:45:20 Bree kernel: sd 6:0:0:1: [sde] Attached SCSI disk

What's new here is the explicit USB reset message.
Jun 20 15:44:50 Bree kernel: usb 4-3: new SuperSpeed Gen 1 USB device number 2 using xhci_hcd
Jun 20 15:44:50 Bree kernel: usb 4-3: New USB device found, idVendor=174c, idProduct=55aa, bcdDevice= 1.00
Jun 20 15:44:50 Bree kernel: usb 4-3: New USB device strings: Mfr=2, Product=3, SerialNumber=1
Jun 20 15:44:50 Bree kernel: usb 4-3: Product: ASM1156-PM
Jun 20 15:44:50 Bree kernel: usb 4-3: Manufacturer: ASMT
Jun 20 15:44:50 Bree kernel: usb 4-3: SerialNumber:

Curiously there is no usb announcement of a second ASM1156 product, even though there are two:

Jun 20 15:44:50 Bree kernel: scsi 6:0:0:0: Direct-Access ASMT ASM1156-PM 0PQ: 0 ANSI: 6
Jun 20 15:44:50 Bree kernel: scsi 6:0:0:1: Direct-Access ASMT ASM1156-PM 0PQ: 0 ANSI: 6
Jun 20 15:44:50 Bree kernel: sd 6:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
Jun 20 15:44:50 Bree kernel: sd 6:0:0:1: [sde] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)

And they have their own /dev nodes. So is one in a USB enclosure and the other isn't? Or maybe they are both just appearing as usb 4-3 even though they get different scsi id's - that's probably it. But then one of them is having some sort of issue, even if it's just confusion that results in the need for the kernel to do a reset on it. But *shrug*, this is the joy of USB; it's not necessarily a hardware problem per se.

I've got one SATA USB enclosure that tries to use the uas driver if direct connected to an Intel NUC. And I get no end of grief from it (this is with a pre 5.0 kernel I'm sure, it may very well have since been fixed in the kernel). All kinds of uas related errors. Plug it into an externally powered USB hub, and it doesn't use the uas driver and I don't have any problems with it, and it reads/writes at approximately the drive's spec'd performance, depending on where on the platter the head is at. As it turns out, a USB hub is very much like an ethernet hub.
It's not just amplifying a signal, it's reading it, parsing it, and rewriting out that entire stream, as well as any other stream from another connected device. They're a PITA but kind of an engineering marvel (from my perspective anyway). So the hub does in effect seem to normalize the command/control/data streams from myriad devices so they have a better chance of playing well together. It's almost like the idea is "we'll use crap chipsets in the devices and hosts themselves, and just let the hubs figure it all out". And as it turns out with the well behaved SATA USB enclosures, they have transient read and write errors (one a day, then 10 times in a row) if direct connect to that same Intel NUC. With the *externally* powered (not bus powered) hub, zero problems. For years. So if both drives are in SATA USB enclosures, and if they're both connected to ports on a dock, you might track down or borrow an externally powered hub and connect both of the drives to that hub, and the hub to the dock. And see if this problem goes away.

-- Chris Murphy
Re: Long wait for start job
On Mon, Jun 21, 2021 at 3:20 AM Patrick O'Callaghan wrote:
>
> The logs are now publicly visible at:
> https://drive.google.com/drive/folders/1nGwVkeTJh5hz4dYRBD7Ikpx6q4MaPki9?usp=sharing

From the live boot (the least complicated one to look at for starters), there is an anomaly:

Jun 19 08:46:41 fedora kernel: BTRFS: device label fedora_localhost-live devid 1 transid 1768306 /dev/sda3 scanned by systemd-udevd (602)
Jun 19 08:46:41 fedora kernel: BTRFS: device label storage devid 1 transid 3481192 /dev/sdc1 scanned by systemd-udevd (595)
Jun 19 08:46:44 fedora kernel: BTRFS: device label RAID devid 1 transid 1973 /dev/sdd scanned by systemd-udevd (595)
Jun 19 08:46:56 fedora systemd[1]: Mounted /sysroot.
Jun 19 12:47:14 fedora kernel: BTRFS: device label RAID devid 2 transid 1973 /dev/sde scanned by systemd-udevd (1000)

Rewinding to see why this device is so much later (ignoring 8 vs 12, which is some clock artifact and not real), even though it's not holding up live boot:

Jun 19 08:46:41 fedora kernel: scsi host6: uas
Jun 19 08:46:41 fedora kernel: usbcore: registered new interface driver uas
Jun 19 08:46:41 fedora kernel: scsi 6:0:0:0: Direct-Access ASMT ASM1156-PM 0 PQ: 0 ANSI: 6
Jun 19 08:46:41 fedora kernel: scsi 6:0:0:1: Direct-Access ASMT ASM1156-PM 0 PQ: 0 ANSI: 6
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: Attached scsi generic sg4 type 0
Jun 19 08:46:41 fedora kernel: sd 6:0:0:1: Attached scsi generic sg5 type 0
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: [sdd] 4096-byte physical blocks
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: [sdd] Write Protect is off
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: [sdd] Mode Sense: 43 00 00 00
Jun 19 08:46:41 fedora kernel: sd 6:0:0:1: [sde] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
Jun 19 08:46:41 fedora kernel: sd 6:0:0:1: [sde] 4096-byte physical blocks
Jun 19 08:46:41 fedora kernel: sd 6:0:0:1: [sde] Write Protect is off
Jun 19 08:46:41 fedora kernel: sd 6:0:0:1: [sde] Mode Sense: 43 00 00 00
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 19 08:46:41 fedora kernel: sd 6:0:0:0: [sdd] Optimal transfer size 33553920 bytes not a multiple of physical block size (4096 bytes)
Jun 19 08:46:44 fedora kernel: sd 6:0:0:0: [sdd] Attached SCSI disk
Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: tag#2 uas_eh_abort_handler 0 uas-tag 2 inflight: IN
Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: tag#2 CDB: Mode Sense(6) 1a 00 08 00 18 00
Jun 19 12:47:12 fedora kernel: scsi host6: uas_eh_device_reset_handler start
Jun 19 12:47:12 fedora kernel: usb 4-3: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
Jun 19 12:47:12 fedora kernel: scsi host6: uas_eh_device_reset_handler success
Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: [sde] Optimal transfer size 33553920 bytes not a multiple of physical block size (4096 bytes)
Jun 19 12:47:14 fedora kernel: sd 6:0:0:1: [sde] Attached SCSI disk
Jun 19 12:47:14 fedora kernel: BTRFS: device label RAID devid 2 transid 1973 /dev/sde scanned by systemd-udevd (1000)

Both sd 6:0:0:0: (/dev/sdd) and sd 6:0:0:1: (/dev/sde) are found at the same time, but there's a uas-related reset that happens only on /dev/sde. I don't know what this means:

Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: tag#2 uas_eh_abort_handler 0 uas-tag 2 inflight: IN
Jun 19 12:47:12 fedora kernel: sd 6:0:0:1: tag#2 CDB: Mode Sense(6) 1a 00 08 00 18 00

But that's the clue that there's some kind of communication issue between the drive and the kernel. If these are both in SATA USB enclosures, I'd ask on the linux-usb list what these messages mean and why the file system on the one with these messages isn't recognized by the kernel until later.
You could switch only the cables around and see if the problem follows the cables; then switch the drives to see if it follows the drives. I would sooner suspect the drive enclosure than the cable, but since I've got no clue what the above two messages mean, it's just an iterative process. I do recommend including lsusb -v in the initial email to linux-usb, and maybe compare the two enclosures' outputs to see if there's anything different. The make/model might be identical, but it's possible a partial explanation lies with a chipset difference, or a revision of the chipset. There's a bunch of usb and uas driver quirks that can be applied to work around problems like this. By reporting them, if there's enough differentiation reported by the enclosures to use a quirk, they'll apply it by default in a future kernel. If not, then it becomes something you apply every boot.

-- Chris Murphy
Re: Long wait for start job
On Sun, Jun 20, 2021 at 3:48 PM Patrick O'Callaghan wrote:
>
> If I power on the dock after startup is complete, one drive appears
> immediately and the other takes 30 seconds or so, so the delay is not
> being caused by the boot process itself. It must be the hardware (the
> drive or the dock) taking that long for whatever reason, possibly power
> management as George suggested. As I've said, my goal is to convince
> the kernel that it doesn't need to wait for this so as to continue with
> the startup.

dmesg will show this whole sequence: dock appearing, bus appearing, drive on bus appearing, partition map appearing.

I couldn't open the previous journal log provided; it wasn't publicly visible or I'd have taken a gander.

-- Chris Murphy
Re: Long wait for start job
I make the fstab mount options include:

nofail,noauto,x-systemd.automount

That way it is only mounted on demand, i.e. when the mount point is "touched". It's also possible to use x-systemd.idle-timeout=300, which will unmount it after 5 minutes idle.

-- Chris Murphy
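As a sketch, an fstab entry combining those options might look like the following; the UUID and mount point are hypothetical placeholders, not from the original thread:

```
# /etc/fstab - on-demand automount; boot doesn't fail or wait if the dock is off.
# UUID and mount point are placeholders; substitute your own.
UUID=0a1b2c3d-0000-0000-0000-000000000000  /mnt/raid  btrfs  nofail,noauto,x-systemd.automount,x-systemd.idle-timeout=300  0 0
```

With noauto plus x-systemd.automount, systemd sets up an automount unit at boot instead of mounting; the real mount happens on first access of the mount point, and the idle timeout (in seconds) unmounts it again after inactivity.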
Re: Long wait for start job
On Fri, Jun 18, 2021, 11:10 AM Patrick O'Callaghan wrote:
>
> My problem is that one drive comes up almost instantly and the other
> takes 30 seconds. In fact I can live with that. My real gripe is that
> the kernel makes me wait even though the drive is not being accessed.
> If it just wants to make the drive available, it should be able to wait
> asynchronously.

Keep the hardware config the same, but boot a Fedora Live image (from a USB stick or whatever). Does it still hang during boot?

The kernel shouldn't wait unless the device has put some kind of busy state on the bus shared by sysroot. I'm suspicious that something is trying to mount it, or otherwise access it, but I haven't seen the logs.

Chris Murphy
Re: Easiest way to move from BTRFS to EXT4 without losing data
On Tue, Jun 15, 2021 at 8:54 AM Sreyan Chakravarty wrote:
>
> On Tue, Jun 15, 2021 at 2:42 AM Sreyan Chakravarty wrote:
>>
>> On Tue, 15 Jun 2021, 1:27 am Garry T. Williams, wrote:
>>>
>>> On Monday, June 14, 2021 3:50:57 PM EDT Joe Zeff wrote:
>>> > On 6/14/21 1:12 PM, Sreyan Chakravarty wrote:
>>> > > I mean if I backup from BTRFS can I restore it into ext4 ?
>>> >
>>> > Your backup software neither knows nor cares how your filesystem is
>>> > formatted, so of course you can. Unless, of course, you're cloning the
>>> > partition, in which case a restore will overwrite the partition with the
>>> > original formatting.
>>>
>>> I'm pretty sure Chris was correct. Your system is set up to boot from
>>> the btrfs file system -- not ext4. Changing the file system will
>>> result in needed changes in boot loader, fstab, etc.
>>>
>>> Restoring to an ext4 file system will not result in a bootable system.
>>
>> How does this sound?
>>
>> I make a complete tar backup of my system.
>>
>> Reinstall F33 to ext4.
>>
>> Restore that tar; of course fstab and crypttab need to be corrected.
>>
>> Will this work? Does it make any sense?
>
> So will this work? Any feedback?

Probably not, because it'll step on valid bootloader things with stale copies. If you avoid stepping on anything in:

/boot
/etc/grub*

it might work... but you'll still have kernels that the rpm database says are installed but aren't; you'll have stale boot entries for kernels that aren't installed and thus won't work. None of it will get cleaned up on its own.
-- Chris Murphy
Re: Easiest way to move from BTRFS to EXT4 without losing data
On Mon, Jun 14, 2021 at 1:57 PM Garry T. Williams wrote:
>
> On Monday, June 14, 2021 3:50:57 PM EDT Joe Zeff wrote:
> > On 6/14/21 1:12 PM, Sreyan Chakravarty wrote:
> > > I mean if I backup from BTRFS can I restore it into ext4 ?
> >
> > Your backup software neither knows nor cares how your filesystem is
> > formatted, so of course you can. Unless, of course, you're cloning the
> > partition, in which case a restore will overwrite the partition with the
> > original formatting.
>
> I'm pretty sure Chris was correct. Your system is set up to boot from
> the btrfs file system -- not ext4. Changing the file system will
> result in needed changes in boot loader, fstab, etc.
>
> Restoring to an ext4 file system will not result in a bootable system.

Strictly speaking, restoring to a *new* file system will result in an unbootable system. The FS UUID has changed; all the changes for assembly need to be reflected in /etc/fstab and the bootloader configuration files (a minimum of four). Even if this were LVM/ext4 being backed up and restored to LVM/ext4, it's the same problem and the same number of steps to fix all of it after the restore. And to my knowledge no tool knows how to do that except the installer. And the installer only knows how to do it in the context of a clean install. Not a repair.

-- Chris Murphy
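As an illustrative command sketch (not a complete or recommended procedure), the manual fix-up after restoring to a new file system starts with discovering the new UUID and propagating it; the device path here is a hypothetical placeholder:

```shell
# Device path is a placeholder; adjust to your system.
# 1. Find the new filesystem's UUID:
blkid /dev/sda3

# 2. Edit /etc/fstab so the root (and any other affected) entries
#    reference the new UUID.

# 3. Regenerate the GRUB configuration so boot entries match:
grub2-mkconfig -o /boot/grub2/grub.cfg
```

This is only the fstab/GRUB portion; as noted above, several bootloader-related files are involved, and no tool other than the installer handles all of them.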
Re: Easiest way to move from BTRFS to EXT4 without losing data
On Mon, Jun 14, 2021 at 10:38 AM Sreyan Chakravarty wrote:
>
> Hi,
>
> BTRFS is not working out for me.
>
> What will be the easiest way to move to EXT4 ?
>
> I am on Fedora 33.
>
> Please note I also want to backup my root filesystem and not just my home.

I think if you ask 10 people you'll get 10 different answers. The easiest to *explain* is:

* backup /home
* clean install the OS using Custom partitioning's "LVM" preset partitioning scheme
* restore /home from backup

And that's because the installer does a lot of work you otherwise have to do manually: creates and assembles the new setup, writes out the correct bootloader and fstab information, etc.

If you already know how to do these things manually, then that path is probably easier than a clean install and having to reinstall some things and adjust settings. Explaining all that in detail is tedious, but maybe someone knows of a guide for it. But this process is the same whether the source is btrfs, xfs, or already ext4 and you need to migrate it to new file systems/layout.

-- Chris Murphy
Re: Long wait for start job
On Sun, Jun 13, 2021 at 5:26 AM Patrick O'Callaghan wrote:
>
> I'm 99% certain it's being caused by my external USB dock starting up.
> See my reply to Ed. The dock is not mounted at boot, but has a BTRFS
> filesystem that (possibly) the kernel insists on checking before the
> rest of the startup can proceed. This is speculation at the moment.
>
> systemd-analyze blame shows a long delay in a unit I created to
> automatically power down the dock if it's not mounted:
>
> $ systemd-analyze blame|head
> 4min 18.016s dock-watch.service
> 30.517s systemd-udev-settle.service
> 15.273s logrotate.service
> 6.274s NetworkManager-wait-online.service
> 5.765s raid.mount
> 5.452s plymouth-quit-wait.service
> 5.038s akmods.service
> 4.541s upower.service
> 4.427s sssd.service
>
> I've uploaded the dock-watch unit and the scripts it calls, together
> with the automount unit, to:
>
> https://drive.google.com/drive/folders/1BT5w4u7TzBmWbhx97sWvfOErCIUylrik?usp=sharing

The dock-wait script contains:

RAID=/dev/sdd

This may not be reliable because /dev nodes frequently change between reboots. You're better off using /dev/disk/by-...; any of them are better than the node. You can use label, uuid, wwn, whatever.

I actually use a udev rule for idle spin down:

$ cat /etc/udev/rules.d/69-hdparm.rules
ACTION=="add", SUBSYSTEM=="block", \
KERNEL=="sd*[!0-9]", \
ENV{ID_SERIAL_SHORT}=="WDZ47F0A", \
RUN+="/usr/sbin/hdparm -B 100 -S 252 /dev/disk/by-id/wwn-0x5000c500a93cae8a"
$

-- Chris Murphy
Re: Long wait for start job
On Sun, Jun 13, 2021 at 3:56 AM Patrick O'Callaghan wrote:
>
> On Sun, 2021-06-13 at 07:09 +0800, Ed Greshko wrote:
> > On 13/06/2021 06:57, Ed Greshko wrote:
> > > But, does your plot show a difference?
> >
> > Speaking of your plot.
> >
> > Don't you think the time between
> >
> > sys-devices-pci:00-:00:1a.0-usb1-1\x2d1-1\x2d1.6-1\x2d1.6.2.device and
> > dev-disk-by\x2dpath-pci\x2d:00:14.0\x2dusb\x2d0:3:1.0\x2dscsi\x2d0:0:0:1.device
> >
> > worth looking into?
>
> Of course. That's precisely the issue I'm concerned about. I don't see
> what's causing it. My working hypothesis is that it's somehow related
> to the fact that the external dock supports two drives in a BTRFS RAID1
> configuration and that the kernel is verifying them when it starts up,
> even though the drives are not being mounted (they have an automount
> unit but nothing in /etc/fstab).
>
> Why it would delay the rest of the system startup while this is
> happening is something I don't understand. The delay is very visible (I
> get three dots on a blank screen while it's happening).

Short version: Is this Btrfs raid1 listed at all in fstab? If so, add noauto,nofail to the mount options and see if that clears it up.

Long version: Dracut handles mdadm array assembly. Normal assembly (non-degraded) is done by dracut using the mdadm command; but if that fails, dracut starts a countdown loop, I think 300 seconds, before it tries a degraded assembly.

None of this exists for btrfs raid at all in dracut. For one, btrfs raid assembly is combined with mount. The mount command pointed at any of the member devices results in the kernel finding all the member devices automagically. If 1+ members are missing, the mount fails.
Since systemd only tries to mount one time, and because it's decently likely that mounting a multiple-device btrfs as /sysroot will fail as a result of one or more devices not yet being ready, there is a udev rule to wait for everyone to get ready: /usr/lib/udev/rules.d/64-btrfs.rules

The gotcha is this simple rule waits indefinitely. This udev rule is there to make sure normal (non-degraded) boot doesn't incorrectly fail just because of a 1s delay with one of the devices showing up. But if a drive has actually failed, it results in a hang. Forever.

You can add the "x-systemd.timeout=300" boot parameter to approximate the rather long dracut wait for mdadm. And at a dracut shell, you can then just:

mount -o degraded /dev/sdXY /sysroot
exit

And away you go. Of course this is non-obvious. And it needs to work better. And it will, eventually.

So the next gotcha is if /sysroot is not Btrfs. In this case there's a bug in dracut that prevents this udev rule from being put into the initramfs. That means anything that does try to mount a non-root Btrfs during boot, either via fstab or GPT discoverable partitions, might possibly fail if "not all devices are ready" at the time of the mount attempt. https://github.com/dracutdevs/dracut/issues/947

This should be fixed in dracut 055, but if you already have 055 and have an initramfs built with it and this problem you're having is a new problem, maybe we've got a regression in 055 or something? I'm not sure yet... still kinda in the dark on what's going wrong.

Also, it is possible it's not related to this btrfs file system at all, but I'm throwing it out there just as something to be aware of.
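For reference, the waiting logic in that rule is tiny; from memory it looks roughly like the following (check the installed copy on your system, as the exact contents may differ by systemd version):

```
# /usr/lib/udev/rules.d/64-btrfs.rules (approximate contents)
SUBSYSTEM!="block", GOTO="btrfs_end"
ACTION=="remove", GOTO="btrfs_end"
ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"

# let the kernel know about this btrfs filesystem, and check if it is complete
IMPORT{builtin}="btrfs ready $devnode"

# mark the device as not ready to be used by the system
ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"

LABEL="btrfs_end"
```

The key line is the last ENV match: as long as the kernel reports the multi-device filesystem incomplete, SYSTEMD_READY stays 0, which is why a genuinely failed member causes the indefinite wait described above.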
-- Chris Murphy
Re: Long wait for start job
On Sat, Jun 12, 2021 at 11:20 AM Patrick O'Callaghan wrote:
>
> On Sat, 2021-06-12 at 07:25 -0600, Chris Murphy wrote:
> > Both problems need logs. It's quite a bit of overkill, but these boot
> > parameters will help provide enough info:
> >
> > systemd.log_level=debug udev.log-priority=debug
> > rd.debug x-systemd.timeout=180
> >
> > The debug options are resource intensive and slow down boot by a lot.
> > The point of the timeout is hopefully avoiding the dracut shell. But
> > better to get the shell than an indefinite hang.
> >
> > journalctl -b -o short-monotonic --no-hostname > journal.log
> >
> > Copy that out to a file sharing service, and post the URL.
>
> https://paste.centos.org/view/3932b04e

Sorry, it expired before I had a chance to look at it.

-- Chris Murphy
Re: Long wait for start job
Both problems need logs. It's quite a bit of overkill, but these boot parameters will help provide enough info:

systemd.log_level=debug udev.log-priority=debug rd.debug x-systemd.timeout=180

The debug options are resource intensive and slow down boot by a lot. The point of the timeout is hopefully avoiding the dracut shell. But better to get the shell than an indefinite hang.

journalctl -b -o short-monotonic --no-hostname > journal.log

Copy that out to a file sharing service, and post the URL.

Chris Murphy
Re: does rescue kernel ever update
On Thu, Jun 3, 2021 at 3:19 PM Patrick O'Callaghan wrote:
>
> On Thu, 2021-06-03 at 12:51 -0600, Joe Zeff wrote:
> > On 6/3/21 12:20 PM, Jon LaBadie wrote:
> > >
> > > Are old rescue kernels still useful? (6 years?)
> >
> > They're still just as useful as they were when they were installed.
> > Of course, that means that any function that was added later isn't
> > there, but that doesn't matter because you're only going to use it in
> > emergencies to troubleshoot a broken system.
>
> Surely an old rescue kernel may not be able to mount a BTRFS
> filesystem?

Not only Btrfs but any file system. A new mkfs may set options that an old kernel doesn't support. There's quite a bit of that in ext4 and XFS land.

If you mkfs.btrfs with today's progs and use defaults, an ancient kernel will mount it. But there are features that old kernels don't support, like for example zstd compression, which arrived in kernel 4.14 and thus is not mountable (incompatible) with older kernels. Free space tree v2 has existed since kernel 4.5, but its flag permits ro mount with old kernels.

-- Chris Murphy
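As a command sketch, you can see which feature flags a given btrfs filesystem has set, and therefore guess which kernels can mount it; the device path is a hypothetical placeholder:

```shell
# Device path is a placeholder; run against an actual btrfs device.
# incompat_flags lists features an old kernel must support to mount rw.
btrfs inspect-internal dump-super /dev/sdX1 | grep -i flags
```

Compare the flags shown against the feature-by-kernel-version table in the btrfs documentation to judge whether an old rescue kernel could still mount the filesystem.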
Re: does rescue kernel ever update
On Thu, Jun 3, 2021 at 12:20 PM Jon LaBadie wrote:
>
> On my 3 systems, F34, F34, and CentOS7, they are
> 1, 2, and 6 years old respectively.
>
> Are old rescue kernels still useful? (6 years?)

They might be useful to a sysadmin; I think they are useless. The rescue kernel is really just a "no host-only" initramfs that contains a bunch of extra dracut and kernel modules that the host-only initramfs doesn't.

The difficulty is the rescue initramfs can't do a full graphical boot once the /usr/lib/modules/ dir for that kernel has been removed, which likely happened in its first 4 weeks following installation. Since you won't get a graphical boot anyway, I'm not sure you have a good chance of dracut building a new host-only initramfs that contains the driver needed for whatever new hardware you've added or changed to. It's pretty esoteric landing in a dracut shell, even for experienced users. So I am not a fan.

What I would like to see is (a) an initramfs that can boot a graphical stack, (b) contains the Live OS dracut modules, (c) and overlayfs, and wire it up so that the rescue boot entry does a read-only sysroot boot + writable overlay, like a LiveOS. So now folks can use a web browser normally, get on irc or whatever, and get some help with why they can't boot, without having to resort to mobile or a 2nd computer they may not have handy.

A side plus for Btrfs cases: it has a unique ro,rescue=all mount option that tolerates file system problems. Plausibly we can still boot read-only in situations where other file systems would face plant until they get an fsck. Whereas on Btrfs we really want to steer folks towards freshening backups before they attempt a repair, if they end up in a disaster situation. But nevertheless, such an effort would be generically beneficial no matter the file system.

A variation on that might be a read-only "rescue" or "recovery" snapshot that would be immutable, paired with the same LiveOS+overlayfs concept.
That way a boot is possible in a variety of other, more likely, user-error or update-related scenarios, i.e. the file system isn't damaged, it's the installation that's messed up, and what you need is a live boot but you don't have a USB stick handy, so just bake it into a small snapshot. *shrug* Maybe. We could make it completely self-contained, including the kernel, initramfs, and the whole graphics stack. But near term such a thing would be btrfs-only, unless we dedicate a literal partition and stick a Live OS ISO image on it (effectively).

-- Chris Murphy
Re: How do I recover from BTRFS issues?
> cannot describe it as stable with this BTRFS issue. A scrub currently
> says that / (and therefore also /home) has 10 unrecoverable errors. I
> can find no Fedora or Suse documentation on how to recover from what
> should be impossible situations like this.

It's not supposed to happen. But once it happens, it's very case specific and a bit complicated to figure out what probably happened and what the next steps are. Btrfs is good at avoiding trouble in the first place due to COW, i.e. nothing is being overwritten, therefore interruptions during writes, whether crash or power fail, aren't a problem. But write ordering violations can result in more problems with Btrfs. There are some safeguards built in to work around that, but they are limited.

fpaste --btrfsinfo

Post the resulting URL. It'll expire in 24 hours. But if the problem file system is sysroot, that will help better understand the storage stack, mount options, and recent btrfs messages. If the problem file system is not sysroot, you'll want to add --printonly and use the commands shown for each section on the proper mount point or device.

> A reinstall will not
> preserve /home, leading to unacceptable data loss.

Hopefully there is a backup no matter what the file system is; and if not, creating a backup is the top priority in any disaster situation. There is a way to reinstall and preserve /home in Anaconda, but before doing that we really need to understand what's broken. Because if the file system is broken and can't be fixed, then it's mkfs time. And for that you need backups of at least the important user data.

> I did an offline
> btrfs check on my F33 machine that left the machine unbootable, so it's
> probably not an option either. I'm stuck at this point.

btrfs check --readonly is safe; it's not touching anything on the drive at all. --repair should at worst fail safe, but it does still have rather scary warnings in the man page; it's best to consider --repair a last resort.
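As a command sketch of that safe-first ordering, with the device path and mount point as hypothetical placeholders:

```shell
# Read-only check: inspects metadata, never writes to the device.
# Run it against an unmounted filesystem; device path is a placeholder.
btrfs check --readonly /dev/sdX1

# For a mounted filesystem, scrub verifies checksums of data and metadata:
btrfs scrub start /
btrfs scrub status /

# Kernel messages are where btrfs reports what it actually found:
journalctl -k | grep -i btrfs
```

None of these modify the filesystem, which is the point: gather evidence (and backups) first, and treat --repair as the last resort.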
You need to use other options before --repair, but we need to see the errors to know what to recommend.

> Should I just
> stop using the default BTRFS filesystem and go back to ext4?

On the one hand, e2fsck has a pretty good chance of fixing damaged file system metadata resulting from storage stack problems, including hardware issues. But it doesn't check data integrity at all, and data is a much larger portion of what's written to a drive, so it's a much larger target for hardware problems resulting in corruption, dropped/torn/misdirected writes, or even bit flips. Btrfs is intentionally fussier about these kinds of problems. And yeah, it'll often just stop, to seek human attention about what to do. That's pretty onerous, but it's also what protects your data from being damaged even worse.

But anyway, there's not much to go on here yet. We need to see dmesg for these problems. I personally prefer to see the entire dmesg, because isolated errors don't tell me what was going on immediately prior to the Btrfs error, which is almost always a related factor. Mount options can matter too. In the raid1 case, same thing: we need to see dmesg, because that's where btrfs spits out all of its complaints. And it is quite verbose.

-- Chris Murphy
Re: Preserving @home brtfs subvolume on a fresh Fedora installation
On Sun, May 23, 2021 at 11:25 AM Marco Guazzone wrote:
>
> Hello,
>
> I have just done a fresh installation of Fedora 34 on a new computer and used
> the automatic disk partitioning proposed by the installer.
> Now my disk has the following layout:
> - /boot (ext4)
> - /boot/efi (EFI system partition)
> - / (btrfs), with two subvolumes: @root and @home.
>
> In case of a new fresh installation of Fedora, I would like to preserve the
> @home subvolume only and instead overwrite the rest. However, I am not sure
> what I should do (note, I don't want to use dnf upgrade).
>
> Just as an experiment, I tried to simulate a fresh (re)installation of Fedora
> 34 and I selected "Custom" as the disk partitioning method. The installer
> showed the above disk layout. So, my idea was to use the same approach I used
> in the past (with ext4 partitions). Specifically:
> * For the "/boot" and "/boot/efi" partitions, I specified "/boot" and
> "/boot/efi" as mount points, respectively, and flagged the "Reformat"
> checkbox.
> * For the "/home" subvolume, I specified "/home" as the mount point, without
> flagging the "Reformat" checkbox.
> * For "/", I cannot tell the installer to reformat it. I am not sure what to
> do. I would create a new btrfs filesystem with "/" as the mount point, but I
> am not sure it is correct.
>
> Do you have any suggestions?

There's definitely a trick. The installer normally enforces reformatting a partition/LV for sysroot. Btrfs gets an exception by merely enforcing creation of a new subvolume on an existing Btrfs file system for sysroot. The way to do that is to create a new / mount point rather than clicking on an existing one; it's also helpful to not specify a size for this mount point, just leave that 2nd field empty. There is a test case that describes this in detail, and hopefully someone will turn it into a quickdoc. (It's on my to-do list but I'm not sure when I'm going to get around to it.)
https://fedoraproject.org/wiki/QA:Testcase_partitioning_custom_btrfs_preserve_home

-- Chris Murphy
___
users mailing list -- users@lists.fedoraproject.org
To unsubscribe send an email to users-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure
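[Editor's note: after a reinstall done this way, it's easy to confirm the preserved subvolume survived. A hedged sketch; check_subvol is my own helper name, not anything the installer provides, and it needs root plus btrfs-progs, so it degrades to a "skipped" message elsewhere.]

```shell
# Sketch: confirm a named subvolume still exists on the Btrfs root.
check_subvol() {
    name="$1"
    if command -v btrfs >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
        # The last field of `btrfs subvolume list` output is the subvolume path
        if btrfs subvolume list / | awk '{print $NF}' | grep -qx "$name"; then
            echo "subvolume $name present"
        else
            echo "subvolume $name missing"
        fi
    else
        echo "skipped: need root and btrfs-progs"
    fi
}

check_subvol home
```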
Re: Strange compress behavior under /usr
On Tue, May 11, 2021 at 8:58 AM Qiyu Yan wrote:
> Dear folks,
>
> My problem is that new files created at /usr won't be compressed.
>
> I accidentally noticed that none of my files under /usr is compressed[1], so I tried to run `sudo btrfs fi def -czstd -v -r /usr` to compress them, and that seems to work. `sudo compsize /usr` now gives:
>
> Processed 431312 files, 224528 regular extents (230971 refs), 253758 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL       56%         7.2G          12G          13G
> none       100%         3.5G         3.5G         3.5G
> zstd        39%         3.6G         9.2G         9.7G
>
> This seems pretty good, but when I use dd to dump a file to /usr to test compression of a new file, the problem happens:
>
> [root@yan-desktop /]# dd if=/dev/zero of=/usr/1 bs=10240 count=1
> 1+0 records in
> 1+0 records out
> 10240 bytes (102 MB, 98 MiB) copied, 0.0426441 s, 2.4 GB/s
> [root@yan-desktop /]# dd if=/dev/zero of=/etc/1 bs=10240 count=1
> 1+0 records in
> 1+0 records out
> 10240 bytes (102 MB, 98 MiB) copied, 0.0585055 s, 1.8 GB/s
> [root@yan-desktop /]# compsize /usr/1
> Processed 1 file, 1 regular extents (1 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%          97M          97M          97M
> none       100%          97M          97M          97M
> [root@yan-desktop /]# compsize /etc/1
> Processed 1 file, 782 regular extents (782 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL        3%         3.0M          97M          97M
> zstd         3%         3.0M          97M          97M

I'm unable to reproduce this. Can you do:

ls -li

to list the inode number of a file in /usr that should be compressed but isn't, and then plug that inode number into:

btrfs insp dump-t -t 257 /dev/xyz | grep -C 20 $INUM

This may expose file names for other files. It doesn't matter to me if you include the whole output of the above command or trim it to just the cluster of items referencing that inode number.
-- Chris Murphy
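[Editor's note: compsize's Perc column is just on-disk size divided by uncompressed size. A quick sketch of that arithmetic; perc is my own helper name, not part of compsize.]

```python
def perc(disk: float, uncompressed: float) -> str:
    """Mimic compsize's Perc column: on-disk size over uncompressed size."""
    return f"{disk / uncompressed:.0%}"

# The zstd row from the /usr table above: 3.6G on disk for 9.2G uncompressed
print(perc(3.6, 9.2))  # 39%
# The /etc/1 file: 3.0M on disk for 97M uncompressed
print(perc(3.0, 97.0))  # 3%
```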
Re: automatic updates corrupting grub loader
A grub> prompt means GRUB wasn't able to find the grub.cfg.

1. On x86_64, reinstall the bootloader(s) by:
sudo dnf reinstall shim-* grub2-efi-*
sudo grub2-mkconfig -o /etc/grub2-efi.cfg
* In no case should grub2-install be used on UEFI.

2. Check that GRUB_ENABLE_BLSCFG=true is set in /etc/default/grub

3. grub2-mkconfig -o /etc/grub2-efi.cfg
* On Fedora 33, /etc/grub2-efi.cfg -> /boot/efi/EFI/fedora/grub.cfg
* On Fedora 34, /etc/grub2-efi.cfg -> /boot/grub2/grub.cfg

4. On Fedora 34 there is a /boot/efi/EFI/fedora/grub.cfg, but it's a simple four-line file that merely forwards to the real one at /boot/grub2/grub.cfg. In case this stub file was accidentally stepped on by using grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg, it can be fixed by:
sudo dnf reinstall grub2-common
This reruns the f33->f34 upgrade script that moves /boot/efi/EFI/fedora/grub.cfg to /boot/grub2/grub.cfg and then creates the proper forwarding /boot/efi/EFI/fedora/grub.cfg stub file.

5. Another possible source of difficulty is the boot order and boot entries stored in NVRAM. The boot order should have the boot number for the Fedora entry first in the list, and the Fedora entry should point to the EFI system partition and the path to shimx64.efi or shim.efi. The boot order can be reset by:
efibootmgr --bootorder $
where $ is the four-digit boot number for the Fedora boot entry. No other entries need to be specified, but you could optionally add a fallback entry, e.g. --bootorder 0006,0002

-- Chris Murphy
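[Editor's note: for reference, the Fedora 34 forwarding stub described in step 4 looks roughly like the following; the exact UUID is a placeholder for your /boot filesystem's UUID, so don't copy this verbatim.]

```
search --no-floppy --fs-uuid --set=dev <boot-filesystem-uuid>
set prefix=($dev)/grub2
export $prefix
configfile $prefix/grub.cfg
```

The point of the stub is that the EFI system partition copy never needs regenerating; all real configuration lives in /boot/grub2/grub.cfg.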
Re: F34 problems
On Thu, Apr 29, 2021 at 3:44 PM Matti Pulkkinen wrote:
> On Thu, 2021-04-29 at 15:32 -0600, Chris Murphy wrote:
> >
> > Looks like you found it?
> > https://bugzilla.redhat.com/show_bug.cgi?id=1955162
> >
>
> Yup. As for your earlier question about why I think Plymouth is defaulting to the US keymap: it's because when I try to enter the password as if I were using the Finnish keymap, it doesn't work. It's only when I look up a picture of a US keyboard, and then type as if I were using one of those, that the password works. I.e., if my password has a / character, I need to press where it _would_ be on a US keyboard rather than where it actually is on my physical keyboard.
>
> As for why testing didn't pick this up, I have no clue. It's entirely possible I've done something horribly wrong, but if so, I have done the same thing wrong when installing F32 and F33, and had no issues there even though I had passwords containing special characters with different placements between FI and US keymaps.
>
> Do you think it would be wise to open a separate bug against Anaconda as you suggested, or just work with the kbd bug (that someone else reported earlier today before I had the chance) and see where that goes?

If you've changed the keyboard layout to Finnish in step 3 of "How to test", https://fedoraproject.org/wiki/QA:Testcase_Non-English_European_Language_Install, and entered a passphrase; and dracut comes up with a Finnish layout at the next reboot, but you have to fiddle your way through as if you had used a US layout, that means the installer didn't check for something it requires to properly encode the passphrase as requested. So I think it's anaconda, or possibly lorax. I'd set it to anaconda, reference the kbd bug with a "see also", and we'll see what happens. Post the URL here, though, because I'll want to ask about it when QA does an F34 retrospective to see if it should have been caught.
-- Chris Murphy