On 10/01/2018 05:24 PM, Chris Laprise wrote:
On 10/01/2018 02:50 PM, Micah Lee wrote:
I recently installed Qubes 4.0 on a laptop, installed updates in dom0 and my templates, restored a backup, and did a bunch of custom configuration. Then when I rebooted, Qubes wouldn't boot due to a partitioning error. (It looks like it's the same problem described here [1].) During boot, I get hundreds of lines that say:

dracut-initqueue[343]: Warning: dracut-initqueue timeout - starting timeout scripts

Followed by:

dracut-initqueue[343]: Warning: Could not boot.
dracut-initqueue[343]: Warning: /dev/mapper/qubes_dom0-root does not exist
dracut-initqueue[343]: Warning: /dev/qubes_dom0/root does not exist
      Starting Dracut Emergency Shell...

Then it drops me into an emergency shell.

When I run lvm lvscan, I can see:

Scanning devices dm-0 for LVM logical volumes qubes_dom0/root qubes_dom0/swap
inactive '/dev/qubes_dom0/pool00' [444.64 GiB] inherit
inactive '/dev/qubes_dom0/root' [444.64 GiB] inherit
ACTIVE '/dev/qubes_dom0/swap' [15.29 GiB] inherit
inactive '/dev/qubes_dom0/vm-sys-net-private' [2 GiB] inherit

And it continues with another inactive line for each VM's private and root volume. Only swap is active.

I spent a little time trying to troubleshoot this, but ultimately decided that it wasn't worth the time, since I have a fresh backup. So I formatted my disk again, reinstalled Qubes, restored my backup, etc. After installing more updates and rebooting, I just ran into this exact same problem *again*. I think this could be a Qubes bug.

Any idea how I can fix this situation? The dracut emergency shell doesn't seem to come with many LVM tools: there's lvm, lvm_scan, thin_check, thin_dump, thin_repair, and thin_restore. I could boot from the Qubes USB and drop into a troubleshooting shell to get access to more tools.

[1] https://groups.google.com/forum/#!searchin/qubes-users/dracut-initqueue$20could$20not$20boot|sort:date/qubes-users/PR3-ZbZXo_0/G8DA86zhCAAJ


If you do 'sudo lvdisplay qubes_dom0/root', it will probably say the LV Status is 'NOT available'. This could mean an 'lvchange' somewhere set the activation-skip flag ('--setactivationskip y') on those volumes (pool00, root, etc.).

You can attempt to fix it, at least temporarily, like so ('-kn' clears the activation-skip flag and '-ay' activates the volume):

sudo lvchange -kn -ay qubes_dom0/pool00
sudo lvchange -kn -ay qubes_dom0/root
sudo lvchange -kn -ay qubes_dom0/vm-sys-net-private

Then use 'lvdisplay' to verify that the LV status has changed.
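If many VM volumes are stuck inactive, repeating the lvchange by hand gets tedious. Here's a minimal sketch of looping over them; the volume names are just the ones from the lvscan output above, and this dry-run only prints the commands, so remove the leading 'echo' to actually run them:

```shell
# Dry-run sketch: print an activation command for each stuck LV.
# -kn clears the activation-skip flag, -ay activates the volume.
# Remove 'echo' to execute for real inside the rescue shell.
vg=qubes_dom0
for lv in pool00 root vm-sys-net-private; do
    echo sudo lvchange -kn -ay "$vg/$lv"
done
```

In the real shell you could also derive the volume list with 'lvm lvs --noheadings -o lv_name qubes_dom0' instead of typing the names out.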


BTW, if you can run 'lvm' in the rescue shell, you can use it for the various lv* commands, including 'lvchange'. Running 'lvm' by itself drops you into an lvm shell where 'lvchange' and the other commands are accessible.
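You can also feed a command list into the embedded lvm shell non-interactively. A sketch, assuming 'lvm' is on the rescue shell's PATH; 'cat' here just displays the command text, so pipe the here-document into 'lvm' instead of 'cat' to actually run it:

```shell
# Commands for the embedded lvm shell; shown with 'cat' for illustration.
# To execute, replace 'cat' with 'lvm' in the rescue shell.
cat <<'EOF'
lvscan
lvchange -kn -ay qubes_dom0/pool00
lvchange -kn -ay qubes_dom0/root
quit
EOF
```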

--

Chris Laprise, [email protected]
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886
