On 7/28/19 9:47 PM, 'awokd' via qubes-users wrote:
'awokd' via qubes-users:
Also, if you can't get a --private volume active, try its
I've just encountered this issue, and I thought my problems were over
once I found this post.
Can't tell which post you're replying to, but I get the idea. The
volumes you are most concerned about all end in --private. If you've
gotten them to the point where they show as active, you can make a
subdir and "sudo mount /dev/mapper/qubes_dom0-vm--work--private
subdir" for example, copy out the contents, umount subdir and move on
to the next. You can ignore --root volumes, since reinstalling the
default templates will recreate them. If you can't get the --private
volumes you want to show as active, I'm afraid recovering those is
beyond me.
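The copy-out procedure above could be sketched like this (a minimal
example; the volume name "vm--work--private" and the rescue path are
placeholders, and mounting read-only is a precaution, not part of the
original instructions):

```shell
# Assuming the private volume shows as ACTIVE in lvscan:
mkdir -p /mnt/rescue
sudo mount -o ro /dev/mapper/qubes_dom0-vm--work--private /mnt/rescue
# Copy the contents somewhere safe, preserving permissions:
sudo cp -a /mnt/rescue/. /path/to/backup/work/
sudo umount /mnt/rescue
# Repeat for each --private volume you care about.
```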
FYI, previously 'lvscan' on my system showed root, pool00, and every
volume but swap as inactive.
I followed your instructions, but the system still fails to boot.
I ran 'vgchange -ay' and saw the following printed a number of times:
device-mapper: table 253:6: thin: Couldn't open thin internal device
device-mapper: reload ioctl on (253:6) failed: no data available
I ran 'lvscan' again, and this time some VMs were marked active, but
a number (root, various -back volumes, several -root volumes, etc.)
were still inactive.
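The activation-and-check sequence described here could look like the
following (the volume group name "qubes_dom0" is the Qubes default; the
grep is just a quick way to spot volumes that failed to come up):

```shell
# Attempt to activate every logical volume in the group:
sudo vgchange -ay qubes_dom0
# List volume states; anything still inactive failed to activate:
sudo lvscan | grep -i inactive
```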
I'm really terrified that everything is gone, as I had just restored
from a backup while my hardware was being repaired, and I don't have
that backup anymore.
Did you run "lvm lvconvert --repair qubes_dom0/pool00"? I think that
would be one of the first things to try when the underlying thin device
fails. If the repair needs additional space, you could delete the swap
LV, then re-add it afterward.
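That repair suggestion could be carried out roughly as below (a sketch,
assuming the default Qubes names qubes_dom0/pool00 and a swap LV named
"swap"; the 4G size for the recreated swap is only an example):

```shell
# Repair the thin pool's metadata (runs thin_repair under the hood):
sudo lvconvert --repair qubes_dom0/pool00
# If lvconvert complains about insufficient free extents,
# reclaim the swap LV first and retry:
sudo lvremove qubes_dom0/swap
# Once the pool is healthy, recreate and reinitialize swap:
sudo lvcreate -n swap -L 4G qubes_dom0
sudo mkswap /dev/qubes_dom0/swap
```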
Chris Laprise, tas...@posteo.net
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886