Sorry, didn't send to list. See my response to Chris.

---------- Forwarded message ---------
From: Thomas Kerin <thomas.ke...@gmail.com>
Date: Mon, 29 Jul 2019, 3:40 pm
Subject: Re: [qubes-users] fixing LVM corruption, question about LVM
locking type in Qubes
To: Chris Laprise <tas...@posteo.net>


Hi Chris,

Yes, I think I tried that once last night.

I notice it creates a qubes_dom0/pool00_meta$N volume each time.

Note my earlier post (before I saw yours!) had a weird error about 2.6 TB of
thin volume sizes exceeding the size of the pool and volume group.
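
In case it helps to compare those numbers, something like this should list the
thin volumes' virtual sizes next to the pool and volume group sizes (a rough
sketch only, using the qubes_dom0 VG name):

  sudo lvs -o lv_name,lv_size,pool_lv,data_percent qubes_dom0
  sudo vgs -o vg_name,vg_size,vg_free qubes_dom0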

Output this time was:
WARNING: recovery of pools without pool metadata spare LV is not automated.
WARNING: If everything works, remove qubes_dom0/pool00_meta2 volume.
WARNING: Use pvmove command to move qubes_dom0/pool00_meta2 on the best fitting PV.


Currently I have qubes_dom0/pool00_meta0, 1, and 2.
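
For reference, the leftover metadata copies should show up with something like
the following, and per the warning above they can be removed once the repaired
pool is confirmed working (a sketch only, assuming the pool00_metaN names from
that output):

  sudo lvs -a qubes_dom0 | grep _meta
  # only after verifying the repaired pool and its thin volumes work:
  sudo lvremove qubes_dom0/pool00_meta0 qubes_dom0/pool00_meta1 qubes_dom0/pool00_meta2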

On Mon, 29 Jul 2019, 3:18 pm Chris Laprise, <tas...@posteo.net> wrote:

> On 7/28/19 9:47 PM, 'awokd' via qubes-users wrote:
> > 'awokd' via qubes-users:
> >> thomas.ke...@gmail.com:
> >>> I've just encountered this issue, and I thought my problems were over
> >>> once I found this post...
> >>>
> >>> FYI, previously lvscan on my system showed root, pool00, and every
> >>> volume but swap as inactive.
> >>>
> >>> I followed your instructions, but the system still fails to boot.
> >>> I've run 'vgchange -ay' and I saw the following printed a number of
> >>> times.
> >>>
> >>> device-mapper: table 253:6: thin: Couldn't open thin internal device
> >>> device-mapper: reload ioctl on (253:6) failed: No data available
> >>>
> >>>
> >>> I ran 'lvscan' again, and this time some VMs were marked active, but
> >>> a number (root, various -back volumes, several -root volumes, etc.)
> >>> were still inactive.
> >>>
> >>> Really terrified everything is gone, as I had just recovered from a
> >>> backup while my hardware got fixed, but I don't have the backup
> >>> anymore.
> >>>
> >> Can't tell which post you're replying to, but I get the idea. The
> >> volumes you are most concerned about all end in --private. If you've
> >> gotten them to the point where they show as active, you can make a
> >> subdir and "sudo mount /dev/mapper/qubes_dom0-vm--work--private
> >> subdir" for example, copy out the contents, umount subdir and move on
> >> to the next. You can ignore --root volumes, since installing default
> >> templates will recreate them. If you can't get the --private volumes you
> >> want to show as active, I'm afraid recovering those is beyond me.
> >>
> > Also, if you can't get a --private volume active, try its
> > --private--######--back equivalent.
>
> Did you run "lvm lvconvert --repair qubes_dom0/pool00"? I think that
> would be one of the first things you do when the underlying thin device
> fails.
>
> If it needs additional space, you could delete the swap lv, then re-add
> it later.
>
> --
>
> Chris Laprise, tas...@posteo.net
> https://github.com/tasket
> https://twitter.com/ttaskett
> PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886
>
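
P.S. For anyone following along, the copy-out sequence awokd describes above
would go roughly like this (a sketch only; vm-work is just an example name and
the destination path is a placeholder):

  mkdir subdir
  sudo mount /dev/mapper/qubes_dom0-vm--work--private subdir
  sudo cp -a subdir/. /path/to/rescue/location/
  sudo umount subdir

repeated for each --private volume that can be activated.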
