On 7/29/19 10:19 AM, [email protected] wrote:
Thanks for your response.

I have some volumes which don't show as active - it's looking like some data loss...

This is something I get when I run
lvconvert --repair qubes_dom0/pool00


WARNING: Sum of all thin volume sizes (2.67TiB) exceeds the size of thin pools 
and the size of whole volume group (931.02GiB)

Is this something I can fix perhaps?

This is normal. Thin provisioning usually involves over-provisioning, and that's what you're seeing. Most of our Qubes systems display this warning when running LVM commands.
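
If you want to compare what the pool is actually using against what's been
provisioned, something like this should show it (a quick sketch; adjust the
VG name if yours isn't qubes_dom0):

sudo lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0

The Data% and Meta% columns on the pool00 line are the figures that matter;
the warning only compares the virtual sizes of the thin volumes against the
physical space.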


Also, I have some large volumes which are still present. I've considered trying to 
remove them, but I might hold off until I get the data off the active volumes 
first...

I've run across the thin_dump / thin_check / thin_repair commands. It seems 
they're used under the hood by lvconvert --repair to check thin volumes.

Is there a way to relate those dev_ids back to the thin volumes lvm can't seem 
to find?

If 'lvs' won't show them, then I don't know precisely how. A long time ago, I think I used something like 'vgcfgrestore -f /etc/lvm/archive/<latest-file> <vgname>' to resolve this kind of issue.
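
If you do go that route, the rough sequence from memory is something like this
(just a sketch - the archive file names will differ on your system):

sudo vgcfgrestore --list qubes_dom0
sudo vgcfgrestore -f /etc/lvm/archive/<latest-file> qubes_dom0

The --list step shows which archived metadata copies exist and when they were
written, so you can pick one from before the volumes went missing. I believe
recent LVM versions also insist on --force when the VG contains thin pools, so
read its warnings carefully before committing.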

I also recommend seeking help from the wider Linux community, since this is a basic Linux storage issue.

And of course, a reminder that mishaps like these are a good reason to do the following:

1. After installation, at least double the size of your pool00_tmeta volume (see the example after this list).

2. Perform regular backups (I'm working on a tool that can back up LVs much more quickly than the Qubes backup tool).
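
For item 1, the metadata volume can be grown in place with something along
these lines (a sketch; adjust the amount, and make sure the VG still has free
space for it):

sudo lvextend --poolmetadatasize +1G qubes_dom0/pool00

Running out of tmeta space is one of the more common ways a thin pool ends up
needing --repair in the first place.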

--

Chris Laprise, [email protected]
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886
