Thanks Chris for your response!

On Mon, 29 Jul 2019, 4:25 pm Chris Laprise, <[email protected]> wrote:

> On 7/29/19 10:19 AM, [email protected] wrote:
> > Thanks for your response.
> >
> > I have some which don't show as active - it's looking like some data
> loss..
> >
> > Something I am getting when I run
> > lvconvert --repair qubes_dom0/pool00
> >
> >
> > WARNING: Sum of all thin volume sizes (2.67TiB) exceeds the size of thin
> pools and the size of whole volume group (931.02GiB)
> >
> > Is this something I can fix perhaps?
>
> This is normal. Thin provisioning usually involves over-provisioning,
> and that's what you're seeing. Most of our Qubes systems display this
> warning when using lvm commands.
>
>
Understood. Thanks!
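
For anyone following along: the actual usage (as opposed to the
provisioned sizes) seems visible with something like this, where the
Data% and Meta% columns on pool00 are the ones that matter, if I
understand correctly:

  sudo lvs qubes_dom0 -o lv_name,lv_size,data_percent,metadata_percent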


> >
> > Also, I have some large volumes which are present. I've considered
> trying to remove them, but I might hold off until I get data off the active
> volumes first..
> >
> > I've run across the thin_dump / thin_check / thin_repair commands. It
> seems they're used under the hood by lvconvert --repair to check thin
> volumes.
> >
> > Is there a way to relate those dev_ids back to the thin volumes lvm
> can't seem to find?
>
> If 'lvs' won't show them, then I don't know precisely how. A long time
> ago, I think I used 'vgcfgrestore /etc/lvm/archive/<latest-file>' to
> resolve this kind of issue.
>
>
> Sorry, I mean, lvs does show them, I'm just wondering what it'll take to
show them as active again.
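
If it helps anyone else hitting this: from what I've read, lvs can print
the thin device id (which should let me cross-reference the dev_ids in
thin_dump output), and activating a volume again should just be lvchange.
Untested on my side, and vm-foo-private is a placeholder name:

  sudo lvs -o +thin_id qubes_dom0
  sudo lvchange -ay qubes_dom0/vm-foo-private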

That /etc/lvm/archive directory seems to only have files from today, though!
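
I did find that vgcfgrestore can list the archived configs with their
timestamps and descriptions, which might help pick a restore candidate:

  sudo vgcfgrestore --list qubes_dom0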

> I also recommend seeking help from the wider Linux community, since this
> is a basic Linux storage issue.
>
I have spent the morning researching, and found a few posts on redhat.com
and some other sites describing how to repair the metadata.

The most common cause seems to be overflowing the metadata volume, though
mine is currently only around 37% full.
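
The manual procedure those posts describe is roughly the following, as I
understand it - I haven't run it yet, and meta_snap is a placeholder name:

  # deactivate the pool (thin volumes included), then swap the pool's
  # metadata out into a plain LV so the thin_* tools can read it
  sudo lvchange -an qubes_dom0/pool00
  sudo lvcreate -L 4G -n meta_snap qubes_dom0
  sudo lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_snap
  # meta_snap now holds the (possibly damaged) metadata; inspect it
  sudo lvchange -ay qubes_dom0/meta_snap
  sudo thin_check /dev/qubes_dom0/meta_snap
  sudo thin_dump /dev/qubes_dom0/meta_snap | less
  # afterwards, the same lvconvert command swaps it back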

Others (including one Qubes user) encountered this after a power failure. I
shut down cleanly as far as I can tell; this was a routine reboot.

> And of course, a reminder these mishaps are a good reason to do the
> following:
>
> 1. After installation, at least double the size of your pool00 tmeta
> volume.
>
> 2. Perform regular backups (I'm working on a tool that can backup lvs
> much quicker than the Qubes backup tool).
>
I definitely agree with both, although point one seems unlikely to have been
the cause in this case.
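
For reference, doubling tmeta looks like a one-liner - the +1G here is
just an example, and it needs free space in the VG:

  sudo lvextend --poolmetadatasize +1G qubes_dom0/pool00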

I'm fairly sure the main disk has about 50% free space as well.
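
Going by the VFree column from vgs, anyway:

  sudo vgs qubes_dom0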

Backups are evidently a must. I've screwed up Qubes installs before, but
never lost data until maybe now. I know LVM was only adopted in R4.0, and
everything else has been going so well with this install, but I had only
just recovered and organized several old disks' worth of data, so I'll be
gutted if I lost it and don't know why :/

I see a few people posting on the GitHub qubes-issues repo; one says three
people in the past month have had this issue (or at least the same symptoms).

>
> --
>
> Chris Laprise, [email protected]
> https://github.com/tasket
> https://twitter.com/ttaskett
> PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886
>
