>>> <[email protected]> schrieb am 09.03.2020 um 22:14 in Nachricht
<32730_1583788450_5E66B1A2_32730_943_1_20200309221408.Horde.6suQ5c39eHZROYAnW9Jp
[email protected]>:
> Hello folks,
> 
> I have a standard Qubes 4.0 release install with LUKS + an LVM thin pool.
> After a sudden reboot and entering the encryption passphrase, the dracut
> emergency shell comes up:
> "Check for pool qubes_dom0/pool00 failed (status:1). Manual repair
> required!"
> The only active LV is qubes_dom0/swap.
> All the others are inactive.
> 
> step 1:
> from https://github.com/QubesOS/qubes-issues/issues/5160 
> lvm vgscan
> lvm vgchange -ay
> lvm lvconvert --repair qubes_dom0/pool00
> Result:
> Using default stripesize 64.00 KiB.
> Terminate called after throwing an instance of 'std::runtime_error'
> what(): transaction_manager::new_block() couldn't allocate new block
> Child 7212 exited abnormally
> Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed
> (status:1). Manual repair required!
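
That transaction_manager::new_block() error usually means thin_repair ran
out of room while writing the repaired copy of the metadata (lvconvert
--repair writes it into the VG's hidden pmspare LV, which may simply be too
small). Before anything else I'd look at how much space the VG and the
metadata actually have; a quick check from the emergency shell, assuming
the VG really is qubes_dom0:

  lvm vgs qubes_dom0       # VFree column = unallocated space in the VG
  lvm lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0

(Untested here; the -o field names are standard lvs ones.)
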
> 
> step 2:
> since I suspect that my LVM is full (though it does mark 15 G as free),
> I tried the following changes in /etc/lvm/lvm.conf:
> thin_pool_autoextend_threshold = 80
> thin_pool_autoextend_percent = 2 (since my pvs output gives PSize
> 465.56g and PFree 15.78g, I set this to 2% to be overly cautious and not
> extend beyond the 15 G marked as free)
> auto_activation_volume_list = set to hold the group, root, pool00, swap,
> and a VM that I would like to delete to free some space
> volume_list = the same as auto_activation_volume_list
> 
> and tried step 1 again; it did not work, and I got the same result as
> above, with qubes_dom0/swap as the only active LV
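
For reference, those settings live in the activation section of lvm.conf,
and the two lists take "VG" or "VG/LV" strings; something like the
following (the VM name is a placeholder, adjust to your own):

  activation {
      thin_pool_autoextend_threshold = 80
      thin_pool_autoextend_percent = 2
      auto_activation_volume_list = [ "qubes_dom0/root", "qubes_dom0/pool00",
              "qubes_dom0/swap", "qubes_dom0/vm-example-private" ]
      volume_list = [ "qubes_dom0/root", "qubes_dom0/pool00",
              "qubes_dom0/swap", "qubes_dom0/vm-example-private" ]
  }

A bare "qubes_dom0" entry would match every LV in the group and make the
individual entries redundant. Also note that autoextend only kicks in while
the pool is active and monitored by dmeventd, so it cannot help with a pool
that refuses to activate in the first place.
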
> 
> step 3:
> tried lvextend -L+1G qubes_dom0/pool00_tmeta
> Result:
> metadata reference count differ for block xxxxxx, expected 0, but got 1
> ...
> Check for pool qubes_dom0/pool00 failed (status:1). Manual repair required!
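
The "reference count" lines are thin_check telling you the metadata itself
is damaged, which is also why the lvextend on _tmeta is refused. The manual
route sketched in lvmthin(7) is to repair into a fresh LV and then swap it
into the pool; roughly like the following, with LV names and sizes as
placeholders (thin_repair/thin_check come from thin-provisioning-tools and
should be in the dracut image that runs the pool check). Double-check each
step against the man page before running it on real data:

  # scratch LV that will receive the damaged metadata via the swap below
  lvm lvcreate -L 2G -n meta_old qubes_dom0
  # swap it with the pool's metadata; meta_old then holds the damaged copy
  lvm lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_old
  lvm lvchange -ay qubes_dom0/meta_old
  # second scratch LV to receive the repaired metadata
  lvm lvcreate -L 2G -n meta_new qubes_dom0
  lvm lvchange -ay qubes_dom0/meta_new
  thin_repair -i /dev/qubes_dom0/meta_old -o /dev/qubes_dom0/meta_new
  thin_check /dev/qubes_dom0/meta_new
  # swap the repaired copy into the pool and try activating again
  lvm lvchange -an qubes_dom0/meta_new
  lvm lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_new
  lvm vgchange -ay qubes_dom0
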
> 
> Since I do not know my way around LVM: what do you think would be the
> best way out of this?
> Adding another external PV? Migrating to a bigger PV?
> I have not touched backup or archive out of fear of losing any
> unbacked-up data, of which there happens to be a bit :|
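
Of the two, temporarily adding another PV looks like the least invasive
move to me: it gives the repair room to work without touching the existing
disk, and it can be removed again afterwards. A sketch, assuming a spare
disk or USB stick shows up as /dev/sdX1 (placeholder, check with lsblk
first!):

  lvm pvcreate /dev/sdX1
  lvm vgextend qubes_dom0 /dev/sdX1
  lvm lvconvert --repair qubes_dom0/pool00
  # once everything is recovered and space has been freed:
  lvm pvmove /dev/sdX1
  lvm vgreduce qubes_dom0 /dev/sdX1
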

For some reason I have a "watch -n30 lvs" running in a big terminal. On one of
the top lines I see the usage of the thin pool. Of course this only helps before
the problem...

But I thought some app was monitoring the VG; wasn't there a space warning
before the actual problem?
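
For that kind of watching, the data and metadata fill levels can be pulled
out explicitly; a sketch, assuming the standard dom0 layout:

  lvs -o lv_name,data_percent,metadata_percent qubes_dom0
  # check whether dmeventd is actually monitoring the pool
  lvs -o lv_name,seg_monitor qubes_dom0
  lvchange --monitor y qubes_dom0/pool00

dmeventd normally logs warnings as the pool passes 80/85/90/95% usage, but
only while monitoring is enabled.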

