Qubes OS 4.0

Well, I broke my Qubes.
I was messing with a large vmdk that I had converted to a raw image and 
expanded to 100 GB, on a Volume Group that could not hold it.

Some errors did pop up, but with the space filled, I could not do anything.
After a reboot, no VMs would start.
After attempting a repair with the lvm2 tools, the system will no longer boot.


I don't know enough about thin provisioning. I may have made things worse by 
attempting to add another Physical Volume. The volume I added was a 
re-partitioned disk that had a previous Qubes 4 install on it... which also had 
the same volume group name, qubes_dom0. This caused all sorts of trouble, such 
as the PV showing up as MISSING (which stops most commands from executing). 
With this new PV in the VG, I attempted to expand pool00 and its metadata. 
Extending onto a new PV had resolved things the last time I ran out of space, 
but this time it seemed to make everything much worse.
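In hindsight, I gather the name collision could have been avoided by renaming 
the old install's VG before putting its disk back into service. A sketch of 
what I think that looks like (the device path is illustrative, not my actual 
one):

```shell
# Clone the old install's duplicate qubes_dom0 VG identity under a
# new name, so the two VGs no longer collide (device path assumed):
vgimportclone --basevgname qubes_dom0_old /dev/sdX1
vgscan    # both VGs should now be listed under distinct names
```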

I think I am going to have to reinstall Qubes onto some new drives that I will 
buy and possibly RAID.
My question is, what is the easiest way to recover the qubes?

I do have some backups, but...

If I wanted to move LVs over to the new Qubes, is that as easy as changing the 
VG name, and moving the whole pool?
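I assume a rename would have to come first, since two VGs named qubes_dom0 
cannot coexist on the new system. My rough understanding, with the UUID 
placeholder to be filled in from the vgs output:

```shell
vgs -o vg_name,vg_uuid                 # find the old VG's UUID
vgrename <old-vg-uuid> qubes_dom0_old  # rename by UUID to avoid ambiguity
vgchange -ay qubes_dom0_old            # then activate and inspect it
```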
Should I mount each LV and attempt to import the root.img and private.img 
files? How would I mount those to get to the files, and how would I import 
them into Qubes?
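For the mounting part, my rough understanding — assuming the pool were healthy 
enough to activate, and with an illustrative VM name:

```shell
# -K overrides the activation-skip flag Qubes sets on thin volumes
lvchange -ay -K qubes_dom0/vm-work-private
mkdir -p /mnt/vm-work
mount -o ro /dev/qubes_dom0/vm-work-private /mnt/vm-work
# then copy the files into a freshly created AppVM's private volume
```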

Most LVs say, "Thin's thin-pool needs inspection."
I have tried repairing:

root@kali:~# lvconvert --repair qubes_dom0/pool00
Using default stripesize 64.00 KiB.
terminate called after throwing an instance of 'std::runtime_error'
  what():  transaction_manager::new_block() couldn't allocate new block
  Child 15634 exited abnormally
  Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed (status:-1). Manual repair required!
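From what I have read in lvmthin(7), the manual route swaps the damaged 
metadata out into a visible LV so thin_repair can read it and write a repaired 
copy into a larger one. A sketch with illustrative sizes and names (I would 
image the disks before trying any of this):

```shell
lvcreate -an -L 2G -n meta_tmp qubes_dom0     # inactive scratch LV
# swap: after this, meta_tmp holds the damaged metadata;
# do NOT activate pool00 in this state
lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_tmp
lvchange -ay qubes_dom0/meta_tmp
lvcreate -L 2G -n meta_fixed qubes_dom0
thin_repair -i /dev/qubes_dom0/meta_tmp -o /dev/qubes_dom0/meta_fixed
# swap the repaired copy back in as the pool's metadata
lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_fixed
```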

I could not fix this by adding another disk to the VG.


Here are the outputs you may want:

root@kali:~# lvmdiskscan 
  /dev/nvme0n1   [     232.89 GiB] 
  /dev/loop0     [       2.45 GiB] 
  /dev/mapper/q1 [     231.88 GiB] LVM physical volume
  /dev/nvme0n1p1 [       1.00 GiB] 
  /dev/sda1      [     232.88 GiB] LVM physical volume
  /dev/nvme0n1p2 [     231.88 GiB] 
  /dev/sdb3      [       3.64 TiB] 
  /dev/sdd1      [      58.84 GiB] 
  /dev/sdd4      [     667.00 MiB] 
  0 disks
  7 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes

I got /dev/mapper/q1 after cryptsetup luksOpen /dev/nvme0n1p2 q1

root@kali:~# pvs
  PV             VG         Fmt  Attr PSize   PFree 
  /dev/mapper/q1 qubes_dom0 lvm2 a--  231.88g 15.80g
  /dev/sda1      qubes_dom0 lvm2 a--  232.88g     0 

root@kali:~# vgs
  VG         #PV #LV #SN Attr   VSize   VFree 
  qubes_dom0   2  59   0 wz--n- 464.76g 15.80g

root@kali:~# lvs
  LV     VG         Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 qubes_dom0 twi-cotzM- 425.47g               90.48  99.95
  root   qubes_dom0 Vwi-a-tz-- 192.59g pool00        99.95
  swap   qubes_dom0 -wi-a-----  23.29g
...


Please let me know how much, if anything, is recoverable.
Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/ad5c4410-3371-44d9-a428-37b04619f141%40googlegroups.com.