On 12/12/20 1:16 AM, 'keyandthegate' via qubes-users wrote:
Oops, I forgot I'm using btrfs.
Well, it's not specific to Qubes OS, but maybe you'd like to read this:
Setting up a HA cluster using Xen PVMs recently, I found a bug that
activated one VM on two nodes at the same time... The VM was using BtrFS
as root / boot filesystem with many subvolumes and automatic snapshots
before each software update.
As a result the BtrFS was corrupted, and there was NO way to recover any
of the snapshots or subvolumes. Maybe keep this in mind. In the past I'd
traditionally use separate ext2/3 filesystems for things like /, /boot,
/var, etc., and the chances of recovering something are probably higher
than with BtrFS... Anyway, I just wanted to mention it.
Regards,
Ulrich
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, December 11, 2020 11:14 AM, keyandthegate
<keyandtheg...@protonmail.com> wrote:
Hi, I recently upgraded to a new primary HD, and these are the steps
I've taken:
1. plug the new HD in via USB
2. boot from debian live
3. use dd to copy my entire old HD to new HD
4. use gdisk to convert from MBR to GPT
5. use gparted to move the swap partition to the end of the drive, and
resize the primary partition to use the remaining space
6. swap in the new HD
I read that I need to resize the LVM thin pool, but I'm not seeing the
expected output from lvs.
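From what I've read, after a dd clone the extra space has to be
propagated up through each layer (partition -> LUKS -> pool/filesystem).
Something like the following, where the device mapping and pool names
are my assumptions (the default Qubes names), not taken from my actual
output:

$ sudo cryptsetup resize luks-[...]   # grow the LUKS mapping to fill the enlarged partition
$ # if the root pool is LVM thin:
$ sudo pvresize /dev/mapper/luks-[...]
$ sudo lvextend -l +100%FREE qubes_dom0/pool00
$ # if the root filesystem is btrfs instead:
$ sudo btrfs filesystem resize max /

Is that roughly the right procedure here?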
Existing threads:
https://groups.google.com/g/qubes-users/c/D-on-hSX1Dc/m/Q3rbYGyvAAAJ
https://groups.google.com/g/qubes-users/c/w9CIDaZ3Cc4/m/0xvtMUrIAgAJ
I also have a second 2TB drive with a second pool.
lsblk output:
nvme0n1                    259:0  0  7.3T  0 disk
├─nvme0n1p3                259:3  0 15.4G  0 part
│ └─luks-[...]             253:1  0 15.4G  0 crypt [SWAP]
├─nvme0n1p1                259:1  0    1G  0 part  /boot
└─nvme0n1p2                259:2  0  7.3T  0 part
  └─luks-[..]              253:0  0  7.3T  0 crypt /
[...]
sda                          8:0  0  1.8T  0 disk
└─luks-[...]               253:2  0  1.8T  0 crypt
  ├─qubes-poolhd0_tdata    253:4  0  1.8T  0 lvm
  │ └─qubes-poolhd0-tpool  253:5  0  1.8T  0 lvm
  │   [... my qubes on second HD]
  └─qubes-poolhd0_tmeta    253:3  0  120M  0 lvm
    └─qubes-poolhd0-tpool  253:5  0  1.8T  0 lvm
      [... my qubes on second HD]
[...]
$ qvm-pool -l
NAME DRIVER
varlibqubes file-reflink
linux-kernel linux-kernel
poolhd0_qubes lvm_thin
$ sudo lvs -a
  LV              VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare] qubes ewi------- 120.00m
  poolhd0         qubes twi-aotz--   1.82t             69.41  43.01
  [poolhd0_tdata] qubes Twi-ao----   1.82t
  [poolhd0_tmeta] qubes ewi-ao---- 120.00m
  [... my qubes on second HD]
Where have my Qubes on the first HD gone? They still work, but I don't
see them in the output of these commands.
--
You received this message because you are subscribed to the Google
Groups "qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to qubes-users+unsubscr...@googlegroups.com
To view this discussion on the web visit
https://groups.google.com/d/msgid/qubes-users/HA2V2H7xCHTxhIlQ7HvG9BdLmlOdOsZRfYJeFCkDQANMLsLwg5qBofGGTY388Wg709VswBrbt4f01UylsHfpXSqF2AkqFGYACWxrsnGf8lA%3D%40protonmail.com