I did some checking and my disk is not in the state I expected. (The system
doesn't even know the VG exists in its present state.) See the results:
PV VG Fmt Attr PSize PFree
/dev/md127 onn_vmh lvm2 a-- 222.44g 43.66g
/dev/sdd1 gluster_vg3 lvm2 a-- <4.00g
> >>>"Try to get all data in advance (before deactivating the VG)".
> Can you clarify? What do you mean by this?
Get all necessary info before disabling the VG.
lvdisplay -m /dev/gluster_vg1/lvthinpool
lvdisplay -m /dev/gluster_vg1/lvthinpool_tmeta
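Putting that together, one way to capture everything up front is a small script. This is a dry-run sketch: the run() wrapper only prints each command, so nothing touches the disk until you swap it for real execution; the VG/pool names are the ones from this thread.

```shell
#!/bin/sh
# Dry-run sketch: print the info-gathering commands to run BEFORE
# deactivating the VG. Swap run() to actually execute on the host.
VG=gluster_vg1          # VG name from this thread; adjust to your setup
POOL=lvthinpool

run() { printf '%s\n' "$*"; }   # change to: run() { "$@"; } to execute

run vgcfgbackup -f /root/$VG.vgcfg $VG   # back up VG metadata to a file
run pvs -o +pv_uuid                      # record PV UUIDs
run lvs -a -o +devices $VG               # full LV layout incl. hidden LVs
run lvdisplay -m /dev/$VG/$POOL
run lvdisplay -m /dev/$VG/${POOL}_tmeta
```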
Sorry for the multiple posts. I had so many thoughts rolling around in my
head. I'll try to consolidate my questions here and rephrase the last three
>>>"Try to get all data in advance (before deactivating the VG)".
Can you clarify? What do you mean by this?
>>>"I still can't
Trust Red Hat :)
At least their approach should be safer.
Of course, you can raise a documentation bug, but RHEL 7 is in such a phase
that it might not be fixed unless this is also found in v8.
Strahil Nikolov

On Oct 2, 2019 05:43, jeremy_tourvi...@hotmail.com wrote:
It's strange that it doesn't detect the VG, but it could be related to the
issue. According to this:
id = "WBut10-rAOP-FzA7-bJvr-ZdxL-lB70-jzz1Tv"
status = ["READ", "WRITE"]
flags = 
creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
creation_host = "vmh.cyber-range.lan"
Is this an oVirt Node or a regular CentOS/RHEL?

On Oct 2, 2019 05:06,
> Here is my fstab file:
> # /etc/fstab
> # Created by anaconda on Fri Dec 21 22:26:32 2018
> # Accessible filesystems, by reference, are maintained under '/dev/disk'
> # See man
Try to get all data in advance (before deactivating the VG).
I still can't imagine why the VG will disappear. Try with 'pvscan --cache' to
redetect the PV.
After all, all VG info is in the PVs' headers and should be visible whether
the VG is deactivated or not.
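A minimal re-detection sequence, as a sketch. The run() wrapper prints the commands rather than executing them; on a real host all three are non-destructive scans.

```shell
#!/bin/sh
# Dry-run sketch: print the rescan commands suggested above.
run() { printf '%s\n' "$*"; }   # change to: run() { "$@"; } to execute

run pvscan --cache    # repopulate the device cache from the PV headers
run vgscan            # rescan for volume groups
run vgs               # gluster_vg1 should reappear if the headers are intact
```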
Command to repair a thin pool:
lvconvert --repair VG/ThinPoolLV
Repair performs the following steps:
1. Creates a new, repaired copy of the metadata.
lvconvert runs the thin_repair command to read damaged metadata
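A sketch of the full repair cycle around that command. It is a dry run (the run() wrapper prints each step, since lvconvert --repair must only run on an inactive pool), and the VG/pool names are the ones from this thread.

```shell
#!/bin/sh
# Dry-run sketch of the documented thin-pool repair flow; printed,
# not executed. Adjust names before real use.
VG=gluster_vg1
POOL=lvthinpool

run() { printf '%s\n' "$*"; }   # change to: run() { "$@"; } on the host

run lvchange -an $VG/$POOL            # pool must be inactive first
run lvconvert --repair $VG/$POOL      # writes a new, repaired metadata copy
run lvs -a $VG                        # old metadata is kept as ${POOL}_meta0
run lvchange -ay $VG/$POOL            # reactivate and verify the thin LVs
# Only after everything checks out, drop the preserved old metadata:
run lvremove $VG/${POOL}_meta0
```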
Here is my fstab file:
# Created by anaconda on Fri Dec 21 22:26:32 2018
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
/dev/onn_vmh/ovirt-node-ng-126.96.36.199-0.20181216.0+1 / ext4
I don't know why I didn't think to get some more info regarding my storage
environment and post it here earlier. My gluster_vg1 volume is on /dev/sda1.
I can access the engine storage directory but I think that is because it is not
thin provisioned. I guess I was too bogged down in solving
"lvs -a" does not list the logical volume I am missing.
"lvdisplay -m /dev/gluster_vg1-lvthinpool-tpool_tmeta" does not work either.
Error message is: "Volume group xxx not found. Cannot process volume group xxx."
I am trying to follow the procedure from
You can view all LVs via 'lvs -a' and create a new metadata LV of bigger size.
Of course, lvdisplay -m /dev/gluster_vg1-lvthinpool-tpool_tmeta should also work.
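One common way that "new metadata LV of bigger size" is done is to repair into a spare LV and then swap it into the pool. A hedged dry-run sketch follows; the name tmeta_new and the 1G size are illustrative assumptions, not values from this thread.

```shell
#!/bin/sh
# Dry-run sketch: rebuild thin-pool metadata into a bigger LV and swap
# it in. Printed, not executed; tmeta_new and 1G are assumptions.
VG=gluster_vg1
POOL=lvthinpool

run() { printf '%s\n' "$*"; }   # change to: run() { "$@"; } on the host

run lvchange -an $VG/$POOL                 # pool must be inactive
run lvcreate -L 1G -n tmeta_new $VG        # new, larger metadata LV
run thin_repair -i /dev/mapper/${VG}-${POOL}_tmeta -o /dev/$VG/tmeta_new
run lvconvert --thinpool $VG/$POOL --poolmetadata $VG/tmeta_new   # swap in
run lvchange -ay $VG/$POOL                 # reactivate and verify
```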
Strahil Nikolov

On Oct 1, 2019 03:11, jeremy_tourvi...@hotmail.com wrote:
vgs displays everything EXCEPT gluster_vg1
"dmsetup ls" does not list the VG in question. That is why I couldn't run the
lvchange command. They were not active or even detected by the system.
OK, I found my problem, and a solution:
# cd /var/log
What happens when it complains that there are no VGs?
When you run 'vgs' what is the output?
Also, take a look into
I have the feeling that you need to disable all lvs - not only the thin pool,
but also the thin LVs
Yes, I can take the downtime. Actually, I don't have any choice at the moment
because it is a single-node setup. :) From the research I have performed, I
think this is a distributed volume. I posted the lvchange command in my last
post; this was the result. I ran the command lvchange -an
Can you suffer downtime ?
You can try something like this (I'm improvising):
Set to global maintenance (either via UI or hosted-engine --set-maintenance
Stop the engine.
Stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd.
Stop all gluster processes via thr
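The improvised sequence above, as a dry-run sketch (printed, not executed; the final, truncated step about the remaining gluster processes is left out rather than guessed at):

```shell
#!/bin/sh
# Dry-run sketch of the maintenance sequence from the message above.
run() { printf '%s\n' "$*"; }   # change to: run() { "$@"; } on the host

run hosted-engine --set-maintenance --mode=global   # global maintenance
run hosted-engine --vm-shutdown                     # stop the engine VM
for svc in ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd; do
    run systemctl stop "$svc"
done
# The original message also stops the remaining gluster processes; that
# line is truncated in the archive, so it is intentionally omitted here.
```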
Thank you for the reply. Please pardon my ignorance, I'm not very good with
GlusterFS. I don't think this is a replicated volume (though I could be wrong)
I built a single node hyperconverged hypervisor. I was reviewing my gdeploy
file from when I originally built the system. I have the
If it's a replicated volume, then you can safely rebuild your bricks and not
even try to repair. There is no guarantee that the issue will not reoccur.
Strahil Nikolov

On Sep 29, 2019 00:22, jeremy_tourvi...@hotmail.com wrote:
> I see evidence that appears to be a problem with