It seems the Gluster node that booted into emergency mode after the 4.5 upgrade did so because it could not find the mounts for the gluster_bricks data and engine bricks; it's as if it can't see them any more. Once I removed those mounts from /etc/fstab the node boots normally and lets me in.
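
For reference, the entries I removed look roughly like this (the mount options here are only my guess, not copied from the file; adding nofail would at least keep a missing brick from dropping the node into emergency mode on the next boot):

# /etc/fstab brick entries (options are assumed, not the originals)
UUID=45349e37-7a97-4a64-81eb-5fac3fec6477  /gluster_bricks/engine  xfs  defaults,noatime,nofail  0 0
UUID=c0fa1111-0db2-4e08-bb79-be677530aeaa  /gluster_bricks/data    xfs  defaults,noatime,nofail  0 0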

This is node 3 of the Gluster cluster, the arbiter node.

[root@ovirt-3 ~]# mount -a
mount: /gluster_bricks/engine: can't find UUID=45349e37-7a97-4a64-81eb-5fac3fec6477.
mount: /gluster_bricks/data: can't find UUID=c0fa1111-0db2-4e08-bb79-be677530aeaa.
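
In a hyperconverged oVirt setup the bricks normally sit on LVM logical volumes inside the gluster volume group, so my guess is the LVs are simply not being activated at boot, which is why the filesystem UUIDs never show up. Something like this should confirm it (just a sketch; the grep patterns are the UUIDs from the errors above):

# Do any block devices still expose the UUIDs that fstab expects?
blkid | grep -E '45349e37|c0fa1111'
lsblk -f

# Check the LVM state; inactive LVs will not expose their filesystem UUIDs
pvs
vgs
lvscan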


/dev/sda4 is where the data and engine bricks live. I was basically trying to give it a new UUID by creating a new volume group for Gluster, but that fails with the error below; it almost sounds as if the partition is corrupted or something.
  Physical volume '/dev/sda4' is already in volume group 'gluster_vg_sda4'
  Unable to add physical volume '/dev/sda4' to volume group 'gluster_vg_sda4'
  /dev/sda4: physical volume not initialized.
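
Reading that again, /dev/sda4 apparently still belongs to the existing gluster_vg_sda4 volume group, so instead of creating a new VG it is probably worth trying to activate the one that is already there and see whether the brick filesystems (and their UUIDs) come back. Roughly (the LV names under that VG are whatever the deployment created, I have not confirmed them):

# Activate the existing volume group and list its logical volumes
vgchange -ay gluster_vg_sda4
lvs -o lv_name,lv_attr,lv_size gluster_vg_sda4

# If the LVs come up, their filesystem UUIDs should be visible again
blkid | grep gluster_vg_sda4

# and the original fstab entries should mount once restored
mount -a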


[root@ovirt-3 ~]# dnf -q list installed centos-release\* ovirt-release\* ovirt-engine
Installed Packages
centos-release-advanced-virtualization.noarch   1.0-4.el8       @@commandline
centos-release-ceph-pacific.noarch              1.0-2.el8       @System
centos-release-gluster10.noarch                 1.0-1.el8s      @System
centos-release-messaging.noarch                 1-3.el8         @@commandline
centos-release-nfv-common.noarch                1-3.el8         @System
centos-release-nfv-openvswitch.noarch           1-3.el8         @System
centos-release-openstack-xena.noarch            1-1.el8         @@commandline
centos-release-opstools.noarch                  1-12.el8        @System
centos-release-ovirt45.noarch                   8.6-1.el8       @@commandline
centos-release-rabbitmq-38.noarch               1-3.el8         @@commandline
centos-release-storage-common.noarch            2-2.el8         @System
centos-release-virt-common.noarch               1-2.el8         @System
ovirt-release-host-node.x86_64                  4.5.0.1-1.el8   @System

[root@ovirt-3 ~]#  rpm -qa | grep gluster
glusterfs-server-10.1-1.el8s.x86_64
glusterfs-selinux-2.0.1-1.el8s.noarch
glusterfs-client-xlators-10.1-1.el8s.x86_64
qemu-kvm-block-gluster-6.2.0-5.module_el8.6.0+1087+b42c8331.x86_64
vdsm-gluster-4.50.0.13-1.el8.x86_64
gluster-ansible-maintenance-1.0.1-12.el8.noarch
centos-release-gluster10-1.0-1.el8s.noarch
libvirt-daemon-driver-storage-gluster-8.0.0-2.module_el8.6.0+1087+b42c8331.x86_64
python3-gluster-10.1-1.el8s.x86_64
glusterfs-geo-replication-10.1-1.el8s.x86_64
gluster-ansible-cluster-1.0-4.el8.noarch
gluster-ansible-repositories-1.0.1-5.el8.noarch
libglusterfs0-10.1-1.el8s.x86_64
glusterfs-fuse-10.1-1.el8s.x86_64
glusterfs-events-10.1-1.el8s.x86_64
gluster-ansible-features-1.0.5-12.el8.noarch
gluster-ansible-roles-1.0.5-26.el8.noarch
glusterfs-10.1-1.el8s.x86_64
libglusterd0-10.1-1.el8s.x86_64
gluster-ansible-infra-1.0.4-20.el8.noarch
glusterfs-cli-10.1-1.el8s.x86_64
[root@ovirt-3 ~]#