Re: [ovirt-users] Gluster Data domain not correctly set up at boot
On 04/01/2016 18:27, Nir Soffer wrote:
> On Mon, Jan 4, 2016 at 6:36 PM, Stefano Danzi wrote:
>> [...]
>
> This sounds like https://bugzilla.redhat.com/1271771
>
> This patch may fix this: https://gerrit.ovirt.org/#/c/27334/
>
> Would you like to test it?

I patched vdsm and now the gluster storage domain works at boot.

> To dig deeper, we need the logs:
> - /var/log/vdsm/vdsm.log (the one showing this timeframe)
> - /var/log/sanlock.log
> - /var/log/messages
> - /var/log/glusterfs/:-.log
>
> Nir
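For anyone who wants to try the same fix, a minimal sketch of pulling a Gerrit change onto a local vdsm checkout follows. The refs/changes path uses Gerrit's convention (last two digits of the change number, then the change number, then a patchset number); the trailing /1 patchset is an assumption, so take the exact ref from the Download box on the change page.

# Clone vdsm and fetch the proposed change from oVirt's Gerrit.
git clone https://gerrit.ovirt.org/vdsm && cd vdsm
# NOTE: the trailing /1 (patchset number) is a guess; copy the exact ref
# shown on https://gerrit.ovirt.org/#/c/27334/
git fetch https://gerrit.ovirt.org/vdsm refs/changes/34/27334/1
git checkout FETCH_HEAD

# Rebuild/reinstall vdsm for your distro, then restart the daemon so the
# patched code runs at the next storage domain activation:
systemctl restart vdsmd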
Re: [ovirt-users] Gluster Data domain not correctly set up at boot
On Mon, Jan 4, 2016 at 6:36 PM, Stefano Danzi wrote:
> I have one testing host (only one) with hosted engine and a gluster data
> domain (on the same machine).
>
> When I start the host and the engine, I can see the data domain active
> ("up and green"), but in the event list I get:
>
> - Invalid status on Data Center Default. Setting status to Non Responsive
> - Storage Domain Data (Data Center Default) was deactivated by system
>   because it's not visible by any of the hosts
>
> If I try to start a VM I get:
>
> - Failed to run onSalesSRV on Host ovirt01
> - VM onSalesSRV is down with error. Exit message: Cannot access storage
>   file '/rhev/data-center/0002-0002-0002-0002-01ef/f739b27a-35bf-49c7-a95b-a92ec5c10320/images..
>
> The gluster volume is correctly mounted:
>
> [root@ovirt01 ~]# df -h
> Filesystem                                    Size  Used Avail Use% Mounted on
> /dev/mapper/centos_ovirt01-root                50G   18G   33G  35% /
> devtmpfs                                      7.8G     0  7.8G   0% /dev
> tmpfs                                         7.8G     0  7.8G   0% /dev/shm
> tmpfs                                         7.8G   17M  7.8G   1% /run
> tmpfs                                         7.8G     0  7.8G   0% /sys/fs/cgroup
> /dev/mapper/centos_ovirt01-home                10G  1.3G  8.8G  13% /home
> /dev/mapper/centos_ovirt01-glusterOVEngine     50G   11G   40G  22% /home/glusterfs/engine
> /dev/md0                                      494M  244M  251M  50% /boot
> /dev/mapper/centos_ovirt01-glusterOVData      500G  135G  366G  27% /home/glusterfs/data
> ovirt01.hawai.lan:/engine                      50G   11G   40G  22% /rhev/data-center/mnt/ovirt01.hawai.lan:_engine
> tmpfs                                         1.6G     0  1.6G   0% /run/user/0
> ovirtbk-sheng.hawai.lan:/var/lib/exports/iso   22G  7.6G   15G  35% /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso
> ovirt01.hawai.lan:/data                       500G  135G  366G  27% /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data
>
> But the link in '/rhev/data-center/0' is missing:
>
> [root@ovirt01 ~]# ls -la /rhev/data-center/0002-0002-0002-0002-01ef/
> total 0
> drwxr-xr-x. 2 vdsm kvm 64 Jan  4 14:31 .
> drwxr-xr-x. 4 vdsm kvm 59 Jan  4 14:31 ..
> lrwxrwxrwx. 1 vdsm kvm 84 Jan  4 14:31 46f55a31-f35f-465c-b3e2-df45c05e06a7 ->
>   /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
> lrwxrwxrwx. 1 vdsm kvm 84 Jan  4 14:31 mastersd ->
>   /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
>
> If I put the data domain into maintenance mode and reactivate it, I can
> run the VMs. The mounted filesystems are the same, but now I have the
> links in /rhev/data-center/ :
>
> [root@ovirt01 ~]# ls -la /rhev/data-center/0002-0002-0002-0002-01ef/
> total 4
> drwxr-xr-x. 2 vdsm kvm 4096 Jan  4 17:10 .
> drwxr-xr-x. 4 vdsm kvm   59 Jan  4 17:10 ..
> lrwxrwxrwx. 1 vdsm kvm   84 Jan  4 14:31 46f55a31-f35f-465c-b3e2-df45c05e06a7 ->
>   /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
> lrwxrwxrwx. 1 vdsm kvm  103 Jan  4 17:10 837f-d2d4-4684-a389-ac1adb050fa8 ->
>   /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso/837f-d2d4-4684-a389-ac1adb050fa8
> lrwxrwxrwx. 1 vdsm kvm   92 Jan  4 17:10 f739b27a-35bf-49c7-a95b-a92ec5c10320 ->
>   /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data/f739b27a-35bf-49c7-a95b-a92ec5c10320
> lrwxrwxrwx. 1 vdsm kvm   84 Jan  4 14:31 mastersd ->
>   /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7

This sounds like https://bugzilla.redhat.com/1271771

This patch may fix this: https://gerrit.ovirt.org/#/c/27334/

Would you like to test it?
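The symptom above (gluster mount present, pool symlink absent) can be spotted at boot with a small script. A minimal sketch, not vdsm's own logic, using the pool UUID from the ls output above:

#!/bin/bash
# Compare storage domains mounted under /rhev/data-center/mnt with the
# UUID symlinks vdsm is expected to create in the pool directory.
# Pool UUID taken from the output above (assumes a single data center).
POOL=/rhev/data-center/0002-0002-0002-0002-01ef

for dom in /rhev/data-center/mnt/*/*-*-*-*-* \
           /rhev/data-center/mnt/glusterSD/*/*-*-*-*-*; do
    [ -d "$dom" ] || continue          # skip unexpanded globs
    uuid=$(basename "$dom")
    # Report any mounted domain that lacks its pool symlink.
    [ -L "$POOL/$uuid" ] || echo "missing link: $POOL/$uuid -> $dom"
done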
To dig deeper, we need the logs:
- /var/log/vdsm/vdsm.log (the one showing this timeframe)
- /var/log/sanlock.log
- /var/log/messages
- /var/log/glusterfs/:-.log

Nir
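Bundling those logs for the list can be done in one command. A sketch, using a wildcard for the glusterfs log because the client log's exact file name is derived from the mount point and varies per host:

# Collect the requested logs into a single tarball to attach to the report.
tar czf ovirt-boot-logs.tar.gz \
    /var/log/vdsm/vdsm.log \
    /var/log/sanlock.log \
    /var/log/messages \
    /var/log/glusterfs/*.log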
[ovirt-users] Gluster Data domain not correctly set up at boot
I have one testing host (only one) with hosted engine and a gluster data
domain (on the same machine).

When I start the host and the engine, I can see the data domain active
("up and green"), but in the event list I get:

- Invalid status on Data Center Default. Setting status to Non Responsive
- Storage Domain Data (Data Center Default) was deactivated by system
  because it's not visible by any of the hosts

If I try to start a VM I get:

- Failed to run onSalesSRV on Host ovirt01
- VM onSalesSRV is down with error. Exit message: Cannot access storage
  file '/rhev/data-center/0002-0002-0002-0002-01ef/f739b27a-35bf-49c7-a95b-a92ec5c10320/images..

The gluster volume is correctly mounted:

[root@ovirt01 ~]# df -h
Filesystem                                    Size  Used Avail Use% Mounted on
/dev/mapper/centos_ovirt01-root                50G   18G   33G  35% /
devtmpfs                                      7.8G     0  7.8G   0% /dev
tmpfs                                         7.8G     0  7.8G   0% /dev/shm
tmpfs                                         7.8G   17M  7.8G   1% /run
tmpfs                                         7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_ovirt01-home                10G  1.3G  8.8G  13% /home
/dev/mapper/centos_ovirt01-glusterOVEngine     50G   11G   40G  22% /home/glusterfs/engine
/dev/md0                                      494M  244M  251M  50% /boot
/dev/mapper/centos_ovirt01-glusterOVData      500G  135G  366G  27% /home/glusterfs/data
ovirt01.hawai.lan:/engine                      50G   11G   40G  22% /rhev/data-center/mnt/ovirt01.hawai.lan:_engine
tmpfs                                         1.6G     0  1.6G   0% /run/user/0
ovirtbk-sheng.hawai.lan:/var/lib/exports/iso   22G  7.6G   15G  35% /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso
ovirt01.hawai.lan:/data                       500G  135G  366G  27% /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data

But the link in '/rhev/data-center/0' is missing:

[root@ovirt01 ~]# ls -la /rhev/data-center/0002-0002-0002-0002-01ef/
total 0
drwxr-xr-x. 2 vdsm kvm 64 Jan  4 14:31 .
drwxr-xr-x. 4 vdsm kvm 59 Jan  4 14:31 ..
lrwxrwxrwx. 1 vdsm kvm 84 Jan  4 14:31 46f55a31-f35f-465c-b3e2-df45c05e06a7 ->
  /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
lrwxrwxrwx. 1 vdsm kvm 84 Jan  4 14:31 mastersd ->
  /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7

If I put the data domain into maintenance mode and reactivate it, I can
run the VMs. The mounted filesystems are the same, but now I have the
links in /rhev/data-center/ :

[root@ovirt01 ~]# ls -la /rhev/data-center/0002-0002-0002-0002-01ef/
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan  4 17:10 .
drwxr-xr-x. 4 vdsm kvm   59 Jan  4 17:10 ..
lrwxrwxrwx. 1 vdsm kvm   84 Jan  4 14:31 46f55a31-f35f-465c-b3e2-df45c05e06a7 ->
  /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
lrwxrwxrwx. 1 vdsm kvm  103 Jan  4 17:10 837f-d2d4-4684-a389-ac1adb050fa8 ->
  /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso/837f-d2d4-4684-a389-ac1adb050fa8
lrwxrwxrwx. 1 vdsm kvm   92 Jan  4 17:10 f739b27a-35bf-49c7-a95b-a92ec5c10320 ->
  /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data/f739b27a-35bf-49c7-a95b-a92ec5c10320
lrwxrwxrwx. 1 vdsm kvm   84 Jan  4 14:31 mastersd ->
  /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
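The maintenance-and-reactivate workaround can also be scripted against the engine instead of clicking through the UI. A sketch assuming the oVirt 3.x-era REST API; ENGINE, the credentials, DC_ID, and SD_ID are placeholders that must be looked up first (e.g. via GET /api/datacenters and GET /api/storagedomains):

# Hypothetical values; substitute your engine URL, credentials, and the
# data center / storage domain UUIDs returned by the API.
ENGINE="https://engine.hawai.lan"
DC_ID="<datacenter-uuid>"
SD_ID="<storagedomain-uuid>"

# Put the attached data domain into maintenance...
curl -k -u "admin@internal:password" -H "Content-Type: application/xml" \
     -d "<action/>" "$ENGINE/api/datacenters/$DC_ID/storagedomains/$SD_ID/deactivate"

# ...then reactivate it, which makes vdsm recreate the pool symlinks.
curl -k -u "admin@internal:password" -H "Content-Type: application/xml" \
     -d "<action/>" "$ENGINE/api/datacenters/$DC_ID/storagedomains/$SD_ID/activate"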