[ovirt-users] Re: ovirt upgrade 4.1 -> 4.2: host bricks down
Updated hosts and engine. Ran engine-setup, but the version still shows 4.2.3.8-1.el7. The issue has been resolved, though: the bricks now show as up.

Thanx,
Alex

On Wed, Jun 20, 2018 at 10:48 AM, Alex K wrote:
> I am running 4.2.3.8-1.el7.
> I will upgrade and check.
>
> Thanx,
> Alex
>
> On Tue, Jun 19, 2018 at 11:59 AM, Sahina Bose wrote:
>> Which version of 4.2? This issue is fixed with 4.2.4.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/O5JCX5NU7P3U2PAVX7HXXXF5RZOU57NQ/
[ovirt-users] Re: ovirt upgrade 4.1 -> 4.2: host bricks down
I am running 4.2.3.8-1.el7. I will upgrade and check.

Thanx,
Alex

On Tue, Jun 19, 2018 at 11:59 AM, Sahina Bose wrote:
> Which version of 4.2? This issue is fixed with 4.2.4.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PMVE6UGFAL5SOWFP6M6XF46WVHEGF6VD/
[ovirt-users] Re: ovirt upgrade 4.1 -> 4.2: host bricks down
On Sat, Jun 16, 2018 at 5:34 PM, Alex K wrote:

> Hi all,
>
> I have an oVirt 2-node cluster for testing, with a self-hosted engine on top of
> gluster.
>
> The cluster was running 4.1. After the upgrade to 4.2, which generally went
> smoothly, I am seeing that the bricks of one of the hosts (v1) are detected as
> down, while gluster is fine when checked from the command line and all volumes
> are mounted.
>
> Below is the error that the engine logs:
>
> 2018-06-17 00:21:26,309+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [98d7e79] Error while refreshing brick statuses for volume 'vms' of cluster 'test': null
> 2018-06-17 00:21:26,318+03 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler2) [98d7e79] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = v0.test-group.com, VdsIdVDSCommandParametersBase:{hostId='d5a96118-ca49-411f-86cb-280c7f9c421f'})' execution failed: null
> 2018-06-17 00:21:26,323+03 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler2) [98d7e79] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = v1.test-group.com, VdsIdVDSCommandParametersBase:{hostId='12dfea4a-8142-484e-b912-0cbd5f281aba'})' execution failed: null
> 2018-06-17 00:21:27,015+03 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler9) [426e7c3d] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[0002-0002-0002-0002-017a=GLUSTER]', sharedLocks=''}'
> 2018-06-17 00:21:27,926+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [98d7e79] Error while refreshing brick statuses for volume 'engine' of cluster 'test': null
>
> Apart from this, everything else is operating normally and VMs are running on
> both hosts.

Which version of 4.2? This issue is fixed with 4.2.4.

> Any idea how to isolate this issue?
>
> Thanx,
> Alex
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/J3MQD5KRVIRHFCD3I54P5PHCQCCZ3ETG/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ENFPMNF4CC5KKL6CK4P2E5CBGP3WO3MA/
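The GlusterSyncJob failures quoted in the thread can be spotted by filtering the engine log (on the engine host this is normally /var/log/ovirt-engine/engine.log). A minimal sketch, using a sample file inlined from the log lines above since the real log is not available here:

```shell
# Minimal sketch: count brick-status refresh failures from GlusterSyncJob.
# /tmp/engine.log.sample stands in for /var/log/ovirt-engine/engine.log.
cat > /tmp/engine.log.sample <<'EOF'
2018-06-17 00:21:26,309+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [98d7e79] Error while refreshing brick statuses for volume 'vms' of cluster 'test': null
2018-06-17 00:21:27,015+03 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler9) [426e7c3d] Failed to acquire lock and wait lock
2018-06-17 00:21:27,926+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2) [98d7e79] Error while refreshing brick statuses for volume 'engine' of cluster 'test': null
EOF

# Count how many brick-status refreshes failed (one line per failure).
grep -c 'Error while refreshing brick statuses' /tmp/engine.log.sample
```

As the thread notes, brick health can also be cross-checked on a host with the Gluster CLI (`gluster volume status` and `gluster peer status`), which is how the reporter confirmed the bricks were actually up despite the engine showing them down.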