[ovirt-users] Re: alertMessage, [Warning! Low confirmed free space on gluster volume M2Stick1]

2019-03-11 Thread Sahina Bose
+Denis Chapligin

On Wed, Mar 6, 2019 at 2:03 PM Robert O'Kane  wrote:
>
> Hello,
>
> With my first "in Ovirt" made Gluster Storage I am getting some annoying 
> Warnings.
>
> On the hypervisor(s), engine.log shows:
>
> 2019-03-05 13:07:45,281+01 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = Hausesel3, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='d7db584e-03e3-4a37-abc7-73012a9f5ba8', volumeName='M2Stick1'}), log id: 74482de6
> 2019-03-05 13:07:46,814+01 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler10) [6d40c5d0] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[27f8ed93-c857-41ae-af16-e1af9f0b62d4=GLUSTER]', sharedLocks=''}'
> 2019-03-05 13:07:46,823+01 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@868edb00, log id: 74482de6
>
>
> I find no other correlated messages in the Gluster logs.  Where else should I 
> look?
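>
> (A minimal sketch of further places to check, assuming the gluster CLI is
> available on one of the brick hosts and that engine.log sits in its default
> location on the engine machine; the brick path below is only a placeholder:)
>
>    # per-brick usage as Gluster itself reports it for this volume
>    gluster volume status M2Stick1 detail
>    # actual filesystem usage on a brick mount point
>    df -h /gluster/M2Stick1/brick
>    # on the engine: every line sharing the correlation id from the snippet above
>    grep 59957167 /var/log/ovirt-engine/engine.log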
>
> Everything seems to work very well; it is just these warnings, which only
> worry me because of the "Failed to acquire lock" messages.
> This is one of 3 Gluster storage domains. The other 2 were hand-made, have
> existed since oVirt 3.5, and show no such messages.
>
>
> 1x standalone engine
> 6x hypervisors in 2 clusters.
>
> One other special condition:
>
> I am in the process of moving my VMs to a second cluster (same data center)
> with a different Gluster network defined (new 10Gb cards).
> All hypervisors see all networks, but since there is only one SPM, the SPM is
> never a Gluster peer of all domains, because of the "only one Gluster network
> per cluster" restriction. Is this the problem? (A peer check is sketched below.)
> There is another hand-made domain in the new cluster, but it does not have
> any problems. The only difference between the two is that the new domain was
> created through the oVirt web interface.
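>
> (The peer check, as a sketch assuming the gluster CLI is available on one of
> the brick hosts; M2Stick1 is the volume from the warning above:)
>
>    # peers known to this host and their connection state
>    gluster peer status
>    # compact view of the whole trusted storage pool, including this host
>    gluster pool list
>    # which addresses the volume's bricks are actually defined on
>    gluster volume info M2Stick1 | grep -i brick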
>
> Cheers,
>
> Robert O'Kane
>
>
>
> engine:
>
> libgovirt-0.3.4-1.el7.x86_64
> libvirt-bash-completion-4.5.0-10.el7_6.4.x86_64
> libvirt-client-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-interface-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-network-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-qemu-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-secret-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.4.x86_64
> libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
> libvirt-glib-1.0.0-1.el7.x86_64
> libvirt-libs-4.5.0-10.el7_6.4.x86_64
> libvirt-python-4.5.0-1.el7.x86_64
> ovirt-ansible-cluster-upgrade-1.1.10-1.el7.noarch
> ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
> ovirt-ansible-engine-setup-1.1.6-1.el7.noarch
> ovirt-ansible-hosted-engine-setup-1.0.2-1.el7.noarch
> ovirt-ansible-image-template-1.1.9-1.el7.noarch
> ovirt-ansible-infra-1.1.10-1.el7.noarch
> ovirt-ansible-manageiq-1.1.13-1.el7.noarch
> ovirt-ansible-repositories-1.1.3-1.el7.noarch
> ovirt-ansible-roles-1.1.6-1.el7.noarch
> ovirt-ansible-shutdown-env-1.0.0-1.el7.noarch
> ovirt-ansible-v2v-conversion-host-1.9.0-1.el7.noarch
> ovirt-ansible-vm-infra-1.1.12-1.el7.noarch
> ovirt-cockpit-sso-0.0.4-1.el7.noarch
> ovirt-engine-4.2.8.2-1.el7.noarch
> ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch
> ovirt-engine-backend-4.2.8.2-1.el7.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
> ovirt-engine-dashboard-1.2.4-1.el7.noarch
> ovirt-engine-dbscripts-4.2.8.2-1.el7.noarch
> ovirt-engine-dwh-4.2.4.3-1.el7.noarch
> ovirt-engine-dwh-setup-4.2.4.3-1.el7.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.noarch
> ovirt-engine-extension-aaa-ldap-1.3.8-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-setup-1.3.8-1.el7.noarch
> ovirt-engine-extensions-api-impl-4.2.8.2-1.el7.noarch
> ovirt-engine-lib-4.2.8.2-1.el7.noarch
> ovirt-engine-metrics-1.1.8.1-1.el7.noarch
> ovirt-engine-restapi-4.2.8.2-1.el7.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
> ovirt-engine-setup-4.2.8.2-1.el7.noarch
> ovirt-engine-setup-base-4.2.8.2-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.2.8.2-1.el7.noarch
> 

[ovirt-users] Re: alertMessage, [Warning! Low confirmed free space on gluster volume M2Stick1]

2019-03-06 Thread Robert O'Kane

I forgot the Gluster versions:

Hypervisors:

glusterfs-3.12.15-1.el7.x86_64
glusterfs-api-3.12.15-1.el7.x86_64
glusterfs-cli-3.12.15-1.el7.x86_64
glusterfs-client-xlators-3.12.15-1.el7.x86_64
glusterfs-events-3.12.15-1.el7.x86_64
glusterfs-fuse-3.12.15-1.el7.x86_64
glusterfs-geo-replication-3.12.15-1.el7.x86_64
glusterfs-gnfs-3.12.15-1.el7.x86_64
glusterfs-libs-3.12.15-1.el7.x86_64
glusterfs-rdma-3.12.15-1.el7.x86_64
glusterfs-server-3.12.15-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
python2-gluster-3.12.15-1.el7.x86_64
vdsm-gluster-4.20.46-1.el7.x86_64

engine:

glusterfs-3.12.15-1.el7.x86_64
glusterfs-api-3.12.15-1.el7.x86_64
glusterfs-cli-3.12.15-1.el7.x86_64
glusterfs-client-xlators-3.12.15-1.el7.x86_64
glusterfs-libs-3.12.15-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64






