I've checked the ids files in /rhev/data-center/mnt/glusterSD/*/dom_md/:
# -rw-rw. 1 vdsm kvm 1048576 Mar 12 05:14 ids
The ownership (vdsm:kvm) seems OK.
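Mode bits and ownership are only part of what sanlock needs to open that file. A few extra checks worth running (the path is an example for the standard oVirt gluster mount layout; `ls -lZ` also shows the SELinux context, and enforcing-mode SELinux can deny access even when the mode bits look right):

```shell
# Ownership, mode, and SELinux context of the lockspace file
# (example glob; adjust to your storage domain path):
ls -lZ /rhev/data-center/mnt/glusterSD/*/dom_md/ids

# Enforcing SELinux can return "Permission denied" despite correct mode bits:
getenforce

# Which lockspaces the sanlock daemon currently holds:
sanlock client status
```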
sanlock.log shows:
---
r14 acquire_token open error -13
r14 cmd_acquire 2,11,89283 acquire_token -13
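Those sanlock return codes are negated POSIX errno values, so -13 from `acquire_token open error` is EACCES ("Permission denied") at open() time, which points at permissions, ownership, or SELinux rather than a corrupted lockspace. A quick way to decode any such code with the standard library (assumption: python3 is installed; on a stock CentOS 7 host substitute `python`):

```shell
# sanlock logs -errno; decode errno 13 to its message:
python3 -c 'import os; print(os.strerror(13))'
# -> Permission denied
```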
Now I'm not quite sure which direction…
On Fri, Mar 10, 2017 at 12:39 PM, Martin Sivak wrote:
> Hi Ian,
>
> it is normal that VDSMs are competing for the lock, one should win
> though. If that is not the case then the lockspace might be corrupted
> or the sanlock daemons can't reach it.
>
> I would recommend putting the cluster to global maintenance and
> attempting a manual start using:
---
Hi Ian,
it is normal that VDSMs are competing for the lock, one should win
though. If that is not the case then the lockspace might be corrupted
or the sanlock daemons can't reach it.
I would recommend putting the cluster to global maintenance and
attempting a manual start using:
# hosted-engine --vm-start
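The flow Martin describes would look roughly like this (a sketch only, run on one host; these are the standard hosted-engine CLI flags, but verify against your 4.0 installation):

```shell
# Stop the HA agents from competing over the engine VM:
hosted-engine --set-maintenance --mode=global

# Attempt a manual start, then watch the engine VM state:
hosted-engine --vm-start
hosted-engine --vm-status

# Once the engine is healthy again, hand control back to the agents:
hosted-engine --set-maintenance --mode=none
```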
---
Hi All,
I had a storage issue with my Gluster volumes running under oVirt hosted engine.
I now cannot start the hosted engine manager VM with "hosted-engine
--vm-start".
I've scoured the net to find a way, but can't seem to find anything
concrete.
Running CentOS 7, oVirt 4.0 and Gluster 3.8.9.
How do I re…
---
I just noticed this in the vdsm logs. The agent looks like it is trying to
start the hosted engine on both machines??

Thread-7517::ERROR::2017-03-10
01:26:13,053::vm::773::virt.vm::(_startUnderlyingVm)
vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::The vm start process failed
Traceback…
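The sanlock lease is exactly what should stop two hosts from starting the engine VM at once, so a start attempt on both machines plus the acquire errors above points back at the lockspace. It can be inspected read-only from either host (a sketch; the path is an example, and `sanlock direct dump` only prints the on-disk lease records):

```shell
# Dump the on-disk lockspace/lease records (read-only; example path):
sanlock direct dump /rhev/data-center/mnt/glusterSD/<domain>/dom_md/ids

# Compare what each host's HA agent thinks is going on:
hosted-engine --vm-status
```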