[ovirt-users] Can't access ovirt-engine webpage

2017-06-04 Thread Thomas Wakefield
After a reboot I can't access the ovirt-engine management webpage anymore.


server.log line that looks bad:

2017-06-04 21:35:55,652-04 ERROR [org.jboss.as.controller.management-operation] 
(DeploymentScanner-threads - 2) WFLYCTL0190: Step handler 
org.jboss.as.server.deployment.DeploymentHandlerUtil$5@61b686ed for operation 
{"operation" => "undeploy","address" => [("deployment" => 
"engine.ear")],"owner" => [("subsystem" => "deployment-scanner"),("scanner" => 
"default")]} at address [("deployment" => "engine.ear")] failed handling 
operation rollback -- java.lang.IllegalStateException: WFLYCTL0345: Timeout 
after 5 seconds waiting for existing service service 
jboss.deployment.unit."engine.ear".contents to be removed so a new instance can 
be installed.: java.lang.IllegalStateException: WFLYCTL0345: Timeout after 5 
seconds waiting for existing service service 
jboss.deployment.unit."engine.ear".contents to be removed so a new instance can 
be installed.
at 
org.jboss.as.controller.OperationContextImpl$ContextServiceBuilder.install(OperationContextImpl.java:2107)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.server.deployment.PathContentServitor.addService(PathContentServitor.java:50)
 [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.server.deployment.DeploymentHandlerUtil.doDeploy(DeploymentHandlerUtil.java:165)
 [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.server.deployment.DeploymentHandlerUtil$5$1.handleResult(DeploymentHandlerUtil.java:333)
 [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext$Step.invokeResultHandler(AbstractOperationContext.java:1384)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1366)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:1328)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:1311)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext$Step.access$300(AbstractOperationContext.java:1185)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext.executeResultHandlerPhase(AbstractOperationContext.java:767)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:644)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1329)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:400)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:222)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:756)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:750)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at java.security.AccessController.doPrivileged(Native Method) 
[rt.jar:1.8.0_131]
at 
org.jboss.as.controller.ModelControllerImpl$3$1.run(ModelControllerImpl.java:750)
 [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[rt.jar:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[rt.jar:1.8.0_131]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [rt.jar:1.8.0_131]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [rt.jar:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[rt.jar:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[rt.jar:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
at org.jboss.threads.JBossThread.run(JBossThread.java:320) 
[jboss-threads-2.2.1.Final.jar:2.2.1.Final]

2017-06-04 21:36:00,654-04 ERROR [org.jboss.as.controller.management-operation] 
(DeploymentScanner-threads - 2) WFLYCTL0349: Timeout after [5] seconds waiting 
for service container stability while 
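
A first-pass check on a standard ovirt-engine install (the log paths below are
the defaults and may differ on other setups) is to restart the engine service
and watch whether engine.ear redeploys cleanly:

systemctl status ovirt-engine
systemctl restart ovirt-engine
# server.log shows the WildFly/deployment side, engine.log the application side
tail -f /var/log/ovirt-engine/server.log /var/log/ovirt-engine/engine.log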

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Maor Lipchuk
On Sun, Jun 4, 2017 at 8:51 PM, Abi Askushi  wrote:
> I clean installed everything and ran into the same.
> I then ran gdeploy and encountered the same issue when deploying engine.
> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it
> has to do with alignment. The weird thing is that gluster volumes are all
> ok, replicating normally and no split brain is reported.
>
> The solution to the mentioned bug (1386443) was to format with 512 sector
> size, which for my case is not an option:
>
> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
> illegal sector size 512; hw sector is 4096
>
> Is there any workaround to address this?
>
> Thanx,
> Alex
>
>
> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi  wrote:
>>
>> Hi Maor,
>>
>> My disk are of 4K block size and from this bug seems that gluster replica
>> needs 512B block size.
>> Is there a way to make gluster function with 4K drives?
>>
>> Thank you!
>>
>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk  wrote:
>>>
>>> Hi Alex,
>>>
>>> I saw a bug that might be related to the issue you encountered at
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>>
>>> Sahina, maybe you have some advice? Do you think that BZ 1386443 is related?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
>>> wrote:
>>> > Hi All,
>>> >
>>> > I have installed successfully several times oVirt (version 4.1) with 3
>>> > nodes
>>> > on top glusterfs.
>>> >
>>> > This time, when trying to configure the same setup, I am facing the
>>> > following issue which doesn't seem to go away. During installation i
>>> > get the
>>> > error:
>>> >
>>> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
>>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
>>> > lockspace add failure', 'Invalid argument'))
>>> >
>>> > The only different in this setup is that instead of standard
>>> > partitioning i
>>> > have GPT partitioning and the disks have 4K block size instead of 512.
>>> >
>>> > The /var/log/sanlock.log has the following lines:
>>> >
>>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>>> >
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>>> >
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
>>> > for 2,9,23040
>>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
>>> >
>>> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
>>> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader
>>> > offset
>>> > 127488 rv -22
>>> >
>>> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
>>> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
>>> > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
>>> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>>> >
>>> > And /var/log/vdsm/vdsm.log says:
>>> >
>>> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available. (throttledlog:105)
>>> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available, KSM stats will be missing. (throttledlog:105)
>>> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock]
>>> > Cannot
>>> > initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
>>> > (clusterlock:238)
>>> > Traceback (most recent call last):
>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>>> > line
>>> > 234, in initSANLock
>>> > sanlock.init_lockspace(sdUUID, idsPath)
>>> > SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
>>> > endpoint is not connected')
>>> > 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
>>> > [storage.StorageDomainManifest] lease did not initialize successfully
>>> > (sd:557)
>>> > Traceback (most recent call last):
>>> >   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
>>> > self._domainLock.initLock(self.getDomainLease())
>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>>> > line
>>> > 271, 

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Abi Askushi
I clean-installed everything and ran into the same problem.
I then ran gdeploy and encountered the same issue when deploying the engine.
It seems that gluster (?) doesn't like 4K-sector drives, though I am not sure
if it has to do with alignment. The weird thing is that the gluster volumes are
all OK, replicating normally, and no split brain is reported.

The solution in the mentioned bug (1386443) was to format with a 512-byte
sector size, which in my case is not an option:

mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
illegal sector size 512; hw sector is 4096
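
For reference, a generic way to confirm whether a drive is 4K-native (4096 for
both logical and physical sector size) rather than 512e (512 logical / 4096
physical); /dev/sdX is a placeholder for the actual brick device:

# logical and physical sector size of the underlying disk
blockdev --getss --getpbsz /dev/sdX
cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size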

Is there any workaround to address this?

Thanx,
Alex


On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi  wrote:

> Hi Maor,
>
> My disk are of 4K block size and from this bug seems that gluster replica
> needs 512B block size.
> Is there a way to make gluster function with 4K drives?
>
> Thank you!
>
> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk  wrote:
>
>> Hi Alex,
>>
>> I saw a bug that might be related to the issue you encountered at
>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>
>> Sahina, maybe you have some advice? Do you think that BZ 1386443 is related?
>>
>> Regards,
>> Maor
>>
>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
>> wrote:
>> > Hi All,
>> >
>> > I have installed successfully several times oVirt (version 4.1) with 3
>> nodes
>> > on top glusterfs.
>> >
>> > This time, when trying to configure the same setup, I am facing the
>> > following issue which doesn't seem to go away. During installation i
>> get the
>> > error:
>> >
>> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
>> > lockspace add failure', 'Invalid argument'))
>> >
>> > The only different in this setup is that instead of standard
>> partitioning i
>> > have GPT partitioning and the disks have 4K block size instead of 512.
>> >
>> > The /var/log/sanlock.log has the following lines:
>> >
>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/
>> mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862
>> -c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/
>> mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd86
>> 2-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
>> > for 2,9,23040
>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
>> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/
>> mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-
>> 8e26-c8b4d5e5e922/dom_md/ids:0
>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
>> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
>> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset
>> > 127488 rv -22
>> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/
>> a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
>> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
>> > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
>> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>> >
>> > And /var/log/vdsm/vdsm.log says:
>> >
>> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
>> > [storage.StorageServer.MountConnection] Using user specified
>> > backup-volfile-servers option (storageServer:253)
>> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
>> > available. (throttledlog:105)
>> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
>> > available, KSM stats will be missing. (throttledlog:105)
>> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
>> > [storage.StorageServer.MountConnection] Using user specified
>> > backup-volfile-servers option (storageServer:253)
>> > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock]
>> Cannot
>> > initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
>> > (clusterlock:238)
>> > Traceback (most recent call last):
>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>> line
>> > 234, in initSANLock
>> > sanlock.init_lockspace(sdUUID, idsPath)
>> > SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
>> > endpoint is not connected')
>> > 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
>> > [storage.StorageDomainManifest] lease did not initialize successfully
>> > (sd:557)
>> > Traceback (most recent call last):
>> >   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
>> > self._domainLock.initLock(self.getDomainLease())
>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>> line
>> > 271, in initLock
>> > initSANLock(self._sdUUID, self._idsPath, lease)
>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>> line
>> > 239, 

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Abi Askushi
Hi Maor,

My disks are 4K block size, and from this bug it seems that gluster replica
needs a 512B block size.
Is there a way to make gluster work with 4K drives?

Thank you!

On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk  wrote:

> Hi Alex,
>
> I saw a bug that might be related to the issue you encountered at
> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>
> Sahina, maybe you have some advice? Do you think that BZ 1386443 is related?
>
> Regards,
> Maor
>
> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
> wrote:
> > Hi All,
> >
> > I have installed successfully several times oVirt (version 4.1) with 3
> nodes
> > on top glusterfs.
> >
> > This time, when trying to configure the same setup, I am facing the
> > following issue which doesn't seem to go away. During installation i get
> the
> > error:
> >
> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
> > lockspace add failure', 'Invalid argument'))
> >
> > The only different in this setup is that instead of standard
> partitioning i
> > have GPT partitioning and the disks have 4K block size instead of 512.
> >
> > The /var/log/sanlock.log has the following lines:
> >
> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-
> center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-
> center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
> > for 2,9,23040
> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-
> center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-
> 4838-8e26-c8b4d5e5e922/dom_md/ids:0
> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset
> > 127488 rv -22
> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-
> 8e26-c8b4d5e5e922/dom_md/ids
> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
> > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
> >
> > And /var/log/vdsm/vdsm.log says:
> >
> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
> > [storage.StorageServer.MountConnection] Using user specified
> > backup-volfile-servers option (storageServer:253)
> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
> > available. (throttledlog:105)
> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
> > available, KSM stats will be missing. (throttledlog:105)
> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
> > [storage.StorageServer.MountConnection] Using user specified
> > backup-volfile-servers option (storageServer:253)
> > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock]
> Cannot
> > initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
> > (clusterlock:238)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
> line
> > 234, in initSANLock
> > sanlock.init_lockspace(sdUUID, idsPath)
> > SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
> > endpoint is not connected')
> > 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
> > [storage.StorageDomainManifest] lease did not initialize successfully
> > (sd:557)
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
> > self._domainLock.initLock(self.getDomainLease())
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
> line
> > 271, in initLock
> > initSANLock(self._sdUUID, self._idsPath, lease)
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
> line
> > 239, in initSANLock
> > raise se.ClusterLockInitError()
> > ClusterLockInitError: Could not initialize cluster lock: ()
> > 2017-06-03 19:21:37,867+0200 ERROR (jsonrpc/2) [storage.StoragePool]
> Create
> > pool hosted_datacenter canceled  (sp:655)
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/sp.py", line 652, in create
> > self.attachSD(sdUUID)
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
> line
> > 79, in wrapper
> > return method(self, *args, **kwargs)
> >   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
> > dom.acquireHostId(self.id)
> >   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
> > self._manifest.acquireHostId(hostId, async)
> >   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
> > 

[ovirt-users] Regarding wiki on Backup Storage

2017-06-04 Thread shubham dubey
Hello,

I have just opened a PR for a wiki page on Backup Storage on ovirt-site[1].
The idea is to provide a better backup solution for disaster recovery, in
place of the export storage domain; it was already discussed on this mailing
list earlier. Maor (as mentor) and I are working on this feature, and so far
I have pushed a change[2] that adds a backup flag to the database.

I would appreciate it if everyone could review the wiki and share some
feedback/comments on our implementation approach.


[1]:https://github.com/oVirt/ovirt-site/pull/1003
[2]:https://gerrit.ovirt.org/#/c/77142/

Thanks,
Shubham
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM/Template copy issue

2017-06-04 Thread Maor Lipchuk
On Sat, Jun 3, 2017 at 6:20 AM, Bryan Sockel  wrote:
> This happening to a number  of vm's. All vm's are running and can be stopped
> and re started.  We can read and write data within the vm.
>
> All vms are currently running on a single node gluster file system.  I am
> attempting to migrate to a replica 3 gluster file system when i exprience
> these issues.  The problem always seems to happen when finalizing the move
> or copy.
>
> If it makes a difference the gluster storage we are coping to and from are
> dedicated storage servers.
>
>
>  Original message 
> From: Maor Lipchuk 
> Date: 6/2/17 5:29 PM (GMT-06:00)
> To: Bryan Sockel 
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] VM/Template copy issue
>
> 
> From : Maor Lipchuk [mlipc...@redhat.com]
> To : Bryan Sockel [bryan.soc...@altn.com]
> Cc : users@ovirt.org [users@ovirt.org]
> Date : Friday, June 2 2017 17:27:32
>
> Hi Bryan,
>
> It seems like there was an error from qemu-img while reading sector
> 143654878 .
>  the Image copy (conversion) failed with low level qemu-img failure:
>
> CopyImageError: low level Image copy failed:
> ("cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/nice',
> '-n', '19', '/usr/bin/ionice', '-c', '3', '/usr/bin/qemu-img',
> 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
> u'/rhev/data-center/d776b537-16f2-4543-bd96-9b4cba69e247/e371d380-7194-4950-b901-5f2aed5dfb35/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882',
> '-O', 'raw',
> u'/rhev/data-center/mnt/glusterSD/vs-host-colo-1-gluster.altn.int:_desktop-vdi1/f927ceb8-91d2-41bd-ba42-dc795395b6d0/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882'],
> ecode=1, stdout=, stderr=qemu-img: error while reading sector
> 143654878: No data available\n, message=None",)
>
> Can you verify those disks are indeed valid? Can you IO to them while
> attaching them to a running VM?
>
> On Tue, May 30, 2017 at 9:10 PM, Bryan Sockel  wrote:
>>
>> Hi,
>>
>> I am trying to rebuild my ovirt environment after having to juggle some
>> hardware around.  I am moving from hosted engine environment into a engine
>> install on a dedicated server.  I have two data centers in my setup and
>> each
>> DC has a non-routable vlan dedicated to storage.
>>
>> As i rebuild my setup i am trying to clean up my storage configuration.  I
>> am attempting to copy vm's and templates to a dedicated gluster setup.
>> However  each time i attempt to copy a template or move a vm, the
>> operation
>> fails.  The failure always happens when it is finalizing the copy.
>>
>> The operation does not happen with all vm's, but seems to happen mostly
>> with
>> vm's created from with in Ovirt, and not imported from vmware.
>>
>>
>> I have attached the logs from where i was trying to copy to templates to a
>> new Gluster Filesystem
>>
>> Thanks
>> Bryan
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>


I'm not sure if this is related to the hardware that was juggled, but
it seems like the volume has a bad sector and qemu-img is reporting it.
Do you perhaps still have this volume in another storage domain from before
the hardware change, so we can rule out the hardware change as the cause?
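
A rough way to check whether the source image itself is readable at the offset
qemu-img complains about (a sketch, not verified on your setup; the path is the
source image from the qemu-img command line, and 512-byte sectors are assumed
to match qemu-img's sector numbering):

# try to read the single reported sector directly from the source image
dd if=/rhev/data-center/d776b537-16f2-4543-bd96-9b4cba69e247/e371d380-7194-4950-b901-5f2aed5dfb35/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882 \
   of=/dev/null bs=512 skip=143654878 count=1 iflag=direct
# if the mount does not support O_DIRECT, drop iflag=direct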
You can also open a bug so it will be easier to track and investigate
it:  https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm
please also attach the engine and vdsm logs.

Regards,
Maor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot Image ID

2017-06-04 Thread Maor Lipchuk
Hi Marcelo.

Can you please elaborate a bit more on the issue?
Can you please also attach the engine and VDSM logs?
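
In the meantime, a rough way to look at the image chain for that disk directly
in the engine database (a sketch; the table and column names are assumed from
the 4.1-era schema, and <disk-id> is a placeholder for the disk's ID from the
UI):

sudo -u postgres psql engine -c \
  "select image_guid, parentid, vm_snapshot_id, active from images \
   where image_group_id = '<disk-id>' order by creation_date;"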

Thanks,
Maor

On Wed, May 31, 2017 at 10:08 AM, Sandro Bonazzola 
wrote:

>
>
> On Tue, May 23, 2017 at 4:25 PM, Marcelo Leandro 
> wrote:
>
>> I see now that image base id stay in snapshot the before of the last. I
>> think that should stay in the last. correct?
>>
>> Thanks,
>> Marcelo Leandro
>>
>> 2017-05-23 11:16 GMT-03:00 Marcelo Leandro :
>>
>>> Good morning,
>>> I have a problem with a snapshot: my last snapshot, which should have the
>>> image base ID, does not contain this reference in the sub-tab in the
>>> dashboard, as you can see in the attached picture.
>>> I think it is necessary to add this reference to the database again. The
>>> image exists in the storage and is used for VM execution, but I think that
>>> once I shut down the VM it will not start anymore.
>>>
>>> someone help me?
>>>
>>> Thanks,
>>> Marcelo Leandro
>>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage domain does not exist

2017-06-04 Thread Maor Lipchuk
It seems related to the destroy of the storage domain, since destroy
will still remove it from the engine even if there is a problem.
Have you tried restarting VDSM, or rebooting the host?
There could be a problem with unmounting the storage domain; a quick check is
sketched below. Can you please send the VDSM logs?
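
For the stale-metadata suspicion, one thing to look at (a sketch; adjust the
paths to your layout, and /path/to/brick is a placeholder) is whether anything
on the hosts or gluster servers still references the destroyed domain's UUID:

# on a host: leftover mounts or directories for the old domain
grep rhev /proc/mounts
find /rhev/data-center/ -maxdepth 4 -name '97258ca6-5acd-40c7-a812-b8cac02a9621' 2>/dev/null
# on the gluster servers: whether the bricks still contain that domain directory
find /path/to/brick -maxdepth 2 -name '97258ca6-*' 2>/dev/null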

Regards,
Maor

On Thu, Jun 1, 2017 at 2:19 PM, Bryan Sockel  wrote:
> Hey,
>
> I am having an issue moving/copying vm's and templates around in my ovirt
> environment. I am getting the following error in my VDSM Logs:
>
> "2017-06-01 01:01:54,425-0500 ERROR (jsonrpc/5) [storage.Dispatcher]
> {'status': {'message': "Storage domain does not exist:
> (u'97258ca6-5acd-40c7-a812-b8cac02a9621',)", 'code': 358}} (dispatcher:77)"
>
> I believe this started when i destroyed a data domain instead of removing
> the domain correctly.  I have since then rebuilt my ovirt environment,
> importing my gluster domains back into my new setup.
>
> I believe the issue is related to some stale metadata on my gluster storage
> servers somewhere but do not know how to remove it or where it exists.
>
> I found these two posts that seem to deal with the same problem i am seeing.
>
> https://access.redhat.com/solutions/180623
> and
> https://access.redhat.com/solutions/2355061
>
> Currently running 2 dedicated gluster servers and 2 Ovirt VM hosts servers,
> one is acting as an arbiter for my replica 3 gluster file system.
>
> All hosts are running CentOS Linux release 7.3.1611 (Core)
>
> Thanks
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Maor Lipchuk
Hi Alex,

I saw a bug that might be related to the issue you encountered at
https://bugzilla.redhat.com/show_bug.cgi?id=1386443

Sahina, maybe you have some advice? Do you think that BZ 1386443 is related?

Regards,
Maor

On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi  wrote:
> Hi All,
>
> I have installed successfully several times oVirt (version 4.1) with 3 nodes
> on top glusterfs.
>
> This time, when trying to configure the same setup, I am facing the
> following issue which doesn't seem to go away. During installation i get the
> error:
>
> Failed to execute stage 'Misc configuration': Cannot acquire host id:
> (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))
>
> The only different in this setup is that instead of standard partitioning i
> have GPT partitioning and the disks have 4K block size instead of 512.
>
> The /var/log/sanlock.log has the following lines:
>
> 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
> 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
> for 2,9,23040
> 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
> a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids:0
> 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
> 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
> 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset
> 127488 rv -22
> /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
> 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
> 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
> 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>
> And /var/log/vdsm/vdsm.log says:
>
> 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
> [storage.StorageServer.MountConnection] Using user specified
> backup-volfile-servers option (storageServer:253)
> 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
> available. (throttledlog:105)
> 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
> available, KSM stats will be missing. (throttledlog:105)
> 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
> [storage.StorageServer.MountConnection] Using user specified
> backup-volfile-servers option (storageServer:253)
> 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock] Cannot
> initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
> (clusterlock:238)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 234, in initSANLock
> sanlock.init_lockspace(sdUUID, idsPath)
> SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
> endpoint is not connected')
> 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
> [storage.StorageDomainManifest] lease did not initialize successfully
> (sd:557)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
> self._domainLock.initLock(self.getDomainLease())
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 271, in initLock
> initSANLock(self._sdUUID, self._idsPath, lease)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 239, in initSANLock
> raise se.ClusterLockInitError()
> ClusterLockInitError: Could not initialize cluster lock: ()
> 2017-06-03 19:21:37,867+0200 ERROR (jsonrpc/2) [storage.StoragePool] Create
> pool hosted_datacenter canceled  (sp:655)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sp.py", line 652, in create
> self.attachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
> 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
> dom.acquireHostId(self.id)
>   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
> self._manifest.acquireHostId(hostId, async)
>   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
> self._domainLock.acquireHostId(hostId, async)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 297, in acquireHostId
> raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id:
> (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))
> 2017-06-03 19:21:37,870+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 

Re: [ovirt-users] virt-viewer disabling rhel6

2017-06-04 Thread Lev Veyde
Hi Cam,

The reason it works on RHEL 6.7 clients is that the version of virt-viewer
supplied with it doesn't support the mechanism for checking the minimum
required version.

I wasn't aware we can modify the versions we require through the
RemoteViewerSupportedVersions config. Michal - thanks for the hint.
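
For the archives, the engine-config change looks roughly like this (a sketch;
the exact value string is an assumption based on the per-OS minimums shown in
the debug output quoted below, so check the current value first):

# show the current value, then relax the rhel6 minimum and restart the engine
engine-config -g RemoteViewerSupportedVersions
engine-config -s RemoteViewerSupportedVersions="rhev-win64:2.0-160;rhev-win32:2.0-160;rhel7:2.0-6;rhel6:2.0-6"
systemctl restart ovirt-engine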

Thanks in advance,

On Fri, Jun 2, 2017 at 4:00 PM, cmc  wrote:

> Thanks Michal, that is a huge help. We're busy building an image for EL7
> but it isn't yet fully finished, so we're still on 6.x for now. We're
> updating to 6.8 and then 6.9 in the meantime. Interesting that it still
> works for 6.7 though - I can't explain that. I have updated
> RemoteViewerSupportedVersions and restarted the engine and it works like a
> charm.
>
> Cheers,
>
> Cam
>
> On Thu, Jun 1, 2017 at 7:22 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> On 1 Jun 2017, at 15:10, Lev Veyde  wrote:
>>
>> Hi Cam,
>>
>> Unfortunately RHEL 6 clients are no longer supported in oVirt 4.1, due to
>> the new functions that were added which require a more recent version of
>> virt-viewer; hence the issue.
>>
>> You should use a more recent version e.g. to use RHEL 7 as the client to
>> resolve the issue.
>>
>>
>> That said, using engine-config you can change the version check and allow
>> to launch it . You’ll miss some features but it may not really be that
>> important if all you need is to see the screen and cannot update clients to
>> EL7
>> See https://bugzilla.redhat.com/show_bug.cgi?id=1285883
>>
>> Thanks,
>> michal
>>
>>
>> Thanks in advance,
>>
>> On Wed, May 31, 2017 at 4:50 PM, cmc  wrote:
>>
>>> Hi,
>>>
>>> virt-viewer no longer appears to work when trying to launch a console
>>> with EL 6.8 and oVirt 4.1. The error is:
>>>
>>> "At least Remote Viewer version 99.0-1 is required to setup this
>>> connection"
>>>
>>> When I ran remote-viewer in debug mode, it seems that it is
>>> deliberately disabling rhel6 by setting the version to a non-existent
>>> version:
>>>
>>> (remote-viewer:23829): remote-viewer-DEBUG: Minimum version '2.0-160'
>>> for OS id 'rhev-win64'
>>> (remote-viewer:23829): remote-viewer-DEBUG: Minimum version '2.0-160'
>>> for OS id 'rhev-win32'
>>> (remote-viewer:23829): remote-viewer-DEBUG: Minimum version '2.0-6'
>>> for OS id 'rhel7'
>>> (remote-viewer:23829): remote-viewer-DEBUG: Minimum version '99.0-1'
>>> for OS id 'rhel6'
>>>
>>> rhel 6.7 (and presumably before) works fine. I contacted the
>>> maintainers of virt-viewer and they said that this is an ovirt issue.
>>> Is this somehow disabled in 4.1? Can someone tell me why this is the
>>> case?
>>>
>>> Thanks in advance for any insights,
>>>
>>> Cam
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>>
>> --
>>
>> Lev Veyde
>>
>> Software Engineer, RHCE | RHCVA | MCITP
>> Red Hat Israel
>>
>> 
>>
>> l...@redhat.com | lve...@redhat.com
>> 
>> TRIED. TESTED. TRUSTED. 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users