[ovirt-users] Re: (no subject)

2018-12-03 Thread Sahina Bose
On Tue, Dec 4, 2018 at 11:32 AM Abhishek Sahni 
wrote:

> Hello Team,
>
>
> We are running a 3-way replica HC gluster setup, configured during the
> initial deployment from the Cockpit console using Ansible.
>
> NODE1
>   - /dev/sda   (OS)
>   - /dev/sdb   ( Gluster Bricks )
>* /gluster_bricks/engine/engine/
>* /gluster_bricks/data/data/
>* /gluster_bricks/vmstore/vmstore/
>
> NODE2 and NODE3 with a similar setup.
>
> Hosted engine was running on node2.
>
> - While moving NODE1 into maintenance mode (stopping the gluster
> service as prompted), the hosted engine instantly went down.
>
> - I started the gluster service on node1 again and restarted the hosted
> engine. It starts properly but keeps crashing within seconds of each
> successful start, apparently because the HE itself stops glusterd on
> node1 (not certain, but cross-verified by checking glusterd status).
>
> *Is it possible to clear pending tasks, or to prevent the HE from
> stopping glusterd on node1?*
>
> *Or can we start the HE from another gluster node?*
>
> https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg
>


The Hosted Engine storage domain should have the backup-volfile-servers
mount option specified, so that even if the node initially used to mount
the gluster volume goes down, the client can retry with an alternate
server. Can you check whether this is set? (Check
/etc/ovirt-hosted-engine/hosted-engine.conf, or run
hosted-engine --get-shared-config mnt_options --type=he_shared.)

If it is not set, you can update the mount options for the HE domain with:
hosted-engine --set-shared-config
mnt_options="backup-volfile-servers=:" --type=he_shared
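
As an illustration (the hostnames node2 and node3 below are hypothetical
stand-ins for your other two gluster servers, not taken from this thread),
the option value is simply a colon-separated list of the other servers:

```shell
# Hypothetical hostnames: node2/node3 stand in for the two remaining
# gluster servers; substitute your own.
BACKUP_SERVERS="node2:node3"
MNT_OPTS="backup-volfile-servers=${BACKUP_SERVERS}"
echo "$MNT_OPTS"
# On a hosted-engine host you would then run (sketch, not verified here):
#   hosted-engine --set-shared-config mnt_options="$MNT_OPTS" --type=he_shared
```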



>
>
>
>
>
> --
>
> ABHISHEK SAHNI
> IISER Bhopal
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/26C32RPGG6OF7L2FUFGCVHYKRWWWX7K7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EV7ZJCCZ4LYZ4Y5GDOABT6BOSJQMCZLQ/


[ovirt-users] Hosted Engine goes down while putting gluster node into maintenance mode.

2018-12-03 Thread Abhishek Sahni
Hello Team,

We are running a 3-way replica HC gluster setup, configured during the
initial deployment from the Cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Hosted engine was running on node2.

- While moving NODE1 into maintenance mode (stopping the gluster
service as prompted), the hosted engine instantly went down.

- I started the gluster service on node1 again and restarted the hosted
engine. It starts properly but keeps crashing within seconds of each
successful start, apparently because the HE itself stops glusterd on
node1 (not certain, but cross-verified by checking glusterd status).

*Is it possible to clear pending tasks, or to prevent the HE from
stopping glusterd on node1?*

*Or can we start the HE from another gluster node?*

https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg


-- 

ABHISHEK SAHNI


IISER Bhopal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7ETASYIKXRAGYZRBZIS6G743UHPKGCNA/


[ovirt-users] From Self-hosted engine to standalone engine

2018-12-03 Thread Punaatua PK
Hello,

We currently have a self-hosted engine on gluster with 3 hosts. We want to
move the engine to a single VM on a standalone KVM host.

We did the following steps on our test platform.
- Create a VM on a standalone KVM host
- Put the self-hosted engine into global maintenance
- Shut down the self-hosted engine
- Copy the self-hosted engine disk image (by browsing into the gluster
engine volume) to the standalone KVM host using the Linux dd command
- Reuse the self-hosted engine's MAC address on the new standalone VM
- Start the standalone VM, which uses the self-hosted engine disk image
previously copied to the standalone KVM host

- Log in to the engine, then undeploy hosted-engine by reinstalling all
the hosts and choosing UNDEPLOY in the hosted-engine section
- Stop ovirt-ha-agent and ovirt-ha-broker

Everything seems to be ok for now.

What do you think about our process (going from self-hosted to standalone)?
Do you have any ideas on what should be checked?

Thank you
(We wanted to move away from the self-hosted engine because we don't
really have a good command of this deployment model.)
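
The copy step above can be sketched as follows. The paths here are
hypothetical stand-ins (a real run would read the engine disk image from
the gluster engine volume and write to a path on the standalone KVM host),
and `conv=sparse` keeps dd from materializing all-zero blocks:

```shell
# Demo with stand-in files; replace SRC/DST with the real image paths.
SRC=/tmp/engine-src.img
DST=/tmp/engine-dst.img
head -c 1048576 /dev/urandom > "$SRC"   # stand-in for the engine disk
# bs=4M for throughput; conv=sparse skips writing all-zero blocks.
dd if="$SRC" of="$DST" bs=4M conv=sparse status=none
cmp "$SRC" "$DST" && echo "copy verified"
```

Comparing the source and destination afterwards (as cmp does here) is a
cheap sanity check before booting the copied engine.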
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5UZROLYIC273ARCXUIWHOIIVC3EPGS6E/


[ovirt-users] (no subject)

2018-12-03 Thread Abhishek Sahni
Hello Team,


We are running a 3-way replica HC gluster setup, configured during the
initial deployment from the Cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Hosted engine was running on node2.

- While moving NODE1 into maintenance mode (stopping the gluster
service as prompted), the hosted engine instantly went down.

- I started the gluster service on node1 again and restarted the hosted
engine. It starts properly but keeps crashing within seconds of each
successful start, apparently because the HE itself stops glusterd on
node1 (not certain, but cross-verified by checking glusterd status).

*Is it possible to clear pending tasks, or to prevent the HE from
stopping glusterd on node1?*

*Or can we start the HE from another gluster node?*

https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg





-- 

ABHISHEK SAHNI
IISER Bhopal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/26C32RPGG6OF7L2FUFGCVHYKRWWWX7K7/


[ovirt-users] Re: cluster 4 pc's for more scanning power?

2018-12-03 Thread Staniforth, Paul
Hello Peter,
   if you install it on one VM, it will only run on one of
the hosts; if you had a distributed scanner, it could run on multiple
hosts. I think there has been some work on a distributed scanner in Docker
(I don't know whether there are OpenShift or Kubernetes variants), but it
may be better to run it in a Docker or Kubernetes cluster rather than on
oVirt.

Regards,
 Paul S.

From: Peter C. 
Sent: 01 December 2018 00:15
To: users@ovirt.org
Subject: [ovirt-users] cluster 4 pc's for more scanning power?

Hello and sorry for the ignorance behind this question. I've been reading about 
various scale out, "hyper convergence" solutions and want to ask this about 
oVirt.

I do vulnerability scanning on my company's assets. I do it from an obsolete 
laptop that was given to me. The load goes over 13 sometimes, and the scans 
take a long time.

If I built an oVirt cluster from 4 or 5 desktop PCs and built a VM to run
OpenVAS, would the CPU load demanded by the scanning be spread across the
3-4 hosts, not including the management host, and thereby give my scans
more CPU power?

If not oVirt, is there another project that would be better suited to what I'm 
trying to achieve?

Qualifiers:
- I'm not asking if this is the best way to get high-load scanning done.
I'm just asking if I'll get the combined power of the CPU cores of all the
host machines. The scanning jobs are already threaded.
- I know it would probably be more efficient to get a powerful multi-core
workstation or server to do this. That is not my question.
- The PCs are perfectly good; they are just not being used and won't be
used for anything else.

Thanks in advance.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S5WBOBNB6IRXWKIDITYIGGR6NUYMYSFM/
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GQ74EMGRU3MFNEJPPETXOTATWQC3PXHB/


[ovirt-users] Re: Delete un-imported VMs from FC storage or any storage. UPDATE2

2018-12-03 Thread Jacob Green
Ok, so two things. First, I found this bug report, which I believe is
relevant to my issue: https://bugzilla.redhat.com/show_bug.cgi?id=1497931

Second, using a tip from that bug report, I found that I do in fact
have two LVs with the same name. I need to remove the one that already
exists on the FC so I can move the one on the iSCSI to the FC.


bb20b34d-96b9-4dfe-b122-0172546e51ce  0e4ca0da-1721-4ea7-92af-25233a8679e0  -wi-a-  80.00g
  (Fibre Channel: un-imported old version of the VM)
bb20b34d-96b9-4dfe-b122-0172546e51ce  b9cce591-fed5-4b59-872b-41a9fecb607e  -wi---  80.00g
  (iSCSI: current good version of the VM that I would like to move back to FC)



So there must be a manual way to do this? And would I then need to
remove the record of the VM from the FC OVF store? I am having trouble
finding good documentation on how to do this.
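
A hedged sketch of the manual cleanup (my reconstruction from the log
values above, not a procedure given in this thread — verify with lvs and
take a backup before removing anything):

```shell
# VG and LV names taken from the engine.log error quoted in this thread.
VG=0e4ca0da-1721-4ea7-92af-25233a8679e0   # FC storage-domain volume group
LV=bb20b34d-96b9-4dfe-b122-0172546e51ce   # duplicated logical volume
# Commands you would run on the SPM host (echoed here, not executed,
# since removal is destructive):
echo "lvs --noheadings -o vg_name,lv_name,lv_size | grep $LV"
echo "lvremove $VG/$LV"
```

The first command confirms which VG holds the stale copy before the
second removes it; the OVF-store record is a separate question.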




On 12/03/2018 01:37 PM, Jacob Green wrote:


I believe I have found the relevant error in the engine.log.

__

Exception: 'VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume:
u'vgname=0e4ca0da-1721-4ea7-92af-25233a8679e0
lvname=bb20b34d-96b9-4dfe-b122-0172546e51ce err=[\'
/dev/mapper/36589cfc003413cb667d8cd46ffb8: read failed after 0 of 4096
at 0: Input/output error\', \'
/dev/mapper/36589cfc003413cb667d8cd46ffb8: read failed after 0 of 4096
at 858993393664: Input/output error\', \'
/dev/mapper/36589cfc003413cb667d8cd46ffb8: read failed after 0 of 4096
at 858993451008: Input/output error\', \'
WARNING: Error counts reached a limit of 3. Device
/dev/mapper/36589cfc003413cb667d8cd46ffb8 was disabled\', \'
Logical Volume "bb20b34d-96b9-4dfe-b122-0172546e51ce" already exists in
volume group "0e4ca0da-1721-4ea7-92af-25233a8679e0"\']', code = 550'


__


So my question is: how do I safely delete that already-existing logical
volume so that I can move the disk from my iSCSI back to the FC? This VM
used to exist on the FC before I migrated, which is how I ended up in
this situation.




On 12/03/2018 01:13 PM, Jacob Green wrote:


Any thoughts on how I remove an un-imported VM from my Fibre
Channel storage? I cannot move my working VMs back to the FC, and I
believe it is because the older un-imported version is creating a conflict.



On 11/29/2018 03:12 PM, Jacob Green wrote:


Ok, so here is the situation: before moving/importing our
primary storage domain, I exported a few VMs so they would not need
to go down during the "big migration". They are now residing
on some iSCSI storage. Now that the Fibre Channel storage is back in
place, I cannot move the VMs' disks to the Fibre Channel, because
there is an un-imported version of each VM residing on the Fibre
Channel. I need to delete or remove the VM from the Fibre Channel so
I can move the working VM back to it.


I hope that makes sense, but essentially I have a current duplicate
VM running on iSCSI that I need to move to the Fibre Channel.
However, because the VM used to exist on the Fibre Channel and has
the same disk name, I cannot move it there. Also, it seems odd to me
that there is no way to clear the VMs from the storage without
importing them. There must be a way?



Thank you.



On 11/28/2018 10:14 PM, Jacob Green wrote:


Hey, wanted to thank you for your reply, and to let you know
that late after I sent this email, my colleague and I figured out
we needed to enable the FCoE key in the oVirt manager and tell
oVirt that the eno51 and eno52 interfaces are FCoE.



However, I ran into another issue. Now that our Fibre Channel is
imported to the new environment and we're able to import VMs, we
have some VMs that we will not be importing. However, we see no way
to delete them from the storage in the GUI. Or am I just missing it?



*TLDR*: How does one delete VMs available for import, without 
importing them from the storage domain?



Thank you.



On 11/28/2018 01:26 AM, Luca 'remix_tj' Lorenzetto wrote:

On Wed, Nov 28, 2018 at 6:54 AM Jacob Green  wrote:

Any help or insight into fiber channel with ovirt 4.2 would be greatly
appreciated.


Hello Jacob,

we're running a cluster of 6 HP BL460 G9 with virtual connect without issues.

[root@kvmsv003 ~]# dmidecode | grep -A3 '^System Information'
System Information
Manufacturer: HP
Product Name: ProLiant BL460c Gen9
Version: Not Specified

We have, however, a different kind of CNA:

[root@kvmsv003 ~]# cat /sys/class/fc_host/host1/device/scsi_host/host1/modeldesc
HP FlexFabric 20Gb 2-port 650FLB Adapter

But I see it is running the same module you're reporting:

[root@kvmsv003 ~]# lsmod | grep bnx
bnx2fc 

[ovirt-users] Re: Delete un-imported VMs from FC storage or any storage. UPDATE

2018-12-03 Thread Jacob Green

I believe I have found the relevant error in the engine.log.

__

Exception: 'VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume:
u'vgname=0e4ca0da-1721-4ea7-92af-25233a8679e0
lvname=bb20b34d-96b9-4dfe-b122-0172546e51ce err=[\'
/dev/mapper/36589cfc003413cb667d8cd46ffb8: read failed after 0 of 4096
at 0: Input/output error\', \'
/dev/mapper/36589cfc003413cb667d8cd46ffb8: read failed after 0 of 4096
at 858993393664: Input/output error\', \'
/dev/mapper/36589cfc003413cb667d8cd46ffb8: read failed after 0 of 4096
at 858993451008: Input/output error\', \'
WARNING: Error counts reached a limit of 3. Device
/dev/mapper/36589cfc003413cb667d8cd46ffb8 was disabled\', \'
Logical Volume "bb20b34d-96b9-4dfe-b122-0172546e51ce" already exists in
volume group "0e4ca0da-1721-4ea7-92af-25233a8679e0"\']', code = 550'


__


So my question is: how do I safely delete that already-existing logical
volume so that I can move the disk from my iSCSI back to the FC? This VM
used to exist on the FC before I migrated, which is how I ended up in
this situation.




On 12/03/2018 01:13 PM, Jacob Green wrote:


Any thoughts on how I remove an un-imported VM from my Fibre
Channel storage? I cannot move my working VMs back to the FC, and I
believe it is because the older un-imported version is creating a conflict.



On 11/29/2018 03:12 PM, Jacob Green wrote:


Ok, so here is the situation: before moving/importing our primary
storage domain, I exported a few VMs so they would not need to go
down during the "big migration". They are now residing on some
iSCSI storage. Now that the Fibre Channel storage is back in place, I
cannot move the VMs' disks to the Fibre Channel, because there
is an un-imported version of each VM residing on the Fibre Channel. I
need to delete or remove the VM from the Fibre Channel so I can move
the working VM back to it.


I hope that makes sense, but essentially I have a current duplicate
VM running on iSCSI that I need to move to the Fibre Channel. However,
because the VM used to exist on the Fibre Channel and has the same
disk name, I cannot move it there. Also, it seems odd to me
that there is no way to clear the VMs from the storage without
importing them. There must be a way?



Thank you.



On 11/28/2018 10:14 PM, Jacob Green wrote:


Hey, wanted to thank you for your reply, and to let you know
that late after I sent this email, my colleague and I figured out we
needed to enable the FCoE key in the oVirt manager and tell oVirt
that the eno51 and eno52 interfaces are FCoE.



However, I ran into another issue. Now that our Fibre Channel is
imported to the new environment and we're able to import VMs, we have
some VMs that we will not be importing. However, we see no way to
delete them from the storage in the GUI. Or am I just missing it?



*TLDR*: How does one delete VMs available for import, without 
importing them from the storage domain?



Thank you.



On 11/28/2018 01:26 AM, Luca 'remix_tj' Lorenzetto wrote:

On Wed, Nov 28, 2018 at 6:54 AM Jacob Green  wrote:

Any help or insight into fiber channel with ovirt 4.2 would be greatly
appreciated.


Hello Jacob,

we're running a cluster of 6 HP BL460 G9 with virtual connect without issues.

[root@kvmsv003 ~]# dmidecode | grep -A3 '^System Information'
System Information
Manufacturer: HP
Product Name: ProLiant BL460c Gen9
Version: Not Specified

We have, however, a different kind of CNA:

[root@kvmsv003 ~]# cat /sys/class/fc_host/host1/device/scsi_host/host1/modeldesc
HP FlexFabric 20Gb 2-port 650FLB Adapter

But I see it is running the same module you're reporting:

[root@kvmsv003 ~]# lsmod | grep bnx
bnx2fc                103061  0
cnic                   67392  1 bnx2fc
libfcoe                58854  2 fcoe,bnx2fc
libfc                 116357  3 fcoe,libfcoe,bnx2fc
scsi_transport_fc      64007  4 fcoe,lpfc,libfc,bnx2fc
[root@fapikvmpdsv003 ~]#

Since the FCoE connection is managed directly by Virtual Connect, no
FCoE information is shown by fcoeadm:

[root@kvmsv003 ~]# fcoeadm -i
[root@kvmsv003 ~]#

Are you sure you set up the right configuration on virtual connect side?

Luca



--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


___
Users mailing list --users@ovirt.org
To unsubscribe send an email tousers-le...@ovirt.org
Privacy Statement:https://www.ovirt.org/site/privacy-policy/
oVirt Code of 
Conduct:https://www.ovirt.org/community/about/community-guidelines/
List 

[ovirt-users] Re: Delete un-imported VMs from FC storage or any storage.

2018-12-03 Thread Jacob Green
Any thoughts on how I remove an un-imported VM from my Fibre
Channel storage? I cannot move my working VMs back to the FC, and I
believe it is because the older un-imported version is creating a conflict.



On 11/29/2018 03:12 PM, Jacob Green wrote:


Ok, so here is the situation: before moving/importing our primary
storage domain, I exported a few VMs so they would not need to go
down during the "big migration". They are now residing on some
iSCSI storage. Now that the Fibre Channel storage is back in place, I
cannot move the VMs' disks to the Fibre Channel, because there is
an un-imported version of each VM residing on the Fibre Channel. I need
to delete or remove the VM from the Fibre Channel so I can move the
working VM back to it.


I hope that makes sense, but essentially I have a current duplicate VM
running on iSCSI that I need to move to the Fibre Channel. However,
because the VM used to exist on the Fibre Channel and has the same
disk name, I cannot move it there. Also, it seems odd to me that
there is no way to clear the VMs from the storage without importing
them. There must be a way?



Thank you.



On 11/28/2018 10:14 PM, Jacob Green wrote:


Hey, wanted to thank you for your reply, and to let you know
that late after I sent this email, my colleague and I figured out we
needed to enable the FCoE key in the oVirt manager and tell oVirt
that the eno51 and eno52 interfaces are FCoE.



However, I ran into another issue. Now that our Fibre Channel is
imported to the new environment and we're able to import VMs, we have
some VMs that we will not be importing. However, we see no way to
delete them from the storage in the GUI. Or am I just missing it?



*TLDR*: How does one delete VMs available for import, without 
importing them from the storage domain?



Thank you.



On 11/28/2018 01:26 AM, Luca 'remix_tj' Lorenzetto wrote:

On Wed, Nov 28, 2018 at 6:54 AM Jacob Green  wrote:

Any help or insight into fiber channel with ovirt 4.2 would be greatly
appreciated.


Hello Jacob,

we're running a cluster of 6 HP BL460 G9 with virtual connect without issues.

[root@kvmsv003 ~]# dmidecode | grep -A3 '^System Information'
System Information
Manufacturer: HP
Product Name: ProLiant BL460c Gen9
Version: Not Specified

We have, however, a different kind of CNA:

[root@kvmsv003 ~]# cat /sys/class/fc_host/host1/device/scsi_host/host1/modeldesc
HP FlexFabric 20Gb 2-port 650FLB Adapter

But I see it is running the same module you're reporting:

[root@kvmsv003 ~]# lsmod | grep bnx
bnx2fc                103061  0
cnic                   67392  1 bnx2fc
libfcoe                58854  2 fcoe,bnx2fc
libfc                 116357  3 fcoe,libfcoe,bnx2fc
scsi_transport_fc      64007  4 fcoe,lpfc,libfc,bnx2fc
[root@fapikvmpdsv003 ~]#

Since the FCoE connection is managed directly by Virtual Connect, no
FCoE information is shown by fcoeadm:

[root@kvmsv003 ~]# fcoeadm -i
[root@kvmsv003 ~]#

Are you sure you set up the right configuration on virtual connect side?

Luca



--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


___
Users mailing list --users@ovirt.org
To unsubscribe send an email tousers-le...@ovirt.org
Privacy Statement:https://www.ovirt.org/site/privacy-policy/
oVirt Code of 
Conduct:https://www.ovirt.org/community/about/community-guidelines/
List 
Archives:https://lists.ovirt.org/archives/list/users@ovirt.org/message/PR2ABJDQLY7JIR3SDSMYWXMHPE2RSBIH/


--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQ7G5QQZZMEOV7G43EUU3NRRMCMML7HR/


--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IRHR3TWOHGA32JOKGSXFL6CN5EHQNUV6/


[ovirt-users] Re: hosted-engine --deploy fails on Ovirt-Node-NG 4.2.7

2018-12-03 Thread Ralf Schenk
Hello,

Attached is the qemu log.

This is the problem:

Could not open
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
Permission denied

When I do "su - vdsm -s /bin/bash",
I can hexdump the file!

-bash-4.2$ id
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
-bash-4.2$ hexdump -Cn 512
/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
00000000  eb 63 90 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0  |.c..............|
00000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00  |...|.........!..|
00000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75  |....8.u........u|
00000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 01 8b  |.........|...t..|
00000040  4c 02 cd 13 ea 00 7c 00  00 eb fe 00 00 00 00 00  |L.....|.........|
00000050  00 00 00 00 00 00 00 00  00 00 00 80 01 00 00 00  |................|
00000060  00 00 00 00 ff fa 90 90  f6 c2 80 74 05 f6 c2 70  |...........t...p|
00000070  74 02 b2 80 ea 79 7c 00  00 31 c0 8e d8 8e d0 bc  |t....y|..1......|
00000080  00 20 fb a0 64 7c 3c ff  74 02 88 c2 52 be 05 7c  |. ..d|<.t...R..||
00000090  b4 41 bb aa 55 cd 13 5a  52 72 3d 81 fb 55 aa 75  |.A..U..ZRr=..U.u|
000000a0  37 83 e1 01 74 32 31 c0  89 44 04 40 88 44 ff 89  |7...t21..D.@.D..|
000000b0  44 02 c7 04 10 00 66 8b  1e 5c 7c 66 89 5c 08 66  |D.....f..\|f.\.f|
000000c0  8b 1e 60 7c 66 89 5c 0c  c7 44 06 00 70 b4 42 cd  |..`|f.\..D..p.B.|
000000d0  13 72 05 bb 00 70 eb 76  b4 08 cd 13 73 0d 5a 84  |.r...p.v....s.Z.|
000000e0  d2 0f 83 de 00 be 85 7d  e9 82 00 66 0f b6 c6 88  |.......}...f....|
000000f0  64 ff 40 66 89 44 04 0f  b6 d1 c1 e2 02 88 e8 88  |d.@f.D..........|
00000100  f4 40 89 44 08 0f b6 c2  c0 e8 02 66 89 04 66 a1  |.@.D.......f..f.|
00000110  60 7c 66 09 c0 75 4e 66  a1 5c 7c 66 31 d2 66 f7  |`|f..uNf.\|f1.f.|
00000120  34 88 d1 31 d2 66 f7 74  04 3b 44 08 7d 37 fe c1  |4..1.f.t.;D.}7..|
00000130  88 c5 30 c0 c1 e8 02 08  c1 88 d0 5a 88 c6 bb 00  |..0........Z....|
00000140  70 8e c3 31 db b8 01 02  cd 13 72 1e 8c c3 60 1e  |p..1......r...`.|
00000150  b9 00 01 8e db 31 f6 bf  00 80 8e c6 fc f3 a5 1f  |.....1..........|
00000160  61 ff 26 5a 7c be 80 7d  eb 03 be 8f 7d e8 34 00  |a.&Z|..}....}.4.|
00000170  be 94 7d e8 2e 00 cd 18  eb fe 47 52 55 42 20 00  |..}.......GRUB .|
00000180  47 65 6f 6d 00 48 61 72  64 20 44 69 73 6b 00 52  |Geom.Hard Disk.R|
00000190  65 61 64 00 20 45 72 72  6f 72 0d 0a 00 bb 01 00  |ead. Error......|
000001a0  b4 0e cd 10 ac 3c 00 75  f4 c3 00 00 00 00 00 00  |.....<.u........|
000001b0  00 00 00 00 00 00 00 00  4f ee 04 00 00 00 80 20  |........O...... |
000001c0  21 00 83 aa 28 82 00 08  00 00 00 00 20 00 00 aa  |!...(....... ...|
000001d0  29 82 8e fe ff ff 00 08  20 00 00 f8 5b 05 00 fe  |)....... ...[...|
000001e0  ff ff 83 fe ff ff 00 00  7c 05 00 f8 c3 00 00 00  |........|.......|
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
00000200
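
A common cause of this symptom (qemu gets "Permission denied" on a file
the vdsm user can read) is a parent directory that is not traversable by
the qemu user. A small self-contained demo of the check, using a temp
path as a stand-in for the real /var/run/vdsm/storage/... path; on the
real host, "namei -l <image-path>" gives the same per-component view:

```shell
# Build a stand-in path and list ownership/mode of every component,
# the information "namei -l <image-path>" reports on a real host.
BASE=$(mktemp -d)
IMG="$BASE/images/disk.img"
mkdir -p "$(dirname "$IMG")"
: > "$IMG"
p=$IMG
while [ "$p" != "/" ]; do
    ls -ld "$p"        # mode, owner and group of each path component
    p=$(dirname "$p")
done
```

Any component lacking the execute (search) bit for the qemu user would
explain a file that vdsm can hexdump but qemu cannot open.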


On 03.12.2018 at 16:43, Simone Tiraboschi wrote:
>
>
> On Mon, Dec 3, 2018 at 2:07 PM Ralf Schenk wrote:
>
> Hello,
>
> I am trying to deploy hosted-engine to an NFS share accessible by
> (currently) two hosts. The host is running the latest ovirt-node-ng 4.2.7.
>
> hosted-engine --deploy fails constantly at a late stage when trying
> to run the engine from NFS. It already ran as "HostedEngineLocal" and
> I think it is then migrated to NFS storage.
>
> Engine seems to be deployed to NFS already:
>
> [root@epycdphv02 ~]# ls -al
> /rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
> total 23
> drwxrwxrwx 3 vdsm kvm    4 Dec  3 13:01 .
> drwxr-xr-x 3 vdsm kvm 4096 Dec  1 17:11 ..
> drwxr-xr-x 6 vdsm kvm    6 Dec  3 13:09
> 1dacf1ea-0934-4840-bed4-e9d023572f59
> -rwxr-xr-x 1 vdsm kvm    0 Dec  3 13:42 __DIRECT_IO_TEST__
>
> NFS Mount:
>
> storage01.office.databay.de:/ovirt/engine on
> /rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
> type nfs4
> 
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.121,local_lock=none,addr=192.168.1.3)
>
> Libvirt quemu states an error:
>
> Could not open
> 
> '/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
> Permission denied
>
> Even the permissions of the mentioned file seem to be ok. SELinux is
> disabled since I had a lot of problems with earlier versions when
> trying to deploy hosted-engine.
>
> You can keep it on without any known issue.
>  
>
> [root@epycdphv02 ~]# ls -al
> 
> '/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08'
> -rw-rw 1 vdsm kvm 53687091200 Dec  3 

[ovirt-users] Re: Cannot remove old renamed hosted-engine storage domain

2018-12-03 Thread Simone Tiraboschi
On Sat, Dec 1, 2018 at 5:32 AM Andrew DeMaria 
wrote:

> Hi,
>
> I've been testing the hosted-engine restore process and have gotten stuck
> because I cannot remove the old hosted-engine storage domain. Here is the
> process I went through:
>
> 1. Setup two 70G luns under an iscsi target (one for the original, one for
> the restored hosted-engine. They are both empty at this point)
> 2. Install RHEL 7.6 on two other hosts (synergy and pointblank)
> 3. Install ovirt 4.2 on host synergy thru hosted-engine --deploy using one
> of the luns
> 4. Add host pointblank
> 5. Perform engine-backup and copy it off
> 6. Shutdown both RHEL hosts
> 7. Reinstall RHEL on both (wipe disk)
> 8. Install ovirt 4.2 on host synergy via hosted-engine --deploy
> --restore-from-file=backup.tgz using the other unused lun
> 9. Remove old host pointblank
> 10. Add host pointblank
> 11. Noticed that old hosted-engine storage domain (named
> hosted_storage_old_20181130T111054) is designated the master storage domain
> 12. Move storage domain hosted_storage_old_20181130T111054 into maintenance
>

We simply rename it without automatically removing it, since the user may
potentially also store other VMs there.


> 13. Noticed that hosted_storage_old_20181130T111054 is still the master
> (even after moving it into maint while the new hosted_storage storage
> domain is healthy/active)
>

This looks really weird: the engine should select another SD as the new
master, and you should then be able to destroy that one.
Can you please attach engine.log for the relevant time frame?


> 14. Tried to detach hosted_storage_old_20181130T111054 anyways, but
> presented with:
>
> Error while executing action: Cannot detach the master Storage Domain from
> the Data Center while there are other storage domains attached to it.
> -Please try to activate another Storage Domain in the Data Center.
>
> This is quite strange as the new hosted_storage storage domain is listed
> under that same Data Center named "Default" and is shown active.
>
> Any guidance would be much appreciated!
>
> Thanks,
> Andrew
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVASAUXKFO4OK4FEJAOV47VP3X4VRRGX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MFRQD6G2WTID6VA4A6EL67D6O4SX2YYM/


[ovirt-users] Re: hosted-engine deploy seems to be failed, but it is not

2018-12-03 Thread Simone Tiraboschi
On Sat, Dec 1, 2018 at 3:30 PM Sinan Polat  wrote:

> Hi folks,
>
>
> A while ago I installed oVirt 4.2 on my CentOS 7.5 server. It is a single
> node.
>
>
> Everything is working, I can deploy VMs, etc. But it looks like the
> hosted-engine is only partly installed or something.
>
>
> [root@s01 ~]# hosted-engine --vm-status
> It seems like a previous attempt to deploy hosted-engine failed or it's
> still in progress. Please clean it up before trying again
> [root@s01 ~]#
>
>
> Any suggestions?
>

Can you please attach the initial deployment logs?


>
> Thanks!
>
> Sinan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IP3IANH42HHY2ICMFZRT76LDDTMPYTA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IRRWIGHKNAJTCZZPOXECSDRKRH2UK5ZO/


[ovirt-users] Re: hosted-engine --deploy fails on Ovirt-Node-NG 4.2.7

2018-12-03 Thread Simone Tiraboschi
On Mon, Dec 3, 2018 at 2:07 PM Ralf Schenk  wrote:

> Hello,
>
> I try to deploy hosted-engine to an NFS share accessible by (currently) two
> hosts. The host is running the latest ovirt-node-ng 4.2.7.
>
> hosted-engine --deploy fails constantly at a late stage when trying to run
> the engine from NFS. It already ran as "HostedEngineLocal" and I think it is
> then migrated to NFS storage.
>
> Engine seems to be deployed to NFS already:
>
> [root@epycdphv02 ~]# ls -al
> /rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
> total 23
> drwxrwxrwx 3 vdsm kvm    4 Dec  3 13:01 .
> drwxr-xr-x 3 vdsm kvm 4096 Dec  1 17:11 ..
> drwxr-xr-x 6 vdsm kvm    6 Dec  3 13:09
> 1dacf1ea-0934-4840-bed4-e9d023572f59
> -rwxr-xr-x 1 vdsm kvm    0 Dec  3 13:42 __DIRECT_IO_TEST__
>
> NFS Mount:
>
> storage01.office.databay.de:/ovirt/engine on
> /rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine type nfs4
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.121,local_lock=none,addr=192.168.1.3)
>
> Libvirt/qemu reports an error:
>
> Could not open
> '/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
> Permission denied
>
> Even the permissions of the mentioned file seem to be OK. SELinux is
> disabled since I had a lot of problems with earlier versions when trying
> to deploy hosted-engine.
>
You can keep it on without any known issue.


> [root@epycdphv02 ~]# ls -al
> '/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08'
> -rw-rw 1 vdsm kvm 53687091200 Dec  3 13:09
> /var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
>
> hosted-engine --deploy ends with error. Logfile is attached.
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.218320", "end": "2018-12-03 13:20:19.139919", "rc": 0, "start":
> "2018-12-03 13:20:18.921599", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=156443
> (Mon Dec  3 13:20:16
> 2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=156443 (Mon Dec  3
> 13:20:16
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Fri
> Jan  2 20:29:01 1970\\n\", \"hostname\": \"epycdphv02.office.databay.de\",
> \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\",
> \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"},
> \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\":
> \"d3355c40\", \"local_conf_timestamp\": 156443, \"host-ts\": 156443},
> \"global_maintenance\": false}", "stdout_lines": ["{\"1\":
> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=156443
> (Mon Dec  3 13:20:16
> 2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=156443 (Mon Dec  3
> 13:20:16
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Fri
> Jan  2 20:29:01 1970\\n\", \"hostname\": \"epycdphv02.office.databay.de\",
> \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\",
> \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"},
> \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\":
> \"d3355c40\", \"local_conf_timestamp\": 156443, \"host-ts\": 156443},
> \"global_maintenance\": false}"]}
>

state=EngineUnexpectedlyDown
means that the host tried to start the engine VM from the shared storage
but it failed for some other reason.
Could you please attach /var/log/messages
and /var/log/libvirt/qemu/HostedEngine.log ?
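
As a side note, the relevant fields can be pulled out of that escaped JSON blob without reading it by eye. A sketch (sample data inlined below; on a real host you would pipe the output of `hosted-engine --vm-status --json` instead):

```shell
# Sketch: summarize `hosted-engine --vm-status --json` output.
# Sample data inlined here; on a host, pipe the command's real output in.
json='{"1": {"engine-status": {"vm": "down_unexpected", "health": "bad"},
             "score": 0, "hostname": "epycdphv02.office.databay.de"},
       "global_maintenance": false}'
echo "$json" | python3 -c '
import json, sys
status = json.load(sys.stdin)
for key, host in status.items():
    if key.isdigit():  # numeric keys are per-host entries
        print(key, host["hostname"], host["engine-status"]["vm"],
              "score", host["score"])'
```

This prints one line per host with the VM state and HA score, which is usually enough to spot `down_unexpected` / score 0 at a glance.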


>
> [ INFO  ] TASK [Check VM status at virt level]
> [ INFO  ] TASK [Fail if engine VM is not running]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine
> VM is not running, please check vdsm logs"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> [ INFO  ] Stage: Clean up
> [ INFO  ] Cleaning temporary resources
> [ INFO  ] TASK [Gathering Facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Fetch logs from the engine VM]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Set destination directory path]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Create destination directory]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Find the local appliance image]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Set local_vm_disk_path]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Give the vm time to flush dirty buffers]
> [ INFO  ] ok: [localhost]
> [ 

[ovirt-users] Re: cluster 4 pc's for more scanning power?

2018-12-03 Thread Peter Collins
Thanks very much for these suggestions and assistance.

Peter

On Mon., Dec. 3, 2018, 3:26 a.m. Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello Peter,
> if you install it on one VM it will only run on one
> of the hosts. However, if you had a distributed scanner it could run on
> multiple hosts. I think there was some work on a distributed scanner in
> Docker (I don't know if there is an OpenShift or Kubernetes version), but
> it may be better to run it in a Docker or Kubernetes cluster rather than
> oVirt.
>
> Regards,
>  Paul S.
> 
> From: Peter C. 
> Sent: 01 December 2018 00:15
> To: users@ovirt.org
> Subject: [ovirt-users] cluster 4 pc's for more scanning power?
>
> Hello and sorry for the ignorance behind this question. I've been reading
> about various scale-out, "hyperconvergence" solutions and want to ask this
> about oVirt.
>
> I do vulnerability scanning on my company's assets. I do it from an
> obsolete laptop that was given to me. The load goes over 13 sometimes, and
> the scans take a long time.
>
> If I built an oVirt cluster from 4 or 5 desktop PCs and built a VM to run
> OpenVAS, would the CPU load demanded by the scanning be spread across the
> 3-4 hosts, not including the management host, and thereby give my scans more
> CPU power?
>
> If not oVirt, is there another project that would be better suited to what
> I'm trying to achieve?
>
> Qualifiers:
> -I'm not asking if this is the best way to get high-load scanning done.
> I'm just asking if I'll get the combined power from the CPU cores of all
> the host machines. The scanning jobs are already multithreaded.
> -I know it would probably be more efficient to get a powerful multi core
> workstation or server to do this. That is not my question.
> -The PCs are perfectly good; they are just not being used and won't be
> used for anything else.
>
> Thanks in advance.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S5WBOBNB6IRXWKIDITYIGGR6NUYMYSFM/
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
>
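
The distributed-scanning idea above can be sketched without any cluster at all: split the target list round-robin and hand each chunk to a separate scanner VM (the hostnames and the OpenVAS invocation are hypothetical):

```shell
# Sketch: round-robin split of a scan target list across N scanner VMs.
# Writes targets.0 .. targets.(N-1); feeding them to OpenVAS is left out.
split_targets() {
    # $1 = file with one target per line, $2 = number of scanner VMs
    awk -v n="$2" '{ print > ("targets." (NR % n)) }' "$1"
}

printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n10.0.0.4\n' > targets.txt
split_targets targets.txt 2
wc -l targets.0 targets.1
```

Each chunk would then be copied to one scanner VM (e.g. `scp targets.$i scanner$i:` -- hostnames hypothetical), spreading the CPU load the way a single VM cannot.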
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PXWLAZUOT37KDDJL7PIAJ4AOESGHETYX/


[ovirt-users] hosted-engine --deploy fails on Ovirt-Node-NG 4.2.7

2018-12-03 Thread Ralf Schenk
Hello,

I try to deploy hosted-engine to an NFS share accessible by (currently)
two hosts. The host is running the latest ovirt-node-ng 4.2.7.

hosted-engine --deploy fails constantly at a late stage when trying to run
the engine from NFS. It already ran as "HostedEngineLocal" and I think it
is then migrated to NFS storage.

Engine seems to be deployed to NFS already:

[root@epycdphv02 ~]# ls -al
/rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine
total 23
drwxrwxrwx 3 vdsm kvm    4 Dec  3 13:01 .
drwxr-xr-x 3 vdsm kvm 4096 Dec  1 17:11 ..
drwxr-xr-x 6 vdsm kvm    6 Dec  3 13:09 1dacf1ea-0934-4840-bed4-e9d023572f59
-rwxr-xr-x 1 vdsm kvm    0 Dec  3 13:42 __DIRECT_IO_TEST__

NFS Mount:

storage01.office.databay.de:/ovirt/engine on
/rhev/data-center/mnt/storage01.office.databay.de:_ovirt_engine type
nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.1.121,local_lock=none,addr=192.168.1.3)

Libvirt/qemu reports an error:

Could not open
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08':
Permission denied

Even the permissions of the mentioned file seem to be OK. SELinux is
disabled since I had a lot of problems with earlier versions when trying
to deploy hosted-engine.

[root@epycdphv02 ~]# ls -al
'/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08'
-rw-rw 1 vdsm kvm 53687091200 Dec  3 13:09
/var/run/vdsm/storage/1dacf1ea-0934-4840-bed4-e9d023572f59/2b1332f6-3bb6-495b-87fe-c5b85e0ac495/39d45b33-5f29-430b-8b58-14a8ea20fb08
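
"Permission denied" despite a correct file mode usually points at a parent directory (or NFS root-squash) blocking traversal. A quick sketch to print the mode of a path and every parent (`namei -l <path>` does the same where available; the vdsm path below is taken from the log above):

```shell
# Sketch: print the permissions of a path and each of its parents, since a
# non-traversable parent directory explains "Permission denied" on a file
# whose own mode looks fine.
walk_parents() {
    p=$1
    while [ -n "$p" ] && [ "$p" != "/" ]; do
        ls -ld "$p" 2>/dev/null || echo "cannot stat: $p"
        p=$(dirname "$p")
    done
    ls -ld /
}

walk_parents /var/run/vdsm/storage
```

On an NFS-backed path, also check the export for `root_squash`/`anonuid` mapping, since qemu may access the image as a different uid than the `ls` above suggests.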

hosted-engine --deploy ends with error. Logfile is attached.

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
"0:00:00.218320", "end": "2018-12-03 13:20:19.139919", "rc": 0, "start":
"2018-12-03 13:20:18.921599", "stderr": "", "stderr_lines": [],
"stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=156443
(Mon Dec  3 13:20:16
2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=156443 (Mon Dec  3
13:20:16
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Fri
Jan  2 20:29:01 1970\\n\", \"hostname\":
\"epycdphv02.office.databay.de\", \"host-id\": 1, \"engine-status\":
{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\":
\"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"d3355c40\",
\"local_conf_timestamp\": 156443, \"host-ts\": 156443},
\"global_maintenance\": false}", "stdout_lines": ["{\"1\":
{\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=156443
(Mon Dec  3 13:20:16
2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=156443 (Mon Dec  3
13:20:16
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Fri
Jan  2 20:29:01 1970\\n\", \"hostname\":
\"epycdphv02.office.databay.de\", \"host-id\": 1, \"engine-status\":
{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\":
\"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"d3355c40\",
\"local_conf_timestamp\": 156443, \"host-ts\": 156443},
\"global_maintenance\": false}"]}
[ INFO  ] TASK [Check VM status at virt level]
[ INFO  ] TASK [Fail if engine VM is not running]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Engine VM is not running, please check vdsm logs"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fetch logs from the engine VM]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set destination directory path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Create destination directory]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Find the local appliance image]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set local_vm_disk_path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Give the vm time to flush dirty buffers]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Copy engine logs]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Remove temporary entry in /etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20181203132110.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix 

[ovirt-users] Re: SPICE QXL Crashes Linux Guests

2018-12-03 Thread Victor Toso
Hi,

On Sun, Nov 25, 2018 at 02:48:13PM -0500, Alex McWhirter wrote:
> I'm having an odd issue that I find hard to believe could be a
> bug, and not some kind of user error, but I'm at a loss for
> where else to look.

Looks like a bug or bad configuration between host/guest. Let's
see.

> when booting a linux ISO with QXL SPICE graphics, the boot
> hangs as soon as kernel modesetting kicks in. Tried with the latest
> Debian, Fedora, and CentOS.  Sometimes it randomly works, but
> most often it does not. QXL / VGA VNC work fine. However, if I
> wait a few minutes after starting the VM for the graphics to
> start, then there are no issues and I can install as usual.
> 
> So after install, I reboot; it hangs on reboot right after
> graphics switch back to text mode with QXL SPICE, not with VNC.
> So I force power off, reboot, and wait a while for it to boot.
> If I did a text-only install, when I open a SPICE console it will
> hang after typing a few characters. If I did a graphical
> install, then as long as I waited long enough for X to start,
> it works perfectly fine.

From the logs of second email:

 | [3.201725] [drm] Initialized qxl 0.1.0 20120117 for :00:02.0 on minor 0

Quite old qxl? Would it be possible to update it? 0.1.5 was
released on 2016-12-19 [0]; that's almost 5 years of bugfixes,
etc.


[0] https://gitlab.freedesktop.org/xorg/driver/xf86-video-qxl/tags/xf86-video-qxl-0.1.5

> I tried to capture some logs, but since the whole guest OS
> hangs it's rather hard to pull off. I did see an occasional
> error about the mouse driver, so that's really all I have to go
> on.
> 
> As for the SPICE client, I'm using virt-viewer on Windows 10
> x64, and tried various versions of virt-viewer just to be sure; no
> change. I also have a large number of Windows guests with QXL
> SPICE. These all work with no issue.  Having the guest agent
> installed in the Linux guest seems to make no difference.
> 
> There are no out-of-the-ordinary logs on the VDSM hosts, but I
> can provide anything you may need. It's not specific to any one
> host; I have 10 VM hosts in the cluster, and they all do it. They are
> Westmere boxes if that makes a difference.
> 
> Any ideas on how I should approach this? VNC works well enough
> for a text-only Linux guest, but not being able to reboot my GUI
> Linux guests without also closing my SPICE connection is a
> small pain.

Maybe just updating the qxl driver would solve the issue, but which
configuration have you set with regard to memory for qxl?

I see in the logs

 | [3.163127] [drm] qxl: 16M of VRAM memory size
 | [3.163127] [drm] qxl: 63M of IO pages memory ready (VRAM domain)
 | [3.163128] [drm] qxl: 32M of Surface memory size

Not sure if the driver is using UMS (old) or KMS mode on Debian...

https://www.spice-space.org/multiple-monitors.html
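
For completeness, the guest's driver version can be checked non-interactively by grepping the kernel log. A sketch using the sample line from the logs above (on a live guest, `dmesg | grep -i qxl` would feed it real data):

```shell
# Sketch: extract the qxl driver version from a guest kernel log line so it
# can be compared against a newer release (sample line from the logs above).
log='[3.201725] [drm] Initialized qxl 0.1.0 20120117 for :00:02.0 on minor 0'
ver=$(printf '%s\n' "$log" | sed -n 's/.*Initialized qxl \([0-9.]*\) .*/\1/p')
echo "qxl driver version: $ver"
```

Note that this is the kernel DRM qxl module's version string; whether it matches the X-side xf86-video-qxl package is a separate check.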

> As far as oVirt versions, I'm on the latest; this is a rather
> fresh install.  I just set it up a few days ago, but I've been a
> long-time oVirt user. I am using a squid SPICE proxy if that
> makes a difference.

Cheers,
Victor


signature.asc
Description: PGP signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BYV5NFMZQOCKCJ4RXXVGO5PZR27E2J7O/