[ovirt-users] Re: Error creating a storage domain (On Cisco UCS Only)

2019-01-28 Thread nico . kruger
Hi There,

Here is a link for compressed /var/log 

https://drive.google.com/file/d/1MO2ls_27h86vQlgPW7zXsZxZYU9oyE9c/view?usp=sharing

Thank you for looking at this.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R6VJD6UM6XFS6DUDA4YHD5DGKM5HIXJH/


[ovirt-users] Administration Portal High Availability

2019-01-28 Thread ernestclyde
Hello,
Currently I am looking for documentation or a how-to on setting up High Availability
for the oVirt Engine's Administration Portal.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DUJ3DHDKXPVADHOKXNAWSIG4PLUBDZZ5/


[ovirt-users] Re: oVirt SHE 4.2.8 and CentOS 7.6

2019-01-28 Thread Vrgotic, Marko
I started from the 4.2.8 release notes and then navigated to Deploying Self-Hosted Engine:

https://www.ovirt.org/documentation/self-hosted/chap-Introduction.html


From: Greg Sheremeta 
Date: Sunday, 27 January 2019 at 02:33
To: "Vrgotic, Marko" 
Cc: "users@ovirt.org" 
Subject: Re: [ovirt-users] oVirt SHE 4.2.8 and CentOS 7.6


On Sat, Jan 26, 2019 at 1:30 PM Vrgotic, Marko wrote:
Hi all,

Looking at 4.2.8: how come the supported version is EL 7.5?

Where did you see that?

From https://www.ovirt.org/release/4.2.8/ :
"""
oVirt 4.2.8 Release Notes
The oVirt Project is pleased to announce the availability of the 4.2.8 release 
as of January 22, 2019.
...
This release is available now for Red Hat Enterprise Linux 7.6, CentOS Linux 
7.6 (or similar).
"""


Does that mean that it is not recommended to upgrade an already running SHE to
a version higher than EL 7.5, or does it mean not to deploy it on an EL version lower than 7.5?

Thank you.

Marko Vrgotic
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6QUZEUCD62TGAWAWAWD42EYETTWVQ3IO/


--

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4GRJEVEMWQAAPUJXYWRIKP52LIUNPLCD/




[ovirt-users] Re: host NonResponsive after trying to update network MTUs

2019-01-28 Thread Edward Berger
After changing the engine's MTU and ifdown eth0 && ifup eth0, the host is
now able to get capabilities and resync the hosts' networks, and allows the
host to be activated.

However, I have a new problem.  During the course of fighting the above, I
"undeployed" hosted-engine, and now it seems to freeze when I try
to reinstall with engine deploy... It eventually times out with a failure.
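A note for anyone debugging similar MTU hangs: an upper-layer interface (VLAN, bridge) cannot usefully carry a larger MTU than the device beneath it, and a mismatch like that is a common way for a host to end up non-responsive after an MTU change. A minimal sketch of that sanity check; the interface names and layering below are illustrative, not taken from this host:

```python
def check_mtu_stack(mtus, stack):
    """Return a list of (upper, lower) pairs where an upper-layer
    interface has a larger MTU than the device it sits on.

    mtus  -- dict mapping interface name to its MTU
    stack -- dict mapping an upper interface (vlan/bridge) to the
             lower interface it is built on
    """
    violations = []
    for upper, lower in stack.items():
        if mtus[upper] > mtus[lower]:
            violations.append((upper, lower))
    return violations


# Illustrative layout: bridge on top of a VLAN on top of a NIC.
mtus = {'eno2': 1500, 'eno2.100': 9000, 'ovirtmgmt': 9000}
stack = {'eno2.100': 'eno2', 'ovirtmgmt': 'eno2.100'}

# The VLAN asks for MTU 9000 but the NIC underneath is still at 1500.
print(check_mtu_stack(mtus, stack))  # [('eno2.100', 'eno2')]
```

Checking every layer of the stack before applying a jumbo MTU avoids the half-applied state described above.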


On Sun, Jan 27, 2019 at 8:59 PM Edward Berger  wrote:

> I have a problem host which also is the one I deployed a hyperconverged
> oVirt node-ng cluster from with the cockpit's hyperconverged installation
> wizard.
>
> When I realized after deploying that I hadn't set the MTUs correctly for
> the engine-mgmt network, the associated VLAN and eno.2 device, and also for my
> InfiniBand interface ib0, I went in and tried to set them to new values
> (9000 and 65520), and it got into some kind of hung state.
>
> The engine task window shows a task in "executing" and a never-ending
> spinning widget:
> "Handling non responsive Host track00..."
>
> I tried updating the host's /etc/sysconfig/network-scripts by hand.
> I've tried every combination of the engine's "set host in maintenance mode",
> "sync networks", "refresh host capabilities", activating and rebooting, but
> I'm still stuck with an unresponsive host.
>
> I had another host that also failed but it allowed me to put it into
> maintenance mode and then remove it from the cluster and "add new" it back
> and it was happy.
>
> This one won't let me remove it because it's serving the gluster volume
> mount point, even though I did give the mount options for the 2nd and 3rd
> backup volume servers.
>
> I'd appreciate any help restoring it to proper working order.
>
> I'm attaching the gzip'd engine log.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UX2HLMQW6SIDCUS7ESSXV22NYZYGIHKU/


[ovirt-users] Re: Open_vSwitch no key error after upgrading to 4.2.8

2019-01-28 Thread Jorick Astrego
Noticed this one too on 4.3.0 rc2; I didn't have time to check it out though.

Today I saw it after I removed an OVS cluster and recreated the cluster
with "legacy" bridge networking.

Regards,

Jorick Astrego

Netbulae


On 1/28/19 2:34 PM, Jayme wrote:
> I upgraded oVirt to 4.2.8 and now I am spammed with the following
> message in all host syslog.  How can I stop/fix this error?
>
> ovs-vsctl: ovs|1|db_ctl_base|ERR|no key "odl_os_hostconfig_hostid"
> in Open_vSwitch record "." column external_ids
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OH2VH3AYBRCNVM24PTP34OTS5ISYLUAD/




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270    i...@netbulae.eu    Staalsteden 4-3A    KvK 08198180
Fax: 053 20 30 271    www.netbulae.eu     7547 TA Enschede    BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ASUGIXCS5W5YKHUTNFUZSJT2S6SZW5A4/


[ovirt-users] Re: Sanlock volume corrupted on deployment

2019-01-28 Thread Nir Soffer
On Sat, Jan 26, 2019 at 6:13 PM Strahil  wrote:

> Hey guys,
>
> I have noticed that with 4.2.8 the sanlock issue (during deployment) is
> still not fixed.
> Am I the only one with bad luck or there is something broken there ?
>
> The sanlock service reports code 's7 add_lockspace fail result -233'
> 'leader1 delta_acquire_begin error -233 lockspace hosted-engine host_id
> 1'.
>

Sanlock does not have such an error code - are you sure it is -233?

Here are the sanlock return values:
https://pagure.io/sanlock/blob/master/f/src/sanlock_rv.h

Can you share your sanlock log?
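To confirm the exact code before mapping it against sanlock_rv.h, it can help to pull the raw numeric values out of the log. A small sketch; the sample lines are the ones quoted in this thread, and the parsing pattern is an assumption about the log format:

```python
import re

# Matches "error <negative number>" or "result <negative number>" in
# sanlock log lines, e.g. "delta_acquire_begin error -233".
CODE_RE = re.compile(r'\b(?:error|result)\s+(-\d+)\b')


def sanlock_error_codes(lines):
    """Return the distinct negative numeric codes found in the log lines."""
    codes = set()
    for line in lines:
        for match in CODE_RE.findall(line):
            codes.add(int(match))
    return sorted(codes)


log = [
    "s7 add_lockspace fail result -233",
    "leader1 delta_acquire_begin error -233 lockspace hosted-engine host_id 1",
]
print(sanlock_error_codes(log))  # [-233]
```

Feeding the whole of /var/log/sanlock.log through this gives the full set of codes to look up.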



>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZMF5KKHSXOUTLGX3LR2NBN7E6QGS6G3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RUONCMQFRH3HBTBFD4YMI7AGDPAS5D6T/


[ovirt-users] Re: ovirt-sdk_external network add

2019-01-28 Thread Dominik Holler
On Mon, 28 Jan 2019 16:34:30 -
"ada per"  wrote:

> Hello everyone,
> I have the following script.
> I've been looking in the ovirt-sdk but I cannot seem to find the proper
> way of adding an external provider network under ovirt-ovn. I manage
> to add logical networks and VLANs but no luck with an external provider.
> 
> Any advice is appreciated 
> 
> network = networks_service.add(
> network=types.Network(
> name='ext_net',
> description='Testing network', 
> data_center=types.DataCenter(
> name='Default'
> ),
>  usages=[types.NetworkUsage.VM],
> external_provider='ovirt-provider-ovn',  --> I know this
> part is wrong; what is it supposed to be called? ),

external_provider=types.OpenStackNetworkProvider(
id=provider.id
)

please find a full example script in
https://gist.github.com/dominikholler/be7286931c0ea26b14965a5f91783cd4
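For reference, the equivalent plain REST request posts a <network> element that references the provider by id rather than by name. A rough sketch of building that body with the standard library; the element names mirror the SDK types above and should be checked against the API schema, and the provider id is a placeholder:

```python
import xml.etree.ElementTree as ET


def network_request_body(name, description, datacenter, provider_id):
    """Build an XML body for POST /ovirt-engine/api/networks."""
    network = ET.Element('network')
    ET.SubElement(network, 'name').text = name
    ET.SubElement(network, 'description').text = description
    dc = ET.SubElement(network, 'data_center')
    ET.SubElement(dc, 'name').text = datacenter
    usages = ET.SubElement(network, 'usages')
    ET.SubElement(usages, 'usage').text = 'vm'
    # The provider is referenced by id, mirroring
    # types.OpenStackNetworkProvider(id=provider.id) in the SDK answer above.
    ET.SubElement(network, 'external_provider', id=provider_id)
    return ET.tostring(network, encoding='unicode')


body = network_request_body('ext_net', 'Testing network', 'Default',
                            '00000000-0000-0000-0000-000000000000')
print(body)
```

The key point in both forms is the same: pass a provider reference by id, not the provider's name as a string.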

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/ List
> Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGGF7HZTMWDMCLNUATLHIXRYP7666TE4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDVTGOIK7YVHKBUSKXDWF7KQ7PFJFISN/


[ovirt-users] Re: Error creating a storage domain (On Cisco UCS Only)

2019-01-28 Thread Benny Zlotnik
Can you please attach full engine and vdsm logs?

On Sun, Jan 27, 2019 at 2:32 PM  wrote:

> Hi Guys,
>
> I am trying to install a new cluster... I currently have one 9-node and two
> 6-node oVirt clusters... (these were installed on 4.1 and upgraded to 4.2)
>
> So I want to build a new cluster, which is working fine on this HP
> notebook I use for testing (using a single-node gluster deployment).
>
> But when I try to install this on my production servers, which are Cisco UCS
> servers, I keep getting this error:
>
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Error creating a storage domain]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[Error creating a storage
> domain]\". HTTP response code is 400."}
>
>
> This happens during storage creation, after the hosted-engine is built and
> after gluster has been deployed (the error happens for both single and
> 3-replica deployments).
>
> I just can't see how an install on one type of server is successful but not
> on the UCS servers (which I am running my other oVirt clusters on).
>
> BTW, I don't think the issue is related to the Gluster storage creation, as I
> tried using NFS and local storage and got the same error (on UCS servers only).
>
> I am using the ovirt-node-ng-installer-4.2.0-2019011406.el7.iso install
> ISO
>
>
> Below is a tail from
> ovirt-hosted-engine-setup-ansible-create_storage_domain Log file
> 2019-01-27 11:09:49,754+0400 INFO ansible ok {'status': 'OK',
> 'ansible_task': u'Fetch Datacenter name', 'ansible_host': u'localhost',
> 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml',
> 'ansible_type': 'task'}
> 2019-01-27 11:09:49,754+0400 DEBUG ansible on_any args
>  kwargs
> 2019-01-27 11:09:50,478+0400 INFO ansible task start {'status': 'OK',
> 'ansible_task': u'Add NFS storage domain', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml',
> 'ansible_type': 'task'}
> 2019-01-27 11:09:50,479+0400 DEBUG ansible on_any args TASK: Add NFS
> storage domain kwargs is_conditional:False
> 2019-01-27 11:09:51,151+0400 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details_nfs" type "" value: "{
> "changed": false,
> "skip_reason": "Conditional result was False",
> "skipped": true
> }"
> 2019-01-27 11:09:51,151+0400 INFO ansible skipped {'status': 'SKIPPED',
> 'ansible_task': u'Add NFS storage domain', 'ansible_host': u'localhost',
> 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml',
> 'ansible_type': 'task'}
> 2019-01-27 11:09:51,151+0400 DEBUG ansible on_any args
>  kwargs
> 2019-01-27 11:09:51,820+0400 INFO ansible task start {'status': 'OK',
> 'ansible_task': u'Add glusterfs storage domain', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml',
> 'ansible_type': 'task'}
> 2019-01-27 11:09:51,821+0400 DEBUG ansible on_any args TASK: Add glusterfs
> storage domain kwargs is_conditional:False
> 2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details_gluster" type "" value: "{
> "changed": false,
> "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_Xous24/__main__.py\", line 682,
> in main\nret = storage_domains_module.create()\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_Xous24/ansible_ovirt_storage_domain_payload.zip/ansible/module_utils/ovirt.py\",
> line 587, in create\n**kwargs\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 24225,
> in add\nreturn self._internal_add(storage_domain, headers, query,
> wait)\n  File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\",
> line 232, in _internal_add\nreturn future.wait() if wait else future\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55,
> in wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\n
>  self._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[Error creating a storage domain]\". HTTP response code
> is 400.\n",
> "failed": true,
> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error
> creating a storage domain]\". HTTP response code is 400."
> }"
> 2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "" value: "[]"
> 2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var
> "play_hosts" type "" value: "[]"
> 2019-01-27 11:10:02,045+0

[ovirt-users] ovirt-sdk_external network add

2019-01-28 Thread ada per
Hello everyone,
I have the following script.
I've been looking in the ovirt-sdk but I cannot seem to find the proper way of
adding an external provider network under ovirt-ovn.
I manage to add logical networks and VLANs but no luck with an external provider.

Any advice is appreciated.

network = networks_service.add(
network=types.Network(
name='ext_net',
description='Testing network', 
data_center=types.DataCenter(
name='Default'
),
 usages=[types.NetworkUsage.VM],
     external_provider='ovirt-provider-ovn',  --> I know this part is
wrong; what is it supposed to be called?
),
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGGF7HZTMWDMCLNUATLHIXRYP7666TE4/


[ovirt-users] Re: Sanlock volume corrupted on deployment

2019-01-28 Thread Strahil Nikolov
Hi Simone,
I will reinstall the nodes and will provide an update.

Best Regards,
Strahil Nikolov

On Sat, Jan 26, 2019 at 5:13 PM Strahil  wrote:

Hey guys,
I have noticed that with 4.2.8 the sanlock issue (during deployment) is still
not fixed. Am I the only one with bad luck or is there something broken there?

Hi,
I'm not aware of anything breaking hosted-engine deployment on 4.2.8.
Which kind of storage are you using? Can you please share your logs?

The sanlock service reports code 's7 add_lockspace fail result -233' 'leader1
delta_acquire_begin error -233 lockspace hosted-engine host_id 1'.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZMF5KKHSXOUTLGX3LR2NBN7E6QGS6G3/

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTRUQE7Q7NHIK7MHROGIN56FXVK65ZOD/


[ovirt-users] Open_vSwitch no key error after upgrading to 4.2.8

2019-01-28 Thread Jayme
I upgraded oVirt to 4.2.8 and now I am spammed with the following message
in all host syslog.  How can I stop/fix this error?

ovs-vsctl: ovs|1|db_ctl_base|ERR|no key "odl_os_hostconfig_hostid" in
Open_vSwitch record "." column external_ids
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OH2VH3AYBRCNVM24PTP34OTS5ISYLUAD/


[ovirt-users] Hyperconverged setup - storage architecture - scaling

2019-01-28 Thread Leo David
Hello Everyone,
Reading through the document:
"Red Hat Hyperconverged Infrastructure for Virtualization 1.5
 Automating RHHI for Virtualization deployment"

Regarding storage scaling, I see the following statements:





2.7. SCALING
Red Hat Hyperconverged Infrastructure for Virtualization is supported for one
node, and for clusters of 3, 6, 9, and 12 nodes. The initial deployment is
either 1 or 3 nodes. There are two supported methods of horizontally scaling
Red Hat Hyperconverged Infrastructure for Virtualization:

1. Add new hyperconverged nodes to the cluster, in sets of three, up to the
maximum of 12 hyperconverged nodes.

2. Create new Gluster volumes using new disks on existing hyperconverged
nodes. You cannot create a volume that spans more than 3 nodes, or expand an
existing volume so that it spans across more than 3 nodes at a time.

2.9.1. Prerequisites for geo-replication
Be aware of the following requirements and limitations when configuring
geo-replication:
One geo-replicated volume only: Red Hat Hyperconverged Infrastructure for
Virtualization (RHHI for Virtualization) supports only one geo-replicated
volume. Red Hat recommends backing up the volume that stores the data of
your virtual machines, as this usually contains the most valuable data.
--

Also  in oVirtEngine UI, when I add a brick to an existing volume i get the
following warning:

"Expanding gluster volume in a hyper-converged setup is not recommended as
it could lead to degraded performance. To expand storage for cluster, it is
advised to add additional gluster volumes."

Those things raise a couple of questions that are maybe easy for some of you
guys to answer, but for me they create a bit of confusion...
I am also referring to the Red Hat product documentation, because I treat
oVirt as production-ready, as RHHI is.

1. Is there any reason for not going to distributed-replicated volumes
(i.e. spreading one volume across 6, 9, or 12 nodes)?
- i.e. is it recommended that in a 9-node scenario I should have 3 separate
volumes? But then, how should I deal with the following question:

2. If only one geo-replicated volume can be configured, how should I
deal with replication of the 2nd and 3rd volumes for disaster recovery?

3. If the limit of hosts per datacenter is 250, then (in theory) the
recommended way of reaching this threshold would be to create 20 separate
oVirt logical clusters with 12 nodes each (and the datacenter managed from
one HA engine)?

4. At present, I have the following: one 9-node cluster, with all hosts
contributing 2 disks each to a single replica 3 distributed-replicated
volume. They were added to the volume in the following order:
node1 - disk1
node2 - disk1
..
node9 - disk1
node1 - disk2
node2 - disk2
..
node9 - disk2
At the moment, the volume is arbitrated, but I intend to go for full
distributed replica 3.

Is this a bad setup? Why?
It obviously breaks the Red Hat recommended rules...
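On question 4: gluster forms replica sets from consecutive groups of bricks (replica-count at a time), so the add order above determines which bricks mirror each other. A small bookkeeping sketch, assuming replica 3 and the exact brick order listed; this is pure arithmetic, not a gluster API call:

```python
def replica_sets(bricks, replica=3):
    """Group an ordered brick list into consecutive replica sets."""
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]


# Brick order from the message: node1..node9 disk1, then node1..node9 disk2.
bricks = ['node%d-disk%d' % (n, d) for d in (1, 2) for n in range(1, 10)]

for rs in replica_sets(bricks):
    nodes = {b.split('-')[0] for b in rs}
    # Each replica set should span 3 distinct nodes to survive a node failure.
    assert len(nodes) == 3, rs

print(replica_sets(bricks)[0])  # ['node1-disk1', 'node2-disk1', 'node3-disk1']
```

With this ordering every set of 3 bricks lands on 3 different nodes, which is the property that matters for data safety regardless of how many nodes the volume spans.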

Is there anyone kind enough to discuss these things?

Thank you very much !

Leo


-- 
Best regards, Leo David




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGZZJIT4JSLYSOVLVYZADXJTWVEM42KY/


[ovirt-users] Re: converting oVirt host to be able to hosts to a Self-Hosted Environment

2019-01-28 Thread Simone Tiraboschi
On Mon, Jan 28, 2019 at 1:31 PM Jarosław Prokopowski 
wrote:

> Thanks Simone
>
> I forgot to mention that this is to have second node that is able to host
> the self-hosted engine for HA purposes.
> There is already one node that hosts the self-hosted engine and I want to
> have second one.
> Will it work in this case?
>

No, in this case it's by far easier:
you should just set that host into maintenance mode from the engine and
remember to switch on the hosted-engine deployment flag when you go to
reinstall the host.


>
>
>
> On Mon, Jan 28, 2019 at 1:25 PM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Jan 28, 2019 at 12:21 PM Jarosław Prokopowski <
>> jprokopow...@gmail.com> wrote:
>>
>>> Hi Guys,
>>>
>>> Is there a way to convert existing oVirt node (CentOS) to be able to
>>> host a Self-Hosted Environment?
>>> If so how can I do that?
>>>
>>
>> Hi, the best option is with backup and restore.
>> Basically you should take a backup of your current engine with
>> engine-backup
>> and then deploy hosted-engine with hosted-engine --deploy
>> --restore-from-file=mybackup.tar.gz
>>
>>
>>>
>>> Thanks
>>> Jaroson
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVY4SD5RTNV6VC4UCIJXG4CLVUSQRYGF/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OJ44XNDRPGHQQRNZQOV4JDIJPSCR6R3B/


[ovirt-users] Re: Sanlock volume corrupted on deployment

2019-01-28 Thread Simone Tiraboschi
On Sat, Jan 26, 2019 at 5:13 PM Strahil  wrote:

> Hey guys,
>
> I have noticed that with 4.2.8 the sanlock issue (during deployment) is
> still not fixed.
> Am I the only one with bad luck or there is something broken there ?
>

Hi,
I'm not aware of anything breaking hosted-engine deployment on 4.2.8.
Which kind of storage are you using?
Can you please share your logs?


>
> The sanlock service reports code 's7 add_lockspace fail result -233'
> 'leader1 delta_acquire_begin error -233 lockspace hosted-engine host_id 1'.
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZMF5KKHSXOUTLGX3LR2NBN7E6QGS6G3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDTZ67C3LDVA6OGLUGQXUX2LSIMW6HZW/


[ovirt-users] Re: converting oVirt host to be able to hosts to a Self-Hosted Environment

2019-01-28 Thread Simone Tiraboschi
On Mon, Jan 28, 2019 at 12:21 PM Jarosław Prokopowski <
jprokopow...@gmail.com> wrote:

> Hi Guys,
>
> Is there a way to convert existing oVirt node (CentOS) to be able to host
> a Self-Hosted Environment?
> If so how can I do that?
>

Hi, the best option is with backup and restore.
Basically you should take a backup of your current engine with engine-backup
and then deploy hosted-engine with hosted-engine --deploy
--restore-from-file=mybackup.tar.gz


>
> Thanks
> Jaroson
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVY4SD5RTNV6VC4UCIJXG4CLVUSQRYGF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7GKE3PUJWBZPHZQZAJVUU3GTUAB56R2S/


[ovirt-users] Re: Ovirt snapshot issues

2019-01-28 Thread Alex K
On Fri, Jan 25, 2019 at 2:19 PM Alex K  wrote:

>
>
> On Thu, Jan 24, 2019 at 11:28 AM Elad Ben Aharon 
> wrote:
>
>> Thanks!
>>
>> +Fred Rolland  seems like the same issue as
>> reported in https://bugzilla.redhat.com/show_bug.cgi?id=1555116
>>
> Seems to be related to time-out issues, though I did not have any
> storage issues that could have affected the snapshot procedure. I
> am running a replica 2 on a 1Gbit network where the VM storage resides, which is
> not the fastest and may affect this. Could rebasing the backing chain to
> reflect the engine state be a solution for this?
>
I did the following (after a backup of the VM):

qemu-img rebase -b 604d84c3-8d5f-4bb6-a2b5-0aea79104e43
4f636d91-a66c-4d68-8720-d2736a3765df
qemu-img rebase -b cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43
604d84c3-8d5f-4bb6-a2b5-0aea79104e43

But I am still getting the same error in the engine GUI.
I hope the relevant bugs will be resolved, as oVirt snapshotting has been
unstable for me (I have periodically reported several issues here without
finding a solution yet) and I had to switch to other backup approaches
(Veeam).
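One way to reason about the mismatch before touching anything with qemu-img: diff the on-disk backing chain against the volume ids the engine still shows. The helper below is a hypothetical sketch; the UUIDs are the ones from the backing chain quoted further down in this thread:

```python
def orphaned_volumes(chain, engine_known):
    """Return volumes present in the qcow2 backing chain but no longer
    referenced by any snapshot the engine knows about."""
    return [vol for vol in chain if vol not in engine_known]


# Backing chain from the active layer down to the raw base image.
chain = [
    'b46d8efe-885b-4a68-94ca-e8f437566bee',  # active VM
    'b7673dca-6e10-4a0f-9885-1c91b86616af',
    '4f636d91-a66c-4d68-8720-d2736a3765df',
    '6826cb76-6930-4b53-a9f5-fdeb0e8012ac',
    '61eea475-1135-42f4-b8d1-da6112946bac',
    '604d84c3-8d5f-4bb6-a2b5-0aea79104e43',
    '1e75898c-9790-4163-ad41-847cfe84db40',
    'cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43',
    '3f54c98e-07ca-4810-82d8-cbf3964c7ce5',  # raw base image
]

# The volumes the engine GUI still shows (the bold ids in the message below).
engine_known = {
    'b46d8efe-885b-4a68-94ca-e8f437566bee',
    'b7673dca-6e10-4a0f-9885-1c91b86616af',
    '4f636d91-a66c-4d68-8720-d2736a3765df',
    '604d84c3-8d5f-4bb6-a2b5-0aea79104e43',
    'cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43',
}

# Note: the raw base image shows up here only because the GUI does not list
# the base volume as a snapshot; it is not a candidate for merging.
print(orphaned_volumes(chain, engine_known))
```

Knowing exactly which layers the engine has forgotten is a prerequisite for deciding what a qemu-img commit would actually fold away.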


>
>>
>> 2019-01-24 10:12:08,240+02 ERROR
>> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default
>> task-544) [416c625f-e57b-46b8-bf74-5b774191fada] Error during
>> ValidateFailure.: java.lang.NullPointerExceptio
>> n
>>at org.ovirt.engine.core.bll.validator.storage.
>> StorageDomainValidator.getTotalSizeForMerge(StorageDomainValidator.java:205)
>> [bll.jar:]
>>at org.ovirt.engine.core.bll.validator.storage.
>> StorageDomainValidator.hasSpaceForMerge(StorageDomainValidator.java:241)
>> [bll.jar:]
>>at org.ovirt.engine.core.bll.validator.storage.
>> MultipleStorageDomainsValidator.lambda$allDomainsHaveSpaceForMerge$6(
>> MultipleStorageDomainsValidator.java:122) [bll.jar:]
>>at 
>> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>> [rt.jar:1.8.0_191]
>>
>>
>>
>> On Thu, Jan 24, 2019 at 10:25 AM Alex K  wrote:
>>
>>> When I get the error the engine.log  logs the attached
>>> engine-partial.log.
>>> At vdsm.log at SPM host I don't see any error generated.
>>> Full logs also attached.
>>>
>>> Thanx,
>>> Alex
>>>
>>>
>>>
>>>
>>> On Wed, Jan 23, 2019 at 5:53 PM Elad Ben Aharon 
>>> wrote:
>>>
 Hi,

 Can you please provide engine.log and vdsm.log?

 On Wed, Jan 23, 2019 at 5:41 PM Alex K  wrote:

> Hi all,
>
> I have ovirt 4.2.7, self-hosted on top gluster, with two servers.
> I have a specific VM which has encountered some snapshot issues.
> The engine lists 4 snapshots and when trying to delete one of them I
> get "General command validation failure".
>
> The VM was being backed up periodically by a python script which was
> creating a snapshot -> clone -> export -> delete clone -> delete snapshot.
> There were times where the VM was complaining of some illegal snapshots
> following such backup procedures and I had to delete such illegal 
> snapshots
> references from the engine DB (following some steps found online),
> otherwise I would not be able to start the VM if it was shut down. Seems
> though that this is not a clean process and leaves the underlying image of
> the VM in an inconsistent state in regards to its snapshots as when
> checking the backing chain of the image file I get:
>
> *b46d8efe-885b-4a68-94ca-e8f437566bee* (active VM)* ->*
> *b7673dca-6e10-4a0f-9885-1c91b86616af ->*
> *4f636d91-a66c-4d68-8720-d2736a3765df ->*
> 6826cb76-6930-4b53-a9f5-fdeb0e8012ac ->
> 61eea475-1135-42f4-b8d1-da6112946bac ->
> *604d84c3-8d5f-4bb6-a2b5-0aea79104e43 ->*
> 1e75898c-9790-4163-ad41-847cfe84db40 ->
> *cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43 ->*
> 3f54c98e-07ca-4810-82d8-cbf3964c7ce5 (raw image)
>
> The bold ones are the ones shown at engine GUI. The VM runs normally
> without issues.
> I was thinking if I could use qemu-img commit to consolidate and
> remove the snapshots that are not referenced from engine anymore. Any 
> ideas
> from your side?
>
> Thanx,
> Alex
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DDZXH5UG6QEH76A5EO4STZ4YV7RIQQ2I/
>


 --

 Elad Ben Aharon

 ASSOCIATE MANAGER, RHV storage QE


[ovirt-users] Re: Deploying single instance - error

2019-01-28 Thread Sachidananda URS
Hi David,

On Mon, Jan 28, 2019 at 5:01 PM Gobinda Das  wrote:

> Hi David,
>  Thanks!
> Adding sac to check if we are missing anything for gdeploy.
>
> On Mon, Jan 28, 2019 at 4:33 PM Leo David  wrote:
>
>> Hi Gobinda,
>> gdeploy --version
>> gdeploy 2.0.2
>>
>> yum list installed | grep gdeploy
>> gdeploy.noarch    2.0.8-1.el7    installed
>>
>>
Ramakrishna will build a Fedora package to include that fix.
It should be available to you in some time. Will keep you posted.

-sac
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZCWCEQKHJ5BSMLUBQD7O5STO6LGUPUG/


[ovirt-users] Re: Deploying single instance - error

2019-01-28 Thread Gobinda Das
Hi David,
 Thanks!
Adding sac to check if we are missing anything for gdeploy.

On Mon, Jan 28, 2019 at 4:33 PM Leo David  wrote:

> Hi Gobinda,
> gdeploy --version
> gdeploy 2.0.2
>
> yum list installed | grep gdeploy
> gdeploy.noarch    2.0.8-1.el7    installed
>
> Thank you !
>
>
> On Mon, Jan 28, 2019 at 10:56 AM Gobinda Das  wrote:
>
>> Hi David,
>>  Can you please check the  gdeploy version?
>> This bug was fixed last year:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1626513
>> And is part of: gdeploy-2.0.2-29
>>
>> On Sun, Jan 27, 2019 at 2:38 PM Leo David  wrote:
>>
>>> Hi,
>>> It seems that I had to manually add the following sections to make the
>>> script work:
>>> [diskcount]
>>> 12
>>> [stripesize]
>>> 256
>>>
>>> It looks like Ansible is still searching for these sections even though
>>> I configured "jbod" in the wizard...
>>>
>>> Thanks,
>>>
>>> Leo
>>>
>>>
>>>
>>> On Sun, Jan 27, 2019 at 10:49 AM Leo David  wrote:
>>>
 Hello Everyone,
 Using version 4.2.8 (ovirt-node-ng-installer-4.2.0-2019012606.el7.iso),
 deploying a single-node instance from within the Cockpit UI does not seem
 to be possible.
 Here's the generated inventory (I've specified "jbod" in the wizard):

 #gdeploy configuration generated by cockpit-gluster plugin
 [hosts]
 192.168.80.191

 [script1:192.168.80.191]
 action=execute
 ignore_script_errors=no
 file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
 192.168.80.191
 [disktype]
 jbod
 [service1]
 action=enable
 service=chronyd
 [service2]
 action=restart
 service=chronyd
 [shell2]
 action=execute
 command=vdsm-tool configure --force
 [script3]
 action=execute
 file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
 ignore_script_errors=no
 [pv1:192.168.80.191]
 action=create
 devices=sdb
 ignore_pv_errors=no
 [vg1:192.168.80.191]
 action=create
 vgname=gluster_vg_sdb
 pvname=sdb
 ignore_vg_errors=no
 [lv1:192.168.80.191]
 action=create
 lvname=gluster_lv_engine
 ignore_lv_errors=no
 vgname=gluster_vg_sdb
 mount=/gluster_bricks/engine
 size=230GB
 lvtype=thick
 [selinux]
 yes
 [service3]
 action=restart
 service=glusterd
 slice_setup=yes
 [firewalld]
 action=add

 ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
 services=glusterfs
 [script2]
 action=execute
 file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
 [shell3]
 action=execute
 command=usermod -a -G gluster qemu
 [volume1]
 action=create
 volname=engine
 transport=tcp

 key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
 value=36,36,on,32,on,off,30,off,on,off,off,off,enable
 brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
 ignore_volume_errors=no

 It does not finish; it throws the following error:

 PLAY [gluster_servers]
 *
 TASK [Create volume group on the disks]
 
 changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg':
 u'gluster_vg_sdb'})
 PLAY RECAP
 *
 192.168.80.191 : ok=1    changed=1    unreachable=0    failed=0
 *Error: Section diskcount not found in the configuration file*

 Any thoughts ?






 --
 Best regards, Leo David

>>>
>>>
>>> --
>>> Best regards, Leo David
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z2X45A6V6WQC3DBH6DGENJGBAVKNPY5T/
>>>
>>
>>
>> --
>>
>>
>> Thanks,
>> Gobinda
>>
>
>
> --
> Best regards, Leo David
>


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJARCLZTVSMEDATLQVUDI2NXYY5QY24E/
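

For readers hitting the same wall: the "Section diskcount not found"
message comes from gdeploy's INI-style section lookup, which (per the bug
referenced above) was consulted even for jbod setups. Below is a
simplified, hypothetical reconstruction of the failure mode, not
gdeploy's actual code:

```python
import configparser

# A jbod config like the generated one, without [diskcount]/[stripesize].
conf = """
[disktype]
jbod
"""

# gdeploy sections hold bare values, so keys without '=' must be allowed.
parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string(conf)

def get_section(p, name):
    # Mimic gdeploy's hard failure on a missing section.
    if not p.has_section(name):
        raise SystemExit(f"Error: Section {name} not found in the configuration file")
    return list(p[name])

disktype = get_section(parser, "disktype")[0]  # "jbod"

# Buggy behavior: [diskcount] is looked up unconditionally, so a valid
# jbod config aborts. The fix (and the manual workaround in this thread)
# is to make the sections present or the lookup conditional on disktype.
msg = ""
try:
    get_section(parser, "diskcount")
except SystemExit as e:
    msg = str(e)
print(msg)
```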


[ovirt-users] Creating Snapshots on Windows 7 Pro x64 VMs causes Bootloader errors

2019-01-28 Thread robertodg
Hello everyone,

on oVirt 4.2.6.4-1.el7, when we create a snapshot of a Windows 7 Pro x64 VM 
(regardless of whether the VM is powered off or not), after several reboots the 
VM shows bootloader errors (below you'll find a link to a screenshot of the error).

https://ibb.co/tKx6SKw

We haven't tried other Windows versions yet. Do you have any idea what the 
cause might be?

Thanks in advance.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGFOIJRWVX3K5XZQXWYONVK5P2JZLBOW/


[ovirt-users] converting an oVirt host to be able to host a Self-Hosted Engine

2019-01-28 Thread Jarosław Prokopowski
Hi Guys,

Is there a way to convert an existing oVirt node (CentOS) so that it can host a 
Self-Hosted Engine?
If so, how can I do that?

Thanks
Jaroson
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVY4SD5RTNV6VC4UCIJXG4CLVUSQRYGF/


[ovirt-users] Re: Deploying single instance - error

2019-01-28 Thread Leo David
Hi Gobinda,
gdeploy --version
gdeploy 2.0.2

yum list installed | grep gdeploy
gdeploy.noarch    2.0.8-1.el7    installed

Thank you !


On Mon, Jan 28, 2019 at 10:56 AM Gobinda Das  wrote:

> Hi David,
>  Can you please check the  gdeploy version?
> This bug was fixed last year:
> https://bugzilla.redhat.com/show_bug.cgi?id=1626513
> And is part of: gdeploy-2.0.2-29
>
> On Sun, Jan 27, 2019 at 2:38 PM Leo David  wrote:
>
>> Hi,
>> It seems that I had to manually add the following sections to make the
>> script work:
>> [diskcount]
>> 12
>> [stripesize]
>> 256
>>
>> It looks like Ansible is still searching for these sections even though
>> I configured "jbod" in the wizard...
>>
>> Thanks,
>>
>> Leo
>>
>> --
>> Best regards, Leo David
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GXIQW5S74B2ICZUXYULCYGDN2S3H4V6Y/


[ovirt-users] Re: Nvidia Grid K2 and Ovirt GPU Passtrough

2019-01-28 Thread okok102928
Does it ever stop at the boot screen (the Windows logo screen, etc.)?

You cannot implement a GPU display with SPICE. Some engineers have 
implemented this functionality through upstream KVM, but it is not 
fundamentally supported by oVirt.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4G2BQQA2AJARRGLDBR53Y46IKTOWE4K/


[ovirt-users] Re: Using oVirt for Desktop Virtualization

2019-01-28 Thread okok102928
Hi!
I had the same problem as you.

Use the latest version of the Remote Desktop Protocol (8.x) and enable the 
RemoteFX policy. Bandwidth is greatly reduced, and you get good quality at 
low bandwidth. I installed the same vGPU as you and have been working on a 
VDI project in some elementary schools.

However, the RDP protocol does not fully support 3D programs. Please check 
the points below.

1) If you have built up your company's infrastructure, tested what I 
described, and did not get good results: use PCoIP. However, I am confident 
that your costs will climb with its monthly and annual licenses.

2) If you personally want to use 3D programs in remote sessions: use 
TeamViewer. TeamViewer gives you enough quality to run games smoothly.

I hope my answer helps you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GC5MVPCKGNRNP7DE5KPAGMQH67H6ZRZZ/


[ovirt-users] Re: Deploying single instance - error

2019-01-28 Thread Gobinda Das
Hi David,
 Can you please check the  gdeploy version?
This bug was fixed last year:
https://bugzilla.redhat.com/show_bug.cgi?id=1626513
And is part of: gdeploy-2.0.2-29

On Sun, Jan 27, 2019 at 2:38 PM Leo David  wrote:

> Hi,
> It seems that I had to manually add the following sections to make the
> script work:
> [diskcount]
> 12
> [stripesize]
> 256
>
> It looks like Ansible is still searching for these sections even though
> I configured "jbod" in the wizard...
>
> Thanks,
>
> Leo
>
> --
> Best regards, Leo David
>


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P4MIBCHPGTYYIJ5NO736VAW37JXXH6MY/