[ovirt-users] oVirt automation through Ansible and cloud-init in oVirt 4.1.5 + Ansible 2.3.1

2017-09-08 Thread Julián Tete
oVirt Version: 4.1.5

Ansible: 2.3.1

Hello Friends of oVirt

I want to automate the creation, provisioning and deployment of virtual
machines in oVirt, using Ansible.

I want to use a non-cloud image for the template. It has cloud-init
installed, and it looks for the metadata IP at http://169.254.169... and
fails with an error message like: "Calling 'http://169.254.169...'".

It looks like oVirt uses a config drive with a user-data.txt file

This is my Ansible Playbook:

---
# First play
- hosts: oVirtEnginePruebas
  remote_user: root
  tasks:
    - name: Define the connection to the oVirt Engine
      ovirt_auth:
        url: https://engine1.example.com/ovirt-engine/api
        username: admin@internal
        password: mysupersecretpassword
        ca_file: /etc/pki/ovirt-engine/ca.pem

    - name: Create the required virtual machine
      ovirt_vms:
        auth: "{{ ovirt_auth }}"
        state: present
        name: CentOS7CloudInit
        template: CloudInitTemplate
        cluster: Default

    - name: Set the virtual machine properties and the cloud-init parameters
      ovirt_vms:
        auth: "{{ ovirt_auth }}"
        name: CentOS7CloudInit
        template: CloudInitTemplate
        cluster: Default
        memory: 5GiB
        cpu_cores: 8
        high_availability: true
        cloud_init:
          host_name: cloudinit.example.com
          nic_name: eth0
          nic_boot_protocol: static
          nic_ip_address: 192.168.0.238
          nic_netmask: 255.255.255.0
          nic_gateway: 192.168.0.1
          dns_servers: 8.8.8.8
          dns_search: example.com
          nic_on_boot: true
          user_name: root
          root_password: mysupersecretpassword

    - name: Disconnect from the oVirt Engine by revoking the SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"


I just want to use Ansible for this; I do not want to use the oVirt
web interface to launch "Run Once" every time I want to provision a machine.

How can I do that?
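
My best guess so far, from reading the ovirt_vms module documentation (I have
not verified this yet, so please correct me), is that the cloud_init settings
are only applied when the module itself starts the VM, i.e. with state: running
acting like a "Run Once" start. Something like:

- name: Start the VM and apply the cloud-init parameters (run once)
  # assumption: with state: running the engine starts the VM once with these
  # cloud-init settings, so the web interface "Run Once" is not needed
  ovirt_vms:
    auth: "{{ ovirt_auth }}"
    state: running
    name: CentOS7CloudInit
    cloud_init:
      host_name: cloudinit.example.com
      nic_name: eth0
      nic_boot_protocol: static
      nic_ip_address: 192.168.0.238
      nic_netmask: 255.255.255.0
      nic_gateway: 192.168.0.1
      dns_servers: 8.8.8.8
      dns_search: example.com
      nic_on_boot: true
      user_name: root
      root_password: mysupersecretpassword

Is that the intended way, or is something else needed?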

Thanks in advance


[ovirt-users] Adding host for the first time - Power Management?

2017-09-08 Thread Mark Steele
Hello,

We have an existing 4-host oVirt 3.5 installation. I am adding two more HP
Gen9 blades to the configuration. I looked at how power management is
set up in oVirt for the first four blades and it appears to be configured
for ilo4, using a user account for which I cannot find documentation anywhere
within my infrastructure.

Questions:

- is this user / password defined on the blade somewhere (perhaps like an
IPMI setup)?
- is anyone familiar with where I set this up on an HP Blade?

I'd like to get this configured before it goes into production, as I assume
this requires access to the blade BIOS / setup to configure properly.
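
Once I track down those credentials, I assume I can sanity-check them from one
of the existing hosts with the fence agent CLI before putting the new blades
into production. Something like the following (assuming the fence-agents
packages are installed; the placeholders are mine):

# query the power status through iLO4 with the same credentials
# that Power Management is configured to use
fence_ilo4 -a <ilo-ip-address> -l <ilo-username> -p <ilo-password> -o status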

Best regards,

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue


[ovirt-users] oVirt node and bonding options customizations

2017-09-08 Thread Gianluca Cecchi
Hello,
In the past, when using CentOS as the hypervisor OS with mode=4 (802.3ad)
bonding, I had to use lacp_rate=1 (aka fast) because I noticed problems
with the default lacp_rate=0 parameter.
Now I'm testing an oVirt Node and I see that the graphical interface offers
options to set the mode and the link up and down delays, but not the
lacp_rate option.

It is a different environment from the one where I had to modify it, so I
will start with the default value and see if I get any problems again.

Right now the ifcfg-bond0 file created on the oVirt Node by the web
management interface contains:

BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=802.3ad"

Would I be able to modify it if needed and have it persist across
reboots, with something like this:

BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=802.3ad lacp_rate=1"

?
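
If hand-editing the file is not supported, I suppose the alternative would be
the "Custom" bonding mode in the engine's Setup Host Networks dialog, where (if
I understand it correctly) the whole options string can be entered and is then
persisted by the engine, for example:

# options string for the "Custom" bonding mode field (assuming that mode is available here)
mode=802.3ad miimon=100 lacp_rate=1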

Thanks in advance,
Gianluca


Re: [ovirt-users] Cannot install ovirt guest agent - repo does not exist

2017-09-08 Thread Yohan JAROSZ
Hi,

It was due to an opensuse.org outage (see https://status.opensuse.org/).
Now it should be back to normal.

Best,
Yo.

> On 7 Sep 2017, at 12:25, Christophe TREFOIS  wrote:
> 
> Hi,
> 
> Our puppet workflows are failing.
> 
> Could anybody please tell us what's going on?
> 
> Thanks,
> -- 
> 
> Dr Christophe Trefois, Dipl.-Ing.  
> Technical Specialist / Post-Doc
> 
> UNIVERSITÉ DU LUXEMBOURG
> 
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine  
> 6, avenue du Swing 
> L-4367 Belvaux  
> T: +352 46 66 44 6124 
> F: +352 46 66 44 6949  
> http://www.uni.lu/lcsb
> 
> 
> 
> 
> 
> This message is confidential and may contain privileged information. 
> It is intended for the named recipient only. 
> If you receive it in error please notify me and permanently delete the 
> original message and any copies. 
> 
>   
> 
>> On 6 Sep 2017, at 16:15, Yohan JAROSZ  wrote:
>> 
>> Dear Vinzenz, dear list
>> 
>> It seems that the repo evilissimo is not here anymore: 
>> http://download.opensuse.org/repositories/home:/
>> So we can’t install the ovirt guest agent as stated in the docs: 
>> https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-ubuntu/
>> (http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/)
>> 
>> best,
>> Yo.
>> 
>> 
>> 
>> 
>> Yohan Jarosz
>> Scientific Collaborator
>> 
>> UNIVERSITÉ DU LUXEMBOURG
>> 
>> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
>> Campus Belval | House of Biomedicine
>> 7, avenue des Hauts-Fourneaux
>> L-4362 Esch-sur-Alzette
>> T +352 46 66 44 6669
>> F +352 46 66 44 3 6669
>> 
>> 
>> yohan.jar...@uni.lu  http://lcsb.uni.lu
>> 
>> LCSB - Accelerating Biomedicine!: https://www.youtube.com/watch?v=oLUE6DjSB7Y
>> -
>> This message is confidential and may contain privileged information. It is 
>> intended for the named recipient only. If you receive it in error please 
>> notify me and permanently delete the original message and any copies.
>> -
>> 
> 



Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-09-08 Thread yayo (j)
2017-07-19 11:22 GMT+02:00 yayo (j) :

> running the "gluster volume heal engine" don't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster from: 2 (full repliacated) + 1
> arbiter to 3 full replicated cluster but i don't know this is the problem...
>
>

Hi,

I'm sorry for the follow-up. I want to say that after upgrading all nodes to
the same level, all problems are solved and the cluster works very well now!

Thank you all for the support!


Re: [ovirt-users] oVirt self-hosted with NFS on top gluster

2017-09-08 Thread Abi Askushi
Hi all,

Seems that I am seeing light at the end of the tunnel!

I have done the below:

Added the following option to /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on

restarted glusterd

Then set the volume option:
gluster volume set vms server.allow-insecure on

Then disabled the option:
gluster volume set vms performance.readdir-ahead off

which seems to have been enabled when desperately testing gluster options.
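
In case it helps anyone double-checking the same settings, the current values
can be listed like this (assuming a gluster version recent enough to have
"volume get"):

# 'volume get' needs GlusterFS 3.7 or newer
gluster volume get vms server.allow-insecure
gluster volume get vms performance.readdir-ahead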

Then I added the storage domain again with glusterfs (not NFS).
This gave me a really big performance boost on VM writes.

The dd went from 10MB/s to 60MB/s!
And the VM disk benchmarks went from a few MB/s to 80 - 100MB/s!

Seems that those two missing options did the trick for gluster.

When checking the VM XML I still see the disk defined as file and not
network, so I am not sure that libgfapi is being used by oVirt.
I will also create a new VM, just in case, and check its XML again.
Is there any way to confirm the use of libgfapi?
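
My best guess for checking this (assuming shell access to the host running the
VM; the VM name below is just an example) would be to look at the libvirt disk
definition and at the running qemu process, where I would expect a gluster://
URL instead of a plain file path if libgfapi were in use:

# read-only libvirt connection, so no SASL credentials are needed
virsh -r dumpxml CentOS7VM | grep -A3 "<disk"
# a libgfapi-backed drive shows up as a gluster:// URL on the qemu command line
ps -ef | grep qemu-kvm | grep -o "gluster://[^ ,]*"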

On Fri, Sep 8, 2017 at 10:08 AM, Abi Askushi 
wrote:

> I don't see any other bottleneck. CPUs are quite idle. Seems that the load
> is mostly due to high latency on IO.
>
> Reading further the gluster docs:
>
> https://github.com/gluster/glusterfs-specs/blob/master/
> done/GlusterFS%203.5/libgfapi%20with%20qemu%20libvirt.md
>
> I see that I am missing the following options:
>
> /etc/glusterfs/glusterd.vol :
> option rpc-auth-allow-insecure on
>
> gluster volume set vms server.allow-insecure on
>
> It says that the above allow qemu to use libgfapi.
> When checking the VM XML, I don't see any gluster protocol at the disk
> drive:
>
> <disk type='file' device='disk' ...>
>   <driver ... io='threads'/>
>   <source file='...'/>
>   <target ... />
>   <serial>222a1312-5efa-4731-8914-9a9d24dccba5</serial>
>   <boot ... />
>   <alias ... />
> </disk>
>
>
>
> While at gluster docs it mentions the below type of disk:
>
> <disk type='network' device='disk'>
>   <driver ... />
>   <source protocol='gluster' name='...'>
>     <host name='...' port='...'/>
>   </source>
>   <target ... />
>   <address type='pci' ... function='0x0'/>
> </disk>
>
>
> Does the above indicate that ovirt/qemu is not using libgfapi but FUSE only?
> This could be the reason of such slow perf.
>
>
> On Thu, Sep 7, 2017 at 1:47 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Thu, Sep 7, 2017 at 12:52 PM, Abi Askushi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Sep 7, 2017 at 10:30 AM, Yaniv Kaul  wrote:
>>>


 On Thu, Sep 7, 2017 at 10:06 AM, Yaniv Kaul  wrote:

>
>
> On Wed, Sep 6, 2017 at 6:08 PM, Abi Askushi 
> wrote:
>
>> For a first idea I use:
>>
>> dd if=/dev/zero of=testfile bs=1GB count=1
>>
>
> This is an incorrect way to test performance, for various reasons:
> 1. You are not using oflag=direct , thus not using DirectIO, but using
> cache.
> 2. It's unrealistic - it is very uncommon to write large blocks of
> zeros (sometimes during FS creation or wiping). Certainly not 1GB
> 3. It is a single thread of IO - again, unrealistic for VM's IO.
>
> I forgot to mention that I include oflag=direct in my tests. I agree
>>> though that dd is not the correct way to test, hence I mentioned I just use
>>> it to get a first feel. More tests are done within the VM benchmarking its
>>> disk IO (with tools like IOmeter).
>>>
>>> I suggest using fio and such. See https://github.com/pcuzner/fio-tools
> for example.
>
 Do you have any recommended config file to use for VM workload?
>>>
>>
>> Desktops and Servers VMs behave quite differently, so not really. But the
>> 70/30 job is typically a good baseline.
>>
>>
>>>
>>>
>
>>
>> When testing on the gluster mount point using above command I hardly
>> get 10MB/s. (On the same time the network traffic hardly reaches 
>> 100Mbit).
>>
>> When testing out of the gluster (for example at /root) I get 600 -
>> 700MB/s.
>>
>
> That's very fast - from 4 disks doing RAID5? Impressive (unless you
> use caching!). Are those HDDs or SSDs/NVMe?
>
>
 These are SAS disks. But there is also a RAID controller with 1GB cache.
>>>
>>>
>> When I mount the gluster volume with NFS and test on it I get 90 -
>> 100 MB/s, (almost 10x from gluster results) which is the max I can get
>> considering I have only 1 Gbit network for the storage.
>>
>> Also, when using glusterfs the general VM performance is very poor
>> and disk write benchmarks show that it is at least 4 times slower than 
>> when
>> the VM is hosted on the same data store when NFS mounted.
>>
>> I don't know why I am hitting such a significant performance penalty,
>> and every possible tweak that I was able to find out there did not make 
>> any
>> difference on the performance.
>>
>> The hardware I am using is pretty decent for the purposes intended:
>> 3 nodes, each node having 32 GB of RAM, 16 physical CPU cores, 2
>> TB of storage in RAID5 (4 disks), of which 1.5 TB are sliced for the data
>> store of ovirt where 

Re: [ovirt-users] oVirt self-hosted with NFS on top gluster

2017-09-08 Thread Abi Askushi
I don't see any other bottleneck. CPUs are quite idle. Seems that the load
is mostly due to high latency on IO.

Reading further in the gluster docs:

https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/libgfapi%20with%20qemu%20libvirt.md

I see that I am missing the following options:

/etc/glusterfs/glusterd.vol :
option rpc-auth-allow-insecure on

gluster volume set vms server.allow-insecure on

It says that the above allows qemu to use libgfapi.
When checking the VM XML, I don't see any gluster protocol at the disk
drive:

<disk type='file' device='disk' ...>
  <driver ... io='threads'/>
  <source file='...'/>
  <target ... />
  <serial>222a1312-5efa-4731-8914-9a9d24dccba5</serial>
  <boot ... />
  <alias ... />
</disk>



While the gluster docs mention the below type of disk:

<disk type='network' device='disk'>
  <driver ... />
  <source protocol='gluster' name='...'>
    <host name='...' port='...'/>
  </source>
  <target ... />
  <address type='pci' ... function='0x0'/>
</disk>


Does the above indicate that ovirt/qemu is not using libgfapi but FUSE only?
This could be the reason for such slow performance.


On Thu, Sep 7, 2017 at 1:47 PM, Yaniv Kaul  wrote:

>
>
> On Thu, Sep 7, 2017 at 12:52 PM, Abi Askushi 
> wrote:
>
>>
>>
>> On Thu, Sep 7, 2017 at 10:30 AM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Thu, Sep 7, 2017 at 10:06 AM, Yaniv Kaul  wrote:
>>>


 On Wed, Sep 6, 2017 at 6:08 PM, Abi Askushi 
 wrote:

> For a first idea I use:
>
> dd if=/dev/zero of=testfile bs=1GB count=1
>

 This is an incorrect way to test performance, for various reasons:
 1. You are not using oflag=direct , thus not using DirectIO, but using
 cache.
 2. It's unrealistic - it is very uncommon to write large blocks of
 zeros (sometimes during FS creation or wiping). Certainly not 1GB
 3. It is a single thread of IO - again, unrealistic for VM's IO.

 I forgot to mention that I include oflag=direct in my tests. I agree
>> though that dd is not the correct way to test, hence I mentioned I just use
>> it to get a first feel. More tests are done within the VM benchmarking its
>> disk IO (with tools like IOmeter).
>>
>> I suggest using fio and such. See https://github.com/pcuzner/fio-tools
 for example.

>>> Do you have any recommended config file to use for VM workload?
>>
>
> Desktops and Servers VMs behave quite differently, so not really. But the
> 70/30 job is typically a good baseline.
>
>
>>
>>

>
> When testing on the gluster mount point using above command I hardly
> get 10MB/s. (On the same time the network traffic hardly reaches 100Mbit).
>
> When testing out of the gluster (for example at /root) I get 600 -
> 700MB/s.
>

 That's very fast - from 4 disks doing RAID5? Impressive (unless you use
 caching!). Are those HDDs or SSDs/NVMe?


>>> These are SAS disks. But there is also a RAID controller with 1GB cache.
>>
>>
> When I mount the gluster volume with NFS and test on it I get 90 - 100
> MB/s, (almost 10x from gluster results) which is the max I can get
> considering I have only 1 Gbit network for the storage.
>
> Also, when using glusterfs the general VM performance is very poor and
> disk write benchmarks show that it is at least 4 times slower than when 
> the
> VM is hosted on the same data store when NFS mounted.
>
> I don't know why I am hitting such a significant performance penalty, and
> every possible tweak that I was able to find out there did not make any
> difference on the performance.
>
> The hardware I am using is pretty decent for the purposes intended:
> 3 nodes, each node having 32 GB of RAM, 16 physical CPU cores, 2
> TB of storage in RAID5 (4 disks), of which 1.5 TB are sliced for the data
> store of ovirt where VMs are stored.
>

>>> I forgot to ask why are you using RAID 5 with 4 disks and not RAID 10?
>>> Same usable capacity, higher performance, same protection and faster
>>> recovery, I believe.
>>>
>> Correction: there are 5 disks of 600GB each. The main reason going with
>> RAID 5 was the capacity. With RAID 10 I can use only 4 of them and get only
>> 1.1 TB usable, with RAID 5 I get 2.2 TB usable. I agree going with RAID 10
>> (+ one additional drive to go with 6 drives) would be better but this is
>> what I have now.
>>
>> Y.
>>>
>>>
 You have not mentioned your NIC speeds. Please ensure all work well,
 with 10g.
 Is the network dedicated for Gluster traffic? How are they connected?


>>> I have mentioned that I have 1 Gbit dedicated for the storage. A
>> different network is used for this and a dedicated 1Gbit switch. The
>> throughput has been confirmed between all nodes with iperf.
>>
>
> Oh With 1Gb, you can't get more than 100+MBps...
>
>
>> I know 10Gbit would be better, but when using native gluster at ovirt the
>> network pipe was hardly reaching 100Mbps thus the bottleneck was gluster
>> and not the network. If I can saturate 1Gbit and I still have performance
>> issues then I may 

Re: [ovirt-users] Update notes

2017-09-08 Thread Sandro Bonazzola
2017-09-07 19:25 GMT+02:00 Karli Sjöberg :

> Hey all!
>
> Just wanted to contrast all people's problems by chipping in that I've
> just updated my home oVirt Hosted Engine environment of two Hosts from
> 4.1.1 to 4.1.5 (engine and Hosts) without a hitch!
>

Thanks for the feedback! Very much appreciated!


>
> Seriously guys, this is getting boring ; P
>

Oh! You can go a step further and consume the master nightly ;P
Lots of fun guaranteed! But only until we're ready for the alpha; upgrading to
the 4.2.0 alpha should be as boring as updating to 4.1.5.


>
> Keep up the good work!
>

We'll do!


>
> /K
>
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 