Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-19 Thread Arsène Gschwind
I did start the hosted engine on the host where I applied the patch, but 
I had forgotten to restart ovirt-ha-agent; after restarting it, it now works.


Great job, thanks

Best regards,
Arsène


On 04/19/2017 03:39 PM, Martin Sivak wrote:
> I've applied the patch on one of my hosts and restarted the hosted
> engine as you wrote but it didn't help,

> the console button is still grayed out and not available.

Are you sure the host with the patch applied was used to start the 
engine? Btw, you need to restart the ovirt-ha-agent service first 
before restarting the engine VM.


Best regards

Martin Sivak

On Wed, Apr 19, 2017 at 3:17 PM, Arsène Gschwind wrote:


Hi,

I've applied the patch on one of my hosts and restarted the hosted
engine as you wrote but it didn't help, the console button is
still grayed out and not available.

Rgds,
Arsène


On 04/18/2017 02:20 PM, Evgenia Tokar wrote:

Hi,

Thanks for bringing the issue to our attention and for answering
the questions.

There is a patch to fix this: https://gerrit.ovirt.org/#/c/75515/

After applying the patch a restart to the hosted engine vm is
required (not just the engine).

Let me know if you are still experiencing issues with this.

Thanks,
Jenny


On Wed, Apr 12, 2017 at 8:05 PM, Rafał Wojciechowski wrote:

hi,

I will answer also. However, I am using a single hypervisor, so
without HA, and I have not performed the steps described at:
https://www.ovirt.org/documentation/how-to/hosted-engine/


1. yes - however I have to start in headless mode, so it is
quite obvious;
if I try to start with a SPICE/VNC console I get a
segfault from libvirtd
2. as above
3. as above


W dniu 12.04.2017 o 14:12, Arsène Gschwind pisze:


Hi all,

I will answer your questions:

1. definitely yes
2. the command hosted-engine --console works well and I'm
able to connect.
3. Here are the device entries


devices={device:qxl,alias:video0,type:video,deviceId:5210a3c3-9cc4-4aed-90c6-432dd2d37c46,address:{slot:0x02,
bus:0x00,domain:0x,type:pci,function:0x0}}
devices={device:console,type:console}

Thanks and rgds,
Arsène

On 04/12/2017 10:53 AM, Evgenia Tokar wrote:

Hi all,

I have managed to reproduce this issue and opened a bug for
tracking it:
https://bugzilla.redhat.com/show_bug.cgi?id=1441570 .

There is no solution yet, but I would appreciate it if anyone who
encountered this issue would answer some questions:
1. Is the console button greyed out in the UI?
2. On the hosted engine host, does the command
hosted-engine --console fail?
 If it fails, try upgrading ovirt-hosted-engine-ha on
the hosted engine host. We had a bug related to this issue
that was fixed
(https://bugzilla.redhat.com/show_bug.cgi?id=1364132).
 After upgrade and restart of the vm, this should work,
and you should be able to connect to the console.
3. On the hosted engine host look at the content of:
/var/run/ovirt-hosted-engine-ha/vm.conf
Does it contain a graphical device? Or a console device?

Thanks,
Jenny


On Mon, Apr 10, 2017 at 11:44 AM, Martin Sivak wrote:

Hi,

we are working on that, we can only ask for patience
now, Jenny was trying to find out what happened and how
to fix it all week.

Best regards

--
Martin Sivak
SLA / oVirt

On Mon, Apr 10, 2017 at 9:38 AM, Rafał Wojciechowski wrote:

hi,

I have a similar issue (I also started my own mail thread)
after upgrading 4.0 to 4.1

Version 4.1.1.8-1.el7.centos (before it was some
4.1.0.x or similar - the update did not fix it)

To run a VM I have to set Headless mode in the Console
tab - without it I get a libvirtd segfault (logs
attached in my mail thread).

So I am able to run VMs only without a console - do
you also have to set headless before running a VM?

I noticed that libvirt-daemon was also upgraded to
version 2.0 during the ovirt upgrade - I don't think
that 4.1 was not tested with such a libvirtd upgrade... but maybe?

Re: [ovirt-users] Question about Huge Pages

2017-04-19 Thread Martin Polednik

On 19/04/17 14:01 +0200, Gianluca Cecchi wrote:

On Wed, Apr 19, 2017 at 8:03 AM, Michal Skrivanek 
wrote:



Why not reserve regular hugepages for VMs on boot?



Do you mean at hypervisor level? In that case it is what I normally do
for physical servers where I install Oracle RDBMS



then you can use
it with vdsm hook for that Oracle VM.



Which hook are you referring to?
This one:
http://www.ovirt.org/develop/developer-guide/vdsm/hook/hugepages/ ?
If so, is it still current? In the sense that I need to mount the
hugetlbfs virtual file system at host level?
The hook description is not very detailed...
Normally, if I want the oracle user to be able to use huge pages on a physical server, I
have to specify

#
# Huge pages
#
vm.hugetlb_shm_group = 2000
# 18GB allocatable
vm.nr_hugepages = 9216
#

where 2000 is the group id of the dba group, the main group of the oracle user

How does this map to virtualization?
Eg:
1) Should vm.hugetlb_shm_group on the hypervisor side be set to the group of
the qemu user, as the qemu-kvm process runs with it?
2) Then do I have to set, VM by VM, the hugepages=xxx value in the hook, and
will that bypass the sysctl.conf configuration in the guest?
3) I presume I have to set the vm.hugetlb_shm_group parameter at guest
level


If you are using recent CentOS (or I guess Fedora), there isn't any
extra setup required. Just create the custom property:

On the host where engine is running:

$ engine-config -s "UserDefinedVMProperties=hugepages=^.*$"
$ service ovirt-engine restart

and you should see 'hugepages' when editing a VM under custom properties.
Set the number to (desired memory / 2048) and you're good to go. The
VM will run with its memory backed by hugepages. If you need
hugepages even inside the VM, do whatever you would do on a physical
host.
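
To make the arithmetic concrete: reading "desired memory / 2048" as the VM
memory in KiB divided by the 2048 KiB page size, a VM with 4 GiB to back with
hugepages needs 4194304 / 2048 = 2048 pages, and 18 GiB corresponds to 9216
(the same value as the vm.nr_hugepages example above). If you prefer to set
the property through the Python SDK instead of the UI, here is a rough sketch;
the engine URL, credentials and VM name are placeholders, and the use of
types.CustomProperty is my reading of the SDK, so verify against the 4.1 SDK
documentation:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

PAGE_KIB = 2048                          # one x86_64 hugepage = 2048 KiB
vm_mem_mib = 4096                        # memory to back with hugepages (example)
pages = vm_mem_mib * 1024 // PAGE_KIB    # 4096 MiB -> 2048 pages

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder engine URL
    username='admin@internal',
    password='***',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=oracle-vm')[0]         # placeholder VM name
vm_service = vms_service.vm_service(vm.id)

# Set the 'hugepages' custom property defined via UserDefinedVMProperties above.
vm_service.update(
    types.Vm(
        custom_properties=[
            types.CustomProperty(name='hugepages', value=str(pages)),
        ],
    ),
)
connection.close()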

mpolednik


Thanks,
Gianluca




It improves VM performance in
general, the only drawback is less flexibility since that memory can't
be used by others unless they specifically ask for hugepages.



This seems to confirm that I have to set a static sysctl.conf entry at
hypervisor level such as
vm.nr_hugepages = 



Also, I suppose you disable KSM, and I'm not sure about ballooning,
unless you need it I'd disable it too.



I kept the defaults for the moment, which I suppose should be

a) KSM disabled

ksm is configured to start by default as usual, but ksmtuned has
been disabled:

[g.cecchi@ov300 ~]$ sudo systemctl status ksm
● ksm.service - Kernel Samepage Merging
  Loaded: loaded (/usr/lib/systemd/system/ksm.service; enabled; vendor
preset: enabled)
  Active: active (exited) since Tue 2017-04-11 11:07:28 CEST; 1 weeks 1
days ago
 Process: 976 ExecStart=/usr/libexec/ksmctl start (code=exited,
status=0/SUCCESS)
Main PID: 976 (code=exited, status=0/SUCCESS)
  CGroup: /system.slice/ksm.service

Apr 11 11:07:28 ov300.datacenter.polimi.it systemd[1]: Starting Kernel
Samepage Merging...
Apr 11 11:07:28 ov300.datacenter.polimi.it systemd[1]: Started Kernel
Samepage Merging.

[g.cecchi@ov300 ~]$ sudo systemctl status ksmtuned
● ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon
  Loaded: loaded (/usr/lib/systemd/system/ksmtuned.service; disabled;
vendor preset: disabled)
  Active: inactive (dead)
[g.cecchi@ov300 ~]$


b) ballooning enabled for a newly created VM unless I explicitly disable it
(at least I see this happens in 4.1.1)

What should I do about a) and b) so they do not interfere with huge pages?



The hook is being improved right now in master, but it should be
usable in stable too.



I will be happy to test and verify and contribute to its description, as
soon as I understand its usage

Gianluca



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] CentOS 7 and kernel 4.x

2017-04-19 Thread FERNANDO FREDIANI

Hi folks

Is anyone using KVM nodes running CentOS with an upgraded kernel from
ELRepo, either 4.5 (lt) or 4.10 (ml), and noticed any improvements due
to that?


What about oVirt Node NG? I don't really like to make many changes to the
oVirt Node image, but I wanted to hear from whoever may have done that and
is having good and stable results. And if so, whether there is a way to build
an install image with one of those newer kernels.


Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Bryan Sockel
Thank you for the information. I did check my servers this morning; in total
I have 4 servers configured as part of my oVirt deployment: two
virtualization servers and 2 gluster servers, with one of the virtualization
hosts being the arbiter for my gluster replicated storage.

From what I can see on my 2 dedicated gluster boxes, I see traffic going out
over multiple links.  On both of my virtualization hosts I am seeing all
traffic go out via em1, and no traffic going out over the other interfaces.
All four interfaces are configured in a single bond as 802.3ad on both hosts,
with my logical networks attached to the bond.


-Original Message-
From: Yaniv Kaul 
To: Bryan Sockel 
Cc: users 
Date: Wed, 19 Apr 2017 10:41:40 +0300
Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?



On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel  wrote:
I was reading over this post to the group about storage options.  I am more of
a Windows guy as opposed to a Linux guy, but am learning quickly and had a
question.  You said that LACP will not provide extra bandwidth (especially
with NFS).  Does the same hold true for GlusterFS?  We are currently using
GlusterFS for the file replication piece.  Does GlusterFS take advantage of
any multipathing?

Thanks


I'd expect Gluster to take advantage of LACP, as it has replication to 
multiple peers (as opposed to NFS). See [1].
Y.

[1] 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
 


-Original Message-
From: Yaniv Kaul 
To: Charles Tassell 
Cc: users 
Date: Sun, 26 Mar 2017 10:40:00 +0300
Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?



On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell  wrote:
Hi Everyone,

  I'm about to set up an oVirt cluster with two hosts hitting a Linux storage 
server.  Since the Linux box can provide the storage in pretty much any 
form, I'm wondering which option is "best." Our primary focus is on 
reliability, with performance being a close second.  Since we will only be 
using a single storage server I was thinking NFS would probably beat out 
GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had 
assumed that iSCSI would be better performance-wise, but from what I'm 
seeing online that might not be the case.

NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support, 
which is nice.
Gluster probably requires 3 servers.
In most cases, I don't think people see the difference in performance 
between NFS and iSCSI. The theory is that block storage is faster, but in 
practice, most don't get to those limits where it really matters.


  Our servers will be using a 1G network backbone for regular traffic and a 
dedicated 10G backbone with LACP for redundancy and extra bandwidth for 
storage traffic if that makes a difference.

LACP many times (especially on NFS) does not provide extra bandwidth, as the 
(single) NFS connection tends to be sticky to a single physical link.
It's one of the reasons I personally prefer iSCSI with multipathing.


  I'll probably try to do some performance benchmarks with 2-3 options, but 
the reliability issue is a little harder to test for.  Has anyone had any 
particularly bad experiences with a particular storage option?  We have been 
using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues with 
the multipath setup, but that won't be a problem with the new SAN since it's 
only got a single controller interface.

A single controller is not very reliable. If reliability is your primary 
concern, I suggest ensuring there is no single point of failure - or at 
least you are aware of all of them (does the storage server have a redundant
power supply? connected to two power sources? Of course in some scenarios it's
overkill and perhaps not practical, but you should be aware of your weak
spots).

I'd stick with what you are most comfortable managing - creating, backing 
up, extending, verifying health, etc.
Y.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about Huge Pages

2017-04-19 Thread Gianluca Cecchi
On Wed, Apr 19, 2017 at 3:44 PM, Martin Polednik 
wrote:

>
>>
> If you are using recent CentOS (or I guess Fedora), there isn't any
> extra setup required. Just create the custom property:
>

Both my engine and my hosts are CentOS 7.3 + updates


>
> On the host where engine is running:
>
> $ engine-config -s "UserDefinedVMProperties=hugepages=^.*$"
> $ service ovirt-engine restart
>
> and you should see 'hugepages' when editing a VM under custom properties.
>

So no vdsm hook at all to install?



> Set the number to (desired memory / 2048) and you're good to go. The
> VM will run with it's memory backed by hugepages.


As in sysctl.conf? So that if I want 4Gb of Huge Pages I have to set 2048?



> If you need
> hugepages even inside the VM, do whatever you would do on a physical
> host.
>
> mpolednik
>
>
yes, the main goal is to have Huge Pages inside the guest, so that
Oracle RDBMS detects and uses them at startup

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-19 Thread Martin Sivak
> I've applied the patch on one of my hosts and restarted the hosted engine
> as you wrote but it didn't help,
> the console button is still grayed out and not available.

Are you sure the host with the patch applied was used to start the engine?
Btw, you need to restart the ovirt-ha-agent service first before restarting
the engine VM.

Best regards

Martin Sivak

On Wed, Apr 19, 2017 at 3:17 PM, Arsène Gschwind 
wrote:

> Hi,
>
> I've applied the patch on one of my hosts and restarted the hosted engine
> as you wrote but it didn't help, the console button is still grayed out and
> not available.
>
> Rgds,
> Arsène
>
> On 04/18/2017 02:20 PM, Evgenia Tokar wrote:
>
> Hi,
>
> Thanks for bringing the issue to our attention and for answering the
> questions.
>
> There is a patch to fix this: https://gerrit.ovirt.org/#/c/75515/
> After applying the patch a restart to the hosted engine vm is required
> (not just the engine).
>
> Let me know if you are still experiencing issues with this.
>
> Thanks,
> Jenny
>
>
> On Wed, Apr 12, 2017 at 8:05 PM, Rafał Wojciechowski <
> i...@rafalwojciechowski.pl> wrote:
>
>> hi,
>>
>> I will answer also. however I am using single hypervisor so without ha
>> and I have no performed steps:
>> https://www.ovirt.org/documentation/how-to/hosted-engine/
>>
>> 1. yes - however I have to start in headless mode so it is quite obvius
>> if I am trying to start with spice/vnc console I am getting segfault from
>> libvirtd
>> 2. as above
>> 3. as above
>>
>> W dniu 12.04.2017 o 14:12, Arsène Gschwind pisze:
>>
>> Hi all,
>>
>> I will answer your questions:
>>
>> 1. definitively yes
>> 2. the command hosted-engine --console works well and I'm able to connect.
>> 3. Here are the device entries
>>
>> devices={device:qxl,alias:video0,type:video,deviceId:5210a3c
>> 3-9cc4-4aed-90c6-432dd2d37c46,address:{slot:0x02,
>> bus:0x00,domain:0x,type:pci,function:0x0}}
>> devices={device:console,type:console}
>>
>> Thanks and rgds,
>> Arsène
>> On 04/12/2017 10:53 AM, Evgenia Tokar wrote:
>>
>> Hi all,
>>
>> I have managed to reproduce this issue and opened a bug for tracking it:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1441570 .
>>
>> There is no solution yet, but I would appreciate if any who encountered
>> this issue will answer some questions:
>> 1. Is the console button greyed out in the UI?
>> 2. On the hosted engine host, does the command hosted-engine --console
>> fails?
>>  If it fails, try upgrading ovirt-hosted-engine-ha on the hosted
>> engine host. We had a bug related to this issue that was fixed (
>> https://bugzilla.redhat.com/show_bug.cgi?id=1364132).
>>  After upgrade and restart of the vm, this should work, and you
>> should be able to connect to the console.
>> 3. On the hosted engine host look at the content of:
>> /var/run/ovirt-hosted-engine-ha/vm.conf
>> Does it contain a graphical device? Or a console device?
>>
>> Thanks,
>> Jenny
>>
>>
>> On Mon, Apr 10, 2017 at 11:44 AM, Martin Sivak  wrote:
>>
>>> Hi,
>>>
>>> we are working on that, we can only ask for patience now, Jenny was
>>> trying to find out what happened and how to fix it all week.
>>>
>>> Best regards
>>>
>>> --
>>> Martin Sivak
>>> SLA / oVirt
>>>
>>> On Mon, Apr 10, 2017 at 9:38 AM, Rafał Wojciechowski <
>>> i...@rafalwojciechowski.pl> wrote:
>>>
 hi,

 I have similiar issue(I also started my mailthread) after upgrade 4.0
 to 4.1

 Version 4.1.1.8-1.el7.centos (before it was some 4.1.0.x or similiar -
 update not fixed it)
 to run VM I have to set in Console tab Headless mode - without it I got
 libvirtd segfault(logs attached in my mailthread).

 So I am able to run VMs only without Console - do you also have to set
 headless before run VM?

 I noticed that libvirt-daemon was also upgraded to 2.0 version during
 ovirt upgrade - I dont think that 4.1 was not testes due to such libvirtd
 upgrade... but maybe?

 Regards,
 Rafal Wojciechowski

 W dniu 10.04.2017 o 08:24, Arsène Gschwind pisze:

 Hi,

 After updating to oVirt 4.1.1 Async release i can confirm that the
 problem still persists.

 Rgds,
 Arsène

 On 03/25/2017 12:25 PM, Arsène Gschwind wrote:

 Hi,
 After updating to 4.1.1 i'm observing the same behavior, HE without any
 console.
 Even when trying to edit the HE VMs it doesn't change anything,
 Graphics stays to NONE.

 Thanks for any Help.

 Regards,
 Arsène

 On 03/24/2017 03:11 PM, Nelson Lameiras wrote:

 Hello,

 When upgrading my test setup from 4.0 to 4.1, my engine vm lost it's
 console (from SPICE to None in GUI)

 My test setup :
 2 manually built hosts using centos 7.3, ovirt 4.1
 1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7,
 accessible with SPICE console via GUI

 I updated 

Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Yaniv Kaul
On Wed, Apr 19, 2017 at 5:07 PM, Bryan Sockel  wrote:

> Thank you for the information, i did check my servers this morning, in
> total i have 4 servers configured as part of my ovirt deployment, two
> virtualization servers and 2 gluster servers, with one of the
> virtualization being the arbiter for my gluster replicated storage.
>
> From what i can see on my 2 dedicated gluster boxes i see traffic going
> out over multiple links.  On both of my virtualization hosts i am seeing
> all traffic go out via em1, and no traffic going out over the other
> interfaces.  All four interfaces are configured in a single bond as 802.3ad
> on both hosts with my logical networks attached to the bond.
>

The balancing is based on a hash of either L2+L3 or L3+L4 fields. It may well be
that both flows end up with the same hash and therefore go through the same link.
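
A simplified sketch of the idea - this is only an illustration of hash-based
slave selection with toy numbers, not the exact formula the bonding driver
uses for its layer2+3 / layer3+4 xmit_hash_policy:

# Toy model of how an 802.3ad bond picks the outgoing link for a flow.
# The real kernel hash differs, but the point is the same: the choice is
# made per flow, not per packet, so flows can collide on one slave.

def pick_slave(src_ip, dst_ip, src_port, dst_port, n_slaves):
    ip_part = sum(int(o) for o in src_ip.split('.')) ^ \
              sum(int(o) for o in dst_ip.split('.'))
    port_part = src_port ^ dst_port
    return (ip_part ^ port_part) % n_slaves

# Two distinct flows between the same pair of hosts (arbitrary example values):
print(pick_slave('10.0.0.11', '10.0.0.21', 49152, 24007, 4))  # -> 1
print(pick_slave('10.0.0.11', '10.0.0.21', 49156, 24011, 4))  # -> 1
# Both example flows hash to the same slave here, so all their traffic leaves
# through one physical interface (e.g. em1) even though the bond has 4 links.
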
Y.


>
>
>
> -Original Message-
> From: Yaniv Kaul 
> To: Bryan Sockel 
> Cc: users 
> Date: Wed, 19 Apr 2017 10:41:40 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel 
> wrote:
>>
>> Was reading over this post to the group about storage options.  I am more
>> of a windows guy as appose to a linux guy, but am learning quickly and had
>> a question.  You said that LACP will not provide extra band with
>> (Especially with NFS).  Does the same hold true with GlusterFS.  We are
>> currently using GlusterFS for the file replication piece.  Does Glusterfs
>> take advantage of any multipathing?
>>
>> Thanks
>>
>>
>
> I'd expect Gluster to take advantage of LACP, as it has replication to
> multiple peers (as opposed to NFS). See[1].
> Y.
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%
> 20Guide/Network%20Configurations%20Techniques/
>
>>
>>
>> -Original Message-
>> From: Yaniv Kaul 
>> To: Charles Tassell 
>> Cc: users 
>> Date: Sun, 26 Mar 2017 10:40:00 +0300
>> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>>
>>
>>
>> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell 
>> wrote:
>>>
>>> Hi Everyone,
>>>
>>>   I'm about to setup an oVirt cluster with two hosts hitting a Linux
>>> storage server.  Since the Linux box can provide the storage in pretty much
>>> any form, I'm wondering which option is "best." Our primary focus is on
>>> reliability, with performance being a close second.  Since we will only be
>>> using a single storage server I was thinking NFS would probably beat out
>>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>>> assumed that that iSCSI would be better performance wise, but from what I'm
>>> seeing online that might not be the case.
>>
>>
>> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
>> support, which is nice.
>> Gluster probably requires 3 servers.
>> In most cases, I don't think people see the difference in performance
>> between NFS and iSCSI. The theory is that block storage is faster, but in
>> practice, most don't get to those limits where it matters really.
>>
>>
>>>
>>>   Our servers will be using a 1G network backbone for regular traffic
>>> and a dedicated 10G backbone with LACP for redundancy and extra bandwidth
>>> for storage traffic if that makes a difference.
>>
>>
>> LCAP many times (especially on NFS) does not provide extra bandwidth, as
>> the (single) NFS connection tends to be sticky to a single physical link.
>> It's one of the reasons I personally prefer iSCSI with multipathing.
>>
>>
>>>
>>>   I'll probably try to do some performance benchmarks with 2-3 options,
>>> but the reliability issue is a little harder to test for.  Has anyone had
>>> any particularly bad experiences with a particular storage option?  We have
>>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>>> with the multipath setup, but that won't be a problem with the new SAN
>>> since it's only got a single controller interface.
>>
>>
>> A single controller is not very reliable. If reliability is your primary
>> concern, I suggest ensuring there is no single point of failure - or at
>> least you are aware of all of them (does the storage server have redundant
>> power supply? to two power sources? Of course in some scenarios it's an
>> overkill and perhaps not practical, but you should be aware of your weak
>> spots).
>>
>> I'd stick with what you are most comfortable managing - creating, backing
>> up, extending, verifying health, etc.
>> Y.
>>
>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-19 Thread Jamie Lawrence
I trialed installing the hosted engine, following the instructions at
http://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/ .
This is using Gluster as the backend storage subsystem.

Answer file at the end.

Per the docs, 

"When the hosted-engine deployment script completes successfully, the oVirt 
Engine is configured and running on your host. The Engine has already 
configured the data center, cluster, host, the Engine virtual machine, and a 
shared storage domain dedicated to the Engine virtual machine.”

In my case, this is false. The installation claims success, but  the hosted 
engine VM stays stopped, unless I start it manually. If I start it manually, 
the default DC is down, the default cluster has the installation host in the 
cluster,  there is no storage, and the VM doesn’t show up in the GUI. In this 
install run, I have not yet started the engine manually.

I assume this is related to the errors in ovirt-hosted-engine-setup.log, below. 
(The timestamps are confusing; it looks like the Python errors are logged some 
time after they’re captured or something.) The HA broker and agent logs just 
show them looping in the sequence below.

Is there a decent way to pick this up and continue? If not, how do I make this 
work? 
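
For reference, the call that keeps failing in the traceback below can be
reproduced directly from a Python shell on the host; a minimal sketch,
assuming the ovirt-hosted-engine-ha package paths shown in the log:

# Run on the hosted-engine host; this exercises the same path that vdsm's
# _getHaInfo uses in the traceback below.
from ovirt_hosted_engine_ha.client import client

ha_cli = client.HAClient()
try:
    # Fails with BrokerConnectionError while ovirt-ha-broker is unreachable.
    print(ha_cli.get_all_stats())
except Exception as exc:
    print('HA broker not reachable: %s' % exc)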

Thanks,

-j

- - - - ovirt-hosted-engine-setup.log snippet: - - - - 

2017-04-19 12:29:55 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.system.vdsmenv.Plugin._late_setup
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
systemd.status:90 check service vdsmd status
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:813 execute: ('/bin/systemctl', 'status', 'vdsmd.service'), 
executable='None', cwd='None', env=None
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'status', 
'vdsmd.service'), rc=0
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:921 execute-output: ('/bin/systemctl', 'status', 
'vdsmd.service') stdout:
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Wed 2017-04-19 12:26:59 PDT; 2min 55s ago
  Process: 67370 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
  Process: 69995 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 70062 (vdsm)
   CGroup: /system.slice/vdsmd.service
   └─70062 /usr/bin/python2 /usr/share/vdsm/vdsm

Apr 19 12:29:00 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm 
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to 
broker, the number of errors has exceeded the limit (1)
Apr 19 12:29:00 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm root ERROR failed 
to retrieve Hosted Engine HA info
 Traceback (most recent 
call last):
   File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
 stats = 
instance.get_all_stats()
   File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 102, in get_all_stats
 with 
broker.connection(self._retries, self._wait):
   File 
"/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
 return 
self.gen.next()
   File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 99, in connection
 
self.connect(retries, wait)
   File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 78, in connect
 raise 
BrokerConnectionError(error_msg)
 BrokerConnectionError: 
Failed to connect to broker, the number of errors has exceeded the limit (1)
Apr 19 12:29:15 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm 
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to 
broker, the number of errors has exceeded the limit (1)
Apr 19 12:29:15 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm root ERROR failed 
to retrieve Hosted Engine HA info
 Traceback (most recent 
call last):
   File 

[ovirt-users] Hosted engine install failed; vdsm upset about broker (revised)

2017-04-19 Thread Jamie Lawrence

So, tracing this further, I’m pretty sure this is something about sanlock. 

As best I can tell, this [1] seems to be the failure that is blocking importing
the pool, creating storage domains, importing the HE, etc. Contrary to the log,
sanlock is running; I verified it starts on system boot and restarts just fine.

I found one reference to someone having a similar problem in 3.6, but that 
appeared to have been a permission issue I’m not afflicted with.

How can I move past this? 

TIA, 

-j


[1] agent.log:
MainThread::WARNING::2017-04-19 
17:07:13,537::agent::209::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
 Restarting agent, attempt '6'
MainThread::INFO::2017-04-19 
17:07:13,567::hosted_engine::242::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
 Found certificate common name: sc5-ovirt-2.squaretrade.com
MainThread::INFO::2017-04-19 
17:07:13,569::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
 Initializing VDSM
MainThread::INFO::2017-04-19 
17:07:16,044::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Connecting the storage
MainThread::INFO::2017-04-19 
17:07:16,045::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Connecting storage server
MainThread::INFO::2017-04-19 
17:07:20,876::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Connecting storage server
MainThread::INFO::2017-04-19 
17:07:20,893::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Refreshing the storage domain
MainThread::INFO::2017-04-19 
17:07:21,160::hosted_engine::657::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Preparing images
MainThread::INFO::2017-04-19 
17:07:21,160::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
 Preparing images
MainThread::INFO::2017-04-19 
17:07:23,954::hosted_engine::660::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Refreshing vm.conf
MainThread::INFO::2017-04-19 
17:07:23,955::config::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
 Reloading vm.conf from the shared storage domain
MainThread::INFO::2017-04-19 
17:07:23,955::config::412::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
 Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::WARNING::2017-04-19 
17:07:26,741::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Unable to find OVF_STORE
MainThread::ERROR::2017-04-19 
17:07:26,744::config::450::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2017-04-19 
17:07:26,770::hosted_engine::509::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Initializing ha-broker connection
MainThread::INFO::2017-04-19 
17:07:26,771::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor ping, options {'addr': '10.181.26.1'}
MainThread::INFO::2017-04-19 
17:07:26,774::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 140621269798096
MainThread::INFO::2017-04-19 
17:07:26,774::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name': 
'ovirtmgmt', 'address': '0'}
MainThread::INFO::2017-04-19 
17:07:26,791::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 140621269798544
MainThread::INFO::2017-04-19 
17:07:26,792::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
MainThread::INFO::2017-04-19 
17:07:26,793::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 140621269798224
MainThread::INFO::2017-04-19 
17:07:26,794::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor cpu-load-no-engine, options {'use_ssl': 'true', 'vm_uuid': 
'04ff4cf1-135a-4918-9a1f-8023322f89a3', 'address': '0'}
MainThread::INFO::2017-04-19 
17:07:26,796::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 140621269796816
MainThread::INFO::2017-04-19 
17:07:26,796::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 
'04ff4cf1-135a-4918-9a1f-8023322f89a3', 'address': '0'}

Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Yaniv Kaul
On Tue, Apr 18, 2017 at 7:48 PM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> hello,
>
> When putting a host on "maintenance mode", all vms start migrating to
> other hosts.
>
> We have some hosts that have 60 vms. So this will create 60 vms
> migrating simultaneously.
> Some vms are under such heavy load that migration fails often (our
> guess is that massive simultaneous migrations do not help migration
> convergence) - even with "suspend workload if needed" migration policy.
>
> - Does oVirt really launches 60 simultaneous migrations or is there a
> queuing system ?
> - If there is a queuing system, is there a way to configure a maximum
> number of simultaneous migrations ?
>
> I did see a "migration bandwidth limit", but this is not quite what we are
> looking for.
>

What migration policy are you using?
Are you using a dedicated migration network, or the ovirtmgmt network?


>
> my setup:
> ovirt-engine +hosted engine 4.1.1
> hosts : centos 7.3 fully updated.
>
> for full context to understand this question : 2 times in the past, when
> trying to put a host in maintenance, host stopped responding during massive
> migrations and was fenced by engine. It's still unclear why host stopped
> responding, but we think that migrating 60+ vms simultaneously puts a heavy
> strain on storage ? So we would like to better control migration process in
> order to better understand what's happening. This scenario is "production
> only" since our labs do not contain nearly as much vm with such heavy
> loads. So rather than trying to reproduce, we are trying to avoid ;)
>

If you could open a bug with relevant logs on the host not responding,
that'd be great.
Live migration doesn't touch the storage (disks are not moving anywhere),
but it does stress the network. I doubt it, but perhaps you over-saturate
the ovirtmgmt network.
Y.


>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Nelson Lameiras
1000 Mbps full duplex 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Konstantin Shalygin"  
To: "Nelson Lameiras"  
Cc: users@ovirt.org 
Sent: Wednesday, April 19, 2017 11:25:27 AM 
Subject: Re: [ovirt-users] massive simultaneous vms migrations ? 



I mean what is your hardware? 1G? 40G? 

On 04/19/2017 04:16 PM, Nelson Lameiras wrote: 



I'm using ovirtmgmt network for migrations.

I'm getting the vibe that using a dedicated network for migration is "good 
practice"...

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 nelson.lamei...@lyra-network.com www.lyra-network.com | 
www.payzen.eu Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

- Original Message -
From: "Konstantin Shalygin"
To: users@ovirt.org, "Nelson Lameiras"
Sent: Wednesday, April 19, 2017 3:15:29 AM
Subject: Re: Re: [ovirt-users] massive simultaneous vms migrations ?

Hello.

What is your Migration Network? 

We have some hosts that have 60 vms. So this will create a 60 vms migrating 
simultaneously.
Some vms are under so much heavy loads that migration fails often (our guess is 
that massive simultaneous migrations does not help migration convergence) - 
even with "suspend workload if needed" migraton policy. 





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Konstantin Shalygin

We had many migration issues with 1G and 18-25 VMs.

Migrations were slow, stuck, or failed. We switched to 10G and set the migration
limit to 5000 Mbps (actually this does not work, but if you don't set this field,
the limit is 1000 Mbps!) - 25 VMs now migrate in ~30 seconds in total.
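
If you want to set that limit through the API rather than the cluster dialog,
a sketch with the Python SDK; the MigrationOptions/MigrationBandwidth attribute
names are my recollection of the 4.1 API model, and the cluster name and
credentials are placeholders, so verify against the SDK documentation first:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder engine URL
    username='admin@internal',
    password='***',
    ca_file='ca.pem',
)
clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]   # placeholder cluster
cluster_service = clusters_service.cluster_service(cluster.id)

# Set a fixed migration bandwidth limit (in Mbps) for the whole cluster.
cluster_service.update(
    types.Cluster(
        migration=types.MigrationOptions(
            bandwidth=types.MigrationBandwidth(
                assignment_method=types.MigrationBandwidthAssignmentMethod.CUSTOM,
                custom_value=5000,
            ),
        ),
    ),
)
connection.close()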



On 04/19/2017 07:41 PM, Nelson Lameiras wrote:

1000 Mbps full duplex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-19 Thread Arsène Gschwind

Hi,

I've applied the patch on one of my hosts and restarted the hosted 
engine as you wrote but it didn't help, the console button is still 
grayed out and not available.


Rgds,
Arsène


On 04/18/2017 02:20 PM, Evgenia Tokar wrote:

Hi,

Thanks for bringing the issue to our attention and for answering the 
questions.


There is a patch to fix this: https://gerrit.ovirt.org/#/c/75515/
After applying the patch a restart to the hosted engine vm is required 
(not just the engine).


Let me know if you are still experiencing issues with this.

Thanks,
Jenny


On Wed, Apr 12, 2017 at 8:05 PM, Rafał Wojciechowski wrote:


hi,

I will answer also. However, I am using a single hypervisor, so
without HA, and I have not performed the steps described at:
https://www.ovirt.org/documentation/how-to/hosted-engine/


1. yes - however I have to start in headless mode, so it is quite
obvious;
if I try to start with a SPICE/VNC console I get a
segfault from libvirtd
2. as above
3. as above


W dniu 12.04.2017 o 14:12, Arsène Gschwind pisze:


Hi all,

I will answer your questions:

1. definitely yes
2. the command hosted-engine --console works well and I'm able to
connect.
3. Here are the device entries


devices={device:qxl,alias:video0,type:video,deviceId:5210a3c3-9cc4-4aed-90c6-432dd2d37c46,address:{slot:0x02,
bus:0x00,domain:0x,type:pci,function:0x0}}
devices={device:console,type:console}

Thanks and rgds,
Arsène

On 04/12/2017 10:53 AM, Evgenia Tokar wrote:

Hi all,

I have managed to reproduce this issue and opened a bug for
tracking it: https://bugzilla.redhat.com/show_bug.cgi?id=1441570 .

There is no solution yet, but I would appreciate it if anyone who
encountered this issue would answer some questions:
1. Is the console button greyed out in the UI?
2. On the hosted engine host, does the command hosted-engine
--console fail?
 If it fails, try upgrading ovirt-hosted-engine-ha on the
hosted engine host. We had a bug related to this issue that was
fixed (https://bugzilla.redhat.com/show_bug.cgi?id=1364132).
 After upgrade and restart of the vm, this should work, and
you should be able to connect to the console.
3. On the hosted engine host look at the content of:
/var/run/ovirt-hosted-engine-ha/vm.conf
Does it contain a graphical device? Or a console device?

Thanks,
Jenny


On Mon, Apr 10, 2017 at 11:44 AM, Martin Sivak wrote:

Hi,

we are working on that, we can only ask for patience now,
Jenny was trying to find out what happened and how to fix it
all week.

Best regards

--
Martin Sivak
SLA / oVirt

On Mon, Apr 10, 2017 at 9:38 AM, Rafał Wojciechowski wrote:

hi,

I have a similar issue (I also started my own mail thread)
after upgrading 4.0 to 4.1

Version 4.1.1.8-1.el7.centos (before it was some 4.1.0.x
or similar - the update did not fix it)

To run a VM I have to set Headless mode in the Console tab -
without it I get a libvirtd segfault (logs attached in my
mail thread).

So I am able to run VMs only without a console - do you
also have to set headless before running a VM?

I noticed that libvirt-daemon was also upgraded to version 2.0
during the ovirt upgrade - I don't think that 4.1 was
not tested with such a libvirtd upgrade... but maybe?

Regards,
Rafal Wojciechowski

W dniu 10.04.2017 o 08:24, Arsène Gschwind pisze:


Hi,

After updating to the oVirt 4.1.1 async release I can
confirm that the problem still persists.

Rgds,
Arsène


On 03/25/2017 12:25 PM, Arsène Gschwind wrote:


Hi,

After updating to 4.1.1 I'm observing the same
behavior, HE without any console.
Even when trying to edit the HE VM it doesn't change
anything; Graphics stays at NONE.

Thanks for any Help.

Regards,
Arsène

On 03/24/2017 03:11 PM, Nelson Lameiras wrote:

Hello,

When upgrading my test setup from 4.0 to 4.1, my
engine vm lost its console (from SPICE to None in GUI)

My test setup :
2 manually built hosts using centos 7.3, ovirt 4.1
1 manually built hosted engine centos 7.3, oVirt
4.1.0.4-el7, accessible with SPICE console via GUI

Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-19 Thread Martin Sivak
Hi Rafal,

segfaulting libvirt seems to be something different. Can you please start a
separate thread about it or open a bugzilla?

Best regards

Martin Sivak

On Tue, Apr 18, 2017 at 4:52 PM, Rafał Wojciechowski <
i...@rafalwojciechowski.pl> wrote:

> hi,
>
> I applied it for 
> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/
> content, then rebooted, reinstalled host in ovirt engine web administrator,
> again rebooted and tried again but I have still the same issue with
> libvirtd segfault
>
>
> Regards,
> Rafal Wojciechowski
>
>
> W dniu 18.04.2017 o 14:20, Evgenia Tokar pisze:
>
> Hi,
>
> Thanks for bringing the issue to our attention and for answering the
> questions.
>
> There is a patch to fix this: https://gerrit.ovirt.org/#/c/75515/
> After applying the patch a restart to the hosted engine vm is required
> (not just the engine).
>
> Let me know if you are still experiencing issues with this.
>
> Thanks,
> Jenny
>
>
> On Wed, Apr 12, 2017 at 8:05 PM, Rafał Wojciechowski <
> i...@rafalwojciechowski.pl> wrote:
>
>> hi,
>>
>> I will answer also. however I am using single hypervisor so without ha
>> and I have no performed steps:
>> https://www.ovirt.org/documentation/how-to/hosted-engine/
>>
>> 1. yes - however I have to start in headless mode so it is quite obvius
>> if I am trying to start with spice/vnc console I am getting segfault from
>> libvirtd
>> 2. as above
>> 3. as above
>>
>> W dniu 12.04.2017 o 14:12, Arsène Gschwind pisze:
>>
>> Hi all,
>>
>> I will answer your questions:
>>
>> 1. definitively yes
>> 2. the command hosted-engine --console works well and I'm able to connect.
>> 3. Here are the device entries
>>
>> devices={device:qxl,alias:video0,type:video,deviceId:5210a3c
>> 3-9cc4-4aed-90c6-432dd2d37c46,address:{slot:0x02,
>> bus:0x00,domain:0x,type:pci,function:0x0}}
>> devices={device:console,type:console}
>>
>> Thanks and rgds,
>> Arsène
>> On 04/12/2017 10:53 AM, Evgenia Tokar wrote:
>>
>> Hi all,
>>
>> I have managed to reproduce this issue and opened a bug for tracking it:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1441570 .
>>
>> There is no solution yet, but I would appreciate if any who encountered
>> this issue will answer some questions:
>> 1. Is the console button greyed out in the UI?
>> 2. On the hosted engine host, does the command hosted-engine --console
>> fails?
>>  If it fails, try upgrading ovirt-hosted-engine-ha on the hosted
>> engine host. We had a bug related to this issue that was fixed (
>> https://bugzilla.redhat.com/show_bug.cgi?id=1364132).
>>  After upgrade and restart of the vm, this should work, and you
>> should be able to connect to the console.
>> 3. On the hosted engine host look at the content of:
>> /var/run/ovirt-hosted-engine-ha/vm.conf
>> Does it contain a graphical device? Or a console device?
>>
>> Thanks,
>> Jenny
>>
>>
>> On Mon, Apr 10, 2017 at 11:44 AM, Martin Sivak  wrote:
>>
>>> Hi,
>>>
>>> we are working on that, we can only ask for patience now, Jenny was
>>> trying to find out what happened and how to fix it all week.
>>>
>>> Best regards
>>>
>>> --
>>> Martin Sivak
>>> SLA / oVirt
>>>
>>> On Mon, Apr 10, 2017 at 9:38 AM, Rafał Wojciechowski <
>>> i...@rafalwojciechowski.pl> wrote:
>>>
 hi,

 I have similiar issue(I also started my mailthread) after upgrade 4.0
 to 4.1

 Version 4.1.1.8-1.el7.centos (before it was some 4.1.0.x or similiar -
 update not fixed it)
 to run VM I have to set in Console tab Headless mode - without it I got
 libvirtd segfault(logs attached in my mailthread).

 So I am able to run VMs only without Console - do you also have to set
 headless before run VM?

 I noticed that libvirt-daemon was also upgraded to 2.0 version during
 ovirt upgrade - I dont think that 4.1 was not testes due to such libvirtd
 upgrade... but maybe?

 Regards,
 Rafal Wojciechowski

 W dniu 10.04.2017 o 08:24, Arsène Gschwind pisze:

 Hi,

 After updating to oVirt 4.1.1 Async release i can confirm that the
 problem still persists.

 Rgds,
 Arsène

 On 03/25/2017 12:25 PM, Arsène Gschwind wrote:

 Hi,
 After updating to 4.1.1 i'm observing the same behavior, HE without any
 console.
 Even when trying to edit the HE VMs it doesn't change anything,
 Graphics stays to NONE.

 Thanks for any Help.

 Regards,
 Arsène

 On 03/24/2017 03:11 PM, Nelson Lameiras wrote:

 Hello,

 When upgrading my test setup from 4.0 to 4.1, my engine vm lost it's
 console (from SPICE to None in GUI)

 My test setup :
 2 manually built hosts using centos 7.3, ovirt 4.1
 1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7,
 accessible with SPICE console via GUI

 I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on engine :
 - yum update
 - engine-setup
 - reboot 

Re: [ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread TranceWorldLogic .
Hi,

It is working fine.
But I want to disable migration; the placement policy alone didn't solve my purpose.

What else do I have to do to disable migration?

Thanks,
~Rohit


On Wed, Apr 19, 2017 at 1:36 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Thanks will try and let you know,
> ~Rohit
>
> On Wed, Apr 19, 2017 at 1:18 PM, Juan Hernández 
> wrote:
>
>> On 04/19/2017 09:34 AM, Juan Hernández wrote:
>> > On 04/19/2017 08:41 AM, TranceWorldLogic . wrote:
>> >> Hi,
>> >>
>> >> I was trying to create VM on specific HOST using python sdk as shown
>> below.
>> >>
>> >> -- Code ---
>> >> vm = vms_service.add(  
>> >> host = types.Host(
>> >> name = "host-01",
>> >> ),
>> >>   )
>> >> -- End Code ---
>> >>
>> >> It created VM successfully, but when I see in ovirt GUI I saw that VM
>> is
>> >> not bonded with specific host.
>> >>
>> >> Ovirt GUI:
>> >> Virtual Machines -> click on VM -> "Edit" button -> In advance menu ->
>> >> "Host" tab
>> >> Start Running On:
>> >>o  Any Host in Cluster  <== This option got selected
>> >>o  Specific Host(s)   <== *I want this option to select.*
>> >>
>> >> Please help me to bind VM to specific Host via Python SDK
>> >>
>> >
>> > The Vm.host attribute is used only to indicate in what host is the VM
>> > currently running.
>> >
>> > To pin the VM to a set of hosts you have to use Vm.placement_policy, as
>> > described here:
>> >
>> >
>> > http://ovirt.github.io/ovirt-engine-api-model/4.1/#types/vm/
>> attributes/placement_policy
>> >
>> > With the Python SDK it should be something like this:
>> >
>> >   vm = vms_service.add(
>> > vm=types.Vm(
>> >   ...
>> >   placement_policy=types.PlacementPolicy(
>> > hosts=[
>> >   types.Host(name='host-01')
>> > ]
>> >   )
>> > )
>> >   )
>> >
>>
>> Sorry, the name of the type is incorrect, should be
>> 'types.VmPlacementPolicy'. So the complete example should be like this:
>>
>>   vm = vms_service.add(
>> vm=types.Vm(
>>   ...
>>   placement_policy=types.VmPlacementPolicy(
>> hosts=[
>>   types.Host(name='host-01')
>> ]
>>   )
>> )
>>   )
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread Juan Hernández
On 04/19/2017 01:27 PM, TranceWorldLogic . wrote:
> Hi,
> 
> It is working fine.
> But I want to disable migration policy. It didn't solve my purpose.
> 
> What else I have to do to disable migration ?
> 

Not sure what exactly you want to achieve. Can you elaborate a bit?

If, in addition to pinning the VM to a host, you also want to disable
migration, you can use the Vm.placement_policy.affinity attribute:

  vm = vms_service.add(
vm=types.Vm(
  ...
  placement_policy=types.VmPlacementPolicy(
hosts=[
  types.Host(name='host-01')
],
affinity=types.VmAffinity.PINNED
  )
)
  )

Martin, I think you can explain better than me what the values of the
VmAffinity enum mean, and what their relationship to pinning is. It would
be nice to have that documented in the specification of the API.
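
For completeness, a minimal end-to-end sketch; the engine URL, credentials,
VM, cluster and template names are placeholders to adapt to your environment:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder engine URL
    username='admin@internal',
    password='***',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()

# Create the VM pinned to host-01 and with migration disabled.
vm = vms_service.add(
    vm=types.Vm(
        name='myvm',                               # placeholder VM name
        cluster=types.Cluster(name='Default'),     # placeholder cluster
        template=types.Template(name='Blank'),     # placeholder template
        placement_policy=types.VmPlacementPolicy(
            hosts=[types.Host(name='host-01')],
            affinity=types.VmAffinity.PINNED,
        ),
    ),
)
connection.close()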

> 
> 
> On Wed, Apr 19, 2017 at 1:36 PM, TranceWorldLogic . wrote:
> 
> Thanks will try and let you know,
> ~Rohit
> 
> On Wed, Apr 19, 2017 at 1:18 PM, Juan Hernández wrote:
> 
> On 04/19/2017 09:34 AM, Juan Hernández wrote:
> > On 04/19/2017 08:41 AM, TranceWorldLogic . wrote:
> >> Hi,
> >>
> >> I was trying to create VM on specific HOST using python sdk
> as shown below.
> >>
> >> -- Code ---
> >> vm = vms_service.add(  
> >> host = types.Host(
> >> name = "host-01",
> >> ),
> >>   )
> >> -- End Code ---
> >>
> >> It created VM successfully, but when I see in ovirt GUI I saw
> that VM is
> >> not bonded with specific host.
> >>
> >> Ovirt GUI:
> >> Virtual Machines -> click on VM -> "Edit" button -> In
> advance menu ->
> >> "Host" tab
> >> Start Running On:
> >>o  Any Host in Cluster  <== This option got selected
> >>o  Specific Host(s)   <== *I want this option to select.*
> >>
> >> Please help me to bind VM to specific Host via Python SDK
> >>
> >
> > The Vm.host attribute is used only to indicate in what host is
> the VM
> > currently running.
> >
> > To pin the VM to a set of hosts you have to use
> Vm.placement_policy, as
> > described here:
> >
> >
> >
> 
> http://ovirt.github.io/ovirt-engine-api-model/4.1/#types/vm/attributes/placement_policy
> 
> 
> >
> > With the Python SDK it should be something like this:
> >
> >   vm = vms_service.add(
> > vm=types.Vm(
> >   ...
> >   placement_policy=types.PlacementPolicy(
> > hosts=[
> >   types.Host(name='host-01')
> > ]
> >   )
> > )
> >   )
> >
> 
> Sorry, the name of the type is incorrect, should be
> 'types.VmPlacementPolicy'. So the complete example should be
> like this:
> 
>   vm = vms_service.add(
> vm=types.Vm(
>   ...
>   placement_policy=types.VmPlacementPolicy(
> hosts=[
>   types.Host(name='host-01')
> ]
>   )
> )
>   )
> 
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about Huge Pages

2017-04-19 Thread Gianluca Cecchi
On Wed, Apr 19, 2017 at 8:03 AM, Michal Skrivanek 
wrote:

>
> Why not reserve regular hugepages for VMs on boot?


Do you mean at hypervisor level? In that case it is what I normally do
for physical servers where I install Oracle RDBMS


> then you can use
> it with vdsm hook for that Oracle VM.


Which hook are you referring to?
This one:
http://www.ovirt.org/develop/developer-guide/vdsm/hook/hugepages/ ?
If so, is it still current? In the sense that I need to mount the
hugetlbfs virtual file system at host level?
The hook description is not very detailed...
Normally, if I want the oracle user to be able to use huge pages on a physical server, I
have to specify

#
# Huge pages
#
vm.hugetlb_shm_group = 2000
# 18GB allocatable
vm.nr_hugepages = 9216
#

where 2000 is the group id of the dba group, the main group of the oracle user

How does this map to virtualization?
Eg:
1) Should vm.hugetlb_shm_group on the hypervisor side be set to the group of
the qemu user, as the qemu-kvm process runs with it?
2) Then do I have to set, VM by VM, the hugepages=xxx value in the hook, and
will that bypass the sysctl.conf configuration in the guest?
3) I presume I have to set the vm.hugetlb_shm_group parameter at guest
level

Thanks,
Gianluca



> It improves VM performance in
> general, the only drawback is less flexibility since that memory can't
> be used by others unless they specifically ask for hugepages.
>

This seems to confirm that I have to set a static sysctl.conf entry at
hypervisor level such as
vm.nr_hugepages = 


> Also, I suppose you disable KSM, and I'm not sure about ballooning,
> unless you need it I'd disable it too.
>

I kept the defaults for the moment, which I suppose should be

a) KSM disabled

ksm is configured to start by default as usual, but ksmtuned has
been disabled:

[g.cecchi@ov300 ~]$ sudo systemctl status ksm
● ksm.service - Kernel Samepage Merging
   Loaded: loaded (/usr/lib/systemd/system/ksm.service; enabled; vendor
preset: enabled)
   Active: active (exited) since Tue 2017-04-11 11:07:28 CEST; 1 weeks 1
days ago
  Process: 976 ExecStart=/usr/libexec/ksmctl start (code=exited,
status=0/SUCCESS)
 Main PID: 976 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/ksm.service

Apr 11 11:07:28 ov300.datacenter.polimi.it systemd[1]: Starting Kernel
Samepage Merging...
Apr 11 11:07:28 ov300.datacenter.polimi.it systemd[1]: Started Kernel
Samepage Merging.

[g.cecchi@ov300 ~]$ sudo systemctl status ksmtuned
● ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon
   Loaded: loaded (/usr/lib/systemd/system/ksmtuned.service; disabled;
vendor preset: disabled)
   Active: inactive (dead)
[g.cecchi@ov300 ~]$


b) ballooning enabled for a newly created VM unless I explicitly disable it
(at least I see this happens in 4.1.1)

What should I do about a) and b) so they do not interfere with huge pages?


> The hook is being improved right now in master, but it should be
> usable in stable too.
>
>
I will be happy to test and verify and contribute to its description, as
soon as I understand its usage

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread TranceWorldLogic .
Hi Juan,

It is working fine. affinity and hosts have solved my problem.

Thanks,
~Rohit


On Wed, Apr 19, 2017 at 5:16 PM, Juan Hernández  wrote:

> On 04/19/2017 01:27 PM, TranceWorldLogic . wrote:
> > Hi,
> >
> > It is working fine.
> > But I want to disable migration policy. It didn't solve my purpose.
> >
> > What else I have to do to disable migration ?
> >
>
> Not sure what exactly you want to achieve. Can you elaborate a bit?
>
> If in addition to pin the VM to a host you also want to disable
> migration, you can use the Vm.placement_policy.affinity attribute:
>
>   vm = vms_service.add(
> vm=types.Vm(
>   ...
>   placement_policy=types.VmPlacementPolicy(
> hosts=[
>   types.Host(name='host-01')
> ],
> affinity=types.VmAffinity.PINNED
>   )
> )
>   )
>
> Martin, I think that you can explain better than me what are the
> meanings of the values of the VmAffinity enum, and what is it
> relationship to pinning. Would be nice to have that documented in the
> specification of the API.
>
> >
> >
> > On Wed, Apr 19, 2017 at 1:36 PM, TranceWorldLogic .
> > > wrote:
> >
> > Thanks will try and let you know,
> > ~Rohit
> >
> > On Wed, Apr 19, 2017 at 1:18 PM, Juan Hernández  > > wrote:
> >
> > On 04/19/2017 09:34 AM, Juan Hernández wrote:
> > > On 04/19/2017 08:41 AM, TranceWorldLogic . wrote:
> > >> Hi,
> > >>
> > >> I was trying to create VM on specific HOST using python sdk
> > as shown below.
> > >>
> > >> -- Code ---
> > >> vm = vms_service.add(  
> > >> host = types.Host(
> > >> name = "host-01",
> > >> ),
> > >>   )
> > >> -- End Code ---
> > >>
> > >> It created VM successfully, but when I see in ovirt GUI I saw
> > that VM is
> > >> not bonded with specific host.
> > >>
> > >> Ovirt GUI:
> > >> Virtual Machines -> click on VM -> "Edit" button -> In
> > advance menu ->
> > >> "Host" tab
> > >> Start Running On:
> > >>o  Any Host in Cluster  <== This option got selected
> > >>o  Specific Host(s)   <== *I want this option to select.*
> > >>
> > >> Please help me to bind VM to specific Host via Python SDK
> > >>
> > >
> > > The Vm.host attribute is used only to indicate in what host is
> > the VM
> > > currently running.
> > >
> > > To pin the VM to a set of hosts you have to use
> > Vm.placement_policy, as
> > > described here:
> > >
> > >
> > >
> > http://ovirt.github.io/ovirt-engine-api-model/4.1/#types/
> vm/attributes/placement_policy
> >  vm/attributes/placement_policy>
> > >
> > > With the Python SDK it should be something like this:
> > >
> > >   vm = vms_service.add(
> > > vm=types.Vm(
> > >   ...
> > >   placement_policy=types.PlacementPolicy(
> > > hosts=[
> > >   types.Host(name='host-01')
> > > ]
> > >   )
> > > )
> > >   )
> > >
> >
> > Sorry, the name of the type is incorrect, should be
> > 'types.VmPlacementPolicy'. So the complete example should be
> > like this:
> >
> >   vm = vms_service.add(
> > vm=types.Vm(
> >   ...
> >   placement_policy=types.VmPlacementPolicy(
> > hosts=[
> >   types.Host(name='host-01')
> > ]
> >   )
> > )
> >   )
> >
> >
> >
>
>
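
For the record, here is the combination that ended up working for me, with my
own (unverified) reading of the VmAffinity values in the comments; the engine
URL, credentials and names below are placeholders:

  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  # placeholder engine URL and credentials
  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',
      username='admin@internal',
      password='password',
      ca_file='ca.pem',
  )
  vms_service = connection.system_service().vms_service()

  vm = vms_service.add(
      vm=types.Vm(
          name='myvm',                              # placeholder VM name
          cluster=types.Cluster(name='mycluster'),  # placeholder cluster
          template=types.Template(name='Blank'),
          placement_policy=types.VmPlacementPolicy(
              hosts=[types.Host(name='host-01')],
              # My understanding of the VmAffinity values (please correct me):
              #   MIGRATABLE      - the engine may migrate the VM automatically
              #   USER_MIGRATABLE - only manual migration is allowed
              #   PINNED          - no migration; the VM runs only on the listed hosts
              affinity=types.VmAffinity.PINNED,
          ),
      )
  )

  connection.close()
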
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread Juan Hernández
On 04/19/2017 08:41 AM, TranceWorldLogic . wrote:
> Hi,
> 
> I was trying to create VM on specific HOST using python sdk as shown below.
> 
> -- Code ---
> vm = vms_service.add(  
> host = types.Host(
> name = "host-01",
> ),
>   )
> -- End Code ---
> 
> It created VM successfully, but when I see in ovirt GUI I saw that VM is
> not bonded with specific host.
> 
> Ovirt GUI:
> Virtual Machines -> click on VM -> "Edit" button -> In advance menu ->
> "Host" tab
> Start Running On:
>o  Any Host in Cluster  <== This option got selected
>o  Specific Host(s)   <== *I want this option to select.*
> 
> Please help me to bind VM to specific Host via Python SDK
> 

The Vm.host attribute is used only to indicate in what host is the VM
currently running.

To pin the VM to a set of hosts you have to use Vm.placement_policy, as
described here:


http://ovirt.github.io/ovirt-engine-api-model/4.1/#types/vm/attributes/placement_policy

With the Python SDK it should be something like this:

  vm = vms_service.add(
vm=types.Vm(
  ...
  placement_policy=types.PlacementPolicy(
hosts=[
  types.Host(name='host-01')
]
  )
)
  )
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread TranceWorldLogic .
Thanks will try and let you know,
~Rohit

On Wed, Apr 19, 2017 at 1:18 PM, Juan Hernández  wrote:

> On 04/19/2017 09:34 AM, Juan Hernández wrote:
> > On 04/19/2017 08:41 AM, TranceWorldLogic . wrote:
> >> Hi,
> >>
> >> I was trying to create VM on specific HOST using python sdk as shown
> below.
> >>
> >> -- Code ---
> >> vm = vms_service.add(  
> >> host = types.Host(
> >> name = "host-01",
> >> ),
> >>   )
> >> -- End Code ---
> >>
> >> It created VM successfully, but when I see in ovirt GUI I saw that VM is
> >> not bonded with specific host.
> >>
> >> Ovirt GUI:
> >> Virtual Machines -> click on VM -> "Edit" button -> In advance menu ->
> >> "Host" tab
> >> Start Running On:
> >>o  Any Host in Cluster  <== This option got selected
> >>o  Specific Host(s)   <== *I want this option to select.*
> >>
> >> Please help me to bind VM to specific Host via Python SDK
> >>
> >
> > The Vm.host attribute is used only to indicate in what host is the VM
> > currently running.
> >
> > To pin the VM to a set of hosts you have to use Vm.placement_policy, as
> > described here:
> >
> >
> > http://ovirt.github.io/ovirt-engine-api-model/4.1/#types/
> vm/attributes/placement_policy
> >
> > With the Python SDK it should be something like this:
> >
> >   vm = vms_service.add(
> > vm=types.Vm(
> >   ...
> >   placement_policy=types.PlacementPolicy(
> > hosts=[
> >   types.Host(name='host-01')
> > ]
> >   )
> > )
> >   )
> >
>
> Sorry, the name of the type is incorrect, should be
> 'types.VmPlacementPolicy'. So the complete example should be like this:
>
>   vm = vms_service.add(
> vm=types.Vm(
>   ...
>   placement_policy=types.VmPlacementPolicy(
> hosts=[
>   types.Host(name='host-01')
> ]
>   )
> )
>   )
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread TranceWorldLogic .
Hi,

I was trying to create a VM on a specific host using the Python SDK, as shown below.

-- Code ---
vm = vms_service.add(  
host = types.Host(
name = "host-01",
),
  )
-- End Code ---

It created the VM successfully, but when I look in the oVirt GUI I see that the
VM is not bound to a specific host.

oVirt GUI:
Virtual Machines -> click on VM -> "Edit" button -> advanced options ->
"Host" tab
Start Running On:
   o  Any Host in Cluster  <== This option got selected
   o  Specific Host(s)   <== *I want this option to be selected.*

Please help me bind the VM to a specific host via the Python SDK.

Thanks,
~Rohit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Yaniv Kaul
On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel  wrote:

> Was reading over this post to the group about storage options.  I am more
> of a Windows guy as opposed to a Linux guy, but am learning quickly and had
> a question.  You said that LACP will not provide extra bandwidth
> (especially with NFS).  Does the same hold true for GlusterFS?  We are
> currently using GlusterFS for the file replication piece.  Does GlusterFS
> take advantage of any multipathing?
>
> Thanks
>
>

I'd expect Gluster to take advantage of LACP, since it replicates to
multiple peers (as opposed to NFS). See [1].
Y.

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
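
As a rough illustration of the kind of bond that helps here (interface names
are hypothetical, and on an oVirt host you would normally build the bond via
Setup Host Networks rather than by editing files by hand):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  # 802.3ad (LACP); layer3+4 hashing lets the connections to the different
  # Gluster peers spread across the physical links
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
  BOOTPROTO=none
  ONBOOT=yes

  # each physical NIC is then enslaved to the bond, e.g. ifcfg-em1:
  DEVICE=em1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes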


>
>
> -Original Message-
> From: Yaniv Kaul 
> To: Charles Tassell 
> Cc: users 
> Date: Sun, 26 Mar 2017 10:40:00 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell 
> wrote:
>>
>> Hi Everyone,
>>
>>   I'm about to setup an oVirt cluster with two hosts hitting a Linux
>> storage server.  Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second.  Since we will only be
>> using a single storage server I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support,
> which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see the difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but in
> practice, most don't get to those limits where it matters really.
>
>
>>
>>   Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth for
>> storage traffic if that makes a difference.
>
>
> LACP often (especially with NFS) does not provide extra bandwidth, as
> the (single) NFS connection tends to stick to a single physical link.
> It's one of the reasons I personally prefer iSCSI with multipathing.
>
>
>>
>>   I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for.  Has anyone had
>> any particularly bad experiences with a particular storage option?  We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least that you are aware of all of them (does the storage server have a
> redundant power supply? connected to two power sources? Of course in some
> scenarios this is overkill and perhaps not practical, but you should be
> aware of your weak spots).
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Python-SDK][Ovirt-4.0] Create VM on specific Host.

2017-04-19 Thread Juan Hernández
On 04/19/2017 09:34 AM, Juan Hernández wrote:
> On 04/19/2017 08:41 AM, TranceWorldLogic . wrote:
>> Hi,
>>
>> I was trying to create VM on specific HOST using python sdk as shown below.
>>
>> -- Code ---
>> vm = vms_service.add(  
>> host = types.Host(
>> name = "host-01",
>> ),
>>   )
>> -- End Code ---
>>
>> It created VM successfully, but when I see in ovirt GUI I saw that VM is
>> not bonded with specific host.
>>
>> Ovirt GUI:
>> Virtual Machines -> click on VM -> "Edit" button -> In advance menu ->
>> "Host" tab
>> Start Running On:
>>o  Any Host in Cluster  <== This option got selected
>>o  Specific Host(s)   <== *I want this option to select.*
>>
>> Please help me to bind VM to specific Host via Python SDK
>>
> 
> The Vm.host attribute is used only to indicate in what host is the VM
> currently running.
> 
> To pin the VM to a set of hosts you have to use Vm.placement_policy, as
> described here:
> 
> 
> http://ovirt.github.io/ovirt-engine-api-model/4.1/#types/vm/attributes/placement_policy
> 
> With the Python SDK it should be something like this:
> 
>   vm = vms_service.add(
> vm=types.Vm(
>   ...
>   placement_policy=types.PlacementPolicy(
> hosts=[
>   types.Host(name='host-01')
> ]
>   )
> )
>   )
> 

Sorry, the name of the type is incorrect, should be
'types.VmPlacementPolicy'. So the complete example should be like this:

  vm = vms_service.add(
vm=types.Vm(
  ...
  placement_policy=types.VmPlacementPolicy(
hosts=[
  types.Host(name='host-01')
]
  )
)
  )

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem adding Labels to Bonds on big clusters

2017-04-19 Thread Yaniv Kaul
On Wed, Apr 19, 2017 at 9:02 AM, Marcel Hanke  wrote:

> Hi,
> I currently have some trouble adding a label to a bond on an host.
> The problem seems to come from the number of Vlans (108 in that label) and
> the
> number of hosts (160 in 4 cluster). As far as I can tell the process gets
> finished on the node (all the network configurations are there in the log),
> but ovirt seems to fail sometime after the node has nothing to do anymore.
> The
> Logs gives me a timeout error for the configuration command.
>
> Does anyone know how to encrease the timeout or fix that problem an other
> way?
>

For the time being, I suggest using the API. We have an open bug on this
(couldn't find it right now).
Y.
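
Something along these lines should work with the Python SDK -- a rough sketch
only: I'm assuming the host NIC service exposes network_labels_service(), and
the engine URL, host, bond and label names below are placeholders:

  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',
      username='admin@internal',
      password='password',
      ca_file='ca.pem',
  )

  # locate the host and its bond interface
  hosts_service = connection.system_service().hosts_service()
  host = hosts_service.list(search='name=host1')[0]
  nics_service = hosts_service.host_service(host.id).nics_service()
  bond = next(nic for nic in nics_service.list() if nic.name == 'bond0')

  # attach the label; the engine then configures all the VLAN networks
  # carrying that label on this bond
  labels_service = nics_service.nic_service(bond.id).network_labels_service()
  labels_service.add(types.NetworkLabel(id='my_label'))

  connection.close()

If it is the engine-side command timeout that bites you, I believe vdsTimeout
in engine-config is the relevant knob, but please double-check before touching
it in production.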


>
> The exact same setup with only 80 vlan and 120 hosts in 3 cluster is
> running
> fine.
>
> Cheers Marcel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem adding Labels to Bonds on big clusters

2017-04-19 Thread Marcel Hanke
Hi,
I currently have some trouble adding a label to a bond on a host.
The problem seems to come from the number of VLANs (108 in that label) and the
number of hosts (160 across 4 clusters). As far as I can tell the process
finishes on the node (all the network configurations are there in the log),
but oVirt seems to fail some time after the node has nothing left to do. The
logs give me a timeout error for the configuration command.

Does anyone know how to increase the timeout or fix that problem another way?

The exact same setup with only 80 VLANs and 120 hosts in 3 clusters is running
fine.

Cheers Marcel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about Huge Pages

2017-04-19 Thread Michal Skrivanek
> On 18 Apr 2017, at 18:03, Gianluca Cecchi  wrote:
>
> Hello,
> I'm testing virtualization of some Oracle servers.
> I have 4.1.1 with CentOS 7.3 servers as hypervisors.
> Typically on physical Oracle servers I configure huge pages for the Oracle
> memory areas.
> In particular I disable Transparent Huge Pages, because they are known to
> conflict with Oracle performance, both in RAC and in standalone
> configurations.
> On RHEL systems I set the "transparent_hugepage=never" boot parameter,
> while on Oracle Linux UEK kernels it is already disabled by default.
> I notice that in CentOS 7.3 transparent huge pages are enabled by default:
>
> [root@ov300 ~]# cat /proc/meminfo | grep -i huge
> AnonHugePages:  17006592 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> [root@ov300 ~]#
>
> I'm going to configure a VM with 64 GB of RAM and an Oracle RDBMS that
> would have a 16 GB SGA.
> I suspect that I could have problems if I don't change the configuration at
> the hypervisor level...
>
> What do you think about this subject?
> Is there any drawback if I manually configure the hypervisors to boot with  
> the "transparent_hugepage=never" boot parameter?

Why not reserve regular huge pages for VMs at boot? Then you can use
them with the VDSM hook for that Oracle VM. It improves VM performance in
general; the only drawback is less flexibility, since that memory can't
be used by other processes unless they specifically ask for huge pages.
Also, I suppose you disable KSM; I'm not sure about ballooning, but
unless you need it I'd disable it too.
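
As a rough sketch of what I mean (the page count is just an example -- size it
for your SGA and test on a non-production host first):

  # reserve 8192 x 2 MiB = 16 GiB of huge pages at boot and turn off THP
  grubby --update-kernel=ALL --args="hugepages=8192 transparent_hugepage=never"
  reboot

  # after the reboot the reservation shows up in /proc/meminfo
  grep -i huge /proc/meminfo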

The hook is being improved right now in master, but it should be
usable in stable too.

Thanks,
michal
>
> Thanks in advance,
> Gianluca
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Nelson Lameiras
Hi yaniv, 

Thanks for your response. My answers below. 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Yaniv Kaul"  
To: "Nelson Lameiras"  
Cc: "ovirt users"  
Sent: Wednesday, April 19, 2017 9:13:27 AM 
Subject: Re: [ovirt-users] massive simultaneous vms migrations ? 



On Tue, Apr 18, 2017 at 7:48 PM, Nelson Lameiras < 
nelson.lamei...@lyra-network.com > wrote: 



hello, 

When putting a host into "maintenance mode", all VMs start migrating to other
hosts.

We have some hosts with 60 VMs, so this creates 60 VMs migrating
simultaneously.
Some VMs are under such heavy load that migration often fails (our guess is
that massive simultaneous migration does not help migration convergence),
even with the "suspend workload if needed" migration policy.

- Does oVirt really launch 60 simultaneous migrations, or is there a queuing
system?
- If there is a queuing system, is there a way to configure a maximum number of
simultaneous migrations?

I did see a "migration bandwidth limit", but that is not quite what we are
looking for.



What migration policy are you using? 
-- "suspend workkiad if needed" 
Are you using a dedicated migration network, or the ovirtmgmt network? 
-- ovirtmgmt network 





my setup: 
ovirt-engine +hosted engine 4.1.1 
hosts : centos 7.3 fully updated. 

for full context: twice in the past, when trying to put a host into
maintenance, the host stopped responding during the massive migrations and was
fenced by the engine. It's still unclear why the host stopped responding, but
we suspect that migrating 60+ VMs simultaneously puts a heavy strain on
storage. So we would like to control the migration process better in order to
better understand what's happening. This scenario is "production only", since
our labs do not contain nearly as many VMs with such heavy loads. So rather
than trying to reproduce it, we are trying to avoid it ;)


If you could open a bug with the relevant logs on the host not responding,
that'd be great.
-- Too late, we made the hasty mistake of reinstalling the server (because of
other reasons), so all traces are lost. Next time I will make sure to keep the
traces.
Live migration doesn't touch the storage (disks are not moving anywhere), but
it does stress the network. I doubt it, but perhaps you over-saturate the
ovirtmgmt network.
-- This makes sense. It's a long shot though. Maybe it would be possible to
create a VLAN dedicated to migrations on the same physical network and use QoS
to always reserve some bandwidth for oVirt administration? (maybe opening a can
of worms here)
Y. 



cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 

nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Nelson Lameiras
hello Pavel,

Thanks for your answer.
I only found this parameter in /etc/vdsm/vdsm.conf.rpmnew (??)

The parameter is commented out with a value of 2, so my guess is that it is not
used... This brings a few more questions:

- Since the parameter is commented out, the default value must be used... can
we be sure that 2 is the default value?
- I find it strange that migrations would be limited to 2; I have the feeling
that more than two are being migrated simultaneously (but maybe I'm wrong).
How can I be sure?
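
If I end up setting it explicitly, my understanding (please correct me if the
section name is wrong) is that it goes into /etc/vdsm/vdsm.conf on each host,
something like:

  # /etc/vdsm/vdsm.conf -- a commented-out line means the built-in default (2)
  # applies; uncomment and change it to raise or lower the limit
  [vars]
  max_outgoing_migrations = 2

  # vdsm has to be restarted to pick up the change
  systemctl restart vdsmd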

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Pavel Gashev"  
To: users@ovirt.org, "nelson lameiras"  
Sent: Tuesday, April 18, 2017 7:16:08 PM 
Subject: Re: [ovirt-users] massive simultaneous vms migrations ? 

VDSM has the following config option: 

# Maximum concurrent outgoing migrations 
# max_outgoing_migrations = 2 

On Tue, 2017-04-18 at 18:48 +0200, Nelson Lameiras wrote: 



hello, 

When putting a host on "maintenance mode", all vms start migrating to other 
hosts. 

We have some hosts that have 60 vms. So this will create a 60 vms migrating 
simultaneously. 
Some vms are under so much heavy loads that migration fails often (our guess is 
that massive simultaneous migrations does not help migration convergence) - 
even with "suspend workload if needed" migraton policy. 

- Does oVirt really launches 60 simultaneous migrations or is there a queuing 
system ? 
- If there is a queuing system, is there a way to configure a maximum number of 
simultaneous migrations ? 

I did see a "migration bandwidth limit", but this is quite what we are looking 
for. 

my setup: 
ovirt-engine +hosted engine 4.1.1 
hosts : centos 7.3 fully updated. 

for full context to understand this question : 2 times in the past, when trying 
to put a host in maintenance, host stopped responding during massive migrations 
and was fenced by engine. It's still unclear why host stopped responding, but 
we think that migrating 60+ vms simultaneously puts a heavy strain on storage ? 
So we would like to better control migration process in order to better 
understand what's happening. This scenario is "production only" since our labs 
do not contain nearly as much vm with such heavy loads. So rather than trying 
to reproduce, we are trying to avoid ;) 

cordialement, regards, 




Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 









Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 

___
Users mailing list Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Nelson Lameiras
I'm using the ovirtmgmt network for migrations.

I'm getting the vibe that using a dedicated network for migration is "good
practice"...

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

- Original Message -
From: "Konstantin Shalygin" 
To: users@ovirt.org, "Nelson Lameiras" 
Sent: Wednesday, April 19, 2017 3:15:29 AM
Subject: Re: Re: [ovirt-users] massive simultaneous vms migrations ?

Hello.

What is your Migration Network?


> We have some hosts that have 60 vms. So this will create a 60 vms migrating 
> simultaneously.
> Some vms are under so much heavy loads that migration fails often (our guess 
> is that massive simultaneous migrations does not help migration convergence) 
> - even with "suspend workload if needed" migraton policy.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Konstantin Shalygin

I mean, what is your hardware? 1G? 40G?


On 04/19/2017 04:16 PM, Nelson Lameiras wrote:

I'm using the ovirtmgmt network for migrations.

I'm getting the vibe that using a dedicated network for migration is "good
practice"...

cordialement, regards,


Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com

www.lyra-network.com | www.payzen.eu





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

- Original Message -
From: "Konstantin Shalygin" 
To: users@ovirt.org, "Nelson Lameiras" 
Sent: Wednesday, April 19, 2017 3:15:29 AM
Subject: Re: Re: [ovirt-users] massive simultaneous vms migrations ?

Hello.

What is your Migration Network?



We have some hosts that have 60 vms. So this will create a 60 vms migrating 
simultaneously.
Some vms are under so much heavy loads that migration fails often (our guess is that 
massive simultaneous migrations does not help migration convergence) - even with 
"suspend workload if needed" migraton policy.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users