Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-20 Thread knarra

On 04/20/2017 10:48 PM, Jamie Lawrence wrote:

On Apr 19, 2017, at 11:35 PM, knarra  wrote:

On 04/20/2017 03:15 AM, Jamie Lawrence wrote:

I trialed installing the hosted engine, following the instructions at  
http://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
  . This is using Gluster as the backend storage subsystem.

Answer file at the end.

Per the docs,

"When the hosted-engine deployment script completes successfully, the oVirt 
Engine is configured and running on your host. The Engine has already configured the 
data center, cluster, host, the Engine virtual machine, and a shared storage domain 
dedicated to the Engine virtual machine.”

In my case, this is false. The installation claims success, but  the hosted 
engine VM stays stopped, unless I start it manually.

During the install process there is a step where HE vm is stopped and started. 
Can you check if this has happened correctly ?

The installer claimed it did, but I believe it didn’t. Below the error from my 
original email, there’s the below (apologies for not including it earlier; I 
missed it). Note: 04ff4cf1-135a-4918-9a1f-8023322f89a3 is the HE - I’m pretty 
sure it is complaining about itself. (In any case, I verified that there are no 
other VMs running with both virsh and vdsClient.)

2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.vm.runvm.Plugin._late_setup
2017-04-19 12:27:02 DEBUG otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:83 {'status': {'message': 'Done', 'code': 0}, 'items': 
[u'04ff4cf1-135a-4918-9a1f-8023322f89a3']}
2017-04-19 12:27:02 ERROR otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:91 The following VMs have been found: 
04ff4cf1-135a-4918-9a1f-8023322f89a3
2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
 method['method']()
   File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
 line 95, in _late_setup
 _('Cannot setup Hosted Engine with other VMs running')
RuntimeError: Cannot setup Hosted Engine with other VMs running
2017-04-19 12:27:02 ERROR otopi.context context._executeMethod:151 Failed to 
execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs 
running
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT 
DUMP - BEGIN
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/error=bool:'True'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, RuntimeError('Cannot 
setup Hosted Engine with other VMs running',), )]'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:774 ENVIRONMENT 
DUMP - END
James, generally this issue happens when the setup failed once and you 
tried re running it again.  Can you clean it and deploy it again?  HE 
should come up successfully. Below are the steps for cleaning it up.


1) vdsClient -s 0 list table | awk '{print $1}' | xargs vdsClient -s 0 
destroy


2) stop the volume and delete all the information inside the bricks from 
all the hosts


3) try to umount storage from /rhev/data-center/mnt/ - umount 
-f /rhev/data-center/mnt/  if it is mounted


4) remove all dirs from /rhev/data-center/mnt/ - rm 
-rf /rhev/data-center/mnt/*


5) start  volume again and start the deployment.

Thanks
kasturi




If I start it manually, the default DC is down, the default cluster has the 
installation host in the cluster,  there is no storage, and the VM doesn’t show 
up in the GUI. In this install run, I have not yet started the engine manually.

you wont be seeing HE vm until HE storage is imported into the UI. HE storage 
will be automatically imported into the UI (which will import HE vm too )once a 
master domain is present .

Sure; I’m just attempting to provide context.


I assume this is related to the errors in ovirt-hosted-engine-setup.log, below. 
(The timestamps are confusing; it looks like the Python errors are logged some 
time after they’re captured or something.) The HA broker and agent logs just 
show them looping in the sequence below.

Is there a decent way to pick this up and continue? If not, how do I make this 
work?

Can you please check the following things.

1) is glusterd running on all the nodes ? 'systemctl status glistered’
2) Are you able to connect to your storage server which is ovirt_engine in your 
case.
3) Can you check if all the brick process in the volume is up ?


1) Verified that glusterd is running on all three nodes.

2)
[root@sc5-thing-1]# mount -tglusterfs sc5-gluster-1:/ovirt_engine 
/mnt/ovirt_engine
[root@sc5-thing-1]# df -h
Filesystem  Size  Used Avail Use% Mounted on
[…]
sc5-gluster-1:/ovirt_engine 300G  2.6G  298G   1% /mnt/ovirt_engine


3)

Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Bryan Sockel
That was my next thought, wanted to see if there was another way before
I got to that point.


 Original message 
From: Derek Atkins  
Date: 4/20/17 8:12 PM (GMT-06:00) 
To: Bryan Sockel  
Cc: Chris Adams , users@ovirt.org 
Subject: Re: [ovirt-users] LACP Bonding issue 

  _  

>From : Derek Atkins [de...@ihtfp.com]
To : Bryan Sockel [bryan.soc...@altn.com]
Cc : Chris Adams [c...@cmadams.net], users@ovirt.org [users@ovirt.org]
Date : Thursday, April 20 2017 20:11:46
For what it's worth I set up my bond0 manually on CentOS before
installing
ovirt hosted engine and the ovirtmgmt bridge "took over" from bond0..
But
it appears to still be working.  At least I've not noticed links being
down, and "ifconfig" shows decent amount of traffic on both eno1 and
eno2.

Maybe wipe, re-install, configure it manually, and then install ovirt?

-derek

On Thu, April 20, 2017 8:58 pm, Bryan Sockel wrote:
> We checked the port groups, and servers are cabled correctly.
>
> After server is rebooted, em1 is the only interface passing traffic.
> Other 3 nics sitting idle.  We can down each port on the switch and
> confirm it is down on the server.
>
>
> I am pretty sure it is related to the bridge that was created to pass
> vm-host-altn traffic when the appliance was first installed.
>
>
>
>  Original message 
> From: Chris Adams 
> Date: 4/20/17 5:40 PM (GMT-06:00)
> To: users@ovirt.org
> Subject: Re: [ovirt-users] LACP Bonding issue
>
>   _
>
> From : Chris Adams [c...@cmadams.net]
> To : users@ovirt.org [users@ovirt.org]
> Date : Thursday, April 20 2017 17:40:25
> Once upon a time, Bryan Sockel  said:
>> It seems that is some disconnect between my network bridge, the bond
> and my
>> interfaces.  I would like to some how get my bond to use all 4
> interfaces.
>> On reboot, it always seems to reset consistently to EM1.
>
> Are you sure the switch side is all the same LACP group?  Sounds like
> one port may accidentally be in a separate group, and that happens to
be
> em1.
>
> You might try swapping wires between em1 and another port and reboot
and
> see which ports come up - if all but the port with the wire formerly
in
> em1 come up, it points to the switch config.
>
> --
> Chris Adams
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Derek Atkins
For what it's worth I set up my bond0 manually on CentOS before installing
ovirt hosted engine and the ovirtmgmt bridge "took over" from bond0..  But
it appears to still be working.  At least I've not noticed links being
down, and "ifconfig" shows decent amount of traffic on both eno1 and eno2.

Maybe wipe, re-install, configure it manually, and then install ovirt?

-derek

On Thu, April 20, 2017 8:58 pm, Bryan Sockel wrote:
> We checked the port groups, and servers are cabled correctly.
>
> After server is rebooted, em1 is the only interface passing traffic.
> Other 3 nics sitting idle.  We can down each port on the switch and
> confirm it is down on the server.
>
>
> I am pretty sure it is related to the bridge that was created to pass
> vm-host-altn traffic when the appliance was first installed.
>
>
>
>  Original message 
> From: Chris Adams 
> Date: 4/20/17 5:40 PM (GMT-06:00)
> To: users@ovirt.org
> Subject: Re: [ovirt-users] LACP Bonding issue
>
>   _
>
> From : Chris Adams [c...@cmadams.net]
> To : users@ovirt.org [users@ovirt.org]
> Date : Thursday, April 20 2017 17:40:25
> Once upon a time, Bryan Sockel  said:
>> It seems that is some disconnect between my network bridge, the bond
> and my
>> interfaces.  I would like to some how get my bond to use all 4
> interfaces.
>> On reboot, it always seems to reset consistently to EM1.
>
> Are you sure the switch side is all the same LACP group?  Sounds like
> one port may accidentally be in a separate group, and that happens to be
> em1.
>
> You might try swapping wires between em1 and another port and reboot and
> see which ports come up - if all but the port with the wire formerly in
> em1 come up, it points to the switch config.
>
> --
> Chris Adams
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Chris Adams
Sorry about the message with nothing new...

Once upon a time, Bryan Sockel  said:
> We checked the port groups, and servers are cabled correctly.
> 
> After server is rebooted, em1 is the only interface passing traffic.
> Other 3 nics sitting idle.  We can down each port on the switch and
> confirm it is down on the server.
> 
> I am pretty sure it is related to the bridge that was created to pass
> vm-host-altn traffic when the appliance was first installed.

Well, I don't have any problem with that setup on multiple oVirt
clusters (including a bunch of R610 servers), so I don't think that's
it.

I configure oVirt for "custom" bonding options; I use:

  mode=802.3ad lacp_rate=1 xmit_hash_policy=layer2+3

Is it possible to move the wires around temporarily, so different server
ports are connected to different switch ports?  It would be interested
to see if the "solo" behavior stayed with the port or the wire.

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Communicate between the guest vm's hosted on, different hosts on different data centres using OVS

2017-04-20 Thread Linov Suresh
Hi Charles,

We use an Ethernet cable to connect between the hosts through 10G port.

We want to use the guest VM's hosted on different hosts 10G network. oVirt
already supports SR-IOV. So we should guest 10G speed for our VM's.

In KVM we can create a second bridge and add the 10G interface to the
bridge would do the trick.

But in oVirt how do we do this? We want to use our OVs network, which was
created using OVN Provider.


   Linov Suresh



On Thu, Apr 20, 2017 at 6:59 PM, Charles Tassell  wrote:

> Hi Suresh,
>
>   You would need to connect the two OVN instances somehow.  If it's just
> two single hosts, I think the easiest way would be to create a VPN
> connection between the two hosts with OpenVPN or the like and then add the
> tun/tap interfaces into the OVN on each box.  You might run into problems
> if you start adding more hosts though, as if the host with the VPN goes
> down it would disconnect the two datacenters.
>
>   If the two datacenters are on the same physical network (ie, you just
> mean oVirt datacenter and not different colocation providers) then adding a
> VLAN to the NICs connected to the OVN interface would work.  You would
> probably have to setup some sort of channel bonding/LACP as you add more
> hosts, but OVN should be able to handle that simply enough.
>
> On 2017-04-20 07:33 PM, users-requ...@ovirt.org wrote:
>
>> Date: Thu, 20 Apr 2017 14:43:26 -0400
>> From: Linov Suresh 
>> To: users@ovirt.org
>> Subject: [ovirt-users] Communicate between the guest vm's hosted on
>> different hosts on different data centres using OVS
>> Message-ID:
>> 

Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Chris Adams
Once upon a time, Bryan Sockel  said:
> We checked the port groups, and servers are cabled correctly.
> 
> After server is rebooted, em1 is the only interface passing traffic.
> Other 3 nics sitting idle.  We can down each port on the switch and
> confirm it is down on the server.
> 
> 
> I am pretty sure it is related to the bridge that was created to pass
> vm-host-altn traffic when the appliance was first installed.
> 
> 
> 
>  Original message 
> From: Chris Adams  
> Date: 4/20/17 5:40 PM (GMT-06:00) 
> To: users@ovirt.org 
> Subject: Re: [ovirt-users] LACP Bonding issue 
> 
>   _  
> 
> >From : Chris Adams [c...@cmadams.net]
> To : users@ovirt.org [users@ovirt.org]
> Date : Thursday, April 20 2017 17:40:25
> Once upon a time, Bryan Sockel  said:
> > It seems that is some disconnect between my network bridge, the bond
> and my 
> > interfaces.  I would like to some how get my bond to use all 4
> interfaces.  
> > On reboot, it always seems to reset consistently to EM1.
> 
> Are you sure the switch side is all the same LACP group?  Sounds like
> one port may accidentally be in a separate group, and that happens to be
> em1.
> 
> You might try swapping wires between em1 and another port and reboot and
> see which ports come up - if all but the port with the wire formerly in
> em1 come up, it points to the switch config.
> 
> -- 
> Chris Adams 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Bryan Sockel
We checked the port groups, and servers are cabled correctly.

After server is rebooted, em1 is the only interface passing traffic.
Other 3 nics sitting idle.  We can down each port on the switch and
confirm it is down on the server.


I am pretty sure it is related to the bridge that was created to pass
vm-host-altn traffic when the appliance was first installed.



 Original message 
From: Chris Adams  
Date: 4/20/17 5:40 PM (GMT-06:00) 
To: users@ovirt.org 
Subject: Re: [ovirt-users] LACP Bonding issue 

  _  

>From : Chris Adams [c...@cmadams.net]
To : users@ovirt.org [users@ovirt.org]
Date : Thursday, April 20 2017 17:40:25
Once upon a time, Bryan Sockel  said:
> It seems that is some disconnect between my network bridge, the bond
and my 
> interfaces.  I would like to some how get my bond to use all 4
interfaces.  
> On reboot, it always seems to reset consistently to EM1.

Are you sure the switch side is all the same LACP group?  Sounds like
one port may accidentally be in a separate group, and that happens to be
em1.

You might try swapping wires between em1 and another port and reboot and
see which ports come up - if all but the port with the wire formerly in
em1 come up, it points to the switch config.

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Communicate between the guest vm's hosted on, different hosts on different data centres using OVS

2017-04-20 Thread Charles Tassell

Hi Suresh,

  You would need to connect the two OVN instances somehow.  If it's 
just two single hosts, I think the easiest way would be to create a VPN 
connection between the two hosts with OpenVPN or the like and then add 
the tun/tap interfaces into the OVN on each box.  You might run into 
problems if you start adding more hosts though, as if the host with the 
VPN goes down it would disconnect the two datacenters.


  If the two datacenters are on the same physical network (ie, you just 
mean oVirt datacenter and not different colocation providers) then 
adding a VLAN to the NICs connected to the OVN interface would work.  
You would probably have to setup some sort of channel bonding/LACP as 
you add more hosts, but OVN should be able to handle that simply enough.


On 2017-04-20 07:33 PM, users-requ...@ovirt.org wrote:

Date: Thu, 20 Apr 2017 14:43:26 -0400
From: Linov Suresh 
To: users@ovirt.org
Subject: [ovirt-users] Communicate between the guest vm's hosted on
different hosts on different data centres using OVS
Message-ID:

Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Chris Adams
Once upon a time, Bryan Sockel  said:
> It seems that is some disconnect between my network bridge, the bond and my 
> interfaces.  I would like to some how get my bond to use all 4 interfaces.  
> On reboot, it always seems to reset consistently to EM1.

Are you sure the switch side is all the same LACP group?  Sounds like
one port may accidentally be in a separate group, and that happens to be
em1.

You might try swapping wires between em1 and another port and reboot and
see which ports come up - if all but the port with the wire formerly in
em1 come up, it points to the switch config.

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Bryan Sockel

After further testing we can either get EM1 running on the bond or EM2 - EM4 
running in the bond, but not EM1-EM4.  We are running the Hosted Engine 
Appliance in our setup.  In order to install the appliance, i have to point 
it to a physical nic during the install process.  The install process 
creates the network bridge ovirtmgmt which is tied to interface EM1 on both 
servers.  After i finished setting up the appliance, and both of my hosts, i 
went in and created my bond so i could configure LACP.  

It seems that is some disconnect between my network bridge, the bond and my 
interfaces.  I would like to some how get my bond to use all 4 interfaces.  
On reboot, it always seems to reset consistently to EM1.

-Original Message-
From: Konstantin Shalygin 
To: users@ovirt.org, Bryan Sockel 
Date: Thu, 20 Apr 2017 23:43:15 +0700
Subject: Re: Re: [ovirt-users] LACP Bonding issue

You should configure your LAG with this options (custom mode on oVirt):

mode=4 miimon=100 xmit_hash_policy=2 lacp_rate=1

An tell to your network admin configure switch:
"Give me lacp timeout short with channel-group mode active. Also set 
port-channel load-balance src-dst-mac-ip (or src-dst-ip\src-dst-mac)".

You also need to understand that LACP balancing works 'per flow'. You can 
take 2 hosts and run "iperf -c xxx.xxx.xxx.xxx -i 0.1 -d",
and on one phy interface you should see 1Gb RX, and on another phy interface 
1Gb TX.

Hi,

I discovered an issue with my LACP configuration and i am having trouble 
figuring it out.  I am running 2 Dell Powered 610's with 4 broadcomm nics.  
I am trying to bond them together, however only one of the nics goes active 
no mater how much traffic i push across the links.

I have spoken to my network admin, and says that the switch ports are 
configured and can only see one active link on the switch.

Thanks
Bryan___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-20 Thread Gianluca Cecchi
Further infos:

- ovirt-hosted-engine-ha package version

[root@ractor ~]# rpm -q ovirt-hosted-engine-ha
ovirt-hosted-engine-ha-2.1.0.5-1.el7.centos.noarch
[root@ractor ~]#


- serial console works

[root@ractor ~]# hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 3.10.0-514.16.1.el7.x86_64 on an x86_64

ractorshe login: root
Password:
Last login: Thu Apr 20 19:14:27 on pts/0
[root@ractorshe ~]#


- Current runtime vm.conf for hosted engine vm

[root@ractor ~]# cat /run/ovirt-hosted-engine-ha/vm.conf
cpuType=Nehalem
emulatedMachine=pc-i440fx-rhel7.3.0
vmId=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683
smp=4
memSize=16384
maxVCpus=16
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
vmName=HostedEngine
display=qxl
devices={index:0,iface:virtio,format:raw,bootOrder:1,address:{slot:0x06,bus:0x00,domain:0x,type:pci,function:0x0},volumeID:43ee87b9-4293-4d43-beab-582f500667a7,imageID:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,readonly:false,domainID:2025c2ea-6205-4bc1-b29d-745b47f8f806,deviceId:d6287dfb-27af-461b-ab79-4eb3a45d8c8a,poolID:----,device:disk,shared:exclusive,propagateErrors:off,type:disk}
devices={nicModel:pv,macAddr:00:16:3e:3a:ee:a5,linkActive:true,network:ovirtmgmt,deviceId:4bbb90e6-4f8e-42e0-91ea-d894125ff4a8,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}
devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}
devices={device:usb,type:controller,deviceId:ee985889-6878-463a-a415-9b50a4a810b3,address:{slot:0x01,bus:0x00,domain:0x,type:pci,function:0x2}}
devices={device:virtio-serial,type:controller,deviceId:d99705cd-0ebf-40f0-950b-575ab4e6d934,address:{slot:0x05,bus:0x00,domain:0x,type:pci,function:0x0}}
devices={device:ide,type:controller,deviceId:ef31f1a2-746a-4188-ae45-ef157d7b5598,address:{slot:0x01,bus:0x00,domain:0x,type:pci,function:0x1}}
devices={device:scsi,model:virtio-scsi,type:controller,deviceId:f41baf47-51f8-42e9-a290-70da06191991,address:{slot:0x04,bus:0x00,domain:0x,type:pci,function:0x0}}
devices={alias:rng0,specParams:{source:urandom},deviceId:4c7f0e81-c3e8-498f-a5a2-b8c1543e94b4,address:{slot:0x02,bus:0x00,domain:0x,type:pci,function:0x0},device:virtio,model:virtio,type:rng}
devices={device:console,type:console}
[root@ractor ~]#
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Communicate between the guest vm's hosted on different hosts on different data centres using OVS

2017-04-20 Thread Linov Suresh
Hello,

I have configured OVS network on oVirt 4.1.1, following the document
https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/

I have created OVN Provider, and created new Logical Network for OVS using
OVN Provider. I have synced the network also selected OVS when I created
the cluster.

I have two data centres and each data centre has one host each.

Now I have created VM's selecting OVS network (not ovirtmgmt). VM's in the
same data centre can talk to each other.

How do i make the VM's hosted on different data centre can talk to each
other?

Appreciate your help in advance,

Sincerely,

Suresh.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virsh list

2017-04-20 Thread Irit Goihman
Hi,
Please use only the -r option.
You shouldn't be making changes outside ovirt-engine and vdsm.

Thanks,

On Sat, Apr 15, 2017 at 11:00 AM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> Depends on what you need to do. If you only want to se vms, add -r to have
> readonly access.
>
> Luca
>
> Il 15 apr 2017 12:58 AM, "Konstantin Raskoshnyi"  ha
> scritto:
>
> Hi guys
>
> I'm trying to run virsh list (or any other virsh commands)
>
> virsh list
> Please enter your authentication name: admin
> Please enter your password:
> error: failed to connect to the hypervisor
> error: authentication failed: authentication failed
>
> But I have no clue about login:password ovirt uses.
> I tried admin password, also tried to create new account with saslpasswd2
>
> Which didn't work to.
>
> Any solutions?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

IRIT GOIHMAN

SOFTWARE ENGINEER

EMEA VIRTUALIZATION R

Red Hat EMEA 


TRIED. TESTED. TRUSTED. 
@redhatnews    Red Hat
   Red Hat

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-20 Thread Arsène Gschwind

Hi,

You just need to patch ovf2VmParams.py the other 3 files are just for 
testing purpose.

Don't forget to restart *ovirt-ha-agent* service before starting HE VM.

Rgds,
Arsène


On 04/20/2017 07:19 PM, Gianluca Cecchi wrote:
On Wed, Apr 19, 2017 at 4:26 PM, Arsène Gschwind 
> wrote:


I did start the hosted engine on the host I've applied the patch
but I've forgot to restart ovirt-ha-agent, and now it works.

Great job, thanks

Best regards,
Arsène



Hello,
I have the same on my single host hosted engine environment, after 
passing from 4.1.0 to 4.1.1

I tried to change the file as in gerrit entry

/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py

[root@ractor ovf]# diff ovf2VmParams.py ovf2VmParams.py.bck
164,168d163
< def buildGraphics(device):
< graphics = buildDevice(device)
< return graphics
<
<
257,258d251
< elif t == 'graphics':
< devices.append(buildGraphics(device))

I don't find in my environment the other 3 files
ovf2VmParams_test.py
ovf_test.xml
ovf_test_max_vcpu.xml

Then I
set global maintenance
restart ovirt-guest-agent
shutdown engine vm
exit global maintenance
the engine vm starts up and I'm able to connect, but its console still 
grey




--

*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  | http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-20 Thread Martin Sivak
Are you sure you restarted the proper services?

I see -nographic in the qemu command line and you mentioned
ovirt-guest-agent intead of ovirt-ha-agent..

Best regards

Martin Sivak

On Thu, Apr 20, 2017 at 7:25 PM, Gianluca Cecchi
 wrote:
>
>
> On Thu, Apr 20, 2017 at 7:19 PM, Gianluca Cecchi 
> wrote:
>>
>> On Wed, Apr 19, 2017 at 4:26 PM, Arsène Gschwind
>>  wrote:
>>>
>>> I did start the hosted engine on the host I've applied the patch but I've
>>> forgot to restart ovirt-ha-agent, and now it works.
>>>
>>> Great job, thanks
>>>
>>> Best regards,
>>> Arsène
>>>
>>>
>>
>> Hello,
>> I have the same on my single host hosted engine environment, after passing
>> from 4.1.0 to 4.1.1
>>
>>
>>
>
> BTW the qemu command line generated for hosted engine is :
>
> qemu 23686 1 82 19:11 ?00:00:15 /usr/libexec/qemu-kvm -name
> guest=HostedEngine,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-HostedEngine/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m 16384
> -realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> 7b0ff898-0a9e-4b97-8292-1d9f2a0a6683 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683
> -nographic -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-8-HostedEngine/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-04-20T17:11:49,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
> file=/var/run/vdsm/storage/2025c2ea-6205-4bc1-b29d-745b47f8f806/d6287dfb-27af-461b-ab79-4eb3a45d8c8a/43ee87b9-4293-4d43-beab-582f500667a7,format=raw,if=none,id=drive-virtio-disk0,serial=d6287dfb-27af-461b-ab79-4eb3a45d8c8a,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> tap,fd=30,id=hostnet0,vhost=on,vhostfd=33 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3a:ee:a5,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7b0ff898-0a9e-4b97-8292-1d9f2a0a6683.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7b0ff898-0a9e-4b97-8292-1d9f2a0a6683.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev
> socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/7b0ff898-0a9e-4b97-8292-1d9f2a0a6683.org.ovirt.hosted-engine-setup.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
> -chardev pty,id=charconsole0 -device
> virtconsole,chardev=charconsole0,id=console0 -object
> rng-random,id=objrng0,filename=/dev/urandom -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x2 -msg timestamp=on
>
>
> and in /var/log/libvirt/qemu/HostedEngine.log
>
> 2017-04-20 17:11:49.947+: starting up libvirt version: 2.0.0, package:
> 10.el7_3.5 (CentOS BuildSystem ,
> 2017-03-03-02:09:45, c1bm.rdu2.centos.org), qemu version: 2.6.0
> (qemu-kvm-ev-2.6.0-28.el7_3.6.1), hostname: ractor.mydomain
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
> guest=HostedEngine,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-HostedEngine/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m 16384
> -realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> 7b0ff898-0a9e-4b97-8292-1d9f2a0a6683 -smbios
> 'type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683'
> -nographic -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-8-HostedEngine/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-04-20T17:11:49,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> 

Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker (revised)

2017-04-20 Thread Jamie Lawrence

> On Apr 20, 2017, at 9:18 AM, Simone Tiraboschi  wrote:

> Could you please share the output of 
>   sudo -u vdsm sudo service sanlock status

That command line prompts for vdsm’s password, which it doesn’t have. But 
output returned as root is below. Is that ‘operation not permitted’ related?

Thanks,

-j

[root@sc5-ovirt-2 jlawrence]# service sanlock status
Redirecting to /bin/systemctl status  sanlock.service
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor 
preset: disabled)
   Active: active (running) since Wed 2017-04-19 16:56:40 PDT; 17h ago
  Process: 16764 ExecStart=/usr/sbin/sanlock daemon (code=exited, 
status=0/SUCCESS)
 Main PID: 16765 (sanlock)
   CGroup: /system.slice/sanlock.service
   ├─16765 /usr/sbin/sanlock daemon
   └─16766 /usr/sbin/sanlock daemon

Apr 19 16:56:40 sc5-ovirt-2.squaretrade.com systemd[1]: Starting Shared Storage 
Lease Manager...
Apr 19 16:56:40 sc5-ovirt-2.squaretrade.com systemd[1]: Started Shared Storage 
Lease Manager.
Apr 19 16:56:40 sc5-ovirt-2.squaretrade.com sanlock[16765]: 2017-04-19 
16:56:40-0700 482 [16765]: set scheduler RR|RESET_ON_FORK priority 99 failed: 
Operation not permitted

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-20 Thread Gianluca Cecchi
On Thu, Apr 20, 2017 at 7:19 PM, Gianluca Cecchi 
wrote:

> On Wed, Apr 19, 2017 at 4:26 PM, Arsène Gschwind <
> arsene.gschw...@unibas.ch> wrote:
>
>> I did start the hosted engine on the host I've applied the patch but I've
>> forgot to restart ovirt-ha-agent, and now it works.
>>
>> Great job, thanks
>>
>> Best regards,
>> Arsène
>>
>>
> Hello,
> I have the same on my single host hosted engine environment, after passing
> from 4.1.0 to 4.1.1
>
>
>
>
BTW the qemu command line generated for hosted engine is :

qemu 23686 1 82 19:11 ?00:00:15 /usr/libexec/qemu-kvm -name
guest=HostedEngine,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-HostedEngine/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m 16384
-realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
7b0ff898-0a9e-4b97-8292-1d9f2a0a6683 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683
-nographic -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-8-HostedEngine/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-04-20T17:11:49,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
file=/var/run/vdsm/storage/2025c2ea-6205-4bc1-b29d-745b47f8f806/d6287dfb-27af-461b-ab79-4eb3a45d8c8a/43ee87b9-4293-4d43-beab-582f500667a7,format=raw,if=none,id=drive-virtio-disk0,serial=d6287dfb-27af-461b-ab79-4eb3a45d8c8a,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=33 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3a:ee:a5,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7b0ff898-0a9e-4b97-8292-1d9f2a0a6683.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7b0ff898-0a9e-4b97-8292-1d9f2a0a6683.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev
socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/7b0ff898-0a9e-4b97-8292-1d9f2a0a6683.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x2 -msg timestamp=on


and in /var/log/libvirt/qemu/HostedEngine.log

2017-04-20 17:11:49.947+: starting up libvirt version: 2.0.0, package:
10.el7_3.5 (CentOS BuildSystem ,
2017-03-03-02:09:45, c1bm.rdu2.centos.org), qemu version: 2.6.0
(qemu-kvm-ev-2.6.0-28.el7_3.6.1), hostname: ractor.mydomain
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
guest=HostedEngine,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-HostedEngine/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m 16384
-realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
7b0ff898-0a9e-4b97-8292-1d9f2a0a6683 -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=7b0ff898-0a9e-4b97-8292-1d9f2a0a6683'
-nographic -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-8-HostedEngine/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-04-20T17:11:49,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
file=/var/run/vdsm/storage/2025c2ea-6205-4bc1-b29d-745b47f8f806/d6287dfb-27af-461b-ab79-4eb3a45d8c8a/43ee87b9-4293-4d43-beab-582f500667a7,format=raw,if=none,id=drive-virtio-disk0,serial=d6287dfb-27af-461b-ab79-4eb3a45d8c8a,cache=none,werror=stop,rerror=stop,aio=threads
-device

Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-20 Thread Gianluca Cecchi
On Wed, Apr 19, 2017 at 4:26 PM, Arsène Gschwind 
wrote:

> I did start the hosted engine on the host I've applied the patch but I've
> forgot to restart ovirt-ha-agent, and now it works.
>
> Great job, thanks
>
> Best regards,
> Arsène
>
>
Hello,
I have the same on my single host hosted engine environment, after passing
from 4.1.0 to 4.1.1
I tried to change the file as in gerrit entry

/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py

[root@ractor ovf]# diff ovf2VmParams.py ovf2VmParams.py.bck
164,168d163
< def buildGraphics(device):
< graphics = buildDevice(device)
< return graphics
<
<
257,258d251
< elif t == 'graphics':
< devices.append(buildGraphics(device))

I don't find in my environment the other 3 files
ovf2VmParams_test.py
ovf_test.xml
ovf_test_max_vcpu.xml

Then I
set global maintenance
restart ovirt-guest-agent
shutdown engine vm
exit global maintenance
the engine vm starts up and I'm able to connect, but its console still
grey
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-20 Thread Jamie Lawrence

> On Apr 19, 2017, at 11:35 PM, knarra  wrote:
> 
> On 04/20/2017 03:15 AM, Jamie Lawrence wrote:
>> I trialed installing the hosted engine, following the instructions at  
>> http://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
>>   . This is using Gluster as the backend storage subsystem.
>> 
>> Answer file at the end.
>> 
>> Per the docs,
>> 
>> "When the hosted-engine deployment script completes successfully, the oVirt 
>> Engine is configured and running on your host. The Engine has already 
>> configured the data center, cluster, host, the Engine virtual machine, and a 
>> shared storage domain dedicated to the Engine virtual machine.”
>> 
>> In my case, this is false. The installation claims success, but  the hosted 
>> engine VM stays stopped, unless I start it manually.
> During the install process there is a step where HE vm is stopped and 
> started. Can you check if this has happened correctly ?

The installer claimed it did, but I believe it didn’t. Below the error from my 
original email, there’s the below (apologies for not including it earlier; I 
missed it). Note: 04ff4cf1-135a-4918-9a1f-8023322f89a3 is the HE - I’m pretty 
sure it is complaining about itself. (In any case, I verified that there are no 
other VMs running with both virsh and vdsClient.)

2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.vm.runvm.Plugin._late_setup
2017-04-19 12:27:02 DEBUG otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:83 {'status': {'message': 'Done', 'code': 0}, 'items': 
[u'04ff4cf1-135a-4918-9a1f-8023322f89a3']}
2017-04-19 12:27:02 ERROR otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:91 The following VMs have been found: 
04ff4cf1-135a-4918-9a1f-8023322f89a3
2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
method['method']()
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
 line 95, in _late_setup
_('Cannot setup Hosted Engine with other VMs running')
RuntimeError: Cannot setup Hosted Engine with other VMs running
2017-04-19 12:27:02 ERROR otopi.context context._executeMethod:151 Failed to 
execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs 
running
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT 
DUMP - BEGIN
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/error=bool:'True'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, 
RuntimeError('Cannot setup Hosted Engine with other VMs running',), )]'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:774 ENVIRONMENT 
DUMP - END


>> If I start it manually, the default DC is down, the default cluster has the 
>> installation host in the cluster,  there is no storage, and the VM doesn’t 
>> show up in the GUI. In this install run, I have not yet started the engine 
>> manually.
> you wont be seeing HE vm until HE storage is imported into the UI. HE storage 
> will be automatically imported into the UI (which will import HE vm too )once 
> a master domain is present .

Sure; I’m just attempting to provide context.

>> I assume this is related to the errors in ovirt-hosted-engine-setup.log, 
>> below. (The timestamps are confusing; it looks like the Python errors are 
>> logged some time after they’re captured or something.) The HA broker and 
>> agent logs just show them looping in the sequence below.
>> 
>> Is there a decent way to pick this up and continue? If not, how do I make 
>> this work?
> Can you please check the following things.
> 
> 1) is glusterd running on all the nodes ? 'systemctl status glistered’
> 2) Are you able to connect to your storage server which is ovirt_engine in 
> your case.
> 3) Can you check if all the brick process in the volume is up ?


1) Verified that glusterd is running on all three nodes.

2) 
[root@sc5-thing-1]# mount -tglusterfs sc5-gluster-1:/ovirt_engine 
/mnt/ovirt_engine
[root@sc5-thing-1]# df -h
Filesystem  Size  Used Avail Use% Mounted on
[…]
sc5-gluster-1:/ovirt_engine 300G  2.6G  298G   1% /mnt/ovirt_engine


3)
[root@sc5-gluster-1 jlawrence]# gluster volume status
Status of volume: ovirt_engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick sc5-gluster-1:/gluster-bricks/ovirt_e
ngine/ovirt_engine-149217 0  Y   22102
Brick sc5-gluster-2:/gluster-bricks/ovirt_e
ngine/ovirt_engine-149157 0  Y   37842
Brick sc5-gluster-3:/gluster-bricks/ovirt_e
ngine/ovirt_engine-149157 0  

[ovirt-users] Ovirt self hosted engine network issue

2017-04-20 Thread Brenneman, Brad B.
Hi,

  I have an ovirt self hosted server with multiple physical NICs in it.

em1 - set for DHCP and used for web resources
bond0 (p1p1, p1p2) - set statically for communicating to internal hosts w/o 
internet

Before I installed the engine vm/appliance, system was able to browse the web 
on em1 and maintain internal comms on bond0 to internals hosts.

After installing self hosted engine/appliance, the host is unable to 
communicate to web resources on em1.  Ovirtmgmt is on the bond and has no comms 
issues.

I installed the logical network for the external web side per the online docs 
and attached it to em1.

Host system can ping the network gateways on each NIC. When launching firefox, 
system is able to browse to Hosted-engine VM page (admin portal, user portal , 
etc) but is unable to get to web sites (i.e. google, ovirt.org, etc.)

I read that Ovirt 4.1 was supposed to have fixed the multiple gateway issue, 
but am confused as to why I can't get out.

Any ideas how I can get the host to browse the web again?

Brad

William "Brad" Brenneman | Leidos
Senior Systems Engineer | Naval Strike and Intelligence Division
6909 Metro Park Drive   Alexandria, VA 22310
phone: 571.319.8221
"Temporary" mobile:   571 213 6890
william.b.brenne...@leidos.com  |  
leidos.com

[cid:image004.png@01CF4CE2.1EF07A30]

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.1.1 and ovn problems

2017-04-20 Thread Gianluca Cecchi
Hello,
I installed some months ago a test setup in 4.1.0 with ovn.
Now after updating engine and host to 4.1.1 it seems the services are up
but it doesn't work.
If I run a VM with a network device in OVN external provider, it cant' boot
and I get this in engine.log:

2017-04-20 15:17:42,285+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-11) [e55e0971-f1e5-4fda-8666-3ce23797027f]
EVENT_ID: USER_FAILED_RUN_VM(54), Correlation ID:
e55e0971-f1e5-4fda-8666-3ce23797027f, Job ID:
067f6e70-9e70-48bc-be44-d5bd1d9485fd, Call Stack: null, Custom Event ID:
-1, Message: Failed to run VM c6 (User: admin@internal-authz).
2017-04-20 15:17:42,317+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(org.ovirt.thread.pool-6-thread-11) [e55e0971-f1e5-4fda-8666-3ce23797027f]
Lock freed to object
'EngineLock:{exclusiveLocks='[50194eea-f96d-4ebb-bf64-55cef13f4309=]', sharedLocks='null'}'
2017-04-20 15:17:42,317+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
(org.ovirt.thread.pool-6-thread-11) [e55e0971-f1e5-4fda-8666-3ce23797027f]
Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: EngineException:
(Failed with error PROVIDER_FAILURE and code 5050)

Firewall is disabled/stopped at host and engine side (where I installed the
central server too) and should not be the problem

On engine server  I get this into /var/log/ovirt-provider-ovn.log

2017-04-20 16:36:25,355   Request: GET : /v2.0/ports
2017-04-20 16:36:25,355   Connecting to remote ovn database: tcp:
127.0.0.1:6641
2017-04-20 16:36:28,422   Failed to connect!
2017-04-20 16:36:28,422   Failed to connect!
Traceback (most recent call last):
  File "/usr/share/ovirt-provider-ovn/neutron.py", line 76, in
_handle_request
content)
  File "/usr/share/ovirt-provider-ovn/neutron.py", line 132, in
handle_request
with OvnNbDb(self.remote) as nb_db:
  File "/usr/share/ovirt-provider-ovn/ovndb/ndb_api.py", line 56, in
__init__
self.connect(tables, remote, self.OVN_NB_OVSSCHEMA_FILE)
  File "/usr/share/ovirt-provider-ovn/ovndb/ovsdb_api.py", line 110, in
connect
OvsDb._connect(self._ovsdb_connection)
  File "/usr/share/ovirt-provider-ovn/ovndb/ovsdb_api.py", line 47, in block
raise OvsDBConnectionFailed('Failed to connect!')
OvsDBConnectionFailed: Failed to connect!

Initial working versions on engine, where I configured the central server:

Feb 14 17:55:57 Installed: openvswitch-2.6.90-1.el7.centos.x86_64
Feb 14 17:55:57 Installed: openvswitch-ovn-common-2.6.90-1.el7.centos.x86_64
Feb 14 17:55:58 Installed:
openvswitch-ovn-central-2.6.90-1.el7.centos.x86_64
Feb 14 17:55:59 Installed: python-openvswitch-2.6.90-1.el7.centos.noarch
Feb 14 17:56:52 Installed:
ovirt-provider-ovn-1.0-1.20161219125609.git.el7.centos.noarch

Today as part of the update I got:

Apr 20 11:30:06 Updated: openvswitch-2.7.0-1.el7.centos.x86_64
Apr 20 11:30:06 Updated: openvswitch-ovn-common-2.7.0-1.el7.centos.x86_64
Apr 20 11:30:07 Updated: openvswitch-ovn-central-2.7.0-1.el7.centos.x86_64
Apr 20 11:30:24 Installed: python-openvswitch-2.7.0-1.el7.centos.noarch
Apr 20 11:31:00 Updated: ovirt-provider-ovn-1.0-6.el7.centos.noarch

At the page
https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/

I see this note about ports:

"
Since OVS 2.7, OVN central must be configured to listen to requests on
appropriate ports:

ovn-sbctl set-connection ptcp:6642
ovn-nbctl set-connection ptcp:6641
"

and in my case I indeed passed from 2.6.90 to 2.7.0...

Do I need to run these two commands?
Or any other configuration settings?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Konstantin Shalygin

You should configure your LAG with this options (custom mode on oVirt):

mode=4 miimon=100 xmit_hash_policy=2 lacp_rate=1

An tell to your network admin configure switch:
"Give me lacp timeout short with channel-group mode active. Also set 
port-channel load-balance src-dst-mac-ip (or src-dst-ip\src-dst-mac)".


You also need to understand that LACP balancing works 'per flow'. You 
can take 2 hosts and run "iperf -c xxx.xxx.xxx.xxx -i 0.1 -d",
and on one phy interface you should see 1Gb RX, and on another phy 
interface 1Gb TX.



Hi,

I discovered an issue with my LACP configuration and i am having trouble
figuring it out.  I am running 2 Dell Powered 610's with 4 broadcomm nics.
I am trying to bond them together, however only one of the nics goes active
no mater how much traffic i push across the links.

I have spoken to my network admin, and says that the switch ports are
configured and can only see one active link on the switch.

Thanks
Bryan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [oVirt 4.1] oVirt installation and hotplug memory to the hosted-engine VM

2017-04-20 Thread Simone Tiraboschi
On Fri, Apr 14, 2017 at 8:26 PM, wodel youchi 
wrote:

> Hi,
>
> I am testing oVirt 4.1 using oVirt node and oVirt appliance, I have two
> questions :
>
> 1 - Installation process :
> I found that the installation process has changed regarding the appliance
> use, it forces the download of the appliance before continuing the
> installation, but it still offer the use of another image later on, which
> is somehow disturbing.
> can the force-download be disabled, When installing the first time, I did
> download the rpm appliance before starting the hosted-engine deployment,
> but I had to re-install the nodes, and since I have a poor internet
> connection and to save time I saved the ova file of the appliance, but I
> couldn't use later, because the deployment script forces me to download the
> image again.
>

The check is not about the OVA file but the rpm which contains the file.
It's enough to save the rpm file and push it to your host.


>
> 2 - Can we hotplug memory on the hosted-engine VM? if yes how? I can edit
> the VM, but the changes are not taken into account. I then stopped the VM
> engine and started it again, still the changes are not taken into account,
> in the webui I have the new value, but free -m shows me the old one.
>
> Thanks in advance and thank you all for your work and efforts
>
> Regards.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] impossible to change console graphics on hosted engine

2017-04-20 Thread Simone Tiraboschi
On Tue, Apr 18, 2017 at 11:20 AM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> Hello,
>
> My hosted engine has currently VNC (+cirrus) graphics console, which has
> serious performance issues with my remote-viewer (theses issues are not
> important for this mail)
>
> I know that SPICE (+QXL) works perfect for me, so I tried to update my
> hostedEngine console settings on oVirt GUI, but I get the message below :
>
> "There was an attempt to change Hosted Engine VM values that are locked"
>
> So my question is : how can I change this setting on HostedEngine?
>

Currently it's not allowed,
could you please open an RFE to track it?


>
> This question could be applied to other settings wich I would like to also
> change on HostedEngine but are also locked (ex: "optimise for
> [server|desktop]")
>
> my setup:
> hostedEngine : centos 7.3 (full updated) + oVirt engine 4.1.1
> node running engine : centos 7.3 (full updated) + engine 4.1.1
> cluster running engine : compatibility 4.0
> engine running on dedicated ISCSI volume
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker (revised)

2017-04-20 Thread Simone Tiraboschi
On Thu, Apr 20, 2017 at 2:14 AM, Jamie Lawrence 
wrote:

>
> So, tracing this further, I’m pretty sure this is something about sanlock.
>
> As best I can tell this[1]  seems to be the failure that is blocking
> importing the pool, creating storage domains, importing the HE, etc.
> Contrary to the log, sanlock is running; I verified it starts on
> system-boot and restarts just fine.
>
> I found one reference to someone having a similar problem in 3.6, but that
> appeared to have been a permission issue I’m not afflicted with.
>
> How can I move past this?
>

Could you please share the output of
  sudo -u vdsm sudo service sanlock status
?


>
> TIA,
>
> -j
>
>
> [1] agent.log:
> MainThread::WARNING::2017-04-19 17:07:13,537::agent::209::
> ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent,
> attempt '6'
> MainThread::INFO::2017-04-19 17:07:13,567::hosted_engine::
> 242::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
> Found certificate common name: sc5-ovirt-2.squaretrade.com
> MainThread::INFO::2017-04-19 17:07:13,569::hosted_engine::
> 604::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_vdsm) Initializing VDSM
> MainThread::INFO::2017-04-19 17:07:16,044::hosted_engine::
> 630::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_storage_images) Connecting the storage
> MainThread::INFO::2017-04-19 17:07:16,045::storage_server::
> 219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2017-04-19 17:07:20,876::storage_server::
> 226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2017-04-19 17:07:20,893::storage_server::
> 233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Refreshing the storage domain
> MainThread::INFO::2017-04-19 17:07:21,160::hosted_engine::
> 657::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_storage_images) Preparing images
> MainThread::INFO::2017-04-19 17:07:21,160::image::126::
> ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images
> MainThread::INFO::2017-04-19 17:07:23,954::hosted_engine::
> 660::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_storage_images) Refreshing vm.conf
> MainThread::INFO::2017-04-19 17:07:23,955::config::485::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
> Reloading vm.conf from the shared storage domain
> MainThread::INFO::2017-04-19 17:07:23,955::config::412::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.
> config::(_get_vm_conf_content_from_ovf_store) Trying to get a fresher
> copy of vm configuration from the OVF_STORE
> MainThread::WARNING::2017-04-19 17:07:26,741::ovf_store::107::
> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Unable to find
> OVF_STORE
> MainThread::ERROR::2017-04-19 17:07:26,744::config::450::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.
> config::(_get_vm_conf_content_from_ovf_store) Unable to identify the
> OVF_STORE volume, falling back to initial vm.conf. Please ensure you
> already added your first data domain for regular VMs
> MainThread::INFO::2017-04-19 17:07:26,770::hosted_engine::
> 509::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_broker) Initializing ha-broker connection
> MainThread::INFO::2017-04-19 17:07:26,771::brokerlink::130:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor ping, options {'addr': '10.181.26.1'}
> MainThread::INFO::2017-04-19 17:07:26,774::brokerlink::141:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140621269798096
> MainThread::INFO::2017-04-19 17:07:26,774::brokerlink::130:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name':
> 'ovirtmgmt', 'address': '0'}
> MainThread::INFO::2017-04-19 17:07:26,791::brokerlink::141:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140621269798544
> MainThread::INFO::2017-04-19 17:07:26,792::brokerlink::130:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
> MainThread::INFO::2017-04-19 17:07:26,793::brokerlink::141:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140621269798224
> MainThread::INFO::2017-04-19 17:07:26,794::brokerlink::130:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor cpu-load-no-engine, options {'use_ssl': 'true', 'vm_uuid':
> '04ff4cf1-135a-4918-9a1f-8023322f89a3', 'address': '0'}
> MainThread::INFO::2017-04-19 17:07:26,796::brokerlink::141:
> 

Re: [ovirt-users] upgrade to 4.1

2017-04-20 Thread Simone Tiraboschi
On Thu, Apr 20, 2017 at 3:50 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> I tried to upgrade ovirt to version 4.1 from 4.0 and got:
>
>   Found the following problems in PostgreSQL configuration for the
> Engine database:
>autovacuum_vacuum_scale_factor required to be at most 0.01
>autovacuum_analyze_scale_factor required to be at most 0.075
>autovacuum_max_workers required to be at least 6
>Postgresql client version is '9.4.8', whereas the version on
> XXX is '9.4.11'. Please use a Postgresql server of version '9.4.8'.
>   Please set:
>autovacuum_vacuum_scale_factor = 0.01
>autovacuum_analyze_scale_factor = 0.075
>autovacuum_max_workers = 6
>server_version = 9.4.8
>   in postgresql.conf on ''. Its location is usually
> /var/lib/pgsql/data , or somewhere under /etc/postgresql* .
>
> I'm a little afraid about that. Does ovirt want pg to lies about it's
> version ? It's a shared instance so what about other tools that access it ?
> Is there some explanation about the meaning of those values ?
>

engine-setup it's comparing the version of the local psql client with the
version reported by the remote postgresql server as for:
https://bugzilla.redhat.com/show_bug.cgi?id=1331168

Currently it's a strict comparison; maybe we should be more flexibly
regarding .z versions; Didi?

The other checks has been introduced for performance and scaling reasons.


>
> And it was not in the release notes, it's not funny to get this warning
> after starting the upgrade
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] LACP Bonding issue

2017-04-20 Thread Bryan Sockel

Hi,

I discovered an issue with my LACP configuration and i am having trouble 
figuring it out.  I am running 2 Dell Powered 610's with 4 broadcomm nics.  
I am trying to bond them together, however only one of the nics goes active 
no mater how much traffic i push across the links.

I have spoken to my network admin, and says that the switch ports are 
configured and can only see one active link on the switch.

Thanks
Bryan

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: f4:8e:38:c5:fc:a8
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 9
Partner Key: 20
Partner Mac Address: a4:6c:2a:e5:30:00

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:a8
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: f4:8e:38:c5:fc:a8
port key: 9
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 8192
system mac address: a4:6c:2a:e5:30:00
oper key: 20
port priority: 32768
port number: 25
port state: 61

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:a9
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: f4:8e:38:c5:fc:a8
port key: 9
port priority: 255
port number: 2
port state: 5
details partner lacp pdu:
system priority: 32768
system mac address: a4:6c:2a:e5:30:00
oper key: 20
port priority: 32768
port number: 73
port state: 5

Slave Interface: em3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:aa
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: f4:8e:38:c5:fc:a8
port key: 9
port priority: 255
port number: 3
port state: 5
details partner lacp pdu:
system priority: 32768
system mac address: a4:6c:2a:e5:30:00
oper key: 20
port priority: 32768
port number: 26
port state: 5

Slave Interface: em4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:ab
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
oper key: 20
port priority: 32768
port number: 73
port state: 5

Slave Interface: em3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:aa
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: f4:8e:38:c5:fc:a8
port key: 9
port priority: 255
port number: 3
port state: 5
details partner lacp pdu:
system priority: 32768
system mac address: a4:6c:2a:e5:30:00
oper key: 20
port priority: 32768
port number: 26
port state: 5

Slave Interface: em4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:ab
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: f4:8e:38:c5:fc:a8
port key: 9
port priority: 255
port number: 4
port state: 5
details partner lacp pdu:
system priority: 32768
system mac address: a4:6c:2a:e5:30:00
oper key: 20
port priority: 32768
port number: 74
port state: 5
Apr 20 10:51:04 vm-host-colo-2 systemd: Stopping LSB: Bring up/down 
networking...
Apr 20 10:51:04 vm-host-colo-2 kernel: DMZ: port 1(bond0.10) entered disabled 
state
Apr 20 10:51:04 vm-host-colo-2 network: Shutting down interface DMZ:  [  OK  ]
Apr 20 10:51:04 vm-host-colo-2 kernel: Internal-Dev: port 1(bond0.30) entered 
disabled state
Apr 20 10:51:04 vm-host-colo-2 network: Shutting down interface Internal-Dev:  
[  OK  ]
Apr 20 10:51:05 vm-host-colo-2 kernel: Lab: port 1(bond0.40) entered disabled 
state
Apr 20 10:51:05 vm-host-colo-2 network: Shutting down interface Lab:  [  OK  ]
Apr 20 10:51:05 vm-host-colo-2 kernel: 

Re: [ovirt-users] oVirt 4.1 not possible to set local maintenance on single host

2017-04-20 Thread Simone Tiraboschi
On Thu, Apr 20, 2017 at 12:12 PM, Gianluca Cecchi  wrote:

> Hello,
> I have a single host test environment on 4.1.0 with hosted engine VM.
>
> I'm going to update to 4.1.1.
>
> Normally the workflow was:
>
> - put env in global maintenance
> - update engine part with engine-setup and such
> - update other os related packages of hosted engine VM
> - shutdown hosted engine vm
> - exit global maintenance
>
> Verify engine VM starts and all is ok form web admin gui.
> These steps above I have already done and now my engine vm has latest
> 4.1.1 setup.
>
> Now I want to proceed also with the only existing host part and I have
> already run:
>
> - shutdown all running VMs
> - put env in global maintenance
> - shutdown hosted engine vm
>
> Status is:
>
> [root@ractor ~]# hosted-engine --vm-status
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : ractor.mydomain
> Host ID: 1
> Engine status  : {"reason": "bad vm status", "health":
> "bad", "vm": "down", "detail": "down"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 017f5635
> local_conf_timestamp   : 5595857
> Host timestamp : 5595835
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=5595835 (Thu Apr 20 12:06:21 2017)
> host-id=1
> score=3400
> vm_conf_refresh_time=5595857 (Thu Apr 20 12:06:44 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=GlobalMaintenance
> stopped=False
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
> [root@ractor ~]#
>
> Normally I set host into local maintenance now, but I receive this error:
>
> [root@ractor ~]# hosted-engine --set-maintenance --mode=local
> Unable to enter local maintenance mode: there are no available hosts
> capable of running the engine VM.
> [root@ractor ~]#
>
> [root@ractor ~]# ps -ef|grep [k]vm
> root   887 2  0 Feb14 ?00:00:00 [kvm-irqfd-clean]
> [root@ractor ~]#
>
> Is this a bug or changed functionality?
>

Yes in the past it was possible, now it's explicitly prevented since the
engine VM couldn't be migrated or restart somewhere else:
https://bugzilla.redhat.com/show_bug.cgi?id=1394570


> Thanks,
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about Huge Pages

2017-04-20 Thread Gianluca Cecchi
On Thu, Apr 20, 2017 at 10:35 AM, Michal Skrivanek 
wrote:

>
> On 19 Apr 2017, at 16:28, Gianluca Cecchi 
> wrote:
>
> On Wed, Apr 19, 2017 at 3:44 PM, Martin Polednik 
> wrote:
>
>>
>>>
>> If you are using recent CentOS (or I guess Fedora), there isn't any
>> extra setup required. Just create the custom property:
>>
>
> Both my engine and my hosts are CentOS 7.3 + updates
>
>
> that’s good
>
>
>
>>
>> On the host where engine is running:
>>
>> $ engine-config -s "UserDefinedVMProperties=hugepages=^.*$"
>> $ service ovirt-engine restart
>>
>> and you should see 'hugepages' when editing a VM under custom properties.
>>
>
> So no vdsm hook at all to install?
>
>
> today you still need the hook.
>
>
>
>
>> Set the number to (desired memory / 2048) and you're good to go. The
>> VM will run with it's memory backed by hugepages.
>
>
> As in sysctl.conf? So that if I want 4Gb of Huge Pages I have to set 2048?
>
>
> yes. there might be some
>
>
>
>
>> If you need
>> hugepages even inside the VM, do whatever you would do on a physical
>> host.
>>
>> mpolednik
>>
>>
> yes, the main subject is to have Huge Pages inside the guest, so that
> Oracle RDBMS at startup detect them and use them
>
>
> yes, so if you do that via sysctl.conf on real HW just do the same here,
> or modify kernel cmdline.
>
> Note that those are two separate things
> the hook is making QEMU process use hugepages memory in the host - that
> improves performance of any VM
> then how it looks in guest is no concern to oVirt, it’s guest-side
> hugepages. You can enable/set them regardless the previous step, which may
> be fine if you just want to expose the capability to some app  - e.g. in
> testing that the guest-side Oracle can work with hugepages in the guest.
> But you probably want both Oracle to see hugepages and also actually use
> them - then you need both reserve that on host for qemu process and then
> inside guest reserve that for oracle. I.e. you need to add a “buffer” on
> host side to accommodate the non-hugepages parts of the guest e.g. on 24GB
> host you can reserve 20GB hugepages for VMs to use, and then run a VM with
> 20GB memory, reserving 16GB hugepages inside the guest for oracle to use.
>
> Thanks,
> michal
>
>
> Gianluca
>
>
>
I'm making some tests right now.
Steps done:
- configure huge pages on hypervisor

[root@ractor ~]# cat /etc/sysctl.d/huge-pages.conf
# 20/04/2017 8Gb
vm.nr_hugepages = 4096
[root@ractor ~]#

rebooted host (I also updated in the mean time it to latest 4.1.1 packages
with vdsm-4.19.10.1-1.el7.centos.x86_64 and vdsm-hook-hugepages-4.19.
10.1-1.el7.centos.noarch)
I also set "transparent_hugepage=never" boot parameter because I know that
they are in conflict with Huge Pages

So the situation is:

[root@ractor ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-514.16.1.el7.x86_64 root=/dev/mapper/centos-root
ro rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
transparent_hugepage=never
[root@ractor ~]#

[root@ractor ~]# cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
HugePages_Total:4096
HugePages_Free: 4096
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
[root@ractor ~]#

I edited a pre-existing CentOS 6 VM setting for it 8Gb of ram and 2048
pages (4Gb) in custom property forhugepages.

When I power on I get this addition in qemu-kvm process definition as
expected:

-mem-path /dev/hugepages/libvirt/qemu

I noticed that now I have on host

[root@ractor vdsm]# cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
HugePages_Total:6144
HugePages_Free: 2048
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
[root@ractor vdsm]#

So apparently it did allocated 2048 new huge pages...
Does it mean that actually I have not to pre-allocate huge pages at all on
host and it eventually will increase them (but not able to remove then I
suppose) ?

Anyway the count doesn't seem correct... because it seems that a total of
4096 pages are in use/locked... (HugePages_Total - HugePages_Free
+ HugePages_Rsvd)
while they should be 2048.

[root@ractor vdsm]# ll /dev/hugepages/libvirt/qemu/
total 0
[root@ractor vdsm]# ll /hugetlbfs/libvirt/qemu/
total 0
[root@ractor vdsm]#

If I power off the VM

[root@ractor vdsm]# cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
HugePages_Total:4096
HugePages_Free: 4096
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
[root@ractor vdsm]#

Does this mean that in CentOS 7.3 Huge Pages could be reclaimed???

Nevertheless, when I configure huge pages in guest it seems to work as
expected

[root@dbtest ~]# cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
HugePages_Total:2048
HugePages_Free: 2048
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

Going into Oracle DB initialization, after configuring its 

Re: [ovirt-users] Missing feature in python SDK4

2017-04-20 Thread Fabrice Bacchella

> Le 20 avr. 2017 à 16:35, Juan Hernández  a écrit :
> 
> On 04/20/2017 12:26 PM, Fabrice Bacchella wrote:
>> I didn't find a way to find the writer that correspond to a given type. Is 
>> there a way to do that, or it's up to the end user to manually manage this 
>> mapping ?
>> 
> 
> Yes that is missing. We have it for reading, but not for writing. This
> patch should address that:
> 
>  Add generic writer
>  https://gerrit.ovirt.org/75699
> 
> Please open a bug so that we can decide what version should contain this
> fix.

https://bugzilla.redhat.com/show_bug.cgi?id=1444114


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Missing feature in python SDK4

2017-04-20 Thread Juan Hernández
On 04/20/2017 04:35 PM, Juan Hernández wrote:
> On 04/20/2017 12:26 PM, Fabrice Bacchella wrote:
>> I didn't find a way to find the writer that correspond to a given type. Is 
>> there a way to do that, or it's up to the end user to manually manage this 
>> mapping ?
>>
> 
> Yes that is missing. We have it for reading, but not for writing. This
> patch should address that:
> 
>   Add generic writer
>   https://gerrit.ovirt.org/75699
> 
> Please open a bug so that we can decide what version should contain this
> fix.
> 
> Note that you should try to avoid using directly the writer/reader
> classes, as they are an internal implementation detail and may change in
> the future. The contract for reading/writing is using the Reader and
> Writer classes. For example, to generate the XML for an object (once the
> patch is merged):
> 
>   from ovirtsdk4 import types
>   from ovirtsdk4 import writer
>   from ovirtsdk4 import writers
> 
>   vm = types.Vm(
> id="123",
> name="myvm",
> ...
>   )
> 
>   xml = writer.Writer.write(vm)
> 
>   print(xml)
> 

The above was not complete clear. The ovirtsdk4.writer.Writer and
ovirtsdk4.reader.Reader classes are part of the contract of the SDK, it
is safe to use them. The specific writer/reader classes, for example
VmReader or VmWriter, are not part of the contract, and they may change
in the future without notice, so try to avoid using them directly.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Missing feature in python SDK4

2017-04-20 Thread Juan Hernández
On 04/20/2017 12:26 PM, Fabrice Bacchella wrote:
> I didn't find a way to find the writer that correspond to a given type. Is 
> there a way to do that, or it's up to the end user to manually manage this 
> mapping ?
> 

Yes that is missing. We have it for reading, but not for writing. This
patch should address that:

  Add generic writer
  https://gerrit.ovirt.org/75699

Please open a bug so that we can decide what version should contain this
fix.

Note that you should try to avoid using directly the writer/reader
classes, as they are an internal implementation detail and may change in
the future. The contract for reading/writing is using the Reader and
Writer classes. For example, to generate the XML for an object (once the
patch is merged):

  from ovirtsdk4 import types
  from ovirtsdk4 import writer
  from ovirtsdk4 import writers

  vm = types.Vm(
id="123",
name="myvm",
...
  )

  xml = writer.Writer.write(vm)

  print(xml)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] upgrade to 4.1

2017-04-20 Thread Fabrice Bacchella
I tried to upgrade ovirt to version 4.1 from 4.0 and got:

  Found the following problems in PostgreSQL configuration for the 
Engine database:
   autovacuum_vacuum_scale_factor required to be at most 0.01
   autovacuum_analyze_scale_factor required to be at most 0.075
   autovacuum_max_workers required to be at least 6
   Postgresql client version is '9.4.8', whereas the version on XXX is 
'9.4.11'. Please use a Postgresql server of version '9.4.8'.
  Please set:
   autovacuum_vacuum_scale_factor = 0.01
   autovacuum_analyze_scale_factor = 0.075
   autovacuum_max_workers = 6
   server_version = 9.4.8
  in postgresql.conf on ''. Its location is usually 
/var/lib/pgsql/data , or somewhere under /etc/postgresql* .

I'm a little afraid about that. Does ovirt want pg to lies about it's version ? 
It's a shared instance so what about other tools that access it ? Is there some 
explanation about the meaning of those values ?

And it was not in the release notes, it's not funny to get this warning after 
starting the upgrade___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] The web portal gives: Bad Request: 400

2017-04-20 Thread Yaniv Kaul
On Thu, Apr 20, 2017 at 1:06 PM, Arman Khalatyan  wrote:

> After the recent upgrade from ovirt Version 4.1.1.6-1.el7.centos. to
> Version 4.1.1.8-1.el7.centos
>
> The web portal gives following error:
> Bad Request
>
> Your browser sent a request that this server could not understand.
>
> Additionally, a 400 Bad Request error was encountered while trying to use
> an ErrorDocument to handle the request.
>
>
> Are there any hints how to fix it?
>

It'd be great if you could share some logs. The httpd logs, server.log and
engine.log, all might be useful.
Y.


> BTW the rest API works as expected, engine-setup went without errors.
>
> Thanks,
>
> Arman.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Compiling oVirt for Debian.

2017-04-20 Thread Yedidyah Bar David
On Thu, Apr 20, 2017 at 3:45 PM, Leni Kadali Mutungi
 wrote:
> On 4/18/17, Yedidyah Bar David  wrote:
>> On Sun, Apr 16, 2017 at 6:54 AM, Leni Kadali Mutungi
>>  wrote:
> I think that all of them are maintained on gerrit.ovirt.org, and most
> have
> mirrors on github.com/ovirt.
>
>>> Found all the source code on gerrit.ovirt.org; not all of it is
>>> mirrored to github.com/ovirt
>>>
> If you haven't yet, you might want to check also:
>
> http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/
> Adding to otopi support for apt/dpkg is indeed interesting and useful,
> but
> imo isn't mandatory for a first milestone. Not having an apt packager
> will
> simply mean you can't install/update packages using otopi, but other
> things
> should work. Notably, you won't be able to use engine-setup for
> upgrades,
> at least not the way it's done with yum and versionlock.
>>>
>>> So does this mean I shouldn't bother with installing otopi, because
>>> according to the development guide for RPM-based systems, it seems
>>> only the ovirt-host-deploy, ovirt-setup-lib, and ovirt-js-dependencies
>>> are the packages required.
>>
>> ovirt-host-deploy requires otopi, and also engine-setup (from the engine
>> git repo) does. So unless you want to start manually imitating what these
>> do (which might not be a terrible idea, if you want to understand more
>> deeply how things work, but will take more time), you do need otopi.
>>
>> Also please note that the above developer guide is probably not complete
>> or up-to-date - please check also README.adoc from the engine sources.
>>
>>> The guide for Debian is blank and marked as
>>> TODO.
>>
>> Indeed, patches are welcome :-)
>>
>> I expect at least some packages to be missing there, didn't check
>> personally.
>>
>>> Another query I had was that should I make the config files
>>> myself as referenced by the README or can I expect that it will be
>>> done during make install?
>>
>> which ones? postgresql's? It's automatically done when you install
>> from RPMs, but not in dev-env mode. So you'll have to do that
>> manually for now.
>
> I was referring to the configuration files referenced in the README
> docs for otopi, ovirt-host-deploy, ovirt-setup-lib, and
> ovirt-js-dependencies.

otopi and ovirt-host-deploy do not need configuration files.
ovirt-setup-lib does not have any (and does not mention any).
No idea about ovirt-js-dependencies.

> I'm not sure that running the make install will
> put the required configuration files in the directories that the
> programs will expect to find them.

Not sure either.

I suggest to check the spec file in the source to see what rpm
installation does, and the gentoo stuff I mentioned earlier.

> If it turns out that that is the
> case, then I think I am all set.

And if you are not, please post specific errors/problems :-)

> I installed postgresql from the
> Debian repositories.

Obviously. When I said "from RPMs", I referred the oVirt and the
provided spec files, not to postgresql. IIRC I already used oVirt
with a postgresql db on a remote Debian machine without problems.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Compiling oVirt for Debian.

2017-04-20 Thread Leni Kadali Mutungi
On 4/18/17, Yedidyah Bar David  wrote:
> On Sun, Apr 16, 2017 at 6:54 AM, Leni Kadali Mutungi
>  wrote:
 I think that all of them are maintained on gerrit.ovirt.org, and most
 have
 mirrors on github.com/ovirt.

>> Found all the source code on gerrit.ovirt.org; not all of it is
>> mirrored to github.com/ovirt
>>
 If you haven't yet, you might want to check also:

 http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/
 Adding to otopi support for apt/dpkg is indeed interesting and useful,
 but
 imo isn't mandatory for a first milestone. Not having an apt packager
 will
 simply mean you can't install/update packages using otopi, but other
 things
 should work. Notably, you won't be able to use engine-setup for
 upgrades,
 at least not the way it's done with yum and versionlock.
>>
>> So does this mean I shouldn't bother with installing otopi, because
>> according to the development guide for RPM-based systems, it seems
>> only the ovirt-host-deploy, ovirt-setup-lib, and ovirt-js-dependencies
>> are the packages required.
>
> ovirt-host-deploy requires otopi, and also engine-setup (from the engine
> git repo) does. So unless you want to start manually imitating what these
> do (which might not be a terrible idea, if you want to understand more
> deeply how things work, but will take more time), you do need otopi.
>
> Also please note that the above developer guide is probably not complete
> or up-to-date - please check also README.adoc from the engine sources.
>
>> The guide for Debian is blank and marked as
>> TODO.
>
> Indeed, patches are welcome :-)
>
> I expect at least some packages to be missing there, didn't check
> personally.
>
>> Another query I had was that should I make the config files
>> myself as referenced by the README or can I expect that it will be
>> done during make install?
>
> which ones? postgresql's? It's automatically done when you install
> from RPMs, but not in dev-env mode. So you'll have to do that
> manually for now.

I was referring to the configuration files referenced in the README
docs for otopi, ovirt-host-deploy, ovirt-setup-lib, and
ovirt-js-dependencies. I'm not sure that running the make install will
put the required configuration files in the directories that the
programs will expect to find them. If it turns out that that is the
case, then I think I am all set. I installed postgresql from the
Debian repositories.

-- 
- Warm regards
Leni Kadali Mutungi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-20 Thread Martin Sivak
> I did find this parameter (only) in /etc/vdsm/vdsm.conf.rpmnew (??)

.rpmnew files are telling you that RPM could not update the configuration
file automatically during package upgrade, because there was a manual
change in it. You can use a tool like rpmconf or just copy it over
vdsm.conf if you want the defaults.

See for example:

https://ask.fedoraproject.org/en/question/25722/what-are-rpmnew-files/
https://www.redhat.com/archives/rhl-list/2003-December/msg04713.html

Best regards

--
Martin Sivak
SLA / oVirt


On Wed, Apr 19, 2017 at 11:14 AM, Nelson Lameiras <
nelson.lamei...@lyra-network.com> wrote:

> hello pavel,
>
> Thanks for you answer.
> I did find this parameter (only) in /etc/vdsm/vdsm.conf.rpmnew (??)
>
> Parameter is commented with value 2 so my guess is that it is not used...
> So this brings a few more questions :
>
> - Since parameter is commented, default value must be used... can we be
> sure that 2 is default value?
> I do found strange that migrations are limited to 2, I have the feeling
> that more than two are simultaneously being migrated (but I'm maybe wrong),
> how to be sure?
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *From: *"Pavel Gashev" 
> *To: *users@ovirt.org, "nelson lameiras"  >
> *Sent: *Tuesday, April 18, 2017 7:16:08 PM
> *Subject: *Re: [ovirt-users] massive simultaneous vms migrations ?
>
> VDSM has the following config option:
>
> # Maximum concurrent outgoing migrations
> # max_outgoing_migrations = 2
>
> On Tue, 2017-04-18 at 18:48 +0200, Nelson Lameiras wrote:
>
> hello,
>
> When putting a host on "maintenance mode", all vms start migrating to
> other hosts.
>
> We have some hosts that have 60 vms. So this will create a 60 vms
> migrating simultaneously.
> Some vms are under so much heavy loads that migration fails often (our
> guess is that massive simultaneous migrations does not help migration
> convergence) - even with "suspend workload if needed" migraton policy.
>
> - Does oVirt really launches 60 simultaneous migrations or is there a
> queuing system ?
> - If there is a queuing system, is there a way to configure a maximum
> number of simultaneous migrations ?
>
> I did see a "migration bandwidth limit", but this is quite what we are
> looking for.
>
> my setup:
> ovirt-engine +hosted engine 4.1.1
> hosts : centos 7.3 fully updated.
>
> for full context to understand this question : 2 times in the past, when
> trying to put a host in maintenance, host stopped responding during massive
> migrations and was fenced by engine. It's still unclear why host stopped
> responding, but we think that migrating 60+ vms simultaneously puts a heavy
> strain on storage ? So we would like to better control migration process in
> order to better understand what's happening. This scenario is "production
> only" since our labs do not contain nearly as much vm with such heavy
> loads. So rather than trying to reproduce, we are trying to avoid ;)
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70 <+33%205%2032%2009%2009%2070>
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1 not possible to set local maintenance on single host

2017-04-20 Thread Martin Sivak
Hi,

global maintenance is meant for completely manual changes where all
logic needs to be disabled. That includes maintenance migrations.

You should exit global maintenance and put just the single host to
local maintenance. Preferably using the webadmin UI (Management /
Maintenance). That will migrate all VMs away (including hosted engine
if it runs there) and disconnect all storage domains.

This might be tricky if all you have is just a single host.

Best regards

--
Martin Sivak
SLA / oVirt

On Thu, Apr 20, 2017 at 12:12 PM, Gianluca Cecchi
 wrote:
> Hello,
> I have a single host test environment on 4.1.0 with hosted engine VM.
>
> I'm going to update to 4.1.1.
>
> Normally the workflow was:
>
> - put env in global maintenance
> - update engine part with engine-setup and such
> - update other os related packages of hosted engine VM
> - shutdown hosted engine vm
> - exit global maintenance
>
> Verify engine VM starts and all is ok form web admin gui.
> These steps above I have already done and now my engine vm has latest 4.1.1
> setup.
>
> Now I want to proceed also with the only existing host part and I have
> already run:
>
> - shutdown all running VMs
> - put env in global maintenance
> - shutdown hosted engine vm
>
> Status is:
>
> [root@ractor ~]# hosted-engine --vm-status
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : ractor.mydomain
> Host ID: 1
> Engine status  : {"reason": "bad vm status", "health":
> "bad", "vm": "down", "detail": "down"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 017f5635
> local_conf_timestamp   : 5595857
> Host timestamp : 5595835
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=5595835 (Thu Apr 20 12:06:21 2017)
> host-id=1
> score=3400
> vm_conf_refresh_time=5595857 (Thu Apr 20 12:06:44 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=GlobalMaintenance
> stopped=False
>
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
>
> [root@ractor ~]#
>
> Normally I set host into local maintenance now, but I receive this error:
>
> [root@ractor ~]# hosted-engine --set-maintenance --mode=local
> Unable to enter local maintenance mode: there are no available hosts capable
> of running the engine VM.
> [root@ractor ~]#
>
> [root@ractor ~]# ps -ef|grep [k]vm
> root   887 2  0 Feb14 ?00:00:00 [kvm-irqfd-clean]
> [root@ractor ~]#
>
> Is this a bug or changed functionality?
>
> Thanks,
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Missing feature in python SDK4

2017-04-20 Thread Fabrice Bacchella
I didn't find a way to find the writer that correspond to a given type. Is 
there a way to do that, or it's up to the end user to manually manage this 
mapping ?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.1 not possible to set local maintenance on single host

2017-04-20 Thread Gianluca Cecchi
Hello,
I have a single host test environment on 4.1.0 with hosted engine VM.

I'm going to update to 4.1.1.

Normally the workflow was:

- put env in global maintenance
- update engine part with engine-setup and such
- update other os related packages of hosted engine VM
- shutdown hosted engine vm
- exit global maintenance

Verify engine VM starts and all is ok form web admin gui.
These steps above I have already done and now my engine vm has latest 4.1.1
setup.

Now I want to proceed also with the only existing host part and I have
already run:

- shutdown all running VMs
- put env in global maintenance
- shutdown hosted engine vm

Status is:

[root@ractor ~]# hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!



--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ractor.mydomain
Host ID: 1
Engine status  : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 017f5635
local_conf_timestamp   : 5595857
Host timestamp : 5595835
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=5595835 (Thu Apr 20 12:06:21 2017)
host-id=1
score=3400
vm_conf_refresh_time=5595857 (Thu Apr 20 12:06:44 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False


!! Cluster is in GLOBAL MAINTENANCE mode !!

[root@ractor ~]#

Normally I set host into local maintenance now, but I receive this error:

[root@ractor ~]# hosted-engine --set-maintenance --mode=local
Unable to enter local maintenance mode: there are no available hosts
capable of running the engine VM.
[root@ractor ~]#

[root@ractor ~]# ps -ef|grep [k]vm
root   887 2  0 Feb14 ?00:00:00 [kvm-irqfd-clean]
[root@ractor ~]#

Is this a bug or changed functionality?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] The web portal gives: Bad Request: 400

2017-04-20 Thread Arman Khalatyan
After the recent upgrade from ovirt Version 4.1.1.6-1.el7.centos. to
Version 4.1.1.8-1.el7.centos

The web portal gives following error:
Bad Request

Your browser sent a request that this server could not understand.

Additionally, a 400 Bad Request error was encountered while trying to use
an ErrorDocument to handle the request.


Are there any hints how to fix it?

BTW the rest API works as expected, engine-setup went without errors.

Thanks,

Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about Huge Pages

2017-04-20 Thread Michal Skrivanek

> On 19 Apr 2017, at 16:28, Gianluca Cecchi  wrote:
> 
> On Wed, Apr 19, 2017 at 3:44 PM, Martin Polednik  > wrote:
> 
> 
> If you are using recent CentOS (or I guess Fedora), there isn't any
> extra setup required. Just create the custom property:
> 
> Both my engine and my hosts are CentOS 7.3 + updates

that’s good

>  
> 
> On the host where engine is running:
> 
> $ engine-config -s "UserDefinedVMProperties=hugepages=^.*$"
> $ service ovirt-engine restart
> 
> and you should see 'hugepages' when editing a VM under custom properties.
> 
> So no vdsm hook at all to install?

today you still need the hook.

> 
>  
> Set the number to (desired memory / 2048) and you're good to go. The
> VM will run with it's memory backed by hugepages.
> 
> As in sysctl.conf? So that if I want 4Gb of Huge Pages I have to set 2048?

yes. there might be some 

> 
>  
> If you need
> hugepages even inside the VM, do whatever you would do on a physical
> host.
> 
> mpolednik
> 
> 
> yes, the main subject is to have Huge Pages inside the guest, so that Oracle 
> RDBMS at startup detect them and use them

yes, so if you do that via sysctl.conf on real HW just do the same here, or 
modify kernel cmdline.

Note that those are two separate things
the hook is making QEMU process use hugepages memory in the host - that 
improves performance of any VM
then how it looks in guest is no concern to oVirt, it’s guest-side hugepages. 
You can enable/set them regardless the previous step, which may be fine if you 
just want to expose the capability to some app  - e.g. in testing that the 
guest-side Oracle can work with hugepages in the guest.
But you probably want both Oracle to see hugepages and also actually use them - 
then you need both reserve that on host for qemu process and then inside guest 
reserve that for oracle. I.e. you need to add a “buffer” on host side to 
accommodate the non-hugepages parts of the guest e.g. on 24GB host you can 
reserve 20GB hugepages for VMs to use, and then run a VM with 20GB memory, 
reserving 16GB hugepages inside the guest for oracle to use.

Thanks,
michal

> 
> Gianluca 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

2017-04-20 Thread Lionel Caignec
ok thank you very much.

- Mail original -
De: "Michal Skrivanek" 
À: "Lionel Caignec" 
Cc: "blanchet" , "users" 
Envoyé: Jeudi 20 Avril 2017 09:58:23
Objet: Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

> On 20 Apr 2017, at 09:56, Lionel Caignec  wrote:
> 
> Cool thank you Michal and Nathanael.
> 
> I was affraid that ovirt restart all on is own.
> 
> Juste one question more, reboot is sufficient or do i need power off and 
> power on vm?

Sorry for using the confusing term - A guest-side reboot is not sufficient, it 
needs to be a power cycle from guest POV, so shut down and power on in oVirt

Thanks,
michal

> 
> - Mail original -
> De: "Michal Skrivanek" 
> À: "blanchet" , "Lionel Caignec" 
> Cc: "users" 
> Envoyé: Jeudi 20 Avril 2017 09:46:35
> Objet: Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0
> 
>> On 20 Apr 2017, at 09:38, Nathanaël Blanchet  wrote:
>> 
>> Hi Lionel :)
>> 
>> 
>> Le 20/04/2017 à 08:48, Lionel Caignec a écrit :
>>> Hi,
>>> 
>>> i've upgraded all my host and manager to ovirt 4, and now i want to upgrade 
>>> compatibility version of my cluster.
>>> 
>>> But when i change the value ovirt warn me whit message : needing reboot all 
>>> VM.
>>> 
>>> Does the manager reboot the vm on is own ? or can I can do it myself 
>>> manualy?
>> You can do it manually, but some vms properties will be changed ( vmconsole 
>> to disabled and virtio scsi to enabled)
> 
> You _have_ to do that manually, oVirt won’t restart them for you, it will 
> just keep nagging with the “pending change” triangle icon. This is so you can 
> do that whenever it is convenient for that particular VM
> A bit more is going to change for that VM - the emulated hardware is being 
> changed, as well as behavior of oVirt features leveraging them (mostly the 
> new features requiring some QEMU functionality, like hot plugging, higher 
> number of vCPUs and such)
> 
> Thanks,
> michal
> 
>>> 
>>> Thank you.
>>> 
>>> --
>>> Lionel Caignec
>>> 
>>> Centre Informatique National de l' Enseignement Supérieur
>>> 950 rue de Saint Priest
>>> 34097 MONTPELLIER Cedex 5
>>> Tel : (33) 04 67 14 14 14
>>> Fax : (33)04 67 52 37 63
>>> http://www.cines.fr
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>> 
>> -- 
>> Nathanaël Blanchet
>> 
>> Supervision réseau
>> Pôle Infrastrutures Informatiques
>> 227 avenue Professeur-Jean-Louis-Viala
>> 34193 MONTPELLIER CEDEX 5
>> Tél. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14
>> blanc...@abes.fr
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

2017-04-20 Thread Michal Skrivanek

> On 20 Apr 2017, at 09:56, Lionel Caignec  wrote:
> 
> Cool thank you Michal and Nathanael.
> 
> I was affraid that ovirt restart all on is own.
> 
> Juste one question more, reboot is sufficient or do i need power off and 
> power on vm?

Sorry for using the confusing term - A guest-side reboot is not sufficient, it 
needs to be a power cycle from guest POV, so shut down and power on in oVirt

Thanks,
michal

> 
> - Mail original -
> De: "Michal Skrivanek" 
> À: "blanchet" , "Lionel Caignec" 
> Cc: "users" 
> Envoyé: Jeudi 20 Avril 2017 09:46:35
> Objet: Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0
> 
>> On 20 Apr 2017, at 09:38, Nathanaël Blanchet  wrote:
>> 
>> Hi Lionel :)
>> 
>> 
>> Le 20/04/2017 à 08:48, Lionel Caignec a écrit :
>>> Hi,
>>> 
>>> i've upgraded all my host and manager to ovirt 4, and now i want to upgrade 
>>> compatibility version of my cluster.
>>> 
>>> But when i change the value ovirt warn me whit message : needing reboot all 
>>> VM.
>>> 
>>> Does the manager reboot the vm on is own ? or can I can do it myself 
>>> manualy?
>> You can do it manually, but some vms properties will be changed ( vmconsole 
>> to disabled and virtio scsi to enabled)
> 
> You _have_ to do that manually, oVirt won’t restart them for you, it will 
> just keep nagging with the “pending change” triangle icon. This is so you can 
> do that whenever it is convenient for that particular VM
> A bit more is going to change for that VM - the emulated hardware is being 
> changed, as well as behavior of oVirt features leveraging them (mostly the 
> new features requiring some QEMU functionality, like hot plugging, higher 
> number of vCPUs and such)
> 
> Thanks,
> michal
> 
>>> 
>>> Thank you.
>>> 
>>> --
>>> Lionel Caignec
>>> 
>>> Centre Informatique National de l' Enseignement Supérieur
>>> 950 rue de Saint Priest
>>> 34097 MONTPELLIER Cedex 5
>>> Tel : (33) 04 67 14 14 14
>>> Fax : (33)04 67 52 37 63
>>> http://www.cines.fr
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>> 
>> -- 
>> Nathanaël Blanchet
>> 
>> Supervision réseau
>> Pôle Infrastrutures Informatiques
>> 227 avenue Professeur-Jean-Louis-Viala
>> 34193 MONTPELLIER CEDEX 5
>> Tél. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14
>> blanc...@abes.fr
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

2017-04-20 Thread Lionel Caignec
Cool thank you Michal and Nathanael.

I was affraid that ovirt restart all on is own.

Juste one question more, reboot is sufficient or do i need power off and power 
on vm?

- Mail original -
De: "Michal Skrivanek" 
À: "blanchet" , "Lionel Caignec" 
Cc: "users" 
Envoyé: Jeudi 20 Avril 2017 09:46:35
Objet: Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

> On 20 Apr 2017, at 09:38, Nathanaël Blanchet  wrote:
> 
> Hi Lionel :)
> 
> 
> Le 20/04/2017 à 08:48, Lionel Caignec a écrit :
>> Hi,
>> 
>> i've upgraded all my host and manager to ovirt 4, and now i want to upgrade 
>> compatibility version of my cluster.
>> 
>> But when i change the value ovirt warn me whit message : needing reboot all 
>> VM.
>> 
>> Does the manager reboot the vm on is own ? or can I can do it myself manualy?
> You can do it manually, but some vms properties will be changed ( vmconsole 
> to disabled and virtio scsi to enabled)

You _have_ to do that manually, oVirt won’t restart them for you, it will just 
keep nagging with the “pending change” triangle icon. This is so you can do 
that whenever it is convenient for that particular VM
A bit more is going to change for that VM - the emulated hardware is being 
changed, as well as behavior of oVirt features leveraging them (mostly the new 
features requiring some QEMU functionality, like hot plugging, higher number of 
vCPUs and such)

Thanks,
michal

>> 
>> Thank you.
>> 
>> --
>> Lionel Caignec
>> 
>> Centre Informatique National de l' Enseignement Supérieur
>> 950 rue de Saint Priest
>> 34097 MONTPELLIER Cedex 5
>> Tel : (33) 04 67 14 14 14
>> Fax : (33)04 67 52 37 63
>> http://www.cines.fr
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

2017-04-20 Thread Michal Skrivanek

> On 20 Apr 2017, at 09:38, Nathanaël Blanchet  wrote:
> 
> Hi Lionel :)
> 
> 
> Le 20/04/2017 à 08:48, Lionel Caignec a écrit :
>> Hi,
>> 
>> i've upgraded all my host and manager to ovirt 4, and now i want to upgrade 
>> compatibility version of my cluster.
>> 
>> But when i change the value ovirt warn me whit message : needing reboot all 
>> VM.
>> 
>> Does the manager reboot the vm on is own ? or can I can do it myself manualy?
> You can do it manually, but some vms properties will be changed ( vmconsole 
> to disabled and virtio scsi to enabled)

You _have_ to do that manually, oVirt won’t restart them for you, it will just 
keep nagging with the “pending change” triangle icon. This is so you can do 
that whenever it is convenient for that particular VM
A bit more is going to change for that VM - the emulated hardware is being 
changed, as well as behavior of oVirt features leveraging them (mostly the new 
features requiring some QEMU functionality, like hot plugging, higher number of 
vCPUs and such)

Thanks,
michal

>> 
>> Thank you.
>> 
>> --
>> Lionel Caignec
>> 
>> Centre Informatique National de l' Enseignement Supérieur
>> 950 rue de Saint Priest
>> 34097 MONTPELLIER Cedex 5
>> Tel : (33) 04 67 14 14 14
>> Fax : (33)04 67 52 37 63
>> http://www.cines.fr
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Change cluster compatibilty version 3.6-->4.0

2017-04-20 Thread Nathanaël Blanchet

Hi Lionel :)


Le 20/04/2017 à 08:48, Lionel Caignec a écrit :

Hi,

i've upgraded all my host and manager to ovirt 4, and now i want to upgrade 
compatibility version of my cluster.

But when i change the value ovirt warn me whit message : needing reboot all VM.

Does the manager reboot the vm on is own ? or can I can do it myself manualy?
You can do it manually, but some vms properties will be changed ( 
vmconsole to disabled and virtio scsi to enabled)


Thank you.

--
Lionel Caignec

Centre Informatique National de l' Enseignement Supérieur
950 rue de Saint Priest
34097 MONTPELLIER Cedex 5
Tel : (33) 04 67 14 14 14
Fax : (33)04 67 52 37 63
http://www.cines.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] import domain and import template

2017-04-20 Thread qinglong.d...@horebdata.cn
Hi,
I have created an ovirt 4.1.1.6 environment a few days ago. And I have 
imported a data domain which was used in early version(4.0.0.5) sucessfully. 
But I got a same error when importing all templates of the data domain:
Anyone can help? Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Announcing VM Portal 0.1.4

2017-04-20 Thread Michal Skrivanek

> On 20 Apr 2017, at 09:22, Nathanaël Blanchet  wrote:
> 
> Works well, good job, but I can't see which kind of new feature VM Portal 
> brings compared to Basic User Portal.
> 
> 

Depends from which POV you look at it - there's the “Goals” section on 
https://github.com/oVirt/ovirt-web-ui
But it’s a great question which you can help with - what new feature would you 
like to see? Please go ahead and file suggestions and bugs on the project page

Thanks,
michal

> 
> Le 20/04/2017 à 08:50, Marek Libra a écrit :
>> Hello All,
>> 
>> Let me announce availability of the VM Portal v0.1.4 for preliminary testing.
>> We are looking forward to your feedback which we will try to incorporate 
>> into oncoming stable 1.0.0 version.
>> 
>> The VM Portal aims to be a drop-in replacement of the existing Basic User 
>> Portal.
>> Revised list of Extended User Portal features will be implemented to ideally 
>> replace it as well.
>> 
>> The VM Portal is installed by default since oVirt 4.1.
>> 
>> The simplest way to try latest version is via Docker by [1].
>> Once oVirt credentials are entered and initialization is finished, you can 
>> access it on [2].
>> 
>> If you prefer to stay as closest to the production setup as possible, the 
>> latest rpms are available on project's yum repo [3].
>> Then you can access the portal from [4].
>> 
>> Prerequisites: The VM Portal requires ovirt-engine 4.0+, so far mostly 
>> tested on 4.1.
>> 
>> Please note, the docker image is so far meant to just simplify user testing 
>> and is not ready for production setup.
>> Unless decided otherwise in the future, stable releases are still planed to 
>> be deployed via rpms.
>> 
>> For issue reporting or enhancement ideas, please use project's github issue 
>> tracker [5].
>> 
>> Thank you for your feedback,
>> Marek
>> 
>> 
>> [1] docker run --rm -it -e 
>> ENGINE_URL=https://[OVIRT.ENGINE.FQDN]/ovirt-engine/ -p 3000:3000 
>> mareklibra/ovirt-web-ui:latest
>> [2] http://localhost:3000 
>> [3] https://people.redhat.com/mlibra/repos/ovirt-web-ui 
>> 
>> [4] https://[OVIRT.ENGINE.FQDN]/ovirt-engine/web-ui
>> [5] https://github.com/oVirt/ovirt-web-ui/issues 
>> 
>> 
>> 
>> -- 
>> MAREK LIBRA
>> SENIOR SOFTWARE ENGINEER
>> Red Hat Czech
>> 
>>  
>> 
>> ​
>> 
>> 
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users 
>> 
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr  
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Announcing VM Portal 0.1.4

2017-04-20 Thread Nathanaël Blanchet
Works well, good job, but I can't see which kind of new feature VM 
Portal brings compared to Basic User Portal.



Le 20/04/2017 à 08:50, Marek Libra a écrit :

Hello All,

Let me announce availability of the VM Portal v0.1.4 for preliminary 
testing.
We are looking forward to your feedback which we will try to 
incorporate into oncoming stable 1.0.0 version.


The VM Portal aims to be a drop-in replacement of the existing Basic 
User Portal.
Revised list of Extended User Portal features will be implemented to 
ideally replace it as well.


The VM Portal is installed by default since oVirt 4.1.

*The simplest way to try latest version is via Docker by [1].*
Once oVirt credentials are entered and initialization is finished, you 
can access it on [2].


If you prefer to stay as closest to the production setup as possible, 
the latest rpms are available on project's yum repo [3].

Then you can access the portal from [4].

Prerequisites: The VM Portal requires ovirt-engine 4.0+, so far mostly 
tested on 4.1.


Please note, the docker image is so far meant to just simplify user 
testing and is not ready for production setup.
Unless decided otherwise in the future, stable releases are still 
planed to be deployed via rpms.


For issue reporting or enhancement ideas, please use project's github 
issue tracker [5].


Thank you for your feedback,
Marek


[1] docker run --rm -it -e 
ENGINE_URL=https://[OVIRT.ENGINE.FQDN]/ovirt-engine/ -p 3000:3000 
mareklibra/ovirt-web-ui:latest

[2] http://localhost:3000
[3] https://people.redhat.com/mlibra/repos/ovirt-web-ui
[4] https://[OVIRT.ENGINE.FQDN]/ovirt-engine/web-ui
[5] https://github.com/oVirt/ovirt-web-ui/issues


--

Marek Libra

senior software engineer

Red Hat Czech





​




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Change cluster compatibilty version 3.6-->4.0

2017-04-20 Thread Lionel Caignec
Hi,

i've upgraded all my host and manager to ovirt 4, and now i want to upgrade 
compatibility version of my cluster.

But when i change the value ovirt warn me whit message : needing reboot all VM.

Does the manager reboot the vm on is own ? or can I can do it myself manualy?

Thank you.

--
Lionel Caignec 

Centre Informatique National de l' Enseignement Supérieur 
950 rue de Saint Priest 
34097 MONTPELLIER Cedex 5 
Tel : (33) 04 67 14 14 14
Fax : (33)04 67 52 37 63 
http://www.cines.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Announcing VM Portal 0.1.4

2017-04-20 Thread Marek Libra
Hello All,

Let me announce availability of the VM Portal v0.1.4 for preliminary
testing.
We are looking forward to your feedback which we will try to incorporate
into oncoming stable 1.0.0 version.

The VM Portal aims to be a drop-in replacement of the existing Basic User
Portal.
Revised list of Extended User Portal features will be implemented to
ideally replace it as well.

The VM Portal is installed by default since oVirt 4.1.

*The simplest way to try latest version is via Docker by [1].*
Once oVirt credentials are entered and initialization is finished, you can
access it on [2].

If you prefer to stay as closest to the production setup as possible, the
latest rpms are available on project's yum repo [3].
Then you can access the portal from [4].

Prerequisites: The VM Portal requires ovirt-engine 4.0+, so far mostly
tested on 4.1.

Please note, the docker image is so far meant to just simplify user testing
and is not ready for production setup.
Unless decided otherwise in the future, stable releases are still planed to
be deployed via rpms.

For issue reporting or enhancement ideas, please use project's github issue
tracker [5].

Thank you for your feedback,
Marek


[1] docker run --rm -it -e ENGINE_URL=https://[OVIRT.ENGINE.FQDN]/ovirt-engine/
-p 3000:3000 mareklibra/ovirt-web-ui:latest
[2] http://localhost:3000
[3] https://people.redhat.com/mlibra/repos/ovirt-web-ui
[4] https://[OVIRT.ENGINE.FQDN]/ovirt-engine/web-ui
[5] https://github.com/oVirt/ovirt-web-ui/issues


-- 

Marek Libra

senior software engineer

Red Hat Czech




​
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-20 Thread knarra

On 04/20/2017 03:15 AM, Jamie Lawrence wrote:

I trialed installing the hosted engine, following the instructions at  
http://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
  . This is using Gluster as the backend storage subsystem.

Answer file at the end.

Per the docs,

"When the hosted-engine deployment script completes successfully, the oVirt 
Engine is configured and running on your host. The Engine has already configured the 
data center, cluster, host, the Engine virtual machine, and a shared storage domain 
dedicated to the Engine virtual machine.”

In my case, this is false. The installation claims success, but  the hosted 
engine VM stays stopped, unless I start it manually.
During the install process there is a step where HE vm is stopped and 
started. Can you check if this has happened correctly ?

If I start it manually, the default DC is down, the default cluster has the 
installation host in the cluster,  there is no storage, and the VM doesn’t show 
up in the GUI. In this install run, I have not yet started the engine manually.
you wont be seeing HE vm until HE storage is imported into the UI. HE 
storage will be automatically imported into the UI (which will import HE 
vm too )once a master domain is present .


I assume this is related to the errors in ovirt-hosted-engine-setup.log, below. 
(The timestamps are confusing; it looks like the Python errors are logged some 
time after they’re captured or something.) The HA broker and agent logs just 
show them looping in the sequence below.

Is there a decent way to pick this up and continue? If not, how do I make this 
work?

Can you please check the following things.

1) is glusterd running on all the nodes ? 'systemctl status glusterd'
2) Are you able to connect to your storage server which is ovirt_engine 
in your case.

3) Can you check if all the brick process in the volume is up ?

Thanks
kasturi.



Thanks,

-j

- - - - ovirt-hosted-engine-setup.log snippet: - - - -

2017-04-19 12:29:55 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.system.vdsmenv.Plugin._late_setup
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
systemd.status:90 check service vdsmd status
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:813 execute: ('/bin/systemctl', 'status', 'vdsmd.service'), 
executable='None', cwd='None', env=None
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'status', 
'vdsmd.service'), rc=0
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:921 execute-output: ('/bin/systemctl', 'status', 
'vdsmd.service') stdout:
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
Active: active (running) since Wed 2017-04-19 12:26:59 PDT; 2min 55s ago
   Process: 67370 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
   Process: 69995 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
  Main PID: 70062 (vdsm)
CGroup: /system.slice/vdsmd.service
└─70062 /usr/bin/python2 /usr/share/vdsm/vdsm

Apr 19 12:29:00 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm 
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to 
broker, the number of errors has exceeded the limit (1)
Apr 19 12:29:00 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm root ERROR failed 
to retrieve Hosted Engine HA info
  Traceback (most 
recent call last):
File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
  stats = 
instance.get_all_stats()
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 102, in get_all_stats
  with 
broker.connection(self._retries, self._wait):
File 
"/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
  return 
self.gen.next()
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 99, in connection
  
self.connect(retries, wait)
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 78, in connect
  raise 
BrokerConnectionError(error_msg)