[ovirt-users] Install oVirt node without Internet repository

2018-04-02 Thread G, Maghesh Kumar (Nokia - IN/Bangalore)
Hi


Description of problem:

Cannot add a host - the install fails, and I need to install oVirt nodes without an Internet-facing repository



Version: oVirt 4.2

Host is installed with RHEL 7.4



Actual results:

Host KVM02 installation failed. Command returned failure code 1 during SSH 
session 'root@192.175.2.231'

I found that the oVirt nodes in my test environment cannot connect to the 
Internet.
Basically, I need an offline repository that the oVirt nodes can reach 
without Internet access.

Please guide us on how to proceed.
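For reference, a local mirror that offline nodes can consume is usually sketched roughly like this (the repo ID, paths, and the mirror IP below are illustrative examples, not oVirt defaults):

```shell
# On a machine WITH Internet access: mirror the repos the nodes need.
# List the enabled repo IDs first with `yum repolist`; "ovirt-4.2" here
# is an example ID.
yum install -y yum-utils createrepo
reposync -g -l -p /var/www/html/repos/ -r ovirt-4.2
createrepo /var/www/html/repos/ovirt-4.2/

# On each offline oVirt node: point yum at the mirror (192.0.2.10 is a
# placeholder for the mirror host).
cat > /etc/yum.repos.d/ovirt-4.2-local.repo <<'EOF'
[ovirt-4.2-local]
name=oVirt 4.2 (local mirror)
baseurl=http://192.0.2.10/repos/ovirt-4.2/
enabled=1
gpgcheck=0
EOF
```

With the node's repo file pointing at the mirror, the engine-driven host deploy can then install vdsm and its dependencies without Internet access.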

Regards,
Maghesh

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Engine reports

2018-04-02 Thread Anantha Raghava

Hi,

I see that oVirt DWH is installed, version 4.2.x. Now, how do I generate 
the reports? In version 3.5, we had the Jasper Reports module, with which we 
could generate utilization reports. Can we do something similar here?
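For reference: the Jasper-based Reports module was dropped after the 3.6 series; in 4.x the usual substitute is to query the DWH history database directly (or point an external BI tool at it). A rough sketch, assuming the default database name ovirt_engine_history; the view and column names are assumptions, so enumerate the real ones first:

```shell
# On the engine host, as the postgres user. List the versioned views that
# your DWH release actually ships:
su - postgres -c "psql ovirt_engine_history -c '\dv v4_*'"

# Example utilization query (view/column names are assumptions -- adjust
# them to match the \dv output on your version):
su - postgres -c "psql ovirt_engine_history -c \"
  SELECT history_datetime::date      AS day,
         AVG(cpu_usage_percent)      AS avg_cpu,
         AVG(memory_usage_percent)   AS avg_mem
  FROM   v4_2_statistics_hosts_resources_usage_samples
  GROUP  BY 1 ORDER BY 1;\""
```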


--

Thanks & Regards,


Anantha Raghava


Do not print this e-mail unless required. Save Paper & trees.



[ovirt-users] Re: Re: Re: Re: ovirt engine HA

2018-04-02 Thread dhy336
Thanks,

----- Original Message -----
From: Vincent Royer
To: dhy...@sina.com
Cc: Users
Subject: Re: Re: Re: [ovirt-users] ovirt engine HA
Date: 2018-04-03 13:59

Sounds like you should start here 
https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/




Re: [ovirt-users] ovirt engine HA

2018-04-02 Thread Vincent Royer
Sounds like you should start here

https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/


On Mon, Apr 2, 2018, 10:42 PM, dhy...@sina.com wrote:

> Thank you. How do I deploy the self-hosted engine? Could you give me some
> documentation for the self-hosted engine?


[ovirt-users] Re: Re: Re: ovirt engine HA

2018-04-02 Thread dhy336
Thank you. How do I deploy the self-hosted engine? Could you give me some 
documentation for the self-hosted engine?
----- Original Message -----
From: Vincent Royer
To: dhy...@sina.com
Cc: users
Subject: Re: Re: [ovirt-users] ovirt engine HA
Date: 2018-04-03 13:02

Same thing; the engine in this case is "self-hosted", as in, it runs in a VM 
hosted on the cluster that it is managing.  I am a beginner here, but from my 
understanding, each node is always checking on the health of the engine VM.  If 
the engine is missing (i.e., the host running it has gone down), then another 
available, healthy host will spawn the engine and you will regain access. 
In my experience this has worked very reliably.  I have 2 hosts, both of which 
are able to run the engine VM.  If I take one host down, I am not able to load 
the engine GUI.  But if I wait a few minutes, I regain access, and see 
that the engine is now running on the remaining healthy host.

Vincent Royer
778-825-1057

SUSTAINABLE MOBILE ENERGY SOLUTIONS






Re: [ovirt-users] ovirt engine HA

2018-04-02 Thread Vincent Royer
Same thing, the engine in this case is "self-hosted", as in, it runs in a
VM hosted on the cluster that it is managing.  I am a beginner here, but
from my understanding, each node is always checking on the health of the
engine VM.  If the engine is missing (i.e., the host running it has gone
down), then another available, healthy host will spawn up the engine and
you will regain access.

In my experience this has worked very reliably.  I have 2 hosts, both are
"able" to run the engine VM.  If I take one host down, I am not able to
load the engine GUI.  But if I wait a few minutes, then I regain access,
and see that the engine is now running on the remaining healthy host.

*Vincent Royer*
*778-825-1057*



*SUSTAINABLE MOBILE ENERGY SOLUTIONS*




On Mon, Apr 2, 2018 at 6:07 PM,  wrote:

> What is the difference between a self-hosted engine and a hosted engine? I
> found a project, ovirt-hosted-engine-ha:
> https://github.com/oVirt/ovirt-hosted-engine-ha
> ----- Original Message -----
> From: Vincent Royer
> To: dhy...@sina.com
> Cc: users
> Subject: Re: [ovirt-users] ovirt engine HA
> Date: 2018-04-03 08:57


[ovirt-users] Re: Re: ovirt engine HA

2018-04-02 Thread dhy336
What is the difference between a self-hosted engine and a hosted engine? I 
found a project, ovirt-hosted-engine-ha:
https://github.com/oVirt/ovirt-hosted-engine-ha
----- Original Message -----
From: Vincent Royer
To: dhy...@sina.com
Cc: users
Subject: Re: [ovirt-users] ovirt engine HA
Date: 2018-04-03 08:57

If your node running self-hosted engine crashes, the hosted engine will be 
started up on another node. It just takes a few minutes for this all to happen, 
but it works reliably in my experience. 





Re: [ovirt-users] ovirt engine HA

2018-04-02 Thread Vincent Royer
If your node running self-hosted engine crashes, the hosted engine will be
started up on another node. It just takes a few minutes for this all to
happen, but it works reliably in my experience.



On Mon, Apr 2, 2018 at 5:42 PM,  wrote:

> How do I set up oVirt engine HA? I have a three-node cluster; one host is
> deployed with both the engine and a node, and the others are nodes only. If
> the host that runs the engine crashes, how do I ensure my engine stays up?


[ovirt-users] ovirt engine HA

2018-04-02 Thread dhy336
How do I set up oVirt engine HA? I have a three-node cluster; one host is 
deployed with both the engine and a node, and the others are nodes only. If 
the host that runs the engine crashes, how do I ensure my engine stays up?


Re: [ovirt-users] ILO2 Fencing

2018-04-02 Thread TomK

On 4/2/2018 6:55 AM, Alex K wrote:
The first issue, with fencing, is solved per an earlier reply.

The second issue has also been solved earlier, the way you outlined, but I'm 
wondering why oVirt can't use br0 as the ovirtmgmt interface instead. 
What specific settings on br0 does it NOT like?  Is there any provision 
or flexibility to allow custom-defined bridges with oVirt?


I stumbled upon this page:

https://www.ovirt.org/documentation/how-to/networking/bonding-vlan-bridge/

and tried it but oVirt would have none of it.

After I let oVirt set up its own bridges, I can see the settings it used 
(below), but I am wondering what *specific* settings in the network scripts 
it does not like.


I would like to know what we can and can't touch in the config files if we 
need to tweak the setup in higher-level environments.


Cheers,
Tom


 1098 -rw-rw-r--. 1 root root   289 Mar 29 03:30 ifcfg-ovirtmgmt
   127014 -rw-rw-r--. 1 root root   145 Mar 29 03:30 ifcfg-eth0
   127022 -rw-rw-r--. 1 root root   145 Mar 29 03:30 ifcfg-eth1
   127016 -rw-rw-r--. 1 root root   145 Mar 29 03:30 ifcfg-eth2
   127029 -rw-rw-r--. 1 root root   145 Mar 29 03:30 ifcfg-eth3
 3039 -rw-rw-r--. 1 root root   169 Mar 29 03:30 route-ovirtmgmt
 3200 -rw-rw-r--. 1 root root   166 Mar 29 03:30 rule-ovirtmgmt
   127032 -rw-rw-r--. 1 root root   199 Mar 31 00:35 ifcfg-bond0
   222656 drwxr-xr-x. 2 root root  4096 Mar 31 00:39 .
[root@mdskvm-p01 network-scripts]# cat ifcfg-bond0
# Generated by VDSM version 4.20.23-1.el7.centos
DEVICE=bond0
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=ovirtmgmt
MACADDR=78:e7:d1:8f:4d:26
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
[root@mdskvm-p01 network-scripts]#
[root@mdskvm-p01 network-scripts]#
[root@mdskvm-p01 network-scripts]#
[root@mdskvm-p01 network-scripts]# cat ifcfg-ovirtmgmt
# Generated by VDSM version 4.20.23-1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=192.168.0.60
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
DNS1=192.168.0.224
DNS2=192.168.0.44
[root@mdskvm-p01 network-scripts]# cat route-ovirtmgmt
# Generated by VDSM version 4.20.23-1.el7.centos
0.0.0.0/0 via 192.168.0.1 dev ovirtmgmt table 3232235580
192.168.0.0/24 via 192.168.0.60 dev ovirtmgmt table 3232235580
[root@mdskvm-p01 network-scripts]# cat rule-ovirtmgmt
# Generated by VDSM version 4.20.23-1.el7.centos
from 192.168.0.0/24 prio 32000 table 3232235580
from all to 192.168.0.0/24 dev ovirtmgmt prio 32000 table 3232235580
[root@mdskvm-p01 network-scripts]#




Hi,

you need a second host for power management to work.
Since you verified it from the command line, adding a second host should 
resolve this issue.


In regards to the interface, you need to remove the bridge interface as 
instructed (make it a simple interface), then leave oVirt to configure the 
ovirtmgmt bridge itself.



Alex


Re: [ovirt-users] Issues with ZFS volume creation

2018-04-02 Thread Darrell Budic
Try it with -f (force); if the disks have any kind of partition table on them, 
ZFS will not allow you to overwrite them by default.

If it's still complaining about the disks being in use, it's probably 
multipathd grabbing them. multipath -l or multipath -ll will show them to you. 
You may be able to get the pool creation done by doing 'multipath -f' to flush 
the maps and creating the pool before multipathd grabs the disks again, or you 
may want to read up on multipathd and edit your configs to prevent it from 
grabbing the disks you're trying to use (or configure it for multipath access to 
said disks, if you have the hardware for it).
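A condensed sketch of that sequence; the WWID is taken from the lsblk listing quoted below, and the device names are examples:

```shell
# See which maps multipathd has claimed:
multipath -ll

# Flush one map (or `multipath -F` to flush all unused maps):
multipath -f 35000cca07245c0ec

# Then create the pool with force, before multipathd re-grabs the disks:
zpool create -f -m none -o ashift=12 zvol raidz2 \
    sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm

# To keep multipathd off these disks permanently, blacklist them in
# /etc/multipath.conf and reload:
#   blacklist {
#       wwid "35000cca07245c0ec"
#   }
# systemctl reload multipathd
```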


> From: Tal Bar-Or 
> Subject: [ovirt-users] Issues with ZFS volume creation
> Date: March 25, 2018 at 9:54:35 AM CDT
> To: users
> 
> 
> Hello All,
> I know this question might be out of oVirt's scope, but I don't have 
> anywhere else to ask about this issue (the ZFS users mailing list doesn't 
> work), so I am trying my luck here anyway.
> The issue goes as follows:
> 
> I installed ZFS on top of CentOS 7.4 with oVirt 4.2, on a physical Dell R720 
> with 15 SAS 10k 1.2TB disks attached to a PERC H310 adapter. The disks are 
> configured as non-RAID. All went OK, but when I try to create a new ZFS 
> pool using the following command:
>  
> zpool create -m none -o ashift=12 zvol raidz2 sda sdb sdc sdd sde sdf sdg sdh 
> sdi sdj sdk sdl sdm
> I get the following error below:
> /dev/sda is in use and contains a unknown filesystem.
> /dev/sdb is in use and contains a unknown filesystem.
> /dev/sdc is in use and contains a unknown filesystem.
> /dev/sdd is in use and contains a unknown filesystem.
> /dev/sde is in use and contains a unknown filesystem.
> /dev/sdf is in use and contains a unknown filesystem.
> /dev/sdg is in use and contains a unknown filesystem.
> /dev/sdh is in use and contains a unknown filesystem.
> /dev/sdi is in use and contains a unknown filesystem.
> /dev/sdj is in use and contains a unknown filesystem.
> /dev/sdk is in use and contains a unknown filesystem.
> /dev/sdl is in use and contains a unknown filesystem.
> /dev/sdm is in use and contains a unknown filesystem.
> 
> When typing the lsblk command I get the output below. All seems OK - any 
> idea what could be wrong?
> Please advise.
> Thanks
> 
> # lsblk
> NAME                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
> sda                   8:0    0  1.1T  0 disk
> └─35000cca07245c0ec 253:2    0  1.1T  0 mpath
> sdb                   8:16   0  1.1T  0 disk
> └─35000cca072463898 253:10   0  1.1T  0 mpath
> sdc                   8:32   0  1.1T  0 disk
> └─35000cca0724540e8 253:8    0  1.1T  0 mpath
> sdd                   8:48   0  1.1T  0 disk
> └─35000cca072451b68 253:7    0  1.1T  0 mpath
> sde                   8:64   0  1.1T  0 disk
> └─35000cca07245f578 253:3    0  1.1T  0 mpath
> sdf                   8:80   0  1.1T  0 disk
> └─35000cca07246c568 253:11   0  1.1T  0 mpath
> sdg                   8:96   0  1.1T  0 disk
> └─35000cca0724620c8 253:12   0  1.1T  0 mpath
> sdh                   8:112  0  1.1T  0 disk
> └─35000cca07245d2b8 253:13   0  1.1T  0 mpath
> sdi                   8:128  0  1.1T  0 disk
> └─35000cca07245f0e8 253:4    0  1.1T  0 mpath
> sdj                   8:144  0  1.1T  0 disk
> └─35000cca072418958 253:5    0  1.1T  0 mpath
> sdk                   8:160  0  1.1T  0 disk
> └─35000cca072429700 253:1    0  1.1T  0 mpath
> sdl                   8:176  0  1.1T  0 disk
> └─35000cca07245d848 253:9    0  1.1T  0 mpath
> sdm                   8:192  0  1.1T  0 disk
> └─35000cca0724625a8 253:0    0  1.1T  0 mpath
> sdn                   8:208  0  1.1T  0 disk
> └─35000cca07245f5ac 253:6    0  1.1T  0 mpath
> 
> 
> -- 
> Tal Bar-or


[ovirt-users] hosted-engine Debug Help

2018-04-02 Thread RabidCicada
Heyo everyone.  I'm trying to debug `hosted-engine --deploy`.  It is failing
at `Copy configuration archive to storage` in `create_target_vm.yml`.
My general and most important query here is how to get good
debug output from ansible through hosted-engine.  I'm running hosted-engine
through an ssh session.

I can't figure out how to get good debug output from ansible within that
workflow.  I see it's running through otopi. I tried setting typical
`debugger: on_failed` hooks, and tried many incantations on the command line
and in config files, to get ansible to help me out.  The `debugger:` directive
and other debugger-related ansible config settings wouldn't result in any
debugger popping up.  I also can't seem to pass normal - flags to
hosted-engine and have it forward them to ansible.  Ultimately I tried to use
a `pause` directive, and it complained that it was in a non-interactive
shell.  I figured it might be the result of my ssh session, so I enabled tty
allocation with -t -t.  It did not resolve the issue.

I eventually wrote my own (well, stole) a callback_plugin that checks an
environment variable and enables `display.verbosity = int(v)`, since I
can't seem to pass the typical - flags to ansible through `hosted-engine
--deploy`.  It gives me the best info that I have so far.  But it won't give
me enough to debug issues around Gathering Facts, or what looks like a
sudo/permission problem in `Copy configuration archive to storage` in
`create_target_vm.yml`.  I took the exact command that they use and ran it
manually, and it works (but I can't get debug output to show me the exact
sudo command being executed), hence my interest in passing - flags or the
equivalent to ansible through `hosted-engine`.  I intentionally disabled the
VM_directory cleanup so that I could execute the same stuff.

So, after all that: what is a good way to get deep debug info from the
hosted-engine ansible stuff?

Or does anyone have intuition for the possible sudo problem?
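A few standard Ansible environment knobs may help here; whether the otopi wrapper inherits them for the bundled playbooks is an assumption to verify, and none of these are hosted-engine-specific switches:

```shell
export ANSIBLE_VERBOSITY=4          # equivalent of -vvvv, if inherited
export ANSIBLE_KEEP_REMOTE_FILES=1  # keep the generated module payloads on the target
export ANSIBLE_LOG_PATH=/tmp/he-ansible.log
hosted-engine --deploy

# hosted-engine --deploy also writes its own per-playbook ansible logs under
# /var/log/ovirt-hosted-engine-setup/ -- worth checking there for the
# failing task first.
```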
~Kyle


Re: [ovirt-users] Resilient Storage for Ovirt

2018-04-02 Thread Yaniv Kaul
On Sat, Mar 24, 2018 at 3:55 AM, Vincent Royer 
wrote:

> Hi,
>
> I have a 2 node cluster with Hosted Engine attached to a storage Domain
> (NFS share) served by WS2016.  I run about a dozen VMs.
>
> I need to improve availability / resilience of the storage domain, and
> also the I/O performance.
>
> Anytime we need to reboot the Windows Server, its a nightmare for the
> cluster, we have to put it all into maintenance and take it down.  When the
> Storage server crashes (has happened once) or Windows decides to install an
> update and reboot (has happened once), the storage domain obviously goes
> down and sometimes the hosts have a difficult time re-connecting.
>
> I can afford a second bare metal server and am looking for input in the
> best way to provide a highly available storage domain.  Ideally I'd like to
> be able to reboot either storage server without disrupting Ovirt. Should I
> be looking at clustering with Windows Server, or moving to a different OS?
>
> I currently run the Storage in RAID10 (spinning discs) and have the option
> of adding CacheCade to the array w/ SSD.  Would that help I/O for small
> random R/W?
>
> What are the suggested options for this scenario?
>

The easiest suggestion would be to move away from NFS. While NFS can be
made highly available (using pNFS and friends), it's not that easy (nor
intuitive from oVirt).
iSCSI or FC are much better suited for the task, with multipathing and
iSCSI bonding (poor choice of terminology here).

You would need to use bonding (this time network bonding) and a
highly-available NFS server (with a floating IP between the nodes, most
likely) to succeed.
Y.
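A rough ifcfg sketch of the network-bonding half of this suggestion (CentOS 7 style; the device name and address are examples only):

```shell
cat > /etc/sysconfig/network-scripts/ifcfg-bond1 <<'EOF'
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
IPADDR=10.0.10.5
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
EOF
# Each slave NIC then needs MASTER=bond1 / SLAVE=yes in its own ifcfg file.
# The highly-available NFS service's floating IP lives on the storage
# cluster side, not on this bond.
```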


>
> Thanks
>


Re: [ovirt-users] SR-IOV and oVirt

2018-04-02 Thread Michael Burman
Hi Joe,

First of all, if you are using the SR-IOV feature, you should use the VFs as
pci-passthrough vNICs only, not as host devices.
Our documentation is correct, and it says: "adding a Network Interface with
type PCI Passthrough".

When enabling VFs at the host level, the expected MAC address for the VFs
is 02:00:00:00:00:01, and that is the correct behaviour.
Once you create a network with a 'passthrough' vNIC profile and then add
it as a vNIC to the guest VM, you will see, once the guest runs, that
each pci-passthrough vNIC gets its own unique MAC address. No matter how
many VFs you are using, each pci-passthrough vNIC will get its own MAC
address.

For VLANs, you can create the (passthrough) network with the desired VLAN tag
and it will be passed to the guest when the VM is run.

Cheers)
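As a footnote on the manual workaround in the quoted mail: per-VF MAC (and VLAN) can also be set explicitly with ip(8); the interface name and VF index here are examples:

```shell
# Reset VF 0's MAC to all zeros so the guest treats it as unassigned:
ip link set dev ens4f0 vf 0 mac 00:00:00:00:00:00

# Or pin a VLAN tag on a VF at the host level:
ip link set dev ens4f0 vf 0 vlan 100

# Show each VF with its current MAC/VLAN assignment:
ip link show dev ens4f0
```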



On Thu, Mar 29, 2018 at 6:00 PM,  wrote:

> I am working with a customer on enabling sriov within oVirt and were
> noticing a couple of issues.
>
>1. Whenever we assign the Number of VFs to a physical adapter in one
>of our hosts, it seems to set the mac addresses of each of the VFs to
>something other than all zeros.  Ex. 02:00:00:00:00:01
>2. The above behavior seems to create duplicate mac addresses when we
>assign 2 or more VFs to a guest VM.  All zeros will tell the guest VM that
>it needs to set the mac.  If the guest vm sees something other than all
>zeros, it will think that it was administratively assigned already and
>leave as is.
>3. We were expecting oVirt to set all of the MAC addresses of the VFs
>initially to all zeros.  Then when we assign these VFs to the guest VM, the
>guest VM will assign a unique MAC to each of the VFs.
>4. Please note that we are assigning the VF to the guest VM by adding
>a Host Device (the specific pci host device for the VF).  This seems to be
>different than your docs which shows adding a Network Interface with type
>PCI Passthrough.
>5. If we manually run the following command from an ssh session:  *echo
>4 > /sys/class/net/ens4f0/device/sriov_numvfs*
>
> it will set all of the VFs mac addresses to all zeros.  Then when we
> assign the pci host device to the guest VM through oVirt, it creates unique
> macs for both vnics.  However, when we reboot the Host, it seems to revert
> back to the oVirt assigned macs of 02:00:00:00:00:01.
>
> Do know why this might be happening?  Should we be assigning the VFs to
> the guest VM by adding a network interface with type PCI Passthrough?
> Ultimately our goal is to enable sriov within oVirt and be able to assign
> multiple VFs to the guest VMs with each getting a unique mac.  We also want
> to do the vlan tagging via an application running on the guest VM (not at
> the Host level.)
>
> Thank you for any help,
>
> jp
>
>
>
>
>
>
> *Joe Paolicelli (JP) *Virtualization Specialist, Ixia Solutions Group
> Keysight Technologies
>
> e: *j...@keysight.com *
> t: 469.556.6042 <(469)%20556-6042>
> www.ixiacom.com
>
>
>
>
>
>
>


-- 

Michael Burman

Senior Quality Engineer - RHV Network - Red Hat Israel

Red Hat

mbur...@redhat.com    M: 0545355725    IM: mburman



Re: [ovirt-users] ILO2 Fencing

2018-04-02 Thread Alex K
Hi,

you need a second host for power management to work.
Since you verified it from the command line, adding a second host should
resolve this issue.

In regards to the interface, you need to remove the bridge interface as
instructed (make it a simple interface), then leave oVirt to configure the
ovirtmgmt bridge itself.


Alex

On Wed, Mar 28, 2018 at 9:17 AM, TomK  wrote:

> Hey Guys,
>
> I've tested my ILO2 fence from the oVirt engine CLI and that works:
>
> fence_ilo2 -a 192.168.0.37 -l  --password="" --ssl-insecure
> --tls1.0 -v -o status
>
> The UI gives me:
>
> Test failed: Failed to run fence status-check on host 'ph-host01.my.dom'.
> No other host was available to serve as proxy for the operation.
>
> Going to add a second host in a bit, but is there any way to get this
> working with just one host?  I'm just adding the one host to oVirt for a
> POC we are doing at the moment, but the UI forces me to adjust Power
> Management settings before proceeding.
>
> Also:
>
> 2018-03-28 02:04:15,183-04 WARN
> [org.ovirt.engine.core.bll.network.NetworkConfigurator]
> (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Failed to find a
> valid interface for the management network of host ph-host01.my.dom. If the
> interface br0 is a bridge, it should be torn-down manually.
> 2018-03-28 02:04:15,184-04 ERROR
> [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
> (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Exception:
> org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException:
> Interface br0 is invalid for management network
>
>
> I have these defined as follows, but it's not clear what it is expecting:
>
> [root@ph-host01 ~]# ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc mq master
> bond0 state UP qlen 1000
> link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 3: eth1:  mtu 1500 qdisc mq
> master bond0 state DOWN qlen 1000
> link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 4: eth2:  mtu 1500 qdisc mq
> master bond0 state DOWN qlen 1000
> link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 5: eth3:  mtu 1500 qdisc mq
> master bond0 state DOWN qlen 1000
> link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 21: bond0:  mtu 1500 qdisc
> noqueue master br0 state UP qlen 1000
> link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
>valid_lft forever preferred_lft forever
> 23: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN qlen
> 1000
> link/ether fe:69:c7:50:0d:dd brd ff:ff:ff:ff:ff:ff
> 24: br0:  mtu 1500 qdisc noqueue state
> UP qlen 1000
> link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> inet 192.168.0.39/23 brd 192.168.1.255 scope global br0
>valid_lft forever preferred_lft forever
> inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
>valid_lft forever preferred_lft forever
> [root@ph-host01 ~]# cd /etc/sysconfig/network-scripts/
> [root@ph-host01 network-scripts]# cat ifcfg-br0
> DEVICE=br0
> TYPE=Bridge
> BOOTPROTO=none
> IPADDR=192.168.0.39
> NETMASK=255.255.254.0
> GATEWAY=192.168.0.1
> ONBOOT=yes
> DELAY=0
> USERCTL=no
> DEFROUTE=yes
> NM_CONTROLLED=no
> DOMAIN="my.dom nix.my.dom"
> SEARCH="my.dom nix.my.dom"
> HOSTNAME=ph-host01.my.dom
> DNS1=192.168.0.224
> DNS2=192.168.0.44
> DNS3=192.168.0.45
> ZONE=public
> [root@ph-host01 network-scripts]# cat ifcfg-bond0
> DEVICE=bond0
> ONBOOT=yes
> BOOTPROTO=none
> USERCTL=no
> NM_CONTROLLED=no
> BONDING_OPTS="miimon=100 mode=2"
> BRIDGE=br0
> #
> #
> # IPADDR=192.168.0.39
> # NETMASK=255.255.254.0
> # GATEWAY=192.168.0.1
> # DNS1=192.168.0.1
> [root@ph-host01 network-scripts]#
>
>
> --
> Cheers,
> Tom K.
> 
> -
>
> Living on earth is expensive, but it includes a free trip around the sun.
>


Re: [ovirt-users] Ovirt nodes NFS connection

2018-04-02 Thread Tal Bar-Or
Thanks, all, for your answers; it's much clearer now.

On Thu, Mar 22, 2018 at 7:24 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello Tal
>
> It seems you have a very big overkill in your environment. I would say
> that normally 2 x 10Gb interfaces can do A LOT for nodes, with proper
> redundancy. Just by creating VLANs you can separate traffic and apply, if
> necessary, QoS per VLAN to guarantee which one gets more priority.
>
> If you have 2 x 10Gb in an LACP 802.3ad aggregation, in theory you can
> carry 20Gbps of aggregate traffic. 10Gb of constant storage traffic would
> already be huge, so I normally assume storage will not exceed a few Gbps
> and VM traffic another few Gbps, which fits comfortably even within 10Gb.
>
> The only exception I would make is if your storage traffic is very
> intensive (in throughput, not IOPS); then it may be worth dedicating
> 2 x 10Gb to storage and 2 x 10Gb to all other networks (management, VM
> traffic, migration (with a cap on traffic), etc.).
>
> Regards
> Fernando
>
> 2018-03-21 16:41 GMT-03:00 Yaniv Kaul :
>
>>
>>
>> On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or  wrote:
>>
>>> Hello All,
>>>
>>> I am about to deploy a new oVirt platform. The platform will consist of
>>> 4 oVirt nodes, including management. All server nodes and storage will
>>> have the following config:
>>>
>>> *nodes server*
>>> 4x10G ports network cards
>>> 2x10G will be used for the VM network.
>>> 2x10G will be used for the storage connection
>>> 2x1Ge 1xGe for nodes management
>>>
>>>
>>> *Storage* 4x10G ports network cards
>>> 3x10G for NFS storage mounts to the oVirt nodes
>>>
>>> Now, given the above network configuration layout, what is best practice
>>> for the nodes' NFS storage connection, in terms of throughput and path
>>> resilience?
>>> First option: each node with 2x10G LACP, and on the storage side 3x10G LACP?
>>>
>>
>> I'm not sure how you'd get more throughput than you can get in a single
>> physical link. You will get redundancy.
>>
>> Of course, on the storage side you might benefit from multiple bonded
>> interfaces.
>>
>>
>>> The second option: create 3 VLANs, assign each node to those 3 VLANs
>>> across 2 NICs, and on the storage side assign 3 NICs across the 3 VLANs?
>>>
>>
>> Interesting - but I assume it'll still stick to a single physical link.
>> Y.
>>
>> Thanks
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Tal Bar-or
>>>
>>>
>>>
>>
>>
>>
>
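For reference, the 802.3ad bond-plus-VLAN layout discussed in this thread can be sketched with ifcfg files like the ones that appear elsewhere in this digest. This is only an illustration: the device name bond0, VLAN tag 100, the addresses, and the hash policy are all invented, and the BONDING_OPTS values must match the switch-side port-channel configuration.

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- 802.3ad (LACP) aggregation
DEVICE=bond0
TYPE=Bond
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- storage VLAN on the bond
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=192.168.100.10
NETMASK=255.255.255.0
```

As noted above, a single flow still rides one physical link; the aggregation buys redundancy and more aggregate bandwidth across many flows.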


-- 
Tal Bar-or


[ovirt-users] Issues with ZFS volume creation

2018-04-02 Thread Tal Bar-Or
Hello All,
I know this question might be out of oVirt's scope, but I don't have
anywhere else to ask about this issue (the ZFS users mailing list doesn't
work), so I am trying my luck here anyway. The issue is as follows:

I installed ZFS on top of CentOS 7.4 with oVirt 4.2, on a physical Dell
R720 with 15 SAS 10k 1.2TB disks attached to a PERC H310 adapter; the disks
are configured as non-RAID. All went OK, but when I try to create a new ZFS
pool using the following command:


> zpool create -m none -o ashift=12 zvol raidz2 sda sdb sdc sdd sde sdf sdg
> sdh sdi sdj sdk sdl sdm
>
I get the following error below:

> /dev/sda is in use and contains a unknown filesystem.
> /dev/sdb is in use and contains a unknown filesystem.
> /dev/sdc is in use and contains a unknown filesystem.
> /dev/sdd is in use and contains a unknown filesystem.
> /dev/sde is in use and contains a unknown filesystem.
> /dev/sdf is in use and contains a unknown filesystem.
> /dev/sdg is in use and contains a unknown filesystem.
> /dev/sdh is in use and contains a unknown filesystem.
> /dev/sdi is in use and contains a unknown filesystem.
> /dev/sdj is in use and contains a unknown filesystem.
> /dev/sdk is in use and contains a unknown filesystem.
> /dev/sdl is in use and contains a unknown filesystem.
> /dev/sdm is in use and contains a unknown filesystem.
>

When typing the command lsblk I get the output below, and all seems OK.
Any idea what could be wrong?
Please advise.
Thanks

# lsblk
> NAMEMAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
> sda   8:00  1.1T  0 disk
> └─35000cca07245c0ec 253:20  1.1T  0 mpath
> sdb   8:16   0  1.1T  0 disk
> └─35000cca072463898 253:10   0  1.1T  0 mpath
> sdc   8:32   0  1.1T  0 disk
> └─35000cca0724540e8 253:80  1.1T  0 mpath
> sdd   8:48   0  1.1T  0 disk
> └─35000cca072451b68 253:70  1.1T  0 mpath
> sde   8:64   0  1.1T  0 disk
> └─35000cca07245f578 253:30  1.1T  0 mpath
> sdf   8:80   0  1.1T  0 disk
> └─35000cca07246c568 253:11   0  1.1T  0 mpath
> sdg   8:96   0  1.1T  0 disk
> └─35000cca0724620c8 253:12   0  1.1T  0 mpath
> sdh   8:112  0  1.1T  0 disk
> └─35000cca07245d2b8 253:13   0  1.1T  0 mpath
> sdi   8:128  0  1.1T  0 disk
> └─35000cca07245f0e8 253:40  1.1T  0 mpath
> sdj   8:144  0  1.1T  0 disk
> └─35000cca072418958 253:50  1.1T  0 mpath
> sdk   8:160  0  1.1T  0 disk
> └─35000cca072429700 253:10  1.1T  0 mpath
> sdl   8:176  0  1.1T  0 disk
> └─35000cca07245d848 253:90  1.1T  0 mpath
> sdm   8:192  0  1.1T  0 disk
> └─35000cca0724625a8 253:00  1.1T  0 mpath
> sdn   8:208  0  1.1T  0 disk
> └─35000cca07245f5ac 253:60  1.1T  0 mpath



-- 
Tal Bar-or


[ovirt-users] ILO2 Fencing

2018-04-02 Thread TomK

Hey Guy's,

I've tested my ILO2 fence from the ovirt engine CLI and that works:

fence_ilo2 -a 192.168.0.37 -l  --password="" 
--ssl-insecure --tls1.0 -v -o status


The UI gives me:

Test failed: Failed to run fence status-check on host 
'ph-host01.my.dom'.  No other host was available to serve as proxy for 
the operation.


I'm going to add a second host in a bit, but is there any way to get this 
working with just one host?  I'm only adding the one host to oVirt for a 
POC we are doing at the moment, but the UI forces me to adjust Power 
Management settings before proceeding.


Also:

2018-03-28 02:04:15,183-04 WARN 
[org.ovirt.engine.core.bll.network.NetworkConfigurator] 
(EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Failed to find a 
valid interface for the management network of host ph-host01.my.dom. If 
the interface br0 is a bridge, it should be torn-down manually.
2018-03-28 02:04:15,184-04 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Exception: 
org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: 
Interface br0 is invalid for management network



I have these defined as follows, but it's not clear what oVirt is expecting:

[root@ph-host01 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc mq 
master bond0 state UP qlen 1000

link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
3: eth1:  mtu 1500 qdisc mq 
master bond0 state DOWN qlen 1000

link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
4: eth2:  mtu 1500 qdisc mq 
master bond0 state DOWN qlen 1000

link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
5: eth3:  mtu 1500 qdisc mq 
master bond0 state DOWN qlen 1000

link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
21: bond0:  mtu 1500 qdisc 
noqueue master br0 state UP qlen 1000

link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
   valid_lft forever preferred_lft forever
23: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN 
qlen 1000

link/ether fe:69:c7:50:0d:dd brd ff:ff:ff:ff:ff:ff
24: br0:  mtu 1500 qdisc noqueue state 
UP qlen 1000

link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
inet 192.168.0.39/23 brd 192.168.1.255 scope global br0
   valid_lft forever preferred_lft forever
inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
   valid_lft forever preferred_lft forever
[root@ph-host01 ~]# cd /etc/sysconfig/network-scripts/
[root@ph-host01 network-scripts]# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.39
NETMASK=255.255.254.0
GATEWAY=192.168.0.1
ONBOOT=yes
DELAY=0
USERCTL=no
DEFROUTE=yes
NM_CONTROLLED=no
DOMAIN="my.dom nix.my.dom"
SEARCH="my.dom nix.my.dom"
HOSTNAME=ph-host01.my.dom
DNS1=192.168.0.224
DNS2=192.168.0.44
DNS3=192.168.0.45
ZONE=public
[root@ph-host01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="miimon=100 mode=2"
BRIDGE=br0
#
#
# IPADDR=192.168.0.39
# NETMASK=255.255.254.0
# GATEWAY=192.168.0.1
# DNS1=192.168.0.1
[root@ph-host01 network-scripts]#


--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.



[ovirt-users] [graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 'management'): "cluster.server-quorum-type:", allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'

2018-04-02 Thread TomK

Hey All,

Wondering if anyone has seen this happen and can provide some hints.

After numerous failed attempts to add a physical host to an oVirt engine 
that already had a gluster volume, I get these errors and I'm unable to 
start gluster anymore:



[2018-03-27 07:01:37.511304] E [MSGID: 101021] 
[graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 
'management'): "cluster.server-quorum-type:"

allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
[2018-03-27 07:01:37.511597] E [MSGID: 100026] 
[glusterfsd.c:2403:glusterfs_process_volfp] 0-: failed to construct the 
graph
[2018-03-27 07:01:37.511791] E [graph.c:1102:glusterfs_graph_destroy] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55f06827d0cd] 
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x150) [0x55f06827cf60] 
-->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) 
[0x7f519a816c64] ) 0-graph: invalid argument: graph [Invalid argument]
[2018-03-27 07:01:37.511839] W [glusterfsd.c:1393:cleanup_and_exit] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55f06827d0cd] 
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55f06827cf73] 
-->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55f06827c49b] ) 0-: 
received signum (-1), shutting down
[2018-03-27 07:02:52.223358] I [MSGID: 100030] [glusterfsd.c:2556:main] 
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 
(args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-03-27 07:02:52.229816] E [MSGID: 101021] 
[graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 
'management'): "cluster.server-quorum-type:"

allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
[2018-03-27 07:02:52.230125] E [MSGID: 100026] 
[glusterfsd.c:2403:glusterfs_process_volfp] 0-: failed to construct the 
graph
[2018-03-27 07:02:52.230320] E [graph.c:1102:glusterfs_graph_destroy] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55832612b0cd] 
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x150) [0x55832612af60] 
-->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) 
[0x7f9a1ded4c64] ) 0-graph: invalid argument: graph [Invalid argument]
[2018-03-27 07:02:52.230369] W [glusterfsd.c:1393:cleanup_and_exit] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55832612b0cd] 
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55832612af73] 
-->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55832612a49b] ) 0-: 
received signum (-1), shutting down
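The parser error is specific: glusterd's volfile grammar only accepts lines starting with the listed tokens (volume/type/subvolumes/option/end-volume), so a bare `cluster.server-quorum-type: ...` line, which is `gluster volume set` syntax, makes graph construction fail. A quick scan for such lines (the sample text stands in for /etc/glusterfs/glusterd.vol; that the bad line landed in that file is an assumption based on the "volume 'management'" in the error):

```shell
# Stand-in for /etc/glusterfs/glusterd.vol with a bad line at position 4.
volfile='volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    cluster.server-quorum-type: server
end-volume'

# Flag any non-blank line whose first token is not in the allowed set.
bad=$(printf '%s\n' "$volfile" | awk '
  NF && $1 !~ /^(volume|type|subvolumes|option|end-volume)$/ {
    print NR ": " $0                # report line number and content
  }')
echo "$bad"
```

If glusterd.vol really does contain such a line, the usual direction (hedged, since only the log is available) is to remove it after taking a backup; server-quorum settings belong to per-volume configuration, e.g. `gluster volume set <vol> cluster.server-quorum-type server` on a healthy peer.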


--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.



[ovirt-users] SR-IOV and oVirt

2018-04-02 Thread joe.paolicelli
I am working with a customer on enabling SR-IOV within oVirt, and we're 
noticing a couple of issues.

  1.  Whenever we assign the number of VFs to a physical adapter in one of our 
hosts, it seems to set the MAC addresses of each of the VFs to something other 
than all zeros, e.g. 02:00:00:00:00:01.
  2.  The above behavior seems to create duplicate MAC addresses when we assign 
2 or more VFs to a guest VM.  All zeros tell the guest VM that it needs to 
set the MAC.  If the guest VM sees something other than all zeros, it will 
assume the MAC was administratively assigned already and leave it as is.
  3.  We were expecting oVirt to initially set all of the VF MAC addresses to 
all zeros.  Then, when we assign these VFs to the guest VM, the guest VM 
would assign a unique MAC to each of the VFs.
  4.  Please note that we are assigning the VF to the guest VM by adding a Host 
Device (the specific PCI host device for the VF).  This seems to be different 
from your docs, which show adding a Network Interface with type PCI Passthrough.
  5.  If we manually run the following command from an SSH session:  echo 4 > 
/sys/class/net/ens4f0/device/sriov_numvfs
it sets all of the VF MAC addresses to all zeros.  Then, when we assign the 
PCI host device to the guest VM through oVirt, it creates unique MACs for 
both vNICs.  However, when we reboot the host, it seems to revert to the 
oVirt-assigned MACs of 02:00:00:00:00:01.
Do you know why this might be happening?  Should we be assigning the VFs to the 
guest VM by adding a network interface with type PCI Passthrough?  Ultimately 
our goal is to enable sriov within oVirt and be able to assign multiple VFs to 
the guest VMs with each getting a unique mac.  We also want to do the vlan 
tagging via an application running on the guest VM (not at the Host level.)
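If the requirement is simply that the VF MACs come up zeroed after each host reboot, one workaround to experiment with is re-creating the VFs early at boot with the same echo shown in item 5. Everything below is an assumption: the unit name is invented, ens4f0 and the VF count are taken from the message, the VDSM service name is assumed to be vdsmd, and oVirt may still reassign MACs later.

```ini
# /etc/systemd/system/zero-sriov-vf-macs.service  (hypothetical unit)
[Unit]
Description=Recreate SR-IOV VFs so their MACs start out as all zeros
After=network.target
# Run before VDSM starts bringing up VMs (service name assumed)
Before=vdsmd.service

[Service]
Type=oneshot
# Tear the VFs down and recreate them; freshly created VFs have
# 00:00:00:00:00:00 MACs, matching the manual echo described in item 5.
ExecStart=/bin/sh -c 'echo 0 > /sys/class/net/ens4f0/device/sriov_numvfs && echo 4 > /sys/class/net/ens4f0/device/sriov_numvfs'

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable zero-sriov-vf-macs.service`, this would reproduce the manual workaround at every boot. That said, attaching VFs through a vNIC profile of type PCI Passthrough, as the docs describe, may let oVirt manage the MACs from its pool and avoid the duplicates entirely.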
Thank you for any help,
jp


Joe Paolicelli (JP)
Virtualization Specialist, Ixia Solutions Group
Keysight Technologies
e: j...@keysight.com
t: 469.556.6042
www.ixiacom.com





[ovirt-users] Data Path Operations On Any Host

2018-04-02 Thread SOFTWARE (WG)
Hi,

Regarding the change that distributes data operations across all hosts in a 
data center rather than burdening the SPM: I am having trouble finding 
information on this and need help preventing it on my development oVirt 4.2 
system. The issue is that I have a cluster which hosts all the storage 
volumes using Gluster, and those hosts have 10G NICs. I also have a 
separate, virtualization-only cluster in which each host has only 3 x 1G 
aggregated NICs. When I move disks between storage domains, the operation 
often runs on one of the virtualization hosts, which drastically increases 
the time taken to move the disk. Can I restrict these types of operations to 
a set of hosts, or turn the distribution off altogether so that it just uses 
the SPM as in the past? Distributing the work is a great feature, but 
unfortunately it is no good in my current setup.


Regards,

Jeremy











Re: [ovirt-users] Issue adding network interface to VM failed with HotPlugNicVDS

2018-04-02 Thread Oliver Riesener

> On 01.04.2018 at 11:43, Arik Hadas wrote:
> 
> 
> 
> On Fri, Mar 30, 2018 at 10:06 PM, Oliver Riesener
> <oliver.riese...@hs-bremen.de> wrote:
> Hi,
> 
> running ovirt 4.2.2-6 with firewalld enabled.
> 
> Failed to HotPlugNicVDS, error = The name org.fedoraproject.FirewallD1 was 
> not provided by any .service files, code = 49
> 
> I have seen this error when trying to run a VM after starting/stoping 
> firewalld without restarting libvirtd.
> Try restarting libvirtd.
>  
Yes, you’re right. Restarting libvirtd.service solves this situation.

firewalld was manually installed/restarted earlier, and libvirtd got 
confused.

Thanks a lot!

● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor 
preset: enabled)
  Drop-In: /etc/systemd/system/libvirtd.service.d
   └─unlimited-core.conf
   Active: active (running) since Fr 2018-03-30 08:13:29 CEST; 3 days ago
 Docs: man:libvirtd(8)
   http://libvirt.org
 Main PID: 2599 (libvirtd)
   CGroup: /system.slice/libvirtd.service
   └─2599 /usr/sbin/libvirtd --listen

Mär 30 08:13:28 ovn-elem systemd[1]: Starting Virtualization daemon...
Mär 30 08:13:29 ovn-elem systemd[1]: Started Virtualization daemon.
Mär 30 19:18:37 ovn-elem libvirtd[2599]: 2018-03-30 17:18:37.552+: 2618: 
info : libvirt version: 3.2.0, package: 14.el7_4.9 (CentOS BuildSystem 
, 2018-03-07-13:51:24, x86-01.bsys.centos.org)
Mär 30 19:18:37 ovn-elem libvirtd[2599]: 2018-03-30 17:18:37.552+: 2618: 
info : hostname: ovn-host.example.org
Mär 30 19:18:37 ovn-elem libvirtd[2599]: 2018-03-30 17:18:37.552+: 2618: 
warning : qemuDomainObjBeginJobInternal:3847 : Cannot start job (query, none) 
for domain v-srv-home; current job is (none, migration in) owned by (0 , 
0 remoteDispatchDomainMigratePrepare3Params) for (0s, 31s)
Mär 30 19:18:37 ovn-elem libvirtd[2599]: 2018-03-30 17:18:37.553+: 2618: 
error : qemuDomainObjBeginJobInternal:3859 : Timed out during operation: cannot 
acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
Mär 30 20:43:36 ovn-elem libvirtd[2599]: 2018-03-30 18:43:36.749+: 2618: 
error : qemuDomainAgentAvailable:6030 : Guest agent is not responding: QEMU 
guest agent is not connected
Mär 30 20:47:21 ovn-elem libvirtd[2599]: 2018-03-30 18:47:21.298+: 2599: 
error : qemuMonitorIO:697 : internal error: End of file from qemu monitor


> 
> Can’t hot plug any new network interfaces.
> 
> 30d6c2ab', vmId='20abce62-a558-4aee-b3e3-3fa70f1d1918'}', device='bridge', 
> type='INTERFACE', specParams='[inbound={}, outbound={}]', address='', 
> managed='true', plugged='true', readOnly='false', deviceAlias='', 
> customProperties='[]', snapshotId='null', logicalName='null', 
> hostDevice='null'}'})' execution failed: VDSGenericException: 
> VDSErrorException: Failed to HotPlugNicVDS, error = The name 
> org.fedoraproject.FirewallD1 was not provided by any .service files, code = 49
> 2018-03-30 20:56:08,620+02 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (default 
> task-106) [e732710] FINISH, HotPlugNicVDSCommand, log id: 210cb07
> 2018-03-30 20:56:08,620+02 ERROR 
> [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] 
> (default task-106) [e732710] Command 
> 'org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand' failed: 
> EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
> VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = The 
> name org.fedoraproject.FirewallD1 was not provided by any .service files, 
> code = 49 (Failed with error ACTIVATE_NIC_FAILED and code 49)
> 2018-03-30 20:56:08,627+02 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-106) [e732710] EVENT_ID: 
> NETWORK_ACTIVATE_VM_INTERFACE_FAILURE(1,013), Failed to plug Network 
> Interface nic3 (VirtIO) to VM v-srv-opt. (User: admin@internal)
> 2018-03-30 20:56:08,629+02 INFO  
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] 
> Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating NEW_ENTITY_ID 
> of org.ovirt.engine.core.common.businessentities.VmDevice; snapshot: 
> VmDeviceId:{deviceId='6d7a5b68-0eb3-4531-bc06-3aff30d6c2ab', 
> vmId='20abce62-a558-4aee-b3e3-3fa70f1d1918'}.
> 2018-03-30 20:56:08,630+02 INFO  
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] 
> Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating NEW_ENTITY_ID 
> of org.ovirt.engine.core.common.businessentities.network.VmNetworkStatistics; 
> snapshot: 6d7a5b68-0eb3-4531-bc06-3aff30d6c2ab.
> 2018-03-30 20:56:08,631+02 INFO  
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] 
> Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating NEW_ENTITY_ID 
> of org.ovirt.engine.core.common.businessentities.network.VmNetworkInte

Re: [ovirt-users] oVirt System Test Hackathon

2018-04-02 Thread Yedidyah Bar David
On Mon, Apr 2, 2018 at 9:50 AM, Emil Natan  wrote:

> 13/04/18 maybe?
>

No, the date was correct - search the list archives for 'hackathon':

http://lists.ovirt.org/pipermail/users/2018-March/thread.html

It's just that the last mail from Rob was sent a month too late; perhaps a
bug in some calendar software somewhere.


>
> On Sun, Apr 1, 2018 at 10:24 PM, Rob Dueckman  wrote:
>
>> *Rob Dueckman wishes to make you aware of "oVirt System Test Hackathon".*
>> *oVirt System Test Hackathon*
>> *Start:*   *13/03/18 00:00:00*
>> *End:*   *14/03/18 00:00:00*
>> *Location:*   *#ovirt IRC channel*
>> *Attendees:*
>>
>>
>>
>> * sbona...@redhat.com  de...@ovirt.org
>>  users@ovirt.org  d...@dukey.org
>>  *
>> *Description:*
>>
>>
>>
>>
>>
>> *Please join us in an ovirt-system-tests hackathon, pushing new tests and
>> improving existing ones for testing Hosted Engine. Git repo:
>> https://gerrit.ovirt.org/gitweb?p=ovirt-system-tests.git;a=summary
>> Integration, Node and CI teams will be available to help with the effort
>> and review patches. Here's a public Trello board tracking the efforts:
>> https://trello.com/b/Pp76YoRL*
>>
>> Attached is an iCalendar file with more information about the event. If
>> your mail client supports iTip requests you can use this file to easily
>> update your local copy of the event.
>>
>>
>>
>>
>
>
> --
> Emil
>
>
>


-- 
Didi