Re: [ovirt-users] creating a vlan-tagged network

2017-01-01 Thread Edward Haas
On Sun, Jan 1, 2017 at 7:16 PM, Jim Kusznir  wrote:

> I pinged both the router on the subnet and a host IP in-between the two
> ip's.
>
> [root@ovirt3 ~]# ping -I 162.248.147.33 162.248.147.1
> PING 162.248.147.1 (162.248.147.1) from 162.248.147.33 : 56(84) bytes of
> data.
> 64 bytes from 162.248.147.1: icmp_seq=1 ttl=255 time=8.17 ms
> 64 bytes from 162.248.147.1: icmp_seq=2 ttl=255 time=7.47 ms
> 64 bytes from 162.248.147.1: icmp_seq=3 ttl=255 time=7.53 ms
> 64 bytes from 162.248.147.1: icmp_seq=4 ttl=255 time=8.42 ms
> ^C
> --- 162.248.147.1 ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 3004ms
> rtt min/avg/max/mdev = 7.475/7.901/8.424/0.420 ms
> [root@ovirt3 ~]#
>
> The VM only has its public IP.
>
> --Jim
>

Very strange, all looks good to me.

I can try to help you debug using tcpdump; just send me the details for
a remote connection in private.
It will also help if you join the vdsm or ovirt IRC channels.
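For reference, a minimal tcpdump session for this kind of VLAN problem might look like the sketch below (interface names are taken from the `ip address` output in this thread; run each command on the host while pinging from the VM, and see where the packets stop):

```shell
# 1. Does the VM's traffic reach the host's VLAN sub-interface?
tcpdump -nn -i em1.2 icmp

# 2. Does it leave the physical nic with the VLAN 2 tag intact?
#    (-e prints link-level headers so the tag is visible)
tcpdump -nn -e -i em1 vlan 2 and icmp

# 3. Is anything answering ARP for the gateway on the bridge?
tcpdump -nn -i Public_Cable arp
```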


>
> On Jan 1, 2017 01:26, "Edward Haas"  wrote:
>
>>
>>
>> On Sun, Jan 1, 2017 at 10:50 AM, Jim Kusznir  wrote:
>>
>>> I currently only have two IPs assigned to me...I can try and take
>>> another, but that may not route out of the rack.  I've got the VM on one of
>>> the IPs and the host on the other currently.
>>>
>>> The switch is a "web-managed" basic 8-port switch (thrown in for testing
>>> while the "real" switch is in transit).  It has the 3 ports the hosts are
>>> plugged in configured with vlan 1 untagged, set as PVID, and vlan 2
>>> tagged.  Another port on the switch is untagged on vlan 1 connected to the
>>> router for the ovirtmgmt network (protected by a VPN, but not "burning"
>>> public IPs for mgmt purposes), another couple ports are untagged on vlan
>>> 2.  One of those ports goes out of the rack, another goes to the router's
>>> internet port.  Router gets to the internet just fine.
>>>
>>> VM:
>>> kusznir@FusionPBX:~$ ip address
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> group default
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> inet6 ::1/128 scope host
>>>valid_lft forever preferred_lft forever
>>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>>> state UP group default qlen 1000
>>> link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
>>> inet 162.248.147.31/24 brd 162.248.147.255 scope global eth0
>>>valid_lft forever preferred_lft forever
>>> inet6 fe80::21a:4aff:fe16:151/64 scope link
>>>valid_lft forever preferred_lft forever
>>> kusznir@FusionPBX:~$ ip route
>>> default via 162.248.147.1 dev eth0
>>> 162.248.147.0/24 dev eth0  proto kernel  scope link  src 162.248.147.31
>>> kusznir@FusionPBX:~$
>>>
>>> Host:
>>> [root@ovirt3 ~]# ip address
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> inet6 ::1/128 scope host
>>>valid_lft forever preferred_lft forever
>>> 2: em1:  mtu 1500 qdisc mq master
>>> ovirtmgmt state UP qlen 1000
>>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>> 3: em2:  mtu 1500 qdisc mq state DOWN qlen 1000
>>> link/ether 00:21:9b:98:2f:46 brd ff:ff:ff:ff:ff:ff
>>> 4: em3:  mtu 1500 qdisc mq state DOWN qlen 1000
>>> link/ether 00:21:9b:98:2f:48 brd ff:ff:ff:ff:ff:ff
>>> 5: em4:  mtu 1500 qdisc mq state
>>> DOWN qlen 1000
>>> link/ether 00:21:9b:98:2f:4a brd ff:ff:ff:ff:ff:ff
>>> 6: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>>> link/ether 8e:1b:51:60:87:55 brd ff:ff:ff:ff:ff:ff
>>> 7: ovirtmgmt:  mtu 1500 qdisc noqueue
>>> state UP
>>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>> inet 192.168.8.13/24 brd 192.168.8.255 scope global dynamic
>>> ovirtmgmt
>>>valid_lft 54830sec preferred_lft 54830sec
>>> 11: em1.2@em1:  mtu 1500 qdisc noqueue
>>> master Public_Cable state UP
>>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>> 12: Public_Cable:  mtu 1500 qdisc
>>> noqueue state UP
>>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>> inet 162.248.147.33/24 brd 162.248.147.255 scope global Public_Cable
>>>valid_lft forever preferred_lft forever
>>> 14: vnet0:  mtu 1500 qdisc pfifo_fast
>>> master ovirtmgmt state UNKNOWN qlen 500
>>> link/ether fe:1a:4a:16:01:54 brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::fc1a:4aff:fe16:154/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 15: vnet1:  mtu 1500 qdisc pfifo_fast
>>> master ovirtmgmt state UNKNOWN qlen 500
>>> link/ether fe:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::fc1a:4aff:fe16:152/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 16: vnet2:  mtu 1500 qdisc pfifo_fast
>>> master ovirtmgmt state UNKNOWN qlen 500
>>> link/ether fe:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
>>> inet6 fe80::fc1a:4aff:fe16:153/64 scope link
>>>  

Re: [ovirt-users] How to invoke ovirt-guest-agent hook from ovirt engine SDK ?

2017-01-01 Thread Vinzenz Feenstra

> On Dec 30, 2016, at 11:03 AM, TranceWorldLogic .  
> wrote:
> 
> HI,

Hi there,

> 
> I was exploring more about the ovirt-guest-agent.
> It looks very easy to configure and add a hook as a script.
> 
> But my doubt is: how do I invoke those scripts from ovirt-engine?
> Please, someone help me understand this part.
> I am looking into the Python SDK code to figure out the same, but no luck yet.

Guest agent hooks aren’t triggered through the SDK; hooks are triggered
when certain events happen on the hypervisor side, e.g. a VM gets
migrated from host A to host B, or the VM gets suspended. In these cases
VDSM _can_ send a message to the guest agent asking it to process all
hooks.

Whether those hooks are enabled depends on the configured migration
policy. Currently all migration policies except ‘Legacy’ cause the hooks
to be executed, given a new enough guest agent, VDSM and cluster version.
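As an illustration, a guest-agent hook is just an executable dropped into the agent's hooks directory inside the guest; the real directory layout and event names are assumptions here (check the ovirt-guest-agent documentation for your version), so this sketch uses a stand-in path under /tmp:

```shell
# Hypothetical example hook; /tmp/hooks.d is a stand-in for the real
# hooks directory, and before_migration is one of the event names the
# agent reacts to. Hooks are plain executables that should exit 0.
mkdir -p /tmp/hooks.d/before_migration
cat > /tmp/hooks.d/before_migration/50_example <<'EOF'
#!/bin/sh
echo "before_migration hook ran"
exit 0
EOF
chmod +x /tmp/hooks.d/before_migration/50_example

# running it prints: before_migration hook ran
/tmp/hooks.d/before_migration/50_example
```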
HTH


> 
> Thanks,
> ~Rohit
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Overlapping packages in CentOS 7 repo files from ovirt and mirror.centos.org

2017-01-01 Thread Sandro Bonazzola
On Thu, Dec 29, 2016 at 11:09 PM, Richard Chan  wrote:

> Hi all,
>
> The repo files for ovirt-4.0 seem to have overlapping packages from (el7
> vs centos.el7 naming).
>
> resources.ovirt.org: ovirt-4.0
>
> and
>
> mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/: centos-ovirt40-release
>
> for example
>
> vdsm-4.18.15.3-1.el7.centos.x86_64.rpm
>
> vs
>
> vdsm-4.18.15.3-1.el7.x86_64.rpm
>
>
> Which one should "win"? We need this for auditing purposes.
>
> Thanks.
>

Hi,
when a new oVirt release is announced the oVirt project releases source
code developed and tested during the release cycle.
For convenience, the oVirt release engineering team builds rpms for Fedora,
Red Hat Enterprise Linux and similar.
oVirt is not a Linux distribution.
Once an oVirt release is available, the CentOS Virtualization SIG packages the
oVirt Virtualization Host related packages and makes them available on
CentOS mirrors.
CentOS Linux is a Linux distribution.

So, if you need auditing on CentOS only, you should rely on CentOS
repositories and not enable oVirt repositories on Virtualization Hosts.
On the manager side, oVirt Engine is not yet packaged by the CentOS
Virtualization SIG, as it is almost impossible to package without
accessing Maven Central during the build, which is not allowed by
packaging policies.
So, for the oVirt Engine host, you've no choice but to use oVirt
repositories or build the rpms yourself.

The overlapping versions you see in the oVirt repo and the CentOS
Virtualization SIG repositories are there because the two repositories
are independent. You can use either the oVirt one (built by oVirt release
engineering) or the CentOS one (built by the CentOS Virt SIG).
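For auditing, one practical sketch (the repo ids below are examples; check your actual repo files under /etc/yum.repos.d/):

```shell
# Which repository did an installed package actually come from?
yumdb info vdsm | grep from_repo

# What does each enabled repo offer for that package name?
yum --showduplicates list vdsm

# To pin hosts to the CentOS Virt SIG build, one option is disabling
# the overlapping oVirt repo on the hypervisors (repo id is an example):
yum-config-manager --disable ovirt-4.0
```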




>
>
> --
> Richard Chan
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New oVirt user

2017-01-01 Thread Sahina Bose
On Thu, Dec 29, 2016 at 10:53 AM, Jim Kusznir  wrote:

> Hello:
>
> I've been involved in virtualization from its very early days, and been
> running linux virtualization solutions off and on for a decade.
> Previously, I was always frustrated with the long feature list offered by
> many linux virtualization systems but with no reasonable way to manage
> that.  It seemed that I had to spend an inordinate amount of time doing
> everything by hand.  Thus, when I found oVirt, I was ecstatic!
> Unfortunately, at that time I changed employment (or rather left employment
> and became self-employed), and didn't have any reason to build my own virt
> cluster..until now!
>
> So I'm back with oVirt, and actually deploying a small 3-node cluster.  I
> intend to run on it:
> VoIP Server
> Web Server
> Business backend server
> UniFi management server
> Monitoring server (zabbix)
>
> Not a heavy load, and 3 servers is probably overkill, but I need this to
> work, and it sounds like 3 is the magic entry level for all the
> cluster/failover stuff to work.  For now, my intent is to use a single SSD
> on each node with gluster for the storage backend.  I figure that if all the
> failover stuff is actually working, then if I lose a node due to disk failure,
> it's not the end of the world.  I can rebuild it, reconnect gluster, and restart
> everything.  As this is for a startup business, funds are thin at the
> moment, so I'm trying to cut a couple corners that don't affect overall
> reliability.  If this side of the business grows more, I would likely
> invest in some dedicated servers.
>

Welcome back to oVirt :)


>
> So far, I've based my efforts around this guide on oVirt's website:
> http://www.ovirt.org/blog/2016/08/up-and-running-with-
> ovirt-4-0-and-gluster-storage/
>
> My cluster is currently functioning, but not entirely correctly.  Some of
> it is gut feel, some of it is specific test cases (more to follow).  First,
> some areas that lacked clarity and the choices I made in them:
>
> Early on, Jason talks about using a dedicated gluster network for the
> gluster storage sync'ing.  I liked that idea, and as I had 4 nics on each
> machine, I thought dedicating one or two to gluster would be fine.  So, on
> my clean, bare machines, I setup another network with private NiCs and put
> it on a standalone switch.  I added hostnames with a designator (-g on the
> end) for the IPs for all three nodes into /etc/hosts on all three nodes so
> now each node can resolve itself and the other nodes on the -g name (and
> private IP) as well as their main host name and "more public" (but not
> public) IP.
>
> Then, for gdeploy, I put the hostnames in as the -g hostnames, as I didn't
> see anywhere to tell gluster to use the private network.  I think this is a
> place I went wrong, but didn't realize it until the end
>

-g hostnames are the right ones to put in for gdeploy. gdeploy peer probes
the cluster and creates the gluster volumes, so it needs the gluster
specific ip addresses.
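For reference, the kind of /etc/hosts entries described above might look like this on each node (addresses and hostnames are purely illustrative):

```shell
# Illustrative only -- use your actual private gluster network addresses.
# Each node should resolve its peers' -g names to the private IPs.
cat >> /etc/hosts <<'EOF'
10.10.10.11  ovirt1-g
10.10.10.12  ovirt2-g
10.10.10.13  ovirt3-g
EOF
```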


>
> I set up the gdeploy script (it took a few times, and a few OS rebuilds to
> get it just right...), and ran it, and it was successful!  When complete, I
> had a working gluster cluster and the right software installed on each node!
>

Were these errors specific to gdeploy configuration? With the latest
release of gdeploy, there's an option "skip__errors". This
could help avoid the OS rebuilds, I think.



>
> I set up the engine on node1, and that worked, and I was able to log in to
> the web gui.  I mistakenly skipped the web gui enable gluster service
> before doing the engine vm reboot to complete the engine setup process, but
> I did go back in after the reboot and do that.  After doing that, I was
> notified in the gui that there were additional nodes, did I want to add
> them.  Initially, I skipped that and went back to the command line as Jason
> suggests.  Unfortunately, it could not find any other nodes through his
> method, and it didn't work.  Combine that with the warnings that I should
> not be using the command line method, and it would be removed in the next
> release, I went back to the gui and attempted to add the nodes that way.
>
> Here's where things appeared to go wrong...It showed me two additional
> nodes, but ONLY by their -g (private gluster) hostname.  And the ssh
> fingerprints were not populated, so it would not let me proceed.  After
> messing with this for a bit, I realized that the engine cannot get to the
> nodes via the gluster interface (and as far as I knew, it shouldn't).
> Working late at night, I let myself "hack it up" a bit, and on the engine
> VM, I added /etc/hosts entries for the -g hostnames pointing to the main
> IPs.  It then populated the ssh host keys and let me add them in.  Ok, so
> things appear to be working..kinda.  I noticed at this point that ALL
> aspects of the gui became VERY slow.  Clicking in and typing in any field
> felt like I was on ssh over a satellite link.  Everything felt a bit worse
> 

Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

2017-01-01 Thread Sverker Abrahamsson
1. That is not possible, as ovirt (or vdsm) will rewrite the network
configuration to a non-working state. That is why I've set that
interface as hidden from vdsm, and why I'm keen on getting OVS/OVN to work.


2. I've been reading the doc for OVN and starting to connect the dots, 
which is not trivial as it is complex. Some insights reached:


First step is the OVN database, installed by openvswitch-ovn-central,
which I currently have running on the h2 host. The 'ovn-nbctl' and
'ovn-sbctl' commands can only be executed on a database node.
Two IPs are given to 'vdsm-tool ovn-config <db-ip> <tunneling-ip>' as
arguments, where <db-ip> is how this OVN node reaches the database and
<tunneling-ip> is the IP to which other OVN nodes set up a tunnel to
this node. I.e. it is not for creating a tunnel to the database, which
I thought at first from the description in the blog post.


The tunnel between OVN nodes is of type geneve which is a UDP based 
protocol but I have not been able to find anywhere which port is used so 
that I can open it in firewalld. I have added OVN on another host, 
called h1, and connected it to the db. I see there is traffic to the db 
port, but I don't see any geneve traffic between the nodes.
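For what it's worth, geneve has an IANA-registered port, UDP 6081, so opening it in firewalld would look something like the sketch below (zone name is an assumption):

```shell
# Geneve encapsulation uses UDP port 6081 (IANA-registered).
firewall-cmd --zone=public --add-port=6081/udp
firewall-cmd --zone=public --add-port=6081/udp --permanent

# The OVN southbound DB must also be reachable from the nodes
# (6642/tcp by default; 6641/tcp is the northbound DB).
firewall-cmd --zone=public --add-port=6642/tcp --permanent
```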


Ovirt is now able to create its vnet0 interface on the br-int OVS
bridge, but then I run into the next issue: how do I create a connection
from the logical switch to the physical host? I need that to a) get a
connection out to the internet through a masqueraded interface or IPv6,
and b) be able to run a DHCP server to hand out IPs to the VMs.
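One hedged sketch of how plain OVN usually answers this question (standard ovn-nbctl/ovs-vsctl commands, but untested against this particular oVirt setup): a 'localnet' logical port patches the logical switch to a physical bridge via a bridge mapping.

```shell
# Names are illustrative: a logical switch "net1" and a host bridge
# "br-ex" that holds the physical uplink.

# 1. On each host, map a physical network name to the bridge:
ovs-vsctl set Open_vSwitch . \
    external-ids:ovn-bridge-mappings=physnet1:br-ex

# 2. On the OVN db node, add a localnet port to the logical switch:
ovn-nbctl lsp-add net1 net1-physnet
ovn-nbctl lsp-set-type net1-physnet localnet
ovn-nbctl lsp-set-addresses net1-physnet unknown
ovn-nbctl lsp-set-options net1-physnet network_name=physnet1
```

With that in place, traffic from VMs on the logical switch can reach the physical network through br-ex, which is where a masquerading or DHCP setup would attach.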


/Sverker

Den 2016-12-30 kl. 18:05, skrev Marcin Mirecki:

1. Why not use your physical nic for ovirtmgmt then?

2. "ovn-nbctl ls-add" does not add a bridge, but a logical switch.
br-int is an internal OVN implementation detail, which the user
should not care about. What you see in the ovirt UI are logical
networks. They are implemented as OVN logical switches in case
of the OVN provider.

Please look at:
http://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
You can get the latest rpms from here:
http://resources.ovirt.org/repos/ovirt/experimental/master/ovirt-provider-ovn_fc24_46/rpm/fc24/noarch/

- Original Message -

From: "Sverker Abrahamsson" 
To: "Marcin Mirecki" 
Cc: "Ovirt Users" 
Sent: Friday, December 30, 2016 4:25:58 PM
Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

1. No, I did not want to put the ovirtmgmt bridge on my physical nic, as
it always messed up the network config, making the host unreachable. I
have put an OVS bridge on this nic, which I will use to make tunnels when
I add other hosts. Maybe br-int will be used for that instead; will see
when I get that far.

As it is now, I have a dummy interface for the ovirtmgmt bridge, but this
will probably not work when I add other hosts, as that bridge cannot connect
to the other hosts. I'm considering keeping this just as a dummy to keep
the ovirt engine satisfied, while the actual communication happens over
OVN/OVS bridges and tunnels.

2. On
https://www.ovirt.org//develop/release-management/features/ovirt-ovn-provider/
there are instructions on how to add an OVS bridge to OVN with 'ovn-nbctl
ls-add'. If you want to use br-int, then it makes sense to
make that bridge visible in the ovirt webui under networks so that it can be
selected for VMs.

It doesn't quite make sense to me that I can select another network for my
VM, but then that setting is not used when setting up the network.

/Sverker

Den 2016-12-30 kl. 15:34, skrev Marcin Mirecki:

Hi,

The OVN provider does not require you to add any bridges manually.
As I understand we were dealing with two problems:
1. You only had one physical nic and wanted to put a bridge on it,
 attaching the management network to the bridge. This was the reason for
 creating the bridge (the recommended setup would be to used a separate
 physical nic for the management network). This bridge has nothing to
 do with the OVN bridge.
2. OVN - you want to use OVN on this system. For this you have to install
 OVN on your hosts. This should create the br-int bridge, which are
 then used by the OVN provider. This br-int bridge must be configured
 to connect to other hosts using the geneve tunnels.

In both cases the systems will not be aware of any bridges you create.
They need a nic (be it physical or virtual) to connect to other systems.
Usually this is the physical nic. In your case you decided to put a bridge
on the physical nic and give oVirt a virtual nic attached to this bridge.
This works, but keep in mind that the bridge you have introduced is outside
of oVirt's (and OVN's) control (and as such is not supported).


What is the purpose of
adding my bridges to Ovirt through the external provider and configure
them on my VM

I am not quite sure I understand.
The external provider (OVN provider to be specific), does not add any
bridges
to the system. It is using the br-int bridge created by OVN. The networks
created by the OVN provider are purely logical

Re: [ovirt-users] Current status of 4.0.6 | EL7.3?

2017-01-01 Thread Robert Story
On Sun, 1 Jan 2017 20:27:21 +0100 Michal wrote:
MS> >  Or will qemu-kvm-common-ev-2.6.0 get
MS> > released in the ovirt-release40 repo sometime soon?  (I'm glad I haven't
MS> > updated yet!)  
MS> 
MS> It wouldn't let you upgrade the host due to the dependency so
MS> hopefully nothing would break, but indeed 7.3 needs libvirt-2.0 and
MS> qemu-kvm-ev-1.6

That's a typo, right? I think you meant qemu-kvm-ev-2.6.

Robert

-- 
Senior Software Engineer @ Parsons




Re: [ovirt-users] Current status of 4.0.6 | EL7.3?

2017-01-01 Thread Robert Story
On Fri, 30 Dec 2016 21:29:17 -0500 Derek wrote:
DA> Is this the official response from the ovirt team, to use the
DA> centos-release-qemu-ev repo?  Or will qemu-kvm-common-ev-2.6.0 get
DA> released in the ovirt-release40 repo sometime soon?  (I'm glad I haven't
DA> updated yet!)

I read it here:

 From: Sandro Bonazzola 
 To: users 
 Subject: Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need 
qemu-kvm-ev 2.6
 Date: Tue, 13 Dec 2016 08:43:15 +0100
 On Tue, 13 Dec 2016 08:43:15 +0100 Sandro wrote:
SB> On Mon, Dec 12, 2016 at 6:38 PM, Chris Adams  wrote:
SB> 
SB> > Once upon a time, Sandro Bonazzola  said:  
SB> > > In terms of ovirt repositories, qemu-kvm-ev 2.6 is available right now in
SB> > > ovirt-master-snapshot-static, ovirt-4.0-snapshot-static, and ovirt-4.0-pre
SB> > > (contains 4.0.6 RC4 rpms going to be announced in a few minutes.)
SB> >
SB> > Will qemu-kvm-ev 2.6 be added to any of the oVirt repos for prior
SB> > versions (such as 3.5 or 3.6)?
SB> 
SB> You can enable CentOS Virt SIG repo by running "yum install
SB> centos-release-qemu-ev" on your CentOS 7 systems.
SB> and you'll have updated qemu-kvm-ev.




Robert

-- 
Senior Software Engineer @ Parsons




Re: [ovirt-users] Current status of 4.0.6 | EL7.3?

2017-01-01 Thread Michal Skrivanek
> On 31 Dec 2016, at 03:29, Derek Atkins  wrote:
>
> Hi Robert,
>
> Is this the official response from the ovirt team, to use the
> centos-release-qemu-ev repo?

It's the best choice for now. I suppose we will just add it to the
repo list instead of re-releasing the package. It's us/Sandro building
it anyway:)

>  Or will qemu-kvm-common-ev-2.6.0 get
> released in the ovirt-release40 repo sometime soon?  (I'm glad I haven't
> updated yet!)

It wouldn't let you upgrade the host due to the dependency so
hopefully nothing would break, but indeed 7.3 needs libvirt-2.0 and
qemu-kvm-ev-1.6

Thanks,
michal

>
> Thanks,
>
> -derek
>
>> On Thu, December 29, 2016 9:32 pm, Robert Story wrote:
>> On Thu, 29 Dec 2016 15:32:07 -0500 Derek wrote:
>> DA> Hi,
>> DA>
>> DA> What is the current status of Ovirt 4.0.6 and EL7.3?  From previous
>> DA> threads it seemed to me that there was a potential compatibility issue
>> DA> with the 7.3 kernel and an updated version of vdsm or qemu?  I just
>> want
>> DA> to ensure any potential issues have been cleared up before I upgrade
>> my
>> DA> systems.
>> DA>
>> DA> Thanks,
>> DA>
>> DA> -derek
>> DA>
>>
>> I think you need to enable CentOS Virt SIG repo to get the latest
>> qemu-kvm:
>>
>> # yum list qemu-kvm-common\*
>> qemu-kvm-common.x86_64   10:1.5.3-126.el7
>> base
>>
>> # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
>> # yum -C list qemu-kvm-common\*
>> qemu-kvm-common.x86_64 10:1.5.3-126.el7
>> base
>> qemu-kvm-common-ev.x86_64  10:2.3.0-31.el7.16.1
>> ovirt-4.0
>>
>> # yum install centos-release-qemu-ev
>> # yum list qemu-kvm-common\*
>> qemu-kvm-common.x86_64   10:1.5.3-126.el7 base
>> qemu-kvm-common-ev.x86_6410:2.6.0-27.1.el7
>> centos-qemu-ev
>>
>> That worked for me earlier this week.
>>
>>
>> Robert
>>
>> --
>> Senior Software Engineer @ Parsons
>
>
> --
>   Derek Atkins 617-623-3745
>   de...@ihtfp.com www.ihtfp.com
>   Computer and Internet Security Consultant
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Watchdog device

2017-01-01 Thread Michal Skrivanek
> On 01 Jan 2017, at 15:18, Doron Fediuck  wrote:
>
> Hi Gary,
> this is a known issue we're working on. if the VM is down it should
> work as expected (edit the VM and close the dialog).

Engine.log might help. Can you attach it? Supposing you mean the VM is
created and not running, it should work.

> Please try and let me know. Another option, until the bug is resolved,
> is to use the REST API.
>
> Note that this currently works only on Linux guests (there are no
> updated Windows drivers). You will need to configure
> the device in the guest as explained in the docs.
>
> Doron
>
>
>> On Sun, Jan 1, 2017 at 9:28 AM, Gary Pedretty  wrote:
>> How do you add the supported watchdog device to a VM that has already been
>> created?  I have tried adding it via the High Availability tab in the Edit
>> VM Dialog, but it does not  retain the setting when I close the dialog.
>>
>> Latest version of oVirt running as a self-hosted engine
>>
>> Gary
>>
>>
>> 
>> Gary Pedrettyg...@ravnalaska.net
>> Systems Manager  www.flyravn.com
>> Ravn Alaska   /\907-450-7251
>> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
>> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
>> Serving Alaska's Interior  /  \/  /\  \ \/\   "Love your neighbor as
>> Having a heatwave, its summer   yourself” Matt 22:39
>> 
>>
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] creating a vlan-tagged network

2017-01-01 Thread Jim Kusznir
I pinged both the router on the subnet and a host IP in-between the two
ip's.

[root@ovirt3 ~]# ping -I 162.248.147.33 162.248.147.1
PING 162.248.147.1 (162.248.147.1) from 162.248.147.33 : 56(84) bytes of
data.
64 bytes from 162.248.147.1: icmp_seq=1 ttl=255 time=8.17 ms
64 bytes from 162.248.147.1: icmp_seq=2 ttl=255 time=7.47 ms
64 bytes from 162.248.147.1: icmp_seq=3 ttl=255 time=7.53 ms
64 bytes from 162.248.147.1: icmp_seq=4 ttl=255 time=8.42 ms
^C
--- 162.248.147.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 7.475/7.901/8.424/0.420 ms
[root@ovirt3 ~]#

The VM only has its public IP.

--Jim

On Jan 1, 2017 01:26, "Edward Haas"  wrote:

>
>
> On Sun, Jan 1, 2017 at 10:50 AM, Jim Kusznir  wrote:
>
>> I currently only have two IPs assigned to me...I can try and take
>> another, but that may not route out of the rack.  I've got the VM on one of
>> the IPs and the host on the other currently.
>>
>> The switch is a "web-managed" basic 8-port switch (thrown in for testing
>> while the "real" switch is in transit).  It has the 3 ports the hosts are
>> plugged in configured with vlan 1 untagged, set as PVID, and vlan 2
>> tagged.  Another port on the switch is untagged on vlan 1 connected to the
>> router for the ovirtmgmt network (protected by a VPN, but not "burning"
>> public IPs for mgmt purposes), another couple ports are untagged on vlan
>> 2.  One of those ports goes out of the rack, another goes to the router's
>> internet port.  Router gets to the internet just fine.
>>
>> VM:
>> kusznir@FusionPBX:~$ ip address
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
>> default
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>> state UP group default qlen 1000
>> link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
>> inet 162.248.147.31/24 brd 162.248.147.255 scope global eth0
>>valid_lft forever preferred_lft forever
>> inet6 fe80::21a:4aff:fe16:151/64 scope link
>>valid_lft forever preferred_lft forever
>> kusznir@FusionPBX:~$ ip route
>> default via 162.248.147.1 dev eth0
>> 162.248.147.0/24 dev eth0  proto kernel  scope link  src 162.248.147.31
>> kusznir@FusionPBX:~$
>>
>> Host:
>> [root@ovirt3 ~]# ip address
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: em1:  mtu 1500 qdisc mq master
>> ovirtmgmt state UP qlen 1000
>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>> 3: em2:  mtu 1500 qdisc mq state DOWN qlen 1000
>> link/ether 00:21:9b:98:2f:46 brd ff:ff:ff:ff:ff:ff
>> 4: em3:  mtu 1500 qdisc mq state DOWN qlen 1000
>> link/ether 00:21:9b:98:2f:48 brd ff:ff:ff:ff:ff:ff
>> 5: em4:  mtu 1500 qdisc mq state DOWN
>> qlen 1000
>> link/ether 00:21:9b:98:2f:4a brd ff:ff:ff:ff:ff:ff
>> 6: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>> link/ether 8e:1b:51:60:87:55 brd ff:ff:ff:ff:ff:ff
>> 7: ovirtmgmt:  mtu 1500 qdisc noqueue
>> state UP
>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.8.13/24 brd 192.168.8.255 scope global dynamic ovirtmgmt
>>valid_lft 54830sec preferred_lft 54830sec
>> 11: em1.2@em1:  mtu 1500 qdisc noqueue
>> master Public_Cable state UP
>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>> 12: Public_Cable:  mtu 1500 qdisc
>> noqueue state UP
>> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>> inet 162.248.147.33/24 brd 162.248.147.255 scope global Public_Cable
>>valid_lft forever preferred_lft forever
>> 14: vnet0:  mtu 1500 qdisc pfifo_fast
>> master ovirtmgmt state UNKNOWN qlen 500
>> link/ether fe:1a:4a:16:01:54 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc1a:4aff:fe16:154/64 scope link
>>valid_lft forever preferred_lft forever
>> 15: vnet1:  mtu 1500 qdisc pfifo_fast
>> master ovirtmgmt state UNKNOWN qlen 500
>> link/ether fe:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc1a:4aff:fe16:152/64 scope link
>>valid_lft forever preferred_lft forever
>> 16: vnet2:  mtu 1500 qdisc pfifo_fast
>> master ovirtmgmt state UNKNOWN qlen 500
>> link/ether fe:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc1a:4aff:fe16:153/64 scope link
>>valid_lft forever preferred_lft forever
>> 17: vnet3:  mtu 1500 qdisc pfifo_fast
>> master Public_Cable state UNKNOWN qlen 500
>> link/ether fe:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc1a:4aff:fe16:151/64 scope link
>>valid_lft forever preferred_lft forever
>> [root@ovirt3 ~]# ip route
>> default via 192.168.8.1 dev ovirtmgmt
>> 162.248.147.0/24 dev 

Re: [ovirt-users] Watchdog device

2017-01-01 Thread Doron Fediuck
Hi Gary,
this is a known issue we're working on. If the VM is down, it should
work as expected (edit the VM and close the dialog).
Please try and let me know. Another option, until the bug is resolved,
is to use the REST API.

Note that this currently works only on Linux guests (there are no
updated Windows drivers). You will need to configure
the device in the guest as explained in the docs.
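A hedged sketch of that REST call (engine host name, VM id, and credentials are placeholders; i6300esb/reset are the usual oVirt watchdog model and action values, but check the REST API reference for your version):

```shell
# Placeholders throughout -- adapt to your engine before running.
curl -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -X POST \
     -d '<watchdog><model>i6300esb</model><action>reset</action></watchdog>' \
     'https://engine.example.com/ovirt-engine/api/vms/VM_ID/watchdogs'
```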

Doron


On Sun, Jan 1, 2017 at 9:28 AM, Gary Pedretty  wrote:
> How do you add the supported watchdog device to a VM that has already been
> created?  I have tried adding it via the High Availability tab in the Edit
> VM Dialog, but it does not  retain the setting when I close the dialog.
>
> Latest version of oVirt running as a self-hosted engine
>
> Gary
>
>
> 
> Gary Pedrettyg...@ravnalaska.net
> Systems Manager  www.flyravn.com
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving Alaska's Interior  /  \/  /\  \ \/\   "Love your neighbor as
> Having a heatwave, its summer   yourself” Matt 22:39
> 
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] creating a vlan-tagged network

2017-01-01 Thread Edward Haas
On Sun, Jan 1, 2017 at 10:50 AM, Jim Kusznir  wrote:

> I currently only have two IPs assigned to me...I can try and take another,
> but that may not route out of the rack.  I've got the VM on one of the IPs
> and the host on the other currently.
>
> The switch is a "web-managed" basic 8-port switch (thrown in for testing
> while the "real" switch is in transit).  It has the 3 ports the hosts are
> plugged in configured with vlan 1 untagged, set as PVID, and vlan 2
> tagged.  Another port on the switch is untagged on vlan 1 connected to the
> router for the ovirtmgmt network (protected by a VPN, but not "burning"
> public IPs for mgmt purposes), another couple ports are untagged on vlan
> 2.  One of those ports goes out of the rack, another goes to the router's
> internet port.  Router gets to the internet just fine.
>
> VM:
> kusznir@FusionPBX:~$ ip address
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
> default
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP group default qlen 1000
> link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
> inet 162.248.147.31/24 brd 162.248.147.255 scope global eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::21a:4aff:fe16:151/64 scope link
>valid_lft forever preferred_lft forever
> kusznir@FusionPBX:~$ ip route
> default via 162.248.147.1 dev eth0
> 162.248.147.0/24 dev eth0  proto kernel  scope link  src 162.248.147.31
> kusznir@FusionPBX:~$
>
> Host:
> [root@ovirt3 ~]# ip address
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: em1:  mtu 1500 qdisc mq master
> ovirtmgmt state UP qlen 1000
> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
> 3: em2:  mtu 1500 qdisc mq state DOWN qlen 1000
> link/ether 00:21:9b:98:2f:46 brd ff:ff:ff:ff:ff:ff
> 4: em3:  mtu 1500 qdisc mq state DOWN qlen 1000
> link/ether 00:21:9b:98:2f:48 brd ff:ff:ff:ff:ff:ff
> 5: em4:  mtu 1500 qdisc mq state DOWN
> qlen 1000
> link/ether 00:21:9b:98:2f:4a brd ff:ff:ff:ff:ff:ff
> 6: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
> link/ether 8e:1b:51:60:87:55 brd ff:ff:ff:ff:ff:ff
> 7: ovirtmgmt:  mtu 1500 qdisc noqueue
> state UP
> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
> inet 192.168.8.13/24 brd 192.168.8.255 scope global dynamic ovirtmgmt
>valid_lft 54830sec preferred_lft 54830sec
> 11: em1.2@em1:  mtu 1500 qdisc noqueue
> master Public_Cable state UP
> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
> 12: Public_Cable:  mtu 1500 qdisc
> noqueue state UP
> link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
> inet 162.248.147.33/24 brd 162.248.147.255 scope global Public_Cable
>valid_lft forever preferred_lft forever
> 14: vnet0:  mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UNKNOWN qlen 500
> link/ether fe:1a:4a:16:01:54 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc1a:4aff:fe16:154/64 scope link
>valid_lft forever preferred_lft forever
> 15: vnet1:  mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UNKNOWN qlen 500
> link/ether fe:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc1a:4aff:fe16:152/64 scope link
>valid_lft forever preferred_lft forever
> 16: vnet2:  mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UNKNOWN qlen 500
> link/ether fe:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc1a:4aff:fe16:153/64 scope link
>valid_lft forever preferred_lft forever
> 17: vnet3:  mtu 1500 qdisc pfifo_fast
> master Public_Cable state UNKNOWN qlen 500
> link/ether fe:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc1a:4aff:fe16:151/64 scope link
>valid_lft forever preferred_lft forever
> [root@ovirt3 ~]# ip route
> default via 192.168.8.1 dev ovirtmgmt
> 162.248.147.0/24 dev Public_Cable  proto kernel  scope link  src
> 162.248.147.33
> 169.254.0.0/16 dev ovirtmgmt  scope link  metric 1007
> 169.254.0.0/16 dev Public_Cable  scope link  metric 1012
> 192.168.8.0/24 dev ovirtmgmt  proto kernel  scope link  src 192.168.8.13
> [root@ovirt3 ~]# brctl show
> bridge name bridge id STP enabled interfaces
> ;vdsmdummy; 8000. no
> Public_Cable 8000.00219b982f44 no em1.2
> vnet3
> ovirtmgmt 8000.00219b982f44 no em1
> vnet0
> vnet1
> vnet2
> [root@ovirt3 ~]#
>
> I did see that the cluster settings has a switch type setting; currently
> at the default "LEGACY", it also has "OVS" as an option.  Not sure if that
> matters or not.
>
> I configured another VM on the network, and static'ed an IP, and could
> ping the other VM as well as the host, but not the internet.  The host can
> still ping the internet.

Re: [ovirt-users] creating a vlan-tagged network

2017-01-01 Thread Jim Kusznir
I currently only have two IPs assigned to me...I can try and take another,
but that may not route out of the rack.  I've got the VM on one of the IPs
and the host on the other currently.

The switch is a "web-managed" basic 8-port switch (thrown in for testing
while the "real" switch is in transit).  It has the 3 ports the hosts are
plugged in configured with vlan 1 untagged, set as PVID, and vlan 2
tagged.  Another port on the switch is untagged on vlan 1 connected to the
router for the ovirtmgmt network (protected by a VPN, but not "burning"
public IPs for mgmt purposes), another couple ports are untagged on vlan
2.  One of those ports goes out of the rack, another goes to the router's
internet port.  Router gets to the internet just fine.
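For reference, the host-side counterpart of that switch layout can be sketched with iproute2. This is a hand-written illustration of the layering, not vdsm's actual commands (vdsm builds all of this itself when the network is attached in the engine); the interface and bridge names match the host output further down.

```shell
# Sketch only: the shape vdsm creates for a port carrying vlan 1
# untagged (PVID) plus vlan 2 tagged. Run as root.
ip link add link em1 name em1.2 type vlan id 2   # vlan 2, tagged on em1
ip link add name Public_Cable type bridge        # bridge for the tagged net
ip link set em1.2 master Public_Cable            # enslave the vlan device
ip link set em1.2 up
ip link set Public_Cable up
# Untagged (vlan 1 / PVID) frames stay on em1 itself, which is
# enslaved to the ovirtmgmt bridge.
```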

VM:
kusznir@FusionPBX:~$ ip address
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state
UP group default qlen 1000
link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
inet 162.248.147.31/24 brd 162.248.147.255 scope global eth0
   valid_lft forever preferred_lft forever
inet6 fe80::21a:4aff:fe16:151/64 scope link
   valid_lft forever preferred_lft forever
kusznir@FusionPBX:~$ ip route
default via 162.248.147.1 dev eth0
162.248.147.0/24 dev eth0  proto kernel  scope link  src 162.248.147.31
kusznir@FusionPBX:~$

Host:
[root@ovirt3 ~]# ip address
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq master
ovirtmgmt state UP qlen 1000
link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
3: em2:  mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:21:9b:98:2f:46 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:21:9b:98:2f:48 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN
qlen 1000
link/ether 00:21:9b:98:2f:4a brd ff:ff:ff:ff:ff:ff
6: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
link/ether 8e:1b:51:60:87:55 brd ff:ff:ff:ff:ff:ff
7: ovirtmgmt:  mtu 1500 qdisc noqueue
state UP
link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
inet 192.168.8.13/24 brd 192.168.8.255 scope global dynamic ovirtmgmt
   valid_lft 54830sec preferred_lft 54830sec
11: em1.2@em1:  mtu 1500 qdisc noqueue
master Public_Cable state UP
link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
12: Public_Cable:  mtu 1500 qdisc noqueue
state UP
link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
inet 162.248.147.33/24 brd 162.248.147.255 scope global Public_Cable
   valid_lft forever preferred_lft forever
14: vnet0:  mtu 1500 qdisc pfifo_fast
master ovirtmgmt state UNKNOWN qlen 500
link/ether fe:1a:4a:16:01:54 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc1a:4aff:fe16:154/64 scope link
   valid_lft forever preferred_lft forever
15: vnet1:  mtu 1500 qdisc pfifo_fast
master ovirtmgmt state UNKNOWN qlen 500
link/ether fe:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc1a:4aff:fe16:152/64 scope link
   valid_lft forever preferred_lft forever
16: vnet2:  mtu 1500 qdisc pfifo_fast
master ovirtmgmt state UNKNOWN qlen 500
link/ether fe:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc1a:4aff:fe16:153/64 scope link
   valid_lft forever preferred_lft forever
17: vnet3:  mtu 1500 qdisc pfifo_fast
master Public_Cable state UNKNOWN qlen 500
link/ether fe:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc1a:4aff:fe16:151/64 scope link
   valid_lft forever preferred_lft forever
[root@ovirt3 ~]# ip route
default via 192.168.8.1 dev ovirtmgmt
162.248.147.0/24 dev Public_Cable  proto kernel  scope link  src
162.248.147.33
169.254.0.0/16 dev ovirtmgmt  scope link  metric 1007
169.254.0.0/16 dev Public_Cable  scope link  metric 1012
192.168.8.0/24 dev ovirtmgmt  proto kernel  scope link  src 192.168.8.13
[root@ovirt3 ~]# brctl show
bridge name bridge id STP enabled interfaces
;vdsmdummy; 8000. no
Public_Cable 8000.00219b982f44 no em1.2
vnet3
ovirtmgmt 8000.00219b982f44 no em1
vnet0
vnet1
vnet2
[root@ovirt3 ~]#
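Given that layout, one way to see where VM traffic stops is to follow a ping down the stack with tcpdump, one attachment point at a time (a debugging sketch, assuming tcpdump is installed on the host; interface names are from the output above):

```shell
# Run as root on the host while the VM pings its gateway.
tcpdump -ne -i Public_Cable icmp        # VM traffic entering the bridge
tcpdump -ne -i em1.2 icmp               # leaving the bridge (still untagged here)
tcpdump -ne -i em1 'vlan 2 and icmp'    # on the wire: must carry 802.1Q tag 2
# If tagged requests appear on em1 but no replies ever come back, the
# drop is upstream (switch PVID/tagging or the router), not in oVirt.
```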

I did see that the cluster settings have a switch type setting; it's currently
at the default "LEGACY", with "OVS" as the other option.  Not sure if that
matters or not.

I configured another VM on the network, and static'ed an IP, and could ping
the other VM as well as the host, but not the internet.  The host can still
ping the internet.
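One way to narrow that down (a sketch, not something tried in the thread): since VM-to-VM and VM-to-host pings work, the bridge and VLAN tag are fine locally, so the break is between em1 and the router, and the first thing to check is whether ARP for the gateway ever completes.

```shell
# On the VM: has the gateway's MAC ever been learned?
# FAILED or INCOMPLETE here means ARP over vlan 2 never reached the router.
ip neigh show 162.248.147.1

# On the host (as root): does the VM's ARP request go out tagged?
tcpdump -ne -i em1 'vlan 2 and arp'
```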

--Jim