[ovirt-users] Re: [EXTERNAL] Re: Nested Virtualization in AMD Ryzen

2024-01-06 Thread NUNIN Roberto
What are the Cluster and Datacenter compatibility version settings in the new setup?
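
Both values can be checked from the engine itself; a minimal sketch, assuming shell access to the engine VM and the standard oVirt 4.x schema (table and column names can differ across versions):

# on the engine VM
sudo -u postgres psql engine -c "SELECT name, compatibility_version, cpu_name FROM cluster;"
sudo -u postgres psql engine -c "SELECT name, compatibility_version FROM storage_pool;"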









Roberto Nunin
ICT Infrastructure Senior Advisor

Gruppo Comifar
a PHOENIX company
Nucleo ind. Sant'Atto - S. Nicolò a Tordino
64100, Teramo - Italy
+393483019957
+39 0861 204 415 - int: 7415
roberto.nu...@comifar.it







This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately, deleting the original and all 
copies and destroying any hard copies. Any other use is strictly prohibited and 
may be unlawful.



From: LS CHENG 
Sent: Saturday, January 6, 2024 4:10:28 PM
To: users 
Subject: [EXTERNAL] [ovirt-users] Re: Nested Virtualization in AMD Ryzen




domcapabilities output

[root@kvm1 ~]# virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
domcapabilities |grep AMD
  AMD
  phenom
  athlon
  Opteron_G5
  Opteron_G4
  Opteron_G3
  Opteron_G2
  Opteron_G1
  EPYC-Rome
  EPYC-Milan
  EPYC-IBPB
  EPYC

On Sat, Jan 6, 2024 at 3:58 PM LS CHENG <lsc.or...@gmail.com> wrote:
Hi

I noticed that the CPU of the KVM host is detected as Opteron_G3 after running "virsh -r capabilities | grep model", and that model is not supported in the oVirt UI. Is it possible to add it by modifying the postgres database in the engine?
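
Before touching the database, the CPU models the engine accepts per cluster level can be inspected with engine-config; a sketch, assuming a 4.7 cluster level (adjust --cver to yours):

[root@engine ~]# engine-config -g ServerCPUList --cver=4.7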

From the KVM side the host looks good:

[root@kvm1 ~]# virt-host-validate
  QEMU: Checking for hardware virtualization : PASS
  QEMU: Checking if device /dev/kvm exists : PASS
  QEMU: Checking if device /dev/kvm is accessible : PASS
  QEMU: Checking if device /dev/vhost-net exists : PASS
  QEMU: Checking if device /dev/net/tun exists : PASS
  QEMU: Checking for cgroup 'cpu' controller support : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support : PASS
  QEMU: Checking for cgroup 'cpuset' controller support : PASS
  QEMU: Checking for cgroup 'memory' controller support : PASS
  QEMU: Checking for cgroup 'devices' controller support : PASS
  QEMU: Checking for cgroup 'blkio' controller support : PASS
  QEMU: Checking for device assignment IOMMU support : WARN (No ACPI IVRS table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
  QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)


Thanks



On Sat, Jan 6, 2024 at 12:27 PM LS CHENG <lsc.or...@gmail.com> wrote:
Hi all

I am running OLVM 4.5. This is a test setup that was running, nested under VMware Workstation, on my old Intel workstation with Windows 7 x64 as the host OS. A couple of days ago I moved to an AMD Ryzen 7950X3D with 128 GB of memory running Windows 11 x64, and then moved the OLVM VMs from the old workstation to the new one.

The problem I face now is that the KVM host shows this error:

Host kvm1 moved to Non-Operational state as host CPU type is not supported in 
this cluster compatibility version or is not supported at all

I modified /etc/modprobe.d/kvm.conf and changed

options kvm_amd nested=0
to
options kvm_amd nested=1

rebooted the KVM host, but I am still getting the same error. I verified the modification and it looks good:

[root@kvm1 ~]# cat /sys/module/kvm_amd/parameters/nested
1

In Windows 11 I have Hyper-V off and Memory Integrity is also off.
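
One more thing worth verifying, since the KVM host is itself a VMware Workstation guest: AMD-V (svm) must actually be exposed to that guest, which in Workstation is the per-VM "Virtualize Intel VT-x/EPT or AMD-V/RVI" option. A quick check inside kvm1 (a sketch; a count of zero means the flag is not exposed, and nested=1 alone cannot help):

[root@kvm1 ~]# grep -c svm /proc/cpuinfo
[root@kvm1 ~]# lscpu | grep -i virtualization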

Am I missing any additional steps?


Thanks






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F7YXU2M26LI74OHFRQVWSUFAPTQEP2NY/


[ovirt-users] Re: Trunk VLAN Ovirt 4.2

2019-01-13 Thread NUNIN Roberto
After the nodes have joined the oVirt cluster, you must:

1) Create the networks within the Datacenter tab. There you set VLAN tagging, with the VLAN ID. You can also specify the network usage (migration, management, VM, etc.). You must also assign the network to the cluster containing the nodes.
2) Assign these networks to the nodes from Host > Setup Host Networks, without assigning an IP, since they are the "transport" for the VMs that will run on the nodes. At the same time, or before, or later, you can create the bond from the same dialog. Choose the right bond mode, depending on the switch configuration and the modes supported by oVirt.

After these tasks, the networks will go up in the cluster, so you can start to deploy VMs and attach them to these "trunked" networks, assigning them IP addresses, fixed or via DHCP.
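
For repeatable setups, step 1 can also be done through the REST API instead of the Datacenter tab. A minimal sketch, with the engine URL, the credentials, the network name and the Default datacenter all as placeholder assumptions (on some versions the datacenter may need to be referenced by id rather than by name; -k only skips certificate checks for brevity):

curl -s -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -d '<network><name>vlan56</name><vlan id="56"/><data_center><name>Default</name></data_center></network>' \
  'https://engine.example.com/ovirt-engine/api/networks'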

Hope this helps you.

Roberto Nunin
Gruppo Comifar



ICT Infrastructure Manager
Nucleo Ind. Sant'Atto - S.Nicolò A Tordino

64100, Teramo (TE) - Italy

Phone: +39 0861 204 415 int: 2415

Fax: +39 02  0804

Mobile: +39 3483019957

Email: roberto.nu...@comifar.it

Web: www.gruppocomifar.it





On Mon, Jan 14, 2019 at 4:08 AM +0100, "Sebastian Antunez N." <antunez.sebast...@gmail.com> wrote:

Hello Guys

I have a lab with 6 nodes running oVirt 4.2. All nodes have four 1 Gb NICs: two NICs are for management, and I will assign two NICs in a bond for VM traffic.

My problem is the following.

The Cisco switch has two VLANs (56 and 57), and I need to create a new network carrying both VLANs, but I am not clear on how to make the VLANs go through this created network. I have read that I should create a network with VLAN 56, for example, and assign it an IP, then create the network with VLAN 57 and assign it an IP, and later assign them to the bonding.

Is what I describe correct, or should I follow another process?

Thanks for the help.

Sebastian



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UHNF2KTCDMMX6IV3ORAQEY7V2NIH425H/


[ovirt-users] R: Re: [ANN] oVirt Node 4.2.6 async update is now available

2018-10-05 Thread NUNIN Roberto
Are these errors present while updating ovirt-engine, or while updating the hosts?

I've successfully updated the hosted engine from 4.2.5.x to 4.2.6.4-1.el7, but now I get errors while updating the hosts.

Can I apply the same workaround?
Thanks in advance


Manually removing cockpit-networkmanager-172-1.el7.noarch and updating oVirt again fixes this issue for me.
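
In shell terms, the workaround quoted above amounts to something like this (a sketch; package version as in the message):

# yum remove cockpit-networkmanager-172-1.el7.noarch
# yum update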
On 10/4/18 11:20 AM, Maton, Brett wrote:
Having trouble upgrading my test instance (4.2.7.1-1.el7), there appear to be 
some dependency issues:

Transaction check error:
  file /usr/share/cockpit/networkmanager/manifest.json from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ca.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.cs.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.de.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.es.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.eu.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.fi.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.fr.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.hr.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.hu.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ja.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ko.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.my.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.nl.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pa.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pl.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pt.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.pt_BR.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.tr.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.uk.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.zh_CN.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/firewall.css.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/network.min.js.gz from install of 
cockpit-system-176-2.el7.centos.noarch conflicts with file from package 
cockpit-networkmanager-172-1.el7.noarch



On Thu, 4 Oct 2018 at 09:44, Sandro Bonazzola <sbona...@redhat.com> wrote:
The oVirt Team has just released a new version of oVirt Node image including 
latest CentOS updates,
fixing a regression introduced in kernel package [1] breaking IP 

[ovirt-users] oVirt LDAP user authentication troubleshooting

2017-08-07 Thread NUNIN Roberto
I have two oVirt 4.1.4.2-1 pods used for labs.

These two pods are configured in the same way (three nodes with Gluster).

Trying to set up LDAP auth towards the same OpenLDAP server, setup ends correctly on both engine VMs.
When I try to modify system permissions, only one of the two recognizes the LDAP groups, allowing the setup, and users belonging to the defined groups can then log in and perform the tasks assigned to their level.

On the second engine, the system permissions dialog, even though it recognizes the LDAP domain (it appears in the selection box for the search base), finds nothing, neither groups nor individuals.
How can I analyze this? I wasn't able to find logs useful for troubleshooting.

Setup ended correctly, with both the Login and Search tasks completing successfully.
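
One way to compare the two engines is the AAA test tool shipped with ovirt-engine-extension-aaa-ldap; a sketch, with the profile and group names as placeholders:

# ovirt-engine-extensions-tool aaa login-user --profile=myldap --user-name=someuser
# ovirt-engine-extensions-tool aaa search --extension-name=myldap-authz --entity=group --entity-name='somegroup'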
Thanks

Roberto









[ovirt-users] Ris: Best practices for LACP bonds on oVirt

2017-07-04 Thread NUNIN Roberto




 Original message 
Subject: Re: [ovirt-users] Best practices for LACP bonds on oVirt
From: Yedidyah Bar David
To: Vinícius Ferrão, Simone Tiraboschi
CC: users


On Tue, Jul 4, 2017 at 3:51 AM, Vinícius Ferrão wrote:

> In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond 
> for management and servers VLAN’s, while eth1 and eth2 are Multipath iSCSI 
> disks (MPIO).

>You probably meant eth2 and eth3 for the latter bond.

Sorry to jump into this interesting discussion, but LACP on iSCSI?

If I remember correctly, there was a warning about using LACP for iSCSI traffic; active/standby was preferred, for performance reasons.
Has something changed on this topic?

Thanks,
Roberto



--
Didi





[ovirt-users] R: Cannot connect to gluster storage after HE installation

2017-05-30 Thread NUNIN Roberto

I'm assuming HE is up and running, as you're able to access the engine.
Yes. It is up and running.
Can you check if the Default cluster has "Gluster service" enabled. (This would 
have been a prompt during HE install, and the service is enabled based on your 
choice)
This was not activated. Do you mean that during HE setup the prompt was "Do you want to activate Gluster service on this host?" Honestly, considering that it was a Gluster installation and that the default value was "No", I left No.
Now I've activated it in the GUI.

 Are the gluster volumes listed in the "Volumes" tab? The engine needs to be 
aware of the volumes to use in the New storage domain dialog.
Yes, now the gluster volumes are shown.

One last question, Sahina:

Is it correct that the glusterd service is disabled on the hypervisors?
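
For reference, checking and fixing that on a node is quick; a sketch:

# systemctl is-enabled glusterd
# systemctl enable glusterd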

Thanks for pointing me to the right solution.








[ovirt-users] Cannot connect to gluster storage after HE installation

2017-05-30 Thread NUNIN Roberto
Hi

ovirt-node-ng installation using ISO image 20170526.
I've made five attempts, each one ending with a different failure, while following:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

The last one completed successfully (I hope), taking care of:


1) Configured networking before setting date & time, to have chronyd up & running.

2) Modified the gdeploy-generated script, which still references ntpd instead of chronyd.

3) Being a Gluster-based cluster, configured a partition on each data disk (sdb > sdb1, type 8e, + partprobe).

4) Blacklisted the local devices in multipath.conf on all nodes (see the sketch below).

5) Double-checked whether leftovers from previous attempts were still visible (for example the gluster volume group > vgremove -f -y).
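
For point 4, a sketch of the multipath.conf blacklist typically used for hyperconverged setups, assuming only local disks on the nodes (a "# VDSM PRIVATE" line near the top keeps vdsm from rewriting the file):

# VDSM PRIVATE
blacklist {
    devnode "*"
}

followed by a restart of multipathd (systemctl restart multipathd).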

After HE installation and restart, there was no advisory about additional servers to add to the cluster, so I manually added them as new servers. Successfully.

Now I must add storage, but unfortunately nothing is shown in the Gluster drop-down list, even if I change the host.
I've chosen "Use managed gluster".

At first look, glusterd is up & running (but disabled at system startup!):

aps-te65-mng.mydomain.it:Loaded: loaded 
(/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
aps-te65-mng.mydomain.it:Active: active (running) since Tue 2017-05-30 
09:54:23 CEST; 4h 40min ago

aps-te66-mng.mydomain.it:Loaded: loaded 
(/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
aps-te66-mng.mydomain.it:Active: active (running) since Tue 2017-05-30 
09:54:24 CEST; 4h 40min ago

aps-te67-mng.mydomain.it:Loaded: loaded 
(/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
aps-te67-mng.mydomain.it:Active: active (running) since Tue 2017-05-30 
09:54:24 CEST; 4h 40min ago

The data gluster volume is OK:

[root@aps-te65-mng ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: ea6a2c9f-b042-42b4-9c0e-1f776e50b828
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: aps-te65-mng.mydomain.it:/gluster_bricks/data/data
Brick2: aps-te66-mng.mydomain.it:/gluster_bricks/data/data
Brick3: aps-te67-mng.mydomain.it:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@aps-te65-mng ~]#

[root@aps-te65-mng ~]# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick aps-te65-mng.mydomain.it:/gluster_bric
ks/data/data49153 0  Y   52710
Brick aps-te66-mng.mydomain.it:/gluster_bric
ks/data/data49153 0  Y   45265
Brick aps-te67-mng.mydomain.it:/gluster_bric
ks/data/data49153 0  Y   45366
Self-heal Daemon on localhost   N/A   N/AY   57491
Self-heal Daemon on aps-te67-mng.mydomain.it N/A   N/AY   46488
Self-heal Daemon on aps-te66-mng.mydomain.it N/A   N/AY   46384

Task Status of Volume data
--
There are no active volume tasks

[root@aps-te65-mng ~]#

Any hints on this? Should I send logs?
In the hosted-engine log, apart from fencing problems with the HPE iLO3 agent, I can find only these errors:

2017-05-30 11:58:56,981+02 ERROR 
[org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-23) [] Can't 
read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request 
'/ovirt-engine/services/files/spice/SpiceVersion.txt', will send a 404 error 
response.
2017-05-30 13:49:51,033+02 ERROR 
[org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-64) [] Can't 
read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request 
'/ovirt-engine/services/files/spice/SpiceVersion.txt', will send a 404 error 
response.
2017-05-30 13:58:01,569+02 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] 
(org.ovirt.thread.pool-6-thread-25) [677ae254] Command 'PollVDSCommand(HostName 
= aps-te66-mng.mydomain.it, VdsIdVDSCommandParametersBase:{runAsync='true', 
hostId='3fea5320-33f5-4479-89ce-7d3bc575cd49'})' execution failed: 
VDSGenericException: VDSNetworkException: 

[ovirt-users] ovirt-node-ng & gluster installation question

2017-05-25 Thread NUNIN Roberto
I'm trying to install the latest oVirt release on a lab cluster.

I have six servers and I need to use Gluster for a hyperconverged hosted-engine lab; all of them have dedicated volumes for Gluster storage, apart from the OS disks.

The servers are installed with the ovirt-node-ng-installer-ovirt-4.1-pre-2017051210 ISO image.
After installation, they were fully updated.

I'm trying to follow the guide at:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
In this guide I can see, in the host selection, the possibility to add more than three hosts to the cluster.
In the current cockpit version, I can't see this.

These are cockpit components, installed in all six servers:

cockpit-ws-140-1.el7.centos.x86_64
cockpit-system-140-1.el7.centos.noarch
cockpit-140-1.el7.centos.x86_64
cockpit-dashboard-140-1.el7.centos.x86_64
cockpit-storaged-140-1.el7.centos.noarch
cockpit-ovirt-dashboard-0.10.7-0.0.18.el7.centos.noarch
cockpit-bridge-140-1.el7.centos.x86_64

Two questions:

Must I proceed with only three servers and add the remaining three to the cluster afterwards, to have a distributed Gluster solution, or must I change something in the first installation phase to add all six servers from the beginning?

In the second cockpit screen there are no default repos/packages; must I leave it blank? (The servers are already updated.)

Thanks in advance for any hints.

Roberto






Re: [ovirt-users] oVirt 4.1.0 - Gluster - Hosted Engine deploy howto

2017-03-15 Thread NUNIN Roberto
Hi kasturi

  >>  Hosted Engine tab is visible only after hosted_storage and hosted_engine 
vm is imported to the >>cluster. Once the first node is installed, you will 
need to create a storage domain of vmstore or >>data volume .

Data storage domain is already created and activated. I've also created one 
test VM, up & running.


>>Once one of these storage domains are created, hosted_storage and 
>>Hosted_engine vm is >>imported automatically. After this you will be able to 
>>see Hosted Engine tab in the New hosts >>dialog box.

This is the strange behavior: as written in my previous message, hosted_storage is still locked, and the hosted engine VM only shows up in the VM count, but is not visible when selecting the host where it is running.

>>Thanks
>>kasturi

Thanks
Roberto

On 03/15/2017 11:09 PM, NUNIN Roberto wrote:
Hi

Our test environment:

oVirt Engine : 4.1.0.3-1-el7.centos

Gluster replica 3 with bricks on all nodes.

6 nodes :

OS Version: RHEL - 7 - 3.1611.el7.centos
Kernel Version: 3.10.0 - 514.10.2.el7.x86_64
KVM Version: 2.6.0 - 28.el7_3.3.1
LIBVIRT Version: libvirt-2.0.0-10.el7_3.5
VDSM Version: vdsm-4.19.4-1.el7.centos
GlusterFS Version: glusterfs-3.8.9-1.el7

I have successfully added all the nodes to the default cluster.
Now I need to activate the remaining 5 hosts for the hosted engine, but I don't have the "Hosted Engine" tab in the Host properties, so I can't deploy HE via the web UI.

Using the command line, I get an error:

2017-03-15 16:58:48 ERROR otopi.plugins.gr_he_setup.storage.storage 
storage._abortAdditionalHosts:189 Setup of additional hosts using this software 
is not allowed anymore. Please use the engine web interface to deploy any 
additional hosts.
RuntimeError:Setup of additional hosts using this software is not allowed 
anymore. Please use the engine web interface to deploy any additional hosts.
2017-03-15 16:58:48 ERROR otopi.context context._executeMethod:151 Failed to 
execute stage 'Environment customization': Setup of additional hosts using this 
software is not allowed anymore. Please use the engine web interface to deploy 
any additional hosts.
2017-03-15 16:58:48 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, 
RuntimeError('Setup of additional hosts using this software is not allowed 
anymore. Please use the engine web interface to deploy any additional 
hosts.',), )]'
2017-03-15 16:58:48 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, 
RuntimeError('Setup of additional hosts using this software is not allowed 
anymore. Please use the engine web interface to deploy any additional 
hosts.',), )]'

The hosted_storage domain is still locked.

How can I enable the "Hosted Engine" host property in the web UI, or force the deploy via CLI?

If logs are needed, please let me know which ones. Attached are agent-ha.log (from the first node) and ovirt-hosted-engine-setup.log (from the host where I attempted to deploy the hosted engine via CLI).

Apart from this, the test environment seems to be working as expected (Gluster OK, installed VM OK).

TIA



Roberto
















Re: [ovirt-users] R: moVirt 1.6 released!

2016-09-24 Thread NUNIN Roberto
Well, that depends on the console settings on the VM. Do you have a VM with 
SPICE console configured?

/K

Yes, you're right. But in the oVirt browser GUI I see the SPICE protocol, while in the moVirt settings for that VM I read VNC.

So I tried another VM reported as SPICE also in moVirt, and this one works fine; the only initial exception is certificate-related, like you've described.

Thanks for pointing me in the right direction!

Roberto





[ovirt-users] R: moVirt 1.6 released!

2016-09-23 Thread NUNIN Roberto
> >
> > >Also tested that aSPICE works, which is >awesome! Keep up the great
> > work!
> >
> > Interesting: did you have used aSpice to act as a console viewer ?
> > Could you share basics ?
>
> Very basic:) Just install aSPICE from $app_store and then, in moVirt, click 
> the
> "three dots" in the top right corner and choose "Console". I had to accept our
> certificate and for some reason the first time I tried it, it didn´t work, 
> but I tried
> again and BOOM!:)
>
> /K

I tested it, but I wasn't able to get the console until I installed the free bVNC.
The message was: check if you have aSPICE or bVNC installed.
That looks strange, considering that the setup isn't a nightmare!

Thanks






Re: [ovirt-users] moVirt 1.6 released!

2016-09-22 Thread NUNIN Roberto

>You may hitting an issue that we do not >support SPICE proxy: 
>https://github.com/>matobet/moVirt/issues/188
>Is that the case?

No. Honestly speaking, I read somewhere that aSPICE doesn't support the console session due to missing security requirements in the handshake.

I really don't remember where, but it was around the first moVirt release date.

Thanks





Re: [ovirt-users] moVirt 1.6 released!

2016-09-21 Thread NUNIN Roberto

>Also tested that aSPICE works, which is >awesome! Keep up the great work!

Interesting: did you use aSPICE to act as a console viewer?
Could you share the basics?

TIA

Roberto





[ovirt-users] Hosted engine disk moving to another storage volume

2016-09-20 Thread NUNIN Roberto
Hi

We must move the hosted engine disk from the storage where it was installed to another one, due to storage maintenance activities.
Both the current and future volumes are hosted on FC storage.

Is it possible to perform this change without interrupting services? Is it a safe change?

Current setup:

Four nodes Centos 7.2.1511
Vdsm  4.18.9.1
Libvirt 1.2.17.13
KVM 2.3.0-31

oVirtEngine 4.0.1.1-1.el7.centos.

Thanks in advance for any suggestion.



Roberto






Re: [ovirt-users] Missing engine-manage-domains?!

2016-08-04 Thread NUNIN Roberto

I'm using oVirt Engine Version: 4.0.0.6-1.el7.centos and wanted to connect an 
LDAP server.  I went looking for engine-manage-domains on the engine machine 
but it seems to be missing.  Any ideas?
Thanks,
Clint

You must install the ovirt-engine-extension-aaa-ldap.noarch package from the oVirt repo and run ovirt-engine-extension-aaa-ldap-setup, choosing the kind of LDAP server, its FQDN, etc.
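
A sketch of the two steps on the engine machine (in recent versions the interactive tool ships in its own -setup subpackage):

# yum install -y ovirt-engine-extension-aaa-ldap-setup
# ovirt-engine-extension-aaa-ldap-setup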

Hope this can help you.
Roberto





[ovirt-users] R: oVirt 4.0 hosted-engine deploy fail on fc domain

2016-07-08 Thread NUNIN Roberto

> -Original Message-
> From: Simone Tiraboschi [mailto:stira...@redhat.com]
> Sent: Friday, July 8, 2016 14:31
> To: NUNIN Roberto <roberto.nu...@comifar.it>
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] oVirt 4.0 hosted-engine deploy fail on fc domain
>
> On Fri, Jul 8, 2016 at 12:17 PM, NUNIN Roberto <roberto.nu...@comifar.it>
> wrote:
> > Hello
> >
> > I’m in trouble deploying hosted-engine on a fresh-installed Centos7.2
> > server, chosing fc domain:
> >
> >
> >
> > [ ERROR ] Failed to execute stage 'Environment customization': 'devList'
> >
> Hi Roberto,
> we already opened a bug on it:
> https://bugzilla.redhat.com/show_bug.cgi?id=1352601
> and we already have a fix: https://gerrit.ovirt.org/#/c/60142/
>
> The fix will come with the next release but it's really simple:
> if you are brave enough you could simply edit /usr/share/ovirt-hosted-engine-
> setup/plugins/gr-he-setup/storage/blockd.py
> on your host replacing 'devList' with 'items' in line 407.
>
> > Roberto

Hi Simone,

Thanks for the hint, the fix has done the job.

Ciao
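
For anyone hitting the same bug before the next release, the quoted one-line fix can be applied as (a sketch; path and line number as in Simone's message):

# sed -i '407s/devList/items/' /usr/share/ovirt-hosted-engine-setup/plugins/gr-he-setup/storage/blockd.py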






[ovirt-users] oVirt 4.0 hosted-engine deploy fail on fc domain

2016-07-08 Thread NUNIN Roberto
Hello
I'm in trouble deploying the hosted engine on a freshly installed CentOS 7.2 server, choosing the fc domain:

[ ERROR ] Failed to execute stage 'Environment customization': 'devList'

Installation ends with an error; these are the versions of the installed oVirt software:

ovirt-hosted-engine-setup-2.0.0.2-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.2-1.el7.centos.noarch
ovirt-host-deploy-1.5.0-1.el7.centos.noarch
ovirt-imageio-daemon-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
ovirt-vmconsole-host-1.0.3-1.el7.centos.noarch
ovirt-release40-4.0.0-5.noarch
ovirt-vmconsole-1.0.3-1.el7.centos.noarch
ovirt-hosted-engine-ha-2.0.0-1.el7.centos.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-imageio-common-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch

Here part of the log:

2016-07-08 11:46:08 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND Please specify the storage 
you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-07-08 11:46:11 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:RECEIVEfc
2016-07-08 11:46:11 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT 
DUMP - BEGIN
2016-07-08 11:46:11 DEBUG otopi.context context.dumpEnvironment:770 ENV 
OVEHOSTED_STORAGE/domainType=str:'fc'
2016-07-08 11:46:11 DEBUG otopi.context context.dumpEnvironment:774 ENVIRONMENT 
DUMP - END
2016-07-08 11:46:11 DEBUG otopi.context context._executeMethod:128 Stage 
customization METHOD 
otopi.plugins.gr_he_setup.storage.blockd.Plugin._customization
2016-07-08 11:46:11 DEBUG otopi.plugins.gr_he_setup.storage.blockd 
blockd._fc_get_lun_list:404 {'status': {'message': 'Done', 'code': 0}, 'items': 
[{u'status': u'free', u'vendorID': u'3PARdata', u'capacity': u'1099511627776', 
u'fwrev': u'3122', u'vgUUID': u'', u'pvsize': u'', u'pathlist': [], 
u'logicalblocksize': u'512', u'pathstatus': [{u'capacity': u'1099511627776', 
u'physdev': u'sdb', u'type': u'FCP', u'state': u'active', u'lun': u'1'}, 
{u'capacity': u'1099511627776', u'physdev': u'sdc', u'type': u'FCP', u'state': 
u'active', u'lun': u'1'}, {u'capacity': u'1099511627776', u'physdev': u'sdd', 
u'type': u'FCP', u'state': u'active', u'lun': u'1'}, {u'capacity': 
u'1099511627776', u'physdev': u'sde', u'type': u'FCP', u'state': u'active', 
u'lun': u'1'}, {u'capacity': u'1099511627776', u'physdev': u'sdf', u'type': 
u'FCP', u'state': u'active', u'lun': u'1'}, {u'capacity': u'1099511627776', 
u'physdev': u'sdg', u'type': u'FCP', u'state': u'active', u'lun': u'1'}, 
{u'capacity': u'1099511627776', u'physdev': u'sdh', u'type': u'FCP', u'state': 
u'active', u'lun': u'1'}, {u'capacity': u'1099511627776', u'physdev': u'sdi', 
u'type': u'FCP', u'state': u'active', u'lun': u'1'}], u'devtype': u'FCP', 
u'physicalblocksize': u'512', u'pvUUID': u'', u'serial': 
u'S3PARdataVV_1619775', u'GUID': u'360002ac01d0060964d3f', 
u'productID': u'VV'}]}
2016-07-08 11:46:11 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
method['method']()
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/blockd.py",
 line 612, in _customization
lunGUID = self._customize_lun(self.domainType, target)
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/blockd.py",
 line 212, in _customize_lun
available_luns = self._fc_get_lun_list()
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/blockd.py",
 line 407, in _fc_get_lun_list
for device in devices['devList']:
KeyError: 'devList'
ERROR otopi.context context._executeMethod:151 Failed to execute stage 
'Environment customization': 'devList'

FC volume is hosted on an HP 3PAR storage array.

It is available, under multipathd, to the OS:

[root@xxx-yyy-xxx ~]# multipath -l
360002ac01d0060964d3f dm-2 3PARdata,VV
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 2:0:11:1 sdi 8:128 active undef running
  |- 1:0:9:1  sdd 8:48  active undef running
  |- 2:0:7:1  sdg 8:96  active undef running
  |- 1:0:1:1  sdc 8:32  active undef running
  |- 2:0:6:1  sdf 8:80  active undef running
  |- 1:0:0:1  sdb 8:16  active undef running
  |- 2:0:8:1  sdh 8:112 active undef running
  `- 1:0:10:1 sde 8:64  active undef running
[root@xxx-yyy-xxx ~]#

Should I submit other relevant logs? Which ones?
Thanks in advance.


Roberto




Re: [ovirt-users] Shared storage between DC

2016-05-01 Thread NUNIN Roberto
Hi
I have in production a scenario similar to what you've described.
The "enabling factor" is a set of "storage virtualization" appliances that maintain a mirrored logical volume over FC physical volumes across two distinct datacenters, while giving simultaneous read/write access to the cluster hypervisors, split between the datacenters, that run the VMs.

So: the cluster is also spread across the DCs, and there is no need to import anything.
Regards,

Roberto


On 01 May 2016, at 10:37, Arsène Gschwind wrote:

Hi,

Is it possible to have a shared Storage domain between 2 Datacenter in oVirt?
We do replicate a FC Volume between 2 datacenter using FC SAN storage 
technology and we have an oVirt cluster on each site defined in separate DCs. 
The idea behind this is to setup a DR site and also balance the load between 
each site.
What happens if I do import a storage domain already active in one DC, will it 
break the Storage domain?

Thanks for any information..
Regards,
Arsène





[ovirt-users] R: R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-08-20 Thread NUNIN Roberto
 -Original Message-
 From: Michael S. Tsirkin [mailto:m...@redhat.com]
 Sent: Wednesday, July 29, 2015 12:03
 To: NUNIN Roberto
 Cc: Fabian Deutsch; users@ovirt.org
 Subject: Re: R: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

 On Wed, Jul 29, 2015 at 12:00:38PM +0200, NUNIN Roberto wrote:
 
   -Original Message-
   From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of Michael S. Tsirkin
   Sent: Thursday, July 9, 2015 15:15
   To: Fabian Deutsch
   Cc: users@ovirt.org
   Subject: Re: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer
  
   On Thu, Jul 09, 2015 at 08:57:50AM -0400, Fabian Deutsch wrote:
- Original Message -
 On Wed, Jul 08, 2015 at 09:11:42AM +0300, Michael S. Tsirkin wrote:
  On Tue, Jul 07, 2015 at 05:13:28PM +0100, Dan Kenigsberg wrote:
   On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:

 On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto
 wrote:
  Hi Dan
 
  Sorry for question: what do you mean for interface vnet 
  ?
  Currently our path is :
  eno1 - eno2   bond0 - bond.3500 (VLAN) -- 
  bridge -
  vm.
 
  Which one of these ?
  Moreover, reading Fabian statements about bonding limits,
   today I
  can try
 to switch to a config without bonding.

 vm is a complicated term.

 `brctl show` would not show you a vm connected to a bridge.
   When
 you
 WOULD see is a vnet888 tap device. The other side of this
 device
   is
 held by qemu, which implement the VM.
   
Ok, understood and found it, vnet2
   

 I'm asking if the dhcp offer has reached that tap device.
   
No, the DHCP offer packet do not reach the vnet2 interface, I 
can
 see
only DHCP DISCOVER.
  
   Ok, so it seems that we have a problem in the host bridging.
  
   Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?
  
   Michael, a DHCP DISCOVER is sent out of a just-booted guest, and
   OFFER
   returns to the bridge, but is not propagated to the tap device.
   Can you suggest how to debug this further?
 
  Dump packets including the ethernet headers.
  Likely something interfered with them so the eth address is wrong.
 
  Since bonding does this sometimes, this is the most likely culprit.

 We've ruled this out already - Roberto reproduces the issue without a
 bond.
   
To me this looks like either a regression in the host side bridging. But
 otoh it
   doesn't look
like it's happening always, because otherwise I'd expect more noise
 around
   this issue.
   
- fabian
  
   Hard to say. E.g. forwarding delay would do this for a while.
   If eth address of the packets is okay, poke at the fbd, maybe there's
   something wrong there. Maybe stp is detecting a loop - try checking that.
 
  Someone is checking this ?
  In tested config SPT was off.

 Then maybe you have a loop :)

No, it is not a loop.

I've done further tests today and have finally narrowed down the conditions.
The erratic behavior shows up only within a cluster whose nodes are HP ProLiant BL660c Gen8, connected to Cisco Nexus 7K switches through HP FEX B22 blade interconnects and Cisco Nexus 5596 switches. All NICs are 10 Gbit.

It doesn't happen with two HP ProLiant DL380 G5 with 10 Gbit NICs connected directly to Cisco Nexus 5548UP switches, and it doesn't happen with two HP ProLiant ML350e Gen8 with 1 Gbit NICs connected to a Cisco 4948 and then the same Nexus 5548UP.

All nodes are running CentOS 7.1 with the latest updates, and all networks are configured in the same way: bonding over two NICs, then VLAN interfaces, and a bridge towards the VMs. Bonding is mode 4 everywhere, and it works correctly on the DL380 and ML350 clusters.

Well, I've tried changing the bonding mode on the BL660 cluster to mode 1, and the issue disappears.
In all other bonding modes it doesn't work: the bridge interfaces receive the DHCP offers and do NOT reject the packets, but the tap interfaces don't receive the offer. It works only with mode 1.

How can I investigate further? The goal is to have mode 4, to aggregate the available bandwidth.
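
A way to narrow it down further, as suggested earlier in the thread, is to dump the DHCP traffic with Ethernet headers on both sides of the bridge while a guest PXE-boots and compare the frames; a sketch (bridge and tap names as in this setup):

# tcpdump -e -n -i <bridge> port 67 or port 68
# tcpdump -e -n -i vnet2 port 67 or port 68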

RN


  RN
  
   --
   MST
 

[ovirt-users] R: vlan-tagging on non-tagged network

2015-08-17 Thread NUNIN Roberto
Sorry for jumping into the discussion, but I'm currently experiencing exactly the opposite behaviour.
The DHCP offer reaches the host bridge interface, but is not propagated to the vnet device.
I'm using VLAN tagging. Incoming packets are handled correctly all the way to the vnet device, and the problem seems to affect only DHCP packets.

A static IP assigned to the VM leaves it active and reachable.

Just my two cents on a topic that I suppose could be related to my issue.

Roberto


 Original message 
From: Felix Pepinghege pepingh...@ira.uka.de
Date: 17/08/2015 13:36 (GMT+01:00)
To: Users@ovirt.org
Subject: Re: [ovirt-users] vlan-tagging on non-tagged network

Hi Ido, hi everybody,

sorry that I kept you waiting for two months, I only just found the time
to go back to this problem.

You were completely right with your guess. The ethernet frames do appear
on the vnet-interface, but not on the bridge. The dropped-counter seems
to be independent from these losses, though.

However, while this tells me *where* the problem is, I still don't know
*what* the problem is. I've done some research, but couldn't find
anything particularly helpful.
An interesting point may be that this problem is mono-directional. That
is, the bridge happily passes vlan-tagged frames from the ethernet
device to the vnet, but not the other way around. Untagged ethernet
frames make their way through the brigde, no matter where they come from.

The vlan module is loaded, as to the versioning questions:
# cat /etc/centos-release ; uname -s -v -r
CentOS Linux release 7.1.1503 (Core)
Linux 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 2015

The guest OS is an up-to-date Debian Jessie, which should not matter,
though, as the frames get lost on the doorstep of the bridge on the host.

Again, any suggestions are much appreciated!

Regards,
Felix


Am 16.06.2015 um 08:27 schrieb Ido Barkan:
 Hey Felix.

 IIUC your frames are dropped by the bridge. Ovirt uses Linux Bridges
 To connect virtual machines to 'networks'. The guest connects to the bridge
 using a tap device which usually is called 'vnetnumber'.

 So, just to verify, can you please tcpdump both on the bridge device and on 
 the tap device?
 The bridge can be quite noisy so I suggest filtering traffic using the 
 guest's MAC
 address. So I am not sure what protocol you use for tunneling but applying
 a filter similar to this one should do the job:
   tcpdump -n -i vnet0 -vvv -s 1500 'udp[38:4]=0x001a4aaeec8e'

 My guess is that you will observe traffic on the tap device, but not on the 
 bridge.
 You didn't specify which centOS version you use but I do remember seeing 
 people
 complaining about Linux bridges discarding their tagged frames.
 You can -maybe- also observe the 'dropped' counter increases on the bridge by 
 running:
   'ip -s link show dev trunk'

 There were a few bugs on rhel6/7 about this, specifically I remember
 https://bugzilla.redhat.com/show_bug.cgi?id=1174291
 and
 https://bugzilla.redhat.com/show_bug.cgi?id=1200275#c20

 Also, is the vlan module loaded on your host?
 'lsmod |grep 8021q'

 Thanks,
 Ido

 - Original Message -
 From: Felix Pepinghege pepingh...@ira.uka.de
 To: Users@ovirt.org
 Sent: Monday, June 15, 2015 11:33:39 AM
 Subject: [ovirt-users] vlan-tagging on non-tagged network

 Hi everybody!

 I am experiencing a behaviour of ovirt, of which I don't know whether it
 is expected or not. My setup is as follows:
 A virtual machine has a logical network attached to it, which is
 configured without vlan-tagging and listens to the name 'trunk'.
 The VM is running an openvpn server. It is a patched openvpn version,
 including vlan-tagging. That is, openvpn clients get a vlan tag. This
 should not really be an issue but should satisfy the why do you want to
 do it in the first place-questions.
 Anyhow, effectively, the VM simply puts vlan-tagged ethernet-frames on
 the virtual network. These frames, however, never make it to the host's
 network bridge, which represents the logical network.
 My observations are: According to tcpdump, the vlan-tagged packages
 arrive at the eth1-interface inside the VM (which *is* the correct
 interface). Again, according to tcpdump, these packages never arrive at
 the corresponding network-bridge (i.e., the interface 'trunk') on the host.
 I know that the setup itself is feasible with KVM---I have it working on
 a proxmox-machine. Therefore, my conclusion is, that ovirt doesn't like
 vlan-tagged ethernet-frames on non-tagged logical networks, and somehow
 filters them out, though I don't really see on what level that would
 happen (Handling the ethernet frames should be a concern of
 KVM/QEMU/Linux only, once ovirt has started the VM).
 So this problem could be a CentOS issue, but I really don't see why
 CentOS should act differently than debian does (proxmox is debian-based).
 Is this a known/wanted/expected behaviour of ovirt, and can I somehow
 prevent or elude it?

 Any help is 

[ovirt-users] R: R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-29 Thread NUNIN Roberto
 -Original Message-
 From: Michael S. Tsirkin [mailto:m...@redhat.com]
 Sent: Wednesday, July 29, 2015 12:03
 To: NUNIN Roberto
 Cc: Fabian Deutsch; users@ovirt.org
 Subject: Re: R: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

 On Wed, Jul 29, 2015 at 12:00:38PM +0200, NUNIN Roberto wrote:
 
   -Messaggio originale-
   Da: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Per conto
 di
   Michael S. Tsirkin
   Inviato: giovedì 9 luglio 2015 15:15
   A: Fabian Deutsch
   Cc: users@ovirt.org
   Oggetto: Re: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm 
   don't
 read
   DHCP offer
  
   On Thu, Jul 09, 2015 at 08:57:50AM -0400, Fabian Deutsch wrote:
- Original Message -
 On Wed, Jul 08, 2015 at 09:11:42AM +0300, Michael S. Tsirkin wrote:
  On Tue, Jul 07, 2015 at 05:13:28PM +0100, Dan Kenigsberg wrote:
   On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:

 On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto
 wrote:
  Hi Dan
 
  Sorry for question: what do you mean for interface vnet 
  ?
  Currently our path is :
   eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.
  
   Which one of these ?
   Moreover, reading Fabian's statements about bonding limits, today I can try
   to switch to a config without bonding.

 vm is a complicated term.

 `brctl show` would not show you a vm connected to a bridge. What you
 WOULD see is a vnet888 tap device. The other side of this device is
 held by qemu, which implements the VM.

Ok, understood and found it, vnet2

 I'm asking if the dhcp offer has reached that tap device.

No, the DHCP offer packet does not reach the vnet2 interface, I can see
only DHCP DISCOVER.
  
   Ok, so it seems that we have a problem in the host bridging.
  
   Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?
  
   Michael, a DHCP DISCOVER is sent out of a just-booted guest, and
   OFFER
   returns to the bridge, but is not propagated to the tap device.
   Can you suggest how to debug this further?
 
  Dump packets including the ethernet headers.
  Likely something interfered with them so the eth address is wrong.
 
  Since bonding does this sometimes, this is the most likely culprit.

 We've ruled this out already - Roberto reproduces the issue without a
 bond.
   
To me this looks like a regression in the host-side bridging. But otoh it
doesn't look like it's happening always, because otherwise I'd expect
more noise around this issue.
   
- fabian
  
    Hard to say. E.g. forwarding delay would do this for a while.
    If the eth address of the packets is okay, poke at the fdb, maybe there's
    something wrong there. Maybe STP is detecting a loop - try checking that.
 
  Is someone checking this ?
  In the tested config STP was off.

 Then maybe you have a loop :)

That was already checked, the MAC was unique in the VLAN.

RN
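
Since forward delay, the fdb and STP all came up above, here is a hedged set
of host-side inspection commands (the bridge name DMZ3_DEV is taken from the
config later in this saga and is illustrative):

  # per-port STP state and the bridge's forward delay
  brctl showstp DMZ3_DEV
  # which MACs the bridge has learned, and on which port
  brctl showmacs DMZ3_DEV
  # iproute2 equivalent of the fdb dump
  bridge fdb show br DMZ3_DEV

A freshly created tap port held in listening/learning state for the length of
the forward delay would match an offer being lost only right after the guest
boots.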

  RN
  
   --
   MST
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
 

___
Users

[ovirt-users] R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-29 Thread NUNIN Roberto

 -Messaggio originale-
 Da: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Per conto di
 Michael S. Tsirkin
 Inviato: giovedì 9 luglio 2015 15:15
 A: Fabian Deutsch
 Cc: users@ovirt.org
 Oggetto: Re: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't 
 read
 DHCP offer

 On Thu, Jul 09, 2015 at 08:57:50AM -0400, Fabian Deutsch wrote:
  - Original Message -
   On Wed, Jul 08, 2015 at 09:11:42AM +0300, Michael S. Tsirkin wrote:
On Tue, Jul 07, 2015 at 05:13:28PM +0100, Dan Kenigsberg wrote:
 On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:
  
   On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto wrote:
Hi Dan
   
Sorry for question: what do you mean for interface vnet ?
Currently our path is :
eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.

Which one of these ?
Moreover, reading Fabian's statements about bonding limits, today I can try
to switch to a config without bonding.
  
    vm is a complicated term.
   
    `brctl show` would not show you a vm connected to a bridge. What you
    WOULD see is a vnet888 tap device. The other side of this device is
    held by qemu, which implements the VM.
  
   Ok, understood and found it, vnet2
  
    I'm asking if the dhcp offer has reached that tap device.
  
   No, the DHCP offer packet does not reach the vnet2 interface, I can see
   only DHCP DISCOVER.

 Ok, so it seems that we have a problem in the host bridging.

 Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?

 Michael, a DHCP DISCOVER is sent out of a just-booted guest, and
 OFFER
 returns to the bridge, but is not propagated to the tap device.
 Can you suggest how to debug this further?
   
Dump packets including the ethernet headers.
Likely something interfered with them so the eth address is wrong.
   
Since bonding does this sometimes, this is the most likely culprit.
  
   We've ruled this out already - Roberto reproduces the issue without a
   bond.
 
  To me this looks like a regression in the host-side bridging. But otoh it
  doesn't look like it's happening always, because otherwise I'd expect
  more noise around this issue.
 
  - fabian

 Hard to say. E.g. forwarding delay would do this for a while.
 If the eth address of the packets is okay, poke at the fdb, maybe there's
 something wrong there. Maybe STP is detecting a loop - try checking that.

Is someone checking this ?
In the tested config STP was off.

RN

 --
 MST
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-29 Thread NUNIN Roberto
Da: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Per conto di 
Jorick Astrego
Inviato: mercoledì 29 luglio 2015 14:26
A: users@ovirt.org
Oggetto: Re: [ovirt-users] R: R: R: R: R: R: R: R: PXE boot of a VM on vdsm 
don't read DHCP offer



On 07/29/2015 12:12 PM, NUNIN Roberto wrote:
 -Messaggio originale-
 Da: Michael S. Tsirkin [mailto:m...@redhat.com]
 Inviato: mercoledì 29 luglio 2015 12:03
 A: NUNIN Roberto
 Cc: Fabian Deutsch; users@ovirt.org
 Oggetto: Re: R: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm 
 don't
 read DHCP offer

 On Wed, Jul 29, 2015 at 12:00:38PM +0200, NUNIN Roberto wrote:
 -Messaggio originale-
 Da: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Per conto di
 Michael S. Tsirkin
 Inviato: giovedì 9 luglio 2015 15:15
 A: Fabian Deutsch
 Cc: users@ovirt.org
 Oggetto: Re: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't
 read
 DHCP offer

 On Thu, Jul 09, 2015 at 08:57:50AM -0400, Fabian Deutsch wrote:
 - Original Message -
 On Wed, Jul 08, 2015 at 09:11:42AM +0300, Michael S. Tsirkin wrote:
 On Tue, Jul 07, 2015 at 05:13:28PM +0100, Dan Kenigsberg wrote:
 On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:
 On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto
 wrote:
 Hi Dan

 Sorry for question: what do you mean for interface vnet ?
 Currently our path is :
 eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.

 Which one of these ?
 Moreover, reading Fabian's statements about bonding limits, today I can try
 to switch to a config without bonding.

 vm is a complicated term.

 `brctl show` would not show you a vm connected to a bridge. What you
 WOULD see is a vnet888 tap device. The other side of this device is
 held by qemu, which implements the VM.
 Ok, understood and found it, vnet2

 I'm asking if the dhcp offer has reached that tap device.
 No, the DHCP offer packet does not reach the vnet2 interface, I can see
 only DHCP DISCOVER.
 Ok, so it seems that we have a problem in the host bridging.

 Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?

 Michael, a DHCP DISCOVER is sent out of a just-booted guest, and
 OFFER
 returns to the bridge, but is not propagated to the tap device.
 Can you suggest how to debug this further?
 Dump packets including the ethernet headers.
 Likely something interfered with them so the eth address is wrong.

 Since bonding does this sometimes, this is the most likely culprit.
 We've ruled this out already - Roberto reproduces the issue without a
 bond.
 To me this looks like a regression in the host-side bridging. But otoh it
 doesn't look like it's happening always, because otherwise I'd expect
 more noise around this issue.
 - fabian
 Hard to say. E.g. forwarding delay would do this for a while.
 If the eth address of the packets is okay, poke at the fdb, maybe there's
 something wrong there. Maybe STP is detecting a loop - try checking that.
 Is someone checking this ?
 In the tested config STP was off.
 Then maybe you have a loop :)
 That was already checked, the MAC was unique in the VLAN.

 RN


Did you also try a reboot of the VM? We have the same issue with foreman
and both Libvirt and oVirt. On second boot PXE boots properly from DHCP.

Yes, tried the reboot more than once, without changes.
RN


Haven't had the time to investigate yet so we're using mostly image
based provisioning on oVirt at the moment.





Met vriendelijke groet, With kind regards,

Jorick Astrego
Netbulae Virtualization Experts
Tel: 053 20 30 270 / Fax: 053 20 30 271
i...@netbulae.eu / www.netbulae.eu
Staalsteden 4-3A, 7547 TA Enschede
KvK 08198180 / BTW NL821234584B01






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-13 Thread NUNIN Roberto

 -Messaggio originale-
 Da: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Per conto di
 Michael S. Tsirkin
 Inviato: giovedì 9 luglio 2015 15:15
 A: Fabian Deutsch
 Cc: users@ovirt.org
 Oggetto: Re: [ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't 
 read
 DHCP offer

 On Thu, Jul 09, 2015 at 08:57:50AM -0400, Fabian Deutsch wrote:
  - Original Message -
   On Wed, Jul 08, 2015 at 09:11:42AM +0300, Michael S. Tsirkin wrote:
On Tue, Jul 07, 2015 at 05:13:28PM +0100, Dan Kenigsberg wrote:
 On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:
  
   On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto wrote:
Hi Dan
   
Sorry for question: what do you mean for interface vnet ?
Currently our path is :
eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.

Which one of these ?
Moreover, reading Fabian's statements about bonding limits, today I can try
to switch to a config without bonding.
  
   vm is a complicated term.
  
    `brctl show` would not show you a vm connected to a bridge. What you
    WOULD see is a vnet888 tap device. The other side of this device is
    held by qemu, which implements the VM.
  
   Ok, understood and found it, vnet2
  
    I'm asking if the dhcp offer has reached that tap device.
  
   No, the DHCP offer packet does not reach the vnet2 interface, I can see
   only DHCP DISCOVER.

 Ok, so it seems that we have a problem in the host bridging.

 Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?

 Michael, a DHCP DISCOVER is sent out of a just-booted guest, and
 OFFER
 returns to the bridge, but is not propagated to the tap device.
 Can you suggest how to debug this further?
   
Dump packets including the ethernet headers.
Likely something interfered with them so the eth address is wrong.
   
Since bonding does this sometimes, this is the most likely culprit.
  
   We've ruled this out already - Roberto reproduces the issue without a
   bond.
 
   To me this looks like a regression in the host-side bridging. But otoh it
   doesn't look like it's happening always, because otherwise I'd expect
   more noise around this issue.
 
  - fabian

  Hard to say. E.g. forwarding delay would do this for a while.
  If the eth address of the packets is okay, poke at the fdb, maybe there's
  something wrong there. Maybe STP is detecting a loop - try checking that.


I have the tcpdump captures, let me know if they are useful to analyze.
On the VLAN interface, STP=off.



RN
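
For captures meant to be analyzed later (or shared), writing raw pcap files
preserves the link-layer headers; a hedged example using the interface names
from this thread:

  # DHCP traffic on the VLAN device, with ethernet headers, to a file
  tcpdump -e -n -i bond0.3500 -w offer-vlan.pcap port 67 or port 68
  # same on the tap device
  tcpdump -e -n -i vnet2 -w offer-tap.pcap port 67 or port 68
  # read back later, printing link-level addresses
  tcpdump -e -n -r offer-vlan.pcap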

 --
 MST
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-08 Thread NUNIN Roberto


 -Messaggio originale-
 Da: Michael S. Tsirkin [mailto:m...@redhat.com]
 Inviato: mercoledì 8 luglio 2015 08:12
 A: Dan Kenigsberg
 Cc: NUNIN Roberto; users@ovirt.org; ibar...@redhat.com
 Oggetto: Re: R: R: R: R: R: [ovirt-users] R: PXE boot of a VM on vdsm don't 
 read
 DHCP offer

 On Tue, Jul 07, 2015 at 05:13:28PM +0100, Dan Kenigsberg wrote:
  On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:
   
On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto wrote:
 Hi Dan

 Sorry for question: what do you mean for interface vnet ?
 Currently our path is :
 eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.

 Which one of these ?
 Moreover, reading Fabian's statements about bonding limits, today I can try
 to switch to a config without bonding.
   
vm is a complicated term.
   
 `brctl show` would not show you a vm connected to a bridge. What you
 WOULD see is a vnet888 tap device. The other side of this device is
 held by qemu, which implements the VM.

  Ok, understood and found it, vnet2

 I'm asking if the dhcp offer has reached that tap device.

  No, the DHCP offer packet does not reach the vnet2 interface, I can see
  only DHCP DISCOVER.
 
  Ok, so it seems that we have a problem in the host bridging.
 
  Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?
 
  Michael, a DHCP DISCOVER is sent out of a just-booted guest, and OFFER
  returns to the bridge, but is not propagated to the tap device.
  Can you suggest how to debug this further?

 Dump packets including the ethernet headers.

I have both tcpdump capture, on the bridge interface and on the vnet interface.

How can I upload the output files ?
Sorry for the probably obvious question, first time for me :)

Roberto Nunin
 Likely something interfered with them so the eth address is wrong.

 Since bonding does this sometimes, this is the most likely culprit.

 --
 MST

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
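
On "dump packets including the ethernet headers": tcpdump only prints
link-level addresses when given -e; a hedged example for checking whether the
offer still carries the guest's MAC as destination after bridging (interface
name illustrative):

  tcpdump -e -n -i DMZ3_DEV port 67 or port 68

If the destination MAC printed for the OFFER differs from the guest's MAC,
something rewrote it along the path, as suspected above.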


[ovirt-users] R: R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-07 Thread NUNIN Roberto


 -Messaggio originale-
 Da: Dan Kenigsberg [mailto:dan...@redhat.com]
 Inviato: martedì 7 luglio 2015 18:13
 A: NUNIN Roberto; m...@redhat.com
 Cc: users@ovirt.org; ibar...@redhat.com
 Oggetto: Re: R: R: R: R: R: [ovirt-users] R: PXE boot of a VM on vdsm don't 
 read
 DHCP offer

 On Tue, Jul 07, 2015 at 10:14:54AM +0200, NUNIN Roberto wrote:
  
   On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto wrote:
Hi Dan
   
Sorry for question: what do you mean for interface vnet ?
Currently our path is :
   eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.
    
   Which one of these ?
   Moreover, reading Fabian's statements about bonding limits, today I can try
   to switch to a config without bonding.
  
   vm is a complicated term.
  
    `brctl show` would not show you a vm connected to a bridge. What you
    WOULD see is a vnet888 tap device. The other side of this device is
    held by qemu, which implements the VM.
  
   Ok, understood and found it, vnet2
  
    I'm asking if the dhcp offer has reached that tap device.
  
   No, the DHCP offer packet does not reach the vnet2 interface, I can see
   only DHCP DISCOVER.

 Ok, so it seems that we have a problem in the host bridging.

 Is it the latest kernel-3.10.0-229.7.2.el7.x86_64 ?

It is 3.10.0-229.7.2.el7.x86_64 on the host installed with CentOS 7.1, then 
updated, then vdsm.
It is 3.10.0-123.20.1.el7.x86_64 on the host installed with the vdsm iso.
Both hosts show the issue.

 Michael, a DHCP DISCOVER is sent out of a just-booted guest, and OFFER
 returns to the bridge, but is not propagated to the tap device.
 Can you suggest how to debug this further?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-07 Thread NUNIN Roberto

 On Mon, Jul 06, 2015 at 10:33:59AM +0200, NUNIN Roberto wrote:
  Hi Dan
 
  Sorry for question: what do you mean for interface vnet ?
  Currently our path is :
   eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.
  
   Which one of these ?
   Moreover, reading Fabian's statements about bonding limits, today I can try
   to switch to a config without bonding.

 vm is a complicated term.

  `brctl show` would not show you a vm connected to a bridge. What you
  WOULD see is a vnet888 tap device. The other side of this device is
  held by qemu, which implements the VM.

Ok, understood and found it, vnet2

 I'm asking if the dhcp offer has reached that tap device.

No, the DHCP offer packet does not reach the vnet2 interface, I can see only
DHCP DISCOVER.


Roberto
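
A quick way to pin down what does and does not reach the tap device is to
filter for DHCP directly on it (vnet2, per the exchange above):

  tcpdump -e -n -i vnet2 port 67 or port 68

Seeing only DISCOVERs here while the bridge shows the matching OFFER narrows
the drop to the bridge-to-tap hop.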

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-06 Thread NUNIN Roberto
Hi Dan

Sorry for question: what do you mean for interface vnet ?
Currently our path is :
eno1 + eno2 -> bond0 -> bond0.3500 (VLAN) -> bridge -> vm.

Which one of these ?
Moreover, reading Fabian's statements about bonding limits, today I can try to 
switch to a config without bonding.

BR

Roberto
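
For reference, each hop of the eno1/eno2 -> bond0 -> bond0.3500 -> bridge ->
vnet path can be enumerated on the host; a hedged sketch:

  # bond members and mode
  cat /proc/net/bonding/bond0
  # VLAN device details
  ip -d link show bond0.3500
  # bridges and their ports (the vnetNNN tap shows up here while the VM runs)
  brctl show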

 -Messaggio originale-
 Da: Dan Kenigsberg [mailto:dan...@redhat.com]
 Inviato: lunedì 6 luglio 2015 10:02
 A: NUNIN Roberto
 Cc: users@ovirt.org; ibar...@redhat.com
 Oggetto: Re: R: R: R: [ovirt-users] R: PXE boot of a VM on vdsm don't read
 DHCP offer

 On Fri, Jul 03, 2015 at 04:32:50PM +0200, NUNIN Roberto wrote:
  Thanks for answering. My considerations below.
 
  BR,
 
  ROberto
   -Messaggio originale-
   Da: Dan Kenigsberg [mailto:dan...@redhat.com]
   Inviato: venerdì 3 luglio 2015 12:31
   A: NUNIN Roberto
   Cc: users@ovirt.org; ibar...@redhat.com
   Oggetto: Re: R: R: [ovirt-users] R: PXE boot of a VM on vdsm don't read
   DHCP offer
  
   On Fri, Jul 03, 2015 at 10:51:52AM +0200, NUNIN Roberto wrote:
Hi Dan, guys
   
 Sorry for the very late follow-up, but we had a lot of other topics to fix
 before getting back to this one.
   
We have tried another approach just to check if the kernel of the vdsm
 iso
   image used to install the host could create the problem I've reported to
 the
   list.
   
Now we have reinstalled the same hardware with latest CentOS 7.1,
 fully
   updated.
Installed vdsm, then joined the oVirt cluster.
   
Well, we are observing the same behavior as before.
No DHCP offer is reaching the booting VM, and:
   
 brctl showmacs bridge_if shows us the booting vm mac-address
 tcpdump -i bridge_if shows us the dhcp offer coming from the dhcp server.
  
   The offer should be replicated to the tap device (vnetXXX). I assume
   that it does not?
 
   The DHCP offer packet coming from the DHCP server flows through the bond0
   device (tagged), the VLAN device (tagged) and the bridge interface
   (no longer tagged, correctly), so it seems that the path back is
   complete, considering that brctl showmacs shows the vm mac, and if we
   put a static IP into the vm nic, it works correctly.

 Let's try to trace the offer for one more link. If it does not get to
 the tap device vnetXXX - it is clearly a host-side issue. Maybe a bridge
 module bug. If you spot the offer on the tap, then it becomes more likely
 that the issue is in qemu or elsewhere in the virt stack.

 I would appreciate your assistance!

 
  
   
We have also tried to remove ANY firewall rule.
   
 It isn't a PXE issue (gPXE 0.9.7) but only a DHCP process issue. In fact,
 if we install a vm manually and assign a static IP, it works fine.
 If we switch to dhcp, the vm doesn't get the dynamic one.
In this case, tcpdump on vm shows only the DHCP discovery, not the
 DHCP
   offer.
   
Any further suggestion/hint ?
 
  We have mode 4, but already checked active/backup, mode 2, without
 changes.
 

 Thanks, so it's not that bonding issue.

 Regards,
 Dan.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-03 Thread NUNIN Roberto
Thanks for answering. My considerations below.

BR,

ROberto
 -Messaggio originale-
 Da: Dan Kenigsberg [mailto:dan...@redhat.com]
 Inviato: venerdì 3 luglio 2015 12:31
 A: NUNIN Roberto
 Cc: users@ovirt.org; ibar...@redhat.com
 Oggetto: Re: R: R: [ovirt-users] R: PXE boot of a VM on vdsm don't read
 DHCP offer

 On Fri, Jul 03, 2015 at 10:51:52AM +0200, NUNIN Roberto wrote:
  Hi Dan, guys
 
   Sorry for the very late follow-up, but we had a lot of other topics to fix
   before getting back to this one.
 
  We have tried another approach just to check if the kernel of the vdsm iso
 image used to install the host could create the problem I've reported to the
 list.
 
  Now we have reinstalled the same hardware with latest CentOS 7.1, fully
 updated.
  Installed vdsm, then joined the oVirt cluster.
 
  Well, we are observing the same behavior as before.
  No DHCP offer is reaching the booting VM, and:
 
   brctl showmacs bridge_if shows us the booting vm mac-address
   tcpdump -i bridge_if shows us the dhcp offer coming from the dhcp server.

 The offer should be replicated to the tap device (vnetXXX). I assume
 that it does not?

The DHCP offer packet coming from the DHCP server flows through the bond0 device 
(tagged), the VLAN device (tagged) and the bridge interface (no longer 
tagged, correctly), so it seems that the path back is complete, considering that 
brctl showmacs shows the vm mac, and if we put a static IP into the vm nic, it 
works correctly.


 
  We have also tried to remove ANY firewall rule.
 
 It isn't a PXE issue (gPXE 0.9.7) but only a DHCP process issue. In fact,
 if we install a vm manually and assign a static IP, it works fine.
 If we switch to dhcp, the vm doesn't get the dynamic one.
  In this case, tcpdump on vm shows only the DHCP discovery, not the DHCP
 offer.
 
  Any further suggestion/hint ?

We have mode 4, but already checked active/backup, mode 2, without changes.



 Which bond mode are you using? I recently saw

 https://bugzilla.redhat.com/show_bug.cgi?id=1230638#c22

 closed as duplicate of

 Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM
 networks
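
Verifying which bonding mode is actually in effect is a one-liner; a hedged
example:

  grep -i '^Bonding Mode' /proc/net/bonding/bond0

Mode 4 (802.3ad), as used in this thread, is not among the modes the bug above
says should be avoided.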

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-03 Thread NUNIN Roberto


…

  I've observed this behavior in bug
  https://bugzilla.redhat.com/show_bug.cgi?id=1230638
  We also removed all firewall rules, checked iPXE and I also saw the
  requests
  going out, but no replies getting to the VM.
  But here it sounds like it isn't specific to bonds.
  In the end I did not find the solution yet.

 In our config, I can see the DHCP offer as far as the hypervisor bridge
 interface toward the vm

Just to make sure, is your setup:

NIC -- bridge -- VM?

Or is a bond involved?

NICs (two) -> BOND (mode 4) -> VLAN -> BRIDGE -> VM



 
  It is probably a good idea to install the OS with a static IP, and then
  switch to dhcp, then use tcpdump inside the vm to see what is reaching
  the inside.

 Already done. The vm does not acquire the IP address and, on the vm side,
 tcpdump shows only requests.
 At the same time, the DHCP offer is detected on the bridge interface of the
 hypervisor.

 With static IP, vm works fine.

Yes, it only seems to be around dhcp.

If you don't have a bond involved, then please reopen the bug above, and we'll 
try to find someone to look at it.

- fabian
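
When chasing this from the guest side, running the DHCP client verbosely shows
exactly which stage stalls; a hedged example for a CentOS 7 guest (interface
name illustrative):

  dhclient -v eth0

With the behavior described above, this should log repeated DHCPDISCOVER lines
and no DHCPOFFER, matching the tcpdump results.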

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-07-03 Thread NUNIN Roberto
Hi Dan, guys

Sorry for the very late follow-up, but we had a lot of other topics to fix 
before getting back to this one.

We have tried another approach just to check if the kernel of the vdsm iso 
image used to install the host could create the problem I've reported to the 
list.

Now we have reinstalled the same hardware with latest CentOS 7.1, fully updated.
Installed vdsm, then joined the oVirt cluster.

Well, we are observing the same behavior as before.
No DHCP offer is reaching the booting VM, and:

brctl showmacs bridge_if shows us the booting vm mac-address
tcpdump -i bridge_if shows us the dhcp offer coming from the dhcp server.

We have also tried to remove ANY firewall rule.

It isn't a PXE issue (gPXE 0.9.7) but only a DHCP process issue. In fact, if we 
install a vm manually and assign a static IP, it works fine.
If we switch to dhcp, the vm doesn't get the dynamic one.
In this case, tcpdump on vm shows only the DHCP discovery, not the DHCP offer.

Any further suggestion/hint ?



RN

 -Messaggio originale-
 Da: Dan Kenigsberg [mailto:dan...@redhat.com]
 Inviato: lunedì 18 maggio 2015 16:14
 A: NUNIN Roberto
 Cc: users@ovirt.org; ibar...@redhat.com
 Oggetto: Re: R: [ovirt-users] R: PXE boot of a VM on vdsm don't read DHCP
 offer

 On Fri, May 08, 2015 at 03:11:25PM +0200, NUNIN Roberto wrote:
  Hi Dan
  Thanks for answering
 
 
 
   Which kernel does the el7 host run? I think that Ido has seen a case
   where `brctl showmacs` was not populated with the VM mac, despite a
   packet coming out of it.
 
 Kernel is: 3.10.0-123.20.1.el7.x86_64, package is vdsm only. brctl isn't
 available within the vdsm-only package.

 Could you try upgrading to a more up-to-date
 http://mirror.centos.org/centos-
 7/7.1.1503/updates/x86_64/Packages/kernel-3.10.0-
 229.4.2.el7.x86_64.rpm
 ?

 bridge-utils is a vdsm dependency. It must exist on your host. Please
 see if the mac of the vNIC shows up on `brctl showmacs` as it should.

  
   Can you tcpdump and check whether the bridge propogated the DHCP
 offer
   to the tap device of the said VM? Does the packet generated by
   `ether-wake MAC-of-VM` reach the tap device?
 
  Yes: the host sees the broadcast:
    0.00      0.0.0.0        255.255.255.255   DHCP   346   DHCP Discover - Transaction ID 0x69267b67
  It came from the right MAC:
    Source: Qumranet_15:81:03 (00:1a:4a:15:81:03)
  And it is tagged correctly:
    802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 3500
 
  This is the offer, on the bond interface:
    1.012355  10.155.124.2   10.155.124.246    DHCP   346   DHCP Offer - Transaction ID 0x69267b67
  Layer 2 info:
    Ethernet II, Src: Cisco_56:83:c3 (84:78:ac:56:83:c3), Dst: Qumranet_15:81:03 (00:1a:4a:15:81:03)
  Tagging on the bond:
    802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 3500
 
  The tag is correctly removed when the DHCP offer is forwarded over the
  bond0.3500.
  Here's the offer content, everything seems right:
 
  Client IP address: 0.0.0.0 (0.0.0.0)
  Your (client) IP address: 10.155.124.246 (10.155.124.246)
  Next server IP address: 10.155.124.223 (10.155.124.223)
  Relay agent IP address: 10.155.124.2 (10.155.124.2)
  Client MAC address: Qumranet_15:81:03 (00:1a:4a:15:81:03)
  Client hardware address padding: 
  Server host name: 10.155.124.223
  Boot file name: pxelinux.0
  Magic cookie: DHCP
 
  Nothing of this offer appears on the VM side.

 But does it show on the host's bridge? on the tap device?

 
  ether-wake -i bond0.3500 00:1a:4a:15:81:03 (started from the host)
  reaches the VM eth0 interface:
  2.002028   HewlettP_4a:47:b0 Qumranet_15:81:03 WOL  116
 MagicPacket for Qumranet_15:81:03 (00:1a:4a:15:81:03)
 
  Really strange behavior.
 
  Roberto
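
 Besides plain firewall rules, bridged traffic on a host like this can also be
 affected by ebtables and the bridge-netfilter sysctls; a hedged checklist for
 ruling those out:

   # link-layer filtering rules
   ebtables -t filter -L
   # is bridged traffic being handed to iptables at all?
   sysctl net.bridge.bridge-nf-call-iptables
   # if it is, inspect the FORWARD chain counters
   iptables -L FORWARD -n -v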

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: NFS export domain still remain in Preparing for maintenance

2015-05-20 Thread NUNIN Roberto
Hi Maor

Sorry for the late answer, I was out of office for a couple of days.

You were right, it was an engine issue. It never happened before, even after 
attaching and detaching the export domain more than once.

A restart solved the issue: the export domain that before the reboot was shown 
as "Preparing for maintenance" was shown in the correct state after the reboot, 
so detaching and reattaching to another DC was tried successfully, at least 
three times, with export / import activity as well.

Thank you so much !.
Best regards,



Roberto
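
For the log gathering Maor asks about below, the default locations are
/var/log/ovirt-engine/engine.log on the engine machine and
/var/log/vdsm/vdsm.log on each host; a hedged way to pull out the relevant
events:

  # engine side: everything mentioning the export domain
  grep -i 'NFS_Export' /var/log/ovirt-engine/engine.log
  # host side: storage connect/disconnect activity around the deactivation
  grep -iE 'disconnectStorageServer|deactivate' /var/log/vdsm/vdsm.log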


 -Messaggio originale-
 Da: Maor Lipchuk [mailto:mlipc...@redhat.com]
 Inviato: domenica 17 maggio 2015 12:00
 A: NUNIN Roberto
 Cc: users@ovirt.org
 Oggetto: Re: R: [ovirt-users] NFS export domain still remain in Preparing for
 maintenance

 Hi NUNIN,

 Usually when an NFS Storage Domain is in preparing for maintenance
 status, it is because one of the Hosts encountered a problem while unmounting
 the storage domain.
 Can you please add the engine logs and also the VDSM logs of the hosts you
 are using in the Data Center.

 While gathering all the logs you could also try the following actions:
 1. Since you wrote that you do see an audit log indicating the operation
 succeeded, it could also be a refresh problem in the GUI.
 Could you try to switch tabs in the GUI (for example, pick the Virtual
 Machine main tab and switch back to the Storage Domain tab)?

 If that does not help, try to restart your engine service (guessing it could
 also be an issue with the Storage Domain monitoring).
 Please let me know if there is any difference.

 Regards,
 Maor



 - Original Message -
  From: NUNIN Roberto roberto.nu...@comifar.it
  To: Maor Lipchuk mlipc...@redhat.com
  Cc: users@ovirt.org
  Sent: Friday, May 15, 2015 10:28:36 AM
  Subject: R: [ovirt-users] NFS export domain still remain in Preparing for
   maintenance
 
  Apologize for spamming: when I try to put the NFS_Export in maintenance
  mode, in the engine GUI, messages area, I can see:

  2015-May-15, 09:22
  Storage Domain NFS_Export (Data Center x) was deactivated.

  but the status of the NFS_Export still remains "Preparing For Maintenance"
  and the Detach function is disabled.

  BR

  RN

   -Messaggio originale-
   Da: NUNIN Roberto
   Inviato: venerdì 15 maggio 2015 09:22
   A: 'Maor Lipchuk'
   Cc: users@ovirt.org
   Oggetto: R: [ovirt-users] NFS export domain still remain in Preparing for
   maintenance

   Hi Maor

   Thanks for answering.

   Probably my description wasn't so clear; I'll try to explain better.

   We have configured one storage as NFS export (exported from a server that
   isn't part of the oVirt installation), to be used to move exported VMs
   between one DC and another.

   The process to export VMs ends correctly but, when we try to put the NFS
   export storage in maintenance mode (not the hosts) before detaching it
   from the current DC and, subsequently, attaching it to the DC where I want
   to import the VMs, the process remains hung indefinitely in the "Preparing
   for maintenance" phase.

   I'm sure that the VM export process ended correctly, looking at the logs
   in the engine GUI and also at the data in the NFS export.

   Because only one export storage domain is allowed per DC, I'm not able to
   move VMs between DCs.

   This is the behavior.

   Best regards

   Roberto

    -Messaggio originale-
    Da: Maor Lipchuk [mailto:mlipc...@redhat.com]
    Inviato: mercoledì 13 maggio 2015 23:18
    A: NUNIN Roberto
    Cc: users@ovirt.org
    Oggetto: Re: [ovirt-users] NFS export domain still remain in Preparing
    for maintenance

    Hi NUNIN,

    I'm not sure that I clearly understood the problem.
    You wrote that your NFS export is attached to a 6.6 cluster, though a
    cluster is mainly an entity which contains Hosts.

    If it is the Host that was preparing for maintenance then it could be
    that there are VMs which are running on that Host which are currently
    during live migration.
    In that case you could either manually migrate those VMs, shut them
    down, or simply move the Host back to active.
    Is that indeed the issue? If not, can you elaborate a bit more please

    Thanks,
    Maor

    - Original Message -
     From: NUNIN Roberto roberto.nu...@comifar.it
     To: users@ovirt.org
     Sent: Tuesday, May 12, 2015 5:17:39 PM
     Subject: [ovirt-users] NFS export domain still remain in Preparing for
     maintenance

     Hi all

     We are using oVirt engine 3.5.1-0.0 on Centos 6.6

     We have two DC. One with hosts using
[ovirt-users] R: NFS export domain still remain in Preparing for maintenance

2015-05-15 Thread NUNIN Roberto
Hi Maor

Thanks for answering.

Probably my description wasn't so clear; I'll try to explain better.

We have configured one storage as NFS export (exported from a server that isn't 
part of the oVirt installation), to be used to move exported VMs between one DC 
and another.

The process to export VMs ends correctly but, when we try to put the NFS 
export storage in maintenance mode (not the hosts) before detaching it from 
the current DC and, subsequently, attaching it to the DC where I want to import 
the VMs, the process remains hung indefinitely in the "Preparing for 
maintenance" phase.

I'm sure that the VM export process ended correctly, looking at the logs in 
the engine GUI and also at the data in the NFS export.

Because only one export storage domain is allowed per DC, I'm not able to move 
VMs between DCs.

This is the behavior.

Best regards


Roberto

 -Messaggio originale-
 Da: Maor Lipchuk [mailto:mlipc...@redhat.com]
 Inviato: mercoledì 13 maggio 2015 23:18
 A: NUNIN Roberto
 Cc: users@ovirt.org
 Oggetto: Re: [ovirt-users] NFS export domain still remain in Preparing for
 maintenance

 Hi NUNIN,

 I'm not sure that I clearly understood the problem.
 You wrote that your NFS export is attached to a 6.6 cluster, though a cluster
 is mainly an entity which contains Hosts.

 If it is the Host that was preparing for maintenance then it could be that
 there are VMs which are running on that Host which are currently during live
 migration.
 In that case you could either manually migrate those VMs, shut them down,
 or simply move the Host back to active.
 Is that indeed the issue? If not, can you elaborate a bit more please


 Thanks,
 Maor



 - Original Message -
  From: NUNIN Roberto roberto.nu...@comifar.it
  To: users@ovirt.org
  Sent: Tuesday, May 12, 2015 5:17:39 PM
  Subject: [ovirt-users] NFS export domain still remain in Preparing for
   maintenance
 
 
 
  Hi all
 
 
 
  We are using oVirt engine 3.5.1-0.0 on Centos 6.6
 
  We have two DC. One with hosts using vdsm-4.16.10-
 8.gitc937927.el7.x86_64,
  the other vdsm-4.16.12-7.gita30da75.el6.x86_64 on Centos6.6
 
  No hosted-engine, it runs on a dedicated VM, outside oVirt.
 
 
 
  Behavior: when trying to put the NFS export currently active and attached
  to the 6.6 cluster, used to move VMs from one DC to the other, this remains
  indefinitely in the "Preparing for maintenance" phase.
 
 
 
  No DNS resolution issue in place. All parties involved resolve directly
  and via reverse resolution.

  I've read about the el7 IPv6 bug, but here we have the problem
  on Centos 6.6 hosts.
 
 
 
  Any idea/suggestion/further investigation ?
 
 
 
  Can we reinitialize the NFS export in some way ? Only erasing content ?
 
  Thanks in advance for any suggestion.
 
 
 
 
 
  Roberto Nunin
 
  Italy
 
 
 
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: NFS export domain still remain in Preparing for maintenance

2015-05-15 Thread NUNIN Roberto
Apologize for spamming: when I try to put the NFS_Export in maintenance mode, 
in the engine GUI, messages area, I can see:

2015-May-15, 09:22

Storage Domain NFS_Export (Data Center x) was deactivated.

but the status of the NFS_Export still remains "Preparing For Maintenance" and 
the Detach function is disabled.

BR





RN





 -Messaggio originale-
 Da: NUNIN Roberto
 Inviato: venerdì 15 maggio 2015 09:22
 A: 'Maor Lipchuk'
 Cc: users@ovirt.org
 Oggetto: R: [ovirt-users] NFS export domain still remain in Preparing for
 maintenance

 Hi Maor

 Thanks for answering.

 Probably my description wasn't so clear; I'll try to explain better.

 We have configured one storage as NFS export (exported from a server that
 isn't part of the oVirt installation), to be used to move exported VMs
 between one DC and another.

 The process to export VMs ends correctly but, when we try to put the NFS
 export storage in maintenance mode (not the hosts) before detaching it
 from the current DC and, subsequently, attaching it to the DC where I want
 to import the VMs, the process remains hung indefinitely in the "Preparing
 for maintenance" phase.

 I'm sure that the VM export process ended correctly, looking at the logs in
 the engine GUI and also at the data in the NFS export.

 Because only one export storage domain is allowed per DC, I'm not able to
 move VMs between DCs.

 This is the behavior.

 Best regards

 Roberto

  -Messaggio originale-
  Da: Maor Lipchuk [mailto:mlipc...@redhat.com]
  Inviato: mercoledì 13 maggio 2015 23:18
  A: NUNIN Roberto
  Cc: users@ovirt.org
  Oggetto: Re: [ovirt-users] NFS export domain still remain in Preparing for
  maintenance

  Hi NUNIN,

  I'm not sure that I clearly understood the problem.
  You wrote that your NFS export is attached to a 6.6 cluster, though a
  cluster is mainly an entity which contains Hosts.

  If it is the Host that was preparing for maintenance then it could be that
  there are VMs which are running on that Host which are currently during
  live migration.
  In that case you could either manually migrate those VMs, shut them down,
  or simply move the Host back to active.
  Is that indeed the issue? If not, can you elaborate a bit more please

  Thanks,
  Maor

  - Original Message -
   From: NUNIN Roberto roberto.nu...@comifar.it
   To: users@ovirt.org
   Sent: Tuesday, May 12, 2015 5:17:39 PM
   Subject: [ovirt-users] NFS export domain still remain in Preparing for
   maintenance

   Hi all

   We are using oVirt engine 3.5.1-0.0 on Centos 6.6

   We have two DC. One with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
   the other vdsm-4.16.12-7.gita30da75.el6.x86_64 on Centos6.6

   No hosted-engine, it runs on a dedicated VM, outside oVirt.

   Behavior: when trying to put the NFS export currently active and attached
   to the 6.6 cluster, used to move VMs from one DC to the other, this
   remains indefinitely in the "Preparing for maintenance" phase.

   No DNS resolution issue in place. All parties involved resolve directly
   and via reverse resolution.

   I've read about the el7 IPv6 bug, but here we have the problem on
   Centos 6.6 hosts.

   Any idea/suggestion/further investigation ?

   Can we reinitialize the NFS export in some way ? Only erasing content ?

   Thanks in advance for any suggestion.

   Roberto Nunin

   Italy
  

  

  

   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users

[ovirt-users] NFS export domain still remain in Preparing for maintenance

2015-05-12 Thread NUNIN Roberto
Hi all

We are using oVirt engine 3.5.1-0.0 on Centos 6.6
We have two DC. One with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64, the 
other vdsm-4.16.12-7.gita30da75.el6.x86_64 on Centos6.6
No hosted-engine, it runs on a dedicated VM, outside oVirt.

Behavior: when trying to put the NFS export currently active and attached to 
the 6.6 cluster, used to move VMs from one DC to the other, this remains 
indefinitely in the "Preparing for maintenance" phase.

No DNS resolution issue in place. All parties involved resolve directly and 
via reverse resolution.
I've read about the el7 IPv6 bug, but here we have the problem on 
Centos 6.6 hosts.

Any idea/suggestion/further investigation ?

Can we reinitialize the NFS export in some way ? Only erasing content ?
Thanks in advance for any suggestion.
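
On the "reinitialize" question: an export domain keeps its state in a
dom_md/metadata file at the top of the share, next to the exported images, and
it is that metadata (notably the POOL_UUID entry recording the attachment)
rather than the content that ties it to a pool. A hedged look from any NFS
client (server name and mount point illustrative):

  mount -t nfs nfsserver:/export /mnt/tmp
  cat /mnt/tmp/*/dom_md/metadata

So erasing the exported VMs alone would not detach the domain from its old DC.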


Roberto Nunin
Italy



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: R: PXE boot of a VM on vdsm don't read DHCP offer

2015-05-08 Thread NUNIN Roberto
Hi Rick

No, DHCP is provided by an external appliance.
Thanks for answering !



Roberto


-Messaggio originale-
Da: Rik Theys [mailto:rik.th...@esat.kuleuven.be]
Inviato: giovedì 7 maggio 2015 20:46
A: NUNIN Roberto
Cc: users@ovirt.org
Oggetto: Re: [ovirt-users] R: PXE boot of a VM on vdsm don't read DHCP offer

Hi,

Is your DHCP server a VM running on the same host? I've seen some
strange issues where a VM could not obtain a DHCP lease if it was
running on the same physical machine as the client. If this is the case,
I can look up what I had to change, otherwise ignore it.

Regards,

Rik
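
For the same-host case Rik mentions, a commonly reported culprit is UDP
checksum offload: locally generated DHCP replies can carry checksums the
guest's client rejects. Two workarounds seen in the field, offered here as a
hedged aside (not applicable to this thread, where the DHCP server is an
external appliance):

  # fill in checksums on DHCP replies leaving the host
  iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill
  # or disable tx checksum offload on the relevant interface
  ethtool -K vnet0 tx off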

 Further info about this case:

 An already installed and running VM (Centos7) with static IPv4 assignment, if 
 changed to DHCP mode, does not acquire the IP address.



 In this case, tcpdump taken on the VM does not show the DHCP offer packets 
 that are instead seen on the host bond interface.

 Seems that something is filtering DHCP offers between host physical eth 
 interfaces and VM virtio eth interface.

 Physical servers on the same VLAN receive DHCP offers and boot from PXE 
 correctly.



 Roberto







 Hi all



 We are using oVirt engine 3.5.1-0.0 on Centos 6.6

 We are deploying two hosts with vdsm-4.16.10-8.gitc937927.el7.x86_64

 No hosted-engine, it run on a dedicates VM, outside oVirt.



 Behavior: PXE boot of a VM ends in timeout (0x4c106035), instead of accepting 
 the DHCP offer coming from the DHCP server.

 A tcpdump capture started on the vdsm host, bond0 interface, clearly shows 
 that DHCP offers reach the vdsm interfaces three times before the PXE client 
 ends in timeout.

 The incoming DHCP offer is correctly tagged when it comes to the bond0 
 interface and forwarded to the bridge interface on bond0.

 PXE simply ignores it. PXE version is gPXE 0.9.7.

 The bridge interface is already set up with STP=off and DELAY=0.



 If we install a VM using command line boot parameters, the VM install runs 
 fine. The issue is only related to the PXE process, when it is expected to 
 use the DHCP offer.



 I can provide the tcpdump capture, but I've not attached it to the email 
 because I'm quite new to the community and don't know if it is 
 allowed/correct.



 On another host, under the same engine, running 
 vdsm-4.16.12-7.gita30da75.el6.x86_64 on Centos6.6, this behavior is
 not happening, everything works fine.



 Any idea/suggestion/further investigation ?

 Thanks for attention

 Best regards





 Roberto Nunin

 Infrastructure Manager

 Italy





 Here are the interface configs:

 eno1:
   DEVICE=eno1
   HWADDR=38:63:bb:4a:47:b0
   MASTER=bond0
   NM_CONTROLLED=no
   ONBOOT=yes
   SLAVE=yes

 eno2:
   DEVICE=eno2
   HWADDR=38:63:bb:4a:47:b4
   MASTER=bond0
   NM_CONTROLLED=no
   ONBOOT=yes
   SLAVE=yes

 bond0:
   BONDING_OPTS="mode=4 miimon=100"
   DEVICE=bond0
   NM_CONTROLLED=no
   ONBOOT=yes
   TYPE=Bond

 bond0.3500:
   DEVICE=bond0.3500
   VLAN=yes
   BRIDGE=DMZ3_DEV
   ONBOOT=no
   MTU=1500
   NM_CONTROLLED=no
   HOTPLUG=no

 DMZ3_DEV:
   DEVICE=DMZ3_DEV
   TYPE=Bridge
   DELAY=0
   STP=off
   ONBOOT=no
   MTU=1500
   DEFROUTE=no
   NM_CONTROLLED=no
   HOTPLUG=no











Questo messaggio e' indirizzato esclusivamente al destinatario indicato e 
potrebbe contenere informazioni confidenziali, riservate o proprietarie. 
Qualora la presente venisse ricevuta per errore, si prega di segnalarlo 
immediatamente al mittente, cancellando l'originale e ogni sua copia e 
distruggendo eventuali copie cartacee. Ogni altro uso e' strettamente proibito 
e potrebbe essere fonte di violazione di legge.

This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately, deleting the original and all 
copies and destroying any hard copies. Any other use is strictly prohibited and 
may be unlawful.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re: Re: PXE boot of a VM on vdsm doesn't read DHCP offer

2015-05-08 Thread NUNIN Roberto
Hi Dan
Thanks for answering



 Which kernel does the el7 host run? I think that Ido has seen a case
 where `brctl showmacs` was not populated with the VM mac, despite a
 packet coming out of it.

The kernel is 3.10.0-123.20.1.el7.x86_64; this is a vdsm-only install, and 
brctl isn't available with the vdsm-only package set.
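
For reference, the iproute2 `bridge` tool can stand in for brctl showmacs on a vdsm-only host; a minimal sketch, assuming the bridge is DMZ3_DEV as in the configs and the VM MAC is the Qumranet one seen in the captures:

# List the bridge's forwarding database and look for the VM's MAC
bridge fdb show br DMZ3_DEV | grep -i 00:1a:4a:15:81:03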

 Can you tcpdump and check whether the bridge propagated the DHCP offer
 to the tap device of the said VM? Does the packet generated by
 `ether-wake MAC-of-VM` reach the tap device?

Yes: the host sees the broadcast:
0.000  0.0.0.0  255.255.255.255  DHCP 346
DHCP Discover - Transaction ID 0x69267b67
It came from the right MAC:
Source: Qumranet_15:81:03 (00:1a:4a:15:81:03)
And it is tagged correctly:
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 3500

This is the offer, on the bond interface:
1.012355  10.155.124.2  10.155.124.246  DHCP 346
DHCP Offer - Transaction ID 0x69267b67
Layer 2 info:
Ethernet II, Src: Cisco_56:83:c3 (84:78:ac:56:83:c3), Dst: 
Qumranet_15:81:03 (00:1a:4a:15:81:03)
Tagging on the bond:
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 3500

The tag is correctly removed when the DHCP offer is forwarded over bond0.3500.
Here's the offer content; everything seems right:

Client IP address: 0.0.0.0 (0.0.0.0)
Your (client) IP address: 10.155.124.246 (10.155.124.246)
Next server IP address: 10.155.124.223 (10.155.124.223)
Relay agent IP address: 10.155.124.2 (10.155.124.2)
Client MAC address: Qumranet_15:81:03 (00:1a:4a:15:81:03)
Client hardware address padding: 
Server host name: 10.155.124.223
Boot file name: pxelinux.0
Magic cookie: DHCP

Nothing of this offer appears on the VM side.
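
One thing worth ruling out at this point (a suggestion, not something verified in this thread): netfilter hooks on the bridge path can silently drop bridged DHCP traffic before it reaches the tap device. On a stock el7 host, these checks show whether bridged frames are being run through the filtering layers at all:

# If this prints 1, bridged frames traverse iptables (the sysctl only
# exists while bridge netfilter is active)
sysctl net.bridge.bridge-nf-call-iptables

# Look for ebtables rules that could filter DHCP (UDP ports 67/68)
ebtables -t filter -L

# And check the ordinary FORWARD chain for drops
iptables -L FORWARD -n -v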

ether-wake -i bond0.3500 00:1a:4a:15:81:03 (started from the host)
reaches the VM eth0 interface:
2.002028   HewlettP_4a:47:b0 Qumranet_15:81:03 WOL  116
MagicPacket for Qumranet_15:81:03 (00:1a:4a:15:81:03)

Really strange behavior.

Roberto




[ovirt-users] Presentation

2015-05-07 Thread NUNIN Roberto
Hello

I'm Roberto Nunin, responsible for Infrastructure in an Italian company, part 
of a multinational group.
I'm really interested in oVirt technology and its future developments.

Currently we are running a PoC of three clusters (six hosts), with an issue I've 
already submitted to the community.

I hope to find answers, solutions, and suggestions related to the product within 
the community, and I will try to deepen my knowledge of it.

Have a nice day

Roberto Nunin
RHCE#110-006-970





[ovirt-users] Re: PXE boot of a VM on vdsm doesn't read DHCP offer

2015-05-07 Thread NUNIN Roberto
Further info about this case:

An already installed and running VM (CentOS 7) with a static IPv4 assignment, if 
changed to DHCP mode, does not acquire an IP address.

In this case, a tcpdump taken on the VM does not show the DHCP offer packets that 
are instead seen on the host bond interface.
It seems that something is filtering DHCP offers between the host's physical eth 
interfaces and the VM's virtio eth interface.
Physical servers on the same VLAN receive the DHCP offers and boot from PXE correctly.

Roberto





[ovirt-users] PXE boot of a VM on vdsm doesn't read DHCP offer

2015-05-06 Thread NUNIN Roberto
Hi all

We are using oVirt engine 3.5.1-0.0 on CentOS 6.6.
We are deploying two hosts with vdsm-4.16.10-8.gitc937927.el7.x86_64.
There is no hosted engine; the engine runs on a dedicated VM, outside oVirt.

Behavior: PXE boot of a VM ends in a timeout (0x4c106035) instead of accepting 
the DHCP offer coming from the DHCP server.
A tcpdump capture started on the vdsm host's bond0 interface clearly shows that 
the DHCP offers reach the vdsm interfaces three times before the PXE client 
times out.
The incoming DHCP offer is correctly tagged when it arrives on the bond0 
interface and is forwarded to the bridge interface on top of bond0.
PXE simply ignores it. The PXE version is gPXE 0.9.7.
The bridge interface is already set up with STP=off and DELAY=0.
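
As a runtime sanity check (a sketch, assuming the bridge device is DMZ3_DEV as in the configs below), the live bridge parameters can be read from sysfs to confirm STP really is off and the forwarding delay is zero:

# 0 means STP disabled
cat /sys/class/net/DMZ3_DEV/bridge/stp_state
# forward_delay is in centiseconds; 0 is expected here
cat /sys/class/net/DMZ3_DEV/bridge/forward_delay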

If we install a VM using command-line boot parameters, the VM install runs fine. 
The issue is limited to the PXE process, at the point where it is expected to use 
the DHCP offer.

I can provide the tcpdump capture, but I haven't attached it to this email because 
I'm quite new to the community and don't know if that is allowed/correct.

On another host, under the same engine, running 
vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6, this behavior does not 
occur; everything works fine.

Any ideas, suggestions, or further lines of investigation?
Thanks for your attention.
Best regards


Roberto Nunin
Infrastructure Manager
Italy


Here are the interface configs:
eno1:
DEVICE=eno1
HWADDR=38:63:bb:4a:47:b0
MASTER=bond0
NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes
eno2:
DEVICE=eno2
HWADDR=38:63:bb:4a:47:b4
MASTER=bond0
NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes
bond0:
BONDING_OPTS=mode=4 miimon=100
DEVICE=bond0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Bond
bond0.3500:
DEVICE=bond0.3500
VLAN=yes
BRIDGE=DMZ3_DEV
ONBOOT=no
MTU=1500
NM_CONTROLLED=no
HOTPLUG=no
DMZ3_DEV:
DEVICE=DMZ3_DEV
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=no
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
HOTPLUG=no
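
Note that bond0.3500 and DMZ3_DEV carry ONBOOT=no, so vdsm is expected to bring them up itself. A quick way to confirm the chain is actually up at runtime and that the VM's tap device is enslaved to the bridge (vnet0 is an assumed name):

# VLAN interface and bridge details (state, VLAN id, bridge flags)
ip -d link show bond0.3500
ip -d link show DMZ3_DEV

# Ports attached to the bridge; the VM's tap (e.g. vnet0) should be listed
ls /sys/class/net/DMZ3_DEV/brif/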




