I guess that with my Dell manuals and your configuration as a reference, I
will be able to do it well.
http://www.dell.com/downloads/global/products/pwcnt/en/app_note_6.pdf
PowerConnect 52xx:
console# config
console(config)# ip igmp snooping
console(config)# ip igmp snooping querier
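For what it's worth, the result can usually be verified on the switch afterwards; I am quoting this from memory, so treat the exact command syntax on the 52xx as an assumption:
console# show ip igmp snooping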
Cisco is similar.
The following rule on your PVE nodes should prevent IGMP packets from
flooding your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP
If something goes wrong, you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP
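To confirm the rule is actually matching, you can check the packet counters in the FORWARD chain:
iptables -t filter -L FORWARD -v -n | grep igmp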
Just be careful: it will block all IGMP, so
Hi to all
Recently I tested at the company (with tcpdump) and found that IGMP is
necessary for the VMs. The company runs Windows Servers as VMs and several
Windows systems as workstations on the local network, so I can tell you that
I need the IGMP protocol enabled in some VMs for
Many thanks, Alexandre! It is the rule I had been searching for for a long
time; I will add it to the rc.local file.
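For reference, a minimal sketch of what that could look like in /etc/rc.local (vmbr0 is taken from Alexandre's rule; adjust if your bridge name differs):
#!/bin/sh -e
# drop IGMP forwarded through the bridge so it does not flood the VMs
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP
exit 0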
Moreover, and if you can: as I need to permit multicast on some Windows
Server VMs, on workstations in the local network, and on the PVE nodes, can
you show me the configuration of your switch?
@Dietmar: maybe we can add a default drop rule in -A PVEFW-FORWARD, to drop
multicast traffic from the host?
Or, maybe better, allow creating rules at the datacenter level, and put them
in -A PVEFW-FORWARD?
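As a rough sketch of the idea (224.0.0.0/4 covers all IPv4 multicast destinations; this is my illustration, not an existing PVE rule):
iptables -A PVEFW-FORWARD -d 224.0.0.0/4 -j DROP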
So that we have 'IN', 'OUT', and 'FORWARD' rules at datacenter/host level?
Not sure if
Alexandre Derumier
Systems and Storage Engineer
Office: 03 20 68 90 88
Fax: 03 20 68 90 81
45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris
MonSiteEstLent.com - Blog dedicated to web performance and traffic-peak
management
From: Cesar Peschiera
A minute after applying these commands on only one node (pve6), I lost
quorum on two nodes (pve5 and pve6).
The commands executed on only one node (pve6):
echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
echo 0 > /sys/class/net/vmbr0/bridge/multicast_querier
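The current values can be read back the same way, e.g.:
cat /sys/class/net/vmbr0/bridge/multicast_snooping
cat /sys/class/net/vmbr0/bridge/multicast_querier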
If you enable
On Sat, 3 Jan 2015 21:32:54 -0300
Cesar Peschiera br...@click.com.py wrote:
Now IGMP snooping is disabled on the switch, but I want to avoid flooding
the entire VLAN and the VMs.
The following rule on your PVE nodes should prevent IGMP packets from
flooding your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP
Thanks, Michael, for your reply.
And what about the firewall option in the PVE GUI:
- For the Datacenter.
- For each PVE node.
- For the network device of the VM.
In general terms, I want to have all network traffic enabled (in/out), and
only cut the traffic that I want to cut, which in this case will
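As a rough sketch of that "allow everything, cut only what I choose" policy at the VM level, assuming the /etc/pve/firewall/<vmid>.fw format used by pve-firewall (treat the exact file layout as my assumption and check the wiki):
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT

[RULES]
# cut only the unwanted IGMP
IN DROP -p igmp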
Hi,
But as I need the VMs and the PVE host to be reachable from any
workstation, the VLAN option isn't useful for me.
Ok
And about cluster communication and the VMs: as I don't want the multicast
packets to reach the VMs, I believe I can cut them off for the VMs in
two
Hi Alexandre
Many thanks for your reply, which is much appreciated.
Unfortunately, your suggestion does not work for me, so I will comment on
the results.
Along with some comments, I also have 7 questions for you in this message,
and I'll be very grateful if you can answer them.
Only for that be
Hi Alexandre.
Thanks for your reply.
But as I need the VMs and the PVE host to be reachable from any
workstation, the VLAN option isn't useful for me.
Anyway, I am testing with the I/OAT DMA Engine enabled in the hardware
BIOS; after some days with little activity, the CMAN
Hi Alexandre
Today, after a week, a node lost cluster communication again. So I
changed the hardware BIOS configuration to I/OAT DMA enabled (which works
very well on other Dell R320 nodes with 1 Gb/s NICs).
Moreover, trying to follow your advice to put the 192.100.100.51 IP
Hi Cesar,
I think I totally forgot that we can't add an IP on an interface that is a
slave of a bridge.
Myself, I'm using a tagged VLAN interface for the cluster communication,
something like:
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy
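To make the idea concrete, a minimal sketch of the full stanza; the VLAN tag 100, the layer3+4 hash policy, and the 192.100.100.0/24 addressing are my assumptions, not Alexandre's actual values:
auto bond0
iface bond0 inet manual
    slaves eth0 eth2
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer3+4

# dedicated tagged VLAN for corosync/cluster traffic
auto bond0.100
iface bond0.100 inet static
    address 192.100.100.51
    netmask 255.255.255.0

# the bridge for the VMs stays on the untagged bond
auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0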
I'm interested to know what this option is ;)
Memory Mapped I/O Above 4 GB: Disable
So you need to disable it to avoid any problems?
Maybe this is related:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2050443
Yes, I can write to
Hi Alexandre
Thanks for your reply; here are my answers:
I'm interested to know what this option is ;)
Memory Mapped I/O Above 4 GB: Disable
Can you check that you can write to /etc/pve?
Yes, I can write to /etc/pve.
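A quick way to test it, since /etc/pve is the pmxcfs FUSE mount and writes only succeed while the node has quorum:
touch /etc/pve/writetest && rm /etc/pve/writetest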
And talking about the red lights:
After some hours, the
After several checks, I found the problem on these two servers: a hardware
BIOS setting that isn't compatible with pve-kernel-3.10.0-5, and my NICs
were taking the link down and then back up.
(I guess that soon I will communicate my BIOS setup for the Dell R720.)
... :-)
I'm
But now I
Hi Alexandre
I put the 192.100.100.51 IP address directly on bond0, and I have no
network connectivity (as if the node were totally isolated).
This was my setup:
---
auto bond0
iface bond0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
slaves eth0 eth2
From: Cesar Peschiera
Hi Alexandre
Maybe the problem is in PVE, because:
A) When these 2 nodes have quorum (the light is green in the PVE GUI), the
VM configured for HA does not turn on.
B) Afterwards, I try to start the VM manually, and I get this error message:
Executing HA start for VM 109
Member pve5 trying to enable
Can you post the /etc/network/interfaces of these 10 Gb/s nodes?
This is my configuration:
Note: the LAN uses 192.100.100.0/24
#Network interfaces
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet
Maybe you can try to put the 192.100.100.51 IP address directly on bond0,
to avoid corosync traffic going through vmbr0.
(I remember some old offloading bugs with 10 GbE NICs and the Linux bridge.)
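If you want to rule those out, disabling the offloads on the bond slaves is a quick test (my suggestion; eth0/eth2 are the slaves from your config, and the change lasts only until reboot):
ethtool -K eth0 tso off gso off gro off
ethtool -K eth2 tso off gso off gro off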
- Original Mail -
From: Cesar Peschiera br...@click.com.py
To: aderumier aderum...@odiso.com
Cc:
Hi Alexandre
Many thanks for your reply; here are my answers and comments:
Is multicast working?
Yes
Here is a better explanation:
- I have 8 nodes in a PVE cluster.
- 5 nodes run PVE 3.3 with kernel 3.10.0-19.
- 3 nodes run PVE 2.3 with kernel 2.6.32-19.
When you lose quorum, is multicast working or not?
(Test with omping, for example.)
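For example, run simultaneously on both of the affected nodes (node names taken from your earlier report):
omping -c 10 -i 1 pve5 pve6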
- Original Mail -
From: Cesar Peschiera br...@click.com.py
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, 18 December 2014 15:33:47
Subject: Re: [pve-devel] Quorum
When the quorum loss happens again on these two nodes, I will do the test
with omping, etc.
Many thanks for your reply ... :-)
Cesar
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent:
Yes, Multicast works (tested with omping)
Best regards
Cesar
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Cesar Peschiera br...@click.com.py
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, December 18, 2014 12:38 PM
Subject: Re: [pve-devel] Quorum