Re: [pve-devel] Fwd: [Qemu-stable] [Qemu-devel] [PATCH v2 1/1] migration/block: fix pending() return value

2015-01-03 Thread Dietmar Maurer
 On January 2, 2015 at 5:37 PM Stefan Priebe - Profihost AG
 s.pri...@profihost.ag wrote:
 
 
 Isn't this something which was reported some weeks ago?

Sorry, but what do you refer to?



Re: [pve-devel] ceph : libleveldb 1.12 in ceph extra repository

2015-01-03 Thread Dietmar Maurer
 Currently in the ceph extras repository,
 http://ceph.com/packages/ceph-extras/debian/dists/wheezy/main/binary-amd64/Packages
 the libleveldb version is 1.12, vs 1.9 in the Proxmox repository.


Thanks for the hint. I will update it with the next package upload.



Re: [pve-devel] NUMIPTENT minimal 8 - by 18 get error

2015-01-03 Thread Dietmar Maurer

 I get an error with numiptent 18:18 when I start this container.
 
 Starting container ...
 Container is mounted
 Container start failed (try to check kernel messages, e.g. dmesg | tail)
 Container is unmounted
 dmesg | tail
 
 Fatal resource shortage: numiptent, UB 294.
 CT: 294: stopped
 CT: 294: failed to start with err=-12
 
 When I set it to 100, the container starts. Is the minimum of 8 correct, or is
 the minimum for numiptent higher?

AFAIK we do not set that value, so it is 'unlimited' by default.
It seems you changed it manually?

From the vzctl manpage:

   --numiptent num[:num]
          Number of iptables (netfilter) entries. Setting the barrier and the
          limit to different values does not make practical sense.

So I am not sure what exactly you want to ask.

I assume your container simply uses more than 18 netfilter entries.
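
If you did set it explicitly, you can raise the value (or make it unlimited
again) with vzctl. A minimal example, assuming container ID 294 as in your
dmesg output and a limit of 128 chosen purely for illustration:

vzctl set 294 --numiptent 128:128 --save

or, to return to the default (if your vzctl build accepts the keyword):

vzctl set 294 --numiptent unlimited --save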



Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2015-01-03 Thread Alexandre DERUMIER






Alexandre Derumier
Systems and storage engineer

Phone: 03 20 68 90 88
Fax: 03 20 68 90 81

45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris

MonSiteEstLent.com - Blog dedicated to web performance and handling traffic
spikes


From: Cesar Peschiera br...@click.com.py
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Saturday, January 3, 2015 03:41:20
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs
turns off

Hi Alexandre 

Many thanks for your reply, which is much appreciated. 

Unfortunately, your suggestion does not work for me, so I will describe the
results.

In this message I also have 7 questions for you, in between some comments,
and I'll be very grateful if you can answer them.

Just to be clear about the versions of the programs installed on the nodes
that show the strange behaviour (2 of 6 PVE nodes):
shell> pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve) 
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03) 
pve-kernel-3.10.0-5-pve: 3.10.0-19 
pve-kernel-2.6.32-34-pve: 2.6.32-139 
lvm2: 2.02.98-pve4 
clvm: 2.02.98-pve4 
corosync-pve: 1.4.7-1 
openais-pve: 1.1.4-3 
libqb0: 0.11.1-2 
redhat-cluster-pve: 3.2.0-2 
resource-agents-pve: 3.9.2-4 
fence-agents-pve: 4.0.10-1 
pve-cluster: 3.0-15 
qemu-server: 3.3-5 --special patch created by Alexandre for me
pve-firmware: 1.1-3 
libpve-common-perl: 3.0-19 
libpve-access-control: 3.0-15 
libpve-storage-perl: 3.0-25 
pve-libspice-server1: 0.12.4-3 
vncterm: 1.1-8 
vzctl: 4.0-1pve6 
vzprocps: 2.0.11-2 
vzquota: 3.1-2 
pve-qemu-kvm: 2.2-2 --special patch created by Alexandre for me
ksm-control-daemon: 1.1-1 
glusterfs-client: 3.5.2-1 

A minute after applying these commands on only one node (pve6), I lost
quorum on two nodes (pve5 and pve6).
The commands executed on only one node (pve6):
echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
echo 0 > /sys/class/net/vmbr0/bridge/multicast_querier

The error message on the node where I applied the commands (pve6) is this:
Message from syslogd@pve6 at Jan 2 20:58:32 ... 
rgmanager[4912]: #1: Quorum Dissolved 

And as a side effect, since the pve5 node is configured with HA for a VM
with a failover domain between pve5 and pve6 (the nodes), pve5 has also
lost quorum and the VM that is under HA turns off abruptly.

These are the error messages on the screen of the pve5 node:
[ 61.246002] dlm: rgmanager: send_repeat_remove dir 6 rg=pvevm:112
[119373.380111] dlm: closing connection to node 1
[119373.300150] dlm: closing connection to node 2
[119373.380182] dlm: closing connection to node 3
[119373.300205] dlm: closing connection to node 4
[119373.380229] dlm: closing connection to node 6
[119373.300268] dlm: closing connection to node 7
[119373.380319] dlm: closing connection to node 8
[119545.042242] dlm: closing connection to node 3
[119545.042264] dlm: closing connection to node 8
[119545.042281] dlm: closing connection to node 7
[119545.042300] dlm: closing connection to node 2
[119545.042316] dlm: closing connection to node 1
[119545.042331] dlm: closing connection to node 4
[119545.042347] dlm: closing connection to node 5
[119545.042891] dlm: dlm user daemon left 1 lockspaces

So I believe that PVE has a bug and a big problem, but I am not sure of
that. What I do know is that if the pve6 node for some reason turns off
abruptly, the pve5 node will lose quorum and its HA VM will also turn off,
and this behaviour will give me several problems, because I don't actually
know what I must do to start the VM on the node that is still alive.

So my questions are:
1) Why did the pve5 node lose quorum if I did not apply any change on this
node?
(this node always had the multicast snooping filter disabled)
2) Why does the VM that is running on the pve5 node, and is also configured
in HA, turn off abruptly?
3) If it is a bug, can someone apply a patch to the code?

Moreover, talking about the firewall enabled for the VMs:
I remember that about a month ago I tried to apply a restrictive firewall
rule blocking access from the IP address of the cluster communication to the
VMs, without success; i.e., with a default firewall policy of allow, each
time I enabled this single restrictive rule on the VM, the VM lost all
network communication. Maybe I am doing something wrong.

So I would like to ask you some things:

4) Can you do a test, and then tell me the results?
5) If the results are positive, can you tell me how to do it?
6) And if the results are negative, can you apply a patch to the code?

Moreover, the last question:
7) As each PVE node has its own firewall tab in the PVE GUI, I guess that
this option is for applying in/out firewall rules that affect only this
node, right? Or what does this option exist for?



- Original Message - 
From: Alexandre DERUMIER aderum...@odiso.com 
To: Cesar Peschiera br...@click.com.py 
Cc: 

Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2015-01-03 Thread Alexandre DERUMIER
A minute after applying these commands on only one node (pve6), I lost the
quorum on two nodes (pve5 and pve6).
The commands executed on only one node (pve6):
echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
echo 0 > /sys/class/net/vmbr0/bridge/multicast_querier

If you enable multicast snooping (on a Linux bridge or on a physical switch),
you need an IGMP querier (or more than one) on your network.

Personally, I really don't like using the querier on the Linux bridge,
so I enable it on my physical switches.

You can have multiple queriers, but only one is active at a time.
(There is a kind of election when a querier goes down.)

On a Linux bridge, disabling multicast_snooping also disables the multicast
querier by default.


1) Why did the pve5 node lose quorum if I did not apply any change on this
node?
(this node always had the multicast snooping filter disabled)

Is IGMP snooping enabled on your physical switch?
Maybe pve6 was the master IGMP querier.


2) Why does the VM that is running on the pve5 node, and is also configured
in HA, turn off abruptly?
3) If it is a bug, can someone apply a patch to the code?

I can't comment on this, as I don't use HA in production. Maybe it is because
the node loses quorum.
You really need stable multicast (really, really stable) to use HA.
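
One way to check that (only a sketch; it assumes the omping package is
installed on every node, and the hostnames are placeholders for your real
node names):

omping -c 600 -i 1 -q pve1 pve2 pve3

Run it on all nodes at the same time; sustained loss in the multicast
results usually means the cluster will not be stable.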



Moreover, talking about the firewall enabled for the VMs:
I remember that about a month ago I tried to apply a restrictive firewall
rule blocking access from the IP address of the cluster communication to the
VMs, without success; i.e., with a default firewall policy of allow, each
time I enabled this single restrictive rule on the VM, the VM lost all
network communication. Maybe I am doing something wrong.

So I would like to ask you some things:

4) Can you do a test, and then tell me the results?
5) If the results are positive, can you tell me how to do it?
6) And if the results are negative, can you apply a patch to the code?

I'll do a test, but I don't see why it would not work.
(I know there was a bug with Open vSwitch, but with a Linux bridge it should
work without any problem.)



7) As each PVE node has its own firewall tab in the PVE GUI, I guess that
this option is for applying in/out firewall rules that affect only this
node, right? Or what does this option exist for?

Yes, exactly: the firewall tab on a node holds the INPUT/OUTPUT rules
to/from that node.
At the datacenter level, it applies to all nodes, IN and OUT.
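
As an illustration only (written from memory, so please double-check the file
paths and rule syntax against the pve-firewall documentation), the GUI levels
roughly map to these configuration files:

# /etc/pve/firewall/cluster.fw  -- datacenter level, applies to all nodes
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT

# /etc/pve/nodes/<nodename>/host.fw  -- per-node level, rules for this node only
[RULES]
IN DROP -p igmp   # hypothetical example: drop incoming IGMP on this node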




Re: [pve-devel] Fwd: [Qemu-stable] [Qemu-devel] [PATCH v2 1/1] migration/block: fix pending() return value

2015-01-03 Thread Stefan Priebe

Am 03.01.2015 um 11:00 schrieb Dietmar Maurer:

On January 2, 2015 at 5:37 PM Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:


Isn't this something which was reported some weeks ago?


Sorry, but what do you refer to?


Sorry, I thought there was a report about block migration hanging.

Stefan



Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2015-01-03 Thread Michael Rasmussen
On Sat, 3 Jan 2015 21:32:54 -0300
Cesar Peschiera br...@click.com.py wrote:

 
 Now I have IGMP snooping disabled on the switch, but I want to avoid
 flooding the entire VLAN and the VMs.
 
The following rule on your PVE nodes should prevent IGMP packets from flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP
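
If the rule works for you, one way to make it persistent is a post-up line in
the vmbr0 stanza of /etc/network/interfaces (only a sketch; the address and
bridge_ports values are placeholders for whatever you already have there):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP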

PS. Your SPF for click.com.py is configured wrong:
Received-SPF: softfail (click.com.py ... _spf.copaco.com.py: Sender is
not authorized by default to use 'br...@click.com.py' in 'mfrom'
identity, however domain is not currently prepared for false failures
(mechanism '~all' matched)) receiver=mail1.copaco.com.py;
identity=mailfrom; envelope-from=br...@click.com.py; helo=gerencia;
client-ip=190.23.61.163 
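
For illustration only (the record below is hypothetical; only the domain owner
knows which hosts are really allowed to send), a corrected SPF record that
authorizes that client IP might look like:

click.com.py.  IN  TXT  "v=spf1 ip4:190.23.61.163 include:_spf.copaco.com.py -all"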
-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Why does a hearse horse snicker, hauling a lawyer away?
-- Carl Sandburg




Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and VMs turns off

2015-01-03 Thread Cesar Peschiera

Thanks Michael for your reply

And what about the firewall tab in the PVE GUI:
- For the Datacenter.
- For each PVE node.
- For the network device of the VM.

In general terms, I want to have all network traffic enabled (in/out) and
only cut the traffic that I want to cut, which in this case is IGMP for the
VMs. So I guess that I need to set up the PVE GUI like this:

- Firewall tag in Datacenter:
Enable Firewall: yes
Input policy: accept
Output policy: accept

- Firewall tag in PVE nodes:
Enable Firewall: yes

Or, regardless of how this is configured (both at datacenter and PVE node
level), will the rule you suggest work anyway?

And the rule that you suggest, where is the best place to put it?
1) In the rc.local file (I don't like putting it there)
2) In the PVE GUI (I believe that would be the best place), but I don't know
how to add it, and I guess that afterwards I will have to enable the firewall
on the network device of the VM (also in the PVE GUI).


- Original Message - 
From: Michael Rasmussen m...@datanom.net

To: pve-devel pve-devel@pve.proxmox.com
Sent: Saturday, January 03, 2015 11:34 PM
Subject: Re: [pve-devel] Quorum problems with NICs Intel of 10 Gb/s and
VMs turns off



Now I have IGMP snooping disabled on the switch, but I want to avoid
flooding the entire VLAN and the VMs.


The following rule on your PVE nodes should prevent IGMP packets from flooding
your bridge:
iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP

If something happens you can remove the rule this way:
iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP




