Re: [pve-devel] qemu : add support for vlan trunks on net interfaces

2016-08-15 Thread Andrew Thrift
Is there documentation on this feature ?
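
For context, on a vlan-aware bridge a trunk just means the VM's tap interface
is allowed to carry several tagged VLANs. A rough, hedged sketch of the
equivalent manual setup (interface name and VLAN ids are examples; the actual
qemu-server option syntax is described in the commits referenced below):

# allow VLANs 10-20 tagged on the VM's tap interface, plus VLAN 100 as untagged/pvid
bridge vlan add dev tap100i0 vid 10-20
bridge vlan add dev tap100i0 vid 100 pvid untagged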


On Fri, Jan 15, 2016 at 3:15 PM, Alexandre Derumier  wrote:
> This adds support for vlan trunks on qemu net interfaces.
>
> based on these patches:
> http://pve.proxmox.com/pipermail/pve-devel/2014-September/012730.html
>
> This works with OVS and linux vlan-aware bridge.
>
> details are in commits
>
> Regards,
>
> Alexandre
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] UUID for VM's and disks

2016-05-16 Thread Andrew Thrift
Hi Dietmar,

Using a pool per cluster means you may need to split/shrink your
placement group count to stay within Ceph best practices.  With a
fixed number of pools, you can calculate the correct number of PGs
for your estimated OSD count.
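
For reference, the usual rule of thumb from the Ceph documentation of that
era, as a sanity check (numbers are only an example):

# total PGs per pool ~= (OSDs * 100) / replica count, rounded up to a power of two
echo $(( (84 * 100) / 3 ))    # 2800 for 84 OSDs at size 3 -> round up to 4096
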
By using a UUID for the disks and having a shared ceph pool, we would
be able to move VM's between the PVE clusters to balance around the
Proxmox clustering limits without fear of overlapping disk ID's.


Having UUID's for VM's is not as important, but just ensures that 3rd
party applications are referencing a unique ID for a VM and removes
the potential for basic coding errors removing a VM from the wrong
cluster.



On Mon, May 16, 2016 at 5:08 PM, Dietmar Maurer  wrote:
>> The use of VMID's for disks is becoming an issue for us with Ceph.  We
>> cannot have multiple PVE clusters sharing the same pool on Ceph as
>> there will be duplicate VMID's across the clusters that will result in
>> a clash of RBD names on Ceph.  I am sure this would be an issue with
>> iSCSI+ZFS as well.
>
> The suggestion is to use different pools.
>
>
>> Using UUID's for VM's would have benefits when using an external
>> application to manage multiple PVE clusters.
>
> Why? What application do you talk about exactly?
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] UUID for VM's and disks

2016-05-15 Thread Andrew Thrift
Hi,

Has the Proxmox VE team thought about using UUID's for VM's as well as
for Disks ?

The use of VMID's for disks is becoming an issue for us with Ceph.  We
cannot have multiple PVE clusters sharing the same pool on Ceph as
there will be duplicate VMID's across the clusters that will result in
a clash of RBD names on Ceph.  I am sure this would be an issue with
iSCSI+ZFS as well.
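
A concrete illustration of the clash (pool and image names are hypothetical):

# Cluster A creates a disk for its VM 100 in the shared pool
rbd -p shared create vm-100-disk-1 --size 32768
# Cluster B later tries to create a disk for *its* VM 100 in the same pool
rbd -p shared create vm-100-disk-1 --size 16384
# the second create fails because the RBD image name already exists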

Using UUID's for VM's would have benefits when using an external
application to manage multiple PVE clusters.

VMID's could be retained for ease of management when using the local
PVE web management, but via the API and internally UUID's could be
used.


Regards,



Andrew
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux bridge

2015-08-05 Thread Andrew Thrift
Thanks Alexandre.

I will build something for testing this Friday.
On 5/08/2015 5:24 pm, Alexandre DERUMIER aderum...@odiso.com wrote:

 BTW, can you test this method too ?


 eth0.10 --- vmbrcustomer --(vlanX)-- tapX
 
 
 
 auto vmbrcustomer1
 iface vmbrcustomer1 inet manual
 bridge_vlan_aware yes
 bridge_ports eth0.10
 bridge_stp off
 bridge_fd 0
 pre-up ip link add link eth0 eth0.10 type vlan proto 802.1ad id 10


 ?

 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 14:02:46
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Another way,

 but I'm not sure it's working, is to tag 802.1ad on the physical interface




 eth0.10 --- vmbrcustomer --(vlanX)-- tapX



 auto vmbrcustomer1
 iface vmbrcustomer1 inet manual
 bridge_vlan_aware yes
 bridge_ports eth0.10
 bridge_stp off
 bridge_fd 0
 pre-up ip link add link eth0 eth0.10 type vlan proto 802.1ad id 10




 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 12:22:47
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Hi Alexandre,
 Hi,

 We also use QinQ and have submitted patches for the previous network
 implementation that made use of a bridge in bridge design to achieve the
 QinQ functionality.

 There is also a new way to implement q-in-q with a vlan-aware bridge:

 http://www.spinics.net/lists/linux-ethernet-bridging/msg05514.html

 +------+   +---------+ p/u  +--------+   +------+   +----+
 | eth0 |---| 802.1ad |=veth=| 802.1Q |---| vnet |---| VM |
 +------+   | bridge  |      | bridge |   +------+   +----+
            +---------+      +--------+

 p/u: pvid/untagged



 Currently we have implemented the 802.1Q bridge.

 For qinq, we need to create a root bridge with 802.1ad enabled, linked
 through a veth pair to the 802.1Q bridge.


 The qinq bridge is managed in exactly the same way as the 802.1Q bridge,
 but it's enabled with
 echo 0x88a8 > /sys/class/net/XXX/bridge/vlan_protocol

 for example
 

 eth0vmbr0--(vlan10)---brigelink---bridgelinkpeervmbrcustomer--(vlanX)--tapX


 brctl addbr vmbr0
 echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol
 ip link add dev bridgelink type veth peer name bridgelinkpeer
 brctl addif vmbr0 bridgelink
 brctl addif vmbrcustomer1 bridgelinkpeer
 bridge vlan add dev bridgelink vid 10 pvid untagged


 something like that


 I can try to make a patch, but I don't have hardware which supports q-in-q
 for testing.




 - Mail original -
 De: Andrew Thrift and...@networklabs.co.nz
 À: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 10:49:26
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Hi Alexandre,
 We also use QinQ and have submitted patches for the previous network
 implementation that made use of a bridge in bridge design to achieve the
 QinQ functionality.

 The new vlan aware bridge implementation will be a lot cleaner.

 When your patches are ready we will test them and provide feedback.


 Thanks,



 Andrew

 On Tue, Jul 28, 2015 at 2:09 AM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 Has somebody tested my vlan bridge patches? (Note that they need
 iproute2 from Debian sid, for vlan ranges.)

 It's working really fine here; I'm looking to add a patch for Q-in-Q
 bridge too. (I think Stefan Priebe uses them.)





 - Mail original -
 De: aderumier  aderum...@odiso.com 
 À: Wolfgang Bumiller  w.bumil...@proxmox.com 
 Cc: pve-devel  pve-devel@pve.proxmox.com 
 Envoyé: Vendredi 24 Juillet 2015 18:49:18
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Why is `bridge_add_interface` now restricted to the firewall-else
 branch?

 I manage it like openvswitch,

 vlan tagging is always done on the main bridge, not firewall bridge.


  + if ($firewall) {
  +     &$create_firewall_bridge_linux($iface, $bridge, $tag);

 create_firewall_bridge_linux($iface, $bridge, $tag)
 now has:

 - &$bridge_add_interface($bridge, $vethfwpeer);
 + &$bridge_add_interface($bridge, $vethfwpeer, $tag); # tag on the main bridge
 - return $fwbr;
 + &$bridge_add_interface($fwbr, $iface); # add vm tap interface on fwbridge without vlan tag









 - Mail original -
 De: Wolfgang Bumiller  w.bumil...@proxmox.com 
 À: aderumier  aderum...@odiso.com 
 Cc: pve-devel  pve-devel@pve.proxmox.com 
 Envoyé: Vendredi 24 Juillet 2015 15:20:06
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 On Fri, Jul 24, 2015 at 01:52:59PM +0200, Alexandre Derumier wrote:
  - $newbridge = &$create_firewall_bridge_linux($iface, $newbridge) if $firewall;
  + if (!$vlan_aware) {
  +     my $newbridge = activate_bridge_vlan($bridge, $tag);
  +     copy_bridge_config($bridge, $newbridge

Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux bridge

2015-08-04 Thread Andrew Thrift
Hi Alexandre,

We also use QinQ and have submitted patches for the previous network
implementation that made use of a bridge in bridge design to achieve the
QinQ functionality.

The new vlan aware bridge implementation will be a lot cleaner.

When your patches are ready we will test them and provide feedback.


Thanks,



Andrew

On Tue, Jul 28, 2015 at 2:09 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Has somebody tested my vlan bridge patches? (Note that they need
 iproute2 from Debian sid, for vlan ranges.)

 It's working really fine here; I'm looking to add a patch for Q-in-Q
 bridge too. (I think Stefan Priebe uses them.)





 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Wolfgang Bumiller w.bumil...@proxmox.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Vendredi 24 Juillet 2015 18:49:18
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Why is `bridge_add_interface` now restricted to the firewall-else
 branch?

 I manage it like openvswitch,

 vlan tagging is always done on the main bridge, not firewall bridge.


  + if ($firewall) {
  +     &$create_firewall_bridge_linux($iface, $bridge, $tag);

 create_firewall_bridge_linux($iface, $bridge, $tag)
 now has:

 - &$bridge_add_interface($bridge, $vethfwpeer);
 + &$bridge_add_interface($bridge, $vethfwpeer, $tag); # tag on the main bridge
 - return $fwbr;
 + &$bridge_add_interface($fwbr, $iface); # add vm tap interface on fwbridge without vlan tag









 - Mail original -
 De: Wolfgang Bumiller w.bumil...@proxmox.com
 À: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Vendredi 24 Juillet 2015 15:20:06
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 On Fri, Jul 24, 2015 at 01:52:59PM +0200, Alexandre Derumier wrote:
  - $newbridge = &$create_firewall_bridge_linux($iface, $newbridge) if $firewall;
  + if (!$vlan_aware) {
  +     my $newbridge = activate_bridge_vlan($bridge, $tag);
  +     copy_bridge_config($bridge, $newbridge) if $bridge ne $newbridge;
  +     $tag = undef;
  + }
  +
  + if ($firewall) {
  +     &$create_firewall_bridge_linux($iface, $bridge, $tag);
  + } else {
  +     &$bridge_add_interface($bridge, $iface, $tag);
  + }
 
  - &$bridge_add_interface($newbridge, $iface);


 Why is `bridge_add_interface` now restricted to the firewall-else
 branch?
 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux bridge

2015-08-04 Thread Andrew Thrift
Alexandre,

Am I right in thinking that for 4 bridges, each with a different SVID
(101,102,103,104), the config would look like this?

auto vmbrcustomer1
iface vmbrcustomer1 inet manual
bridge_vlan_aware yes
bridge_ports customer1lp
bridge_stp off
bridge_fd 0
pre-up ip link add dev customer1l type veth peer name customer1lp
post-up ip link set customer1l up

auto vmbrcustomer2
iface vmbrcustomer2 inet manual
bridge_vlan_aware yes
bridge_ports customer2lp
bridge_stp off
bridge_fd 0
pre-up ip link add dev customer2l type veth peer name customer2lp
post-up ip link set customer2l up

auto vmbrcustomer3
iface vmbrcustomer3 inet manual
bridge_vlan_aware yes
bridge_ports customer3lp
bridge_stp off
bridge_fd 0
pre-up ip link add dev customer3l type veth peer name customer3lp
post-up ip link set customer3l up

auto vmbrcustomer4
iface vmbrcustomer4 inet manual
bridge_vlan_aware yes
bridge_ports customer4lp
bridge_stp off
bridge_fd 0
pre-up ip link add dev customer4l type veth peer name customer4lp
post-up ip link set customer4l up

auto vmbr0
iface vmbr0 inet manual
bridge_vlan_aware yes
bridge_ports eth0 customer1l customer2l customer3l customer4l
bridge_stp off
bridge_fd 0
post-up echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol
post-up bridge vlan add dev customer1l vid 101 pvid untagged
post-up bridge vlan add dev customer2l vid 102 pvid untagged
post-up bridge vlan add dev customer3l vid 103 pvid untagged
post-up bridge vlan add dev customer4l vid 104 pvid untagged

On Wed, Aug 5, 2015 at 11:54 AM, Andrew Thrift and...@networklabs.co.nz
wrote:

 Hi Alexandre,

 This looks like it should work.

 Something to be aware of, QinQ does not always have an outer tag with
 ethertype 0x88a8, it can also have a tag of 0x8100 or 0x9100 depending on
 the implementation.

 For example:

 0x88a8--0x8100   Outer-tag (SVID) of 0x88a8, Inner-tag (CVID) of 0x8100
 0x8100--0x8100   Outer-tag (SVID) of 0x8100, Inner-tag (CVID) of 0x8100
 0x9100--0x8100   Outer-tag (SVID) of 0x9100, Inner-tag (CVID) of 0x8100

 We typically use stacked tags of 0x8100 so both the outer and inner tag
 are 0x8100, this seems to be the most compatible method and is supported by
 all vendors that I am aware of.




 On Wed, Aug 5, 2015 at 12:25 AM, Alexandre DERUMIER aderum...@odiso.com
 wrote:

 This seems to work.
 (I'm not sure about the tcpdump result when vlans are stacked.)


 auto vmbrcustomer1
 iface vmbrcustomer1 inet manual
 bridge_vlan_aware yes
 bridge_ports customer1lp
 bridge_stp off
 bridge_fd 0
 pre-up ip link add dev customer1l type veth peer name customer1lp
 post-up ip link set customer1l up


 auto vmbr0
 iface vmbr0 inet manual
 bridge_vlan_aware yes
 bridge_ports eth0 customer1l
 bridge_stp off
 bridge_fd 0
 post-up echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol
 post-up bridge vlan add dev customer1l vid 10 pvid untagged


 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 14:02:46
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware
 linux bridge

 Another way,

 but I'm not sure it's working, is to tag 802.1ad on the physical interface




 eth0.10 --- vmbrcustomer --(vlanX)-- tapX



 auto vmbrcustomer1
 iface vmbrcustomer1 inet manual
 bridge_vlan_aware yes
 bridge_ports eth0.10
 bridge_stp off
 bridge_fd 0
 pre-up ip link add link eth0 eth0.10 type vlan proto 802.1ad id 10




 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 12:22:47
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware
 linux bridge

 Hi Alexandre,
 Hi,

 We also use QinQ and have submitted patches for the previous network
 implementation that made use of a bridge in bridge design to achieve the
 QinQ functionality.

 There is also a new way to implement q-in-q with a vlan-aware bridge:

 http://www.spinics.net/lists/linux-ethernet-bridging/msg05514.html

 +------+   +---------+ p/u  +--------+   +------+   +----+
 | eth0 |---| 802.1ad |=veth=| 802.1Q |---| vnet |---| VM |
 +------+   | bridge  |      | bridge |   +------+   +----+
            +---------+      +--------+

 p/u: pvid/untagged



 Currently we have implemented the 802.1Q bridge.

 For qinq, we need to create a root bridge with 802.1ad enabled, linked
 through a veth pair to the 802.1Q bridge.


 The qinq bridge is managed in exactly the same way as the 802.1Q bridge,
 but it's enabled with
 echo 0x88a8 > /sys/class/net/XXX/bridge/vlan_protocol

 for example:


 eth0 --- vmbr0 --(vlan10)-- bridgelink --- bridgelinkpeer

Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux bridge

2015-08-04 Thread Andrew Thrift
Hi Alexandre,

This looks like it should work.

Something to be aware of, QinQ does not always have an outer tag with
ethertype 0x88a8, it can also have a tag of 0x8100 or 0x9100 depending on
the implementation.

For example:

0x88a8--0x8100   Outer-tag (SVID) of 0x88a8, Inner-tag (CVID) of 0x8100
0x8100--0x8100   Outer-tag (SVID) of 0x8100, Inner-tag (CVID) of 0x8100
0x9100--0x8100   Outer-tag (SVID) of 0x9100, Inner-tag (CVID) of 0x8100

We typically use stacked tags of 0x8100 so both the outer and inner tag
are 0x8100, this seems to be the most compatible method and is supported by
all vendors that I am aware of.
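
If the outer tag has to be plain 0x8100 rather than 0x88a8, the same
vlan-aware bridge setup should work by simply not switching the protocol, or
by setting it back explicitly (a hedged example, using the same sysfs path as
above; the kernel bridge only accepts 0x8100 and 0x88a8 here):

echo 0x8100 > /sys/class/net/vmbr0/bridge/vlan_protocol   # stacked 0x8100 outer tag
echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol   # 802.1ad outer tag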




On Wed, Aug 5, 2015 at 12:25 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 This seems to work.
 (I'm not sure about the tcpdump result when vlans are stacked.)


 auto vmbrcustomer1
 iface vmbrcustomer1 inet manual
 bridge_vlan_aware yes
 bridge_ports customer1lp
 bridge_stp off
 bridge_fd 0
 pre-up ip link add dev customer1l type veth peer name customer1lp
 post-up ip link set customer1l up


 auto vmbr0
 iface vmbr0 inet manual
 bridge_vlan_aware yes
 bridge_ports eth0 customer1l
 bridge_stp off
 bridge_fd 0
 post-up echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol
 post-up bridge vlan add dev customer1l vid 10 pvid untagged


 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 14:02:46
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Another way,

 but I'm not sure it's working, is to tag 802.1ad on the physical interface




 eth0.10 --- vmbrcustomer --(vlanX)-- tapX



 auto vmbrcustomer1
 iface vmbrcustomer1 inet manual
 bridge_vlan_aware yes
 bridge_ports eth0.10
 bridge_stp off
 bridge_fd 0
 pre-up ip link add link eth0 eth0.10 type vlan proto 802.1ad id 10




 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 12:22:47
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Hi Alexandre,
 Hi,

 We also use QinQ and have submitted patches for the previous network
 implementation that made use of a bridge in bridge design to achieve the
 QinQ functionality.

 There is also a new way to implement q-in-q with a vlan-aware bridge:

 http://www.spinics.net/lists/linux-ethernet-bridging/msg05514.html

 +------+   +---------+ p/u  +--------+   +------+   +----+
 | eth0 |---| 802.1ad |=veth=| 802.1Q |---| vnet |---| VM |
 +------+   | bridge  |      | bridge |   +------+   +----+
            +---------+      +--------+

 p/u: pvid/untagged



 Currently we have implemented the 802.1Q bridge.

 For qinq, we need to create a root bridge with 802.1ad enabled, linked
 through a veth pair to the 802.1Q bridge.


 The qinq bridge is managed in exactly the same way as the 802.1Q bridge,
 but it's enabled with
 echo 0x88a8 > /sys/class/net/XXX/bridge/vlan_protocol

 for example:


 eth0 --- vmbr0 --(vlan10)-- bridgelink --- bridgelinkpeer --- vmbrcustomer --(vlanX)-- tapX


 brctl addbr vmbr0
 echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol
 ip link add dev bridgelink type veth peer name bridgelinkpeer
 brctl addif vmbr0 bridgelink
 brctl addif vmbrcustomer1 bridgelinkpeer
 bridge vlan add dev bridgelink vid 10 pvid untagged


 something like that


 I can try to make a patch, but I don't have hardware which supports q-in-q
 for testing.




 - Mail original -
 De: Andrew Thrift and...@networklabs.co.nz
 À: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mardi 4 Août 2015 10:49:26
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Hi Alexandre,
 We also use QinQ and have submitted patches for the previous network
 implementation that made use of a bridge in bridge design to achieve the
 QinQ functionality.

 The new vlan aware bridge implementation will be a lot cleaner.

 When your patches are ready we will test them and provide feedback.


 Thanks,



 Andrew

 On Tue, Jul 28, 2015 at 2:09 AM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 Has somebody tested my vlan bridge patches? (Note that they need
 iproute2 from Debian sid, for vlan ranges.)

 It's working really fine here; I'm looking to add a patch for Q-in-Q
 bridge too. (I think Stefan Priebe uses them.)





 - Mail original -
 De: aderumier  aderum...@odiso.com 
 À: Wolfgang Bumiller  w.bumil...@proxmox.com 
 Cc: pve-devel  pve-devel@pve.proxmox.com 
 Envoyé: Vendredi 24 Juillet 2015 18:49:18
 Objet: Re: [pve-devel] [PATCH] tap_plug : add support for vlan aware linux
 bridge

 Why is `bridge_add_interface` now restricted to the firewall-else
 branch?

 I manage it like openvswitch,

 vlan tagging is always done on the main bridge, not firewall bridge.


  + if ($firewall

[pve-devel] PVE-Enterprise ceph version

2015-07-14 Thread Andrew Thrift
Hi Guys,

There are a number of important fixes in the Ceph Firefly 0.80.9 release,
in particular fixes for qemu+librbd performance.

Is there any reason the PVE-Enterprise repository is still on 0.80.8  ?

Can this be updated ?


Thanks,
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server: drive-mirror improvements

2015-05-11 Thread Andrew Thrift
Thanks Alexandre,

This sounds like it could fix the corruption issues we have been
experiencing when doing drive-mirror's on RBD.


On Tue, May 12, 2015 at 1:55 AM, Alexandre Derumier aderum...@odiso.com
wrote:

 this is a candidate for proxmox 3.4 fixes & proxmox 4.0


 The first patch uses the new ready flag introduced in qemu 2.2 to finish
 block migration.
 It's the right way to know if we can complete the job (same flag as the
 qmp ready event).
 (I have also removed the vm pause when the target is too slow; it doesn't
 work fine in my tests,
 and we can't rely on offset==len to know if the transfer finished
 correctly.)



 The second patch increases the drive-mirror qmp timeout a lot, because the
 job scans the whole
 source image before returning.
 This can take very long with a big image on nfs, for example.
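
 A hedged way to look at that flag by hand over the VM's QMP socket (socket
 path and VM id follow the usual PVE layout; field availability depends on
 the QEMU version):

 echo '{ "execute": "qmp_capabilities" }
 { "execute": "query-block-jobs" }' | socat - UNIX-CONNECT:/var/run/qemu-server/100.qmp
 # a mirror job that can be completed safely reports "ready": true in its entry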


 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] add cloudinit support to proxmox ?

2015-05-05 Thread Andrew Thrift
I think it is a brilliant idea.

On Wed, May 6, 2015 at 5:24 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Hi,

 I wonder if we could add some kind of cloudinit support to proxmox.

 It's already implemented in openstack, ovirt, rhev:

 http://www.ovirt.org/Features/Cloud-Init_Integration


 and supported in almost all recent guest distros (debian, centos, fedora,
 suse, and windows!)


 It could be useful to set up the hostname, timezone and ip address for the
 client. (We need to add some fields in vmid.conf and generate a virtual iso
 or floppy for the first boot of the vm.)
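
 A hedged sketch of the "virtual iso" idea using cloud-init's NoCloud
 datasource (file contents and paths are only examples; the volume id
 "cidata" is what cloud-init looks for):

 printf 'instance-id: vm-100\nlocal-hostname: testvm\n' > meta-data
 printf '#cloud-config\ntimezone: Europe/Paris\n' > user-data
 genisoimage -output vm-100-cloudinit.iso -volid cidata -joliet -rock user-data meta-data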


 What do you think about it ?

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Current status of 3.10 kernels

2015-05-04 Thread Andrew Thrift
We have been running 3.10 in production for over 2 years now.   No real
problems.


On Mon, May 4, 2015 at 10:33 PM, Martin Waschbüsch serv...@waschbuesch.it
wrote:

 Hi there,

 I have experienced some VM crashes when trying to run 64bit FreeBSD guests.
 Apparently, if I read the posts on the forum right, the 3.10 kernels do
 not have this problem.

 At any rate: is there a list of things known to work / not work with the
 latest 3.10 kernel? E.g. ceph, etc...
 Any caveats when upgrading a cluster, e.g. a certain order in which things
 should happen?
 Is it possible to upgrade one node and test things out without leaving it
 detached from the cluster?

 Thanks a lot for any and all input!

 Martin

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : balloning fix (finally !)

2015-04-07 Thread Andrew Thrift
Did this patch make its way in to the pve-enterprise repository ?


On Mon, Mar 9, 2015 at 8:22 PM, Alexandre Derumier aderum...@odiso.com
wrote:

 The problem seems to come from the polling interval, which is not set up
 automatically
 when the -machine option exists.

 We already set it up manually, but not when we are doing a migration.

 This patch always sets up the polling interval if the proxmox balloon option
 != 0.

 It's working fine with the current proxmox pve-qemu-kvm, so no need to
 remove the balloon stats patch.
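
 A hedged illustration of what "setting up the polling interval" amounts to,
 done by hand over QMP (device name balloon0 and socket path are the usual
 PVE conventions; the 2-second value is an example):

 echo '{ "execute": "qmp_capabilities" }
 { "execute": "qom-set", "arguments": { "path": "/machine/peripheral/balloon0",
   "property": "guest-stats-polling-interval", "value": 2 } }' \
   | socat - UNIX-CONNECT:/var/run/qemu-server/100.qmp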


 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-03-13 Thread Andrew Thrift
So with the ebtables patch, bridge filter rules can be configured via the
PVE API and WebUI ?



On Fri, Mar 13, 2015 at 9:10 PM, Dietmar Maurer diet...@proxmox.com wrote:

  So it should be safe to use the vlan_filtering code?
 
  Maybe we can wait for the rhel 7.1 kernel? I think it should be pretty
 stable,
  but it needs to be tested.

 Yes. I think that kernel should be available soon?

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] balloon bug in qemu 2.1 ?

2015-03-06 Thread Andrew Thrift
Hi Alexandre,

Yes, the affected VMs seem to have -machine type=pc-i440fx-2.1 in their KVM
parameters.

I found a bunch that have machine: pc-i440fx-1.4 in the config, and KVM
parameters, and they output the full balloon stats e.g.:

# info balloon
balloon: actual=4096 max_mem=4096 total_mem=4095 free_mem=2221
mem_swapped_in=0 mem_swapped_out=0 major_page_faults=0
minor_page_faults=43 last_update=1425629742




On Fri, Mar 6, 2015 at 4:17 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 OK, I spoke too fast.
 It's not related to this patch.


 on current pve-qemu-kvm 2.1.

 balloon is working fine
 # info balloon
 balloon: actual=1024 max_mem=1024 total_mem=1002 free_mem=941
 mem_swapped_in=0 mem_swapped_out=0 major_page_faults=120
 minor_page_faults=215272 last_update=1425568324


 But if the vm (qemu 2.1) is started with
  -machine type=pc-i440fx-2.1
 or
  -machine type=pc-i440fx-2.0   (this is the case when you do a live
 migration)

 It's not working

 # info balloon
 balloon: actual=1024 max_mem=1024



  @Andrew, have the vms where you see the info balloon bug been
  migrated from an old proxmox (without stop/start)?
  Can you check over ssh if the kvm process has -machine type in the command
  line?
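
  One quick way to check (the pid-file path is the usual PVE location; VM id
  100 is an example):

  tr '\0' '\n' < /proc/$(cat /var/run/qemu-server/100.pid)/cmdline | grep -A1 '^-machine$'
  # prints the -machine argument (e.g. type=pc-i440fx-2.1) if one was passed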


 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Jeudi 5 Mars 2015 16:05:03
 Objet: Re: [pve-devel] balloon bug in qemu 2.1 ?

 I need to do more tests,

 but it seems that this commit (applied on qemu 2.2 but not on qemu 2.1)

 http://git.qemu.org/?p=qemu.git;a=commit;h=22644cd2c60151a964d9505f4c5f7baf845f20d8

 fixes the problem with qemu 2.1.
 (I have tested with the patch and balloon works fine; I need to test without
 the patch to compare.)





 - Mail original -
 De: aderumier aderum...@odiso.com
 À: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Jeudi 5 Mars 2015 15:41:51
 Objet: Re: [pve-devel] balloon bug in qemu 2.1 ?

 in proxmox virtio-balloon-fix-query.patch,

 we have

 hw/virtio/virtio-balloon.c

 +
 + if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) &&
 +       dev->stats_last_update)) {
 +     return;
 + }
 +
 + info->last_update = dev->stats_last_update;
 + info->has_last_update = true;
 +
 + info->mem_swapped_in = dev->stats[VIRTIO_BALLOON_S_SWAP_IN];
 + info->has_mem_swapped_in = info->mem_swapped_in >= 0 ? true : false;
 +
 + info->mem_swapped_out = dev->stats[VIRTIO_BALLOON_S_SWAP_OUT];
 + info->has_mem_swapped_out = info->mem_swapped_out >= 0 ? true : false;
 +
 + info->major_page_faults = dev->stats[VIRTIO_BALLOON_S_MAJFLT];
 + info->has_major_page_faults = info->major_page_faults >= 0 ? true : false;
 +
 + info->minor_page_faults = dev->stats[VIRTIO_BALLOON_S_MINFLT];
 + info->has_minor_page_faults = info->minor_page_faults >= 0 ? true : false;
 +


 so, that means that in qemu 2.1

 + if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) &&
 +       dev->stats_last_update)) {
 +     return;
 + }

 one of these 3 functions is not working


 - Mail original -
 De: Andrew Thrift and...@networklabs.co.nz
 À: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Jeudi 5 Mars 2015 15:17:58
 Objet: Re: [pve-devel] balloon bug in qemu 2.1 ?

 Hi Alexandre,
 This may be the cause of the crashes we have been experiencing. We
 reported them here:


 http://forum.proxmox.com/threads/21276-Kernel-Oops-Panic-on-3-10-5-and-3-10-7-Kernels

 These only started happening since we moved to qemu-2.1.x and we get the
 same output:

 # info balloon
 balloon: actual=16384 max_mem=16384

 and have noticed VM's with only 1-2GB usage in the guest reporting almost
 the entire amount of ram used to the host, even though we have the latest
 balloon driver loaded and the blnsvr.exe service running.



 On Thu, Mar 5, 2015 at 11:33 PM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 Hi,

 I have see a bug report here:

 http://forum.proxmox.com/threads/2-RAM-Problem-since-Upgrade-to-3-4?p=108367posted=1#post108367

 about balloon.


 on my qemu 2.2

 #info balloon
 balloon: actual=1024 max_mem=2048 total_mem=985 free_mem=895
 mem_swapped_in=0 mem_swapped_out=0 major_page_faults=301
 minor_page_faults=61411 last_update=1425550707


 same vm with qemu 2.2 + -machine type=pc-i440fx-2.1
 #info balloon
 balloon: actual=1024 max_mem=2048



 (Don't have true qemu 2.1 for test currently)


 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

Re: [pve-devel] balloon bug in qemu 2.1 ?

2015-03-05 Thread Andrew Thrift
Hi Alexandre,

This may be the cause of the crashes we have been experiencing. We
reported them here:

http://forum.proxmox.com/threads/21276-Kernel-Oops-Panic-on-3-10-5-and-3-10-7-Kernels

These only started happening since we moved to qemu-2.1.x and we get the
same output:

# info balloon
balloon: actual=16384 max_mem=16384


and have noticed VM's with only 1-2GB usage in the guest reporting
almost the entire amount of ram used to the host, even though we have
the latest balloon driver loaded and the blnsvr.exe service running.




On Thu, Mar 5, 2015 at 11:33 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Hi,

 I have see a bug report here:

 http://forum.proxmox.com/threads/2-RAM-Problem-since-Upgrade-to-3-4?p=108367posted=1#post108367

 about balloon.


 on my qemu 2.2

 #info balloon
 balloon: actual=1024 max_mem=2048 total_mem=985 free_mem=895
 mem_swapped_in=0 mem_swapped_out=0 major_page_faults=301
 minor_page_faults=61411 last_update=1425550707


 same vm with qemu 2.2 + -machine type=pc-i440fx-2.1
 #info balloon
 balloon: actual=1024 max_mem=2048



 (Don't have true qemu 2.1 for test currently)


 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] support QinQ / vlan stacking

2015-02-23 Thread Andrew Thrift
We have found you need to nest the bridges to get QinQ to work in all
scenarios.
E.g. the above patch will work for MOST scenarios, but if you attach a
vlan-aware VM to the parent vmbr0 bridge, traffic to the VM's will stop, or
they will not be able to see the tagged frames.

The patch we use only has one other minor change:

-activate_bridge_vlan_slave($bridgevlan, $iface, $tag);
+activate_bridge_vlan_slave($bridgevlan, $bridge, $tag);


On Sat, Feb 14, 2015 at 9:41 PM, Stefan Priebe s.pri...@profihost.ag
wrote:


 Signed-off-by: Stefan Priebe s.pri...@profihost.ag
 ---
  data/PVE/Network.pm |2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
 index 00639f6..97f4033 100644
 --- a/data/PVE/Network.pm
 +++ b/data/PVE/Network.pm
 @@ -323,7 +323,7 @@ sub activate_bridge_vlan {

  my @ifaces = ();
  my $dir = "/sys/class/net/$bridge/brif";
 -PVE::Tools::dir_glob_foreach($dir, '((eth|bond)\d+)', sub {
 +PVE::Tools::dir_glob_foreach($dir, '((eth|bond)\d+(\.\d+)?)', sub {
  push @ifaces, $_[0];
  });

 --
 1.7.10.4

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Optimizing CPU flags

2015-02-17 Thread Andrew Thrift
I agree.

This would be of a big benefit to us, we quite often have to fix up VM's
that users have created with sub-optimal disk and network configurations.



On Wed, Feb 18, 2015 at 1:31 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 A fast workaround/hack would be to add such setting to datacenter.cfg

 I think also it could be great to be able to define default create values
 in datacenter.cfg

 (disk (virtio,scsi,...),nic (virtio,e1000,...), scsi controller
 (lsi,virtio-scsi), ...)


 - Mail original -
 De: dietmar diet...@proxmox.com
 À: Stefan Priebe s.pri...@profihost.ag, pve-devel 
 pve-devel@pve.proxmox.com
 Envoyé: Mercredi 11 Février 2015 19:42:11
 Objet: Re: [pve-devel] Optimizing CPU flags

   Sure but you cannot select a cpu type as a global default in a cluster.
   So you have to remember that one each time.
 
  The suggestion was to implement that:
  https://git.proxmox.com/?p=qemu-defaults.git;a=summary

 A fast workaround/hack would be to add such setting to datacenter.cfg

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] bridge on top of bridge?

2015-02-15 Thread Andrew Thrift
Hi Stefan,

We use bonded 10gigabit, our config is below.

Note eth0 and eth1 are not used.

We put customer VM's inside the relevant vmbr and tag them with a vlan.
So say they are on vmbr0 and we set the vlan in PVE to 55 they would have
an outer tag of 101 and an inner tag of 55.

This works very nicely.



auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual
#post-up for i in rx tx sg tso gso gro; do ethtool -K $IFACE $i off; done

auto eth3
iface eth3 inet manual
#post-up for i in rx tx sg tso gso gro; do ethtool -K $IFACE $i off; done

auto bond0
iface bond0 inet manual
# bonding configuration
bond_mode 802.3ad
bond_miimon 100
bond-lacp-rate 1
bond-xmit-hash-policy layer3+4
bond_slaves eth2 eth3
post-up ifconfig $IFACE mtu 9000

#Cust01 vlan
auto bond0.101
iface bond0.101 inet manual

#Cust02 vlan
auto bond0.102
iface bond0.102 inet manual

#Cust03 vlan
auto bond0.103
iface bond0.103 inet manual

#Cust04 vlan
auto bond0.104
iface bond0.104 inet manual

#MgmtServers vlan
auto bond0.200
iface bond0.200 inet manual

#Bridge configuration
auto mgmtbr0
iface mgmtbr0 inet static
address 10.99.99.104
netmask 255.255.255.0
gateway 10.99.99.254
post-up ifconfig $IFACE mtu 1500
bridge_ports bond0.200
bridge_stp off
bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0.101
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
bridge_ports bond0.102
bridge_stp off
bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
bridge_ports bond0.103
bridge_stp off
bridge_fd 0

auto vmbr3
iface vmbr3 inet manual
bridge_ports bond0.104
bridge_stp off
bridge_fd 0
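
A hedged way to confirm the stacked tags on the wire, using the interface and
vlan ids from the config above (the second "vlan" keyword in the filter moves
the match offset past the outer tag):

tcpdump -e -nn -i bond0 vlan 101 and vlan 55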

On Sat, Feb 14, 2015 at 9:11 PM, Stefan Priebe s.pri...@profihost.ag
wrote:

 Hi,

 how does your interfaces file looks like?

 Thanks!

 Stefan

 Am 11.02.2015 um 22:28 schrieb Andrew Thrift:

 Hi Stefan,

 I posted a patch to do this a while back:

 http://pve.proxmox.com/pipermail/pve-devel/2013-April/006995.html

 It will need a few changes to work on pve-test but we have been running
 it in production for a few years now.




 On Thu, Feb 12, 2015 at 8:44 AM, Stefan Priebe s.pri...@profihost.ag
 mailto:s.pri...@profihost.ag wrote:


 Am 11.02.2015 um 20:25 schrieb Alexandre DERUMIER:

 does anybody know a way to have two bridges on top of
 one bond?


  AFAIK, it's not possible to put a single interface on 2 bridges.

 It's possible with tagged interfaces.


 Yes but the problem is i just would like to have a PVID - so no
 untagged traffic on that bridge but there may be other tap devices
 with a vid.

  But I think that mixing 1 interface -> 1 bridge
  and 1 interface.tag -> another bridge

  is not working
  (maybe a linux bug).

 That may work but won't help me as the another bridge may contain
 multiple other tags.

  I think it works with openvswitch.
  (I think it also works with an ovs on top of another ovs with a
  tagged port)


  That sounds great. I've never touched openvswitch. Is there a howto for
  proxmox? What's the advantage over normal linux bridges?

 Greets,
 Stefan



 - Mail original -
  De: Stefan Priebe s.pri...@profihost.ag
  À: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Mercredi 11 Février 2015 13:41:58
 Objet: [pve-devel] bridge on top of bridge?

 Hi,

 does anybody know a way to have two bridges on top of one bond?
 I woulod
 like to use the PVID feature for another bridge than the default
 one
 (private LAN without the need to enable tagging on the tap
 device).

 I already tried to add the 2nd bridge on top of the bond or to
 put the
 2nd bridge into the first bridge. Both does not work.

 Greets,
 Stefan
  _
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

  _
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] iothreads : create 1 iothread by virtio device

2015-01-29 Thread Andrew Thrift
Hi Alexandre, List

Thanks for your very informative email.

Our hosts are fairly powerful (E5-2697v2), as are our ceph hosts.

We will run some more benchmarks shortly using KRBD + iothreads and report
back.
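
For reference, a typical in-guest benchmark of this sort (parameters are
illustrative only, not the exact runs we will report):

fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --group_reporting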


Regards,



Andrew

On Thu, Jan 29, 2015 at 9:20 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Would be interesting to additionally benchmark with sheepdog, so that we
 can
 compare ;-)

 Why not, I'll try to see if I can do it.
 Also compare with a local raw file on ssd.



 - Mail original -
 De: dietmar diet...@proxmox.com
 À: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Jeudi 29 Janvier 2015 08:54:43
 Objet: Re: [pve-devel] [PATCH] iothreads : create 1 iothread by virtio
 device

  I'm going to have a full ssd ceph cluster next month (18x S3500 on 3
 hosts),
  with more powerful nodes and more powerful clients (20 cores, 3.1 GHz)
 
  I'll do benchmarks and post results with differents setup.

 Would be interesting to additionally benchmark with sheepdog, so that we
 can
 compare ;-)
 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Fwd: [Qemu-stable] [Qemu-devel] [PATCH v2 1/1] migration/block: fix pending() return value

2015-01-04 Thread Andrew Thrift
I think you may be referring to a problem I reported.
We have been having corruption of RBD volumes when live migrated from one
pool to another inside Proxmox VE.

Alexandre had spotted this bug and suggested it may be related.


On Sun, Jan 4, 2015 at 8:37 AM, Stefan Priebe s.pri...@profihost.ag wrote:

 Am 03.01.2015 um 11:00 schrieb Dietmar Maurer:

 On January 2, 2015 at 5:37 PM Stefan Priebe - Profihost AG
 s.pri...@profihost.ag wrote:


 Isn't this something which was reported some weeks ago?


 Sorry, but what do you refer to?


 Sorry i thought there were a report where block migration was hanging.

 Stefan


 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] rbd : free_image : retry if rbd has watchers

2014-12-08 Thread Andrew Thrift
Hi Alexandre,

 yes this was tested with 3.3-3

# aptitude -F '%p %v#' search qemu-server
qemu-server   3.3-3



On Mon, Dec 8, 2014 at 5:07 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 We are still experiencing corruption when performing live disk moves,
 even with this patch in place.

 Do you mean the patch "rbd : free_image : retry if rbd has watchers"?

 Because we have not applied this patch; it was not the problem
 but the consequence.

 Have you tested the latest qemu-server package from the pvetest repository
 (>= 3.3-3)?

 It should be fixed here.



 - Mail original -
 De: Andrew Thrift and...@networklabs.co.nz
 À: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Lundi 8 Décembre 2014 04:42:23
 Objet: Re: [pve-devel] [PATCH] rbd : free_image : retry if rbd has watchers

 Hi Alexandre,
 We are still experiencing corruption when performing live disk moves, even
 with this patch in place.

 Is there anything we can do to help pinpoint the cause of this ?



 On Wed, Nov 12, 2014 at 7:39 PM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 Ok, Great!

 Thanks for testing.

 (@cc pve-devel)

 - Mail transféré -

 De: Andrew Thrift  and...@networklabs.co.nz 
 À: Alexandre DERUMIER  aderum...@odiso.com 
 Envoyé: Mercredi 12 Novembre 2014 05:10:40
 Objet: Re: [pve-devel] [PATCH] rbd : free_image : retry if rbd has watchers


 Hi Alexandre,


 Initial testing looks promising.


 I have tested migrating disks that have active writes on them and it
 worked well. All files had matching md5 sums.


 I will test with 4K writes tomorrow.


 On Fri, Nov 7, 2014 at 10:42 PM, Andrew Thrift  and...@networklabs.co.nz
  wrote:



 Thanks Alexandre,


 I will try these first thing Monday.


 Have a good weekend !








 On Fri, Nov 7, 2014 at 10:29 PM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:

 Do you know why online Disk Move's could be causing this corruption ? We
 have had to stop using it as if we corrupt a customers DB server it would
 not be a good thing :(

 Can you try to 2 patchs I have sent ? I think it should fix the problem.


 - Mail original -

 De: Andrew Thrift  and...@networklabs.co.nz 
 À: Alexandre DERUMIER  aderum...@odiso.com 
 Envoyé: Jeudi 6 Novembre 2014 21:18:19
 Objet: Re: [pve-devel] [PATCH] rbd : free_image : retry if rbd has watchers




 HI Alexandre,


 Not related specifically to this patch. But using Disk Move while the VM
 is online results in corruption for us almost every time we use it.


 We are using PVE3.3 with RBD storage. Typically we are moving from one RBD
 pool to another. We seem to get corruption whether the block copy completes
 or fails.


 We are primarily running Windows guest OS's with virtio or virtio-scsi
 disks.


 Our Ceph cluster has 84 spinning disks and 7x Intel S3700 Journal's.
 Networking to all devices is 2x10gigabit bonded and performance generally
 is very good.




 Do you know why online Disk Move's could be causing this corruption ? We
 have had to stop using it as if we corrupt a customers DB server it would
 not be a good thing :(




 On Fri, Nov 7, 2014 at 5:00 AM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 I'll resend a V2 tomorrow
 - Mail original -

 De: Dietmar Maurer  diet...@proxmox.com 
 À: Alexandre DERUMIER  aderum...@odiso.com 
 Cc: pve-devel@pve.proxmox.com
 Envoyé: Jeudi 6 Novembre 2014 16:08:38
 Objet: RE: [pve-devel] [PATCH] rbd : free_image : retry if rbd has watchers



  And what happens if we get other errors?
 
  Currently It's retrying until $i  ~0
 
  but we could add a die directly if $err !~ "image still has watchers"

 Yes, I think that would be better.
 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel








___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Remove Disk

2014-12-07 Thread Andrew Thrift
Hi,

I have noticed from at least PVE 2.3 through 3.3 that when removing a disk
from a VM there is no Task created, and no progress is shown.

This means that on storage such as Ceph where disk removal can take some
time, you are in the dark as to progress.  You also cannot see from a VM's
Task History when a disk was deleted.

Can this missing functionality please be added to the roadmap ?


Thanks,




Andrew
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Remove Disk

2014-12-07 Thread Andrew Thrift
Hrmm, I have had remove jobs run for 20+ minutes and never seen a task
(ceph...).

Wouldn't it be good practice to do this for all remove jobs?

I notice that when a Disk Move is performed, it shows the remove
status.



On Mon, Dec 8, 2014 at 2:40 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Hi,
 I'm not sure, but I think a task is created only if the delete takes too
 much time.
 I don't remember exactly after how much time.


 - Mail original -
 De: Andrew Thrift and...@networklabs.co.nz
 À: pve-devel pve-devel@pve.proxmox.com
 Envoyé: Lundi 8 Décembre 2014 00:12:50
 Objet: [pve-devel] Remove Disk

 Hi,
 I have noticed from at least PVE 2.3 through 3.3 that when removing a disk
 from a VM that there is no Task created, and that no progress is shown.

 This means that on storage such as Ceph where disk removal can take some
 time, you are in the dark as to progress. You also cannot see from a VM's
 Task History when a disk was deleted.

 Can this missing functionality please be added to the roadmap ?


 Thanks,




 Andrew

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Maintenance mode

2014-11-10 Thread Andrew Thrift
Option 4: You could have a drop-down box with a default selection of
"Distribute Evenly", then a list of all available nodes.
This would allow the user to just click through and have them distributed
evenly, OR they could select a specific node.



On Tue, Nov 11, 2014 at 12:32 PM, Michael Rasmussen m...@datanom.net wrote:

 On Tue, 11 Nov 2014 00:27:46 +0100
 Michael Rasmussen m...@datanom.net wrote:

  Hi all,
 
  Just to let you know. I have started implementing the feature for
  applying maintenance mode to a node. Right-click on any node will open
  a context menu which, at the moment, only contains one item:
  Maintenance. See attached screenshot.
 
 A quick poll.

 1) Should the user select a specific target node?
 2) Should the user select a list of target nodes?
 3) Should the implementation migrate running VM's evenly across all
 other nodes?
 4) Should the user be able to select any one of the methods above?

 --
 Hilsen/Regards
 Michael Rasmussen

 Get my public GnuPG keys:
 michael at rasmussen dot cc
 http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
 mir at datanom dot net
 http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
 mir at miras dot org
 http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
 --
 /usr/games/fortune -es says:
 How should I know if it works?  That's what beta testers are for.  I
 only coded it.
 (Attributed to Linus Torvalds, somewhere in a posting)

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] rbd : free_image : retry if rbd has watchers

2014-11-05 Thread Andrew Thrift
Hi Alexandre,

Will this potentially fix the Disk Move issues I reported a few weeks back ?


On Thu, Nov 6, 2014 at 4:09 AM, Alexandre Derumier aderum...@odiso.com
wrote:

 reported in drive-mirror,

 if we cancel the job and we try to free the rbd volume,
 it seems that the rbd storage takes 2-3 seconds to release the volume,
 or it throws an "image still has watchers" error.

 This patch tries to wait 10s, with 10 retries, before dying.

 Signed-off-by: Alexandre Derumier aderum...@odiso.com
 ---
  PVE/Storage/RBDPlugin.pm |   15 ++-
  1 file changed, 14 insertions(+), 1 deletion(-)

 diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
 index 1026d81..3bfff4c 100644
 --- a/PVE/Storage/RBDPlugin.pm
 +++ b/PVE/Storage/RBDPlugin.pm
 @@ -409,7 +409,20 @@ sub free_image {
     run_rbd_command($cmd, errmsg => "rbd snap purge '$volname' error");
 
     $cmd = &$rbd_cmd($scfg, $storeid, 'rm', $name);
 -    run_rbd_command($cmd, errmsg => "rbd rm '$volname' error");
 +
 +    my $i = 0;
 +    while (1) {
 +        $i++;
 +        eval {
 +            run_rbd_command($cmd, errmsg => "rbd rm '$volname' error");
 +        };
 +        my $err = $@;
 +
 +        sleep 1 if ($err && $err =~ m/image still has watchers/);
 +
 +        die "image still has watchers" if $i > 10;
 +        last if !$err;
 +    }

  return undef;
  }
 --
 1.7.10.4
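
 A hedged way to check for a lingering watcher by hand before retrying the
 removal (pool and image names are examples; the "rbd status" subcommand
 exists in newer Ceph releases, older ones need "rados listwatchers" on the
 image header object instead):

 rbd -p rbd status vm-100-disk-1    # lists current watchers, if any
 rbd -p rbd rm vm-100-disk-1        # fails with "image still has watchers" while one is attached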

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Guest UEFI support

2014-10-29 Thread Andrew Thrift
Hi Dietmar,

It is a replacement for the bios.  You can load ovmf.fd with the -L option
for QEMU.

You can download binary copies of ovmf.fd from:
http://tianocore.github.io/ovmf/
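
A minimal hedged example of booting a guest with OVMF instead of SeaBIOS
(the firmware path varies by distribution or build):

qemu-system-x86_64 -bios /path/to/OVMF.fd -m 2048 -hda disk.qcow2
# alternatively, -L <dir> points QEMU at a directory containing the firmware blobs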



On Wed, Oct 29, 2014 at 12:22 AM, Dietmar Maurer diet...@proxmox.com
wrote:

  I was wondering if support for UEFI in VM's could be added to PVE.
 
  See http://www.linux-kvm.org/page/OVMF
 
  It looks to be an alternative firmware image that is loaded instead of
 the usual
  bios.bin

 I would like to add EFI support, but those howtos are highly confusing for
 me.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Guest UEFI support

2014-10-27 Thread Andrew Thrift
Hi,

I was wondering if support for UEFI in VM's could be added to PVE.

See http://www.linux-kvm.org/page/OVMF

It looks to be an alternative firmware image that is loaded instead of the
usual bios.bin


Regards,





Andrew
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : allow only hotpluggable|dynamics options to be change online

2014-09-07 Thread Andrew Thrift
This seems like a very good idea.

The web-ui could indicate the VM has pending changes, possibly by using a
different VM icon, and/or showing it on the VM summary tab.
Being able to view what changes are pending would also be a good idea in
multi-admin environments.



On Sat, Sep 6, 2014 at 2:11 AM, Dietmar Maurer diet...@proxmox.com wrote:

  Seems users also have different opinions - some prefer existing
 behavior 
 
  Maybe simply add an option in datacenter.cfg, to choose behavior ?

 I think that is not necessary, because we already have the VM 'hotplug'
 option.

 I am thinking about the following: Write pending changes into a new config
 section,
 for example:

 --vm config---
 memory: 1024

 [pending_changes]
 memory: 8192
 --

 We always write this before we change something. If hotplug is enabled, we
 commit the change after
 hotplug. Else, we commit on VM startup.

 What do you think?

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] vm_devices_list : also list block devices

2014-09-01 Thread Andrew Thrift
Some feedback.

We have tested this patch and hot add/remove are both working perfectly
with at least virtio-scsi on Windows 2008, 2008r2, 2012, 2012r2 and Linux
3.2 and higher.

Thanks guys! This is going to lower operational overheads for us, as we
won't need to schedule outages to cleanly add disks to VM's.
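
For reference, a hedged example of the kind of online disk add we tested
(VM id, storage name and size are examples; hot-plug assumes the VM's hotplug
option is enabled):

qm set 100 -scsihw virtio-scsi-pci    # the controller we used
qm set 100 -scsi1 ceph-pool:32        # allocates a 32 GB disk and attaches it to the running VM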






On Mon, Sep 1, 2014 at 9:34 PM, Dietmar Maurer diet...@proxmox.com wrote:

 applied

  -Original Message-
  From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of
  Alexandre Derumier
  Sent: Freitag, 29. August 2014 15:04
  To: pve-devel@pve.proxmox.com
  Subject: [pve-devel] [PATCH] vm_devices_list : also list block devices
 
  This allow scsi disk to be plug|unplug

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Pool View

2014-08-26 Thread Andrew Thrift
Hi Dietmar,

Sorry for the delay.

Yes we have tested this on our test cluster and it appears to work as
expected.


Regards,




Andrew




On Wed, Aug 13, 2014 at 6:16 PM, Dietmar Maurer diet...@proxmox.com wrote:

 check_any call for valid_privs:
 This check is assessing whether the user has any permissions assigned to
 the pool and therefore, by extension, whether they should have visibility
 of the Pool at all, or whether the Pool should be omitted from their view
 of the list of Pools. In other words, if the user has no rights assigned to
 the Pool, then their console/screen should not be cluttered with it in this
 view (as they can not perform any actions on it). This check achieves that
 goal.

 Still don't get that. If a user has no permission assigned to a pool, the
 pool is not listed at all.
 If a user can see any VM inside that pool (VM.Audit), The view will show
 the pool.

 I cannot see any need for that check_any call.

 I just committed a first version of the Pool View:


 https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=686915003e6069a25b9c8e3d533f770f0a71bebe

 Please can you test if that fits your needs?


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Pool View

2014-08-04 Thread Andrew Thrift
Exported Variable:

The particular variable referenced is the canonical list of Valid
Privileges, necessary for assessing whether any given Privilege is, in fact
on the list of valid and assignable Privileges. The change made allows the
variable to be referenced outside the scope of the module that contains it, so
that it can be re-used in other modules (specifically in
PVE::API2::Cluster). I would like to highlight that it is NOT really a
Global Variable. It is a lexically scoped, exported variable, available to
other modules within the PVE package by reference to
PVE::AccessControl::valid_privs (e.g., it does not collide with other
declarations of valid_privs in other modules). If a different approach is
preferred, please advise what the desired strategy is - if it is strictly
necessary, then for example a utility function could be added to
PVE::AccessControl to export the values contained in the valid_privs
variable instead. If the Proxmox team has specific design goals in mind, the
patch can be altered to accommodate them.


check_any call for valid_privs:

This check is assessing whether the user has any permissions assigned to
the pool and therefore, by extension, whether they should have visibility
of the Pool at all, or whether the Pool should be omitted from their view
of the list of Pools. In other words, if the user has no rights assigned to
the Pool, then their console/screen should not be cluttered with it in this
view (as they can not perform any actions on it). This check achieves that
goal.


!important tag on image url:

This was included in the original cut of the patch to cover a specific
browser incompatibility scenario; where the author observed that the image
was not showing in its desired situation. This is a common problem with
older versions of Internet Explorer and occasionally I have observed it in
Chrome as well (where a less-specific CSS selector overrides a
more-specific CSS selector).


On Mon, Jul 21, 2014 at 5:40 PM, Dietmar Maurer diet...@proxmox.com wrote:

 here are a few comments and question (inline):

  diff -Nuarp usr/share/perl5/PVE/AccessControl.pm
  usr/share/perl5/PVE/AccessControl.pm
  --- usr/share/perl5/PVE/AccessControl.pm  2014-06-27
 16:23:34.305587119
  +1200
  +++ usr/share/perl5/PVE/AccessControl.pm  2014-06-27
 16:23:44.077692909
  +1200
  @@ -492,7 +492,7 @@ my $privgroups = {
   },
   };
 
  -my $valid_privs = {};
  +our $valid_privs = {};

 using global vars is bad design!

   my $special_roles = {
   'NoAccess' = {}, # no priviledges
  diff -Nuarp usr/share/perl5/PVE/API2/Cluster.pm
  usr/share/perl5/PVE/API2/Cluster.pm
  --- usr/share/perl5/PVE/API2/Cluster.pm   2014-06-27
 16:22:37.302970017
  +1200
  +++ usr/share/perl5/PVE/API2/Cluster.pm   2014-06-27
 16:26:30.939493106
  +1200
  @@ -19,6 +19,8 @@ use PVE::RESTHandler;
   use PVE::RPCEnvironment;
   use PVE::JSONSchema qw(get_standard_option);
 
  +use PVE::AccessControl qw(valid_privs);
  +
   use base qw(PVE::RESTHandler);
 
   __PACKAGE__-register_method ({
  @@ -160,12 +162,20 @@ __PACKAGE__-register_method({
my $vmlist = PVE::Cluster::get_vmlist() || {};
my $idlist = $vmlist-{ids} || {};
 
  + my $storage_pool = {};
  +
  + foreach my $pool (keys(%{$usercfg-{pools}})) {
  + foreach my $storage (keys(%{$usercfg-{pools}-{$pool}-
  {storage}})) {
  + $storage_pool-{$storage}=$pool;
  + }
  + }
  +
my $pooldata = {};
if (!$param-{type} || $param-{type} eq 'pool') {
foreach my $pool (keys %{$usercfg-{pools}}) {
my $d = $usercfg-{pools}-{$pool};
 
  - next if !$rpcenv-check($authuser, /pool/$pool, [
  'Pool.Allocate' ], 1);
  + next if !$rpcenv-check_any($authuser, /pool/$pool, [keys
  $PVE::AccessControl::valid_privs], 1);

 Please can you explain that change?

 
my $entry = {
id = /pool/$pool,
  @@ -185,9 +195,9 @@ __PACKAGE__-register_method({
 
my $data = $idlist-{$vmid};
my $entry = PVE::API2Tools::extract_vm_stats($vmid, $data,
  $rrd);
  - if ($entry-{uptime}) {
  - if (my $pool = $usercfg-{vms}-{$vmid}) {
  - if (my $pe = $pooldata-{$pool}) {
  + if (my $pool = $usercfg-{vms}-{$vmid}) {
  + if (my $pe = $pooldata-{$pool}) {
  + if ($entry-{uptime}) {
$pe-{uptime} = $entry-{uptime} if !$pe-
  {uptime} || $entry-{uptime}  $pe-{uptime};
$pe-{mem} = 0 if !$pe-{mem};
$pe-{mem} += $entry-{mem};
  @@ -198,6 +208,7 @@ __PACKAGE__-register_method({
$pe-{maxcpu} = 0 if !$pe-{maxcpu};
$pe-{maxcpu} += $entry-{maxcpu};
}
  + $entry-{pool} = $pe-{pool};
}
}
 
  @@ -228,6 

Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-06 Thread Andrew Thrift
This seems to be a trivial change, but fixes a known problem in PVE3.2

With such low risk, why would this NOT be included as an update to PVE3.2 ?



On Tue, May 6, 2014 at 5:34 PM, Dietmar Maurer diet...@proxmox.com wrote:

  Will these patched packages (for qemu 1.7.1) make it in to the PVE3.2
 repos ?

 No, my plan was to release them with qemu 2.0

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-06 Thread Andrew Thrift
Dietmar,

Is this not the point of pve-test?

If the updated packages are put in to pve-test users will be able to test
this for you.

We will test this against Ceph, iSCSI to EqualLogic, RAID via PERC5, iSCSI
to Nexenta and NFS to Nexenta.





On Tue, May 6, 2014 at 8:47 PM, Dietmar Maurer diet...@proxmox.com wrote:

  This seems to be a trivial change, but fixes a known problem in PVE3.2
 
  With such low risk, why would this NOT be included as an update to
 PVE3.2 ?

 Updating a library is not a trivial change. This needs careful testing.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-05 Thread Andrew Thrift
Will these patched packages (for qemu 1.7.1) make it in to the PVE3.2 repos
?



On Tue, May 6, 2014 at 1:00 PM, Michael Rasmussen m...@datanom.net wrote:

 On Sun, 04 May 2014 18:47:28 +0200 (CEST)
 Alexandre DERUMIER aderum...@odiso.com wrote:

 
  can you test them ?
 
 Have tested with CentOS-6.5 and I can confirm that it works with virtio
 and megaraid. No speed monster though;-)

 --
 Hilsen/Regards
 Michael Rasmussen

 Get my public GnuPG keys:
 michael at rasmussen dot cc
 http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xD3C9A00E
 mir at datanom dot net
 http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE501F51C
 mir at miras dot org
 http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xE3E80917
 --
 /usr/games/fortune -es says:
 1st graffitiist: QUESTION AUTHORITY!

 2nd graffitiist: Why?

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] PATCH: Add support for bridges with more than one physical link.

2014-02-12 Thread Andrew Thrift
While this is a very neat way to load balance vlan traffic, it could be
dangerous.

You are effectively allowing users to create a loop. Unless they have their
switches and spanning tree configured correctly upstream of the host, they
could create a large broadcast storm on their network, likely knocking out
other hosts and switches control planes.

It is the same as looping a cable between two ports on a switch that does
not have edge-safeguard functionality.

Just my 2c.




On Wed, Feb 12, 2014 at 6:28 PM, Pablo Ruiz pablo.r...@gmail.com wrote:

 Hi,

 In our proxmox cluster, each node has two bond interfaces, and each bond
 interface connects to an independent switch. This allows us to enable
 MSTP/PVSTP+ and thus load share traffic on different vlans across switches.

   +==+
|  SWITCH-A  |---,
   +==+   |
   +===+   | |
   |   |-(bond1)--´ |
 -|  Node-X  |  (trunk)
   |   |-(bond2)--, |
   +===+   | |
   +==+   |
|  SWITCH-B  |---´
   +==+

 In this setup, we have a couple of vlans (iSCSI-A & iSCSI-B), each of which
 has been prioritized (by means of MSTP/PVST). Also, proxmox's
 internal (software) bridges have STP disabled (so they do not conflict with
 MSTP's traffic). With this setup we are able to achieve full-redundant
 network interconnects, while at the same time using both links/bonds for
 iSCSI traffic (with multipath+round-robin).
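
 To make the intent concrete, the kind of /etc/network/interfaces stanza this
 patch is meant to allow would look roughly like the following (interface names
 are illustrative; loop prevention relies entirely on MSTP/PVSTP+ being
 configured on the upstream switches):

 auto vmbr0
 iface vmbr0 inet manual
         bridge_ports bond1 bond2
         bridge_stp off
         bridge_fd 0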

 However, proxmox's current code doesn't allow bridges with more than one
 physical interface, so we had to apply a small enhancement to
 proxmox in order to set up our cluster as stated.

 We would like to have this enhancement merged into proxmox, and so I've
 read about proxmox development policies, etc. And, as stated, here is the
 link containing a diff-format patch:
 https://github.com/pruiz/pve-common/commit/ce0173a1079e4fc8bb08d9eebd1df71f0f8dc3fe.diff
 as well as the prettified diff from github:
 https://github.com/pruiz/pve-common/commit/ce0173a1079e4fc8bb08d9eebd1df71f0f8dc3fe

 This code has been in production for a little more than a month with no
 issues. But please let me know what may be missing and/or what amendments
 need to be made in order for this patch to be accepted into proxmox.

 Best regards,
 Pablo

 PS: I'll be sending the signed contribution agreement by tomorrow, as soon
 as I get to my office, as I hope to send another contribution regarding the ZFS
 plugin next.

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] PATCH: Add support for bridges with more than one physical link.

2014-02-12 Thread Andrew Thrift
Correct in your case, but not all Proxmox users have MSTP/PVST+ configured.

A much lower-risk strategy is to use Multi-Chassis Link Aggregation Groups
between Switch-A and Switch-B with a 4-port bond on the Proxmox hosts.
Using an L3/L4 hash you will achieve better load balancing of traffic across
all 4 links, and as this requires no changes to Proxmox it will not introduce
the chance of loops.

We have had this in production for 13 months now with no issues.
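
For reference, a rough sketch of what such a 4-port LACP bond could look like
in /etc/network/interfaces (option names per ifupdown/ifenslave; NIC names are
illustrative):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_mode 802.3ad
        bond_miimon 100
        bond_xmit_hash_policy layer3+4

auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0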



On Wed, Feb 12, 2014 at 9:55 PM, Pablo Ruiz pablo.r...@gmail.com wrote:

 That's what MSTP/PVSTP+ is supposed to avoid. (And infact, it does so in
 our environment).. however, it requires switches with such capability.


 On Wed, Feb 12, 2014 at 9:53 AM, Andrew Thrift 
 and...@networklabs.co.nzwrote:

 While this is a very neat way to load balance vlan traffic, it could be
 dangerous.

 You are effectively allowing users to create a loop. Unless they have
 their switches and spanning tree configured correctly upstream of the host,
 they could create a large broadcast storm on their network, likely knocking
 out other hosts and switches control planes.

 It is the same as looping a cable between two ports on a switch that does
 not have edge-safeguard functionality.

 Just my 2c.




 On Wed, Feb 12, 2014 at 6:28 PM, Pablo Ruiz pablo.r...@gmail.com wrote:

 Hi,

 In our proxmox cluster, each node has two bond interfaces, and each bond
 interface connects to an independent switch. This allows us to enable
 MSTP/PVSTP+ and thus load share traffic on different vlans across switches.

   +==+
|  SWITCH-A  |---,
   +==+   |
   +===+   | |
   |   |-(bond1)--´ |
 -|  Node-X  |  (trunk)
   |   |-(bond2)--, |
   +===+   | |
   +==+   |
|  SWITCH-B  |---´
   +==+

 In this setup, we have a couple of vlans (iSCSI-A & iSCSI-B), each of which
 has been prioritized (by means of MSTP/PVST). Also, proxmox's
 internal (software) bridges have STP disabled (so they do not conflict with
 MSTP's traffic). With this setup we are able to achieve full-redundant
 network interconnects, while at the same time using both links/bonds for
 iSCSI traffic (with multipath+round-robin).

 However, proxmox's current code doesn't allow bridges with more than one
 physical interface, so we had to apply a small enhancement to
 proxmox in order to set up our cluster as stated.

 We would like to have this enhancement merged into proxmox, and so I've
 read about proxmox development policies, etc. And, as stated, here is the
 link containing a diff-format patch:
 https://github.com/pruiz/pve-common/commit/ce0173a1079e4fc8bb08d9eebd1df71f0f8dc3fe.diff
 as well as the prettified diff from github:
 https://github.com/pruiz/pve-common/commit/ce0173a1079e4fc8bb08d9eebd1df71f0f8dc3fe

 This code has been in production for a little more than a month with no
 issues. But please let me know what may be missing and/or what amendments
 need to be made in order for this patch to be accepted into proxmox.

 Best regards,
 Pablo

 PS: I'll be sending the signed contribution agreement by tomorrow, as
 soon as I get to my office, as I hope to send another contribution
 regarding the ZFS plugin next.

 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

2014-01-13 Thread Andrew Thrift
Ok, so in my example of eth0,eth1,eth2,eth3 -> bond0 -> bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface

The packet is DOUBLE TAGGED with 0x8100 tags.  So the outer tag (SVID) is
101 with an inner (CVID) tag of 201.

802.1ad support adds the capability to have an SVID of 0x88a8 with a CVID
of 0x8100.   Before this support was added you could only do QinQ using the
stacked 0x8100 tags.
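
A rough manual equivalent of that double tagging with iproute2 (outside of
Proxmox's bridge handling; names and IDs are just the ones from the example
above):

ip link add link bond0 name bond0.101 type vlan id 101            # outer (S) tag, 0x8100
ip link add link bond0.101 name bond0.101.201 type vlan id 201    # inner (C) tag, 0x8100

With 802.1ad support the outer tag can instead use the 0x88a8 ethertype:

ip link add link bond0 name bond0.101 type vlan proto 802.1ad id 101
ip link add link bond0.101 name bond0.101.201 type vlan proto 802.1q id 201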




On Mon, Jan 13, 2014 at 9:31 PM, Alexandre DERUMIER aderum...@odiso.comwrote:

 Also,I see that support of 802.1ad has been added to kernel since 3.10
 only. So how do is work before ?


 http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8ad227ff89a7e6f05d07cd0acfd95ed3a24450ca


 # ip link add link eth0 eth0.1000 \
 type vlan proto 802.1ad id 1000

 - Mail original -

 De: Alexandre DERUMIER aderum...@odiso.com
 À: Stefan Priebe - Profihost AG s.pri...@profihost.ag
 Cc: pve-devel@pve.proxmox.com
 Envoyé: Lundi 13 Janvier 2014 09:10:59
 Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

 this should explain it:
 http://en.wikipedia.org/wiki/IEEE_802.1ad

 Thanks, I understand now.

 So, for this setup:

 bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface


 when the packet from tap (tagged 201) is going out through bond0.101,

 what happen ?

 is the packet vlan retagged 101 ?
 or is the packet vlan 201 encapsuled in vlan101 ?





 - Mail original -

 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag
 À: Alexandre DERUMIER aderum...@odiso.com, Andrew Thrift 
 and...@networklabs.co.nz
 Cc: pve-devel@pve.proxmox.com
 Envoyé: Lundi 13 Janvier 2014 08:24:19
 Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

 Am 13.01.2014 07:54, schrieb Alexandre DERUMIER:
  QinQ vlan tagging.
 
  can somebody explain me how qinq works exactly ? (I'm reading cisco doc,
 but I'm not sure to understand how tagging is working exactly)

 this should explain it:
 http://en.wikipedia.org/wiki/IEEE_802.1ad


  - Mail original -
 
  De: Andrew Thrift and...@networklabs.co.nz
  À: pve-devel@pve.proxmox.com
  Envoyé: Lundi 13 Janvier 2014 01:33:24
  Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface
 
 
  FYI we are using vlan tagging on bridges with Proxmox in production for
 over a year now, initially on 2.6.32 kernel and then on 3.10. We are using
 Intel gigabit and 10gigabit adapters.
 
 
  We posted the patches to the list a few months back, I believe these are
 very similar to Alexandre's patches. We have a more complex config in that
 we are also doing bonding and QinQ vlan tagging.
 
 
  Our setup looks like this:
 
 
 
 eth0,eth1,eth2,eth3 -> bond0 -> bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface
 
 
 
  That is using an outer tag of 101 and an inner tag of 201.
 
 
 
 
 
 
 
 
 
  On Sat, Jan 11, 2014 at 7:59 AM, Alexandre DERUMIER 
 aderum...@odiso.com  wrote:
 
 
 
  If alexandre’s patch don’t work with any devices it isn’t really
 interesting because it addressing other functionality and devices. I
 checked the patch and it use the same problematic part with eth*, wifi* and
 bond* check which fails with virtual devices like gre, ipsec,..
 
  What do you mean by don't work with any devices ?
 
  My patch is to manage vlan tags on the bridge, not eth interface.
 
  eth0 -> vmbr0 -> tap interface
 
  vlan are tagged on tap interfaces plugged on vmbr0, with new bridge
 cmd. (like an access port on a cisco switch)
  and vlans are allowed to pass through eth0.(like a trunk port on cisco
 switch)
 
  So I think it should work with gre,ipsec,...(But I don't have tested it
 yet)
 
 
 
 
 
 
  - Mail original -
 
  De: Johannes Ernst  i...@filemedia.de 
  À: pve-devel@pve.proxmox.com
  Envoyé: Vendredi 10 Janvier 2014 18:16:30
 
  Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface
 
 
 
  Thus, it’s a configuration issue and NOT a kernel issue.
 
  If alexandre’s patch don’t work with any devices it isn’t really
 interesting because it addressing other functionality and devices. I
 checked the patch and it use the same problematic part with eth*, wifi* and
 bond* check which fails with virtual devices like gre, ipsec,..
 
  Am 10.01.2014 um 17:18 schrieb Dietmar Maurer  diet...@proxmox.com :
 
  Sure? Do you have additional information? I think it's not correct and
 it works!
 
  We tested that a few times (and failed), so nobody is keen to test that
 again.
 
  We currently try to use the new bridge VLAN features - that looks more
 promising (see patches from Alexandre).
 
 
 
 
 
  ___
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
  ___
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
 
 
 
 
  ___
  pve-devel mailing list
  pve

Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

2014-01-13 Thread Andrew Thrift
I should add that some switches will only accept QinQ using an SVID of
0x88a8 and a CVID of 0x8100.  Which is why support for it was needed on
Linux.

A lot of switches will allow you to specify the SVID type, either globally,
per port or per vlan.





On Tue, Jan 14, 2014 at 10:42 AM, Andrew Thrift and...@networklabs.co.nzwrote:

 Ok, so in my example of eth0,eth1,eth2,eth3 -> bond0 -> bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface

 The packet is DOUBLE TAGGED with 0x8100 tags.  So the outer tag (SVID) is
 101 with an inner (CVID) tag of 201.

 802.1ad support adds the capability to have an SVID of 0x88a8 with a CVID
 of 0x8100.   Before this support was added you could only do QinQ using the
 stacked 0x8100 tags.




 On Mon, Jan 13, 2014 at 9:31 PM, Alexandre DERUMIER 
 aderum...@odiso.comwrote:

 Also,I see that support of 802.1ad has been added to kernel since 3.10
 only. So how do is work before ?


 http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8ad227ff89a7e6f05d07cd0acfd95ed3a24450ca


 # ip link add link eth0 eth0.1000 \
 type vlan proto 802.1ad id 1000

 - Mail original -

 De: Alexandre DERUMIER aderum...@odiso.com
 À: Stefan Priebe - Profihost AG s.pri...@profihost.ag
 Cc: pve-devel@pve.proxmox.com
 Envoyé: Lundi 13 Janvier 2014 09:10:59
 Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

 this should explain it:
 http://en.wikipedia.org/wiki/IEEE_802.1ad

 Thanks, I understand now.

 So, for this setup:

 bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface


 when the packet from tap (tagged 201) is going out through bond0.101,

 what happen ?

 is the packet vlan retagged 101 ?
 or is the packet vlan 201 encapsuled in vlan101 ?





 - Mail original -

 De: Stefan Priebe - Profihost AG s.pri...@profihost.ag
 À: Alexandre DERUMIER aderum...@odiso.com, Andrew Thrift 
 and...@networklabs.co.nz
 Cc: pve-devel@pve.proxmox.com
 Envoyé: Lundi 13 Janvier 2014 08:24:19
 Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

 Am 13.01.2014 07:54, schrieb Alexandre DERUMIER:
  QinQ vlan tagging.
 
  can somebody explain me how qinq works exactly ? (I'm reading cisco
 doc, but I'm not sure to understand how tagging is working exactly)

 this should explain it:
 http://en.wikipedia.org/wiki/IEEE_802.1ad


  - Mail original -
 
  De: Andrew Thrift and...@networklabs.co.nz
  À: pve-devel@pve.proxmox.com
  Envoyé: Lundi 13 Janvier 2014 01:33:24
  Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface
 
 
  FYI we are using vlan tagging on bridges with Proxmox in production for
 over a year now, initially on 2.6.32 kernel and then on 3.10. We are using
 Intel gigabit and 10gigabit adapters.
 
 
  We posted the patches to the list a few months back, I believe these
 are very similar to Alexandre's patches. We have a more complex config in
 that we are also doing bonding and QinQ vlan tagging.
 
 
  Our setup looks like this:
 
 
 
 eth0,eth1,eth2,eth3 -> bond0 -> bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface
 
 
 
  That is using an outer tag of 101 and an inner tag of 201.
 
 
 
 
 
 
 
 
 
  On Sat, Jan 11, 2014 at 7:59 AM, Alexandre DERUMIER 
 aderum...@odiso.com  wrote:
 
 
 
  If alexandre’s patch don’t work with any devices it isn’t really
 interesting because it addressing other functionality and devices. I
 checked the patch and it use the same problematic part with eth*, wifi* and
 bond* check which fails with virtual devices like gre, ipsec,..
 
  What do you mean by don't work with any devices ?
 
  My patch is to manage vlan tags on the bridge, not eth interface.
 
  eth0 -> vmbr0 -> tap interface
 
  vlan are tagged on tap interfaces plugged on vmbr0, with new bridge
 cmd. (like an access port on a cisco switch)
  and vlans are allowed to pass through eth0.(like a trunk port on cisco
 switch)
 
  So I think it should work with gre,ipsec,...(But I don't have tested it
 yet)
 
 
 
 
 
 
  - Mail original -
 
  De: Johannes Ernst  i...@filemedia.de 
  À: pve-devel@pve.proxmox.com
  Envoyé: Vendredi 10 Janvier 2014 18:16:30
 
  Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface
 
 
 
  Thus, it’s a configuration issue and NOT a kernel issue.
 
  If alexandre’s patch don’t work with any devices it isn’t really
 interesting because it addressing other functionality and devices. I
 checked the patch and it use the same problematic part with eth*, wifi* and
 bond* check which fails with virtual devices like gre, ipsec,..
 
  Am 10.01.2014 um 17:18 schrieb Dietmar Maurer  diet...@proxmox.com :
 
  Sure? Do you have additional information? I think it's not correct
 and it works!
 
  We tested that a few times (and failed), so nobody is keen to test
 that again.
 
  We currently try to use the new bridge VLAN features - that looks more
 promising (see patches from Alexandre).
 
 
 
 
 
  ___
  pve-devel mailing

Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

2014-01-12 Thread Andrew Thrift
FYI we are using vlan tagging on bridges with Proxmox in production for
over a year now, initially on 2.6.32 kernel and then on 3.10.  We are using
Intel gigabit and 10gigabit adapters.

We posted the patches to the list a few months back,  I believe these are
very similar to Alexandre's patches.  We have a more complex config in that
we are also doing bonding and QinQ vlan tagging.

Our setup looks like this:

eth0,eth1,eth2,eth3 -> bond0 -> bond0.101 -> vmbr0 -> vmbr0.201 -> tap interface

That is using an outer tag of 101 and an inner tag of 201.





On Sat, Jan 11, 2014 at 7:59 AM, Alexandre DERUMIER aderum...@odiso.comwrote:

 If alexandre’s patch don’t work with any devices it isn’t really
 interesting because it addressing other functionality and devices. I
 checked the patch and it use the same problematic part with eth*, wifi* and
 bond* check which fails with virtual devices like gre, ipsec,..

 What do you mean by don't work with any devices ?

 My patch is to manage vlan tags on the bridge, not eth interface.

 eth0 -> vmbr0 -> tap interface

 vlan are tagged on tap interfaces plugged on vmbr0, with new bridge cmd.
 (like an access port on a cisco switch)
 and vlans are allowed to pass through eth0.(like a trunk port on cisco
 switch)

 So I think it should work with gre,ipsec,...(But I don't have tested it
 yet)





 - Mail original -

 De: Johannes Ernst i...@filemedia.de
 À: pve-devel@pve.proxmox.com
 Envoyé: Vendredi 10 Janvier 2014 18:16:30
 Objet: Re: [pve-devel] [PATCH] Virtual vlan tagging to bridge interface

 Thus, it’s a configuration issue and NOT a kernel issue.

 If alexandre’s patch don’t work with any devices it isn’t really
 interesting because it addressing other functionality and devices. I
 checked the patch and it use the same problematic part with eth*, wifi* and
 bond* check which fails with virtual devices like gre, ipsec,..

 Am 10.01.2014 um 17:18 schrieb Dietmar Maurer diet...@proxmox.com:

  Sure? Do you have additional information? I think it's not correct and
 it works!
 
  We tested that a few times (and failed), so nobody is keen to test that
 again.
 
  We currently try to use the new bridge VLAN features - that looks more
 promising (see patches from Alexandre).
 


 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
 ___
 pve-devel mailing list
 pve-devel@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Resource Pool problems when cloning

2013-08-12 Thread Andrew Thrift

Hi,

We have noticed on two separate clusters that when we perform a clone 
and select a destination resource pool, the resulting cloned machine 
is NOT in the pool we selected (it is in the root pool).
We also noticed that when logged in as a restricted user (that only has 
access to the one pool), cloning to the allowed resource pool 
fails based on permissions.


I am assuming that the second issue is related to the first, in that PVE 
is not actually putting the cloned VM into the correct pool, but is 
putting it into the default root pool.


Is this a known issue ?


If not, we are happy to create a clean patch for the issue and perform 
testing on it, if someone here is prepared to sponsor it.




Regards,





Andrew


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Ops on multiple VM's

2013-05-15 Thread Andrew Thrift

Hi,

One of the features that was discussed for Proxmox 3.0 was the ability 
to perform an operation on multiple VM's at the same time.


e.g. Select a group of VM's or a Resource Group and Migrate them, or 
start/stop them.


I was wondering if/when this feature will be implemented ?



Regards,



Andrew
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Linked clones fail if source Ceph pool name is not rbd

2013-05-13 Thread Andrew Thrift

Hi,

On 3.0rc1 when I attempt to create a Linked-Clone from a VM on a pool 
other than rbd it fails with the following error:


stopped: clone failed: rbd clone 'base-120-disk-1' error: rbd: error 
opening pool 'rbd': (2) No such file or directory
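
For comparison, a sketch of the pool-qualified form I would expect the plugin
to issue (pool, image and snapshot names here are only illustrative):

rbd clone mypool/base-120-disk-1@__base__ mypool/vm-130-disk-1

i.e. both the parent and the child need to carry the pool prefix instead of
falling back to the default pool rbd.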



Regards,





Andrew
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Cloned VM's do not clone parent cache settings

2013-05-13 Thread Andrew Thrift

Good Morning,

I have noticed that cloned VM's do not copy the cache settings of the 
Template.  E.g. if I have a Template with write-back enabled and then 
perform a full clone, write-back cache is disabled on the 
resulting VM.



I am not sure if this is a feature or a bug.
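
A workaround sketch for now is to re-set the cache mode on the clone by hand
(storage and volume names are illustrative):

qm set 130 -virtio0 rbd-storage:vm-130-disk-1,cache=writeback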



Regards,




Andrew

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] new ceph stable release !

2013-05-08 Thread Andrew Thrift

Great!

Will this fix make it into the packages for 3.0 rc2?




Regards,




Andrew

On 5/7/2013 10:21 PM, Dietmar Maurer wrote:

Thanks for the hint:

https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=54619c4adf9014093ef09d0f28fd40e88f56a55a


-Original Message-
From: Stefan Priebe - Profihost AG [mailto:s.pri...@profihost.ag]
Sent: Dienstag, 07. Mai 2013 11:51
To: Dietmar Maurer
Cc: Alexandre DERUMIER; pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] new ceph stable release !

Keep in mind to update QEMU to fix the ugly rbd flush bug:

http://git.qemu.org/?p=qemu.git;a=commitdiff;h=dc7588c1eb3008bda53dde1d
6b890cd299758155

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Add sound card to VM's from WebUI

2013-04-18 Thread Andrew Thrift
I have not been able to get sound over RDP working on Windows Server 
2008r2 without a KVM hda sound card.  As soon as this is added it 
works perfectly.


I'll check our VMWare farms which always just work with RDP sound and 
see what we have there.





On 4/19/2013 12:15 AM, Alexandre DERUMIER wrote:

seem to be possible also for win2003

http://support.dvsweb.com/app/answers/detail/a_id/263/~/enabling-audio-over-rdp-on-a-windows-server,-without-hardware-soundcard



- Mail original -

De: Martin Maurer mar...@proxmox.com
À: Andrew Thrift and...@networklabs.co.nz, Dietmar Maurer 
diet...@proxmox.com, pve-devel@pve.proxmox.com
Envoyé: Jeudi 18 Avril 2013 13:49:39
Objet: Re: [pve-devel] Add sound card to VM's from WebUI

Hi,


-Original Message-
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-
boun...@pve.proxmox.com] On Behalf Of Andrew Thrift
Sent: Donnerstag, 18. April 2013 06:44
To: Dietmar Maurer; pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] Add sound card to VM's from WebUI

RDP still requires that the Terminal Server has a sound card.

At least my win7 on a PVE servers play sounds (no physical sound card and no 
KVM sound card).
I just connect with my win desktop rdp client.

Can you explain this?

Martin

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Add sound card to VM's from WebUI

2013-04-17 Thread Andrew Thrift

Is this feature planned ?


And if not, would patches be accepted to add this feature ?


Use cases:

-Virtual Terminal Services (Required for RDS Remote Sound)
-Virtual Desktop Infrastructure

Based on the guest OS the default sound card could be selected, e.g. 
ac97 for 32bit WinXP, hda for 64bit Windows/Linux
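
As a stop-gap this can already be done by hand via the args option in the VM
config file, something like the following (a sketch; the exact device line is
illustrative):

args: -device intel-hda -device hda-duplex

(or -device AC97 for older guests), but having it exposed in the WebUI would be
much cleaner.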




Regards,



Andrew Thrift



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Process to submit patches

2013-04-17 Thread Andrew Thrift

Hi Dietmar,

We have been running this successfully for a few weeks now on the 
default PVE kernel with no issues.


I searched the list archives, and the issues I could see looked like 
they were related to the MTU on the parent physical interface being too 
small.


Linux does not clearly differentiate between L2 (interface) and L3 (IP) 
MTU, and also has a hidden 4 byte allowance on the interface MTU, making 
it confusing.  When running QinQ the parent physical interface needs an 
MTU of at least 1504 bytes (allowing a 1508-byte frame); any sub-vlans 
and bridges will then inherit this MTU, allowing the inner tag to 
pass externally correctly.
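
A minimal sketch of what we mean, assuming bond0 carries the outer tag (names
and values are illustrative):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad
        bond_miimon 100
        post-up ip link set dev bond0 mtu 1504

bond0.101, vmbr0 and vmbr0.201 then inherit the larger MTU, so the inner tag
fits.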


Obviously if the users are not doing QinQ the parent physical interface 
can be the default 1500byte MTU and will work perfectly.



We have only tested with KVM, so maybe there are other issues with 
OpenVZ...   If there are not, it would be good to see this patch merged.




Regards,




Andrew




On 4/2/2013 5:32 PM, Dietmar Maurer wrote:

Our patch creates the vlan sub-if on the parent VM bridge, rather than
on the parent interface.   This works with both QinQ and non-QinQ
configurations.

AFAIR this create serious problem. We already tried that several times and 
always run
into problems. Search the list for details.




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Process to submit patches

2013-04-01 Thread Andrew Thrift

Hi,

We are wanting to submit a patch to Network.pm to be included upstream.

Our patch changes the way Proxmox dynamically creates vlans allowing for 
the current model, as well as for QinQ.


Currently it is not possible to do QinQ on Proxmox as when you specify a 
VLAN in the WebUI, Proxmox checks the parent bridge for a Physical 
Interface, then creates the vlan sub-if on the Physical Interface and 
then binds it to a new bridge.


Our patch creates the vlan sub-if on the parent VM bridge, rather than 
on the parent interface.   This works with both QinQ and non-QinQ 
configurations.
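
Roughly, the manual equivalent of what the patched code ends up doing for a VM
NIC on vmbr0 with tag 201, where vmbr0's uplink is already the outer-tagged
bond0.101 (names are illustrative):

ip link add link vmbr0 name vmbr0.201 type vlan id 201

whereas the stock code only ever looks for a bare ethX/bondX port and creates
the vlan device on that physical interface.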



What is the process to submit our patch ?




Regards,





Andrew Thrift

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Process to submit patches

2013-04-01 Thread Andrew Thrift

FYI patch is:


*** Network.pm.orig 2013-04-02 10:19:42.0 +1300
--- Network.pm  2013-04-02 11:41:44.0 +1300
***
*** 122,138 
      #check if we have an only one ethX or bondX interface in the bridge

      my $iface;
!     PVE::Tools::dir_glob_foreach($dir, '((eth|bond)\d+)', sub {
          my ($slave) = @_;

          die "more then one physical interfaces on bridge '$bridge'\n" if $iface;

!         $iface = $slave;

      });

      die "no physical interface on bridge '$bridge'\n" if !$iface;

!     my $ifacevlan = "${iface}.$tag";

      # create vlan on $iface is not already exist
      if (! -d "/sys/class/net/$ifacevlan") {
--- 122,138 
      #check if we have an only one ethX or bondX interface in the bridge

      my $iface;
!     PVE::Tools::dir_glob_foreach($dir, '((eth|bond)\d+\.?\d+)', sub {
          my ($slave) = @_;

          die "more then one physical interfaces on bridge '$bridge'\n" if $iface;

!         $iface = $bridge;

      });

      die "no physical interface on bridge '$bridge'\n" if !$iface;

!     my $ifacevlan = "${bridge}.$tag";

      # create vlan on $iface is not already exist
      if (! -d "/sys/class/net/$ifacevlan") {





On 4/2/2013 11:27 AM, Andrew Thrift wrote:

Hi,

We are wanting to submit a patch to Network.pm to be included upstream.

Our patch changes the way Proxmox dynamically creates vlans allowing 
for the current model, as well as for QinQ.


Currently it is not possible to do QinQ on Proxmox as when you specify 
a VLAN in the WebUI, Proxmox checks the parent bridge for a Physical 
Interface, then creates the vlan sub-if on the Physical Interface and 
then binds it to a new bridge.


Our patch creates the vlan sub-if on the parent VM bridge, rather 
than on the parent interface.   This works with both QinQ and non-QinQ 
configurations.



What is the process to submit our patch ?




Regards,





Andrew Thrift



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] QEMU/KVM + RBD latency

2013-03-12 Thread Andrew Thrift

Hi,

An FYI more than anything. We have experienced VERY high latency when 
performing IO heavy operations on our RBD volumes. We initially thought 
it was due to network congestion on our uplinks, but it turns out it is 
a bug.


See http://tracker.ceph.com/issues/3737 

It would be great if Proxmox could include this when it is included in 
the stable branch of Ceph.




Regards,




Andrew
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] features ideas for 3.0 roadmap

2013-03-12 Thread Andrew Thrift

  
  
Having a "Maintenance" mode would be
  good.
  
  Being able to mark a node as being under "Maintenance" and having
  all VM's migrate off it, and it disappear from the target list in
  migrations.
  
  Also being able to select Multiple guests and migrate them in one
  action would also be VERY useful.
  
  
  
  
  Regards,
  
  
  
  
  Andrew
  
  
  
  
  
  On 3/13/2013 8:21 AM, James Devine wrote:


  Perhaps some type of update mode which would pull a
host server out of the cluster and migrate machines off of it.

  Auto-migrate VMs to do resource balancing. I've heard
resistance to this with the argument that the migration
would cause an even higher load but I think it is still a
valid option. If the high load ends up being a long term
situation it may be beneficial to migrate and cause a high
load for a brief period of time. Also if things are kept
balanced early enough the problem may be prevented all
together. There could be tuning options such as when to
balance and delay to prevent short periods of high load from
causing a migration. This would also be useful for
rebalancing the cluster if new hosts are added or if the
above option is implemented and a host is re-added to the
cluster.


  
  On Wed, Mar 6, 2013 at 3:18 AM, James
A. Coyle james.co...@jamescoyle.net
wrote:
I would
  +1 the backups, it's the one part of Proxmox which drives
  me mad! I tend to SSH in and rename them manually for
  major backups.
  
  I'd like to see some small UI changes:
  - Add a search box under Storage  Content so that we
  can start to type and the content is filtered.
  - On Create Backup Job - allow a name or comment to
  describe the backup.
  - As discussed - backups are tricky to manage on VMID
  alone.
  
  Overall though, it's a great product! I use 2 x 3 node
  clusters and use about a 50:50 split between QEMU and
  OpenVZ. Thanks for your efforts!
  
  
  James Coyle
  
  M:+44 (0) 7809 895
392
  E: james.co...@jamescoyle.net
  Skype: jac2703
  Gtalk: jac2...@gmail.com
  www: www.jamescoyle.net
  

  - Original Message -
  From: "FinnTux" finn...@gmail.com
  To: "Alexandre DERUMIER" aderum...@odiso.com
  Cc: pve-devel@pve.proxmox.com
  Sent: Wednesday, 6 March, 2013 7:52:56 AM
  Subject: Re: [pve-devel] features ideas for 3.0
  roadmap
  
   - finish vm/templates copy/clone
   - storage migration
   - storage migration + live migration for local
  disks
   - add hyperv cpu features for windows  2008
   - add spice console support (openstack has added
  spice-html5 console, so it should work now)
   - add support for qemu guest agent
   - finish cleanup of hotplug/unplug with
  cooperation of guest agent
   - add dynamic nic vlan change
   - add dynamic nic rate limit change
  
   What do you think about this ? Other ideas are
  welcome :)
  
  Sounds great.
  
  - Add vCPU, vRAM and bridges (networks) to pools
  - Improve backup handling. Like mentioned before on
  the forums backup
  name based solely on VMID is confusing. Needs somekind
  of notes
  included in the backup
  ___
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
  ___
  pve-devel mailing list
  pve-devel@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

  

  
  

  
  
  
  
  ___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



  


Re: [pve-devel] network card MBit/s vs MB/s

2013-02-17 Thread Andrew Thrift

I agree.

All networking measurements are in bits/s: Megabits, Kilobits.

Having Proxmox's in MB/s is inconsistent with the way the rest of the 
industry measures network bandwidth.
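
For example, with the current MB/s semantics, capping a NIC at 100 Mbit/s means
entering the converted value (a sketch; syntax from memory, treat as
illustrative):

qm set 101 -net0 virtio,bridge=vmbr0,rate=12.5

which is exactly the kind of mental arithmetic that trips people up.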





Regards,





Andrew





On 2/17/2013 8:13 PM, Stefan Priebe - Profihost AG wrote:

Right, but people are used to using Mbit/s for everything network related. So I 
think a lot of people who want to use it will use it incorrectly.

Stefan

Am 17.02.2013 um 07:43 schrieb Dietmar Maurer diet...@proxmox.com:


Nobody has an opinion about that? It has happened to me several times that I
put in 100MB/s when I wanted 100Mbit/s.

I guess not many people use that feature.
But I prefer MB, because all other things (beside network) are measured in MB,
for example HD or backup speed, disk size, ...


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] vm migration + storage migration with ndb workflow notes

2013-01-18 Thread Andrew Thrift
As a user, our main use case for live storage migration would be migrating 
between local disk/RBD/iSCSI. 

We would also use the ability to migrate storage across the network. 

This could for example be used to migrate from a SAN in one datacentre to a 
SAN in a remote datacentre.


Regards,


Andrew


Dietmar Maurer diet...@proxmox.com wrote:

 I have finally understood the whole workflow for migrating a vm + its storage
 at the same time.
 
 (I don't have too much time to work on this; maybe it's better to have stable
 working disk clone/copy/live mirror code first.)

Yes, I would also like to do that after copy/clone.

Besides, I think the whole thing gets to complex.

We only want to be able to migrate local disks (changing storage is not
really needed?).
I guess that would make 'remote' migrate much easier?



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel