>>OK, multicast traffic may still be hindered when sharing a network with
>>heavy users (e.g. VM storage), even if the network itself is not saturated.
>>A second totem ring through the redundant ring protocol (rrp) in passive
>>mode could boost performance, as it almost doubles the
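The second-ring setup mentioned above could look roughly like this in corosync.conf — a minimal sketch, assuming corosync 2.x; the network addresses are placeholders, not values from this thread:

```
totem {
  version: 2
  rrp_mode: passive          # passive: rings are used alternately per message
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.0.0 # primary cluster network (placeholder)
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.0.0   # redundant ring on a separate network (placeholder)
  }
}
```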
---
pct.adoc | 55 +++
1 file changed, 55 insertions(+)
diff --git a/pct.adoc b/pct.adoc
index 40028b7..51b15cc 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -458,9 +458,64 @@ include::pct-network-opts.adoc[]
Backup and Restore
--
---
pct.adoc | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/pct.adoc b/pct.adoc
index 2b72f96..40028b7 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -455,6 +455,13 @@ they can contain the following setting:
include::pct-network-opts.adoc[]
+Backup and Restore
>>Note that I have around 1000 VMs, so I don't know the impact of the number
>>of messages/s.
a simple tcpdump gives me an average of:
udp/5404: 500 packets/s
udp/5405: 1300 packets/s
----- Original Message -----
From: "Alexandre Derumier"
To: "pve-devel"
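The quoted rates can be turned into a rough bandwidth figure — a back-of-envelope sketch; the 300-byte average frame size is an assumption, not something measured in the thread:

```python
# Back-of-envelope estimate of corosync traffic from the quoted tcpdump rates.
pps_5404 = 500          # udp/5404 (rate quoted in the thread)
pps_5405 = 1300         # udp/5405 (rate quoted in the thread)
avg_frame_bytes = 300   # assumed average on-wire frame size, not measured

total_pps = pps_5404 + pps_5405
mbit_per_s = total_pps * avg_frame_bytes * 8 / 1_000_000
print(total_pps, round(mbit_per_s, 2))  # modest bandwidth, but a steady packet load
```

The point is that the bandwidth is small; it is the constant per-packet load and latency sensitivity that matter when the network is shared.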
On 09/21/2016 10:51 AM, Alexandre DERUMIER wrote:
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c3a53c9..1244c02 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4605,7 +4605,7 @@ sub vm_commandline {
my $cmd = config_to_command($storecfg,
applied
_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
the package has been removed from the list of lxcfs dependencies
since 0.12-pve1
---
PVE/API2/APT.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/APT.pm b/PVE/API2/APT.pm
index cf63c4c..3672053 100644
--- a/PVE/API2/APT.pm
+++ b/PVE/API2/APT.pm
@@ -534,7 +534,7 @@
---
Makefile | 1 +
index.adoc | 1 +
pve-admin-guide.adoc | 2 ++
pve-improve.adoc | 37 +
4 files changed, 41 insertions(+)
create mode 100644 pve-improve.adoc
diff --git a/Makefile b/Makefile
index 0acaedf..a8205b0 100644
---
This is a complementary fix for #1105 (Create Linux VM Wizard: use scsi
as default bus/device) and adds some logic to the list of controllers
presented in the ControllerSelector combo box.
Since we can have IDE, SCSI, Virtio(blk) as a controller during installation,
based on OS detection and
On 20 September 2016 at 07:43, Alexandre DERUMIER
wrote:
> One thing that I think it could be great,
>
> is to be able to have unique vmids across different Proxmox clusters.
>
> maybe with a letter prefix, for example (cluster1: vmid: a100, cluster2:
> vmid: b100).
>
> Like
> On September 21, 2016 at 1:25 AM Alexandre DERUMIER
> wrote:
>
>
> Another question about my first idea (replace corosync),
>
> is it really difficult to replace corosync with something else?
>
> Sheepdog storage, for example, has support for corosync and zookeeper.
On 09/21/2016 08:50 AM, Alexandre DERUMIER wrote:
Forgot to mention that consul supports multiple clusters and/or multi
center clusters out of the box.
Yes, I read the docs yesterday. Seems very interesting.
The most work would be to replace pmxcfs with the consul KV store. I have seen
some consul
> About corosync scaling,
> I found a discussion about the implementation of satellite nodes
>
> http://discuss.corosync.narkive.com/Uh97uGyd/rfc-extending-corosync-to-high-node-counts
Sure, such things can extend the node count of a single cluster. But I am not
100% sure if that solves all
> On Wed, 21 Sep 2016 01:45:18 +0200
> Michael Rasmussen wrote:
>
> > https://github.com/hashicorp/consul
> >
> Forgot to mention that consul supports multiple clusters and/or multi
> center clusters out of the box.
I would like to know how long it takes to synchronize data
>>@Alexandre, you say that with 16 nodes the cluster is quite at its maximum;
>>can I get some more info from you, as I currently do not have the
>>hardware to
>>test this :)
>>
>>Do you use IGMP snooping/queriers?
>>On which network does corosync communicate, an independent one? And how fast
>>is it?
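On the IGMP snooping question — one way to check a Linux bridge from a node is to read sysfs; a sketch, assuming the standard bridge sysfs layout and Proxmox's usual default bridge name vmbr0:

```python
# Read the IGMP snooping flag of a Linux bridge via sysfs.
# vmbr0 (Proxmox's usual bridge name) and the sysfs path are assumptions.
from pathlib import Path

def igmp_snooping_state(bridge="vmbr0"):
    p = Path(f"/sys/class/net/{bridge}/bridge/multicast_snooping")
    if not p.exists():
        return None                    # bridge not present on this host
    return int(p.read_text().strip())  # 1 = snooping enabled, 0 = disabled

print(igmp_snooping_state())
```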
> Note that for scaling, zookeeper, consul, ... have some kind of master nodes
> for the quorum, and client nodes (same as corosync satellites).
> I don't think it's technically possible to scale a full mesh of master nodes
> to lots of nodes.
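The scaling intuition in that quote can be made concrete with a little arithmetic — an illustrative sketch only, not corosync's actual membership protocol:

```python
# Full-mesh membership traffic grows with the number of pairwise links
# (quadratic), while the quorum majority only grows linearly, so every
# additional voting node gets disproportionately expensive.
def full_mesh_links(n):
    return n * (n - 1) // 2

def quorum_majority(n):
    return n // 2 + 1

for n in (3, 16, 32):
    print(n, full_mesh_links(n), quorum_majority(n))
```

Going from 16 to 32 nodes roughly quadruples the pairwise links (120 to 496) while the majority threshold merely doubles, which is why master/satellite splits are attractive at high node counts.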
Exactly. Corosync is much better/faster and can
>>Exactly. Corosync is much better/faster and can replicate to more nodes. So I
>>would prefer to keep the
>>better technology (corosync), and improve it.
>>
>>Maybe it is even possible to implement the satellite code directly inside
>>pmxcfs...
I agree too. Better to improve the wheel than