>>We really need a stop hook... but the question there is *how*...
>>(Even if we added a hook to kvm, it would be nice to have something for
>>the case where it gets SIGKILLed).
For a clean qemu stop (or a shutdown inside the guest), we could use QMP events.
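To illustrate the idea: QMP emits events such as SHUTDOWN as JSON lines on the monitor socket, so a stop hook could watch for them. This is only a sketch of the event-parsing part (the socket handling and hook invocation are left out), and as noted above it would not cover the SIGKILL case:

```python
import json

# QMP events a stop hook would care about (real QMP event names).
STOP_EVENTS = {"SHUTDOWN", "POWERDOWN"}

def parse_qmp_event(line):
    """Parse one line of QMP output; return the event name or None.

    QMP events look like: {"event": "SHUTDOWN", "timestamp": {...}}
    """
    try:
        msg = json.loads(line)
    except ValueError:
        return None
    return msg.get("event")

def is_stop_event(line):
    """True if this QMP line signals that the VM has stopped."""
    return parse_qmp_event(line) in STOP_EVENTS
```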
>>But parse_numa is still in QemuServer, did you mean to include another
>>hunk in this patch moving this function?
Ok, my bad, here is a fix (better to keep it in QemuServer, like the other parse subs).
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -187,7 +187,7 @@ sub config {
>>When adding more numaX entries to the VM's config than the host has, this
>>now produces a 'Use of uninitialized value' error.
>>Better check for whether /sys/devices/system/node/node$numanode exists
>>and throw a useful error.
>>But should this even be fixed to host nodes? Without hugepages
>>When not specifying numaX entries we run into this code using the
>>default $dimm_size of 512, which won't work with 1G pages. Iow this
>>config without any `numa0`, `numa1`, ... entries:
>> hugepages: 1024
>> numa: 1
>>will error at this point about wrong memory sizes. We could probably
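The two checks asked for above (verify the host NUMA node exists, and verify the per-node memory is compatible with the hugepage size) could look roughly like this. A hedged sketch only: function names are illustrative, the real code lives in PVE/QemuServer/Memory.pm:

```python
import os

HOST_NODE_PATH = "/sys/devices/system/node"  # where Linux exposes NUMA nodes

def check_numa_node(numanode):
    """Throw a useful error instead of 'Use of uninitialized value'."""
    path = os.path.join(HOST_NODE_PATH, "node%d" % numanode)
    if not os.path.isdir(path):
        raise ValueError("host NUMA node %d does not exist" % numanode)

def check_dimm_size(memory_mb, hugepage_mb):
    """Per-node memory must be a multiple of the hugepage size;
    a default 512 MB dimm cannot be backed by 1 GB pages."""
    if memory_mb % hugepage_mb != 0:
        raise ValueError(
            "memory size %d MB is not a multiple of hugepage size %d MB"
            % (memory_mb, hugepage_mb))
```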
Hi,
I think it would be great to add support for online migration
(live migration + storage migration) in the future.
I sent the workflow last year, but never had time to implement it.
----- Original Message -----
From: "Wolfgang Link"
To: "pve-devel"
It is now possible to migrate LVM-thin volumes offline from one node to
another.
This also works for LVM volumes used by Qemu.
---
PVE/Storage.pm | 33 +
1 file changed, 33 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 1192ce3..907e3b3 100755
---
---
src/PVE/API2/LXC.pm | 21 +-
src/PVE/LXC.pm | 62 +
2 files changed, 72 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 1dd9405..8c1bf4a 100644
--- a/src/PVE/API2/LXC.pm
+++
Now it is possible to move the volume to another storage.
This only works when the CT is off, to keep the volume consistent.
---
src/PVE/API2/LXC.pm | 132
src/PVE/CLI/pct.pm | 1 +
2 files changed, 133 insertions(+)
diff --git
---
src/PVE/API2/LXC.pm | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 9c932a3..71828c9 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1107,12 +1107,6 @@ __PACKAGE__->register_method({
"you clone a
If we make a linked clone, the CT must be a template, so it is not allowed to run.
If we make a full clone, it is safer to have the CT offline.
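The two rules above could be sketched as a small guard. The function name and signature are hypothetical, not the actual LXC.pm code:

```python
def check_clone_allowed(is_template, running, full_clone):
    """Hypothetical guard mirroring the clone rules described above."""
    if full_clone:
        # Full clone: safer to have the CT offline.
        if running:
            raise RuntimeError("full clone requires the CT to be stopped")
    else:
        # Linked clone: source must be a template, which never runs.
        if not is_template:
            raise RuntimeError("linked clone requires a template")
```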
---
src/PVE/API2/LXC.pm | 11 +++
src/PVE/LXC.pm | 4 ++--
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git
>>General question:
>>Can you give an example of a situation where this boosts
>>performance?
I'll send benchmark results soon. (Note that hugepages are required for DPDK
too.)
>>Note that vm_stop_cleanup() literally only happens when you stop the VM
>>via CLI or GUI, not when the VM
---
PVE/CLI/pveceph.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index ce991c1..2783f24 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -73,8 +73,8 @@ __PACKAGE__->register_method ({
properties => {
Ceph changed the user name from root to ceph,
and systemd is now used for startup instead of sysvinit.
---
PVE/API2/Ceph.pm | 9 +
1 file changed, 9 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 58e5b35..4f85860 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@
Since the Infernalis release, Ceph is managed by systemd and uses ceph as the user and
group.
---
PVE/CephTools.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/CephTools.pm b/PVE/CephTools.pm
index c7749bb..ddf4777 100644
--- a/PVE/CephTools.pm
+++ b/PVE/CephTools.pm
@@ -353,4
General question:
Can you give an example of a situation where this boosts
performance?
Code comments inline:
On Sat, Jun 04, 2016 at 10:19:56AM +0200, Alexandre Derumier wrote:
> changelog : rebase on last git
>
> vm configuration
>
> hugepages: (any|2|1024)
>
> any:
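Based on the option format quoted above, validating the `hugepages` VM option could be sketched like this. The function name is illustrative; the sizes correspond to the 2 MB and 1 GB page sizes on x86_64:

```python
VALID_HUGEPAGE_SIZES_MB = (2, 1024)  # x86_64 hugepage sizes in MB

def parse_hugepages(value):
    """Parse the 'hugepages' VM option: 'any', '2' or '1024'.

    'any' lets the code pick whatever hugepage size is available;
    a number requests that specific size in MB.
    """
    if value == "any":
        return "any"
    size = int(value)
    if size not in VALID_HUGEPAGE_SIZES_MB:
        raise ValueError("invalid hugepage size: %s" % value)
    return size
```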
---
rebased v2 with v1 applied ;)
src/PVE/LXC/Config.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index d80dae0..c067e7a 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -551,9 +551,10 @@ my $mp_desc =
> +By default (if no mountpoint option is overridden on the command line), `pct
> +restore` will recreate volume mountpoints that are found in the backed up
> +configuration. All recovered mountpoints will be created on the same storage
> +(provided via `-storage`, defaulting to `local`),
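The restore behaviour described in the documentation hunk above can be summarized in a few lines. This is purely illustrative (names and data shapes are made up, not the actual pct internals): command-line overrides win, everything else is recreated on the `-storage` target, defaulting to `local`:

```python
def restore_mountpoint_storage(backup_mps, overrides, storage="local"):
    """Decide where each backed-up mountpoint is recreated on restore.

    backup_mps: {name: config-dict} from the backed up configuration.
    overrides:  {name: config-dict} given on the command line; these win.
    storage:    target for all non-overridden mountpoints (default 'local').
    """
    result = {}
    for name, mp in backup_mps.items():
        if name in overrides:
            result[name] = overrides[name]
        else:
            result[name] = dict(mp, storage=storage)
    return result
```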
I
applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
applied
applied
---
Changed:
- WARNING to NOTE
- include symlink info in normal description as well
src/PVE/LXC/Config.pm | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 6b4ec4c..c067e7a 100644
--- a/src/PVE/LXC/Config.pm
+++
---
src/PVE/LXC/Config.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 6b4ec4c..d80dae0 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -208,7 +208,7 @@ my $rootfs_desc = {
},
ro => {
---
pct.adoc | 6 ++
1 file changed, 6 insertions(+)
diff --git a/pct.adoc b/pct.adoc
index dc9f446..014e48d 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -382,6 +382,12 @@ mounting mechanisms or storage technologies, it is
possible to
establish the FUSE mount on the Proxmox host and use a bind
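As an example of the pattern described in this hunk (paths and the container ID are hypothetical): mount the FUSE filesystem on the host, e.g. `sshfs user@remote:/data /mnt/fuse-data`, then pass it into the container as a bind mount with `pct set 100 -mp0 /mnt/fuse-data,mp=/mnt/data`, which results in a config entry like:

```
mp0: /mnt/fuse-data,mp=/mnt/data
```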
---
pct.adoc | 53 +
1 file changed, 53 insertions(+)
diff --git a/pct.adoc b/pct.adoc
index 014e48d..38d5507 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -452,6 +452,45 @@ and destroy containers, and control execution (start,
stop, migrate,
...).
---
vzdump.adoc | 3 +++
1 file changed, 3 insertions(+)
diff --git a/vzdump.adoc b/vzdump.adoc
index 3276a32..00f4aa6 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -93,6 +93,9 @@ NOTE: `snapshot` mode requires that all backed up volumes are
on a storage that
supports snapshots. Using the
applied
applied
applied
applied
applied
On Tue, Jun 07, 2016 at 09:10:07AM +0200, Wolfgang Link wrote:
> It is now possible to migrate LVM-thin volumes offline from one node to
> another.
> This also works for LVM volumes used by Qemu.
> ---
> PVE/Storage.pm | 35 +++
> 1 file changed, 35 insertions(+)
>
> diff
It is now possible to migrate LVM-thin volumes offline from one node to
another.
This also works for LVM volumes used by Qemu.
---
PVE/Storage.pm | 35 +++
1 file changed, 35 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 1192ce3..22c1a4c 100755
---
Offline migration is now possible on LVM and LVM-thin storages.
---
PVE/QemuMigrate.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 7b9506f..2fd307c 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -293,7 +293,9 @@ sub
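The check this patch adjusts in QemuMigrate.pm can be sketched as follows. The storage-type list and function are purely illustrative (the real set of supported types lives in the Perl code): LVM and LVM-thin volumes may only be migrated while the VM is offline:

```python
# Local storage types assumed migratable in this sketch; illustrative only.
OFFLINE_MIGRATABLE = {"dir", "zfspool", "lvm", "lvmthin"}

def check_volume_migration(storage_type, vm_running):
    """Reject migrations the sketch considers unsupported."""
    if vm_running and storage_type in {"lvm", "lvmthin"}:
        raise RuntimeError(
            "can't live-migrate local %s volume" % storage_type)
    if storage_type not in OFFLINE_MIGRATABLE:
        raise RuntimeError(
            "storage type '%s' does not support migration" % storage_type)
```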
---
Note: this could be used in CephTools / pveceph as well if necessary
PVE/Storage/RBDPlugin.pm | 30 ++
1 file changed, 30 insertions(+)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 967ee04..2b4ff9c 100644
--- a/PVE/Storage/RBDPlugin.pm
otherwise mapping those images will fail. Disabling the
features only needs to be done once per image, so it makes
sense to do this when creating the images.
Unfortunately, the command does not work in hammer, so
it needs a version check for jewel or higher.
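The version check described above only needs to compare the major version: hammer is 0.94.x and jewel is 10.x, so `rbd feature disable` is assumed usable from major version 10 on. A minimal sketch (the function names are illustrative, not the RBDPlugin.pm code):

```python
def ceph_version_at_least(version_str, major):
    """True if a ceph version string like '10.2.3' is at least `major`."""
    first = int(version_str.split(".")[0])
    return first >= major

def rbd_feature_disable_supported(version_str):
    """'rbd feature disable' exists from jewel (10.x) on, not in hammer."""
    return ceph_version_at_least(version_str, 10)
```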
---
PVE/Storage/RBDPlugin.pm | 17