also see:
https://www.freedesktop.org/wiki/Software/systemd/writing-vm-managers/
> On June 1, 2016 at 6:22 PM Dietmar Maurer wrote:
>
>
> ---
> Changes since v2:
> removed leftovers from old patch
> fixed the option documentation (also a leftover from a patch)
still a bit ugly...
What about using the systemd D-Bus interface inside kvm?
https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/
>>@Alexandre: do you really need the other direction?
No. Only old->new.
I don't think that users try to migrate from new->old anyway.
----- Original message -----
From: "dietmar"
To: "Thomas Lamprecht", "pve-devel"
- if (res && res[1]) {
+ if (res && res[1] && Ext.isArray(me.items)) {
+     me.items.forEach(function(item) {
+         if (item.itemId === res[1]) {
+             activeTab = res[1];
+         }
+     });
+ }
> > To make it also work between different installed qemu-server
> > versions we would need to give the destination node an additional
> > flag that we want to use a TCP tunnel.
> >
> > @Dietmar should that be done? I mean if they update qemu-server it works
> > without it also and as a
> - if (res && res[1]) {
> + if (res && res[1] && Ext.isArray(me.items)) {
> + me.items.forEach(function(item) {
> + if (item.itemId === res[1]) {
> + activeTab = res[1];
> + }
> +
applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
applied
when switching from a vm to a template, if you
have a tab selected which does not exist in a template
(for example console or task history) you break
the site.
this patch checks if the wanted tab actually exists,
and leaves it on the default (the first) when it does not
Signed-off-by: Dominik Csapak
this patch lets the graphs flow if you have enough
horizontal space
Signed-off-by: Dominik Csapak
---
this looks a bit weird on nodes because of the
long status output, but some people requested this and
i want to redo the whole status/notes area anyway
since we cannot create templates with existing snapshots,
and we cannot take snapshots of templates, showing
the tab on templates makes no sense
Signed-off-by: Dominik Csapak
---
www/manager6/qemu/Config.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
jslint does not like mixing statements and function calls
Signed-off-by: Dominik Csapak
---
www/manager6/panel/InputPanel.js | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/www/manager6/panel/InputPanel.js b/www/manager6/panel/InputPanel.js
On 06/01/2016 03:37 PM, Alexandre DERUMIER wrote:
>>We could actually support the old one for some time, even if it is broken
>>(not our fault), as we see if we should open a tunnel or not, i.e. if the
>>raddr is
>>tcp and localhost then open a tcp tunnel like we did; if it's tcp and not
>>localhost it's
>>insecure, and if it's unix then do like in
Hi,
On 06/01/2016 02:50 PM, Alexandre DERUMIER wrote:
> Hi,
>
> I haven't read the whole patch series,
>
> but does it break migration from oldserver to newserver (with new code) ?
Not with insecure migrations. But yes, with secure ones this would be
the case; they would not crash but simply not
If the failure policy triggered more than 2 times we used an
already tried node again, even if there were other untried nodes.
This does not make real sense, as when a service failed to start on
a node a short time ago it will probably also fail now (e.g. storage is
offline), whereas an untried
Instead of simply counting up an integer on each failed relocation
trial, record the already tried nodes. We still have the try count
through the size of the array, so no information is lost and there is
no behavioural change.
Use this for now to log on which nodes we failed to recover; may be
useful for an
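The idea can be sketched like this (plain JavaScript with invented names; the actual ha-manager code is Perl):

```javascript
function recordFailedRelocation(serviceState, node) {
    // instead of a bare counter, remember every node we already tried
    serviceState.triedNodes = serviceState.triedNodes || [];
    serviceState.triedNodes.push(node);
}

function relocationTryCount(serviceState) {
    // the try count is simply the array length, so no information is lost
    return (serviceState.triedNodes || []).length;
}

function pickNextNode(serviceState, onlineNodes) {
    // prefer a node we have not tried yet; fall back to any online node
    var tried = serviceState.triedNodes || [];
    var untried = onlineNodes.filter(function (n) {
        return tried.indexOf(n) === -1;
    });
    return untried.length > 0 ? untried[0] : onlineNodes[0];
}
```

The array also doubles as the log of nodes on which recovery failed.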
Fencing is something which should not happen often in the real world
and most of the time has a really bad cause, thus send an email when
starting to fence a node and on success to root@localhost to inform
the cluster admin of said failures so they can check the hardware and
cluster status as soon as
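A minimal sketch of the two notifications described above (invented names and subjects; the real ha-manager code is Perl and may word things differently):

```javascript
// build a mail for the two fencing events: start and success
function fenceNotification(node, event) {
    var subjects = {
        start: 'fencing of node ' + node + ' started',
        success: 'fencing of node ' + node + ' succeeded',
    };
    return {
        to: 'root@localhost', // the cluster admin's local mailbox
        subject: subjects[event],
        body: 'Please check hardware and cluster status of node ' + node + '.',
    };
}
```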
avoids strange errors like "could not open dir/group.tmp.PID"
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Sim/Hardware.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index a212671..be1037d 100644
---
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Env.pm | 6 ++
src/PVE/HA/Env/PVE2.pm | 9 +
src/PVE/HA/Sim/Env.pm | 7 +++
3 files changed, 22 insertions(+)
diff --git a/src/PVE/HA/Env.pm b/src/PVE/HA/Env.pm
index c7537b1..55f6684 100644
---
Else the regression tests produce indeterministic output, as the
hashes would otherwise be traversed in random order. It makes no
real difference for the PVE2 environment, so just sort the keys when
we add them to the cluster or spawn resource agent workers to avoid
that problem.
Signed-off-by: Thomas
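In Perl, hash key order is randomized per process, so any test output derived from hash traversal differs between runs. The fix is simply to sort the keys before iterating; a JavaScript illustration of the same idea:

```javascript
// sorting the keys makes every traversal - and thus the regression
// test output - reproducible, regardless of internal hash order
function sortedKeys(obj) {
    return Object.keys(obj).sort();
}

var workers = { node3: 'ra1', node1: 'ra2', node2: 'ra3' };
var order = sortedKeys(workers); // always ['node1', 'node2', 'node3']
```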
I picked a few patches from my feature branches which aren't
relevant to those features to get them upstream.
Patches 5 and 6 are a v2 of a previously sent series regarding
relocation policy improvements.
cheers,
Thomas
instead of defaulting to VM.Config.Options for
all options not checked separately, we
now have lists for the different config permissions
and check them accordingly.
for everything not given, we require root access.
this is important especially for usbN and hostpciN
since they can change the host
comments inline
> + } elsif ($remainingoptions->{$opt} || $opt =~
> m/^(numa|parallell|serial)\d+$/) {
parallell => parallel
And those options allow access to host HW, so we need to restrict access.
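The permission-list approach can be sketched like this (option and privilege names are illustrative, not the exact qemu-server lists):

```javascript
// map config option groups to the privilege they require
var optionPrivileges = {
    onboot: 'VM.Config.Options',
    memory: 'VM.Config.Memory',
    sockets: 'VM.Config.CPU',
    net0: 'VM.Config.Network',
};

function requiredPrivilege(opt) {
    // everything not listed - notably usbN and hostpciN, which can
    // affect the host itself - falls through to requiring root access
    return optionPrivileges[opt] || 'root';
}
```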
applied
We cannot guarantee when the SSH forward tunnel really becomes
ready. The check with the mtunnel API call did not help for this
problem, as it only checked that the SSH connection itself works and
that the destination node has quorum, but the forwarded tunnel itself
was not checked.
The Forward
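Abstractly, the fix amounts to probing the forwarded socket until it accepts instead of assuming it is up right after spawning ssh (sketch with an injected probe; the real code is Perl):

```javascript
// retry a readiness probe a bounded number of times; only report the
// tunnel as usable once a probe actually succeeded
function waitForTunnel(probe, attempts) {
    for (var i = 0; i < attempts; i++) {
        if (probe()) {
            return true; // tunnel accepted a connection
        }
    }
    return false; // never became ready within the attempt budget
}
```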
Output all errors - if any - and add some log output on which qmp
commands we run with which parameters; may be helpful when debugging
or analyzing a user's problem.
Also check if the queried status is defined, as on an error it may
not be.
Signed-off-by: Thomas Lamprecht
As we open2 it we also need to collect it to avoid zombies.
Omit the timeout parameter as it's only used once and so we can use
a static wait time.
Signed-off-by: Thomas Lamprecht
---
PVE/QemuMigrate.pm | 39 +++
1 file changed, 19 insertions(+), 20
Changes since V1:
* the help button was not hidden after switching InputPanels via
direct tab clicks. To solve this, add a listener for 'deactivate'
events and react appropriately (1/5). Since the pve Wizard sends this
event in all cases,
remove the call to hide() in the wizard (2/5)
* add
Next / OK are already displayed in blue, which is the 'call-to-action'
color we use everywhere.
To prevent stealing attention from these buttons, switch the help
button to grey
---
www/css/ext6-pve.css | 4
www/manager6/button/HelpButton.js | 4 +++-
2 files changed, 7
Inside a wizard, switching to a new tab will fire
the 'activate' event to the new tab, causing
the inputPanel of this tab to display its help in
the wizard window.
---
www/manager6/window/Wizard.js | 4
1 file changed, 4 insertions(+)
diff --git a/www/manager6/window/Wizard.js
Changes since v2:
- openssh is able to use UNIX socket forwards since 6.7 (and jessie has 6.7),
so use that instead of socat.
- split the child collection part apart as it does not have anything to do
with the tunnel problem
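For illustration, the socat-free forward boils down to an ssh `-L` with socket paths on both sides (available since OpenSSH 6.7). A hedged sketch building the argument list (paths and target are invented):

```javascript
// build ssh arguments that forward a local UNIX socket to a remote one
function tunnelArgs(localSock, remoteSock, target) {
    return [
        '-o', 'ExitOnForwardFailure=yes', // fail fast if the forward cannot be set up
        '-L', localSock + ':' + remoteSock,
        target,
    ];
}
```

Something like `spawn('ssh', tunnelArgs('/run/pve/mig.sock', '/run/pve/mig.sock', 'root@target'))` would then open the tunnel.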
---
www/manager6/window/Edit.js | 6 ++
1 file changed, 6 insertions(+)
diff --git a/www/manager6/window/Edit.js b/www/manager6/window/Edit.js
index 28067a6..b231003 100644
--- a/www/manager6/window/Edit.js
+++ b/www/manager6/window/Edit.js
@@ -238,6 +238,12 @@ Ext.define('PVE.window.Edit',
This help button is meant to be added on InputPanels, where a
link to an online documentation chapter or subchapter is available.
Clicking on the help button will open the help in a new
browser tab.
The original idea is similar to the pfSense GUI.
---
www/manager6/Makefile | 1 +
---
PVE/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b13dc71..d75ac98 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2750,6 +2750,7 @@ sub config_to_command {
push @$cmd, '-pidfile' , pidfile_name($vmid);
---
...r-cgroup-option-to-deal-with-systemd-run-.patch | 161 +
debian/patches/series | 1 +
2 files changed, 162 insertions(+)
create mode 100644
debian/patches/pve/0046-add-wait-for-cgroup-option-to-deal-with-systemd-run-.patch
diff --git
applied
On 05/31/2016 07:29 PM, Dietmar Maurer wrote:
>> Further, another problem would still be open if we tried to patch the
>> SSH forward method we currently use - which we solve for free with
>> the approach of this patch - namely the problem that the method
>> to get an available port
comments inline
> Wolfgang Link hat am 1. Juni 2016 um 09:22 geschrieben:
>
>
> With this patch it is possible to make a full clone from a running container,
> if the underlying storage provides snapshots.
> ---
> src/PVE/API2/LXC.pm | 42
comments inline
> Wolfgang Link hat am 1. Juni 2016 um 09:22 geschrieben:
>
>
> Now it is possible to move the volume to another storage.
> This works only when the CT is off, to keep the volume consistent.
> ---
> src/PVE/API2/LXC.pm | 116
>
---
src/PVE/API2/LXC.pm | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 71cf21d..6fb0a62 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1041,12 +1041,6 @@ __PACKAGE__->register_method({
"you clone a
If we make a linked clone, the CT must be a template, so it is not allowed to run.
If we make a full clone, it is safer to have the CT offline.
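The clone preconditions above can be sketched as a single check (invented helper and field names; the real checks live in src/PVE/API2/LXC.pm):

```javascript
function checkCloneAllowed(ct, fullClone) {
    if (!fullClone) {
        // linked clone: the source must be a template, and templates
        // never run, so the "not running" requirement is implied
        if (!ct.isTemplate) {
            throw new Error('linked clone requires a template');
        }
        return;
    }
    // full clone: allowed from a running CT only if the underlying
    // storage can take a snapshot; otherwise the CT must be offline
    if (ct.running && !ct.storageSupportsSnapshots) {
        throw new Error('CT must be offline or storage must support snapshots');
    }
}
```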
---
src/PVE/API2/LXC.pm | 11 +++
src/PVE/LXC.pm | 4 ++--
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git
With this patch it is possible to make a full clone from a running container,
if the underlying storage provides snapshots.
---
src/PVE/API2/LXC.pm | 42 +-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC.pm
---
src/PVE/API2/LXC.pm | 21 +-
src/PVE/LXC.pm | 62 +
2 files changed, 72 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 95932a9..0bcadc3 100644
--- a/src/PVE/API2/LXC.pm
+++
Now it is possible to move the volume to another storage.
This works only when the CT is off, to keep the volume consistent.
---
src/PVE/API2/LXC.pm | 116
src/PVE/CLI/pct.pm | 1 +
2 files changed, 117 insertions(+)
diff --git