---
PVE/QemuServer.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 047c5a4..eab1381 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5234,10 +5234,10 @@ sub restore_update_config_line {
} elsif ($line =~
they are mostly intended to save space as the "new theme", if it
gets applied, takes up space like it's worth pure gold.
Paddings are made smaller on buttons, tabs and grids.
Also the tree receives sane padding.
Further fix the height of the top info panel (the one with the logo,
PVE
This is only reached if the $line from which $virtdev
originates matches, and the part in $virtdev can never be
false then.
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index eab1381..708ec27 100644
---
If someone makes a snapshot named 'vzdump', it would get deleted
when using vzdump in snapshot mode, since we use that name for
making a temporary one.
Signed-off-by: Dominik Csapak
---
src/PVE/API2/LXC/Snapshot.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git
applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
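The collision described in the snapshot patch above comes down to a reserved-name check. A minimal sketch of the idea in Python (the function name and error type here are assumptions for illustration; the actual three-line patch lives in src/PVE/API2/LXC/Snapshot.pm):

```python
# Names reserved for internal use: vzdump's snapshot mode creates a
# temporary snapshot called 'vzdump' and deletes it afterwards.
RESERVED_SNAPSHOT_NAMES = {"vzdump"}

def check_snapshot_name(name: str) -> None:
    """Reject snapshot names that a later backup run would delete."""
    if name in RESERVED_SNAPSHOT_NAMES:
        raise ValueError(f"snapshot name '{name}' is reserved for internal use")

check_snapshot_name("before-upgrade")  # a regular name passes
```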
> adding a flag for usb devices (usb3), if this is set to yes,
> add an xhci controller and attach the specified devices to it
applied, but please can we use PVE::JSONSchema::parse_property_string()
for the config parser?
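For context, a property string as used by such config options ('host=1-1,usb3=yes') parses roughly like this. A minimal Python sketch of the idea only, not the actual PVE::JSONSchema::parse_property_string implementation:

```python
def parse_property_string(value: str, default_key: str = "host") -> dict:
    """Split a 'key=value,key=value' option string into a dict; a bare
    value (no '=') is assigned to the default key, mirroring how a
    schema can mark one property as the default."""
    props = {}
    for part in value.split(","):
        if "=" in part:
            key, val = part.split("=", 1)
        else:
            key, val = default_key, part
        props[key] = val
    return props

print(parse_property_string("host=1-1,usb3=yes"))  # → {'host': '1-1', 'usb3': 'yes'}
```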
> On February 10, 2016 at 1:47 PM Wolfgang Bumiller
> wrote:
>
>
> On Mon, Dec 07, 2015 at 04:17:06PM +0100, Wolfgang Bumiller wrote:
> > > On December 7, 2015 at 4:10 PM Dietmar Maurer wrote:
> > >
> > >
> > > I am quite unsure about this one.
applied - but the logic in snapshot_delete still looks wrong to me. The
qemu-server
implementation looks more reasonable to me.
> Dietmar Maurer hat am 11. Februar 2016 um 06:56
> geschrieben:
>
>
> applied - but the logic in snapshot_delete still looks wrong to me. The
> qemu-server
> implementation looks more reasonable to me.
I will further harmonize the code paths when implementing mountpoint
applied
Since VZDump was the only user of lock_aquire and
lock_release, and does not actually need this split,
we can merge lock_aquire and lock_release into
lock_container.
---
This allows us to drop the locking code from
lock_container altogether as soon as the
refcounting patch for
applied
read_etc_network_interfaces used the content of
/proc/net/if_inet6 to decide whether an interface's state is
"active", which means an interface is only active when it
has an ipv6 address, thus using net.ipv6.conf.*.disable_ipv6
on an interface will cause it to show as inactive in the web
applied
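A more robust activity check reads the interface's own state instead of its address list. A sketch of the idea (the actual patch may consult different sources):

```python
def interface_is_active(ifname: str) -> bool:
    """Judge 'active' from the kernel's operstate rather than from the
    presence of an IPv6 address in /proc/net/if_inet6, so that setting
    net.ipv6.conf.<ifname>.disable_ipv6 cannot make a running
    interface appear inactive."""
    try:
        with open(f"/sys/class/net/{ifname}/operstate") as f:
            # 'unknown' is what e.g. the loopback device reports while up
            return f.read().strip() in ("up", "unknown")
    except OSError:
        return False  # no such interface
```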
Signed-off-by: Thomas Lamprecht
---
data/PVE/Cluster.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index 27e248f..5a93f79 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -1327,6 +1327,13 @@ my
This patch implements the use of the new max_workers setting from
the datacenter.cfg.
Adding a 'get_max_worker' method to the environment allows us to
do that and to replace 'can_fork' with that method.
can_fork isn't needed anymore as get_max_worker may simply return 0
to signal that the
If set, limit the maximal worker count to the new datacenter.cfg
setting 'max_workers'.
For stopall we prefer this over the CPU count if it's set.
For migrateall we prefer the parameter but now allow omitting
it, in which case we use the new setting if set.
If both are not set we throw an
applied, thanks.
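The precedence rules described above can be sketched as follows (the function name and signature are assumptions, not the patch's actual interface):

```python
def resolve_worker_count(action, parameter=None, max_workers=None, cpu_count=4):
    """Pick the worker count for a mass action.

    stopall: prefer the datacenter 'max_workers' setting, else fall
    back to the CPU count.  migrateall: prefer the explicit parameter,
    then 'max_workers'; with neither set, raise an error."""
    if action == "stopall":
        return max_workers if max_workers else cpu_count
    if action == "migrateall":
        if parameter:
            return parameter
        if max_workers:
            return max_workers
        raise ValueError("need a maxworkers parameter or the "
                         "datacenter 'max_workers' setting")
    raise ValueError(f"unknown mass action '{action}'")
```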
applied
The new setting 'max_workers' describes how many workers should be started
on a 'mass action' like 'stopall', 'migrateall' or the actions from the
ha-manager.
This adds the option in (1 cluster)
Implements it for 'migrateall' and 'stopall' (2 manager)
and also for the ha-manager (3 ha-manager)
It uses PVE::INotify::inotify_init() in run_cli_handler().
---
Note that it also uses PVE::RPCEnvironment in run_cli_handler() but
adding a `use` clause for this would make the entire package (not
just that function) depend on the pve-access-control and thereby
introduce a circular dependency.
quotactl(2) requires a path to the device node to work, which
means we need to expose them to the container; luckily it
doesn't need r/w access to the device. Also, loop devices
will no longer detach from the images while they are
still mounted in the monitor's mount namespace (which is
unshared
adding a flag for usb devices (usb3), if this is set to yes,
add an xhci controller and attach the specified devices to it
Signed-off-by: Dominik Csapak
---
PVE/QemuServer.pm | 35 +++
1 file changed, 31 insertions(+), 4 deletions(-)
diff
adding a flag for usb devices (usb3), if this is set to yes,
add an xhci controller and attach the specified devices to it
Signed-off-by: Dominik Csapak
---
changes since v1:
use the $use_usb3 variable for adding the xhci controller
PVE/QemuServer.pm | 35
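On the QEMU command line the flag's effect looks roughly like the following sketch. The controller model, ids and the hostbus/hostport addressing are assumed spellings for illustration, not lines taken from the patch:

```python
def usb_device_args(host_ports, usb3=False):
    """Build QEMU '-device' arguments for passed-through USB devices
    given as 'bus-port' strings.  With usb3 set, one xhci controller
    is added and every device is attached to its bus instead of the
    default controller."""
    args = []
    if usb3:
        args += ["-device", "nec-usb-xhci,id=xhci"]
    for i, hostport in enumerate(host_ports):
        bus, port = hostport.split("-", 1)
        dev = f"usb-host,hostbus={bus},hostport={port},id=usb{i}"
        if usb3:
            dev += ",bus=xhci.0"
        args += ["-device", dev]
    return args
```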
Set up the unfreeze before trying to freeze; otherwise an aborted
or failed lxc-freeze will not be reversed by our error
handling, leaving the container in a (partially) frozen
state.
Make snapshot_create failure handling more closely resemble
the QemuServer codebase and prepare for future code
Now using comment markers to mark the /etc/hosts section
managed by PVE. The rest of the file is left untouched.
If a localhost entry is missing it'll be included as part of
the managed section (and can be moved out by the user).
If no section is found then one will be inserted after the
last
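The marker scheme can be sketched like this (the marker strings below are made up for illustration; the patch's actual markers and insertion point may differ):

```python
BEGIN_MARKER = "# --- BEGIN PVE MANAGED ---"
END_MARKER = "# --- END PVE MANAGED ---"

def update_managed_section(content: str, managed: str) -> str:
    """Replace only the text between the markers, leaving the rest of
    /etc/hosts untouched; if no section exists yet, append one."""
    lines = content.splitlines()
    if BEGIN_MARKER in lines and END_MARKER in lines:
        start = lines.index(BEGIN_MARKER)
        end = lines.index(END_MARKER)
        lines[start + 1:end] = managed.splitlines()
    else:
        lines += [BEGIN_MARKER, *managed.splitlines(), END_MARKER]
    return "\n".join(lines) + "\n"
```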
On Mon, Dec 07, 2015 at 04:17:06PM +0100, Wolfgang Bumiller wrote:
> > On December 7, 2015 at 4:10 PM Dietmar Maurer wrote:
> >
> >
> > I am quite unsure about this one. Do we want to set title on client side,
> > or server side? Why do we want to mix styles?
>
>
This resource lets us test a defined failure behaviour of services.
Through the VMID we define how it should behave, with the following
rules:
When the service has the SID "fa:abcde" the digits a - e mean:
a - no meaning but can be used for differentiating similar resources
b - how many tries
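Only the first two digits are documented above before the text cuts off; a sketch decoding just those (the key names and validation are assumptions):

```python
def parse_failure_sid(sid: str) -> dict:
    """Decode a failure-test service ID of the form 'fa:abcde'.
    Digit 'a' only differentiates similar resources, digit 'b' is the
    number of tries; the remaining digits are kept undecoded here."""
    kind, digits = sid.split(":", 1)
    assert kind == "fa" and len(digits) == 5 and digits.isdigit()
    a, b, c, d, e = (int(ch) for ch in digits)
    return {"differentiator": a, "tries": b, "rest": (c, d, e)}
```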
This is mainly for formatting (prettiness) purposes, but it helps when
going through the output of a test.
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/Sim/TestEnv.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/HA/Sim/TestEnv.pm b/src/PVE/HA/Sim/TestEnv.pm
Otherwise a few branches would not be taken and the behaviour was not
quite straightforward.
Only increment tries if we really retry and log retries
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/LRM.pm | 5 -
src/PVE/HA/Manager.pm | 22 ++
2
On 02/10/2016 01:47 PM, Wolfgang Bumiller wrote:
On Mon, Dec 07, 2015 at 04:17:06PM +0100, Wolfgang Bumiller wrote:
On December 7, 2015 at 4:10 PM Dietmar Maurer wrote:
I am quite unsure about this one. Do we want to set title on client side,
or server side? Why do we
On 02/10/2016 02:13 PM, Thomas Lamprecht wrote:
This resource lets us test a defined failure behaviour of services.
Through the VMID we define how it should behave, with the following
rules:
When the service has the SID "fa:abcde" the digits a - e mean:
a - no meaning but can be used for