---
src/PVE/LXC/Setup/Base.pm       | 168 --
src/test/Makefile               |   2 +
src/test/etc_hosts/Makefile     |   9 ++
src/test/etc_hosts/run_tests.pl | 138
src/test/etc_hosts/test-template
read_etc_network_interfaces uses the content of
/proc/net/if_inet6 to decide whether an interface's state is
"active", which means an interface is only considered active
when it has an IPv6 address. Setting
net.ipv6.conf.*.disable_ipv6 on an interface therefore causes
it to show as inactive in the web interface.
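A minimal Perl sketch of the problematic kind of check (names
hypothetical, not the actual read_etc_network_interfaces code):

    # /proc/net/if_inet6 lists one line per IPv6 address; the last
    # field of each line is the interface name.
    my %active;
    if (open(my $fh, '<', '/proc/net/if_inet6')) {
        while (defined(my $line = <$fh>)) {
            $active{$1} = 1 if $line =~ /(\S+)\s*$/;
        }
        close($fh);
    }
    # An interface with net.ipv6.conf.<dev>.disable_ipv6 = 1 never
    # shows up here, so it is wrongly reported as inactive.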
At this point the underlying file has already been
successfully resized, so it makes sense to reflect
that change in the config, but the guest will not see the
effect of it. However, a subsequent resize command will
further increase the size relative to the 'new' size, so
after such an error
As with qemu, the root user can use -skiplock with 'pct
start' and 'pct stop'.
This does not alter the container's lxc config, instead we
pass PVE_SKIPLOCK=1 via the environment which will be seen
from the prestart hook but not from inside the container.
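Roughly, the mechanism looks like this (a simplified sketch, not the
exact patch):

    # host side, before invoking lxc-start (simplified):
    $ENV{PVE_SKIPLOCK} = 1 if $skiplock;
    # in the prestart hook, which runs on the host and therefore sees
    # the variable, while processes inside the container do not:
    my $skip_lock_check = $ENV{PVE_SKIPLOCK} ? 1 : 0;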
---
src/PVE/API2/LXC/Status.pm | 16 +
Needs the apparmor /run -> /var/run bind mount patch in
lxc-pve.
---
src/PVE/LXC/Setup/SUSE.pm | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Setup/SUSE.pm b/src/PVE/LXC/Setup/SUSE.pm
index db8d140..13f2760 100644
--- a/src/PVE/LXC/Setup/SUSE.pm
+++ b/src/PVE/LXC/Setup/SUSE.pm
The API passes $skiplock to vm_destroy(), which performs a
check conditionally depending on the $skiplock parameter and
then simply calls destroy_vm() inside lock_config(), which
did yet another check_lock() without any way to avoid that.
Added the $skiplock parameter to destroy_vm() and removed
the
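The resulting check would look roughly like this (a sketch with a
simplified signature, based on the description above):

    sub destroy_vm {
        my ($storecfg, $vmid, $conf, $skiplock) = @_;
        # previously this ran unconditionally, with no way to skip it:
        check_lock($conf) if !$skiplock;
        # ... actual config and volume removal follows in the real code
    }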
qm list and qm status both show suspended VMs as 'running'
while the GUI's status summary shows them as 'paused'.
This patch makes 'qm status' always request the full status
and adds an optional '-full' parameter for 'qm list' to
use a full status query to include the 'paused' state. (This
is opti
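Usage would then look like this, based on the description above:

    # cheap query: suspended VMs still show as 'running'
    qm list
    # full status query also reports 'paused'
    qm list -full
    qm status <vmid>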
Fixes some issues (mount retry loops) with SUSE 13.1 and
13.2 containers.
---
Note: this patch has been accepted upstream and should be in the next release.
...armor-allow-binding-run-lock-var-run-lock.patch | 32 ++
debian/patches/series | 1 +
> On February 4, 2016 at 4:52 PM Dietmar Maurer wrote:
>
>
> > > with the new behaviour, we don't need sanitize_mountpoint anymore:
> > >
> > > Signed-off-by: Dominik Csapak
> >
> > Acked-by: Wolfgang Bumiller
>
> This looks potentially dangerous to me. Is there a reason (bug) for that
> change? Or is this just a cleanup?
> On February 4, 2016 at 4:41 PM Dietmar Maurer wrote:
>
>
> I thought that code is required to make volume resize happy?
If you mean `pct resize` then no, since it doesn't care about the
guest's /dev, after all it has to work on stopped containers, too.
Although we might want to keep the wri
> >>Sure, it can work for special cases. But in general this would be a
> >>dangerous feature. Maybe an extra flag for such cases?
>
> Ok no problem.
>
> I think we already have
>
> -force boolean
>
> Allow to migrate VMs which use local devices. Only root
>
>>Sure, it can work for special cases. But in general this would be a
>>dangerous feature. Maybe an extra flag for such cases?
Ok no problem.
I think we already have
-force boolean
Allow to migrate VMs which use local devices. Only root may
applied.
applied, thanks.
applied.
applied all 4 patches, thanks.
> for example, for usb keys, yes, they are different.
>
> But some devices like usb dongles could work without any problem.
>
> (Migrating a vm with proprietary software which needs a dongle)
Sure, it can work for special cases. But in general this would be a
dangerous feature. Maybe an extra flag for such cases?
>>That makes no sense to me. After migration the VM is connected to a totally
>>different usb device!
for example, for usb keys, yes, they are different.
But some devices like usb dongles could work without any problem.
(Migrating a vm with proprietary software which needs a dongle)
(We have
up to now we were only updating the picker selection when the picker
was created, which means that subsequent changes in the text field were
not propagated to the drop-down list.
This patch creates a private syncSelection() method which is called each
time the picker is shown.
This is roughly based
> On February 4, 2016 at 4:04 PM Alexandre Derumier wrote:
>
>
> We can migrate a vm with a usbhost device plugged in without any problem.
>
> If the target server doesn't have the usb device, the device is unplugged on
> the target vm.
>
> If the target server has the same kind of usb device (same vendor
> > with the new behaviour, we don't need sanitize_mountpoint anymore:
> >
> > Signed-off-by: Dominik Csapak
>
> Acked-by: Wolfgang Bumiller
This looks potentially dangerous to me. Is there a reason (bug) for that
change? Or is this just a cleanup?
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index fbd4830..9b7b07b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4472,7 +4472,7 @@ sub vmconfig_update_disk {
d
I thought that code is required to make volume resize happy?
Set the unfreeze flag before trying to freeze, otherwise an aborted
or failed lxc-freeze will not be reversed by our error
handling, leaving the container in a (partially) frozen
state.
Add a call to snapshot_commit in its own eval block to the
error handling code, because we want to clean up and unlock
the
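A rough sketch of both fixes (helper names approximate, control flow
simplified):

    my $unfreeze = 0;
    eval {
        $unfreeze = 1;       # set *before* freezing: an aborted or
        lxc_freeze($vmid);   # failed lxc-freeze must still be undone
        create_snapshot($vmid, $snapname);
    };
    my $err = $@;
    if ($unfreeze) {
        eval { lxc_unfreeze($vmid); };
        warn $@ if $@;
    }
    if ($err) {
        # snapshot_commit gets its own eval block in the error
        # handling, so the config still gets cleaned up and unlocked
        # even if this step fails too
        eval { snapshot_commit($vmid, $snapname); };
        warn $@ if $@;
        die $err;
    }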
We can migrate a vm with a usbhost device plugged in without any problem.
If the target server doesn't have the usb device, the device is unplugged on
the target vm.
If the target server has the same kind of usb device (same vendor:productid or
the same usb port),
it's transparent for the vm.
Signed-off-by: Al
The IPRefSelector ComboGrid can have selected values which are not backed
by the component store, i.e. the store only contains IP aliases, but
the ComboGrid can contain an IP address not registered as an IP alias.
In that case we should not try to update the selection in the dropdown,
as the dropdown
Another batch bites the dust
using applyIf is not safe here as the tbar property has already been set
by the framework (and anyway we would like to override any default
set by the framework).
this allows the toolbar of the component to be displayed
---
www/manager6/grid/FirewallAliases.js | 2 +-
1 file changed, 1 insertion
---
www/manager6/form/IPRefSelector.js | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/www/manager6/form/IPRefSelector.js
b/www/manager6/form/IPRefSelector.js
index a017e15..fbf8a5d 100644
--- a/www/manager6/form/IPRefSelector.js
+++ b/www/manager6/form/IPRefSelector.js
@@
On Thu, Feb 04, 2016 at 01:40:15PM +0100, Dominik Csapak wrote:
> changes from v1:
> renamed function to verify_*
> added check for ../ at the beginning
> cleaned up regex (\.)? -> \.?
>
>
> currently we sanitize mountpoints with sanitize_mountpoint, which
> tries to remove dots, double-dots and
Some news:
I have rolled back to ipmi_watchdog, which was not working yesterday,
and it is working fine now.
So, indeed, maybe it was a hardware problem. (The test servers are 8-9 years
old, so ...)
Thanks for the help!
----- Original Mail -----
From: "Thomas Lamprecht"
To: "pve-devel"
Sent: Thursday, 4 February
changes from v1:
renamed function to verify_*
added check for ../ at the beginning
cleaned up regex (\.)? -> \.?
currently we sanitize mountpoints with sanitize_mountpoint, which
tries to remove dots, double-dots and multiple slashes, but it does
not do it correctly (e.g. /test/././ gets truncated to /test./)
On Thu, Feb 04, 2016 at 01:07:03PM +0100, Fabian Grünbichler wrote:
> Since lxc.autodev defaults to 1, LXC will mount /dev as
> tmpfs and populate it. The removed code was unnecessary,
> since the device node was not accessible in the container
> anyway. A /dev mountpoint is mounted into the rootfs
> Wolfgang Bumiller wrote on 4 February 2016 at 11:21:
>
>
> On Thu, Feb 04, 2016 at 11:08:05AM +0100, Fabian Grünbichler wrote:
> > skip /dev and bind mounts, otherwise stop backups will
> > fail in parse_volume_id.
> > ---
> > src/PVE/VZDump/LXC.pm | 5 -
> > 1 file changed, 4
Since lxc.autodev defaults to 1, LXC will mount /dev as
tmpfs and populate it. The removed code was unnecessary,
since the device node was not accessible in the container
anyway. A /dev mountpoint is mounted into the rootfs and
accessible under its mountpoint, even if there is no
associated /dev node
On Thu, Feb 04, 2016 at 11:36:41AM +0100, Dominik Csapak wrote:
> currently we sanitize mountpoints with sanitize_mountpoint, which
> tries to remove dots, double-dots and multiple slashes, but it does
> not do it correctly (e.g. /test/././ gets truncated to /test./)
>
> instead of trying to truncat
On 02/04/2016 11:58 AM, Alexandre DERUMIER wrote:
If it runs fine on the other two with the same hardware it smells strongly
of a possible hardware bug/defective hardware (or firmware).
The countdown is probably only the default countdown; as it's not active
and has no action configured this can be dismissed, imo.
>>If it runs fine on the other two with the same hardware it smells strongly
>>of a possible hardware bug/defective hardware (or firmware).
>>
>>The countdown is probably only the default countdown; as it's not active
>>and has no action configured this can be dismissed, imo.
But this should work
>>echo "A" | socat - UNIX-CONNECT:/var/run/watchdog-mux.sock
>>
>>after this command, server rebooted
Yes, this is working for me too.
The watchdog seems to work fine, it is simply not started ...
----- Original Mail -----
From: "Eduard Ahmatgareev"
To: "pve-devel"
Sent: Thursday, 4 February 2016 11:26
currently we sanitize mountpoints with sanitize_mountpoint, which
tries to remove dots, double-dots and multiple slashes, but it does
not do it correctly (e.g. /test/././ gets truncated to /test./)
instead of trying to truncate the path, we create a format for mp strings
which throws an error if /./
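A sketch of such a verify-style format check (the function was renamed
to a verify_* name per the changelog above; the exact name and error
message here are guesses):

    sub verify_mp_string {
        my ($mp, $noerr) = @_;
        # reject '.' and '..' as path components instead of rewriting them
        if ($mp =~ m!/\.\.?(/|$)! || $mp =~ m!^\.\.?(/|$)!) {
            return undef if $noerr;
            die "mountpoint string contains '.' or '..' components\n";
        }
        return $mp;
    }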
On 02/04/2016 11:07 AM, Alexandre DERUMIER wrote:
looks OK to me. Seems there is no HA enabled VM on this node? That
would explain that the watchdog does not trigger.
The problem is not that the watchdog is not triggered,
it is that the watchdog timer is stopped
(and with a strange countdown of 15
I had a problem with the watchdog on ipmi, that's why I used the following
to check. The package status:
ii  pve-manager  4.1-5  amd64  The Proxmox Virtual Environment
and the watchdog test:
echo "A" | socat - UNIX-CONNECT:/var/run/watchdog-mux.sock
after this command, server rebooted
On Thu, Feb 04, 2016 at 11:08:05AM +0100, Fabian Grünbichler wrote:
> skip /dev and bind mounts, otherwise stop backups will
> fail in parse_volume_id.
> ---
> src/PVE/VZDump/LXC.pm | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump
skip /dev and bind mounts, otherwise stop backups will
fail in parse_volume_id.
---
src/PVE/VZDump/LXC.pm | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index dddf17e..fda37c9 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZ
>>looks OK to me. Seems there is no HA enabled VM on this node? That
>>would explain that the watchdog does not trigger.
The problem is not that the watchdog is not triggered,
it is that the watchdog timer is stopped
(and with a strange countdown of 15s)
# ipmitool mc watchdog get
Watchdog Timer
proxmox-ve: 4.1-34 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-5 (running version: 4.1-5/f910ef5c)
pve-kernel-4.2.6-1-pve: 4.2.6-34
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 0.17.2-1
pve-cluster: 4.0-31
qemu-server: 4.0-49
pve-firmware: 1.1-7
libpve-common-perl: 4.0-45
libpve-access-cont
On 02/04/2016 10:58 AM, Eduard Ahmatgareev wrote:
I tried to add a new node to the proxmox cluster and had a problem:
pvecm add cluster_ip --force
Are you sure you want to continue connecting (yes/no)? yes
node cluster-2-4 already defined
copy corosync auth key
stopping pve-cluster service
backup old database
I tried to add a new node to the proxmox cluster and had a problem:
pvecm add cluster_ip --force
Are you sure you want to continue connecting (yes/no)? yes
node cluster-2-4 already defined
copy corosync auth key
stopping pve-cluster service
backup old database
Job for corosync.service failed. See 'systemct
When cherry-picking the range A..B, commit A itself won't get
cherry-picked, only the commits after A.
To fix that use the range A^..B, as ^ selects the previous (parent) commit.
This fixes the build of zfs with linux kernel version 4.4 as one
compatibility change commit wasn't included.
Signed-off-by: Thomas Lamprech
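In git terms, the two ranges from the message above:

    git cherry-pick A..B     # excludes A itself, picks only commits after A
    git cherry-pick A^..B    # A^ is A's parent, so A is now included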
watchdog-mux.socket was removed in f8a3fc80af but the
postinstall script used -e instead of -L to test for the
symlink, which fails since the destination is already
removed at that point.
Use -L and remove the dead symlink if it exists.
Reported-by: Alexandre Derumier
---
debian/postinst | 2 +-
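The distinction in shell terms (a sketch; the unit path is taken from
the follow-up mail below):

    UNIT=/etc/systemd/system/sockets.target.wants/watchdog-mux.socket
    # -e follows the symlink and fails once the target file is gone;
    # -L tests the link itself, dangling or not
    if [ -L "$UNIT" ]; then
        rm -f "$UNIT"
    fi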
> On February 4, 2016 at 7:50 AM Alexandre DERUMIER wrote:
>
>
> >>It's just a warning, but it could be great to find a way to clean up it.
>
> for the warning,
>
> the problem seem to be that postinst script from pve-ha-manager
>
> if [ -e /etc/systemd/system/sockets.target.wants/watchdog
> >>What is the output of:
>
> # systemctl status watchdog-mux.service
>
>
> ● watchdog-mux.service - Proxmox VE watchdog multiplexer
>Loaded: loaded (/lib/systemd/system/watchdog-mux.service; static)
>Active: active (running) since Thu 2016-02-04 09:09:26 CET; 1min 38s ago
> Main PID
> It never goes inside the if
>
>
> in my case,
> /etc/systemd/system/sockets.target.wants/watchdog-mux.socket exist, it's a
> symlink,
> to
> /etc/systemd/system/sockets.target.wants/watchdog-mux.socket ->
> /lib/systemd/system/watchdog-mux.socket
>
> but
>
> /lib/systemd/system/watchdog-mux.
>>but it looks like softdog is working on both nodes?
yes
>>What is the output of:
# systemctl status watchdog-mux.service
● watchdog-mux.service - Proxmox VE watchdog multiplexer
Loaded: loaded (/lib/systemd/system/watchdog-mux.service; static)
Active: active (running) since Thu 2016-