On Thu, 5 Jan 2017 16:21:38 +0100 (CET)
Dietmar Maurer wrote:
>
> If you do not want to debug yourself, can you please file a
> bug at bugzilla.proxmox.com?
>
https://bugzilla.proxmox.com/show_bug.cgi?id=1243
--
Hilsen/Regards
Michael Rasmussen
> On January 5, 2017 at 8:02 PM Michael Rasmussen wrote:
>
>
> On Thu, 5 Jan 2017 16:21:38 +0100 (CET)
> Dietmar Maurer wrote:
>
> >
> > If you do not want to debug yourself, can you please file a
> > bug at bugzilla.proxmox.com?
> >
> I will do
On Thu, 5 Jan 2017 16:21:38 +0100 (CET)
Dietmar Maurer wrote:
>
> If you do not want to debug yourself, can you please file a
> bug at bugzilla.proxmox.com?
>
I will do that but I think I have nailed the problem down to either
wrong instructions on the wiki or some kind
> if someone has a good argument why this is a bad idea, please share it
> (or any other suggestion for this)
IMHO it is confusing to display all VMs ...
Hi all,
I just stumbled across the following:
When configuring memory for a VM, you can choose between the options 'Use fixed
size memory' and 'Automatically allocate memory within this range'.
The online help explains the ballooning feature quite nicely, but there is a
mismatch:
Under the
On 01/05/2017 04:04 PM, Dietmar Maurer wrote:
BulkStop: Why do we list already stopped guests?
I wanted to preserve the old behaviour (which included all VMs).
This is interesting for one case:
you open the bulk stop window -> someone starts a VM -> you click stop
in the old (and current)
> > You get an error after installing custom certs?
> Yes, getting the error after following the new instructions for
> installing custom certs. I had to renew my custom certs and chose the
> new instructions for doing that.
If you do not want to debug yourself, can you please file a
bug at
applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
BulkStop: Why do we list already stopped guests?
applied, thanks!
> this patch series adds a vmid filter to the
> startall/stopall/migrateall calls of nodes
>
> and a GUI for selecting this
>
> so you can selectively start, stop and migrate guests in bulk
>
> I will also send a documentation patch later and add a help button to the
> window
applied
applied
This allows visual feedback for first time users doing a backup.
---
change the way we reload by hiding the backup window instead of passing
around the reload() function
www/manager6/grid/BackupView.js | 7 ++-
www/manager6/window/Backup.js | 12 ++--
2 files changed, 16
Reviewed-by: Dominik Csapak
On 01/05/2017 12:23 PM, Thomas Lamprecht wrote:
On the old HA status we saw where a service was located currently,
this information was lost when we merged the resource and the status
tab.
Add this information again.
Signed-off-by: Thomas
applied
any comments?
On 12/20/2016 09:33 AM, Thomas Lamprecht wrote:
shutdown.target is active every time the node shuts down, be it
reboot, poweroff, halt or kexec.
As we want to return true only when the node powers down without a
restart afterwards, this was wrong.
Match only poweroff.target
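The distinction can be sketched in Python; the job lines below simulate `systemctl list-jobs` output and are an assumption for illustration, not taken from the patch:

```python
import re

def is_poweroff(list_jobs_output: str) -> bool:
    # Only a queued poweroff.target start job identifies a real power-off;
    # shutdown.target is also active for reboot, halt and kexec, so
    # matching it cannot tell a power-off from a restart.
    return re.search(r'poweroff\.target\s+start', list_jobs_output) is not None

print(is_poweroff("1 poweroff.target start waiting"))  # True
print(is_poweroff("2 reboot.target start waiting"))    # False
```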
this uses the new vmselector and the new vmid filter in the backend
to allow starting/stopping/migrating selected VMs instead of all.
By default all VMs are selected, to keep the same default behaviour.
Signed-off-by: Dominik Csapak
---
www/manager6/Makefile | 3 ++-
this is mostly copied from MigrateAll.js, but a more generic way,
to allow startall and stopall to also use it
Signed-off-by: Dominik Csapak
---
www/manager6/window/BulkAction.js | 141 ++
1 file changed, 141 insertions(+)
create mode
this patch series adds a vmid filter to the
startall/stopall/migrateall calls of nodes,
and a GUI for selecting this,
so you can selectively start, stop and migrate guests in bulk.
I will also send a documentation patch later and add a help button to the
window to explain it
Dominik Csapak (4):
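The filter semantics described in the cover letter can be sketched as follows; the helper name and the no-filter default are assumptions based on the thread, not the actual Perl implementation:

```python
def filter_guests(node_vmids, selected=None):
    """Keep the old behaviour (act on all guests) when no filter is
    given; otherwise act only on the selected vmids present on this node."""
    if selected is None:
        return sorted(node_vmids)
    return sorted(set(node_vmids) & set(selected))

print(filter_guests([100, 101, 105]))              # [100, 101, 105]
print(filter_guests([100, 101, 105], [101, 200]))  # [101]
```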
this is a form field which is a grid for selecting VMs.
If nodename is given, it will filter the VMs to only those on the given node.
You can filter the grid with the column header, and only the selected
and visible items are in the value of the field.
Signed-off-by: Dominik Csapak
On the old HA status we saw where a service was located currently,
this information was lost when we merged the resource and the status
tab.
Add this information again.
Signed-off-by: Thomas Lamprecht
---
changes since v1:
* add 'node' also to the data model
there was still a point where we got the wrong string:
on createosd we get the devpath (/dev/cciss/c0d0),
but need the info from get_disks, which looks in /sys/block,
where it needs to be cciss!c0d0
Signed-off-by: Dominik Csapak
---
I hope this is the final fix for this
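The renaming described above can be sketched with a small hypothetical helper (not the actual get_disks code): the kernel exposes block devices whose names contain a '/' under /sys/block with the slash replaced by '!'.

```python
def sysfs_block_name(devpath: str) -> str:
    """Map a /dev path to the directory name used under /sys/block."""
    name = devpath[len("/dev/"):] if devpath.startswith("/dev/") else devpath
    # the kernel replaces '/' inside a device name with '!' in sysfs,
    # e.g. /dev/cciss/c0d0 -> /sys/block/cciss!c0d0
    return name.replace("/", "!")

print(sysfs_block_name("/dev/cciss/c0d0"))  # cciss!c0d0
print(sysfs_block_name("/dev/sda"))         # sda
```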
The drive-mirror QMP command has a timeout of 3s by default (QMPClient.pm);
shouldn't we bump it to 6s (more than the 5s connect-timeout)?
- Original Message -
From: "Wolfgang Bumiller"
To: "pve-devel"
Sent: Thursday, January 5, 2017 10:09:28
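The reasoning behind the suggested bump can be made explicit; the constants come from the thread, while the helper itself is hypothetical:

```python
CONNECT_TIMEOUT = 5  # seconds, the QMP connect-timeout cited in the thread

def drive_mirror_timeout(connect_timeout: int = CONNECT_TIMEOUT) -> int:
    # Keep the per-command timeout strictly above the connect timeout;
    # a 3s command timeout can expire before the 5s connection attempt
    # has even completed.
    return connect_timeout + 1

print(drive_mirror_timeout())  # 6
```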
Thanks Wolfgang.
I had already prepared a v10 with the cleanup ;)
I'm currently on holiday, so don't have too much time this week.
yes, for nbd tls, this needs qemu 2.8 + blockdev-add / blockdev-mirror.
Too much change for now. (and blockdev is still experimental and not complete)
Sorry, forgot to add the applied tag.
---
PVE/API2/Qemu.pm | 4 ++--
PVE/QemuMigrate.pm | 2 +-
PVE/QemuServer.pm | 16
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index e48bf6d..288a9cd 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1643,7
---
PVE/QemuServer.pm | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0b866cd..31e30fa 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5926,17 +5926,16 @@ sub qemu_drive_mirror {
die
Applied the series with some followup patches:
* Added patches for the timeout and POSIX::_exit() changes I mentioned.
* Also added some whitespace & style cleanup patches.
Now we need to figure out whether to first add the ssh-tunnel based
encryption or go with qemu's tls. Saw the thread on
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 31e30fa..c2fa20b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5919,7 +5919,7 @@ sub qemu_drive_mirror {
$format = "nbd";
my
---
PVE/API2/Qemu.pm | 7 +++
PVE/QemuMigrate.pm | 16
PVE/QemuServer.pm | 12 ++--
3 files changed, 13 insertions(+), 22 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0bae424..e48bf6d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@
On 2017-01-05 09:21, Dietmar Maurer wrote:
The default configuration works for you?
I do not know exactly since I have been using custom certs since proxmox
2.x and have kept these certs while upgrading (following the old
instructions)
You get an error after installing custom certs?
Yes,