applied with correct dependency (libwww-perl instead of
liblwp-protocol-https-perl)
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
applied with cleanups
On Thu, Dec 07, 2017 at 03:10:02PM +0100, Thomas Lamprecht wrote:
> Add a postinst file which stops the HA services, if running, before it
> configures pve-cluster, and starts them again afterwards, if enabled.
> Do this only if the version installed before the upgrade is <= 2.0-3.
Since we do not want to depend on libpve-accesscontrol,
we check the ticket via the API on http://localhost:85.
This means we have to pass the path and permission via the command line.
Signed-off-by: Dominik Csapak
---
debian/control | 2 +-
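The idea above can be pictured with a small dry-run sketch. Everything here is an illustrative assumption: the endpoint path, the parameter names, and the placeholder ticket are not taken from the patch, only the "talk to the API on localhost:85 and pass path and permission along" shape is.

```shell
# Hypothetical sketch: check a ticket via the API on localhost:85 instead
# of depending on libpve-accesscontrol. The endpoint path and parameter
# names below are assumptions for illustration, not taken from the patch.
TICKET='PVE:user@pam:TICKETDATA'   # placeholder ticket value
CHECK_PATH='/vms/100'              # path handed over on the command line
PERMISSION='VM.Audit'              # permission handed over on the command line

# Build the request as a dry run: printed, not executed.
CMD="curl -s -d ticket=$TICKET -d path=$CHECK_PATH -d privs=$PERMISSION http://localhost:85/api2/json/access/ticket"
echo "$CMD"
```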
Now you get an email if a replication job fails.
The mail is only sent the first time, when a job switches from the 'ok'
state to the 'error' state.
No further notification is sent when a job in the error state retries to sync.
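The "mail only on the first failure" rule can be sketched roughly like this; the function and parameter names are illustrative stand-ins, not the actual PVE::Replication code:

```shell
# Illustrative sketch: a mail goes out only when a job first enters the
# error state (fail count 1), not on later failing retries. Names are
# assumptions, not the real PVE code.
notify_on_state_change() {
    new_state=$1
    fail_count=$2
    if [ "$new_state" = "error" ] && [ "$fail_count" -eq 1 ]; then
        echo "sending mail: replication job failed"
    else
        echo "no mail"
    fi
}

notify_on_state_change error 1   # first failure: mail is sent
notify_on_state_change error 2   # retry still failing: silent
notify_on_state_change ok 0      # healthy job: silent
```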
[PATCH manager V2]
Indentation cleanup.
Small cleanup as W. Bumiller suggested.
applied both patches
---
PVE/API2/Replication.pm | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm
index f396615d..38449892 100644
--- a/PVE/API2/Replication.pm
+++ b/PVE/API2/Replication.pm
@@ -77,15 +77,15 @@ sub run_jobs {
my
We will handle these errors in the API and decide what to do.
---
PVE/Replication.pm | 95 +++---
1 file changed, 41 insertions(+), 54 deletions(-)
diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index c25ed44..9bc4e61 100644
---
An email notification will be sent for each job when the job fails.
This message is only sent when an error occurs and the fail count is 1.
---
PVE/API2/Replication.pm | 18 --
PVE/CLI/pvesr.pm | 11 ++-
bin/init.d/pvesr.service | 2 +-
3 files changed, 27
cfs_* methods can now die (rightfully so) when the IPCC endpoint is
not connected, or another grave IPCC error arises.
As we did not catch those problems in the RPCEnvironment's
init_request method, which loads the user config, this got
propagated to the AnyEvent server's auth_handler call in its
Fixes a problem where a logged-in, connected client was logged
out because we could not verify them for this call, as the cluster
filesystem was unavailable.
If we get such an exception, then use it for responding.
This is safe, as no logged-out client can ever do anything where login
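The fix described above can be pictured with a purely illustrative sketch (shell stand-in, not the actual Perl code): distinguish "the user config could not be loaded" from "authentication failed", and answer the former with the error itself instead of logging the client out. All names below are assumptions.

```shell
# Illustrative sketch: a cfs/IPCC failure during auth should produce an
# error response, not a logout. All names here are assumptions.
load_user_config() {
    # Simulate the cluster filesystem being unavailable.
    echo "ipcc: connection refused" >&2
    return 1
}

auth_handler() {
    # Capture only stderr; a failure here is an infrastructure error,
    # not proof that the client's ticket is invalid.
    if ! err=$(load_user_config 2>&1 >/dev/null); then
        # Respond with the exception instead of treating the
        # client as unauthenticated.
        echo "500 internal error: $err"
        return 0
    fi
    echo "200 authenticated"
}
```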
Add a postinst file which stops the HA services, if running, before it
configures pve-cluster, and starts them again afterwards, if enabled.
Do this only if the version installed before the upgrade is <= 2.0-3.
dpkg-query has Version and Config-Version;
Version is at this time already the new unpacked version,
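A postinst guard along these lines could implement the version check. This is a sketch under assumptions: a real Debian maintainer script would use `dpkg --compare-versions` and receive the old version from dpkg; here `sort -V` stands in so the snippet is self-contained, and the old version is hard-coded for illustration.

```shell
# Sketch of a postinst version guard. In a real maintainer script the
# old version arrives from dpkg (cf. Version vs. Config-Version) and
# dpkg --compare-versions would do the comparison; sort -V stands in
# here so the example is self-contained.
version_le() {
    # True if $1 <= $2 under version sort.
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

old_version='2.0-2'   # illustrative old package version

if version_le "$old_version" '2.0-3'; then
    echo "would stop HA services before configuring pve-cluster"
    # ... configure, then start the services again if enabled ...
else
    echo "no service stop/start needed"
fi
```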