# cat /etc/pve/replication.cfg
local: 105-0
        target ns302695
        rate 10
        schedule */2:00

local: 103-0
        target ns3511723
        rate 11
        schedule */20

local: 109-0
        target ns3511723
        rate 10

local: 102-0
        target ns302695
        rate 10
        schedule 22:30

local: 107-0
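Note that 109-0 has no schedule line, so it should fall back to the pvesr default of every 15 minutes (*/15), if I remember the defaults correctly. To check the last and next run time of every job on the source node:

# pvesr status

That should also show the fail count and state per job.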
Hi,
I am replying here to avoid confusion in the other thread.
Can you post the content of these two files:
/etc/pve/replication.cfg
/var/lib/pve-manager/pve-replication-state.json (of the source node)
?
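For reference, something like this on the source node should print both; the state file is plain JSON, and if it is all on one line, piping it through json.tool makes it easier to read (assuming python3 is installed, which it normally is on Stretch):

# cat /etc/pve/replication.cfg
# python3 -m json.tool /var/lib/pve-manager/pve-replication-state.json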
Another strange thing is that the failing jobs are running outside of their schedule.
Jul 12 12:00:03 ns pvesr[12049]: total estimated size is 119M
Jul 12 12:00:03 ns pvesr[12049]: TIME        SENT   SNAPSHOT
Jul 12 12:00:04 ns pvesr[12049]: 12:00:04 9.18M
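If it helps to narrow that down: as far as I know the replication runner is started once a minute by a systemd timer (pvesr.timer on my install), so the journal for that window should show whether the 12:00 run was a scheduled run or a retry after a failure:

# journalctl -u pvesr.service --since "2017-07-12 11:55" --until "2017-07-12 12:10"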
Same error again, nothing strange until 3:20 AM:
Jul 10 03:00:02 xxnodenamexx pvesr[32023]: send from @__replicate_103-0_1499647500__ to rpool/data/vm-103-disk-1@__replicate_103-0_1499648400__ estimated size is 7.27M
Jul 10 03:00:02 xxnodenamexx pvesr[32023]: total estimated size is 7.27M
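As far as I can tell, the number at the end of a replication snapshot name is just the Unix timestamp of the run it belongs to, so you can decode the two snapshots from that log line and compare them with what is actually left on the pool:

# date -d @1499647500
# date -d @1499648400
# zfs list -t snapshot -o name,creation rpool/data/vm-103-disk-1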
Sorry, this was not meant as a reply here.
Hi,
I upgraded my 2-node setup to Proxmox 5 and Debian Stretch.
I am using local ZFS storage and unicast corosync (OVH).
I set up storage replication from each node to the other for some VMs and a
container. It makes transferring VMs really fast; however, the VMs need to be
shut down, as online migration