I ran some tests against two different NFS and SMB shares and saw the same very slow backup performance. I then reverted from pve-kernel-2.6.32-23-pve to pve-kernel-2.6.32-20-pve, and backup performance returned to normal. Some checks that helped me narrow this down are sketched after the quoted message below.
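For anyone who wants to do the same, here is a rough sketch of how to boot back into the older kernel. The package name comes from the pveversion output quoted below; the exact grub menu entry is an assumption and depends on your own /boot/grub/grub.cfg:

    # make sure the older kernel package is (still) installed
    apt-get install pve-kernel-2.6.32-20-pve

    # find the matching menu entry in the grub config
    grep menuentry /boot/grub/grub.cfg

    # point GRUB_DEFAULT in /etc/default/grub at that entry, then:
    update-grub
    reboot

    # after the reboot, confirm which kernel is running
    uname -r    # should print 2.6.32-20-pve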
Yannis Milios
------------------
Systems Administrator
Mob. 0030 6932-657-029
Tel. 0030 211-800-1230
E-mail. yannis.mil...@gmail.com

On Mon, Sep 16, 2013 at 10:45 AM, Yannis Milios <yannis.mil...@gmail.com> wrote:

> Hello list,
>
> I have a single node running a single VM (win2k3).
> It was working fine on Proxmox 2.3. I upgraded this box to 3.1 last week
> by following http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0.
> Everything went smoothly, except that backups no longer complete.
> I have a local NFS mount on a NAS device which I use as the backup target:
>
> root@proxmox1:~# df -h
> Filesystem                     Size  Used Avail Use% Mounted on
> udev                            10M     0   10M   0% /dev
> tmpfs                          194M  1.5M  192M   1% /run
> /dev/mapper/pve-root            28G   16G   11G  60% /
> tmpfs                          5.0M     0  5.0M   0% /run/lock
> tmpfs                          387M   22M  366M   6% /run/shm
> /dev/mapper/pve-data            65G  9.8G   55G  16% /var/lib/vz
> /dev/sda1                      495M  123M  348M  27% /boot
> /dev/fuse                       30M   16K   30M   1% /etc/pve
> 192.168.0.203:/VOLUME1/PUBLIC  442G  198G  244G  45% /mnt/pve/nfs1
>
> The error I am receiving is:
>
> VM 101 qmp command 'query-backup' failed - client closed connection
>
> What I've noticed is that even if I invoke a manual backup job, the
> process starts but takes forever and then stops with the above message.
> During this time, if I ping the node I get very high response times and
> at times it is nearly inaccessible.
> I did a test writing a 100 MB file to the NFS mount:
>
> root@proxmox1:~# dd if=/dev/zero of=/mnt/pve/nfs1/test.raw bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 340.931 s, 308 kB/s
>
> It seems that for some reason the machine has a hard time communicating
> with the NAS device. The strange thing is that there was no problem before
> the upgrade. Could it be a NIC driver issue?
> I'm providing some more info in case anyone can help:
>
> root@proxmox1:~# pveversion -v
> proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
> pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
> pve-kernel-2.6.32-20-pve: 2.6.32-100
> pve-kernel-2.6.32-16-pve: 2.6.32-82
> pve-kernel-2.6.32-17-pve: 2.6.32-83
> pve-kernel-2.6.32-18-pve: 2.6.32-88
> pve-kernel-2.6.32-23-pve: 2.6.32-109
> lvm2: 2.02.98-pve4
> clvm: 2.02.98-pve4
> corosync-pve: 1.4.5-1
> openais-pve: 1.1.4-3
> libqb0: 0.11.1-2
> redhat-cluster-pve: 3.2.0-2
> resource-agents-pve: 3.9.2-4
> fence-agents-pve: 4.0.0-1
> pve-cluster: 3.0-7
> qemu-server: 3.1-1
> pve-firmware: 1.0-23
> libpve-common-perl: 3.0-6
> libpve-access-control: 3.0-6
> libpve-storage-perl: 3.0-10
> pve-libspice-server1: 0.12.4-1
> vncterm: 1.1-4
> vzctl: 4.0-1pve3
> vzprocps: 2.0.11-2
> vzquota: 3.1-2
> pve-qemu-kvm: 1.4-17
> ksm-control-daemon: 1.1-1
> glusterfs-client: 3.4.0-2
>
> root@proxmox1:~# tail /var/log/kern.log
> Sep 16 02:35:11 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:18:47 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:19:35 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:20:05 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:20:35 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:20:59 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:21:41 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:22:05 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:22:41 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
> Sep 16 10:23:11 proxmox1 kernel: r8169 0000:02:00.0: eth0: link up
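PS: in case it helps anyone debugging a similar slowdown, these are the kinds of checks I would run to separate a NIC/driver problem from an NFS one. The repeated "link up" messages in kern.log look like the link keeps renegotiating. eth0 and the NAS address 192.168.0.203 are taken from the output above; running iperf on both ends is my own assumption, not part of the original setup:

    # negotiated speed/duplex -- repeated "link up" lines in kern.log
    # suggest the link keeps dropping and renegotiating
    ethtool eth0

    # per-interface error and drop counters
    ip -s link show eth0

    # raw NFS write throughput, bypassing the page cache
    dd if=/dev/zero of=/mnt/pve/nfs1/test.raw bs=1M count=100 oflag=direct

    # plain TCP throughput to the NAS (needs iperf on both ends)
    iperf -c 192.168.0.203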