Hi all,

I just noticed this issue this morning after one of my backups gave me an
error last night.  It turns out the boot device:

/dev/mapper/pve-root      99083868   94576944          0 100% /

is full. It's a 100GB drive for the boot system. I'm trying to figure out where
all the data is stored, because when I compared it to another server, both are
using roughly the same amount of disk space, but the other one isn't reporting
the errors shown below.
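In case it's useful, this is roughly what I've been running to try to narrow
down where the space went (the -x flag keeps du on the root filesystem so
mounted storage isn't counted; lsof may need to be installed separately):

```shell
# Largest top-level directories on the root filesystem only
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15

# Deleted-but-still-open files can also hold space without showing up in du
lsof +L1 2>/dev/null | head
```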

Aug 11 07:45:12 node1 pveproxy[2955]: worker 7548 finished
Aug 11 07:45:12 node1 pveproxy[2955]: starting 2 worker(s)
Aug 11 07:45:12 node1 pveproxy[2955]: worker 7580 started
Aug 11 07:45:12 node1 pveproxy[2955]: worker 7581 started
Aug 11 07:45:12 node1 pveproxy[7579]: error writing access log
Aug 11 07:45:12 node1 pveproxy[7581]: error writing access log


I've tried stopping and restarting the pveproxy daemon, but that didn't do
anything.  I've also stopped all the VMs to see if one of them was causing the
issue, but I don't see any difference in the amount of data being written to
the log file.
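To see whether the logs themselves are what's growing, I've been checking for
recently modified files like this (a sketch; the /var/log path is just where
I'd expect pveproxy's access log to live):

```shell
# Files under /var/log modified in the last 10 minutes, largest first
find /var/log -xdev -type f -mmin -10 -exec du -h {} + 2>/dev/null \
    | sort -rh | head
```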

pveversion --verbose
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-10-pve: 2.6.32-63
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-22-pve: 2.6.32-107
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1


Thank you,

__________
David
_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
