Package: snmpd
Version: 5.4.1~dfsg-12
Severity: normal
Hi.
It seems impossible to retrieve hrPartitionTable.
When I run the agent in the foreground with:
# /usr/sbin/snmpd -f -D -u snmp -I -smux -p /var/run/snmpd.pid localhost 2>&1 | grep host/hr_partition
handler::register: Registering host/hr_partition (::old_api) at iso.3.6.1.2.1.25.3.7.1.1
register_mib: registering "host/hr_partition" at iso.3.6.1.2.1.25.3.7.1.1 with context ""
handler::register: Registering host/hr_partition (::old_api) at iso.3.6.1.2.1.25.3.7.1.2
register_mib: registering "host/hr_partition" at iso.3.6.1.2.1.25.3.7.1.2 with context ""
handler::register: Registering host/hr_partition (::old_api) at iso.3.6.1.2.1.25.3.7.1.3
register_mib: registering "host/hr_partition" at iso.3.6.1.2.1.25.3.7.1.3 with context ""
handler::register: Registering host/hr_partition (::old_api) at iso.3.6.1.2.1.25.3.7.1.4
register_mib: registering "host/hr_partition" at iso.3.6.1.2.1.25.3.7.1.4 with context ""
handler::register: Registering host/hr_partition (::old_api) at iso.3.6.1.2.1.25.3.7.1.5
register_mib: registering "host/hr_partition" at iso.3.6.1.2.1.25.3.7.1.5 with context ""
then, whenever I issue:
# snmpwalk -v2c -c public localhost '.1.3.6.1.2.1.25.3.7'
I get:
HOST-RESOURCES-MIB::hrPartitionTable = No Such Object available on this agent at this OID
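For what it's worth, a possible way to narrow this down (a hypothetical diagnostic, assuming the same community string and a reachable agent): hrPartitionTable rows are indexed by hrDeviceIndex, so checking whether hrDeviceTable lists the disks at all might show whether the device scan or only the partition scan is failing.

```shell
# Diagnostic sketch -- assumes snmpd on localhost answering community "public".
# hrPartitionTable entries are indexed by hrDeviceIndex, so first check that
# the disks show up at all in hrDeviceTable (the hrDeviceDescr column):
snmpwalk -v2c -c public localhost HOST-RESOURCES-MIB::hrDeviceDescr

# If the disk devices are listed there, the partition scan itself is the
# problem; walking the hrPartitionLabel column should then reproduce the error:
snmpwalk -v2c -c public localhost HOST-RESOURCES-MIB::hrPartitionLabel
```

This would at least separate a failure in the device enumeration from a failure in the partition enumeration.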
The snmpd traces show:
trace: header_hrpartition(): host/hr_partition.c, 106:
host/hr_partition: var_hrpartition: HOST-RESOURCES-MIB::hrPartitionEntry.0 0
trace: header_hrpartition(): host/hr_partition.c, 178:
host/hr_partition: ... index out of range
(the same four lines repeat five times)
Here is what my disks look like:
# mount
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/main-home on /home type ext3 (rw)
/dev/mapper/main-vsgranulite on /var/lib/vservers/vsgranulite type ext3 (rw)
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 9,2G 6,3G 2,6G 72% /
tmpfs 1002M 0 1002M 0% /lib/init/rw
udev 10M 104K 9,9M 2% /dev
tmpfs 1002M 0 1002M 0% /dev/shm
/dev/mapper/main-home 41G 25G 15G 63% /home
/dev/mapper/main-vsgranulite 9,9G 1,1G 8,4G 11% /var/lib/vservers/vsgranulite
Thanks in advance.
Best regards,
-- System Information:
Debian Release: squeeze/sid
APT prefers testing
APT policy: (990, 'testing'), (500, 'testing-proposed-updates')
Architecture: i386 (i686)
Kernel: Linux 2.6.29-1-686 (SMP w/2 CPU cores)
Locale: LANG=fr_FR.UTF-8, LC_CTYPE=fr_FR.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash
Versions of packages snmpd depends on:
ii adduser 3.110 add and remove users and groups
ii debconf 1.5.26 Debian configuration management sy
ii libc6 2.9-4 GNU C Library: Shared libraries
ii libsensors3 1:2.10.8-1 library to read temperature/voltag
pn libsnmp9 <none> (no description available)
ii libwrap0 7.6.q-16 Wietse Venema's TCP wrappers libra
snmpd recommends no packages.
snmpd suggests no packages.