Hi, looks like defective RAM.
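Before swapping modules blind, it is worth confirming with a standalone memory tester such as Memtest86 and letting it run for several full passes. A rough sketch of getting it onto that box, assuming a floppy drive and a stock LILO setup (the download URL, version number and device names are examples, adjust for your hardware):

    # fetch and unpack Memtest86 (check memtest86.com for the current release)
    wget http://www.memtest86.com/memtest86-3.2.tar.gz
    tar xzf memtest86-3.2.tar.gz
    cd memtest86-3.2

    # either write the boot image to a floppy and boot from it ...
    dd if=memtest.bin of=/dev/fd0

    # ... or add it as a LILO entry; memtest.bin boots like a kernel image
    cp memtest.bin /boot/memtest.bin
    printf 'image = /boot/memtest.bin\nlabel = memtest\n' >> /etc/lilo.conf
    lilo

If any pass reports errors, replace the RAM; random oopses at kernel addresses like the one below are the classic symptom.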
kp

On Monday, 16 January 2006 at 18:10, Darcy (Home) wrote:
> Good day all,
>
> The past few days we have had trouble with the uClibc system crashing.
> Here is the error message:
>
> Unable to handle kernel paging request at virtual address 1cd4c023
> *pgd = 00000000
> *pmd = 00000000
> Oops: 0000
> CPU:    0
> EIP:    0010 Not tainted
> EFLAGS: 00010093
> Kernel panic: Aiee, killing interrupt handler!
> In interrupt handler - not syncing
>
> Here is the general information (after reboot):
>
> name
> Linux imageROCfw 2.4.31 #1 Thu Aug 18 21:03:20 CEST 2005 i686 unknown
> 17:06:08 up 18 min, load average: 0.00, 0.00, 0.00
>
> memory
>              total       used       free     shared    buffers
> Mem:        224888      18736     206152          0         56
> Swap:            0          0          0
> Total:      224888      18736     206152
>
> disk free
> Filesystem                Size      Used Available Use% Mounted on
> /dev/root                20.0M      9.2M     10.8M  46% /
> tmpfs                    20.0M         0     20.0M   0% /tmp
> tmpfs                    40.0M    104.0k     39.9M   0% /var/log
>
> Networking
> Interface status
> 1: lo: mtu 16436 qdisc noqueue
>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>    inet 127.0.0.1/8 scope host lo
> 2: dummy0: mtu 1500 qdisc noop
>    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 3: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
>    link/ether 00:40:63:d4:0c:19 brd ff:ff:ff:ff:ff:ff
>    inet 46.24.251.4/24 brd 46.24.251.254 scope global eth0
> 4: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
>    link/ether 00:40:63:d4:0c:18 brd ff:ff:ff:ff:ff:ff
>    inet 10.30.4.254/24 brd 10.30.4.255 scope global eth1
> 5: ipsec0: mtu 16260 qdisc pfifo_fast qlen 10
>    link/ether 00:40:63:d4:0c:19 brd ff:ff:ff:ff:ff:ff
>    inet 46.24.251.4/24 brd 46.24.251.254 scope global ipsec0
> 6: ipsec1: mtu 0 qdisc noop qlen 10
>    link/ipip
> 7: ipsec2: mtu 0 qdisc noop qlen 10
>    link/ipip
> 8: ipsec3: mtu 0 qdisc noop qlen 10
>    link/ipip
> 9: tap0: mtu 1500 qdisc pfifo_fast qlen 100
>    link/ether 00:ff:28:a7:75:f7 brd ff:ff:ff:ff:ff:ff
>    inet 10.30.5.1/24 brd 10.30.5.255 scope global tap0
>
> Routes
> 10.10.70.0/24 via 46.24.251.1 dev ipsec0
> 10.10.64.0/24 via 46.24.251.1 dev ipsec0
> 10.10.80.0/24 via 46.24.251.1 dev ipsec0
> 10.10.50.0/24 via 46.24.251.1 dev ipsec0
> 10.30.5.0/24 dev tap0 proto kernel scope link src 10.30.5.1
> 10.10.66.0/24 via 46.24.251.1 dev ipsec0
> 10.30.4.0/24 dev eth1 proto kernel scope link src 10.30.4.254
> 10.10.61.0/24 via 10.30.5.3 dev tap0
> 10.10.60.0/24 via 10.30.5.3 dev tap0
> 46.24.251.0/24 dev eth0 proto kernel scope link src 46.24.251.4
> 46.24.251.0/24 dev ipsec0 proto kernel scope link src 46.24.251.4
> 192.168.0.0/16 via 10.30.5.2 dev tap0
> default via 46.24.251.1 dev eth0
>
> Modules
> Module                  Size  Used by    Not tainted
> softdog                 1360   1
> ipt_state                272  42
> ipt_helper               400   0 (unused)
> ipt_conntrack            692   0
> ipt_REDIRECT             480   0 (unused)
> ipt_MASQUERADE          1024   1
> ip_nat_irc              1704   0 (unused)
> ip_nat_ftp              2152   0 (unused)
> iptable_nat            14332   3
> ip_conntrack_irc        2484   1
> ip_conntrack_ftp        3132   1
> ip_conntrack           16516   2
> ipsec_aes              31296   2
> ipsec                 247716   2
> tun                     2944   3
> viarhine               10564   2
> mii                     1820   0
> crc32                   2620   0
> isofs                  15732   0 (unused)
> ide-detect               132   0 (unused)
> ide-cd                 26748   0
> ide-disk               11308   0
> ide-core               80476   0
> cdrom                  25344   0
>
> proc info
> Processor
> processor     : 0
> vendor_id     : CentaurHauls
> cpu family    : 6
> model         : 9
> model name    : VIA Nehemiah
> stepping      : 4
> cpu MHz       : 1002.309
> cache size    : 64 KB
> fdiv_bug      : no
> hlt_bug       : no
> f00f_bug      : no
> coma_bug      : no
> fpu           : yes
> fpu_exception : yes
> cpuid level   : 1
> wp            : yes
> flags         : fpu de pse tsc msr mtrr pge cmov mmx fxsr sse xstore
> bogomips      : 1998.84
>
> Running Processes
>   PID Uid      VmSize Stat Command
>     1 root        256 S    init [2]
>     2 root            SW   [keventd]
>     3 root            SWN  [ksoftirqd_CPU0]
>     4 root            SW   [kswapd]
>     5 root            SW   [bdflush]
>     6 root            SW   [kupdated]
> 24335 root        284 S    /sbin/syslogd -m 240
> 13374 root        360 S    /sbin/klogd
> 29689 root        324 S    /usr/sbin/dropbear -p 6000 -r /etc/dropbear/dropbear_
> 15379 root        148 S    /usr/sbin/watchdog
> 31875 nobody      356 S    /usr/sbin/dnsmasq
> 28644 root        284 S    /usr/sbin/inetd
> 12933 nobody     1936 S    /usr/sbin/openvpn --daemon --writepid /var/run/openvp
> 28731 root        292 S    /usr/sbin/ulogd -d
> 18327 root        280 S    /bin/sh /lib/ipsec/_plutorun --debug none --uniqueids
> 24705 root        268 S    logger -p daemon.error -t ipsec__plutorun
> 13487 root        280 S    /bin/sh /lib/ipsec/_plutorun --debug none --uniqueids
> 19609 root        276 S    /bin/sh /lib/ipsec/_plutoload --load %search --start
> 27806 root        820 S    /lib/ipsec/pluto --nofork --debug-none --uniqueids
> 25379 root        160 S    _pluto_adns 7 10
> 18388 root       1932 S    /usr/sbin/snmpd -Lsd -Lf /dev/null -p /var/run/snmpd.
> 27938 sh-httpd    932 S    /usr/sbin/mini_httpds -C /etc/mini_httpds.conf
> 21392 root        308 S    /usr/sbin/cron
> 25071 root        292 S    /sbin/getty 38400 tty1
> 13871 root        292 S    /sbin/getty 38400 tty2
> 30618 sh-httpd    228 R N  /usr/bin/haserl general-info.cgi
> 22647 sh-httpd   1200 S    /usr/sbin/mini_httpds -C /etc/mini_httpds.conf
> 28508 sh-httpd   1200 S    /usr/sbin/mini_httpds -C /etc/mini_httpds.conf
> 21633 sh-httpd    296 S N  /bin/sh
>  9872 sh-httpd    300 R N  ps aux
>
> Any help would be appreciated.
>
> Darcy
