Re: ZFS pool on FreeBSD 8.2-STABLE broken?
On Mon, 28 Mar 2011 13:38:54 -0500 Scot Hetzel wrote: On Mon, Mar 28, 2011 at 1:17 PM, Mark Morley wrote: Hi there, I have a small backup server (8.2-STABLE). It boots from ufs and has a zfs pool for backups that consists of 8 drives configured as 4 mirrored devices, totaling around 2.5 TB. Been working great, no issues, until the past few days when remote rsyncs to it have started to get very slow (it's only at around 50% capacity). Rebooting it helps for a while, then it gets slow again. But this isn't the problem now... After the last reboot, it froze while booting right at the point where the file system gets mounted. No errors, it just doesn't proceed past the ZFS version message. I rebooted single user and tried to access it with zpool status, and the command hangs in the same way. Any attempt to access it (zfs list, for example) does the same thing. The disks themselves seem fine. They are all connected to a pair of Adaptec RAID controllers (configured as individual drives, with mirroring handled by zfs) and the controller software shows them all to be intact. I disabled zfs in rc.conf and was able to boot, but I can't access the pool. Any ideas on how to diagnose and hopefully repair this?

You're going to need to download a recent -CURRENT ISO that contains zfs v28, then you can try to recover the pool as outlined in this post: http://opensolaris.org/jive/message.jspa?messageID=445269

Well, what I did was rebuild world and kernel to 9.0-CURRENT and reboot. It was able to see and access the zfs file system immediately without having to import it. I did a zpool upgrade to v28 and all seems well so far.
Mark ___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org
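For anyone who hits the same hang, the recovery path suggested above would look roughly like this as a root session. This is only a sketch: the pool name "backup" is a stand-in, and the read-only import option assumes a zfs v28-capable kernel (e.g. the -CURRENT ISO mentioned).

```
(boot a zfs v28-capable kernel first, e.g. a recent -CURRENT ISO)
# zpool import                            (list pools visible for import)
# zpool import -f -o readonly=on backup   (try a read-only import first)
# zpool status -v backup                  (check device and error state)
# zpool upgrade backup                    (only once the pool looks healthy)
```

The read-only import is the safety net here: it lets you copy data off even if something in the pool's on-disk state would hang a writable mount.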
ZFS pool on FreeBSD 8.2-STABLE broken?
Hi there, I have a small backup server (8.2-STABLE). It boots from ufs and has a zfs pool for backups that consists of 8 drives configured as 4 mirrored devices, totaling around 2.5 TB. Been working great, no issues, until the past few days when remote rsyncs to it have started to get very slow (it's only at around 50% capacity). Rebooting it helps for a while, then it gets slow again. But this isn't the problem now... After the last reboot, it froze while booting right at the point where the file system gets mounted. No errors, it just doesn't proceed past the ZFS version message. I rebooted single user and tried to access it with zpool status, and the command hangs in the same way. Any attempt to access it (zfs list, for example) does the same thing. The disks themselves seem fine. They are all connected to a pair of Adaptec RAID controllers (configured as individual drives, with mirroring handled by zfs) and the controller software shows them all to be intact. I disabled zfs in rc.conf and was able to boot, but I can't access the pool. Any ideas on how to diagnose and hopefully repair this?

Mark
Re: NFS stalling on 8.1-STABLE
On Sun, 15 Aug 2010 17:11:01 -0400 (EDT) Rick Macklem rmack...@uoguelph.ca wrote: Hi all, I have five front end web servers that all mount their content from the same server via NFS. If I stress the link on any one of the machines (eg: copy a large directory with a lot of files to/from the mounted file system) the client will pause. That is, all processes trying to access that mount will freeze. The log files fill with hundreds or thousands of "nfs server not responding / is alive again" messages. After 60 seconds it returns to normal, unless the load is still there in which case it continues to pause.

The 60sec delay suggests that the client is doing a TCP reconnect. I'd suggest that you look at a packet trace in wireshark (it knows how to decode NFS packets) and see if there are new TCP connections (SYN, SYN-ACK, ...) being made. If that is what is happening, I suspect it is NIC driver related, but it is really hard to say.

I'll try this if/when it happens again.

If you can try a network interface of a different type (not em) that will check to see if it is an em(4) issue.

Unfortunately I don't have any non-em cards around.

Alternately, you could try turning off the TSO and checksum offload stuff for the em(4) and see if that helps.

Hmm, interesting. The four machines that seem to be working (so far) have these enabled by default. The fifth one has checksums enabled, but not TSO. Doesn't appear to support it.

I also tried switching from TCP to UDP. This seems to be working (so far) on four of the clients (which happen to be identical load balanced machines), but on the fifth one (which serves a different purpose) I'm getting something really weird. Instead of locking up periodically as before, it's actually losing the mount. For example, a 'df' doesn't include the mounted system. If I try to access the mounted system (with 'ls' for example) I get an "Input/output error" message. I can remount it, but only after I force a dismount.
Mark
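In case it helps anyone else test Rick's offload suggestion, disabling TSO and checksum offload on an em(4) client looks roughly like this (interface name assumed; the change does not survive a reboot unless the ifconfig_em0 line in /etc/rc.conf is adjusted too):

```
# ifconfig em0 -tso -txcsum -rxcsum   (turn off TSO and RX/TX checksum offload)
# ifconfig em0                        (the options= line should no longer list them)
```

Re-enabling is the same flags without the leading minus, so it's a cheap experiment to run while watching for the stalls.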
Re: NFS stalling on 8.1-STABLE
On Sun, 15 Aug 2010 23:35:50 -0700 Jeremy Chadwick free...@jdc.parodius.com wrote: On Thu, Aug 12, 2010 at 10:35:49AM -0700, Mark Morley wrote: I have five front end web servers that all mount their content from the same server via NFS. If I stress the link on any one of the machines (eg: copy a large directory with a lot of files to/from the mounted file system) the client will pause. That is, all processes trying to access that mount will freeze. The log files fill with hundreds or thousands of "nfs server not responding / is alive again" messages. After 60 seconds it returns to normal, unless the load is still there in which case it continues to pause. This has only started happening since I upgraded the client machines to 8.1-STABLE (previously four of them were 8.0 and one was 7.3). The server is 7.1-RELEASE-p11. No other changes have taken place in terms of hardware or software or mount options, etc. All nics involved are gigabit em cards, and they are on a private network (web access to the boxes is via an external interface).

Are there any indications in dmesg that the NIC is responsible, e.g. interface down/up, etc.?

No, nothing like that.

Does switching to UDP-based NFS solve the problem for you?

Trying that now for the past 24 hours or so. Four of the machines seem ok so far, but the fifth one has started dropping the mount entirely. Access to it gives an "Input/output error" message. Forcing a dismount and remounting brings it back.

What OS version (uname -a) and NIC are used on the NFS server?

FreeBSD xxx 7.1-RELEASE-p11 FreeBSD 7.1-RELEASE-p11 #0: Wed May 26 03:20:59 PDT 2010 r...@xxx:/usr/obj/usr/src/sys/CUSTOM i386

NICs are em.

Can you please provide the following output from one of the client machines running 8.1-STABLE with gigE em(4)? You can X-out machine names, MAC addresses, and IP addresses/netblocks if need be.
* uname -a

FreeBSD xxx 8.1-STABLE FreeBSD 8.1-STABLE #0: Tue Jul 27 16:27:44 PDT 2010 r...@xxx:/usr/obj/usr/src/sys/CUSTOM amd64

* ifconfig emX (where X is the interface number which would be used for NFS)

em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
        ether 00:0e:0c:85:d5:0d
        inet 192.168.1.30 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet 1000baseT <full-duplex>
        status: active

* netstat -idn -I emX

Name    Mtu Network        Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll Drop
em0    1500 <Link#1>       00:0e:0c:85:d5:0d 39913814     2     0 39949943     0     0    0
em0    1500 192.168.1.0/24 192.168.1.30      39944016     -     - 39949664     -     -    -

* pciconf -lvc (provide only the data for emX please)

e...@pci0:1:6:0: class=0x02 card=0x13768086 chip=0x107c8086 rev=0x05 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Gigabit Ethernet Controller (Copper) rev 5 (82541PI)'
    class      = network
    subclass   = ethernet
    cap 01[dc] = powerspec 2  supports D0 D3  current D0
    cap 07[e4] = PCI-X supports 2048 burst read, 1 split transaction

* vmstat -i

interrupt                 total       rate
irq1: atkbd0                239          0
irq16: em0             36746591        883
irq18: em1             12658607        304
irq21: ohci0                  2          0
irq22: ehci0             528002         12
irq23: atapci1          2334936         56
cpu0: timer            83207296       2000
cpu1: timer            83207289       2000
Total                 218682962       5256

* sysctl hw.pci

hw.pci.usb_early_takeover: 1
hw.pci.honor_msi_blacklist: 1
hw.pci.enable_msix: 1
hw.pci.enable_msi: 1
hw.pci.do_power_resume: 1
hw.pci.do_power_nodriver: 0
hw.pci.enable_io_modes: 1
hw.pci.default_vgapci_unit: -1
hw.pci.host_mem_start: 2147483648
hw.pci.mcfg: 1

* As root, run sysctl dev.em.X.stats=1 then do dmesg and provide the output for NIC statistics (will start with emX:)

em0: Excessive collisions = 0
em0: Sequence errors = 0
em0: Defer count = 52
em0: Missed Packets = 0
em0: Receive No Buffers = 0
em0: Receive Length Errors = 0
em0: Receive errors = 1
em0: Crc errors = 1
em0: Alignment errors = 0
em0: Collision/Carrier extension errors = 0
em0: RX overruns = 0
em0: watchdog timeouts = 0
em0: RX MSIX IRQ = 0  TX MSIX IRQ = 0  LINK MSIX IRQ = 0
em0: XON Rcvd = 54
em0: XON Xmtd = 0
em0: XOFF Rcvd = 54
em0: XOFF Xmtd = 0
em0: Good Packets Rcvd = 39915088
em0: Good Packets Xmtd = 39951839

Mark
NFS stalling on 8.1-STABLE
Hi all, I have five front end web servers that all mount their content from the same server via NFS. If I stress the link on any one of the machines (eg: copy a large directory with a lot of files to/from the mounted file system) the client will pause. That is, all processes trying to access that mount will freeze. The log files fill with hundreds or thousands of "nfs server not responding / is alive again" messages. After 60 seconds it returns to normal, unless the load is still there in which case it continues to pause. This has only started happening since I upgraded the client machines to 8.1-STABLE (previously four of them were 8.0 and one was 7.3). The server is 7.1-RELEASE-p11. No other changes have taken place in terms of hardware or software or mount options, etc. All nics involved are gigabit em cards, and they are on a private network (web access to the boxes is via an external interface).

If I truss a command such as df, it gets to getfsstat() and pauses there.

Mount options are currently rw,tcp,nolockd,noatime,nosuid,bg,intr,soft,rsize=32768,wsize=32768 but I've tried all sorts of things and it doesn't seem to make a difference.

Here's a sample output from nfsstat -c from one of the boxes (uptime 14 days):

Client Info:
Rpc Counts:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
 75552107   3008653 300569929    253365   2426554   4748471   2035545   3015497
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
   864598     50887      7462     11895   1137933  16160386         0  31593291
    Mknod    Fsstat    Fsinfo  PathConf    Commit
        0  22510271         5         0   3569465
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
        0         0         0         0 467516377
Cache Info:
Attr Hits     Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
1461457650  75552057 963440449 300536041  37404178   2359677   9467719   4748471
BioRLHits     Misses BioD Hits    Misses DirE Hits    Misses
 14409992     253365  29508747  16119060  22292421     23233

Any thoughts?
Mark
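For what it's worth, the cache counters in that nfsstat -c output can be turned into hit rates with a quick awk one-liner (figures copied from the posted output; high hit rates here would suggest the stall is not simple cache thrash on the client):

```shell
# Hit rates from the nfsstat -c "Cache Info" counters quoted above
awk 'BEGIN {
    attr_hits = 1461457650; attr_miss = 75552057
    lkup_hits = 963440449;  lkup_miss = 300536041
    bior_hits = 37404178;   bior_miss = 2359677
    printf "attr cache:   %.1f%% hits\n", 100 * attr_hits / (attr_hits + attr_miss)
    printf "lookup cache: %.1f%% hits\n", 100 * lkup_hits / (lkup_hits + lkup_miss)
    printf "read cache:   %.1f%% hits\n", 100 * bior_hits / (bior_hits + bior_miss)
}'
```

For this data it works out to roughly 95% (attribute), 76% (lookup), and 94% (buffered read) hits, i.e. the client caches look healthy.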
Re: NFS trouble on 7.3-STABLE i386
On Tue, 25 May 2010 20:59:08 -0400 (EDT) Rick Macklem wrote: You could try this patch. (It reverts the only vnode locking change that I can see was done to the nfs server between 7.1 and 7.3.): . . . If you get a chance to try it, please let us know if it helps, rick

The patch didn't help I'm afraid. I wound up reverting back to 7.1, and after more than 24 hours I haven't seen a single stuck nfsd.

Mark
Re: NFS trouble on 7.3-STABLE i386
On Tue, 25 May 2010 20:59:08 -0400 (EDT) Rick Macklem wrote: You could try this patch. (It reverts the only vnode locking change that I can see was done to the nfs server between 7.1 and 7.3.):

--- nfs_serv.c.sav	2010-05-25 19:40:29.0 -0400
+++ nfs_serv.c	2010-05-25 19:41:38.0 -0400
@@ -3236,7 +3236,7 @@
 	io.uio_rw = UIO_READ;
 	io.uio_td = NULL;
 	eofflag = 0;
-	vn_lock(vp, LK_SHARED | LK_RETRY, td);
+	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
 	if (cookies) {
 		free((caddr_t)cookies, M_TEMP);
 		cookies = NULL;
@@ -3518,7 +3518,7 @@
 	io.uio_rw = UIO_READ;
 	io.uio_td = NULL;
 	eofflag = 0;
-	vn_lock(vp, LK_SHARED | LK_RETRY, td);
+	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
 	if (cookies) {
 		free((caddr_t)cookies, M_TEMP);
 		cookies = NULL;

If you get a chance to try it, please let us know if it helps, rick

Thanks, but unfortunately it didn't work. Rebooted it four hours ago with the patch in place and at the moment I have seven nfsd processes stuck in that state. Could it indicate a problem with the underlying disk system? It's an aac0 raid, but it has no errors and the controller indicates all is well, so I doubt it.

Mark
Re: NFS trouble on 7.3-STABLE i386
On Fri, 21 May 2010 11:32:33 -0400 (EDT) Rick Macklem wrote: On Fri, 21 May 2010, Mark Morley wrote: Having an issue with a file server here (7.3-STABLE i386). The nfsd processes are hanging. Client access to the nfs shares stops working and the nfsd processes on the server cannot be killed by any means. There are no errors showing up anywhere on the server. The network connection to the server seems fine (ie: anything other than nfs traffic seems ok). Rebooting the server fixes the problem for a while, but it doesn't reboot easily. It times out on terminating the nfsd processes. When it finally does reboot the file system isn't marked clean, resulting in a long wait for fsck (although it doesn't find any problems, it's a multi-terabyte share and it takes a while). This morning it did it again. This time I tried manually killing nfsd but nothing I did would make them die. No errors.

Next time it happens, do a ps axlH to see what the nfsd threads are waiting for. It might give you a hint as to what is happening.

Ok, it did it again. ps axlH shows all the nfsd processes stuck in the "ufs" state. The server isn't doing anything else; no other processes seem to be monopolizing resources or disks in any way. rpcinfo doesn't show anything amiss as far as I can tell (ie: rpc is running).

After a reboot, one of the 32 nfsd's almost immediately goes into the ufs state and never leaves it (and never racks up any CPU time either). The others are fine. Slowly over time more and more enter this state. When I rebooted it today, all but one were in that state. The clients were bogging down, presumably because the one and only functioning nfsd was overworked.

One client is running 8.1-prerelease as a test, and that particular client will start getting lots of timeouts accessing the nfs share (even with less load than the other clients). Just in case it's tickling something on the server I've shut it down this time and I'm leaving it off for the time being.
Any further thoughts?

Mark
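For reference, the check Rick suggested looks like this (the MWCHAN column of ps axlH shows the wait channel; "ufs" there generally means the thread is blocked waiting on a ufs vnode lock):

```
# ps axlH | head -1        (header row; note where the MWCHAN column sits)
# ps axlH | grep nfsd      (stuck threads show "ufs" under MWCHAN)
```

With many threads piled up on "ufs", the interesting question becomes which process is holding the vnode lock they are all queued behind.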
NFS trouble on 7.3-STABLE i386
Having an issue with a file server here (7.3-STABLE i386). The nfsd processes are hanging. Client access to the nfs shares stops working and the nfsd processes on the server cannot be killed by any means. There are no errors showing up anywhere on the server. The network connection to the server seems fine (ie: anything other than nfs traffic seems ok).

Rebooting the server fixes the problem for a while, but it doesn't reboot easily. It times out on terminating the nfsd processes. When it finally does reboot the file system isn't marked clean, resulting in a long wait for fsck (although it doesn't find any problems, it's a multi-terabyte share and it takes a while). This morning it did it again. This time I tried manually killing nfsd but nothing I did would make them die. No errors.

The server is a dual-core Intel CPU with 2 GB of RAM, an Adaptec 5805 RAID controller, 8 x 750 GB drives in RAID 6, and 2 x em interfaces. It's been fine until about a week ago. I did recently upgrade from 7.1 to 7.3, which may be related, although this issue didn't start happening right away. No particular time of day, and it doesn't seem to coincide with any particular cron tasks or have anything to do with the level of activity.

Any thoughts?

Mark
Re: NFS processes locking up!!
makeoptions COPTFLAGS=-O2 -pipe -funroll-loops -ffast-math

Don't use leet meaningless compiler flags, and try again :) Kris

D'oh! blush I don't normally have that in my kernel configs. That carried forward from the old server. I'll remove those and see how it goes.

Well, that made no difference I'm afraid. I removed all make options, cleaned the source tree, deleted /usr/obj/* and completely rebuilt world as well as the kernel. A few days later it did the exact same thing. The change in compiler options did seem to lower the load averages a bit though.

Possibly related: I am seeing "ufs_rename: fvp == tvp (can't happen)" messages periodically. Maybe one or two every 2-3 hours. Any chance that's a symptom of something related?

Mark
--
Mark Morley
Owner / Administrator
Islandnet.com
pf buggy on 6.1-STABLE?
Hi folks, wondering if this rings any bells for anyone: after upgrading a handful of web servers from FreeBSD 4.11 with ipfw to 6.1-STABLE with pf, customers started reporting that occasionally their server side scripts would fail to connect to the SQL servers (which are still 4.11 and are attached via a separate dedicated gigabit network). A test page that makes 10,000 rapid SQL connections, which connected 100% of the time before, now usually sees anywhere from one or two failed connections to a dozen or so per run.

After trying many other things first, we finally found that pf seems to be the culprit. Disabling pf with pfctl -d allows 100% of all connections to work, and as soon as we enable it we see connection failures again. I've tried changing the pf rule set in different ways, with and without scrubbing, with and without queues, even to the point where I have a single rule that just allows everything. It doesn't seem to matter what the rules actually are, just whether or not pf is enabled. I recompiled the kernel with pf disabled and ipfw enabled, and it works fine with 100% successful connections. We have no funky compiler options or anything like that.

Any thoughts?

Mark
--
Mark Morley
Owner / Administrator
Islandnet.com
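One thing worth ruling out with this kind of intermittent failure under rapid connection bursts is pf's state table: the default limit is 10,000 entries, and once it fills, new connections are silently dropped, which would look exactly like the symptom described. A minimal permissive test ruleset with a raised limit might look like this (a sketch, not anyone's actual config; the 50000 figure is an arbitrary test value):

```
# /etc/pf.conf -- permissive test ruleset
set limit states 50000    # default is 10000; bursts of short-lived
                          # SQL connections can fill the state table
pass in all
pass out all keep state   # keep state is not implicit on pf in 6.x
```

If failures stop with the higher limit, pfctl -si (state table counters) should show the old ruleset was running near its state ceiling.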
Re: NFS processes locking up!!
series of Ethernet chips
#device xe              # Xircom pccard Ethernet

# Wireless NIC cards
#device wlan            # 802.11 support
#device an              # Aironet 4500/4800 802.11 wireless NICs.
#device awi             # BayStack 660 and others
#device ral             # Ralink Technology RT2500 wireless NICs.
#device wi              # WaveLAN/Intersil/Symbol 802.11 wireless NICs.

# Pseudo devices.
device loop             # Network loopback
device random           # Entropy device
device ether            # Ethernet support
#device sl              # Kernel SLIP
#device ppp             # Kernel PPP
#device tun             # Packet tunnel.
device pty              # Pseudo-ttys (telnet etc)
#device md              # Memory disks
#device gif             # IPv6 and IPv4 tunneling
#device faith           # IPv6-to-IPv4 relaying (translation)

# The `bpf' device enables the Berkeley Packet Filter.
# Be aware of the administrative consequences of enabling this!
# Note that 'bpf' is required for DHCP.
device bpf              # Berkeley packet filter

# USB support
#device uhci            # UHCI PCI-USB interface
#device ohci            # OHCI PCI-USB interface
#device ehci            # EHCI PCI-USB interface (USB 2.0)
#device usb             # USB Bus (required)
#device udbp            # USB Double Bulk Pipe devices
#device ugen            # Generic
#device uhid            # Human Interface Devices
#device ukbd            # Keyboard
#device ulpt            # Printer
#device umass           # Disks/Mass storage - Requires scbus and da
#device ums             # Mouse
#device ural            # Ralink Technology RT2500USB wireless NICs
#device urio            # Diamond Rio 500 MP3 player
#device uscanner        # Scanners

# USB Ethernet, requires miibus
#device aue             # ADMtek USB Ethernet
#device axe             # ASIX Electronics USB Ethernet
#device cdce            # Generic USB over Ethernet
#device cue             # CATC USB Ethernet
#device kue             # Kawasaki LSI USB Ethernet
#device rue             # RealTek RTL8150 USB Ethernet

# FireWire support
#device firewire        # FireWire bus code
#device sbp             # SCSI over FireWire (Requires scbus and da)
#device fwe             # Ethernet over FireWire (non-standard!)
Mark
--
Mark Morley
Owner / Administrator
Islandnet.com
Re: NFS processes locking up!!
makeoptions COPTFLAGS=-O2 -pipe -funroll-loops -ffast-math

Don't use leet meaningless compiler flags, and try again :) Kris

D'oh! blush I don't normally have that in my kernel configs. That carried forward from the old server. I'll remove those and see how it goes.

Mark
--
Mark Morley
Owner / Administrator
Islandnet.com
RE: Trouble with NFSd under 6.1-Stable, any ideas?
Another data point: one of our NFS servers is an amd64-based system serving a cluster of web and email servers. Under 6.1-RCx it gave us the same (or better) performance as the server it replaced (which was 4.11). The server load hovered between 0.x and 1.x. But after upgrading it to 6.1-STABLE the load now hovers between 5.x and 6.x with spikes as high as 8.x, and there has been no change at all in the NFS client traffic or other loading factors that we can tell. This in turn makes for slower NFS client accesses. I am going to try reverting to an earlier src tree and see if that helps.

Mark
--
Mark Morley
Owner / Administrator
Islandnet.com