8.1R possible zfs snapshot livelock?
Hello,

Not sure if it's worth troubleshooting this too much before upgrading, but we recently had an 8.1R/amd64 box hang in a way that suggested everything was waiting on disk access. It's remote and we had to resort to a power-cycle to bring it back (we have serial console, but it hung after accepting the root password).

We run hourly/daily/weekly/monthly snapshots on about a half dozen filesystems using RSE's snapshot script (see http://people.freebsd.org/~rse/snapshot/ - we only use the zfs snapshotting and do not use the amd portion). We have some basic stats logged on all our boxes every 5 minutes, and I saw a pile of cron jobs stuck in disk I/O wait. I suspect these were the snapshots. Shortly after that it seems as if all disk I/O got hung.

Some additional info about the main tasks on this box:

- qmail deliveries (lots)
- postgres (light use)
- nfs export of qmail log dirs to another box that does log analysis

All services are spread amongst a handful of jails. Each jail has its own zfs filesystem.

Does this sound familiar to anyone running ZFS with snapshots? Anything I should log to get more data if this happens again? I have output from arc_summary.pl running every 5 minutes as part of our general status logging. Any pointers to known issues in ZFS (both 8.1 and 8.2) would be helpful. Also, is there anywhere to look for the general state of ZFS besides this page? http://wiki.freebsd.org/ZFS

Thanks,

Charles

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org
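For context, the hourly/daily/weekly/monthly rotation described above boils down to timestamped "zfs snapshot" invocations fired from cron. A minimal sketch, using hypothetical dataset names and a simple label scheme (this is an illustration, not RSE's actual script):

```shell
#!/bin/sh
# Sketch of periodic ZFS snapshot naming, in the spirit of (but not
# copied from) RSE's snapshot script. Dataset names and the label
# scheme are hypothetical examples.

# Build a snapshot name like tank/jails/mail@hourly-201105170400
make_snap_name() {
    dataset=$1
    interval=$2   # hourly / daily / weekly / monthly
    stamp=$(date -u +%Y%m%d%H%M)
    printf '%s@%s-%s\n' "$dataset" "$interval" "$stamp"
}

# Dry-run: print the command cron would execute every hour.
for fs in tank/jails/mail tank/jails/pgsql; do
    echo "zfs snapshot $(make_snap_name "$fs" hourly)"
done
```

In real use the echo would be dropped so cron runs zfs snapshot directly, with a matching cleanup pass destroying snapshots older than the retention window for each interval.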
Ports and Packages for Supported Releases
Portmgr published a new page on their website describing the current support and EoL policies for the ports tree and released packages. The main take-home messages are:

- Support of FreeBSD releases by ports and the ports infrastructure matches the policies set out by the FreeBSD Security Officer.
- Package builds will use the oldest supported minor release within each major branch to ensure ABI and KBI backwards compatibility within each major branch, and will support all minor versions of each major branch, including -RELEASE and -STABLE.

See the full policy on the portmgr webpage: http://www.freebsd.org/portmgr/policies_eol.html

On behalf of portmgr,
-erwin

--
Erwin Lansing
http://droso.org
Prediction is very difficult especially about the future
er...@freebsd.org
Re: 8.1R possible zfs snapshot livelock?
On Tue, May 17, 2011 at 02:43:44AM -0400, Charles Sprickman wrote:
> [...]
> Does this sound familiar to anyone running ZFS with snapshots?

Yes, and it is exactly why I don't use them. :-)

The problem sounds less like the kernel locking up while waiting on disk I/O (you saw no disk or controller issues on the console) and more like a bug in ZFS snapshots or ZFS itself (a kernel thread deadlock). We're talking about FreeBSD 8.1-RELEASE here; ZFS innards have changed greatly between then and now.

Understandably (and justifiably), folks will almost certainly recommend that you upgrade the machine to RELENG_8 (8.2-STABLE) and see if the problem recurs. If so, you'll probably need to drop the machine to DDB remotely (via serial console) and issue some commands per whatever a kernel developer tells you.
If this is a production machine, doing that probably isn't possible (it may take days or weeks before someone gets back to you), so the best thing to do would be to ensure you have dumpdev="AUTO" (or a specific device of your choice) set in rc.conf to dump all memory to swap, along with a /var filesystem large enough to hold it all. Then drop to DDB and induce a panic by issuing "call doadump", reboot, and let savecore(8) find the kernel dump in swap and save it to files in /var/crash, which can later be examined using kgdb.

> Anything I should log to get more data if this happens again? I have
> output from arc_summary.pl running every 5 minutes as part of our
> general status logging. Any pointers to known issues in ZFS (both 8.1
> and 8.2) would be helpful.

There's a whole ton of issues, but noting them all is virtually impossible at this point. CVS commits / cvsweb are probably a better way to see what's been fixed. I've been screaming for years about the need for concise documentation every time the ZFS code (in RELENG_8 at least) is touched, with an explanation of what the problem was and what was fixed; I've since given up that effort.

> Also, anywhere to look for the general state of ZFS besides this page?
> http://wiki.freebsd.org/ZFS

The freebsd-fs and freebsd-stable mailing lists are pretty much the source of truth these days. Basically, if you use ZFS you're sort of expected to be subscribed to them and following them daily.

--
| Jeremy Chadwick j...@parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP 4BD6C0CB |
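Spelled out, the dump-to-swap recipe above amounts to two rc.conf knobs plus a short DDB/kgdb sequence. A sketch, with device and path names as examples to adjust per machine:

```shell
# /etc/rc.conf -- reserve a dump device at boot
dumpdev="AUTO"          # or a specific swap device, e.g. "/dev/ada0s1b"
dumpdir="/var/crash"    # where savecore(8) will write the dump

# When the box wedges: break to DDB on the serial console, then
#   db> call doadump
#   db> reset
#
# After reboot, savecore(8) (run automatically from rc) recovers the
# dump from swap into /var/crash; examine it later with:
#   kgdb /boot/kernel/kernel /var/crash/vmcore.0
```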
Re: 8.1R possible zfs snapshot livelock?
on 17/05/2011 10:30 Jeremy Chadwick said the following:
> On Tue, May 17, 2011 at 02:43:44AM -0400, Charles Sprickman wrote:
>> Does this sound familiar to anyone running ZFS with snapshots?
> Yes, and is exactly why I don't use them. :-)

You put a smiley, but is this an attempt at FUD?

--
Andriy Gapon
Re: 8.1R possible zfs snapshot livelock?
On Tue, May 17, 2011 at 01:48:04PM +0300, Andriy Gapon wrote:
> on 17/05/2011 10:30 Jeremy Chadwick said the following:
>> Yes, and is exactly why I don't use them. :-)
> You put a smiley, but is this an attempt at FUD?

I wish it were. I experienced similar behaviour to Charles during the early 8.x days (possibly 8.1-RELEASE, I forget; I may be thinking of 8.0?) where ZFS snapshots would occasionally result in the kernel deadlocking on ZFS-bound I/O. The kernel was alive/responsive to some degree, but ZFS I/O would just indefinitely stall at that point, requiring a full system reset. No disk or controller problems (same hardware I'm using today, actually!).

I believe there were commits and improvements for snapshotting committed between 8.1-RELEASE and 8.2-RELEASE, but I haven't bothered to test them. The experience left a very bad taste in my mouth, and as such I have avoided ZFS snapshots since. I'd be willing to try them again assuming someone can at least confirm that there were commits done to address snapshot concerns during the past year or so. But...
There are still some outstanding incidents that directly pertain to ZFS snapshots, or are related to them (meaning things like send/recv, which are commonly used alongside snapshots), which I remember reading about but saw no real answer to:

* ZFS send | ssh zfs recv results in the ZFS subsystem hanging; 8.1-RELEASE; February 2011:
  http://lists.freebsd.org/pipermail/freebsd-fs/2011-February/010602.html
* Kernel panic during heavy disk I/O while zfs recv is being used simultaneously; CURRENT (so ZFS v28?); April 2011:
  http://lists.freebsd.org/pipermail/freebsd-fs/2011-April/011155.html
* ZFS snapshots taking an extremely long time to be deleted; RELENG_8_1; February 2011:
  http://lists.freebsd.org/pipermail/freebsd-fs/2011-February/010797.html
* zfs destroy -r not working on filesystem-level snapshots but working on pool-level snapshots; RELENG_8 with ZFS v28 patch (and specific to ZFS v28 given the info); May 2011:
  http://lists.freebsd.org/pipermail/freebsd-fs/2011-May/011412.html

Sorry to just rattle off a bunch of URLs and issues at once; it's not my intention to disparage work on ZFS or anything even remotely like that. I'm just wondering, given the number of problem reports that seem to come in about snapshots or snapshot-related ZFS features, where we stand on these. This is mainly for Charles' benefit and not so much mine (our rsnapshot/rsync-based backups work great for us at this time, sans the stomping of atime).
Re: 8.1R possible zfs snapshot livelock?
on 17/05/2011 14:29 Jeremy Chadwick said the following:
> On Tue, May 17, 2011 at 01:48:04PM +0300, Andriy Gapon wrote:
>> You put a smiley, but is this an attempt at FUD?
> I wish it were.

The reason I asked is that I could have easily answered "No, that's why I use them all the time", and I am sure many people would join me on this. So the way you originally described the issue was sufficiently non-specific and strong.

> I experienced similar behaviour to Charles during the early 8.x days
> [...]
> I'm just wondering given the number of problem reports that seem to
> come in about snapshot or snapshot-related ZFS stuff, where we stand
> on these?

Problem reports are always over-represented on the mailing lists. People rarely write that, e.g., a ZFS snapshot has flawlessly worked for them for the millionth time again today. I am not aware of any known-but-not-fixed issues in this area. Each problem report should be properly investigated individually.

--
Andriy Gapon
MSI-X problem on Intel NC365T 82580 at FreeBSD 8.2-RELEASE-p1
Greetings, dear Sirs,

We are using FreeBSD 8.2-RELEASE-p1 as our main gateway router on an HP DL160 G6 with an integrated Intel 82576 network adapter (works fine with MSI-X); driver version: Intel(R) PRO/1000 Network Connection version - 2.2.3. To get more ports, we installed an additional Intel NC365T network adapter (82580 chip), but the NC365T does not work with MSI-X (dmesg output below). Intel and HP support can't help at all... maybe we are not alone with this problem?

Thanx!

IT Department Director
Kislitsyn Stanislav

dmesg output:

igb0: Intel(R) PRO/1000 Network Connection version - 2.2.3 mem 0xfbc8-0xfbcf,0xfbe7-0xfbe73fff irq 36 at device 0.0 on pci7
igb0: Unable to map MSIX table
igb0: Using MSI interrupt
igb0: [FILTER]
igb0: Ethernet address: f4:ce:46:a5:d0:3c
igb1: Intel(R) PRO/1000 Network Connection version - 2.2.3 mem 0xfbd0-0xfbd7,0xfbe74000-0xfbe77fff irq 35 at device 0.1 on pci7
igb1: Unable to map MSIX table
igb1: Using MSI interrupt
igb1: [FILTER]
igb1: Ethernet address: f4:ce:46:a5:d0:3d
igb2: Intel(R) PRO/1000 Network Connection version - 2.2.3 mem 0xfbd8-0xfbdf,0xfbe78000-0xfbe7bfff irq 34 at device 0.2 on pci7
igb2: Unable to map MSIX table
igb2: Using MSI interrupt
igb2: [FILTER]
igb2: Ethernet address: f4:ce:46:a5:d0:3e
igb3: Intel(R) PRO/1000 Network Connection version - 2.2.3 mem 0xfbe8-0xfbef,0xfbe7c000-0xfbe7 irq 24 at device 0.3 on pci7
igb3: Unable to map MSIX table
igb3: Using MSI interrupt
igb3: [FILTER]
igb3: Ethernet address: f4:ce:46:a5:d0:3f
pcib3: ACPI PCI-PCI bridge at device 7.0 on pci0
pci6: ACPI PCI bus on pcib3
pcib4: ACPI PCI-PCI bridge at device 9.0 on pci0
pci5: ACPI PCI bus on pcib4
igb4: Intel(R) PRO/1000 Network Connection version - 2.2.3 port 0xe880-0xe89f mem 0xfbb6-0xfbb7,0xfbb4-0xfbb5,0xfbbb8000-0xffff irq 32 at device 0.0 on pci5
igb4: Using MSIX interrupts with 9 vectors
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: [ITHREAD]
igb4: Ethernet address: d8:d3:85:65:72:96
igb5: Intel(R) PRO/1000 Network Connection version - 2.2.3 port 0xec00-0xec1f mem 0xfbbe-0xfbbf,0xfbbc-0xfbbd,0xfbbbc000-0xfbbb irq 42 at device 0.1 on pci5
igb5: Using MSIX interrupts with 9 vectors
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: [ITHREAD]
igb5: Ethernet address: d8:d3:85:65:72:97
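Not part of the original report, but one cheap experiment when "Unable to map MSIX table" appears is to compare the PCI BARs the BIOS assigned to the working 82576 ports versus the failing 82580 ones, and to pin the driver's interrupt mode via a loader tunable. The tunable name below is from the igb(4) driver of that era; verify it against the driver version in use before relying on it:

```shell
# /boot/loader.conf -- force all igb(4) ports to plain MSI as a test
hw.igb.enable_msix="0"

# Inspect BAR assignments for the igb devices; an MSI-X table region
# the BIOS failed to map shows up as a missing/zero memory BAR here:
#   pciconf -lbv | grep -B1 -A6 igb
```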
Re: 8.1R possible zfs snapshot livelock?
On Tue, May 17, 2011 at 02:55:54PM +0300, Andriy Gapon wrote:
> The reason I asked is that I could have easily answered "No, that's why
> I use them all the time". And I am sure many people would join me on
> this. So the way you originally described the issue was sufficiently
> non-specific and strong.

You're absolutely right -- and to me, your answer/experience holds much more weight than my own. But if you and I were presenting advocacy of ZFS snapshots to a person who had experienced problems with them, their reluctance to believe would be understandable, no? They'd want some form of reassurance that the problem they experienced was known or had been fixed in some way.

I guess what I'm saying is that yes, my wording was strong -- it was an opinion based on past experience. Fact: I don't have any present-day evidence to validate my opinion, since the ZFS code has changed greatly between then and now. But also fact: I did experience something very similar to what Charles did. Sympathy is sometimes all we admins/users have in situations like this. :-) But I do understand your point.
> I believe there were commits and improvements for snapshotting
> committed between 8.1-RELEASE and 8.2-RELEASE [...]
> [list of four outstanding snapshot/send/recv reports trimmed]

> Problem reports are always over-represented on the mailing lists.
> People rarely write that e.g. ZFS snapshot has flawlessly worked for
> them for the millionth time again today. I am not aware of any
> known-but-not-fixed issues in this area. Each problem report should be
> properly investigated individually.

Both absolutely correct and understood. It just really sucks to be one of the people who experiences problems. When you have a system that you've taken a lot of time to get up and working, and it runs reliably for weeks/months, then suddenly something like the above happens, you have to start weighing the pros and cons of alternatives (using something other than snapshot capability, changing filesystems, etc.).

It would help if folks had some guidelines for what information would be useful to kernel developers in the case of a ZFS deadlock of this nature. I would say the majority of the admin/user community (and this includes me!), once at a ddb prompt, have no clue how to proceed.

So for Charles' situation, the next time it happens what would be useful for him to provide? The best I could come up with was to induce doadump then reboot to get the system up/working again, and then use kgdb after-the-fact.
Re: 8.1R possible zfs snapshot livelock?
on 17/05/2011 15:23 Jeremy Chadwick said the following:
> So for Charles' situation, the next time it happens what would be
> useful for him to provide? The best I could come up with was to induce
> doadump then reboot to get the system up/working again, and then use
> kgdb after-the-fact.

This is one of the best things to do, if possible. In this case all the potentially useful info would be preserved.

A less drastic approach to debugging hung I/O is to find out where processes/threads are actually stuck, e.g. using procstat -kk.

--
Andriy Gapon
Re: 8.1R possible zfs snapshot livelock?
On Tue, 17 May 2011, Andriy Gapon wrote:
> on 17/05/2011 15:23 Jeremy Chadwick said the following:
>> So for Charles' situation, the next time it happens what would be
>> useful for him to provide? The best I could come up with was to
>> induce doadump then reboot to get the system up/working again, and
>> then use kgdb after-the-fact.
> This is one of the best things to do, if possible. In this case all
> the potentially useful info would be preserved.

Will do. Just have to verify I understand the break-to-debugger stuff with serial consoles and ensure it's not easy to accidentally trigger (vague memories of Sun boxes doing funny things if someone messes with the console). We're also going to start in on upgrading to 8.2 and see what that brings. Still sounds like a good idea to be able to force a dump if things lock up like this again, regardless of what we're running.

> Less drastic approach to hanged I/O debugging is to find out where
> processes/threads are actually stuck. E.g. using procstat -kk.

Odd you say that, because we've got an old 32-bit 8.1 box running spamassassin and some devel stuff that looks like it's getting a little wedged:

  PID USERNAME  THR PRI NICE  SIZE   RES STATE    TIME    WCPU COMMAND
    6 root        4  -8    -    0K   36K tx-tx  126.0H  76.37% zfskern

And I'm not sure procstat is meant for this, but the output is interesting:

[root@h22 /home/spork]# procstat -k 6
  PID    TID COMM    TDNAME           KSTACK
    6 100053 zfskern arc_reclaim_thre mi_switch sleepq_switch sleepq_timedwait _cv_timedwait arc_reclaim_thread fork_exit fork_trampoline
    6 100054 zfskern l2arc_feed_threa mi_switch sleepq_switch sleepq_timedwait _cv_timedwait l2arc_feed_thread fork_exit fork_trampoline
    6 100093 zfskern txg_thread_enter mi_switch sleepq_switch sleepq_wait _cv_wait txg_thread_wait txg_quiesce_thread fork_exit fork_trampoline
    6 100094 zfskern txg_thread_enter mi_switch sleepq_switch sleepq_timedwait _cv_timedwait txg_thread_wait txg_sync_thread fork_exit fork_trampoline

Makes me curious about the patch mm@ has for 8.2 here:
http://blog.vx.sk/archives/24-Backported-patches-for-FreeBSD-82-RELEASE.html
(item c in the list)

Anyhow, thank you *all* for an interesting discussion. We do want to forge ahead with using snapshots since it's a really nice luxury, especially on boxes with lots of jails. It makes it very easy to roll things back without having to go to the tapes.

Thanks,

Charles
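Following Andriy's suggestion, the procstat -kk evidence could be folded into the same 5-minute status cron Charles already runs, so there is a trail on disk the next time a box wedges. A sketch, with a hypothetical log path (PID 6 for zfskern matches the output in this thread, but is not guaranteed on every box):

```shell
#!/bin/sh
# Log kernel stacks of zfskern plus any process stuck in disk wait,
# appending to a file that survives a later power-cycle.
# The log path is an example, not from the thread.
LOG=/var/log/zfs-hang-evidence.log

{
    date -u
    procstat -kk 6            # zfskern (PID 6 on this box)
    # Userland processes currently in disk wait ("D" in ps state):
    ps -axo pid,state,comm | awk '$2 ~ /^D/ {print $1, $3}'
} >> "$LOG" 2>&1
```

Run from cron alongside the existing arc_summary.pl collection; if the hang recurs, the last few entries show which threads were stuck and where.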
Re: 8.1R possible zfs snapshot livelock?
on 18/05/2011 04:49 Charles Sprickman said the following:
> On Tue, 17 May 2011, Andriy Gapon wrote:
>> Less drastic approach to hanged I/O debugging is to find out where
>> processes/threads are actually stuck. E.g. using procstat -kk.
>
> Odd you say that because we've got an old 32-bit 8.1 box that's
> running spamassassin and some devel stuff that looks like it's getting
> a little wedged: [top output trimmed]
>
> And I'm not sure procstat is meant for this, but the output is
> interesting:

It is.

> [root@h22 /home/spork]# procstat -k 6
> [four zfskern thread stacks trimmed; all waiting in _cv_wait or
> _cv_timedwait]

This looks completely normal.

--
Andriy Gapon

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org