Hi Jeremy,
thanks for your detailed explanation.
-dennis
On 18.05.2013, at 17:35, Jeremy Chadwick wrote:
On Sat, May 18, 2013 at 12:14:28PM +0200, Ronald Klop wrote:
On Fri, 17 May 2013 19:31:01 +0200, Jeremy Chadwick <j...@koitsu.org> wrote:
On Fri, May 17, 2013 at 11:37:23AM +0200, dennis berger wrote:
Hi List,
I can confirm that it is the bug you mentioned, Steven.
Here is how I found it: I recorded hourly zfskern and nfsd stats, like this:

echo PROCSTAT >> $reportname
pgrep -S '(zfskern|nfsd)' | xargs procstat -kk >> $reportname

Luckily it crashed this night and logged this:

1910 101508 nfsd
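For reference, the hourly collection above can be sketched as a small cron script. Only the echo/pgrep/procstat pipeline comes from the mail; the report directory and log-name scheme are assumptions:

```shell
#!/bin/sh
# Hourly snapshot of kernel call stacks for zfskern and nfsd (FreeBSD).
# REPORTDIR and the log-name scheme are hypothetical, not from the thread.
REPORTDIR=${REPORTDIR:-/var/log/procstat}
mkdir -p "$REPORTDIR"
reportname="$REPORTDIR/zfs-nfs-$(date +%Y%m%d-%H).log"

echo "PROCSTAT $(date)" >> "$reportname"
# pgrep -S also matches kernel (system) processes; procstat -kk
# prints the kernel stack of every thread of the matched processes.
pgrep -S '(zfskern|nfsd)' | xargs procstat -kk >> "$reportname" 2>&1
```

Run from cron with something like `0 * * * * /usr/local/sbin/procstat_snapshot.sh`, so that when the box wedges you already have a stack trace from the last good hour and the hour of the hang.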
This is indeed a ZFS+NFS system, and I can see that istgt and nfs are stuck in some ZIO state. Maybe it's this. Thanks for pointing it out.
Is it this ZFS+NFS deadlock?

--- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
+++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
@@
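Before rebuilding the kernel with a candidate fix like the arc.c change above, a dry run can confirm the patch still applies to the local source tree. A minimal sketch using scratch files (the file and patch names are hypothetical; against a real tree this would be `cd /usr/src && patch -C -p1 < arc-fix.patch`):

```shell
# Demonstrate patch's check/dry-run mode on scratch files.
# GNU patch spells it --dry-run; FreeBSD's patch(1) spells it -C/--check.
workdir=$(mktemp -d)
cd "$workdir"
printf 'int x = 1;\n' > arc.c
printf 'int x = 2;\n' > arc.c.new
diff -u arc.c arc.c.new > fix.patch || true   # diff exits 1 when files differ

# Check only; nothing is modified. Exit status 0 means it would apply cleanly.
if patch --dry-run arc.c < fix.patch >/dev/null 2>&1; then
    echo "patch would apply cleanly"
fi
```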
Hi list,
since we activated 10GbE on ixgbe cards + jumbo frames (9k) on 9.0, and now on 9.1, we noticed that after a random period of time, sometimes a week, sometimes only a day, the system doesn't send any packets out. The symptom is that you can't log in via ssh, and nfs and istgt are not responding.
So, you stop getting 10G transmission, and so you are looking at mbuf leaks? I don't see anything in your data that makes it look like you've run out of available mbufs. You said you're running jumbos, what size? You do realize that if you do this, the clusters are coming from different pools.
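On FreeBSD, the per-size cluster pools Jack refers to are reported separately by `netstat -m` (2k clusters, 4k page-size, 9k and 16k jumbo). A small sketch of pulling the in-use and max counts for the 9k pool out of that output; the sample line below is illustrative, in the 9.x format, not taken from dennis's system:

```shell
# On a live system: netstat -m | grep '9k jumbo clusters'
# Here a sample line is parsed so the extraction logic is visible.
sample='7105/1515/8620/38400 9k jumbo clusters in use (current/cache/total/max)'
inuse=$(printf '%s\n' "$sample" | awk -F'[/ ]' '{print $1}')
max=$(printf '%s\n' "$sample" | awk -F'[/ ]' '{print $4}')
echo "9k clusters in use: $inuse of $max"
# -> 9k clusters in use: 7105 of 38400
```

Watching the "current" field grow toward "max" over time (rather than stabilizing) is what would actually distinguish a leak from normal steady-state usage.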
Hi Jack,
so the increasing number of mbufs in use, or mbuf clusters in use, is normal, you would say?
Jumbo frames are of size 9k. I know that they're from different pools; I also checked that pool.
nmb are:

#cat loader.conf
#tuning network
hw.intr_storm_threshold=9000
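For reference, a loader.conf that sizes the jumbo pools explicitly usually looks something like the following. The values are illustrative assumptions, not dennis's actual settings:

```
# /boot/loader.conf (illustrative values, not from this thread)
hw.intr_storm_threshold=9000
# hypothetical boot-time sizing of the mbuf cluster pools:
kern.ipc.nmbclusters="131072"
kern.ipc.nmbjumbo9="19200"
```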
On Wed, May 15, 2013 at 10:13:04PM +0200, dennis berger wrote:
On 15.05.2013, at 23:14, Jeremy Chadwick wrote:
----- Original Message -----
From: dennis berger <d...@nipsi.de>

FreeBSD 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012
3. Regarding this:

A clean shutdown isn't possible though. It hangs after vnode cleaning; normally you would see detaching of USB devices here,