On 05 January, 2007 - Tomas Ögren sent me these 3,3K bytes:
These numbers come from the last ::kmastat you ran before reducing the
DNLC size. Note below that much of this space is still consumed by
these caches, even after the DNLC has dropped its references. This is
largely due to
Hello Tomas,
Friday, January 5, 2007, 4:00:53 AM, you wrote:
TÖ On 04 January, 2007 - Tomas Ögren sent me these 1,0K bytes:
On 03 January, 2007 - [EMAIL PROTECTED] sent me these 0,5K bytes:
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could
On 05 January, 2007 - Robert Milkowski sent me these 3,8K bytes:
Hello Tomas,
I saw the same behavior here when ncsize was increased from default.
Try with the default and let's see what happens - if it works then it's
better than hanging every hour or so.
That's still not the point.. It
Thomas,
This could be fragmentation in the meta-data caches. Could you
print out the results of ::kmastat?
-Mark
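(For anyone following the thread: ::kmastat is an mdb dcmd. A typical way to capture it - standard mdb usage, though the dump file names below are just examples of what savecore produces:)

```shell
# Live kernel:
echo ::kmastat | mdb -k

# Against a saved crash dump (file names from savecore are examples):
mdb unix.0 vmcore.0
> ::kmastat
```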
On 05 January, 2007 - Mark Maybee sent me these 0,8K bytes:
Thomas,
This could be fragmentation in the meta-data caches. Could you
print out the results of ::kmastat?
http://www.acc.umu.se/~stric/tmp/zfs-dumps.tar.bz2
memstat, kmastat and dnlc_nentries from 10 minutes after boot up until
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
What I suspect is happening:
1. with your large ncsize, you eventually ran the machine out
of memory because (currently) the arc is not accounting for
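(The two knobs discussed here can be inspected from mdb. A sketch - the /W write and the value 0t3 are illustrative only, and poking kernel variables with -kw should be done with care:)

```shell
# Current DNLC entry count:
echo 'dnlc_nentries/D' | mdb -k

# The ARC's DNLC-reduction percentage:
echo 'arc_reduce_dnlc_percent/D' | mdb -k

# Writing it back (0t3 = decimal 3 in mdb syntax; illustrative value):
echo 'arc_reduce_dnlc_percent/W 0t3' | mdb -kw
```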
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although about 18 hours after the others)
What I suspect is happening:
1. with your
Tomas Ögren wrote:
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although about 18 hours after the others)
Excellent, this confirms #3
On 03 January, 2007 - [EMAIL PROTECTED] sent me these 0,5K bytes:
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
Isn't this just like:
6493923 nfsfind on ZFS filesystem
Hello.
Having some hangs on a snv53 machine which is quite probably ZFS+NFS
related, since that's all the machine does ;)
The machine is a 2x750MHz Blade1000 with 2GB ram, using a SysKonnect
9821 GigE card (with their 8.19.1.3 skge driver) and two HP branded MPT
SCSI cards. Normal load is pretty
Hello Tomas,
Wednesday, January 3, 2007, 10:32:39 AM, you wrote:
TÖ Hello.
TÖ Having some hangs on a snv53 machine which is quite probably ZFS+NFS
TÖ related, since that's all the machine does ;)
TÖ The machine is a 2x750MHz Blade1000 with 2GB ram, using a SysKonnect
TÖ 9821 GigE card (with
On 03 January, 2007 - Robert Milkowski sent me these 3,0K bytes:
Hello Tomas,
Wednesday, January 3, 2007, 10:32:39 AM, you wrote:
TÖ The tweaks I have are:
TÖ set ncsize = 50
TÖ set nfs:nrnode = 50
TÖ set zfs:zil_disable=1
TÖ set zfs:zfs_vdev_cache_bshift=14
TÖ set
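(For context: tunables like these live in /etc/system and take effect at the next boot. A sketch using the values exactly as quoted above - note the archive preview may have truncated them:)

```
* /etc/system: '*' starts a comment, one 'set' directive per line
set ncsize = 50
set nfs:nrnode = 50
set zfs:zil_disable = 1
set zfs:zfs_vdev_cache_bshift = 14
* Reverting to a default means removing the line and rebooting
```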
Hello Tomas,
Give us output of ::kmastat on crashdump.
--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
___
zfs-discuss mailing list
On 03 January, 2007 - Robert Milkowski sent me these 0,2K bytes:
Hello Tomas,
Give us output of ::kmastat on crashdump.
Ok, attached.
/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
Tomas,
There are a couple of things going on here:
1. There is a lot of fragmentation in your meta-data caches (znode,
dnode, dbuf, etc). This is burning up about 300MB of space in your
hung kernel. This is a known problem that we are currently working
on.
2. While the ARC has set its
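(A rough way to spot the fragmentation Mark describes is to compare, per cache, buffers the slab layer holds against buffers actually in use. A sketch over made-up sample lines, assuming the usual ::kmastat column order - cache name, buf size, bufs in use, bufs total, memory in use, allocs, fails:)

```shell
# Hypothetical excerpt in ::kmastat's usual column order:
# name  buf_size  bufs_in_use  bufs_total  mem_in_use  allocs  fails
cat > /tmp/kmastat_sample.txt <<'EOF'
zfs_znode_cache  200  10000 60000 12582912 500000 0
dnode_t          648   8000 40000 27262976 400000 0
dmu_buf_impl_t   328  12000 30000 10485760 300000 0
EOF

# Slack = buffers held by the slab layer but not in use; high slack
# alongside large "mem in use" suggests fragmentation pinning whole slabs.
awk '{ slack = ($4 - $3) / $4 * 100;
       printf "%-16s %5.1f%% slack\n", $1, slack }' /tmp/kmastat_sample.txt
```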
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
-Mark
Tomas Ögren wrote:
On 03 January, 2007 - Mark Maybee sent me these 5,0K bytes:
Tomas,
There are a couple of things going
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
Isn't this just like:
6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system
Which was introduced in b51 (or 52)
Ah yes! Thank you Casper. I knew this looked familiar! :-)
Yes, this is almost certainly what is happening here. The
bug was introduced in build 51 and fixed in build 54.
Tomas Ögren wrote:
df (GNU df) says there are ~850k inodes used, I'd like to keep those in
memory.. There is currently 1.8TB used on the filesystem.. The
probability of a cache hit in the user data cache is about 0% and the
probability that an rsync happens again shortly is about 100%..
Also,
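(Rough arithmetic behind wanting those inodes cached - assuming a hypothetical ~1 KiB of cached metadata per inode across znode, dnode, and dbuf caches, which is a guess, not a measured figure:)

```python
# Back-of-envelope: keeping metadata for ~850k inodes resident.
inodes = 850_000
bytes_per_inode = 1024          # hypothetical per-inode cost
total_mib = inodes * bytes_per_inode / 2**20
print(f"{total_mib:.0f} MiB of metadata")
```

On a 2 GB machine that is a large but not impossible working set, which is why the DNLC/ARC interaction matters here.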
On 03 January, 2007 - Richard Elling sent me these 0,5K bytes: