Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-14 Thread Ivan Voras

Doug Poland wrote:



Ok, I re-ran with same config, but this time monitoring the sysctls
you requested* ( and the rest I was watching ):


I failed to mention that

kstat.zfs.misc.arcstats.size

seemed to fluctuate between about 164,000,000 and 180,000,000 bytes
during this last run


Is that with or without panicking? If the system did panic then it looks 
like the problem is a memory leak somewhere else in the kernel, which 
you could confirm by monitoring vmstat -z.
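For example, something like this pulls the USED column out of vmstat -z so that two samples taken during the run can be diffed (the printf line is only a stand-in for real vmstat -z output):

```shell
# vmstat -z rows look like "ITEM: SIZE, LIMIT, USED, FREE, REQUESTS";
# printing ITEM and USED lets you diff successive samples.  Replace the
# printf with:  vmstat -z | egrep -i 'zfs|zil|arc|zio'
printf 'zio_cache:                720,        0,    53562,       98, 86386955\n' |
awk -F'[ ,]+' '{ sub(/:$/, "", $1); print $1, $4 }'
```

A zone whose USED count keeps climbing between samples is the likely leak.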


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-14 Thread Doug Poland

On Thu, January 14, 2010 03:17, Ivan Voras wrote:
 Doug Poland wrote:

 Ok, I re-ran with same config, but this time monitoring the
 sysctls you requested* ( and the rest I was watching ):

 I failed to mention that

 kstat.zfs.misc.arcstats.size

 seemed to fluctuate between about 164,000,000 and 180,000,000 bytes
 during this last run

 Is that with or without panicking?

with a panic


 If the system did panic then it looks like the problem is a memory
 leak somewhere else in the kernel, which you could confirm by
 monitoring vmstat -z.

I'll give that a try.  Am I looking for specific items in vmstat -z?
arc*, zil*, zfs*, zio*?  Please advise.


-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-14 Thread Ivan Voras
2010/1/14 Doug Poland d...@polands.org:

 On Thu, January 14, 2010 03:17, Ivan Voras wrote:
 Doug Poland wrote:

 Ok, I re-ran with same config, but this time monitoring the
 sysctls you requested* ( and the rest I was watching ):

 I failed to mention that

 kstat.zfs.misc.arcstats.size

 seemed to fluctuate between about 164,000,000 and 180,000,000 bytes
 during this last run

 Is that with or without panicking?

 with a panic


 If the system did panic then it looks like the problem is a memory
 leak somewhere else in the kernel, which you could confirm by
 monitoring vmstat -z.

 I'll give that a try.  Am I looking for specific items in vmstat -z?
 arc*, zil*, zfs*, zio*?  Please advise.

You should look for whatever is allocating all your memory between 180
MB (which is your ARC size) and 1.2 GB (which is your kmem size).


Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-14 Thread Doug Poland

On Thu, January 14, 2010 08:50, Ivan Voras wrote:
 2010/1/14 Doug Poland d...@polands.org:

 kstat.zfs.misc.arcstats.size

 seemed to fluctuate between about 164,000,000 and 180,000,000 bytes
 during this last run

 Is that with or without panicking?

 with a panic


 If the system did panic then it looks like the problem is a memory
 leak somewhere else in the kernel, which you could confirm by
 monitoring vmstat -z.

 I'll give that a try.  Am I looking for specific items in vmstat
 -z?   arc*, zil*, zfs*, zio*?  Please advise.

 You should look for whatever is allocating all your memory between 180
 MB (which is your ARC size) and 1.2 GB (which is your kmem size).


OK, another run, this time back to vfs.zfs.arc_max=512M in
/boot/loader.conf, and a panic:

panic: kmem_malloc(131072): kmem_map too small: 1294258176 total
allocated

I admit I do not fully understand which metrics are important to a
proper analysis of this issue.  In this case, I was watching the
following within 1 second of the panic:

sysctl kstat.zfs.misc.arcstats.size: 41739944
sysctl vfs.numvnodes: 678
sysctl vfs.zfs.arc_max: 536870912
sysctl vfs.zfs.arc_meta_limit: 134217728
sysctl vfs.zfs.arc_meta_used: 7228584
sysctl vfs.zfs.arc_min: 67108864
sysctl vfs.zfs.cache_flush_disable: 0
sysctl vfs.zfs.debug: 0
sysctl vfs.zfs.mdcomp_disable: 0
sysctl vfs.zfs.prefetch_disable: 1
sysctl vfs.zfs.recover: 0
sysctl vfs.zfs.scrub_limit: 10
sysctl vfs.zfs.super_owner: 0
sysctl vfs.zfs.txg.synctime: 5
sysctl vfs.zfs.txg.timeout: 30
sysctl vfs.zfs.vdev.aggregation_limit: 131072
sysctl vfs.zfs.vdev.cache.bshift: 16
sysctl vfs.zfs.vdev.cache.max: 16384
sysctl vfs.zfs.vdev.cache.size: 10485760
sysctl vfs.zfs.vdev.max_pending: 35
sysctl vfs.zfs.vdev.min_pending: 4
sysctl vfs.zfs.vdev.ramp_rate: 2
sysctl vfs.zfs.vdev.time_shift: 6
sysctl vfs.zfs.version.acl: 1
sysctl vfs.zfs.version.dmu_backup_header: 2
sysctl vfs.zfs.version.dmu_backup_stream: 1
sysctl vfs.zfs.version.spa: 13
sysctl vfs.zfs.version.vdev_boot: 1
sysctl vfs.zfs.version.zpl: 3
sysctl vfs.zfs.zfetch.array_rd_sz: 1048576
sysctl vfs.zfs.zfetch.block_cap: 256
sysctl vfs.zfs.zfetch.max_streams: 8
sysctl vfs.zfs.zfetch.min_sec_reap: 2
sysctl vfs.zfs.zil_disable: 0
sysctl vm.kmem_size: 1327202304
sysctl vm.kmem_size_max: 329853485875
sysctl vm.kmem_size_min: 0
sysctl vm.kmem_size_scale: 3


vmstat -z | egrep -i 'zfs|zil|arc|zio|files'
ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS
Files:                     80,        0,      116,      199,   850713
zio_cache:                720,        0,    53562,       98, 86386955
arc_buf_hdr_t:            208,        0,     1193,       31,    11990
arc_buf_t:                 72,        0,     1180,      120,    11990
zil_lwb_cache:            200,        0,    11580,     2594,    62407
zfs_znode_cache:          376,        0,      605,       55,      654

vmstat -m | grep solaris | sed 's/K//' | awk '{print "vm.solaris:", $3*1024}'


  solaris: 1285068800


The value I see as the culprit is vmstat -m | grep solaris.  This
value fluctuates wildly during the run and is always near kmem_size at
the time of the panic.
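To put a number on "always near kmem_size": with the two values captured above (1285068800 bytes in the solaris bucket against a 1327202304-byte kmem_size), the arena was essentially full at panic time:

```shell
# Ratio of the solaris malloc arena to vm.kmem_size, using the values
# captured just before this panic.
awk 'BEGIN { solaris = 1285068800; kmem = 1327202304
             printf "solaris/kmem = %.1f%%\n", 100 * solaris / kmem }'
```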

Again, I'm not sure what to look for here, and you are patiently
helping me along in this process.  If you have any tips or can point
me to docs on how to easily monitor these values, I will endeavor to
do so.


-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-14 Thread Ivan Voras
2010/1/14 Doug Poland d...@polands.org:

 On Thu, January 14, 2010 08:50, Ivan Voras wrote:
 2010/1/14 Doug Poland d...@polands.org:

 kstat.zfs.misc.arcstats.size

 seemed to fluctuate between about 164,000,000 and 180,000,000 bytes
 during this last run

 Is that with or without panicking?

 with a panic


 If the system did panic then it looks like the problem is a memory
 leak somewhere else in the kernel, which you could confirm by
 monitoring vmstat -z.

 I'll give that a try.  Am I looking for specific items in vmstat
 -z?   arc*, zil*, zfs*, zio*?  Please advise.

 You should look for whatever is allocating all your memory between 180
 MB (which is your ARC size) and 1.2 GB (which is your kmem size).


 OK, another run, this time back to vfs.zfs.arc_max=512M in
 /boot/loader.conf, and a panic:

 panic: kmem_malloc(131072): kmem_map too small: 1294258176 total
 allocated

 I admit I do not fully understand what metrics are important to proper
 analysis of this issue.  In this case, I was watching the following
 within 1 second of the panic:

 sysctl kstat.zfs.misc.arcstats.size: 41739944
 sysctl vfs.numvnodes: 678
 sysctl vfs.zfs.arc_max: 536870912
 sysctl vfs.zfs.arc_meta_limit: 134217728
 sysctl vfs.zfs.arc_meta_used: 7228584
 sysctl vfs.zfs.arc_min: 67108864
 sysctl vfs.zfs.cache_flush_disable: 0
 sysctl vfs.zfs.debug: 0
 sysctl vfs.zfs.mdcomp_disable: 0
 sysctl vfs.zfs.prefetch_disable: 1
 sysctl vfs.zfs.recover: 0
 sysctl vfs.zfs.scrub_limit: 10
 sysctl vfs.zfs.super_owner: 0
 sysctl vfs.zfs.txg.synctime: 5
 sysctl vfs.zfs.txg.timeout: 30
 sysctl vfs.zfs.vdev.aggregation_limit: 131072
 sysctl vfs.zfs.vdev.cache.bshift: 16
 sysctl vfs.zfs.vdev.cache.max: 16384
 sysctl vfs.zfs.vdev.cache.size: 10485760
 sysctl vfs.zfs.vdev.max_pending: 35
 sysctl vfs.zfs.vdev.min_pending: 4
 sysctl vfs.zfs.vdev.ramp_rate: 2
 sysctl vfs.zfs.vdev.time_shift: 6
 sysctl vfs.zfs.version.acl: 1
 sysctl vfs.zfs.version.dmu_backup_header: 2
 sysctl vfs.zfs.version.dmu_backup_stream: 1
 sysctl vfs.zfs.version.spa: 13
 sysctl vfs.zfs.version.vdev_boot: 1
 sysctl vfs.zfs.version.zpl: 3
 sysctl vfs.zfs.zfetch.array_rd_sz: 1048576
 sysctl vfs.zfs.zfetch.block_cap: 256
 sysctl vfs.zfs.zfetch.max_streams: 8
 sysctl vfs.zfs.zfetch.min_sec_reap: 2
 sysctl vfs.zfs.zil_disable: 0
 sysctl vm.kmem_size: 1327202304
 sysctl vm.kmem_size_max: 329853485875
 sysctl vm.kmem_size_min: 0
 sysctl vm.kmem_size_scale: 3


 vmstat -z | egrep -i 'zfs|zil|arc|zio|files'
 ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS
 Files:                     80,        0,      116,      199,   850713
 zio_cache:                720,        0,    53562,       98, 86386955
 arc_buf_hdr_t:            208,        0,     1193,       31,    11990
 arc_buf_t:                 72,        0,     1180,      120,    11990
 zil_lwb_cache:            200,        0,    11580,     2594,    62407
 zfs_znode_cache:          376,        0,      605,       55,      654

 vmstat -m | grep solaris | sed 's/K//' | awk '{print "vm.solaris:", $3*1024}'


  solaris: 1285068800


 The value I see as the culprit is vmstat -m | grep solaris.  This
 value fluctuates wildly during the run and is always near kmem_size at
 the time of the panic.

 Again, I'm not sure what to look for here, and you are patiently
 helping me along in this process.  If you have any tips or can point
 me to docs on how to easily monitor these values, I will endeavor to
 do so.

The only really important ones should be kstat.zfs.misc.arcstats.size
(which you print only rarely) and vm.kmem_size. The solaris entry
above should be near kstat.zfs.misc.arcstats.size in all cases.

But I don't have any more ideas here. Try taking this post (also
include kstat.zfs.misc.arcstats.size) to the freebsd-fs@ mailing list.


8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Doug Poland
Hello,

I'm trying to get an 8.0-RELEASE-p2 amd64 box to not crash when
running benchmarks/unixbench.  The box in question has 4GB RAM and runs
6 SCSI disks in a raidz1.

dmesg | grep memory
real memory  = 4294967296 (4096 MB)
avail memory = 3961372672 (3777 MB)

zpool status
  pool: bethesda
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
bethesda   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
gpt/disk0  ONLINE   0 0 0
gpt/disk1  ONLINE   0 0 0
gpt/disk2  ONLINE   0 0 0
gpt/disk3  ONLINE   0 0 0
gpt/disk4  ONLINE   0 0 0
gpt/disk5  ONLINE   0 0 0


It appears unixbench causes the memory exhaustion when running the
fstime / fsbuffer / fsdisk programs, depending on what I've got in
/boot/loader.conf.

I began with a system with no tunables in /boot/loader.conf
(vm.kmem_size and vm.kmem_size_max).  Then I tried increasing
vm.kmem_size and vm.kmem_size_max a GB at a time, until I was at 4GB.

At every increase, the system panicked with kmem exhaustion, until I
used the 4GB settings.  At that point, the system became unresponsive
and had to be reset.

So the question is, can ZFS be tuned to not panic or hang no matter
what I throw at it?  In this case, it's the ancient and innocuous
unixbench utility.

This is a test box right now and I'm more than willing to try various
tests or tweaks to get 8.x FreeBSD/ZFS into a stable state.


-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Ivan Voras

Doug Poland wrote:


So the question is, can ZFS be tuned to not panic or hang no matter
what I throw at it?


Apparently not.

 I began with a system with no tunables in /boot/loader.conf
 (vm.kmem_size and vm.kmem_size_max).  Then I tried increasing
 vm.kmem_size and vm.kmem_size_max a GB at a time, until I was at 4GB.

Try adding vfs.zfs.arc_max=512M to /boot/loader.conf.



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Doug Poland

On Wed, January 13, 2010 11:55, Ivan Voras wrote:
 Doug Poland wrote:

 So the question is, can ZFS be tuned to not panic or hang no matter
 what I throw at it?

 Apparently not.

   I began with a system with no tunables in /boot/loader.conf
   (vm.kmem_size and vm.kmem_size_max).  Then I tried increasing
   vm.kmem_size and vm.kmem_size_max a GB at a time, until I was at
 4GB.

 Try adding vfs.zfs.arc_max=512M to /boot/loader.conf.

Would you suggest tweaking the vm.kmem_size tunables in addition to
arc_max?



-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Ivan Voras
2010/1/13 Doug Poland d...@polands.org:

 On Wed, January 13, 2010 11:55, Ivan Voras wrote:
 Doug Poland wrote:

 So the question is, can ZFS be tuned to not panic or hang no matter
 what I throw at it?

 Apparently not.

   I began with a system with no tunables in /boot/loader.conf
   (vm.kmem_size and vm.kmem_size_max).  Then I tried increasing
   vm.kmem_size and vm.kmem_size_max a GB at a time, until I was at
 4GB.

 Try adding vfs.zfs.arc_max=512M to /boot/loader.conf.

 Would you suggest tweaking the vm.kmem_size tunables in addition to
 arc_max?

No, unless they auto-tune to something less than approximately arc_max*3.

I try to set arc_max to be a third (or a quarter) of kmem_size, and
tune kmem_size ad hoc to suit the machine and its purpose.

The reason for this is that arc_max is just a guideline, not a hard
limit... the ZFS ARC usage can and will spike to much larger values,
usually at the most inopportune moment.
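As a concrete starting point (example values only, following the one-third rule above):

```
# /boot/loader.conf -- example values only, per the one-third rule above
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"
```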


Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Doug Poland

On Wed, January 13, 2010 12:35, Ivan Voras wrote:

 Try adding vfs.zfs.arc_max=512M to /boot/loader.conf.

 Would you suggest tweaking the vm.kmem_size tunables in addition to
 arc_max?

 No, unless they auto-tune to something less than approximately
 arc_max*3.

 I try to set arc_max to be a third (or a quarter) of kmem_size, and
 tune kmem_size ad hoc to suit the machine and its purpose.

 The reason for this is that arc_max is just a guideline, not a hard
 limit... the ZFS ARC usage can and will spike to much larger values,
 usually at the most inopportune moment.

This is the state of the machine when it panicked this time:

panic: kmem_malloc(131072): kmem_map too small: 1296957440 total
allocated
cpuid = 1

/boot/loader.conf: vfs.zfs.arc_max=512M
vfs.numvnodes: 660
vfs.zfs.arc_max: 536870912
vfs.zfs.arc_meta_limit: 134217728
vfs.zfs.arc_meta_used: 7006136
vfs.zfs.arc_min: 67108864
vfs.zfs.zil_disable: 0
vm.kmem_size: 1327202304
vm.kmem_size_max: 329853485875

Using a handy little script I found posted in several places, I was
monitoring memory:

TEXT 15373968   14.66   MiB
DATA   1536957440   1465.76 MiB
TOTAL  1552331408   1480.42 MiB

Where TEXT = a sum of kldstat memory values
and   DATA = a sum of vmstat -m values
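The TEXT half of that script boils down to summing the (hex) Size column of kldstat. A sketch, with two made-up module lines standing in for real `kldstat | tail -n +2` output:

```shell
# Sum the Size column of kldstat output; $(( )) accepts 0x... hex
# constants, so no external conversion is needed.
total=0
while read -r _ _ _ size _; do
    total=$(( total + size ))
done <<'EOF'
1 15 0xc0400000 0xea9b00 kernel
2  1 0xc1300000 0x1c000 zfs.ko
EOF
echo "TEXT $total bytes"
```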

Is there a next step to try, or is this chasing a wild goose?


-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Ivan Voras
2010/1/13 Doug Poland d...@polands.org:

 On Wed, January 13, 2010 12:35, Ivan Voras wrote:

 Try adding vfs.zfs.arc_max=512M to /boot/loader.conf.

 Would you suggest tweaking the vm.kmem_size tunables in addition to
 arc_max?

 No, unless they auto-tune to something less than approximately
 arc_max*3.

 I try to set arc_max to be a third (or a quarter) of kmem_size, and
 tune kmem_size ad hoc to suit the machine and its purpose.

 The reason for this is that arc_max is just a guideline, not a hard
 limit... the ZFS ARC usage can and will spike to much larger values,
 usually at the most inopportune moment.

 This is the state of the machine when it panicked this time:

 panic: kmem_malloc(131072): kmem_map too small: 1296957440 total
 allocated
 cpuid = 1

 /boot/loader.conf: vfs.zfs.arc_max=512M
 vfs.numvnodes: 660
 vfs.zfs.arc_max: 536870912
 vfs.zfs.arc_meta_limit: 134217728
 vfs.zfs.arc_meta_used: 7006136
 vfs.zfs.arc_min: 67108864
 vfs.zfs.zil_disable: 0
 vm.kmem_size: 1327202304
 vm.kmem_size_max: 329853485875

(from the size of arc_max I assume you did remember to reboot after
changing loader.conf and before testing again but just checking - did
you?)

Can you monitor and record kstat.zfs.misc.arcstats.size sysctl while
the test is running (and crashing)?

This looks curious - your kmem_size is ~1.2 GB, arc_max is 0.5 GB and
you are still having panics. Is there anything unusual about your
system? Like an unusually slow CPU, or unusually fast or slow drives?

I don't have any ideas smarter than reducing arc_max by half, then
trying again, and continuing to reduce it until it works. It would be
very helpful if you could monitor the kstat.zfs.misc.arcstats.size
sysctl while you are doing the tests, to document what is happening to
the system. If it by any chance stays the same, you should probably
monitor vmstat -m.


 Using a handy little script I found posted in several places, I was
 monitoring memory:

 TEXT     15373968       14.66   MiB
 DATA   1536957440       1465.76 MiB
 TOTAL  1552331408       1480.42 MiB

 Where TEXT = a sum of kldstat memory values
 and   DATA = a sum of vmstat -m values

 Is there a next step to try, or is this chasing a wild goose?


 --
 Regards,
 Doug




Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Doug Poland

On Wed, January 13, 2010 13:57, Ivan Voras wrote:
 2010/1/13 Doug Poland d...@polands.org:

 This is the state of the machine when it panicked this time:

 panic: kmem_malloc(131072): kmem_map too small: 1296957440 total
 allocated
 cpuid = 1

 /boot/loader.conf: vfs.zfs.arc_max=512M
 vfs.numvnodes: 660
 vfs.zfs.arc_max: 536870912
 vfs.zfs.arc_meta_limit: 134217728
 vfs.zfs.arc_meta_used: 7006136
 vfs.zfs.arc_min: 67108864
 vfs.zfs.zil_disable: 0
 vm.kmem_size: 1327202304
 vm.kmem_size_max: 329853485875

 (from the size of arc_max I assume you did remember to reboot after
 changing loader.conf and before testing again but just checking - did
 you?)

Yes, I did reboot


 Can you monitor and record kstat.zfs.misc.arcstats.size sysctl while
 the test is running (and crashing)?

Certainly


 This looks curious - your kmem_max is ~~ 1.2 GB, arc_max is 0.5 GB and
 you are still having panics. Is there anything unusual about your
 system? Like unusually slow CPU, unusually fast or slow drives?

Don't think there is anything unusual.  This is a 5-year-old HP DL385.
It has two 2.6GHz Opteron 252 CPUs.  The disks are 6x36GB P-SCSI.
They are behind an HP Smart Array 6i controller.  I had to configure
each drive as RAID0 in order to make it visible to the OS.  Kinda hokey
if you ask me.

dmesg | grep -i CPU
CPU: AMD Opteron(tm) Processor 252 (2605.92-MHz K8-class CPU)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs

smartctl -a /dev/da0
Device: COMPAQ   RAID 0  VOLUME   Version: OK
Device type: disk
Local Time is: Wed Jan 13 14:21:44 2010 CST
Device does not support SMART

dmesg | grep -i smart
ciss0: HP Smart Array 6i port 0x5000-0x50ff mem
0xf7ef-0xf7ef1fff,0xf7e8-0xf7eb irq 24 at device 4.0 on
pci2

 I don't have any ideas smarter than reducing arc_max by half then try
 again and continue reducing it until it works. It would be very
 helpful if you could monitor the kstat.zfs.misc.arcstats.size sysctl
 while you are doing the tests to document what is happening to the
 system. If it by any chance stays the same you should probably monitor
 vmstat -m.

OK, will monitor on the next run.  Thanks for your help so far.


-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Doug Poland

On Wed, January 13, 2010 13:57, Ivan Voras wrote:
 2010/1/13 Doug Poland d...@polands.org:


 Can you monitor and record kstat.zfs.misc.arcstats.size sysctl while
 the test is running (and crashing)?

 This looks curious - your kmem_max is ~~ 1.2 GB, arc_max is 0.5 GB and
 you are still having panics. Is there anything unusual about your
 system? Like unusually slow CPU, unusually fast or slow drives?

 I don't have any ideas smarter than reducing arc_max by half then try
 again and continue reducing it until it works. It would be very
 helpful if you could monitor the kstat.zfs.misc.arcstats.size sysctl
 while you are doing the tests to document what is happening to the
 system. If it by any chance stays the same you should probably monitor
 vmstat -m.


Ok, I re-ran with the same config, but this time monitoring the sysctls
you requested* ( and the rest I was watching ):

panic: kmem_malloc(131072): kmem_map too small: 1292869632 total
allocated
cpuid = 0

* kstat.zfs.misc.arcstats.size: 166228176
  vfs.numvnodes: 2848
  vfs.zfs.arc_max: 536870912
  vfs.zfs.arc_meta_limit: 134217728
  vfs.zfs.arc_meta_used: 132890832
  vfs.zfs.arc_min: 67108864
  vfs.zfs.cache_flush_disable: 0
  vfs.zfs.debug: 0
  vfs.zfs.mdcomp_disable: 0
  vfs.zfs.prefetch_disable: 1
  vfs.zfs.recover: 0
  vfs.zfs.scrub_limit: 10
  vfs.zfs.super_owner: 0
  vfs.zfs.txg.synctime: 5
  vfs.zfs.txg.timeout: 30
  vfs.zfs.vdev.aggregation_limit: 131072
  vfs.zfs.vdev.cache.bshift: 16
  vfs.zfs.vdev.cache.max: 16384
  vfs.zfs.vdev.cache.size: 10485760
  vfs.zfs.vdev.max_pending: 35
  vfs.zfs.vdev.min_pending: 4
  vfs.zfs.vdev.ramp_rate: 2
  vfs.zfs.vdev.time_shift: 6
  vfs.zfs.version.acl: 1
  vfs.zfs.version.dmu_backup_header: 2
  vfs.zfs.version.dmu_backup_stream: 1
  vfs.zfs.version.spa: 13
  vfs.zfs.version.vdev_boot: 1
  vfs.zfs.version.zpl: 3
  vfs.zfs.zfetch.array_rd_sz: 1048576
  vfs.zfs.zfetch.block_cap: 256
  vfs.zfs.zfetch.max_streams: 8
  vfs.zfs.zfetch.min_sec_reap: 2
  vfs.zfs.zil_disable: 0
  vm.kmem_size: 1327202304
  vm.kmem_size_max: 329853485875
  vm.kmem_size_min: 0
  vm.kmem_size_scale: 3
* vmstat -m | grep solaris: 1496232960



-- 
Regards,
Doug



Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic

2010-01-13 Thread Doug Poland


 Ok, I re-ran with same config, but this time monitoring the sysctls
 you requested* ( and the rest I was watching ):

I failed to mention that

kstat.zfs.misc.arcstats.size

seemed to fluctuate between about 164,000,000 and 180,000,000 bytes
during this last run



-- 
Regards,
Doug
