> In what way is CPU contention being monitored?  "prstat" without
> options is nearly useless for a multithreaded app on a multi-CPU (or
> multi-core/multi-thread) system.  mpstat is only useful if threads
> never migrate between CPUs.  "prstat -mL" gives a nice picture of how
> busy each LWP (thread) is.


Using "prstat -mL" on the Nexenta box shows no serious activity

> Oh, since the database runs on Linux I guess you need to dig up top's
> equivalent of "prstat -mL".  Unfortunately, I don't think that Linux
> has microstate accounting and as such you may not have visibility into
> time spent on traps, text faults, and data faults on a per-process
> basis.
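
As far as I can tell, the closest Linux equivalents give per-thread CPU
time but not the microstate breakdown; something along these lines (the
PID is just an example taken from the output below):

  # show the individual threads of one mysqld process in top
  top -H -p 17652

  # per-thread CPU usage via sysstat's pidstat, 5-second samples
  pidstat -t -p 17652 5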


If CPU is the bottleneck, then it's probably on the Linux box.  Using "top",
the following is typical of what I get:

top - 15:04:11 up 24 days,  4:13,  6 users,  load average: 5.87, 5.79, 5.85
Tasks: 307 total,   1 running, 306 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.6%us,  0.3%sy,  0.0%ni, 98.4%id,  0.0%wa,  0.3%hi,  0.3%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni, 96.2%id,  3.8%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  2.2%us,  5.1%sy,  0.0%ni, 55.0%id, 36.4%wa,  0.0%hi,  1.3%si,  0.0%st
Cpu3  :  3.3%us,  1.3%sy,  0.0%ni,  0.0%id, 95.0%wa,  0.3%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.7%sy,  0.0%ni, 98.7%id,  0.3%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni, 99.3%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu8  :  1.0%us,  0.0%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu9  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu10 : 16.6%us, 10.9%sy,  0.0%ni,  0.0%id, 70.6%wa,  0.3%hi,  1.6%si,  0.0%st
Cpu11 :  0.6%us,  1.0%sy,  0.0%ni, 66.9%id, 31.5%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu12 :  0.3%us,  0.3%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu13 :  0.3%us,  0.0%sy,  0.0%ni, 95.7%id,  4.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu14 :  1.5%us,  0.0%sy,  0.0%ni, 98.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu15 :  0.0%us,  0.7%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  74098512k total, 73910728k used,   187784k free,    96948k buffers
Swap:  2104488k total,      208k used,  2104280k free, 63210472k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17652 mysql     20   0 3553m 3.1g 5472 S   38  4.4 247:51.80 mysqld
16301 mysql     20   0 4275m 3.3g 5980 S    4  4.7   5468:33 mysqld
16006 mysql     20   0 4434m 3.3g 5888 S    3  4.6   5034:06 mysqld
12822 root      15  -5     0    0    0 S    2  0.0  22:00.50 scsi_wq_39

> Have you done any TCP tuning? 

Some, yes, but since I've seen much more throughput on other tests I've made,
I don't think it's the bottleneck here.
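
For reference, the sort of settings involved can be checked with something
like this (just the common Linux buffer tunables, not necessarily the exact
ones I changed):

  # TCP buffer autotuning limits (min / default / max, in bytes)
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

  # global socket buffer ceilings
  sysctl net.core.rmem_max net.core.wmem_max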
Thanks!                                           
