[zfs-discuss] ZFS RAID10

2006-08-08 Thread Robert Milkowski
Hi.

snv_44, v440

filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
What is surprising is that the results for both cases are almost the same!



6 disks:

   IO Summary:  566997 ops 9373.6 ops/s, (1442/1442 r/w)  45.7mb/s,
299us cpu/op,   5.1ms latency
   IO Summary:  542398 ops 8971.4 ops/s, (1380/1380 r/w)  43.9mb/s,
300us cpu/op,   5.4ms latency


32 disks:
   IO Summary:  572429 ops 9469.7 ops/s, (1457/1457 r/w)  46.2mb/s,
301us cpu/op,   5.1ms latency
   IO Summary:  560491 ops 9270.6 ops/s, (1426/1427 r/w)  45.4mb/s,
300us cpu/op,   5.2ms latency

   

Using iostat I can see that with the 6-disk pool I get about 100-200 IO/s per
disk, while with the 32-disk pool I get only 30-70 IO/s per disk.
Each CPU is at about 25% in SYS (there are 4 CPUs).

Something is wrong here.
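
(For reference, a per-disk view like the one above can be taken with something
along these lines; the exact flags and interval are assumed here, since the
post does not show the command actually used:

   bash-3.00# iostat -xn 1 | egrep 'r/s|c3t'

The r/s and w/s columns per device are the IO/s figures quoted above.)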


# zpool status
  pool: zfs_raid10_32disks
 state: ONLINE
 scrub: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        zfs_raid10_32disks    ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t16d0           ONLINE       0     0     0
            c3t17d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t18d0           ONLINE       0     0     0
            c3t19d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t20d0           ONLINE       0     0     0
            c3t21d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t22d0           ONLINE       0     0     0
            c3t23d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t24d0           ONLINE       0     0     0
            c3t25d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t26d0           ONLINE       0     0     0
            c3t27d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t32d0           ONLINE       0     0     0
            c3t33d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t34d0           ONLINE       0     0     0
            c3t35d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t36d0           ONLINE       0     0     0
            c3t37d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t38d0           ONLINE       0     0     0
            c3t39d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t40d0           ONLINE       0     0     0
            c3t41d0           ONLINE       0     0     0
          mirror              ONLINE       0     0     0
            c3t42d0           ONLINE       0     0     0
            c3t43d0           ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool destroy zfs_raid10_32disks
bash-3.00# zpool create zfs_raid10_6disks mirror c3t42d0 c3t43d0 mirror c3t40d0 c3t41d0 mirror c3t38d0 c3t39d0
bash-3.00# zfs set atime=off zfs_raid10_6disks
bash-3.00# zfs create zfs_raid10_6disks/t1
bash-3.00#
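
(For completeness, a varmail run against the new filesystem might look roughly
like this; the mount point, thread count and run length are assumptions, since
the original post does not show the exact filebench invocation:

   bash-3.00# filebench
   filebench> load varmail
   filebench> set $dir=/zfs_raid10_6disks/t1
   filebench> run 60

The IO Summary lines quoted above are what filebench prints at the end of each
run.)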
 
 


Re: [zfs-discuss] ZFS RAID10

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 09:54:16AM -0700, Robert Milkowski wrote:
 Hi.
 
 snv_44, v440
 
 filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
 What is surprising is that the results for both cases are almost the same!
 
 
 
 6 disks:
 
IO Summary:  566997 ops 9373.6 ops/s, (1442/1442 r/w)  45.7mb/s,
 299us cpu/op,   5.1ms latency
IO Summary:  542398 ops 8971.4 ops/s, (1380/1380 r/w)  43.9mb/s,
 300us cpu/op,   5.4ms latency
 
 
 32 disks:
IO Summary:  572429 ops 9469.7 ops/s, (1457/1457 r/w)  46.2mb/s,
 301us cpu/op,   5.1ms latency
IO Summary:  560491 ops 9270.6 ops/s, (1426/1427 r/w)  45.4mb/s,
 300us cpu/op,   5.2ms latency
 

 
 Using iostat I can see that with the 6-disk pool I get about 100-200 IO/s per
 disk, while with the 32-disk pool I get only 30-70 IO/s per disk.
 Each CPU is at about 25% in SYS (there are 4 CPUs).
 
 Something is wrong here.

It's possible that you are CPU limited.  I'm guessing that your test
uses only one thread, so that may be the limiting factor.
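
(Illustration only: if the stock varmail.f personality is being used, its
thread count can be raised before the run to test this, e.g.

   filebench> set $nthreads=16

$nthreads is the variable the standard varmail.f exposes; the thread does not
show the actual workload file used.)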

We can get a quick idea of where that CPU is being spent if you can run
'lockstat -kgIW sleep 60' while your test is running, and send us the
first 100 lines of output.  It would be nice to see the output of
'iostat -xnpc 3' while the test is running, too.

--matt