> Randread is a single threaded test, it will never do
> more than one
> simultaneous read. So you can't get more reads/sec
> than one disc can
> deliver. You have to have more than one simultaneous
> read to get more.
> Try running several randread at the same time.

Doh, of course. When running 2 in parallel I still got 66 reads/sec per process, 
but 4 and 8 processes only gave 33 reads/sec each. The volume is composed of 9 
drives. While this is in no way an optimized way of testing, 33 r/s * 4 = 132 r/s 
on 9 drives, which should theoretically give 9 * 66 r/s = 594, is still rather 
far off. 
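To make the gap concrete, here is the back-of-the-envelope arithmetic in plain Python (all numbers taken from the runs above):

```python
# Numbers from the randread runs above.
single_disk_iops = 66        # one randread process: 66 reads/sec
procs = 4
per_proc_iops = 33           # observed per process with 4 (or 8) in parallel
drives = 9

observed = procs * per_proc_iops        # aggregate actually achieved
theoretical = drives * single_disk_iops # if all 9 drives served reads at once

print(observed, theoretical)
print(f"achieving {observed / theoretical:.0%} of the theoretical rate")
```

So the array is delivering roughly a fifth of what 9 independent spindles should manage.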

The original problem, which I didn't describe in my first post, is terrible 
resync performance: 6 MB/s. One of the stripes still has the default stripe size 
of 16k; when both had the default stripe size I got about 7 MB/s. An iostat 
looks like this:

device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
md20        93.4   93.2 5977.6 5964.8  0.0  1.0    5.4   0 100
md21       466.6    0.0 5971.2    0.0  0.0  0.3    0.7   0   9
md22         0.0   93.2    0.0 5964.8  0.0  0.9    9.8   0  91
sd31         0.0   20.2    0.0  644.0  0.0  0.2    9.5   0  10
sd32         0.0   19.2    0.0  614.4  0.0  0.2    9.6   0  10
sd33         0.0   25.4    0.0  815.2  0.0  0.2    9.6   0  13
sd34         0.0   19.2    0.0  614.4  0.0  0.2    9.5   0  10
sd35         0.0   19.2    0.0  614.4  0.0  0.2    9.5   0  10
sd37         0.0   19.2    0.0  614.4  0.0  0.2    9.4   0  10
sd39         0.0   25.6    0.0  819.2  0.0  0.2    9.4   0  13
sd40         0.0   19.2    0.0  614.4  0.0  0.2    9.4   0  10
sd42         0.0   19.2    0.0  614.4  0.0  0.2    9.6   0  10
sd43        51.8    0.0  664.8    0.0  0.0  0.0    0.5   0   3
sd44        52.0    0.0  665.6    0.0  0.0  0.0    0.8   0   4
sd45         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd46        51.8    0.0  662.4    0.0  0.0  0.0    0.5   0   3
sd47        51.8    0.0  662.4    0.0  0.0  0.0    0.6   0   3
sd48        51.8    0.0  662.4    0.0  0.0  0.0    0.6   0   3
sd49        51.8    0.0  662.4    0.0  0.0  0.0    0.6   0   3
sd50        51.8    0.0  663.2    0.0  0.0  0.0    0.7   0   4
sd51        51.8    0.0  662.4    0.0  0.0  0.0    0.8   0   4
sd52        51.8    0.0  662.4    0.0  0.0  0.0    0.8   0   4
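As a sanity check, summing the submirror write columns reproduces the ~6 MB/s figure (quick Python; the kw/s values are copied from the sd31..sd42 rows of the iostat above):

```python
# kw/s for the nine sd31..sd42 submirror members, from the iostat above.
kw_per_disk = [644.0, 614.4, 815.2, 614.4, 614.4, 614.4, 819.2, 614.4, 614.4]

total_kw = sum(kw_per_disk)             # total resync write rate in KB/s
print(f"{total_kw:.1f} KB/s total")     # matches the md20/md22 kw/s column
per_disk_mb = (total_kw / len(kw_per_disk)) / 1024
print(f"{per_disk_mb:.2f} MB/s per disk vs ~33 MB/s sequential capability")
```

Each disk is doing well under 1 MB/s of the roughly 33 MB/s it can sustain.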

At first I thought there might be something wrong with the FC switch used for 
the devices on md22, but I got full-speed sequential test results through it, 
etc.

The disks are obviously loafing through the whole process. Should it be this 
slow? Nothing else is using those disks. Any one disk is capable of about 
33 MB/s sequential read (tested).

If I start dd if=/dev/md/dsk/d22 of=/dev/null bs=16k (d22 is the device being 
synced into the mirror) I get about 20 MB/s read, and the resync drops to about 
3.5 MB/s. Same if I try it on d21 or d20 (which is the mirror device). So there 
is clearly bandwidth available. If I recreate d21 with a 1M stripe size, will 
it resync faster?
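For reference, the kind of concurrent-read probe I ran looks like this (a sketch only: on the real box DEV would be the raw device, e.g. /dev/md/rdsk/d22, to bypass the page cache; a scratch file stands in here so the snippet is self-contained):

```shell
# Sketch of a parallel dd read probe. DEV is a scratch file here
# (an assumption for portability, not the original test target).
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=1024k count=8 2>/dev/null

for i in 1 2 3 4; do
  # Start each reader at a different 1 MB offset so requests can overlap.
  dd if="$DEV" of=/dev/null bs=16k skip=$((i * 64)) 2>/dev/null &
done
wait   # if aggregate MB/s grows with reader count, the device can
       # overlap I/O; if not, something is serializing the reads
rm -f "$DEV"
```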

What I'm really wondering about is the 9%/91% busy on d21/d22. When I first 
added d22, with the default stripe size, it wouldn't give any test reads the 
time of day, even though I had an FTP transfer writing at about 7 MB/s. But the 
total bandwidth didn't really exceed 7 MB/s.

Any insights greatly appreciated.

/Samuel
 
 
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss