When I write to a raw device (/dev/rdsk) connected through the Solaris iSCSI 
software initiator, I see poor performance until I have more concurrent I/O 
threads than there are iSCSI connections to the volume.
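
For reference, here is how I am generating the load: the same dd command the 
ps output below shows, just backgrounded once per thread (adjust the device 
path for your own setup):

# dd if=/dev/zero of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k &
# dd if=/dev/zero of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k &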

See the following output for an example when using Sun's MPxIO:

# iscsiadm list target -v
Target: iqn.2001-05.com.equallogic:6-8a0900-0a33d1d01-c436e05358444ac7-v1
        Alias: -
        TPGT: 0
        ISID: 4000002a0001
        Connections: 1
          CID: 0
            IP address (Local): 172.19.51.69:32775
            IP address (Peer): 172.19.50.53:3260

Target: iqn.2001-05.com.equallogic:6-8a0900-0a33d1d01-c436e05358444ac7-v1
        Alias: -
        TPGT: 0
        ISID: 4000002a0000
        Connections: 1
          CID: 0
            IP address (Local): 172.19.51.67:32800
            IP address (Peer): 172.19.50.53:3260
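
(For completeness: I believe the two sessions above, one per local interface, 
come from the initiator's configured-sessions setting. If I remember the 
syntax correctly it is something like

# iscsiadm modify initiator-node -c 2

or, to bind each session to a specific local address,

# iscsiadm modify initiator-node -c 172.19.51.67,172.19.51.69

but check iscsiadm(1M) on your build, since older releases may not support it.)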


Checking performance with two dd writers running, I see the following:

# ps -ef | grep dd
root 2732 2658 0 16:32:45 pts/1 0:02 dd if=/dev/zero
of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k
root 2721 2658 0 16:32:06 pts/1 0:02 dd if=/dev/zero
of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k
# iostat -dxz 2
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd4 0.0 0.0 0.0 0.0 0.0 0.0 5.1 0 0
sd14 0.2 0.0 1.6 0.0 0.0 0.0 7.5 0 0
ssd0 0.0 0.2 0.2 11.1 0.0 0.0 17.8 0 0
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
ssd0 0.0 14.0 0.0 896.9 0.0 2.0 142.6 0 100
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
ssd0 0.0 11.0 0.0 704.0 0.0 2.0 181.7 0 100
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
ssd0 0.0 11.0 0.0 704.1 0.0 2.0 181.7 0 100
^C
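
Those numbers are internally consistent: 11 w/s * 64 KB = 704 KB/s, which 
matches kw/s, and with actv pinned at 2.0 (one outstanding I/O per dd) a 
svc_t of ~182 ms allows each thread only about 1/0.182 = 5.5 writes/sec, 
i.e. ~11 w/s total. Each 64k write appears to wait out a full round trip 
before the next one is issued.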


Now, starting up a third dd to the same volume:
# ps -ef | grep dd
root 2732 2658 1 16:32:45 pts/1 0:02 dd if=/dev/zero
of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k
root 2721 2658 1 16:32:06 pts/1 0:02 dd if=/dev/zero
of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k
root 2771 2658 0 16:35:04 pts/1 0:00 dd if=/dev/zero
of=/dev/rdsk/c4t0690A018D0D1330AC74A445853E036C4d0s6 bs=64k


Checking performance yields:
# iostat -dxz 2
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd4 0.0 0.0 0.0 0.0 0.0 0.0 5.1 0 0
sd14 0.2 0.0 1.6 0.0 0.0 0.0 7.5 0 0
ssd0 0.0 0.2 0.2 12.1 0.0 0.0 17.2 0 0
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
ssd0 0.0 944.2 0.0 60431.0 0.0 2.9 3.1 0 100
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
ssd0 0.0 902.9 0.0 57787.7 0.0 2.9 3.2 0 100
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
ssd0 0.0 841.8 0.0 53876.3 0.0 2.9 3.5 0 100
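
Again the arithmetic checks out: 944 w/s * 64 KB = ~60.4 MB/s, with svc_t 
down from ~182 ms to ~3.1 ms at essentially the same queue depth (actv 2.9). 
Adding one more writer than there are connections improved throughput by 
roughly 85x.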

I have reproduced this without Solaris MPxIO and can provide the same data as 
above if necessary. I have also tried dd writes to different partitions and 
seen the same behavior.

I believe I am seeing this issue with Solaris Volume Manager as well. If I 
connect to two separate volumes and configure the whole volumes as mirrors of 
each other, they resync at around 400-500 KB/s. If I divide those same 
volumes into two slices each and mirror the slices pairwise, performance 
increases to at least 30 MB/s. Also, if I connect to both of these volumes 
from a Linux server and run a single-threaded test with the same dd command, 
I see 100 MB/s to the volume (again watching iostat -dx).
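
To be concrete about the SVM comparison, this is roughly what I set up (a 
sketch from memory; c4t<vol1>d0 and c4t<vol2>d0 are placeholders for the two 
iSCSI volumes' actual GUID device names):

Slow case, whole volumes mirrored (resync at ~400-500 KB/s):
# metainit d11 1 1 c4t<vol1>d0s0
# metainit d12 1 1 c4t<vol2>d0s0
# metainit d10 -m d11
# metattach d10 d12

Fast case, same volumes split into two slices and mirrored pairwise (the two 
resyncs together reach 30+ MB/s):
# metainit d21 1 1 c4t<vol1>d0s0
# metainit d22 1 1 c4t<vol2>d0s0
# metainit d20 -m d21
# metattach d20 d22
# metainit d31 1 1 c4t<vol1>d0s1
# metainit d32 1 1 c4t<vol2>d0s1
# metainit d30 -m d31
# metattach d30 d32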

I have engaged Sun support, but nothing has come of that yet. I updated to the 
Solaris 10 recommended patch cluster at the beginning of last week, and at the 
same time I updated the OBP on my Sun Fire V250 server to the latest release 
(4.17.3, I think).
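
(If the exact OBP revision matters I can confirm it; on the running system it 
should show up with

# prtconf -V

or with .version at the ok prompt.)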

Any clues as to what might be causing this?

Thanks.

Patrick
 
 