Hi... First of all, thanks for the reply.
After recovering my system, I tried to perform the tests you asked for.

> It might be good to know what scsiinfo (or similar) says about the size of
> the LUN at the start of your tests. Likewise, show what "fdisk -l" tells
> about the partitions, and finally what "df -k" tells about the capacity of
> the file system.

I have the following multipath devices:

[r...@core26 ~]# dmsetup ls
iscsi06-apoio1    (253, 0) -> dm-0
iscsi06-apoio1p1  (253, 3) -> dm-3
iscsi06-apoio2p1  (253, 2) -> dm-2  (the one which gave problems previously; it was called dm-10)
iscsi06-apoio2    (253, 1) -> dm-1

[r...@core26 ~]# multipath -ll
sda: checker msg is "rdac checker reports path is down"
iscsi06-apoio1 (3600a0b80003ad1e500000f2e49ae6d3e) dm-0 IBM,VirtualDisk
[size=2.7T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=100][active]
 \_ 25:0:0:0 sdb 8:16 [active][ready]
iscsi06-apoio2 (3600a0b80003ad21300000f8649ae6d5b) dm-1 IBM,VirtualDisk
[size=2.7T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=100][active]
 \_ 25:0:0:1 sdc 8:32 [active][ready]

So, we are interested in iscsi06-apoio2 (dm-2, sdc) and in iscsi06-apoio1 (dm-3, sdb).

[r...@core26 ~]# fdisk -l /dev/sdb1

Disk /dev/sdb1: 499.9 GB, 499999983104 bytes
255 heads, 63 sectors/track, 60788 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb1 doesn't contain a valid partition table

[r...@core26 ~]# fdisk -l /dev/sdc1

Disk /dev/sdc1: 499.9 GB, 499999983104 bytes
255 heads, 63 sectors/track, 60788 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc1 doesn't contain a valid partition table

[r...@core26 ~]# df -k
Filesystem                    1K-blocks    Used Available Use% Mounted on
/dev/sda1                      90491396 2008072  83812428   3% /
tmpfs                            524288       0    524288   0% /dev/shm
/dev/mapper/iscsi06-apoio1p1  480618344  202804 456001480   1% /apoio06-1
/dev/mapper/iscsi06-apoio2p1  480618344  202800 456001484   1% /apoio06-2

The sizes, although not exactly the same (but the same is true for the system disk),
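As a quick cross-check (a sketch using only the numbers pasted from the fdisk output above, not the live devices): the byte size fdisk reports and its CHS geometry should agree to within one cylinder if the kernel sees the whole device.

```shell
# Hypothetical sanity check on the fdisk output above: if the kernel saw
# a truncated device, byte size and CHS geometry would disagree by more
# than one cylinder.
reported_bytes=499999983104   # from "fdisk -l /dev/sdb1" above
cylinders=60788
cylinder_bytes=8225280        # 255 heads * 63 sectors * 512 bytes
chs_bytes=$((cylinders * cylinder_bytes))
slack=$((reported_bytes - chs_bytes))
echo "CHS total: $chs_bytes, slack: $slack bytes"
if [ "$slack" -ge 0 ] && [ "$slack" -lt "$cylinder_bytes" ]; then
    echo "geometry consistent"
else
    echo "geometry MISMATCH"
fi
```

Here the slack works out to 1662464 bytes, well under one cylinder (8225280 bytes), so the size fdisk reports is at least self-consistent.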
are very close.

> Then one could compare those sizes to those reported by the kernel. Maybe
> the setup is just wrong, and it takes a while until the end of the device
> is reached.

I do not think the difference I see in the previous commands is big enough to justify a wrong setup. But I'm just guessing and I'm not really an expert.

> Then I would start slowly, i.e. with one iozone running on one client.

I've already performed the same tests with 6 RAID 0 and 6 RAID 1 arrays instead of 2 RAID 10 arrays on similar DS 3300 systems without getting this kind of error. But I could be hitting some kind of limit.

> BTW, what do you want to measure: the kernel throughput, the network
> throughput, the iSCSI throughput, the controller throughput, or the disk
> throughput? You should have some concrete idea before starting the
> benchmark. Also, with just 12 disks I see little sense in having that many
> threads accessing the disk. To shorten a lengthy test, it may be advisable
> to reduce the system memory (iozone recommends creating a file at least
> three times the size of RAM, and even 8 GB on a local disk takes hours).

I want to measure the I/O performance of the RAID in sequential and random writes/reads. What matters for the final user is that he is able to write/read at XXX MB/s. I want to stress the system to find the limit of the iSCSI controllers (this is why I'm starting so many threads). In theory, at the controllers' limit, they should take a long time to deal with the I/O traffic from the different clients, but they are not supposed to die.

Cheers
Goncalo

--
You received this message because you are subscribed to the Google Groups "open-iscsi" group.
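PS: one back-of-the-envelope note on the file-size advice quoted above. Assuming this host's tmpfs (524288 KB in the df output) is the default half of RAM, the three-times-RAM rule gives the following minimum iozone file size. The invocation in the comment is a sketch with flags from the iozone man page, not the command actually used for these tests.

```shell
# Minimum iozone file size per the 3x-RAM rule.
# Assumption: tmpfs (524288 KB in "df -k" above) is the default half of RAM.
tmpfs_kb=524288
ram_mb=$((tmpfs_kb * 2 / 1024))    # ~1024 MB of RAM
min_file_mb=$((ram_mb * 3))
echo "minimum file size: ${min_file_mb} MB"   # 3072 MB = 3 GB
# A matching single-client run might then look like:
#   iozone -i 0 -i 1 -i 2 -r 1m -s ${min_file_mb}m -t 4
# (-i 0/1/2 = sequential write/read and random mix, -r record size,
#  -s file size per thread, -t threads)
```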