On 2 Apr 2009 at 18:19, Gonçalo Borges wrote:

[...]
> I have the following multipath devices:
[...]
> [r...@core26 ~]# multipath -ll
> sda: checker msg is "rdac checker reports path is down"
> iscsi06-apoio1 (3600a0b80003ad1e500000f2e49ae6d3e) dm-0 IBM,VirtualDisk
> [size=2.7T][features=1 queue_if_no_path][hwhandler=0]

Very interesting: Our SAN system allows only 2048 GB of storage per LUN.
Looking into the SCSI protocol, it seems there is a 32-bit number of blocks
(512 bytes each) used to report the LUN capacity. Thus roughly 4 Gi blocks
times 0.5 kB makes 2 TB. I wonder how your system represents 2.7 TB in the
SCSI protocol.
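
(A quick sanity check of that limit, assuming it comes from the 32-bit block
count returned by the 10-byte READ CAPACITY command:

  # 2^32 blocks of 512 bytes each
  echo $((2**32 * 512))
  # -> 2199023255552 bytes, i.e. 2 TiB

Anything larger needs the 16-byte READ CAPACITY variant, which carries a
64-bit block count, so presumably your target and initiator already use that.)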

[...]
> [r...@core26 ~]# fdisk -l /dev/sdb1
> Disk /dev/sdb1: 499.9 GB, 499999983104 bytes

Isn't that a bit small for 2.7TB? I think you should use fdisk on the disk,
not on the partition!
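
For example (assuming /dev/sdb is one of the paths behind that multipath
device; running it on the dm device works just as well):

  fdisk -l /dev/sdb
  fdisk -l /dev/mapper/iscsi06-apoio1

That should at least show the kernel's idea of the full disk size and where
the partition starts and ends.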

> 255 heads, 63 sectors/track, 60788 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk /dev/sdb1 doesn't contain a valid partition table

See above!
[...]
> [r...@core26 ~]# df -k
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda1             90491396   2008072  83812428   3% /
> tmpfs                   524288         0    524288   0% /dev/shm
> /dev/mapper/iscsi06-apoio1p1
>                      480618344    202804 456001480   1% /apoio06-1
> /dev/mapper/iscsi06-apoio2p1
>                      480618344    202800 456001484   1% /apoio06-2
> 
> The sizes, although not exactly the same (but that is also the case for the
> system disk), are very close.

So you have roughly 500GB in use on a 2.7TB LUN.

> 
> 
> 
> > Then one could compare those sizes to those reported by the kernel. Maybe
> > the setup is just wrong, and it takes a while until the end of the device
> > is reached.
> >
> 
> 
> I do not think the difference I see in previous commands is big enough to
> justify a wrong setup. But I'm just guessing and I'm not really an expert.

It now depends on where the partition is located on the disk (use a corrected
fdisk invocation to find out).
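
A quick way to compare what the kernel thinks the sizes are (blockdev is part
of util-linux; the -u switch makes fdisk print start/end in sectors):

  blockdev --getsize64 /dev/mapper/iscsi06-apoio1
  blockdev --getsize64 /dev/mapper/iscsi06-apoio1p1
  fdisk -lu /dev/mapper/iscsi06-apoio1

If the partition ends beyond the real end of the LUN, you would only notice
once writes reach that point.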

> 
> 
> >
> > Then I would start slowly, i.e. with one iozone running on one client.
> >
> 
> 
> I've already performed the same tests with 6 Raid 0 and 6 Raid 1 instead of
> 2 Raid 10 in similar DS 3300 systems without getting this kind of error. But
> probably I could be hitting some kind of limit...
> 
> 
> >
> > BTW, what do you want to measure: the kernel throughput, the network
> > throughput, the iSCSI throughput, the controller throughput, or the disk
> > throughput? You should have some concrete idea before starting the
> > benchmark. Also with just 12 disks I see little sense in having that many
> > threads accessing the disk. To shorten a lengthy test, it may be advisable
> > to reduce the system memory (iozone recommends creating a file size of at
> > least three times the amount of RAM, and even 8GB on a local disk takes
> > hours to perform)
> 
> 
> I want to measure the I/O performance for the RAID in sequential and random
> writes/reads. What matters for the final user is that he is able to
> write/read at XXX MB/s. I want to stress the system to know the limit of the
> iSCSI controllers (this is why I'm starting so many threads). In theory, at
> the controllers' limit, they should take a long time to deal with the I/O
> traffic from the different clients, but they are not supposed to die.

I was able to reach the limit of our system (380MB/s over 4Gb FC) with a
single machine. As a summary: Performance is best if you write large blocks
(1MB) sequentially. Anything else is bad.
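
If you want a quick large-block sequential baseline next to iozone, a plain
dd run is often enough (the mount point is taken from your df output; size
and block count are just example values, and oflag=direct needs a reasonably
recent coreutils):

  # sequential 1MB writes, bypassing the page cache
  dd if=/dev/zero of=/apoio06-1/ddtest bs=1M count=4096 oflag=direct
  # sequential 1MB reads of the same file
  dd if=/apoio06-1/ddtest of=/dev/null bs=1M iflag=direct

If that already saturates the controllers, adding more threads mostly adds
seeks, not throughput.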

Regards,
Ulrich

