Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-09 Thread Roman Matiyenko
Downloaded, unzipped and flying! It shows the GUID, which is part of
the /dev/rdsk/c0t* device name! Thanks!!! And thanks again! This msg
goes to the group.

root@carbon:~/bin/LSI-SAS2IRCU/SAS2IRCU_P13/sas2ircu_solaris_x86_rel#
./sas2ircu 0 DISPLAY | grep GUID
  GUID: 5000cca225cefd12
  GUID: 5000cca225cf0119
  GUID: 5000cca225cefd0b
  GUID: 5000cca225cf0420
  GUID: 5000cca225cf0517
  GUID: 5000cca225cf0115
  GUID: 5000cca225cf016d
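
A small sketch of how the whole mapping could be scripted (assumptions: three HBAs at sas2ircu controller indexes 0-2, and "Slot #" / "GUID" lines in the DISPLAY output as shown above):

for ctrl in 0 1 2; do
  ./sas2ircu $ctrl DISPLAY | awk -v c=$ctrl '
    /Slot #/ {slot=$NF}
    /GUID/   {print "controller " c " slot " slot " -> /dev/rdsk/c0t" toupper($NF) "d0"}'
done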




On 9 May 2012 15:32, Daniel J. Priem  wrote:
>
> http://www.lsi.com/channel/products/storagecomponents/Pages/LSISAS9211-8i.aspx
>
> select * SUPPORT & DOWNLOADS
> download  SAS2IRCU_P13
>
> best regards
> daniel
>
>
>
> Roman Matiyenko  writes:
>
>> Thanks!
>>
>> The LSI 9211-8i adapters are not recognised by lsiutil. I run this
>> S11 under VMware ESXi with the three PCI devices in pass-through
>> mode. The main (virtual) disk controller is LSI as well, with a VMDK
>> boot disk attached, and it is recognised. Nevertheless, many thanks
>> for trying to help!
>>
>> Roman
>> PS I got your other message with links, will see now...
>>
>> root@carbon:~/bin# ./lsiutil
>>
>> LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009
>>
>> 1 MPT Port found
>>
>>      Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
>>  1.  mpt0              LSI Logic 53C1030 B0      102      01032920     0
>>
>> Select a device:  [1-1 or 0 to quit] 1
>>
>>  1.  Identify firmware, BIOS, and/or FCode
>>  2.  Download firmware (update the FLASH)
>>  4.  Download/erase BIOS and/or FCode (update the FLASH)
>>  8.  Scan for devices
>> 10.  Change IOC settings (interrupt coalescing)
>> 11.  Change SCSI Initiator settings
>> 12.  Change SCSI Target settings
>> 20.  Diagnostics
>> 21.  RAID actions
>> 22.  Reset bus
>> 23.  Reset target
>> 42.  Display operating system names for devices
>> 59.  Dump PCI config space
>> 60.  Show non-default settings
>> 61.  Restore default settings
>> 69.  Show board manufacturing information
>> 99.  Reset port
>>  e   Enable expert mode in menus
>>  p   Enable paged mode
>>  w   Enable logging
>>
>> Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 42
>>
>> mpt0 is /dev/cfg/c4
>>
>>  B___T___L  Type       Operating System Device Name
>>  0   0   0  Disk       /dev/rdsk/c4t0d0s2
>>
>> Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 8
>>
>> 53C1030's host SCSI ID is 7
>>
>>  B___T___L  Type       Vendor   Product          Rev   Negotiated Speed & Width
>>  0   0   0  Disk       VMware   Virtual disk     1.0   Ultra4 Wide, 320 MB/sec
>>
>>
>>
>>
>>
>> On 9 May 2012 15:08, Daniel J. Priem  wrote:
>>> attached.
>>> i didn't know where to download
>>>
>>>
>>>
>>>
>>> Roman Matiyenko  writes:
>>>
>>>> Hi Daniel,
>>>> Thanks. Where do I get lsiutil? I am on Oracle Solaris 11.
>>>> The LSI website says that for the 9211-8i you don't need to install
>>>> drivers, as they come with the Solaris OS. So they don't have
>>>> anything to download for Solaris.
>>>> Roman
>>>>
>>>> On 9 May 2012 14:22, Daniel J. Priem  wrote:
>>>>> Hi,
>>>>>
>>>>> Roman Matiyenko  writes:
>>>>>
>>>>>> Now, if anyone is still reading, I have another question. The new
>>>>>> Solaris 11 device naming convention hides the physical tree from me. I
>>>>>> got just a list of long disk names all starting with "c0" (see below)
>>>>>> but I need to know which disk is connected to which controller so that
>>>>>> I can create two parts of my mirrors to two different controllers in
>>>>>> order to tolerate a single controller failure. I need a way of
>>>>>> figuring the connection path for each disk. Hope I manage to explain
>>>>>> what I want?
>>>>>
>>>>> lsiutil
>>>>> select controller
>>>>> select option 42
>>>>>
>>>>> Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 42
>>>>>
>>>>> mpt2 is /dev/cfg

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-09 Thread Roman Matiyenko
I followed this guide, but instead of 2108it.bin I downloaded the
latest firmware file for the 9211-8i from the LSI website. I now have
three 9211s! :)

http://lime-technology.com/forum/index.php?topic=12767.msg124393#msg124393
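
Not from the guide, but a quick post-flash sanity check might look like this (assuming LSI's sas2flash utility is installed alongside sas2ircu):

# list adapters and their firmware versions; after the re-flash they
# should show up running IT firmware
sas2flash -listall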





On 4 May 2012 18:33, Bob Friesenhahn  wrote:
> On Fri, 4 May 2012, Rocky Shek wrote:
>
>>
>> If I were you, I will not use 9240-8I.
>>
>> I will use 9211-8I as pure HBA with IT FW for ZFS.
>
>
> Is there IT FW for the 9240-8i?
>
> They seem to use the same SAS chipset.
>
> My next system will have 9211-8i with IT FW.  Playing it safe.  Good enough
> for Nexenta is good enough for me.
>
> Bob
>


Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-09 Thread Roman Matiyenko
Thanks for the tips, everybody!

Progress report:

OpenIndiana failed to recognise the LSI 9240-8i's. I installed the 4.7
drivers from the LSI website ("for Solaris 11 and up") but they started
throwing "component failed" messages. So I gave up on the 9240's and
re-flashed them into 9211-8i's ("IT mode"). Solaris 11 (11.11)
recognised the 9211 adapters instantly, and with the default drivers
they have so far shown excellent performance in dd tests on raw disks,
both reading and writing, and also when dd-writing into a zpool built
of 10 x two-way mirrors. The speed is around 1GB/s. There are still
some hiccups in this sequential write process (for 4-5 seconds the
speed suddenly drops on all disks when monitored by iostat, but then
picks back up to the usual 140MB/s per disk). This is so much better
than Solaris 11 with the 9240's, which persistently ran at around
3-4MB/s per disk on a simple dd sequential write. I am pleased with
this performance.
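
For reference, the kind of throughput check described above can be reproduced roughly like this (a sketch: the pool name "tank" and the file path are illustrative, and dd against raw disks, as in the listings further down, destroys their contents):

# sequential write into the zpool, watched per-vdev and per-disk
dd if=/dev/zero of=/tank/ddtest bs=1024k count=100000 &
zpool iostat -v tank 2     # per-vdev and pool-wide MB/s
iostat -xznM 2             # per-disk MB/s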

Now, if anyone is still reading, I have another question. The new
Solaris 11 device naming convention hides the physical tree from me. I
get just a list of long disk names, all starting with "c0" (see below),
but I need to know which disk is connected to which controller so that
I can put the two sides of each mirror on two different controllers in
order to tolerate a single controller failure. I need a way of
figuring out the connection path for each disk. I hope I've managed to
explain what I want.


root@carbon:~# echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c0t5000CCA225CEFC73d0 
  /scsi_vhci/disk@g5000cca225cefc73
   1. c0t5000CCA225CEFD0Bd0 
  /scsi_vhci/disk@g5000cca225cefd0b
   2. c0t5000CCA225CEFD12d0 
  /scsi_vhci/disk@g5000cca225cefd12
   3. c0t5000CCA225CEFEDEd0 
  /scsi_vhci/disk@g5000cca225cefede
   4. c0t5000CCA225CEFEE7d0 
  /scsi_vhci/disk@g5000cca225cefee7
   5. c0t5000CCA225CF016Cd0 
  /scsi_vhci/disk@g5000cca225cf016c
   6. c0t5000CCA225CF016Dd0 
  /scsi_vhci/disk@g5000cca225cf016d
   7. c0t5000CCA225CF016Ed0 
  /scsi_vhci/disk@g5000cca225cf016e
   8. c0t5000CCA225CF023Cd0 
  /scsi_vhci/disk@g5000cca225cf023c
   9. c0t5000CCA225CF042Cd0 
  /scsi_vhci/disk@g5000cca225cf042c
  10. c0t5000CCA225CF050Fd0 
  /scsi_vhci/disk@g5000cca225cf050f
  11. c0t5000CCA225CF0115d0 
  /scsi_vhci/disk@g5000cca225cf0115
  12. c0t5000CCA225CF0119d0 
  /scsi_vhci/disk@g5000cca225cf0119
  13. c0t5000CCA225CF0144d0 
  /scsi_vhci/disk@g5000cca225cf0144
  14. c0t5000CCA225CF0156d0 
  /scsi_vhci/disk@g5000cca225cf0156
  15. c0t5000CCA225CF0167d0 
  /scsi_vhci/disk@g5000cca225cf0167
  16. c0t5000CCA225CF0419d0 
  /scsi_vhci/disk@g5000cca225cf0419
  17. c0t5000CCA225CF0420d0 
  /scsi_vhci/disk@g5000cca225cf0420
  18. c0t5000CCA225CF0517d0 
  /scsi_vhci/disk@g5000cca225cf0517
  19. c0t5000CCA225CF0522d0 
  /scsi_vhci/disk@g5000cca225cf0522
  20. c0t5001517BB27B5896d0 
  /scsi_vhci/disk@g5001517bb27b5896
  21. c0t5001517BB27DCE0Bd0 
  /scsi_vhci/disk@g5001517bb27dce0b
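
One possible cross-check for the controller question above (a sketch; it assumes MPxIO/scsi_vhci multipathing is active, as the /scsi_vhci paths suggest, and that mpathadm is available):

for d in /dev/rdsk/c0t5000CCA225*d0s2; do
  echo "== $d"
  # the initiator port name identifies the HBA port this LUN is reachable through
  mpathadm show lu $d | grep -i 'Initiator Port Name'
done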


[zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Roman Matiyenko
Hi all,

I have a bad bad problem with our brand new server!

The lengthy details are below, but to cut the story short: on the same
hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS
sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and
only 200-240MB/s on the latest Solaris 11.11 (same zpool config). By
writing directly to raw disks I found that in S10 the speed is 140MB/s
of sequential writes per disk (consistent with the combined 1.4GB/s for
my zpool), whereas it is only 24MB/s in Solaris 11 (consistent with a
240MB/s zpool: 10 mirrors at 24MB/s each).
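
For completeness, the pool under test is built along these lines (a sketch, not the exact command; device names are taken from the Solaris 10 listings below, and only the first pairs of the 10 two-way mirrors are spelt out):

zpool create tank \
  mirror c5t8d1 c7t8d1 \
  mirror c5t9d1 c6t9d1 \
  mirror c5t10d1 c6t10d1
# ...and so on for the remaining pairs, 10 two-way mirror vdevs in total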

This must be the controller drivers, right? I downloaded the version
4.7 drivers from the LSI site (they say "for Solaris 10 and later") -
they failed to attach on S11. Version 3.03 worked, but the system would
randomly crash, so I moved my experiments off S11 to S10. However, S10
has only the old implementation of iSCSI, which gives me other
problems, so I decided to give S11 another go.

Would there be any advice in this community?

Many thanks!

Roman

==


root@carbon:~# echo | format | grep Hitachi
  1. c5t8d1 
  2. c5t9d1 
  3. c5t10d1 
  4. c5t11d1 
  5. c5t13d1 
  6. c5t14d1 
  7. c5t15d1 
  9. c6t9d1 
 10. c6t10d1 
 11. c6t11d1 
 12. c6t13d1 
 13. c6t14d1 
 14. c6t15d1 
 15. c7t8d1 
 17. c7t10d1 
 18. c7t11d1 
 19. c7t12d1 
 20. c7t13d1 
 21. c7t14d1 
 22. c7t15d1 



Reading from all disks with dd (one such command is started per disk):
(dd of=/dev/null bs=1024kb if=/dev/rdsk/c7t9d1 &)

# iostat -xznM 2

                     extended device statistics
     r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   614.5    0.0  153.6    0.0  0.0  1.0    0.0    1.6   0  98 c5t8d1
   595.5    0.0  148.9    0.0  0.0  1.0    0.0    1.7   0  99 c7t8d1
  1566.5    0.0  391.6    0.0  0.0  1.0    0.0    0.6   1  96 c6t8d1 # (SSD)
   618.5    0.0  154.6    0.0  0.0  1.0    0.0    1.6   0  99 c6t9d1
   616.5    0.0  154.1    0.0  0.0  1.0    0.0    1.6   0  99 c5t9d1
  1564.0    0.0  391.0    0.0  0.0  1.0    0.0    0.6   1  96 c7t9d1 # (SSD)
   616.0    0.0  154.0    0.0  0.0  1.0    0.0    1.6   0  98 c7t10d1
   554.0    0.0  138.5    0.0  0.0  1.0    0.0    1.8   0  99 c6t10d1
   598.5    0.0  149.6    0.0  0.0  1.0    0.0    1.7   0  99 c5t10d1
   588.5    0.0  147.1    0.0  0.0  1.0    0.0    1.7   0  98 c6t11d1
   590.5    0.0  147.6    0.0  0.0  1.0    0.0    1.7   0  98 c7t11d1
   591.5    0.0  147.9    0.0  0.0  1.0    0.0    1.7   0  99 c5t11d1
   600.5    0.0  150.1    0.0  0.0  1.0    0.0    1.6   0  98 c6t13d1
   617.5    0.0  154.4    0.0  0.0  1.0    0.0    1.6   0  99 c7t12d1
   611.0    0.0  152.8    0.0  0.0  1.0    0.0    1.6   0  99 c5t13d1
   625.0    0.0  156.3    0.0  0.0  1.0    0.0    1.6   0  99 c6t14d1
   592.5    0.0  148.1    0.0  0.0  1.0    0.0    1.7   0  99 c7t13d1
   596.0    0.0  149.0    0.0  0.0  1.0    0.0    1.7   0  99 c5t14d1
   598.5    0.0  149.6    0.0  0.0  1.0    0.0    1.6   0  98 c6t15d1
   618.5    0.0  154.6    0.0  0.0  1.0    0.0    1.6   0  98 c7t14d1
   606.5    0.0  151.6    0.0  0.0  1.0    0.0    1.6   0  98 c5t15d1
   625.0    0.0  156.3    0.0  0.0  1.0    0.0    1.6   0  98 c7t15d1
                     extended device statistics
     r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   620.5    0.0  155.1    0.0  0.0  1.0    0.0    1.6   0  99 c5t8d1
   620.5    0.0  155.1    0.0  0.0  1.0    0.0    1.6   0  99 c7t8d1
  1581.0    0.0  395.2    0.0  0.0  1.0    0.0    0.6   1  96 c6t8d1
   611.5    0.0  152.9    0.0  0.0  1.0    0.0    1.6   0  99 c6t9d1
   587.5    0.0  146.9    0.0  0.0  1.0    0.0    1.7   0  99 c5t9d1
  1580.0    0.0  395.0    0.0  0.0  1.0    0.0    0.6   1  97 c7t9d1
   593.0    0.0  148.2    0.0  0.0  1.0    0.0    1.7   0  99 c7t10d1
   616.0    0.0  154.0    0.0  0.0  1.0    0.0    1.6   0  99 c6t10d1
   601.0    0.0  150.2    0.0  0.0  1.0    0.0    1.6   0  99 c5t10d1
   587.0    0.0  146.7    0.0  0.0  1.0    0.0    1.7   0  99 c6t11d1
   578.5    0.0  144.6    0.0  0.0  1.0    0.0    1.7   0  99 c7t11d1
   624.5    0.0  156.1    0.0  0.0  1.0    0.0    1.6   0  99 c5t11d1
   604.5    0.0  151.1    0.0  0.0  1.0    0.0    1.6   0  99 c6t13d1
   573.5    0.0  143.4    0.0  0.0  1.0    0.0    1.7   0  99 c7t12d1
   609.0    0.0  152.2    0.0  0.0  1.0    0.0    1.6   0  99 c5t13d1
   630.5    0.0  157.6    0.0  0.0  1.0    0.0    1.6   0  99 c6t14d1
   618.5    0.0  154.6    0.0  0.0  1.0    0.0    1.6   0  99 c7t13d1
   633.5    0.0  158.4    0.0  0.0  1.0    0.0    1.6   0  99 c5t14d1
   602.5    0.0  150.6    0.0  0.0  1.0    0.0    1.6   0  99 c6t15d1
   589.5    0.0  147.4    0.0  0.0  1.0    0.0    1.7   0  99 c7t14d1
   603.0    0.0  150.7    0.0  0.0  1.0    0.0    1.6   0  99 c5t15d1
   586.0    0.0  146.5    0.0  0.0  1.0    0.0    1.7   0  99 c7t15d1

WRITING ZEROS TO RAW DISKS
root@carbon:~# dd if=/dev/zero bs=1024kb of=/dev/rdsk/c6t9d1 &
root@carbon:~# dd if=/dev/zero bs=1024kb of=/dev/rdsk/c5t9d1 &
root@carbon:~# dd if=/dev/zero bs=1024kb of=/dev/rdsk/c7t8d1 &
root@carbon:~# dd if=/dev/zer