Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
Now, if anyone is still reading, I have another question. The new Solaris 11 device naming convention hides the physical tree from me. I get just a list of long disk names, all starting with c0 (see below), but I need to know which disk is connected to which controller, so that I can attach the two halves of each mirror to two different controllers and thereby tolerate a single controller failure. I need a way of figuring out the connection path for each disk. I hope I managed to explain what I want.

See diskinfo(1M), for example:

$ diskinfo -T bay -o Rc -h
HDD00  -
HDD01  -
HDD02  c0t5000CCA00AC87F54d0
HDD03  c0t5000CCA00AA95838d0
HDD04  c0t5000CCA01510ECC0d0
HDD05  c0t5000CCA01515EE78d0
HDD06  c0t5000CCA01512DA3Cd0
HDD07  c0t5000CCA00AB3E1C8d0
HDD08  c0t5000CCA0151C1D18d0
HDD09  c0t5000CCA0151F7E08d0
HDD10  c0t5000CCA0151C7CA8d0
HDD11  c0t5000CCA00AA9D570d0
HDD12  c0t5000CCA0151CB180d0
HDD13  c0t5000CCA015208C98d0
HDD14  c0t5000CCA00AA97F04d0
HDD15  c0t5000CCA0151A287Cd0
HDD16  c0t5000CCA00AAA1544d0
HDD17  c0t5000CCA01521070Cd0
HDD18  c0t5000CCA00AA97EF4d0
HDD19  c0t5000CCA015214F84d0
HDD20  c0t5000CCA015214844d0
HDD21  c0t5000CCA00AAAD154d0
HDD22  c0t5000CCA00AA95558d0
HDD23  c0t5000CCA00AAA0D1Cd0

In your case you will probably have to put a configuration in place for your disk slots (on Oracle's hardware it works out of the box) - go to support.oracle.com and look for the document: How To: Selecting a Physical Slot for a SAS Device with a WWN for an Oracle Solaris 11 Installation [ID 1411444.1]

PS: there is also the zpool status -l option, which is cool:

$ zpool status -l cwafseng3-0
  pool: pool-0
 state: ONLINE
  scan: scrub canceled on Thu Apr 12 13:52:13 2012
config:

        NAME                                                          STATE  READ WRITE CKSUM
        pool-0                                                        ONLINE    0     0     0
          raidz1-0                                                    ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD02/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD23/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD22/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD21/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD20/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD19/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD17/disk  ONLINE    0     0     0
            /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD15/disk  ONLINE    0     0     0

errors: No known data errors

Best regards,
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
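The two-column diskinfo listing above is easy to consume from a script when building a bay-to-disk map. A minimal sketch (Python; it assumes exactly the layout pasted above, with "-" marking an empty bay - that format is an assumption based on this thread, not a documented contract):

```python
# Parse `diskinfo -T bay -o Rc -h` style output into a {bay: disk} map.
# Empty bays are reported as "-" and are skipped.

def parse_bays(diskinfo_output):
    """Return {bay_label: device_name} for occupied bays only."""
    bays = {}
    for line in diskinfo_output.strip().splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] != "-":
            bays[parts[0]] = parts[1]
    return bays

sample = """\
HDD00  -
HDD02  c0t5000CCA00AC87F54d0
HDD03  c0t5000CCA00AA95838d0
"""
print(parse_bays(sample))
```

The same map can then be joined against zpool status output to check which bay each vdev member lives in.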
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
Thanks for the tips, everybody!

Progress report: OpenIndiana failed to recognise the LSI 9240-8i's. I installed the 4.7 drivers from the LSI website (for Solaris 11 and up) but it started throwing "component failed" messages. So I gave up on the 9240's and re-flashed them into 9211-8i's (IT mode). Solaris 11 (11.11) recognised the 9211 adapters instantly, and so far they show perfect performance with the default drivers: dd tests on the raw disks, both reading and writing, and dd writing into a zpool built of 10 two-way mirrors, all at around 1GB/s. There are still some hiccups in the sequential write process (for 4-5 seconds the speed suddenly drops on all disks when monitored by iostat, then picks back up to the usual 140MB/s per disk). But this is so much better than Solaris 11 with the 9240's, which persistently ran at around 3-4MB/s per disk on a simple dd sequential write. I am pleased with this performance.

Now, if anyone is still reading, I have another question. The new Solaris 11 device naming convention hides the physical tree from me. I get just a list of long disk names, all starting with c0 (see below), but I need to know which disk is connected to which controller, so that I can attach the two halves of each mirror to two different controllers and thereby tolerate a single controller failure. I need a way of figuring out the connection path for each disk. I hope I managed to explain what I want.

root@carbon:~# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA225CEFC73d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cefc73
       1. c0t5000CCA225CEFD0Bd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cefd0b
       2. c0t5000CCA225CEFD12d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cefd12
       3. c0t5000CCA225CEFEDEd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cefede
       4. c0t5000CCA225CEFEE7d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cefee7
       5. c0t5000CCA225CF016Cd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf016c
       6. c0t5000CCA225CF016Dd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf016d
       7. c0t5000CCA225CF016Ed0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf016e
       8. c0t5000CCA225CF023Cd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf023c
       9. c0t5000CCA225CF042Cd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf042c
      10. c0t5000CCA225CF050Fd0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf050f
      11. c0t5000CCA225CF0115d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0115
      12. c0t5000CCA225CF0119d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0119
      13. c0t5000CCA225CF0144d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0144
      14. c0t5000CCA225CF0156d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0156
      15. c0t5000CCA225CF0167d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0167
      16. c0t5000CCA225CF0419d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0419
      17. c0t5000CCA225CF0420d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0420
      18. c0t5000CCA225CF0517d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0517
      19. c0t5000CCA225CF0522d0 <ATA-Hitachi HUA72303-A5C0-2.73TB>
          /scsi_vhci/disk@g5000cca225cf0522
      20. c0t5001517BB27B5896d0 <ATA-INTEL SSDSC2CW24-400i-223.57GB>
          /scsi_vhci/disk@g5001517bb27b5896
      21. c0t5001517BB27DCE0Bd0 <ATA-INTEL SSDSC2CW24-400i-223.57GB>
          /scsi_vhci/disk@g5001517bb27dce0b
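Once the disk-to-controller mapping is known, splitting each mirror across two controllers is just a matter of zipping the two per-controller disk lists into mirror vdevs. A small illustrative sketch (Python; the pool name and the short disk lists are hypothetical placeholders, and it only prints the command rather than running it):

```python
# Build a `zpool create` command whose mirror vdevs each pair one disk from
# controller A with one disk from controller B, so that a single controller
# failure degrades every mirror rather than destroying any of them.

def mirrored_pool_cmd(pool, ctrl_a_disks, ctrl_b_disks):
    """Return the zpool create command line as a string."""
    assert len(ctrl_a_disks) == len(ctrl_b_disks), "need equal halves"
    vdevs = []
    for a, b in zip(ctrl_a_disks, ctrl_b_disks):
        vdevs += ["mirror", a, b]   # each mirror spans both controllers
    return " ".join(["zpool", "create", pool] + vdevs)

# Hypothetical example with two disks per controller:
print(mirrored_pool_cmd("tank",
                        ["c0t5000CCA225CEFC73d0", "c0t5000CCA225CEFD0Bd0"],
                        ["c0t5000CCA225CF016Cd0", "c0t5000CCA225CF016Dd0"]))
```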
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
I followed this guide, but instead of 2108it.bin I downloaded the latest firmware file for the 9211-8i from the LSI web site. I now have three 9211's! :)

http://lime-technology.com/forum/index.php?topic=12767.msg124393#msg124393

On 4 May 2012 18:33, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

  On Fri, 4 May 2012, Rocky Shek wrote:
    If I were you, I would not use the 9240-8i; I would use the 9211-8i as a pure HBA with IT FW for ZFS.

  Is there IT FW for the 9240-8i? They seem to use the same SAS chipset. My next system will have a 9211-8i with IT FW. Playing it safe: good enough for Nexenta is good enough for me.

  Bob
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
Downloaded, unzipped and flying! It shows the GUID, which is part of the /dev/rdsk/c0t* name! Thanks!!! And thanks again! This msg goes to the group.

root@carbon:~/bin/LSI-SAS2IRCU/SAS2IRCU_P13/sas2ircu_solaris_x86_rel# ./sas2ircu 0 DISPLAY | grep GUID
GUID: 5000cca225cefd12
GUID: 5000cca225cf0119
GUID: 5000cca225cefd0b
GUID: 5000cca225cf0420
GUID: 5000cca225cf0517
GUID: 5000cca225cf0115
GUID: 5000cca225cf016d

On 9 May 2012 15:32, Daniel J. Priem daniel.pr...@disy.net wrote:

http://www.lsi.com/channel/products/storagecomponents/Pages/LSISAS9211-8i.aspx
select SUPPORT & DOWNLOADS, download SAS2IRCU_P13

best regards
daniel

Roman Matiyenko rmatiye...@gmail.com writes:

Thanks! The LSI 9211-8i's are not recognised by lsiutil. I run this S11 under VMware ESXi with the three PCI devices in pass-through mode. The main (virtual) disk controller is LSI as well, with a VMDK boot disk attached, and that one is recognised. Nevertheless, many thanks for trying to help!

Roman

PS I got your other message with the links, will see now...

root@carbon:~/bin# ./lsiutil

LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009

1 MPT Port found

     Port Name      Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
 1.  mpt0           LSI Logic 53C1030 B0      102      01032920     0

Select a device:  [1-1 or 0 to quit] 1

 1.  Identify firmware, BIOS, and/or FCode
 2.  Download firmware (update the FLASH)
 4.  Download/erase BIOS and/or FCode (update the FLASH)
 8.  Scan for devices
10.  Change IOC settings (interrupt coalescing)
11.  Change SCSI Initiator settings
12.  Change SCSI Target settings
20.  Diagnostics
21.  RAID actions
22.  Reset bus
23.  Reset target
42.  Display operating system names for devices
59.  Dump PCI config space
60.  Show non-default settings
61.  Restore default settings
69.  Show board manufacturing information
99.  Reset port
 e   Enable expert mode in menus
 p   Enable paged mode
 w   Enable logging

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 42

mpt0 is /dev/cfg/c4

 B___T___L  Type       Operating System Device Name
 0   0   0  Disk       /dev/rdsk/c4t0d0s2

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 8

53C1030's host SCSI ID is 7

 B___T___L  Type     Vendor   Product          Rev    Negotiated Speed  Width
 0   0   0  Disk     VMware   Virtual disk     1.0    Ultra4 Wide, 320 MB/sec

On 9 May 2012 15:08, Daniel J. Priem daniel.pr...@disy.net wrote:

attached. i didn't know where to download

Roman Matiyenko rmatiye...@gmail.com writes:

Hi Daniel,
Thanks. Where do I get lsiutil? I am on Oracle Solaris 11. The LSI website says that for the 9211-8i you don't need to install drivers, as they come with the Solaris OS, so they don't have anything to download for Solaris.
Roman

On 9 May 2012 14:22, Daniel J. Priem daniel.pr...@disy.net wrote:

Hi,

Roman Matiyenko rmatiye...@gmail.com writes:
Now, if anyone is still reading, I have another question. [...] I need a way of figuring out the connection path for each disk.

lsiutil
select controller
select option 42

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 42

mpt2 is /dev/cfg/c8

 B___T___L  Type       Operating System Device Name
 0   0   0  Disk       /dev/rdsk/c8t0d0s2
 0   1   0  Disk       /dev/rdsk/c8t1d0s2
 0   2   0  Tape       /dev/rmt/0
 0   3   0  Disk       /dev/rdsk/c8t3d0s2
 0   4   0  Disk       /dev/rdsk/c8t4d0s2

Main menu, select an option:  [1-99 or e/p/w or 0 to quit]

Best Regards
Daniel

--
disy Informationssysteme GmbH
Daniel Priem
Senior network and system administrator
Tel: +49 721 1 600 6-000, Fax: -05, E-Mail: daniel.pr...@disy.net
Registered office: Erbprinzenstr. 4-12, 76133 Karlsruhe
Commercial register: Amtsgericht Mannheim, HRB 107964
Managing director: Claus Hofmann
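As Roman noticed above, the GUID reported by sas2ircu is the lower-case WWN embedded in the Solaris c0t...d0 device name, so the disk-to-controller map falls out of a substring match. A sketch (Python; it assumes you have already run ./sas2ircu <n> DISPLAY for each controller and collected the GUID values per controller number):

```python
# Map each c0t<WWN>d0 disk to the controller that reports its GUID.
# The GUID printed by sas2ircu is the lower-case WWN embedded in the
# Solaris device name, so a case-insensitive substring match suffices.

def disk_to_controller(disks, guids_by_ctrl):
    """Return {disk_name: controller_number} for every matched disk."""
    mapping = {}
    for disk in disks:
        for ctrl, guids in guids_by_ctrl.items():
            if any(g.lower() in disk.lower() for g in guids):
                mapping[disk] = ctrl
                break
    return mapping

# GUIDs per controller, as collected from `./sas2ircu <n> DISPLAY | grep GUID`:
guids_by_ctrl = {0: ["5000cca225cefd12", "5000cca225cf0119"],
                 1: ["5000cca225cefc73"]}
disks = ["c0t5000CCA225CEFD12d0", "c0t5000CCA225CEFC73d0"]
print(disk_to_controller(disks, guids_by_ctrl))
# {'c0t5000CCA225CEFD12d0': 0, 'c0t5000CCA225CEFC73d0': 1}
```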
[zfs-discuss] ZFS performance on LSI 9240-8i?
Hi all,

I have a bad, bad problem with our brand new server! The lengthy details are below, but to cut the story short: on the same hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and only 200-240MB/s on the latest Solaris 11.11 (same zpool config). By writing directly to raw disks I found that on S10 the speed is 140MB/s of sequential writes per disk (consistent with the combined 1.4GB/s for my zpool), whereas it is only 24MB/s on Solaris 11 (consistent with the 240MB/s zpool: 10 mirrors at 24MB/s each). This must be the controller drivers, right? I downloaded drivers version 4.7 off the LSI site (it says for Solaris 10 and later) - they failed to attach on S11. Version 3.03 worked, but the system would randomly crash, so I moved my experiments off S11 to S10. However, S10 has only the old implementation of iSCSI, which gives me other problems, so I decided to give S11 another go. Would there be any advice in this community?

Many thanks!
Roman

==

root@carbon:~# echo | format | grep Hitachi
       1. c5t8d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       2. c5t9d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       3. c5t10d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       4. c5t11d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       5. c5t13d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       6. c5t14d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       7. c5t15d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
       9. c6t9d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      10. c6t10d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      11. c6t11d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      12. c6t13d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      13. c6t14d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      14. c6t15d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      15. c7t8d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      17. c7t10d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      18. c7t11d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      19. c7t12d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      20. c7t13d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      21. c7t14d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>
      22. c7t15d1 <ATA-Hitachi HUA72303-A5C0-2.73TB>

Reading with dd from all disks (dd of=/dev/null bs=1024kb if=/dev/rdsk/c7t9d1, etc.):

# iostat -xznM 2
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  614.5    0.0  153.6    0.0  0.0  1.0    0.0    1.6   0  98 c5t8d1
  595.5    0.0  148.9    0.0  0.0  1.0    0.0    1.7   0  99 c7t8d1
 1566.5    0.0  391.6    0.0  0.0  1.0    0.0    0.6   1  96 c6t8d1   # (SSD)
  618.5    0.0  154.6    0.0  0.0  1.0    0.0    1.6   0  99 c6t9d1
  616.5    0.0  154.1    0.0  0.0  1.0    0.0    1.6   0  99 c5t9d1
 1564.0    0.0  391.0    0.0  0.0  1.0    0.0    0.6   1  96 c7t9d1   # (SSD)
  616.0    0.0  154.0    0.0  0.0  1.0    0.0    1.6   0  98 c7t10d1
  554.0    0.0  138.5    0.0  0.0  1.0    0.0    1.8   0  99 c6t10d1
  598.5    0.0  149.6    0.0  0.0  1.0    0.0    1.7   0  99 c5t10d1
  588.5    0.0  147.1    0.0  0.0  1.0    0.0    1.7   0  98 c6t11d1
  590.5    0.0  147.6    0.0  0.0  1.0    0.0    1.7   0  98 c7t11d1
  591.5    0.0  147.9    0.0  0.0  1.0    0.0    1.7   0  99 c5t11d1
  600.5    0.0  150.1    0.0  0.0  1.0    0.0    1.6   0  98 c6t13d1
  617.5    0.0  154.4    0.0  0.0  1.0    0.0    1.6   0  99 c7t12d1
  611.0    0.0  152.8    0.0  0.0  1.0    0.0    1.6   0  99 c5t13d1
  625.0    0.0  156.3    0.0  0.0  1.0    0.0    1.6   0  99 c6t14d1
  592.5    0.0  148.1    0.0  0.0  1.0    0.0    1.7   0  99 c7t13d1
  596.0    0.0  149.0    0.0  0.0  1.0    0.0    1.7   0  99 c5t14d1
  598.5    0.0  149.6    0.0  0.0  1.0    0.0    1.6   0  98 c6t15d1
  618.5    0.0  154.6    0.0  0.0  1.0    0.0    1.6   0  98 c7t14d1
  606.5    0.0  151.6    0.0  0.0  1.0    0.0    1.6   0  98 c5t15d1
  625.0    0.0  156.3    0.0  0.0  1.0    0.0    1.6   0  98 c7t15d1
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  620.5    0.0  155.1    0.0  0.0  1.0    0.0    1.6   0  99 c5t8d1
  620.5    0.0  155.1    0.0  0.0  1.0    0.0    1.6   0  99 c7t8d1
 1581.0    0.0  395.2    0.0  0.0  1.0    0.0    0.6   1  96 c6t8d1
  611.5    0.0  152.9    0.0  0.0  1.0    0.0    1.6   0  99 c6t9d1
  587.5    0.0  146.9    0.0  0.0  1.0    0.0    1.7   0  99 c5t9d1
 1580.0    0.0  395.0    0.0  0.0  1.0    0.0    0.6   1  97 c7t9d1
  593.0    0.0  148.2    0.0  0.0  1.0    0.0    1.7   0  99 c7t10d1
  616.0    0.0  154.0    0.0  0.0  1.0    0.0    1.6   0  99 c6t10d1
  601.0    0.0  150.2    0.0  0.0  1.0    0.0    1.6   0  99 c5t10d1
  587.0    0.0  146.7    0.0  0.0  1.0    0.0    1.7   0  99 c6t11d1
  578.5    0.0  144.6    0.0  0.0  1.0    0.0    1.7   0  99 c7t11d1
  624.5    0.0  156.1    0.0  0.0  1.0    0.0    1.6   0  99 c5t11d1
  604.5    0.0  151.1    0.0  0.0  1.0    0.0    1.6   0  99 c6t13d1
  573.5    0.0  143.4    0.0  0.0  1.0    0.0    1.7   0  99 c7t12d1
  609.0    0.0  152.2    0.0  0.0  1.0    0.0    1.6   0  99 c5t13d1
  630.5    0.0  157.6    0.0  0.0  1.0    0.0    1.6   0  99 c6t14d1
  618.5    0.0
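The figures in this post are self-consistent: a pool of N two-way mirrors streams sequential writes at roughly N times the per-disk throughput, since both sides of each mirror are written in parallel and mirrors do not halve streaming bandwidth. A quick check of the numbers quoted above:

```python
# Sanity-check the throughput figures quoted in this post: a pool of N
# two-way mirrors streams sequential writes at roughly N * per-disk rate.
mirrors = 10
s10_per_disk_mb = 140   # raw-disk dd result on Solaris 10, MB/s
s11_per_disk_mb = 24    # raw-disk dd result on Solaris 11, MB/s

print(mirrors * s10_per_disk_mb)  # 1400, matching the 1.4GB/s pool figure
print(mirrors * s11_per_disk_mb)  # 240, matching the degraded S11 pool figure
```

This consistency is what points the finger at the per-disk path (i.e. the controller driver) rather than at ZFS itself.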
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote:

  Hi all, I have a bad bad problem with our brand new server! The lengthy details are below but to cut the story short, on the same hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and only 200-240MB/s on latest Solaris 11.11 (same zpool config). [...] Would there be any advice in this community?

Look at one of the other distros; OpenIndiana is a good first step.
 -- richard
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
Hi,

S11 comes with its own driver for some LSI SAS HBAs, but on the HCL I only see:

LSI SAS 9200-8e
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi_logic/sol_11_11_11/9409.html

LSI MegaRAID SAS 9260-8i
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi/sol_10_10_09/3264.html

LSI 6Gb SAS2008 daughtercard
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi/sol_10_10_09/3263.html

regards

On 5/4/2012 8:25 AM, Roman Matiyenko wrote:

  Hi all, I have a bad bad problem with our brand new server! The lengthy details are below but to cut the story short, on the same hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and only 200-240MB/s on latest Solaris 11.11 (same zpool config). [...] Would there be any advice in this community?
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
Roman,

If I were you, I would not use the 9240-8i; I would use the 9211-8i as a pure HBA with IT FW for ZFS.

Rocky

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Richard Elling
Sent: Friday, May 04, 2012 8:00 AM
To: Roman Matiyenko
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote:

  Hi all, I have a bad bad problem with our brand new server! The lengthy details are below but to cut the story short, on the same hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and only 200-240MB/s on latest Solaris 11.11 (same zpool config). [...] Would there be any advice in this community?

Look at one of the other distros; OpenIndiana is a good first step.
 -- richard
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
On Fri, 4 May 2012, Rocky Shek wrote:

  If I were you, I would not use the 9240-8i; I would use the 9211-8i as a pure HBA with IT FW for ZFS.

Is there IT FW for the 9240-8i? They seem to use the same SAS chipset. My next system will have a 9211-8i with IT FW. Playing it safe: good enough for Nexenta is good enough for me.

Bob
Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
Hi,

We had several bad experiences with LSI cards (LSI 3081E, LSI SAS84016E), even with the official Solaris drivers provided by LSI. In the end we used the LSI SAS9201-16i card:

http://www.lsi.com/channel/france/products/storagecomponents/Pages/LSISAS9201-16i.aspx

This one works as expected on Nexenta and OpenIndiana.

Best regards,
Hugues

----- Original message -----
From: Roman Matiyenko rmatiye...@gmail.com
Sent: Fri 04-05-2012 14:25
Subject: [zfs-discuss] ZFS performance on LSI 9240-8i?
To: zfs-discuss@opensolaris.org

  Hi all, I have a bad bad problem with our brand new server! The lengthy details are below but to cut the story short, on the same hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and only 200-240MB/s on latest Solaris 11.11 (same zpool config). [...] Would there be any advice in this community?