We have both 2540s and 6140s. We favor the 2540 with twelve 300 GB SAS drives; in some configurations we add a tray with an additional 12 drives. We have about forty 2540 controller/tray configurations between our two sites. We are primarily an Oracle shop for databases but are starting our first MySQL configuration shortly. We are also a SAM-QFS shop. For both QFS and Oracle servers we use T2000/5220 servers with 32 GB of RAM.
For our biggest PeopleSoft DB, which is about 100 GB, we run the following storage configuration. Two 2+2 RAID sets with a 128K segment size sit within the 2540, one owned by controller A and the other by controller B. These are striped with Solaris Volume Manager using a 512K interlace (this lets us use the bandwidth of both the A and B controllers, and it outperforms a single 4+4). We then mirror that striped set with Solaris Volume Manager to another 2540 with the same RAID configuration at our secondary site. The whole thing is mounted as /oracle/data with forcedirectio enabled. A second 1+1 RAID set also exists on both 2540s, likewise mirrored with SVM and mounted as /oracle/apps. That mount does NOT use forcedirectio; it holds the Oracle binaries and is the target for local exports/dumps/RMAN work. We have followed the Oracle SAME document/methodology. This layout uses 10 drives per 2540, leaving 2 spares.

For our "large file" SAM-QFS file systems we take two 2540 controllers, each with a tray. We create four 4+1 RAID sets with a 128K segment size per controller/tray, plus one 1+1 set with a 16K segment size, which leaves two spares. Since there are two of these with the same configuration, we get an 8-way stripe in QFS for data using a 256K DAU. The two 1+1s are striped together for the metadata.

When performing read tests with a bandwidth test application we can saturate the two 4Gb HBA links in a 5220 on separate PCIe cards; writes don't saturate both links. All of our tests so far have measured throughput (MB/s) rather than IOPS, and we are now working out how to run effective IOPS tests against a database. I have included the QFS information because we do more testing there.

We have yet to find something the 2540 won't do (mainframe excluded; use a 9990 there). By striping across multiple units I believe any I/O requirement can be met. I come from an EMC Symmetrix background and was looking for more cost-effective, better-performing solutions.
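In case it helps anyone reproduce the Oracle layout, the SVM side boils down to something like the sketch below. The device names and metadevice numbers are made up for illustration (assume the two local 2+2 LUNs show up as c2t0d0/c3t0d0 and the secondary-site 2540's LUNs as c4t0d0/c5t0d0); they are not our actual ones.

    # stripe the two 2+2 LUNs (one per controller) with a 512K interlace
    metainit d10 1 2 c2t0d0s0 c3t0d0s0 -i 512k
    # build the matching stripe from the remote 2540's LUNs
    metainit d11 1 2 c4t0d0s0 c5t0d0s0 -i 512k
    # mirror the two stripes, then attach the remote submirror
    metainit d20 -m d10
    metattach d20 d11
    # create the file system and mount it with forcedirectio
    newfs /dev/md/rdsk/d20
    # /etc/vfstab entry:
    # /dev/md/dsk/d20  /dev/md/rdsk/d20  /oracle/data  ufs  2  yes  forcedirectio

The QFS side looks roughly like this, again with invented device names and an abbreviated mcf: the eight 4+1 LUNs go in as "mr" data devices, the two 1+1 LUNs as "mm" metadata devices, and sammkfs sets the 256K DAU.

    # /etc/opt/SUNWsamfs/mcf (abbreviated)
    # Equipment            Eq   Type  Family  State
    qfs1                   10   ma    qfs1    on
    /dev/dsk/c6t0d0s0      11   mm    qfs1    on    # 1+1 metadata, array 1
    /dev/dsk/c7t0d0s0      12   mm    qfs1    on    # 1+1 metadata, array 2
    /dev/dsk/c6t1d0s0      13   mr    qfs1    on    # first 4+1 data LUN
    /dev/dsk/c7t1d0s0      14   mr    qfs1    on    # remaining six 4+1 LUNs listed the same way

    # build the file system with a 256K disk allocation unit
    sammkfs -a 256 qfs1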
--
mike cannon
[EMAIL PROTECTED]
864.650.2577 (cell)
864.656.3809 (office)
computing & information technology
340 computer court
anderson, sc 29625

> From: Bob Friesenhahn <[EMAIL PROTECTED]>
> Date: Tue, 1 Jul 2008 20:17:51 -0500 (CDT)
> To: Justin Vassallo <[EMAIL PROTECTED]>
> Cc: <[email protected]>
> Subject: Re: [storage-discuss] 2x2540 FC vs 1x6140FC
>
> On Wed, 2 Jul 2008, Justin Vassallo wrote:
>>
>> **Current hardware setup:
>> SunFire X4200 M2 with 16GB memory, 4 internal 70G 15krpm 2.5" SAS drives,
>> with a raidz1 across the 4 discs. Db feels a little slow and getting
>> slower quickly.
>
> Raidz1 was a bad choice here for performance, but it fit.
>
>> C) 2*2540 w 12*300GB 3.5" 15krpm 3 Gb/sec SAS disks, dual controllers,
>> 2*512M cache; dual dual-port PCIe cards (no spare). Max 515W/array=1030W
>
> I have a 2540 here with 12*300GB drives and like it a lot. It was
> easy to set up. Two 2540s would be a dream, since then you can split
> the mirrors across the two arrays for maximal reliability and
> performance. You could even use quad-mirroring without too much of a
> hit by also splitting across multipath channels. However, the 2540 is
> pretty expensive.
>
>> **Consideration factors:
>> 1) 2.5" disks produce less vibration and are less sensitive to it, so
>> seek time is better and more reliable. Also, less heat so less energy
>> consumed. Any other array I should consider?
>
> While it is not a dedicated "array" and I have no personal experience
> with it, I think you should definitely look at the Sun Fire X4240
> Server since it provides up to 16 of those wonderful tiny 2.5" disks
> and the whole thing fits in 2U of space like your existing server.
> The entry cost is half the price of one 2540, so you can use the money
> you save to stuff it with RAM. On a "try and buy" program you can test
> it out with your workload and reject it if it is not satisfactory. The
> only problem is if the whole server craters and can't be immediately
> repaired, but this is a problem regardless. Just pay for a better
> service contract and buy a spare server if need be.
>
> One thing you lose with this server approach is that it does not
> provide an NVRAM cache like the dedicated arrays do, and that may
> impact database update performance a bit.
>
>> 3) which setup will perform better? I've seen posts saying the 2540 is
>> half the 6140 (zfs-discuss: some trends from the test center of
>> SUN/LSI: 2540 / ca. 100 KIOPs, ca. 600 MB/s; 6140 / ca. 200 KIOPs,
>> ca. 1000 MB/s)
>
> I don't know anything about the 6140, but the disks in the 2540
> operate in a sort of active/standby scheme where half of the drives
> are active on each controller. Each controller also mirrors the
> uncommitted data in its NVRAM by default (which costs in write
> performance) so that it can take over the writes if the other
> controller fails.
>
> Bob
> ======================================
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
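For what it's worth, Bob's suggestion above about splitting mirrors across two 2540s would look something like the sketch below with ZFS. The device names are invented (assume one array presents its LUNs on c2 and the other on c3); each mirror pair spans both arrays, so either whole array can drop out without losing the pool.

    zpool create dbpool \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0 \
        mirror c2t3d0 c3t3d0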
