On Fri, 8 Dec 2006, Jochen M. Kaiser wrote:

> Dear all,
>
> we're currently looking to restructure the hardware environment for
> our data warehousing product/suite/solution/whatever.
>
> We're currently running the database side on various SF V440's attached via
> dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is
> (obviously, being a SAN) shared among many systems. Performance is mediocre
> in terms of raw throughput at 70-150MB/sec (long, sequential reads due to
> full-table-scan operations on the db side) and excellent in terms of I/O and
> service times (averaging 1.7ms according to sar).
> From our application's perspective, sequential read is the most important
> factor. The read-to-write ratio is almost 20:1.
>
> We now want to consolidate our database servers (Oracle, btw.) onto a pair of
> x4600 systems running Solaris 10 (which we've already tested in a benchmark
> setup). The whole system was still I/O-bound, even though the backend (3510,
> 12x146GB, QFS, RAID10) delivered a sustained data rate of 250-300MB/sec.
>
> I'd like to target a sequential read performance of 500+MB/sec while reading
> from the db across multiple tablespaces. We're experiencing massive data
> volume growth of about 100% per year and are therefore looking for a solution
> that is both expandable and "cheap". We'd like to use a DAS solution, because
> we had negative experiences with SAN in the past in terms of tuning and
> throughput.
>
> Being a fan of simplicity, I was thinking about using a pair (or more) of
> 3320 SCSI JBODs with multiple RAIDZ and/or RAID10 ZFS disk pools on which we'd

Have you not heard that SCSI is dead?  :)

But seriously, the big issue with parallel SCSI is that the SCSI commands are
sent over the bus at the original (legacy) rate of 5 Mbits/Sec in 8-bit mode.
And since it takes an average of 5 SCSI commands to do something useful, you
can't send enough commands over the bus to keep a modern SCSI drive busy, even
a single drive on a single SCSI bus.  It also takes a lot of time to send
those commands, so you get latency.  And everyone understands how latency
affects throughput on a LAN (or WAN); it's the same issue with SCSI.  This is
the main reason why parallel SCSI is EOL and could not be extended without
breaking the existing standards.
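
To put rough numbers on it (purely illustrative assumptions, not
measurements): suppose each command exchange ties up the bus for ~100us once
you count arbitration, selection and the handshaked transfer of the CDB.
Then:

  5 commands/op x 100us/command = 500us of bus time per useful operation
  => a ceiling of ~2,000 ops/sec for the *entire* bus, before any data moves

A single modern drive can sink more requests than that by itself, so the
command phase, not the data phase, becomes the bottleneck.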

While I understand you don't want to build a SAN, an alternative would be
a Fibre Channel (FC) box that presents SATA drives.  This would be a DAS
solution with one or two connections to (Qlogic) FC controllers in the
host - IOW not a SAN and there is no FC switch required.  Many such boxes
are designed to provide expansion for an FC-based hardware RAID box.  For
example, the DS4000 EXP100 Storage Expansion Unit from IBM.  In your
application you'd need to find something that supports FC rates of
4Gb/Sec, if possible.
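
If you go this route, the LUNs show up as ordinary disks and you can hand
them straight to ZFS.  A minimal sketch, assuming hypothetical device names
(check `format` for the real ones):

  # stripe of mirrors: good random behavior, and sequential reads
  # stream from all spindles at once
  zpool create dwpool \
      mirror c2t0d0 c2t1d0 \
      mirror c2t2d0 c2t3d0 \
      mirror c2t4d0 c2t5d0
  zpool status dwpool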

Another possibility, which is on my to-do list to check out, is:

http://www.norcotek.com/item_detail.php?categoryid=8&modelno=DS-1220

Now if I could find a Marvell-based equivalent to the
http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm with
external SATA ports, life would be great.  Another card with external SATA
ports that works with Solaris (via the si3124 driver) is
http://www.newegg.com/product/product.asp?item=N82E16816124003 which,
unfortunately, only has a 32-bit PCI connection. :(
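
If you try one of those cards, a quick sanity check that Solaris actually
attached a driver to it (the grep target assumes the si3124 case):

  # list the device tree with bound driver names
  prtconf -D | grep si3124
  # SATA attachment points (and hot-plug state) show up here
  cfgadm -al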

> place the database. If we need more space we'll simply connect yet another
> JBOD. I'd plan on 1-2 PCIe U320 controllers (w/o RAID) per JBOD, starting
> with a minimum of 4 controllers per server.
>
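
For what it's worth, growing the pool when the new JBOD arrives is a
one-liner; a sketch with made-up device names:

  # add the new JBOD's disks to the pool as additional striped mirrors
  zpool add dwpool \
      mirror c4t0d0 c4t1d0 \
      mirror c4t2d0 c4t3d0

New writes are striped across the added vdevs immediately; existing data
stays where it is.
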
> Regarding ZFS, I'd be very interested to know whether someone else is running
> a similar setup and can provide me with some hints or point me at some
> caveats.
>
> I'd also be very interested in the CPU usage of such a setup for the ZFS
> RAIDZ pools. After searching this forum I found the rule of thumb that
> 200MB/sec of throughput roughly consumes one 2GHz Opteron CPU, but I'm hoping
> that someone can provide me with some in-depth data. (Frankly, I can hardly
> imagine that this holds true for reads.)
>
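
On the CPU question: rather than trusting the rule of thumb, measure it on
your benchmark box; it only takes a minute.  A rough sketch (the file path is
a placeholder):

  # drive a large sequential read through the pool ...
  dd if=/dwpool/oradata/big_tablespace.dbf of=/dev/null bs=1024k &
  # ... and watch pool throughput and per-CPU utilization side by side
  zpool iostat dwpool 5
  mpstat 5

You're right that reads are cheaper than writes: RAIDZ reads still pay for
checksum verification, but they skip the parity generation that writes do.
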
> I'd also be interested in your opinion on my targeted setup, so if you have
> any comments, go ahead.
>
> Any help is appreciated,
>
> Jochen
>
> P.S. Fallback scenarios would be Oracle with ASM or a (ZFS/UFS) SAN setup.
>

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
             OpenSolaris Governing Board (OGB) Member - Feb 2006