Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-13 Thread Jens Elkner
On Sat, Sep 12, 2009 at 02:37:35PM -0500, Tim Cook wrote:
On Sat, Sep 12, 2009 at 10:17 AM, Damjan Perenic
...
  I shopped for 1TB 7200rpm drives recently and I noticed the Seagate
  Barracuda ES.2 has a 1TB version with SATA and SAS interfaces.
 
On the flip side, according to StorageReview, the SATA version trumps the
SAS version in pretty much everything but throughput (and even that
difference is negligible).

 http://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=354&devID_1=362&devCnt=2
--Tim

Just in case anyone is interested in SATA, perhaps this helps (made on an almost
idle system):

elkner.sol /pool2 > uname -a
SunOS sol 5.11 snv_98 i86pc i386 i86xpv

elkner.sol /rpool > prtdiag
System Configuration: Intel S5000PAL
BIOS Configuration: Intel Corporation S5000.86B.10.00.0091.081520081046 
08/15/2008
BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style)

==== Processor Sockets ====================================

Version                          Location Tag
-------------------------------- --------------
Intel(R) Xeon(R) CPU   E5440  @ 2.83GHz CPU1
Intel(R) Xeon(R) CPU   E5440  @ 2.83GHz CPU2
...

elkner.sol /pool2 > + /usr/X11/bin/scanpci | grep -i sata
 Intel Corporation 631xESB/632xESB SATA AHCI Controller

elkner.sol ~ > iostat -E | \
awk '/^sd/ { print $1; getline; print; getline; print }'
sd0
Vendor: ATA  Product: ST3250310NS  Revision: SN05 Serial No:
Size: 250.06GB 250059350016 bytes
sd1
Vendor: ATA  Product: ST3250310NS  Revision: SN04 Serial No:
Size: 250.06GB 250059350016 bytes
sd2
Vendor: ATA  Product: ST3250310NS  Revision: SN04 Serial No:
Size: 250.06GB 250059350016 bytes
sd3
Vendor: ATA  Product: ST3250310NS  Revision: SN05 Serial No:
Size: 250.06GB 250059350016 bytes
sd5
Vendor: ATA  Product: ST31000340NS Revision: SN06 Serial No:
Size: 1000.20GB 1000204886016 bytes
sd6
Vendor: ATA  Product: ST31000340NS Revision: SN06 Serial No:
Size: 1000.20GB 1000204886016 bytes

elkner.sol ~ > zpool status | grep ONLINE
 state: ONLINE
        pool1         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
 state: ONLINE
        pool2         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t4d0    ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
 state: ONLINE
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0


elkner.sol /pool2 > + time sh -c 'mkfile 4g xx; sync; echo ST31000340NS'
ST31000340NS
real     3:55.2
user        0.0
sys         1.9
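
Back-of-the-envelope check on that run: mkfile 4g writes 4 GiB = 4096 MiB,
and 4096 MiB / 235.2 s comes to about 17.4 MiB/s at the filesystem level.
Since pool2 is a two-way mirror, each disk absorbs the full write stream,
which is consistent with the roughly 14-24 MB/s of kw/s per disk in the
iostat sample below.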

elkner.sol ~ > iostat -zmnx c1t4d0 c1t5d0 5 | grep -v device
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  154.2    0.0 19739.4  3.0 32.0   19.4  207.5 100 100 c1t4d0
    0.0  125.8    0.0 16103.9  3.0 32.0   23.8  254.3 100 100 c1t5d0
    0.0  133.0    0.0 16366.9  2.4 25.9   17.9  194.4  80  82 c1t4d0
    0.0  158.0    0.0 19592.5  2.8 30.3   17.6  191.7  93  96 c1t5d0
    0.0  159.4    0.0 20054.8  2.8 30.3   17.7  190.2  94  95 c1t4d0
    0.0  140.2    0.0 17597.2  2.8 30.3   20.1  216.4  94  95 c1t5d0
    0.0  134.8    0.0 16298.7  2.0 23.0   15.2  170.8  68  76 c1t4d0
    0.0  154.4    0.0 18807.5  2.7 29.3   17.3  189.9  89  94 c1t5d0
    0.0  188.4    0.0 24115.5  3.0 32.0   15.9  169.8 100 100 c1t4d0
    0.0  159.8    0.0 20454.6  3.0 32.0   18.8  200.2 100 100 c1t5d0
    0.0  120.0    0.0 14328.3  2.0 22.2   16.4  184.9  66  71 c1t4d0
    0.0  143.2    0.0 17169.9  2.6 28.2   18.0  197.1  86  93 c1t5d0
    0.0  157.0    0.0 19140.9  2.6 29.3   16.5  186.9  87  96 c1t4d0
    0.0  169.2    0.0 20676.9  2.2 24.8   13.2  146.6  75  79 c1t5d0
    0.0  156.2    0.0 19993.8  3.0 32.0   19.2  204.8 100 100 c1t4d0
    0.0  140.4    0.0 17971.3  3.0 32.0   21.3  227.9 100 100 c1t5d0
    0.0  138.8    0.0 16759.6  2.6 29.3   18.4  210.9  86  94 c1t4d0
    0.0  146.6    0.0 17809.2  2.7 29.6   18.4  201.7  90  94 c1t5d0
    0.0  133.8    0.0 16196.8  2.5 28.0   18.9  209.3  85  90 c1t4d0
    0.0  134.0    0.0 16222.4  2.6 28.7   19.5  214.3  87  94 c1t5d0

elkner.sol /pool1 > + time sh -c 'mkfile 4g xx; sync; echo ST3250310NS'
ST3250310NS
real     1:33.5
user        0.0
sys         2.0

elkner.sol ~ > iostat -zmnx c1t2d0 c1t3d0 5 | grep -v device
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2  408.6    1.6 49336.8 25.7  0.8   62.8    1.9  79  79 c1t3d0
    0.2  432.6    1.6 53284.4 29.9  0.9   69.0    2.1  89  89 c1t2d0
    0.2  456.0    1.6 56280.0 28.6  0.9   62.6    1.9  86  86 c1t3d0
    0.8  389.8   17.6 45360.7 25.8  0.8   66.0    2.1  81  80 c1t2d0
    0.4  368.6    3.2 42698.0 21.1  0.6   57.3    1.8  65  65 c1t3d0
    1.0  432.4    8.0 52615.8 30.2  0.9   69.6    2.1  91  91 c1t2d0
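
Same arithmetic for this run: 4096 MiB / 93.5 s is about 43.8 MiB/s, roughly
2.5x the ST31000340NS result, and again in line with the 42-56 MB/s of kw/s
per disk above. Reading the two samples side by side: the 1 TB pair ran with
actv pegged at 32 outstanding commands and ~200 ms asvc_t, while the 250 GB
pair completed its commands in ~2 ms each, so the ST31000340NS drives
themselves look like the bottleneck in the slower run.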

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-12 Thread Damjan Perenic
On Sat, Sep 12, 2009 at 7:25 AM, Tim Cook t...@cook.ms wrote:


 On Fri, Sep 11, 2009 at 4:46 PM, Chris Du dilid...@gmail.com wrote:

You can optimize for better IOPS or for transfer speed. The NS2 SATA and SAS
versions share most of the design, but they are still different: the cache,
interface and firmware are all different.

 And I'm asking you to provide a factual basis for the interface playing any
 role in IOPS.  I know for a fact it has nothing to do with error recovery or
 command queue.

 Regardless, I've never seen either one provide any significant change in
 IOPS.  I feel fairly confident stating that within the storage industry
 there's a pretty well known range of IOPS provided for 7200, 10K, and 15K
 drives respectively, regardless of interface.  You appear to be saying this
 isn't the case, so I'd like to know what data you're using as a reference
 point.

I shopped for 1TB 7200rpm drives recently and I noticed the Seagate
Barracuda ES.2 has a 1TB version with SATA and SAS interfaces.

In their datasheet at
http://www.seagate.com/www/en-us/products/servers/barracuda_es/ and
the product overview, they claim the following:

---
Choose SAS for the seamless Tier 2 enterprise experience, with
improved data integrity and a 135 percent average performance
boost over SATA. SAS also reduces integration complexity and
optimizes system performance for rich media, reference data
storage and enterprise backup applications.
---
With a choice of either SATA or SAS interfaces, the Barracuda ES.2 drive
utilizes perpendicular recording technology to deliver the industry’s
highest-capacity 4-platter drive. SAS delivers up to a 38 percent IOPS/watt
improvement over SATA.
---

And in the product overview:
---
• Full internal IOEDC/IOECC* data integrity protection on SAS models
• Dual-ported, multi-initiator SAS provides full-duplex compatibility
and a 135 percent average** performance improvement over SATA.

*IOEDC/IOECC on SATA (writes only), IOEDC/IOECC on SAS (both reads and writes)
**Averaged from random/sequential, read/write activities with write cache off
---

I admit I have no clue why the SAS version should be (or is) faster; I am just
passing on what I found. But I am interested in opinions on whether there is
any substance to this marketing material.

Kind regards,
Damjan


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-12 Thread Tim Cook
On Sat, Sep 12, 2009 at 10:17 AM, Damjan Perenic 
damjan.pere...@guest.arnes.si wrote:

 ...

 • Dual-ported, multi-initiator SAS provides full-duplex compatibility
 and a 135 percent average** performance improvement over SATA.

 *IOEDC/IOECC on SATA (writes only), IOEDC/IOECC on SAS (both reads and
 writes)
 **Averaged from random/sequential, read/write activities with write cache
 off

 ...



The two *'s leave much room to be desired.  Averaged?  How about some real
numbers with a testing methodology?  I'm not at all surprised they claim 2x
performance from the drive they charge twice as much for.

On the flip side, according to StorageReview, the SATA version trumps the
SAS version in pretty much everything but throughput (and even that
difference is negligible).
http://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=354&devID_1=362&devCnt=2


--Tim


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Eugen Leitl
On Thu, Sep 10, 2009 at 01:11:49PM -0400, Eric Sproul wrote:

 I would not use the Caviar Black drives, regardless of TLER settings.  The RE3
 or RE4 drives would be a better choice, since they also have better vibration
 tolerance.  This will be a significant factor in a chassis with 20 spinning 
 drives.

Yes, I'm aware of the issue, and am using 16x RE4 drives in my current
box right now (which I unfortunately had to convert to CentOS 5.3 for Oracle/
custom software compatibility reasons). I've had very bad experiences
with Seagate 7200.11 drives in RAID in the past.

Thanks for your advice against Caviar Black. 
 
  Do you think above is a sensible choice? 
 
 All your other choices seem good.  I've used a lot of Supermicro gear with 
 good
 results.  The very leading-edge hardware is sometimes not supported, but

I've been using 
http://www.supermicro.com/products/motherboard/QPI/5500/X8DAi.cfm
in the above box.

 anything that's been out for a while should work fine.  I presume you're going
 for an Intel Xeon solution-- the peripherals on those boards are a bit better
 supported than the AMD stuff, but even the AMD boards work well.

Yes, dual-socket quadcore Xeon.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Markus Kovero
We've been using the 1TB Caviar Black in disk configurations consisting of 64
disks or more. They are working just fine.

Yours
Markus Kovero



Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tristan Ball
How long have you had them in production?

Were you able to adjust the TLER settings from within Solaris?

Thanks,
Tristan.



Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Markus Kovero
A couple of months; and no, not from within Solaris. I guess there is a DOS
utility provided by WD that allows you to change the TLER settings.
Having TLER disabled can be a problem: faulty disks time out randomly, and ZFS
doesn't always want to mark them as failed, though sometimes it does.
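
For reference (untested by me on these particular drives): on disks that
implement SCT Error Recovery Control, a reasonably recent smartmontools can
set the same timeout from a running system instead of the DOS utility. The
device path below is illustrative:

  smartctl -l scterc,70,70 /dev/rdsk/c1t2d0   (set read/write error-recovery
                                               timeouts; the unit is 0.1 s)
  smartctl -l scterc /dev/rdsk/c1t2d0         (query the current setting)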

Yours
Markus Kovero



Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Joseph L. Casale
Can you use SATA drives with expanders at all? (I have to stick
to enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte
consumer SATA) for cost reasons).

Yes, the expander has nothing to do with the drive in front of it.
I have several SAS expanders with SATA drives on them.

What is the advantage of using external disk expanders?
They use up more rack height units and add hardware expense
and cabling hassle for very little to show for it, IMHO.

Because a single port on any HBA/RAID controller far exceeds the
speed of any single disc. Your analogy doesn't scale well past what
you can fit into a single server.
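
Rough numbers, assuming the 3 Gbit/s SAS/SATA links of current kit: one lane
carries about 300 MB/s of payload after 8b/10b encoding, and a 4-lane wide
port about 1.2 GB/s, while a 7200 RPM nearline disk sustains on the order of
100 MB/s sequentially. So one wide port can feed roughly a dozen such disks
behind an expander before the link itself becomes the bottleneck on
streaming work.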

You also assume every disc enclosure only holds the discs in a similar
fashion to the way they would be oriented in the Supermicro chassis.

Here's a nice read:
http://www.scsita.org/aboutscsi/sas/SAS_Expander.doc

jlc


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Chris Du
Can you use SATA drives with expanders at all? (I have to stick to 
enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte consumer SATA) for 
cost reasons).

Yes, you can with the E1 model. E1 is the single-path model, which supports
both SAS and SATA. You need to know what you are buying. The Supermicro case
you are buying has a backplane with a SAS expander.

The E2 model, which supports multipathing, requires dual-ported drives because
each path needs access to the disk at the same time; thus SAS is required, as
SATA is single-ported. If you want more bandwidth between your HBA and disks,
and better redundancy, you need multipathing, and therefore the E2.
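
As a sketch of the Solaris side (assuming MPxIO with the mpt driver; untested
on this exact shelf):

  stmsboot -e        (enable MPxIO for supported HBAs; requires a reboot)
  mpathadm list lu   (list logical units; each LU should report two paths
                      once both links to the E2 backplane are cabled)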

I still suggest you go with nearline SAS. SAS is a dual-port design, and it has
much better IOPS, command queueing and error recovery. The data transfer speed
is the same between the NS2 SAS and SATA versions. I know a lot of people have
big problems with the 7200.11, but this is not the same disk. It does cost even
more than enterprise SATA.

We have SATA disk shelves in our NetApp gear, and I say SATA doesn't belong in
the enterprise. Granted, our SATA is only used in the dev environment;
production uses 15K FC disks, with which we never have performance issues.

The reason I use an external disk shelf is that I ran out of disk trays in the
head unit. Adding a SAS shelf is the quick, easy and cheap way to expand
storage. I won't touch a cluster file system, as it gets too complicated and
way too expensive.


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tim Cook
On Fri, Sep 11, 2009 at 12:48 PM, Chris Du dilid...@gmail.com wrote:

 ...

 I still suggest you go with nearline SAS. SAS is a dual-port design, and it
 has much better IOPS, command queueing and error recovery. Data transfer
 speed is the same between the NS2 SAS and SATA versions.

 ...



Better IOPS?  Do you have some numbers to back that claim up?  I've never
heard of anyone getting much better IOPS out of a drive by simply changing
the interface from SATA to SAS, or from SATA to FATA for that matter.  A
7200 RPM drive is limited by its 7200 RPM spindle, not the interface it's
attached to.
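
Rough arithmetic in support of that: at 7200 RPM one revolution takes 8.33 ms,
so average rotational latency is about 4.2 ms. Add a typical ~8.5 ms average
seek (an assumed figure for a generic 7200 RPM 3.5" drive) and you get
~12.7 ms per random op, i.e. on the order of 75-80 IOPS at queue depth 1,
no matter what the wire is.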

--Tim


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Eric D. Mudama

On Fri, Sep 11 at 13:14, Tim Cook wrote:

  Better IOPS?  Do you have some numbers to back that claim up?  I've never
  heard of anyone getting much better IOPS out of a drive by simply
  changing the interface from SATA to SAS.  Or SATA to FATA for that
  matter.  A 7200RPM drive is limited by the 7200RPM's, not the interface
  it's attached to.


Depends on the model of drive.

A number of vendors put relatively larger magnets and stronger
actuators in their enterprise designs, where the customers are willing
to pay for it.  This can significantly decrease track-to-track seek
times, which improves IOPS.

On top of that, many enterprise drives are using smaller platters
and/or higher RPM, both of which also help IOPS at shallow queue
depths.

At infinitely high queue depth, IOPS basically becomes a function of
how quickly the servo system can settle, since seek distances approach
zero (both linearly and rotationally) as the number of operations to
choose from goes to infinity.
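
To put an illustrative bound on that limit: if deep queueing drove the seek
and rotational components toward zero, a hypothetical ~2 ms settle time alone
would cap the drive near 500 IOPS. Real drives at deep queues land well below
that, but still far above the ~75-80 IOPS of queue depth 1. The settle time
here is an assumed figure, not a measured one.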



--
Eric D. Mudama
edmud...@mail.bounceswoosh.org



Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tim Cook
On Fri, Sep 11, 2009 at 3:20 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:

 ...



The question wasn't about consumer vs. enterprise drives.  He said the SAS
interface improves IOPS.  Please don't change the topic of discussion
mid-thread.

--Tim


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Chris Du
You can optimize for better IOPS or for transfer speed. The NS2 SATA and SAS
versions share most of the design, but they are still different: the cache,
interface and firmware are all different.

And by much better I don't mean just IOPS; it's all three: better IOPS,
command queueing and error recovery, etc.


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-10 Thread Eric Sproul
Eugen Leitl wrote:
 Inspired by
 http://www.webhostingtalk.com/showpost.php?p=6334764&postcount=14
 I'm considering taking the Supermicro chassis like
 http://www.supermicro.com/products/chassis/4U/846/SC846E1-R900.cfm
 populating it with 1 TByte WD Caviar Black WD1001FALS with TLER
 set to 7 seconds, 

I would not use the Caviar Black drives, regardless of TLER settings.  The RE3
or RE4 drives would be a better choice, since they also have better vibration
tolerance.  This will be a significant factor in a chassis with 20 spinning 
drives.

 Do you think above is a sensible choice? 

All your other choices seem good.  I've used a lot of Supermicro gear with good
results.  The very leading-edge hardware is sometimes not supported, but
anything that's been out for a while should work fine.  I presume you're going
for an Intel Xeon solution-- the peripherals on those boards are a bit better
supported than the AMD stuff, but even the AMD boards work well.

Eric

-- 
Eric Sproul
Lead Site Reliability Engineer
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
http://omniti.com


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-10 Thread Chris Du
Why do you need 3x LSI SAS3081E-R? The backplane has an LSI SAS x36 expander,
so you only need 1x 3081E. If you want multipathing, you need the E2 model.

Second, I'd say use the Seagate ES.2 1TB SAS disk, especially if you want
multipathing. I believe the E2 only supports SAS disks.

I have a Supermicro 936E1 (LSI SAS X28 expander) as a disk shelf and an LSI
3080X in the head unit, with an Intel X25-E as ZIL; works like a charm. Your
setup is very well supported by Solaris.
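
For anyone following along, attaching an SSD as a separate log device is a
one-liner (pool and device names here are illustrative):

  zpool add tank log c2t0d0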

For the motherboard, my Supermicro X8SAX and X8ST3 both work well with Solaris.
You may want a dual-proc board that supports more memory. ECC is a given on
i7-based boards when using a Xeon.