Re: OT: Torn between SCSI and SATA for RAID

2006-05-28 Thread Ian Jefferson



I'd rather run 5 SATA cables than one SCSI cable (say 68-pin) with  
multiple heads...   The darn SCSI cables are so thick,  
comparatively, that running them in your case is a lot harder :-)





Well, everyone's mileage may vary.  Parallel cables only work nicely  
when you have a stack of drives all close together and lined up.  I  
personally yearn for a simple 40 Gbps daisy-chainable serial bus.  I  
hoped FireWire would have been it, but we seem to be stuck at 800 Mbps.


The other cabling option I forgot about is USB 2.0 or FireWire.

There are a number of very low-cost external cases that pre-package  
USB/FireWire-to-SATA converters.  You basically fill a hard disk case  
with SATA or ATA drives and connect your computer to the case via a  
single FireWire or USB cable.  I have not seen one of these that is  
hot-swappable yet, but I did see a few of them recently in Tokyo's  
Akihabara district for ~$100, so I assume they are available all over.  
The boxes I have seen are four-drive systems.  Just fill them with your  
favorite commodity hard disk, I guess.


At ~50 MB/s the interface is plenty fast and greatly simplifies the  
cabling issue inside the PC.
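
As a rough sanity check on that figure (my own numbers, not Ian's measurements, and the efficiency factor is an assumption): USB 2.0 signals at 480 Mbit/s and FireWire 800 at 800 Mbit/s, so ~50 MB/s of real-world throughput is plausible once protocol overhead is subtracted.  A minimal Python sketch:

    # Rough conversion of nominal bus rates to usable throughput.
    # The ~60% efficiency factor is an assumption, not a measurement.
    def usable_mb_per_s(mbit_per_s, efficiency=0.6):
        return mbit_per_s / 8.0 * efficiency

    for name, rate in [("USB 2.0", 480), ("FireWire 400", 400), ("FireWire 800", 800)]:
        print("%-13s ~%3.0f MB/s" % (name, usable_mb_per_s(rate)))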


IJ



Re: OT: Torn between SCSI and SATA for RAID

2006-05-26 Thread Nikolas Britton

On 5/10/06, [EMAIL PROTECTED] wrote:

Hi,

I've been spending the last couple of days extensively looking at various
options for RAID and getting some storage system in place.  Performance is not
really a BIG issue, but I also don't want things to be hectically slow either.
This will be a NAS-type implementation, so speed would be bound by the
relatively slow network connections in any case...

Now, first things first: I did look at Fibre Channel too, and the
technology is just too expensive and complex for the home-type implementation
that I want this for.

Ideally, I'd like to start at 2TB of storage (yes, those movies must go
somewhere!), but I'd like to be able to grow this as time goes by... I also
definitely want redundancy on the data, as I just lost 80GB of precious data
when, ironically, a 160GB SATA Seagate went out under me.

Now, SCSI, I know, is more expensive than SATA.  Whether it provides better
performance than SATA I'm still uncertain of, but my gut tells me that, given
the cost factor, SCSI *should* run away as far as speed is concerned.  But
as I said previously, speed and performance are not a priority for my
implementation and therefore carry very little weight.  That therefore points
me toward SATA.

My problem with SATA is the whole one-port, one-drive scenario.  I've looked at the
Adaptec 16-port SATA controller.  The reviews I managed to find on that card on
the Internet paint a very grim picture: buggy firmware, the controller
destroying drives, and generally sluggish performance.  Is anyone using this card
who can perhaps give me a better picture?


You want the Areca ARC-1160-ML (ML for Multi-Lane) card:
http://www.areca.com.tw/products/html/pcix-sata.htm
http://www.rackmountpro.com/productpage.php?prodid=2350

http://www.freebsd.org/cgi/man.cgi?query=arcmsr&apropos=0&sektion=0&manpath=FreeBSD+6.1-RELEASE&format=html



--
BSD Podcasts @:
http://bsdtalk.blogspot.com/
http://freebsdforall.blogspot.com/


Re: OT: Torn between SCSI and SATA for RAID

2006-05-25 Thread Ian Jefferson

Hi Chris,

I have many of the same questions.  SATA is plenty fast for home  
systems, and modern drives smoke gear that was enterprise-class  
just a few years ago.  'Twas ever thus.


Cables are a nightmare, IMHO.  That is by far the main reason I've been a  
big fan of SCSI for a long time.  You can make a pretty effective and  
tidy RAID system by custom-making a short daisy-chain SCSI  
cable.  I have not explored this recently, but I used to do this 5+  
years ago for non-RAID applications.  We used to run into device-  
compatibility problems on the SCSI bus, though, so sticking to the same  
drive manufacturer might be a good idea.  Perhaps things have improved.


You can buy old 80-pin, 16-bit SCSI controllers quite reasonably on  
eBay.  Even though the bus speeds might only be 40 or 80 MB/sec (that's  
bytes), this still exceeds what I get on single-disk SATA benchmarks.   
My impression is that modern drives are backward-compatible with  
older SCSI controllers, but I've not tested this extensively; I have just  
a couple of anecdotes.


You can do quite well in the used enterprise market.  You might have  
a look at pricewatch.com for some low-cost SCSI disks.  My experience  
has been that S/P-ATA drives are easily available in large sizes  
(300 GB and up), whereas SCSI seems to be available in volume only for  
smaller drives (~100-200 GB).


The above is mostly supposition.

I have been experimenting with SATA to see what's possible.  There  
are gizmos out there, backplanes, that make the cabling issue easier:


I have one of these:
http://www.mwave.com/mwave/viewspec.hmx?scriteria=BA20689

And I'm considering one of these:
http://www.mwave.com/mwave/viewspec.hmx?scriteria=BA20690

Similar devices are available for SCSI and PATA drives, though they are a  
little harder to find.  You can Google for "backplane", "3X5", "2X3",  
that type of thing.


I finally got gvinum to work for me under FreeBSD 6.1-RELEASE (i386) for  
RAID 5.  The volume-manager concept appeals to me because you can work  
with smaller chunks of storage than whole disks, so with the  
same set of physical disks you can contemplate different RAID  
strategies, depending on how much performance you want, all at the  
same time.  So far my benchmarks indicate that a three-partition RAID 5  
vinum volume performs fine for me.  Minimum write performance is around  
7 MB/s and minimum read around 14 MB/s; usually, however, writes came  
in on the low side of 15 MB/s and reads around 50 MB/s.  This is all  
just a first attempt, though, without any tuning of the RAID  
set.  With two 5X3 backplanes and software RAID 5 you could build  
a 4 TB system PDQ, and your drives would not have to be identical.
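
The read/write asymmetry Ian sees is what you would expect from RAID 5: usable space is one partition's worth less than the total, and every small write pays extra parity I/O.  A toy illustration of my own (the partition size and count below are hypothetical, not Ian's actual layout):

    # Toy RAID 5 numbers; the partition size and count here are hypothetical.
    def raid5_usable_gb(num_partitions, partition_gb):
        # One partition's worth of space goes to distributed parity.
        return (num_partitions - 1) * partition_gb

    # A small random write in RAID 5 typically costs four disk I/Os:
    # read old data, read old parity, write new data, write new parity.
    SMALL_WRITE_IOS = 4

    print(raid5_usable_gb(3, 200), "GB usable from 3 x 200 GB partitions")
    print(SMALL_WRITE_IOS, "disk I/Os per small random write (vs. 1 on a bare disk)")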


Even with a backplane device, though, you end up with quite a cabling issue.

The last option I've considered is to look at some of the SATA-to-  
SCSI backplanes.  There are commercial solutions that let you  
put up to 12 SATA or PATA drives in an enclosure and then connect to your  
host computer via SCSI.  I haven't found anything cheap, though  
(cheap = less than 20% of the drive cost).  Apple sells such a device, as do  
numerous other manufacturers.  Search for "SATA RAID".


IJ


On May 11, 2006, at 7:51 PM, [EMAIL PROTECTED] wrote:




My questions are not really about the performance of the system; they're  
more about its capacity... I guess it boils down to the physical  
hardware: how does everything connect, how do you expand systems, and  
how do you run arrays bigger than what one single controller can  
provide...

--
C




Re: OT: Torn between SCSI and SATA for RAID

2006-05-25 Thread Chad Leigh -- Shire.Net LLC


On May 25, 2006, at 5:30 PM, Ian Jefferson wrote:


Hi Chris,

I have many of the same questions.  SATA is plenty fast for home  
systems, and modern drives smoke gear that was enterprise-class  
just a few years ago.  'Twas ever thus.


Cables are a nightmare, IMHO.  That is by far the main reason I've been  
a big fan of SCSI for a long time.  You can make a pretty effective  
and tidy RAID system by custom-making a short daisy-chain  
SCSI cable.  I have not explored this recently, but I used to do this 5+  
years ago for non-RAID applications.  We used to run into device-  
compatibility problems on the SCSI bus, though, so sticking to the same  
drive manufacturer might be a good idea.  Perhaps things have improved.



I'd rather run 5 SATA cables than one SCSI cable (say 68-pin) with  
multiple heads...   The darn SCSI cables are so thick, comparatively,  
that running them in your case is a lot harder :-)



Chad

---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net





Re: OT: Torn between SCSI and SATA for RAID

2006-05-11 Thread lars
I recently read an interesting comparison
of consumer- and enterprise-grade hard disks:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf

Maybe this helps.

Kind regards
Lars


Re: OT: Torn between SCSI and SATA for RAID

2006-05-11 Thread cknipe
Quoting lars [EMAIL PROTECTED]:

 I recently read an interesting comparison
 of consumer- and enterprise-grade hard disks:

http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf


This was posted yesterday in response to my question as well.  That document
deals mainly with the performance and reliability of the different types of
hard drives (i.e. SATA vs. SCSI).

My questions are not really about the performance of the system; they're more
about its capacity... I guess it boils down to the physical hardware: how does
everything connect, how do you expand systems, and how do you run arrays bigger
than what one single controller can provide...

--
C



Re: OT: Torn between SCSI and SATA for RAID

2006-05-11 Thread Chad Leigh -- Shire.Net LLC


On May 11, 2006, at 4:51 AM, [EMAIL PROTECTED] wrote:


Quoting lars [EMAIL PROTECTED]:


I recently read an interesting comparison
of consumer- and enterprise-grade hard disks:

http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf



This was posted yesterday in response to my question as well.  That  
document deals mainly with the performance and reliability of the  
different types of hard drives (i.e. SATA vs. SCSI).

My questions are not really about the performance of the system; they're  
more about its capacity... I guess it boils down to the physical  
hardware: how does everything connect, how do you expand systems, and  
how do you run arrays bigger than what one single controller can  
provide...


Look at the Areca SATA controllers.

An 8-port RAID 6 SATA controller using 8 drives, 1 as a hot spare, gives  
you about 5 drives' worth of RAID 6 (5 data + 2 parity = 7 drives, and it can  
survive up to 2 simultaneous drive failures), and the Arecas seem to be  
well regarded.  I have an 8-port and a 12-port one, but they are not in  
service yet.  Areca has FreeBSD drivers.


5 drives * 500GB is about 2.125 real TB (given that a 500GB drive  
is not really 500 real GB) (calculation made with simple ratios and  
could be way off).  The 12-port Areca card with 1 hot spare and RAID  
6 would give you 9 * 500GB = about 3.825 real TB.
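
For what it's worth, a quick check of that arithmetic (my sketch, assuming "marketing" gigabytes of 10^9 bytes and binary TiB of 2^40 bytes) suggests Chad's simple ratio is, if anything, a little pessimistic:

    # Usable space of a RAID 6 array in binary TiB, assuming drives are sold
    # in decimal gigabytes (10**9 bytes).  Drive counts follow Chad's examples.
    def raid6_usable_tib(total_drives, hot_spares, drive_gb_decimal):
        data_drives = total_drives - hot_spares - 2   # RAID 6 reserves two drives' worth for parity
        return data_drives * drive_gb_decimal * 10**9 / 2.0**40

    print("8-port card, 1 hot spare:  %.2f TiB usable" % raid6_usable_tib(8, 1, 500))   # ~2.27
    print("12-port card, 1 hot spare: %.2f TiB usable" % raid6_usable_tib(12, 1, 500))  # ~4.09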


To get the size of array you want, you need to go SATA, as SCSI  
drives aren't really big enough to get that big without getting into  
major, major money.  Use good, 24/7-rated SATA drives, not cheap  
Maxtor or WD ones (think Seagate or Hitachi, probably).  Buy an extra drive  
to have on hand, or give up some capacity and set up 2 hot spares.


I am considering a machine with two 12-port Areca cards, set up with two  
RAID 6 arrays mirrored using ZFS under Solaris 10, as an NFS storage  
server...


Chad


---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net





OT: Torn between SCSI and SATA for RAID

2006-05-10 Thread cknipe
Hi,

I've been spending the last couple of days extensively looking at various
options for RAID and getting some storage system in place.  Performance is not
really a BIG issue, but I also don't want things to be hectically slow either.
This will be a NAS-type implementation, so speed would be bound by the
relatively slow network connections in any case...

Now, first things first: I did look at Fibre Channel too, and the
technology is just too expensive and complex for the home-type implementation
that I want this for.

Ideally, I'd like to start at 2TB of storage (yes, those movies must go
somewhere!), but I'd like to be able to grow this as time goes by... I also
definitely want redundancy on the data, as I just lost 80GB of precious data
when, ironically, a 160GB SATA Seagate went out under me.

Now, SCSI, I know, is more expensive than SATA.  Whether it provides better
performance than SATA I'm still uncertain of, but my gut tells me that, given
the cost factor, SCSI *should* run away as far as speed is concerned.  But
as I said previously, speed and performance are not a priority for my
implementation and therefore carry very little weight.  That therefore points
me toward SATA.

My problem with SATA is the whole one-port, one-drive scenario.  I've looked at the
Adaptec 16-port SATA controller.  The reviews I managed to find on that card on
the Internet paint a very grim picture: buggy firmware, the controller
destroying drives, and generally sluggish performance.  Is anyone using this card
who can perhaps give me a better picture?

Given that the 16-port card (for now) is out of the question, I have an 8-port, a 4-port
and a 2-port (which isn't really worth looking at) available to me.  Now,
even with an 8-port card, let's look at what I can achieve:

Ports 1+2: 750GB Seagates (biggest available), 1.5TB - I'm short of my 2TB
initial target
Ports 3+4: Mirror of 1+2

Already I am coming up short of what I want to achieve, and I also have no
expansion available to me for upgrades...

With the 16-port cards, what I want to achieve becomes quite possible - easily up
to about 6TB of data - but I risk losing drives *IF* what I read about the
card is true.  That is also a gamble, considering the relatively high price of large
SATA drives.

Another thing I read that I'm not completely sure about: some of the
Adaptec SCSI cards advertise a maximum of 30 devices - some even more.  Excuse the
ignorance, but does the SCSI bus not allow a maximum of 8 devices?  Do these
cards then feature multiple buses to connect the cables to?  If so, SATA will
obviously not be able to provide something like this.

Now comes my question... Uhm... Can SATA RAID controllers be 'linked'?  Say I
buy 4 x 8-port Adaptec SATA RAID controllers... 2 x 8-port cards = 16 ports for
one RAID 5 array (@ 750GB drives, 12TB max), with the other 2 cards used to mirror it.  I
know that I can use one controller to mirror another, but can I extend an array
across multiple controllers?  And then, naturally, just HOW much slower does
the array function?

I've seen some comments and posts (esp. on Slashdot) from people
running massive arrays successfully on SATA.  Given the limit on ports at
the controller, just how is this achieved?

Sorry that this is so OT, but I hope I'll get some good answers.  This is
definitely not something that's been discussed a lot before, considering the
amount of info I got after spending a number of days on Google...

--
Chris.



Re: OT: Torn between SCSI and SATA for RAID

2006-05-10 Thread Jim Stapleton

I've found that SCSI isn't exceptionally faster given similar RPMs, or
even a slightly higher RPM (e.g. a 10K RPM SCSI drive vs. a 10K RPM SATA drive
would have similar performance). However, SCSI tends to be held to tighter
standards, and you get the following advantages, which in some cases
are worth the money and in some cases aren't:

(1) More reliable/accurate reads/writes
(2) Longer expected lifespan


My advice for reliability is a RAID-1 setup with the most
cost-effective disks you can find, then use the OS to do drive
spanning so you can put them under the same mount point (when it runs to
the end of one disk, it starts on the next). I'm not sure whether that drive
spanning is possible, though - I've not looked into it - but given
that Windows can do it, I don't see why FreeBSD would have trouble.

If that is still too expensive, you could try RAID-5, but the problems
with that are that adding new disks wouldn't be quite as easy, you may not
be able to use the RAID set until you get a replacement disk, and
it's not quite as fast for writes (I could be wrong on this part) as RAID-1.
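
To make the capacity side of that trade-off concrete (a toy illustration of my own, not Jim's words; the drive size below is hypothetical): mirrored pairs spanned together give you half the raw space, while RAID-5 gives up only one disk's worth, with the rebuild and write-speed caveats above.

    # Usable capacity: concatenated RAID-1 pairs vs. RAID-5, for eight
    # hypothetical 500 GB drives (drive size chosen for illustration).
    drives, size_gb = 8, 500

    raid1_spanned = (drives // 2) * size_gb   # each mirrored pair contributes one disk's worth
    raid5 = (drives - 1) * size_gb            # one disk's worth lost to parity

    print("RAID-1 pairs, spanned: %d GB usable" % raid1_spanned)  # 2000 GB
    print("RAID-5:                %d GB usable" % raid5)          # 3500 GB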



with an 8-port card... Let's look at what I can achieve:
Ports 1+2: 750GB Seagates (biggest available), 1.5TB - I'm short of my 2TB
initial target
Ports 3+4: Mirror of 1+2


Maybe I'm missing something - where are ports 4-8 (actually, 0 + 4-7)? With
8 x 500GB drives and RAID-1, you should be able to get 2TB out of that
(and more cost-effectively than with 750GB drives).

Have you considered using two controller cards?



Another thing I read that I'm not completely sure about: some of the
Adaptec SCSI cards advertise a maximum of 30 devices - some even more.  Excuse the
ignorance, but does the SCSI bus not allow a maximum of 8 devices?  Do these
cards then feature multiple buses to connect the cables to?  If so, SATA will
obviously not be able to provide something like this.


8 devices, of which 1 is the controller; I think some newer buses hold 16
devices, of which 1 is the controller (so 7 or 15 drives).

Now, a card may have multiple buses. I have an Adaptec 39160 in my
machine, and if I remember correctly it has 2 buses on it (or is it
three?). I don't use it to anywhere near its capacity; I just got it for the
price of a 19160, and I couldn't turn down that option.



Now comes my question... Uhm... Can SATA RAID controllers be 'linked'?  Say I
buy 4 x 8-port Adaptec SATA RAID controllers... 2 x 8-port cards = 16 ports for
one RAID 5 array (@ 750GB drives, 12TB max), with the other 2 cards used to mirror it.  I
know that I can use one controller to mirror another, but can I extend an array
across multiple controllers?  And then, naturally, just HOW much slower does
the array function?


The array will be using system CPU/memory, so quite a bit - it'll
cause a hit on system performance. However, the trick here is that you can
do what I mentioned above with some trickery (I think) and just have
the OS link the two file systems; that's not any form of RAID, and it
shouldn't cost much performance.


Re: OT: Torn between SCSI and SATA for RAID

2006-05-10 Thread Bill Moran
On Wed, 10 May 2006 12:00:00 +0200
[EMAIL PROTECTED] wrote:

 Hi,
 
 I've been spending the last couple of days extensively looking at various
 options for RAID and getting some storage system in place.  Performance is not
 really a BIG issue, but I also don't want things to be hectically slow either.
 This will be a NAS-type implementation, so speed would be bound by the
 relatively slow network connections in any case...

http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf

-- 
Bill Moran
Collaborative Fusion Inc.


Re: OT: Torn between SCSI and SATA for RAID

2006-05-10 Thread Atom Powers


Another thing I read that I'm not completely sure about: some of the
Adaptec SCSI cards advertise a maximum of 30 devices - some even more.  Excuse the
ignorance, but does the SCSI bus not allow a maximum of 8 devices?  Do these
cards then feature multiple buses to connect the cables to?  If so, SATA will
obviously not be able to provide something like this.


I am not that familiar with SCSI protocols, but I imagine the
Ultra-Wide SCSI bus can probably address 32 devices ( 31 drives + the
controller ).


Now comes my question... Uhm... Can SATA RAID controllers be 'linked'?  Say I
buy 4 x 8-port Adaptec SATA RAID controllers... 2 x 8-port cards = 16 ports for
one RAID 5 array (@ 750GB drives, 12TB max), with the other 2 cards used to mirror it.  I
know that I can use one controller to mirror another, but can I extend an array
across multiple controllers?  And then, naturally, just HOW much slower does
the array function?


I imagine you would probably have to use software RAID at that point.
And even if you could use two controllers together ("SLI for RAID"?),
you would be limited by the PCI bus.



I've seen some comments and posts (esp. on Slashdot) from people
running massive arrays successfully on SATA.  Given the limit on ports at
the controller, just how is this achieved?


Probably with software RAID. With software RAID you can even mix drive
types - SATA, PATA, SCSI, USB, etc. - but it's much slower.


Sorry that this is so OT, but I hope I'll get some good answers.  This is
definitely not something that's been discussed a lot before, considering the
amount of info I got after spending a number of days on Google...

--
Chris.





--
Perfection is just a word I use occasionally with mustard.
--Atom Powers--


Re: OT: Torn between SCSI and SATA for RAID

2006-05-10 Thread Andrea Venturoli

Atom Powers wrote:


Another thing I read that I'm not completely sure about: some of
the Adaptec SCSI cards advertise a maximum of 30 devices - some even
more.  Excuse the ignorance, but does the SCSI bus not allow a maximum
of 8 devices?  Do these cards then feature multiple buses to connect
the cables to?


I am not that familiar with SCSI protocols, but I imagine the
Ultra-Wide SCSI bus can probably address 32 devices ( 31 drives + the
controller ).


Old 8-bit SCSI allows for 8 devices.
For hard disks today you'll want Wide (16-bit) SCSI, which allows for 16 
devices (15 drives + controller).

There is no 32-bit SCSI, AFAIK.
The Adaptec cards you mention do have two buses (basically they are two 
controllers on one chip and are seen as such by the OS).
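
That dual-bus detail is also where the "30 devices" figure in the original question comes from; a one-line check (my arithmetic, not Andrea's):

    # One SCSI ID per bus is taken by the host adapter itself.
    wide_ids_per_bus = 16
    buses_on_card = 2          # e.g. the dual-channel Adaptec cards mentioned above

    drives_per_bus = wide_ids_per_bus - 1                 # 15 drives per wide bus
    print(buses_on_card * drives_per_bus, "drives max on a dual-bus wide SCSI card")  # 30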





Can SATA RAID controllers be 'linked'?
... can I extend an array across multiple controllers...



I imagine you would probably have to use software RAID at that point.


Yes and true.




And even if you could use two controllers together ("SLI for RAID"?)
you would be limited by the PCI bus.


You might want a motherboard with multiple PCI buses, and carefully 
choose how the RAID scheme maps onto the distribution of disks across them.


If you need that many drives, however, you might well be better off with 
a hardware solution.






Probably with software RAID. With software RAID you can even mix drive
types - SATA, PATA, SCSI, USB, etc. - but it's much slower.


I wouldn't want to do that... I've always heard you should get identical 
drives to build an array.




Of course you might have different arrays on the same machine...


 bye
av.