Re: large ide raid system

2000-01-17 Thread Mika Kuoppala



On Tue, 11 Jan 2000, Thomas Waldmann wrote:

  Cable length is not so much a pain as the number of cables. Of course with
  scsi you want multiple channels anyway for performance, so the situation
  is very similar to ide. A cable mess.
 
 Well, it is at least only a half / third / ... of the cable count of "tuned"
 single-device-on-a-cable EIDE RAID systems (and you don't have these big
 problems with cable length).
 
 I didn't try LVD/U2W SCSI yet, but using UW SCSI you can put e.g. 2 .. 3 IBM
 DNES 9GB on a single UW cable (these are FAST while being affordable; each one
 does ~15MB/s) without losing too much performance.
 
 Did anybody measure how this is with U2W/LVD ?
 
 How is performance when putting e.g. 4, 6 or 8 IBM DNES 9GB LVD on a single
 U2W channel compared to putting them on multiple U2W channels ?


I do not know what kind of RAID level you are discussing, because I
haven't followed the thread, but here are some results for two IBM DMVS09V
drives on a single U2W channel with Linux software RAID1 (with the raid1 read
balance patch applied). These are tiotest results.

Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine     Directory   Size(MB)  BlkSz  Threads    Read   Write    Seeks
----------  ----------  --------  -----  -------  ------  ------  -------
icesus-r1p  /mnt/            800   4096        1  25.649  15.296  315.806
icesus-r1p  /mnt/            800   4096        2  33.970  15.528  610.314
icesus-r1p  /mnt/            800   4096        3  37.360  15.684  541.071
icesus-r1p  /mnt/            800   4096        4  35.351  15.447  629.723
icesus-r1p  /mnt/            800   4096        5  41.068  15.285  632.911
icesus-r1p  /mnt/            800   4096        6  40.818  15.131  624.352
icesus-r1p  /mnt/            800   4096        8  37.488  15.157  701.016

Not bad, considering the bus was 40 Mbytes/sec.
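(For anyone who wants to reproduce numbers like these: they come from the
tiotest/tiobench tool that turns up later in this thread. A sketch of a single
run, using only flags that appear in the tiobench output further down; the
directory, file size and thread count here are just illustrative:

  ./tiotest -t 4 -f 200 -s 1000 -b 4096 -d /mnt -T -W

tiobench.pl wraps this, picking the file size from RAM and repeating the run
for several thread counts.)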

Here is some info on the hardware:

Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM  Model: DMVS09V  Rev: 0100
  Type:   Direct-Access                      ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: IBM  Model: DMVS09V  Rev: 0100
  Type:   Direct-Access                      ANSI SCSI revision: 03

Adaptec AIC7xxx driver version: 5.1.20/3.2.4
Compile Options:
  TCQ Enabled By Default : Enabled
  AIC7XXX_PROC_STATS : Disabled
  AIC7XXX_RESET_DELAY: 5

Adapter Configuration:
   SCSI Adapter: Adaptec AIC-7890/1 Ultra2 SCSI host adapter
   Ultra-2 LVD/SE Wide Controller
PCI MMAPed I/O Base: 0xe300
 Adapter SEEPROM Config: SEEPROM found and used.
  Adaptec SCSI BIOS: Enabled
IRQ: 10
   SCBs: Active 1, Max Active 24,
 Allocated 30, HW 32, Page 255
 Interrupts: 420588
  BIOS Control Word: 0x10a6
   Adapter Control Word: 0x1c5e
   Extended Translation: Enabled
Disconnect Enable Flags: 0x
 Ultra Enable Flags: 0x
 Tag Queue Enable Flags: 0x0045
Ordered Queue Tag Flags: 0x0045
Default Tag Queue Depth: 8
Tagged Queue By Device array for aic7xxx host instance 0:
  {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
Actual queue depth per device for aic7xxx host instance 0:
  {8,1,8,1,1,1,8,1,1,1,1,1,1,1,1,1}

Statistics:

(scsi0:0:0:0)
  Device using Wide/Sync transfers at 40.0 MByte/sec, offset 31
  Transinfo settings: current(12/31/1/0), goal(12/31/1/0),
user(12/127/1/0)
  Total transfers 194067 (98491 reads and 95576 writes)


(scsi0:0:2:0)
  Device using Wide/Sync transfers at 40.0 MByte/sec, offset 31
  Transinfo settings: current(12/31/1/0), goal(12/31/1/0),
user(10/127/1/0)
  Total transfers 200068 (103008 reads and 97060 writes)

read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0] 4417728 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0] 4538240 blocks [2/2] [UU]
unused devices: <none>
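(For reference, the raidtools-style /etc/raidtab behind an md0 like the one
above would look roughly like this; a minimal sketch, with the partition names
taken from the mdstat output and chunk-size just a typical value:

  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      nr-spare-disks        0
      persistent-superblock 1
      chunk-size            4
      device                /dev/sda1
      raid-disk             0
      device                /dev/sdb1
      raid-disk             1

followed by "mkraid /dev/md0" to initialise the array.)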





Re: large ide raid system

2000-01-17 Thread Mika Kuoppala



On Fri, 14 Jan 2000, Keith Underwood wrote:

 I also experimented with the master/slave setup, and my recollection is
 also that it was worse than half the performance of the master only setup.
 
 Keith


I do not know about performance, but if you build a raid array using
masters and slaves on the same channel, won't it lack redundancy, because
if the master dies it will take the slave with it? So raid1 or raid5 using
masters AND slaves is totally unwise?

Some IDE guru could enlighten us on this. Is using a
slave safe in raid arrays, considering redundancy?

-- Mika

 On Tue, 11 Jan 2000, Jan Edler wrote:
 
  On Tue, Jan 11, 2000 at 04:25:27PM +0100, Benno Senoner wrote:
   Jan Edler wrote:
I wasn't advising against IDE, only against the use of slaves.
With UDMA-33 or -66, masters work quite well,
if you can deal with the other constraints that I mentioned
(cable length, PCI slots, etc).
   
   Do you have any numbers handy ?
  
  Sorry, I can't seem to find any quantitative results on that right now.
  
   will the performance of master/slave setup be at least HALF of the
   master-only setup.
  
  I did run some tests, and my recollection is that it was much worse.
  
   For some apps cost is really important, and software IDE RAID has a very low
   price/Megabyte.
   If the app doesn't need killer performance , then I think it is the best
   solution.
  
  It all depends on your minimum acceptable performance level.
  I know my master/slave test setup couldn't keep up with fast ethernet
  (10 MByte/s).  I don't remember if it was 1 Mbyte/s or not.
  
  I was also wondering about the reliability of using slaves.
  Does anyone know about the likelihood of a single failed drive
  bringing down the whole master/slave pair?  Since I have tended to
  stay away from slaves, for performance reasons, I don't know
  how they influence reliability.  Maybe it's ok.
  
  Jan Edler
  NEC Research Institute
  
 
 



Re: large ide raid system

2000-01-17 Thread Michael

 I do not know about performance, but if you build a raid array using
 masters and slaves on the same channel, won't it lack redundancy, because
 if the master dies it will take the slave with it? So raid1 or raid5
 using masters AND slaves is totally unwise?
 

I can only speak from experience. I have 3 production raid5 servers
on 2 ide channels. Over the last few years there have been 2 deaths
on different servers, both involving the master/slave channel. In
both cases the data redundancy was fine and no data was lost due to
the drive failure. I can see how data loss is possible but have not
experienced it in this environment. BTW, I'm moving the systems to 1 disk
per controller anyway.

Michael
[EMAIL PROTECTED]



Re: large ide raid system

2000-01-14 Thread Thomas Waldmann

 Cable length is not so much a pain as the number of cables. Of course with
 scsi you want multiple channels anyway for performance, so the situation
 is very similar to ide. A cable mess.

Well, it is at least only a half / third / ... of the cable count of "tuned"
single-device-on-a-cable EIDE RAID systems (and you don't have these big
problems with cable length).

I didn't try LVD/U2W SCSI yet, but using UW SCSI you can put e.g. 2 .. 3 IBM
DNES 9GB on a single UW cable (these are FAST while being affordable; each one
does ~15MB/s) without losing too much performance.

Did anybody measure how this is with U2W/LVD ?

How is performance when putting e.g. 4, 6 or 8 IBM DNES 9GB LVD on a single
U2W channel compared to putting them on multiple U2W channels ?

Thomas



Re: large ide raid system

2000-01-13 Thread Benno Senoner

Thomas Davis wrote:

 My 4-way IDE-based RAID0, on 2 channels (i.e. master/slave, master/slave), built
 using IBM 16GB Ultra33 drives, is capable of about 25MB/sec
 across the raid.

nice to hear :-) not a very big performance degradation



 Adding a Promise 66 card, changing to all masters, got the numbers up
 into the 30's range (I don't have them at the moment.. hmm..)

  I was also wondering about the reliability of using slaves.
  Does anyone know about the likelihood of a single failed drive
  bringing down the whole master/slave pair?  Since I have tended to
  stay away from slaves, for performance reasons, I don't know
  how they influence reliability.  Maybe it's ok.
 

 When the slave fails, the master goes down.

 My experience has been, when _ANY_ IDE drive fails, it takes down the
 whole channel.  Master or slave.  The kernel just gives fits..

hmm .. strange .. I've got an old Pentium box; I disconnected the slave and
the raid5 array continued to work after a TON of syslog messages.

Anyway, I agree that the master-only configuration is much more reliable
from an electrical point of view.

I was wondering how many IDE channels Linux 2.2 can handle;
can it handle 8 channels?

Would an Abit with 4 channels + 2 Promise Ultra66 cards work?
Or a normal BX mainboard (2 channels) + 3 Promise Ultra66?

thanks for the info,

Benno.





Re: large ide raid system

2000-01-13 Thread John Burton



[EMAIL PROTECTED] wrote:

 john b said:

  Performance is pretty good - these numbers are for a first generation
  smartcan (spring '99)

 these numbers are also useless since they are much too close to your ram size,
 and bonnie only shows how fast your system runs bonnie :) a better benchmark
 would be to see how this runs with multiple concurrent accesses to even larger
 files. perhaps something like tiotest?


Good point, I'll try re-running it with 500MB - 2GB file size. Just need to keep
other processes from doing raid i/o while I'm testing - it *is* a production
machine and has been running quite happily since June...


 but even with bonnie getting more cpu time, the speed did not seem terribly
 different. this makes me wonder about how fast the smartcan's logic really
 is...


True, but then again, it was a first generation smartcan setup...:-)


 <speech type=rant>
 i cant tell you about the division of responsibility, but i can tell you i
 keep closed source, binary modules out of my kernel. i have enough problems
 with vendors who dont release specs to their equipment, let alone those who
 ride on the backs of the kernel developers by taking advantage of open code,
 and keeping theirs closed. vote with your dollars i say.
 </speech>

I agree with you about the open source in the kernel... one thing you might want
to consider though is trying to motivate companies into developing / releasing
products for the Linux environment. In virtually every other market, the
*standard* is closed, binary modules & drivers. How do you get these companies to
open up and release their intellectual property? By refusing to buy their products
when they attempt to enter the market (you're basically saying, it's my way, or no
way)? How about *working* with them, buying their initial products? Once you
*prove* there is a market for them, they'll listen to you about the market
culture... if they still refuse, don't buy anything else from them...

John





Re: large ide raid system

2000-01-13 Thread Thomas Davis

Brian Grossman wrote:
 RZ
 RZ Of course this is not the only thing that affects speed. Other issues that
 RZ make our units fast are the PCI bus, which is 133MB/s, and DMA directly to
 RZ the drives.
 
 It is however, still unclear whether it's safe to run reiserfs on a
 raidzone.  I have a question about that out to Colin.
 

I tried the ext3 alpha code once... it went BOOM almost immediately
on first boot. But that may be the code, and not RZ's fault.

-- 
+--
Thomas Davis| PDSF Project Leader
[EMAIL PROTECTED] | 
(510) 486-4524  | "Only a petabyte of data this year?"



Re: large ide raid system

2000-01-13 Thread Jan Edler

Benno Senoner wrote:
 I was wondering how many IDE channels Linux 2.2 can handle;
 can it handle 8 channels?

I think the limit with the later 2.2 kernel ide patches is 10 IDE channels.
I have run quite a bit with 4 Promise cards (8 channels),
plus the 2 onboard PIIX channels.
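(For anyone building a similar box: the Promise channels only show up if the
kernel is built with the right IDE options. Roughly, and from memory (the
exact option names moved around between the 2.2 patch sets and 2.4, so check
the Config.in of your patched kernel rather than trusting this list):

  CONFIG_BLK_DEV_IDE=y
  CONFIG_BLK_DEV_IDEPCI=y
  CONFIG_BLK_DEV_IDEDMA=y
  CONFIG_BLK_DEV_PDC202XX=y    # Promise Ultra33/66
)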

Jan Edler
NEC Research Institute



Re: Ribbon Cabling (was Re: large ide raid system)

2000-01-12 Thread James Manning

[ Tuesday, January 11, 2000 ] Andy Poling wrote:
 On Tue, 11 Jan 2000, Gregory Leblanc wrote:
  If you cut the cable
  lengthwise (no, don't cut the wires) between wires (don't break the
  insulation on the wires themselves, just the connecting plastic) you can
  get your cables to be 1/4 the normal width (up until you get to the
  connector).
 
 I don't know about IDE, but I'm pretty sure that's a big no-no for SCSI
 cables.  The alternating conductors in the ribbon cable are sig, gnd, sig,
 gnd, sig, etc.  And it's electrically important (for proper impedance and
 noise and cross-talk rejection) that they stay that way.
 
 I think the same is probably true for the schmancy UDMA66 cables too...

So just check with a cable spec and make sure you're not separating a
data signal from its ground return path.  Throw some mag rings around
the thing if you want, but since we're (hopefully) terminated properly
(no reflection) the crosstalk issues aren't huge... they suffer more
through the LC matrix of connector adaptors than this split would cause :)

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



Re: large ide raid system

2000-01-12 Thread Brian Grossman


 Getting back to the discussion of Hardware vs. Software raid...
 Can someone say *definitively* *where* the raid-5 code is being run on a
 *current* Raidzone product? Originally, it was an "md" process running
 on the system cpu. Currently I'm not so sure. The SmartCan *does* have
 its own BIOS, so there is *some* intelligence there, but what exactly is
 the division of responsibility here...

From a recent email exchange with [EMAIL PROTECTED] of consensys, makers
of raidzone:

BG Does the raidzone product for linux use hardware or software raid?

RZ It is in the firmware of the unit.
RZ
RZ Everything is our own raid. eg. it is not RAID tools in Linux.

BG [please clarify]

RZ Our raid is firmware. This means that its both hardware and software.
RZ 
RZ Most people are interested in a RAID5 configuration. The parity is
RZ calculated on the CPU of the motherboard. Our raid is as fast as anyone
RZ else's raid, hardware or software. There is a great misconception regarding
RZ raid today. Basically, in the old days of 100 MHz CPUs there was a
RZ performance issue with calculating the parity on the CPU. Today that is
RZ not true, and many of the PC magazines reflect this in their comments.
RZ There are lots of leftover cycles to calculate the parity.
RZ
RZ Of course this is not the only thing that affects speed. Other issues that
RZ make our units fast are the PCI bus, which is 133MB/s, and DMA directly to
RZ the drives.


It is however, still unclear whether it's safe to run reiserfs on a
raidzone.  I have a question about that out to Colin.


Brian



Re: Ribbon Cabling (was Re: large ide raid system)

2000-01-12 Thread Chris Mauritz

 From [EMAIL PROTECTED] Tue Jan 11 21:44:29 2000
 
 On Tue, 11 Jan 2000, Gregory Leblanc wrote:
  If you cut the cable
  lengthwise (no, don't cut the wires) between wires (don't break the
  insulation on the wires themselves, just the connecting plastic) you can
  get your cables to be 1/4 the normal width (up until you get to the
  connector).
 
 I don't know about IDE, but I'm pretty sure that's a big no-no for SCSI
 cables.  The alternating conductors in the ribbon cable are sig, gnd, sig,
 gnd, sig, etc.  And it's electrically important (for proper impedance and
 noise and cross-talk rejection) that they stay that way.
 
 I think the same is probably true for the schmancy UDMA66 cables too...

<vent>

Back in the day  8-)  high end SCSI ribbon cables consisted of
twisted pairs between the connectors so it was really easy to deform
the cable to fit through tight spots.  Now, all I seem to find is the
cheap ribbon cable that's excreted from nameless companies in developing
countries where their ideas of quality control differ vastly from 
mine.  8-)  Either I'm really unlucky or the quality of ribbon cabling
in general is in decline...sigh.

</vent>

And I agree with the idea that slicing up the ribbon cable is probably
not going to work.  

Cheers,

Chris
-- 
Christopher Mauritz
[EMAIL PROTECTED]



Re: Ribbon Cabling (was Re: large ide raid system)

2000-01-12 Thread Anton Ivanov



On 11-Jan-2000 James Manning wrote:
 [ Tuesday, January 11, 2000 ] Andy Poling wrote:
 On Tue, 11 Jan 2000, Gregory Leblanc wrote:
  If you cut the cable
  lengthwise (no, don't cut the wires) between wires (don't break the
  insulation on the wires themselves, just the connecting plastic) you can
  get your cables to be 1/4 the normal width (up until you get to the
  connector).
 
 I don't know about IDE, but I'm pretty sure that's a big no-no for SCSI
 cables.  The alternating conductors in the ribbon cable are sig, gnd, sig,
 gnd, sig, etc.  And it's electrically important (for proper impedance and
 noise and cross-talk rejection) that they stay that way.
 
 I think the same is probably true for the schmancy UDMA66 cables too...
 
 So just check with a cable spec and make sure you're not separating a
 data signal from its ground return path.  Throw some mag rings around
 the thing if you want, but since we're (hopefully) terminated properly
 (no reflection) the crosstalk issues aren't huge... they suffer more
 through the LC matrix of connector adaptors than this split would cause :)
 
 James
 -- 
 Miscellaneous Engineer --- IBM Netfinity Performance Development

Have a look at old 3M cables (used in most old Suns and all old DECstations).
They have all the wires separated. And they work at least up to SCSI-2. I
also thought that the sig/gnd/sig/gnd was mandatory, but these cables prove that
there is another way to do it, at least in some cases.

My $0.02

--
Anton R. Ivanov
IP Engineer Level3 Communications
RIPE: ARI2-RIPE  E-Mail: Anton Ivanov [EMAIL PROTECTED]
@*** Segal's Law ***
  A man with one watch knows what time it is;
  a man with two watches is never sure.




Re: Ribbon Cabling (was Re: large ide raid system)

2000-01-12 Thread Bohumil Chalupa

On Tue, 11 Jan 2000, James Manning wrote:

   If you cut the cable
   lengthwise (no, don't cut the wires) between wires (etc.)
 
  I don't know about IDE, but I'm pretty sure that's a big no-no for SCSI
  cables.  The alternating conductors in the ribbon cable are sig, gnd, sig,
  gnd, sig, etc.  And it's electrically important (for proper impedance and
  noise and cross-talk rejection) that they stay that way.
  
  I think the same is probably true for the schmancy UDMA66 cables too...
 
 So just check with a cable spec and make sure you're not separating a
 data signal from its ground return path.  Throw some mag rings around
 the thing if you want, but since we're (hopefully) terminated properly
 (no reflection) the crosstalk issues aren't huge... they suffer more
 through the LC matrix of connector adaptors than this split would cause :)

"Termination" means nothing else than a resistance at the end of the
cable (each pair) that is equivalent to the cable impedance. And the
impedance depends on the cable geometry (and material, of course).
So IMHO you can't divide the flat cable into pairs (unless they're
twisted pairs) or even single wires without an impedance change
in the cable section involved.
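
(A one-line sketch of the point, for an ideal lossless line: the
characteristic impedance is fixed by the per-unit-length inductance and
capacitance, which in turn are fixed by the conductor geometry,

  $Z_0 \approx \sqrt{L'/C'}$

so moving a conductor relative to its return path changes L' and C', and with
them Z_0 and the quality of the termination match.)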

BoChal.



RE: Ribbon Cabling (was Re: large ide raid system)

2000-01-12 Thread Kenneth Cornetet





You may be thinking of differential SCSI, which uses a balanced (and twisted)
pair for each data and signal line. In the old days, there was only one flavor
of differential, and it was popular at least on Hewlett-Packard 800 series
systems (which use round SCSI cables). Now, in addition to the old
differential, there is something called low voltage differential (LVD), also
known in marketing hype as Ultra2. LVD cables look like a ribbon cable but
have the signal pairs twisted.

The last ones I bought from Adaptec were high quality, but expensive! Best I
remember, they were almost $100 for a 4 device cable. But then again, they
have active terminators built on the end of the cable (LVD drives don't have
built-in terminators).

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 11, 2000 10:11 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Ribbon Cabling (was Re: large ide raid system)



 From [EMAIL PROTECTED] Tue Jan 11 21:44:29 2000
 
 On Tue, 11 Jan 2000, Gregory Leblanc wrote:
  If you cut the cable
  lengthwise (no, don't cut the wires) between wires (don't break the
  insulation on the wires themselves, just the connecting plastic) you can
  get your cables to be 1/4 the normal width (up until you get to the
  connector).
 
 I don't know about IDE, but I'm pretty sure that's a big no-no for SCSI
 cables. The alternating conductors in the ribbon cable are sig, gnd, sig,
 gnd, sig, etc. And it's electrically important (for proper impedance and
 noise and cross-talk rejection) that they stay that way.
 
 I think the same is probably true for the schmancy UDMA66 cables too...


<vent>


Back in the day 8-) high end SCSI ribbon cables consisted of
twisted pairs between the connectors so it was really easy to deform
the cable to fit through tight spots. Now, all I seem to find is the
cheap ribbon cable that's excreted from nameless companies in developing
countries where their ideas of quality control differ vastly from 
mine. 8-) Either I'm really unlucky or the quality of ribbon cabling
in general is in decline...sigh.


</vent>


And I agree with the idea that slicing up the ribbon cable is probably
not going to work. 


Cheers,


Chris
-- 
Christopher Mauritz
[EMAIL PROTECTED]





Re: large ide raid system

2000-01-12 Thread Thomas Davis

Jan Edler wrote:
 
 It all depends on your minimum acceptable performance level.
 I know my master/slave test setup couldn't keep up with fast ethernet
 (10 MByte/s).  I don't remember if it was 1 Mbyte/s or not.
 

Fast Ethernet is ~12MB/sec, Ethernet is ~1.2MB/sec.

My 4-way IDE-based RAID0, on 2 channels (i.e. master/slave, master/slave), built
using IBM 16GB Ultra33 drives, is capable of about 25MB/sec
across the raid.

Adding a Promise 66 card, changing to all masters, got the numbers up
into the 30's range (I don't have them at the moment.. hmm..)

 I was also wondering about the reliability of using slaves.
 Does anyone know about the likelihood of a single failed drive
 bringing down the whole master/slave pair?  Since I have tended to
 stay away from slaves, for performance reasons, I don't know
 how they influence reliability.  Maybe it's ok.
 

When the slave fails, the master goes down.

My experience has been, when _ANY_ IDE drive fails, it takes down the
whole channel.  Master or slave.  The kernel just gives fits..

-- 
+--
Thomas Davis| PDSF Project Leader
[EMAIL PROTECTED] | 
(510) 486-4524  | "Only a petabyte of data this year?"



Re: Ribbon Cabling (was Re: large ide raid system)

2000-01-12 Thread James Manning

$horse='dead';
beat($horse);

[ Wednesday, January 12, 2000 ] Bohumil Chalupa wrote:
 ,,Termination`` means nothing else then a resistance at the end of the
 cable (each pair) that is equivalent to the cable impedance. And the
 impedance depends on the cable geometry (and material, of course).
 So IMHO you can't divide the flat cable to the pairs (unless they're
 twisted pairs) or even single wires without an impedance change
 of the cable section involved.

Impedance depends on the cable the signal is going through and the
distance to its return path (which you agree to above).  The distributed
RLC model of transmission lines (longer than 3 inches, so I'm not trusting
lumped :) is based on properties of the cable itself, though.  Now,
fast signals (GHz signals skimming only along the top of a microstrip,
for instance) need a very-close signal return path (PCB traces that need
to be closer to the power plane under/over them, as impedance rises with
distance from return path) hence the need to keep the pairs together,
but with the pairs kept together each cable is equi-distant to its return
path both before and after the splitting, so impedance is not affected.

If someone splits out the individual wires, you're absolutely right,
but that's not what we're advocating :)

Anyway, there's enough cables that come off the manufacturing line not
meeting spec (and still get used) that even if messing with the ribbon
had integrity issues, that doesn't mean the cable wouldn't still work :)

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



Re: large ide raid system

2000-01-12 Thread Thomas Davis

James Manning wrote:
 
 [ Tuesday, January 11, 2000 ] Thomas Davis wrote:
                -------Sequential Output-------- ---Sequential Input-- --Random--
                -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
  pdsfdv10 1024 14076 85.1 18487 24.3 12089 35.8 20182 83.0 63064 69.8 344.4  7.1
 
 hmmm ok...
 
 Any chance I could talk you into running the tiobench.pl from
 http://www.iki.fi/miku/tiotest/ (after "make" to build tiotest)?
 
 I'd love to see what it puts out vs. bonnie on such a system.
 

[root@pdsfdv10 tiotest-0.16]# ./tiobench.pl
Found memory size of 255.94140625 MB
Now running ./tiotest -t 1 -f 510 -s 4000 -b 4096 -d . -T -W
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine  Directory  Size(MB)  BlkSz  Threads    Read   Write    Seeks
-------  ---------  --------  -----  -------  ------  ------  -------
         .               510   4096        1  63.197  16.073  185.185
Now running ./tiotest -t 2 -f 255 -s 2000 -b 4096 -d . -T -W
         .               510   4096        2  62.347  15.366  304.183
Now running ./tiotest -t 4 -f 127 -s 1000 -b 4096 -d . -T -W
         .               510   4096        4  67.285  15.070  528.402
Now running ./tiotest -t 8 -f 63 -s 500 -b 4096 -d . -T -W
         .               510   4096        8  52.610  14.797  803.213
[root@pdsfdv10 tiotest-0.16]# ./tiobench.pl --threads 16
Found memory size of 255.94140625 MB
Now running ./tiotest -t 16 -f 31 -s 250 -b 4096 -d . -T -W
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine  Directory  Size(MB)  BlkSz  Threads    Read   Write    Seeks
-------  ---------  --------  -----  -------  ------  ------  -------
         .               510   4096       16  33.514  14.806  1302.93
[root@pdsfdv10 tiotest-0.16]# ./tiobench.pl --threads 32
Found memory size of 255.94140625 MB
Now running ./tiotest -t 32 -f 15 -s 125 -b 4096 -d . -T -W
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine  Directory  Size(MB)  BlkSz  Threads    Read   Write    Seeks
-------  ---------  --------  -----  -------  ------  ------  -------
         .               510   4096       32  27.491  13.445  1851.85
[root@pdsfdv10 tiotest-0.16]# ./tiobench.pl --threads 64
Found memory size of 255.94140625 MB
Now running ./tiotest -t 64 -f 7 -s 62 -b 4096 -d . -T -W
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine  Directory  Size(MB)  BlkSz  Threads    Read   Write    Seeks
-------  ---------  --------  -----  -------  ------  ------  -------
         .               510   4096       64  42.667  13.211  2110.63
[root@pdsfdv10 tiotest-0.16]# ./tiobench.pl --threads 128
Found memory size of 255.94140625 MB
Now running ./tiotest -t 128 -f 3 -s 31 -b 4096 -d . -T -W
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine  Directory  Size(MB)  BlkSz  Threads    Read   Write    Seeks
-------  ---------  --------  -----  -------  ------  ------  -------
         .               510   4096      128  44.548  13.627  2511.39
[root@pdsfdv10 tiotest-0.16]# ./tiobench.pl --threads 256
Found memory size of 255.94140625 MB
Now running ./tiotest -t 256 -f 1 -s 15 -b 4096 -d . -T -W
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

Machine  Directory  Size(MB)  BlkSz  Threads    Read   Write    Seeks
-------  ---------  --------  -----  -------  ------  ------  -------
         .               510   4096      256  36.941  12.580  4042.10

(I was playing around at the end..)

-- 
+--
Thomas Davis| PDSF Project Leader
[EMAIL PROTECTED] | 
(510) 486-4524  | "Only a petabyte of data this year?"



Re: large ide raid system

2000-01-11 Thread Gregory Leblanc

Dan Hollis wrote:
 
 On Mon, 10 Jan 2000, Jan Edler wrote:
 
 Cable length is not so much a pain as the number of cables. Of course with
 scsi you want multiple channels anyway for performance, so the situation
 is very similar to ide. A cable mess.

There's a (relatively) nice way to get around this, if you make your own
IDE cables (or are brave enough to cut some up).  If you cut the cable
lengthwise (no, don't cut the wires) between wires (don't break the
insulation on the wires themselves, just the connecting plastic) you can
get your cables to be 1/4 the normal width (up until you get to the
connector).  This also makes a big difference for airflow, since those
big, flat ribbon cables are really bad for that.  
Greg



Re: large ide raid system

2000-01-11 Thread Benno Senoner

Jan Edler wrote:

 On Mon, Jan 10, 2000 at 12:49:29PM -0800, Dan Hollis wrote:
  On Mon, 10 Jan 2000, Jan Edler wrote:
- Performance is really horrible if you use IDE slaves.
  Even though you say you aren't performance-sensitive, I'd
  recommend against it if possible.
 
  My tests indicate UDMA performs favorably with ultrascsi, at about 1/6 the
  cost. Cost is often a big factor.

 I wasn't advising against IDE, only against the use of slaves.
 With UDMA-33 or -66, masters work quite well,
 if you can deal with the other constraints that I mentioned
 (cable length, PCI slots, etc).

Do you have any numbers handy?

Will the performance of a master/slave setup be at least HALF of the
master-only setup?

For some apps cost is really important, and software IDE RAID has a very low
price/Megabyte.
If the app doesn't need killer performance, then I think it is the best
solution.

Now if we only had soft-RAID + journaled FS + power-failure safety right now
...

cheers,
Benno.





Re: large ide raid system

2000-01-11 Thread John Burton

Thomas Davis wrote:
 
 James Manning wrote:
 
  Well, it's kind of on-topic thanks to this post...
 
  Has anyone used the systems/racks/appliances/etc from raidzone.com?
  If you believe their site, it certainly looks like a good possibility.
 
 
 Yes.
 
 It's pricey.  Not much cheaper than SCSI chassis.  You only save money
 on the drives.
 

Interesting... The 100GB Internal RAID-5 SmartCan I purchased from
RaidZone was approx. $5k. The quotes I got for a SCSI equivalent ranged
from $10k to $15K. Personally I consider half the cost significantly
cheaper. I also was quite impressed with a quote for a 1TB rackmount
system in the $50K range, again SCSI equivalents were significantly
higher...

 Performance is ok.  Has a few other problems - you're stuck with the
 kernels they support; the raid code is NOT open sourced.

Performance is pretty good - these numbers are for a first generation
smartcan (spring '99)

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
raidzone  100  6923 89.7 25987 26.6 14230 28.9  7297 89.4 215121 77.7 16407.3 69.7
raidzone  200  6537 86.2 22175 21.5 14297 30.2  7667 92.5  56355 36.0   377.5  3.1

Softraid  100  6598 86.0 43411 36.5 12077 27.4  6180 77.9  54022 46.4   721.4  4.1
Softraid  200  8337 87.9 25373 24.0  9009 18.8  8952 87.1  34413 21.7   301.1  2.2

The two sets of numbers were measured on the same computer & hardware
setup (500MHz PIII w/ 128MB, 100GB Smartcan w/ 5 24GB IBM drives).
"raidzone" is using Raidzone's most recent pre-release version of their
Linux software (BIOS upgrades & all). "Softraid" was based on an early
alpha release of RaidZone's Linux support which basically allowed you to
access the individual drives. RAID was handled by the Software RAID
support available under RedHat Linux 6.0 & 6.1. Both were set up as
RAID-5.

Using "top":
 - With "Softraid", bonnie and the md RAID-5 software were sharing the
   cpu equally
 - With "raidzone", bonnie was consuming most (85%) of the cpu, no other
   processes of note, and "system" was under 15%

Getting back to the discussion of Hardware vs. Software raid...
Can someone say *definitively* *where* the raid-5 code is being run on a
*current* Raidzone product? Originally, it was an "md" process running
on the system cpu. Currently I'm not so sure. The SmartCan *does* have
its own BIOS, so there is *some* intelligence there, but what exactly is
the division of responsibility here...

John

-- 
John Burton, Ph.D.
Senior Associate GATS, Inc.  
[EMAIL PROTECTED]  11864 Canon Blvd - Suite 101
[EMAIL PROTECTED] (personal)  Newport News, VA 23606
(757) 873-5920 (voice)   (757) 873-5924 (fax)



Re: large ide raid system

2000-01-11 Thread D. Lance Robinson

SCSI works quite well with many devices connected to the same cable. The PCI bus
turns out to be the bottleneck with the faster SCSI modes, so it doesn't matter
how many channels you have. If performance were the issue (though the original
poster wasn't interested in performance), multiple channels would improve
performance if the slower (single-ended) devices are used.

 Lance

Dan Hollis wrote:

 Cable length is not so much a pain as the number of cables. Of course with
 scsi you want multiple channels anyway for performance, so the situation
 is very similar to ide. A cable mess.



Ribbon Cabling (was Re: large ide raid system)

2000-01-11 Thread Andy Poling

On Tue, 11 Jan 2000, Gregory Leblanc wrote:
 If you cut the cable
 lengthwise (no, don't cut the wires) between wires (don't break the
 insulation on the wires themselves, just the connecting plastic) you can
 get your cables to be 1/4 the normal width (up until you get to the
 connector).

I don't know about IDE, but I'm pretty sure that's a big no-no for SCSI
cables.  The alternating conductors in the ribbon cable are sig, gnd, sig,
gnd, sig, etc.  And it's electrically important (for proper impedance and
noise and cross-talk rejection) that they stay that way.

I think the same is probably true for the schmancy UDMA66 cables too...

-Andy



Re: large ide raid system

2000-01-11 Thread Thomas Davis

John Burton wrote:
 
 Thomas Davis wrote:
 
  James Manning wrote:
  
   Well, it's kind of on-topic thanks to this post...
  
   Has anyone used the systems/racks/appliances/etc from raidzone.com?
   If you believe their site, it certainly looks like a good possibility.
  
 
  Yes.
 
  It's pricey.  Not much cheaper than SCSI chassis.  You only save money
  on the drives.
 
 
 Interesting... The 100GB Internal RAID-5 SmartCan I purchased from
 RaidZone was approx. $5k. The quotes I got for a SCSI equivalent ranged
 from $10k to $15K. Personally I consider half the cost significantly
 cheaper. I also was quite impressed with a quote for a 1TB rackmount
 system in the $50K range, again SCSI equivalents were significantly
 higher...
 

We paid $25k x 4, for:

2x450mhz cpu
256mb ram
15x37gb IBM 5400 drives (550 gb of drive space)
Intel system board, w/eepro
tulip card 
(channel bonded into cisco5500)

 
 Performance is pretty good - these numbers are for a first generation
 smartcan (spring '99)
 
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
 Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
 raidzone  100  6923 89.7 25987 26.6 14230 28.9  7297 89.4 215121 77.7 16407.3 69.7
 raidzone  200  6537 86.2 22175 21.5 14297 30.2  7667 92.5  56355 36.0   377.5  3.1

 Softraid  100  6598 86.0 43411 36.5 12077 27.4  6180 77.9  54022 46.4   721.4  4.1
 Softraid  200  8337 87.9 25373 24.0  9009 18.8  8952 87.1  34413 21.7   301.1  2.2

You made a mistake.  :-)  Your bonnie size is smaller than the amount of
memory in the machine you tested on - so you tested the memory, NOT the
drive system.
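
(The usual rule of thumb is a bonnie file of at least twice RAM, so on a
128MB machine something like the run below. This assumes the classic bonnie,
where -d is the scratch directory and -s the file size in MB; the path is
just an example:

  bonnie -d /raid/scratch -s 512
)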

Our current large machine(s) (15x37gb IBM drives, 500gb file system, 4kb
blocks, v2.2.13 kernel, fixed knfsd, channel bonding, raidzone 1.2.0b3)
does:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
pdsfdv10 1024 14076 85.1 18487 24.3 12089 35.8 20182 83.0 63064 69.8 344.4  7.1

I've also hit it with 8 machines, doing an NFS copy of about 60gb onto
it, and it sustained about a 20mb/sec write rate.

 
 Using "top":
  - With "Softraid", bonnie and the md RAID-5 software were sharing the
    cpu equally
  - With "raidzone", bonnie was consuming most (85%) of the cpu, no other
    processes of note, and "system" was under 15%
 

I've seen load averages in the 5's and 6's.  This is on a dual processor
machine w/256mb of ram.  My biggest complaint is the raid rebuild code
runs as the highest priority, so on a crash/reboot, it takes _forever_
for fsck to complete (because the rebuild thread is taking all of the
CPU and disk bandwidth).

The raidzone code also appears to be single threaded - it doesn't take
advantage of multiple CPU's.  (although, user space code benefits from
having a second CPU then)

 Getting back to the discussion of Hardware vs. Software raid...
 Can someone say *definitively* *where* the raid-5 code is being run on a
 *current* Raidzone product? Originally, it was an "md" process running
 on the system cpu. Currently I'm not so sure. The SmartCan *does* have
 its own BIOS, so there is *some* intelligence there, but what exactly is
 the division of responsibility here...
 

None of the RAID code runs in the smartcan or the controller.  It all
runs in the kernel.  The current code has several kernel threads, and a
user-space thread:

root         6  0.0  0.0     0    0  ?      SW   Jan04   0:02  [rzft-syncd]
root         7  0.0  0.0     0    0  ?      SW   Jan04   0:00  [rzft-rcvryd]
root         8  0.1  0.0     0    0  ?      SW   Jan04  14:41  [rzft-dpcd]
root       620  0.0  0.0   564    0  ?      SW   Jan04   0:00  [rzmpd]
root       621  0.0  0.1  2080  296  ?      S    Jan04   3:30  rzmpd
root      3372  0.0  0.0     0    0  ?      Z    Jan10   0:00  [rzmpd <defunct>]
root      3806  0.0  0.1  1240  492  pts/1  S    09:57   0:00  grep rz

-- 
+--
Thomas Davis| PDSF Project Leader
[EMAIL PROTECTED] | 
(510) 486-4524  | "Only a petabyte of data this year?"



Re: large ide raid system

2000-01-11 Thread Gregory Leblanc

Benno Senoner wrote:
 
 Jan Edler wrote:
 
  On Mon, Jan 10, 2000 at 12:49:29PM -0800, Dan Hollis wrote:
   On Mon, 10 Jan 2000, Jan Edler wrote:
 - Performance is really horrible if you use IDE slaves.
   Even though you say you aren't performance-sensitive, I'd
   recommend against it if possible.
  
   My tests indicate UDMA performs favorably with ultrascsi, at about 1/6 the
   cost. Cost is often a big factor.
 
  I wasn't advising against IDE, only against the use of slaves.
  With UDMA-33 or -66, masters work quite well,
  if you can deal with the other constraints that I mentioned
  (cable length, PCI slots, etc).
 
 Do you have any numbers handy ?
 
 will the performance of master/slave setup be at least HALF of the
 master-only setup.

Well, this depends on how it's used.  If you were saturating your I/O
bus, then things would be REALLY ugly.  Say you've got a controller
running in UDMA/33 mode, with two disks attached.  If you have drives
that are reasonably fast, say recent 5400 RPM UDMA drives, then this
will actually hinder performance compared to having just one drive.  If
you're doing 16MB/sec of I/O, then your performance will be slightly
less than half the performance of having just one drive on that channel
(consider overhead, IDE controller context switches, etc).  If you only
need the space, then this is an acceptable solution for low-throughput
applications.  I don't know jack schitt about ext2, the linux ide
drivers (patches or old ones), or about the RAID code, except that they
work.  

 
 For some apps cost is really important, and software IDE RAID has a very low
 price/Megabyte.
 If the app doesn't need killer performance , then I think it is the best
 solution.

It's a very good solution for a small number of disks, where you can
keep everything in a small case.  It may actually be superior to SCSI
for situations where you have 4 or fewer disks and can put just a single
disk on a controller.  

 
 now if we only had soft-RAID + journaled FS + power failure safeness  right now
 ...

As long as it gets there relatively soon, I'll be happy.
fsck'ing is the only thing that really bugs me...
Greg



Re: large ide raid system

2000-01-11 Thread Jan Edler

On Tue, Jan 11, 2000 at 04:25:27PM +0100, Benno Senoner wrote:
 Jan Edler wrote:
  I wasn't advising against IDE, only against the use of slaves.
  With UDMA-33 or -66, masters work quite well,
  if you can deal with the other constraints that I mentioned
  (cable length, PCI slots, etc).
 
 Do you have any numbers handy ?

Sorry, I can't seem to find any quantitative results on that right now.

 will the performance of master/slave setup be at least HALF of the
 master-only setup.

I did run some tests, and my recollection is that it was much worse.

 For some apps cost is really important, and software IDE RAID has a very low
 price/Megabyte.
 If the app doesn't need killer performance , then I think it is the best
 solution.

It all depends on your minimum acceptable performance level.
I know my master/slave test setup couldn't keep up with fast ethernet
(10 MByte/s).  I don't remember if it was 1 Mbyte/s or not.

I was also wondering about the reliability of using slaves.
Does anyone know about the likelihood of a single failed drive
bringing down the whole master/slave pair?  Since I have tended to
stay away from slaves, for performance reasons, I don't know
how they influence reliability.  Maybe it's ok.

Jan Edler
NEC Research Institute



Re: large ide raid system

2000-01-11 Thread James Manning

[ Tuesday, January 11, 2000 ] John Burton wrote:
 Performance is pretty good - these numbers are for a first generation
 smartcan (spring '99)

Could you re-run the raidzone and softraid with a size of 512MB or larger?

Could you run the tiobench.pl from http://www.iki.fi/miku/tiotest
(after "make" to build tiotest)

Those would be great results to see.
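
(For completeness, the steps are just the following, with the tarball name
assumed and --threads as used in the results posted elsewhere in this thread:

  tar xzf tiotest-0.16.tar.gz
  cd tiotest-0.16
  make
  ./tiobench.pl
  ./tiobench.pl --threads 8
)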

Thanks,

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



Re: large ide raid system

2000-01-10 Thread James Manning

[ Sunday, January  9, 2000 ] Franc Carter wrote:
 I am planning to set up a large ide raid5 system. From reading the
 archives of the list it looks like the way to go is with promise
 ultra66 cards, making sure that I have good cables. I am hoping
 to get a minimum of 8 drives into a machine. My current plan is for
 the following config:-
 
 37gig IBM ide drives
 2.2.14 kernel (or may be a 2.3 series)
 software raid5
 Promise Ultra66 cards
 Good quality cabling
 extra fans
 
 Any comments or suggestions ? I don't care about performance (it's
 only competing against tape drives), however I do care about dollars
 per gigabyte and reliability.

Well, it's kind of on-topic thanks to this post...

Has anyone used the systems/racks/appliances/etc from raidzone.com?
If you believe their site, it certainly looks like a good possibility.

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



Re: large ide raid system

2000-01-10 Thread Gregory Leblanc

Franc Carter wrote:
 
 I am planning to set up a large ide raid5 system. From reading the
 archives of the list it looks like the way to go is with promise
 ultra66 cards, making sure that I have good cables. I am hoping
 to get a minimum of 8 drives into a machine. My current plan is for
 the following config:-
 
 37gig IBM ide drives
 2.2.14 kernel (or may be a 2.3 series)
 software raid5
 Promise Ultra66 cards
 Good quality cabling
 extra fans
 
 Any comments or suggestions ? I don't care about performance (it's
 only competing against tape drives), however I do care about dollars
 per gigabyte and reliability.

Personally I'd recommend getting 8 channels for your drives (1 per
channel).  This will make things a bit more expensive (not too much
though), and will make your whole system a lot happier.  Of course, if
you really don't care about performance, then you should be getting
cheaper IDE controllers, and cheaper drives.  The two that you've got
picked out are high end for IDE components.  What kind of a case are you
looking to house this in?
Greg



Re: large ide raid system

2000-01-10 Thread Jan Edler

From my experience, it works fairly well, but there are some constraints:

 - Performance is really horrible if you use IDE slaves.
   Even though you say you aren't performance-sensitive, I'd
   recommend against it if possible.
 - Thus, to get 8 drives in a machine, you not only need
   mounting, power, and cooling for 8 drives,
   but also 4 available PCI slots for the Promise cards
   (or maybe 3 if you can make use of onboard ATA channels).
 - Cable length can be a problem.  I've had good luck with the
   24 inch cables, although they exceed the length specified
   in the spec.  Even so, it can be tough to route the cables
   from the promise cards to the drives.  I think it would be
   completely hopeless for 8 drives with 18 inch cables.
 - It may be worth getting hot-swap drive boxes, although
   it will add significantly to your per-drive cost.
   Be careful to get ones that support udma-66 (or at least
   udma-33).  This allows you to recover rather more quickly
   from a drive failure, assuming you buy at least 1 extra
   hot-swap box and drive.  Even if you don't mind rebooting
   to deal with a failure, it sure beats tearing open the machine.

Good luck,
Jan Edler
NEC Research Institute


On Mon, Jan 10, 2000 at 03:26:26PM +1100, Franc Carter wrote:
 
 I am planning to set up a large ide raid5 system. From reading the
 archives of the list it looks like the way to go is with promise
 ultra66 cards, making sure that I have good cables. I am hoping
 to get a minimum of 8 drives into a machine. My current plan is for
 the following config:-
 
 37gig IBM ide drives
 2.2.14 kernel (or may be a 2.3 series)
 software raid5
 Promise Ultra66 cards
 Good quality cabling
 extra fans
 
 Any comments or suggestions ? I don't care about performance (it's
 only competing against tape drives), however I do care about dollars
 per gigabyte and reliability.
 
 thanks
 
 
 -- 
Franc Carter            MEMLab, University of Sydney
 Ph: 61-2-9351-7819  Fax: 9351-6461



Re: large ide raid system

2000-01-10 Thread Jan Edler

On Mon, Jan 10, 2000 at 12:49:29PM -0800, Dan Hollis wrote:
 On Mon, 10 Jan 2000, Jan Edler wrote:
   - Performance is really horrible if you use IDE slaves.
 Even though you say you aren't performance-sensitive, I'd
 recommend against it if possible.
 
 My tests indicate UDMA performs favorably with ultrascsi, at about 1/6 the
 cost. Cost is often a big factor.

I wasn't advising against IDE, only against the use of slaves.
With UDMA-33 or -66, masters work quite well,
if you can deal with the other constraints that I mentioned
(cable length, PCI slots, etc).

Jan Edler
NEC Research Institute



Re: large ide raid system

2000-01-10 Thread Dan Hollis

On Mon, 10 Jan 2000, Jan Edler wrote:
  - Performance is really horrible if you use IDE slaves.
Even though you say you aren't performance-sensitive, I'd
recommend against it if possible.

My tests indicate UDMA performs favorably with ultrascsi, at about 1/6 the
cost. Cost is often a big factor.
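
(For a quick single-drive UDMA sanity check of the sort these comparisons
rest on, hdparm is the usual tool. The flags below are standard hdparm ones;
the device name is just an example:

  hdparm -d 1 -c 1 /dev/hde    # turn on DMA and 32-bit I/O
  hdparm -t /dev/hde           # timed sequential reads from the disk
  hdparm -T /dev/hde           # timed cache reads, as a memory baseline
)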

-Dan



Re: large ide raid system

2000-01-10 Thread Jan Edler

On Mon, Jan 10, 2000 at 02:03:14AM -0500, James Manning wrote:
 Has anyone used the systems/racks/appliances/etc from raidzone.com?
 If you believe their site, it certainly looks like a good possibility.

The raidzone stuff works, and the packaging is nice.
They provide much more scalability than a roll-your-own ATA-based
solution, so you can have many more than 8 drives.  In terms
of software, their raid layer lives within the driver, which has
some advantages (and some disadvantages) also.

The disadvantages are mainly that it's much more expensive than
a roll-your-own approach and they are taking a fairly closed
attitude towards the software.  The driver is distributed in binary
form.  You apply a bunch of kernel patches, and link in their driver.
This causes all sorts of problems.

Jan Edler
NEC Research Institute



Re: large ide raid system

2000-01-10 Thread Dan Hollis

On Mon, 10 Jan 2000, Jan Edler wrote:
  My tests indicate UDMA performs favorably with ultrascsi, at about 1/6 the
  cost. Cost is often a big factor.
 I wasn't advising against IDE, only against the use of slaves.

Here we agree :D  1 device per channel. (When will any vendors implement
IDE disconnect? the spec has existed for ages.)

 With UDMA-33 or -66, masters work quite well,
 if you can deal with the other constraints that I mentioned
 (cable length, PCI slots, etc).

Get an Abit BP6 and you have 4 onboard udma channels, and 6 PCI slots. :D

Cable length is not so much a pain as the number of cables. Of course with
scsi you want multiple channels anyway for performance, so the situation
is very similar to ide. A cable mess.

-Dan



Re: large ide raid system

2000-01-10 Thread Thomas Davis

James Manning wrote:
 
 Well, it's kind of on-topic thanks to this post...
 
 Has anyone used the systems/racks/appliances/etc from raidzone.com?
 If you believe their site, it certainly looks like a good possibility.
 

Yes.

It's pricey.  Not much cheaper than SCSI chassis.  You only save money
on the drives.

Performance is ok.  Has a few other problems - you're stuck with the
kernels they support; the raid code is NOT open sourced.

-- 
+--
Thomas Davis| PDSF Project Leader
[EMAIL PROTECTED] | 
(510) 486-4524  | "Only a petabyte of data this year?"



large ide raid system

2000-01-09 Thread Franc Carter


I am planning to set up a large ide raid5 system. From reading the
archives of the list it looks like the way to go is with promise
ultra66 cards, making sure that I have good cables. I am hoping
to get a minimum of 8 drives into a machine. My current plan is for
the following config:-

37gig IBM ide drives
2.2.14 kernel (or may be a 2.3 series)
software raid5
Promise Ultra66 cards
Good quality cabling
extra fans
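
(For concreteness, with raidtools the config above boils down to an
/etc/raidtab along these lines. A sketch only: the device names assume the
two onboard channels plus three Promise cards with one master per channel,
and chunk-size/parity-algorithm are just typical values:

  raiddev /dev/md0
      raid-level            5
      nr-raid-disks         8
      nr-spare-disks        0
      persistent-superblock 1
      parity-algorithm      left-symmetric
      chunk-size            32
      device                /dev/hda1
      raid-disk             0
      device                /dev/hdc1
      raid-disk             1
      device                /dev/hde1
      raid-disk             2
      device                /dev/hdg1
      raid-disk             3
      device                /dev/hdi1
      raid-disk             4
      device                /dev/hdk1
      raid-disk             5
      device                /dev/hdm1
      raid-disk             6
      device                /dev/hdo1
      raid-disk             7
)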

Any comments or suggestions ? I don't care about performance (it's
only competing against tape drives), however I do care about dollars
per gigabyte and reliability.

thanks


-- 
Franc Carter            MEMLab, University of Sydney
Ph: 61-2-9351-7819  Fax: 9351-6461