Re: speed and scaling

2000-07-11 Thread jlewis

On Tue, 11 Jul 2000, Krisztián Tamás wrote:

  or just brilliant driver design by Leonard Zubkoff, but the Mylex
  cards are the performance king for hardware RAID under Linux (and
 I was surprised to hear that. We just bought a Mylex AcceleRAID
 250 with five 18G IBM disks, and it's a lot slower than sw RAID.
 
 HW RAID 5  read:  26MB/s  write:   9MB/s  
 SW RAID 5  read:  43MB/s  write:  39MB/s

That's about the same as I get with an AcceleRAID 150 with 16MB and 5 IBM
9GB 7200rpm drives doing RAID5.  The RAID controller CPUs just don't seem
to keep up with system processor speed.  Of course, these are Mylex's
low-end entry level RAID cards.  I want to try out an ExtremeRAID and see
if write speed gets much better.
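
For comparison purposes, a quick sequential-throughput sanity check can be
done with dd (a rough sketch; the mount point is hypothetical, and the test
file should be at least 2x RAM so the page cache doesn't flatter the
numbers):

# write ~1GB through the filesystem, then read it back
dd if=/dev/zero of=/raid/ddtest bs=1024k count=1024
dd if=/raid/ddtest of=/dev/null bs=1024k
rm /raid/ddtest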

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: speed and scaling

2000-07-10 Thread jlewis

On Mon, 10 Jul 2000, Seth Vidal wrote:

 What I was thinking was a good machine with a 64bit pci bus and/or
 multiple buses.
 And A LOT of external enclosures.

Multiple Mylex extremeRAID's.

 I've had some uncomfortable experiences with hw raid controllers -
 i.e. VERY poor performance and exorbitant prices.

You're thinking of DPT :)
The Mylex stuff (at least the low-end AcceleRAIDs) is cheap and not too
slow.

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: speed and scaling

2000-07-10 Thread jlewis

On Mon, 10 Jul 2000, Seth Vidal wrote:

  arguably only 500gb per machine will be needed. I'd like to get the fastest
  possible access rates from a single machine to the data. Ideally 90MB/s+
  
  Is this vastly read-only or will write speed also be a factor?
 
 mostly read-only.

If it were me, I'd do big RAID5 arrays.  Sure, you have the data on tape,
but do you want to sit around while hundreds of GB are restored from tape?
RAID5 should give you the read speed of RAID0, and if you're not writing
much, the write penalty shouldn't be so bad.  If it were totally
read-only, you could mount ro, and save yourself considerable fsck time if
there's an improper shutdown.
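
A sketch of the read-only setup (device and mount point hypothetical):

# mount the array read-only
mount -o ro /dev/rd/c0d0p1 /data

# or permanently via /etc/fstab; a 0 in the last field skips boot-time fsck
/dev/rd/c0d0p1   /data   ext2   ro   0 0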

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: Dell PERC2/SC

2000-07-04 Thread jlewis

On Tue, 4 Jul 2000, Michael Ghens wrote:

 Also how big of a raid partition is possible? I have a data partition need
 of 36 gigs plus.

No problem.  They make individual drives bigger than that now.  You can
supposedly have ext2 partitions up to a few terabytes.  If you fill a
partition that big and the system crashes, fsck time might be a good time
to schedule your next vacation :)

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: Moving from software to hardware raid

2000-07-04 Thread jlewis

On Sat, 1 Jul 2000, Michael Ghens wrote:

 Just need to know how hard it is to move from software to hardware
 raid. Would I have to reformat the HD's? Any special considerations?

Almost certainly, you will need to move the data elsewhere, let the raid
card do its thing with the disks, partition, and format again.  
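
In outline, the move usually looks something like this (only a sketch; the
device names are hypothetical and depend on the RAID driver):

# 1. copy the data somewhere safe (another disk, tape, or over the net)
tar cf /somewhere/else/data.tar -C /data .
# 2. build the array in the card's BIOS utility, then repartition/format
fdisk /dev/rd/c0d0
mke2fs /dev/rd/c0d0p1
# 3. mount the new filesystem and restore
mount /dev/rd/c0d0p1 /data
tar xf /somewhere/else/data.tar -C /data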

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: Looking for drivers for DPT I2O card.

2000-06-25 Thread jlewis

On Sun, 25 Jun 2000, Alvin Starr wrote:

 I am trying to get a DPT I2O card running with RH 6.2. Does anybody have
 any pointers or suggestions?

SmartRAID V?  I think you'll have to find[1] their web site and download a
driver from them.  

[1] I don't know if it's a misconfiguration or a result of Adaptec buying
DPT, but www.dpt.com is not what it used to be.

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




2.2.16 and module versioning

2000-06-11 Thread jlewis

On Fri, 9 Jun 2000 [EMAIL PROTECTED] wrote:

 Has anyone tried building the DAC960 driver as a module with the 2.2.16
 kernel?  I've tried with 2.2.16 stock, and 2.2.16 + DAC960-2.2.6.  Either
 way, when the initrd loads and tries to insmod DAC960.o, I get:
 
 unresolved symbol waitqueue_lock
 
 stock 2.2.14 and 2.2.15 build with DAC960 as a module and do not have this
 problem.  

While looking into this, I've noticed another oddity I suspected was also
a bug in symbol versioning.

[from end of nm on DAC960.o in 2.2.15smp with modvers]
 U vsprintf_Rsmp_13d9cea7
 U wait_for_request_Rsmp_f8c36762
 U waitqueue_lock_Rsmp_fcdc212d

[from end of nm on DAC960.o in 2.2.16smp with modvers]
 U vsprintf_R13d9cea7
 U wait_for_request_R37d32d70
 U waitqueue_lock

This and some messages on the kernel list between Scott McDermott
[EMAIL PROTECTED] and Keith Owens [EMAIL PROTECTED] made it
apparent that the problem is not actually a kernel bug, but a kernel
building problem.  When I first compiled from this tree, I was building
for a non-smp system.  I've since done make menuconfig ; make dep ; make
clean, make bzImage modules several times switching back and forth between
SMP and non SMP.  Apparently you can't do this as module versioning
doesn't get regenerated by these steps.  I haven't read the README for
some time (thought I knew what I was doing :), but maybe something should
be added to it cautioning about what happens when you change the value of
CONFIG_SMP and rebuild the kernel and modules.

So, in short, it's not a bug in the kernel code, and not new to 2.2.16.
It's just coincidence that I first ran into it while upgrading to 2.2.16.
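
One way to guarantee a consistent rebuild after flipping CONFIG_SMP is to
start from a pristine tree (a sketch; note that mrproper deletes .config,
so save it first):

cp .config /tmp/config.saved    # mrproper wipes the config
make mrproper                   # removes all generated files, incl. modversions
cp /tmp/config.saved .config
make oldconfig                  # re-answer any changed options (SMP etc.)
make dep clean bzImage modules modules_install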

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_








DAC960 + 2.2.16

2000-06-09 Thread jlewis

Has anyone tried building the DAC960 driver as a module with the 2.2.16
kernel?  I've tried with 2.2.16 stock, and 2.2.16 + DAC960-2.2.6.  Either
way, when the initrd loads and tries to insmod DAC960.o, I get:

unresolved symbol waitqueue_lock

stock 2.2.14 and 2.2.15 build with DAC960 as a module and do not have this
problem.  

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_






Re: HP Netserver LPr + Mylex

2000-05-24 Thread jlewis

On Wed, 24 May 2000, Chris Mauritz wrote:

 Have you SEEN one of these?  They are *extremely* deep and are a supreme
 pain in the ass to deal with unless you have 4 poster cabinets instead
 of racks.

Yeah...most 2U cases are pretty deep.  I can arrange to have 4 post open
cabinet/racks in the places I'd put them.  Where rack space is expensive,
2U cases are attractive, even if they're deep.


--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: HP Netserver LPr + Mylex

2000-05-24 Thread jlewis

On Wed, 24 May 2000, Chris Mauritz wrote:

 Have you SEEN one of these?  They are *extremely* deep and are a supreme
 pain in the ass to deal with unless you have 4 poster cabinets instead
 of racks.

BTW...does this mean you actually have some?  I'm still trying to figure
out just how much it'll cost to turn a base LPr into a server (mylex card,
2 drives, 2 dimms) and the last unknown is the hot swap drive brackets.
The reseller I'm going through thinks he found the right part number...at
$165 each!  That's totally ridiculous and probably a deal breaker since
even with the 2 for 1 promo, the price comes out to just about what it
would cost to build from scratch from parts.

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: HP Netserver LPr + Mylex

2000-05-24 Thread jlewis

On Wed, 24 May 2000, Chris Mauritz wrote:

 For the cost of the LPr, you're probably better off going to someplace
 like Penguin Computing and have them roll something for you that's
 known to work.  It will likely be a bit cheaper too.

The deal with the LPr's though is there's a 2 for 1 promo.  At the 2 for 1
pricing (PIII-550 with 64mb is about $2750 IIRC...so $1375 each...but you
have to add more memory, disks, and in my case a RAID card) it's pretty
close to the price I'd pay to buy parts from a distributor like Tech Data.  
Vendors like Penguin or anyone else I'm aware of can't beat this kind of
price.  Based on current prices (if I could actually find Mylex
cards...which I can't) I can take one of these LPr's add an Acceleraid
card, 256mb of good memory, 2 IBM 9gb drives, and have spent about $2500
(not counting the hot swap brackets which may jack it up as much as $330
if the info I got today is accurate).  A similar box from Penguin is $2765
before adding a RAID card and doesn't appear to have hot-swap.  Add a RAID
card, and it's about $3200.  Even with the overpriced brackets, the LPr is
cheaper.  The other advantage to the LPr over buying parts is fewer parts
to hunt down and put together.  Parts shortages over the past few months
from Intel and Mylex have made it a PITA to build systems from parts.

 Personally, I found an outfit that has a 1 RU enclosure that has 2
 swappable drive sleds (they do this by using an integrated floppy/CD
 device to save a bay).  They're designed for IDE disks.  I've been

Some of our colos have 1U cases with 2 hot-swap SCA SCSI drives.  I don't
think you can squeeze a RAID card into any of these cases though.


--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




HP Netserver LPr + Mylex

2000-05-22 Thread jlewis

Has anyone tried installing a Mylex (either acceleraid or extremeraid)
card in one of the HP Netserver LPr 2U server systems?  HP is currently
running a 2 for 1 special on them, which just barely makes the price
attractive...but only if I can stick a RAID controller in and have it
utilize the 2 hot swap bays.

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_http://www.lewis.org/~jlewis/pgp for PGP public key__




Enlight gear?

2000-04-03 Thread jlewis

Supply problems with Intel server gear forced us to hunt around for
alternative sources for hardware, and in our searching I found the Enlight
8950 chassis and Enlight EN-8700 hot swap drive array module.  This looks
roughly comparable to an AstorII chassis, with the benefit of having a
rackmount kit available and redundant power as an option.  Has anyone
tried one of these out?  I'm considering the 8950 with 8700 paired with an
Intel N440BX and Mylex Acceleraid 250 (will run Red Hat 6.2 doing HW RAID1
or RAID5...and perhaps even an NT server for our accounting people).

I found that Micron is selling this case/array with the somewhat more
expensive L440GX+ board...so I'm assuming it must work and not be too bad.

--
 Jon Lewis *[EMAIL PROTECTED]*|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_http://www.lewis.org/~jlewis/pgp for PGP public key__




Re: Benchmark 1 [Mylex DAC960PG / 2.2.12-20 / P3]

2000-03-02 Thread jlewis

On Thu, 2 Mar 2000, Christian Robottom Reis wrote:

 How about running tiotest on them so we can have a look at some real
 numbers - bonnie just isn't very meaningful. If you can run tiobench with
 --numruns 5 or something close and a decent size (1024 looks fine) it's
 more meaningful.

I think the bonnie test at least tells me what max throughput of the
drives and controller's ability to do RAID-5 are.  I'll be happy to run
other benchmarks though.  Where can I find tiotest?  Searches on
google/altavista/freshmeat turned up nothing.

BTW...that system that had 1 reset on every drive last night now has 2 on
every drive.

0:0  Vendor: IBM   Model: DNES-309170Y  Revision: SA30
 Serial Number: AJGN8615
 Disk Status: Online, 17915904 blocks, 2 resets

The RELEASE_NOTES.DAC960 mention that this isn't necessarily a problem and
that it may happen from time to time when the system is under heavy load.
This system is under no load.  It's sitting here doing nothing.  Even the
bonnie results I posted last night were from a run several weeks (and
reboots) ago.  I know the notes say that kernel messages on resets are
disabled to keep people from worrying, but it would be nice if I had some
way of knowing when these resets were happening.  I guess I'll have to
beat the crap out of it with bonnie, tiotest, and some disk exercising
scripts and see if the reset frequency increases.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



RE: Benchmark 1 [Mylex DAC960PG / 2.2.12-20 / P3]

2000-03-02 Thread jlewis

On Thu, 2 Mar 2000, Gregory Leblanc wrote:

 Perhaps they got reset when syslog cycled at midnight, or something.  :)
 Try comparing the number of resets to the number of reads/writes, more like
 you would for ethernet collisions.  I think that resets will only indicate a
 problem when the number of them compared with drive activity is fairly high.
 BTW, where did you pull that info from, it doesn't show anywhere under proc
 for me.  

The system really is doing pretty much nothing.  There's very little
syslog data and it's not gzipping the files, so logrotate just has to
rename them and send syslogd a HUP.

Anyway...it took around an hour...but here's the tiobench numbers:

# tiobench.pl --numruns 5 --size 1024
Size is MB, BlkSz is Bytes, Read and Write are MB/sec, Seeks are Seeks/sec

  Dir   Size   BlkSz  Thr#  Read (CPU%)    Write (CPU%)   Seeks (CPU%)
 ----- ------ ------- ----- -------------- -------------- --------------
   .    1024   4096     1   25.6001 12.0%  8.64345 7.08%   159.776 0.48%
   .    1024   4096     2   26.0927 12.3%  8.63389 7.11%   199.146 0.64%-W
   .    1024   4096     4   25.5174 12.0%  8.64470 7.19%   236.201 0.75%-W
   .    1024   4096     8   25.1011 12.1%  8.61819 7.25%   265.246 0.92%W

It looks like tiobench has some screen formatting issues (the -W's and W)
above at the end of the CPU column.

For the record, this is an Intel N440BX in an AstorII chassis, 256mb ECC 
memory, PIII-500, AcceleRAID 250/8mb, 5 9gb IBM DNES-309170Y drives in a
RAID-5 array giving about 34gb of space run on a 30gb partition formatted
with 4kb blocks.
 
Strangely enough, these numbers are very similar to the bonnie block IO
results I posted earlier:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
 1024  6766 85.3  9150  7.6  6452 11.9  7840 94.8 26361 13.0 268.0  2.5


--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Benchmark 1 [Mylex DAC960PG / 2.2.12-20 / P3]

2000-03-02 Thread jlewis

On Thu, 2 Mar 2000, Leonard N. Zubkoff wrote:

 An occasional reset is not a problem; it simply means a command timed
 out and the DAC960 firmware responds by resetting the bus and retrying
 all the pending commands.  In the case of the IBM drives, there is a
 mode page setting that controls how long a command is allowed to

Normally, I'd look for such things via the GUI scsi-config interface.  Is
there any way to access the mode pages for each disk while they're
connected to the Mylex controller, or would I need to hook each drive to a
traditional SCSI controller to do that?

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Benchmark 1 [Mylex DAC960PG / 2.2.12-20 / P3]

2000-03-01 Thread jlewis

On Wed, 1 Mar 2000, Leon Brouwers wrote:

 * DAC960 RAID Driver Version 2.2.4 of 23 August 1999 *
 Copyright 1998-1999 by Leonard N. Zubkoff [EMAIL PROTECTED]
 Configuring Mylex DAC1164P PCI RAID Controller
 0:1  Vendor: WDIGTLModel: WDE9150 ULTRA2Revision: 1.20
 0:2  Vendor: WDIGTLModel: WDE9150 ULTRA2Revision: 1.20
 0:3  Vendor: WDIGTLModel: WDE9150 ULTRA2Revision: 1.20
 
 In raid 5 configuration, machine has PIII-450 128 Mb/256Mb swap on 
 2.2.14 + mingo's raid patches
 
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
 Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
   256  5451 78.7 10035  8.5  4000  7.3  3975 55.3 18765 11.3 262.8  3.9


Those are some disappointing numbers.  I've got:

* DAC960 RAID Driver Version 2.2.4 of 23 August 1999 *
Copyright 1998-1999 by Leonard N. Zubkoff [EMAIL PROTECTED]
Configuring Mylex DAC960PTL1 PCI RAID Controller
  Firmware Version: 4.07-0-29, Channels: 1, Memory Size: 8MB
... 5 of the following drives in a u2w hot swap system
0:0  Vendor: IBM   Model: DNES-309170Y  Revision: SA30
 Serial Number: AJGN8615
 Disk Status: Online, 17915904 blocks, 1 resets
  Logical Drives:
/dev/rd/c0d0: RAID-5, Online, 71663616 blocks, Write Thru

PIII-500, 256mb, AcceleRAID 250

              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
 1024  6766 85.3  9150  7.6  6452 11.9  7840 94.8 26361 13.0 268.0  2.5

I wonder how much better your numbers would be with more drives on
the DAC1164P.  I just noticed the "1 resets" above (on all the drives).
This box isn't in production yet.  I'll have to look into that and see
if there's a problem.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Mylex Acceleraid 200/250 speeds

2000-01-21 Thread jlewis

I finally built a RAID5 system this week using an Acceleraid 250 and an
Intel AstorII chassis with 5 IBM 7200RPM LVD disks (no hot spare).  I
don't have the bonnie numbers handy, but I do have the below which came
from a somewhat similar system running an Acceleraid 200 with 2 of the IBM
drives in a RAID1 config.  In each system, system RAM is 256MB,
controller RAM is 8mb.

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
 1024  3723 48.1  4875  5.7  3387  7.6  7782 95.6 16854 11.1 143.2  1.2

IIRC, the RAID5 numbers were considerably better...about 2x in the 
output columns and about +10MB/s in the input columns, and a little more
than 2x in the seeks/s column.  Since there's less to mirroring than
RAID5, I'm a bit puzzled by the relatively lousy write performance on the 
RAID1.  The disks should be capable of much more.  The RAID5 system
is off right now...but here's the /proc/rd info from the RAID1.

* DAC960 RAID Driver Version 2.2.4 of 23 August 1999 *
Copyright 1998-1999 by Leonard N. Zubkoff [EMAIL PROTECTED]
Configuring Mylex DAC960PTL0 PCI RAID Controller
  Firmware Version: 4.07-0-29, Channels: 1, Memory Size: 8MB
  PCI Bus: 0, Device: 14, Function: 1, I/O Address: Unassigned
  PCI Address: 0xF480 mapped at 0xD0009000, IRQ Channel: 10
  Controller Queue Depth: 124, Maximum Blocks per Command: 128
  Driver Queue Depth: 123, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 128/32
  Physical Devices:
0:0  Vendor: IBM   Model: DNES-309170W  Revision: SA30
 Serial Number: AJ0D2975
 Disk Status: Online, 17915904 blocks
0:1  Vendor: IBM   Model: DNES-309170W  Revision: SA30
 Serial Number: AJ0D4664
 Disk Status: Online, 17915904 blocks
  Logical Drives:
/dev/rd/c0d0: RAID-1, Online, 17915904 blocks, Write Thru



--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Hardware RAID chips

2000-01-16 Thread jlewis

On 14 Jan 2000, Chris Good wrote:

   Much, much better - we've taken all the i960 cards out of our systems
 and replaced them with StrongARM-based ones.  Needless to say we shan't
 be buying any more i960 cards, and our supplier is talking about stopping
 shipping them as they perform so much worse.

Are you comparing the new high-end to the old high-end (older i960's), or
are you saying that even in the new stuff, the extremeraids just blow the
acceleraids away?  I'm about to put together a RAID5 on an Acceleraid 250.
I've got a few systems using 200's for mirroring and have been happy with
them...but mirroring isn't terribly CPU intensive, esp. compared to RAID5.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: GPLed hardware RAID card driver?

1999-12-20 Thread jlewis

On Sun, 19 Dec 1999, jeremy smith wrote:

 I am new to RAID and I want to use a hardware solution for a very disk 
 intensive application on my Dual P2. I have found several vendors who carry 
 Linux drivers for their Ultra2 products: DPT Decade PM1554U2, Mylex 
 AcceleRAID 150/250, ICP Vortex GDT6518RD, Syred Cruiser. However, none of the
 drivers seem to be offered in the kernel distribution, but rather as binary 
 modules. Also, the drivers are only advertised as working with kernel 
 2.0/2.2. As I require kernel 2.3.30+ for 2GB filesize, I am skeptical to 

Since the Mylex and ICP RAID card drivers _are_ distributed with Linus's
kernel source, binary-only drivers aren't an issue with them, and they
should be included with the 2.3.x kernels since they've been in the
standard kernel source since late 2.0.x.
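
For reference, the config symbols involved should be something like the
following (names from memory; verify against your own kernel tree):

CONFIG_BLK_DEV_DAC960=y   # Mylex DAC960/AcceleRAID/eXtremeRAID driver
CONFIG_SCSI_GDTH=y        # ICP Vortex GDT series driver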

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Linux Raid5 and NFS, how stable and reliable?

1999-12-06 Thread jlewis

On Mon, 6 Dec 1999, Dong Hu wrote:

 We need about 200G hard disk space for a software development
 environment. I am considering of using linux-raid5 configuration.
 We will use nfs to share the disk on 100 Ethernet Lan.
 
 My concern is, how stable and reliable are Linux NFS and raid5?
 Any experience of using this in a production environment?
 The speed is not so critical.

I've had very little trouble with the old user-space NFS code, but
recently I set up a Red Hat 6.1 system using knfsd and have been having
some trouble in a setup where the 6.1 box is the server and several older
(2.0.x kernel) Linux systems are using autofs to mount it.

The server has been logging lots of:

RPC: rpcauth_gc_credcache looping!
RPC: rpcauth_gc_credcache looping!
RPC: rpcauth_gc_credcache looping!

and at least once (it's only been in service a few days) something screwy
happened with the rpc progs and portmap.  Basically, all the rpc progs
became unregistered with portmap, and autofs mounts were failing.  I'm not
sure yet if autofs mounting/umounting frequently was the cause of the
problem, but as a workaround, I've configured the autofs clients to not
have a timeout (so they won't umount the server after some inactivity
period).

This is kind of off-topic for linux-raid, so to get back on-topic, the
server mentioned above is using a Mylex Acceleraid 200, which seems to be
working perfectly doing RAID1.
 
--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: /dev/md/0 instead of /dev/md0 ?

1999-12-04 Thread jlewis

On Sat, 4 Dec 1999 [EMAIL PROTECTED] wrote:

 not sure, but rumor has it that quota code is problematic with devices not
 directly in the /dev directory.
 
 as far as raid is concerned, the device should be able to be anywhere, so
 long as it has the right major and minor node numbers...right?

I just ran into this.  rpc.rquotad wants to find the devices your
partitions live on in /dev/.  Using a mylex RAID card, the devices will be
in /dev/rd/.  A quick patch to rquota_server.c is how I fixed it.  It may
also have been fixable by moving the devices, or creating links.

--- rquota_server.c~	Sun Nov 30 10:07:41 1997
+++ rquota_server.c	Fri Dec  3 17:59:53 1999
@@ -35,6 +35,8 @@
 
 #ifdef ELM
 #define _PATH_DEV_DSK   "/dev/dsk/"
+#elif MYLEX
+#define _PATH_DEV_DSK  "/dev/rd/"
 #else
 #define _PATH_DEV_DSK   "/dev/"
 #endif
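
For the new branch to be compiled in, MYLEX has to be defined at build
time; something along these lines, depending on how the quota tools'
Makefile is set up (the exact invocation is a guess):

cc -O2 -DMYLEX -c rquota_server.c
# or, if the Makefile honors CFLAGS:
make CFLAGS="-O2 -DMYLEX"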


--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_____http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Best partition scheme on HW RAID-5 ?

1999-11-27 Thread jlewis

On Sat, 27 Nov 1999, Jakob Østergaard wrote:

 Actually, nothing should stop you from putting the OS on software RAID either.

Doesn't at least /boot need to be on at most a RAID1 when using software
RAID...since boot loaders (like lilo) generally don't know how to read
from RAID5?
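
The usual RAID1 arrangement I've seen described is to install lilo against
each mirror member so either disk can boot (a sketch; device names are
hypothetical and details vary with the lilo version):

# /etc/lilo.conf fragment; run lilo once with each boot= line
boot=/dev/sda           # repeat with boot=/dev/sdb for the second disk
image=/boot/vmlinuz
    label=linux
    root=/dev/md0       # the RAID1 root device
    read-only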

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Best partition scheme on HW RAID-5 ?

1999-11-26 Thread jlewis

On Fri, 26 Nov 1999 [EMAIL PROTECTED] wrote:

 The Server:
  Dell Poweredge 2300, P3-500, 256 MB RAM, PERC-II SC RAID controller, Adaptec SCSI 
controller
  4 x 9 GB LVD SCSI drives
  DLT tape unit
  Redhat 6.1 - but I'm open to suggestions on another distro
 
 From what I've read, putting the OS on RAID is a no-no, so the basic
 scheme I've worked out is:
 
 Disk 1: 4 GB partition : OS (this'll actually be several partitions for
 /, /usr, and swap), no RAID
 Disk 1: 5 GB partition : Non-critical data, no RAID
 
 Disk 2-4 : 9 GB partitions - critical data, RAID 5

Where did you read that putting the OS on RAID was a no-no?  I think you
got some bad info, or maybe remember it incorrectly.  If you're using
hardware RAID, there's nothing stopping you from booting from (and having
the OS on) RAID volumes.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Mylex and drive activity LED

1999-11-10 Thread jlewis

I setup a system today with a Mylex Acceleraid 200, Intel T440BX board,
and 2 IBM 9gb drives.  The T440BX has a built in Symbios 875, which the
Acceleraid 200 takes over.  When I connect a drive activity LED to the
system board's drive activity jumper, the LED is off until after the Mylex
card's BIOS loads.  After that, the LED is solid on until the system is
shut off.  Just halting the system, but leaving it powered up, leaves the
LED on.

Is the behavior normal?  The Mylex card has a drive activity jumper
(according to the manual), but on the cards I have, that jumper does not
exist.  Is there any way (other than connecting directly to the disks'
activity jumpers) to get a proper drive activity LED in this setup?

BTW...installing Red Hat 6.1 was a PITA.  First, it (disk druid) tried
making /var on c0d0p8.  After I partitioned it manually, I got it to
install, but then realized I hadn't gotten to specify stride...so I'll
probably be starting over.
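
For the record, stride is the array chunk size divided by the filesystem
block size; with the controller's 64KB stripe and 4kb ext2 blocks, that
works out to something like (device hypothetical):

mke2fs -b 4096 -R stride=16 /dev/rd/c0d0p5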

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Offtopic: LVD U2W drives on UW SCSI-3 controller

1999-11-08 Thread jlewis

On Mon, 8 Nov 1999 [EMAIL PROTECTED] wrote:

   Sorry about the somewhat offtopic question, but I have a supplier
 of mine trying to tell me that LVD U2W drives will work on my Symbios
 53C875 UW SCSI-3 controller. 
 
   Will LVD U2W drives work on a UW controller? I thought that LVD
 was quite different than other forms of scsi.

LVD U2W scsi is backwards compatible with UW.  It's probably backwards
compatible with fast-wide as well.  I have systems running with IBM LVD
drives and AHA-2940UW cards.  You can't mix LVD and UW drives on the same
chain and do any better than UW though.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: How many Inodes?

1999-10-26 Thread jlewis

On Tue, 26 Oct 1999, Kent Nilsen wrote:

 The hardware is a dual PIII500, 256Mb RAM, 3x50Gb Barracudas on a Mylex 
 AccelRaid250. OS: Mandrake 6.1 kernel 2.2.13.

With all those tiny files and a huge amount of disk space, be aware that
the latest version of mke2fs seems to "decide on a block size" based on
partition size rather than default to 1kb blocks.  I noticed this when Red
Hat 6.1 gave me a 2.5gb /var partition with 4kb blocks.
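
The old behaviour can still be forced from the command line; for a
filesystem full of tiny files, something like this (hypothetical device;
-i sets bytes-per-inode, which you'd also want small):

mke2fs -b 1024 -i 2048 /dev/rd/c0d0p6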


--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



RE: DPT Linux RAID.

1999-10-26 Thread jlewis

On Tue, 26 Oct 1999, G.W. Wettstein wrote:

 The following is the .config option that needs to be set for reliable
 operation with the DPT cards:
 
   CONFIG_SCSI_EATA=y
 
 We have not found the driver configured with the following define to be stable:
 
   CONFIG_SCSI_EATA_DMA
 
 The instability is especially profound in an SMP environment.  Under
 any kind of load there will be crashes and hangs.

Red Hat defaults to using CONFIG_SCSI_EATA_DMA if you have a PM2144UW.  I
have a client with 2 servers using PM2144UW's with CONFIG_SCSI_EATA_DMA
that have been rock stable.  Neither is SMP.  One is a mail/web/dns
server.  The other is backup mail/dns and squid.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: How many Inodes?

1999-10-26 Thread jlewis

On Tue, 26 Oct 1999, Marc Mutz wrote:

  With all those tiny files and a huge amount of disk space, be aware that
  the latest version of mke2fs seems to "decide on a block size" based on
  partition size rather than default to 1kb blocks.  I noticed this when Red
  Hat 6.1 gave me a 2.5gb /var partition with 4kb blocks.
 
 This is of course desirable. 4k is the page size on x86, which makes the
 unified cache in 2.3 (2.4) much faster on such fss. I just yesterday
 converted all my fss to 4k block size. /var grew from 84M to 91M.

That of course depends on what your plans for the partition are.  If you
plan on having large numbers of small files, 4kb blocks can eat a
surprising amount of your disk space.


--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: How many Inodes?

1999-10-26 Thread jlewis

On Tue, 26 Oct 1999, Kent Nilsen wrote:

  With all those tiny files and a huge amount of disk space, be aware that
  the latest version of mke2fs seems to "decide on a block size" based on
  partition size rather than default to 1kb blocks.  I noticed this when Red
  Hat 6.1 gave me a 2.5gb /var partition with 4kb blocks.
 
 Is that only during setup, or even if I say mke2fs -b 4096 -m 5 -i 8192 -R 
 stride=128 manually?

It's just during setup and whenever you don't specify a block size.  It's
just that the default behaviour has changed from 1kb to "it depends on the
partition size".

 What should I set the -b size to anyways with all these small files? I believe I 
 can save a lot of space by reducing it to 1024 or less, but how will that impact 
 performance? Should I adjust my stride= or i= size to match such a change? 
 Should I change it anyways? :)

I suppose it depends on which you're more interested in...performance or
squeezing as much space as you can out of the drive.  (With 4kb blocks, a
100-byte file still occupies a full 4kb block, so a million tiny files
waste close to 3GB more than they would with 1kb blocks.)  Drives are
getting bigger/cheaper, so maybe 1kb on a big partition doesn't make sense
anymore except in special circumstances.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: FW: Dream RAID System (fwd)

1999-10-12 Thread jlewis

On Tue, 12 Oct 1999, Christopher E. Brown wrote:

               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
 Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
 fenris   1024 15716 96.7 45169 70.2 17574 45.8 18524 71.5 47481 38.5 509.5 13.2
 
 
   System is a dual Xeon 500/1mb w/ 512MB main memory.  It has 3
 SCSI busses, one UltraWide (DLT tape), and 2 Ultra2 LVD units (one
 used, one for expansion).  There are 7 Seagate 18.2G Cheetah drives
 installed; the last 6 are a 0-spare RAID5 array running left-symmetric
 parity and a chunk of 128.  The filesystem was created as follows.
 parity and a chunk of 128.  The filesystem was created as follows.

That's no surprise.  I already knew Linux software RAID5 can perform
extremely well given enough CPU horsepower.  The question was, if I want
hardware RAID5, can I have the benefits[1] it provides and not have crappy
i/o performance.

[1]
- reliable data storage without having to worry about kernel versions /
  raid driver versions and patches
- automatic rebuilds using hot spares without having to roll my own
  scripts to do so
- ability to boot from a RAID5 (though AFAIK, lilo can boot from software
  mirrored partitions now, no?)
- not being forced to go SMP just to give software RAID a CPU to play
  on...what if we find our system isn't SMP stable?  That may not be
  likely, especially with 2.2.x or 2.4.x, but it was still a serious
  issue with 2.0.x.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: FW: Dream RAID System (fwd)

1999-10-12 Thread jlewis

On Tue, 12 Oct 1999, Marc Merlin wrote:

  ability to boot from a RAID5 (though AFAIK, lilo can boot from software
  mirrored partitions now, no?)
 
 Correct.
 Note that depending  on the Raid card  and the distribution, you  may not be
 able  to boot  on it  without bootstrapping  your distribution  (i.e. debian
 won't work with a DAC960 if / in on the RAID array. You need to install some
 other distribution, use it to copy an existing debian system, and change the
 kernel on the debian partition)

Though, how does the PC handle things booting up if the drive on
controller 0, ID 0 has gone bad?  I expect this is another case where
hardware RAID shines, being able to boot regardless of which drive died.

If we go with Red Hat, the DAC960's should be supported starting with 6.0
(5.2 if we get a custom boot image, though we wouldn't likely install that
at this point.)

 For me, one big advantage of hardware raid is disk shelves with hotswapping.
 Doing this in a PC case is awkward and usually a hack

There's nothing (that I know of) stopping anyone from buying Intel
Astor/AstorII/Cabrillo-C server chassis and making use of their hot-swap
bays without hardware RAID.  The Astor chassis are surprisingly cheap too.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: RAID controllers under Linux...

1999-10-12 Thread jlewis

On Tue, 12 Oct 1999, Dave Rynne wrote:

 (I haven't mailed them about this yet though). I've seen reports
 that Mylex DAC960s are very scarce these days so I've more or less
 ruled these out as I need guaranteed availability.

Scarce just in your area or in general?  I think the old DAC960 line was
discontinued in favor of the Acceleraid and ExtremeRaid lines.  The
Acceleraids are (AFAIK) DAC960-based.  The ExtremeRaids are StrongARM-based
and are said to perform pretty well.  They sell (in the US) for around $400
for the Acceleraid 200 up to over $1000 for the entry ExtremeRaids.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: FW: Dream RAID System (fwd)

1999-10-12 Thread jlewis

On Wed, 13 Oct 1999, Jakob Østergaard wrote:

 With IDE drives, you can (at least on a few newer BIOSes where I've tried it)
 set your drives to ``autodetect'' so that the BIOS won't complain if one drive
 is missing.
 
 Then, with identical /boot partitions on both drives, and the system booting
 on RAID-1, you system will boot no matter which drive you pull out of it.
 No need to reconfigure the BIOS or anything else.

You can configure better SCSI cards to boot from any or the lowest
available SCSI ID.  I don't know what they do though if ID 0 exists but
isn't readable.  My guess would be hang.  Most SCSI drive failures I've
encountered have been drives that are able to be detected but can't
reliably (or at all in some cases) be read.

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: FW: Dream RAID System

1999-10-06 Thread jlewis

On Wed, 6 Oct 1999, Kenneth Cornetet wrote:

 Hardware RAID does not necessarily preclude speed. I put a Mylex extremeraid
 1164 (32MB cache, 2 channels) in a dual 450MHz P3 connected to 4 18GB 10K
 RPM seagate wide low voltage differential SCSI disks in a raid 5 config and
 got about 22 MB/sec reads and writes as reported by bonnie.

Were you really getting 22MB/s writes (just as fast as reads)?  Was the
bonnie test size at least 2x physical memory?  Those are encouraging
numbers.  I'm looking at building a big file server using more drives and
an extremeraid.  I was worried about write performance and wondering if
I'd have to do RAID10 rather than RAID5.
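
For reference, bonnie's test size is set with -s (in MB); on a box with
512MB of RAM, something like the following keeps the page cache from
inflating the numbers (a sketch; the scratch directory is hypothetical):

bonnie -d /raid/tmp -s 1024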

--
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Hardware RAID Solutions

1999-05-31 Thread jlewis

On Sat, 29 May 1999, Dave Wreski wrote:

 I thought someone might know how the performance of the software raid
 support is versus (the typical) hardware raid?
 
 Hmm.. Maybe there's even a generalization that can be made?

Search around a bit.  Everything I remember reading is that software
RAID-5 (in a beefy system...say dual PII-450 or similar) runs circles
around most hardware RAID-5 controllers.  Hardware RAID gives you things
like the ability to easily boot from a RAID device, and automated hot
spares and rebuilds.  As software RAID improves and gets easier to manage,
there may be little reason to go with hardware RAID.

don't waste your cpu, crack rc5...www.distributed.net team enzo---
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Hardware RAID Solutions

1999-05-28 Thread jlewis

On Tue, 25 May 1999, Bobby Hitt wrote:

 DPT
 ICP Vortex
 
 I just returned a DPT 2044UW controller and caching module, performance was
 AWFUL. Before I buy a ICP Vortex controller, I wanted to see if anyone knows
 about any other alternatives.

There's Mylex.  AFAIK, of the three, DPT is at the bottom in performance.
The 2044 is far from the top of their line though.

don't waste your cpu, crack rc5...www.distributed.net team enzo---
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__



Re: Raid problems.

1999-05-16 Thread jlewis

On Fri, 14 May 1999, Robert (Drew) Norman wrote:

 I have an IBM 9GB drive split into 3 partitions of equal size.
 
 raiddev /dev/md0
 raid-level      0
 nr-raid-disks   3
 nr-spare-disks  0
 chunk-size      16
 
 device          /dev/sdb1
 raid-disk       0
 device          /dev/sdb2
 raid-disk       1
 device          /dev/sdb3
 raid-disk       2

Why do you want to partition a disk and then turn those partitions back
into a single "disk"?  Striping in this way is going to slow you down.


don't waste your cpu, crack rc5...www.distributed.net team enzo---
 Jon Lewis *[EMAIL PROTECTED]*|  Spammers will be winnuked or 
 System Administrator|  nestea'd...whatever it takes
 Atlantic Net|  to get the job done.
_http://www.lewis.org/~jlewis/pgp for PGP public key__