On Sun, Jul 09, 2000 at 11:04:20PM -0700, Gregory Leblanc wrote:
What's the current status of RAID on SPARC? I haven't had a chance to keep
up very much, as I wasn't using RAID on SPARCs. I'm about to build a
mirrored system here, and I'd like to make sure that I'm not going to get
hosed.
Software RAID will NOT save you from power failure. It will save you from
disk/controller/cable failure only! Do NOT lull yourself into a false sense of
security.
If you have people who can't handle Unix and powering down, then you need a
UPS, and to lock your box in a closet.
linux software raid
I have kernel-2.2.16-3.i386.rpm.
I have lilo 21.4-3
I have /etc/raidtab =
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/sda1
    failed-disk
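For comparison, here is a sketch of how the rest of a two-disk raidtab for this mirror usually looks. The second device name (/dev/sdb1) and the disk indices are assumptions on my part; failed-disk takes the index of the member you are building the degraded mirror without (the usual trick when migrating a live system onto RAID-1):

```
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/sda1
    failed-disk           0
    device                /dev/sdb1
    raid-disk             1
```

After copying the system onto the degraded array, you change failed-disk back to raid-disk and hot-add the first partition so it resyncs.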
/stage/etc/lilo.conf =
boot = /dev/md0     <-- error is here: the boot device must be a real disk
                        (see ftp://ftp.bizsystems.net/pub/raid/Boot+Root+Raid+LILO.html
                        for examples)
delay = 5
vga = normal
root = /dev/md0
image = /boot/bzImage
    label = linux
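Following that note, a minimal corrected lilo.conf might look like the sketch below. The disk name /dev/sda is an assumption; the point is only that boot= names a physical disk whose MBR LILO can write, while root= can still live on the mirror:

```
boot = /dev/sda          # a real disk, not the md device
delay = 5
vga = normal
root = /dev/md0          # root filesystem stays on the mirror
image = /boot/bzImage
    label = linux
```

You would also want to run lilo against the second mirror half (or copy the MBR over) so the box still boots if the first disk dies.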
when I run lilo I get the following:
Seth Vidal wrote:
[monster data set description snipped]
So we're considering the following:
Dual Processor P3 something.
~1GB RAM.
multiple 75GB Ultra 160 drives - probably IBM's 10kRPM drives
Adaptec's best 160 controller that is supported by Linux.
[snip]
So my questions are these:
-Original Message-
From: Seth Vidal [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 10, 2000 12:23 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: speed and scaling
So we're considering the following:
Dual Processor P3 something.
~1GB RAM.
multiple 75GB Ultra 160 drives
If you can afford it and this is for real work, you may want to
consider something like a Network Appliance Filer. It will be
a lot more robust and quite a bit faster than rolling your own
array. The downside is they are quite expensive. I believe the
folks at Raidzone make a "poor man's"
arguably only 500gb per machine will be needed. I'd like to get the fastest
possible access rates from a single machine to the data. Ideally 90MB/s+
Is this vastly read-only or will write speed also be a factor?
mostly read-only.
-sv
I have not used Adaptec 160 cards, but I have found almost everything else they
make to be very finicky about cabling and termination, and I have had hard
drives give trouble on Adaptec cards that worked fine on other cards.
My money stays with an LSI/Symbios/NCR-based card. Tekram is a good vendor, and
You will definitely need that 64-bit PCI bus. You might want to watch out
for your memory bandwidth as well (i.e. get something with interleaved
memory). A standard PC gets only about 800MB/s peak to main memory.
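A back-of-the-envelope calculation of why the bus matters here (nominal PCI clocks assumed, and assuming the data crosses the bus twice, disk to memory and memory to NIC):

```python
# Rough PCI bus-bandwidth arithmetic behind the advice above.
PCI_32_33 = 4 * 33   # MB/s peak: 32-bit bus at 33 MHz -> ~132 MB/s
PCI_64_66 = 8 * 66   # MB/s peak: 64-bit bus at 66 MHz -> ~528 MB/s

stream = 90          # MB/s target rate from disk out to the network

# The payload crosses the bus twice: disk -> memory, then memory -> NIC.
bus_load = 2 * stream  # 180 MB/s of bus traffic

print(PCI_32_33, PCI_64_66, bus_load)
```

A plain 32-bit/33MHz bus (~132 MB/s peak, less in practice) cannot carry 180 MB/s of combined traffic; a 64-bit/66MHz bus has comfortable headroom.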
I haven't had very good experiences with the Adaptec cards either.
If you can take the performance hit, the Mylex ExtremeRAID cards
come in a 3-channel variety. You could then split your array
into 3 chunks of 3-4 disks each and use hardware RAID instead of
the software raidtools.
Cheers,
FWIW, you are going to have trouble pushing anywhere near 90MB/s out of a
gigabit ethernet card, at least under 2.2. I don't have any experience w/
2.4 yet.
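Some worked numbers on why 90MB/s over gigabit is near the edge even before the 2.2 stack gets involved (nominal figures; standard 1500-byte frames, no jumbo frames or TCP options assumed):

```python
# Best-case TCP payload rate over gigabit Ethernet, 1500-byte MTU.
wire_rate = 1000 / 8  # 125 MB/s raw ceiling for 1 Gb/s

mtu = 1500
payload = mtu - 20 - 20                 # 1460 B TCP payload (IP + TCP headers)
frame_on_wire = mtu + 14 + 4 + 8 + 12   # + MAC hdr, FCS, preamble, gap = 1538 B

tcp_ceiling = wire_rate * payload / frame_on_wire  # ~118 MB/s best case
print(round(tcp_ceiling, 1))
```

90 MB/s is roughly three quarters of that theoretical ceiling, which is a lot to ask of a 2.2-era network stack and a single card.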
On Mon, Jul 10, 2000 at 05:40:54PM -0400, Seth Vidal wrote:
FWIW, you are going to have trouble pushing anywhere near 90MB/s out of a
gigabit ethernet card, at least under 2.2. I don't have any experience w/
2.4 yet.
I hadn't planned on implementing this under 2.2 - I realize the
I'd try an alpha machine, with 66MHz-64bit PCI bus, and interleaved
memory access, to improve memory bandwidth. It costs around $1
with 512MB of RAM, see SWT (or STW) or Microway. This cost is
small compared to the disks.
I've never had trouble with Adaptec cards, if you terminate things
There are some (pre-)test
versions by Linus and Alan Cox out awaiting feedback from testers, but
nothing solid or consistent yet. Be careful when using these for
serious work. Newer != Better
This isn't being planned for the next few weeks - it's 2-6 month planning
that I'm doing. So I'm
I'd try an alpha machine, with 66MHz-64bit PCI bus, and interleaved
memory access, to improve memory bandwidth. It costs around $1
with 512MB of RAM, see SWT (or STW) or Microway. This cost is
small compared to the disks.
The alpha comes with other headaches I'd rather not involve myself
From [EMAIL PROTECTED] Mon Jul 10 17:53:34 2000
If you can take the performance hit, the Mylex ExtremeRAID cards
come in a 3-channel variety. You could then split your array
into 3 chunks of 3-4 disks each and use hardware RAID instead of
the software raidtools.
I've not had good
From [EMAIL PROTECTED] Mon Jul 10 18:43:11 2000
There are some (pre-)test
versions by Linus and Alan Cox out awaiting feedback from testers, but
nothing solid or consistent yet. Be careful when using these for
serious work. Newer != Better
This isn't being planned for the next few
On Mon, 10 Jul 2000, Seth Vidal wrote:
What I was thinking was a good machine with a 64-bit PCI bus and/or
multiple buses.
And A LOT of external enclosures.
Multiple Mylex extremeRAID's.
I've had some uncomfortable experiences with hardware RAID controllers -
i.e. VERY poor performance and
On Mon, 10 Jul 2000, Seth Vidal wrote:
arguably only 500gb per machine will be needed. I'd like to get the fastest
possible access rates from a single machine to the data. Ideally 90MB/s+
Is this vastly read-only or will write speed also be a factor?
mostly read-only.
If it were me,