On Tue, 19 Apr 2005, Molle Bestefich wrote:
David Greaves wrote:
Does everyone really type cat /proc/mdstat from time to time??
How clumsy...
And yes, I do :)
You're not alone..
and it's still a lot better than some of the hw raid monitoring tools
- more data
- more
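The "cat /proc/mdstat from time to time" habit above is easy to script. A minimal sketch that flags a degraded array from the member-status brackets; the sample input is a here-doc so it runs anywhere, and on a live box you would feed it the real /proc/mdstat instead:

```shell
# Sketch: flag any md array whose member status shows a missing disk.
# On a real system: check_mdstat < /proc/mdstat
check_mdstat() {
    # a healthy member list reads [UU]; a failed member shows as _
    grep -q '\[U*_[U_]*\]' && echo DEGRADED || echo OK
}

check_mdstat <<'EOF'
md0 : active raid1 sdb1[1] sda1[0]
      1048576 blocks [2/1] [U_]
EOF
```

Dropped into cron, this beats remembering to look at the file by hand.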
On Wed, 20 Apr 2005, Guy wrote:
Well, I agree with KISS, but from the operator's point of view!
I want the failed disk to light a red LED.
I want the tray the disk is in to light a red LED.
I want the cabinet the tray is in to light a red LED.
I want the re-build to the spare to start.
I
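The wishlist above (red LEDs, automatic rebuild to spare) maps onto mdadm's monitor mode, which invokes an external program on each array event. A sketch of such a handler; the function name and messages are assumptions, and actually driving enclosure LEDs is hardware-specific:

```shell
# Hypothetical handler for mdadm's monitor mode. mdadm would call it as
#   handler <event> <md-device> [component-device]
# started with something like (check your mdadm version):
#   mdadm --monitor --scan --program=/usr/local/bin/raid-handler
handle_event() {
    event=$1; array=$2; dev=${3:-}
    case "$event" in
        Fail)           echo "ALERT: $dev failed in $array" ;;   # light the red LED here
        DegradedArray)  echo "ALERT: $array is running degraded" ;;
        RebuildStarted) echo "INFO: rebuild started on $array" ;;
        SpareActive)    echo "INFO: spare is now active in $array" ;;
        *)              echo "INFO: $event $array $dev" ;;
    esac
}

# simulate a disk failure notification
handle_event Fail /dev/md0 /dev/sdb1
```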
On Mon, 4 Apr 2005, Doug Ledford wrote:
Anyway, it might or might not hurt the drives to run them well below
their designed operating temperature, I don't have schematics and
materials lists in front of me to tell for sure.
ez enough to do ... its called specs on the various manufacturers
On Tue, 5 Apr 2005, Richard Scobie wrote:
http://www.hitachigst.com/hdd/technolo/drivetemp/drivetemp.htm
shows some of the factors you mention and for what it's worth Hitachi's
recommended operating range is 5 - 55 C for their 15K SCSI.
those specs are talking mostly about the SMART
hi ya raiders ..
we(they) have 14x 72GB scsi disks config'd as raid5,
( no hot spare .. )
- if 1 disk dies, no problem ... ez to recover
- my dumb question is,
- if 2 disks die at the same time, i
assume the entire raid5 is basically hosed
if it won't reassemble and
hi ya frank
On Fri, 1 Apr 2005, Frank Wittig wrote:
- my dumb question is,
- if 2 disks die at the same time, i
if 2 disks fail at the same time your data is lost.
if you have raid5 with 5 hot spares and a second disk dies, before a hot
spare is synced into the array (will be
On Fri, 1 Apr 2005, Andy Smith wrote:
This seems like an awful lot of disks to have in a raid 5 with no
hot spares, to me, but then I am fairly new to RAID issues so maybe
I am wrong.. but I would much rather have raid 10.
i'd say its overkill .. but thats what they have ..
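The raid5-vs-raid10 tradeoff for this box comes down to simple layout arithmetic. A back-of-envelope check using the 14x 72GB figures from the thread (capacity math only, a sketch):

```shell
# Usable capacity for the 14x 72GB array discussed above
DISKS=14; SIZE_GB=72
echo "raid5 usable:  $(( (DISKS - 1) * SIZE_GB )) GB"   # one disk's worth goes to parity
echo "raid10 usable: $(( (DISKS / 2) * SIZE_GB )) GB"   # half the disks mirror the other half
```

raid10 gives up a lot of space, but survives many more two-disk failure combinations than raid5.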
hi ya gordon
On Fri, 1 Apr 2005, Gordon Henderson wrote:
On Fri, 1 Apr 2005, Alvin Oga wrote:
- ambient temp should be 65F or less
and disk operating temp ( hddtemp ) should be 35 or less
Are we confusing F and C here?
65F was for normal server room environment
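Since the thread mixes Fahrenheit and Celsius, a quick integer-math conversion check (a sketch; the function name is an assumption):

```shell
# rough F-to-C conversion with shell integer arithmetic
f_to_c() { echo $(( ($1 - 32) * 5 / 9 )); }

echo "65F ambient is about $(f_to_c 65)C"   # ordinary server-room air
```

So a 65F room is roughly 18C, well inside Hitachi's 5-55C operating range; the "35 or less" drive temperature only makes sense as Celsius.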
On Tue, 22 Feb 2005, Jon Lewis wrote:
On Tue, 22 Feb 2005, Louis-David Mitterrand wrote:
I am considering getting a Sony SAIT 3 with 500G/1TB tapes, which seems
like a nice solution for backuping a whole server on a single tape.
Has anyone used that hardware and can comment on its
hi ya
On Tue, 22 Feb 2005, Jon Lewis wrote:
I should clarify, that's 80GB per tape...so 800GB native assumes you have
10 tapes in the unit.
yup...seen those puppies too ... too much headache for me
i keep wondering why people pay $150K for 1TB brandname tape subsystems ..
I wouldn't
hi ya clemen
On Wed, 19 Jan 2005, Clemens Schwaighofer wrote:
On 01/05/2005 11:44 PM, Alvin Oga wrote:
you're buying bad hardware from bad vendors
seriously, thats just wrong. Ever heard of IBM "Deathstar" HDs? Or
other stuff? As long as you use IDE hardware you are always close
hi ya clemens
- yup.. i agree in general except for your bullshit comment :-)
( see below )
On Wed, 19 Jan 2005, Clemens Schwaighofer wrote:
Since I do SysAdmin as a paid service, I have not had a single
SCSI disk die (call it luck).
i think lots of people, probably
hi andy
assuming you have 4 disks... and not 2 disks...
hda + hdd == md0   ( hda mirrored to hdd )
hdc + hdb == md1   ( hdc mirrored to hdb )
i claim you should mirror first... then stripe it to be able to
read the data faster
you should play with the idea that if hda dies...
you need to
hi
- Make one copy of /etc/lilo.conf for each disk, i.e., lilo-sda.conf,
lilo-sdb.conf, etc.
raid is supposed to be hardware dependent... you should NOT
have to make 2 copies of lilo.conf
you need to make sure your kernel supports the scsi controller...
( ie... dont use the generic
- Make one copy of /etc/lilo.conf for each disk, i.e., lilo-sda.conf,
lilo-sdb.conf, etc.
raid is supposed to be hardware dependent... you should NOT
have to make 2 copies of lilo.conf
thats hardware INdependent...
and same applies for raid5 or raid1 for booting from a degraded
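The "you should NOT have to make 2 copies of lilo.conf" point above can be sketched as a single hypothetical /etc/lilo.conf for a raid1 root; lilo's raid1 support installs the boot record on each mirror member itself, so per-disk lilo-sda.conf / lilo-sdb.conf copies are unnecessary (paths and labels here are assumptions):

```
# hypothetical single lilo.conf for a raid1 root/boot
boot=/dev/md0          # lilo writes the boot record to every md0 member
root=/dev/md0
read-only

image=/boot/vmlinuz
    label=linux
```

Re-run lilo once after any disk replacement so the new member gets a boot record too.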
hi ya
I do, however, have a couple of possible solutions:
hda: 58633344 sectors (30020 MB) w/2048KiB Cache, CHS=3649/255/63
hdc: 58633344 sectors (30020 MB) w/2048KiB Cache, CHS=58168/16/63
from what i've seen... for identical drives..
put both drives on the same cable... and it'd read
hi ya iain...
what does your /etc/fstab and /etc/raidtab look like ???
thanx
alvin
http://www.Linux-1U.net ... 500Gb 1U Raid5 ...
On Sat, 23 Jun 2001, Iain Campbell wrote:
Scenario:
1. Simple 2 disk RAID1
2. Disk 0 fails
3. Bring machine down and replace disk (same SCSI id as old one)
hi giulio
what would be the point of making swap a raid0 or raid1 device ??
- if you have problem... the machine will most likely
shutdown and you lose all swap data
just use a regular swap partition as swap... not /dev/mdxx
i'd try to use...
/ /dev/md0- so
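As an alternative to putting swap on an md device at all, the kernel interleaves swap areas that share the same priority, giving a raid0-like striping effect with no md layer involved. Hypothetical /etc/fstab lines (partition names are assumptions):

```
# plain swap on each disk, equal priority => kernel stripes across them
/dev/hda2   none   swap   sw,pri=1   0 0
/dev/hdc2   none   swap   sw,pri=1   0 0
```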
hi ya
i think with 4 disks
--- raid0 ---
raid1 -A       raid1 -B
hda + hdd      hdc + hdb
when things are working right... you can read data
from both sets of raid1 disks
when things die...
if hda dies... you boot off hdd
if hdc dies... you still have data on hdb
if ide0 cable
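The 4-disk mirror-then-stripe layout above, written out in the old raidtools /etc/raidtab form of that era (a sketch; the chunk size is an assumption, and the device pairing follows the diagram):

```
# md0 and md1 are the raid1 pairs, md2 stripes across them
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdd1
    raid-disk               1

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    device                  /dev/hdc1
    raid-disk               0
    device                  /dev/hdb1
    raid-disk               1

raiddev /dev/md2
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/md0
    raid-disk               0
    device                  /dev/md1
    raid-disk               1
```

Note each raid1 pair spans both IDE channels, so losing one cable still leaves a working half of each mirror.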
hi kaelin
i've hot-swapped ide drives but... different kind of hot swap..
( w/ rh-7.1 ( root raid1 ) )
- power off...add/remove the ide disks...
- power up and let the software do its thing...
- removing/adding either of hda1 or hdb1 worked...
( at least it
-
From: Alvin Oga [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 10 May 2001 00:02
To: Philippe Trolliet
Cc: Alvin Oga; [EMAIL PROTECTED]
Subject: RE: lilo and raid1
hi philippe...
unfortunately... i didnt keep that lilo.conf setup
or raidtab... ( nothing fancy in it )
- i did
hi philippe
if you are getting li- or lil- problems...
-- easiest fix is to boot linux with some other media
( floppy or other working linux box... move your target
( hd to that other box
-- once you have linux booted on that box... then re-run lilo
and see if that
hi philippe...
unfortunately... i didnt keep that lilo.conf setup
or raidtab... ( nothing fancy in it )
- i did NOT change it from the rh-7.1 install
- during your install be sure to select /dev/hda1 and /dev/hdb1 and
'linux-raid' for filesystem type for your / partition
(
hi philippe
i just built and booted a raid1 root-boot system...
no problem... it was with redhat-7.1 ( installed and works clean, first
time thru ) with the default setup/config
linux-2.4.2-redhat permutation
lilo-21.4-4
- i unplugged hda ( master )... and it booted off the
hi philippe
an issue... dont know if suse-7.0 allows root raid drive setup
like the old rh-6.2 used to do... ( though i never used it )
- upon booting the suse-7.0 installer... if it does NOT allow for
hda and hdb to be selected for fdisk... you're out of luck
to do
8 Apr 2001, Alistair Riddell wrote:
On Tue, 17 Apr 2001, Alvin Oga wrote:
-- question is... if the "master" disk dies...sometimes
the slave dont exist either... then you're SOL...
surely it is more likely that the master drive will fail mechanically,
rather than electri
hi ya
best way to test raid5 is to write large ( 1Gb-2Gb ) data files to it...
and then compare the files
-- oooppss... just re-read david's post skip the part about
powering down the disks..etc...
then pull one of the disks offline
and see if it still compares...
insert a
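The write-then-compare test described above is easy to script: write files, record checksums, then re-verify after failing a disk. A sketch with sizes shrunk so it runs anywhere; on a real array use 1-2GB files, and TESTDIR (an assumption) should point at the raid5 mount:

```shell
#!/bin/sh
# write a test file and record its checksum
TESTDIR=${TESTDIR:-/tmp/raid5test}    # point at the raid5 mount on a real box
mkdir -p "$TESTDIR"

dd if=/dev/urandom of="$TESTDIR/testfile" bs=1024 count=64 2>/dev/null
( cd "$TESTDIR" && md5sum testfile > testfile.md5 )

# ... pull a disk offline here and let the array run degraded ...

# re-verify the data against the recorded checksum
( cd "$TESTDIR" && md5sum -c testfile.md5 >/dev/null ) \
    && echo "data intact" || echo "MISMATCH"
```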
hi joe
what kind of raid1 failures are you trying to find/monitor ??
i would just send a test file to disk... and see if i can
recover the test files from the mirrored set
- if your test file does not show up on the mirror...
you have to go and check why not
- Some raid monitoring sw (
hi ya chris...
did you follow the instructions on www.linuxraid.org ???
the generic raid patch to generic 2.2.18 fails to patch
properly.. try to follow the steps at linuxraid.org
that was a very good site...
have fun
alvin
http://www.linux-1U.net ... 1U Raid5
On Wed, 3 Jan 2001,
.well past the reasonable limits
of ext2 ...so was wondering about current status for reiserfs and jfs
etc... and in production use or lab use...
have fun raiding...
alvin
http://www.linux-1u.net
On Fri, 22 Dec 2000, ritz wrote:
From [EMAIL PROTECTED] Fri Dec 22 12:09:44 2000
Alvin
hi ya chris
one "machine" being taken offline does NOT mean that the
"website" is down etc... there should be other disks/systems that
continues merrily...along while the "down'd box is being reviewed
if the customers only have one or two systems ... they have to
allow some down time for
in
On Fri, 22 Dec 2000 [EMAIL PROTECTED] wrote:
On Thu, 21 Dec 2000, Alvin Oga wrote:
- anybody using those 3ware ide cards that does hw raid5 ??
AFAIK, 3ware only does hw RAID1 and RAID10. No RAID5...unless they have
new products I don't know about. I've deployed one 3ware RAID1 box so