Mike Hardy wrote:
What I'm thinking of doing is writing a small (or, as small as
possible, anyway) perl program that can take a few command line
arguments (like the array construction information) and know how to
read the data blocks on the array, and calculate parity
Robin Bowes wrote:
Mike Hardy wrote:
To grow component count on raid5 you have to use raidreconf, which can
work, but will toast the array if anything goes bad. I have personally
had it work, and not work, in different instances. The failures were
not necessarily raidreconf's fault, but its
Guy wrote:
For future reference:
Everyone should do a nightly disk test to prevent bad blocks from hiding
undetected. smartd, badblocks or dd can be used. Example:
dd if=/dev/sda of=/dev/null bs=64k
Just create a nice little script that emails you the output. Put this
script in a nightly cron to
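A minimal sketch of such a script (the device list, mail address, and path are assumptions; adjust for your setup):
  #!/bin/sh
  # read every block of each disk; dd prints its summary on stderr
  for dev in /dev/sda /dev/sdb /dev/sdc; do
      echo "=== $dev"
      dd if=$dev of=/dev/null bs=64k 2>&1
  done | mail -s "nightly disk read test" root
  # crontab entry to run it at 3am:
  # 0 3 * * * /usr/local/sbin/disk-read-test.sh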
Robert Heinzmann wrote:
Hello,
can someone verify if the following statements are true ?
- It's not possible to simply convert an existing partition with a
filesystem on it to a raid1 mirror set.
I believe you're right, but I'm not totally sure on this one. I'd take
the second disk, create a new
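The usual degraded-mirror approach looks roughly like this (a sketch only; device names are made up, and the original partition's data is overwritten once you add it back):
  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdc1
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt/new
  cp -a /data/. /mnt/new/          # copy the existing filesystem across
  # once you're happy with the copy, sacrifice the original partition:
  mdadm /dev/md0 --add /dev/hda1   # the mirror resyncs onto the old disk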
Mark Hahn wrote:
Interesting - the private mail was from me, and I've got two dual
Opterons in service. The one with significantly more PCI activity has
significantly more problems than the one with less PCI activity.
that's pretty odd, since the most intense IO devices I know of
are cluster
NeilBrown wrote:
When an array is degraded, bits in the intent-bitmap are
never cleared. So if a recently failed drive is re-added, we only need
to reconstruct the blocks that are still reflected in the
bitmap.
This patch adds support for this re-adding.
Hi there -
If I understand this correctly,
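In userspace terms the workflow would look roughly like this (a sketch; device names are made up, and --grow --bitmap=internal needs a reasonably recent mdadm):
  mdadm --grow --bitmap=internal /dev/md0   # add a write-intent bitmap
  # ... /dev/sdc1 fails and is later reconnected ...
  mdadm /dev/md0 --re-add /dev/sdc1         # only bitmap-dirty blocks resync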
Frank Wittig wrote:
It actually is available.
I've tested it and it worked fine for me. But taking a backup is highly
recommended.
The trick is not to use mdadm, since growing with mdadm is not possible
at the moment. Use raid-tools instead.
The program raidreconf comes along with raidtools.
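Usage is roughly as below, driven by an old and a new /etc/raidtab describing the array before and after the change (paths are assumptions; take that backup first):
  raidreconf -o /etc/raidtab.old -n /etc/raidtab.new -m /dev/md0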
berk walker wrote:
Have you guys seen/tried mdadm 1.90? I am delightfully experiencing the
I believe the mdadm based grow does not work for raid5, but only for
raid0 or raid1. raidreconf is actually capable of adding disks to raid5
and re-laying out the stripes / moving parity blocks, etc
Colin McDonald wrote:
Is it a bad idea to write grub to a software mirror? Is it written
to a specific disk when this is done?
The Software Raid and Grub HOW-TO
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html
I use grub+raid1 on the root drive of a number of
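The usual trick is to install grub onto the MBR of each mirror half so either disk can boot, something like this from the grub shell (device names are made up):
  grub> device (hd0) /dev/hda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/hdc
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit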
Max Waterman wrote:
OK, I am going to try to expand the capacity of my raid5 array and I
want to make sure I've got it right.
Not a bad idea, as it's all or nothing...
Disk /dev/hdg: 200.0 GB, 200049647616 bytes
Disk /dev/hdi: 200.0 GB, 200049647616 bytes
Disk /dev/hdk: 200.0 GB, 200049647616
Hello all -
This is more of a cautionary tale than anything, as I have not attempted
to determine the root cause or anything, but I have been able to add a
disk to a raid5 array using raidreconf in the past and my last attempt
looked like it worked but still scrambled the filesystem.
So, if
Moreover, and I'm sure Neil will chime in here, isn't the clean/unclean
thing designed to prevent this exact scenario?
The array is marked unclean immediately prior to write, then the write
and parity write happens, then the array is marked clean.
If you crash during the write but before parity
Guy wrote:
It is not just a parity issue. If you have a 4 disk RAID 5, you can't be
sure which disks, if any, have written the stripe. Maybe the parity was updated,
but nothing else. Maybe the parity and 2 data disks, leaving 1 data disk
with old data.
Beyond that, md does write caching. I
Slightly off-topic, but:
Simon Valiquette wrote:
Francois Barre wrote:
On production servers with large RAID arrays, I tend to like XFS very
much and trust it more than ReiserFS (I had some bad experience
with ReiserFS in the past). You can also grow an XFS filesystem live,
which is
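Growing XFS while mounted is a one-liner once the underlying device has been enlarged, e.g. (mount point made up):
  xfs_growfs /export/data   # grows the filesystem to fill the device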
PFC wrote:
When rebuilding md1, it does not realize accesses to md0 wait for
the same disks. Thus reconstruction of md1 runs happily at full speed,
and the machine is dog slow, because the OS and everything is on md0.
(I cat /dev/zero to a file on md1 to slow the rebuild so it
I saw this on my array, and other(s) have reported it as well.
Apparently the reconstruction speed algorithm doesn't understand that
it's not syncing all the blocks and hilarity ensues. I believe that was
it, anyway
Either that or you really have a hell of a server :-)
-Mike
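Rather than catting /dev/zero at it, the rebuild rate can also be capped directly, e.g.:
  echo 5000 > /proc/sys/dev/raid/speed_limit_max   # cap resync at ~5000 KB/s per device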
jurriaan wrote:
If you remove the '-Werror' it'll compile and work, but you still can't
convert a raid 0 to a raid 5. Your raid level understanding is off as
well, raid 5 is a parity block rotating around all drives, you were
thinking of raid 4 which has a single parity disk. Migrating raid 0 to
raid 4 (and
Chris Osicki wrote:
To rephrase my question, is there any way to make it visible to the
other host that the array is up and running on this host?
Any comments, ideas?
Would that not imply an unlock command before you could run the array
on the other host?
Would that not then break
Why would you not be happy? Resyncs in general are bad since they
indicate your data is possibly out-of-sync and the resync itself
consumes an enormous amount of resources
This is a feature of new-ish md driver code that more aggressively marks
the array as clean after writes
The end result is
I can think of two things I'd do slightly differently...
Do a smartctl -t long on each disk before you do anything, to verify
that you don't have single sector errors on other drives
Use ddrescue for better results copying a failing drive
-Mike
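Concretely, something like this (device names are made up):
  smartctl -t long /dev/sdb       # start a long self-test on each remaining disk
  smartctl -l selftest /dev/sdb   # check the result once it completes
  ddrescue /dev/sdd /dev/sde rescue.log   # copy the failing drive; the log lets you resume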
PFC wrote:
I have a raid5 array that contain
Addonics adst114 was the cheapest one I've found that works. I found it
for $41 at thenerds.net but you may be better at price searching
than me.
It's a Silicon Images 3114 chip, driven by the sata_sil driver
I honestly don't recall if it was out-of-the-box working on FC4, but the
updated
had already tried to fsck the filesystem
on this thing, so you may have hashed the remaining drive. It's hard to
say. Truly bleak though...
-Mike
Technomage wrote:
mike.
given the problem, I have a request.
On Friday 31 March 2006 15:55, Mike Hardy wrote:
I can't imagine how to coax
Brad Campbell wrote:
Martin Stender wrote:
Hi there!
I have two identical disks sitting on a Promise dual channel IDE
controller. I guess both disks are primaries then.
One of the disks have failed, so I bought a new disk, took out the
failed disk, and put in the new one.
That might
Recreate the array from the constituent drives in the order you mention,
with 'missing' in place of the first drive that failed?
It won't resync because it has a missing drive.
If you created it correctly, the data will be there
If you didn't create it correctly, you can keep trying
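For example (device names, chunk size, and order are assumptions; they must match the original array exactly):
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=32 \
        missing /dev/sdb1 /dev/sdc1
  mount -o ro /dev/md0 /mnt/check   # mount read-only first and verify the data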
Neil Brown wrote:
On Monday May 1, [EMAIL PROTECTED] wrote:
Hey folks.
There's no point in using LVM on a raid5 setup if all you intend to do
in the future is resize the filesystem on it, is there? The new raid5
resizing code takes care of providing the extra space and then as long
as the say
Paul Clements wrote:
Gil wrote:
So for those of us using other filesystems (e.g. ext2/3), is there
some way to determine whether or not barriers are available?
You'll see something like this in your system log if barriers are not
supported:
Apr 3 16:44:01 adam kernel: JBD:
Something fishy here
Dexter Filmore wrote:
# mdadm -E /dev/sdd
Device /dev/sdd
# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
732563712 blocks level 5, 32k chunk, algorithm 2 [4/4] []
Components that are all the first partition.
Bruno Seoane wrote:
mdadm -C -l5 -n5
-c=128 /dev/md0 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sda1
I took the device order from the mdadm output on a working device. Is this
the way the command is supposed to be assembled?
Is there anything else I should consider or any other
Nigel J. Terry wrote:
One comment - As I look at the rebuild, which is now over 20%, the time
till finish makes no sense. It did make sense when the first reshape
started. I guess your estimating / averaging algorithm doesn't work for
a restarted reshape. A minor cosmetic issue - see below
Warning: I'm not certain this info is correct (I test on fake loopback
arrays before taking my own advice - be warned). More authoritative
folks are more than welcome to correct me or disagree.
create is safe on existing arrays in general, so long as you get the old
device order correct in the
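One way to recover the old order before attempting a create is to examine each member's superblock (a sketch; the exact output format depends on your mdadm version):
  for d in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
      echo "=== $d"
      mdadm -E $d | grep -i -E 'level|chunk|this|role'
  done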
Richard Scobie wrote:
Dexter Filmore wrote:
Of all modes I wouldn't use a linear setup for backups. One disk dies
- all data is lost.
I'd go for an external raid5 solution, tho those tend to be slow and
expensive.
Unfortunately budget is the overriding factor here. Unlike RAID 0, I
Steve Cousins wrote:
MAILADDR [EMAIL PROTECTED]
ARRAY /dev/md0 level=raid5 num-devices=3
UUID=39d07542:f3c97e69:fbb63d9d:64a052d3
devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
If you list the devices explicitly, you're opening the possibility for
errors when the devices are re-ordered following
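Dropping the devices= clause and letting mdadm find the members by UUID avoids that, e.g. (keeping the UUID from above; leave your MAILADDR line as it was):
  DEVICE partitions
  ARRAY /dev/md0 level=raid5 num-devices=3 UUID=39d07542:f3c97e69:fbb63d9d:64a052d3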
berlin % rpm -qf /usr/lib/nagios/plugins/contrib/check_linux_raid.pl
nagios-plugins-1.4.1-1.2.fc4.rf
It is built in to my nagios plugins package at least, and works great.
-Mike
Tomasz Chmielewski wrote:
I would like to have RAID status monitored by nagios.
This sounds like a simple
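Wiring that plugin into Nagios is then a couple of object definitions along these lines (host name and service names are made up, and the plugin's arguments may differ by version):
  define command {
      command_name  check_linux_raid
      command_line  /usr/lib/nagios/plugins/contrib/check_linux_raid.pl
  }
  define service {
      use                  generic-service
      host_name            fileserver
      service_description  Software RAID
      check_command        check_linux_raid
  }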
Gordon Henderson wrote:
This might not be strictly on-topic here, but you may provide
enlightenment, as a lot of web searching hasn't helped me so far )-:
A client has bought some Dell hardware - Dell 1950 1U server, 2 on-board
SATA drives connected to a Fusion MPT SAS controller. This
Justin Piszcz wrote:
cards perhaps. Or, after reading that article, consider SAS maybe..?
I hate to be the guy that breaks out the unsubstantiated anecdotal
evidence, but I've got a RAID10 with 4x300GB Maxtor SAS drives, and I've
already had two trigger their internal SMART. I'm about to
Neil Brown wrote:
On Tuesday October 31, [EMAIL PROTECTED] wrote:
1 Warm swap - replacing drives without taking down the array but maybe
having to type in a few commands. Presumably a sata or sata/raid
interface issue. (True hot swap is nice but not worth delaying
warm-swap.)
I believe
dean gaudet wrote:
On Sun, 5 Nov 2006, Bradshaw wrote:
I don't know how to scan the one disk for bad sectors, stopping the array and
doing an fsck or similar throws errors, so I need help in determining whether
the disc itself is faulty.
try swapping the cable first. after that swap
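For the bad-sector scan itself, a read-only pass can be done without stopping the array, e.g. (device name made up):
  smartctl -a /dev/sdb    # look at reallocated / pending sector counts first
  badblocks -sv /dev/sdb  # non-destructive read-only surface scan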
You don't want to use raidreconf unless I'm misunderstanding your goal -
I have also had success with raidreconf but have had data-loss failures
as well (I've posted to the list about it if you search). The data-loss
failures were after I had run tests that showed me it should work.
raidreconf
Michel Lespinasse wrote:
Hi,
I'm hitting a small issue with a RAID1 array and a 2.6.16.36 kernel.
Debian's mdadm package has a checkarray process which runs monthly and
checks the RAID arrays. Among other things, this process does an
'echo check > /sys/block/md1/md/sync_action'. Looking
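For reference, the same check can be kicked off and watched by hand on kernels with the md sysfs interface:
  echo check > /sys/block/md1/md/sync_action
  cat /proc/mdstat                        # shows the check progressing
  cat /sys/block/md1/md/mismatch_cnt      # non-zero after the check means mismatches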
google BadBlockHowto
Any "just google it" response sounds glib, but this is actually how to
do it :-)
If you're new to md and mdadm, don't forget to actually remove the drive
from the array before you start working on it with 'dd'
-Mike
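i.e. something along the lines of (device names made up):
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1
  # now the disk can be exercised with dd without md writing to it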
Mike wrote:
On Fri, 12 Jan 2007, Neil Brown might have