On Friday October 13, [EMAIL PROTECTED] wrote:
I am curious if there are plans for either of the following:
-RAID6 reshape
-RAID5 to RAID6 migration
No concrete plans with timelines and milestones and such, no.
I would like to implement both of these but I really don't know when I
will
I am pleased to announce the availability of
mdadm version 2.5.4
It is available at the usual places:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
countrycode=xx.
http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/
and via git at
git://neil.brown.name/mdadm
On Wednesday October 11, [EMAIL PROTECTED] wrote:
After realizing my stupid error in specifying the bitmap during array
creation, I've triggered a couple of 100% repeatable bugs with this
scenario.
BUG 1)
Strangely, whatever the underlying cause is, ext3 seems immune (at
On Friday October 6, [EMAIL PROTECTED] wrote:
Paul Clements wrote:
--- mdadm-2.5.1/bitmap.c.orig   Fri Oct 6 13:40:35 2006
+++ mdadm-2.5.1/bitmap.c        Fri Oct 6 13:40:53 2006
@@ -33,6 +33,7 @@ inline void sb_le_to_cpu(bitmap_super_t
 sb->chunksize = __le32_to_cpu(sb->chunksize);
On Monday October 9, [EMAIL PROTECTED] wrote:
Hello,
I wonder what the type-0.90.0 superblock and md-1 superblocks
of md are, and what is the difference between them? Is it merely
some version of the kernel md driver?
Two different formats for the metadata describing the array.
I usually refer to
On Monday October 9, [EMAIL PROTECTED] wrote:
Ok, after more testing, this lockup happens consistently when
bitmaps are switched on and never when they are switched off.
Ideas anybody?
No. I'm completely stumped.
Which means it is probably something very obvious, but I keep looking
in the
On Monday October 9, [EMAIL PROTECTED] wrote:
superblock->init_flag == FALSE then make all writes a parity-generating,
not parity-updating write (less efficient, so you would want to resync the
array and clear this up soon, but possible).
Yeh, that would work. I wonder if it is worth the effort
On Thursday October 5, [EMAIL PROTECTED] wrote:
I am trying to compare the three RAID10 layouts with each other.
Assuming a simple 4 drive setup with 2 copies of each block,
I understand that a near layout makes RAID10 resemble RAID1+0
(although it's not 1+0).
I also understand that the far
On Tuesday October 10, [EMAIL PROTECTED] wrote:
Very happy to. Let me know what you'd like me to do.
Cool thanks.
At the end is a patch against 2.6.17.11, though it should apply against
any later 2.6.17 kernel.
Apply this and reboot.
Then run
while true
do cat
back into the active configuration and
re-synced. Any comments?
Does this patch help?
Fix count of degraded drives in raid10.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid10.c |    2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers
On Monday October 9, [EMAIL PROTECTED] wrote:
In the kernel log I see:
md0: raid array is not clean -- starting background reconstruction
What is the meaning of raid array is not clean ?
It means md believes there could be an inconsistency in the array.
This typically happens due to an
hierarchy a deadlock could happen. However that causes bigger
problems than a deadlock and should be fixed independently.
So we flag the lock in md_open as a nested lock. This requires
defining mutex_lock_interruptible_nested.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
On Monday September 11, [EMAIL PROTECTED] wrote:
Neil,
The following patches implement hardware accelerated raid5 for the Intel
Xscale® series of I/O Processors. The MD changes allow stripe
operations to run outside the spin lock in a work queue. Hardware
acceleration is achieved by
On Tuesday October 3, [EMAIL PROTECTED] wrote:
Hello Neil, Ingo and [insert your name here],
I try to understand the raid5 and md code and I have a question
concerning the cache.
There are two ways of calculating the parity: read-modify-write and
reconstruct-write. In my understanding,
the wrong number of
working drives. This is probably only in 2.6.18. Patch is below.
NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid10.c |    2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers/md/raid10.c ./drivers/md/raid10.c
On Thursday October 5, [EMAIL PROTECTED] wrote:
On Oct 5, 2006, at 3:15 AM, Jurriaan Kalkman wrote:
AFAIK, linux raid-10 is not exactly raid 1+0, it allows you to, for
example, use 3 disks.
I made a raid-10 device earlier today with 7 drives and I was
surprised to see that it
On Thursday October 5, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Wednesday October 4, [EMAIL PROTECTED] wrote:
I have been trying to run:
mdadm --grow /dev/md0 --raid-devices=6 --backup-file /backup_raid_grow
I get:
mdadm: Need to backup 1280K of critical section..
mdadm: /dev
On Wednesday October 4, [EMAIL PROTECTED] wrote:
I have been trying to run:
mdadm --grow /dev/md0 --raid-devices=6 --backup-file /backup_raid_grow
I get:
mdadm: Need to backup 1280K of critical section..
mdadm: /dev/md0: Cannot get array details from sysfs
It shouldn't do that
Can
On Sunday October 1, [EMAIL PROTECTED] wrote:
Richard Bollinger [EMAIL PROTECTED] writes:
It appears that raidhotadd doesn't always trigger a resync under 2.6.18.
Starting with a broken raid1 mirror:
Same with evms and 2.6.18. It does not trigger the raid1 resync in any
case. (while
the least-significant bit on bigendian
machines, so it is really wrong to use it.
ffs is closer, but takes an 'int' and we have an 'unsigned long'.
So use ffz(~X) to convert a chunksize into a chunkshift.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/bitmap.c |    3
On Friday September 22, [EMAIL PROTECTED] wrote:
The SET_BITMAP_FILE ioctl currently doesn't have a compat entry, so on
64-bit systems a 32-bit mdadm will fail:
Sep 20 16:34:52 caspian kernel: ioctl32(mdadm:8056): Unknown cmd fd(3)
cmd(8004092b){00} arg(0004) on /dev/md0
The fix is
On Thursday September 14, [EMAIL PROTECTED] wrote:
Neil (or others), what is the recommended way to have the array start up
if you use whole drives instead of partitions? Do you put mdadm -A etc.
in rc.local?
I would use
mdadm -As
in rc.local (or /etc/init.d/mdadm or whatever)
On Tuesday September 12, [EMAIL PROTECTED] wrote:
I currently have a 4 disk raid5 array which I wish to move to another
machine.
In the current machine it is comprised of sda1/sdb1/sdc1/sdd1. When I move it
to the new machine the disks will all shift up by 2, i.e. the partitions will
now
On Monday August 28, [EMAIL PROTECTED] wrote:
This might be a dumb question, but what causes md to use a large amount of
cpu resources when reading a large amount of data from a raid1 array?
I assume you meant raid5 there.
md/raid5 shouldn't use that much CPU when reading.
It does use more
On Monday September 4, [EMAIL PROTECTED] wrote:
This one will really curl your hair. So, operating with the knowledge
that the checksum's state of correctness or incorrectness was changing
all the time, I did this:
while [ $? != 0 ] ; do
mdadm -A /dev/md0 /dev/sd[abcd]1
done
On Monday September 4, [EMAIL PROTECTED] wrote:
Hi,
I'm not sure if this is the right place to ask this question, but a
Google search didn't turn up anything relevant, so I'm taking a
chance here.
I have my root on software raid on Debian and every night, cron is
sending me this
On Monday September 4, [EMAIL PROTECTED] wrote:
On Mon, 2006-09-04 at 17:46 -0400, Josh Litherland wrote:
I've only used it for a couple days, but never got any read errors or
invalid file problems.
Feh, disregard that. I've beaten it up some more, and occasional errors
are cropping
On Sunday September 3, [EMAIL PROTECTED] wrote:
On Sun, 3 Sep 2006, Clive Messer wrote:
This leads me to a question. I understand from reading the linux-raid
archives
that the current behaviour when rebuilding with a single badblock on
another
disk is for that disk to also be
On Sunday September 3, [EMAIL PROTECTED] wrote:
Hello GABELN
I have a really really big problem. In fact, the problem is the output of
mdadm --examine as shown on http://nomorepasting.com/paste.php?pasteID=68021
Please explain why you think that output is a problem. It looks fine
to me.
On Saturday September 2, [EMAIL PROTECTED] wrote:
Attempting to build a new raid5 md array across 4 hard drives. At the
exact moment that the drive finishes rebuilding, the superblock checksum
changes to an invalid value. During the rebuild, mdadm -E for the 4
drives shows:
On Thursday August 31, [EMAIL PROTECTED] wrote:
Hi all,
Just wondering if there is any way to get mdadm created multipath devices
to re-activate a previously disabled path?
I know I can
mdadm /dev/md0 -f /dev/sdx -r /dev/sdx -a /dev/sdx
to re-activate it, but I want mdadm to do it
On Monday August 28, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Saturday August 26, [EMAIL PROTECTED] wrote:
All,
[...]
* Problem 1: Since moving from a 2.4 to a 2.6 kernel, a reboot kicks one
device out of the array (c.f. post by Andreas Pelzner on 24th Aug 2006).
* Problem 2
On Tuesday August 29, [EMAIL PROTECTED] wrote:
Hi,
i hope i picked the right list for this problem,
here's the formal report:
[1.] One line summary of the problem:
System crashes while accessing a 3TB RAID5 on an AMD64 with 2.6.17.11.
Yes.. you are hitting some pretty serious
On Monday August 28, [EMAIL PROTECTED] wrote:
On Monday, 28 August 2006 04:03, you wrote:
The easiest thing to do is simply recreate the array, making sure to
have the drives in the correct order, and any options (like chunk
size) the same. This will not hurt the data (if done
On Monday August 28, [EMAIL PROTECTED] wrote:
Neil Brown [EMAIL PROTECTED] writes:
You say some of the drives are 'spare'. How did that happen? Did you
try to add them back to the array after it has failed? That is a
mistake.
Surely it was, although not mine.
;-)
The easiest
On Sunday August 13, [EMAIL PROTECTED] wrote:
well ... me again
Following your advice
I added a deadline for every WRITE stripe head when it is created.
In raid5_activate_delayed I checked whether the deadline had expired, and if
not I am setting the sh to preread-active mode.
This small
On Thursday August 17, [EMAIL PROTECTED] wrote:
I just tried the patch and now it seems to be syncing the drives instead
of only checking them? (At the very least the message is misleading.)
Yes, the message is misleading. I should fix that.
NeilBrown
# echo check
On Saturday August 12, [EMAIL PROTECTED] wrote:
On 8/9/06, James Peverill [EMAIL PROTECTED] wrote:
I'll try the force assemble but it sounds like I'm screwed. It sounds
like what happened was that two of my drives developed bad sectors in
different places that weren't found until I
On Thursday August 24, [EMAIL PROTECTED] wrote:
On Thu, 24 Aug 2006 17:40:56 +1000
NeilBrown [EMAIL PROTECTED] wrote:
[PATCH 001 of 4] md: Fix recent breakage of md/raid1 array checking
[PATCH 002 of 4] md: Fix issues with referencing rdev in md/raid1.
[PATCH 003 of 4] md: new sysfs
On Tuesday August 22, [EMAIL PROTECTED] wrote:
Hi,
I have a set of eleven 500 GB drives. Currently each has two 250 GB
partitions (/dev/sd?1 and /dev/sd?2). I have two RAID6 arrays set up,
each with 10 drives and then I wanted the 11th drive to be a hot-spare.
When I originally created
On Tuesday August 22, [EMAIL PROTECTED] wrote:
G'day all,
I have a box with 15 SATA drives in it, they are all on the PCI bus and it's
a relatively slow machine.
I can extract about 100MB/s combined read speed from these drives with dd.
When reading /dev/md0 with dd I get about 80MB/s,
On Tuesday August 22, [EMAIL PROTECTED] wrote:
On Monday, 21 August 2006 13:04, Dexter Filmore wrote:
I seriously don't know what's going on here.
I upgraded packages and rebooted the machine to find that now disk 4 of 4 is
not assembled.
Here's dmesg and mdadm -E
at it, there seem to be several places that reference
->rdev that could be cleaned up.
So I'm thinking of something like the following against -mm
I'll make a smaller patch for -stable.
Thanks,
NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid1.c | 55
On Monday August 21, [EMAIL PROTECTED] wrote:
On Mon, Aug 21, 2006 at 09:50:13AM +1000, NeilBrown wrote:
patch for 2.6.16 stable series
...
Thanks for this patch.
It does also apply against both 2.6.17.9 and Linus' tree.
Is it not required in these trees, or are you also submitting
On Thursday August 17, [EMAIL PROTECTED] wrote:
HI Neil ..
Thus spoke Neil Brown:
On Wednesday August 16, [EMAIL PROTECTED] wrote:
1) I would like raid request retries to be done with exponential
delays, so that we get a chance to overcome network brownouts.
2) I would like
On Monday August 21, [EMAIL PROTECTED] wrote:
raid5, 4 sata disks, slackware with 2.6.14.6.
Yesterday the machine hung, so I used MagicKey to sync, remount read only and
reboot.
After that the third disk was not assembled into the array.
dmesg came up with:
[ 34.652268] md: md0 stopped.
On Thursday August 17, [EMAIL PROTECTED] wrote:
A long time ago I noticed pretty bad formatting of
dmesg text in md array reconstruction output, but
never bothered to ask. So here it goes.
What kernel version?
My patch logs suggest that I fixed that about 1 year ago...
which means it is
On Wednesday August 16, [EMAIL PROTECTED] wrote:
Hello,
I have been trying to get a raid5 array going on SuSE 10.1 Final
(2.6.16.21-0.13-default) but every time I create, resync, or rebuild a
spare it seems to have worked, but
# mdadm --examine /dev/sd[bcd]1
gives a report that the
On Wednesday August 16, [EMAIL PROTECTED] wrote:
So,
1) I would like raid request retries to be done with exponential
delays, so that we get a chance to overcome network brownouts.
2) I would like some channel of communication to be available
with raid that devices can use to say
On Friday August 11, [EMAIL PROTECTED] wrote:
Hi all,
I've got a machine with a RAID6 array which hung on me yesterday. Upon
reboot, mdadm refused to start the array, since it was degraded and
dirty. The array had 7 drives, and one had previously gone bad. I'm
running Fedora Core 5.
I
On Wednesday August 9, [EMAIL PROTECTED] wrote:
Why are we updating it BACKWARD in the first place?
To avoid writing to spares when it isn't needed - some people want
their spare drives to go to sleep.
That sounds a little dangerous. What if it decrements below 0?
It cannot.
md
is valuable.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c | 11 +++
1 file changed, 11 insertions(+)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2006-08-03 11:42:48.0 +1000
+++ ./drivers/md/md.c 2006-08-07
On Tuesday August 8, [EMAIL PROTECTED] wrote:
Why are we updating it BACKWARD in the first place?
To avoid writing to spares when it isn't needed - some people want
their spare drives to go to sleep.
If we increment the event count without writing to the spares, the
spares quickly get left
On Tuesday August 8, [EMAIL PROTECTED] wrote:
Michael Tokarev [EMAIL PROTECTED] writes:
Why are we updating it BACKWARD in the first place?
Another scenario: 1 disk (of 2) is removed, another is added, RAID-1
is rebuilt, then the disk added last is removed and replaced by
the disk which
On Tuesday August 8, [EMAIL PROTECTED] wrote:
The resize went fine, but after re-adding the drive back into the array
I got another fail event (on another drive) about 23% through the
rebuild :(
Did I have to remove the bad drive before re-adding it with mdadm? I
think my array might
On Saturday August 5, [EMAIL PROTECTED] wrote:
On Sat, Aug 05, 2006 at 06:31:37PM +0100, David Greaves wrote:
Say going from 300gbx4 to 500gbx4. Can one replace them
one at a time, going through fail/rebuild as appropriate
and then expand the array into the unused space
Yes.
I
On Sunday August 6, [EMAIL PROTECTED] wrote:
Allow user to force raid1 to read all data from a given disk.
This lets users do integrity checking by comparing results
from reading different disks. If at any time the system finds
it cannot read from the given disk it resets the disk number
to
I am pleased to announce the availability of
mdadm version 2.5.3
It is available at the usual places:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
countrycode=xx.
http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/
and via git at
git://neil.brown.name/mdadm
On Wednesday August 2, [EMAIL PROTECTED] wrote:
Hi,
I seem to be having a problem with mdadm running on Gentoo. I recently
upgraded from 1.12 to 2.5-r1 and then to 2.5.2, with both of the latter
exhibiting the same behaviour on the machine in question.
The machine is running a RAID1
On Friday August 4, [EMAIL PROTECTED] wrote:
What can I do?
a) force with some magic the assemble (maybe --assume-clean could
help)?
Yes. --assemble --force
NeilBrown
b) wait for new disk and dd_rescue old /dev/hdg to new /dev/hdg and
retry?
c) recreate the raid (I'm scared...)?
On Thursday August 3, [EMAIL PROTECTED] wrote:
The r1_bio->master_bio may already have had end_io() called and been
freed by the time we bio_clone() it. This results in an invalid bio
being sent down to one (or more) of the raid1 components. In my testing,
the original 4K barrier write
On Tuesday August 1, [EMAIL PROTECTED] wrote:
don't think this is better, NeilBrown wrote:
raid10d has too many nested blocks, so take the fix_read_error
functionality out into a separate function.
Definite improvement in readability. Will all versions of the compiler
do something
On Monday July 31, [EMAIL PROTECTED] wrote:
On Jul 30, 2006, Neil Brown [EMAIL PROTECTED] wrote:
1/
It just isn't right. We don't mount filesystems from partitions
just because they have type 'Linux'. We don't enable swap on
partitions just because they have type 'Linux
[linux-raid added to cc.
Background: patch was submitted to remove the current hard limit
of 127 partitions that can be auto-detected - limit set by
the 'detected_devices' array in md.c.
]
My first inclination is not to fix this problem.
I consider md auto-detect to be a legacy feature.
I don't
On Friday July 28, [EMAIL PROTECTED] wrote:
Hello list,
I've recently subscribed to this list as I'm facing a little problem
using md v0.90.3 (bitmap v4.39) on Linux 2.6.17.1, Debian 'sid', mdadm
2.4.1...
sounds fairly up-to-date.
While the system is under heavy disk IO, calls to
On Sunday July 23, [EMAIL PROTECTED] wrote:
Please, please! I am dead in the water!
To recap, I have a RAID6 array on a Fedora Core 5 system, using /dev/hd[acdeg]
2 and /dev/sd[ab]2. /dev/hdd went bad so I replaced the drive and tried to
add it back to the array. Here is what happens:
On Sunday July 23, [EMAIL PROTECTED] wrote:
At this point, I'd just be happy to be able to get the degraded array back up
and running. Is there any way to do that? Thanks!
mdadm --assemble --force /dev/md1 /dev/hd[aceg]2 /dev/sd[ab]2
should get you the degraded array. If not, what
On Wednesday July 19, [EMAIL PROTECTED] wrote:
On Tue, Jul 18, 2006 at 06:58:56PM +1000, Neil Brown wrote:
On Tuesday July 18, [EMAIL PROTECTED] wrote:
On Mon, Jul 17, 2006 at 01:32:38AM +0800, Federico Sevilla III wrote:
On Sat, Jul 15, 2006 at 12:48:56PM +0200, Martin Steigerwald wrote
On Wednesday July 19, [EMAIL PROTECTED] wrote:
Neil, hello.
You say in raid5.h:
...
* Whenever the delayed queue is empty and the device is not plugged, we
* move any strips from delayed to handle and clear the DELAYED flag
and set PREREAD_ACTIVE.
...
I do not understand how one can
On Wednesday July 19, [EMAIL PROTECTED] wrote:
Hi,
Another question on this matter please:
If there is a raid5 with 4 disks and 1 missing, and we add that disk,
while its doing the resync of that disk, how do we know which disk it
is (if we forgot what one we added)?
mdadm --detail will
On Tuesday July 18, [EMAIL PROTECTED] wrote:
I think there's a bug here somewhere. I wonder/suspect that the
superblock should contain the fact that it's a partitioned/able md device?
I've thought about that and am not in favour.
I would rather just assume everything is partitionable - put
On Wednesday July 19, [EMAIL PROTECTED] wrote:
Situation:
I accidentally killed the power to my 5-disk RAID 5 array. I then powered
it back up and rebooted the system. After reboot, however, I got the
following error when trying to assemble the array:
mdadm -A -amd /dev/md0 /dev/sd[a-e]
On Wednesday July 19, [EMAIL PROTECTED] wrote:
Is there any way to recover the original data?
Well, if you got all the right devices in the right order, then your
data should be fine. If not, I hope you have good backups, because
they are your only hope.
NeilBrown
So
On Tuesday July 18, [EMAIL PROTECTED] wrote:
Jul 16 16:59:37 ceres kernel: ide: failed opcode was: unknown
Jul 16 16:59:37 ceres kernel: hdb: drive not ready for command
Jul 16 16:59:37 ceres kernel: ide0: reset: success
Jul 16 16:59:37 ceres kernel: hdb: status error: status=0x00 { }
Jul 16
On Thursday July 6, [EMAIL PROTECTED] wrote:
I created a raid1 array using /dev/disk/by-id with (2) 250GB USB 2.0
Drives. It was working for about 2 minutes until I tried to copy a
directory tree from one drive to the array and then cancelled it
midstream. After cancelling the copy, when I
On Friday July 7, [EMAIL PROTECTED] wrote:
Neil But if you wanted to (and were running a fairly recent kernel) you
Neil could
Neil mdadm --grow --bitmap=internal /dev/md0
Did this. And now I can do mdadm -X /dev/hde1 to examine the bitmap,
but I think this totally blows. To create a
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Hm, what's superblock 0.91? It is not mentioned in mdadm.8.
Not sure, the block version perhaps?
Well yes of course, but what characteristics? The manual only lists
0, 0.90, default
1, 1.0, 1.1, 1.2
No 0.91 :(
AFAICR superblock
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Neil,
It worked, echo'ing the 600 to the stripe width in /sys, however, how
come /dev/md3 says it is 0 MB when I type fdisk -l?
Is this normal?
Yes. The 'cylinders' number is limited to 16 bits. For your 2.2TB
array, the number of 'cylinders'
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Hi,
I created 3 arrays, /dev/md1 to /dev/md3, which consist of six identical
200GB hard drives.
my mdadm --detail --scan looks like
Proteus:/home/vladoportos# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2
On Monday July 17, [EMAIL PROTECTED] wrote:
I have written some posts about this before... My 6 disk RAID 5 broke
down because of hardware failure. When I tried to get it up'n'running again
I did a --create without any missing disk, which made it rebuild. I have
also lost all information
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Christian Pernegger wrote:
The fact that the disk had changed minor numbers after it was plugged
back in bugs me a bit. (was sdc before, sde after). Additionally udev
removed the sdc device file, so I had to manually recreate it to be
able to
On Monday July 10, [EMAIL PROTECTED] wrote:
Karl Voit wrote:
[snip]
Well this is because of the false(?) superblocks of sda-sdd in comparison to
sda1 to sdd1.
I don't understand this. Do you have more than a single partition on sda?
Is sda1 occupying the entire disk? Since the superblock
On Friday July 14, [EMAIL PROTECTED] wrote:
Hi, I have a small problem when booting: I have md1 as /boot, md2 as swap
and md3 as / (root), and when it comes to md3 it says something like "md3
has no identity information". I can't read it, it goes by too fast... It
doesn't actually affect the system as far as I can
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Checksum : 4aa9094a - expected 4aa908c4
This is a bit scary. You have a single-bit error, either in the
checksum or elsewhere in the superblock.
I would recommend at least a memtest86 run.
NeilBrown
On Thursday July 13, [EMAIL PROTECTED] wrote:
Hi all,
I'm new to MD RAID. When I read the book Understanding the Linux Kernel, I
know that there are several layers between Filesystems (e.g. ext2) and block
device files (e.g. /dev/sda1). These layers are:
Filesystem == Generic Block Layer ==
On Tuesday July 4, [EMAIL PROTECTED] wrote:
Michael Tokarev [EMAIL PROTECTED] wrote:
Why test for udev at all? If the device does not exist, regardless
of whether udev is running or not, it might be a good idea to try to create it.
Because IT IS NEEDED, period. Whenever the operation fails or
On Sunday July 16, [EMAIL PROTECTED] wrote:
Thanks for the reply, Neil. Here is my version:
[EMAIL PROTECTED] log]# mdadm --version
mdadm - v2.3.1 - 6 February 2006
Positively ancient :-) Nothing obvious in the change log since then.
Can you show me the output of
mdadm -E /dev/hdd2
On Monday July 17, [EMAIL PROTECTED] wrote:
/dev/md/0 on /boot type ext2 (rw,nogrpid)
/dev/md/1 on / type reiserfs (rw)
/dev/md/2 on /var type reiserfs (rw)
/dev/md/3 on /opt type reiserfs (rw)
/dev/md/4 on /usr type reiserfs (rw)
/dev/md/5 on /data type reiserfs (rw)
I'm running the
On Saturday July 15, [EMAIL PROTECTED] wrote:
Hi all,
I have a RAID6 array where a disk went bad. I removed the old disk, put in
an
identical one, and repartitioned the new disk. I am now trying to add the
new partition to the array, but I get this error:
[EMAIL PROTECTED] ~]#
On Friday July 7, [EMAIL PROTECTED] wrote:
How are you shutting down the machine? Is something sending SIGKILL
to all processes?
First SIGTERM, then SIGKILL, yes.
That really should cause the array to be clean. Once the md thread
gets SIGKILL (it ignores SIGTERM) it will mark the array
On Friday July 7, [EMAIL PROTECTED] wrote:
Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes. Needed 512
Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28
So the RAID5 reshape only works if you use a 128kb or smaller
On Saturday July 8, [EMAIL PROTECTED] wrote:
I'm just in the process of upgrading the RAID-1 disks in my server, and have
started to experiment with the RAID-1 --grow command. The first phase of the
change went well, I added the new disks to the old arrays and then increased
the
size of
On Friday July 7, [EMAIL PROTECTED] wrote:
My RAID-5 array is composed of six USB drives. Unfortunately, my
Ubuntu Dapper system doesn't always assign the same devices to the
drives after a reboot. However, mdadm doesn't seem to like having an
mdadm.conf that doesn't have a Devices line with
On Friday July 7, [EMAIL PROTECTED] wrote:
Hey! You're awake :)
Yes, and thinking about breakfast (it's 8:30am here).
I am going to try it with just 64kb to prove to myself it works with that,
but then I will re-create the raid5 again like I had it before and attempt
it again, I did
On Friday July 7, [EMAIL PROTECTED] wrote:
I guess one has to wait until the reshape is complete before growing the
filesystem..?
Yes. The extra space isn't available until the reshape has completed
(if it was available earlier, the reshape wouldn't be necessary)
NeilBrown
On Thursday July 6, [EMAIL PROTECTED] wrote:
I suggest you find a SATA related mailing list to post this to (Look
in the MAINTAINERS file maybe) or post it to linux-kernel.
linux-ide couldn't help much, aside from recommending a bleeding-edge
patchset which should fix a lot of things
do see some room for improvement in the md shutdown
sequence - it shouldn't give up at that point just because the device
seems to be in use. I'll look into that.
You could try the following patch. I think it should be safe.
NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat
On Thursday July 6, [EMAIL PROTECTED] wrote:
Hello, I just realized that internal bitmaps do not seem to work
anymore.
I cannot imagine why. Nothing you have listed show anything wrong
with md...
Maybe you were expecting
mdadm -X /dev/md100
to do something useful. Like -E, -X must be
On Thursday July 6, [EMAIL PROTECTED] wrote:
Currently I have 4 discs on a 4 channel sata controller which does its job
quite well for 20 bucks.
Now, if I wanted to grow the array I'd probably go for another one of these.
How can I tell if the discs on the new controller will become
On Thursday July 6, [EMAIL PROTECTED] wrote:
Neil,
First off, thanks for all your hard work on this software, it's really
a great thing to have.
But I've got some interesting issues here. Though not urgent. As
I've said in other messages, I've got a pair of 120gb HDs mirrored.
I'm