On Sunday February 5, [EMAIL PROTECTED] wrote:
I can see what I did now. Very silly.
I created a RAID 5 with 4 disks then went about hot adding disks.
Silly me, thinking it would increase the active space. Of course, with
the parity being shared across the active disks, this doesn't make
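For reference: hot-added disks only become spares until they are explicitly
grown into the array, which needs the raid5 reshape support discussed later
in this list. A minimal sketch, with /dev/md0 and /dev/sde1 as illustrative
names:
# mdadm /dev/md0 --add /dev/sde1
# mdadm --grow /dev/md0 --raid-devices=5
The new member stays a spare until the reshape completes, and the filesystem
on top still has to be resized separately afterwards.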
On Saturday February 4, [EMAIL PROTECTED] wrote:
Ah, I almost forgot!
mdadm sometimes reports "cannot allocate memory", and the next try
segfaults
when I run -G --bitmap=internal on 2TB arrays!
And after the segfault, the whole raid stops...
Cheers,
Janos
I think I found the bug, it's
On Sunday February 5, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
So my plea for help: Could someone with a Redhat installation
please see if there is any way to get functions that convert
between little endian and host endian, or if there is some name I
can #define to disable
On Saturday February 4, [EMAIL PROTECTED] wrote:
Hi all,
I recently had a machine (Debian 2.6.7) with a raid array have a failure:
the disk controller card died partially (so a couple of disks went
offline), then died permanently, taking a further two drives out of the array.
The array
On Friday February 3, [EMAIL PROTECTED] wrote:
Hello, list, Neil,
I tried to add bitmaps to raid4, and mdadm did this fine.
/proc/mdstat shows it, and it really works well.
But on reboot, the kernel drops the bitmap (and resyncs the entire array if
it is unclean). :(
It
On Friday February 3, [EMAIL PROTECTED] wrote:
Hello, list,
I plan to resize (grow) one raid4 array.
1. stop the array.
2. resize the partition on all disks to fit the maximum size.
The approach is currently not supported. It would need a change to
mdadm to find the old superblock and
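Presumably the difficulty is that the 0.90 superblock lives near the end of
each component, so after the partitions are enlarged it is no longer where
assembly expects it. For reference, on an mdadm/kernel combination that does
support growing into larger components, the sequence would look roughly like
this (device names are illustrative, and older mdadm versions may want an
explicit size in KiB rather than "max"):
# mdadm --stop /dev/md0
(resize the partition on each disk)
# mdadm --assemble /dev/md0 /dev/sd[abcd]3
# mdadm --grow /dev/md0 --size=max
The filesystem then has to be grown separately (e.g. with resize2fs).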
, and the next crash we will see
I mean this was:
bitmap is only supported in raid1.
bitmap is removed.
But I'm not so sure. :(
Close enough. Here is the fix.
NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |3 ++-
1 file changed, 2 insertions(+), 1
On Thursday February 2, [EMAIL PROTECTED] wrote:
Per the discussion in this thread (http://marc.theaimsgroup.com/?
t=11260312014r=1w=2) these patches implement the first phase of MD
acceleration, pre-emptible xor. To date these patches only cover raid5
calls to compute_parity for
On Wednesday February 1, [EMAIL PROTECTED] wrote:
On Tue, Jan 31, 2006 at 10:58:03PM +0100, Molle Bestefich wrote:
Quoting Luca Berra:
the in kernel auto assembly should be removed for good
it should be replaced by auto assembly in user space (mdadm),
which does not suffer from the
On Thursday February 2, [EMAIL PROTECTED] wrote:
In any case, I have included the relevant dmesgs and a --examine and
--detail for all drives as soon as the second (2.6.15.1) weird rebuild
started. If you need any more info I'll do my best to provide it, but I
thought I should at least
On Thursday February 2, [EMAIL PROTECTED] wrote:
On Thursday February 2, [EMAIL PROTECTED] wrote:
In any case, I have included the relevant dmesgs and a --examine and
--detail for all drives as soon as the second (2.6.15.1) weird rebuild
started. If you need any more info I'll do my best
I am pleased to announce the availability of
mdadm version 2.3
It is available at the usual places:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/
mdadm is a tool for creating, managing and monitoring
device arrays
On Wednesday February 1, [EMAIL PROTECTED] wrote:
We're wondering if it's possible to run the following --
* define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
* the OS will see these as four normal drives
* use md to configure them into a RAID 6 array
Would this work?
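As far as md is concerned the layering is straightforward, since it just
sees four block devices. A sketch, assuming the 3ware card exports the four
RAID 1 units as /dev/sd[a-d] (names are illustrative):
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Whether RAID 6 over mirrored pairs is worth the capacity cost is a separate
question from whether it works.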
On Monday January 30, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Monday January 30, [EMAIL PROTECTED] wrote:
Any feeling how best to do that? My current thinking is to export a
flags entry in addition to the current ones, presumably based on
struct parsed_partitions->parts[].flags
On Monday January 30, [EMAIL PROTECTED] wrote:
On Jan 30, 2006, at 20:10, Neil Brown wrote:
On Monday January 30, [EMAIL PROTECTED] wrote:
Any feeling how best to do that? My current thinking is to export
a flags entry in addition to the current ones, presumably based
on struct
On Friday January 27, [EMAIL PROTECTED] wrote:
# mdadm --create --verbose /dev/md4 --level=1 --raid-devices=2 /dev/hde3
/dev/hdg3
mdadm: /dev/hde3 appears to contain an ext2fs file system
size=35214480K mtime=Thu Jan 26 13:38:20 2006
mdadm: /dev/hdg3 appears to contain an ext2fs file
On Wednesday January 25, [EMAIL PROTECTED] wrote:
Hi,
I am using an 8-disc SCSI raid5 array on a Fedora 3 system.
After a system crash, while the array was rebuilding a missing HD,
I am now unable to get this array running again.
That's all I get (/dev/sdr1 is not the one which was
On Monday January 23, [EMAIL PROTECTED] wrote:
- Original Message -
From: Neil Brown [EMAIL PROTECTED]
To: JaniD++ [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Thursday, January 19, 2006 3:09 AM
Subject: Re: mdadm options, and man page
On Tuesday January 17, [EMAIL
On Monday January 23, [EMAIL PROTECTED] wrote:
Here is my /etc/mdadm/mdadm.conf file
DEVICE /dev/sd[abcdefghijkl] /dev/md1 /dev/md2 /dev/md3 /dev/md4
/dev/md5 /dev/md6
ARRAY /dev/md1 devices=/dev/sda,/dev/sdb
ARRAY /dev/md2 devices=/dev/sdc,/dev/sdd
ARRAY /dev/md3
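Rather than writing ARRAY lines by hand, mdadm can generate them from the
running arrays (or from the on-disk superblocks with --examine --scan) and
the output can be appended to the config file:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf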
On Saturday January 21, [EMAIL PROTECTED] wrote:
Neil Brown ([EMAIL PROTECTED]) wrote on 18 January 2006 09:47:
On Tuesday January 17, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
In general, I think increasing the connection between the filesystem
and the volume manager/virtual
On Thursday January 19, [EMAIL PROTECTED] wrote:
The patch you gave me worked, Neil, but then I ran into an odd problem where
the change is lost after a reboot.
I have raid1 devices for both superblocks 0.90.? and 1.0
If I update the superblock 1.0 device, mdadm --detail shows the update,
On Saturday January 21, [EMAIL PROTECTED] wrote:
NeilBrown [EMAIL PROTECTED] wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then
On Friday January 20, [EMAIL PROTECTED] wrote:
Though now that I look at it, don't we have a circular reference
here? Let me quote the code section, which starts off with where I was
confused:
..
Now we seem to end up with:
mddev->private = conf;
conf->mddev = mddev;
This
On Sunday January 22, [EMAIL PROTECTED] wrote:
hi,
is there any way to force a raid5 array back together? We have a 7+1
disk raid5 array (with 7 disks and 1 spare), but it failed (probably
because of heat). So currently it's 4+3 :-( Is there any way to force
mdadm to put together the
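The usual answer for an array that lost several members to a transient
problem (heat, controller, cabling) is a forced assembly, which marks the
most recently failed members as clean again. A sketch, with device names as
placeholders:
# mdadm --assemble --force /dev/md0 /dev/sd[a-h]1
Data written while those members were missing may be inconsistent, so an
fsck afterwards is prudent.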
On Monday January 23, [EMAIL PROTECTED] wrote:
NeilBrown wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then re-laying
out all of the
On Sunday January 22, [EMAIL PROTECTED] wrote:
Hello Neil ,
On Mon, 23 Jan 2006, Neil Brown wrote:
On Monday January 23, [EMAIL PROTECTED] wrote:
NeilBrown wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement
On Saturday January 21, [EMAIL PROTECTED] wrote:
Hi there, I recently had a disk go bad in a linear RAID built with
mdadm. The particular disk that failed was the last device of the
RAID. I am curious about how devices are utilized in a linear RAID.
Would the md be filled sequentially from
On Thursday January 19, [EMAIL PROTECTED] wrote:
I'm currently of the opinion that dm needs a raid5 and raid6 module
added, then the user land lvm tools fixed to use them, and then you
could use dm instead of md. The benefit being that dm pushes things
like volume autodetection and
On Thursday January 19, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
What other things like were you thinking of.
Oh, I suppose that's true
On Wednesday January 18, [EMAIL PROTECTED] wrote:
Mark Hahn wrote:
They seem to suggest RAID 0 is faster for reading than RAID 1, and I
can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first disk,
On Wednesday January 18, [EMAIL PROTECTED] wrote:
personally, I think this is useful functionality, but my personal
preference is that this would be in DM/LVM2 rather than MD. but given
Neil is the MD author/maintainer, I can see why he'd prefer to do it in
MD. :)
Why don't MD and DM
for any help.
This is exactly the correct forum.
NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff ./drivers/md/md.c~current~ ./drivers/md/md.c
--- ./drivers/md/md.c~current~ 2006-01-17 17
On Wednesday January 18, [EMAIL PROTECTED] wrote:
2006/1/18, Mario 'BitKoenig' Holbe [EMAIL PROTECTED]:
Mario 'BitKoenig' Holbe [EMAIL PROTECTED] wrote:
scheduled read-requests. Would it probably make sense to split one
single read over all mirrors that are currently idle?
Ah, I got it
On Wednesday January 18, [EMAIL PROTECTED] wrote:
On Wed, 18 Jan 2006, John Hendrikx wrote:
I agree with the original poster though, I'd really love to see Linux
Raid take special action on sector read failures. It happens about 5-6
times a year here that a disk gets kicked out of the
On Wednesday January 18, [EMAIL PROTECTED] wrote:
I agree with the original poster though, I'd really love to see Linux
Raid take special action on sector read failures. It happens about 5-6
times a year here that a disk gets kicked out of the array for a simple
read failure. A rebuild
On Wednesday January 18, [EMAIL PROTECTED] wrote:
hi,
I have a silly question. Why will md request buffers not
cross devices? That is, why will a bh only be located on a single
storage device? I guess maybe the file system has aligned the bh? Who
can tell me the exact reasons? Thanks a lot!
On Tuesday January 17, [EMAIL PROTECTED] wrote:
Hello Neil ,
On Tue, 17 Jan 2006, NeilBrown wrote:
Greetings.
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or
On Tuesday January 17, [EMAIL PROTECTED] wrote:
NeilBrown == NeilBrown [EMAIL PROTECTED] writes:
NeilBrown Previously the array of disk information was included in
NeilBrown the raid5 'conf' structure which was allocated to an
NeilBrown appropriate size. This makes it awkward to change
On Tuesday January 17, [EMAIL PROTECTED] wrote:
On Jan 17, 2006, at 06:26, Michael Tokarev wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the linux softraid subsystem, and want
it to be reliable. Adding more features, especially
On Tuesday January 17, [EMAIL PROTECTED] wrote:
NeilBrown wrote (ao):
+config MD_RAID5_RESHAPE
Would this also be possible for raid6?
Yes. That will follow once raid5 is reasonably reliable. It is
essentially the same change to a different file.
(One day we will merge raid5 and raid6
On Tuesday January 17, [EMAIL PROTECTED] wrote:
I'm wondering: how well does md currently make use of the fact there
are multiple devices in the different (non-parity) RAID levels for
optimising reading and writing?
It does the best it can. Every request from the filesystem goes
directly to
On Tuesday January 17, [EMAIL PROTECTED] wrote:
I was
also under the impression that md was going to be phased out and
replaced by the device mapper.
I wonder where this sort of idea comes from
Obviously individual
On Monday January 16, [EMAIL PROTECTED] wrote:
Hi,
I'm experiencing a problem on a 2.2.16C37_III driven Cobalt RaQ4
after I add a new 2nd disk to a RAID1.
2.2.16 that's old, isn't it!
raid1 was only ever available as external patches for 2.2 kernels...
I'm uncertain whether this
On Sunday January 15, [EMAIL PROTECTED] wrote:
I upgraded a system from FC2 to FC4,
The system had a working raidtools linear raid partition
(md0 = sda1 and sdb1)
I now need to get the partition going using mdadm, while maintaining the data.
I used raidtabtomdadm.sh to create
On Wednesday January 11, [EMAIL PROTECTED] wrote:
Any suggestions would be greatly appreciated. The system's new and not
yet in production, so I can reinstall it if I have to, but I'd prefer to
be able to fix something as simple as this.
Debian's installer - the mkinitrd part in
On Monday January 2, [EMAIL PROTECTED] wrote:
5. The question:
Why does sdh2 show as a spare?
The MD array size is correct.
And I can really see that all the drives are reading, and sdh2 is *ONLY* writing.
man mdadm
Towards the end of the CREATE MODE section:
When creating a RAID5 array,
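The passage being referred to explains, roughly, that mdadm creates a RAID5
array as a degraded array plus one spare and then rebuilds onto that spare,
because that is faster than a full parity resync; so the last device showing
as a spare is expected until the initial recovery finishes. Progress can be
watched with:
# cat /proc/mdstat
# mdadm --detail /dev/md0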
On Thursday January 12, [EMAIL PROTECTED] wrote:
2.6.15-mm3 hangs during boot for me, after the lines
md4: bitmap initialized from disk: read 15/15 pages, set 51 bits, status: 0
created bitmap (224 pages) for device md4
ctrl-alt-del to reboot works sometimes (2 out of
On Monday December 19, [EMAIL PROTECTED] wrote:
- Original Message -
From: Neil Brown [EMAIL PROTECTED]
To: JaniD++ [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Monday, December 19, 2005 1:57 AM
Subject: Re: RAID5 resync question BUGREPORT!
How big is your array
On Tuesday November 22, [EMAIL PROTECTED] wrote:
I have already tried all the available options, including readahead at every
layer (results in earlier mails), and chunksize.
But with these settings, I cannot work around this.
And the result is incomprehensible to me!
The raid0 performance is not
On Tuesday December 20, [EMAIL PROTECTED] wrote:
I'm seeing hard system lockups with 2.6.15-rc5 when trying to use a
RAID-6 array as a PV for LVM2.
I've got four SATA disks hanging off a Marvell 6081 controller. The disks
work great when I access them raw (without going through md or dm).
On Sunday December 18, [EMAIL PROTECTED] wrote:
Why doesn't the raid (md) device have a scheduler in sysfs?
And if it has a scheduler, where can I tune it?
raid0 doesn't do any scheduling.
All it does is take requests from the filesystem, decide which device
they should go to (possibly splitting
On Friday December 16, [EMAIL PROTECTED] wrote:
- How would one switch from the latter to the former? Is there
something like grow_to_RAID_6?
No... at least not yet
Hm. :) Around here, it is pretty much Christmas time, with all the
wishing going on - hint, hint? ;)
Growing
On Thursday December 15, [EMAIL PROTECTED] wrote:
Hello,
I tried to compile mdadm 2.2 on my machine (sles8)
Kernel: 2.4.21-295-athlon
gcc: gcc version 2.95.3 20010315
The compile fails with this error message:
cc1: warnings being treated as errors
In file included from mdadm.h:219,
On Thursday December 15, [EMAIL PROTECTED] wrote:
Hello,
I just downloaded the mdadm 2.2 version. It seems that there are some
new options which are not explained in detail. Of special interest are
2 options.
-GROW
The new grow mode is very, very helpful for us. It would allow us to
On Thursday December 15, [EMAIL PROTECTED] wrote:
delurk
On 15.12.2005 21:46, Brad Campbell wrote:
Callahan, Tom wrote:
It is always wise to build in a spare however, that being said about all
raid levels. In your configuration, if a disk fails in your RAID5, your
array will go
On Thursday December 15, [EMAIL PROTECTED] wrote:
I can, but I suck at putting these things down in writing, which is
why my initial description was intentionally vague and shallow :-).
Now, I'll try anyway. It'll be imprecise and I'll miss some things
and show up late with major points
On Saturday December 10, [EMAIL PROTECTED] wrote:
I am sorry that there was not much information.
I am currently using the ext3 file system.
I composed a RAID-1 (/dev/md0) disk array in the order shown below.
mkfs.ext3 -j /dev/sda1
mkfs.ext3 -j /dev/sda1
mdadm -Cv /dev/md0 --level=1
On Saturday December 10, [EMAIL PROTECTED] wrote:
Should I format after making the RAID-1 (/dev/md0) device?
Yes. Formatting (mkfs) must come AFTER making the RAID-1 (mdadm -C).
After I format the disks (/dev/sda1, /dev/sdb1), shouldn't I then compose the
RAID-1?
Don't format sda1 or sdb1. Compose the RAID-1
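In other words, a minimal sketch of the intended order (device names taken
from the earlier mail):
# mdadm -Cv /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mkfs.ext3 -j /dev/md0
# mount /dev/md0 /mnt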
On Friday December 9, [EMAIL PROTECTED] wrote:
Hi,
I found that there's a new sysfs stripe_cache_size variable. I want to
know how it affects RAID5 read/write performance (if any)?
Please cc to me if possible, thanks.
Would you like to try it out and see?
Any value from about 10 to a
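The variable lives in sysfs per array, so experimenting is cheap; a sketch,
assuming the array is md0:
# cat /sys/block/md0/md/stripe_cache_size
# echo 512 > /sys/block/md0/md/stripe_cache_size
Roughly speaking, memory use is the value times PAGE_SIZE times the number
of devices, so very large values are not free.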
On Friday December 9, [EMAIL PROTECTED] wrote:
On Fri, 9 Dec 2005, Neil Brown wrote:
On Friday December 9, [EMAIL PROTECTED] wrote:
Hi,
I found that there's a new sysfs stripe_cache_size variable. I want to
know how does it affect RAID5 read / write performance (if any
On Saturday December 10, [EMAIL PROTECTED] wrote:
Hi Brown.
I have raid arrays: a raid1 called md0. Basically they run fine, but
something is switching md0 to read-only during writes to disk (cp, mv).
In what cases does the RAID get switched to read-only? Please let me know
and describe those cases.
I am
On Friday December 9, [EMAIL PROTECTED] wrote:
Hello, Neil,
[EMAIL PROTECTED] mdadm-2.2]# mdadm --grow /dev/md0 --bitmap=internal
mdadm: Warning - bitmaps created on this kernel are not portable
between different architectured. Consider upgrading the Linux kernel.
Dec 8 23:59:45
On Wednesday December 7, [EMAIL PROTECTED] wrote:
Neil,
could you elaborate on which kernel/mdadm version this bug was fixed in?
I'm having the same problem myself.
It was fixed in 2.6.12-rc5
The mddev->queue->unplug function was being set too early in 'run', and
if run failed, it would still be set, so
- which will be in 2.6.15 - fixes the problem.
Thanks for your testing.
NeilBrown
---
Fix locking problem in r5/r6
bitmap_unplug actually writes data (bits) to storage, so we
shouldn't be holding a spinlock...
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat
On Thursday December 8, [EMAIL PROTECTED] wrote:
I got this when I attempted to rpmbuild -ba mdadm.spec:
In file included from super0.c:31:
/usr/include/asm/byteorder.h:6:2: error: #warning using private kernel
header; include <endian.h> instead!
make: ***
On Thursday December 8, [EMAIL PROTECTED] wrote:
Not really on-topic, but has anyone else gotten ext2online to work? I
get this Dec 8 21:27:02 istanbul kernel: JBD: ext2online wants too
many credits (2050 > 2048) after letting it attempt to extend the FS to
cover the entire device.
I
On Tuesday December 6, [EMAIL PROTECTED] wrote:
On 12/6/05, Neil Brown [EMAIL PROTECTED] wrote:
If the raid is ok in degraded (missing 1 drive) mode, shouldn't I be
able to mount it?
Yes you should.
The fact that you cannot tends to suggest something wrong at a
hardware level
On Tuesday December 6, [EMAIL PROTECTED] wrote:
I know there is some chance of leaving some incorrect parity information on
the array, but it may be corrected by the next write.
Or it may not be corrected by the next write. The parity-update
algorithm assumes that the parity is correct.
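The reasoning: a small write on raid5 normally takes the read-modify-write
shortcut, which derives the new parity from the old parity rather than from
all of the data blocks:
new_parity = old_parity XOR old_data XOR new_data
so an already-incorrect parity block is carried forward unchanged. Only a
full-stripe write or an explicit resync recomputes parity from all the data
blocks and would repair it.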
On Tuesday December 6, [EMAIL PROTECTED] wrote:
Hello,
I'm currently trying to understand the flow of the I/O in Linux raid1
devices in regard to superblock updates and resyncs on machine crashes.
I looked at the source (2.6 kernel) and made some guesses about the
working of the raid1
On Tuesday December 6, [EMAIL PROTECTED] wrote:
Hello, I have been trying to figure out how to fix my raid system,
SUSE 9.3, linux 2.6.11.4-21.9-default. A hard reset put my raid in
unstable state, almost same errors as
On Monday December 5, [EMAIL PROTECTED] wrote:
Written by hand:
Null pointer dereference
last sysfs file /block/md0/md/sync_action
put_page
ohh dear, it seems that, unlike kfree, put_page doesn't like 'NULL' as
an argument.
You can try
--
diff ./mm/swap.c~current~
' as an argument, though
free_page does (but it wants an address I think...).
But I have since changed this code to use put_page, and put_page
doesn't like NULL either..
Would you accept:
--
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./mm/swap.c |2 ++
1 file
On Thursday December 1, [EMAIL PROTECTED] wrote:
NeilBrown [EMAIL PROTECTED] wrote:
+ paddr = kmap_atomic(page, KM_USER0);
+ memset(paddr + offset, 0xff,
PAGE_SIZE - offset);
This page which is being
On Tuesday December 6, [EMAIL PROTECTED] wrote:
Hello, list,
Is there a way to force the raid to skip this type of resync?
Why would you want to?
The array is 'unclean', presumably due to a system crash. The parity
isn't certain to be correct so your data isn't safe against a device
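There is no safe way to simply skip the resync of an unclean array, but the
usual way to make it cheap is a write-intent bitmap (on a kernel and mdadm
that support it), so that after a crash only the regions marked dirty are
resynced:
# mdadm --grow /dev/md0 --bitmap=internal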
On Tuesday December 6, [EMAIL PROTECTED] wrote:
- Original Message -
From: Neil Brown [EMAIL PROTECTED]
To: JaniD++ [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Tuesday, December 06, 2005 1:32 AM
Subject: Re: RAID5 resync question
On Tuesday December 6, [EMAIL
On Sunday December 4, [EMAIL PROTECTED] wrote:
Hi,
I have a RAID5 array consisting of 4 disks:
/dev/hda3
/dev/hdc3
/dev/hde3
/dev/hdg3
and the Linux machine that this system was running on crashed yesterday
due to a faulty Kernel driver (i.e. the machine just halted).
So I reset
On Saturday December 3, [EMAIL PROTECTED] wrote:
In mail.linux.raid I write:
Other than the superblocks, does 'mdadm --create' harm existing data?
It would seem not according to this test:
If resync starts (as it normally does with mdadm --create), and if you
assembled the array 'wrongly',
On Saturday December 3, [EMAIL PROTECTED] wrote:
I hope this is the correct list for this question.
linux-raid@vger.kernel.org is possibly better, but linux-kernel is
probably a suitable catch-all.
I've just recently begun using mdadm to set up some
arrays using large drives (300-400Gb).
On Friday December 2, [EMAIL PROTECTED] wrote:
On Thu, 1 Dec 2005, Neil Brown wrote:
What I would really like is a cheap (Well, not too expensive) board
that had at least 100Meg of NVRAM which was addressable on the PCI
bus, and an XOR and RAID-6 engine connected to the DMA engine
On Thursday December 1, [EMAIL PROTECTED] wrote:
I'd probably be happy to consider the 'verified read' enhancements to
md for inclusion in mainline.
A couple more thoughts on this:
1/ What do you do for a 'verified read' when the array is degraded.
For raid1, you can just check whichever
On Sunday December 4, [EMAIL PROTECTED] wrote:
Neil - did you get a chance to look at the syslog and text
messaging patches I posted?
Didn't I reply to those?... No, I guess I didn't. Thanks for the
reminder.
The text-messaging I don't like. That is what the --program option is
for. If
On Saturday December 3, [EMAIL PROTECTED] wrote:
I am testing my Promise SATA-II-150-TX4.
I ran 'e2fsck -f' for a few hours and all is well.
I then wanted some writing too, so I changed to 'e2fsck -cc'.
I am seeing the following messages. Is this a problem?
It seems to continue unharmed
On Friday December 2, [EMAIL PROTECTED] wrote:
Hi,
I had a RAID-1 with two mirrors as my root partition. I decided to
fail/remove one of the mirrors prior to upgrading parts of the OS so
that I'd have a backup just in case things went horribly wrong. After
doing the upgrade, I wanted to
On Friday December 2, [EMAIL PROTECTED] wrote:
OK! Now I can fsck -n and see how bad things are.
Hmmm. sounds like you had fun!!! :-)
A feature request would be for a way to force mdadm to use a device in a
certain slot regardless of what the superblock says.
The recommended approach
I am pleased to announce the availability of
mdadm version 2.2
It is available at the usual places:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/
mdadm is a tool for creating, managing and monitoring
device arrays
On Thursday December 1, [EMAIL PROTECTED] wrote:
Ok, thanks for your answer.
Then I suppose I will have to wait a bit.
If I can be of any help for testing and hacking purpose, please let me know.
I think I will really *need* this feature by the end of February,
2006. Do you think I have any
On Friday December 2, [EMAIL PROTECTED] wrote:
Which generates errors when I try and copy off large amounts of data:
About ten of these:
ata1: translated ATA stat/err 0x25/00 to SCSI SK/ASC/ASCQ 0x4/00/00
ata1: status=0x25 { DeviceFault CorrectedError Error }
On Friday December 2, [EMAIL PROTECTED] wrote:
Thank you for the feedback Neil.
Although, your last comment did confuse me a little...run what in
parallel? Should I be running badblocks against the unassembled
components of the raid and then doing something like:
fsck -l
On Wednesday November 30, [EMAIL PROTECTED] wrote:
NeilBrown ([EMAIL PROTECTED]) wrote on 28 November 2005 10:40:
This is a simple port of matching functionality across from raid5.
If we get a read error, we don't kick the drive straight away, but
try to over-write with good data first.
On Tuesday November 29, [EMAIL PROTECTED] wrote:
Hi Neil,
Glad to see this patch is making its way to mainline. I have a couple of
questions on the patch, though...
Thanks for reviewing the code - I really value that!
NeilBrown wrote:
+ if (uptodate || conf->working_disks <= 1) {
On Tuesday November 29, [EMAIL PROTECTED] wrote:
The time and speed display for resync is wrong, the recovery numbers are fine.
The resync is actually running at a few MB/sec.
md1 : active raid6 sdn1[8](S) sde1[9] sdq1[0] sdu1[6] sdo1[5] sdaa3[4]
sdab1[2] sds1[1]
1757815296 blocks
On Tuesday November 22, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
[]
I would like it to take an argument in contexts where --bitmap was
meaningful (Create, Assemble, Grow) and not where --brief is
meaningful (Examine, Detail). but I don't know if getopt_long will
allow the 'short_opt
On Tuesday November 22, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
I would like it to take an argument in contexts where --bitmap was
meaningful (Create, Assemble, Grow) and not where --brief is
meaningful (Examine, Detail). but I don't know if getopt_long will
allow the 'short_opt
On Friday November 18, [EMAIL PROTECTED] wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi!
I just upgraded to mdadm-2.1 from mdadm-1.12.0 and noticed
that the following command (which is even mentioned in the
manual page) doesn't work anymore:
[EMAIL PROTECTED]:~ {838} $ mdadm
On Monday November 21, jeff@jab.org wrote:
Hi all,
Debian is a little slow tracking mdadm, and currently ships version
1.9 in unstable. Of course, I want to try out the fancy new features
in mdadm 2.1 to match my shiny new 2.6.14 (Debian stock) Linux kernel.
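Building the current mdadm from Neil's source tree is usually the quickest
route on a distribution that lags behind; a sketch (the exact tarball name
is an assumption):
# wget http://www.cse.unsw.edu.au/~neilb/source/mdadm/mdadm-2.1.tgz
# tar xzf mdadm-2.1.tgz
# cd mdadm-2.1
# make && make install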
On Saturday November 19, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
The other is to use a filesystem that allows the problem to be avoided
by making sure that the only blocks that can be corrupted are dead
blocks.
This could be done with a copy-on-write filesystem that knows about
On Saturday November 19, [EMAIL PROTECTED] wrote:
I've just installed a new server with a 5-disk raid5 and kernel
2.6.14.2. To check something I did a hard reset without shutdown and
on reboot the machine didn't do any automatic resync; all arrays are
shown clean.
Is this some automatic
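Whether the arrays really are consistent can be checked from the
superblocks, and on kernels that provide it a scrub can be started by hand
through sysfs (the 'check' action may not exist on this kernel version):
# mdadm --detail /dev/md0
# echo check > /sys/block/md0/md/sync_action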
On Friday November 18, [EMAIL PROTECTED] wrote:
So, I continue to believe silent corruption is mythical. I'm still open
to good explanation it's not though.
Silent corruption is not mythical, though it is probably talked about
more than it actually happens (but then as it is silent, I
On Thursday November 17, [EMAIL PROTECTED] wrote:
Hello there,
I just tried to compile package raidtools-1.00.3-234 with the Intel C
compiler.
It said
scsi.c(277): warning #175: subscript out of range
The source code is
sg_dev->revision[5] = '\0';
but
On Monday November 14, [EMAIL PROTECTED] wrote:
NeilBrown [EMAIL PROTECTED] wrote:
Despite the fact that md threads don't need to be signalled, and won't
respond to signals anyway, we need to have an 'interruptible' wait,
else they stay in 'D' state and add to the load average.