On Tuesday March 8, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
It is writes, but don't be scared. It is just super-block updates.
In 2.6, the superblock is marked 'clean' whenever there is a period of
about 20ms of no write activity. This increases the chance that a
resync won't
On Tuesday March 8, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
Then after 20ms with no write, they are all marked 'clean'.
Then before the next write they are all marked 'active'.
As the event count needs to be updated every time the superblock is
modified, the event count
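A quick way to see the state and event count being described (purely illustrative; the device names are placeholders, not taken from this thread) is to examine a member device's superblock:

  # Show whether the superblock is marked clean or active, and its event count.
  mdadm --examine /dev/sda1 | egrep 'State|Events'

  # Compare event counts across all members; a member with a lower count is
  # stale, which is what forces a resync at the next assembly.
  for d in /dev/sd[abcd]1; do
      printf '%s: ' "$d"
      mdadm --examine "$d" | grep Events
  done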
On Saturday March 5, [EMAIL PROTECTED] wrote:
What might the proper [or functional] syntax be to do this?
I'm running 2.6.10-1.766-FC3, and mdadm 1.90.
It would help if you told us what you tried, as then we could possibly
give a more focussed answer, however:
mdadm --create /dev/md1
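The command above is cut off in the archive; a minimal complete invocation would look something like this (the level, device count and member partitions are placeholders, not the original poster's values):

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1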
On Tuesday March 1, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
Could you please confirm if there is a problem with
2.6.11-rc4-bk4-bk10
as reported, and whether it seems to be the same problem.
Ok.. are we all ready? I had applied your development patches to all my
vanilla
On Wednesday March 2, [EMAIL PROTECTED] wrote:
Is there any sound reason why this is not feasible? Is it just that
someone needs to write the code to implement it?
Exactly (just needs to be implemented).
NeilBrown
On Friday February 25, [EMAIL PROTECTED] wrote:
Turning on debugging in raid6main.c and md.c makes it much harder to hit, so
I'm assuming it is something
timing related.
raid6d --> md_check_recovery --> generic_make_request --> make_request -->
get_active_stripe
Yes, there is a real problem
On Friday February 18, [EMAIL PROTECTED] wrote:
Would you recommend applying this package
http://neilb.web.cse.unsw.edu.au/~neilb/patches/linux-devel/2.6/2005-02-18-00/patch-all-2005-02-18-00
to a 2.6.10 kernel?
No. I don't think it would apply.
That patch is mostly experimental stuff. Only
On Thursday February 17, [EMAIL PROTECTED] wrote:
NeilBrown wrote:
When an array is degraded, bits in the intent-bitmap are
never cleared. So if a recently failed drive is re-added, we only need
to reconstruct the blocks that are still reflected in the
bitmap.
This patch adds support
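A sketch of the workflow this enables, using bitmap syntax from later mdadm releases (device names are examples, not from the patch):

  # Add an internal write-intent bitmap to an existing array.
  mdadm --grow /dev/md0 --bitmap=internal

  # After a transient failure, re-add the same drive; only blocks whose bits
  # are still set in the bitmap need to be reconstructed.
  mdadm /dev/md0 --re-add /dev/sdc1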
),
but there are too many failed devices, the resync cannot complete
but will be retried indefinitely.
2/ if the array has too many failed drives to be usable and a spare is
available, reconstruction will be attempted, but cannot work. This
also is retried indefinitely.
Signed-off-by: Neil
On Monday February 14, [EMAIL PROTECTED] wrote:
Hi, Neil...
Hi.
I have used the MD driver for two years with Debian, and it runs perfectly.
Great!
The machine boots the new kernel and runs OK... but... if I (or another
process) make a change/write to the raid md system, the computer crashes
with the
On Monday February 14, [EMAIL PROTECTED] wrote:
Maybe I am confused, but if you use the whole disk, I would expect the whole
disk could be over-written! What am I missing?
I second that.
Once you do anything to a whole disk, whether you make an md array out of
it, or mkfs it, or anything else, you
I am pleased to announce the availability of
mdadm version 1.9.0
It is available at
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/
as a source tar-ball and (at the first site) as an SRPM, and as an RPM for i386.
On Friday February 4, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
Release 1.9.0 adds:
...
- --assemble --auto recognises 'standard' names and insists on using
the appropriate major/minor numbers for them.
Is this the problem I encountered when I added auto=md to my
On Monday January 31, [EMAIL PROTECTED] wrote:
Hi to all, md gurus!
Is there a way to edit the preferred minor of a stopped device?
mdadm --assemble /dev/md0 --update=super-minor /dev/
will assemble the array and update the preferred minor to 0 (from
/dev/md0).
However this won't work
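For reference, a complete form of the command being discussed (the member devices are placeholders):

  # Assemble the array as /dev/md0 and rewrite the preferred minor in each
  # member's superblock to match.
  mdadm --assemble /dev/md0 --update=super-minor /dev/sdb1 /dev/sdc1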
On Wednesday January 26, [EMAIL PROTECTED] wrote:
A useful trick I discovered yesterday: Add --auto to your mdadm commandline
and it will create the device for you if it is missing :)
Well, it seems that this machine is using the udev scheme for managing
device files. I didn't realize
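For example (flag value assumed, not quoted from the thread):

  # Have mdadm create the /dev/md0 device node itself if it is missing.
  mdadm --assemble --auto=yes /dev/md0 /dev/sda1 /dev/sdb1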
On Wednesday January 26, [EMAIL PROTECTED] wrote:
Is this bug that's fixed in 1.9.0 a bug when you create the array? i.e.
do we need to use 1.9.0 to create the array? I'm looking to do the same, but
my boot disk currently only has 1.7.something on it. Do I need to make a
custom bootcd with
On Tuesday January 25, [EMAIL PROTECTED] wrote:
Been trying for days to get a software RAID 0+1 setup. This is on SuSE
9.2 with kernel 2.6.8-24.11-smp x86_64.
I am trying to set up a RAID 0+1 with 4 250GB SATA drives. I do the
following:
mdadm --create /dev/md1 --level=0 --chunk=4
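A plausible completion of that 0+1 layout (device names are placeholders; the chunk size is the one quoted above):

  # Two RAID0 stripes...
  mdadm --create /dev/md1 --level=0 --chunk=4 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md2 --level=0 --chunk=4 --raid-devices=2 /dev/sdc1 /dev/sdd1
  # ...mirrored against each other.
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/md1 /dev/md2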
On Monday January 24, [EMAIL PROTECTED] wrote:
how can the existing raid setup be moved to the new disks
without data loss?
I guess it must be something like this:
1) physically remove first old drive
2) physically add first new drive
3) re-create partitions on new drive
4) run
On Tuesday January 25, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the
array with
mdadm --grow /dev/mdX --size=max
Neil,
Is this just for RAID1? OR will it work for RAID5 too?
--grow --size=max
should work
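A typical sequence, assuming every member has already been replaced with a larger device (the filesystem tool is just an example for ext2/3):

  # Grow the array to use all available space on the smallest member...
  mdadm --grow /dev/md0 --size=max
  # ...then grow the filesystem to match.
  resize2fs /dev/md0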
On Monday July 9, [EMAIL PROTECTED] wrote:
I was wondering what people thought of using raid 0+1 (a mirrored array
of raid0 stripes) vs. raid 1+0 (a raid0 array of mirrored disks). It
seems that either can sustain at least one drive failure and the
performance should be similar. Are there
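For comparison, the 1+0 arrangement can be sketched with mdadm like this (device names illustrative): two mirrored pairs striped together, so any single drive can fail, and two drives can fail as long as they are not in the same pair.

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2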
Linus,
I wonder if you would consider applying, or commenting on this patch.
It adds support for partitioning md devices. In particular, a new
major device is created (name==mdp, number assigned dynamically)
which provides for 15 partitions on each of the first 16 md devices.
I
On Monday June 25, [EMAIL PROTECTED] wrote:
Is there any way for the RAID code to be smarter when deciding
about those event counters? Does it have any chance (theoretically)
to _know_ that it shouldn't use the drive with event count 28?
My current thinking is that once a raid array becomes
On Tuesday June 26, [EMAIL PROTECTED] wrote:
Hi,
I currently have to salvage data from an ancient box that appears
to have run kernel 2.0.35. However, the system on that disk is
corrupted and won't boot any more (at least not on today's hardware).
It looks like the main data is on a RAID.
On Sunday June 24, [EMAIL PROTECTED] wrote:
Hi,
We used to (long ago, 2.2.x), whenever we got a write request for some
buffer,
search the buffer cache to see if additional buffers which belong to that
particular stripe are dirty, and then schedule them for writing as well, in
an
attempt
On Sunday June 24, [EMAIL PROTECTED] wrote:
Hi!
Neil Brown wrote:
On Thursday June 14, [EMAIL PROTECTED] wrote:
Dear All
I've just had a disk (sdc) fail in my raid5 array (sdb sdc sdd),
Great! A real live hardware failure! It is always more satisfying to
watch one
Linus,
it is possible to start an md array from the boot command line with,
e.g.
md=0,/dev/something,/dev/somethingelse
However only names recognised by name_to_kdev_t work here. devfs
based names do not work.
To fix this, the following patch moves the name lookup from __setup
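Example boot parameters (illustrative, e.g. lilo.conf append lines): the first form already works; the second, devfs-style form only works with this patch.

  append="md=0,/dev/hda1,/dev/hdc1"
  append="md=0,/dev/ide/host0/bus0/target0/lun0/part1,/dev/ide/host1/bus0/target0/lun0/part1"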
Linus,
This patch makes sure that all the printks in md.c print a message
starting with md: or md%d:.
The next step (not today) will be to reduce a lot of them to
KERN_INFO or similar as md is really quite noisy.
Also, two printks in raid1.c get prefixed with raid1:
This patch is
Linus, and fellow RAIDers,
This is the third in my three patch series for improving RAID5
throughput.
This one substantially lifts write throughput by leveraging the
opportunities for write gathering provided by the first patch.
With RAID5, it is much more efficient to write a whole
Linus,
There is a buggy BUG in the raid5 code.
If a request on an underlying device reports an error, raid5 finds out
which device that was and marks it as failed. This is fine.
If another request on the same device reports an error, raid5 fails
to find that device in its table (because
On Friday June 15, [EMAIL PROTECTED] wrote:
There appears to be a discrepancy between the true state of affairs on my
RAID partitions and what df reports;
[root /]# sfdisk -l /dev/hda
Disk /dev/hda: 38792 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks
Thank you for all the suggestions for names for mdctl.
We have
raidctl raidctrl
swraidctl
mdtools mdutils
mdmanage mdmgr mdmd:-) mdcfg mdconfig mdadmin
Mike Black suggested that it is valuable for related tools to
start with a common prefix so that command completion can be used
On Friday June 8, [EMAIL PROTECTED] wrote:
On Fri, 8 Jun 2001, Neil Brown wrote:
If you don't like the name mdctl (I don't), please suggest another.
How about raidctrl?
Possible... though I don't think it is much better. Longer to type too:-)
I kind of like having the md
On Thursday June 7, [EMAIL PROTECTED] wrote:
Hi Neil;
I am hoping you are going to tell me this is already solved,
but here goes...
Almost :-)
scenario:
hda4, hdb4, and hdc4 in a raid 5 with no hotspare.
With 2.4.3 XFS kernels, it seems that a raid 5 does not come
up
On Wednesday June 6, [EMAIL PROTECTED] wrote:
In the XFS kernel tree v2.4.3 w/ several patches,
we were unable to raidhotremove and subsequently
raidhotadd a spare without a reboot. It did not
matter if you had a new or the same hard disk. We then
tried the patch Ingo Molnar sent regarding
On Saturday June 2, [EMAIL PROTECTED] wrote:
I've moved from:
md=4,/dev/sdf5,/dev/sdg5
to:
md=4,/dev/scsi/host0/bus0/target30/lun0/part5,\
/dev/scsi/host0/bus0/target32/lun0/part5
And now get:
md: Unknown device name,\
Linus,
raid1 allocates a new buffer_head when passing a request down
to an underlying device.
It currently sets b_blocknr to b_rsector/(b_size>>9) from the
original buffer_head to parallel other uses of b_blocknr (i.e. it
being the number of the block).
However, if raid1 gets a
Linus,
The current raid1/raid5 resync code requests resync in units of 1k
(though the raid personality can round up requests if it likes).
This interacts badly with filesystems that do IO in 512 byte blocks,
such as XFS (because raid5 needs to use the same blocksize for IO and
resync).
The
On Wednesday May 16, [EMAIL PROTECTED] wrote:
(more patches to come. They will go to Linus, Alan, and linux-raid only).
This is the next one, which actually addresses the NULL Checking
Bug.
There are two places in the raid code which allocate memory
without (properly) checking for
Linus,
This isn't a bug fix, just a tidy up.
Currently, md_error - which is called when an underlying device detects
an error - takes a kdev_t to identify which md array is affected.
It converts this into a mddev_t structure pointer, and in every case,
the caller already has the desired
On Saturday April 28, [EMAIL PROTECTED] wrote:
Question: can you have one or more spare-disk entries in /etc/raidtab when
running a RAID1 set?
First answer: the Linux Software-RAID HOWTO says yes, and gives an example
of this in the section on RAID1 config in raidtab.
On Thursday March 29, [EMAIL PROTECTED] wrote:
Hi,
I have been doing some performance checks on my RAID 5 system.
Good.
The system is
5 Seagate Cheetahs X15
Linux 2.4.2
I am using IOtest 3.0 on /dev/md0
My chunk size is 1M...
When I do random reads of 64K blobs using one
On Wednesday March 21, [EMAIL PROTECTED] wrote:
My question is based upon prior experience working for Stratus Computer. At
Stratus it was impractical to go beat the disk drives with a hammer to cause
them to fail - rather we would simply use a utility to cause the disk driver
to begin to
On Monday March 19, [EMAIL PROTECTED] wrote:
Hi,
I have a RAID setup, 3 Compaq 4Gb drives running off an Adaptec 2940UW.
Kernel 2.2.18 with RAID-patches etc.
I have been trying out various options, doing some stress-testing etc.,
and I have now arrived at the following situation that I
On Monday March 19, [EMAIL PROTECTED] wrote:
I'm having trouble running a RAID1 root/boot mirror under 2.4.2. Works
fine on 2.2.14 though.
I'm running RH 6.2 with stock 2.2.14 kernel. Running RAID1 on a pair of
9.1 UW SCSI Barracudas as root/boot/lilo. md0 is / and md1 is 256M swap,
(I've taken Alan and Linus off the Cc list. I'm sure they have plenty
to read, and may even be on linux-raid anyway).
On Thursday March 15, [EMAIL PROTECTED] wrote:
I'm not too happy with the linux RAID5 implementation. In my
opinion, a number of changes need to be made, but I'm not sure how
On Sunday March 4, [EMAIL PROTECTED] wrote:
Hi folks,
I have a two-disk RAID 1 test array that I was playing with. I then
decided to hot add a third disk using ``raidhotadd''. The disk was added
to the array, but as far as I could see, the RAID software did not start
a reconstruction of
On Monday February 26, [EMAIL PROTECTED] wrote:
Actually, the LVM metadata is somewhat poorly laid out in this respect.
The metadata is at the start of the device, and on occasion is not even
sector aligned, AFAICS. Also, the PE structures, while large powers of
2 in size, are not
On February 26, [EMAIL PROTECTED] wrote:
I'm currently running a standard v2.2.17 kernel w/ the 'accompanying'
raid system (linear).
Having the following /etc/mdtab file:
/dev/md0 linear,4k,0,75f3bcd8 /dev/sdb1 /dev/sdc1 /dev/sdd10 /dev/sde1
/dev/sdf1 /dev/sdg1
And converted
On Sunday February 25, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
OK, this time I really want to know how this should be handled.
Well, it "should" be handled by re-writing various bits of raid code
to make it all work more easily, but without doing that it "coul
On Sunday February 25, [EMAIL PROTECTED] wrote:
Linux 2.4.1/RAIDtools2 0.90
I have 4 ide disks which have identical partition layouts.
RAID is working successfully; it's even booting RAID1.
I created a RAID5 set on a set of 4 partitions, which works OK.
I then destroyed that set and
On Wednesday February 21, [EMAIL PROTECTED] wrote:
Hello,
This is my first time playing with software raid so sorry if I sound dumb.
What I have is a remote device that only has one hard drive. There is no
ability for a second. Can I use the raidtools package to setup a raid-1
mirror on
On Friday February 9, [EMAIL PROTECTED] wrote:
Greetings,
I'm getting ready to put kernel 2.4.1 on my server at home. I have some
questions about the status of RAID in 2.4.1. Sorry to be dense but I
couldn't glean the answers to these questions from my search of the
mailing list.
1.
--- Neil Brown [EMAIL PROTECTED] wrote:
On Wednesday January 3,
[EMAIL PROTECTED]
wrote:
mkraid aborts with no useful error message on screen or
in the syslog. My /etc/raidtab is set up correctly
and I am using raidtools2 with kernel 2.2.18 with raid
Linus
here is a rather large patch for raid5 in 2.4.0-test13-pre3.
It is a substantial re-write of the stripe-cache handling code,
which is the heart of the raid5 module.
I have been sitting on this for a while so that others can test it
(a few have) and so that I would have had quite
Linus (et al)
The raid1 code has a concept of finding a "next available drive". It
uses this for read balancing.
Currently, this is implemented via a simple linked list that links
the working drives together.
However, there is no locking to make sure that the list does not get
modified
Linus (et al),
An md device needs to know if it is in-use so that it doesn't allow
raidstop while still mounted. Previously it did this by looking for
a superblock on the device. This is a bit inelegant and doesn't
generalise.
With this patch, it tracks opens and closes (get and
Linus (et al),
The raid code wants to be the sole accessor of any devices that are
combined into the array, i.e. it wants to lock those devices against
other use.
It currently tries to do this by creating an inode that appears to
be associated with that device.
This no longer has any
On Wednesday December 6, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
Could you try this patch and see how it goes?
Same result!
Ok... must be something else... I tried again to reproduce it, and
this time I succeeded.
The problem happens when you try to access the last 128k of a raid1
On Sunday December 3, [EMAIL PROTECTED] wrote:
I'm new to the raid under linux world, and had a question. Sorry if several
posts have been made by me previously, I had some trouble subscribing to the
list...
I successfully installed redhat 6.2 with raid 0 for two drives on a sun
ultra 1.
On Friday December 1, [EMAIL PROTECTED] wrote:
If I have all of MD as a module and autodetect raid enabled, do the MD
drives that the machine has get detected and set up
1) at boot
2) at module load
or
3) it doesn't
3. It doesn't.
Rationale: by the time you are loading a module, you have
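So with everything modular the arrays have to be started explicitly once the modules are loaded, e.g. from an initrd script (module and tool names here are the usual 2.4/raidtools-era ones, assumed rather than quoted; mdadm --assemble is the later equivalent):

  # Load the md core and the needed personality.
  modprobe md
  modprobe raid1
  # Start the array described in /etc/raidtab.
  raidstart /dev/md0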
On Friday December 1, [EMAIL PROTECTED] wrote:
-Original Message-
From: Carly Yang [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 01, 2000 2:42 PM
To: [EMAIL PROTECTED]
Subject: how to upgrade raid disks from redhat 4.1 to redhat 6.2
Dear Gregory
I have a system
Linus,
here is a patch for test12 which cleans up the initialisation of raid
personalities. I didn't include it in the previous raid init cleanup
because I hadn't figured out the inner mysteries of link order
completely. The linux-kbuild list helped there.
This patch arranges that each
On Tuesday November 28, [EMAIL PROTECTED] wrote:
Hi,
I'm forwarding the message to you guys because I got no answer from Ingo
Thanks
I would suggest always CCing [EMAIL PROTECTED]. I have
taken the liberty of CCing this reply there.
-- Forwarded message --
Date:
On Tuesday November 28, [EMAIL PROTECTED] wrote:
snip
If I understood correctly, bh->b_rsector is used to know if the sector
number of the request being processed is not inside the resync range.
In case it is, it sleeps waiting for the resync daemon. Otherwise, it can
send
On Tuesday November 28, [EMAIL PROTECTED] wrote:
On Tue, Nov 28, 2000 at 10:50:06AM +1100, Neil Brown wrote:
However, there is only one "unplug-all-devices"(*) call in the API
that a reader or write can make. It is not possible to unplug a
particular device, or be
Linus,
A couple of versions of this patch went into Alan's tree, but weren't
quite right. This one is minimal, but works.
The problem is that with the tidy-up of xor.o, it auto-initialises
itself, instead of being called by raid.o, and so needs to be linked
*before* md.o, as the
Linus,
I sent this patch to Alan a little while ago, but after ac4, so I
don't know if it went into his tree.
There is a bit of code at the front of raid5_sync_request which
calculates which block is the parity block for a given stripe.
However, to convert from a block number (1K units)
Linus,
There are currently two ways to get md/raid devices configured at boot
time.
AUTODETECT_RAID finds bits of raid arrays from partition types and
automagically connects them together.
MD_BOOT allows bits of raid arrays to be explicitly described on the
boot line.
Currently,
Linus,
md currently has two #defines which give a limit to the number of
devices that can be in a given raid array:
MAX_REAL (==12) dates back to the time before we had persistent
superblocks, and mostly affects raid0
MD_SB_DISKS (==27) is a characteristic of the newer persistent
Linus,
This is a resend of a patch that probably got lost a week or so ago.
(It is also more grammatically correct.)
If md.c has two raid arrays that need to be resynced, and they share
a physical device, then the two resyncs are serialised. However the
message printed says something
Hi,
I am considering using an ASUS CUR-DLS motherboard in a new
NFS/RAID server, and wonder if anyone has any experience to report
either with it, or with the Ultra-160 dual-bus SCSI controller that
it has - the LSI SYM 53c1010.
From what I can find in the kernel source, and from lsi
On Monday November 27, [EMAIL PROTECTED] wrote:
When md2 is finished then md1 is resynced. Shouldn't they do
resync at the same time?
I never saw "md: serializing resync,..." which I suspected to get because
md0 and md1 share the same physical disks.
My findings:
The md driver in
On Tuesday November 7, [EMAIL PROTECTED] wrote:
I have a question regarding the differences between the 2.2+RAID-patch
kernels and the 2.4-test kernels - I was wondering if there are any
differences between them.
For example, if I build systems with a 2.2.17+RAID and later install 2.4
Linus,
The following patch addresses a small number of bugs in raid1.c in
2.4.0-test10.
1/ A number of routines that are called from interrupt context used
spin_lock_irq / spin_unlock_irq
instead of the more appropriate
spin_lock_irqsave( ,flags) / spin_unlock_irqrestore(
On Wednesday September 27, [EMAIL PROTECTED] wrote:
I was just wondering if the issues with swap on a raid device and with using a
journaling fs on a raid device had been fixed in the latest 2.4.0-test
kernels?
Yes. md in 2.4 doesn't do interesting things with the buffer cache,
so swap and