The ioctl requires CAP_SYS_ADMIN, so sysfs should too.
Note that we don't require CAP_SYS_ADMIN for reading
attributes even though the ioctl does. There is no reason
to limit the read access, and much of the information is
already available via /proc/mdstat.
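As a minimal sketch of the idea (the handler name and body here are
illustrative, not the actual md.c code): the sysfs store path is gated
on CAP_SYS_ADMIN to match the ioctl, while the show path performs no
such check.

static ssize_t
md_attr_store(struct kobject *kobj, struct attribute *attr,
              const char *page, size_t length)
{
        if (!capable(CAP_SYS_ADMIN))    /* writes need the same privilege as the ioctl */
                return -EACCES;
        /* ... parse 'page' and apply the change to the array ... */
        return length;
}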
cc: Chris Wright [EMAIL PROTECTED]
Following are 12 assorted small patches for md.
They are against 2.6.17-mm2 and are suitable for inclusion in 2.6.18.
They are primarily small bug fixes, many fixing possible races, some
of which have been seen in the wild, some not.
Thanks,
NeilBrown
[PATCH 001 of 12] md: Possible fix for
The comment gives more details, but I didn't quite have the
sequencing right, so there was room for races to leave bits
unset in the on-disk bitmap for short periods of time.
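For context, the ordering constraint at stake is the usual
write-intent-bitmap one; the following is only a conceptual sketch
using md's bitmap helpers (with 'mddev', 'sector', 'wbio' and
'uptodate' assumed in scope), not the raid5.c change itself.

        /* The on-disk bit must cover a region before the data write to it
         * is released, and may only be cleared once that write has ended. */
        bitmap_startwrite(mddev->bitmap, sector, STRIPE_SECTORS, 0);
        bitmap_unplug(mddev->bitmap);     /* push dirty bitmap pages to disk first */
        generic_make_request(wbio);       /* then let the data write go out */
        /* in the write-completion path: */
        bitmap_endwrite(mddev->bitmap, sector, STRIPE_SECTORS, uptodate, 0);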
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid5.c | 30
In some places we use numbers (0660), in other places names (S_IRUGO).
Change all the numbers to names, and change 0655 to
what it should be.
Also make some formatting more consistent.
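For illustration, the kind of substitution being described, using the
S_I* names from <linux/stat.h> (the attribute name 'foo' here is made
up, not one of md's real attributes):

        static struct md_sysfs_entry md_foo =
        __ATTR(foo, S_IRUGO|S_IWUSR, foo_show, foo_store);
        /* instead of the numeric form:
         *   __ATTR(foo, 0644, foo_show, foo_store);
         */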
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c | 56
I have reports of a problem with raid5 which turns out to be because
the raid5 device gets stuck in a 'plugged' state. This shouldn't be
able to happen: 3 msec after it gets plugged it should get unplugged.
However it happens nonetheless. This patch fixes the problem and is
a reasonable
When an array is started we start one or two threads (two if
there is a reshape or recovery that needs to be completed).
We currently start these *before* the array is completely set up and
in particular before queue->queuedata is set. If the thread
actually starts very quickly on another CPU,
When a device is unplugged, requests are moved from one or two
(depending on whether a bitmap is in use) queues to the main
request queue.
So whenever requests are put on either of those queues, we should make
sure the raid5 array is 'plugged'.
However we don't. We currently plug the raid5
This is generally useful, but particularly helps see if it is
the same sector that always needs correcting, or different ones.
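As a rough sketch of what such a report can look like (the helper and
exact message are illustrative, not the precise printk added by the
patch; mddev_t/mdk_rdev_t are the typedefs of this era):

        static void report_correction(mddev_t *mddev, mdk_rdev_t *rdev, sector_t sect)
        {
                char b[BDEVNAME_SIZE];

                printk(KERN_INFO "%s: read error corrected (sector %llu on %s)\n",
                       mdname(mddev), (unsigned long long)sect,
                       bdevname(rdev->bdev, b));
        }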
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid1.c |3 +++
./drivers/md/raid10.c |4
./drivers/md/raid5.c | 30
Though it rarely matters, we should be using 's' rather than
r1_bio->sector here.
### Diffstat output
./drivers/md/raid1.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers/md/raid1.c ./drivers/md/raid1.c
--- .prev/drivers/md/raid1.c	2006-06-27 12:15:16.0
We introduced 'io_sectors' recently so we could count
the sectors that cause IO during resync separately from sectors
which don't cause IO - there can be a difference if a bitmap
is being used to accelerate resync.
However when a speed is reported, we find the number of sectors
processed
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/raid5.c |1 -
1 file changed, 1 deletion(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c	2006-06-27 12:17:33.0 +1000
+++ ./drivers/md/raid5.c	2006-06-27
This has to be done in ->load_super, not ->validate_super
Without this, hot-adding devices to an array doesn't always
work right - though there is a work around in mdadm-2.5.2 to
make this less of an issue.
### Diffstat output
./drivers/md/md.c |6 +-
1 file changed, 5 insertions(+), 1
If a partitionable array is used, we should say e.g.
Will configure md_d0 (super-block) from
rather than
Will configure md0 (super-block) from
which implies non-partitionable.
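A small userspace sketch of how the name could be selected (the
function and variable names are assumptions for illustration, not the
exact code in do_mounts_md.c):

        #include <stdio.h>

        static void announce(int partitioned, int minor, const char *devname)
        {
                /* partitionable arrays are md_d<N>, classic arrays md<N> */
                printf("md: Will configure md%s%d (super-block) from %s, below.\n",
                       partitioned ? "_d" : "", minor, devname);
        }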
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./usr/kinit/do_mounts_md.c |4 ++--
On Jun 27, 2006 11:23 +1000, Neil Brown wrote:
You could try hunting for ext3 superblocks on the device. There might
be an easier way but
od -x /dev/hdb | grep '^.60 ef53 '
Compile the findsuper tool from e2fsprogs - it was made for this. Also
possible to use
On 26 Jun 2006, Neil Brown said:
On Tuesday June 20, [EMAIL PROTECTED] wrote:
For some time, mdadm's been dumping core on me in my uClibc-built
initramfs. As you might imagine this is somewhat frustrating, not least
since my root filesystem's in LVM on RAID. Half an hour ago I got around
to
On 25 Jun 2006, Chris Allen uttered the following:
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions each of 3TB. This way I can choose between XFS
and EXT3 later on.
So now, my options are between the following:
1. Single 12TB /dev/md0,
On Tue, 27 Jun 2006, Neil Brown prattled cheerily:
On Tuesday June 27, [EMAIL PROTECTED] wrote:
,----[ config.c:load_partitions() ]
| name = map_dev(major, minor, 1);
|
| d = malloc(sizeof(*d));
| d->devname = strdup(name);
`----
Ahh.. uhmmm... Oh yes. I've fixed that since, but
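For reference, a guarded version of the quoted snippet; this assumes
the crash came from map_dev() returning NULL for a device with no /dev
entry, and is only a sketch, not necessarily the fix that went into
mdadm:

        name = map_dev(major, minor, 1);

        d = malloc(sizeof(*d));
        d->devname = name ? strdup(name) : NULL;  /* avoid strdup(NULL) */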
Nix wrote:
On 25 Jun 2006, Chris Allen uttered the following:
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions each of 3TB. This way I can choose between XFS
and EXT3 later on.
So now, my options are between the following:
1. Single 12TB /dev/md0,
I am pleased to announce the availability of
mdadm version 2.5.2
It is available at the usual places:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
and
countrycode=xx.
http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/
and via git at
git://neil.brown.name/mdadm
On Friday, 23 June 2006 at 14:50, you wrote:
Strange that, whatever the filesystem, you get equal numbers of people
saying they have never lost a single byte and people who have had
horrible corruption and would never touch it again. We stopped using
XFS about a year ago because we were
On Tue, 27 Jun 2006, Neil Brown wrote:
On Monday June 26, [EMAIL PROTECTED] wrote:
This is what I get now, after creating with fdisk /dev/hdb1 and
/dev/hdc1 as linux raid autodetect partitions
So I'm totally confused now.
You said it was 'linear', but the boot log showed 'raid0'.
On Tue, 27 Jun 2006, Chris Allen wondered:
Nix wrote:
There is a third alternative which can be useful if you have a mess of
drives of widely-differing capacities: make several RAID arrays so as to
tessellate space across all the drives, and then pile an LVM on top of
all of them to
Anyone have an ETA on this? I heard soon but was wondering how soon..?
kernel-version-2.6.x
kernel-version-2.6.x/arcmsr
kernel-version-2.6.x/arcmsr/arcmsr.c
kernel-version-2.6.x/arcmsr/arcmsr.h
kernel-version-2.6.x/arcmsr/Makefile
kernel-version-2.6.x/readme.txt
The driver is quite small and
On Tue, 27 Jun 2006 14:05:54 -0400 (EDT) Justin Piszcz wrote:
Anyone have an ETA on this? I heard soon but was wondering how soon..?
kernel-version-2.6.x
kernel-version-2.6.x/arcmsr
kernel-version-2.6.x/arcmsr/arcmsr.c
kernel-version-2.6.x/arcmsr/arcmsr.h
Hello All, what change in glibc makes this necessary? Is there a
method available to include the getpwnam/getgrnam structures so that a
full static build will work? TIA, JimL
gcc -Wall -Werror -Wstrict-prototypes -ggdb -DSendmail=\/usr/sbin/sendmail -t\
On Fri, 23 Jun 2006, Molle Bestefich wrote:
Christian Pernegger wrote:
Anything specific wrong with the Maxtors?
I'd watch out regarding the Western Digital disks, apparently they
have a bad habit of turning themselves off when used in RAID mode, for
some reason:
Hello group,
I am upgrading my disks from old 18 Gig SCSI disks to 300 Gig SATA
disks. I need a good SATA controller. My system is old and has PCI V 2.1.
I need a 4 port card, or 2 2 port cards. My system has multi PCI buses, so
2 cards may give me better performance, but I don't need
Guy wrote:
Hello group,
I am upgrading my disks from old 18 Gig SCSI disks to 300 Gig SATA
disks. I need a good SATA controller. My system is old and has PCI V 2.1.
I need a 4 port card, or 2 2 port cards. My system has multi PCI buses, so
2 cards may give me better performance, but