Hi everyone,
One of my boxes crashed (with a hardware error, I think - CPU and
motherboard replacements are on their way). I booted it up on a
rescue disk (Fedora 8) to let the software raid sync up.
When it was running I noticed that one of the disks was listed as
dm-5 and ... uh-oh
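(For anyone who hits the same thing: the commands below are one way to see
what device-mapper has claimed - a sketch only; the device names are
examples, not taken from the original report.)

  dmsetup ls            # list device-mapper devices by name
  dmsetup table         # show the mapping table behind each one
  ls -l /dev/mapper/    # relate kernel names like dm-5 to mapper names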
On Jan 18, 2008, at 3:17 AM, Ask Bjørn Hansen wrote:
[ Uh, I just realized that I forgot to update the subject line as I
figured out what was going on; it's obviously not a software raid
problem but a multipath problem ]
One of my boxes crashed (with a hardware error, I think - CPU and
(This should be merged with fix-occasional-deadlock-in-raid5.patch)
As we don't call handle_stripe in make_request any more, we need to
clear STRIPE_DELAYED (previously done by handle_stripe) to ensure
that we test whether the stripe still needs to be delayed or not.
Signed-off-by: Neil Brown
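(Illustrative C fragment, not the actual patch hunk - just the idiom for
dropping the flag, assuming the usual struct stripe_head *sh from raid5.c:)

	/* ensure the stripe gets re-evaluated instead of staying delayed */
	clear_bit(STRIPE_DELAYED, &sh->state);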
Finish ITERATE_ to for_each conversion.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c           |    8 ++++++++
./include/linux/raid/md_k.h |   14 --------------
2 files changed, 8 insertions(+), 14 deletions(-)
diff .prev/drivers/md/md.c
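(A sketch of what the conversion looks like, based on the macro names in
md_k.h around that time; the printk body is made up for illustration.)

	char b[BDEVNAME_SIZE];

	/* before: the old macro took (mddev, rdev, tmp) */
	ITERATE_RDEV(mddev, rdev, tmp)
		printk("%s\n", bdevname(rdev->bdev, b));

	/* after: argument order now matches list_for_each_entry_safe() */
	rdev_for_each(rdev, tmp, mddev)
		printk("%s\n", bdevname(rdev->bdev, b));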
Currently, a given device is claimed by a particular array so
that it cannot be used by other arrays.
This is not ideal for DDF and other metadata schemes which have
their own partitioning concept.
So for externally managed metadata, just claim the device for
md in general, require that offset
Following are 4 patches for md.
The first two replace
md-allow-devices-to-be-shared-between-md-arrays.patch
which was recently removed. They should go at the same place in the
series, between
md-allow-a-maximum-extent-to-be-set-for-resyncing.patch
and
If you try to start an array for which the number of raid disks is
listed as zero, md will currently try to read metadata off any devices
that have been given. This was done because the value of raid_disks
is used to signal whether array details have been provided by
userspace (raid_disks > 0) or
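(A simplified sketch of that distinction - not the actual md code, and
read_metadata_from_devices() is a made-up name for the probing step:)

	if (mddev->raid_disks == 0)
		/* nothing from userspace: read superblocks off the devices */
		err = read_metadata_from_devices(mddev);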
On Fri, 18 Jan 2008, Bill Davidsen wrote:
Justin Piszcz wrote:
On Thu, 17 Jan 2008, Al Boldi wrote:
Justin Piszcz wrote:
On Wed, 16 Jan 2008, Al Boldi wrote:
Also, can you retest using dd with different block-sizes?
I can do this, moment..
I know about oflag=direct but I choose to
Also, don't use ext*, XFS can be up to 2-3x faster (in many of the
benchmarks).
I'm going to swap file systems and give it a shot right now! :)
How is the stability of XFS? I've heard recovery is easier with ext2/3
since more people use it, more tools are available, etc.
Greg
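(To make the tooling comparison concrete - a sketch assuming the array is
/dev/md0 and unmounted; both commands below only check, they fix nothing.)

  xfs_repair -n /dev/md0   # XFS: no-modify check
  e2fsck -fn /dev/md0      # ext2/3: force a check, answer no to all fixes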
Quoting Norman Elton [EMAIL PROTECTED]:
I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.
Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
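(One illustrative way to carve up 48 disks with md - the six-sets-of-eight
RAID-6 layout and device names are assumptions, not what was actually done.
WARNING: mdadm --create destroys existing data.)

  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
  mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[j-q]
  # ...and so on for the remaining four groups, then stripe or LVM
  # across the resulting sets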
On Fri, 18 Jan 2008, Greg Cormier wrote:
Also, don't use ext*, XFS can be up to 2-3x faster (in many of the
benchmarks).
I'm going to swap file systems and give it a shot right now! :)
How is the stability of XFS? I've heard recovery is easier with ext2/3
since more people use it, more tools
On Fri, 18 Jan 2008, Greg Cormier wrote:
Justin, thanks for the script. Here are my results. I ran it a few times
with different tests, hence the small number of results you see here;
I slowly trimmed out the obviously non-ideal sizes.
Nice, we all love benchmarks!! :)
System
---
Athlon64
Justin Piszcz wrote:
On Thu, 17 Jan 2008, Al Boldi wrote:
Justin Piszcz wrote:
On Wed, 16 Jan 2008, Al Boldi wrote:
Also, can you retest using dd with different block-sizes?
I can do this, moment..
I know about oflag=direct but I choose to use dd with sync and
measure the total time
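(A sketch of that kind of run - the target file and sizes are hypothetical;
each pass writes 1 GiB, and the trailing sync is included in the timing.)

  for run in "4k 262144" "64k 16384" "1M 1024" "4M 256"; do
      set -- $run
      sync
      echo "bs=$1"
      time sh -c "dd if=/dev/zero of=/mnt/test.bin bs=$1 count=$2; sync"
      rm -f /mnt/test.bin
  done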
On Jan 18, 2008, at 4:33 AM, Heinz Mauelshagen wrote:
Much later I figured out that dmraid -b reported two of the disks as
being the same:

Looks like the md sync duplicated the metadata and dmraid just spots
that duplication. You gotta remove one of the duplicates to clean this up,
but
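(A sketch of the cleanup, assuming the stale copy sits on /dev/sdb - check
the dmraid -b / dmraid -r output carefully before erasing anything:)

  dmraid -r              # list devices carrying recognized raid metadata
  dmraid -an             # deactivate the bogus mapped devices
  dmraid -r -E /dev/sdb  # erase the duplicated metadata from one disk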
I wonder how long it would take to run an fsck on one large filesystem?
:)
I would imagine you'd have time to order a new system, build it, and
restore the backups before the fsck was done!
On Fri, Jan 18, 2008 at 03:23:24AM -0800, Ask Bjørn Hansen wrote:
On Jan 18, 2008, at 3:17 AM, Ask Bjørn Hansen wrote:
[ Uh, I just realized that I forgot to update the subject line as I figured
out what was going on; it's obviously not a software raid problem but a
multipath problem ]
It is quite a box. There's a picture of the box with the cover removed
on Sun's website:
http://www.sun.com/images/k3/k3_sunfirex4500_4.jpg
From the X4500 homepage, there's a gallery of additional pictures. The
drives drop in from the top. Massive fans channel air in the small
gaps between the
Justin, thanks for the script. Here are my results. I ran it a few times
with different tests, hence the small number of results you see here;
I slowly trimmed out the obviously non-ideal sizes.
System
---
Athlon64 3500
2GB RAM
4x500GB WD RAID Edition drives, RAID 5. /dev/sde is the old 4-platter version
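(For anyone reproducing the chunk-size sweep: a hedged sketch - the device
names are examples and the mkfs/mount steps are omitted. WARNING: --create
wipes the existing array.)

  # rebuild with a given chunk size, e.g. 256 KiB, then rerun the test
  mdadm --create /dev/md0 --level=5 --chunk=256 --raid-devices=4 /dev/sd[b-e]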
On Thu, 17 Jan 2008, Janek Kozicki wrote:
I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
there is ext4 (or ext4dev) - it's ext3 modified to support volumes of up to
1024 PB (1,048,576 TB). You could check if it's feasible. Personally I'd
always stick with ext2/ext3/ext4 since it is
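(If you do check ext4dev: on kernels of that era the rough procedure was the
following - a sketch, assuming CONFIG_EXT4DEV_FS is enabled and /dev/md0 is
scratch. Note that mounting with extents makes the fs unusable as plain ext3.)

  mkfs.ext3 /dev/md0
  mount -t ext4dev -o extents /dev/md0 /mnt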