David Greaves wrote:
David Robinson wrote:
David Greaves wrote:
This isn't a regression.
I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited
to try it).
I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved -
no.
Note this is a different (desktop) machine to
Dave,
Questions inline and below.
On Mon, 18 Jun 2007, David Chinner wrote:
On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote:
Hi,
I was wondering if the XFS folks can recommend any optimizations for high
speed disk arrays using RAID5?
[sysctls snipped]
None of those options
On Mon, 18 Jun 2007 16:09:49 +0900, Tejun Heo wrote:
Mikael Pettersson wrote:
On Sat, 16 Jun 2007 15:52:33 +0400, Brad Campbell wrote:
I've got a box here based on current Debian Stable.
It's got 15 Maxtor SATA drives in it on 4 Promise TX4 controllers.
Using kernel 2.6.21.x it shuts
Mikael Pettersson wrote:
I don't think sata_promise is the guilty party here. Looks like some
layer above sata_promise got confused about the state of the interface.
But locking up hard after hardreset is a problem of sata_promise, no?
Maybe, maybe not. The original report doesn't specify
David Chinner wrote:
On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote:
Hi,
I was wondering if the XFS folks can recommend any optimizations for high
speed disk arrays using RAID5?
[sysctls snipped]
None of those options will make much difference to performance.
mkfs parameters
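For alignment at mkfs time, the relevant mkfs.xfs knobs are the stripe-geometry options. A hedged sketch, assuming a hypothetical /dev/md0 built from 5 disks (4 data + 1 parity) with a 64 KiB chunk; substitute your own geometry:

```shell
# Align XFS allocation to the RAID5 stripe: su = md chunk size,
# sw = number of data disks (total disks minus one parity disk).
# Device name, chunk size, and disk count here are assumptions.
mkfs.xfs -d su=64k,sw=4 /dev/md0
```

With matching su/sw, XFS tries to start allocations on stripe boundaries, which avoids read-modify-write cycles on full-stripe writes.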
Last I checked, expanding drives (reshaping the RAID) in a RAID set within Windows
is not supported.
Significant size is relative I guess, but 4-8 terabytes will not be a problem
in either OS.
I run a RAID 6 (Windows does not support this either, last I checked). I
started out with 5 drives
On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote:
David Greaves wrote:
OK, that gave me an idea.
Freeze the filesystem
md5sum the lvm
hibernate
resume
md5sum the lvm
snip
So the lvm and below looks OK...
I'll see how it behaves now the filesystem has been frozen/thawed
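The steps above can be sketched as follows (mount point and LV path are hypothetical; xfs_freeze is the xfsprogs tool for quiescing the filesystem):

```shell
# Quiesce the filesystem so the block device underneath is stable
xfs_freeze -f /mnt/media                    # freeze: flush and block writes
md5sum /dev/vg0/media > /tmp/pre.md5

# ... hibernate and resume the machine here ...

md5sum /dev/vg0/media > /tmp/post.md5
xfs_freeze -u /mnt/media                    # thaw
diff /tmp/pre.md5 /tmp/post.md5 && echo "lvm and below look OK"
```

Matching checksums across the hibernate/resume cycle point the finger above the LVM/md layers.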
Booted today, got this in dmesg:
[   44.884915] md: bind<sdd1>
[   44.885150] md: bind<sda1>
[   44.885352] md: bind<sdb1>
[   44.885552] md: bind<sdc1>
[   44.885601] md: kicking non-fresh sdd1 from array!
[   44.885637] md: unbind<sdd1>
[   44.885671] md: export_rdev(sdd1)
[ 44.900824] raid5: device
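When md kicks a member as "non-fresh" like this, its event count has fallen behind the rest of the array. Assuming the disk itself is healthy, the usual recovery is to add it back and let md resync it (device names below are taken from the log above; the array name is an assumption):

```shell
# Check why sdd1 fell out: compare its event count with the others
mdadm --examine /dev/sdd1 | grep -i events

# Put it back; md resyncs it against the remaining members
mdadm /dev/md0 --re-add /dev/sdd1   # or --add if --re-add is refused
cat /proc/mdstat                    # watch the recovery progress
```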
Dexter Filmore wrote:
1661 minutes is *way* too long. It's a 4x250GiB SATA array and usually takes 3
hours to resync or check, for that matter.
So, what's this?
Kernel, mdadm versions?
I seem to recall a long-fixed ETA calculation bug some time back...
David
-
To unsubscribe from this
I'm creating a larger backup server that uses bacula (this
software works well). The way I'm going about this I need
lots of space in the filesystem where temporary files are
stored. I have been looking at the Norco (link at the bottom),
but there seem to be some grumblings that the adapter card
On Mon, 18 Jun 2007, Mike wrote:
I'm creating a larger backup server that uses bacula (this
software works well). The way I'm going about this I need
lots of space in the filesystem where temporary files are
stored. I have been looking at the Norco (link at the bottom),
but there seem to be
On Mon, 18 Jun 2007, Dexter Filmore wrote:
On Monday 18 June 2007 17:22:06 David Greaves wrote:
Dexter Filmore wrote:
1661 minutes is *way* too long. It's a 4x250GiB SATA array and usually
takes 3 hours to resync or check, for that matter.
So, what's this?
Kernel, mdadm versions?
I seem
[EMAIL PROTECTED] wrote:
in my case it takes 2+ days to resync the array before I can do any
performance testing with it. for some reason it's only doing the rebuild
at ~5M/sec (even though I've increased the min and max rebuild speeds
and a dd to the array seems to be ~44M/sec, even during
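The "min and max rebuild speeds" referred to here are the dev.raid sysctls; raising the floor forces md to keep rebuilding quickly even when it thinks other I/O should take priority (values below are examples, not recommendations):

```shell
# Values are in KiB/s; typical defaults are 1000 (min) and 200000 (max)
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
# equivalently: sysctl -w dev.raid.speed_limit_min=50000

cat /proc/mdstat   # the "speed=" figure should climb if md was throttled
```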
On Mon, 18 Jun 2007, Brendan Conoboy wrote:
[EMAIL PROTECTED] wrote:
in my case it takes 2+ days to resync the array before I can do any
performance testing with it. for some reason it's only doing the rebuild
at ~5M/sec (even though I've increased the min and max rebuild speeds and
a dd
On Mon, 18 Jun 2007, Lennart Sorensen wrote:
On Mon, Jun 18, 2007 at 10:28:38AM -0700, [EMAIL PROTECTED] wrote:
I plan to test the different configurations.
however, if I was saturating the bus with the reconstruct how can I fire
off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only
On Mon, 18 Jun 2007, Brendan Conoboy wrote:
[EMAIL PROTECTED] wrote:
I plan to test the different configurations.
however, if I was saturating the bus with the reconstruct how can I fire
off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only slowing the
reconstruct to ~4M/sec?
On Mon, 18 Jun 2007, Lennart Sorensen wrote:
On Mon, Jun 18, 2007 at 11:12:45AM -0700, [EMAIL PROTECTED] wrote:
simple ultra-wide SCSI to a single controller.
Hmm, isn't ultra-wide limited to 40MB/s? Is it Ultra320 wide? That
could do a lot more, and 220MB/s sounds plausible for 320 SCSI.
[EMAIL PROTECTED] wrote:
yes, sorry, ultra 320 wide.
Exactly how many channels and drives?
--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]
On Mon, 18 Jun 2007, Brendan Conoboy wrote:
[EMAIL PROTECTED] wrote:
yes, sorry, ultra 320 wide.
Exactly how many channels and drives?
one channel, 2 OS drives plus the 45 drives in the array.
Yes, I realize that there will be bottlenecks with this; the large capacity
is to handle longer
Neil,
The following two patches are the respin of the changes you suggested to
raid5: coding style cleanup / refactor. I have added them to the
git-md-accel tree for a 2.6.23-rc1 pull. The full, rebased, raid
acceleration patchset will be sent for another round of review once I
address
handle_stripe5 and handle_stripe6 have very deep logic paths handling the
various states of a stripe_head. By introducing the 'stripe_head_state'
and 'r6_state' objects, large portions of the logic can be moved to
sub-routines.
'struct stripe_head_state' consumes all of the automatic variables
Replaces PRINTK with pr_debug, and kills the RAID5_DEBUG definition in
favor of the global DEBUG definition. To get local debug messages just add
'#define DEBUG' to the top of the file.
Signed-off-by: Dan Williams [EMAIL PROTECTED]
---
drivers/md/raid5.c | 116
Why dontcha just cut all the "look how big my ePenis is" chatter and tell us
what you wanna do?
Nobody gives a rat if your ultra1337 sound cards needs a 10 megawatt power
supply.
--
-BEGIN GEEK CODE BLOCK-
Version: 3.12
GCS d--(+)@ s-:+ a- C UL++ P+++ L+++ E-- W++ N o? K-
In article [EMAIL PROTECTED], Justin Piszcz wrote:
On Mon, 18 Jun 2007, Mike wrote:
I'm creating a larger backup server that uses bacula (this
software works well). The way I'm going about this I need
lots of space in the filesystem where temporary files are
stored. I have been looking
Get 3 Hitachi 1TB drives and use SW RAID5 on an Intel 965 motherboard OR
use PCI-e cards that use the Silicon Image chipset.
04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid
II Controller (rev 01)
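Creating the suggested 3-drive software RAID5 is a one-liner (device names below are hypothetical; double-check them against dmesg before running anything destructive):

```shell
# Build a 3-disk RAID5 from the three 1TB drives (names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1

cat /proc/mdstat    # the initial parity build runs in the background
```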
What he said. After some thinking, I loaded up a machine with 3x