On Friday October 19, [EMAIL PROTECTED] wrote:
On 10/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Friday October 19, [EMAIL PROTECTED] wrote:
I'm using a stock 2.6.19.7 that I then backported various MD fixes to
from 2.6.20 - 2.6.23... this kernel has worked great until I
attempted
It appears that a couple of bugs slipped into md for 2.6.23.
These two patches fix them and are appropriate for 2.6.23.y as well
as 2.6.24-rcX
Thanks,
NeilBrown
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation of bitmaps with v1.0 metadata.
[PATCH 002 of 2] md: raid5: fix
As page->index is unsigned, this all becomes an unsigned comparison, which
almost always returns an error.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
Cc: Stable [EMAIL PROTECTED]
### Diffstat output
./drivers/md/bitmap.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
From: Dan Williams [EMAIL PROTECTED]
ops_complete_biofill() runs outside of spin_lock(sh->lock) and clears the
'pending' and 'ack' bits. Since the test_and_ack_op() macro only checks
against 'complete' it can get an inconsistent snapshot of pending work.
Move the clearing of these bits to
On Tue, Oct 09, 2007 at 01:48:50PM +0400, Michael Tokarev wrote:
There still is - at least for ext[23]. Even offline resizers
can't resize from just any size to any other size; the extfs developers
recommend recreating the filesystem anyway if the size changes significantly.
I'm too lazy to find a reference now,
Greetings happy mdadm users.
I have a little problem that after many hours of searching around I
couldn't seem to solve.
I have upgraded my motherboard and kernel (bad practice I know but the
ICH9R controller needs 2.6.2*+) at the same time.
The array was built using 2.6.18-7. Now I'm using
On Mon Oct 22, 2007 at 09:46:08PM +1000, Sam Redfern wrote:
Greetings happy mdadm users.
I have a little problem that after many hours of searching around I
couldn't seem to solve.
I have upgraded my motherboard and kernel (bad practice I know but the
ICH9R controller needs 2.6.2*+) at
Hi,
[using kernel 2.6.23 and mdadm 2.6.3+20070929]
I have a rather flaky sata controller with which I am trying to resync a raid5
array. It usually starts failing after 40% of the resync is done. Short of
changing the controller (which I will do later this week), is there a way to
have mdadm
On 10/22/07, Neil Brown [EMAIL PROTECTED] wrote:
On Friday October 19, [EMAIL PROTECTED] wrote:
On 10/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Friday October 19, [EMAIL PROTECTED] wrote:
I'm using a stock 2.6.19.7 that I then backported various MD fixes to
from 2.6.20 -
On Mon, 22 Oct 2007, Louis-David Mitterrand wrote:
Hi,
[using kernel 2.6.23 and mdadm 2.6.3+20070929]
I have a rather flaky sata controller with which I am trying to resync a raid5
array. It usually starts failing after 40% of the resync is done. Short of
changing the controller (which I
Does anyone have any insights here? How do I interpret the seemingly competing
system iowait numbers... is my system both CPU and PCI bus bound?
- Original Message
From: nefilim
To: linux-raid@vger.kernel.org
Sent: Thursday, October 18, 2007 4:45:20 PM
Subject: slow raid5 performance
- Message from [EMAIL PROTECTED] -
Date: Mon, 22 Oct 2007 21:46:08 +1000
From: Sam Redfern [EMAIL PROTECTED]
Reply-To: Sam Redfern [EMAIL PROTECTED]
Subject: Fwd: issues rebuilding raid array.
To: linux-raid@vger.kernel.org
The array was built using 2.6.18-7. Now I'm
- Original Message
From: Peter Grandi [EMAIL PROTECTED]
Thank you for your insightful response Peter (Yahoo spam filter hid it from me
until now).
Most 500GB drives can do 60-80MB/s on the outer tracks
(30-40MB/s on the inner ones), and 3 together can easily swamp
the PCI
Thanks Justin, good to hear about some real world experience.
- Original Message
From: Justin Piszcz [EMAIL PROTECTED]
To: Peter [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Monday, October 22, 2007 9:58:16 AM
Subject: Re: slow raid5 performance
With SW RAID 5 on the PCI bus
Peter wrote:
Thanks Justin, good to hear about some real world experience.
Hi Peter,
I recently built a 3 drive RAID5 using the onboard SATA controllers on
an MCP55 based board and get around 115MB/s write and 141MB/s read.
A fourth drive was added some time later and after growing the
On Tue, 23 Oct 2007, Richard Scobie wrote:
Peter wrote:
Thanks Justin, good to hear about some real world experience.
Hi Peter,
I recently built a 3 drive RAID5 using the onboard SATA controllers on an
MCP55 based board and get around 115MB/s write and 141MB/s read.
A fourth drive was
[ I was going to reply to this earlier, but the Red Sox and good
weather got in the way this weekend. ;-]
Michael == Michael Tokarev [EMAIL PROTECTED] writes:
Michael I've been doing sysadmin work for about 15 or 20 years.
Welcome to the club! It's a fun career, always something new to
learn.
On Mon, 22 Oct 2007 15:33:09 -0400 (EDT), Justin Piszcz
[EMAIL PROTECTED] said:
[ ... speed difference between PCI and PCIe RAID HAs ... ]
I recently built a 3 drive RAID5 using the onboard SATA
controllers on an MCP55 based board and get around 115MB/s
write and 141MB/s read. A fourth
John Stoffel wrote:
Michael == Michael Tokarev [EMAIL PROTECTED] writes:
If you are going to mirror an existing filesystem, then by definition
you have a second disk or partition available for the purpose. So you
would merely set up the new RAID1, in degraded mode, using the new
partition
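The procedure John describes can be sketched with mdadm. The device names (/dev/sda1 holding the existing filesystem, /dev/sdb1 as the new partition), the mount points, and the ext3 choice are hypothetical placeholders; adjust them to the actual setup:

```shell
# 1. Create the RAID1 in degraded mode: the new partition plus a
#    'missing' slot where the original partition will later go.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# 2. Make a filesystem on the array and copy the existing data over.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/new
cp -ax /mnt/old/. /mnt/new/

# 3. Once satisfied the copy is good, add the old partition; md
#    resyncs the mirror onto it in the background.
mdadm /dev/md0 --add /dev/sda1
```

Note the array is fully usable while degraded; the resync in step 3 happens while the filesystem is mounted.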
Hello,
I am having a rather urgent and annoying problem and I would appreciate
some input from anyone who has come across this. I have not been able to
find a solution as of yet. My issue deals with nested raid using mdadm,
and it seems that upon a reboot mdadm is attempting to assemble the
On Monday October 22, [EMAIL PROTECTED] wrote:
Hello,
I am having a rather urgent and annoying problem and I would appreciate
some input from anyone who has come across this. I have not been able to
find a solution as of yet. My issue deals with nested raid using mdadm,
and it seems that
On Monday October 22, [EMAIL PROTECTED] wrote:
Hey Neil,
Your fix works for me too. However, I'm wondering why you held back
on fixing the same issue in the "bitmap runs into data" comparison
that follows:
It isn't really needed here. In this case bitmap->offset is positive,
so all the