Fix confirmed: filled the whole 11T hard disk without crashing.
I presume this will go into 2.6.22?
Thanks again.
Jeff
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Zheng
Sent: Thursday, 17 May 2007 5:39 p.m.
To: Neil Brown; [EMAIL
On Friday May 18, [EMAIL PROTECTED] wrote:
Fix confirmed: filled the whole 11T hard disk without crashing.
I presume this will go into 2.6.22?
Yes, and probably 2.6.21.y, though the patch will be slightly
different, see below.
Thanks again.
And thank-you for pursuing this with me.
Jeff Zheng wrote:
Here is the information of the created raid0. Hope it is enough.
If I read this correctly, the problem is with JFS rather than RAID? Have
you tried not mounting the JFS filesystem but just starting the array
which crashes, so you can read bits of it, etc, and verify that
I tried the patch; the same problem shows up, but no BUG_ON report.
Is there anything else I can do?
Jeff
Yes, I meant 2T, and yes, the components are always over 2T.
So I'm at a complete loss. The raid0 code follows the same
paths and does the same things and uses 64-bit arithmetic where
On Thursday May 17, [EMAIL PROTECTED] wrote:
I tried the patch; the same problem shows up, but no BUG_ON report.
Is there anything else I can do?
What is the nature of the corruption? Is it data in a file that is
wrong when you read it back, or does the filesystem metadata get
corrupted?
Can
On Thu, 17 May 2007, Neil Brown wrote:
On Thursday May 17, [EMAIL PROTECTED] wrote:
The only difference of any significance between the working
and non-working configurations is that in the non-working,
the component devices are larger than 2Gig, and hence have
sector offsets greater than 32 bits
On Wednesday May 16, [EMAIL PROTECTED] wrote:
On Thu, 17 May 2007, Neil Brown wrote:
On Thursday May 17, [EMAIL PROTECTED] wrote:
The only difference of any significance between the working
and non-working configurations is that in the non-working,
the component devices are larger
What is the nature of the corruption? Is it data in a file
that is wrong when you read it back, or does the filesystem
metadata get corrupted?
The corruption is in fs metadata: jfs is completely destroyed; after
umount, fsck does not recognize it as jfs anymore. Xfs gives a kernel
crash.
On Thursday May 17, [EMAIL PROTECTED] wrote:
Uhm, I just noticed something.
'chunk' is unsigned long, and when it gets shifted up, we might lose
bits. That could still happen with the 4*2.75T arrangement, but is
much more likely in the 2*5.5T arrangement.
Actually, it cannot be a problem
Yeah, seems you've locked it down :D. I've written 600GB of data now,
and everything is still fine.
Will let it run overnight, and fill the whole 11T. I'll post the result
tomorrow
Thanks a lot though.
Jeff
-Original Message-
From: Neil Brown [mailto:[EMAIL PROTECTED]
Sent:
[Ingo, Neil, linux-raid added to CC]
On 16/05/07, Jeff Zheng [EMAIL PROTECTED] wrote:
Hi everyone:
We are experiencing problems with software raid0 on very large disk
arrays.
We are using two 3ware disk array controllers, each of which is connected
to 8 750GB hard drives. And we build a