Hi,
I bought two new hard drives to expand my raid array today and
unfortunately one of them appears to be bad. The problem didn't arise
until after I attempted to grow the raid array. I was trying to expand
the array from 6 to 8 drives. I added both drives using mdadm --add
/dev/md1 /dev/sdb1
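[For context, the grow sequence the poster describes usually looks like the sketch below. This is not taken from the thread: /dev/sdc1 is an assumed name for the second drive, and DRY_RUN=1 makes the script only print the commands instead of running them (running for real needs root and would reshape the array).]

```shell
#!/bin/sh
# Hedged sketch of growing a 6-disk array to 8 disks. Device names
# /dev/md1 and /dev/sdb1 follow the message; /dev/sdc1 is assumed.
# DRY_RUN=1 only echoes each command rather than executing it.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }
DRY_RUN=1

run mdadm --add /dev/md1 /dev/sdb1           # add first new drive as a spare
run mdadm --add /dev/md1 /dev/sdc1           # add second new drive as a spare
run mdadm --grow /dev/md1 --raid-devices=8   # reshape from 6 to 8 active devices
run mdadm --wait /dev/md1                    # block until the reshape finishes
```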
On Sun, Oct 28, 2007 at 08:21:34PM -0400, Bill Davidsen wrote:
Because you didn't stripe align the partition, your bad.
Align to /what/ stripe? Hardware (CHS is fiction), software (of the RAID
the real stripe (track) size of the storage, you must read the manual
and/or bug technical support
On Sun, Oct 28, 2007 at 10:59:01PM -0700, Daniel L. Miller wrote:
Doug Ledford wrote:
Anyway, I happen to *like* the idea of using full disk devices, but the
reality is that the md subsystem doesn't have exclusive ownership of the
disks at all times, and without that it really needs to stake a
On Sun, Oct 28, 2007 at 01:47:55PM -0400, Doug Ledford wrote:
On Sun, 2007-10-28 at 15:13 +0100, Luca Berra wrote:
On Sat, Oct 27, 2007 at 08:26:00PM -0400, Doug Ledford wrote:
It was only because I wasn't using mdadm in the initrd and specifying
uuids that it found the right devices to start
Ming Zhang wrote:
off topic, could you resubmit the alignment issue patch to the list and see
if tomof accepts it? he needs the patch inlined in the email. it was found and
fixed by you, so it is better that you post it (instead of me). thx.
diff -u kernel.old/iscsi.c kernel/iscsi.c
--- kernel.old/iscsi.c
Luca Berra wrote:
On Sun, Oct 28, 2007 at 08:21:34PM -0400, Bill Davidsen wrote:
Because you didn't stripe align the partition, your bad.
Align to /what/ stripe? Hardware (CHS is fiction), software (of the RAID
the real stripe (track) size of the storage, you must read the manual
and/or
On Sun, 2007-10-28 at 20:21 -0400, Bill Davidsen wrote:
Doug Ledford wrote:
On Fri, 2007-10-26 at 11:15 +0200, Luca Berra wrote:
On Thu, Oct 25, 2007 at 02:40:06AM -0400, Doug Ledford wrote:
The partition table is the single, (mostly) universally recognized
arbiter of what
On Mon, 2007-10-29 at 09:22 -0400, Bill Davidsen wrote:
consider a storage with 64 spt, an io size of 4k and partition starting
at sector 63.
first io request will require two ios from the storage (1 for sector 63,
and one for sectors 64 to 70)
the next 7 io
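[Luca's arithmetic can be checked with a small sketch (mine, not from the thread): with 64 sectors per track and 8-sector (4k) I/Os, a partition starting at sector 63 makes every 8th I/O straddle a track boundary and cost two device I/Os instead of one.]

```shell
#!/bin/sh
# Sketch (not from the thread) of the arithmetic above: 64 sectors per
# track (spt), 4k I/Os of 8 sectors each, partition data starting at sector 63.
spt=64
io=8
start=63

# List which of the first 16 sequential 4k I/Os straddle a track boundary;
# such an I/O hits two tracks, so the storage must do two I/Os for it.
crossing_ios() {
  out=""
  i=0
  while [ $i -lt 16 ]; do
    first=$((start + i * io))
    last=$((first + io - 1))
    if [ $((first / spt)) -ne $((last / spt)) ]; then
      out="$out $i"
    fi
    i=$((i + 1))
  done
  echo "$out"
}

echo "I/Os straddling a track boundary:$(crossing_ios)"
```

Setting `start=64` (or any multiple of the stripe) makes the crossings disappear, which is the point of the alignment argument.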
On Mon, 2007-10-29 at 09:41 +0100, Luca Berra wrote:
Remaking the initrd installs the new mdadm.conf file, which would have
then contained the whole disk devices and its UUID. Therein would
have been the problem.
yes, i read the patch, i don't like that code, as i don't like most of
what
On Mon, 2007-10-29 at 09:18 +0100, Luca Berra wrote:
On Sun, Oct 28, 2007 at 10:59:01PM -0700, Daniel L. Miller wrote:
Doug Ledford wrote:
Anyway, I happen to *like* the idea of using full disk devices, but the
reality is that the md subsystem doesn't have exclusive ownership of the
disks at
On Mon, Oct 29, 2007 at 08:41:39AM +0100, Luca Berra wrote:
consider a storage with 64 spt, an io size of 4k and partition starting
at sector 63.
first io request will require two ios from the storage (1 for sector 63,
and one for sectors 64 to 70)
the next 7 io
Hi,
I would welcome if someone could work on a new feature for raid5/6
that would allow replacing a disk in a raid5/6 with a new one without
having to degrade the array.
Consider the following situation:
raid5 md0 : sda sdb sdc
Now sda gives a SMART "failure imminent" warning and you want to
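[This feature did land in md/mdadm years later as hot-replace. A sketch of how it looks there, assuming mdadm >= 3.3 with a recent kernel; /dev/sdd is an assumed name for the replacement disk, and DRY_RUN=1 makes the script only print the commands.]

```shell
#!/bin/sh
# Sketch of hot-replace as it later appeared in mdadm (>= 3.3, recent kernel).
# /dev/sdd is an assumed name for the new disk. DRY_RUN=1 only echoes commands.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }
DRY_RUN=1

run mdadm /dev/md0 --add /dev/sdd                      # add the new disk as a spare
run mdadm /dev/md0 --replace /dev/sda --with /dev/sdd  # rebuild onto sdd, then fail sda
```

The array stays non-degraded throughout: md copies onto the spare first and only fails the old device once the copy completes.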
On Sun, 2007-10-28 at 22:59 -0700, Daniel L. Miller wrote:
Doug Ledford wrote:
Anyway, I happen to *like* the idea of using full disk devices, but the
reality is that the md subsystem doesn't have exclusive ownership of the
disks at all times, and without that it really needs to stake a
On Sun, 2007-10-28 at 01:27 -0500, Alberto Alonso wrote:
On Sat, 2007-10-27 at 19:55 -0400, Doug Ledford wrote:
On Sat, 2007-10-27 at 16:46 -0500, Alberto Alonso wrote:
Regardless of the fact that it is not MD's fault, it does make
software raid an invalid choice when combined with those
Daniel L. Miller wrote:
Nothing in the documentation (that I read - granted I don't always read
everything) stated that partitioning prior to md creation was necessary
- in fact references were provided on how to use complete disks. Is
there an official position on: To Partition, or Not To
On Mon, Oct 29, 2007 at 11:47:19AM -0400, Doug Ledford wrote:
On Mon, 2007-10-29 at 09:18 +0100, Luca Berra wrote:
On Sun, Oct 28, 2007 at 10:59:01PM -0700, Daniel L. Miller wrote:
Doug Ledford wrote:
Anyway, I happen to *like* the idea of using full disk devices, but the
reality is that the md
On Mon, Oct 29, 2007 at 11:30:53AM -0400, Doug Ledford wrote:
On Mon, 2007-10-29 at 09:41 +0100, Luca Berra wrote:
Remaking the initrd installs the new mdadm.conf file, which would have
then contained the whole disk devices and its UUID. Therein would
have been the problem.
yes, i read the
On Mon, 2007-10-29 at 22:44 +0100, Luca Berra wrote:
On Mon, Oct 29, 2007 at 11:30:53AM -0400, Doug Ledford wrote:
On Mon, 2007-10-29 at 09:41 +0100, Luca Berra wrote:
Remaking the initrd installs the new mdadm.conf file, which would have
then contained the whole disk devices and its
On Mon, 2007-10-29 at 22:29 +0100, Luca Berra wrote:
At which point he found that
the udev scripts in ubuntu are being stupid, and from the looks of it
are the cause of the problem. So, I've considered the initial issue
root caused for a bit now.
It seems i made an idiot of myself by missing
Doug Ledford wrote:
Nah. Even if we had concluded that udev was to blame here, I'm not
entirely certain that we hadn't left Daniel with the impression that we
suspected it versus blamed it, so reiterating it doesn't hurt. And I'm
sure no one has given him a fix for the problem (although Neil
On Friday October 26, [EMAIL PROTECTED] wrote:
Perhaps you could have called them 1.start, 1.end, and 1.4k in the
beginning? Isn't hindsight wonderful?
Those names seem good to me. I wonder if it is safe to generate them
in -Eb output
Maybe the key confusion here is between version
On Sat, 2007-10-27 at 12:33 +0200, Samuel Tardieu wrote:
I agree with Doug: nothing prevents you from using md above very slow
drivers (such as remote disks or even a filesystem implemented over a
tape device to make it extreme). Only the low-level drivers know when
it is appropriate to
On Friday October 26, [EMAIL PROTECTED] wrote:
Can someone help me understand superblocks and MD a little bit?
I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1.
--examine on these 3 drives shows correct information.
However, if I also examine the raw disk devices, sdb and sdd,
On Friday October 26, [EMAIL PROTECTED] wrote:
I've been asking on my other posts but haven't seen
a direct reply to this question:
Can MD implement timeouts so that it detects problems when
drivers don't come back?
No.
However it is possible that we will start sending the BIO_RW_FAILFAST
On Mon, 2007-10-29 at 13:22 -0400, Doug Ledford wrote:
OK, these you don't get to count. If you run raid over USB...well...you
get what you get. IDE never really was a proper server interface, and
SATA is much better, but USB was never anything other than a means to
connect simple devices
On Monday October 29, [EMAIL PROTECTED] wrote:
Hi,
I bought two new hard drives to expand my raid array today and
unfortunately one of them appears to be bad. The problem didn't arise
until after I attempted to grow the raid array. I was trying to expand
the array from 6 to 8 drives. I added
On Mon, Oct 29, 2007 at 07:05:42PM -0400, Doug Ledford wrote:
And I agree -D has less chance of finding a stale superblock, but it's
also true that it has no chance of finding non-stale superblocks on
Well it might be a matter of personal preference, but i would prefer
an initrd doing just the