block to
simulate auto-sense on the command, and the scsi disk driver was either
trying to get sense or retrying the same command. Anyway, not an md
issue, a sata/scsi issue in terms of why it wasn't getting out of the
reset loop eventually. I would send your bad cable to Jeff Garzik for
fu
g"
flags in the same name space sucks.
Anyway, I don't think Neil's original names were that bad, just
obviously the names describe the condition that precipitated the state,
not the current state, implying that a reader of that code should
probably be thinking about what caused the
wouldn't do that directly.
It's the mount program collecting possible LABEL= data on the partitions
listed in /proc/partitions, of which sde3 is outside the valid range for
the drive.
--
Doug Ledford <[EMAIL PROTECTED]>
GPG KeyID: CFBFF194
http://people.red
nitial install, I'd say it's better to
> create the array with one missing drive, install the system and let it
> resync upon the next boot. Be sure to tell the user about that, though.
>
>
> Erik
>
array until it was done, but it would be quick. If you wanted
to be *really* fast, at least for SCSI drives you could write one large
chunk of 0's and one large chunk of 1's at the first parity block, then
use the SCSI COPY command to copy the 0 chunk everywhere it needs to go,
y faster. It is the sort of thing you might
> do in a "hardware" RAID controller but I doubt it would ever get done
> in md (there is a price for being very general).
Bleh...sometimes I really dislike always making things cater to the
lowest common denominator...you're never as good as you could be and you
are always as bad as the worst case...
On Tue, 2006-10-10 at 11:55 +0200, Gabor Gombas wrote:
> On Mon, Oct 09, 2006 at 12:32:00PM -0400, Doug Ledford wrote:
>
> > You don't really need to. After a clean install, the operating system
> > has no business reading any block it didn't write to during the ins
On Tue, 2006-10-10 at 23:18 +0400, Sergey Vlasov wrote:
> On Tue, 10 Oct 2006 13:47:56 -0400 Doug Ledford wrote:
>
> [...]
> > So, like my original email said, fsck has no business reading any block
> > that hasn't been written to either by the install or since the insta
ng of that
filesystem and there is no one to blame but Hans Reiser for that.
artistic licence 2.0)
>
> 0.
> http://svn.debian.org/wsvn/pkg-mdadm/mdadm/trunk/debian/FAQ?op=file&rev=0&sc=0
> 1.
> http://svn.debian.org/wsvn/pkg-mdadm/mdadm/trunk/debian/README.recipes?op=file&rev=0&sc=0
>
On Wed, 2006-10-18 at 15:43 +0200, martin f krafft wrote:
> also sprach Doug Ledford <[EMAIL PROTECTED]> [2006.10.18.1526 +0200]:
> > There are a couple reasons I can think.
>
> Thanks for your elaborate response. If you don't mind, I shall link
> to it from the FA
and it may not be unfixable, I'm just saying
it's an additional layer you have to deal with).
ually removed from the block queue and sent to
the device (the active load) and updated again when the command is
received back. Then, I'd basically look at what an incoming command
*would* do to each constituent disk's load values to see whether it
should go to one or the oth
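The load-tracking idea sketched above can be illustrated roughly like this (a toy Python illustration, not md kernel code; the class, function names, and cost weights are all invented for the example):

```python
# Toy sketch of load-based read balancing: each member disk keeps a
# load value updated when a command is sent to the device and again
# when it completes, and an incoming read goes to whichever mirror
# it *would* burden least. The cost weights here are made up.

class DiskLoad:
    """Hypothetical per-disk load: pending commands and sectors."""
    def __init__(self):
        self.pending_cmds = 0      # commands sent to the device, not yet back
        self.pending_sectors = 0   # sectors those commands cover

    def cost_if_queued(self, nr_sectors):
        # What this disk's load *would* become with the new command added.
        return (self.pending_cmds + 1) * 10 + self.pending_sectors + nr_sectors

def choose_mirror(disks, nr_sectors):
    """Pick the member whose projected load is lowest."""
    return min(range(len(disks)),
               key=lambda i: disks[i].cost_if_queued(nr_sectors))

disks = [DiskLoad(), DiskLoad()]
disks[0].pending_cmds = 3
disks[0].pending_sectors = 256
idx = choose_mirror(disks, 8)   # the idle second mirror wins here
```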
orry, I was in too much of a hurry; those are 120cm exhaust and 120cm intake
Hehehe, I'll burn in hell for pointing this out, but as 10mm == 1cm, a
120*mm* fan or 12*cm* fan would be correct. I'm pretty sure your fans
are neither 12mm nor 120cm (or if you do have a 120cm
fan...damn...that
/dev/md//boot. I
don't think this is documented anywhere. This also raises the question
of how partitionable md devices will be handled in regards to their name
component.
eal device offset. That may not matter much in the
end, but it will have to be done.
The difference in geometry also precludes doing a whole device md array
with the superblock at the end and the partition table where the normal
device partition table would be. Although that sort of setup is ri
k the drives and switch the linux partition types from raid
autodetect to plain linux, reboot, and you are done.
>		if (bdev) {
>			mutex_lock(&bdev->bd_inode->i_mutex);
>			i_size_write(bdev->bd_inode,
>				     (loff_t)mddev->array_size << 10);
>			mutex_unlock(&bdev->bd_inode->i_mutex);
>			bdput(bdev);
>		}
>	}
>	return rv;
> }
Most
people think "Heat kills" and therefore like to keep things as cool as
possible. For mechanical devices anyway, it's not so much that heat
kills, as it is operating outside of the designed temperature range,
either above or below, that reduces overall life expectancy. Keep your
d
you actually use.
> Can you publish your /etc/fstab and fdisk -l output?
Keep in mind the root partition is already mounted in ro mode by the
time fstab is available and the rc.sysinit script merely remounts it rw.
Again, the command line is the authority.
o the position it was just rebuilt to replace as part of the
final transition from being rebuilt to being an active, live component
in the array).
er try to admin lots of machines.
See Peter imagine problems that don't exist.
See Peter disable features that would make his life easier as Peter
takes steps to circumvent his imaginary problems.
See Peter stay at work over New Years holiday fixing problems that
were likely a result of his own efforts to avoid problems.
Don't be a Peter, listen to Neil.
On Mon, 2005-04-04 at 15:51 -0700, Alvin Oga wrote:
>
> On Mon, 4 Apr 2005, Doug Ledford wrote:
>
> > Anyway, it might or might not hurt the drives to run them well below
> > their designed operating temperature, I don't have schematics and
> > materials lists
ull ; cat /dev/hdb2 > /dev/null
> even *during* the md is active and getting used r/w?
It's ok to do this. However, reads happen from both hard drives in a
raid1 array in a sort of round robin fashion. You don't really know
which reads are going to go where, but each drive will get
*really* don't want that to happen.
So, no, the default should *not* be at the end of the device.
> As to the people who complained exactly because of this feature, LVM has
> two mechanisms to protect from accessing PVs on the raw disks (the
> ignore raid components option and the fil
erver then I would go with a large chunk size
because the filesystem activities themselves are going to produce lots
of random seeks, and you don't want your raid setup to make that problem
worse. Plus, most mail doesn't come in or go out at any sort of massive
streaming speed, so you don
is no actual version 1.1, or 1.2; the .0, .1, and .2 part of the
version *only* means where to put the version 1 superblock on the disk.
If you just say version 1, then it goes to the default location for
version 1 superblocks, and last I checked that was the end of disk (aka,
1.0).
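The placements just described can be expressed numerically (a Python sketch with 512-byte sectors; the "near the end, rounded down" rule for 1.0 follows what I believe mdadm's convention is, so treat it as an assumption, not a spec):

```python
# Where each version-1 superblock variant lives on a device, per the
# text above: 1.1 at the start, 1.2 at 4K from the start, 1.0 near
# the end of the device. Offsets are in 512-byte sectors.

def sb_offset_sectors(variant, dev_size_sectors):
    if variant == "1.1":
        return 0                        # at the very start of the device
    if variant == "1.2":
        return 8                        # 4K from the start
    if variant == "1.0":
        # near the end: back off 8K, align down to an 8-sector boundary
        # (assumed mdadm convention -- verify against your mdadm version)
        return (dev_size_sectors - 16) & ~7
    raise ValueError("unknown variant: %s" % variant)
```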
gt; anything else!
>
> Are you sure? I find that GRUB is much easier to use and setup than
> LILO these days. But hey, just dropping down to support 00.09.03 and
> 1.2 formats would be fine too. Let's just lessen the confusion if at
> all possible.
>
> John
> -
at uses. When you then use an internal
bitmap, you are adding writes to every member of the raid456 array,
which adds more seeks. The same is true for raid1, but since raid1
doesn't have the same level of dependency on seek rates that raid456
has, it doesn't show the same performance
ferent people have different priorities, but as I said, I
> like that this conversion is possible, and I never had the case of a
> tool saying "hmm, /dev/md is not there, let's look at
> /dev/sdc instead".
mount, pvscan.
> thanks,
> iustin
On Sat, 2007-10-20 at 00:43 +0200, Michal Soltys wrote:
> Doug Ledford wrote:
> > course, this comes at the expense of peak throughput on the device.
> > Let's say you were building a mondo movie server, where you were
> > streaming out digital movie files. In that ca
On Sat, 2007-10-20 at 18:09 +0400, Michael Tokarev wrote:
> Doug Ledford wrote:
> []
> > 1.0, 1.1, and 1.2 are the same format, just in different positions on
> > the disk. Of the three, the 1.1 format is the safest to use since it
> > won't allow you to accidental
rch device,
then it is guaranteed to not start the device and possibly try and
modify the underlying constituent devices. All around, it's just a
*really* bad idea.
I've heard several descriptions of things you *could* do with the
superblock at the end, but as of yet, not one of them is a go
On Sat, 2007-10-20 at 22:38 +0400, Michael Tokarev wrote:
> Justin Piszcz wrote:
> >
> > On Fri, 19 Oct 2007, Doug Ledford wrote:
> >
> >> On Fri, 2007-10-19 at 13:05 -0400, Justin Piszcz wrote:
> []
> >>> Got it, so for RAID1 it would make sense if L
From the standpoint of wanting to make sure an
array is suitable for embedding a boot sector, the 1.2 superblock may be
the best default.
> Since you have to support all of them or break existing arrays, and they
> all use the same format so there's no saving of code size to mentio
On Tue, 2007-10-23 at 21:21 +0200, Michal Soltys wrote:
> Doug Ledford wrote:
> >
> > Well, first I was thinking of files in the few hundreds of megabytes
> > each to gigabytes each, and when they are streamed, they are streamed at
> > a rate much lower than the full sp
y a non-starter for me. If someone wants
to do this manually, then go right ahead. But as for what we do by
default when the user asks us to create a raid array, we really need to
be on superblock 1.1 or 1.2 (although we aren't yet, we've waited for
the version 1 superblock issues to iron
things to it in worst scenario. Sure thing the first
> Michael> 512 bytes should be just cleared.. but that's another topic.
>
> I would argue that ext[234] should be clearing those 512 bytes. Why
> aren't they cleared
Actually, I didn't think msdos used the f
ec) for resync.
> md: using 128k window, over a total of 312581632 blocks.
> Filesystem "md0": Disabling barriers, not supported by the underlying device
> XFS mounting filesystem md0
> Starting XFS recovery on filesystem: md0 (logdev: internal)
> Ending XFS recovery on fil
four drives in a RAID-10
> configuration - I thought this would provide a good blend of safety and
> performance for a small fileserver.
>
> Because it's RAID-10 - I would ASSuME that I can drop one drive (after
> all, I keep booting one drive short), partition if necess
mdadm copy A gets three of the devices, I wouldn't think mdadm copy B
would have been able to get enough devices to decide to even try and
assemble the array (assuming that once copy A locked the devices during
open, that it then held the devices until time to assemble the array).
a command to any partition
failed, ability to use standard partition tables, etc. while being 100%
transparent to the rest of the OS. The second you considered FC
connected devices and multi-OS access, that fell apart in a big way.
Very analogous.
So, I wouldn't necessarily call it wrong,
On Wed, 2007-10-24 at 16:22 -0400, Bill Davidsen wrote:
> Doug Ledford wrote:
> > On Mon, 2007-10-22 at 16:39 -0400, John Stoffel wrote:
> >
> >
> >> I don't agree completely. I think the superblock location is a key
> >> issue, because if you have
geometries, that basically leaves 254 sectors, or
127k of space. This might not be enough for your particular needs if
you have a complex boot environment. In that case, you would need to
bump at least the starting track of your first partition to make room
for your boot loader. Unfortunately, ho
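The arithmetic behind that 127k figure works out like this (a sketch; the start sector and 63-sector track bump are hypothetical values chosen to match the numbers in the text):

```python
# Boot-embed space arithmetic: the gap between the MBR (sector 0)
# and the first partition is what a boot loader can embed into.

SECTOR_BYTES = 512

def embed_area_kib(first_partition_start_sector, reserved_sectors=1):
    """KiB free between the MBR and the first partition."""
    return ((first_partition_start_sector - reserved_sectors)
            * SECTOR_BYTES // 1024)

print(embed_area_kib(255))        # 254 usable sectors, as in the text
print(embed_area_kib(255 + 63))   # bump partition 1 by one 63-sector track
```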
On Fri, 2007-10-26 at 11:54 +0200, Luca Berra wrote:
> On Sat, Oct 20, 2007 at 09:11:57AM -0400, Doug Ledford wrote:
> just apply some rules, so if you find a partition table _AND_ an md
> superblock at the end, read both and you can tell if it is an md on a
> partition or a partitio
On Fri, 2007-10-26 at 11:15 +0200, Luca Berra wrote:
> On Thu, Oct 25, 2007 at 02:40:06AM -0400, Doug Ledford wrote:
> >The partition table is the single, (mostly) universally recognized
> >arbiter of what possible data might be on the disk. Having a partition
> >tab
vel disk driver problem. Yell
at the author of the disk driver in question. If that driver doesn't
time things out and return errors up the stack in a reasonable time,
then it's broken. Md should not, and realistically can not, take the
place of a properly written low level dri
On Sat, 2007-10-27 at 00:20 +0200, Gabor Gombas wrote:
> On Fri, Oct 26, 2007 at 02:41:56PM -0400, Doug Ledford wrote:
>
> > * When using lilo to boot from a raid device, it automatically installs
> > itself to the mbr, not to the partition. This can not be changed. Onl
On Sat, 2007-10-27 at 10:00 +0200, Luca Berra wrote:
> On Fri, Oct 26, 2007 at 02:52:59PM -0400, Doug Ledford wrote:
> >On Fri, 2007-10-26 at 11:54 +0200, Luca Berra wrote:
> >> On Sat, Oct 20, 2007 at 09:11:57AM -0400, Doug Ledford wrote:
> >> just apply some rules,
On Sat, 2007-10-27 at 09:50 +0200, Luca Berra wrote:
> On Fri, Oct 26, 2007 at 03:26:33PM -0400, Doug Ledford wrote:
> >On Fri, 2007-10-26 at 11:15 +0200, Luca Berra wrote:
> >> On Thu, Oct 25, 2007 at 02:40:06AM -0400, Doug Ledford wrote:
> >> >The partitio
On Fri, 2007-10-26 at 14:41 -0400, Doug Ledford wrote:
> Actually, after doing some research, here's what I've found:
> * When using grub2, there is supposedly already support for raid/lvm
> devices. However, I do not know if this includes version 1.0, 1.1, or
> 1.2 sup
On Sat, 2007-10-27 at 16:46 -0500, Alberto Alonso wrote:
> On Fri, 2007-10-26 at 15:00 -0400, Doug Ledford wrote:
> >
> > This isn't an md problem, this is a low level disk driver problem. Yell
> > at the author of the disk driver in question. If that driver doesn
the
various kernel boot params to specify different root partitions and in
so doing I could boot a RHEL5 kernel using a RHEL4 install and vice
versa. But if you do that, you have to manually
patch /etc/rc.d/rc.sysinit to mount the /lib/modules partition before
ever trying to do anything with modules (
On Sat, 2007-10-27 at 00:30 +0200, Gabor Gombas wrote:
> On Fri, Oct 26, 2007 at 02:52:59PM -0400, Doug Ledford wrote:
>
> > In fact, no you can't. I know, because I've created a device that had
> > both but wasn't a raid device. And it's matching partne
On Sun, 2007-10-28 at 15:13 +0100, Luca Berra wrote:
> On Sat, Oct 27, 2007 at 08:26:00PM -0400, Doug Ledford wrote:
> >On Sat, 2007-10-27 at 00:30 +0200, Gabor Gombas wrote:
> >> On Fri, Oct 26, 2007 at 02:52:59PM -0400, Doug Ledford wrote:
> >>
> >> >
On Sun, 2007-10-28 at 14:37 +0100, Luca Berra wrote:
> On Sat, Oct 27, 2007 at 04:47:30PM -0400, Doug Ledford wrote:
> >Most of the time it does. But those times where it can fail, the
> >failure is due to not taking the precautions necessary to prevent it:
> >aka labelin
On Sun, 2007-10-28 at 20:21 -0400, Bill Davidsen wrote:
> Doug Ledford wrote:
> > On Fri, 2007-10-26 at 11:15 +0200, Luca Berra wrote:
> >
> >> On Thu, Oct 25, 2007 at 02:40:06AM -0400, Doug Ledford wrote:
> >>
> >>> The partition table i
re a
> decade out of date.
You're missing the point, it's not about drive tracks, it's about array
tracks, aka chunks. A 64k write, that should write to one and only one
chunk, ends up spanning two. That increases the amount of writing the
array has to do and the number of disks
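The chunk-spanning effect is easy to show with a little arithmetic (a sketch using the 64k chunk size from the example above):

```python
# How many array chunks a write touches: an aligned 64K write fits
# entirely in one 64K chunk, while a misaligned one spans two,
# doubling the chunks (and potentially disks) involved.

def chunks_touched(offset_kib, length_kib, chunk_kib=64):
    first = offset_kib // chunk_kib
    last = (offset_kib + length_kib - 1) // chunk_kib
    return last - first + 1

print(chunks_touched(0, 64))    # aligned: one chunk
print(chunks_touched(32, 64))   # misaligned: spans two chunks
```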
t code, as i don't like most of
> what has been put in mkinitrd from 5.0 onward.
> Imho the correct thing here would not have been copying the existing
> mdadm.conf but generating a safe one from output of mdadm -D (note -D,
> not -E)
I'm not sure I'd want that. Besides,
On Mon, 2007-10-29 at 09:18 +0100, Luca Berra wrote:
> On Sun, Oct 28, 2007 at 10:59:01PM -0700, Daniel L. Miller wrote:
> >Doug Ledford wrote:
> >>Anyway, I happen to *like* the idea of using full disk devices, but the
> >>reality is that the md subsystem doesn't
On Sun, 2007-10-28 at 22:59 -0700, Daniel L. Miller wrote:
> Doug Ledford wrote:
> > Anyway, I happen to *like* the idea of using full disk devices, but the
> > reality is that the md subsystem doesn't have exclusive ownership of the
> > disks at all times, and witho
On Sun, 2007-10-28 at 01:27 -0500, Alberto Alonso wrote:
> On Sat, 2007-10-27 at 19:55 -0400, Doug Ledford wrote:
> > On Sat, 2007-10-27 at 16:46 -0500, Alberto Alonso wrote:
> > > Regardless of the fact that it is not MD's fault, it does make
> > > software raid
On Mon, 2007-10-29 at 22:44 +0100, Luca Berra wrote:
> On Mon, Oct 29, 2007 at 11:30:53AM -0400, Doug Ledford wrote:
> >On Mon, 2007-10-29 at 09:41 +0100, Luca Berra wrote:
> >
> >> >Remaking the initrd installs the new mdadm.conf file, which would have
> >> &g
debug output, but not solve the
problem), so not dropping it entirely would seem appropriate as well.
, just the needed arrays to get booted
into your / partition.
as such, keeping
with more current kernels than you have been using is likely to be a big
factor in whether or not these sorts of things happen.
> If not, I would like to see what people that have experienced
> hardware failures and survived them are using so that such
> a list can be c
On Tue, 2007-10-30 at 00:08 -0500, Alberto Alonso wrote:
> On Mon, 2007-10-29 at 13:22 -0400, Doug Ledford wrote:
>
> > OK, these you don't get to count. If you run raid over USB...well...you
> > get what you get. IDE never really was a proper server interface, and
>
boot up from it. If it does
work, shut the machine down one more time, put the old hda in as hdc,
boot back up (which should boot from hda to the md0 root, it should not
touch hdc), add hdc to the raid array, let it resync, and then the final
step is to run the grub install on hdc to make it ma
ll procedure is if
you lose a drive and need to add a new one back in, then the new one
will need it.
On Thu, 2007-11-01 at 10:31 -0700, H. Peter Anvin wrote:
> Doug Ledford wrote:
> >
> > device /dev/sda (hd0)
> > root (hd0,0)
> > install --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0)
> > /boot/grub/e2fs_stage1_5 p /boot/grub/stage2 /boot/grub/menu.lst
&
On Thu, 2007-11-01 at 00:08 -0500, Alberto Alonso wrote:
> On Tue, 2007-10-30 at 13:39 -0400, Doug Ledford wrote:
> >
> > Really, you've only been bitten by three so far. Serverworks PATA
> > (which I tend to agree with the other person, I would probably chock
>
On Thu, 2007-11-01 at 20:04 +0100, Janek Kozicki wrote:
> Doug Ledford said: (by the date of Thu, 01 Nov 2007 14:30:58 -0400)
>
> > So, what I said is true, the MBR will search on the disk it is being run
> > from for the files it needs: 0x80.
>
> my motherboard allo
On Fri, 2007-11-02 at 03:41 -0500, Alberto Alonso wrote:
> On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote:
> > Not in the older kernel versions you were running, no.
>
> These "old versions" (specially the RHEL) are supposed to be
> the official versions
On Thu, 2007-11-01 at 11:57 -0700, H. Peter Anvin wrote:
> Doug Ledford wrote:
> >
> > Correct, and that's what you want. The alternative is that if the BIOS
> > can see the first disk but it's broken and can't be used, and if you
> > have the boot secto
On Thu, 2007-11-01 at 14:02 -0700, H. Peter Anvin wrote:
> Doug Ledford wrote:
> >>
> >> I would argue that ext[234] should be clearing those 512 bytes. Why
> >> aren't they cleared
> >
> > Actually, I didn't think msdos used the first 512 b
On Fri, 2007-11-02 at 13:21 -0500, Alberto Alonso wrote:
> On Fri, 2007-11-02 at 11:45 -0400, Doug Ledford wrote:
>
> > The key word here being "supported". That means if you run across a
> > problem, we fix it. It doesn't mean there will never be any problems.
o a chunk, so you really only need to align the
lvm superblock so that data starts at 128K offset into the raid array.
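That alignment target can be sketched numerically (an illustration; as far as I know the real knob for this in LVM is pvcreate's --dataalignment option, so verify against your LVM version):

```python
# Smallest chunk-aligned offset at which the LVM data area can start,
# given how much room the LVM metadata needs in front of it.

def first_aligned_offset_kib(min_metadata_kib, chunk_kib=64):
    """Round min_metadata_kib up to the next multiple of chunk_kib."""
    return -(-min_metadata_kib // chunk_kib) * chunk_kib

# If the metadata needs, say, 100K, data lands at the 128K mark --
# the offset mentioned above (100K here is a made-up example value).
print(first_aligned_offset_kib(100))
```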
correct
Events : 2
Array Slot : 0 (0, 1)
Array State : Uu
[EMAIL PROTECTED] ~]#