On 6 Dec 2007, Jan Engelhardt verbalised:
On Dec 5 2007 19:29, Nix wrote:
On Dec 1 2007 06:19, Justin Piszcz wrote:
RAID1, 0.90.03 superblocks (in order to be compatible with LILO, if
you use 1.x superblocks with LILO you can't boot)
Says who? (Don't use LILO ;-)
Well, your kernels must
On 1 Dec 2007, Jan Engelhardt uttered the following:
On Dec 1 2007 06:19, Justin Piszcz wrote:
RAID1, 0.90.03 superblocks (in order to be compatible with LILO, if
you use 1.x superblocks with LILO you can't boot)
Says who? (Don't use LILO ;-)
Well, your kernels must be on a
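[Sketch, not from the thread: a LILO-bootable RAID1 needs 0.90 metadata, which sits at the end of each member so LILO can read the filesystem as if it were a plain partition; devices below are hypothetical.]

  mdadm --create /dev/md0 --metadata=0.90 --level=1 \
        --raid-devices=2 /dev/sda1 /dev/sdb1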
On 19 Sep 2007, maximilian attems said:
hello,
working on initramfs I'd be curious to know what the /sys/block
entry of a /dev/md/NN device is. I have a user request to support
it and no handy box using it.
I presume it may also be /sys/block/mdNN?
That's it, e.g. /sys/block/md0. Notable
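[Sketch, not from the thread: confirming the mapping for a hypothetical md0; /dev/md/0 and /dev/md0 name the same kernel device, and sysfs is keyed by the kernel name.]

  ls -l /dev/md/0                      # node or symlink for md0
  cat /sys/block/md0/md/array_state    # e.g. "clean" or "active"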
On 11 Jul 2007, Michael stated:
I am running Suse, and the check program is not available
`check' isn't a program. The line suggested has a typo: it should
be something like this:
30 2 * * Mon echo check > /sys/block/md0/md/sync_action
The only program that line needs is `echo' and I'm sure
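[Sketch, not from the thread: after a check completes, md counts any inconsistencies it found, so a follow-up for the same hypothetical md0 is:]

  cat /sys/block/md0/md/mismatch_cnt   # 0 if the scrub found no mismatches
  cat /proc/mdstat                     # shows check progress while it runs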
On 21 Jun 2007, Neil Brown stated:
I have that - apparently naive - idea that drives use strong checksums,
and will never return bad data, only good data or an error. If this
isn't right, then it would really help to understand what the causes of
other failures are before working out how to
On 19 Jun 2007, Michael outgrape:
[regarding `welcome to my killfile']
Grow up, man, and thanks for the threat. I will take that into
account if anything bad happens to my computer system.
Read http://en.wikipedia.org/wiki/Killfile and learn. All he's saying
is `I am automatically ignoring
On 12 Jun 2007, Jon Nelson told this:
On Mon, 11 Jun 2007, Nix wrote:
On 11 Jun 2007, Justin Piszcz told this:
loki:~# time dd if=/dev/md1 bs=1000 count=502400 of=/dev/null
502400+0 records in
502400+0 records out
502400000 bytes (502 MB) copied, 16.2995 s, 30.8 MB/s
loki:~# time dd
On 11 Jun 2007, Justin Piszcz told this:
You can do a read test.
10gb read test:
dd if=/dev/md0 bs=1M count=10240 of=/dev/null
What is the result?
I've read that LVM can incur a 30-50% slowdown.
FWIW I see a much smaller penalty than that.
loki:~# lvs -o +devices
LV VG
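[Sketch, not from the thread: one way to measure the LVM penalty yourself is to run the same read test against the bare array and against an LV on top of it; device and LV names are hypothetical.]

  dd if=/dev/md0 of=/dev/null bs=1M count=10240         # raw array
  dd if=/dev/vg0/testlv of=/dev/null bs=1M count=10240  # through LVM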
On 29 May 2007, Jan Engelhardt uttered the following:
from your post at
http://www.mail-archive.com/linux-raid@vger.kernel.org/msg07384.html I
read that autodetecting arrays with a 1.x superblock is currently
impossible. Does it at least work to force the kernel to always assume a
1.x
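[Sketch, not from the thread: the usual answer is to stop relying on in-kernel autodetection (which only understands 0.90 superblocks) and assemble 1.x arrays from userspace, e.g. in an initramfs.]

  mdadm --assemble --scan --auto=md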
On 30 May 2007, Bill Davidsen stated:
Nix wrote:
On 29 May 2007, Jan Engelhardt uttered the following:
from your post at
http://www.mail-archive.com/linux-raid@vger.kernel.org/msg07384.html I read
that autodetecting arrays with a
1.x superblock is currently impossible. Does it at least
On 8 May 2007, Michael Tokarev told this:
BTW, for such recovery purposes, I use initrd (initramfs really, but
does not matter) with a normal (but tiny) set of commands inside,
thanks to busybox. So everything can be done without any help from
external recovery CD. Very handy at times,
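[Sketch, not from the thread: such an image is typically packed from a prepared directory tree holding busybox, mdadm, and an init script; paths are hypothetical.]

  cd initramfs-root
  find . | cpio -o -H newc | gzip > /boot/initramfs.img.gz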
On 9 May 2007, Michael Tokarev spake thusly:
Nix wrote:
On 8 May 2007, Michael Tokarev told this:
BTW, for such recovery purposes, I use initrd (initramfs really, but
does not matter) with a normal (but tiny) set of commands inside,
thanks to busybox. So everything can be done without any
On 19 Mar 2007, James W. Laferriere outgrabe:
What I don't see is the reasoning behind the use of initrd. It's a
kernel run to put the dev tree in order, start up devices, ... Just to
start the kernel again?
That's not what initrds do. No second kernel is started, and
On 17 Mar 2007, Chris Lindley told this:
What I think the OP is getting at is that MDADM will create an array
with partitions whose type is not set to FD (Linux Raid Auto), but are
perhaps 83.
The issue with that is that upon a reboot mdadm will not be able to
start the array.
I think you
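[Sketch, not from the thread: marking a member partition as type fd so in-kernel autodetection picks it up; device and partition number are hypothetical, and sfdisk syntax varies between versions.]

  sfdisk --change-id /dev/sda 1 fd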
On 20 Feb 2007, Al Boldi outgrape:
Eyal Lebedinsky wrote:
Disks are sealed, and a desiccant is present in each to keep humidity
down. If you ever open a disk drive (e.g. for the magnets, or the mirror
quality platters, or for fun) then you can see the desiccant sachet.
Actually, they aren't
On 22 Feb 2007, [EMAIL PROTECTED] uttered the following:
On 20 Feb 2007, Al Boldi outgrape:
Eyal Lebedinsky wrote:
Disks are sealed, and a desiccant is present in each to keep humidity
down. If you ever open a disk drive (e.g. for the magnets, or the mirror
quality platters, or for fun) then
On 23 Jan 2007, Neil Brown said:
On Tuesday January 23, [EMAIL PROTECTED] wrote:
My question is then: what prevents the upper layer from opening the array
read-write, submitting a write, and making the md code BUG_ON()?
The theory is that when you tell an md array to become read-only, it
tells the
On 18 Jan 2007, Bill Davidsen spake thusly:
Steve Cousins wrote:
time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024
That doesn't give valid (repeatable) results due to caching issues. Go
back to the thread I started on RAID-5 write, and see my results. More
important, the way
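[Sketch, not from the thread: a more repeatable form of that test asks dd to flush the file to disk before it reports, so the page cache cannot inflate the figure; the path is hypothetical.]

  dd if=/dev/zero of=/mount-point/test.dat bs=1M count=1024 conv=fdatasync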
On 15 Jan 2007, Bill Davidsen told this:
Nix wrote:
Number   Major   Minor   RaidDevice   State
   0        8       6         0       active sync   /dev/sda6
   1        8      22         1       active sync   /dev/sdb6
   3       22       5         2       active sync   /dev/hdc5
On 14 Jan 2007, Neil Brown told this:
A quick look suggests that the following patch might make a
difference, but there is more to it than that. I think there are
subtle differences due to the use of version-1 superblocks. That
might be just another one-line change, but I want to make sure
On 13 Jan 2007, [EMAIL PROTECTED] uttered the following:
mdadm-2.6 bug, I fear. I haven't tracked it down yet but will look
shortly: I can't afford to not run mdadm --monitor... odd, that
code hasn't changed during 2.6 development.
Whoo! Compile Monitor.c without optimization and the problem
On 12 Jan 2007, Ernst Herzberg told this:
Then, about every 60 sec, 4 times:
event=SpareActive
mddev=/dev/md3
I see exactly this on both my RAID-5 arrays, neither of which have any
spare device --- nor have any active devices transitioned to spare
(which is what that event is actually supposed
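[Sketch, not from the thread: the kind of monitor invocation that emits those events; the 60-second rhythm matches mdadm's default polling interval, and the handler path is hypothetical.]

  mdadm --monitor --scan --delay=60 \
        --program=/usr/local/bin/handle-md-event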
On 13 Jan 2007, [EMAIL PROTECTED] spake thusly:
On 12 Jan 2007, Ernst Herzberg told this:
Then, about every 60 sec, 4 times:
event=SpareActive
mddev=/dev/md3
I see exactly this on both my RAID-5 arrays, neither of which have any
spare device --- nor have any active devices transitioned to
On 13 Jan 2007, [EMAIL PROTECTED] uttered the following:
On 12 Jan 2007, Ernst Herzberg told this:
Then, about every 60 sec, 4 times:
event=SpareActive
mddev=/dev/md3
I see exactly this on both my RAID-5 arrays, neither of which have any
spare device --- nor have any active devices
On 27 Nov 2006, Dragan Marinkovic stated:
On 11/26/06, Nix [EMAIL PROTECTED] wrote:
Well, I assemble my arrays with the command
/sbin/mdadm --assemble --scan --auto=md
[...]
No metadata versions needed anywhere.
[...]
But you do have to specify the version (other than 0.90) when you want
On 25 Nov 2006, Dragan Marinkovic stated:
Hm, I was playing with RAID 5 with one spare (3 + 1) and metadata
version 1.2 . If I let it build to some 10% and cleanly reboot it does
not start where it left off -- basically it starts from scratch. I was
under the impression that RAID with metadata
On 6 Nov 2006, Thomas Andrews uttered the following:
Thanks Neil, I fixed my problem by creating the raid set using the -e
option:
mdadm -C /dev/md0 -e 0.90 --level=raid1 --raid-devices=2 /dev/sda1
/dev/sdb1
Your suggestion to use mdadm to assemble the array is not an option
for me
On 21 Oct 2006, Bodo Thiesen yowled:
was hdb and what was hdd? And hde? Hmmm ...), so we decided the following
structure:
hda - vg called raida - creating LVs called raida1..raida4
hdb - vg called raidb - creating LVs called raidb1..raidb4
I'm interested: why two VGs? Why not have one VG
On 8 Oct 2006, Daniel Pittman said:
Jyri Hovila [EMAIL PROTECTED] writes:
I would appreciate it a lot if somebody could give me a hand here. All
I need to understand right now is how I can find out the first sector
of the actual RAID data. I'm starting with a simple configuration,
where there
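[Sketch, not from the thread: mdadm can answer that empirically; the member device is hypothetical. With 0.90 superblocks the data starts at sector 0 (the superblock sits near the end of the device), while 1.x output includes a "Data Offset" line.]

  mdadm --examine /dev/sda1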
On 2 Oct 2006, David Greaves spake:
I suggest you link from http://linux-raid.osdl.org/index.php/RAID_Boot
The pages don't really have the same purpose. RAID_Boot is `how to boot
your RAID system using initramfs'; this is `how to set up a RAID system
in the first place', i.e., setup.
I'll give
On 6 Sep 2006, Mario Holbe spake:
You don't necessarily need one. However, since Neil considers in-kernel
RAID-autodetection a bad thing and since mdadm typically relies on
mdadm.conf for RAID-assembly
You can specify the UUID on the command-line too (although I don't).
The advantage of the
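[Sketch, not from the thread: the command-line form, with a placeholder UUID and hypothetical members.]

  mdadm --assemble /dev/md0 --uuid=<array-uuid> /dev/sda1 /dev/sdb1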
On 5 Sep 2006, Paul Waldo uttered the following:
What about bitmaps? Nobody has mentioned them. It is my
understanding that you just turn them on with mdadm /dev/mdX -b
internal. Any caveats for this?
Notably, how many additional writes does it incur? I have some RAID
arrays using drives
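[Sketch, not from the thread: a bitmap can also be added to a running array and inspected afterwards; array and member names are hypothetical.]

  mdadm --grow /dev/md0 --bitmap=internal   # add internal write-intent bitmap
  mdadm --examine-bitmap /dev/sda1          # inspect it on a member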
On 16 Aug 2006, Molle Bestefich murmured woefully:
Peter T. Breuer wrote:
The comm channel and hey, I'm OK message you propose doesn't seem
that different from just hot-adding the disks from a shell script
using 'mdadm'.
[snip speculations on possible blocking calls]
You could always
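[Sketch, not from the thread: the shell-script approach amounts to little more than this; device names are hypothetical.]

  mdadm /dev/md0 --add /dev/sdc1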
On 5 Aug 2006, David Greaves prattled cheerily:
As an example of the cons: I've just set up lvm2 over my raid5 and whilst
testing snapshots, the first thing that happened was a kernel BUG and an
oops...
I've been backing up using writable snapshots on LVM2 over RAID-5 for
some time. No BUGs.
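[Sketch, not from the thread: the backup cycle being described, with hypothetical VG/LV names.]

  lvcreate --snapshot --size 1G --name backsnap /dev/vg0/home
  mount /dev/vg0/backsnap /mnt/snap
  # ... back up /mnt/snap ...
  umount /mnt/snap
  lvremove -f /dev/vg0/backsnap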
On 20 Jul 2006, Neil Brown uttered the following:
On Tuesday July 18, [EMAIL PROTECTED] wrote:
I think there's a bug here somewhere. I wonder/suspect that the
superblock should contain the fact that it's a partitioned/able md device?
I've thought about that and am not in favour.
I would
On 18 Jul 2006, Neil Brown moaned:
The superblock locations for sda and sda1 can only be 'one and the
same' if sda1 is at an offset in sda which is a multiple of 64K, and
if sda1 ends near the end of sda. This certainly can happen, but it
is by no means certain.
For this reason, version-1
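[Sketch, not from the thread: the 0.90 rule places the superblock in the last 64K-aligned 64K block of the device, which this bash arithmetic computes for a hypothetical member.]

  SIZE=$(blockdev --getsize64 /dev/sda1)   # device size in bytes
  echo $(( (SIZE & ~65535) - 65536 ))      # 0.90 superblock byte offset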
On 17 Jul 2006, Christian Pernegger suggested tentatively:
I'm still having problems with some md arrays not shutting down
cleanly on halt / reboot.
The problem seems to affect only arrays that are started via an
initrd, even if they do not have the root filesystem on them.
That's all
On 26 Jun 2006, Neil Brown said:
On Tuesday June 20, [EMAIL PROTECTED] wrote:
For some time, mdadm's been dumping core on me in my uClibc-built
initramfs. As you might imagine this is somewhat frustrating, not least
since my root filesystem's in LVM on RAID. Half an hour ago I got around
to
On 25 Jun 2006, Chris Allen uttered the following:
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions each of 3TB. This way I can choose between XFS
and EXT3 later on.
So now, my options are between the following:
1. Single 12TB /dev/md0,
On Tue, 27 Jun 2006, Neil Brown prattled cheerily:
On Tuesday June 27, [EMAIL PROTECTED] wrote:
,----[ config.c:load_partitions() ]
|         name = map_dev(major, minor, 1);
|
|         d = malloc(sizeof(*d));
|         d->devname = strdup(name);
`----
Ahh.. uhmmm... Oh yes. I've fixed that since, but
On Tue, 27 Jun 2006, Chris Allen wondered:
Nix wrote:
There is a third alternative which can be useful if you have a mess of
drives of widely-differing capacities: make several RAID arrays so as to
tessellate
space across all the drives, and then pile an LVM on the top of all of them
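[Sketch, not from the thread: three hypothetical, differently-sized drives partitioned so equal-sized pieces pair up, one array per piece, and the arrays pooled in a single VG.]

  mdadm --create /dev/md0 -l1 -n2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 -l1 -n2 /dev/sdb2 /dev/sdc1
  pvcreate /dev/md0 /dev/md1
  vgcreate pool /dev/md0 /dev/md1
  lvcreate -L 100G -n data pool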
On Fri, 23 Jun 2006, Neil Brown mused:
On Friday June 23, [EMAIL PROTECTED] wrote:
On 20 Jun 2006, [EMAIL PROTECTED] prattled cheerily:
For some time, mdadm's been dumping core on me in my uClibc-built
initramfs. As you might imagine this is somewhat frustrating, not least
since my root
On 23 Jun 2006, Francois Barre uttered the following:
The problem is that there is no cost effective backup available.
One-liner questions:
- How does Google make backups?
Replication across huge numbers of cheap machines on a massively
distributed filesystem.
--
`NB: Anyone suggesting
On 23 Jun 2006, PFC suggested tentatively:
- ext3 is slow if you have many files in one directory, but has
more mature tools (resize, recovery etc)
This is much less true if you turn on the dir_index feature.
--
`NB: Anyone suggesting that we should say Tibibytes instead of
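[Sketch, not from the thread: turning dir_index on for an existing ext3 filesystem; the device is hypothetical, and the fsck pass, which rebuilds the hashes, wants the filesystem unmounted.]

  tune2fs -O dir_index /dev/md0
  e2fsck -fD /dev/md0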
On 23 Jun 2006, Christian Pedaschus said:
and my main point for using ext3 is still: it's a very mature fs, and
nobody will tell you such horrible stories about data loss with ext3
as with any other filesystem.
Actually I can, but it required bad RAM *and* a broken disk controller
*and* an
For some time, mdadm's been dumping core on me in my uClibc-built
initramfs. As you might imagine this is somewhat frustrating, not least
since my root filesystem's in LVM on RAID. Half an hour ago I got around
to debugging this.
Imagine my surprise when I found that it was effectively guaranteed
On 13 Jun 2006, Gordon Henderson said:
On Tue, 13 Jun 2006, Adam Talbot wrote:
Can any one give me more info on this error? Pulled from
/var/log/messages.
raid6: read error corrected!!
Not seen that one!!!
The message is pretty easy to figure out and the code (in
drivers/md/raid6main.c)
On 29 May 2006, Neil Brown suggested tentatively:
On Sunday May 28, [EMAIL PROTECTED] wrote:
- mdadm-2.4-strict-aliasing.patch
fix for another strict-aliasing problem: you can typecast a reference to a
void pointer to anything; you cannot typecast a reference to a
struct.
Why can't I
On 2 Jun 2006, Uwe Meyer-Gruhl uttered the following:
Neil's suggestion indicates that there may be a race condition
stacking md and dm over each other, but I have not yet tested that
patch. I once had problems stacking cryptoloop over RAID-6, so it
might really be a stacking problem. We don't
On 24 May 2006, Florian Dazinger uttered the following:
Neil Brown wrote:
Presumably you have a 'DEVICE' line in mdadm.conf too? What is it?
My first guess is that it isn't listing /dev/sdd? somehow.
Otherwise, can you add a '-v' to the mdadm command that assembles the
array, and capture the
On 23 May 2006, Neil Brown noted:
On Monday May 22, [EMAIL PROTECTED] wrote:
A few simple questions about the 2.6.16+ kernel and software RAID.
Does software RAID in the 2.6.16 kernel take advantage of SMP?
Not exactly. RAID5/6 tends to use just one cpu for parity
calculations, but that
On 10 May 2006, Dexter Filmore wrote:
Do I have to provide a stride parameter like for ext2?
Yes, definitely.
--
`On a scale of 1-10, X's brokenness rating is 1.1, but that's only
because bringing Windows into the picture rescaled brokenness by
a factor of 10.' --- Peter da Silva
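[Sketch, not from the thread: making the "yes, definitely" concrete for a hypothetical RAID5 of three disks (two data-bearing) with 64K chunks and 4K blocks, stride = chunk size / block size = 16.]

  mke2fs -j -b 4096 -E stride=16 /dev/md0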
On 23 Apr 2006, Mark Hahn stipulated:
I've seen a lot of cheap disks say (generally deep in the data sheet
that's only available online after much searching and that nobody ever
reads) that they are only reliable if used for a maximum of twelve hours
a day, or 90 hours a week, or something of
On 23 Apr 2006, Mark Hahn said:
some people claim that if you put a normal (desktop)
drive into a 24x7 server (with real round-the-clock load), you should
expect failures quite promptly. I'm inclined to believe that with
MTBF's upwards of 1M hour, vendors would not claim a
On 23 Mar 2006, Dan Christensen moaned:
To answer myself, the boot parameter raid=noautodetect is supposed
to turn off autodetection. However, it doesn't seem to have an
effect with Debian's 2.6.16 kernel. It does disable autodetection
for my self-compiled kernel, but since that kernel has
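[Sketch, not from the thread: where the parameter goes, e.g. on a grub kernel line; paths are hypothetical.]

  kernel /boot/vmlinuz-2.6.16 root=/dev/md0 raid=noautodetect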
On 23 Mar 2006, Daniel Pittman uttered the following:
The initramfs tool, which is mostly shared with Ubuntu, is less stupid.
It uses mdadm and a loop to scan through the devices found on the
machine and find what RAID levels are required, then builds the RAID
arrays with mdrun.
That's much
On Fri, 17 Mar 2006, Andre Noll murmured woefully:
On 00:41, Nix wrote:
So I downloaded iproute2-2.4.7-now-ss020116-try.tar.gz, but there
seems to be a problem with errno.h:
Holy meatballs that's ancient.
It is the most recent version on the ftp server mentioned in the HOWTO.
OK, so
On Thu, 16 Mar 2006, Neil Brown wrote:
On Wednesday March 15, [EMAIL PROTECTED] wrote:
On 08:29, Nix wrote:
Yeah, that would work. Neil's very *emphatic* about hardwiring the UUIDs of
your arrays, though I'll admit that given the existence of --examine
--scan,
I don't really see why
raid5: device sda7 operational as raid disk 1
raid5: allocated 3155kB for md2
raid5: raid level 5 set md2 active with 3 out of 3 devices, algorithm 2
Anyway, without further ado, here's usr/init:
#!/bin/sh
#
# init --- locate and mount root filesystem
# By Nix [EMAIL PROTECTED
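[The archive cuts the script off here. Sketch, not from the thread and not Nix's actual script: the usual shape of such an init, assuming busybox's switch_root and hypothetical VG/LV names.]

  #!/bin/sh
  mount -t proc proc /proc
  mount -t sysfs sysfs /sys
  mdadm --assemble --scan --auto=md    # bring up the arrays
  vgchange -ay                         # activate LVM on top of them
  mount -o ro /dev/rootvg/root /new-root
  umount /sys /proc
  exec switch_root /new-root /sbin/init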