I recently installed a server with mirrored disks using software RAID.
Everything was working fine for a few days until a normal reboot (not
the first). Now the machine will not boot because it appears the
superblock is wrong on some of the RAID devices on the first disk.
The rough layout
*
references but I'll check other scripts when (if? :) I get the system
back up and running.
Whilst the machine is not critical and is only a new install, I'd like
to keep fighting rather than give in if possible.
Thanks,
David
Quoting David [EMAIL PROTECTED]:
Or is the correct way
to remove the bad superblock drive from the array, mount the md,
remove the file then resync the array?
Common sense says this is correct.
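If that reading is right, the sequence would look something like this (a
sketch only; the array and member names are hypothetical):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop the member with the bad superblock
mount /dev/md0 /mnt                                  # array runs degraded; fix the filesystem
mdadm /dev/md0 --add /dev/sdb1                       # re-add; md resyncs it from the good half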
If it is possible to do either of the above, how do I stop the
recovery? It now starts
, or is
the correct method to have them as an md with the md initialised as
swap?
Brief details are the same as my previous mails last week: 2.6.15,
mdadm 1.12.0 (on md0, so I can't see that it is at fault).
Thanks,
David
On Thu, 17 May 2007, Neil Brown wrote:
On Thursday May 17, [EMAIL PROTECTED] wrote:
The only difference of any significance between the working
and non-working configurations is that in the non-working,
the component devices are larger than 2Gig, and hence have
sector offsets greater than 32
On Wed, 30 May 2007, David Chinner wrote:
On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote:
David Chinner wrote:
The use of barriers in XFS assumes the commit write to be on stable
storage before it returns. One of the ordering guarantees that we
need is that the transaction
On Thu, 31 May 2007, Jens Axboe wrote:
On Thu, May 31 2007, Phillip Susi wrote:
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour that only guarantees ordering. The filesystem can
) it's not uncommon to want to operate in degraded mode just long
enough to get to a maintenance window and then recreate the array and
reload from backup.
David Lang
/sec?
I'm putting 10x as much data through the bus at that point; it would seem
to prove that it's not the bus that's saturated.
David Lang
was the write
speed that was taking place, I thought it was the total data rate (reads
+ writes). The next time this message gets changed it would be a good
thing to clarify this.
David Lang
again I'll try iostat to get
more details
Also, how's your CPU utilization?
~30% of one cpu for the raid 6 thread, ~5% of one cpu for the resync
thread
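For the iostat run mentioned above, something like this gives per-device
throughput during the resync (a sketch; -x and the interval are standard
sysstat iostat usage):
iostat -x 5    # extended per-device statistics, repeated every 5 seconds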
David Lang
the total size of the array for the amount of work that needs to be done,
but then show only the write speed for the rate of progress being made
through the job.
total rebuild time was estimated at ~3200 min
David Lang
history (it's going to be a 30TB circular buffer being
fed by a pair of OC-12 links)
it appears that my big mistake was not understanding what /proc/mdstat is
telling me.
David Lang
/proc/mdstat was telling me. I thought that it was telling
me that the resync was processing 5M/sec, not that it was writing 5M/sec
on each of the two parity locations.
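To make that concrete (numbers hypothetical): on a 12-member RAID 6
resyncing with /proc/mdstat reporting 5M/sec, that figure is the write
rate to each of the two parity drives, while the ten data drives are each
being read at 5M/sec - so roughly 50M/sec of reads plus 10M/sec of writes
cross the bus, an order of magnitude more than the reported number.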
David Lang
On Fri, 22 Jun 2007, David Greaves wrote:
That's not a bad thing - until you look at the complexity it brings - and
then consider the impact and exceptions when you do, eg hardware
acceleration? md information fed up to the fs layer for xfs? simple long term
maintenance?
Often
(and code) to have two codebases
that try to do the same thing, one stand-alone, and one as a part of an
integrated solution (and it gets even worse if there end up being multiple
integrated solutions)
David Lang
appropriate
to let it be handled by the combining end user, like OCFS or GFS.
There are times when you want to replicate at the block layer, and there
are times when you want to have a filesystem do the work. Don't force a
filesystem on use-cases where a block device is the right answer.
David
in adding all the code to deal
with the network-type issues, then the argument that DRBD should not be
merged because you can do the same thing with MD/DM + NBD is invalid and
can be dropped/ignored
David Lang
On Sun, 12 Aug 2007, Paul Clements wrote:
Iustin Pop wrote:
On Sun, Aug 12, 2007
On Mon, 13 Aug 2007, David Greaves wrote:
[EMAIL PROTECTED] wrote:
Per the message below, MD (or DM) would need to be modified to work
reasonably well with one of the disk components being over an unreliable
link (like a network link).
are the MD/DM maintainers interested in extending
I'm new to the RAID under Linux world, and had a question. I successfully
installed Red Hat 6.2 with RAID 0 for two drives on a Sun Ultra 1. However
I'm trying to rebuild the kernel, and thought I'd play with 2.4test11 since
it has the RAID code built in, but to no avail. While it will auto
I'm new to the RAID under Linux world, and had a question. Sorry if several
posts have been made by me previously, I had some trouble subscribing to the
list...
I successfully installed Red Hat 6.2 with RAID 0 for two drives on a Sun
Ultra 1. However I'm trying to rebuild the kernel, and
. Anyone know of any good (easy to set up) applications for doing that,
or perhaps a shell script that might do the same thing?
David Christensen
You might have a look at one man's experience with a Terabyte configuration
of 16 IDE drives at http://www.research.att.com/~gjm/linux/ide-raid.html.
David Christensen
I'm working on an 18-disk RAID system, but I've heard a couple of
responses implying that larger arrays do not work well
cause them to be recognized the same.
Thanks in advance for any help,
David
was still there but I got
warnings about partitions not ending on cylinder boundaries. A quick reboot
later and dmesg reports the same drive parameters and everything works great!
Thanks for the brilliant and speedy response! Both of my RAID1's are currently
happily adding their mirrors!
David
a difference, I am running linux-2.4.26
Thanks
--David Dougall
Not sure if it is important to many people, but tapes take a lot less
electricity than online disks.
--David Dougall
On Tue, 22 Feb 2005, Jon Lewis wrote:
On Tue, 22 Feb 2005, Alvin Oga wrote:
Better depends on what you want/need/can afford. Last time I was tape
shopping, I thought
does it mean that the superblock
is up to date?
In fact isn't that misleading?
Surely, if anything, the spare _should_ have an out of date superblock?
David
. It's a
teeny bit rough and a bit OTT for a personal server though so I'm
sticking with md/lvm2 for now :)
David
4) I use xfs. Has anyone used xfs_growfs?
Yes - it's been flawless.
I've used it on lvm2 over md
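For reference, the grow sequence on lvm2 over md is short (a sketch;
volume names hypothetical, and xfs_growfs acts on the mounted filesystem):
lvextend -L +50G /dev/vg0/data   # grow the logical volume first
xfs_growfs /mnt/data             # then grow XFS to fill it, online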
David
be
interested in:
CONFIG_MD_FAULTY:
The faulty module allows for a block device that occasionally returns
read or write errors. It is useful for testing.
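A sketch of using it (device names hypothetical; the failure modes and
the --grow --layout syntax are described in the mdadm man page):
mdadm --create /dev/md1 --level=faulty --raid-devices=1 /dev/sdb1
mdadm --grow /dev/md1 --layout=rt100   # transient read error roughly every 100 requests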
HTH
David
kernel version, mdadm version?
David
the mail right down :)
David
Mitchell Laks wrote:
On Sunday 13 March 2005 10:49 am, David Greave wrote: Many Helpful remarks:
David I am grateful that you were there for me.
No probs - we've all been there!
My assessment (correct me if I am wrong) is that I have to rethink my
architecture. As I continue to work
In my experience, if you are concerned about filesystem performance, don't
use ext3. It is one of the slowest filesystems I have ever used
especially for writes. I would suggest either reiserfs or xfs.
--David Dougall
On Fri, 11 Mar 2005, Arshavir Grigorian wrote:
Hi,
I have a RAID5 array
needed to do this (it won't extend a
degraded array, though I don't know if rr will either...)
FWIW I migrated to an EVMS setup and back to plain md/lvm2 without any
issues.
AFAIK raidreconf is unmaintained.
I know which I'd steer clear of...
David
Mike Hardy wrote:
Hello all -
This is more
This is just a potentially interesting forwarded mail from the EVMS
mailing list to illustrate the kind of issues/responses to the raid5
resize questions...
David
[EMAIL PROTECTED] wrote on 03/01/2005 09:16:51 AM:
I read in the evms user guide that it should be possible but I can't
seem
to find
for it? ;) I'd love to use EVMS on my new fileserver if it supported
RAID6.
David
?
no - striping is not mirroring.
The kernel will fail to read data on the crashed disk - game over.
I.e. do I have to let my swap disk be a RAID setup too if I want it to
continue upon disk crash?
yes - a mirror, not a stripe.
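A sketch of that (partition names hypothetical):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1
swapon /dev/md1   # swap now survives the loss of either disk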
David
or:
no, it would be mighty strange if the raid subsystem just grabbed every
new disk it saw...
Think of what would happen when I insert my camera's compact flash card
and it suddenly gets used as a hot spare <grin>
I'll leave Luca's last word - although it's also worth re-reading Peter's
first words!!
David
in there...
David
-disk /dev/sda
So this command could mark as faulty and remove from the array any
implied partition(s) of the disk to be removed.
see above 1-liner...
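With mdadm as it stands, that takes a small loop, e.g. (a sketch; array
and partition names hypothetical):
for p in /dev/sda[1-4]; do mdadm /dev/md0 --fail $p --remove $p; done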
David
control ones.
I do think you would need to ask Neil to support
mdadm --sync-/dev/sdc-to-replace-/dev/sdg-even-though-/dev/sdg-is-fine
mdadm --use-/dev/sdc-and-make-/dev/sdg-spare
which would be especially useful if /dev/sdg were part of a shared
spares pool.
David
not sure about this but it looks like the problem is occurring at a lower
level than md.
I'd take it over to linux-ide and/or hotplug.
linux-ide is at linux-ide@vger.kernel.org
I don't know about hotplug.
It would help to tell them what kernel you're running too <grin>
HTH
David
an
XFS file system on a 200GB mirrored RAID array, two drives,
on separate IDE channels (separate cables).
Thanks for your time,
- --
David Kowis
ISO Team Lead - www.sourcemage.org
SourceMage GNU/Linux
One login to rule them all, one login to find them. One login to bring them
all
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
whoops, I was mistaken, and a fool for not checking, but I don't use XFS;
it's reiserfs on the 200GB array.
Sorry about the second mail.
David Kowis wrote:
I'm not entirely sure if this is mdadm's fault, but I cannot find anything
else that would
%0.06K 1 61 4K uid_cache
61 1 1% 0.06K 1 61 4K inet_peer_cache
59 59 100% 4.00K 59 1 236K pgd
What does 'cat /proc/slabinfo' show?
I've attached my /proc/slabinfo.
Thanks :)
David
--
One login to rule them all, one login
Quoting Guy [EMAIL PROTECTED]:
Run ipcs to see if you have shared memory usage that seems wrong, or
grows.
# ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 65536      root       600        33554432   11
Dan Christensen wrote:
Ming Zhang [EMAIL PROTECTED] writes:
testing on a production environment is too dangerous. :P
and many benchmark tools you cannot run, either.
Well, I put production in quotes because this is just a home mythtv
box. :-) So there are plenty of times when it is
And notice you can apply different readahead to:
The raw devices (/dev/sda)
The md device (/dev/mdX)
Any lvm device (/dev/lvm_name/lvm_device)
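A sketch of inspecting and setting it at each layer (values and names
hypothetical; readahead is counted in 512-byte sectors):
blockdev --getra /dev/sda        # raw device
blockdev --setra 4096 /dev/md0   # md device
blockdev --getra /dev/vg0/data   # lvm device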
David
Raz Ben Jehuda wrote:
read the blockdev man page
On Thu, 2005-08-04 at 16:06 +0200, [EMAIL PROTECTED] wrote:
Hi list, Neil!
I have a little
file, then it's a different story.
Have you tried / can you try XFS?
IIRC it is very good indeed at this kind of scenario (it used to be an
*excellent* nntp server fs).
David
combination
David
!)
Finally, watch the filesystem - eg xfs is excellent for big files but
can't shrink.
HTH
David
Ross Vandegrift wrote:
On Thu, Jan 12, 2006 at 11:16:36AM +, David Greaves wrote:
ok, first off: a 14 device raid1 is 14 times more likely to lose *all*
your data than a single device.
No, this is completely incorrect. Let A denote the event that a single
disk has failed, A_i
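The correction presumably continues along these lines (a reconstruction,
not the original text): if each disk independently fails with probability
p, a 14-way RAID 1 loses *all* data only when all 14 members fail, i.e.
with probability p^14 - astronomically smaller than p, not 14 times
larger. What does scale as roughly 14p is the chance that *some* member
needs replacing.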
if I've missed the reason that this is a bad idea.
David
before attempting repair on 'working' images.
(Of course you need lots of disk space so you may need new disks -
depends how valuable your data is)
HTH
David
http://www.trustedreviews.com/article.aspx?art=1014
David
PS Mitchell, replies direct to you bounce. Verizon are apparently still
blocking us 'dangerous' european spammers! Maybe consider switching to
an ISP that's less antisocial ? :)
} Sent: Thursday, February 02, 2006 1:42 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: RAID 16?
}
} Matthias Urlichs [EMAIL PROTECTED] wrote:
} Hi, David Liontooth wrote:
}* define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS
} will
} Hmm. You'd have eight disks, five(!) may
Mattias Wadenstein wrote:
On Sun, 5 Feb 2006, David Liontooth wrote:
In designing an archival system, we're trying to find data on when it
pays to power or spin the drives down versus keeping them running.
Hitachi claims 5 years (Surface temperature of HDA is 45°C or less)
Life
/Faulty-RAIDDisk.img
/mnt/hdb1/Faulty-RAIDDisk.log
This will be much quicker because the log file contains details of the
faulty sectors.
With luck (mucho luck) you may not even lose data.
David
problems in 'fua' (IIRC) handling which was pulled for 2.6.16.
2.6.16 seems to be much better (fewer 'odd' errors reported and md
doesn't mind)
David
PS Mitchell - you're still using Verizon and I still live off the edge
of their known world (in the UK) so I don't expect you'll get this reply
- hard
BadCRC
Look here:
http://marc.theaimsgroup.com/?l=linux-kernel&m=114386015009790&w=2
I don't know that he's right - you may want to get into it...
David
of 3 then it won't need the assume-clean.
The detail and dmesg data suggest that the order in the command above
is correct.
Can anyone confirm this?
Thanks
David
need to find out if I have bad hardware or if there is something
(else) wrong with libata :)
David
really should read up on mdadm -F -
it runs as a daemon and sends you mail if any raid events occur.
See if FC4 has a script that automatically runs it - you may need to
tweak some config parameters somewhere (I use Debian so I'm not much help).
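A sketch of running it by hand (mail address hypothetical; flags per the
mdadm man page):
mdadm --monitor --scan --daemonise --mail=admin@example.com --delay=300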
David
to do it:
mdadm -S /dev/md0                        # stop the array
mdadm -A /dev/md0 --force /dev/sd[abd]   # reassemble from the good members
mdadm /dev/md0 --add /dev/sdv            # add the replacement
Typo: this last line should be:
mdadm /dev/md0 --add /dev/sdc
David
mail message. After that, you
shouldn't get any bounces from me. Sorry if this is
an inconvenience.
David
dd (possibly threaded so streams both drives rather
than read a drive, write a drive)
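The "threaded" idea can be had with two plain dd processes in parallel
(device names hypothetical; triple-check if= and of= before running):
dd if=/dev/sda of=/dev/sdc bs=1M &
dd if=/dev/sdb of=/dev/sdd bs=1M &
wait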
David
Molle Bestefich wrote:
Anyway, a quick cheat sheet might come in handy:
Which is why I posted about a wiki a few days back :)
I'm progressing it and I'll see if we can't get something up.
There's a lot of info on the list and it would be nice to get it a
little more focused...
David
Having used both dd_rescue/dd_rhelp and GNU ddrescue in anger, I'd
suggest GNU ddrescue.
http://www.gnu.org/software/ddrescue/ddrescue.html
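A typical two-pass invocation (paths hypothetical; check the manual for
your version's options):
ddrescue -n /dev/sda /mnt/spare/disk.img /mnt/spare/disk.log   # fast pass, skip problem areas
ddrescue -r3 /dev/sda /mnt/spare/disk.img /mnt/spare/disk.log  # retry bad areas up to 3 times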
David
of failing, it simply restarts the resync.
I imagine the two are related - maybe 'set faulty' simply simulates an
i/o error on the member, but during resync, the behavior is 'retry'.
Is there anything that can be done about this (other than politely asking
the vendor for a fix ;-)?
David
David
on
demand.
David
devices and the raid device?
David
On 6/23/06, Nix [EMAIL PROTECTED] wrote:
On 23 Jun 2006, PFC suggested tentatively:
- ext3 is slow if you have many files in one directory, but has
more mature tools (resize, recovery etc)
This is much less true if you turn on the dir_index feature.
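For reference, enabling it on an existing ext3 filesystem is a two-step
sketch (device name hypothetical; run e2fsck on an unmounted filesystem):
tune2fs -O dir_index /dev/md0
e2fsck -fD /dev/md0   # -D rebuilds and optimizes the directory indexes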
However, even with dir_index,
to contribute' (just so I
can keep track of interested parties) and we can build something up...
David
Neil Brown wrote:
I guess I could test for both, but then udev might change again.
I'd really like a more robust check.
Maybe I could test if /dev was a mount point?
IIRC you can have diskless machines with a shared root and nfs mounted
static /dev/
David
Francois Barre wrote:
Hello David, all,
You pointed to http://linux-raid.osdl.org as a future resource for a
SwRAID and MD knowledge base.
Yes - it's not ready for public use yet so I've not announced it formally
- I just mention it to people when things pop up.
In fact, the TODO page
-01 #3 PREEMPT Sat Jun 3 09:20:24 BST
2006 i686 GNU/Linux
teak:~# mdadm -V
mdadm - v2.5.2 - 27 June 2006
David
On Tue, Jul 18, 2006 at 06:58:56PM +1000, Neil Brown wrote:
On Tuesday July 18, [EMAIL PROTECTED] wrote:
On Mon, Jul 17, 2006 at 01:32:38AM +0800, Federico Sevilla III wrote:
On Sat, Jul 15, 2006 at 12:48:56PM +0200, Martin Steigerwald wrote:
I am currently gathering information to write
David Greaves wrote:
Hi
After a powercut I'm trying to mount an array and failing :(
A reboot after tidying up /dev/ fixed it.
The first time through I'd forgotten to update the boot scripts and they
were assembling the wrong UUID. That was fine; I realised this and ran
the manual assemble
FAQ:
http://oss.sgi.com/projects/xfs/faq.html#dir2
It appears that efforts are being focused on the repair tools now.
It appears to me that the best response is to patch the kernel, reboot,
backup the fs, recreate the fs and restore - but please read up before
taking any action.
David
HTH
David
process, bring up hostname-raid6 by
--name too.
mdadm --assemble --scan --config partitions --name hostname-raid6
David
impact
As an example of the cons: I've just set up lvm2 over my raid5 and whilst
testing snapshots, the first thing that happened was a kernel BUG and an oops...
David
On 8/10/06, dean gaudet [EMAIL PROTECTED] wrote:
- set up smartd to run long self tests once a month. (stagger it every
few days so that your disks aren't doing self-tests at the same time)
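In smartd.conf terms that staggering looks something like this (a sketch;
devices hypothetical, schedule regexp per smartd.conf(5)):
/dev/sda -a -s L/../../1/03   # long self-test Mondays at 03:00
/dev/sdb -a -s L/../../4/03   # same, Thursdays, so the disks don't test together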
I personally prefer to do a long self-test once a week, a month seems
like a lot of time for
be going nuts, as it does not appear as an option. Below
is the list under Device Drivers if I do a make menuconfig:
Recently reported on lkml
Andrew Morton said:
ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.18-rc5/2.6.18-rc5-mm1/hot-fixes/
contains a fix for this.
HTH
David
On 9/8/06, Ruth Ivimey-Cook [EMAIL PROTECTED] wrote:
I messed up slightly when creating a new 6-disk raid6 array, and am wondering
if there is a simple answer. The problem is that I didn't partition the drives,
but simply used the whole drive. All drives are of the same type and using the
that the mirror will work when it's needed?
Read up on the md-faulty device.
Also, FWIW, md works just fine :)
(Lots of other things can go wrong so testing your setup is a good idea though)
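A sketch of a basic drill (names hypothetical): fail a member, confirm
the array keeps running, then re-add it:
mdadm /dev/md0 --fail /dev/sdb1
cat /proc/mdstat                  # should show the array degraded but alive
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1    # triggers a resync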
David
Typo in first line of this patch :)
I have had enough success reports not^H^H^H to believe that this
is safe for 2.6.19.
?
No (eg EVMS)
David
andy liebman wrote:
Feel free to add it here:
http://linux-raid.osdl.org/index.php/Main_Page
I haven't been able to do much for a few weeks (typical - I find some
time and
use it all up just getting the basic setup done - still it's started!)
David
Any hints on how to add a page
Nix wrote:
On 2 Oct 2006, David Greaves spake:
I suggest you link from http://linux-raid.osdl.org/index.php/RAID_Boot
The pages don't really have the same purpose. RAID_Boot is `how to boot
your RAID system using initramfs'; this is `how to set up a RAID system
in the first place', i.e
On 10/14/06, Lane Brooks [EMAIL PROTECTED] wrote:
I am wondering if there is a way to cut my losses with these bad sectors
and have it recover what it can so that I can get my raid array back to
functioning. Right now I cannot get a spare disk recovery to finish
because of these bad sectors. Is
? Or is
this now a replacement?
You should be OK - I'll reply quickly now and see if I can make some suggestions
later (or sooner).
David
When I try to rebuild the array mdadm --assemble /dev/md0 /dev/sda2
/dev/sdb2 /dev/sdc2 /dev/sdd2 I see failed to RUN_ARRAY /dev/md0:
Input/output error
Gordon Henderson wrote:
1747 ?S 724:25 [md9_raid5]
It's kernel 2.6.18 and
Wasn't the module merged to raid456 in 2.6.18?
Are your mdX_raid6s on earlier kernels? My RAID 6 is on 2.6.17 and says _raid6.
Could it be that the combined kernel thread is called mdX_raid5?
David
Neil Brown wrote:
Patches to the man page to add useful examples are always welcome.
And if people would like to be more verbose, the wiki is available at
http://linux-raid.osdl.org/
It's now kinda useful but definitely not fully migrated from the old RAID FAQ.
David