Checksums wrong on one disk of mirror

2006-11-07 Thread David
I recently installed a server with mirrored disks using software RAID. Everything was working fine for a few days until a normal reboot (not the first). Now the machine will not boot because it appears the superblock is wrong on some of the RAID devices on the first disk. The rough layout

Re: Checksums wrong on one disk of mirror

2006-11-07 Thread David
* references but I'll check other scripts when (if? :) I get the system back up and running. Whilst the machine is not critical and is only a new install, I'd like to keep fighting rather than give in if possible. Thanks, David - To unsubscribe from this list: send the line unsubscribe

Re: Checksums wrong on one disk of mirror

2006-11-08 Thread David
Quoting David [EMAIL PROTECTED]: Or is the correct way to remove the bad superblock drive from the array, mount the md, remove the file then resync the array? Common sense says this is correct. If it is possible to do either of the above, how do I stop the recovery? It now starts
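The procedure the poster is asking about maps onto a standard mdadm sequence. A sketch with hypothetical device names (/dev/md0, /dev/sda1 — the thread's actual layout is truncated), not the poster's exact commands:

```shell
# Sketch only: fail and remove the member with the bad superblock,
# work on the still-running degraded array, then re-add to resync.
# Device names are hypothetical; requires root.
mdadm /dev/md0 --fail /dev/sda1       # mark the suspect member faulty
mdadm /dev/md0 --remove /dev/sda1     # detach it from the array
mount /dev/md0 /mnt                   # array stays usable, degraded
# ... remove or repair the offending file ...
mdadm --zero-superblock /dev/sda1     # wipe the stale superblock
mdadm /dev/md0 --add /dev/sda1        # re-add; md starts a resync
```

Re-adding rather than recreating the array avoids touching the good member's superblock at all.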

Swap initialised as an md?

2006-11-10 Thread David
, or is the correct method to have them as an md with the md initialised as swap? Brief details are the same as my previous mails last week: 2.6.15, mdadm 1.12.0 (on md0, so I can't see that it is at fault). Thanks, David

RE: Software raid0 will crash the file-system, when each disk is 5TB

2007-05-16 Thread david
On Thu, 17 May 2007, Neil Brown wrote: On Thursday May 17, [EMAIL PROTECTED] wrote: The only difference of any significance between the working and non-working configurations is that in the non-working, the component devices are larger than 2Gig, and hence have sector offsets greater than 32

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-29 Thread david
On Wed, 30 May 2007, David Chinner wrote: On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote: David Chinner wrote: The use of barriers in XFS assumes the commit write to be on stable storage before it returns. One of the ordering guarantees that we need is that the transaction

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-30 Thread david
On Wed, 30 May 2007, David Chinner wrote: On Tue, May 29, 2007 at 05:01:24PM -0700, [EMAIL PROTECTED] wrote: On Wed, 30 May 2007, David Chinner wrote: On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote: David Chinner wrote: The use of barriers in XFS assumes the commit write

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-31 Thread david
On Thu, 31 May 2007, Jens Axboe wrote: On Thu, May 31 2007, Phillip Susi wrote: David Chinner wrote: That sounds like a good idea - we can leave the existing WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED behaviour that only guarantees ordering. The filesystem can

Re: limits on raid

2007-06-16 Thread david
) it's not uncommon to want to operate in degraded mode just long enough to get to a maintenance window and then recreate the array and reload from backup. David Lang

Re: limits on raid

2007-06-18 Thread david
/sec? I'm putting 10x as much data through the bus at that point, it would seem to prove that it's not the bus that's saturated. David Lang

Re: limits on raid

2007-06-18 Thread david
was the write speed that was taking place, I thought it was the total data rate (reads + writes). The next time this message gets changed it would be a good thing to clarify this. David Lang

Re: limits on raid

2007-06-18 Thread david
again I'll try iostat to get more details Also, how's your CPU utilization? ~30% of one cpu for the raid 6 thread, ~5% of one cpu for the resync thread David Lang

Re: limits on raid

2007-06-18 Thread david
the total size of the array for the amount of work that needs to be done, but then show only the write speed for the rate of progress being made through the job. total rebuild time was estimated at ~3200 min David Lang

Re: limits on raid

2007-06-18 Thread david
history (it's going to be a 30TB circular buffer being fed by a pair of OC-12 links) it appears that my big mistake was not understanding what /proc/mdstat is telling me. David Lang

Re: limits on raid

2007-06-19 Thread david
/proc/mdstat was telling me. I thought that it was telling me that the resync was processing 5M/sec, not that it was writing 5M/sec on each of the two parity locations. David Lang

Re: limits on raid

2007-06-22 Thread david
On Fri, 22 Jun 2007, David Greaves wrote: That's not a bad thing - until you look at the complexity it brings - and then consider the impact and exceptions when you do, eg hardware acceleration? md information fed up to the fs layer for xfs? simple long term maintenance? Often

Re: limits on raid

2007-06-22 Thread david
(and code) to have two codebases that try to do the same thing, one stand-alone, and one as a part of an integrated solution (and it gets even worse if there end up being multiple integrated solutions) David Lang

Re: [RFD] Layering: Use-Case Composers (was: DRBD - what is it, anyways? [compare with e.g. NBD + MD raid])

2007-08-12 Thread david
appropriate to let it be handled by the combining end user, like OCFS or GFS. there are times when you want to replicate at the block layer, and there are times when you want to have a filesystem do the work. don't force a filesystem on use-cases where a block device is the right answer. David

Re: [RFD] Layering: Use-Case Composers (was: DRBD - what is it, anyways? [compare with e.g. NBD + MD raid])

2007-08-12 Thread david
in adding all the code to deal with the network type issues, then the argument that DRBD should not be merged because you can do the same thing with MD/DM + NBD is invalid and can be dropped/ignored David Lang On Sun, 12 Aug 2007, Paul Clements wrote: Iustin Pop wrote: On Sun, Aug 12, 2007

Re: [RFD] Layering: Use-Case Composers (was: DRBD - what is it, anyways? [compare with e.g. NBD + MD raid])

2007-08-13 Thread david
On Mon, 13 Aug 2007, David Greaves wrote: [EMAIL PROTECTED] wrote: per the message below MD (or DM) would need to be modified to work reasonably well with one of the disk components being over an unreliable link (like a network link) are the MD/DM maintainers interested in extending

Unable to Read Superblock

2000-12-02 Thread David Nellans
I'm new to the raid under linux world, and had a question. I successfully installed redhat 6.2 with raid 0 for two drives on a sun ultra 1. However I'm trying to rebuild the kernel, and thought I'd play with 2.4test11 since it has the raid code built in, but to no avail. While it will auto

Ex2FS unable to read superblock

2000-12-03 Thread David Nellans
I'm new to the raid under linux world, and had a question. Sorry if several posts have been made by me previously, I had some trouble subscribing to the list... I successfully installed redhat 6.2 with raid 0 for two drives on a sun ultra 1. However I'm trying to rebuild the kernel, and

Best way to test a new RAID configuration

2001-03-16 Thread David Christensen
. Anyone know of any good (easy to setup) applications for doing that, or perhaps a shell script that might do the same thing? David Christensen

RE: looking for PC hardware recommendations

2001-04-02 Thread David Christensen
You might have a look at one man's experience with a Terabyte configuration of 16 IDE drives at http://www.research.att.com/~gjm/linux/ide-raid.html. David Christensen I'm working on an 18-disk raid system, but I've heard a couple responses implying that larger sized arrays do not work well

Two identical hard drives with different CHS... help please!

2001-06-28 Thread David Nedved
cause them to be recognized the same. Thanks in advance for any help, David

Re: Two identical hard drives with different CHS... help please!

2001-06-28 Thread David Nedved
was still there but I got warnings about partitions not ending on cylinder boundaries. A quick reboot later and dmesg reports the same drive parameters and everything works great! Thanks for the brilliant and speedy response! Both of my RAID1's are currently happily adding their mirrors! David

No response?

2005-01-20 Thread David Dougall
a difference, I am running linux-2.4.26 Thanks --David Dougall

Re: [OT] best tape backup system?

2005-02-22 Thread David Dougall
Not sure if it is important to many people, but tapes take a lot less electricity than online disks. --David Dougall On Tue, 22 Feb 2005, Jon Lewis wrote: On Tue, 22 Feb 2005, Alvin Oga wrote: Better depends on what you want/need/can afford. Last time I was tape shopping, I thought

Re: Spare disk could not sleep / standby

2005-03-08 Thread David Greaves
does it mean that the superblock is up to date? In fact isn't that misleading? Surely, if anything, the spare _should_ have an out of date superblock? David

Re: md Grow for Raid 5

2005-03-10 Thread David Greaves
. It's a teeny bit rough and a bit OTT for a personal server though so I'm sticking with md/lvm2 for now :) David

Re: raidreconfig advice

2005-03-12 Thread David Greaves
4) I use xfs. Has anyone used xfs_growfs? Yes - it's been flawless. I've used it on lvm2 over md David

Re: Raid Recovery after Machine Failure

2005-03-13 Thread David Greaves
be interested in: CONFIG_MD_FAULTY: The faulty module allows for a block device that occasionally returns read or write errors. It is useful for testing. HTH David
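The faulty personality described here can be driven from mdadm. A sketch, assuming CONFIG_MD_FAULTY is enabled and using a loop device as a hypothetical stand-in; the layout codes (rp = read-persistent, period appended) are from the mdadm man page as I recall it, so verify against your version:

```shell
# Sketch: create a single-member "faulty" array for error-injection tests.
# /dev/loop0 is a hypothetical stand-in for a real member device.
mdadm --create /dev/md9 --level=faulty --raid-devices=1 /dev/loop0
# Inject a persistent read error roughly every 10th read request:
mdadm --grow /dev/md9 --layout=rp10
# Clear the fault injection again:
mdadm --grow /dev/md9 --layout=clear
```

Layering a test filesystem or a mirror on top of /dev/md9 then lets you watch how the stack reacts to the injected errors.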

Re: disaster. raid1 drive failure rsync=DELAYED why?? please help

2005-03-13 Thread David Greaves
kernel version, mdadm version? David

Re: disaster. raid1 drive failure rsync=DELAYED why?? please help

2005-03-13 Thread David Greaves
the mail right down :) David

Re: disaster. raid1 drive failure rsync=DELAYED why?? please help

2005-03-14 Thread David Greaves
Mitchell Laks wrote: On Sunday 13 March 2005 10:49 am, David Greaves wrote: Many helpful remarks: David I am grateful that you were there for me. No probs - we've all been there! My assessment (correct me if I am wrong) is that I have to rethink my architecture. As I continue to work

Re: Postgres on RAID5

2005-03-16 Thread David Dougall
In my experience, if you are concerned about filesystem performance, don't use ext3. It is one of the slowest filesystems I have ever used, especially for writes. I would suggest either reiserfs or xfs. --David Dougall On Fri, 11 Mar 2005, Arshavir Grigorian wrote: Hi, I have a RAID5 array

Re: raidreconf / growing raid 5 doesn't seem to work anymore

2005-04-03 Thread David Greaves
needed to do this (it won't extend a degraded array, though I don't know if rr will either...) FWIW I migrated to an EVMS setup and back to plain md/lvm2 without any issues. AFAIK raidreconf is unmaintained. I know which I'd steer clear of... David Mike Hardy wrote: Hello all - This is more

[Fwd: Re: [Evms-devel] md raid5 expand possible yet?]

2005-04-04 Thread David Greaves
This is just a potentially interesting forwarded mail from the EVMS mailing list to illustrate the kind of issues/responses to the raid5 resize questions... David [EMAIL PROTECTED] wrote on 03/01/2005 09:16:51 AM: I read in the evms user guide that it should be possible but I can't seem to find

Re: EVMS or md?

2005-04-04 Thread David Kewley
for it? ;) I'd love to use evms on my new fileserver if it supported RAID6. David

Re: Questions about software RAID

2005-04-18 Thread David Greaves
? no - striping mirroring The kernel will fail to read data on the crashed disk - game over. I.e. do I have to let my swap disk be a RAID-setup too if I want it to continue upon disk crash? yes - a mirror, not a stripe. David

Re: Questions about software RAID

2005-04-19 Thread David Greaves
. or: no, it would be mighty strange if the raid subsystem just grabbed every new disk it saw... Think of what would happen when I insert my camera's compact flash card and it suddenly gets used as a hot spare grin I'll leave Luca's last word - although it's also worth re-reading Peter's first words!! David

Re: waiting for recovery to complete

2005-04-19 Thread David Greaves
in there... David

Re: Questions about software RAID

2005-04-19 Thread David Greaves
-disk /dev/sda So this command could mark as faulty and remove from the array any implied partition(s) of the disk to be removed. see above 1 liner... David

Re: Questions about software RAID

2005-04-20 Thread David Greaves
control ones. I do think you would need to ask Neil to support mdadm --sync-/dev/sdc-to-replace-/dev/sdg-even-though-/dev/sdg-is-fine mdadm --use-/dev/sdc-and-make-/dev/sdg-spare which would be especially useful if /dev/sdg were part of a shared spares pool. David

Re: Raid1 stalls during hotplug and Promise SATA TX4

2005-04-22 Thread David Greaves
not sure about this but it looks like the problem is occurring at a lower level than md. I'd take it over to ide-linux and/or hotplug. ide-linux is at linux-ide@vger.kernel.org I don't know about hotplug It would help to tell them what kernel you're running too grin HTH David

mdadm memory leak?

2005-07-04 Thread David Kowis
an XFS file system on a 200Gb mirrored RAID array, two drives, on separate IDE channels (separate cables.) Thanks for your time, - -- David Kowis ISO Team Lead - www.sourcemage.org SourceMage GNU/Linux One login to rule them all, one login to find them. One login to bring them all

Re: mdadm memory leak?

2005-07-04 Thread David Kowis
Whoops, I was mistaken, and a fool for not checking, but I don't use XFS, it's reiserfs on the 200gb array. Sorry about the second mail. David Kowis wrote: I'm not entirely sure if this is mdadm's fault, but I cannot find anything else that would

Re: mdadm memory leak?

2005-07-05 Thread David Kowis
[truncated /proc/slabinfo excerpt: uid_cache, inet_peer_cache, pgd entries] What does 'cat /proc/slabinfo' show? I've attached my /proc/slabinfo. Thanks :) David -- One login to rule them all, one login

RE: mdadm memory leak?

2005-07-05 Thread David Kowis
Quoting Guy [EMAIL PROTECTED]: Run ipcs to see if you have shared memory usage that seems wrong, or grows. # ipcs -m [truncated shared-memory segment listing: key, shmid, owner, perms, bytes, nattch, status]

Re: RAID-5 streaming read performance

2005-07-13 Thread David Greaves
Dan Christensen wrote: Ming Zhang [EMAIL PROTECTED] writes: test on a production environment is too dangerous. :P and many benchmark tool u can not perform as well. Well, I put production in quotes because this is just a home mythtv box. :-) So there are plenty of times when it is

Re: RAID5 question.

2005-08-04 Thread David Greaves
And notice you can apply different readahead to: The raw devices (/dev/sda) The md device (/dev/mdX) Any lvm device (/dev/lvm_name/lvm_device) David Raz Ben Jehuda wrote: read the blockdev man page On Thu, 2005-08-04 at 16:06 +0200, [EMAIL PROTECTED] wrote: Hi list, Neil! I have a little
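The point about applying different readahead at each layer can be seen directly with blockdev; the device names below are illustrative:

```shell
# Readahead (reported in 512-byte sectors) is tuned independently per layer.
blockdev --getra /dev/sda          # raw disk
blockdev --getra /dev/md0          # md array on top of it
blockdev --getra /dev/vg0/lv0      # LVM logical volume on top of the array
blockdev --setra 4096 /dev/md0     # e.g. raise readahead on the md layer only
```

Reads served through the LVM device use that device's setting, so tuning only the raw disks may have no visible effect on filesystem streaming reads.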

Re: split RAID1 during backups?

2005-10-25 Thread David Greaves
file, then it's a different story. Have you tried / can you try XFS. IIRC it is very good indeed at this kind of scenario (used to be an *excellent* nntp server fs) David

Re: RAID5 problem

2005-12-08 Thread David Greaves
combination David

Re:

2006-01-12 Thread David Greaves
!) finally, watch the filesystem - eg xfs is excellent for big files but can't shrink HTH David

Re:

2006-01-17 Thread David Greaves
Ross Vandegrift wrote: On Thu, Jan 12, 2006 at 11:16:36AM +, David Greaves wrote: ok, first off: a 14 device raid1 is 14 times more likely to lose *all* your data than a single device. No, this is completely incorrect. Let A denote the event that a single disk has failed, A_i

Re: [PATCH 000 of 5] md: Introduction

2006-01-20 Thread David Greaves
if I've missed the reason that this is a bad idea. David

Re: linear raid space utilization

2006-01-21 Thread David Greaves
before attempting repair on 'working' images. (Of course you need lots of disk space so you may need new disks - depends how valuable your data is) HTH David

Re: multiple Sata SATAII 150, TX4 - how to tell which drive is which?headaches galore!

2006-01-24 Thread David Greaves
://www.trustedreviews.com/article.aspx?art=1014 David PS Mitchell, replies direct to you bounce. Verizon are apparently still blocking us 'dangerous' European spammers! Maybe consider switching to an ISP that's less antisocial? :)

Re: RAID 16?

2006-02-02 Thread David Liontooth
} Sent: Thursday, February 02, 2006 1:42 PM } To: linux-raid@vger.kernel.org } Subject: Re: RAID 16? } } Matthias Urlichs [EMAIL PROTECTED] wrote: } Hi, David Liontooth wrote: }* define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS } will } Hmm. You'd have eight disks, five(!) may

Re: Hard drive lifetime: wear from spinning up or rebooting vs running

2006-02-06 Thread David Liontooth
Mattias Wadenstein wrote: On Sun, 5 Feb 2006, David Liontooth wrote: In designing an archival system, we're trying to find data on when it pays to power or spin the drives down versus keeping them running. Hitachi claims 5 years (Surface temperature of HDA is 45°C or less) Life

Re: RAID 5 inaccessible - continued

2006-02-14 Thread David Greaves
/Faulty-RAIDDisk.img /mnt/hdb1/Faulty-RAIDDisk.log This will be much quicker because the log file contains details of the faulty sectors. With luck (mucho luck) you may not even lose data. David

Re: sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }

2006-03-30 Thread David Greaves
problems in 'fua' (IIRC) handling which was pulled for 2.6.16. 2.6.16 seems to be much better (fewer 'odd' errors reported and md doesn't mind) David PS Mitchell - you're still using Verizon and I still live off the edge of their known world (in the UK) so I don't expect you'll get this reply - hard

Re: sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }

2006-04-01 Thread David Greaves
BadCRC } Look here: http://marc.theaimsgroup.com/?l=linux-kernel&m=114386015009790&w=2 I don't know if he's right - you may want to get into it... David

Help recreating a raid5

2006-04-02 Thread David Greaves
of 3 then it won't need the assume-clean. The detail and dmesg data suggests that the order in the command above is correct. Can anyone confirm this? Thanks David

Re: Help recreating a raid5

2006-04-03 Thread David Greaves
need to find out if I have bad hardware or if there is something (else) wrong with libata :) David

Re: Help needed - RAID5 recovery from Power-fail

2006-04-04 Thread David Greaves
really should read up on mdadm -F - it runs as a daemon and sends you mail if any raid events occur. See if FC4 has a script that automatically runs it - you may need to tweak some config parameters somewhere (I use Debian so I'm not much help). David
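The mdadm -F (monitor) mode referred to here is typically started along these lines; the mail address and delay are illustrative, not from the thread:

```shell
# Sketch: monitor all arrays found via /etc/mdadm.conf (--scan),
# run in the background (--daemonise), poll every 300 seconds,
# and mail root on events such as Fail, DegradedArray, SpareActive.
mdadm --monitor --scan --daemonise --delay=300 --mail=root@localhost
```

Distribution init scripts usually wrap exactly this invocation, which is why the advice is to check whether FC4 already ships one.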

Re: Raid5 says it's rebuilding, but it lies :)

2006-04-19 Thread David Greaves
to do it: mdadm -S /dev/md0 mdadm -A /dev/md0 --force /dev/sd[abd] mdadm /dev/md0 --add /dev/sdv Typo: this last line should be: mdadm /dev/md0 --add /dev/sdc David

Re: data recovery on raid5

2006-04-22 Thread David Greaves
mail message. After that, you shouldn't get any bounces from me. Sorry if this is an inconvenience. David

Re: Problem with 5disk RAID5 array - two drives lost

2006-04-22 Thread David Greaves
dd (possibly threaded so it streams both drives rather than read a drive, write a drive) David

Re: data recovery on raid5

2006-04-22 Thread David Greaves
Molle Bestefich wrote: Anyway, a quick cheat sheet might come in handy: Which is why I posted about a wiki a few days back :) I'm progressing it and I'll see if we can't get something up. There's a lot of info on the list and it would be nice to get it a little more focused... David

Re: Problem with 5disk RAID5 array - two drives lost

2006-04-24 Thread David Greaves
used both dd_rescue/dd_rhelp and the gnu ddrescue in anger, I'd suggest gnu ddrescue. http://www.gnu.org/software/ddrescue/ddrescue.html David
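The GNU ddrescue recommendation usually amounts to a two-pass run; the device and file names below are hypothetical:

```shell
# Pass 1: copy everything readable quickly, skipping bad areas (-n).
ddrescue -n /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.log
# Pass 2: go back and retry just the bad sectors a few times (-r3);
# the log file lets ddrescue resume exactly where it left off.
ddrescue -r3 /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.log
```

The log file is what makes this restartable, which matches the earlier observation in this archive that a rerun "will be much quicker because the log file contains details of the faulty sectors".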

RAID1: can't remove (or set-faulty) a disk during resync with mdadm

2006-05-02 Thread David Mansfield
of failing, it simply restarts the resync. I imagine the two are related - maybe 'set faulty' simply simulates an i/o error on the member, but during resync, the behavior is 'retry'. Is there anything that can be done about this (other than politely ask vendor for a fix ;-)? David

Re: Removing a RAID 1 setting without loosing data

2006-05-13 Thread David Greaves
. David

Re: Raid5 software problems after loosing 4 disks for 48 hours

2006-06-17 Thread David Greaves
on demand. David

Re: Large single raid and XFS or two small ones and EXT3?

2006-06-24 Thread David Greaves
devices and the raid device? David

Re: Large single raid and XFS or two small ones and EXT3?

2006-06-25 Thread David Rees
On 6/23/06, Nix [EMAIL PROTECTED] wrote: On 23 Jun 2006, PFC suggested tentatively: - ext3 is slow if you have many files in one directory, but has more mature tools (resize, recovery etc) This is much less true if you turn on the dir_index feature. However, even with dir_index,

Re: raid issues after power failure

2006-07-02 Thread David Greaves
to contribute' (just so I can keep track of interested parties) and we can build something up... David

Re: [PATCH] enable auto=yes by default when using udev

2006-07-03 Thread David Greaves
Neil Brown wrote: I guess I could test for both, but then udev might change again I'd really like a more robust check. Maybe I could test if /dev was a mount point? IIRC you can have diskless machines with a shared root and nfs mounted static /dev/ David

Re: SWRaid Wiki

2006-07-11 Thread David Greaves
Francois Barre wrote: Hello David, all, You pointed the http://linux-raid.osdl.org as a future ressource for SwRAID and MD knowledge base. Yes. it's not ready for public use yet so I've not announced it formally - I just mention it to people when things pop up. In fact, the TODO page

md reports: unknown partition table

2006-07-18 Thread David Greaves
-01 #3 PREEMPT Sat Jun 3 09:20:24 BST 2006 i686 GNU/Linux teak:~# mdadm -V mdadm - v2.5.2 - 27 June 2006 David

Re: XFS and write barrier

2006-07-18 Thread David Chinner
On Tue, Jul 18, 2006 at 06:58:56PM +1000, Neil Brown wrote: On Tuesday July 18, [EMAIL PROTECTED] wrote: On Mon, Jul 17, 2006 at 01:32:38AM +0800, Federico Sevilla III wrote: On Sat, Jul 15, 2006 at 12:48:56PM +0200, Martin Steigerwald wrote: I am currently gathering information to write

Re: md reports: unknown partition table - fixed.

2006-07-18 Thread David Greaves
David Greaves wrote: Hi After a powercut I'm trying to mount an array and failing :( A reboot after tidying up /dev/ fixed it. The first time through I'd forgotten to update the boot scripts and they were assembling the wrong UUID. That was fine; I realised this and ran the manual assemble

Serious XFS bug in 2.6.17 kernels - FYI

2006-07-20 Thread David Greaves
FAQ: http://oss.sgi.com/projects/xfs/faq.html#dir2 It appears that efforts are being focused on the repair tools now. It appears to me that the best response is to patch the kernel, reboot, backup the fs, recreate the fs and restore - but please read up before taking any action. David

Re: host based mirror distance in a fc-based SAN environment

2006-07-26 Thread David Greaves
. HTH David

Re: let md auto-detect 128+ raid members, fix potential race condition

2006-07-31 Thread David Greaves
process, bring up hostname-raid6 by --name too. mdadm --assemble --scan --config partitions --name hostname-raid6 David

Re: raid5/lvm setup questions

2006-08-05 Thread David Greaves
impact As an example of the cons: I've just set up lvm2 over my raid5 and whilst testing snapshots, the first thing that happened was a kernel BUG and an oops... David

Re: Resize on dirty array?

2006-08-11 Thread David Rees
On 8/10/06, dean gaudet [EMAIL PROTECTED] wrote: - set up smartd to run long self tests once a month. (stagger it every few days so that your disks aren't doing self-tests at the same time) I personally prefer to do a long self-test once a week, a month seems like a lot of time for

Re: Resize on dirty array?

2006-08-11 Thread David Rees
On 8/11/06, dean gaudet [EMAIL PROTECTED] wrote: On Fri, 11 Aug 2006, David Rees wrote: On 8/10/06, dean gaudet [EMAIL PROTECTED] wrote: - set up smartd to run long self tests once a month. (stagger it every few days so that your disks aren't doing self-tests at the same time) I
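The staggered long self-tests discussed in these two messages can be expressed in smartd.conf; the schedule fields and device names below are illustrative, not taken from the thread:

```shell
# /etc/smartd.conf sketch: -s T/MM/DD/d/HH, where L = long self-test.
# Stagger the day-of-month so the disks don't all self-test at once.
/dev/sda -a -s L/../01/./02    # long test on the 1st of the month at 02:00
/dev/sdb -a -s L/../04/./02    # long test on the 4th at 02:00
/dev/sdc -a -s L/../07/./02    # long test on the 7th at 02:00
```

For the weekly schedule preferred later in the thread, the day-of-week field would be used instead (e.g. `L/../../6/03` for every Saturday at 03:00).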

Re: Kernel RAID support

2006-09-03 Thread David Greaves
be going nuts, as it does not appear as an option. Below is the list under Device Drivers if I do a make menuconfig: Recently reported on lkml Andrew Morton said: ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.18-rc5/2.6.18-rc5-mm1/hot-fixes/ contains a fix for this. HTH David

Re: Messed up creating new array...

2006-09-08 Thread David Rees
On 9/8/06, Ruth Ivimey-Cook [EMAIL PROTECTED] wrote: I messed up slightly when creating a new 6-disk raid6 array, and am wondering if there is a simple answer. The problem is that I didn't partition the drives, but simply used the whole drive. All drives are of the same type and using the

Re: Simulating Drive Failure on Mirrored OS drive

2006-10-02 Thread David Greaves
that the mirror will work when it's needed? Read up on the md-faulty device. Also, FWIW, md works just fine :) (Lots of other things can go wrong so testing your setup is a good idea though) David

Re: [PATCH 003 of 6] md: Remove 'experimental' classification from raid5 reshape.

2006-10-02 Thread David Greaves
Typo in first line of this patch :) I have had enough success reports not^H^H^H to believe that this is safe for 2.6.19.

Re: mdadm and raidtools - noob

2006-10-02 Thread David Greaves
? No (eg EVMS) David

Re: Recipe for Mirrored OS Drives

2006-10-02 Thread David Greaves
andy liebman wrote: Feel free to add it here: http://linux-raid.osdl.org/index.php/Main_Page I haven't been able to do much for a few weeks (typical - I find some time and use it all up just getting the basic setup done - still it's started!) David Any hints on how to add a page

Re: Recipe for Mirrored OS Drives

2006-10-03 Thread David Greaves
Nix wrote: On 2 Oct 2006, David Greaves spake: I suggest you link from http://linux-raid.osdl.org/index.php/RAID_Boot The pages don't really have the same purpose. RAID_Boot is `how to boot your RAID system using initramfs'; this is `how to set up a RAID system in the first place', i.e

Re: Multiple Disk Failure Recovery

2006-10-15 Thread David Rees
On 10/14/06, Lane Brooks [EMAIL PROTECTED] wrote: I am wondering if there is a way to cut my losses with these bad sectors and have it recover what it can so that I can get my raid array back to functioning. Right now I cannot get a spare disk recovery to finish because of these bad sectors. Is

Re: Need help recovering a raid5 array

2006-10-24 Thread David Greaves
? Or is this now a replacement? You should be OK - I'll reply quickly now and see if I can make some suggestions later (or sooner). David When I try to rebuild the array mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 I see failed to RUN_ARRAY /dev/md0: Input/output error

Re: Raid5 or 6 here... ?

2006-10-24 Thread David Greaves
Gordon Henderson wrote: 1747 ?S 724:25 [md9_raid5] It's kernel 2.6.18 and Wasn't the module merged to raid456 in 2.6.18? Are your mdx_raid6's earlier kernels? My raid 6 is on 2.6.17 and says _raid6 Could it be that the combined kernel thread is called mdX_raid5 David

Re: Raid5 or 6 here... ?

2006-10-24 Thread David Greaves
David Greaves wrote: Gordon Henderson wrote: 1747 ?S 724:25 [md9_raid5] It's kernel 2.6.18 and Wasn't the module merged to raid456 in 2.6.18? Are your mdx_raid6's earlier kernels? My raid 6 is on 2.6.17 and says _raid6 Could it be that the combined kernel thread is called

Re: Relabeling UUID

2006-12-13 Thread David Greaves
Neil Brown wrote: Patches to the man page to add useful examples are always welcome. And if people would like to be more verbose, the wiki is available at http://linux-raid.osdl.org/ It's now kinda useful but definitely not fully migrated from the old RAID FAQ. David
