Neil Brown wrote:
As the event count needs to be updated every time the superblock is
modified, the event count will be updated for every active-clean or
clean-active transition. All the drives in an array must have the
same value for the event count, so the spares need to be updated even
though
Neil Brown wrote:
Growing a raid5 or raid6 by adding another drive is conceptually
possible to do while the array is online, but I have no definite
plans to do this (I would like to). Growing a raid5 into a raid6
would also be useful.
These require moving lots of data around, and need to be able
4) I use xfs. Has anyone used xfs_growfs?
Yes - it's been flawless.
I've used it on lvm2 over md
David
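Not from the original thread, but a minimal sketch of the usual grow sequence when xfs sits on lvm2 over md (assuming /dev/md0 is a PV in volume group vg0 with logical volume data mounted on /data; all names illustrative):

  pvresize /dev/md0                  # tell LVM the physical volume has grown
  lvextend -L +100G /dev/vg0/data    # grow the logical volume
  xfs_growfs /data                   # grow XFS online to fill the new space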
I *think* this is correct.
I'm a user, not a coder.
If nothing else it should help you search the archives for clarification :)
In general I think the answer lies around md's superblocks
Can Sar wrote:
Hi,
I am working with a research group that is currently building a tool
to automatically find
kernel version, mdadm version?
David
Mitchell Laks wrote:
Hi,
I have a remote system with a raid1 of a data disk. I got a call from the
person using the system that the application that writes to the data disk was
not working.
system drive is /dev/hda with separate partitions /, /var, /home, /tmp.
data drive is linux software
Mitchell Laks wrote:
On Sunday 13 March 2005 10:49 am, David Greaves wrote: Many helpful remarks:
David I am grateful that you were there for me.
No probs - we've all been there!
My assessment (correct me if I am wrong) is that I have to rethink my
architecture. As I continue to work with
Just to re-iterate for the googlers...
EVMS has an alternative raid5 grow solution that is active, maintained
and apparently works (ie someone who knows the code actually cares if it
fails!!!)
It does require a migration to EVMS and it has limitations which
prevented me from using it when I
This is just a potentially interesting forwarded mail from the EVMS
mailing list to illustrate the kind of issues/responses to the raid5
resize questions...
David
[EMAIL PROTECTED] wrote on 03/01/2005 09:16:51 AM:
I read in the evms user guide that it should be possible but I can't
seem
to find
tmp wrote:
I read the software RAID-HOWTO, but the 6 questions below are still
unclear. I have asked around on IRC channels and it seems that I am not
the only one being confused. Maybe the HOWTO could be updated to
clarify the items below?
1) I have a RAID-1 setup with one spare disk. A disk
Luca Berra wrote:
many people find it easier to understand if raid partitions are set to
0xFD. Kernel autodetection is broken and should not be relied upon.
Could you clarify what is broken?
I understood that it was simplistic (ie if you have a raid0 built over a
raid5
or something exotic then it
Hervé Eychenne wrote:
On Sun, Apr 17, 2005 at 10:49:14AM -0700, Tim Moore wrote:
The recovery daemon adjusts reconstruction speed dynamically according to
available system resources.
Disk I/O is somewhat slower but works just fine. You don't have to wait.
So I don't have to wait to take the
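A hedged aside on where that 'dynamic' speed is capped - md exposes the resync throttles via /proc, so you can raise them if you would rather finish sooner (values shown are only the usual defaults):

  cat /proc/sys/dev/raid/speed_limit_min    # typically 1000 (KB/s per device)
  cat /proc/sys/dev/raid/speed_limit_max    # typically 200000
  echo 50000 > /proc/sys/dev/raid/speed_limit_min   # push the rebuild harder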
Hervé Eychenne wrote:
On Tue, Apr 19, 2005 at 01:00:11PM +0200, [EMAIL PROTECTED] wrote:
First you have to look if there are partitions on that disk to which no
data was written since the disk failed (this typically concerns the swap
partition). These partitions have to be marked faulty by hand
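A minimal sketch of marking such a partition faulty by hand, assuming the swap partition /dev/sdb2 belongs to /dev/md2 (device names illustrative):

  swapoff /dev/md2                    # stop using the swap array first
  mdadm /dev/md2 --fail /dev/sdb2     # mark the partition faulty by hand
  mdadm /dev/md2 --remove /dev/sdb2   # then remove it from the array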
Guy wrote:
Well, I agree with KISS, but from the operator's point of view!
I want... snip
Fair enough.
But I think the point is - should you expect the mdadm command to do all
that?
or do you think that it would make sense to stick with a layered
approach that allows anything from my Zaurus PDA
not sure about this but it looks like the problem is occurring at a lower
level than md.
I'd take it over to ide-linux and/or hotplug.
ide-linux is at linux-ide@vger.kernel.org
I don't know about hotplug
It would help to tell them what kernel you're running too grin
HTH
David
[EMAIL PROTECTED]
Dan Christensen wrote:
Ming Zhang [EMAIL PROTECTED] writes:
testing on a production environment is too dangerous. :P
and many benchmark tools you cannot run, either.
Well, I put production in quotes because this is just a home mythtv
box. :-) So there are plenty of times when it is
And notice you can apply different readahead to:
The raw devices (/dev/sda)
The md device (/dev/mdX)
Any lvm device (/dev/lvm_name/lvm_device)
David
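A small sketch of inspecting/setting readahead at each of those layers with blockdev (device names and values are only examples):

  blockdev --getra /dev/sda               # current readahead, in 512-byte sectors
  blockdev --setra 4096 /dev/md0          # set it on the md device
  blockdev --setra 8192 /dev/vg0/lvol0    # and/or on the LVM device above it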
Raz Ben Jehuda wrote:
read the blockdev man page
On Thu, 2005-08-04 at 16:06 +0200, [EMAIL PROTECTED] wrote:
Hi list, Neil!
I have a little
Jeff Breidenbach wrote:
Individual directories contain up to about 150,000 files. If I run ls
-U on all directories, it completes in a reasonable amount of time (I
forget how much, but I think it is well under an hour). Reiserfs is
supposed to be good at this sort of thing. If I were to stat each
Ross Vandegrift wrote:
On Thu, Dec 08, 2005 at 11:07:36AM +1100, James Neale wrote:
Hi Ross
I'm a bit of an mdadm newb and have been wrangling with --monitor rather
unsuccessfully.
Currently I'm manually checking /proc/mdstat until I've sorted out
something better.
I'm running a single 1TB
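For the googlers, a hedged example of running the monitor as a daemon instead of polling /proc/mdstat (address and interval are illustrative):

  mdadm --monitor --scan --daemonise --delay=300 --mail=root@localhost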
[EMAIL PROTECTED] wrote:
snip - a lot!!
can I summarise (!) as:
I want to create a non-system data-storage raid array (ie can be
initialised well after boot)
I want to use a mix of SCSI, sata + USB devices
Will this work if the USB devices change their 'order'
Short answer: yes, with the
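A minimal sketch of the UUID-based assembly that makes the device order irrelevant (the UUID shown is made up):

  mdadm --detail --scan        # prints ARRAY lines including each array's UUID
  # append those lines to /etc/mdadm.conf, e.g.:
  #   DEVICE partitions
  #   ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
  mdadm --assemble --scan      # assembles by UUID, however the disks are ordered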
Ross Vandegrift wrote:
On Thu, Jan 12, 2006 at 11:16:36AM +, David Greaves wrote:
ok, first off: a 14 device raid1 is 14 times more likely to lose *all*
your data than a single device.
No, this is completely incorrect. Let A denote the event that a single
disk has failed, A_i
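A hedged sketch of the argument that follows, assuming each disk fails independently with probability p over some period: a single disk loses the data with probability p, while an n-way raid1 only loses everything if all n copies fail,

  P(\text{all } n \text{ mirrors fail}) = p^{n}, \qquad \text{e.g. } 0.05^{14} \approx 6\times10^{-19} \ll 0.05

so a 14-device raid1 is far *less* likely to lose all the data, though it is roughly 14 times more likely to see some single-disk failure.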
Andre' Breiler wrote:
Hi,
On Fri, 20 Jan 2006, Reuben Farrelly wrote:
On 20/01/2006 11:32 a.m., Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
Hrm. puzzled look How
Andy Gajetzki wrote:
Hi there, I recently had a disk go bad in a linear RAID built with
mdadm. The particular disk that failed was the last device of the
RAID. I am curious about how devices are utilized in a linear RAID.
Would the md be filled sequentially from device 1 upto 5? In other
Mitchell Laks wrote:
I just discovered that I need a second power supply because
a 450W antec smartpower 2.0 is not enough power for 9 active drives and fans
on my system :(.
I must look for a better power supply. What do you recommend for big
multidrive systems?
FYI...
Krekna Mektek wrote:
I want to rebuild from the good one and the faulty one. That's why I
wanted to dd the disk to an image file, but it complains it has no
boot sector.
I did the following:
dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
losetup /dev/loop0
Party line: It's a faulty cable (on both drives? triggered by rsync?
Doesn't show up under 'badblocks'? hah!)
Check out the linux-ide archive for my (and others) reports.
I've had lots of issues like this - spurious and IMHO incorrect error
messages. Only certain types of disk access cause them
Mitchell Laks wrote:
Hi,
I have a production server in place at a remote site.
I have a single system drive that is an ide drive
and two data drives that are on a via SATA controller in a raid1
configuration.
I am monitoring the /var/log/messages and I get messages every few days
Mar 22
Hi
I need to rebuild a 3-disk raid5.
One disk may be faulty (sda) ; one is good (sdd) and the other I think
is OK too (sdb).
The array dropped one disk (sda), then a short time later, another (sdb)
I mistakenly 'added' sdb back in which of course marked it as a spare.
This means that
Neil Brown wrote:
On Sunday April 2, [EMAIL PROTECTED] wrote:
From some archive reading I understand that I can recreate the array using
mdadm --create /dev/md1 -l5 -n3 /dev/sdd1 /dev/sdb1 missing
but that I need to specify the correct order for the drives.
I've not used --assume-clean,
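A hedged sketch of one such recreate-and-verify attempt, mirroring the command above with 'missing' in place of the suspect disk and checking read-only before trusting the order (an assumption, not Neil's exact recipe):

  mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=3 \
        /dev/sdd1 /dev/sdb1 missing
  mount -o ro /dev/md1 /mnt     # look at the data read-only; if it is garbage,
  umount /mnt                   # stop the array and try another device order
  mdadm --stop /dev/md1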
Neil Brown wrote:
On Monday April 3, [EMAIL PROTECTED] wrote:
I wonder if you could help a Raid Newbie with a problem
snip
It looks like you lost a drive a while ago. Did you notice?
This is not unusual - raid just keeps on going if a disk fails.
When things are working again you
Neil Brown wrote:
On Wednesday April 19, [EMAIL PROTECTED] wrote:
I have a raid5 configuration with 4 disks, of which all were active. My
system froze due to a separate issue (firewire), so I had to power cycle.
In the past, I've always been able to recover with fsck on /dev/md0, however
Sam Hopkins wrote:
Hello,
I have a client with a failed raid5 that is in desperate need of the
data that's on the raid. The attached file holds the mdadm -E
superblocks that are hopefully the keys to the puzzle. Linux-raid
folks, if you can give any help here it would be much appreciated.
Carlos Carvalho wrote:
Molle Bestefich ([EMAIL PROTECTED]) wrote on 22 April 2006 05:54:
Tim Bostrom wrote:
raid5: Disk failure on hdf1, disabling device.
MD doesn't like to find errors when it's rebuilding.
It will kick that disk off the array, which will cause MD to return
crap
Molle Bestefich wrote:
Anyway, a quick cheat sheet might come in handy:
Which is why I posted about a wiki a few days back :)
I'm progressing it and I'll see if we can't get something up.
There's a lot of info on the list and it would be nice to get it a
little more focused...
David
--
Arthur Britto wrote:
On Sun, 2006-04-23 at 17:17 -0700, Tim Bostrom wrote:
I bought two extra 250GB drives - I'll try using dd_rescue as
recommended and see if I can get a good copy of hdf online.
You might want to use dd_rhelp:
http://www.kalysto.org/utilities/dd_rhelp/index.en.html
Having
Gordon Henderson wrote:
On Sat, 13 May 2006, Raúl Gómez Cabrera wrote:
Hi Gordon, thanks for your quick response.
Well my client does not want to spend more money on this particular
server, I think maybe that is because they are planning to replace it...
Ask your client just how
Wilson Wilson wrote:
Neil, great stuff, it's online now!!!
Congratulations :)
I am still unsure how this raid5 volume was partially readable with 4
disks missing. My understanding is that each file is written across all disks
apart from one, which is used for CRC. So if 2 disks are offline the
whole
Adam Talbot wrote:
OK, this topic I really need to get in on.
I have spent the last few week bench marking my new 1.2TB, 6 disk, RAID6
array.
Very interesting. Thanks.
Did you get around to any 'tuning'?
Things like raid chunk size, external logs for xfs, blockdev readahead
on the underlying
Francois Barre wrote:
2006/7/1, Ákos Maróy [EMAIL PROTECTED]:
Neil Brown wrote:
Try adding '--force' to the -A line.
That tells mdadm to try really hard to assemble the array.
thanks, this seems to have solved the issue...
Akos
Well, Neil, I'm wondering,
It seemed to me that Akos'
Neil Brown wrote:
I guess I could test for both, but then udev might change
again. I'd really like a more robust check.
Maybe I could test if /dev was a mount point?
IIRC you can have diskless machines with a shared root and nfs mounted
static /dev/
David
--
Francois Barre wrote:
Hello David, all,
You pointed to http://linux-raid.osdl.org as a future resource for a
SwRAID and MD knowledge base.
Yes. It's not ready for public use yet so I've not announced it formally
- I just mention it to people when things pop up.
In fact, the TODO page on
Hi
After a powercut I'm trying to mount an array and failing :(
teak:~# mdadm --assemble /dev/media --auto=p /dev/sd[bcdef]1
mdadm: /dev/media has been started with 5 drives.
Good
However:
teak:~# mount /media
mount: /dev/media1 is not a valid block device
teak:~# dd if=/dev/media1
David Greaves wrote:
Hi
After a powercut I'm trying to mount an array and failing :(
A reboot after tidying up /dev/ fixed it.
The first time through I'd forgotten to update the boot scripts and they
were assembling the wrong UUID. That was fine; I realised this and ran
the manual assemble
Just an FYI for my friends here who may be running 2.6.17.x kernels and
using XFS and who may not be monitoring lkml :)
There is a fairly serious corruption problem that has recently been
discussed on lkml and affects all 2.6.17 before -stable .7 (not yet
released)
Essentially the fs can be
Stefan Majer wrote:
Hi,
I'm curious if there are any numbers on the distance up to which it's possible
to mirror (raid1) 2 FC-LUNs. We have 2 datacenters with an effective
distance of 11km. The fabrics in one datacenter are connected to the
fabrics in the other datacenter with 5 dark fibre both
Alexandre Oliva wrote:
On Jul 30, 2006, Neil Brown [EMAIL PROTECTED] wrote:
1/
It just isn't right. We don't mount filesystems from partitions
just because they have type 'Linux'. We don't enable swap on
partitions just because they have type 'Linux swap'. So why do we
Shane wrote:
Hello all,
I'm building a new server which will use a number of disks
and am not sure of the best way to go about the setup.
There will be 4 320gb SATA drives installed at first. I'm
just wondering how to set the system up for upgradability.
I'll be using raid5 but not sure
Richard Scobie wrote:
Josh Litherland wrote:
On Sun, 2006-09-03 at 15:56 +1200, Richard Scobie wrote:
I am building 2.6.18rc5-mm1 and I cannot find the entry under make
config, to enable the various RAID options.
Under Device Drivers, switch on Multi-device support.
Thanks. I must be
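For reference, roughly the .config symbols that menu enables on a 2.6.18-era kernel (module versus built-in is up to you):

  CONFIG_MD=y             # 'Multiple devices driver support (RAID and LVM)'
  CONFIG_BLK_DEV_MD=m     # 'RAID support'
  CONFIG_MD_RAID456=m     # 'RAID-4/RAID-5/RAID-6 mode'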
andy liebman wrote:
I tried simply unplugging one drive from its power and from its SATA
connector. The OS didn't like that at all. My KDE session kept running,
but I could no longer open any new terminals. I couldn't become root in
an existing terminal that was already running. And I couldn't
Typo in first line of this patch :)
I have had enough success reports not^H^H^H to believe that this
is safe for 2.6.19.
Mark Ryden wrote:
Hello linux-raid list,
I want to create a Linux Software RAID1 on linux FC5 (x86_64),
from SATA II disks. I am a noob in this.
No problems.
I looked for it and saw that as far as I understand,
raidtools is quite old - from 2003.
for example,
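A minimal mdadm sketch of what raidtools used to do, assuming the two SATA disks appear as /dev/sda1 and /dev/sdb1 (names illustrative):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  cat /proc/mdstat                           # watch the initial sync
  mdadm --detail --scan >> /etc/mdadm.conf   # record the array for boot-time assembly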
andy liebman wrote:
Feel free to add it here:
http://linux-raid.osdl.org/index.php/Main_Page
I haven't been able to do much for a few weeks (typical - I find some
time and
use it all up just getting the basic setup done - still it's started!)
David
Any hints on how to add a page?
Nix wrote:
On 2 Oct 2006, David Greaves spake:
I suggest you link from http://linux-raid.osdl.org/index.php/RAID_Boot
The pages don't really have the same purpose. RAID_Boot is `how to boot
your RAID system using initramfs'; this is `how to set up a RAID system
in the first place', i.e
[EMAIL PROTECTED] wrote:
Hello all,
Hi
First off, don't do anything else without reading up or talking on here :)
The list archive has got a lot of good material - 'help' is usually a good
search term!!!
I had a disk fail in a raid 5 array (4 disk array, no spares), and am
having trouble
Gordon Henderson wrote:
1747 ?S 724:25 [md9_raid5]
It's kernel 2.6.18 and
Wasn't the module merged to raid456 in 2.6.18?
Are your mdX_raid6's on earlier kernels? My raid 6 is on 2.6.17 and says _raid6
Could it be that the combined kernel thread is called mdX_raid5
David
David Greaves wrote:
Gordon Henderson wrote:
1747 ?S 724:25 [md9_raid5]
It's kernel 2.6.18 and
Wasn't the module merged to raid456 in 2.6.18?
Are your mdX_raid6's on earlier kernels? My raid 6 is on 2.6.17 and says _raid6
Could it be that the combined kernel thread is called
Neil Brown wrote:
Patches to the man page to add useful examples are always welcome.
And if people would like to be more verbose, the wiki is available at
http://linux-raid.osdl.org/
It's now kinda useful but definitely not fully migrated from the old RAID FAQ.
David
Patrik Jonsson wrote:
Hi all,
this may not be the best list for this question, but I figure that the
number of disks connected to users here should be pretty big...
I upgraded from 2.6.17-rc4 to 2.6.18.3 about a week ago, and I've since
had 3 drives kicked out of my 10-drive RAID5 array.
Lasse Kärkkäinen wrote:
I managed to mess up a RAID-5 array by mdadm --add-ing a few failed disks
back, trying to get the array running again. Unfortunately, --add didn't
do what I expected, but instead made spares out of the failed disks. The
disks failed due to loose SATA cabling and the data
Hi Neil
I think this is a bug.
Essentially if I create an auto=part md device then I get md_d0p? partitions.
If I stop the array and just re-assemble, I don't.
It looks like the same (?) problem as Mike (see below - Mike do you have a
patch?) but I'm on 2.6.20.7 with mdadm v2.5.6
FWIW I
There is some odd stuff in there:
/dev/sda1:
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Events : 0.115909229
/dev/sdb1:
Active Devices : 5
Working Devices : 4
Failed Devices : 1
Events : 0.115909230
/dev/sdc1:
Active Devices : 8
Working Devices : 8
Failed Devices : 1
Events :
Leon Woestenberg wrote:
On 4/24/07, Leon Woestenberg [EMAIL PROTECTED] wrote:
Hello,
On 4/23/07, David Greaves [EMAIL PROTECTED] wrote:
There is some odd stuff in there:
[EMAIL PROTECTED] ~]# mdadm -v --assemble --scan
--config=/tmp/mdadm.conf --force
[...]
mdadm: no uptodate device
Neil Brown wrote:
This problem is very hard to solve inside the kernel.
The partitions will not be visible until the array is opened *after*
it has been created. Making the partitions visible before that would
be possible, but would not be easy.
I think the best solution is Mike's
Mike Accetta wrote:
David Greaves writes:
...
It looks like the same (?) problem as Mike (see below - Mike do you have a
patch?) but I'm on 2.6.20.7 with mdadm v2.5.6
...
We have since started assembling the array from the initrd using
--homehost and --auto-update-homehost which takes
David Greaves wrote:
currently recompiling the kernel to allow autorun...
Which of course won't work because I'm on 1.2 superblocks:
md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sdb1
md: sdb1 has invalid sb, not importing!
md: invalid raid superblock magic on sdc1
md
Neil Brown wrote:
This problem is very hard to solve inside the kernel.
The partitions will not be visible until the array is opened *after*
it has been created. Making the partitions visible before that would
be possible, but would not be easy.
I think the best solution is Mike's
Neil Brown wrote:
On Tuesday April 24, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
This problem is very hard to solve inside the kernel.
The partitions will not be visible until the array is opened *after*
it has been created. Making the partitions visible before that would
be possible, but
Neil Brown wrote:
On Tuesday April 24, [EMAIL PROTECTED] wrote:
Neil, isn't it easy to just do this after an assemble?
Yes, but it should not be needed, and I'd like to understand why it
is.
One of the last things do_md_run does is
mddev->changed = 1;
When you next open /dev/md_d0,
Leon Woestenberg wrote:
David,
thanks for all the advice so far.
No problem :)
In first instance we were searching for ways to tell mdadm what we
know about the array (through mdadm.conf) but from all advice we got
we have to take the 'usual' non-syncing-recreate approach.
We will try
Jan Engelhardt wrote:
Hi list,
when a user does `mdadm -C /dev/md0 -l any -n whatever fits
devices`, the array gets rebuilt for at least RAID1 and RAID5, even if
the disk contents are most likely not of importance (otherwise we would
not be creating a raid array right now). Could not
Ruslan Sivak wrote:
So a custom kernel is needed? Is there a way to do a kickstart install
with the new kernel? Or better yet, put it on the install cd?
have you tried:
modprobe raid10
?
David
How do other block devices initialise their partitions on 'discovery'?
David
David Greaves wrote:
Neil Brown wrote:
On Tuesday April 24, [EMAIL PROTECTED] wrote:
Neil, isn't it easy to just do this after an assemble?
Yes, but it should not be needed, and I'd like to understand why
Brad Campbell wrote:
G'day all,
I've got 3 arrays here. A 3 drive raid-5, a 10 drive raid-5 and a 15
drive raid-6. They are all currently 250GB SATA drives.
I'm contemplating an upgrade to 500GB drives on one or more of the
arrays and wondering the best way to do the physical swap.
The
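Not from the original thread, but a hedged sketch of the usual one-disk-at-a-time swap (repeat per member, waiting for each rebuild to finish; names illustrative):

  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
  # ...physically swap the 250GB drive for a 500GB one, partition it...
  mdadm /dev/md0 --add /dev/sda1
  # once every member is on the larger drives:
  mdadm --grow /dev/md0 --size=max
  xfs_growfs /mountpoint          # then grow whatever filesystem sits on top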
Neil Brown wrote:
On Wednesday May 9, [EMAIL PROTECTED] wrote:
Neil Brown [EMAIL PROTECTED] [2007.04.02.0953 +0200]:
Hmmm... this is somewhat awkward. You could argue that udev should be
taught to remove the device from the array before removing the device
from /dev. But I'm not convinced
[Repost - didn't seem to make it to the lists, sorry cc's]
Sorry, rushed email - it wasn't clear. I think there is something important here
though.
Oh, it may be worth distinguishing between a drive identifier (/dev/sdb) and a
drive slot (md0, slot2).
Neil Brown wrote:
On Thursday May 10,
Sorry, rushed email - it wasn't clear. I think there is something important here
though.
Oh, it may be worth distinguishing between a drive identifier (/dev/sdb) and a
drive slot (md0, slot2).
Neil Brown wrote:
On Thursday May 10, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Wednesday May 9,
Tomasz Chmielewski wrote:
Peter Rabbitson schrieb:
Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by several
GBs a day, I want to migrate it somehow to RAID-5 on separate disks in a
separate machine.
Which would be easy, if I didn't have to do it
David Greaves wrote:
David Robinson wrote:
David Greaves wrote:
This isn't a regression.
I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited
to try it).
I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved -
no.
Note this is a different (desktop) machine
David Chinner wrote:
On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote:
Hi,
I was wondering if the XFS folks can recommend any optimizations for high
speed disk arrays using RAID5?
[sysctls snipped]
None of those options will make much difference to performance.
mkfs parameters
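A hedged example of the sort of mkfs parameters meant here - aligning XFS to the raid5 geometry (a 64k chunk and 10 disks, i.e. 9 data disks, are just illustrative numbers):

  mkfs.xfs -d su=64k,sw=9 /dev/md0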
Dexter Filmore wrote:
1661 minutes is *way* too long. It's a 4x250GiB SATA array and usually takes 3
hours to resync or check, for that matter.
So, what's this?
kernel, mdadm verisons?
I seem to recall a long fixed ETA calculation bug some time back...
David
Frank Jenkins wrote:
So here's the /proc/mdstat prior to the array failure:
I'll take a look through this and see if I can see any problems Frank. Bit busy
now - give me a few minutes.
David
David Greaves wrote:
I'm going to have to do some more testing...
done
David Chinner wrote:
On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote:
David Greaves wrote:
So doing:
xfs_freeze -f /scratch
sync
echo platform /sys/power/disk
echo disk /sys/power/state
# resume
Rafael J. Wysocki wrote:
This is on 2.6.22-rc5
Is the Tejun's patch
http://www.sisk.pl/kernel/hibernation_and_suspend/2.6.22-rc5/patches/30-block-always-requeue-nonfs-requests-at-the-front.patch
applied on top of that?
2.6.22-rc5 includes it.
(but, when I was testing rc4, I did apply this
Neil Brown wrote:
This isn't quite right.
Thanks :)
Firstly, it is mdadm which decided to make one drive a 'spare' for
raid5, not the kernel.
Secondly, it only applies to raid5, not raid6 or raid1 or raid10.
For raid6, the initial resync (just like the resync after an unclean
shutdown)
Bill Davidsen wrote:
David Greaves wrote:
[EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, David Greaves wrote:
If you end up 'fiddling' in md because someone specified
--assume-clean on a raid5 [in this case just to save a few minutes
*testing time* on system with a heavily choked bus
Richard Michael wrote:
How do I create an array with a helpful name? i.e. /dev/md/storage?
The mdadm man page hints at this in the discussion of the --auto option
in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done.
Must I create the device nodes by hand first using
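A hedged sketch of creating such a named array directly, letting --auto make the node rather than mknod'ing it by hand (level, device count and names are illustrative):

  mdadm --create /dev/md/storage --auto=md --level=5 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1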
David Chinner wrote:
On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote:
For drives with 16MB of cache (in this case, raptors).
That's four (4) drives, right?
I'm pretty sure he's using 10 - email a few days back...
Justin Piszcz wrote:
Running test with 10 RAPTOR 150 hard
(back on list for google's benefit ;) and because there are some good questions
and I don't know all the answers... )
Oh, and Neil 'cos there may be a bug ...
Richard Michael wrote:
On Wed, Jun 27, 2007 at 08:49:22AM +0100, David Greaves wrote:
http://linux-raid.osdl.org/index.php
David Chinner wrote:
On Fri, Jun 29, 2007 at 12:16:44AM +0200, Rafael J. Wysocki wrote:
There are two solutions possible, IMO. One would be to make these workqueues
freezable, which is possible, but hacky and Oleg didn't like that very much.
The second would be to freeze XFS from within the
Ian Dall wrote:
There doesn't seem to be any designated place to send bug reports and
feature requests to mdadm, so I hope I am doing the right thing by
sending it here.
I have a small patch to mdadm which allows the write-behind amount to be
set at array grow time (instead of currently only at
Guy Watkins wrote:
} [EMAIL PROTECTED] On Behalf Of Jon Collette
} I wasn't thinking and did a mdadm --create to my existing raid5 instead
} of --assemble. The syncing process ran and now its not mountable. Is
} there anyway to recover from this?
Maybe. Not really sure. But don't do anything
David Greaves wrote:
For a simple 4 device array there are 24 permutations - doable by
hand, if you have 5 devices then it's 120, 6 is 720 - getting tricky ;)
Oh, wait, for 4 devices there are 24 permutations - and you need to do it 4
times, substituting 'missing' for each device - so 96
Bryan Christ wrote:
I do have the type set to 0xfd. Others have said that auto-assemble
only works on RAID 0 and 1, but just as Justin mentioned, I too have
another box with RAID5 that gets auto assembled by the kernel (also no
initrd). I expected the same behavior when I built this
dean gaudet wrote:
On Mon, 16 Jul 2007, David Greaves wrote:
Bryan Christ wrote:
I do have the type set to 0xfd. Others have said that auto-assemble only
works on RAID 0 and 1, but just as Justin mentioned, I too have another box
with RAID5 that gets auto assembled by the kernel (also
Bryan Christ wrote:
I'm now very confused...
It's all that top-posting...
When I run mdadm --examine /dev/md0 I get the error message: No
superblock detected on /dev/md0
However, when I run mdadm -D /dev/md0 the report clearly states
Superblock is persistent
David Greaves wrote
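The short version of the distinction being untangled here, as an illustrative pair of commands: --examine reads the md superblock stored on a component device, while --detail reports on the assembled array, which is why -E on /dev/md0 itself finds nothing:

  mdadm -E /dev/sda1     # examine a component device's superblock
  mdadm -D /dev/md0      # detail of the assembled array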
Paul Clements wrote:
Well, if people would like to see a timeout option, I actually coded up
a patch a couple of years ago to do just that, but I never got it into
mainline because you can do almost as well by doing a check at
user-level (I basically ping the nbd connection periodically and if
[EMAIL PROTECTED] wrote:
Would this just be relevant to network devices or would it improve
support for jostled usb and sata hot-plugging I wonder?
good question, I suspect that some of the error handling would be
similar (for devices that are unreachable not hanging the system for
example),
Tomas France wrote:
Hi everyone,
I apologize for asking such a fundamental question on the Linux-RAID
list but the answers I found elsewhere have been contradicting one another.
So, is it possible to have a swap file on a RAID-10 array?
yes.
mkswap /dev/mdX
swapon /dev/mdX
Should you use
Tomas France wrote:
Thanks for the answer, David!
you're welcome
By the way, does anyone know if there is a comprehensive how-to on
software RAID with mdadm available somewhere? I mean a website where I
could get answers to questions like How to convert your system from no
RAID to RAID-1,
Richard Grundy wrote:
Hello,
I was just wonder if it's possible to move my RAID5 array to another
distro, same machine just a different flavor of Linux.
Yes.
The only problem will be if it is the root filesystem (unlikely).
Would it just be a case of running:
sudo mdadm --create --verbose
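Not from the original thread, but for the googlers: on the new install you would normally --assemble the existing array rather than --create it, roughly:

  mdadm --examine --scan >> /etc/mdadm.conf   # record the array's UUID
  mdadm --assemble --scan                     # start the existing array intact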