Jeff Breidenbach wrote:
It's not a RAID issue, but make sure you don't have any duplicate volume
names. According to Murphy's Law, if there are two / volumes, the wrong
one will be chosen upon your next reboot.
Thanks for the tip. Since I'm not using volumes or LVM at all, I should be
safe
Jeff Breidenbach wrote:
I'm planning to take some RAID-1 drives out of an old machine
and plop them into a new machine. Hoping that mdadm assemble
will magically work. There's no reason it shouldn't work. Right?
old [ mdadm v1.9.0 / kernel 2.6.17 / Debian Etch / x86-64 ]
new [ mdadm v2.6.2
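(For what it's worth, a minimal sketch of the move, assuming the drives show up
as /dev/sda and /dev/sdb on the new box:)
mdadm --examine /dev/sda1 /dev/sdb1           # check both superblocks survived the move
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
mdadm --examine --scan >> /etc/mdadm.conf     # record it for the next boot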
Jan Engelhardt wrote:
Feel free to argue that the manpage is clear on this - but as we know, not
everyone reads the manpages in depth...
That is indeed suboptimal (but I would not care since I know the
implications of an SB at the front)
Neil cares even less and probably doesn't even need
Keld Jørn Simonsen wrote:
I am trying to get some order to linux raid info.
Help appreciated :)
The list description at
http://vger.kernel.org/vger-lists.html#linux-raid
does list a FAQ, http://www.linuxdoc.org/FAQ/
Yes, that should be amended. Drop them a line about the FAQ too
So our FAQ
Jan Engelhardt wrote:
On Jan 29 2008 18:08, Bill Davidsen wrote:
IIRC there was a discussion a while back on renaming mdadm options
(google "Time to deprecate old RAID formats?") and the superblocks
to emphasise the location and data structure. Would it be good to
introduce the new names at
Dexter Filmore wrote:
On Friday 08 February 2008 00:22:36 Neil Brown wrote:
On Thursday February 7, [EMAIL PROTECTED] wrote:
On Tuesday 05 February 2008 03:02:00 Neil Brown wrote:
On Monday February 4, [EMAIL PROTECTED] wrote:
Seems the other topic wasn't quite clear...
not necessarily.
Jan Engelhardt wrote:
On Feb 10 2008 10:34, David Greaves wrote:
Jan Engelhardt wrote:
On Jan 29 2008 18:08, Bill Davidsen wrote:
IIRC there was a discussion a while back on renaming mdadm options
(google "Time to deprecate old RAID formats?") and the superblocks
to emphasise the location
Keld Jørn Simonsen wrote:
I would then like that to be reflected in the main page.
I would rather that this be called Howto and FAQ - Linux raid
than Main Page - Linux Raid. Is that possible?
Just like C has a main(), wikis have a Main Page :)
I guess it could be changed but I think it
Marcin Krol wrote:
Hello everyone,
I have had a problem with a RAID array (udev messed up the disk names; I had
RAID on
whole disks only, without raid partitions)
Do you mean that you originally used /dev/sdb for the RAID array? And now you
are using /dev/sdb1?
Given the system seems confused I
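(A quick way to see where the superblock actually lives - whole disk versus
partition - sketched here with sdb as an example:)
mdadm --examine /dev/sdb     # superblock on the whole disk?
mdadm --examine /dev/sdb1    # or on the partition?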
Richard Scobie wrote:
David Rees wrote:
FWIW, this step is clearly marked in the Software-RAID HOWTO under
Booting on RAID:
http://tldp.org/HOWTO/Software-RAID-HOWTO-7.html#ss7.3
The one place I didn't look...
Good - I hope you'll both look here instead:
Peter Rabbitson wrote:
I guess I will sit down tonight and craft some patches to the existing
md* man pages. Some things are indeed left unsaid.
If you want to be more verbose than a man page allows then there's always the
wiki/FAQ...
http://linux-raid.osdl.org/
Keld Jørn Simonsen wrote:
Is
On 26 Oct 2007, Neil Brown wrote:
On Thursday October 25, [EMAIL PROTECTED] wrote:
I also suspect that a *lot* of people will assume that the highest superblock
version is the best and should be used for new installs etc.
Grumble... why can't people expect what I want them to expect?
Moshe
Keld Jørn Simonsen wrote:
Hmm, I read the Linux raid faq on
http://www.faqs.org/contrib/linux-raid/x37.html
It looks pretty outdated, referring to how to patch 2.2 kernels and
not mentioning new mdadm, nor raid10. It was not dated.
It seemed to be related to the linux-raid list, telling
Peter Rabbitson wrote:
Moshe Yudkowsky wrote:
over the other. For example, I've now learned that if I want to set up
a RAID1 /boot, it must actually be 1.0 or grub won't be able to read
it. (I would therefore argue that if the new version ever becomes
default, then the default sub-version
Bill Davidsen wrote:
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options
(google "Time
to deprecate old RAID formats?") and the superblocks to emphasise the
location
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options (google "Time
to deprecate old RAID formats?") and the superblocks to emphasise the location
and data structure. Would it be good to introduce the new
Peter Rabbitson wrote:
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options
(google "Time
to deprecate old RAID formats?") and the superblocks to emphasise the
location
and data
Tomasz Chmielewski wrote:
Michael Harris schrieb:
If I have a disk fail, say hdc for example, I won't know which disk hdc is,
as it could be any of the 5 disks in the PC. Is there any way to make
it easier to identify which disk is which?
If the drives have any LEDs, the most reliable way would
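(The usual trick, with an assumed device name: read from one disk and watch
which activity LED lights up.)
dd if=/dev/hdc of=/dev/null bs=1M count=1024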
Mitchell Laks wrote:
I think my error was that maybe I did not
write the fdisk changes to the drive with
fdisk's 'w' command
No - your problem was that you needed to use the literal word missing
like you did this time:
mdadm -C /dev/md0 --level=1 -n2 /dev/sda1 missing
[however, this time you also
Guy Watkins wrote:
man md
man mdadm
and
http://linux-raid.osdl.org/index.php/Main_Page
:)
Michael Makuch wrote:
So my questions are:
...
- Is this a.o.k for a raid5 array?
So I realised that /proc/mdstat isn't documented too well anywhere...
http://linux-raid.osdl.org/index.php/Mdstat
Comments welcome...
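(For reference, a healthy raid5 entry looks something like this - hypothetical
output, device names assumed:)
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
      490223104 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
Here [3/3] means 3 of 3 members are present; each U is a working device and a
_ marks a failed or missing one.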
David
Dragos wrote:
Thank you for your very fast answers.
First I tried 'fsck -n' on the existing array. The answer was that if I
wanted to check an XFS partition I should use 'xfs_check'. That seems to
say that my array was formatted with XFS, not ReiserFS. Am I correct?
Then I tried the
Neil Brown wrote:
On Thursday November 29, [EMAIL PROTECTED] wrote:
2. Do you know of any way to recover from this mistake? Or at least what
filesystem it was formated with.
It may not have been lost - yet.
If you created the same array with the same devices and layout etc,
the data will
Neil Cavan wrote:
Hello,
Hi Neil
What kernel version?
What mdadm version?
This morning, I woke up to find the array had kicked two disks. This
time, though, /proc/mdstat showed one of the failed disks (U_U_U, one
of the _s) had been marked as a spare - weird, since there are no
spare drives
Neil Cavan wrote:
Thanks for taking a look, David.
No problem.
Kernel:
2.6.15-27-k7, stock for Ubuntu 6.06 LTS
mdadm:
mdadm - v1.12.0 - 14 June 2005
OK - fairly old then. Not really worth trying to figure out why hdc got re-added
when things had gone wrong.
You're right, earlier in
Chris Eddington wrote:
Hi,
Thanks for the pointer on xfs_repair -n; it actually tells me something
(some output listed below), but I'm not sure what it means, and there seems to be
a lot of data loss. One complication is that I see an error message on ata6,
so I moved the disks around thinking it was a
Chris Eddington wrote:
Yes, there is some kind of media error message in dmesg, below. It is
not random, it happens at exactly the same moments in each xfs_repair -n
run.
Nov 11 09:48:25 altair kernel: [37043.300691] res
51/40:00:01:00:00/00:00:00:00:00/e1 Emask 0x9 (media error)
Ok - it looks like the raid array is up. There will have been an event count
mismatch which is why you needed --force. This may well have caused some
(hopefully minor) corruption.
FWIW, xfs_check is almost never worth running :) (It runs out of memory easily).
xfs_repair -n is much better.
What
Chris Eddington wrote:
Hi,
Hi
While on vacation I had one SATA port/cable fail, and then four hours
later a second one fail. After fixing/moving the SATA ports, I can
reboot and all drives seem to be OK now, but when assembled it won't
recognize the filesystem.
That's unusual - if the
Paul VanGundy wrote:
All,
Hello. I don't know if this is the right place to post this issue but it
does deal with RAID so I thought I would try.
It deals primarily with linux *software* raid.
But stick with it - you may end up doing that...
What hardware/distro etc are you using?
Is this an
Paul VanGundy wrote:
Thanks for the prompt reply, David. Below are the answers to your questions:
What hardware/distro etc are you using?
Is this an expensive (hundreds of £) card? Or an onboard/motherboard chipset?
The distro is Suse 10.1.
As a bit of trivia, Neil (who wrote and maintains
Michael Tokarev wrote:
Justin Piszcz wrote:
On Sun, 4 Nov 2007, Michael Tokarev wrote:
[]
The next time you come across something like that, do a SysRq-T dump and
post that. It shows a stack trace of all processes - and in particular,
where exactly each task is stuck.
Yes I got it before
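(If the console isn't reachable, the same dump can be triggered from a shell,
assuming sysrq is enabled:)
echo t > /proc/sysrq-trigger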
Alberto Alonso wrote:
On Thu, 2007-11-01 at 15:16 -0400, Doug Ledford wrote:
Not in the older kernel versions you were running, no.
These old versions (especially the RHEL ones) are supposed to be
the official versions supported by Red Hat and the hardware
vendors, as they were very specific as
Jeff Garzik wrote:
Neil Brown wrote:
As for where the metadata should be placed, it is interesting to
observe that the SNIA's DDFv1.2 puts it at the end of the device.
And as DDF is an industry standard sponsored by multiple companies it
must be ..
Sorry. I had intended to say correct,
Janek Kozicki wrote:
Hello,
I just created a new array /dev/md1 like this:
mdadm --create --verbose /dev/md1 --chunk=64 --level=raid5 \
--metadata=1.1 --bitmap=internal \
--raid-devices=3 /dev/hdc2 /dev/sda2 missing
But later I changed my mind, and I wanted to use chunk 128.
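(Since one member is still 'missing' and nothing has been written yet, the
simplest route is probably to stop and re-create with the new chunk size - a
sketch reusing the same parameters:)
mdadm --stop /dev/md1
mdadm --create --verbose /dev/md1 --chunk=128 --level=raid5 \
      --metadata=1.1 --bitmap=internal \
      --raid-devices=3 /dev/hdc2 /dev/sda2 missing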
Bill Davidsen wrote:
Neil Brown wrote:
I certainly accept that the documentation is probably less that
perfect (by a large margin). I am more than happy to accept patches
or concrete suggestions on how to improve that. I always think it is
best if a non-developer writes documentation (and a
Doug Ledford wrote:
On Mon, 2007-10-22 at 16:39 -0400, John Stoffel wrote:
I don't agree completely. I think the superblock location is a key
issue, because if you have a superblock location which moves depending
the filesystem or LVM you use to look at the partition (or full disk)
then
Dan Williams wrote:
On 8/26/07, Abe Skolnik [EMAIL PROTECTED] wrote:
Because you can rely on the configuration file to be certain about
which disks to pull in and which to ignore. Without the config file
the auto-detect routine may not always do the right thing because it
will need to make
Richard Scobie wrote:
This looks like a potentially good, cheap candidate for md use.
Although Linux support is not explicitly mentioned, SiI 3124 is used.
http://www.addonics.com/products/host_controller/ADSA3GPX8-4e.asp
Thanks Richard. FWIW I find this kind of info useful.
David
Tomas France wrote:
Hi everyone,
I apologize for asking such a fundamental question on the Linux-RAID
list but the answers I found elsewhere have been contradicting one another.
So, is it possible to have a swap file on a RAID-10 array?
yes.
mkswap /dev/mdX
swapon /dev/mdX
Should you use
Tomas France wrote:
Thanks for the answer, David!
you're welcome
By the way, does anyone know if there is a comprehensive how-to on
software RAID with mdadm available somewhere? I mean a website where I
could get answers to questions like How to convert your system from no
RAID to RAID-1,
Richard Grundy wrote:
Hello,
I was just wondering if it's possible to move my RAID5 array to another
distro, same machine just a different flavor of Linux.
Yes.
The only problem will be if it is the root filesystem (unlikely).
Would it just be a case of running:
sudo mdadm --create --verbose
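(Careful - for an existing array you want --assemble, not --create; create
would write new superblocks over the old ones. A sketch with assumed device
names:)
mdadm --assemble /dev/md0 /dev/sd[abcd]1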
Paul Clements wrote:
Well, if people would like to see a timeout option, I actually coded up
a patch a couple of years ago to do just that, but I never got it into
mainline because you can do almost as well by doing a check at
user-level (I basically ping the nbd connection periodically and if
[EMAIL PROTECTED] wrote:
Would this just be relevant to network devices or would it improve
support for jostled usb and sata hot-plugging I wonder?
good question, I suspect that some of the error handling would be
similar (for devices that are unreachable not hanging the system, for
example),
dean gaudet wrote:
On Mon, 16 Jul 2007, David Greaves wrote:
Bryan Christ wrote:
I do have the type set to 0xfd. Others have said that auto-assemble only
works on RAID 0 and 1, but just as Justin mentioned, I too have another box
with RAID5 that gets auto assembled by the kernel (also
Bryan Christ wrote:
I'm now very confused...
It's all that top-posting...
When I run mdadm --examine /dev/md0 I get the error message: No
superblock detected on /dev/md0
However, when I run mdadm -D /dev/md0 the report clearly states
Superblock is persistent
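(That is actually expected: --examine reads the superblock of a component
device, while -D/--detail reports on the assembled array. Member names
assumed:)
mdadm --examine /dev/sda1   # component device: dumps the on-disk superblock
mdadm --detail /dev/md0     # assembled array: shows its runtime state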
David Greaves wrote:
Bryan Christ wrote:
I do have the type set to 0xfd. Others have said that auto-assemble
only works on RAID 0 and 1, but just as Justin mentioned, I too have
another box with RAID5 that gets auto assembled by the kernel (also no
initrd). I expected the same behavior when I built this
Guy Watkins wrote:
} [EMAIL PROTECTED] On Behalf Of Jon Collette
} I wasn't thinking and did a mdadm --create to my existing raid5 instead
} of --assemble. The syncing process ran and now its not mountable. Is
} there anyway to recover from this?
Maybe. Not really sure. But don't do anything
David Greaves wrote:
For a simple 4 device array there are 24 permutations - doable by
hand, if you have 5 devices then it's 120, 6 is 720 - getting tricky ;)
Oh, wait, for 4 devices there are 24 permutations - and you need to do it 4
times, substituting 'missing' for each device - so 96
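(Each attempt in that brute-force search looks roughly like this - the device
order is the thing being guessed, and the checks are read-only:)
mdadm --create /dev/md0 --assume-clean --level=5 -n4 \
      /dev/sdb1 /dev/sdc1 missing /dev/sda1   # one candidate ordering
fsck -n /dev/md0    # read-only sanity check; stop when this looks right
mdadm --stop /dev/md0    # then try the next permutation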
Ian Dall wrote:
There doesn't seem to be any designated place to send bug reports and
feature requests to mdadm, so I hope I am doing the right thing by
sending it here.
I have a small patch to mdadm which allows the write-behind amount to be
set at array grow time (instead of currently only at
David Chinner wrote:
On Fri, Jun 29, 2007 at 12:16:44AM +0200, Rafael J. Wysocki wrote:
There are two solutions possible, IMO. One would be to make these workqueues
freezable, which is possible, but hacky and Oleg didn't like that very much.
The second would be to freeze XFS from within the
David Chinner wrote:
On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote:
For drives with 16MB of cache (in this case, raptors).
That's four (4) drives, right?
I'm pretty sure he's using 10 - email a few days back...
Justin Piszcz wrote:
Running test with 10 RAPTOR 150 hard
(back on list for google's benefit ;) and because there are some good questions
and I don't know all the answers... )
Oh, and Neil 'cos there may be a bug ...
Richard Michael wrote:
On Wed, Jun 27, 2007 at 08:49:22AM +0100, David Greaves wrote:
http://linux-raid.osdl.org/index.php
Richard Michael wrote:
How do I create an array with a helpful name? i.e. /dev/md/storage?
The mdadm man page hints at this in the discussion of the --auto option
in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done.
Must I create the device nodes by hand first using
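(One way that works, with assumed member devices - --auto tells mdadm to
create the node itself:)
mdadm --assemble /dev/md/storage --auto=yes /dev/sda1 /dev/sdb1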
Bill Davidsen wrote:
David Greaves wrote:
[EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, David Greaves wrote:
If you end up 'fiddling' in md because someone specified
--assume-clean on a raid5 [in this case just to save a few minutes
*testing time* on a system with a heavily choked bus
Neil Brown wrote:
This isn't quite right.
Thanks :)
Firstly, it is mdadm which decided to make one drive a 'spare' for
raid5, not the kernel.
Secondly, it only applies to raid5, not raid6 or raid1 or raid10.
For raid6, the initial resync (just like the resync after an unclean
shutdown)
Frank Jenkins wrote:
So here's the /proc/mdstat prior to the array failure:
I'll take a look through this and see if I can see any problems Frank. Bit busy
now - give me a few minutes.
David
David Greaves wrote:
I'm going to have to do some more testing...
done
David Chinner wrote:
On Mon, Jun 18, 2007 at 08:49:34AM +0100, David Greaves wrote:
David Greaves wrote:
So doing:
xfs_freeze -f /scratch
sync
echo platform > /sys/power/disk
echo disk > /sys/power/state
# resume
Rafael J. Wysocki wrote:
This is on 2.6.22-rc5
Is Tejun's patch
http://www.sisk.pl/kernel/hibernation_and_suspend/2.6.22-rc5/patches/30-block-always-requeue-nonfs-requests-at-the-front.patch
applied on top of that?
2.6.22-rc5 includes it.
(but, when I was testing rc4, I did apply this
David Greaves wrote:
David Robinson wrote:
David Greaves wrote:
This isn't a regression.
I was seeing these problems on 2.6.21 (but 22 was in -rc so I waited
to try it).
I tried 2.6.22-rc4 (with Tejun's patches) to see if it had improved -
no.
Note this is a different (desktop) machine
David Chinner wrote:
On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote:
Hi,
I was wondering if the XFS folks can recommend any optimizations for high
speed disk arrays using RAID5?
[sysctls snipped]
None of those options will make much difference to performance.
mkfs parameters
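(The mkfs parameters that usually matter are the stripe hints - a sketch for a
4-drive raid5 with 64k chunks, i.e. 3 data disks:)
mkfs.xfs -d su=64k,sw=3 /dev/md0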
Dexter Filmore wrote:
1661 minutes is *way* too long. It's a 4x250GiB SATA array and usually takes 3
hours to resync or check, for that matter.
So, what's this?
kernel, mdadm versions?
I seem to recall a long-since-fixed ETA calculation bug some time back...
David
Tomasz Chmielewski wrote:
Peter Rabbitson schrieb:
Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by several
GBs a day, I want to migrate it somehow to RAID-5 on separate disks in a
separate machine.
Which would be easy, if I didn't have to do it
Sorry, rushed email - it wasn't clear. I think there is something important here
though.
Oh, it may be worth distinguishing between a drive identifier (/dev/sdb) and a
drive slot (md0, slot2).
Neil Brown wrote:
On Thursday May 10, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Wednesday May 9,
Neil Brown wrote:
On Wednesday May 9, [EMAIL PROTECTED] wrote:
Neil Brown [EMAIL PROTECTED] [2007.04.02.0953 +0200]:
Hmmm... this is somewhat awkward. You could argue that udev should be
taught to remove the device from the array before removing the device
from /dev. But I'm not convinced
Brad Campbell wrote:
G'day all,
I've got 3 arrays here. A 3 drive raid-5, a 10 drive raid-5 and a 15
drive raid-6. They are all currently 250GB SATA drives.
I'm contemplating an upgrade to 500GB drives on one or more of the
arrays and wondering the best way to do the physical swap.
The
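(The usual approach is one drive at a time, waiting for each rebuild to finish
- a sketch with assumed device names:)
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# physically swap in the 500GB drive, partition it, then:
mdadm /dev/md0 --add /dev/sda1
# once every member has been replaced:
mdadm --grow /dev/md0 --size=max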
How do other block devices initialise their partitions on 'discovery'?
David
David Greaves wrote:
Neil Brown wrote:
On Tuesday April 24, [EMAIL PROTECTED] wrote:
Neil, isn't it easy to just do this after an assemble?
Yes, but it should not be needed, and I'd like to understand why
Ruslan Sivak wrote:
So a custom kernel is needed? Is there a way to do a kickstart install
with the new kernel? Or better yet, put it on the install cd?
have you tried:
modprobe raid10
?
David
Jan Engelhardt wrote:
Hi list,
when a user does `mdadm -C /dev/md0 -l any -n whatever fits
devices`, the array gets rebuilt for at least RAID1 and RAID5, even if
the disk contents are most likely not of importance (otherwise we would
not be creating a raid array right now). Could not
Leon Woestenberg wrote:
On 4/24/07, Leon Woestenberg [EMAIL PROTECTED] wrote:
Hello,
On 4/23/07, David Greaves [EMAIL PROTECTED] wrote:
There is some odd stuff in there:
[EMAIL PROTECTED] ~]# mdadm -v --assemble --scan
--config=/tmp/mdadm.conf --force
[...]
mdadm: no uptodate device
Neil Brown wrote:
This problem is very hard to solve inside the kernel.
The partitions will not be visible until the array is opened *after*
it has been created. Making the partitions visible before that would
be possible, but would not be easy.
I think the best solution is Mike's
Mike Accetta wrote:
David Greaves writes:
...
It looks like the same (?) problem as Mike (see below - Mike do you have a
patch?) but I'm on 2.6.20.7 with mdadm v2.5.6
...
We have since started assembling the array from the initrd using
--homehost and --auto-update-homehost which takes
David Greaves wrote:
currently recompiling the kernel to allow autorun...
Which of course won't work because I'm on 1.2 superblocks:
md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sdb1
md: sdb1 has invalid sb, not importing!
md: invalid raid superblock magic on sdc1
md
Neil Brown wrote:
On Tuesday April 24, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
This problem is very hard to solve inside the kernel.
The partitions will not be visible until the array is opened *after*
it has been created. Making the partitions visible before that would
be possible, but
Neil Brown wrote:
On Tuesday April 24, [EMAIL PROTECTED] wrote:
Neil, isn't it easy to just do this after an assemble?
Yes, but it should not be needed, and I'd like to understand why it
is.
One of the last things do_md_run does is
mddev->changed = 1;
When you next open /dev/md_d0,
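(A user-space nudge along those lines, assuming the md_d0 naming - an explicit
rescan makes the kernel re-read the partition table:)
blockdev --rereadpt /dev/md_d0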
Leon Woestenberg wrote:
David,
thanks for all the advice so far.
No problem :)
In the first instance we were searching for ways to tell mdadm what we
know about the array (through mdadm.conf) but from all advice we got
we have to take the 'usual' non-syncing-recreate approach.
We will try
Hi Neil
I think this is a bug.
Essentially if I create an auto=part md device then I get md_d0p? partitions.
If I stop the array and just re-assemble, I don't.
It looks like the same (?) problem as Mike (see below - Mike do you have a
patch?) but I'm on 2.6.20.7 with mdadm v2.5.6
FWIW I
There is some odd stuff in there:
/dev/sda1:
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Events : 0.115909229
/dev/sdb1:
Active Devices : 5
Working Devices : 4
Failed Devices : 1
Events : 0.115909230
/dev/sdc1:
Active Devices : 8
Working Devices : 8
Failed Devices : 1
Events :
Lasse Kärkkäinen wrote:
I managed to mess up a RAID-5 array by mdadm --add'ing a few failed disks
back, trying to get the array running again. Unfortunately, --add didn't
do what I expected, but instead made spares out of the failed disks. The
disks failed due to loose SATA cabling and the data
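(When mostly-in-sync disks get kicked like that, the usual first step is a
forced assemble rather than --add - a sketch, device names assumed:)
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcde]1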
Patrik Jonsson wrote:
Hi all,
this may not be the best list for this question, but I figure that the
number of disks connected to users here should be pretty big...
I upgraded from 2.6.17-rc4 to 2.6.18.3 about a week ago, and I've since
had 3 drives kicked out of my 10-drive RAID5 array.
Neil Brown wrote:
Patches to the man page to add useful examples are always welcome.
And if people would like to be more verbose, the wiki is available at
http://linux-raid.osdl.org/
It's now kinda useful but definitely not fully migrated from the old RAID FAQ.
David
[EMAIL PROTECTED] wrote:
Hello all,
Hi
First off, don't do anything else without reading up or talking on here :)
The list archive has got a lot of good material - 'help' is usually a good
search term!!!
I had a disk fail in a raid 5 array (4 disk array, no spares), and am
having trouble
Gordon Henderson wrote:
1747 ?S 724:25 [md9_raid5]
It's kernel 2.6.18 and
Wasn't the module merged to raid456 in 2.6.18?
Are your mdX_raid6s on earlier kernels? My raid6 is on 2.6.17 and says _raid6
Could it be that the combined kernel thread is called mdX_raid5
David
David Greaves wrote:
Gordon Henderson wrote:
1747 ?S 724:25 [md9_raid5]
It's kernel 2.6.18 and
Wasn't the module merged to raid456 in 2.6.18?
Are your mdX_raid6s on earlier kernels? My raid6 is on 2.6.17 and says _raid6
Could it be that the combined kernel thread is called
Nix wrote:
On 2 Oct 2006, David Greaves spake:
I suggest you link from http://linux-raid.osdl.org/index.php/RAID_Boot
The pages don't really have the same purpose. RAID_Boot is `how to boot
your RAID system using initramfs'; this is `how to set up a RAID system
in the first place', i.e
andy liebman wrote:
I tried simply unplugging one drive from its power and from its SATA
connector. The OS didn't like that at all. My KDE session kept running,
but I could no longer open any new terminals. I couldn't become root in
an existing terminal that was already running. And I couldn't
Typo in first line of this patch :)
I have had enough success reports not^H^H^H to believe that this
is safe for 2.6.19.
Mark Ryden wrote:
Hello linux-raid list,
I want to create a Linux Software RAID1 on linux FC5 (x86_64),
from SATA II disks. I am a noob in this.
No problems.
I looked for it and saw that, as far as I understand,
raidtools is quite old - from 2003.
For example,
andy liebman wrote:
Feel free to add it here:
http://linux-raid.osdl.org/index.php/Main_Page
I haven't been able to do much for a few weeks (typical - I find some
time and
use it all up just getting the basic setup done - still it's started!)
David
Any hints on how to add a page?
Richard Scobie wrote:
Josh Litherland wrote:
On Sun, 2006-09-03 at 15:56 +1200, Richard Scobie wrote:
I am building 2.6.18-rc5-mm1 and I cannot find the entry under make
config, to enable the various RAID options.
Under Device Drivers, switch on Multi-device support.
Thanks. I must be
Shane wrote:
Hello all,
I'm building a new server which will use a number of disks
and am not sure of the best way to go about the setup.
There will be 4 320gb SATA drives installed at first. I'm
just wondering how to set the system up for upgradability.
I'll be using raid5 but not sure
Alexandre Oliva wrote:
On Jul 30, 2006, Neil Brown [EMAIL PROTECTED] wrote:
1/
It just isn't right. We don't mount filesystems from partitions
just because they have type 'Linux'. We don't enable swap on
partitions just because they have type 'Linux swap'. So why do we
Stefan Majer wrote:
Hi,
I'm curious if there are any numbers out on up to which distance it's possible
to mirror (raid1) 2 FC-LUNs. We have 2 datacenters with an effective
distance of 11km. The fabrics in one datacenter are connected to the
fabrics in the other datacenter with 5 dark fibres both
Just an FYI for my friends here who may be running 2.6.17.x kernels and
using XFS and who may not be monitoring lkml :)
There is a fairly serious corruption problem that has recently been
discussed on lkml and affects all 2.6.17 kernels before -stable .7 (not yet
released)
Essentially the fs can be
Hi
After a powercut I'm trying to mount an array and failing :(
teak:~# mdadm --assemble /dev/media --auto=p /dev/sd[bcdef]1
mdadm: /dev/media has been started with 5 drives.
Good
However:
teak:~# mount /media
mount: /dev/media1 is not a valid block device
teak:~# dd if=/dev/media1
David Greaves wrote:
Hi
After a powercut I'm trying to mount an array and failing :(
A reboot after tidying up /dev/ fixed it.
The first time through I'd forgotten to update the boot scripts and they
were assembling the wrong UUID. That was fine; I realised this and ran
the manual assemble
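(The boot-script fix boils down to regenerating the ARRAY line so it carries
the right UUID - sketched here:)
mdadm --detail /dev/media | grep UUID   # confirm the array's real UUID
mdadm --examine --scan                  # prints matching ARRAY lines for mdadm.conf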
Francois Barre wrote:
Hello David, all,
You pointed to http://linux-raid.osdl.org as a future resource for
SwRAID and MD knowledge base.
Yes. It's not ready for public use yet, so I've not announced it formally
- I just mention it to people when things pop up.
In fact, the TODO page on
Neil Brown wrote:
I guess I could test for both, but then udev might change
again. I'd really like a more robust check.
Maybe I could test if /dev was a mount point?
IIRC you can have diskless machines with a shared root and nfs mounted
static /dev/
David