George Spelvin wrote:
I just discovered (the hard way, sigh, but not too much data loss) that a
4-drive RAID 10 array had the mirroring set up incorrectly.
Given 4 drives A, B, C and D, I had intended to mirror A-C and B-D,
so that I could split the mirror and run on either (A,B) or (C,D).
Jeff Breidenbach wrote:
It's not a RAID issue, but make sure you don't have any duplicate volume
names. According to Murphy's Law, if there are two / volumes, the wrong
one will be chosen upon your next reboot.
Thanks for the tip. Since I'm not using volumes or LVM at all, I should be
safe.
Quoting Hubert Verstraete [EMAIL PROTECTED]:
Hi All,
My RAID 5 array is running slow.
I've run a lot of tests to find out where this issue lies.
I've come to the conclusion that once the array is created with mdadm
2.6.x (up to 2.6.4), whatever the kernel you run, whatever the mdadm
you
Janek Kozicki wrote:
Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)
2. How can I delete that damn array so it doesn't hang my server up in a
loop?
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
This works provided the superblocks are at the beginning of the
component
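A hedged illustration of why this works: the 1.1/1.2 superblocks live at the start of the component, so zeroing the first few MB destroys them. The demo below uses a scratch file as a stand-in for a real component like /dev/sdb1. For 0.90/1.0 metadata, which sits near the end of the device, `mdadm --zero-superblock` is the safer tool since it locates the superblock itself:

```shell
# Scratch-file stand-in for a component device (do NOT run dd against
# a device you still need). Plant fake metadata at offset 0, then wipe
# the first 10 MiB the same way the dd command above does:
truncate -s 20M demo.img
printf 'fake-md-superblock' | dd of=demo.img conv=notrunc 2>/dev/null
dd if=/dev/zero of=demo.img bs=1M count=10 conv=notrunc 2>/dev/null
head -c 18 demo.img | tr -d '\0' | wc -c   # 0 non-NUL bytes left: metadata gone
# Safer, metadata-version-agnostic equivalent on a real component:
#   mdadm --stop /dev/mdX
#   mdadm --zero-superblock /dev/sdb1
```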
Moshe Yudkowsky wrote:
Michael Tokarev wrote:
Janek Kozicki wrote:
Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)
2. How can I delete that damn array so it doesn't hang my server up
in a loop?
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
This works provided
Janek Kozicki wrote:
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 16:52:18 +0300)
Janek Kozicki wrote:
I'm not using mdadm.conf at all.
That's wrong, as you need at least something to identify the array
components.
I was afraid of that ;-) So, is that a correct way
Linda Walsh wrote:
Michael Tokarev wrote:
Unfortunately a UPS does not *really* help here. Because unless
it has a control program which properly shuts the system down on the loss
of input power, and the battery really has the capacity to power the
system while it's shutting down (anyone tested
Moshe Yudkowsky wrote:
[]
But that's *exactly* what I have -- well, 5GB -- and which failed. I've
modified /etc/fstab to use data=journal (even on root, which I
thought wasn't supposed to work without a grub option!) and I can
power-cycle the system and bring it up reliably afterwards.
Moshe Yudkowsky wrote:
[]
If I'm reading the man pages, Wikis, READMEs and mailing lists correctly
-- not necessarily the case -- the ext3 file system uses the equivalent
of data=journal as a default.
ext3 defaults to data=ordered, not data=journal. ext2 doesn't have a
journal at all.
The
Eric Sandeen wrote:
Moshe Yudkowsky wrote:
So if I understand you correctly, you're stating that currently the most
reliable fs in its default configuration, in terms of protection against
power-loss scenarios, is XFS?
I wouldn't go that far without some real-world poweroff testing, because
John Stoffel wrote:
[]
C'mon, how many of you are programmed to believe that 1.2 is better
than 1.0? But when they're not different, just different
placements, then it's confusing.
Speaking of the more-is-better thing...
There were quite a few bugs fixed in recent months wrt version 1
Eric Sandeen wrote:
[]
http://oss.sgi.com/projects/xfs/faq.html#nulls
and note that recent fixes have been made in this area (also noted in
the faq)
Also - the above all assumes that when a drive says it's written/flushed
data, that it truly has. Modern write-caching drives can wreak
Moshe Yudkowsky wrote:
I've been reading the draft and checking it against my experience.
Because of local power fluctuations, I've just accidentally checked my
system: My system does *not* survive a power hit. This has happened
twice already today.
I've got /boot and a few other pieces in
Moshe Yudkowsky wrote:
Michael Tokarev wrote:
Speaking of repairs. As I already mentioned, I always use small
(256M..1G) raid1 array for my root partition, including /boot,
/bin, /etc, /sbin, /lib and so on (/usr, /home, /var are on
their own filesystems). And I had the following
at
low-level? Can I trust the Device?
Best regards,
Michael
--
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL
Moshe Yudkowsky wrote:
[]
Mr. Tokarev wrote:
By the way, on all our systems I use small (256Mb for small-software systems,
sometimes 512M, but 1G should be sufficient) partition for a root filesystem
(/etc, /bin, /sbin, /lib, and /boot), and put it on a raid1 on all...
... doing [it]
this
Peter Rabbitson wrote:
Moshe Yudkowsky wrote:
over the other. For example, I've now learned that if I want to set up
a RAID1 /boot, it must actually be 1.2 or grub won't be able to read
it. (I would therefore argue that if the new version ever becomes
default, then the default sub-version
Keld Jørn Simonsen wrote:
[]
Ugh. 2-drive raid10 is effectively just a raid1. I.e., mirroring
without any striping. (Or, backwards, striping without mirroring).
uhm, well, I did not understand: (Or, backwards, striping without
mirroring). I don't think a 2 drive vanilla raid10 will do
Moshe Yudkowsky wrote:
Peter Rabbitson wrote:
It is exactly what the name implies - a new kind of RAID :) The setup
you describe is not RAID10, it is RAID1+0. As far as how linux RAID10
works - here is an excellent article:
Keld Jørn Simonsen wrote:
On Tue, Jan 29, 2008 at 09:57:48AM -0600, Moshe Yudkowsky wrote:
In my 4 drive system, I'm clearly not getting 1+0's ability to use grub
out of the RAID10. I expect it's because I used 1.2 superblocks (why
not use the latest, I said, foolishly...) and therefore the
Peter Rabbitson wrote:
[]
However if you want to be so anal about names and specifications: md
raid 10 is not a _full_ 1+0 implementation. Consider the textbook
scenario with 4 drives:
(A mirroring B) striped with (C mirroring D)
When only drives A and C are present, md raid 10 with near
Keld Jørn Simonsen wrote:
On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
Linux raid10 MODULE (which implements that standard raid10
LEVEL in full) adds some quite.. unusual extensions to that
standard raid10 LEVEL. The resulting layout is also called
raid10 in linux (ie
Moshe Yudkowsky wrote:
Michael Tokarev wrote:
There are more-or-less standard raid LEVELS, including
raid10 (which is the same as raid1+0, or a stripe on top
of mirrors - note it does not mean 4 drives, you can
use 6 - stripe over 3 mirrors each of 2 components; or
the reverse - stripe
Peter Rabbitson wrote:
Michael Tokarev wrote:
Raid10 IS RAID1+0 ;)
It's just that linux raid10 driver can utilize more.. interesting ways
to lay out the data.
This is misleading, and adds to the confusion existing even before linux
raid10. When you say raid10 in the hardware raid world
Peter Rabbitson wrote:
Moshe Yudkowsky wrote:
One of the puzzling things about this is that I conceive of RAID10 as
two RAID1 pairs, with RAID0 on top to join them into a large drive.
However, when I use --level=10 to create my md drive, I cannot find
out which two pairs are the RAID1's:
Martin Seebach wrote:
Hi,
I'm not sure this is completely linux-raid related, but I can't figure out
where to start:
A few days ago, my server died. I was able to log in and salvage the content
of dmesg:
http://pastebin.com/m4af616df
I talked to my hosting-people and they said
Hi,
I have just built a Raid 5 array using mdadm and while it is running fine I
have a question, about identifying the order of disks in the array.
In the pre-SATA days you would connect your drives as follows:
Primary Master - HDA
Primary Slave - HDB
Secondary - Master - HDC
Secondary -
Quoting Mitchell Laks [EMAIL PROTECTED]:
Hi mdadm raid gurus,
I wanted to make a raid1 array, but at the moment I have only 1
drive available. The other disk is
in the mail. I wanted to make a raid1 that I will use as a backup.
But I need to do the backup now, before the second drive
Quoting Norman Elton [EMAIL PROTECTED]:
I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.
Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
Neil Brown wrote:
On Monday December 31, [EMAIL PROTECTED] wrote:
I'm hoping that if I can get raid5 to continue despite the errors, I
can bring back up enough of the server to continue, a bit like the
remount-ro option in ext2/ext3.
If not, oh well...
Sorry, but it is oh well.
Speaking
Justin Piszcz wrote:
[]
Good to know/have it confirmed by someone else, the alignment does not
matter with Linux/SW RAID.
Alignment matters when one partitions a Linux/SW raid array.
If the inner partitions are not aligned on a stripe
boundary, esp. in the worst case when the filesystem
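The arithmetic behind that worst case can be sketched like this (the chunk size and disk count below are assumed for illustration, not taken from the thread): a 4-disk RAID5 with 64 KiB chunks has 3 data disks per stripe, so one full stripe is 192 KiB, i.e. 384 sectors of 512 bytes, and a partition starting at the classic DOS offset of sector 63 can never be stripe-aligned:

```shell
# Assumed geometry: 4-disk RAID5, 64 KiB chunk => 3 data disks per stripe
chunk_kib=64
data_disks=3
stripe_sectors=$(( chunk_kib * 2 * data_disks ))   # KiB -> 512-byte sectors
for start in 63 384; do  # 63 = classic DOS partition offset; 384 = one full stripe
    if [ $(( start % stripe_sectors )) -eq 0 ]; then
        echo "start sector $start: aligned"
    else
        echo "start sector $start: misaligned"
    fi
done
```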
maobo wrote:
Hi,all
Yes, Raid10 read balance is shortest-position-time-first, also
considering the sequential access condition. But in my tests its
performance is really poor compared to raid0.
Single-stream write performance of raid0, raid1 and raid10 should be
of similar level (with raid5 and
Michael Tokarev wrote:
I just noticed that with Linux software RAID10, disk
usage isn't equal at all, that is, most reads are
done from the first part of mirror(s) only.
Attached (disk-hour.png) is a little graph demonstrating
this (please don't blame me for poor choice of colors
Janek Kozicki wrote:
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 14:53:38 +0300)
I just noticed that with Linux software RAID10, disk
usage isn't equal at all, that is, most reads are
done from the first part of mirror(s) only.
what's your kernel version? I recall that recently
Thierry Iceta wrote:
Hi
I would like to use raidtools-1.00.3 on Rhel5 distribution
but I got this error
Use mdadm instead. Raidtools is dangerous/unsafe and has
not been maintained for a long time.
/mjt
David Greaves wrote:
Michael Makuch wrote:
So my questions are:
...
- Is this a.o.k for a raid5 array?
So I realised that /proc/mdstat isn't documented too well anywhere...
http://linux-raid.osdl.org/index.php/Mdstat
Comments welcome...
David
One thing
Guy Watkins wrote:
man md
man mdadm
I use RAID6. Happy with it so far, but haven't had a disk failure yet.
RAID5 sucks because if you have 1 failed disk and 1 bad block on any other
disk, you are hosed.
Hope that helps.
I can't believe I've been using a raid array for 2 years and didn't know
I realize this is the developers list and though I am a developer I'm
not a developer
of linux raid, but I can find no other source of answers to these questions:
I've been using linux software raid (5) for a couple of years, having
recently upped
to the 2.6.23 kernel (FC7, was previously on
I come across a situation where external MD bitmaps
aren't usable on any standard linux distribution
unless special (non-trivial) actions are taken.
First is a small buglet in mdadm, or two.
It's not possible to specify --bitmap= in assemble
command line - the option seems to be ignored. But
[Cc'd to xfs list as it contains something related]
Dragos wrote:
Thank you.
I want to make sure I understand.
[Some background for XFS list. The talk is about a broken linux software
raid (the reason for breakage isn't relevant anymore). The OP seems to
have lost the order of drives in his
Justin Piszcz said: (by the date of Sun, 2 Dec 2007 04:11:59 -0500 (EST))
The badblocks did not do anything; however, when I built a software raid 5
and then performed a dd:
/usr/bin/time dd if=/dev/zero of=fill_disk bs=1M
I saw this somewhere along the way:
[42332.936706] ata5.00:
On the heels of last week's post asking about hardware recommendations,
I'd like to ask a few questions too. :)
I'm considering my first SAS purchase. I'm planning to build a software
RAID6 array using a SAS JBOD attached to a linux box. I haven't decided
on any of the hardware specifics.
I'm
Janek Kozicki wrote:
[]
Can you please add do the manual under 'SEE ALSO' a reference
to /usr/share/doc/mdadm ?
/usr/share/doc/mdadm is Debian-specific (well.. not sure it's really
Debian (or something derived from it) -- some other distros may use
the same naming scheme, too). Other
Justin Piszcz wrote:
# ps auxww | grep D
USER  PID %CPU %MEM VSZ RSS TTY STAT START TIME  COMMAND
root  273  0.0  0.0   0   0 ?   D    Oct21 14:40 [pdflush]
root  274  0.0  0.0   0   0 ?   D    Oct21 13:00 [pdflush]
After several days/weeks,
Justin Piszcz wrote:
On Sun, 4 Nov 2007, Michael Tokarev wrote:
[]
The next time you come across something like that, do a SysRq-T dump and
post that. It shows a stack trace of all processes - and in particular,
where exactly each task is stuck.
Yes I got it before I rebooted, ran
John Stoffel wrote:
Michael == Michael Tokarev [EMAIL PROTECTED] writes:
If you are going to mirror an existing filesystem, then by definition
you have a second disk or partition available for the purpose. So you
would merely setup the new RAID1, in degraded mode, using the new
partition
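The degraded-mirror procedure described above looks roughly like this in mdadm terms (a sketch; device names are placeholders, `missing` is mdadm's literal keyword for the not-yet-present member):

```
# create the mirror degraded, with the new partition and a 'missing' slot
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs -t ext3 /dev/md0            # make a filesystem, copy the data over
# ...later, once the old partition is free:
mdadm /dev/md0 --add /dev/sda1   # second member joins and resyncs
```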
Justin Piszcz wrote:
[]
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Justin, forgive me please, but can you learn to trim the original
messages when
Doug Ledford wrote:
[]
1.0, 1.1, and 1.2 are the same format, just in different positions on
the disk. Of the three, the 1.1 format is the safest to use since it
won't allow you to accidentally have some sort of metadata between the
beginning of the disk and the raid superblock (such as an
John Stoffel wrote:
Michael == Michael Tokarev [EMAIL PROTECTED] writes:
[]
Michael Well, I strongly, completely disagree. You described a
Michael real-world situation, and that's unfortunate, BUT: for at
Michael least raid1, there ARE cases, pretty valid ones, when one
Michael NEEDS
Justin Piszcz wrote:
On Fri, 19 Oct 2007, Doug Ledford wrote:
On Fri, 2007-10-19 at 13:05 -0400, Justin Piszcz wrote:
[]
Got it, so for RAID1 it would make sense if LILO supported it (the
later versions of the md superblock)
Lilo doesn't know anything about the superblock format,
Neil Brown wrote:
On Tuesday October 9, [EMAIL PROTECTED] wrote:
[]
o During this reshape time, errors may be fatal to the whole array -
while mdadm does have a sense of a critical section, the
whole procedure isn't as well tested as the rest of the raid code,
so I for one will not rely on it,
Janek Kozicki wrote:
Hello,
Recently I started to use mdadm and I'm very impressed by its
capabilities.
I have raid0 (250+250 GB) on my workstation. And I want to have
raid5 (4*500 = 1500 GB) on my backup machine.
Hmm. Are you sure you need that much space on the backup, to
start with?
Rustedt, Florian wrote:
Hello list,
some folks reported severe filesystem-crashes with ext3 and reiserfs on
mdraid level 1 and 5.
I guess much stronger evidence and more details are needed.
Without any additional information I for one can only make
a (not-so-pleasant) guess about those some
Daniel Santos wrote:
I retried rebuilding the array once again from scratch, and this time
checked the syslog messages. The reconstructions process is getting
stuck at a disk block that it can't read. I double checked the block
number by repeating the array creation, and did a bad block scan.
Patrik Jonsson wrote:
Michael Tokarev wrote:
[]
But in any case, md should not stall - be it during reconstruction
or not. For this, I can't comment - to me it smells like a bug
somewhere (md layer? error handling in driver? something else?)
which should be found and fixed
Dean S. Messing wrote:
Michael Tokarev writes:
[]
: the procedure is something like this:
:
: cd /backups
: rm -rf tmp/
: cp -al $yesterday tmp/
: rsync -r --delete -t ... /filesystem tmp
: mv tmp $today
:
: That is, link the previous backup to temp (which takes no space
Dean S. Messing wrote:
[]
[] That's what
attracted me to RAID 0 --- which seems to have no downside EXCEPT
safety :-).
So I'm not sure I'll ever figure out the right tuning. I'm at the
point of abandoning RAID entirely and just putting the three disks
together as a big LV and being done
Hi
Looks like a disk I/O error to me.
As I recall, after kernel 2.6.16, raid1 read errors will be auto-corrected.
I think a filesystem check might help.
Michael
On 9/3/07, Mitchell Laks [EMAIL PROTECTED] wrote:
Hi,
I run raid1 on a debian etch server.
If I do halt = shutdown -h
From: Michael J. Evans [EMAIL PROTECTED]
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans [EMAIL PROTECTED
On 8/27/07, Randy Dunlap [EMAIL PROTECTED] wrote:
Michael J. Evans wrote:
On Monday 27 August 2007, Randy Dunlap wrote:
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
=
--- linux/drivers/md/md.c.orig 2007-08-21
From: Michael J. Evans [EMAIL PROTECTED]
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans [EMAIL PROTECTED
On 8/28/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patch against.
2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
Additionally I
On Tuesday 28 August 2007, Jan Engelhardt wrote:
On Aug 28 2007 06:08, Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patch against.
2.6.23-rc4 is on kernel.org and I don't see any git snapshots
On 8/28/07, Randy Dunlap [EMAIL PROTECTED] wrote:
Michael Evans wrote:
On 8/28/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patch against.
2.6.23
On 8/28/07, Randy Dunlap [EMAIL PROTECTED] wrote:
Michael Evans wrote:
On 8/28/07, Randy Dunlap [EMAIL PROTECTED] wrote:
Michael Evans wrote:
On 8/28/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now
On 8/26/07, Kyle Moffett [EMAIL PROTECTED] wrote:
On Aug 26, 2007, at 08:20:45, Michael Evans wrote:
Also, I forgot to mention, the reason I added the counters was
mostly for debugging. However they're also as useful in the same
way that listing the partitions when a new disk is added can
From: Michael J. Evans [EMAIL PROTECTED]
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans [EMAIL PROTECTED
From: Michael J. Evans [EMAIL PROTECTED]
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans [EMAIL PROTECTED
On Monday 27 August 2007, Randy Dunlap wrote:
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
=
--- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700
+++ linux/drivers/md/md.c 2007-08-21 04:30
From: Michael J. Evans [EMAIL PROTECTED]
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans [EMAIL PROTECTED
Also, I forgot to mention, the reason I added the counters was mostly
for debugging. However they're also as useful in the same way that
listing the partitions when a new disk is added can be (in fact this
augments that and the existing messages the autodetect routines
provide).
As for using
On 8/26/07, Jan Engelhardt [EMAIL PROTECTED] wrote:
On Aug 26 2007 04:51, Michael J. Evans wrote:
{
-	if (dev_cnt >= 0 && dev_cnt < 127)
-		detected_devices[dev_cnt++] = dev;
+	struct detected_devices_node *node_detected_dev;
+	node_detected_dev = kzalloc(sizeof
On 8/26/07, Randy Dunlap [EMAIL PROTECTED] wrote:
On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote:
From: Michael J. Evans [EMAIL PROTECTED]
Is there any way to tell the user what device (or partition?) is
being skipped? This printk should just print (confirm
On Wednesday August 22, [EMAIL PROTECTED] wrote:
From: Michael J. Evans [EMAIL PROTECTED]
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array
Tomas France wrote:
Thanks for the answer, David!
I kind of think RAID-10 is a very good choice for a swap file. For now I
will need to setup the swap file on a simple RAID-1 array anyway, I just
need to be prepared when it's time to add more disks and transform the
whole thing into
I have removed the drives from my machine; the problem I'm having is that I don't
know the order (ports) they go back into the machine. Does anyone know how to
determine the order, or how to fix the drive array if the order is not correct?
mullaly wrote:
[]
All works well until a system reboot. md2 appears to be brought up before
md0 and md1 which causes the raid to start without two of its drives.
Is there any way to fix this?
How about listing the arrays in proper order in mdadm.conf ?
/mjt
From: Daniel Korstad [EMAIL PROTECTED]
To: Michael [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Monday, July 16, 2007 10:23:23 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?
You will learn a lot by building your own system, and it will allow you to do more
Joshua Baker-LePain wrote:
[]
Yep, hardware RAID -- I need the hot swappability (which, AFAIK, is
still an issue with md).
Just out of curiosity - what do you mean by swappability?
For many years we've been using linux software raid, and we've had no problems
with swappability of the component drives (in
as possible.
- Original Message
From: Bill Davidsen [EMAIL PROTECTED]
To: Daniel Korstad [EMAIL PROTECTED]
Cc: Michael [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?
Daniel Korstad wrote:
You
it
is easy to use, supports all of my hardware right on install and has the auto
update features that I enjoy. I have instead seen a report of tune2fs
(which is available), though I am not sure if this is of use on a RAID-5 array.
Thanks
Michael Parisi
- Original Message
From: Bill
Maybe the other IDE controller uses a module that it loaded late.
Hmm, I'd need to check that after I rebuild the arrays. Maybe the other
IDE-controller is not in the initrd. That wouldn't explain the missing hdb,
though.
--
YT,
Michael
and changed hdc to hdg, so that can't
be the reason.
I seem to be missing something here, but what is it?
--
YT,
Michael
On Thu, Jun 28, 2007 at 09:12:56AM +0100, David Greaves wrote:
(back on list for google's benefit ;) and because there are some good
questions and I don't know all the answers... )
Thanks, I didn't realize I didn't 'reply-all' to stay on the list.
Hopefully it will snowball as people who use
How do I create an array with a helpful name? i.e. /dev/md/storage?
The mdadm man page hints at this in the discussion of the --auto option
in the ASSEMBLE MODE section, but doesn't clearly indicate how it's done.
Must I create the device nodes by hand first using MAKEDEV?
Thanks.
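One answer that works with mdadm itself, sketched here with placeholder devices and levels - no hand-made nodes or MAKEDEV needed:

```
# let mdadm create /dev/md/storage itself at --create time:
mdadm --create /dev/md/storage --auto=md --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# with v1 superblocks the name can live in the metadata instead:
mdadm --create /dev/md0 --name=storage --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```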
-
To
My 750w PSU is going into my dream machine (Overclocked core2duo extreme with
1066mhz memory, lots of optical drives, water cooling, Radeon 1900XTX, aka high
power application). The 550 Ultra is coming out of that machine and going into
the NAS. It's not the perfect solution, for I would
Now MD subsystem does a very good job at trying to
recover a bad block on a disk, by re-writing its
content (to force the drive to reallocate the block in
question) and verifying it's written ok.
But I wonder if it's worth the effort to go further
than that.
Now, md can use bitmaps. And a bitmap
Nix wrote:
On 8 May 2007, Michael Tokarev told this:
BTW, for such recovery purposes, I use initrd (initramfs really, but
does not matter) with a normal (but tiny) set of commands inside,
thanks to busybox. So everything can be done without any help from
external recovery CD. Very handy
Bernd Schubert wrote:
Benjamin Schieder wrote:
[EMAIL PROTECTED]:~# mdadm /dev/md/2 -r /dev/hdh5
mdadm: hot remove failed for /dev/hdh5: No such device
md1 and md2 are supposed to be raid5 arrays.
You are probably using udev, aren't you? Somehow there's presently
no /dev/hdh5, but to
Bernd Schubert wrote:
Hi,
we are presently running into a hotplug/linux-raid problem.
Let's assume a hard disk entirely fails or a stupid human being pulls it out of
the system. Several partitions of the very same hard disk are also part of
linux-software raid. Also, /dev is managed by
Brad Campbell wrote:
[]
It occurs though that the superblocks would be in the wrong place for
the new drives and I'm wondering if the kernel or mdadm might not find
them.
I once had a similar issue. And wrote a tiny program (a hack, sort of),
to read or write md superblock from/to a component
Benjamin Schieder wrote:
Hi list.
md2 : inactive hdh5[4](S) hdg5[1] hde5[3] hdf5[2]
11983872 blocks
[EMAIL PROTECTED]:~# mdadm -R /dev/md/2
mdadm: failed to run array /dev/md/2: Input/output error
[EMAIL PROTECTED]:~# mdadm /dev/md/
0 1 2 3 4 5
[EMAIL PROTECTED]:~# mdadm
Mark A. O'Neil wrote:
Hello,
I hope this is the appropriate forum for this request; if not, please
direct me to the correct one.
I have a system running FC6, 2.6.20-1.2925, software RAID5 and a power
outage seems to have borked the file structure on the RAID.
Boot shows the following
Neil Brown wrote:
On Tuesday April 3, [EMAIL PROTECTED] wrote:
[]
After the power cycle the kernel boots, devices are discovered, among
which the ones holding raid. Then we try to find the device that holds
swap in case of resume and / in case of a normal boot.
Now comes a crucial point. The
Bill Davidsen wrote:
[]
If you use RAID0 on an array it will be faster (usually) than just
partitions, but any process with swapped pages will crash if you lose
either drive. With RAID1 operation will be more reliable but no faster.
If you use RAID10 the array will be faster and more reliable,
AGAIN thank you both! You've been of great help.
--
Michael Schwarz
Michael Schwarz wrote:
More than ever, I am convinced that it is actually a hardware problem,
but
I am curious for the opinions of both of you on whether the system
(meaning, I guess, the combination of usb-storage driver
Comments below.
--
Michael Schwarz
On Mon, 19 Mar 2007, Michael Schwarz wrote:
I'm going to hang on to the hardware. This is a pilot/demo that may lead
to development of a new device, and, if so, I'll be getting back into
device driver writing. Working this problem would be great practice
touch a
given functionality for example).
Please please please!
cheers
--
Michael Ellerman
OzLabs, IBM Australia Development Lab
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children
I've tried both single and multiple files. The files are not sparse. They
are highly compressed files (mpeg files) that would, to the filesystem, be
nearly random with no repeated patterns or voids.
--
Michael Schwarz
Michael Schwarz wrote:
Update:
(For those who've been waiting
that to you two in a separate message.
If anyone else would like my logs, let me know.
--
Michael Schwarz
Okay. I've verified my hardware (by doing large write/reads to non-raid
file systems on each of the seven USB flash drives on the hub).
So this morning I booted cold and began gathering