Keld Jørn Simonsen [EMAIL PROTECTED] wrote:
On Mon, Feb 18, 2008 at 09:51:15PM +1100, Neil Brown wrote:
Recovery after a failed drive would not be an easy operation, and I
cannot imagine it being even close to the raw speed of the device.
I thought this was a problem with most raid types,
Christina Braun [EMAIL PROTECTED] wrote:
what is the criterion in raid1 for the direction of synchronisation? In
The event counter :)
How can I tell the system which mirrored partition is now the
data source, without re-creating the raid1 or zeroing the superblock?
Is the destination in
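One way to dictate the direction by hand (a sketch only, with placeholder
device names): assemble the array from just the mirror whose data you want
to keep, then add the other half back; the added device is rebuilt from
the running array.

# keep the data on /dev/sda1, force-start the degraded mirror
mdadm --assemble --run /dev/md0 /dev/sda1
# optionally wipe the stale superblock to force a full resync
mdadm --zero-superblock /dev/sdb1
# the added device is overwritten from the running /dev/md0
mdadm /dev/md0 --add /dev/sdb1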
Hello,
using non-'standard' names for raid-arrays like /dev/md/root always
leads to (more or less) dynamically assigned device minors 127+.
Wouldn't it be possible to use the Preferred Minor hint from the
array's superblock or the super-minor option from the mdadm.conf?
This would lead to more
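For what it's worth, mdadm.conf can already pin an array to a fixed name;
a hedged example (the UUID value is just a placeholder):

DEVICE partitions
# identify by the Preferred Minor stored in the superblock ...
ARRAY /dev/md/root super-minor=0
# ... or, more robustly, by UUID
ARRAY /dev/md/root UUID=3aaa0122:29827cfa:5331ad66:ca767371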
Hello Again :)
Having a component device with slightly different superblock
characteristics in the system prevents mdadm from assembling arrays.
For example:
mdadm --fail /dev/mdx /dev/xdx
mdadm -G -n y-1 /dev/mdx
would lead to a non-assemble-able /dev/mdx as long as /dev/xdx
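A common way out, assuming the reshaped array is what you want, is to
wipe the stale superblock on the removed device so mdadm stops
considering it (a sketch, using the placeholder names from above):

mdadm --zero-superblock /dev/xdx
mdadm --assemble --scan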
Rustedt, Florian [EMAIL PROTECTED] wrote:
If the speed on RAID 0 is based on reading out in parallel, then it must be
the same on RAID 1, mustn't it?
On RAID 1, it is possible to read two blocks in parallel to speed things up, too.
It's not that simple.
On RAID0 you can read one single stream of
Al Boldi [EMAIL PROTECTED] wrote:
Interesting link. They seem to point out that SMART does not necessarily warn of
pending failure. This is probably worse than not having SMART at all, as it
gives you the illusion of safety.
If SMART gives you the illusion of safety, you didn't understand
Al Boldi [EMAIL PROTECTED] wrote:
Don't underestimate the effects mere layout can have on multi-disk array
performance, despite it being highly hw dependent.
I can't see the difference between equal mirrors and somehow interleaved
layout on RAID1. Since you have to seek anyways, there should
Neil Brown [EMAIL PROTECTED] wrote:
Skipping over blocks within a track is no faster than reading blocks
in the track, so you would need to make sure that your chunk size is
It is not just no faster, but probably even slower.
For seek()+read() from userspace this gets smoothed by the kernel's implicit
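The chunk size itself is fixed when the array is created; a minimal
sketch with placeholder devices (the value is in KiB):

mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sda1 /dev/sdb1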
martin f krafft [EMAIL PROTECTED] wrote:
Why would anyone want to create a partitionable array and put
partitions in it, rather than creating separate arrays for each
filesystem? Intuitively, this makes way more sense as then the
partitions are independent of each other; one array can fail and
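For reference, the two layouts being contrasted look roughly like this
(a sketch only, device names are placeholders):

# one partitionable array, later carved up with fdisk/parted
mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 /dev/sda /dev/sdb
# versus one independent array per filesystem
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2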
Henrik Holst [EMAIL PROTECTED] wrote:
It would be good to have an analog to memtest but for PATA and SATA
ports. Anyone seen something like that out there on the web?
Are you looking for `badblocks'?
There is also a `memtest.sh' from Doug Ledford. Its main intention is,
as the name suggests,
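In case it helps, typical badblocks invocations (device name is a
placeholder):

# read-only surface scan, safe for the data on the device
badblocks -sv /dev/sdb
# destructive read-write pattern test -- wipes everything on the device
badblocks -wsv /dev/sdb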
Tomasz Chmielewski [EMAIL PROTECTED] wrote:
# by default, run at 01:06 on the first Sunday of each month.
6 1 1-7 * 7 root [ -x /usr/share/mdadm/checkarray ]
You have quite an old version of mdadm. This issue has been fixed in
mdadm (2.5.2-8) unstable from 27 Jul 2006. Current version is mdadm
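The gist of the fix (not the literal Debian change, just its shape): run
the job only on Sundays and let the command itself check that it is the
first Sunday of the month, instead of relying on cron's day-of-month
field, which is OR'ed with the day-of-week field:

6 1 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet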
Benjamin Schieder [EMAIL PROTECTED] wrote:
In my experience, long tests slow down the raid to a point
where the system becomes unusable.
Even though we're quite off-topic here with that since it's more
SMART-related... this is at least unusual.
I'm also running regular SMART
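For reference, the self-test schedule usually lives in smartd.conf; a
hedged example (device and times are placeholders) that keeps long tests
to one day a week:

# short self-test every day at 02:00, long self-test Saturdays at 03:00
/dev/sda -a -s (S/../.././02|L/../../6/03)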
Gordon Henderson [EMAIL PROTECTED] wrote:
I don't have any mdadm.conf files ... What am I missing? (I've always been
under the impression that after needing the /etc/raidtab file with the old
raidtools, you didn't need a config file as such under mdadm... However,
You don't necessarily need
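If you do want a config file, it can be generated from whatever is
currently running (on Debian the path is /etc/mdadm/mdadm.conf; adjust to
taste, and append rather than blindly overwrite):

mdadm --detail --scan >> /etc/mdadm.conf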
Neil Brown [EMAIL PROTECTED] wrote:
I would be a lot happier about it if the block layer told me whether
the fail was a Media error or some other sort of error.
This wouldn't help you either. I've seen drives (mainly Samsung) that
locked up the whole IDE bus after some simple (subsequent)
Peter T. Breuer [EMAIL PROTECTED] wrote:
1) I would like raid request retries to be done with exponential
delays, so that we get a chance to overcome network brownouts.
Hmmm, IMHO this should be implemented in nbd/enbd, where it belongs,
and errors should then be masked within nbd/enbd.
Jim Klimov [EMAIL PROTECTED] wrote:
Are there any actual bonuses to making RAIDs on whole raw disks?
You usually gain 63 sectors (63 x 512 bytes, i.e. roughly 32k), the space
that would otherwise sit in front of the first partition.
regards
Mario
Michael Tokarev [EMAIL PROTECTED] wrote:
Why test for udev at all? If the device does not exist, regardless
of whether udev is running or not, it might be a good idea to try to create it.
Because IT IS NEEDED, period. Whether the operation fails or not, and
Perhaps it was just a typo and you
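The "just try to create it" approach is tiny anyway; a sketch (md0 used
as a placeholder, major 9 is md):

[ -b /dev/md0 ] || mknod /dev/md0 b 9 0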
Jason Lunz [EMAIL PROTECTED] wrote:
there's a udevd you can check for. I don't know whether that's a better
test or not.
It's not. For example, the Debian package also starts this udevd on
package upgrades, even when a 2.4 kernel is running which definitely
has no udev support :)
regards
Neil Brown [EMAIL PROTECTED] wrote:
On Wednesday February 15, [EMAIL PROTECTED] wrote:
E.g. Data and parity write requests are issued in parallel but only one
finishes. This will
lead to inconsistent data. It will be undetected and cannot be
If the array is degraded, that the inconsistency
Matthias Urlichs [EMAIL PROTECTED] wrote:
Hi, David Liontooth wrote:
* define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS will
Hmm. You'd have eight disks, five(!) of which may fail at any time, giving you
Four, isn't it?
RAID6 covers the failure of 2 of the underlying RAID1s, which,
Max Waterman [EMAIL PROTECTED] wrote:
Still, it seems like it should be a solvable problem...if you order the
data differently on each disk; for example, in the two disk case,
putting odd and even numbered 'stripes' on different platters [or sides
Well, unfortunately for today's hard disks
Tim Moore [EMAIL PROTECTED] wrote:
Andy Smith wrote:
Are reads from a 2 device RAID-1 twice as fast as from a single
md14 : active raid0 sdb13[1] sda13[0]
md13 : active raid1 sdb12[1] sda12[0]
/dev/md14:
Timing buffered disk reads: 272 MB in 3.01 seconds = 90.37 MB/sec
/dev/md13:
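The figures above look like the output of hdparm's buffered-read test,
presumably something along the lines of:

hdparm -t /dev/md13 /dev/md14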
Jure Pečar [EMAIL PROTECTED] wrote:
How to use gfx memory as nvram (no batteries tho :)
http://hedera.linuxnews.pl/_news/2002/09/03/_long/1445.html
Where's the 'n' in nvram there? You and the author mean vram, i.e.
Video-RAM, which is quite different from nvram, i.e. Non-Volatile RAM.
Anyways
Hello,
Nathan Lewis [EMAIL PROTECTED] wrote:
As part of my Master's thesis, I am working on adding a Reed-Solomon
personality to the existing linux RAID structure and I would like some
Is there any progress in implementing a generic Reed-Solomon personality
in MD since this mail from 31 Jan
Molle Bestefich [EMAIL PROTECTED] wrote:
E.g. will it increase the event counters on the other disks /even/ when
no reboot or stop-start has been performed, so that when the flappy
Event counters are increased immediately when an event occurs.
A device failure is an event as well as start and
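The counters are easy to compare by hand (placeholder device names):

mdadm --examine /dev/sda1 | grep -i events
mdadm --examine /dev/sdb1 | grep -i events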
[EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Sun, 4 Sep 2005, Neil Brown wrote:
I've never liked kernel autodetect, and while I won't break it, I
would like to migrate people away from it.
How will we use MD for the root filesystem? Will we *always* need a tricky
initrd with mdadm added?
If
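For reference, the initramfs side usually boils down to a couple of
commands before the root filesystem is mounted (a sketch only, with
placeholder names, assuming mdadm and a suitable mdadm.conf have been
copied into the initramfs):

mdadm --assemble --scan
mount /dev/md0 /newroot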
Hi,
Pallai Roland [EMAIL PROTECTED] wrote:
this is a feature patch that implements 'proactive raid5 disk
replacement' (http://www.arctic.org/~dean/raid-wishlist.html),
that could help a lot on large raid5 arrays built from cheap sata
...
linux software raid is very fragile by default, the