G'day all,
I have just finished my shiny new RAID-6 box. 15 x 250GB SATA drives.
While doing some failure testing (inadvertently due to libata SMART causing command errors) I
dropped 3 drives out of the array in sequence.
md coped with the first two (as it should), but after the third one dropped
Neil Brown wrote:
On Tuesday February 15, [EMAIL PROTECTED] wrote:
G'day all,
I'm not really sure how it's supposed to cope with losing more disks than planned, but filling the
syslog with nastiness is not very polite.
Thanks for the bug report. There are actually a few problems relating
to
G'day all,
I have a painful issue with a RAID-6 box. It only manifests itself on a fully complete and synced-up
array, and I can't reproduce it on an array smaller than the full set of drives, which means after every
attempt at debugging I have to endure a 12-hour resync before I try again.
I have a
J.A. Magallon wrote:
Hi...
I posted this in other mail, but now I can confirm this.
I have a box with a SATA RAID-5, and with 2.6.11-rc3-mm2+libata-dev1 it
works like a charm as a samba server. I dropped 12Gb onto it from an
OS X client, and people do backups from W2k boxes, and everything was fine.
With
Gordon Henderson wrote:
I'm in the middle of building up a new home server - looking at RAID-5 or
6 and 2.6.x, so maybe it's time to look at all this again, but it sounds
like the auto superblock update might thwart it all now...
Nah... As far as I can tell, 20ms after the last write, the auto
Gordon Henderson wrote:
And do check your disks regularly, although I don't think the current version
of smartmontools fully supports SATA under the SCSI subsystem yet...
Actually, if you are using a UP machine, the libata-dev tree has patches that make this work. I
believe there may be races on SMP
John McMonagle wrote:
Was planning on adding a hot spare to my 3 disk raid5 array and was
thinking that if I go to 4 drives I would be better off with 2 raid1 arrays,
considering the current state of raid5.
I just wonder about the comment "considering the current state of raid5". What might be wrong
Alexander Stockinger wrote:
Hi all,
I have a linux software RAID 5 running on a Debian Sarge with 2.6.7-smp.
Since I installed the system (several kernel updates ago) the disks of
the RAID 5 won't stay spun down. Having them sent to standby manually
using hdparm ends up in having the disks
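As an aside on the hdparm side of this: spinning a drive down now and setting its idle standby timeout are two different operations, and the -S timeout value is encoded, not raw seconds. A minimal sketch (the device name is hypothetical, and the commands are echoed rather than executed):

```shell
# Hypothetical device name; we only print the commands rather than run them.
DEV=/dev/sda

# Spin the drive down immediately:
echo "hdparm -y $DEV"

# hdparm -S sets the drive's own standby timeout; values 1-240 encode
# multiples of 5 seconds, so a 10-minute (600 s) timeout is -S 120.
secs=600
sval=$((secs / 5))
echo "hdparm -S $sval $DEV"
```

Note that a drive put into standby this way will still spin back up the moment anything (md, smartd, a cron job) touches it, which is often why disks "won't stay" down.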
H. Peter Anvin wrote:
No hiccups, data losses, or missing functionality. At the end of the
whole ordeal, the filesystem (1 TB, 50% full) was still quite pristine,
and fsck confirmed this. I was quite pleased :)
I second this. I endured numerous kernel crashes and other lockup/forced restart issues
Ming Zhang wrote:
Hi folks
I am testing some HW performance with raid5 on a 2.4.x kernel.
It is really troublesome: every time I create a raid5 I wait 4 hours for
reconstruction, then run some tests, then recreate another one
and wait again. I wonder if there is any hack or option
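One option worth knowing about here (assuming a reasonably recent mdadm, and for pure benchmarking only, since the parity on a fresh array is not actually consistent) is --assume-clean, which skips the initial reconstruction entirely:

```shell
# Create a raid5 purely for benchmarking, skipping the initial resync.
# --assume-clean tells md the parity is already consistent; on a brand-new
# array it is NOT, so never use this for arrays that will hold real data.
# Device names below are hypothetical.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --assume-clean /dev/sd[abcd]1
```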
Ming Zhang wrote:
I did a similar thing a while back.
I created the raid and waited for it to sync; I then made dd copies of the raid
superblocks.
When I blew it up I just dd the clean superblocks back again (saved a 12 hour
rebuild time)
interesting to know about this. You just check the
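For future readers, the 0.90-format superblock that this trick depends on lives in the last 64 KiB-aligned 64 KiB block of each component device. A sketch of computing the offset and doing the save/restore (device and file names are hypothetical; the formula applies to 0.90 superblocks only):

```shell
# Byte offset of the 0.90 md superblock: the device size rounded down
# to a 64 KiB boundary, minus one more 64 KiB block.
sb_offset() {
    echo $(( $1 / 65536 * 65536 - 65536 ))
}

# Example with a hypothetical 1 MiB device:
off=$(sb_offset 1048576)

# Save the superblock of a component drive (hypothetical names):
#   dd if=/dev/sda of=sda-sb.bin bs=65536 count=1 skip=$((off / 65536))
# ...and restore it later with seek instead of skip:
#   dd if=sda-sb.bin of=/dev/sda bs=65536 count=1 seek=$((off / 65536))
```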
G'day all,
This message is really just for future googles.
I have been running a 15 disk raid-6 since 24th Feb in production and can completely vouch for its
stability. I have had both simulated and real drive failures and it has handled itself perfectly
under all cases. Unclean shutdowns and
Francisco Zafra wrote:
Hi Neil,
For some hours now I have been trying to solve it with the latest version:
[EMAIL PROTECTED]:~ # mdadm --version
mdadm - v2.0-devel-2 - DEVELOPMENT VERSION NOT FOR REGULAR USE - 7 July 2005
With the same results :(
I really don't think it is locked; I dd'd it in the act of
Jeff Breidenbach wrote:
So - I'm thinking of the following backup scenario. First, remount
/dev/md0 readonly just to be safe. Then mount the two component
partitions (sdc1, sdd1) readonly. Tell the webserver to work from one
component partition, and tell the backup process to work from the
G'day all,
Here is an interesting question (well, I think so in any case). I just replaced a failed disk in my
15 drive Raid-6.
Simply mdadm --add /dev/md0 /dev/sdl
Why, when there is no other activity on the array at all, is it writing to every disk during the
recovery? I would have
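A quick way to see what the rebuild is actually doing is /proc/mdstat plus per-disk iostat; here is a small sketch that pulls the progress figure out of an mdstat recovery line (the sample line is made up but representative):

```shell
# Extract the recovery percentage from a /proc/mdstat progress line.
recovery_pct() {
    echo "$1" | sed -n 's/.*recovery *= *\([0-9.]*\)%.*/\1/p'
}

line='  [>....................]  recovery =  1.9% (4633600/244198464) finish=81.1min speed=49233K/sec'
recovery_pct "$line"    # prints 1.9

# In practice you would loop:
#   watch -n 5 cat /proc/mdstat
# and compare per-disk write counts with: iostat -x 5
```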
Brad Campbell wrote:
G'day all,
Here is an interesting question (well, I think so in any case). I just
replaced a failed disk in my 15 drive Raid-6.
Forgot the most important detail (as usual)
bklaptop:~$ ssh storage1 uname -a
Linux storage1 2.6.11.7 #4 Fri Oct 7 20:00:25 GST 2005 i686 GNU
Neil Brown wrote:
- add disks to convert to raid6.
I don't think this is possible, but you should check the latest
raid reconfig.
It's not. I started work on it Feb last year but then real life got in the way
again.
In the longer term, I think raidreconf as it stands is going to die
Max Waterman wrote:
Still, it seems like it should be a solvable problem...if you order the
data differently on each disk; for example, in the two disk case,
putting odd and even numbered 'stripes' on different platters [or sides
of platters].
The only problem there is determining the
John Rowe wrote:
First, can raidreconf grow a RAID6 device? The man page doesn't seem to
mention RAID6 at all.
No, raidreconf has no knowledge of raid-6 at all.
Second, with RAID5 or RAID6 my biggest fear is a system crash whilst the
RAID is writing resulting in dirty blocks. Does RAID6
Brad Campbell wrote:
G'day all,
I have a box here.. it has a 2Ghz processor and 1.5GB of ram. It runs
the entire OS over NFS and its sole purpose in life is to run 15 SATA
drives in a Raid-6 with ext3 on it, and share that over NFS. Most of
that ram is sitting completely idle and thus I
Christopher Smith wrote:
Brad Campbell wrote:
I've been running 3 together in one box for about 18 months, and four
in another for a year now... the on-board BIOS will only pick up 8
drives, but they work just fine under Linux and recognise all
connected drives.
What distro and kernel
Martin Stender wrote:
Hi there!
I have two identical disks sitting on a Promise dual channel IDE
controller. I guess both disks are primaries then.
One of the disks has failed, so I bought a new disk, took out the
failed disk, and put in the new one.
That might seem a little naive, and
Guy wrote:
Hello group,
I am upgrading my disks from old 18 Gig SCSI disks to 300 Gig SATA
disks. I need a good SATA controller. My system is old and has PCI v2.1.
I need a 4-port card, or two 2-port cards. My system has multiple PCI buses, so
2 cards may give me better performance, but
[EMAIL PROTECTED] wrote:
Mike Dresser wrote:
On Fri, 23 Jun 2006, Molle Bestefich wrote:
Christian Pernegger wrote:
Anything specific wrong with the Maxtors?
I'd watch out regarding the Western Digital disks, apparently they
have a bad habit of turning themselves off when used in RAID mode,
Francois Barre wrote:
What are you expecting fdisk to tell you? fdisk lists partitions and
I suspect you didn't have any partitions on /dev/md0
More likely you want something like
fsck -n -f /dev/md0
and see which one produces the least noise.
Maybe a simple file -s /dev/md0 could do the
David Rees wrote:
I personally prefer to do a long self-test once a week, a month seems
like a lot of time for something to go wrong.
Unfortunately I found some drives (Seagate 400 PATA) had a rather
negative effect on performance while doing a self-test.
Interesting that you noted negative
G'day all,
I have a box with 15 SATA drives in it, they are all on the PCI bus and it's a
relatively slow machine.
I can extract about 100MB/s combined read speed from these drives with dd.
When reading /dev/md0 with dd I get about 80MB/s, but when I ask it to check the array on a
completely
Neil Brown wrote:
Hmm nothing obvious.
Have you tried increasing
/proc/sys/dev/raid/speed_limit_min:1000
just in case that makes a difference (it shouldn't but you seem to be
down close to that speed).
No difference..
What speed is the raid6 algorithm achieving - as reported at boot
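For anyone tuning this later: the resync throughput floor and ceiling are runtime-tunable sysctls; a sketch (the numbers are illustrative, not recommendations, and writing these files needs root):

```shell
# Minimum resync speed (KiB/s per device) md tries to maintain:
echo 10000 > /proc/sys/dev/raid/speed_limit_min
# Maximum resync speed md will allow:
echo 200000 > /proc/sys/dev/raid/speed_limit_max
# Read the current values back:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
```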
Michael Tokarev wrote:
Neil Brown wrote:
On Sunday October 29, [EMAIL PROTECTED] wrote:
Hi,
I have 2 arrays whose numbers get inverted, creating havoc, when booting
under different kernels.
I have md0 (raid1) made up of ide drives and md1 (raid5) made up of five
sata drives, when booting
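The usual fix for md numbers swapping between boots is to pin each array to its UUID in mdadm.conf, so assembly no longer depends on device probe order; a sketch (the config path varies by distro, and the UUID lines shown are only the general shape of the output):

```shell
# Append ARRAY lines with UUIDs to the config (path is distro-dependent):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# This produces lines of the form:
#   ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
#   ARRAY /dev/md1 level=raid5 num-devices=5 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```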
G'day all,
I've got 3 arrays here. A 3 drive raid-5, a 10 drive raid-5 and a 15 drive raid-6. They are all
currently 250GB SATA drives.
I'm contemplating an upgrade to 500GB drives on one or more of the arrays and wondering the best way
to do the physical swap.
The slow and steady way
David Greaves wrote:
I was more wondering about the feasibility of using dd to copy the drive
contents to the larger drives (then I could do 5 at a time) and working
it from there.
Err, if you can dd the drives, why can't you create a new array and use xfsdump
or equivalent? Is downtime due
Mikael Pettersson wrote:
I don't think sata_promise is the guilty party here. Looks like some
layer above sata_promise got confused about the state of the interface.
But locking up hard after hardreset is a problem of sata_promise, no?
Maybe, maybe not. The original report doesn't specify
greenjelly wrote:
The options I seek are to be able to start with a 6-drive RAID-5
array, then as my demand for more space increases in the future I want to be
able to plug in more drives and incorporate them into the Array without the
need to backup the data. Basically I need the
jahammonds prost wrote:
From: Brad Campbell [EMAIL PROTECTED]
I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15 drives and a 600W PSU. It's
not rocket science.
Where did you find reasonably priced cases to hold so many drives? Each of my
home servers tops out at 8 data
Johny Mail list wrote:
Hello list,
I have a little question about software RAID on Linux.
I have installed software RAID on all my Dell SC1425 servers,
believing that md raid was a robust driver.
And recently I ran some tests on a server to see how the RAID handles a
hard drive power failure