- XFS is faster and fragments less, but make sure you have a good UPS
- ReiserFS 3.6 is mature and fast too; you might consider it
- ext3 is slow if you have many files in one directory, but has more
  mature tools (resize, recovery, etc.)
I'd go with XFS or Reiser.
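If you do go with XFS on top of md, it pays to tell mkfs about the array
geometry so allocation aligns with the stripes. A quick sketch, assuming a
hypothetical 4+1-disk RAID5 with the default 64k chunk (su = chunk size,
sw = number of data disks; /dev/md0 is a placeholder):

# mkfs.xfs -d su=64k,sw=4 /dev/md0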
A little nit-picking...
# echo check > /sys/block/md0/md/sync_action
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid4]
md1 : active raid1 sda7[1] hda7[0]
      6253248 blocks [2/2] [UU]
md2 : active raid5 sdd1[2] sdc1[3] sdb1[1] hdc1[0] hdb1[4]
      976783616 blocks
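For completeness, the whole scrub round-trip looks like this (paths as
above; mismatch_cnt is the standard md sysfs counter):

# echo check > /sys/block/md0/md/sync_action    # start a background scrub
# cat /proc/mdstat                              # watch its progress
# cat /sys/block/md0/md/mismatch_cnt            # non-zero means inconsistent stripes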
If you happen to be unfortunate enough to have also purchased a cheap
ASUS K8N-VM with the nForce 410 chipset in order to get the software RAID
going, and if you are also unfortunate enough to have bought some newer
Maxtor SATA harddrives, use the jumper on the drive to revert to SATA150
instead of SATA300.
I think you would like something like this :
A LVM (or dm- device mapper) layer which sits between the RAID layer and
the physical disks. This layer computes checksums as data is written to
the physical disks, and checks read data against these checksums.
Problem is, where do you store the checksums?
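For the record, the device-mapper dm-integrity target eventually grew
into exactly this kind of layer, storing the checksums interleaved on the
same device. A sketch, assuming a kernel and cryptsetup recent enough to
ship integritysetup (device names are placeholders):

# integritysetup format /dev/sdb1           # write checksum metadata to the disk
# integritysetup open /dev/sdb1 int-sdb1    # expose a checked /dev/mapper/int-sdb1
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/mapper/int-sd{b,c,d,e}1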
I have a RAID5 array that contains 4 disks and 1 spare disk. Now I see
one disk showing signs of impending failure in the SMART log.
Better safe than sorry... replace the failing disk and resync, that's
all.
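In mdadm terms, the replacement is short. A sketch, assuming the array is
/dev/md0, the failing disk is /dev/sdc1 and its replacement shows up as
/dev/sdd1 (all placeholders):

# mdadm /dev/md0 --fail /dev/sdc1      # mark it faulty; the spare takes over
# mdadm /dev/md0 --remove /dev/sdc1    # pull it out of the array
  (swap the physical drive, partition it like the others)
# mdadm /dev/md0 --add /dev/sdd1       # the new disk becomes the fresh spare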
You might want to do cat /dev/md# > /dev/null, or cat /dev/hd?
> /dev/null first. This forces a read of every sector, so latent read
errors surface before the resync stresses the array.
Anybody tried RAID1 or RAID5 on USB2?
If so, did it crawl or was it usable?
Why not external SATA ?
After all, the cute little SATA cables are a lot better suited to this
than the old, ugly flat PATA cables...
This also raises another point, which is relevant in both cases: even
the same exact model of hard disk can have a different number of
cylinders, so if a RAID partition is created on a larger drive, it
cannot be mirrored to a smaller one.
I have a RAID5 with five 250 GB drives, but some are actually 251 GiB.
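One way to dodge that lottery is to clone the partition table of the
smallest disk onto every newcomer instead of partitioning by hand, e.g.
with sfdisk (device names are placeholders):

# sfdisk -d /dev/sda > layout.txt    # dump the layout of the existing disk
# sfdisk /dev/sdb < layout.txt       # replay it on the replacement

Leaving a few MB of slack at the end of each RAID partition also helps.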
Does SMART work for your SATA drives? Without SMART support I don't
really want to get any more SATA drives. Mine reports this:
The feature should come some day. This is quite vital for RAID arrays...
Meanwhile, I get the same error as you.
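For anyone trying this: smartmontools can usually talk to SATA disks
behind libata if you force the ATA pass-through. A sketch (the device
name is a placeholder):

# smartctl -d ata -a /dev/sda          # full SMART dump via ATA pass-through
# smartctl -d ata -t short /dev/sda    # queue a short self-test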
SATA disks
On Mon, 23 Jan 2006 09:36:54 +0100, Mitchell Laks [EMAIL PROTECTED]
wrote:
Dear Experts,
I wanted to ask about any experience with running RAID with SATA drives
and controllers under Linux.
Well, here's mine:
Maxtor SATA drives series 6V (those with 16 MB cache) are trouble with
nv_sata; details below.
What kernel are you using?
NeilBrown
Kernel version : 2.6.15-gentoo
Yes, it's strange... Not very annoying, as the rebuild is finished already
(at 40 MB/s it was short), but strange.
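Incidentally, the rebuild rate is tunable through the standard md sysctls
if 40 MB/s ever feels too slow or too aggressive (values are in KB/s):

# cat /proc/sys/dev/raid/speed_limit_min
# echo 100000 > /proc/sys/dev/raid/speed_limit_max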
While we're at it, here's a little issue I had with RAID5; not really
the fault of md, but you might want to know...
I have a 5x250GB RAID5 array for home storage (digital photos, my
losslessly ripped CDs, etc.): 1 IDE drive and 4 SATA drives.
Now, turns out one of the SATA drives is a
Hello,
I've had a lot of problems with nv_sata (nforce3) and Maxtor harddrives.
Basically it always boils down to:
Dec 24 23:04:34 apollo13 ata3: command 0x35 timeout, stat 0xd0 host_stat 0x21
Dec 24 23:04:34 apollo13 ata3: translated ATA stat/err 0x35/00 to SCSI SK/ASC/ASCQ
So far OK for a few days. The Promise cards are 54-62 dollars with 4
controllers; they work with the stock Debian 2.6.12 and 2.6.14 kernels.
The VIA controllers on the motherboard are good too - so far. I will
let you know more over time.
Well, the Linux box is holding.
The Windows
Hello!
This is my first post here, so hello to everyone!
So, I have a 1 Terabyte 5-disk RAID5 array (md) that is now dead. I'll
try to explain.
It's a bit long because I tried to be complete...
I forgot: I tried these:
mdadm --assemble /dev/md2 /dev/hdb1 /dev/sd{a,b,c,d}1
with --run, --force, and both, --stop'ping the array before each try, and
every time it's the same error, and the same line in dmesg: cannot start
dirty degraded array for md2.
And mdstat says it's inactive, while mdadm says it's active,
degraded... what's happening?
apollo13 ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 hdc7[1] hda7[0]
      6248832 blocks [2/2] [UU]
md2 : inactive sda1[0] hdb1[4] sdc1[3] sdb1[1]
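Before more drastic surgery, it is worth comparing the per-disk
superblocks to see which member is stale; a sketch with the same device
names as above:

# mdadm --examine /dev/hdb1 /dev/sd[abcd]1 | egrep 'Events|State'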
OK, I bit the bullet and removed the goto abort in raid5.c
I was then able to mount everything and recover all of my data without
any problem. Hm.
There should be a way to do this with mdadm without recompiling the
kernel, but anyway, open source saved my ass xDDD
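For the archives: there is a knob for exactly this case, so nobody else
has to patch raid5.c. A sketch, assuming a kernel new enough to know the
start_dirty_degraded parameter (same devices as above):

  (append to the kernel command line:) md-mod.start_dirty_degraded=1
# mdadm --assemble --force /dev/md2 /dev/hdb1 /dev/sd{a,b,c,d}1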