Hi list,
I use software RAID on all my TSL 2.2 systems and monitor the arrays with
cat /proc/mdstat (a rough sketch of the kind of check I mean follows the
first set of output below). I now run RAID 5 on several servers, and when
I run mdadm -D /dev/md1 it reports the array as dirty with one failed
disk, while /proc/mdstat shows a clean array:
[EMAIL PROTECTED] ~# cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md1 : active raid5 sdd5[3] sdc5[2] sdb5[1] sda5[0]
102558528 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
[EMAIL PROTECTED] ~# mdadm -D /dev/md1
/dev/md1:
Version : 00.90.00
Creation Time : Tue May 9 19:08:51 2006
Raid Level : raid5
Array Size : 102558528 (97.81 GiB 105.02 GB)
Device Size : 34186176 (32.60 GiB 35.01 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Fri May 12 22:53:18 2006
State : dirty
Active Devices : 4
Working Devices : 4
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 950f3c43:0eb24fcc:5dd13c24:bd3d7847
Events : 0.24
   Number   Major   Minor   RaidDevice State
      0       8        5        0      active sync   /dev/sda5
      1       8       21        1      active sync   /dev/sdb5
      2       8       37        2      active sync   /dev/sdc5
      3       8       53        3      active sync   /dev/sdd5
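For reference, the /proc/mdstat monitoring I mentioned is roughly the
cron-able check sketched below; the grep pattern for a failed member, the
md1 device name and mailing root are illustrative, not literally what I run:

#!/bin/sh
# Rough cron check: warn when any array in /proc/mdstat shows a failed
# member, i.e. an "_" inside the [UUUU] status field.
if grep -q '\[U*_U*\]' /proc/mdstat; then
    mdadm -D /dev/md1 | mail -s "RAID degraded on `hostname`" root
fi

A check like that stays quiet on both boxes, since /proc/mdstat shows all
members up, which is why the dirty state from mdadm -D surprised me.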
On another system I get this strange reading from mdadm -D (the array
has one spare):
Personalities : [raid5]
read_ahead 1024 sectors
md1 : active raid5 sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1] sda4[0]
1460211456 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
[EMAIL PROTECTED] ~# mdadm -D /dev/md1
/dev/md1:
Version : 00.90.00
Creation Time : Fri Jan 27 20:13:05 2006
Raid Level : raid5
Array Size : 1460211456 (1392.57 GiB 1495.26 GB)
Device Size : 243368576 (232.09 GiB 249.21 GB)
Raid Devices : 7
Total Devices : 8
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Mon Apr 24 01:56:43 2006
State : dirty
Active Devices : 7
Working Devices : 8
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 76cbb73f:6ec7a8eb:13ce394a:2397663c
Events : 0.14
   Number   Major   Minor   RaidDevice State
      0       8        4        0      active sync   /dev/sda4
      1       8       20        1      active sync   /dev/sdb4
      2       8       36        2      active sync   /dev/sdc4
      3       8       52        3      active sync   /dev/sdd4
      4       8       68        4      active sync   /dev/sde4
      5       8       84        5      active sync   /dev/sdf4
      6       8      100        6      active sync   /dev/sdg4
      7       8      116        7      spare         /dev/sdh4
It says the state is dirty, but there's nothing wrong with the array.
Is this a bug in mdadm, or are my arrays really in bad shape?
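If it helps in diagnosing this, I can also compare the kernel's runtime
view with the state recorded in the on-disk 0.90 superblock of a member,
along these lines (the partition name is just the first member of the
first box):

# state as recorded in the member's superblock
mdadm -E /dev/sda5 | grep -i state
# state as the kernel currently sees it
cat /proc/mdstat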
--
Ariën Huisken
_______________________________________________
tsl-discuss mailing list
[email protected]
http://lists.trustix.org/mailman/listinfo/tsl-discuss