What you call pathologic cases when it comes to real-world data are
very common. It is not at all unusual to find sectors filled with only
a constant (usually zero, but not always), in which case your **512
becomes **1.
Of course it would be easy to check how many of the 512 bytes are really
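As a rough sketch of what such a check could look like (the device name and
sector offset below are made up for illustration, not taken from the thread):

# Read one 512-byte sector (device and offset are only examples).
dd if=/dev/sda of=/tmp/sector.bin bs=512 count=1 skip=12345 2>/dev/null
# Count the distinct byte values in it; a constant-filled sector reports 1,
# which is why an exponent of 512 collapses to an exponent of 1 there.
od -An -tx1 -v /tmp/sector.bin | tr -s ' ' '\n' | sed '/^$/d' | sort -u | wc -l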
Neil Brown wrote:
On Saturday January 5, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED]:~# mdadm /dev/md0 --add /dev/hdb5
mdadm: Cannot open /dev/hdb5: Device or resource busy
All the solutions I've been able to find by googling fail with the same
"busy" error. There is nothing that I can find that might be using
I'm experiencing trouble when trying to add a new disk to a raid 1 array
after having replaced a faulty disk.
A few details about my configuration:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sdb3[1]
151388452 blocks super 1.0 [2/1] [_U]
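For context, the usual sequence in this situation is roughly the following
sketch; the replacement-disk and partition names (sda, sda3) are assumptions
based on the mdstat output above, not confirmed details:

# Copy the surviving disk's partition table to the replacement disk.
sfdisk -d /dev/sdb | sfdisk /dev/sda
# Add the matching partition back into the degraded mirror.
mdadm /dev/md1 --add /dev/sda3
# Watch the resync; [_U] should become [UU] once it completes.
cat /proc/mdstat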
/dev/md0 is set up as RAID0
cat /proc/mdstat shows
md0 : active raid0 sda1[0] sdd1[3] sdc1[2] sdb1[1]
157307904 blocks 64k chunks
Then sdd is removed.
But cat /proc/mdstat still shows the same information as above, while two
RAID5 devices show their sdd parts as (F)
md0 : active raid0 sda1[0]
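That is largely expected: RAID0 has no redundancy, so md has no faulty state
to move the member into the way the RAID5 arrays do, and /proc/mdstat keeps
listing the old member until I/O fails or the array is stopped. A few
commands that may help confirm what the kernel actually sees (device names
follow the example above):

# md's own view of the array and its members.
mdadm --detail /dev/md0
# This should fail if sdd really is gone from the system.
mdadm --examine /dev/sdd1
# Kernel messages about the removed disk.
dmesg | grep -i sdd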
On Jan 7, 2008 6:44 AM, Radu Rendec [EMAIL PROTECTED] wrote:
I'm experiencing trouble when trying to add a new disk to a raid 1 array
after having replaced a faulty disk.
[..]
# mdadm --version
mdadm - v2.6.2 - 21st May 2007
[..]
However, this happens with both mdadm 2.6.2 and 2.6.4. I
On Monday January 7, [EMAIL PROTECTED] wrote:
The problem is not raid, or at least not obviously raid related. The
problem is that the whole disk, /dev/hdb, is unavailable.
Maybe check /sys/block/hdb/holders ? lsof /dev/hdb ?
good luck :-)
NeilBrown
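Spelled out, those checks (plus a couple of related ones) would look roughly
like this; it is only a sketch for the hdb example above, and the
device-mapper guess is just a common cause, not something confirmed here:

# Anything in the kernel holding the whole disk shows up here.
ls /sys/block/hdb/holders/
# Any userspace process that has the disk or partition open.
lsof /dev/hdb /dev/hdb5
# Is hdb5 already claimed by some md array?
cat /proc/mdstat
# Device-mapper mappings (e.g. dmraid or LVM) can also claim a whole disk.
dmsetup ls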
On Saturday January 5, [EMAIL PROTECTED] wrote:
Hi all,
I need to monitor my RAID and if it fails, I'd like to call my-script to
deal with the failure.
I did:
mdadm --monitor --program my-script --delay 60 /dev/md1
And then, I simulate a failure with
mdadm --manage --set-faulty
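A minimal sketch of that setup for testing; the script path and the member
device below are assumptions, and per the mdadm manual the program is handed
the event name, the array, and possibly the affected component device:

# Run the monitor in the background, calling my-script on every event.
mdadm --monitor --daemonise --delay 60 --program /usr/local/bin/my-script /dev/md1
# Simulate a failure on one member to exercise the handler.
mdadm --manage /dev/md1 --set-faulty /dev/sdb3
# my-script then gets called roughly as:
#   my-script Fail /dev/md1 /dev/sdb3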