I suggest you find a SATA related mailing list to post this to (Look
in the MAINTAINERS file maybe) or post it to linux-kernel.
linux-ide couldn't help much, aside from recommending a bleeding-edge
patchset which should fix a lot of SATA issues:
http://home-tj.org/files/libata-tj-stable/
Still more problems ... :(
My md raid5 still does not always shut down cleanly. The last few
lines of the shutdown sequence are always as follows:
[...]
Will now halt.
md: stopping all md devices.
md: md0 still in use.
Synchronizing SCSI cache for disk /dev/sdd:
Synchronizing SCSI cache for
Is there no way for me to recover my data?
As I said, I have already rebuilt the RAID with the wrong disc setup... Now
I have built the RAID with the right setup, and one missing disk.
But there's no ext3 partition on it anymore. Is there any way to
search through the md0 device and
Hello, I just realized that internal bitmaps do not seem to work
anymore.
kernel 2.6.17
mdadm 2.5.2
[EMAIL PROTECTED] ~]# mdadm --create --level=1 -n 2 -e 1 --bitmap=internal
/dev/md100 /dev/sda1 /dev/sda2
mdadm: array /dev/md100 started.
... wait awhile ...
[EMAIL PROTECTED] ~]# cat
My md raid5 still does not always shut down cleanly. The last few
lines of the shutdown sequence are always as follows:
[...]
Will now halt.
md: stopping all md devices.
md: md0 still in use.
Synchronizing SCSI cache for disk /dev/sdd:
Synchronizing SCSI cache for disk /dev/sdc:
Maybe your shutdown script is doing halt -h? Could halting the disks
immediately, without letting the RAID settle to a clean state,
be the cause?
I'm using Debian as well and my halt script has the fragment you posted.
Besides, shouldn't the array be marked clean at this point:
md: stopping
Currently I have 4 discs on a 4 channel sata controller which does its job
quite well for 20 bucks.
Now, if I wanted to grow the array I'd probably go for another one of these.
How can I tell if the discs on the new controller will become sd[e-h] or if
they'll be the new a-d and push the
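One way to sidestep the device-lettering question entirely is to identify array members by the UUID in their md superblocks rather than by /dev/sdX names, which can reshuffle when a controller is added. A hedged sketch (the device name and UUID below are made up for illustration, not taken from the thread):

```shell
# Device letters can change when a second controller is added, so
# identify members by superblock UUID instead of by sdX name.
# Read the Array UUID out of a member's superblock (hypothetical device):
mdadm --examine /dev/sde1

# Assemble by UUID; mdadm finds the members whatever the kernel named them
# (the UUID here is a placeholder):
mdadm --assemble /dev/md0 --uuid=c3f5610b:8b5e9cea:13f0466c:1c1f0768

# Or put a UUID-keyed ARRAY line in mdadm.conf and simply run:
mdadm --assemble --scan
```

These commands need root and real member devices, so treat this as a sketch of the approach rather than a paste-ready recipe.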
From: Christian Pernegger [EMAIL PROTECTED]
Date: Thu, Jul 06, 2006 at 07:18:06PM +0200
Maybe your shutdown script is doing halt -h? Could halting the disks
immediately, without letting the RAID settle to a clean state,
be the cause?
I'm using Debian as well and my halt script has the
I get these messages too on Debian Unstable, but since enabling the
bitmaps on my devices, resyncing is so fast that I don't even notice it
on booting.
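For reference, a write-intent bitmap can also be added to an existing array after the fact; a minimal sketch, assuming an array at /dev/md0 and a reasonably recent mdadm (this is not part of the original message):

```shell
# Add an internal write-intent bitmap to a running array (needs root):
mdadm --grow --bitmap=internal /dev/md0

# Verify: /proc/mdstat should now show a "bitmap:" line for md0.
cat /proc/mdstat

# The bitmap can be removed again later with:
mdadm --grow --bitmap=none /dev/md0
```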
Bitmaps are great, but the speed of the rebuild is not the problem.
The box doesn't have hotswap bays, so I have to shut it down to
replace a
Dexter Currently I have 4 discs on a 4 channel sata controller which
Dexter does its job quite well for 20 bucks. Now, if I wanted to
Dexter grow the array I'd probably go for another one of these.
So, which SATA controller are you using? I'm thinking my next box
will go SATA, but I'm still
On Thursday July 6, [EMAIL PROTECTED] wrote:
I suggest you find a SATA related mailing list to post this to (Look
in the MAINTAINERS file maybe) or post it to linux-kernel.
linux-ide couldn't help much, aside from recommending a bleeding-edge
patchset which should fix a lot of things
On Thursday July 6, [EMAIL PROTECTED] wrote:
Still more problems ... :(
My md raid5 still does not always shut down cleanly. The last few
lines of the shutdown sequence are always as follows:
[...]
Will now halt.
md: stopping all md devices.
md: md0 still in use.
Synchronizing SCSI
On Thursday July 6, [EMAIL PROTECTED] wrote:
hello, i just realized that internal bitmaps do not seem to work
anymore.
I cannot imagine why. Nothing you have listed shows anything wrong
with md...
Maybe you were expecting
mdadm -X /dev/md100
to do something useful. Like -E, -X must be
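To illustrate the point about -X: like -E, it examines a superblock on a member device, so it is pointed at a component, not at the assembled array. A sketch, reusing the component devices from the earlier mdadm --create example:

```shell
# Examine the internal bitmap via a component device, not the array:
mdadm -X /dev/sda1
mdadm -X /dev/sda2

# The overall sync/bitmap state is also visible in /proc/mdstat:
cat /proc/mdstat
```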
On Thursday July 6, [EMAIL PROTECTED] wrote:
Currently I have 4 discs on a 4 channel sata controller which does its job
quite well for 20 bucks.
Now, if I wanted to grow the array I'd probably go for another one of these.
How can I tell if the discs on the new controller will become
Neil,
First off, thanks for all your hard work on this software, it's really
a great thing to have.
But I've got some interesting issues here, though nothing urgent. As
I've said in other messages, I've got a pair of 120GB HDs mirrored.
I'm using MD across partitions, /dev/hde1 and /dev/hdg1.
md is very dependent on the driver doing the right thing. It doesn't
do any timeouts or anything like that - it assumes the driver will.
md simply trusts the return status from the drive, and fails a drive
if and only if a write to the drive is reported as failing (if a read
fails, md tries to
On Thursday July 6, [EMAIL PROTECTED] wrote:
Neil,
First off, thanks for all your hard work on this software, it's really
a great thing to have.
But I've got some interesting issues here, though nothing urgent. As
I've said in other messages, I've got a pair of 120GB HDs mirrored.
I'm
Perhaps I am misunderstanding how assemble works, but I have created a
new RAID 1 array on a pair of SCSI drives and am having difficulty
re-assembling it after a reboot.
The relevant mdadm.conf entry looks like this:
ARRAY /dev/md3 level=raid1 num-devices=2
On Friday July 7, [EMAIL PROTECTED] wrote:
Perhaps I am misunderstanding how assemble works, but I have created a
new RAID 1 array on a pair of SCSI drives and am having difficulty
re-assembling it after a reboot.
The relevant mdadm.conf entry looks like this:
ARRAY /dev/md3
How are you shutting down the machine? Is something sending SIGKILL
to all processes?
First SIGTERM, then SIGKILL, yes.
You could try the following patch. I think it should be safe.
Hmm, it said chunk failed, so I replaced the line by hand. That didn't
want to compile because mode
Neil Brown wrote:
Add
DEVICE /dev/sd?
or similar on a separate line.
Remove
devices=/dev/sdc,/dev/sdd
Thanks.
My mistake, I thought after having assembled the arrays initially, that
the output of:
mdadm --detail --scan > mdadm.conf
could be used directly.
I'm using Centos 4.3, which
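Putting Neil's two corrections together, a minimal working mdadm.conf would look something like the fragment below (a sketch; the UUID is a placeholder for whatever `mdadm --detail --scan` actually prints for the array):

```
# A DEVICE line telling mdadm where to look for members,
# and an ARRAY line without a hard-coded devices= list:
DEVICE /dev/sd?*
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
```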
I created a raid1 array using /dev/disk/by-id with (2) 250GB USB 2.0
Drives. It was working for about 2 minutes until I tried to copy a
directory tree from one drive to the array and then cancelled it
midstream. After cancelling the copy, when I list the contents of the
directory it doesn't