On Friday July 7, [EMAIL PROTECTED] wrote:
How are you shutting down the machine? Is something sending SIGKILL
to all processes?
First SIGTERM, then SIGKILL, yes.
That really should cause the array to be clean. Once the md thread
gets SIGKILL (it ignores SIGTERM) it will mark the array
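A quick way to confirm the behaviour described above is to check the array state after a reboot; a minimal sketch, assuming an array at /dev/md0 (run as root, device name is a placeholder):

```shell
# If the md thread marked the array clean on SIGKILL, the state should
# read "clean" and no resync should be in progress after boot.
mdadm --detail /dev/md0 | grep -i 'state'
cat /proc/mdstat    # a dirty shutdown would show a "resync" progress line here
```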
On Fri, Jul 07, 2006 at 08:16:18AM +1000, Neil Brown wrote:
On Thursday July 6, [EMAIL PROTECTED] wrote:
hello, i just realized that internal bitmaps do not seem to work
anymore.
I cannot imagine why. Nothing you have listed shows anything wrong
with md...
Maybe you were expecting
mdadm -X
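For reference, `mdadm -X` inspects the write-intent bitmap on a *member* device, not on the array itself; a sketch with a hypothetical member partition:

```shell
# Dump the internal bitmap superblock from one component device.
mdadm -X /dev/sdf1
# The array-level bitmap status also shows up in /proc/mdstat, e.g.:
#   bitmap: 0/149 pages [0KB], 1024KB chunk
```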
On Fri, 7 Jul 2006, Neil Brown wrote:
On Thursday July 6, [EMAIL PROTECTED] wrote:
I suggest you find a SATA related mailing list to post this to (Look
in the MAINTAINERS file maybe) or post it to linux-kernel.
linux-ide couldn't help much, aside from recommending a bleeding-edge
patchset
Good morning!
That patch was against the latest -mm. For earlier kernels you want to
test 'ro'.
Ok. Was using stock 2.6.17.
Done unmounting local file systems.
md: md0 stopped.
md: unbind<sdf>
md: export_rdev(sdf)
[last two lines repeat for each disk.]
Stopping RAID arrays ... done (1
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
My RAID-5 array is composed of six USB drives. Unfortunately, my
Ubuntu Dapper system doesn't always assign the same devices to the
drives after a reboot. However, mdadm doesn't seem to like having an
mdadm.conf that doesn't have a Devices line with specified device
names.
Any way to set up an
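The usual answer to shifting device names is to identify the array by UUID rather than by device list; a sketch of such an mdadm.conf (the UUID below is a placeholder — read the real one with `mdadm --examine --scan` or `mdadm --detail /dev/md0`):

```shell
# mdadm.conf fragment: scan all partitions, assemble by UUID,
# so it no longer matters which sd* letters the USB drives get.
DEVICE partitions
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
```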
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
Device Size :
On Thu, Jul 06, 2006 at 08:12:14PM +0200, Dexter Filmore wrote:
How can I tell if the discs on the new controller will become sd[e-h] or if
they'll be the new a-d and push the existing ones back?
If they are the same type (or more precisely, if they use the same
driver), then their order on
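Independent of probe order, udev's persistent links can tell you which physical disk is which; a sketch (paths under /dev/disk/by-id are examples of the naming scheme, not literal output from this system):

```shell
# Serial-number-based symlinks survive controller reordering, so you can
# map each physical disk to its current sd* name after every boot.
ls -l /dev/disk/by-id/
# e.g. scsi-SATA_ST3400832AS_5NF0XXXX -> ../../sde
```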
I'm just in the process of upgrading the RAID-1 disks in my server, and have
started to experiment with the RAID-1 --grow command. The first phase of the
change went well, I added the new disks to the old arrays and then increased the
size of the arrays to include both the new and old disks.
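The first phase described above can be sketched as follows (device and array names are hypothetical):

```shell
# Add the new disks as spares, then grow the mirror so they become
# active members alongside the old disks.
mdadm /dev/md1 --add /dev/sdc1 /dev/sdd1
mdadm --grow /dev/md1 --raid-devices=4
cat /proc/mdstat    # watch the recovery onto the new disks
```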
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jun 30 09:17:12 2006
On Fri, 2006-07-07 at 00:29 +0200, Christian Pernegger wrote:
I don't know exactly how the driver was responding to the bad cable,
but it clearly wasn't returning an error, so md didn't fail it.
There were a lot of errors in dmesg -- seems like they did not get
passed up to md? I find it
On Sat, 8 Jul 2006, Reuben Farrelly wrote:
I'm just in the process of upgrading the RAID-1 disks in my server, and have
started to experiment with the RAID-1 --grow command. The first phase of the
change went well, I added the new disks to the old arrays and then increased
the size of the
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D
It seems like it really isn't an md issue -- when I remove everything
to do with evms (userspace tools + initrd hooks) everything works
fine.
I took your patch back out and put a few printks in there ...
Without evms the active counter is 1 in an idle state, i.e. after the box
has finished
On 8/07/2006 6:52 a.m., Justin Piszcz wrote:
Reuben,
What chunk size did you use?
I can't even get mine to get past this part:
p34:~# mdadm /dev/md3 --grow --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on
On Friday July 7, [EMAIL PROTECTED] wrote:
Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes. Needed 512
Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28
So the RAID5 reshape only works if you use a 128kb or smaller
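One workaround, assuming the running kernel exposes the raid5 stripe cache through sysfs (it does in this era's -mm kernels, but check before relying on it), is to enlarge the cache to cover the larger chunk before retrying the reshape:

```shell
# 512 matches the "Needed 512" figure from the kernel log above.
# Path and attribute are the md sysfs interface; requires root.
echo 512 > /sys/block/md3/md/stripe_cache_size
mdadm /dev/md3 --grow --raid-disks=7
```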
On Saturday July 8, [EMAIL PROTECTED] wrote:
I'm just in the process of upgrading the RAID-1 disks in my server, and have
started to experiment with the RAID-1 --grow command. The first phase of the
change went well, I added the new disks to the old arrays and then increased
the
size of
On Friday July 7, [EMAIL PROTECTED] wrote:
My RAID-5 array is composed of six USB drives. Unfortunately, my
Ubuntu Dapper system doesn't always assign the same devices to the
drives after a reboot. However, mdadm doesn't seem to like having an
mdadm.conf that doesn't have a Devices line with
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes. Needed 512
Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28
So the RAID5 reshape only works if
On Friday July 7, [EMAIL PROTECTED] wrote:
Hey! You're awake :)
Yes, and thinking about breakfast (it's 8:30am here).
I am going to try it with just 64kb to prove to myself it works with that,
but then I will re-create the raid5 again like I had it before and attempt
it again, I did
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
Hey! You're awake :)
Yes, and thinking about breakfast (it's 8:30am here).
I am going to try it with just 64kb to prove to myself it works with that,
but then I will re-create the raid5 again like I had
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
I guess one has to wait until the reshape is complete before growing the
filesystem..?
Yes. The extra space isn't available until the reshape has completed
(if it was available earlier, the reshape wouldn't
On Friday July 7, [EMAIL PROTECTED] wrote:
I guess one has to wait until the reshape is complete before growing the
filesystem..?
Yes. The extra space isn't available until the reshape has completed
(if it was available earlier, the reshape wouldn't be necessary)
NeilBrown
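In practice the sequence is: wait for the reshape to finish, then grow the filesystem; a sketch assuming an ext3 filesystem on /dev/md3:

```shell
# Confirm the reshape is done (no "reshape" progress line remains),
# then expand the filesystem into the newly available space.
grep -A 2 '^md3' /proc/mdstat
resize2fs /dev/md3
```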
On 8/07/2006 10:12 a.m., Neil Brown wrote:
On Saturday July 8, [EMAIL PROTECTED] wrote:
And lastly, I felt brave and decided to plunge ahead, resize to 128 blocks
smaller than the device size. mdadm --grow /dev/md1 --size=
The kernel then went like this:
md: couldn't update array info.