Jeff Garzik writes:
Promise just gave permission to post the docs for their PDC20621 (i.e.
SX4) hardware:
http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-1.2.pdf.bz2
joining the existing PDC20621 DIMM and PLL docs:
Hi mdadm raid gurus,
I wanted to make a raid1 array, but at the moment I have only 1 drive
available. The other disk is
in the mail. I wanted to make a raid1 that I will use as a backup.
But I need to do the backup now, before the second drive comes.
So I did this.
formatted /dev/sda creating
Quoting Mitchell Laks [EMAIL PROTECTED]:
Hi mdadm raid gurus,
I wanted to make a raid1 array, but at the moment I have only 1
drive available. The other disk is
in the mail. I wanted to make a raid1 that I will use as a backup.
But I need to do the backup now, before the second drive
I think my error was that maybe I did not
write the fdisk changes to the drive with
fdisk's w command
so I did
fdisk /dev/sda
p
then
w
and then when I did
mdadm -C /dev/md0 --level=1 -n2 /dev/sda1 missing
it worked and set up the array.
Thanks for being there!
Mitchell
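For anyone following the thread, the whole sequence can be sketched as below (device names are examples; note RAID1 is --level=1):

```shell
# Create a degraded RAID1 with the literal word "missing" standing in
# for the absent second member:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# ... do the backup onto /dev/md0 now ...

# When the second drive arrives, partition it identically and hot-add
# it; md then rebuilds the mirror onto the new member:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # watch the resync progress
```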
Thanks for the response Bill. Neil has responded to me a few times, but
I'm more than happy to try and keep it on this list instead as it feels
like I'm badgering Neil which really isn't fair...
Since my initial email, I got to the point of believing it was down to
the superblock, and that
Question: with the same number of physical drives, do I get better
performance with one large md-based drive, or do I get better
performance if I have several smaller md-based drives?
Situation: dual CPU, 4 drives (which I will set up as RAID-1 after being
terrorized by the anti-RAID-5
Let's assume that I have 4 drives; they are set up in mirrored pairs as
RAID 1, and then aggregated together to create a RAID 10 system (RAID 1
followed by RAID 0). That is, 4 x N disks become a 2N size filesystem.
Question: Is this higher or lower performance than using LVM to
aggregate the
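The two layouts being compared can be sketched roughly as follows (a hedged sketch only; device names, stripe size, and volume names are made up for illustration):

```shell
# Option A: a single md RAID10 across all four drives:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Option B: two RAID1 pairs, aggregated with LVM instead of RAID0:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md1 /dev/md2
vgcreate vg0 /dev/md1 /dev/md2
lvcreate -i 2 -I 64 -l 100%FREE -n data vg0  # -i 2 stripes over both PVs
```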
On Sat Jan 19, 2008 at 11:08:43PM -, Steve Fairbairn wrote:
Hi All,
I have a Software RAID 5 device configured, but one of the drives
failed. I removed the drive with the following command...
mdadm /dev/md0 --remove /dev/hdc1
Now, when I try to insert the replacement drive back
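The usual replacement procedure, assuming the new disk takes the old one's place as /dev/hdc and has been partitioned the same way (a sketch, not a prescription):

```shell
# Hot-add the replacement partition; md starts rebuilding onto it:
mdadm /dev/md0 --add /dev/hdc1
mdadm --detail /dev/md0    # new member should show as rebuilding
cat /proc/mdstat           # resync progress
```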
-----Original Message-----
From: Neil Brown [mailto:[EMAIL PROTECTED]
Sent: 20 January 2008 20:37
md: hdd1 has invalid sb, not importing!
md: md_import_device returned -22
In 2.6.18, the only thing that can return this message
without other more explanatory messages are:
2/ If
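When the -22 (EINVAL) comes from stale or foreign metadata on the replacement drive, clearing its superblock before re-adding often resolves it. A sketch, assuming /dev/hdd1 is the intended replacement and carries nothing you need:

```shell
# Destroys any md metadata on hdd1 -- only if the partition is expendable:
mdadm --zero-superblock /dev/hdd1
mdadm /dev/md0 --add /dev/hdd1
```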
On Sun, Jan 20, 2008 at 02:24:46PM -0600, Moshe Yudkowsky wrote:
Question: with the same number of physical drives, do I get better
performance with one large md-based drive, or do I get better
performance if I have several smaller md-based drives?
No expert here, but my opinion:
- md
Moshe Yudkowsky wrote:
Question: with the same number of physical drives, do I get better
performance with one large md-based drive, or do I get better
performance if I have several smaller md-based drives?
Situation: dual CPU, 4 drives (which I will set up as RAID-1 after
being terrorized
Mitchell Laks wrote:
I think my error was that maybe I did not
write the fdisk changes to the drive with
fdisk's w command
No - your problem was that you needed to use the literal word missing
like you did this time:
mdadm -C /dev/md0 --level=1 -n2 /dev/sda1 missing
[however, this time you also
Mitchell Laks wrote:
Hi mdadm raid gurus,
I wanted to make a raid1 array, but at the moment I have only 1 drive
available. The other disk is
in the mail. I wanted to make a raid1 that I will use as a backup.
But I need to do the backup now, before the second drive comes.
So I did this.
I've got a raid5 array with 5 disks where 2 failed. The failures are
occasional and only on a few sectors so I tried to assemble it with 4
disks anyway:
# mdadm -A -f -R /dev/mdnumber /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4
However mdadm complains that one of the disks has an out-of-date
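One way to see which member is stale before forcing assembly (device names follow the placeholders above):

```shell
# Compare event counters and update times across the members:
mdadm --examine /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 | \
    grep -E 'Events|Update Time'

# --force lets mdadm bump a slightly out-of-date superblock so the
# array can be started; -R runs it even though it is degraded:
mdadm --assemble --force --run /dev/mdnumber \
    /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4
```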
A raid6 array with a spare and bitmap is idle: not mounted and with no
IO to it or any of its disks (obviously), as shown by iostat. However
it's consuming cpu: since reboot it used about 11min in 24h, which is quite
a lot even for a busy array (the cpus are fast). The array was cleanly
shutdown
On Sunday January 20, [EMAIL PROTECTED] wrote:
I've got a raid5 array with 5 disks where 2 failed. The failures are
occasional and only on a few sectors so I tried to assemble it with 4
disks anyway:
# mdadm -A -f -R /dev/mdnumber /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4
However mdadm
On Sunday January 20, [EMAIL PROTECTED] wrote:
A raid6 array with a spare and bitmap is idle: not mounted and with no
IO to it or any of its disks (obviously), as shown by iostat. However
it's consuming cpu: since reboot it used about 11min in 24h, which is quite
a lot even for a busy array
Neil Brown ([EMAIL PROTECTED]) wrote on 21 January 2008 12:15:
On Sunday January 20, [EMAIL PROTECTED] wrote:
A raid6 array with a spare and bitmap is idle: not mounted and with no
IO to it or any of its disks (obviously), as shown by iostat. However
it's consuming cpu: since reboot it used
Neil Brown ([EMAIL PROTECTED]) wrote on 21 January 2008 12:13:
On Sunday January 20, [EMAIL PROTECTED] wrote:
I've got a raid5 array with 5 disks where 2 failed. The failures are
occasional and only on a few sectors so I tried to assemble it with 4
disks anyway:
# mdadm -A -f -R
Bill Davidsen wrote:
One partitionable RAID-10, perhaps, then partition as needed. Read the
discussion here about performance of LVM and RAID. I personally don't do
LVM unless I know I will have to have great flexibility of configuration
and can give up performance to get it. Other report
On Monday January 21, [EMAIL PROTECTED] wrote:
The command is
mdadm -A --verbose -f -R /dev/md3 /dev/sda4 /dev/sdc4 /dev/sde4 /dev/sdd4
The failed areas are sdb4 (which I didn't include above) and sdd4. I
did a dd if=/dev/sdb4 of=/dev/hda4 bs=512 conv=noerror and it
complained about
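A caution on that dd invocation: with conv=noerror alone, dd skips unreadable blocks entirely, so every byte after the first bad sector lands at the wrong offset in the copy. Padding failed reads keeps the geometry intact, and GNU ddrescue, if available, handles retries and logging better. A sketch using the same devices:

```shell
# "sync" pads each failed read with zeros so offsets stay aligned:
dd if=/dev/sdb4 of=/dev/hda4 bs=512 conv=noerror,sync

# Alternatively, GNU ddrescue tracks bad regions in a log file and can
# be re-run later to retry just those regions:
ddrescue /dev/sdb4 /dev/hda4 rescue.log
```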
Thanks for the tips, and in particular:
Iustin Pop wrote:
- if you download torrents, fragmentation is a real problem, so use a
filesystem that knows how to preallocate space (XFS and maybe ext4;
for XFS use xfs_io to set a bigger extend size for where you
download)
That's a
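For the XFS suggestion above, the extent-size hint is set with xfs_io; a sketch (directory path and hint size are examples):

```shell
# New files under the directory inherit the hint, so XFS allocates
# their space in larger contiguous extents:
xfs_io -c 'extsize 8m' /path/to/downloads
xfs_io -c 'extsize' /path/to/downloads    # print the current hint
```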
Neil Brown ([EMAIL PROTECTED]) wrote on 21 January 2008 14:09:
As you note, sda4 says that it thinks slot 1 is still active/sync, but
it doesn't seem to know which device should go there either.
However that does indicate that slot 3 failed first and slot 1 failed
later. So if we have