On Friday June 23, [EMAIL PROTECTED] wrote:
Why would you ever want to reduce the size of a raid5 in this way?
A feature that would have been useful to me a few times is the ability
to shrink an array by whole disks.
Example:
8x 300 GB disks - 2400 GB raw, 2100 GB usable as RAID5
shrink filesystem
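mdadm did eventually grow this ability. A minimal sketch of shrinking the example array from 8 to 7 disks, assuming a much newer mdadm than was current in this thread and ext3 on /dev/md0; all sizes are illustrative, not from the post:

  resize2fs /dev/md0 1700G    # shrink the filesystem first, below the target size
  mdadm --grow /dev/md0 --array-size=1800G    # clamp the array to 7-disk capacity
  fsck -n /dev/md0    # read-only check that the data survived the clamp
  mdadm --grow /dev/md0 --raid-devices=7 --backup-file=/root/md0-grow.bak

When the reshape finishes, the surplus disk drops to spare and can be removed with mdadm --remove.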
On Sun, 25 Jun 2006, Chris Allen wrote:
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions
each of 3TB. This way I can choose between XFS and EXT3 later on.
So now, my options are the following:
1. Single 12TB /dev/md0, partitioned into four
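A 12TB device is past the 2TB limit of a DOS partition table, so splitting it needs a GPT label. A minimal sketch with GNU parted, assuming a kernel that allows partitions on md devices; the percentage boundaries are illustrative:

  parted -s /dev/md0 mklabel gpt
  parted -s /dev/md0 mkpart primary 0% 25%
  parted -s /dev/md0 mkpart primary 25% 50%
  parted -s /dev/md0 mkpart primary 50% 75%
  parted -s /dev/md0 mkpart primary 75% 100%

Each quarter comes out near 3TB, matching the plan in the post.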
On Sun, 25 Jun 2006, Bill Davidsen wrote:
Justin Piszcz wrote:
On Sat, 24 Jun 2006, Neil Brown wrote:
On Friday June 23, [EMAIL PROTECTED] wrote:
The problem is that there is no cost effective backup available.
One-liner questions:
- How does Google make backups?
No, Google ARE
Gordon Henderson wrote:
I use option 2 (above) all the time, and I've never noticed any
performance issues (nor issues with recovery after a power failure). I'd
like to think that on a modern processor the CPU can handle the parity,
etc. calculations several orders of magnitude faster than the
This is shrinking an array by removing drives. We were talking about
shrinking an array by reducing the size of drives - a very different
thing.
Yes I know - I just wanted to get this in as an alternative shrinking semantic.
As for reducing the RAID (partition) size on the individual drives I
As Christian said, specific error messages help a lot.
Assume the two devices are hdc and hde,
fdisk -l /dev/hdc
fdisk -l /dev/hde
mdadm -E /dev/hdc
mdadm -E /dev/hde
and my best guess:
mdadm --build /dev/md0 --level linear --raid-disks 2 /dev/hdc /dev/hde
fsck -n /dev/md0
(and
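Since --build writes no superblocks and fsck -n is read-only, the probe is safe to repeat with the devices in the other order if the first guess looks wrong; a sketch, not from the original mail:

  mdadm -S /dev/md0
  mdadm --build /dev/md0 --level linear --raid-disks 2 /dev/hde /dev/hdc
  fsck -n /dev/md0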
Adam Talbot wrote:
Not exactly sure how to tune for stripe size.
What would you advise?
-Adam
See the -R option of mke2fs. I don't have a number for the performance
impact of this, but I bet someone else on the list will. Depending on
what posts you read, reports range from measurable
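-R takes stride=N, where the stride is the RAID chunk size expressed in filesystem blocks. A sketch assuming a 64k chunk and 4k ext3 blocks; the numbers are illustrative:

  # stride = chunk / block = 64k / 4k = 16
  mke2fs -j -b 4096 -R stride=16 /dev/md0

Newer e2fsprogs spell the same thing as mke2fs -E stride=16.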
Ronald Lembcke wrote:
Hi!
I set up a RAID5 array of 4 disks. I initially created a degraded array
and added the fourth disk (sda1) later.
The array is clean, but when I do
mdadm -S /dev/md0
mdadm --assemble /dev/md0 /dev/sd[abcd]1
it won't start. It always says sda1 is failed.
When I
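A usual first step in this situation is to compare what the superblocks say about each member; a sketch of generic diagnostics, not the resolution of this thread:

  mdadm -E /dev/sd[abcd]1 | grep -E 'Events|State'
  # if sda1 only lags in event count, force-assembly re-accepts it
  mdadm --assemble --force /dev/md0 /dev/sd[abcd]1

--force lets mdadm assemble even when the event counters disagree.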
Mr. James W. Laferriere wrote:
Hello Gabor ,
On Tue, 20 Jun 2006, Gabor Gombas wrote:
On Tue, Jun 20, 2006 at 03:08:59PM +0200, Niccolo Rigacci wrote:
Do you know if it is possible to switch the scheduler at runtime?
echo cfq > /sys/block/<disk>/queue/scheduler
At least one can
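Reading the same sysfs file lists the compiled-in schedulers, with the active one in brackets. A sketch for a disk named hda; the name is illustrative:

  cat /sys/block/hda/queue/scheduler
  # noop anticipatory deadline [cfq]
  echo deadline > /sys/block/hda/queue/scheduler

The switch takes effect immediately and lasts until reboot.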
I managed to get the hard disk of the retired system and this is
its raid-related boot log:
md: Autodetecting RAID arrays.
[events: 004d]
[events: 004d]
md: autorun ...
md: considering hdb1 ...
md: adding hdb1 ...
md: adding hdc1 ...
md: created md0
md: bind<hdc1,1>
md: bind<hdb1,2>
This is what I get now, after creating /dev/hdb1 and /dev/hdc1 with fdisk
as Linux raid autodetect partitions:
mdadm -E /dev/hdb1
/dev/hdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a7e90d4b:f347bd0e:07ebf941:e718f695
  Creation Time : Wed Mar 16 18:14:25 2005
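A quick sanity check is whether both members carry the same array UUID; a sketch:

  mdadm -E /dev/hdb1 /dev/hdc1 | grep UUID

Matching UUIDs mean the two superblocks describe the same array.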
We can now make the following variables static:
- drivers/md/md.c: mdp_major
- init/main.c: envp_init[]
Signed-off-by: Adrian Bunk [EMAIL PROTECTED]
---
This patch was already sent on:
- 16 May 2006
 drivers/md/md.c |    2 +-
 init/main.c     |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
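Whether a symbol really became file-local can be read off the object file; a sketch, assuming a built tree:

  nm drivers/md/md.o | grep mdp_major
  # a lowercase type letter (b/d) means file-local after the patch; uppercase meant global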
Neil Brown wrote:
snip
Alternately you can apply the following patch to the kernel and
version-1 superblocks should work better.
-stable material?
On Monday June 26, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
snip
Alternately you can apply the following patch to the kernel and
version-1 superblocks should work better.
-stable material?
Maybe. I'm not sure it exactly qualifies, but I might try sending it
to them and see what they
On Monday June 26, [EMAIL PROTECTED] wrote:
This is what I get now, after creating /dev/hdb1 and /dev/hdc1 with fdisk
as Linux raid autodetect partitions:
So I'm totally confused now.
You said it was 'linear', but the boot log showed 'raid0'.
The drives didn't have a partition table
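The level recorded on disk should settle the linear-versus-raid0 question; a sketch:

  mdadm -E /dev/hdb1 | grep 'Raid Level'
  mdadm -E /dev/hdc1 | grep 'Raid Level'

-E reads the on-disk superblock, so it reports what the array was created as, not what any config file claims.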