personally, I don't see any point in worrying about the default, whether at
compile time or boot time:
for f in `find /sys/block/* -name scheduler`; do echo cfq > $f; done
I tested this case:
- reboot as if after a power failure (RAID goes dirty)
- RAID starts resyncing as soon as the kernel assembles it
-
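Written out with the redirection made explicit (angle brackets are often eaten
when mail is converted to HTML), plus a verification step; a sketch assuming a
2.6-era kernel with sysfs mounted:

```shell
# Set cfq as the scheduler for every block device (e.g. from rc.local).
# Without the '>' redirect the loop only prints, it changes nothing.
for f in /sys/block/*/queue/scheduler; do
    echo cfq > "$f"
done
# Verify: the active scheduler is shown in square brackets.
cat /sys/block/*/queue/scheduler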
On Monday June 19, [EMAIL PROTECTED] wrote:
Neil hello
if I am not mistaken here:
in the first instance of: if (bi) ...
...
you return without setting it to NULL
Yes, you are right. Thanks.
And fixing that bug removes the crash.
However
I've been doing a few tests and
Christian Pernegger wrote:
Intel SE7230NH1-E mainboard
Pentium D 930
HPA recently said that x86_64 CPUs have better RAID5 performance.
Promise Ultra133 TX2 (2ch PATA)
- 2x Maxtor 6B300R0 (300GB, DiamondMax 10) in RAID1
Onboard Intel ICH7R (4ch SATA)
- 4x Western Digital WD5000YS
Molle Bestefich wrote:
Christian Pernegger wrote:
Intel SE7230NH1-E mainboard
Pentium D 930
HPA recently said that x86_64 CPUs have better RAID5 performance.
Actually, anything with SSE2 should be OK.
-hpa
Dear All,
I have a Linux storage server containing 16x750GB drives - so 12TB raw
space.
If I make them into a single RAID5 array, then it appears my only
choice for a filesystem is XFS - as EXT3 won't really handle partitions
over 8TB.
Alternatively, I could split each drive into 2
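The single-array option sketched as commands; device names and the mount point
are illustrative assumptions, and this is not a recommendation from the thread:

```shell
# Hypothetical layout: one md RAID-5 across all 16 drives, with XFS on
# top, since ext3 of this era tops out around 8TB.
mdadm --create /dev/md0 --level=5 --raid-devices=16 /dev/sd[b-q]
mkfs.xfs /dev/md0
mount /dev/md0 /srv/storage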
On Thu, 22 Jun 2006, Chris Allen wrote:
Dear All,
I have a Linux storage server containing 16x750GB drives - so 12TB raw
space.
Just one thing - do you want to use RAID-5 or RAID-6?
I just ask, as with that many drives (and that much data!) the
possibility of a 2nd drive failure is
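Gordon's worry can be put into rough numbers. A back-of-the-envelope sketch;
the 3% annual failure rate and one-day rebuild window are illustrative
assumptions, not figures from the thread:

```shell
awk 'BEGIN {
    afr = 0.03           # assumed annual failure rate per drive
    rebuild_days = 1     # assumed rebuild time for one 750GB member
    n = 15               # surviving members of a 16-drive RAID-5
    p_one = afr * rebuild_days / 365     # one given drive fails mid-rebuild
    p_any = 1 - (1 - p_one) ^ n          # at least one of the 15 does
    printf "P(2nd failure during rebuild) ~ %.4f\n", p_any
}'
```

Even with these mild assumptions the risk is non-trivial per rebuild, which is
the usual argument for RAID-6 at this drive count.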
H. Peter Anvin wrote:
Gordon Henderson wrote:
On Thu, 22 Jun 2006, Chris Allen wrote:
Dear All,
I have a Linux storage server containing 16x750GB drives - so 12TB raw
space.
Just one thing - do you want to use RAID-5 or RAID-6?
I just ask, as with that many drives (and that much data!)
Pentium D 930
HPA recently said that x86_64 CPUs have better RAID5 performance.
Good to know. I did intend to use Debian-amd64 anyway.
Is it a NAS kind of device?
Yes, mostly. It also runs a caching NNTP proxy and drives our
networked audio players :)
Personal file server describes it
Hi Dean
Thanks a lot for sharing this.
I don't quite understand these 2 commands. Why would we want to add a
pre-failing disk back to md4?
mdadm --zero-superblock /dev/sde1
mdadm /dev/md4 -a /dev/sde1
Ming
On Sun, 2006-04-23 at 18:40 -0700, dean gaudet wrote:
i had a disk in a raid5 which
well that part is optional... i wasn't replacing the disk right away
anyhow -- it had just exhibited its first surface error during SMART and i
thought i'd try moving the data elsewhere just for the experience of it.
-dean
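The two commands Ming asked about are the tail end of a longer swap procedure.
A sketch of the whole sequence, assuming the suspect member is /dev/sde1 in
/dev/md4 as in the thread; the --fail/--remove steps are my reconstruction,
not dean's literal commands:

```shell
mdadm /dev/md4 --fail /dev/sde1      # mark the suspect member faulty
mdadm /dev/md4 --remove /dev/sde1    # take it out of the array
# ... test the drive, or physically swap it, as needed ...
mdadm --zero-superblock /dev/sde1    # wipe the old md metadata so it
                                     # rejoins as a fresh disk, not a
                                     # stale member
mdadm /dev/md4 -a /dev/sde1          # add it back; md resyncs onto it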
On Thu, 22 Jun 2006, Ming Zhang wrote:
Hi Dean
Thanks a lot for
ic. thx for clarifying.
ming
On Thu, 2006-06-22 at 17:09 -0700, dean gaudet wrote:
well that part is optional... i wasn't replacing the disk right away
anyhow -- it had just exhibited its first surface error during SMART and i
thought i'd try moving the data elsewhere just for the
Neil Brown wrote:
On Monday June 19, [EMAIL PROTECTED] wrote:
Hi,
I'd like to shrink the size of a RAID5 array - is this
possible? My first attempt shrinking 1.4TB to 600GB,
mdadm --grow /dev/md5 --size=629145600
gives
mdadm: Cannot set device size/shape for /dev/md5: No space left on
On Thursday June 22, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Monday June 19, [EMAIL PROTECTED] wrote:
Hi,
I'd like to shrink the size of a RAID5 array - is this
possible? My first attempt shrinking 1.4TB to 600GB,
mdadm --grow /dev/md5 --size=629145600
gives
mdadm:
Neil Brown wrote:
In short, reducing a raid5 to a particular size isn't something that
really makes sense to me. Reducing the amount of each device that is
used does - though I would much more expect people to want to increase
that size.
If Paul really has a reason to reduce the array to a
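The order of operations matters when shrinking. A sketch assuming an ext3
filesystem on /dev/md5; the filesystem-resize figures are illustrative:

```shell
# Shrink the filesystem first, then the array; the other order destroys
# data. Note --size is the space used per member device, in KiB, so the
# resulting array capacity is size * (members - 1) for RAID-5.
umount /mnt/md5
fsck -f /dev/md5
resize2fs /dev/md5 550G                 # shrink fs below the new capacity
mdadm --grow /dev/md5 --size=629145600  # shrink each member's used size
resize2fs /dev/md5                      # grow the fs back to fill the array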
Molle Bestefich wrote:
Christian Pernegger wrote:
Anything specific wrong with the Maxtors?
No. I've used Maxtor for a long time and I'm generally happy with them.
They break now and then, but their online warranty system is great.
I've also been treated kindly by their help desk - talked
Niccolo Rigacci wrote:
personally, I don't see any point in worrying about the default, whether at
compile time or boot time:
for f in `find /sys/block/* -name scheduler`; do echo cfq > $f; done
I tested this case:
- reboot as if after a power failure (RAID goes dirty)
- RAID starts resyncing as soon as
Christian Pernegger wrote:
Hi list!
Having experienced firsthand the pain that hardware RAID controllers
can be -- my 3ware 7500-8 died and it took me a week to find even a
7508-8 -- I would like to switch to kernel software RAID.
Here's a tentative setup:
Intel SE7230NH1-E mainboard
Pentium
Molle Bestefich wrote:
Christian Pernegger wrote:
Anything specific wrong with the Maxtors?
No. I've used Maxtor for a long time and I'm generally happy with them.
They break now and then, but their online warranty system is great.
I've also been treated kindly by their help desk -
Hi, guys:
My copy of 2.6.17-rc5 has the following code in autostart_array():
mdp_disk_t *desc = sb->disks + i;
dev_t dev = MKDEV(desc->major, desc->minor);
if (!dev)
        continue;
if (dev == startdev)
Pete Zaitcev wrote:
Hi, guys:
My copy of 2.6.17-rc5 has the following code in autostart_array():
mdp_disk_t *desc = sb->disks + i;
dev_t dev = MKDEV(desc->major, desc->minor);
if (!dev)
        continue;
if (dev ==
On Fri, 23 Jun 2006 14:46:13 +1000, Neil Brown [EMAIL PROTECTED] wrote:
dev_t dev = MKDEV(desc->major, desc->minor);
if (MAJOR(dev) != desc->major || MINOR(dev) != desc->minor)
        continue;
desc->major and desc->minor have been
read off the disk, so