Bill Davidsen wrote:
Richard Scobie wrote:
A followup for the archives:
I found this document very useful:
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
After modifying my grub.conf to refer to (hd0,0), reinstalling grub on
hdc with:
grub> device (hd0) /dev/hdc
Berni wrote:
Hi
I created the raid arrays during install with the text-installer-cd.
So first the raid array was created and then the system was installed on it.
I don't have an extra /boot partition; it's on the root (/) partition, and the root
is md0 in the raid. Every partition for
Richard Scobie wrote:
A followup for the archives:
I found this document very useful:
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
After modifying my grub.conf to refer to (hd0,0), reinstalling grub on
hdc with:
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
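Putting the quoted steps together, the grub-shell session for reinstalling the loader on the second disk would look roughly like this (a sketch; the disk and partition names follow the message above, and the final setup step is the usual one rather than part of the quote):

```shell
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```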
On Sun, Feb 03, 2008 at 10:53:51AM -0500, Bill Davidsen wrote:
Keld Jørn Simonsen wrote:
This is intended for the linux raid howto. Please give comments.
It is not fully ready /keld
Howto prepare for a failing disk
6. /etc/mdadm.conf
Something here on /etc/mdadm.conf. What would be
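A minimal /etc/mdadm.conf along the lines the draft asks about might look like this (the UUID is a placeholder; take the real line from `mdadm --detail --scan`):

```shell
# /etc/mdadm.conf - example only
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```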
On Sun, Feb 03, 2008 at 10:56:01AM -0500, Bill Davidsen wrote:
Keld Jørn Simonsen wrote:
I found a sentence in the HOWTO:
raid1 and raid 10 always writes all data to all disks
I think this is wrong for raid10.
eg
a raid10,f2 of 4 disks only writes to two of the disks -
not all 4
Bill Davidsen wrote:
Have you actually tested this by removing the first hd and booting?
Depending on the BIOS I believe that the fallback drive will be called
hdc by the BIOS but will be hdd in the system. That was with RHEL3, but
worth testing.
Hi Bill,
I did not try this particular
I've been reading the draft and checking it against my experience.
Because of local power fluctuations, I've just accidentally checked my
system: My system does *not* survive a power hit. This has happened
twice already today.
I've got /boot and a few other pieces in a 4-disk RAID 1 (three
On Sun Feb 03, 2008 at 01:15:10PM -0600, Moshe Yudkowsky wrote:
I've been reading the draft and checking it against my experience. Because
of local power fluctuations, I've just accidentally checked my system: My
system does *not* survive a power hit. This has happened twice already
Moshe Yudkowsky wrote:
I've been reading the draft and checking it against my experience.
Because of local power fluctuations, I've just accidentally checked my
system: My system does *not* survive a power hit. This has happened
twice already today.
I've got /boot and a few other pieces in
Robin Hill wrote:
This is wrong - the disk you boot from will always be hd0 (no matter
what the map file says - that's only used after the system's booted).
You need to remap the hd0 device for each disk:
grub --no-floppy <<EOF
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,1)
setup
Michael Tokarev wrote:
Speaking of repairs. As I already mentioned, I always use small
(256M..1G) raid1 array for my root partition, including /boot,
/bin, /etc, /sbin, /lib and so on (/usr, /home, /var are on
their own filesystems). And I had the following scenarios
happened already:
But
Moshe Yudkowsky wrote:
Michael Tokarev wrote:
Speaking of repairs. As I already mentioned, I always use small
(256M..1G) raid1 array for my root partition, including /boot,
/bin, /etc, /sbin, /lib and so on (/usr, /home, /var are on
their own filesystems). And I had the following
On Sun Feb 03, 2008 at 02:46:54PM -0600, Moshe Yudkowsky wrote:
Robin Hill wrote:
This is wrong - the disk you boot from will always be hd0 (no matter
what the map file says - that's only used after the system's booted).
You need to remap the hd0 device for each disk:
grub --no-floppy <<EOF
On Sunday February 3, [EMAIL PROTECTED] wrote:
Hi,
Maybe I'll buy three HDDs to put a raid10 on them. And get the total
capacity of 1.5 of a disc. 'man 4 md' indicates that this is possible
and should work.
I'm wondering: how is a single disc failure handled in such a configuration?
1.
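A quick sanity check on the 1.5-disk figure: raid10 with two copies stores every block twice, so usable capacity is the total raw space divided by two, whatever the disk count.

```shell
# raid10 with two copies over 3 disks of 500 GB each:
# usable capacity = total raw space / number of copies
disks=3
size_gb=500
copies=2
echo $(( disks * size_gb / copies ))   # prints 750, i.e. 1.5 disks' worth
```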
Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)
wow, thanks for quick reply :)
3. Another thing - would raid10,far=2 work when three drives are used?
Would it increase the read performance?
Yes.
is far=2 the most I could do to squeeze every possible MB/sec
On Feb 3, 2008 5:29 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)
wow, thanks for quick reply :)
3. Another thing - would raid10,far=2 work when three drives are used?
Would it increase the read performance?
Yes.
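For the archives: a raid10 far-layout array over three drives, as discussed above, can be created with something like the following (device names are examples; adjust to your disks):

```shell
# 2 copies in the "far" layout across 3 devices;
# usable capacity is 1.5 drives' worth
mdadm --create /dev/md0 --level=10 --layout=f2 \
      --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
```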
On Thursday January 31, [EMAIL PROTECTED] wrote:
Hello linux-raid.
i have DEBIAN.
raid01:/# mdadm -V
mdadm - v2.6.4 - 19th October 2007
raid01:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
Creation Time : Tue Nov 13 18:42:36 2007
Raid Level : raid5
Delta
Hello, Neil.
You wrote on 4 February 2008 at 03:44:21:
On Thursday January 31, [EMAIL PROTECTED] wrote:
Hello linux-raid.
i have DEBIAN.
raid01:/# mdadm -V
mdadm - v2.6.4 - 19th October 2007
raid01:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
Creation Time : Tue
On Saturday February 2, [EMAIL PROTECTED] wrote:
Hello, linux-raid.
Help please, How i can to fight THIS :
[EMAIL PROTECTED]:~# mdadm -I /dev/sdb
mdadm: /dev/sdb has different metadata to chosen array /dev/md1 0.91 0.90.
Apparently mdadm -I doesn't work with arrays that are in
Hi, Neil.
4 February 2008, 03:44:21:
On Thursday January 31, [EMAIL PROTECTED] wrote:
Hello linux-raid.
i have DEBIAN.
raid01:/# mdadm -V
mdadm - v2.6.4 - 19th October 2007
raid01:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
Creation Time : Tue Nov 13 18:42:36
On Monday February 4, [EMAIL PROTECTED] wrote:
raid01:/etc# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
[multipath] [faulty]
md1 : active(auto-read-only) raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
^^^
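The auto-read-only state flagged above clears on the first write to the array, and it can also be cleared by hand:

```shell
# switch an auto-read-only md array to normal read-write mode
mdadm --readwrite /dev/md1
```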
Hi linux-raid.
on DEBIAN :
[EMAIL PROTECTED]:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
Creation Time : Tue Nov 13 18:42:36 2007
Raid Level : raid5
Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
Raid Devices : 5
I understand that lilo and grub can only boot partitions that look like
a normal single-drive partition. And then I understand that a plain
raid10 has a layout which is equivalent to raid1. Can such a raid10
partition be used with grub or lilo for booting?
And would there be any advantages in