[gentoo-user] Re: raid1 grub ext4
Florian Philipp lists at binarywings.net writes:

> Your boot partition is not by any chance a logical partition and
> therefore would be (hd0,4) and not (hd0,0)?

grub> root (hd0,4)
Error 22: No such partition

No?

> You can try to use 0.90 metadata by specifying it while creating the
> RAID with mdadm. I'm using it myself because AFAIK this is the only
> way for grub to handle a single RAID containing partitions instead of
> partitions containing RAIDs.

OK, so I read about this 0.90 metadata but could not find details
(syntax) of when and exactly how to use this information.

OK, so, I've rebooted and got md1, md2, md3 renamed by (whatever) to
md125, md127 and md126, respectively. I changed the fstab like so:

#/dev/md1    /boot       ext4   noauto,noatime       1 2
#/dev/md3    /           ext4   noatime              0 1
#/dev/md2    swap        swap   defaults             0 0
none         /proc       proc   defaults             0 0
/dev/cdrom   /mnt/cdrom  auto   noauto,rw,user       0 0
shm          /dev/shm    tmpfs  nodev,nosuid,noexec  0 0
/dev/md125   /boot       ext2   noauto,noatime       1 2
/dev/md126   /           ext4   noatime              0 1
/dev/md127   swap        swap   defaults             0 0

I put ext2 on /boot, re-emerged grub and edited grub.conf, but when I
run grub I still get an HD that cannot be found:

grub> root (hd0,0)
Filesystem type unknown, partition type 0xfd

grub> root (hd1,0)
Filesystem type unknown, partition type 0xfd

grub> find /boot/grub/stage1
Error 15: File not found

grub> find /grub/stage1
Error 15: File not found

All the files are in /boot/grub... ext2 support is built into the
kernel, with extended attributes.

Ideas? (Syntax and steps to repeat after a reboot?) It's my first
software RAID on Gentoo, so I'm sure I've mucked things up a bit.

James
[gentoo-user] Re: raid1 grub ext4
Florian Philipp lists at binarywings.net writes:

> You can try to use 0.90 metadata by specifying it while creating the
> RAID with mdadm. I'm using it myself because AFAIK this is the only
> way for grub to handle a single RAID containing partitions instead of
> partitions containing RAIDs.

Not sure what this inconsistency is, tell me:

(chroot) livecd grub # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : active raid1 sda1[0] sdb1[1]
      262132 blocks super 1.2 [2/2] [UU]
md126 : active raid1 sdb3[1] sda3[0]
      1948226512 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdb2[1] sda2[0]
      5022708 blocks super 1.2 [2/2] [UU]

(chroot) livecd grub # cd /boot/grub/
(chroot) livecd grub # df .
Filesystem   Size  Used Avail Use% Mounted on
/dev/md1     248M  7.5M  228M   4% /boot

So is it md1 or md125 for /boot, which is on its own partition?

James
Re: [gentoo-user] Re: raid1 grub ext4
Am 14.04.2011 14:56, schrieb James:
> OK, so I read about this 0.90 metadata but could not find details
> (syntax) of when and exactly how to use this information.

The parameter for specifying metadata versions is -e. Try:

mdadm --create --metadata=0.90 ...

Of course it can only be specified while creating the array.

> OK, so, I've rebooted and got md1, md2, md3 renamed by (whatever) to
> md125, md127 and md126, respectively.

The renaming is pretty ugly. You can force specific names by
circumventing the kernel autodetection. Add the following kernel
parameters:

raid=noautodetect md=0,/dev/sda1,/dev/sdb1 ...

This assembles md0 with sda1 and sdb1. You can also try to keep
autodetection on and only force the numbering for your raid partition.

Hope this helps,
Florian Philipp
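[Editor's note] Florian's two suggestions, combined, might end up looking like this in grub.conf (a hedged sketch, not from the thread: the md numbers and sda/sdb pairings are assumptions based on James's partition layout, and the kernel image name is a placeholder):

```
# grub.conf -- hypothetical example
title Gentoo Linux (raid1, forced md numbering)
root (hd0,0)
kernel /boot/vmlinuz root=/dev/md3 raid=noautodetect \
      md=1,/dev/sda1,/dev/sdb1 md=2,/dev/sda2,/dev/sdb2 md=3,/dev/sda3,/dev/sdb3
```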
[gentoo-user] Re: raid1 grub ext4
James wireless at tampabay.rr.com writes:

> Not sure what this inconsistency is, tell me:

I rebooted, using a minimal CD. Dmesg has this information:

md: bind<sda1>
md: bind<sdb3>
md: bind<sda2>
md: bind<sda3>
md/raid1:md126: active with 2 out of 2 mirrors
md126: detected capacity change from 0 to 1994983948288
md: bind<sdb1>
md126: unknown partition table
md: bind<sdb2>
md/raid1:md127: active with 2 out of 2 mirrors
md127: detected capacity change from 0 to 268423168
md/raid1:md125: active with 2 out of 2 mirrors
md125: detected capacity change from 0 to 5143252992
md127: unknown partition table
md125: unknown partition table

Unknown partition tables? Trying to avoid the 4k disk problems, I used
this to format the drives originally, which I found in a Gentoo bug:

livecd ~ # fdisk -c -S 56 -u /dev/sda

Command (m for help): p

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 56 sectors/track, 273601 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xab83344a

   Device Boot     Start        End     Blocks  Id System
/dev/sda1   *       2048     526335     262144  fd Linux raid autodetect
/dev/sda2         526336   10573823    5023744  fd Linux raid autodetect
/dev/sda3       10573824 3907029167 1948227672  fd Linux raid autodetect

I think my problem is that the partition table is unknown? If so, what
did I miss and how do I recover? Also, still unsure if my fstab is
correct (see previous post). When I boot with the minimal CD,
everything is there after I mount and go into the chroot environment.

Perplexed,
James
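[Editor's note] On the 4k worry in the message above: a 4KiB-sector drive is satisfied when each partition starts on a sector divisible by 8 (8 x 512 B = 4 KiB). A small sketch that checks the start sectors from the fdisk listing:

```shell
#!/bin/sh
# Verify 4KiB alignment of partition start sectors
# (values taken from the fdisk output quoted above).
for start in 2048 526336 10573824; do
    if [ $((start % 8)) -eq 0 ]; then
        echo "start sector $start: 4K-aligned"
    else
        echo "start sector $start: NOT 4K-aligned"
    fi
done
```

All three partitions here start on 4K boundaries, so whatever else is wrong, alignment is not it.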
Re: [gentoo-user] Re: raid1 grub ext4
Am 14.04.2011 15:41, schrieb James:
> I rebooted, using a minimal CD. Dmesg has this information:
> [...]
> md127: unknown partition table
> md125: unknown partition table
>
> Unknown partition tables? Trying to avoid the 4k disk problems, I
> used this to format the drives originally, which I found in a Gentoo
> bug:
>
> livecd ~ # fdisk -c -S 56 -u /dev/sda
> [...]
>
> I think my problem is that the partition table is unknown? If so,
> what did I miss and how do I recover?

I don't think the missing partition table is your problem. Linux
supports partitions within md devices. You don't use this feature and
therefore there is no partition table within the md devices to be
detected.

However, you might be onto something with the changed sector offset.
But I don't know enough of this to help you.
Regards,
Florian Philipp
Re: [gentoo-user] Re: raid1 grub ext4
Hi,

Just picking the last post I read here. OP, you may want to read this:

http://grub.enbug.org/LVMandRAID

I know little about LVM and nothing about RAID, but I found that howto,
which is pretty straightforward on how it should work. Also, make sure
you are using a version of grub that can see RAID/LVM. According to
what I read, not all versions can; only the most recent has that
feature. It also has a grub.conf example too. Maybe that will help too.

Hope that helps.

Dale :-) :-)
Re: [gentoo-user] Re: raid1 grub ext4
On Thu, Apr 14, 2011 at 7:56 AM, James wirel...@tampabay.rr.com wrote:
> OK, so, I've rebooted and got md1, md2, md3 renamed by (whatever) to
> md125, md127 and md126, respectively.

The name of the array probably got weird because your hostname doesn't
match the homehost of the array. The array has the host name stored in
its metadata, so if you're booting in an environment that doesn't have
the same hostname (such as a live CD) then it'll use different (large)
numbering to avoid a conflict with local arrays. It may also cause some
other differences. The manpage of mdadm has good information. I think
you can also set it to ignore the hostname entirely in mdadm.conf, but
I've not personally ever tried that.
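[Editor's note] Pinning the arrays in /etc/mdadm.conf, as hinted at above, might look like this (a sketch; `HOMEHOST <ignore>` depends on the mdadm version, and the UUIDs are placeholders for what `mdadm --detail --scan` reports on the real system):

```
# /etc/mdadm.conf -- sketch; replace the UUIDs with real values
HOMEHOST <ignore>
ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```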
[gentoo-user] Re: raid1 grub ext4
Dale rdalek1967 at gmail.com writes:
> http://grub.enbug.org/LVMandRAID

Not using lvm at all. Simple raid1 on /boot, /, and swap partitions. I
do not need the added complexity of LVM on a simple raid array; I'm
perfectly capable of following explicit instructions (syntax) and still
screwing things up, without LVM...

You build a raid1 system yet? NO lvm ;-) Come on, Dale, I need you to
flesh this out:

http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
http://en.gentoo-wiki.com/wiki/RAID/Software

:-( Alligators? I do not see any Gators. Come on in, the water is FINE!

James
[gentoo-user] Re: raid1 grub ext4
Florian Philipp lists at binarywings.net writes:

> I don't think the missing partition table is your problem.

OK, let's assume you are correct, ignoring it.

> However, you might be onto something with the changed sector offset.
> But I don't know enough of this to help you.

Well, if I have to reformat, I lose everything on the install. Not
ready to start over yet. So after a fresh reboot I see:

livecd ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
      1948226512 blocks super 1.2 [2/2] [UU]
md126 : active (auto-read-only) raid1 sda2[0] sdb2[1]
      5022708 blocks super 1.2 [2/2] [UU]
md127 : active (auto-read-only) raid1 sda1[0] sdb1[1]
      262132 blocks super 1.2 [2/2] [UU]

If you look at my previous posts on the md part names, and focus on the
sizes, you'll see something very troubling. The minimal CD keeps using
the md125-127 names but assigns them to different partitions. Now /boot
is:

md127 : active (auto-read-only) raid1 sda1[0] sdb1[1]
      262132 blocks super 1.2 [2/2] [UU]

/ is:

md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
      1948226512 blocks super 1.2 [2/2] [UU]

swap is:

md126 : active (auto-read-only) raid1 sda2[0] sdb2[1]
      5022708 blocks super 1.2 [2/2] [UU]

Something is morphing the numbers each time I reboot with the minimal
CD, so no matter what I put in /etc/fstab, it's going to be wrong. Grub
cannot find the partition with the kernel? Or is this not a problem?
Plus, since I'm never able to write the grub stuffage to the MBR,
neither grub nor the kernel ever runs.

After rebooting I tried this step to correct for the metadata problem
you previously posted about:

mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
mdadm: super0.90 cannot open /dev/sda1: Device or resource busy
mdadm: /dev/sda1 is not suitable for this array.
mdadm: super0.90 cannot open /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 is not suitable for this array.

mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
mdadm: super0.90 cannot open /dev/sda1: Device or resource busy
mdadm: /dev/sda1 is not suitable for this array.
mdadm: super0.90 cannot open /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 is not suitable for this array.

James
Re: [gentoo-user] Re: raid1 grub ext4
Am 14.04.2011 17:07, schrieb James:
> [...]
> Something is morphing the numbers each time I reboot with the minimal
> CD, so no matter what I put in /etc/fstab, it's going to be wrong.

I guess you can resort to labels or UUIDs. The real problem is the
root=... parameter for the kernel. That's why I suggested overriding
the auto detection and defining the raids explicitly on the kernel
parameter list.

> Grub cannot find the partition with the kernel? Or is this not a
> problem?

Wild guess: Does grub maybe rely on the partition type to identify the
file system? Does it work if you change the type from 0xfd to standard
0x83?

> Plus, since I'm never able to write the grub stuffage to the MBR,
> neither grub nor the kernel ever runs.

As a workaround to get your system into a usable state, you can still
try to put /boot on a USB stick. In the past, I've also had a system
where grub (the whole /boot except the kernel) was located on a floppy
and then located the kernel file on the HDD. You could try this in
order to find out whether a working grub still has trouble with your
file system.

> After rebooting I tried this step to correct for the metadata problem
> you previously posted about:
>
> mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
> mdadm: super0.90 cannot open /dev/sda1: Device or resource busy
> [...]

Are you sure sda1 and sdb1 are not in use? Did the kernel activate the
already present RAID? Then you have to deactivate it. Use:

mdadm --stop /dev/md*

Additionally, check that you did not mount sda1 or sdb1 by accident.

Hope this helps,
Florian Philipp
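[Editor's note] The labels/UUIDs idea above would make /etc/fstab immune to the md125/126/127 renumbering. A sketch (the UUIDs are placeholders for whatever `blkid /dev/md*` prints on the actual system):

```
# /etc/fstab by filesystem UUID -- hypothetical example
UUID=11111111-2222-3333-4444-555555555555  /boot  ext2  noauto,noatime  1 2
UUID=66666666-7777-8888-9999-aaaaaaaaaaaa  /      ext4  noatime         0 1
UUID=bbbbbbbb-cccc-dddd-eeee-ffffffffffff  none   swap  sw              0 0
```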
Re: [gentoo-user] Re: raid1 grub ext4
James wrote:
> Dale rdalek1967 at gmail.com writes:
>> http://grub.enbug.org/LVMandRAID
>
> Not using lvm at all. Simple raid1 on /boot, /, and swap partitions.
> I do not need the added complexity of LVM on a simple raid array;
> [...]
> :-( Alligators? I do not see any Gators. Come on in, the water is
> FINE!

That talks about using RAID tho. I don't think you have to be using LVM
to use that guide. It just talks about both in one place. Maybe I don't
know enough to see that it requires both tho. lol

Dale :-) :-)
[gentoo-user] Re: raid1 grub ext4
Florian Philipp lists at binarywings.net writes:

> Are you sure sda1 and sdb1 are not in use? Did the kernel activate
> the already present RAID? Then you have to deactivate it. Use:
>
> mdadm --stop /dev/md*

AHh!

livecd ~ # mdadm --stop /dev/md*
mdadm: error opening /dev/md: Is a directory
mdadm: stopped /dev/md1
mdadm: stopped /dev/md125
mdadm: stopped /dev/md126
mdadm: stopped /dev/md127
mdadm: stopped /dev/md3
mdadm: stopped /dev/md4

So it has 2 sets of md?

mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Sun Apr 10 17:12:42 2011
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Sun Apr 10 17:12:42 2011
Continue creating array? y
mdadm: array /dev/md127 started.

What next?

James
[gentoo-user] Re: raid1 grub ext4
Dale rdalek1967 at gmail.com writes:

>> http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
>> http://en.gentoo-wiki.com/wiki/RAID/Software
>
> That talks about using RAID tho. I don't think you have to be using
> LVM to use that guide. It just talks about both in one place.

Correct, if my research-comprehension is properly aligned.

> Maybe I don't know enough to see that it requires both tho. lol

Nope, lvm is extra. Once you master lvm, I'll dive in with both feet!
For now, no lvm, as my needs are simple mirroring of all 3 partitions.
/boot and swap are plenty big; everything else is /. So this should be
straightforward, I think. Florian is about to help me flesh out the
problem on the other thread.

James
Re: [gentoo-user] Re: raid1 grub ext4
Am 14.04.2011 18:29, schrieb James:
> AHh!
>
> livecd ~ # mdadm --stop /dev/md*
> mdadm: error opening /dev/md: Is a directory
> mdadm: stopped /dev/md1
> mdadm: stopped /dev/md125
> mdadm: stopped /dev/md126
> mdadm: stopped /dev/md127
> mdadm: stopped /dev/md3
> mdadm: stopped /dev/md4
>
> So it has 2 sets of md?

*Head scratch* This, uhm, looks odd. No clue what to make of it.

> mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
> [...]
> Continue creating array? y
> mdadm: array /dev/md127 started.
>
> What next?

Guess you also have to remove them from the old array:

mdadm /dev/md0 --remove /dev/sda1

You can also try --force.

Regards,
Florian Philipp
[gentoo-user] Re: raid1 grub ext4
> *Head scratch* This, uhm, looks odd. No clue what to make of it.

Ahhh, don't give up just yet? I issued these commands:

mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1

mdadm --create /dev/md125 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda3 /dev/sdb3
mdadm: /dev/sda3 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Thu Apr 14 13:22:32 2011
mdadm: /dev/sdb3 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Thu Apr 14 13:22:32 2011
Continue creating array? y

mdadm --create /dev/md126 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda2 /dev/sdb2

I'm not sure if I just wiped the drives clean (empty)? If so, I'll have
to start over?

mdadm --detail /dev/md1
mdadm: cannot open /dev/md1: No such file or directory

Same now for md2 and md3... Look (ma, no hands!):

livecd gentoo # mdadm --detail /dev/md125
/dev/md125:
        Version : 0.90
  Creation Time : Thu Apr 14 14:15:21 2011
     Raid Level : raid1
     Array Size : 1948227584 (1857.97 GiB 1994.99 GB)
  Used Dev Size : 1948227584 (1857.97 GiB 1994.99 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 125
    Persistence : Superblock is persistent
    Update Time : Thu Apr 14 15:51:46 2011
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 Rebuild Status : 37% complete
           UUID : fa800cdb:33955cfd:cb201669:f728008a (local to host livecd)
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

# mdadm --detail /dev/md126
/dev/md126:
        Version : 0.90
  Creation Time : Thu Apr 14 14:16:01 2011
     Raid Level : raid1
     Array Size : 5023680 (4.79 GiB 5.14 GB)
  Used Dev Size : 5023680 (4.79 GiB 5.14 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126
    Persistence : Superblock is persistent
    Update Time : Thu Apr 14 14:16:01 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           UUID : e4651ca8:4aae2908:cb201669:f728008a (local to host livecd)
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

# mdadm --detail /dev/md127
/dev/md127:
        Version : 0.90
  Creation Time : Thu Apr 14 14:10:56 2011
     Raid Level : raid1
     Array Size : 262080 (255.98 MiB 268.37 MB)
  Used Dev Size : 262080 (255.98 MiB 268.37 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127
    Persistence : Superblock is persistent
    Update Time : Thu Apr 14 16:12:41 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           UUID : 8939604f:676aa8df:cb201669:f728008a (local to host livecd)
         Events : 0.18

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

We'll see in a few hours.

James
[gentoo-user] Re: raid1 grub ext4
Florian Philipp lists at binarywings.net writes:

>> livecd ~ # mdadm --stop /dev/md*
>> mdadm: error opening /dev/md: Is a directory
>> mdadm: stopped /dev/md1
>> mdadm: stopped /dev/md125
>> mdadm: stopped /dev/md126
>> mdadm: stopped /dev/md127
>> mdadm: stopped /dev/md3
>> mdadm: stopped /dev/md4

From this web page, possibly?

http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml

Code Listing 2.10: Create device nodes and devices

livecd ~ # mknod /dev/md1 b 9 1
livecd ~ # mknod /dev/md3 b 9 3
livecd ~ # mknod /dev/md4 b 9 4
livecd ~ # mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
mdadm: array /dev/md1 started.
livecd ~ # mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: array /dev/md3 started.
livecd ~ # mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sda4 /dev/sdb4
mdadm: array /dev/md4 started.

Not exactly what I did (I omitted the fourth partition and only used
raid1), and it does not align with the md125-md127 numbers, but all are
present. Comments and suggestions are most welcome!

James
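[Editor's note] Once the arrays exist under the wanted names, the usual way to make the numbering stick across reboots (hedged; the exact output format varies by mdadm version) is to record them in mdadm.conf:

```
livecd ~ # mdadm --detail --scan >> /etc/mdadm.conf
livecd ~ # cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=0.90 UUID=...
ARRAY /dev/md3 metadata=0.90 UUID=...
ARRAY /dev/md4 metadata=0.90 UUID=...
```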
Re: [gentoo-user] Re: raid1 grub ext4
Am 14.04.2011 22:19, schrieb James:
>> *Head scratch* This, uhm, looks odd. No clue what to make of it.
>
> Ahhh, don't give up just yet? I issued these commands:
>
> mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
> mdadm --create /dev/md125 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda3 /dev/sdb3
> [...]
> mdadm --create /dev/md126 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda2 /dev/sdb2
>
> I'm not sure if I just wiped the drives clean (empty)? If so, I'll
> have to start over?

Ouch, I didn't think of that. Well, I guess it will not wipe it, it
will merely re-sync the disks. Since they have been mirrors of each
other before this action, you might be lucky and it keeps working.

> mdadm --detail /dev/md1
> mdadm: cannot open /dev/md1: No such file or directory
>
> Same now for md2 and md3...

Well, at least you are rid of the duplicate arrays.

> Look (ma, no hands!):
>
> livecd gentoo # mdadm --detail /dev/md125
> [...]
>           State : clean, resyncing
> [...]
>  Rebuild Status : 37% complete
> [...]
>
> We'll see in a few hours.

You can keep using it while it re-syncs. Re-syncing just means that you
do not have any redundancy, yet. You can still read/write on the array.
You will get or manipulate whatever mdadm thinks is the correct value
for each block. That's also what will end up on both disks, ultimately.
I guess you can even reboot, but since your setup is not really
persistent, I wouldn't try it.

Regards,
Florian Philipp
Re: [gentoo-user] Re: raid1 grub ext4
Am 12.04.2011 18:53, schrieb James:
> Everything I try within grub indicates the filesystem is unknown.
> This stumps me:
>
> http://bugs.gentoo.org/show_bug.cgi?id=250829
>
> The bug above makes it look like grub support of ext4 was fleshed out
> and fixed some time ago?
>
> Maybe unmount the boot partition, reformat it to ext2, copy over the
> kernel (run what mdadm commands again?), remount and see if it works?
> This is still my best idea, if nobody has any other ideas?

Your boot partition is not by any chance a logical partition and
therefore would be (hd0,4) and not (hd0,0)?

Also: According to this bug [1], grub gained support for md metadata
1.0 in 2010. Maybe this has not yet been merged into Gentoo (or legacy
grub, at all). You can try to use 0.90 metadata by specifying it while
creating the RAID with mdadm. I'm using it myself because AFAIK this is
the only way for grub to handle a single RAID containing partitions
instead of partitions containing RAIDs.

[1] http://savannah.gnu.org/task/?10196

Hope this helps,
Florian Philipp
Re: [gentoo-user] Re: raid1 grub ext4
Dale writes:
> Same here. I use ext3 and reiserfs, depending on what it is, but
> /boot is always ext2. Why? It works well with grub and has for many,
> many years, and most likely will for many years to come as well. As
> for making things the same, that may not always be a good idea
> either. I put some things on reiserfs but some on ext3. It seems each
> file system has its strengths and weaknesses. I read that portage,
> with a lot of small files, does better on ext* file systems. So I put
> portage on that. Most everything else is on reiserfs.

It's the other way around here - all ext3 except for /boot, but the
portage tree is on reiserfs. Which is said to be very fast when dealing
with lots of small files, because files under 4K are stored directly in
the inodes.

http://en.wikipedia.org/wiki/ReiserFS#Performance

Wonko
[gentoo-user] Re: raid1 grub ext4
Stroller stroller at stellar.eclipse.co.uk writes:

>> James, if I'm not wrong (legacy) sys-boot/grub-0.97-r10 does not
>> have drivers for ext4. Not sure if there's a patch for it, or if
>> grub2 can boot from ext4.

Mick, that's what I was wondering. No evidence either way, that I could
find, so I decided to make everything ext4.

> There's no need for extents on such a small partition, nor
> journalling (because you write to /boot so rarely, the likelihood of
> a power failure when you're doing so is minuscule).

Yea, sure, but that's not the point. I just wanted to use ext4 for
everything. Not on this system, but often, my boot partition is very
active, as I copy many kernels there for many different (arch)
machines and different hardware (HD, SSD, CF, SD...). I try to make
the many systems I admin as homogeneous as possible, hence the switch
to ext4 for boot.

James
[gentoo-user] Re: raid1 grub ext4
Neil Bothwick neil at digimed.co.uk writes:

> If /boot is on a separate partition, you should be using
> find /grub/stage1

It is.

grub> find /grub/stage1
Error 15: File not found

grub> find /boot/grub/stage1
Error 15: File not found

If the symlink is there for boot -> /boot -- and it is by default --
both work.

# ls -alg
<snip>
lrwxrwxrwx 1 root    1 Apr  6 21:40 boot -> .
drwxr-xr-x 2 root 1024 Apr 11 12:05 grub

> I've found GRUB's handling of symlinks to be variable at best. Try
> searching for the real file.

Everything I try within grub indicates the filesystem is unknown.
Maybe unmount the boot partition, reformat it to ext2, copy over the
kernel (run what mdadm commands again?), remount and see if it works?

Other ideas?

James
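[Editor's note] For reference, the usual grub-legacy sequence once `find` can locate stage1 (a sketch; with 0.90-metadata raid1, each mirror half looks like a plain partition to grub, so it is typically installed to both disks' MBRs):

```
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```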
Re: [gentoo-user] Re: raid1 grub ext4
On Tuesday 12 April 2011 15:10:52 James wrote:
> Stroller stroller at stellar.eclipse.co.uk writes:
>> There's no need for extents on such a small partition, nor
>> journalling (because you write to /boot so rarely, the likelihood of
>> a power failure when you're doing so is minuscule).
>
> Yea, sure, but that's not the point. I just wanted to use ext4 for
> everything. Not on this system, but often, my boot partition is very
> active, as I copy many kernels there for many different (arch)
> machines and different hardware (HD, SSD, CF, SD...). I try to make
> the many systems I admin as homogeneous as possible, hence the switch
> to ext4 for boot.

Nevertheless, if ext4 isn't working for you, you should follow the
advice you've been given and format /boot as ext2. All my boot
partitions are ext2, regardless of whether the others are ext4 or
reiserfs.

--
Rgds
Peter
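[Editor's note] Reformatting /boot as ext2, as suggested, is a short operation (a sketch; it assumes /dev/md1 is the /boot mirror, the source path for the kernels is a placeholder, and mke2fs destroys the old /boot contents):

```
livecd ~ # umount /boot
livecd ~ # mke2fs /dev/md1
livecd ~ # mount /dev/md1 /boot
livecd ~ # cp /path/to/saved/kernels/* /boot/
```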
Re: [gentoo-user] Re: raid1 grub ext4
Peter Humphrey wrote:
> On Tuesday 12 April 2011 15:10:52 James wrote:
>> [...] I try to make the many systems I admin as homogeneous as
>> possible, hence the switch to ext4 for boot.
>
> Nevertheless, if ext4 isn't working for you, you should follow the
> advice you've been given and format /boot as ext2. All my boot
> partitions are ext2, regardless of whether the others are ext4 or
> reiserfs.

Same here. I use ext3 and reiserfs, depending on what it is, but /boot
is always ext2. Why? It works well with grub and has for many, many
years, and most likely will for many years to come as well. As for
making things the same, that may not always be a good idea either. I
put some things on reiserfs but some on ext3. It seems each file
system has its strengths and weaknesses. I read that portage, with a
lot of small files, does better on ext* file systems. So I put portage
on that. Most everything else is on reiserfs. Just my $0.02 worth and
that ain't much.

Dale :-) :-)
Re: [gentoo-user] Re: raid1 grub ext4
On Tuesday 12 April 2011 09:57:26 Dale wrote:
> [...] I read that portage, with a lot of small files, does better on
> ext* file systems. So I put portage on that. Most everything else is
> on reiserfs.

Where did you read that portage, with lots of small files, is best on
ext*? I was under the impression that reiserfs has better performance
with lots of smaller files.

--
Joost
Re: [gentoo-user] Re: raid1 grub ext4
On Tuesday 12 April 2011 15:57:26 Dale wrote:
> As for making things the same, that may not always be a good idea
> either.

I might add a quotation from Ralph Waldo Emerson: "A foolish
consistency is the hobgoblin of little minds."

--
Rgds
Peter
[gentoo-user] Re: raid1 grub ext4
James wireless at tampabay.rr.com writes:

> I've found GRUB's handling of symlinks to be variable at best. Try
> searching for the real file.

All the files are in /boot/grub:

(chroot) slam grub # ls
default        grub.conf         minix_stage1_5     stage2.old
device.map     grub.conf.bak     reiserfs_stage1_5  stage2_eltorito
e2fs_stage1_5  iso9660_stage1_5  splash.xpm.gz      ufs2_stage1_5
fat_stage1_5   jfs_stage1_5      stage1             vstafs_stage1_5
ffs_stage1_5   menu.lst          stage2             xfs_stage1_5

Everything I try within grub indicates the filesystem is unknown. This
stumps me:

http://bugs.gentoo.org/show_bug.cgi?id=250829

The bug above makes it look like grub support of ext4 was fleshed out
and fixed some time ago?

Maybe unmount the boot partition, reformat it to ext2, copy over the
kernel (run what mdadm commands again?), remount and see if it works?
This is still my best idea, if nobody has any other ideas?

James