Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Michael Tokarev
Moshe Yudkowsky wrote: [] But that's *exactly* what I have -- well, 5GB -- and which failed. I've modified /etc/fstab to use data=journal (even on root, which I thought wasn't supposed to work without a grub option!) and I can power-cycle the system and bring it up reliably afterwards.

Re: raid1 or raid10 for /boot

2008-02-04 Thread Robin Hill
On Mon Feb 04, 2008 at 07:34:54AM +0100, Keld Jørn Simonsen wrote: I understand that lilo and grub only can boot partitions that look like a normal single-drive partition. And then I understand that a plain raid10 has a layout which is equivalent to raid1. Can such a raid10 partition be used
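
The safe direction discussed in this thread can be sketched as follows (a hedged sketch, not a command sequence from the thread; device names are placeholders): keep /boot on raid1 with old-style 0.90 metadata, which puts the superblock at the end of each member, so grub/lilo can read each member as a plain single-drive filesystem.

```shell
# /boot as raid1 with 0.90 metadata -- superblock at the end of each
# member, so each member looks like an ordinary single-drive partition
# to the grub/lilo stage-1 loaders.  /dev/sda1 and /dev/sdb1 are
# placeholder device names.
mdadm --create /dev/md0 --level=1 --metadata=0.90 \
      --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
# A raid10 array, by contrast, is generally not bootable this way,
# since its members need not look like a plain filesystem.
```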

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Moshe Yudkowsky
Michael Tokarev wrote: Moshe Yudkowsky wrote: [] But that's *exactly* what I have -- well, 5GB -- and which failed. I've modified /etc/fstab to use data=journal (even on root, which I thought wasn't supposed to work without a grub option!) and I can power-cycle the system and bring it up

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Moshe Yudkowsky
Robin, thanks for the explanation. I have a further question. Robin Hill wrote: Once the file system is mounted then hdX,Y maps according to the device.map file (which may actually bear no resemblance to the drive order at boot - I've had issues with this before). At boot time it maps to the

Re: raid1 or raid10 for /boot

2008-02-04 Thread Keld Jørn Simonsen
On Mon, Feb 04, 2008 at 09:17:35AM +, Robin Hill wrote: On Mon Feb 04, 2008 at 07:34:54AM +0100, Keld Jørn Simonsen wrote: I understand that lilo and grub only can boot partitions that look like a normal single-drive partition. And then I understand that a plain raid10 has a layout

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Michael Tokarev
Moshe Yudkowsky wrote: [] If I'm reading the man pages, Wikis, READMEs and mailing lists correctly -- not necessarily the case -- the ext3 file system uses the equivalent of data=journal as a default. ext3 defaults to data=ordered, not data=journal. ext2 doesn't have journal at all. The
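
For reference, the ext3 journaling mode is chosen per filesystem in /etc/fstab; an illustrative fragment (device names are placeholders, not taken from the thread):

```shell
# /etc/fstab fragment -- data=ordered is the ext3 default;
# data=journal journals file data as well as metadata.
/dev/md1  /      ext3  defaults,data=journal  0 1
/dev/md2  /home  ext3  defaults,data=ordered  0 2
```

For the root filesystem, the mode can also be passed on the kernel command line as rootflags=data=journal, since root is mounted before fstab is consulted -- presumably the "grub option" referred to earlier in the thread.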

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Moshe Yudkowsky
Eric, Thanks very much for your note. I'm becoming very leery of reiserfs at the moment... I'm about to run another series of crash tests. Eric Sandeen wrote: Justin Piszcz wrote: Why avoid XFS entirely? esandeen, any comments here? Heh; well, it's the meme. Well, yeah... Note also

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Michael Tokarev
Eric Sandeen wrote: Moshe Yudkowsky wrote: So if I understand you correctly, you're stating that currently the most reliable fs in its default configuration, in terms of protection against power-loss scenarios, is XFS? I wouldn't go that far without some real-world poweroff testing, because

Re: raid1 and raid 10 always writes all data to all disks?

2008-02-04 Thread Bill Davidsen
Keld Jørn Simonsen wrote: On Sun, Feb 03, 2008 at 10:56:01AM -0500, Bill Davidsen wrote: Keld Jørn Simonsen wrote: I found a sentence in the HOWTO: raid1 and raid 10 always writes all data to all disks I think this is wrong for raid10. eg a raid10,f2 of 4 disks only writes to two
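
Keld's point can be illustrated with a toy model of chunk placement. This is a simplified sketch, assuming the md "far" layout's convention that the second copy set is striped with the device order rotated by one; the helper function is invented for illustration.

```shell
# Toy model of raid10,f2 chunk placement: each chunk has exactly two
# copies, so a write touches only 2 of the N disks -- unlike raid1,
# where every member receives every block.
raid10_f2_disks() {   # usage: raid10_f2_disks CHUNK NDISKS
    # first copy: disk (chunk mod N); second copy: rotated by one
    echo "$(( $1 % $2 )) $(( ($1 + 1) % $2 ))"
}

for c in 0 1 2 3; do
    echo "chunk $c -> disks $(raid10_f2_disks "$c" 4)"
done
```

So on a 4-disk raid10,f2, any single write lands on two disks only, which is why the HOWTO's "writes all data to all disks" wording does not fit raid10.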

Re: draft howto on making raids for surviving a disk crash

2008-02-04 Thread Bill Davidsen
Keld Jørn Simonsen wrote: On Sun, Feb 03, 2008 at 10:53:51AM -0500, Bill Davidsen wrote: Keld Jørn Simonsen wrote: This is intended for the linux raid howto. Please give comments. It is not fully ready /keld Howto prepare for a failing disk 6. /etc/mdadm.conf Something here on
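
A minimal sketch of the mdadm.conf step the howto asks about (the config path varies by distro -- /etc/mdadm.conf or /etc/mdadm/mdadm.conf):

```shell
# Capture the currently running arrays into the config file so they
# assemble by UUID at boot, surviving device renames.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Typical resulting line:
#   ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...
```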

Re: In this partition scheme, grub does not find md information?

2008-02-04 Thread Michael Tokarev
John Stoffel wrote: [] C'mon, how many of you are programmed to believe that 1.2 is better than 1.0? But when they're not different, just different placements, then it's confusing. Speaking of the more is better thing... There were quite a few bugs fixed in recent months wrt version 1

Re: In this partition scheme, grub does not find md information?

2008-02-04 Thread John Stoffel
David On 26 Oct 2007, Neil Brown wrote: On Thursday October 25, [EMAIL PROTECTED] wrote: I also suspect that a *lot* of people will assume that the highest superblock version is the best and should be used for new installs etc. Grumble... why can't people expect what I want them to

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Richard Scobie
Michael Tokarev wrote: Unfortunately a UPS does not *really* help here. Because unless it has a control program which properly shuts the system down on the loss of input power, and the battery really has the capacity to power the system while it's shutting down (anyone tested this? With a new UPS?

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Eric Sandeen
Moshe Yudkowsky wrote: So if I understand you correctly, you're stating that currently the most reliable fs in its default configuration, in terms of protection against power-loss scenarios, is XFS? I wouldn't go that far without some real-world poweroff testing, because various fs's are

Re: raid1 or raid10 for /boot

2008-02-04 Thread Robin Hill
On Mon Feb 04, 2008 at 12:21:40PM +0100, Keld Jørn Simonsen wrote: On Mon, Feb 04, 2008 at 09:17:35AM +, Robin Hill wrote: On Mon Feb 04, 2008 at 07:34:54AM +0100, Keld Jørn Simonsen wrote: I understand that lilo and grub only can boot partitions that look like a normal

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Robin Hill
On Mon Feb 04, 2008 at 05:06:09AM -0600, Moshe Yudkowsky wrote: Robin, thanks for the explanation. I have a further question. Robin Hill wrote: Once the file system is mounted then hdX,Y maps according to the device.map file (which may actually bear no resemblance to the drive order at

Re: Linux md and iscsi problems

2008-02-04 Thread aristizb
Good morning. Quoting Neil Brown [EMAIL PROTECTED]: On Friday February 1, [EMAIL PROTECTED] wrote: Summarizing, I have two questions about the behavior of Linux md with slow devices: 1. Is it possible to modify some kind of time-out parameter on the mdadm tool so the slow device wouldn't

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Justin Piszcz
On Mon, 4 Feb 2008, Michael Tokarev wrote: Moshe Yudkowsky wrote: [] If I'm reading the man pages, Wikis, READMEs and mailing lists correctly -- not necessarily the case -- the ext3 file system uses the equivalent of data=journal as a default. ext3 defaults to data=ordered, not

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Michael Tokarev
Eric Sandeen wrote: [] http://oss.sgi.com/projects/xfs/faq.html#nulls and note that recent fixes have been made in this area (also noted in the faq) Also - the above all assumes that when a drive says it's written/flushed data, that it truly has. Modern write-caching drives can wreak
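
The write-cache caveat above can be acted on directly; a hedged sketch (device names are placeholders, and barrier support depends on the kernel and filesystem):

```shell
# Query, then disable, a drive's volatile write cache -- trading write
# performance for the drive being honest about what is on the platter.
hdparm -W  /dev/sda     # show current write-cache state
hdparm -W0 /dev/sda     # turn the write cache off
# Alternatively, ext3 can issue cache-flush barriers from the journal
# (not the ext3 default at this time):
mount -o remount,barrier=1 /
```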

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Eric Sandeen
Eric Sandeen wrote: Justin Piszcz wrote: Why avoid XFS entirely? esandeen, any comments here? Heh; well, it's the meme. see: http://oss.sgi.com/projects/xfs/faq.html#nulls and note that recent fixes have been made in this area (also noted in the faq) Actually, continue reading

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Eric Sandeen
Justin Piszcz wrote: Why avoid XFS entirely? esandeen, any comments here? Heh; well, it's the meme. see: http://oss.sgi.com/projects/xfs/faq.html#nulls and note that recent fixes have been made in this area (also noted in the faq) Also - the above all assumes that when a drive says it's

Re: using update-initramfs: how to get new mdadm.conf into the /boot? Or is it XFS?

2008-02-04 Thread maximilian attems
On Mon, Feb 04, 2008 at 02:59:44PM -0600, Moshe Yudkowsky wrote: Problem: on reboot, I get an error message: root (hd0,1) (Moshe comment: as expected) Filesystem type is xfs, partition type 0xfd (Moshe comment: as expected) kernel /boot/vmliuz-etc.-amd64 root=/dev/md/boot ro Error

Re: using update-initramfs: how to get new mdadm.conf into the /boot? Or is it XFS?

2008-02-04 Thread Robin Hill
On Mon Feb 04, 2008 at 02:59:44PM -0600, Moshe Yudkowsky wrote: I've managed to get myself into a little problem. Since power hits were taking out the /boot partition, I decided to split /boot out of root. Working from my emergency partition, I copied all files from /root, re-partitioned

Re: using update-initramfs: how to get new mdadm.conf into the /boot? Or is it XFS?

2008-02-04 Thread Moshe Yudkowsky
Robin Hill wrote: File not found at that point would suggest it can't find the kernel file. The path here should be relative to the root of the partition /boot is on, so if your /boot is its own partition then you should either use kernel /vmlinuz or (the more usual solution from what I've
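
Robin's suggestion can be sketched as a menu.lst fragment (partition number and file names below are placeholders, not values from the thread):

```shell
# grub legacy menu.lst fragment -- with /boot on its own partition,
# kernel/initrd paths are relative to that partition's root, so
# "kernel /boot/vmlinuz..." becomes "kernel /vmlinuz...".
root (hd0,1)
kernel /vmlinuz root=/dev/md/root ro
initrd /initrd.img
```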

Re: using update-initramfs: how to get new mdadm.conf into the /boot? Or is it XFS?

2008-02-04 Thread Moshe Yudkowsky
maximilian attems wrote: error 15 is a *grub* error. grub is known for its dislike of xfs, so with this whole setup use ext3, rerun grub-install and you should be fine. I should mention that something *did* change. When attempting to use XFS, grub would give me a note about 18 partitions

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-04 Thread Justin Piszcz
On Mon, 4 Feb 2008, Michael Tokarev wrote: Eric Sandeen wrote: [] http://oss.sgi.com/projects/xfs/faq.html#nulls and note that recent fixes have been made in this area (also noted in the faq) Also - the above all assumes that when a drive says it's written/flushed data, that it truly has.

when is a disk non-fresh?

2008-02-04 Thread Dexter Filmore
Seems the other topic wasn't quite clear... Occasionally a disk is kicked for being non-fresh - what does this mean and what causes it? Dex

using update-initramfs: how to get new mdadm.conf into the /boot? Or is it XFS?

2008-02-04 Thread Moshe Yudkowsky
I've managed to get myself into a little problem. Since power hits were taking out the /boot partition, I decided to split /boot out of root. Working from my emergency partition, I copied all files from /root, re-partitioned what had been /root into room for /boot and /root, and then created
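
The initramfs half of the question has a standard answer on Debian-style systems; a brief sketch (assumes the initramfs-tools package):

```shell
# Rebuild the initramfs so the copy of mdadm.conf embedded in the
# initrd matches the one just edited in /etc.
update-initramfs -u -k all
# Confirm the new mdadm.conf actually made it into the image:
zcat /boot/initrd.img-$(uname -r) | cpio -t | grep mdadm
```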

Re: mdadm 2.6.4 : How i can check out current status of reshaping ?

2008-02-04 Thread Neil Brown
On Monday February 4, [EMAIL PROTECTED] wrote: [EMAIL PROTECTED]:/# cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1] 1465159488 blocks super 0.91 level 5, 64k
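
Reshape progress shows up as a `reshape = NN.N%` line in /proc/mdstat and can be pulled out mechanically. A small sketch -- the sample text below is made up in mdstat's format, not copied from the thread:

```shell
# Extract the reshape percentage from mdstat-style output.  In real use,
# replace the sample variable with:  cat /proc/mdstat
sample='      [===>.................]  reshape = 17.3% (84219904/488386496) finish=123.4min speed=5432K/sec'
pct=$(echo "$sample" | sed -n 's/.*reshape = \([0-9.]*\)%.*/\1/p')
echo "reshape ${pct}% complete"
```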

Re: using update-initramfs: how to get new mdadm.conf into the /boot? Or is it XFS?

2008-02-04 Thread Moshe Yudkowsky
I wrote: Now it's failed in a different section and complains that it can't find /sbin/init. I'm at the (initramfs) prompt, which I don't ever recall seeing before. I can't mount /dev/md/root on any mount points (invalid arguments even though I'm not supplying any). I've checked /dev/md/root

Re: when is a disk non-fresh?

2008-02-04 Thread Neil Brown
On Monday February 4, [EMAIL PROTECTED] wrote: Seems the other topic wasn't quite clear... not necessarily. sometimes it helps to repeat your question. there is a lot of noise on the internet and sometimes important things get missed... :-) Occasionally a disk is kicked for being non-fresh
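
To make the "non-fresh" diagnosis concrete: a member is kicked as non-fresh when its event count lags the rest of the array (it missed writes, typically after an unclean shutdown). A hedged sketch of checking and recovering (device names are placeholders):

```shell
# Compare event counts across members -- the non-fresh one is behind.
mdadm --examine /dev/sda1 | grep Events
mdadm --examine /dev/sdb1 | grep Events
# Re-add the stale member; md resyncs it back into the array.
mdadm /dev/md0 --re-add /dev/sdb1
```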