Re: Grub2 reinstall on raid1 system.

2011-01-23 Thread Tom H
On Sat, Jan 22, 2011 at 8:24 AM, Jack Schneider p...@dp-indexing.com wrote:
 On Sat, 22 Jan 2011 04:54:32 -0500
 Tom H tomh0...@gmail.com wrote:

 On Fri, Jan 21, 2011 at 8:51 PM, Jack Schneider
 p...@dp-indexing.com wrote:
 
   I think I found a significant glitch.  It appears that mdadm is
   confused.  I think it happened when I created the /dev/md2 array
  from the new disks.  It looks like the metadata 1.2 vs 0.90 configs
  are the culprit...
 
  Here's the output of:
 
  mdadm --detail --scan:
  ARRAY /dev/md0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
  ARRAY /dev/md1 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
 
  Here's the output of /etc/mdadm/mdadm.conf:
 
  DEVICE partitions
  CREATE owner=root group=disk mode=0660 auto=yes
  HOMEHOST <system>
  MAILADDR root
  ARRAY /dev/md/0 metadata=1.2
  UUID=f6de5584:d9dbce39:090f16ff:f795e54c name=hetzner:0
  ARRAY /dev/md/1 metadata=1.2
  UUID=0e065fee:15dea43e:f4ed7183:70d519bd name=hetzner:1
  ARRAY /dev/md/2 metadata=1.2
  UUID=ce4dd5a8:d8c2fdf4:4612713e:06047473 name=hetzner:2
 
  Given that 0.90 and 1.2 metadata cannot both be on md0 and md1 at the
  same time (although they live in different places on the disks,
  IIRC), something needs to change...  I am thinking of an mdadm.conf
  edit, but there may be an alternative tool or approach...  This was
  obtained using my Debian-live amd64 rescue disk.

 Check your partitions' metadata with mdadm --examine --scan
 --config=partitions. Those'll be the settings that you'll need in
 mdadm.conf.


 Thanks, Tom

  A couple of small ?s.  I can get the output of the command on the live
  file system and it appears to make sense.  I am running on a
  debian-live amd64 O/S. The output shows 4 arrays but I only have 3;
  there are two entries for the new empty disks, /dev/md/2 and
  /dev/md127.  They don't appear in /proc/mdstat, so they are not
  running.  Do I need to kill them permanently somehow?  I need, I
  think, to get the info into the mdadm.conf on the real
  /dev/sda1 & /dev/sdc1 partitions.  Mount them on the live system and
  edit mdadm.conf???  When I reboot, I'll need the right info... chroot??

You're welcome.

I don't follow what you've said above.

Running the command that I pointed to earlier will output the mdadm
metadata in the superblocks of the partitions listed in
/proc/partitions irrespective of the system that you're booted into.

You had a question about one of your arrays' metadata version and about
some of your arrays' minor numbers.

The mdadm.conf above has everything set as v1.2 when your original
arrays were v0.90.

Was this mdadm.conf created while booted into a sidux/aptosid CD/DVD?
Does --examine also output hetzner as an array name?
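
A minimal sketch of the chroot route Jack asks about, assuming the root
filesystem lives on /dev/md0 with /boot inside it, and that /usr and
/var sit in the Speeduke volume group on md1 (mount points and LV names
here are illustrative, not confirmed by the thread):

  mdadm --assemble --scan                 # assemble md0/md1 from the on-disk superblocks
  vgchange -ay                            # activate the LVM volume group on md1
  mount /dev/md0 /mnt                     # the real root filesystem
  mount /dev/Speeduke/var /mnt/var        # hypothetical LV path; check lvdisplay for the real names
  for d in dev proc sys; do mount --bind /$d /mnt/$d; done
  chroot /mnt /bin/bash
  # inside the chroot: capture the real superblock settings, hand-edit out
  # the stale hetzner ARRAY lines, then rebuild the initramfs so that the
  # copy of mdadm.conf used at boot matches the disks
  mdadm --examine --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u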





Re: Grub2 reinstall on raid1 system.

2011-01-22 Thread Tom H
On Fri, Jan 21, 2011 at 8:51 PM, Jack Schneider p...@dp-indexing.com wrote:

   I think I found a significant glitch.  It appears that mdadm is
   confused.  I think it happened when I created the /dev/md2 array from
   the new disks.  It looks like the metadata 1.2 vs 0.90 configs are the
   culprit...

 Here's the output of:

 mdadm --detail --scan:
 ARRAY /dev/md0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
 ARRAY /dev/md1 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1

 Here's the output of /etc/mdadm/mdadm.conf:

 DEVICE partitions
 CREATE owner=root group=disk mode=0660 auto=yes
 HOMEHOST <system>
 MAILADDR root
 ARRAY /dev/md/0 metadata=1.2 UUID=f6de5584:d9dbce39:090f16ff:f795e54c 
 name=hetzner:0
 ARRAY /dev/md/1 metadata=1.2 UUID=0e065fee:15dea43e:f4ed7183:70d519bd 
 name=hetzner:1
 ARRAY /dev/md/2 metadata=1.2 UUID=ce4dd5a8:d8c2fdf4:4612713e:06047473 
 name=hetzner:2

  Given that 0.90 and 1.2 metadata cannot both be on md0 and md1 at the
  same time (although they live in different places on the disks, IIRC),
  something needs to change...  I am thinking of an mdadm.conf edit, but
  there may be an alternative tool or approach...
  This was obtained using my Debian-live amd64 rescue disk.

Check your partitions' metadata with mdadm --examine --scan
--config=partitions. Those'll be the settings that you'll need in
mdadm.conf.
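
For reference, the output of that command looks roughly like the ARRAY
lines below (these reuse the UUIDs already posted in this thread purely
for illustration; the exact fields vary with the mdadm version and the
superblock format):

  # mdadm --examine --scan --config=partitions
  ARRAY /dev/md0 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
  ARRAY /dev/md1 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
  ARRAY /dev/md/2 metadata=1.2 UUID=ce4dd5a8:d8c2fdf4:4612713e:06047473 name=hetzner:2

Whatever it actually prints is what belongs in the ARRAY section of
/etc/mdadm/mdadm.conf.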





Re: Grub2 reinstall on raid1 system.

2011-01-22 Thread Jack Schneider
On Sat, 22 Jan 2011 04:54:32 -0500
Tom H tomh0...@gmail.com wrote:

 On Fri, Jan 21, 2011 at 8:51 PM, Jack Schneider
 p...@dp-indexing.com wrote:
 
    I think I found a significant glitch.  It appears that mdadm is
    confused.  I think it happened when I created the /dev/md2 array
   from the new disks.  It looks like the metadata 1.2 vs 0.90 configs
   are the culprit...
 
  Here's the output of:
 
  mdadm --detail --scan:
   ARRAY /dev/md0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
   ARRAY /dev/md1 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
 
  Here's the output of /etc/mdadm/mdadm.conf:
 
  DEVICE partitions
  CREATE owner=root group=disk mode=0660 auto=yes
   HOMEHOST <system>
  MAILADDR root
  ARRAY /dev/md/0 metadata=1.2
  UUID=f6de5584:d9dbce39:090f16ff:f795e54c name=hetzner:0
  ARRAY /dev/md/1 metadata=1.2
  UUID=0e065fee:15dea43e:f4ed7183:70d519bd name=hetzner:1
  ARRAY /dev/md/2 metadata=1.2
  UUID=ce4dd5a8:d8c2fdf4:4612713e:06047473 name=hetzner:2
 
   Given that 0.90 and 1.2 metadata cannot both be on md0 and md1 at
   the same time (although they live in different places on the disks,
   IIRC), something needs to change...  I am thinking of an mdadm.conf
   edit, but there may be an alternative tool or approach...  This was
   obtained using my Debian-live amd64 rescue disk.
 
 Check your partitions' metadata with mdadm --examine --scan
 --config=partitions. Those'll be the settings that you'll need in
 mdadm.conf.
 
 
Thanks, Tom

A couple of small ?s.  I can get the output of the command on the live
file system and it appears to make sense.  I am running on a
debian-live amd64 O/S. The output shows 4 arrays but I only have 3;
there are two entries for the new empty disks, /dev/md/2 and
/dev/md127.  They don't appear in /proc/mdstat, so they are not
running.  Do I need to kill them permanently somehow?  I need, I think,
to get the info into the mdadm.conf on the real /dev/sda1 & /dev/sdc1
partitions.  Mount them on the live system and edit mdadm.conf???  When
I reboot, I'll need the right info... chroot??
  
TIA Jack





Re: Grub2 reinstall on raid1 system.

2011-01-21 Thread Jack Schneider
On Tue, 18 Jan 2011 17:31:29 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  It booted to the correct grub menu then to Busy Box. I am thinking
  it goes to BB because it can't find /var and or /usr on the
  md1/sda5 LVM partition.
 
 Very likely.
 
 I checked /proc/mdstat and lo & behold there was md1:active
  with correct partitions and md0: active also correct partitions...
 
 That is good news to hear.  Because it should mean that all of your
 data is okay on those disks.  That is always a comfort to know.
 
  So here I sit with a root prompt from Busy Box I checked mdadm
  --examine for all known partitions and mdadm --detail /md0 & /md1
  and all seems normal and correct.  No Errors.
 
 Yeah!  :-)
 
  Both /etc/fstab and /etc/mtab show entries for /dev/md126..
  What the ... ?
 
 That does seem strange.  Could the tool you used previously have
 edited that file?
 
 You said you were using /dev/md1 as an lvm volume for /var, /home,
 swap and other.  As I read this it means you would only have /dev/md0
 for /boot in your /etc/fstab.  Right?  Something like this from my
 system:
 
   /dev/md0/boot  ext2defaults0   2
 
 Your /var, /home and swap would use the lvm, right?  So from my system
 I have the following:
 
   /dev/mapper/v1-var  /var   ext3defaults0   2
   /dev/mapper/v1-home /home  ext3defaults0   2
 
 Those don't mention /dev/md1 (which showed up for you as /dev/md126)
 at all.  They would only show up in the volume group display.
 
 If you are seeing /dev/md126 in /etc/fstab then it is conflicting
 information.  You will have to sort out the information conflict.  Do
 you really have LVM in there?
 
 Certainly if the /dev/md0 /boot fstab line is incorrect then you
 should correct it.  Edit the file and fix it.  If your filesystem is
 mounted read-only at that point you will need to remount it
 read-write.
 
   mount -n -o remount,rw /
 
 Bob



Hi, Bob  Back at it...8-(

 I think I found a significant glitch.  It appears that mdadm is
 confused.  I think it happened when I created the /dev/md2 array from
 the new disks.  It looks like the metadata 1.2 vs 0.90 configs are the
 culprit...
Here's the output of:
mdadm --detail --scan:
ARRAY /dev/md0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
ARRAY /dev/md1 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1

Here's the output of /etc/mdadm/mdadm.conf:
 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=f6de5584:d9dbce39:090f16ff:f795e54c 
name=hetzner:0
ARRAY /dev/md/1 metadata=1.2 UUID=0e065fee:15dea43e:f4ed7183:70d519bd 
name=hetzner:1
ARRAY /dev/md/2 metadata=1.2 UUID=ce4dd5a8:d8c2fdf4:4612713e:06047473 
name=hetzner:2

# This file was auto-generated on Mon, 10 Jan 2011 00:32:59 +
# by mkconf 3.1.4-1+8efb9d1

Given that 0.90 and 1.2 metadata cannot both be on md0 and md1 at the
same time (although they live in different places on the disks, IIRC),
something needs to change...  I am thinking of an mdadm.conf edit, but
there may be an alternative tool or approach...
This was obtained using my Debian-live amd64 rescue disk.

TIA
Jack





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-18 Thread Jack Schneider
On Mon, 17 Jan 2011 20:43:16 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
 mdadm --stop /dev/md125
 mdadm --assemble /dev/md0
   --update=super-minor /dev/sda1 /dev/sdc1
  
 mdadm --stop /dev/md126
 mdadm --assemble /dev/md1
   --update=super-minor /dev/sda5 /dev/sdc5
 
  Bob, a small glitch.  mdadm:/dev/sda1 exists but is not an md array.
  mdadm --stop was successful, before the above.
 
 If mdadm --stop was successful then it must have been an array before
 that point.  So that doesn't make sense.  Double check everything.
 
   mdadm --examine /dev/sda1
   mdadm --examine /dev/sdc1
   mdadm --detail /dev/md0
 
  It appears that a --create-like command is needed.  Looks like
  md125 is md0 overwritten somewhere...
 
 If you create an array it will destroy the data that is on the
 array.  Unless you want to discard your data you don't want to do
 that.  You want to assemble an array from the components.  That is
 an important distinction.
 
 You really want to be able to assemble the array.  Do so with one disk
 only if that is the only way (would need the mdadm forcing options to
 start an array without all of the components) and then add the other
 disk back in.  But if the array was up a moment before then it should
 still be okay.  So I am suspicious about the problem.  Poke around a
  little more with --examine and --detail first.  Something does not
  seem right.
 
  Additionally, maybe I'm in the wrong config.  Running from a
  sysrescuecd.  I do have a current Debian-AMD64-rescue-live cd.
  Which I made this AM.
 
 That would definitely improve things.  Because then you will have
 compatible versions of all of the tools.
 
 Is your system amd64?
 
Yes,  a Supermicro X7DAL-E M/B with dual XEON quad core 3.2 ghz
processors and 4 Seagate Barracuda drives. 8 gigs of Ram.



  I need to find out what's there...  
  further:
  Can I execute the mdadm commands from a su out of a busybox
  prompt? 
 
 If you are in a busybox prompt at boot time then you are already root
 and don't need an explicit 'su'.  You should be able to execute root
 commands.  The question is whether the mdadm command is available at
 that point.  The reason for busybox is that it is a self-contained set
 of small unix commands.  'mdadm' isn't one of those and so probably
 isn't available.  Normally you can edit files and the like.  Normally
 I would mount and chroot to the system.  But you don't yet have a
 system.  So that is problematic at that point.
 
 Bob

This AM when I booted (I power down with init 0 each PM to save power &
hassle from S/O), the machine did not come up with the grub-rescue
prompt.  It booted to the correct grub menu, then to BusyBox.  I am
thinking it goes to BusyBox because it can't find /var and/or /usr on
the md1/sda5 LVM partition.  I checked /proc/mdstat and lo & behold
there was md1: active with correct partitions and md0: active, also
with correct partitions...  I must have been seeing md125 et al. only
from the sysrescuecd 2.0.0.  So here I sit with a root prompt from
BusyBox...  I checked mdadm --examine for all known partitions and
mdadm --detail /md0 & /md1 and all seems normal and correct.  No
Errors.

I seem to need a way of rerunning grub-install or update-grub to fix
this setup.  What say you??  I am thinking of trying to run the
/etc/grub.d scripts.
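
For the record, the usual route for that from a live/rescue system is a
chroot followed by grub-install on both members, sketched below on the
assumption that root is on /dev/md0 and the mirror members are sda and
sdc (nothing here is confirmed by the thread itself):

  mount /dev/md0 /mnt
  for d in dev proc sys; do mount --bind /$d /mnt/$d; done
  chroot /mnt /bin/bash
  grub-install /dev/sda        # install to the MBR of the first member
  grub-install /dev/sdc        # ...and the second, so either disk can boot
  update-grub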

Jack









Re: Grub2 reinstall on raid1 system.

2011-01-18 Thread Jack Schneider
On Mon, 17 Jan 2011 20:43:16 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
 mdadm --stop /dev/md125
 mdadm --assemble /dev/md0
   --update=super-minor /dev/sda1 /dev/sdc1
  
 mdadm --stop /dev/md126
 mdadm --assemble /dev/md1
   --update=super-minor /dev/sda5 /dev/sdc5
 
  Bob, a small glitch.  mdadm:/dev/sda1 exists but is not an md array.
  mdadm --stop was successful, before the above.
 
 If mdadm --stop was successful then it must have been an array before
 that point.  So that doesn't make sense.  Double check everything.
 
   mdadm --examine /dev/sda1
   mdadm --examine /dev/sdc1
   mdadm --detail /dev/md0
 
  It appears that a --create-like command is needed.  Looks like
  md125 is md0 overwritten somewhere...
 
 If you create an array it will destroy the data that is on the
 array.  Unless you want to discard your data you don't want to do
 that.  You want to assemble an array from the components.  That is
 an important distinction.
 
 You really want to be able to assemble the array.  Do so with one disk
 only if that is the only way (would need the mdadm forcing options to
 start an array without all of the components) and then add the other
 disk back in.  But if the array was up a moment before then it should
 still be okay.  So I am suspicious about the problem.  Poke around a
  little more with --examine and --detail first.  Something does not
  seem right.
 
  Additionally, maybe I'm in the wrong config.  Running from a
  sysrescuecd.  I do have a current Debian-AMD64-rescue-live cd.
  Which I made this AM.
 
 That would definitely improve things.  Because then you will have
 compatible versions of all of the tools.
 
 Is your system amd64?
 
  I need to find out what's there...  
  further:
  Can I execute the mdadm commands from a su out of a busybox
  prompt? 
 
 If you are in a busybox prompt at boot time then you are already root
 and don't need an explicit 'su'.  You should be able to execute root
 commands.  The question is whether the mdadm command is available at
 that point.  The reason for busybox is that it is a self-contained set
 of small unix commands.  'mdadm' isn't one of those and so probably
 isn't available.  Normally you can edit files and the like.  Normally
 I would mount and chroot to the system.  But you don't yet have a
 system.  So that is problematic at that point.
 
 Bob


Bob, MORE!

Both /etc/fstab and /etc/mtab show entries for /dev/md126..

What the ... ?

Jack





Re: Grub2 reinstall on raid1 system.

2011-01-18 Thread Henrique de Moraes Holschuh
On Tue, 18 Jan 2011, Jack Schneider wrote:
 Both /etc/fstab and /etc/mtab show entries for /dev/md126..
 
 What the ... ?

After you modified the files in the real filesystem, did you update the
initramfs?

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Grub2 reinstall on raid1 system.

2011-01-18 Thread Jack Schneider
On Tue, 18 Jan 2011 16:19:11 -0200
Henrique de Moraes Holschuh h...@debian.org wrote:

 On Tue, 18 Jan 2011, Jack Schneider wrote:
  Both /etc/fstab and /etc/mtab show entries for /dev/md126..
  
  What the ... ?
 
  After you modified the files in the real filesystem, did you update
 the initramfs?
 
Hi, Henrique

I have not modified any files as yet. I never got to the mdadm
--assemble because of the mdadm: /dev/sda1 exists but is not an md
array ERROR!

The curious question is why md125 and md126 were created when I tried
to build the /dev/md2 array. The only difference is the versions of
mdadm used 3 years ago and ~10 days ago...

TIA

Jack 





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-18 Thread Bob Proulx
Jack Schneider wrote:
 It booted to the correct grub menu then to Busy Box. I am thinking it
 goes to BB because it can't find /var and or /usr on the md1/sda5 LVM
 partition.

Very likely.

 I checked /proc/mdstat and lo & behold there was md1:active
 with correct partitions and md0: active also correct partitions...

That is good news to hear.  Because it should mean that all of your
data is okay on those disks.  That is always a comfort to know.

 So here I sit with a root prompt from Busy Box I checked mdadm
 --examine for all known partitions and mdadm --detail /md0 & /md1
 and all seems normal and correct.  No Errors.

Yeah!  :-)

 Both /etc/fstab and /etc/mtab show entries for /dev/md126..
 What the ... ?

That does seem strange.  Could the tool you used previously have
edited that file?

You said you were using /dev/md1 as an lvm volume for /var, /home,
swap and other.  As I read this it means you would only have /dev/md0
for /boot in your /etc/fstab.  Right?  Something like this from my
system:

  /dev/md0/boot  ext2defaults0   2

Your /var, /home and swap would use the lvm, right?  So from my system
I have the following:

  /dev/mapper/v1-var  /var   ext3defaults0   2
  /dev/mapper/v1-home /home  ext3defaults0   2

Those don't mention /dev/md1 (which showed up for you as /dev/md126)
at all.  They would only show up in the volume group display.

If you are seeing /dev/md126 in /etc/fstab then it is conflicting
information.  You will have to sort out the information conflict.  Do
you really have LVM in there?

Certainly if the /dev/md0 /boot fstab line is incorrect then you
should correct it.  Edit the file and fix it.  If your filesystem is
mounted read-only at that point you will need to remount it
read-write.

  mount -n -o remount,rw /

Bob




Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Sun, 16 Jan 2011 18:42:49 -0700
Bob Proulx b...@proulx.com wrote:

 Jack,
 
 With your pastebin information and the mdstat information (that last
 information in your mail and pastebins was critical good stuff) and
 I found this old posting from you too:  :-)
 
   http://lists.debian.org/debian-user/2009/10/msg00808.html
 
 With all of that I deduce the following:
 
   /dev/md125 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
   /dev/md126 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
   /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
 
 Jack, If that is wrong please correct me.  But I think that is right.
 

That is Exactly correct.


 The mdstat data showed that the arrays are sync'd.  The UUIDs are as
 follows.
 
   ARRAY /dev/md/125_0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
   ARRAY /dev/md/126_0 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
   ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd
 
 The desired state:
 
   /dev/md0 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
   /dev/md1 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
 
 Will get to /dev/md2 later...
 
  My thinking is that I should rerun mdadm and reassemble the arrays
  to the original definitions...  /md0 from sda1 & sdc1,
  /md1 from sda5 & sdc5.  Note: sda2 & sdc2 are legacy msdos extended
  partitions.  I would not build an md device with msdos extended
  partitions under LVM2 at this time..   Agree?
 
 Agreed.  You want to rename the arrays.  Don't touch the msdos
 partitions.
 
  Is the above doable?  If I can figure the right mdadm commands...8-)
 
 Yes.  It is doable.  You can rename the array.  First stop the array.
 Then assemble it again with the new desired name.  Here is what you
 want to do.  Tom, Henrique, others, Please double check me on these.
 
   mdadm --stop /dev/md125
   mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
 
    mdadm --stop /dev/md126
   mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
 
 That should by itself be enough to get the arrays going.
 
 But, and this is an important but, did you previously add the new disk
 array to the LVM volume group on the above array?  If so then you are
 not done yet.  The LVM volume group won't be able to assemble without
 the new disk.  If you did then you need to fix up LVM next.


NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is not a
problem.. I was about to do that when the machine failed..
 
 I think you should try to get back to where you were before when your
 system was working.  Therefore I would remove the new disks from the
 LVM volume group.  But I don't know if you did or did not add it yet.
 So I must stop here and wait for further information from you.
 

 I don't know if your rescue disk has lvm automatically configured or
 not.  You may need to load the device mapper module dm_mod.  I don't
 know.  If you do then here is a hint:
 
   modprobe dm_mod
 
 To scan for volume groups:
 
   vgscan
 
Found volume group Speeduke using metadata type lvm2


 To activate a volume group:
 
   vgchange -ay

5 logical volume(s) in volume group Speeduke now active

 
 To display the physical volumes associated with a volume group:
 
   pvdisplay


PV Name /dev/md126
VG Name Speeduke

Other data omitted

PV UUID kUoBgV-R9n6-exZ1-fdIk-aqlb-7Ue1-R3B1PD 




 If the new disks haven't been added to the volume group (I am hoping
 not) then you should be home free.  But if they are then I think you
 will need to remove them first.
 
 I don't know if the LVM actions above are going to be needed.  I am
 just trying to proactively give some possible hints.
 
 Bob



 Bob, You cannot know how much I appreciate the time and effort you
 and others have given to this, hopefully a few more steps and all will
 be well..
 I have not done the things you have suggested above. I'll wait for your
 response and then go!!!

 One other thing I am bothered by: md0 and md1 were built using mdadm
 v0.90, md2 was built with the current mdadm v3.1.4, which changed
 the md names.  Does this matter?

Jack





Re: Grub2 reinstall on raid1 system. 2nd Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 07:19:03 -0600
Jack Schneider p...@dp-indexing.com wrote:

 On Sun, 16 Jan 2011 18:42:49 -0700
 Bob Proulx b...@proulx.com wrote:
 
  Jack,
  
  With your pastebin information and the mdstat information (that last
  information in your mail and pastebins was critical good stuff) and
  I found this old posting from you too:  :-)
  
http://lists.debian.org/debian-user/2009/10/msg00808.html
  
  With all of that I deduce the following:
  
/dev/md125 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
/dev/md126 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var,
  swap, ... /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
  
  Jack, If that is wrong please correct me.  But I think that is
  right.
  
 
 That is Exactly correct.
 
 
  The mdstat data showed that the arrays are sync'd.  The UUIDs are as
  follows.
  
  ARRAY /dev/md/125_0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
  ARRAY /dev/md/126_0 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
  ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd
  
  The desired state:
  
/dev/md0 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
/dev/md1 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
  
  Will get to /dev/md2 later...
  
   My thinking is that I should rerun mdadm and reassemble the arrays
   to the original definitions...  /md0 from sda1 & sdc1,
   /md1 from sda5 & sdc5.  Note: sda2 & sdc2 are legacy msdos extended
   partitions.  I would not build an md device with msdos extended
   partitions under LVM2 at this time..   Agree?
  
  Agreed.  You want to rename the arrays.  Don't touch the msdos
  partitions.
  
   Is the above doable?  If I can figure the right mdadm
   commands...8-)
  
  Yes.  It is doable.  You can rename the array.  First stop the
  array. Then assemble it again with the new desired name.  Here is
  what you want to do.  Tom, Henrique, others, Please double check me
  on these.
  
mdadm --stop /dev/md125
mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
  
 mdadm --stop /dev/md126
mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
  
  That should by itself be enough to get the arrays going.
  
  But, and this is an important but, did you previously add the new
  disk array to the LVM volume group on the above array?  If so then
  you are not done yet.  The LVM volume group won't be able to
  assemble without the new disk.  If you did then you need to fix up
  LVM next.
 
 
 NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is not
 a problem.. I was about to do that when the machine failed..
  
  I think you should try to get back to where you were before when
  your system was working.  Therefore I would remove the new disks
  from the LVM volume group.  But I don't know if you did or did not
  add it yet. So I must stop here and wait for further information
  from you.
  
 
  I don't know if your rescue disk has lvm automatically configured or
  not.  You may need to load the device mapper module dm_mod.  I don't
  know.  If you do then here is a hint:
  
modprobe dm_mod
  
  To scan for volume groups:
  
vgscan
  
 Found volume group Speeduke using metadata type lvm2
 
 
  To activate a volume group:
  
vgchange -ay
 
 5 logical volume(s) in volume group Speeduke now active
 
  
  To display the physical volumes associated with a volume group:
  
pvdisplay
 
 
 PV Name /dev/md126
 VG Name Speeduke
 
 Other data omitted
 
 PV UUID kUoBgV-R9n6-exZ1-fdIk-aqlb-7Ue1-R3B1PD 
 
 
 
 
  If the new disks haven't been added to the volume group (I am hoping
  not) then you should be home free.  But if they are then I think you
  will need to remove them first.
  
  I don't know if the LVM actions above are going to be needed.  I am
  just trying to proactively give some possible hints.
  
  Bob
 
 
 
  Bob, You cannot know how much I appreciate the time and effort you
  and others have given to this, hopefully a few more steps and all
 will be well..
  I have not done the things you have suggested above. I'll wait for
 your response and then go!!!
 
  One other thing I am bothered by: md0 and md1 were built using md
 metadata v0.90, md2 was built with the current mdadm metadata v1.2,
 which changed the md names.  Does this matter?
 

 Jack
 
 
 

 





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
  But, and this is an important but, did you previously add the new disk
  array to the LVM volume group on the above array?  If so then you are
  not done yet.  The LVM volume group won't be able to assemble without
  the new disk.  If you did then you need to fix up LVM next.
 
 NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is not a
 problem.. I was about to do that when the machine failed..

Oh good.  Then you are good to go.  Run these commands to stop the
arrays and to reassemble them with the new names.

  mdadm --stop /dev/md125
  mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1

  mdadm --stop /dev/md126
  mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

Then try rebooting to the system.  I think at that point that all
should be okay and that it should boot up into the previous system.
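
One hedged aside before that reboot: once md0 and md1 are assembled
under their proper names, it is probably also worth making the on-disk
configuration agree with them, roughly along these lines (Bob covers
the initrd part later in the thread; the exact steps here are mine):

  mdadm --detail --scan        # prints ARRAY lines for the running arrays
  # merge those lines into /etc/mdadm/mdadm.conf on the real root
  # filesystem (via chroot from the rescue system), then:
  update-initramfs -u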

  Bob, You cannot know how much I appreciate the time and effort you
  and others have given to this, hopefully a few more steps and all will
  be well..

I have my fingers crossed for you that it will all be okay.

  I have not done the things you have suggested above. I'll wait for your
  response and then go!!!

Please go ahead and do the above commands to rename the arrays and to
reboot to the previous system.  I believe that should work.  Hope so.
These things can be finicky though.

  One other thing I am bothered by, md0, md1 were built using mdadm
  v0.90, md2 was built with the current mdadm v 3.1.4. which changed
  the md names.  Does this matter

Yes.  I am a little worried about that problem too.  But we were at a
good stopping point and I didn't want to get ahead of things.  But
let's assume that the above renaming of the raid arrays works and you
can boot to your system again.  Then what should be done about the new
disks?  Let me talk about the new disks.  But hold off working this
part of the problem until you have the first part done.  Just do one
thing at a time.

  /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
  ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 
UUID=91ae6046:969bad93:92136016:116577fd

This was created using newer metadata.  I think that is going to be a
problem for Lenny/Squeeze.  It says 1.2 but Lenny/Squeeze is 0.90.  (A
major difference is where the metadata is located.  1.0 is in a
similar location to 0.90 but 1.1 and 1.2 use locations near the start
of the device.)  Plus you assigned the entire drive (/dev/sdb) instead
of using a partition for it (/dev/sdb1).  I personally don't prefer
that and always set up using a partition instead of the whole disk.

I am not sure of the best course of action for the new disks.  I suggest
stopping the new array, partitioning the drives and using a partition
instead of the raw disk, then recreating it using the newly created
partitions.  Do that under your (hopefully now booting) Squeeze system
and then you are assured of compatibility.  It is perhaps possible
that because of the new metadata that the metadata=1.2 array won't be
recognized under Squeeze.  I don't know.  I haven't been in that
situation yet.  I think that would be good though because it would
mean that they would just look like raw disks again without needing to
stop the array, if it never got started.  Then you could partition and
so forth.  The future is hard to see here.

So that is my advice.  If the new array is running then I would stop
it.  (mdadm --stop /dev/md127) Then partition it, partition /dev/sdb
into /dev/sdb1 and /dev/sdd into /dev/sdd1.  Then create the array
using the new sdb1 and sdd1 partitions.  Then decide how to make use
of it.
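
Sketched as commands, under the assumption that you want the new array
to end up as /dev/md2 with the old 0.90 metadata to match the existing
arrays (that metadata choice and the device names are mine, not Bob's
exact recipe):

  mdadm --stop /dev/md127
  mdadm --zero-superblock /dev/sdb /dev/sdd     # wipe the old whole-disk superblocks
  # partition each disk with fdisk/parted so you have /dev/sdb1 and /dev/sdd1
  mdadm --create /dev/md2 --metadata=0.90 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdd1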

Note that if you add new disk to the lvm root volume group then you
also need to rebuild the initrd or your system won't be able to
assemble the array at boot time and will fail to boot.  (Saying that
mostly for people who find this in the archive later.)

Bob





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 12:48:29 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote: will be well..
 
 I have my fingers crossed for you that it will all be okay.
 
   I have not done the things you have suggested above. I'll wait for
  your response and then go!!!
 
 Please go ahead and do the above commands to rename the arrays and to
 reboot to the previous system.  I believe that should work.  Hope so.
 These things can be finicky though.
 
   One other thing I am bothered by, md0, md1 were built using mdadm
   v0.90, md2 was built with the current mdadm v 3.1.4. which changed
   the md names.  Does this matter
 
 Yes.  I am a little worried about that problem too.  But we were at a
 good stopping point and I didn't want to get ahead of things.  But
 let's assume that the above renaming of the raid arrays works and you
 can boot to your system again.  Then what should be done about the new
 disks?  Let me talk about the new disks.  But hold off working this
 part of the problem until you have the first part done.  Just do one
 thing at a time.
 
   /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
   ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2
 UUID=91ae6046:969bad93:92136016:116577fd
 
 This was created using newer metadata.  I think that is going to be a
 problem for Lenny/Squeeze.  It says 1.2 but Lenny/Squeeze is 0.90.  (A
 major difference is where the metadata is located.  1.0 is in a
 similar location to 0.90 but 1.1 and 1.2 use locations near the start
 of the device.)  Plus you assigned the entire drive (/dev/sdb) instead
 of using a partition for it (/dev/sdb1).  I personally don't prefer
 that and always set up using a partition instead of the whole disk.
 
 I am not sure of the best course of action for the new disks.  I suggest
 stopping the new array, partitioning the drives and using a partition
 instead of the raw disk, then recreating it using the newly created
 partitions.  Do that under your (hopefully now booting) Squeeze system
 and then you are assured of compatibility.  It is perhaps possible
 that because of the new metadata that the metadata=1.2 array won't be
 recognized under Squeeze.  I don't know.  I haven't been in that
 situation yet.  I think that would be good though because it would
 mean that they would just look like raw disks again without needing to
 stop the array, if it never got started.  Then you could partition and
 so forth.  The future is hard to see here.
 
 So that is my advice.  If the new array is running then I would stop
 it.  (mdadm --stop /dev/md127) Then partition it, partition /dev/sdb
 into /dev/sdb1 and /dev/sdd into /dev/sdd1.  Then create the array
 using the new sdb1 and sdd1 partitions.  Then decide how to make use
 of it.
 
 Note that if you add new disk to the lvm root volume group then you
 also need to rebuild the initrd or your system won't be able to
 assemble the array at boot time and will fail to boot.  (Saying that
 mostly for people who find this in the archive later.)
 
 Bob
 

Thanks, Bob 

What is the command to rebuild initrd? From what directory?
Just mostly for people who find this in the archive later.   8-)

Jack






Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
  Note that if you add new disk to the lvm root volume group then you
  also need to rebuild the initrd or your system won't be able to
  assemble the array at boot time and will fail to boot.  (Saying that
  mostly for people who find this in the archive later.)

 What is the command to rebuild initrd? From what directory?
 Just mostly for people who find this in the archive later.   8-)

You can do this most easily by reconfiguring the kernel package.

  dpkg-reconfigure linux-image-2.6.32-5-i686

Adjust that for your currently installed kernel.  That will rebuild
the initrd as part of the postinst script process.

Doing so will take the updated /etc/mdadm/mdadm.conf information and
update the stored copy in the initrd.  (In the mdadm.conf file stored
in the /boot/initrd.img-2.6.32-5-amd64 initial ram disk filesystem.)
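
A slightly more direct alternative that should do the same job
(standard Debian tooling, though not the route Bob names here):

  update-initramfs -u -k all    # rebuild the initrd for every installed kernel

Either way, the rebuilt initrd carries the current /etc/mdadm/mdadm.conf.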

Bob




Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 12:48:29 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
   But, and this is an important but, did you previously add the new
   disk array to the LVM volume group on the above array?  If so
   then you are not done yet.  The LVM volume group won't be able to
   assemble without the new disk.  If you did then you need to fix
   up LVM next.
  
  NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is
  not a problem.. I was about to do that when the machine failed..
 
 Oh good.  Then you are good to go.  Run these commands to stop the
 arrays and to reassemble them with the new names.
 
   mdadm --stop /dev/md125
   mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
 
   mdadm --stop /dev/md126
   mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
 
 Then try rebooting to the system.  I think at that point that all
 should be okay and that it should boot up into the previous system.
 
   Bob, You cannot know how much I appreciate the time and effort you
   and others have given to this, hopefully a few more steps and all
  will be well..
 
 I have my fingers crossed for you that it will all be okay.
 
   I have not done the things you have suggested above. I'll wait for
  your response and then go!!!
 
 Please go ahead and do the above commands to rename the arrays and to
 reboot to the previous system.  I believe that should work.  Hope so.
 These things can be finicky though.
 
   One other thing I am bothered by, md0, md1 were built using mdadm
   v0.90, md2 was built with the current mdadm v 3.1.4. which changed
   the md names.  Does this matter
 
 Yes.  I am a little worried about that problem too.  But we were at a
 good stopping point and I didn't want to get ahead of things.  But
 let's assume that the above renaming of the raid arrays works and you
 can boot to your system again.  Then what should be done about the new
 disks?  Let me talk about the new disks.  But hold off working this
 part of the problem until you have the first part done.  Just do one
 thing at a time.
 
   /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
   ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2
 UUID=91ae6046:969bad93:92136016:116577fd
 
 This was created using newer metadata.  I think that is going to be a
  problem for Lenny/Squeeze.  It says 1.2 but Lenny/Squeeze is 0.90.  (A
 major difference is where the metadata is located.  1.0 is in a
 similar location to 0.90 but 1.1 and 1.2 use locations near the start
 of the device.)  Plus you assigned the entire drive (/dev/sdb) instead
 of using a partition for it (/dev/sdb1).  I personally don't prefer
 that and always set up using a partition instead of the whole disk.
 
  I am not sure of the best course of action for the new disks.  I
  suggest stopping the new array, partitioning the drives and using a
  partition instead of the raw disk, then recreating it using the newly
 partitions.  Do that under your (hopefully now booting) Squeeze system
 and then you are assured of compatibility.  It is perhaps possible
 that because of the new metadata that the metadata=1.2 array won't be
 recognized under Squeeze.  I don't know.  I haven't been in that
 situation yet.  I think that would be good though because it would
 mean that they would just look like raw disks again without needing to
 stop the array, if it never got started.  Then you could partition and
 so forth.  The future is hard to see here.
 
 So that is my advice.  If the new array is running then I would stop
 it.  (mdadm --stop /dev/md127) Then partition it, partition /dev/sdb
 into /dev/sdb1 and /dev/sdd into /dev/sdd1.  Then create the array
 using the new sdb1 and sdd1 partitions.  Then decide how to make use
 of it.
 
 Note that if you add new disk to the lvm root volume group then you
 also need to rebuild the initrd or your system won't be able to
 assemble the array at boot time and will fail to boot.  (Saying that
 mostly for people who find this in the archive later.)
 
 Bob
 

Bob, a small glitch.  mdadm:/dev/sda1 exists but is not an md array.
mdadm --stop was successful, before the above. 

It appears that a --create-like command is needed.  Looks like
md125 is md0 overwritten somewhere...  

One of the problems of my "no problem found" mentality..

Jack  





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 15:50:12 -0600
Jack Schneider p...@dp-indexing.com wrote:

 On Mon, 17 Jan 2011 12:48:29 -0700
 Bob Proulx b...@proulx.com wrote:
 
  Jack Schneider wrote:
   Bob Proulx wrote:
But, and this is an important but, did you previously add the
new disk array to the LVM volume group on the above array?  If
so then you are not done yet.  The LVM volume group won't be
able to assemble without the new disk.  If you did then you
need to fix up LVM next.
   
   NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is
   not a problem.. I was about to do that when the machine failed..
  
  Oh good.  Then you are good to go.  Run these commands to stop the
  arrays and to reassemble them with the new names.
  
mdadm --stop /dev/md125
mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
  
    mdadm --stop /dev/md126
mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
  
  Then try rebooting to the system.  I think at that point that all
  should be okay and that it should boot up into the previous system.
  
Bob, You cannot know how much I appreciate the time and effort
   you and others have given to this, hopefully a few more steps and
   all will be well..
  
  I have my fingers crossed for you that it will all be okay.
  
I have not done the things you have suggested above. I'll wait
   for your response and then go!!!
  
  Please go ahead and do the above commands to rename the arrays and
  to reboot to the previous system.  I believe that should work.
  Hope so. These things can be finicky though.
  
One other thing I am bothered by, md0, md1 were built using mdadm
v0.90, md2 was built with the current mdadm v 3.1.4. which
   changed the md names.  Does this matter
  
  Yes.  I am a little worried about that problem too.  But we were at
  a good stopping point and I didn't want to get ahead of things.  But
  let's assume that the above renaming of the raid arrays works and
  you can boot to your system again.  Then what should be done about
  the new disks?  Let me talk about the new disks.  But hold off
  working this part of the problem until you have the first part
  done.  Just do one thing at a time.
  
/dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2
  UUID=91ae6046:969bad93:92136016:116577fd
  
  This was created using newer metadata.  I think that is going to be
  a problem for Lenny/Squeeze.  It says 1.2 but Lenny/Squeeze is
  0.90.  (A major difference is where the metadata is located.  1.0
  is in a similar location to 0.90 but 1.1 and 1.2 use locations near
  the start of the device.)  Plus you assigned the entire drive
  (/dev/sdb) instead of using a partition for it (/dev/sdb1).  I
  personally don't prefer that and always set up using a partition
  instead of the whole disk.
  
  I am not sure the best course of action for the new disks.  I
  suggest stopping the new array, partitioning the drives and using a
  partition instead of the raw disk, then recreating it using the newly
  created partitions.  Do that under your (hopefully now booting)
  Squeeze system and then you are assured of compatibility.  It is
  perhaps possible that because of the new metadata that the
  metadata=1.2 array won't be recognized under Squeeze.  I don't
  know.  I haven't been in that situation yet.  I think that would be
  good though because it would mean that they would just look like
  raw disks again without needing to stop the array, if it never got
  started.  Then you could partition and so forth.  The future is
  hard to see here.
  
  So that is my advice.  If the new array is running then I would stop
  it.  (mdadm --stop /dev/md127) Then partition it, partition /dev/sdb
  into /dev/sdb1 and /dev/sdd into /dev/sdd1.  Then create the array
  using the new sdb1 and sdd1 partitions.  Then decide how to make use
  of it.
  
  Note that if you add new disk to the lvm root volume group then you
  also need to rebuild the initrd or your system won't be able to
  assemble the array at boot time and will fail to boot.  (Saying that
  mostly for people who find this in the archive later.)
  
  Bob
  
 
 Bob, a small glitch.  mdadm:/dev/sda1 exists but is not an md array.
 mdadm --stop was successful, before the above. 
 
 It appears that a --create-like command is needed.  Looks like
 md125 is md0 overwritten somewhere...  
 
 One of the problems of my no problem found mentality..
 
 Jack  
 
 
Additionally, maybe I'm in the wrong config.  I am running from a
sysrescuecd.  I do have a current Debian-AMD64-rescue-live CD, which
I made this AM.  I need to find out what's there...
Further:
Can I execute the mdadm commands from a su out of a busybox prompt?

Jack



Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
mdadm --stop /dev/md125
mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
 
    mdadm --stop /dev/md126
mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

 Bob, a small glitch.  mdadm:/dev/sda1 exists but is not an md array.
 mdadm --stop was successful, before the above.

If mdadm --stop was successful then it must have been an array before
that point.  So that doesn't make sense.  Double check everything.

  mdadm --examine /dev/sda1
  mdadm --examine /dev/sdc1
  mdadm --detail /dev/md0

 It appears that a --create-like command is needed.  Looks like
 md125 is md0 overwritten somewhere...

If you create an array it will destroy the data that is on the
array.  Unless you want to discard your data you don't want to do
that.  You want to assemble an array from the components.  That is
an important distinction.

You really want to be able to assemble the array.  Do so with one disk
only if that is the only way (would need the mdadm forcing options to
start an array without all of the components) and then add the other
disk back in.  But if the array was up a moment before then it should
still be okay.  So I am suspicious about the problem.  Poke around a
little more with --examine and --detail first.  Something does not
seem right.
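
For completeness, the forcing variant alluded to above would look
roughly like this; an illustration only, and only worth reaching for if
a normal two-member assemble genuinely fails:

  mdadm --assemble --run --force /dev/md0 /dev/sda1   # start the array degraded from one member
  mdadm /dev/md0 --add /dev/sdc1                      # re-add the second member and let it resync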

 Additionally, maybe I'm in the wrong config.  Running from a
 sysrescuecd.  I do have a current Debian-AMD64-rescue-live cd.   Which
 I made this AM.

That would definitely improve things.  Because then you will have
compatible versions of all of the tools.

Is your system amd64?

 I need to find out what's there...  
 further:
 Can I execute the mdadm commands from a su out of a busybox prompt? 

If you are in a busybox prompt at boot time then you are already root
and don't need an explicit 'su'.  You should be able to execute root
commands.  The question is whether the mdadm command is available at
that point.  The reason for busybox is that it is a self-contained set
of small unix commands.  'mdadm' isn't one of those and so probably
isn't available.  Normally you can edit files and the like.  Normally
I would mount and chroot to the system.  But you don't yet have a
system.  So that is problematic at that point.

Bob




Re: Grub2 reinstall on raid1 system.

2011-01-16 Thread Jack Schneider
On Sat, 15 Jan 2011 16:57:46 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
   Jack Schneider wrote:
I have a raid1-based W/S running Debian Squeeze, up to date (was,
until ~7 days ago). There are 4 drives, 2 of which had never been
used or formatted. I configured a new array using Disk Utility
from a live Ubuntu CD. That's where I screwed up... The end
result was that the names of the arrays were changed on the working
2 drives, i.e. /dev/md0 became /dev/md126 and /dev/md1 became md127.
   
   Something else must have happened too.  Because normally just
   adding arrays will not rename the existing arrays.  I am not
   familiar with the Disk Utility that you mention.
 
  Hi, Bob 
  Thanks for your encouraging advice...
 
 I believe you should be able to completely recover from the current
 problems.  But it may be tedious and not completely trivial.  You will
 just have to work through it.
 
 Now that there is more information available, and knowing that you are
 using software raid and lvm, let me guess.  You added another physical
 extent (a new /dev/md2 partition) to the root volume group?  If so
 that is a common problem.  I have hit it myself on a number of
 occasions.  You need to update the mdadm.conf file and rebuild the
 initrd.  I will say more details about it as I go here in this
 message.
 
  As I mentioned in a prior post,Grub was leaving me at a Grub
  rescueprompt.  
  
  I followed this procedure:
  http://www.gnu.org/software/grub/manual/html_node/GRUB-only-offers-a-rescue-shell.html#GRUB-only-offers-a-rescue-shell
 
 That seems reasonable.  It talks about how to drive the grub boot
 prompt to manually set up the boot.
 
 But you were talking about using a disk utility from a live cd to
 configure a new array with two new drives and that is where I was
 thinking that you had been modifying the arrays.  It sounded like it
 anyway.
 
 Gosh it would be a lot easier if we could just pop in for a quick peek
 at the system in person.  But we will just have to make do with the
 correspondence course.  :-)
 
  Booting now leaves me at a busy box: However the Grub menu is
  correct. With the correct kernels. So it appears that grub is now
  finding the root/boot partitions and files. 
 
 That sounds good.  Hopefully not too bad off then.
 
   Next time instead you might just use mdadm directly.  It really is
   quite easy to create new arrays using it.  Here is an example that
   will create a new device /dev/md9 mirrored from two other devices
   /dev/sdy5 and /dev/sdz5.
   
 mdadm --create /dev/md9 --level=mirror
   --raid-devices=2 /dev/sdy5 /dev/sdz5
   
Strangely the md2 array which I setup on the added drives
remains as /dev/md2. My root partition is/was on /dev/md0. The
result is that Grub2 fails to boot the / array.
 
  This is how I created /dev/md2.
 
 Then that explains why it didn't change.  Probably the HOMEHOST
 parameter is involved on the ones that changed.  Using mdadm from the
 command line doesn't set that parameter.
 
 There was just a long discussion about this topic just recently.
 You might want to jump into it in the middle here and read our
 learnings with HOMEHOST.
 
   http://lists.debian.org/debian-user/2010/12/msg01105.html
 
 mdadm --examine /dev/sda1 & /dev/sda2 gives, I think, a clean result
  I have posted the output at : http://pastebin.com/pHpKjgK3
 
 That looks good to me.  And healthy and normal.  Looks good to me for
 that part.
 
 But that is only the first partition.  That is just /dev/md0.  Do you
 have any information on the other partitions?
 
 You can look at /proc/partitions to get a list of all of the
 partitions that the kernel knows about.
 
   cat /proc/partitions
 
 Then you can poke at the other ones too.  But it looks like the
 filesystems are there okay.
 
  mdadm --detail /dev/md0 -- gives  mdadm: md device /dev/md0 does
  not appear to be active. 
  
  There is no /proc/mdstat  data output.  
 
 So it looks like the raid data is there on the disks but that the
 multidevice (md) module is not starting up in the kernel.  Because it
 isn't starting then there aren't any /dev/md* devices and no status
 output in /proc/mdstat.
 
   I would boot a rescue image and then inspect the current
   configuration using the above commands.  Hopefully that will show
   something wrong that can be fixed after you know what it is.
 
 I still think this is the best course of action for you.  Boot a
 rescue disk into the system and then go from there.  Do you have a
 Debian install disk #1 or Debian netinst or other installation disk?
 Any of those will have a rescue system that should boot your system
 okay.  The Debian rescue disk will automatically search for raid
 partitions and automatically start the md modules.
 
  So it appears that I must rebuild my arrays.
 
 I think your arrays might be fine.  More information is needed.
 
 You said your boot partition was /dev/md0.  I assume that your 

Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-16 Thread Jack Schneider
On Sat, 15 Jan 2011 16:57:46 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
   Jack Schneider wrote:
I have a raid1 based W/S running Debian Squeeze uptodate. (was
until ~7 days ago) There are 4 drives, 2 of which had never been
used or formatted. I configured a new array using Disk Utility
from a live Ubuntu CD. That's where I screwed up... The end
result was the names of the arrays were changed on the working
2 drives. IE: /dev/md0 to /dev/126 and /dev/md1 became md127.
   
   Something else must have happened too.  Because normally just
   adding arrays will not rename the existing arrays.  I am not
   familiar with the Disk Utility that you mention.
 
  Hi, Bob 
  Thanks for your encouraging advice...
 
 I believe you should be able to completely recover from the current
 problems.  But it may be tedious and not completely trivial.  You will
 just have to work through it.
 
 Now that there is more information available, and knowing that you are
 using software raid and lvm, let me guess.  You added another physical
 extent (a new /dev/md2 partition) to the root volume group?  If so
 that is a common problem.  I have hit it myself on a number of
 occasions.  You need to update the mdadm.conf file and rebuild the
 initrd.  I will say more details about it as I go here in this
 message.
 
  As I mentioned in a prior post,Grub was leaving me at a Grub
  rescueprompt.  
  
  I followed this procedure:
  http://www.gnu.org/software/grub/manual/html_node/GRUB-only-offers-a-rescue-shell.html#GRUB-only-offers-a-rescue-shell
 
 That seems reasonable.  It talks about how to drive the grub boot
 prompt to manually set up the boot.
 
 But you were talking about using a disk utility from a live cd to
 configure a new array with two new drives and that is where I was
 thinking that you had been modifying the arrays.  It sounded like it
 anyway.
 
 Gosh it would be a lot easier if we could just pop in for a quick peek
 at the system in person.  But we will just have to make do with the
 correspondence course.  :-)
 
  Booting now leaves me at a busy box: However the Grub menu is
  correct. With the correct kernels. So it appears that grub is now
  finding the root/boot partitions and files. 
 
 That sounds good.  Hopefully not too bad off then.
 
   Next time instead you might just use mdadm directly.  It really is
   quite easy to create new arrays using it.  Here is an example that
   will create a new device /dev/md9 mirrored from two other devices
   /dev/sdy5 and /dev/sdz5.
   
 mdadm --create /dev/md9 --level=mirror
   --raid-devices=2 /dev/sdy5 /dev/sdz5
   
Strangely the md2 array which I setup on the added drives
remains as /dev/md2. My root partition is/was on /dev/md0. The
result is that Grub2 fails to boot the / array.
 
  This is how I created /dev/md2.
 
 Then that explains why it didn't change.  Probably the HOMEHOST
 parameter is involved on the ones that changed.  Using mdadm from the
 command line doesn't set that parameter.
 
 There was just a long discussion about this topic just recently.
 You might want to jump into it in the middle here and read our
 learnings with HOMEHOST.
 
   http://lists.debian.org/debian-user/2010/12/msg01105.html
 
  mdadm --examine /dev/sda1  /dev/sda2  gives I think a clean result 
  I have posted the output at : http://pastebin.com/pHpKjgK3
 
 That looks good to me.  And healthy and normal.  Looks good to me for
 that part.
 
 But that is only the first partition.  That is just /dev/md0.  Do you
 have any information on the other partitions?
 
 You can look at /proc/partitions to get a list of all of the
 partitions that the kernel knows about.
 
   cat /proc/partitions
 
 Then you can poke at the other ones too.  But it looks like the
 filesystems are there okay.
 
  mdadm --detail /dev/md0 -- gives  mdadm: md device /dev/md0 does
  not appear to be active. 
  
  There is no /proc/mdstat  data output.  
 
 So it looks like the raid data is there on the disks but that the
 multidevice (md) module is not starting up in the kernel.  Because it
 isn't starting then there aren't any /dev/md* devices and no status
 output in /proc/mdstat.
 
   I would boot a rescue image and then inspect the current
   configuration using the above commands.  Hopefully that will show
   something wrong that can be fixed after you know what it is.
 
 I still think this is the best course of action for you.  Boot a
 rescue disk into the system and then go from there.  Do you have a
 Debian install disk #1 or Debian netinst or other installation disk?
 Any of those will have a rescue system that should boot your system
 okay.  The Debian rescue disk will automatically search for raid
 partitions and automatically start the md modules.
 
  So it appears that I must rebuild my arrays.
 
 I think your arrays might be fine.  More information is needed.
 
 You said your boot partition was /dev/md0.  I assume that your 

Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-16 Thread Bob Proulx
Jack,

With your pastebin information, the mdstat information (that last
information in your mail and pastebins was the critical good stuff),
and this old posting that I found from you too:  :-)

  http://lists.debian.org/debian-user/2009/10/msg00808.html

With all of that I deduce the following:

  /dev/md125 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
  /dev/md126 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
  /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted

Jack, If that is wrong please correct me.  But I think that is right.

The mdstat data showed that the arrays are sync'd.  The UUIDs are as
follows.

  ARRAY /dev/md/125_0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
  ARRAY /dev/md/126_0 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
  ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd

The desired state:

  /dev/md0 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
  /dev/md1 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...

Will get to /dev/md2 later...

 My thinking is that I should rerun mdadm and reassemble the arrays to
 the original definitions...  /dev/md0 from sda1 and sdc1,
 /dev/md1 from sda5 and sdc5.  Note: sda2 and sdc2
 are legacy msdos extended partitions.
 I would not build an md device with msdos extended partitions under LVM2
 at this time..   Agree?

Agreed.  You want to rename the arrays.  Don't touch the msdos
partitions.

 Is the above doable?  If I can figure the right mdadm commands...8-)

Yes.  It is doable.  You can rename the array.  First stop the array.
Then assemble it again with the new desired name.  Here is what you
want to do.  Tom, Henrique, others, Please double check me on these.

  mdadm --stop /dev/md125
  mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1

  mdadm --stop /dev/md126
  mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

That should by itself be enough to get the arrays going.
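
Once md0 and md1 are running under the right names, the on-disk config and
the initrd still have to agree with them.  A rough sketch of that follow-up,
assuming you run it from (or chrooted into) the real Debian system rather
than the rescue environment:

  cat /proc/mdstat          # confirm md0 and md1 are up and sync'd
  mdadm --detail --scan     # ARRAY lines to carry into /etc/mdadm/mdadm.conf
  update-initramfs -u       # rebuild the initrd so it matches the new names

The dpkg-reconfigure of the kernel image suggested elsewhere in this thread
does the same initrd rebuild.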

But, and this is an important but, did you previously add the new disk
array to the LVM volume group on the above array?  If so then you are
not done yet.  The LVM volume group won't be able to assemble without
the new disk.  If you did then you need to fix up LVM next.

I think you should try to get back to where you were before when your
system was working.  Therefore I would remove the new disks from the
LVM volume group.  But I don't know if you did or did not add it yet.
So I must stop here and wait for further information from you.

I don't know if your rescue disk has lvm automatically configured or
not.  You may need to load the device mapper module dm_mod.  I don't
know.  If you do then here is a hint:

  modprobe dm_mod

To scan for volume groups:

  vgscan

To activate a volume group:

  vgchange -ay

To display the physical volumes associated with a volume group:

  pvdisplay

If the new disks haven't been added to the volume group (I am hoping
not) then you should be home free.  But if they are then I think you
will need to remove them first.
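
A sketch of that removal, only if /dev/md2 really was added as a physical
volume; the volume group name below is just a placeholder for whatever
vgscan/pvdisplay report:

  pvdisplay                  # check whether /dev/md2 is listed as a PV of the group
  pvmove /dev/md2            # only needed if extents were already allocated on it
  vgreduce yourvg /dev/md2   # drop the PV from the volume group
  pvremove /dev/md2          # optionally wipe the PV label afterwards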

I don't know if the LVM actions above are going to be needed.  I am
just trying to proactively give some possible hints.

Bob




Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Jack Schneider
On Fri, 14 Jan 2011 12:16:37 -0500
Tom H tomh0...@gmail.com wrote:
  
[BIG SNIP]

  You might want to try configuring grub and fstab to use UUID's
  instead of /dev/mdX.  That removes the possibility that the kernel
  will change the mdX designations.
 
  Use blkid to find out the UUID's of your partitions.
 
  Thanks for the reply, Rob. What grub file do I change?
  grub.cfg?  grub *.map? I seem to have UUIDs for both disks and
  LVM partitions, change both?
 
 So you have LVM over RAID, not just RAID.
Hi, Tom.
Well, not really; not all of the disks are LVM2. The first two
disks, raid1 /dev/sda and /dev/sdc, are partitioned with 1 small
/(root) partition, /dev/md0 - 10 gigs. The balance of the disk
is /dev/md1 under LVM2 with seven logical volumes: /home, /var, swap,
etc.  The next two disks, sdb and sdd, are raid1 as /dev/md2 which I
need to use as an extension of the LVM.

More info, when I boot the machine, I see the GRUB loading. WELCOME to
GRUB! info.  Then it enters the rescue mode with a grub rescue
prompt.  So the kernel is found/finding the / partition. Right?
 


 
 For grub2, you're only supposed to edit /etc/default/grub.
 
 

I started interpreting that as simply needing an update-grub type of
fix.. I was/am wrong...  So I resorted to systemrescuecd-2.0.0..
to fix up Grub..


Thanks, Jack





Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Jack Schneider
On Fri, 14 Jan 2011 05:25:45 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  I have a raid1 based W/S running Debian Squeeze uptodate. (was
  until ~7 days ago) There are 4 drives, 2 of which had never been
  used or formatted. I configured a new array using Disk Utility from
  a live Ubuntu CD. That's where I screwed up... The end result was
  the names of the arrays were changed on the working 2 drives.
  IE: /dev/md0 to /dev/126 and /dev/md1 became md127.
 
 Something else must have happened too.  Because normally just adding
 arrays will not rename the existing arrays.  I am not familiar with
 the Disk Utility that you mention.
 
 Next time instead you might just use mdadm directly.  It really is
 quite easy to create new arrays using it.  Here is an example that
 will create a new device /dev/md9 mirrored from two other devices
 /dev/sdy5 and /dev/sdz5.
 
   mdadm --create /dev/md9 --level=mirror
 --raid-devices=2 /dev/sdy5 /dev/sdz5

This is how I created /dev/md2.

 
  Strangely the md2 array which I setup on the added drives remains as
  /dev/md2. My root partition is/was on /dev/md0. The result is that
  Grub2 fails to boot the / array.
 
 You may have to boot a rescue cd.  I recommend booting the Debian
 install disk in rescue mode.  Then you can inspect and fix the
 problem.  But as of yet you haven't said enough to let us know what
 the problem might be yet.
 
  I have tried three REINSTALLING GRUB procedures from Sysresccd
  online docs and many others GNU.org, Ubuntu etc.
 
 This isn't encouraging.  I can tell that you are grasping at straws.
 You have my sympathy.  But unfortunately that doesn't help diagnose
 the problem.  Remain calm.  And repeat exactly the problem that you
 are seeing and the steps you have taken to correct it.
 
I have not made any changes to any files on the root partition. I have
only used the procedures from SystemRescueCD and then backed out. All
seem to fail with the same linux_raid_member error.
  The errors occur when I try to mount the partition with the /boot
  directory. 'Complains about file system type 'linux_raid_member'
 


 I haven't seen that error before.  Maybe someone else will recognize
 it.
 
 I don't understand why you would get an error mounting /boot that
 would prevent the system from coming online.  Because by the time the
 system has booted enough to mount /boot it has already practically
 booted completely.  The system doesn't actually need /boot mounted to
 boot.  Grub reads the files from /boot and sets things in motion and
 then /etc/fstab instructs the system to mount /boot.
I get that when using the live rescue disk.
 
 Usually when the root device cannot be assembled the error I see is
 that the system is Waiting for root filesystem and can eventually
 get to a recovery shell prompt.
 
  This machine has worked for 3 years flawlessly.. Can anyone help
  with this? Or point me to a place or link to get this fixed. Google
  doesn't help... I can't find a article/posting where it ended
  successfully.  I have considered a full reinstall after Squeeze goes
  stable, since this O/S is a crufty upgrade from sarge over time. But
  useless now..
 
 The partitions for raid volumes should be 'autodetect' 0xFD.  This
 will enable mdadm to assemble then into raid at boot time.
 
 You can inspect the raid partitions with --detail and --examine.
 
   mdadm --examine /dev/sda1
   mdadm --detail /dev/md0
 
 That will list information about the devices.  Replace with your own
 series of devices.
 
 I would boot a rescue image and then inspect the current configuration
 using the above commands.  Hopefully that will show something wrong
 that can be fixed after you know what it is.
 
 A couple of other hints: If you are not booting a rescue system but
 using something like a live boot then you may need to load the kernel
 modules manually.  You may need to load the dm_mod and md_mod modules.
 
   modprobe md_mod
 
 You might get useful information from looking at the /proc/mdstat
 status.
 
   cat /proc/mdstat
 
 There is a configuration file /etc/mdadm/mdadm.conf that holds the
 UUIDs of the configured devices.  If those have become corrupted then
 mdadm won't be able to assemble the /dev/md* devices.  Check that file
 and compare against what you see with the --detail output.
 
 The initrd contains a copy of the mdadm.conf file with the components
 needed to assemble the root filesystem.  If the UUIDs change over what
 is recorded in the initrd then the initrd will need to be rebuilt.  To
 do that make sure that the /etc/mdadm/mdadm.conf file is correct and
 then reconfigure the kernel with dpkg-reconfigure.
 
   dpkg-reconfigure linux-image-2.6.32-5-i686
 
 Good luck!
 
 Bob
Thanks, Bob
I will do as you suggest shortly.. BTW,  A little more info in my reply
to Tom..

TIA, Jack



Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Henrique de Moraes Holschuh
On Sat, 15 Jan 2011, Jack Schneider wrote:
   You might want to try configuring grub and fstab to use UUID's
   instead of /dev/mdX.  That removes the possibility that the kernel
   will change the mdX designations.
  
   Use blkid to find out the UUID's of your partitions.

Whatever you do, NEVER use the UUIDs of partitions, use the UUID of the
md devices.  The worst failure scenario involving MD and idiotic tools
is for a tool to cause a component device to be mounted instead of the
MD array.

This is one of the reasons why the new MD formats that offset the data
inside the component devices exist.
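
For what it is worth, blkid makes the distinction visible; the device names
below are the ones from this thread and the output is abbreviated:

  blkid /dev/md0     # filesystem UUID on the array itself -- the one fstab should refer to
  blkid /dev/sda1    # reports TYPE="linux_raid_member" -- a component, never mount or reference it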

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Jack Schneider
On Sat, 15 Jan 2011 12:06:11 -0200
Henrique de Moraes Holschuh h...@debian.org wrote:

 On Sat, 15 Jan 2011, Jack Schneider wrote:
You might want to try configuring grub and fstab to use UUID's
instead of /dev/mdX.  That removes the possibility that the
kernel will change the mdX designations.
   
Use blkid to find out the UUID's of your partitions.
 
 Whatever you do, NEVER use the UUIDs of partitions, use the UUID of
 the md devices.  The worst failure scenario involving MD and idiotic
 tools is for a tool to cause a component device to be mounted instead
 of the MD array.
 
 This is one of the reasons why the new MD formats that offset the data
 inside the component devices exists.
 
Thanks, Henrique!!!  That gives me a new place to start this AM...

Jack





Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Jack Schneider
On Fri, 14 Jan 2011 05:25:45 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  I have a raid1 based W/S running Debian Squeeze uptodate. (was
  until ~7 days ago) There are 4 drives, 2 of which had never been
  used or formatted. I configured a new array using Disk Utility from
  a live Ubuntu CD. That's where I screwed up... The end result was
  the names of the arrays were changed on the working 2 drives.
  IE: /dev/md0 to /dev/126 and /dev/md1 became md127.
 
 Something else must have happened too.  Because normally just adding
 arrays will not rename the existing arrays.  I am not familiar with
 the Disk Utility that you mention.
 
Hi, Bob 
Thanks for your encouraging advice...

As I mentioned in a prior post, Grub was leaving me at a Grub rescue prompt.

I followed this procedure:
http://www.gnu.org/software/grub/manual/html_node/GRUB-only-offers-a-rescue-shell.html#GRUB-only-offers-a-rescue-shell
Booting now leaves me at a busy box: However the Grub menu is correct.
With the correct kernels. So it appears that grub is now finding the
root/boot partitions and files. 
 Next time instead you might just use mdadm directly.  It really is
 quite easy to create new arrays using it.  Here is an example that
 will create a new device /dev/md9 mirrored from two other devices
 /dev/sdy5 and /dev/sdz5.
 
   mdadm --create /dev/md9 --level=mirror
 --raid-devices=2 /dev/sdy5 /dev/sdz5
 
  Strangely the md2 array which I setup on the added drives remains as
  /dev/md2. My root partition is/was on /dev/md0. The result is that
  Grub2 fails to boot the / array.
 
 You may have to boot a rescue cd.  I recommend booting the Debian
 install disk in rescue mode.  Then you can inspect and fix the
 problem.  But as of yet you haven't said enough to let us know what
 the problem might be yet.
 
  I have tried three REINSTALLING GRUB procedures from Sysresccd
  online docs and many others GNU.org, Ubuntu etc.
 
 This isn't encouraging.  I can tell that you are grasping at straws.
 You have my sympathy.  But unfortunately that doesn't help diagnose
 the problem.  Remain calm.  And repeat exactly the problem that you
 are seeing and the steps you have taken to correct it.
 
  The errors occur when I try to mount the partition with the /boot
  directory. 'Complains about file system type 'linux_raid_member'
 
 I haven't seen that error before.  Maybe someone else will recognize
 it.
 
 I don't understand why you would get an error mounting /boot that
 would prevent the system from coming online.  Because by the time the
 system has booted enough to mount /boot it has already practically
 booted completely.  The system doesn't actually need /boot mounted to
 boot.  Grub reads the files from /boot and sets things in motion and
 then /etc/fstab instructs the system to mount /boot.
 
 Usually when the root device cannot be assembled the error I see is
 that the system is Waiting for root filesystem and can eventually
 get to a recovery shell prompt.
 
  This machine has worked for 3 years flawlessly.. Can anyone help
  with this? Or point me to a place or link to get this fixed. Google
  doesn't help... I can't find a article/posting where it ended
  successfully.  I have considered a full reinstall after Squeeze goes
  stable, since this O/S is a crufty upgrade from sarge over time. But
  useless now..
 
 The partitions for raid volumes should be 'autodetect' 0xFD.  This
 will enable mdadm to assemble then into raid at boot time.
 
 You can inspect the raid partitions with --detail and --examine.
 
   mdadm --examine /dev/sda1
   mdadm --detail /dev/md0
 

mdadm --examine on /dev/sda1 and /dev/sda2 gives, I think, a clean result.
I have posted the output at: http://pastebin.com/pHpKjgK3
mdadm --detail /dev/md0 -- gives  mdadm: md device /dev/md0 does not
appear to be active. 

There is no /proc/mdstat  data output.  

 That will list information about the devices.  Replace with your own
 series of devices.
 
 I would boot a rescue image and then inspect the current configuration
 using the above commands.  Hopefully that will show something wrong
 that can be fixed after you know what it is.
 
 A couple of other hints: If you are not booting a rescue system but
 using something like a live boot then you may need to load the kernel
 modules manually.  You may need to load the dm_mod and md_mod modules.
 
   modprobe md_mod
 
 You might get useful information from looking at the /proc/mdstat
 status.


 
   cat /proc/mdstat
 
 There is a configuration file /etc/mdadm/mdadm.conf that holds the
 UUIDs of the configured devices.  If those have become corrupted then
 mdadm won't be able to assemble the /dev/md* devices.  Check that file
 and compare against what you see with the --detail output.
 
 The initrd contains a copy of the mdadm.conf file with the components
 needed to assemble the root filesystem.  If the UUIDs change over what
 is recorded in the initrd then the initrd will need to be rebuilt.  To
 do that make 

Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Tom H
On Sat, Jan 15, 2011 at 8:24 AM, Jack Schneider p...@dp-indexing.com wrote:
 On Fri, 14 Jan 2011 12:16:37 -0500
 Tom H tomh0...@gmail.com wrote:
  
 [BIG SNIP]

  You might want to try configuring grub and fstab to use UUID's
  instead of /dev/mdX.  That removes the possibility that the kernel
  will change the mdX designations.
 
  Use blkid to find out the UUID's of your partitions.
 
  Thanks for the reply, Rob. What grub file do I change?
  grub.cfg?  grub *.map? I seem to have UUIDs for both disks and
  LVM partitions, change both?

 So you have LVM over RAID, not just RAID.

 Well, not really, not all of the disks are LVM2. The first two
 disks raid1 /dev/sda  /dev/sdc are partitioned with 1 small
 /(root) partition, /dev/md0 - 10 gigs. The balance of the disk
 is /dev/md1 under LVM2 with seven logical volumes. /home,/var,/swap
 etc  The next two disks sdb and sdd are raid1 as /dev/md2 which I
 need to use as an extension of the LVM.

So you don't need to have any lvm reference in grub.cfg.


 More info, when I boot the machine, I see the GRUB loading. WELCOME to
 GRUB! info.  Then it enters the rescue mode with a grub rescue
 prompt.  So the kernel is found/finding the / partition. Right?

For grub rescue, check out
http://www.gnu.org/software/grub/manual/grub.html#GRUB-only-offers-a-rescue-shell
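
The general shape of what that page walks you through at the rescue prompt
is roughly the following; (hd0,1) is only a guess (use ls to see what grub
can actually read, and newer grub releases spell it (hd0,msdos1)):

  grub rescue> ls
  grub rescue> set prefix=(hd0,1)/boot/grub
  grub rescue> set root=(hd0,1)
  grub rescue> insmod normal
  grub rescue> normal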





Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Tom H
On Sat, Jan 15, 2011 at 9:06 AM, Henrique de Moraes Holschuh
h...@debian.org wrote:
 On Sat, 15 Jan 2011, Jack Schneider wrote:
   You might want to try configuring grub and fstab to use UUID's
   instead of /dev/mdX.  That removes the possibility that the kernel
   will change the mdX designations.
  
   Use blkid to find out the UUID's of your partitions.

 Whatever you do, NEVER use the UUIDs of partitions, use the UUID of the
 md devices.  The worst failure scenario involving MD and idiotic tools
 is for a tool to cause a component device to be mounted instead of the
 MD array.

 This is one of the reasons why the new MD formats that offset the data
 inside the component devices exists.

If you want to use an md device's UUID in grub.cfg, you're going to
have to edit it by hand or edit the grub2 scripts. AFAIK, they'll only
use the md device names because they're unique (through their UUIDs).





Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Tom H
On Sat, Jan 15, 2011 at 5:29 PM, Jack Schneider p...@dp-indexing.com wrote:
 On Fri, 14 Jan 2011 05:25:45 -0700
 Bob Proulx b...@proulx.com wrote:
 Jack Schneider wrote:
 
  I have a raid1 based W/S running Debian Squeeze uptodate. (was
  until ~7 days ago) There are 4 drives, 2 of which had never been
  used or formatted. I configured a new array using Disk Utility from
  a live Ubuntu CD. That's where I screwed up... The end result was
  the names of the arrays were changed on the working 2 drives.
  IE: /dev/md0 to /dev/126 and /dev/md1 became md127.

 Something else must have happened too.  Because normally just adding
 arrays will not rename the existing arrays.  I am not familiar with
 the Disk Utility that you mention.

 As I mentioned in a prior post,Grub was leaving me at a Grub rescueprompt.

 I followed this procedure:
 http://www.gnu.org/software/grub/manual/html_node/GRUB-only-offers-a-rescue-shell.html#GRUB-only-offers-a-rescue-shell
 Booting now leaves me at a busy box: However the Grub menu is correct.
 With the correct kernels. So it appears that grub is now finding the
 root/boot partitions and files.

Assemble the arrays at the busybox/initramfs prompt with
--update=super-minor in order to update the minor.
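
Something along these lines, assuming mdadm is present in the initramfs.
The current md12x names and the member partitions have been reported
slightly differently at different points in this thread, so check
/proc/mdstat and /proc/partitions first rather than copying these blindly:

  mdadm --stop /dev/md126
  mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
  mdadm --stop /dev/md127
  mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5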


 You can inspect the raid partitions with --detail and --examine.

   mdadm --examine /dev/sda1
   mdadm --detail /dev/md0

 mdadm --examine /dev/sda1  /dev/sda2  gives I think a clean result
 I have posted the output at : http://pastebin.com/pHpKjgK3
 mdadm --detail /dev/md0 -- gives  mdadm: md device /dev/md0 does not
 appear to be active.

 There is no /proc/mdstat data output.

How about mdadm --detail /dev/md125 given that
http://pastebin.com/pHpKjgK3 shows that sda1 and sdc1 have 125 as
their minor?





Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
  Jack Schneider wrote:
   I have a raid1 based W/S running Debian Squeeze uptodate. (was
   until ~7 days ago) There are 4 drives, 2 of which had never been
   used or formatted. I configured a new array using Disk Utility from
   a live Ubuntu CD. That's where I screwed up... The end result was
   the names of the arrays were changed on the working 2 drives.
   IE: /dev/md0 to /dev/126 and /dev/md1 became md127.
  
  Something else must have happened too.  Because normally just adding
  arrays will not rename the existing arrays.  I am not familiar with
  the Disk Utility that you mention.

 Hi, Bob 
 Thanks for your encouraging advice...

I believe you should be able to completely recover from the current
problems.  But it may be tedious and not completely trivial.  You will
just have to work through it.

Now that there is more information available, and knowing that you are
using software raid and lvm, let me guess.  You added another physical
extent (a new /dev/md2 partition) to the root volume group?  If so
that is a common problem.  I have hit it myself on a number of
occasions.  You need to update the mdadm.conf file and rebuild the
initrd.  I will say more details about it as I go here in this message.

 As I mentioned in a prior post,Grub was leaving me at a Grub rescueprompt.  
 
 I followed this procedure:
 http://www.gnu.org/software/grub/manual/html_node/GRUB-only-offers-a-rescue-shell.html#GRUB-only-offers-a-rescue-shell

That seems reasonable.  It talks about how to drive the grub boot
prompt to manually set up the boot.

But you were talking about using a disk utility from a live cd to
configure a new array with two new drives and that is where I was
thinking that you had been modifying the arrays.  It sounded like it
anyway.

Gosh it would be a lot easier if we could just pop in for a quick peek
at the system in person.  But we will just have to make do with the
correspondence course.  :-)

 Booting now leaves me at a busy box: However the Grub menu is correct.
 With the correct kernels. So it appears that grub is now finding the
 root/boot partitions and files. 

That sounds good.  Hopefully not too bad off then.

  Next time instead you might just use mdadm directly.  It really is
  quite easy to create new arrays using it.  Here is an example that
  will create a new device /dev/md9 mirrored from two other devices
  /dev/sdy5 and /dev/sdz5.
  
mdadm --create /dev/md9 --level=mirror
  --raid-devices=2 /dev/sdy5 /dev/sdz5
  
   Strangely the md2 array which I setup on the added drives remains as
   /dev/md2. My root partition is/was on /dev/md0. The result is that
   Grub2 fails to boot the / array.

 This is how I created /dev/md2.

Then that explains why it didn't change.  Probably the HOMEHOST
parameter is involved on the ones that changed.  Using mdadm from the
command line doesn't set that parameter.

There was just a long discussion about this topic just recently.
You might want to jump into it in the middle here and read our
learnings with HOMEHOST.

  http://lists.debian.org/debian-user/2010/12/msg01105.html

 mdadm --examine /dev/sda1  /dev/sda2  gives I think a clean result 
 I have posted the output at : http://pastebin.com/pHpKjgK3

That looks good to me.  And healthy and normal.  Looks good to me for
that part.

But that is only the first partition.  That is just /dev/md0.  Do you
have any information on the other partitions?

You can look at /proc/partitions to get a list of all of the
partitions that the kernel knows about.

  cat /proc/partitions

Then you can poke at the other ones too.  But it looks like the
filesystems are there okay.

 mdadm --detail /dev/md0 -- gives  mdadm: md device /dev/md0 does not
 appear to be active. 
 
 There is no /proc/mdstat  data output.  

So it looks like the raid data is there on the disks but that the
multidevice (md) module is not starting up in the kernel.  Because it
isn't starting then there aren't any /dev/md* devices and no status
output in /proc/mdstat.

  I would boot a rescue image and then inspect the current configuration
  using the above commands.  Hopefully that will show something wrong
  that can be fixed after you know what it is.

I still think this is the best course of action for you.  Boot a
rescue disk into the system and then go from there.  Do you have a
Debian install disk #1 or Debian netinst or other installation disk?
Any of those will have a rescue system that should boot your system
okay.  The Debian rescue disk will automatically search for raid
partitions and automatically start the md modules.

 So it appears that I must rebuild my arrays.

I think your arrays might be fine.  More information is needed.

You said your boot partition was /dev/md0.  I assume that your root
partition was /dev/md1?  Then you added two new disks as /dev/md2?

  /dev/md0   /dev/sda1  /dev/sdc1

Let me guess at the next two:

  /dev/md1   /dev/sda2  /dev/sdc2  -- ?? missing info 

Re: Grub2 reinstall on raid1 system.

2011-01-15 Thread Henrique de Moraes Holschuh
On Sat, 15 Jan 2011, Tom H wrote:
 On Sat, Jan 15, 2011 at 9:06 AM, Henrique de Moraes Holschuh
 h...@debian.org wrote:
  On Sat, 15 Jan 2011, Jack Schneider wrote:
You might want to try configuring grub and fstab to use UUID's
instead of /dev/mdX.  That removes the possibility that the kernel
will change the mdX designations.
   
Use blkid to find out the UUID's of your partitions.
 
  Whatever you do, NEVER use the UUIDs of partitions, use the UUID of the
  md devices.  The worst failure scenario involving MD and idiotic tools
  is for a tool to cause a component device to be mounted instead of the
  MD array.
 
  This is one of the reasons why the new MD formats that offset the data
  inside the component devices exists.
 
 If you want to use an md device's UUID in grub.cfg, you're going to
 have to edit it by hand or edit the grub2 scripts. AFAIK, they'll only
 use the md device names because they're unique (through their UUIDs).

You must either use /dev/md* or the MD device UUID.  Anything else is going
to bite you back, hard.

There really isn't a reason to use UUIDs with MD.  The md devices will _not_
move around, especially not when kernel autostart is non-operational (and it
is not operational in any Debian kernel).  But some initrd scripts will keep
pestering you until you do switch to UUIDs everywhere.  Annoying, that.

OTOH, you will learn very fast to never ever forget to update
/etc/mdadm/mdadm.conf AND the initrds when you touch the md arrays...
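
On Debian that usually boils down to something like this (a sketch only;
review the scan output before carrying it into the config file):

  mdadm --detail --scan      # compare against the ARRAY lines in /etc/mdadm/mdadm.conf
  update-initramfs -u -k all # then rebuild every installed initrd to match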

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Grub2 reinstall on raid1 system.

2011-01-14 Thread Jack Schneider
On Thu, 13 Jan 2011 17:43:53 -0500
Rob Owens row...@ptd.net wrote:

 On Thu, Jan 13, 2011 at 08:23:11AM -0600, Jack Schneider wrote:
  
  I have a raid1 based W/S running Debian Squeeze uptodate. (was
  until ~7 days ago) There are 4 drives, 2 of which had never been
  used or formatted. I configured a new array using Disk Utility from
  a live Ubuntu CD. That's where I screwed up... The end result was
  the names of the arrays were changed on the working 2 drives.
  IE: /dev/md0 to /dev/126 and /dev/md1 became md127. Strangely the
  md2 array which I setup on the added drives remains as /dev/md2. My
  root partition is/was on /dev/md0. The result is that Grub2 fails
  to boot the / array. I have tried three REINSTALLING GRUB
  procedures from Sysresccd online docs and many others GNU.org,
  Ubuntu etc. The errors occur when I try to mount the partition with
  the /boot directory. 'Complains about file system type
  'linux_raid_member' This machine has worked for 3 years
  flawlessly.. Can anyone help with this? Or point me to a place or
  link to get this fixed. Google doesn't help... I can't find a
  article/posting where it ended successfully. I have considered a
  full reinstall after Squeeze goes stable, since this O/S is a
  crufty upgrade from sarge over time. But useless now..
  
 You might want to try configuring grub and fstab to use UUID's instead
 of /dev/mdX.  That removes the possibility that the kernel will change
 the mdX designations.
 
 Use blkid to find out the UUID's of your partitions.
 
 -Rob
 
 
Thanks for the reply, Rob.   What grub file do I change?
grub.cfg?  grub *.map? I seem to have UUIDs for both disks and
LVM partitions, change both? 

TIA, Jack
   





Re: Grub2 reinstall on raid1 system.

2011-01-14 Thread Bob Proulx
Jack Schneider wrote:
 I have a raid1 based W/S running Debian Squeeze uptodate. (was
 until ~7 days ago) There are 4 drives, 2 of which had never been
 used or formatted. I configured a new array using Disk Utility from a
 live Ubuntu CD. That's where I screwed up... The end result was the
 names of the arrays were changed on the working 2 drives. IE: /dev/md0
 to /dev/126 and /dev/md1 became md127.

Something else must have happened too.  Because normally just adding
arrays will not rename the existing arrays.  I am not familiar with
the Disk Utility that you mention.

Next time instead you might just use mdadm directly.  It really is
quite easy to create new arrays using it.  Here is an example that
will create a new device /dev/md9 mirrored from two other devices
/dev/sdy5 and /dev/sdz5.

  mdadm --create /dev/md9 --level=mirror --raid-devices=2 /dev/sdy5 /dev/sdz5

 Strangely the md2 array which I setup on the added drives remains as
 /dev/md2. My root partition is/was on /dev/md0. The result is that
 Grub2 fails to boot the / array.

You may have to boot a rescue cd.  I recommend booting the Debian
install disk in rescue mode.  Then you can inspect and fix the
problem.  But as of yet you haven't said enough to let us know what
the problem might be yet.

 I have tried three REINSTALLING GRUB procedures from Sysresccd
 online docs and many others GNU.org, Ubuntu etc.

This isn't encouraging.  I can tell that you are grasping at straws.
You have my sympathy.  But unfortunately that doesn't help diagnose
the problem.  Remain calm.  And repeat exactly the problem that you
are seeing and the steps you have taken to correct it.

 The errors occur when I try to mount the partition with the /boot
 directory. 'Complains about file system type 'linux_raid_member'

I haven't seen that error before.  Maybe someone else will recognize
it.

I don't understand why you would get an error mounting /boot that
would prevent the system from coming online.  Because by the time the
system has booted enough to mount /boot it has already practically
booted completely.  The system doesn't actually need /boot mounted to
boot.  Grub reads the files from /boot and sets things in motion and
then /etc/fstab instructs the system to mount /boot.

Usually when the root device cannot be assembled the error I see is
that the system is Waiting for root filesystem and can eventually
get to a recovery shell prompt.

 This machine has worked for 3 years flawlessly.. Can anyone help
 with this? Or point me to a place or link to get this fixed. Google
 doesn't help... I can't find a article/posting where it ended
 successfully.  I have considered a full reinstall after Squeeze goes
 stable, since this O/S is a crufty upgrade from sarge over time. But
 useless now..

The partitions for raid volumes should be 'autodetect' 0xFD.  This
will enable mdadm to assemble them into raid at boot time.
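
If you want to check or change that on an msdos-labelled disk, a sketch
(the disk name is only an example; note that kernel autodetect applies to
0.90-superblock arrays, and Debian normally assembles from the initramfs
instead, as noted elsewhere in this thread):

  fdisk -l /dev/sda    # the Id column should show fd (Linux raid autodetect)
  fdisk /dev/sda       # interactively: t to change a partition's type, fd as the code, w to write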

You can inspect the raid partitions with --detail and --examine.

  mdadm --examine /dev/sda1
  mdadm --detail /dev/md0

That will list information about the devices.  Replace with your own
series of devices.

I would boot a rescue image and then inspect the current configuration
using the above commands.  Hopefully that will show something wrong
that can be fixed after you know what it is.

A couple of other hints: If you are not booting a rescue system but
using something like a live boot then you may need to load the kernel
modules manually.  You may need to load the dm_mod and md_mod modules.

  modprobe md_mod

You might get useful information from looking at the /proc/mdstat
status.

  cat /proc/mdstat

There is a configuration file /etc/mdadm/mdadm.conf that holds the
UUIDs of the configured devices.  If those have become corrupted then
mdadm won't be able to assemble the /dev/md* devices.  Check that file
and compare against what you see with the --detail output.

The initrd contains a copy of the mdadm.conf file with the components
needed to assemble the root filesystem.  If the UUIDs change over what
is recorded in the initrd then the initrd will need to be rebuilt.  To
do that make sure that the /etc/mdadm/mdadm.conf file is correct and
then reconfigure the kernel with dpkg-reconfigure.

  dpkg-reconfigure linux-image-2.6.32-5-i686

Good luck!

Bob




Re: Grub2 reinstall on raid1 system.

2011-01-14 Thread Jack Schneider
On Fri, 14 Jan 2011 06:06:17 -0600
Jack Schneider p...@dp-indexing.com wrote:

 On Thu, 13 Jan 2011 17:43:53 -0500
 Rob Owens row...@ptd.net wrote:
 
  On Thu, Jan 13, 2011 at 08:23:11AM -0600, Jack Schneider wrote:
   
   I have a raid1 based W/S running Debian Squeeze uptodate. (was
   until ~7 days ago) There are 4 drives, 2 of which had never been
   used or formatted. I configured a new array using Disk Utility
   from a live Ubuntu CD. That's where I screwed up... The end
   result was the names of the arrays were changed on the working 2
   drives. IE: /dev/md0 to /dev/126 and /dev/md1 became md127.
   Strangely the md2 array which I setup on the added drives remains
   as /dev/md2. My root partition is/was on /dev/md0. The result is
   that Grub2 fails to boot the / array. I have tried three
   REINSTALLING GRUB procedures from Sysresccd online docs and many
   others GNU.org, Ubuntu etc. The errors occur when I try to mount
   the partition with the /boot directory. 'Complains about file
   system type 'linux_raid_member' This machine has worked for 3
   years flawlessly.. Can anyone help with this? Or point me to a
   place or link to get this fixed. Google doesn't help... I can't
   find a article/posting where it ended successfully. I have
   considered a full reinstall after Squeeze goes stable, since this
   O/S is a crufty upgrade from sarge over time. But useless now..
   
  You might want to try configuring grub and fstab to use UUID's
  instead of /dev/mdX.  That removes the possibility that the kernel
  will change the mdX designations.
  
  Use blkid to find out the UUID's of your partitions.
  
  -Rob
  
  
 Thanks for the reply, Rob.   What grub file do I change?
 grub.cfg?  grub *.map? I seem to have UUIDs for both disks and
 LVM partitions, change both? 
 
 TIA, Jack

Whoops!! UUIDs for not just disks, but LVM volumes and RAID arrays..
Jack





Re: Grub2 reinstall on raid1 system.

2011-01-14 Thread Tom H
On Thu, Jan 13, 2011 at 5:43 PM, Rob Owens row...@ptd.net wrote:
 On Thu, Jan 13, 2011 at 08:23:11AM -0600, Jack Schneider wrote:

 I have a raid1 based W/S running Debian Squeeze uptodate. (was
 until ~7 days ago) There are 4 drives, 2 of which had never been
 used or formatted. I configured a new array using Disk Utility from a
 live Ubuntu CD. That's where I screwed up... The end result was the
 names of the arrays were changed on the working 2 drives. IE: /dev/md0
 to /dev/126 and /dev/md1 became md127. Strangely the md2 array which I
 setup on the added drives remains as /dev/md2. My root partition is/was
 on /dev/md0. The result is that Grub2 fails to boot the / array. I have
 tried three REINSTALLING GRUB procedures from Sysresccd online docs
 and many others GNU.org, Ubuntu etc. The errors occur when I try to
 mount the partition with the /boot directory. 'Complains about file
 system type 'linux_raid_member' This machine has worked for 3 years
 flawlessly.. Can anyone help with this? Or point me to a place or link
 to get this fixed. Google doesn't help... I can't find a
 article/posting where it ended successfully.
 I have considered a full reinstall after Squeeze goes stable, since this
 O/S is a crufty upgrade from sarge over time. But useless now..

 You might want to try configuring grub and fstab to use UUID's instead
 of /dev/mdX.  That removes the possibility that the kernel will change
 the mdX designations.

 Use blkid to find out the UUID's of your partitions.

I don't think that the UUIDs'll change anything because mdadm.conf
establishes a one-to-one correspondence between mdX and its UUID. If
the superblock metadata's v0.9, the hostname's hashed and integrated
into the UUID (I can't imagine that the latter would've changed) and
if the superblock metadata's v1.x, the hostname's held separately from
the UUID so it can change independently of the latter.
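
An easy way to see which case applies; the device name is only an example:

  # shows the superblock version, the array UUID, and (for 1.x metadata) the name/homehost
  mdadm --examine /dev/sda1 | egrep 'Version|UUID|Name'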

If the arrays are being named 127 and 126, they must be considered
foreign to the system; most probably because the metadata's been
modified while booted from the Ubuntu CD/DVD. Using mdadm's
--homehost flag to reset the hostname should reset them to being
recognized as 0 and 1.
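
A sketch of that, with placeholder host and member names, done while the
array is stopped; note that for 0.90 metadata updating the homehost also
rewrites the hashed part of the UUID:

  mdadm --stop /dev/md127
  mdadm --assemble /dev/md1 --update=homehost --homehost=yourhostname /dev/sda5 /dev/sdc5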





Re: Grub2 reinstall on raid1 system.

2011-01-14 Thread Tom H
On Fri, Jan 14, 2011 at 7:06 AM, Jack Schneider p...@dp-indexing.com wrote:
 On Thu, 13 Jan 2011 17:43:53 -0500
 Rob Owens row...@ptd.net wrote:
 On Thu, Jan 13, 2011 at 08:23:11AM -0600, Jack Schneider wrote:
 
  I have a raid1 based W/S running Debian Squeeze uptodate. (was
  until ~7 days ago) There are 4 drives, 2 of which had never been
  used or formatted. I configured a new array using Disk Utility from
  a live Ubuntu CD. That's where I screwed up... The end result was
  the names of the arrays were changed on the working 2 drives.
  IE: /dev/md0 to /dev/126 and /dev/md1 became md127. Strangely the
  md2 array which I setup on the added drives remains as /dev/md2. My
  root partition is/was on /dev/md0. The result is that Grub2 fails
  to boot the / array. I have tried three REINSTALLING GRUB
  procedures from Sysresccd online docs and many others GNU.org,
  Ubuntu etc. The errors occur when I try to mount the partition with
  the /boot directory. 'Complains about file system type
  'linux_raid_member' This machine has worked for 3 years
  flawlessly.. Can anyone help with this? Or point me to a place or
  link to get this fixed. Google doesn't help... I can't find a
  article/posting where it ended successfully. I have considered a
  full reinstall after Squeeze goes stable, since this O/S is a
  crufty upgrade from sarge over time. But useless now..
 
 You might want to try configuring grub and fstab to use UUID's instead
 of /dev/mdX.  That removes the possibility that the kernel will change
 the mdX designations.

 Use blkid to find out the UUID's of your partitions.

 Thanks for the reply, Rob. What grub file do I change?
 grub.cfg?  grub *.map? I seem to have UUIDs for both disks and
 LVM partitions, change both?

So you have LVM over RAID, not just RAID.

For grub2, you're only supposed to edit /etc/default/grub.
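
For completeness, the usual cycle on Debian looks something like this (the
disk name in the second command is only an example, and it is only needed
if the boot sector itself has to be rewritten):

  update-grub              # regenerate /boot/grub/grub.cfg from /etc/default/grub and the scripts
  grub-install /dev/sda    # reinstall the boot loader to the disk if required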





Grub2 reinstall on raid1 system.

2011-01-13 Thread Jack Schneider

I have a raid1 based W/S running Debian Squeeze uptodate. (was
until ~7 days ago) There are 4 drives, 2 of which had never been
used or formatted. I configured a new array using Disk Utility from a
live Ubuntu CD. That's where I screwed up... The end result was the
names of the arrays were changed on the working 2 drives, i.e. /dev/md0
became /dev/md126 and /dev/md1 became /dev/md127. Strangely the md2 array which I
setup on the added drives remains as /dev/md2. My root partition is/was
on /dev/md0. The result is that Grub2 fails to boot the / array. I have
tried three REINSTALLING GRUB procedures from Sysresccd online docs
and many others GNU.org, Ubuntu etc. The errors occur when I try to
mount the partition with the /boot directory: it complains about file
system type 'linux_raid_member'. This machine has worked for 3 years
flawlessly.. Can anyone help with this? Or point me to a place or link
to get this fixed. Google doesn't help... I can't find a
article/posting where it ended successfully.  
I have considered a full reinstall after Squeeze goes stable, since this
O/S is a crufty upgrade from sarge over time. But useless now..

TIA, Jack





Re: Grub2 reinstall on raid1 system.

2011-01-13 Thread Rob Owens
On Thu, Jan 13, 2011 at 08:23:11AM -0600, Jack Schneider wrote:
 
 I have a raid1 based W/S running Debian Squeeze uptodate. (was
 until ~7 days ago) There are 4 drives, 2 of which had never been
 used or formatted. I configured a new array using Disk Utility from a
 live Ubuntu CD. That's where I screwed up... The end result was the
 names of the arrays were changed on the working 2 drives. IE: /dev/md0
 to /dev/126 and /dev/md1 became md127. Strangely the md2 array which I
 setup on the added drives remains as /dev/md2. My root partition is/was
 on /dev/md0. The result is that Grub2 fails to boot the / array. I have
 tried three REINSTALLING GRUB procedures from Sysresccd online docs
 and many others GNU.org, Ubuntu etc. The errors occur when I try to
 mount the partition with the /boot directory. 'Complains about file
 system type 'linux_raid_member' This machine has worked for 3 years
 flawlessly.. Can anyone help with this? Or point me to a place or link
 to get this fixed. Google doesn't help... I can't find a
 article/posting where it ended successfully.  
 I have considered a full reinstall after Squeeze goes stable, since this
 O/S is a crufty upgrade from sarge over time. But useless now..
 
You might want to try configuring grub and fstab to use UUID's instead
of /dev/mdX.  That removes the possibility that the kernel will change
the mdX designations.

Use blkid to find out the UUID's of your partitions.

-Rob

