Thank you, Ken and Fernando. I have tried your suggestions; the results 
are below.

>________________________________
> From: Fernando de Oliveira <fam...@yahoo.com.br>
>To: LFS Support List <lfs-support@linuxfromscratch.org> 
>Date: Friday, May 25, 2012, 10:14 AM
>Subject: Re: [lfs-support] lfs7.1 cannot boot
> 
>On 24-05-2012 11:07, Ken Moffat wrote:
>> On Thu, May 24, 2012 at 03:07:34PM +0800, Omar wrote:
>>> Hi, all:
>>> I finished all work of the LFS 7.1 book except the error when booting my 
>>> LFS.
>>>
>>> First, let me describe my LFS 7.1 setup.
>>> I installed Ubuntu 10.04 in VMware on a 20 GB virtual SCSI disk.
>>> Before beginning, I added another 8 GB virtual SCSI disk to the VM and
>>> mounted it in Ubuntu manually, so the first disk (with Ubuntu) appears
>>> as sda in /dev/ and the newly added second disk appears as sdb.
>>> Following the book, I installed LFS 7.1 on sdb1, the only partition on
>>> sdb, formatted as ext3. Everything completed with no errors.
>> 
>> [...]
>>>
>>> After a few final steps such as logging out and unmounting, I rebooted
>>> the computer. But it started up into Ubuntu again, only showing an
>>> error check for a moment.
>>>
>>> Then I changed the command in chapter 8.4.3 to grub-install /dev/sda,
>>> kept grub.cfg the same, and rebooted again. The computer gave the
>>> following errors and stopped booting.
>>>
>
>>  So, you have overwritten the ubuntu grub.cfg.
>
>You have also overwritten the MBR of sda. Probably Ubuntu's grub version
>is different from LFS's.
>
>>> md: Autodetecting RAID arrays.
>>> md: Scanned 0 and added 0 devices.
>>> md: Autorun ...
>>> md: ... autorun DONE.
>>> Root-NFS: on NFS server address
>>> VFS: Unable to mount root fs via NFS. trying floppy.
>>> VFS: cannot open root device "sdb1" or unknown-block(2.0)
>>> Please append a correct "root=" boot option; here are the available 
>>> partitions:
>>> 0b00    1048575   sr0 driver: sr
>>> Kernel panic - not syncing: VFS: Unable to mount root fs on 
>>> unknown-block(2,0)
>>> Pid: 1, comm: swapper/0 Not tainted 3.2.6 #1
>>> Call trace:
>>> ...mount_block_root+0x141/0x1c9...mount_root...kernel_init...
>>> 
>
>>> 
>>> I searched for this problem on the LFS mailing list and Google, which
>>> suggest that the kernel needs some SCSI drivers compiled in, or that
>>> hda should be changed to sda in grub.cfg, etc. I compiled the kernel
>>> again with more drivers, such as SCSI. When I rebooted, the result was
>>> the same.
>>> 
>>> Could anybody help me? Thanks in advance.
>>>
>>> Omar
>> 
>>  You have booted *a* linux kernel, so I don't think Elly's
>> suggestions to change where grub is looking or installed will be
>> needed.  If I've read correctly, you now only have one entry in
>> grub.cfg and that one doesn't boot.
>> 
>>  I think you might have booted the ubuntu kernel : that needs its
>> initrd to be able to access anything.  If you can get to the grub
>> command line, and you know what the initrd is called, you might be
>> able to edit the commandline (if the initrd is still there).
>> Otherwise, recover from a backup.  Once you can boot the ubuntu VM,
>> you will be able to use that to fix problems with your LFS system.
>> 


Please see my test at (1) below.


>>  After that, add the LFS system to ubuntu's grub.cfg by *editing*
>> that file so that there are entries for both ubuntu and LFS, not by
>> running programs and NOT by overwriting it.  When you have the LFS
>> system booting and working correctly, you can think about doing
>> without the ubuntu VM (hint: until you can download other software
>> to it, the LFS system is not particularly useful).
>> 
>>  Note that I know very little about Virtual Machines, they just seem
>> to make it harder for new builders.
>> 
>> ĸen
>
>Please, if you can, provide the output of
>
>    cat /etc/lsb-release

I ran this command as root on Ubuntu; the output was:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"

>
>I believe you first have to fix Ubuntu, and in doing so, the other part
>will be fixed as well.

Yes, I backed up the entire VM directory of Ubuntu 10.04 (just a copy).


>
>If you can boot into Ubuntu, run as root:
>
>    grub-install /dev/sda
>
>to get back the original grub in the MBR.
>
>Then, as root, run
>
>    update-grub
>
>This will create a new grub.cfg in /boot/grub, which will also have a
>menuentry for LFS.
>

I ran update-grub as root and checked /boot/grub/grub.cfg; at the end it 
now contains an entry for LFS 7.1, shown below.
### BEGIN /etc/grub.d/30_os-prober ###
menuentry "GNU/Linux, Linux 3.2.6-lfs-7.1 (on /dev/sdb1)" {
        insmod ext2
        set root='(hd1,1)'
        search --no-floppy --fs-uuid --set 63a59ea4-b2dd-4ede-a506-14b8d0a951c5
        linux /boot/vmlinuz-3.2.6-lfs-7.1 root=/dev/sdb1 ro
}
### END /etc/grub.d/30_os-prober ###
After rebooting, I selected LFS 7.1 in the boot menu (Ubuntu can still 
start up and run correctly) and got the same errors as before.

...md: ... autorun DONE.
Root-NFS: on NFS server address......
(1) Then I did a test following Ken's suggestion:
I copied initrd.img-2.6.32-41-generic from Ubuntu's /boot to LFS's /boot, 
renamed it initrd.img-3.2.6-lfs-7.1, and added it to Ubuntu's grub.cfg as 
follows.

### BEGIN /etc/grub.d/30_os-prober ###
menuentry "GNU/Linux, Linux 3.2.6-lfs-7.1 (on /dev/sdb1)" {
        insmod ext2
        set root='(hd1,1)'
        search --no-floppy --fs-uuid --set 63a59ea4-b2dd-4ede-a506-14b8d0a951c5
        linux /boot/vmlinuz-3.2.6-lfs-7.1 root=/dev/sdb1 ro
        initrd /boot/initrd.img-3.2.6-lfs-7.1
}
### END /etc/grub.d/30_os-prober ###
After the same operation, the error output changed to:


1039/oom_score_adj instead.
Begin: Loading essential drivers... ...
Done.
Begin: Running /scripts/init-premount ...
Done.
Begin: Mounting root file system... ...
Begin: Running /scripts/local-top...
Done.
[2.803565] blkid used greatest stack depth: 6744 bytes left
[2.820658] scsi_id used greatest stack depth: 6304 bytes left
[5.879234] udevd used greatest stack depth: 6268 bytes left
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
  - Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
FATAL: Could not load /lib/modules/3.2.6/modules.dep: No such file or directory
FATAL: Could not load /lib/modules/3.2.6/modules.dep: No such file or directory
ALERT! /dev/sdb1 does not exist. Dropping to a shell!

BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)
Enter 'help' for a list of built-in commands.
(initramfs) cat /proc/cmdline    (commands from here on are run at this prompt)

BOOT_IMAGE=/boot/vmlinux-3.2.6-lfs-7.1 root=/dev/sdb1 ro
(initramfs) cat /proc/modules
(nothing)
(initramfs) ls /lib/modules
2.6.32-41-generic 
(because it is Ubuntu's initrd.img)
(initramfs) ls /dev
(a lot of tty* devices but no sdb)
So it seems all of the above indicates that LFS 7.1 really needs its own 
initrd.img. I don't know how to generate one yet, because the LFS 7.1 book 
doesn't cover it. I'll keep searching and try some other things. 


>If you cannot boot, the instructions above should be done using chroot.
>This will be a little more difficult. Ubuntu's live CD or ISO image can
>be used, but first you have to be able to change the VM BIOS boot order
>to put the CD option before the HD option. This is a little tricky, as
>you have to hit the Esc key *only once* while the VMware bar is
>displayed.
>
>Before going on with the explanation of how to chroot in, we need more
>feedback from you.
>
>-- 
>[]s,
>Fernando
>-- 
>http://linuxfromscratch.org/mailman/listinfo/lfs-support
>FAQ: http://www.linuxfromscratch.org/lfs/faq.html
>Unsubscribe: See the above information page
>