On 03/14/2013 12:53 AM, Wenyi Gao wrote:
On 2013-03-13 20:47, Mike Burns wrote:
On 03/11/2013 03:15 AM, Wenyi Gao wrote:
On 2013-03-08 20:37, Mike Burns wrote:
On 03/07/2013 09:58 PM, Wenyi Gao wrote:
On 2013-03-07 20:52, Mike Burns wrote:
On 03/07/2013 04:03 AM, Wenyi Gao wrote:
Hey Jbos,


When I auto-install the ovirt-node-2.6.1 ISO to our machine
via PXE, I run into the following error:

Starting ovirt-firstboot: Performing automatic disk partitioning
ERROR:ovirtnode.storage:HostVG/AppVG exists on a separate disk
ERROR:ovirtnode.storage:Manual Intervention required


It seems the auto-install will stop if a "HostVG" already exists on a
disk in the machine.
I checked the code and found the following patch:

http://lists.ovirt.org/pipermail/node-patches/2012-December/003471.html



The patch adds a HostVG check to auto-installation to fix rhbz#889198,
which I have no access to.

With this patch it seems ovirt-node can't be auto-installed via PXE;
the existing HostVG needs to be deleted manually first.

So what do you think about the issue and could you give me some
suggestions to fix it? Thanks.

What is your PXE command line?  What device are you installing to?
What device contains the previous installation?

The intention of the fix is to prevent users from wiping data
accidentally.  It has existed in the TUI install for some time and was
previously in the auto-install, but was missed in the migration from
bash to python.

As for the bug, it was a customer-filed issue in RHEV-H, and not
something I can make public, unfortunately.

Mike

Thank you for answering the question.

My PXE command line is as follows:

In the emergency shell

[root@mcptest ~]# cat /proc/cmdline
ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda lang=
upgrade storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest
vga=791 rootpw=$1$cfm5kMmj$M1uknfs/8aSeZkJyf/NBC/
root=live:/ovirt-node-image-2.6.1-1.1.el6.iso ssh_pwauth=1
ks=http://9.181.129.219/cblr/svc/op/ks/system/62 rootfstype=auto ro
liveimg nomodeset check rootflags=loop
crashkernel=512M-2G:64M,2G-:128M
elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS
rd_NO_MD rd_NO_DM

[root@mcptest ~]# pvs
   PV                                              VG     Fmt  Attr PSize PFree
   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0


We are installing it to the hard disk, and the previous installation is
RHEV-H 6.3. During the installation,
before performing the disk format, it finds an existing HostVG on the
disk, so it stops installing.

It's possible we're not translating correctly from /dev/sda to
/dev/mapper/3600*.
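To illustrate what that translation would have to do (a hypothetical sketch, not ovirt-node code): on a multipath host, the storage_init=/dev/sda target needs to be mapped to the /dev/mapper/3600... device that HostVG actually sits on. On a real system the slave list would come from /sys/block/dm-*/slaves; here it is passed in as a dict so the logic can be shown standalone.

```python
def translate_to_multipath(dev, dm_slaves):
    """Return the mapper device whose slaves include dev, else dev itself.

    dm_slaves maps mapper names to their underlying sd devices, i.e.
    what /sys/block/dm-*/slaves would list on a real host (assumed here).
    """
    name = dev.rsplit("/", 1)[-1]  # "/dev/sda" -> "sda"
    for mapper, slaves in dm_slaves.items():
        if name in slaves:
            return "/dev/mapper/" + mapper
    return dev

# Example: sda is one path of the 3600605b0... multipath device.
slaves = {"3600605b003b86620166a1ddb0bfa15b7": ["sda", "sdb"]}
print(translate_to_multipath("/dev/sda", slaves))
# -> /dev/mapper/3600605b003b86620166a1ddb0bfa15b7
```

If the installer compares the raw /dev/sda name against the mapper device that pvs reports, the comparison fails even though they are the same disk.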

Can you try one thing?  Try adding reinstall instead of install to the
PXE command line.  Also, you probably shouldn't pass both upgrade and
install on the same command line; it may be getting confused because it
has both of those.
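For reference, the adjusted append line would look something like this (a sketch: only the upgrade/install tokens change, everything else stays as in the original command line, abbreviated here):

```shell
# Hypothetical adjusted PXE append line: "upgrade" dropped,
# "install" replaced by "reinstall"; remaining parameters unchanged.
ksdevice=bootif text BOOTIF=eth0 firstboot storage_init=/dev/sda \
  storage_vol=50:1024:512:5:2048:-1 standalone hostname=mcptest \
  reinstall ...
```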

I removed upgrade and changed install to reinstall in the parameters,
and still got the same error. The error is caused by the following code
in storage.py:


def storage_auto():
    storage = Storage()
    if not _functions.OVIRT_VARS["OVIRT_INIT"] == "":
        # force root install variable for autoinstalls
        _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
        if _functions.check_existing_hostvg("") or \
           _functions.check_existing_hostvg("", "AppVG"):
            logger.error("HostVG/AppVG exists on a separate disk")
            logger.error("Manual Intervention required")
            return False
        if storage.perform_partitioning():
            return True
    else:
        logger.error("Storage Device Is Required for Auto Installation")
    return False

When check_existing_hostvg runs, our disk already has an existing
HostVG:

[root@mcptest ~]# pvs
   PV                                              VG     Fmt  Attr PSize PFree
   /dev/mapper/3600605b003b86620166a1ddb0bfa15b7p4 HostVG lvm2 a--  1.36t    0


So check_existing_hostvg("") always returns True, which leads to the
issue. I think the HostVG should be there, because the machine had a
RHEV-H system installed before. So can we skip check_existing_hostvg
for a machine that already has a HostVG?

Yes, you're right, that logic is broken.  It should be passing the
disks mentioned in storage_init.

Can you file a bug on this?  I'll try to work up a patch.
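The direction suggested above could look something like this (a hypothetical sketch, not the actual patch): pass the storage_init target disks to check_existing_hostvg() so a HostVG on the disk being reinstalled does not abort the install. check_existing_hostvg and perform_partitioning are injected as plain callables so the sketch is self-contained; in ovirt-node they come from _functions and the Storage class.

```python
def storage_auto_sketch(ovirt_vars, check_existing_hostvg, perform_partitioning):
    """Sketch of storage_auto() passing storage_init disks to the check.

    check_existing_hostvg(dev, vg="HostVG") is assumed to report whether
    the volume group exists somewhere other than the target disk dev.
    """
    if ovirt_vars.get("OVIRT_INIT", "") == "":
        return False  # "Storage Device Is Required for Auto Installation"
    # force root install variable for autoinstalls
    ovirt_vars["OVIRT_ROOT_INSTALL"] = "y"
    # storage_init may name several target disks, comma-separated
    targets = [d for d in ovirt_vars["OVIRT_INIT"].split(",") if d]
    for dev in targets:
        # Only a HostVG/AppVG on a *separate* disk should require
        # manual intervention; the target disk is about to be wiped.
        if check_existing_hostvg(dev) or check_existing_hostvg(dev, "AppVG"):
            return False
    return perform_partitioning()
```

With a check that reports nothing outside the target disk, the reinstall would proceed instead of stopping on the old HostVG.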

Thanks

Mike

Mike,

There is another question that confuses me. If I install
rhev-hypervisor6-6.4-20130221 with ovirt-node-2.5.0 via PXE, which has
the same check_existing_hostvg code as mentioned above, I can install
it successfully. In addition, I did some debugging before this code
runs:

        _functions.OVIRT_VARS["OVIRT_ROOT_INSTALL"] = "y"
        if _functions.check_existing_hostvg("") or \
           _functions.check_existing_hostvg("", "AppVG"):
            logger.error("HostVG/AppVG exists on a separate disk")
            logger.error("Manual Intervention required")
            return False
        if storage.perform_partitioning():
            return True

[root@mcptest ~]# pvs

I didn't get the HostVG information that I got on the ovirt-node-2.6.1
version. So I guess ovirt-node-2.5.0 does something with HostVG before
the check, but ovirt-node-2.6.1 doesn't.

Yes, we need to investigate what's happening. Any chance you can file a bz for this so we can track it?

Mike


Thanks
Wenyi Gao







Joey, can you see if you can reproduce this?

Thanks

Mike


Best regards
Wenyi Gao


_______________________________________________
node-devel mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/node-devel







