Hi,

Here we use a home-made PXE script that creates the partitions and bootstraps a 
standard OS (Debian or CentOS). All package installation and configuration is 
then done by CFEngine.
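
For illustration, a minimal sketch of what such a bootstrap step can look like 
for the Debian case (this is not our actual script; the target disk /dev/sda, 
the mount point and the mirror are assumptions):

    # partition and format the disk, then debootstrap a base Debian system
    parted -s /dev/sda mklabel msdos mkpart primary ext4 1MiB 100%
    mkfs.ext4 /dev/sda1
    mount /dev/sda1 /mnt/target
    debootstrap --arch amd64 squeeze /mnt/target http://ftp.debian.org/debian
    # CFEngine then takes over package installation and configuration on first boot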

SystemRescueCd is a good choice and is easily PXE-bootable, but we had some 
trouble with it, especially on odd hardware (unrecognized NIC or RAID 
controller).

A few years ago I used partimage. It works well and consumes less space than a 
simple dd, since it only saves the used blocks.
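
For example, a partimage save/restore to an NFS mount looks roughly like this 
(the device, mount point and file names are just placeholders):

    # save /dev/sda1 with gzip compression, in batch mode
    partimage -z1 -b save /dev/sda1 /mnt/nfs/images/sda1.partimg.gz
    # restore it later (partimage appends .000 to the first volume)
    partimage -b restore /dev/sda1 /mnt/nfs/images/sda1.partimg.gz.000
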
Also, using IPMI over LAN could be interesting to manage hardware servers like 
virtual machines (remote power control and boot device selection).
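
For instance, with ipmitool you can force a PXE boot and power-cycle a box 
remotely (the BMC address and credentials below are placeholders):

    ipmitool -I lanplus -H bmc-host -U admin -P secret chassis bootdev pxe
    ipmitool -I lanplus -H bmc-host -U admin -P secret power reset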

Although a configuration manager like CFEngine or Puppet is mandatory 
for a cloud, I think it is out of scope for integration in OpenNebula. Deploying 
a custom, OS-independent image (with the same mechanism as for VMs?) seems to be 
a simple approach and would ease integration with existing infrastructure.

Cheers,
Nicolas AGIUS

PS: We are using CentOS 5 for the Xen hosts, with a kernel backported from SUSE, 
because the openSUSE life cycle is really too short for us.

--- On Tue, 6 Nov 2012, Jaime Melis <[email protected]> wrote:

From: Jaime Melis <[email protected]>
Subject: Re: [one-users] using PXE to boot xen hvm
To: "Steve Heistand" <[email protected]>
Cc: "Users OpenNebula" <[email protected]>
Date: Tuesday, 6 November 2012, 00:19

Hello Steve,

This is a very interesting email, mainly because one of the upcoming features 
we want to deliver with the next release of OpenNebula is integration with bare 
metal provisioning systems. I'd like to take this opportunity to ask for more 
opinions about this feature, so here are a few questions for the community:


- Do you have a favourite PXE installation system you want to see OpenNebula 
integrated with?
- Is there any specific feature of the bare metal provisioning system you'd 
like to see addressed?
- Do you have any ideas / suggestions about this?


For the moment, let me describe what we do internally at OpenNebula to address 
this problem:

- Manually install the bare metal system (once) and configure it:
  - network (DHCP)
  - remove persistent udev rules
  - configure the hypervisor
  - create the oneadmin user, install ruby and the rest of the OpenNebula 
    dependencies
  - add SSH keys
  - etc.
- Restart the server and boot from SystemRescueCd [1]
- Back up the system installation by doing 'dd|gzip' of the disk drive over NFS 
to our NAS (see the sketch below)
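
A minimal sketch of that backup step, assuming the NAS export is already 
mounted on /mnt/nfs (the device and file names are only placeholders):

    dd if=/dev/sda bs=1M | gzip -c > /mnt/nfs/images/xen-host.img.gz
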
Once that's ready, we deploy with a small internally developed webapp that does 
the following:

- the webapp configures tftpboot so that the server will boot SystemRescueCd 
over PXE
- once SystemRescueCd boots, it automatically executes an autorun script [2], 
dynamically served over HTTP by the webapp, which contains the command to dump 
the backed-up image onto the disk (I'm happy to share more configuration-specific 
details; a rough sketch of such a script follows below)
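
To make this concrete, the autorun script is conceptually as simple as the 
following sketch (not our exact script; the NFS export, image name and target 
disk are assumptions):

    #!/bin/bash
    # autorun script served over HTTP by the webapp
    mount -t nfs nas:/export/images /mnt/nfs
    # dump the 'dd|gzip' backup back onto the local disk
    gzip -dc /mnt/nfs/images/xen-host.img.gz | dd of=/dev/sda bs=1M
    reboot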



As you can see this has a few shortcomings, the most important one being that 
we aren't using kickstarts, basically because we want to cover all the 
operating systems.
And to answer your question about what we use for Xen: we are currently using 
openSUSE, since it has out-of-the-box support for Xen, which makes life a lot 
easier.



[1] http://www.sysresccd.org/SystemRescueCd_Homepage
[2] http://www.sysresccd.org/Sysresccd-manual-en_Run_your_own_scripts_with_autorun




Cheers,
Jaime


On Mon, Nov 5, 2012 at 4:34 PM, Steve Heistand <[email protected]> wrote:





I'm curious to know people's thoughts on how to take bare hardware without an 
OS and get a Xen-aware base kernel onto it, so that it will take guest OSes 
from OpenNebula.



I was looking around for options involving various live CDs and found a nice 
livecd-xen thing, but it hangs at boot time on our nodes. The console complains 
about a bad .iso file when it boots, but its md5sum is as it should be. (It's 
booted up with memdisk/gpxelinux/httpd.)
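
For context, a memdisk/gpxelinux setup like the one mentioned here is usually 
driven by a pxelinux.cfg entry roughly like the following sketch (the label, 
server URL and ISO name are placeholders):

    LABEL livecd-xen
      KERNEL memdisk
      INITRD http://boot-server/images/livecd-xen.iso
      APPEND iso raw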



Is there a handy/easy way to get new, empty hardware available for OpenNebula?



thanks



steve







--

************************************************************************

 Steve Heistand                          NASA Ames Research Center

 email: [email protected]          Steve Heistand/Mail Stop 258-6

 ph: (650) 604-4369                      Bldg. 258, Rm. 232-5

 Scientific & HPC Application            P.O. Box 1

 Development/Optimization                Moffett Field, CA 94035-0001

************************************************************************

 "Any opinions expressed are those of our alien overlords, not my own."


_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | [email protected]





