--On October 9, 2009 12:07:54 PM -0400 Sean Dilda <s...@duke.edu> wrote:


1. Use API calls instead of provided perl scripts -  My experience
writing cacti and nagios checks with the VIPerl Toolkit has shown that
there's a heavy CPU hit for a few seconds while perl processes all of the
VIPerl libraries.  Calling those scripts several times in the loading can
cause problems with larger VCL deployements.  Rewriting it to use the API
calls should reduce that CPU hit to once per reservation.
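The overhead argument above can be illustrated without VIPerl at all: spawning a fresh interpreter per check pays startup and compile cost every time, while an in-process call pays it once. This is a minimal sketch using only core Perl; the VIPerl libraries make the per-invocation cost far larger than a bare `perl -e`, which is the point of the proposed rewrite.

```perl
#!/usr/bin/perl
# Sketch: per-invocation scripts vs. in-process calls (core Perl only).
# A real VCL load would be invoking the VIPerl-based scripts, whose
# compile phase is what causes the multi-second CPU hit described above.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $runs = 20;

# Per-invocation model: a new perl process for every check.
my $t0 = [gettimeofday];
system($^X, '-e', '1') for 1 .. $runs;
my $spawned = tv_interval($t0);

# In-process model: the code is already compiled; just call it.
sub check { return 1 }
$t0 = [gettimeofday];
check() for 1 .. $runs;
my $inproc = tv_interval($t0);

printf "spawned: %.3fs  in-process: %.6fs\n", $spawned, $inproc;
```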


Is it a REST or XML-RPC API? There's an xmlrpc_call routine in utils.pm that could be used; for REST calls, another routine would need to be added.

2. Base the new vmx on the image's vmx.  This allows the image creator to
customize the VM hardware and we will carry those settings along.
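Since .vmx files are flat "key = \"value\"" text, basing the new vmx on the image's vmx can be as simple as parsing the image's file and overriding only the per-reservation keys. This is a hypothetical sketch, not the actual vmware.pm code; the key names shown (numvcpus, displayName, ethernet0.address) are standard vmx parameters, but the override set and naming are illustrative.

```perl
#!/usr/bin/perl
# Hypothetical sketch: build a reservation's .vmx from the image's .vmx,
# carrying over the creator's hardware customizations and replacing only
# per-reservation keys (name, MAC, etc.).
use strict;
use warnings;

# Parse vmx 'key = "value"' lines into a hash, keeping first-seen order.
sub parse_vmx {
    my ($text) = @_;
    my (%conf, @order);
    for my $line (split /\n/, $text) {
        next unless $line =~ /^\s*([\w.:]+)\s*=\s*"(.*)"\s*$/;
        push @order, $1 unless exists $conf{$1};
        $conf{$1} = $2;
    }
    return (\%conf, \@order);
}

# Merge per-reservation overrides into the image's vmx text.
sub build_vmx {
    my ($base_text, $overrides) = @_;
    my ($conf, $order) = parse_vmx($base_text);
    for my $key (sort keys %$overrides) {
        push @$order, $key unless exists $conf->{$key};
        $conf->{$key} = $overrides->{$key};
    }
    return join '', map { qq{$_ = "$conf->{$_}"\n} } @$order;
}

# Example: the image creator's 2-vCPU setting survives; the display name
# and MAC address are replaced for this (hypothetical) reservation.
my $image_vmx = <<'VMX';
numvcpus = "2"
memsize = "1024"
displayName = "base-image"
VMX
my $new_vmx = build_vmx($image_vmx, {
    'displayName'       => 'vclv1-42',
    'ethernet0.address' => '00:50:56:00:00:42',
});
print $new_vmx;
```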

3. Remove dependency on ssh/scp to the ESX host.  This will make it easier
to set up for ESXi, as well as allowing deployments to use vCenter and deploy
to a cluster instead of to an individual host.

Sounds like a good approach. We may need to make #3 optional, depending on which storage options are being used as the datastore, etc.


I also looked through the vmware.pm and had a question that I'm hoping
someone can answer.  In the load() subroutine, the module checks through
/var/log/messages for dhcpd responses before checking on ssh.  What is
gained by doing this?  It seems this could be cleaned up by just checking
with pings and ssh attempts.  This would simplify the code, as well as
remove the dependency on running dhcpd on your management node.  We're
planning on using our campus DHCP servers for our VCL environment, so
avoiding that dependency would be a big plus for us.


Correct, these could be changed. Sourcing the messages log is left-over logic from xCAT 1.3 that got ported into vmware.pm. We'd need to check, but I think some of those checks correspond to the loading status window. It can definitely be cleaned up, though.
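The cleaned-up check Sean describes could be sketched roughly as follows: poll the node with a TCP "ping" and then an ssh attempt until one succeeds or a timeout expires, with no dhcpd log involved. This is an illustrative sketch only; the node name, timeouts, and helper names are assumptions, not vmware.pm code.

```perl
#!/usr/bin/perl
# Hypothetical sketch of a ping/ssh readiness check, replacing the
# /var/log/messages dhcpd scan. Helper names and timeouts are illustrative.
use strict;
use warnings;
use Net::Ping;
use Time::HiRes qw(sleep);

# Retry a check coderef until it succeeds or $timeout seconds pass.
sub wait_for {
    my ($check, $timeout, $interval) = @_;
    my $deadline = time + $timeout;
    while (time < $deadline) {
        return 1 if $check->();
        sleep $interval;
    }
    return 0;
}

# Consider the node "up" once something answers on the ssh port.
sub node_is_up {
    my ($node) = @_;
    my $p = Net::Ping->new('tcp', 2);   # TCP connect check; no root needed
    $p->port_number(22);
    my $ok = $p->ping($node);
    $p->close;
    return $ok;
}

# Confirm ssh actually works; BatchMode makes a password prompt fail
# fast instead of hanging the load process.
sub ssh_works {
    my ($node) = @_;
    return system('ssh', '-o', 'BatchMode=yes', '-o', 'ConnectTimeout=5',
                  $node, 'true') == 0;
}
```

Usage would be along the lines of `wait_for(sub { node_is_up('vclv1-42') }, 300, 5)` followed by `wait_for(sub { ssh_works('vclv1-42') }, 120, 5)`, with a reservation failure if either returns false.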

Aaron Peeler
OIT Advanced Computing
College of Engineering-NCSU
919.513.4571
http://vcl.ncsu.edu
