Hi,

The monitoring reports the total CPU and memory of the host, and OpenNebula
then assumes all of it is available for VMs. You can make static adjustments
in the scheduler configuration [1], or you can look into the monitoring
scripts and modify them to suit your needs [2]; rough sketches of both below.
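
For [1], something along these lines (untested; parameter names can change
between versions, so check the linked guide for yours). In /etc/one/sched.conf
you can reserve a fixed fraction of each host's memory, e.g.:

    # /etc/one/sched.conf
    # Fraction of total host MEMORY the scheduler treats as unavailable
    # for VMs (0.1 = keep 10% back)
    HYPERVISOR_MEM = 0.1

For [2], the idea would be a probe that reports the totals minus whatever the
HPC batch system has currently allocated on the node. A minimal sketch,
assuming you write hpc_allocated_cpu / hpc_allocated_mem yourself (they are
placeholders querying your batch system, not something that ships with
OpenNebula):

    #!/bin/bash
    # Report the host capacity left over by HPC jobs.
    # TOTALCPU is in percent (100 = one core) and TOTALMEMORY in kilobytes,
    # matching what the stock IM probes report.

    TOTAL_CPU=$(( $(nproc) * 100 ))
    TOTAL_MEM=$(awk '/MemTotal/ {print $2}' /proc/meminfo)

    ALLOC_CPU=$(hpc_allocated_cpu)   # placeholder: query your batch system
    ALLOC_MEM=$(hpc_allocated_mem)   # placeholder: query your batch system

    echo "TOTALCPU=$(( TOTAL_CPU - ALLOC_CPU ))"
    echo "TOTALMEMORY=$(( TOTAL_MEM - ALLOC_MEM ))"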

A simpler approach would be to disable the hosts [3] when you plan to use
them for other jobs; the scheduler skips disabled hosts entirely.
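
For example, from the front-end (the host name is a placeholder):

    $ onehost disable node01   # hand the node to the HPC queue; no new VMs go here
    $ onehost enable node01    # give it back to OpenNebula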

Regards

[1] http://opennebula.org/documentation:rel4.2:schg#configuration
[2] http://opennebula.org/documentation:rel4.2:img
[3] http://opennebula.org/documentation:rel4.2:host_guide#enable_disable_and_flush

--
Join us at OpenNebulaConf2013 <http://opennebulaconf.com> in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | [email protected] | @OpenNebula <http://twitter.com/opennebula>


On Thu, Aug 1, 2013 at 5:59 PM, Dmitri Chebotarov <[email protected]> wrote:

>  Hi
>
>  I've noticed that ONED continuously monitors compute nodes for available
> resources (CPU/MEM), so I had this idea of sharing compute nodes between
> an HPC cluster and OpenNebula.
> The HPC cluster is not necessarily loaded at 100% all the time and may
> have spare resources to host VMs.
> If I added those nodes to ONE, do you think sharing resources would work?
> The idea is that when the HPC scheduler assigns jobs to a compute/ONE
> node, the ONE scheduler will monitor the node, "see" that it doesn't have
> CPU/MEM resources available, and won't use it for new VMs....
>
>
>  Thanks.
>