Hi,
You can reference any attribute *inside* the TEMPLATE.
For example, if you add the following to your user template (executing 'oneuser update
id'):
USERNAME=test_name
Then you can use that attribute inside the context:
CONTEXT=[
ROOT_USERNAME = $USER[USERNAME] ]
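Putting the two steps together as a sketch (the attribute name and value are illustrative):

```
# 1) Add the attribute to the user template ('oneuser update id'):
USERNAME = "test_name"

# 2) Reference it from the VM template's context section:
CONTEXT = [
  ROOT_USERNAME = "$USER[USERNAME]" ]
```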
Regards.
--
Carlos
Hi,
You can't import an existing running VM into OpenNebula.
You will have to shut it down, register its disk as an Image [1], and
instantiate a new VM Template [2]. The guest needs to be contextualized
[3], otherwise it won't be able to use the IP that OpenNebula assigns it.
Regards.
[1]
=/usr/bin/pygrub ]
RAW=[
TYPE=xen ]
TEMPLATE_ID=6
VCPU=1
VMID=11
What is wrong?
Regards,
Roberto.
On 05/29/2012 10:49 AM, Carlos Martín Sánchez wrote:
Hi,
You can't import an existing running VM into OpenNebula.
You will have to shut it down, register its disk as an Image [1
Hi,
You will need to open the OpenNebula xml-rpc port (or access it through a
tunnel), and install the CLI and Ruby OCA files in your Host (those located
in /usr/lib/one/ruby/OpenNebula).
Some distros package the opennebula client tools separately [1]. Or you can
download the code and use the
Hi,
Did you follow these configuration steps?
http://opennebula.org/documentation:rel3.4:kvmg#configuration
Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
So... can you ssh to 127.0.0.1 as oneadmin?
Fri Jun 1 13:13:00 2012 [VMM][I]: Command execution fail: scp -r
/srv/cloud/one3/var/remotes//. 127.0.0.1:/var/tmp/one
Fri Jun 1 13:13:00 2012 [VMM][I]: scp: /var/tmp/one/./scripts_common.rb:
Permission denied
Regards
Dear community,
The Java OCA code distributed with the OpenNebula 3.4.x source code had
some minor bugs, and a couple of missing API methods that have been fixed
in the repo.
We have re-packaged the Java OCA, the javadoc and the required lib files in
this new .tar.gz file [1] for 3.4.1.
It is
Hi,
Looks like you are using the image name DISK/IMAGE, but user #4 does not
own any Image with that name.
You'll need to use the IMAGE_ID instead, or tell OpenNebula who is the
image owner with the IMAGE_UID or IMAGE_UNAME attributes. More info here
[1,2]
By the way, be sure to download the
Hi,
In OpenNebula you can't dynamically edit the capacity of a VM (yet, this
will change in the short-term).
So, rather than modifying the webby VM, what you have to do is create a
new Template that uses the webby disk.
The steps you would have to follow are:
1) Save the webby
Hi,
I've found these two related mails [1,2], in which the http_proxy var is
discussed. In [1] Florian also suggests using the no_proxy variable.
Another thing you could check is whether there is any message in oned.log similar
to 'could not bind to port'.
Regards
[1]
Hi,
The template can be deleted, but the image can't.
If the image is persistent, OpenNebula will only allow a maximum of one VM
using it. Apart from that, you can change the permissions [1] so it is
only visible to its owner.
Regards
[1] http://opennebula.org/documentation:rel3.4:chmod
Hi,
If you mean the OpenNebula VM name, you can set any name when a Template is
instantiated.
If you are talking about the name in the hypervisor, you'll have to change
the deployment.0 inside the /var/lib/one/remotes/vmm/*/deploy script. Or
change the deployment.0 generation code (see [1]), but
Hi,
The message 'Could not find information driver im_xen' means that the
driver was not found in oned.conf, or that the driver did not answer the
INIT command. This can be checked in the first lines of oned.log.
Maybe you didn't restart oned after oned.conf was edited?
Regards
On Thu, Jun 7, 2012 at 6:21 AM, Kumar Vaibhav kvaib...@barc.gov.in wrote:
Can this be put as part of the configuration?
I don't want to put a hack into this.
Regards,
Vaibhav
On 6/6/2012 8:11 PM, Carlos Martín Sánchez wrote:
Hi,
If you
Hi,
Old error messages are left in the template to warn you that a problem
existed, even if it was later fixed.
You can ignore old messages, but if they are bothering you, they can be deleted
from the Host template using the 'onehost update' command.
Regards
Hi,
The VM needs to have the contextualization scripts configured to be run at
start-up. The important one is vmcontext; it reads the MAC address and sets
the IP from it: the assigned IP 10.112.10.37 is 0a:70:0a:25 in hexadecimal.
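That mapping is easy to check by hand. A minimal sketch of the conversion the vmcontext script performs (the function names are illustrative, and 02:00 is just an example MAC prefix, not necessarily your oned.conf setting):

```python
# Sketch: OpenNebula-style contextualization encodes the IPv4 address
# in the last four octets of the MAC, one hex byte per IP octet.

def mac_suffix_from_ip(ip):
    """Return the four hex MAC octets that encode an IPv4 address."""
    return ":".join("%02x" % int(octet) for octet in ip.split("."))

def ip_from_mac(mac):
    """Decode the IPv4 address from the trailing four octets of a MAC."""
    return ".".join(str(int(octet, 16)) for octet in mac.split(":")[-4:])

print(mac_suffix_from_ip("10.112.10.37"))   # 0a:70:0a:25
print(ip_from_mac("02:00:0a:70:0a:25"))     # 10.112.10.37
```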
The CONTEXT section in the VM template is completely optional, it
Hi,
oned.log should contain the error message; it will be marked as [E]. You
also have the individual VM log file in /var/log/one/id.log or
$ONE_LOCATION/var/datastores/0/id/vm.log
Regards
Hi,
What OpenNebula version are you using? There was a bug in older versions
that sometimes caused these strange usage values. It was triggered when
several VMs had the same name.
Regards
Hi,
Did you configure the host machine?
http://opennebula.org/documentation:rel3.4:kvmg#configuration
Regards
Hi,
If I recall correctly, that was fixed in 3.4.x versions.
If that was a one-time bug and you just need to reset the values, you can
stop opennebula and edit the host_pool entry in the DB. The body column
contains the complete XML you get with 'onehost show -x'.
Regards
So, the IP is not assigned properly.
Please guide for the same
Thanks and Regards
Ankit Anand
On Mon, Jun 11, 2012 at 6:41 PM, Carlos Martín Sánchez
cmar...@opennebula.org wrote:
Hi,
The VM needs to have the contextualization scripts configured to be
run at start-up. The important one
Hi,
This is kind of basic but, is the IMAGE/PATH file in /home/oneadmin/images?
Maybe PATH is pointing to a symlink, in which case the drivers will
determine the real file path.
Regards
Hi,
Can you paste your VM template, and the output of onevm show?
By the way, OpenNebula has come a long way since 2.2; the latest version is
3.4.
Regards
Hi,
The PHYDEV attribute is used only by the network isolation driver 802.1Q
[1].
When OpenNebula sees it, it assumes you meant to set VLAN=YES.
Regards
[1] http://opennebula.org/documentation:rel3.4:hm-vlan
Hi,
Those are VM template attributes, take a look at
http://opennebula.org/documentation:rel3.4:template#os_and_boot_options_section
http://opennebula.org/documentation:rel3.4:xeng#xen_specific_attributes
Regards
]
MEMORY=1024
NAME=VM
NETWORK=[
BRIDGE=br0,
MODEL=virtio,
NETWORK=LAN ]
OS=[
ARCH=x86_64,
BOOT=hd ]
VMID=34
On Thu, Jun 14, 2012 at 1:16 PM, Carlos Martín Sánchez
cmar...@opennebula.org wrote:
Hi,
Can you paste your VM template, and the output of onevm show?
By the way
Hi,
When a VM is created it is not assigned yet to any host, until the
scheduler decides to deploy it, thus the hook can't be remote.
You can use the RUNNING hook instead.
Regards
Hi,
In the prolog state, OpenNebula is copying files to the destination host.
The time it takes will vary greatly depending on your storage configuration.
Regards
From: Carlos Martín Sánchez cmar...@opennebula.org
Subject: Re: [one-users] VM_HOOK on = CREATE cannot work on remote hosts
Date: Mon, 18 Jun 2012 18:19:16 +0200
Hi,
When a VM is created it is not assigned yet to any host, until the
scheduler decides to deploy it, thus the hook can't be remote
Hi,
You don't need to start the VNC server inside the guest OS; OpenNebula can
create a VNC connection directly to the hypervisor. To do that, use the
GRAPHICS attribute [1]. You can then connect to the Host's IP, or just
click the VNC icon in Sunstone.
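For reference, a minimal GRAPHICS section might look like this (exact attribute support depends on your version, so check the guide in [1]):

```
GRAPHICS = [
  TYPE   = "vnc",
  LISTEN = "0.0.0.0" ]
```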
Regards
[1]
Hi Florian,
The installation process creates a new 'oneadmin' user and group, although
the group may be named 'cloud' by some distros packages... I'm not sure
right now of which ones.
Throughout the documentation we refer to the oneadmin's group, so the actual
name is not really relevant.
Best
Dear community,
We have updated the Demo Cloud [1] to the just released OpenNebula 3.6 Beta.
I'm sure all of you already have an account, but just in case we have
also published some screenshots in our blog [2] to let everybody get an
idea of the new Sunstone look.
Enjoy, test, and report your
Hi,
Are you using 3.4.0, or 3.4.1? We had a bug in 3.4.0 [1] that could cause
those negative running VMs counters
Regards
[1] http://dev.opennebula.org/issues/1237
Hi Rolandas,
Thank you for reporting this, we'll take a look soon [1].
Best regards
[1] http://dev.opennebula.org/issues/1316
Hi,
Since OpenNebula 3.4 you can change the restricted attributes in oned.conf,
see [1].
If for any reason you can't upgrade, the only option you have left is to
modify the code. The list of restricted VM attributes is defined in
src/vm/VirtualMachineTemplate.cc,
Hi,
You need to set the network driver to 'dummy', not 'vnm_dummy'. Did you
copy the driver name from somewhere in our documentation?
Regards
Hi,
Is the datastore using the shared tm_mad driver [1]? Can you check if the
datastores are correctly exported to the Host, and with the right
permissions so oneadmin can use them?
Regards
[1] http://opennebula.org/documentation:rel3.4:fs_ds
Hi,
Could you please paste the complete output of 'onevm show' and 'onevm show
-x' ?
Regards
Thanks
On 2 July 2012 21:02, Carlos Martín Sánchez cmar...@opennebula.org wrote:
Hi,
The easiest way to add any attributes to the deployment file is using the
RAW attribute inside
Hi,
Did you follow the configuration steps to enable TCP [1]?
Regards
[1] http://opennebula.org/documentation:rel3.6:kvmg#configuration
Fixed, thanks!
On Tue, Jul 10, 2012 at 6:18 PM, christopher barry
Hi,
Persistent Images cannot be used by more than one VM, but as you said
simultaneous clone operations should be allowed.
Thank you for reporting it, we'll fix it in the next release [1]
Regards,
Carlos
[1] http://dev.opennebula.org/issues/1357
Hi,
We forgot to announce in this thread that this issue was addressed for the
final 3.6 release.
You can now have more than one system datastore, and set a different one
for each cluster. See [1] for more information.
Thank you for your feedback!
Carlos.
[1]
Hi,
Non-persistent images [1] can be used by many VMs simultaneously. You can
create a non-persistent CDROM type Image, and set in your VM template [2]:
OS = [ BOOT = cdrom ]
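A sketch of the whole flow (the image name and paths are illustrative):

```
# Image template, registered with 'oneimage create':
NAME       = "installer-cd"
TYPE       = "CDROM"
PATH       = "/tmp/installer.iso"
PERSISTENT = "NO"

# VM template:
DISK = [ IMAGE = "installer-cd" ]
OS   = [ BOOT  = "cdrom" ]
```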
Regards
[1] http://opennebula.org/documentation:rel3.6:img_guide
[2]
Hi,
OpenNebula will only monitor and manage VMs created through it; it will not
interfere with your existing VMs.
But because OpenNebula assumes exclusive use of the hosts, the scheduler
won't take into account the resources used by the previous VMs. You can
however set manually how much host
Hi,
I saw in your other email that you fixed the migration; nevertheless, this
sql error is a bug that will be fixed in the next release [1].
The error message is unrelated to the migration action, and can be ignored
as it does not really introduce any problem.
Cheers
[1]
Hi,
If I understood correctly, you want to share the qcow datastore, but use
local storage for the system datastore.
To do this, simply set the ssh TM driver for the system DS [1], and make
sure you are exporting only /var/lib/one/datastores/100.
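As a sketch, the system DS (ID 0) would be updated with 'onedatastore update 0' to contain:

```
TM_MAD = "ssh"
```

With the ssh driver, the VM disk files are copied to the host-local system datastore instead of relying on a shared mount.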
Regards
[1]
Fixed. Thanks for reporting!
On Tue, Jul 17, 2012 at 10:17 PM, Mark Wagner
/18 Carlos Martín Sánchez cmar...@opennebula.org
Hi,
If I understood correctly, you want to share the qcow datastore, but use
local storage for the system datastore.
To do this, simply set the ssh TM driver for the system DS [1], and make
sure you are exporting only /var/lib/one/datastores
Hi,
Does the file /vmlinuz exist in the host?
Actually, kvm usually does not need an external kernel, see [1]
Regards
[1]
http://lists.opennebula.org/pipermail/users-opennebula.org/2012-May/009119.html
Hi,
Can you provide some more information, like the installation method, and
the permissions of those directories (if they exist)?
Please check also if the processes 'oned' and 'mm_sched' are running.
Maybe you started the core daemon, 'oned', instead of 'one start'... or
tried to execute 'one
Hi Michael and Jan,
I've been trying to reproduce your problem, and everything works fine for
me. Maybe this is a documentation problem, and some concepts are not as
clear as we thought.
Each resource has an owner and group, and permissions for each of them. The
permissions are set with the
Hi,
The scenario you describe is already supported by OpenNebula.
If you use any of the network isolation drivers [1], users can create new
isolated networks using the VLAN=YES attribute in the VNet template [2].
To make things even easier, we recently released a virtual router appliance
[3].
that.
Maybe Michael has the same issue.
Michael - can you confirm it?
Jan
On 23.07.2012 11:57, Carlos Martín Sánchez wrote:
Hi Michael and Jan,
I've been trying to reproduce your problem, and everything works fine
for me. Maybe this is a documentation problem, and some
;-)
Best Regards
Michael
From: Jan Benadik [mailto:jan.bena...@atos.net]
Sent: Tuesday, July 24, 2012 08:30
To: Carlos Martín Sánchez
Cc: rus...@rus.uni-stuttgart.de; users@lists.opennebula.org
Subject: Re: [one-users] Problem with virtual network ACLs for multiple
Hi,
This is a known issue [1, 2], but thanks for the great feedback!
Cheers
[1] http://opennebula.org/documentation:rel3.6:known_issues
[2] http://dev.opennebula.org/issues/1351
Hi,
The VM instance cannot be redefined, except for disk hot-plugging.
A VM Template is like a text file that OpenNebula stores for you, but once
a VM instance is created, there is no relation between the two of them. So
as you said, the changes you make to a Template will only affect new VM
Hi,
Looks like your datastore location is not properly exported to the host. By
default, the Transfer Manager (TM) driver for the System (0) and Default
(1) Datastores is 'shared'.
This driver assumes that /var/lib/one/datastores/0 and .../1 are mounted.
More information here:
Hi,
On Fri, Jul 27, 2012 at 2:02 AM, Jason Fairfield pegasu...@siu.edu wrote:
First, what is the 3.6 equivalent for this command (from the Open Nebula
Book):
onehost create host09 im_kvm vmm_kvm tm_ssh dummy
Trying to follow the man page as closely as possible I'm using this:
onehost
Hi,
Can you please explain a bit more about that hack needed for Ubuntu?
Cheers
Hi,
As you said, the common cause for the 'MAD did not answer INIT command'
error is that the timeout is reached because of high load... I can't
remember any other obvious reason.
If you find the same problem again, you could try to execute the following:
$ . /usr/lib/one/mads/one_im_ssh -r 0 -t 15
Hi,
Can you share some more information about your scenario? Are you using
sqlite, or mysql? MySQL can drastically improve the performance over sqlite.
How are you querying OpenNebula: are you using the CLI, or our ruby/java OCA?
The response time can be affected by the xml processing that the OCA
Hi,
I can't reproduce this behaviour.
I'm using the drivers -,shared for the system datastore; and fs,shared for
the default datastore. What drivers are you using? Are your images
persistent?
About the CLI-Sunstone commands consistency, both end up calling the ruby OCA
and then the core, so they
Hi,
OpenNebula has the objects cached in memory, so you cannot simply inject
new data into the DB. Besides, the (live)migration operation is not only a
change in the VM hostname, it involves history records, accounting, and
Host capacity.
The ideal solution would be to make that load balancer
using KVM and my images are only clones...
Regards
Cyrille
On Tuesday 04/09/2012 at 15:14, Carlos Martín Sánchez wrote:
Hi,
I can't reproduce this behaviour.
I'm using the drivers -,shared for the system datastore; and fs,shared for
the default datastore. What drivers are you using? Are your
every part of my code with time measurements and traced it down
to the xml-rpc requests to opennebula.
Hope this helps.
Regards,
Christoph Robbert
[1] https://github.com/lukaszo/python-oca
On 04.09.2012 12:59, Carlos Martín Sánchez wrote:
Hi,
Can you share some more information
Well, it depends on what you consider healthy.
If you only need to check if the process is responding, a simple 'oneuser
show' would be enough.
Or you could check the number of Hosts or VMs in 'fail' state to get a
quick warning about a problem in your cloud.
Regards
component.
These drivers and the complete cluster stack will be released soon.
Regards,
Nicolas AGIUS
--- On Tue, 4.9.12, Carlos Martín Sánchez
cmar...@opennebula.org wrote:
From: Carlos Martín Sánchez cmar...@opennebula.org
Subject: Re: [one-users] Update hostname after
Hi,
Looks like the problem is related to the ssh keys. Can you check if
oneadmin can ssh without being prompted for a password to 'kvm01' and 'front'?
Regards
Hi,
If you are using the 'shared' Transfer Manager (TM) driver, then the
directory /var/lib/one/var/datastores/0/0 should exist in both the
front-end and the host.
Please check if /var/lib/one/var/datastores/0 and /1 are correctly mounted,
and have the right permissions so oneadmin can use them.
,
Thanks for your quick answer.
I'm using KVM and my images are only clones...
Regards
Cyrille
On Tuesday 04/09/2012 at 15:14, Carlos Martín Sánchez wrote:
Hi,
I can't reproduce this behaviour.
I'm using the drivers -,shared for the system datastore; and fs,shared
for the default datastore
Hi,
We don't have global quotas; the limits can only be set for each user or
group. You could develop an authentication driver that denies the creation
of new VMs based on global usage limits, but this is not straightforward.
If you execute any one* command from the drivers, the request will be
Hi,
The ACPU (Available CPU) shows how much CPU percentage is still not
reserved for the running VMs. In your case it is 0, so you'll need to
shutdown some of the running VMs.
You can use cpu over-commitment with the CPU and VCPU attributes. For
example, if you need a VM with 2 Virtual CPUs, you don't
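The over-commitment idea as a template fragment (the values are only an example): CPU is the share of a physical CPU the scheduler reserves on the host, while VCPU is what the guest sees.

```
CPU  = 0.5
VCPU = 2
```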
Hi,
New Users are created by default in the same group as the user creating
them, except for the oneadmin group, which creates them in 'users'.
So, the easy way to do it is create the new users from another user that
belongs to the different group. Let's call this group 'newgroup', what you
need to
Martín Sánchez
cmar...@opennebula.org wrote:
If you are using the 'shared' Transfer Manager (TM) driver, then the
directory /var/lib/one/var/datastores/0/0 should exist in both the
front-end and the host.
Huh? Why is the front-end involved at all? The copying or staging of the
disk
understood well the upgrade guide, I have to go from 3.2 to 3.4 and then
from 3.4 to 3.6?
If so, I read that I have to shutdown or delete all active VMs; should I
just stop them?
Thanks in advance
Cyrille
On Monday 10/09/2012 at 12:36, Carlos Martín Sánchez wrote:
Hi,
Datastores were introduced
Hi,
I'm not sure what can be wrong without more information.
What opennebula version are you running? Can you paste the error you get
from oned.log?
Regards
Just forwarding a mail that I assume was intended for the list...
Dear Ingo,
It looks indeed like a permissions problem. Let's check.
* the id of oneadmin in the OpenNebula front-end and ESX is 1?
* can you do a touch /vmfs/volumes/srvcloudonevar in the ESX?
Not sure about this, but you should try without the leading slash:
vmkfstools -username root
a 400-second stall.
Thanks for your help.
Regards,
Christoph Robbert
On 05.09.2012 14:17, Carlos Martín Sánchez wrote:
Hi,
Let's try to rule out one thing at a time.
Did you set any timer values in oned.conf that may overload opennebula?
If the values of MANAGER_TIMER, HOST and VM
On 20.09.2012 11:51, Carlos Martín Sánchez wrote:
- time frame defined running state (when VM is deployed, it can be
specified to run and stop it at certain time, i.e. start daily at 6.00 AM
and stop 20.00 PM in working days only OR deployment of VM for time limited
period - i.e. 2
Please, try to follow some basic netiquette rules.
Your message is totally unrelated to the topic of this thread; you will get
more help if you open a new thread with a well developed question.
Regarding your question, it will probably be easier in the distribution you
are familiar with.
in Version 1.8.7 (2010-08-16 patchlevel 302).
I didn't find any installation of the nokogiri ruby gem, so I can't
provide you a version.
Regards,
Christoph
On 21.09.2012 12:28, Carlos Martín Sánchez wrote:
Hi Christoph,
I'm sorry but I ran out of ideas. The only thing I can try
is more important than Knowledge
Albert Einstein
On Monday 01/10/2012 at 12:57, Carlos Martín Sánchez wrote:
Hi,
The files should have been deleted. Do you have any error messages in
/var/log/one/oned.log?
You can also check each image file path using 'oneimage show', it's the
SOURCE attribute
Hi there Gandalf,
If a Host fails, the VM memory state is lost and OpenNebula can't migrate
it. But the failure will be detected and the VM can be redeployed in
another Host, see [1]
Using the 'onevm resched' command, VMs can be re-scheduled and migrated to
another host. For more information see
On Tue, Oct 2, 2012 at 2:21 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/10/2 Carlos Martín Sánchez cmar...@opennebula.org:
Hi there Gandalf,
If a Host fails, the VM memory state is lost and OpenNebula can't migrate
it. But the failure
Carlos Martín Sánchez cmar...@opennebula.org:
You'll need to do a script for that.
What do you plan to use as a trigger for the rescheduling?
I'm still looking. Do you have any advice on which is the best for this?
state which is using
it... :)
Kind regards
CyD
Blog : http://blog.cduverne.com
Twitter : @CydsWorld
Imagination is more important than Knowledge
Albert Einstein
On Tuesday 02/10/2012 at 13:42, Carlos Martín Sánchez wrote:
Hi,
On Mon, Oct 1, 2012 at 2:42 PM, Duverne, Cyrille
Hi,
The scheduler log is referring to the CPU and MEMORY requirements.
The available Host CPU is the ACPU column of 'onehost list'. If you do a
'onehost show', you'll see this value corresponds to
TOTAL CPU - USED CPU (ALLOCATED). The available memory follows the same
idea.
Now, here's the
Hi,
The core automatically sets the CLUSTER requirement if the VM uses one of
the following:
- an Image stored in a Datastore assigned to a Cluster
- a VNet assigned to a Cluster
You will need to add the Host to the Cluster, or use Images/Vnets available
to that Host.
By the way, I recommend
, 2012 at 11:46 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/10/5 Carlos Martín Sánchez cmar...@opennebula.org:
Since you are using the shared transfer manager drivers, the first thing
to
check is that /var/lib/one/datastores/1 is correctly mounted in xen01
Hi,
Is that image working if you use xen directly?
Some hints about your template:
- KERNEL and INITRD paths must be accessible in the Host, not the front-end
[1].
- The SOURCE attribute does not mean anything to opennebula; it is ignored.
- NIC: It's easier to create a vnet [2] and then use
Hi,
The log messages do not show any error because the ImageDelete method
invoked is successful: the image is ready, not used by any VM, and
the _request_ to delete it is granted.
After the 'oneimage delete' request is returned, the core starts a
Datastore RM operation. This
Hi,
Please find my comments inline
On Thu, Oct 4, 2012 at 10:28 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
Let's assume a single OpenNebula installation with 5 nodes managed.
In a dedicated VM (outside Opennebula) i'll install oZones.
Since you are only going to
Hi,
You may find these links useful:
http://opennebula.org/documentation:rel3.6:manage_users#users
http://opennebula.org/doc/3.6/oca/java/
http://opennebula.org/doc/3.6/oca/java/org/opennebula/client/Client.html
http://opennebula.org/documentation:rel3.6:api
Hi,
You can't have 2 different OpenNebula daemons managing the VMs, because all
the data is cached, and that would cause inconsistency issues.
Both of them will try to monitor the VMs and Hosts, and the schedulers will
deploy VMs without any synchronization.
Is there any reason you need to do
If you are referring to the DB upgrade, there is no process to upgrade to
master; the onedb code will only upgrade to a specific version.
Regards
Please stop sending a daily mail with the same problem. It's annoying and
counter-productive.
You will get better help if you put some effort in your question. I really
like how this is explained in the Stack Overflow guide on how to ask [1].
Can you execute CLI commands with those
Hi,
This is all explained in the Contextualization guide. This [1] is a link to
the 3.8 guides, but the contextualization mechanism is compatible with
previous versions and you can benefit from the new contextualization
packages [2].
Regards
[1]
Hi,
If you don't get any error message, it may be that opennebula did not
finish moving the disk files.
The final destination for the disk files in the Host is
/var/lib/one/datastores/0/vm_id, take a look there to see if the files
are still being copied.
Regards