Hello again!
In the documentation you sent me I did not find anything about migrating
a VM from one data centre to another, only about managing VMs from
within a single data centre.
Thanks!
Date: Fri, 17 Jan 2014 19:09:06 +0200
From: io...@hackaserver.com
thanks, Jaime
On Tue, Jan 21, 2014 at 3:11 PM, Jaime Melis jme...@opennebula.org wrote:
Hi,
you can check out the single lock shared lvm addon:
http://wiki.opennebula.org/shared_lvm
https://github.com/OpenNebula/addon-shared-lvm-single-lock
cheers,
Jaime
On Tue, Jan 21, 2014 at 6:32
Hi,
On Mon, Jan 20, 2014 at 4:46 PM, Stuart Kenny stuart.ke...@scss.tcd.ie wrote:
Hi, is it possible to tell OpenNebula that a VM that is in the unknown
state on a failed host is now running on a new host? It doesn't seem to be
possible to edit the database to do this as the changes get
Hi, thanks for the reply. Could you tell me where the cached data is stored?
Thanks,
Stuart.
On 21/01/2014 09:48, Carlos Martín Sánchez wrote:
Hi,
On Mon, Jan 20, 2014 at 4:46 PM, Stuart Kenny
stuart.ke...@scss.tcd.ie wrote:
Hi, is it possible to tell
It seems that there are more people having this problem and we are
taking a look at several ways to fix this. One problem with /var/run
is that it is normally owned by root, and a process started by the
oneadmin user cannot write there. In the frontend a new directory for
OpenNebula pid files is
Hi,
On Tue, Jan 21, 2014 at 11:02 AM, Stuart Kenny stuart.ke...@scss.tcd.ie wrote:
Hi, thanks for the reply. Could you tell me where the cached data is
stored?
Thanks,
Stuart.
In memory, the oned process caches the info read from the DB.
Regards
On 21/01/2014 09:48, Carlos Martín
Javier Fontan jfon...@opennebula.org writes:
It seems that there are more people having this problem and we are
taking a look at several ways to fix this. One problem with /var/run
is that it is normally owned by root, and a process started by the
oneadmin user cannot write there. In the frontend
Hi,
On Fri, Jan 17, 2014 at 9:25 PM, ML mail mlnos...@yahoo.com wrote:
Hello,
I am creating a few OS images in qcow2 format of various Linux
distributions for my OpenNebula 4.4 installation and was wondering what
image size you would recommend. I see most images being between 5 and
10 GB.
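One point worth noting here: qcow2 is a sparse format, so the virtual size you pick costs almost nothing on disk until the guest actually writes data. A minimal sketch (the image name "debian.qcow2" and the 10G virtual size are example values; the block skips itself when qemu-img is not installed):

```shell
# Sketch: create a 10 GB qcow2 image for a distro install.
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img create -f qcow2 debian.qcow2 10G
  # The reported "disk size" stays far below the 10 GB virtual
  # size until the guest writes data into the image.
  qemu-img info debian.qcow2
else
  echo "qemu-img not installed; sketch only"
fi
```

Because of this, leaning toward a larger virtual size is usually the safer choice, since growing a qcow2 image later is more disruptive than letting unused space sit unallocated.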
Hi Dirk,
So libvirt looks to be working. Weird; are you sure there are no more
error messages in the log files?
Let's see if the VMware deploy works correctly in your front-end (it may
be missing dependencies). For that, please power off VM 38, and run the
following in the front-end:
$
Hi,
I've gotten down to only one collectd-client.rb process (see
below). Are the multiple kvm probes OK?
Regards,
Gerry
root@host101:~# ps -ef | grep one
oneadmin 3349 1 0 12:23 ? 00:00:00 ruby
/var/tmp/one/im/kvm.d/collectd-client.rb kvm
Hi,
Is there any recommended way to reboot a host without migrating the
VM from it to another host?
The issue I have is that I need to reboot all the hosts, but as they
are running a lot of large Windows images (up to 100GB) the logistics of
live migration is complicated.
Can the
I have the same needs for maintenance and performance tuning of my
organization's systems. Suggestions are highly appreciated.
Gene
On Tue 21 Jan 2014 08:12:24 AM EST, Gerry O'Brien wrote:
Hi,
Is there any recommended way to reboot a host without migrating
the VM from it to another host?
Hi
First, we'd like to thank you all for the comments and feedback on
OpenNebula 4.6. We really appreciate it :) Our goal for 4.6 is two-fold:
improve the federation module (aka ozones) and improve OpenNebula
usability (based on user comments received during the last releases).
It's been
Probably: poweroff or suspend the VMs, then reboot the host. Once the host
is back online just resume the VMs... (double-check that any shared FS is
properly mounted, especially those with the VM disk images).
You can also use --hard in case your VMs are not ACPI-aware. Note that
suspend saves the
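The suggested procedure can be sketched as a small script, assuming the stock onevm CLI. HOST and VM_IDS are placeholder values, and the run() helper falls back to printing each command when onevm is not on the PATH, so the sketch can be reviewed as a dry run:

```shell
#!/bin/sh
# Sketch: take VMs down before a host reboot, resume them afterwards.
# HOST and VM_IDS are example values; adapt them to your deployment.
HOST=host101
VM_IDS="38 42"

# Execute the command if the OpenNebula CLI is present,
# otherwise just print it (dry run).
run() {
  if command -v onevm >/dev/null 2>&1; then "$@"; else echo "DRY-RUN: $*"; fi
}

for vm in $VM_IDS; do
  # "onevm suspend" also works here; --hard is for guests
  # that do not respond to ACPI.
  run onevm poweroff "$vm"
done

# ... reboot $HOST here, and double-check that shared filesystems
# are mounted again before resuming ...

for vm in $VM_IDS; do
  run onevm resume "$vm"
done
```

The poweroff/resume cycle avoids any migration traffic at the cost of VM downtime, which is the trade-off being discussed in this thread.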