Hello
Is it right that OpenNebula currently does not support live migration without
shared storage? I get this error when trying:
Fri Jul 11 22:14:32 2014 [VMM][E]: migrate: Command "virsh --connect
qemu:///system migrate --live one-11 qemu+ssh://192.168.122.200/system"
failed: error: unable to res
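For what it's worth, libvirt itself can live-migrate without shared storage if it is told to copy the disks along; whether OpenNebula's stock migrate driver passes these flags is a separate question. A sketch of the underlying virsh call, reusing the VM and host from the error above:

```shell
# Live migration with block copy, so no shared storage is needed.
# --copy-storage-all streams the full disk to the destination;
# --copy-storage-inc copies only missing blocks (destination image
# must already exist).
virsh --connect qemu:///system migrate --live --copy-storage-all \
    one-11 qemu+ssh://192.168.122.200/system
```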
Hi Tino,
I saw your response online, but it hasn't shown up in my inbox just yet (I
blame Yahoo! Mail). These are ESX hypervisors. So I'm pretty sure I can set
de-dupe options in the storage section of the vSphere client, but I'm not
really sure how that interacts with OpenNebula.
Thanks!
Brad
Hi,
I have 1 ON server and 1 additional ON host.
I would like the setup so that ALL storage / datastores are on the iSCSI
SAN.
The iSCSI volume is mounted on each server as ext4 (_netdev) using the
native CentOS iSCSI drivers.
The question is: I read that /var/lib/one should be shared using NFS
from
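For concreteness, the kind of fstab entries this layout would mean on each host (the device name and export path are placeholders, not a recommendation):

```
# iSCSI-backed datastore directory; _netdev delays the mount until the
# network (and iscsid) is up
/dev/sdb1              /var/lib/one/datastores  ext4  _netdev       0 0
# alternatively, /var/lib/one exported over NFS from the front-end
frontend:/var/lib/one  /var/lib/one             nfs   soft,_netdev  0 0
```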
Hi Alberto,
If you see your 5 IT VMs in the UK OpenNebula, it may be because the whole
master DB was copied into the slave, and not just the few tables needed for
the OpenNebula federation.
Can you please confirm that the dump made during this step [1] only
contains these tables:
user_pool
group_
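A quick way to check what a dump actually contains (the table names in the comment are illustrative; the authoritative list of shared federation tables is in the federation guide):

```shell
# List the tables a dump would create; a federation slave dump should
# only contain the shared pools (e.g. user_pool, group_pool), never the
# zone-local VM/host/datastore pools. Dump filename is a placeholder.
grep -oP 'CREATE TABLE `\K[^`]+' master_dump.sql
```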
Hi Brad,
Are you planning to use OpenNebula with ESX hypervisors, or with KVM/XEN?
Best,
-Tino
> From: Brad
> Date: Fri, Jul 11, 2014 at 4:58 PM
> Subject: [one-users] Netapp Deduplication
> To: Users OpenNebula
>
>
> Hi,
>
> I was looking to see if OpenNebula supports deduplication; like th
Hi,
I was looking to see if OpenNebula supports deduplication, like that which you
can use with VMware and Netapp as described here:
https://communities.netapp.com/community/netapp-blogs/virtualization/blog/2011/03/24/the-4-most-common-misconfigurations-with-netapp-deduplication
I'm currently w
Hi,
After rebooting my KVM host I have some VMs in state FAILED. There is no
option to repair these VMs other than using "delete --recreate", but if I
use it, the data stored in the VM is lost. What am I doing wrong? How can I
recover a VM from a damaged host without data loss?
Second question, if I l
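Not an official procedure, but one thing that has worked for me before resorting to "delete --recreate" is saving the disk images off the broken host first (the datastore ID 0, VM ID 11 and hostname below are placeholders, assuming the default system-datastore layout):

```shell
# Copy the VM's disks out of the system datastore on the failed host
# before recreating it; paths follow the default <ds_id>/<vm_id> layout.
scp -r root@failed-host:/var/lib/one/datastores/0/11 /var/tmp/vm-11-backup
```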
Hi,
I think this is a problem with the Ruby version. I did a quick test, and
1.8.7 accepts your command, while 1.9 and 2.1 need dd/mm/yy.
We will try to find a way to force the same format, or at least we will
document which one should be used [1] in each case.
Regards.
[1] http://dev.opennebula.o
Hi Leszek,
I've faced a similar problem with KVM virtualization (a floating IP for HA
not working at all), and the origin was the network filtering rules
applied by libvirt at the host level. Setting the NIC option in
/etc/one/vmm_exec/vmm_exec_kvm.conf to
NIC = [ model="virtio" ]
instead of my pr
Hello. I've got a problem with adding a secondary IP to my VM's network
interface. If I add the IP using:
ip a a 192.168.100.100/24 brd 192.168.100.255 dev eth0
it won't respond to pings. I used tcpdump on this interface inside the VM
and it responds to ping, but the response goes to a different MAC that it
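One thing worth trying when a secondary address answers on the wrong MAC (tcpdump shows replies, but pings fail) is announcing the address explicitly and tightening the ARP behaviour; the sysctls and arping call below are a guess at the cause, not a confirmed fix:

```shell
# Send gratuitous ARP so switches and neighbours learn the right MAC
arping -U -I eth0 -c 3 192.168.100.100
# Answer ARP only on the interface that actually owns the address
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```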
On 11.07.14 12:09, Jaime Melis wrote:
> Sure, take a look at this section. It can be customized globally in the
> vmm_exec_kvm.conf
> http://docs.opennebula.org/4.6/administration/virtualization/kvmg.html#default-attributes
Hello Jaime.
Thank you. Sorry for all the stupid questions. :-)
cheers
Hello dear OpenNebula enthusiasts!
After last year's successful debut with participants from 12 countries, we are
hosting the OpenNebula Conf again in Berlin. From December 2nd to 4th, experts
will join the only conference about OpenNebula worldwide with their talks.
Amongst the first confirmed
Sure, take a look at this section. It can be customized globally in the
vmm_exec_kvm.conf
http://docs.opennebula.org/4.6/administration/virtualization/kvmg.html#default-attributes
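For the archives, the kind of snippet that section describes, e.g. pointing OpenNebula at a differently named QEMU binary (the Gentoo path below is an assumption):

```
# Excerpt of /etc/one/vmm_exec/vmm_exec_kvm.conf -- defaults merged into
# every KVM deployment file unless the VM template overrides them
EMULATOR = /usr/bin/qemu-system-x86_64
NIC = [ model = "virtio" ]
```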
On Fri, Jul 11, 2014 at 12:06 PM, Thomas Stein
wrote:
> Hello.
>
> Opennebula complains about the non-existing bina
Hello.
OpenNebula complains about a non-existent binary called /usr/bin/kvm.
I'm on Gentoo and there is no such binary anymore; it's called
qemu-system-x86_64. I could make a symlink, yes, but is it also possible
to tell OpenNebula where to look?
thanks and best regards
t.
On 11.07.14 11:24, Valentin Bud wrote:
> Hello Thomas,
Hello Valentin.
> The probes that live under /var/lib/one/remotes are copied by OpenNebula
> on the compute node when you create the host. They are also synced
> whenever you want them to with `onehost sync`.
Thanks. That explains it to m
Hello Thomas,
The probes that live under /var/lib/one/remotes are copied by OpenNebula
on the compute node when you create the host. They are also synced
whenever you want them to with `onehost sync`.
By default OpenNebula stores the copy on the compute nodes in
/var/tmp/one.
This is documented
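A quick way to double-check this from the front-end (node01 is a placeholder hostname, assuming the passwordless SSH that OpenNebula requires anyway):

```shell
# Re-copy the probes to the hosts, then confirm where they landed
onehost sync
ssh node01 ls /var/tmp/one
```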
Yes true.
On Fri, Jul 11, 2014 at 2:36 PM, Joaquin Villanueva
wrote:
> Hi Thomas,
>
> I believe both dirs are correct. /var/lib/one/remotes hold the probes used
> by onehost sync command to be transferred to the hosts. Then, the probes
> are stored at the hosts at /var/tmp/one and executed fro
Hi Thomas,
I believe both dirs are correct. /var/lib/one/remotes holds the probes
used by the onehost sync command to be transferred to the hosts. Then,
the probes are stored on the hosts at /var/tmp/one and executed from
there, if I understand the probes mechanism correctly.
Best regards,
El 11/07
On 11.07.14 10:46, Sudeep Narayan Banerjee wrote:
> Hello,
>
> This picture might help you.
There is no /var/tmp/one at all. Could this be right?
cheers
t.
> Regards,
> Sudeep
>
>
> On Fri, Jul 11, 2014 at 1:59 PM, Thomas Stein wrote:
>
>> Hello.
>>
>> Just wondering. What is the correct
Hello.
Just wondering. What is the correct location for the run_probes
commands? They seem to be installed to /var/lib/one/remotes/ but are
searched for by oned in /var/tmp/one.
best regards
t.
___
Users mailing list
Users@lists.opennebula.org
http://
Dear All,
I tried to deploy OpenNebula 4.6.2 based on GlusterFS 3.5.1,
on three servers:
1. one server, IP 192.168.0.198;
2. node0, IP 192.168.0.199;
3. node1, IP 192.168.0.200;
Datastores configuration:
[oneadmin@fc-server ]$ onedatastore show 102
DATASTORE 102 INFORMATION
ID : 102
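For comparison, the attributes a GlusterFS images datastore is supposed to carry in 4.6 (attribute names per the Gluster datastore guide; the values here are illustrative):

```
# Datastore template -- shared images served directly by Gluster
DS_MAD = fs
TM_MAD = shared
DISK_TYPE = GLUSTER
GLUSTER_HOST = 192.168.0.199:24007
GLUSTER_VOLUME = one_vol
```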