On the VM host there should be a file called deployment.0
What's in there?
Steve Timm
From: Users [users-boun...@lists.opennebula.org] on behalf of Роман Мальцев
[neo...@mail.ru]
Sent: Sunday, February 15, 2015 3:23 PM
To: users@lists.opennebula.org
Subject:
Sent: Friday, February 13, 2015 3:11 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] clusters in 4.8
Hi
If both clusters have access to the same datastores, just move them out
of the first cluster. When a datastore or network is not assigned to
any cluster (cluster default)
From: Ruben S. Montero [rsmont...@opennebula.org]
Sent: Friday, February 13, 2015 4:49 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] clusters in 4.8
Yes, you can do:
Cluster A: Host_A0, Host_A1... + VNET_A0, VNET_A1...
Cluster B: Host_B0, Host_B1... + VNET_B0, VNET_B1...
From: Steven C Timm
Sent: Friday, February 13, 2015 6:01 PM
To: Ruben S. Montero
Cc: users@lists.opennebula.org
Subject: RE: [one-users] clusters in 4.8
PS--if there are other vm's still launched and running from the time when the
datastore used to be part of
a cluster, could that confuse anything? Do I have to restart oned to clear
anything up?
Steve Timm
From: Steven C Timm
Sent: Friday, February 13
Which linux distribution are you running on?
Steve
From: Users [users-boun...@lists.opennebula.org] on behalf of anagha b
[banag...@gmail.com]
Sent: Friday, February 06, 2015 12:42 PM
To: Users@lists.opennebula.org
Subject: [one-users] @installation problem
Hi,
This is my datastore definition for my qcow2-based image store.
I am not using the qcow2 TM_MAD to move the image from one place to the other,
but I do declare it as the driver for all the images (this is a shared image
store using NFS).
In this mode it brings a separate copy of my 2GB
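A sketch of what such a datastore definition might look like; the attribute values here are assumptions for illustration, not the poster's actual configuration:

```
# Hypothetical NFS-shared image datastore: images default to the qcow2
# DRIVER, while transfers use the shared TM driver rather than qcow2.
NAME    = "qcow2-images"
DS_MAD  = "fs"
TM_MAD  = "shared"
DRIVER  = "qcow2"
```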
At Fermilab we have the use case where we do not use the IPv6 autoconfig
features, but
instead assign individual ipv4 and ipv6 addresses to each host name.
Is there any way that the structure of the address range table in OpenNebula
can be modified
such that we can uniquely specify both ipv4
I have recently installed and configured Sunstone on my OpenNebula 4.8
installation.
I get the login screen and it recognizes me as a user and lets me log in, but
only shows the five
menu bars at the left of the screen (dashboard, system, virtual resources,
infrastructure, one flow).
Nothing
We have noticed this problem recently in OpenNebula 4.6 and 4.8. We also
had to make a similar patch in OpenNebula 3.2.
Our use case:
we have a SHARED image datastore
DATASTORE 102 INFORMATION
ID : 102
NAME : cloud_images
USER : oneadmin
GROUP
I recently had to restore my system from backup but forgot to include the .one/
directory in the backups; thus I have no one_auth file at the moment for the
oneadmin user.
Without that, OpenNebula won't start and thus I can't do oneuser login which
is what I would
need to re-create it.
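For what it's worth, one_auth is just a plain-text credentials file, so it can usually be recreated by hand rather than via oneuser login. A minimal sketch, assuming the default $HOME/.one location for the oneadmin account and a placeholder password:

```shell
# one_auth holds a single "user:password" line; oned reads it at startup.
# "change_me" is a placeholder -- substitute oneadmin's real password,
# then restart oned and rotate the password properly once you are in.
mkdir -p "$HOME/.one"
printf 'oneadmin:change_me\n' > "$HOME/.one/one_auth"
chmod 600 "$HOME/.one/one_auth"
```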
Good idea. In general it would be nice to be able to set the various
quantities which can
be controlled by the RHEV extended libvirt functions which include IOPS but
also network bandwidth.
Steve
From: Users [users-boun...@lists.opennebula.org] on behalf of
We did this on a test server a while ago in an early OpenNebula version.
Check to be sure
that you have all the required Ruby gems installed. I believe there is an
aws-specific one
you have to install just to make the AWS stuff work.
Steve Timm
From:
Due to an ongoing issue with having to purge the vm_pool of my database from
time
to time, I inadvertently did the purge while 8 vm's were still active and thus
leases
were still allocated.
I ran onedb fsck to clean it up. It successfully cleaned up the host_pool
table to show
no running
We have been doing bulk tests of the OpenNebula 4.8 econe-server.
With just a straight econe-run-instances we can get up to 1000 VM's (the limit
of our current subnet)
started fairly quickly (about 30 minutes).
But in practice we are using a more complicated sequence of EC2 calls via
HTCondor.
During most of the time we have been running OpenNebula 2 and 3 we have been
using a
rank based on FREEMEMORY. We are now doing tests using OpenNebula 4.8, in a
use case
where we are filling up an empty cloud. FREEMEMORY still in theory should be
an accurate value
but the problem is that 6-7
Sent: 3:50 AM
To: Steven C Timm
Cc: Javier Fontan; users@lists.opennebula.org
Subject: Re: [one-users] Thin provisioning and image size
Hi
Yes, the value for the size of the image is computed in libfs.sh (fs_size) for
different file types. For qcow2 images:
SIZE=$($QEMU_IMG info $1 | sed -n 's
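The truncated sed above pulls the virtual size out of `qemu-img info` output. A hedged stand-in for what that extraction looks like, run against a canned sample line instead of a real image (the sed expression is an assumption in the spirit of libfs.sh, not a verbatim copy):

```shell
# Sample `qemu-img info` output line for a qcow2 image (canned here so
# the snippet runs without an actual image file).
sample='virtual size: 2.0G (2147483648 bytes)'
# Extract the size in bytes from the parenthesized field.
SIZE=$(echo "$sample" | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')
echo "$SIZE"
```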
Can you show the output of
oneimage show
for the image you are using?
The error you are getting seems to be saying that the image you are trying to
copy
from the image datastore to the system datastore is somehow a directory rather
than a file.
Verify that
I am wondering if there are any other big OpenNebula clouds out there using
RHEL 6.3 or 6.4,
CentOS 6.3 or 6.4, or Scientific Linux 6.3 or 6.4?
We are seeing a fairly nasty performance problem, but only on intel-based
Sandy Bridge or Ivy Bridge
based hardware. If you have N kvm-based virtual
I like it. Is this Sunstone reimagined/rebranded, or is it in addition to
Sunstone?
Steve Timm
From: users-boun...@lists.opennebula.org
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Jaime Melis
Sent: Tuesday, April 15, 2014 6:01 AM
To: Users OpenNebula
Subject: [one-users] OpenNebula
I would like to configure some OpenNebula VM's to have a serial console that
can be accessed via the virsh console command.
The following xml works if it is inserted manually before the /devices tag
in the deployment.0
<serial type='pty'>
  <target port='0'/>
</serial>
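Rather than editing deployment.0 by hand, OpenNebula templates can carry raw hypervisor XML via the RAW attribute; a sketch along those lines (the port number is an assumption):

```
RAW = [
  TYPE = "kvm",
  DATA = "<serial type='pty'><target port='0'/></serial>"
]
```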
At the moment we have an active/passive head node setup for OpenNebula with a
SAN backend for the image repository. The OpenNebula service is managed by Red
Hat clustering, as is the GFS2+CLVM file system. We run a few virtual machines on
the frontend machines as well. If we were using an
It is always possible to run some kind of virtualization technology, no matter
what motherboard you have.
I have no personal experience with the model of desktop board that you mention
but I am familiar with
Intel's server products from the same 2007-2008 era and I can say that those
boards,
VCPU is the parameter that controls how many cores appear inside the
virtual machine, i.e. if you have VCPU=4,
your VM will have 4 cores, but there will still be only one kvm process seen
on the hypervisor that corresponds to it.
In a typical KVM setup it is possible to allocate more
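As an illustrative template fragment (not from the original mail, values are placeholders), the split between guest cores and host scheduling share looks like:

```
# Hypothetical VM template fragment: the guest sees 4 cores (VCPU),
# while CPU is the share of host capacity used by the scheduler.
CPU    = "1.0"
VCPU   = "4"
MEMORY = "4096"
```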
You have to make the bridges on all the host machines first. OpenNebula
doesn't do it for you.
But you also have to get the remote scripts on the VM host, there is a onehost
command to do that.
Steve Timm
From: users-boun...@lists.opennebula.org
[mailto:users-boun...@lists.opennebula.org]
[mailto:jfon...@opennebula.org]
Sent: Wednesday, January 16, 2013 7:37 AM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] ONE 2.0 and Ruby versions
OpenNebula 2.0 is made to be compatible with both Ruby 1.8.5 and 1.8.7. I think
it is better if you reinstall those gems
We currently have a production cloud of two head nodes plus five VM hosts
running OpenNebula 3.2 in an ssh-based transfer mode with the
image repo stored on local disk. We have three more nodes which are currently
hooked up to a SAN with a shared GFS-based
file system.
I would like to add
verbosity.
From: Ruben S. Montero [mailto:rsmont...@opennebula.org]
Sent: Sunday, January 06, 2013 4:24 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Race condition--onevm shutdown vs. onevm delete
Hi Steven
and libvirtd dies with a segfault.
The strange thing
Agree, it would be very helpful to have such a feature, both from Sunstone and
from the CLI.
Steve
From: users-boun...@lists.opennebula.org
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Gary S. Cuozzo
Sent: Friday, December 14, 2012 9:34 AM
To: Users OpenNebula
Subject: [one-users]
I run high-availability squid servers on virtual machines although not yet in
OpenNebula.
It can be done with very high availability.
I am not familiar with Ubuntu Server 12.04, but if it has libvirt 0.9.7 or
better, and you are
using the KVM hypervisor, you should be able to use the cpu-pinning and
The last column in the onehost list shows AMEM (available memory) is only 1.1G.
Check the syntax of MEMORY in the template; I don't think you should use the
quotes. Also check the memory allocated by
all the other templates.
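Following that advice, a minimal fragment of what the corrected attribute might look like (the value is a placeholder):

```
# Hypothetical fix: MEMORY as a bare number of megabytes, no quotes.
MEMORY = 2048
```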
Is this a KVM hypervisor?
Steve Timm
From: users-boun...@lists.opennebula.org
i386 architectures can't run the KVM hypervisor at all. They can run some older
versions of Xen with paravirtualized VM's, but it has been a long time since I
did it. Also, in CentOS/SL/Red Hat 6.x you would have to patch Xen in by hand
because it's not part of the distro anymore. Better to try to
For those of you who are running the oned on a VM, which image repository type
are you using? Are you having any problems with performance loading VM images
into or out of the image repository?
Steve
-Original Message-
From: users-boun...@lists.opennebula.org
Has anyone managed to successfully run the OpenNebula head node in OpenNebula
3.x as a virtual machine in production?
I am interested in doing this for ease of migration and/or failover with
heartbeat/DRBD.
If so, have you done it with a Shared file system such as GFS, and let GFS be
seen by
From: Shankhadeep Shome [mailto:shank15...@gmail.com]
Sent: Tuesday, April 10, 2012 9:35 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] OpenNebula head node as a virtual machine?
I run the head node as a VM but purely as a KVM vm controlled through virsh.
The back-end