I have a set of VMs whose template normally has a NIC section
that looks like this:
NIC=[
  IP=131.225.154.144,
  MODEL=virtio,
  NETWORK=StaticIP,
  NETWORK_UNAME=oneadmin ]
The template is identical for all except that each one
has a different static IP.
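Since only the IP differs, the per-VM templates can be stamped out from a
common base with a short shell loop. A minimal sketch, assuming a base.tmpl
containing the NIC section above and a hypothetical list of addresses:

for ip in 131.225.154.144 131.225.154.145; do
  sed "s/IP=.*/IP=$ip,/" base.tmpl > vm-$ip.tmpl
done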
My question--
I see that
, host, network, datastores) more here
http://opennebula.org/4-12-features-virtual-data-center-redesign/
Cheers
On Fri, Feb 13, 2015 at 6:37 PM, Steven Timm t...@fnal.gov wrote:
I have had my one4.8 host up for a while with a single cluster
that has 150 hosts, one vnet, and a system and image datastore.
My one4.8 installation is set up
with NFS-based shared image store #102 and local-based system
datastore #100 instantiated in local disk on each VM host.
Thus far it seems that I am forced to set up
the datastore path as
/var/lib/one/datastores/<ds_no>
So that means I have to NFS export
I have had my one4.8 host up for a while with a single cluster
that has 150 hosts, one vnet, and a system and image datastore.
I am now adding hosts from a different vnet.
want to make second host + vnet cluster but still use
the same system and image data stores.
What's the right way to do
of the libvirt commands via the libvirt socket.
Steve Timm
On Tue, 3 Feb 2015, Steven Timm wrote:
I am trying to debug an error in an OpenNebula 4.8 installation
in which I am refactoring my hypervisor installation with a
different configuration system. My test host is giving me this
Error
By default when you do
oneuser login username --x509
It will put the new credential in $HOME/.one/one_x509
That's fine unless you happen to be logged in as root or oneadmin
at the time in which case it can then clobber oneadmin's credential.
Is there any way to set an alternative path for
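One thing that may be worth testing (an assumption on my part, not verified
against 4.8 or the x509 driver specifically): the CLI generally honors the
ONE_AUTH environment variable as the location of the auth file, so pointing
it somewhere else before logging in might keep oneadmin's credential intact:

export ONE_AUTH=/tmp/username_x509_token   # hypothetical path
oneuser login username --x509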
(and maybe a PR) against the puppet module on
github?
Many thanks,
Martin
On 13 Jan 2015, at 22:55, Steven Timm t...@fnal.gov wrote:
It appears that the sunstone-views.yaml and admin.yaml
put in by the puppet module are not compatible with opennebula 4.8.
Once I find the files from the unmodified rpms
change it to:
TYPE=kvm
Let me know if that fixes the issue.
Regards,
Jaime
On Mon, Dec 29, 2014 at 11:04 PM, Steven Timm t...@fnal.gov wrote:
Jaime.
Sorry I didn't see this earlier, before the holidays.
Output of onevm show -x and onetemplate show -x is attached.
Steve Timm
It appears that the sunstone-views.yaml and admin.yaml
put in by the puppet module are not compatible with opennebula 4.8.
Once I find the files from the unmodified rpms
and put them in their place then sunstone works just fine.
Steve Timm
On Tue, 13 Jan 2015, Steven Timm wrote:
I mentioned earlier the problem that the sunstone-server.log
and econe-server.log don't rotate properly with logrotate.
Now I have another problem, this is again with OpenNebula 4.8
Opennebula (as installed via the puppet module made by epost)
doesn't bring up sunstone correctly. All I see is
Is anyone out there using the puppet module that comes from epost?
If so, how do you get it to turn sunstone on? I am using it
and it keeps turning sunstone off even though I think I have all
the right parameters set.
Steve Timm
I have a logrotate set up for /var/log/one/oned.log and sched.log
that works just fine
[timm@snowball logrotate.d]$ more oned
/var/log/one/*.log {
    missingok
    daily
    notifempty
    sharedscripts
    postrotate
        killall -HUP oned
        killall -HUP mm_sched
    endscript
}
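A recipe like this can be checked without waiting for the nightly run by
using logrotate's standard debug flag, which does a dry run and prints what
it would do:

logrotate -d /etc/logrotate.d/oned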
On Thu, Nov 13, 2014 at 4:34 PM, Steven Timm t...@fnal.gov wrote:
Under OpenNebula 3.2 we would include in the
contextualization section the field $USER[TEMPLATE
, Steven Timm t...@fnal.gov wrote:
What's the output of onedatastore show,
particularly the DISK_TYPE and DS_MAD and TM_MAD settings?
Steve Timm
On Tue, 30 Dec 2014, ramanadh ravinuthala wrote:
Hi all,
I am trying to integrate Ceph as the datastore for OpenNebula and I am
successfully able
I am currently running opennebula 4.8.
The default output of onevm list only allows 10 characters
under hostname.
my hostnames all begin with cloudworker, which means that all identifying
information is truncated to cloudworke.
Is there any way to change the format of onevm list to show the whole node
name?
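One possible avenue (an assumption on my part, not verified on 4.8): the CLI
table layout is read from /etc/one/cli/onevm.yaml, so widening the host
column there should stop the truncation. The key names below are from memory
and worth double-checking:

:HOST:
  :desc: Host where the VM is running
  :size: 24    # widened from the 10-character default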
This trick does work to start the particular VM in question
in this particular use case of shared repo and cloning.
I think we didn't make this change in our ONE3.2 installation because
we had other reasons why we needed the qemu user to run KVM
in other transfer mode situations. Will have to
wrote:
Hi,
Can you send us onevm show -x of the VM and the onetemplate show -x?
Regards,
Jaime
On Dec 15, 2014 11:47 PM, Steven Timm t...@fnal.gov wrote:
Under OpenNebula 3.2 we used the RAW section to declare a serial
console in libvirt which could then be attached
Under OpenNebula 3.2 we used the RAW section to declare a serial
console in libvirt which could then be attached to on the hypervisor
via virsh console.
(this comes in very handy if you are dealing with windows or mac
desktops which don't have a very good X emulation for VNC viewers).
Template
of onedatastore show -x <id>, where
<id> is the ceph datastore's id.
Regards,
Jaime
On Wed, Sep 24, 2014 at 4:49 PM, Steven Timm t...@fnal.gov wrote:
Now have upgraded to opennebula 4.8.0 and still struggling
with successfully launching a Ceph VM. We have worked out
all permissions issues
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
On Thu, Nov 13, 2014 at 4:34 PM, Steven Timm t...@fnal.gov wrote:
Under OpenNebula 3.2 we would include in the
contextualization
On Mon, 17 Nov 2014, Carlos Martín Sánchez wrote:
Hi,
On Thu, Oct 23, 2014 at 3:25 AM, Steven C Timm t...@fnal.gov wrote:
Due to an ongoing issue with having to purge the vm_pool of my
database from time to time, I inadvertently did the purge while
8 VMs were still active and
We added an "Are you sure?" confirmation to the onevm delete,
oneimage delete, and onetemplate delete commands; that in itself
has helped a lot. I believe we contributed it back to upstream
but don't know the status.
In general we try to have the important VMs owned by a different
user group than the
On several occasions in the past I have forgotten to
tell my tmpwatch utility not to delete files out of /var/tmp/one
and thus I have seen most of the remotes get deleted off of
my cloud hosts, but opennebula 4.8 doesn't report any problems.
I try to fix the remotes by doing onehost sync and
Under OpenNebula 3.2 we would include in the
contextualization section the field $USER[TEMPLATE]
and then add a contextualization script such that
we would grab the field /USER/NAME out of the base64 encoded
template information.
In Opennebula 4.8 you can still put $USER[TEMPLATE]
into your
This text is the output of
kvm -machine ?
Likely it is somehow connected to a kvm fault on the machine
to which you are migrating.
Steve Timm
On Fri, 7 Nov 2014, Rhesa Mahendra wrote:
Hi,
I got error like this when i try to live migrate:
Fri Nov 7 14:19:02 2014 [VMM][D]: Message
/CLUSTER_TEMPLATE/
Cheers
Ruben
On Wed, Oct 29, 2014 at 10:34 PM, Steven Timm t...@fnal.gov wrote:
Under OpenNebula 3 we have been using RANK=FREEMEMORY
to launch the next virtual machine on the hypervisor which
has the most memory available at the time. I am using the
same
On Thu, 30 Oct 2014, Ruben S. Montero wrote:
On Thu, Oct 30, 2014 at 2:57 PM, Steven Timm t...@fnal.gov wrote:
I think I see what went wrong by looking at the documentation.
The value as shown in onehost show -x
is now labeled FREE_MEM instead of FREEMEMORY
Under OpenNebula 3 we have been using RANK=FREEMEMORY
to launch the next virtual machine on the hypervisor which
has the most memory available at the time. I am using the
same in OpenNebula 4.8 but it does not appear to be working,
even though FREEMEMORY is still a documented selection variable
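If it is just the attribute rename, the 4.x equivalent would presumably be
to rank on FREE_MEM using the 4.x template attribute names (SCHED_RANK
replaced RANK in 4.x templates); a sketch, not verified against 4.8:

SCHED_RANK = "FREE_MEM"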
-describe-instance
On 15 October 2014 19:33, Steven Timm t...@fnal.gov wrote:
We are running the econe service behind Apache and using mod_ssl to do
the https work.
Steve Timm
On Wed, 15 Oct 2014, Parag Mhashilkar wrote:
Is this the default if not configured?
If you call the same CreateInstances command more than once,
is there any way that it will create the instance twice or not?
Steve Timm
On Wed, 15 Oct 2014, Daniel Molina wrote:
Hi,
What do you mean by idempotent? As long as the client implements the EC2
API, it should work.
Cheers
On 13
Backend is mysql 5.1.
mysql> describe user_pool;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| oid   | int(11) | NO   | PRI | NULL    |       |
| name  |
Just the same error
Fri Oct 10 00:03:51 2014 [E]: Connection reset by peer
Fri Oct 10 00:03:51 2014 [I]: 131.225.155.244 - - [10/Oct/2014 00:03:51]
GET /?
Action=CreateKeyPair&KeyName=SSH_fermicloud189%20fnal%20gov%209618_schedd_glidei
, Steven Timm t...@fnal.gov wrote:
Just the same error
Fri Oct 10 00:03:51 2014 [E]: Connection reset by peer
Fri Oct 10 00:03:51 2014 [I]: 131.225.155.244 - - [10/Oct/2014 00:03:51] GET /?
Action=CreateKeyPair&KeyName=SSH_fermicloud189%20fnal%20gov%209618_schedd_glidei
ns2%20fermicloud189%20fnal
We are seeing our OpenNebula 4 database grow very quickly
due to the large monitoring database. Our test node has been up
only 5 days and already we are at almost 5GB of data; this is
much faster growth than we saw in OpenNebula 3.
Question--is there a way to purge the monitoring database
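For what it's worth, 4.x's oned.conf has expiration settings that should cap
how much monitoring history is kept. Values are in seconds, and the attribute
names below should be double-checked against your oned.conf:

HOST_MONITORING_EXPIRATION_TIME = 43200   # keep 12h of host records
VM_MONITORING_EXPIRATION_TIME   = 14400   # keep 4h of VM records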
We are commissioning a new OpenNebula 4.8 head node and so
far we are seeing very good scalability. But
in tests where we are launching a lot of virtual machines at once
we sometimes see the error:
Tue Oct 7 08:46:43 2014 [Z0][VMM][I]: error: Failed to create domain from
I am seeing the opposite problem in One 4.x
and have been ever since we started testing it.
When I do oneimage create using a qcow2 image,
opennebula always reports the size as the absolute
maximum to which the qcow2 file system could expand.
This keeps us from being able to over-provision our
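The distinction is visible directly in qemu-img, which reports both the
allocated size and the maximum virtual size; illustrative output for a
hypothetical image:

qemu-img info disk.qcow2
image: disk.qcow2
file format: qcow2
virtual size: 200G (214748364800 bytes)
disk size: 2.1G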
We are running an Infiniband setup on FermiCloud using the SR-IOV feature
of our Mellanox cards. In the MPI jobs we have done we have used
IPoIB for communication. For point-to-point communication between VMs
the bandwidth is similar to that between bare metal machines.
In a bigger cluster
on this error anywhere. Any help is appreciated.
We are trying to replace an old san-gfs file store with Ceph
but need better success than that if this is going to work.
Steve Timm
On Wed, 10 Sep 2014, Steven Timm wrote:
The first and most obvious problem below was that we were running an
old version
At the OpenNebula tutorials they often have a memory stick
which will install two VirtualBox virtual machines on a laptop
and install OpenNebula inside of them.
If you look at www.opennebula.org under try-out you
will see a VirtualBox Sandbox (there's also a VMware one).
Steve Timm
On Fri,
but it is still not quite clear what libvirt is doing
under the covers to talk to rbd, and if it can be tested manually.
Steve Timm
On Wed, 10 Sep 2014, Steven Timm wrote:
The first and most obvious problem below was that we were running an
old version of qemu-img and qemu-kvm that ships with RHEL6
=libvirt:key=AQCv0RlUEOPfIhAAZLbGzwrfjyC39iapXVOFEg==:auth_supported=cephx\;none:mon_host=stkendca01a\:6789\;stkendca04a\:6789\;stkendca02a\:6789,if=none,id=drive-virtio-disk0,format=qcow2,cache=none:
error reading header from one-19-58-0
On Wed, 17 Sep 2014, Steven Timm wrote:
Does anyone
from the qemu-kvm process would be very valuable indeed.
Steve Timm
On Wed, 17 Sep 2014, Steven Timm wrote:
This is the kvm command our libvirt is trying to do, from
/var/log/libvirt/qemu/one-58.log
Clearly it is an authentication error, the only question is
what and why.. we have made
I have configured a Ceph datastore on one 4.6 and have gotten as
far as to get opennebula to accept the datastore. But when we
try to do the first oneimage create into the datastore we get the
following error in oned.log :
Wed Sep 10 13:10:44 2014 [ImM][I]: Command execution fail:
gets easier pretty soon now that RedHat has bought Ceph
and all the right packages will be in RHEL7. Or will they be only
proprietarily available in Redhat Enterprise Virtualization? Has anyone
tried yet?
Steve
On Wed, 10 Sep 2014, Steven Timm wrote:
I have configured a Ceph datastore
operation.
http://wiki.libvirt.org/page/VM_lifecycle
regards,
Jaime
On Thu, Jul 24, 2014 at 4:00 PM, Steven Timm t...@fnal.gov wrote:
When OpenNebula creates a checkpoint file either as part
of a onevm migrate or onevm suspend, what libvirt function
is it calling to do
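As far as I can tell (an assumption rather than a reading of the driver
scripts), the save boils down to libvirt's virDomainSave, i.e. roughly the
following, with an illustrative deploy id and checkpoint path:

virsh --connect qemu:///system save one-58 /var/lib/one/datastores/100/58/checkpoint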
did you restart opennebula after you did the password reset?
The symptom here is that the sched daemon doesn't have
a correct authentication token and you could have an old
one that is cached.
Steve
On Thu, 31 Jul 2014, Randall Svancara wrote:
Hi,
When I create a VM it stays in the
back in and restarted
opennebula.
Steve Timm
On Jul 28, 2014 6:32 PM, Steven Timm t...@fnal.gov wrote:
I am currently dealing with an unexplained monitoring question
in OpenNebula 4.6 on my development cloud.
I frequently see OpenNebula return that the status of a ONe
On Wed, 30 Jul 2014, Steven Timm wrote:
On Wed, 30 Jul 2014, Ruben S. Montero wrote:
Not really sure what can be going on... The monitor scripts return the
information of all VMs running in the node. In 4.6 the
monitoring system uses a push approach, through UDP, so you may have
On Wed, Jul 30, 2014 at 3:50 PM, Steven Timm t...@fnal.gov wrote:
On Wed, 30 Jul 2014, Ruben S. Montero wrote:
Maybe you could try to execute the monitor probes in the node,
1. ssh the node
2. Go to /var/tmp/one/im
3. Execute
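The excerpt is cut off here, but for reference the probe runner in that
directory can be invoked by hand, roughly as follows; the node name is
hypothetical and the argument list varies by version, so treat this as a
sketch:

ssh cloudworker001
cd /var/tmp/one/im
./run_probes kvm   # extra arguments vary by version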
I am still trying to debug a nasty monitoring inconsistency.
-bash-4.1$ onevm list | grep fgtest14
 26 oneadmin oneadmin fgt6x4-26  runn  6   4G fgtest14 117d 19h50
 27 oneadmin oneadmin fgt5x4-27  runn 10   4G fgtest14 117d 17h57
 28 oneadmin oneadmin fgt1x1-28
I am currently dealing with an unexplained monitoring question
in OpenNebula 4.6 on my development cloud.
I frequently see OpenNebula return that the status of a ONe
host is ON even in the case of a system misconfiguration where,
given the credentials, it is impossible for opennebula to
even
We have been initializing Kerberos logins using opennebula
contextualization on FermiCloud since the beginning.
The UNAME field is automatically defined but not automatically passed to
contextualization scripts inside the VMs.
Here is a snippet of code that we use to do just that.
(The username
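The snippet itself is truncated above, but the gist can be sketched: pass the
base64-encoded user template into CONTEXT and decode it inside the VM. This
is my reconstruction, not the original code, and the CONTEXT key name is made
up:

CONTEXT = [
  USER_TEMPLATE_B64 = "$USER[TEMPLATE]" ]

and then, inside the VM, after sourcing the context variables:

echo "$USER_TEMPLATE_B64" | base64 -d | grep '<NAME>'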
When OpenNebula creates a checkpoint file either as part
of a onevm migrate or onevm suspend, what libvirt function
is it calling to do the checkpoint?
We are seeing some issues on our new Ivy Bridge hardware
that sometimes in the process of a (non-live) migration,
the clock can get confused in
The errors that you are pointing out don't seem to have
any direct connection to the fact that you are trying to
install a Windows VM; there is nothing Windows-specific about them.
Looks like OpenNebula failed to clone the blank windows image
to the system datastore due to some permissions
We have also seen this behavior in OpenNebula 3.2.
It appears that the failure mode occurs because the onevm delete
(or shutdown or migrate)
doesn't correctly verify that the virtual machine has gone away.
It sends the acpi terminate signal to the virtual machine
but if that fails, the VM will
This error is an SELinux error. Is it possible
for you to disable SELinux on the VM host in question?
Steve Timm
On Tue, 24 Jun 2014, Sudeep Narayan Banerjee wrote:
Dear Sir,
This is in response to the subject that I posted on June-21, but still no
response.
I have tried to improve little
Sudeep Narayan Banerjee
On Tue, Jun 24, 2014 at 7:12 PM, Steven Timm t...@fnal.gov wrote:
This error is an SELinux error. Is it possible
for you to disable SELinux on the VM host in question?
Steve Timm
On Tue, 24 Jun 2014, Sudeep Narayan Banerjee wrote:
Dear
Yes, we do this all the time. Use a FIXED network and in the
NIC section of the template just specify the IP you want.
Steve Timm
On Thu, 24 Apr 2014, ML mail wrote:
Hello,
Is it possible to force the deployment of a VM to have a specific IP address?
That IP address would be one which
Is there any way as part of the upgrading from older versions to newer
versions to automatically modify the templates?
We have 200-some user templates that specify PORT=-1, which is no longer
allowed in ONE4.4. We have already checked and found that the onedb
upgrade command does not remove
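The message is cut short here; lacking an official migration hook, one
conceivable approach is a direct SQL pass over the stored template bodies.
Use at your own risk: the table and column names and the stored CDATA form
are assumptions about the 4.4 schema, so inspect a row first and take an
onedb backup before any UPDATE:

onedb backup -u oneadmin -p <password> -d opennebula
mysql -u oneadmin -p opennebula
mysql> UPDATE template_pool
    -> SET body = REPLACE(body, '<PORT><![CDATA[-1]]></PORT>', '')
    -> WHERE body LIKE '%CDATA[-1]%';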
This is about a test OpenNebula 4.4 installation but we have the same
problem in OpenNebula 3.2 and have been kludging around it.
Head node and hypervisors, scientific linux 6,
libvirt qemu.conf has dynamic_ownership=0 as recommended in the guide.
non-default libvirt settings in libvirt.conf
Thanks, Javier, I will give it a try.
Steve
On Tue, 1 Apr 2014, Javier Fontan wrote:
You can add the devices section in that stanza, libvirt will mix both
devices sections. For example:
RAW=[
  TYPE=kvm,
  DATA="<devices>
    <serial type='pty'>
      <target port='0'/>
    </serial>
  </devices>" ]
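With that in the template, the guest's serial console should be reachable
from the hypervisor as described above, e.g. (deploy id illustrative):

virsh --connect qemu:///system console one-58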
I am in early stages of testing out a new ONe 4.4 installation and
I am finding that there are some mandatory changes to the template syntax.
PORT=-1 for VNC is no longer allowed, CPU, once optional, is now required, and
there are a few other changes.
My question for those of you who have done the upgrade
cloud pretty fast.
Thanks
Steve Timm
Add any other parameter from the table I've linked to configure other
settings.
I hope it helps.
[1]
http://docs.opennebula.org/4.4/user/virtual_machine_setup/cong.html#network-configuration
On Wed, Mar 26, 2014 at 11:04 PM, Steven Timm t...@fnal.gov
Below is the network template that I used to
successfully create the IPV4 side of a dual stack ipv4/ipv6 network
in one4.4.
-bash-4.1$ cat static-ipv6-net
NAME = Static_IPV6_Public
TYPE = FIXED
#Now we'll use the cluster private network (physical)
BRIDGE = br0
DNS = 131.225.0.254
GATEWAY =
-related field that is accepted in the LEASES
field of a fixed-network network template? This is of some urgency.
(I promised my users who depend on ipv6 cloud VMs I would have them
up this morning local time and it is now quitting time today).
Steve
On Wed, 26 Mar 2014, Steven Timm wrote:
Does anyone have a worked example of an OpenNebula virtual network
that has a so-called dual stack setup, i.e. with both ipv4 and
ipv6 addresses created and assigned by OpenNebula in a fixed network?
The documents appear to indicate that you can do only ipv4 or ipv6,
not both.
Steve
We recently deployed several new and bigger hosts on our OpenNebula 3.2
cloud and are seeing some issues. At this point we are not sure if we are
dealing with an OS problem with the sshd or something else.
But the symptom is that we see an OpenNebula monitoring process come into
the VM host
For those of us who can't make it to Brussels, it would be interesting
to hear if there have been any changes in OpenNebula on CentOS that
have come about because of the new affiliation of CentOS and Red Hat.
In particular, might there now be a path to get OpenNebula and some
of the other
Hi Alexandr
Fermilab is a kerberos-based organization but what we do
to authenticate to OpenNebula is to use what is called
a Kerberos Certificate Authority to make X.509 credentials
from the kerberos credentials.
I know at one time there was a rudimentary kerberos 5 authentication
plugin for
OpenNebula Frontend in a VM: It can be done, but how well it performs
depends on what data store you
are using. We do it in our development OpenNebula cloud but not
in production yet because we are using a shared image store with
GFS2 and CLVM and it is difficult with our current situation to
I just had a user successfully launch a 64GB VM even though none
of my hosts have more than 48GB of physical RAM. All works fine
until you actually try to use more than 48G of RAM, in which case it goes
deep into swap.
Is there any way that OpenNebula can be configured to just outright
reject
We do not use ldap but we have seen a similar issue with opennebula
authentication with X.509 certificates. OpenNebula compresses all the
white space out of the Distinguished Name of the certificate.. I would
not be surprised if it did the same for ldap. As far as I know it is
meant to be a
6, 2013 at 1:09 PM, Steven Timm t...@fnal.gov wrote:
Thanks--any ideas on how to do what that operation does,
in opennebula 2.0?
Steve Timm
On Tue, 6 Aug 2013, Jaime Melis wrote:
Hi Steven,
yes, starting with OpenNebula 4.0 there's a new 'recover
Has anyone been able to use OpenNebula to assign both IPv4 and
IPv6 addresses in a FIXED vnet? If so, how? At least in ONE3.2
the schema of the leases table doesn't seem to allow for it.
Steve Timm
--
Steven C. Timm, Ph.D
Is there any reason that the one-context scripts that were
released with ONE 3.8 couldn't be used in a ONE 3.2 cloud?
I am hoping I can use them to solve some nasty udev/network problems I've
been having on 32-bit guests.
Steve Timm
You're using the wrong cloud.. You should be using OpenNebula
and not OpenStack and then you would not be having these OpenStack
problems. Probably better to get help on OpenStack on the OpenStack list.
Steve Timm
On Thu, 18 Apr 2013, Javier Alvarez wrote:
Hello all,
Here it is my
What is the syntax if you want to use more than one REQUIREMENT
in the REQUIREMENTS statement of an OpenNebula template.
For instance
suppose I want to have both
REQUIREMENTS = "PARTTHREE=\"/dev/mapper/fcl319sata-vm2t1\""
and
REQUIREMENTS = "HYPERVISOR=\"kvm\""
There are the && (and), || (or) operators.
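So combining the two from the question would look something like this; my
composition, using the operators named above and the question's
escaped-quote style:

REQUIREMENTS = "PARTTHREE=\"/dev/mapper/fcl319sata-vm2t1\" && HYPERVISOR=\"kvm\""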
It is usually a case of permissions when you get this error.
What user are you logged into sunstone as?
If it's oneadmin, then try to put the image file somewhere else
than under /var/lib/one/images; there are restrictions on what
you can do.
Steve Timm
On Thu, 24 Jan 2013,
We are currently in the process of deploying Puppet 3 to manage our
VM hosts. But when it comes to doing the head nodes we are running
into some conflicts between the ruby gem dependencies of the puppet labs
RPMS and the ruby gem dependencies of OpenNebula.
Puppet Labs rpm for puppet 3.0.1 has
I am seeing this happen in
OpenNebula 2.0, hypervisor KVM.
The user invokes onevm shutdown 3823 followed about 30
seconds later by onevm delete 3823. The VM is still in epilog
state at that time.
Fri Jan 4 07:17:07 2013 [VMM][D]: Monitor Information:
CPU : -1
Memory:
In OpenNebula 2.0 there was a onecluster command which
allowed me to divide my cloud up into two logical clusters,
one with KVM hypervisors and one with Xen.
In the OpenNebula 3.x series this appears to have gone away.
What is the replacement for this functionality? Should we
be using the Zones
Are there any messages in your sched.log?
What about in the individual 4.log?
Something doesn't match.. are you sure you are doing
onevm list on the right virtual machine? If onevm list shows pending
but onevm show <vmid> shows done, something is not right.
Steve Timm
On Tue, 2 Oct 2012, Chlon
On Wed, 19 Sep 2012, Qiubo Su (David Su) wrote:
Dear OpenNebula Team,
I want to download OpenNebula and see there are options like OpenNebula
3.2.1 Ubuntu 10.0.4 amd64 and OpenNebula 3.2.1 CentOS 6.0 x86_64.
For OpenNebula 3.2.1 Ubuntu 10.0.4 amd64, we have to buy AMD processor, but
for
I am currently running both OpenNebula 3.2 and OpenNebula 2.0,
both using KVM hypervisors and network interfaces that involve
virtio drivers
I've noticed that the onevm suspend action does something different
than virsh suspend.
With onevm suspend, the kvm process writes a memory checkpoint
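The difference can be seen from the hypervisor side with two stock virsh
subcommands (deploy id illustrative):

virsh suspend one-58    # pauses the domain; guest memory stays in the qemu process
virsh save one-58 /tmp/one-58.save   # writes memory to disk and the qemu process exits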
On Thu, 7 Jun 2012, Tino Vazquez wrote:
Hi Steven,
comments inline,
On Tue, Jun 5, 2012 at 9:10 PM, Steven Timm t...@fnal.gov wrote:
My production cloud is still running OpenNebula 2.0 with Sci. Linux
5 VM hosts using KVM hypervisor.
We have seen two fairly frequent failure modes:
1
My production cloud is still running OpenNebula 2.0 with Sci. Linux
5 VM hosts using KVM hypervisor.
We have seen two fairly frequent failure modes:
1) The virtual machine gets hung in such a way that it is pingable
and you get a login prompt on the VNC console, but once you try
to log in,
It looks like a permissions issue where the oneadmin user
doesn't have the right permissions to launch a KVM virtual machine
via libvirt. You have to change the default permissions on
the libvirt sockets, there are details in the ONE docs on how to do that.
Steve Timm
On Wed, 16 May 2012,
on in
the BIOS but the only other time I've seen that error, it wasn't on.
Steve Timm
On Wed, 16 May 2012, Steven Timm wrote:
It looks like a permissions issue where the oneadmin user
doesn't have the right permissions to launch a KVM virtual machine
via libvirt. You have to change the default
A couple of years ago there was work done with an alternate scheduler
called Haizea in the OpenNebula 1.x series. Has that been kept
up to date and would it still work with OpenNebula 3.x?
That scheduler was supposed to address fair-share issues like that.
Steve Timm
On Mon, 9 Apr 2012,
I have never seen a negative number of running VMs, but
I have seen that onehost list will under-report VMs,
and if you do
onevm list and count the number of VMs on each host, it is not
the same as the RVM count. This is also a problem in the OpenNebula 2.0
series.
Steve Timm
On Wed, 4 Apr
This is my template from xen, dom0=sci. linux 5.7, opennebula 2.0
I have a cloud with both KVM and Xen hosts in it.
I would strongly urge you to not use the kernel/initrd approach
and instead use the pygrub option as I have below, with the kernel
stored inside the domU. It is a lot easier to
This is very interesting work. Jan, do you have any write-up on how
you were able to set up the gfs and clvm setup to work with this driver?
It's a use case very similar to what we are considering for FermiCloud.
Two other questions that weren't immediately obvious from looking
at the code:
What version are you running? We saw a very similar problem in 3.0
and they say it is fixed in the latest release candidate.
Steve Timm
On Tue, 10 Jan 2012, Alberto Picón Couselo wrote:
Hi everybody:
Please, can you help us with the following oneacctd error? After 10 or 15
minutes of
On Fri, 16 Dec 2011, Daniel Molina wrote:
Dear Farooq,
I think the problem is the driver assigned to serveradmin (x509), you
must change it to server_x509 [1]. Otherwise it will not use the
certificates specified in server_x509_auth.conf. x509 driver should be
used by regular users and not by
On Fri, 16 Dec 2011, Daniel Molina wrote:
On 16 December 2011 16:08, Steven Timm t...@fnal.gov wrote:
On Fri, 16 Dec 2011, Daniel Molina wrote:
Dear Farooq,
I think the problem is the driver assigned to serveradmin (x509), you
must change it to server_x509 [1]. Otherwise it will not use
On Fri, 16 Dec 2011, richard -rw- weinberger wrote:
Hi!
It looks like OpenNebula keeps all used VM Ids.
Using onevm ID I can see all old VMs in state DONE.
And in /var/lib/one/ exists for each old VM a directory.
How can I get rid of this?
Especially the directories in /var/lib/one/ are
On Tue, 13 Dec 2011, Daniel Molina wrote:
Hi Farooq,
On 12 December 2011 19:04, Faarooq Lowe l...@fnal.gov wrote:
What is the core_auth setting for in sunstone-server.conf? There isn't any
reference to it in the documentation.
This parameter defines the method used by the server to
What kind of transfer method are you using? shared, ssh, lvm?
You can load the .raw file into the image repository, make
it persistent, and that will take care of it.
Steve
On Mon, 12 Dec 2011, richard -rw- weinberger wrote:
Hi!
Assume I have two templates.
vm1.one:
---8---
NAME = vm1
What hypervisor are you using? If you are using KVM then this
behavior is normal: KVM only occupies the amount of RAM
that the virtual machine is actually using at the time, and
OpenNebula reports the usage accordingly. This makes it possible
to overbook and put 16 2-GB KVM VMs on a
Hi Graeme--
I'm not an OpenNebula developer...our organization currently
has requirements similar to yours except that for some cases
we also allow X.509 authentication. We had to make the choice
which would be easier to implement and it was one of our developers
who contributed the code for