[one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-13 Thread Giancarlo De Filippis
 

Hi all, 

Does anyone have some documentation for a sample private cloud structure
with: 

- Two nodes in HA 

- Front-end on virtual machine HA 

and the storage file system on GlusterFS? 

Thanks all.. 

GD 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-13 Thread Bart
Hi,

Do you mean to have the frontend (OpenNebula management) running on the
actual OpenNebula cluster?

If that's the case then I would also be very interested in this scenario :)

As for GlusterFS, we've followed these instructions with success:


   - http://docs.opennebula.org/4.10/
   - http://docs.opennebula.org/4.10/administration/storage/gluster_ds.html


All the other instructions we followed were also on the documentation
site.

With it, we've created the following:


   - *GlusterFS:* Two storage hosts (running CentOS 7) in the back-end
   which provide the whole image/datastore for OpenNebula. HA is set up via
   round-robin DNS. After testing we've found this to work quite well in
   terms of performance and redundancy (better than expected). All network
   ports are set up as a bond using balance-alb.
   - *Hypervisors:* At the moment, 4 hypervisors (will eventually grow to 8)
   running CentOS 7. They have two network ports set up in a bond using
   balance-alb, connected to the virt/storage VLAN, and two network ports
   set up in a bond using active-backup, which is used for the bridge port
   (with VLAN tagging enabled). Active-backup seems to be the proper setting
   for a bridge port, since a bridge can't work that well with balance-alb.
   - *OpenNebula management node:* This includes the OpenNebula daemon,
   Sunstone and all other functions. We're still building up this
   environment: the hypervisors and storage have been arranged and we're
   happy with how those parts are set up, but the management node is
   something we're still not sure about. We're considering buying two
   physical servers for this job and creating an active/backup solution as
   described in the documentation:
   
http://docs.opennebula.org/4.10/advanced_administration/high_availability/oneha.html.
   However, buying hardware just for this seems a waste, since these
   servers won't be doing that much (resource-wise).
  - We do plan to test this node, in a dual setup, on the actual
  OpenNebula cluster, but something is holding us back: I can't find any
  proof of someone achieving this, and we foresee some issues that might
  occur when OpenNebula is managing itself (e.g. live migration, reboot,
  shutdown, etc.).
   - *Virtual machine HA setup:* We haven't actually started on this part
   yet, but this document describes how you can create a setup where
   virtual machines become HA:
   
http://docs.opennebula.org/4.10/advanced_administration/high_availability/ftguide.html
   For us this is something we'll probably start looking into once we've
   found a proper setup for the OpenNebula management node.
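As a concrete illustration of the Gluster wiring above, here is a sketch of a datastore definition following the 4.10 gluster_ds guide; the host, port, and volume names are examples, not values from this thread:

```shell
# Sketch of a GlusterFS-backed datastore template per the 4.10 guide.
# GLUSTER_HOST and GLUSTER_VOLUME values below are example names.
cat > gluster_ds.tmpl <<'EOF'
NAME           = "glusterfs_ds"
DS_MAD         = "fs"
TM_MAD         = "shared"
DISK_TYPE      = "GLUSTER"
GLUSTER_HOST   = "gluster1.example.com:24007"
GLUSTER_VOLUME = "one_images"
EOF
# Then register it against a running frontend:
#   onedatastore create gluster_ds.tmpl
```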


Hopefully this info helps you a little. I also hope someone else can
elaborate on running OpenNebula as a VM on its own cluster, or at least
share other best practices on how to run OpenNebula in a VM without buying
two physical servers for the job (any insight on this subject would be
helpful :)


-- Bart



2014-11-13 9:50 GMT+01:00 Giancarlo De Filippis gdefilip...@ltbl.it:





-- 
Bart G.


Re: [one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-13 Thread Giancarlo De Filippis
 

Thanks so much Bart, 

That's the case: a VM running on the cluster is more similar to the
VMware HA solution (only two hosts, without a separate front-end). 

For GlusterFS, I've already used this solution on my public cloud
with success. 

You helped me with the port setup of the hypervisors :) 

I hope (like you) that someone (users or the OpenNebula team) has best
practices on how to run OpenNebula in a VM. 

Thanks. 

Giancarlo 

On 13/11/2014 at 11:38, Bart wrote: 



Re: [one-users] Bulk delete of vCenter VMs leaves stray VMs

2014-11-13 Thread Javier Fontan
Hi,

We have opened an issue to track this problem:

http://dev.opennebula.org/issues/3334

Meanwhile you can decrease the number of actions sent by changing the
-t parameter (number of threads) for the VM driver in
/etc/one/oned.conf. For example:

VM_MAD = [
    name       = "vcenter",
    executable = "one_vmm_sh",
    arguments  = "-p -t 2 -r 0 vcenter -s sh",
    type       = "xml" ]

Cheers

On Wed Nov 12 2014 at 5:40:00 PM Sebastiaan Smit b...@echelon.nl wrote:

 Hi list,

 We're testing the vCenter functionality in version 4.10 and are seeing
 some strange behaviour while doing bulk actions.

 Deleting VMs sometimes leaves stray VMs on our cluster. We see the
 following in the VM log:

 Sun Nov  9 15:51:34 2014 [Z0][LCM][I]: New VM state is RUNNING
 Wed Nov 12 17:30:36 2014 [Z0][LCM][I]: New VM state is CLEANUP.
 Wed Nov 12 17:30:36 2014 [Z0][VMM][I]: Driver command for 60 cancelled
 Wed Nov 12 17:30:36 2014 [Z0][DiM][I]: New VM state is DONE
 Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Command execution
 fail: /var/lib/one/remotes/vmm/vcenter/cancel 
 '423cdcae-b6b3-07c1-def6-96b9f3f4b7b3'
 'demo-01' 60 demo-01
 Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Cancel of VM
 423cdcae-b6b3-07c1-def6-96b9f3f4b7b3 on host demo-01 failed due to
 ManagedObjectNotFound: The object has already been deleted or has not been
 completely created
 Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 ExitCode: 255
 Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Failed to execute
 virtualization driver operation: cancel.
 Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Successfully
 execute network driver operation: clean.
 Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: CLEANUP SUCCESS 60

 We see it in a different manner while bulk-creating VMs (20+ at a time):

 Sun Nov  9 16:01:34 2014 [Z0][DiM][I]: New VM state is ACTIVE.
 Sun Nov  9 16:01:34 2014 [Z0][LCM][I]: New VM state is PROLOG.
 Sun Nov  9 16:01:34 2014 [Z0][LCM][I]: New VM state is BOOT
 Sun Nov  9 16:01:34 2014 [Z0][VMM][I]: Generating deployment file:
 /var/lib/one/vms/81/deployment.0
 Sun Nov  9 16:01:34 2014 [Z0][VMM][I]: Successfully execute network driver
 operation: pre.
 Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Command execution fail:
 /var/lib/one/remotes/vmm/vcenter/deploy '/var/lib/one/vms/81/deployment.0'
 'demo-01' 81 demo-01
 Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Deploy of VM 81 on host demo-01
 with /var/lib/one/vms/81/deployment.0 failed due to undefined method
 `uuid' for nil:NilClass
 Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: ExitCode: 255
 Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Failed to execute virtualization
 driver operation: deploy.
 Sun Nov  9 16:01:36 2014 [Z0][VMM][E]: Error deploying virtual machine
 Sun Nov  9 16:01:36 2014 [Z0][DiM][I]: New VM state is FAILED
 Wed Nov 12 17:30:19 2014 [Z0][DiM][I]: New VM state is DONE.
 Wed Nov 12 17:30:19 2014 [Z0][LCM][E]: epilog_success_action, VM in a
 wrong state


 I think these have two different root causes. The cluster is not under
 load.


 Has anyone else seen this behaviour?

 Best regards,
 --
 Sebastiaan Smit
 Echelon BV

 E: b...@echelon.nl
 W: www.echelon.nl
 T: (088) 3243566 (gewijzigd nummer)
 T: (088) 3243505 (servicedesk)
 F: (053) 4336222

 KVK: 06055381




Re: [one-users] How do you change the '-vga cirrus' option on KVM

2014-11-13 Thread Gerry O'Brien
Hi Daniel,

  Thanks for the info. In general, do you know what the mapping is
between the XML 'model type' and the parameter that gets passed to KVM
via the -vga switch?

Regards,
Gerry



On 10/11/2014 18:04, Daniel Dehennin wrote:
 Gerry O'Brien ge...@scss.tcd.ie writes:

 Hi,
 Hello,

 How do you change the '-vga cirrus' option for KVM to something
 else, e.g. '-vga std'? Is it done via the RAW data in the template?
 That's how I do it:

 RAW = [ TYPE = "kvm",
         DATA = "<devices><video><model type='vga' heads='1'/></video></devices>" ]

 It could be interesting to be able to select it in the template:

 1. provide a default in /etc/one/vmm_exec/vmm_exec_kvm.conf (cirrus for 
 example)
 2. if graphics is “SPICE”, set the card to “qxl” in the template
 3. user can override the card type for this template

 If you define the RAW DATA in /etc/one/vmm_exec/vmm_exec_kvm.conf, then
 you can not override it in the template because RAW are concatenated.

 Regards.




-- 
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341



Re: [one-users] econe server and sha1

2014-11-13 Thread Daniel Molina
Hi Alejandro,

I cannot reproduce the problem on my side:
[oneadmin@node1 ~]$ oneuser show | grep PASSWORD
PASSWORD: 86f7e437faa5a7fce15d1ddcb9eaeaea
[oneadmin@node1 ~]$ econe-describe-instances --access-key oneadmin
--secret-key 86f7e437faa5a7fce15d
1ddcb9eaeaea
  instanceId   ImageId  State IP instanceType
  i-  pending
  i-0001  running

Let's check a few things:
* Any relevant errors in econe.log/error or oned.log/error?
* Could you try using the -K and -S options instead of the verbose ones?
* You can use the euca2ools to interact with the econe server; are you
getting the same error with that CLI?

Cheers

On 12 November 2014 13:18, Alejandro Feijóo alfei...@cesga.es wrote:


 hi!

 #
 # Auth
 #

 # Authentication driver for incoming requests
 #   - ec2, default Access key and Secret key scheme
 #   - x509, for x509 certificates based authentication
 :auth: ec2

 # Authentication driver to communicate with OpenNebula core
 #   - cipher, for symmetric cipher encryption of tokens
 #   - x509, for x509 certificate encryption of tokens
 :core_auth: cipher



 in sunstone-server we have:
 :auth: opennebula

 :core_auth: cipher





 El 11/11/14 15:02, Daniel Molina escribió:
  Hi,
 
  What auth driver are you using in econe.conf?
 
  Cheers
 
  On 10 November 2014 09:17, Alejandro Feijóo alfei...@cesga.es wrote:
 
  Any idea where to touch... because still with the same error.
 
  [oneadmin@test11 ~]$ oneuser show 17
  USER 17 INFORMATION
  ID  : 17
  NAME: alfeijooec2
  GROUP   : users
  PASSWORD: 30e3ef7255df9b52fa130697ef83348f7ed5
  AUTH_DRIVER : core
  ENABLED : Yes
 
  USER TEMPLATE
  DEFAULT_VIEW=user
  TOKEN_PASSWORD=7aafcdf5de6b49c4bbc8e31787a674080c9c
 
  RESOURCE USAGE  QUOTAS
 
  DATASTORE ID   IMAGESSIZE
 1 1 /-  40M /-
 
  [oneadmin@test11 ~]$ econe-describe-images --access-key alfeijooec2
  --secret-key 30e3ef7255df9b52fa130697ef83348f7ed5
 
  econe-describe-images: The username or password is not correct
 
  [oneadmin@test11 ~]$ econe-describe-images --access-key alfeijooec2
  --secret-key 7aafcdf5de6b49c4bbc8e31787a674080c9c
 
  econe-describe-images: The username or password is not correct
 
 
  pd. alfeijooec2 can instantiate mv through ONE and Sunstone.
 
  Thanks again :D
 
  El 06/11/14 09:27, Daniel Molina escribió:
  That's right, if you use alfeijooec2 through ec2 you don't have to
 change anything, just use the password returned by oneuser show
 alfeijooec2 as the AWS_SECRET_KEY
 
  On 6 November 2014 09:23, Alejandro Feijóo alfei...@cesga.es wrote:
 
  Oh...
 
  I have that 2 users.
 
14 alfeijoousers  ldap   1 /   -   1024M /   -
  1.0 /   -
17 alfeijooec2 users  core - -
 -
 
  I understand that user 14 will never work in this scenario, but user 17
 may?
 
  Thanks :)
 
 
  El 06/11/14 08:56, Daniel Molina escribió:
  Hi,
 
  LDAP authentication is not supported through ec2, at least using
  regular
  clients. If you want to use this kind of authentication you have to
  change
  the auth method to opennebula in the econe.conf file and include the
  Basic
  Auth headers in every ec2 request
 
  Cheers
 
  On 6 November 2014 08:27, Alejandro Feijóo alfei...@cesga.es
 wrote:
 
  Hi, sorry for the delay... I was on vacation...
 
  Yes, that was one of the tests I did... but with the same error.
 
  Is it possible that there are problems when using LDAP?
 
  Any random recommendation? :D
 
  Thanks in advance.
 
 
  El 24/10/14 10:30, Daniel Molina escribió:
 
 
 
 
 
 
 
 
 
 
 

 --
 Alejandro Feijóo Fraga
 Systems Technician
 CESGA




-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula

[one-users] How to fix hosts with deleted /var/tmp/one files

2014-11-13 Thread Steven Timm


On several occasions in the past I have forgotten to
tell my tmpwatch utility not to delete files out of /var/tmp/one,
and thus I have seen most of the remotes get deleted off of
my cloud hosts, yet OpenNebula 4.8 doesn't report any problems.

I try to fix the remotes by doing an onehost sync and nothing happens.
I try to disable/enable the host and nothing happens there either.
The only thing that seems to revive a host is an actual
onehost delete / onehost create.

Is it possible for the remotes to add an integrity check?
They should not report the host as on if the datastores
directory of the remotes has been totally deleted.
Also, is it possible to configure a directory other than the
default /var/tmp/one? Finally, is it possible to
redistribute the remotes short of an onehost delete / onehost create?
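On the tmpwatch side, one preventive sketch (assuming the stock cron-driven tmpwatch layout; flags and paths should be checked against tmpwatch(8) on your distro):

```shell
# Fragment for /etc/cron.daily/tmpwatch (a sketch, not verbatim from any
# distro): add a -x/--exclude for the remotes dir so /var/tmp/one survives
# the periodic cleanup of /var/tmp.
/usr/sbin/tmpwatch -umc -x /var/tmp/one 720 /var/tmp
```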

Steve Timm


--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Office:  Wilson Hall room 804
Fermilab Scientific Computing Division,
Currently transitioning from:
Scientific Computing Services Quadrant
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing

To:
Scientific Computing Facilities Quadrant.,
Experimental Computing Facilities Dept.,
Project Lead for Virtual Facility Project.




[one-users] Picking certain fields /USER/NAME out of $USER[TEMPLATE]

2014-11-13 Thread Steven Timm

Under OpenNebula 3.2 we would include in the
contextualization section the field $USER[TEMPLATE]
and then add a contextualization script such that
we would grab the field /USER/NAME out of the base64 encoded
template information.

In OpenNebula 4.8 you can still put $USER[TEMPLATE]
into your contextualization, but now there is a lot more junk
in $USER[TEMPLATE], namely all the key pairs that have been
created via ec2CreateKeyPair for each user. At first we
did not know what was happening, because $USER[TEMPLATE]
grew to over 300 kB, a size that actually breaks bash
(you can't assign a shell variable to a value that big).


So now the question is: is there a way to include only the
user name field of the template in the contextualization section,
and nothing else, through some combination of Ruby syntax?
If so, how?
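Absent a template-language filter, one workaround sketch is to do the picking inside the context script itself. The variable name below is hypothetical (a stand-in for however the encoded blob reaches the VM), and the decoded layout is assumed to be the user XML:

```shell
#!/bin/sh
# Sketch: pull /USER/NAME out of a base64-encoded user blob inside the
# guest. USER_B64 is a hypothetical variable; the payload below merely
# simulates the encoded template for illustration.
USER_B64=$(printf '%s' '<USER><NAME>steve</NAME><TEMPLATE/></USER>' | base64 | tr -d '\n')

# Decode and extract the first <NAME>...</NAME> value.
USER_NAME=$(printf '%s' "$USER_B64" | base64 -d \
  | sed -n 's|.*<NAME>\([^<]*\)</NAME>.*|\1|p')
echo "$USER_NAME"   # prints "steve"
```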

Steve Timm


--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Office:  Wilson Hall room 804
Fermilab Scientific Computing Division,
Currently transitioning from:
Scientific Computing Services Quadrant
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing

To:
Scientific Computing Facilities Quadrant.,
Experimental Computing Facilities Dept.,
Project Lead for Virtual Facility Project.




Re: [one-users] How do you change the '-vga cirrus' option on KVM

2014-11-13 Thread Daniel Dehennin
Gerry O'Brien ge...@scss.tcd.ie writes:

 Hi Daniel,

   Thanks for the info. In general, do you know what the mapping is
 between the XML 'model type' and the parameter that gets passed to KVM
 via the -vga switch?

I found it in the libvirt documentation[1].

Regards.

Footnotes: 
[1]  http://libvirt.org/formatdomain.html#elementsVideo
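For the record, the common mappings from that page can be sketched as a small lookup (worth double-checking against your libvirt version):

```shell
# Sketch: libvirt <model type='X'/> to qemu -vga mapping, per the libvirt
# domain XML docs (verify against your libvirt/qemu versions).
model_to_vga() {
  case "$1" in
    cirrus) echo "-vga cirrus" ;;
    vga)    echo "-vga std" ;;
    qxl)    echo "-vga qxl" ;;
    vmvga)  echo "-vga vmware" ;;
    *)      echo "unknown" ;;
  esac
}

model_to_vga vga   # prints "-vga std"
```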

-- 
Daniel Dehennin
Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF




Re: [one-users] Ports OpenNebula

2014-11-13 Thread Carlos Martín Sánchez
Hi,

On Wed, Nov 12, 2014 at 11:16 AM, Maria Jular maria.ju...@fcsc.es wrote:

 Hello everyone,

 Is there an OpenNebula document where all necessary ports to access the
 platform from remote sites appear?


We don't have a document specifically for that, but you can find all the
needed ports in the conf files.

For sunstone [1], you will need to open :port and :vnc_proxy_port. By
default, 9869 and 29876.

If you also need to access oned, to use the CLI remotely, the port is in
oned.conf [2]. OneFlow is a separate server, and you need to open its port
[3] also to use the CLI remotely.
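As a sketch of what that means on a typical frontend firewall (port numbers are the defaults mentioned above plus oned's default XML-RPC port 2633; firewalld is an assumption about the host):

```shell
# Sketch: open the default Sunstone ports with firewalld (9869 web UI,
# 29876 VNC proxy), plus 2633 for oned's XML-RPC if using the CLI remotely.
firewall-cmd --permanent --add-port=9869/tcp
firewall-cmd --permanent --add-port=29876/tcp
firewall-cmd --permanent --add-port=2633/tcp
firewall-cmd --reload
```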

Regards

[1]
http://docs.opennebula.org/4.10/administration/sunstone_gui/sunstone.html#configuration
[2] http://docs.opennebula.org/4.10/administration/references/oned_conf.html
[3]
http://docs.opennebula.org/4.10/advanced_administration/application_flow_and_auto-scaling/appflow_configure.html
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org http://www.opennebula.org/ | cmar...@opennebula.org |
@OpenNebula http://twitter.com/opennebula


[one-users] IOPS in quota section for 4.12

2014-11-13 Thread kiran ranjane
Hi Team,

I think we should add IOPS to the quota section, so that oneadmin can
assign IOPS at the user and group level.

For example :

-- Oneadmin assigns 2000 IOPS to a particular user, which means that
user should not exceed 2000 IOPS across the total number of VMs they are
running.

-- After IOPS are assigned to a user/group through the quota section, a
Cloud view user should be able to assign IOPS at the VM disk level as
they choose. (E.g. if a user/group is assigned a quota of 2000 IOPS,
they should be able to set the IOPS of each VM disk before starting the
VM.)

User/Group have total IOPS Quota = 2000

User Sets IOPS at Disk Level :
Virtual Machine OS disk = 300 IOPS
Virtual Machine Data Disk  = 300 IOPS

Total remaining IOPS = 1400 IOPS
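The accounting proposed above is simple subtraction; as a sketch:

```shell
# Sketch of the proposed quota bookkeeping:
# remaining = quota - sum of per-disk IOPS assignments.
QUOTA=2000
OS_DISK=300
DATA_DISK=300
REMAINING=$((QUOTA - OS_DISK - DATA_DISK))
echo "Remaining IOPS: $REMAINING"   # Remaining IOPS: 1400
```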

I really think this feature is a must-add for the quota section, as it
will ease many things related to the default way of assigning IOPS to
VMs.
Regards
Kiran Ranjane


Re: [one-users] IOPS in quota section for 4.12

2014-11-13 Thread Steven C Timm
Good idea. In general it would be nice to be able to set the various
quantities that can be controlled by the RHEV extended libvirt
functions, which include IOPS but also network bandwidth.

Steve

