[one-users] OpenNebula drivers for Test Kitchen

2014-11-11 Thread Javier Fontan
Hi,

I would like to let you know that a driver for Test Kitchen has been
open-sourced by BlackBerry. I hope it can help you test your software in your
OpenNebula infrastructure.

https://github.com/test-kitchen/kitchen-opennebula

Also of interest are patches to the Fog library to interface with OpenNebula.

https://github.com/blackberry/fog
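
If you want to give it a quick try, the driver should be installable as a
Ruby gem; a minimal sketch (the gem name is assumed from the repository
name, check the README for the exact driver configuration):

  # install the Test Kitchen OpenNebula driver (gem name assumed from the repo)
  gem install kitchen-opennebula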

Cheers
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] econe server and sha1

2014-11-11 Thread Daniel Molina
Hi,

What auth driver are you using in econe.conf?
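
For reference, the auth method econe uses can be checked directly in its
configuration file; a quick sketch (the path assumes a default package
install, and the exact setting name may differ between versions):

  # show the configured authentication setting for the econe server
  grep -i auth /etc/one/econe.conf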

Cheers

On 10 November 2014 09:17, Alejandro Feijóo alfei...@cesga.es wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Any idea where to look? I still get the same error.

 [oneadmin@test11 ~]$ oneuser show 17
 USER 17 INFORMATION
 ID  : 17
 NAME: alfeijooec2
 GROUP   : users
 PASSWORD: 30e3ef7255df9b52fa130697ef83348f7ed5
 AUTH_DRIVER : core
 ENABLED : Yes

 USER TEMPLATE
 DEFAULT_VIEW=user
 TOKEN_PASSWORD=7aafcdf5de6b49c4bbc8e31787a674080c9c

 RESOURCE USAGE  QUOTAS

 DATASTORE ID      IMAGES          SIZE
            1      1 /   -     40M /   -

 [oneadmin@test11 ~]$ econe-describe-images --access-key alfeijooec2
 - --secret-key 30e3ef7255df9b52fa130697ef83348f7ed5

 econe-describe-images: The username or password is not correct

 [oneadmin@test11 ~]$ econe-describe-images --access-key alfeijooec2
 - --secret-key 7aafcdf5de6b49c4bbc8e31787a674080c9c

 econe-describe-images: The username or password is not correct


 PS: the alfeijooec2 user can instantiate VMs through ONE and Sunstone.

 Thanks again :D

 On 06/11/14 09:27, Daniel Molina wrote:
  That's right, if you use alfeijooec2 through EC2 you don't have to change
  anything, just use the password returned by oneuser show alfeijooec2
  as the AWS_SECRET_KEY.
 
  On 6 November 2014 09:23, Alejandro Feijóo alfei...@cesga.es wrote:
 
  Oh...
 
  I have these 2 users.
 
    14 alfeijoo     users  ldap   1 /   -   1024M /   -   1.0 /   -
    17 alfeijooec2  users  core       -           -           -
 
  I understand that user 14 will never work in this scenario, but user 17 may?
 
  Thanks :)
 
 
  On 06/11/14 08:56, Daniel Molina wrote:
  Hi,
 
   LDAP authentication is not supported through EC2, at least using regular
   clients. If you want to use this kind of authentication you have to change
   the auth method to opennebula in the econe.conf file and include the Basic
   Auth headers in every EC2 request.
 
  Cheers
 
  On 6 November 2014 08:27, Alejandro Feijóo alfei...@cesga.es wrote:
 
   Hi, sorry for the delay, I was on vacation.
  
   Yes, that was one of the tests I did, but I got the same error.
  
   Is it possible that there is some kind of problem when using LDAP?
  
   Any recommendation? :D
  
   Thanks in advance.
 
 

 - --
 Alejandro Feijóo Fraga
 Systems Technician
 CESGA
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 Comment: GPGTools - https://gpgtools.org

 iQEcBAEBCgAGBQJUYHSKAAoJEKshAoM6XWq57YMH/3I7ON+2txmvrzEzgXYGr+Kx
 DcjiB5XJn9+G0vOWDZSt8El9i7mvnEaO4CRpvYjJebUk2y3p3ZBKCellV0lnlJEY
 Ub9iWVIGG/heKBe94fxf824jAFnQmxtEuccrSrJkExURBY6KdWLTsTblkeVUM3Uf
 EBSumNhXqagjRIE/qU8MkEuRi2Mh2c/ov19BD0VdT5+PvN9HNhtV+gJlXmp8daE5
 eDBfQfTh+dRTnzA8BL0xCN+u5k8tu7fPNZ8WlST8d4qk8zwRl9zoZtDORBl/GyfN
 vgnRJTGrshoQpnSCzrFuajtJ5b2no18FY/jydAO3tubAzfGsDarB1KGJwH6SP4U=
 =cFmb
 -END PGP SIGNATURE-




-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula, can it help in a VMware based security lab?

2014-11-11 Thread Tino Vazquez
Hi Paul,

Your scenario can currently be implemented with the integration between
OpenNebula and vCenter, with one limitation. You can define a VM
template in vCenter that can later be consumed by your students.

The limitation at the moment is that no attach/detach NIC functionality is
present, so a NIC on a network interconnecting all the VMs needs to exist
in each VM at deployment time. When it is time to probe each other's
networks, that NIC needs to be brought up inside the guest (using ifup,
for instance, if they are running Linux; see the sketch below).
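
A minimal sketch of what a student could run inside a Linux guest to bring
that pre-attached NIC up (the interface name and address are just examples,
not something OpenNebula configures for you):

  # bring the extra, pre-attached interface up and give it a test address
  ip link set eth1 up
  ip addr add 10.0.0.2/24 dev eth1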

We are working towards adding the attach/detach NIC functionality in
vCenter in the next OpenNebula release.

Let us know if you need more information about your scenario and OpenNebula.

Best,

-Tino



--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect | Head of Research at OpenNebula
Systems (formerly C12G Labs)
cvazquez@OpenNebula.Systems | @OpenNebula

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
To and Cc boxes). They are the property of OpenNebula.Systems S.L.
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at abuse@opennebula.systems and delete the
e-mail and attachments and any copy from your system. OpenNebula
thanks you for your cooperation.


On 10 November 2014 18:27, Paul Griffith pa...@cse.yorku.ca wrote:
 Hello,

 Please forgive me if this question has been asked before. I am tasked with
 upgrading our security lab from VMware Lab Manager (now end of life) to
 vCloud Director/Suite, which is itself being replaced with vCAC. My timeline
 is 4 weeks.


 A little background: our security lab consists of 9 servers running VMs.

 - 9 x servers for the students to run their VMs (Linux and Windows) to
 explore security exploits and vulnerabilities. These 9 servers consist of
 6 x whitebox Xeon 5650, 2.67GHz, 16GB and 3 x Dell R410, 5675, 3.06GHz, 32GB

 - 1 x services server running AD, DNS and DHCP (Windows Server)

 - 1 x NFS server (12 x 900GB 10K SAS drives)

 When the lab is running it is disconnected from our network :). Students
 access their VMs by using a KVM (Raritan) to connect to a Windows PC, where
 they run the vSphere client to connect to their VMs (via vCenter) on a
 private internal network.

 Since we are upgrading and with all the cloud talk, I wanted to see if there
 was something that didn't need as much work as vCloud Director or vCAC.

  Our students just work with VMs and virtual networks. At the end of
 the course they connect their separate networks and probe each other's
 networks.

 Our department is a member of the VMware Academic Program (VMAP); for a
 yearly fee we get access to:

 - VMware Workstation
 - VMware Fusion
 - VMware vCloud Suite Standard
 - VMware vSphere Enterprise
 - VMware vCenter Server Standard
 - VMware vCloud Director


 I was wondering if OpenNebula can help in any way, or whether I am just
 better off working with VMware.

 Thank you for taking the time

 Paul Griffith
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Backing up virtual machines.

2014-11-11 Thread Ruben S. Montero
Hi Tao

An OpenNebula VM backup can be performed in two different ways:

1.- Via disk snapshots. Note that these are copies of the VM disks, and
they are stored in an Image Datastore.
  * The volume needs to be offline to guarantee FS consistency (e.g.
power off the VM or unmount it inside the VM). There is no need to shut
down the VM, though: use onevm disk-snapshot --live (onevm saveas is
deprecated, use onevm disk-snapshot instead); see the sketch below.
  * If needed, the operation can be automated by a simple script or a
scheduled action.
  * The VM can be recovered manually from OpenNebula (by defining a new VM
based on the saved disk). Note that the life-cycle of the saved disk is
independent of the original VM (it does not need to be running, for example).
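
A rough sketch of the disk-snapshot approach (VM ID, disk ID and image name
are examples; check onevm help disk-snapshot for the exact arguments in
your version):

  # save disk 0 of running VM 42 as a new image in the image datastore
  onevm disk-snapshot --live 42 0 "backup-vm42-$(date +%F)"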

2.- Via the underlying storage system. You can create a special system DS
with its own backup policies (e.g. leveraging NAS or filesystem features).
Note:
  * You would probably need some kind of synchronization with the VM to
prevent FS corruption.
  * The backup procedure is handled transparently to OpenNebula.
  * The recovery process cannot be initiated from OpenNebula and needs to
be put in place separately.

In your case you could also increase reliability (which is different from
backup) by configuring some of the local volumes used for the SSH Datastore
with DRBD or a RAID configuration.
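
For the RAID option, purely as an illustration (device names are examples;
a DRBD setup would need its own configuration):

  # mirror two local disks and use the resulting device for the datastore
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc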

Cheers

Ruben


On Mon Nov 10 2014 at 10:41:46 PM Tao Craig t...@ispot.tv wrote:

 Hi,



 I have a question about backing up and restoring virtual machines.



 I am using OpenNebula 4.8 and the SSH transfer drivers as a means to
 deploy virtual machines. My images are not persistent and I am using the
 qcow2 image format. All of my virtual machines and physical hosts are
 running CentOS 6.5 and using ext3 file systems.



 Is it possible in my current environment to back up a virtual machine
 without shutting it down?



 I know I can snapshot a virtual machine, but what if that machine is
 suddenly deleted for some reason (e.g. the physical host crashes and has to
 be rebuilt)? I'm under the impression snapshots are only usable as long as
 the original virtual machine is still running.



 I know I can also use the onevm saveas command, but that only works if you
 plan on shutting the virtual machine down at some point.



 Ideally, I would like to do something like copy the (running) disk image
 to offline storage and then have the ability to swap that out with a new
 running virtual machine to restore the back up in case the original virtual
 machine is lost. Rebooting in the case of restoring is acceptable of
 course, but I can’t shut my machines down every time I want to back them up.



 Is something like this possible?



 Thanks.






 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Do we need to mount /var/lib/one with ceph

2014-11-11 Thread Ruben S. Montero
The system datastore is accessed from the front-end to generate the context
ISO. Note that it doesn't need to be exported from the front-end; the
nodes and the front-end itself can mount it from a different NAS server.

The /var/lib/one contents are needed on the front-end but not on the nodes;
the nodes only need the system datastore directory.

If a VM tries to access a disk on a device that is not mounted, or the
NAS server is down, you'll be in trouble. However, note that the context ISO
is only accessed during the boot state and the main disks of the VM are in
the Ceph pool, so you'll probably be fine (we have not tested it, though).
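
If it helps, a minimal sketch of mounting only the system datastore
directory from a NAS on a host (server name, export path and datastore ID 0
are assumptions; adapt them to your setup):

  # mount just the system datastore, not the whole /var/lib/one
  mount -t nfs nas-server:/export/one/datastores/0 /var/lib/one/datastores/0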

Cheers


On Mon Nov 10 2014 at 3:38:14 AM Huynh Dac Nguyen ndhu...@spsvietnam.vn
wrote:

 Dear Ruben,

 Thank you for replying

 So what happens if the OpenNebula front-end is down, /var/lib/one isn't
 mounted, and only the System Datastore is mounted? Then the VM can't work,
 right? (The VM requires the image and additional files.)

 Can you explain why we don't need to export the whole /var/lib/one?

 Regards,
 Ndhuynh

  Ruben S. Montero rsmont...@opennebula.org 11/7/2014 4:52 PM
 
 Hi Ndhuynh

 Ceph storage in OpenNebula is handled as follows:


 1.- Image Datastores hold the disk image repository, as well as the images
 of running VMs, in a Ceph volume.
 2.- The System Datastore holds additional VM files: checkpoints, context
 disks and the like.


 If you need live migration, the easiest way is to have a shared
 filesystem for the System Datastore. You don't need to export the whole
 /var/lib/one, though, just the datastore directory.


 If you do not need to live-migrate VMs you should be OK with an SSH-based
 system datastore.


 Cheers






 On Wed Nov 05 2014 at 12:09:47 PM Huynh Dac Nguyen
 ndhu...@spsvietnam.vn wrote:

 Hi All,


 I'm researching OpenNebula with Ceph; I saw that most of the guides focus
 on using Ceph as a datastore (block device), right?


 Do we need to mount Ceph on /var/lib/one as a file system to protect
 against the OpenNebula front-end going down unexpectedly?


 My plan is:


 1) Make a Ceph file system named "one" and mount it on /var/lib/one on all
 OpenNebula hosts (front-end and nodes)
 2) Create a Ceph block device and add it to OpenNebula as a datastore


 Is this the right way?


 Regards,
 Ndhuynh
 ndhu...@spsvietnam.vn

 This e-mail message including any attachments is for the sole use of
 the intended recipient(s) and may contain privileged or confidential
 information. Any unauthorized review, use, disclosure or distribution is
 prohibited. If you are not the intended recipient, please immediately
 contact the sender by reply e-mail and delete the original message and
 destroy all copies thereof.

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


 This e-mail message including any attachments is for the sole use of the
 intended recipient(s) and may contain privileged or confidential
 information. Any unauthorized review, use, disclosure or distribution is
 prohibited. If you are not the intended recipient, please immediately
 contact the sender by reply e-mail and delete the original message and
 destroy all copies thereof.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula, can it help in a VMware based security lab?

2014-11-11 Thread Tino Vazquez
Hi Paul,

BTW, I invite you to try out vOneCloud; in very little time, and in a
non-intrusive way, you can try the vCenter support to see if it fits your
needs.

vOneCloud web page: http://vonecloud.today/
vOneCloud documentation: http://docs.vonecloud.today/

Regards,

-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect | Head of Research at OpenNebula
Systems (formerly C12G Labs)
cvazquez@OpenNebula.Systems | @OpenNebula

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
To and Cc boxes). They are the property of OpenNebula.Systems S.L.
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at abuse@opennebula.systems and delete the
e-mail and attachments and any copy from your system. OpenNebula
thanks you for your cooperation.


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

Re: [one-users] Do we need to mount /var/lib/one with ceph

2014-11-11 Thread Huynh Dac Nguyen

Dear Ruben,

You mean:
- /var/lib/one is mounted from the NAS server only on the front-end
- the datastore is mounted from Ceph on both the front-end and the nodes

So:
front-end servers must mount both the NAS (/var/lib/one) and Ceph
(/var/lib/one/datastores/[number]), i.e. 2 mount points;
node servers must mount only Ceph
(/var/lib/one/datastores/[number]), i.e. 1 mount point;
and the passwordless (SSH) setup must be updated manually.

Is that correct?
If so, why don't we use Ceph for both shared locations, or just for
/var/lib/one? I really don't want to manage more devices.
Could you show me the best solution? I'm not sure how to continue because
I'm stuck integrating Ceph and OpenNebula.


Regards,
Ndhuynh


This e-mail message including any attachments is for the sole use of the
intended recipient(s) and may contain privileged or confidential
information. Any unauthorized review, use, disclosure or distribution is
prohibited. If you are not the intended recipient, please immediately
contact the sender by reply e-mail and delete the original message and
destroy all copies thereof.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org