[one-users] OpenNebula 4.4 Retina is Out!

2013-12-03 Thread Tino Vazquez
Dear Community,

This is the official announcement of the release of OpenNebula 4.4,
codename Retina. The principal aim of this release is to deliver the most
demanded and useful features while reducing complexity, in order to better
support the most commonly used functionality. As a project driven by user needs, this
release includes important features that meet real demands from
production environments, with a focus on optimization of storage,
monitoring, cloud bursting, and public cloud interfaces.

OpenNebula 4.4 Retina includes support for multiple system datastores,
which enables a much more efficient usage of the storage resources for
running Virtual Machines. This feature ships with different scheduling
policies for storage load balancing, intended to instruct OpenNebula
to spread the running Virtual Machines across different storage
media to optimize their use. This translates into the ability to
define more than one disk (or other backend) to hold running VMs in a
particular cluster. The monitoring subsystem also underwent a major
redesign, switching from a pull mechanism to a push model, with the
corresponding improvements in scalability.
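
As a quick, illustrative sketch of the multiple system datastore feature
(the datastore name, cluster name and scheduling attribute values below are
placeholders; see the release notes for the exact options):

  # register a second system datastore and add it to a cluster
  $ cat system_ssd.ds
  NAME   = system_ssd
  TYPE   = SYSTEM_DS
  TM_MAD = shared
  $ onedatastore create system_ssd.ds
  $ onecluster adddatastore production system_ssd

  # optionally, hint the scheduler from a VM template
  SCHED_DS_REQUIREMENTS = "NAME = system_ssd"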

An important effort has been made in the hybrid cloud model (cloud
bursting). The AWS API tools have been deprecated in favor of the newly
released Ruby SDK, which enables support for additional AWS mechanisms
such as IAM. It is also now possible to fully support hybrid VM
templates. Moreover, the AWS public cloud interface implemented by
OpenNebula has been revisited and extended with new functionality, and
improved so that instance types are offered to the end user from
OpenNebula templates.

This is a stable release and so a recommended update that incorporates
several bug fixes since 4.4 RC. We've done our best to keep
compatibility with OpenNebula 4.2, so any application developed for
previous versions should work without effort. In any case, be sure to
check the compatibility and upgrade guides. We invite you to download
it and to check the QuickStart guides, as well as to browse the
documentation, which has also been properly updated.

We would like to thank the community for its quality feedback and
contributions (check the growing addons catalog); OpenNebula 4.4
Retina wouldn't be nearly as good without your support!

As usual OpenNebula releases are named after a Nebula. The Retina
Nebula (IC 4406) is a planetary nebula near the western border of the
constellation Lupus, the Wolf. It has dust clouds and has the shape of
a torus.

No need for glasses to understand the cloud with Retina ;)

The OpenNebula Team

LINKS
  * Release Notes: http://www.opennebula.org/software:rnotes:rn-rel4.4
  * Download: http://opennebula.org/software:software
  * Documentation: http://opennebula.org/documentation:rel4.4
  * Compatibility guide:
http://opennebula.org/documentation:rel4.4:compatibility
  * Upgrade guide: http://opennebula.org/documentation:rel4.4:upgrade
  * QuickStart guides:
http://opennebula.org/documentation:rel4.4#designing_and_installing_your_cloud_infrastructure
  * Addons catalog: http://opennebula.org/addons:catalog

--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] empty qcow2 datablock image with preallocation

2013-12-03 Thread Lorenzo Faleschini

Hi

Is it possible to create in Sunstone (or via the CLI) an empty datablock
qcow2 image with preallocation (-o preallocation=metadata)?


I need it to avoid the auto-grow overhead.

For now I create the preallocated qcow2 image and then import it into ONE.
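
For reference, that manual two-step looks roughly like this (the image name,
size and datastore are placeholders):

  $ qemu-img create -f qcow2 -o preallocation=metadata empty20G.qcow2 20G
  $ oneimage create --name empty20G --path ./empty20G.qcow2 \
        --type DATABLOCK --datastore default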

Is it possible to avoid this step?
Any suggestions or alternatives?

thanks

I'm using 4.0 now; is this possible in 4.4?

Lorenzo


--
Lorenzo Faleschini
Responsabile Sistemi Informativi

skype: falegalizeit
mobile: +39 335 6055225
tel: +39 0427 807934
fax: +39 0434 1820145
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Kenneth
 

Ceph won't be the default image datastore, but you can always choose
it whenever you create an image.

You said you don't have an NFS disk and you just use a plain disk on
your system datastore, so you SHOULD use ssh in order to have live
migrations.

Mine uses shared as the datastore, since I mounted a shared folder on
each nebula node.
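
A minimal sketch of switching the system datastore to the ssh transfer
driver (the datastore ID 0 is the usual default; adjust to your setup):

  $ onedatastore update 0
  # in the editor that opens, set:
  TM_MAD = "ssh"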

---

Thanks,
Kenneth
Apollo Global Corp.

On 12/03/2013 03:01 PM, Mario Giammarco wrote:

 First, thank you for your very detailed reply!

 2013/12/3 Kenneth kenn...@apolloglobal.net

 You don't need to replace existing datastores, the important thing is
 that you edit the system datastore as ssh, because you still need to
 transfer files to each node when you deploy a VM.

 So I lose live migration, right?
 If I understand correctly ceph cannot be the default datastore either.

 Next, you should make sure that all your nodes are able to communicate
 with the ceph cluster. Issue the command ceph -s on all nodes including
 the front end to be sure that they are connected to ceph.

 ... will check...

 oneadmin@cloud-node1:~$ onedatastore list
 ___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] oneimage create command in ON3.4 and up

2013-12-03 Thread Carlos Martín Sánchez
Hi,

The datastore size is initialized to 0, and then populated with the right
size after monitoring begins to work. Maybe there was a problem
trying to monitor it. Take a look at /var/log/one/oned.log, you should have
messages like these:

Fri Nov 29 17:29:19 2013 [InM][D]: Monitoring datastore default (1)
Fri Nov 29 17:29:20 2013 [ImM][D]: Datastore default (1) successfully
monitored.

Or an error message from the drivers.
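
A quick way to check, assuming the default log location:

  $ grep -i datastore /var/log/one/oned.log | tail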

Regards

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula (http://twitter.com/opennebula)


On Mon, Dec 2, 2013 at 11:37 PM, Hyun Woo Kim hyun...@fnal.gov wrote:

  Hi,

  Let me ask a basic question.

  I just started testing ON4.2
 and when I use oneimage command to register a first image,
 I am getting the following error message

  -bash-4.1$ oneimage create firstimage.template  --datastore default
 Not enough space in datastore

  I guess this is because the default datastore has 0M size as shown
 below,
  -bash-4.1$ onedatastore list
   ID NAME      SIZE AVAIL CLUSTER    IMAGES TYPE DS   TM
    0 system       -     - -               0 sys  -    ssh
    1 default     0M     - production      0 img  fs   ssh


  My question is, how can I increase this size?

  I could not find relevant information from onedatastore help,
 I guessed this size will increase when I add a new host, create a cluster,
 add the host in this cluster
 and create a new virtual network and add it to the cluster etc,
 but I am still getting the same error message..

  Thanks in advance.
 HyunWoo KIM
 FermiCloud


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Mario Giammarco
My problem was that, because ceph is a distributed filesystem (and so it
can be used as an alternative to NFS), I supposed I could use it as a shared
system datastore.
Reading your reply I can see that is not true. Probably the official
documentation should clarify this.

In fact I hoped to use ceph as the system datastore because ceph is fault
tolerant and NFS is not.

Thanks for help,
Mario


2013/12/3 Kenneth kenn...@apolloglobal.net

  Ceph won't be the default image datastore, but you can always choose it
 whenever you create an image.

 You said you don't have an NFS disk and you just use a plain disk on your
 system datastore so you *should* use ssh in order to have live
 migrations.

 Mine uses shared as datastore since I mounted a shared folder on each
 nebula node.
 ---

 Thanks,
 Kenneth
 Apollo Global Corp.

  On 12/03/2013 03:01 PM, Mario Giammarco wrote:

 First, thank you for your very detailed reply!


 2013/12/3 Kenneth kenn...@apolloglobal.net

  You don't need to replace existing datastores, the important thing is that
 you edit the system datastore as ssh because you still need to transfer files
 to each node when you deploy a VM.


 So I lose live migration, right?
 If I understand correctly ceph cannot be the default datastore either.

  Next, you should make sure that all your nodes are able to communicate
 with the ceph cluster. Issue the command ceph -s on all nodes including
 the front end to be sure that they are connected to ceph.




 ... will check...



 oneadmin@cloud-node1:~$ onedatastore list

   ID NAME      SIZE AVAIL CLUSTER  IMAGES TYPE DS   TM
    0 system       -     - -             0 sys  -    shared
    1 default   7.3G   71% -             1 img  fs   shared
    2 files     7.3G   71% -             0 fil  fs   ssh
  100 cephds    5.5T   59% -             3 img  ceph ceph

 Once you have verified that the ceph datastore is active, you can upload
 images in the Sunstone GUI. Be aware that conversion of images to ceph's
 RBD format may take quite some time.


 I see in your configuration that system datastore is shared!

 Thanks again,
 Mario


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Kenneth
 

Actually, I'm using ceph as the system datastore. I used cephfs
(CEPH FUSE) and mounted it on all nodes on /var/lib/one/datastores/0/

Regarding ssh as the transfer driver, I haven't really used it since I'm
all on ceph, for both the system and image datastores. I may be wrong, but
that is how I understand it from the docs.
---

Thanks,
Kenneth
Apollo Global
Corp.

On 12/03/2013 06:11 PM, Mario Giammarco wrote:

 My problem was that, because ceph is a distributed filesystem (and so
 it can be used as an alternative to NFS), I supposed I could use it as
 a shared system datastore. Reading your reply I can see that is not
 true. Probably the official documentation should clarify this.

 In fact I hoped to use ceph as the system datastore because ceph is
 fault tolerant and NFS is not.

 Thanks for the help, Mario

 2013/12/3 Kenneth kenn...@apolloglobal.net

 Ceph won't be the default image datastore, but you can always choose
 it whenever you create an image.

 You said you don't have an NFS disk and you just use a plain disk on
 your system datastore, so you SHOULD use ssh in order to have live
 migrations.

 Mine uses shared as the datastore, since I mounted a shared folder on
 each nebula node.

 ---

 Thanks,
 Kenneth
 Apollo Global Corp.

 On 12/03/2013 03:01 PM, Mario Giammarco wrote:

 First, thank you for your very detailed reply!

 2013/12/3 Kenneth kenn...@apolloglobal.net

 You don't need to replace existing datastores, the important thing is
 that you edit the system datastore as ssh, because you still need to
 transfer files to each node when you deploy a VM.

 So I lose live migration, right?
 If I understand correctly ceph cannot be the default datastore either.

 Next, you should make sure that all your nodes are able to communicate
 with the ceph cluster. Issue the command ceph -s on all nodes including
 the front end to be sure that they are connected to ceph.

 ... will check...

 oneadmin@cloud-node1:~$ onedatastore list
 ___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] (block) iotune support in templates

2013-12-03 Thread Carlos Martín Sánchez
Hi Stefan,

On Sun, Nov 17, 2013 at 9:46 PM, Stefan Kooman ste...@bit.nl wrote:

 Dear list,

 Recently I've been playing with OpenNebula on Ubuntu Saucy (13.10). It
 comes with libvirt-bin version 1.1.1 and qemu 1.5. One of the cool
 things libvirtd / qemu are able to do now is block io throttling, or
 iotune as libvirt calls it. You can now define min, max, total
 KBytes/Sec for read, write and total, as well as for IOPS/Sec. I would
 like to have support for this in OpenNebula vm templates just like there
 is for CPU (cputune). It would be nice to be able to set a total maximum
 IOPS / bandwidth for a specific user/group, just like cpu, mem, images,
 #vm's, etc. (and have strict enforcement).

 Use cases:

 - prevent denial of service of virtual machines consuming too much IOPS
   / bandwidth (io starvation, storage link saturation)
 - be able to guarantee (minimal) disk IO performance. In combination with
 per
   user system datastores (ONE 4.4) you can choose the right disk type
   to satisfy latency needs.

 What do you think?

 Gr. Stefan

 P.s. with something like vdc-nebula in place [1], the need to restrict IO
 will be much less than in traditional io settings (very cool project
 indeed).

 [1]: http://blog.opennebula.org/?p=5408


That's really interesting, indeed.
Let's think of how this would look in OpenNebula.

The first and easiest option that comes to mind is to allow a new attribute
DISK/RAW, that would look like:

DISK = [
  IMAGE_ID = 7,
  RAW = "<iotune>
           <total_bytes_sec>1000</total_bytes_sec>
           <read_iops_sec>40</read_iops_sec>
           <write_iops_sec>10</write_iops_sec>
         </iotune>"
]

I think this is useful by itself, to allow generic customization of the
devices in the deployment file. A similar NIC/RAW attribute would be also
easy to add.

A prettier way would be to parse the elements DISK/READ_BYTES_SEC,
DISK/WRITE_BYTES_SEC, etc. These iotune attributes could be set as
restricted attributes [1], and the admin could set defaults for each Image or
each DS using the inherited attributes functionality introduced in
OpenNebula 4.4 [2].
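
To make that concrete, a hypothetical template using such attributes (none of
this is implemented yet; the attribute names simply mirror the libvirt iotune
element names) might look like:

  DISK = [
    IMAGE_ID       = 7,
    READ_BYTES_SEC = 10485760,
    WRITE_IOPS_SEC = 100
  ]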



Regarding the user and group quotas, that's something that I'd consider for
a second phase. If this is specific for kvm and we don't have an equivalent
for xen and vmware, I don't know if it would be acceptable to have an
exception hardcoded in the quotas:

<VM_QUOTA>
  <VM>
    <CPU>
    <CPU_USED>
    <MEMORY>
    <MEMORY_USED>
    ..
    <TOTAL_BYTES_SEC>
  </VM>
</VM_QUOTA>

Or if we should think of a more flexible definition of the quotas
mechanism. I'm thinking of something like this in oned.conf:

VM_QUOTA_DEFINITION = [
  VM_ATTRS = "/VM/DISK/TOTAL_BYTES_SEC /VM/DISK/READ_BYTES_SEC /VM/DISK/WRITE_BYTES_SEC",
  NAME = "DISK_IOTUNE"
]

But this may be a bit overkill.



I opened a couple of tickets [3,4] with the features that could be added in
the short term. Thanks a lot for your feedback.

Regards,
Carlos.

[1]
http://opennebula.org/documentation:rel4.4:oned_conf#restricted_attributes_configuration
[2]
http://opennebula.org/documentation:rel4.4:oned_conf#inherited_attributes_configuration
[3] http://dev.opennebula.org/issues/2530
[4] http://dev.opennebula.org/issues/2530
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula (http://twitter.com/opennebula)
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] A wishlist for open nebula 4.6?

2013-12-03 Thread Jaime Melis
Hi,

With regard to the glusterfs support, we have postponed it for a bit
because the block-based implementation of glusterfs will change in the near
future. Once it stabilizes we will continue adding support.
http://dev.opennebula.org/issues/2202

cheers,
Jaime


On Fri, Nov 29, 2013 at 6:02 PM, Shankhadeep Shome shank15...@gmail.comwrote:

 Ok cool, that would work I suppose.


 On Fri, Nov 29, 2013 at 10:22 AM, Gareth Bult gar...@linux.co.uk wrote:

 Hi Shankhadeep,

 I'm using virtio-scsi by default on all my instances (I need it for TRIM
 support) .. works fine on 4.3 and 4.4.
 All you need to do is modify /etc/one/vmm_exec/vmm_exec_kvm.conf.

 I use this;

 DISK = [ BUS=scsi, DISCARD=unmap ]
 RAW = "<devices>
          <video><model type='vga' vram='9216' heads='1'/></video>
          <controller type='scsi' index='0' model='virtio-scsi'>
            <address type='pci' domain='0x' bus='0x00' slot='0x08' function='0x0'/>
          </controller>
        </devices>"

 Then use sd as the DEV_PREFIX in the template.
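
 For example, in the VM template that would be something like (the image name
 is a placeholder):

   DISK = [ IMAGE = "my-image", DEV_PREFIX = "sd" ]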

 hth
 Gareth.
 --
 *Gareth Bult*
 “The odds of hitting your target go up dramatically when you aim at it.”



 --
 *From: *Shankhadeep Shome shank15...@gmail.com
 *To: *users users@lists.opennebula.org
 *Sent: *Friday, 29 November, 2013 2:54:05 PM
 *Subject: *[one-users] A wishlist for open nebula 4.6?


 1. GlusterFS libgfapi support. I was hoping this would be completed for
 this release; alas, I'm stuck with a FUSE mount for another release :)
 2. Support for virtio-scsi. This is the future of KVM block IO; it's not as
 fast as virtio-blk but it's getting there, and it is far more flexible. For
 example, you don't need pci-hotplug support to add another disk, a
 simple scsi bus scan will work. Also, support for faster block io access,
 like tcm-vhost, will be in place soon in most distros.

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] need to create Flows? for openvswitch-based ONE (4.2) setup -- (passed on ebtables)

2013-12-03 Thread Jaime Melis
Hi Mark,

there should be more info in the log file at /var/log/one/24.log

what does it say there?

cheers,
Jaime


On Wed, Nov 20, 2013 at 11:08 PM, Mark Biggers mbigg...@ine.com wrote:

  Hello ONE team,

 I have passed on the ebtables configuration for networking in 4.2 ONE.
 We'll need Open vSwitch anyway to manage the VMs' VLANs, so I have moved on.

 I *think* I have an almost working Open vSwitch configuration. Must I
 manually create flows for each VM/MAC address to enable IP traffic across
 the OVS bridge (vbr0), in this case?

 The info on my new (OVS networking) setup is included at the end of this
 message.  Thank you.  (The platform is still openSUSE 12.3 on a Thinkpad
 W530...)


 On 11/19/2013 05:43 AM, Jaime Melis wrote:

 Hi Mark,

  I have the feeling the NAT policies are interfering with this. Can you
 try without applying NAT rules?


 On Wed, Nov 13, 2013 at 9:08 PM, Mark Biggers mbigg...@ine.com wrote:

 The subject says it all.  I am available on IRC -- see my signature, and
 Google chat.

 I can get no networking across a bridge working, for the ONE ebtables
 model.


 === edited out


  --
  Jaime Melis
 Project Engineer
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org | jme...@opennebula.org


 Script started on Wed Nov 20 16:27:05 2013

 r...@sealion.ine.corp:one # netstat -nr
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
 0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 vbr0
 67.139.46.149   192.168.1.1     255.255.255.255 UGH       0 0          0 vbr0
 127.0.0.0       0.0.0.0         255.255.255.0   U         0 0          0 lo
 127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
 192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 vbr0

 r...@sealion.ine.corp:one # ip addr
 1: lo: LOOPBACK,UP,LOWER_UP mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state
 UP qlen 1000
 link/ether 3c:97:0e:ab:0a:de brd ff:ff:ff:ff:ff:ff
 3: wlan0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN qlen 1000
 link/ether 6c:88:14:da:0b:44 brd ff:ff:ff:ff:ff:ff
 4: ovs-system: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
 link/ether 0a:0e:fd:bb:5a:8a brd ff:ff:ff:ff:ff:ff
 7: vbr0: BROADCAST,PROMISC,UP,LOWER_UP mtu 1500 qdisc noqueue state
 UNKNOWN
 link/ether 3c:97:0e:ab:0a:de brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.250/24 scope global vbr0
 12: vnet0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast
 state UNKNOWN qlen 500
 link/ether fe:00:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
 13: vnet1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast
 state UNKNOWN qlen 500
 link/ether fe:00:0a:00:00:04 brd ff:ff:ff:ff:ff:ff

 r...@sealion.ine.corp:one # BRIDGE_DEV=vbr0
 r...@sealion.ine.corp:one # sudo ovs-ofctl dump-desc $BRIDGE_DEV
 OFPST_DESC reply (xid=0x2):
 Manufacturer: Nicira, Inc.
 Hardware: Open vSwitch
 Software: 1.11.0
 Serial Num: None
 DP Description: None

 r...@sealion.ine.corp:one # sudo ovs-vsctl show
 001119d6-32d7-4db8-8015-229b271cca6a
 Bridge vbr0
 Controller ptcp:
 fail_mode: standalone
 Port vnet0
 tag: 0
 Interface vnet0
 Port vnet1
 tag: 0
 Interface vnet1
 Port eth0
 Interface eth0
 Port vbr0
 Interface vbr0
 type: internal
 ovs_version: 1.11.0

 r...@sealion.ine.corp:one # sudo ovs-ofctl show $BRIDGE_DEV
 OFPT_FEATURES_REPLY (xid=0x2): dpid:3c970eab0ade
 n_tables:254, n_buffers:256
 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
 actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST
 SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
  1(eth0): addr:3c:97:0e:ab:0a:de
  config: 0
  state: STP_FORWARD
  current: 1GB-FD COPPER AUTO_NEG
  advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
  supported: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER
 AUTO_NEG
  speed: 1000 Mbps now, 1000 Mbps max
  2(vnet0): addr:fe:00:0a:00:00:03
  config: 0
  state: 0
  current: 10MB-FD COPPER
  speed: 10 Mbps now, 0 Mbps max
  3(vnet1): addr:fe:00:0a:00:00:04
  config: 0
  state: 0
  current: 10MB-FD COPPER
  speed: 10 Mbps now, 0 Mbps max
  LOCAL(vbr0): addr:3c:97:0e:ab:0a:de
  config: 0
  state: 0
  speed: 0 Mbps now, 0 Mbps max
 OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

 r...@sealion.ine.corp:one # sudo ovs-ofctl dump-flows $BRIDGE_DEV
 NXST_FLOW reply (xid=0x4):
  cookie=0x0, duration=8382.092s, table=0, n_packets=4, n_bytes=240,
 idle_age=8381, priority=4,in_port=2,dl_src=02:00:0a:00:00:03
 actions=NORMAL
  cookie=0x0, 

Re: [one-users] Performance with system datastore on NFS

2013-12-03 Thread Jaime Melis
Hi Daniel,

choosing the adequate CACHE option will definitely help. See:
http://opennebula.org/documentation:rel4.4:template#disks_section
http://libvirt.org/formatdomain.html#elementsDevices (look for cache)
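
For example, a DISK can set its cache mode in the VM template (the image name
and cache value are just an example; see the docs above for the valid values):

  DISK = [ IMAGE = "my-image", CACHE = "writeback" ]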

cheers,
Jaime


On Wed, Nov 20, 2013 at 4:16 PM, Daniel Dehennin 
daniel.dehen...@baby-gnu.org wrote:

 Hello,

 I just finalized the migration of our 3.8.3 ONE to 4.2 and was forced to
 change my plan of putting datastore 0 on NFSv4 because of slow access.

 Non-persistent images use the local datastore 0, but persistent
 ones use NFS (via the symlink).

 All my images are qcow2.

 I have a dedicated VLAN for storage access, and my nodes are mounting
 the datastores as:

 10.255.255.2:/one-datastores on /var/lib/one/datastores type nfs4
 (rw,relatime,vers=4,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.255.255.4,minorversion=0,fsc,local_lock=none,addr=10.255.255.2,_netdev)

 I remember seeing a document about different cache scenarios and
 performance but I can not remember where.

 My setup is a three-node ONE:

  - one not very powerful frontend
  - two quite powerful nodes to run VMs (Core i7 + 16 GB RAM)

 My storage is what they call “workgroup NAS” with dual gigabit nics
 configured in bonding.

 I'm wondering about using RAID1+0 instead of the RAID5 to improve disk
  access performance and lower CPU usage.

 Any hints or idea?

 Regards.

 --
 Daniel Dehennin
  Retrieve my GPG key:
 gpg --keyserver pgp.mit.edu --recv-keys 0x7A6FE2DF

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] 4.2 - snapshot creation hang.

2013-12-03 Thread Jaime Melis
Hi,

could you be running out of space in the node?

cheers,
jaime


On Fri, Nov 22, 2013 at 2:57 PM, Giancarlo gdefilip...@ltbl.it wrote:

  Yes, nothing special in oned.log;
 the VM is in SNAPSHOT state in Sunstone, and via virsh it is in paused state.


 Il 22/11/2013 14:30, Olivier Sallou ha scritto:


 On 11/22/2013 02:13 PM, Giancarlo wrote:

 Thanks.
 This is my libvirt version:
 2013-11-22 13:11:33.511+0000: 18442: info : libvirt version: 0.10.2,
 package: 18.el6_4.15 (CentOS BuildSystem http://bugs.centos.org,
 2013-11-13-10:44:00, c6b8.bsys.dev.centos.org)

  Can't remember required version, but yours looks not so old. Did you
 check at oned.log and vm logs?


 Il 22/11/2013 14:08, Olivier Sallou ha scritto:

 You should look at your oned.log but it might be that your libvirtd
 version is not recent enough.

 The snapshot requires a recent (don't remember version) libvirtd. If too
 old it fails and remains stuck (I faced the case)

 Olivier

 On 11/22/2013 10:48 AM, Giancarlo wrote:

 Hi all...
 this is the VM template:
   SNAPSHOT
     SNAPSHOT_ID 0
     NAME Testuno
     TIME 1385109979
     HYPERVISOR_ID
     ACTIVE YES
   AUTOMATIC_REQUIREMENTS CLUSTER_ID = 100
   GRAPHICS
     LISTEN 0.0.0.0
     TYPE VNC
     PASSWD pippo
     PORT 5901
   MEMORY 512
   DISK
     TM_MAD shared
     TARGET hda
     SAVE YES
     PERSISTENT YES
     DRIVER qcow2
     READONLY NO
     IMAGE_ID 2
     DEV_PREFIX hd
     CLONE NO
     DATASTORE default
     CLUSTER_ID 100
     TYPE FILE
     DISK_ID 0
     DATASTORE_ID 1
     SOURCE /var/lib/one/datastores/1/4c528e7310f5bc47d8b43af453fc4d78
     IMAGE CentOS-6.4_x86_64
   CPU 1
   VMID 7
   CONTEXT
     TARGET hdb
     SSH_PUBLIC_KEY
     ETH0_NETWORK xxx
     ETH0_MASK 255.255.255.224
     NETWORK YES
     ETH0_GATEWAY x
     DISK_ID 1
     ETH0_IP
     ETH0_DNS 8.8.8.8
   TEMPLATE_ID 1
   OS
     ARCH x86_64
   NIC
     NETWORK_ID 0
     IP xx
     VLAN NO
     BRIDGE vbr0
     NIC_ID 0
     CLUSTER_ID 100
     NETWORK External LAN
     MAC 02:00:b2:21:43:e1o
     IP6_LINK

 Hypervisor is KVM on centos 6.4.
 When I select Take a snapshot, the VM remains in a paused state and the
 snapshot creation hangs.
 Can someone help me?
 Thanks...



 ___
 Users mailing 
 listUsers@lists.opennebula.orghttp://lists.opennebula.org/listinfo.cgi/users-opennebula.org


 --
 Olivier Sallou
 IRISA / University of Rennes 1
 Campus de Beaulieu, 35000 RENNES - FRANCE
 Tel: 02.99.84.71.95

 gpg key id: 4096R/326D8438  (keyring.debian.org)
 Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438




 --
 Olivier Sallou
 IRISA / University of Rennes 1
 Campus de Beaulieu, 35000 RENNES - FRANCE
 Tel: 02.99.84.71.95

 gpg key id: 4096R/326D8438  (keyring.debian.org)
 Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438




 ___
 Users mailing 
 listUsers@lists.opennebula.orghttp://lists.opennebula.org/listinfo.cgi/users-opennebula.org



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How do i know the cluster's datastore's capacity?

2013-12-03 Thread Carlos Martín Sánchez
Hi there,

On Tue, Nov 26, 2013 at 9:48 AM, 曹海峰 ca...@wedogame.com wrote:

 Thanks for the reply!
  But if I use a shared folder, such as /var/lib/datastore/1/ on host A, and
  mount it on all hosts in a cluster, the whole storage capacity is very
  small; OpenNebula can't use the other hosts' disk space?


On Tue, Nov 26, 2013 at 9:53 AM, Kenneth kenn...@apolloglobal.net wrote:

 You can of course. But it is up to you to monitor.

Not in OpenNebula 4.4 :)
Local (ssh) system datastores are also monitored, but the storage is
reported for each host: in the onehost show output, or selecting the host
in Sunstone.

Best regards.
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula (http://twitter.com/opennebula)


On Tue, Nov 26, 2013 at 9:53 AM, Kenneth kenn...@apolloglobal.net wrote:

  You can of course. But it is up to you to monitor. And using storage on
 different hosts will cause the VMs to be copied via scp which is pretty
 slow.

  Another way to have large storage is to remove the other drives on the nodes
 (or use small drives in them) and then plug big-capacity drives into the main
 NFS node. I mean, put everything on host A and let the others have a small
 hard drive, just enough for the OS itself.

 ---

 Thanks,
 Kenneth
 Apollo Global Corp.

  On 11/26/2013 04:48 PM, 曹海峰 wrote:

 Thanks for the reply!
 But if I use a shared folder, such as /var/lib/datastore/1/ on host A, and
 mount it on all hosts in a cluster, the whole storage capacity is very
 small; OpenNebula can't use the other hosts' disk space?


 --
   Best Wishes!
  Dennis

  *From:* users-boun...@lists.opennebula.org
 *Date:* 2013-11-26 16:27
 *To:* users@lists.opennebula.org
 *Subject:* Re: [one-users] How do i know the cluster's datastore's
 capacity?

 That's because you are not using a shared folder for all hosts. A better
 way to do it is to make the /var/lib/datastore/1/ folder on the Sunstone
 host an NFS export and then mount it on all other hosts at the same location
 /var/lib/datastore/1/. This will enable you to do live migrations and very
 fast deployment of VMs.
 ---

 Thanks,
 Kenneth
 Apollo Global Corp.

  On 11/26/2013 04:17 PM, caohf wrote:

 Dear all:
 How do I get the whole capacity of a cluster's datastores?
 I have two hosts in a cluster, and every host has 100 GB of space for
 /var/lib/datastore.
 In Sunstone I find that the cluster capacity only displays the capacity of
 the host where Sunstone is deployed.

 --
   Best Wishes!
  Dennis

 ___
 Users mailing 
 listUsers@lists.opennebula.orghttp://lists.opennebula.org/listinfo.cgi/users-opennebula.org


 ___
 Users mailing 
 listUsers@lists.opennebula.orghttp://lists.opennebula.org/listinfo.cgi/users-opennebula.org


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] how to reset VM ID???

2013-12-03 Thread Jaime Melis
Hi,

 should I open all port?? from 5900 to 65535??

Yes, this shouldn't be a problem. With iptables you can open a range and
allow only connections from the frontend.
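
A sketch with iptables (192.0.2.10 stands for your frontend's IP):

  iptables -A INPUT -p tcp -s 192.0.2.10 --dport 5900:65535 -j ACCEPT
  iptables -A INPUT -p tcp --dport 5900:65535 -j DROP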

cheers,
Jaime


On Wed, Nov 13, 2013 at 2:22 AM, 염재근 jaekeun0...@gmail.com wrote:

 should I open all port?? from 5900 to 65535??




 2013/11/12 Javier Fontan jfon...@opennebula.org

 There's no way to reset the ID, and we discourage it. The code to
 generate the VNC port is

 --8<--
 int limit = 65535;
 oss << ( base_port + ( oid % (limit - base_port) ));
 --8<--

 base_port is 5900 by default. You should not have problems with the
 port as it goes back to 5900 after reaching 65535.
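
 As a worked example with those defaults: a VM with ID 10400 gets port
 5900 + (10400 % (65535 - 5900)) = 5900 + 10400 = 16300.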



 On Tue, Nov 12, 2013 at 9:35 AM, 염재근 jaekeun0...@gmail.com wrote:
  Dear All,

  How do I reset the VM ID?

  I have already exceeded 10400.

  Actually, I want to control the VNC port.
  I saw in the VM's qemu log file that the VNC port is the same as the VM
  ID.

  If I specify the VNC port in the OpenNebula template, I can't make 2 or
  more VMs with the same template because of VNC port duplication.
  thanks :D
 
  ___
  Users mailing list
  Users@lists.opennebula.org
  http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
 



 --
 Javier Fontán Muiños
 Developer
 OpenNebula - The Open Source Toolkit for Data Center Virtualization
 www.OpenNebula.org | @OpenNebula | github.com/jfontan



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] could not open disk image : Permission denied

2013-12-03 Thread Jaime Melis
Hi Geeta,

did it work after Javier's and Mario's suggestions?

cheers,
Jaime


On Tue, Nov 19, 2013 at 12:30 PM, Javier Fontan jfon...@opennebula.orgwrote:

 The images of the VM have mixed permissions (oneadmin and root).

   * Make sure that oned is running as oneadmin, not root.
   * qemu.conf is correct. Leave it like this and reboot kpkvm1. It may
 be that, even though the configuration is OK, the libvirt/kvm modules are
 still trying to run as root and cannot use files owned by the oneadmin
 user.

 Cheers



 On Tue, Nov 19, 2013 at 9:59 AM, Geeta Gulabani
 geeta.gulab...@kpoint.com wrote:
  Hi,
 
  My Environment :
 
  Frontend and host : centos 6.4
  Opennebula version : 4.2
 
  Tue Nov 19 13:58:16 2013 [VMM][I]: Command execution fail: cat  EOT |
  /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/0/58/deployment.0'
  'kpkvm1.gslab.com' 58 kpkvm1.gslab.com
  Tue Nov 19 13:58:16 2013 [VMM][I]: error: Failed to create domain from
  /var/lib/one//datastores/0/58/deployment.0
  Tue Nov 19 13:58:16 2013 [VMM][I]: error: internal error process exited
  while connecting to monitor: qemu-kvm: -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
  Tue Nov 19 13:58:16 2013 [VMM][I]:
  Tue Nov 19 13:58:16 2013 [VMM][E]: Could not create domain from
  /var/lib/one//datastores/0/58/deployment.0
  Tue Nov 19 13:58:16 2013 [VMM][I]: ExitCode: 255
  Tue Nov 19 13:58:16 2013 [VMM][I]: Failed to execute virtualization
 driver
  operation: deploy.
  Tue Nov 19 13:58:16 2013 [VMM][E]: Error deploying virtual machine: Could
  not create domain from /var/lib/one//datastores/0/58/deployment.0
  Tue Nov 19 13:58:16 2013 [DiM][I]: New VM state is FAILED
 
  Here are some other details :
 
  [root@kpkvm1 56]# grep -vE '^($|#)' /etc/libvirt/qemu.conf
  user  = oneadmin
  group = oneadmin
  dynamic_ownership = 0
 
 
 
  [root@kpkvm1 58]# grep -vE '^($|#)' /etc/libvirt/libvirtd.conf
  [root@kpkvm1 58]#
 
 
  [root@kpkvm1 58]# cat
 
 /etc/polkit-1/localauthority/50-local.d/50-org.libvirt.unix.manage-opennebula.pkla
  [Allow oneadmin user to manage virtual machines]
  Identity=unix-user:oneadmin
  Action=org.libvirt.unix.manage
  #Action=org.libvirt.unix.monitor
  ResultAny=yes
  ResultInactive=yes
  ResultActive=yes
 
 
 
 
 
 
  [root@kpkvm1 58]# id oneadmin
  uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin)
 
 
  [root@kpkvm1 58]# cat /etc/libvirt/qemu.conf
  user  = oneadmin
  group = oneadmin
  dynamic_ownership = 0
 
 
  I tried changing dynamic_ownership to 1 , started libvirtd and oned and
  again tried creating the vm
  sudo -u oneadmin virsh -c qemu:///system create deployment.0
  it gives the same error
  error: Failed to create domain from deployment.0
  error: internal error process exited while connecting to monitor:
 qemu-kvm:
  -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
 
 
 
  Here are the libvirtd logs :
 
  2013-11-19 08:52:21.246+: 8915: info : libvirt version: 0.10.2,
 package:
  18.el6_4.14 (CentOS BuildSystem http://bugs.centos.org,
  2013-09-19-19:15:27, c6b8.bsys.dev.centos.org)
  2013-11-19 08:52:21.246+: 8915: error : qemuMonitorOpenUnix:293 :
 failed
  to connect to monitor socket: No such process
  2013-11-19 08:52:21.246+: 8915: error :
 qemuProcessWaitForMonitor:1768 :
  internal error process exited while connecting to monitor: qemu-kvm:
 -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
 
 
  [root@kpkvm1 58]# ls -lrt
  total 6171016
  -rw-r--r-- 1 oneadmin oneadmin 6318739456 Nov 19 13:58 disk.0
  -rw-r--r-- 1 root root 372736 Nov 19 13:58 disk.1
  lrwxrwxrwx 1 root root 35 Nov 19 13:58 disk.1.iso -
  /var/lib/one/datastores/0/58/disk.1
  -rw-r--r-- 1 root root912 Nov 19 13:58 deployment.0
  [root@kpkvm1 58]# sudo -u oneadmin virsh -c qemu:///system create
  deployment.0
  error: Failed to create domain from deployment.0
  error: internal error process exited while connecting to monitor:
 qemu-kvm:
  -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
 
 
 
  Can anybody help?
 
 
  Regards,
  Geeta.
 
 
 
 
 
 
  ___
  Users mailing list
  Users@lists.opennebula.org
  http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
 



 --
 Javier Fontán Muiños
 Developer
 OpenNebula - The Open Source Toolkit for Data Center Virtualization
 www.OpenNebula.org | @OpenNebula | github.com/jfontan
 ___
 Users 

[one-users] NFS datastore file system

2013-12-03 Thread Neelaya Dhatchayani
Hi

Can anyone tell me what has to be done on the frontend and hosts in order to
use the shared transfer driver with NFS?

Thanks in advance
neelaya
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] NFS datastore file system

2013-12-03 Thread Jaime Melis
Hi Neelaya,

The frontend and the nodes must share /var/lib/one/datastores. Any node can
export this share, preferably a NAS system, but if you don't have one, you
can export it from the frontend.

cheers,
Jaime


On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani
neels.v...@gmail.comwrote:

 Hi

 Can anyone tell me what has to be done on the frontend and hosts inorder
 to use shared transfer driver and with respect to NFS.

 Thanks in advance
 neelaya

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] reg oneflow - service template

2013-12-03 Thread Carlos Martín Sánchez
Hi,

On Mon, Dec 2, 2013 at 4:51 AM, Rajendar K k.rajen...@gmail.com wrote:

 Hi Carlos,
   I have updated the files [oneflow-templates and service]
 as per your instructions. Hereby specify the output of oneflow

 = Elasticty conditions

  "min_vms": 1,
  "max_vms": 3,
  "cooldown": 60,    <= At what period is this parameter being employed?
  "elasticity_policies": [
    {
      "type": "CHANGE",
      "adjust": 1,
      "expression": "CPU > 60",
      "period": 3,
      "period_number": 30,
      "cooldown": 30
    }
  ],
  "scheduled_policies": [

  ]
 }


  Kindly provide detail on how the auto-scaling happens in the above sample,
 with regard to period_number, period and cooldown.


The period_number, period and cooldown attributes are explained in detail
in the documentation [1].
In your example, the expression CPU > 60 must be true 30 times, evaluated
every 3 seconds.

After the scaling, your service will be in the cooldown period for 30
seconds before returning to running. The only defined policy is overriding
the default cooldown of 60 that you set.
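
In other words, with period = 3 and period_number = 30, the expression has to
hold for roughly 30 x 3 = 90 seconds of consecutive evaluations before the
role scales, and then the 30-second cooldown applies before any further
scaling is evaluated.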


In a previous email you mentioned:

On Fri, Nov 29, 2013 at 5:19 AM, Rajendar K k.rajen...@gmail.com wrote:

 (ii) Scheduled policies
  - For my trial i have specified
  start time - 2013-11-29 10:30:30
   Is my time format correct? I am unable to scale up/down
  using this time format.


But the output you provided does not have any scheduled_policies. Was that
another different template?

Regards.

[1] http://opennebula.org/documentation:rel4.4:appflow_elasticity
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula (http://twitter.com/opennebula)



On Mon, Dec 2, 2013 at 4:51 AM, Rajendar K k.rajen...@gmail.com wrote:

 Hi Carlos,
   I have updated the files [oneflow-templates and service]
 as per your instructions. Hereby specify the output of oneflow


 root@:/srv/cloud/one# oneflow show 179
 SERVICE 179
 INFORMATION
 ID  : 179
 NAME: Sampletest
 USER: root
 GROUP   : oneadmin
 STRATEGY: straight
 SERVICE STATE   : RUNNING
 SHUTDOWN: shutdown

 PERMISSIONS

 OWNER   : um-
 GROUP   : ---
 OTHER   : ---

 ROLE role1
 ROLE STATE  : RUNNING
 VM TEMPLATE : 590
 CARNIDALITY : 3
 MIN VMS : 1
 MAX VMS : 3
 COOLDOWN: 60s
 SHUTDOWN: shutdown
 NODES INFORMATION
  VM_ID NAMESTAT UCPUUMEM
 HOST   TIME
568 role1_0_(service_179)   runn0   1024M 10.1.26.32 0d
 01h24
569 role1_1_(service_179)   runn0   1024M 10.1.26.31 0d
 01h07
570 role1_2_(service_179)   runn0   1024M 10.1.26.32 0d
 00h51

 ELASTICITY RULES

 ADJUST   EXPRESSION   EVALS
 PERIOD  COOL
  + 1  CPU[0.0] > 60   30 / 3s   30s


 LOG
 MESSAGES
 12/02/13 10:06 [I] New state: DEPLOYING
 12/02/13 10:07 [I] New state: RUNNING
 12/02/13 10:22 [I] Role role1 scaling up from 1 to 2 nodes
 12/02/13 10:22 [I] New state: SCALING
 12/02/13 10:23 [I] New state: COOLDOWN
 12/02/13 10:23 [I] New state: RUNNING
 12/02/13 10:38 [I] Role role1 scaling up from 2 to 3 nodes
 12/02/13 10:38 [I] New state: SCALING
 12/02/13 10:39 [I] New state: COOLDOWN
 12/02/13 10:39 [I] New state: RUNNING

 = Elasticty conditions

  "min_vms": 1,
  "max_vms": 3,
  "cooldown": 60,    <= At what period is this parameter being employed?
  "elasticity_policies": [
    {
      "type": "CHANGE",
      "adjust": 1,
      "expression": "CPU > 60",
      "period": 3,
      "period_number": 30,
      "cooldown": 30
    }
  ],
  "scheduled_policies": [

  ]
 }


  Kindly provide detail on how the auto-scaling happens in the above sample,
 with regard to period_number, period and cooldown.

 with regards
 Raj


 Raj,

 Believe Yourself...


 On Sat, Nov 30, 2013 at 2:19 AM, Carlos Martín Sánchez 
 cmar...@opennebula.org wrote:

 Hi,

 On Fri, Nov 29, 2013 at 5:19 AM, Rajendar K k.rajen...@gmail.com wrote:

 Hi All,
    I am using OpenNebula 4.2; I have the following
  queries related to the auto-scaling features:

  (i) Even when the conditions are met, scaling up takes a long time to
  trigger. For my trial, I have used #periods 5, #period 30, cooldown 30.

  - I can see the values 5/5 in the log, and then it
  crosses 16/5 over time.
  - I am unable to trace down when it actually triggers;
  I hope it should trigger when it reaches 5/5?


 This is a known bug, the period and period 

Re: [one-users] NFS datastore file system

2013-12-03 Thread Jaime Melis
Hi,

Please reply to the mailing list as well.

Yes. It is a basic requirement that all the nodes (frontend + hypervisors)
should have a oneadmin account, and they should be able to ssh
passwordlessly from any node to any other node.
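
A common sketch for setting this up (hostnames are placeholders; many setups
simply share oneadmin's home, including ~/.ssh, over NFS instead):

  # as oneadmin on the frontend
  ssh-keygen -t rsa           # if no key exists yet
  ssh-copy-id oneadmin@node01
  ssh-copy-id oneadmin@node02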

cheers,
Jaime



On Tue, Dec 3, 2013 at 12:39 PM, Neelaya Dhatchayani
neels.v...@gmail.comwrote:

 Hi Jaime,

 Thanks a lot for your reply. I have one more doubt. Do I have to set up
 passwordless ssh to the frontend if I am using the ssh transfer manager? I
 know that it has to be done for the hosts.

 neelaya




 On Tue, Dec 3, 2013 at 4:51 PM, Jaime Melis jme...@c12g.com wrote:

 Hi Neelaya,

  the frontend and the nodes must share /var/lib/one/datastores. Any node
  can export this share, preferably a NAS system, but if you don't have one,
  you can export it from the frontend.

 cheers,
 Jaime


 On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani 
 neels.v...@gmail.com wrote:

 Hi

 Can anyone tell me what has to be done on the frontend and hosts inorder
 to use shared transfer driver and with respect to NFS.

 Thanks in advance
 neelaya

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




 --
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com






-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] could not open disk image : Permission denied

2013-12-03 Thread Jaime Melis
Hi,

Please reply to the mailing list as well.

If you keep those settings you will probably have problems with snapshots.

This is probably due to the NFS configuration.

cheers,
Jaime


On Tue, Dec 3, 2013 at 12:27 PM, Geeta Gulabani
geeta.gulab...@kpoint.comwrote:

 Hi Jaime,

 As per Javier's suggestions I tried a reboot but it didn't work.
 But I changed the permissions in /etc/libvirt/libvirtd.conf from oneadmin
 to root and it worked:


 user  = root
 group = root
 dynamic_ownership = 0


 -- Geeta


 On Tue, Dec 3, 2013 at 4:42 PM, Jaime Melis jme...@c12g.com wrote:

 Hi Geeta,

  did it work after Javier's and Mario's suggestions?

 cheers,
 Jaime


 On Tue, Nov 19, 2013 at 12:30 PM, Javier Fontan 
 jfon...@opennebula.orgwrote:

 The images of the VM have mixed permissions (oneadmin and root).

   * Make sure that oned is running as oneadmin, not root.
    * qemu.conf is correct. Leave it like this and reboot kpkvm1. It may
  be that, even though the configuration is OK, the libvirt/kvm modules are
  still trying to run as root and cannot use files owned by the oneadmin
  user.

 Cheers



 On Tue, Nov 19, 2013 at 9:59 AM, Geeta Gulabani
 geeta.gulab...@kpoint.com wrote:
  Hi,
 
  My Environment :
 
  Frontend and host : centos 6.4
  Opennebula version : 4.2
 
  Tue Nov 19 13:58:16 2013 [VMM][I]: Command execution fail: cat  EOT |
  /var/tmp/one/vmm/kvm/deploy
 '/var/lib/one//datastores/0/58/deployment.0'
  'kpkvm1.gslab.com' 58 kpkvm1.gslab.com
  Tue Nov 19 13:58:16 2013 [VMM][I]: error: Failed to create domain from
  /var/lib/one//datastores/0/58/deployment.0
  Tue Nov 19 13:58:16 2013 [VMM][I]: error: internal error process exited
  while connecting to monitor: qemu-kvm: -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
  Tue Nov 19 13:58:16 2013 [VMM][I]:
  Tue Nov 19 13:58:16 2013 [VMM][E]: Could not create domain from
  /var/lib/one//datastores/0/58/deployment.0
  Tue Nov 19 13:58:16 2013 [VMM][I]: ExitCode: 255
  Tue Nov 19 13:58:16 2013 [VMM][I]: Failed to execute virtualization
 driver
  operation: deploy.
  Tue Nov 19 13:58:16 2013 [VMM][E]: Error deploying virtual machine:
 Could
  not create domain from /var/lib/one//datastores/0/58/deployment.0
  Tue Nov 19 13:58:16 2013 [DiM][I]: New VM state is FAILED
 
  Here are some other details :
 
  [root@kpkvm1 56]# grep -vE '^($|#)' /etc/libvirt/qemu.conf
  user  = oneadmin
  group = oneadmin
  dynamic_ownership = 0
 
 
 
  [root@kpkvm1 58]# grep -vE '^($|#)' /etc/libvirt/libvirtd.conf
  [root@kpkvm1 58]#
 
 
  [root@kpkvm1 58]# cat
 
 /etc/polkit-1/localauthority/50-local.d/50-org.libvirt.unix.manage-opennebula.pkla
  [Allow oneadmin user to manage virtual machines]
  Identity=unix-user:oneadmin
  Action=org.libvirt.unix.manage
  #Action=org.libvirt.unix.monitor
  ResultAny=yes
  ResultInactive=yes
  ResultActive=yes
 
 
 
 
 
 
  [root@kpkvm1 58]# id oneadmin
  uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin)
 
 
  [root@kpkvm1 58]# cat /etc/libvirt/qemu.conf
  user  = oneadmin
  group = oneadmin
  dynamic_ownership = 0
 
 
  I tried changing dynamic_ownership to 1 , started libvirtd and oned and
  again tried creating the vm
  sudo -u oneadmin virsh -c qemu:///system create deployment.0
  it gives the same error
  error: Failed to create domain from deployment.0
  error: internal error process exited while connecting to monitor:
 qemu-kvm:
  -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
 
 
 
  Here are the libvirtd logs :
 
  2013-11-19 08:52:21.246+: 8915: info : libvirt version: 0.10.2,
 package:
  18.el6_4.14 (CentOS BuildSystem http://bugs.centos.org,
  2013-09-19-19:15:27, c6b8.bsys.dev.centos.org)
  2013-11-19 08:52:21.246+: 8915: error : qemuMonitorOpenUnix:293 :
 failed
  to connect to monitor socket: No such process
  2013-11-19 08:52:21.246+: 8915: error :
 qemuProcessWaitForMonitor:1768 :
  internal error process exited while connecting to monitor: qemu-kvm:
 -drive
 
 file=/var/lib/one//datastores/0/58/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
  could not open disk image /var/lib/one//datastores/0/58/disk.0:
 Permission
  denied
 
 
  [root@kpkvm1 58]# ls -lrt
  total 6171016
  -rw-r--r-- 1 oneadmin oneadmin 6318739456 Nov 19 13:58 disk.0
  -rw-r--r-- 1 root root 372736 Nov 19 13:58 disk.1
  lrwxrwxrwx 1 root root 35 Nov 19 13:58 disk.1.iso -
  /var/lib/one/datastores/0/58/disk.1
  -rw-r--r-- 1 root root912 Nov 19 13:58 deployment.0
  [root@kpkvm1 58]# sudo -u oneadmin virsh -c qemu:///system create
  deployment.0
  error: Failed to create domain from deployment.0
  error: internal error process exited while connecting to monitor:
 qemu-kvm:
  -drive
 
 

Re: [one-users] incorrect owner of files in one-context_4.4.0.deb

2013-12-03 Thread Jaime Melis
Hi Rolandas,

I've uploaded a new version and updated the documentation.

http://dev.opennebula.org/attachments/download/750/one-context_4.4.0.deb
http://dev.opennebula.org/issues/2532

Thanks

Jaime


On Tue, Dec 3, 2013 at 8:41 AM, Rolandas Naujikas 
rolandas.nauji...@mif.vu.lt wrote:

 Hi,

 Incorrect owner (uid=1000, gid=1000) of files in
 http://dev.opennebula.org/attachments/download/746/one-context_4.4.0.deb
 Should be root:root.

 Regards, Rolandas
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] how to turn off xml_rpc log?

2013-12-03 Thread Carlos Martín Sánchez
Hi,

It can't be turned off, but we'll change that for the next versions [1].

Regards.

[1] http://dev.opennebula.org/issues/2533

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


On Mon, Dec 2, 2013 at 5:18 PM, Liu, Guang Jun (Gene) 
gene@alcatel-lucent.com wrote:

 The log file name is actually one_xmlrpc.log.

 Thanks,

 Gene Liu

 On Mon 02 Dec 2013 09:48:32 AM EST, Liu, Guang Jun (Gene) wrote:
  Hi everyone,
 
  I noticed there are lots of log in xml_rpc.log file, and the content of
  the log is not useful at all. Do you know how to turn off (or configure)
  the xml_rpc log?
 
  Thanks,
 
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Failure when deploying OneFlow Service

2013-12-03 Thread Carlos Martín Sánchez
Hi Bill,

Can you confirm that you can instantiate the template directly, instead of
launching the service?
I suspect it may have to do with the Role name, which one did you set?

Cheers

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


On Mon, Dec 2, 2013 at 10:49 PM, Campbell, Bill 
bcampb...@axcess-financial.com wrote:

 Been seeing this issue since RC and now in final 4.4, but was wondering if
 anybody has run across this prior to submitting a bug report.

 Anytime I deploy a service, I get the following:

 *Mon Dec 2 16:44:03 2013 [ReM][E]: Req:864 UID:2 TemplateInstantiate
 result FAILURE Parse error: syntax error, unexpected $end, expecting EQUAL
 or EQUAL_EMPTY at line 2, columns 33:41*


 Has anybody run across this problem before?  Seems to happen whether it's
 a MySQL or SQLite database, and with services that have single roles or
 multiple roles.


 I've cleared my browser cache as well, just to be certain.



 Any ideas?

  --

 *Bill Campbell*
 Infrastructure Architect

   Axcess Financial Services, Inc.
 7755 Montgomery Rd., Suite 400
 Cincinnati, OH  45236


 *NOTICE: Protect the information in this message in accordance with the
 company's security policies. If you received this message in error,
 immediately notify the sender and destroy all copies.*


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] qcow2: empty datablock image creation with preallocation

2013-12-03 Thread Jaime Melis
Hi Lorenzo,

not out of the box. You will need to manually modify the DS drivers. You
will need to modify:
https://github.com/OpenNebula/one/blob/release-4.4/src/datastore_mad/remotes/fs/mkfs

And replace the 'dd' command with your own.

I would probably do something like: https://gist.github.com/jmelis/7768171
(I haven't tested it, so it probably has bugs. You also need to write the
actual command)
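For reference, the replacement would look roughly like this inside that
script (untested; the variable names are illustrative, not the ones the
driver actually uses):

if [ "$FSTYPE" = "qcow2" ]; then
    qemu-img create -f qcow2 -o preallocation=metadata "$DST" "${SIZE}M"
else
    # keep something close to the original dd behaviour for other formats
    dd if=/dev/zero of="$DST" bs=1 count=1 seek="${SIZE}M"
fi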

Let me know if you need any further advice.

Cheers,
Jaime


On Mon, Dec 2, 2013 at 6:29 PM, Lorenzo Faleschini 
lorenzo.falesch...@nordestsystems.com wrote:

 Hi

 is it possible to create in sunstone (or via CLI) an empty-datablock qcow2
 with preallocation (-o preallocation=metadata).

 I need it to avoid the auto-grow overhead.

 for now I create the prealloc qcow2 image then I import it in one.

 is it possible to avoid this step?
 any suggestions or alternatives?

 thanks

 Lorenzo
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] empty qcow2 datablock image with preallocation

2013-12-03 Thread Jaime Melis
Hi Lorenzo,

I just realized you sent the same email a couple of times to the mailing
list. It might have been an accident, but if it wasn't I'd like to kindly
point out that this is a best effort mailing list, sometimes getting an
answer takes some time.

Please read this document for faster support alternatives:
http://opennebula.org/support:support

cheers,
Jaime



On Tue, Dec 3, 2013 at 10:51 AM, Lorenzo Faleschini 
lorenzo.falesch...@nordestsystems.com wrote:

  Hi

 is it possible to create in sunstone (or via CLI) an empty-datablock qcow2
 with preallocation (-o preallocation=metadata).

 I need it to avoid the auto-grow overhead.

 for now I create the prealloc qcow2 image then I import it in one.

 is it possible to avoid this step?
 any suggestions or alternatives?

 thanks

 I'm using 4.0 now.. is this possible in 4.4?

 Lorenzo


 --
 Lorenzo Faleschini
 Responsabile Sistemi Informativi

 skype: falegalizeit
 mobile: +39 335 6055225
 tel: +39 0427 807934
 fax: +39 0434 1820145

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] empty qcow2 datablock image with preallocation

2013-12-03 Thread Lorenzo Faleschini

Sorry about that Jaime, and sorry to all the other users too.

I clearly understand how mailing lists work, but I received 2 bounce-back 
error mails from the mailer daemon when sending the mail to the 
list, so I supposed it was not sent correctly and sent it over again 
(the 3rd time I didn't receive any error).


I'm going to double-check this with my provider.

I didn't mean to be overbearing. Sorry again.

cheers
Lorenzo

Il 03/12/2013 13:12, Jaime Melis ha scritto:

Hi Lorenzo,

I just realized you sent the same email a couple of times to the 
mailing list. It might have been an accident, but if it wasn't I'd 
like to kindly point out that this is a best effort mailing list, 
sometimes getting an answer takes some time.


Please read this document for faster support alternatives:
http://opennebula.org/support:support

cheers,
Jaime



On Tue, Dec 3, 2013 at 10:51 AM, Lorenzo Faleschini 
lorenzo.falesch...@nordestsystems.com 
mailto:lorenzo.falesch...@nordestsystems.com wrote:


Hi

is it possible to create in sunstone (or via CLI) an
empty-datablock qcow2 with preallocation (-o preallocation=metadata).

I need it to avoid the auto-grow overhead.

for now I create the prealloc qcow2 image then I import it in one.

is it posible to avoid this step?
any suggestions or alternatives?

thanks

I'm using 4.0 now.. is this possible in 4.4?

Lorenzo


-- 
Lorenzo Faleschini

Responsabile Sistemi Informativi

skype: falegalizeit
mobile: +39 335 6055225 tel:%2B39%20335%206055225
tel: +39 0427 807934 tel:%2B39%200427%20807934
fax: +39 0434 1820145 tel:%2B39%200434%201820145

___
Users mailing list
Users@lists.opennebula.org mailto:Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




--
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com mailto:jme...@c12g.com




--
Lorenzo Faleschini
Responsabile Sistemi Informativi

skype: falegalizeit
mobile: +39 335 6055225
tel: +39 0427 807934
fax: +39 0434 1820145
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Failure when deploying OneFlow Service

2013-12-03 Thread Campbell, Bill
Carlos, 
Good call on the name: I was using "Load Balancer". Switched to "Load_Balancer" 
and it is deploying appropriately. 

Thanks! 
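For reference, a role name without spaces in a minimal service template
looks like this (the field values here are illustrative):

cat > lb_service.json <<'EOF'
{
  "name": "my_service",
  "deployment": "straight",
  "roles": [
    { "name": "Load_Balancer", "cardinality": 1, "vm_template": 0 }
  ]
}
EOF
oneflow-template create lb_service.json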

- Original Message -

From: Carlos Martín Sánchez cmar...@opennebula.org 
To: Bill Campbell bcampb...@axcess-financial.com 
Cc: Users OpenNebula users@lists.opennebula.org 
Sent: Tuesday, December 3, 2013 7:06:38 AM 
Subject: Re: [one-users] Failure when deploying OneFlow Service 

Hi Bill, 

Can you confirm that you can instantiate the template directly, instead of 
launching the service? 
I suspect it may have to do with the Role name, which one did you set? 

Cheers 

-- 
Carlos Martín, MSc 
Project Engineer 
OpenNebula - Flexible Enterprise Cloud Made Simple 
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula 


On Mon, Dec 2, 2013 at 10:49 PM, Campbell, Bill  
bcampb...@axcess-financial.com  wrote: 



Been seeing this issue since RC and now in final 4.4, but was wondering if 
anybody has run across this before prior to submitting a bug report. 

Anytime I deploy a service, I get the following: 



Mon Dec 2 16:44:03 2013 [ReM][E]: Req:864 UID:2 TemplateInstantiate result 
FAILURE Parse error: syntax error, unexpected $end, expecting EQUAL or 
EQUAL_EMPTY at line 2, columns 33:41 




Has anybody run across this problem before? Seems to happen whether it's a 
MySQL or SQLite database, and with services that have single roles or multiple 
roles. 




I've cleared my browser cache as well, just to be certain. 







Any ideas? 




Bill Campbell 
Infrastructure Architect 




Axcess Financial Services, Inc. 
7755 Montgomery Rd., Suite 400 
Cincinnati, OH 45236 


NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies. 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] qcow2: empty datablock image creation with preallocation

2013-12-03 Thread Liu, Guang Jun (Gene)
Thank you all for your answer and information!

Gene
On 13-12-03 07:09 AM, Jaime Melis wrote:
 Hi Lorenzo,

 not out of the box. You will need to manually modify the DS drivers.
 You will need to modify:
 https://github.com/OpenNebula/one/blob/release-4.4/src/datastore_mad/remotes/fs/mkfs

 And replace the 'dd' command with your own.

 I would probably do something like: https://gist.github.com/jmelis/7768171
 (I haven't tested it, so it probably has bugs. You also need to write
 the actual command)

 Let me know if you need any further advice.

 Cheers,
 Jaime


 On Mon, Dec 2, 2013 at 6:29 PM, Lorenzo Faleschini
 lorenzo.falesch...@nordestsystems.com
 mailto:lorenzo.falesch...@nordestsystems.com wrote:

 Hi

 is it possible to create in sunstone (or via CLI) an
 empty-datablock qcow2 with preallocation (-o preallocation=metadata).

 I need it to avoid the auto-grow overhead.

 for now I create the prealloc qcow2 image then I import it in one.

 is it posible to avoid this step?
 any suggestions or alternatives?

 thanks

 Lorenzo
 ___
 Users mailing list
 Users@lists.opennebula.org mailto:Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




 -- 
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com mailto:jme...@c12g.com



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM is in PROLOG state more than 1 hour

2013-12-03 Thread Documented Facts
No, I think it is because I have a 10 GB swap partition, so the total VM
storage size is 15 GB; that might be the reason. Thanks for taking the time
to reply, but there is no need to look into it further since I now use the
SSH storage mechanism.

Thank You


On Tue, Dec 3, 2013 at 4:32 PM, Jaime Melis jme...@c12g.com wrote:

 Hi,

 configuration seems fine. It must be a problem with your physical network
 link. Have you tried manually scp'ing the 5GB image over NFS, to rule out
 OpenNebula from the equation?

 cheers,
 Jaime


 On Wed, Nov 20, 2013 at 8:18 PM, Documented Facts 
 documentedfa...@gmail.com wrote:

 No, I don't think there is huge traffic. I have two machines: one runs the
 front end and the other one is a host with Xen virtualization. The machines
 don't have hardware virtualization support and are connected via a switch.

 I have described below how I set up the NFS storage. Do you see anything
 wrong there?

 Thanks

 Front-End

   - Install the NFS server:
       sudo apt-get install nfs-kernel-server
   - Let others access the datastore directory: edit /etc/exports
     (sudo nano /etc/exports) and put the following in that file:
       /var/lib/one 10.*.*.161(rw,async,no_subtree_check,no_root_squash)
   - Reload the service:
       $ sudo /etc/init.d/nfs-kernel-server reload

 Hosts

   - Install the NFS client:
       sudo apt-get install nfs-common
   - Configure the datastore to be mounted at start-up: in /etc/fstab add
       on-front:/var/lib/one /var/lib/one nfs udp,_netdev 0 0
   - Mount it:
       sudo mount /var/lib/one


 After this I can see the front end's /var/lib/one directory from the hosts.
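
 For reference, a few sanity checks worth running on a host once the mount is
 up (a minimal sketch; "on-front" and the image path are taken from the
 example above and may differ in your setup):

   showmount -e on-front          # the export list should include /var/lib/one
   mount | grep /var/lib/one      # confirm the NFS mount is actually active
   # rough link-speed test outside OpenNebula, as suggested earlier in the thread:
   time scp on-front:/var/lib/one/datastores/1/<some-image> /tmp/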

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




 --
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] qcow2: empty datablock image creation with preallocation

2013-12-03 Thread Lorenzo Faleschini

Thanks Jaime, I'll try it as soon as I have some time on a test box.

cheers
Lorenzo

Il 03/12/2013 13:09, Jaime Melis ha scritto:

Hi Lorenzo,

not out of the box. You will need to manually modify the DS drivers. 
You will need to modify:

https://github.com/OpenNebula/one/blob/release-4.4/src/datastore_mad/remotes/fs/mkfs

And replace the 'dd' command with your own.

I would probably do something like: https://gist.github.com/jmelis/7768171
(I haven't tested it, so it probably has bugs. You also need to write 
the actual command)


Let me know if you need any further advice.

Cheers,
Jaime


On Mon, Dec 2, 2013 at 6:29 PM, Lorenzo Faleschini 
lorenzo.falesch...@nordestsystems.com 
mailto:lorenzo.falesch...@nordestsystems.com wrote:


Hi

is it possible to create in sunstone (or via CLI) an
empty-datablock qcow2 with preallocation (-o preallocation=metadata).

I need it to avoid the auto-grow overhead.

for now I create the prealloc qcow2 image then I import it in one.

is it posible to avoid this step?
any suggestions or alternatives?

thanks

Lorenzo
___
Users mailing list
Users@lists.opennebula.org mailto:Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




--
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com mailto:jme...@c12g.com




--
Lorenzo Faleschini
Responsabile Sistemi Informativi

skype: falegalizeit
mobile: +39 335 6055225
tel: +39 0427 807934
fax: +39 0434 1820145
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How to use ceph filesystem

2013-12-03 Thread Jaime Melis
Hi Mario,

Cephfs CAN be used as a shared filesystem datastore. I don't completely agree
with Kenneth's recommendation of using 'ssh' as the TM for the system
datastore. I think you can go for 'shared' as long as you have the
/var/lib/one/datastores/... shared via Cephfs. OpenNebula doesn't care
about what DFS solution you're using, it will simply assume files are
already there.

Another thing worth mentioning: from 4.4 onwards the HOST attribute of the
datastore should be renamed to BRIDGE_LIST.
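
For reference, a minimal sketch of a 4.4-style ceph image datastore
definition (attribute names and values here are illustrative and should be
checked against the 4.4 Ceph datastore guide):

cat > ceph.ds <<'EOF'
NAME        = cephds
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
BRIDGE_LIST = "cephnode1 cephnode2"
EOF
onedatastore create ceph.ds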

cheers,
Jaime


On Tue, Dec 3, 2013 at 11:28 AM, Kenneth kenn...@apolloglobal.net wrote:

  Actually, I'm using ceph as the system datastore. I used the cephfs
 (CEPH FUSE) and mounted it on all nodes on /var/lib/one/datastores/0/

  Regarding ssh as the transfer driver, I haven't really used it since I'm all
 on ceph, both system and image datastore. I may be wrong but that is how I
 understand it from the docs.
 ---

 Thanks,
 Kenneth
 Apollo Global Corp.

  On 12/03/2013 06:11 PM, Mario Giammarco wrote:

   My problem was that, because ceph is a distributed filesystem (and so
 it can be used as an alternative to NFS), I supposed I could use it as a
 shared system datastore.
 Reading your reply I can see that is not true. Probably the official
 documentation should clarify this.

 In fact I hoped to use ceph as the system datastore because ceph is fault
 tolerant and NFS is not.

 Thanks for help,
 Mario


 2013/12/3 Kenneth kenn...@apolloglobal.net

  Ceph won't be the default image datastore, but you can always choose it
 whenever you create an image.

  You said you don't have an NFS disk and you just use a plain disk for your
 system datastore, so you *should* use ssh in order to have live
 migrations.

 Mine uses shared as datastore since I mounted a shared folder on each
 nebula node.
  ---

 Thanks,
 Kenneth
 Apollo Global Corp.

   On 12/03/2013 03:01 PM, Mario Giammarco wrote:

 First, thanks you for your very detailed reply!


 2013/12/3 Kenneth kenn...@apolloglobal.net

  You don't need to replace the existing datastores; the important thing is
 that you set the system datastore to ssh, because you still need to transfer
 files to each node when you deploy a VM.


 So I lose live migration, right?
 If I understand correctly, ceph cannot be the default datastore either.

  Next, you should make sure that all your nodes are able to communicate
 with the ceph cluster. Issue the command ceph -s on all nodes, including
 the front end, to be sure that they are connected to ceph.




 ... will check...



  oneadmin@cloud-node1:~$ onedatastore list

    ID NAME     SIZE AVAIL CLUSTER  IMAGES TYPE DS    TM
     0 system      -     - -             0 sys  -     shared
     1 default  7.3G   71% -             1 img  fs    shared
     2 files    7.3G   71% -             0 fil  fs    ssh
   100 cephds   5.5T   59% -             3 img  ceph  ceph

  Once you have verified that the ceph datastore is active, you
  can upload images in the Sunstone GUI. Be aware that conversion of images
  to ceph's RBD format may take quite some time.


  I see in your configuration that the system datastore is shared!

 Thanks again,
 Mario


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] oneimage create command in ON3.4 and up

2013-12-03 Thread Hyun Woo Kim
Hi,
Thanks for the reply.

My oned.log has those messages
Tue Dec  3 08:53:08 2013 [InM][I]: Monitoring datastore default (1)
Tue Dec  3 08:53:08 2013 [ImM][I]: Datastore default (1) successfully monitored.

but I still can not do oneimage create with the same error message;
-bash-4.1$ oneimage create firstimage.template  --datastore default
Not enough space in datastore

Any other suggestion?

Thank you.
HyunWoo
FermiCloud


From: Carlos Martín Sánchez cmar...@opennebula.org
Date: Tuesday, December 3, 2013 4:06 AM
To: Hyunwoo Kim hyun...@fnal.gov
Cc: users@lists.opennebula.org
Subject: Re: [one-users] oneimage create command in ON3.4 and up

Hi,

The datastore size is initialized to 0, and then populated with the right size 
once monitoring starts working. Maybe there was a problem trying to 
monitor it. Take a look at /var/log/one/oned.log; you should have messages like 
these:

Fri Nov 29 17:29:19 2013 [InM][D]: Monitoring datastore default (1)
Fri Nov 29 17:29:20 2013 [ImM][D]: Datastore default (1) successfully monitored.

Or an error message from the drivers.
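
For reference, a quick way to pull just those lines out of the log (the path
is the one mentioned above):

grep -i "datastore default" /var/log/one/oned.log | tail -n 20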

Regards

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


On Mon, Dec 2, 2013 at 11:37 PM, Hyun Woo Kim 
hyun...@fnal.gov wrote:
Hi,

Let me ask a basic question.

I just started testing ON4.2
and when I use oneimage command to register a first image,
I am getting the following error message

-bash-4.1$ oneimage create firstimage.template  --datastore default
Not enough space in datastore

I guess this is because the default datastore has 0M size as shown below,
-bash-4.1$ onedatastore list
  ID NAME     SIZE AVAIL CLUSTER     IMAGES TYPE DS   TM
   0 system      -     - -                0 sys  -    ssh
   1 default    0M     - production       0 img  fs   ssh


My question is, how can I increase this size?

I could not find relevant information from onedatastore help,
I guessed this size will increase when I add a new host, create a cluster, add 
the host in this cluster
and create a new virtual network and add it to the cluster etc,
but I am still getting the same error message..

Thanks in advance.
HyunWoo KIM
FermiCloud


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] oneimage create command in ON3.4 and up

2013-12-03 Thread Hyun Woo Kim
Carlos,
in your email, you said,
The datastore size is initialized to 0, and then populated with the right size

What do you mean by right size here?
How do we set the right size in the first place?

Thanks,
HyunWoo
FermiCloud


From: Hyunwoo Kim hyun...@fnal.gov
Date: Tuesday, December 3, 2013 8:59 AM
To: Carlos Martín Sánchez cmar...@opennebula.org
Cc: users@lists.opennebula.org
Subject: Re: [one-users] oneimage create command in ON3.4 and up

Hi,
Thanks for the reply.

My oned.log has those messages
Tue Dec  3 08:53:08 2013 [InM][I]: Monitoring datastore default (1)
Tue Dec  3 08:53:08 2013 [ImM][I]: Datastore default (1) successfully monitored.

but I still can not do oneimage create with the same error message;
-bash-4.1$ oneimage create firstimage.template  --datastore default
Not enough space in datastore

Any other suggestion?

Thank you.
HyunWoo
FermiCloud


From: Carlos Martín Sánchez cmar...@opennebula.org
Date: Tuesday, December 3, 2013 4:06 AM
To: Hyunwoo Kim hyun...@fnal.gov
Cc: users@lists.opennebula.org
Subject: Re: [one-users] oneimage create command in ON3.4 and up

Hi,

The datastore size is initialized to 0, and then populated with the right size 
after the monitorization begins to work. Maybe there was a problem trying to 
monitor it. Take a look at /var/log/one/oned.log, you should have messages like 
these:

Fri Nov 29 17:29:19 2013 [InM][D]: Monitoring datastore default (1)
Fri Nov 29 17:29:20 2013 [ImM][D]: Datastore default (1) successfully monitored.

Or an error message from the drivers.

Regards

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


On Mon, Dec 2, 2013 at 11:37 PM, Hyun Woo Kim 
hyun...@fnal.gov wrote:
Hi,

Let me ask a basic question.

I just started testing ON4.2
and when I use oneimage command to register a first image,
I am getting the following error message

-bash-4.1$ oneimage create firstimage.template  --datastore default
Not enough space in datastore

I guess this is because the default datastore has 0M size as shown below,
-bash-4.1$ onedatastore list
  ID NAME     SIZE AVAIL CLUSTER     IMAGES TYPE DS   TM
   0 system      -     - -                0 sys  -    ssh
   1 default    0M     - production       0 img  fs   ssh


My question is, how can I increase this size?

I could not find relevant information from onedatastore help,
I guessed this size will increase when I add a new host, create a cluster, add 
the host in this cluster
and create a new virtual network and add it to the cluster etc,
but I am still getting the same error message..

Thanks in advance.
HyunWoo KIM
FermiCloud


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Sunstone problems

2013-12-03 Thread Ionut Popovici

Hello, I have a problem.
I have updated to OpenNebula 4.4.0.
On the CLI everything seems OK.

oneadmin@serv:/var/tmp/one/im/kvm.d$ onedatastore list
  ID NAMESIZE AVAIL CLUSTER  IMAGES TYPE DS   TM
   0 system  2.1T 81%   - 0 sys -shared
   1 default 2.1T 81%   -   131 img fs   shared
   2 files   2.1T 81%   - 0 fil fs   ssh

oneadmin@serv:/var/tmp/one/im/kvm.d$ oneimage list
  ID USER       GROUP    NAME            DATASTORE SIZE TYPE PER STAT RVMS
   0 mcabrerizo oneadmin ttylinux - kvm  default    40M OS   No  rdy     0
   1 mcabrerizo oneadmin Debian 7.1.0 Ne default   222M CD   No  rdy     0
   2 mcabrerizo oneadmin Ubuntu 13.04 Se default   701M CD   No  rdy     0


    ID USER GROUP    NAME     STAT UCPU  UMEM HOST TIME
   242 x    oneadmin ro01-x.h runn    0 1024M p.h  28d 03h56
   251 x    oneadmin mini4    runn    0 1024M p.h  27d 18h43
   252 x    oneadmin mini5    runn    0 1024M pl.h 27d 18h43
   253 x    oneadmin mini6    runn    0 1024M pl.h 27d 18h43
   278 x    oneadmin cacti    runn    0 1024M pl.h 26d 16h26


oneadmin@serv:/var/tmp/one/im/kvm.d$ onehost list
  ID NAME CLUSTER RVM ALLOCATED_CPU   ALLOCATED_MEM      STAT
   0 p.h  -        75 789 / 800 (98%) 40G / 31.4G (127%) on


But in Sunstone I don't see the datastores, the virtual networks, the VMs 
or the images.


With the CLI commands everything was OK.
The only problem I had was that, after updating to 4.4.0, Sunstone had a 
problem with the SHA1 password for the serveradmin user,

so I used
oneuser passwd serveradmin pass --sha1

What is weird is that I can connect to Sunstone and see the host info, and I 
see all my 70 machines as zombies :(


Any idea why I can't see datastores, VMs, images and vnets in Sunstone?
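
For reference, one thing worth checking (an assumption on my part, based on
the default paths of a 4.4 install): the password stored in Sunstone's
server auth file has to match the serveradmin password known to oned,
otherwise Sunstone cannot impersonate users and most views come back empty.

cat /var/lib/one/.one/sunstone_auth            # e.g. serveradmin:SOMEPASS
oneuser passwd serveradmin --sha1 SOMEPASS     # make oned agree with that file
sunstone-server stop && sunstone-server start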





___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] oneimage create command in ON3.4 and up

2013-12-03 Thread Hyun Woo Kim
Carlos,
I found what was wrong with my configuration;

onedatastore show default   reveals that BASE_PATH was pointing at the wrong 
path.
(This is because I started with one path and then later renamed the directory.)

onedatastore update default  did not let me change this BASE_PATH,
so I had to modify the DB content directly,

mysql update datastore_pool set body='<DATASTORE> … </DATASTORE>' where oid=1;
and also had to restart oned in order to flush the memory contents.

After this,  onedatastore list now shows
-bash-4.1$ od list
  ID NAME      SIZE AVAIL CLUSTER     IMAGES TYPE DS   TM
   0 system       -     - -                0 sys  -    ssh
   1 default 434.9G   91% production       0 img  fs   ssh
   2 files       0M     - -                0 fil  fs   ssh


Thanks again,
HyunWoo


From: Hyunwoo Kim hyun...@fnal.gov
Date: Tuesday, December 3, 2013 9:15 AM
To: Hyunwoo Kim hyun...@fnal.gov, Carlos Martín Sánchez cmar...@opennebula.org
Cc: users@lists.opennebula.org
Subject: Re: [one-users] oneimage create command in ON3.4 and up

Carlos,
in your email, you said,
The datastore size is initialized to 0, and then populated with the right size

What do you mean by right size here?
How do we set the right size in the first place?

Thanks,
HyunWoo
FermiCloud


From: Hyunwoo Kim hyun...@fnal.gov
Date: Tuesday, December 3, 2013 8:59 AM
To: Carlos Martín Sánchez cmar...@opennebula.org
Cc: users@lists.opennebula.org
Subject: Re: [one-users] oneimage create command in ON3.4 and up

Hi,
Thanks for the reply.

My oned.log has those messages
Tue Dec  3 08:53:08 2013 [InM][I]: Monitoring datastore default (1)
Tue Dec  3 08:53:08 2013 [ImM][I]: Datastore default (1) successfully monitored.

but I still can not do oneimage create with the same error message;
-bash-4.1$ oneimage create firstimage.template  --datastore default
Not enough space in datastore

Any other suggestion?

Thank you.
HyunWoo
FermiCloud


From: Carlos Martín Sánchez cmar...@opennebula.org
Date: Tuesday, December 3, 2013 4:06 AM
To: Hyunwoo Kim hyun...@fnal.gov
Cc: users@lists.opennebula.org
Subject: Re: [one-users] oneimage create command in ON3.4 and up

Hi,

The datastore size is initialized to 0, and then populated with the right size 
after the monitorization begins to work. Maybe there was a problem trying to 
monitor it. Take a look at /var/log/one/oned.log, you should have messages like 
these:

Fri Nov 29 17:29:19 2013 [InM][D]: Monitoring datastore default (1)
Fri Nov 29 17:29:20 2013 [ImM][D]: Datastore default (1) successfully monitored.

Or an error message from the drivers.

Regards

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


On Mon, Dec 2, 2013 at 11:37 PM, Hyun Woo Kim 
hyun...@fnal.gov wrote:
Hi,

Let me ask a basic question.

I just started testing ON4.2
and when I use oneimage command to register a first image,
I am getting the following error message

-bash-4.1$ oneimage create firstimage.template  --datastore default
Not enough space in datastore

I guess this is because the default datastore has 0M size as shown below,
-bash-4.1$ onedatastore list
  ID NAME     SIZE AVAIL CLUSTER     IMAGES TYPE DS   TM
   0 system      -     - -                0 sys  -    ssh
   1 default    0M     - production       0 img  fs   ssh


My question is, how can I increase this size?

I could not find relevant information from onedatastore help,
I guessed this size will increase when I add a new host, create a cluster, add 
the host in this cluster
and create a new virtual network and add it to the cluster etc,
but I am still getting the same error message..

Thanks in advance.
HyunWoo KIM
FermiCloud


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Failure when deploying OneFlow Service

2013-12-03 Thread Carlos Martín Sánchez
Fixed!

You can apply the patch in this ticket [1] to the file
/usr/lib/one/oneflow/lib/models/role.rb

Regards

[1] http://dev.opennebula.org/issues/2535

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


On Tue, Dec 3, 2013 at 2:39 PM, Campbell, Bill 
bcampb...@axcess-financial.com wrote:

 Carlos,
 Good call on the name, was using Load Balancer.  Switched to
 Load_Balancer and it is deploying appropriately.

 Thanks!

 --
 *From: *Carlos Martín Sánchez cmar...@opennebula.org
 *To: *Bill Campbell bcampb...@axcess-financial.com
 *Cc: *Users OpenNebula users@lists.opennebula.org
 *Sent: *Tuesday, December 3, 2013 7:06:38 AM
 *Subject: *Re: [one-users] Failure when deploying OneFlow Service


 Hi Bill,

 Can you confirm that you can instantiate the template directly, instead of
 launching the service?
 I suspect it may have to do with the Role name, which one did you set?

 Cheers

 --
 Carlos Martín, MSc
 Project Engineer
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula


 On Mon, Dec 2, 2013 at 10:49 PM, Campbell, Bill 
 bcampb...@axcess-financial.com wrote:

 Been seeing this issue since RC and now in final 4.4, but was wondering
 if anybody has run across this before prior to submitting a bug report.

 Anytime I deploy a service, I get the following:

 *Mon Dec 2 16:44:03 2013 [ReM][E]: Req:864 UID:2 TemplateInstantiate
 result FAILURE Parse error: syntax error, unexpected $end, expecting EQUAL
 or EQUAL_EMPTY at line 2, columns 33:41*


 Has anybody run across this problem before?  Seems to happen whether it's
 a MySQL or SQLite database, and with services that have single roles or
 multiple roles.


 I've cleared my browser cache as well, just to be certain.



 Any ideas?

 --

 *Bill Campbell*
 Infrastructure Architect


 Axcess Financial Services, Inc.
 7755 Montgomery Rd., Suite 400
 Cincinnati, OH  45236


 *NOTICE: Protect the information in this message in accordance with the
 company's security policies. If you received this message in error,
 immediately notify the sender and destroy all copies.*


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




 *NOTICE: Protect the information in this message in accordance with the
 company's security policies. If you received this message in error,
 immediately notify the sender and destroy all copies.*


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Weekly snapshots

2013-12-03 Thread Giancarlo De Filippis


Hi all, 

How can i schedule a weekly (or daily... or monthly) backup via
scheduler? 

Thanks all for response!!! 

Giancarlo 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] datastore log info missing from oned.log in v4.2

2013-12-03 Thread Ruben S. Montero
Can you send the log messages that you are trying to write to oned.log? TM log
messages are sent to vm.log or oned.log. Nothing has changed there; can you
share the code you are using to log?

Cheers

Ruben




On Tue, Nov 26, 2013 at 4:38 PM, Gary S. Cuozzo g...@isgsoftware.net wrote:

 Then it seems that this has changed since the 3.8.x series.  I use the
 log_debug script function to log various messages in my drivers.  I have my
 log level set to DEBUG in the conf file.  I used to see those messages, as I
 would expect, unless I changed the log level to something above DEBUG.  The
 messages were logged regardless of the outcome of the script.

 It seems that if that is no longer the case, then the log facility can't
 be used for anything other than error conditions and having DEBUG, INFO,
 etc. is not very useful anymore.

 Maybe I'm missing the point of the feature, but it definitely did not work
 this way before and was something I relied on for certain things.  I'm now
 overriding the log_debug function locally in my script and writing the
 output to my own log file.  That is not as nice because now the log entries
 are not intertwined with other events which are happening in the oned.log
 file.
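
 For reference, a minimal sketch of how those helpers are typically used from
 a custom TM action (the relative path to scripts_common.sh depends on where
 the driver lives):

   #!/bin/bash
   DRIVER_PATH=$(dirname "$0")
   source "$DRIVER_PATH/../../scripts_common.sh"   # provides log_debug, log_error, exec_and_log

   SRC=$1
   DST=$2

   log_debug "mytm/clone: copying $SRC to $DST"    # expected to show up in oned.log at DEBUG level
   exec_and_log "cp -r $SRC $DST" "Could not copy $SRC to $DST"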

 IMO, any message should be written to the file based on the configured log
 level.

 Let me know your thoughts.

 Thanks for the reply,
 gary

 - Original Message -
 From: Javier Fontan jfon...@opennebula.org

 STDERR is only logged when the action is not successful, maybe that is
 your problem.

 On Sat, Nov 23, 2013 at 11:35 PM, Gary S. Cuozzo g...@isgsoftware.net
 wrote:
  Hi All,
  I finally got some time to get back to looking at this.  My original
 task was to get my custom drivers working with ONE 4.2.  I was able to do
 so.  It was related to a bug fix where the SRC and DST arguments to
 pre/postmigrate were swapped.  I simply had to swap them back.  The irony
 is that I was the one who suggested the pre/postmigrate feature
 request and reported that bug.  ;)
 
  So, I'm happily able to live migrate again.
 
  Oddly though, I am unable to get any log messages to show up in oned.log
 file.  I'm guessing some of the common script libraries have gotten
 reorganized, so I need to dig in a bit more on this issue.
 
  Cheers,
  gary
 
  - Original Message -
  From: Gary S. Cuozzo g...@isgsoftware.net
  To: Users OpenNebula users@lists.opennebula.org
  Sent: Friday, November 8, 2013 8:53:32 AM
  Subject: Re: [one-users] datastore log info missing from oned.log in v4.2
 
  Is this a new behavior?  I used to see any logged messages up to the
 level of the log mode in the oned.log file.  It seems like 'debug' mode
 should log regardless of the exit code of the script.  Even if I changed
 the logs from log_debug to log_error, still no output.  I'll try forcing
 exit 1 to see if that causes the logs to show up.
 
  Also, I validated that when I first installed ONE 4.2, the log messages
 were there, but now are not.  I went back to my logs from a few weeks ago
 and see all the messages I would expect.  I did not have time to work on
 this, so the server sat for a few weeks.  Now I'm back to working on it and
 not seeing the messages.  Very odd.
 
  Will let you know if setting exit 1 helps.
 
  Thanks for the reply,
  gary
 
  - Original Message -
  Normally the transfers are not logged when everything goes ok, that
  is, the script exists with 0.
 
  On Fri, Nov 8, 2013 at 1:17 AM, Gary S. Cuozzo g...@isgsoftware.net
 wrote:
  Hi All,
  I have recently upgraded our environment from 3.8.x to 4.2 version.
  Running
  on Ubuntu 12.04LTS.  I have some custom datastore/tm drivers that I
 wrote
  which worked well in 3.8.  I am trying to troubleshoot something
 related to
  live migration and my log messages are not getting to
 /var/log/one/oned.log
  file.  I have logging set to debug in oned.conf.  I see debug level
 messages
  for other subsystems, such as [ReM] but nothing related to the
 datastore or
  tm.
 
  Just as a sanity check, I have another server that I did a clean
 install on
  and I'm seeing the same results.
 
  I also tested adding some log entries into the stock ONE drivers and I
 do
  not see anything logged.
 
  Any ideas?
 
  Thanks in advance,
  gary
 
 
  ___
  Users mailing list
  Users@lists.opennebula.org
  http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
 
 
 
 
  --
  Javier Fontán Muiños
  Developer
  OpenNebula - The Open Source Toolkit for Data Center Virtualization
  www.OpenNebula.org | @OpenNebula | github.com/jfontan
  ___
  Users mailing list
  Users@lists.opennebula.org
  http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
  ___
  Users mailing list
  Users@lists.opennebula.org
  http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



 --
 Javier Fontán Muiños
 Developer
 OpenNebula - The Open Source 

Re: [one-users] base path of vmfs system datastore

2013-12-03 Thread Tino Vazquez
Hi,

There must be an error monitoring the datastores; do you see anything
relevant in oned.log? Otherwise send it through for analysis.

It would be useful to know the output of:

onedatastore show ds_id -x

where ds_id is the id of the system datastore.

There is no problem in using local storage for VMware datastores, I've
updated the documentation to reflect this (it was indeed misleading).

Please consider upgrading to 4.4.

Regards,

-Tino

--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova



On Mon, Dec 2, 2013 at 10:36 AM, Daems Dirk dirk.da...@vito.be wrote:
 Hi,

 I created both the system and image datastores.
 In the oned.log file I see that my new 'VMWareImageDatastore' can be 
 monitored successfully.

 However, when I try to deploy an image in that VMFS image datastore, I get an 
 error indicating that there is not enough space in the datastore.
 But in the vSphere client there is 275 GB free space and the image is several 
 times smaller.

 Currently (for testing), my VMFS volumes are not exported through a SAN, so I 
 don't use iSCSI but a serial attached SCSI drive.
 Could that be the problem? Is it mandatory to use iSCSI?
 From the documentation [1] it looks like it is mandatory to use iSCSI; see 
 'Infrastructure Configuration', second bullet:

 The ESX servers needs to present or mount as iSCSI both the system datastore 
 and the image datastore (naming them with just the datastore-id, for 
 instance 0 for the system datastore and 1 for the image datastore).

 [1] - 
 http://opennebula.org/documentation:rel4.2:vmware_ds#configuring_the_datastore_drivers_for_vmware

 Regards,
 Dirk

 -Original Message-
 From: Tino Vazquez [mailto:cvazq...@c12g.com]
 Sent: donderdag 28 november 2013 18:05
 To: Daems Dirk
 Cc: users@lists.opennebula.org
 Subject: Re: [one-users] base path of vmfs system datastore

 Hi,

 You are right, there is a chicken-and-egg problem with VMware and DS 0 and 1 
 in OpenNebula 4.2 that prevents the use of the default datastores in VMware. 
 This is fixed in the imminent OpenNebula 4.4 Retina.

 The workaround is to simply use two new datastores (say, 100 and 101), 
 configure them with the appropriate BRIDGE_LIST and 'vmfs' drivers and 
 DATASTORE_LOCATION, rename the DS in the ESX hosts and you should be good to 
 go.

 Regards,

 -Tino

 --
 OpenNebula - Flexible Enterprise Cloud Made Simple

 --
 Constantino Vázquez Blanco, PhD, MSc
 Senior Infrastructure Architect at C12G Labs www.c12g.com | @C12G | 
 es.linkedin.com/in/tinova



 On Thu, Nov 28, 2013 at 5:56 PM, Daems Dirk dirk.da...@vito.be wrote:
 Hi list,



 I am quite new to OpenNebula.

 I installed the OpenNebula 4.2 client and configured my VMWare
 hypervisor. I can create an OpenNebula host which connects to the
 VMWare hypervisor without problems.



 However, I can't seem to get the pre-defined system and image
 datastores to work with the vmfs datastore type. I set the
 DATASTORE_LOCATION in the oned.conf file to /vmfs/volumes. SSH access
 for the oneadmin account is working fine. I also updated the
 pre-defined system (0) and image (1) datastores using the correct
 attribute values for DS_MAD, TM_MAD and BRIDGE_LIST. In the vSphere
 client, I created VMFS datastores reflecting the datastore id's 0 and 1.



  Now the base path of these datastores still points to
  /var/lib/one/datastores/0 and /var/lib/one/datastores/1, which is not
  correct.

 I would think they need to point to 

Re: [one-users] Deployment error

2013-12-03 Thread Tino Vazquez
Hi Eduardo,

I think you might be missing a configuration step, check the
requirements section of VMware networking:

http://opennebula.org/documentation:rel4.4:vmwarenet#requirements

You need to execute the following in the ESX as root:

 $ chmod +s /sbin/esxcfg-vswitch


Regards,

-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova



On Fri, Nov 29, 2013 at 11:21 PM, Eduardo Roloff rol...@gmail.com wrote:
 I check the oneadmin permissions and he is in the administrators
 group, this is the only thing that I can verify to assure that he has
 full permissions.

 I changed to use predefined network and I got a different error.

 Fri Nov 29 20:09:43 2013 [LCM][I]: New VM state is BOOT
 Fri Nov 29 20:09:43 2013 [VMM][I]: Generating deployment file:
 /var/lib/one/vms/14/deployment.0
 Fri Nov 29 20:09:44 2013 [VMM][I]: Successfully execute network driver
 operation: pre.
 Fri Nov 29 20:09:53 2013 [VMM][I]: Command execution fail:
 /var/lib/one/remotes/vmm/vmware/deploy
 '/var/lib/one/vms/14/deployment.0' 'gn901cb' 14 gn901cb
 Fri Nov 29 20:09:53 2013 [VMM][D]: deploy: Successfully defined domain one-14.
 Fri Nov 29 20:09:53 2013 [VMM][E]: deploy: Error executing: virsh -c
 'esx://gn901cb/?no_verify=1' start one-14 err: ExitCode: 1
 Fri Nov 29 20:09:53 2013 [VMM][I]: out:
 Fri Nov 29 20:09:53 2013 [VMM][I]: error: Failed to start domain one-14
 Fri Nov 29 20:09:53 2013 [VMM][I]: error: internal error Could not
 start domain: FileNotFound - File [126] 14/disk.0/disk.vmdk was not
 found
 Fri Nov 29 20:09:53 2013 [VMM][I]:
 Fri Nov 29 20:09:53 2013 [VMM][I]: ExitCode: 1
 Fri Nov 29 20:09:53 2013 [VMM][I]: Failed to execute virtualization
 driver operation: deploy.
 Fri Nov 29 20:09:53 2013 [VMM][E]: Error deploying virtual machine
 Fri Nov 29 20:09:53 2013 [DiM][I]: New VM state is FAILED

 On Fri, Nov 29, 2013 at 7:02 PM, Ruben S. Montero
 rsmont...@opennebula.org wrote:
 Just double checking... have you given full permissions to oneadmin?

 http://opennebula.org/documentation:rel4.2:evmwareg#users_groups

 It seems that you are using the dynamic network mode, you can move on by
 falling back to the predefined one. And once you have setup everything else
 revisit the problem...

 http://opennebula.org/documentation:rel4.2:vmwarenet#using_the_pre-defined_network_mode

 Cheers

 Ruben


 On Fri, Nov 29, 2013 at 8:34 PM, Eduardo Roloff rol...@gmail.com wrote:

 Hi again Tino,

 I change the configuration to use the image datastore locally in the
 front-end, and deploy the VM using vmfs.

 When I try to create a VM I got the error messages below.

 Seems that is a permission issue, but I can't find what is wrong.

 --

 Fri Nov 29 17:33:11 2013 [TM][D]: Message received: TRANSFER SUCCESS 12 -

 Fri Nov 29 17:33:11 2013 [VMM][D]: Message received: LOG I 12 Command
 execution fail: /var/lib/one/remotes/vnm/vmware/pre

 

Re: [one-users] Deployment error

2013-12-03 Thread Eduardo Roloff
Hello Tino,

I started a fresh installation using ONE 4.4.

We can close this thread now and I'll ask if I have problems with the
new installation.

Thank you very much.

Eduardo

On Tue, Dec 3, 2013 at 2:40 PM, Tino Vazquez cvazq...@c12g.com wrote:
 Hi Eduardo,

 I think you might be missing a configuration step, check the
 requirements section of VMware networking:

 http://opennebula.org/documentation:rel4.4:vmwarenet#requirements

 You need to execute the following in the ESX as root:

  $ chmod +s /sbin/esxcfg-vswitch


 Regards,

 -Tino
 --
 OpenNebula - Flexible Enterprise Cloud Made Simple

 --
 Constantino Vázquez Blanco, PhD, MSc
 Senior Infrastructure Architect at C12G Labs
 www.c12g.com | @C12G | es.linkedin.com/in/tinova



 On Fri, Nov 29, 2013 at 11:21 PM, Eduardo Roloff rol...@gmail.com wrote:
 I check the oneadmin permissions and he is in the administrators
 group, this is the only thing that I can verify to assure that he has
 full permissions.

 I changed to use predefined network and I got a different error.

 Fri Nov 29 20:09:43 2013 [LCM][I]: New VM state is BOOT
 Fri Nov 29 20:09:43 2013 [VMM][I]: Generating deployment file:
 /var/lib/one/vms/14/deployment.0
 Fri Nov 29 20:09:44 2013 [VMM][I]: Successfully execute network driver
 operation: pre.
 Fri Nov 29 20:09:53 2013 [VMM][I]: Command execution fail:
 /var/lib/one/remotes/vmm/vmware/deploy
 '/var/lib/one/vms/14/deployment.0' 'gn901cb' 14 gn901cb
 Fri Nov 29 20:09:53 2013 [VMM][D]: deploy: Successfully defined domain 
 one-14.
 Fri Nov 29 20:09:53 2013 [VMM][E]: deploy: Error executing: virsh -c
 'esx://gn901cb/?no_verify=1' start one-14 err: ExitCode: 1
 Fri Nov 29 20:09:53 2013 [VMM][I]: out:
 Fri Nov 29 20:09:53 2013 [VMM][I]: error: Failed to start domain one-14
 Fri Nov 29 20:09:53 2013 [VMM][I]: error: internal error Could not
 start domain: FileNotFound - File [126] 14/disk.0/disk.vmdk was not
 found
 Fri Nov 29 20:09:53 2013 [VMM][I]:
 Fri Nov 29 20:09:53 2013 [VMM][I]: ExitCode: 1
 Fri Nov 29 20:09:53 2013 [VMM][I]: Failed to execute virtualization
 driver operation: deploy.
 Fri Nov 29 20:09:53 2013 [VMM][E]: Error deploying virtual machine
 Fri Nov 29 20:09:53 2013 [DiM][I]: New VM state is FAILED

 On Fri, Nov 29, 2013 at 7:02 PM, Ruben S. Montero
 rsmont...@opennebula.org wrote:
 Just double checking... have you given full permissions to oneadmin?

 http://opennebula.org/documentation:rel4.2:evmwareg#users_groups

 It seems that you are using the dynamic network mode, you can move on by
 falling back to the predefined one. And once you have setup everything else
 revisit the problem...

 http://opennebula.org/documentation:rel4.2:vmwarenet#using_the_pre-defined_network_mode

 Cheers

 Ruben


 On Fri, Nov 29, 2013 at 8:34 PM, Eduardo Roloff rol...@gmail.com wrote:

 Hi again Tino,

 I changed the configuration to use the image datastore locally on the
 front-end, and to deploy the VM using vmfs.

 When I try to create a VM I get the error messages below.

 It seems to be a permission issue, but I can't find what is wrong.

 --

 Fri Nov 29 17:33:11 2013 [TM][D]: Message received: TRANSFER SUCCESS 12 -

 Fri Nov 29 17:33:11 2013 [VMM][D]: Message received: LOG I 12 Command
 execution fail: /var/lib/one/remotes/vnm/vmware/pre

 

Re: [one-users] empty qcow2 datablock image with preallocation

2013-12-03 Thread Ruben S. Montero
This cannot be specified through the CLI or Sunstone interfaces (just the
fstype and size).

However you can easily add it to the filesystem creation script. Look for
scripts_common.sh and mkfs_command, around line 311.
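
Until that script is patched, a manual workaround along the lines of what you
already do is to create the preallocated image with qemu-img and register it as
a datablock (a minimal sketch; the file name, size and datastore are
placeholders):

  # create an empty qcow2 with metadata preallocation (avoids the auto-grow overhead)
  qemu-img create -f qcow2 -o preallocation=metadata /tmp/empty-prealloc.qcow2 10G

  # register it in OpenNebula as a datablock image
  oneimage create --name prealloc-datablock --type DATABLOCK \
    --path /tmp/empty-prealloc.qcow2 -d default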


Cheers


On Tue, Dec 3, 2013 at 10:51 AM, Lorenzo Faleschini 
lorenzo.falesch...@nordestsystems.com wrote:

  Hi

 is it possible to create in Sunstone (or via the CLI) an empty qcow2 datablock
 with preallocation (-o preallocation=metadata)?

 I need it to avoid the auto-grow overhead.

 For now I create the preallocated qcow2 image and then import it into ONE.

 Is it possible to avoid this step?
 Any suggestions or alternatives?

 Thanks

 I'm using 4.0 now... is this possible in 4.4?

 Lorenzo


 --
 Lorenzo Faleschini
 Responsabile Sistemi Informativi

 skype: falegalizeit
 mobile: +39 335 6055225
 tel: +39 0427 807934
 fax: +39 0434 1820145

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

 --
 Ruben S. Montero, PhD
 Project co-Lead and Chief Architect
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] oZones Server Incomplete Information

2013-12-03 Thread Campbell, Bill
Hello, 
I'm having some weird issues with oZones. Trying to configure it and add our 
zones seems to work okay, but I'm only seeing some of the information. I can see 
Datastores, Users, and Hosts, but I'm missing VMs, Templates, Images, and 
Virtual Networks. They all show No Data in Table.

Is there something I'm missing in configuration? Should I see this information 
upon connecting it to my OpenNebula instances? 




Bill Campbell 
Infrastructure Architect 




Axcess Financial Services, Inc. 
7755 Montgomery Rd., Suite 400 
Cincinnati, OH 45236 


NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] reg oneflow - service template

2013-12-03 Thread Rajendar K
Hi Carlos,
Thanks for the mail. Here are my queries,


On Tue, Dec 3, 2013 at 7:52 PM, Carlos Martín Sánchez 
cmar...@opennebula.org wrote:

 Hi,

 On Mon, Dec 2, 2013 at 4:51 AM, Rajendar K k.rajen...@gmail.com wrote:

 Hi Carlos,
   I have updated the files [oneflow-templates and
 service] as per your instructions. Hereby specify the output of oneflow

 => Elasticity conditions

  min_vms: 1,
   max_vms: 3,
   cooldown: 60,  <= At what period is this parameter being employed?
   elasticity_policies: [
 {
   type: CHANGE,
   adjust: 1,
   expression: CPU > 60,
   period: 3,
   period_number: 30,
   cooldown: 30
 }
   ],
   scheduled_policies: [

   ]
 }


 Kindly provide detail on how auto-scaling happens in the above
 sample, with respect to period_number, period and cooldown.


 The period_number, period and cooldown attributes are explained in detail
 in the documentation [1].
 In your example, the expression CPU > 60 must be true 30 times, evaluated
 every 3 seconds.



So the scaling should be triggered after 1 minute 30 seconds (30 * 3 seconds =
90 seconds), is that right?

The log shows that scaling is triggered after 16 minutes:

LOG MESSAGES

12/02/13 10:06 [I] New state: DEPLOYING
12/02/13 10:07 [I] New state: RUNNING
12/02/13 10:22 [I] Role role1 scaling up from 1 to 2 nodes
12/02/13 10:22 [I] New state: SCALING
12/02/13 10:23 [I] New state: COOLDOWN
12/02/13 10:23 [I] New state: RUNNING
12/02/13 10:38 [I] Role role1 scaling up from 2 to 3 nodes
12/02/13 10:38 [I] New state: SCALING
12/02/13 10:39 [I] New state: COOLDOWN
12/02/13 10:39 [I] New state: RUNNING




 After the scaling, your service will be in the cooldown period for 30
 seconds before returning to running. The only defined policy is overriding
 the default cooldown of 60 that you set.


If my understanding is correct, if we didn't specify any cooldown for each
role, it takes the default of 60 from that parameter?

In a previous email you mentioned:

 On Fri, Nov 29, 2013 at 5:19 AM, Rajendar K k.rajen...@gmail.com wrote:

 (ii) Scheduled policies
  - For my trial i have specified
  start time - 2013-11-29 10:30:30
   Is my time format correct? I am unable to scale up/down
  using this time format.


 But the output you provided does not have any scheduled_policies. Was that
 a different template?


Yes, it's a different template.



 Regards.

 [1] http://opennebula.org/documentation:rel4.4:appflow_elasticity
 --
 Carlos Martín, MSc
 Project Engineer
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org http://www.opennebula.org/ | cmar...@opennebula.org
  | @OpenNebula http://twitter.com/opennebula



 On Mon, Dec 2, 2013 at 4:51 AM, Rajendar K k.rajen...@gmail.com wrote:

 Hi Carlos,
   I have updated the files [oneflow-templates and
 service] as per your instructions. Hereby specify the output of oneflow


 root@:/srv/cloud/one# oneflow show 179
  SERVICE 179 INFORMATION
 ID  : 179
 NAME: Sampletest
 USER: root
 GROUP   : oneadmin
 STRATEGY: straight
 SERVICE STATE   : RUNNING
 SHUTDOWN: shutdown

 PERMISSIONS

 OWNER   : um-
 GROUP   : ---
 OTHER   : ---

 ROLE role1
 ROLE STATE  : RUNNING
 VM TEMPLATE : 590
  CARDINALITY : 3
 MIN VMS : 1
 MAX VMS : 3
 COOLDOWN: 60s
 SHUTDOWN: shutdown
 NODES INFORMATION
   VM_ID NAME                   STAT UCPU   UMEM  HOST        TIME
     568 role1_0_(service_179)  runn    0  1024M  10.1.26.32  0d 01h24
     569 role1_1_(service_179)  runn    0  1024M  10.1.26.31  0d 01h07
     570 role1_2_(service_179)  runn    0  1024M  10.1.26.32  0d 00h51

 ELASTICITY RULES

  ADJUST   EXPRESSION      EVALS   PERIOD   COOL
  + 1      CPU[0.0] > 60   30 /    3s       30s


  LOG MESSAGES
 12/02/13 10:06 [I] New state: DEPLOYING
 12/02/13 10:07 [I] New state: RUNNING
 12/02/13 10:22 [I] Role role1 scaling up from 1 to 2 nodes
 12/02/13 10:22 [I] New state: SCALING
 12/02/13 10:23 [I] New state: COOLDOWN
 12/02/13 10:23 [I] New state: RUNNING
 12/02/13 10:38 [I] Role role1 scaling up from 2 to 3 nodes
 12/02/13 10:38 [I] New state: SCALING
 12/02/13 10:39 [I] New state: COOLDOWN
 12/02/13 10:39 [I] New state: RUNNING

 => Elasticity conditions

  min_vms: 1,
   max_vms: 3,
   cooldown: 60,  <= At what period is this parameter being employed?
   elasticity_policies: [
 {
   type: CHANGE,
   adjust: 1,
   expression: CPU > 60,
   period: 3,
   period_number: 30,
   cooldown: 30
 }
   

Re: [one-users] How do i know the cluster's datastore's capacity?

2013-12-03 Thread caohf
Yes.
When I use onehost show ID, I got the message.




 Best Wishes!
 Dennis

From: Carlos Martín Sánchez
Date: 2013-12-03 19:06
To: Kenneth
CC: users; 曹海峰
Subject: Re: [one-users] How do i know the cluster's datastore's capacity?
Hi there,


On Tue, Nov 26, 2013 at 9:48 AM, 曹海峰 ca...@wedogame.com wrote:

Thanks for the reply!
But if I use a shared folder, such as /var/lib/datastore/1/ on host A, and 
mount it on all hosts in a cluster, the whole storage capacity is very 
small. OpenNebula can't use the other hosts' disk space?


On Tue, Nov 26, 2013 at 9:53 AM, Kenneth kenn...@apolloglobal.net wrote:

You can of course. But it is up to you to monitor.
Not in OpenNebula 4.4 :)
Local (ssh) system datastores are also monitored, but the storage is reported 
for each host: in the onehost show output, or selecting the host in Sunstone.
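
For a quick check from the command line (a sketch, assuming OpenNebula 4.4,
the default datastore location and host ID 0 as an example):

  # per-host monitoring, including the storage reported by that host
  onehost show 0

  # or check the available space directly on each host
  df -h /var/lib/one/datastores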


Best regards.
--

Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple

www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula



On Tue, Nov 26, 2013 at 9:53 AM, Kenneth kenn...@apolloglobal.net wrote:

You can of course. But it is up to you to monitor. And using storage on 
different hosts will cause the VMs to be copied via scp, which is pretty slow.
Another way to have large storage is to remove the other drives on the nodes (or 
use small drives in them) and then plug big-capacity drives into the main NFS 
node. I mean, put everything on host A and let the others have a small hard 
drive just enough for the OS itself.
---
Thanks,
KennethApollo Global Corp.
On 11/26/2013 04:48 PM, 曹海峰 wrote:
Thanks for the reply!
But if I use a shared folder, such as /var/lib/datastore/1/ on host A, and 
mount it on all hosts in a cluster, the whole storage capacity is very 
small. OpenNebula can't use the other hosts' disk space?





 Best Wishes!
 Dennis

From: users-boun...@lists.opennebula.org
Date: 2013-11-26 16:27
To: users@lists.opennebula.org
Subject: Re: [one-users] How do i know the cluster's datastore's capacity?
That's because you are not using a shared folder for all hosts. A better way to 
do it is to export the /var/lib/datastore/1/ folder on the Sunstone host via NFS 
and then mount it on all other hosts at the same location, 
/var/lib/datastore/1/. This will enable you to do live migrations and very fast 
deployment of VMs.
---

Thanks,
KennethApollo Global Corp.
On 11/26/2013 04:17 PM, caohf wrote:
Dear all:
How do I get the whole capacity for a cluster's datastores?
I have two hosts in a cluster; every host has 100 GB of space for 
/var/lib/datastore.
In Sunstone I find that the capacity of the cluster only displays the capacity 
of the host where Sunstone is deployed.




 Best Wishes!
 Dennis


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Sunstone problems

2013-12-03 Thread Vladislav Gorbunov
Check that the new parameter exists in /etc/one/sunstone-server.conf:
# Default table order
:table_order: desc

Without it Sunstone doesn't show any data. Looks like a bug.
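
A quick way to check (a sketch, assuming the stock configuration file location):

  # see whether the parameter is already present
  grep table_order /etc/one/sunstone-server.conf

  # if it is missing, add ':table_order: desc' at the top level of the file,
  # then restart Sunstone
  sunstone-server stop
  sunstone-server start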

2013/12/4 Ionut Popovici io...@hackaserver.com:
 Hello... I have a problem.
 I have updated to OpenNebula 4.4.0.
 On the CLI everything seems ok.

 oneadmin@serv:/var/tmp/one/im/kvm.d$ onedatastore list
   ID NAMESIZE AVAIL CLUSTER  IMAGES TYPE DS   TM
0 system  2.1T 81%   - 0 sys -shared
1 default 2.1T 81%   -   131 img fs   shared
2 files   2.1T 81%   - 0 fil fs   ssh

 oneadmin@serv:/var/tmp/one/im/kvm.d$ oneimage list
    ID USER       GROUP      NAME             DATASTORE  SIZE TYPE PER STAT RVMS
     0 mcabrerizo oneadmin   ttylinux - kvm   default     40M OS   No  rdy     0
     1 mcabrerizo oneadmin   Debian 7.1.0 Ne  default    222M CD   No  rdy     0
     2 mcabrerizo oneadmin   Ubuntu 13.04 Se  default    701M CD   No  rdy     0

   ID USER GROUP    NAME      STAT UCPU  UMEM HOST  TIME
  242 x    oneadmin ro01-x.h  runn    0 1024M p.h   28d 03h56
  251 x    oneadmin mini4     runn    0 1024M p.h   27d 18h43
  252 x    oneadmin mini5     runn    0 1024M pl.h  27d 18h43
  253 x    oneadmin mini6     runn    0 1024M pl.h  27d 18h43
  278 x    oneadmin cacti     runn    0 1024M pl.h  26d 16h26

 oneadmin@serv:/var/tmp/one/im/kvm.d$ onehost list
    ID NAME  CLUSTER  RVM  ALLOCATED_CPU    ALLOCATED_MEM       STAT
     0 p.h   -         75  789 / 800 (98%)  40G / 31.4G (127%)  on

 But in Sunstone I don't see the datastores, virtual networks, VMs
 and images.

 On the CLI everything was ok.
 The only problem I had after updating to 4.4.0 was that Sunstone had a
 problem with the sha1 password for the serveradmin user,
 so I used:
 oneuser passwd serveradmin pass --sha1

 What is weird is that I can connect to Sunstone and see the host info, and I
 see all my 70 machines as zombies :(

 Any idea why I can't see datastores, VMs, images and vnets in Sunstone?





 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] NFS datastore file system

2013-12-03 Thread Neelaya Dhatchayani
Hi Jaime,

Thanks. My doubt is: if my frontend is installed on a host called
onedaemon, do I have to set up passwordless SSH from onedaemon to
onedaemon itself? Sorry if my question is silly.

regards
neelaya


On Tue, Dec 3, 2013 at 5:23 PM, Jaime Melis jme...@c12g.com wrote:

 Hi,

 Please reply to the mailing list as well.

 Yes. It is a basic requirement that all the nodes (frontend + hypervisors)
 should have a oneadmin account, and they should be able to ssh
 passwordlessly from any node to any other node.
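
 A quick way to verify it (a sketch; node1 is a placeholder hostname and the
 oneadmin account is assumed to exist on both ends):

   # run as oneadmin on the frontend: this must not prompt for a password
   ssh oneadmin@node1 hostname

   # if it does prompt, distribute the frontend's public key, e.g.:
   ssh-copy-id oneadmin@node1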

 cheers,
 Jaime



 On Tue, Dec 3, 2013 at 12:39 PM, Neelaya Dhatchayani neels.v...@gmail.com
  wrote:

 Hi Jaime,

 Thanks a lot for your reply. I have one more doubt. Do I have to set up
 passwordless SSH to the frontend if I am using the ssh transfer manager? I know
 that it has to be done for the hosts.

 neelaya




 On Tue, Dec 3, 2013 at 4:51 PM, Jaime Melis jme...@c12g.com wrote:

 Hi Neelaya,

 the frontend and the nodes must share /var/lib/one/datastores. Any node
 can export this share, preferably a NAS system, but if you don't have one,
 you can export it from the frontend.
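
 A minimal sketch of exporting it from the frontend (the network range, NFS
 options and hostname are placeholders; adjust them to your environment):

   # on the frontend: export the datastores directory via NFS
   echo '/var/lib/one/datastores 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
   exportfs -ra

   # on each node: mount it at the same path
   mount -t nfs frontend:/var/lib/one/datastores /var/lib/one/datastores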

 cheers,
 Jaime


 On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani 
 neels.v...@gmail.com wrote:

 Hi

 Can anyone tell me what has to be done on the frontend and hosts
 in order to use the shared transfer driver with NFS?

 Thanks in advance
 neelaya

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




 --
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com






 --
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com

 --

 Confidentiality Warning: The information contained in this e-mail and
 any accompanying documents, unless otherwise expressly indicated, is
 confidential and privileged, and is intended solely for the person
 and/or entity to whom it is addressed (i.e. those identified in the
 To and cc box). They are the property of C12G Labs S.L..
 Unauthorized distribution, review, use, disclosure, or copying of this
 communication, or any part thereof, is strictly prohibited and may be
 unlawful. If you have received this e-mail in error, please notify us
 immediately by e-mail at ab...@c12g.com and delete the e-mail and
 attachments and any copy from your system. C12G thanks you for your
 cooperation.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Errors, problems when creating VM using OpenNebula 3.6.0 in Ubuntu 12.04 with iSCSI datastore

2013-12-03 Thread Qiubo Su (David Su)
Dear OpenNebula Community,

1) To create a VM using OpenNebula 3.6.0 on Ubuntu 12.04 with an iSCSI
datastore, I created a Front-end, a NodeServer and an iSCSIHost (an iSCSI target
VM created within the NodeServer; the IP of the iSCSIHost is 192.168.1.7).

The required setup and configuration for the Front-end, NodeServer and iSCSIHost
look fine.

The host, image, vnet and datastore creation under the NodeServer are ok
(except for the VM creation described in 2) below), but when I run oneimage
show 0, I can't see the expected output of

SOURCE: iqn.2013-12.iSCSIHost:192.168.1.7.vg-one.lv-one-13
PATH: /var/lib/images/win-xp.qcow2

and can only see SOURCE: /var/lib/images/win-xp.qcow2.

The OpenNebula 3.6.0 I am using was downloaded more than a year ago. Is it
the correct version of OpenNebula to use?

2) The VM creation failed with error executing image transfer
script: Error cloning NodeServer:/dev//var/lib/image/win-xp-2/qcow2-0.
Below is the vm.log for this VM creation:

Wed Dec  4 17:38:56 2013 [DiM][I]: New VM state is ACTIVE.
Wed Dec  4 17:38:56 2013 [LCM][I]: New VM state is PROLOG.
Wed Dec  4 17:38:56 2013 [TM][I]: Command execution fail:
/var/lib/one/var/remotes/tm/iscsi/clone
Front-end:/var/lib/image/win-xp-2.qcow2
NodeServer:/var/lib/one/var//datastores/0/0/disk.0 0 101
Wed Dec  4 17:38:56 2013 [TM][E]: clone: Command set -e
Wed Dec  4 17:38:56 2013 [TM][I]:
Wed Dec  4 17:38:56 2013 [TM][I]: # get size
Wed Dec  4 17:38:56 2013 [TM][I]: SIZE=$(sudo lvs --noheadings -o lv_size
/dev//var/lib/image/win-xp-2/qcow2)
Wed Dec  4 17:38:56 2013 [TM][I]:
Wed Dec  4 17:38:56 2013 [TM][I]: # create lv
Wed Dec  4 17:38:56 2013 [TM][I]: sudo lvcreate -L${SIZE}
/var/lib/image/win-xp-2 -n qcow2-0
Wed Dec  4 17:38:56 2013 [TM][I]:
Wed Dec  4 17:38:56 2013 [TM][I]: # clone lv with dd
Wed Dec  4 17:38:56 2013 [TM][I]: sudo dd
if=/dev//var/lib/image/win-xp-2/qcow2
of=/dev//var/lib/image/win-xp-2/qcow2-0 bs=64k
Wed Dec  4 17:38:56 2013 [TM][I]:
Wed Dec  4 17:38:56 2013 [TM][I]: # new iscsi target
Wed Dec  4 17:38:56 2013 [TM][I]: TID=$(sudo tgtadm --lld iscsi --op show
--mode target | grep Target | tail -n 1 | awk
'{split($2,tmp,":"); print tmp[1]+1;}')
Wed Dec  4 17:38:56 2013 [TM][I]:
Wed Dec  4 17:38:56 2013 [TM][I]: sudo tgtadm --lld iscsi --op new --mode
target --tid $TID  --targetname Front-end:/var/lib/image/win-xp-2.qcow2-0
Wed Dec  4 17:38:56 2013 [TM][I]: sudo tgtadm --lld iscsi --op bind --mode
target --tid $TID -I ALL
Wed Dec  4 17:38:56 2013 [TM][I]: sudo tgtadm --lld iscsi --op new --mode
logicalunit --tid $TID  --lun 1 --backing-store
/dev//var/lib/image/win-xp-2/qcow2-0
Wed Dec  4 17:38:56 2013 [TM][I]: sudo tgt-admin --dump |sudo tee
/etc/tgt/targets.conf > /dev/null failed: ssh: Could not resolve hostname
/var/lib/image/win-xp-2.qcow2: Name or service not known
Wed Dec  4 17:38:56 2013 [TM][E]: Error cloning
NodeServer:/dev//var/lib/image/win-xp-2/qcow2-0
Wed Dec  4 17:38:56 2013 [TM][I]: ExitCode: 255
Wed Dec  4 17:38:56 2013 [TM][E]: Error executing image transfer script:
Error cloning NodeServer:/dev//var/lib/image/win-xp-2/qcow2-0
Wed Dec  4 17:38:56 2013 [DiM][I]: New VM state is FAILED

3) The iSCSI initiator is only installed on the NodeServer. The tgt package (for
tgt-admin) is only installed on the iSCSIHost, so /etc/tgt/targets.conf
is only on the iSCSIHost.

From the result, it doesn't look like the Front-end and NodeServer can
communicate properly with the iSCSIHost or access the iSCSI storage partition
created under the iSCSIHost target.
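
One way to narrow this down is to check basic iSCSI connectivity by hand (a
diagnostic sketch; 192.168.1.7 is the iSCSIHost IP mentioned above, and the
open-iscsi and tgt tools must be installed where each command is run):

  # from the NodeServer (initiator side): list the targets offered by the iSCSIHost
  iscsiadm -m discovery -t sendtargets -p 192.168.1.7

  # on the iSCSIHost (target side): list the configured targets and LUNs
  tgtadm --lld iscsi --op show --mode target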

It is much appreciated if anyone can help with the above.

Thanks kindly,
Q.S.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] opennebula4.2 and esxi5.1 pls for help

2013-12-03 Thread hansz
I have been trying to install Windows on OpenNebula with ESXi 5.1 for many days,
but I can never create the VMs successfully. If I select a virtual network, the
Windows VM shows a NIC driver mismatch. I also tried following
http://wiki.ieeta.pt/wiki/index.php/OpenNebula#Using_Windows_Images_for_new_Virtual_Machines
, but that guide is for VMs installed on KVM. Could you be kind enough to guide
me? And another headache: the noVNC console in Sunstone cannot enter uppercase
letters; a friend of mine has the same problem. Please help.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org