Re: [one-users] Users Digest, Vol 70, Issue 51

2013-12-12 Thread (unknown)
Maybe a bug.






When I want to edit the "RAW" section of the template, it shows me:


$ onetemplate show 0


.
RAW=[
  DATA="   100
90",
  TYPE="kvm" ]

...




Then when I go to see it in the web UI, it is the same.
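
For reference, a minimal example of how the RAW section is normally written in a
template (the DATA payload is hypervisor-specific XML; the one below is only
illustrative, not the value you had):

RAW = [
  TYPE = "kvm",
  DATA = "<devices><serial type='pty'/></devices>" ]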




-- Original --
From: "users-request"
Date: Tue, Dec 10, 2013 02:07 AM
To: "users"

Subject:  Users Digest, Vol 70, Issue 51



Send Users mailing list submissions to
users@lists.opennebula.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
or, via email, send a message with subject or body 'help' to
users-requ...@lists.opennebula.org

You can reach the person managing the list at
users-ow...@lists.opennebula.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."


Today's Topics:

   1. Re: opennebula 4.4 esxi 5.1 windows vms lose nic driver
  (Tino Vazquez)
   2. Re: VMWare image on ONE 4.4 (Eduardo Roloff)
   3. Re: VMWare image on ONE 4.4 (Tino Vazquez)
   4. Re: VMWare image on ONE 4.4 (Eduardo Roloff)
   5. Re: VM State Pending and Stuck in it (Carlos Martín Sánchez)


--

Message: 1
Date: Mon, 9 Dec 2013 15:44:36 +0100
From: Tino Vazquez 
To: hansz 
Cc: users 
Subject: Re: [one-users] opennebula 4.4 esxi 5.1 windows vms lose nic
driver
Message-ID:

Content-Type: text/plain; charset=UTF-8

Hi,

Without the E1000 model, does the VM boot? If so, does it recognise
the NIC if you install the vmware tools?

This may be of help:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001805

I suggest approaching the problem by creating a Windows VM in VMware
that recognises the NIC, and afterwards creating the same VM through
OpenNebula.
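
For reference, a minimal sketch of how the NIC model is usually forced in the VM
template (standard template syntax; the network name "EsxiNetwork" is taken from
your onevnet output, adjust as needed):

NIC = [
  NETWORK = "EsxiNetwork",
  MODEL   = "e1000" ]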

Regards,

-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino V?zquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G thanks you for your
cooperation.


On Fri, Dec 6, 2013 at 8:47 AM, hansz  wrote:
> hi man,
> Yesterday I installed OpenNebula 4.4 on CentOS, but the Windows 2008 R2 VMs
> also have the NIC driver problem.
> If I add model = e1000, the VMs show a black screen when they start up. The
> following are the network and the template:
> the network
> [oneadmin@nebula ~]$ onevnet show 0 -x
> 
>   0
>   0
>   0
>   oneadmin
>   oneadmin
>   EsxiNetwork
>   
> 1
> 1
> 0
> 0
> 0
> 0
> 0
> 0
> 0
>   
>   -1
>   
>   0
>   VM Network
>   0
>   
>   
>   
>   
>   
> 10.24.101.95
> 10.24.101.103
>   
>   2
>   
> 
> 
> 
>   
>   
> 
>   02:00:0a:18:65:5f
>   10.24.101.95
>   fe80::400:aff:fe18:655f
>   1
>   2
> 
> 
>   02:00:0a:18:65:60
>   10.24.101.96
>   fe80::400:aff:fe18:6560
>   1
>   3
> 
>   
> 
> [oneadmin@nebula ~]$
>
> the template
> [oneadmin@nebula ~]$ onetemplate show 2 -x
> 
>   2
>   0
>   0
>   oneadmin
>   oneadmin
>   windows2
>   
> 1
> 1
> 0
> 0
> 0
> 0
> 0
> 0
> 0
>   
>   1386342845
>   
> 
>   
>   
> 
> 
> 
>   
>   
> 
> 
>   
> 
> 
>   
>   
> 
> 
>   
>   
> 
> 
>   
>   
> 
> 
>   
>   
> 
> 
> 
>   
>   
>   
> 
> 
>   
> 
> 
>   
> 
> [oneadmin@nebula ~]$
>
>
>
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>


--

Message: 2
Date: Mon, 9 Dec 2013 13:09:59 -0200
From: Eduardo Roloff 
To: Tino Vazquez 
Cc: users 
Subject: Re: [one-users] VMWare image on ONE 4.4
Message-ID:

Content-Type: text/plain; charset=ISO-8859-1

Hello Tino,

Here is the output:


  1
  0
  0
  oneadmin
  oneadmin
  default
  
1
1
0
1
0
0
1
0
0
  
  fs
  vmfs
  /var/lib/one/datastores/1
  0
  0
  -1
  
  10079
  5402
  2692
  
4
  
  






  


On 

[one-users] Create VM in Opennebula for OpenVZ failed

2013-12-12 Thread Catalina Quinde
Hi friends,

1. I created a VM in OpenNebula for OpenVZ but it failed; this is its
log:

oneadmin@ubuntuOpNeb:~$ cat /var/log/one/1.log
Fri Dec 13 01:45:56 2013 [DiM][I]: New VM state is ACTIVE.
Fri Dec 13 01:45:56 2013 [LCM][I]: New VM state is PROLOG.
Fri Dec 13 01:46:15 2013 [LCM][I]: New VM state is BOOT
Fri Dec 13 01:46:15 2013 [VMM][I]: Generating deployment file:
/var/lib/one/vms/1/deployment.0
Fri Dec 13 01:46:16 2013 [VMM][I]: ExitCode: 0
Fri Dec 13 01:46:16 2013 [VMM][I]: Successfully execute network driver
operation: pre.
Fri Dec 13 01:46:21 2013 [VMM][I]: Command execution fail: cat << EOT |
/vz/one/scripts/vmm/ovz/deploy '/vz/one/datastores/0/1/deployment.0'
'ubuntu' 1 ubuntu
Fri Dec 13 01:46:21 2013 [VMM][I]: deploy: Executed "/usr/bin/sudo mv
"/var/lib/vz/template/cache/debian-7.0-x86_64.tar.gz"
"/var/lib/vz/template/cache/debian-7.0-x86_64.tar.gz.1386917185" 2>
/dev/null; true".
Fri Dec 13 01:46:21 2013 [VMM][I]: deploy: Executed "/usr/bin/sudo ln -sf
"/vz/one/datastores/0/1/disk.0"
"/var/lib/vz/template/cache/debian-7.0-x86_64.tar.gz"".
Fri Dec 13 01:46:21 2013 [VMM][E]: deploy: Command "/usr/bin/sudo
/usr/sbin/vzctl create 1001 --layout ploop --ostemplate "debian-7.0-x86_64"
--private "/vz/one/datastores/0/1/private" --root
"/vz/one/datastores/0/1/root"" failed.
Fri Dec 13 01:46:21 2013 [VMM][E]: deploy: Can't create directory
/vz/one/datastores/0/1/private.tmp: Permission denied
Fri Dec 13 01:46:21 2013 [VMM][I]: Unable to create directory
/vz/one/datastores/0/1/private.tmp: Permission denied
Fri Dec 13 01:46:21 2013 [VMM][I]: Creation of container private area failed
Fri Dec 13 01:46:21 2013 [VMM][E]: Can't create directory
/vz/one/datastores/0/1/private.tmp: Permission denied
Fri Dec 13 01:46:21 2013 [VMM][E]: Unable to create directory
/vz/one/datastores/0/1/private.tmp: Permission denied
Fri Dec 13 01:46:21 2013 [VMM][E]: Creation of container private area failed
Fri Dec 13 01:46:21 2013 [VMM][E]:
Fri Dec 13 01:46:21 2013 [VMM][I]: ExitCode: 48
Fri Dec 13 01:46:21 2013 [VMM][I]: Failed to execute virtualization driver
operation: deploy.
Fri Dec 13 01:46:21 2013 [VMM][E]: Error deploying virtual machine: Can't
create directory /vz/one/datastores/0/1/private.tmp: Permission denied
Fri Dec 13 01:46:21 2013 [DiM][I]: New VM state is FAILED

2. My template file contains:

NIC=[NETWORK_ID="0"]
OSTEMPLATE="debian-7.0-x86_64"
DISK=[IMAGE_ID="0"]
DISK=[SIZE="512",TYPE="swap"]
CPU="0.01"
VE_LAYOUT="ploop"
RCLOCAL="rc.local"
OS=[ARCH="x86_64"]
CLUSTER_100="100"
VCPU="1"
REQUIREMENTS="CLUSTER_ID=\"100\""
MEMORY="512"
CONTEXT=[SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]",NETWORK="YES"]

3. My datastores list contains:

oneadmin@ubuntuOpNeb:~$ onedatastore list
  ID NAME       SIZE AVAIL CLUSTER  IMAGES TYPE DS   TM
   0 system        -     - -             0 sys  -    ssh
   1 default   15.6G   62% ovz_x64       1 img  fs   ssh
   2 files     15.6G   62% -             0 fil  fs   ssh

Please help me solve this; it is very important.
Thanks, Caty.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] error deploying virtual machine

2013-12-12 Thread Neelaya Dhatchayani
Hi,

I get the following error while deploying a Linux VM in OpenNebula; below is
the template shown by "onevm show 24".

Kindly help me with this.

Fri Dec 13 10:34:42 2013 [DiM][I]: New VM state is ACTIVE.
Fri Dec 13 10:34:42 2013 [LCM][I]: New VM state is PROLOG.
Fri Dec 13 10:34:49 2013 [LCM][I]: New VM state is BOOT
Fri Dec 13 10:34:49 2013 [VMM][I]: Generating deployment file:
/var/lib/one/vms/24/deployment.0
Fri Dec 13 10:34:49 2013 [VMM][I]: ExitCode: 0
Fri Dec 13 10:34:49 2013 [VMM][I]: Successfully execute network driver
operation: pre.
Fri Dec 13 10:35:20 2013 [VMM][I]: Command execution fail: cat << EOT |
/var/tmp/one/vmm/xen3/deploy '/var/lib/one//datastores/0/24/deployment.0'
'onehost2' 24 onehost2
Fri Dec 13 10:35:20 2013 [VMM][I]: *Error: Device 768 (tap) could not be
connected.*
Fri Dec 13 10:35:20 2013 [VMM][I]: *...:/var/lib/one//datastores/0/24/disk.0
does not exist.*
Fri Dec 13 10:35:20 2013 [VMM][E]: Unable
Fri Dec 13 10:35:20 2013 [VMM][I]: ExitCode: 1
Fri Dec 13 10:35:20 2013 [VMM][I]: Failed to execute virtualization driver
operation: deploy.
Fri Dec 13 10:35:20 2013 [VMM][E]: Error deploying virtual machine: Unable
Fri Dec 13 10:35:20 2013 [DiM][I]: New VM state is FAILED


*Template *

[oneadmin@onedaemon ~]$ onevm show 24
VIRTUAL MACHINE 24
INFORMATION
ID  : 24
NAME: vm07
USER: oneadmin
GROUP   : oneadmin
STATE   : FAILED
LCM_STATE   : LCM_INIT
RESCHED : No
START TIME  : 12/13 10:34:24
END TIME: 12/13 10:35:20
DEPLOY ID   : -

VIRTUAL MACHINE
MONITORING
USED CPU: 0
USED MEMORY : 0K
NET_RX  : 0K
NET_TX  : 0K

PERMISSIONS

OWNER   : um-
GROUP   : ---
OTHER   : ---

VM
DISKS

 ID TARGET IMAGE       TYPE SAVE SAVE_AS
  0 hda    PuppyLinux  file   NO -

VIRTUAL MACHINE
HISTORY
SEQ HOST     ACTION REAS START            TIME       PROLOG
  0 onehost2 none   erro 12/13 10:34:42   0d 00h00m  0h00m07s

USER
TEMPLATE
ERROR="Fri Dec 13 10:35:20 2013 : Error deploying virtual machine: Unable"
SCHED_REQUIREMENTS="ID=\"2\""

VIRTUAL MACHINE
TEMPLATE
CONTEXT=[
  DISK_ID="1",
  NETWORK="YES",
  TARGET="hdb" ]
CPU="1"
MEMORY="256"
TEMPLATE_ID="10"
VMID="24"
[oneadmin@onedaemon ~]$


regards
neelaya
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] reg oneflow - service template

2013-12-12 Thread Rajendar K
Hi Carlos,
 Thanks for your mail. My remaining doubts are inline below.


On Tue, Dec 10, 2013 at 2:15 AM, Carlos Martín Sánchez <
cmar...@opennebula.org> wrote:

> Hi,
>
> On Wed, Dec 4, 2013 at 2:40 AM, Rajendar K  wrote:
>
>> Hi Carlos,
>> Thanks for the mail. Here are my queries,
>>
>>
>> On Tue, Dec 3, 2013 at 7:52 PM, Carlos Martín Sánchez <
>> cmar...@opennebula.org> wrote:
>>
>>> Hi,
>>>
>>>  On Mon, Dec 2, 2013 at 4:51 AM, Rajendar K 
>>>  wrote:
>>>
 Hi Carlos,
   I have updated the files [oneflow-templates and
 service] as per your instructions. Hereby specify the output of oneflow

 => Elasticty conditions

  "min_vms": 1,
   "max_vms": 3,
 *  "cooldown": 60, => At what period , this parameter is
 being employed?   *
   "elasticity_policies": [
 {
   "type": "CHANGE",
   "adjust": 1,
   "expression": "CPU < 60",
   "period": 3,
   "period_number": 30,
   "cooldown": 30
 }
   ],
   "scheduled_policies": [

   ]
 }


 Kindly provide detail on how auto-scaling happens in the above
 sample, with respect to "period_number", "period" and "cooldown".

>>>
>>> The period_number, period and cooldown attributes are explained in
>>> detail in the documentation [1].
>>> In your example, the expression CPU < 60 must be true 30 times, each 3
>>> seconds.
>>>
>>
>>
>> *So the scaling should be triggered after 1 minute 30 seconds (30 * 3 seconds
>> = 90 seconds), is that right?*
>>
>> *The log shows that scaling is triggered after 16 minutes,*
>>
>> LOG MESSAGES
>>
>> 12/02/13 10:06 [I] New state: DEPLOYING
>> *12/02/13 10:07 [I] New state: RUNNING*
>> *12/02/13 10:22 [I] Role role1 scaling up from 1 to 2 nodes*
>> 12/02/13 10:22 [I] New state: SCALING
>> 12/02/13 10:23 [I] New state: COOLDOWN
>> 12/02/13 10:23 [I] New state: RUNNING
>> *12/02/13 10:38 [I] Role role1 scaling up from 2 to 3 nodes*
>> 12/02/13 10:38 [I] New state: SCALING
>> 12/02/13 10:39 [I] New state: COOLDOWN
>> 12/02/13 10:39 [I] New state: RUNNING
>>
>
> The role will scale up *if the expression is true* those 90 seconds. When
> the expression is false, the counter is reset.
>
>


*(QUERY 1) How can we know that the counter has been reset? Is there any
log file in which such an entry is made?*
*We can only see the log stating how many times the statement is true,
as seen below:*

*ADJUST  EXPRESSION      EVALS  PERIOD  COOL*
*+ 1     CPU[0.0] < 60   13 /   3s      60s*


*(QUERY 2) Can you specify the time format used for scheduled policy - >
start time?*






>
>>> After the scaling, your service will be in the cooldown period for 30
>>> seconds before returning to running. The only defined policy is overriding
>>> the default cooldown of 60 that you set.
>>>
>>>
>> If my understanding is correct: if we don't specify a cooldown for each
>> role, it takes the default value of "60" from that service-level parameter?
>>
>
> That's right, if you set a default cooldown for the service you can leave
> it unset for the roles.
>
>
> Regards
>
> --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org  | cmar...@opennebula.org
>  | @OpenNebula 
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] NFS datastore file system

2013-12-12 Thread Neelaya Dhatchayani
Hi

I mounted /var/one/lib/datastores on all hosts but I get an error saying cd
/var/one/lib/datastores/0/12: file or directory does not exist.

But when I mount /var/one/lib/datastores/100 the transfer manager script is
executed successfully, yet I still get the same error at the last stage. Here
100 is the datastore I created to add an ISO image.

Please help me with this.

regards
neelaya


On Thu, Dec 5, 2013 at 8:42 PM, Dmitri Chebotarov  wrote:

>  Hi
>
>  The host running ONED needs to ssh to all VM hosts using public key
> (passwordless).
> ONED will use ‘oneadmin’ account to access all VM hosts.
> B/c /var/lib/one is shared (using NFS) between ONED and all hosts you need
> to setup public key auth only once for user ‘oneadmin’.
>
>  Setup your NFS server/filer, login to ONED controller, mount
> /var/lib/one, su – oneadmin;
>
>  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
> chmod 600 ~/.ssh/authorized_keys
>
>  Now, you can ssh to any host as user ‘oneadmin’ using public key.
> Please note, that each host needs /var/lib/one mounted from the same
> location as other hosts and ONED server.
>
>  If you choose to only use NFS for your datastores (i.e.
> /var/lib/one/datastores/) then you need to add the public key on ALL your
> hosts, not just one.
> --
> Thank you,
>
> Dmitri Chebotarov
> VCL Sys Eng, Engineering & Architectural
> Support, TSD - Ent Servers & Messaging
> 223 Aquia Building, Ffx, MSN: 1B5
> Phone: (703) 993-6175 | Fax: (703) 993-3404
>
>
>   From: Neelaya Dhatchayani 
> Date: Tuesday, December 3, 2013 at 23:33
> To: Jaime Melis , opennebula 
> Subject: Re: [one-users] NFS datastore file system
>
>Hi Jaime,
>
>  Thanks. My doubt is: if my front-end is installed on a host called
> onedaemon, do I have to set up passwordless SSH from onedaemon to
> onedaemon? Sorry if my question is silly.
>
>  regards
>  neelaya
>
>
> On Tue, Dec 3, 2013 at 5:23 PM, Jaime Melis  wrote:
>
>>  Hi,
>>
>>  Please reply to the mailing list as well.
>>
>>  Yes. It is a basic requirement that all the nodes (frontend +
>> hypervisors) should have a oneadmin account, and they should be able to ssh
>> passwordlessly from any node to any other node.
>>
>>  cheers,
>> Jaime
>>
>>
>>
>> On Tue, Dec 3, 2013 at 12:39 PM, Neelaya Dhatchayani <
>> neels.v...@gmail.com> wrote:
>>
>>>  Hi Jaime,
>>>
>>>  Thanks a lot for your reply. I have one more doubt: do I have to set up
>>> passwordless SSH to the front-end if I am using the ssh transfer manager? I know
>>> that it has to be done for the hosts.
>>>
>>>  neelaya
>>>
>>>
>>>
>>>
>>> On Tue, Dec 3, 2013 at 4:51 PM, Jaime Melis  wrote:
>>>
 Hi Neelaya,

  the frontend and the nodes must share /var/lib/one/datastores. Any
 node can export this share, preferably a NAS system, but if you don't have
 one, you can export it from the frontend.

  cheers,
 Jaime


  On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani <
 neels.v...@gmail.com> wrote:

>Hi
>
>  Can anyone tell me what has to be done on the frontend and hosts
> inorder to use shared transfer driver and with respect to NFS.
>
>  Thanks in advance
>  neelaya
>
>  ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


  --
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com

  --

  Confidentiality Warning: The information contained in this e-mail and
 any accompanying documents, unless otherwise expressly indicated, is
 confidential and privileged, and is intended solely for the person
 and/or entity to whom it is addressed (i.e. those identified in the
 "To" and "cc" box). They are the property of C12G Labs S.L..
 Unauthorized distribution, review, use, disclosure, or copying of this
 communication, or any part thereof, is strictly prohibited and may be
 unlawful. If you have received this e-mail in error, please notify us
 immediately by e-mail at ab...@c12g.com and delete the e-mail and
 attachments and any copy from your system. C12G's thanks you for your
 cooperation.

>>>
>>>
>>
>>
>>  --
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jme...@c12g.com
>>
>>  --
>>
>>  Confidentiality Warning: The information contained in this e-mail and
>> any accompanying documents, unless otherwise expressly indicated, is
>> confidential and privileged, and is intended solely for the person
>> and/or entity to whom it is addressed (i.e. those identified in the
>> "To" and "cc" box). They are the property of C12G Labs S.L..
>> Unauthorized distribution, review, use, disclosure, or copying of this
>> communication, or any part thereof, is strictly prohibited and may be
>> unlawful. If you have received this e-mail in error

[one-users] Error monitoring host

2013-12-12 Thread (unknown)
Hi,
When I try to create a new host using Sunstone (OpenNebula 3.6), I get the
following error: Error monitoring host 8 : Monitor Failure 8 could not update
remotes.
- Parameters of my host :
Information Manager driver = KVM
Virtual Machine Manager driver =KVM
Network Manager driver : dummy


Any ideas to resolve this problem, please?


Best Regards
--
HASSAN Karim
PhD student in computer science.
Researcher at LATICE Laboratory ENSIT
TUNISIA
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Datastores Opennebula for OpenVZ

2013-12-12 Thread Catalina Quinde
Hi, everybody,

Please can you tell me the correct settings for OpenNebula datastores for
OpenVZ? If you could, send the output of the command "onedatastore list" and the
contents of the file "/etc/exports" that successfully create a VM from OpenNebula
on OpenVZ.

Thanks.

Caty.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] device onebrxxxx already exists, can't create bridge with the same name

2013-12-12 Thread cmcc.dylan

Hi.
  I don't use >= 4.0. But I think the code has a small problem, that is, we
should add a lock for "get_interfaces", not for "create_bridge".




At 2013-12-12 17:05:03,"Jaime Melis"  wrote:

Hi,


Not sure I follow, but given that the rules are idempotent: if the bridge
doesn't exist it will be created, and if it does, it won't be.
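
For illustration only (not the actual driver code), the idea is a check-and-create
serialised under a lock, so concurrent deployments cannot race; the bridge and
lock file names here are made up:

(
  flock -x 200
  brctl show | grep -qw onebr10 || brctl addbr onebr10
) 200>/var/lock/one-bridge.lock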


Have you tried this with ONE >= 4.0 and still fails?


regards,
Jaime



On Thu, Dec 12, 2013 at 4:32 AM, cmcc.dylan  wrote:

Hi, Jaime.
 
  I think the current code hasn't solved the bug completely. The key problem is
that the following snippets are executed in parallel:
class OpenNebulaHM < OpenNebulaNetwork
    XPATH_FILTER = "TEMPLATE/NIC[VLAN='YES']"

    def initialize(vm, deploy_id = nil, hypervisor = nil)
        super(vm, XPATH_FILTER, deploy_id, hypervisor)
        @bridges = get_interfaces
    end
 
so two instances may each think the same bridge name is free, because @bridges is
a ruby instance variable, not a ruby class variable.





At 2013-12-12 01:53:18,"Jaime Melis"  wrote:

Hi,


yes, this is a known bug which is already solved in OpenNebula >= 4.0 by 
implementing locking mechanisms.
http://dev.opennebula.org/issues/1722



cheers,
Jaime







On Wed, Dec 11, 2013 at 9:46 AM, cmcc.dylan  wrote:

Hi everyone!
 
   I found a problem: when we create two or more instances on one host at the
same time, we get the error "device onebr already exists, can't create bridge
with the same name".
   The reason is that the instances all try to create their bridge, although they
check whether their bridge already exists. Because they run at the same time,
they all conclude that their bridge does not exist, and then they create it.
   But by the time they actually create it, the same bridge has already been
created by another instance.
 
Has the problem been fixed now? I use opennebula-3.8.1.
 
   Looking forward to your answers!
 
   dylan.



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org








--

Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com



--


Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G's thanks you for your
cooperation.








--

Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com



--


Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G's thanks you for your
cooperation.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph and thin provision

2013-12-12 Thread Campbell, Bill
Yes, Dumpling supports format 2 images (I think Bobtail 0.56 was the first 
release that did). 

I'll be submitting my modified driver to the development team for 
inclusion/modification (ideally we should be able to select which format we 
want to use, so further modifications would be necessary) and hopefully it 
would be included in the next version. 

In the interim, I can share with you the drivers we are using, but be advised 
this would be UNSUPPORTED by the OpenNebula development/support team. It has 
been working rather well for us though. 

- Original Message -

From: "Kenneth"  
To: "Bill Campbell"  
Cc: users@lists.opennebula.org 
Sent: Thursday, December 12, 2013 9:29:49 AM 
Subject: Re: [one-users] Ceph and thin provision 



This is all good news, and I think it will solve my problem of somewhat slow (a
few minutes) VM deployment; that is, cloning is really time consuming.

Although I really like this RBD format 2, I'm not quite adept yet at how to
implement it in nebula. And my ceph version is dumpling 0.67; does it support
rbd format 2?

If you have any docs, I'd greatly appreciate it. Or rather, I'm willing to wait
a little longer, maybe until the next release of nebula(?), for rbd format 2
to become the default format?
--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/12/2013 09:48 PM, Campbell, Bill wrote: 


Ceph's RBD Format 2 images support the copy-on-write clones/snapshots for quick 
provisioning, where essentially the following happens: 
Snapshot of Image created --> Snapshot protected from deletion --> Clone image 
created from snapshot 
The protected snapshot acts as a base image for the clone, where only the 
additional data is stored in the clone. See more here: 
http://ceph.com/docs/master/rbd/rbd-snapshot/#layering 
For our environment here I have modified the included datastore/tm drivers for 
Ceph to take advantage of these format 2 images/layering for Non-Persistent 
images. It works rather well, and all image functions work appropriately for 
non-persistent images (save as, etc.). One note/requirement is to be using a 
newer Ceph release (recommend Dumpling or newer) and newer versions of 
QEMU/Libvirt (there were some bugs in older releases, but the versions from 
Ubuntu Cloud Archive for 12.04 work fine). I did submit them for improvement 
prior to the 4.0 release, but the simple format 1 images are the default 
currently for OpenNebula. 
I think this would be a good question for the developers. Would creating the 
option for Format 2 images (either in the image template as a parameter or on 
the Datastore as a configuration attribute) and then developing the DS/TM 
drivers further to accommodate this option be worth the effort? I can see use 
cases for both (separate images vs. cloned images having to rely on the base 
image), but cloned images are WAY faster to deploy. 
I have the basic code for format 2 images, I think the logic for looking up the 
parameter/attribute and then applying appropriate action should be rather 
simple. Could collaborate/share if you'd like. 
- Original Message -

From: "Kenneth"  
To: users@lists.opennebula.org 
Sent: Thursday, December 12, 2013 6:11:15 AM 
Subject: Re: [one-users] Ceph and thin provision 


Yes, that is possible. But as I said, all my images were all preallocated as I 
haven't created any image from sunstone. 


--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/12/2013 06:25 PM, Michael wrote: 


This doesn't appear to be the case, I've 2TB of images on Ceph and 380GB 
data reported by Ceph (760G after replication). All of these Ceph images 
were created through the Opennebula Sunstone template GUI.

-Michael

On 12/12/2013 09:11, Kenneth wrote: 


I haven't tried creating a thin or thick provision in ceph rbd from scratch. So 
basically, I can say that a 100GB disk will consume 100GB RBD in ceph (of 
course it will be 200GB in ceph storage since ceph duplicates the disks by 
default). 


___
Users mailing list Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 



___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies. 




NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] upload image error

2013-12-12 Thread Tino Vazquez
Hi,

To upload a VMware image through Sunstone, since it potentially has
more than one file, it has to be contained in a directory. This
directory then needs to be compressed (with tar.gz or bzip2), and then
uploaded to Sunstone.

Is this the process you are currently following?
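
For example, a minimal sketch of the packaging step (file and directory names are
placeholders only):

mkdir windows_vm
cp windows.vmdk windows-flat.vmdk windows_vm/
tar czf windows_vm.tar.gz windows_vm
# then point the Sunstone image upload dialog at windows_vm.tar.gz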

Regards,

-Tino

--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G thanks you for your
cooperation.


On Thu, Dec 12, 2013 at 10:01 AM, hansz  wrote:
> Hi Tino,
> I use OpenNebula 4.4 (CentOS 6.4) and ESXi 5.1 and I followed this guide:
> http://opennebula.org/documentation:rel4.4:vmware_ds
> But when I upload the image through Sunstone, it fails with the error:
> Thu Dec 12 10:53:32 2013 : Error copying image in the datastore: Error
> renaming file /var/lib/one/tmp/75e6961fc99c4392b979ac319a25b27a/ to
> /var/lib/one/tmp/75e6961fc99c4392b979ac319a25b27a/disk.vmdk
>
> I installed libvirt-0.10.2 via yum;
> it can monitor my datastores.
>
> [oneadmin@nebula ~]$ onedatastore show 102
> DATASTORE 102 INFORMATION
> ID : 102
> NAME   : Vos2
> USER   : oneadmin
> GROUP  : oneadmin
> CLUSTER: -
> TYPE   : SYSTEM
> DS_MAD : -
> TM_MAD : vmfs
> BASE PATH  : /vmfs/volumes/102
> DISK_TYPE  : FILE
>
> DATASTORE CAPACITY
> TOTAL: : 199.8G
> FREE:  : 198.8G
> USED:  : 972M
> LIMIT: : -
>
> PERMISSIONS
> OWNER  : um-
> GROUP  : u--
> OTHER  : ---
>
> DATASTORE TEMPLATE
> BRIDGE_LIST="10.24.101.72"
> SHARED="YES"
> TM_MAD="vmfs"
> TYPE="SYSTEM_DS"
>
>
>
> [oneadmin@nebula ~]$ onedatastore show 103
> DATASTORE 103 INFORMATION
> ID : 103
> NAME   : Vimage2
> USER   : oneadmin
> GROUP  : oneadmin
> CLUSTER: -
> TYPE   : IMAGE
> DS_MAD : vmfs
> TM_MAD : vmfs
> BASE PATH  : /vmfs/volumes/103
> DISK_TYPE  : FILE
>
> DATASTORE CAPACITY
> TOTAL: : 302.3G
> FREE:  : 301.3G
> USED:  : 771M
> LIMIT: : -
>
> PERMISSIONS
> OWNER  : um-
> GROUP  : u--
> OTHER  : ---
>
> DATASTORE TEMPLATE
> BRIDGE_LIST="10.24.101.72"
> CLONE_TARGET="SYSTEM"
> DISK_TYPE="FILE"
> DS_MAD="vmfs"
> LN_TARGET="NONE"
> TM_MAD="vmfs"
> TYPE="IMAGE_DS"
>
> IMAGES
> 9
> [oneadmin@nebula ~]$
>
>
>
>
>
>
>
>
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph and thin provision

2013-12-12 Thread Kenneth
 

This is all good news, and I think it will solve my problem of somewhat slow
(a few minutes) VM deployment; that is, cloning is really time consuming.

Although I really like this RBD format 2, I'm not quite adept yet at how to
implement it in nebula. And my ceph version is dumpling 0.67; does it support
rbd format 2?

If you have any docs, I'd greatly appreciate it. Or rather, I'm willing to wait
a little longer, maybe until the next release of nebula(?), for rbd format 2 to
become the default format?
---

Thanks,
Kenneth
Apollo Global Corp.

On
12/12/2013 09:48 PM, Campbell, Bill wrote: 

> Ceph's RBD Format 2
images support the copy-on-write clones/snapshots for quick
provisioning, where essentially the following happens: 
> 
> Snapshot of
Image created --> Snapshot protected from deletion --> Clone image
created from snapshot 
> 
> The protected snapshot acts as a base image
for the clone, where only the additional data is stored in the clone.
See more here: http://ceph.com/docs/master/rbd/rbd-snapshot/#layering
[2] 
> 
> For our environment here I have modified the included
datastore/tm drivers for Ceph to take advantage of these format 2
images/layering for Non-Persistent images. It works rather well, and all
image functions work appropriately for non-persistent images (save as,
etc.). One note/requirement is to be using a newer Ceph release
(recommend Dumpling or newer) and newer versions of QEMU/Libvirt (there
were some bugs in older releases, but the versions from Ubuntu Cloud
Archive for 12.04 work fine). I did submit them for improvement prior to
the 4.0 release, but the simple format 1 images are the default
currently for OpenNebula. 
> 
> I think this would be a good question
for the developers. Would creating the option for Format 2 images
(either in the image template as a parameter or on the Datastore as a
configuration attribute) and then developing the DS/TM drivers further
to accommodate this option be worth the effort? I can see use cases for
both (separate images vs. cloned images having to rely on the base
image), but cloned images are WAY faster to deploy. 
> 
> I have the
basic code for format 2 images, I think the logic for looking up the
parameter/attribute and then applying appropriate action should be
rather simple. Could collaborate/share if you'd like. 
> 
>
-
> 
> FROM: "Kenneth"

> TO: users@lists.opennebula.org
> SENT:
Thursday, December 12, 2013 6:11:15 AM
> SUBJECT: Re: [one-users] Ceph
and thin provision
> 
> Yes, that is possible. But as I said, all my
images were all preallocated as I haven't created any image from
sunstone. 
> 
> ---
> 
> Thanks,
> Kenneth
> Apollo Global Corp.
> 
> On
12/12/2013 06:25 PM, Michael wrote: 
> 
>> This doesn't appear to be the
case, I've 2TB of images on Ceph and 380GB 
>> data reported by Ceph
(760G after replication). All of these Ceph images 
>> were created
through the Opennebula Sunstone template GUI.
>> 
>> -Michael
>> 
>> On
12/12/2013 09:11, Kenneth wrote:
>> 
>>> I haven't tried creating a thin
or thick provision in ceph rbd from scratch. So basically, I can say
that a 100GB disk will consume 100GB RBD in ceph (of course it will be
200GB in ceph storage since ceph duplicates the disks by default).
>>

>> ___
>> Users mailing
list
>> Users@lists.opennebula.org
>>
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org [1]
> 
>
___
> Users mailing list
>
Users@lists.opennebula.org
>
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 
> 
>
NOTICE: PROTECT THE INFORMATION IN THIS MESSAGE IN ACCORDANCE WITH THE
COMPANY'S SECURITY POLICIES. IF YOU RECEIVED THIS MESSAGE IN ERROR,
IMMEDIATELY NOTIFY THE SENDER AND DESTROY ALL COPIES.



Links:
--
[1]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
[2]
http://ceph.com/docs/master/rbd/rbd-snapshot/#layering
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Booting stopped VMs a second time after an error cause problem with CDROM link

2013-12-12 Thread Daniel Dehennin
Javier Fontan  writes:

> This bug is solved in one 4.4:
>
> http://dev.opennebula.org/issues/2462

Thanks a lot, and sorry for the noise. I'll end up subscribing to some
RSS feed to stay aware ;-)

Regards.
-- 
Daniel Dehennin
Récupérer ma clef GPG:
gpg --keyserver pgp.mit.edu --recv-keys 0x7A6FE2DF


pgpp_xjFtvxXi.pgp
Description: PGP signature
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph and thin provision

2013-12-12 Thread Campbell, Bill
Kenneth, 
The allocation of images consuming the total image size looks to be a bug in 
Ceph: 

http://tracker.ceph.com/issues/6257 

They've identified it, but it doesn't look like there's been any movement on it
since the bug was opened.

- Original Message -

From: "Kenneth"  
To: "Mario Giammarco"  
Cc: users@lists.opennebula.org 
Sent: Thursday, December 12, 2013 4:11:17 AM 
Subject: Re: [one-users] Ceph and thin provision 



I haven't tested much with non-persistent images as I have no use for them except
in experiments. Also, I haven't tried any volatile image, sorry.

A not persistent image is writeable? 

Short answer: NO 

Long answer: Yes, sort of. When you instantiate a non-persistent image, nebula
temporarily creates "another disk" in the background. You can check that when
you issue "rbd ls -p one". You'll see something like this:

one-34 ---> this is the non-persistent image disk
one-34-73-0 ---> this is the "temporary clone" of the disk created when you
instantiate a VM
one-34-80-0 ---> another VM which uses the non-persistent image one-34

This is why you can instantiate two or more VMs using a non-persistent image. 
If I'm not mistaken, the temporary disk will be destroyed once you shut down the
VM from nebula sunstone. But as long as the VM is running, the data is there.
You can even reboot the VM with non-persistent disk and still have data. You 
lose the data once Nebula destroys VM disk, that is, when you SHUTDOWN or 
DELETE the VM from nebula sunstone. 

As for thick and thin provision, all of my images in ceph are thick, because my 
base image is 25 GB disk from a KVM template and then I imported it in ceph (it 
was converted from qcow2 to rbd). It consumes whole 25GB on my ceph storage. I 
just clone that "template image" every time I deploy a new VM. 

I haven't tried creating a thin or thick provision in ceph rbd from scratch. So 
basically, I can say that a 100GB disk will consume 100GB RBD in ceph (of 
course it will be 200GB in ceph storage since ceph duplicates the disks by 
default). 
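
(For what it's worth, a common way to check how much data an RBD image actually
occupies, assuming the pool is called "one" as in the examples above and using an
illustrative image name:)

rbd diff one/one-34 | awk '{ sum += $2 } END { print sum/1024/1024 " MB used" }'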
--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/12/2013 04:52 PM, Mario Giammarco wrote: 


In several virtualization systems you can have a virtual disk drive: 

-thick, so a thick disk of 100gb uses 100gb of space; 
-thin, so a thin disk of 100gb uses 0gb when empty and starts using space when 
the virtual machine fills it. 

So I can have a real hdd of 250gb with ten virtual thin disks of 1000gb each
inside, if they are almost empty.
I have checked again and ceph rbd are "thin". 

BTW: thank you for your explanation of persistent/non-persistent; I was not
able to find it in the docs. Can you also explain to me what a "volatile disk" is?
Is a non-persistent image writeable?
When you reboot a VM with a non-persistent image, do you lose all data written to
it?

Thanks again, 
Mario 


2013/12/12 Kenneth < kenn...@apolloglobal.net > 





Hi, 

Can you elaborate more on what you want to achieve? 

If you have a 100GB image and it is set to persistent, you can instantiate that 
image immediately and deploy/live migrate it to any nebula node. Only one 
running instance of VM of this image is allowed. 

If it is a 100GB non-persistent image, you'll have to wait for ceph to "create
a copy" of it once you deploy it. But you can use this image multiple times
simultaneously.
--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/11/2013 07:28 PM, Mario Giammarco wrote: 



Hello, 
I am using ceph with opennebula. 
I have created a 100gb disk image and I do not understand if it is thin or 
thick. 

I hope I can have thin provision. 

Thanks, 
Mario 
___
Users mailing list Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 









___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph and thin provision

2013-12-12 Thread Campbell, Bill
Ceph's RBD Format 2 images support the copy-on-write clones/snapshots for quick 
provisioning, where essentially the following happens: 

Snapshot of Image created --> Snapshot protected from deletion --> Clone image 
created from snapshot 

The protected snapshot acts as a base image for the clone, where only the 
additional data is stored in the clone. See more here: 
http://ceph.com/docs/master/rbd/rbd-snapshot/#layering 
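
For reference, a minimal sketch of that workflow with the plain rbd CLI (pool,
image and snapshot names are illustrative):

rbd create --image-format 2 one/base-image --size 25600
rbd snap create one/base-image@base
rbd snap protect one/base-image@base
rbd clone one/base-image@base one/vm-disk-0
rbd info one/vm-disk-0    # a clone reports its parent snapshot here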


For our environment here I have modified the included datastore/tm drivers for 
Ceph to take advantage of these format 2 images/layering for Non-Persistent 
images. It works rather well, and all image functions work appropriately for 
non-persistent images (save as, etc.). One note/requirement is to be using a 
newer Ceph release (recommend Dumpling or newer) and newer versions of 
QEMU/Libvirt (there were some bugs in older releases, but the versions from 
Ubuntu Cloud Archive for 12.04 work fine). I did submit them for improvement 
prior to the 4.0 release, but the simple format 1 images are the default 
currently for OpenNebula. 

I think this would be a good question for the developers. Would creating the 
option for Format 2 images (either in the image template as a parameter or on 
the Datastore as a configuration attribute) and then developing the DS/TM 
drivers further to accommodate this option be worth the effort? I can see use 
cases for both (separate images vs. cloned images having to rely on the base 
image), but cloned images are WAY faster to deploy. 

I have the basic code for format 2 images, I think the logic for looking up the 
parameter/attribute and then applying appropriate action should be rather 
simple. Could collaborate/share if you'd like. 

- Original Message -

From: "Kenneth"  
To: users@lists.opennebula.org 
Sent: Thursday, December 12, 2013 6:11:15 AM 
Subject: Re: [one-users] Ceph and thin provision 



Yes, that is possible. But as I said, my images were all preallocated, as I
haven't created any image from sunstone.


--- 
Thanks,
Kenneth 
Apollo Global Corp. 


On 12/12/2013 06:25 PM, Michael wrote: 


This doesn't appear to be the case, I've 2TB of images on Ceph and 380GB 
data reported by Ceph (760G after replication). All of these Ceph images 
were created through the Opennebula Sunstone template GUI.

-Michael

On 12/12/2013 09:11, Kenneth wrote: 


I haven't tried creating a thin or thick provision in ceph rbd from scratch. So 
basically, I can say that a 100GB disk will consume 100GB RBD in ceph (of 
course it will be 200GB in ceph storage since ceph duplicates the disks by 
default). 


___
Users mailing list Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 



___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] No system datastore with enough capacity form the VM

2013-12-12 Thread Carlos Martín Sánchez
So the scheduler thinks that the VM needs 100 GB in the system DS instead
of the ceph image DS.
Could you please paste the output of onedatastore show 100 & 104 ?
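
That is, both of:

onedatastore show 100
onedatastore show 104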

Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Wed, Dec 11, 2013 at 8:53 PM, Mario Giammarco wrote:

> Here are some parts of the log, please note that I tried two things:
>
> - clean some requirements of the vm (datastore rank stripping);
> - adding another system datastore (I have one shared and one ssh);
> - please also note that system datastore with id 0 was never used because
> it was not part of the cluster or "public";
> - I have also deleted and recreated VM;
>
> Wed Dec 11 10:02:06 2013 [SCHED][I]: Init Scheduler Log system
> Wed Dec 11 10:02:06 2013 [SCHED][I]: Starting Scheduler Daemon
> 
>  Scheduler Configuration File
> 
> DEFAULT_DS_SCHED=POLICY=1
> DEFAULT_SCHED=POLICY=1
> HYPERVISOR_MEM=0.1
> LIVE_RESCHEDS=0
> LOG=DEBUG_LEVEL=3,SYSTEM=file
> MAX_DISPATCH=30
> MAX_HOST=1
> MAX_VM=5000
> ONED_PORT=2633
> SCHED_INTERVAL=30
> 
> Wed Dec 11 10:02:06 2013 [SCHED][I]: Starting scheduler loop...
> Wed Dec 11 10:02:06 2013 [SCHED][I]: Scheduler loop started.
> Wed Dec 11 10:08:36 2013 [VM][D]: Pending/rescheduling VM and capacity
> requirements:
>   VM  CPU  Memory   System DS  Image DS
> 
>4  100  524288   0  DS 100: 0
> Wed Dec 11 10:08:36 2013 [HOST][D]: Discovered Hosts (enabled):
>  0
> Wed Dec 11 10:08:36 2013 [SCHED][D]: VM 4: Datastore 0 filtered out. It
> does not fulfill SCHED_DS_REQUIREMENTS.
> Wed Dec 11 10:08:36 2013 [SCHED][I]: Scheduling Results:
> Virtual Machine: 4
>
> PRI ID - HOSTS
> 
> 0   0
>
> PRI ID - DATASTORES
> 
> 1   104
>
> Wed Dec 11 10:08:36 2013 [VM][I]: Dispatching VM 4 to host 0 and datastore
> 104
> Wed Dec 11 10:10:35 2013 [VM][D]: Pending/rescheduling VM and capacity
> requirements:
>   VM  CPU  Memory   System DS  Image DS
> 
>5  100  524288   0  DS 100: 0
> Wed Dec 11 10:10:35 2013 [HOST][D]: Discovered Hosts (enabled):
>  0
> Wed Dec 11 10:10:35 2013 [SCHED][D]: VM 5: Datastore 0 filtered out. It
> does not fulfill SCHED_DS_REQUIREMENTS.
> Wed Dec 11 10:10:35 2013 [SCHED][I]: Scheduling Results:
> Virtual Machine: 5
>
> PRI ID - HOSTS
> 
> 0   0
>
> PRI ID - DATASTORES
> 
> 1   104
>
>
> Wed Dec 11 10:10:35 2013 [VM][I]: Dispatching VM 5 to host 0 and datastore
> 104
> Wed Dec 11 11:24:20 2013 [VM][D]: Pending/rescheduling VM and capacity
> requirements:
>   VM  CPU  Memory   System DS  Image DS
> 
>6  100 1048576  102400  DS 100: 0
> Wed Dec 11 11:24:20 2013 [HOST][D]: Discovered Hosts (enabled):
>  0
> Wed Dec 11 11:24:20 2013 [SCHED][D]: VM 6: Datastore 0 filtered out. It
> does not fulfill SCHED_DS_REQUIREMENTS.
> Wed Dec 11 11:24:20 2013 [SCHED][D]: VM 6: Datastore 104 filtered out. Not
> enough capacity.
> Wed Dec 11 11:24:20 2013 [SCHED][I]: Scheduling Results:
>
> Wed Dec 11 11:24:50 2013 [VM][D]: Pending/rescheduling VM and capacity
> requirements:
>   VM  CPU  Memory   System DS  Image DS
> 
>6  100 1048576  102400  DS 100: 0
> Wed Dec 11 11:24:50 2013 [HOST][D]: Discovered Hosts (enabled):
>
> ...
> ...
>
>
>
>
> ed Dec 11 20:51:20 2013 [SCHED][I]: Scheduling
> Results:
>
> Virtual Machine:
> 7
>
>
> PRI ID - HOSTS
> 
> 0   0
>
> PRI ID - DATASTORES
> 
> 0   105
>
>
> Wed Dec 11 20:51:20 2013 [SCHED][D]: VM 7: Local Datastore 105 in Host 0
> filtered out. Not enough capacity.
> Wed Dec 11 20:51:20 2013 [SCHED][I]: VM 7: No suitable System DS found for
> Host: 0. Filtering out host.
> Wed Dec 11 20:51:50 2013 [VM][D]: Pending/rescheduling VM and capacity
> requirements:
>   VM  CPU  Memory   System DS  Image DS
> 
>7  100 1048576  102400  DS 100: 0
> Wed Dec 11 20:51:50 2013 [HOST][D]: Discovered Hosts (enabled):
>  0
> Wed Dec 11 20:51:50 2013 [SCHED][D]: VM 7: Datastore 0 filtered out. It
> does not fulfill SCHED_DS_REQUIREMENTS.
> Wed Dec 11 20:51:50 2013 [SCHED][D]: VM 7: Datastore 104 filtered out. Not
> enough capacity.
> Wed Dec 11 20:51:50 2013 [SCHED][

Re: [one-users] Opennebula 4.4 - System Datastore cannot be used

2013-12-12 Thread Carlos Martín Sánchez
Hi Pascal,

On Thu, Dec 12, 2013 at 12:32 PM, Pascal Petsch 
wrote:
> Hello,
>
> I'm facing a problem when creating a new VM since the update from One 4.2 to
> 4.4.
> The sched.log tells me the following:
>
> Thu Dec 12 12:24:31 2013 [SCHED][D]: VM 573: Local Datastore 109 in Host 9
> filtered out. Not enough capacity.
> Thu Dec 12 12:24:31 2013 [SCHED][I]: VM 573: No suitable System DS found for
> Host: 9. Filtering out host.
>
> Everything worked fine before and there is more than enough space left on
> the disk.

If the DS path does not exist (/var/lib/one/datastores/109), the scheduler
will use the storage reported by the host in /var/lib/one/datastores.

The storage that the scheduler is considering can be seen with   onehost
show 0 -x | grep FREE_DISK.
sched.log should also contain the MB that the VM is requesting from the
system DS.
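
A quick check on the front-end (host 9 and datastore 109 taken from your log;
attribute names may vary slightly between versions):

onehost show 9 -x | grep -E 'FREE_DISK|USED_DISK|MAX_DISK'
onedatastore show 109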

> Shouldn't the directory be created automatically?

The first deployment will create the dir. From that point, the monitored
storage will be reported in the onedatastore output.

Regards.
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Booting stopped VMs a second time after an error cause problem with CDROM link

2013-12-12 Thread Javier Fontan
This bug is solved in one 4.4:

http://dev.opennebula.org/issues/2462
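
In the meantime, a possible manual workaround on 4.2 would be to remove the stale
link on the host before resuming (host and path taken from the log below):

ssh grichka 'rm -f /var/lib/one/datastores/0/1565/disk.1.iso'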

On Thu, Dec 12, 2013 at 10:39 AM, Daniel Dehennin
 wrote:
> Hello,
>
> On ONE 4.2, testing why the stop did not work[1]:
>
> 1. start a non persistent VM
> 2. stop it => successfully copied to frontend datastores/0
> 3. resume the VM, transfer failed => the VM state is back to STOPPED
> 4. fix the transfer problem[2]
> 5. resume the VM =>
>
> [TM][I]: Command execution fail: /var/lib/one/remotes/tm/ssh/context 
> /var/lib/one/vms/1565/context.sh 
> grichka:/var/lib/one//datastores/0/1565/disk.1 1565 0
> [TM][I]: context: Generating context block device at 
> grichka:/var/lib/one//datastores/0/1565/disk.1
> [TM][E]: context: Command "ln -s /var/lib/one/datastores/0/1565/disk.1 
> /var/lib/one/datastores/0/1565/disk.1.iso" failed: ln: unable to create 
> symlink “/var/lib/one/datastores/0/1565/disk.1.iso”: file exists
> [TM][E]: Error creating ISO symbolic link
> [TM][I]: ExitCode: 1
> [TM][E]: Error executing image transfer script: Error creating ISO 
> symbolic link
> [DiM][I]: New VM state is FAILED
>
> Maybe a clean should be done on host before resuming from STOPPED?
>
> Regards.
>
> Footnotes:
> [1]  each node must be able to SSH password-less to them self
>
> [2]  was a host key verification failed
>
> --
> Daniel Dehennin
> Récupérer ma clef GPG:
> gpg --keyserver pgp.mit.edu --recv-keys 0x7A6FE2DF
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Opennebula 4.4 - System Datastore cannot be used

2013-12-12 Thread Pascal Petsch

Hello,

I'm facing a problem when creating a new VM since the update from One 
4.2 to 4.4.

The sched.log tells me the following:

Thu Dec 12 12:24:31 2013 [SCHED][D]: VM 573: Local Datastore 109 in Host 
9 filtered out. Not enough capacity.
Thu Dec 12 12:24:31 2013 [SCHED][I]: VM 573: No suitable System DS found 
for Host: 9. Filtering out host.


Everything worked fine before and there is more than enough space left 
on the disk.

I followed the instructions of the documentation to create a system ds.
This is my configuration of the DS:

DATASTORE 109 INFORMATION
ID : 109
NAME   : system_01
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: Cluster_01
TYPE   : SYSTEM
DS_MAD : -
TM_MAD : shared
BASE PATH  : /var/lib/one/datastores/109
DISK_TYPE  : FILE

...

DATASTORE TEMPLATE
SHARED="YES"
TM_MAD="shared"
TYPE="SYSTEM_DS"

The setup is a single host running one and sunstone.
Shouldn't the directory be created automatically?

I hope you can help me!

Kind Regards

Pascal Petsch

Student Business Information Systems
Pforzheim University
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Services migration

2013-12-12 Thread Javier Fontan
Migration is complete and those services should be working now.

In case something does not work please tell us.

Cheers

On Thu, Dec 12, 2013 at 11:11 AM, Javier Fontan  wrote:
> We are about to migrate some services to a new server and there is
> going to be some downtime. The services that will be unavailable are:
>
> * downloads.opennebula.org: packages and distro repos
> * dev.opennebula.org
> * images from marketplace
>
> They should be ready in a couple of hours. Sorry for the inconveniences.
>
> Cheers
>
> --
> Javier Fontán Muiños
> Developer
> OpenNebula - The Open Source Toolkit for Data Center Virtualization
> www.OpenNebula.org | @OpenNebula | github.com/jfontan



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph and thin provision

2013-12-12 Thread Kenneth
 

Yes, that is possible. But as I said, all my images were all
preallocated as I haven't created any image from sunstone.


---

Thanks,
Kenneth
Apollo Global Corp.

On 12/12/2013 06:25 PM,
Michael wrote: 

> This doesn't appear to be the case, I've 2TB of
images on Ceph and 380GB 
> data reported by Ceph (760G after
replication). All of these Ceph images 
> were created through the
Opennebula Sunstone template GUI.
> 
> -Michael
> 
> On 12/12/2013
09:11, Kenneth wrote:
> 
>> I haven't tried creating a thin or thick
provision in ceph rbd from scratch. So basically, I can say that a 100GB
disk will consume 100GB RBD in ceph (of course it will be 200GB in ceph
storage since ceph duplicates the disks by default).
> 
>
___
> Users mailing list
>
Users@lists.opennebula.org
>
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org [1]



Links:
--
[1]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] NFS datastore file system

2013-12-12 Thread Neelaya Dhatchayani
Hi Jaime,

Sorry i forgot...

And thank you. I solved this problem of ssh to the same host with the home directory
/var/lib/one by setting SELinux to permissive.
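
(For anyone hitting the same issue, that typically amounts to the following on
CentOS/RHEL; alternatively, restoring the SELinux context on the .ssh directory
usually also works without disabling enforcement:)

setenforce 0                          # permissive until the next reboot
restorecon -R -v /var/lib/one/.ssh    # keep SELinux enforcing instead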

Thanks for your help

Regards
neelaya


On Thu, Dec 12, 2013 at 4:17 PM, Neelaya Dhatchayani
wrote:

> Hi Jaime,
>
> Sorry i forgot...
>
> And thank you i solved this problem ssh to same host for the home
> directory /var/lib/one by setting selinux to permissive
>
> Thanks for your help
>
> Regards
> neelaya
>
>
> On Thu, Dec 12, 2013 at 3:36 PM, Jaime Melis  wrote:
>
>> Hi Neelaya,
>>
>> Please reply to the mailing list as well.
>>
>>
>>
>>
>>
>> On Thu, Dec 12, 2013 at 11:02 AM, Neelaya Dhatchayani <
>> neels.v...@gmail.com> wrote:
>>
>>>
>>> i checked for permissions set .ssh to 700, 755 and 777 and also the same
>>> for its files 600, 755, 754,
>>>
>>> i did not check SELinux what is that ??
>>>
>>>
>>>
>>>
>>> On Thu, Dec 12, 2013 at 2:58 PM, Jaime Melis  wrote:
>>>
 Hi Neelaya,

 it might be due to permissions or SELinux, have you checked both?


 On Thu, Dec 12, 2013 at 10:08 AM, Neelaya Dhatchayani <
 neels.v...@gmail.com> wrote:

> Thanks Jaime,
>
> But when my home directory is /var/lib/one my ssh passwordless is not
> working, if it is /home/neelaya then it is workingwhy is it so?? u 
> have
> any idea??
>
> regards
> neelaya
>
>
> On Wed, Dec 11, 2013 at 11:06 PM, Jaime Melis  wrote:
>
>> Hi Neelaya,
>>
>> that's actually a good question. Yes, you need to be able to ssh to
>> the same host.
>>
>> cheers,
>> Jaime
>>
>>
>> On Wed, Dec 4, 2013 at 5:33 AM, Neelaya Dhatchayani <
>> neels.v...@gmail.com> wrote:
>>
>>>  Hi Jaime,
>>>
>>> Thanks. My doubt is if my frontend is installed in a host
>>> called onedaemon, should i ve to ssh passwordless from onedaemon to
>>> onedaemon sorry if my question is silly..
>>>
>>> regards
>>> neelaya
>>>
>>>
>>> On Tue, Dec 3, 2013 at 5:23 PM, Jaime Melis  wrote:
>>>
 Hi,

 Please reply to the mailing list as well.

 Yes. It is a basic requirement that all the nodes (frontend +
 hypervisors) should have a oneadmin account, and they should be able 
 to ssh
 passwordlessly from any node to any other node.

 cheers,
 Jaime



 On Tue, Dec 3, 2013 at 12:39 PM, Neelaya Dhatchayani <
 neels.v...@gmail.com> wrote:

> Hi Jaime,
>
> Thanks a lot for your reply. I have one more doubt. Should I have
> to ssh passwordless to the frontend if I am using ssh transfer 
> manager. I
> know that it has to be done for the hosts.
>
> neelaya
>
>
>
>
> On Tue, Dec 3, 2013 at 4:51 PM, Jaime Melis wrote:
>
>> Hi Neelaya,
>>
>> the frontend and the nodes must share /var/lib/one/datastores.
>> Any node can export this share, preferably a NAS system, but if you 
>> don't
>> have been, you can export it from the frontend.
>>
>> cheers,
>> Jaime
>>
>>
>> On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani <
>> neels.v...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> Can anyone tell me what has to be done on the frontend and hosts
>>> inorder to use shared transfer driver and with respect to NFS.
>>>
>>> Thanks in advance
>>> neelaya
>>>
>>> ___
>>> Users mailing list
>>> Users@lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>
>>
>> --
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jme...@c12g.com
>>

Re: [one-users] Ceph and thin provision

2013-12-12 Thread Michael
This doesn't appear to be the case: I have 2TB of images on Ceph and 380GB of
data reported by Ceph (760GB after replication). All of these Ceph images
were created through the OpenNebula Sunstone template GUI.
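
A hedged way to double-check this on any install (the pool name "one" is the
usual OpenNebula default and may differ on your setup):

    rbd ls -l -p one      # provisioned (virtual) size of every image and clone
    ceph df               # space actually consumed, globally and per pool

If the sum of the provisioned sizes is well above what ceph df reports as
used, the images are effectively thin.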


-Michael

On 12/12/2013 09:11, Kenneth wrote:
I haven't tried creating a thin or thick provision in ceph rbd from 
scratch. So basically, I can say that a 100GB disk will consume 100GB 
RBD in ceph (of course it will be 200GB in ceph storage since ceph 
duplicates the disks by default).


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Services migration

2013-12-12 Thread Javier Fontan
We are about to migrate some services to a new server and there is
going to be some downtime. The services that will be unavailable are:

* downloads.opennebula.org: packages and distro repos
* dev.opennebula.org
* images from marketplace

They should be ready in a couple of hours. Sorry for the inconvenience.

Cheers

-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Snapshots and other.....

2013-12-12 Thread Carlos Martín Sánchez
Hi,

On Wed, Dec 11, 2013 at 4:59 PM, Giancarlo  wrote:

> Hi,
> the VM after is correctly running. But my question is: it's normal that
> the snapshot take 48 minutes and not 21 seconds as indicated in snapshot
> view?
>

I see, I misunderstood your question.
The Scheduled Actions table in Sunstone shows two times:
TIME is the requested time
DONE is when the scheduler actually executes the action. But since the
action is asynchronous, the scheduler doesn't know when the snapshot
process ends, just when it is started.
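
One hedged way to see when the snapshot really finished is to look at the VM
log on the frontend, or to ask libvirt on the KVM host directly (VM ID 9 and
the one-9 domain name are just examples of OpenNebula's naming):

    grep -i snapshot /var/log/one/9.log          # driver messages with timestamps
    virsh -c qemu:///system snapshot-list one-9  # snapshots libvirt knows about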

Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Wed, Dec 11, 2013 at 4:59 PM, Giancarlo  wrote:

>  Hi,
> the VM after is correctly running. But my question is: it's normal that
> the snapshot take 48 minutes and not 21 seconds as indicated in snapshot
> view?
>
>
> Il 11/12/2013 16:17, Carlos Martín Sánchez ha scritto:
>
> Hi,
>
>  That's not looking good, the VM should not go to UNKNOWN after the
> snapshot...
> What is the output of virsh while the VM is in unknown?
>
>  Regards.
>
>   --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | cmar...@opennebula.org | 
> @OpenNebula
>
>
> On Mon, Dec 9, 2013 at 2:24 PM, Giancarlo De Filippis  > wrote:
>
>>  Hi all,
>>
>> i've scheduled a snapshot-creation: (in snapshots view)
>>   *snapshot-create* *9/12/2013 12:00:00* *9/12/2013 12:00:21*
>>
>> It seems terminate after 21 seconds.
>>
>> If i look in VM log i see:
>>
>> Mon Dec 9 *12:48:58 2013 [VMM][I]: VM Snapshot successfully created.*
>> Mon Dec 9 12:48:59 2013 [VMM][I]: VM running but it was not found. Boot
>> and delete actions available or try to recover it manually
>> Mon Dec 9 12:48:59 2013 [LCM][I]: New VM state is UNKNOWN
>> Mon Dec 9 12:49:19 2013 [VMM][I]: VM found again, state is RUNNING
>>
>> . After 48 minutes.
>>
>> How i can check for this delay
>>
>> Image is qcow2, on kvm opennebula 4.4 and a shared storage glusterfs
>> mounted on /var/lib/one/datastores.
>>
>> Another problem: the VM clock has a delay of about 60 minutes... and I
>> see this message in dmesg: Clocksource tsc unstable...
>>
>> Someone can help me Thanks...
>>
>> Cheers
>>
>> Giancarlo
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Booting stopped VMs a second time after an error cause problem with CDROM link

2013-12-12 Thread Daniel Dehennin
Hello,

On ONE 4.2, testing why the stop did not work[1]:

1. start a non persistent VM
2. stop it => successfully copied to frontend datastores/0
3. resume the VM, transfer failed => the VM state is back to STOPPED
4. fix the transfer problem[2]
5. resume the VM =>

[TM][I]: Command execution fail: /var/lib/one/remotes/tm/ssh/context 
/var/lib/one/vms/1565/context.sh grichka:/var/lib/one//datastores/0/1565/disk.1 
1565 0
[TM][I]: context: Generating context block device at 
grichka:/var/lib/one//datastores/0/1565/disk.1
[TM][E]: context: Command "ln -s /var/lib/one/datastores/0/1565/disk.1 
/var/lib/one/datastores/0/1565/disk.1.iso" failed: ln: unable to create symlink 
“/var/lib/one/datastores/0/1565/disk.1.iso”: file exists
[TM][E]: Error creating ISO symbolic link
[TM][I]: ExitCode: 1
[TM][E]: Error executing image transfer script: Error creating ISO symbolic 
link
[DiM][I]: New VM state is FAILED

Maybe a cleanup should be done on the host before resuming from STOPPED?
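
Not the official fix, just a minimal sketch of what would make that step
idempotent inside tm/ssh/context (paths taken from the log above):

    # replace the plain "ln -s" with a forced link...
    ln -sf /var/lib/one/datastores/0/1565/disk.1 /var/lib/one/datastores/0/1565/disk.1.iso
    # ...or guard it so an existing link is left alone
    [ -e /var/lib/one/datastores/0/1565/disk.1.iso ] || \
        ln -s /var/lib/one/datastores/0/1565/disk.1 /var/lib/one/datastores/0/1565/disk.1.iso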

Regards.

Footnotes: 
[1]  each node must be able to SSH password-less to itself

[2]  was a host key verification failed

-- 
Daniel Dehennin
Récupérer ma clef GPG:
gpg --keyserver pgp.mit.edu --recv-keys 0x7A6FE2DF


pgpBH4Dg_fWBV.pgp
Description: PGP signature
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] NFS datastore file system

2013-12-12 Thread Jaime Melis
Hi Neelaya,

it might be due to permissions or SELinux, have you checked both?


On Thu, Dec 12, 2013 at 10:08 AM, Neelaya Dhatchayani
wrote:

> Thanks Jaime,
>
> But when my home directory is /var/lib/one my ssh passwordless is not
> working, if it is /home/neelaya then it is workingwhy is it so?? u have
> any idea??
>
> regards
> neelaya
>
>
> On Wed, Dec 11, 2013 at 11:06 PM, Jaime Melis  wrote:
>
>> Hi Neelaya,
>>
>> that's actually a good question. Yes, you need to be able to ssh to the
>> same host.
>>
>> cheers,
>> Jaime
>>
>>
>> On Wed, Dec 4, 2013 at 5:33 AM, Neelaya Dhatchayani > > wrote:
>>
>>>  Hi Jaime,
>>>
>>> Thanks. My doubt is if my frontend is installed in a host called
>>> onedaemon, should i ve to ssh passwordless from onedaemon to
>>> onedaemon sorry if my question is silly..
>>>
>>> regards
>>> neelaya
>>>
>>>
>>> On Tue, Dec 3, 2013 at 5:23 PM, Jaime Melis  wrote:
>>>
 Hi,

 Please reply to the mailing list as well.

 Yes. It is a basic requirement that all the nodes (frontend +
 hypervisors) should have a oneadmin account, and they should be able to ssh
 passwordlessly from any node to any other node.

 cheers,
 Jaime



 On Tue, Dec 3, 2013 at 12:39 PM, Neelaya Dhatchayani <
 neels.v...@gmail.com> wrote:

> Hi Jaime,
>
> Thanks a lot for your reply. I have one more doubt. Should I have to
> ssh passwordless to the frontend if I am using ssh transfer manager. I 
> know
> that it has to be done for the hosts.
>
> neelaya
>
>
>
>
> On Tue, Dec 3, 2013 at 4:51 PM, Jaime Melis  wrote:
>
>> Hi Neelaya,
>>
>> the frontend and the nodes must share /var/lib/one/datastores. Any
>> node can export this share, preferably a NAS system, but if you don't
>> have one, you can export it from the frontend.
>>
>> cheers,
>> Jaime
>>
>>
>> On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani <
>> neels.v...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> Can anyone tell me what has to be done on the frontend and hosts
>>> inorder to use shared transfer driver and with respect to NFS.
>>>
>>> Thanks in advance
>>> neelaya
>>>
>>> ___
>>> Users mailing list
>>> Users@lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>
>>
>> --
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jme...@c12g.com
>>
>>
>
>


 --
 Jaime Melis
 C12G Labs - Flexible Enterprise Cloud Made Simple
 http://www.c12g.com | jme...@c12g.com


>>>
>>>
>>
>>
>> --
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jme...@c12g.com
>>

Re: [one-users] Ceph and thin provision

2013-12-12 Thread Kenneth
 

I haven't tested much on non-persistent images as I have no use for them
outside of experiments. Also, I haven't tried any volatile image, sorry.

_Is a non-persistent image writeable?_

Short answer: NO

Long answer: Yes, sort of. When you instantiate a non-persistent image,
OpenNebula temporarily creates "another disk" in the background. You can
check this when you issue "rbd ls -p one". You'll see something like this:

one-34        ---> this is the non-persistent image disk
one-34-73-0   ---> this is the "temporary clone" of the disk created when
                   you instantiate a VM
one-34-80-0   ---> another VM which uses the non-persistent image one-34

This is why you can instantiate two or more VMs using a non-persistent
image. If I'm not mistaken, the temporary disk will be destroyed once you
shut down the VM from Sunstone. But as long as the VM is running, the data
is there. You can even reboot a VM with a non-persistent disk and still
have the data. You lose the data once OpenNebula destroys the VM disk,
that is, when you SHUTDOWN or DELETE the VM from Sunstone.

As for thick and thin provisioning, all of my images in Ceph are thick,
because my base image is a 25 GB disk from a KVM template that I imported
into Ceph (it was converted from qcow2 to RBD). It consumes the whole 25GB
on my Ceph storage. I just clone that "template image" every time I deploy
a new VM.

I haven't tried creating a thin or thick provisioned RBD in Ceph from
scratch. So basically, I can say that a 100GB disk will consume 100GB of
RBD in Ceph (of course it will be 200GB in Ceph storage since Ceph
duplicates the disks by default).
---

Thanks,
Kenneth
Apollo Global Corp.
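
For reference, a hedged sketch of the qcow2-to-RBD import described above;
the pool ("one") and image names are illustrative, and the direct rbd: target
assumes qemu-img was built with RBD support:

    # one step, straight into the pool
    qemu-img convert -f qcow2 -O raw template.qcow2 rbd:one/one-template
    # or in two steps with the rbd CLI
    qemu-img convert -f qcow2 -O raw template.qcow2 template.raw
    rbd import template.raw one/one-template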

On 12/12/2013 04:52 PM, Mario Giammarco wrote:

> In several virtualization systems you can have a virtual disk drive:
>
> - thick, so a thick disk of 100gb uses 100gb of space;
> - thin, so a thin disk of 100gb uses 0gb when empty and starts using
>   space when the virtual machine fills it.
>
> So I can have a real hdd of 250gb with ten virtual thin disks of 1000gb
> each inside, if they are almost empty.
> I have checked again and ceph rbd are "thin".
>
> BTW: thank you for your explanation of persistent/non-persistent, I was
> not able to find it in the docs. Can you also explain to me what a
> "volatile disk" is?
> Is a non-persistent image writeable? When you reboot a VM with a
> non-persistent image do you lose all data written to it?
>
> Thanks again,
> Mario
>
> 2013/12/12 Kenneth
>
>> Hi,
>>
>> Can you elaborate more on what you want to achieve?
>>
>> If you have a 100GB image and it is set to persistent, you can
>> instantiate that image immediately and deploy/live migrate it to any
>> nebula node. Only one running VM instance of this image is allowed.
>>
>> If it is a 100GB non-persistent image, you'll have to wait for ceph to
>> "create a copy" of it once you deploy it. But you can use this image
>> multiple times simultaneously.
>> ---
>>
>> Thanks,
>> Kenneth
>> Apollo Global Corp.
>>
>> On 12/11/2013 07:28 PM, Mario Giammarco wrote:
>>
>>> Hello, I am using ceph with opennebula. I have created a 100gb disk
>>> image and I do not understand if it is thin or thick.
>>>
>>> I hope I can have thin provisioning.
>>>
>>> Thanks,
>>> Mario
>>>
>>> ___
>>> Users mailing list
>>> Users@lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] NFS datastore file system

2013-12-12 Thread Neelaya Dhatchayani
Thanks Jaime,

But when my home directory is /var/lib/one my passwordless SSH is not
working; if it is /home/neelaya then it works. Why is that? Do you have
any idea?

regards
neelaya


On Wed, Dec 11, 2013 at 11:06 PM, Jaime Melis  wrote:

> Hi Neelaya,
>
> that's actually a good question. Yes, you need to be able to ssh to the
> same host.
>
> cheers,
> Jaime
>
>
> On Wed, Dec 4, 2013 at 5:33 AM, Neelaya Dhatchayani 
> wrote:
>
>> Hi Jaime,
>>
>> Thanks. My doubt is if my frontend is installed in a host called
>> onedaemon, should i ve to ssh passwordless from onedaemon to
>> onedaemon sorry if my question is silly..
>>
>> regards
>> neelaya
>>
>>
>> On Tue, Dec 3, 2013 at 5:23 PM, Jaime Melis  wrote:
>>
>>> Hi,
>>>
>>> Please reply to the mailing list as well.
>>>
>>> Yes. It is a basic requirement that all the nodes (frontend +
>>> hypervisors) should have a oneadmin account, and they should be able to ssh
>>> passwordlessly from any node to any other node.
>>>
>>> cheers,
>>> Jaime
>>>
>>>
>>>
>>> On Tue, Dec 3, 2013 at 12:39 PM, Neelaya Dhatchayani <
>>> neels.v...@gmail.com> wrote:
>>>
 Hi Jaime,

 Thanks a lot for your reply. I have one more doubt. Should I have to
 ssh passwordless to the frontend if I am using ssh transfer manager. I know
 that it has to be done for the hosts.

 neelaya




 On Tue, Dec 3, 2013 at 4:51 PM, Jaime Melis  wrote:

> Hi Neelaya,
>
> the frontend and the nodes must share /var/lib/one/datastores. Any
> node can export this share, preferably a NAS system, but if you don't have
> one, you can export it from the frontend.
>
> cheers,
> Jaime
>
>
> On Tue, Dec 3, 2013 at 12:16 PM, Neelaya Dhatchayani <
> neels.v...@gmail.com> wrote:
>
>> Hi
>>
>> Can anyone tell me what has to be done on the frontend and hosts
>> inorder to use shared transfer driver and with respect to NFS.
>>
>> Thanks in advance
>> neelaya
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>
>
> --
> Jaime Melis
> C12G Labs - Flexible Enterprise Cloud Made Simple
> http://www.c12g.com | jme...@c12g.com
>
>


>>>
>>>
>>> --
>>> Jaime Melis
>>> C12G Labs - Flexible Enterprise Cloud Made Simple
>>> http://www.c12g.com | jme...@c12g.com
>>>
>>>
>>
>>
>
>
> --
> Jaime Melis
> C12G Labs - Flexible Enterprise Cloud Made Simple
> http://www.c12g.com | jme...@c12g.com
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

Re: [one-users] tap:aio error

2013-12-12 Thread Neelaya Dhatchayani
hi jaime

this is the template of vm

VIRTUAL MACHINE
TEMPLATE
CONTEXT=[
  DISK_ID="1",
  NETWORK="YES",
  TARGET="hdb" ]
CPU="1"
MEMORY="256"
TEMPLATE_ID="7"
VMID="18"

This is all the template information I am getting.
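
For reference, given VMID="18" above, the full template (including the DISK
section, which is where the disk driver such as tap:aio would show up) should
come from something like:

    onevm show 18        # full VM information
    onevm show 18 -x     # same, as XML (easier to paste whole)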

regards
neelaya



On Wed, Dec 11, 2013 at 11:07 PM, Jaime Melis  wrote:

> Hi Neelaya,
>
> can you please send us the whole vm template? onevm show 9
>
> cheers,
> Jaime
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] device onebrxxxx alreay exists can't create bridge with the same name

2013-12-12 Thread Jaime Melis
Hi,

not sure I follow, but given that the rules are idempotent if the bridge
doesn't exist it will be created, and if it does, it won't.

Have you tried this with ONE >= 4.0 and still fails?
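
For the 3.8 series, a minimal sketch of the kind of serialization 4.0 adds;
the bridge name and lock file are illustrative, and this is not the actual
OpenNebula patch (which locks inside the network driver), just the idea:

    # serialize bridge creation on the host so concurrent deployments don't race
    (
      flock -x 200
      brctl show | grep -qw onebr10 || brctl addbr onebr10
    ) 200>/var/lock/one-vnm-bridge.lock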

regards,
Jaime


On Thu, Dec 12, 2013 at 4:32 AM, cmcc.dylan  wrote:

> Hi,Jainme.
>
>   I think the current code hasn't solved the bug completely. The key
> problem is that the following snippets are executed in parallel:
> class OpenNebulaHM < OpenNebulaNetwork
>   XPATH_FILTER = "TEMPLATE/NIC[VLAN='YES']"
>
>   def initialize(vm, deploy_id = nil, hypervisor = nil)
>     super(vm, XPATH_FILTER, deploy_id, hypervisor)
>     @bridges = get_interfaces
>   end
>
> so each run's bridges variable may end up with the same bridge name, because
> @bridges is a Ruby instance variable, not a Ruby class variable.
>
>
>
> At 2013-12-12 01:53:18,"Jaime Melis"  wrote:
>
> Hi,
>
> yes, this is a known bug which is already solved in OpenNebula >= 4.0 by
> implementing locking mechanisms.
> http://dev.opennebula.org/issues/1722
>
> cheers,
> Jaime
>
>
>
>
> On Wed, Dec 11, 2013 at 9:46 AM, cmcc.dylan  wrote:
>
>>  Hi everyone!
>>
>>I hit a problem when we create two or more instances on one host at
>> the same time: we get the error "device onebr already exists, can't
>> create bridge with the same name".
>>The reason is that the instances all try to create their bridge. Although
>> they check whether their bridge already exists, because this happens at the
>> same time they all conclude that the bridge does not exist yet, and then
>> they create it.
>>But by the time they actually create it, the same bridge has already been
>> created by another instance.
>>
>> Has the problem been fixed now? I use opennebula-3.8.1.
>>
>>Look forward your answers!
>>
>>dylan.
>>
>>
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>
>
> --
> Jaime Melis
> C12G Labs - Flexible Enterprise Cloud Made Simple
> http://www.c12g.com | jme...@c12g.com
>
>
>
>
>


-- 
Jaime Melis
C12G Labs - Flexible Enterprise Cloud Made Simple
http://www.c12g.com | jme...@c12g.com

--

Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G thanks you for your
cooperation.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] upload image error

2013-12-12 Thread hansz
Hi Tino,
I use OpenNebula 4.4 (CentOS 6.4) and ESXi 5.1, and I followed this guide:
http://opennebula.org/documentation:rel4.4:vmware_ds
but when I upload the image through Sunstone it fails with this error:
Thu Dec 12 10:53:32 2013 : Error copying image in the datastore: Error renaming 
file /var/lib/one/tmp/75e6961fc99c4392b979ac319a25b27a/ to 
/var/lib/one/tmp/75e6961fc99c4392b979ac319a25b27a/disk.vmdk
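
Not a fix, but a hedged way to narrow this down: check that the user running
Sunstone/oned can write in the temporary upload area and that the filesystem
is not full (the hash directory below is the one from the error above):

    ls -ld /var/lib/one/tmp /var/lib/one/tmp/75e6961fc99c4392b979ac319a25b27a
    ls -l  /var/lib/one/tmp/75e6961fc99c4392b979ac319a25b27a/
    df -h  /var/lib/one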


I installed libvirt-0.10.2 via yum, and it can monitor my datastores:


[oneadmin@nebula ~]$ onedatastore show 102
DATASTORE 102 INFORMATION   
ID : 102 
NAME   : Vos2
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: -   
TYPE   : SYSTEM  
DS_MAD : -   
TM_MAD : vmfs
BASE PATH  : /vmfs/volumes/102   
DISK_TYPE  : FILE


DATASTORE CAPACITY  
TOTAL: : 199.8G  
FREE:  : 198.8G  
USED:  : 972M
LIMIT: : -   


PERMISSIONS 
OWNER  : um- 
GROUP  : u-- 
OTHER  : --- 


DATASTORE TEMPLATE  
BRIDGE_LIST="10.24.101.72"
SHARED="YES"
TM_MAD="vmfs"
TYPE="SYSTEM_DS"






[oneadmin@nebula ~]$ onedatastore show 103
DATASTORE 103 INFORMATION   
ID : 103 
NAME   : Vimage2 
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: -   
TYPE   : IMAGE   
DS_MAD : vmfs
TM_MAD : vmfs
BASE PATH  : /vmfs/volumes/103   
DISK_TYPE  : FILE


DATASTORE CAPACITY  
TOTAL: : 302.3G  
FREE:  : 301.3G  
USED:  : 771M
LIMIT: : -   


PERMISSIONS 
OWNER  : um- 
GROUP  : u-- 
OTHER  : --- 


DATASTORE TEMPLATE  
BRIDGE_LIST="10.24.101.72"
CLONE_TARGET="SYSTEM"
DISK_TYPE="FILE"
DS_MAD="vmfs"
LN_TARGET="NONE"
TM_MAD="vmfs"
TYPE="IMAGE_DS"


IMAGES 
9  
[oneadmin@nebula ~]$








___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph and thin provision

2013-12-12 Thread Mario Giammarco
In several virtualization systems you can have a virtual disk drive:

-thick, so a thick disk of 100gb uses 100gb of space;
-thin,  so a thin disk of 100gb uses 0gb when empty and starts using space
when the virtual machine fills it.

So I can have a real hdd of 250gb with ten virtual thin disks of 1000gb each
inside, if they are almost empty.

I have checked again and ceph rbd are "thin".
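
A hedged way to see this from scratch (the pool and image name are just
examples):

    rbd create --size 102400 one/thin-test   # 100 GB image, allocated lazily
    rbd info one/thin-test                   # reports the full 100 GB virtual size
    ceph df                                  # used space only grows as data is written
    rbd rm one/thin-test                     # clean up the test image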

BTW: thank you for your explanation of persistent/non-persistent, I was not
able to find it in the docs. Can you also explain to me what a "volatile disk"
is?
Is a non-persistent image writeable?
When you reboot a VM with a non-persistent image do you lose all data written
to it?

Thanks again,
Mario


2013/12/12 Kenneth 

>  Hi,
>
> Can you elaborate more on what you want to achieve?
>
> If you have a 100GB image and it is set to persistent, you can instantiate
> that image immediately and deploy/live migrate it to any nebula node. Only
> one running instance of VM of this image is allowed.
>
> If it is a 100GB non persistent image, you'll have to wait for ceph to
> "create a copy" of it once you deploy it. But you can use this image
> multiple times simultaneously.
> ---
>
> Thanks,
> Kenneth
> Apollo Global Corp.
>
>  On 12/11/2013 07:28 PM, Mario Giammarco wrote:
>
>   Hello,
> I am using ceph with opennebula.
> I have created a 100gb disk image and I do not understand if it is thin or
> thick.
>
> I hope I can have thin provision.
>
> Thanks,
> Mario
>
> ___
> Users mailing 
> listUsers@lists.opennebula.orghttp://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org