Re: [one-users] openvz+opennebula vm boot failure

2015-02-15 Thread Steven C Timm

On the VM host there should be a file called deployment.0.
What's in there?

Steve Timm


From: Users [users-boun...@lists.opennebula.org] on behalf of Роман Мальцев 
[neo...@mail.ru]
Sent: Sunday, February 15, 2015 3:23 PM
To: users@lists.opennebula.org
Subject: [one-users] openvz+opennebula vm boot failure

Hello, everyone.

I am shouting for help!

I created my own cloud following this guide (written for OpenNebula
4.2): https://bitbucket.org/hpcc_kpi/opennebula-openvz/wiki/Home , but I
used OpenNebula 4.10.1.
I ran into trouble with my VM after creation.
The trouble is that the VM can't boot:

Mon Jan 26 01:37:29 2015 [Z0][DiM][I]: New VM state is ACTIVE.
Mon Jan 26 01:37:29 2015 [Z0][LCM][I]: New VM state is PROLOG.
Mon Jan 26 01:37:43 2015 [Z0][LCM][I]: New VM state is BOOT
Mon Jan 26 01:37:43 2015 [Z0][VMM][I]: Generating deployment file: 
/var/lib/one/vms/8/deployment.0
Mon Jan 26 01:37:43 2015 [Z0][VMM][I]: ExitCode: 0
Mon Jan 26 01:37:43 2015 [Z0][VMM][I]: Successfully execute network driver 
operation: pre.
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: Command execution fail: cat << EOT | 
/vz/one/scripts/vmm/ovz/deploy '/vz/one/datastores/0/8/deployment.0' 
'vm175.jinr.ru' 8 vm175.jinr.ru
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: 
/usr/lib/ruby/gems/1.8/gems/xml-mapping-0.10.0/lib/xml/mapping/base.rb:683:in 
`xml_to_obj': no value, and no default value: Attribute vmid not set 
(XXPathError: path not found: VMID) (XML::MappingError)
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from 
/usr/lib/ruby/gems/1.8/gems/xml-mapping-0.10.0/lib/xml/mapping/base.rb:186:in 
`fill_from_xml'
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from 
/usr/lib/ruby/gems/1.8/gems/xml-mapping-0.10.0/lib/xml/mapping/base.rb:185:in 
`each'
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from 
/usr/lib/ruby/gems/1.8/gems/xml-mapping-0.10.0/lib/xml/mapping/base.rb:185:in 
`fill_from_xml'
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from 
/usr/lib/ruby/gems/1.8/gems/xml-mapping-0.10.0/lib/xml/mapping/base.rb:362:in 
`load_from_xml'
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from 
/vz/one/scripts/vmm/ovz/open_vz_data.rb:28:in `load_from_stream'
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from 
/vz/one/scripts/vmm/ovz/open_vz_data.rb:75:in `new'
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: from /vz/one/scripts/vmm/ovz/deploy:24
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: ExitCode: 1
Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: Failed to execute virtualization driver 
operation: deploy.
Mon Jan 26 01:38:31 2015 [Z0][VMM][E]: Error deploying virtual machine
Mon Jan 26 01:38:31 2015 [Z0][DiM][I]: New VM state is FAILED

The following line looks really strange:

Mon Jan 26 01:38:31 2015 [Z0][VMM][I]: 
/usr/lib/ruby/gems/1.8/gems/xml-mapping-0.10.0/lib/xml/mapping/base.rb:683:in 
`xml_to_obj': no value, and no default value: Attribute vmid not set 
(XXPathError: path not found: VMID) (XML::MappingError)

I don't understand why OpenNebula can't read the VMID field; it isn't empty:
[root@vm127 one]# onevm show 8 | tail -1
VMID=8

What am I supposed to do?
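(A quick way to check that, assuming the paths from the log above; note
that onevm show reads the database, while the driver parses the
deployment file on the host:)

# On the VM host:
grep -c VMID /vz/one/datastores/0/8/deployment.0

If that prints 0, the file handed to /vz/one/scripts/vmm/ovz/deploy has
no VMID element, which would produce exactly the XXPathError above.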

--
Best Regards,
Maltcev Roman.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] clusters in 4.8

2015-02-13 Thread Steven C Timm
I know if I just take the vnet and the datastore out of the cluster, and
have no clusters at all, then everything will work. I was hoping to have
a cluster structure of (host, vnet) pairings that could all share a
common datastore. However, from the documentation, it looks like if your
template requests any resource that is part of a cluster (a vnet, or an
image from a datastore), then the scheduler constrains you to resources
that are part of that same cluster.

Is that correct?

Steve Timm


From: Ruben S. Montero [rsmont...@opennebula.org]
Sent: Friday, February 13, 2015 3:11 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] clusters in 4.8

Hi

If both clusters have access to the same datastores, just move them out
of the first cluster. When a datastore or network is not assigned to
any cluster (the default cluster), OpenNebula assumes it can be used
with any host, no matter which cluster that host is in.
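(For example, something along these lines; the subcommands are from the
onecluster CLI, and the IDs are placeholders:)

onecluster deldatastore <cluster_id> <datastore_id>
onecluster delvnet <cluster_id> <vnet_id>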

BTW, although you do not need it for your use case, 4.12 will come with
extended VDC support to create complex provision scenarios. Basically
you can define generic resource providers that aggregate any resource
(cluster, host, network, datastores). More here:
http://opennebula.org/4-12-features-virtual-data-center-redesign/

Cheers

On Fri, Feb 13, 2015 at 6:37 PM, Steven Timm t...@fnal.gov wrote:

 I have had my one4.8 host up for a while with a single cluster
 that has 150 hosts, one vnet, and a system and image datastore.

 I am now adding hosts from a different vnet.
 I want to make a second host + vnet cluster but still use
 the same system and image datastores.

 What's the right way to do that? Just remove the datastores
 from the first cluster? They can't be in more than one
 cluster at a time, can they?


 Thanks for any suggestions.

 Steve Timm


 --
 Steven C. Timm, Ph.D  (630) 840-8525
 t...@fnal.gov  http://home.fnal.gov/~timm/
 Office:  Wilson Hall room 804
 Fermilab Scientific Computing Division,
 Scientific Computing Facilities Quadrant.,
 Experimental Computing Facilities Dept.,
 Project Lead for Virtual Facility Project.





--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] clusters in 4.8

2015-02-13 Thread Steven C Timm
OK here we go:

The VM in question is taking an image from image store 102 (currently in
no cluster) and vnet 0 "routable-private" from cluster 100 "cloudworker";
a number of hosts, including hosts 0 and 2, are also part of cluster
"cloudworker".

The VM stays pending forever; the hold reason is below. It is requiring
that the cluster ID be 100.

The same image, same datastore, and same vnet outside of the cluster
work just fine.

It seems like if I require any resource from the cluster, in this case a
vnet, then all resources have to be in the cluster. Am I missing
something?

Steve Timm



[root@fclheadgpvm01 one]# onevm show 1054 | more
VIRTUAL MACHINE 1054 INFORMATION
ID  : 1054
NAME: CLI_PRIV_SLF6Vanilla-1054
USER: oneadmin
GROUP   : oneadmin
STATE   : PENDING 
LCM_STATE   : LCM_INIT
RESCHED : No  
START TIME  : 02/13 17:44:52  
END TIME: -   
DEPLOY ID   : -   

VIRTUAL MACHINE MONITORING  
NET_RX  : 0K  
USED MEMORY : 0K  
USED CPU: 0   
NET_TX  : 0K  

PERMISSIONS 
OWNER   : um- 
GROUP   : --- 
OTHER   : --- 

VM DISKS
 ID TARGET IMAGE        TYPE SAVE SAVE_AS
  0 vda    SLF6Vanilla  file NO   -

VM NICS 
 ID NETWORK  VLAN BRIDGE   IP  MAC  
  0 routable-private   no br1  10.128.1.9  54:52:00:02:0d:09

USER TEMPLATE   
NPTYPE=NPERNLM
SCHED_MESSAGE=Fri Feb 13 17:46:29 2015 : No system datastore meets
SCHED_DS_REQUIREMENTS: CLUSTER_ID = 100 & !(PUBLIC_CLOUD = YES)
SCHED_RANK=FREE_MEM
SCHED_REQUIREMENTS=HYPERVISOR=\"kvm\" & HOSTNAME=\"cloudworker*\"

VIRTUAL MACHINE TEMPLATE
AUTOMATIC_REQUIREMENTS=CLUSTER_ID = 100 & !(PUBLIC_CLOUD = YES)
CONTEXT=[
  CTX_USER=PFVTRVI+PElEPjA8L0lEPjxHSUQ+MDwvR0lEPjxHUk9VUFM+PElEPjA8L0lEPjwvR1JP
VVBTPjxHTkFNRT5vbmVhZG1pbjwvR05BTUU+PE5BTUU+b25lYWRtaW48L05BTUU+PFBBU1NXT1JEPi9E
Qz1jb20vREM9RGlnaUNlcnQtR3JpZC9PPU9wZW5cMjBTY2llbmNlXDIwR3JpZC9PVT1TZXJ2aWNlcy9D
Tj1mY2xoZWFkZ3B2bTAxLmZuYWwuZ292PC9QQVNTV09SRD48QVVUSF9EUklWRVI+eDUwOTwvQVVUSF9E
UklWRVI+PEVOQUJMRUQ+MTwvRU5BQkxFRD48VEVNUExBVEU+PFRPS0VOX1BBU1NXT1JEPjwhW0NEQVRB
[root@fclheadgpvm01 one]# onedatastore list
  ID NAME          SIZE  AVAIL CLUSTER IMAGES TYPE DS  TM
   0 system          0M      - -            0 sys  -   shared
   1 default      21.2G    85% -            0 img  fs  shared
   2 files        21.2G    85% -            0 fil  fs  ssh
 100 localnode        -      - -            0 sys  -   ssh
 102 cloud_images   20T    75% -            2 img  fs  shared
[root@fclheadgpvm01 one]# onevnet list
  ID USER     GROUP    NAME             CLUSTER    BRIDGE LEASES
   0 oneadmin oneadmin routable-private cloudworke br1         8
   2 oneadmin oneadmin DynamicIP        -          br0        13
   3 oneadmin oneadmin StaticIP         -          br0         0
[root@fclheadgpvm01 one]# onehost list | more
  ID NAME            CLUSTER   RVM ALLOCATED_CPU    ALLOCATED_MEM      STAT
   0 cloudworker1200 cloudwork   4 400 / 800 (50%)  7.4G / 15.6G (47%) on
   1 cloudworker1201 cloudwork   0             -                 -     off
   2 cloudworker1202 cloudwork   0   0 / 800 (0%)     0K / 15.6G (0%)  on

From: Ruben S. Montero [rsmont...@opennebula.org]
Sent: Friday, February 13, 2015 4:49 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] clusters in 4.8

Yes, you can do:

Cluster A: Host_A0, Host_A1...  + VNET_A0, VNET_A1...
Cluster B: HostB0, HostB1... + VNET_B0, VNET_B1...
Cluster Default: DS, DS_System

Then a VM that uses VNET_A0 + DS would be scheduled to Cluster A. Note
that using VNET_A0 constrains the VM to resources from Cluster A +
Cluster Default.

Cheers

Ruben


Re: [one-users] clusters in 4.8

2015-02-13 Thread Steven C Timm
One more followup:

host 156 + vnet2 + ds 100/102, all outside the cluster: no problem
host 156 + vnet2 + ds 100/102, all in the cluster: no problem

host 156 and vnet2 in the cluster, DS outside of the cluster: problem.
SCHED_MESSAGE=Fri Feb 13 18:06:29 2015 : No system datastore meets
SCHED_DS_REQUIREMENTS: CLUSTER_ID = 101 & !(PUBLIC_CLOUD = YES)

host 156 in the cluster, vnet2 and DS out of the cluster:
no error message, but it never matches either.

Fri Feb 13 18:24:29 2015 [Z0][HOST][D]: Discovered Hosts (enabled):
 0 2 156
Fri Feb 13 18:24:29 2015 [Z0][SCHED][D]: VM 1058: Host 0 filtered out. It does 
not fulfill SCHED_REQUIREMENTS.
Fri Feb 13 18:24:29 2015 [Z0][SCHED][D]: VM 1058: Host 2 filtered out. It does 
not fulfill SCHED_REQUIREMENTS.
Fri Feb 13 18:24:29 2015 [Z0][SCHED][I]: Scheduling Results:
Virtual Machine: 1058

PRI ID - HOSTS

1   156

PRI ID - DATASTORES

0   100
0   0


Fri Feb 13 18:24:29 2015 [Z0][SCHED][I]: VM 1058: No suitable System DS found 
for Host: 156. Filtering out host.
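(One workaround consistent with that scheduler message, assuming cluster
101 and system datastore 100 from the lines above; a sketch, not
something confirmed in this thread:)

onecluster adddatastore 101 100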

Steve Timm




Re: [one-users] clusters in 4.8

2015-02-13 Thread Steven C Timm
PS: if there are other VMs still launched and running from the time
when the datastore used to be part of a cluster, could that confuse
anything? Do I have to restart oned to clear anything up?

Steve Timm



Re: [one-users] @installation problem

2015-02-06 Thread Steven C Timm
Which linux distribution are you running on?

Steve


From: Users [users-boun...@lists.opennebula.org] on behalf of anagha b 
[banag...@gmail.com]
Sent: Friday, February 06, 2015 12:42 PM
To: Users@lists.opennebula.org
Subject: [one-users] @installation problem

Hi,

I am trying to install opennebula-3.8.3 and got the following error:

oneadmin@x:~/opennebula-3.8.3$ scons sqlite=no mysql=yes
scons: Reading SConscript files ...
Testing recipe: pkg-config
  Error calling pkg-config xmlrpc_server_abyss++ --static --libs
Testing recipe: xmlrpc-c-config
/usr/lib/ruby/1.8/fileutils.rb:243:in `mkdir': Permission denied - .xmlrpc_test 
(Errno::EACCES)
from /usr/lib/ruby/1.8/fileutils.rb:243:in `fu_mkdir'
from /usr/lib/ruby/1.8/fileutils.rb:217:in `mkdir_p'
from /usr/lib/ruby/1.8/fileutils.rb:215:in `reverse_each'
from /usr/lib/ruby/1.8/fileutils.rb:215:in `mkdir_p'
from /usr/lib/ruby/1.8/fileutils.rb:201:in `each'
from /usr/lib/ruby/1.8/fileutils.rb:201:in `mkdir_p'
from share/scons/get_xmlrpc_config:209:in `gen_test_file'
from share/scons/get_xmlrpc_config:225:in `test_config'
from share/scons/get_xmlrpc_config:240:in `search_config'
from share/scons/get_xmlrpc_config:239:in `each'
from share/scons/get_xmlrpc_config:239:in `search_config'
from share/scons/get_xmlrpc_config:251

Error searching for xmlrpc-c libraries. Please check this things:

 * You have installed development libraries for xmlrpc-c. One way to check
   this is calling xmlrpc-c-config that is provided with the development
   package.
 * Check that the version of xmlrpc-c is at least 1.06. You can do this also
   calling:
   $ xmlrpc-c-config --version
 * If all this requirements are already met please send log files located in
   .xmlrpc_test to the mailing list.
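(The Errno::EACCES above is a mkdir failing inside the source tree
itself; a quick sanity check, assuming the build runs from
~/opennebula-3.8.3, is to confirm the directory is writable by the
invoking user:)

ls -ld ~/opennebula-3.8.3
touch ~/opennebula-3.8.3/.write_probe && rm ~/opennebula-3.8.3/.write_probe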



plz help
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Thin provisioning and qcow2

2015-02-02 Thread Steven C Timm
This is my datastore definition for my qcow2-based image store.
I am not using the qcow2 TM_MAD to move the image from one place to the
other, but I do declare it as the driver for all the images. (This is a
shared image store using NFS.)

In this mode it brings a separate copy of my 2GB compressed image to
the node every time a VM is launched, rather than forking off of the
existing copy, but it does not expand the qcow2; in other words, what
starts as 2GB stays 2GB.

The same thing works if you are using TM_SSH, i.e. that transfer manager
does not expand the image either.
Usually if you look into the numbered VM log that is written when the VM
is launched or the image is stored, you can figure out what is going on
and which driver is being called.

Steve Timm




DATASTORE 102 INFORMATION
ID : 102
NAME   : cloud_images
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: cloudworker
TYPE   : IMAGE
DS_MAD : fs
TM_MAD : shared
BASE PATH  : /var/lib/one/datastores/102
DISK_TYPE  : FILE

DATASTORE CAPACITY
TOTAL: : 7T
FREE:  : 2.1T
USED:  : 39G
LIMIT: : -

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : ---

DATASTORE TEMPLATE
BASE_PATH=/var/lib/one/datastores/
CLONE_TARGET=SYSTEM
DATASTORE_CAPACITY_CHECK=NO
DISK_TYPE=FILE
DS_MAD=fs
LN_TARGET=NONE
TM_MAD=shared
TYPE=IMAGE_DS

IMAGES
4
5
[root@fclheadgpvm01 ~]#



From: Users [users-boun...@lists.opennebula.org] on behalf of Schroeder, Nils 
[nils.schroe...@cewe.de]
Sent: Monday, February 02, 2015 9:47 AM
To: users@lists.opennebula.org
Subject: [one-users] Thin provisioning and qcow2

Hello,

I want to use thin provisioning for my images, and therefore I use the
qcow2 format, but for some reason ONE does not use sparse file copy /
move operations. Here is what I did:

I have a datastore with qcow2 driver:
oneadmin@one01:~$ onedatastore show 101
DATASTORE 101 INFORMATION
ID : 101
NAME   : rz1-01-image
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: RZ1
TYPE   : IMAGE
DS_MAD : fs
TM_MAD : qcow2
BASE PATH  : /var/lib/one/datastores/101
DISK_TYPE  : FILE

DATASTORE CAPACITY
TOTAL: : 6.5T
FREE:  : 6.3T
USED:  : 132.9G
LIMIT: : -

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : ---

DATASTORE TEMPLATE
BASE_PATH=/var/lib/one/datastores/
CLONE_TARGET=SYSTEM
DISK_TYPE=FILE
DS_MAD=fs
LN_TARGET=NONE
TM_MAD=qcow2
TYPE=IMAGE_DS

IMAGES
...


I created a new empty image:
oneadmin@one01:~$ qemu-img create -f qcow2 -o preallocation=metadata 
debian.qcow2 4G

This file takes only a few KB of disk space:
oneadmin@one01:$ qemu-img info debian.qcow2
image: debian.qcow2
file format: qcow2
virtual size: 4.0G (4294967296 bytes)
disk size: 784K
cluster_size: 65536

So I import it into the Datastore with driver qcow2:
oneadmin@one01:~$ oneimage create -d 101 --name debian --path ~/debian.qcow2 
--prefix vd --type OS --driver qcow2

But as you can see after the import the file is 4GB in size:
oneadmin@one01:~$ oneimage show debian
IMAGE 28 INFORMATION
ID : 28
NAME   : debian
USER   : oneadmin
GROUP  : oneadmin
DATASTORE  : rz1-01-image
TYPE   : OS
REGISTER TIME  : 02/02 14:35:42
PERSISTENT : No
SOURCE : /var/lib/one/datastores/101/0b96c87ba5445c347ad407bb1f4950ff
PATH   : /var/lib/one/debian.qcow2
SIZE   : 4G
STATE  : rdy
RUNNING_VMS: 0

PERMISSIONS
OWNER  : um-
GROUP  : ---
OTHER  : ---

IMAGE TEMPLATE
DEV_PREFIX=vd
DRIVER=qcow2

VIRTUAL MACHINES

oneadmin@one01:~$ qemu-img info 
/var/lib/one/datastores/101/0b96c87ba5445c347ad407bb1f4950ff
image: /var/lib/one/datastores/101/0b96c87ba5445c347ad407bb1f4950ff
file format: qcow2
virtual size: 4.0G (4294967296 bytes)
disk size: 4.0G
cluster_size: 65536

I think this is because the underlying copy operation does not handle
qcow2 / sparse files correctly. You can reproduce the same behavior in
bash: if you cp the file everything is fine, but if you rsync or scp it,
the file has its full virtual size afterwards. For rsync there is the -S
switch to correctly handle sparse files; there is no such switch for
scp, though.
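(For reference, a few sparse-preserving alternatives; the flags are
standard GNU cp/rsync and qemu-img options, the paths are made up:)

cp --sparse=always debian.qcow2 /target/debian.qcow2  # force holes in the copy
rsync -S debian.qcow2 host:/target/                   # -S re-creates holes
qemu-img convert -O qcow2 fat.qcow2 thin.qcow2        # re-thin after a non-sparse copy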

What am I doing wrong? Am I using the right way to create empty qcow2 /
thin-provisioned images? I want to create a VM and do a PXE install
afterwards (that works fine; I only have the thin provisioning problem).
I also tried to create an empty datablock using driver qcow2 and fs type
ext4, but that way I get a raw file. That one seems to be thin
provisioned, but I think I get some advantages if I use qcow2
(copy-on-write / snapshot performance).

Thank you very much; any help is appreciated,
Nils Schröder




[one-users] specifying pairs of ipv4/ipv6 ip addresses in dual stack address range

2015-01-21 Thread Steven C Timm
At Fermilab we have the use case where we do not use the IPv6 autoconfig
features, but instead assign individual IPv4 and IPv6 addresses to each
host name.

Is there any way that the structure of the address range table in
OpenNebula can be modified such that we can uniquely specify both the
IPv4 and IPv6 addresses, so each is exactly the address we want?

I suppose if I creatively assign MAC addresses to the VM I might be able
to do it in a backhanded sort of way, e.g. get the autoconf utility to
give me the IPv6 address I want, but it would be nice to have a more
straightforward way to make it happen. Right now the only way I see is
to write a special contextualization script, outside of the one-context
RPM, to do an nslookup on the IPv6 address and add it after the fact.
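(A rough sketch of such a script; the interface name, prefix length, and
use of dig are illustrative assumptions, not the one-context way of
doing it:)

#!/bin/bash
# Resolve this host's AAAA record and add it as a static IPv6 address.
IP6=$(dig +short AAAA "$(hostname -f)" | head -n 1)
[ -n "$IP6" ] && ip -6 addr add "${IP6}/64" dev eth0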

Anyone else who has done this or figured out how to do it, let me know.

Steve Timm

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Sunstone comes up but nothing happens

2014-12-31 Thread Steven C Timm
I have recently installed and configured Sunstone on my OpenNebula 4.8
installation.
I get the login screen and it recognizes me as a user and lets me log
in, but it only shows the five menu bars at the left of the screen
(dashboard, system, virtual resources, infrastructure, oneflow).
Nothing else after that, and none of those buttons do anything. The
browser (Firefox) keeps reloading the main page every 5-10 seconds or
so. Safari can't bring it up at all.

Any common failure mode that might lead to that?

In addition, after 20-25 minutes the sunstone daemon just exits for no
good reason.
(This is an improvement over before, when it was exiting after 2-3
minutes; what got me this far was installing all the required *-devel
dependencies.)
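(Assuming default packaging paths, the Sunstone logs are usually the
first place to look for both symptoms:)

tail -n 50 /var/log/one/sunstone.log /var/log/one/sunstone.error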

Steve Timm

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Question on /var/lib/remotes/tm/shared/clone script

2014-12-29 Thread Steven C Timm
We have noticed this problem recently in OpenNebula 4.6 and 4.8. We
also had to make a similar patch in OpenNebula 3.2.

Our use case:
we have a SHARED image datastore:


DATASTORE 102 INFORMATION
ID : 102
NAME   : cloud_images
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: cloudworker
TYPE   : IMAGE
DS_MAD : fs
TM_MAD : shared
BASE PATH  : /var/lib/one/datastores/102
DISK_TYPE  : FILE


-


We have a non-shared SYSTEM data store (local to each of 250-some nodes)


[root@fclheadgpvm01 shared]# onedatastore show 100
DATASTORE 100 INFORMATION
ID : 100
NAME   : localnode
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: cloudworker
TYPE   : SYSTEM
DS_MAD : -
TM_MAD : ssh
BASE PATH  : /var/lib/one/datastores/100/100
DISK_TYPE  : FILE



When we launch a VM, the tm/shared/clone procedure is invoked:

[root@cloudworker1359 1]# ls -lrt
total 2390660
-rw-r--r-- 1 oneadmin oneadmin 2447638528 Dec 29 20:52 disk.0
-rw-r--r-- 1 oneadmin oneadmin     389120 Dec 29 20:52 disk.1
lrwxrwxrwx 1 oneadmin oneadmin         36 Dec 29 20:52 disk.1.iso -> /var/lib/one/datastores/100/1/disk.1
-rw-r--r-- 1 oneadmin oneadmin        922 Dec 29 20:52 deployment.0


It clones disk.0 to the appropriate directory in the local system
datastore, but we get a permission denied error when we try to launch
the VM.

The fix is to hack the clone script to chmod the file to 664; then the
VM will launch.
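(The hack amounts to something like the following at the end of the
clone operation; the variable name is illustrative, the real
tm/shared/clone may differ:)

# appended after the copy in tm/shared/clone, so the cloned
# disk is readable/writable by the qemu group:
chmod 664 $DST_PATH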


--

The relevant configurations, we think:


Non-default settings in qemu.conf:

[root@cloudworker1359 libvirt]# grep -v ^# qemu.conf | grep .
dynamic_ownership = 0

Non-default settings in libvirtd.conf:

[root@cloudworker1359 libvirt]# grep -v ^# libvirtd.conf | grep .
unix_sock_group = libvirtd
unix_sock_ro_perms = 0777
unix_sock_rw_perms = 0770
auth_unix_ro = none
auth_unix_rw = none
log_level = 2
log_outputs = 2:syslog:libvirtd
host_uuid = a68ca77f-dab0-5873-be6f-2216635204d1

[root@cloudworker1359 libvirt]# grep qemu /etc/passwd
qemu:x:107:107:qemu user:/:/sbin/nologin
[root@cloudworker1359 libvirt]# grep libvirt /etc/passwd
[root@cloudworker1359 libvirt]# grep qemu /etc/passwd
qemu:x:107:107:qemu user:/:/sbin/nologin
[root@cloudworker1359 libvirt]# grep oneadmin /etc/passwd
oneadmin:x:44897:10040::/var/lib/one:/bin/bash
[root@cloudworker1359 libvirt]# grep qemu /etc/group
disk:x:6:qemu
kvm:x:36:qemu
qemu:x:107:
oneadmin:x:10040:qemu





Three questions:

1) Why can the VM not be launched with the default permissions?
2) Are there any system configurations I can fix to make it launch?
3) If I have to continue patching the tm/shared/clone script, is there
   any way to push it out to the other nodes? onehost sync doesn't
   appear to change the remotes.


Steve Timm





___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Recovering opennebula 4.8 with missing .one/one_auth file

2014-12-10 Thread Steven C Timm
I recently had to restore my system from backup but forgot to include
the .one/ directory in the backups; thus I have no one_auth file at the
moment for the oneadmin user. Without that, OpenNebula won't start, and
thus I can't do "oneuser login", which is what I would need to re-create
it.

The oneadmin user is currently using x.509 authentication.
Is there any way to reset it to core authentication in the current
state?
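(For the archives: one_auth is just username:password on one line. A
sketch of one possible recovery, assuming a MySQL backend; not verified
against an x509 setup:)

# The stored credential is inside the body XML of the user_pool row:
mysql -u oneadmin -p opennebula \
  -e "SELECT body FROM user_pool WHERE name='oneadmin'" | grep -o '<PASSWORD>[^<]*'
# Recreate the file with whatever that returns:
echo 'oneadmin:PASSWORD-FROM-DB' > ~/.one/one_auth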

Steve

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] IOPS in quota section for 4.12

2014-11-13 Thread Steven C Timm
Good idea. In general it would be nice to be able to set the various
quantities which can be controlled by the RHEV extended libvirt
functions, which include IOPS but also network bandwidth.
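(For reference, the libvirt side of this exists today; a hedged example
with made-up values, where the domain name follows OpenNebula's usual
one-<vmid> convention:)

# Cap a running guest's vda at 300 IOPS total:
virsh blkdeviotune one-1054 vda --total-iops-sec 300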

Steve


From: Users [users-boun...@lists.opennebula.org] on behalf of kiran ranjane 
[kiran.ranj...@gmail.com]
Sent: Thursday, November 13, 2014 5:16 PM
To: users
Subject: [one-users] IOPS in quota section for 4.12

Hi Team,

I think we should add IOPS to the quota section, so that oneadmin can
assign IOPS at the user and group level.

For example:

-- Oneadmin assigns 2000 IOPS to a particular user, which means that
user should not exceed 2000 IOPS across the total number of VMs he is
running.

-- After IOPS is assigned to a user/group through the quota section, the
cloud view user should be able to assign IOPS at the VM disk level as he
chooses (e.g. if a user/group is assigned a quota of 2000 IOPS, then he
should have the ability to set the IOPS of each VM disk before starting
the VM).

User/group total IOPS quota = 2000

User sets IOPS at disk level:
Virtual machine OS disk   = 300 IOPS
Virtual machine data disk = 300 IOPS

Total remaining IOPS = 1400 IOPS

I really think this feature is a must-add for the quota section, as it
will ease many things related to the default way of assigning IOPS to
VMs.

Regards
Kiran Ranjane
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Add Ec2 Host to OpenNebula 4.8 for Cloudbursting

2014-10-22 Thread Steven C Timm
We did this on a test server a while ago, in an early OpenNebula
version. Check to be sure that you have all the required Ruby gems
installed. I believe there is an AWS-specific one you have to install
just to make the AWS stuff work.
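(If memory serves the gem in question is aws-sdk; treat the exact name
as an assumption and check the EC2 driver docs for your version:)

gem install aws-sdk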

Steve Timm


From: Users [users-boun...@lists.opennebula.org] on behalf of i...@cyle.ch 
[i...@cyle.ch]
Sent: Wednesday, October 22, 2014 3:03 PM
To: users@lists.opennebula.org
Subject: [one-users] Add Ec2 Host to OpenNebula 4.8 for Cloudbursting

Dear Community

I am trying to add an EC2 host for cloudbursting to my OpenNebula 4.8
private cloud.

I have configured:

IM_MAD = [
 name   = "ec2",
 executable = "one_im_sh",
 arguments  = "-c -t 1 -r 0 ec2" ]

VM_MAD = [
 name   = "ec2",
 executable = "one_vmm_sh",
 arguments  = "-t 15 -r 0 ec2",
 type   = "xml" ]

in the /etc/one/oned.conf

AND:

regions:
  default:
    region_name: eu-west-1
    access_key_id: My_ACCESS_KEY
    secret_access_key: MY_SECRET_ACCESS_KEY
    capacity:
      m1.small: 5
      m1.large: 0
      m1.xlarge: 0

in /etc/one/ec2_driver.conf

But if I create the EC2 host with:
onehost create ec2 --im ec2 --vm ec2 --net dummy
I always receive an error message:

ec2 err | ERROR Wed Oct 22 21:52:54 2014 : Error monitoring Host
eu-west-1 (26): Error executing poll


Thanks for assistance
Best regards
Cyrill

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] network_pool out of sync with vm_pool ONE4.8

2014-10-22 Thread Steven C Timm
Due to an ongoing issue that requires me to purge the vm_pool of my
database from time to time, I inadvertently did the purge while 8 VMs
were still active and thus leases were still allocated.

I ran onedb fsck to clean it up. It successfully cleaned up the
host_pool table to show no running VMs. It also said that it updated the
network_pool table to show no allocated leases, but it did not do so.

The network_pool table in ONE4 holds all 1000 leases of this particular
vnet in a single row of the table. Correctly executing a manual MySQL
query to clean it up would be almost impossible.

Suggestions on what to do to clean it up?

Steve Timm

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] econe server connection reset by peer

2014-10-10 Thread Steven C Timm
We have been doing bulk tests of the OpenNebula 4.8 econe-server.
With just a straight econe-run-instances we can get up to 1000 VMs (the
limit of our current subnet) started fairly quickly (about 30 minutes).

But in practice we are using a more complicated sequence of EC2 calls
via HTCondor. In particular, it does a CreateKeyPair call before it
launches each VM and then calls the RunInstances method with the
--keypair option, a unique keypair for each VM. After the VM exits, it
makes a DeleteKeyPair call.

It appears there is a hard limit on the number of key pairs that can be
stored in any one user's template, and that hard limit is 301. Any
further CreateKeyPair calls return with connection reset by peer,
causing HTCondor to mark the VM as held. Fortunately it is possible to
override this and tell HTCondor to continue, but it's a pain. We have
ways to log into the VMs without the ssh key pair, so we wouldn't even
really need to register them at all.

Is my analysis correct? Is there a hard limit on the number of keys that
can be stored in the user template? If so, how best to get around this
limit?

Steve Timm


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] FREEMEMORY as RANK variable

2014-10-10 Thread Steven C Timm
During most of the time we have been running OpenNebula 2 and 3, we have
been using a rank based on FREEMEMORY. We are now doing tests with
OpenNebula 4.8, in a use case where we are filling up an empty cloud.
FREEMEMORY should still, in theory, be an accurate value, but the
problem is that 6-7 VMs are typically launched in every SCHED cycle, and
so a node that starts out with all of its memory free will end up full
of virtual machines in a single cycle.

Once we got to a full cloud and steady state it would be fine, but when
you have 8 VMs starting at once on an old 8-core node, it takes much
longer than it otherwise would.

Any cheap suggestions to get the default scheduler to do a more
horizontal fill?
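(Two knobs in /etc/one/sched.conf that bear on this; the values shown
are illustrative, not a tested recommendation:)

MAX_HOST = 1         # at most 1 VM dispatched to a given host per cycle

DEFAULT_SCHED = [
   policy = 1        # 1 = striping: spread VMs across hosts
]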

Steve Timm
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Thin provisioning and image size

2014-10-08 Thread Steven C Timm
Interestingly, I found that I was able to do some overcommitting after
all, somewhat to my surprise, as my earlier tests of OpenNebula 4.4 and
4.6 didn't seem to allow that. This is what I think is happening:

Beginning setup: image datastore on NFS, system datastore on local disk,
launching a non-persistent image via the CLONE operation in OpenNebula.
DATASTORE_CAPACITY_CHECK = NO for both.

Initial size of system datastore: 450 GB.
Max size of qcow2 image: 262 GB.
Actual size of qcow2 image on disk, unexpanded: 2.6 GB.

Launch first VM. SCHED asks: does the system datastore have 262 GB
available? Yes.
Launch second VM. SCHED asks: does the system datastore have 262 GB
available? Yes, because the first one hasn't expanded out. And so forth.

That amount of overprovisioning is enough for our purposes: we can get
8-9 images (actually a lot more than that) into the datastore, and will
only get in trouble if our users all run out the file to full size,
which most of them do not. The system runs out of RAM and CPU before it
runs out of disk in this scenario.

Steve Timm

From: Ruben S. Montero [rube...@dacya.ucm.es]
Sent: Wednesday, October 08, 2014 3:50 AM
To: Steven C Timm
Cc: Javier Fontan; users@lists.opennebula.org
Subject: Re: [one-users] Thin provisioning and image size

Hi

Yes,  the value for the size of the image is computed in libfs.sh (fs_size) for 
different file types.  For qcow2 images:

SIZE=$($QEMU_IMG info $1 | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')

I.e. the virtual size in bytes is used. As this is the max. value of the image 
it is not updated.

Some thoughts:

1.- Using the max size enables proper enforcement of disk quotas, and so
prevents a kind of DoS attack where a user starts a VM and grows the
disk over the size of the datastore, effectively disabling the
hypervisor.

2.- Initially we thought of updating the image size every time it is
copied back to the image datastore, but this may conflict with the quota
system, as mentioned above.

3.- If needed we could implement datastore over-commitment as we do with
the hosts, i.e. extend the LIMIT_MB attribute to allow OpenNebula to use
more datastore storage. (See
http://docs.opennebula.org/4.8/administration/references/ds_conf.html)

Cheers

Ruben



On Mon, Oct 6, 2014 at 7:45 PM, Steven Timm t...@fnal.gov wrote:

I am seeing the opposite problem in One 4.x
and have been ever since we started testing it.
When I do oneimage create using a qcow2 image,
opennebula always reports the size as the absolute
maximum to which the qcow2 file system could expand.
This keeps us from being able to over-provision our
disk on the VM hosts as we've done under Opennebula 3.2 for a long time.

For instance:

[oneadmin@fermicloud198 ~]$ oneimage show 5
IMAGE 5 INFORMATION
ID : 5
NAME   : SLF6Vanilla
USER   : oneadmin
GROUP  : oneadmin
DATASTORE  : cloud_images
TYPE   : OS
REGISTER TIME  : 10/03 16:31:31
PERSISTENT : No
SOURCE : /var/lib/one/datastores/102/180caf99a13146dbd1b60593378d4479
PATH   : /tmp/55c42a4cc7f87ea3390bc2bef14212c5
SIZE   : 256G
STATE  : used
RUNNING_VMS: 1

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : u--

IMAGE TEMPLATE
DESCRIPTION=SLF6 Vanilla
DEV_PREFIX=vd
DRIVER=qcow2
EC2_AMI=YES



[oneadmin@fermicloud198 ~]$ onedatastore show 102
DATASTORE 102 INFORMATION
ID : 102
NAME   : cloud_images
USER   : njp
GROUP  : oneadmin
CLUSTER: cloudworker
TYPE   : IMAGE
DS_MAD : fs
TM_MAD : shared
BASE PATH  : /var/lib/one/datastores/102
DISK_TYPE  : FILE

DATASTORE CAPACITY
TOTAL: : 7T
FREE:  : 1.6T
USED:  : 4.1G
LIMIT: : -

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : ---

DATASTORE TEMPLATE
BASE_PATH=/var/lib/one/datastores/
CLONE_TARGET=SYSTEM
DATASTORE_CAPACITY_CHECK=NO
DISK_TYPE=FILE
DS_MAD=fs
LN_TARGET=NONE
TM_MAD=shared
TYPE=IMAGE_DS

IMAGES
5
6

[oneadmin@fermicloud198 ~]$ onedatastore show 100
DATASTORE 100 INFORMATION
ID : 100
NAME   : localnode
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: cloudworker
TYPE   : SYSTEM
DS_MAD : -
TM_MAD : ssh
BASE PATH  : /var/lib/one/datastores/100/100
DISK_TYPE  : FILE

DATASTORE CAPACITY
TOTAL: : -
FREE:  : -
USED:  : -
LIMIT: : -

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : ---

DATASTORE TEMPLATE
BASE_PATH=/var/lib/one/datastores/100/
DATASTORE_CAPACITY_CHECK=no
SHARED=NO
TM_MAD=ssh
TYPE=SYSTEM_DS

IMAGES
[oneadmin@fermicloud198 ~]$

-

Any suggestions?

Steve




On Mon, 8 Sep 2014, Javier Fontan wrote:

Then the value makes sense as the units stored are Megabytes.

On Fri, Sep 5, 2014 at 3:34 PM, Daniel

Re: [one-users] Can't start VM

2014-09-27 Thread Steven C Timm
Can you show the output of

oneimage show

for the image you are using?

The error you are getting seems to be saying that the image you are
trying to copy from the image datastore to the system datastore is
somehow a directory rather than a file.
Verify that /var/lib/one/datastores/1/22fb6766c2abcd61d1b1540796bd9705
is really a file.
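For example:

ls -ld /var/lib/one/datastores/1/22fb6766c2abcd61d1b1540796bd9705

(A directory there, rather than a plain file, would match the cp error
in the log below.)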

Also give us the output of onedatastore show 0 
and onedatastore show 1.

But I think it's a problem somehow with the way you declared your image.

Steve Timm





From: Users [users-boun...@lists.opennebula.org] on behalf of Brendan Rose 
[bren...@bluecurtain.ca]
Sent: Saturday, September 27, 2014 2:05 AM
To: users@lists.opennebula.org
Subject: [one-users] Can't start VM

Hey!

I've tried several times with both OpenNebula 4.6 and 4.8 on Ubuntu
12.04 and 14.04 and keep running into the same issues.

The current setup is a front end with the storage space and the
hostname Zeus. I also have a node using NFS to mount /var/lib/one. I've
double-checked that ssh is set up to work without a password.

Currently this is the log I'm getting:

Sat Sep 27 01:00:05 2014 [Z0][DiM][I]: New VM state is PENDING
Sat Sep 27 01:00:22 2014 [Z0][DiM][I]: New VM state is ACTIVE.
Sat Sep 27 01:00:22 2014 [Z0][LCM][I]: New VM state is PROLOG.
Sat Sep 27 01:00:23 2014 [Z0][TM][I]: Command execution fail:
/var/lib/one/remotes/tm/shared/clone
Zeus:/var/lib/one//datastores/1/22fb6766c2abcd61d1b1540796bd9705
Zeus:/var/lib/one//datastores/0/5/disk.0 5 1
Sat Sep 27 01:00:23 2014 [Z0][TM][I]: clone: Cloning
/var/lib/one/datastores/1/22fb6766c2abcd61d1b1540796bd9705 in
Zeus:/var/lib/one//datastores/0/5/disk.0
Sat Sep 27 01:00:23 2014 [Z0][TM][E]: clone: Command cd
/var/lib/one/datastores/0/5; cp
/var/lib/one/datastores/1/22fb6766c2abcd61d1b1540796bd9705
/var/lib/one/datastores/0/5/disk.0 failed: Warning: Permanently added
'zeus' (ECDSA) to the list of known hosts.
Sat Sep 27 01:00:23 2014 [Z0][TM][I]: cp: omitting directory
'/var/lib/one/datastores/1/22fb6766c2abcd61d1b1540796bd9705'
Sat Sep 27 01:00:23 2014 [Z0][TM][E]: Error copying
Zeus:/var/lib/one//datastores/1/22fb6766c2abcd61d1b1540796bd9705 to
Zeus:/var/lib/one//datastores/0/5/disk.0
Sat Sep 27 01:00:23 2014 [Z0][TM][I]: ExitCode: 1
Sat Sep 27 01:00:23 2014 [Z0][TM][E]: Error executing image transfer
script: Error copying
Zeus:/var/lib/one//datastores/1/22fb6766c2abcd61d1b1540796bd9705 to
Zeus:/var/lib/one//datastores/0/5/disk.0
Sat Sep 27 01:00:24 2014 [Z0][DiM][I]: New VM state is FAILED

My Datastore info is:

  ID NAME    SIZE  AVAIL CLUSTER     IMAGES TYPE DS  TM
   0 system  7.2T    99% BlueCurtain      0 sys  -   shared
   1 default 7.2T    99% BlueCurtain      1 img  fs  shared
   2 files   7.2T    99% BlueCurtain      0 fil  fs  ssh

Any direction would be greatly appreciated! I'm tired of banging my head
against this error!

Thanks!


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] OpenNebula plus RHEL./Centos/Sci. Linux 6.3 or 6.4

2014-04-24 Thread Steven C Timm
I am wondering if there are any other big OpenNebula clouds out there
using RHEL 6.3 or 6.4, CentOS 6.3 or 6.4, or Scientific Linux 6.3 or
6.4.

We are seeing a fairly nasty performance problem, but only on Intel
Sandy Bridge or Ivy Bridge based hardware. If you have N KVM-based
virtual machines running (N>=4 as far as I can tell) and then do a lot
of disk and I/O activity on the hypervisor, for example migrating
several more virtual machines to or from the bare metal, and if at least
one of those virtual machines is doing some I/O too, there is a failure
mode in which you start seeing sshd processes (from oneadmin monitoring
or otherwise) hanging and taking 100% of CPU. Ping times to virtual
machines become very widely varied; in extreme cases the VM can even go
off the network entirely, in such a fashion that ifdown/ifup doesn't
bring it back and sometimes you can't even kill it with virsh destroy. A
couple of times we have even managed to crash the hypervisor
irreversibly, so that it has to be power cycled.

If all the surviving virtual machines are shut down, the system then
returns to normal and all the hung processes exit.

Has anyone else seen problems like this? If so, please let me know.
There seems to be little if anything out there about this bug, which is
strange since it has been around for a while.

Steve Timm


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula Cloud Provisioning Portal

2014-04-15 Thread Steven C Timm
I like it. Is this Sunstone reimagined/rebranded, or is it in addition
to Sunstone?

Steve Timm

From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Jaime Melis
Sent: Tuesday, April 15, 2014 6:01 AM
To: Users OpenNebula
Subject: [one-users] OpenNebula Cloud Provisioning Portal

Dear all,

we have uploaded a new screencast to demonstrate the Cloud Provisioning Portal 
interface of OpenNebula 4.6.

Check it out here (2 mins):
https://www.youtube.com/watch?v=NyOWEWAbeIo

cheers,
Jaime

--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] RAW section--can it be included in devices/devices?

2014-03-31 Thread Steven C Timm
I would like to configure some OpenNebula VMs to have a serial console
that can be accessed via the virsh console command.

The following XML works if it is inserted manually before the </devices>
tag in deployment.0:

<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>

But if I put it in the RAW section, it ends up outside of the
</devices> tag at the very end of the XML. Is there any way to get the
RAW section inside the <devices></devices> section of the XML?

Steve Timm
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Architecture advice

2014-03-28 Thread Steven C Timm
At the moment we have an active/passive head node setup for OpenNebula
with a SAN backend for the image repository. The OpenNebula service is
managed by Red Hat clustering, as is the GFS2+CLVM file system. We run a
few virtual machines on the frontend machines as well. If we were using
an ssh-based transfer manager that would not work, nor did it work when
we were using a GFS2+DRBD file system for the image repo.

We are thinking instead of shifting the ONE head node to be a
live-migratable virtual machine (or machines); the problem is how to get
the VM to hook into the clustered file system with good bandwidth.
Steve Timm


From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Shankhadeep Shome
Sent: Friday, March 28, 2014 4:53 PM
To: Stuart Longland
Cc: users
Subject: Re: [one-users] Architecture advice

What is your definition of large? This is a difficult question to answer 
without more details.

On Wed, Mar 26, 2014 at 7:08 PM, Stuart Longland stua...@vrt.com.au
wrote:
On 18/03/14 02:35, Gandalf Corvotempesta wrote:
 Hi to all,
 I'm planning a brand new cloud infrastructure with OpenNebula.
 I'll have many KVM nodes and 3 management nodes where I would like
 to place OpenNebula, Sunstone and something else used to orchestrate
 the whole infrastructure.

 Simple question: can I use these 3 nodes to power on OpenNebula (in
 an HA configuration) and also host some virtual machines managed by
 OpenNebula?
I could be wrong, I'm new to OpenNebula myself, but from what I've seen,
the management node isn't a particularly heavyweight process.

I had it running semi-successfully on an old server here (pre
virtualisation-technology).  I say semi-successfully; the machine had a
SCSI RAID card that took a dislike to Ubuntu 12.04, so the machine I had
as the master would die after 8 hours.

I was using SSH based transfers (so no shared storage) at the time.

Despite this, the VMs held up; they just couldn't be managed. This
won't be the case if your VM hosts mount any space off the frontend
node: in that case a true HA set-up is needed. (And let's face it, I
wouldn't recommend running the master node as a VM if you're going to be
mounting storage directly off it for the hosts.)

Based on this it would seem you could do a HA setup with some shared
storage between the nodes, either a common SAN or DR:BD to handle the
OpenNebula frontend.

Seeing as OpenNebula will want to control libvirt on the hosts (given
that you're also suggesting making these run the OpenNebula-managed VMs
too), this might have to be a KVM process managed outside of libvirt.
Not overly difficult, just fiddly.

But, as I say, I could be wrong, so take the above advice with a grain
of salt.

Regards,
--
Stuart Longland
Systems Engineer
 _ ___
\  /|_) |                    T: +61 7 3535 9619
 \/ | \ | 38b Douglas Street F: +61 7 3535 9699
   SYSTEMS Milton QLD 4064   http://www.vrt.com.au


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] how to verify whether or not the motherboard support virtualization technology

2013-10-20 Thread Steven C Timm
It is always possible to run some kind of virtualization technology, no
matter what motherboard you have.
I have no personal experience with the model of desktop board that you
mention, but I am familiar with Intel's server products from the same
2007-2008 era, and I can say that those boards, which used the same
socket, support hardware-assisted virtualization.
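(A quick check from Linux; the cpuinfo flag test is standard, and the
dmesg line assumes the kvm module has been loaded at least once:)

egrep -c '(vmx|svm)' /proc/cpuinfo   # nonzero: the CPU advertises VT-x/AMD-V
dmesg | grep -i kvm                  # "kvm: disabled by bios" points at the board/BIOS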

Steve Timm


From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Qiubo Su (David Su)
Sent: Sunday, October 20, 2013 8:17 PM
To: Users OpenNebula; disc...@lists.opennebula.org
Subject: [one-users] how to verify whether or not the motherboard support 
virtualization technology

Dear OpenNebula Community,
I want to upgrade a desktop PC bought 6 years ago to support
virtualization technology. For the whole PC to support virtualization,
apart from the CPU supporting it, the motherboard needs to support it as
well. It is easy to upgrade the CPU but difficult to upgrade the
motherboard. If the motherboard already supports virtualization, then I
don't need to upgrade it.
Does anyone know how to check whether the motherboard supports
virtualization or not?

In the Ubuntu terminal, I ran the commands below and got the
corresponding output:

1) dmidecode | grep -A4 'Base Board Information'
Base Board Information
Manufacturer: Intel Corporation
Product Name: D945GCCR
Version: AAD78647-301
Serial Number: BTCR71100WJ1

2) dmidecode -t 4 | grep Socket
Socket Designation: LGA 775
The above is the information for the motherboard of my desktop PC; I
don't know whether or not it supports virtualization.
Thanks for your help!
Kind regards,
Q.S.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] the problem of the CPU in the virtual machine's template

2013-01-23 Thread Steven C Timm
VCPU is the parameter that controls how many cores appear inside the
virtual machine, i.e. if you have VCPU=4, your VM will have 4 cores, but
there will still be only one kvm process on the hypervisor corresponding
to it.
In a typical KVM setup it is possible to allocate more VCPUs per VM host
than the VM host has real cores.
I am not exactly sure what CPU does, but it does affect the FCPU and
ACPU values seen in the onehost list output.
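(For illustration, a template fragment; the gloss on CPU here, host
capacity reserved for scheduling and accounting versus guest-visible
cores, is my reading rather than something quoted from the docs:)

CPU  = 1    # share of a physical CPU reserved on the host
VCPU = 4    # cores the guest sees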

Steve Timm

From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of cmcc.dylan
Sent: Wednesday, January 23, 2013 9:26 PM
To: users@lists.opennebula.org
Subject: [one-users] the problem of the CPU in the virtual machine's template

Hi, everyone!

I have a question about the exact meaning of CPU in the VM's template.
For example, if we define a VM with CPU=1 and VCPU=4, what is the result
in the host OS? Does the host OS fork 4 processes on behalf of this VM,
and do those 4 processes get 4 cores if the host's scheduler allows it?

I want to know the differences between CPU=4,VCPU=4 and CPU=1,VCPU=4.


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] network problem opennebula 3.8

2013-01-22 Thread Steven C Timm
You have to make the bridges on all the host machines first; OpenNebula
doesn't do it for you.
But you also have to get the remote scripts onto the VM host; there is a
onehost command to do that.
Steve Timm



From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of GyeongRyoon Kim
Sent: Tuesday, January 22, 2013 4:00 AM
To: opennebula mailing list
Subject: [one-users] network problem opennebula 3.8

Hi all,

I'm trying to launch virtual machine on my cloud with opennebula 3.8 and kvm 
hypervisor.

When I create vm, I got a this error message


Tue Jan 22 15:29:37 2013 [DiM][I]: New VM state is ACTIVE.

Tue Jan 22 15:29:38 2013 [LCM][I]: New VM state is PROLOG.

Tue Jan 22 15:29:38 2013 [VM][I]: Virtual Machine has no context

Tue Jan 22 15:29:57 2013 [TM][I]: clone: Cloning 
gcloud-front.sdfarm.kr:/gcloud/one/var/datastores/101/ba084f2e80cf462c39f3d63e67c280f0
 in /gcloud/one/var/datastores/0/113/disk.0

Tue Jan 22 15:29:57 2013 [TM][I]: ExitCode: 0

Tue Jan 22 15:30:00 2013 [TM][I]: mkimage: Making filesystem of 10240M and type 
ext3 at gcloud01:/gcloud/one/var//datastores/0/113/disk.1

Tue Jan 22 15:30:00 2013 [TM][I]: ExitCode: 0

Tue Jan 22 15:30:01 2013 [TM][I]: mkimage: Making filesystem of 1024M and type 
swap at gcloud01:/gcloud/one/var//datastores/0/113/disk.2

Tue Jan 22 15:30:01 2013 [TM][I]: ExitCode: 0

Tue Jan 22 15:30:01 2013 [LCM][I]: New VM state is BOOT

Tue Jan 22 15:30:01 2013 [VMM][I]: Generating deployment file: 
/gcloud/one/var/vms/113/deployment.0

Tue Jan 22 15:30:01 2013 [VMM][I]: Remote worker node files not found

Tue Jan 22 15:30:01 2013 [VMM][I]: Updating remotes

Tue Jan 22 15:30:01 2013 [VMM][I]: ExitCode: 0

Tue Jan 22 15:30:01 2013 [VMM][I]: ExitCode: 0

Tue Jan 22 15:30:01 2013 [VMM][I]: Command execution fail: /var/tmp/one/vnm/tm_ssh/pre PFZNPjxJRD4xMTM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5zbF9saXZlPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4wPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxMQVNUX1BPTEw+MDwvTEFTVF9QT0xMPjxTVEFURT4zPC9TVEFURT48TENNX1NUQVRFPjI8L0xDTV9TVEFURT48UkVTQ0hFRD4wPC9SRVNDSEVEPjxTVElNRT4xMzU4ODM2MTUyPC9TVElNRT48RVRJTUU+MDwvRVRJTUU+PERFUExPWV9JRC8+PE1FTU9SWT4wPC9NRU1PUlk+PENQVT4wPC9DUFU+PE5FVF9UWD4wPC9ORVRfVFg+PE5FVF9SWD4wPC9ORVRfUlg+PFRFTVBMQVRFPjxDUFU+PCFbQ0RBVEFbMC41XV0+PC9DUFU+PERJU0s+PENMT05FPjwhW0NEQVRBW1lFU11dPjwvQ0xPTkU+PERBVEFTVE9SRT48IVtDREFUQVtTTDYuM19pbnN0YWxsX09TXV0+PC9EQVRBU1RPUkU+PERBVEFTVE9SRV9JRD48IVtDREFUQVsxMDFdXT48L0RBVEFTVE9SV9JRD48REVWX1BSRUZJWD48IVtDREFUQVtoZF1dPjwvREVWX1BSRUZJWD48RElTS19JRD48IVtDREFUQVswXV0+PC9ESVNLX0lEPjxJTUFHRT48IVtDREFUQVtTTDYgbGl2ZSBjZF1dPjwvSU1BR0U+PElNQUdFX0lEPjwhW0NEQVRBWzExXV0+PC9JTUFHRV9JRD48UkVBRE9OTFk+PCFbQ0RBVEFbWUVTXV0+PC9SRUFET05MWT48U0FWRT48IVtDREFUQVtOT11dPjwvU0FWRT48U09VUkNFPjwhW0NEQVRBWy9nY2xvdWQvb25lL3Zhci9kYXRhc3RvcmVzLzEwMS9iYTA4NGYyZTgwY2Y0NjJjMzlmM2Q2M2U2N2MyODBmMF1dPjwvU09VUkNFPjxUQVJHRVQ+PCFbQ0RBVEFbaGRhXV0+PC9UQVJHRVQ+PFRNX01BRD48IVtDREFUQVtzc2hdXT48L1RNX01BRD48VFlQRT48IVtDREFUQVtDRFJPTV1dPjwvVFlQRT48L0RJU0s+PERJU0s+PEJVUz48IVtDREFUQVt2aXJ0aW9dXT48L0JVUz48REVWX1BSRUZJWD48IVtDREFUQVtoZF1dPjwvREVWX1BSRUZJWD48RElTS19JRD48IVtDREFUQVsxXV0+PC9ESVNLX0lEPjxGT1JNQVQ+PCFbQ0RBVEFbZXh0M11dPjwvRk9STUFUPjxTSVpFPjwhW0NEQVRBWzEwMjQwXV0+PC9TSVpFPjxUQVJHRVQ+PCFbQ0RBVEFbdmRhXV0+PC9UQVJHRVQ+PFRZUEU+PCFbQ0RBVEFbZnNdXT48L1RZUEU+PC9ESVNLPjxESVNLPjxCVVM+PCFbQ0RBVEFbdmlydGlvXV0+PC9CVVM+PERFVl9QUkVGSVg+PCFbQ0RBVEFbaGRdXT48L0RFVl9QUkVGSVg+PERJU0tfSUQ+PCFbQ0RBVEFbMl1dPjwvRElTS19JRD48UkVBRE9OTFk+PCFbQ0RBVEFbbm9dXT48L1JFQURPTkxZPjxTSVpFPjwhW0NEQVRBWzEwMjRdXT48L1NJWkU+PFRBUkdFVD48IVtDREFUQVt2ZGJdXT48L1RBUkdFVD48VFlQRT48IVtDREFUQVtzd2FwXV0+PC9UWVBFPjwvRElTSz48R1JBUEhJQ1M+PExJU1RFTj48IVtDREFUQVswLjAuMC4wXV0+PC9MSVNURU4+PFBPUlQ+PCFbQ0RBVEFbNjAxM11dPjwvUE9SVD48VFlQRT48IVtDREFUQVt2bmNdXT48L1RZUEU+PC9HUkFQSElDUz48TUVNT1JZPjwhW0NEQVRBWzUxMl1dPjwvTUVNT1JZPjxOQU1FPjwhW0NEQVRBW3NsX2xpdmVdXT48L05BTUU+PE5JQz48QlJJREdFPjwhW0NEQVRBW2V0aDFdXT48L0JSSURHRT48SVA+PCFbQ0RBVEFbMTkyLjE2OC41Ni40XV0+PC9JUD48TUFDPjwhW0NEQVRBWzAyOjAwOmMwOmE4OjM4OjA0XV0+PC9NQUM+PE5FVFdPUks+PCFbQ0RBVEFbUHJpdmF0ZSBMQU4gd2l0aCBldGgxXV0+PC9ORVRXT1JLPjxORVRXT1JLX0lEPjwhW0NEQVRBWzhdXT48L05FVFdPUktfSUQ+PFZMQU4+PCFbQ0RBVEFbTk9dXT48L1ZMQU4+PC9OSUM+PE9TPjxCT09UPjwhW0NEQVRBW2Nkcm9tXV0+PC9CT09UPjwvT1M+PFZNSUQ+PCFbQ0RBVEFbMTEzXV0+PC9WTUlEPjwvVEVNUExBVEU+PEhJU1RPUllfUkVDT1JEUz48SElTVE9SWT48T0lEPjExMzwvT0lEPjxTRVE+MDwvU0VRPjxIT1NUTkFNRT5nY2xvdWQwMTwvSE9TVE5BTUU+PEhJRD4xOTwvSElEPjxTVElNRT4xMzU4ODM2MTc3PC9TVElNRT48RVRJTUU+MDwvRVRJTUU+PF


Re: [one-users] ONE 2.0 and Ruby versions

2013-01-16 Thread Steven C Timm
Thanks for the explanation, Javier--I was wondering because the opennebula 2.0 
documentation states that ruby 1.8.6 or greater is required, and yet all the 
evidence showed that we had been running stably for two years with ruby 1.8.5.  
If I remember correctly, we had to force old versions of gems to make it work 
with ruby 1.8.5, some of which are no longer available.  We will update the 
gems before we restart, and stay with ruby 1.8.7.  And then get all our VMs 
over to our new opennebula 3.x based cloud as soon as we can.

Steve Timm



-Original Message-
From: Javier Fontan [mailto:jfon...@opennebula.org] 
Sent: Wednesday, January 16, 2013 7:37 AM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] ONE 2.0 and Ruby versions

OpenNebula 2.0 is made to be compatible with both 1.8.5 and 1.8.7. I think it 
is better if you reinstall those gems, as some have compiled parts, and you 
will run less risk if they are compiled for the same ruby version you are 
using. Using the latest versions of those gems should be ok.
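
For example, something roughly like this should rebuild the compiled parts 
against the ruby you are now running (gem names taken from the list quoted 
below; check the versions against your own setup first):

  # rebuild native extensions for the current interpreter
  gem pristine nokogiri sqlite3-ruby

  # or reinstall a specific pinned version explicitly
  gem install nokogiri -v 1.4.3.1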

On Fri, Jan 11, 2013 at 11:31 PM, Steven Timm t...@fnal.gov wrote:

 I have a OpenNebula 2.0 installation that has been running for a 
 couple of years.  It was running on Sci. Linux 5 and using the stock 
 ruby 1.8.5.  As part of an attempted and failed Puppet install on this 
 machine, Ruby has now been upgraded to 1.8.7 but I have not yet 
 restarted oned.  During the install of opennebula 2.0 there were a 
 number of manual gems installed of a version that matched ruby 1.8.5.
   This is my current gem list.

 [root@fcl002 log]# gem list

 *** LOCAL GEMS ***

 amazon-ec2 (0.9.15)
 aws-s3 (0.6.2)
 builder (2.1.2)
 crack (0.1.7)
 curb (0.7.8)
 daemons (1.1.0)
 eventmachine (0.12.10)
 haml (3.0.18)
 htmlentities (4.2.1)
 macaddr (1.0.0)
 mime-types (1.16)
 mkrf (0.2.3)
 nokogiri (1.4.3.1, 1.4.2)
 rack (1.1.0)
 rake (0.8.7)
 RedCloth (4.2.3)
 require (0.2.7)
 rmagick (2.13.1)
 sequel (3.15.0)
 sinatra (1.0)
 sqlite3-ruby (1.2.4)
 thin (1.2.7)
 uuid (2.3.1)
 xml-simple (1.0.12)
 xmlparser (0.6.81)

 


 Four questions:

 1) Does oned 2.0 work with ruby 1.8.7 at all under any configuration?

 2) If I restart oned 2.0 now with ruby 1.8.7 and the above collection
of gems, is it likely to start?

 3) If not, are versions of gems that are compatible with ruby 1.8.7
and oned 2.0 still out there?

 4) Am I best to try to roll back to ruby 1.8.5?
 --
 Steven C. Timm, Ph.D  (630) 840-8525
 t...@fnal.gov  http://home.fnal.gov/~timm/ Fermilab Computing 
 Division, Scientific Computing Facilities, Grid Facilities Department, 
 FermiGrid Services Group, Group Leader.
 Lead of FermiCloud project.
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--
Javier Fontán Muiños
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization 
www.OpenNebula.org | jfon...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] OpenNebula 3.2 and hybrid shared / ssh transfer mode

2013-01-16 Thread Steven C Timm
We currently have a production cloud of two head nodes plus five VM hosts 
running OpenNebula 3.2 in an ssh-based transfer mode, with the image repo 
stored on local disk.  We have three more nodes which are currently hooked up 
to a SAN with a shared GFS-based file system.

I would like to add these three nodes to the cloud with shared transfer mode 
and do some tests before I add the other nodes to the SAN and make that our 
default transfer mode.  I know OpenNebula 3.8 would let us define an 
alternative data store and an alternative cluster.  But is there any way to do 
it under OpenNebula 3.2 without actually moving my image repo to the SAN?
The only thing I can think of is to rsync the local-disk image repo to the 
SAN for the time being.  Other ideas are welcome.
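
For the record, the stopgap I have in mind is nothing fancier than (paths are 
illustrative):

  rsync -a /var/lib/one/image-repo/ /san/gfs/one/image-repo/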

Steve Timm

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Race condition--onevm shutdown vs. onevm delete

2013-01-06 Thread Steven C Timm
Thanks for the information, Ruben.
One followup:


1)  Does anyone know the right configuration options for libvirtd.conf to 
make it log the commands it is receiving, such as shutdown, stop, start, 
etc.? Up until now I haven't been able to get libvirtd to log these at any 
verbosity.
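
In case a concrete starting point helps, these are the standard libvirtd.conf 
logging knobs; whether they actually capture the lifecycle commands at this 
verbosity is exactly what I am unsure about:

  log_level = 1
  log_filters = "1:libvirt 1:qemu"
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"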


From: Ruben S. Montero [mailto:rsmont...@opennebula.org]
Sent: Sunday, January 06, 2013 4:24 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Race condition--onevm shutdown vs. onevm delete

Hi Steven


and libvirtd dies with a segfault.

The strange thing is--according to this template, there is
nothing to save at all.  Why would it go into the epilog state at all?

The epilog state is also used to clean up the host, so even if nothing has to 
be saved at all the VMs go through this state (e.g. to remove links, log out 
from iSCSI sessions...).


I am presuming that this condition can also exist in OpenNebula 3.x
versions.  Is there any way to prevent it?

Yes, it is there, and we are working on producing a synchronous delete 
operation that waits for the cancel of the VM. The current suggested **work 
around** is to introduce ad-hoc waiting times in the epilog scripts to 
accommodate the cancel operation. This can easily be added to the epilog 
scripts; see the sketch below.
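
For example, a minimal sketch of that workaround, at the top of the affected 
epilog script (the 10-second grace period is arbitrary; tune it to your site):

  # give libvirt time to finish cancelling the domain before cleanup starts
  sleep 10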


As it is, a determined user can crash my whole set of VM hosts if he wants 
to, just by doing onevm shutdown / onevm delete.  A similar race condition can 
exist with onevm stop / onevm delete.

Apart from doing a synchronous request we are also evaluating moving the DELETE 
operation to the ADMIN set. So only oneadmin and oneadmin group are granted 
permissions to delete a VM. Regular users will still have CANCEL to get rid of 
running VMs...

Thanks for the feedback!

Ruben


Steve Timm




[oneadmin@fcl002 one]$ onevm show 3823
VIRTUAL MACHINE 3823 INFORMATION
ID : 3823
NAME   : gums-5
STATE  : DONE
LCM_STATE  : LCM_INIT
START TIME : 01/03 14:12:57
END TIME   : 01/04 07:20:44
DEPLOY ID : one-3823

VIRTUAL MACHINE MONITORING
NET_TX : 0
USED CPU   : 0
USED MEMORY: 2097152
NET_RX : 0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
  FILES=/cloud/images/OpenNebula/templates/init.sh 
/cloud/login/weigand/OpenNebula/k5login,
  GATEWAY=131.225.154.1,
  IP_PUBLIC=131.225.154.44,
  NETMASK=255.255.254.0,
  NS=131.225.8.120,
  ROOT_PUBKEY=id_dsa.pub,
  TARGET=hdc,
  USERNAME=opennebula,
  USER_PUBKEY=id_dsa.pub ]
DISK=[
  BUS=virtio,
  CLONE=YES,
  DISK_ID=0,
  IMAGE=SLF 5 Base,
  IMAGE_ID=159,
  READONLY=NO,
  SAVE=NO,
  SOURCE=/var/lib/one/image-repo/e0db5bdb2592065514ddda06ef52caf6fc7971f2,
  TARGET=vda,
  TYPE=DISK ]
DISK=[
  DISK_ID=1,
  SIZE=4096,
  TARGET=vdb,
  TYPE=swap ]
FEATURES=[
  ACPI=yes ]
GRAPHICS=[
  AUTOPORT=yes,
  KEYMAP=en-us,
  LISTEN=127.0.0.1,
  PORT=-1,
  TYPE=vnc ]
MEMORY=2048
NAME=gums-5
NIC=[
  BRIDGE=br0,
  IP=131.225.154.44,
  MAC=54:52:00:02:13:00,
  MODEL=virtio,
  NETWORK=FermiCloud,
  NETWORK_ID=2 ]
PUBLIC=YES
RANK=FREEMEMORY
REQUIREMENTS=HYPERVISOR=kvm
VCPU=1
VMID=3823
[oneadmin@f


--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] sunstone gui suggestion #2

2012-12-14 Thread Steven C Timm
Agree—it would be very helpful to have such a feature, both from sunstone and 
from the CLI.

Steve


From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Gary S. Cuozzo
Sent: Friday, December 14, 2012 9:34 AM
To: Users OpenNebula
Subject: [one-users] sunstone gui suggestion #2

Hello,
When you instantiate a template via the Templates tab, you end up with VMs 
which are just named according to the VM ID.  When you create a new VM using 
the Virtual Machines tab, you are able to provide a name for the VM and select 
a template.

For instantiating via the Templates tab, I think it would be nice to have a 
feature where the VM could get named according to the template name, with an 
ordinal or some other index to avoid name clashes when multiple VMs are 
instantiated from a single template.

For example, a template named http_server could get instantiated as http-1, 
http-2, etc.
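
As a stopgap, something like this approximates it from the CLI today (assuming 
the onetemplate in your version supports --name; adjust as needed):

  for i in 1 2 3; do
    onetemplate instantiate http_server --name "http-$i"
  done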

Let me know what you think,
gary
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Very high unavailable service

2012-08-25 Thread Steven C Timm
I run high-availability squid servers on virtual machines although not yet in 
OpenNebula.
It can be done with very high availability.
I am not familiar with Ubuntu Server 12.04, but if it has libvirt 0.9.7 or 
better, and you are using the KVM hypervisor, you should be able to use the 
cpu-pinning and NUMA-aware features of libvirt to pin each virtual machine to 
a given physical cpu.  That will beat the migration issue you are seeing now.
With the Xen hypervisor you can (and should) also pin.
I think if you beat the cpu and memory pinning problem you will be OK.
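
As a sketch of how the pinning might be wired in (version-dependent, and the 
cpuset value is illustrative), extra libvirt XML can be injected through the 
template's RAW section:

  RAW = [ TYPE = "kvm",
          DATA = "<cputune><vcpupin vcpu='0' cpuset='2'/></cputune>" ]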

However, you did not say what network topology you are using for your virtual 
machine, and what kind of virtual network drivers; that is important too.  
Also, is your squid cache mostly disk-resident or mostly RAM-resident?  If the 
former, then the virtual disk drivers matter too, a lot.

Steve Timm



From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Erico Augusto 
Cavalcanti Guedes
Sent: Saturday, August 25, 2012 6:33 PM
To: users@lists.opennebula.org
Subject: [one-users] Very high unavailable service

Dears,

I'm running the Squid web cache proxy server on Ubuntu Server 12.04 VMs 
(kernel 3.2.0-23-generic-pae), OpenNebula 3.4.
My private cloud is composed of one frontend and three nodes. VMs are running 
on those 3 nodes, initially one per node.
Outside the cloud there are 2 hosts, one working as web clients and another 
as a web server, using the Web Polygraph benchmarking tool.

The goal of the tests is to stress the Squid cache running on the VMs.
When the same test is executed outside the cloud, using the three nodes as 
physical machines, there is 100% cache service availability.
Nevertheless, when the cache service is provided by VMs, nothing better than 
45% service availability is reached.
Web clients do not receive responses from squid when it is running on VMs 
55% of the time.

I have monitored the load average of the VMs and of the PMs where the VMs are 
being executed. The first load average field reaches 15 after some hours of 
tests on the VMs, and 3 on the physical machines.
Furthermore, there is a set of processes, called migration/X, that are 
champions in CPU TIME when VMs are in execution. A sample:

top - 20:01:38 up 1 day,  3:36,  1 user,  load average: 5.50, 5.47, 4.20

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM      TIME+    TIME COMMAND
   13 root  RT   0     0    0    0 S    0  0.0  408:27.25  408:27 migration/2
    8 root  RT   0     0    0    0 S    0  0.0  404:13.63  404:13 migration/1
    6 root  RT   0     0    0    0 S    0  0.0  401:36.78  401:36 migration/0
   17 root  RT   0     0    0    0 S    0  0.0  400:59.10  400:59 migration/3


It isn't possible to offer a web cache service via VMs the way the service is 
behaving, with such low availability.

So, my questions:

1. Has anybody experienced a similar problem of an unresponsive service? 
(Whatever the service.)
2. How can I identify the bottleneck that is overloading the system, so that 
it can be minimized?

Thanks a lot,

Erico.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM PENDING Status +OpenNebula v3.6+CentOS v6.2x86_64bit

2012-08-15 Thread Steven C Timm
The last column in the onehost list shows AMEM (Available Memory) is only 1.1G.
Check the syntax on the template memory (I don't think you should use the 
quotes), and also check the memory allocated by all the other templates.
Is this a KVM hypervisor?
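
A quick way to confirm where the memory went (IDs taken from your listing; 
output abridged):

  onehost show 2 | grep -i mem   # host's view of allocated/free memory
  onevm list                     # per-VM memory reservations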

Steve Timm

From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Lawrence Chiong
Sent: Wednesday, August 15, 2012 4:41 PM
To: users
Subject: [one-users] VM PENDING Status +OpenNebula v3.6+CentOS v6.2x86_64bit


Hello,

When I deploy my VM I always get PENDING status unless I manually deploy it. 
How can I resolve this issue, considering I still have enough CPUs and memory?

This is my deployed VM template -

CPU=0.1
DISK=[
  IMAGE=empty-100m,
  IMAGE_UNAME=oneadmin ]
GRAPHICS=[
  LISTEN=0.0.0.0,
  TYPE=vnc ]
MEMORY=128
NAME=test
NIC=[
  NETWORK=testNetwork,
  NETWORK_UNAME=oneadmin ]
OS=[
  ARCH=x86_64,
  BOOT=hd ]
RAW=[
  TYPE=kvm ]
TEMPLATE_ID=111

My sched.log during VM deployment -

0]$ tail /var/log/one/sched.log
Wed Aug 15 15:36:52 2012 [VM][D]: Pending and rescheduling VMs:
 297
Wed Aug 15 15:36:52 2012 [SCHED][D]: Host 2 filtered out. Not enough capacity.

Wed Aug 15 15:36:52 2012 [SCHED][I]: Selected hosts:
 PRI HID  VM: 297
---


and, my host list showing memory and cpu info -

0]$ onehost list
  ID NAMECLUSTER   RVM TCPU FCPU ACPUTMEMFMEMAMEM STAT
   2 localhost   -   7 1600 1472  500   23.6G   16.1G1.1G on

Any ideas are very much appreciated. Thanks.

Jun
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Help to Download OpenNebula 3.6

2012-07-19 Thread Steven C Timm
i386 architectures can't run the KVM hypervisor at all.  They can run some 
older versions of Xen with paravirtualized VMs, but it has been a long time 
since I did it.  Also, in CentOS/SL/RedHat 6.x you would have to patch Xen in 
by hand because it's not part of the distro anymore.  Better to try to get to 
an x86_64 architecture of Linux if you can.

Steve Timm


From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Olivier Sallou
Sent: Thursday, July 19, 2012 7:37 AM
Cc: Users@lists.opennebula.org
Subject: Re: [one-users] Help to Download OpenNebula 3.6


On 7/19/12 12:01 PM, Tuan Le Doan wrote:
Hi all,

I'm using CentOS 6.3 i386, and I want to install OpenNebula 3.6. But when I 
search for the rpm file at this link:  http://downloads.opennebula.org/ , it 
only has OpenNebula 3.6 for CentOS x86-64, with no support for CentOS i386 :(
Can anyone help me? How can I find the setup file of OpenNebula 3.6 for my OS?
OpenNebula targets servers, not desktop computers (except for testing), and I 
don't think that i386 archs are really useful. All servers are x86_64.

On old i386 computers, I am not sure the machine is powerful enough to handle 
KVM + VMs.

Olivier


Thank you so much for your help.

--
Tuan Le Doan
Tel: +84 987 248 215
School of Electronics and Telecommunication
Ha Noi University of Science and Technology





___

Users mailing list

Users@lists.opennebula.org

http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--

Olivier Sallou

IRISA / University of Rennes 1

Campus de Beaulieu, 35000 RENNES - FRANCE

Tel: 02.99.84.71.95



gpg key id: 4096R/326D8438  (keyring.debian.org)

Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula on a VM

2012-06-08 Thread Steven C Timm

For those of you who are running the oned on a VM, which image repository type 
are you using?  Are you having any problems with performance loading VM images 
into or out of the image repository?

Steve

-Original Message-
From: users-boun...@lists.opennebula.org 
[mailto:users-boun...@lists.opennebula.org] On Behalf Of Jhon Masschelein
Sent: Friday, June 08, 2012 1:28 AM
To: computed...@gmail.com
Cc: users@lists.opennebula.org
Subject: Re: [one-users] OpenNebula on a VM

Hi,

We run the oned and mm_sched daemons in a VM and have sunstone running in its 
own VM. This way, even if sunstone gets overloaded (it is a public web site), 
the oned won't suffer.

Our opennebula VM has 4 cores and 32 GB RAM, but in our experience the load 
of the VM is very low, and if we ever need the resources for something else, 
we'll probably shrink this VM to half its size.

The sunstone VM is just a website, so you have to scale it according to the 
use. For now it's a one-core, 512MB VM and it runs quite smoothly.

Wkr,

Jhon

On 06/07/2012 05:50 PM, computed...@gmail.com wrote:
 Hi,

 Are there any restrictions on hosting OpenNebula Front-end on a
 virtual machine if enough compute resources were guaranteed for that
 VM?

 Are there recommended system requirements for an OpenNebula
 installation (CPU/RAM)? I know it all depends on the environment
 size, number of VMs/hypervisors :) .. but a general guideline would be
 helpful.

 thanks.

 Sent from my BlackBerry(r) smartphone
 ___ Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


-- 
Jhon Masschelein
Senior Systeemprogrammeur
SARA - HPCV

Science Park 140
1098 XG Amsterdam
T +31 (0)20 592 8099
F +31 (0)20 668 3167
M +31 (0)6 4748 9328
E jhon.masschel...@sara.nl
http://www.sara.nl


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] OpenNebula head node as a virtual machine?

2012-04-10 Thread Steven C Timm
Has anyone managed to successfully run the OpenNebula head node in OpenNebula 
3.x as a virtual machine in production?
I am interested in doing this for ease of migration and/or failover with 
heartbeat/DRBD.
If so, have you done it with a shared file system such as GFS, and let GFS be 
seen by the head node VM?

Steve Timm
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula head node as a virtual machine?

2012-04-10 Thread Steven C Timm
I am also thinking about running the head node as a pure KVM VM through virsh, 
i.e. outside of OpenNebula.
Do GlusterFS clients work OK inside of virtual machines?  How is the 
performance?  I have heard bits and pieces about GlusterFS but have not seen 
the full package in operation.
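
On the virsh route, for reference, the setup is minimal (domain name and XML 
path are illustrative):

  virsh define /etc/libvirt/qemu/one-headnode.xml
  virsh start one-headnode
  virsh autostart one-headnode   # restart it automatically on host boot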

Steve Timm

From: Shankhadeep Shome [mailto:shank15...@gmail.com]
Sent: Tuesday, April 10, 2012 9:35 PM
To: Steven C Timm
Cc: users@lists.opennebula.org
Subject: Re: [one-users] OpenNebula head node as a virtual machine?

I run the head node as a VM, but purely as a KVM VM controlled through virsh. 
The back-end storage is glusterfs, presented from the hypervisor nodes 
themselves.
On Tue, Apr 10, 2012 at 9:26 PM, Steven C Timm 
t...@fnal.gov wrote:
Has anyone managed to successfully run the OpenNebula head node in OpenNebula 
3.x as a virtual machine in production?
I am interested in doing this for ease of migration and/or failover with 
heartbeat/DRBD.
If so, have you done it with a Shared file system such as GFS, and let GFS be 
seen by the head node VM.

Steve Timm

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org