[one-users] Bulk delete of vCenter VM's leaves stray VM's

2014-11-12 Thread Sebastiaan Smit
Hi list,

We're testing the vCenter functionality in version 4.10 and see some strange 
behaviour while doing bulk actions.

Deleting VMs sometimes leaves stray VMs on our cluster. We see the following 
in the VM log:

Sun Nov  9 15:51:34 2014 [Z0][LCM][I]: New VM state is RUNNING
Wed Nov 12 17:30:36 2014 [Z0][LCM][I]: New VM state is CLEANUP.
Wed Nov 12 17:30:36 2014 [Z0][VMM][I]: Driver command for 60 cancelled
Wed Nov 12 17:30:36 2014 [Z0][DiM][I]: New VM state is DONE
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Command execution 
fail: /var/lib/one/remotes/vmm/vcenter/cancel 
'423cdcae-b6b3-07c1-def6-96b9f3f4b7b3' 'demo-01' 60 demo-01
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Cancel of VM 
423cdcae-b6b3-07c1-def6-96b9f3f4b7b3 on host demo-01 failed due to 
ManagedObjectNotFound: The object has already been deleted or has not been 
completely created
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 ExitCode: 255
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Failed to execute 
virtualization driver operation: cancel.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Successfully execute 
network driver operation: clean.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: CLEANUP SUCCESS 60

We see a different failure while bulk-creating VMs (20+ at a time):

Sun Nov  9 16:01:34 2014 [Z0][DiM][I]: New VM state is ACTIVE.
Sun Nov  9 16:01:34 2014 [Z0][LCM][I]: New VM state is PROLOG.
Sun Nov  9 16:01:34 2014 [Z0][LCM][I]: New VM state is BOOT
Sun Nov  9 16:01:34 2014 [Z0][VMM][I]: Generating deployment file: 
/var/lib/one/vms/81/deployment.0
Sun Nov  9 16:01:34 2014 [Z0][VMM][I]: Successfully execute network driver 
operation: pre.
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Command execution fail: 
/var/lib/one/remotes/vmm/vcenter/deploy '/var/lib/one/vms/81/deployment.0' 
'demo-01' 81 demo-01
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Deploy of VM 81 on host demo-01 with 
/var/lib/one/vms/81/deployment.0 failed due to undefined method `uuid' for 
nil:NilClass
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: ExitCode: 255
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Failed to execute virtualization driver 
operation: deploy.
Sun Nov  9 16:01:36 2014 [Z0][VMM][E]: Error deploying virtual machine
Sun Nov  9 16:01:36 2014 [Z0][DiM][I]: New VM state is FAILED
Wed Nov 12 17:30:19 2014 [Z0][DiM][I]: New VM state is DONE.
Wed Nov 12 17:30:19 2014 [Z0][LCM][E]: epilog_success_action, VM in a wrong 
state


I think these have two different root causes. The cluster is not under load.


Has anyone else seen this behaviour?

Best regards,
-- 
Sebastiaan Smit
Echelon BV

E: b...@echelon.nl
W: www.echelon.nl
T: (088) 3243566 (changed number)
T: (088) 3243505 (servicedesk)
F: (053) 4336222

KVK: 06055381


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Deleting vcenter VM's in POWEROFF state leaves stray VM's on cluster

2015-02-12 Thread Sebastiaan Smit
Hi Tino,

I think we have a misunderstanding. This is what I did:

- Shutdown the VM from within the guest
- OpenNebula learned that the machine was powered off
- From within OpenNebula delete the VM
- VM is removed from OpenNebula's database
- VM stays on vcenter cluster (in poweroff state)

I expected that OpenNebula would delete the VM from my vcenter cluster.

If I look at the VM state diagram (*1), my reading is that from any state you 
can issue a delete and go to DONE. Or is my assumption wrong?

Best regards,

*1) http://archives.opennebula.org/_media/documentation:rel4.4:states-simple.png
--
Sebastiaan Smit
Echelon BV
 
E: b...@echelon.nl
T: (088) 3243566 (main number)
T: (088) 3243505 (servicedesk)
F: (053) 4336222
W: www.echelon.nl
 
KVK: 06055381


-Original message-
From: Tino Vazquez [mailto:cvazquez@opennebula.systems] 
Sent: Wednesday, 11 February 2015 12:44
To: Sebastiaan Smit
CC: users@lists.opennebula.org
Subject: Re: [one-users] Deleting vcenter VM's in POWEROFF state leaves stray 
VM's on cluster

Hi Sebastiaan,

Thanks a lot for the testing, this is precious feedback.

I'm afraid that, by design, and to maintain compatibility with other 
hypervisors, we cannot change OpenNebula's assumption that a VM in poweroff 
has already been removed from the hypervisor.

There are two alternatives:

  * at the time of powering off the VM in vCenter, unregister it as well

  * power the VM on first, and then delete it

Best,

-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
VP of Engineering | Head of Research at OpenNebula Systems 
cvazquez@OpenNebula.Systems | @OpenNebula



On 11 February 2015 at 11:55, Sebastiaan Smit b...@echelon.nl wrote:
 Hi Tino,

 Yes, I did. It was a robustness check. OpenNebula detected the poweroff 
 state, but was not able to clean the VM up after deletion.


 Best,

 Sebastiaan



 On 11 Feb 2015 at 11:09, Tino Vazquez 
 cvazquez@opennebula.systems wrote:

 Hi Sebastiaan,

 We are trying to reproduce this problem. From the log it looks like 
 the VM went to poweroff state on its own. Did you power it off from 
 vCenter?

 Best,

 -Tino

 On Tue Feb 10 2015 at 1:04:33 PM Sebastiaan Smit b...@echelon.nl wrote:

 Hi list,



 I’ve encountered strange behavior with my vcenter 4.10.2 demo setup. 
 When I delete a VM which is in POWEROFF state, it disappears as 
 expected from OpenNebula, but remains in poweroff state on my vcenter 
 cluster. The last log lines of the VM are the following:



 Wed Feb  4 22:32:47 2015 [Z0][VMM][I]: Successfully execute network 
 driver
 operation: pre.

 Wed Feb  4 22:41:23 2015 [Z0][VMM][I]: Successfully execute 
 virtualization driver operation: deploy.

 Wed Feb  4 22:41:23 2015 [Z0][VMM][I]: Successfully execute network 
 driver
 operation: post.

 Wed Feb  4 22:41:23 2015 [Z0][LCM][I]: New VM state is RUNNING

 Tue Feb 10 12:55:06 2015 [Z0][VMM][I]: VM running but monitor state 
 is POWEROFF

 Tue Feb 10 12:55:06 2015 [Z0][DiM][I]: New VM state is POWEROFF

 Tue Feb 10 12:56:18 2015 [Z0][VMM][I]: VM running but monitor state 
 is POWEROFF

 Tue Feb 10 12:57:30 2015 [Z0][VMM][I]: VM running but monitor state 
 is POWEROFF

 Tue Feb 10 12:57:37 2015 [Z0][DiM][I]: New VM state is DONE.

 Tue Feb 10 12:57:37 2015 [Z0][LCM][E]: epilog_success_action, VM in a 
 wrong state



 Has anybody else encountered the same situation?



 Thanks in advance,

 --

 Sebastiaan Smit

 Echelon BV

 E: b...@echelon.nl

 T: (088) 3243566 (main number)

 T: (088) 3243505 (servicedesk)

 F: (053) 4336222

 W: www.echelon.nl

 KVK: 06055381






Re: [one-users] Bulk delete of vCenter VM's leaves stray VM's

2015-01-05 Thread Sebastiaan Smit
Hi Javier,

The fix for the bulk-creation bug works as expected now. Do you have any idea 
what the problem is while bulk deleting VMs?

Best regards,

Sebastiaan Smit

From: Javier Fontan [mailto:jfon...@opennebula.org]
Sent: Friday, 14 November 2014 15:44
To: Sebastiaan Smit; users@lists.opennebula.org
Subject: Re: [one-users] Bulk delete of vCenter VM's leaves stray VM's

There was a bug in the driver that caused errors when deploying several VMs at 
the same time. To fix it, change the file 
/var/lib/one/remotes/vmm/vcenter/vcenter_driver.rb at line 120 from this code:

def find_vm_template(uuid)
    vms = @dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine)

    return vms.find{ |v| v.config.uuid == uuid }
end

to this other one:

def find_vm_template(uuid)
    vms = @dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine)

    return vms.find{ |v| v.config && v.config.uuid == uuid }
end
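(For clarity on why the guard matters: vCenter can return VirtualMachine 
objects whose config property is still nil, presumably while the machine is 
not completely created, so calling .uuid on it raises the 
"undefined method `uuid' for nil:NilClass" seen in the log. A minimal 
self-contained Ruby sketch of the difference, using stub classes in place of 
the real RbVmomi objects:)

```ruby
# Stubs standing in for RbVmomi::VIM::VirtualMachine objects; a real VM's
# config can be nil while vCenter is still creating the machine.
VmStub   = Struct.new(:config)
VmConfig = Struct.new(:uuid)

# Unguarded lookup: raises NoMethodError when any VM has a nil config.
def find_vm_unguarded(vms, uuid)
  vms.find { |v| v.config.uuid == uuid }
end

# Guarded lookup, as in the patched driver: VMs without a config are skipped.
def find_vm_guarded(vms, uuid)
  vms.find { |v| v.config && v.config.uuid == uuid }
end

vms = [VmStub.new(nil),                          # VM still being created
       VmStub.new(VmConfig.new("423c-example"))] # fully created VM

found = find_vm_guarded(vms, "423c-example")
```

With the guard, a half-created VM in the folder is simply skipped instead of 
aborting the whole lookup.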

We are still looking into the problem when deleting several VMs.

Thanks for telling us.

On Thu Nov 13 2014 at 12:59:55 PM Javier Fontan 
jfon...@opennebula.org wrote:
Hi,

We have opened an issue to track this problem:

http://dev.opennebula.org/issues/3334

Meanwhile, you can decrease the number of actions sent by changing the -t 
parameter (number of threads) for the VM driver in /etc/one/oned.conf. For 
example:

VM_MAD = [
    name       = "vcenter",
    executable = "one_vmm_sh",
    arguments  = "-p -t 2 -r 0 vcenter -s sh",
    type       = "xml" ]
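(To illustrate why lowering -t helps: the flag caps how many driver actions run 
concurrently, so 20+ deletes are drained a couple at a time instead of hitting 
vCenter all at once. A rough, generic Ruby sketch of such a cap; this is not 
the actual OpenNebula driver code, and the names are made up for illustration:)

```ruby
require 'thread'

# Run the given actions with at most `size` in flight at once,
# mimicking the effect of the -t driver flag. Returns the peak
# number of concurrently running actions (never more than `size`).
def run_with_thread_cap(actions, size)
  queue = Queue.new
  actions.each { |a| queue << a }

  peak   = 0
  active = 0
  lock   = Mutex.new

  threads = Array.new(size) do
    Thread.new do
      loop do
        begin
          action = queue.pop(true) # non-blocking pop; raises when empty
        rescue ThreadError
          break                    # queue drained, worker exits
        end
        lock.synchronize { active += 1; peak = [peak, active].max }
        action.call
        lock.synchronize { active -= 1 }
      end
    end
  end
  threads.each(&:join)
  peak
end

# 20 dummy "driver actions", but never more than 2 running at a time.
peak = run_with_thread_cap(Array.new(20) { -> { sleep 0.01 } }, 2)
```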

Cheers

On Wed Nov 12 2014 at 5:40:00 PM Sebastiaan Smit 
b...@echelon.nl wrote:
Hi list,

We're testing the vCenter functionality in version 4.10 and see some strange 
behaviour while doing bulk actions.

Deleting VMs sometimes leaves stray VMs on our cluster. We see the following 
in the VM log:

Sun Nov  9 15:51:34 2014 [Z0][LCM][I]: New VM state is RUNNING
Wed Nov 12 17:30:36 2014 [Z0][LCM][I]: New VM state is CLEANUP.
Wed Nov 12 17:30:36 2014 [Z0][VMM][I]: Driver command for 60 cancelled
Wed Nov 12 17:30:36 2014 [Z0][DiM][I]: New VM state is DONE
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Command execution 
fail: /var/lib/one/remotes/vmm/vcenter/cancel 
'423cdcae-b6b3-07c1-def6-96b9f3f4b7b3' 'demo-01' 60 demo-01
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Cancel of VM 
423cdcae-b6b3-07c1-def6-96b9f3f4b7b3 on host demo-01 failed due to 
ManagedObjectNotFound: The object has already been deleted or has not been 
completely created
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 ExitCode: 255
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Failed to execute 
virtualization driver operation: cancel.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Successfully execute 
network driver operation: clean.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: CLEANUP SUCCESS 60

We see a different failure while bulk-creating VMs (20+ at a time):

Sun Nov  9 16:01:34 2014 [Z0][DiM][I]: New VM state is ACTIVE.
Sun Nov  9 16:01:34 2014 [Z0][LCM][I]: New VM state is PROLOG.
Sun Nov  9 16:01:34 2014 [Z0][LCM][I]: New VM state is BOOT
Sun Nov  9 16:01:34 2014 [Z0][VMM][I]: Generating deployment file: 
/var/lib/one/vms/81/deployment.0
Sun Nov  9 16:01:34 2014 [Z0][VMM][I]: Successfully execute network driver 
operation: pre.
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Command execution fail: 
/var/lib/one/remotes/vmm/vcenter/deploy '/var/lib/one/vms/81/deployment.0' 
'demo-01' 81 demo-01
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Deploy of VM 81 on host demo-01 with 
/var/lib/one/vms/81/deployment.0 failed due to undefined method `uuid' for 
nil:NilClass
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: ExitCode: 255
Sun Nov  9 16:01:36 2014 [Z0][VMM][I]: Failed to execute virtualization driver 
operation: deploy.
Sun Nov  9 16:01:36 2014 [Z0][VMM][E]: Error deploying virtual machine
Sun Nov  9 16:01:36 2014 [Z0][DiM][I]: New VM state is FAILED
Wed Nov 12 17:30:19 2014 [Z0][DiM][I]: New VM state is DONE.
Wed Nov 12 17:30:19 2014 [Z0][LCM][E]: epilog_success_action, VM in a wrong 
state


I think these have two different root causes. The cluster is not under load.


Has anyone else seen this behaviour?

Best regards,
--
Sebastiaan Smit
Echelon BV

E: b...@echelon.nl
W: www.echelon.nl
T: (088) 3243566 (changed number)
T: (088) 3243505 (servicedesk)
F: (053) 4336222

KVK: 06055381




[one-users] Deleting vcenter VM's in POWEROFF state leaves stray VM's on cluster

2015-02-10 Thread Sebastiaan Smit
Hi list,

I've encountered strange behavior with my vcenter 4.10.2 demo setup. When I 
delete a VM which is in POWEROFF state, it disappears as expected from 
OpenNebula, but remains in poweroff state on my vcenter cluster. The last log 
lines of the VM are the following:

Wed Feb  4 22:32:47 2015 [Z0][VMM][I]: Successfully execute network driver 
operation: pre.
Wed Feb  4 22:41:23 2015 [Z0][VMM][I]: Successfully execute virtualization 
driver operation: deploy.
Wed Feb  4 22:41:23 2015 [Z0][VMM][I]: Successfully execute network driver 
operation: post.
Wed Feb  4 22:41:23 2015 [Z0][LCM][I]: New VM state is RUNNING
Tue Feb 10 12:55:06 2015 [Z0][VMM][I]: VM running but monitor state is POWEROFF
Tue Feb 10 12:55:06 2015 [Z0][DiM][I]: New VM state is POWEROFF
Tue Feb 10 12:56:18 2015 [Z0][VMM][I]: VM running but monitor state is POWEROFF
Tue Feb 10 12:57:30 2015 [Z0][VMM][I]: VM running but monitor state is POWEROFF
Tue Feb 10 12:57:37 2015 [Z0][DiM][I]: New VM state is DONE.
Tue Feb 10 12:57:37 2015 [Z0][LCM][E]: epilog_success_action, VM in a wrong 
state

Has anybody else encountered the same situation?

Thanks in advance,
--
Sebastiaan Smit
Echelon BV
E: b...@echelon.nl
T: (088) 3243566 (main number)
T: (088) 3243505 (servicedesk)
F: (053) 4336222
W: www.echelon.nl
KVK: 06055381



Re: [one-users] Deleting vcenter VM's in POWEROFF state leaves stray VM's on cluster

2015-02-11 Thread Sebastiaan Smit
Hi Tino,

Yes, I did. It was a robustness check. OpenNebula detected the poweroff state, 
but was not able to clean the VM up after deletion.


Best,

Sebastiaan



On 11 Feb 2015 at 11:09, Tino Vazquez 
cvazquez@opennebula.systems wrote:

Hi Sebastiaan,

We are trying to reproduce this problem. From the log it looks like the VM went 
to poweroff state on its own. Did you power it off from vCenter?

Best,

-Tino

On Tue Feb 10 2015 at 1:04:33 PM Sebastiaan Smit 
b...@echelon.nl wrote:
Hi list,

I’ve encountered strange behavior with my vcenter 4.10.2 demo setup. When I 
delete a VM which is in POWEROFF state, it disappears as expected from 
OpenNebula, but remains in poweroff state on my vcenter cluster. The last log 
lines of the VM are the following:

Wed Feb  4 22:32:47 2015 [Z0][VMM][I]: Successfully execute network driver 
operation: pre.
Wed Feb  4 22:41:23 2015 [Z0][VMM][I]: Successfully execute virtualization 
driver operation: deploy.
Wed Feb  4 22:41:23 2015 [Z0][VMM][I]: Successfully execute network driver 
operation: post.
Wed Feb  4 22:41:23 2015 [Z0][LCM][I]: New VM state is RUNNING
Tue Feb 10 12:55:06 2015 [Z0][VMM][I]: VM running but monitor state is POWEROFF
Tue Feb 10 12:55:06 2015 [Z0][DiM][I]: New VM state is POWEROFF
Tue Feb 10 12:56:18 2015 [Z0][VMM][I]: VM running but monitor state is POWEROFF
Tue Feb 10 12:57:30 2015 [Z0][VMM][I]: VM running but monitor state is POWEROFF
Tue Feb 10 12:57:37 2015 [Z0][DiM][I]: New VM state is DONE.
Tue Feb 10 12:57:37 2015 [Z0][LCM][E]: epilog_success_action, VM in a wrong 
state

Has anybody else encountered the same situation?

Thanks in advance,
--
Sebastiaan Smit
Echelon BV
E: b...@echelon.nl
T: (088) 3243566 (main number)
T: (088) 3243505 (servicedesk)
F: (053) 4336222
W: www.echelon.nl
KVK: 06055381
