Re: [one-users] Shutting down a VM from within the VM

2013-11-08 Thread Carlos Martín Sánchez
Hi Simon,

On Tue, Oct 29, 2013 at 6:04 PM, Simon Boulet si...@nostalgeek.com wrote:

  Rubén could not retrieve that 'paused' state from libvirt, no matter how
  the VM was destroyed; he always got 'stopped'. Are we missing something?

 It depends on the Libvirt backend you're using and how it detects the
 state change. The paused state in libvirt is supposed to be reported
 when the VM is paused (and its state, memory, etc. preserved for
 being resumed later). You need to trick the hypervisor into thinking the
 VM has been paused when the shutdown is initiated from inside the VM.
 It's a hack; it won't work out of the box with the stock libvirt
 backends.


Oh, ok, thanks for clearing that up.


 Generally I think the Core should be more lightweight and make better
 use of external drivers, hooks, etc., limiting the Core to state
 changes, consistency, scheduling events, etc. Spreading out the
 workflow / drivers as much as possible makes it much easier to
 customize OpenNebula to each environment. Also, keeping the Core
 lightweight makes it a lot easier to maintain and optimize.
 That's why I'm generally in favour of trying to implement as much as
 we can outside the Core, when it's possible.


I totally agree, that's one of the big advantages of OpenNebula: everything
that interacts with external components is done via drivers. But in this
case I'm not so sure there is any advantage to the hook approach.

When the functionality varies depending on the underlying components, it is
clearly something that must be done with a new driver action.
For this feature, whether it is set up in oned.conf or in a hook, both will
behave in the same way:
a default transition to one of done, poweroff or undeployed, and a VM
attribute to override this for each VM.
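
Something like the following, purely as an illustration of the proposal
(neither the oned.conf key nor the attribute name exists today; both names
are made up here):

# oned.conf - global default when the hypervisor reports the VM as gone
VM_MISSING_ACTION = "poweroff"    # one of: done, poweroff, undeployed

# per-VM override, set as an attribute in the VM template
MISSING_ACTION = "done"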

There's another reason I'm not in favor of using hooks for any important
feature. Compared to driver actions, they are executed asynchronously, the
core cannot know whether the execution failed, and we cannot put timeouts
or retries in place, etc.


 What we need is a way to let the Core know that the VM was
 successfully monitored, but that the hypervisor reported the VM is
 not running.


Off the top of my head, I believe we are already doing this. A successful
monitoring run for a VM that is gone should be reported as POLL SUCCESS,
STATE='-'.


 Have you investigated Libvirt's defined VMs list? Libvirt maintains
 two different lists of VMs: the active VMs and the defined VMs. I'm
 thinking a VM that is NOT active but that is defined is a VM that was
 shut down... If OpenNebula finds a VM is defined but inactive, and it
 expected the VM to be active, then it knows the VM was unexpectedly
 shut down (by the user from inside the VM, or by some admin accessing
 the hypervisor directly - not through OpenNebula).


I know there was a reason against this in the first OpenNebula versions;
I'll try to ask other team members about it. My guess is that it would
break the management consistency between KVM and Xen, since we don't use
libvirt for Xen VMs.


 One thing to keep in mind as well for implementing this is that when a Host
 is rebooted it may take some time for the hypervisor to restart all the
 VMs. During that time Libvirt may report a VM as defined but not
 active. I am not sure if that's an issue or not; perhaps it depends
 on your hypervisor, and the order in which services are started at
 boot (are the VMs being restarted before Libvirtd is started, etc.)


One scenario where I see this being problematic is if the fault tolerance
hook has already re-deployed the VM on another host. I guess this should be
something configurable that the admin can disable.


Regards,
Carlos

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula


Re: [one-users] Shutting down a VM from within the VM

2013-11-04 Thread Nistor Andrei
On Tue, Oct 29, 2013 at 7:04 PM, Simon Boulet si...@nostalgeek.com wrote:

 Oh, yes, I get your point. The Core uses "disappear" for setting the
 VM as UNKNOWN. I think we need to keep "disappear" as it is, or at
 least keep the current UNKNOWN behaviour. If the VM can't be monitored
 for some reason (the host is down, network issues, timeout, etc.), it
 enters the UNKNOWN state and OpenNebula keeps monitoring it every interval
 until it is reported as RUNNING (or STOPPED, or whatever other state
 change).

I agree, UNKNOWN should mean "I don't know what happened to it".

 What we need is a way to let the Core know that the VM was
 successfully monitored, but that the hypervisor reported the VM is
 not running.

 Have you investigated Libvirt's defined VMs list? Libvirt maintains
 two different lists of VMs: the active VMs and the defined VMs. I'm
 thinking a VM that is NOT active but that is defined is a VM that was
 shut down... If OpenNebula finds a VM is defined but inactive, and it
 expected the VM to be active, then it knows the VM was unexpectedly
 shut down (by the user from inside the VM, or by some admin accessing
 the hypervisor directly - not through OpenNebula).

You hit the nail on the head here: OpenNebula is currently using
libvirt's transient domain feature -
http://wiki.libvirt.org/page/VM_lifecycle#Transient_guest_domains_vs_Persistent_guest_domains

I think we can modify the vmm_mad to register the deployment.X file in
libvirt before starting the VM, unregister it when the VM enters the DONE
state, and also move the domain around when the VM is migrated. This
would give us a clear picture of the state a VM is in: if it was
shut down from outside OpenNebula, the domain would remain in the
defined/stopped state. With transient domains, once the VM is shut down
it completely disappears.
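
A rough sketch of the idea with virsh (the one-<vmid> domain name matches what
the logs in this thread show; the deployment file path is illustrative):

# instead of the transient "virsh create deployment.X" the driver uses today:
virsh --connect qemu:///system define /var/lib/one/datastores/0/17/deployment.0
virsh --connect qemu:///system start one-17

# when the VM reaches DONE (or leaves a host after a migration):
virsh --connect qemu:///system undefine one-17

# a VM shut down from the inside then stays visible as defined but inactive:
virsh --connect qemu:///system list --all
#  Id    Name      State
#  -     one-17    shut off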

(I feel I've repeated almost everything Simon said, but I just wanted
to avoid a simple +1 post :)

 One thing to keep in mind as well for implementing this is that when a Host
 is rebooted it may take some time for the hypervisor to restart all the
 VMs. During that time Libvirt may report a VM as defined but not
 active. I am not sure if that's an issue or not; perhaps it depends
 on your hypervisor, and the order in which services are started at
 boot (are the VMs being restarted before Libvirtd is started, etc.)

There is an init script, in CentOS and Debian at least, that takes care
of restarting guests after a reboot (it honors the autostart setting
in the guest's domain.xml). Our domain.xml files should not have the
autostart flag set, since OpenNebula should handle host failures
according to the hooks enabled in oned.conf.
OpenNebula should also check the defined templates on the host the first
time it monitors it after a failure, and undefine the unneeded domains
(domains redeployed on other hosts, for example).

Looking forward to your feedback,
Andrei


Re: [one-users] Shutting down a VM from within the VM

2013-10-29 Thread Simon Boulet
Hi Carlos,

 We could have a global default in oned.conf, and then allow changing the
 behaviour with an attribute in the VM template. This wouldn't require any
 extra hooks, and it would work with any hypervisor.


I think that's the ideal solution! The libvirt paused method I
suggested is a hack that works with OpenNebula and turns VMs that
are internally shut down into SUSPENDED in OpenNebula.

One comment though: perhaps the extra attribute in the VM template
could be managed outside the core, and have this managed by a hook.
E.g. if someone wanted to have the Amazon
instance-initiated-shutdown-behavior:

- Set the oned default when a VM disappears to POWEROFF.
- Have a state change hook that picks up the POWEROFF state change and
parses the VM template to see if an INITIATED_SHUTDOWN_BEHAVIOR user
attribute is set. If so, parse the attribute; if it's set to e.g.
TERMINATE, cancel / delete the VM.
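
A rough sketch of what such a hook could look like, assuming it is registered
for the POWEROFF state change and receives the VM ID as its first argument
(INITIATED_SHUTDOWN_BEHAVIOR is only a proposed attribute, so treat all names
here as hypothetical):

#!/bin/bash
# $1: VM ID passed by oned to the hook
VMID=$1

# read the proposed user attribute from the VM's XML representation
BEHAVIOR=$(onevm show "$VMID" -x | \
    xmllint --xpath 'string(//USER_TEMPLATE/INITIATED_SHUTDOWN_BEHAVIOR)' -)

if [ "$BEHAVIOR" = "TERMINATE" ]; then
  onevm delete "$VMID"
fi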

Simon

On Tue, Oct 29, 2013 at 7:58 AM, Carlos Martín Sánchez
cmar...@opennebula.org wrote:
 Hi,

 I find this thread interesting, especially the
 --instance-initiated-shutdown-behavior option.
 In our case, when the driver reports that the VM has disappeared, we could
 choose to move it to the following states: unknown, done, poweroff,
 undeployed.

  We could have a global default in oned.conf, and then allow changing the
  behaviour with an attribute in the VM template. This wouldn't require any
  extra hooks, and it would work with any hypervisor.

 What do you guys think?
 --
 Carlos Martín, MSc
 Project Engineer
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula


Re: [one-users] Shutting down a VM from within the VM

2013-10-29 Thread Carlos Martín Sánchez
Hi,

On Tue, Oct 29, 2013 at 4:43 PM, Simon Boulet si...@nostalgeek.com wrote:

 The libvirt paused method I
 suggested is a hack that works with OpenNebula and turns VMs that
 are internally shut down into SUSPENDED in OpenNebula.


Rubén could not retrieve that 'paused' state from libvirt, no matter how
the VM was destroyed; he always got 'stopped'. Are we missing something?

 One comment though: perhaps the extra attribute in the VM template
 could be managed outside the core, and have this managed by a hook.
 E.g. if someone wanted to have the Amazon
 instance-initiated-shutdown-behavior:

 - Set the oned default when a VM disappears to POWEROFF.
 - Have a state change hook that picks up the POWEROFF state change and
 parses the VM template to see if an INITIATED_SHUTDOWN_BEHAVIOR user
 attribute is set. If so, parse the attribute; if it's set to e.g.
 TERMINATE, cancel / delete the VM.


I don't see any advantage to this, honestly. If you set the default
behaviour to DONE, you can't undo that with a hook and set the VM back to
poweroff...
Plus I think it's much safer to do it in the core. For example, when a Host
returns a monitor failure, all the VMs are set to UNKNOWN. But this doesn't
mean that the VM disappeared from the hypervisor, just that the VM could
not be monitored.

Cheers
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula



Re: [one-users] Shutting down a VM from within the VM

2013-10-29 Thread Simon Boulet
On Tue, Oct 29, 2013 at 12:26 PM, Carlos Martín Sánchez
cmar...@opennebula.org wrote:
 Hi,

 On Tue, Oct 29, 2013 at 4:43 PM, Simon Boulet si...@nostalgeek.com wrote:

 The libvirt paused method I
 suggested is a hack that works with OpenNebula and turns VMs that
 are internally shut down into SUSPENDED in OpenNebula.


 Rubén could not retrieve that 'paused' state from libvirt, no matter how
 the VM was destroyed; he always got 'stopped'. Are we missing something?

It depends on the Libvirt backend you're using and how it detects the
state change. The paused state in libvirt is supposed to be reported
when the VM is paused (and its state, memory, etc. preserved for
being resumed later). You need to trick the hypervisor into thinking the
VM has been paused when the shutdown is initiated from inside the VM.
It's a hack; it won't work out of the box with the stock libvirt
backends.


 One comment though: perhaps the extra attribute in the VM template
 could be managed outside the core, and have this managed by a hook.
 E.g. if someone wanted to have the Amazon
 instance-initiated-shutdown-behavior:

 - Set the oned default when a VM disappears to POWEROFF.
 - Have a state change hook that picks up the POWEROFF state change and
 parses the VM template to see if an INITIATED_SHUTDOWN_BEHAVIOR user
 attribute is set. If so, parse the attribute; if it's set to e.g.
 TERMINATE, cancel / delete the VM.


 I don't see any advantage to this, honestly.


Generally I think the Core should be more lightweight and make better
use of external drivers, hooks, etc., limiting the Core to state
changes, consistency, scheduling events, etc. Spreading out the
workflow / drivers as much as possible makes it much easier to
customize OpenNebula to each environment. Also, keeping the Core
lightweight makes it a lot easier to maintain and optimize.
That's why I'm generally in favour of trying to implement as much as
we can outside the Core, when it's possible.

 If you set the default
 behaviour to DONE, you can't undo that with a hook and set the VM back to
 poweroff...


Yes, of course, it wouldn't work with a default of DONE, because once
the VM has entered the DONE state it can't be recovered. But it would
work for other defaults; for example, a VM in the POWEROFF state can be
resumed (although a VM in POWEROFF can't be cancelled, it can only be
deleted).


 Plus I think it's much safer to do it in the core. For example, when a Host
 returns a monitor failure, all the VMs are set to UNKNOWN. But this doesn't
 mean that the VM disappeared from the hypervisor, just that the VM could not
 be monitored.



Oh, yes, I get your point. The Core uses "disappear" for setting the
VM as UNKNOWN. I think we need to keep "disappear" as it is, or at
least keep the current UNKNOWN behaviour. If the VM can't be monitored
for some reason (the host is down, network issues, timeout, etc.), it
enters the UNKNOWN state and OpenNebula keeps monitoring it every interval
until it is reported as RUNNING (or STOPPED, or whatever other state
change).

What we need is a way to let the Core know that the VM was
successfully monitored, but that the hypervisor reported the VM is
not running.

Have you investigated Libvirt's defined VMs list? Libvirt maintains
two different lists of VMs: the active VMs and the defined VMs. I'm
thinking a VM that is NOT active but that is defined is a VM that was
shut down... If OpenNebula finds a VM is defined but inactive, and it
expected the VM to be active, then it knows the VM was unexpectedly
shut down (by the user from inside the VM, or by some admin accessing
the hypervisor directly - not through OpenNebula).
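
For instance, with persistent domains the monitoring side could tell the two
cases apart with something like this (just a sketch; it assumes the one-<vmid>
naming and a libvirt recent enough to support "virsh list --name"):

DOMAIN=one-17
if virsh --connect qemu:///system list --all --name | grep -qx "$DOMAIN"; then
  if ! virsh --connect qemu:///system list --name | grep -qx "$DOMAIN"; then
    echo "$DOMAIN is defined but not running: shut down outside OpenNebula"
  fi
else
  echo "$DOMAIN is not defined at all: deleted or never deployed here"
fi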

One thing to keep in mind as well for implementing this is that when a Host
is rebooted it may take some time for the hypervisor to restart all the
VMs. During that time Libvirt may report a VM as defined but not
active. I am not sure if that's an issue or not; perhaps it depends
on your hypervisor, and the order in which services are started at
boot (are the VMs being restarted before Libvirtd is started, etc.)

Simon


Re: [one-users] Shutting down a VM from within the VM

2013-10-10 Thread Ruben S. Montero
Hi Simon + Nistor,

We've done some tests with the stock drivers: when the VM is shut down (from
inside) the VM disappears from the list, so we cannot get the state (not
even with --all). How do you get the paused state?

On the other hand, the libvirt hook seems like a good approach, since we could
create a file in the VM directory (e.g. .shutdown-inside) and report the
state accordingly. However, we made some tests and there is no difference
between the two cases.

This hook:

#!/bin/bash
# log every hook invocation (guest name, operation, sub-operation, extra args)
echo `date`: $* >> /tmp/hook

Gives the same in both cases:

Thu Oct 10 11:27:56 CEST 2013: one-17 stopped end -   shutdown inside
Thu Oct 10 11:27:56 CEST 2013: one-17 release end -
Thu Oct 10 11:33:07 CEST 2013: one-17 prepare begin -
Thu Oct 10 11:33:07 CEST 2013: one-17 start begin -  boot
Thu Oct 10 11:33:07 CEST 2013: one-17 started begin -
Thu Oct 10 11:34:02 CEST 2013: one-17 stopped end -  shutdown via
libvirt
Thu Oct 10 11:34:02 CEST 2013: one-17 release end -

So, the real problem is how to determine if the VM has been shut down from
inside or not.

Cheers

Ruben




Re: [one-users] Shutting down a VM from within the VM

2013-10-10 Thread Nistor Andrei
Hi Ruben,

Do we really care if the VM was shut down from the inside or not? I was
thinking of a hook script like the following:

#!/bin/bash
# libvirt hook: $1 is the guest name (one-<vmid>), $2 is the operation

# extract the OpenNebula VM ID from the one-<vmid> domain name
VMID=$(echo "$1" | cut -d- -f2)

if [ "$2" = "stopped" ]; then
  onevm shutdown "$VMID"
fi

It's obviously just a big fat (untested) hack, which will probably backfire
when you use onevm poweroff or onevm stop.

Anyway, the point is that we can use those hooks to notify oned that the VM
is shut down. Then oned can decide if the shutdown was initiated by the
user via onevm {shutdown,poweroff,stop}, in which case it would take the
appropriate action. If it wasn't initiated via onevm* commands, but was
initiated by the guest itself, we can take some configurable action -- shut
it down for batch jobs, or power it off for say... hosting customers.

Cheers,

Andrei


Re: [one-users] Shutting down a VM from within the VM

2013-10-07 Thread Nistor Andrei
Hi,

Maybe you can use libvirt hooks[1] to notify oned via the XML-RPC API that
the VMs have shut down?

[1] http://libvirt.org/hooks.html

Andrei


Re: [one-users] Shutting down a VM from within the VM

2013-10-04 Thread Parag Mhashilkar
Hi Sharuzzaman,

Thanks for your response. I am aware of the fact that OpenNebula requires human
intervention when shutdown is issued from inside the VM. We can write scripts
to do a lot of things, but in the business of resource provisioning the
resource provider does not necessarily control what runs in the VM or the
application that launches it, and for obvious reasons I am not giving the users
any access to ONE's database. So these alternatives seem like hacks rather than
a cleaner solution from the service.

Such a feature is useful from an infrastructure provider's point of view. If
AWS has done it (and OpenStack too, I think), then there must be a way.

-Parag


Re: [one-users] Shutting down a VM from within the VM

2013-10-04 Thread Simon Boulet
Hi,

Here our driver reports the state as returned by Libvirt [1], which
reports a VM terminated from the inside (shutdown) as Paused. When the
OpenNebula driver sees a VM reported as paused [2], it
switches the VM to the SUSPENDED state in OpenNebula. Then you can restart
the VM by issuing the resume action [3].

So, I think OpenNebula has the building blocks for that, but I'm just
unsure how it is implemented in the different OpenNebula drivers.

[1] 
http://wiki.libvirt.org/page/VM_lifecycle#States_that_a_guest_domain_can_be_in
[2] http://opennebula.org/documentation:rel4.2:devel-vmm#poll_information
[3] http://opennebula.org/documentation:rel4.2:api#onevmaction
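
To illustrate that flow from the operator's side (a sketch; it assumes the
usual one-<vmid> domain naming, e.g. VM 17, and a backend that reports the
internally shut-down VM as paused, as described above):

# the hypervisor side reports the domain as paused:
virsh --connect qemu:///system domstate one-17
# => paused

# OpenNebula then shows the VM as SUSPENDED, and it can be brought back with:
onevm resume 17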

Simon


[one-users] Shutting down a VM from within the VM

2013-10-03 Thread Parag Mhashilkar
Hi,

Does the OpenNebula EC2 interface support shutting down a VM from within the VM
itself, and have the scheduler recognize that the VM has been stopped/shut down?
How do we enable this feature? At Fermi, we have OpenNebula v3.2, and when the
VM is shut down it stays in the UNKNOWN state. Can OpenNebula get this ACPI
shutdown info from virsh and handle the situation more gracefully rather than
putting the VM in the UNKNOWN state?

Here is an example of why I think something like this is useful:

When VMs are launched to perform certain tasks (the classical equivalent of
batch nodes), only the processes running in the VM know when the task is done
and can shut down the VM, freeing up the resources. Running a VM past the
task's life wastes resources, and controlling the lifetime of the VM from
outside is not always possible.

In the case of AWS, it supports the following, which is a very good feature to
have when controlling the VMs in the above scenario:
ec2-run-instances --instance-initiated-shutdown-behavior stop|terminate

How do we achieve this with OpenNebula?

Thanks & Regards
+==
| Parag Mhashilkar
| Fermi National Accelerator Laboratory, MS 120
| Wilson & Kirk Road, Batavia, IL - 60510
|--
| Phone: 1 (630) 840-6530 Fax: 1 (630) 840-2783
|--
| Wilson Hall, 806E (Nov 8, 2012 - To date)
| Wilson Hall, 867E (Nov 17, 2010 - Nov 7, 2012)
| Wilson Hall, 863E (Apr 24, 2007 - Nov 16, 2010)
| Wilson Hall, 856E (Mar 21, 2005 - Apr 23, 2007)
+==



Re: [one-users] Shutting down a VM from within the VM

2013-10-03 Thread Sharuzzaman Ahmat Raslan
Hi Parag,

I believe OpenNebula needs human intervention to really determine
whether or not to remove the VM that it has deployed.

I also think that you can write a script that signals or calls an OpenNebula
command as soon as the task finishes, to shut down the VM. Or, if calling the
command directly is not possible, maybe your application can write some status
to a database, and a script on the OpenNebula side can read that status and
make a decision from it.
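
As a rough sketch of the second idea (all names here are made up, and a plain
status file stands in for whatever database or channel the application uses):

#!/bin/bash
# runs periodically on the OpenNebula front-end, e.g. from cron
VMID=17

# hypothetical status written by the application running inside the VM
STATUS=$(cat /var/tmp/job-status-$VMID 2>/dev/null)

if [ "$STATUS" = "finished" ]; then
  onevm shutdown "$VMID"
fi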

Thanks.


-- 
Sharuzzaman Ahmat Raslan