[openstack-dev] New OpenStack project for rolling maintenance and upgrade in interaction with application on top of it

2018-05-29 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

I am the PTL of the OPNFV Doctor project.

For a couple of years I have been working on infrastructure maintenance in 
interaction with the applications running on top of it. I have looked into Nova 
and Craton, and held several Ops sessions. Over the past half a year there have 
been a couple of different PoCs, the last one in March at ONS [1][2].

At the OpenStack Summit in Vancouver last week it was time to present this work 
[3]. The Forum discussion following the presentation considered whether to do 
this just by utilizing different existing projects; but to make this generic, 
pluggable, easily adapted and future proof, it now comes down to starting what 
I almost started a couple of years ago: the OpenStack Fenix project [4].

On behalf of OPNFV Doctor I would welcome any last thoughts before starting the 
project, and would also love to see somebody joining to make Fenix fly.

Main use cases (listing most of them):
*   As a cloud admin I want to maintain and upgrade my infrastructure in a 
rolling fashion.
*   As a cloud admin I want to have a pluggable workflow to maintain and 
upgrade my infrastructure, to ensure it can be done with complicated 
infrastructure components and in interaction with the different application 
payloads on top of it.
*   As an infrastructure service, I need to know whether infrastructure 
unavailability is because of planned maintenance.
*   As a critical application owner, I want to be aware of any planned 
downtime affecting my service.
*   As a critical application owner, I want to interact with the 
infrastructure rolling maintenance workflow, to have a time window to ensure 
zero downtime for my service and to be able to decide on admin actions like 
migration of my instances (see the sketch after this list).
*   As an application owner, I need to know when an admin action like 
migration is complete.
*   As an application owner, I want to know about new capabilities coming 
with infrastructure maintenance or upgrade, so my application can also take 
them into use. This could be a hardware capability or, for example, an 
OpenStack upgrade.
*   As a critical application that needs to scale by varying load, I need 
to interactively know about infrastructure resources scaling up and down, so I 
can scale my application at the same time while keeping zero downtime for my 
service.
*   As a critical application, I want the retirement of my service done in a 
controlled fashion.
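
For concreteness, here is a minimal sketch of the application-side interaction 
referred to above. Fenix does not exist yet, so every event type, trait and 
endpoint below is an assumption for illustration, not a real API:

    # Hypothetical sketch only - none of these event types, traits or
    # endpoints are a real Fenix API. An application manager receives a
    # planned-maintenance alarm for one of its instances and acknowledges
    # within the offered time window.
    import json
    import requests

    def prepare_instance(instance_id):
        # Application-level preparation: HA switchover, scaling, migration...
        print('preparing %s for maintenance' % instance_id)

    def on_maintenance_alarm(alarm_body):
        event = json.loads(alarm_body)['reason_data']['event']
        # Aodh event traits come as [name, type, value] triples.
        traits = {t[0]: t[2] for t in event['traits']}
        if traits.get('state') == 'PLANNED_MAINTENANCE':  # assumed state name
            prepare_instance(traits['instance_id'])
            # Tell the maintenance workflow we are ready (assumed endpoint).
            requests.put(traits['reply_url'],
                         json={'instance_id': traits['instance_id'],
                               'state': 'ACK_PLANNED_MAINTENANCE'})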

[1] Infrastructure Maintenance & Upgrade: Zero VNF Downtime with OPNFV Doctor 
on OCP Hardware (video)
[2] Infrastructure Maintenance & Upgrade: Zero VNF Downtime with OPNFV Doctor 
on OCP Hardware (slides)
[3] How to gain VNF zero downtime during Infrastructure Maintenance and 
Upgrade
[4] Fenix project wiki
[5] Doctor design guideline (draft)


Best Regards,
Tomi Juvonen





Re: [openstack-dev] [Vitrage] [aodh] Integration with doctor

2017-07-09 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

Inline for “set instance state”

Br,
Tomi


From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Sunday, July 09, 2017 3:14 PM
To: dong.wenj...@zte.com.cn; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Vitrage] [aodh] Integration with doctor

Hi dwj,

Adding [aodh] for item #2.
Please see my answers inline.

Best Regards,
Ifat.



From: "dong.wenj...@zte.com.cn" 
>
Date: Friday, 7 July 2017 at 10:53

Hi,



For the integration of Vitrage with Doctor, I have some issues which need to be 
confirmed :)



1.  Notification strategies: conservative (nova->Aodh)

@Ifat

In the host_down_scenarios.yaml[1], we only set the host state to error.

But in Doctor's current use case, we need to call the Nova API 'nova 
reset-state' to set the instance state to error, to trigger the event alarm and 
notify the consumer.

Does this need to be fixed?

[Tomi] Either have this fixed, or propose an alternative in the Doctor project 
where the project (tenant) is able to get an alarm about affected instances by 
other means. The project anyhow already knows about "host forced down" through 
"host_status" in the Nova servers API, so it is just about also getting an 
alarm at the instance (server) level.

We need to add the 'Aodh' and 'Nova' notifier plugins in the Vitrage config 
file for Doctor integration, right?



[Ifat] Vitrage currently does not set the instance state, but this can easily 
be added. The required steps are:

  *   Add a "set state" action for the instances in the template yaml file
  *   In the Nova notifier, add a call to the nova reset-state API (see the 
sketch after this list)
  *   The Aodh plugin is not needed in this case, since the flow is Vitrage -> 
Nova -> Aodh, with no direct calls from Vitrage to Aodh
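
For the second step, a minimal sketch (not actual Vitrage code) of what the 
call behind such a Nova notifier could look like with python-novaclient; the 
auth values and instance id are placeholders:

    # Hypothetical sketch of the "Nova notifier" step above: when a "set
    # state" action fires for an instance, call nova reset-state so the
    # instance goes to ERROR and Nova emits an event Aodh can alarm on.
    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',     # placeholder endpoint
        username='admin', password='secret',      # placeholder credentials
        project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    nova = nova_client.Client('2.1', session=sess)

    # Equivalent of the CLI "nova reset-state <server>" (defaults to error).
    nova.servers.reset_state('INSTANCE_UUID', state='error')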





2. Notification strategies: shortcut  (inspector->Aodh)

@Ryota

Do we have any plans to change to shortcut notification?

@Ifat

If we use the shortcut notification, in the Aodh notifier plugin, maybe we need 
to create aodh alarm with 'alarm_actions'.



[Ifat] The Aodh plugin is defined as a POC, and is not very usable. We have 
discussed with the Aodh team possible enhancements (like adding an 
"external/custom alarm" in Aodh) that would enable creating an Aodh alarm from 
Vitrage, but we didn't reach a clear conclusion. There are a few problems with 
the current implementation, related to the facts that the event-alarm does not 
exactly suit the use case, and that Vitrage may raise the same alarm several 
times for different instances. If you choose the shortcut strategy, we will have 
to open this discussion again.

That said, I believe that the shortcut strategy is much better for 
Vitrage. While with the conservative strategy we can support notifications to 
Nova, with the shortcut strategy we can write the Aodh plugin once and support 
all kinds of notifications (to Nova, Neutron, Heat or even external components) 
with no additional effort.
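
For concreteness, a sketch of the kind of Aodh event alarm being discussed, 
created with python-aodhclient; the values are placeholders and the exact 
event type and query would depend on the chosen strategy:

    # Sketch only: a tenant-owned event alarm that fires when one of the
    # tenant's instances is updated to error, notifying a consumer webhook.
    from aodhclient import client as aodh_client

    aodh = aodh_client.Client('2', session=sess)  # sess: keystoneauth1 session
    alarm = aodh.alarm.create({
        'name': 'instance-maintenance-alarm',
        'type': 'event',
        'alarm_actions': ['http://app-manager.example/alarm'],  # consumer URL
        'event_rule': {
            'event_type': 'compute.instance.update',
            'query': [{'field': 'traits.instance_id', 'op': 'eq',
                       'value': 'INSTANCE_UUID'}],
        },
    })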





Correct me if I missed something.



[1]. 
https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/host_down_scenarios.yaml



BR,

dwj












Re: [openstack-dev] Arrivederci

2017-03-22 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi Ian,

It was nice to have known you through the Craton project. Thanks for everything 
you have done.

All the best,
Tomi

> -Original Message-
> From: Ian Cordasco [mailto:sigmaviru...@gmail.com]
> Sent: Wednesday, March 22, 2017 2:07 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] Arrivederci
> 
> Hi everyone,
> 
> Friday 24 March 2017 will be my last day working on OpenStack. I'll remove
> myself from teams (glance, craton, security, hacking) on Friday and
> unsubscribe
> from the OpenStack mailing lists.
> 
> I want to thank all of you for the last ~3 years. I've learned quite a bit
> from all of you. It's been a unique privilege to call the people in this
> community my colleagues. Treat each other well. Don't let minor technical
> arguments cause rifts in the community. Lift each other up.
> 
> As for me, I'm moving onto something completely different. You all are
> welcome
> to keep in touch via email, IRC, or some other method. At the very
> least, I'll see y'all
> around PyCon, the larger F/OSS world, etc.
> 
> --
> Ian Cordasco
> IRC/Git{Hub,Lab}/Twitter: sigmavirus24
> 


Re: [openstack-dev] [Craton] NFV planned host maintenance

2016-11-17 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Many thanks Cratoners,

I think all this looks promising, while some things still need to be worked 
out. I have to chew on this a bit and also join IRC to discuss further (just 
note that I am in a different timezone).

As seen in the discussion, there might be a problem in making a notification 
that can be used to raise an alarm for a tenant about his VMs being affected by 
maintenance. Anyhow, as one should have the capability to disable host 
monitoring, it means the monitoring service should be aware of the planned 
maintenance. Currently a monitoring service like Vitrage (actually sitting on 
top of raw monitoring SW and aware of the cloud topology) is able to make 
notifications that can be consumed by the tenant as alarms. Meaning it should 
easily be able to make a tenant-specific alarm about the planned maintenance.

The Nova servers API now has host_status (Mitaka: get-valid-server-state BP) 
that can tell the tenant that nova-compute is disabled ("MAINTENANCE"). A new 
alarm would then state that this is because of planned maintenance. Surely an 
API would still be missing where the tenant could see the same when querying 
his servers (where the Nova work I mentioned aims to have a link). This might 
be something needing more thinking.
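
As a side note, a tenant can already read host_status today with something 
like the following minimal sketch (not Doctor code); it needs compute API 
microversion 2.16 or later, and `sess` is assumed to be an authenticated 
keystoneauth1 session:

    # Sketch: list my servers and the host_status exposed by the
    # get-valid-server-state work (microversion >= 2.16).
    from novaclient import client as nova_client

    nova = nova_client.Client('2.16', session=sess)
    for server in nova.servers.list():
        info = server.to_dict()
        # 'MAINTENANCE' only says nova-compute is disabled; the new alarm
        # discussed above would be needed to say it is *planned* maintenance.
        print(info['name'], info.get('host_status'))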

See you in #craton and happy to work with such a great project!

Br,
Tomi

From: Jim Baker [mailto:jim.ba...@python.org]
Sent: Thursday, November 17, 2016 4:52 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Craton] NFV planned host maintenance



On Wed, Nov 16, 2016 at 10:54 AM, Sulochan Acharya 
<sulo.f...@gmail.com> wrote:
Hi,

On Wed, Nov 16, 2016 at 2:46 PM, Ian Cordasco 
<sigmaviru...@gmail.com> wrote:
-----Original Message-----
From: Juvonen, Tomi (Nokia - FI/Espoo) 
<tomi.juvo...@nokia.com>
Reply: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Date: November 11, 2016 at 02:27:19
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject:  [openstack-dev] [Craton] NFV planned host maintenance

> I have been looking over the past two OpenStack summits at the changes needed
> to fulfill the OPNFV Doctor use case for planned host maintenance, and at the
> same time trying to find other Ops requirements to satisfy different needs. I
> was just about to start a new project (Fenix), but looking at Craton, it seems
> a good alternative and was proposed to me in the Barcelona meetup. Here are
> some ideas and I would like comments on whether Craton could be used here.

Hi Tomi,

Thanks for your interest in craton! I'm replying in-line, but please
come and join us in #craton on Freenode as well!

> OPNFV Doctor / NFV requirements are described here:
> http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
> http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
> http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance
>
> My rough thoughts about what would be initially needed (as short as I can):
>
> - There should be a database of all hosts matching to what is known by Nova.

So I think this might be the first problem that you'll run into with Craton.

Craton is designed to specifically manage the physical devices in a
data centre. At the moment, it only considers the hosts that you'd run
Nova on, not the Virtual Machines that Nova is managing on the Compute
hosts.

Craton's inventory supports the following modeling:

  1.  devices, which may have a parent (so a strict tree); we map this against 
such entities as top-of-rack switches; hosts; and containers
  2.  logical relationships for these devices, including project, region, cell 
(optional); and arbitrary labels (tags)
  3.  key/value variables on most entities, including devices. Variables 
support resolution - an override mechanism where values are looked up against 
some chain (for device, that's the device tree, cell, region, in that order). 
Values are typed JSON in the underlying (and default) SQLAlchemy model we use.
Craton users synchronize the device inventory from other source-of-truth 
systems, such as an asset database, or perhaps manually. Meanwhile, variables 
can reflect desired-state configuration (like Ansible), as well as captured 
information.
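
This is not Craton's actual implementation, but the override-resolution idea 
reads roughly like this minimal sketch, with made-up keys and values:

    # Sketch: resolve a variable along an override chain, most specific
    # scope first (device tree, then cell, then region).
    def resolve(key, chain):
        for scope in chain:      # chain: dicts from most to least specific
            if key in scope:
                return scope[key]
        raise KeyError(key)

    device = {'disabled_reason': 'planned maintenance // ticket 42'}
    cell = {'maintenance_window': '2016-11-20T02:00Z/2016-11-20T04:00Z'}
    region = {'maintenance_window': None, 'disabled_reason': None}

    print(resolve('maintenance_window', [device, cell, region]))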


It's plausible that we could add the ability to track virtual
machines, but Craton is meant to primarily work underneath the cloud.
I think this might be changing since Craton is looking forward to
helping manage a multi-cloud environment, so it's possible this won't
be an issue for long.

Craton's device-focused model, although oriented to hardware, is rather 
arbit

[openstack-dev] [Craton] NFV planned host maintenance

2016-11-11 Thread Juvonen, Tomi (Nokia - FI/Espoo)
I have been looking over the past two OpenStack summits at the changes needed
to fulfill the OPNFV Doctor use case for planned host maintenance, and at the
same time trying to find other Ops requirements to satisfy different needs. I
was just about to start a new project (Fenix), but looking at Craton, it seems
a good alternative and was proposed to me in the Barcelona meetup. Here are
some ideas and I would like comments on whether Craton could be used here.

OPNFV Doctor / NFV requirements are described here:
http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance

My rough thoughts about what would be initially needed (as short as I can):

- There should be a database of all hosts matching what is known by Nova.
- There should be an API for the Cloud Admin to set a planned maintenance window
  for a host (maybe an aggregate or group of hosts) when going into maintenance,
  and to unset it when finished. There might be some optional parameters like a
  target host where to move things currently running on the affected host. It
  could also be used for retirement of a host (see the sketch after this list).
- There should be project (tenant) and host specific notifications that could:
    - Trigger an alarm in Aodh so the application would be aware of maintenance
      state changes affecting its servers, so zero downtime of the application
      could be guaranteed.
    - Be consumed by a workflow engine like Mistral, where application-server-
      specific action flows and admin action flows could be performed (to move
      servers away, disable the host, ...).
    - Be consumed by host monitoring like Vitrage, to disable alarms for the
      host while planned maintenance is ongoing and the host is not down by
      fault.
- There should be admin and project level APIs to query the maintenance session
  status.
- Workflow status should be queried or read as notifications, to keep internal
  state and send further notifications.
- Some more discussion also in "BCN-ops-informal-meetup" that goes beyond this:
  https://etherpad.openstack.org/p/BCN-ops-informal-meetup
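
Purely hypothetical, since no such tool exists yet, but the admin API proposed 
in the second bullet might look roughly like this from a client (the endpoint, 
fields and token are all invented):

    # Sketch: set a planned maintenance window for a host, with an optional
    # target host for moving the current payload.
    import requests

    resp = requests.post(
        'http://maintenance-tool.example/v1/maintenance',  # invented endpoint
        json={'hosts': ['compute-3'],
              'state': 'planned',
              'window_start': '2016-11-20T02:00:00Z',
              'window_end': '2016-11-20T04:00:00Z',
              'target_host': 'compute-7'},                 # optional
        headers={'X-Auth-Token': 'ADMIN_TOKEN'})
    resp.raise_for_status()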

What else, details, problems:

There is a problem in the flow engine actions. Depending on how long the
maintenance will take or what type of server is running, the application wants
the flows to behave differently. Application-specific flows could surely be
done, but the problem is that they need to make admin actions. It should be
solved how the application can decide on action flows while only the admin can
run them. Should the admin make the flows and give the application the power
to choose, by a hint in Nova metadata or in the notification going to the flow
engine?

I started a discussion in the Austin summit about extending planned host
maintenance in Nova, but it was agreed there could just be a link to an
external tool. Now if this tool were to exist in OpenStack, I would suggest
linking it like this, though surely this is to be seen after the external tool
implementation exists:
- The Nova services API could have a way for the admin to set and unset a
  "base URL" pointing to the external tool's information about planned
  maintenance affecting a host.
- The admin should see the link to the external tool when querying services via
  the services API. This might be formed like: {base URL}/{host_name}
- The project should have a project-specific link to the external tool when
  querying via the Nova servers API. This might be: {base URL}/project/{hostId}.
  hostId is exposed to the project as it does not reveal the exact host, but
  otherwise works as a unique identifier for the host:
  hashlib.sha224(projectid + host_name).hexdigest()
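
A small runnable version of that hostId derivation; this matches how Nova
computes the tenant-visible hostId, except that Python 3 needs the explicit
encoding the pseudo-call above omits:

    import hashlib

    def host_id(project_id, host_name):
        # Stable per (project, host), but does not reveal the host name.
        return hashlib.sha224(
            (project_id + host_name).encode('utf-8')).hexdigest()

    print(host_id('b5f7aa5a4e9d4c9c', 'compute-3'))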

Br,
Tomi Juvonen
Senior SW Architect, Nokia









Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-29 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

Maintenance was discussed at the OpenStack summit in "Ops: Nova Maintenance - 
how do you do it?"

It was decided the best alternative is just to expose the nova-compute 
disabled_reason to the owner of the server. This field can then hold a URL to 
the more detailed status given in an external tool. There were also all kinds 
of requirements from operators that we did not have time to go through, but 
those form the base of what the external tool could handle.

As a result, the original spec is now abandoned:
https://review.openstack.org/296995
And a new one made:
https://review.openstack.org/310510
Also, as part of the whole story, filtering by host_status:
https://review.openstack.org/276671

Thank you for all the comments and for getting this NFV requirement forward.

Br,
Tomi

> -Original Message-
> From: Juvonen, Tomi (Nokia - FI/Espoo)
> Sent: Wednesday, April 13, 2016 2:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: RE: [openstack-dev] [Nova] RFC Host Maintenance
> 
> > -Original Message-
> > From: EXT Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> > Sent: Tuesday, April 12, 2016 4:46 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> >
> > On Thu, Apr 07, 2016 at 06:36:20AM -0400, Sean Dague wrote:
> > > On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > > > Hi Nova, Ops, stackers,
> > > >
> > > > I am trying to figure out different use cases and requirements there
> > > > would be for host maintenance and would like to get feedback and
> > > > transfer all this to spec and discussion what could and should land
> for
> > > > Nova or other places.
> > > >
> > > > As working in OPNFV Doctor project that has the Telco perspective
> about
> > > > related requirements, I started to draft a spec based on something
> > > > smaller that would be nice to have in Nova and less complicated to
> have
> > > > it in single cycle. Anyhow the feedback from Nova API team was to
> look
> > > > this as a whole and gather more. This is why asking this here and not
> > > > just trough spec, to get input for requirements and use cases with
> > wider
> > > > audience. Here is the draft spec proposing first just maintenance
> > window
> > > > to be added:
> > > > _https://review.openstack.org/296995/_
> > > >
> > > > Here is link to OPNFV Doctor requirements:
> > > > _http://artifacts.opnfv.org/doctor/docs/requirements/02-
> > use_cases.html#nvfi-maintenance_
> > > > _http://artifacts.opnfv.org/doctor/docs/requirements/03-
> > architecture.html#nfvi-maintenance_
> > > > _http://artifacts.opnfv.org/doctor/docs/requirements/05-
> > implementation.html#nfvi-maintenance_
> > > >
> > > > Here is what I could transfer as use cases, but would ask feedback to
> > > > get more:
> > > >
> > > > As admin I want to set maintenance period for certain host.
> > > >
> > > > As admin I want to know when host is ready to actions to be done by
> > admin
> > > > during the maintenance. Meaning physical resources are emptied.
> > > >
> > > > As owner of a server I want to prepare for maintenance to minimize
> > downtime,
> > > > keep capacity on needed level and switch HA service to server not
> > > > affected by
> > > > maintenance.
> > > >
> > > > As owner of a server I want to know when my servers will be down
> > because of
> > > > host maintenance as it might be servers are not moved to another
> host.
> > > >
> > > > As owner of a server I want to know if host is to be totally removed,
> > so
> > > > instead of keeping my servers on host during maintenance, I want to
> > move
> > > > them
> > > > to somewhere else.
> > > >
> > > > As owner of a server I want to send acknowledgement to be ready for
> > host
> > > > maintenance and I want to state if servers are to be moved or kept on
> > host.
> > > > Removal and creating of server is in owner's control already.
> > Optionally
> > > > server
> > > > Configuration data could hold information about automatic actions to
> be
> > > > done
> > > > when host is going down unexpectedly or in contro

Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-22 Thread Juvonen, Tomi (Nokia - FI/Espoo)
This is great. Then you would also know that the combination you are trying 
in local.conf has worked before, and that your problem is perhaps not that one.
+1

Br,
Tomi

> -Original Message-
> From: EXT Markus Zoeller [mailto:mzoel...@de.ibm.com]
> Sent: Thursday, April 14, 2016 10:32 AM
> To: openstack-dev 
> Subject: [openstack-dev] [all] [devstack] Adding example "local.conf" files
> for testing?
> 
> Sometimes (especially when I try to reproduce bugs) I have the need
> to set up a local environment with devstack. Every time I have to look
> at my notes to check which options in the "local.conf" have to be set
> for my needs. I'd like to add a folder in devstack's tree which hosts
> multiple example local.conf files for different, often used setups.
> Something like this:
> 
> example-confs
> --- newton
> --- --- x86-ubuntu-1404
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- serial-console-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- --- --- minimal-neutron-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- s390x-1.1.1-vulcan
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- mitaka
> --- --- # same structure as master branch. omitted for brevity
> --- liberty
> --- --- # same structure as master branch. omitted for brevity
> 
> Thoughts?
> 
> Regards, Markus Zoeller (markus_z)
> 
> 


Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-13 Thread Juvonen, Tomi (Nokia - FI/Espoo)
> -Original Message-
> From: EXT Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Tuesday, April 12, 2016 4:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> 
> On Thu, Apr 07, 2016 at 06:36:20AM -0400, Sean Dague wrote:
> > On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > > Hi Nova, Ops, stackers,
> > >
> > > I am trying to figure out different use cases and requirements there
> > > would be for host maintenance and would like to get feedback and
> > > transfer all this to spec and discussion what could and should land for
> > > Nova or other places.
> > >
> > > As working in OPNFV Doctor project that has the Telco perspective about
> > > related requirements, I started to draft a spec based on something
> > > smaller that would be nice to have in Nova and less complicated to have
> > > it in single cycle. Anyhow the feedback from Nova API team was to look
> > > this as a whole and gather more. This is why asking this here and not
> > > just trough spec, to get input for requirements and use cases with
> wider
> > > audience. Here is the draft spec proposing first just maintenance
> window
> > > to be added:
> > > _https://review.openstack.org/296995/_
> > >
> > > Here is link to OPNFV Doctor requirements:
> > > _http://artifacts.opnfv.org/doctor/docs/requirements/02-
> use_cases.html#nvfi-maintenance_
> > > _http://artifacts.opnfv.org/doctor/docs/requirements/03-
> architecture.html#nfvi-maintenance_
> > > _http://artifacts.opnfv.org/doctor/docs/requirements/05-
> implementation.html#nfvi-maintenance_
> > >
> > > Here is what I could transfer as use cases, but would ask feedback to
> > > get more:
> > >
> > > As admin I want to set maintenance period for certain host.
> > >
> > > As admin I want to know when host is ready to actions to be done by
> admin
> > > during the maintenance. Meaning physical resources are emptied.
> > >
> > > As owner of a server I want to prepare for maintenance to minimize
> downtime,
> > > keep capacity on needed level and switch HA service to server not
> > > affected by
> > > maintenance.
> > >
> > > As owner of a server I want to know when my servers will be down
> because of
> > > host maintenance as it might be servers are not moved to another host.
> > >
> > > As owner of a server I want to know if host is to be totally removed,
> so
> > > instead of keeping my servers on host during maintenance, I want to
> move
> > > them
> > > to somewhere else.
> > >
> > > As owner of a server I want to send acknowledgement to be ready for
> host
> > > maintenance and I want to state if servers are to be moved or kept on
> host.
> > > Removal and creating of server is in owner's control already.
> Optionally
> > > server
> > > Configuration data could hold information about automatic actions to be
> > > done
> > > when host is going down unexpectedly or in controlled manner. Also
> > > actions at
> > > the same if down permanently or only temporarily. Still this needs
> > > acknowledgement from server owner as he needs time for application
> level
> > > controlled HA service switchover.
> >
> > While I definitely understand the value of these in a deployement, I'm a
> > bit concerned of baking all this structured data into Nova itself. As it
> > effectively means putting some degree of a ticket management system in
> > Nova that's specific to a workflow you've decided on here. Baked in
> > workflow is hard to change when the needs of an industry do.
> >
> > My counter proposal on your spec was to provide a free form field
> > associated with maintenance mode which could contain a url linking to
> > the details. This could be a jira ticket, or a REST url for some other
> > service. This would actually be much like how we handle images in Nova,
> > with a url to glance to find more info.
> 
> FWIW, this is what we do in ironic. A maintenance boolean, and a
> maintenance_reason text field that operators can dump text/links/etc in.
> 
> As an example:
> $ ironic node-set-maintenance $uuid on --reason "Dead fiber // ticket 123
> // jroll 2016/04/12"
> 
> It's worked well for Rackspace's deployment, at least, and I seem to
> remember others being happy with it as well.


Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-12 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

> -Original Message-
> From: EXT Balázs Gibizer [mailto:balazs.gibi...@ericsson.com]
> Sent: Tuesday, April 12, 2016 10:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> 
> > -Original Message-----
> > From: Juvonen, Tomi (Nokia - FI/Espoo) [mailto:tomi.juvo...@nokia.com]
> > Sent: April 11, 2016 09:06
> >
> > Hi,
> >
> > Looking the discussion so far:
> > -Suggestion to have extended information for maintenance to somewhere
> > outside Nova.
> > -Notification about Nova state changes.
> >
> > So how about if the whole logic of maintenance would be triggered by Nova
> > API disable/enable service notification, but otherwise the business logic
> > would be outside Nova?!
> 
> I think in this scenario the module that holds the business logic outside
> of
> Nova can be used by the admin to trigger the maintenance and one of the
> business logic piece would be to set the respective service(s) disabled in
> OpenStack.

Yes.
> 
> >
> > -Extended host information needed by maintenance should be outside of
> > Nova (extended information like maintenance window, more precise
> > maintenance state and version information) -Extended server information
> > needed by maintenance should be outside of Nova (configuration for
> > automatic actions in different use cases) -The communicating and action
> flow
> > with server owner and admin should be outside of Nova.
> >
> > One thing is now as there is accepted that host fault monitoring is to be
> > external, it might be best fit also for some of this maintenance logic.
> > Monitoring SW is also the place with the best knowledge about the host
> > state and if looking to build any automated actions on fault scenarios,
> then
> > surely maintenance would be close to that also. Monitoring also need to
> > know what host is in maintenance. Logic is very similar from server point
> of
> > view when looking server actions and communication with server owner.
> >
> > Might this be the way to go?
> 
> What impact this solution has on Nova? As far as I see it is very limited
> if not
> zero.

If the conclusion is really this, then basically no changes to Nova.

> 
> Cheers,
> Gibi
> 
> >
> > Br,
> > Tomi
> >
> > > -Original Message-
> > > From: EXT Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > > Sent: Friday, April 08, 2016 2:38 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > <openstack-dev@lists.openstack.org>
> > > Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> > >
> > > On Fri, Apr 08, 2016 at 09:52:31AM +, Balázs Gibizer wrote:
> > > > > -Original Message-
> > > > > From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > > > > Sent: April 07, 2016 15:42
> > > > >
> > > > > The only gap based on my limited understanding is that nova is not
> > > emitting
> > > > > events compute host state changes. This knowledge is still kept
> > > > > inside
> > > nova
> > > > > as some service states. If that info is posted to oslo messaging,
> > > > > a lot
> > > of usage
> > > > > scenarios can be enabled and we can avoid too much churns to nova
> > > itself.
> > > >
> > > > Nova does not really know the state of the compute host, it knows
> > > > only
> > > the state of the nova-compute service running on the compute host. In
> > > Mitaka we added notification about the service status[2].
> > > > Also there is a proposal about notification about hypervisor info
> > > > change
> > > [1].
> > > >
> > > > Cheers,
> > > > Gibi
> > > >
> > > > [1] https://review.openstack.org/#/c/299807/
> > > > [2]
> > > > http://docs.openstack.org/developer/nova/notifications.html#existing
> > > > -
> > > versioned-notifications
> > > >
> > >
> > > Thanks for the sharing, Balázs. The mitaka service status notification
> > > looks pretty useful, I'll try it.
> > >
> > > Regards,
> > >   Qiming
> > >
> > > > >
> > > > > Regards,
> > > > >   Qiming
> > > > >
> > > > >
> > > > >
> > _

Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-11 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

Looking at the discussion so far:
-Suggestion to have the extended information for maintenance somewhere outside 
Nova.
-Notification about Nova state changes.

So how about if the whole logic of maintenance would be triggered by the Nova 
API disable/enable service notification, but otherwise the business logic would 
be outside Nova?! (See the sketch after the list below.)

-Extended host information needed by maintenance should be outside of Nova 
(extended information like maintenance window, more precise maintenance state 
and version information)
-Extended server information needed by maintenance should be outside of Nova 
(configuration for automatic actions in different use cases)
-The communication and action flow with the server owner and admin should be 
outside of Nova.
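
A minimal sketch of that trigger (not a full workflow): the external
maintenance tool disables the compute service, and the resulting Nova service
notification kicks off the business logic outside Nova. `sess` is an assumed
keystoneauth1 session:

    # Sketch: disabling a compute service with a reason; Nova then emits the
    # service status notification the external logic listens to.
    from novaclient import client as nova_client

    nova = nova_client.Client('2.1', session=sess)
    nova.services.disable_log_reason('compute-3', 'nova-compute',
                                     'planned maintenance // MAINT-42')
    # ... maintenance workflow runs outside Nova ...
    nova.services.enable('compute-3', 'nova-compute')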

One thing now: as it is accepted that host fault monitoring is to be external, 
that might also be the best fit for some of this maintenance logic. The 
monitoring SW is also the place with the best knowledge about the host state, 
and if looking to build any automated actions on fault scenarios, then surely 
maintenance would be close to that as well. Monitoring also needs to know which 
host is in maintenance. The logic is very similar from the server point of view 
when looking at server actions and communication with the server owner.

Might this be the way to go?

Br,
Tomi

> -Original Message-
> From: EXT Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> Sent: Friday, April 08, 2016 2:38 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> 
> On Fri, Apr 08, 2016 at 09:52:31AM +, Balázs Gibizer wrote:
> > > -Original Message-
> > > From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > > Sent: April 07, 2016 15:42
> > >
> > > The only gap based on my limited understanding is that nova is not
> emitting
> > > events compute host state changes. This knowledge is still kept inside
> nova
> > > as some service states. If that info is posted to oslo messaging, a lot
> of usage
> > > scenarios can be enabled and we can avoid too much churns to nova
> itself.
> >
> > Nova does not really know the state of the compute host, it knows only
> the state of the nova-compute service running on the compute host. In
> Mitaka we added notification about the service status[2].
> > Also there is a proposal about notification about hypervisor info change
> [1].
> >
> > Cheers,
> > Gibi
> >
> > [1] https://review.openstack.org/#/c/299807/
> > [2] http://docs.openstack.org/developer/nova/notifications.html#existing-
> versioned-notifications
> >
> 
> Thanks for the sharing, Balázs. The mitaka service status notification
> looks pretty useful, I'll try it.
> 
> Regards,
>   Qiming
> 
> > >
> > > Regards,
> > >   Qiming
> > >
> > >


Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-07 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Thanks Sean,

I totally understand your comment, and this logic might really belong somewhere 
else, so as not to overload Nova with all kinds of things; the level of 
exposure you suggested might then be enough.

Anyhow, I am also asking for more user and operator perspectives to get more 
use cases. Surely, if building this mostly externally (to Nova), those could 
also be added later.

Br,
Tomi

> -Original Message-
> From: EXT Sean Dague [mailto:s...@dague.net]
> Sent: Thursday, April 07, 2016 1:36 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> 
> On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > Hi Nova, Ops, stackers,
> >
> > I am trying to figure out different use cases and requirements there
> > would be for host maintenance and would like to get feedback and
> > transfer all this to spec and discussion what could and should land for
> > Nova or other places.
> >
> > As working in OPNFV Doctor project that has the Telco perspective about
> > related requirements, I started to draft a spec based on something
> > smaller that would be nice to have in Nova and less complicated to have
> > it in single cycle. Anyhow the feedback from Nova API team was to look
> > this as a whole and gather more. This is why asking this here and not
> > just trough spec, to get input for requirements and use cases with wider
> > audience. Here is the draft spec proposing first just maintenance window
> > to be added:
> > _https://review.openstack.org/296995/_
> >
> > Here is link to OPNFV Doctor requirements:
> > _http://artifacts.opnfv.org/doctor/docs/requirements/02-
> use_cases.html#nvfi-maintenance_
> > _http://artifacts.opnfv.org/doctor/docs/requirements/03-
> architecture.html#nfvi-maintenance_
> > _http://artifacts.opnfv.org/doctor/docs/requirements/05-
> implementation.html#nfvi-maintenance_
> >
> > Here is what I could transfer as use cases, but would ask feedback to
> > get more:
> >
> > As admin I want to set maintenance period for certain host.
> >
> > As admin I want to know when host is ready to actions to be done by admin
> > during the maintenance. Meaning physical resources are emptied.
> >
> > As owner of a server I want to prepare for maintenance to minimize
> downtime,
> > keep capacity on needed level and switch HA service to server not
> > affected by
> > maintenance.
> >
> > As owner of a server I want to know when my servers will be down because
> of
> > host maintenance as it might be servers are not moved to another host.
> >
> > As owner of a server I want to know if host is to be totally removed, so
> > instead of keeping my servers on host during maintenance, I want to move
> > them
> > to somewhere else.
> >
> > As owner of a server I want to send acknowledgement to be ready for host
> > maintenance and I want to state if servers are to be moved or kept on
> host.
> > Removal and creating of server is in owner's control already. Optionally
> > server
> > Configuration data could hold information about automatic actions to be
> > done
> > when host is going down unexpectedly or in controlled manner. Also
> > actions at
> > the same if down permanently or only temporarily. Still this needs
> > acknowledgement from server owner as he needs time for application level
> > controlled HA service switchover.
> 
> While I definitely understand the value of these in a deployement, I'm a
> bit concerned of baking all this structured data into Nova itself. As it
> effectively means putting some degree of a ticket management system in
> Nova that's specific to a workflow you've decided on here. Baked in
> workflow is hard to change when the needs of an industry do.
> 
> My counter proposal on your spec was to provide a free form field
> associated with maintenance mode which could contain a url linking to
> the details. This could be a jira ticket, or a REST url for some other
> service. This would actually be much like how we handle images in Nova,
> with a url to glance to find more info.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 


[openstack-dev] [Nova] RFC Host Maintenance

2016-04-07 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi Nova, Ops, stackers,

I am trying to figure out the different use cases and requirements there would 
be for host maintenance, and would like to get feedback, to then transfer all 
this into a spec and a discussion of what could and should land in Nova or 
other places.

Working in the OPNFV Doctor project, which has the Telco perspective on the 
related requirements, I started to draft a spec based on something smaller that 
would be nice to have in Nova and less complicated to land in a single cycle. 
Anyhow, the feedback from the Nova API team was to look at this as a whole and 
gather more. This is why I am asking here and not just through the spec: to get 
input for requirements and use cases from a wider audience. Here is the draft 
spec, proposing first just a maintenance window to be added:
https://review.openstack.org/296995/

Here is link to OPNFV Doctor requirements:
http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance

Here is what I could transfer into use cases, but I would ask for feedback to 
get more:

As admin I want to set a maintenance period for a certain host.

As admin I want to know when a host is ready for the actions to be done by the 
admin during the maintenance. Meaning the physical resources are emptied.

As owner of a server I want to prepare for maintenance to minimize downtime, 
keep capacity at the needed level and switch the HA service to a server not 
affected by the maintenance.

As owner of a server I want to know when my servers will be down because of 
host maintenance, as it might be that servers are not moved to another host.

As owner of a server I want to know if a host is to be totally removed, so 
instead of keeping my servers on the host during maintenance, I want to move 
them somewhere else.

As owner of a server I want to send an acknowledgement that I am ready for host 
maintenance, and I want to state whether servers are to be moved or kept on the 
host. Removal and creation of a server is in the owner's control already. 
Optionally, the server configuration data could hold information about 
automatic actions to be done when the host is going down unexpectedly or in a 
controlled manner, and also actions depending on whether it is down permanently 
or only temporarily. Still, this needs an acknowledgement from the server 
owner, as he needs time for an application-level controlled HA service 
switchover.

Br,
Tomi




Re: [openstack-dev] [Cinder] the spec of Add-ServiceGroup-using-Tooz

2016-01-31 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi Wang Hao,

OPNFV Doctor meeting details can be found here:
https://wiki.opnfv.org/meetings/doctor

Br,
Tomi

From: EXT hao wang [mailto:sxmatch1...@gmail.com]
Sent: Monday, February 01, 2016 9:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] the spec of Add-ServiceGroup-using-Tooz

Hi, dong.wenjuan

The "doctor" project sounds very interesting. What's the timezone of the 
weekly meeting?
I want to join it if I have time by then.

Thanks
Wang Hao


2016-02-01 14:43 GMT+08:00 dong.wenj...@zte.com.cn:

Hi all,

I proposed the spec of Add-ServiceGroup-using-Tooz in Cinder[1].

Project doctor[2] in the OPNFV community is its upstream.
The goal of this project is to build a fault management and maintenance 
framework for high availability of Network Services on top of virtualized 
infrastructure.
The key feature is immediate notification of unavailability of virtualized 
resources from the VIM, to process recovery of the VNFs running on them.

But in Cinder, the service reports its status with a delay. So I proposed 
adding Tooz as a Cinder ServiceGroup driver to report the service states 
without a delay.
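
For illustration, here is a minimal sketch of the Tooz group-membership idea 
(not the proposed Cinder driver); the backend URL and names are assumptions:

    # Sketch: a service joins a group and heartbeats through Tooz, so
    # liveness is observed from group membership instead of DB timestamps
    # written with a delay.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'memcached://127.0.0.1:11211',        # assumed backend URL
        b'cinder-volume@host-1')
    coordinator.start()

    try:
        coordinator.create_group(b'cinder-services').get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(b'cinder-services').get()
    coordinator.heartbeat()                   # a real driver loops this

    # Liveness check from any node: who is alive right now?
    print(coordinator.get_members(b'cinder-services').get())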

I'm new in Cinder. :) So I want to invite some Cinder experts to discuss 
the spec in the doctor's weekly meeting at 14:00 on Tuesday this week. Is 
anyone interested in it? Thanks~

[1]https://review.openstack.org/#/c/258968/
[2]https://wiki.opnfv.org/doctor



Wenjuan Dong (董文娟)
Controller Dept IV / Wireless Product Operation
ZTE, D3, No. 889, Bibo Rd., Pudong New Area, Shanghai
T: +86 021 85922  M: +86 13661996389
E: dong.wenj...@zte.com.cn
www.ztedevice.com














[openstack-dev] [Nova] Get-validserver-state default policy

2016-01-14 Thread Juvonen, Tomi (Nokia - FI/Espoo)
This API change was agreed in the spec review to be "rule: admin_or_owner", but 
during code review "rule: admin_api" was also wanted.
Link to the spec to see the details of what this is about 
(https://review.openstack.org/192246/):
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/get-valid-server-state.html

In my deployment, where this is crucial information for the owner, this will 
certainly be "admin_or_owner". The question now is what the general feeling is 
about the default value in policy.json, and whether it should just be as agreed 
in the spec or should still be changed.

Br,
Tomi



Re: [openstack-dev] [Nova] Get-validserver-state default policy

2016-01-14 Thread Juvonen, Tomi (Nokia - FI/Espoo)
>-Original Message-
>From: EXT Jay Pipes [mailto:jaypi...@gmail.com] 
>Sent: Friday, January 15, 2016 9:25 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Nova] Get-validserver-state default policy
>
>On 01/15/2016 01:50 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
>> This API change was agreed is the spec review to be "rule:
>> admin_or_owner", but during code review "rule: admin_api" was also wanted.
>> Link to spec to see details what this is about
>> (https://review.openstack.org/192246/):
>> _http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/get-valid-server-state.html_
>> In my deployment where this is crucial information for the owner, this
>> will certainly be "admin_or_owner". The question is now what is the
>> general feeling about the default value in policy.json and should it
>> just be as agreed in spec or should it be changed still.
>
>The host state is NOT something that a regular cloud user should be able 
>to query, IMHO. Only admins should be able to see anything about the 
>underlying compute hardware.
>
>Exposing hardware information and statuses out through the REST API is a 
>bad leak of implementation.

Jay, yes, agreed in the code review. The question just rose again as the code 
change was against the spec. I guess the spec can still be revisited. I have a 
small addition to the spec anyhow, so I can make it "rule: admin_api" at the 
same time :)

Br,
Tomi

>Best,
>-jay



Re: [openstack-dev] [python-novaclient] history of virtual-interface commands

2015-12-10 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

I am also hitting this as I am making a new microversion and would need support 
in the CLI. It seems the CLI works just by bumping API_MAX_VERSION (as I am 
only adding one new attribute to an existing API). Anyhow, I cannot do this 
because microversion 2.12 is not implemented (actually API_MAX_VERSION is 
currently 2.9, but 2.7-2.11 should be under way).

Br,
Tomi

From: EXT Andrey Kurilin [mailto:akuri...@mirantis.com]
Sent: Friday, December 04, 2015 3:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [python-novaclient] history of virtual-interface 
commands

Hi stackers!

I have found code in novaclient related to the virtual-interfaces extension[1], 
but there are no CLI commands for it. Since the Rackspace docs include a 
reference to the `virtual-interface-list` command[2], I wonder, is there a 
reason why commands related to virtual-interfaces are missing from upstream 
master?
Does anyone know the history of the virtual-interfaces extension and the CLI 
entrypoint for it?

[1] - 
https://github.com/openstack/python-novaclient/blob/2.35.0/novaclient/v2/virtual_interfaces.py
[2] - 
http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/nova_list_virt_interfaces_for_server.html

--
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Juvonen, Tomi (Nokia - FI/Espoo)
+1 
Good work indeed.
>From: EXT John Garbutt [mailto:j...@johngarbutt.com] 
>Sent: Friday, November 06, 2015 5:32 PM
>To: OpenStack Development Mailing List
>Subject: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core
>
>Hi,
>
>I propose we add Alex Xu[1] to nova-core.
>
>Over the last few cycles he has consistently been doing great work,
>including some quality reviews, particularly around the API.
>
>Please respond with comments, +1s, or objections within one week.
>
>Many thanks,
>John
>
>[1]http://stackalytics.com/?module=nova-group_id=xuhj=all
>


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Juvonen, Tomi (Nokia - FI/Espoo)
+1 :)
>From: EXT John Garbutt [mailto:j...@johngarbutt.com] 
>Sent: Friday, November 06, 2015 5:32 PM
>To: OpenStack Development Mailing List
>Subject: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core
>
>Hi,
>
>I propose we add Sylvain Bauza[1] to nova-core.
>
>Over the last few cycles he has consistently been doing great work,
>including some quality reviews, particularly around the Scheduler.
>
>Please respond with comments, +1s, or objections within one week.
>
>Many thanks,
>John
>
>[1] 
>http://stackalytics.com/?module=nova-group_id=sylvain-bauza=all
>


Re: [openstack-dev] [nova] It's time to update the Liberty release notes

2015-10-07 Thread Juvonen, Tomi (Nokia - FI/Espoo)
This one also had a DocImpact, but the flag was not there:

ff80032 Roman Dobosz   New nova API call to mark nova-compute down

br,
Tomi

-Original Message-
From: EXT Alexis Lee [mailto:lx...@hpe.com] 
Sent: Wednesday, October 07, 2015 4:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] It's time to update the Liberty release 
notes

Now with committer names.

Matt Riedemann said on Thu, Oct 01, 2015 at 01:27:38PM -0500:
> Here are the commits in liberty that had the UpgradeImpact tag:

% git log --format='%h %<(18,trunc)%cn %s' -i --grep UpgradeImpact \
remotes/origin/stable/kilo..remotes/origin/stable/liberty
0b49934 Zhenyu Zheng   CONF.allow_resize_to_same_host should check only 
once in controller
4a9e14a Sylvain Bauza  Update ComputeNode values with allocation ratios in 
the RT
4a18f7d John Garbutt   api: use v2.1 only in api-paste.ini
11507ee John Garbutt   api: deprecate the concept of extensions in v2.1
9c91781 Eli Qiao   Add missing rules in policy.json
1b8a2e0 Dan Smith  Adding user_id handling to keypair index, show and 
create api calls
0283234 Maxim Nestratovlibvirt: Always default device names at boot
725c54e He Jie Xu  Remove db layer hard-code permission checks for 
quota_class_create/update
1dbb322 He Jie Xu  Remove db layer hard-code permission checks for 
quota_class_get_all_by_name
4d6a50a ShaoHe FengRemove db layer hard-code permission checks for 
floating_ip_dns
55e63f8 Davanum Srinivas.. Allow non-admin to list all tenants based on policy
92807d6 jichenjc   Remove redundant policy check from 
security_group_default_rule
2a01a1b Matt Riedemann Remove hv_type translation shim for powervm
dcd4be6 He Jie Xu  Remove db layer hard-code permission checks for 
quota_get_all_*
06e6056 jichenjc   Remove cell policy check
d03b716 Matt Riedemann libvirt: deprecate libvirt version usage < 0.10.2
5309120 Dan Smith  Update kilo version alias

> Here are the DocImpact changes:

% git log --format='%h %<(18,trunc)%cn %s' -i --grep DocImpact \
remotes/origin/stable/kilo..remotes/origin/stable/liberty
bc6f30d He Jie Xu  Give instance default hostname if hostname is empty
4ee4f9f Nikola Dipanov RT: track evacuation migrations
9095b36 Davanum Srinivas.. Expose keystoneclient's session and auth plugin 
loading parameters
4a9e14a Sylvain Bauza  Update ComputeNode values with allocation ratios in 
the RT
4a18f7d John Garbutt   api: use v2.1 only in api-paste.ini
11507ee John Garbutt   api: deprecate the concept of extensions in v2.1
45d1e3c ghanshyam  Expose VIF net-id attribute in os-virtual-interfaces
9d353e5 Michael Still  libvirt: take account of disks in migration data size
17e5911 Michael Still  Add deprecated_for_removal parm for deprecated 
neutron_ops
95940cc Michael Still  Don't allow instance to overcommit against itself
9cd9e66 Davanum Srinivas   Add rootwrap daemon mode support
c250aca Jay Pipes  Allow compute monitors in different namespaces
434ce2a Marian Horban  Added processing /compute URL
2c0a306 Dan Smith  Limit parallel live migrations in progress
da33ab4 Daniel P. Berrange libvirt: set caps on maximum live migration time
07c7e5c Daniel P. Berrange libvirt: support management of downtime during 
migration
60d08e6 Chuck Carmack  Add documentation for the nova-cells command.
ae5a329 Marian Horban  libvirt:Rsync remote FS driver was added
9a09674 Vladik Romanovsky  libvirt: enable virtio-net multiqueue
8a7b1e8 Chuck Carmack  :Add documentation for the nova-idmapshift command.
bf91d9f Sergey Nikitin Added missed '-' to the rest_api_version_history.rst
1b8a2e0 Dan Smith  Adding user_id handling to keypair index, show and 
create api calls
622a845 Gary KottonMetadata: support proxying loadbalancers
2f7403b Radoslav Gerganov  VMware: map one nova-compute to one VC cluster
ace11d3 Radoslav Gerganov  VMware: add serial port device
ab35779 Radomir Dopieral.. Handle SSL termination proxies for version list
6739df7 Dan Smith  Include DiskFilter in the default list
5e5ef99 Thang Pham VMware: Add support for swap disk
49a572a Ghanshyam Mann Show 'locked' information in server details
4252420 Gary KottonVMware: add resource limits for disk
f1f46a0 Gary KottonVMware: Resource limits for memory
7aec88c Gary KottonVMware: add support for cores per socket
bc3b6cc Maxim Nestratovlibvirt: rename parallels driver to virtuozzo
95f1d47 Mike DormanAdd console allowed origins setting
d0ee3ab Shiina, Hironori   libvirt:Add a driver API to inject an NMI
50c8f93 Radoslav Gerganov  Add MKS console support
abf20cd abhishekkekane Execute _poll_shelved_instances only if 
shelved_offload_time is > 0
973f312 Jay Pipes  Use stevedore for loading monitor extensions
9260ea1 andrewbogott 

Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-25 Thread Juvonen, Tomi (Nokia - FI/Espoo)
-Original Message-
From: ext John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Thursday, June 25, 2015 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] How to properly detect and fence a 
compromised host (and why I dislike TrustedFilter)

On 25 June 2015 at 14:09, Dulko, Michal michal.du...@intel.com wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Thursday, June 25, 2015 2:22 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] How to properly detect and fence a
 compromised host (and why I dislike TrustedFilter)

 On 24 June 2015 at 09:35, Dulko, Michal michal.du...@intel.com wrote:
  -Original Message-
  From: Sylvain Bauza [mailto:sba...@redhat.com]
  Sent: Wednesday, June 24, 2015 9:39 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] How to properly detect and fence
  a compromised host (and why I dislike TrustedFilter)

 (snip)

   So I would suggest using the 3rd-party tools as enhancing way to
  supplement our TCP/trustedfilter feature. And the 3rd party tools can
  also call attestation API for host attestation.
 
  I don't see much benefits of keeping such filter for the reasons I
  mentioned below. Again, if you want to fence one host, you can just
  disable its service, that's enough.
 
  This won't address the case in which you have heterogenic environment
 and you want only some important VMs to run on trusted hosts (and for the
 rest of the VMs you don't care).

 This is an interesting one to dig into.

 I had assumed in this case you put all the VMs that want the attestation
 check in a subset of nodes that are setup to use that set.
 You can do that using host aggregates and our existing filters.

 An external system could then just disable hosts within that subset of hosts
 that have the attestation check working.

 Does that work for your use case?

 It should be fine for this case.  But then - why not go further and remove the 
 SG API? Let's leave monitoring of services to Pacemaker and Nagios, and they 
 can disable them if they consider that a service is down.

Honestly, I find that idea very attractive.

The mark down API is basically going down that route.
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/mark-host-down.html

 My point is that following this logic we may use external services to replace 
 any filter that has such simple logic. Is this the right direction?

If its an external system, and you can integrate more efficiently by
disabling hosts, then yes thats awesome.

Thats not always going to be the correct direction, but we need to
look at if something can be done externally first. Nova is too big
already, we are actively trying to not expand its scope.

So I worked on this mark-host-down API spec and am now still working on the 
server states (VM states), as they stay in an incorrect state if a host 
suddenly goes down. I would appreciate comments on 
https://review.openstack.org/#/c/192246 to get the right track to do it. Maybe 
directly change the VM states when the mark-down API is called, and not as now 
proposed. And yes, there are use cases where one does not evacuate the VMs, so 
it will be valuable to see those states correct.

Related: I am working in OPNFV to bring up the Doctor project as an external 
system that could use, under the hood, different existing open source projects 
like Pacemaker or Nagios to detect any kind of host fault fast, and use this 
mark-down API to tell Nova about it. This Doctor will be open source and for 
anybody to use. It also now has a Ceilometer BP approved to enhance direct 
alarming for the user without polling. So let's see what will happen when this 
work is completed. It could even be a component inside OpenStack someday, when 
it reaches that kind of maturity (detecting faults, fencing and doing automatic 
correlation based on VM-specific configuration and fault-specific configuration 
if wanted so...).

Br,
Tomi

Thanks,
John





Re: [openstack-dev] [nova] Service group foundations and features

2015-05-11 Thread Juvonen, Tomi (Nokia - FI/Espoo)
From: ext Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: Monday, May 11, 2015 6:09 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Service group foundations and features

On 05/11/2015 07:13 AM, Attila Fazekas wrote:
 From: John Garbutt j...@johngarbutt.com

 * From the RPC api point of view, do we want to send a cast to
 something that we know is dead, maybe we want to? Should we wait for
 calls to timeout, or give up quicker?

 How to fail sooner:
 https://bugs.launchpad.net/oslo.messaging/+bug/1437955

 We do not need a dedicated is_up just for this.

Is that really going to help?  As I understand it if nova-compute dies (or is 
isolated) then the queue remains present on the server but nothing will 
process 
messages from it.

Chris

For queued messages: if the forced_down proposed in 
https://review.openstack.org/#/c/169836/ is set to true, the "I'm up" message 
should be ignored until forced_down is cleared, as this flag is there also to 
prevent the service state from turning 'up'. So the flag is there as a fast way 
to state that a service is down, to enable evacuation (and prevent scheduling 
VMs to the host...), but also to prevent an invalid state. It could even abort 
nova-compute startup, as mentioned in the review comments. This should make 
things quite safe.
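
As a concrete illustration: this eventually landed as compute API microversion 
2.11, and an external monitor could use it through python-novaclient roughly 
like this (the host name and `sess` are assumptions):

    # Sketch: mark the nova-compute service on a failed host as forced down
    # so a stale "I'm up" is ignored and evacuation can start right away.
    from novaclient import client as nova_client

    nova = nova_client.Client('2.11', session=sess)
    nova.services.force_down('compute-3', 'nova-compute', force_down=True)
    # ...after the host is repaired, clear the flag:
    # nova.services.force_down('compute-3', 'nova-compute', force_down=False)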

-Tomi
