Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Mark Baker
Certainly the aim is to support upgrades between LTS releases.
Getting a meaningful keynote slot at an OpenStack summit is more of a
challenge.

On 6 Nov 2015 9:27 pm, "Jonathan Proulx"  wrote:
>
> On Fri, Nov 06, 2015 at 05:28:13PM +, Mark Baker wrote:
> :Worth mentioning that OpenStack releases that come out at the same time as
> :Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> :supported for 5 years by Canonical so are already kind of an LTS. Support
> :in this context means patches, updates and commercial support (for a fee).
> :For paying customers 3 years of patches, updates and commercial support for
> :April releases (Kilo, O, Q etc.) is also available.
>
> 
> And Canonical will support a live upgrade directly from Essex to
> Icehouse and Icehouse to Mitaka?
>
> I'd love to see Shuttleworth do that as a live keynote, but only
> on a system with at least hundreds of nodes and many VMs...
> 
>
> That's where LTS falls down conceptually: we're struggling to make
> single-release upgrades work at this point.
>
> I do agree an LTS release would be great, but honestly OpenStack isn't
> mature enough for that yet.
>
> -Jon


[openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-11-06 Thread Armando M.
A kind reminder for next week's meeting.

Please add agenda items to the meeting here [1].

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Network/Meetings


[openstack-dev] [searchlight] Mitaka Priorities discussion mapped to launchpad

2015-11-06 Thread Tripp, Travis S
Hello Searchlighters,

I just wanted to let everybody know that I’ve captured the results of our 
priorities discussion on the Searchlight launchpad blueprints page.  In some 
cases, this meant creating a new blueprint (sometimes with only a little 
information).  Next week, it would be great if we can review this in our weekly 
meeting.

Thanks,
Travis





Re: [openstack-dev] [openstack-ansible][security] Next steps: openstack-ansible-security

2015-11-06 Thread Jesse Pretorius
On Friday, 6 November 2015, Major Hayden  wrote:
>
> At this moment, openstack-ansible-security[1] is feature complete and all
> of the Ansible tasks and documentation for the STIGs are merged.  Exciting!


Excellent work, thank you!


> I've done lots of work to ensure that the role uses sane defaults so that
> it can be applied to the majority of OpenStack deployments without
> disrupting services.  It only supports Ubuntu 14.04 for now, but that's
> openstack-ansible's supported platform as well.


We're on a trajectory to get other platforms supported too, so I think that
work in this regard may as well get going. If there are parties interested
in adding role support for Fedora, Gentoo and others, then I'd say it
should be spec'd and can go ahead!


> I'd like to start by adding it to the gate-check-commit.sh script so that
> the security configurations are applied prior to running tempest.


While I applaud the idea, changing the current commit integration test is
probably not the best approach. We're in the middle of splitting the roles
out into their own repositories and also extending the gate checks into
multiple use-cases.

I think that the best option for now will be to add the implementation of
the security role as an additional use-case. Depending on the results there
we can figure out whether the role should be a default in all use cases.


-- 
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius


Re: [openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-11-06 Thread Doug Hellmann
One thing I forgot to mention in my original email was the discussion
about when we would have this themes conversation for the N cycle.
I had originally hoped we would discuss the themes online before
the summit, and that those would inform decisions about summit
sessions. Several other folks in the room made the point that we
were unlikely to come up with a theme so surprising that we would
add or drop a summit session from any existing planning, so having
the discussion in person at the summit to add background to the
other sessions for the week was more constructive. I'd like to hear
from some folks about whether that worked out this time, and then
we can decide closer to the N summit whether to use an email thread
or some other venue instead of (or in addition to) a summit session
in Austin.

I also plan to start some email threads this cycle after each
milestone to re-consider the themes and get feedback about how we're
making progress.  I hope the release liaisons, at least, will
participate in those discussions, and it would be great to have the
product working group involved as well.

As far as rolling upgrades, I know a couple of projects are thinking
about that this cycle. As I said in the summary of that part of the
session, it's not really a feature that we're going to implement
and call "done" so much as a shift in thinking about how we design
things in the future. Tracking the specs and blueprints for work
related to that across all projects would be helpful, especially
early in the cycle like this where feedback on requirements will
make the most difference.

Doug

Excerpts from Barrett, Carol L's message of 2015-11-06 21:32:11 +:
> Doug - Thanks for leading the session and this summary. What is your view on 
> next steps to establish themes for the N-release? And specifically around 
> rolling upgrades (my personal favorite).
> 
> Thanks
> Carol
> 
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com] 
> Sent: Friday, November 06, 2015 1:11 PM
> To: openstack-dev
> Subject: [openstack-dev] [all] summarizing the cross-project summit session 
> on Mitaka themes
> 
> At the summit last week one of the early cross-project sessions tried to 
> identify some common “themes” or “goals” for the Mitaka cycle. I proposed the 
> session to talk about some of the areas of work that all of our teams need to 
> do, but that fall by the wayside when we don't pull the whole community 
> together to focus attention on them. We had several ideas proposed, and some 
> lively discussion about them. The notes are in the etherpad [1], and I will 
> try to summarize the discussion here.
> 
> 1. Functional testing, especially of client libraries, came up as a result of 
> a few embarrassingly broken client releases during Liberty.  Those issues 
> were found and fixed quickly, but they exposed a gap in our test coverage.
> 
> 2. Adding tests useful for DefCore and similar interoperability testing was 
> suggested in part because of our situation in Glance, where many of the 
> image-related API tests actually talk to the Nova API instead of the Glance 
> API. We may have other areas where additional tests in tempest could 
> eventually find their way into the DefCore definition, ensuring more 
> interoperability between deployed OpenStack clouds.
> 
> 3. We talked for a while about being more opinionated in things like 
> architecture and deployment dependencies. I don’t think we resolved this one, 
> but I’m sure the discussion fed into the DLM discussion later that day in a 
> separate session.
> 
> 4. Improving consistency of quota management across projects came up.  We’ve 
> talked in the past about a separate quota management library or service, but 
> no one has yet stepped up to spearhead the effort to launch such a project.
> 
> 5. Rolling upgrades was a very popular topic, in the room and on the product 
> working group’s priority list. The point was made that this requires a shift 
> in thinking about how to design and implement projects, not just some simple 
> code changes that can be rolled out in a single cycle. I know many teams are 
> looking at addressing rolling upgrades.
> 
> 6. os-cloud-config support in clients was raised. There is a cross-project 
> spec at https://review.openstack.org/#/c/236712/ to cover this.
> 
> 7. "Fixing existing things as a priority over features” came up, and has been 
> a recurring topic of discussion for a few cycles now.
> The idea of having a “maintenance” cycle across all teams was floated, though 
> it might be tough to get everyone aligned to doing that at the same time.  
> Alternately, if we work out a way to support individual teams doing that we 
> could let teams schedule them as they feel they are useful. We could also 
> dedicate more review time to maintenance than features, without excluding 
> features entirely.
> There seemed to be quite a bit of support in the room for the general idea, 
> though making it 

Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-06 Thread Tim Hinrichs
Congress allows users to write a policy that executes an action under
certain conditions.

The conditions can be based on any data Congress has access to, which
includes nova servers, neutron networks, cinder storage, keystone users,
etc.  We also have some Ceilometer statistics; I'm not sure about whether
it's easy to get the Keystone notifications that you're talking about
today, but notifications are on our roadmap.  If the user's login is
reflected in the Keystone API, we may already be getting that event.

The action could in theory be a mistral/heat API or an arbitrary script.
Right now we're set up to invoke any method on any of the python-clients
we've integrated with.  We've got an integration with heat but not
mistral.  New integrations are typically easy.

Happy to talk more.

Tim
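
For anyone wondering what the notification-driven piece discussed below could
look like in practice, here is a rough sketch of a standalone listener built on
oslo.messaging. The topic, event type and payload keys are assumptions for
illustration (they would need to be checked against Keystone's actual
notification format), and the provisioning call is just a placeholder for a
Heat/Mistral/Ansible trigger.

    # Illustrative only: a small service that listens for Keystone
    # notifications and kicks off provisioning for first-time federated
    # users. Event names and payload keys are assumptions, not the real
    # notification contract.
    import oslo_messaging
    from oslo_config import cfg


    class AutoProvisionEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # 'identity.authenticate' and the payload layout are assumed here.
            if event_type == 'identity.authenticate':
                user_id = payload.get('initiator', {}).get('id')
                if user_id and not self._already_provisioned(user_id):
                    self._provision(user_id)

        def _already_provisioned(self, user_id):
            # Placeholder: ask Keystone whether a per-user project exists.
            return False

        def _provision(self, user_id):
            # Placeholder: trigger a Heat stack, Mistral workflow or Ansible
            # playbook that creates the project, role assignment and quota.
            print('would provision a project for %s' % user_id)


    def main():
        transport = oslo_messaging.get_notification_transport(cfg.CONF)
        targets = [oslo_messaging.Target(topic='notifications')]
        listener = oslo_messaging.get_notification_listener(
            transport, targets, [AutoProvisionEndpoint()],
            executor='threading')
        listener.start()
        listener.wait()


    if __name__ == '__main__':
        main()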



On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann  wrote:

> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann 
> wrote:
> >
> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > > > > Can people help me work through the right set of tools for this
> use
> > > case
> > > > > > (has come up from several Operators) and map out a plan to
> implement
> > > it:
> > > > > >
> > > > > > Large cloud with many users coming from multiple Federation
> sources
> > > has
> > > > > > a policy of providing a minimal setup for each user upon first
> visit
> > > to
> > > > > > the cloud:  Create a project for the user with a minimal quota,
> and
> > > > > > provide them a role assignment.
> > > > > >
> > > > > > Here are the gaps, as I see it:
> > > > > >
> > > > > > 1.  Keystone provides a notification that a user has logged in,
> but
> > > > > > there is nothing capable of executing on this notification at the
> > > > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > > > >
> > > > > > 2.  Keystone does not have a workflow engine, and should not be
> > > > > > auto-creating projects.  This is something that should be
> performed
> > > via
> > > > > > a Heat template, and Keystone does not know about Heat, nor
> should
> > > it.
> > > > > >
> > > > > > 3.  The Mapping code is pretty static; it assumes a user entry
> or a
> > > > > > group entry in identity when creating a role assignment, and
> neither
> > > > > > will exist.
> > > > > >
> > > > > > We can assume a special domain for Federated users to have
> per-user
> > > > > > projects.
> > > > > >
> > > > > > So; lets assume a Heat Template that does the following:
> > > > > >
> > > > > > 1. Creates a user in the per-user-projects domain
> > > > > > 2. Assigns a role to the Federated user in that project
> > > > > > 3. Sets the minimal quota for the user
> > > > > > 4. Somehow notifies the user that the project has been set up.
> > > > > >
> > > > > > This last probably assumes an email address from the Federated
> > > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> > > authenticated
> > > > > > for any projects" error, and is stumped.
> > > > > >
> > > > > > How is quota assignment done in the other projects now?  What
> happens
> > > > > > when a project is created in Keystone?  Does that information
> gets
> > > > > > transferred to the other services, and, if so, how?  Do most
> people
> > > use
> > > > > > a custom provisioning tool for this workflow?
> > > > > >
> > > > >
> > > > > I know at Dreamhost we built some custom integration that was
> triggered
> > > > > when someone turned on the Dreamcompute service in their account
> in our
> > > > > existing user management system. That integration created the
> account
> > > in
> > > > > keystone, set up a default network in neutron, etc. I've long
> thought
> > > we
> > > > > needed a "new tenant creation" service of some sort, that sits
> outside
> > > > > of our existing services and pokes them to do something when a new
> > > > > tenant is established. Using heat as the implementation makes
> sense,
> > > for
> > > > > things that heat can control, but we don't want keystone to depend
> on
> > > > > heat and we don't want to bake such a specialized feature into heat
> > > > > itself.
> > > > >
> > > >
> > > > I agree, an automation piece that is built-in and easy to add to
> > > > OpenStack would be great.
> > > >
> > > > I do not agree that it should be Heat. Heat is for managing stacks
> that
> > > > live on and change over time and thus need the complexity of the
> graph
> > > > model Heat presents.
> > > >
> > > > I'd actually say that Mistral or Ansible are better choices for
> this. A
> > > > service which listens to the notification bus and triggered a
> workflow
> > > > defined somewhere in either Ansible playbooks or Mistral's workflow
> > > > language would simply run through the "skel" 

Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-11-06 Thread Jesse Pretorius
On Friday, 6 November 2015, Major Hayden  wrote:
>
> I found a CA role[1] for Ansible on Galaxy, but it appears to be GPLv3
> code. :/


Considering that the role would not be imported into the OpenStack-Ansible
code tree, I don't think the license for this role would be an issue.
What matters more is whether the role is functional for the purpose of
building an integration test use-case.


-- 
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Replies inline.


On Fri, Nov 6, 2015 at 1:48 PM, Salvatore Orlando 
wrote:

> More comments inline.
> I shall stop trying to be ironic (pun intended) in my posts.
>

:(


>
> Salvatore
>
> On 5 November 2015 at 18:37, Kyle Mestery  wrote:
>
>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
 Hi Salvatore,

 Thanks for the feedback. I agree with you that arbitrary JSON blobs will
 make IPAM much more powerful. Some other projects already do things like
 this.

>>>
>>> :( Actually, though "powerful" it also leads to implementation details
>>> leaking directly out of the public REST API. I'm very negative on this and
>>> would prefer an actual codified REST API that can be relied on regardless
>>> of backend driver or implementation.
>>>
>>
>> I agree with Jay here. We've had people propose similar things in Neutron
>> before, and I've been against them. The entire point of the Neutron REST
>> API is to not leak these details out. It dampens the strength of the
>> logical model, and it tends to have users become reliant on backend
>> implementations.
>>
>
> I see I did not manage to convey accurately irony and sarcasm in my
> previous post ;)
> The point was that thanks to a blooming number of extensions the Neutron
> API is already hardly portable. Blob attributes (or dict attributes, or
> key/value list attributes, or whatever does not have a precise schema) are
> a nail in the coffin, and also violate the only tenet Neutron has somehow
> managed to honour, which is being backend agnostic.
> And the fact that the port binding extension is pretty much that is not a
> valid argument, imho.
> On the other hand, I'm all in for extending the DB schema and driver logic to
> suit all IPAM needs; at the end of the day that's what we do with plugins for
> all sorts of stuff.
>


Agreed. Filed an rfe bug: https://bugs.launchpad.net/neutron/+bug/1513981.
Spec coming up for review.
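
To make the blob-versus-schema contrast in this thread concrete, here is a tiny
illustrative sketch; the field names are invented for the example and are not
part of any proposed Neutron schema.

    # Invented field names, only to show the difference being debated.

    # Free-form blob: the backend decides what goes in, and clients cannot
    # rely on any particular key being present or meaning the same thing
    # across deployments.
    pool_with_blob = {
        'start': '10.0.0.2',
        'end': '10.0.0.254',
        'opaque_metadata': '{"rack": "r13", "tor_switch": "sw-7", "weight": 5}',
    }

    # Structured attribute: every key is documented and typed, and the schema
    # can evolve through API versioning instead of vendor conventions.
    pool_with_schema = {
        'start': '10.0.0.2',
        'end': '10.0.0.254',
        'scheduling_hints': {'rack': 'r13', 'tor_switch': 'sw-7', 'weight': 5},
    }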



>
>
>
>>
>>
>>>
>>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
 'extras' arbitrary JSON field. This allows us to put any information in
 there that we think is important for us.

>>>
>>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>>> structured, not a Wild West free-for-all. The biggest problem with using
>>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>>> ability to evolve the API in a structured, versioned way. Instead of
>>> evolving the API using microversions, instead every vendor just jams
>>> whatever they feel like into the JSON blob over time. There's no way for
>>> clients to know what the server will return at any given time.
>>>
>>> Achieving consensus on a REST API that meets the needs of a variety of
>>> backend implementations is *hard work*, yes, but it's what we need to do if
>>> we are to have APIs that are viewed in the industry as stable,
>>> discoverable, and reliably useful.
>>>
>>
>> ++, this is the correct way forward.
>>
>
> Cool, but let me point out that experience has taught us that anything
> that is the result of a compromise between several parties following
> different agendas is bound to fail, as it does not fully satisfy the
> requirements of any stakeholder.
> If this information is needed for making scheduling decisions based on
> network requirements, then it makes sense to expose it also
> at the API layer (I assume there are also plans for making the scheduler
> *seriously* network aware). However, this information should have a
> well-defined schema with no leeway for 'extensions'; such a schema can evolve
> over time.
>
>
>> Thanks,
>> Kyle
>>
>>
>>>
>>> Best,
>>> -jay
>>>
>>> Best,
>>> -jay
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.


 On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
 > wrote:

 Arbitrary blobs are a powerful tool to circumvent limitations of an
 API, as well as other constraints which might be imposed for
 versioning or portability purposes.
 The parameters that should end up in such blob are typically
 specific for the target IPAM driver (to an extent they might even
 identify a specific driver to use), and therefore an API consumer
 who knows what backend is performing IPAM can surely leverage it.

 Therefore this would make a lot of sense, assuming API portability
 and not leaking backend details are not a concern.
 The Neutron team API & DB lieutenants will be able to provide more
 input on this regard.

 In this case other approaches such as a vendor specific extension
 are not a solution - assuming your granularity level is the
 allocation pool; indeed allocation pools are not first-class neutron
 

Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Juvonen, Tomi (Nokia - FI/Espoo)
+1 
Good work indeed.
>From: EXT John Garbutt [mailto:j...@johngarbutt.com] 
>Sent: Friday, November 06, 2015 5:32 PM
>To: OpenStack Development Mailing List
>Subject: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core
>
>Hi,
>
>I propose we add Alex Xu[1] to nova-core.
>
>Over the last few cycles he has consistently been doing great work,
>including some quality reviews, particularly around the API.
>
>Please respond with comments, +1s, or objections within one week.
>
>Many thanks,
>John
>
>[1]http://stackalytics.com/?module=nova-group_id=xuhj=all
>


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-06 Thread Vikas Choudhary
+1 for "container-in-vm"

On Fri, Nov 6, 2015 at 10:48 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Fri, Nov 6, 2015 at 1:20 PM, Baohua Yang  wrote:
>
>> It does cause confusion to call container-inside-vm a nested
>> container.
>>
>> The "nested" term in the container world usually means
>> container-inside-container.
>>
>
> I try to always put it as VM-nested container. But I probably slipped in
> some mentions.
>
>
>> we may refer to this (container-inside-vm) explicitly as a vm-holding
>> container.
>>
>
> container-in-vm?
>
>
>>
>> On Fri, Nov 6, 2015 at 12:13 PM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> @Gal, I was asking about the "container in nova vm" case.
>>> Not sure if you were referring to this case as the nested containers case. I
>>> guess the nested containers case would be "containers inside containers", and
>>> this could be hosted on a nova vm or a nova bm node. Is my understanding
>>> correct?
>>>
>>> Thanks Gal and Toni, for now i got answer to my query related to
>>> "container in vm" case.
>>>
>>> -Vikas
>>>
>>> On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:
>>>
 The current OVS binding proposals are not for nested containers.
 I am not sure if you are asking about that case or about the nested
 containers inside a VM case.

 For the nested containers, we will use Neutron solutions that support
 this kind of configuration, for example
 if you look at OVN you can define "parent" and "sub" ports, so OVN
 knows to perform the logical pipeline in the compute host
 and only perform VLAN tagging inside the VM (as Toni mentioned)

 If you need more clarification you can catch me on IRC as well and we
 can talk.

 On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
 choudharyvika...@gmail.com> wrote:

> Hi All,
>
> I would appreciate inputs on following queries:
> 1. Are we assuming nova bm nodes to be docker host for now?
>
> If Not:
>  - Assuming nova vm as docker host and ovs as networking
> plugin:
> This line is from the etherpad[1]: "Each driver would have
> an executable that receives the name of the veth pair that has to be bound
> to the overlay".
> Query 1:  As per current ovs binding proposals by
> Feisky[2] and Diga[3], vif seems to be binding with br-int on vm. I am
> unable to understand how overlay will work. AFAICT , neutron will 
> configure
> br-tun of compute machines ovs only. How overlay(br-tun) configuration 
> will
> happen inside vm ?
>
>  Query 2: Are we having double encapsulation (both at the vm
> and compute levels)? Is it not possible to bind the vif into the compute host's br-int?
>
>  Query 3: I did not see subnet tags for the network plugin
> being passed in any of the binding patches[2][3][4]. Don't we need that?
>
>
> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
> [2]  https://review.openstack.org/#/c/241558/
> [3]  https://review.openstack.org/#/c/232948/1
> [4]  https://review.openstack.org/#/c/227972/
>
>
> -Vikas Choudhary
>
>
>
>


 --
 Best Regards ,

 The G.




>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>>
>>
>>
>
>
>

Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Qiao, Liyong
+1. Alex has worked on the Nova project for a long time, pushed a lot of API
features in the last few cycles, and spent lots of time reviewing. I am glad to
add my +1 for him.

BR, Eli(Li Yong)Qiao

-Original Message-
From: Ed Leafe [mailto:e...@leafe.com] 
Sent: Saturday, November 07, 2015 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

On Nov 6, 2015, at 9:32 AM, John Garbutt  wrote:

> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work, 
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.

I'm not a core, but would like to add my hearty +1.

-- Ed Leafe








Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Clint Byrum
Excerpts from Tony Breeds's message of 2015-11-05 22:08:59 -0800:
> Hello all,
> I came across [1] which is notionally an ironic bug, in that horizon presents
> VM operations (like suspend) to users.  Clearly these options don't make sense
> to ironic, which can be confusing.
> 
> There is a horizon fix that just disables migrate/suspend and other functions
> if the operator sets a flag saying ironic is present.  Clearly this is
> sub-optimal for a mixed hypervisor environment.
> 
> The data needed (hypervisor type) is currently available only to admins; a quick
> hack to remove this policy restriction is functional.
> 
> There are a few ways to solve this.
> 
>  1. Change the default from "rule:admin_api" to "" (for
>     os_compute_api:os-extended-server-attributes and
>     os_compute_api:os-hypervisors), and set a list of values we're
>     comfortable exposing to the user (hypervisor_type and
>     hypervisor_hostname).  So a user can get the hypervisor_hostname as part of
>     the instance details and get the hypervisor_type from the os-hypervisors
>     API.  This would work for horizon but increases the API load on nova and
>     kinda implies that horizon would have to cache the data and open-code
>     assumptions about whether a given hypervisor_type can/can't do action $x.
> 
>  2. Include the hypervisor_type with the instance data.  This would place the
>     burden on nova.  It makes looking up instance details slightly more
>     complex but doesn't result in additional API queries, nor caching
>     overhead in horizon.  This has the same open-coding issues as Option 1.
> 
>  3. Define a service user and have horizon look up the hypervisor details via
>     that role.  Has all the drawbacks of option 1 and I'm struggling to
>     think of many benefits.
> 
>  4. Create a capabilities API of some description, that can be queried so that
>     consumers (horizon) can know
> 

This.

A large part of why we want "clouds" and not "virtualization frontends"
is that we want to relieve the user of any need to know what hypervisor
is in use if we can. We want to provide them computers, and tools to
manage their computers. Whether they are VMs or real machines, our tools
should strive to be about the user and their workload, and not about the
operator and their technology.
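
Purely to illustrate what option 4 could mean for a consumer like horizon: no
such capabilities API exists in nova today, so every field and value below is
hypothetical.

    # Hypothetical: 'capabilities' is an imagined per-instance document, not a
    # real nova API. The point is that the consumer asks what an instance can
    # do, rather than which hypervisor backs it.
    capabilities = {
        'server_id': 'abc123',
        'actions': ['reboot', 'rebuild', 'rescue'],  # no suspend/migrate here
    }


    def allowed_actions(capabilities, candidate_actions):
        """Return only the actions the cloud says this instance supports."""
        supported = set(capabilities.get('actions', []))
        return [action for action in candidate_actions if action in supported]


    # A dashboard would render buttons from this list instead of hard-coding
    # "ironic can't suspend" style assumptions.
    print(allowed_actions(capabilities, ['suspend', 'migrate', 'reboot']))
    # -> ['reboot']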



Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Sylvain Bauza



Le 06/11/2015 07:08, Tony Breeds a écrit :

Hello all,
 I came across [1] which is notionally an ironic bug, in that horizon presents
VM operations (like suspend) to users.  Clearly these options don't make sense
to ironic, which can be confusing.

There is a horizon fix that just disables migrate/suspend and other functions
if the operator sets a flag saying ironic is present.  Clearly this is
sub-optimal for a mixed hypervisor environment.

The data needed (hypervisor type) is currently available only to admins; a quick
hack to remove this policy restriction is functional.

There are a few ways to solve this.

  1. Change the default from "rule:admin_api" to "" (for
     os_compute_api:os-extended-server-attributes and
     os_compute_api:os-hypervisors), and set a list of values we're
     comfortable exposing to the user (hypervisor_type and
     hypervisor_hostname).  So a user can get the hypervisor_hostname as part of
     the instance details and get the hypervisor_type from the os-hypervisors
     API.  This would work for horizon but increases the API load on nova and
     kinda implies that horizon would have to cache the data and open-code
     assumptions about whether a given hypervisor_type can/can't do action $x.

  2. Include the hypervisor_type with the instance data.  This would place the
     burden on nova.  It makes looking up instance details slightly more
     complex but doesn't result in additional API queries, nor caching
     overhead in horizon.  This has the same open-coding issues as Option 1.

  3. Define a service user and have horizon look up the hypervisor details via
     that role.  Has all the drawbacks of option 1 and I'm struggling to
     think of many benefits.

  4. Create a capabilities API of some description, that can be queried so that
     consumers (horizon) can know

  5. Some other way for users to know what kind of hypervisor they're on.
     Perhaps there is an established image property that would work here?

If we're okay with exposing the hypervisor_type to users, then #2 is pretty
quick and easy, and could be done in Mitaka.  Option 4 is probably the best
long-term solution but I think it is best done in 'N' as it needs lots of
discussion.


I'm pretty opposed to giving hypervisor details to end-users, for many
reasons (security exposure, the cloud abstraction model, and the API not being
a discovery tool are the first things that come to mind).


I'd rather see Horizon, as an admin, able to get the specific bits about
the driver and only show the user what the driver can support.


That's also IMHO a bit tied to the Hypervisor Support Matrix [1] and, from a
better and more maintainable standpoint, to the Feature Classification effort
[2], because it would ensure that the 'capabilities' API that you mention is
accurate and up-to-date.


-Sylvain

[1] http://docs.openstack.org/developer/nova/support-matrix.html
[2] 
https://review.openstack.org/#/c/215664/4/doc/source/feature_classification.rst,cm

Yours Tony.

[1] https://bugs.launchpad.net/nova/+bug/1483639




Re: [openstack-dev] [Fuel] Change VIP address via API

2015-11-06 Thread Aleksandr Didenko
Hi,

Mike, that's exactly how you can use this VIP allocation functionality.
Nailgun will save that VIP so it's not going to be auto-assigned to any
node, and will serialize it into astute.yaml (vip_name: IP). After that you can
get your VIP via Hiera and use it in your deployment scripts.

Guys, could you please clarify what the purpose of 'node_roles' is in the VIP
description?

Regards,
Alex
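
For concreteness, a small sketch of how a script could drive the VIP handlers
summarized in the proposal quoted below. The endpoints follow that proposal
(which is not yet implemented), and the Nailgun URL, token handling and payload
fields are assumptions.

    # Sketch against the *proposed* handlers quoted below; URL, port, auth
    # header and payload fields are assumptions, not a released API.
    import requests

    NAILGUN = 'http://10.20.0.2:8000/api'
    HEADERS = {'X-Auth-Token': 'replace-with-a-real-token',
               'Content-Type': 'application/json'}
    CLUSTER_ID = 1

    # List all VIPs allocated for the cluster.
    url = '%s/clusters/%s/network_configuration/vips/' % (NAILGUN, CLUSTER_ID)
    vips = requests.get(url, headers=HEADERS).json()

    # Pin one VIP to a specific address; 'manual' would mark it as
    # user-managed so Nailgun does not re-allocate it.
    vip_id = vips[0]['id']
    requests.put('%s%s/' % (url, vip_id), headers=HEADERS,
                 json={'ip_addr': '10.20.0.10', 'manual': True})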

On Fri, Nov 6, 2015 at 5:15 AM, Mike Scherbakov 
wrote:

> Is there a way to make it more generic, not "VIP" specific? Let's say I
> want to reserve address(-es) for something, for whatever reason, and then I
> want to use them in some tricky way.
> More specifically, can we reserve IP address(-es) with some codename, and
> use it later?
> 12.12.12.12 - my-shared-ip
> 240.0.0.2 - my-multicast
> and then use them in puppet / whatever deployment code by $my-shared-ip,
> $my-multicast?
>
> Thanks,
>
> On Tue, Nov 3, 2015 at 8:49 AM Aleksey Kasatkin 
> wrote:
>
>> Folks,
>>
>> Here is a resume of our recent discussion:
>>
>> 1. Add new URLs for processing VIPs:
>>
>> /clusters/<cluster_id>/network_configuration/vips/ (GET)
>> /clusters/<cluster_id>/network_configuration/vips/<ip_addr_id>/ (GET, PUT)
>>
>> where <ip_addr_id> is the id in the ip_addrs table.
>> So, user can get all VIPS, get one VIP by id, change parameters (IP
>> address) for one VIP by its id.
>> More possibilities can be added later.
>>
>> Q. Any allocated IP could be accessible via these handlers, so for now we can
>> restrict the user to accessing VIPs only
>> and answer with an error for other ip_addrs ids.
>>
>> 2. Add current VIP meta into ip_addrs table.
>>
>> Create new field in ip_addrs table for placing VIP metadata there.
>> Current set of ip_addrs fields:
>> id (int),
>> network (FK),
>> node (FK),
>> ip_addr (string),
>> vip_type (string),
>> network_data (relation),
>> node_data (relation)
>>
>> Q. We could replace vip_type (it contains VIP name now) with vip_info.
>>
>> 3. Allocate VIPs on cluster creation and seek VIPs at all network changes.
>>
>> So, VIPs will be checked (via network roles descriptions) and
>> re-allocated in ip_addrs table
>> at these points:
>> a. create cluster
>> b. modify networks configuration
>> c. modify one network
>> d. modify network template
>> e. change nodes set for cluster
>> f. change node roles set on nodes
>> g. modify cluster attributes (change set of plugins)
>> h. modify release
>>
>> 4. Add 'manual' field into VIP meta to indicate whether it is
>> auto-allocated or not.
>>
>> So, whole VIP description may look like:
>> {
>> 'name': 'management',
>> 'network_role': 'mgmt/vip',
>> 'namespace': 'haproxy',
>> 'node_roles': ['controller'],
>> 'alias': 'management_vip',
>> 'manual': True,
>> }
>>
>> Example of current VIP description:
>>
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L207
>>
>> Nailgun will re-allocate VIP address if 'manual' == False.
>>
>> 5. Q. What to do when the given address overlaps with a network from another
>> environment? Or overlaps with a network of the current environment which does
>> not match the network role of the VIP?
>>
>> Use '--force' parameter to change it. PUT will fail otherwise.
>>
>>
>> Guys, please review this and share your comments here,
>>
>> Thanks,
>>
>>
>>
>> Aleksey Kasatkin
>>
>>
>> On Tue, Nov 3, 2015 at 10:47 AM, Aleksey Kasatkin > > wrote:
>>
>>> Igor,
>>>
>>> > For VIP allocation we should use POST request. It's ok to use PUT for
>>> setting (changing) IP address.
>>>
>>> My proposal is about setting IP addresses for VIPs only (auto and
>>> manual).
>>> No any other allocations.
>>> Do you propose to use POST for first-time IP allocation and PUT for IP
>>> re-allocation?
>>> Or use POST for adding entries to some new 'vips' table (so that all
>>> VIPs descriptions
>>> will be added there from network roles)?
>>>
>>> > We don't store network_role, namespace and node_roles within VIPs.
>>> > They belong to network roles. So how are you going to retrieve
>>> > them? Did you plan to make some changes to our data model? You know,
>>> > it's not a good idea to make connections between network roles and
>>> > VIPs each time you make a GET request to list them.
>>>
>>> It's our current format we use in API when VIPs are being retrieved.
>>> Do you propose to use different one for address allocation?
>>>
>>> > Should we return VIPs that aren't allocated, and if so - why? If they
>>> > would be just, you know, fetched from network roles - that's a bad
>>> > design. Each VIP should have an explicit entry in VIPs database table.
>>>
>>> I propose to return VIPs even w/o IP addresses to show user what VIPs he
>>> has
>>> so he can assign IP addresses to them. Yes, I supposed that the
>>> information
>>> will be retrieved from network roles as it is done now. Do you propose
>>> to create
>>> separate table for VIPs or extend ip_addrs table to 

Re: [openstack-dev] [Fuel] Change VIP address via API

2015-11-06 Thread Vladimir Kuklin
+1 to Mike

It would be awesome to get an API handler that allows one to actually add
an IP address to the ip_addrs table, as well as an IP range to the ip_ranges table.

On Fri, Nov 6, 2015 at 6:15 AM, Mike Scherbakov 
wrote:

> Is there a way to make it more generic, not "VIP" specific? Let's say I
> want to reserve address(-es) for something, for whatever reason, and then I
> want to use them in some tricky way.
> More specifically, can we reserve IP address(-es) with some codename, and
> use it later?
> 12.12.12.12 - my-shared-ip
> 240.0.0.2 - my-multicast
> and then use them in puppet / whatever deployment code by $my-shared-ip,
> $my-multicast?
>
> Thanks,
>
> On Tue, Nov 3, 2015 at 8:49 AM Aleksey Kasatkin 
> wrote:
>
>> Folks,
>>
>> Here is a resume of our recent discussion:
>>
>> 1. Add new URLs for processing VIPs:
>>
>> /clusters/<cluster_id>/network_configuration/vips/ (GET)
>> /clusters/<cluster_id>/network_configuration/vips/<ip_addr_id>/ (GET, PUT)
>>
>> where <ip_addr_id> is the id in the ip_addrs table.
>> So, user can get all VIPS, get one VIP by id, change parameters (IP
>> address) for one VIP by its id.
>> More possibilities can be added later.
>>
>> Q. Any allocated IP could be accessible via these handlers, so for now we can
>> restrict the user to accessing VIPs only
>> and answer with an error for other ip_addrs ids.
>>
>> 2. Add current VIP meta into ip_addrs table.
>>
>> Create new field in ip_addrs table for placing VIP metadata there.
>> Current set of ip_addrs fields:
>> id (int),
>> network (FK),
>> node (FK),
>> ip_addr (string),
>> vip_type (string),
>> network_data (relation),
>> node_data (relation)
>>
>> Q. We could replace vip_type (it contains VIP name now) with vip_info.
>>
>> 3. Allocate VIPs on cluster creation and seek VIPs at all network changes.
>>
>> So, VIPs will be checked (via network roles descriptions) and
>> re-allocated in ip_addrs table
>> at these points:
>> a. create cluster
>> b. modify networks configuration
>> c. modify one network
>> d. modify network template
>> e. change nodes set for cluster
>> f. change node roles set on nodes
>> g. modify cluster attributes (change set of plugins)
>> h. modify release
>>
>> 4. Add 'manual' field into VIP meta to indicate whether it is
>> auto-allocated or not.
>>
>> So, whole VIP description may look like:
>> {
>> 'name': 'management',
>> 'network_role': 'mgmt/vip',
>> 'namespace': 'haproxy',
>> 'node_roles': ['controller'],
>> 'alias': 'management_vip',
>> 'manual': True,
>> }
>>
>> Example of current VIP description:
>>
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L207
>>
>> Nailgun will re-allocate VIP address if 'manual' == False.
>>
>> 5. Q. What to do when the given address overlaps with a network from another
>> environment? Or overlaps with a network of the current environment which does
>> not match the network role of the VIP?
>>
>> Use '--force' parameter to change it. PUT will fail otherwise.
>>
>>
>> Guys, please review this and share your comments here,
>>
>> Thanks,
>>
>>
>>
>> Aleksey Kasatkin
>>
>>
>> On Tue, Nov 3, 2015 at 10:47 AM, Aleksey Kasatkin > > wrote:
>>
>>> Igor,
>>>
>>> > For VIP allocation we should use POST request. It's ok to use PUT for
>>> setting (changing) IP address.
>>>
>>> My proposal is about setting IP addresses for VIPs only (auto and
>>> manual).
>>> No any other allocations.
>>> Do you propose to use POST for first-time IP allocation and PUT for IP
>>> re-allocation?
>>> Or use POST for adding entries to some new 'vips' table (so that all
>>> VIPs descriptions
>>> will be added there from network roles)?
>>>
>>> > We don't store network_role, namespace and node_roles within VIPs.
>>> > They belong to network roles. So how are you going to retrieve
>>> > them? Did you plan to make some changes to our data model? You know,
>>> > it's not a good idea to make connections between network roles and
>>> > VIPs each time you make a GET request to list them.
>>>
>>> It's our current format we use in API when VIPs are being retrieved.
>>> Do you propose to use different one for address allocation?
>>>
>>> > Should we return VIPs that aren't allocated, and if so - why? If they
>>> > would be just, you know, fetched from network roles - that's a bad
>>> > design. Each VIP should have an explicit entry in VIPs database table.
>>>
>>> I propose to return VIPs even w/o IP addresses to show user what VIPs he
>>> has
>>> so he can assign IP addresses to them. Yes, I supposed that the
>>> information
>>> will be retrieved from network roles as it is done now. Do you propose
>>> to create
>>> separate table for VIPs or extend ip_addrs table to store VIPs
>>> information?
>>>
>>> > We definitely should handle `null` this way, but I think from API POV
>>> > it would be more clearer just do not pass `ipaddr` value if user wants
>>> > it to be auto allocated. I mean, let's 

Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Daniel P. Berrange
On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
> Hello all,
> I came across [1] which is notionally an ironic bug, in that horizon presents
> VM operations (like suspend) to users.  Clearly these options don't make sense
> to ironic, which can be confusing.
> 
> There is a horizon fix that just disables migrate/suspend and other functions
> if the operator sets a flag saying ironic is present.  Clearly this is
> sub-optimal for a mixed hypervisor environment.
> 
> The data needed (hypervisor type) is currently available only to admins; a quick
> hack to remove this policy restriction is functional.
> 
> There are a few ways to solve this.
> 
>  1. Change the default from "rule:admin_api" to "" (for
>     os_compute_api:os-extended-server-attributes and
>     os_compute_api:os-hypervisors), and set a list of values we're
>     comfortable exposing to the user (hypervisor_type and
>     hypervisor_hostname).  So a user can get the hypervisor_hostname as part of
>     the instance details and get the hypervisor_type from the os-hypervisors
>     API.  This would work for horizon but increases the API load on nova and
>     kinda implies that horizon would have to cache the data and open-code
>     assumptions about whether a given hypervisor_type can/can't do action $x.
> 
>  2. Include the hypervisor_type with the instance data.  This would place the
>     burden on nova.  It makes looking up instance details slightly more
>     complex but doesn't result in additional API queries, nor caching
>     overhead in horizon.  This has the same open-coding issues as Option 1.
> 
>  3. Define a service user and have horizon look up the hypervisor details via
>     that role.  Has all the drawbacks of option 1 and I'm struggling to
>     think of many benefits.
> 
>  4. Create a capabilities API of some description, that can be queried so that
>     consumers (horizon) can know
> 
>  5. Some other way for users to know what kind of hypervisor they're on.
>     Perhaps there is an established image property that would work here?
> 
> If we're okay with exposing the hypervisor_type to users, then #2 is pretty
> quick and easy, and could be done in Mitaka.  Option 4 is probably the best
> long-term solution but I think it is best done in 'N' as it needs lots of
> discussion.

I think that exposing hypervisor_type is very much the *wrong* approach
to this problem. The set of allowed actions varies based on much more than
just the hypervisor_type. The hypervisor version may affect it, as may
the hypervisor architecture, and even the version of Nova. If horizon
restricted its actions based on hypervisor_type alone, then it is going
to inevitably prevent the user from performing otherwise valid actions
in a number of scenarios.

IMHO, a capabilities based approach is the only viable solution to
this kind of problem.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. What does it imply?

2015-11-06 Thread Thierry Carrez
Carl Baldwin wrote:
>> - StableBranch page though requires that we don’t merge non-critical bug
>> fixes there: "Only critical bugfixes and security patches are acceptable”
> 
> Seems a little premature for Kilo.  It is a little more than 6 months old.
> 
>> Some projects may want to continue backporting reasonable (even though
>> non-critical) fixes to older stable branches. F.e. in neutron, I think there
>> is will to continue providing backports for the branch.
> 
> +1  I'd like to reiterate my support for backporting appropriate and
> sensible bug fixes to Kilo.

"Stable" always had two conflicting facets: it means working well, and
it means changing less. In the first stage of stable maintenance the
focus is on "working well", with lots of backports for issues discovered
in the initial release. But after some time you caught all of the major
issues and the focus shifts to "changing less". This is what the support
phases are about, gradually shifting from one facet to another.

That said, that can certainly be revisited. I suppose as long as extra
care is taken in selecting appropriate fixes for older branches, we can
get the best of both worlds.

Note that we'll likely spin out the stable branch maintenance team into
its own project team (outside of the release management team), now that
its focus is purely on defining a common stable branch policy and making
sure it's respected across a wide range of project-specific maintenance
teams. So that new team could definitely change the common rules there
:) More on that soon.

Cheers,

-- 
Thierry Carrez (ttx)



[openstack-dev] [deb-packaging] [infra] I need help to move forward with the build CI: building a sbuild Debian image

2015-11-06 Thread Thomas Goirand
Hi,

As the first package source got approved a week before the Tokyo summit
(ie: openstack/deb-openstack-pkg-tools), in the Tokyo design summit, we
had a 2 sessions to discuss packaging within OpenStack upstream infra.
Here's the Etherpad:
https://etherpad.openstack.org/p/mitaka-deb-packaging

I was told during the session that I should just RTFM, and it would be
easy. The truth is exactly what I feared: after reading the links added to
the end of the Etherpad page, I'm still lost, and I don't know what I
should be doing to get further.

I do know what should be done overall, what I need is help on how to
actually implement this.

What should be done first:
=-=-=-=-=-=-=-=-=-=-=-=-=-

What I would like to start doing is creating a new VM image which would
include:
- sbuild already set up
- a copy of all the Git repos (re-using the DIB elements which already exist)

Later on:
=-=-=-=-=

Then once that is done, the first package build job should be added to
the openstack/deb-openstack-pkg-tools.git as a CI check, and a publish
job should be done when the patch is merged. This will lead to a first
package reaching a new Debian repository from OpenStack infra, which we
can later on add in /home/ftp/debian on the sbuild Debian image.

Also, the Debian sbuild image will be re-used for running a full
single-node deployment of OpenStack on which we will run tempest.

Help that I need:
=-=-=-=-=-=-=-=-=
So, I need help for the first part (ie: creating the VM image patch).
Once I get a first image, I believe it should be way easier for me
to get going. Note that I do have the sbuild setup script already
available here:
https://github.com/openstack/deb-openstack-pkg-tools/blob/debian/liberty/build-tools/pkgos-setup-sbuild

so I just need help on the new image part, and then know where I can
hook my script.

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [Neutron][dns]What the meaning of "dns_assignment" and "dns_name"?

2015-11-06 Thread Neil Jerram
Hi Miguel,

I’ve just been looking at this, and have deduced the following summary of the 
new dns_name and dns_assignment fields:

- dns_name is a simple name, like 'vm17'. It is a writable port field, and gets 
combined with a dns_domain that is specified elsewhere. 

- dns_assignment is a server-generated read-only field‎, holding a list of 
dicts like {'hostname': 'vm17', 'ip_address': '10.65.0.4', 'fqdn': 
'vm17.datcon.co.uk'}.

Can you confirm whether that's correct?

What is the reason (or requirement) for dns_assignment being able to specify 
hostname and fqdn on a per-IP-address basis?  Does it ever make sense for a VM 
to associate a different hostname with different NICs or IP addresses?

Many thanks,
Neil
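
A minimal sketch of how the two fields look from a client's perspective,
assuming the DNS integration extension is enabled and using
python-neutronclient; the credentials, port ID and dns_name value are
illustrative only.

    # Minimal sketch, assuming Neutron's DNS integration is enabled
    # (dns_domain set in neutron.conf) and that these credentials exist.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    port_id = 'REPLACE-WITH-A-REAL-PORT-ID'

    # dns_name is writable: set the simple host name on the port.
    neutron.update_port(port_id, {'port': {'dns_name': 'vm17'}})

    # dns_assignment is read-only and server-generated: one entry per fixed
    # IP, each with hostname, ip_address and fqdn.
    port = neutron.show_port(port_id)['port']
    for entry in port['dns_assignment']:
        print(entry['hostname'], entry['ip_address'], entry['fqdn'])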



From: Miguel Lavalle [mailto:mig...@mlavalle.com] 
Sent: 14 October 2015 04:22
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron][dns]What the meaning of "dns_assignment" 
and "dns_name"?

Zhi Chang,
Thank you for your questions. We are in the process of integrating Neutron and 
Nova with an external DNS service, using Designate as the reference 
implementation. This integration is being achieved in 3 steps. What you are 
seeing is the result of only the first one. These steps are:
1) Internal DNS integration in Neutron, which merged recently: 
https://review.openstack.org/#/c/200952/. As you may know, Neutron has an 
internal DHCP / DNS service based on dnsmasq for each virtual network that you 
create. Previously, whenever you created a port on a given network, your port 
would get a default host name in dnsmasq of the form 
'host-xx-xx-xx-xx.openstacklocal.", where xx-xx-xx-xx came from the port's 
fixed ip address "xx.xx.xx.xx" and "openstacklocal" is the default domain used 
by Neutron. This name was generated by the dhcp agent. In the above mentioned 
patchset, we are moving the generation of these dns names to the Neutron 
server, with the intent of allowing the user to specify it. In order to do 
that, you need to enable it by defining in neutron.conf the 'dns_domain' 
parameter with a value different to the default 'openstacklocal'. Once you do 
that, you can create or update a port and assign a value to its 'dns_name' 
attribute. Why is this useful? Please read on.

2) External DNS integration in Neutron. The patchset is being worked now: 
https://review.openstack.org/#/c/212213/. The functionality implemented here 
allows Neutron to publish the dns_name associated with a floating ip under a 
domain in an external dns service. We are using Designate as the reference 
implementation, but the idea is that in the future other DNS services can be 
integrated.. Where does the dns name and domain of the floating ip come from? 
It can come from 2 sources. Source number 1 is the floating ip itself, because 
in this patchset we are adding a dns_name and a dns_domain attributes to it. If 
the floating ip doesn't have a dns name and domain associated with it, then 
they can come from source number 2: the port that the floating ip is associated 
with (as explained in point 1, ports now can have a dns_name attribute) and the 
port's network, since this patchset adds dns_domain to networks.
3) Integration of Nova with Neutron's DNS. I have started the implementation of 
this and over the next few days will push the code to Gerrit for first review. 
When an instance is created, nova will request to Neutron the creation of the 
corresponding port specifying the instance's hostname in the port's 'dns_name' 
attribute (as explained in point 1). If the network where that port lives has a 
dns_domain associated with it (as explained in point 2) and you assign a 
floating ip to the port, your instance's hostname will be published in the 
external dns service.
To make it clearer, here I walk you through an example that I executed in my 
devstack: http://paste.openstack.org/show/476210/
As mentioned above, we also allow the dns_name and dns_domain to be published 
in the external dns to be defined at the floating ip level. The reason for this 
is that we envision a use case where the name and ip address made public in the 
dns service are stable, regardless of the nova instance associated with the 
floating ip.
If you are attending the upcoming Tokyo summit, you could attend the following 
talk for further information:  
http://openstacksummitoctober2015tokyo.sched.org/event/5cbdd5fb4a6d080f93a5f321ff59009c#.Vh3KMZflRz2
 Looking forward to see you there!

Hope this answers your questions
Best regards
Miguel Lavalle

On Tue, Oct 13, 2015 at 9:58 AM, Zhi Chang  wrote:
Hi, all
    I installed the latest devstack and created a vm with nova. I got the port's 
info, which was created by Neutron. I'm confused about the meaning of the columns 
"dns_assignment" and "dns_name".
    First, column "dns_assignment" is a read-only attribute. What is it used 
for? I think that this column is useless 

[openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Vladimir Kozhukalov
Dear colleagues,

At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
it includes the following:

* RPM repository (upstream + mos)
* DEB repository (mos)
* openstack.yaml
* version.yaml
* upgrade script itself (+ virtualenv)

Apart from upgrading the docker containers, this upgrade script makes copies of
the RPM/DEB repositories and puts them on the master node, naming these
repository directories depending on what is written in openstack.yaml and
version.yaml. My plan was something like:

1) deprecate version.yaml (move all fields from there to various places)
2) deliver openstack.yaml with fuel-openstack-metadata package
3) do not put new repos on the master node (instead we should use online
repos or use fuel-createmirror to make local mirrors)
4) deliver fuel-upgrade package (throw away upgrade virtualenv)

Then UX was supposed to be roughly like:

1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
2) yum install fuel-upgrade
3) /usr/bin/fuel-upgrade (the script was going to become lighter, because there
would no longer be parts copying the RPM/DEB repos)

However, it turned out that Fuel 8.0 is going to run on CentOS 7, and it
is not enough to just do the things we usually did during upgrades. Now
there are two ways to upgrade:
1) use the official CentOS upgrade script for upgrading from 6 to 7
2) back up the master node, then reinstall it from scratch and then apply the
backup

The upgrade team is trying to understand which way is more appropriate.
Regarding my tarball-related activities, I'd say that this package-based
upgrade approach can be aligned with (1) (fuel-upgrade would use the official
CentOS upgrade script as a first step of the upgrade), but it definitely cannot
be aligned with (2), because that assumes reinstalling the master node
from scratch.

Right now, I'm finishing the work around deprecating version.yaml, and my
further steps would be to modify the fuel-upgrade script so it does not copy
the RPM/DEB repos, but those steps make little sense given the move to
CentOS 7.

Colleagues, let's make a decision about how we are going to upgrade the
master node ASAP. Probably my tarball-related work should be reduced to
just throwing the tarball away.


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Ghe Rivero
Quoting Clint Byrum (2015-11-06 09:07:20)
> Whether they are VMs or real machines, our tools
> should strive to be about the user and their workload, and not about the
> operator and their technology.

+1 This should be in a poster in every office.

Ghe Rivero

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-06 Thread Evgeniy L
Javeria,

In your case, I think it's easier to generate config on the target node,
using puppet for example, since the information which you may need
is placed in /etc/astute.yaml file. Also it may be a problem to retrieve
all required information about the cluster, since API is protected with
keystone authentication.

Thanks,

On Thu, Nov 5, 2015 at 5:35 PM, Javeria Khan  wrote:

> Hi Evgeniy,
>
>>
>> 1. what version of Fuel do you use?
>>
> Using 7.0
>
>
>> 2. could you please clarify what did you mean by "moving to
>> deployment_tasks.yaml"?
>>
> I tried changing my tasks.yaml to a deployment_tasks.yaml as the wiki
> suggests for 7.0. However I kept hitting issues.
>
>
>> 3. could you please describe your use-case a bit more? Why do you want to
>> run
>> tasks on the host itself?
>>
>
> I have a monitoring tool that accompanies my new plugin, which basically
> uses a config file that contains details about the cluster (IPs, VIPs,
> networks etc). This config file is typically created on the installer nodes
> through the deployment, Fuel Master in this case.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-06 Thread Bhandaru, Malini K
+1 on Chris's comments on implementation and API.
Migrate, if all is ideal, should take the initial launch flavor.

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: Thursday, November 05, 2015 8:46 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Migration state machine proposal.

On 11/05/2015 08:33 AM, Andrew Laski wrote:
> On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:

>> Or more specifically, the migrate and resize API actions both call 
>> the resize function in the compute api. As Ed said, they are 
>> basically the same behind the scenes. (But the API difference is 
>> important.)
>
> Can you be a little more specific on what API difference is important to you?
> There are two differences currently between migrate and resize in the API:
>
> 1. There is a different policy check, but this only really protects the next 
> bit.
>
> 2. Resize passes in a new flavor and migration does not.
>
> Both actions result in an instance being scheduled to a new host.  If 
> they were consolidated into a single action with a policy check to 
> enforce that users specified a new flavor and admins could leave that 
> off would that be problematic for you?


To me, the fact that resize and cold migration share the same implementation is 
just that, an implementation detail.

 From the outside they are different things...one is "take this instance and 
move it somewhere else", and the other "take this instance and change its 
resource profile".

To me, the external API would make more sense as:

1) resize

2) migrate (with option of cold or live, and with option to specify a 
destination, and with option to override the scheduler if the specified 
destination doesn't pass filters)


And while we're talking, I don't understand why "allow_resize_to_same_host" 
defaults to False.  The comments in https://bugs.launchpad.net/nova/+bug/1251266
say that it's not intended to be used in production, but doesn't give a 
rationale for that statement.  If you're using local storage and you just want 
to add some more CPUs/RAM to the instance, wouldn't it be beneficial to avoid 
the need to copy the rootfs?

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-06 Thread Javeria Khan
Thank Evgeniy.  I came to the same conclusion.


--
Javeria

On Fri, Nov 6, 2015 at 1:41 PM, Evgeniy L  wrote:

> Javeria,
>
> In your case, I think it's easier to generate config on the target node,
> using puppet for example, since the information which you may need
> is placed in /etc/astute.yaml file. Also it may be a problem to retrieve
> all required information about the cluster, since API is protected with
> keystone authentication.
>
> Thanks,
>
> On Thu, Nov 5, 2015 at 5:35 PM, Javeria Khan 
> wrote:
>
>> Hi Evgeniy,
>>
>>>
>>> 1. what version of Fuel do you use?
>>>
>> Using 7.0
>>
>>
>>> 2. could you please clarify what did you mean by "moving to
>>> deployment_tasks.yaml"?
>>>
>> I tried changing my tasks.yaml to a deployment_tasks.yaml as the wiki
>> suggests for 7.0. However I kept hitting issues.
>>
>>
>>> 3. could you please describe your use-case a bit more? Why do you want
>>> to run
>>> tasks on the host itself?
>>>
>>
>> I have a monitoring tool that accompanies my new plugin, which basically
>> uses a config file that contains details about the cluster (IPs, VIPs,
>> networks etc). This config file is typically created on the installer nodes
>> through the deployment, Fuel Master in this case.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-06 Thread Javeria Khan
Sounds great.


--
Javeria

On Fri, Nov 6, 2015 at 2:31 PM, Evgeniy L  wrote:

> Great, let us know, if you have any problems.
>
> Also for the future we have some ideas/plans to provide a way for
> a plugin to retrieve any information from API.
>
> Thanks,
>
> On Fri, Nov 6, 2015 at 12:16 PM, Javeria Khan 
> wrote:
>
>> Thank Evgeniy.  I came to the same conclusion.
>>
>>
>> --
>> Javeria
>>
>> On Fri, Nov 6, 2015 at 1:41 PM, Evgeniy L  wrote:
>>
>>> Javeria,
>>>
>>> In your case, I think it's easier to generate config on the target node,
>>> using puppet for example, since the information which you may need
>>> is placed in /etc/astute.yaml file. Also it may be a problem to retrieve
>>> all required information about the cluster, since API is protected with
>>> keystone authentication.
>>>
>>> Thanks,
>>>
>>> On Thu, Nov 5, 2015 at 5:35 PM, Javeria Khan 
>>> wrote:
>>>
 Hi Evgeniy,

>
> 1. what version of Fuel do you use?
>
 Using 7.0


> 2. could you please clarify what did you mean by "moving to
> deployment_tasks.yaml"?
>
 I tried changing my tasks.yaml to a deployment_tasks.yaml as the wiki
 suggests for 7.0. However I kept hitting issues.


> 3. could you please describe your use-case a bit more? Why do you want
> to run
> tasks on the host itself?
>

 I have a monitoring tool that accompanies my new plugin, which
 basically uses a config file that contains details about the cluster (IPs,
 VIPs, networks etc). This config file is typically created on the installer
 nodes through the deployment, Fuel Master in this case.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Kuvaja, Erno
> -Original Message-
> From: Tony Breeds [mailto:t...@bakeyournoodle.com]
> Sent: Friday, November 06, 2015 6:15 AM
> To: OpenStack Development Mailing List
> Cc: openstack-operat...@lists.openstack.org
> Subject: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.
> 
> Hello all,
> 
> I'll start by acknowledging that this is a big and complex issue and I do not
> claim to be across all the view points, nor do I claim to be particularly
> persuasive ;P
> 
> Having stated that, I'd like to seek constructive feedback on the idea of
> keeping Juno around for a little longer.  During the summit I spoke to a
> number of operators, vendors and developers on this topic.  There was some
> support and some "That's crazy pants!" responses.  I clearly didn't make it
> around to everyone, hence this email.

I'm not a big fan of this idea, for a number of reasons below.
> 
> Acknowledging my affiliation/bias:  I work for Rackspace in the private cloud
> team.  We support a number of customers currently running Juno that are,
> for a variety of reasons, challenged by the Kilo upgrade.

I'm working at HPE in the Cloud Engineering team, fwiw.
> 
> Here is a summary of the main points that have come up in my
> conversations, both for and against.
> 
> Keep Juno:
>  * According to the current user survey[1] Icehouse still has the
>biggest install base in production clouds.  Juno is second, which makes
>sense. If we EOL Juno this month that means ~75% of production clouds
>will be running an EOL'd release.  Clearly many of these operators have
>support contracts from their vendor, so those operators won't be left
>completely adrift, but I believe it's the vendors that benefit from keeping
>Juno around. By working together *in the community* we'll see the best
>results.

As you say, there should be some support base for these releases. Unfortunately 
very little of that has been reflected upstream. It looks like these vendors 
and operators keep backporting to their own forks, but do not propose the 
backports to the upstream branches, or these installations are not really 
maintained.
> 
>  * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but
> we
>still have a huge Icehouse/Juno install base.
> 
> For me this is pretty compelling but for balance 
> 
> Keep the current plan and EOL Juno Real Soon Now:
>  * There is also no ignoring the elephant in the room that with HP stepping
>back from public cloud there are questions about our CI capacity, and
>keeping Juno will have an impact on that critical resource.

I leave this point open as I do not know what our plans towards infra are. 
Perhaps someone who does know could shed some light.
> 
>  * Juno (and other stable/*) resources have a non-zero impact on *every*
>project, esp. @infra and release management.  We need to ensure this
>isn't too much of a burden.  This mostly means we need enough
> trustworthy
>volunteers.

This has been the main driver for shorter support cycles so far. The group 
maintaining stable branches is small, and at least I haven't seen a huge 
increase in it lately. Stable branches are getting a bit more attention again 
and some great work has been done to ease the workload, but at the same time 
we get loads of new features and projects in, which has an effect on infra 
(resource-wise) and gate stability.
> 
>  * Juno is also tied up with Python 2.6 support. When
>Juno goes, so will Python 2.6 which is a happy feeling for a number of
>people, and more importantly reduces complexity in our project
>infrastructure.

I know lots of people have been waiting for this, myself included.
> 
>  * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
>that are "on the hook" for multiple years of support, so for that case
>we're really only delaying the inevitable.
> 
>  * Some number of the production clouds may never migrate from $version,
> in
>which case longer support for Juno isn't going to help them.

Both very true.
> 
> 
> I'm sure these question were well discussed at the VYR summit where we
> set the EOL date for Juno, but I was new then :) What I'm asking is:
> 
> 1) Is it even possible to keep Juno alive (is the impact on the project as
>a whole acceptable)?

Based on current status I do not think so.
> 
> Assuming a positive answer:
> 
> 2) Who's going to do the work?
> - Me, who else?

This is one of the key questions.

> 3) What do we do if people don't actually do the work but we as a community
>have made a commitment?

This was done in YVR: we decided to cut the losses and EOL early.

> 4) If we keep Juno alive for $some_time, does that imply we also bump the
>life cycle on Kilo and liberty and Mitaka etc?

That would be the logical thing to do. At least I don't think Juno was so 
special that it deserves a different schedule from Kilo, Liberty, etc.
> 
> Yours Tony.
> 
> [1] 

Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-06 Thread Tang Chen


On 11/06/2015 12:45 PM, Chris Friesen wrote:

On 11/05/2015 08:33 AM, Andrew Laski wrote:

On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:


Or more specifically, the migrate and resize API actions both call 
the resize
function in the compute api. As Ed said, they are basically the same 
behind

the scenes. (But the API difference is important.)


Can you be a little more specific on what API difference is important 
to you?
There are two differences currently between migrate and resize in the 
API:


1. There is a different policy check, but this only really protects 
the next bit.


2. Resize passes in a new flavor and migration does not.

Both actions result in an instance being scheduled to a new host.  If 
they were
consolidated into a single action with a policy check to enforce that 
users
specified a new flavor and admins could leave that off would that be 
problematic

for you?



To me, the fact that resize and cold migration share the same 
implementation is just that, an implementation detail.


From the outside they are different things...one is "take this 
instance and move it somewhere else", and the other "take this 
instance and change its resource profile".


To me, the external API would make more sense as:

1) resize

2) migrate (with option of cold or live, and with option to specify a 
destination, and with option to override the scheduler if the 
specified destination doesn't pass filters)


OK. Conceptually speaking, there is only one case in which resize needs to 
reuse the migration code: the current host cannot satisfy the resize request.

In that case the VM should be migrated to another host, and the resize done 
there.

So I don't think resize should be treated as one type of migration.

May I understand it like this: what we are talking about here is a 
three-level concept, roughly as in the table below.


user API    nova service        driver
--------    ------------        ------
migrate     live-migration      off-compute-node storage --- shared file system
resize      cold-migration      on-compute-node storage  --- shared file system
rebuild                         on-compute-node storage  --- non-shared file system
evacuate

Indeed, it is an implementation detail. If we can refactor the source code 
as above, maybe it will be clearer.





And while we're talking, I don't understand why 
"allow_resize_to_same_host" defaults to False.  The comments in 
https://bugs.launchpad.net/nova/+bug/1251266 say that it's not 
intended to be used in production, but doesn't give a rationale for 
that statement.  If you're using local storage and you just want to 
add some more CPUs/RAM to the instance, wouldn't it be beneficial to 
avoid the need to copy the rootfs?


I'm sorry, I don't know why it is False by default. But if we can 
refactor the source code and split resize and migrate conceptually, I 
think we won't need this option any more.
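
For reference, the option being discussed is a real nova.conf boolean; the 
snippet below is only to make the discussion concrete:

  # /etc/nova/nova.conf, [DEFAULT] section
  allow_resize_to_same_host = True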


And another question about resize: shall we think about CPU/memory 
hotplug? AFAIK, the Linux kernel and qemu now support memory hotplug. 
CPU hotplug in qemu is still being developed. I was thinking resize 
could use these functionalities.


Thanks.



Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-06 Thread Vikas Choudhary
@Taku,

Ideally libnetwork should not be able to sync network state when the driver
capability is "local". If it can, what is the purpose of having the
"capability" feature? How will drivers (those having their own control plane)
be able to "mute" libnetwork? This would result in two "sources of
truth" in that case.

Thoughts?


-Vikas

On Fri, Nov 6, 2015 at 9:19 AM, Vikas Choudhary 
wrote:

> @Taku,
>
> Please have a look on this discussion. This is all about local and global
> scope:
> https://github.com/docker/libnetwork/issues/486
>
>
> Plus, I used same docker options as you mentioned. Fact that it was
> working for networks created with overlay driver making me think it was not
> a configuration issue. Only networks created with kuryr were not getting
> synced.
>
>
> Thanks
> Vikas Choudhary
>
> On Fri, Nov 6, 2015 at 8:07 AM, Taku Fukushima 
> wrote:
>
>> Hi Vikas,
>>
>> I thought the "capability" affected the propagation of the network state
>> across nodes as well. However, in my environment, where I tried Consul and
>> ZooKeeper, I observed a new network created in a host is displayed on
>> another host when I hit "sudo docker network ls" even if I set the
>> capability to "local", which is the current default. So I'm just wondering
>> what this capability means. The spec doesn't say much about it.
>>
>>
>> https://github.com/docker/libnetwork/blob/8d03e80f21c2f21a792efbd49509f487da0d89cc/docs/remote.md#set-capability
>>
>> I saw your bug report that describes the network state propagation didn't
>> happen appropriately. I also experienced the issue and I'd say it would be
>> the configuration issue. Please try with the following option. I'm putting
>> it in /etc/default/docker and managing the docker daemon through "service"
>> command.
>>
>> DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376
>> --cluster-store=consul://192.168.11.14:8500 --cluster-advertise=
>> 192.168.11.18:2376"
>>
>> The network is the only user facing entity in libnetwork for now since
>> the concept of the "service" is abandoned in the stable Docker 1.9.0
>> release and it's shared by libnetwork through libkv across multiple hosts.
>> Endpoint information is stored as a part of the network information as you
>> documented in the devref and the network is all what we need so far.
>>
>>
>> https://github.com/openstack/kuryr/blob/d1f4272d6b6339686a7e002f8af93320f5430e43/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking
>>
>> Regarding changing the capability to "global", it totally makes sense and
>> we should change it despite the networks would be shared among multiple
>> hosts anyways.
>>
>> Best regards,
>> Taku Fukushima
>>
>>
>> On Thu, Nov 5, 2015 at 8:39 PM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Thanks Toni.
>>> On 5 Nov 2015 16:02, "Antoni Segura Puimedon" <
>>> toni+openstac...@midokura.com> wrote:
>>>


 On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary <
 choudharyvika...@gmail.com> wrote:

> ++ [Neutron] tag
>
>
> On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi all,
>>
>> By network control plane i specifically mean here sharing network
>> state across docker daemons sitting on different hosts/nova_vms in
>> multi-host networking.
>>
>> libnetwork provides flexibility where vendors have a choice between
>> network control plane to be handled by libnetwork(libkv) or remote driver
>> itself OOB. Vendor can choose to "mute" libnetwork/libkv by advertising
>> remote driver capability as "local".
>>
>> "local" is our current default "capability" configuration in kuryr.
>>
>> I have following queries:
>> 1. Does it mean Kuryr is taking responsibility of sharing network
>> state across docker daemons? If yes, network created on one docker host
>> should be visible in "docker network ls" on other hosts. To achieve 
>> this, I
>> guess kuryr driver will need help of some distributed data-store like
>> consul etc. so that kuryr driver on other hosts could create network in
>> docker on other hosts. Is this correct?
>>
>> 2. Why we cannot  set default scope as "Global" and let libkv do the
>> network state sync work?
>>
>> Thoughts?
>>
>
 Hi Vikas,

 Thanks for raising this. As part of the current work on enabling
 multi-node we should be moving the default to 'global'.


>
>> Regards
>> -Vikas Choudhary
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [nova][api]

2015-11-06 Thread Salvatore Orlando
It makes sense to have a single point where response pagination is handled in
API processing, rather than scattering pagination across Nova REST
controllers; unfortunately I am not really able to comment on how feasible
that would be in Nova's WSGI framework.

However, I'd just like to add that there is an approved guideline for API
response pagination [1], and it would be good if all these efforts followed
the guideline.

Salvatore

[1]
https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
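
For concreteness, the guideline boils down to limit/marker style paging, so a
guideline-compliant os-hypervisors API would accept requests roughly like the
sketch below (this is the proposal under review, not something already merged):

  GET /v2.1/os-hypervisors?limit=50
  GET /v2.1/os-hypervisors?limit=50&marker=<id-of-last-item-in-previous-page>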

On 5 November 2015 at 03:09, Tony Breeds  wrote:

> Hi All,
> Around the middle of October a spec [1] was uploaded to add pagination
> support to the os-hypervisors API.  While I recognize the use case it
> seemed
> like adding another pagination implementation wasn't an awesome idea.
>
> Today I see 3 more requests to add pagination to APIs [2]
>
> Perhaps I'm over thinking it but should we do something more strategic
> rather
> than scattering "add pagination here".
>
> It looks to me like we have at least 3 parties interested in this.
>
> Yours Tony.
>
> [1] https://review.openstack.org/#/c/234038
> [2]
> https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-06 Thread Taku Fukushima
Hi Vikas,

> Ideally libnetwork should not be able to sync network state with driver
capability as "local". If that is the case, what is the purpose of having
"capability" feature. How drivers(those having their own control plane)
will be able to "mute" libnetwork. This will result in two "source of
truth" in that case.

That might be the case with multiple Consul servers in a single datacenter or
across multiple datacenters. But with the stable 1.9.0 I'm seeing the behaviour
described below, although it's likely a bug.

Let me share the recorded videos. I prepared hostA (192.168.11.14) and
hostB (192.168.11.18). Due to the bad synchronization there's an orphan
network, "test", on hostB, but please ignore it. To reproduce the
following cases, I'd strongly recommend cleaning up the created networks and
so on, so as not to break the synchronization. In my experience the
coordination with Consul is very fragile at this moment. If something goes
wrong and you don't have important data in Consul, removing the files under
/tmp/consul and starting over from scratch might solve your problem.

Mohammad (banix) also tried it, and I heard he successfully had the network
state synchronized with Consul as well.

A. Single Consul agent
First, I tried with a single Consul agent. This covers the case where
multiple Docker daemons are coordinated with a single Consul server in
a single datacenter.

https://drive.google.com/file/d/0BwURaz1ic-5tUDJ1NFBJU1Bjc00/view?usp=sharing

I launched a Consul agent as a server on hostA, which has 192.168.11.14.

  hostA$ consul agent -server -client=192.168.11.14 -bootstrap -data-dir
/tmp/consul -node=agent-one -bind=192.168.11.14
  hostA$ cat /etc/default/docker | grep ^DOCKER
  DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376
--cluster-store=consul://192.168.11.14:8500 --cluster-advertise=
192.168.11.14:2376"

Then I configured another host, hostB, which has 192.168.11.18, as follows.

  hostB$ cat /etc/default/docker | grep ^DOCKER
  DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376
--cluster-store=consul://192.168.11.14:8500 --cluster-advertise=
192.168.11.18:2376"

The capability of Kuryr is set to "local" and we still see the created
network on both hosts.

B. Multiple Consul agents, the server and the client
Second, I added another Consul agent as a client on hostB and let it join
the Consul server on hostA. This covers the case where multiple Docker
daemons are coordinated with a Consul server and a Consul client in a single
datacenter.

https://drive.google.com/file/d/0BwURaz1ic-5tNTFtR3ZXRDZmM0k/view?usp=sharing

  hostB$ consul agent -client=192.168.11.18 -data-dir /tmp/consul
-node=agent-two -bind=192.168.11.18 -join=192.168.11.14

Then I modified the configuration of the Docker daemon on hostB to point to
the newly added Consul on hostB.

  hostB$ cat /etc/default/docker | grep ^DOCKER
  DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376
--cluster-store=consul://192.168.11.18:8500 --cluster-advertise=
192.168.11.18:2376"

To reflect the configuration change I restarted the Docker daemon. Then I
created a new network, "multi", and it got synchronized on both hosts. The
capability was still set to "local" but both hosts saw the same network.

I may be doing something wrong or misunderstanding things; please let me know
if that is the case. And I haven't tested multiple Consul servers reaching
consensus with Raft, nor Consul servers across multiple datacenters, but
they're supposed to work.

https://www.consul.io/docs/internals/architecture.html

Best regards,
Taku Fukushima

On Fri, Nov 6, 2015 at 5:48 PM, Vikas Choudhary 
wrote:

> @Taku,
>
> Ideally libnetwork should not be able to sync network state with driver
> capability as "local". If that is the case, what is the purpose of having
> "capability" feature. How drivers(those having their own control plane)
> will be able to "mute" libnetwork. This will result in two "source of
> truth" in that case.
>
> Thoughts?
>
>
> -Vikas
>
> On Fri, Nov 6, 2015 at 9:19 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> @Taku,
>>
>> Please have a look on this discussion. This is all about local and global
>> scope:
>> https://github.com/docker/libnetwork/issues/486
>>
>>
>> Plus, I used same docker options as you mentioned. Fact that it was
>> working for networks created with overlay driver making me think it was not
>> a configuration issue. Only networks created with kuryr were not getting
>> synced.
>>
>>
>> Thanks
>> Vikas Choudhary
>>
>> On Fri, Nov 6, 2015 at 8:07 AM, Taku Fukushima 
>> wrote:
>>
>>> Hi Vikas,
>>>
>>> I thought the "capability" affected the propagation of the network state
>>> across nodes as well. However, in my environment, where I tried Consul and
>>> ZooKeeper, I observed a new network created in a host is displayed on
>>> another host when I hit "sudo docker network ls" even if I set the
>>> capability to "local", which is the current default. So I'm just wondering
>>> what this 

Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-06 Thread Evgeniy L
Great, let us know, if you have any problems.

Also for the future we have some ideas/plans to provide a way for
a plugin to retrieve any information from API.

Thanks,

On Fri, Nov 6, 2015 at 12:16 PM, Javeria Khan  wrote:

> Thank Evgeniy.  I came to the same conclusion.
>
>
> --
> Javeria
>
> On Fri, Nov 6, 2015 at 1:41 PM, Evgeniy L  wrote:
>
>> Javeria,
>>
>> In your case, I think it's easier to generate config on the target node,
>> using puppet for example, since the information which you may need
>> is placed in /etc/astute.yaml file. Also it may be a problem to retrieve
>> all required information about the cluster, since API is protected with
>> keystone authentication.
>>
>> Thanks,
>>
>> On Thu, Nov 5, 2015 at 5:35 PM, Javeria Khan 
>> wrote:
>>
>>> Hi Evgeniy,
>>>

 1. what version of Fuel do you use?

>>> Using 7.0
>>>
>>>
 2. could you please clarify what did you mean by "moving to
 deployment_tasks.yaml"?

>>> I tried changing my tasks.yaml to a deployment_tasks.yaml as the wiki
>>> suggests for 7.0. However I kept hitting issues.
>>>
>>>
 3. could you please describe your use-case a bit more? Why do you want
 to run
 tasks on the host itself?

>>>
>>> I have a monitoring tool that accompanies my new plugin, which basically
>>> uses a config file that contains details about the cluster (IPs, VIPs,
>>> networks etc). This config file is typically created on the installer nodes
>>> through the deployment, Fuel Master in this case.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] apt segfaults when too many repositories are configured

2015-11-06 Thread Simon Pasquier
Hello,

While testing LMA with MOS 7.0, we got apt-get crashing and failing the
deployment. The details are in the LP bug [0]; the TL;DR version is that
when more repositories are added (hence more packages), there is a risk
that apt-get commands fail badly when trying to remap memory.

The core issue should be fixed in apt or glibc, but in the meantime
increasing the APT::Cache-Start value makes the issue go away. This is what
we're going to do with the LMA plugin, but since it's independent of LMA,
maybe it needs to be addressed at the Fuel level?
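
For illustration, the workaround is a one-line apt configuration snippet; the
file name and value below are only an example, not what the plugin will ship:

  $ cat /etc/apt/apt.conf.d/99-cache-start
  APT::Cache-Start "50331648";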

BR,
Simon

[0] https://bugs.launchpad.net/lma-toolchain/+bug/1513539
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Thierry Carrez
Tony Breeds wrote:
> [...]
> 1) Is it even possible to keep Juno alive (is the impact on the project as
>a whole acceptable)?

It is *technically* possible, imho. The main cost of keeping it is that the
branches get regularly broken by various other changes, and those breaks
are non-trivial to fix (we have taken steps to make branches more
resilient, but those only started to appear in stable/liberty). The
issues sometimes propagate (through upgrade testing) to master, at which
point it becomes everyone's problem to fix. The burden ends up
falling on the usual gate-fixing heroes, a rare resource we need to protect.

So it's easy to say "we should keep the branch since so many people
still use it", but unless we have significantly more people working on (and
capable of) fixing it when it's broken, the impact on the project is
just not acceptable.

It's not the first time this has been suggested, and every time our
answer was "push more resources into fixing existing stable branches and
we might reconsider it". We were promised lots of support. But I don't
think we have yet seen real change in that area (I still see the same
usual suspects fixing stable gates), and things can still barely keep
afloat with our current end-of-life policy...

Stable branches also come with security support, so keeping more
branches opened mechanically adds to the work of the Vulnerability
Management Team, another rare resource.

There are other hidden costs on the infrastructure side (we can't get
rid of a number of things that we have moved away from while an old
branch still needing those things is around), but I'll let someone
closer to the metal answer that one.

> Assuming a positive answer:
> 
> 2) Who's going to do the work?
> - Me, who else?
> 3) What do we do if people don't actually do the work but we as a community
>have made a commitment?

In the past, that generally meant people opposed to the idea of
extending support periods having to stand up for the community promise
and fix the mess in the end.

PS: stable gates are currently broken for horizon/juno, trove/kilo, and
neutron-lbaas/liberty.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] OpenStack Tokyo Summit Summary

2015-11-06 Thread Fei Long Wang

Sorry typo:  it should be 'pre-signed URL'

On 07/11/15 01:31, Fei Long Wang wrote:

Greetings,

Firstly, thank you to everyone who joined the Zaqar sessions at the Tokyo 
summit. We definitely made some great progress in those working sessions. 
Here is the high-level summary; these are basically our Mitaka 
priorities. I may have missed something, so please feel free to comment on 
or reply to this mail.


Sahara + Zaqar
-

We had a great discussion with Ethan Gafford from the Sahara team. The 
Sahara team is happy to use Zaqar to fix some potential security issues. The 
main use case to be covered in Mitaka is protecting the tenant guest 
and data from the administrative user. So what the Zaqar team needs to do in 
Mitaka is close the zaqar client function gaps for v2 to support the 
pre-signed URL, which will be used by the Sahara guest agent. Ethan will 
create a spec in Sahara to track this work. This is a POC of what it'd 
look like to have a guest agent in Sahara on top of Zaqar. The Sahara 
team has not decided to use Zaqar yet, but this would be the basis for 
that discussion.


Horizon + Zaqar
--

We used one Horizon work session and one Zaqar work session to discuss 
this topic. The main use case we would like to address is async 
notification, so that Horizon won't have to poll the other OpenStack 
components (e.g. Nova, Glance or Cinder) every second to get the latest 
status. And I'm really happy to see we worked out a basic plan by 
leveraging Zaqar's notifications and websocket.


1. Implement a basic filter for Zaqar subscriptions, so that Zaqar can 
decide whether a message should be posted/forwarded to the subscriber 
when a new message is posted to the queue. With this feature, 
Horizon will only be notified about the notifications it is interested in.


https://blueprints.launchpad.net/zaqar/+spec/suport-filter-for-subscription

2. Listen to OpenStack notifications

We may need more discussion about this to decide whether it should be in 
the scope of Zaqar's services. It could be a separate process/service 
of Zaqar that listens for/collects interesting notifications/messages and 
posts them into particular Zaqar queues. It sounds very interesting and 
useful, but we need to define the scope carefully for sure.



Pool Group and Flavor
-

Thanks to MD MADEEM for proposing this topic so that we had a chance to 
review the design of pool, pool group and flavor. Now the pool group 
and flavor have a 1:1 mapping relationship, and the pool group and pool 
have a 1:n mapping relationship. But end users don't know about the existence 
of pools, so the flavor is the way for the end user to select what kind of 
storage (based on capabilities) they want to use. Since the pool group can't 
provide more information than the flavor, it's not really necessary, so 
we decided to deprecate/remove it in Mitaka. Given this is hidden from 
users (done automatically by Zaqar), there won't be an impact on the 
end user and the API backwards compatibility will be kept.


https://blueprints.launchpad.net/zaqar/+spec/deprecate-pool-group

Zaqar Client


Some function gaps need to be filled in Mitaka. Personally, I would 
rate the client work as the 1st priority of M since it's key for 
the integration with other OpenStack components. For v1.1, the support 
for pool and flavor hasn't been completed. For v2, we're still 
missing the support for subscriptions and the pre-signed URL.


https://blueprints.launchpad.net/zaqar/+spec/finish-client-support-for-v1.1-features

SqlAlchemy Migration
-

Currently we're missing DB migration support for SQLAlchemy, the 
control-plane driver. We will fix that in M as well.


https://blueprints.launchpad.net/zaqar/+spec/sqlalchemy-migration


Guys, please contribute to this thread to fill in the points/things I missed, 
or pop into the #openstack-zaqar channel directly with questions and 
suggestions.

--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email:flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][bugs] Developers Guide: Who's mergingthat?

2015-11-06 Thread Markus Zoeller
Jeremy Stanley  wrote on 11/05/2015 07:11:37 PM:

> From: Jeremy Stanley 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/05/2015 07:17 PM
> Subject: Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging 
that?
> 
> On 2015-11-05 16:23:56 +0100 (+0100), Markus Zoeller wrote:
> > some months ago I wrote down all the things a developer should know
> > about the bug handling process in general [1]. It is written as a
> > project agnostic thing and got some +1s but it isn't merged yet.
> > It would be helpful when I could use it to give this as a pointer
> > to new contributors as I'm under the impression that the mental image
> > differs a lot among the contributors. So, my questions are:
> > 
> > 1) Who's in charge of merging such non-project-specific things?
> [...]
> 
> This is a big part of the problem your addition is facing, in my
> opinion. The OpenStack Infrastructure Manual is an attempt at a
> technical manual for interfacing with the systems written and
> maintained by the OpenStack Project Infrastructure team. It has,
> unfortunately, also grown some sections which contain cultural
> background and related recommendations because until recently there
> was no better venue for those topics, but we're going to be ripping
> those out and proposing them to documents maintained by more
> appropriate teams at the earliest opportunity.

I originally wrote this for the Nova docs but was sent to the
infra-manual as the "project agnostic thing". 

> Bug management falls into a grey area currently, where a lot of the
> information contributors need is cultural background mixed with
> workflow information on using Launchpad (which is not really managed
> by the Infra team). [...]

True, that's what I try to contribute here. I'm aware of the intended
change in our issue tracker and tried to write the text so it needs
only a few changes when this transition is done.
 
> Cultural content about the lifecycle of bugs, standard practices for
> triage, et cetera are likely better suited to the newly created
> Project Team Guide;[...]

The Project Team Guide was news to me; I'm going to have a look to see if
it would fit.
 
> So anyway, to my main point, topics in collaboratively-maintained
> documentation are going to end up being closely tied to the
> expertise of the review team for the document being targeted. In the
> case of the Infra Manual that's the systems administrators who
> configure and maintain our community infrastructure. I won't speak
> for others on the team, but I don't personally feel comfortable
> deciding what details a user should include in a bug report for
> python-novaclient, or how the Cinder team should triage their bug
> reports.
> 
> I expect that the lack of core reviews are due to:
> 
> 1. Few of the core reviewers feel they can accurately judge much of
> the content you've proposed in that change.
> 
> 2. Nobody feels empowered to tell you that this large and
> well-written piece of documentation you've spent a lot of time
> putting together is a poor fit and should be split up and much of it
> put somewhere else more suitable (especially without a suggestion as
> to where that might be).
> 
> 3. The core review team for this is the core review team for all our
> infrastructure systems, and we're all unfortunately very behind in
> handling the current review volume.

Maybe the time has come for me to think about starting a blog...
Thanks Stanley, for your time and feedback.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-06 Thread Doug Hellmann
Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
> On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann  wrote:
> 
> > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > > > Can people help me work through the right set of tools for this use
> > case
> > > > > (has come up from several Operators) and map out a plan to implement
> > it:
> > > > >
> > > > > Large cloud with many users coming from multiple Federation sources
> > has
> > > > > a policy of providing a minimal setup for each user upon first visit
> > to
> > > > > the cloud:  Create a project for the user with a minimal quota, and
> > > > > provide them a role assignment.
> > > > >
> > > > > Here are the gaps, as I see it:
> > > > >
> > > > > 1.  Keystone provides a notification that a user has logged in, but
> > > > > there is nothing capable of executing on this notification at the
> > > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > > >
> > > > > 2.  Keystone does not have a workflow engine, and should not be
> > > > > auto-creating projects.  This is something that should be performed
> > via
> > > > > a Heat template, and Keystone does not know about Heat, nor should
> > it.
> > > > >
> > > > > 3.  The Mapping code is pretty static; it assumes a user entry or a
> > > > > group entry in identity when creating a role assignment, and neither
> > > > > will exist.
> > > > >
> > > > > We can assume a special domain for Federated users to have per-user
> > > > > projects.
> > > > >
> > > > > So; lets assume a Heat Template that does the following:
> > > > >
> > > > > 1. Creates a user in the per-user-projects domain
> > > > > 2. Assigns a role to the Federated user in that project
> > > > > 3. Sets the minimal quota for the user
> > > > > 4. Somehow notifies the user that the project has been set up.
> > > > >
> > > > > This last probably assumes an email address from the Federated
> > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> > authenticated
> > > > > for any projects" error, and is stumped.
> > > > >
> > > > > How is quota assignment done in the other projects now?  What happens
> > > > > when a project is created in Keystone?  Does that information gets
> > > > > transferred to the other services, and, if so, how?  Do most people
> > use
> > > > > a custom provisioning tool for this workflow?
> > > > >
> > > >
> > > > I know at Dreamhost we built some custom integration that was triggered
> > > > when someone turned on the Dreamcompute service in their account in our
> > > > existing user management system. That integration created the account
> > in
> > > > keystone, set up a default network in neutron, etc. I've long thought
> > we
> > > > needed a "new tenant creation" service of some sort, that sits outside
> > > > of our existing services and pokes them to do something when a new
> > > > tenant is established. Using heat as the implementation makes sense,
> > for
> > > > things that heat can control, but we don't want keystone to depend on
> > > > heat and we don't want to bake such a specialized feature into heat
> > > > itself.
> > > >
> > >
> > > I agree, an automation piece that is built-in and easy to add to
> > > OpenStack would be great.
> > >
> > > I do not agree that it should be Heat. Heat is for managing stacks that
> > > live on and change over time and thus need the complexity of the graph
> > > model Heat presents.
> > >
> > > I'd actually say that Mistral or Ansible are better choices for this. A
> > > service which listens to the notification bus and triggered a workflow
> > > defined somewhere in either Ansible playbooks or Mistral's workflow
> > > language would simply run through the "skel" workflow for each user.
> > >
> > > The actual workflow would probably almost always be somewhat site
> > > specific, but it would make sense for Keystone to include a few basic
> > ones
> > > as "contrib" elements. For instance, the "notify the user" piece would
> > > likely be simplest if you just let the workflow tool send an email. But
> > > if your cloud has Zaqar, you may want to use that as well or instead.
> > >
> > > Adding Mistral here to see if they have some thoughts on how this
> > > might work.
> > >
> > > BTW, if this does form into a new project, I suggest naming it
> > > Skeleton[1]
> >
> > Following the pattern of Kite's naming, I think a Dirigible is a
> > better way to get users into the cloud. :-)
> >
> 
> lol +1
> 
> Is this use case specifically for keystone-to-keystone, or for federation
> in general?

The use case I had in mind was actually signing up a new user for
a cloud (at Dreamhost that meant enabling a paid service in their
account in the existing management tool outside of OpenStack). I'm not
sure how it relates to federation, but it seems like that might 

Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-06 Thread Baohua Yang
It does cause confusion to call a container-inside-vm a nested container.

The "nested" term in the container area usually means
container-inside-container.

We may refer to this (container-inside-vm) explicitly as a vm-holding container.

On Fri, Nov 6, 2015 at 12:13 PM, Vikas Choudhary  wrote:

> @Gal, I was asking about "container in nova vm" case.
> Not sure if you were referring to this case as nested containers case. I
> guess nested containers case would be "containers inside containers" and
> this could be hosted on nova vm and nova bm node. Is my understanding
> correct?
>
> Thanks Gal and Toni, for now i got answer to my query related to
> "container in vm" case.
>
> -Vikas
>
> On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:
>
>> The current OVS binding proposals are not for nested containers.
>> I am not sure if you are asking about that case or about the nested
>> containers inside a VM case.
>>
>> For the nested containers, we will use Neutron solutions that support
>> this kind of configuration, for example
>> if you look at OVN you can define "parent" and "sub" ports, so OVN knows
>> to perform the logical pipeline in the compute host
>> and only perform VLAN tagging inside the VM (as Toni mentioned)
>>
>> If you need more clarification you can catch me on IRC as well and we can
>> talk.
>>
>> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I would appreciate inputs on following queries:
>>> 1. Are we assuming nova bm nodes to be docker host for now?
>>>
>>> If Not:
>>>  - Assuming nova vm as docker host and ovs as networking plugin:
>>> This line is from the etherpad[1], "Eachdriver would have
>>> an executable that receives the name of the veth pair that has to be bound
>>> to the overlay" .
>>> Query 1:  As per current ovs binding proposals by Feisky[2]
>>> and Diga[3], vif seems to be binding with br-int on vm. I am unable to
>>> understand how overlay will work. AFAICT , neutron will configure br-tun of
>>> compute machines ovs only. How overlay(br-tun) configuration will happen
>>> inside vm ?
>>>
>>>  Query 2: Are we having double encapsulation(both at vm and
>>> compute)? Is not it possible to bind vif into compute host br-int?
>>>
>>>  Query3: I did not see subnet tags for network plugin being
>>> passed in any of the binding patches[2][3][4]. Dont we need that?
>>>
>>>
>>> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
>>> [2]  https://review.openstack.org/#/c/241558/
>>> [3]  https://review.openstack.org/#/c/232948/1
>>> [4]  https://review.openstack.org/#/c/227972/
>>>
>>>
>>> -Vikas Choudhary
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Evgeniy L
Hi Vladimir,

I cannot say anything about the 1st option, which is to use the official
CentOS scripts, because I'm not familiar with the procedure, but since our
installation is not really CentOS, I have doubts that it's going to work
correctly.

The 2nd option looks less risky. Also, we should decide when to run the
containers upgrade + host upgrade: before or after the new CentOS is
installed? Probably it should be done before we run the backup, in order to
get the latest scripts for the backup/restore actions.

Thanks,

On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
> it includes the following:
>
> * RPM repository (upstream + mos)
> * DEB repository (mos)
> * openstack.yaml
> * version.yaml
> * upgrade script itself (+ virtualenv)
>
> Apart from upgrading docker containers this upgrade script makes copies of
> the RPM/DEB repositories and puts them on the master node naming these
> repository directories depending on what is written in openstack.yaml and
> version.yaml. My plan was something like:
>
> 1) deprecate version.yaml (move all fields from there to various places)
> 2) deliver openstack.yaml with fuel-openstack-metadata package
> 3) do not put new repos on the master node (instead we should use online
> repos or use fuel-createmirror to make local mirrors)
> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>
> Then UX was supposed to be roughly like:
>
> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
> 2) yum install fuel-upgrade
> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
> there should have not be parts coping RPM/DEB repos)
>
> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and it
> is not enough to just do things which we usually did during upgrades. Now
> there are two ways to upgrade:
> 1) to use the official Centos upgrade script for upgrading from 6 to 7
> 2) to backup the master node, then reinstall it from scratch and then
> apply backup
>
> Upgrade team is trying to understand which way is more appropriate.
> Regarding to my tarball related activities, I'd say that this package based
> upgrade approach can be aligned with (1) (fuel-upgrade would use official
> Centos upgrade script as a first step for upgrade), but it definitely can
> not be aligned with (2), because it assumes reinstalling the master node
> from scratch.
>
> Right now, I'm finishing the work around deprecating version.yaml and my
> further steps would be to modify fuel-upgrade script so it does not copy
> RPM/DEB repos, but those steps make little sense taking into account Centos
> 7 feature.
>
> Colleagues, let's make a decision about how we are going to upgrade the
> master node ASAP. Probably my tarball related work should be reduced to
> just throwing tarball away.
>
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-06 Thread Antoni Segura Puimedon
On Fri, Nov 6, 2015 at 1:20 PM, Baohua Yang  wrote:

> It does cause confusing by calling container-inside-vm as nested
> container.
>
> The "nested" term in container area usually means
> container-inside-container.
>

I try to always put it as VM-nested container. But I probably slipped in
some mentions.


> we may refer this (container-inside-vm) explicitly as vm-holding container.
>

container-in-vm?


>
> On Fri, Nov 6, 2015 at 12:13 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> @Gal, I was asking about "container in nova vm" case.
>> Not sure if you were referring to this case as nested containers case. I
>> guess nested containers case would be "containers inside containers" and
>> this could be hosted on nova vm and nova bm node. Is my understanding
>> correct?
>>
>> Thanks Gal and Toni, for now i got answer to my query related to
>> "container in vm" case.
>>
>> -Vikas
>>
>> On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:
>>
>>> The current OVS binding proposals are not for nested containers.
>>> I am not sure if you are asking about that case or about the nested
>>> containers inside a VM case.
>>>
>>> For the nested containers, we will use Neutron solutions that support
>>> this kind of configuration, for example
>>> if you look at OVN you can define "parent" and "sub" ports, so OVN knows
>>> to perform the logical pipeline in the compute host
>>> and only perform VLAN tagging inside the VM (as Toni mentioned)
>>>
>>> If you need more clarification you can catch me on IRC as well and we
>>> can talk.
>>>
>>> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi All,

 I would appreciate inputs on following queries:
 1. Are we assuming nova bm nodes to be docker host for now?

 If Not:
  - Assuming nova vm as docker host and ovs as networking plugin:
 This line is from the etherpad[1], "Eachdriver would have
 an executable that receives the name of the veth pair that has to be bound
 to the overlay" .
 Query 1:  As per current ovs binding proposals by Feisky[2]
 and Diga[3], vif seems to be binding with br-int on vm. I am unable to
 understand how overlay will work. AFAICT , neutron will configure br-tun of
 compute machines ovs only. How overlay(br-tun) configuration will happen
 inside vm ?

  Query 2: Are we having double encapsulation(both at vm and
 compute)? Is not it possible to bind vif into compute host br-int?

  Query3: I did not see subnet tags for network plugin being
 passed in any of the binding patches[2][3][4]. Dont we need that?


 [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
 [2]  https://review.openstack.org/#/c/241558/
 [3]  https://review.openstack.org/#/c/232948/1
 [4]  https://review.openstack.org/#/c/227972/


 -Vikas Choudhary


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Best Regards ,
>>>
>>> The G.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Dan Smith
> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.

+1 for tasty cheese.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Vladimir Kuklin
Just my 2 cents here - let's do a docker backup and roll it up onto a brand new
Fuel 8 node.

On Fri, Nov 6, 2015 at 7:54 PM, Oleg Gelbukh  wrote:

> Matt,
>
> You are talking about this part of Operations guide [1], or you mean
> something else?
>
> If yes, then we still need to extract data from backup containers. I'd
> prefer backup of DB in simple plain text file, since our DBs are not that
> big.
>
> [1]
> https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn 
> wrote:
>
>> Oleg,
>>
>> All the volatile information, including a DB dump, are contained in the
>> small Fuel Master backup. There should be no information lost unless there
>> was manual customization done inside the containers (such as puppet
>> manifest changes). There shouldn't be a need to back up the entire
>> containers.
>>
>> The information we would lose would include the IP configuration
>> interfaces besides the one used for the Fuel PXE network and any custom
>> configuration done on the Fuel Master.
>>
>> I want #1 to work smoothly, but #2 should also be a safe route.
>>
>> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh 
>> wrote:
>>
>>> Evgeniy,
>>>
>>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>>>
 Also we should decide when to run containers
 upgrade + host upgrade? Before or after new CentOS is installed?
 Probably
 it should be done before we run backup, in order to get the latest
 scripts for
 backup/restore actions.

>>>
>>> We're working to determine if we need to backup/upgrade containers at
>>> all. My expectation is that we should be OK with just backup of DB, IP
>>> addresses settings from astute.yaml for the master node, and credentials
>>> from configuration files for the services.
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>>

 Thanks,

 On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> At the moment I'm working on deprecating Fuel upgrade tarball.
> Currently, it includes the following:
>
> * RPM repository (upstream + mos)
> * DEB repository (mos)
> * openstack.yaml
> * version.yaml
> * upgrade script itself (+ virtualenv)
>
> Apart from upgrading docker containers this upgrade script makes
> copies of the RPM/DEB repositories and puts them on the master node naming
> these repository directories depending on what is written in 
> openstack.yaml
> and version.yaml. My plan was something like:
>
> 1) deprecate version.yaml (move all fields from there to various
> places)
> 2) deliver openstack.yaml with fuel-openstack-metadata package
> 3) do not put new repos on the master node (instead we should use
> online repos or use fuel-createmirror to make local mirrors)
> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>
> Then UX was supposed to be roughly like:
>
> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
> 2) yum install fuel-upgrade
> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
> there should have not be parts coping RPM/DEB repos)
>
> However, it turned out that Fuel 8.0 is going to be run on Centos 7
> and it is not enough to just do things which we usually did during
> upgrades. Now there are two ways to upgrade:
> 1) to use the official Centos upgrade script for upgrading from 6 to 7
> 2) to backup the master node, then reinstall it from scratch and then
> apply backup
>
> Upgrade team is trying to understand which way is more appropriate.
> Regarding to my tarball related activities, I'd say that this package 
> based
> upgrade approach can be aligned with (1) (fuel-upgrade would use official
> Centos upgrade script as a first step for upgrade), but it definitely can
> not be aligned with (2), because it assumes reinstalling the master node
> from scratch.
>
> Right now, I'm finishing the work around deprecating version.yaml and
> my further steps would be to modify fuel-upgrade script so it does not 
> copy
> RPM/DEB repos, but those steps make little sense taking into account 
> Centos
> 7 feature.
>
> Colleagues, let's make a decision about how we are going to upgrade
> the master node ASAP. Probably my tarball related work should be reduced 
> to
> just throwing tarball away.
>
>
> Vladimir Kozhukalov
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [All][Glance Mitaka Priorities

2015-11-06 Thread Louis Taylor
On Fri, Nov 06, 2015 at 06:31:23PM +, Bhandaru, Malini K wrote:
> Hello Glance Team/Flavio
> 
> Would you please provide link to Glance priorities at 
> https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Glance
> 
> [ Malini] Regards
> Malini

I don't believe there was an etherpad for this session. We were discussing the
list of priorities for Mitaka located here:


https://specs.openstack.org/openstack/glance-specs/priorities/mitaka-priorities.html

I've updated the wiki page with a link to the spec.

Cheers,
Louis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> > Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > > > Worth mentioning that OpenStack releases that come out at the same time
> > > > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > > > are supported for 5 years by Canonical so are already kind of an LTS.
> > > > Support in this context means patches, updates and commercial support
> > > > (for a fee).
> > > > For paying customers 3 years of patches, updates and commercial support
> > > > for April releases, (Kilo, O, Q etc..) is also available.
> > > 
> > > Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> > > maintaining an older release for so long is a good use of people or CI
> > > resources, especially given how hard it can be for us to keep even
> > > recent stable releases working and maintained.
> > > 
> > 
> > The argument in the original post, I think, is that we should not
> > stand in the way of the vendors continuing to collaborate on stable
> > maintenance in the upstream context after the EOL date. We already have
> > distro vendors doing work in the stable branches, but at EOL we push
> > them off to their respective distro-specific homes.
> > 
> > As much as I'd like everyone to get on the CD train, I think it might
> > make sense to enable the vendors to not diverge, but instead let them
> > show up with people and commitment and say "Hey we're going to keep
> > Juno/Mitaka/etc alive!".
> > 
> > So perhaps what would make sense is defining a process by which they can
> > make that happen.
> 
> Do we need a new process? Aren't the existing stable maintenance
> and infrastructure teams clearly defined?
> 
> We have this discussion whenever a release is about to go EOL, and
> the result is more or less the same each time. The burden of
> maintaining stable branches for longer than we do is currently
> greater than the resources being applied upstream to do that
> maintenance. Until that changes, I don't realistically see us being
> able to increase the community's commitment. That's not a lack of
> willingness, just an assessment of our current resources.

I tend to agree with you. I only bring up a new process because I wonder
if the distro vendors would even be interested in collaborating on this,
or if this is just sort of "what they do" and we should accept that
they're going to do it outside upstream no matter how easy we make it.

If we do believe that, and are OK with that, then we should not extend
EOL's, and we should make sure users understand that when they choose
the source of their OpenStack software.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-06 Thread Joshua Harlow
I just wanted to bring this up in its own thread, as I know it was a 
concern of (some) folks at the DLM session in Tokyo[0], and I'd like to 
try to bust this myth using hopefully objective people/users of 
zookeeper (besides myself and yahoo, the company I work for) so that 
this myth can be put to bed.


Basically here is the TLDR of the question/complaint:

'''
Zookeeper, a java application, will force you to install Oracle's virtual 
machine implementation for it to work, and it doesn't work with the 
openjdk; and if tooz (an oslo library) has a capable driver that uses 
zookeeper internally (via kazoo @ http://kazoo.readthedocs.org) then it 
will force deployers of openstack and its components that use more 
of tooz to install Oracle's virtual machine implementation.


This will not work!!
There is no way I can do that!!
Yell!! Shout!! Cry!!
'''

That's the *gist* of it (with additional dramatization included).
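
(For anyone not familiar with what is actually being objected to, here is a 
minimal sketch of what using the tooz zookeeper driver looks like in practice; 
the connection string, member id and lock name below are placeholders I made 
up for illustration, not anything from a real deployment:)

    # Minimal sketch: tooz with its zookeeper driver (kazoo under the hood).
    # The host/port, member id and lock name are illustrative placeholders.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'my-service-member-1')
    coordinator.start()

    lock = coordinator.get_lock(b'my-critical-section')
    if lock.acquire(blocking=True):
        try:
            pass  # do the work that must not run concurrently
        finally:
            lock.release()

    coordinator.stop()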

So in order to dispel this, I tried in that session to say 'actually I 
have heard nothing saying it doesn't work with openjdk', but the voices 
did not seem to hear that (or they were unable to listen due to their 
emotions running high). Either way I wanted to ensure that people do 
know it does work with the openjdk, and here is a set of testimonials 
from real users of zookeeper + openjdk confirming that it works there:


From Min Pae[1] on the Cue[2] team:

'''
<@sputnik13> harlowja for what it's worth we use zookeeper with openjdk
'''

From Greg Hill[3] who works on the rackspace bigdata[4] team:

'''
 and yes, we run Zookeeper on openjdk, and we haven't 
heard of any problems with it

'''

From Joe Smith[5][6] (who is at twitter, and is the Mesos/Aurora SRE 
Tech Lead there):


'''
 and yep, we (twitter) use zookeeper for service discovery
 someone asked me that question back at mesoscon in seattle, 
fwiw https://youtu.be/nNrh-gdu9m4?t=34m43s

 Yasumoto do u know if u use openjdk or oraclejdk?
 harlowja: yep, openjdk7
 but we're migrating up to 8
'''

From Martijn Verburg, an openjdk developer (and CEO)[7][8], who has 
some insightful info as well:


'''
So OpenJDK and Oracle JDK are almost identical in their make up 
*especially* on the server side. Many, many orgs like Google, Twitter, 
the biggest investment bank in the world, all use OpenJDK as opposed to 
Oracle's JDK.


---

The difference is the quality of the OpenJDK binaries built and released 
by package maintainers.


If you are getting IcedTea from RedHat (their supported OpenJDK binary) 
or Azul's Zulu (Fully supported OpenJDK) then you're *absolutely fine*.


If you're relying on the Debian or Fedora packages then *occasionally* 
those package maintainers don't put out a great binary as they don't run 
the TCK tests (partly because they can't as they are unwilling/unable to 
pay Oracle for that TCK).


Hope that all makes sense...
'''

So I hope the above is enough *proof* that yes, the openjdk is fine. 
There may have been some bugs in the past, but those afaik have all been 
resolved, and there are major contributors stepping up (and continuing to 
step up) to make sure that zookeeper + openjdk continue to work (because 
companies/projects/people like those mentioned above depend on it).


-Josh

[0] https://etherpad.openstack.org/p/mitaka-cross-project-dlm
[1] https://launchpad.net/~sputnik13
[2] https://wiki.openstack.org/wiki/Cue
[3] https://launchpad.net/~greg-hill
[4] http://www.rackspace.com/cloud/big-data
[5] http://www.bjoli.com/
[6] https://github.com/Yasumoto
[7] http://martijnverburg.blogspot.com/
[8] http://www.infoq.com/interviews/verburg-ljc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:

Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:

Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:

Worth mentioning that OpenStack releases that come out at the same time
as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
are supported for 5 years by Canonical so are already kind of an LTS.
Support in this context means patches, updates and commercial support
(for a fee).
For paying customers 3 years of patches, updates and commercial support
for April releases, (Kilo, O, Q etc..) is also available.

Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
maintaining an older release for so long is a good use of people or CI
resources, especially given how hard it can be for us to keep even
recent stable releases working and maintained.


The argument in the original post, I think, is that we should not
stand in the way of the vendors continuing to collaborate on stable
maintenance in the upstream context after the EOL date. We already have
distro vendors doing work in the stable branches, but at EOL we push
them off to their respective distro-specific homes.

As much as I'd like everyone to get on the CD train, I think it might
make sense to enable the vendors to not diverge, but instead let them
show up with people and commitment and say "Hey we're going to keep
Juno/Mitaka/etc alive!".

So perhaps what would make sense is defining a process by which they can
make that happen.

Do we need a new process? Aren't the existing stable maintenance
and infrastructure teams clearly defined?

We have this discussion whenever a release is about to go EOL, and
the result is more or less the same each time. The burden of
maintaining stable branches for longer than we do is currently
greater than the resources being applied upstream to do that
maintenance. Until that changes, I don't realistically see us being
able to increase the community's commitment. That's not a lack of
willingness, just an assessment of our current resources.


I tend to agree with you. I only bring up a new process because I wonder
if the distro vendors would even be interested in collaborating on this,
or if this is just sort of "what they do" and we should accept that
they're going to do it outside upstream no matter how easy we make it.

If we do believe that, and are OK with that, then we should not extend
EOL's, and we should make sure users understand that when they choose
the source of their OpenStack software.


Except for the fact that you are now forcing deployers that may or may 
not be ok with paying for support to now pay for it... What is the 
adoption rate/expected adoption rate of someone transitioning their 
current cloud (which they did not pay support for) to a paid support model?


Does that require them to redeploy/convert their whole cloud using the 
vendor's provided packages/deployment model... If so, jeez, that sounds 
iffy...


And if a large majority of deployers aren't able to do that conversion 
(or aren't willing to pay for support), and those same deployers are 
willing to provide developers/others to ensure the old branches continue 
to work, and they know the issues of CI and are willing to stay 
on top of that (an old-branch dictator/leader may be needed to ensure 
this?), then meh, I think we as a community should just let those 
deployers have at it (ensuring they keep on working on the old branches 
via what the 'old-branch dictator/leader/group' says is broken/needs fixing...)


My 2 cents,

Josh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-06 Thread Fox, Kevin M


> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: Thursday, November 05, 2015 3:19 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager
> discussion @ the summit
> 
> Excerpts from Fox, Kevin M's message of 2015-11-05 13:18:13 -0800:
> > Your assuming there are only 2 choices,  zk or db+rabbit. I'm claiming
> > both hare suboptimal at present. a 3rd might be needed. Though even
> with its flaws, the db+rabbit choice has a few benefits too.
> >
> 
> Well, I'm assuming it is zk/etcd/consul, because while the java argument is
> rather religious, the reality is all three are significantly different from
> databases and message queues and thus will be "snowflakes". But yes, I
> _am_ assuming that Zookeeper is a natural, logical, simple choice, and that
> fact that it runs in a jvm is a poor reason to avoid it.

Yes. Having a snowflake there is probably unavoidable, but how much of one it is matters.

I've had to tune jvm stuff like the java stack size when things spontaneously 
break, and then they tell you, oh, yeah, when that happens, go tweak such and 
such in the jvm... Unix sysadmins usually know the common things for c apps 
without much effort, and tend to know where to look in advance. In my, somewhat 
limited, experience with go, the runtime seems closer to regular unix programs 
than jvm ones.

The term 'java' is often conflated to mean both the java language and the jvm 
runtime. When people talk about java, often they are talking about the jvm. I 
think this is one of those cases. It's easier to debug c/go for unix admins not 
trained specifically in jvm behaviors/tunables.

> 
> > You also seem to assert that to support large clouds, the default must be
> something that can scale that large. While that would be nice, I don't think
> its a requirement if its overly burdensome on deployers of non huge clouds.
> >
> 
> I think the current solution even scales poorly for medium sized clouds.
> Only the tiniest of clouds with the fewest nodes can really sustain all of 
> that
> polling without incurring cost for that overhead that would be better spent
> on serviceing users.

While not ideal, I've run clouds with around 100 nodes on a single controller. 
If it's doable today, it should be doable with the new system. It's not ideal, 
but if it's a zero-effort deploy and easy to debug, that has something going 
for it.

> 
> > I don't have metrics, but I would be surprised if most deployments today
> (production + other) used 3 controllers with a full ha setup. I would guess
> that the majority are single controller setups. With those, the overhead of
> maintaining a whole dlm like zk seems like overkill. If db+rabbit would work
> for that one case, that would be one less thing to have to setup for an op.
> They already have to setup db+rabbit. Or even a clm plugin of some sort,
> that won't scale, but would be very easy to deploy, and change out later
> when needed would be very useful.
> >
> 
> We do have metrics:
> 
> http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
> 
> Page 35, "How many physical compute nodes do OpenStack clouds have?"
> 

Not what I was asking. I was asking how many controllers, not how many compute 
nodes. Like I said above, 1 controller can handle quite a few compute nodes.

> 
> 10-99:42%
> 1-9:  36%
> 100-999:  15%
> 1000-: 7%
> 
> So for respondents to that survey, yes, "most" are running less than 100
> nodes. However, by compute node count, if we extrapolate a bit:
> 
> There were 154 respondents so:
> 
> 10-99 * 42% =640 - 6403 nodes
> 1-9 * 36% =  55 - 498 nodes
> 100-999 * 15% =  2300 - 23076 nodes
> 1000- * 7% = 1 - 107789 nodes
>

This is good, but I believe this is biased towards the top end.

Respondents are much more likely to respond if they have a larger cloud to brag 
about. Folks doing it for development, testing, and other reasons may not 
respond because it's not worth the effort.

> So in terms of the number of actual computers running OpenStack compute,
> as an example, from the survey respondents, there are more computes
> running in *one* of the clouds with more than 1000 nodes than there are in
> *all* of the clouds with less than 10 nodes, and certainly more in all of the
> clouds over 1000 nodes, than in all of the clouds with less than 100 nodes.

For the reason listed above, I don't think we have enough evidence to draw too 
strong a conclusion from this.

> 
> What this means, to me, is that the investment in OpenStack should focus
> on those with > 1000, since those orgs are definitely investing a lot more
> today. We shouldn't make it _hard_ to do a tiny cloud, but I think it's ok to
> make the tiny cloud less efficient if it means we can grow it into a monster
> cloud at any point and we continue to garner support from orgs who need to
> build large scale clouds.

Yeah, I'd say, we for sure need a solution for 1000+.

We also need a really 

Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-11-06 10:50:23 -0800:
> Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> > Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> > > Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > > > > Worth mentioning that OpenStack releases that come out at the same 
> > > > > time
> > > > > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + 
> > > > > Mitaka)
> > > > > are supported for 5 years by Canonical so are already kind of an LTS.
> > > > > Support in this context means patches, updates and commercial support
> > > > > (for a fee).
> > > > > For paying customers 3 years of patches, updates and commercial 
> > > > > support
> > > > > for April releases, (Kilo, O, Q etc..) is also available.
> > > > 
> > > > Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> > > > maintaining an older release for so long is a good use of people or CI
> > > > resources, especially given how hard it can be for us to keep even
> > > > recent stable releases working and maintained.
> > > > 
> > > 
> > > The argument in the original post, I think, is that we should not
> > > stand in the way of the vendors continuing to collaborate on stable
> > > maintenance in the upstream context after the EOL date. We already have
> > > distro vendors doing work in the stable branches, but at EOL we push
> > > them off to their respective distro-specific homes.
> > > 
> > > As much as I'd like everyone to get on the CD train, I think it might
> > > make sense to enable the vendors to not diverge, but instead let them
> > > show up with people and commitment and say "Hey we're going to keep
> > > Juno/Mitaka/etc alive!".
> > > 
> > > So perhaps what would make sense is defining a process by which they can
> > > make that happen.
> > 
> > Do we need a new process? Aren't the existing stable maintenance
> > and infrastructure teams clearly defined?
> > 
> > We have this discussion whenever a release is about to go EOL, and
> > the result is more or less the same each time. The burden of
> > maintaining stable branches for longer than we do is currently
> > greater than the resources being applied upstream to do that
> > maintenance. Until that changes, I don't realistically see us being
> > able to increase the community's commitment. That's not a lack of
> > willingness, just an assessment of our current resources.
> 
> I tend to agree with you. I only bring up a new process because I wonder
> if the distro vendors would even be interested in collaborating on this,
> or if this is just sort of "what they do" and we should accept that
> they're going to do it outside upstream no matter how easy we make it.
> 
> If we do believe that, and are OK with that, then we should not extend
> EOL's, and we should make sure users understand that when they choose
> the source of their OpenStack software.

OK, sure. If we can improve the process, then we should discuss that. We
did accommodate distro requests to continue tagging stable releases for
Liberty, but I'm not sure that compromise was made as the result of
promises of more resources.

Thierry did bring up the idea that the stable maintenance team should
stand alone, rather than being part of the release management team. That
would give the team its own PTL, and give them more autonomy about
deciding stable processes. I support the idea, but no one has come
forward and offered to drive it, yet.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-11-06 11:11:02 -0800:
> Clint Byrum wrote:
> > Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> >> Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> >>> Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > Worth mentioning that OpenStack releases that come out at the same time
> > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > are supported for 5 years by Canonical so are already kind of an LTS.
> > Support in this context means patches, updates and commercial support
> > (for a fee).
> > For paying customers 3 years of patches, updates and commercial support
> > for April releases, (Kilo, O, Q etc..) is also available.
>  Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
>  maintaining an older release for so long is a good use of people or CI
>  resources, especially given how hard it can be for us to keep even
>  recent stable releases working and maintained.
> 
> >>> The argument in the original post, I think, is that we should not
> >>> stand in the way of the vendors continuing to collaborate on stable
> >>> maintenance in the upstream context after the EOL date. We already have
> >>> distro vendors doing work in the stable branches, but at EOL we push
> >>> them off to their respective distro-specific homes.
> >>>
> >>> As much as I'd like everyone to get on the CD train, I think it might
> >>> make sense to enable the vendors to not diverge, but instead let them
> >>> show up with people and commitment and say "Hey we're going to keep
> >>> Juno/Mitaka/etc alive!".
> >>>
> >>> So perhaps what would make sense is defining a process by which they can
> >>> make that happen.
> >> Do we need a new process? Aren't the existing stable maintenance
> >> and infrastructure teams clearly defined?
> >>
> >> We have this discussion whenever a release is about to go EOL, and
> >> the result is more or less the same each time. The burden of
> >> maintaining stable branches for longer than we do is currently
> >> greater than the resources being applied upstream to do that
> >> maintenance. Until that changes, I don't realistically see us being
> >> able to increase the community's commitment. That's not a lack of
> >> willingness, just an assessment of our current resources.
> >
> > I tend to agree with you. I only bring up a new process because I wonder
> > if the distro vendors would even be interested in collaborating on this,
> > or if this is just sort of "what they do" and we should accept that
> > they're going to do it outside upstream no matter how easy we make it.
> >
> > If we do believe that, and are OK with that, then we should not extend
> > EOL's, and we should make sure users understand that when they choose
> > the source of their OpenStack software.
> 
> Except for the fact that you are now forcing deployers that may or may 
> not be ok with paying for paid support to now pay for it... What is the 
> adoption rate/expected adoption rate of someone transitioning there 
> current cloud (which they did not pay support for) to a paid support model?
> 
> Does that require them to redeploy/convert there whole cloud using 
> vendors provided packages/deployment model... If so, jeez, that sounds 
> iffy...
> 
> And if a large majority of deployers aren't able to do that conversion 
> (or aren't willing to pay for support) and those same deployers are 
> willing to provide developers/others to ensure the old branches continue 
> to work and they know the issues of CI and they are willing to stay 
> on-top of that (a old-branch-dictator/leader may be needed to ensure 
> this?) then meh, I think we as a community should just let those 
> deployers have at it (ensuring they keep on working on the old branches 
> via what 'old-branch-dictator/leader/group' says is broken/needs fixing...)

Right. Where I think this leads, though, is that those who have
developers converge on CD, and those who have no developers have to pay
for support anyway. Running without developers and without a support
entity that can actually fix things is an interesting combination, and
I'd be very curious to hear if there are any deployers having a positive
experience working that way.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread John Garbutt
On 6 November 2015 at 14:09, Sean Dague  wrote:
> On 11/06/2015 08:44 AM, Alex Xu wrote:
>>
>>
>> 2015-11-06 20:59 GMT+08:00 Sean Dague > >:
>>
>> On 11/06/2015 07:28 AM, John Garbutt wrote:
>> > On 6 November 2015 at 12:09, Sean Dague > > wrote:
>> >> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
>> >>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>>  Hello all,
>>  I came across [1] which is notionally an ironic bug in that
>> horizon presents
>>  VM operations (like suspend) to users.  Clearly these options
>> don't make sense
>>  to ironic which can be confusing.
>> 
>>  There is a horizon fix that just disables migrate/suspened and
>> other functaions
>>  if the operator sets a flag say ironic is present.  Clealy this
>> is sub optimal
>>  for a mixed hv environment.
>> 
>>  The data needed (hpervisor type) is currently avilable only to
>> admins, a quick
>>  hack to remove this policy restriction is functional.
>> 
>>  There are a few ways to solve this.
>> 
>>   1. Change the default from "rule:admin_api" to "" (for
>>  os_compute_api:os-extended-server-attributes and
>>  os_compute_api:os-hypervisors), and set a list of values we're
>>  comfortbale exposing the user (hypervisor_type and
>>  hypervisor_hostname).  So a user can get the
>> hypervisor_name as part of
>>  the instance deatils and get the hypervisor_type from the
>>  os-hypervisors.  This would work for horizon but increases
>> the API load
>>  on nova and kinda implies that horizon would have to cache
>> the data and
>>  open-code assumptions that hypervisor_type can/can't do
>> action $x
>> 
>>   2. Include the hypervisor_type with the instance data.  This
>> would place the
>>  burdon on nova.  It makes the looking up instance details
>> slightly more
>>  complex but doesn't result in additional API queries, nor
>> caching
>>  overhead in horizon.  This has the same opencoding issues
>> as Option 1.
>> 
>>   3. Define a service user and have horizon look up the
>> hypervisors details via
>>  that role.  Has all the drawbacks as option 1 and I'm
>> struggling to
>>  think of many benefits.
>> 
>>   4. Create a capabilitioes API of some description, that can be
>> queried so that
>>  consumers (horizon) can known
>> 
>>   5. Some other way for users to know what kind of hypervisor
>> they're on, Perhaps
>>  there is an established image property that would work here?
>> 
>>  If we're okay with exposing the hypervisor_type to users, then
>> #2 is pretty
>>  quick and easy, and could be done in Mitaka.  Option 4 is
>> probably the best
>>  long term solution but I think is best done in 'N' as it needs
>> lots of
>>  discussion.
>> >>>
>> >>> I think that exposing hypervisor_type is very much the *wrong*
>> approach
>> >>> to this problem. The set of allowed actions varies based on much
>> more than
>> >>> just the hypervisor_type. The hypervisor version may affect it,
>> as may
>> >>> the hypervisor architecture, and even the version of Nova. If
>> horizon
>> >>> restricted its actions based on hypevisor_type alone, then it is
>> going
>> >>> to inevitably prevent the user from performing otherwise valid
>> actions
>> >>> in a number of scenarios.
>> >>>
>> >>> IMHO, a capabilities based approach is the only viable solution to
>> >>> this kind of problem.
>> >>
>> >> Right, we just had a super long conversation about this in
>> #openstack-qa
>> >> yesterday with mordred, jroll, and deva around what it's going to
>> take
>> >> to get upgrade tests passing with ironic.
>> >>
>> >> Capabilities is the right approach, because it means we're future
>> >> proofing our interface by telling users what they can do, not some
>> >> arbitrary string that they need to cary around a separate library to
>> >> figure those things out.
>> >>
>> >> It seems like capabilities need to exist on flavor, and by proxy
>> instance.
>> >>
>> >> GET /flavors/bm.large/capabilities
>> >>
>> >> {
>> >>  "actions": {
>> >>  'pause': False,
>> >>  'unpause': False,
>> >>  'rebuild': True
>> >>  ..
>> >>   }
>> >>
>>
>>
>> Does this need admin to set the capabilities? If yes, that looks like
>> pain to admin to set capabilities for all the 

Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Anne Gentle
On Thu, Nov 5, 2015 at 9:31 PM, Alex Xu  wrote:

> Hi, folks
>
> Nova API sub-team is working on the swagger generation. And there is PoC
> https://review.openstack.org/233446
>
> But before we are going to next step, I really hope we can get agreement
> with how to support Microversions and Actions. The PoC have demo about
> Microversions. It generates min version action as swagger spec standard,
> for the other version actions, it named as extended attribute, like:
>
> {
> '/os-keypairs': {
> "get": {
> 'x-start-version': '2.1',
> 'x-end-version': '2.1',
> 'description': '',
>
> },
> "x-get-2.2-2.9": {
> 'x-start-version': '2.2',
> 'x-end-version': '2.9',
> 'description': '',
> .
> }
> }
> }
>
> x-start-version and x-end-version are the metadata for Microversions,
> which should be used by UI code to parse.
>

The swagger.io editor will not necessarily recognize extended attributes
(x- are extended attributes), right? I don't think we intend for these
files to be hand-edited once they are generated, though, so I consider it a
non-issue that the editor can't edit microversioned source.
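
As a rough illustration (this is just a sketch, not anything from fairy-slipper
or the PoC), a doc UI or any other consumer could select the right operation for
a requested microversion by reading those x- attributes with plain dict handling:

    # Sketch only: pick the operation covering a requested microversion,
    # using the x-start-version / x-end-version metadata from the PoC output.
    def _to_tuple(version):
        return tuple(int(part) for part in version.split('.'))

    def pick_operation(path_item, method, requested):
        req = _to_tuple(requested)
        for name, operation in path_item.items():
            if name != method and not name.startswith('x-%s-' % method):
                continue
            start = _to_tuple(operation['x-start-version'])
            end = _to_tuple(operation['x-end-version'])
            if start <= req <= end:
                return operation
        return None

    # e.g. pick_operation(spec['/os-keypairs'], 'get', '2.3') would return
    # the "x-get-2.2-2.9" entry from the example above.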


>
> This is just based on my initial thought, and there is another thought is
> generating a set full swagger specs for each Microversion. But I think how
> to show Microversions and Actions should be depended how the doc UI to
> parse that also.
>
> As there is doc project to turn swagger to UI:
> https://github.com/russell/fairy-slipper  But it didn't support
> Microversions. So hope doc team can work with us and help us to find out
> format to support Microversions and Actions which good for UI parse and
> swagger generation.
>

Last release was a proof of concept for being able to generate Swagger.
Next we'll bring fairy-slipper into OpenStack and work with the API working
group and the Nova API team to enhance it.

This release we can further enhance with microversions. Nothing's
preventing that to my knowledge, other than Russell needs more input to
make the output what we want. This email is a good start.

I'm pretty sure microversions are hard to document no matter what we do, so
we just need to pick a way and move forward. Here's what is in the spec:
for microversions, we'll need at least 2 copies of the previous reference
info (enable a dropdown for the user to choose a prior version or one that
matches theirs), and we need to keep deprecated options. An example of version
comparisons: https://libgit2.github.com/libgit2/#HEAD

Let's discuss weekly at both the Nova API meeting and the API Working group
meeting to refine the design. I'm back next week and plan to update the
spec.
Anne



>
> Any thoughts folks?
>
> Thanks
> Alex
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Hi

We should think about separating packages for master node and openstack. I
> guess we should use 2 repository:
> 1. MOS - repository for OpenStack related nodes
> 2. MasterNode - repository for packages that are used for master node only.
>
>
At the moment, this is pretty simple as we only support Ubuntu as target
node system as of 7.0 and 8.0, and our Master node runs on CentOS. Thus,
our CentOS repo is for Fuel node, and Ubuntu repo is for OpenStack.


> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and it
>> is not enough to just do things which we usually did during upgrades. Now
>> there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>
> +1 for 2. We cannot guarantee that #1 will work smoothly. Also, there is
> some technical dept we cannot solve with #1 (i.e. - Docker device mapper).
> Also, the customer might have environments running on CentOS 6 so
> supporting all scenarios is quite hard. IF we do this we can redesign
> docker related part so we'll have huge profit later on.
>
>
In the Upgrade team, we researched these 2 options. Option #1 allows us to keep
the procedure close to what we had in previous versions, but it won't be
automatic, as there are too many changes in our flavor of CentOS 6.6. Option
#2, on the other hand, will require developing essentially a new workflow:
1. backup the DB and settings,
2. prepare custom config for bootstrap_master_node script (to retain IP
addressing),
3. reinstall Fuel node with 8.0,
4. upload and upgrade DB,
5. restore keystone/db credentials

This sequence of steps is high level, of course, and might change during
development. An additional benefit is that the backup/restore parts of it could
be used separately to create backups of the Fuel node.
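
To make step 1 a bit more concrete, what we have in mind is roughly the sketch
below; the paths, database name and archive layout are illustrative assumptions
only, not the final tooling:

    # Rough sketch of step 1 (backup the DB and settings) -- assumptions only.
    # /etc/fuel/astute.yaml, the 'nailgun' database and the backup location
    # are placeholders; the real implementation will live in the upgrade tools.
    import os
    import shutil
    import subprocess
    import tarfile

    BACKUP_DIR = '/var/backup/fuel'        # assumed location
    SETTINGS = '/etc/fuel/astute.yaml'     # assumed master node settings path

    def backup_master_node():
        if not os.path.isdir(BACKUP_DIR):
            os.makedirs(BACKUP_DIR)
        # Dump the nailgun DB (assumes pg_dump can reach it locally).
        with open(os.path.join(BACKUP_DIR, 'nailgun.sql'), 'wb') as dump:
            subprocess.check_call(['pg_dump', '-U', 'nailgun', 'nailgun'],
                                  stdout=dump)
        shutil.copy(SETTINGS, BACKUP_DIR)
        with tarfile.open(BACKUP_DIR + '.tar.gz', 'w:gz') as tar:
            tar.add(BACKUP_DIR, arcname='fuel-backup')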

Our current plan is to pursue option #2 in the following 3 weeks. I will
keep this list updated on our progress as soon as we have any.

--
Best regards,
Oleg Gelbukh


> A a company we will help the clients who might want to upgrade from
> 5.1-7.0 to 8.0, but that will include analysing environment/plugins and
> making personal scenario for upgrade. It might be 'fuel-octane' to migrate
> workload to a new cloud or some script/documentation to perform upgrade.
>
>
>>
>> Upgrade team is trying to understand which way is more appropriate.
>> Regarding to my tarball related activities, I'd say that this package based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense taking into account Centos
>> 7 feature.
>>
>
> +1.
>
>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>
> +2. That will allow us to:
> 1. Reduce ISO size
> 2. Increase ISO compilation by including -j8
> 3. Speed up CI
>
>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Understanding stable/branch process for Neutron subprojects

2015-11-06 Thread Neil Jerram
On 06/11/15 13:46, Ihar Hrachyshka wrote:
> Neil Jerram  wrote:
>
>> Prompted by the thread about maybe allowing subproject teams to do their
>> own stable maint, I have some questions about what I should be doing in
>> networking-calico; and I guess the answers may apply generally to
>> subprojects.
>>
>> Let's start from the text at
>> http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html:
>>
>>> Stable branches for libraries should be created at the same time when
>> "libraries"?  Should that say "subprojects”?
> Yes. Please send a patch to fix wording.

https://review.openstack.org/#/c/242506/

> I think I understand the point here.  However, networking-calico doesn't
> yet have a stable/liberty branch, and in practice its master branch
> currently targets Neutron stable/liberty code.  (For example, its
> DevStack setup instructions say "git checkout stable/liberty".)
>
> Well that’s unfortunate. You should allow devstack to check out the needed  
> branch for neutron instead of overwriting its choice.

I'm afraid I don't understand; could you explain further? Here's what
the setup instructions [1] currently say:

  # Clone the DevStack repository.
  git clone https://git.openstack.org/openstack-dev/devstack

  # Use the stable/liberty branch.
  cd devstack
  git checkout stable/liberty

What should they say instead?

[1]
https://git.openstack.org/cgit/openstack/networking-calico/tree/devstack/bootstrap.sh

>
>> To get networking-calico into a correct state per the above guideline, I
>> think I'd need/want to
>>
>> - create a stable/liberty branch (from the current master, as there is
>> nothing in master that actually depends on Neutron changes since
>> stable/liberty)
>>
>> - continue developing useful enhancements on the stable/liberty branch -
>> because my primary target for now is the released Liberty - and then
>> merge those to master
>>
> Once spinned out, stable branches should receive bug fixes only. No new  
> features, db migrations, configuration changes are allowed in stable  
> branches.
>
>> - eventually, develop on the master branch also, to take advantage of
>> and keep current with changes in Neutron master.
>>
> All new features must go to master only. Your master should always be  
> tested and work with neutron master (meaning, your master should target  
> Mitaka, not Liberty).
>
>> But is that compatible with the permitted stable branch process?  It
>> sounds like the permitted process requires me to develop everything on
>> master first, then (ask to) cherry-pick specific changes to the stable
>> branch - which isn't actually natural for the current situation (or
>> targeting Liberty releases).
>>
> Yes, that’s what current stable branch process implies. All stadium  
> projects must follow the same stable branch process.
>
> Now, you may also not see any value in supporting Liberty, then you can  
> avoid creating a branch for it; but it seems it’s not the case here.
>
> All that said, we already have stadium projects that violate the usual  
> process for master (f.e. GBP project targets its master development to kilo  
> - sic!) I believe that’s something to clear up as part of discussion of  
> what it really means to be a stadium project. I believe following general  
> workflow that is common to the project as a whole is one of the  
> requirements that we should impose.

Thanks for these clear answers.  I'll work towards getting all this correct.

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]

2015-11-06 Thread Everett Toews
On Nov 6, 2015, at 6:30 AM, John Garbutt 
> wrote:

On 6 November 2015 at 12:11, Sean Dague > 
wrote:
On 11/06/2015 04:13 AM, Salvatore Orlando wrote:
It makes sense to have a single point were response pagination is made
in API processing, rather than scattering pagination across Nova REST
controllers; unfortunately if I am not really able to comment how
feasible that would be in Nova's WSGI framework.

However, I'd just like to add that there is an approved guideline for
API response pagination [1], and if would be good if all these effort
follow the guideline.

Salvatore

[1] 
https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html

The pagination part is just a TODO in there.

Ideally, I would like us to fill out that pagination part first.

If we can't get global agreement quickly, we should at least get a
Nova API wide standard pattern.

Am I missing something here?

When I sent my initial reply to this thread, I Cc'd the author of the 
pagination guideline at wu...@unitedstack.com. 
However, I got a bounce message so it's a bit unclear if wuhao is still working 
on this. If someone knows this person, can you please highlight this thread?

If we don't hear a response on this thread or the review, we can move forward 
another way.

Everett
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Evgeniy,

On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:

> Also we should decide when to run containers
> upgrade + host upgrade? Before or after new CentOS is installed? Probably
> it should be done before we run backup, in order to get the latest scripts
> for
> backup/restore actions.
>

We're working to determine if we need to backup/upgrade containers at all.
My expectation is that we should be OK with just a backup of the DB, the IP
address settings from astute.yaml for the master node, and the credentials from
the services' configuration files.

--
Best regards,
Oleg Gelbukh


>
> Thanks,
>
> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
>> it includes the following:
>>
>> * RPM repository (upstream + mos)
>> * DEB repository (mos)
>> * openstack.yaml
>> * version.yaml
>> * upgrade script itself (+ virtualenv)
>>
>> Apart from upgrading docker containers this upgrade script makes copies
>> of the RPM/DEB repositories and puts them on the master node naming these
>> repository directories depending on what is written in openstack.yaml and
>> version.yaml. My plan was something like:
>>
>> 1) deprecate version.yaml (move all fields from there to various places)
>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>> 3) do not put new repos on the master node (instead we should use online
>> repos or use fuel-createmirror to make local mirrors)
>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>
>> Then UX was supposed to be roughly like:
>>
>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>> 2) yum install fuel-upgrade
>> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
>> there should have not be parts coping RPM/DEB repos)
>>
>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>> it is not enough to just do things which we usually did during upgrades.
>> Now there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>> Upgrade team is trying to understand which way is more appropriate.
>> Regarding to my tarball related activities, I'd say that this package based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense taking into account Centos
>> 7 feature.
>>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] BIOS Configuration

2015-11-06 Thread Serge Kovaleff
Mea culpa. It was suggested that the new REST API entry will be added to the
Ironic API and NOT to IPA (Ironic Python Agent), which I had misunderstood from
the beginning.

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70

On Fri, Nov 6, 2015 at 1:33 PM, Serge Kovaleff 
wrote:

> Hi Lucas,
>
> 
> I meant if it's possible to access/update BIOS configuration without any
> agent.
> Something similar to remote execution engine via Ansible.
> I am inspired by agent-less "Ansible-deploy-driver"
> https://review.openstack.org/#/c/241946/
>
> There is definitely benefits of using the agent e.g. Heartbeats.
> Nevertheless, the idea of minimal agent-less environment is quite
> appealing for me.
>
> Cheers,
> Serge Kovaleff
>
>
> On Fri, Oct 23, 2015 at 4:58 PM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
>
>> Hi,
>>
>> > I am interested in remote BIOS configuration.
>> > There is "New driver interface for BIOS configuration specification"
>> > https://review.openstack.org/#/c/209612/
>> >
>> > Is it possible to implement this without REST API endpoint?
>> >
>>
>> I may be missing something here but without the API how will the user
>> set the configurations? We need the ReST API so we can abstract the
>> interface for this for all the different drivers in Ironic.
>>
>> Also, feel free to add suggestions in the spec patch itself.
>>
>> Cheers,
>> Lucas
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Alex Xu
2015-11-06 22:22 GMT+08:00 Anne Gentle :

>
>
> On Thu, Nov 5, 2015 at 9:31 PM, Alex Xu  wrote:
>
>> Hi, folks
>>
>> Nova API sub-team is working on the swagger generation. And there is PoC
>> https://review.openstack.org/233446
>>
>> But before we are going to next step, I really hope we can get agreement
>> with how to support Microversions and Actions. The PoC have demo about
>> Microversions. It generates min version action as swagger spec standard,
>> for the other version actions, it named as extended attribute, like:
>>
>> {
>> '/os-keypairs': {
>> "get": {
>> 'x-start-version': '2.1',
>> 'x-end-version': '2.1',
>> 'description': '',
>>
>> },
>> "x-get-2.2-2.9": {
>> 'x-start-version': '2.2',
>> 'x-end-version': '2.9',
>> 'description': '',
>> .
>> }
>> }
>> }
>>
>> x-start-version and x-end-version are the metadata for Microversions,
>> which should be used by UI code to parse.
>>
>
> The swagger.io editor will not necessarily recognize extended attributes
> (x- are extended attributes), right? I don't think we intend for these
> files to be hand-edited once they are generated, though, so I consider it a
> non-issue that the editor can't edit microversioned source.
>
>

Yes, right. The editor can just ignore the extended attributes. My point is that
if we need anything beyond the standard Swagger spec to support Microversions
and Actions, we should extend it in a way the Swagger spec itself supports.
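
Just to make the parsing side concrete, here is a rough sketch (not fairy-slipper
code, and the helper name is made up) of how a doc UI could use the
x-start-version / x-end-version attributes to show only the operations visible
at a requested microversion:

    def operations_for_version(path_item, requested):
        """Return {method: spec} for the operations visible at 'requested'."""
        def as_tuple(version):
            return tuple(int(part) for part in version.split('.'))

        visible = {}
        for key, spec in path_item.items():
            if not isinstance(spec, dict) or 'x-start-version' not in spec:
                continue
            if (as_tuple(spec['x-start-version']) <= as_tuple(requested)
                    <= as_tuple(spec['x-end-version'])):
                # "x-get-2.2-2.9" -> "get"; the plain "get" key stays as is.
                method = key.split('-')[1] if key.startswith('x-') else key
                visible[method] = spec
        return visible

    keypairs_path = {
        "get": {"x-start-version": "2.1", "x-end-version": "2.1",
                "description": ""},
        "x-get-2.2-2.9": {"x-start-version": "2.2", "x-end-version": "2.9",
                          "description": ""},
    }
    print(operations_for_version(keypairs_path, "2.4"))  # -> the 2.2-2.9 GET only

The same grouping could of course be done once per microversion to generate a
full doc tree per version instead, if that turns out easier for the UI.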


>
>> This is just based on my initial thought, and there is another thought is
>> generating a set full swagger specs for each Microversion. But I think how
>> to show Microversions and Actions should be depended how the doc UI to
>> parse that also.
>>
>> As there is doc project to turn swagger to UI:
>> https://github.com/russell/fairy-slipper  But it didn't support
>> Microversions. So hope doc team can work with us and help us to find out
>> format to support Microversions and Actions which good for UI parse and
>> swagger generation.
>>
>
> Last release was a proof of concept for being able to generate Swagger.
> Next we'll bring fairy-slipper into OpenStack and work with the API working
> group and the Nova API team to enhance it.
>
> This release we can further enhance with microversions. Nothing's
> preventing that to my knowledge, other than Russell needs more input to
> make the output what we want. This email is a good start.
>

Yeah, I'd really appreciate it if Russell can give some input, as he works on
fairy-slipper.


>
> I'm pretty sure microversions are hard to document no matter what we do so
> we just need to pick a way and move forward. Here's what is in the spec:
> For microversions, we'll need at least 2 copies of the previous reference
> info (enable a dropdown for the user to choose a prior version or one that
> matches theirs) Need to keep deprecated options.  An example of version
> comparisons https://libgit2.github.com/libgit2/#HEAD
>
> Let's discuss weekly at both the Nova API meeting and the API Working
> group meeting to refine the design. I'm back next week and plan to update
> the spec.
>

Yeah, let's talk more at the next meeting, thanks!


> Anne
>
>
>
>>
>> Any thoughts folks?
>>
>> Thanks
>> Alex
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
On Fri, Nov 6, 2015 at 3:32 PM, Alexander Kostrikov  wrote:

> Hi, Vladimir!
> I think that option (2) 'to backup the master node, then reinstall it
> from scratch and then apply backup' is a better way for upgrade.
> In that way we are concentrating on two problems in one feature:
> backups and upgrades.
>
That will ease development, testing and also reduce feature creep.
>

Alexander, +1 on this.

--
Best regards,
Oleg Gelbukh

>
> P.S.
> It is hard to refer to (2) because you have three (2)s.
>
> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
>> it includes the following:
>>
>> * RPM repository (upstream + mos)
>> * DEB repository (mos)
>> * openstack.yaml
>> * version.yaml
>> * upgrade script itself (+ virtualenv)
>>
>> Apart from upgrading docker containers this upgrade script makes copies
>> of the RPM/DEB repositories and puts them on the master node naming these
>> repository directories depending on what is written in openstack.yaml and
>> version.yaml. My plan was something like:
>>
>> 1) deprecate version.yaml (move all fields from there to various places)
>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>> 3) do not put new repos on the master node (instead we should use online
>> repos or use fuel-createmirror to make local mirrors)
>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>
>> Then UX was supposed to be roughly like:
>>
>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>> 2) yum install fuel-upgrade
>> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
>> there should no longer be parts copying RPM/DEB repos)
>>
>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>> it is not enough to just do things which we usually did during upgrades.
>> Now there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>> Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense taking into account Centos
>> 7 feature.
>>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostri...@mirantis.com 
>
> *www.mirantis.com *
> *www.mirantis.ru *
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Alex Xu
2015-11-06 20:46 GMT+08:00 John Garbutt :

> On 6 November 2015 at 03:31, Alex Xu  wrote:
> > Hi, folks
> >
> > Nova API sub-team is working on the swagger generation. And there is PoC
> > https://review.openstack.org/233446
> >
> > But before we are going to next step, I really hope we can get agreement
> > with how to support Microversions and Actions. The PoC have demo about
> > Microversions. It generates min version action as swagger spec standard,
> for
> > the other version actions, it named as extended attribute, like:
> >
> > {
> > '/os-keypairs': {
> > "get": {
> > 'x-start-version': '2.1',
> > 'x-end-version': '2.1',
> > 'description': '',
> >
> > },
> > "x-get-2.2-2.9": {
> > 'x-start-version': '2.2',
> > 'x-end-version': '2.9',
> > 'description': '',
> > .
> > }
> > }
> > }
> >
> > x-start-version and x-end-version are the metadata for Microversions,
> which
> > should be used by UI code to parse.
> >
> > This is just based on my initial thought, and there is another thought is
> > generating a set full swagger specs for each Microversion. But I think
> how
> > to show Microversions and Actions should be depended how the doc UI to
> parse
> > that also.
> >
> > As there is doc project to turn swagger to UI:
> > https://github.com/russell/fairy-slipper  But it didn't support
> > Microversions. So hope doc team can work with us and help us to find out
> > format to support Microversions and Actions which good for UI parse and
> > swagger generation.
> >
> > Any thoughts folks?
>
> I can't find the URL to the example, but I thought the plan was each
> microversion generates a full doc tree.
>

Yeah, we said that in the Nova API meeting, and this is the example of what we
expect the UI to look like: https://libgit2.github.com/libgit2/#HEAD

I just want to confirm with the doc team and Russell that this is good for them
for the implementation of fairy-slipper.


>
> It also notes the changes between the versions, so you look at the
> latest version, you can tell between which versions the API was
> modified.
>
> I remember annegentle had a great example of this style, will try ping
> here about that next week.
>


Yeah, let's talk about it in the meeting.


>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread John Garbutt
On 6 November 2015 at 14:22, Anne Gentle  wrote:
> I'm pretty sure microversions are hard to document no matter what we do so
> we just need to pick a way and move forward.
> Here's what is in the spec:
> For microversions, we'll need at least 2 copies of the previous reference
> info (enable a dropdown for the user to choose a prior version or one that
> matches theirs)

+1

> Need to keep deprecated options.

That's not really a thing in microversion land.
Things are present or deleted in a particular version.
That should be simpler.

> An example of version
> comparisons https://libgit2.github.com/libgit2/#HEAD

:)
That's the example I couldn't find.
I feel that maps (almost) perfectly to microversions.
I could be missing something obvious though.

> Let's discuss weekly at both the Nova API meeting and the API Working group
> meeting to refine the design. I'm back next week and plan to update the
> spec.

+1

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Matthew Mosesohn
Oleg,

All the volatile information, including a DB dump, is contained in the
small Fuel Master backup. There should be no information lost unless there
was manual customization done inside the containers (such as puppet
manifest changes). There shouldn't be a need to back up the entire
containers.

The information we would lose would include the IP configuration of interfaces
other than the one used for the Fuel PXE network, and any custom configuration
done on the Fuel Master.

I want #1 to work smoothly, but #2 should also be a safe route.

On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh  wrote:

> Evgeniy,
>
> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>
>> Also we should decide when to run containers
>> upgrade + host upgrade? Before or after new CentOS is installed? Probably
>> it should be done before we run backup, in order to get the latest
>> scripts for
>> backup/restore actions.
>>
>
> We're working to determine if we need to backup/upgrade containers at all.
> My expectation is that we should be OK with just backup of DB, IP addresses
> settings from astute.yaml for the master node, and credentials from
> configuration files for the services.
>
> --
> Best regards,
> Oleg Gelbukh
>
>
>>
>> Thanks,
>>
>> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> At the moment I'm working on deprecating Fuel upgrade tarball.
>>> Currently, it includes the following:
>>>
>>> * RPM repository (upstream + mos)
>>> * DEB repository (mos)
>>> * openstack.yaml
>>> * version.yaml
>>> * upgrade script itself (+ virtualenv)
>>>
>>> Apart from upgrading docker containers this upgrade script makes copies
>>> of the RPM/DEB repositories and puts them on the master node naming these
>>> repository directories depending on what is written in openstack.yaml and
>>> version.yaml. My plan was something like:
>>>
>>> 1) deprecate version.yaml (move all fields from there to various places)
>>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>>> 3) do not put new repos on the master node (instead we should use online
>>> repos or use fuel-createmirror to make local mirrors)
>>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>>
>>> Then UX was supposed to be roughly like:
>>>
>>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>>> 2) yum install fuel-upgrade
>>> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
>>> there should no longer be parts copying RPM/DEB repos)
>>>
>>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>>> it is not enough to just do things which we usually did during upgrades.
>>> Now there are two ways to upgrade:
>>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>>> 2) to backup the master node, then reinstall it from scratch and then
>>> apply backup
>>>
>>> Upgrade team is trying to understand which way is more appropriate.
>>> Regarding my tarball-related activities, I'd say that this package-based
>>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>>> Centos upgrade script as a first step for upgrade), but it definitely can
>>> not be aligned with (2), because it assumes reinstalling the master node
>>> from scratch.
>>>
>>> Right now, I'm finishing the work around deprecating version.yaml and my
>>> further steps would be to modify fuel-upgrade script so it does not copy
>>> RPM/DEB repos, but those steps make little sense taking into account Centos
>>> 7 feature.
>>>
>>> Colleagues, let's make a decision about how we are going to upgrade the
>>> master node ASAP. Probably my tarball related work should be reduced to
>>> just throwing tarball away.
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Matt Riedemann



On 11/6/2015 4:43 AM, Thierry Carrez wrote:

Tony Breeds wrote:

[...]
1) Is it even possible to keep Juno alive (is the impact on the project as
a whole acceptable)?


It is *technically* possible, imho. The main cost to keep it is that the
branches get regularly broken by various other changes, and those breaks
are non-trivial to fix (we have taken steps to make branches more
resilient, but those only started to appear in stable/liberty). The
issues sometimes propagate (through upgrade testing) to master, at which
point it becomes everyone's problem to fix it. The burden ends up
falling on the usual gate fixers heroes, a rare resource we need to protect.

So it's easy to say "we should keep the branch since so many people
still use it", unless we have significantly more people working on (and
capable of) fixing it when it's broken, the impact on the project is
just not acceptable.

It's not the first time this has been suggested, and every time our
answer was "push more resources in fixing existing stable branches and
we might reconsider it". We got promised lots of support. But I don't
think we have yet seen real change in that area (I still see the same
usual suspects fixing stable gates), and things can still barely keep
afloat with our current end-of-life policy...

Stable branches also come with security support, so keeping more
branches opened mechanically adds to the work of the Vulnerability
Management Team, another rare resource.

There are other hidden costs on the infrastructure side (we can't get
rid of a number of things that we have moved away from until the old
branch still needing those things is around), but I'll let someone
closer to the metal answer that one.


Assuming a positive answer:

2) Who's going to do the work?
 - Me, who else?
3) What do we do if people don't actually do the work but we as a community
have made a commitment?


In the past, that generally meant people opposed to the idea of
extending support periods having to stand up for the community promise
and fix the mess in the end.

PS: stable gates are currently broken for horizon/juno, trove/kilo, and
neutron-lbaas/liberty.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



In general I'm in favor of trying to keep the stable branches available 
as long as possible because of (1) lots of production deployments not 
upgrading as fast as we (the dev team) assume they are and (2) 
backporting security fixes upstream is much nicer as a community than 
doing it out of tree when you support 5+ years of releases.


Having said that, the downside points above are very valid, i.e. not 
enough resources to help, we want to drop py26, things get wedged easily 
and there aren't people around to monitor or fix it, or understand how 
all of the stable branch + infra + QA stuff fits together.


It also extends the life and number of tests that need to be run against 
things in Tempest, which already runs several dozen jobs per change 
proposed today (since Tempest is branchless).


At this point stable/juno is pretty much a goner, IMO. The last few 
months of activity that I've been involved in have been dealing with 
requirements capping issues, where, as we've seen, you fix one issue to 
unwedge a project and then the g-r syncs end up breaking two other 
projects, and the cycle never ends.


This is not as problematic in stable/kilo because we've done a better 
job of isolating versions in g-r from the start, but things won't get 
really good until stable/liberty when we've got upper-constraints in action.


So I'm optimistic that we can keep stable/kilo around and working longer 
than what we've normally done in the past, but I don't hold out much 
hope for stable/juno at this point given it's current state.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread John Garbutt
Hi,

I propose we add Sylvain Bauza[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the Scheduler.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1] http://stackalytics.com/?module=nova-group_id=sylvain-bauza=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread John Garbutt
Hi,

I propose we add Alex Xu[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the API.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1]http://stackalytics.com/?module=nova-group_id=xuhj=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Matt Riedemann



On 11/6/2015 9:20 AM, Matt Riedemann wrote:



On 11/6/2015 4:43 AM, Thierry Carrez wrote:

Tony Breeds wrote:

[...]
1) Is it even possible to keep Juno alive (is the impact on the
project as
a whole acceptable)?


It is *technically* possible, imho. The main cost to keep it is that the
branches get regularly broken by various other changes, and those breaks
are non-trivial to fix (we have taken steps to make branches more
resilient, but those only started to appear in stable/liberty). The
issues sometimes propagate (through upgrade testing) to master, at which
point it becomes everyone's problem to fix it. The burden ends up
falling on the usual gate fixers heroes, a rare resource we need to
protect.

So it's easy to say "we should keep the branch since so many people
still use it", unless we have significantly more people working on (and
capable of) fixing it when it's broken, the impact on the project is
just not acceptable.

It's not the first time this has been suggested, and every time our
answer was "push more resources in fixing existing stable branches and
we might reconsider it". We got promised lots of support. But I don't
think we have yet seen real change in that area (I still see the same
usual suspects fixing stable gates), and things can still barely keep
afloat with our current end-of-life policy...

Stable branches also come with security support, so keeping more
branches opened mechanically adds to the work of the Vulnerability
Management Team, another rare resource.

There are other hidden costs on the infrastructure side (we can't get
rid of a number of things that we have moved away from until the old
branch still needing those things is around), but I'll let someone
closer to the metal answer that one.


Assuming a positive answer:

2) Who's going to do the work?
 - Me, who else?
3) What do we do if people don't actually do the work but we as a
community
have made a commitment?


In the past, that generally meant people opposed to the idea of
extending support periods having to stand up for the community promise
and fix the mess in the end.

PS: stable gates are currently broken for horizon/juno, trove/kilo, and
neutron-lbaas/liberty.



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



In general I'm in favor of trying to keep the stable branches available
as long as possible because of (1) lots of production deployments not
upgrading as fast as we (the dev team) assume they are and (2)
backporting security fixes upstream is much nicer as a community than
doing it out of tree when you support 5+ years of releases.

Having said that, the downside points above are very valid, i.e. not
enough resources to help, we want to drop py26, things get wedged easily
and there aren't people around to monitor or fix it, or understand how
all of the stable branch + infra + QA stuff fits together.

It also extends the life and number of tests that need to be run against
things in Tempest, which already runs several dozen jobs per change
proposed today (since Tempest is branchless).

At this point stable/juno is pretty much a goner, IMO. The last few
months of activity that I've been involved in have been dealing with
requirements capping issues, where, as we've seen, you fix one issue to
unwedge a project and then the g-r syncs end up breaking two other
projects, and the cycle never ends.

This is not as problematic in stable/kilo because we've done a better
job of isolating versions in g-r from the start, but things won't get
really good until stable/liberty when we've got upper-constraints in
action.

So I'm optimistic that we can keep stable/kilo around and working longer
than what we've normally done in the past, but I don't hold out much
hope for stable/juno at this point given it's current state.



Didn't mean to break the cross-list chain.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change VIP address via API

2015-11-06 Thread Aleksey Kasatkin
Mike, Vladimir,

Yes,
1. We need to add IPs on the fly (we need to add POST functionality);
otherwise it will work the VIP-like way (changing network roles in a plugin or
release).
2. We should allow the fields 'network_role', 'node_roles' and 'namespace'
to be left empty, so the validation should be changed.

So, the answer here

> Q. Any allocated IP could be accessible via these handlers, so now we can
> restrict user to access VIPs only
> and answer with some error to other ip_addrs ids.
>
should be "Any allocated IP is accessible via these handlers", so URLs can
be changed to
/clusters//network_configuration/ips/
/clusters//network_configuration/ips//
Node IPs may be a different story, though.
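
Just to illustrate the proposed handlers (the Nailgun host, the ids and passing
'force' in the request body below are assumptions for the sketch, not the final
API), a client call could look roughly like:

    # Hedged sketch of the proposed handlers: host/port, ids and the 'force'
    # field are assumptions, not the agreed API.
    import requests

    NAILGUN = 'http://10.20.0.2:8000'   # hypothetical Fuel Master address
    cluster_id, ip_id = 1, 5            # hypothetical cluster and ip_addrs ids

    base = '{0}/clusters/{1}/network_configuration/ips'.format(NAILGUN, cluster_id)

    # GET all allocated IPs (VIPs included) for the environment.
    print(requests.get(base + '/').json())

    # PUT a new address for one ip_addrs record; 'force' mirrors the --force
    # behaviour discussed below for overlapping networks.
    resp = requests.put('{0}/{1}/'.format(base, ip_id),
                        json={'ip_addr': '10.20.0.50', 'force': True})
    resp.raise_for_status()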

Alex,

'node_roles' determines in which node group to allocate the IP. So, for our base
VIPs it will be the group with controller nodes
(they all have node_roles=['controller'], which is the default setting).
It can be some other node group for nodes with a different role. E.g. ceph
nodes use some ceph/vip network role, and a VIP is defined
for this network role (with 'network_role'='ceph/vip' and
'node_roles'=['ceph/osd']).
This VIP will be allocated
in the network that 'ceph/vip' is mapped to and in the node group where
the ceph nodes are located. The ceph nodes then cannot be located
in more than one node group (as a VIP cannot migrate between node groups
now).



Aleksey Kasatkin


On Fri, Nov 6, 2015 at 10:20 AM, Vladimir Kuklin 
wrote:

> +1 to Mike
>
> It would be awesome to get an API handler that allows one to actually add
> an ip address to IP_addrs table. As well as an IP range to ip_ranges table.
>
> On Fri, Nov 6, 2015 at 6:15 AM, Mike Scherbakov 
> wrote:
>
>> Is there a way to make it more generic, not "VIP" specific? Let's say I
>> want to reserve address(-es) for something for whatever reason, and then I
>> want to use them by some tricky way.
>> More specifically, can we reserve IP address(-es) with some codename, and
>> use it later?
>> 12.12.12.12 - my-shared-ip
>> 240.0.0.2 - my-multicast
>> and then use them in puppet / whatever deployment code by $my-shared-ip,
>> $my-multicast?
>>
>> Thanks,
>>
>> On Tue, Nov 3, 2015 at 8:49 AM Aleksey Kasatkin 
>> wrote:
>>
>>> Folks,
>>>
>>> Here is a resume of our recent discussion:
>>>
>>> 1. Add new URLs for processing VIPs:
>>>
>>> /clusters//network_configuration/vips/ (GET)
>>> /clusters//network_configuration/vips// (GET, PUT)
>>>
>>> where  is the id in ip_addrs table.
>>> So, user can get all VIPS, get one VIP by id, change parameters (IP
>>> address) for one VIP by its id.
>>> More possibilities can be added later.
>>>
>>> Q. Any allocated IP could be accessible via these handlers, so now we
>>> can restrict user to access VIPs only
>>> and answer with some error to other ip_addrs ids.
>>>
>>> 2. Add current VIP meta into ip_addrs table.
>>>
>>> Create new field in ip_addrs table for placing VIP metadata there.
>>> Current set of ip_addrs fields:
>>> id (int),
>>> network (FK),
>>> node (FK),
>>> ip_addr (string),
>>> vip_type (string),
>>> network_data (relation),
>>> node_data (relation)
>>>
>>> Q. We could replace vip_type (it contains VIP name now) with vip_info.
>>>
>>> 3. Allocate VIPs on cluster creation and seek VIPs at all network
>>> changes.
>>>
>>> So, VIPs will be checked (via network roles descriptions) and
>>> re-allocated in ip_addrs table
>>> at these points:
>>> a. create cluster
>>> b. modify networks configuration
>>> c. modify one network
>>> d. modify network template
>>> e. change nodes set for cluster
>>> f. change node roles set on nodes
>>> g. modify cluster attributes (change set of plugins)
>>> h. modify release
>>>
>>> 4. Add 'manual' field into VIP meta to indicate whether it is
>>> auto-allocated or not.
>>>
>>> So, whole VIP description may look like:
>>> {
>>> 'name': 'management'
>>> 'network_role': 'mgmt/vip',
>>> 'namespace': 'haproxy',
>>> 'node_roles': ['controller'],
>>> 'alias': 'management_vip',
>>> 'manual': True,
>>> }
>>>
>>> Example of current VIP description:
>>>
>>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L207
>>>
>>> Nailgun will re-allocate VIP address if 'manual' == False.
>>>
>>> 5. Q. what to do when the given address overlaps with the network from
>>> another
>>> environment? overlaps with the network of current environment which does
>>> not match the
>>> network role of the VIP?
>>>
>>> Use '--force' parameter to change it. PUT will fail otherwise.
>>>
>>>
>>> Guys, please review this and share your comments here,
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Aleksey Kasatkin
>>>
>>>
>>> On Tue, Nov 3, 2015 at 10:47 AM, Aleksey Kasatkin <
>>> akasat...@mirantis.com> wrote:
>>>
 Igor,

 > For VIP allocation we should use POST request. It's ok to use PUT for
 setting (changing) IP address.

 My proposal is about setting IP addresses for VIPs only (auto and

Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-06 Thread Bruno Cornec

Hello,

Pavlo Shchelokovskyy said on Tue, Nov 03, 2015 at 09:41:51PM +:

For auto-setting driver options on enrollment, I would vote for option 2
with default being fake driver + optional CMDB integration. This would ease
managing a homogeneous pool of BMs, but still (using fake driver or data
from CMDB) work reasonably well in heterogeneous case.

As for setting a random password, CMDB integration is crucial IMO. Large
deployments usually have some sort of it already, and it must serve as a
single source of truth for the deployment. So if inspector is changing the
ipmi password, it should not only notify/update Ironic's knowledge on that
node, but also notify/update the CMDB on that change - at least there must
be a possibility (a ready-to-use plug point) to do that before we roll out
such feature.


WRT interaction with a CMDB, we have been investigating some ideas that
we have gathered at https://github.com/uggla/alexandria/wiki

Some code has been written to try to model some of these aspects, but
having more contributors and patches to enhance that integration would
be great! It is similarly available at https://github.com/uggla/alexandria

We had planned to talk about these ideas at the previous OpenStack
summit but didn't get enough votes, it seems. So now we are aiming at
presenting at the next one ;-)

HTH,
Bruno.
--
Open Source Profession, Linux Community Lead WW  http://hpintelco.net
HPE EMEA EG Open Source Technology Strategist http://hp.com/go/opensource
FLOSS projects: http://mondorescue.org http://project-builder.org
Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] shotgun code freeze

2015-11-06 Thread Dmitry Pyzhov
Great job! We are much closer to the removal of the fuel-web repo.

On Tue, Oct 27, 2015 at 7:35 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I am glad to announce that since now shotgun is a separate project.
> fuel-web/shotgun directory has been deprecated. There is yet another patch
> that has not been merged https://review.openstack.org/238525 (adds
> .gitignore file to the new shotgun repo). Please review it.
>
> Shotgun
>
>- Launchpad bug https://bugs.launchpad.net/fuel/+bug/1506894
>- project-config patch https://review.openstack.org/235355 (DONE)
>- pypi (DONE)
>- run_tests.sh https://review.openstack.org/235368 (DONE)
>- rpm/deb specs https://review.openstack.org/#/c/235382/ (DONE)
>- fuel-ci verification jobs https://review.fuel-infra.org/12872 (DONE)
>- label jenkins slaves for verification (DONE)
>- directory freeze (DONE)
>- prepare upstream (DONE)
>- waiting for project-config patch to be merged (DONE)
>- .gitreview https://review.openstack.org/238476 (DONE)
>- .gitignore https://review.openstack.org/238525 (ON REVIEW)
>- custom jobs parameters https://review.fuel-infra.org/13209 (DONE)
>- fix core group (DONE)
>- fuel-main https://review.openstack.org/#/c/238953/ (DONE)
>- packaging-ci  https://review.fuel-infra.org/13181 (DONE)
>- MAINTAINERS https://review.openstack.org/239410 (DONE)
>- deprecate shotgun directory https://review.openstack.org/239407
>(DONE)
>- fix verify-fuel-web-docs job (it installs shotgun for some reason)
>https://review.fuel-infra.org/#/c/13194/ (DONE)
>- remove old shotgun package (DONE)
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Oct 21, 2015 at 2:46 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> As you might know I'm working on splitting multiproject fuel-web
>> repository into several sub-projects. Shotgun is one of directories that
>> are to be moved to a separate git project.
>>
>> Checklist for this to happen is as follows:
>>
>>- Launchpad bug https://bugs.launchpad.net/fuel/+bug/1506894
>>- project-config patch  https://review.openstack.org/#/c/235355 (ON
>>REVIEW)
>>- pypi project
>>https://pypi.python.org/pypi?%3Aaction=pkg_edit=Shotgun (DONE)
>>- run_tests.sh  https://review.openstack.org/235368 (DONE)
>>- rpm/deb specs  https://review.openstack.org/#/c/235382 (DONE)
>>- fuel-ci verification jobs https://review.fuel-infra.org/#/c/12872/ (ON
>>REVIEW)
>>- label jenkins slaves for verification jobs (ci team)
>>- directory freeze (WE ARE HERE)
>>- prepare upstream (TODO)
>>- waiting for project-config patch to be merged (ON REVIEW)
>>- fuel-main patch (TODO)
>>- packaging-ci patch (TODO)
>>- deprecate fuel-web/shotgun directory (TODO)
>>
>> Now we are at the point where we need to freeze the fuel-web/shotgun
>> directory. So, I'd like to announce a code freeze for this directory; all
>> patches that make changes in the directory and are currently on review will
>> need to be backported to the new git repository.
>>
>> Vladimir Kozhukalov
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Understanding stable/branch process for Neutron subprojects

2015-11-06 Thread John Belamaric
>>> 
>> All new features must go to master only. Your master should always be  
>> tested and work with neutron master (meaning, your master should target  
>> Mitaka, not Liberty).
>> 
>>> 

We have a very similar situation in networking-infoblox to what Neil was saying 
about Calico. In our case, we needed the framework for pluggable IPAM to 
produce our driver. We are now targeting Liberty, but based on the above our 
plan is:

1) Release 1.0 from master (soon)
2) Create stable/liberty based on 1.0
3) Continue to add features in master, but *maintain compatibility with 
stable/liberty*.

It is important that our next version works with stable/liberty, not just 
master/Mitaka.

John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Daniel P. Berrange
On Fri, Nov 06, 2015 at 03:32:00PM +, John Garbutt wrote:
> Hi,
> 
> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.

+1 from me, I think Sylvain will be a valuable addition to the team
for his scheduler expertize.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Daniel P. Berrange
On Fri, Nov 06, 2015 at 03:32:04PM +, John Garbutt wrote:
> Hi,
> 
> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.

+1 from me, the tireless API patch & review work has been very helpful
to our efforts in this area.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][UI] Fuel UI switched to a new build/module system

2015-11-06 Thread Vitaly Kramskikh
Hi,

I'd like to inform you that Fuel UI has migrated from require.js to webpack. It
will give us lots of benefits, like a significant improvement in developer
experience, and will allow us to easily separate Fuel UI from Nailgun. For
more information please read the spec
.

For those who use Nailgun in fake mode, it means that they need to take
some extra actions to make Fuel UI work - since we no longer have an
uncompressed UI version which compiles itself in the browser (this
allowed us to resolve a huge amount of tech debt - we have to support only
one environment). You need to run npm install to fetch the new modules and
proceed in one of 2 possible ways:

   - If you don't plan to modify Fuel UI, it would be better just to
   compile Fuel UI by running gulp build - after that the compiled UI will be
   served by Nailgun as usual. Don't forget to rerun npm install && gulp
   build after fetching new changes.
   - If you plan to modify Fuel UI, there is another option - use the
   development server. It watches files for changes and automatically
   recompiles Fuel UI (using incremental compilation, which is usually much
   faster than gulp build) and triggers a refresh in the browser. You can run it
   via gulp dev-server.

If you have issues with the new code, feel free to contact us in #fuel-ui
or #fuel-dev channels.

-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Donald Talton
I like the idea of LTS releases. 

Speaking to my own deployments, there are many new features we are not 
interested in, and wouldn't be, until we can get organizational (cultural) 
change in place, or see stability and scalability. 

We can't rely on, or expect, that orgs will move to the CI/CD model for infra, 
when they aren't even ready to do that for their own apps. It's still a new 
"paradigm" for many of us. CI/CD requires a considerable engineering effort, 
and given that the decision to "switch" to OpenStack is often driven by 
cost-savings over enterprise virtualization, adding those costs back in via 
engineering salaries doesn't make fiscal sense.

My big argument is that if Icehouse/Juno works and is stable, and I don't need 
newer features from subsequent releases, why would I expend the effort until 
such a time that I do want those features? Thankfully there are vendors that 
understand this. Keeping up with the release cycle just for the sake of keeping 
up with the release cycle is exhausting.

-Original Message-
From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
Sent: Thursday, November 05, 2015 11:15 PM
To: OpenStack Development Mailing List
Cc: openstack-operat...@lists.openstack.org
Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

Hello all,

I'll start by acknowledging that this is a big and complex issue and I do not 
claim to be across all the view points, nor do I claim to be particularly 
persuasive ;P

Having stated that, I'd like to seek constructive feedback on the idea of 
keeping Juno around for a little longer.  During the summit I spoke to a number 
of operators, vendors and developers on this topic.  There was some support and 
some "That's crazy pants!" responses.  I clearly didn't make it around to 
everyone, hence this email.

Acknowledging my affiliation/bias:  I work for Rackspace in the private cloud 
team.  We support a number of customers currently running Juno that are, for a 
variety of reasons, challenged by the Kilo upgrade.

Here is a summary of the main points that have come up in my conversations, 
both for and against.

Keep Juno:
 * According to the current user survey[1] Icehouse still has the
   biggest install base in production clouds.  Juno is second, which makes
   sense. If we EOL Juno this month that means ~75% of production clouds
   will be running an EOL'd release.  Clearly many of these operators have
   support contracts from their vendor, so those operators won't be left 
   completely adrift, but I believe it's the vendors that benefit from keeping
   Juno around. By working together *in the community* we'll see the best
   results.

 * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but we
   still have a huge Icehouse/Juno install base.

For me this is pretty compelling, but for balance:

Keep the current plan and EOL Juno Real Soon Now:
 * There is also no ignoring the elephant in the room that with HP stepping
   back from public cloud there are questions about our CI capacity, and
   keeping Juno will have an impact on that critical resource.

 * Juno (and other stable/*) resources have a non-zero impact on *every*
   project, esp. @infra and release management.  We need to ensure this
   isn't too much of a burden.  This mostly means we need enough trustworthy
   volunteers.

 * Juno is also tied up with Python 2.6 support. When
   Juno goes, so will Python 2.6 which is a happy feeling for a number of
   people, and more importantly reduces complexity in our project
   infrastructure.

 * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
   that are "on the hook" for multiple years of support, so for that case
   we're really only delaying the inevitable.

 * Some number of the production clouds may never migrate from $version, in
   which case longer support for Juno isn't going to help them.


I'm sure these question were well discussed at the VYR summit where we set the 
EOL date for Juno, but I was new then :) What I'm asking is:

1) Is it even possible to keep Juno alive (is the impact on the project as
   a whole acceptable)?

Assuming a positive answer:

2) Who's going to do the work?
- Me, who else?
3) What do we do if people don't actually do the work but we as a community
   have made a commitment?
4) If we keep Juno alive for $some_time, does that imply we also bump the
   life cycle on Kilo and liberty and Mitaka etc?

Yours Tony.

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
(page 20)
[2] http://git.openstack.org/cgit/openstack/nova/tag/?h=icehouse-eol


This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Jay Pipes

+1

On 11/06/2015 10:32 AM, John Garbutt wrote:

Hi,

I propose we add Sylvain Bauza[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the Scheduler.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1] http://stackalytics.com/?module=nova-group_id=sylvain-bauza=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Jay Pipes

+1

On 11/06/2015 10:32 AM, John Garbutt wrote:

Hi,

I propose we add Alex Xu[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the API.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1]http://stackalytics.com/?module=nova-group_id=xuhj=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Sean Dague
On 11/06/2015 10:32 AM, John Garbutt wrote:
> Hi,
> 
> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.
> 
> Many thanks,
> John
> 
> [1]http://stackalytics.com/?module=nova-group_id=xuhj=all
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

+1

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Sean Dague
On 11/06/2015 10:32 AM, John Garbutt wrote:
> Hi,
> 
> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.
> 
> Many thanks,
> John
> 
> [1] 
> http://stackalytics.com/?module=nova-group_id=sylvain-bauza=all
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Matt,

You are talking about this part of Operations guide [1], or you mean
something else?

If yes, then we still need to extract data from backup containers. I'd
prefer backup of DB in simple plain text file, since our DBs are not that
big.

[1]
https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master
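
For the plain text dump itself, I mean something as simple as the sketch below
(paths and the invocation context are illustrative only; run it wherever the
nailgun PostgreSQL database lives):

    # Rough sketch only: dump the nailgun DB to a plain SQL file so the backup
    # stays human-readable and easy to restore; the output path is illustrative.
    import subprocess

    subprocess.check_call(
        ['sudo', '-u', 'postgres', 'pg_dump', '--format=plain',
         '--file=/var/backup/nailgun.sql', 'nailgun'])

A plain dump also makes it easy to inspect or diff the data before restoring it
on a freshly installed master node.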

--
Best regards,
Oleg Gelbukh

On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn 
wrote:

> Oleg,
>
> All the volatile information, including a DB dump, are contained in the
> small Fuel Master backup. There should be no information lost unless there
> was manual customization done inside the containers (such as puppet
> manifest changes). There shouldn't be a need to back up the entire
> containers.
>
> The information we would lose would include the IP configuration
> interfaces besides the one used for the Fuel PXE network and any custom
> configuration done on the Fuel Master.
>
> I want #1 to work smoothly, but #2 should also be a safe route.
>
> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh 
> wrote:
>
>> Evgeniy,
>>
>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>>
>>> Also we should decide when to run containers
>>> upgrade + host upgrade? Before or after new CentOS is installed? Probably
>>> it should be done before we run backup, in order to get the latest
>>> scripts for
>>> backup/restore actions.
>>>
>>
>> We're working to determine if we need to backup/upgrade containers at
>> all. My expectation is that we should be OK with just backup of DB, IP
>> addresses settings from astute.yaml for the master node, and credentials
>> from configuration files for the services.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>>
>>>
>>> Thanks,
>>>
>>> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 At the moment I'm working on deprecating Fuel upgrade tarball.
 Currently, it includes the following:

 * RPM repository (upstream + mos)
 * DEB repository (mos)
 * openstack.yaml
 * version.yaml
 * upgrade script itself (+ virtualenv)

 Apart from upgrading docker containers this upgrade script makes copies
 of the RPM/DEB repositories and puts them on the master node naming these
 repository directories depending on what is written in openstack.yaml and
 version.yaml. My plan was something like:

 1) deprecate version.yaml (move all fields from there to various places)
 2) deliver openstack.yaml with fuel-openstack-metadata package
 3) do not put new repos on the master node (instead we should use
 online repos or use fuel-createmirror to make local mirrors)
 4) deliver fuel-upgrade package (throw away upgrade virtualenv)

 Then UX was supposed to be roughly like:

 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
 2) yum install fuel-upgrade
 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
 there should no longer be parts copying RPM/DEB repos)

 However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
 it is not enough to just do things which we usually did during upgrades.
 Now there are two ways to upgrade:
 1) to use the official Centos upgrade script for upgrading from 6 to 7
 2) to backup the master node, then reinstall it from scratch and then
 apply backup

 Upgrade team is trying to understand which way is more appropriate.
 Regarding my tarball-related activities, I'd say that this package-based
 upgrade approach can be aligned with (1) (fuel-upgrade would use official
 Centos upgrade script as a first step for upgrade), but it definitely can
 not be aligned with (2), because it assumes reinstalling the master node
 from scratch.

 Right now, I'm finishing the work around deprecating version.yaml and
 my further steps would be to modify fuel-upgrade script so it does not copy
 RPM/DEB repos, but those steps make little sense taking into account Centos
 7 feature.

 Colleagues, let's make a decision about how we are going to upgrade the
 master node ASAP. Probably my tarball related work should be reduced to
 just throwing tarball away.


 Vladimir Kozhukalov


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 

[openstack-dev] [nova][bugs] Weekly Status Report

2015-11-06 Thread Markus Zoeller
Hey folks,

below is the first report of bug stats I intend to post weekly.
We discussed briefly during the Mitaka summit that a report like this
could be useful to keep attention on the open bugs at a certain
level. Let me know if you think it's missing something.

Stats
=

New bugs which are *not* assigned to any subteam

count: 19
query: http://bit.ly/1WF68Iu


New bugs which are *not* triaged

subteam: libvirt 
count: 14 
query: http://bit.ly/1Hx3RrL
subteam: volumes 
count: 11
query: http://bit.ly/1NU2DM0
subteam: network
count: 4
query: http://bit.ly/1LVAQdq
subteam: db
count: 4
query: http://bit.ly/1LVATWG
subteam: 
count: 67
query: http://bit.ly/1RBVZLn


High prio bugs which are *not* in progress
--
count: 39
query: http://bit.ly/1MCKoHA


Critical bugs which are *not* in progress
-
count: 0
query: http://bit.ly/1kfntfk
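
If you want to reproduce counts like these yourself, here is a rough
launchpadlib sketch (not the exact script behind this report; the bit.ly
queries above stay the authoritative definitions, and the tag names are
assumed to be the usual subteam tags):

    # Rough sketch, not the script behind this report: count New nova bugs per
    # subteam tag with launchpadlib (anonymous access is enough for reading).
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('nova-bug-stats', 'production',
                                     version='devel')
    nova = lp.projects['nova']
    for tag in ('libvirt', 'volumes', 'network', 'db'):
        tasks = nova.searchTasks(status=['New'], tags=[tag])
        print('%s: %d' % (tag, len(tasks)))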


Readings

* https://wiki.openstack.org/wiki/BugTriage
* https://wiki.openstack.org/wiki/Nova/BugTriage
* 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Gareth
+11

Alex, good news for you !

On Sat, Nov 7, 2015 at 12:36 AM, Daniel P. Berrange  wrote:
> On Fri, Nov 06, 2015 at 03:32:04PM +, John Garbutt wrote:
>> Hi,
>>
>> I propose we add Alex Xu[1] to nova-core.
>>
>> Over the last few cycles he has consistently been doing great work,
>> including some quality reviews, particularly around the API.
>>
>> Please respond with comments, +1s, or objections within one week.
>
> +1 from me, the tireless API patch & review work has been very helpful
> to our efforts in this area.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email
from Mar 1 2013, notify me
and I'll donate $1 or ¥1 to an open organization you specify.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread James King
+1 for some sort of LTS release system.

Telcos and risk-averse organizations working with sensitive data might not be 
able to upgrade nearly as fast as the releases keep coming out. From the summit 
in Japan it sounds like companies running some fairly critical public 
infrastructure on Openstack aren’t going to be upgrading to Kilo any time soon.

Public clouds might even benefit from this. I know we (Dreamcompute) are 
working towards tracking the upstream releases closer… but it’s not feasible 
for everyone.

I’m not sure whether the resources exist to do this but it’d be a nice to have, 
imho.

> On Nov 6, 2015, at 11:47 AM, Donald Talton  wrote:
> 
> I like the idea of LTS releases. 
> 
> Speaking to my own deployments, there are many new features we are not 
> interested in, and wouldn't be, until we can get organizational (cultural) 
> change in place, or see stability and scalability. 
> 
> We can't rely on, or expect, that orgs will move to the CI/CD model for 
> infra, when they aren't even ready to do that for their own apps. It's still 
> a new "paradigm" for many of us. CI/CD requires a considerable engineering 
> effort, and given that the decision to "switch" to OpenStack is often driven 
> by cost-savings over enterprise virtualization, adding those costs back in 
> via engineering salaries doesn't make fiscal sense.
> 
> My big argument is that if Icehouse/Juno works and is stable, and I don't 
> need newer features from subsequent releases, why would I expend the effort 
> until such a time that I do want those features? Thankfully there are vendors 
> that understand this. Keeping up with the release cycle just for the sake of 
> keeping up with the release cycle is exhausting.
> 
> -Original Message-
> From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
> Sent: Thursday, November 05, 2015 11:15 PM
> To: OpenStack Development Mailing List
> Cc: openstack-operat...@lists.openstack.org
> Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> Hello all,
> 
> I'll start by acknowledging that this is a big and complex issue and I do not 
> claim to be across all the view points, nor do I claim to be particularly 
> persuasive ;P
> 
> Having stated that, I'd like to seek constructive feedback on the idea of 
> keeping Juno around for a little longer.  During the summit I spoke to a 
> number of operators, vendors and developers on this topic.  There was some 
> support and some "That's crazy pants!" responses.  I clearly didn't make it 
> around to everyone, hence this email.
> 
> Acknowledging my affiliation/bias:  I work for Rackspace in the private cloud 
> team.  We support a number of customers currently running Juno that are, for 
> a variety of reasons, challenged by the Kilo upgrade.
> 
> Here is a summary of the main points that have come up in my conversations, 
> both for and against.
> 
> Keep Juno:
> * According to the current user survey[1] Icehouse still has the
>   biggest install base in production clouds.  Juno is second, which makes
>   sense. If we EOL Juno this month that means ~75% of production clouds
>   will be running an EOL'd release.  Clearly many of these operators have
>   support contracts from their vendor, so those operators won't be left 
>   completely adrift, but I believe it's the vendors that benefit from keeping
>   Juno around. By working together *in the community* we'll see the best
>   results.
> 
> * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but we
>   still have a huge Icehouse/Juno install base.
> 
> For me this is pretty compelling but for balance  
> 
> Keep the current plan and EOL Juno Real Soon Now:
> * There is also no ignoring the elephant in the room that with HP stepping
>   back from public cloud there are questions about our CI capacity, and
>   keeping Juno will have an impact on that critical resource.
> 
> * Juno (and other stable/*) resources have a non-zero impact on *every*
>   project, esp. @infra and release management.  We need to ensure this
>   isn't too much of a burden.  This mostly means we need enough trustworthy
>   volunteers.
> 
> * Juno is also tied up with Python 2.6 support. When
>   Juno goes, so will Python 2.6 which is a happy feeling for a number of
>   people, and more importantly reduces complexity in our project
>   infrastructure.
> 
> * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
>   that are "on the hook" for multiple years of support, so for that case
>   we're really only delaying the inevitable.
> 
> * Some number of the production clouds may never migrate from $version, in
>   which case longer support for Juno isn't going to help them.
> 
> 
> I'm sure these question were well discussed at the VYR summit where we set 
> the EOL date for Juno, but I was new then :) What I'm asking is:
> 
> 1) Is it even possible to keep Juno alive (is the impact on the project as
>   a whole 

Re: [openstack-dev] [release] Release countdown for week R-21, Nov 9-13

2015-11-06 Thread Doug Hellmann
Excerpts from Neil Jerram's message of 2015-11-06 12:15:54 +:
> On 05/11/15 14:22, Doug Hellmann wrote:
> > All deliverables should have reno configured before Mitaka 1. See
> > http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
> > for details, and follow up on that thread with questions.
> 
> I guess that 'deliverables' do not include projects with
> release:independent.  Is that correct?

We use the term "deliverables" for the things we package because
some are produced from multiple repositories.

> 
> Nevertheless, would use of reno be recommended for release:independent
> projects too?

Yes, certainly. Even release:none repos might benefit from standardizing
on release notes management.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]

2015-11-06 Thread Sean Dague
On 11/06/2015 04:13 AM, Salvatore Orlando wrote:
> It makes sense to have a single point where response pagination is made
> in API processing, rather than scattering pagination across Nova REST
> controllers; unfortunately I am not really able to comment on how
> feasible that would be in Nova's WSGI framework.
> 
> However, I'd just like to add that there is an approved guideline for
> API response pagination [1], and it would be good if all these efforts
> followed the guideline.
> 
> Salvatore
> 
> [1] 
> https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html

The pagination part is just a TODO in there.
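
For anyone wiring pagination up, the limit/marker pattern the guideline
describes looks roughly like this from the client side (a hedged sketch,
not Nova code; the endpoint and token are placeholders, and the
'servers_links' rel=next handling follows what Nova returns today when a
result set is truncated by a limit):

    import requests

    def list_all_servers(endpoint, token, page_size=100):
        servers = []
        url = '%s/servers?limit=%d' % (endpoint, page_size)
        while url:
            resp = requests.get(url, headers={'X-Auth-Token': token})
            resp.raise_for_status()
            body = resp.json()
            servers.extend(body['servers'])
            # While more results remain, the response carries a
            # 'servers_links' entry with rel=next whose href includes
            # the marker for the following page.
            nexts = [l['href'] for l in body.get('servers_links', [])
                     if l.get('rel') == 'next']
            url = nexts[0] if nexts else None
        return servers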

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Alexander Kostrikov
Hi, Vladimir!
I think that option (2) 'to backup the master node, then reinstall it from
scratch and then apply the backup' is a better way to upgrade.
In that way we are concentrating on two problems in one feature:
backups and upgrades.
That will ease development, testing and also reduce feature creep.

P.S.
It is hard to refer to (2) because you have three (2)s.

On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
> it includes the following:
>
> * RPM repository (upstream + mos)
> * DEB repository (mos)
> * openstack.yaml
> * version.yaml
> * upgrade script itself (+ virtualenv)
>
> Apart from upgrading docker containers this upgrade script makes copies of
> the RPM/DEB repositories and puts them on the master node naming these
> repository directories depending on what is written in openstack.yaml and
> version.yaml. My plan was something like:
>
> 1) deprecate version.yaml (move all fields from there to various places)
> 2) deliver openstack.yaml with fuel-openstack-metadata package
> 3) do not put new repos on the master node (instead we should use online
> repos or use fuel-createmirror to make local mirrors)
> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>
> Then UX was supposed to be roughly like:
>
> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
> 2) yum install fuel-upgrade
> 3) /usr/bin/fuel-upgrade (the script was going to become lighter, because
> there should no longer be parts copying the RPM/DEB repos)
>
> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and it
> is not enough to just do things which we usually did during upgrades. Now
> there are two ways to upgrade:
> 1) to use the official Centos upgrade script for upgrading from 6 to 7
> 2) to backup the master node, then reinstall it from scratch and then
> apply backup
>
> The upgrade team is trying to understand which way is more appropriate.
> Regarding my tarball-related activities, I'd say that this package-based
> upgrade approach can be aligned with (1) (fuel-upgrade would use official
> Centos upgrade script as a first step for upgrade), but it definitely can
> not be aligned with (2), because it assumes reinstalling the master node
> from scratch.
>
> Right now, I'm finishing the work around deprecating version.yaml and my
> further steps would be to modify the fuel-upgrade script so it does not copy
> the RPM/DEB repos, but those steps make little sense taking into account the
> move to CentOS 7.
>
> Colleagues, let's make a decision about how we are going to upgrade the
> master node ASAP. Probably my tarball-related work should be reduced to
> just throwing the tarball away.
>
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Kind Regards,

Alexandr Kostrikov,

Mirantis, Inc.

35b/3, Vorontsovskaya St., 109147, Moscow, Russia


Tel.: +7 (495) 640-49-04
Tel.: +7 (925) 716-64-52

Skype: akostrikov_mirantis

E-mail: akostri...@mirantis.com 

www.mirantis.com
www.mirantis.ru
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Daniel P. Berrange
On Fri, Nov 06, 2015 at 07:09:59AM -0500, Sean Dague wrote:
> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
> > 
> > I think that exposing hypervisor_type is very much the *wrong* approach
> > to this problem. The set of allowed actions varies based on much more than
> > just the hypervisor_type. The hypervisor version may affect it, as may
> > the hypervisor architecture, and even the version of Nova. If horizon
> > restricted its actions based on hypevisor_type alone, then it is going
> > to inevitably prevent the user from performing otherwise valid actions
> > in a number of scenarios.
> > 
> > IMHO, a capabilities based approach is the only viable solution to
> > this kind of problem.
> 
> Right, we just had a super long conversation about this in #openstack-qa
> yesterday with mordred, jroll, and deva around what it's going to take
> to get upgrade tests passing with ironic.
> 
> Capabilities is the right approach, because it means we're future
> proofing our interface by telling users what they can do, not some
> arbitrary string that they need to cary around a separate library to
> figure those things out.
> 
> It seems like capabilities need to exist on flavor, and by proxy instance.
> 
> GET /flavors/bm.large/capabilities
> 
> {
>  "actions": {
>  'pause': False,
>  'unpause': False,
>  'rebuild': True
>  ..
>   }
> 
> A starting point would definitely be the set of actions that you can
> send to the flavor/instance. There may be features beyond that we'd like
> to classify as capabilities, but actions would be a very concrete and
> attainable starting point. With microversions we don't have to solve
> this all at once, start with a concrete thing and move forward.

I think there are two distinct use cases for capabilities we need to
consider.

 1. Before I launch an instance, does the cloud provide features XYZ

 2. Given this running instance, am I able to perform operation XYZ

Having capabilities against the flavour /might/ be sufficient for
#1, but it isn't sufficient for #2.

For example, the ability to hotplug disks to a running instance will
depend on what disk controller the instance is using. The choice of
disk controller used will vary based on image metadata properties,
eg ide vs scsi vs virtio-blk. IDE does not support hotplug, but
scsi & virtio-blk do. So we can't answer the question "does hotplug
disk work for this instance" simply based on the flavour - we need
to ask it against the instance.

What we can answer against the flavour is whether the hypervisor
driver is able to support hotplug in principle, given a suitably
configured instance. That said, even that is not an exact science
if you take into account fact that the cloud could be running
compute nodes with different versions, and the flavour does not
directly control which version of a compute node we'll run against.

Having capabilities against the flavour would certainly allow for
an improvement in Horizon UI vs its current state, but to be able
to perfectly represent what is possible for an instance, Horizon
would ultimately require capabilities against the isntance,

So I think we'll likely end up having to implement both capabilities
against a flavour and against an instance. So you'd end up with a
flow something like

 - Check to see if cloud provider supports hotplug by checking
   flavour capability==disk-hotplug

 - When booting an instance mark it as requiring capability==disk-hotplug
   to ensure its scheduled to a node which supports that capability

 - When presenting UI for operations against an instance, check
   that the running instance supports capability==disk-hotplug
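
To make that flow concrete, a Horizon-ish consumer of such an API might end
up with something like the sketch below. It is entirely hypothetical: the
/capabilities resources and the 'disk-hotplug' key do not exist in Nova
today, this is just the shape the flow above implies.

    import requests

    def _supports(url, token, capability):
        resp = requests.get(url, headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return bool(resp.json().get('capabilities', {}).get(capability))

    def can_hotplug_disk(compute_url, token, flavor_id, server_id=None):
        # 1. Does the flavour (i.e. the cloud, in principle) support it?
        flavor_caps = '%s/flavors/%s/capabilities' % (compute_url, flavor_id)
        if not _supports(flavor_caps, token, 'disk-hotplug'):
            return False
        # 2. For a running instance, ask the instance itself, since image
        #    metadata (ide vs scsi/virtio-blk) can still rule it out.
        if server_id:
            server_caps = '%s/servers/%s/capabilities' % (compute_url,
                                                          server_id)
            return _supports(server_caps, token, 'disk-hotplug')
        return True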


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging that?

2015-11-06 Thread John Garbutt
On 6 November 2015 at 13:38, Markus Zoeller  wrote:
> Jeremy Stanley  wrote on 11/05/2015 07:11:37 PM:
>
>> From: Jeremy Stanley 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: 11/05/2015 07:17 PM
>> Subject: Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging
> that?
>>
>> On 2015-11-05 16:23:56 +0100 (+0100), Markus Zoeller wrote:
>> > some months ago I wrote down all the things a developer should know
>> > about the bug handling process in general [1]. It is written as a
>> > project agnostic thing and got some +1s but it isn't merged yet.
>> > It would be helpful if I could use it as a pointer for new
>> > contributors, as I'm under the impression that the mental image
>> > differs a lot among contributors. So, my questions are:
>> >
>> > 1) Who's in charge of merging such non-project-specific things?
>> [...]
>>
>> This is a big part of the problem your addition is facing, in my
>> opinion. The OpenStack Infrastructure Manual is an attempt at a
>> technical manual for interfacing with the systems written and
>> maintained by the OpenStack Project Infrastructure team. It has,
>> unfortunately, also grown some sections which contain cultural
>> background and related recommendations because until recently there
>> was no better venue for those topics, but we're going to be ripping
>> those out and proposing them to documents maintained by more
>> appropriate teams at the earliest opportunity.
>
> I've written this for the Nova docs originally but got sent to the
> infra-manual as the "project agnostic thing".
>
>> Bug management falls into a grey area currently, where a lot of the
>> information contributors need is cultural background mixed with
>> workflow information on using Launchpad (which is not really managed
>> by the Infra team). [...]
>
> True, that's what I try to contribute here. I'm aware of the intended
> change in our issue tracker and tried to write the text so it needs
> only a few changes when this transition is done.
>
>> Cultural content about the lifecycle of bugs, standard practices for
>> triage, et cetera are likely better suited to the newly created
>> Project Team Guide;[...]
>
> The Project Team Guide was news to me, I'm going to have a look if
> it would fit.

+1 for trying to see how this fits into the Project Team Guide.

Possibly somewhere in here, add about having an open bug tracker?
http://docs.openstack.org/project-team-guide/open-development.html#specifications

You can see the summit discussions on the project team guide here:
https://etherpad.openstack.org/p/mitaka-crossproject-doc-the-way

Thanks,
johnthetubaguy

>> So anyway, to my main point, topics in collaboratively-maintained
>> documentation are going to end up being closely tied to the
>> expertise of the review team for the document being targeted. In the
>> case of the Infra Manual that's the systems administrators who
>> configure and maintain our community infrastructure. I won't speak
>> for others on the team, but I don't personally feel comfortable
>> deciding what details a user should include in a bug report for
>> python-novaclient, or how the Cinder team should triage their bug
>> reports.
>>
>> I expect that the lack of core reviews are due to:
>>
>> 1. Few of the core reviewers feel they can accurately judge much of
>> the content you've proposed in that change.
>>
>> 2. Nobody feels empowered to tell you that this large and
>> well-written piece of documentation you've spent a lot of time
>> putting together is a poor fit and should be split up and much of it
>> put somewhere else more suitable (especially without a suggestion as
>> to where that might be).
>>
>> 3. The core review team for this is the core review team for all our
>> infrastructure systems, and we're all unfortunately very behind in
>> handling the current review volume.
>
> Maybe the time has come for me to think about starting a blog...
> Thanks Stanley, for your time and feedback.
>
> Regards, Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread John Garbutt
On 6 November 2015 at 12:09, Sean Dague  wrote:
> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>>> Hello all,
>>> I came across [1] which is notionally an ironic bug in that horizon 
>>> presents
>>> VM operations (like suspend) to users.  Clearly these options don't make 
>>> sense
>>> to ironic which can be confusing.
>>>
>>> There is a horizon fix that just disables migrate/suspend and other
>>> functions if the operator sets a flag saying ironic is present.  Clearly
>>> this is suboptimal
>>> for a mixed hv environment.
>>>
>>> The data needed (hypervisor type) is currently available only to admins, a
>>> quick
>>> hack to remove this policy restriction is functional.
>>>
>>> There are a few ways to solve this.
>>>
>>>  1. Change the default from "rule:admin_api" to "" (for
>>> os_compute_api:os-extended-server-attributes and
>>> os_compute_api:os-hypervisors), and set a list of values we're
>>> comfortable exposing to the user (hypervisor_type and
>>> hypervisor_hostname).  So a user can get the hypervisor_name as part of
>>> the instance details and get the hypervisor_type from the
>>> os-hypervisors.  This would work for horizon but increases the API load
>>> on nova and kinda implies that horizon would have to cache the data and
>>> open-code assumptions that hypervisor_type can/can't do action $x
>>>
>>>  2. Include the hypervisor_type with the instance data.  This would place 
>>> the
>>> burden on nova.  It makes looking up instance details slightly more
>>> complex but doesn't result in additional API queries, nor caching
>>> overhead in horizon.  This has the same open-coding issues as Option 1.
>>>
>>>  3. Define a service user and have horizon look up the hypervisors details 
>>> via
>>> that role.  Has all the drawbacks as option 1 and I'm struggling to
>>> think of many benefits.
>>>
>>>  4. Create a capabilities API of some description, that can be queried so
>>> that
>>> consumers (horizon) can know
>>>
>>>  5. Some other way for users to know what kind of hypervisor they're on, 
>>> Perhaps
>>> there is an established image property that would work here?
>>>
>>> If we're okay with exposing the hypervisor_type to users, then #2 is pretty
>>> quick and easy, and could be done in Mitaka.  Option 4 is probably the best
>>> long term solution but I think is best done in 'N' as it needs lots of
>>> discussion.
>>
>> I think that exposing hypervisor_type is very much the *wrong* approach
>> to this problem. The set of allowed actions varies based on much more than
>> just the hypervisor_type. The hypervisor version may affect it, as may
>> the hypervisor architecture, and even the version of Nova. If horizon
>> restricted its actions based on hypevisor_type alone, then it is going
>> to inevitably prevent the user from performing otherwise valid actions
>> in a number of scenarios.
>>
>> IMHO, a capabilities based approach is the only viable solution to
>> this kind of problem.
>
> Right, we just had a super long conversation about this in #openstack-qa
> yesterday with mordred, jroll, and deva around what it's going to take
> to get upgrade tests passing with ironic.
>
> Capabilities is the right approach, because it means we're future
> proofing our interface by telling users what they can do, not some
> arbitrary string that they need to cary around a separate library to
> figure those things out.
>
> It seems like capabilities need to exist on flavor, and by proxy instance.
>
> GET /flavors/bm.large/capabilities
>
> {
>  "actions": {
>  'pause': False,
>  'unpause': False,
>  'rebuild': True
>  ..
>   }
>
> A starting point would definitely be the set of actions that you can
> send to the flavor/instance. There may be features beyond that we'd like
> to classify as capabilities, but actions would be a very concrete and
> attainable starting point. With microversions we don't have to solve
> this all at once, start with a concrete thing and move forward.
>
> Sending an action that was "False" for the instance/flavor would return
> a 400 BadRequest high up at the API level, much like input validation
> via jsonschema.

+1

From memory we couldn't quite decide on the granularity of that
actions list, but we can work through that in a spec.

> This is nothing new, we've talked about it in the abstract in the Nova
> space for a while. We've yet had anyone really take this on. If you
> wanted to run with a spec and code, it would be welcome.

+1

I would love for it to eventually also reflect policy (per flavor
policy) in that list of available actions. That might be one way to
get something quickly, while working on a more automatic solution.

Not to delay that work, but there are related ideas around grouping
flavors, into flavor classes, so you attach policy to the class rather
than every 

Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Sean Dague
On 11/06/2015 07:28 AM, John Garbutt wrote:
> On 6 November 2015 at 12:09, Sean Dague  wrote:
>> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
>>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
 Hello all,
 I came across [1] which is notionally an ironic bug in that horizon 
 presents
 VM operations (like suspend) to users.  Clearly these options don't make 
 sense
 to ironic which can be confusing.

 There is a horizon fix that just disables migrate/suspened and other 
 functaions
 if the operator sets a flag say ironic is present.  Clealy this is sub 
 optimal
 for a mixed hv environment.

 The data needed (hpervisor type) is currently avilable only to admins, a 
 quick
 hack to remove this policy restriction is functional.

 There are a few ways to solve this.

  1. Change the default from "rule:admin_api" to "" (for
 os_compute_api:os-extended-server-attributes and
 os_compute_api:os-hypervisors), and set a list of values we're
 comfortbale exposing the user (hypervisor_type and
 hypervisor_hostname).  So a user can get the hypervisor_name as part of
 the instance deatils and get the hypervisor_type from the
 os-hypervisors.  This would work for horizon but increases the API load
 on nova and kinda implies that horizon would have to cache the data and
 open-code assumptions that hypervisor_type can/can't do action $x

  2. Include the hypervisor_type with the instance data.  This would place 
 the
 burdon on nova.  It makes the looking up instance details slightly more
 complex but doesn't result in additional API queries, nor caching
 overhead in horizon.  This has the same opencoding issues as Option 1.

  3. Define a service user and have horizon look up the hypervisors details 
 via
 that role.  Has all the drawbacks as option 1 and I'm struggling to
 think of many benefits.

  4. Create a capabilitioes API of some description, that can be queried so 
 that
 consumers (horizon) can known

  5. Some other way for users to know what kind of hypervisor they're on, 
 Perhaps
 there is an established image property that would work here?

 If we're okay with exposing the hypervisor_type to users, then #2 is pretty
 quick and easy, and could be done in Mitaka.  Option 4 is probably the best
 long term solution but I think is best done in 'N' as it needs lots of
 discussion.
>>>
>>> I think that exposing hypervisor_type is very much the *wrong* approach
>>> to this problem. The set of allowed actions varies based on much more than
>>> just the hypervisor_type. The hypervisor version may affect it, as may
>>> the hypervisor architecture, and even the version of Nova. If horizon
>>> restricted its actions based on hypevisor_type alone, then it is going
>>> to inevitably prevent the user from performing otherwise valid actions
>>> in a number of scenarios.
>>>
>>> IMHO, a capabilities based approach is the only viable solution to
>>> this kind of problem.
>>
>> Right, we just had a super long conversation about this in #openstack-qa
>> yesterday with mordred, jroll, and deva around what it's going to take
>> to get upgrade tests passing with ironic.
>>
>> Capabilities is the right approach, because it means we're future
>> proofing our interface by telling users what they can do, not some
>> arbitrary string that they need to cary around a separate library to
>> figure those things out.
>>
>> It seems like capabilities need to exist on flavor, and by proxy instance.
>>
>> GET /flavors/bm.large/capabilities
>>
>> {
>>  "actions": {
>>  'pause': False,
>>  'unpause': False,
>>  'rebuild': True
>>  ..
>>   }
>>
>> A starting point would definitely be the set of actions that you can
>> send to the flavor/instance. There may be features beyond that we'd like
>> to classify as capabilities, but actions would be a very concrete and
>> attainable starting point. With microversions we don't have to solve
>> this all at once, start with a concrete thing and move forward.
>>
>> Sending an action that was "False" for the instance/flavor would return
>> a 400 BadRequest high up at the API level, much like input validation
>> via jsonschema.
> 
> +1
> 
> From memory we couldn't quite decide on the granularity of that
> actions list, but we can work through that in a spec.

My suggestion is that phase 1 is the list of ACTIONS you can POST to
/servers/UUID/action. That is already a fixed size list and is a
definitive concept today.
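
As a toy illustration of that phase 1 (nothing to do with Nova's actual WSGI
plumbing, and the capability dict is invented), gating the action list per
flavor/instance is then trivial:

    class BadRequest(Exception):
        """Stand-in for the HTTP 400 the API layer would return."""

    def handle_action(capabilities, action, handlers):
        # capabilities, e.g.: {'pause': False, 'unpause': False,
        #                      'rebuild': True}
        if not capabilities.get(action, False):
            raise BadRequest('action %r not supported by this instance'
                             % action)
        return handlers[action]()

    caps = {'pause': False, 'unpause': False, 'rebuild': True}
    handlers = {'rebuild': lambda: 'rebuilding', 'pause': lambda: 'pausing'}
    handle_action(caps, 'rebuild', handlers)   # -> 'rebuilding'
    # handle_action(caps, 'pause', handlers)   # -> BadRequest, i.e. a 400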

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Sean Dague
On 11/06/2015 07:46 AM, Daniel P. Berrange wrote:
> On Fri, Nov 06, 2015 at 07:09:59AM -0500, Sean Dague wrote:
>> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
>>>
>>> I think that exposing hypervisor_type is very much the *wrong* approach
>>> to this problem. The set of allowed actions varies based on much more than
>>> just the hypervisor_type. The hypervisor version may affect it, as may
>>> the hypervisor architecture, and even the version of Nova. If horizon
>>> restricted its actions based on hypevisor_type alone, then it is going
>>> to inevitably prevent the user from performing otherwise valid actions
>>> in a number of scenarios.
>>>
>>> IMHO, a capabilities based approach is the only viable solution to
>>> this kind of problem.
>>
>> Right, we just had a super long conversation about this in #openstack-qa
>> yesterday with mordred, jroll, and deva around what it's going to take
>> to get upgrade tests passing with ironic.
>>
>> Capabilities is the right approach, because it means we're future
>> proofing our interface by telling users what they can do, not some
>> arbitrary string that they need to cary around a separate library to
>> figure those things out.
>>
>> It seems like capabilities need to exist on flavor, and by proxy instance.
>>
>> GET /flavors/bm.large/capabilities
>>
>> {
>>  "actions": {
>>  'pause': False,
>>  'unpause': False,
>>  'rebuild': True
>>  ..
>>   }
>>
>> A starting point would definitely be the set of actions that you can
>> send to the flavor/instance. There may be features beyond that we'd like
>> to classify as capabilities, but actions would be a very concrete and
>> attainable starting point. With microversions we don't have to solve
>> this all at once, start with a concrete thing and move forward.
> 
> I think there are two distinct use cases for capabilities we need to
> consider.
> 
>  1. Before I launch an instance, does the cloud provide features XYZ
> 
>  2. Given this running instance, am I able to perform operation XYZ
> 
> Having capabilities against the flavour /might/ be sufficient for
> #1, but it isn't sufficient for #2.
> 
> For example, the ability to hotplug disks to a running instance will
> depend on what disk controller the instance is using. The choice of
> disk controller used will vary based on image metadata properties,
> eg ide vs scsi vs virtio-blk. IDE does not support hotplug, but
> scsi & virtio-blk do. So we can't answer the question "does hotplug
> disk work for this instance" simply based on the flavour - we need
> to ask it against the instance.
> 
> What we can answer against the flavour is whether the hypervisor
> driver is able to support hotplug in principle, given a suitably
> configured instance. That said, even that is not an exact science
> if you take into account fact that the cloud could be running
> compute nodes with different versions, and the flavour does not
> directly control which version of a compute node we'll run against.
> 
> Having capabilities against the flavour would certainly allow for
> an improvement in Horizon UI vs its current state, but to be able
> to perfectly represent what is possible for an instance, Horizon
> would ultimately require capabilities against the instance.
> 
> So I think we'll likely end up having to implement both capabilities
> against a flavour and against an instance. So you'd end up with a
> flow something like
> 
>  - Check to see if cloud provider supports hotplug by checking
>flavour capability==disk-hotplug
> 
>  - When booting an instance mark it as requiring capability==disk-hotplug
>to ensure its scheduled to a node which supports that capability
> 
>  - When presenting UI for operations against an instance, check
>that the running instance supports capability==disk-hotplug

Yes, instances would definitely also have capabilities. Once an instance
is launched it has local flavor anyway, and capabilities would transfer
accordingly (plus possibly be modified beyond that for various reasons).

The reason this effort has remained stalled is that no one disagrees
with the concept, but the realm of possible information exposed blows up
to the point that it's like Tasks; a good idea but too big to make any
progress on.

Capabilities by server POST action on flavors/instances is discrete
enough to be proposed and done in a cycle. It definitively makes the
"baremetal flavors/instances can't be paused" discoverable and a thing
which is concretely better for users.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] Understanding stable/branch process for Neutron subprojects

2015-11-06 Thread Neil Jerram
Prompted by the thread about maybe allowing subproject teams to do their
own stable maint, I have some questions about what I should be doing in
networking-calico; and I guess the answers may apply generally to
subprojects.

Let's start from the text at
http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html:

> Stable branches for libraries should be created at the same time when

"libraries"?  Should that say "subprojects"?

> corresponding neutron stable branches are cut off. This is to avoid
> situations when a postponed cut-off results in a stable branch that
> contains some patches that belong to the next release. This would
> require reverting patches, and this is something you should avoid.

(Textually, I think "created" would be clearer here than "cut off", if
that is the intended meaning.  "cut off" could also mean "deleted" or
"stop being used".)

I think I understand the point here.  However, networking-calico doesn't
yet have a stable/liberty branch, and in practice its master branch
currently targets Neutron stable/liberty code.  (For example, its
DevStack setup instructions say "git checkout stable/liberty".)

To get networking-calico into a correct state per the above guideline, I
think I'd need/want to

- create a stable/liberty branch (from the current master, as there is
nothing in master that actually depends on Neutron changes since
stable/liberty)

- continue developing useful enhancements on the stable/liberty branch -
because my primary target for now is the released Liberty - and then
merge those to master

- eventually, develop on the master branch also, to take advantage of
and keep current with changes in Neutron master.

But is that compatible with the permitted stable branch process?  It
sounds like the permitted process requires me to develop everything on
master first, then (ask to) cherry-pick specific changes to the stable
branch - which isn't actually natural for the current situation (or
targeting Liberty releases).

I suspect I'm missing a clue somewhere - thanks for any input!

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] BIOS Configuration

2015-11-06 Thread Serge Kovaleff
Hi Lucas,


I meant whether it's possible to access/update the BIOS configuration without
any agent, something similar to a remote execution engine via Ansible.
I am inspired by agent-less "Ansible-deploy-driver"
https://review.openstack.org/#/c/241946/

There are definite benefits to using the agent, e.g. heartbeats.
Nevertheless, the idea of a minimal agent-less environment is quite appealing
to me.

Cheers,
Serge Kovaleff


On Fri, Oct 23, 2015 at 4:58 PM, Lucas Alvares Gomes 
wrote:

> Hi,
>
> > I am interested in remote BIOS configuration.
> > There is "New driver interface for BIOS configuration specification"
> > https://review.openstack.org/#/c/209612/
> >
> > Is it possible to implement this without REST API endpoint?
> >
>
> I may be missing something here but without the API how will the user
> set the configurations? We need the ReST API so we can abstract the
> interface for this for all the different drivers in Ironic.
>
> Also, feel free to add suggestions in the spec patch itself.
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread John Garbutt
On 6 November 2015 at 09:49, Daniel P. Berrange  wrote:
> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>> Hello all,
>> I came across [1] which is notionally an ironic bug in that horizon 
>> presents
>> VM operations (like suspend) to users.  Clearly these options don't make 
>> sense
>> to ironic which can be confusing.
>>
>> There is a horizon fix that just disables migrate/suspened and other 
>> functaions
>> if the operator sets a flag say ironic is present.  Clealy this is sub 
>> optimal
>> for a mixed hv environment.
>>
>> The data needed (hpervisor type) is currently avilable only to admins, a 
>> quick
>> hack to remove this policy restriction is functional.
>>
>> There are a few ways to solve this.
>>
>>  1. Change the default from "rule:admin_api" to "" (for
>> os_compute_api:os-extended-server-attributes and
>> os_compute_api:os-hypervisors), and set a list of values we're
>> comfortbale exposing the user (hypervisor_type and
>> hypervisor_hostname).  So a user can get the hypervisor_name as part of
>> the instance deatils and get the hypervisor_type from the
>> os-hypervisors.  This would work for horizon but increases the API load
>> on nova and kinda implies that horizon would have to cache the data and
>> open-code assumptions that hypervisor_type can/can't do action $x
>>
>>  2. Include the hypervisor_type with the instance data.  This would place the
>> burdon on nova.  It makes the looking up instance details slightly more
>> complex but doesn't result in additional API queries, nor caching
>> overhead in horizon.  This has the same opencoding issues as Option 1.
>>
>>  3. Define a service user and have horizon look up the hypervisors details 
>> via
>> that role.  Has all the drawbacks as option 1 and I'm struggling to
>> think of many benefits.
>>
>>  4. Create a capabilitioes API of some description, that can be queried so 
>> that
>> consumers (horizon) can known
>>
>>  5. Some other way for users to know what kind of hypervisor they're on, 
>> Perhaps
>> there is an established image property that would work here?
>>
>> If we're okay with exposing the hypervisor_type to users, then #2 is pretty
>> quick and easy, and could be done in Mitaka.  Option 4 is probably the best
>> long term solution but I think is best done in 'N' as it needs lots of
>> discussion.
>
> I think that exposing hypervisor_type is very much the *wrong* approach
> to this problem. The set of allowed actions varies based on much more than
> just the hypervisor_type. The hypervisor version may affect it, as may
> the hypervisor architecture, and even the version of Nova. If horizon
> restricted its actions based on hypevisor_type alone, then it is going
> to inevitably prevent the user from performing otherwise valid actions
> in a number of scenarios.
>
> IMHO, a capabilities based approach is the only viable solution to
> this kind of problem.

+1 to capabilities approach.

This also feels very related to the policy discovery piece we have
debated previously.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] re-implementation in golang - hummingbird details

2015-11-06 Thread Brian Cline
There was a talk at the Summit in Tokyo last week which you can find here:
https://youtu.be/Jfat_FReZIE

Here is a blog post that was pushed about a week before:
http://blog.rackspace.com/making-openstack-powered-rackspace-cloud-files-buzz-with-hummingbird/

--
Brian
Fat-fingered from a Victrola

 Original Message 
Subject: [openstack-dev] [swift] re-implementation in golang - hummingbird  
details
From: Rahul Nair 
To: openstack-dev@lists.openstack.org
CC:
Date: Thu, October 29, 2015 12:23 PM



Hi All,

I was reading about the "hummingbird" re-implementation of some parts of swift 
in golang, can someone kindly point to documentation/blogs on the changes made, 
where I can understand the new implementation before going into the code.

​Thanks,
Rahul U Nair
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Mark Baker
Worth mentioning that OpenStack releases that come out at the same time as
Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
supported for 5 years by Canonical so are already kind of an LTS. Support
in this context means patches, updates and commercial support (for a fee).
For paying customers 3 years of patches, updates and commercial support for
April releases, (Kilo, O, Q etc..) is also available.



Best Regards


Mark Baker

On Fri, Nov 6, 2015 at 5:03 PM, James King  wrote:

> +1 for some sort of LTS release system.
>
> Telcos and risk-averse organizations working with sensitive data might not
> be able to upgrade nearly as fast as the releases keep coming out. From the
> summit in Japan it sounds like companies running some fairly critical
> public infrastructure on Openstack aren’t going to be upgrading to Kilo any
> time soon.
>
> Public clouds might even benefit from this. I know we (Dreamcompute) are
> working towards tracking the upstream releases closer… but it’s not
> feasible for everyone.
>
> I’m not sure whether the resources exist to do this but it’d be a nice to
> have, imho.
>
> > On Nov 6, 2015, at 11:47 AM, Donald Talton 
> wrote:
> >
> > I like the idea of LTS releases.
> >
> > Speaking to my own deployments, there are many new features we are not
> interested in, and wouldn't be, until we can get organizational (cultural)
> change in place, or see stability and scalability.
> >
> > We can't rely on, or expect, that orgs will move to the CI/CD model for
> infra, when they aren't even ready to do that for their own apps. It's
> still a new "paradigm" for many of us. CI/CD requires a considerable
> engineering effort, and given that the decision to "switch" to OpenStack is
> often driven by cost-savings over enterprise virtualization, adding those
> costs back in via engineering salaries doesn't make fiscal sense.
> >
> > My big argument is that if Icehouse/Juno works and is stable, and I
> don't need newer features from subsequent releases, why would I expend the
> effort until such a time that I do want those features? Thankfully there
> are vendors that understand this. Keeping up with the release cycle just
> for the sake of keeping up with the release cycle is exhausting.
> >
> > -Original Message-
> > From: Tony Breeds [mailto:t...@bakeyournoodle.com]
> > Sent: Thursday, November 05, 2015 11:15 PM
> > To: OpenStack Development Mailing List
> > Cc: openstack-operat...@lists.openstack.org
> > Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for
> longer.
> >
> > Hello all,
> >
> > I'll start by acknowledging that this is a big and complex issue and I
> do not claim to be across all the view points, nor do I claim to be
> particularly persuasive ;P
> >
> > Having stated that, I'd like to seek constructive feedback on the idea
> of keeping Juno around for a little longer.  During the summit I spoke to a
> number of operators, vendors and developers on this topic.  There was some
> support and some "That's crazy pants!" responses.  I clearly didn't make it
> around to everyone, hence this email.
> >
> > Acknowledging my affiliation/bias:  I work for Rackspace in the private
> cloud team.  We support a number of customers currently running Juno that
> are, for a variety of reasons, challenged by the Kilo upgrade.
> >
> > Here is a summary of the main points that have come up in my
> conversations, both for and against.
> >
> > Keep Juno:
> > * According to the current user survey[1] Icehouse still has the
> >   biggest install base in production clouds.  Juno is second, which makes
> >   sense. If we EOL Juno this month that means ~75% of production clouds
> >   will be running an EOL'd release.  Clearly many of these operators have
> >   support contracts from their vendor, so those operators won't be left
> >   completely adrift, but I believe it's the vendors that benefit from
> keeping
> >   Juno around. By working together *in the community* we'll see the best
> >   results.
> >
> > * We only recently EOL'd Icehouse[2].  Sure it was well communicated,
> but we
> >   still have a huge Icehouse/Juno install base.
> >
> > For me this is pretty compelling but for balance 
> >
> > Keep the current plan and EOL Juno Real Soon Now:
> > * There is also no ignoring the elephant in the room that with HP
> stepping
> >   back from public cloud there are questions about our CI capacity, and
> >   keeping Juno will have an impact on that critical resource.
> >
> > * Juno (and other stable/*) resources have a non-zero impact on *every*
> >   project, esp. @infra and release management.  We need to ensure this
> >   isn't too much of a burden.  This mostly means we need enough
> trustworthy
> >   volunteers.
> >
> > * Juno is also tied up with Python 2.6 support. When
> >   Juno goes, so will Python 2.6 which is a happy feeling for a number of
> >   people, and more importantly reduces complexity 

[openstack-dev] [Openstack-operators] [logs] Neutron not logging user information on wsgi requests by default

2015-11-06 Thread Kris G. Lindgren
Hello all,

I noticed the other day that in our OpenStack install (Kilo) Neutron seems to be 
the only project that was not logging the username/tenant information on every 
wsgi request.  Nova/Glance/Heat all log a username and/or project on each 
request.  Our wsgi logs from neutron look like the following:

2015-11-05 13:45:24.302 14549 INFO neutron.wsgi 
[req-ab633261-da6d-4ac7-8a35-5d321a8b4a8f ] 10.224.48.132 - - [05/Nov/2015 
13:45:24]
"GET /v2.0/networks.json?id=2d5fe344-4e98-4ccc-8c91-b8064d17c64c HTTP/1.1" 200 
655 0.027550

I did a fair amount of digging and it seems that devstack is by default 
overriding the context log format for neutron to add the username/tenant 
information into the logs.  However, there is active work to remove this 
override from devstack[1].  Still, using the devstack approach I was able to true 
up our neutron wsgi logs to be in line with what the other services are providing.

If you add:
logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s 
[%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s

To the [DEFAULT] section of neutron.conf and restart neutron-server.  You will 
now get log output like the following:

 2015-11-05 18:07:31.033 INFO neutron.wsgi 
[req-ebf1d3c9-b556-48a7-b1fa-475dd9df0bf7  ] 10.224.48.132 - - [05/Nov/2015 18:07:31]
"GET /v2.0/networks.json?id=55e1b92a-a2a3-4d64-a2d8-4b0bee46f3bf HTTP/1.1" 200 
617 0.035515

So go forth and check your logs before you need to use them to debug who 
did what, when, and where, and get the information that you need added to the 
wsgi logs.  If you are not seeing wsgi logs for your projects, try enabling 
verbose=true in the [DEFAULT] section as well.
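
Once those fields are in place, pulling "who did what, when" out of the wsgi
log is easy to script. A rough parser for the format above (the regex is
tuned to the sample lines in this mail, adjust to taste):

    import re

    LINE_RE = re.compile(
        r'\[(?P<req_id>req-[0-9a-f-]+)\s+(?P<user>\S*)\s+(?P<project>\S*)\]'
        r'.*"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

    def who_did_what(logfile):
        with open(logfile) as f:
            for line in f:
                m = LINE_RE.search(line)
                if m:
                    yield (m.group('user'), m.group('project'),
                           m.group('method'), m.group('path'),
                           m.group('status'))

    # for user, project, method, path, status in who_did_what('server.log'):
    #     print(user, project, method, path, status)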

Adding [logs] tag since it would be nice to have all projects logging to a 
standard wsgi format out of the gate.

[1] - https://review.openstack.org/#/c/172510/2
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DevStack errors...

2015-11-06 Thread Neil Jerram
Thanks for following up with these details.  Good news!

Neil


From: Thales [mailto:thale...@yahoo.com] 
Sent: 05 November 2015 20:16
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] DevStack errors...

Neil Jerram wrote:
"When you say 'on Ubuntu 14.04', are we talking a completely fresh install with 
nothing else on it?  That's the most reliable way to run DevStack - people 
normally create a fresh disposable VM for this kind of work."

   -- I finally got it running!   I did what you said, and created a VM.   I 
basically followed this guy's video tutorial.  The only difference is I used 
stable/liberty instead of stable/icehouse (which I guess no longer exists). 
  It is, however, *very* slow on my machine, with 4 gigabytes of RAM and a 30 GB HDD.  

   I did have some problems getting VirtualBox working (I know others are using 
VMware) with their "guest additions", because none of the standard instructions 
worked.    Some user on askubuntu.com here had the answer.  This gave me the 
bigger screen.
http://askubuntu.com/questions/451805/screen-resolution-problem-with-ubuntu-14-04-and-virtualbox



  The answer given by the guy named "Chip" and then the reply to him by "Snark" 
did the trick.   

The tutorial I used:
https://www.youtube.com/watch?v=zoi8WpGwrXM



  I supplied details here in case anyone else has the same difficulties.

   Thanks for the help!

Regards,
...John
On Tuesday, November 3, 2015 3:35 AM, Neil Jerram  
wrote:

On 02/11/15 23:56, Thales wrote:
I'm trying to get DevStack to work, but am getting errors.  Is this a good list 
to ask questions for this?  I can't seem to get answers anywhere I look.   I 
tried the openstack list, but it kind of moves slow.

Thanks for any help.

Regards, John

In case it helps, I had no problem using DevStack's stable/liberty branch 
yesterday.  If you don't specifically need master, you might try that too:

  # Clone the DevStack repository.
  git clone https://git.openstack.org/openstack-dev/devstack

  # Use the stable/liberty branch.
  cd devstack
  git checkout stable/liberty

  ...

I also just looked again at your report on openstack@.  Were you using Python 
2.7?

I expect you'll have seen discussions like 
http://stackoverflow.com/questions/23176697/importerror-no-module-named-io-in-ubuntu-14-04.
  It's not obvious to me how those can be relevant, though, as they seem to 
involve corruption of an existing virtualenv, whereas DevStack I believe 
creates a virtualenv from scratch.

When you say 'on Ubuntu 14.04', are we talking a completely fresh install with 
nothing else on it?  That's the most reliable way to run DevStack - people 
normally create a fresh disposable VM for this kind of work.

Regards,
    Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Nova] Tempest Hypervisor Feature Tagging

2015-11-06 Thread John Garbutt
On 5 November 2015 at 19:31, Rafael Folco  wrote:
> Is there any way to know what hypervisor features[1] were tested in a
> Tempest run?
> From what I’ve seen, currently there is no way to tell what tests cover what
> features.
> Looks like Tempest has UUID and service tagging, but no reference to the
> hypervisor features.
>
> It would be good to track/map covered features and generate a report for CI.
> If there is any interest in that, I’d like to validate whether the metadata
> tagging (similar to UUID) is a reasonable approach.
>
> [1] http://docs.openstack.org/developer/nova/support-matrix.html

I have proposed a change to take the support-matrix and include test
coverage and doc coverage:
https://review.openstack.org/#/c/215664/4/doc/source/feature_classification.rst,cm

The plan was certainly to (in a similar way to DefCore) map tempest
uuids to groups of features.
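
The mechanical side of that mapping is small, for what it's worth; the real
effort is curating it. Something like the sketch below (the feature keys and
UUIDs are made-up placeholders, not real support-matrix entries or test IDs):

    # Map support-matrix feature keys to the tempest test UUIDs that
    # exercise them; all values here are placeholders.
    FEATURE_TESTS = {
        'operation.pause': ['00000000-0000-0000-0000-000000000001'],
        'operation.suspend': ['00000000-0000-0000-0000-000000000002'],
    }

    def features_covered(executed_test_uuids):
        """Given the test UUIDs from a tempest run, report feature coverage."""
        executed = set(executed_test_uuids)
        return dict((feature, bool(executed & set(uuids)))
                    for feature, uuids in FEATURE_TESTS.items())

    # features_covered(uuids_from_a_subunit_stream)
    # -> {'operation.pause': True, 'operation.suspend': False}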

I would love help to push that effort forward, as I just don't have
the bandwidth to do it myself right now.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread John Garbutt
On 6 November 2015 at 03:31, Alex Xu  wrote:
> Hi, folks
>
> The Nova API sub-team is working on the swagger generation, and there is a PoC:
> https://review.openstack.org/233446
>
> But before we go to the next step, I really hope we can get agreement
> on how to support Microversions and Actions. The PoC has a demo of
> Microversions. It generates the minimum-version action as a standard swagger
> spec entry; the other version actions are named as extended attributes, like:
>
> {
> '/os-keypairs': {
> "get": {
> 'x-start-version': '2.1',
> 'x-end-version': '2.1',
> 'description': '',
>
> },
> "x-get-2.2-2.9": {
> 'x-start-version': '2.2',
> 'x-end-version': '2.9',
> 'description': '',
> .
> }
> }
> }
>
> x-start-version and x-end-version are the metadata for Microversions, which
> the UI code should parse.
>
> This is just my initial thought; another option is generating a full set of
> swagger specs for each Microversion. But I think how we show Microversions
> and Actions should also depend on how the doc UI parses them.
>
> There is a doc project that turns swagger into a UI:
> https://github.com/russell/fairy-slipper  But it doesn't support
> Microversions, so I hope the doc team can work with us and help us find a
> format for Microversions and Actions that works well for both UI parsing
> and swagger generation.
>
> Any thoughts folks?

I can't find the URL to the example, but I thought the plan was that each
microversion generates a full doc tree.

It also notes the changes between the versions, so when you look at the
latest version you can tell between which versions the API was
modified.
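
Roughly, I imagine the generation step expanding the annotated spec into
one full spec per microversion, along the lines of this sketch (not the
PoC code; the helper names below are mine):

  # Sketch only: pick the operation variants that apply to a single
  # microversion, so each version can render a complete doc tree.
  def _ver(v):
      # "2.10" -> (2, 10): compare numerically, not lexicographically
      return tuple(int(part) for part in v.split("."))

  def operations_for(path_item, version):
      ops = {}
      for key, op in path_item.items():
          # keys look like "get" or "x-get-2.2-2.9" in the PoC example
          method = key.split("-")[1] if key.startswith("x-") else key
          start = _ver(op["x-start-version"])
          end = _ver(op.get("x-end-version", version))
          if start <= _ver(version) <= end:
              ops[method] = op
      return ops

Run that for every supported microversion and you get the
full-doc-tree-per-version layout; diffing adjacent versions then gives
the "what changed between 2.x and 2.y" notes.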

I remember annegentle had a great example of this style; I will try to
ping her about that next week.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-06 Thread Kashyap Chamarthy
On Fri, Nov 06, 2015 at 05:54:59PM +0100, Markus Zoeller wrote:
> Hey folks,
> 
> below is the first report of bug stats I intend to post weekly.
> We discussed briefly during the Mitaka summit that this report
> could be useful to keep attention on the open bugs at a certain
> level. Let me know if you think it's missing something.

Nice.  Thanks for this super useful report (especially the queries)!

For cadence, I feel a week flies by too quickly, which is likely to
cause people to train their muscle memory to mark these emails as read.
Maybe bi-weekly?
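
(For anyone who wants to recompute these numbers locally rather than
follow the bit.ly links, something along these lines with launchpadlib
should do it. Untested sketch, and the subteam tag name is a guess.)

  # Untested sketch: count new nova bugs, overall and for one subteam tag.
  # "libvirt" is an assumption about how the subteam queries are tagged.
  from launchpadlib.launchpad import Launchpad

  lp = Launchpad.login_anonymously("nova-bug-stats", "production")
  nova = lp.projects["nova"]

  new_bugs = nova.searchTasks(status=["New"])
  new_libvirt = nova.searchTasks(status=["New"], tags=["libvirt"])

  print("new bugs:", sum(1 for _ in new_bugs))
  print("new libvirt bugs:", sum(1 for _ in new_libvirt))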

> 
> New bugs which are *not* assigned to any subteam
> 
> count: 19
> query: http://bit.ly/1WF68Iu
> 
> 
> New bugs which are *not* triaged
> 
> subteam: libvirt 
> count: 14 
> query: http://bit.ly/1Hx3RrL
> subteam: volumes 
> count: 11
> query: http://bit.ly/1NU2DM0
> subteam: network : 
> count: 4
> query: http://bit.ly/1LVAQdq
> subteam: db : 
> count: 4
> query: http://bit.ly/1LVATWG
> subteam: 
> count: 67
> query: http://bit.ly/1RBVZLn
> 
> 
> High prio bugs which are *not* in progress
> --
> count: 39
> query: http://bit.ly/1MCKoHA
> 
> 
> Critical bugs which are *not* in progress
> -
> count: 0
> query: http://bit.ly/1kfntfk
> 
> 
> Readings
> 
> * https://wiki.openstack.org/wiki/BugTriage
> * https://wiki.openstack.org/wiki/Nova/BugTriage
> * 
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > Worth mentioning that OpenStack releases that come out at the same time
> > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > are supported for 5 years by Canonical so are already kind of an LTS.
> > Support in this context means patches, updates and commercial support
> > (for a fee).
> > For paying customers 3 years of patches, updates and commercial support
> > for April releases, (Kilo, O, Q etc..) is also available.
> 
> Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> maintaining an older release for so long is a good use of people or CI
> resources, especially given how hard it can be for us to keep even
> recent stable releases working and maintained.
> 

The argument in the original post, I think, is that we should not
stand in the way of the vendors continuing to collaborate on stable
maintenance in the upstream context after the EOL date. We already have
distro vendors doing work in the stable branches, but at EOL we push
them off to their respective distro-specific homes.

As much as I'd like everyone to get on the CD train, I think it might
make sense to enable the vendors to not diverge, but instead let them
show up with people and commitment and say "Hey we're going to keep
Juno/Mitaka/etc alive!".

So perhaps what would make sense is defining a process by which they can
make that happen.

Note that it's not just backporters though. It's infra resources too.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Erik McCormick's message of 2015-11-06 09:36:44 -0800:
> On Fri, Nov 6, 2015 at 12:28 PM, Mark Baker  wrote:
> > Worth mentioning that OpenStack releases that come out at the same time as
> > Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> > supported for 5 years by Canonical so are already kind of an LTS. Support in
> > this context means patches, updates and commercial support (for a fee).
> > For paying customers 3 years of patches, updates and commercial support for
> > April releases, (Kilo, O, Q etc..) is also available.
> >
> 
> Does that mean that you are actually backporting and gate testing
> patches downstream that aren't being done upstream? I somehow doubt
> it, but if so, then it would be great if you could lead some sort of
> initiative to push those patches back upstream.
> 

If Canonical and Ubuntu still work the way they worked when I was
involved, then yes and no. The initial patches still happen upstream,
in trunk. But the difference is the backporting can't happen upstream in
stable branches after EOL, because those branches are shut down. That
seems a shame, as the community at large would likely be better served
if the vendors can continue to land their stable patches for as long as
they're working on them.

That said, I think it would take a bit of a shift in participation to get
the needed resources in the right place (like infra) to make that happen.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

