Re: [openstack-dev] [openstack-ansible] network question and documentation

2016-02-19 Thread Fabrice Grelaud
On 19/02/2016 00:31, Ian Cordasco wrote:
>  
>
> -----Original Message-----
> From: Fabrice Grelaud 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: February 17, 2016 at 09:02:49
> To: openstack-dev@lists.openstack.org 
> Subject:  [openstack-dev] [openstack-ansible] network question and 
> documentation
>
>> Hi,
>>
>> After a first test OpenStack architecture (Juno, then upgraded to Kilo)
>> installed from scratch, and because we use Ansible in our organization, we
>> decided to deploy our next-generation OpenStack architecture from the
>> openstack-ansible project.
>>
>> I studied your documentation (very good work, much appreciated:
>> http://docs.openstack.org/developer/openstack-ansible/[kilo|liberty]/install-guide/index.html)
>> and I need some more clarification about the network architecture.
>>
>> I'm not sure this is the right mailing list, since it's dev-oriented here,
>> but I fear my request would get lost in the general OpenStack list, because
>> it is very specific to the architecture proposed by your project (bond0
>> (br-mgmt, br-storage), bond1 (br-vxlan, br-vlan)).
>>
>> I'm sorry if that is the case...
>>
>> So, I would like to know if I'm going in the right direction.
>> We want to use both existing VLANs from our physical architecture inside
>> OpenStack (provider VLANs) and "private tenant networks" offering floating
>> IPs (from a flat network).
>>  
>> My question is about the switch configuration:
>>
>> On bond0:
>> the switch port connected to bond0 needs to be configured as a trunk with:
>> - the host management network (untagged VLAN, but can it be tagged?)
>> - the container (mgmt) network (vlan-container)
>> - the storage network (vlan-storage)
>>
>> On bond1:
>> the switch port connected to bond1 needs to be configured as a trunk with:
>> - the VXLAN network (vlan-vxlan)
>> - VLAN X (an existing VLAN in our network infra)
>> - VLAN Y (an existing VLAN in our network infra)
>>
>> Is that right?
>>  
>> And do I have to define a new network (a new VLAN, flat network) that offers
>> floating IPs for private tenants (not using existing VLAN X or Y)? Does that
>> new VLAN have to be connected to bond1 and/or bond0?
>> Could the host management network play this role?
>>
>> Thank you for considering my request.
>> Regards
>>  
>> PS: otherwise, about the documentation, for better understanding and perhaps
>> consistency: on GitHub (https://github.com/openstack/openstack-ansible), in
>> the file openstack_interface.cfg.example, you point out that for br-vxlan
>> and br-storage, "only compute nodes have an IP on this bridge. When used by
>> infra nodes, IPs exist in the containers and inet should be set to manual".
>>
>> I think it would be good (but I may be wrong ;-) ) if, in chapter 3 of the
>> install guide, "Configuring the network on target hosts", you proposed the
>> /etc/network/interfaces for both the controller node (br-vxlan, br-storage:
>> manual, without IP) and the compute node (br-vxlan, br-storage: static,
>> with IP).
> Hi Fabrice,
>
> Has anyone responded to your questions yet?
>
> --  
> Ian Cordasco
>
>
Hi Ian,

Alas! Not at the moment...

Thanks,

-- 
Fabrice Grelaud
Université de Bordeaux


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-19 Thread John Garbutt
On 17 February 2016 at 17:52, Clint Byrum  wrote:
> Excerpts from Cheng, Yingxin's message of 2016-02-14 21:21:28 -0800:
>> Hi,
>>
>> I've uploaded a prototype https://review.openstack.org/#/c/280047/ to 
>> testify its design goals in accuracy, performance, reliability and 
>> compatibility improvements. It will also be an Austin Summit Session if 
>> elected: 
>> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7316

Long term, I see a world where there are multiple schedulers Nova is
able to use, depending on the deployment scenario.

We have tried to stop any more schedulers going in-tree (like the
solver scheduler) while we get the interface between the
nova-scheduler and the rest of Nova straightened out, to make that
much easier.

So a big question for me is, does the new scheduler interface work if
you look at slotting in your prototype scheduler?

Specifically I am thinking about this interface:
https://github.com/openstack/nova/blob/master/nova/scheduler/client/__init__.py

>> I want to gather opinions about this idea:
>> 1. Is it possible for this feature to be accepted in the Newton release?
>> 2. Suggestions to improve its design and compatibility.
>> 3. Possibilities to integrate with resource-provider bp series: I know 
>> resource-provider is the major direction of Nova scheduler, and there will 
>> be fundamental changes in the future, especially according to the bp 
>> https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst.
>>  However, this prototype proposes a much faster and compatible way to make 
>> scheduling decisions based on scheduler caches. The in-memory decisions are 
>> made at the same speed as the caching scheduler, but the caches are kept 
>> consistent with the compute nodes as quickly as possible without db refreshing.
>>
>> Here is the detailed design of the mentioned prototype:
>>
>> Background:
>> The host state cache maintained by host manager is the scheduler resource 
>> view during schedule decision making. It is updated whenever a request is 
>> received[1], and all the compute node records are retrieved from db every 
>> time. There are several problems in this update model, proven in 
>> experiments[3]:
>> 1. Performance: The scheduler performance is largely affected by db access 
>> in retrieving compute node records. The db block time of a single request is 
>> 355ms on average in a deployment of 3 compute nodes, compared with only 
>> 3ms for in-memory decision-making. Imagine there could be at most 1k nodes, 
>> even 10k nodes in the future.
>> 2. Race conditions: This is not only a parallel-scheduler problem, but also 
>> a problem when using only one scheduler. The detailed analysis of the 
>> one-scheduler problem is located in the bug analysis[2]. In short, there is a 
>> gap between the scheduler making a decision in the host state cache and the
>> compute node updating its in-db resource record according to that decision in 
>> the resource tracker. A recent scheduler resource consumption in the cache can be 
>> lost and overwritten by compute node data because of it, resulting in cache 
>> inconsistency and unexpected retries. In a one-scheduler experiment using a 
>> 3-node deployment, there were 7 retries out of 31 concurrent schedule 
>> requests recorded, resulting in 22.6% extra performance overhead.
>> 3. Parallel scheduler support: The design of the filter scheduler leads to an 
>> "even worse" performance result when using parallel schedulers. In the same 
>> experiment with 4 schedulers on separate machines, the average db block time 
>> increases to 697ms per request and there are 16 retries out of 31 
>> schedule requests, namely 51.6% extra overhead.
>
> This mostly agrees with recent tests I've been doing simulating 1000
> compute nodes with the fake virt driver.

Overall this agrees with what I saw in production before moving us to
the caching scheduler driver.

I would love a nova functional test that covers that scenario. It will help
us compare these different schedulers and find their strengths and
weaknesses.

> My retry rate is much lower,
> because there's less window for race conditions since there is no latency
> for the time between nova-compute getting the message that the VM is
> scheduled to it, and responding with a host update. Note that your
> database latency numbers seem much higher, we see about 200ms, and I
> wonder if you are running in a very resource constrained database
> instance.

Just to double check, you are using pymysql rather than MySQL-python
as the sqlalchemy backend?

If you use a driver that doesn't work well with eventlet, things can
get very bad, very quickly. Particularly because of the way the
scheduling works around handing back the results of the DB call. You
can get some benefits by shrinking the db and greenlet pools to reduce
the concurrency.
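
For reference, the difference is roughly the following in nova.conf (values
are only illustrative, tune the pool size for your load):

    [database]
    # pure-python driver that cooperates with eventlet
    connection = mysql+pymysql://nova:secret@127.0.0.1/nova?charset=utf8
    max_pool_size = 10

versus a plain mysql:// URL, which picks the C-based MySQL-python driver and
blocks the whole process (all greenlets) for the duration of every query.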

>> Improvements:
>> This prototype solves the issues mentioned above by implementing a new 

Re: [openstack-dev] [Fuel] Wildcards instead of

2016-02-19 Thread Kyrylo Galanov
Hi,

So who is voting for the patch to be abandoned?

By the way, there is already a task run via the wildcard:
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
However, in this case it might work with plugins.
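
For context, the wildcard selector there looks roughly like this (abridged,
and the exact keys may differ slightly from the linked file):

    - id: fuel_pkgs
      type: puppet
      role: '*'    # matches every node, including plugin-defined roles

whereas most tasks today spell the groups out explicitly, e.g.
[primary-controller, controller, compute, cinder, ceph-osd].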

Best regards,
Kyrylo

On Fri, Feb 19, 2016 at 1:09 AM, Igor Kalnitsky 
wrote:

> Hey Kyrylo,
>
> As it was mentioned in the review: you're about to break roles defined
> by plugins. That's not a good move, I believe.
>
> Regarding 'exclude' directive, I have no idea what you're talking
> about. We don't support it now, and, anyway, there should be no
> difference between roles defined by plugins and core roles.
>
> - Igor
>
> On Thu, Feb 18, 2016 at 12:53 PM, Kyrylo Galanov 
> wrote:
> > Hello,
> >
> > We are about to switch to wildcards instead of listing all groups in
> tasks
> > explicitly [0].
> > This change should make the deployment process more obvious for developers.
> > However, it might lead to confusion when new groups are added either by
> > plugin or fuel team in future.
> >
> > As mentioned by Bogdan, it is possible to use the 'exclude' directive to
> mitigate
> > the risk.
> > Any thoughts on the topic are appreciated.
> >
> >
> > [0] https://review.openstack.org/#/c/273596/
> >
> > Best regards,
> > Kyrylo
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][oslo][all] oslo.service 0.9.1 release (liberty)

2016-02-19 Thread Victor Stinner

Hi,

Oh, I noticed the bug 
https://bugs.launchpad.net/nova/+bug/1538204 (nova failed to stop) myself on 
the Grenade tests of one of my Cinder changes. Grenade started on Liberty 
with oslo.service 1.1.0, a version which didn't have my fixes for signal 
handling.


Good news: since upper-constraints.txt of Liberty was updated to use 
oslo.service 0.9.1, the random bug should now be gone (with my fixes for 
signal handling).


I hope that it will help to make Grenade tests more reliable!

Please keep me posted if you still see the failure, especially this 
message in the logs:

"AssertionError: Cannot switch to MAINLOOP from MAINLOOP"

Victor

On 18/02/2016 11:05, Victor Stinner wrote:

Hi,


On 17/02/2016 19:29, no-re...@openstack.org wrote:
 > We are chuffed to announce the release of:
 >
 > oslo.service 0.9.1: oslo.service library
 > (...)
 >
 > Changes in oslo.service 0.9.0..0.9.1
 > 
 >
 > 8b6e2f6 Fix race condition on handling signals
 > eb1a4aa Fix a race condition in signal handlers

This release contains two major changes to fix race conditions in signal
handling. Related bugs:

"Race condition in SIGTERM signal handler"
https://bugs.launchpad.net/oslo.service/+bug/1524907
=> "AssertionError: Cannot switch to MAINLOOP from MAINLOOP" error

"Failed to stop nova-api in grenade tests"
https://bugs.launchpad.net/nova/+bug/1538204
=> "oslo_service.threadgroup RuntimeError: dictionary changed size
during iteration"

oslo.service 0.9.1 is now in upper-constraints.txt and so will be
deployed on Liberty CIs:
https://review.openstack.org/#/c/280934/
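
Concretely, that means the Liberty upper-constraints.txt now carries a pin
along the lines of:

    oslo.service===0.9.1

so any job that honours the constraints file will pick up the fixed release.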

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wildcards instead of

2016-02-19 Thread Bulat Gaifullin
+1 to using wildcards for common tasks like netconfig and setting up repositories. 
These tasks should run on all nodes, and it does not matter whether the node has a role 
from a plugin or a core role.
In my opinion we should have one approach for the basic configuration of a node.

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 19 Feb 2016, at 13:36, Kyrylo Galanov  wrote:
> 
> Hi,
> 
> So who is voting for the patch to be abandoned?
> 
> By the way, there is already a task running by the wildcard: 
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
>  
> 
> However, in this case it might work with plugins.
> 
> Best regards,
> Kyrylo
> 
> On Fri, Feb 19, 2016 at 1:09 AM, Igor Kalnitsky  > wrote:
> Hey Kyrylo,
> 
> As it was mentioned in the review: you're about to break roles defined
> by plugins. That's not a good move, I believe.
> 
> Regarding 'exclude' directive, I have no idea what you're talking
> about. We don't support it now, and, anyway, there should be no
> difference between roles defined by plugins and core roles.
> 
> - Igor
> 
> On Thu, Feb 18, 2016 at 12:53 PM, Kyrylo Galanov  > wrote:
> > Hello,
> >
> > We are about to switch to wildcards instead of listing all groups in tasks
> > explicitly [0].
> > This change should make the deployment process more obvious for developers.
> > However, it might lead to confusion when new groups are added either by
> > plugin or fuel team in future.
> >
> > As mentioned by Bogdan, it is possible to use the 'exclude' directive to mitigate
> > the risk.
> > Any thoughts on the topic are appreciated.
> >
> >
> > [0] https://review.openstack.org/#/c/273596/ 
> > 
> >
> > Best regards,
> > Kyrylo
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> > 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> > 
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance]Glance v2 api support in Nova

2016-02-19 Thread Sean Dague
On 02/15/2016 06:00 PM, Flavio Percoco wrote:
> On 12/02/16 18:24 +0300, Mikhail Fedosin wrote:
>> Hello!
>>
>> In late December I wrote several messages about glance v2 support in
>> Nova and
>> Nova's xen plugin. Many things have been done after that and now I'm
>> happy to
>> announce that there we have a set of commits that makes Nova fully v2
>> compatible (xen plugin works too)!
>>
>> Here's the link to the top commit
>> https://review.openstack.org/#/c/259097/
>> Here's the link to approved spec for Mitaka https://github.com/openstack/
>> nova-specs/blob/master/specs/mitaka/approved/use-glance-v2-api.rst
>>
>> I think it'll be a big step for OpenStack, because API v2 is much more
>> stable
>> and RESTful than v1.  We would very much like to deprecate v1 at some
>> point. v2
>> has been 'Current' since Juno, and since then we've had a lot of
>> attempts to
>> adopt it in Nova, and every time it was postponed to the next release cycle.
>>
>> Unfortunately, it may not happen this time - this work was marked as
>> 'non-priority' when the related patches had been done. I think it's a big
>> omission, because this work is essential for all of OpenStack, and it
>> will be a
>> shame if we aren't able to land it in Mitaka.
>> As far as I know, Feature Freeze will be announced on March 3rd, and
>> we still
>> have enough time and people to test it before then. All patches are split
>> into small
>> commits (100 LOC max), so they should be relatively easy to review.
>>
>> I wonder if the Nova community members might change their decision and
>> unblock these
>> patches? Thanks in advance!
> 
> A couple of weeks ago, I had a chat with Sean Dague and John Garbutt and we
> agreed that it was probably better to wait until Newton. After that
> chat, we
> held a Glance virtual mid-cycle where Mikhail mentioned that he would
> rather
> sprint on getting Nova on v2 than waiting for Newton. The terms and code
> Mikhail
> worked on align with what has been discussed throughout the cycle in
> numerous
> chats, patch sets, etc.
> 
> After all the effort that has been put on this (including getting a py24
> environment ready to test the xenplugin) it'd be a real shame to have
> this work
> pushed to Newton. The Glance team *needs* to be able to deprecate v1 and
> the
> team has been working on this ever since Kilo, when this effort of
> moving Nova
> to v2 started.
> 
> I believe it has to be an OpenStack priority to make this happen or, at
> the very
> least, a cross-project effort that involves all services relying on
> Glance. Nova
> is the last service in the list, AFAICT, and the Glance team has been very
> active on this front. This is not to imply the Nova team hasn't helped; in
> fact,
> there's been lots of support/feedback from the nova team during Mitaka.
> It is
> because of that that I believe we should grant these patches an exception
> and let
> them in.
> 
> Part of the feedback the Nova team has provided is that some of that
> code that
> has been proposed should live in glanceclient. The Glance team is ready
> to react
> and merge that code, release glanceclient, and get Nova on v2.

Right, I think this was the crux of the problem. It took a while to get
consensus on that point, and now we're deep into the priority part of
the Nova cycle, and the runway is gone. I'm happy to help review early
during the Newton cycle.

I also think as prep work for that we should probably get either glance
folks or citrix folks to enhance the testing around the xenserver /
glance paths in Nova. That will make reviews go faster in Newton because
we can be a lot more sure that patches aren't breaking anything.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Dmitry Tantsur

Hi all!

Initially we didn't plan on having stable branches for IPA at all. Our 
gate uses the prebuilt image generated from the master branch even 
on Ironic/Inspector stable branches. The branch in question was added at 
the request of RDO folks, and today I got a request from trown to remove it:


 dtantsur: btw, what do you think the chances are that IPA gets 
rid of stable branch?
 I'm +1 on that, because currently only tripleo is using this 
stable branch, our own gates are using tarball from master

 s/tarball/prebuilt image/
 cool, from RDO perspective, I would prefer to have master 
package in our liberty delorean server, but I cant do that (without 
major hacks) if there is a stable/liberty branch

 LIO support being the main reason
 fwiw, I have tested master IPA on liberty and it works great

So I suggest we drop stable branches from IPA. This won't affect the 
Ironic gate in any regard, as we don't use stable IPA there anyway, as I 
mentioned before. As we already do now, we'll keep IPA compatible with 
all supported Ironic and Inspector versions.


Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Lucas Alvares Gomes
Hi,

By removing stable branches, do you mean stable branches for Mitaka and
newer releases, or does that include stable/liberty, which already exists as
well?

I think the latter is more complicated. I don't think we should drop
stable/liberty like that, because other people (apart from TripleO) may
also depend on it. I mean, it wouldn't be very "stable" if stable
branches were deleted before the end of their support phase.

But that said, I'm +1 to not having stable branches for newer releases.

Cheers,
Lucas

On Fri, Feb 19, 2016 at 12:17 PM, Dmitry Tantsur  wrote:
> Hi all!
>
> Initially we didn't plan on having stable branches for IPA at all. Our gate
> is using the prebuilt image generated from the master branch even on
> Ironic/Inspector stable branches. The branch in question was added by
> request of RDO folks, and today I got a request from trown to remove it:
>
>  dtantsur: btw, what do you think the chances are that IPA gets rid
> of stable branch?
>  I'm +1 on that, because currently only tripleo is using this
> stable branch, our own gates are using tarball from master
>  s/tarball/prebuilt image/
>  cool, from RDO perspective, I would prefer to have master package in
> our liberty delorean server, but I cant do that (without major hacks) if
> there is a stable/liberty branch
>  LIO support being the main reason
>  fwiw, I have tested master IPA on liberty and it works great
>
> So I suggest we drop stable branches from IPA. This won't affect the Ironic
> gate in any regard, as we don't use stable IPA there anyway, as I mentioned
> before. As we do know already, we'll keep IPA compatible with all supported
> Ironic and Inspector versions.
>
> Opinions?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-19 Thread Jay Pipes

On 02/18/2016 10:29 PM, joehuang wrote:

There is a difference between "An end user is able to import an image from another 
Glance in another OpenStack cloud while sharing the same identity management (Keystone)"


This is an invalid use case, IMO. What's wrong with exporting the image 
from one OpenStack cloud and importing it to another? What does a shared 
identity management service have to do with anything?


> and other use cases. The difference is that the image import needs to reuse 
the token in the source Glance; the other ones don't need this.


Again, this use case is not valid, IMO.

I don't care to cater to these kinds of use cases.

Best,
-jay


Best Regards
Chaoyi Huang ( Joe Huang )


-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Friday, February 19, 2016 10:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

On 02/18/2016 08:34 PM, joehuang wrote:

Hello,

Glad to know that the "Image Import Refactor" is the essential BP in
Mitaka. One more use case from OPNFV follows:

In OPNFV, one deployment scenario is that each data center will be deployed
with an independent OpenStack instance, but shared identity management.


That's not independent OpenStack instances.


That means there will be one Glance with its own backend in each datacenter.
This is so that each datacenter can work standalone as much as
possible, even if the others crash.


If the identity management is shared, it's not standalone.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade - Nova metadata failure

2016-02-19 Thread Sean Dague
On 02/18/2016 09:50 PM, Armando M. wrote:
> 
> 
> On 18 February 2016 at 08:41, Sean M. Collins  > wrote:
> 
> This week's update:
> 
> Armando was kind enough to take a look[1], since he's got a fresh
> perspective. I think I've been suffering from Target Fixation[1]
> where I failed to notice a couple other failures in the logs.
> 
> 
> It's been fun, and I am glad I was able to help. Once I validated the
> root cause of the metadata failure [1], I got a run [2] and a clean pass
> in [3] :)
> 
> There are still a few things to iron out, ie. choosing metadata over
> config-drive, testing both in the gate etc. But that's for another day.
> 
> Cheers,
> Armando
> 
> [1] https://bugs.launchpad.net/nova/+bug/1545101/comments/4
> [2] 
> http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/
> [3] 
> http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/logs/testr_results.html.gz

I want to thank everyone that's been working on this issue profusely.
This exposed a release critical bug in Nova that we would not have
caught otherwise. Finding that before milestone 3 is a huge win and
gives us a lot more options in fixing it correctly.

I think we've got the proper fix now -
https://review.openstack.org/#/c/279721/ (fingers crossed). The metadata
server is one of the least tested components we've got on the Nova side,
so I'll be looking at ways to fix that problem and hopefully avoid
situations like this again.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread John Trowbridge


On 02/19/2016 07:29 AM, Lucas Alvares Gomes wrote:
> Hi,
> 
> By removing stable branches you mean stable branches for mitaka and
> newer releases or that includes stable/liberty which already exist as
> well?
> 
> I think the latter is more complicated, I don't think we should drop
> stable/liberty like that because other people (apart from TripleO) may
> also depend on that. I mean, it wouldn't be very "stable" if stable
> branches were deleted before their supported phases.
>
I would argue it is also not very stable if there is no testing against
it :).

For the RDO use case in particular, it is about having LIO support in
liberty, so that it is feature complete with the bash ramdisk. Then the
bash ramdisk can return to the bit bucket.

The tricky bit is that RDO does not include patches in our packages
built from trunk (trunk.rdoproject.org), and for liberty we first check
if stable/liberty exists, then fall back to master if it does not. So the
presence of a stable/liberty branch that is not actually the recommended way to
build IPA for liberty is not ideal for us.

All of that said, I totally understand not wanting to delete a branch.
Especially since I think I am the one who Dmitry is referring to asking
for it. (Though I think what I wanted was releases, which is subtly
different.)

I think there are some hacks I could make in our trunk builder if I at
least have a ML post like this as justification. I am not 100% sure that
is possible though.

> But that said, I'm +1 to not have stable branches for newer releases.
> 
> Cheers,
> Lucas
> 
> On Fri, Feb 19, 2016 at 12:17 PM, Dmitry Tantsur  wrote:
>> Hi all!
>>
>> Initially we didn't plan on having stable branches for IPA at all. Our gate
>> is using the prebuilt image generated from the master branch even on
>> Ironic/Inspector stable branches. The branch in question was added by
>> request of RDO folks, and today I got a request from trown to remove it:
>>
>>  dtantsur: btw, what do you think the chances are that IPA gets rid
>> of stable branch?
>>  I'm +1 on that, because currently only tripleo is using this
>> stable branch, our own gates are using tarball from master
>>  s/tarball/prebuilt image/
>>  cool, from RDO perspective, I would prefer to have master package in
>> our liberty delorean server, but I cant do that (without major hacks) if
>> there is a stable/liberty branch
>>  LIO support being the main reason
>>  fwiw, I have tested master IPA on liberty and it works great
>>
>> So I suggest we drop stable branches from IPA. This won't affect the Ironic
>> gate in any regard, as we don't use stable IPA there anyway, as I mentioned
>> before. As we do know already, we'll keep IPA compatible with all supported
>> Ironic and Inspector versions.
>>
>> Opinions?
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Dmitry Tantsur

On 02/19/2016 01:29 PM, Lucas Alvares Gomes wrote:

Hi,

By removing stable branches you mean stable branches for mitaka and
newer releases or that includes stable/liberty which already exist as
well?

I think the latter is more complicated, I don't think we should drop
stable/liberty like that because other people (apart from TripleO) may
also depend on that. I mean, it wouldn't be very "stable" if stable
branches were deleted before their supported phases.


Yeah, this is a valid concern. Maybe we should recommend that RDO somehow 
ignore stable/liberty, and then no longer have stable branches.




But that said, I'm +1 to not have stable branches for newer releases.

Cheers,
Lucas

On Fri, Feb 19, 2016 at 12:17 PM, Dmitry Tantsur  wrote:

Hi all!

Initially we didn't plan on having stable branches for IPA at all. Our gate
is using the prebuilt image generated from the master branch even on
Ironic/Inspector stable branches. The branch in question was added by
request of RDO folks, and today I got a request from trown to remove it:

 dtantsur: btw, what do you think the chances are that IPA gets rid
of stable branch?
 I'm +1 on that, because currently only tripleo is using this
stable branch, our own gates are using tarball from master
 s/tarball/prebuilt image/
 cool, from RDO perspective, I would prefer to have master package in
our liberty delorean server, but I cant do that (without major hacks) if
there is a stable/liberty branch
 LIO support being the main reason
 fwiw, I have tested master IPA on liberty and it works great

So I suggest we drop stable branches from IPA. This won't affect the Ironic
gate in any regard, as we don't use stable IPA there anyway, as I mentioned
before. As we do know already, we'll keep IPA compatible with all supported
Ironic and Inspector versions.

Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] network question and documentation

2016-02-19 Thread Major Hayden
On 02/17/2016 09:00 AM, Fabrice Grelaud wrote:
> So, I would like to know if I'm going in the right direction.
> We want to use both existing VLANs from our physical architecture 
> inside OpenStack (provider VLANs) and "private tenant networks" offering 
> floating IPs (from a flat network).
> 
> My question is about switch configuration:
> 
> On bond0:
> the switch port connected to bond0 needs to be configured as a trunk with:
> - the host management network (untagged VLAN, but can it be tagged?)
> - the container (mgmt) network (vlan-container)
> - the storage network (vlan-storage)
> 
> On bond1:
> the switch port connected to bond1 needs to be configured as a trunk with:
> - the VXLAN network (vlan-vxlan)
> - VLAN X (an existing VLAN in our network infra)
> - VLAN Y (an existing VLAN in our network infra)
> 
> Is that right?

You have a good plan here, Fabrice.  Although I don't have bonding configured 
in my own production environment, I'm doing much the same as you are with 
individual network interfaces.
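
For what it's worth, on a Cisco-style switch the bond0 trunk would look roughly
like this (VLAN IDs are placeholders, adjust for your environment):

    interface port-channel1
      switchport mode trunk
      switchport trunk native vlan 10
      switchport trunk allowed vlan 10,20,30

with vlan 10 as the untagged host management network and 20/30 as the container
and storage VLANs; bond1 gets the same treatment with the vxlan VLAN plus your
existing VLANs X and Y.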

> And do I have to define a new network (a new VLAN, flat network) that offers 
> floating IPs for private tenants (not using existing VLAN X or Y)? Does that new 
> VLAN have to be connected to bond1 and/or bond0?
> Could the host management network play this role?

You *could* use the host management network as your floating IP pool network, 
but you'd need to create a flat network in OpenStack for that (unless your host 
management network is tagged).  I prefer to use a specific VLAN for those 
public-facing, floating IP addresses.  You'll need routers between your 
internal networks and that floating IP VLAN to make the floating IP addresses 
work (if I remember correctly).
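
As a rough example of the kind of external network I mean (names, VLAN ID and
addresses are made up, and the physical network has to exist in your plugin
configuration):

    neutron net-create public-float --router:external \
      --provider:network_type vlan \
      --provider:physical_network physnet-float \
      --provider:segmentation_id 400
    neutron subnet-create public-float 203.0.113.0/24 --name public-float-subnet \
      --disable-dhcp --allocation-pool start=203.0.113.10,end=203.0.113.200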

> PS: otherwise, about the documentation, for better understanding and perhaps 
> consistency:
> on GitHub (https://github.com/openstack/openstack-ansible), in the file 
> openstack_interface.cfg.example, you point out that for br-vxlan and 
> br-storage, "only compute nodes have an IP on this bridge. When used by infra 
> nodes, IPs exist in the containers and inet should be set to manual".
> 
> I think it would be good (but I may be wrong ;-) ) if, in chapter 3 of the 
> install guide, "Configuring the network on target hosts", you proposed the 
> /etc/network/interfaces for both the controller node (br-vxlan, br-storage: 
> manual, without IP) and the compute node (br-vxlan, br-storage: static, with IP).

That makes sense.  Would you be able to open a bug for us?  I'll be glad to 
help you write some documentation if you're interested in learning that process.
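
For the record, the difference boils down to something like this (device names,
VLAN tag and addresses below are just examples):

    # infra/controller node: containers own the IPs, the host bridge stays unnumbered
    iface br-vxlan inet manual
        bridge_ports bond1.30

    # compute node: the host itself needs an address on the bridge
    iface br-vxlan inet static
        bridge_ports bond1.30
        address 172.29.240.20
        netmask 255.255.252.0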

Our bug tracker is here in LaunchPad:

  https://bugs.launchpad.net/openstack-ansible

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wildcards instead of

2016-02-19 Thread Igor Kalnitsky
Kyrylo G. wrote:
> So who is voting for the patch to be abandoned?

I vote to abandon it. Let's do not break existing plugins, and do not
add *undo* tasks for plugin developers. If they want to configure
network, they'll ask it explicitly.


Kyrylo G. wrote:
> By the way, there is already a task running by the wildcard:
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4

Yes, exactly, but the thing is that our original task for setting up
repos was executed on all nodes before, including ones provided by
plugins. Making it execute on core nodes only may break plugins that
rely on it. So generally, it's about backward compatibility.


Bulat G. wrote:
> This tasks should run on all nodes and it does not matter, the node
> has role from plugin or core-role.

Nope, they shouldn't. Why do I need to install the following packages

  'screen',
  'tmux',
  'htop',
  'tcpdump',
  'strace',
  'fuel-misc',
  'man-db',
  'fuel-misc',
  'fuel-ha'

if I have no plans to use them? As a deployment engineer, I'd prefer to
keep my role as clean as possible, and decide what to install in my
own way.


On Fri, Feb 19, 2016 at 1:06 PM, Bulat Gaifullin
 wrote:
> +1 to use wildcards for common tasks like netconfig and setup repositories.
> This tasks should run on all nodes and it does not matter, the node has role
> from plugin or core-role.
> In my opinion we should one approach for basic configuration of node.
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> On 19 Feb 2016, at 13:36, Kyrylo Galanov  wrote:
>
> Hi,
>
> So who is voting for the path to be abandoned?
>
> By the way, there is already a task running by the wildcard:
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
> However, it this case it might work with plugins.
>
> Best regards,
> Kyrylo
>
> On Fri, Feb 19, 2016 at 1:09 AM, Igor Kalnitsky 
> wrote:
>>
>> Hey Kyrylo,
>>
>> As it was mentioned in the review: you're about to break roles defined
>> by plugins. That's not good move, I believe.
>>
>> Regarding 'exclude' directive, I have no idea what you're talking
>> about. We don't support it now, and, anyway, there should be no
>> difference between roles defined by plugins and core roles.
>>
>> - Igor
>>
>> On Thu, Feb 18, 2016 at 12:53 PM, Kyrylo Galanov 
>> wrote:
>> > Hello,
>> >
>> > We are about to switch to wildcards instead of listing all groups in
>> > tasks
>> > explicitly [0].
>> > This change must make deployment process more obvious for developers.
>> > However, it might lead to confusion when new groups are added either by
>> > plugin or fuel team in future.
>> >
>> > As mention by Bogdan, it is possible to use 'exclude' directive to
>> > mitigate
>> > the risk.
>> > Any thoughts on the topic are appreciated.
>> >
>> >
>> > [0] https://review.openstack.org/#/c/273596/
>> >
>> > Best regards,
>> > Kyrylo
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Ptacek, MichalX
Hi all,

I have been playing for some time with puppet-keystone deployments,
and also reported one issue related to this:
https://bugs.launchpad.net/puppet-keystone/+bug/1547394
but in general my observation is that keystone_service uses v3 
credentials with openstack CLI commands that are not compatible

e.g.
Error: Failed to apply catalog: Execution of '/bin/openstack service list 
--quiet --format csv --long' returned 2: usage: openstack service list [-h] [-f 
{csv,table}] [-c COLUMN]
  [--max-width ]
  [--quote {all,minimal,none,nonnumeric}]
openstack service list: error: unrecognized arguments: --long


It can't be a bug, because the whole module would not work due to this :)
I think I'm missing something important somewhere ...

My latest manifest file is:


Exec { logoutput => 'on_failure' }
package { 'curl': ensure => present }

node keystone {

  class { '::mysql::server': }
  class { '::keystone::db::mysql':
    password => 'keystone',
  }

  class { '::keystone':
    verbose             => true,
    debug               => true,
    database_connection => 'mysql://keystone:keystone@127.0.0.1/keystone',
    catalog_type        => 'sql',
    admin_token         => 'admin_token',
  }

  class { '::keystone::roles::admin':
    email    => 'exam...@abc.com',
    password => 'ChangeMe',
  }

  class { '::keystone::endpoint':
    public_url => "http://${::fqdn}:5000/v2.0",
    admin_url  => "http://${::fqdn}:35357/v2.0",
  }
}

Env variables look as follows (before service list is called with --long):
{"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token", 
"OS_URL"=>"http://127.0.0.1:35357/v3"}
Debug: Executing: '/bin/openstack service list --quiet --format csv --long'

Thanks for any hint,
Michal
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Wildcards instead of

2016-02-19 Thread Bulat Gaifullin

> On 19 Feb 2016, at 17:09, Igor Kalnitsky  wrote:
> 
> Kyrylo G. wrote:
>> So who is voting for the patch to be abandoned?
> 
> I vote to abandon it. Let's do not break existing plugins, and do not
> add *undo* tasks for plugin developers. If they want to configure
> network, they'll ask it explicitly.
> 
> 
> Kyrylo G. wrote:
>> By the way, there is already a task running by the wildcard:
>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
> 
> Yes, exactly, but the thing is that our original task for setuping
> repos was executed on all nodes before, including ones provided by
> plugins. Making it executing on core nodes only may break plugins that
> rely on it. So generally, it's about backward compatibility.
> 
> 
> Bulat G. wrote:
>> This tasks should run on all nodes and it does not matter, the node
>> has role from plugin or core-role.
> 
> Nope, they shouldn't. Why do I need to install the following packages
> 
>  'screen',
>  'tmux',
>  'htop',
>  'tcpdump',
>  'strace',
>  'fuel-misc',
>  'man-db',
>  'fuel-misc',
>  'fuel-ha’
> 
Is it a big problem?

> if I have no plans to use them? As a deployer engineer, I'd prefer to
> keep my role as clear as possible, and decide what to install in my
> own way.

IMO: The plugin developer wants to install additional applications to extend 
functionality; they do not want to configure low-level things, like specifying some 
bunch of tasks to configure the network, configure repositories, etc.
How can we manage a new node if the network is not configured or fuel-agent is not 
installed?

> 
> 
> On Fri, Feb 19, 2016 at 1:06 PM, Bulat Gaifullin
>  wrote:
>> +1 to use wildcards for common tasks like netconfig and setup repositories.
>> This tasks should run on all nodes and it does not matter, the node has role
>> from plugin or core-role.
>> In my opinion we should one approach for basic configuration of node.
>> 
>> Regards,
>> Bulat Gaifullin
>> Mirantis Inc.
>> 
>> 
>> 
>> On 19 Feb 2016, at 13:36, Kyrylo Galanov  wrote:
>> 
>> Hi,
>> 
>> So who is voting for the path to be abandoned?
>> 
>> By the way, there is already a task running by the wildcard:
>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
>> However, it this case it might work with plugins.
>> 
>> Best regards,
>> Kyrylo
>> 
>> On Fri, Feb 19, 2016 at 1:09 AM, Igor Kalnitsky 
>> wrote:
>>> 
>>> Hey Kyrylo,
>>> 
>>> As it was mentioned in the review: you're about to break roles defined
>>> by plugins. That's not good move, I believe.
>>> 
>>> Regarding 'exclude' directive, I have no idea what you're talking
>>> about. We don't support it now, and, anyway, there should be no
>>> difference between roles defined by plugins and core roles.
>>> 
>>> - Igor
>>> 
>>> On Thu, Feb 18, 2016 at 12:53 PM, Kyrylo Galanov 
>>> wrote:
 Hello,
 
 We are about to switch to wildcards instead of listing all groups in
 tasks
 explicitly [0].
 This change must make deployment process more obvious for developers.
 However, it might lead to confusion when new groups are added either by
 plugin or fuel team in future.
 
 As mention by Bogdan, it is possible to use 'exclude' directive to
 mitigate
 the risk.
 Any thoughts on the topic are appreciated.
 
 
 [0] https://review.openstack.org/#/c/273596/
 
 Best regards,
 Kyrylo
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Matt Fischer
What version of openstack client do you have? What version of the module
are you using?
On Feb 19, 2016 7:20 AM, "Ptacek, MichalX"  wrote:

> Hi all,
>
>
>
> I was playing some time with puppet-keystone deployments,
>
> and also reported one issue related to this:
>
> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
>
> but in general my observations are that keystone_service is using v3
> credentials with openstack cli commands that are not compatible
>
>
>
> e.g.
>
> Error: Failed to apply catalog: Execution of '/bin/openstack service list
> --quiet --format csv --long' returned 2: usage: openstack service list [-h]
> [-f {csv,table}] [-c COLUMN]
>   [--max-width ]
>   [--quote {all,minimal,none,nonnumeric}]
> openstack service list: error: unrecognized arguments: --long
>
>
>
>
>
> It can’t be a bug, because the whole module will not work due to this :)
>
> I think I miss something important somewhere …
>
>
>
> My latest manifest file is :
>
>
>
> Exec { logoutput => 'on_failure' }
>
> package { 'curl': ensure => present }
>
>
>
> node keystone {
>
>
>
>   class { '::mysql::server': }
>
>   class { '::keystone::db::mysql':
>
> password => 'keystone',
>
>   }
>
>
>
>   class { '::keystone':
>
> verbose => true,
>
> debug   => true,
>
> database_connection => 'mysql://keystone:keystone@127.0.0.1/keystone',
>
> catalog_type=> 'sql',
>
> admin_token => 'admin_token',
>
>   }
>
>
>
>   class { '::keystone::roles::admin':
>
> email=> 'exam...@abc.com',
>
> password => 'ChangeMe',
>
>   }
>
>
>
>   class { '::keystone::endpoint':
>
> public_url => "http://${::fqdn}:5000/v2.0",
>
> admin_url  => "http://${::fqdn}:35357/v2.0",
>
>   }
>
> }
>
>
>
> Env variables looks as follows(before service list is called with --long)
>
> {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token", "OS_URL"=>"
> http://127.0.0.1:35357/v3"}
>
> Debug: Executing: '/bin/openstack service list --quiet --format csv --long'
>
>
>
> Thanks for any hint,
>
> Michal
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-19 Thread Andrew Laski


On Thu, Feb 18, 2016, at 05:34 PM, melanie witt wrote:
> On Feb 12, 2016, at 14:49, Jay Pipes  wrote:
> 
> > This would be my preference as well, even though it's technically a 
> > backwards-incompatible API change.
> > 
> > The idea behind get-me-a-network was specifically to remove the current 
> > required complexity of the nova boot command with regards to networking 
> > options and allow a return to the nova-net model where an admin could 
> > auto-create a bunch of unassigned networks and the first time a user booted 
> > an instance and did not specify any network configuration (the default, 
> > sane behaviour in nova-net), one of those unassigned networks would be 
> > grabbed for the troject, I mean prenant, sorry.
> > 
> > So yeah, the "opt-in to having no networking at all with a --no-networking 
> > or --no-nics option" would be my preference.
> 
> +1 to this, especially opting in to have no network at all. It seems most
> friendly to me to have the network allocation automatically happen if
> nothing special is specified.
> 
> This is something where it seems like we need a "reset" to a default
> behavior that is user-friendly. And microversions is the way we have to
> "fix" an undesirable current default behavior.

The question I would still like to see addressed is why do we need to
have a default behavior here? The get-me-a-network effort is motivated
by the current complexity of setting up a network for an instance
between Nova and Neutron and wants to get back to a simpler time of
being able to just boot an instance and get a network. But it still
isn't clear to me why requiring something like "--nic auto" wouldn't
work here, and eliminate the confusion of changing a default behavior.
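
To make the comparison concrete, neither flag exists today, so the syntax below
is purely illustrative of the two proposals:

    nova boot --image cirros --flavor m1.tiny vm1 --nic auto   # keep today's default, opt in explicitly
    nova boot --image cirros --flavor m1.tiny vm1 --no-nics    # allocate by default, opt out explicitly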



> 
> While I get that a backward-incompatible change may appear to "sneak in"
> for a user specifying a later microversion to get an unrelated feature,
> it seems reasonable to me that a user specifying a microversion would
> consult the documentation for the version delta to get a clear picture of
> what to expect once they specify the new version. This of course hinges
> on users knowing how microversions work and being familiar with
> consulting documentation when changing versions. I hope that is the case
> and I hope this change will come with a very clear and concise release
> note with a link to [1].

I do hope that users read the documentation and aren't caught unaware if
we make a change like this. But we can make it even easier for them to
know that something has changed if instead of changing default behavior
we make that behavior explicit.


> 
> -melanie
> 
> [1]
> http://docs.openstack.org/developer/nova/api_microversion_history.html
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Email had 1 attachment:
> + signature.asc
>   1k (application/pgp-signature)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Matthew Mosesohn
Hi Michal,

Just add --os-identity-api-version=3 to your command and it will work. The
provider uses the v3 openstackclient via the env var
OS_IDENTITY_API_VERSION=3. The default is still 2.
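
For example, to reproduce what the provider runs (the token and endpoint are
whatever your deployment uses):

    openstack --os-identity-api-version=3 --os-token admin_token \
      --os-url http://127.0.0.1:35357/v3 service list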

Best Regards,
Matthew Mosesohn

On Fri, Feb 19, 2016 at 5:25 PM, Matt Fischer  wrote:
> What version of openstack client do you have? What version of the module are
> you using?
>
> On Feb 19, 2016 7:20 AM, "Ptacek, MichalX"  wrote:
>>
>> Hi all,
>>
>>
>>
>> I was playing some time with puppet-keystone deployments,
>>
>> and also reported one issue related to this:
>>
>> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
>>
>> but in general my observations are that keystone_service is using v3
>> credentials with openstack cli commands that are not compatible
>>
>>
>>
>> e.g.
>>
>> Error: Failed to apply catalog: Execution of '/bin/openstack service list
>> --quiet --format csv --long' returned 2: usage: openstack service list [-h]
>> [-f {csv,table}] [-c COLUMN]
>>   [--max-width ]
>>   [--quote {all,minimal,none,nonnumeric}]
>> openstack service list: error: unrecognized arguments: --long
>>
>>
>>
>>
>>
>> It can’t be a bug, because the whole module will not work due to this :)
>>
>> I think I miss something important somewhere …
>>
>>
>>
>> My latest manifest file is :
>>
>>
>>
>> Exec { logoutput => 'on_failure' }
>>
>> package { 'curl': ensure => present }
>>
>>
>>
>> node keystone {
>>
>>
>>
>>   class { '::mysql::server': }
>>
>>   class { '::keystone::db::mysql':
>>
>> password => 'keystone',
>>
>>   }
>>
>>
>>
>>   class { '::keystone':
>>
>> verbose => true,
>>
>> debug   => true,
>>
>> database_connection => 'mysql://keystone:keystone@127.0.0.1/keystone',
>>
>> catalog_type=> 'sql',
>>
>> admin_token => 'admin_token',
>>
>>   }
>>
>>
>>
>>   class { '::keystone::roles::admin':
>>
>> email=> 'exam...@abc.com',
>>
>> password => 'ChangeMe',
>>
>>   }
>>
>>
>>
>>   class { '::keystone::endpoint':
>>
>> public_url => "http://${::fqdn}:5000/v2.0",
>>
>> admin_url  => "http://${::fqdn}:35357/v2.0",
>>
>>   }
>>
>> }
>>
>>
>>
>> Env variables looks as follows(before service list is called with --long)
>>
>> {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token",
>> "OS_URL"=>"http://127.0.0.1:35357/v3"}
>>
>> Debug: Executing: '/bin/openstack service list --quiet --format csv
>> --long'
>>
>>
>>
>> Thanks for any hint,
>>
>> Michal
>>
>> --
>> Intel Research and Development Ireland Limited
>> Registered in Ireland
>> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>> Registered Number: 308263
>>
>> This e-mail and any attachments may contain confidential material for the
>> sole use of the intended recipient(s). Any review or distribution by others
>> is strictly prohibited. If you are not the intended recipient, please
>> contact the sender and delete all copies.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Ptacek, MichalX
Hello Matt & Matthew,

I am using the following versions:
python-openstackclient-1.0.1-1.fc22.noarch
openstack-keystone (v7.0.0)

My observations are still the same:
when v2 is used (in OS_IDENTITY_API_VERSION & OS_AUTH_URL), the command
"/bin/openstack service list --quiet --format csv --long" works just fine,
but on a fresh installation v3 is used by keystone/openstacklib:
 {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token",
"OS_URL"=>"http://127.0.0.1:35357/v3"}

which simply does not work :-(

Thanks,
Michal

-Original Message-
From: Matthew Mosesohn [mailto:mmoses...@mirantis.com] 
Sent: Friday, February 19, 2016 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials 
correctly ?

Hi Michal,

Just add --os-identity-api-version=3 to your command and it will work. The
provider uses the v3 openstackclient via the env var OS_IDENTITY_API_VERSION=3.
The client default is still 2.

Best Regards,
Matthew Mosesohn

On Fri, Feb 19, 2016 at 5:25 PM, Matt Fischer  wrote:
> What version of openstack client do you have? What version of the 
> module are you using?
>
> On Feb 19, 2016 7:20 AM, "Ptacek, MichalX"  wrote:
>>
>> Hi all,
>>
>>
>>
>> I was playing some time with puppet-keystone deployments,
>>
>> and also reported one issue related to this:
>>
>> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
>>
>> but in general my observations are that keystone_service is using v3 
>> credentials with openstack cli commands that are not compatible
>>
>>
>>
>> e.g.
>>
>> Error: Failed to apply catalog: Execution of '/bin/openstack service 
>> list --quiet --format csv --long' returned 2: usage: openstack 
>> service list [-h] [-f {csv,table}] [-c COLUMN]
>>   [--max-width ]
>>   [--quote {all,minimal,none,nonnumeric}] 
>> openstack service list: error: unrecognized arguments: --long
>>
>>
>>
>>
>>
>> It can’t be a bug, because the whole module would not work due to this :-)
>>
>> I think I miss something important somewhere …
>>
>>
>>
>> My latest manifest file is :
>>
>>
>>
>> Exec { logoutput => 'on_failure' }
>>
>> package { 'curl': ensure => present }
>>
>>
>>
>> node keystone {
>>
>>
>>
>>   class { '::mysql::server': }
>>
>>   class { '::keystone::db::mysql':
>>
>> password => 'keystone',
>>
>>   }
>>
>>
>>
>>   class { '::keystone':
>>
>> verbose => true,
>>
>> debug   => true,
>>
>> database_connection => 
>> 'mysql://keystone:keystone@127.0.0.1/keystone',
>>
>> catalog_type=> 'sql',
>>
>> admin_token => 'admin_token',
>>
>>   }
>>
>>
>>
>>   class { '::keystone::roles::admin':
>>
>> email=> 'exam...@abc.com',
>>
>> password => 'ChangeMe',
>>
>>   }
>>
>>
>>
>>   class { '::keystone::endpoint':
>>
>> public_url => "http://${::fqdn}:5000/v2.0",
>>
>> admin_url  => "http://${::fqdn}:35357/v2.0",
>>
>>   }
>>
>> }
>>
>>
>>
>> Env variables looks as follows(before service list is called with 
>> --long)
>>
>> {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token", 
>> "OS_URL"=>"http://127.0.0.1:35357/v3"}
>>
>> Debug: Executing: '/bin/openstack service list --quiet --format csv 
>> --long'
>>
>>
>>
>> Thanks for any hint,
>>
>> Michal
>>
>> --
>> Intel Research and Development Ireland Limited Registered in Ireland 
>> Registered Office: Collinstown Industrial Park, Leixlip, County 
>> Kildare Registered Number: 308263
>>
>> This e-mail and any attachments may contain confidential material for 
>> the sole use of the intended recipient(s). Any review or distribution 
>> by others is strictly prohibited. If you are not the intended 
>> recipient, please contact the sender and delete all copies.
>>
>>
>> _
>> _ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Fuel] Wildcards instead of

2016-02-19 Thread Aleksandr Didenko
> I vote to abandon it. Let's not break existing plugins, and let's not
> add *undo* tasks for plugin developers. If they want to configure the
> network, they'll ask for it explicitly.

+1 to this gentleman. It's safe to add wildcards only to tasks that were
moved from pre/post deployment stages, which were executed everywhere
anyway.

Regards,
Alex

On Fri, Feb 19, 2016 at 3:22 PM, Bulat Gaifullin 
wrote:

>
> > On 19 Feb 2016, at 17:09, Igor Kalnitsky 
> wrote:
> >
> > Kyrylo G. wrote:
> >> So who is voting for the path to be abandoned?
> >
> > I vote to abandon it. Let's not break existing plugins, and let's not
> > add *undo* tasks for plugin developers. If they want to configure the
> > network, they'll ask for it explicitly.
> >
> >
> > Kyrylo G. wrote:
> >> By the way, there is already a task running by the wildcard:
> >>
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
> >
> > Yes, exactly, but the thing is that our original task for setting up
> > repos was executed on all nodes before, including ones provided by
> > plugins. Making it execute on core nodes only may break plugins that
> > rely on it. So generally, it's about backward compatibility.
> >
> >
> > Bulat G. wrote:
> >> This tasks should run on all nodes and it does not matter, the node
> >> has role from plugin or core-role.
> >
> > Nope, they shouldn't. Why do I need to install the following packages
> >
> >  'screen',
> >  'tmux',
> >  'htop',
> >  'tcpdump',
> >  'strace',
> >  'fuel-misc',
> >  'man-db',
> >  'fuel-misc',
> >  'fuel-ha’
> >
> Is it a big problem?
>
> > if I have no plans to use them? As a deployer engineer, I'd prefer to
> > keep my role as clear as possible, and decide what to install in my
> > own way.
>
> IMO: The plugin developer wants to install additional applications to
> extend functionality; they do not want to configure low-level things, like
> specifying some bunch of tasks to configure the network, configure
> repositories, etc.
> How can we manage a new node if the network is not configured or fuel-agent
> is not installed?
>
> >
> >
> > On Fri, Feb 19, 2016 at 1:06 PM, Bulat Gaifullin
> >  wrote:
> >> +1 to use wildcards for common tasks like netconfig and setup
> repositories.
> >> This tasks should run on all nodes and it does not matter, the node has
> role
> >> from plugin or core-role.
> >> In my opinion we should one approach for basic configuration of node.
> >>
> >> Regards,
> >> Bulat Gaifullin
> >> Mirantis Inc.
> >>
> >>
> >>
> >> On 19 Feb 2016, at 13:36, Kyrylo Galanov  wrote:
> >>
> >> Hi,
> >>
> >> So who is voting for the path to be abandoned?
> >>
> >> By the way, there is already a task running by the wildcard:
> >>
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/fuel_pkgs/tasks.yaml#L4
> >> However, it this case it might work with plugins.
> >>
> >> Best regards,
> >> Kyrylo
> >>
> >> On Fri, Feb 19, 2016 at 1:09 AM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> >> wrote:
> >>>
> >>> Hey Kyrylo,
> >>>
> >>> As it was mentioned in the review: you're about to break roles defined
> >>> by plugins. That's not good move, I believe.
> >>>
> >>> Regarding 'exclude' directive, I have no idea what you're talking
> >>> about. We don't support it now, and, anyway, there should be no
> >>> difference between roles defined by plugins and core roles.
> >>>
> >>> - Igor
> >>>
> >>> On Thu, Feb 18, 2016 at 12:53 PM, Kyrylo Galanov <
> kgala...@mirantis.com>
> >>> wrote:
>  Hello,
> 
>  We are about to switch to wildcards instead of listing all groups in
>  tasks
>  explicitly [0].
>  This change must make deployment process more obvious for developers.
>  However, it might lead to confusion when new groups are added either
> by
>  plugin or fuel team in future.
> 
>  As mention by Bogdan, it is possible to use 'exclude' directive to
>  mitigate
>  the risk.
>  Any thoughts on the topic are appreciated.
> 
> 
>  [0] https://review.openstack.org/#/c/273596/
> 
>  Best regards,
>  Kyrylo
> 
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing 

Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-19 Thread Matt Fischer
You shouldn't have to do any of that; it should just work. I have OSC 2.0.0
in my environment though (Ubuntu). I'm just guessing, but perhaps that
client is too old? Maybe a Fedora user could recommend a version.
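
For example (illustrative commands, use whichever fits your install):

  openstack --version              # reports the python-openstackclient version
  rpm -q python-openstackclient    # on Fedora/RHEL packaged installs
  pip show python-openstackclient  # if it was installed from PyPI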

On Fri, Feb 19, 2016 at 7:38 AM, Matthew Mosesohn 
wrote:

> Hi Michal,
>
> Just add --os-identity-api-version=3 to your command and it will work. The
> provider uses v3 openstackclient via env var
> OS_IDENTITY_API_VERSION=3. The default is still 2.
>
> Best Regards,
> Matthew Mosesohn
>
> On Fri, Feb 19, 2016 at 5:25 PM, Matt Fischer 
> wrote:
> > What version of openstack client do you have? What version of the module
> are
> > you using?
> >
> > On Feb 19, 2016 7:20 AM, "Ptacek, MichalX" 
> wrote:
> >>
> >> Hi all,
> >>
> >>
> >>
> >> I was playing some time with puppet-keystone deployments,
> >>
> >> and also reported one issue related to this:
> >>
> >> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
> >>
> >> but in general my observations are that keystone_service is using v3
> >> credentials with openstack cli commands that are not compatible
> >>
> >>
> >>
> >> e.g.
> >>
> >> Error: Failed to apply catalog: Execution of '/bin/openstack service
> list
> >> --quiet --format csv --long' returned 2: usage: openstack service list
> [-h]
> >> [-f {csv,table}] [-c COLUMN]
> >>   [--max-width ]
> >>   [--quote {all,minimal,none,nonnumeric}]
> >> openstack service list: error: unrecognized arguments: --long
> >>
> >>
> >>
> >>
> >>
> >> It can’t be a bug, because the whole module would not work due to this :-)
> >>
> >> I think I miss something important somewhere …
> >>
> >>
> >>
> >> My latest manifest file is :
> >>
> >>
> >>
> >> Exec { logoutput => 'on_failure' }
> >>
> >> package { 'curl': ensure => present }
> >>
> >>
> >>
> >> node keystone {
> >>
> >>
> >>
> >>   class { '::mysql::server': }
> >>
> >>   class { '::keystone::db::mysql':
> >>
> >> password => 'keystone',
> >>
> >>   }
> >>
> >>
> >>
> >>   class { '::keystone':
> >>
> >> verbose => true,
> >>
> >> debug   => true,
> >>
> >> database_connection => 'mysql://
> keystone:keystone@127.0.0.1/keystone',
> >>
> >> catalog_type=> 'sql',
> >>
> >> admin_token => 'admin_token',
> >>
> >>   }
> >>
> >>
> >>
> >>   class { '::keystone::roles::admin':
> >>
> >> email=> 'exam...@abc.com',
> >>
> >> password => 'ChangeMe',
> >>
> >>   }
> >>
> >>
> >>
> >>   class { '::keystone::endpoint':
> >>
> >> public_url => "http://${::fqdn}:5000/v2.0",
> >>
> >> admin_url  => "http://${::fqdn}:35357/v2.0",
> >>
> >>   }
> >>
> >> }
> >>
> >>
> >>
> >> Env variables looks as follows(before service list is called with
> --long)
> >>
> >> {"OS_IDENTITY_API_VERSION"=>"3", "OS_TOKEN"=>"admin_token",
> >> "OS_URL"=>"http://127.0.0.1:35357/v3"}
> >>
> >> Debug: Executing: '/bin/openstack service list --quiet --format csv
> >> --long'
> >>
> >>
> >>
> >> Thanks for any hint,
> >>
> >> Michal
> >>
> >> --
> >> Intel Research and Development Ireland Limited
> >> Registered in Ireland
> >> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> >> Registered Number: 308263
> >>
> >> This e-mail and any attachments may contain confidential material for
> the
> >> sole use of the intended recipient(s). Any review or distribution by
> others
> >> is strictly prohibited. If you are not the intended recipient, please
> >> contact the sender and delete all copies.
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Sean Dague
On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:
> Cinder team is proposing to add support for API microversions [1]. It came up 
> at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on 
> IRC have raised questions about this [3]
> 
> Please weigh in on the design decision to add a new /v3 endpoint for Cinder 
> for clients to use when they wish to have api-microversions.
> 
> PRO add new /v3 endpoint: A client should not ask for new-behaviour against 
> old /v2 endpoint, because that might hit an old pre-microversion (i.e. 
> Liberty) server, and that server might carry on with old behaviour. The 
> client would not know this without checking, and so strange things happen 
> silently.
> It is possible for a client to check the response from the server, but this
> requires an extra round trip.
> It is possible to implement some type of caching of supported 
> (micro-)version, but not all clients will do this.
> Basic argument is that  continuing to use /v2 endpoint either requires an 
> extra trip for each request (absent caching) meaning performance slow-down, 
> or possibility of unnoticed errors.
> 
> CON add new endpoint:
> Downstream cost of changing endpoints is large. It took ~3 years to move from 
> /v1 -> /v2 and we will have to support the deprecated /v2 endpoint forever.
> If we add microversions with /v2 endpoint, old scripts will keep working on 
> /v2 and they will continue to work.
> We would assume that people who choose to use microversions will check that 
> the server supports it.

The concern as I understand it is that by extending the v2 API with
microversions the following failure scenario exists

If:

1) a client already is using the /v2 API
2) a client opts into using microversions on /v2
3) that client issues a request on a Cinder API v2 endpoint without
microversion support
4) that client fails to check whether microversions are supported, by a GET of
/v2 or by checking the OpenStack-API-Version header in the response
5) that client issues a request against a resource on /v2 with
parameters that would create a radically different situation that would
be hard to figure out later.

And, only if all these things happen is there a concern.

So let's look at each one.

1) clients already using /v2 API

Last cycle when we tried to drop v1 from devstack we got a bunch of
explosions. In researching it, it was determined that very little
supported cinder v2 -
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075760.html


At that point not even OpenStack Client itself, or Rally. And definitely
no libraries except python cinderclient. So the entire space of #1 is
python cinderclient, or non open rest clients.

2 & 4) are coupled. A good client that does 2 should do 4, and not only
depend on the 406 failure. cinderclient definitely should be made to do
that. Which means we are completely left with only custom non open
access code that's a concern. That's definitely still a concern, but
again the problem space is smaller.

3) can be mitigated if cinder backports patches to stable branches to
throw the 406 when sending the header. It's mitigation. Code already is
out in the wild, however it does help. And given other security fixes
people will probably take these patches into production.

5) is there an example where this is expected? or is this theoretical.


My very high concern is the fact that v2 adoption remains quite low, and
that a v3 will hurt that even further. Especially as it means a whole
other endpoint... "volumev2" was already a big problem in teaching a
bunch of software that it needs a new type, "volumev3" is something I
don't think anyone wants to see. I'd really like to see more of these
improvements get out there.

At the end of the day, this is the call of the Cinder team.

However, I've seen real 3rd party vendor software hitting the Nova API
that completely bypasses the service catalog, and hits /v2.1 directly
(it's not using microversions). Which means that it can't work on a Kilo
cloud. For actually no reason. As /v2.1 and /v2 are semantically
equivalent. Vendors do weird things. They read the docs, say "oh this is
the latest API" and only implement to that. They don't need any new
features, don't realize the time delay in these things getting out
there. It's a big regret that we have multiple endpoints because it
means these kinds of applications basically break for no good reason.

So my recommendation is to extend out from the /v2 endpoint. This is
conceptually what you are actually doing. The base microversion will be
v2 API as it exists today, and you are negotiating up from there.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Ben Swartzlander

On 02/19/2016 10:57 AM, Sean Dague wrote:

On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:

Cinder team is proposing to add support for API microversions [1]. It came up 
at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on IRC 
have raised questions about this [3]

Please weigh in on the design decision to add a new /v3 endpoint for Cinder for 
clients to use when they wish to have api-microversions.

PRO add new /v3 endpoint: A client should not ask for new-behaviour against old 
/v2 endpoint, because that might hit an old pre-microversion (i.e. Liberty) 
server, and that server might carry on with old behaviour. The client would not 
know this without checking, and so strange things happen silently.
It is possible for a client to check the response from the server, but this
requires an extra round trip.
It is possible to implement some type of caching of supported (micro-)version, 
but not all clients will do this.
Basic argument is that  continuing to use /v2 endpoint either requires an extra 
trip for each request (absent caching) meaning performance slow-down, or 
possibility of unnoticed errors.

CON add new endpoint:
Downstream cost of changing endpoints is large. It took ~3 years to move from /v1 
-> /v2 and we will have to support the deprecated /v2 endpoint forever.
If we add microversions with /v2 endpoint, old scripts will keep working on /v2 
and they will continue to work.
We would assume that people who choose to use microversions will check that the 
server supports it.


The concern as I understand it is that by extending the v2 API with
microversions the following failure scenario exists

If:

1) a client already is using the /v2 API
2) a client opt's into using microversions on /v2
3) that client issues a request on a Cinder API v2 endpoint without
microversion support
4) that client fails check if micoversions are supported by a GET of /v2
or by checking the return of the OpenStack-API-Version return header


I disagree that this (step 4) is a failure. Clients should not have to
do a check at all. The client should tell the server what it wants to do 
(send the request and version) and the server should do exactly that if 
and only if it can. Any requirement that the client check the server's 
version is a massive violation of good API design and will cause either 
performance problems or correctness problems or both.


-Ben Swartzlander


5) that client issues a request against a resource on /v2 with
parameters that would create a radically different situation that would
be hard to figure out later.

And, only if all these things happen is there a concern.

So let's look at each one.

1) clients already using /v2 API

Last cycle when we tried to drop v1 from devstack we got a bunch of
explosions. In researching it it was determined that very little
supported cinder v2 -
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075760.html


At that point not even OpenStack Client itself, or Rally. And definitely
no libraries except python cinderclient. So the entire space of #1 is
python cinderclient, or non open rest clients.

2 & 4) are coupled. A good client that does 2 should do 4, and not only
depend on the 406 failure. cinderclient definitely should be made to do
that. Which means we are completely left with only custom non open
access code that's a concern. That's definitely still a concern, but
again the problem space is smaller.

3) can be mitigated if cinder backports patches to stable branches to
throw the 406 when sending the header. It's mitigation. Code already is
out in the wild, however it does help. And given other security fixes
people will probably take these patches into production.

5) is there an example where this is expected? or is this theoretical.


My very high concern is the fact that v2 adoption remains quite low, and
that a v3 will hurt that even further. Especially as it means a whole
other endpoint... "volumev2" was already a big problem in teaching a
bunch of software that it needs a new type, "volumev3" is something I
don't think anyone wants to see. I'd really like to see more of these
improvements get out there.

At the end of the day, this is the call of the Cinder team.

However, I've seen real 3rd party vendor software hitting the Nova API
that completely bypasses the service catalog, and hits /v2.1 directly
(it's not using microversions). Which means that it can't work on a Kilo
cloud. For actually no reason. As /v2.1 and /v2 are semantically
equivalent. Vendors do weird things. They read the docs, say "oh this is
the latest API" and only implement to that. They don't need any new
features, don't realize the time delay in these things getting out
there. It's a big regret that we have multiple endpoints because it
means these kinds of applications basically break for no good reason.

So my recommendation is to extend out from the /v2 endpoint. This is
conceptually what you are actually doing. The base microversion 

Re: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka cycle?

2016-02-19 Thread Thomas Goirand
On 02/18/2016 08:20 PM, Victor Stinner wrote:
> I discussed with some Trove developers who are interested to start the
> Python 3 port right now. What do you think?

Mitaka b3 is just around the corner (in less than 10 days now), so in
the end, it doesn't change things much, unless all of your patches are
accepted before that. IMO, it's time to build momentum to port things to
Py3. I hope it happens.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-19 Thread Sean Dague
On 02/19/2016 09:30 AM, Andrew Laski wrote:
> 
> 
> On Thu, Feb 18, 2016, at 05:34 PM, melanie witt wrote:
>> On Feb 12, 2016, at 14:49, Jay Pipes  wrote:
>>
>>> This would be my preference as well, even though it's technically a 
>>> backwards-incompatible API change.
>>>
>>> The idea behind get-me-a-network was specifically to remove the current 
>>> required complexity of the nova boot command with regards to networking 
>>> options and allow a return to the nova-net model where an admin could 
>>> auto-create a bunch of unassigned networks and the first time a user booted 
>>> an instance and did not specify any network configuration (the default, 
>>> sane behaviour in nova-net), one of those unassigned networks would be 
>>> grabbed for the troject, I mean prenant, sorry.
>>>
>>> So yeah, the "opt-in to having no networking at all with a --no-networking 
>>> or --no-nics option" would be my preference.
>>
>> +1 to this, especially opting in to have no network at all. It seems most
>> friendly to me to have the network allocation automatically happen if
>> nothing special is specified.
>>
>> This is something where it seems like we need a "reset" to a default
>> behavior that is user-friendly. And microversions is the way we have to
>> "fix" an undesirable current default behavior.
> 
> The question I would still like to see addressed is why do we need to
> have a default behavior here? The get-me-a-network effort is motivated
> by the current complexity of setting up a network for an instance
> between Nova and Neutron and wants to get back to a simpler time of
> being able to just boot an instance and get a network. But it still
> isn't clear to me why requiring something like "--nic auto" wouldn't
> work here, and eliminate the confusion of changing a default behavior.

The point was the default behavior was a major concern to people. It's
not like this was always the behavior. If you were (or are) on nova net,
you don't need that option at all.

The major reason we implemented API microversions was so that we could
make the base API experience better for people, some day. One day, we'll
have an API we love, hopefully. Doing so means that we do need to make
changes to defaults. Deprecate some weird and unmaintained bits.

The principle of least surprise to me is that you don't need that
attribute at all. Do the right thing with the least amount of work.
Instead of making the majority of clients and users do extra work
because, once upon a time when we brought in neutron, a thing happened.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Sean McGinnis
On Fri, Feb 19, 2016 at 10:57:38AM -0500, Sean Dague wrote:
> The concern as I understand it is that by extending the v2 API with
> microversions the following failure scenario exists
> 
> If:
> 
> 1) a client already is using the /v2 API
> 2) a client opt's into using microversions on /v2
> 3) that client issues a request on a Cinder API v2 endpoint without
> microversion support
> 4) that client fails check if micoversions are supported by a GET of /v2
> or by checking the return of the OpenStack-API-Version return header
> 5) that client issues a request against a resource on /v2 with
> parameters that would create a radically different situation that would
> be hard to figure out later.
> 
> And, only if all these things happen is there a concern.

I think it's actually even simpler than that. And possibly therefore
more likely to actually happen in the wild.

1) a client already is using microversions
2) that client issues a request to a non-microversion release without
   first doing a check for microversion support
3) the request is serviced as best the non-microversion service knows
   how
4) the client checks the response to validate the microversion header
   and too late realizes it wasn't supported and a slightly different
   action was performed than what they expected

> 
> So let's look at each one.
> 
> 1) clients already using /v2 API
> 
> Last cycle when we tried to drop v1 from devstack we got a bunch of
> explosions. In researching it it was determined that very little
> supported cinder v2 -
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075760.html
> 
> 
> At that point not even OpenStack Client itself, or Rally. And definitely
> no libraries except python cinderclient. So the entire space of #1 is
> python cinderclient, or non open rest clients.
> 
> 2 & 4) are coupled. A good client that does 2 should do 4, and not only
> depend on the 406 failure. cinderclient definitely should be made to do
> that. Which means we are completely left with only custom non open
> access code that's a concern. That's definitely still a concern, but
> again the problem space is smaller.
> 
> 3) can be mitigated if cinder backports patches to stable branches to
> throw the 406 when sending the header. It's mitigation. Code already is
> out in the wild, however it does help. And given other security fixes
> people will probably take these patches into production.

This is one thing I was thinking about. But even if we do, there is no
guarantee that older releases will have been updated.

On the other hand, microversion adoption probably isn't going to take
off immediately. You are correct that there are still many clients using
v1. If it's taken folks this long to get to v2, the chances of a
wholesale migration to microversions is pretty low. Chances are by the
time it is prevalent, this will no longer be an issue.

> 
> 5) is there an example where this is expected? or is this theoretical.

Good point. At this point it is theoretical.

> 
> 
> My very high concern is the fact that v2 adoption remains quite low, and
> that a v3 will hurt that even further. Especially as it means a whole
> other endpoint... "volumev2" was already a big problem in teaching a
> bunch of software that it needs a new type, "volumev3" is something I
> don't think anyone wants to see. I'd really like to see more of these
> improvements get out there.
> 
> At the end of the day, this is the call of the Cinder team.
> 
> However, I've seen real 3rd party vendor software hitting the Nova API
> that completely bypasses the service catalog, and hits /v2.1 directly
> (it's not using microversions). Which means that it can't work on a Kilo
> cloud. For actually no reason. As /v2.1 and /v2 are semantically
> equivalent. Vendors do weird things. They read the docs, say "oh this is
> the latest API" and only implement to that. They don't need any new
> features, don't realize the time delay in these things getting out
> there. It's a big regret that we have multiple endpoints because it
> means these kinds of applications basically break for no good reason.

This is a bit of a circular argument in my opinion. We don't expect them
to pay attention enough to the difference between /v2 and /v3 (or the
lack of a difference), yet we expect them to pay attention enough to
know to check for microversion support before making an API call.

I'm really not arguing for one way or the other here. I really
appreciate the input. Just trying to think through implications and see
what makes the most sense.

The one thing that makes me slightly lean towards a new endpoint is that
it is risky to expect consumers of the API to pay enough attention to know
to do these checks. A /v3 endpoint would be the safest route to protect
against folks doing something stupid.

> 
> So my recommendation is to extend out from the /v2 endpoint. This is
> conceptually what you are actually doing. The base microversion will be
> v2 API as it exists 

Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Sean Dague
On 02/19/2016 11:15 AM, Ben Swartzlander wrote:
> On 02/19/2016 10:57 AM, Sean Dague wrote:
>> On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:
>>> Cinder team is proposing to add support for API microversions [1]. It
>>> came up at our mid-cycle that we should add a new /v3 endpoint [2].
>>> Discussions on IRC have raised questions about this [3]
>>>
>>> Please weigh in on the design decision to add a new /v3 endpoint for
>>> Cinder for clients to use when they wish to have api-microversions.
>>>
>>> PRO add new /v3 endpoint: A client should not ask for new-behaviour
>>> against old /v2 endpoint, because that might hit an old
>>> pre-microversion (i.e. Liberty) server, and that server might carry
>>> on with old behaviour. The client would not know this without
>>> checking, and so strange things happen silently.
>>> It is possible for client to check the response from the server, but
>>> his requires an extra round trip.
>>> It is possible to implement some type of caching of supported
>>> (micro-)version, but not all clients will do this.
>>> Basic argument is that  continuing to use /v2 endpoint either
>>> requires an extra trip for each request (absent caching) meaning
>>> performance slow-down, or possibility of unnoticed errors.
>>>
>>> CON add new endpoint:
>>> Downstream cost of changing endpoints is large. It took ~3 years to
>>> move from /v1 -> /v2 and we will have to support the deprecated /v2
>>> endpoint forever.
>>> If we add microversions with /v2 endpoint, old scripts will keep
>>> working on /v2 and they will continue to work.
>>> We would assume that people who choose to use microversions will
>>> check that the server supports it.
>>
>> The concern as I understand it is that by extending the v2 API with
>> microversions the following failure scenario exists
>>
>> If:
>>
>> 1) a client already is using the /v2 API
>> 2) a client opt's into using microversions on /v2
>> 3) that client issues a request on a Cinder API v2 endpoint without
>> microversion support
>> 4) that client fails check if micoversions are supported by a GET of /v2
>> or by checking the return of the OpenStack-API-Version return header
> 
> I disagree that this (step 4) is a failure. Clients should not have to
> do a check at all. The client should tell the server what it wants to do
> (send the request and version) and the server should do exactly that if
> and only if it can. Any requirement that the client check the server's
> version is a massive violation of good API design and will cause either
> performance problems or correctness problems or both.

That is a fair concern. However, the Cinder API today doesn't do strict
input validation (in my understanding), which means it's never given
users that guarantee. Adding ?foo=bar to random resources, or extra
headers, is likely to just get silently dropped.

Strict input validation is a good thing to do, and would make a very
sensible initial microversion to get onto that path.

So this isn't really worse than the current situation. And the upside is
easier adoption.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Sean Dague
On 02/19/2016 11:20 AM, Sean McGinnis wrote:
> On Fri, Feb 19, 2016 at 10:57:38AM -0500, Sean Dague wrote:
>> The concern as I understand it is that by extending the v2 API with
>> microversions the following failure scenario exists
>>
>> If:
>>
>> 1) a client already is using the /v2 API
>> 2) a client opt's into using microversions on /v2
>> 3) that client issues a request on a Cinder API v2 endpoint without
>> microversion support
>> 4) that client fails check if micoversions are supported by a GET of /v2
>> or by checking the return of the OpenStack-API-Version return header
>> 5) that client issues a request against a resource on /v2 with
>> parameters that would create a radically different situation that would
>> be hard to figure out later.
>>
>> And, only if all these things happen is there a concern.
> 
> I think it's actually even simpler than that. And possibly therefore
> more likely to actually happen in the wild.
> 
> 1) a client already is using microversions

But, there are no such clients today. And there is no library that does
this yet. It will be 4 - 6 months (or even more likely 12+) until that's
in the ecosystem. Which is why adding the header validation to the existing
v2 API, and backporting it to liberty / kilo, will provide really
substantial coverage for the concern bswartz is bringing forward.
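
To make that concrete, the kind of probe/validation being discussed looks
roughly like this (the endpoint and the version value are illustrative only,
since the exact header negotiation is still being settled):

  curl -si http://cinder.example.com:8776/v2/$PROJECT_ID/volumes \
    -H "X-Auth-Token: $TOKEN" \
    -H "OpenStack-API-Version: volume 3.0"

A microversion-aware server echoes the OpenStack-API-Version header back in
the response, a backported Liberty / Kilo API would 406 a version it can't
honor, and an unpatched API silently ignores the header, which is exactly the
gap the backport is meant to close.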

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-19 Thread Andrew Laski


On Fri, Feb 19, 2016, at 11:14 AM, Sean Dague wrote:
> On 02/19/2016 09:30 AM, Andrew Laski wrote:
> > 
> > 
> > On Thu, Feb 18, 2016, at 05:34 PM, melanie witt wrote:
> >> On Feb 12, 2016, at 14:49, Jay Pipes  wrote:
> >>
> >>> This would be my preference as well, even though it's technically a 
> >>> backwards-incompatible API change.
> >>>
> >>> The idea behind get-me-a-network was specifically to remove the current 
> >>> required complexity of the nova boot command with regards to networking 
> >>> options and allow a return to the nova-net model where an admin could 
> >>> auto-create a bunch of unassigned networks and the first time a user 
> >>> booted an instance and did not specify any network configuration (the 
> >>> default, sane behaviour in nova-net), one of those unassigned networks 
> >>> would be grabbed for the troject, I mean prenant, sorry.
> >>>
> >>> So yeah, the "opt-in to having no networking at all with a 
> >>> --no-networking or --no-nics option" would be my preference.
> >>
> >> +1 to this, especially opting in to have no network at all. It seems most
> >> friendly to me to have the network allocation automatically happen if
> >> nothing special is specified.
> >>
> >> This is something where it seems like we need a "reset" to a default
> >> behavior that is user-friendly. And microversions is the way we have to
> >> "fix" an undesirable current default behavior.
> > 
> > The question I would still like to see addressed is why do we need to
> > have a default behavior here? The get-me-a-network effort is motivated
> > by the current complexity of setting up a network for an instance
> > between Nova and Neutron and wants to get back to a simpler time of
> > being able to just boot an instance and get a network. But it still
> > isn't clear to me why requiring something like "--nic auto" wouldn't
> > work here, and eliminate the confusion of changing a default behavior.
> 
> The point was the default behavior was a major concern to people. It's
> not like this was always the behavior. If you were (or are) on nova net,
> you don't need that option at all.

Which is why I would prefer to shy away from default behaviors.

> 
> The major reason we implemented API microversions was so that we could
> make the base API experience better for people, some day. One day, we'll
> have an API we love, hopefully. Doing so means that we do need to make
> changes to defaults. Deprecate some weird and unmaintained bits.
> 
> The principle of least surprise to me is that you don't need that
> attribute at all. Do the right thing with the least amount of work.
> Instead of making the majority of clients and users do extra work
> because once upon a time when we brought in neutron a thing happen.

The principle of least surprise to me is that a user explicitly asks for
something rather than relying on a default that changes based on network
service and/or microversion. This is the only area in the API where
something did, and would, happen without explicitly being requested by a
user. I just don't understand why it's special compared to
flavor/image/volume which we require to be explicit. But I think we just
need to agree to disagree here.
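
Just to make the two shapes concrete, this is roughly the difference being
argued over (these flags are the proposals from this thread, not something
novaclient supports today):

  # explicit opt-in, which is what I'd prefer:
  nova boot --flavor m1.small --image cirros --nic auto my-server

  # implicit allocation by default, with an explicit opt-out:
  nova boot --flavor m1.small --image cirros my-server
  nova boot --flavor m1.small --image cirros --no-nics my-server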

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Morgan Fainberg
On Fri, Feb 19, 2016 at 8:24 AM, Sean Dague  wrote:

> On 02/19/2016 11:15 AM, Ben Swartzlander wrote:
> > On 02/19/2016 10:57 AM, Sean Dague wrote:
> >> On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:
> >>> Cinder team is proposing to add support for API microversions [1]. It
> >>> came up at our mid-cycle that we should add a new /v3 endpoint [2].
> >>> Discussions on IRC have raised questions about this [3]
> >>>
> >>> Please weigh in on the design decision to add a new /v3 endpoint for
> >>> Cinder for clients to use when they wish to have api-microversions.
> >>>
> >>> PRO add new /v3 endpoint: A client should not ask for new-behaviour
> >>> against old /v2 endpoint, because that might hit an old
> >>> pre-microversion (i.e. Liberty) server, and that server might carry
> >>> on with old behaviour. The client would not know this without
> >>> checking, and so strange things happen silently.
> >>> It is possible for client to check the response from the server, but
> >>> his requires an extra round trip.
> >>> It is possible to implement some type of caching of supported
> >>> (micro-)version, but not all clients will do this.
> >>> Basic argument is that  continuing to use /v2 endpoint either
> >>> requires an extra trip for each request (absent caching) meaning
> >>> performance slow-down, or possibility of unnoticed errors.
> >>>
> >>> CON add new endpoint:
> >>> Downstream cost of changing endpoints is large. It took ~3 years to
> >>> move from /v1 -> /v2 and we will have to support the deprecated /v2
> >>> endpoint forever.
> >>> If we add microversions with /v2 endpoint, old scripts will keep
> >>> working on /v2 and they will continue to work.
> >>> We would assume that people who choose to use microversions will
> >>> check that the server supports it.
> >>
> >> The concern as I understand it is that by extending the v2 API with
> >> microversions the following failure scenario exists
> >>
> >> If:
> >>
> >> 1) a client already is using the /v2 API
> >> 2) a client opt's into using microversions on /v2
> >> 3) that client issues a request on a Cinder API v2 endpoint without
> >> microversion support
> >> 4) that client fails check if micoversions are supported by a GET of /v2
> >> or by checking the return of the OpenStack-API-Version return header
> >
> > I disagree that this (step 4) is a failure. Clients should not have to
> > do a check at all. The client should tell the server what it wants to do
> > (send the request and version) and the server should do exactly that if
> > and only if it can. Any requirement that the client check the server's
> > version is a massive violation of good API design and will cause either
> > performance problems or correctness problems or both.
>
> That is a fair concern. However the Cinder API today doesn't do strict
> input validation (in my understanding). Which means it's never given
> users that guaruntee. Adding ?foo=bar to random resources, or extra
> headers, it likely to just get silently dropped.
>
> Strict input validation is a good thing to do, and would make a very
> sensible initial microversion to get onto that path.
>
> So this isn't really worse than the current situation. And the upside is
> easier adoption.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

As a point, we are also trying to drop "versioned endpoints" as a thing from
the catalog going forward. Please do not add a "cinderv3" or "volumev3"
entry to the catalog. This is something that encourages adding a new
endpoint for every version. If every service had an entry for each endpoint
version in the catalog it would rapidly balloon in size (think of the ~14?
services we have now, each with three entries per "actual api
endpoint"). The catalog is good to a point, but if everyone added a
versioned endpoint it would rapidly become more of a beast than it is and
potentially become a bigger bottleneck/performance issue than it already is.

Asking the endpoint a single time what versions it supports gives you relevant
information that is not encoded in the catalog (let's be fair, the catalog does
not contain everything; heck, you don't even know what version of the v2 cinder
API an endpoint has, so you should probably ask for discoverability anyway, to
provide good responses to the user vs. random/spurious 404s because a new
cinderclient knows more APIs than the juno cinder API provides).
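
(As a sketch, with an illustrative endpoint URL, that one-time discovery is
just:

  curl -s http://cinder.example.com:8776/ | python -m json.tool

which returns the versions document listing each API version with its status
and links, none of which the catalog can tell you.)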

I think Sean is giving good guidance on extending from /v2 personally. I
have other mixed feelings on microversions (see the nova thread on
indicating compatibility), but if microversions are to be supported, it
isn't terrible to extend from your existing point as long as the client is
expected to get "current" behavior 

Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Sean McGinnis
On Fri, Feb 19, 2016 at 11:28:09AM -0500, Sean Dague wrote:
> On 02/19/2016 11:20 AM, Sean McGinnis wrote:
> > On Fri, Feb 19, 2016 at 10:57:38AM -0500, Sean Dague wrote:
> >> The concern as I understand it is that by extending the v2 API with
> >> microversions the following failure scenario exists
> >>
> >> If:
> >>
> >> 1) a client already is using the /v2 API
> >> 2) a client opt's into using microversions on /v2
> >> 3) that client issues a request on a Cinder API v2 endpoint without
> >> microversion support
> >> 4) that client fails check if micoversions are supported by a GET of /v2
> >> or by checking the return of the OpenStack-API-Version return header
> >> 5) that client issues a request against a resource on /v2 with
> >> parameters that would create a radically different situation that would
> >> be hard to figure out later.
> >>
> >> And, only if all these things happen is there a concern.
> > 
> > I think it's actually even simpler than that. And possibly therefore
> > more likely to actually happen in the wild.
> > 
> > 1) a client already is using microversions
> 
> But, there are no such clients today. And there is no library that does
> this yet. It will be 4 - 6 months (or even more likely 12+) until that's
> in the ecosystem. Which is why adding the header validation to existing
> v2 API, and backporting to liberty / kilo, will provide really
> substantial coverage for the concern the bswartz is bringing forward.

Yeah, I have to agree with that. We can certainly have the protection
out in time.

The only concern there is the admin who set up his Kilo initial release
cloud and doesn't want to touch it for updates. But they likely have
more pressing issues than this any way.

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance]Glance v2 api support in Nova

2016-02-19 Thread John Garbutt
On 19 February 2016 at 11:45, Sean Dague  wrote:
> On 02/15/2016 06:00 PM, Flavio Percoco wrote:
>> On 12/02/16 18:24 +0300, Mikhail Fedosin wrote:
>>> Hello!
>>>
>>> In late December I wrote several messages about glance v2 support in
>>> Nova and
>>> Nova's xen plugin. Many things have been done after that and now I'm
>>> happy to
>>> announce that there we have a set of commits that makes Nova fully v2
>>> compatible (xen plugin works too)!
>>>
>>> Here's the link to the top commit
>>> https://review.openstack.org/#/c/259097/
>>> Here's the link to approved spec for Mitaka https://github.com/openstack/
>>> nova-specs/blob/master/specs/mitaka/approved/use-glance-v2-api.rst
>>>
>>> I think it'll be a big step for OpenStack, because api v2 is much more
>>> stable
>>> and RESTful than v1.  We would very much like to deprecate v1 at some
>>> point. v2
>>> is 'Current' since Juno, and after that there we've had a lot of
>>> attempts to
>>> adopt it in Nova, and every time it was postponed to next release cycle.
>>>
>>> Unfortunately, it may not happen this time - this work was marked as
>>> 'non-priority' when the related patches had been done. I think it's a big
>>> omission, because this work is essential for all OpenStack, and it
>>> will be a
>>> shame if we won't be able to land it in Mitaka.
>>> As far as I know, Feature Freeze will be announced on March, 3rd, and
>>> we still
>>> have enough time and people to test it before. All patches are split
>>> into small
>>> commits (100 LOC max), so they should be relatively easy to review.
>>>
>>> I wonder if Nova community members may change their decision and
>>> unblock this
>>> patches? Thanks in advance!
>>
>> A couple of weeks ago, I had a chat with Sean Dague and John Garbutt and we
>> agreed that it was probably better to wait until Newton. After that
>> chat, we
>> held a Glance virtual mid-cycle where Mikhail mentioned that he would
>> rather
>> sprint on getting Nova on v2 than waiting for Newton. The terms and code
>> Mikhail
>> worked on aligns with what has been discussed throughout the cycle in
>> numerous
>> chats, patch sets, etc.
>>
>> After all the effort that has been put on this (including getting a py24
>> environment ready to test the xenplugin) it'd be a real shame to have
>> this work
>> pushed to Newton. The Glance team *needs* to be able to deprecate v1 and
>> the
>> team has been working on this ever since Kilo, when this effort of
>> moving Nova
>> to v2 started.
>>
>> I believe it has to be an OpenStack priority to make this happen or, at
>> the very
>> least, a cross-project effort that involves all services relying on
>> Glance. Nova
>> is the last service in the list, AFAICT, and the Glance team has been very
>> active on this front. This is not to imply the Nova team hasn't helped, in
>> fact,
>> there's been lots of support/feedback from the nova team during Mitaka.
>> It is
>> because of that that I believe we should grant this patches an exception
>> and let
>> them in.
>>
>> Part of the feedback the Nova team has provided is that some of that
>> code that
>> has been proposed should live in glanceclient. The Glance team is ready
>> to react
>> and merge that code, release glanceclient, and get Nova on v2.
>
> Right, I think this was the crux of the problem. It took a while to get
> consensus on that point, and now we're deep into the priority part of
> the Nova cycle, and the runway is gone. I'm happy to help review early
> during the Newton cycle.
>
> I also think as prep work for that we should probably get either glance
> folks or citrix folks to enhance the testing around the xenserver /
> glance paths in Nova. That will make reviews go faster in Newton because
> we can be a lot more sure that patches aren't breaking anything.

+1 Sean's points here.

We should totally make time to get the Nova and Glance team together
during the design summit. I keep thinking we understand each other,
and every time we dig a little deeper on this topic we find more
differences in opinion.

I think we all agree with the long term view:
* Nova's image API calls maintain their existing contract (glance v1 like)
* Nova can talk to glance v2 for everything, zero dependency on glance v1
* Nova will need to support both v1 and v2 glance for at least one cycle

It's how we get to that point that we don't all agree on. It
feels like an in-depth face-to-face discussion will be the best way to
resolve that.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer, gnocchi]How to use gnocchi in OpenStack Liberty

2016-02-19 Thread gordon chung


On 19/02/2016 3:09 AM, phot...@126.com wrote:
> I install OpenStack Liberty follow the instruction of "Installation
> Guide for Red Hat Enterprise Linux 7 and CentOS 7”. Then, I want to use
> Gnocchi instead of MongoDB in ceilometer. I read the doc of gnocchi in
> http://gnocchi.xyz . But in this doc, It only install gnocchi using
> devstack.
>
> I want to know after I install Openstack Liberty follow the instruction
> of “Installation Guide for Red Had Enterprise Linux 7 and CentOS 7”, how
> can i use gnocchi instead of MongoDB in ceilometer.
>

to start, i'd probably try using gnocchi 2.0 to avoid/minimise any migration.

i'm not familiar with the current state of packages so it's something to 
consider. gnocchi is also accessible via pypi.

one possible way would be to create a test environment and install with
devstack. there you can see the user roles required, and a few other config
settings.

if you understand bash, you can look at how it's set up in devstack [1].
if you run into issues you can always post them.
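
a minimal local.conf sketch for that, assuming the standard devstack plugin
mechanism (adjust the branch/url for your environment), is just:

  [[local|localrc]]
  enable_plugin gnocchi https://github.com/openstack/gnocchi master

the plugin.sh referenced in [1] takes care of the rest of the setup.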

from the ceilometer pov, you should (a rough config sketch follows this list):
- ensure the gnocchi_archive_policy_map.yaml and gnocchi_resources.yaml
config files are placed alongside ceilometer.conf.
- set dispatcher config option to gnocchi in ceilometer.conf
- set [dispatcher_gnocchi]/url = gnocchi url
- set [dispatcher_gnocchi]/archive_policy = 
- disable store_events (in liberty)
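
roughly, the ceilometer.conf side of that ends up looking like the sketch
below. section and option names are from memory for liberty, so double-check
them against your release; the url and archive policy values are just
examples:

  [DEFAULT]
  dispatcher = gnocchi

  [dispatcher_gnocchi]
  url = http://controller:8041
  archive_policy = low

  [notification]
  # exact section/name for disabling events may differ per release, verify
  store_events = False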

NOTE: you might experience slow write performance from 
ceilometer->gnocchi in Liberty as we started to address that issue in 
Mitaka.

[1] https://github.com/openstack/gnocchi/blob/master/devstack/plugin.sh

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][i18n] Please review the patch to add I18n team ATCs

2016-02-19 Thread Ying Chun Guo
Hi,
 
There is a patch to governance adding a list of I18n ATCs as extra-atcs.
Please review and vote: https://review.openstack.org/#/c/281145/

One major contribution of I18n active contributors is to provide translations
through http://translate.openstack.org. Since Liberty, active translators have
been regarded as ATCs. There is currently no automatic method to grant
translators ATC status, so I have to refresh the I18n ATC list in the
governance repo, adding active contributors from the past 6 months.

Please support it. We need I18n ATCs both in the upcoming PTL vote and at the
Austin summit. Thank you for your time.

Best regards,
Ying Chun Guo (Daisy)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade - Nova metadata failure

2016-02-19 Thread Armando M.
On 19 February 2016 at 04:43, Sean Dague  wrote:

> On 02/18/2016 09:50 PM, Armando M. wrote:
> >
> >
> > On 18 February 2016 at 08:41, Sean M. Collins  > > wrote:
> >
> > This week's update:
> >
> > Armando was kind enough to take a look[1], since he's got a fresh
> > perspective. I think I've been suffering from Target Fixation[1]
> > where I failed to notice a couple other failures in the logs.
> >
> >
> > It's been fun, and I am glad I was able to help. Once I validated the
> > root cause of the metadata failure [1], I got run [2] and a clean pass
> > in [3] :)
> >
> > There are still a few things to iron out, ie. choosing metadata over
> > config-drive, testing both in the gate etc. But that's for another day.
> >
> > Cheers,
> > Armando
> >
> > [1] https://bugs.launchpad.net/nova/+bug/1545101/comments/4
> > [2]
> http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/
> > [3]
> http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/logs/testr_results.html.gz
>
> I want to thank everyone that's been working on this issue profusely.
> This exposed a release critical bug in Nova that we would not have
> caught otherwise. Finding that before milestone 3 is a huge win and
> gives us a lot more options in fixing it correctly.
>
> I think we've got the proper fix now -
> https://review.openstack.org/#/c/279721/ (fingers crossed). The metadata
> server is one of the least tested components we've got on the Nova side,
> so I'll be looking at ways to fix that problem and hopefully avoid
> situations like this again.
>

Now that the blocking issue has been identified, I filed project-config
change [1] to enable us to test the Neutron Grenade multinode more
thoroughly.

[1] https://review.openstack.org/#/c/282428/


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] intrinsic function bugfixes and hot versioning

2016-02-19 Thread Steven Hardy
On Thu, Feb 18, 2016 at 11:40:16AM +0100, Thomas Herve wrote:
> On Wed, Feb 17, 2016 at 7:54 PM, Steven Hardy  wrote:
> > Hi all,
> >
> > So, Zane and I have discussed $subject and it was suggested I take this to
> > the list to reach consensus.
> >
> > Recently, I've run into a couple of small but inconvenient limitations in
> > our intrinsic function implementations, specifically for str_replace and
> > repeat, both of which did not behave the way I expected when referencing
> > things via get_param/get_attr:
> 
> Disclaimer: compatibility is not black and white, especially in these
> cases. We need to make decisions based on the impact we can imagine on
> users, so it's certainly subjective. That said:
> 
> > https://bugs.launchpad.net/heat/+bug/1539737
> 
> I think it works fine as a bug fix.

Ok, I've followed up on Zane's comments re the fix here:

https://review.openstack.org/#/c/282394/

And squashed both patches as a backport to stable/liberty:

https://review.openstack.org/#/c/282403/

> > https://bugs.launchpad.net/heat/+bug/1546684
> 
> I agree that a new version would be better.
> 
> The main difference for me is that even if it's arguable, you could
> build a working template relying on the current behavior (having a
> template returned by a function).
> If you find a way to keep the current behavior *and* have the one you
> expect, then I can see it as a bug fix.

Yeah, given the feedback here, on the review and in the bug, I've abandoned
the patch and will submit a spec instead so we can work through the
interface concerns and figure out a backwards-compatible way to do it.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-19 Thread Javier Pena


- Original Message -
> Hi,
> 
> I've seen Reno doing it, then some more. It's time that I raise the
> issue globally in this list before the epidemic spreads to the whole of
> OpenStack ! :)
> 
> The last occurence I have found is in oslo.config (but please keep in
> mind this message is for all projects), which has, its doc/source/conf.py:
> 
> git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
>"--date=local","-n1"]
> html_last_updated_fmt = subprocess.check_output(git_cmd,
> stdin=subprocess.PIPE)
> 
> Of course, the .git folder is *NOT* available when building a package in
> Debian (and more generally, in downstream distros). This means that this
> kind of joke *will* break the build of the packages when they also build
> the docs of your project. And consequently, the package maintainers have
> to patch out the above lines from conf.py. It'd be best if it wasn't
> needed to do so.
> 
> As a consequence, it is *not ok* to do "git log" anywhere in the sphinx
> docs. Please keep this in mind.
> 

We have hit the same issue in our automated builds for RDO Trunk, and 
https://bugs.launchpad.net/reno/+bug/1520096 is tracking it for reno.

While it is possible to work around it from the packagers' perspective, it 
would be better not to assume the source is obtained via a git clone.
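
As an illustration only, a conf.py could guard that call so the docs still 
build from a tarball without git -- this is just a sketch of the general 
idea, not reno's or oslo.config's actual fix:

    import subprocess

    try:
        git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
                   "--date=local", "-n1"]
        # Only works when the tree is a git checkout and git is installed.
        html_last_updated_fmt = subprocess.check_output(git_cmd).decode("utf-8")
    except (OSError, subprocess.CalledProcessError):
        # No git binary or not a git checkout (e.g. building from a tarball):
        # skip the "last updated" stamp instead of failing the docs build.
        html_last_updated_fmt = None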

Cheers,
Javier

> More generally, it is wrong to assume that even the git command is
> present. For Mitaka b2, I had to add git as build-dependency on nearly
> all server packages, otherwise they would FTBFS (fail to build from
> source). This is plain wrong and makes no sense. I hope this can be
> reverted somehow.
> 
> Thanks in advance for considering the above, and to try to see things
> from the package maintainer's perspective,
> Cheers,
> 
> Thomas Goirand (zigo)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-19 Thread John Garbutt
On 19 February 2016 at 16:28, Andrew Laski  wrote:
> On Fri, Feb 19, 2016, at 11:14 AM, Sean Dague wrote:
>> On 02/19/2016 09:30 AM, Andrew Laski wrote:
>> >
>> >
>> > On Thu, Feb 18, 2016, at 05:34 PM, melanie witt wrote:
>> >> On Feb 12, 2016, at 14:49, Jay Pipes  wrote:
>> >>
>> >>> This would be my preference as well, even though it's technically a 
>> >>> backwards-incompatible API change.
>> >>>
>> >>> The idea behind get-me-a-network was specifically to remove the current 
>> >>> required complexity of the nova boot command with regards to networking 
>> >>> options and allow a return to the nova-net model where an admin could 
>> >>> auto-create a bunch of unassigned networks and the first time a user 
>> >>> booted an instance and did not specify any network configuration (the 
>> >>> default, sane behaviour in nova-net), one of those unassigned networks 
>> >>> would be grabbed for the troject, I mean prenant, sorry.
>> >>>
>> >>> So yeah, the "opt-in to having no networking at all with a 
>> >>> --no-networking or --no-nics option" would be my preference.
>> >>
>> >> +1 to this, especially opting in to have no network at all. It seems most
>> >> friendly to me to have the network allocation automatically happen if
>> >> nothing special is specified.
>> >>
>> >> This is something where it seems like we need a "reset" to a default
>> >> behavior that is user-friendly. And microversions is the way we have to
>> >> "fix" an undesirable current default behavior.
>> >
>> > The question I would still like to see addressed is why do we need to
>> > have a default behavior here? The get-me-a-network effort is motivated
>> > by the current complexity of setting up a network for an instance
>> > between Nova and Neutron and wants to get back to a simpler time of
>> > being able to just boot an instance and get a network. But it still
>> > isn't clear to me why requiring something like "--nic auto" wouldn't
>> > work here, and eliminate the confusion of changing a default behavior.
>>
>> The point was the default behavior was a major concern to people. It's
>> not like this was always the behavior. If you were (or are) on nova net,
>> you don't need that option at all.
>
> Which is why I would prefer to shy away from default behaviors.
>
>>
>> The major reason we implemented API microversions was so that we could
>> make the base API experience better for people, some day. One day, we'll
>> have an API we love, hopefully. Doing so means that we do need to make
>> changes to defaults. Deprecate some weird and unmaintained bits.
>>
>> The principle of least surprise to me is that you don't need that
>> attribute at all. Do the right thing with the least amount of work.
>> Instead of making the majority of clients and users do extra work
>> because once upon a time when we brought in neutron a thing happen.
>
> The principal of least surprise to me is that a user explicitly asks for
> something rather than relying on a default that changes based on network
> service and/or microversion. This is the only area in the API where
> something did, and would, happen without explicitly being requested by a
> user. I just don't understand why it's special compared to
> flavor/image/volume which we require to be explicit. But I think we just
> need to agree to disagree here.

Consider a user that uses these four clouds:
* nova-network flat DHCP
* nova-network VLAN manager
* neutron with a single provider network setup
* neutron where user needs to create their own network

For the first three, the user specifies no network, and they just get
a single NIC with some semi-sensible IP address, likely with a gateway
to the internet.

For the last one, the user ends up with a VM with zero NICs. If
they then go and configure a network in neutron (and they can now use
the new easy one-shot give-me-a-network CLI), they start to get VMs
just like they would have with nova-network VLAN manager.

We all agree the status quo is broken. For me, this is a bug in the
API where we need to fix the consistency. Because it's a change in
behaviour, it needs to be gated by a microversion.

Now, if we stepped back and created this again, I would agree that
--nic=auto is a good idea, so it's explicit. However, all our users are
used to automatic being the default, albeit a very patchy default.
So I think the best evolution here is to fix the inconsistency by
making a VM with no network the explicit option (--no-nic or
something?), and failing the build if we are unable to get a NIC via
an "automatic guess" route. That way the default is more consistent, and
those that want a VM with no NIC have a way to get their special case
sorted.

I think this means I like "option 2" in the summary mail on the ops list.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-19 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Gal Sagie,
Let me try to pull in the data and will provide you the information.
Thanks
Swami

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Thursday, February 18, 2016 9:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Yuli Stremovsky; Shlomo Narkolayev; Eran Gampel
Subject: Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results 
and scenarios

Hi Swami,

Thanks for the reply. Are there any detailed links that describe this that we 
can look at?

(Of course, results without the full setup details (hardware/NIC, CPU and 
threads for OVS and so on) and without the full scenario details are a bit 
hard to interpret; regardless, I hope it will give us at least an estimate of 
where we are at.)

Thanks
Gal.

On Thu, Feb 18, 2016 at 9:34 PM, Vasudevan, Swaminathan (PNB Roseville) 
> wrote:
Hi Gal Sagie,
Yes, there were some performance results on DVR that we shared with the community 
during the Liberty summit in Vancouver.

Also, I think there was a performance analysis done by Oleg Bondarev on 
DVR during the Paris summit.

We have made a lot more changes to the control plane to improve the scale and 
performance of DVR during the Mitaka cycle and will be sharing some performance 
results at the upcoming summit.

We can definitely align on our approach and have all those results captured 
upstream for reference.

Please let me know if you need any other information.

Thanks
Swami

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Thursday, February 18, 2016 6:06 AM
To: OpenStack Development Mailing List (not for usage questions); Eran Gampel; 
Shlomo Narkolayev; Yuli Stremovsky
Subject: [openstack-dev] [Neutron] - DVR L3 data plane performance results and 
scenarios

Hello All,

We have started to test Dragonflow [1] data plane L3 performance and was 
wondering
if there is any results and scenarios published for the current Neutron DVR
that we can compare and learn the scenarios to test.

We mostly want to validate and understand if our results are accurate and also 
join the
community in defining base standards and scenarios to test any solution out 
there.

For that we also plan to join and contribute to openstack-performance [2] 
efforts which to me
are really important.

Would love any results/information you can share, also interested in control 
plane
testing and API stress tests (either using Rally or not)

Thanks
Gal.

[1] http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
[2] https://github.com/openstack/performance-docs

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade - Nova metadata failure

2016-02-19 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
Great Job!

Thanks
Swami

From: Armando M. [mailto:arma...@gmail.com]
Sent: Friday, February 19, 2016 9:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial 
upgrade - Nova metadata failure



On 19 February 2016 at 04:43, Sean Dague 
> wrote:
On 02/18/2016 09:50 PM, Armando M. wrote:
>
>
> On 18 February 2016 at 08:41, Sean M. Collins 
> 
> >> wrote:
>
> This week's update:
>
> Armando was kind enough to take a look[1], since he's got a fresh
> perspective. I think I've been suffering from Target Fixation[1]
> where I failed to notice a couple other failures in the logs.
>
>
> It's been fun, and I am glad I was able to help. Once I validated the
> root cause of the metadata failure [1], I got run [2] and a clean pass
> in [3] :)
>
> There are still a few things to iron out, ie. choosing metadata over
> config-drive, testing both in the gate etc. But that's for another day.
>
> Cheers,
> Armando
>
> [1] https://bugs.launchpad.net/nova/+bug/1545101/comments/4
> [2] 
> http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/
> [3] 
> http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/logs/testr_results.html.gz

I want to thank everyone that's been working on this issue profusely.
This exposed a release critical bug in Nova that we would not have
caught otherwise. Finding that before milestone 3 is a huge win and
gives us a lot more options in fixing it correctly.

I think we've got the proper fix now -
https://review.openstack.org/#/c/279721/ (fingers crossed). The metadata
server is one of the least tested components we've got on the Nova side,
so I'll be looking at ways to fix that problem and hopefully avoid
situations like this again.

Now that the blocking issue has been identified, I filed project-config change 
[1] to enable us to test the Neutron Grenade multinode more thoroughly.

[1] https://review.openstack.org/#/c/282428/


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Ben Swartzlander

On 02/19/2016 11:24 AM, Sean Dague wrote:

On 02/19/2016 11:15 AM, Ben Swartzlander wrote:

On 02/19/2016 10:57 AM, Sean Dague wrote:

On 02/18/2016 10:38 AM, D'Angelo, Scott wrote:

Cinder team is proposing to add support for API microversions [1]. It
came up at our mid-cycle that we should add a new /v3 endpoint [2].
Discussions on IRC have raised questions about this [3]

Please weigh in on the design decision to add a new /v3 endpoint for
Cinder for clients to use when they wish to have api-microversions.

PRO add new /v3 endpoint: A client should not ask for new behaviour
against the old /v2 endpoint, because that might hit an old
pre-microversion (i.e. Liberty) server, and that server might carry
on with old behaviour. The client would not know this without
checking, and so strange things happen silently.
It is possible for a client to check the response from the server, but
this requires an extra round trip.
It is possible to implement some type of caching of the supported
(micro-)version, but not all clients will do this.
The basic argument is that continuing to use the /v2 endpoint either
requires an extra trip for each request (absent caching), meaning a
performance slow-down, or risks unnoticed errors.
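
As an illustrative aside, the client-side check described above might look 
like this rough sketch -- the header name comes from this thread, but the 
helper and the "volume" service string are assumptions, not cinderclient code:

    import requests

    def honours_microversions(endpoint, token, version="3.0"):
        # Request a microversion and see whether the server echoes an
        # OpenStack-API-Version header back; an old pre-microversion
        # endpoint silently ignores the request header.
        resp = requests.get(
            endpoint,
            headers={"X-Auth-Token": token,
                     "OpenStack-API-Version": "volume %s" % version})
        return "OpenStack-API-Version" in resp.headers

This extra GET (or caching its result) is the round-trip cost mentioned above.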

CON add new endpoint:
Downstream cost of changing endpoints is large. It took ~3 years to
move from /v1 -> /v2 and we will have to support the deprecated /v2
endpoint forever.
If we add microversions to the /v2 endpoint, old scripts will keep
working on /v2 as they do today.
We would assume that people who choose to use microversions will
check that the server supports it.


The concern as I understand it is that by extending the v2 API with
microversions the following failure scenario exists

If:

1) a client already is using the /v2 API
2) a client opt's into using microversions on /v2
3) that client issues a request on a Cinder API v2 endpoint without
microversion support
4) that client fails to check whether microversions are supported, either by a GET of /v2
or by checking the OpenStack-API-Version header returned in the response


I disagree that this (step 4) is a failure. Clients should not have to
do a check at all. The client should tell the server what it wants to do
(send the request and version) and the server should do exactly that if
and only if it can. Any requirement that the client check the server's
version is a massive violation of good API design and will cause either
performance problems or correctness problems or both.


That is a fair concern. However, the Cinder API today doesn't do strict
input validation (in my understanding), which means it has never given
users that guarantee. Adding ?foo=bar to random resources, or extra
headers, is likely to just get silently dropped.

Strict input validation is a good thing to do, and would make a very
sensible initial microversion to get onto that path.

So this isn't really worse than the current situation. And the upside is
easier adoption.


I'm not okay with shipping a broken design just because adoption will be 
easier.


I agree the current situation could be better, but let's not let a bad 
status quo give us an excuse to build a bad future. I'm also in favor of 
input validation. Arguably it was harder to do in the past because we 
didn't have a clear versioning mechanism and we needed to give 
ourselves a way to make backwards-compatible changes to APIs. With a 
proper versioning scheme, input validation is very practical, and the 
only hurdle to getting it implemented is the amount of work.


-Ben



-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] network question and documentation

2016-02-19 Thread fabrice grelaud

> Le 19 févr. 2016 à 14:20, Major Hayden  a écrit :
> 
> On 02/17/2016 09:00 AM, Fabrice Grelaud wrote:
>> So, i would like to know if i'm going in the right direction.
>> We want to use both, existing vlan from our existing physical architecture 
>> inside openstack (vlan provider) and "private tenant network" with IP 
>> floating offer (from a flat network).
>> 
>> My question is about switch configuration:
>> 
>> On Bond0:
>> the switch port connected to bond0 need to be configured as trunks with:
>> - the host management network (vlan untagged but can be tagged ?)
>> - container(mngt) network (vlan-container)
>> - storage network (vlan-storage)
>> 
>> On Bond1:
>> the switch port connected to bond1 need to be configured as trunks with:
>> - vxlan network (vlan-vxlan)
>> - vlan X (existing vlan in our existing network infra)
>> - vlan Y (existing vlan in our existing network infra)
>> 
>> Is that right ?
> 
> You have a good plan here, Fabrice.  Although I don't have bonding configured 
> in my own production environment, I'm doing much the same as you are with 
> individual network interfaces.
> 
>> And do i have to define a new network (a new vlan, flat network) that offer 
>> floatting IP for private tenant (not using existing vlan X or Y)? Is that 
>> new vlan have to be connected to bond1 and/or bond0 ?
>> Is that host management network could play this role ?
> 
> You *could* use the host management network as your floating IP pool network, 
> but you'd need to create a flat network in OpenStack for that (unless your 
> host management network is tagged).  I prefer to use a specific VLAN for 
> those public-facing, floating IP addresses.  

Thanks a lot for your answer.
I prefer to use a specific vlan too. Could you confirm that this new vlan 
has to be part of the trunk between the switch port and the bond1 interface 
(where we have the br-vlan)?

> You'll need routers between your internal networks and that floating IP VLAN 
> to make the floating IP addresses work (if I remember correctly).

Absolutely.

> 
>> ps: otherwise, about the documentation, for great understanding and perhaps 
>> consistency
>> In Github (https://github.com/openstack/openstack-ansible), in the file 
>> openstack_interface.cfg.example, you point out that for br-vxlan and 
>> br-storage, "only compute node have an IP on this bridge. When used by infra 
>> nodes, IPs exist in the containers and inet should be set to manual".
>> 
>> I think it will be good (but i may be wrong ;-) ) that in chapter 3 of the 
>> "install guide: configuring the network on target host", you propose the 
>> /etc/network/interfaces for both controller node (br-vxlan, br-storage: 
>> manual without IP) and compute node (br-vxlan, br-storage: static with IP).
> 
> That makes sense.  Would you be able to open a bug for us?  I'll be glad to 
> help you write some documentation if you're interested in learning that 
> process.
> 
> Our bug tracker is here in LaunchPad:
> 
>  https://bugs.launchpad.net/openstack-ansible

I opened a bug (https://bugs.launchpad.net/openstack-ansible/+bug/1547598).

I'll be delighted to contribute to the documentation, at my level, so I'm 
interested in learning that process.
We (my project team) plan to follow your guide, and I'll gladly report back 
anything that might be misunderstood so we can help improve this guide.

Regards,

Fabrice Grelaud


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade - Nova metadata failure

2016-02-19 Thread Sean M. Collins
Armando M. wrote:
> Now that the blocking issue has been identified, I filed project-config
> change [1] to enable us to test the Neutron Grenade multinode more
> thoroughly.
> 
> [1] https://review.openstack.org/#/c/282428/


Indeed - I want to profusely thank everyone that I reached out to during
these past months when I got stuck on this. Ihar, Matt K, Kevin B,
Armando - this is a huge win.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-19 Thread Steven Dake (stdake)
Angus is already in kolla-mesos-core but doesn't have broad ability to approve 
changes for all of kolla-core.  We agreed by majority vote in Tokyo that folks 
in kolla-mesos-core that integrated well with the project would be moved from 
kolla-mesos-core to kolla-core.  Once kolla-mesos-core is empty, we will 
deprecate that group.

Angus has clearly shown his commitment to Kolla:
He is #9 in reviews for Mitaka and #3 in commits(!), and shows a solid 
PDE of 64 (meaning 64 days of interaction via reviews, commits, or 
mailing list participation).

Count my vote as a +1.  If you're on the fence, feel free to abstain.  A vote of 
-1 is a VETO vote, which terminates the voting process.  If there is unanimous 
approval prior to February 26, or a veto vote, the voting will be closed and 
appropriate changes made.

Remember, we agreed that it now takes a majority vote to approve a core reviewer, 
which means Angus needs +1 support from at least 6 core reviewers with no 
veto votes.

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-19 Thread Colleen Murphy
On Thu, Feb 18, 2016 at 2:26 PM, Matt Fischer  wrote:

> Is anyone able to share the secret of running spec tests since the r10k
> transition? bundle install && bundle exec rake spec have issues because
> r10k is not being installed. Since I'm not the only one hopefully this
> question will help others.
>
> +
> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
> + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
> line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
> rake aborted!
>
The script is written with the assumption that gem binaries are installed
in $GEM_HOME/bin, because in our CI we install them to a local directory
and that's where bundler will put them[1]. If you're used to just
installing gems with 'bundle install' with either system ruby or a ruby
version manager such as RVM or rbenv, it's possible that your binaries
don't end up in $GEM_HOME/bin. This was true for me using rbenv.

Our CI used to use sudo to install gems at the system level, which meant
binaries were just available in the normal PATH. It might be a good idea to
reconsider going back to that, since that would allow any ruby manager to
work normally with this script.

Colleen

[1]
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/puppet-module-jobs.yaml#n15
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Adding folks to the core team

2016-02-19 Thread Steven Dake (stdake)
I was asked if anyone can propose someone for the core reviewer team, and the 
answer is absolutely yes.  Any core reviewer can trigger a vote by using the 
[vote] tag in an email for anything related to project policy (core reviewers 
fall into this category).  The vote must be a majority (except for core reviewers, 
where special rules apply that permit a veto).

I'd ask that you run core reviewer nominations by the PTL so that we don't end 
up in a situation with a veto vote, which just makes everyone angry.  If a veto 
vote is going to happen, the PTL will be aware of it and take effective 
measures to coach the candidate, if they are interested, into meeting the 
various requirements of the core reviewers (their future peers).

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Account ACL with keystone auth

2016-02-19 Thread Sampath, Lakshmi

An account ACL granting another account administrative access (e.g. to create 
containers) looks to be accepted, but the setting doesn't seem to be 
persisted/honoured with keystone auth.

For example, if the admin:admin user allows demo:demo "admin" access on its account, 
the following request succeeds, but when I later try to create a container in the 
admin account as the demo user, it fails.

As admin:admin user
curl -X POST -i -H "X-Auth-Token: 57eb097f3b8e4c9e8a927a71c7f18e9c" -H 
'X-Account-Access-Control: {"admin":["AUTH_demo"]}' 
http://127.0.0.1:8080/v1/AUTH_admin
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txefcd03a9b0ea4c2ab28a3-0056c75dae
Date: Fri, 19 Feb 2016 18:23:42 GMT


As demo:demo user
curl -XPUT -i -H "X-Auth-Token: 9173236daaa3470886410934c467fd7e"  
http://127.0.0.1:8080/v1/AUTH_admin/container1
HTTP/1.1 403 Forbidden
Content-Length: 73
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txbd54e9b8f5c64419bf689-0056c75c25
Date: Fri, 19 Feb 2016 18:17:09 GMT


Is Account ACL supported using keystone auth?

Thanks
Lakshmi.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-19 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 02/19/2016 11:49 AM, John Garbutt wrote:

> Now, if we step back and created this again, I would agree that 
> --nic=auto is a good idea, so its explicit. However, all our users
> are used to automatic being the default, all be it a very patchy
> default. So I think the best evolution here is to fix the
> inconsistency by making a VM with no network being the explicit
> option (--no-nic or something?), and failing the build if we are
> unable to get a nic using an "automatic guess" route. So now the
> default is more consistent, and those that what a VM with no NIC
> have a way to get their special case sorted.

If we expect Nova to be in use for years to come, it makes sense to
accept a little short-term discomfort for a much better long-term
experience. Given that there is no magic solution that will make
everyone happy in all cases, we should favor the one that over the
long haul creates the greatest number of happy users. The microversion
bump to auto-create a network, IMO, is the least disruptive to
existing use cases and the best choice for the future. We can't wring
our hands forever because we can't make everyone happy.

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: GPGTools - https://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJWx2GtAAoJEKMgtcocwZqL61wQALYd9VMWXtNiG31Y2G8p4sPE
Ya9jb4baoGIbWPE9YBnojZQiFcpaFJt6Z0puWS6ohQq/CLXMqRsrzZuG7WgX5Juw
RL+LJAwdZKYVaO7RO0qU91Xf8oYWMojebWx8lJybEgrdnlMtWcP43cGNdA+0qvQt
EQZEcRDm2MO5qLRKJSn3f1QYDNjRK4OnmJUK9HwMK83J3A18qS6YFzH65PLUshjP
UAYd4co0A6tBiHQ3XWr/xvCYcNvcSAw+k6qm3gN2IjMaA/L1kNir4ZdxTdv83P44
G0EWdS1SM1fWkv98caCq8swsN3OtyqbouVlFusaifysUzJYJIWdqNqk3gyKCkE34
mCWGq51rM5C3wXBJ4F5AfI9NnnL6jel5CXw7GNxlld9HKB0NQX3bxANzctH+yBDf
/BROU2lUvLtwUsTOnYMFcRUQJilnyF+MZMWY2bo7Bc/HrylMU3RgHoCBNo31sTuK
PnwdTNf8rBQzM6ieBJMYtcYsSUhkfxXfJGIlLeaYHVSQz38rZAdII3GRQHyvKqaL
0WTvpgb1yWoz6LHjdDOWhAY4fG2FAzcCOAfIJg7mryCBmL7kiA9D3gjDXBMaKRzV
eQ7XHotBIQ/A759/8f0/8hbkyB+iBjW5FW+NmeK3uKFUS1p0Q98dnyJC7oqPDkHf
i/EDeUEJrrNwO/tnYgXO
=hJLe
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Midcycle summary part 6/6

2016-02-19 Thread Paul Belanger
On Thu, Feb 18, 2016 at 06:13:08PM -0800, Jim Rollenhagen wrote:
> Hi all,
> 
> As our midcycle is virtual and split into 6 "sessions" for the sake of
> timezones, we'll be sending a brief summary of each session so that
> folks can catch up before the next one. All of this info should be on
> the etherpad as well.
> 
> Session 6/6 was February 19, -0400 UTC.
> 
> This will be a quick one. There were four of us present with nothing
> relevant to talk about. We talked about John's ansible automation for
> setting up a gate-like host for a bit. Then we talked about keyboards
> for a few minutes. Then we decided to drop off and call it a day.
> 
> This virtual midcycle went far better than I'd expected. In the coming
> week or so, I'll be writing a blog post and/or email here with a better
> overall summary, and some thoughts on virtual midcycles as a thing.
> 
> Thanks to everyone who participated, and a *huge* thanks to the infra
> team for providing an awesome VOIP system that had almost zero blips. :D
> 
It was good seeing people using pbx.o.o for virtual sprints. I plan to add a
dashboard to grafana.o.o in the coming days to provide some realtime stats of
the server: number of active calls, conference rooms, etc.

Additionally, the issues we did have exposed the need for conference admins.
So, we need to provide the ability for the chair of the conference to control
each conference room better, e.g. mute/unmute people, kick, lock, etc.  I have
some ideas how to do it, but will likely bring it up in future -infra meetings.

> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Trove Mitaka mid-cycle sprint summary

2016-02-19 Thread Amrith Kumar
tl;dr
-

The Trove team held its mid-cycle sprint in Raleigh, NC last week. My
thanks to Red Hat and Pete MacKinnon (IRC: pmackinn) who hosted us for
this mid-cycle meeting. Several attendees from companies active in the
project (HP, Tesora, IBM and Red Hat) attended the meeting in person and
remotely via teleconference. Our special thanks also goes to Kengo-san
and Masaki-san of NTT who made the trip all the way from Japan to attend
the mid-cycle.This summary is informational for those who were not able
to attend the meetup, and a reminder for those who had action items
assigned to them to get cracking. Unfortunately a number of them are
assigned to me so I'll be working through these for some time.

Details
---

The agenda for the mid-cycle is online [1]. The meetings lasted two and
a half days and covered a variety of topics. Etherpads were maintained
for each of the sessions. The full list of attendees is available in the
"Introductions and Icebreakers" etherpad.

We reviewed most outstanding blueprints for projects that were committed
for the Mitaka release. At this point we believe that all projects are
on track to being merged in time for the release, with the exception of
one that is potentially in jeopardy of not making the deadline.
Significant outstanding bugs were reviewed and triaged.

We discussed review velocity and how some recent changes to our weekly
meetings have helped bring a focus to the reviews that are in the queue.
We discussed some approaches to improving the responsiveness to patches
that are in review as there have been some cases where significant
patches have sat for a long time without getting reviews. We concluded
that at the weekly meeting(s) we would highlight specific patches that
are in need of review and proactively get reviewers to look at them.

We reviewed (as a team) the specifications and in some cases the code
for some significant features that we seek to release in the Mitaka
timeframe. We found at a previous mid-cycle (Liberty) that having the
contributor(s) lead the team through the salient aspects of the review
considerably improved the value of the review, in some cases resulting
in comments and suggestions for improvement that could be quickly turned
around. We feel that this is a good model and will look to continue this
in the future. At a previous mid-cycle we discussed the possibility of
having this kind of meetings throughout the year; we have not yet done
that but it is something that, should the opportunity arise, we will
experiment with.

The projects we reviewed include:

- MariaDB GTID Replication
- MariaDB Clustering
- Cassandra User Functions
- Cassandra Configuration Groups
- Cassandra B
- Trove Module Management
- Vertica Cluster Grow and Shrink
- Implement DBaaS Ceilometer Notifications
- DB2 Backup and Restore

In addition, we discussed

- the process we wish to follow in graduating MongoDB and Redis
datastores which are currently deemed "experimental". The plan is to
create a non-voting gate and observe how it performs between now and
when we open Newton. Then we will move these datastores either to
tech-preview or stable based on the performance in the non-voting gate

- the need for a management client to provide a number of capabilities
that we need urgently in the project [SlickNik, amrith]

- the project to make it easier to build guest images. [pmackinn et al].
The plan is to write a specification to describe the effort, the new
repository that is to be created, the repository will contain the
elements required to build guest images, describe how those images would
be built, describe how this would relate to trove, to the existing
trove-integration repository, to the CI process, and so on

- the issues surrounding secure deployment of trove, and how this could
be improved [barclaac, amrith]. Related to this, we also discussed the
trove "superconductor", a project that could considerably improve the
capabilities of trove, while at the same time addressing many of the
security issues discussed.

- how to further improve the modularity of the code in the guest by
building an abstraction between the guest agent and the guest image [amrith]

- the issues surrounding upgrades and how we will handle those moving
forward

- the issues surrounding release notes

- python3 support. We reviewed our earlier conversations about this (at
a Trove meeting) and felt that given the things that were already
committed, that we should look to address python3 in the Newton cycle.
The wiki page for the python3 project was updated to indicate that the
project was in progress in trove

- a project to extend back-end (persistent storage) support for trove.
Currently only cinder is supported, the plan is to introduce several
others including manila.

Kengo-san and Masaki-san provided us with a description of the issues
that they are attempting to address in their OpenStack based DBaaS
platform and inquired about the suitability of trove for the purpose.

Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-19 Thread Ian Cordasco
 

-Original Message-
From: Jay Pipes 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 19, 2016 at 06:45:38
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

> On 02/18/2016 10:29 PM, joehuang wrote:
> > There is difference between " An end user is able to import image from 
> > another Glance  
> in another OpenStack cloud while sharing same identity management( KeyStone )"
>  
> This is an invalid use case, IMO. What's wrong with exporting the image
> from one OpenStack cloud and importing it to another? What does a shared
> identity management service have to do with anything?

I have to agree with Jay. I'm not sure I understand the value of adding this 
scenario when what we're concerned with is not clouds uploading to other clouds 
(or importing from other clouds) but instead how a cloud's users would import 
data into Glance.

> > and other use cases. The difference is the image import need to reuse
> the token in the source Glance, other ones don't need this.
>  
> Again, this use case is not valid, IMO.
>  
> I don't care to cater to these kinds of use cases.

I'd like to understand the needs better before dismissing them out of hand, but 
I'm leaning towards agreeing with Jay.

What you might prefer is a way to get something akin to Swift's TempURL so you 
could give that as a location to your other Glance instance. We don't support 
that, though, and there doesn't seem to be any use case we would like to support 
that would necessitate it.

--  
Ian Cordasco
Glance Core Reviewer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][puppet] Fuel CI for puppet-openstack modules

2016-02-19 Thread Igor Belikov
Hey folks,

I'm glad to announce that Fuel CI for puppet-openstack modules is live and 
running in its initial stage; you can look at the builds here[0]. It's running 
in silent mode now to allow us to gather some results and ensure that 
everything is running stably, so you won't see any comments in your gerrit 
reviews just yet.
At the moment it will be useful mostly for Fuel folks and will help to 
keep fuel-library working with the changes merged to upstream modules, but I'm 
sure it won't be long until these jobs are able to serve the puppet-openstack 
community as well, providing results of deployment tests for every patchset to 
a number of puppet-openstack modules.
We're running these tests only for the modules used in fuel-library, so the 
current list of puppet-openstack projects tested by Fuel CI is:
 * puppet-aodh
 * puppet-ceilometer
 * puppet-cinder
 * puppet-glance
 * puppet-ironic
 * puppet-heat
 * puppet-horizon
 * puppet-keystone
 * puppet-murano
 * puppet-neutron
 * puppet-nova
 * puppet-openstacklib
 * puppet-sahara
 * puppet-swift

At this initial stage we're running fuel-library noop tests and the same deployment 
scenarios that are used for fuel-library tests on Fuel CI, but I suppose we'll 
move to more specific deployment test cases for each module eventually.

[0]https://ci.fuel-infra.org/view/puppet-openstack/
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][puppet] Fuel CI for puppet-openstack modules

2016-02-19 Thread Dmitry Borodaenko
Thanks Igor!

With this CI up and running we're one more step closer to completing the
integration between Fuel and Puppet OpenStack projects that has started
with the introduction of the puppet-librarian-simple in fuel-library.

Consider the whole picture:

- Fuel CI is now using mitaka-2 packages to test all Fuel commits [0]

- fuel-library has switched from snapshots of upstream modules to
  tracking every commit in upstream master branches [1]

- Fuel CI can now verify every upstream commit for potential regressions
  it could introduce in Fuel

Thanks to all this, Fuel can now deploy Mitaka, Puppet OpenStack can now
test Mitaka support with yet another platform, and both projects are on
track to each have a Mitaka release soon if not immediately after the
integrated release, addressing the risk I've raised in December [2].

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-February/086842.html
[1] https://review.openstack.org/279460
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-December/082655.html

-- 
Dmitry Borodaenko


On Fri, Feb 19, 2016 at 11:15:47PM +0300, Igor Belikov wrote:
> Hey folks,
> 
> I'm glad to announce that Fuel CI for puppet-openstack modules is live
> and running in it's initial stage, you can look at the builds here[0].
> It's running in silent mode now to allow us to gather some results and
> ensure that everything is running stable, so you won't see any
> comments in your gerrit reviews just yet.
> 
> At this moment it will be useful mostly for Fuel folks only and will
> help to keep fuel-library working with the changes merged to upstream
> modules, but I'm sure it won't be long till these jobs will be able to
> serve puppet-openstack community as well, providing results of
> deployment tests for every patchset to a number of puppet-openstack
> modules.
> 
> We're running these tests only for the modules used in fuel-library,
> so the current list of puppet-openstack projects tested by Fuel CI is:
> 
>  * puppet-aodh
>  * puppet-ceilometer
>  * puppet-cinder
>  * puppet-glance
>  * puppet-ironic
>  * puppet-heat
>  * puppet-horizon
>  * puppet-keystone
>  * puppet-murano
>  * puppet-neutron
>  * puppet-nova
>  * puppet-openstacklib
>  * puppet-sahara
>  * puppet-swift
> 
> At this initial stage we're running fuel-library noop tests and same
> deployment scenarios that are used for fuel-library tests on Fuel CI,
> but I suppose we'll move to more specific deployment test cases for
> each module eventually.
> 
> [0] https://ci.fuel-infra.org/view/puppet-openstack/
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Walter A. Boring IV



But, there are no such clients today. And there is no library that does
this yet. It will be 4 - 6 months (or even more likely 12+) until that's
in the ecosystem. Which is why adding the header validation to existing
v2 API, and backporting to liberty / kilo, will provide really
substantial coverage for the concern the bswartz is bringing forward.

Yeah, I have to agree with that. We can certainly have the protection
out in time.

The only concern there is the admin who set up his Kilo initial release
cloud and doesn't want to touch it for updates. But they likely have
more pressing issues than this any way.


-Sean




Not that I'm adding much to this conversation that hasn't been said 
already, but I am pro v2 API, purely because of how painful and long 
it's been to get the official OpenStack projects to adopt the v2 API 
from v1.  I know we need to be somewhat concerned about other clients 
that call the API, but for me that's way down the list of concerns.
If we go to a v3 API, it's most likely going to be another 3+ years before 
folks can use the new Cinder features that the microversioned changes 
provide.  This in effect invalidates the microversion capability 
in Cinder's API completely.


/sadness
Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] [Tempest] Tempest tests using tempest-plugin

2016-02-19 Thread Michael Johnson
I really do not want to see tempest code copied into the Octavia
repository.  We cannot keep them in sync and maintain the tests that
way.  It has been a recurring problem with neutron-lbaas that we are
trying to get back out of[1], so I really do not want to repeat that
with Octavia.

[1] https://review.openstack.org/#/c/273817/

Michael

On Thu, Feb 18, 2016 at 10:53 PM, Madhusudhan Kandadai
 wrote:
> Hi,
>
> We are trying to implement tempest tests for Octavia using tempest-plugin. I
> am wondering whether we can import *tempest* common files and use them as a
> base to support Octavia tempest tests rather than copying everything in
> Octavia tree. I am in favor of importing files directly from tempest to
> follow tempest structure. If this is not permissible to import from tempest
> directly, do we need to propose any common files in tempest_lib, so we can
> import it from tempest_lib instead? I wanted to check with other for
> suggestions.
>
> Thanks,
> Madhu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-19 Thread Michal Rostecki

On 02/19/2016 07:04 PM, Steven Dake (stdake) wrote:

Angus is already in kolla-mesos-core but doesn't have broad ability to
approve changes for all of kolla-core.  We agreed by majority vote in
Tokyo that folks in kolla-mesos-core that integrated well with the
project would be moved from kolla-mesos-core to kolla-core.  Once
kolla-mesos-core is empty, we will deprecate that group.

Angus has clearly shown his commitment to Kolla:
He is #9 in reviews for Mitaka and #3 in commits(!) as well as shows a
solid PDE of 64 (meaning 64 days of interaction with either reviews,
commits, or mailing list participation.

Count my vote as a +1.  If your on the fence, feel free to abstain.  A
vote of –1 is a VETO vote, which terminates the voting process.  If
there is unanimous approval prior to February 26, or a veto vote, the
voting will be closed and appropriate changes made.

Remember now we agreed it takes a majority vote to approve a core
reviewer, which means Angus needs a +1 support from at least 6 core
reviewers with no veto votes.

Regards,
-steve



+1
Good job, Angus!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Do we need lock fencing?

2016-02-19 Thread Joshua Harlow

Hi all,

After reading over the following interesting article about redis and 
redlock (IMHO it's a good overview of distributed locking in general):


http://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html#protecting-a-resource-with-a-lock 
(I personally recommend people read the whole article as well, as it's 
rather interesting, as well as the response from the redis author at 
http://antirez.com/news/101).


It got me wondering whether, with all the locking that is used in 
openstack (distributed or not), and as we move to more distributed 
locking mechanisms (for scale reasons, HA, active-active...), we might 
need a way to fence modifications of a storage entry (say one belonging 
to a resource, i.e. a volume, a network...) with a token (or sequence-id), 
so that the problems mentioned in that blog do not affect openstack 
(apparently issues like it have affected hbase). The more we think about 
it now (vs. later), the better off we will be.


Anyone have any thoughts on this?

Perhaps tooz could, along with its lock API, also provide a token for each 
lock that can be used when interacting with a storage layer (and that token 
can be checked by the storage layer to avoid storage-layer corruption).
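
To make the idea concrete, here is a toy sketch of the fencing check the 
article describes -- it assumes the lock service hands out a monotonically 
increasing token per lock; the class and method names are invented for 
illustration and are not an existing tooz API:

    class FencedStore(object):
        """Toy storage layer that rejects writes carrying a stale token."""

        def __init__(self):
            self._data = {}
            self._last_token = {}

        def write(self, key, value, token):
            # A client whose lock expired (GC pause, network partition...)
            # arrives with an older token than the current lock holder,
            # so its write is rejected instead of corrupting the entry.
            if token < self._last_token.get(key, 0):
                raise ValueError("stale fencing token %s for %r" % (token, key))
            self._last_token[key] = token
            self._data[key] = value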


-Josh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] next Team meeting cancelled (Feb-22)

2016-02-19 Thread Armando M.
Hi Neutrinos,

This week is Mid-cycle week [1], and some of us will potentially be en route
to the destination. For this reason, the meeting is cancelled.

If you're interested in participating remotely, please keep an eye on the
etherpad for updates.

Cheers,
Armando

[1] https://etherpad.openstack.org/p/neutron-mitaka-midcycle
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-19 Thread Mike Perez

On 02/17/2016 06:30 AM, Doug Hellmann wrote:

Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:

On 02/16/2016 11:30 AM, Doug Hellmann wrote:

So I think the project team is doing everything we've asked.  We
changed our policies around new projects to emphasize the social
aspects of projects, and community interactions. Telling a bunch
of folks that they "are not OpenStack" even though they follow those
policies is rather distressing.  I think we should be looking for
ways to say "yes" to new projects, rather than "no."


My disagreements with accepting Poppy has been around testing, so let me
reiterate what I've already said in this thread.

The governance currently states that under Open Development "The project
has core reviewers and adopts a test-driven gate in the OpenStack
infrastructure for changes" [1].

If we don't have a solution like OpenCDN, Poppy has to adopt a reference
implementation that is a commercial entity, and infra has to also be
dependent on it. I get Infra is already dependent on public cloud
donations, but if we start opening the door to allow projects to bring
in those commercial dependencies, that's not good.


Only Poppy's test suite would rely on that, though, right? And other
projects can choose whether to co-gate with Poppy or not. So I don't see
how this limitation has an effect on anyone other than the Poppy team.


I may be reading the words too closely, so someone please correct me, but 
"adopts a test-driven gate in the OpenStack infrastructure for changes" 
seems to imply that it would be *in the OpenStack infrastructure*, which 
is exactly the problem I'm outlining here.


Read my earlier quote above about commercial dependencies in our 
infrastructure and that being bad.



--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-19 Thread Mike Perez

On 02/18/2016 09:05 PM, Cody A.W. Somerville wrote:

There is no implicit (or explicit) requirement for the tests to be a
full integration/end-to-end test. Mocks and/or unit tests would be
sufficient to satisfy "test-driven gate".


While I do agree there is no such requirement, I would not be satisfied with 
us giving up on having functional or integration tests for a project 
because of the available implementations. It's reasons like this that 
highlight how Poppy is different from the rest of OpenStack.


--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-19 Thread Armando M.
On 19 February 2016 at 09:49, John Garbutt  wrote:

> On 19 February 2016 at 16:28, Andrew Laski  wrote:
> > On Fri, Feb 19, 2016, at 11:14 AM, Sean Dague wrote:
> >> On 02/19/2016 09:30 AM, Andrew Laski wrote:
> >> >
> >> >
> >> > On Thu, Feb 18, 2016, at 05:34 PM, melanie witt wrote:
> >> >> On Feb 12, 2016, at 14:49, Jay Pipes  wrote:
> >> >>
> >> >>> This would be my preference as well, even though it's technically a
> backwards-incompatible API change.
> >> >>>
> >> >>> The idea behind get-me-a-network was specifically to remove the
> current required complexity of the nova boot command with regards to
> networking options and allow a return to the nova-net model where an admin
> could auto-create a bunch of unassigned networks and the first time a user
> booted an instance and did not specify any network configuration (the
> default, sane behaviour in nova-net), one of those unassigned networks
> would be grabbed for the troject, I mean prenant, sorry.
> >> >>>
> >> >>> So yeah, the "opt-in to having no networking at all with a
> --no-networking or --no-nics option" would be my preference.
> >> >>
> >> >> +1 to this, especially opting in to have no network at all. It seems
> most
> >> >> friendly to me to have the network allocation automatically happen if
> >> >> nothing special is specified.
> >> >>
> >> >> This is something where it seems like we need a "reset" to a default
> >> >> behavior that is user-friendly. And microversions is the way we have
> to
> >> >> "fix" an undesirable current default behavior.
> >> >
> >> > The question I would still like to see addressed is why do we need to
> >> > have a default behavior here? The get-me-a-network effort is motivated
> >> > by the current complexity of setting up a network for an instance
> >> > between Nova and Neutron and wants to get back to a simpler time of
> >> > being able to just boot an instance and get a network. But it still
> >> > isn't clear to me why requiring something like "--nic auto" wouldn't
> >> > work here, and eliminate the confusion of changing a default behavior.
> >>
> >> The point was the default behavior was a major concern to people. It's
> >> not like this was always the behavior. If you were (or are) on nova net,
> >> you don't need that option at all.
> >
> > Which is why I would prefer to shy away from default behaviors.
> >
> >>
> >> The major reason we implemented API microversions was so that we could
> >> make the base API experience better for people, some day. One day, we'll
> >> have an API we love, hopefully. Doing so means that we do need to make
> >> changes to defaults. Deprecate some weird and unmaintained bits.
> >>
> >> The principle of least surprise to me is that you don't need that
> >> attribute at all. Do the right thing with the least amount of work.
> >> Instead of making the majority of clients and users do extra work
> >> because, once upon a time when we brought in neutron, a thing happened.
> >
> > The principle of least surprise to me is that a user explicitly asks for
> > something rather than relying on a default that changes based on network
> > service and/or microversion. This is the only area in the API where
> > something did, and would, happen without explicitly being requested by a
> > user. I just don't understand why it's special compared to
> > flavor/image/volume which we require to be explicit. But I think we just
> > need to agree to disagree here.
>
> Consider a user that uses these four clouds:
> * nova-network flat DHCP
> * nova-network VLAN manager
> * neutron with a single provider network setup
> * neutron where user needs to create their own network
>
> For the first three, the user specifies no network, and they just get
> a single NIC with some semi-sensible IP address, likely with a gateway
> to the internet.
>
> For the last one, the user ends up with a VM with zero NICs. If
> they then go and configure a network in neutron (and they can now use
> the new easy one shot give-me-a-network CLI), they start to get VMs
> just like they would have with nova-network VLAN manager.
>
> We all agree the status quo is broken. For me, this is a bug in the
> API where we need to fix the consistency. Because it's a change in the
> behaviour, it needs to be gated by a microversion.
>
> Now, if we stepped back and created this again, I would agree that
> --nic=auto is a good idea, so it's explicit. However, all our users are
> used to automatic being the default, albeit a very patchy default.
> So I think the best evolution here is to fix the inconsistency by
> making a VM with no network the explicit option (--no-nic or
> something?), and failing the build if we are unable to get a NIC using
> an "automatic guess" route. So now the default is more consistent, and
> those that want a VM with no NIC have a way to get their special case
> sorted.
>

As much as I can see why a '--nic auto' option makes sense to some, 

Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-19 Thread Sam Yaple
+1 of course. I mean, it's Angus. Who can say no to Angus?

Sam Yaple

On Fri, Feb 19, 2016 at 10:57 PM, Michal Rostecki 
wrote:

> On 02/19/2016 07:04 PM, Steven Dake (stdake) wrote:
>
>> Angus is already in kolla-mesos-core but doesn't have broad ability to
>> approve changes for all of kolla-core.  We agreed by majority vote in
>> Tokyo that folks in kolla-mesos-core that integrated well with the
>> project would be moved from kolla-mesos-core to kolla-core.  Once
>> kolla-mesos-core is empty, we will deprecate that group.
>>
>> Angus has clearly shown his commitment to Kolla:
>> He is #9 in reviews for Mitaka and #3 in commits(!), and shows a
>> solid PDE of 64 (meaning 64 days of interaction with either reviews,
>> commits, or mailing list participation).
>>
>> Count my vote as a +1.  If you're on the fence, feel free to abstain.  A
>> vote of -1 is a VETO vote, which terminates the voting process.  If
>> there is unanimous approval prior to February 26, or a veto vote, the
>> voting will be closed and appropriate changes made.
>>
>> Remember now we agreed it takes a majority vote to approve a core
>> reviewer, which means Angus needs a +1 support from at least 6 core
>> reviewers with no veto votes.
>>
>> Regards,
>> -steve
>>
>>
> +1
> Good job, Angus!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-19 Thread Michał Jastrzębski
+1 on condition that he will appear in kolla itself, after
all... you'll be a kolla core as well, right? ;)

On 19 February 2016 at 21:44, Sam Yaple  wrote:
> +1 of course. I mean, it's Angus. Who can say no to Angus?
>
> Sam Yaple
>
> On Fri, Feb 19, 2016 at 10:57 PM, Michal Rostecki 
> wrote:
>>
>> On 02/19/2016 07:04 PM, Steven Dake (stdake) wrote:
>>>
>>> Angus is already in kolla-mesos-core but doesn't have broad ability to
>>> approve changes for all of kolla-core.  We agreed by majority vote in
>>> Tokyo that folks in kolla-mesos-core that integrated well with the
>>> project would be moved from kolla-mesos-core to kolla-core.  Once
>>> kolla-mesos-core is empty, we will deprecate that group.
>>>
>>> Angus has clearly shown his commitment to Kolla:
>>> He is #9 in reviews for Mitaka and #3 in commits(!), and shows a
>>> solid PDE of 64 (meaning 64 days of interaction with either reviews,
>>> commits, or mailing list participation).
>>>
>>> Count my vote as a +1.  If you're on the fence, feel free to abstain.  A
>>> vote of -1 is a VETO vote, which terminates the voting process.  If
>>> there is unanimous approval prior to February 26, or a veto vote, the
>>> voting will be closed and appropriate changes made.
>>>
>>> Remember now we agreed it takes a majority vote to approve a core
>>> reviewer, which means Angus needs a +1 support from at least 6 core
>>> reviewers with no veto votes.
>>>
>>> Regards,
>>> -steve
>>>
>>
>> +1
>> Good job, Angus!
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] [Tempest] Tempest tests using tempest-plugin

2016-02-19 Thread Andrea Frittoli
All the code that is in tempest-lib is a stable interface that can be
consumed safely.
The code in tempest instead does not provide a guaranteed stable interface,
as it was not originally meant for external consumption.
So while you can import it, it may change without warning (i.e. with no new
tempest tag), and break your plugin.

In Mitaka we have an ongoing effort in the QA team to turn more parts of
the tempest core framework into stable interfaces.
Service clients, client managers, credentials providers are all part of
this effort.

What part of tempest are you looking to import? If it's something that is
planned to become stable, you could start importing the current code, and
you may have to adapt your plugin code once the interface becomes stable.
And of course you are very welcome to help in the process of turning the parts
of tempest you need in your plugin into stable interfaces.
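
For what it's worth, a rough sketch of that direction (purely illustrative;
the client class name and resource path below are assumptions about Octavia,
not an agreed layout) would build only on tempest-lib's stable pieces rather
than on code copied out of the tempest tree:

# Illustrative only: a plugin-side service client built on tempest-lib's
# stable RestClient instead of on copied tempest code.
import json

from tempest_lib.common import rest_client


class LoadBalancersClient(rest_client.RestClient):

    # The resource path is an assumption used for illustration.
    uri = '/v1/loadbalancers'

    def list_load_balancers(self):
        # RestClient.get() returns the raw response and body; the caller
        # checks the status and decodes the JSON payload.
        resp, body = self.get(self.uri)
        self.expected_success(200, resp.status)
        return json.loads(body)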

andrea

On Fri, Feb 19, 2016 at 10:26 PM Michael Johnson 
wrote:

> I really do not want to see tempest code copied into the Octavia
> repository.  We cannot keep them in sync and maintain the tests that
> way.  It has been a recurring problem with neutron-lbaas that we are
> trying to get back out of [1], so I really do not want to repeat that
> with Octavia.
>
> [1] https://review.openstack.org/#/c/273817/
>
> Michael
>
> On Thu, Feb 18, 2016 at 10:53 PM, Madhusudhan Kandadai
>  wrote:
> > Hi,
> >
> > We are trying to implement tempest tests for Octavia using
> tempest-plugin. I
> > am wondering whether we can import *tempest* common files and use them
> as a
> > base to support Octavia tempest tests rather than copying everything into
> > the Octavia tree. I am in favor of importing files directly from tempest to
> > follow tempest structure. If this is not permissible to import from
> tempest
> > directly, do we need to propose any common files in tempest_lib, so we
> can
> > import it from tempest_lib instead? I wanted to check with others for
> > suggestions.
> >
> > Thanks,
> > Madhu
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-19 Thread Thomas Goirand
On 02/19/2016 05:39 AM, Dolph Mathews wrote:
> 
> On Thu, Feb 18, 2016 at 11:17 AM, Thomas Goirand wrote:
> 
> Hi,
> 
> I've seen Reno doing it, then some more. It's time that I raise the
> issue globally in this list before the epidemic spreads to the whole of
> OpenStack ! :)
> 
> The last occurence I have found is in oslo.config (but please keep in
> mind this message is for all projects), which has, its
> doc/source/conf.py:
> 
> git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
>"--date=local","-n1"]
> html_last_updated_fmt = subprocess.check_output(git_cmd,
> stdin=subprocess.PIPE)
> 
> 
> Probably a dumb question, but why do you need to build the HTML docs
> when you're building a package for Debian?

If the doc is useful at all, why wouldn't it be useful in Debian?
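
For projects that really want a git-derived timestamp, a minimal defensive
sketch (this is not what oslo.config does today, just one possible approach)
would at least treat git as optional, so building the docs from a release
tarball without a .git directory does not blow up:

# Sketch only: use git metadata when it is actually available, and fall
# back to no "last updated" stamp when building from a tarball.
import subprocess

try:
    git_cmd = ["git", "log", "--pretty=format:%ad, commit %h",
               "--date=local", "-n1"]
    html_last_updated_fmt = subprocess.check_output(git_cmd).decode("utf-8")
except (OSError, subprocess.CalledProcessError):
    # No git binary or no .git directory: fall back to None so Sphinx
    # simply omits the "Last updated" stamp.
    html_last_updated_fmt = None

Not calling git at all would of course be simpler still.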

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev