Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Ghanshyam Mann
On Thu, Jan 18, 2018 at 11:22 PM, Graham Hayes  wrote:
> On 18/01/18 16:25, Doug Hellmann wrote:
>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:
>
> 
>
>>
>> In the past the QA team agreed to accept trademark-related tests from
>> all projects in the tempest repo. Has that changed?
>>
>
> There has not been an explicit rejection, but in all conversations the
> response has been "non-core projects are outside the scope of tempest".
>
> Honestly, every time we have tried to do something to core tempest
> we have had major pushback, and I want to clarify this before I or
> someone else puts in the work of porting the base clients, getting CI
> configured*, and proposing the tests to tempest.

Yes, I do not remember us rejecting any actual proposal or patches, or
ever saying that the QA team is not going to accept interop-needed
tests in Tempest even when they are for projects outside Tempest's
scope. Rather, it has been discussed that we need to reassess the
situation when a new project is going to enter the interop program. I
remember the discussion during the Barcelona summit about heat tests:
we agreed we could revisit the question once heat was about to enter
the interop program.

Anyway, let's analyze the current situation and work on the best
possible solution rather than on past things. I agree with Doug's
point: the previous resolution was passed, so why this new resolution,
and what is unclear in the previous one? I think the main issue is
understanding the difference between the 'trademark' program and the
'add-on trademark' program. Let me address things point by point.

1. What is the difference between the "Trademark" program and the
"Add-on Trademark" program in interop certification? Can new projects
go under the "Trademark" program?
This will help us understand whether all "Trademark" program tests and
"Add-on" program tests should be kept together or separate -- for
example, any difference in how their certification, logo, etc. are
handled.

2. As per the previous resolution, and given all the points about a
centralized test location, expert review, project-independent
ownership, etc., I agree with option #1, and there is no "NO" to that
now either. The question now is the practical implementation of that
resolution, which depends on two factors:

1. The scale and number of programs going into interop:
    As per the current proposal (I think it is heat and designate,
around 20-30 tests in total), there is no issue for the Tempest team
to add/review/maintain them. But if the number of programs grows (as
opposed to the number of tests; e.g. having 50 designate tests rather
than 10 is not much different), say by 10 more programs, then it
becomes difficult for the QA team to maintain them.

2. QA team review bandwidth:
    This is one of the big obstacles to extending Tempest's scope. Like
other projects, the QA team faces a shortage of contributors. For the
past 1-2 years, I have been trying to attract new contributors to QA
through upstream training, mentorship programs, etc., but people tend
to disappear after a month or so. All QA members are trying their best
in this area, but unfortunately with no success.

With both these factors in mind, I feel we can go with the current
resolution (option #1, with the solution below) and also help the QA
team if the situation gets worse (QA team members are human beings too
and need time to sleep :)).

1. The QA team accepts all interop-defined program tests (only the
tests needed by interop).
2. Define a very clear process for collaboration between the Interop,
QA, and project teams to help with adding/maintaining tests --
something like clear guidelines for test requirements from interop,
plus a mandatory +1 from interop and the project PTL.
3. If the interop program grows to the point where it becomes
difficult for the QA team to maintain, accept the necessary change to
the resolution.

-gmann

>
> - Graham
>
>
> * With zuulv3 this is *much* easier, so not as big a deal as it once was
>
> 
>
>
>



Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Ghanshyam Mann
On Thu, Jan 11, 2018 at 10:06 PM, Colleen Murphy  wrote:
> Hi everyone,
>
> We have a governance review under debate[1] that we need the community's
> help on.
> The debate is over what recommendation the TC should make to the Interop team
> on where the tests it uses for the OpenStack trademark program should be
> located, specifically those for the new add-on program being introduced.
> Let me badly summarize:
>
> A couple of years ago we issued a resolution[2] officially recommending that
> the Interop team use solely tempest as its source of tests for capability
> verification. The Interop team has always had the view that the developers,
> being the people closest to the project they're creating, are the best people
> to write tests verifying correct functionality, and so the Interop team doesn't
> maintain its own test suite, instead selecting tests from those written in
> coordination between the QA team and the other project teams. These tests are
> used to validate clouds applying for the OpenStack Powered tag, and since all
> of the projects included in the OpenStack Powered program already had tests in
> tempest, this was a natural fit. When we consider adding new trademark programs
> comprising other projects, the test source is less obvious. Two examples are
> designate, which has never had tests in the tempest repo, and heat, which
> recently had its tests removed from the tempest repo.
>
> So far the patch proposes three options:
>
> 1) All trademark-related tests should go in the tempest repo, in accordance
>    with the original resolution. This would mean that even projects that have
>    never had tests in tempest would now have to add at least some of their
>    black-box tests to tempest.
>
> The value of this option is that it centralizes tests used for the Interop
> program in a location where interop-minded folks from the QA team can control
> them. The downside is that projects that so far have avoided having a
> dependency on tempest will now lose some control over the black-box tests
> that they use for functional and integration testing and that would now also
> be used for trademark certification.
> There's also concern for the review bandwidth of the QA team - we can't expect
> the QA team to be continually responsible for an ever-growing list of projects
> and their trademark tests.
>
> 2) All trademark-related tests for *add-on projects* should be sourced from
>    plugins external to tempest.
>
> The value of this option is that it allows project teams to retain control
> over these tests. The potential problem with it is that individual project
> teams are not necessarily reviewing test changes with an eye for interop
> concerns and so could inadvertently change the behavior of the
> trademark-verification tools.
>
> 3) All trademark-related tests should go in a single separate tempest plugin.
>
> This has the value of giving the QA and Interop teams control over
> interop-related tests while also making clear the distinction between tests
> used for trademark verification and tests used for CI. Matt's argument against
> this is that there actually is very little distinction between those two
> cases, and that a given test could have many different applications.

Option #3 can solve the centralized-test-location issue, but it leads
to another problem. If we start moving all interop tests to a separate
interop repo, then many existing Tempest tests (used by interop) also
fall under this category. That means those tests would need to live in
two locations: the new interop plugin, and Tempest itself, since
Tempest is used for many other purposes as well (the gate, production
cloud testing, stability checks, etc.). Duplicating tests in two
locations is not a good option.


>
> Other ideas that have been thrown around are:
>
> * Maintaining a branch in the tempest repo that Interop tests are pulled from.
>
> * Tagging Interop-related tests with decorators to make it clear that they
>   need to be handled carefully.

Nice and important point. This has been taken care of very carefully
in Tempest until now. When changing or removing tests, we have a very
clear and strict process [4] to avoid affecting any interop tests, and
I think it has been 100% successful so far; I have not heard any
complaint that we changed a test in a way that broke interop. Adding a
new decorator etc. has its own issues, so we did not accept that, but
the main problem is solved by defining the process.
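
To make the tagging idea concrete, here is a minimal sketch of how a
Tempest test carries a stable identity. The test class and body below
are hypothetical, but the idempotent_id decorator is the real
mechanism interop tooling uses to track tests by UUID across renames
and moves:

    from tempest.api.compute import base
    from tempest.lib import decorators


    class InteropServersTest(base.BaseV2ComputeTest):
        """Hypothetical example class, for illustration only."""

        @decorators.idempotent_id('11111111-2222-3333-4444-555555555555')
        def test_list_servers(self):
            # Once published, the UUID above must never change, so
            # interop tooling can keep pointing at the same test even
            # if the test is renamed or moved.
            body = self.servers_client.list_servers()
            self.assertIn('servers', body)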

>
> At the heart of the issue is the perception that projects that keep their
> integration tests within the tempest tree are somehow blessed, maybe by the QA
> team or by the TC. It would be nice to try to clarify what technical and
> political reasons we have for why different projects have tests in
> different places -
> review bandwidth of the QA team, ownership/control by the project teams,
> technical interdependency between certain projects, or otherwise.
>
> Ultimately, as Jeremy said in the comments 

Re: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG

2018-01-18 Thread Zhipeng Huang
Feel free to add your name to the wiki :)

On Fri, Jan 19, 2018 at 10:14 AM, Alex Xu  wrote:

> ++, I also want to join this party :)
>
> 2018-01-09 8:40 GMT+08:00 Zhipeng Huang :
>
>> Agree 100% on avoiding regular meetings; it is better to have a bi-weekly
>> email report. Meetings should be arranged on an event basis, and I think,
>> given the status of the OpenStack community's work on resource providers,
>> mostly what we need to do is attend k8s meetings (sig-scheduler,
>> wg-resource-management, etc.)
>>
>> BTW for the RM SIG proposed here, let's not limit the scope to k8s only
>> since we might have broader collaborative efforts happening in the future.
>> k8s is our first primary target community to sync up with.
>>
>> On Tue, Jan 9, 2018 at 4:12 AM, Jay Pipes  wrote:
>>
>>> On 01/08/2018 12:26 PM, Zhipeng Huang wrote:
>>>
 Hi all,

With the maturing resource provider/placement feature landing in
OpenStack in recent releases, and in light of the Kubernetes community's
increasing attention to a similar effort, I want to propose forming a
Resource Management SIG as a contact point for the OpenStack community
to communicate with the Kubernetes Resource Management WG[0] and other
related SIGs.

The formation of the SIG is to provide a gathering place for similarly
interested parties and to establish an official channel. We already have
OpenStack developers actively participating in Kubernetes discussions
(e.g. [1]); we hope the ResMgmt SIG can further help such activities and
better align the resource management mechanisms, especially the data
modeling, between the two communities (or even more communities with a
similar desire).

I have floated the idea with Jay Pipes and Chris Dent and received
positive feedback. The SIG will have a co-lead structure so that people
can spearhead the areas they are most interested in. For example, as a
Cyborg dev, I will mostly lead in the area of acceleration[2].

If you are also interested, please reply to this thread, and let's find
an efficient way to form this SIG. Efficient means no extra unnecessary
meetings or other undue burdens.

>>>
>>> +1
>>>
>>> From the Nova perspective, the scheduler meeting (which is Mondays at
>>> 1400 UTC) is the primary meeting where resource tracking and accounting
>>> issues are typically discussed.
>>>
>>> Chris Dent has done a fabulous job recording progress on the resource
>>> providers and placement work over the last couple releases by issuing
>>> status emails to the openstack-dev@ mailing list each Friday.
>>>
>>> I think having a bi-weekly cross-project (or even cross-ecosystem if
>>> we're talking about OpenStack+k8s) status email reporting any big events in
>>> the resource tracking world would be useful. As far as regular meetings for
>>> a resource management SIG, I'm +0 on that. I prefer to have targeted
>>> topical meetings over regular meetings.
>>>
>>> Best,
>>> -jay
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Zhipeng (Howard) Huang
>>
>> Standard Engineer
>> IT Standard & Patent/IT Product Line
>> Huawei Technologies Co., Ltd.
>> Email: huangzhip...@huawei.com
>> Office: Huawei Industrial Base, Longgang, Shenzhen
>>
>> (Previous)
>> Research Assistant
>> Mobile Ad-Hoc Network Lab, Calit2
>> University of California, Irvine
>> Email: zhipe...@uci.edu
>> Office: Calit2 Building Room 2402
>>
>> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

Re: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG

2018-01-18 Thread Alex Xu
++, I also want to join this party :)

2018-01-09 8:40 GMT+08:00 Zhipeng Huang :

> Agree 100% on avoiding regular meetings; it is better to have a bi-weekly
> email report. Meetings should be arranged on an event basis, and I think,
> given the status of the OpenStack community's work on resource providers,
> mostly what we need to do is attend k8s meetings (sig-scheduler,
> wg-resource-management, etc.)
>
> BTW for the RM SIG proposed here, let's not limit the scope to k8s only
> since we might have broader collaborative efforts happening in the future.
> k8s is our first primary target community to sync up with.
>
> On Tue, Jan 9, 2018 at 4:12 AM, Jay Pipes  wrote:
>
>> On 01/08/2018 12:26 PM, Zhipeng Huang wrote:
>>
>>> Hi all,
>>>
>>> With the maturing resource provider/placement feature landing in
>>> OpenStack in recent releases, and in light of the Kubernetes community's
>>> increasing attention to a similar effort, I want to propose forming a
>>> Resource Management SIG as a contact point for the OpenStack community
>>> to communicate with the Kubernetes Resource Management WG[0] and other
>>> related SIGs.
>>>
>>> The formation of the SIG is to provide a gathering place for similarly
>>> interested parties and to establish an official channel. We already have
>>> OpenStack developers actively participating in Kubernetes discussions
>>> (e.g. [1]); we hope the ResMgmt SIG can further help such activities and
>>> better align the resource management mechanisms, especially the data
>>> modeling, between the two communities (or even more communities with a
>>> similar desire).
>>>
>>> I have floated the idea with Jay Pipes and Chris Dent and received
>>> positive feedback. The SIG will have a co-lead structure so that people
>>> can spearhead the areas they are most interested in. For example, as a
>>> Cyborg dev, I will mostly lead in the area of acceleration[2].
>>>
>>> If you are also interested, please reply to this thread, and let's find
>>> an efficient way to form this SIG. Efficient means no extra unnecessary
>>> meetings or other undue burdens.
>>>
>>
>> +1
>>
>> From the Nova perspective, the scheduler meeting (which is Mondays at
>> 1400 UTC) is the primary meeting where resource tracking and accounting
>> issues are typically discussed.
>>
>> Chris Dent has done a fabulous job recording progress on the resource
>> providers and placement work over the last couple releases by issuing
>> status emails to the openstack-dev@ mailing list each Friday.
>>
>> I think having a bi-weekly cross-project (or even cross-ecosystem if
>> we're talking about OpenStack+k8s) status email reporting any big events in
>> the resource tracking world would be useful. As far as regular meetings for
>> a resource management SIG, I'm +0 on that. I prefer to have targeted
>> topical meetings over regular meetings.
>>
>> Best,
>> -jay
>>
>
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd.
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>


Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Ken'ichi Ohmichi
2018-01-18 12:36 GMT-08:00 Doug Hellmann :
> Excerpts from Doug Hellmann's message of 2018-01-18 15:21:12 -0500:
>> Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +:
>> >
>> > On 18/01/18 18:52, Doug Hellmann wrote:
>> > > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +:
>> > >> On 18/01/18 16:25, Doug Hellmann wrote:
>> > >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:
>> > >>
>> > >> 
>> > >>
>> > >>>
>> > >>> In the past the QA team agreed to accept trademark-related tests from
>> > >>> all projects in the tempest repo. Has that changed?
>> > >>>
>> > >>
>> > >> There has not been an explicit rejection, but in all conversations the
>> > >> response has been "non-core projects are outside the scope of tempest".
>> > >>
>> > >> Honestly, every time we have tried to do something to core tempest
>> > >> we have had major pushback, and I want to clarify this before I or
>> > >> someone else puts in the work of porting the base clients, getting CI
>> > >> configured*, and proposing the tests to tempest.
>> > >
>> > > OK.
>> > >
>> > > The current policy doesn't say anything about "core" or different
>> > > trademark programs or any other criteria.
>> > >
>> > >   The TC therefore encourages the DefCore committee to consider it an
>> > >   indication of future technical direction that we do not want tests
>> > >   outside of the Tempest repository used for trademark enforcement, and
>> > >   that any new or existing tests that cover capabilities they want to
>> > >   consider for trademark enforcement should be placed in Tempest.
>> > >
>> > > That all seems very clear to me (setting aside some specific word
>> > > choices like "future technical direction" that tie the resolution
>> > > to language in the bylaws).  Regardless of technical reasons why
>> > > it may not be necessary, we still have many social justifications
>> > > for doing it the way we originally set out to do it.  Tests related
>> > > to trademark enforcement need to go into the tempest repository.
>> > >
>> > > The way I think this should work (and the way I remember us describing
>> > > it at the time the policy was established) is the Interop WG
>> > > (previously DefCore) should identify capabilities and tests, then
>> > > ask project teams to reproduce those tests in the tempest repo.
>> > > When the tests land, they can be used by the trademark program.
>> > > Teams can also, at their leisure, decide whether to remove the
>> > > original versions of the tests from whatever repo they existed in
>> > > to begin with.
>> > >
>> > > Graham, you've proposed a new resolution with several options for
>> > > where to put tests for "add-on programs." I don't think we need
>> > > that resolution if we want the tests to continue to live in tempest.
>> > > The existing resolution doesn't qualify which tests, beyond "for
>> > > trademark enforcement" and more words won't make that more clear,
>> > > IMO.
>> > >
>> > > Now if you *do* want to change the policy, we should talk about
>> > > that.  But I can't tell whether you want to change it, you're worried
>> > > the policy is unclear, or it is not being followed.  Can you clarify
>> > > which it is?
>> >
>> > It is not being followed.
>> >
>> > I have brought this up at every forum session on these programs, and the
>> > people in the room from QA have *always* pushed back on it.
>>
>> OK, so that's a problem. I need to hear from the QA team why they've
>> reversed that decision.
>>
>> >
>> > And, for clarity (I saw this in a few logs) QA have *never* said that
>> > they will take the interop designated tests for the DNS project into
>> > openstack/tempest.
>>
>> When we approved the resolution that describes the current policy, the
>> QA team agreed that they would take tests for trademark. There was no
>> stipulation about which projects those apply to.
>
> I feel pretty sure that was discussed in a TC meeting, but I can't
> find that. I do find Matt and Ken'ichi voting +1 on the resolution
> itself.  https://review.openstack.org/#/c/312718/. If I remember
> correctly, Ken'ichi was the PTL at the time.

Yeah, I still agree with the resolution.
When I voted +1 on it, the core projects were defined as six projects:
Nova, Cinder, Glance, Keystone, Neutron, and Swift.
The project navigator also showed these six projects as core projects.
Now I cannot find such a definition on the project navigator[1]; has
the definition been changed?
I just want to clarify: is it true that designate and heat have become
core projects?
If there is a concrete decision, I have no objection to having these
projects' tests in Tempest, per the resolution.

Thanks
Ken Ohmichi

---
[1]: https://www.openstack.org/software/project-navigator


Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-18 Thread Mathieu Gagné
On Thu, Jan 18, 2018 at 5:19 PM, Jay Pipes  wrote:
> On 01/18/2018 03:54 PM, Mathieu Gagné wrote:
>>
>> Hi,
>>
>> On Tue, Jan 16, 2018 at 4:24 PM, melanie witt  wrote:
>>>
>>> Hello Stackers,
>>>
>>> This is a heads up to any of you using the AggregateCoreFilter,
>>> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
>>> These filters have effectively allowed operators to set overcommit ratios
>>> per aggregate rather than per compute node in <= Newton.
>>>
>>> Beginning in Ocata, there is a behavior change where aggregate-based
>>> overcommit ratios will no longer be honored during scheduling. Instead,
>>> overcommit values must be set on a per compute node basis in nova.conf.
>>>
>>> Details: as of Ocata, instead of considering all compute nodes at the
>>> start of scheduler filtering, an optimization has been added to query
>>> resource capacity from placement and prune the compute node list with
>>> the result *before* any filters are applied. Placement tracks resource
>>> capacity and usage and does *not* track aggregate metadata [1]. Because
>>> of this, placement cannot consider aggregate-based overcommit and will
>>> exclude compute nodes that do not have capacity based on per compute
>>> node overcommit.
>>>
>>> How to prepare: if you have been relying on per aggregate overcommit,
>>> during your upgrade to Ocata you must change to using per compute node
>>> overcommit ratios in order for your scheduling behavior to stay
>>> consistent. Otherwise, you may notice increased NoValidHost scheduling
>>> failures as the aggregate-based overcommit is no longer being
>>> considered. You can safely remove the AggregateCoreFilter,
>>> AggregateRamFilter, and AggregateDiskFilter from your enabled_filters
>>> and you do not need to replace them with any other core/ram/disk
>>> filters. The placement query takes care of the core/ram/disk filtering
>>> instead, so CoreFilter, RamFilter, and DiskFilter are redundant.
>>>
>>> Thanks,
>>> -melanie
>>>
>>> [1] Placement has been a clean slate for resource management, and prior
>>> to placement there were conflicts between the different methods for
>>> setting overcommit ratios that were never addressed, such as: "which
>>> value to take if a compute node has overcommit set AND the aggregate
>>> has it set? Which takes precedence?" And, "if a compute node is in more
>>> than one aggregate, which overcommit value should be taken?" So, the
>>> ambiguities were not something that was desirable to bring forward into
>>> placement.
>>
>>
>> So we are a user of this feature and I do have some questions/concerns.
>>
>> We use this feature to segregate capacity/hosts based on CPU
>> allocation ratio using aggregates.
>> This is because we have different offers/flavors based on those
>> allocation ratios. This is part of our business model.
>> A flavor's extra_specs is used to schedule instances on appropriate hosts
>> using AggregateInstanceExtraSpecsFilter.
>
>
> The AggregateInstanceExtraSpecsFilter will continue to work, but this filter
> is run *after* the placement service would have already eliminated compute
> node records due to placement considering the allocation ratio set for the
> compute node provider's inventory records.

Ok. Does that mean I will have to use something else to properly filter
compute nodes based on flavor?
Is there a way for a compute node to expose some arbitrary
feature/spec instead and still use flavor extra_specs to filter?
(I still have to read up on the placement API.)

I don't mind migrating away from aggregates, but I need to find a way
to make it "self service" through the API, with granular control like
aggregates used to offer.
We won't be giving our technicians access to our configuration
management system, let alone direct access to the database.
I see that you are suggesting using the placement API; see my
comments below.


>> Our setup has a configuration management system and we use aggregates
>> exclusively when it comes to allocation ratio.
>
>
> Yes, that's going to be a problem. You will need to use your configuration
> management system to write the nova.CONF.XXX_allocation_ratio configuration
> option values appropriately for each compute node.

Yes, that's my understanding, and it is a concern for us.


>> We do not rely on cpu_allocation_ratio config in nova-scheduler or
>> nova-compute.
>> One of the reasons is we do not wish to have to
>> update/package/redeploy our configuration management system just to
>> add one or multiple compute nodes to an aggregate/capacity pool.
>
>
> Yes, I understand.
>
>> This means anyone (likely an operator or other provisioning
>> technician) can perform this action without having to touch or even
>> know about our configuration management system.
>> We can also transfer capacity from one aggregate to another if there
>> is a need, again, using aggregate memberships.
>
>
> Aggregates don't have "capacity". 

Re: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate

2018-01-18 Thread Ihar Hrachyshka
On Thu, Jan 18, 2018 at 8:33 AM, Michael Johnson  wrote:
> This sounds great Ihar!
>
> Let us know when we should make the changes to the neutron-lbaas projects.
>
> Michael

Hi Michael!

You can already start by introducing new service names without the q-*
prefix for your services -- for example, neutron-lbaasv2 instead of
q-lbaasv2. You can have both in parallel, behaving the same way, as we
do in the neutron devstack plugin:
https://github.com/openstack/neutron/blob/master/devstack/plugin.sh#L34
Once you have it in your devstack plugin, you should be able to safely
replace all occurrences of q-lbaasv2 in infra projects with the new
name. Handy link to detect them:

http://codesearch.openstack.org/?q=q-lbaas&i=nope&files=&repos=
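
A minimal sketch of carrying both service names in a devstack plugin
during the transition (is_service_enabled and enable_service are
standard devstack helpers; where exactly this lives in your plugin is
up to you):

    # If a job still enables the legacy q-* name, turn on the new
    # neutron-* name too, so the rest of the plugin only has to
    # check one canonical service name.
    if is_service_enabled q-lbaasv2 && ! is_service_enabled neutron-lbaasv2; then
        enable_service neutron-lbaasv2
    fi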

Thanks!
Ihar



Re: [openstack-dev] [requirements] requirements-tox-validate-projects FAILURE

2018-01-18 Thread Clark Boylan
On Thu, Jan 18, 2018, at 1:54 PM, Kwan, Louie wrote:
> Would like to add the following module to the openstack/masakari project
> 
> https://github.com/pytransitions/transitions
> 
> https://review.openstack.org/#/c/534990/
> 
> requirements-tox-validate-projects failed:
> 
> http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/ara/result/4ee4f7a1-456c-4b89-933a-fe282cf534a3/
> 
> What else needs to be done?

Reading the log [0] the job failed because python-cratonclient removed its 
check-requirements job. This was done in 
https://review.openstack.org/#/c/535344/ as part of the craton retirement and 
should be fixed on the requirements side by 
https://review.openstack.org/#/c/535351/. I think a recheck at this point will 
come back green (so I have done that for you).

[0] 
http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/job-output.txt.gz#_2018-01-18_20_07_54_531014

Hope this helps,
Clark



Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-18 Thread Jay Pipes

On 01/18/2018 03:54 PM, Mathieu Gagné wrote:

> Hi,
>
> On Tue, Jan 16, 2018 at 4:24 PM, melanie witt  wrote:
>> Hello Stackers,
>>
>> This is a heads up to any of you using the AggregateCoreFilter,
>> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
>> These filters have effectively allowed operators to set overcommit ratios
>> per aggregate rather than per compute node in <= Newton.
>>
>> Beginning in Ocata, there is a behavior change where aggregate-based
>> overcommit ratios will no longer be honored during scheduling. Instead,
>> overcommit values must be set on a per compute node basis in nova.conf.
>>
>> Details: as of Ocata, instead of considering all compute nodes at the start
>> of scheduler filtering, an optimization has been added to query resource
>> capacity from placement and prune the compute node list with the result
>> *before* any filters are applied. Placement tracks resource capacity and
>> usage and does *not* track aggregate metadata [1]. Because of this,
>> placement cannot consider aggregate-based overcommit and will exclude
>> compute nodes that do not have capacity based on per compute node
>> overcommit.
>>
>> How to prepare: if you have been relying on per aggregate overcommit,
>> during your upgrade to Ocata you must change to using per compute node
>> overcommit ratios in order for your scheduling behavior to stay
>> consistent. Otherwise, you may notice increased NoValidHost scheduling
>> failures as the aggregate-based overcommit is no longer being considered.
>> You can safely remove the AggregateCoreFilter, AggregateRamFilter, and
>> AggregateDiskFilter from your enabled_filters and you do not need to
>> replace them with any other core/ram/disk filters. The placement query
>> takes care of the core/ram/disk filtering instead, so CoreFilter,
>> RamFilter, and DiskFilter are redundant.
>>
>> Thanks,
>> -melanie
>>
>> [1] Placement has been a clean slate for resource management, and prior
>> to placement there were conflicts between the different methods for
>> setting overcommit ratios that were never addressed, such as: "which
>> value to take if a compute node has overcommit set AND the aggregate has
>> it set? Which takes precedence?" And, "if a compute node is in more than
>> one aggregate, which overcommit value should be taken?" So, the
>> ambiguities were not something that was desirable to bring forward into
>> placement.
>
> So we are a user of this feature and I do have some questions/concerns.
>
> We use this feature to segregate capacity/hosts based on CPU
> allocation ratio using aggregates.
> This is because we have different offers/flavors based on those
> allocation ratios. This is part of our business model.
> A flavor's extra_specs is used to schedule instances on appropriate hosts
> using AggregateInstanceExtraSpecsFilter.

The AggregateInstanceExtraSpecsFilter will continue to work, but this
filter is run *after* the placement service would have already
eliminated compute node records due to placement considering the
allocation ratio set for the compute node provider's inventory records.

> Our setup has a configuration management system and we use aggregates
> exclusively when it comes to allocation ratio.

Yes, that's going to be a problem. You will need to use your
configuration management system to write the
nova.CONF.XXX_allocation_ratio configuration option values appropriately
for each compute node.

> We do not rely on cpu_allocation_ratio config in nova-scheduler or nova-compute.
> One of the reasons is we do not wish to have to
> update/package/redeploy our configuration management system just to
> add one or multiple compute nodes to an aggregate/capacity pool.

Yes, I understand.

> This means anyone (likely an operator or other provisioning
> technician) can perform this action without having to touch or even
> know about our configuration management system.
> We can also transfer capacity from one aggregate to another if there
> is a need, again, using aggregate memberships.

Aggregates don't have "capacity". Aggregates are not capacity pools.
Only compute nodes provide resources for guests to consume.

> (we do "evacuate" the node if there are instances on it)
> Our capacity monitoring is based on aggregate memberships and this
> offers an easy overview of the current capacity.

By "based on aggregate membership", I believe you are referring to a
system where you have all compute nodes in a particular aggregate only
schedule instances with a particular flavor "A", and so you manage
"capacity" by saying things like "aggregate X can fit 10 more instances
of flavor A in it"?

Do I understand you correctly?

> Note that a host can be in one and only one aggregate in our setup.

In *your* setup. And that's the only reason this works for you. You'd
get totally unpredictable behaviour if your compute nodes were in
multiple aggregates.

> What's the migration path for us?
>
> My understanding is that we will now be forced to have people rely on
> our configuration management system (which they don't have access to)
> to perform simple tasks we used to be able to do through the API.

[openstack-dev] [requirements] requirements-tox-validate-projects FAILURE

2018-01-18 Thread Kwan, Louie
Would like to add the following module to the openstack/masakari project

https://github.com/pytransitions/transitions

https://review.openstack.org/#/c/534990/

requirements-tox-validate-projects failed:

http://logs.openstack.org/90/534990/6/check/requirements-tox-validate-projects/ed69273/ara/result/4ee4f7a1-456c-4b89-933a-fe282cf534a3/

What else needs to be done?

Thanks.
louie.k...@windriver.com




Re: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support.

2018-01-18 Thread Harald Jensås
On Wed, 2018-01-17 at 16:05 +0100, Dmitry Tantsur wrote:
> Hi!
> 
> I'm essentially +1 on granting this FFE, as it's low-risk work for a
> great feature. See one comment inline.
> 
> On 01/17/2018 10:54 AM, Harald Jensås wrote:
> > Requesting FFE for Routed Network support in networking-baremetal.
> > ---
> > 
> > 
> > # Pros
> > --
> > With the patches up for review[7] we have a working ml2 agent;
> > __depends on neutron fix__; and mechanism driver combination that
> > enables support to bind ports on neutron routed networks.
> > 
> > Specifically we report the bridge_mappings data to neutron, which
> > enable the _find_candidate_subnets() method in neutron ipam[1] to
> > succeed in finding a candidate subnet available to the ironic node
> > when
> > ports on routed segments are bound.
> > 
> > This functionality will allow users to take advantage of the
> > functionality added in DHCP Agent[2] which enables the DHCP agent
> > to
> > service other subnets on the network via DHCP relay. For Ironic
> > this
> > means we can support deploying nodes on a remote L3 network, e.g
> > different datacenter or different rack/rack-row.
> > 
> > 
> > 
> > # Cons
> > --
> > Integration with placement does not currently work.
> > 
> > Neutron uses Nova host-aggregates in combination with Placement.
> > Specifically, hosts are added to a host-aggregate for segments based
> > on SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to
> > host-aggregates in Nova. Because of this, the following will appear
> > in the neutron logs when the ironic-neutron agent is started:
> > RESP BODY: {"itemNotFound": {"message": "Compute host <ironic node-id>
> > could not be found.", "code": 404}}
> > 
> > Also the placement API cannot be used to find good candidate ironic
> > nodes with a baremetal port on the correct segment. This will have
> > to be worked around by the operator via capabilities and flavor
> > properties or manual additions to resource providers in placement.
> > 
> > Depending on the direction of other projects, neutron and nova, the
> > way placement will finally work is not certain.
> >
> > Either the nova work [3] and [4], a neutron change to use placement
> > only, or a fallback to placement in neutron would be possible. In
> > either case there should be no need to change the networking-baremetal
> > agent or mechanism driver.
> > 
> > 
> > # Risks
> > ---
> > Unless this bug[5] is fixed we might break the current baremetal
> > mechanism driver functionality. I have proposed a patch[6] to neutron
> > that fixes the issue. If no fix lands for this neutron bug soon, we
> > should probably push these changes to Rocky.
> 
> Let's add Depends-On to the first patch in the chain to make sure your
> patches don't merge until the fix is merged.
> 

The fix for the neutron issue was approved and is now merged.
https://review.openstack.org/#/c/534449/



> > 
> > 
> > # Core reviewers
> > 
> > Julia Kreger, Sam Betts
> > 
> > 
> > 
> > 
> > [1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ipam_backend_mixin.py#n697
> > [2] https://review.openstack.org/#/c/468744/
> > [3] https://review.openstack.org/#/c/421009/
> > [4] https://review.openstack.org/#/c/421011/
> > [5] https://bugs.launchpad.net/neutron/+bug/1743579
> > [6] https://review.openstack.org/#/c/534449/
> > [7] https://review.openstack.org/#/q/project:openstack/networking-baremetal
> > 
> > 
> 
> 

-- 
|Harald Jensås    
|hjen...@redhat.com   |  www.redhat.com
|+46 (0)701 91 23 17  |  hjensas:irc



Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-18 Thread Mathieu Gagné
Hi,

On Tue, Jan 16, 2018 at 4:24 PM, melanie witt  wrote:
> Hello Stackers,
>
> This is a heads up to any of you using the AggregateCoreFilter,
> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
> These filters have effectively allowed operators to set overcommit ratios
> per aggregate rather than per compute node in <= Newton.
>
> Beginning in Ocata, there is a behavior change where aggregate-based
> overcommit ratios will no longer be honored during scheduling. Instead,
> overcommit values must be set on a per compute node basis in nova.conf.
>
> Details: as of Ocata, instead of considering all compute nodes at the start
> of scheduler filtering, an optimization has been added to query resource
> capacity from placement and prune the compute node list with the result
> *before* any filters are applied. Placement tracks resource capacity and
> usage and does *not* track aggregate metadata [1]. Because of this,
> placement cannot consider aggregate-based overcommit and will exclude
> compute nodes that do not have capacity based on per compute node
> overcommit.
>
> How to prepare: if you have been relying on per aggregate overcommit, during
> your upgrade to Ocata, you must change to using per compute node overcommit
> ratios in order for your scheduling behavior to stay consistent. Otherwise,
> you may notice increased NoValidHost scheduling failures as the
> aggregate-based overcommit is no longer being considered. You can safely
> remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter
> from your enabled_filters and you do not need to replace them with any other
> core/ram/disk filters. The placement query takes care of the core/ram/disk
> filtering instead, so CoreFilter, RamFilter, and DiskFilter are redundant.
>
> Thanks,
> -melanie
>
> [1] Placement has been a clean slate for resource management and prior to
> placement, there were conflicts between the different methods for setting
> overcommit ratios that were never addressed, such as, "which value to take
> if a compute node has overcommit set AND the aggregate has it set? Which
> takes precedence?" And, "if a compute node is in more than one aggregate,
> which overcommit value should be taken?" So, the ambiguities were not
> something that was desirable to bring forward into placement.

So we are a user of this feature and I do have some questions/concerns.

We use this feature to segregate capacity/hosts based on CPU
allocation ratio using aggregates.
This is because we have different offers/flavors based on those
allocation ratios. This is part of our business model.
A flavor's extra_specs is used to schedule instances on appropriate hosts
using AggregateInstanceExtraSpecsFilter.
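
(As an illustration of that setup -- the aggregate, property, and
flavor names below are made up, but the commands and the scoped
extra-spec key consumed by AggregateInstanceExtraSpecsFilter are
standard:)

    # group hosts sharing a CPU allocation ratio into an aggregate
    openstack aggregate create cpu-ratio-4
    openstack aggregate set --property cpu_ratio=4 cpu-ratio-4
    openstack aggregate add host cpu-ratio-4 compute-01

    # tie a flavor to that aggregate via the scoped extra spec
    openstack flavor set \
        --property aggregate_instance_extra_specs:cpu_ratio=4 economy.large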

Our setup has a configuration management system and we use aggregates
exclusively when it comes to allocation ratio.
We do not rely on cpu_allocation_ratio config in nova-scheduler or nova-compute.
One of the reasons is we do not wish to have to
update/package/redeploy our configuration management system just to
add one or multiple compute nodes to an aggregate/capacity pool.
This means anyone (likely an operator or other provisioning
technician) can perform this action without having to touch or even
know about our configuration management system.
We can also transfer capacity from one aggregate to another if there
is a need, again, using aggregate memberships. (we do "evacuate" the
node if there are instances on it)
Our capacity monitoring is based on aggregate memberships and this
offers an easy overview of the current capacity. Note that a host can
be in one and only one aggregate in our setup.

What's the migration path for us?

My understanding is that we will now be forced to have people rely on
our configuration management system (which they don't have access to)
to perform simple tasks we used to be able to do through the API.
I find this unfortunate, and I would like to be offered an alternative
solution, as the currently proposed solution is not acceptable for us.
We are losing "agility" in our operational tasks.

--
Mathieu



Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-18 Thread Jay Pipes

On 01/18/2018 03:06 PM, Logan V. wrote:
> We have used aggregate based scheduler filters since deploying our
> cloud in Kilo. This explains the unpredictable scheduling we have seen
> since upgrading to Ocata. Before this post, was there some indication
> I missed that these filters can no longer be used? Even now reading
> the Ocata release notes[1] or checking the filter scheduler docs[2] I
> cannot find any indication that AggregateCoreFilter,
> AggregateRamFilter, and AggregateDiskFilter are useless in Ocata+. If
> I missed something I'd like to know where it is so I can avoid that
> mistake again!

We failed to provide a release note about it. :( That's our fault and I
apologize.

> Just to make sure I understand correctly, given this list of filters
> we used in Newton:
> AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,AggregateCoreFilter,AggregateRamFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
>
> I should remove AggregateCoreFilter, AggregateRamFilter, and RamFilter
> from the list because they are no longer useful, and replace them with
> the appropriate nova.conf settings instead, correct?


Yes, correct.
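
(For illustration, a sketch of the per-compute-node equivalent in
nova.conf -- the ratio values here are examples only:)

    [DEFAULT]
    # Set on each compute node; as of Ocata, placement applies these
    # ratios when pruning hosts, which replaces the Aggregate*Filter
    # and Core/Ram/DiskFilter behavior.
    cpu_allocation_ratio = 4.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0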


> What about AggregateInstanceExtraSpecsFilter and
> AggregateNumInstancesFilter? Do these still work?

Yes.

Best,
-jay

> Thanks
> Logan
>
> [1] https://docs.openstack.org/releasenotes/nova/ocata.html
> [2] https://docs.openstack.org/ocata/config-reference/compute/schedulers.html
>
> On Wed, Jan 17, 2018 at 7:57 AM, Sylvain Bauza  wrote:
>> On Wed, Jan 17, 2018 at 2:22 PM, Jay Pipes  wrote:
>>> On 01/16/2018 08:19 PM, Zhenyu Zheng wrote:
>>>> Thanks for the info, so it seems we are not going to implement
>>>> aggregate overcommit ratio in placement at least in the near future?
>>>
>>> As @edleafe alluded to, we will not be adding functionality to the
>>> placement service to associate an overcommit ratio with an aggregate.
>>> This was/is buggy functionality that we do not wish to bring forward
>>> into the placement modeling system.
>>>
>>> Reasons the current functionality is poorly architected and buggy
>>> (mentioned in @melwitt's footnote):
>>>
>>> 1) If a nova-compute service's CONF.cpu_allocation_ratio is different
>>> from the host aggregate's cpu_allocation_ratio metadata value, which
>>> value should be considered by the AggregateCoreFilter filter?
>>>
>>> 2) If a nova-compute service is associated with multiple host
>>> aggregates, and those aggregates contain different values for their
>>> cpu_allocation_ratio metadata value, which one should be used by the
>>> AggregateCoreFilter?
>>>
>>> The bottom line for me is that the AggregateCoreFilter has been used
>>> as a crutch to solve a **configuration management problem**.
>>>
>>> Instead of the configuration management system (Puppet, etc.) setting
>>> nova-compute service CONF.cpu_allocation_ratio options *correctly*,
>>> having the admin set the HostAggregate metadata cpu_allocation_ratio
>>> value is error-prone for the reasons listed above.
>>
>> Well, the main reason people started to use AggregateCoreFilter and
>> others is that pre-Newton it was literally impossible to assign
>> different allocation ratios between computes except by grouping them in
>> aggregates and using those filters.
>> Now that ratios are per-compute, there is no need to keep those filters
>> unless you don't touch the computes' nova.conf, so that they default to
>> the scheduler's values. The crazy use case would be "I have 1000+
>> computes and I just want to apply specific ratios to only one or two",
>> but then I'd second Jay and say "config management is the solution to
>> your problem".
>>
>>> Incidentally, this same design flaw is the reason that availability
>>> zones are so poorly defined in Nova. There is actually no such thing
>>> as an availability zone in Nova. Instead, an AZ is merely a metadata
>>> tag (or a CONF option! :( ) that may or may not exist against a host
>>> aggregate. There's lots of spaghetti in Nova due to the decision to
>>> use host aggregate metadata for availability zone information, which
>>> should have always been the domain of a **configuration management
>>> system** to set. [*]
>>
>> IMHO, that's not exactly the root cause of why we have spaghetti code
>> for AZs. I rather like the idea of seeing an availability zone as just
>> a user-visible aggregate, because it makes things simple to understand.
>> What the spaghetti code is due to is that the transitive relationship
>> between an aggregate, a compute, and an instance is misunderstood, and
>> we introduced the notion of an "instance AZ", which is folly. Instances
>> shouldn't have a field saying "here is my AZ"; it should rather be a
>> flag saying "what did the user want as the AZ?" (None being a choice).
>>
>>> In the Placement service, we have the concept of aggregates, too.
>>> However, in Placement, an aggregate (note: not "host aggregate") is
>>> merely a grouping mechanism for resource providers. Placement
>>> aggregates do not have any attributes themselves -- they merely
>>> represent the relationship between resource

Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-01-18 15:21:12 -0500:
> Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +:
> > 
> > On 18/01/18 18:52, Doug Hellmann wrote:
> > > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +:
> > >> On 18/01/18 16:25, Doug Hellmann wrote:
> > >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:
> > >>
> > >> 
> > >>
> > >>>
> > >>> In the past the QA team agreed to accept trademark-related tests from
> > >>> all projects in the tempest repo. Has that changed?
> > >>>
> > >>
> > >> There has not been an explicit rejection, but in all conversations the
> > >> response has been "non-core projects are outside the scope of tempest".
> > >>
> > >> Honestly, every time we have tried to do something to core tempest
> > >> we have had major pushback, and I want to clarify this before I or
> > >> someone else puts in the work of porting the base clients, getting CI
> > >> configured*, and proposing the tests to tempest.
> > > 
> > > OK.
> > > 
> > > The current policy doesn't say anything about "core" or different
> > > trademark programs or any other criteria.
> > > 
> > >   The TC therefore encourages the DefCore committee to consider it an
> > >   indication of future technical direction that we do not want tests
> > >   outside of the Tempest repository used for trademark enforcement, and
> > >   that any new or existing tests that cover capabilities they want to
> > >   consider for trademark enforcement should be placed in Tempest.
> > > 
> > > That all seems very clear to me (setting aside some specific word
> > > choices like "future technical direction" that tie the resolution
> > > to language in the bylaws).  Regardless of technical reasons why
> > > it may not be necessary, we still have many social justifications
> > > for doing it the way we originally set out to do it.  Tests related
> > > to trademark enforcement need to go into the tempest repository.
> > > 
> > > The way I think this should work (and the way I remember us describing
> > > it at the time the policy was established) is the Interop WG
> > > (previously DefCore) should identify capabilities and tests, then
> > > ask project teams to reproduce those tests in the tempest repo.
> > > When the tests land, they can be used by the trademark program.
> > > Teams can also, at their leisure, decide whether to remove the
> > > original versions of the tests from whatever repo they existed in
> > > to begin with.
> > > 
> > > Graham, you've proposed a new resolution with several options for
> > > where to put tests for "add-on programs." I don't think we need
> > > that resolution if we want the tests to continue to live in tempest.
> > > The existing resolution doesn't qualify which tests, beyond "for
> > > trademark enforcement" and more words won't make that more clear,
> > > IMO.
> > > 
> > > Now if you *do* want to change the policy, we should talk about
> > > that.  But I can't tell whether you want to change it, you're worried
> > > the policy is unclear, or it is not being followed.  Can you clarify
> > > which it is?
> > 
> > It is not being followed.
> > 
> > I have brought this up at every forum session on these programs, and the
> > people in the room from QA have *always* pushed back on it.
> 
> OK, so that's a problem. I need to hear from the QA team why they've
> reversed that decision.
> 
> > 
> > And, for clarity (I saw this in a few logs) QA have *never* said that
> > they will take the interop designated tests for the DNS project into
> > openstack/tempest.
> 
> When we approved the resolution that describes the current policy, the
> QA team agreed that they would take tests for trademark. There was no
> stipulation about which projects those apply to.

I feel pretty sure that was discussed in a TC meeting, but I can't
find that. I do find Matt and Ken'ichi voting +1 on the resolution
itself.  https://review.openstack.org/#/c/312718/. If I remember
correctly, Ken'ichi was the PTL at the time.

Doug



Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Doug Hellmann
Excerpts from Graham Hayes's message of 2018-01-18 19:25:02 +:
> 
> On 18/01/18 18:52, Doug Hellmann wrote:
> > Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +:
> >> On 18/01/18 16:25, Doug Hellmann wrote:
> >>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:
> >>
> >> 
> >>
> >>>
> >>> In the past the QA team agreed to accept trademark-related tests from
> >>> all projects in the tempest repo. Has that changed?
> >>>
> >>
> >> There has not been an explicit rejection, but in all conversations the
> >> response has been "non-core projects are outside the scope of tempest".
> >>
> >> Honestly, every time we have tried to do something to core tempest
> >> we have had major pushback, and I want to clarify this before I or
> >> someone else puts in the work of porting the base clients, getting CI
> >> configured*, and proposing the tests to tempest.
> > 
> > OK.
> > 
> > The current policy doesn't say anything about "core" or different
> > trademark programs or any other criteria.
> > 
> >   The TC therefore encourages the DefCore committee to consider it an
> >   indication of future technical direction that we do not want tests
> >   outside of the Tempest repository used for trademark enforcement, and
> >   that any new or existing tests that cover capabilities they want to
> >   consider for trademark enforcement should be placed in Tempest.
> > 
> > That all seems very clear to me (setting aside some specific word
> > choices like "future technical direction" that tie the resolution
> > to language in the bylaws).  Regardless of technical reasons why
> > it may not be necessary, we still have many social justifications
> > for doing it the way we originally set out to do it.  Tests related
> > to trademark enforcement need to go into the tempest repository.
> > 
> > The way I think this should work (and the way I remember us describing
> > it at the time the policy was established) is the Interop WG
> > (previously DefCore) should identify capabilities and tests, then
> > ask project teams to reproduce those tests in the tempest repo.
> > When the tests land, they can be used by the trademark program.
> > Teams can also, at their leisure, decide whether to remove the
> > original versions of the tests from whatever repo they existed in
> > to begin with.
> > 
> > Graham, you've proposed a new resolution with several options for
> > where to put tests for "add-on programs." I don't think we need
> > that resolution if we want the tests to continue to live in tempest.
> > The existing resolution doesn't qualify which tests, beyond "for
> > trademark enforcement" and more words won't make that more clear,
> > IMO.
> > 
> > Now if you *do* want to change the policy, we should talk about
> > that.  But I can't tell whether you want to change it, you're worried
> > the policy is unclear, or it is not being followed.  Can you clarify
> > which it is?
> 
> It is not being followed.
> 
> I have brought this up at every forum session on these programs, and the
> people in the room from QA have *always* pushed back on it.

OK, so that's a problem. I need to hear from the QA team why they've
reversed that decision.

> 
> And, for clarity (I saw this in a few logs) QA have *never* said that
> they will take the interop designated tests for the DNS project into
> openstack/tempest.

When we approved the resolution that describes the current policy, the
QA team agreed that they would take tests for trademark. There was no
stipulation about which projects those apply to.

> 
> To the point that the interop tooling was developed to support plugins
> (which would seem to be in breach of this resolution, but I am sure
> there are reasons for this.)

I can see it being useful to support plugins for evaluating tests before
they are accepted.

> 
> I do want to have option 3 (interop-tempest-plugin), but right now I
> will settle for us either:
> 
> A: Doing what we planned on before (Option 1) (Preferred)
> B: Documenting the fact that things have changed (Option 2), and
>    articulating and recording the reasoning for the change.
> 
> I think Add Ons are going to the Board in Dublin for the change from
> Advisory in the 2018.02 standard, so we need to get clarity on this.

I agree.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-18 Thread Logan V.
We have used aggregate based scheduler filters since deploying our
cloud in Kilo. This explains the unpredictable scheduling we have seen
since upgrading to Ocata. Before this post, was there some indication
I missed that these filters can no longer be used? Even now reading
the Ocata release notes[1] or checking the filter scheduler docs[2] I
cannot find any indication that AggregateCoreFilter,
AggregateRamFilter, and AggregateDiskFilter are useless in Ocata+. If
I missed something I'd like to know where it is so I can avoid that
mistake again!

Just to make sure I understand correctly, given this list of filters
we used in Newton:
AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,AggregateCoreFilter,AggregateRamFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

I should remove AggregateCoreFilter, AggregateRamFilter, and RamFilter
from the list because they are no longer useful, and replace them with
the appropriate nova.conf settings instead, correct?

What about AggregateInstanceExtraSpecsFilter and
AggregateNumInstancesFilter? Do these still work?
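
To make that concrete, here is roughly what I think the Ocata+ equivalent
would look like -- the section and option names below are my best guess
from the docs, and the ratio values are just examples, so corrections
welcome:

    # nova.conf on the scheduler nodes (filter list trimmed)
    [filter_scheduler]
    enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

    # nova.conf on each compute node, set per host by config management
    [DEFAULT]
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0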

Thanks
Logan

[1] https://docs.openstack.org/releasenotes/nova/ocata.html
[2] https://docs.openstack.org/ocata/config-reference/compute/schedulers.html

On Wed, Jan 17, 2018 at 7:57 AM, Sylvain Bauza  wrote:
>
>
> On Wed, Jan 17, 2018 at 2:22 PM, Jay Pipes  wrote:
>>
>> On 01/16/2018 08:19 PM, Zhenyu Zheng wrote:
>>>
>>> Thanks for the info, so it seems we are not going to implement aggregate
>>> overcommit ratio in placement at least in the near future?
>>
>>
>> As @edleafe alluded to, we will not be adding functionality to the
>> placement service to associate an overcommit ratio with an aggregate. This
>> was/is buggy functionality that we do not wish to bring forward into the
>> placement modeling system.
>>
>> Reasons the current functionality is poorly architected and buggy
>> (mentioned in @melwitt's footnote):
>>
>> 1) If a nova-compute service's CONF.cpu_allocation_ratio is different from
>> the host aggregate's cpu_allocation_ratio metadata value, which value should
>> be considered by the AggregateCoreFilter filter?
>>
>> 2) If a nova-compute service is associated with multiple host aggregates,
>> and those aggregates contain different values for their cpu_allocation_ratio
>> metadata value, which one should be used by the AggregateCoreFilter?
>>
>> The bottom line for me is that the AggregateCoreFilter has been used as a
>> crutch to solve a **configuration management problem**.
>>
>> Instead of the configuration management system (Puppet, etc) setting
>> nova-compute service CONF.cpu_allocation_ratio options *correctly*, having
>> the admin set the HostAggregate metadata cpu_allocation_ratio value is
>> error-prone for the reasons listed above.
>>
>
> Well, the main reason why people started to use AggregateCoreFilter and
> others is because pre-Newton, it was literally impossible to assign
> different allocation ratios between computes except if you were grouping
> them in aggregates and using those filters.
> Now that ratios are per-compute, there is no need to keep those filters
> except if you don't touch the computes' nova.conf so that it defaults to the
> scheduler ones. The crazy use case would be like "I have 1000+ computes and I
> just want to apply specific ratios to only one or two" but then, I'd second
> Jay and say "Config management is the solution to your problem".
>
>
>>
>> Incidentally, this same design flaw is the reason that availability zones
>> are so poorly defined in Nova. There is actually no such thing as an
>> availability zone in Nova. Instead, an AZ is merely a metadata tag (or a
>> CONF option! :( ) that may or may not exist against a host aggregate.
>> There's lots of spaghetti in Nova due to the decision to use host aggregate
>> metadata for availability zone information, which should have always been
>> the domain of a **configuration management system** to set. [*]
>>
>
> IMHO, not exactly the root cause why we have spaghetti code for AZs. I
> rather like the idea of seeing an availability zone as just a user-visible
> aggregate, because it makes things simple to understand.
> What the spaghetti code is due to is that the transitive relationship
> between an aggregate, a compute and an instance is misunderstood and we
> introduced the notion of "instance AZ", which is a mistake. Instances shouldn't
> have a field saying "here is my AZ"; it should rather be a flag saying "what
> did the user want as AZ? (None being a choice)"
>
>
>> In the Placement service, we have the concept of aggregates, too. However,
>> in Placement, an aggregate (note: not "host aggregate") is merely a grouping
>> mechanism for resource providers. Placement aggregates do not have any
>> attributes themselves -- they merely represent the relationship between
>> resource 

Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Graham Hayes


On 18/01/18 18:52, Doug Hellmann wrote:
> Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +:
>> On 18/01/18 16:25, Doug Hellmann wrote:
>>> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:
>>
>> 
>>
>>>
>>> In the past the QA team agreed to accept trademark-related tests from
>>> all projects in the tempest repo. Has that changed?
>>>
>>
>> There has not been an explicit rejection, but in all conversations the
>> response has been "non core projects are outside the scope of tempest".
>>
>> Honestly, every time we have tried to do something to core tempest
>> we have had major pushback, and I want to clarify this before I or
>> someone else puts in the work of porting the base clients, getting CI
>> configured*, and proposing the tests to tempest.
> 
> OK.
> 
> The current policy doesn't say anything about "core" or different
> trademark programs or any other criteria.
> 
>   The TC therefore encourages the DefCore committee to consider it an
>   indication of future technical direction that we do not want tests
>   outside of the Tempest repository used for trademark enforcement, and
>   that any new or existing tests that cover capabilities they want to
>   consider for trademark enforcement should be placed in Tempest.
> 
> That all seems very clear to me (setting aside some specific word
> choices like "future technical direction" that tie the resolution
> to language in the bylaws).  Regardless of technical reasons why
> it may not be necessary, we still have many social justifications
> for doing it the way we originally set out to do it.  Tests related
> to trademark enforcement need to go into the tempest repository.
> 
> The way I think this should work (and the way I remember us describing
> it at the time the policy was established) is the Interop WG
> (previously DefCore) should identify capabilities and tests, then
> ask project teams to reproduce those tests in the tempest repo.
> When the tests land, they can be used by the trademark program.
> Teams can also, at their leisure, decide whether to remove the
> original versions of the tests from whatever repo they existed in
> to begin with.
> 
> Graham, you've proposed a new resolution with several options for
> where to put tests for "add-on programs." I don't think we need
> that resolution if we want the tests to continue to live in tempest.
> The existing resolution doesn't qualify which tests, beyond "for
> trademark enforcement" and more words won't make that more clear,
> IMO.
> 
> Now if you *do* want to change the policy, we should talk about
> that.  But I can't tell whether you want to change it, you're worried
> the policy is unclear, or it is not being followed.  Can you clarify
> which it is?

It is not being followed.

I have brought this up at every forum session on these programs, and the
people in the room from QA have *always* pushed back on it.

And, for clarity (I saw this in a few logs) QA have *never* said that
they will take the interop designated tests for the DNS project into
openstack/tempest.

To the point that the interop tooling was developed to support plugins
(which would seem to be in breach of this resolution, but I am sure
there are reasons for this.)

I do want to have option 3 (interop-tempest-plugin), but right now I
will settle for us either:

A: Doing what we planned on before (Option 1) (Preferred)
B: Documenting the fact that things have changed (Option 2), and
   articulating and recording the reasoning for the change.

I think Add Ons are going to the Board in Dublin for the change from
Advisory in the 2018.02 standard, so we need to get clarity on this.

- Graham

> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Doug Hellmann
Excerpts from Graham Hayes's message of 2018-01-18 17:52:39 +:
> On 18/01/18 16:25, Doug Hellmann wrote:
> > Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:
> 
> 
> 
> > 
> > In the past the QA team agreed to accept trademark-related tests from
> > all projects in the tempest repo. Has that changed?
> > 
> 
> There has not been an explicit rejection, but in all conversations the
> response has been "non core projects are outside the scope of tempest".
> 
> Honestly, every time we have tried to do something to core tempest
> we have had major pushback, and I want to clarify this before I or
> someone else puts in the work of porting the base clients, getting CI
> configured*, and proposing the tests to tempest.

OK.

The current policy doesn't say anything about "core" or different
trademark programs or any other criteria.

  The TC therefore encourages the DefCore committee to consider it an
  indication of future technical direction that we do not want tests
  outside of the Tempest repository used for trademark enforcement, and
  that any new or existing tests that cover capabilities they want to
  consider for trademark enforcement should be placed in Tempest.

That all seems very clear to me (setting aside some specific word
choices like "future technical direction" that tie the resolution
to language in the bylaws).  Regardless of technical reasons why
it may not be necessary, we still have many social justifications
for doing it the way we originally set out to do it.  Tests related
to trademark enforcement need to go into the tempest repository.

The way I think this should work (and the way I remember us describing
it at the time the policy was established) is the Interop WG
(previously DefCore) should identify capabilities and tests, then
ask project teams to reproduce those tests in the tempest repo.
When the tests land, they can be used by the trademark program.
Teams can also, at their leisure, decide whether to remove the
original versions of the tests from whatever repo they existed in
to begin with.

Graham, you've proposed a new resolution with several options for
where to put tests for "add-on programs." I don't think we need
that resolution if we want the tests to continue to live in tempest.
The existing resolution doesn't qualify which tests, beyond "for
trademark enforcement" and more words won't make that more clear,
IMO.

Now if you *do* want to change the policy, we should talk about
that.  But I can't tell whether you want to change it, you're worried
the policy is unclear, or it is not being followed.  Can you clarify
which it is?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Graham Hayes
On 18/01/18 16:25, Doug Hellmann wrote:
> Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:



> 
> In the past the QA team agreed to accept trademark-related tests from
> all projects in the tempest repo. Has that changed?
> 

There has not been an explicit rejection, but in all conversations the
response has been "non core projects are outside the scope of tempest".

Honestly, every time we have tried to do something to core tempest
we have had major pushback, and I want to clarify this before I or
someone else puts in the work of porting the base clients, getting CI
configured*, and proposing the tests to tempest.

- Graham


* With zuulv3 this is *much* easier, so not as big a deal as it once was






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID

2018-01-18 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-01-18 11:45:28 -0500:
> Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100:
> > Hello all,
> > 
> > well, this oslo.log library looks like a core thing that is used by
> > multiple projects. I feel scared hearing that bugs opened on that
> > project are probably just ignored.
> > 
> > Should I reach out to the current PTL of Oslo?
> > https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580
> > 
> > ChangBo Guo, are you reading this thread? Do you think this is a bug or
> > a missing feature? And moreover, is nobody really looking at these
> > oslo.log bugs?
> 
> The Oslo team is small, but we do pay attention to bug reports. I
> don't think this issue is going to rise to the level of "drop what
> you're doing and help because the world is on fire", so I think
> Sean is just encouraging you to have a little patience.
> 
> Please do go ahead and open a bug and attach (or paste into the
> description) an example of what the log output for your service looks
> like.
> 
> Doug

Earlier in the thread you mentioned running the newton versions of
neutron and oslo.log. The newton release has been marked end-of-life
and is not supported by the community any longer. You may find
support from your vendor, but if you're deploying on your own we'll
have to work something else out. If we determine that this is a bug
in the newton version of the library I won't have any way to give
you a new release because the branch is closed.

It should be possible for you to update just oslo.log to a more
recent (and supported) version, although to do so you would have to get
the package separately or build your own, and that may complicate your
deployment.

More recent versions of the JSON formatter change the structure of
the data to include the entire context (including the request id)
in a separate key.  Are you updating to newton as part of upgrading
further than that?  If so, we probably want to wait to debug this
until you hit the latest supported version you're planning to deploy,
in case the problem is already fixed there.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-01-18 Thread Emilien Macchi
On Thu, Jan 18, 2018 at 2:20 AM, Thierry Carrez  wrote:
[...]
> We'd like to publish this schedule on the event website ASAP, so please
> check that it still matches your needs (number of days, room size vs.
> expected attendance) and does not create too many conflicts. [...]

ack & works for us (TripleO).
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Rocky PTG planning

2018-01-18 Thread Daniel Mellado
On 01/18/2018 05:49 PM, Daniel Mellado wrote:
> Hi everyone!
> 
> Unlike winter, PTG is coming! I've created an etherpad to track the
> topics and attendees, so please add your attendance information in
> there.
> 
> Besides work items, maybe we can also use it to try to organize some
> kind of social event in Dublin.
> 
> Looking forward to see you all there!
> 
Forgot to put the link xD!

https://etherpad.openstack.org/p/kuryr-ptg-rocky

Best!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Rocky PTG planning

2018-01-18 Thread Daniel Mellado
Hi everyone!

Unlike winter, PTG is coming! I've created an etherpad to track the
topics and attendees, so please add your attendance information in
there.

Besides work items, maybe we can also use it to try to organize some
kind of social event in Dublin.

Looking forward to see you all there!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID

2018-01-18 Thread Doug Hellmann
Excerpts from Saverio Proto's message of 2018-01-18 14:49:21 +0100:
> Hello all,
> 
> well, this oslo.log library looks like a core thing that is used by
> multiple projects. I feel scared hearing that bugs opened on that
> project are probably just ignored.
> 
> Should I reach out to the current PTL of Oslo?
> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580
> 
> ChangBo Guo, are you reading this thread? Do you think this is a bug or
> a missing feature? And moreover, is nobody really looking at these
> oslo.log bugs?

The Oslo team is small, but we do pay attention to bug reports. I
don't think this issue is going to rise to the level of "drop what
you're doing and help because the world is on fire", so I think
Sean is just encouraging you to have a little patience.

Please do go ahead and open a bug and attach (or paste into the
description) an example of what the log output for your service looks
like.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-01-18 Thread Ed Leafe
Greetings OpenStack community,

A very quiet meeting today [7], which you would expect with the absence of 
cdent and elmiko. The main discussion was about the guideline on exposing 
microversions in SDKs [8] by dtantsur. The focus of the discussion was
how to handle the distinction between what he calls a "low-level SDK" (such as 
novaclient, ironicclient, etc.), and a "high-level SDK" (such as Shade, 
jclouds, or OpenStack.NET). We agreed to continue the discussion next week when 
we can have additional points of view available to come up with more clarity.

Oh, and we merged the improvement to the guideline on pagination. Thanks, 
mordred!

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Expand note about rfc5988 link header
  https://review.openstack.org/#/c/531914/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

* WIP: Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] 
http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-01-18-16.00.log.html
[8] https://review.openstack.org/#/c/532814/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate

2018-01-18 Thread Michael Johnson
This sounds great Ihar!

Let us know when we should make the changes to the neutron-lbaas projects.

Michael
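
P.S. To make sure I understand the impact: the rename means job
definitions and local.conf fragments change roughly like this, right?
(Only the q-svc and q-dhcp mappings are confirmed in your mail below;
the other new-style names are my guess.)

    [[local|localrc]]
    # old-style names, handled by lib/neutron-legacy:
    # enable_service q-svc q-dhcp
    # new-style names, handled by the new lib/neutron:
    enable_service neutron-api neutron-dhcp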


On Wed, Jan 17, 2018 at 11:26 AM, Ihar Hrachyshka  wrote:
> Hi all,
>
> tl;dr I propose to switch to the lib/neutron devstack library in Queens. I
> ask for buy-in to the plan from release and QA teams, something that
> infra asked me to do.
>
> ===
>
> Last several cycles we were working on getting lib/neutron - the new
> in-tree devstack library to deploy neutron services - ready to deploy
> configurations we may need in our gates. Some pieces of the work
> involved can be found in:
>
> https://review.openstack.org/#/q/topic:new-neutron-devstack-in-gate
>
> I am happy to announce that the work finally got to the point where we
> can consistently pass both devstack-gate and neutron gates:
>
> (devstack-gate) https://review.openstack.org/436798
> (neutron) https://review.openstack.org/441579
>
> One major difference between the old lib/neutron-legacy library and
> the new lib/neutron one is that service names for neutron are
> different. For example, q-svc is now neutron-api, q-dhcp is now
> neutron-dhcp, etc. (In case you wonder, this q- prefix links us back
> to times when Neutron was called Quantum.) The way lib/neutron is
> designed is that whenever a single q-* service name is present in
> ENABLED_SERVICES, the old lib/neutron-legacy code is triggered to
> deploy services.
>
> Service name changes are a large part of the work. The way the
> devstack-gate change linked above is designed is that it changes names
> for deployed neutron services starting from Queens (current master),
> so old branches and grenade jobs are not affected by the change.
>
> While we validated the change switching to new names against both
> devstack-gate and neutron gates that should cover 90% of our neutron
> configurations, and followed up with several projects that - we
> deduced - may be affected by the change - there is always a chance
> that some job in some project gate would fail because of it, and we
> would need to push a (probably rather simple) follow-up to unbreak the
> affected job. Due to the nature of the work, the span of impact, and
> the fact that infra repos are not easily gated against with Depends-On
> links, we may need to live with the risk.
>
> Of course, there are several aspects of the project life involved,
> including QA and release delivery efforts. I was advised to reach out
> to both of those teams to get a buy-in to proceed with the move. If we
> have support for the switch now, as per Clark, infra is ready to
> support the switch.
>
> Note that the effort spanned several cycles, partially due to low review
> velocity in several affected repos (devstack, devstack-gate),
> partially because new changes in all affected repos were pulling us
> back from the end goal. This is one of the reasons why I would like us
> to do the switch sooner rather than later, since chasing this moving
> goalpost became rather burdensome.
>
> What are QA and release team thoughts on the switch? Are we ready to
> do it in next weeks?
>
> Thanks for attention,
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Doug Hellmann
Excerpts from Graham Hayes's message of 2018-01-18 15:33:12 +:

> I had hoped for more of a discussion on this before I jumped back into
> this debate - but it seems to be stalled still, so here it goes.
> 
> I proposed this initially as we were unclear on where the tests should
> go - we had a resolution that said all tests go into openstack/tempest
> (with a list of reasons why), and the guidance and discussion that has
> been had at various summits was that "add-ons" should stay in plugins.
> 
> So right now, we (by the governance rules) should be pushing tests to
> tempest for the new programs.
> 
> In the resolution that placed the tests in tempest there were a few
> reasons proposed:
> 
>   For example, API and behavioral changes must be carefully managed, as
>   must mundane aspects such as test and module naming and location
>   within the test suite. Even changes that leave tests functionally
>   equivalent may cause unexpected consequences for their use in DefCore
>   processes and validation. Placing the tests in a central repository
>   will make it easier to maintain consistency and avoid breaking the
>   trademark enforcement tool.
> 
> This still applies, and even more so for teams that traditionally do not
> have a strong QE contributor / reviewer base (aka projects not in
> "core").
> 
>   Centralizing the tests also makes it easier for anyone running the
>   validation tool against their cloud or cloud distribution to use the
>   tests. It is easier to install the test suite and its dependencies,
>   and it is easier to read and understand a set of tests following a
>   consistent implementation pattern.
> 
> Apparently users do not need central tests anymore; feedback from
> RefStack is that people who run these tests are comfortable dealing
> with extra python packages.
> 
> The point about a single set of tests, in a single location and style
> still stands.
> 
>   Finally, having the tests in a central location makes it easier to
>   ensure that all members of the community have equal input into what
>   the tests do and how they are implemented and maintained.
> 
> Seems like a good value to strive for.
> 
> One of the items that has been used to push back against adding
> "add-ons" to tempest has been that tempest has a defined scope, and
> neither of the current add-ons fit in that scope.
> 
> Can someone clarify what the set of criteria is? I think it will help
> this discussion.
> 
> Another pushback is the "scaling" issue - adding more tests will
> overload the QA team.

In the past the QA team agreed to accept trademark-related tests from
all projects in the tempest repo. Has that changed?

> 
> Right now, DNS adds 10 tests, Orchestration adds 22, to a current suite
> of 353.
> 
> I do not think there are many other add-ons proposed yet, and the new
> Vertical programs will probably mainly be re-using tests in the
> openstack/tempest repos as is.
> 
> This is not a big tent-esque influx of programs - the only projects
> that can be added to the trademarks are programs in tc-approved-release
> [4], so I do not see scaling as a big issue, especially as these tests
> cover such basic concepts that changing them would amount to a
> completely new API, so the only overhead will be ensuring that nothing
> in tempest breaks the new tests (which is a good thing for trademark
> tests).
> 
> Personally, for me, I like option 3. I did not initially add it, as I
> knew it would cause endless bikesheding, but I do think it fits both
> a technical and social model.
> 
> I see 2 immediate routes forward:
> 
>  - Option 1, and we start adding these tests asap
>  - Pseudo Option 2, where we delete the resolution at [2] as it clearly
>    does not apply anymore, and abandon the review at [1].
> 
> Finally - do not conflate my actions with those of the Designate team.
> I have seen people talking about how this resolution was leverage the
> team needed to move our tests in tree. This is definitely *not* true.
> Having our tests in a plugin is useful to us, and if the above
> resolution passed, I cannot see a situation where we would try to
> move any tests that were not listed in the interop standards.
> 
> This is something I have done as an individual in the community, not
> something the designate team have pushed for.

Thanks for pushing for a clear resolution to this, Graham.

> 
> 
> [4] -
> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [glance] priorities for the coming week (18 Jan - 24 Jan)

2018-01-18 Thread Brian Rosmaita
As discussed at today's Glance meeting, the Q-3 milestone is next
week.  Please focus on the following:

(1) image metadata injection
https://review.openstack.org/#/c/527635/

(2) interoperable image import
https://review.openstack.org/532502
https://review.openstack.org/532501
may be some more, watch the ML

(3) use only E-M-C strategy in glance-manage tool
not sure the patch is up yet, will leave a note on
https://review.openstack.org/#/c/433934

Have a good week!
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-01-18 Thread Matthew Thode
On 18-01-18 11:20:21, Thierry Carrez wrote:
> Hi everyone,
> 
> Here is the proposed pre-allocated track schedule for the Dublin PTG:
> 
> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
> 
> You'll also notice that some teams (in orange below the table in above
> link) do not have pre-allocated slots. One key difference this time
> around is that we set aside a larger number of rooms and meeting spots
> for dynamically scheduling tracks. The idea is to avoid pre-allocating
> smaller tracks to a specific time slot that might or might not create
> conflicts, and let that team book a space at a time that makes the most
> sense for them, while the event happens. This dynamic booking will be
> done through the PTGbot.
> 
> So the unscheduled teams (in orange) are expected to take advantage of
> this flexibility and schedule themselves during the event. This
> flexibility is not limited to those orange teams: other teams may want
> to meet for more than their pre-allocated time slots, and can book extra
> space as well. For example if you are on the First Contact SIG and
> realize on Tuesday afternoon that you would like to continue the
> discussions on Friday morning, it's easy to extend your track to a time
> slot there.
> 
> Note that this system also replaces the old ethercalc-scheduled
> "reservable rooms". If a group of people forms around a specific issue
> and wants to discuss it, they can ask for their own additional "track"
> and schedule it dynamically as well.
> 
> Finally, you'll notice we have extra space set aside on Monday-Tuesday
> to discuss "hot" last-minute cross-project issues -- if you have ideas
> of topics that we need to discuss in-person, please let us know.
> 

As one of the teams in orange, what specific steps, if any, do we need to
take to schedule a specific time/place for a meeting?

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Many timeouts in zuul gates for TripleO

2018-01-18 Thread Emilien Macchi
On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar  wrote:
> Hi,
> we're encountering many timeouts for zuul gates in TripleO.
> For example, see
> http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/.
>
> Rechecks won't help, and sometimes a specific gate ends successfully and
> sometimes not.
> The problem is that after a recheck it's not always the same gate that
> fails.
>
> Is there someone who has access to the server load to see what causes this?
> Alternatively, is there something we can do in order to reduce the running
> time for each gate?

We're migrating to RDO Cloud for OVB jobs:
https://review.openstack.org/#/c/526481/
It's a work in progress but will help a lot for OVB timeouts on RH1.

I'll let the CI folks comment on that topic.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-5, January 20 - 26

2018-01-18 Thread Sean McGinnis
On Thu, Jan 18, 2018 at 08:56:55AM -0600, Sean McGinnis wrote:
> Development Focus
> -
> 
> Teams should be focused on implementing planned work. Work should be wrapping
> up on client libraries to meet the client lib deadline Thursday, the 25th.
> 
> General Information
> ---
> 
> Next Thursday is the final client library release. Releases will only be
> allowed for critical fixes in libraries after this point as we stabilize
> requirements and give time for any unforeseen impacts from lib changes to
> trickle through.
> 
> When requesting these library releases, you should also include the stable
> branching request with the review (as an example, see the "branches" section
> here:
> http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2)
> 
> Speaking of branching... for projects following the cycle-with-milestones
> release model, please check membership of your $project-release group. This
> group should be limited to those aware of the final release activity for the
> project to make sure only important things are allowed to be backported into
> the stable/queens branch leading up to the final release. For new projects 
> this
> cycle, you may need to request the infra team create this group for you.
> 
> Upcoming Deadlines & Dates
> --
> 
> Final client library release deadline: January 25
> Queens-3 Milestone: January 25
> Start of Rocky PTL nominations: January 29
> Start of Rocky PTL election: February 7
> OpenStack Summit Vancouver CFP deadline: February 8
> Rocky PTG in Dublin: Week of February 26, 2018
> 
> -- 
> Sean McGinnis (smcginnis)
> 

I got too focused on the client lib freeze and forgot to mention a few
significant things.

Please be aware that the 25th (that's a Thursday Jay) is also the Queens-3
deadline. With that, we are also in Feature Freeze. Only bug fixes and wrap-up
work should be accepted after this point; anything else needs explicit approval
from the PTL and some mention on the mailing list.

With these final library releases, the requirements repo is also now locked
down to allow projects to stabilize before RC.

And in order to help the I18n team have a chance of getting any translations
done, we also enter String Freeze. Translatable strings (anything wrapped in
_(), e.g. in exception messages) should not be changed unless absolutely
necessary.
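
If it helps, a minimal illustration of what falls under the freeze (the
i18n import path varies per project; nova's module is assumed here):

    from nova.i18n import _  # project-specific; nova.i18n is assumed

    def lookup(instances, uuid):
        if uuid not in instances:
            # the string inside _() is translatable and therefore frozen
            raise KeyError(_("Instance %(uuid)s could not be found.")
                           % {'uuid': uuid})
        return instances[uuid]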

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Graham Hayes


On 11/01/18 16:36, Colleen Murphy wrote:
> Hi everyone,
> 
> We have a governance review under debate [1] that we need the community's
> help on.
> The debate is over what recommendation the TC should make to the Interop team
> on where the tests it uses for the OpenStack trademark program should be
> located, specifically those for the new add-on program being introduced. Let 
> me
> badly summarize:
> 
> A couple of years ago we issued a resolution[2] officially recommending that
> the Interop team use solely tempest as its source of tests for capability
> verification. The Interop team has always had the view that the developers,
> being the people closest to the project they're creating, are the best people
> to write tests verifying correct functionality, and so the Interop team 
> doesn't
> maintain its own test suite, instead selecting tests from those written in
> coordination between the QA team and the other project teams. These tests are
> used to validate clouds applying for the OpenStack Powered tag, and since all
> of the projects included in the OpenStack Powered program already had tests in
> tempest, this was a natural fit. When we consider adding new trademark 
> programs
> comprising of other projects, the test source is less obvious. Two examples 
> are
> designate, which has never had tests in the tempest repo, and heat, which
> recently had its tests removed from the tempest repo.
> 



> 
> As I'm not deeply steeped in the history of either the Interop or QA teams I 
> am
> sure I've misrepresented some details here, I'm sorry about that. But we'd 
> like
> to get this resolution moving forward and we're currently stuck, so this 
> thread
> is intended to gather enough community input to get unstuck and avoid letting
> this proposal become stale. Please respond to this thread or comment on the
> resolution proposal[1] if you have any thoughts.
> 
> Colleen
> 
> [1] https://review.openstack.org/#/c/521602
> [2] 
> https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
> [3] 
> https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
> 

I had hoped for more of a discussion on this before I jumped back into
this debate - but it seems to be stalled still, so here it goes.

I proposed this initially as we were unclear on where the tests should
go - we had a resolution that said all tests go into openstack/tempest
(with a list of reasons why), and the guidance and discussion that has
been had at various summits was that "add-ons" should stay in plugins.

So right now, we (by the governance rules) should be pushing tests to
tempest for the new programs.

In the resolution that placed the tests in tempest there were a few
reasons proposed:

  For example, API and behavioral changes must be carefully managed, as
  must mundane aspects such as test and module naming and location
  within the test suite. Even changes that leave tests functionally
  equivalent may cause unexpected consequences for their use in DefCore
  processes and validation. Placing the tests in a central repository
  will make it easier to maintain consistency and avoid breaking the
  trademark enforcement tool.

This still applies, and even more so for teams that traditionally do not
have a strong QE contributor / reviewer base (aka projects not in
"core").

  Centralizing the tests also makes it easier for anyone running the
  validation tool against their cloud or cloud distribution to use the
  tests. It is easier to install the test suite and its dependencies,
  and it is easier to read and understand a set of tests following a
  consistent implementation pattern.

Apparently users do not need central tests anymore; feedback from
RefStack is that people who run these tests are comfortable dealing
with extra python packages.

The point about a single set of tests, in a single location and style
still stands.

  Finally, having the tests in a central location makes it easier to
  ensure that all members of the community have equal input into what
  the tests do and how they are implemented and maintained.

Seems like a good value to strive for.

One of the items that has been used to push back against adding
"add-ons" to tempest has been that tempest has a defined scope, and
neither of the current add-ons fit in that scope.

Can someone clarify what the set of criteria is? I think it will help
this discussion.

Another pushback is the "scaling" issue - adding more tests will
overload the QA team.

Right now, DNS adds 10 tests, Orchestration adds 22, to a current suite
of 353.

I do not think there are many other add-ons proposed yet, and the new
Vertical programs will probably mainly be re-using tests in the
openstack/tempest repos as is.

This is not a big tent-esque influx of programs - the only projects
that can be added to the trademarks are programs in tc-approved-release
[4], so I do not see scaling as a big issue, especially as these tests
are such base concepts that 

[openstack-dev] [cliff][osc][barbican][oslo][sdk][all] avoiding option name conflicts with cliff and OSC plugins

2018-01-18 Thread Doug Hellmann
We've been working this week to resolve an issue between cliff and
barbicanclient due to a collision in the option namespace [0].
Barbicanclient was using the -s option, and then cliff's lister
command base class added that option as an alias for --sort-columns.

The first attempt at resolving the conflict was to set the conflict
handler in argparse to 'resolve' [1]. Several reviewers pointed out
that this would have the unwanted side-effect of making some OSC
commands support the -s as an alias for --sort-columns while the
barbicanclient commands would use it for a different purpose.

For now we have removed the -s alias from within cliff. However,
we want to avoid this problem coming up in the future so we want a
better solution.

The OSC project has a policy that its command plugins do not use
short options (single letter). There are relatively few of them,
and it's easy to introduce collisions.  Therefore, they are seen
as reserved for more common "global" options such as those provided by
the base classes in OSC and cliff.

I propose that for Rocky we update cliff to change the way options
are registered so that conflicting options added by command subclasses
are ignored. This would effectively let cliff "own" the short option
namespace, and require command classes to use long option names.

The implementation details need to be worked out a bit, but I think
we can do that by subclassing ArgumentParser and adding a new
conflict handler method similar to the existing _handle_conflict_error()
and _handle_conflict_resolve().
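
To make the idea concrete, here is a rough sketch of the sort of thing I
have in mind. It leans on argparse internals (the conflict-handler lookup
and the timing of option registration), and the class and method names
are just placeholders, so treat it as illustrative rather than final:

    import argparse

    class _IgnoreConflictsMixin(object):
        # argparse looks up '_handle_conflict_<name>' based on the
        # conflict_handler value, so constructing with
        # conflict_handler='ignore' routes conflicts here.
        def _handle_conflict_ignore(self, action, conflicting_actions):
            # The stock 'resolve' handler strips the option string from
            # the *existing* action; here we strip it from the *new*
            # action instead, so whichever option was registered first
            # (by the cliff/OSC base classes) keeps the short name.
            for option_string, _existing in conflicting_actions:
                action.option_strings.remove(option_string)

    class _ArgumentGroup(_IgnoreConflictsMixin, argparse._ArgumentGroup):
        pass

    class ArgumentParser(_IgnoreConflictsMixin, argparse.ArgumentParser):
        # Argument groups do their own conflict checking, so they need
        # the mixin as well.
        def add_argument_group(self, *args, **kwargs):
            group = _ArgumentGroup(self, *args, **kwargs)
            self._action_groups.append(group)
            return group

    parser = ArgumentParser(conflict_handler='ignore')
    parser.add_argument('-s', '--sort-columns')  # base class owns -s
    parser.add_argument('-s', '--secret')        # '-s' is dropped; only
                                                 # --secret is registered

One wrinkle to work out: a command option registered with *only* a short
name would end up with no option strings at all, so we would probably
want to raise a useful error in that case.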

This is going to introduce backwards-incompatible changes in the
commands derived from cliff, so before we take any action I wanted
to solicit input on the plan.

Please let me know what you think,
Doug

[0] https://bugs.launchpad.net/python-barbicanclient/+bug/1743578
[1] https://docs.python.org/3.5/library/argparse.html#conflict-handler

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-5, January 20 - 26

2018-01-18 Thread Sean McGinnis
Development Focus
-

Teams should be focused on implementing planned work. Work should be wrapping
up on client libraries to meet the client lib deadline Thursday, the 25th.

General Information
---

Next Thursday is the final client library release. Releases will only be
allowed for critical fixes in libraries after this point as we stabilize
requirements and give time for any unforeseen impacts from lib changes to
trickle through.

When requesting these library releases, you should also include the stable
branching request with the review (as an example, see the "branches" section
here:
http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2)
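
For those who haven't done this before, the relevant part of a
deliverable file looks roughly like this (the repo name, version, and
hash below are made up for illustration; the linked os-brick file is the
authoritative example):

    releases:
      - version: 1.2.0
        projects:
          - repo: openstack/example
            hash: 0123456789abcdef0123456789abcdef01234567
    branches:
      - name: stable/queens
        location: 1.2.0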

Speaking of branching... for projects following the cycle-with-milestones
release model, please check membership of your $project-release group. This
group should be limited to those aware of the final release activity for the
project to make sure only important things are allowed to be backported into
the stable/queens branch leading up to the final release. For new projects this
cycle, you may need to request the infra team create this group for you.

Upcoming Deadlines & Dates
--

Final client library release deadline: January 25
Queens-3 Milestone: January 25
Start of Rocky PTL nominations: January 29
Start of Rocky PTL election: February 7
OpenStack Summit Vancouver CFP deadline: February 8
Rocky PTG in Dublin: Week of February 26, 2018

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Clarifying testing recommendation for interop programs

2018-01-18 Thread Graham Hayes


On 12/01/18 09:49, Luigi Toscano wrote:
> On Thursday, 11 January 2018 23:52:00 CET Matt Riedemann wrote:
>> On 1/11/2018 10:36 AM, Colleen Murphy wrote:
>>> 1) All trademark-related tests should go in the tempest repo, in
>>> accordance
>>>
>>> with the original resolution. This would mean that even projects that
>>> have
>>> never had tests in tempest would now have to add at least some of
>>> their
>>> black-box tests to tempest.
>>>
>>> The value of this option is that centralizes tests used for the Interop
>>> program in a location where interop-minded folks from the QA team can
>>> control them. The downside is that projects that so far have avoided
>>> having a dependency on tempest will now lose some control over the
>>> black-box tests that they use for functional and integration that would
>>> now also be used for trademark certification.
>>> There's also concern for the review bandwidth of the QA team - we can't
>>> expect the QA team to be continually responsible for an ever-growing list
>>> of projects and their trademark tests.
>>
>> How many tests are we talking about for designate and heat? Half a
>> dozen? A dozen? More?
>>
>> If it's just a couple of tests per project it doesn't seem terrible to
>> have them live in Tempest so you get the "interop eye" on reviews, as
>> noted in your email. If it's a considerable amount, then option 2 seems
>> the best for the majority of parties.
> 
> I would argue that it does not scale; what if some test is taken out of the
> interoperability program, and others are added? It would mean moving tests
> from one repository to another, with a change of paths. I think that
> solution 2, where the repository a test belongs to and the functionality of
> a test are not linked, is better.
> 
> Ciao
> 

How is this different from the 6 programs already in tempest?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Meeting Next Week + Other Highlights

2018-01-18 Thread Graham Hayes
Hi All,

I am going to be on vacation next week, and will not be able to run
the weekly IRC meeting. We can either skip, or if someone steps up to
run it we can go ahead.

I will be around enough to do the q3 release.

Also a reminder that the etherpad for planning the PTG in Dublin is
available [1]. We have 2 full days, plus I am sure we can find a free room /
hallway for the Friday if we run over, so please fill in ideas.

Thanks!

- Graham


1 - https://etherpad.openstack.org/p/DUB-designate-PTG-planning




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] Won't be able to make todays meeting

2018-01-18 Thread Jeremy Stanley
On 2018-01-18 10:05:19 + (+), Luke Hinds wrote:
> I won't be able to attend the security project meeting today, and as there
> are no hot topics I suggest we postpone until next week (if there are, then
> feel free to #startmeeting and I will catch up tomorrow through meetbot
> logs).

Sounds fine to me, I didn't have anything new to bring up (also,
I'll be indisposed/travelling next week and unable to attend then as
well, just FYI, though other members of the VMT will likely be
around some if we need to be reached on an urgent matter).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Many timeouts in zuul gates for TripleO

2018-01-18 Thread Or Idgar
Hi,
we're encountering many timeouts for zuul gates in TripleO.
For example, see
http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/
.

Rechecks won't help, and sometimes a specific gate ends successfully and
sometimes not.
The problem is that after a recheck it's not always the same gate that
fails.

Is there someone who has access to the server load to see what causes this?
Alternatively, is there something we can do in order to reduce the running
time for each gate?

Thanks in advance.

-- 
Best regards,
Or Idgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] adding Gage Hugo to keystone core

2018-01-18 Thread Rodrigo Duarte
Congrats, Gage. Well deserved!

On Tue, Jan 16, 2018 at 6:16 PM, Harry Rybacki  wrote:

> +100 -- congratulations, Gage!
>
>
> On Tue, Jan 16, 2018 at 2:24 PM, Raildo Mascena de Sousa Filho <
> rmasc...@redhat.com> wrote:
>
>> +1
>>
>> Congrats Gage, very well deserved!
>>
>> Cheers,
>>
>> On Tue, Jan 16, 2018 at 4:02 PM Lance Bragstad 
>> wrote:
>>
>>> Hey folks,
>>>
>>> In today's keystone meeting we made the announcement to add Gage Hugo
>>> (gagehugo) as a keystone core reviewer [0]! Gage has been actively
>>> involved in keystone over the last several cycles. Not only does he
>>> provide thorough reviews, but he's really stepped up to help move the
>>> project forward by keeping a handle on bugs, fielding questions in the
>>> channel, and being diligent about documentation (especially during
>>> in-person meet ups).
>>>
>>> Thanks for all the hard work, Gage!
>>>
>>> [0]
>>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-01-16-18.00.log.html
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> --
>>
>> Raildo mascena
>>
>> Software Engineer, Identity Managment
>>
>> Red Hat
>>
>> 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [craton] Retirement of craton and python-cratonclient

2018-01-18 Thread Andreas Jaeger
Craton development has been frozen since around March 2017. I've discussed
this with Sulochan and will start the retirement process now.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID

2018-01-18 Thread Saverio Proto
Hello all,

Well, this oslo.log library looks like a core component that is used by
multiple projects. I find it worrying to hear that bugs opened on that
project are probably just ignored.

Should I reach out to the current PTL of Oslo?
https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2580

ChangBo Guo, are you reading this thread? Do you think this is a bug or
a missing feature? And moreover, is really nobody looking at these
oslo.log bugs?
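
For the record, here is roughly how I am checking this. A minimal
reproduction sketch only; module and attribute names are from memory and
may differ between oslo.log versions:

    import logging
    import sys

    from oslo_context import context
    from oslo_log import formatters

    # Plain stdlib logger with oslo.log's JSON formatter attached.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(formatters.JSONFormatter())
    log = logging.getLogger('repro')
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    # A fresh RequestContext generates a request_id (req-<uuid>).
    ctx = context.RequestContext()
    print('expected request_id:', ctx.request_id)

    # If the bug is present, the JSON record printed below contains no
    # request_id field anywhere.
    log.info('volume attach finished', extra={'context': ctx})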

thanks

Saverio


On 18.01.18 12:37, Sean Dague wrote:
> A bug would be fine. I'm not sure how many people are keeping an eye on
> oslo.log bugs at this point, so be realistic about when it might get looked at.
> 
> On 01/18/2018 03:06 AM, Saverio Proto wrote:
>> Hello Sean,
>> after the brief chat we had on IRC, do you think I should open a bug
>> about this issue?
>>
>> thank you
>>
>> Saverio
>>
>>
>> On 13.01.18 07:06, Saverio Proto wrote:
 I don't think this is a configuration problem.

 Which version of the oslo.log library do you have installed?
>>>
>>> Hello Doug,
>>>
>>> I use the Ubuntu packages, at the moment I have this version:
>>>
>>> python-oslo.log   3.16.0-0ubuntu1~cloud0
>>>
>>> thank you
>>>
>>> Saverio
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
> 
> 


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID

2018-01-18 Thread Sean Dague
A bug would be fine. I'm not sure how many people are keeping an eye on
oslo.log bugs at this point, so be realistic about when it might get looked at.

On 01/18/2018 03:06 AM, Saverio Proto wrote:
> Hello Sean,
> after the brief chat we had on IRC, do you think I should open a bug
> about this issue?
> 
> thank you
> 
> Saverio
> 
> 
> On 13.01.18 07:06, Saverio Proto wrote:
>>> I don't think this is a configuration problem.
>>>
>>> Which version of the oslo.log library do you have installed?
>>
>> Hello Doug,
>>
>> I use the Ubuntu packages, at the moment I have this version:
>>
>> python-oslo.log   3.16.0-0ubuntu1~cloud0
>>
>> thank you
>>
>> Saverio
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][l2gw] Preventing DB out-of-sync

2018-01-18 Thread Ricardo Noriega De Soto
Just for awareness on this subject. Peng has proposed an initial patch to
tackle this issue:

https://review.openstack.org/#/c/529009/6
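
For anyone catching up on the thread, the precommit/postcommit split being
discussed looks roughly like the sketch below. The names are generic
placeholders, not the actual l2gw plugin API; the review above has the
authoritative code:

    def create_resource(self, context, resource):
        self.validate_resource(context, resource)
        with context.session.begin(subtransactions=True):
            db_obj = super(Plugin, self).create_resource(context, resource)
            # precommit: driver-side validation inside the DB transaction,
            # so raising here rolls the plugin DB record back.
            self.driver.create_resource_precommit(context, db_obj)
        try:
            # postcommit: talk to the back end only once the record is
            # safely committed to the plugin DB.
            self.driver.create_resource_postcommit(context, db_obj)
        except Exception:
            # Back-end failure after commit: undo the DB record so the
            # plugin DB and the back end cannot silently diverge.
            with context.session.begin(subtransactions=True):
                super(Plugin, self).delete_resource(context, db_obj['id'])
            raise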



On Tue, Dec 12, 2017 at 11:20 AM, Ricardo Noriega De Soto <
rnori...@redhat.com> wrote:

> Peng, I think you are right. We should have a common behavior among the
> drivers, and move the implementation to the proper methods like
> post-commits, do the validation in the pre-commits, etc.
>
> Second phase to tackle the out-of-sync could be the "revision number"
> approach from networking-ovn.
>
> Cheers
>
> On Mon, Dec 11, 2017 at 4:32 PM, Peng Liu  wrote:
>
>> Hi Michael,
>>
>> Yes, it's a similar issue but a different aspect. Actually, the case for
>> l2gw is worse, considering we have to deal with two existing back-end
>> drivers which have different understandings of the interfaces. But I think
>> the proposed approach for networking-ovn is inspiring and helpful for l2gw.
>>
>> Thanks,
>>
>> On Fri, Dec 8, 2017 at 11:59 PM, Michael Bayer  wrote:
>>
>>> On Wed, Dec 6, 2017 at 3:46 AM, Peng Liu  wrote:
>>> > Hi,
>>> >
>>> > While working on this patch [0], I encountered some DB out-of-sync
>>> > problems. I think the design can be improved. Here are my thoughts;
>>> > all comments are welcome.
>>>
>>>
>>> see also https://review.openstack.org/#/c/490834/ which I think is
>>> dealing with a similar (if not the same) issue.
>>>
>>> >
>>> > In plugin code, I found that all the resource operations follow the
>>> > pattern in [1]. It is a very misleading design compared to [2].
>>> >
>>> > "For every action that can be taken on a resource, the mechanism driver
>>> > exposes two methods - ACTION_RESOURCE_precommit, which is called
>>> within the
>>> > database transaction context, and ACTION_RESOURCE_postcommit, called
>>> after
>>> > the database transaction is complete."
>>> >
>>> > As a result, if we focus on the out-of-sync between the plugin DB and
>>> > the driver/back-end DB:
>>> >
>>> > 1) In the RPC driver, only the Action_Resource methods are implemented,
>>> > which means the action is taken before it is written to the plugin DB.
>>> > If the action partially succeeds (especially in the update case) or the
>>> > plugin DB operation fails, the DBs go out of sync.
>>> > 2) In the ODL driver, only the Action_Resource_postcommit methods are
>>> > implemented, which means there is no validation at the ODL level before
>>> > the record is written to the plugin DB. In case of an ODL-side failure,
>>> > there is no rollback of the plugin DB.
>>> >
>>> > So, fixing this issue is costly: both the plugin and the driver sides
>>> > need to be altered.
>>> >
>>> > The other side of this issue is a periodic DB monitoring mechanism
>>> > between the plugin and the drivers, but that is another story.
>>> >
>>> >
>>> > [0] https://review.openstack.org/#/c/516256
>>> > [1]
>>> > ...
>>> > def Action_Resource(...):
>>> >     self.validate_Resource_for_Action(...)
>>> >     self.driver.Action_Resource(...)
>>> >     with context.session.begin(subtransactions=True):
>>> >         super().Action_Resource(...)
>>> >         self.driver.Action_Resource_precommit(...)
>>> >     try:
>>> >         self.driver.Action_Resource_postcommit(...)
>>> > ...
>>> > [2] https://wiki.openstack.org/wiki/Neutron/ML2
>>> >
>>> > --
>>> > Peng Liu
>>> >
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Peng Liu | Senior Software Engineer
>>
>> Tel: +86 10 62608046 (direct)
>> Mobile: +86 13801193245
>>
>> Red Hat Software (Beijing) Co., Ltd.
>> 9/F, North Tower C,
>> Raycom Infotech Park,
>> No.2 Kexueyuan Nanlu, Haidian District,
>> Beijing, China, POC 100190
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
> Red Hat
> irc: rnoriega @freenode
>
>


-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [QA] [all] QA Rocky PTG Planning

2018-01-18 Thread Andrea Frittoli
And the link [1]:

[1] https://etherpad.openstack.org/p/qa-rocky-ptg

On Thu, Jan 18, 2018 at 10:28 AM Andrea Frittoli 
wrote:

> Dear all,
>
> I started the etherpad for planning the QA work in Dublin.
> Please add your ideas / proposals for sessions and your intention to attend.
> We have a room for the QA team for three full days Wed-Fri.
>
> This time I also included a "Request for Sessions" - if anyone would like
> to discuss a QA-related topic with the QA team, or do a hands-on / sprint
> on something, feel free to add it in there. We can also handle these in a
> less structured format during the PTG - but if there are a few requests on
> similar topics we can set up a session on Mon/Tue for everyone interested
> to attend.
>
> Andrea Frittoli (andreaf)
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] [all] QA Rocky PTG Planning

2018-01-18 Thread Andrea Frittoli
Dear all,

I started the etherpad for planning the QA work in Dublin.
Please add your ideas / proposals for sessions and your intention to attend.
We have a room for the QA team for three full days Wed-Fri.

This time I also included a "Request for Sessions" - if anyone would like
to discuss a QA-related topic with the QA team, or do a hands-on / sprint
on something, feel free to add it in there. We can also handle these in a
less structured format during the PTG - but if there are a few requests on
similar topics we can set up a session on Mon/Tue for everyone interested
to attend.

Andrea Frittoli (andreaf)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-01-18 Thread Thierry Carrez
Hi everyone,

Here is the proposed pre-allocated track schedule for the Dublin PTG:

https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true

The proposed allocation takes into account the estimated group size and
number of days that were communicated to Kendall and me by each team PTL.
We'd like to publish this schedule on the event website ASAP, so please
check that it still matches your needs (number of days, room size vs.
expected attendance) and does not create too many conflicts. There are
lots of constraints, so we can't promise we'll accommodate your remarks,
but we'll do our best.

If your team is not listed, that's probably because you haven't
confirmed that your team intends to meet at the PTG yet. Let us know
ASAP if the situation has changed, and we'll see if we can add extra space
to host you.

You'll also notice that some teams (in orange below the table in the
above link) do not have pre-allocated slots. One key difference this
time around is that we set aside a larger number of rooms and meeting
spots for dynamically scheduling tracks. The idea is to avoid
pre-allocating smaller tracks to a specific time slot that might or
might not create conflicts, and instead to let each such team book a
space, while the event happens, at a time that makes the most sense
for them. This dynamic booking will be
done through the PTGbot.

So the unscheduled teams (in orange) are expected to take advantage of
this flexibility and schedule themselves during the event. This
flexibility is not limited to those orange teams: other teams may want
to meet for more than their pre-allocated time slots, and can book extra
space as well. For example, if you are on the First Contact SIG and
realize on Tuesday afternoon that you would like to continue the
discussions on Friday morning, it's easy to extend your track to a time
slot there.

Note that this system also replaces the old ethercalc-scheduled
"reservable rooms". If a group of people forms around a specific issue
and wants to discuss it, they can ask for their own additional "track"
and schedule it dynamically as well.

Finally, you'll notice we have extra space set aside on Monday-Tuesday
to discuss "hot" last-minute cross-project issues -- if you have ideas
of topics that we need to discuss in-person, please let us know.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url

2018-01-18 Thread TommyLike Hu
Hey all,
   Recently we found an issue related to the OpenStack action APIs. We
usually expose our OpenStack APIs by registering them in our API gateway
(for instance Kong [1]), but this becomes very difficult for the action
APIs. We cannot register and control them separately, because they all
share the same request URL, which is used as the identity in the gateway
service, to say nothing of rate limiting and other advanced gateway
features. Take a look at the basic resources in OpenStack:

   1. Server: "/servers/{server_id}/action"  35+ actions are included.
   2. Volume: "/volumes/{volume_id}/action"  14 actions are included.
   3. Other resources

We have tried to register different interfaces with the same upstream URL,
such as:

  api gateway: /version/resource_one/action/action1 => upstream: /version/resource_one/action
  api gateway: /version/resource_one/action/action2 => upstream: /version/resource_one/action

But this is not secure enough, because we can still pass action2 in the
request body while invoking /action/action1. Also, reading the full body
for routing is not supported by most API gateways (maybe via plugins) and
would have a performance impact when proxying. So my question is: do we
have any solution or suggestion for this case? Could we support specifying
the action name both in the request body and in the URL, such as:

URL:  /volumes/{volume_id}/action
BODY: {'extend': {}}

and:

URL:  /volumes/{volume_id}/action/extend
BODY: {'extend': {}}
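
For illustration, here is what the two variants look like from a client's
point of view, using the existing volume extend action as the example. The
endpoint, token and size are placeholders, and the second call is of
course the proposal, not a supported API today:

    import requests

    base = 'http://cinder.example.com/v3/{project_id}/volumes/{volume_id}'
    headers = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # Today: every action goes through the same URL, so a gateway only
    # ever sees "POST .../action", whichever action is requested.
    requests.post(base + '/action', headers=headers,
                  json={'os-extend': {'new_size': 2}})

    # Proposed: the action name is repeated in the URL, so a gateway can
    # register, route and rate-limit each action independently.
    requests.post(base + '/action/os-extend', headers=headers,
                  json={'os-extend': {'new_size': 2}})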

Thanks
Tommy

[1]: https://github.com/Kong/kong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [security] Won't be able to make todays meeting

2018-01-18 Thread Luke Hinds
Hello,

I won't be able to attend the security project meeting today, and as there
are no hot topics I suggest we postpone until next week (if there are any,
feel free to #startmeeting and I will catch up tomorrow through the
meetbot logs).

Cheers,

Luke
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Ocata to Pike upgrade job is working as of today.

2018-01-18 Thread Javier Pena


- Original Message -
> Hi,
> 
> So join us (upgrade squad) to celebrate the working ocata->pike upgrade
> job[1], without any depends-on whatsoever.
> 
> We would like it to be voting as soon as possible. It has been a
> rather time-consuming task to revive this forgotten but important job,
> and the only way to keep it from drifting into oblivion again is to
> make it voting.
> 

I see the job is actually voting (it should have a -nv suffix to be non-voting).
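
(For reference, with the zuul v3 syntax the voting behaviour is set
explicitly in the project pipeline definition rather than implied by the
job name; a sketch only, reusing the job name from [1]:)

    - project:
        check:
          jobs:
            - gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike:
                voting: false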

Regards,
Javier

> Finally, let's thank the RDO Cloud people for their support (especially
> David Manchado), James Slagle for Traas [2], and Alfredo Moralejo for
> his constant availability to answer our questions.
> 
> Thanks,
> 
> [1] https://review.openstack.org/#/c/532791/, look for
> «gate-tripleo-ci-centos-7-containers-multinode-upgrades-pike»
> [2] https://github.com/slagle/traas … the repo we use ->
> https://github.com/sathlan/traas (so many pull requests to make that it
> would be cool for it to be an openstack project … :))
> --
> Sofer Athlan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] about rebuild instance booted from volume

2018-01-18 Thread 李杰
Hi,all


This is the spec about rebuilding an instance booted from volume;
anyone who is interested in boot-from-volume can help to review it.
Any suggestions are welcome. The link is here:

Re: the rebuild spec: https://review.openstack.org/#/c/532407/

Best Regards
Lijie
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][masakari][api] masakari service-type, docs, api-ref and releasenotes

2018-01-18 Thread Sam P
Hi Monty,
 Thanks for the patches.
 I agree that 'ha' is a pretty broad service-type for Masakari.
 compute-ha and instance-ha are both OK; as Andreas proposed,
 I would like to change it to instance-ha.
 If there are no objections, I will fix python-masakariclient, the
devstack plugin, etc. on the masakari side.
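
If it helps the review, the entry in service-types-authority would then
look roughly like this. This is a sketch from memory only; the
authoritative format is the existing service-types.yaml in that repo:

    - service_type: instance-ha
      project: masakari
      api_reference: https://developer.openstack.org/api-ref/instance-ha/
      aliases:
        - ha
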
--- Regards,
Sampath



On Thu, Jan 18, 2018 at 4:42 PM, Andreas Jaeger  wrote:
> On 2018-01-17 20:23, Monty Taylor wrote:
>> Hey everybody,
>>
>> I noticed today while preparing patches to projects that are using
>> openstacksdk that masakari is not listed in service-types-authority. [0]
>>
>> I pushed up a patch to fix that [1] as well as to add api-ref, docs and
>> releasenotes jobs to the masakari repo so that each of those will be
>> published appropriately.
>>
>> As part of doing this, it came up that 'ha' is a pretty broad
>> service-type and that perhaps it should be 'compute-ha' or 'instance-ha'.
>>
>> The service-type is a unique key for identifying a service in the
>> catalog, so the same service-type cannot be shared amongst openstack
>> services. It is also used for api-ref publication (to
>> https://developer.openstack.org/api-ref/{service-type} ) - and in
>> openstacksdk as the name used for the service attribute on the
>> Connection object. (So the service-type 'ha' would result in having
>> conn.ha on an openstack.connection.Connection)
>>
>> We do support specifying historical aliases. Since masakari has been
>> using ha up until now, we'll need to list it in the aliases at the very
>> least.
>>
>> Do we want to change it? What should we change it to?
>
> Yes, I would change it - instance-ha sounds best seeing that the
> governance file has:
> "  service: Instances High Availability Service"
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.log] JSON logs are missing the request ID

2018-01-18 Thread Saverio Proto
Hello Sean,
after the brief chat we had on IRC, do you think I should open a bug
about this issue?

thank you

Saverio


On 13.01.18 07:06, Saverio Proto wrote:
>> I don't think this is a configuration problem.
>>
>> Which version of the oslo.log library do you have installed?
> 
> Hello Doug,
> 
> I use the Ubuntu packages, at the moment I have this version:
> 
> python-oslo.log   3.16.0-0ubuntu1~cloud0
> 
> thank you
> 
> Saverio
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev