[openstack-dev] [kolla] ptl non candidacy

2018-07-24 Thread Jeffrey Zhang
Hi all,

I just want to say that I am not running for PTL for the Stein cycle. I have
been involved in the Kolla project for almost 3 years, and recently my work
has changed a little, so I may not have much time for the community in the
future. Kolla is a great project and the community is also awesome. I would
encourage everyone in the community to consider running.

Thanks for your support :D.
-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-24 Thread MONTEIRO, FELIPE C
> > Hi,
> >
> > ** Intention **
> >
> > Intention is to expand Patrole testing to some service clients that
> > already exist in some Tempest plugins, for core services only.
> 
> What exact projects does Patrole consider "core", and how are you making
> that decision? Is it a tag, InterOp, or some other criteria?
> 
> 

We mean "core" only in the sense that Tempest means it: "the six client groups 
for the six core services covered by tempest in the big tent" [1]. That 
includes Nova, Neutron, Glance, Cinder and Keystone. Swift is not included in 
Patrole because Swift doesn't use oslo.policy for RBAC.

[1] 
https://specs.openstack.org/openstack/qa-specs/specs/tempest/client-manager-refactor.html
 



Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-24 Thread Ghanshyam Mann
On Wed, 25 Jul 2018 10:27:26 +0900 MONTEIRO, FELIPE C wrote:
> Please see comments inline.
>
> > On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C wrote:
> > > Hi,
> > >
> > > ** Intention **
> > > Intention is to expand Patrole testing to some service clients that
> > > already exist in some Tempest plugins, for core services only.
> > >
> > > ** Background **
> > > Digging through Neutron testing, it seems like there is currently a
> > > lot of test duplication between neutron-tempest-plugin and Tempest
> > > [1]. Under some circumstances it seems OK to have redundant
> > > testing/parallel testing: "Having potential duplication between
> > > testing is not a big deal especially compared to the alternative of
> > > removing something which is actually providing value and is actively
> > > catching bugs, or blocking incorrect patches from landing" [2].
> >
> > We really need to minimize the test duplication. If there is a test in
> > a tempest plugin for a core service, we do not need to add it to the
> > Tempest repo unless it is an interop requirement. This applies to new
> > tests, so we can avoid duplication in the future; I will write this up
> > in the Tempest reviewer guide. For the existing duplicate tests, as per
> > the bug you mentioned [1], we need to clean up the duplicates so that
> > each test lives in its respective repo (either neutron-tempest-plugin
> > or tempest), as categorized in the etherpad [7]. How many tests are
> > duplicated now? I will plan this as one of the cleanup work items in
> > Stein.
> >
> > > This leads me to the following question: If API test duplication is
> > > OK, what about service client duplication? Patches like [3] and [4]
> > > promote service client duplication with neutron-tempest-plugin. As
> > > far as I can tell, Neutron builds out some of its service clients
> > > dynamically here: [5]. Which includes the segments service client
> > > (proposed as an addition to tempest.lib in [4]) here: [6].
> >
> > Yeah, they are very dynamic in the neutron plugin because of old legacy
> > code, carried over when neutron-tempest-plugin was forked from Tempest
> > as-is. This dynamic generation of service clients is really hard to
> > debug and maintain, and can easily lead to backward-incompatible
> > changes if we make those service clients a stable interface for outside
> > consumption. For those reasons, we fixed this in Tempest 3 years back
> > [8] and made them static, consistent service client methods like the
> > other service clients.
> >
> > > This leads to a situation where if we want to offer RBAC testing for
> > > these APIs (to validate their policy enforcement), we can't really do
> > > so without adding the service client to Tempest, unless we rely on
> > > the neutron-tempest-plugin (for example) in Patrole's .zuul.yaml.
> > >
> > > ** Path Forward **
> > > Option #1: For the core services, most service clients should live in
> > > tempest.lib for standardization/governance around documentation and
> > > stability for those clients. Service client duplication should be
> > > minimized as much as possible. API testing related to some service
> > > clients, though, should remain in the Tempest plugins.
> > >
> > > Option #2: Proceed with service client duplication, either by adding
> > > the service client to Tempest (or, as yet another alternative,
> > > Patrole). This leads to maintenance overhead: we have to maintain
> > > service clients in the plugins and in Tempest itself.
> > >
> > > Option #3: Don't offer RBAC testing in the Patrole plugin for those
> > > APIs.
> >
> > We need to share the service clients among Tempest plugins, and each
> > service client shared across repos has to be declared a stable
> > interface, like Tempest does. The idea here is that service clients
> > live in the repo where their original tests were added or are going to
> > be added. For example, in the case of neutron-tempest-plugin, if the
> > rbac-policy API tests are in neutron then its service client needs to
> > be owned by neutron-tempest-plugin; the rbac-policy service client can
> > then be consumed by Patrole. It is the same for the congress tempest
> > plugin, which consumes the mistral service client; I recommended the
> > same in that thread: use the service client from Mistral, and have
> > Mistral make it a stable interface [9], which is being done in
> > congress [10].
> >
> > Here are the general recommendations for Tempest plugins regarding
> > service clients:
> > - Tempest plugins should make their service clients stable interfaces,
> > which gives 2 advantages:
>
> In this case we should also

Re: [openstack-dev] [all][election] Nominations for OpenStack PTLs (Project Team Leads) are now open

2018-07-24 Thread Jeremy Stanley
On 2018-07-25 08:54:51 +0900 (+0900), Emmet Hikory wrote:
[...]
> All nominations must be submitted as a text file to the openstack/election
> repository as explained at
> http://governance.openstack.org/election/#how-to-submit-your-candidacy
> 
> Please make sure to follow the new candidacy file naming convention:
> $cycle_name/$project_name/$ircname.txt.
[...]

The directions on the Web page are correct, but it looks like we
need to update our E-mail template to reflect last cycle's change
from $ircname to $email_address.

Just to be clear, the candidacy filename should be an E-mail address
you use both with your Gerrit account and your OpenStack Foundation
Individual Member profile since we'll use it both to confirm you
have a qualifying change merged to a relevant deliverable repository
and that you have an active foundation membership.
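
For example, following that convention a Stein candidacy for Kolla would be
proposed as a file named something like (address hypothetical):

    stein/kolla/jane.doe@example.com.txt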
-- 
Jeremy Stanley




Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-24 Thread MONTEIRO, FELIPE C
Please see comments inline.

> On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C wrote:
> > Hi,
> >
> > ** Intention **
> > Intention is to expand Patrole testing to some service clients that
> > already exist in some Tempest plugins, for core services only.
> >
> > ** Background **
> > Digging through Neutron testing, it seems like there is currently a lot
> > of test duplication between neutron-tempest-plugin and Tempest [1].
> > Under some circumstances it seems OK to have redundant testing/parallel
> > testing: "Having potential duplication between testing is not a big
> > deal especially compared to the alternative of removing something which
> > is actually providing value and is actively catching bugs, or blocking
> > incorrect patches from landing" [2].
>
> We really need to minimize the test duplication. If there is a test in a
> tempest plugin for a core service, we do not need to add it to the
> Tempest repo unless it is an interop requirement. This applies to new
> tests, so we can avoid duplication in the future; I will write this up in
> the Tempest reviewer guide. For the existing duplicate tests, as per the
> bug you mentioned [1], we need to clean up the duplicates so that each
> test lives in its respective repo (either neutron-tempest-plugin or
> tempest), as categorized in the etherpad [7]. How many tests are
> duplicated now? I will plan this as one of the cleanup work items in
> Stein.
>
> > This leads me to the following question: If API test duplication is OK,
> > what about service client duplication? Patches like [3] and [4] promote
> > service client duplication with neutron-tempest-plugin. As far as I can
> > tell, Neutron builds out some of its service clients dynamically here:
> > [5]. Which includes the segments service client (proposed as an
> > addition to tempest.lib in [4]) here: [6].
>
> Yeah, they are very dynamic in the neutron plugin because of old legacy
> code, carried over when neutron-tempest-plugin was forked from Tempest
> as-is. This dynamic generation of service clients is really hard to debug
> and maintain, and can easily lead to backward-incompatible changes if we
> make those service clients a stable interface for outside consumption.
> For those reasons, we fixed this in Tempest 3 years back [8] and made
> them static, consistent service client methods like the other service
> clients.
>
> > This leads to a situation where if we want to offer RBAC testing for
> > these APIs (to validate their policy enforcement), we can't really do
> > so without adding the service client to Tempest, unless we rely on the
> > neutron-tempest-plugin (for example) in Patrole's .zuul.yaml.
> >
> > ** Path Forward **
> > Option #1: For the core services, most service clients should live in
> > tempest.lib for standardization/governance around documentation and
> > stability for those clients. Service client duplication should be
> > minimized as much as possible. API testing related to some service
> > clients, though, should remain in the Tempest plugins.
> >
> > Option #2: Proceed with service client duplication, either by adding
> > the service client to Tempest (or, as yet another alternative,
> > Patrole). This leads to maintenance overhead: we have to maintain
> > service clients in the plugins and in Tempest itself.
> >
> > Option #3: Don't offer RBAC testing in the Patrole plugin for those
> > APIs.
>
> We need to share the service clients among Tempest plugins, and each
> service client shared across repos has to be declared a stable interface,
> like Tempest does. The idea here is that service clients live in the repo
> where their original tests were added or are going to be added. For
> example, in the case of neutron-tempest-plugin, if the rbac-policy API
> tests are in neutron then its service client needs to be owned by
> neutron-tempest-plugin; the rbac-policy service client can then be
> consumed by Patrole. It is the same for the congress tempest plugin,
> which consumes the mistral service client; I recommended the same in that
> thread: use the service client from Mistral, and have Mistral make it a
> stable interface [9], which is being done in congress [10].
>
> Here are the general recommendations for Tempest plugins regarding
> service clients:
> - Tempest plugins should make their service clients stable interfaces,
> which gives 2 advantages:

In this case we should also expand the Tempest plugin stable interface
documentation here (which currently gives people a narrow understanding of
what a stable interface means) to include stable interfaces in other
plugins:
https://docs.openstack.org/tempest/latest/plugin.html#stable-tempest-apis-plugins-may-use
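
For context, the mechanism by which a plugin exposes its service clients to
Tempest (and hence to other consumers such as Patrole) is the plugin
interface's get_service_clients() method. A minimal sketch, with the module
path and client name as hypothetical placeholders (the plugin's other
required methods are omitted):

    from tempest.test_discover import plugins

    class NeutronTempestPlugin(plugins.TempestPlugin):

        def get_service_clients(self):
            # Register this plugin's clients with Tempest's service client
            # registry so that other test projects can load them.
            return [{
                'name': 'neutron_plugin',
                'service_version': 'neutron_plugin.v2',
                'module_path': 'neutron_tempest_plugin.services.network.json',
                'client_names': ['SegmentsClient'],
            }]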
 

> 1. By this you make sure that you are not allowing the API calling
> interface (service clients) to change, which indirectly means you are not
> allowing the APIs to change. This makes your tempest plugin testing more
> reliable.
>
> 2. Your service clients can be used in other Tempest

[openstack-dev] [all][election] Nominations for OpenStack PTLs (Project Team Leads) are now open

2018-07-24 Thread Emmet Hikory
Nominations for OpenStack PTLs (Project Team Leads) are now open and will
remain open until July 31st, 2018 23:45 UTC.  This term is expected to be
slightly longer than usual, as the release cycle is expected to adjust to
match the Summit schedule.

All nominations must be submitted as a text file to the openstack/election
repository as explained at
http://governance.openstack.org/election/#how-to-submit-your-candidacy

Please make sure to follow the new candidacy file naming convention:
$cycle_name/$project_name/$ircname.txt.

In order to be an eligible candidate (and be allowed to vote) in a given
PTL election, you need to have contributed to the corresponding team[0]
during the Queens-Rocky timeframe (February 5th, 2018 00:00 UTC to
July 24th, 2018 00:00 UTC). You must also be an OpenStack Foundation
Individual Member in good standing. To check the status of your membership,
see your foundation member profile [2].

Additional information about the nomination process can be found here:
https://governance.openstack.org/election/

Shortly after election officials approve candidates, they will be listed
here: https://governance.openstack.org/election/#stein-ptl-candidates

The electorate is requested to confirm their email address in gerrit[1],
prior to July 24th, 2018 midnight UTC so that the emailed ballots are
mailed to the correct email address. This email address should match
that which was provided in your foundation member profile[2] as well.

Happy running,

[0] https://governance.openstack.org/tc/reference/projects/
[1] https://review.openstack.org/#/settings/contact
[2] https://www.openstack.org/profile/

-- 
Emmet HIKORY



[openstack-dev] [tripleo] The Weekly Owl - 26th Edition

2018-07-24 Thread Emilien Macchi
Welcome to the twenty-sixth edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-July/132301.html

+-----------------------+
| General announcements |
+-----------------------+

+--> Rocky Milestone 3 is this week. The team should focus on
stabilization, bug fixing, and testing, so we can make our Rocky release
more awesome.
+--> Reminder about PTG etherpad, feel free to propose topics:
https://etherpad.openstack.org/p/tripleo-ptg-stein
+--> PTL elections are open! If you want to be the next TripleO PTL, it's
the right time to send your candidacy *now*!

+------------------------+
| Continuous Integration |
+------------------------+

+--> Sprint theme: migration to Zuul v3 (More on
https://trello.com/c/vyWXcKOB/841-sprint-16-goals)
+--> Sagi is the rover and Chandan is the ruck. Please report any CI
issues to them.
+--> Promotion on master is 4 days, 0 days on Queens, 2 days on Pike, and
0 days on Ocata.
+--> RDO Third Party jobs are currently down:
https://tree.taiga.io/project/morucci-software-factory/issue/1560
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+----------+
| Upgrades |
+----------+

+--> Progress on work for updates/upgrades with external installers:
https://review.openstack.org/#/q/status:open+branch:master+topic:external-update-upgrade
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+------------+
| Containers |
+------------+

+--> Lots of testing around the containerized undercloud; please let us
know about any problems.
+--> Image prepare via workflow is still a work in progress.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+-----------------+
| config-download |
+-----------------+

+--> UI integration needs review.
+--> Bug with failure listing is in progress:
https://bugs.launchpad.net/tripleo/+bug/1779093
+--> More:
https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+-------------+
| Integration |
+-------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+--------+
| UI/CLI |
+--------+

+--> Major Network Configuration patches landed! Congrats team!
+--> Config-download patches are being reviewed and a lot of testing is
going on.
+--> The team is working on a Tempest Plugin for TripleO UI:
https://review.openstack.org/#/c/575730/
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+-------------+
| Validations |
+-------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+------------+
| Networking |
+------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+-----------+
| Workflows |
+-----------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+----------+
| Security |
+----------+

+--> Working on Secrets management.
+--> Last meeting notes:
http://eavesdrop.openstack.org/meetings/security_squad/2018/security_squad.2018-07-18-12.07.html
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+----------+
| Owl fact |
+----------+
Owls feed the strongest babies first.
As harsh as it sounds, the parents always feed the oldest and strongest
owlet before its sibling. This means that if food is scarce, the youngest
chicks will starve. After an owlet leaves the nest, it often lives nearby
in the same tree, and its parents still bring it food. If it can survive
the first winter on its own, its chances of survival are good.

Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi


[openstack-dev] Lots of slow tests timing out jobs

2018-07-24 Thread Matt Riedemann
While going through our uncategorized gate failures [1] I found that we 
have a lot of jobs failing (161 in 7 days) due to the tempest run timing 
out [2]. I originally thought it was just the networking scenario tests, 
but I was able to identify a handful of API tests that are also taking
nearly 3 minutes each, which suggests they should be moved to scenario
tests and/or marked slow so they can be run in a dedicated tempest-slow job.


I'm not sure how to get the history on the longest-running tests on 
average to determine where to start drilling down on the worst 
offenders, but it seems like an audit is in order.
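
For reference, tagging a test for the dedicated slow job is just an
attribute decorator in tempest; a minimal sketch (the test name here is
hypothetical):

    from tempest.lib import decorators

    @decorators.attr(type='slow')
    def test_shelve_unshelve_server(self):
        # Tests tagged 'slow' are excluded from the main integrated runs
        # and picked up by the tempest-slow job instead.
        ...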


[1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html
[2] https://bugs.launchpad.net/tempest/+bug/1783405

--

Thanks,

Matt



Re: [openstack-dev] [neutron] Please use neutron-lib 1.18.0 for Rocky

2018-07-24 Thread Boden Russell
On 7/23/18 9:46 PM, Sangho Shin wrote:
> It applies also to the networking- projects. Right?

Yes. It should apply to any project that's using/depending-on
neutron/master today.

Note that I "think" the neutron-lib version required by neutron will
trump the project's required version anyway, but it would be ideal if
all such projects required the same/proper version.
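
For example, an affected project would bump its own requirements.txt entry
to something like the following (a sketch; the coordinated minimum lives in
openstack/requirements):

    neutron-lib>=1.18.0  # Apache-2.0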



Re: [openstack-dev] [infra][nova] Running NFV tests in CI

2018-07-24 Thread Chris Friesen

On 07/24/2018 12:47 PM, Clark Boylan wrote:
> Can you get by with qemu or is nested virt required?

Pretty sure that nested virt is needed in order to test CPU pinning.

> As for hugepages, I've done a quick survey of cpuinfo across our clouds
> and all seem to have pse available but not all have pdpe1gb available.
> Are you using 1GB hugepages?

If we want to test nova's handling of 1G hugepages then I think we'd need
pdpe1gb.
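
For reference, whether a given node advertises these CPU flags can be
checked with something like:

    grep -E -o 'pse|pdpe1gb' /proc/cpuinfo | sort -u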

Chris



Re: [openstack-dev] [infra][nova] Running NFV tests in CI

2018-07-24 Thread Clark Boylan
On Tue, Jul 24, 2018, at 10:21 AM, Artom Lifshitz wrote:
> On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan  wrote:
> > On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote:
> >> Hey all,
> >>
> >> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
> >>
> >> Intel has their NFV tests tempest plugin [1] and manages a third party
> >> CI for Nova. Two of the cores on that project (Stephen Finucane and
> >> Sean Mooney) have now moved to Red Hat, but the point still stands
> >> that there's a need and a use case for testing things like NUMA
> >> topologies, CPU pinning and hugepages.
> >>
> >> At Red Hat, we also have a similar tempest plugin project [2] that we
> >> use for downstream whitebox testing. The scope is a bit bigger than
> >> just NFV, but the main use case is still testing NFV code in an
> >> automated way.
> >>
> >> Given that there's a clear need for this sort of whitebox testing, I
> >> would like to humbly request a handful of nodes (in the 3 to 5 range)
> >> from infra to run an "official" Nova NFV CI. The code doing the
> >> testing would initially be the current Intel plugin, but we could have
> >> a separate discussion about keeping "Intel" in the name or forking
> >> and/or renaming it to something more vendor-neutral.
> >
> > The way you request nodes from Infra is through your Zuul configuration. 
> > Add jobs to a project to run tests on the node labels that you want.
> 
> Aha, thanks, I'll look into that. I was coming from a place of
> complete ignorance about infra.
> >
> > I'm guessing this process doesn't work for NFV tests because you have 
> > specific hardware requirements that are not met by our current VM resources?
> > If that is the case it would probably be best to start by documenting what 
> > is required and where the existing VM resources fall
> > short.
> 
> Well, it should be possible to do most of what we'd like with nested
> virt and virtual NUMA topologies, though things like hugepages will
> need host configuration, specifically the kernel boot command [1]. Is
> that possible with the nodes we have?

https://docs.openstack.org/infra/manual/testing.html attempts to give you an
idea of what is currently available via the test environments.

Nested virt has historically been painful because not all clouds support it,
and those that do did not do so in a reliable way (VMs and possibly
hypervisors would crash). This has gotten better recently, as nested virt is
something more people have an interest in getting working, but it is still
hit and miss, particularly as you use newer kernels in guests. I think if we
can continue to work together with our clouds (thank you limestone, OVH, and
vexxhost!) we may be able to work out nested virt that is redundant across
multiple clouds. We will likely need individuals willing to keep caring for
that, though, and to debug problems when the next release of your favorite
distro shows up. Can you get by with qemu or is nested virt required?

As for hugepages, I've done a quick survey of cpuinfo across our clouds and
all seem to have pse available, but not all have pdpe1gb available. Are you
using 1GB hugepages? Keep in mind that the test VMs only have 8GB of memory
total. As for booting with special kernel parameters, you can have your job
make those modifications to the test environment and then reboot it within
the job. There is some Zuul-specific housekeeping that needs to be done post
reboot; we can figure that out if we decide to go down this route. Would
your setup work with 2M hugepages?
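
As a rough sketch of that approach (assuming an Ubuntu-based test node with
GRUB; the page size and count here are arbitrary):

    # add hugepage settings to the kernel command line, then reboot
    echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT hugepagesz=2M hugepages=512"' \
        | sudo tee /etc/default/grub.d/50-hugepages.cfg
    sudo update-grub
    sudo reboot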

> 
> > In general though we operate on top of donated cloud resources, and if 
> > those do not work we will have to identify a source of resources that would 
> > work.
> 
> Right, as always it comes down to resources and money. I believe
> historically Red Hat has been opposed to running an upstream third
> party CI (this is by no means an official Red Hat position, just
> remembering what I think I heard), but I can always see what I can do.
> 
> [1] 
> https://docs.openstack.org/nova/latest/admin/huge-pages.html#enabling-huge-pages-on-the-host




Re: [openstack-dev] [infra][nova] Running NFV tests in CI

2018-07-24 Thread Artom Lifshitz
On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan  wrote:
> On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote:
>> Hey all,
>>
>> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
>>
>> Intel has their NFV tests tempest plugin [1] and manages a third party
>> CI for Nova. Two of the cores on that project (Stephen Finucane and
>> Sean Mooney) have now moved to Red Hat, but the point still stands
>> that there's a need and a use case for testing things like NUMA
>> topologies, CPU pinning and hugepages.
>>
>> At Red Hat, we also have a similar tempest plugin project [2] that we
>> use for downstream whitebox testing. The scope is a bit bigger than
>> just NFV, but the main use case is still testing NFV code in an
>> automated way.
>>
>> Given that there's a clear need for this sort of whitebox testing, I
>> would like to humbly request a handful of nodes (in the 3 to 5 range)
>> from infra to run an "official" Nova NFV CI. The code doing the
>> testing would initially be the current Intel plugin, but we could have
>> a separate discussion about keeping "Intel" in the name or forking
>> and/or renaming it to something more vendor-neutral.
>
> The way you request nodes from Infra is through your Zuul configuration. Add 
> jobs to a project to run tests on the node labels that you want.

Aha, thanks, I'll look into that. I was coming from a place of
complete ignorance about infra.
>
> I'm guessing this process doesn't work for NFV tests because you have 
> specific hardware requirements that are not met by our current VM resources?
> If that is the case it would probably be best to start by documenting what is 
> required and where the existing VM resources fall
> short.

Well, it should be possible to do most of what we'd like with nested
virt and virtual NUMA topologies, though things like hugepages will
need host configuration, specifically the kernel boot command [1]. Is
that possible with the nodes we have?

> In general though we operate on top of donated cloud resources, and if those 
> do not work we will have to identify a source of resources that would work.

Right, as always it comes down to resources and money. I believe
historically Red Hat has been opposed to running an upstream third
party CI (this is by no means an official Red Hat position, just
remembering what I think I heard), but I can always see what I can do.

[1] 
https://docs.openstack.org/nova/latest/admin/huge-pages.html#enabling-huge-pages-on-the-host



Re: [openstack-dev] [infra][nova] Running NFV tests in CI

2018-07-24 Thread Clark Boylan
On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote:
> Hey all,
> 
> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
> 
> Intel has their NFV tests tempest plugin [1] and manages a third party
> CI for Nova. Two of the cores on that project (Stephen Finucane and
> Sean Mooney) have now moved to Red Hat, but the point still stands
> that there's a need and a use case for testing things like NUMA
> topologies, CPU pinning and hugepages.
> 
> At Red Hat, we also have a similar tempest plugin project [2] that we
> use for downstream whitebox testing. The scope is a bit bigger than
> just NFV, but the main use case is still testing NFV code in an
> automated way.
> 
> Given that there's a clear need for this sort of whitebox testing, I
> would like to humbly request a handful of nodes (in the 3 to 5 range)
> from infra to run an "official" Nova NFV CI. The code doing the
> testing would initially be the current Intel plugin, but we could have
> a separate discussion about keeping "Intel" in the name or forking
> and/or renaming it to something more vendor-neutral.

The way you request nodes from Infra is through your Zuul configuration. Add 
jobs to a project to run tests on the node labels that you want.
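
For example, a job definition selects node labels via its nodeset; a minimal
sketch (the job name and label here are placeholders):

    - job:
        name: nova-nfv-whitebox
        parent: devstack-tempest
        nodeset:
          nodes:
            - name: controller
              label: ubuntu-xenial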

I'm guessing this process doesn't work for NFV tests because you have specific 
hardware requirements that are not met by our current VM resources? If that is 
the case it would probably be best to start by documenting what is required and 
where the existing VM resources fall short. In general though we operate on top 
of donated cloud resources, and if those do not work we will have to identify a 
source of resources that would work.

> 
> I won't be at PTG (conflict with personal travel), so I'm kindly
> asking Stephen and Sean to represent this idea in Denver.
> 
> Cheers!
> 
> [1] https://github.com/openstack/intel-nfv-ci-tests
> [2] 
> https://review.rdoproject.org/r/#/admin/projects/openstack/whitebox-tempest-plugin



Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-07-24 Thread Spyros Trigazis
Hello list,

After trial and error, this is the new layout of the magnum meetings plus
office hours [0]:

1. The meeting moves to Tuesdays 2100 UTC, starting today
2.1 Office hours for strigazi: Tuesdays, 1300 to 1400 UTC
2.2 Office hours for flwang: Wednesdays, 2200 to 2300 UTC

Cheers,
Spyros

[0] https://wiki.openstack.org/wiki/Meetings/Containers


On Tue, 26 Jun 2018 at 04:46, Fei Long Wang  wrote:

> Hi Spyros,
>
> Thanks for posting the discussion output. I'm not sure I can follow the
> idea of simplifying the CNI configuration. Though we have both calico and
> flannel for k8s, if we put both of them into a single config script, the
> script could be very complex. That's why I think we should define some
> naming and logging rules/policies for those scripts, for long-term
> maintenance, to make our life easier. Thoughts?
>
> On 25/06/18 19:20, Spyros Trigazis wrote:
>
> Hello again,
>
> After Thursday's meeting I want to summarize what we discussed and add
> some pointers.
>
>
>    - Work on using the out-of-tree cloud provider and move to the new
>      model of defining it
>      https://storyboard.openstack.org/#!/story/1762743
>      https://review.openstack.org/#/c/577477/
>    - Configure kubelet and kube-proxy on master nodes
>      This story on the master node label can be extended
>      https://storyboard.openstack.org/#!/story/2002618
>      or we can add a new one
>    - Simplify CNI configuration: we have calico and flannel, and ideally
>      we should have a single config script for each one. We could move
>      flannel to the kubernetes-hosted version that uses kubernetes objects
>      for storage. (It is the recommended way by flannel and how it is done
>      with kubeadm.)
>    - magnum support in gophercloud
>      https://github.com/gophercloud/gophercloud/issues/1003
>    - *needs discussion* Update the version of the heat templates (pike or
>      queens). This needs its own thread.
>    - Post-deployment scripts for clusters: I have had this for some time,
>      but doing it in heat is slightly (not a lot) complicated. Most magnum
>      users favor the simpler solution of passing a url of a manifest or
>      script to the cluster (at least let's add a sha512sum).
>    - Simplify the addition of custom labels/parameters. To avoid patching
>      magnum, it would be more ops-friendly to have a generic field of
>      custom parameters.
>
> Not discussed in the last meeting but we should in the next ones:
>
>    - Allow cluster scaling from different users in the same project
>      https://storyboard.openstack.org/#!/story/2002648
>    - Add the option to remove a node from a resource group for swarm
>      clusters, like in kubernetes
>      https://storyboard.openstack.org/#!/story/2002677
>
> Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday
> 1700UTC.
>
> You can always consult this page [1] for future meetings.
>
> Cheers,
> Spyros
>
> [1] https://wiki.openstack.org/wiki/Meetings/Containers
>
> On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis  wrote:
>
>> Hello list,
>>
>> We are going to have a second weekly meeting for magnum for 3 weeks
>> as a test to reach out to contributors in the Americas.
>>
>> You can join us tomorrow (or today for some?) at 1700UTC in
>> #openstack-containers .
>>
>> Cheers,
>> Spyros
>>
>>
>>
>
>
>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
>


[openstack-dev] [infra][nova] Running NFV tests in CI

2018-07-24 Thread Artom Lifshitz
Hey all,

tl;dr Humbly requesting a handful of nodes to run NFV tests in CI

Intel has their NFV tests tempest plugin [1] and manages a third party
CI for Nova. Two of the cores on that project (Stephen Finucane and
Sean Mooney) have now moved to Red Hat, but the point still stands
that there's a need and a use case for testing things like NUMA
topologies, CPU pinning and hugepages.

At Red Hat, we also have a similar tempest plugin project [2] that we
use for downstream whitebox testing. The scope is a bit bigger than
just NFV, but the main use case is still testing NFV code in an
automated way.

Given that there's a clear need for this sort of whitebox testing, I
would like to humbly request a handful of nodes (in the 3 to 5 range)
from infra to run an "official" Nova NFV CI. The code doing the
testing would initially be the current Intel plugin, but we could have
a separate discussion about keeping "Intel" in the name or forking
and/or renaming it to something more vendor-neutral.

I won't be at PTG (conflict with personal travel), so I'm kindly
asking Stephen and Sean to represent this idea in Denver.

Cheers!

[1] https://github.com/openstack/intel-nfv-ci-tests
[2] 
https://review.rdoproject.org/r/#/admin/projects/openstack/whitebox-tempest-plugin



Re: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions

2018-07-24 Thread Matt Young
I've captured this as a point of discussion for the TripleO CI Team's
planning session(s).

Matt
On Tue, Jul 24, 2018 at 4:59 AM Bogdan Dobrelya  wrote:
>
> On 7/23/18 9:33 PM, Emilien Macchi wrote:
> > But it seems like, starting with Ansible 2.5 (what we already have in
> > Rocky and beyond), we should encourage the usage of ansible_facts
> > dictionary.
> > Example:
> > var=hostvars[inventory_hostname].ansible_facts.hostname
> > instead of:
> > var=ansible_hostname
>
> If that means rewriting all ansible_foo things around the globe, we'd
> have a huge scope for changes. Those are used literally everywhere. Here
> is only a search for tripleo-quickstart [0]
>
> [0]
> http://codesearch.openstack.org/?q=%5B%5C.%27%22%5Dansible_%5CS%2B%5B%5E%3A%5D=nope=roles=tripleo-quickstart
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [kuryr] PTL on vacation

2018-07-24 Thread Daniel Mellado Area
Hi all,

I'll be on vacation until July 31st, without easy access to email and
computer. During that time Antoni Segura Puimedon (apuimedo) will be acting
as my deputy (thanks in advance!)

Best!

Daniel


[openstack-dev] [openstack-dev][kuryr][ptl] PTL Candidacy for Kuryr - Stein

2018-07-24 Thread Daniel Mellado Area
Dear all,

I'd like to announce my candidacy for Kuryr's PTL for the Stein cycle.

In case you don't know me, I was fortunate to work as PTL on Kuryr and its
related projects for the Rocky cycle, where I'm happy to say that we've
achieved most of the milestones we set. I would be honoured to continue
doing this for the next six months.

During Stein, I would like to focus on some of these topics. We've also
started efforts in Rocky which I'd like to lead to completion.

* Network Policy Support: This feature maps K8s network policies onto
  Neutron security groups, and it's something I'd personally like to lead
  to completion.

* Neutron pooling resource speedups: Tied closely to the previous feature,
  this will be needed as a way to further improve the speed at which
  Neutron handles its resources.

* Operator Support

* Octavia providers:

  Native OVN Layer 4 load balancing for services
  Amphora provider for Routes

* Native router support via Octavia

* Multi device/net support

Also, I'd like to coordinate finishing some features that might not make
it for the Rocky cycle, such as SRIOV and DPDK support, and adopt the
usage of CRDs within the project.

Outside of these key areas, my priority is also helping the community by
acting as an interface for the cross-project sessions and further
improving our presence in initiatives such as OpenLab, OPNFV and so on.

Thanks a lot!

Daniel Mellado (dmellado)


Re: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits

2018-07-24 Thread Sergii Golovatiuk
++1

On Mon, Jul 23, 2018 at 6:50 PM, Jiří Stránský  wrote:
> +1!
>
>
> On 20.7.2018 10:07, Carlos Camacho Gonzalez wrote:
>>
>> Hi!!!
>>
>> I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the
>> TripleO upgrades bits. He shows a constant and active involvement in
>> improving and fixing our updates/upgrades workflows, he helps also trying
>> to develop/improve/fix our upstream support for testing the
>> updates/upgrades.
>>
>> Please vote -1/+1, and consider this my +1 vote :)
>>
>> [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com
>> [2]: http://stackalytics.com/?release=all=commits_id=jfrancoa
>>
>> Cheers,
>> Carlos.
>>
>>
>>
>>
>
>



-- 
Best Regards,
Sergii Golovatiuk



[openstack-dev] [Ironic][Octavia][Congress] The usage of Neutron API

2018-07-24 Thread Hongbin Lu
Hi folks,

Neutron has landed a patch to enable strict validation of query parameters
when listing resources [1]. I tested the Neutron change in your project's
gate, and the results suggest that your projects need the fixes [2][3][4]
to keep the gate functioning.
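
As a hypothetical illustration of the change in behaviour, a list call with
an unknown filter used to be silently ignored and will now be rejected:

    GET /v2.0/networks?not-a-valid-filter=x
    # before [1]: filter ignored, 200 OK; after [1]: 400 Bad Request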

Please feel free to reach out if there is any question or concern.

[1] https://review.openstack.org/#/c/574907/
[2] https://review.openstack.org/#/c/583990/
[3] https://review.openstack.org/#/c/584000/
[4] https://review.openstack.org/#/c/584112/

Best regards,
Hongbin



[openstack-dev] OpenStack Summit Berlin - Community Voting Open

2018-07-24 Thread Ashlee Ferguson
Hi everyone,

Session voting is now open for the November 2018 OpenStack Summit in Berlin!

VOTE HERE 

Hurry, voting closes Thursday, July 26 at 11:59pm Pacific Time (Friday, July 27 
at 6:59 UTC).

The Programming Committees will ultimately determine the final schedule. 
Community votes are meant to help inform the decision, but are not considered 
to be the deciding factor. The Programming Committee members exercise judgment 
in their area of expertise and help ensure diversity. View full details of the 
session selection process here.

Continue to visit https://www.openstack.org/summit/berlin-2018 for all
Summit-related information.

REGISTER
Register for the Summit before prices increase in late August!

VISA APPLICATION PROCESS
Make sure to secure your Visa soon. More information about the Visa
application process.

TRAVEL SUPPORT PROGRAM
August 30 is the last day to submit applications. Please submit your
applications by 11:59pm Pacific Time (August 31 at 6:59am UTC).

If you have any questions, please email sum...@openstack.org.

Cheers,
Ashlee


Ashlee Ferguson
OpenStack Foundation
ash...@openstack.org






[openstack-dev] [tc] [all] TC Report 18-30

2018-07-24 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-30.html

Yet another slow week at TC office hours. This is part of the normal
ebb and flow of work, especially with feature freeze looming, but
for some reason it bothers me. It reinforces my fears that the TC is
either not particularly relevant or looking at the wrong things.

Help make sure we are looking at the right things by:

* coming to office hours and telling us what matters
* responding to these reports and the ones that Doug produces
* adding something to the [PTG planning
  etherpad](https://etherpad.openstack.org/p/tc-stein-ptg).

[Last
Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-19.log.html#t2018-07-19T15:07:31)
there was some discussion about forthcoming elections. First up are
PTL elections for Stein. Note that it is quite likely that _if_ (as
far as I can tell there's not much if about it, it is going to
happen, sadly there's not very much transparency on these decisions
and discussions, I wish there were) the Denver PTG is the last
standalone PTG, then the Stein cycle may be longer than normal to
sync up with summit schedules.

[On
Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-20.log.html#t2018-07-20T14:14:12)
there was a bit of discussion on progress towards upgrading to
Mailman 3 and using that as an opportunity to shrink the number of
mailing lists. By having fewer, the hope is that some of the
boundaries between groups within the community will be more
permeable and will help email be the reliable information sharing
mechanism.

[This
morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-24.log.html#t2018-07-24T12:08:03)
there was yet more discussion about differences of opinion and
approach when it comes to accepting projects to be official
OpenStack projects. This is something that will be discussed at the
PTG. It would be helpful if people who care about this could make
their positions known.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [cinder] about block device driver

2018-07-24 Thread Sean McGinnis
On Tue, Jul 24, 2018 at 06:07:24PM +0800, Rambo wrote:
> Hi,all
> 
> 
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver is
> being deprecated, and was eventually removed with the Queens release.
> 
> 
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>  
> 
> 
>  However, I want to use it out of tree, but I don't know how. Can you
> share a doc with me? Thank you very much!
> 

I don't think we have any community documentation on how to use out of tree
drivers, but it's fairly straightforward.

You can just drop that block_device.py file into the cinder/volume/drivers
directory and configure its use in cinder.conf using the same volume_driver
setting as before.
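
For example, the cinder.conf backend section would look roughly like this
(a sketch; the backend name and device paths are placeholders):

    [DEFAULT]
    enabled_backends = blockdev

    [blockdev]
    volume_backend_name = blockdev
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    available_devices = /dev/sdb,/dev/sdc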

I'm not sure if anything has changed since Ocata that would require updates
to the driver, but I would expect most base functionality to still work.
Just a word of warning, though: the driver may need some updates if you
find issues with it.

Sean




Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-24 Thread Graham Hayes

On 23/07/2018 20:22, MONTEIRO, FELIPE C wrote:
> Hi,
>
> ** Intention **
>
> Intention is to expand Patrole testing to some service clients that
> already exist in some Tempest plugins, for core services only.

What exact projects does Patrole consider "core", and how are you making
that decision? Is it a tag, InterOp, or some other criteria?





Re: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff?

2018-07-24 Thread Jean-Philippe Evrard
Sorry about the lack of participation too.

Monthly sounds good.

Regards,
JP

On July 24, 2018 9:34:56 AM UTC, Paul Bourke  wrote:
>Hi James,
>
>Sorry to hear about the lack of participation. I for one am guilty of 
>not taking part, there just seems to be never enough time in the day to
>
>cram in all the moving parts that a project like OpenStack requires.
>
>That being said, this effort is definitely one of the most important to
>
>the project imo, so I'm keen to step up.
>
>Moving to a monthly meeting sounds a good idea, at least till things
>get 
>back on foot. Could you share what the current times / location for the
>
>meeting is?
>
>Cheers,
>-Paul
>
>On 23/07/18 17:01, James Page wrote:
>> Hi All
>> 
>> tl;dr we (the original founders) have not managed to invest the time
>to 
>> get the Upgrades SIG booted - time to hit reboot or time to poweroff?
>> 
>> Since Vancouver, two of the original SIG chairs have stepped down 
>> leaving me in the hot seat with minimal participation from either 
>> deployment projects or operators in the IRC meetings.  In addition
>I've 
>> only been able to make every 3rd IRC meeting, so they have generally
>not 
>> being happening.
>> 
>> I think the current timing is not good for a lot of folk so finding a
>
>> better slot is probably a must-have if the SIG is going to continue -
>
>> and maybe moving to a monthly or bi-weekly schedule rather than the 
>> weekly slot we have now.
>> 
>> In addition I need some willing folk to help with leadership in the 
>> SIG.  If you have an interest and would like to help please let me
>know!
>> 
>> I'd also like to better engage with all deployment projects -
>upgrades 
>> is something that deployment tools should be looking to encapsulate
>as 
>> features, so it would be good to get deployment projects engaged in
>the 
>> SIG with nominated representatives.
>> 
>> Based on the attendance in upgrades sessions in Vancouver and 
>> developer/operator appetite to discuss all things upgrade at said 
>> sessions I'm assuming that there is still interest in having a SIG
>for 
>> Upgrades but I may be wrong!
>> 
>> Thoughts?
>> 
>> James
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-24 Thread Lee Yarwood
On 20-07-18 08:10:37, Erlon Cruz wrote:
> Nice, good to know. Thanks all for the feedback. We will fix that in our
> drivers.

FWIW Nova does not and AFAICT never has called os-force_detach.

We previously used os-terminate_connection with v2 where the connector
was optional. Even then we always provided one, even providing the
destination connector during an evacuation when the source connector
wasn't stashed in connection_info.
 
> @Walter, so, in this case, if Cinder has the connector, it should not need
> to call the driver passing a None object right?

Yeah I don't think this is an issue with v3 given the connector is
stashed with the attachment, so all we require is a reference to the
attachment to clean up the connection during evacuations etc.
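
For reference, the call under discussion is the volume actions API, roughly
(IDs are placeholders; a null connector is exactly what the tempest test
sends):

    POST /v3/{project_id}/volumes/{volume_id}/action
    {
        "os-force_detach": {
            "attachment_id": "d8777f54-84cf-4809-a679-468ffed56cf1",
            "connector": null
        }
    }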

Lee
 
> Erlon
> 
> Em qua, 18 de jul de 2018 às 12:56, Walter Boring 
> escreveu:
> 
> > The whole purpose of this test is to simulate the case where Nova doesn't
> > know where the vm is anymore,
> > or may simply not exist, but we need to clean up the cinder side of
> > things.   That being said, with the new
> > attach API, the connector is being saved in the cinder database for each
> > volume attachment.
> >
> > Walt
> >
> > On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> > wrote:
> >
> >> On 17/07, Sean McGinnis wrote:
> >> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> >> > > Hi Cinder and Nova folks,
> >> > >
> >> > > Working on some tests for our drivers, I stumbled upon this tempest
> >> test
> >> > > 'force_detach_volume'
> >> > > that is calling Cinder API passing a 'None' connector. At the time
> >> this was
> >> > > added several CIs
> >> > > went down, and people started discussing whether this
> >> (accepting/sending a
> >> > > None connector)
> >> > > would be the proper behavior for what is expected to a driver to
> >> do[1]. So,
> >> > > some of CIs started
> >> > > just skipping that test[2][3][4] and others implemented fixes that
> >> made the
> >> > > driver to disconnected
> >> > > the volume from all hosts if a None connector was received[5][6][7].
> >> >
> >> > Right, it was determined the correct behavior for this was to
> >> disconnect the
> >> > volume from all hosts. The CIs that are skipping this test should stop
> >> doing so
> >> > (once their drivers are fixed of course).
> >> >
> >> > >
> >> > > While implementing this fix seems to be straightforward, I feel that
> >> just
> >> > > removing the volume
> >> > > from all hosts is not the correct thing to do mainly considering that
> >> we
> >> > > can have multi-attach.
> >> > >
> >> >
> >> > I don't think multiattach makes a difference here. Someone is forcibly
> >> > detaching the volume and not specifying an individual connection. So
> >> based on
> >> > that, Cinder should be removing any connections, whether that is to one
> >> or
> >> > several hosts.
> >> >
> >>
> >> Hi,
> >>
> >> I agree with Sean, drivers should remove all connections for the volume.
> >>
> >> Even without multiattach there are cases where you'll have multiple
> >> connections for the same volume, like in a Live Migration.
> >>
> >> It's also very useful when Nova and Cinder get out of sync and your
> >> volume has leftover connections. In this case if you try to delete the
> >> volume you get a "volume in use" error from some drivers.
> >>
> >> Cheers,
> >> Gorka.
> >>
> >>
> >> > > So, my questions are: What is the best way to fix this problem? Should
> >> > > Cinder API continue to
> >> > > accept detachments with None connectors? If, so, what would be the
> >> effects
> >> > > on other Nova
> >> > > attachments for the same volume? Is there any side effect if the
> >> volume is
> >> > > not multi-attached?
> >> > >
> >> > > Additionally to this thread here, I should bring this topic to
> >> tomorrow's
> >> > > Cinder's meeting,
> >> > > so please join if you have something to share.
> >> > >
> >> >
> >> > +1 - good plan.
> >> >
> >> >
> >> >
> >>
> >>
> >
> >


[openstack-dev] [cinder] about block device driver

2018-07-24 Thread Rambo
Hi,all


In the Cinder repository, I noticed that the BlockDeviceDriver driver is
being deprecated, and was eventually removed with the Queens release.


https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
 


However, I want to use it out of tree, but I don't know how. Can you share
a doc with me? Thank you very much!

Best Regards
Rambo


Re: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff?

2018-07-24 Thread Paul Bourke

Hi James,

Sorry to hear about the lack of participation. I for one am guilty of
not taking part; there just seems to be never enough time in the day to
cram in all the moving parts that a project like OpenStack requires.

That being said, this effort is definitely one of the most important to
the project imo, so I'm keen to step up.

Moving to a monthly meeting sounds like a good idea, at least till things
get back on their feet. Could you share what the current time / location
for the meeting is?

Cheers,
-Paul

On 23/07/18 17:01, James Page wrote:
> Hi All
>
> tl;dr we (the original founders) have not managed to invest the time to
> get the Upgrades SIG booted - time to hit reboot or time to poweroff?
>
> Since Vancouver, two of the original SIG chairs have stepped down,
> leaving me in the hot seat with minimal participation from either
> deployment projects or operators in the IRC meetings. In addition, I've
> only been able to make every 3rd IRC meeting, so they have generally
> not been happening.
>
> I think the current timing is not good for a lot of folk, so finding a
> better slot is probably a must-have if the SIG is going to continue -
> and maybe moving to a monthly or bi-weekly schedule rather than the
> weekly slot we have now.
>
> In addition I need some willing folk to help with leadership in the
> SIG. If you have an interest and would like to help please let me know!
>
> I'd also like to better engage with all deployment projects - upgrades
> is something that deployment tools should be looking to encapsulate as
> features, so it would be good to get deployment projects engaged in the
> SIG with nominated representatives.
>
> Based on the attendance in upgrades sessions in Vancouver and the
> developer/operator appetite to discuss all things upgrade at said
> sessions, I'm assuming that there is still interest in having a SIG for
> Upgrades, but I may be wrong!
>
> Thoughts?
>
> James











Re: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions

2018-07-24 Thread Bogdan Dobrelya

On 7/23/18 9:33 PM, Emilien Macchi wrote:
> But it seems like, starting with Ansible 2.5 (what we already have in
> Rocky and beyond), we should encourage the usage of the ansible_facts
> dictionary.
> Example:
> var=hostvars[inventory_hostname].ansible_facts.hostname
> instead of:
> var=ansible_hostname

If that means rewriting all ansible_foo things around the globe, we'd have
a huge scope for changes. Those are used literally everywhere. Here is only
a search for tripleo-quickstart [0]

[0]
http://codesearch.openstack.org/?q=%5B%5C.%27%22%5Dansible_%5CS%2B%5B%5E%3A%5D=nope=roles=tripleo-quickstart
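
A task-level sketch of the difference (illustrative only):

    - name: old style, via an injected top-level fact variable
      debug:
        msg: "{{ ansible_hostname }}"

    - name: new style, via the ansible_facts dictionary
      debug:
        msg: "{{ ansible_facts['hostname'] }}"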


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



[openstack-dev] [publiccloud-wg]New Meeting Time Starting This Week

2018-07-24 Thread Zhipeng Huang
Hi Folks,

As indicated in https://review.openstack.org/#/c/584389/, PCWG is moving
towards a tick-tock meeting arrangement to better accommodate participants
around the globe.

For even weeks, starting this Wed, we will have a new meeting time at
UTC0700. For odd weeks, we will keep the UTC1400 time slot.

Looking forward to meeting you all in #openstack-publiccloud on Wed!

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado