[openstack-dev] [Zun] Removal inactive cores

2018-03-23 Thread Hongbin Lu
 Hi all,

This is an announcement about change of Zun's core team membership. The
people below were removed from Zun's core reviewer team due to their
inactivity over the last 180 days [1]. This change was voted on by the
existing core team and was unanimously approved.

I would like to thank them for their contributions to the Zun team. They are
welcome to re-join the core team once they become active again in the future.

- Eli Qiao
- Motohiro/Yuanying Otsuka
- Qiming Teng
- Shubham Kumar Sharma
- Sudipta Biswas
- Wenzhi Yu

[1] http://stackalytics.com/report/contribution/zun-group/180

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ops][heat][PTG] Heat PTG Summary

2018-03-23 Thread Zane Bitter

On 15/03/18 04:01, Rico Lin wrote:

Hi Heat devs and ops

It was a great PTG plus SnowpenStack experience. Now Rocky has started. We 
really need all kinds of input and effort to make sure we're heading 
in the right direction.


Here is what we discussed during the PTG:

  * Future strategy for heat-tempest-plugin & functional tests
  * Multi-cloud support
  * Next plan for Heat Dashboard
  * Race conditions for clients updating/deleting stacks
  * Swift Template/file object support
  * heat dashboard needs of clients
  * Resuming after an engine failure
  * Moving SyncPoints from DB to DLM
  * toggle the debug option at runtime
  * remove mox
  * Allow partial success in ASG
  * Client Plugins and OpenStackSDK
  * Global Request Id support
  * Heat KeyStone Credential issue
  * (How we're going to survive on the island)


(No developers were eaten to bring you this summary.)

You can find all Etherpad links at 
https://etherpad.openstack.org/p/heat-rocky-ptg


We tried to document as much as we could (thanks Zane for picking it 
up), including discussions and actions. We will try to target all actions 
in Rocky.
If you would like to give input on any topic (or flag any topic you think 
we missed), please add it to the etherpad (and be kind enough to 
leave a message on the ML or in a meeting so we won't miss it.)


*Use Cases*
If you have any use case for us (what's your use case, what's not 
working, what's working well),

please help us by adding it to https://etherpad.openstack.org/p/heat-usecases


Here are the team photos we took: 
https://www.dropbox.com/sh/dtei3ovfi7z74vo/AADX_s3PXFiC3Fod8Yj_RO4na/Heat?dl=0




--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin






Re: [openstack-dev] [nova] Review runways this cycle

2018-03-23 Thread melanie witt

On Tue, 20 Mar 2018 16:44:57 -0700, Melanie Witt wrote:

As mentioned in the earlier "Rocky PTG summary - miscellaneous topics
from Friday" email, this cycle we're going to experiment with a
"runways" system for focusing review on approved blueprints in
time-boxes. The goal here is to use a bit more structure and process in
order to focus review and complete merging of approved work more quickly
and reliably.

We were thinking of starting the runways process after the spec review
freeze (which is April 19) so that reviewers won't be split between spec
reviews and reviews of work in runways.

The process and instructions are explained in detail on this etherpad,
which will also serve as the place we queue and track blueprints for
runways:

https://etherpad.openstack.org/p/nova-runways-rocky

Please bear with us as this is highly experimental and we will be giving
it a go knowing it's imperfect and adjusting the process iteratively as
we learn from it.


Okay, based on the responses in the discussion at the last nova meeting 
and on this ML thread, I think we have consensus to go ahead and start 
using runways next week after the spec review day, so:


  Spec review day: Tuesday March 27

  Start using runways: Wednesday March 28

Please add your blueprints to the Queue if the requirements explained on 
the etherpad are met. And please ask, in #openstack-nova or on this 
thread, if you have any questions about the process.


We will be moving spec freeze out to r-2 (June 7) to lessen pressure on 
spec review while runways are underway, to get more time to review the 
current queue of approved implementations via runways, and to give 
ourselves the chance to approve more specs along the way if we find 
we're reducing the queue enough by completing blueprints.


Thanks,
-melanie







Re: [openstack-dev] [nova] EC2 cleanup ?

2018-03-23 Thread Ed Leafe
On Mar 23, 2018, at 10:16 AM, Matt Riedemann  wrote:
> 
>> It seems we have an EC2 implementation in the API layer, deprecated since Mitaka; 
>> maybe it's eligible to be removed this cycle?
> 
> That is easier said than done. There have been a couple of related attempts 
> in the past:
> 
> https://review.openstack.org/#/c/266425/
> 
> https://review.openstack.org/#/c/282872/
> 
> I don't remember exactly where those fell down, but it's worth looking at 
> this first before trying to do this again.

If we do, let’s also remove the unnecessary extra directory level in 
nova/api/openstack. There is only one Nova API, so the extra ‘openstack’ level 
is no longer needed.

-- Ed Leafe








[openstack-dev] [keystone] Keystone Team Update - Week of 19 March 2018

2018-03-23 Thread Colleen Murphy
# Keystone Team Update - Week of 19 March 2018

## News

### Spec review meeting

During our Tuesday office hours we had a call to discuss some of our open 
specs. We were able to reduce some of the scope creep that had arisen in the 
application credentials enhancement spec[1], iron out some details in the MFA 
enhancement spec[2], and reaffirm our mission to keep the default roles spec[3] 
as simple as possible for this round of RBAC improvements.

[1] https://review.openstack.org/396331
[2] https://review.openstack.org/553670
[3] https://review.openstack.org/523973

### oslo.limit library created

The oslo.limit repository has been created[4] and an Oslo spec was merged[5] to 
outline the purpose of the new library.

[4] https://review.openstack.org/#/c/550496/
[5] https://review.openstack.org/#/c/552907/

## Open Specs

Search query: https://goo.gl/eyTktx

Since last week, a new spec has been proposed to add a new static catalog 
backend[6]. This is work that was started last cycle but that we still need to 
flesh out properly.

[6] https://review.openstack.org/554320

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged a whopping 6 changes this week. In fairness, a lot of our energy has 
been spent reviewing our awesome spec proposals.

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 36 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots. Among these are a few changes to 
add a lower-constraints job to our repos as part of a plan to eventually stop 
syncing global requirements[7], which we might want to have a quick chat about 
before merging.

[7] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

The next deadline is the Rocky-1 milestone spec proposal freeze.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter



Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-23 Thread Eric Fried
Sundar-

First thought is to simplify by NOT keeping inventory information in
the cyborg db at all.  The provider record in the placement service
already knows the device (the provider ID, which you can look up in the
cyborg db), the host (the root_provider_uuid of the provider representing
the device), and the inventory, and (I hope) you'll be augmenting it with
traits indicating what functions it's capable of.  That way, you'll
always get allocation candidates with devices that *can* load the
desired function; now you just have to engage your weigher to prioritize
the ones that already have it loaded so you can prefer those.

Am I missing something?
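The check-then-decrement race in the quoted scenario can be sketched, along with the usual fix of making the check and the decrement a single atomic step. This is a toy in-memory sketch with hypothetical names, where the lock stands in for a conditional DB update such as "UPDATE ... SET free = free - 1 WHERE free > 0":

```python
import threading


class FunctionInventory:
    """Toy stand-in for a Cyborg-style (host, function, #free units) record."""

    def __init__(self, free_units):
        self._free = free_units
        self._lock = threading.Lock()

    def racy_claim(self):
        # Check-then-act without atomicity: two concurrent requests can
        # both observe free > 0 and both "succeed" when only one unit exists.
        if self._free > 0:
            self._free -= 1
            return True
        return False

    def atomic_claim(self):
        # The check and the decrement happen as one atomic step, so only
        # as many claims succeed as there are free units.
        with self._lock:
            if self._free > 0:
                self._free -= 1
                return True
            return False


inv = FunctionInventory(free_units=1)
assert inv.atomic_claim() is True    # first request gets the only unit
assert inv.atomic_claim() is False   # second request is correctly refused
```

The same effect is what the placement claim gives Nova: whichever component does the claim must be able to perform it atomically against a single source of truth.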

efried

On 03/22/2018 11:27 PM, Nadathur, Sundar wrote:
> Hi all,
>     There seems to be a possibility of a race condition in the
> Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to
> the proposed Cyborg/Nova spec
> 
> for details.)
> 
> Consider the scenario where the flavor specifies a resource class for a
> device type, and also specifies a function (e.g. encrypt) in the extra
> specs. The Nova scheduler would only track the device type as a
> resource, and Cyborg needs to track the availability of functions.
> Further, to keep it simple, say all the functions exist all the time (no
> reprogramming involved).
> 
> To recap, here is the scheduler flow for this case:
> 
>   * A request spec with a flavor comes to Nova conductor/scheduler. The
> flavor has a device type as a resource class, and a function in the
> extra specs.
>   * Placement API returns the list of RPs (compute nodes) which contain
> the requested device types (but not necessarily the function).
>   * Cyborg will provide a custom filter which queries Cyborg DB. This
> needs to check which hosts contain the needed function, and filter
> out the rest.
>   * The scheduler selects one node from the filtered list, and the
> request goes to the compute node.
> 
> For the filter to work, the Cyborg DB needs to maintain a table with
> triples of (host, function type, #free units). The filter checks if a
> given host has one or more free units of the requested function type.
> But, to keep the # free units up to date, Cyborg on the selected compute
> node needs to notify the Cyborg API to decrement the #free units when an
> instance is spawned, and to increment them when resources are released.
> 
> Therein lies the catch: this loop from the compute node to controller is
> susceptible to race conditions. For example, if two simultaneous
> requests each ask for function A, and there is only one unit of that
> available, the Cyborg filter will approve both, both may land on the
> same host, and one will fail. This is because Cyborg on the controller
> does not decrement resource usage due to one request before processing
> the next request.
> 
> This is similar to this previous Nova scheduling issue
> .
> That was solved by having the scheduler claim a resource in Placement
> for the selected node. I don't see an analog for Cyborg, since it would
> not know which node is selected.
> 
> Thanks in advance for suggestions and solutions.
> 
> Regards,
> Sundar


[openstack-dev] [tripleo] TripleO CI Sprint end

2018-03-23 Thread Arx Cruz
Hello,

On March 21 we came to the end of a sprint using our new team structure,
and here are the highlights.

Sprint Review:

Due to the outage in our infra a few weeks ago, we decided to work on
automating all the servers used in our CI. This way, in the case of any
outage, we are able to tear down all the servers and bring them up again
without needing to configure anything manually.

You can see the results of the sprint via https://tinyurl.com/yd8wmqxz

Ruck and Rover

What is Ruck and Rover? One person on our team is designated Ruck and
another Rover. One is responsible for monitoring the CI, checking for
failures, opening bugs, and participating in meetings; this is your focal
point for any CI issues. The other person is responsible for working on
these bugs and fixing problems, while the rest of the team stays focused
on the sprint. For more information about our structure, check [1].

List of bugs that Ruck and Rover were working on:

- https://bugs.launchpad.net/tripleo/+bug/1756892 - dlrnapi promoter:
  promotion of older link > newer link, causing incorrect ocata rdo2 promotion
- https://bugs.launchpad.net/tripleo/+bug/1754036 - fs020, tempest, image
  corrupted after upload to glance (checksum mismatch); MTU values were not
  being passed to UC/OC
- https://bugs.launchpad.net/tripleo/+bug/1755485 - Barbican tempest test
  failing to ssh to cirros image; LP + merge skip list
- https://bugs.launchpad.net/tripleo/+bug/1755891 - OVB based jobs are not
  collecting logs from OC nodes; changes made in past weeks to move log
  collection outside upstream jobs had negative side effects
- https://bugs.launchpad.net/tripleo/+bug/1755865 - BMU job(s) failing on
  installation of missing package (ceph-ansible)
- https://bugs.launchpad.net/tripleo/+bug/1755478 - all BM jobs are not
  able to be hand edited; JJB contains a deprecated element
- https://bugs.launchpad.net/tripleo/+bug/1753580 - newton, cache image
  script is looking in the wrong place

We also have our new Ruck and Rover for this week:

- Ruck: Rafael Folco - rfolco|ruck
- Rover: Arx Cruz - arxcruz|rover

If you have any questions and/or suggestions, please contact us.

[1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html


Re: [openstack-dev] [TripleO] Alternative to empty string for default values in Heat

2018-03-23 Thread Ben Nemec



On 03/23/2018 11:54 AM, Giulio Fidente wrote:

On 03/23/2018 05:43 PM, Wojciech Dec wrote:

Hi All,

I'm converting a few heat service templates that have been working ok
with puppet3 modules to run with Puppet 4, and am wondering if there is
a way to pass an "undefined" default via heat to allow "default" values
(eg params.pp) of the puppet modules to be used?
The previous (puppet 3 working) way of passing an empty string in heat
doesn't work, since Puppet 4 interprets this now as the actual setting.


yaml allows use of ~ to represent null

it looks like in a hiera lookup that is resolved as the "nil" value, not
sure if that is enough to make the default values for a class to apply



Interesting.  That would be simpler than what we've been doing, which is 
to use a Heat conditional to determine whether a particular piece of 
hieradata is populated.  At least that's the method I'm aware of.  The 
workers settings are an example of this: 
https://github.com/openstack/tripleo-heat-templates/blob/c9310097027ed2448f721c7be1f6350ca3117d23/puppet/services/nova-metadata.yaml#L75




[openstack-dev] [sdk] Repo rename complete

2018-03-23 Thread Monty Taylor
The openstack/python-openstacksdk repo has been renamed to 
openstack/openstacksdk.


The following patch:

https://review.openstack.org/#/c/555875

Updates the .gitreview file (and other things) to point at the new repo.

You'll want to update your local git remotes to pull from and submit to 
the correct location. There are git commands you can use - I personally 
just edit the .git/config file in the repo. :)
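For reference, the git-command route looks something like the following. This is a throwaway-repo demo of re-pointing "origin" from the old name to the new one (the demo path is temporary; in a real checkout you would just run the `set-url` line in place):

```shell
# Demo in a throwaway repo: re-point "origin" from the old name to the new.
demo=$(mktemp -d)
git init -q "$demo" && cd "$demo"
git remote add origin https://git.openstack.org/openstack/python-openstacksdk
git remote set-url origin https://git.openstack.org/openstack/openstacksdk
git remote get-url origin   # now prints the openstacksdk URL
```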


Monty

PS. Gerrit will not show lists of openstacksdk reviews until its online 
reindex has completed, which may take a few hours.




Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Doug Hellmann
Excerpts from Stephen Finucane's message of 2018-03-23 17:25:42 +:
> On Fri, 2018-03-23 at 12:23 -0400, Doug Hellmann wrote:
> > Excerpts from Monty Taylor's message of 2018-03-23 08:03:22 -0500:
> > > On 03/22/2018 05:43 AM, Stephen Finucane wrote:
> > > > 
> > > > That's unfortunate. What we really need is a migration path from the
> > > > 'pbr' way of doing things to something else. I see three possible
> > > > avenues at this point in time:
> > > > 
> > > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do 
> > > > similar
> > > >things to 'sphinx-apidoc' but it takes the form of an extension.
> > > >From my brief experiments, the output generated from this is
> > > >radically different and far less comprehensive than what 'sphinx-
> > > >apidoc' generates. However, it supports templating so we could
> > > >probably configure this somehow and add our own special directive
> > > >somewhere like 'openstackdocstheme'
> > > > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time 
> > > > back
> > > >against upstream Sphinx [1]. This essentially does what the PBR
> > > >extension does but moves configuration into 'conf.py'. However, 
> > > > this
> > > >is currently held up as I can't adequately explain the 
> > > > differences
> > > >between this and 'sphinx.ext.autosummary' (there's definite 
> > > > overlap
> > > >but I don't understand 'autosummary' well enough to compare 
> > > > them).
> > > > 3. Modify the upstream jobs that detect the pbr integration and have
> > > >them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
> > > >technically appealing approach as it still leaves us unable to 
> > > > build
> > > >stuff locally and adds yet more "magic" to the gate, but it does 
> > > > let
> > > >us progress.
> > > 
> > > I'd suggest a #4:
> > > 
> > > Take the sphinx.ext.apidoc extension and make it a standalone extension 
> > > people can add to doc/requirements.txt and conf.py. That way we don't 
> > > have to convince the sphinx folks to land it.
> > > 
> > > I'd been thinking for a while "we should just write a sphinx extension 
> > > with the pbr logic in it" - but hadn't gotten around to doing anything 
> > > about it. If you've already written that extension - I think we're in 
> > > potentially great shape!
> > 
> > That also has the benefit that we don't have to wait for a new sphinx
> > release to start using it.
> 
> I can do this. Where will it live? pbr? openstackdocstheme? Somewhere
> else?
> 
> Stephen
> 

I think the idea is to make a new thing. If you put it in the
sphinx-contrib org on github it will be easy for other people to
contribute and use it.

Doug



Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Stephen Finucane
On Fri, 2018-03-23 at 12:23 -0400, Doug Hellmann wrote:
> Excerpts from Monty Taylor's message of 2018-03-23 08:03:22 -0500:
> > On 03/22/2018 05:43 AM, Stephen Finucane wrote:
> > > 
> > > That's unfortunate. What we really need is a migration path from the
> > > 'pbr' way of doing things to something else. I see three possible
> > > avenues at this point in time:
> > > 
> > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do 
> > > similar
> > >things to 'sphinx-apidoc' but it takes the form of an extension.
> > >From my brief experiments, the output generated from this is
> > >radically different and far less comprehensive than what 'sphinx-
> > >apidoc' generates. However, it supports templating so we could
> > >probably configure this somehow and add our own special directive
> > >somewhere like 'openstackdocstheme'
> > > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time 
> > > back
> > >against upstream Sphinx [1]. This essentially does what the PBR
> > >extension does but moves configuration into 'conf.py'. However, 
> > > this
> > >is currently held up as I can't adequately explain the differences
> > >between this and 'sphinx.ext.autosummary' (there's definite overlap
> > >but I don't understand 'autosummary' well enough to compare them).
> > > 3. Modify the upstream jobs that detect the pbr integration and have
> > >them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
> > >technically appealing approach as it still leaves us unable to 
> > > build
> > >stuff locally and adds yet more "magic" to the gate, but it does 
> > > let
> > >us progress.
> > 
> > I'd suggest a #4:
> > 
> > Take the sphinx.ext.apidoc extension and make it a standalone extension 
> > people can add to doc/requirements.txt and conf.py. That way we don't 
> > have to convince the sphinx folks to land it.
> > 
> > I'd been thinking for a while "we should just write a sphinx extension 
> > with the pbr logic in it" - but hadn't gotten around to doing anything 
> > about it. If you've already written that extension - I think we're in 
> > potentially great shape!
> 
> That also has the benefit that we don't have to wait for a new sphinx
> release to start using it.

I can do this. Where will it live? pbr? openstackdocstheme? Somewhere
else?

Stephen



Re: [openstack-dev] [TripleO] Alternative to empty string for default values in Heat

2018-03-23 Thread Giulio Fidente
On 03/23/2018 05:43 PM, Wojciech Dec wrote:
> Hi All,
> 
> I'm converting a few heat service templates that have been working ok
> with puppet3 modules to run with Puppet 4, and am wondering if there is
> a way to pass an "undefined" default via heat to allow "default" values
> (eg params.pp) of the puppet modules to be used?
> The previous (puppet 3 working) way of passing an empty string in heat
> doesn't work, since Puppet 4 interprets this now as the actual setting.

YAML allows the use of ~ to represent null.

It looks like in a hiera lookup that is resolved as the "nil" value; I'm not
sure if that is enough to make the default values for a class apply.
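For example, a parameter default could be written like this (hypothetical template fragment; whether hiera then treats the resulting nil as "use the class default" is exactly the open question above):

```yaml
parameters:
  NovaWorkers:     # hypothetical parameter name
    type: string
    default: ~     # YAML null, instead of the empty string ''
```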

-- 
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [TripleO] Alternative to empty string for default values in Heat

2018-03-23 Thread Wojciech Dec
Hi All,

I'm converting a few heat service templates that have been working ok with
puppet3 modules to run with Puppet 4, and am wondering if there is a way to
pass an "undefined" default via heat to allow "default" values (eg
params.pp) of the puppet modules to be used?
The previous (puppet 3 working) way of passing an empty string in heat
doesn't work, since Puppet 4 interprets this now as the actual setting.

Thanks,
Wojciech.


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2018-03-23 08:03:22 -0500:
> On 03/22/2018 05:43 AM, Stephen Finucane wrote:
> > 
> > That's unfortunate. What we really need is a migration path from the
> > 'pbr' way of doing things to something else. I see three possible
> > avenues at this point in time:
> > 
> > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
> >things to 'sphinx-apidoc' but it takes the form of an extension.
> >From my brief experiments, the output generated from this is
> >radically different and far less comprehensive than what 'sphinx-
> >apidoc' generates. However, it supports templating so we could
> >probably configure this somehow and add our own special directive
> >somewhere like 'openstackdocstheme'
> > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
> >against upstream Sphinx [1]. This essentially does what the PBR
> >extension does but moves configuration into 'conf.py'. However, this
> >is currently held up as I can't adequately explain the differences
> >between this and 'sphinx.ext.autosummary' (there's definite overlap
> >but I don't understand 'autosummary' well enough to compare them).
> > 3. Modify the upstream jobs that detect the pbr integration and have
> >them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
> >technically appealing approach as it still leaves us unable to build
> >stuff locally and adds yet more "magic" to the gate, but it does let
> >us progress.
> 
> I'd suggest a #4:
> 
> Take the sphinx.ext.apidoc extension and make it a standalone extension 
> people can add to doc/requirements.txt and conf.py. That way we don't 
> have to convince the sphinx folks to land it.
> 
> I'd been thinking for a while "we should just write a sphinx extension 
> with the pbr logic in it" - but hadn't gotten around to doing anything 
> about it. If you've already written that extension - I think we're in 
> potentially great shape!

That also has the benefit that we don't have to wait for a new sphinx
release to start using it.

Doug



Re: [openstack-dev] [tripleo] storyboard evaluation

2018-03-23 Thread Emilien Macchi
On Thu, Mar 22, 2018 at 12:11 PM, Kendall Nelson 
wrote:

> Sounds like we have fungi set to run the migration of TripleO bugs with
> the 'ui' tag for tomorrow after he gets done with the ironic migration. So
> excited to have you guys start moving over!
>

Cool, please let us know (ping me as well) when it's done.


> Any idea what squad will want to go next/when they might want to go? No
> rush, I'm curious more than anything.
>

Good question! TBH I don't know yet, we'll see how it goes with UI squad.

Thanks,
-- 
Emilien Macchi


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-23 Thread Matt Riedemann

On 3/21/2018 6:34 AM, 李杰 wrote:
So what should we do then about rebuilding the volume-backed server? Wait 
until Cinder can re-image a volume?


I've added the spec to the 'stuck reviews' section of the nova meeting 
agenda so it can at least get some discussion there next week.


https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting

--

Thanks,

Matt



Re: [openstack-dev] [nova] EC2 cleanup ?

2018-03-23 Thread Matt Riedemann

On 3/22/2018 10:30 PM, Chen CH Ji wrote:
It seems we have an EC2 implementation in the API layer, deprecated since 
Mitaka; maybe it's eligible to be removed this cycle?


That is easier said than done. There have been a couple of related 
attempts in the past:


https://review.openstack.org/#/c/266425/

https://review.openstack.org/#/c/282872/

I don't remember exactly where those fell down, but it's worth looking 
at this first before trying to do this again.


--

Thanks,

Matt



Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-23 Thread Matt Riedemann

On 3/22/2018 10:47 PM, 李杰 wrote:
This is the spec about rebuilding an instance booted from volume; anyone 
who is interested in boot-from-volume can help to review it. Any 
suggestion is welcome. Thank you very much!

The link is here:
Re: the rebuild spec: https://review.openstack.org/#/c/532407


Once again, there are already existing threads about this topic, please 
don't continue to try and start new threads or send new reminders about 
it. You can reply on the existing discussion thread if you have new info.


--

Thanks,

Matt



[openstack-dev] [mistral] [ptl] PTL vacation from March 26 - March 30

2018-03-23 Thread Dougal Matthews
I'll be out for the dates in the subject, so all of next week.

Renat Akhmerov (rakhmerov on IRC) will be standing in for anything that
comes up.

Cheers,
Dougal


[openstack-dev] [nova] [placement] placement update 18-12

2018-03-23 Thread Chris Dent


Another week, another pile of code and specs to write and review.

This week will be a "contract" style update: No new links to code
and specs in the listings sections. Next week will be an "expand",
when there will be. Perhaps this can help make sure that in progress
stuff doesn't get lost in the face of the latest thing? Dunno, worth
trying.

# Most Important

While work has started on some of the already approved specs, there
are still a fair few under review, and a couple yet to be written.
Given the number of specs we've got going it's entirely likely we've
bitten off more than we can chew, but we'll see. Getting specs
landed early makes it easier to get the functionality merged sooner,
so: review some specs.

In active code reviews, the update provider tree and nested
providers in allocation candidates work remains crucial foundations
for nearly everything required on the nova side.

# What's Changed

'member_of' has been added to GET /allocation_candidates making it
possible to do a sort of pre-filter based on aggregate membership.
There are some potential improvements to this, being discussed in an
amendment to the spec:

https://review.openstack.org/#/c/555413/
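For context, the new query parameter looks something like the following (aggregate UUIDs are hypothetical; this sketch only shows how the query string is assembled, with the "in:" prefix meaning "member of any of these aggregates"):

```python
from urllib.parse import urlencode

# Request allocation candidates restricted to providers that are members
# of either of two aggregates.
params = {
    'resources': 'VCPU:1,MEMORY_MB:2048',
    'member_of': 'in:6eae0e75-5bfd-4f33-a657-a15d6e68c4e0,'
                 '26c064e8-5fb7-4c28-bc45-71b90995f344',
}
query = '/allocation_candidates?' + urlencode(params)
```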

Placement service exceptions have been moved to
nova/api/openstack/placement/exception.py

# Questions

[Add yours here?]

# Bugs

* Placement related bugs not yet in progress:  https://goo.gl/TgiPXb
   15, same as last week
* In progress placement bugs: https://goo.gl/vzGGDQ
   13, +2 on last week

# Specs

(There are more than this, but this is a contract week.)

* https://review.openstack.org/#/c/549067/
VMware: place instances on resource pool
(using update_provider_tree)

* https://review.openstack.org/#/c/418393/
Provide error codes for placement API

* https://review.openstack.org/#/c/545057/
mirror nova host aggregates to placement API

* https://review.openstack.org/#/c/552924/
   Proposes NUMA topology with RPs

* https://review.openstack.org/#/c/544683/
   Account for host agg allocation ratio in placement

* https://review.openstack.org/#/c/552927/
   Spec for isolating configuration of placement database

* https://review.openstack.org/#/c/552105/
   Support default allocation ratios

* https://review.openstack.org/#/c/438640/
   Spec on preemptible servers

# Main Themes

## Update Provider Tree

The ability of virt drivers to represent what resource providers
they know about--whether that be numa, or clustered resources--is
supported by the update_provider_tree method. Part of it is done,
but some details remain:

  https://review.openstack.org/#/q/topic:bp/update-provider-tree

There's new stuff in here for the add/remove traits and aggregates
stuff discussed above.
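To make the contract concrete, here is a toy sketch of the idea: the driver reports its resources by mutating a provider tree in place. The class below is a stand-in invented for illustration only; the real interface lives in nova's provider_tree module and the method signature is simplified here.

```python
class FakeProviderTree:
    """Toy stand-in for nova's ProviderTree, purely to illustrate the
    update_provider_tree contract (not the real class)."""
    def __init__(self):
        self.inventories = {}

    def update_inventory(self, name, inventory):
        self.inventories[name] = inventory


def update_provider_tree(provider_tree, nodename):
    # A virt driver describes what it knows about its resources by
    # updating the tree; placement inventory records carry totals,
    # allocation ratios, and reserved amounts.
    provider_tree.update_inventory(nodename, {
        "VCPU": {"total": 8, "allocation_ratio": 16.0, "reserved": 0},
        "MEMORY_MB": {"total": 16384, "allocation_ratio": 1.5,
                      "reserved": 512},
    })


tree = FakeProviderTree()
update_provider_tree(tree, "compute-1")
print(sorted(tree.inventories["compute-1"]))  # -> ['MEMORY_MB', 'VCPU']
```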

## Nested providers in allocation candidates

This is making progress but during review we identified potential
inconsistencies of the semantics of the various filtering
mechanisms. Jay soldiers on at:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates

## Request Filters

A generic mechanism to allow the scheduler to further refine the
query made to /allocation_candidates to account for things like
aggregates.

https://review.openstack.org/#/q/topic:bp/placement-req-filter

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done
"placement side" by mirroring nova host aggregates into placement
aggregates.

  https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

It's part of what will make the req filters above useful.

## Forbidden Traits

A way of expressing "I'd like resources that do _not_ have trait X".
I've started this, but it is mostly just feeling around at this
point:

https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits
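The intended semantics can be sketched in a few lines of Python (illustrative only; the real filtering happens in placement's SQL layer, and the trait names below are just examples):

```python
def filter_candidates(providers, required=(), forbidden=()):
    """Keep providers that have every required trait and none of the
    forbidden ones -- a toy model of forbidden-trait filtering."""
    required, forbidden = set(required), set(forbidden)
    return [name for name, traits in providers.items()
            if required <= traits and not (forbidden & traits)]


providers = {
    "rp1": {"HW_CPU_X86_AVX2", "STORAGE_DISK_SSD"},
    "rp2": {"HW_CPU_X86_AVX2"},
}
print(filter_candidates(providers,
                        required={"HW_CPU_X86_AVX2"},
                        forbidden={"STORAGE_DISK_SSD"}))  # -> ['rp2']
```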

## Consumer Generations

Edleafe will start the ball rolling on this and I (cdent) will be
his virtual pair.

# Extraction

There's now a specless blueprint on which to hang extraction related
changes:

https://blueprints.launchpad.net/nova/+spec/placement-extract

See:

https://review.openstack.org/#/q/topic:bp/placement-extract

for changes in progress, including moving some tests.

A spec was requested to explain the issues surrounding the optional
placement database connection. That's here:

https://review.openstack.org/#/c/552927/

This is _very_ useful when doing experiments for finding the
important boundaries between nova and placement and also has no
impact on a configuration that doesn't use it. Some of those
experiments are in a blog post series ending with

https://anticdent.org/placement-container-playground-5.html

The other major need for the extraction work is creating an
os-resource-classes library. Are there any volunteers for this?

# Other

This is not everything, because this is contract week. That is:
consider reviewing this older stuff and get it out of the way.

Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-23 Thread Ade Lee
The failing tests have been addressed in a dependent patch.  As soon as
that patch merges, we'll merge your patch.

Ade
On Wed, 2018-03-14 at 18:36 -0400, Paul Belanger wrote:
> On Wed, Mar 14, 2018 at 04:44:07PM -0400, Paul Belanger wrote:
> > On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote:
> > > On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com
> > > wrote:
> > > > Hello Paul,
> > > > 
> > > > I am Nam from Barbican team. I would like to notify a problem
> > > > when using fedora-27. 
> > > > 
> > > > Currently, fedora-27 is using mariadb at 10.2.12. But there is
> > > > a bug in this version and it is the main reason for failure
> > > > Barbican database upgrading [1], the bug was fixed at 10.2.13
> > > > [2]. Would you mind updating the version of mariadb before
> > > > removing fedora-26.
> > > > 
> > > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> > > > [2] https://jira.mariadb.org/browse/MDEV-13508 
> > > > 
> > > 
> > > Looking at https://apps.fedoraproject.org/packages/mariadb seems
> > > 10.2.13 has
> > > already been updated. Let me recheck the patch and see if it will
> > > use the newer
> > > version.
> > > 
> > 
> > Okay, it looks like our AFS mirrors for fedora are out of sync,
> > I've proposed a
> > patch to fix that[3]. Once landed, I'll recheck the job.
> > 
> 
> Okay, database looks to be fixed, but there are tests failing[4].
> I'll defer
> back to you to continue work on the migration.
> 
> [4] http://logs.openstack.org/20/547120/2/check/barbican-dogtag-devst
> ack-functional-fedora-27/4cd64e0/job-output.txt.gz#_2018-03-
> 14_22_29_49_400822
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Does Cell v2 support for muti-cell deployment in Pike?

2018-03-23 Thread Dan Smith
> Does Cell v2 support for multi-cell deployment in pike? Is there any
> good document about the deployment?

In the release notes of Pike:

  https://docs.openstack.org/releasenotes/nova/pike.html

is this under 16.0.0 Prelude:

  Nova now supports a Cells v2 multi-cell deployment. The default
  deployment is a single cell. There are known limitations with multiple
  cells. Refer to the Cells v2 Layout page for more information about
  deploying multiple cells.

There are some links to documentation in that paragraph which should be
helpful.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Monty Taylor

On 03/22/2018 05:43 AM, Stephen Finucane wrote:

On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote:

On Wed, Mar 21, 2018 at 10:49:02AM +, Stephen Finucane wrote:

tl;dr: Make sure you stop using pbr's autodoc feature before converting
them to the new PTI for docs.

[snip]

I've gone through and proposed a couple of reverts to fix projects
we've already broken. However, going forward, there are two things
people should do to prevent issues like this popping up.


Unfortunately, just reverting the changes will not work. That may fix
things locally, but they will not pass in the gate by going back to the old way.

Any cases of this will have to actually be updated to not use the unsupported
pieces you point out. But the doc builds will still need to be done the way
they are now, as that is what the PTI requires at this point.


That's unfortunate. What we really need is a migration path from the
'pbr' way of doing things to something else. I see three possible
avenues at this point in time:

1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
   things to 'sphinx-apidoc' but it takes the form of an extension.
   From my brief experiments, the output generated from this is
   radically different and far less comprehensive than what 'sphinx-
   apidoc' generates. However, it supports templating so we could
   probably configure this somehow and add our own special directive
   somewhere like 'openstackdocstheme'
2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
   against upstream Sphinx [1]. This essentially does what the PBR
   extension does but moves configuration into 'conf.py'. However, this
   is currently held up as I can't adequately explain the differences
   between this and 'sphinx.ext.autosummary' (there's definite overlap
   but I don't understand 'autosummary' well enough to compare them).
3. Modify the upstream jobs that detect the pbr integration and have
   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
   technically appealing approach as it still leaves us unable to build
   stuff locally and adds yet more "magic" to the gate, but it does let
   us progress.
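For option 1, the conf.py changes are small. A hypothetical fragment might look like this (the option names are real Sphinx settings, but treat the exact configuration as a sketch rather than a recommended setup):

```python
# conf.py fragment illustrating option 1: enable autosummary so that
# .. autosummary:: directives generate stub pages at build time.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.autosummary",
]
autosummary_generate = True  # generate stub .rst files automatically
```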


I'd suggest a #4:

Take the sphinx.ext.apidoc extension and make it a standalone extension 
people can add to doc/requirements.txt and conf.py. That way we don't 
have to convince the sphinx folks to land it.


I'd been thinking for a while "we should just write a sphinx extension 
with the pbr logic in it" - but hadn't gotten around to doing anything 
about it. If you've already written that extension - I think we're in 
potentially great shape!



Try as I may, I don't really have the bandwidth to work on this for
another few weeks so I'd appreciate help from anyone with sufficient
Sphinx-fu to come up with a long-term solution to this issue.





Cheers,
Stephen

[1] https://github.com/sphinx-doc/sphinx/pull/4101/files


  * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections
from 'setup.cfg' in any patches that aim to convert a project to use
the new PTI. This will ensure the gate catches any potential
issues.
  * In addition, if your project uses the pbr autodoc feature, you
should either (a) remove these docs from your documentation tree or
(b) migrate to something else like the 'sphinx.ext.autosummary'
extension [5]. I aim to post instructions on the latter shortly.
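For the first point, the sections to drop look like this (the option values are representative examples, not taken from any particular project):

```ini
# Remove sections like these from setup.cfg when converting to the new PTI:
[build_sphinx]
all-files = 1
build-dir = doc/build
source-dir = doc/source

[pbr]
autodoc_index_modules = True
```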


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Dmitry Tantsur

On 03/22/2018 04:39 PM, Sean McGinnis wrote:


That's unfortunate. What we really need is a migration path from the
'pbr' way of doing things to something else. I see three possible
avenues at this point in time:

1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
   things to 'sphinx-apidoc' but it takes the form of an extension.
   From my brief experiments, the output generated from this is
   radically different and far less comprehensive than what 'sphinx-
   apidoc' generates. However, it supports templating so we could
   probably configure this somehow and add our own special directive
   somewhere like 'openstackdocstheme'
2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
   against upstream Sphinx [1]. This essentially does what the PBR
   extension does but moves configuration into 'conf.py'. However, this
   is currently held up as I can't adequately explain the differences
   between this and 'sphinx.ext.autosummary' (there's definite overlap
   but I don't understand 'autosummary' well enough to compare them).
3. Modify the upstream jobs that detect the pbr integration and have
   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
   technically appealing approach as it still leaves us unable to build
   stuff locally and adds yet more "magic" to the gate, but it does let
   us progress.

Try as I may, I don't really have the bandwidth to work on this for
another few weeks so I'd appreciate help from anyone with sufficient
Sphinx-fu to come up with a long-term solution to this issue.

Cheers,
Stephen



I think we could probably go with 1 until and if 2 becomes an option. It does
change output quite a bit.

I played around with 3, but I think we will have enough differences between
projects as to _where_ specifically this generated content needs to be placed
that it will make that approach a little more brittle.



One other thing that comes to mind - I think most service projects, if they
are even using this, could probably just drop it. I've found the generated
"API" documentation for service modules to be of very limited use.

That would at least narrow things down to lib projects. So this would still be
an issue for the oslo libs for sure. In that case, you do want that module API
documentation in most cases.


This is also an issue for clients. I would kindly ask people doing this work to 
stop proposing patches that just remove the API reference without any replacement.




But personally, I would encourage service projects to get around this issue by
just not doing it. It would appear that would take care of a large chunk of the
current usage:

http://codesearch.openstack.org/?q=autodoc_index_modules&i=nope&files=setup.cfg&repos=


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sdk] git repo rename and storyboard migration

2018-03-23 Thread Akihiro Motoki
As we talked about in the #openstack-sdks channel yesterday, I can help with storyboard
migration of OSC bugs. Dean and Monty seem fine with the migration.
We can migrate OSC bugs to storyboard along with the openstack SDK storyboard
migration.

Thanks,
Akihiro


2018-03-23 1:28 GMT+09:00 Kendall Nelson :

> I can run test migrations today for the rest of the OSC launchpad projects
> just to make sure it all goes smoothly and report back.
>
> -Kendall (diablo_rojo)
>
>
> On Thu, 22 Mar 2018, 5:54 am Dean Troyer,  wrote:
>
>> On Thu, Mar 22, 2018 at 7:42 AM, Akihiro Motoki 
>> wrote:
>> > 2018-03-22 21:29 GMT+09:00 Monty Taylor :
>> >> I could see waiting until we move python-openstackclient. However,
>> we've
>> >> got the issue already with shade bugs being in storyboard already and
>> sdk
>> >> bugs being in launchpad. With shade moving to having its
>> implementation be
>> >> in openstacksdk, over this cycle I expect the number of bugs people
>> report
>> >> against shade wind up actually being against openstacksdk to increase
>> quite
>> >> a bit.
>> >>
>> >> Maybe we should see if the python-openstackclient team wants to migrate
>> >> too?
>> >
>> > Although I have limited experience on storyboard, I think it is ready
>> for
>> > our bug tracking.
>> > As Jens mentioned, not a small number of bugs are referred to from both
>> OSC
>> > and SDK.
>> > One piece of good news on OSC launchpad bugs is that we do not use tags
>> > aggressively.
>> > If Dean is okay, I believe we can migrate to storyboard.
>>
>> I am all in favor of migrating OSC to use to Storyboard, however I am
>> totally unable to give it any time in the near future.  If Akihiro or
>> anyone else wants to take on that task, you will have my support and
>> as much help as I am able to give.
>>
>> dt
>>
>> --
>>
>> Dean Troyer
>> dtro...@gmail.com
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Refused to connect port 8774.

2018-03-23 Thread manoj kumar
I would check for:

1) Telnet to the controller on port 8774.
2) Check whether the controller service is listening on 8774.
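A quick way to script the same check (equivalent to "telnet controller 8774"; "controller" is the hostname from the original report, so substitute your own):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timeout, and DNS failures
        return False

# False here unless 'controller' actually resolves and nova-api listens
print(port_open("controller", 8774))
```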

Sent from my iPhone

> On 23-Mar-2018, at 1:07 PM, __ mango. <935540...@qq.com> wrote:
> 
> 
> I ran the openstack compute service list command and got the following error:
> 
> # openstack compute service list
> Unable to establish connection to http://controller:8774/v2.1/os-services:
> HTTPConnectionPool(host='controller', port=8774): 
> Max retries exceeded with url: /v2.1/os-services (Caused by 
> NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efdbbee9e50>: 
> Failed to establish a new connection: [Errno 111] 
> \xe6\x8b\x92\xe7\xbb\x9d\xe8\xbf\x9e\xe6\x8e\xa5',))
> 
> My port 8774 didn't respond, and restarting the nova-api service didn't help.
> Is there any way to solve this problem?  thank you.  
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, March 23rd

2018-03-23 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of currently-considered changes at:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923


== Recently-approved changes ==

* Resolution about stable branch EOL and "extended maintenance" [1]
* Clarify testing for interop programs [2]
* Rename and clarify scope for PowerStackers [3]
* New repos: oslo.limit, cyborg-specs, ansible-config_template,
ansible-role-systemd* + several xstatic-* repositories
* Removed repos: ironic-inspector-tempest-plugin, zuul-*, nodepool

[1] https://review.openstack.org/#/c/548916/
[2] https://review.openstack.org/#/c/550571/
[3] https://review.openstack.org/#/c/551413/

Two major items were finally approved this week. The first one is the
creation of an "extended maintenance" concept. Stable branches will no
longer be automatically terminated once the maintenance period is over.
They will be kept around for as long as they are maintained by
volunteers. This should enable new resources to step up if they want to
continue maintaining stable branches beyond the minimal period
guaranteed by the stable maintenance team:

https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html

The second major item approved this week is the long-awaited
clarification on acceptable location for interoperability tests, in the
age of add-on trademark programs. The adopted resolution relaxes
potential locations and got support from all parties involved:

https://governance.openstack.org/tc/resolutions/20180307-trademark-program-test-location.html

In addition to those two, another noticeable change this week is the
removal of Zuul and Nodepool from OpenStack project governance, as they
are looking into forming a separately-branded project supported by the
OpenStack Foundation. For more details, you can read Jim Blair's email at:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128396.html


== Voting in progress ==

Melvin and I posted a resolution proposing a minimal governance model
for SIGs. It proposes that any escalated conflict inside a SIG or
between SIGs should be arbitrated by a SIGs admin group formed of one TC
member and one UC member, with the Foundation executive director
breaking ties in case of need. A similar resolution was adopted by the
UC. This resolution has majority support already and will be approved on
Tuesday, unless new objections are posted. So please review at:

https://review.openstack.org/#/c/554254/

Tony proposed an update to the new projects requirements list, to match
our current guidelines in terms of IRC meetings. This resolution also
has majority support already and will be approved on Tuesday, unless new
objections are posted:

https://review.openstack.org/#/c/552728/


== Under discussion ==

Jeffrey Zhang's proposal about splitting the Kolla-kubernetes team out
of the Kolla/Kolla-ansible team is still under discussion, with
questions about the effect of the change on the Kolla-k8s team. A thread
on the mailing-list would be good to make sure this discussion gets
wider input from affected parties. Please chime in on:

https://review.openstack.org/#/c/552531/

Discussion is also ongoing on the Adjutant project team addition.
Concerns about the scope of Adjutant, as well as fears that it would
hurt interoperability between OpenStack deployments, were raised. A
deeper analysis and discussion needs to happen before the TC can make a
final call on this one. You can jump in the discussion here:

https://review.openstack.org/#/c/553643/


== TC member actions/focus/discussions for the coming week(s) ==

For the coming week I expect discussions to continue around the Kolla
split and the Adjutant team addition.

I'll be spending time organizing the Kubernetes/OpenStack community
discussions that will happen around KubeCon EU in Copenhagen in May.

We'll continue brainstorming ideas of topics for the Forum in Vancouver.
You can add ideas to:

https://etherpad.openstack.org/p/YVR-forum-TC-sessions


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
which English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Refused to connect port 8774.

2018-03-23 Thread __ mango.
I ran the openstack compute service list command and got the following error:

# openstack compute service list
Unable to establish connection to http://controller:8774/v2.1/os-services:
HTTPConnectionPool(host='controller', port=8774): 
Max retries exceeded with url: /v2.1/os-services (Caused by 
NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efdbbee9e50>: 
Failed to establish a new connection: [Errno 111] 
\xe6\x8b\x92\xe7\xbb\x9d\xe8\xbf\x9e\xe6\x8e\xa5',))

My port 8774 didn't respond, and restarting the nova-api service didn't help.
Is there any way to solve this problem?  thank you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev