Re: [openstack-dev] [tc][security] Item #5 of the VMT

2016-06-04 Thread Steven Dake (stdake)
Rob,

Thanks!
-steve

From: Rob C
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, June 3, 2016 at 4:40 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [tc][security] Item #5 of the VMT

Doug Chivers might have some thoughts on this but I'm happy with your proposal 
Steve, kind of you to do the leg-work.

-rob

On Fri, Jun 3, 2016 at 1:29 AM, Steven Dake (stdake) wrote:
Hi folks,

I think we are nearly done with Item #5 [1] of the VMT.  One question remains.

We need to know which repo the analysis documentation will land in.  There is 
security-doc we could use for this purpose, but we could also create a new 
repository called "security-analysis" (I'm open to other names).  I'll create 
the repo, get reno integrated with it, get sphinx integrated with it, and get a 
basic documentation index.rst in place using cookiecutter + extra git reviews.  
I'll also set up project-config for you.  After that, I don't think there is 
much I can do as my plate is pretty full :)

Regards
-steve

[1] https://review.openstack.org/#/c/300698/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [security][infra][tc] security-analysis repo needs +1 from Robert to be merged by Infrastructure

2016-06-04 Thread Steven Dake (stdake)
Robert,

The infrastructure team will only merge new repos that are going into big tent 
projects if the PTL +1s the change (i.e. takes responsibility for it).

The review is here:
https://review.openstack.org/#/c/325049/

If you click through to the needed-by changes, you will see how it relates to 
the VMT section #5 rework.

As I said, I will do the basic prep work on getting the repo merged and 
including reno and some basic infrastructure to build documentation.

Andreas has been super helpful (thanks Andreas!) in handholding me through a 
docs-only repo addition. I've never seen Andreas be wrong on a project-config 
review, and I've done a lot of them :)

Regards,
-steve




Re: [openstack-dev] [nova] [placement] conducting, ovo and the placement api

2016-06-04 Thread Dan Smith
> There was a conversation earlier this week in which, if I understood
> things correctly, one of the possible outcomes was that it might make
> sense for the new placement service (which will perform the function
> currently provided by the scheduler in Nova) to only get used over its
> REST API, as this will ease its extraction to a standalone service.

FWIW, I think that has been the long term expectation for a while.
Eventually that service is separate, which means no RPC to/from Nova
itself. The thing that came up last week was more a timing issue of
whether we go straight to that in Newton or stop at an intermediate
stage where we're still using RPC. Because of the cells thing, I was
hoping to be able to go straight to HTTP, which would be slightly nicer
than what is actually an upcall from the cell to the API (even though
we're still pretty flat).

> * The way to scale is to add more placement API servers and more
>   nodes in the galera (or whatever) cluster. The API servers just
>   talk to the persistence layer themselves. There are no tasks to
>   "conduct".

I'm not sure that we'll want to consider this purely a data service. The
new resource filter approach is mostly a data operation, but it's not
complete -- it doesn't actually select a resource. For that we still
need to run some of the more complicated scheduler filters. I'm not sure
that completely dispensing with the queue (or queuing of some sort) and
doing all the decisions in the API worker while we hold the HTTP client
waiting is the right approach. I'd have to think about it more.

> * If API needs to be versioned then it can version the external
>   representations that it uses.

It certainly needs to be versioned, and certainly that versioned
external representation should be decoupled from the actual persistence.

> * Nova-side versioned objects that are making http calls in
>   themselves. For example, an Inventory object that knows how to
>   save itself to the placement API over HTTP. Argh. No. Magical
>   self-persisting objects are already messy enough. Adding a second
>   medium over which persistence can happen is dire. Let's do
>   something else please.

Um, why? What's the difference between a remotable object that uses
rabbit and RPC to ask a remote service to persist a thing, versus a
remotable object that uses HTTP to do it? It seems perfectly fine to me
to have the object be our internal interface, and the implementation
behind it does whatever we want it to do at a given point in time (i.e.
RPC to the scheduler or HTTP to the placement service). The
indirection_api in ovo is pluggable for a reason...
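
Dan's point about pluggability can be illustrated with a toy sketch (plain
Python, not the real oslo.versionedobjects API; all class and method names
here are made up): the object stays the stable internal interface, and the
indirection backend behind it decides whether a save travels over RPC or
HTTP.

```python
# Simplified illustration of a pluggable indirection API. Nothing here
# touches rabbit or a real REST endpoint; the backends just return
# strings describing what they would do.

class RPCIndirection:
    """Pretend backend that would cast to a conductor over rabbit."""
    def object_action(self, obj, action, data):
        return f"rpc:{action}:{obj.__class__.__name__}:{data}"

class HTTPIndirection:
    """Pretend backend that would PUT to the placement REST API."""
    def object_action(self, obj, action, data):
        return f"http:PUT /inventories {data}"

class Inventory:
    # Swappable in exactly one place; callers never change.
    indirection_api = RPCIndirection()

    def __init__(self, total):
        self.total = total

    def save(self):
        return self.indirection_api.object_action(
            self, "save", {"total": self.total})

inv = Inventory(total=8)
print(inv.save())                      # routed via the "RPC" backend

Inventory.indirection_api = HTTPIndirection()
print(inv.save())                      # same caller, now via "HTTP"
```

Swapping the backend changes the transport without touching any caller,
which is the property Dan is describing.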

> * A possible outcome here is that we're going to have objects in Nova
>   and objects in Placement that people will expect to co-evolve.

I don't think so. I think the objects in Nova will mirror the external
representation of the placement API, much like the nova client tracks
evolution in nova's external API. As placement tries to expand its scope
it is likely to need to evolve its internal data structures like
anything else.

--Dan



Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-04 Thread Na Zhu
Hi John,

OK, please keep me posted once you are done. Thanks very much.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:   2016/06/03 13:15
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Juno 

Whatever gets it done faster - let me get the three repos aligned. I need 
to get the ovs/ovn work done so networking-ovn can call it, and then 
networking-sfc can call networking-ovn.

Hopefully I will have it done tomorrow or over the weekend - let's touch 
base Monday or Sunday night.

Regards 

John

Sent from my iPhone

On Jun 2, 2016, at 6:30 PM, Na Zhu  wrote:

Hi John,

I agree with submitting WIP patches to the community. Since you have already 
done a lot of work on networking-sfc and networking-ovn, it would be better 
for you to submit the initial patches for networking-sfc and networking-ovn, 
and then Srilatha and I can take over the patches. Do you have time to do it? 
If not, Srilatha and I can do it, with you as co-author throughout.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:2016/06/03 00:08
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Juno,

Sure, makes sense. I will have ovs/ovn in rough shape by the end of the week 
(hopefully), which will allow you to call the interfaces from 
networking-ovn. Ryan has asked that we submit WIP patches, so hopefully 
that will kickstart the review process.
Also, hopefully some of the networking-sfc team will be able to help 
– I will let them speak for themselves.

Regards

John

From: Na Zhu 
Date: Wednesday, June 1, 2016 at 7:02 PM
To: John McDowall 
Cc: "disc...@openvswitch.org" , OpenStack 
Development Mailing List , Ryan Moats <
rmo...@us.ibm.com>, Srilatha Tangirala 
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN

Hi John,

Thanks for your reply.

Seems you have covered everything :)
The development work can be broken down into 3 parts:
1. add an ovn driver to networking-sfc
2. provide APIs in networking-ovn for networking-sfc
3. implement SFC in ovn

So how about we take parts 1 and 2, and you take part 3? We are familiar 
with networking-sfc and networking-ovn, so we can do it faster :)





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:Ryan Moats , OpenStack Development Mailing 
List , "disc...@openvswitch.org" <
disc...@openvswitch.org>, Srilatha Tangirala 
Date:2016/06/01 23:26
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Na/Srilatha,

Great, I am working from three repos:

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

I had an original prototype working that used an API I created. Since 
then, based on feedback from everyone I have been moving the API to the 
networking-sfc model and then supporting that API in networking-ovn and 
ovs/ovn. I have created a new driver in networking-sfc for ovn.

I am in the process of moving networking-ovn and ovs to support the sfc 
model. Basically I intend to pass a deep copy of the port-chain 
(sample attached, sfc_dict.py) from the ovn driver in networking-sfc to 
networking-ovn.  This, as Ryan pointed out, will minimize the dependencies 
between networking-sfc and networking-ovn. I have created additional 
schema for ovs/ovn (attached) that will provide the linkage between 
networking-ovn and ovs/ovn. I have the schema in ovs/ovn and I am in the 
process of updating my code to support it.

Not sure where you guys want to jump in – but I can help in any way you 
need.

Regards

John


[openstack-dev] [nova] [placement] conducting, ovo and the placement api

2016-06-04 Thread Chris Dent


(Warning: there might be crazy ideas ahead. I'm mostly sending this out
to engender some discussion, and if these ideas are indeed crazy, their
debunking can strengthen and clarify existing plans.)

There was a conversation earlier this week in which, if I understood
things correctly, one of the possible outcomes was that it might make
sense for the new placement service (which will perform the function
currently provided by the scheduler in Nova) to only get used over its
REST API, as this will ease its extraction to a standalone service.

Until now the various proposed patches related to resource providers,
generic resource pools, inventories and allocations have been calling
methods on versioned objects to do persistence direct to the database.

If we're going to move to an HTTP-only API I wonder if that gives us
some opportunities to minimize complexity in the longer term
architecture:

* If we have many placement API servers talking to the same logical
  datastore
* And if we don't need a conductor
* And we have schematized representations in the API

Then might it be possible that we don't need oslo versioned objects nor
RPC and we can separate persistence from data structure?

If desired we could still use OVO to version the structure of, for
example, inventory information, but rather than telling an Inventory (or
InventoryList) to save itself, the API would save the information. The
complexity seems to enter the picture when we try to make this stuff
remotable and self-persistable. If there's no conductor we don't really
need that, do we? Or do we? If so, why?
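
What separating persistence from data structure could look like, as a
minimal sketch (plain Python; the schema, names, and in-memory store are
illustrative, not real placement code):

```python
# The external representation is a plain, versioned dict; the API layer
# validates and persists it. Objects never save themselves.

INVENTORY_SCHEMA_VERSION = "1.0"

def validate_inventory(doc):
    """Check the external representation, not a live object."""
    assert doc.get("version") == INVENTORY_SCHEMA_VERSION
    assert set(doc) >= {"version", "resource_class", "total"}
    assert doc["total"] >= 0
    return doc

class PlacementAPI:
    """The API server owns persistence entirely."""
    def __init__(self, store=None):
        self.store = store if store is not None else {}

    def put_inventory(self, provider_id, doc):
        self.store[provider_id] = validate_inventory(doc)
        return self.store[provider_id]

api = PlacementAPI()
saved = api.put_inventory(
    "rp-1",
    {"version": "1.0", "resource_class": "VCPU", "total": 16},
)
print(saved["resource_class"])
```

The version lives in the representation itself, so the wire format can be
versioned without any self-persisting object machinery behind it.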

Then:

* The way to scale is to add more placement API servers and more
  nodes in the galera (or whatever) cluster. The API servers just
  talk to the persistence layer themselves. There are no tasks to
  "conduct".
* If API needs to be versioned then it can version the external
  representations that it uses.

What I'm concerned about is mainly two things:

* Nova-side versioned objects that are making http calls in
  themselves. For example, an Inventory object that knows how to
  save itself to the placement API over HTTP. Argh. No. Magical
  self-persisting objects are already messy enough. Adding a second
  medium over which persistence can happen is dire. Let's do
  something else please.

* A possible outcome here is that we're going to have objects in Nova
  and objects in Placement that people will expect to co-evolve.
  That's a tedious dependency. Using versioned objects on both sides
  would allow us to co-evolve relatively easily in some ways, but that
  ease is itself a trap. I think we're better off making it
  hard to change the on-the-wire representation and keeping the
  expectations of the placement API strict and as static as possible.
  Since we only need to represent Inventory of some type, Allocation
  of some of that Inventory, and a thing which has some inventory
  (rather than fungible things like
  an-extremely-verbose-description-of-a-computer-and-all-extra-bits),
  we should try to get away with that, for the sake of simplicity[1].

I'm talking from a position of ignorance here, so I may be talking
completely BS, but I wanted to get this stuff out there so it was in
the shared mental mix.

[1] What I'm suggesting here is that strictness of representation is
a useful constraint for the development of the system.

--
Chris Dent   (╯°□°)╯︵┻━┻  http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [OpenStack-Infra] [Infra][tricircle] patch not able to be merged

2016-06-04 Thread Shinobu Kinjo
Thank you for investigating and for the manual operations that fixed the
problem. If there is anything we can do to prevent such unexpected
behaviour, please let us know.

Cheers,
Shinobu


On Fri, Jun 3, 2016 at 11:04 PM, Jeremy Stanley  wrote:
> On 2016-06-03 03:45:31 + (+), joehuang wrote:
>> There is one quite strange issue in Tricircle stable/mitaka branch
>> (https://github.com/openstack/tricircle/tree/stable/mitaka) . Even
>> the patch ( https://review.openstack.org/#/c/324209/ ) were given
>> Code-Review +2 and Workflow +1, the gating job not started, and
>> the patch was not merged.
>>
>> This also happen even we cherry pick a patch from the master
>> branch to the stable/mitaka branch, for example,
>> https://review.openstack.org/#/c/307627/.
>>
>> Is there configuration missing for the stable branch after
>> tagging, or some issue in infra?
>
> For some reason, (based on my reading of debugging logs) when Zuul
> queried Gerrit for open changes needed by that one it decided
> https://review.openstack.org/306278 was required to merge first but
> thought it wasn't merged. Then it attempted to enqueue it but
> couldn't because it was actually already merged a couple months ago.
> I manually instructed Zuul to try and requeue 324209 and that seems
> to have worked.
> --
> Jeremy Stanley



-- 
Email:
shin...@linux.com
shin...@redhat.com



Re: [openstack-dev] Re: [probably forge email] Re: [Kolla] About kolla-ansible reconfigure

2016-06-04 Thread Steven Dake (stdake)
Hu,

Comments inline.

From: "hu.zhiji...@zte.com.cn"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday, June 4, 2016 at 12:11 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] Re: [probably forge email] Re: [Kolla] About 
kolla-ansible reconfigure

Hi Steven,


Thanks for the information. Some further questions:

> Reconfigure was not designed to handle changes to globals.yml.  I think it's a 
> good goal that it should be able to do so, but it does not today.

So what is the preferred method to change kolla_internal_vip_address and make it 
effective?

I can't think of any known way to do this.  Internal VIPs are typically 
permanent allocations.  Reconfigure, IIRC, does not copy configuration files if 
there are no changes to the relevant /etc/kolla/config/* file (such as 
nova.conf).  Since you can't get new config files generated on your deployed 
targets, the containers won't pick them up.  Even if they did pick them up, they 
wouldn't actually restart, because they are designed to only restart on a 
reconfigure operation that reconfigures something (i.e. there is new config 
content).

One option that comes to mind is to login to every host in your cluster, sed 
replace the original internal VIP address in every file in /etc/kolla/* with 
the new one, then docker stop every container on every node, then docker start 
every container on every node.  I know, not optimal and only works with 
COPY_ALWAYS.  This could be automated in an ansible playbook with relative ease.
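
A dry-run sketch of that manual procedure (hostnames, file paths, and the
container list are all made up; this only builds the command lists and
executes nothing, so check everything against your own deployment first):

```python
# Build (but do not run) the per-host commands to rewrite the VIP in
# each container's kolla config and then bounce every container.

def vip_replace_commands(hosts, old_vip, new_vip, containers):
    plan = []
    for host in hosts:
        # sed-replace the old VIP in the kolla config files on this host
        plan.append(
            ["ssh", host, "sudo", "sed", "-i",
             f"s/{old_vip}/{new_vip}/g"] +
            [f"/etc/kolla/{c}/config.json" for c in containers])
        # stop everything first, then start everything
        for c in containers:
            plan.append(["ssh", host, "sudo", "docker", "stop", c])
        for c in containers:
            plan.append(["ssh", host, "sudo", "docker", "start", c])
    return plan

plan = vip_replace_commands(
    hosts=["control01", "control02"],
    old_vip="10.43.114.148",
    new_vip="10.43.114.200",
    containers=["mariadb", "keystone"],
)
for cmd in plan[:3]:
    print(" ".join(cmd))
```

As Steve notes, this approach only works with COPY_ALWAYS, and wiring the
same steps into an ansible playbook would be the less error-prone route.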

Another way that may work (less confidence here) is to stop every container on 
every node, and run kolla-ansible deploy.  There is no stop operation in 
kolla-ansible – but you can look at the cleanup code here to craft your own:

https://github.com/openstack/kolla/blob/master/ansible/cleanup.yml
https://github.com/openstack/kolla/blob/master/tools/kolla-ansible#L45
https://github.com/openstack/kolla/blob/master/ansible/roles/cleanup/tasks/main.yml#L1-L4

Make certain to leave out line 6 – as that removes named volumes (you would 
lose your persistent data).  You only need lines 1-4 (of main.yml).

Please file a bug.

Maybe someone else has a more elegant solution.


> Reconfigure was designed to handle changes to /etc/kolla/config/* (where 
> custom config for services live).  Reconfigure in its current incarnation in 
> all our branches and master is really a misnomer – it should be 
> service-reconfigure – but that is wordy and we plan to make globals.yml 
> reconfigurable if feasible – but probably not anytime soon.

There is no /etc/kolla/config/* located in my env before or after successful 
deployment. Is that right?

That is right.  To provide custom configuration for nova for example, you could 
add a /etc/kolla/config/nova.conf file and put:

[libvirt]
virt_type=qemu

(documented here: 
http://docs.openstack.org/developer/kolla/deployment-philosophy.html)

Run reconfigure and from that point forward all of your machines would use QEMU 
software emulation instead of KVM hardware virt.  The use case the reconfigure 
action was designed to handle was reconfiguring /etc/kolla/config files (e.g. 
merging custom config with defaults while overriding when that is called for).
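
The merge-with-override behavior described above can be sketched with
configparser (file contents are inlined here purely for illustration;
kolla's real merging is done by its own ansible merge_configs module, not
this code):

```python
# Later reads override earlier ones, which is the "custom config wins
# over defaults" behavior reconfigure relies on.
import configparser

DEFAULTS = """
[DEFAULT]
debug = False

[libvirt]
virt_type = kvm
"""

OPERATOR_OVERRIDE = """
[libvirt]
virt_type = qemu
"""

merged = configparser.ConfigParser()
merged.read_string(DEFAULTS)           # defaults first
merged.read_string(OPERATOR_OVERRIDE)  # operator values win

print(merged["libvirt"]["virt_type"])  # qemu
print(merged["DEFAULT"]["debug"])      # False
```

Options the operator does not mention keep their defaults; only the
overridden keys change.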

Handling a reconfiguration of globals.yml and passwords.yml is much more 
complex.  I'd like to see us get reconfigure for passwords working next for 
password rotation required by various government regulations.

There are a lot of problems that come to mind with reconfiguring globals.yml.  
I'm not quite sure how it could sanely be implemented without a full cluster 
shutdown followed by a full cluster start.  This would require adding a stop 
operation.  Deploy already starts containers for us.

Regards
-steve






From: "Steven Dake (stdake)"
To: "openstack-dev@lists.openstack.org"
Date: 2016-06-03 06:16
Subject: [probably forge email] Re: [openstack-dev] [Kolla] About 
kolla-ansible reconfigure




Hu,

Reconfigure was not designed to handle changes to globals.yml.  I think its a 
good goal that it should be able to do so, but it does not today.

Reconfigure was designed to handle changes to /etc/kolla/config/* (where custom 
config for services live).  Reconfigure in its current incarnation in all our 
branches and master is really a misnomer – it should be service-reconfigure – 
but that is wordy and we plan to make globals.yml reconfigurable if feasible – 
but probably not anytime soon.

Regards
-steve

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-04 Thread Monty Taylor
On 06/04/2016 01:53 AM, Morgan Fainberg wrote:
> 
> On Jun 3, 2016 12:42, "Lance Bragstad"  > wrote:
>>
>>
>>
>> On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash  > wrote:
>>>
>>>
 On 3 Jun 2016, at 16:38, Lance Bragstad  > wrote:



 On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  > wrote:
>
>
>> On 3 Jun 2016, at 01:22, Adam Young  > wrote:
>>
>> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>>
>>> Hi
>>>
>>> As you know, I have been working on specs that change the way we
> handle the uniqueness of project names in Newton. The goal of this is to
> better support project hierarchies, which as they stand today are
> restrictive in that all project names within a domain must be unique,
> irrespective of where in the hierarchy that projects sits (unlike, say,
> the unix directory structure where a node name only has to be unique
> within its parent). Such a restriction is particularly problematic when
> enterprise start modelling things like test, QA and production as
> branches of a project hierarchy, e.g.:
>>>
>>> /mydivision/projectA/dev
>>> /mydivision/projectA/QA
>>> /mydivision/projectA/prod
>>> /mydivision/projectB/dev
>>> /mydivision/projectB/QA
>>> /mydivision/projectB/prod
>>>
>>> Obviously the idea of a project name (née tenant) being unique
> has been around since near the beginning of (OpenStack) time, so we must
> be cautious. There are two alternative specs proposed:
>>>
>>> 1) Relax project name
> constraints: https://review.openstack.org/#/c/310048/ 
>>> 2) Hierarchical project
> naming: https://review.openstack.org/#/c/318605/
>>>
>>> First, here’s what they have in common:
>>>
>>> a) They both solve the above problem
>>> b) They both allow an authorization scope to use a path rather
> than just a simple name, hence allowing you to address a project
> anywhere in the hierarchy
>>> c) Neither have any impact if you are NOT using a hierarchy -
> i.e. if you just have a flat layer of projects in a domain, then they
> have no API or semantic impact (since both ensure that a project’s name
> must still be unique within a parent)
>>>
>>> Here’s how they differ:
>>>
>>> - Relax project name constraints (1), keeps the meaning of the
> ‘name’ attribute of a project to be its node-name in the hierarchy, but
> formally relaxes the uniqueness constraint to say that it only has to be
> unique within its parent. In other words, let’s really model this a bit
> like a unix directory tree.

 I think I lean towards relaxing the project name constraint. The
> reason is because we already expose `domain_id`, `parent_id`, and `name`
> of a project. By relaxing the constraint we can give the user everything
> the need to know about a project without really changing any of these.
> When using 3.7, you know what domain your project is in, you know the
> identifier of the parent project, and you know that your project name is
> unique within the parent.  
>>>
>>> - Hierarchical project naming (2), formally changes the meaning
> of the ‘name’ attribute to include the path to the node as well as the
> node name, and hence ensures that the (new) value of the name attribute
> remains unique.

 Do we intend to *store* the full path as the name, or just build it
> out on demand? If we do store the full path, we will have to think about
> our current data model, since the depth of the organization or domain
> would be limited by the max possible name length. Will performance be a
> concern if we build the full path on every request?
>>>
>>> I now mention this issue in the spec for hierarchical project naming
> (the relax naming approach does not suffer this issue). As you say,
> we’ll have to change the DB (today it is only 64 chars) if we do store
> the full path. This is slightly problematic since the maximum depth of
> hierarchy is controlled by a config option, and hence could be changed.
> We will absolutely have to be able to build the path on-the-fly in order
> to support legacy drivers (who won’t be able to store more than 64 chars).
> We may need to do some performance tests to ascertain if we can get away
> with building the path on-the-fly in all cases and avoid changing the
> table.  One additional point is that, of course, the API will return
> this path whenever it returns a project - so clients will need to be
> aware of this increase in size.
>>
>>
>> These are all good points that continue to push me towards relaxing
> the project naming constraint :) 
>>>
>>>
>>> While whichever approach we chose would only be included in a new
> microversion (3.7) of the Identity API, although some relevant 

[openstack-dev] Re: [probably forge email] Re: [Kolla] About kolla-ansible reconfigure

2016-06-04 Thread hu . zhijiang
Hi Steven,


Thanks for the information. Some further questions:

> Reconfigure was not designed to handle changes to globals.yml.  I think 
it's a good goal that it should be able to do so, but it does not today.

So what is the preferred method to change kolla_internal_vip_address and 
make it effective?

> Reconfigure was designed to handle changes to /etc/kolla/config/* (where 
custom config for services live).  Reconfigure in its current incarnation 
in all our branches and master is really a misnomer – it should be 
service-reconfigure – but that is wordy and we plan to make globals.yml 
reconfigurable if feasible – but probably not anytime soon.

There is no /etc/kolla/config/* located in my env before or after 
successful deployment. Is that right? 





From: "Steven Dake (stdake)"
To: "openstack-dev@lists.openstack.org"
Date: 2016-06-03 06:16
Subject: [probably forge email] Re: [openstack-dev] [Kolla] 
About kolla-ansible reconfigure



Hu,

Reconfigure was not designed to handle changes to globals.yml.  I think 
it's a good goal that it should be able to do so, but it does not today.

Reconfigure was designed to handle changes to /etc/kolla/config/* (where 
custom config for services live).  Reconfigure in its current incarnation 
in all our branches and master is really a misnomer – it should be 
service-reconfigure – but that is wordy and we plan to make globals.yml 
reconfigurable if feasible – but probably not anytime soon.

Regards
-steve


From: "hu.zhiji...@zte.com.cn" 
Reply-To: "openstack-dev@lists.openstack.org" <
openstack-dev@lists.openstack.org>
Date: Wednesday, June 1, 2016 at 6:24 PM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [Kolla] About kolla-ansible reconfigure

Hi 

After modifying the kolla_internal_vip_address in /etc/kolla/global.yml , 
I use kolla-ansible reconfigure to reconfigure OpenStack. But I got the 
following error.

TASK: [mariadb | Restart containers] 
**
skipping: [localhost] => (item=[{'group': 'mariadb', 'name': 'mariadb'}, 
{'KOLLA_BASE_DISTRO': 'centos', 'PS1': '$(tput bold)($(printenv 
KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ ', 
'KOLLA_INSTALL_TYPE': 'binary', 'changed': False, 'item': {'group': 
'mariadb', 'name': 'mariadb'}, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 
'invocation': {'module_name': u'kolla_docker', 'module_complex_args': 
{'action': 'get_container_env', 'name': u'mariadb'}, 'module_args': ''}, 
'KOLLA_SERVICE_NAME': 'mariadb', 'KOLLA_INSTALL_METATYPE': 'rdo'}, {'cmd': 
['docker', 'exec', 'mariadb', '/usr/local/bin/kolla_set_configs', 
'--check'], 'end': '2016-06-02 11:32:18.866276', 'stderr': 
'INFO:__main__:Loading config file at 
/var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config 
file\nINFO:__main__:The config files are in the expected state', 'stdout': 
u'', 'item': {'group': 'mariadb', 'name': 'mariadb'}, 'changed': False, 
'rc': 0, 'failed': False, 'warnings': [], 'delta': '0:00:00.075316', 
'invocation': {'module_name': u'command', 'module_complex_args': {}, 
'module_args': u'docker exec mariadb /usr/local/bin/kolla_set_configs 
--check'}, 'stdout_lines': [], 'failed_when_result': False, 'start': 
'2016-06-02 11:32:18.790960'}]) 

TASK: [mariadb | Waiting for MariaDB service to be ready through VIP] 
*
failed: [localhost] => {"attempts": 6, "changed": false, "cmd": ["docker", 
"exec", "mariadb", "mysql", "-h", "10.43.114.148/24", "-u", "haproxy", 
"-e", "show databases;"], "delta": "0:00:03.924516", "end": "2016-06-02 
11:33:57.928139", "failed": true, "rc": 1, "start": "2016-06-02 
11:33:54.003623", "stdout_lines": [], "warnings": []}
stderr: ERROR 2005 (HY000): Unknown MySQL server host '10.43.114.148/24' 
(-2)
msg: Task failed as maximum retries was encountered

FATAL: all hosts have already failed -- aborting


It seems that mariadb was not restarted as expected.



ZTE Information Security Notice: The information contained in this mail 
(and any attachment transmitted herewith) is privileged and confidential 
and is intended for the exclusive use of the addressee(s).  If you are not 
an intended recipient, any disclosure, reproduction, distribution or other 
dissemination or use of the information contained is strictly prohibited. 
If you have received this mail in error, please delete it and notify us 
immediately.

