Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Joshua Harlow

Monty Taylor wrote:

On 03/14/2017 06:04 PM, Davanum Srinivas wrote:

Team,

So one more thing popped up again on IRC:
https://etherpad.openstack.org/p/oslo.config_etcd_backend

What do you think? Interested in this work?

Thanks,
Dims

PS: Between this thread and the other one about Tooz/DLM and
os-lively, we can probably make a good case to add etcd as a base
always-on service.


As I mentioned in the other thread, there was specific and strong
anti-etcd sentiment in Tokyo which is why we decided to use an
abstraction. I continue to be in favor of us having one known service in
this space, but I do think that it's important to revisit that decision
fully and in context of the concerns that were raised when we tried to
pick one last time.


I'm in agreement with this.

I don't mind tooz either (it's good at what it's for), since I took 
part in creating it... Given that, I can't help but wonder how nice it 
would be to pick one (etcd, ZooKeeper, Consul?) and just do nice things 
with it (perhaps even work with the etcd or ZooKeeper or Consul 
developers [depending on which one we pick] on features and bug fixes 
and such).




It's worth noting that there is nothing particularly etcd-ish about
storing config that couldn't also be done with ZooKeeper, and thus it could
just be an additional API call or two added to Tooz, with etcd and ZK
drivers for it.


Ya, to me ZooKeeper and etcd look pretty much the same nowadays.

Which I guess is why https://github.com/coreos/zetcd (a ZooKeeper 
"personality" for etcd) exists as a thing (although I'm not sure I'd 
want to run that, ha).




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Monty Taylor
On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
> Team,
> 
> So one more thing popped up again on IRC:
> https://etherpad.openstack.org/p/oslo.config_etcd_backend
> 
> What do you think? Interested in this work?
> 
> Thanks,
> Dims
> 
> PS: Between this thread and the other one about Tooz/DLM and
> os-lively, we can probably make a good case to add etcd as a base
> always-on service.

As I mentioned in the other thread, there was specific and strong
anti-etcd sentiment in Tokyo which is why we decided to use an
abstraction. I continue to be in favor of us having one known service in
this space, but I do think that it's important to revisit that decision
fully and in context of the concerns that were raised when we tried to
pick one last time.

It's worth noting that there is nothing particularly etcd-ish about
storing config that couldn't also be done with ZooKeeper, and thus it could
just be an additional API call or two added to Tooz, with etcd and ZK
drivers for it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Monty Taylor
On 03/15/2017 12:05 AM, Joshua Harlow wrote:
> So just fyi, this has been talked about before (but probably in the context
> of zookeeper or various other pluggable config backends).
> 
> Some links:
> 
> - https://review.openstack.org/#/c/243114/
> - https://review.openstack.org/#/c/243182/
> - https://blueprints.launchpad.net/oslo.config/+spec/oslo-config-db
> - https://review.openstack.org/#/c/130047/
> 
> I think the general questions that seem to reappear are around the
> following:
> 
> * How does reloading work (does it)?
> 
> * What's the operational experience? (Editing an ini file is about the
> lowest bar we can possibly get to, for better and/or worse.)

As a person who operates a lot of software (but who does not necessarily
operate OpenStack specifically), I will say that services that store
their config in another service, without an ingest/update facility
from a file, are a GIANT PITA to deal with. Config management is great at
laying down config files. It _can_ put things into services, but that's
almost always more work.

Which is my way of saying: neat, but please, please, please, whoever
writes this, make a simple facility that lets someone plop config
into a file on disk and have it noticed and slurped into the config
service. A one-liner command-line tool that one runs on the config file
to splat it into the config service would be fine.
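A minimal sketch of what such a slurp tool could look like. The /service/section/option key layout and the put() hook are assumptions made for illustration here, not an agreed oslo.config design:

```python
# Hypothetical sketch of the "slurp an ini file into the config service"
# one-liner; the key layout and the put() hook are assumptions.
import configparser

def flatten_ini(text, prefix):
    """Flatten INI text into {"<prefix>/<section>/<option>": value} pairs."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    flat = {}
    for option, value in parser.defaults().items():
        flat["%s/DEFAULT/%s" % (prefix, option)] = value
    for section in parser.sections():
        # items() also yields options inherited from [DEFAULT]
        for option, value in parser.items(section):
            flat["%s/%s/%s" % (prefix, section, option)] = value
    return flat

def push_config(text, prefix, put):
    """Write every flattened option through put(key, value), e.g. an etcd put."""
    for key, value in sorted(flatten_ini(text, prefix).items()):
        put(key, value)

sample = """
[DEFAULT]
debug = true

[database]
connection = mysql+pymysql://nova@db/nova
"""

store = {}  # a dict stands in for the config service here
push_config(sample, "/nova", store.__setitem__)
```

Re-running the tool from config management on every file change would keep the service-side copy in sync.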

> * Does this need to be a new oslo.config backend or is it better suited
> by something like the following (external programs loop)::
> 
>etcd_client = make_etcd_client(args)
>while True:
>has_changed = etcd_client.get_new_config("/blahblah")  # or use a watch
>if has_changed:
>   fetch_and_write_ini_file(etcd_client)
>   trigger_reload()
>time.sleep(args.wait)
> 
> * Is an external loop better (maybe, maybe not?)
> 
> Pretty sure there are some etherpad discussions around this also somewhere.
> 
> Clint Byrum wrote:
>> Excerpts from Davanum Srinivas's message of 2017-03-14 13:04:37 -0400:
>>> Team,
>>>
>>> So one more thing popped up again on IRC:
>>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>>
>>> What do you think? Interested in this work?
>>>
>>> Thanks,
>>> Dims
>>>
>>> PS: Between this thread and the other one about Tooz/DLM and
>>> os-lively, we can probably make a good case to add etcd as a base
>>> always-on service.
>>>
>>
>> This is a cool idea, and I think we should do it.
>>
>> A few loose ends I'd like to see in a spec:
>>
>> * Security Security Security. (Hoping if I say it 3 times a real
>>security person will appear and ask the hard questions).
>> * Explain clearly how operators would inspect, edit, and diff their
>>configs.
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Monty Taylor
On 03/15/2017 03:13 AM, Jay Pipes wrote:
> On 03/14/2017 05:01 PM, Clint Byrum wrote:
>> Excerpts from Jay Pipes's message of 2017-03-14 15:30:32 -0400:
>>> On 03/14/2017 02:50 PM, Julien Danjou wrote:
>>>> On Tue, Mar 14 2017, Jay Pipes wrote:
>>>>
>>>>> Not tooz, because I'm not interested in a DLM nor leader election
>>>>> library (that's what the underlying etcd3 cluster handles for me),
>>>>> only a fast service liveness/healthcheck system, but it shows usage
>>>>> of etcd3 and Google Protocol Buffers implementing a simple API for
>>>>> liveness checking and host maintenance reporting.
>>>>
>>>> Cool cool. So that's the same feature that we implemented in tooz 3
>>>> years ago. It's called "group membership". You create a group, make
>>>> nodes join it, and you know who's dead/alive and get notified when
>>>> their status change.
>>>
>>> The point of os-lively is not to provide a thin API over ZooKeeper's
>>> group membership interface. The point of os-lively is to remove the need
>>> to have a database (RDBMS) record of a service in Nova.
>>
>> That's also the point of tooz's group membership API:
>>
>> https://docs.openstack.org/developer/tooz/compatibility.html#grouping
> 
> Did you take a look at the code I wrote in os-lively? What part of the
> tooz group membership API do you think I would have used?
> 
> Again, this was a weekend project that I was moving fast on. I looked at
> tooz and didn't see how I could use it for my purposes, which was to
> store a versioned object in a consistent key/value store with support
> for transactional semantics when storing index and data records at the
> same time [1]
> 
> https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L468-L511
> 
> 
> etcd3 -- and specifically etcd3, not etcd2 -- supports the transactional
> semantics in a consistent key/value store that I needed.
> 
> tooz is cool, but it's not what I was looking for. It's solving a
> different problem than I was trying to solve.
> 
> This isn't a case of NIH, despite what Julien is trying to intimate in
> his emails.
> 
>>> tooz simply abstracts a group membership API across a number of drivers.
>>> I don't need that. I need a way to maintain a service record (with
>>> maintenance period information, region, and an evolvable data record
>>> format) and query those service records in an RDBMS-like manner but
>>> without the RDBMS being involved.
>>>
>>>>> servicegroup API with os-lively and eliminate Nova's use of an
>>>>> RDBMS for service liveness checking, which should dramatically
>>>>> reduce the amount of both DB traffic as well as conductor/MQ
>>>>> service update traffic.
>>>>
>>>> Interesting. Joshua and Vilob tried to push usage of tooz group
>>>> membership a couple of years ago, but it got nowhere. Well, no, they
>>>> got 2 specs written IIRC:
>>>>
>>>> https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html
>>>>
>>>> But then it died for whatever reasons on Nova side.
>>>
>>> It died because it didn't actually solve a problem.
>>>
>>> The problem is that even if we incorporate tooz, we would still need to
>>> have a service table in the RDBMS and continue to query it over and over
>>> again in the scheduler and API nodes.
>>
>> Most likely it was designed with hesitance to have a tooz requirement
>> to be a source of truth. But it's certainly not a problem for most tooz
>> backends to be a source of truth. Certainly not for etcd or ZK, which
>> are both designed to be that.
>>
>>> I want all service information in the same place, and I don't want to
>>> use an RDBMS for that information. etcd3 provides an ideal place to
>>> store service record information. Google Protocol Buffers is an ideal
>>> data format for evolvable versioned objects. os-lively presents an API
>>> that solves the problem I want to solve in Nova. tooz didn't.
>>
>> Was there something inherent in tooz's design that prevented you from
>> adding it to tooz's group API? Said API already includes liveness (watch
>> the group that corresponds to the service you want).
> 
> See above about transactional semantics.
> 
> I'm actually happy to add an etcd3 group membership driver to tooz,
> though. After the experience gained this weekend using etcd3, I'd like
> to do that.
> 
> Still doesn't mean that tooz would be the appropriate choice for what I
> was trying to do with os-lively, though.
> 
>> The only thing missing is being able to get groups and group members
>> by secondary indexes. etcd3's built in indexes by field are pretty nice
> 
> Not sure what you're talking about. etcd3 doesn't have any indexing by
> field. I built the os-lively library primarily as a well-defined set of
> index overlays (by uuid, by host, by service type, and by region) over
> etcd3's key/value store.
> 
>> for that, but ZK can likely also do it too by maintaining the index in
>> the driver.
> 
> Maybe, I'm not sure, I didn't 

[openstack-dev] [trove] reminder, DST adjustment

2017-03-14 Thread Amrith Kumar
Just a reminder that the Trove weekly meeting [1] is at 1800 UTC; with the
DST correction, the meeting is now at

1400 EDT
1100 PDT
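For anyone double-checking the conversion, the shift is mechanical; a quick sketch using Python's zoneinfo, with America/New_York and America/Los_Angeles assumed as the tz database zones for EDT/PDT:

```python
# Quick check of the DST-adjusted meeting times; the zone names are the
# assumed tz database entries for US Eastern and Pacific time.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

meeting = datetime(2017, 3, 14, 18, 0, tzinfo=timezone.utc)  # 1800 UTC
eastern = meeting.astimezone(ZoneInfo("America/New_York"))
pacific = meeting.astimezone(ZoneInfo("America/Los_Angeles"))
print(eastern.strftime("%H%M %Z"), pacific.strftime("%H%M %Z"))
```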

Thanks,

-amrith 

[1] http://eavesdrop.openstack.org/#Trove_(DBaaS)_Team_Meeting

--
Amrith Kumar
amrith.ku...@gmail.com




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][gate] functional job busted

2017-03-14 Thread Ihar Hrachyshka
Hi all,

the patch that started to produce a log index file for logstash [1] and
the patch that switched the metadata proxy to haproxy [2] landed and
together busted the functional job: the latter produces log
messages with null bytes inside, and os-log-merger is not resilient
against them.
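For context, the failure mode is simple: a merger that assumes clean text lines chokes on embedded null bytes. A defensive strip is a one-liner; this is only an illustrative sketch, not the actual os-log-merger patch:

```python
# Illustrative sketch only; not the actual os-log-merger fix.
def sanitize_line(line):
    """Drop embedded null bytes before merging/indexing a log line."""
    return line.replace('\x00', '')

merged = [sanitize_line(l) for l in ['normal line', 'hap\x00roxy line\x00']]
```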

If the functional job were in the gate and not just in the check queue, this
would not have happened.

There are attempts to fix the situation in multiple ways at [3]. (For the
os-log-merger patches, we will need a new release and then a bump of the
version used in the gate, so short term the neutron patches seem more viable.)

I will need support from both the authors of os-log-merger as well as
other neutron members to unravel this. I am going offline in a moment,
and hope someone will take care of the patches up for review and land
what's due.

[1] https://review.openstack.org/#/c/442804/
[2] https://review.openstack.org/#/c/431691/
[3] https://review.openstack.org/#/q/topic:fix-os-log-merger-crash

Thanks,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Jay Pipes

On 03/14/2017 05:01 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2017-03-14 15:30:32 -0400:

On 03/14/2017 02:50 PM, Julien Danjou wrote:

On Tue, Mar 14 2017, Jay Pipes wrote:


Not tooz, because I'm not interested in a DLM nor leader election library
(that's what the underlying etcd3 cluster handles for me), only a fast service
liveness/healthcheck system, but it shows usage of etcd3 and Google Protocol
Buffers implementing a simple API for liveness checking and host maintenance
reporting.


Cool cool. So that's the same feature that we implemented in tooz 3
years ago. It's called "group membership". You create a group, make
nodes join it, and you know who's dead/alive and get notified when their
status change.


The point of os-lively is not to provide a thin API over ZooKeeper's
group membership interface. The point of os-lively is to remove the need
to have a database (RDBMS) record of a service in Nova.


That's also the point of tooz's group membership API:

https://docs.openstack.org/developer/tooz/compatibility.html#grouping


Did you take a look at the code I wrote in os-lively? What part of the 
tooz group membership API do you think I would have used?


Again, this was a weekend project that I was moving fast on. I looked at 
tooz and didn't see how I could use it for my purposes, which was to 
store a versioned object in a consistent key/value store with support 
for transactional semantics when storing index and data records at the 
same time [1]


https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L468-L511

etcd3 -- and specifically etcd3, not etcd2 -- supports the transactional 
semantics in a consistent key/value store that I needed.
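As a rough sketch of that pattern (the key layout below is invented for illustration; the real schema is in the os_lively code linked above), "transactional semantics" here means the data record and its index records must commit together:

```python
# Sketch of the "data record plus index records committed atomically"
# pattern; the key layout is invented, not os-lively's actual schema.
def service_txn_ops(uuid, host, service_type, region, payload):
    """Return the puts that must land in a single etcd3 transaction."""
    return [
        ("/services/by-uuid/%s" % uuid, payload),                  # data record
        ("/services/by-host/%s" % host, uuid),                     # index -> uuid
        ("/services/by-type/%s/%s" % (service_type, uuid), uuid),  # index -> uuid
        ("/services/by-region/%s/%s" % (region, uuid), uuid),      # index -> uuid
    ]

# With python-etcd3, each pair would become a client.transactions.put()
# handed to client.transaction(); a dict stands in for the store here.
store = dict(service_txn_ops("u1", "compute-1", "nova-compute", "r1",
                             "record-bytes"))
```

If any of these puts could land independently, a reader could follow an index key to a record that isn't there yet, which is exactly what the transaction rules out.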


tooz is cool, but it's not what I was looking for. It's solving a 
different problem than I was trying to solve.


This isn't a case of NIH, despite what Julien is trying to intimate in 
his emails.



tooz simply abstracts a group membership API across a number of drivers.
I don't need that. I need a way to maintain a service record (with
maintenance period information, region, and an evolvable data record
format) and query those service records in an RDBMS-like manner but
without the RDBMS being involved.


servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
service liveness checking, which should dramatically reduce the amount of both
DB traffic as well as conductor/MQ service update traffic.


Interesting. Joshua and Vilob tried to push usage of tooz group
membership a couple of years ago, but it got nowhere. Well, no, they got
2 specs written IIRC:

  
https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html

But then it died for whatever reasons on Nova side.


It died because it didn't actually solve a problem.

The problem is that even if we incorporate tooz, we would still need to
have a service table in the RDBMS and continue to query it over and over
again in the scheduler and API nodes.


Most likely it was designed with hesitance to have a tooz requirement
to be a source of truth. But it's certainly not a problem for most tooz
backends to be a source of truth. Certainly not for etcd or ZK, which
are both designed to be that.


I want all service information in the same place, and I don't want to
use an RDBMS for that information. etcd3 provides an ideal place to
store service record information. Google Protocol Buffers is an ideal
data format for evolvable versioned objects. os-lively presents an API
that solves the problem I want to solve in Nova. tooz didn't.


Was there something inherent in tooz's design that prevented you from
adding it to tooz's group API? Said API already includes liveness (watch
the group that corresponds to the service you want).


See above about transactional semantics.

I'm actually happy to add an etcd3 group membership driver to tooz, 
though. After the experience gained this weekend using etcd3, I'd like 
to do that.


Still doesn't mean that tooz would be the appropriate choice for what I 
was trying to do with os-lively, though.



The only thing missing is being able to get groups and group members
by secondary indexes. etcd3's built in indexes by field are pretty nice


Not sure what you're talking about. etcd3 doesn't have any indexing by 
field. I built the os-lively library primarily as a well-defined set of 
index overlays (by uuid, by host, by service type, and by region) over 
etcd3's key/value store.
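A toy model of what "index overlays" means here, with a dict standing in for etcd3's key space and a key layout made up for the example:

```python
# Toy model of secondary-index overlays over a flat key/value store;
# the dict stands in for etcd3 and the key layout is invented.
kv = {
    "/svc/by-uuid/u1": "record-for-u1",   # primary data record
    "/svc/by-host/compute-1": "u1",       # index record -> primary key
    "/svc/by-type/nova-compute/u1": "u1",
}

def get_by_host(host):
    """Two reads: the index key yields the uuid, the uuid key the record."""
    uuid = kv["/svc/by-host/%s" % host]
    return kv["/svc/by-uuid/%s" % uuid]
```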



for that, but ZK can likely also do it too by maintaining the index in
the driver.


Maybe, I'm not sure, I didn't spend much time this weekend looking at 
ZooKeeper.



I understand abstractions can seem pretty cumbersome when you're moving
fast. It's not something I want to see stand in your way. But it would
be nice to see where there's deficiency in tooz so we can be there for
the next project that needs it and maybe eventually factor out direct
etcd3 usage so users who have maybe chosen ZK as 

Re: [openstack-dev] [keystone] slide deck

2017-03-14 Thread Lance Bragstad
Of course I would make changes to the template *right* after sending this
email. I'll just share the presentation that I have [0].

[0] https://docs.google.com/presentation/d/1s9BNHI4aHs_fEcCYuekDCFwMg1VTsKCHMkSko92Gqco/edit?usp=sharing

On Tue, Mar 14, 2017 at 8:54 PM, Lance Bragstad  wrote:

> Hi all,
>
> With the forum approaching, I threw together a slide deck that
> incorporates the new mascot. I wanted to send this out far enough in
> advance for folks to use it at the forum.
>
> This is in no way our *official* deck and you're not required to use it
> for keystone related talks or presentations. It's just something you can
> use if you wish. If you make edits, or have invested time into your own
> decks and feel like sharing, let me know. I think it would be great to have
> several decks available for people to choose from.
>
> Thanks,
>
> Lance
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] slide deck

2017-03-14 Thread Lance Bragstad
Hi all,

With the forum approaching, I threw together a slide deck that incorporates
the new mascot. I wanted to send this out far enough in advance for folks to
use it at the forum.

This is in no way our *official* deck and you're not required to use it for
keystone related talks or presentations. It's just something you can use if
you wish. If you make edits, or have invested time into your own decks and
feel like sharing, let me know. I think it would be great to have several
decks available for people to choose from.

Thanks,

Lance


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron-fwaas][networking-bgpvpn][nova-lxd] Removing old data_utils from Tempest

2017-03-14 Thread Takashi Yamamoto
Thank you for the heads up.
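For projects still mid-switch, the change is usually just the import path; roughly (the exact old path varies by project, so treat this as the common shape rather than a universal patch):

```diff
-from tempest.common.utils import data_utils
+from tempest.lib.common.utils import data_utils

 name = data_utils.rand_name('test-volume')
```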

On Wed, Mar 15, 2017 at 2:18 AM, Ken'ichi Ohmichi  wrote:
> Hi,
>
> Many projects are using data_utils library which is provided by
> Tempest for creating test resources with random resource names.
> Now the library is provided as a stable interface (tempest.lib) and the old
> unstable interface (tempest.common) will be removed once most
> projects have switched to the new one.
> We can see the remaining projects at
> https://review.openstack.org/#/q/status:open+branch:master+topic:tempest-data_utils

Are you going to backport these?

>
> I hope these patches can also be merged before the
> patch (https://review.openstack.org/#/c/72/) which removes the old
> one, to keep all the gates stable.
>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-14 Thread Davanum Srinivas
On Tue, Mar 14, 2017 at 9:27 PM, Clint Byrum  wrote:
> Excerpts from Doug Hellmann's message of 2017-03-14 20:05:54 -0400:
>> Excerpts from Doug Hellmann's message of 2017-03-14 19:20:08 -0400:
>> > Excerpts from Clint Byrum's message of 2017-03-13 13:49:22 -0700:
>> > > Excerpts from Doug Hellmann's message of 2017-03-13 15:12:42 -0400:
>> > > > Excerpts from Farr, Kaitlin M.'s message of 2017-03-13 18:55:18 +:
>> > > > > Proposed library name: Rename Castellan to oslo.keymanager
>> > > > >
>> > > > > Proposed library mission/motivation: Castellan's goal is to provide a
>> > > > > generic key manager interface that projects can use for their key
>> > > > > manager needs, e.g., storing certificates or generating keys for
>> > > > > encrypting data.  The interface passes the commands and Keystone
>> > > > > credentials on to the configured back end. Castellan is not a service
>> > > > > and does not maintain state. The library can grow to have multiple
>> > > > > back ends, as long as the back end can authenticate Keystone
>> > > > > credentials.  The only two back end options now in Castellan are
>> > > > > Barbican and a limited mock key manager useful only for unit tests.
>> > > > > If someone wrote a Keystone auth plugin for Vault, we could also 
>> > > > > have a
>> > > > > Vault back end for Castellan.
>> > > > >
>> > > > > The benefit of using Castellan versus using Barbican directly
>> > > > > is Castellan allows the option of swapping out for other key 
>> > > > > managers,
>> > > > > mainly for testing.  If projects want their own custom back end for
>> > > > > Castellan, they can write a back end that implements the Castellan
>> > > > > interface but lives in their own code base, i.e., ConfKeyManager in
>> > > > > Nova and Cinder. Additionally, Castellan already has oslo.config
>> > > > > options defined which are helpful for configuring the project to talk
>> > > > > to Barbican.
>> > > > >
>> > > > > When the Barbican team first created the Castellan library, we had
>> > > > > reached out to oslo to see if we could name it oslo.keymanager, but 
>> > > > > the
>> > > > > idea was not accepted because the library didn't have enough 
>> > > > > traction.
>> > > > > Now, Castellan is used in many projects, and we thought we would
>> > > > > suggest renaming again.  At the PTG, the Barbican team met with the 
>> > > > > AWG
>> > > > > to discuss how we could get Barbican integrated with more projects, 
>> > > > > and
>> > > > > the rename was also suggested at that meeting.  Other projects are
>> > > > > interested in creating encryption features, and a rename will help
>> > > > > clarify the difference between Barbican and Castellan.
>> > > >
>> > > > Can you expand on why you think that is so? I'm not disagreeing with 
>> > > > the
>> > > > statement, but it's not obviously true to me, either. I vaguely 
>> > > > remember
>> > > > having it explained at the PTG, but I don't remember the details.
>> > > >
>> > >
>> > > To me, Oslo is a bunch of libraries that encompass "the way OpenStack
>> > > does ". When  is key management, projects are, AFAICT, 
>> > > universally
>> > > using Castellan at the moment. So I think it fits in Oslo conceptually.
>> > >
>> > > As far as what benefit there is to renaming it, the biggest one is
>> > > divesting Castellan of the controversy around Barbican. There's no
>> > > disagreement that explicitly handling key management is necessary. There
>> > > is, however, still hesitance to fully adopt Barbican in that role. In
>> > > fact I heard about some alternatives to Barbican, namely "Vault"[1] and
>> > > "Tang"[2], that may be useful for subsets of the community, or could
>> > > even grow into de facto standards for key management.
>> > >
>> > > So, given that there may be other backends, and the developers would
>> > > like to embrace that, I see value in renaming. It would help, I think,
>> > > Castellan's developers to be able to focus on key management and not
>> > > have to explain to every potential user "no we're not Barbican's cousin,
>> > > we're just an abstraction..".
>> > >
>> > > > > Existing similar libraries (if any) and why they aren't being used: 
>> > > > > N/A
>> > > > >
>> > > > > Reviewer activity: Barbican team
>> > > >
>> > > > If the review team is going to be largely the same, I'm not sure I
>> > > > see the benefit of changing the ownership of the library. We certainly
>> > > > have other examples of Oslo libraries being managed mainly by
>> > > > sub-teams made up of folks who primarily focus on other projects.
>> > > > oslo.policy and oslo.versionedobjects come to mind, but in both of
>> > > > those cases the code was incubated in Oslo or brought into Oslo
>> > > > before the tools for managing shared libraries were widely used
>> > > > outside of the Oslo team. We now have quite a few examples of project
>> > > > teams managing shared libraries (other than their clients).
>> > > >
>> > >
>> > > While this 

Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-14 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2017-03-14 20:05:54 -0400:
> Excerpts from Doug Hellmann's message of 2017-03-14 19:20:08 -0400:
> > Excerpts from Clint Byrum's message of 2017-03-13 13:49:22 -0700:
> > > Excerpts from Doug Hellmann's message of 2017-03-13 15:12:42 -0400:
> > > > Excerpts from Farr, Kaitlin M.'s message of 2017-03-13 18:55:18 +:
> > > > > Proposed library name: Rename Castellan to oslo.keymanager
> > > > > 
> > > > > Proposed library mission/motivation: Castellan's goal is to provide a
> > > > > generic key manager interface that projects can use for their key
> > > > > manager needs, e.g., storing certificates or generating keys for
> > > > > encrypting data.  The interface passes the commands and Keystone
> > > > > credentials on to the configured back end. Castellan is not a service
> > > > > and does not maintain state. The library can grow to have multiple
> > > > > back ends, as long as the back end can authenticate Keystone
> > > > > credentials.  The only two back end options now in Castellan are
> > > > > Barbican and a limited mock key manager useful only for unit tests.
> > > > > If someone wrote a Keystone auth plugin for Vault, we could also have 
> > > > > a
> > > > > Vault back end for Castellan.
> > > > > 
> > > > > The benefit of using Castellan versus using Barbican directly
> > > > > is Castellan allows the option of swapping out for other key managers,
> > > > > mainly for testing.  If projects want their own custom back end for
> > > > > Castellan, they can write a back end that implements the Castellan
> > > > > interface but lives in their own code base, i.e., ConfKeyManager in
> > > > > Nova and Cinder. Additionally, Castellan already has oslo.config
> > > > > options defined which are helpful for configuring the project to talk
> > > > > to Barbican.
> > > > > 
> > > > > When the Barbican team first created the Castellan library, we had
> > > > > reached out to oslo to see if we could name it oslo.keymanager, but 
> > > > > the
> > > > > idea was not accepted because the library didn't have enough traction.
> > > > > Now, Castellan is used in many projects, and we thought we would
> > > > > suggest renaming again.  At the PTG, the Barbican team met with the 
> > > > > AWG
> > > > > to discuss how we could get Barbican integrated with more projects, 
> > > > > and
> > > > > the rename was also suggested at that meeting.  Other projects are
> > > > > interested in creating encryption features, and a rename will help
> > > > > clarify the difference between Barbican and Castellan.
> > > > 
> > > > Can you expand on why you think that is so? I'm not disagreeing with the
Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-14 Thread Adrian Turjak
See, subdomains I can kind of see working, but the problem I have with
all this in general is that it is kind of silly to try and stop access
down the tree. If you have a role that lets you do 'admin'-like things
at a high point in the tree, you inherently always have access to the
whole tree below you. For standard non-resellers (the common case
really), if you have access at a given project, you probably have access
to everything below you, so 'secret_project_d' is one you have access
to. Not to mention as an 'admin' or some such, you need to know about
'secret_project_d' so that you can confirm the quota it has isn't
stupid, or possibly because you will be paying an invoice for it. Or
because as an admin you need to know that the other people at your
company aren't making crazy project structures below what they should.

Then even for the reseller problem, you need to know about
'secret_project_d' so you can make an invoice for it, because as the
reseller you have to have enough access to bill. Which means pretty much
full access to ceilometer/gnocchi and all the usage data, which will
give you far far more info than the project data in Keystone. In fact,
it kind of feels like we're trying to solve a problem that may not even
be there, or one that is too hard to avoid.

I've always viewed it as, you give someone access to a subtree because
you don't want them to access anything above them, or adjacent subtrees.
No upwards or sideways permission, and that's the power of HMT.
Downward permission is how Keystone works, so you really can't avoid
it: anyone who has the power to add their roles downward, or set them
to inherit, will always be able to access and find out about
'secret_project_d'. Either embrace that, or completely rework Keystone.

Really if you don't want someone to access or know about
'secret_project_d' you make sure 'secret_project_d' is in a totally
unrelated domain from the people you are trying to hide it from.

On 15/03/17 03:27, Rodrigo Duarte wrote:
> On Tue, Mar 14, 2017 at 10:36 AM, Lance Bragstad wrote:
>
> Rodrigo,
>
> Isn't what you just described the reseller use case [0]? Was that
> work ever fully finished? I thought I remember having discussions
> in Tokyo about it.
>
>
> You are right, one of the goals of reseller is to have an even
> stronger separation in the hierarchy by having subdomains, but this is
> not implemented yet. However, I was referring only to the project
> hierarchy and having or not inherited role assignments to grant access
> to the subtree. 
>  
>
>
>
> [0] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html
> 
> 
>
> On Tue, Mar 14, 2017 at 7:38 AM, Rodrigo Duarte wrote:
>
> Hi Adrian,
>
> In project hierarchies, it is not that simple to have a "tree
> admin". Imagine you have something like the following:
>
> A -> B -> C
>
> You are an admin of project C and want to create a project
> called "secret_D" under C:
>
> A -> B -> C -> secret_D
>
> This is an example of a hierarchy where the admin of project A
> is not supposed to "see" the whole tree. Of course we could
> implement this in a different way, like using a flag "secret"
> in project "secret_D", but the implementation we chose was the
> one that made more sense for the way role assignments are
> organized. For example, we can assign to project A admin an
> inherited role assignment, which will give access to the whole
> subtree and make it impossible to create a "secret_D" project
> like we defined above - it is basically a choice between the
> possibility to have "hidden" projects or not in the subtree.
>
> However, we can always improve! Please submit a spec where we
> will be able to discuss in further detail the options we have
> to improve the current UX of the HMT feature :)
>
> On Tue, Mar 14, 2017 at 12:24 AM, Adrian Turjak wrote:
>
> Hello Keystone Devs,
>
> I've been playing with subtrees in Keystone for the last
> while, and one
> thing that hit me recently is that as admin, I still can't
> actually do
> subtree_as_list unless I have a role in all projects in
> the subtree.
> This is kind of silly.
>
>
> I can understand why this limitation was implemented, but
> it's also a
> frustrating constraint because as an admin, I have the
> power to add
> myself to all these projects anyway, why then can't I just
> list 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Fox, Kevin M
+1 for having the option. I think in general it is a great idea.

I think there are some devils in the implementation. How do you prevent a 
service from getting way more secrets than it strictly needs? Maybe this is the 
start of an effort, though, to split out all secrets from config, which would 
be an awesome thing to do. Having secrets commingled with config has always 
rubbed me the wrong way.

The other issue I can think of is overridability. The current separate file per 
instantiated service has some flexibility that a simple implementation of just 
looking in etcd for the keys may not cover. Some "look here first, then look 
for overrides over there" scheme would work, though.
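That layered-override lookup could be sketched like this (a toy
illustration only, not part of any proposal; the key names and the
per-instance/service-wide split are made up for the example):

```python
from collections import ChainMap

# Toy illustration of "look here first, then look for overrides":
# per-instance values shadow service-wide values, which shadow defaults.
defaults = {"debug": "false", "workers": "4"}
service_wide = {"workers": "8"}    # e.g. fetched from a /config/nova/ prefix
per_instance = {"debug": "true"}   # e.g. fetched from /config/nova/compute-0/

# ChainMap consults the maps left to right, so the most specific wins.
config = ChainMap(per_instance, service_wide, defaults)

print(config["debug"])    # -> true   (per-instance override)
print(config["workers"])  # -> 8      (service-wide override)
```

The same "most specific wins" rule is what the per-service config files
give operators today, just expressed over key prefixes instead of files.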

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Tuesday, March 14, 2017 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] 
Storing configuration options in etcd(?)

So from kolla perspective that would be totally awesome. For
kolla-ansible it will make things easier, for kolla-k8s...well, it
would solve a lot (**a lot**) of issues we're dealing with.

Pretty pretty please? <:)

On 14 March 2017 at 10:48, Emilien Macchi  wrote:
> On Tue, Mar 14, 2017 at 1:04 PM, Davanum Srinivas  wrote:
>> Team,
>>
>> So one more thing popped up again on IRC:
>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>
>> What do you think? interested in this work?
>
> Wow, it seems like our minds are crossing.
> We had this discussion at the PTG and I've seen this topic quite often
> mentioned during different sessions.
>
> It's also somehow mentioned here:
> https://etherpad.openstack.org/p/deployment-pike
> See "Allow services to make use of KV stores instead of just INI files
> for config".
>
> What Thomas is doing in the PoC is actually what we thought could be
> the next step forward for configuration management in OpenStack, in a
> way that could be shared across all tools.
> Partially related, Ben Nemec is working on a spec that would extract
> all OpenStack parameters and generate YAML files:
> https://review.openstack.org/#/c/440835/
> And we thought that we could re-use this file to inject the
> configuration into etcd.
>
> I see a connection here where :
>
> 1. With Ben's work, we would generate a list of parameters available
> in OpenStack and expose it to the User Interface of the deployment
> tool.
> 2. The deployment tool would grab inputs from users and write the
> values into etcd. The installers would also configure some parameters
> that users don't want to provide (with all the logic around).
> 3. OpenStack services would read the config directly from etcd, thanks
> to Thomas's work.
>
> That way, 1. and 3. belong to oslo.config and 2. is done by OpenStack
> deployment tools.
> Does it make sense?
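Steps 2 and 3 of that flow could be sketched as follows (a toy sketch;
the `/config/<service>/<group>/<option>` key layout is a hypothetical
example, not a settled design, and a dict stands in for the etcd client):

```python
# Hypothetical mapping of an oslo.config option to an etcd key path.
def to_etcd_key(service, group, option):
    return "/config/{}/{}/{}".format(service, group, option)

# Step 2: the deployment tool writes user-supplied values under the
# service's prefix (a plain dict stands in for etcd put/get here).
store = {}
store[to_etcd_key("nova", "DEFAULT", "debug")] = "true"

# Step 3: the service reads its options back at startup, falling back
# to the option's default when no value was written.
def read_option(store, service, group, option, default=None):
    return store.get(to_etcd_key(service, group, option), default)

print(read_option(store, "nova", "DEFAULT", "debug"))  # -> true
```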
>
> I see a lot of collaboration and consolidation here, in how we do
> configuration management in OpenStack. I hope we can move forward and
> find some consensus here; and why not propose a first architecture
> for Pike.
>
> Thanks,
>
>> Thanks,
>> Dims
>>
>> PS: Between this thread and the other one about Tooz/DLM and
>> os-lively, we can probably make a good case to add etcd as a base
>> always-on service.
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-14 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-03-14 19:20:08 -0400:
> Excerpts from Clint Byrum's message of 2017-03-13 13:49:22 -0700:
> > Excerpts from Doug Hellmann's message of 2017-03-13 15:12:42 -0400:
> > > Excerpts from Farr, Kaitlin M.'s message of 2017-03-13 18:55:18 +:
> > > > Proposed library name: Rename Castellan to oslo.keymanager
> > > > 
> > > > Proposed library mission/motivation: Castellan's goal is to provide a
> > > > generic key manager interface that projects can use for their key
> > > > manager needs, e.g., storing certificates or generating keys for
> > > > encrypting data.  The interface passes the commands and Keystone
> > > > credentials on to the configured back end. Castellan is not a service
> > > > and does not maintain state. The library can grow to have multiple
> > > > back ends, as long as the back end can authenticate Keystone
> > > > credentials.  The only two back end options now in Castellan are
> > > > Barbican and a limited mock key manager useful only for unit tests.
> > > > If someone wrote a Keystone auth plugin for Vault, we could also have a
> > > > Vault back end for Castellan.
> > > > 
> > > > The benefit of using Castellan versus using Barbican directly
> > > > is Castellan allows the option of swapping out for other key managers,
> > > > mainly for testing.  If projects want their own custom back end for
> > > > Castellan, they can write a back end that implements the Castellan
> > > > interface but lives in their own code base, i.e., ConfKeyManager in
> > > > Nova and Cinder. Additionally, Castellan already has oslo.config
> > > > options defined which are helpful for configuring the project to talk
> > > > to Barbican.
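The back-end-swapping idea described above reads roughly like this (a
toy sketch only; these class and method names are illustrative, not
Castellan's actual interface or signatures):

```python
# Toy sketch of the swappable key-manager back end idea. The names here
# are hypothetical, not Castellan's real classes.
import abc
import uuid

class KeyManager(abc.ABC):
    """Generic interface callers code against."""
    @abc.abstractmethod
    def store(self, context, secret): ...
    @abc.abstractmethod
    def get(self, context, secret_id): ...

class MockKeyManager(KeyManager):
    """In-memory stand-in, useful only for unit tests."""
    def __init__(self):
        self._secrets = {}

    def store(self, context, secret):
        secret_id = str(uuid.uuid4())
        self._secrets[secret_id] = secret
        return secret_id

    def get(self, context, secret_id):
        return self._secrets[secret_id]

# A project swaps back ends by configuration, not by changing callers:
km = MockKeyManager()
sid = km.store(context=None, secret=b"s3cret")
print(km.get(None, sid))  # -> b's3cret'
```

A Barbican-backed implementation of the same interface would forward the
calls (and the Keystone credentials in `context`) to the service instead.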
> > > > 
> > > > When the Barbican team first created the Castellan library, we had
> > > > reached out to oslo to see if we could name it oslo.keymanager, but the
> > > > idea was not accepted because the library didn't have enough traction.
> > > > Now, Castellan is used in many projects, and we thought we would
> > > > suggest renaming again.  At the PTG, the Barbican team met with the AWG
> > > > to discuss how we could get Barbican integrated with more projects, and
> > > > the rename was also suggested at that meeting.  Other projects are
> > > > interested in creating encryption features, and a rename will help
> > > > clarify the difference between Barbican and Castellan.
> > > 
> > > Can you expand on why you think that is so? I'm not disagreeing with the
> > > statement, but it's not obviously true to me, either. I vaguely remember
> > > having it explained at the PTG, but I don't remember the details.
> > > 
> > 
> > To me, Oslo is a bunch of libraries that encompass "the way OpenStack
> > does X". When X is key management, projects are, AFAICT, universally
> > using Castellan at the moment. So I think it fits in Oslo conceptually.
> > 
> > As far as what benefit there is to renaming it, the biggest one is
> > divesting Castellan of the controversy around Barbican. There's no
> > disagreement that explicitly handling key management is necessary. There
> > is, however, still hesitance to fully adopt Barbican in that role. In
> > fact I heard about some alternatives to Barbican, namely "Vault"[1] and
> > "Tang"[2], that may be useful for subsets of the community, or could
> > even grow into de facto standards for key management.
> > 
> > So, given that there may be other backends, and the developers would
> > like to embrace that, I see value in renaming. It would help, I think,
> > Castellan's developers to be able to focus on key management and not
> > have to explain to every potential user "no we're not Barbican's cousin,
> > we're just an abstraction..".
> > 
> > > > Existing similar libraries (if any) and why they aren't being used: N/A
> > > > 
> > > > Reviewer activity: Barbican team
> > > 
> > > If the review team is going to be largely the same, I'm not sure I
> > > see the benefit of changing the ownership of the library. We certainly
> > > have other examples of Oslo libraries being managed mainly by
> > > sub-teams made up of folks who primarily focus on other projects.
> > > oslo.policy and oslo.versionedobjects come to mind, but in both of
> > > those cases the code was incubated in Oslo or brought into Oslo
> > > before the tools for managing shared libraries were widely used
> > > outside of the Oslo team. We now have quite a few examples of project
> > > teams managing shared libraries (other than their clients).
> > > 
> > 
> > While this makes sense, I'm not so sure any of those are actually
> > specifically in the same category as Castellan. Perhaps you can expand
> > on which libraries have done this, and how they're similar to Castellan?
> 
> oslo.versionedobjects was extracted from nova, and came with a small
> set of contributors who have made up a subteam of Oslo. As far as
> I know, they rarely contribute outside of that library (I haven't
> checked lately, so apologies if my info is out of 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Emilien Macchi
On Tue, Mar 14, 2017 at 6:17 PM, Clint Byrum  wrote:
> Excerpts from Davanum Srinivas's message of 2017-03-14 13:04:37 -0400:
>> Team,
>>
>> So one more thing popped up again on IRC:
>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>
>> What do you think? interested in this work?
>>
>> Thanks,
>> Dims
>>
>> PS: Between this thread and the other one about Tooz/DLM and
>> os-lively, we can probably make a good case to add etcd as a base
>> always-on service.
>>
>
> This is a cool idea, and I think we should do it.
>
> A few loose ends I'd like to see in a spec:
>
> * Security Security Security. (Hoping if I say it 3 times a real
>   security person will appear and ask the hard questions).

I don't consider myself as a Security expert but in little knowledge:

- etcd v2 API allows authentication:
https://coreos.com/etcd/docs/latest/v2/authentication.html
- etcd supports SSL/TLS as well as authentication through client
certificates, both for clients to server as well as peer (server to
server / cluster) communication

Which sounds pretty secure at this stage, compared to what we have
now: config files with passwords and secrets everywhere.
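On the client side, that could look something like the sketch below. It
assumes the python-etcd3 library (which accepts CA and client
certificate paths when constructing a client); the certificate paths
are placeholders, and only the argument assembly is shown:

```python
# Minimal sketch of assembling TLS connection arguments for an etcd3
# client (assumes python-etcd3; the file paths are placeholders).
def make_etcd_client_kwargs(host, port=2379,
                            ca_cert=None, cert_cert=None, cert_key=None):
    """Build etcd3.client() kwargs, enabling mutual TLS when
    certificate paths are provided."""
    kwargs = {"host": host, "port": port}
    if ca_cert:
        kwargs["ca_cert"] = ca_cert      # verify the server's certificate
    if cert_cert and cert_key:
        kwargs["cert_cert"] = cert_cert  # client certificate (mutual TLS)
        kwargs["cert_key"] = cert_key
    return kwargs

kwargs = make_etcd_client_kwargs("etcd.example.com",
                                 ca_cert="/etc/etcd/ca.pem",
                                 cert_cert="/etc/etcd/client.pem",
                                 cert_key="/etc/etcd/client-key.pem")
# etcd3.client(**kwargs) would then talk TLS with client-cert auth.
print(sorted(kwargs))  # -> ['ca_cert', 'cert_cert', 'cert_key', 'host', 'port']
```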

> * Explain clearly how operators would inspect, edit, and diff their
>   configs.

That's a good question, and we clearly need a tool to query etcd and
grab all parameters + values for a particular project.
Another aspect is that, thanks to
https://review.openstack.org/#/c/440835/ - we would have a single
interface that exposes all parameters in a human-readable format
and lets operators manage these parameters (through a UI or just by
reading the file).

-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Joshua Harlow
So just fyi, this has been talked about before (but prob in context of 
zookeeper or various other pluggable config backends).


Some links:

- https://review.openstack.org/#/c/243114/
- https://review.openstack.org/#/c/243182/
- https://blueprints.launchpad.net/oslo.config/+spec/oslo-config-db
- https://review.openstack.org/#/c/130047/

I think the general questions that seem to reappear are around the 
following:


* How does reloading work (does it)?

* What's the operational experience (editing an ini file is about the 
lowest bar we can possibly get to, for better and/or worse).


* Does this need to be a new oslo.config backend, or is it better suited 
by something like the following (an external program's loop)::

   etcd_client = make_etcd_client(args)
   while True:
       has_changed = etcd_client.get_new_config("/blahblah")  # or use a watch
       if has_changed:
           fetch_and_write_ini_file(etcd_client)
           trigger_reload()
       time.sleep(args.wait)

* Is an external loop better (maybe, maybe not?)

Pretty sure there are some etherpad discussions around this also somewhere.

Clint Byrum wrote:

Excerpts from Davanum Srinivas's message of 2017-03-14 13:04:37 -0400:

Team,

So one more thing popped up again on IRC:
https://etherpad.openstack.org/p/oslo.config_etcd_backend

What do you think? interested in this work?

Thanks,
Dims

PS: Between this thread and the other one about Tooz/DLM and
os-lively, we can probably make a good case to add etcd as a base
always-on service.



This is a cool idea, and I think we should do it.

A few loose ends I'd like to see in a spec:

* Security Security Security. (Hoping if I say it 3 times a real
   security person will appear and ask the hard questions).
* Explain clearly how operators would inspect, edit, and diff their
   configs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Freezer] Forum Brainstorming - Boston 2017 - Freezer etherpad

2017-03-14 Thread Saad Zaher
Hello Guys,

I've created Freezer etherpad
https://etherpad.openstack.org/p/BOS-Freezer-brainstorming for
brainstorming for Boston 2017 summit. Please share ideas that you would
like to discuss around Freezer (Backup/Restore/DR).


Your input is highly appreciated!

Best Regards,
Saad!
irc: szaher
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] DST transition reminder

2017-03-14 Thread Eric K
Hi all,

Friendly reminder that for US folks the IRC meeting this week is one hour
"later" (5PM PST, 8PM EST) due to DST.

Proposed topics for the next congress irc meeting are tracked in this
etherpad: https://etherpad.openstack.org/p/congress-meeting-topics
Feel free to add additional topics and/or comment on existing ones. Thanks!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Clint Byrum
Excerpts from Davanum Srinivas's message of 2017-03-14 13:04:37 -0400:
> Team,
> 
> So one more thing popped up again on IRC:
> https://etherpad.openstack.org/p/oslo.config_etcd_backend
> 
> What do you think? interested in this work?
> 
> Thanks,
> Dims
> 
> PS: Between this thread and the other one about Tooz/DLM and
> os-lively, we can probably make a good case to add etcd as a base
> always-on service.
> 

This is a cool idea, and I think we should do it.

A few loose ends I'd like to see in a spec:

* Security Security Security. (Hoping if I say it 3 times a real
  security person will appear and ask the hard questions).
* Explain clearly how operators would inspect, edit, and diff their
  configs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Julien Danjou
On Tue, Mar 14 2017, Clint Byrum wrote:

> I understand abstractions can seem pretty cumbersome when you're moving
> fast.

Considering the problem is at least 5 years old in Nova and tooz itself
is at least 3 years old, let's say that "moving fast" has a… funny
taste.

> It's not something I want to see stand in your way. But it would be
> nice to see where there's deficiency in tooz so we can be there for the
> next project that needs it and maybe eventually factor out direct etcd3
> usage so users who have maybe chosen ZK as their tooz backend can also
> benefit from your work.

+1

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swg][tc] Moving Stewardship Working Group meeting

2017-03-14 Thread Colette Alexander
Hi there!

Currently the Stewardship Working Group meets every other Thursday at
1400 UTC.

We've had a couple of pings from folks who are interested in joining us for
meetings that live in US Pacific Time, and that Thursday time isn't
terribly conducive to them being able to make meetings. So - the question
is when to move it to, if we can.

A quick glance at the rest of the Thursday schedule shows the 1500 and 1600
time slots available (in #openstack-meeting I believe). I'm hesitant to go
beyond that in the daytime because we also need to accommodate attendees in
Western Europe.

Thoughts on whether either of those works from SWG members and anyone who
might like to drop in? We can also look into having meetings once a week,
and potentially alternating times between the two to help accommodate the
spread of people.

Let me know what everyone thinks - and for this week I'll see anyone who
can make it at 1400 UTC on Thursday.

Thank you!

-colette
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] March 17 Price Increase - OpenStack Summit Boston

2017-03-14 Thread Kendall Waters
Hi everyone, 

It’s your last chance to save on your OpenStack Summit Boston tickets before 
prices increase this Friday, March 17 at 11:59pm PDT (March 18 at 7:59 UTC).

REGISTER HERE: https://openstacksummit2017boston.eventbrite.com 

Contact sum...@openstack.org  if you have any 
questions. 

IMPORTANT LINKS
Check out the Summit schedule
Book a discounted hotel room
Interested in sponsoring the Summit? Find more information here.
Request a visa invitation letter

Cheers,
Kendall
 
Kendall Waters
OpenStack Marketing
kend...@openstack.org


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Forum Brainstorming - Boston 2017

2017-03-14 Thread Emilien Macchi
On Wed, Mar 1, 2017 at 8:56 AM, Emilien Macchi  wrote:
> Hi everyone,
>
> We need to start brainstorming topics we'd like to discuss with the
> rest of the community during our first "Forum" at the OpenStack Summit
> in Boston. The event should look a lot like the cross-project
> workshops or the Ops day at the old Design Summit : open, timeboxed
> discussions in a fishbowl setting facilitated by a moderator (no
> speaker or formal presentations). We'll gather feedback from users and
> operators on the Ocata release, and start gathering early requirements
> and priorities for the Queens release cycle. We aim to ensure the
> broadest coverage of topics that will allow for all parts of our
> community (upstream contributors, ops working groups, application
> developers...) getting together to discuss (and get alignment on) key
> areas within our community/projects.
>
> Examples of the types of discussions and some sessions that might fit
> within each one:
>
> # Strategic, whole-of-community discussions:
> To think about the big picture, including beyond just one release
> cycle and new technologies. An example could be "Making OpenStack One
> Platform for containers/VMs/Bare Metal", where the entire community
> congregates to share opinions on how to make OpenStack achieve its
> integration engine goal.
>
> # Cross-project sessions:
> In a similar vein to what has happened at past design summits, but
> with increased emphasis on issues that are relevant to all areas of
> the community. An example could be "Rolling Upgrades at Scale", where
> the Large Deployments Team collaborates with Nova, Cinder and Keystone
> to tackle issues that come up with rolling upgrades when there’s a
> large number of machines.
>
> # Project-specific sessions:
> Where developers can ask users specific questions about their
> experience, users can provide feedback from the last release and
> cross-community collaboration on the priorities, and ‘blue sky’ ideas
> for the next release. An example could be "Neutron Pain Points",
> co-organized by neutron developers and users, where Neutron developers
> bring some specific questions they want answered, Neutron users bring
> feedback from the latest release and ideas about the future.
>
>
> There are two stages to the brainstorming:
>
> 1. Starting today, set up an etherpad with your group/team, or use one
> on the list and start discussing ideas you'd like to talk about at the
> Forum. Then, through +1s on etherpads and mailing list discussion,
> work out which ones are the most needed.
> 2. Then, in a couple of weeks, we will open up a more formal web-based
> tool for submission of abstracts that came out of the brainstorming on
> top. A committee with TC, UC and Foundation staff members will work on
> the final selection and scheduling.
>
> We expect working groups may make their own etherpads, however the
> Technical Committee offers one for cross-project and strategic topics:
> https://etherpad.openstack.org/p/BOS-TC-brainstorming
>
> Feel free to use that, or make one for your group and add it to the list at:
> https://wiki.openstack.org/wiki/Forum/Boston2017
>
> Thanks,
> Emilien and Thierry, on behalf of the Technical Committee

A gentle reminder to let projects know about the brainstorming that is
happening.

Looking at the Wiki page, I still see some projects that didn't
publish their ideas yet (e.g. Neutron, Cinder, Telemetry, etc).
It would be nice if PTLs could engage some work here.

Please let me know if any help is needed; we're likely going to
postpone the submission deadline to somewhere in April so people
have more time to organize the topics.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2017-03-14 15:30:32 -0400:
> On 03/14/2017 02:50 PM, Julien Danjou wrote:
> > On Tue, Mar 14 2017, Jay Pipes wrote:
> >
> >> Not tooz, because I'm not interested in a DLM nor leader election library
> >> (that's what the underlying etcd3 cluster handles for me), only a fast 
> >> service
> >> liveness/healthcheck system, but it shows usage of etcd3 and Google 
> >> Protocol
> >> Buffers implementing a simple API for liveness checking and host 
> >> maintenance
> >> reporting.
> >
> > Cool cool. So that's the same feature that we implemented in tooz 3
> > years ago. It's called "group membership". You create a group, make
> > nodes join it, and you know who's dead/alive and get notified when their
> > status change.
> 
> The point of os-lively is not to provide a thin API over ZooKeeper's 
> group membership interface. The point of os-lively is to remove the need 
> to have a database (RDBMS) record of a service in Nova.
> 

That's also the point of tooz's group membership API:

https://docs.openstack.org/developer/tooz/compatibility.html#grouping

> tooz simply abstracts a group membership API across a number of drivers. 
> I don't need that. I need a way to maintain a service record (with 
> maintenance period information, region, and an evolvable data record 
> format) and query those service records in an RDBMS-like manner but 
> without the RDBMS being involved.
> 
> >> servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
> >> service liveness checking, which should dramatically reduce the amount of 
> >> both
> >> DB traffic as well as conductor/MQ service update traffic.
> >
> > Interesting. Joshua and Vilob tried to push usage of tooz group
> > membership a couple of years ago, but it got nowhere. Well, no, they got
> > 2 specs written IIRC:
> >
> >   
> > https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html
> >
> > But then it died for whatever reasons on Nova side.
> 
> It died because it didn't actually solve a problem.
> 
> The problem is that even if we incorporate tooz, we would still need to 
> have a service table in the RDBMS and continue to query it over and over 
> again in the scheduler and API nodes.
> 

Most likely it was designed with hesitance about making a tooz backend
the source of truth. But it's certainly not a problem for most tooz
backends to be a source of truth. Certainly not for etcd or ZK, which
are both designed to be that.

> I want all service information in the same place, and I don't want to 
> use an RDBMS for that information. etcd3 provides an ideal place to 
> store service record information. Google Protocol Buffers is an ideal 
> data format for evolvable versioned objects. os-lively presents an API 
> that solves the problem I want to solve in Nova. tooz didn't.
> 

Was there something inherent in tooz's design that prevented you from
adding it to tooz's group API? Said API already includes liveness (watch
the group that corresponds to the service you want).

The only thing missing is being able to get groups and group members
by secondary indexes. etcd3's built-in indexes by field are pretty nice
for that, but ZK could likely do it too by maintaining the index in
the driver.
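To make the group-membership idea concrete, here is a toy in-memory
model of the shape of that API (purely illustrative; the real API is
tooz's coordinator object, and joining/leaving would be driven by
service heartbeats rather than explicit calls):

```python
# Toy in-memory model of group membership -- illustrative only, not
# tooz's real coordinator API.
class Groups:
    def __init__(self):
        self._groups = {}

    def create_group(self, name):
        self._groups.setdefault(name, {})

    def join_group(self, name, member, capabilities=None):
        # capabilities play the role of the per-member payload
        self._groups[name][member] = capabilities or {}

    def leave_group(self, name, member):
        self._groups[name].pop(member, None)

    def get_members(self, name):
        return sorted(self._groups[name])

g = Groups()
g.create_group("nova-compute")
g.join_group("nova-compute", "compute-0", {"zone": "az1"})
g.join_group("nova-compute", "compute-1", {"zone": "az2"})
g.leave_group("nova-compute", "compute-1")   # e.g. the service died
print(g.get_members("nova-compute"))  # -> ['compute-0']
```

The secondary-index question above is whether a caller can also ask
"which members have zone az1?" without scanning every member's payload.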

I understand abstractions can seem pretty cumbersome when you're moving
fast. It's not something I want to see stand in your way. But it would
be nice to see where there's deficiency in tooz so we can be there for
the next project that needs it and maybe eventually factor out direct
etcd3 usage so users who have maybe chosen ZK as their tooz backend can
also benefit from your work.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Jay Pipes

On 03/14/2017 04:22 PM, Joshua Harlow wrote:

Jay Pipes wrote:

On 03/14/2017 02:50 PM, Julien Danjou wrote:

On Tue, Mar 14 2017, Jay Pipes wrote:


Not tooz, because I'm not interested in a DLM nor leader election
library
(that's what the underlying etcd3 cluster handles for me), only a
fast service
liveness/healthcheck system, but it shows usage of etcd3 and Google
Protocol
Buffers implementing a simple API for liveness checking and host
maintenance
reporting.


Cool cool. So that's the same feature that we implemented in tooz 3
years ago. It's called "group membership". You create a group, make
nodes join it, and you know who's dead/alive and get notified when their
status change.


The point of os-lively is not to provide a thin API over ZooKeeper's
group membership interface. The point of os-lively is to remove the need
to have a database (RDBMS) record of a service in Nova.

tooz simply abstracts a group membership API across a number of drivers.
I don't need that. I need a way to maintain a service record (with
maintenance period information, region, and an evolvable data record
format) and query those service records in an RDBMS-like manner but
without the RDBMS being involved.
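For illustration, roughly what "RDBMS-like queries without the RDBMS" can look like, with plain dicts standing in for Protocol Buffers messages and an in-memory dict standing in for etcd3. This is a sketch of the idea only, not os-lively's actual API; all names are made up:

```python
# Sketch: one evolvable, versioned record per service in a KV store,
# queried with RDBMS-like filters but no RDBMS involved.
kv = {}  # key -> record; a stand-in for an etcd3 key range

def service_update(uuid, host, region, status, version=1):
    kv["service/%s" % uuid] = {
        "version": version,  # lets the record format evolve over time
        "uuid": uuid, "host": host, "region": region, "status": status,
    }

def service_get_by(**filters):
    # RDBMS-like filtering, done over the KV range scan
    return [r for k, r in sorted(kv.items())
            if k.startswith("service/")
            and all(r.get(f) == v for f, v in filters.items())]

service_update("u1", "compute1", "east", "UP")
service_update("u2", "compute2", "east", "DOWN")
up_in_east = service_get_by(region="east", status="UP")
```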


My plan is to push some proof-of-concept patches that replace Nova's
servicegroup API with os-lively and eliminate Nova's use of an RDBMS
for
service liveness checking, which should dramatically reduce the
amount of both
DB traffic as well as conductor/MQ service update traffic.


Interesting. Joshua and Vilob tried to push usage of tooz group
membership a couple of years ago, but it got nowhere. Well, no, they got
2 specs written IIRC:

https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html

But then it died for whatever reasons on Nova side.


It died because it didn't actually solve a problem.


Hmmm, idk about that, more likely other things involved, but point taken
(and not meant personally).


That should have read "it died because it didn't actually solve *the* 
problem". Meaning, the problem of having to store service and 
maintenance information in the RDBMS. Sorry, I didn't mean that tooz 
doesn't solve problems. That's not at all how I meant to come across!



The problem is that even if we incorporate tooz, we would still need to
have a service table in the RDBMS and continue to query it over and over
again in the scheduler and API nodes.

I want all service information in the same place, and I don't want to
use an RDBMS for that information. etcd3 provides an ideal place to
store service record information. Google Protocol Buffers is an ideal
data format for evolvable versioned objects. os-lively presents an API
that solves the problem I want to solve in Nova. tooz didn't.


Def looks like u are doing some custom service indexes and such in etcd,
so ya, the default in tooz may not fit that kind of specialized model
(though I can't say such a model would be unique to nova).

https://gist.github.com/harlowja/57394357e81703a595a15d6dd7c774eb was
something I threw together, tooz may not be a perfect match, but still
seems like it can evolve to store something like your indexes @
https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L524-L542




Best,
-jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][deployment] Deployment Working Group (DWG)

2017-03-14 Thread Emilien Macchi
The OpenStack community has been a welcoming place for deployment tools.
They bring great value to OpenStack because most of them are used to
deploy OpenStack in production, not just in development environments.

Over the last few years, deployment tool projects have been trying
to solve similar challenges. Recently we've seen some desire to
collaborate, work on common topics, and resolve issues seen by all
these tools.

Some examples of collaboration:

* OpenStack Ansible and Puppet OpenStack have been collaborating on
  Continuous Integration scenarios but also on Nova upgrades orchestration.
* TripleO and Kolla share the same tool for container builds.
* TripleO and Fuel share the same Puppet OpenStack modules.
* OpenStack and Kubernetes are interested in collaborating on configuration
  management.
* Most tools want to collect OpenStack parameters for configuration
  management in a common fashion.
* [more]

The big tent helped make these projects part of OpenStack, but until now
no official group existed to share common problems across tools.

During the Atlanta Project Team Gathering in 2017 [1], most of the
currently active deployment tool project team leaders met in a room and
decided to start actual collaboration on different topics.
This resolution is a first iteration of creating a new working group
for Deployment Tools.

The mission of the Deployment Working Group would be the following:

  To collaborate on best practices for deploying and configuring OpenStack
  in production environments.


Note: in some cases, a challenge will be solved by adopting a technology
or a tool. But sometimes that won't happen, because of a deployment
tool's background. This group would have to figure out how we can
increase this adoption and eventually converge on the same technologies.


For now, we'll use the wiki to document how we work together:
https://wiki.openstack.org/wiki/Deployment

The etherpad presented in [1] might be transformed into a Wiki page if
needed, but for now we expect people to keep updating it.

[1] https://etherpad.openstack.org/p/deployment-pike


Let's make OpenStack deployments better, together ;-)
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Joshua Harlow

Jay Pipes wrote:

On 03/14/2017 02:50 PM, Julien Danjou wrote:

On Tue, Mar 14 2017, Jay Pipes wrote:


Not tooz, because I'm not interested in a DLM nor leader election
library
(that's what the underlying etcd3 cluster handles for me), only a
fast service
liveness/healthcheck system, but it shows usage of etcd3 and Google
Protocol
Buffers implementing a simple API for liveness checking and host
maintenance
reporting.


Cool cool. So that's the same feature that we implemented in tooz 3
years ago. It's called "group membership". You create a group, make
nodes join it, and you know who's dead/alive and get notified when their
status change.


The point of os-lively is not to provide a thin API over ZooKeeper's
group membership interface. The point of os-lively is to remove the need
to have a database (RDBMS) record of a service in Nova.

tooz simply abstracts a group membership API across a number of drivers.
I don't need that. I need a way to maintain a service record (with
maintenance period information, region, and an evolvable data record
format) and query those service records in an RDBMS-like manner but
without the RDBMS being involved.


My plan is to push some proof-of-concept patches that replace Nova's
servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
service liveness checking, which should dramatically reduce the
amount of both
DB traffic as well as conductor/MQ service update traffic.


Interesting. Joshua and Vilob tried to push usage of tooz group
membership a couple of years ago, but it got nowhere. Well, no, they got
2 specs written IIRC:

https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html


But then it died for whatever reasons on Nova side.


It died because it didn't actually solve a problem.


Hmmm, idk about that, more likely other things involved, but point taken 
(and not meant personally).




The problem is that even if we incorporate tooz, we would still need to
have a service table in the RDBMS and continue to query it over and over
again in the scheduler and API nodes.

I want all service information in the same place, and I don't want to
use an RDBMS for that information. etcd3 provides an ideal place to
store service record information. Google Protocol Buffers is an ideal
data format for evolvable versioned objects. os-lively presents an API
that solves the problem I want to solve in Nova. tooz didn't.


Def looks like u are doing some custom service indexes and such in etcd, 
so ya, the default in tooz may not fit that kind of specialized model 
(though I can't say such a model would be unique to nova).


https://gist.github.com/harlowja/57394357e81703a595a15d6dd7c774eb was 
something I threw together, tooz may not be a perfect match, but still 
seems like it can evolve to store something like your indexes @ 
https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L524-L542 





Best,
-jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Michał Jastrzębski
So from kolla perspective that would be totally awesome. For
kolla-ansible it will make things easier, for kolla-k8s...well, it
would solve a lot (**a lot**) of issues we're dealing with.

Pretty pretty please? <:)

On 14 March 2017 at 10:48, Emilien Macchi  wrote:
> On Tue, Mar 14, 2017 at 1:04 PM, Davanum Srinivas  wrote:
>> Team,
>>
>> So one more thing popped up again on IRC:
>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>
>> What do you think? interested in this work?
>
> Wow, it seems like our minds are crossing.
> We had this discussion at the PTG and I've seen this topic quite often
> mentioned during different sessions.
>
> It's also somehow mentioned here:
> https://etherpad.openstack.org/p/deployment-pike
> See "Allow services to make use of KV stores instead of just INI files
> for config".
>
> What Thomas is doing in the PoC is actually what we thought could be
> the next step forward for configuration management in OpenStack, in a
> way that could be shared across all tools.
> Partially related, Ben Nemec is working on a spec that would extract
> all OpenStack parameters and generate YAML files:
> https://review.openstack.org/#/c/440835/
> And we thought that we could re-use this file to inject the
> configuration into etcd.
>
> I see a connection here where :
>
> 1. With Ben's work, we would generate a list of parameters available
> in OpenStack and expose it to the User Interface of the deployment
> tool.
> 2. The deployment tool would grab inputs from users and write the
> values into etcd. The installers would also configure some parameters
> that users don't want to provide (with all the logic around).
> 3. OpenStack services would read the config directly from etcd, thanks
> to Thomas's work.
>
> That way, 1. and 3. belong to oslo.config and 2. is done by OpenStack
> deployment tools.
> Does it make sense?
>
> I see a lot of collaboration and consolidation here, in how we do
> configuration management in OpenStack. I hope we can move forward and
> find some consensus here; and why not proposing a first architecture
> for Pike.
>
> Thanks,
>
>> Thanks,
>> Dims
>>
>> PS: Between this thread and the other one about Tooz/DLM and
>> os-lively, we can probably make a good case to add etcd as a base
>> always-on service.
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][heat] Heat memory usage in the TripleO gate: Ocata edition

2017-03-14 Thread Zane Bitter

Following up on the previous thread:

http://lists.openstack.org/pipermail/openstack-dev/2017-January/109748.html

Here is the latest data, which includes the Ocata release:

https://fedorapeople.org/~zaneb/tripleo-memory/20170314/heat_memused.png

As you can see, there has been one jump in memory usage. This was due to 
the TripleO patch https://review.openstack.org/#/c/425717/


Unlike previous increases in memory usage, I was able to warn of this 
one in advance, and it was deemed an acceptable trade-off. The 
reasons for the increase are unknown - the addition of more stuff to the 
endpoint map seemed like a good bet, but one attempt to mitigate that[1] 
had no effect and I'm increasingly unconvinced that this could account 
for the magnitude of the increase.


In any event, memory usage remains around the 1GiB level, none of the 
other complexity increases during Ocata have had any discernible effect, 
and Heat has had no memory usage regressions.


Stay tuned for the next exciting edition, in which I try to figure out 
how to do more than 3 colors on the plot.
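On the closing aside about plot colors: assuming matplotlib is the plotting tool here (an assumption; the original doesn't say), the default color cycle can simply be replaced with a longer one, e.g.:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from cycler import cycler  # ships with matplotlib

# Give the axes a cycle of more than 3 colors so additional series
# stop reusing the same ones.
colors = ["tab:blue", "tab:orange", "tab:green",
          "tab:red", "tab:purple", "tab:brown"]
plt.rc("axes", prop_cycle=cycler(color=colors))

fig, ax = plt.subplots()
for i in range(len(colors)):
    ax.plot([0, 1], [i, i + 1], label="series %d" % i)
drawn = [line.get_color() for line in ax.get_lines()]
```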


cheers,
Zane.


[1] https://review.openstack.org/#/c/427836/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] UPDATED priorities for the coming week (03/10-03/16)

2017-03-14 Thread Brian Rosmaita
Hello Glancers,

In the interest of getting requirements changes unblocked, I'm adding
the following to the review priorities for this week:

- Fix incompatibilities with WebOb 1.7
  https://review.openstack.org/#/c/423366/
- Invoke monkey_patching early enough for eventlet 0.20.1
  https://review.openstack.org/#/c/419074/

Please give them your attention.  (The other patch mentioned by Dims has
been merged.)

thanks,
brian



On 3/9/17 2:32 PM, Brian Rosmaita wrote:
> Hello Glancers,
> 
> As discussed at today's Glance weekly meeting, here are the review
> priorities for this week:
> 
> 1. image import refactor
> Erno has posted a number of patches related to the backend implementation:
> - https://review.openstack.org/443636
> - https://review.openstack.org/391441
> - https://review.openstack.org/443632
> - https://review.openstack.org/391442
> - https://review.openstack.org/443633
> Please review and leave comments, even on the ones marked "WIP", as Erno
> could use the feedback.
> 
> 
> 2. Hemanth has the first draft of the dev docs for writing E-M-C
> database migrations posted:
> - https://review.openstack.org/#/c/443741/
> It's extremely important to the Glance project that this be clearly and
> accurately documented.  Thus, if you *haven't* been involved in the
> zero-downtime database migration effort yet, your reviews will be
> especially appreciated, as you'll be more likely to spot unstated
> assumptions.
> 
> 
> Other info:
> - spec proposal freeze: 13:59 UTC on Thursday 30 March 2017
> - Boston "Forum" planning - add proposals for discussion topics:
>   https://etherpad.openstack.org/p/BOS-Glance-brainstorming
> - March operators' survey proposal - please leave comments before 16:00
> UTC Monday 13 March:
>   https://etherpad.openstack.org/p/glance-cache-operator-survey
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Jay Pipes

On 03/14/2017 02:50 PM, Julien Danjou wrote:

On Tue, Mar 14 2017, Jay Pipes wrote:


Not tooz, because I'm not interested in a DLM nor leader election library
(that's what the underlying etcd3 cluster handles for me), only a fast service
liveness/healthcheck system, but it shows usage of etcd3 and Google Protocol
Buffers implementing a simple API for liveness checking and host maintenance
reporting.


Cool cool. So that's the same feature that we implemented in tooz 3
years ago. It's called "group membership". You create a group, make
nodes join it, and you know who's dead/alive and get notified when their
status change.


The point of os-lively is not to provide a thin API over ZooKeeper's 
group membership interface. The point of os-lively is to remove the need 
to have a database (RDBMS) record of a service in Nova.


tooz simply abstracts a group membership API across a number of drivers. 
I don't need that. I need a way to maintain a service record (with 
maintenance period information, region, and an evolvable data record 
format) and query those service records in an RDBMS-like manner but 
without the RDBMS being involved.



My plan is to push some proof-of-concept patches that replace Nova's
servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
service liveness checking, which should dramatically reduce the amount of both
DB traffic as well as conductor/MQ service update traffic.


Interesting. Joshua and Vilob tried to push usage of tooz group
membership a couple of years ago, but it got nowhere. Well, no, they got
2 specs written IIRC:

  
https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html

But then it died for whatever reasons on Nova side.


It died because it didn't actually solve a problem.

The problem is that even if we incorporate tooz, we would still need to 
have a service table in the RDBMS and continue to query it over and over 
again in the scheduler and API nodes.


I want all service information in the same place, and I don't want to 
use an RDBMS for that information. etcd3 provides an ideal place to 
store service record information. Google Protocol Buffers is an ideal 
data format for evolvable versioned objects. os-lively presents an API 
that solves the problem I want to solve in Nova. tooz didn't.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-14 Thread Mike Bayer

Hello all -

As mentioned previously, SQLAlchemy 1.1 has now been released for about 
six months.   My work now is on SQLAlchemy 1.2, which should hopefully 
see initial releases in late spring. SQLAlchemy 1.1 includes tons of 
features, bugfixes, and improvements, and in particular the most recent 
versions contain some critical performance improvements focused around 
the "joined eager loading" feature, most typically encountered when an 
application makes many, many queries for small, single-row result sets 
with lots of joined eager loading.   In other words, exactly the kinds 
of queries that Openstack applications do a lot; the fixes here were 
identified as a direct result of Neutron query profiling by myself and a 
few other contributors.


For many weeks now, various patches to attempt to bump requirements for 
SQLAlchemy 1.1 have been languishing with little interest, and I do not 
have enough knowledge of the requirements system to get exactly the 
correct patch that will accomplish the goal (nor do others).  The 
current gerrit is at https://review.openstack.org/#/c/423192/, where you 
can see that not just me, but a bunch of folks, have no idea what 
incantations we need to put here that will make this happen.  Tony 
Breeds has chimed in thusly:



To get this in we'll need to remove the cap in global-requirements
*and* at the same time add a heap of entries to 
upper-constraints-xfails.txt. This will allow us to merge the cap 
removal and keep the constraint in the 1.0 family while we wait for the 
requirements sync to propagate out.


I'm not readily familiar with what goes into upper-constraints-xfails 
and this file does not appear to be documented in common places like 
https://wiki.openstack.org/wiki/Requirements or 
https://git.openstack.org/cgit/openstack/requirements/tree/README.rst .


I'm asking on the list here for some assistance in moving this forward. 
SQLAlchemy development these days is closely attuned to the needs of 
Openstack now, a series of Openstack test suites are part of 
SQLAlchemy's own CI servers to ensure backwards compatibility with all 
changes, and 1.2 will have even more features that are directly 
consumable by oslo.db (features everyone will want, I promise you). 
Being able to bump requirements across Openstack so that new versions 
can be tested and integrated in a timely manner would be very helpful.


Thanks for reading!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Julien Danjou
On Tue, Mar 14 2017, Jay Pipes wrote:

> Not tooz, because I'm not interested in a DLM nor leader election library
> (that's what the underlying etcd3 cluster handles for me), only a fast service
> liveness/healthcheck system, but it shows usage of etcd3 and Google Protocol
> Buffers implementing a simple API for liveness checking and host maintenance
> reporting.

Cool cool. So that's the same feature that we implemented in tooz 3
years ago. It's called "group membership". You create a group, make
nodes join it, and you know who's dead/alive and get notified when their
status changes.
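To make "group membership" concrete, here is a toy pure-Python model of the semantics described above: join a group, see who is alive, get notified on status changes. This is an illustration of the idea only, not the tooz API; every name here is made up:

```python
# Toy model of group membership: not tooz, just the shape of the idea.
class Group:
    def __init__(self):
        self.members = set()
        self.watchers = []

    def watch(self, callback):
        # callback(event, member) fires on every status change
        self.watchers.append(callback)

    def _notify(self, event, member):
        for cb in self.watchers:
            cb(event, member)

    def join(self, member):
        self.members.add(member)
        self._notify("join", member)

    def leave(self, member):
        # e.g. missed heartbeats -> member presumed dead
        self.members.discard(member)
        self._notify("leave", member)

events = []
g = Group()
g.watch(lambda ev, m: events.append((ev, m)))
g.join("nova-compute-1")
g.join("nova-compute-2")
g.leave("nova-compute-1")
```

In a real backend (ZK ephemeral nodes, etcd leases) the "leave" is generated by the store itself when a member's session or lease expires.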

> My plan is to push some proof-of-concept patches that replace Nova's
> servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
> service liveness checking, which should dramatically reduce the amount of both
> DB traffic as well as conductor/MQ service update traffic.

Interesting. Joshua and Vilob tried to push usage of tooz group
membership a couple of years ago, but it got nowhere. Well, no, they got
2 specs written IIRC:

  
https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html

But then it died for whatever reasons on Nova side.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-14 Thread Emilien Macchi
Here's the proposal that will move DIB to Infra umbrella:
https://review.openstack.org/445617

Let's move forward and vote on this proposal.

Thanks all,

On Mon, Mar 6, 2017 at 3:23 PM, Gregory Haynes  wrote:
> On Sat, Mar 4, 2017, at 12:13 PM, Andre Florath wrote:
>> Hello!
>>
>> Thanks Greg for sharing your thoughts.  The idea of splitting off DIB
>> from OpenStack is new for me, therefore I collect some pros and
>> cons:
>>
>> Stay in OpenStack:
>>
>> + Use available OpenStack infrastructure and methods
>> + OpenStack should include a possibility to create images for ironic,
>>   VMs and docker. (Yes - there are others, but DIB is the best! :-) )
>> + Customers use DIB because it's part of OpenStack and for OpenStack
>>   (see e.g. [1])
>> + Popularity of OpenStack attracts more developers than a separate
>>   project (IMHO running DIB as a separate project even lowers the low
>>   number of contributors).
>> + 'Short Distances' if there are special needs for OpenStack.
>> + Some OpenStack projects use DIB - and also use internal 'knowledge'
>>   (like build-, run- or test-dependencies) - it would not be that easy
>>   to completely separate this in short term.
>>
>
> Ah, I may have not been super clear - I definitely agree that we
> wouldn't want to move off of being hosted by OpenStack infra (for all
> the reasons you list). There are actually two classes of project hosted
> by OpenStack infra - OpenStack projects and OpenStack related projects
> which have differing requirements
> (https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project).
> What I've noticed is we tend to align more with the openstack-related
> projects in terms of what we ask for / how we develop (e.g. not
> following the normal release cycle, not really being a 'deliverable' of
> an OpenStack release). AIUI though the distinction of whether you're an
> official project team or a related project just distinguishes what
> restrictions are placed on you, not whether you can be hosted by
> OpenStack infra.
>
>> As a separate project:
>>
>> - Possibly less organizational overhead.
>> - Independent releases possible.
>> - Develop / include / concentrate also for / on other non-OpenStack
>>   based virtualization platforms (EC2, Google Cloud, ...)
>> - Extend the use cases to something like 'DIB can install a wide range
>>   of Linux distributions on everything you want'.
>>   Example: DIB Element to install Raspberry Pi [2] (which is currently
>>   not the core use-case but shows how flexible DIB is).
>>
>> In my opinion the '+' arguments are more important, therefore DIB
>> should stay within OpenStack as a sub-project.  I don't really care
>> about the master: TripleO, Infra, glance, ...
>>
>>
>
> Out of this list I think infra is really the only one which makes sense.
> TripleO is the current setup and makes only slightly more sense than
> Glance at this point: we'd be an odd appendage in both situations.
> Having been in this situation for some time I tend to agree that it
> isn't a big issue it tends to just be a mild annoyance every now and
> then. IMO it'd be nice to resolve this issue once and for all, though
> :).
>
>> I want to touch an important point: Greg you are right that there are
>> only a very few developers contributing for DIB.  One reason
>> is IMHO, that it is not very attractive to work on DIB; some examples:
>>
>> o The documentation how to set up a DIB development environment [3]
>>   is out of date.
>> o Testing DIB is nightmare: a developer has no chance to test
>>   as it is done in the CI (which is currently setup by other OpenStack
>>   projects?). Round-trip times of ~2h - and then it often fails,
>>   because of some mirror problem...
>> o It takes sometimes very long until a patch is reviewed and merged
>>   (e.g. still open since 1y1d [6]; basic refactoring [7] was filed
>>   about 9 month ago and still not in the master).
>> o There are currently about 100 elements in DIB. Some of them are
>>   highly hardware dependent; some are known not to work; a lot of them
>>   need refactoring.
>
> I can't agree more on all of this. TBH I think working on docs is
> probably the most effective thing someone could do with DIB ATM because,
> as you say, that's how you enable people to contribute. The theory is
> that this is also what helps with the review latency - ask newer
> contributors to help with initial reviews. That being said, I'd be
> surprised if the large contributor count grows much unless some of the
> use cases change simply because its very much a plumbing tool for many
> of our consumers, not something people are looking to drive feature
> development in to.
>
>>
>> It is important to work on these topics to make DIB more attractive and
>> possible have more contributors.  Discussions about automated
>> development environment setup [4] or better developer tests [5] started
>> but need more attention and discussions (and maybe a different setting
>> 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Emilien Macchi
On Tue, Mar 14, 2017 at 1:04 PM, Davanum Srinivas  wrote:
> Team,
>
> So one more thing popped up again on IRC:
> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>
> What do you think? interested in this work?

Wow, it seems like our minds are crossing.
We had this discussion at the PTG and I've seen this topic quite often
mentioned during different sessions.

It's also somehow mentioned here:
https://etherpad.openstack.org/p/deployment-pike
See "Allow services to make use of KV stores instead of just INI files
for config".

What Thomas is doing in the PoC is actually what we thought could be
the next step forward for configuration management in OpenStack, in a
way that could be shared across all tools.
Partially related, Ben Nemec is working on a spec that would extract
all OpenStack parameters and generate YAML files:
https://review.openstack.org/#/c/440835/
And we thought that we could re-use this file to inject the
configuration into etcd.

I see a connection here where :

1. With Ben's work, we would generate a list of parameters available
in OpenStack and expose it to the User Interface of the deployment
tool.
2. The deployment tool would grab inputs from users and write the
values into etcd. The installers would also configure some parameters
that users don't want to provide (with all the logic around).
3. OpenStack services would read the config directly from etcd, thanks
to Thomas's work.

That way, 1. and 3. belong to oslo.config and 2. is done by OpenStack
deployment tools.
Does it make sense?
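A minimal sketch of the three-step flow above, with a plain dict standing in for etcd and a hard-coded table standing in for the generated YAML parameter list (all key layouts and names here are hypothetical, not oslo.config's actual design):

```python
# Step 1: parameters a deployment tool could expose to its UI
# (a stand-in for the YAML file Ben's spec would generate).
schema = {
    "DEFAULT/debug": {"type": bool, "default": False},
    "database/connection": {"type": str, "default": "sqlite://"},
}

# Step 2: the deployment tool writes user-chosen values into the store.
etcd = {}  # a plain dict standing in for etcd
etcd["/config/nova/DEFAULT/debug"] = "true"

def read_option(service, group, name):
    # Step 3: the service prefers the KV store, falling back to the
    # schema default, instead of reading an INI file.
    key = "/config/%s/%s/%s" % (service, group, name)
    meta = schema["%s/%s" % (group, name)]
    if key in etcd:
        raw = etcd[key]
        return raw == "true" if meta["type"] is bool else raw
    return meta["default"]

debug = read_option("nova", "DEFAULT", "debug")
db_conn = read_option("nova", "database", "connection")
```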

I see a lot of collaboration and consolidation here, in how we do
configuration management in OpenStack. I hope we can move forward and
find some consensus here; and why not proposing a first architecture
for Pike.

Thanks,

> Thanks,
> Dims
>
> PS: Between this thread and the other one about Tooz/DLM and
> os-lively, we can probably make a good case to add etcd as a base
> always-on service.
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev]  [Horizon] Empty metadata value support

2017-03-14 Thread Rob Cresswell (rcresswe)
You’d be better off checking each API or tagging Glance in this, since I think 
they originally wrote the metadata stuff. Horizon shouldn’t require anything 
the APIs don’t, but I’d imagine it was there for a reason, at least initially.

Rob

> On 14 Mar 2017, at 13:06, Mateusz Kowalski  wrote:
> 
> Hello everyone,
> 
> This mail is to ask for opinion and/or recommendation regarding inconsistent 
> behaviour between CLI and UI re: support of metadata keys with empty values.
> 
> The current behaviour is as follows: most, if not all, of the backend 
> components fully support custom metadata properties with value = null. At the 
> same time, the Horizon UI by default, in all "Update ... Metadata" dialogs, 
> requires a non-empty value for each key (that is, null is not a valid input).
> 
> We have the following scenario happening just now for one of our customers -- 
> there is an image X uploaded via CLI with property "custom_x:null". A user 
> creates a VM from this image and later creates a snapshot of the VM (these 
> two steps behave the same in CLI and UI). Next, using the UI, he tries to 
> rename the snapshot he has just created via the "Edit Image" panel. The 
> operation turns out not to be possible, because the metadata tab is marked as 
> "mandatory", with property "custom_x" appearing without any value and tagged 
> as "required". This means our user is forced to either put a non-null value 
> into the property or completely remove it in order to be able to rename the 
> snapshot. At the same time, renaming it via CLI works without any impact on 
> the metadata. The same applies to changing any other detail like "image 
> description", "visibility" or "protection".
> 
> Therefore the question: does anyone have a strong "no" against pushing a 
> patch which would allow null as a valid value for custom metadata across all 
> of Horizon?
> 
> Mateusz,
> CERN
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
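For reference, the backend behaviour in question can be sketched at the API level: Glance's v2 image update is a JSON-patch style call, and a null property value is legal there, which is exactly what the Horizon form validation rejects. A minimal sketch (the endpoint, image id and token in the comment are placeholders):

```python
import json


def build_property_patch(key, value):
    # A JSON-patch document for a Glance v2 image update; value may be None (null).
    return [{'op': 'add', 'path': '/%s' % key, 'value': value}]


patch = build_property_patch('custom_x', None)
body = json.dumps(patch)
# Sending it would look roughly like (placeholders, not runnable as-is):
# requests.patch('%s/v2/images/%s' % (glance_endpoint, image_id),
#                headers={'X-Auth-Token': token,
#                         'Content-Type': 'application/openstack-images-v2.1-json-patch'},
#                data=body)
```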

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [tripleo] [deployment] Keystone Fernet keys rotations spec

2017-03-14 Thread Emilien Macchi
Folks,

I found it useful to share a spec that I started writing this morning:
https://review.openstack.org/445592

The goal is to do Keystone Fernet key rotation in a way that scales
and is secure, by using standard tools and not re-inventing the
wheel.
In other words: if you're working on Keystone or TripleO or any other
deployment tool: please read the spec and give any feedback.

We would like to find a solution that would work for all OpenStack
deployment tools (Kolla, OSA, Fuel, TripleO, Helm, etc.), but I sent the
spec to the tripleo project
to get some feedback.

If you already have THE solution that you think is the best one, then
we would be very happy to learn about it in a comment directly in the
spec.
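For background, the rotation being discussed is itself simple. Below is a sketch of the algorithm operating on an in-memory {index: key} mapping rather than the key files that `keystone-manage fernet_rotate` manages on disk; it illustrates the scheme (index 0 is the staged key, the highest index is the primary, the rest are secondaries kept for validating older tokens) but is not keystone's actual implementation.

```python
def rotate(keys, new_key, max_active_keys=3):
    """Rotate a fernet key repository represented as {index: key}."""
    next_primary = max(keys) + 1
    keys[next_primary] = keys.pop(0)    # promote the staged key to primary
    keys[0] = new_key                   # stage a fresh key for the next rotation
    while len(keys) > max_active_keys:  # prune the oldest secondaries
        del keys[min(k for k in keys if k != 0)]
    return keys
```

The interesting operational question the spec tackles is not this algorithm but how to run it consistently across many keystone nodes.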

Thanks for your time,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][neutron-fwaas][networking-bgpvpn][nova-lxd] Removing old data_utils from Tempest

2017-03-14 Thread Ken'ichi Ohmichi
Hi,

Many projects are using the data_utils library, which is provided by
Tempest for creating test resources with random resource names.
The library is now provided as a stable interface (tempest.lib), and the old
unstable interface (tempest.common) will be removed once most
projects have switched to the new one.
We can see remaining projects from
https://review.openstack.org/#/q/status:open+branch:master+topic:tempest-data_utils

I hope these patches can also be merged before the
patch (https://review.openstack.org/#/c/72/) which removes the old
one, to keep all gates stable.
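For anyone doing the switch: only the import path changes, i.e. `from tempest.common.utils import data_utils` becomes `from tempest.lib.common.utils import data_utils`. What the helper does is essentially the following (a simplified sketch for illustration, not the actual tempest.lib implementation):

```python
import random


def rand_name(name='', prefix=None):
    # Append a random suffix so concurrently created test resources don't collide.
    rand = str(random.randint(0, 0x7fffffff))
    out = '%s-%s' % (name, rand) if name else rand
    return '%s-%s' % (prefix, out) if prefix else out
```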

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Davanum Srinivas
Team,

So one more thing popped up again on IRC:
https://etherpad.openstack.org/p/oslo.config_etcd_backend

What do you think? interested in this work?

Thanks,
Dims

PS: Between this thread and the other one about Tooz/DLM and
os-lively, we can probably make a good case to add etcd as a base
always-on service.

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Fox, Kevin M
+1

From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, March 14, 2017 3:00 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for 
Tooz/DLM

Fox, Kevin M wrote:
> With my operator hat on, I would like to use the etcd backend, as I'm already 
> paying the cost of maintaining etcd clusters as part of Kubernetes. Adding 
> Zookeeper is a lot more work.

+1

In the spirit of better operationally integrating with Kubernetes, I
think we need to support etcd, at least as an option.

As I mentioned in another thread, for base services like databases,
message queues and distributed lock managers, the Architecture WG
started to promote an expand/contract model. Start by supporting a
couple viable options, and then once operators / the market decides on
one winner, contract to only supporting that winner, and start using the
specific features of that technology.

For databases and message queues, it's more than time for us to
contract. For DLMs, we are in the expand phase. We should only support a
very limited set of valuable options though -- no need to repeat the
mistakes of the past and support a dozen options just because we can.
Here it seems Zookeeper gives us the mature / featureful angle, and etcd
covers the Kubernetes cooperation / non-Java angle. I don't really see
the point of supporting a third option.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Jay Pipes

On 03/14/2017 08:57 AM, Julien Danjou wrote:

On Tue, Mar 14 2017, Davanum Srinivas wrote:


Let's do it!! (etcd v2-v3 in tooz)


Hehe. I'll move that higher in my priority list, I swear. But anyone is
free to beat me to it in the meantime. ;)


A weekend experiment:

https://github.com/jaypipes/os-lively

Not tooz, because I'm not interested in a DLM or leader election 
library (that's what the underlying etcd3 cluster handles for me), only 
a fast service liveness/healthcheck system, but it shows usage of etcd3 
and Google Protocol Buffers implementing a simple API for liveness 
checking and host maintenance reporting.


My plan is to push some proof-of-concept patches that replace Nova's 
servicegroup API with os-lively and eliminate Nova's use of an RDBMS for 
service liveness checking, which should dramatically reduce both the 
DB traffic and the conductor/MQ service update traffic.
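The lease-based pattern is what makes etcd3 attractive here: each service keeps a key alive under a TTL lease and simply vanishes from the namespace when the lease lapses. The sketch below illustrates that pattern, not os-lively's actual API; the client is assumed to expose put/get/lease where get(key) returns the value or None (the real python etcd3 client returns a (value, metadata) pair, so a thin wrapper is implied).

```python
import json
import time


class ServiceLiveness(object):
    """Service liveness reporting via TTL-leased keys (illustrative sketch)."""

    def __init__(self, client, ttl=5):
        self.client = client
        self.ttl = ttl

    def report_up(self, service, host):
        # The key disappears automatically unless the lease keeps being refreshed.
        key = '/services/%s/%s' % (service, host)
        lease = self.client.lease(self.ttl)
        self.client.put(key, json.dumps({'host': host, 'ts': time.time()}),
                        lease=lease)
        return lease  # the caller refreshes it periodically, e.g. lease.refresh()

    def is_up(self, service, host):
        return self.client.get('/services/%s/%s' % (service, host)) is not None
```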


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-14 Thread Roman Podoliaka
Hi Matt,

On Tue, Mar 14, 2017 at 5:27 PM, Matt Riedemann  wrote:
> We did agree to provide an openstackclient plugin purely for CLI
> convenience. That would be in a separate repo, not part of nova or
> novaclient. I've started a blueprint [1] for tracking that work. *The
> placement osc plugin blueprint does not currently have an owner.* If this is
> something someone is interested in working on, please let me know.
>
> [1] https://blueprints.launchpad.net/nova/+spec/placement-osc-plugin

I'll be glad to help with this!

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] How to review the deployment plan import/export feature

2017-03-14 Thread Ana Krivokapic
Hi TripleO devs,

The patches ([1], [2], [3], [4]) implementing the deployment plan
import/export feature have been up for review for some time. (The
corresponding spec is at [5].) In this email, I'd like to explain how to
review this feature.

Patch [1] adds the import functionality to the create and update plan
actions in tripleo-common. Make sure your copy of the
tripleo-heat-templates includes the plan-environment.yaml[6] file. On your
undercloud, clone a local copy of the tripleo-common repo [7] and
cherry-pick the patch [1] on top of it. Then run the steps outlined at [8]
in order to install the newly cloned copy of tripleo-common. This will make
the updated actions available to Mistral. Then you can use the plan create
and plan update commands, then make sure that the contents of the
plan-environment.yaml file get correctly imported into the Mistral
environment. E.g:

  $ openstack overcloud plan create --templates
/usr/share/openstack-tripleo-heat-templates/ new_plan
  $ mistral environment-get new_plan

And similarly for plan update.

Patch [2] adds an action and a workflow for exporting deployment plans and
patch [3] adds a CLI command that triggers the workflow. You can test these
two patches together, or if you want to test only [2] you can manually
execute the Mistral workflow. For patch [2] the same instructions from
above regarding installing a new copy of tripleo-common apply. You will
also need to manually update the plan management workbook:

  $ mistral workbook-update tripleo-common/workbooks/plan_management.yaml

Please note that patch [2] depends on patch [1] - you'll need both of them
applied on your local copy of tripleo-common. As far as
python-tripleoclient goes, it is enough to simply clone a fresh copy to
your undercloud, apply patch [3] and run `python setup.py install`. This
will make the new plan export command available to you:

  $ openstack overcloud plan export new_plan

If all goes well, that should create a plan export tarball
(new_plan.tar.gz) in the directory you ran the command from. You should
check that the tarball contains all the relevant plan files, as well as the
plan-environment.yaml file whose contents should be imported from the
Mistral environment.

Finally, patch [4] integrates the plan export functionality into the
TripleO UI. You will need [1] and [2] applied on your undercloud. Then you
can cherry pick [4] on top of your tripleo-ui checkout. Access the UI,
navigate to Select Deployment -> Manage Deployments. Next to every plan,
there should now be a new Export button which you can use to test the plan
export functionality.

Hope all of this makes sense - if it does not, or you have any other
questions regarding this feature, please feel free to respond here or ping
me on IRC.

Thanks for your patience and I'll look forward to the reviews!


[1] https://review.openstack.org/#/c/414169/
[2] https://review.openstack.org/#/c/422789/
[3] https://review.openstack.org/#/c/425858/
[4] https://review.openstack.org/#/c/437676/
[5]
https://specs.openstack.org/openstack/tripleo-specs/specs/ocata/gui-plan-import-export.html
[6]
https://github.com/openstack/tripleo-heat-templates/blob/master/plan-environment.yaml
[7] https://github.com/openstack/tripleo-common
[8]
https://docs.openstack.org/developer/tripleo-common/readme.html#action-development

-- 
Regards,
Ana Krivokapic
Senior Software Engineer
OpenStack team
Red Hat Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Prashant Shetty
No. I was able to bring up a multi-node setup with separate computes using
the devstack branch stable/ocata.
Those nova-manage commands are required in a multi-node setup only;
an all-in-one setup is taken care of by devstack itself.

Did the nova-manage discover_hosts command detect the computes you configured?
If you are not seeing any requests in nova-compute, you have a problem
in n-cond or n-sch.

Check the n-sch.log and n-cond logs; you should see some clue about the problem.

Thanks,
Prashant


On Tue, Mar 14, 2017 at 9:05 PM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> Thanks Prashant,
>
> I have chkd that. Does it have anything to run controller and compute
> on single node as solution ?
>
> On Tue, Mar 14, 2017 at 8:52 PM, Prashant Shetty <
> prashantshetty1...@gmail.com> wrote:
>
>> Couple of things to check,
>>
>>- On controller in nova.conf you should have [placement] section with
>>below info
>>   - [placement]
>>   os_region_name = RegionOne
>>   project_domain_name = Default
>>   project_name = service
>>   user_domain_name = Default
>>   password = 
>>   username = placement
>>   auth_url = 
>>   auth_type = password
>>- If nova service-list shows your nova-compute is UP and RUNNING, you
>>need to run discover commands on controller as below
>>   - nova-manage cell_v2 map_cell0 --database_connection 
>>   - nova-manage cell_v2 simple_cell_setup --transport-url
>>   
>>   - nova-manage cell_v2 discover_hosts --verbose
>>
>> The discover command should show a message that it has discovered your compute
>> nodes. If the instance launch still fails, check the nova-conductor and
>> nova-scheduler logs for more info.
>>
>> For more information refer to https://docs.openstack.org/developer/nova/cells.html
>>
>>
>> Thanks,
>>
>> Prashant
>>
>> On Tue, Mar 14, 2017 at 8:33 PM, Vikash Kumar <
>> vikash.ku...@oneconvergence.com> wrote:
>>
>>> That was the weird thing: nova-compute didn't have any error log.
>>> The nova-compute logs also didn't show any instance create request.
>>>
>>> On Tue, Mar 14, 2017 at 7:50 PM, luogangyi@chinamobile <
>>> luogan...@chinamobile.com> wrote:
>>>
 From your log, we can see that the nova scheduler has already selected the
 target node, which is u'nfp'.


 So you should check the nova-compute log from node nfp.


 Probably, you are stuck at image downloading.

  Original message
 *From:* Vikash Kumar
 *To:* openstack-dev
 *Sent:* Tuesday, 14 March 2017, 18:22
 *Subject:* [openstack-dev] [nova][nova-scheduler] Instance boot stuck
 in "Scheduling" state

 All,

 I brought up a multinode setup with devstack. I am using the Ocata
 release. Instance boots are getting stuck in the "scheduling" state, and the
 state never changes. Below is the link to the scheduler log.

 http://paste.openstack.org/show/602635/


 --
 Regards,
 Vikash

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Regards,
>>> Vikash
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Vikash
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-l2gw] Unable to create release tag

2017-03-14 Thread Ihar Hrachyshka
On Tue, Mar 14, 2017 at 6:04 AM, Jeremy Stanley  wrote:
> The ACL for that repo doesn't seem to be configured to allow it
> (yet):
>

Probably fallout of the neutron stadium exclusion. While it was in the
stadium, releases were happening through the openstack/releases machinery.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Vikash Kumar
Thanks Prashant,

I have checked that. Does it say anything about running the controller and compute
on a single node as a solution?

On Tue, Mar 14, 2017 at 8:52 PM, Prashant Shetty <
prashantshetty1...@gmail.com> wrote:

> Couple of things to check,
>
>- On controller in nova.conf you should have [placement] section with
>below info
>   - [placement]
>   os_region_name = RegionOne
>   project_domain_name = Default
>   project_name = service
>   user_domain_name = Default
>   password = 
>   username = placement
>   auth_url = 
>   auth_type = password
>- If nova service-list shows your nova-compute is UP and RUNNING, you
>need to run discover commands on controller as below
>   - nova-manage cell_v2 map_cell0 --database_connection 
>   - nova-manage cell_v2 simple_cell_setup --transport-url
>   
>   - nova-manage cell_v2 discover_hosts --verbose
>
> The discover command should show a message that it has discovered your compute
> nodes. If the instance launch still fails, check the nova-conductor and
> nova-scheduler logs for more info.
>
> For more information refer to https://docs.openstack.org/developer/nova/cells.html
>
>
> Thanks,
>
> Prashant
>
> On Tue, Mar 14, 2017 at 8:33 PM, Vikash Kumar <
> vikash.ku...@oneconvergence.com> wrote:
>
>> That was the weird thing: nova-compute didn't have any error log.
>> The nova-compute logs also didn't show any instance create request.
>>
>> On Tue, Mar 14, 2017 at 7:50 PM, luogangyi@chinamobile <
>> luogan...@chinamobile.com> wrote:
>>
>>> From your log, we can see that the nova scheduler has already selected the
>>> target node, which is u'nfp'.
>>>
>>>
>>> So you should check the nova-compute log from node nfp.
>>>
>>>
>>> Probably, you are stuck at image downloading.
>>>
>>>  Original message
>>> *From:* Vikash Kumar
>>> *To:* openstack-dev
>>> *Sent:* Tuesday, 14 March 2017, 18:22
>>> *Subject:* [openstack-dev] [nova][nova-scheduler] Instance boot stuck
>>> in "Scheduling" state
>>>
>>> All,
>>>
>>> I brought up a multinode setup with devstack. I am using the Ocata
>>> release. Instance boots are getting stuck in the "scheduling" state, and the
>>> state never changes. Below is the link to the scheduler log.
>>>
>>> http://paste.openstack.org/show/602635/
>>>
>>>
>>> --
>>> Regards,
>>> Vikash
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Regards,
>> Vikash
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Matt Riedemann

On 3/14/2017 10:22 AM, Prashant Shetty wrote:

Couple of things to check,

  * On controller in nova.conf you should have [placement] section with
below info
  o [placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 
username = placement
auth_url = 
auth_type = password
  * If nova service-list shows your nova-compute is UP and RUNNING, you
need to run discover commands on controller as below
  o nova-manage cell_v2 map_cell0 --database_connection 
  o nova-manage cell_v2 simple_cell_setup --transport-url

  o nova-manage cell_v2 discover_hosts --verbose

The discover command should show a message that it has discovered your compute
nodes. If the instance launch still fails, check the nova-conductor and
nova-scheduler logs for more info.

For more information refer to
https://docs.openstack.org/developer/nova/cells.html


Thanks,

Prashant




Those are all good things to check. If placement is not properly 
configured and running though, the nova-compute service won't start and 
the scheduling attempt will result in a NoValidHost error which should 
put the instance into ERROR state.


Starting in Ocata, if you're not using cellsv1, the instance should be 
created in the nova_cell0 database. But it should be there with ERROR 
state and you should still be able to list/show it from the API so you 
can delete it. It shouldn't be left in scheduling state.


You may want to run this to make sure your setup is done properly:

nova-status upgrade check

That should give you some basic readiness/health information about cells 
v2 and placement in your setup.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-14 Thread Matt Riedemann

On 3/13/2017 9:14 AM, Sylvain Bauza wrote:


To be honest, one of the things I think we're missing yet is a separate
client that deployers would package so that Nova and other customer
projects would use for calling the Placement API.
At the moment, we have a huge amount of code in nova.scheduler.report
module that does smart things and I'd love to see that being in a
separate python package (maybe in the novaclient repo, or something
else) so we could ask deployers to package *that only*

The interest in that is that it wouldn't be a separate service project,
just a pure client package at a first try, and we could see how to cut
placement separately the cycle after that.

-Sylvain


We talked about the need, or lack thereof, for a python API client in 
the nova IRC channel today and decided that for now, services should 
just be using a minimal in-tree pattern using keystoneauth to work with 
the placement API. Nova and Neutron are already doing this today. There 
might be common utility code that comes out of that at some point which 
could justify a placement-lib, but let's determine that after more 
projects are using the service, like Cinder and Ironic.
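The "minimal in-tree pattern" amounts to only a few lines. Below is a sketch (not nova's actual report client), where `session` is assumed to behave like a keystoneauth1 Session/Adapter configured with service_type='placement', i.e. it offers get(url, headers=...) returning a response with .json(); the microversion header name is the real one, while the version pinned here is an arbitrary example.

```python
class PlacementReportClient(object):
    """Bare-bones access to the placement REST API (illustrative sketch)."""

    def __init__(self, session):
        self.session = session
        # Placement is microversioned via the OpenStack-API-Version header.
        self.headers = {'OpenStack-API-Version': 'placement 1.4'}

    def list_resource_providers(self):
        resp = self.session.get('/resource_providers', headers=self.headers)
        return resp.json()['resource_providers']
```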


We also agreed to not create a python-placementclient type package that 
mimics novaclient and has a python API binding. We want API consumers to 
use the REST API directly which forces us to have a clean and 
well-documented API, rather than hiding warts within a python API 
binding client package.


We did agree to provide an openstackclient plugin purely for CLI 
convenience. That would be in a separate repo, not part of nova or 
novaclient. I've started a blueprint [1] for tracking that work. *The 
placement osc plugin blueprint does not currently have an owner.* If 
this is something someone is interested in working on, please let me know.


[1] https://blueprints.launchpad.net/nova/+spec/placement-osc-plugin

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Prashant Shetty
Couple of things to check,

   - On controller in nova.conf you should have [placement] section with
   below info
  - [placement]
  os_region_name = RegionOne
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  password = 
  username = placement
  auth_url = 
  auth_type = password
   - If nova service-list shows your nova-compute is UP and RUNNING, you
   need to run discover commands on controller as below
  - nova-manage cell_v2 map_cell0 --database_connection 
  - nova-manage cell_v2 simple_cell_setup --transport-url
  
  - nova-manage cell_v2 discover_hosts --verbose

The discover command should show a message that it has discovered your compute
nodes. If the instance launch still fails, check the nova-conductor and
nova-scheduler logs for more info.

For more information refer to
https://docs.openstack.org/developer/nova/cells.html


Thanks,

Prashant

On Tue, Mar 14, 2017 at 8:33 PM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> That was the weird thing: nova-compute didn't have any error log.
> The nova-compute logs also didn't show any instance create request.
>
> On Tue, Mar 14, 2017 at 7:50 PM, luogangyi@chinamobile <
> luogan...@chinamobile.com> wrote:
>
>> From your log, we can see that the nova scheduler has already selected the
>> target node, which is u'nfp'.
>>
>>
>> So you should check the nova-compute log from node nfp.
>>
>>
>> Probably, you are stuck at image downloading.
>>
>>  Original message
>> *From:* Vikash Kumar
>> *To:* openstack-dev
>> *Sent:* Tuesday, 14 March 2017, 18:22
>> *Subject:* [openstack-dev] [nova][nova-scheduler] Instance boot stuck
>> in "Scheduling" state
>>
>> All,
>>
>> I brought up a multinode setup with devstack. I am using the Ocata release.
>> Instance boots are getting stuck in the "scheduling" state, and the state
>> never changes. Below is the link to the scheduler log.
>>
>> http://paste.openstack.org/show/602635/
>>
>>
>> --
>> Regards,
>> Vikash
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Vikash
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] No meeting Mar 14, 2017, next meeting Mar 21, 2017

2017-03-14 Thread Alex Schultz
The agenda[0] was empty. So canceling the meeting for this week.  The
next meeting will be on Mar 21, 2017 @ 1500 UTC.  Feel free put
something on the agenda[1].

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170314
[1] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170321

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread Adriano Petrich
Sorry if I'm missing the point here, and for being late to the discussion.
But what about zope interfaces?

Would that not be clearer?



On Tue, Mar 14, 2017 at 11:39 AM, Dougal Matthews  wrote:

>
>
> On 14 March 2017 at 11:14, Renat Akhmerov 
> wrote:
>
>> Yeah, I too finally understood what Thomas meant.
>>
>> Just to clarify, I think we mixed two different discussions here:
>>
>>1. Base framework for all actions residing in mistral-lib (what I was
>>trying to discuss)
>>2. The new design for OpenStack actions
>>
>>
>> On #2 I agree with you that NovaAction.get_client(context) should work.
>> No problem with that.
>> And I believe that it doesn’t make sense to use multiple inheritance in
>> this particular case; it’s
>> simply not worth it.
>>
>> Getting back to #1: of course, using mixins can be problematic (method
>> and state conflicts, etc.). I think mixins are just one of the options we
>> could use if we wanted to. Regular class inheritance is also an option. At
>> this point, if we just agree on an action base class, I think nothing
>> prevents us from choosing how to evolve in the future. So just agreeing on
>> the base class design seems to be sufficient for now. It’s just the base
>> contract that a runner needs to be aware of (sorry for repeating this
>> thought, but I think it’s important). The rest relates to action developer
>> convenience.
>>
>> I think the outstanding questions are;
>>
>> - should the context be a mixin or should run() always accept context?
>>
>>
>> I’m for run() having “context” argument. Not sure why mixin is needed
>> here. If an action doesn’t
>> need context it can be ignored.
>>
>> - should async be a mixin or should we continue with the is_sync() method
>> and overwriting that in the sublcass?
>>
>>
>> I’m for is_sync() method as it is now. It’s more flexible and less
>> confusing (imagine an action inheriting
>> AsyncAction but having is_async() returning False).
>>
>> - should the openstack actions in mistral-extra be mixins?
>>
>>
>> No, not at all. They don’t have to be. This is a different discussion I
>> think. We need to collect what’s bad about
>> the current OpenStack actions and think how to rewrite them (extract the
>> common infrastructure they use,
>> make them more extensible, etc.)
>>
>
> +1 to all of the above, I think we are in complete agreement and this will
> give us both a flexible interface and one that is easy to use and
> understand.
>
>
>
>>
>>
>> Renat Akhmerov
>> @Nokia
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
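The contract discussed in this thread, run() always taking a (possibly ignored) context argument and is_sync() remaining an overridable method rather than an Async mixin, can be sketched as follows. This is modeled on the mistral-lib style but simplified; it is not the actual mistral_lib.actions API.

```python
class Action(object):
    """The base contract a runner needs to be aware of (sketch)."""

    def run(self, context):
        raise NotImplementedError()

    def is_sync(self):
        # Overridable per action; more flexible than inheriting an Async mixin.
        return True


class EchoAction(Action):
    def __init__(self, output):
        self.output = output

    def run(self, context):
        # Actions that don't need the context can simply ignore it.
        return self.output
```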
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Vikash Kumar
That was the weird thing: nova-compute didn't have any error log.
The nova-compute logs also didn't show any instance create request.

On Tue, Mar 14, 2017 at 7:50 PM, luogangyi@chinamobile <
luogan...@chinamobile.com> wrote:

> From your log, we can see that the nova scheduler has already selected the
> target node, which is u'nfp'.
>
>
> So you should check the nova-compute log from node nfp.
>
>
> Probably, you are stuck at image downloading.
>
>  Original message
> *From:* Vikash Kumar
> *To:* openstack-dev
> *Sent:* Tuesday, 14 March 2017, 18:22
> *Subject:* [openstack-dev] [nova][nova-scheduler] Instance boot stuck
> in "Scheduling" state
>
> All,
>
> I brought up a multinode setup with devstack. I am using the Ocata release.
> Instance boots are getting stuck in the "scheduling" state, and the state
> never changes. Below is the link to the scheduler log.
>
> http://paste.openstack.org/show/602635/
>
>
> --
> Regards,
> Vikash
>


-- 
Regards,
Vikash


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-14 Thread Rodrigo Duarte
On Tue, Mar 14, 2017 at 10:36 AM, Lance Bragstad 
wrote:

> Rodrigo,
>
> Isn't what you just described the reseller use case [0]? Was that work
> ever fully finished? I thought I remember having discussions in Tokyo about
> it.
>

You are right, one of the goals of reseller is to have an even stronger
separation in the hierarchy by having subdomains, but this is not
implemented yet. However, I was referring only to the project hierarchy and
whether or not inherited role assignments grant access to the subtree.


>
>
> [0] http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/mitaka/reseller.html
>
> On Tue, Mar 14, 2017 at 7:38 AM, Rodrigo Duarte 
> wrote:
>
>> Hi Adrian,
>>
>> In project hierarchies, it is not that simple to have a "tree admin".
>> Imagine you have something like the following:
>>
>> A -> B -> C
>>
>> You are an admin of project C and want to create a project called
>> "secret_D" under C:
>>
>> A -> B -> C -> secret_D
>>
>> This is an example of a hierarchy where the admin of project A is not
>> supposed to "see" the whole tree. Of course we could implement this in a
>> different way, like using a flag "secret" in project "secret_D", but the
>> implementation we chose was the one that made more sense for the way role
>> assignments are organized. For example, we can assign to project A admin an
>> inherited role assignment, which will give access to the whole subtree and
>> make it impossible to create a "secret_D" project like we defined above -
>> it is basically a choice between the possibility to have "hidden" projects
>> or not in the subtree.
>>
>> However, we can always improve! Please submit a spec where we will be
>> able to discuss in further detail the options we have to improve the
>> current UX of the HMT feature :)
>>
>> On Tue, Mar 14, 2017 at 12:24 AM, Adrian Turjak 
>> wrote:
>>
>>> Hello Keystone Devs,
>>>
>>> I've been playing with subtrees in Keystone for the last while, and one
>>> thing that hit me recently is that as admin, I still can't actually do
>>> subtree_as_list unless I have a role in all projects in the subtree.
>>> This is kind of silly.
>>
>>
>>> I can understand why this limitation was implemented, but it's also a
>>> frustrating constraint because as an admin, I have the power to add
>>> myself to all these projects anyway, why then can't I just list them?
>>
>>
>>> Right now if I want to get a list of all the subtree projects I need to
>>> do subtree_as_ids, then list ALL projects, and then go through that list
>>> grabbing out only the projects I want. This is a pointless set of
>>> actions, and having to get the full project list when I just need a
>>> small subset is really wasteful.
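The workaround described above (subtree_as_ids, then listing all projects, then client-side filtering) can be sketched as follows. The project data and the nested-dict shape of the subtree response are illustrative stand-ins for the real Keystone v3 payloads, not its exact API:

```python
# Sketch of the client-side workaround: flatten the nested dict that
# a subtree_as_ids query returns, list all projects, then keep only
# the ones whose id appears in the subtree.
def flatten_subtree_ids(subtree):
    """Collect every project id from a nested {id: {child_id: ...}} dict."""
    ids = set()
    for pid, children in (subtree or {}).items():
        ids.add(pid)
        ids.update(flatten_subtree_ids(children))
    return ids

# Fake data standing in for the two API responses.
all_projects = [
    {"id": "a1", "name": "A"},
    {"id": "b2", "name": "B"},
    {"id": "c3", "name": "C"},
    {"id": "zz", "name": "unrelated"},
]
subtree_of_b = {"b2": {"c3": None}}  # subtree rooted at project B

wanted = flatten_subtree_ids(subtree_of_b)
subtree_projects = [p for p in all_projects if p["id"] in wanted]
assert [p["name"] for p in subtree_projects] == ["B", "C"]
```

The wasteful part is exactly the `all_projects` fetch: the filtering itself is trivial, but it forces the client to pull every project in the deployment first.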
>>
>>
>>> Beyond the admin case, people may in fact want certain roles to be able
>>> to see the full subtree regardless of access. In fact I have a role
>>> 'project_admin' which allows you to edit your own roles within the scope
>>> of your project, including set those roles to inherit down, and create
>>> subprojects. If you have the project_admin role, it would make sense to
>>> see the full subtree regardless of your actually having access to each
>>> element in the subtree or not.
>>>
>>> Looking at the code in Keystone, I'm not entirely sure there is a good
>>> way to set role based policy for this given how it was setup. Another
>>> option might be to introduce a filter which allows listing of
>>> subprojects. Project list is already an admin/cloud_admin only command
>>> so there is no need to limit it, and the filter could be as simple as
>>> 'subtree=' and would make getting subtrees as admin, or a
>>> given admin-like role, actually doable without the pain of roles
>>> everywhere.
>>>
>>> The HMT stuff in Keystone is very interesting, and potentially very
>>> useful, but it also feels like so many of the features are a little
>>> half-baked. :(
>>>
>>> Anyone with some insight into this and is there any interest in making
>>> subtree listing more flexible/useful?
>>>
>>> Cheers,
>>> Adrian Turjak
>>>
>>>
>>> Further reading:
>>> https://github.com/openstack/keystone-specs/blob/master/spec
>>> s/keystone/kilo/project-hierarchy-retrieval.rst
>>> https://bugs.launchpad.net/keystone/+bug/1434916
>>> https://review.openstack.org/#/c/167231
>>>
>>>
>>>
>>
>>
>>
>> --
>> Rodrigo Duarte Sousa
>> Senior Quality Engineer @ Red Hat
>> MSc in Computer Science
>> http://rodrigods.com
>>

[openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread luogangyi@chinamobile
From your log, we can see that the nova scheduler has already selected the
target node, which is 'nfp'.


So you should check the nova-compute log from node nfp.


Probably, you are stuck at image downloading.


Original message
From: Vikash Kumar <vikash.ku...@oneconvergence.com>
To: openstack-dev <openstack-...@lists.openstack.org>
Sent: Tuesday, 14 March 2017, 18:22
Subject: [openstack-dev] [nova][nova-scheduler] Instance boot stuck in "Scheduling"
state


All,


 I brought up a multinode setup with devstack. I am using the Ocata release.
Instance boots are getting stuck in the "scheduling" state; the state never
changes. Below is the link to the scheduler log.

http://paste.openstack.org/show/602635/



-- 

Regards,

Vikash


[openstack-dev] [keystone] Pike deadlines

2017-03-14 Thread Lance Bragstad
Hello,

Sending out a quick announcement that we've merged our project-specific
deadlines for the Pike release schedule [0]. Our first deadline this
release is spec proposal freeze which is going to be R-20 (April 14th).

Thanks!

[0] https://releases.openstack.org/pike/schedule.html


Re: [openstack-dev] [API-WG] Schema like aws-sdk and API capabilities discovery

2017-03-14 Thread Chris Dent

On Fri, 10 Mar 2017, Gilles Dubreuil wrote:

On a different list we're talking about improving/new features on the client 
side of OpenStack APIs and this one came up (please see below).


Although API-ref is doing a great job, I believe the question is: Can we 
achieve an equivalent of aws-sdk? For instance could we have each project's 
API to publish its own schema?


I'm not sure I fully understand what you're asking about or for.
Is the request that the various OpenStack APIs publish some kind of
structured API description (using something like
https://www.openapis.org/ )? Various people did some exploration of
this, trying to use Swagger (as it was called then) to help with
documenting the APIs. What we discovered at the time was that a) using
Swagger on existing APIs didn't work as well as using it when
creating new ones, and b) microversions and Swagger don't play as well
together as we'd like.
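To illustrate point (b), a hedged sketch of the mismatch: in an OpenAPI/Swagger document the response schema hangs off the path and status code, while a microversioned API selects the response shape via a request header, which the format has no first-class way to express. Everything below is illustrative, not a real OpenStack specification:

```yaml
# Illustrative only -- not an actual published OpenStack schema.
paths:
  /servers/{server_id}:
    get:
      parameters:
        - name: OpenStack-API-Version
          in: header
          schema:
            type: string
          example: "compute 2.53"
      responses:
        "200":
          description: >
            The response body's shape depends on the negotiated
            microversion carried in the request header; OpenAPI has no
            first-class way to key a response schema off a header value.
```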

If you mean something like WADL or WSDL, the general decision has
been that such things only work if there is sufficient tooling on
the client side, and we don't want to require our users to have that
kind of tooling. Instead we'd prefer that the APIs converge toward
being relatively comprehensible and usable by humans.

If you mean something else (like perhaps using json-home?) please
explain what it is.

In any case, the API-WG has not taken upon itself to make any
assertions or statements about facilitating client creation
automation. Our position is that the APIs should be something you
can consume without a great deal of intermediation and we work
towards ensuring that. That's not a fixed decision, but is certainly
the case for now.

I suppose this would fit under the API-WG umbrella. Would that be correct? 
Could someone in the group provide feedback please?


It could well do if we can figure out what we're all talking about
:)

Trying to find current work around this, I can see the API capabilities
discovery [1] & [2] is great, but it seems to me that part of the challenge
for the WG is a lack of schema too. Wouldn't it make sense to have a
standardized way for all services' APIs (at least on the public side) to
publish their schema (including version/microversion details) before going
any further?


The capabilities work is oriented towards determining what features
are available either "in this cloud" or "on this specific resource".

I guess the most important questions I can ask at this point are "Can
you please define what you mean by schema?" and "If I had one, what
could I do with it?". That will go a long way to making sure we're
near to the same page. I can make some guesses, but better to be
sure.

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [Openstack] multi-card servers / direct links

2017-03-14 Thread Abdulhalim Dandoush
Dear all,

We aim at evaluating some recent data center architectures, such as
CamCube, FiConn, and DCell, which offer good properties in terms of
scale, bandwidth, fault tolerance, overheads, and deployment costs. In
the mentioned architectures, servers act not only as end hosts but also
as relay nodes for each other. In other words, servers have multiple
NICs and can be connected directly to other servers without passing
through a bridge or an OVS.
I am wondering whether we can add additional interfaces to VMs and
interconnect any two VMs directly via a virtual cable, as we did with
"ip link" for namespaces. Let us consider a simple scenario where all
the VMs belong to the same tenant or project and are hosted on the same
physical server.

More specifically, let's say that using network namespaces we can do it
easily as follows, and we want to do the same for compute VMs.

Create two network namespaces and interconnect them via a direct veth link

--
$ sudo ip netns add vm1

$ sudo ip netns add vm2

$ sudo ip link add name veth1 type veth peer name veth2

$ sudo ip link set dev veth2 netns vm2

$ sudo ip link set dev veth1 netns vm1

$ sudo ip netns exec vm2 ip link set dev veth2 up

$ sudo ip netns exec vm1 ip link set dev veth1 up

$ sudo ip netns exec vm2 ip address add 10.1.1.2/24 dev veth2

$ sudo ip netns exec vm1 ip address add 10.1.1.1/24 dev veth1

$ sudo ip netns exec vm1 bash

# ifconfig

veth1 Link encap:Ethernet  HWaddr 1e:d8:69:ba:76:e2

  inet addr:10.1.1.1  Bcast:0.0.0.0  Mask:255.255.255.0

  inet6 addr: fe80::1cd8:69ff:feba:76e2/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:8 errors:0 dropped:0 overruns:0 frame:0

  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)



# ping 10.1.1.2

PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.

64 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=0.051 ms

64 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=0.061 ms

64 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=0.072 ms

^C

--- 10.1.1.2 ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 1998ms

--
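For completeness, one hedged sketch of how the same wiring might look for KVM/libvirt guests managed outside Neutron: create a veth pair on the host, then hand one end to each domain as a plain ethernet interface. This is illustrative only -- the device names are examples and the exact XML attributes vary by libvirt version, so treat it as a starting point rather than a tested recipe:

```xml
<!-- On the host (outside libvirt):
       ip link add vm1-eth1 type veth peer name vm2-eth1
     Then attach one end of the pair in each domain's XML.
     Device names here are illustrative. -->

<!-- vm1's domain definition -->
<interface type='ethernet'>
  <target dev='vm1-eth1'/>
  <model type='virtio'/>
</interface>

<!-- vm2's domain definition -->
<interface type='ethernet'>
  <target dev='vm2-eth1'/>
  <model type='virtio'/>
</interface>
```

Note that traffic over such a link bypasses Neutron entirely (no security groups, no port bookkeeping), which is presumably acceptable for an experimental single-host evaluation but not for a production tenant network.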

Thanks in advance,

Abdulhalim
-- 
Abdulhalim Dandoush
---
PhD in Information Technology
Researcher-Teacher
ESME Sudria, Paris Sud
Images, Signals and Networks Lab
Tel: +33 1 56 20 62 33
Fax: +33 1 56 20 62 62



Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-14 Thread Lance Bragstad
Rodrigo,

Isn't what you just described the reseller use case [0]? Was that work ever
fully finished? I seem to remember discussions in Tokyo about it.


[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html

On Tue, Mar 14, 2017 at 7:38 AM, Rodrigo Duarte 
wrote:

> Hi Adrian,
>
> In project hierarchies, it is not that simple to have a "tree admin".
> Imagine you have something like the following:
>
> A -> B -> C
>
> You are an admin of project C and want to create a project called
> "secret_D" under C:
>
> A -> B -> C -> secret_D
>
> This is an example of a hierarchy where the admin of project A is not
> supposed to "see" the whole tree. Of course we could implement this in a
> different way, like using a flag "secret" in project "secret_D", but the
> implementation we chose was the one that made more sense for the way role
> assignments are organized. For example, we can assign to project A admin an
> inherited role assignment, which will give access to the whole subtree and
> make it impossible to create a "secret_D" project like we defined above -
> it is basically a choice between the possibility to have "hidden" projects
> or not in the subtree.
>
> However, we can always improve! Please submit a spec where we will be able
> to discuss in further detail the options we have to improve the current UX
> of the HMT feature :)
>
> On Tue, Mar 14, 2017 at 12:24 AM, Adrian Turjak 
> wrote:
>
>> Hello Keystone Devs,
>>
>> I've been playing with subtrees in Keystone for the last while, and one
>> thing that hit me recently is that as admin, I still can't actually do
>> subtree_as_list unless I have a role in all projects in the subtree.
>> This is kind of silly.
>
>
>> I can understand why this limitation was implemented, but it's also a
>> frustrating constraint because as an admin, I have the power to add
>> myself to all these projects anyway, why then can't I just list them?
>
>
>> Right now if I want to get a list of all the subtree projects I need to
>> do subtree_as_ids, then list ALL projects, and then go through that list
>> grabbing out only the projects I want. This is a pointless set of
>> actions, and having to get the full project list when I just need a
>> small subset is really wasteful.
>
>
>> Beyond the admin case, people may in fact want certain roles to be able
>> to see the full subtree regardless of access. In fact I have a role
>> 'project_admin' which allows you to edit your own roles within the scope
>> of your project, including set those roles to inherit down, and create
>> subprojects. If you have the project_admin role, it would make sense to
>> see the full subtree regardless of your actually having access to each
>> element in the subtree or not.
>>
>> Looking at the code in Keystone, I'm not entirely sure there is a good
>> way to set role based policy for this given how it was setup. Another
>> option might be to introduce a filter which allows listing of
>> subprojects. Project list is already an admin/cloud_admin only command
>> so there is no need to limit it, and the filter could be as simple as
>> 'subtree=' and would make getting subtrees as admin, or a
>> given admin-like role, actually doable without the pain of roles
>> everywhere.
>>
>> The HMT stuff in Keystone is very interesting, and potentially very
>> useful, but it also feels like so many of the features are a little
>> half-baked. :(
>>
>> Anyone with some insight into this and is there any interest in making
>> subtree listing more flexible/useful?
>>
>> Cheers,
>> Adrian Turjak
>>
>>
>> Further reading:
>> https://github.com/openstack/keystone-specs/blob/master/spec
>> s/keystone/kilo/project-hierarchy-retrieval.rst
>> https://bugs.launchpad.net/keystone/+bug/1434916
>> https://review.openstack.org/#/c/167231
>>
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>


[openstack-dev]  [Horizon] Empty metadata value support

2017-03-14 Thread Mateusz Kowalski
Hello everyone,

This mail is to ask for opinions and/or recommendations regarding inconsistent
behaviour between the CLI and the UI re: support of metadata keys with empty values.

The current behaviour is as follows: most, if not all, of the backend
components fully support custom metadata properties with value = null. At the
same time, the Horizon UI by default in all "Update ... Metadata" forms requires
a non-empty value for each key (that is, null is not a valid input).

We have the following scenario happening right now for one of our customers --
there is an image X uploaded via the CLI with the property "custom_x:null". The user
creates a VM from this image and later creates a snapshot of the VM (these two
steps behave identically in the CLI and the UI). Next, using the UI, he tries to rename the
snapshot he has just created using the "Edit Image" panel. Apparently the operation
is not possible, because the metadata tab is marked as "mandatory", with the property
"custom_x" appearing without any value and tagged as "required". This means our
user is forced to either put a non-null value into the property or completely
remove it in order to be able to rename the snapshot. At the same time, renaming
it using the CLI works without any impact on the metadata. The same applies to
changing any other detail like "image description", "visibility", or "protection".

Therefore the question: does anyone have a strong "no" against pushing a patch
which will allow null as a valid value for custom metadata across all of
Horizon?

Mateusz,
CERN


Re: [openstack-dev] [neutron][networking-l2gw] Unable to create release tag

2017-03-14 Thread Jeremy Stanley
On 2017-03-14 05:39:35 + (+), Gary Kotton wrote:
> I was asked to create a release tag for stable/ocata. This fails with:
[...]
>  ! [remote rejected] 10.0.0 -> 10.0.0 (prohibited by Gerrit)
[...]

The ACL for that repo doesn't seem to be configured to allow it
(yet):

http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/networking-l2gw.config

The Infra Manual section documenting that permission is:

https://docs.openstack.org/infra/manual/creators.html#creation-of-tags
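For orientation, the missing permission is usually a short stanza in the repo's ACL file. The stanza below is a hypothetical sketch -- the group name is an example, and the permission key has changed names across Gerrit releases (older releases use `pushSignedTag`, newer ones `createSignedTag`) -- so follow the Infra manual rather than copying this verbatim:

```ini
# Hypothetical sketch of a project-config ACL granting release-tag
# creation; the group name and exact permission key are examples.
[access "refs/tags/*"]
  pushSignedTag = group networking-l2gw-release
```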

It also may be helpful to review the section on manually tagging
releases:

https://docs.openstack.org/infra/manual/drivers.html#tagging-a-release

Hope that helps!
-- 
Jeremy Stanley



Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Tomasz Pa
Etcd seems to be a better choice for performance reasons as well:

https://coreos.com/blog/performance-of-etcd.html

TP

On 14 Mar 2017 12:45 am, "Davanum Srinivas"  wrote:

> On Tue, Mar 14 2017, Davanum Srinivas wrote:
>
> > Let's do it!! (etcd v2-v3 in tooz)
>
> Hehe. I'll move that higher in my priority list, I swear. But anyone is
> free to beat me to it in the meantime. ;)
>
> --
> Julien Danjou
> -- Free Software hacker
> -- https://julien.danjou.info
>


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-14 Thread Rodrigo Duarte
Hi Adrian,

In project hierarchies, it is not that simple to have a "tree admin".
Imagine you have something like the following:

A -> B -> C

You are an admin of project C and want to create a project called
"secret_D" under C:

A -> B -> C -> secret_D

This is an example of a hierarchy where the admin of project A is not
supposed to "see" the whole tree. Of course we could implement this in a
different way, like using a flag "secret" in project "secret_D", but the
implementation we chose was the one that made more sense for the way role
assignments are organized. For example, we can assign to project A admin an
inherited role assignment, which will give access to the whole subtree and
make it impossible to create a "secret_D" project like we defined above -
it is basically a choice between the possibility to have "hidden" projects
or not in the subtree.

However, we can always improve! Please submit a spec where we will be able
to discuss in further detail the options we have to improve the current UX
of the HMT feature :)
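The trade-off described above can be sketched as a toy model. Nothing here is Keystone's actual API or data model; it only shows why an inherited assignment on A makes a hidden "secret_D" impossible, while a direct assignment on C keeps it hidden:

```python
# Toy model of subtree visibility: an inherited role assignment on a
# project grants access to its entire subtree, so nothing below it can
# be hidden. All names and structures are illustrative.
PARENTS = {"B": "A", "C": "B", "secret_D": "C"}  # child -> parent

def ancestors(project):
    """Walk up the hierarchy, returning every ancestor of a project."""
    chain = []
    while project in PARENTS:
        project = PARENTS[project]
        chain.append(project)
    return chain

def can_see(project, assignments):
    """assignments: set of (project, inherited) role assignments."""
    if (project, False) in assignments or (project, True) in assignments:
        return True
    # An inherited assignment on any ancestor grants subtree access.
    return any((anc, True) in assignments for anc in ancestors(project))

# Direct role on C only: secret_D stays hidden from the C admin's parent.
direct_c = {("C", False)}
assert can_see("C", direct_c)
assert not can_see("secret_D", direct_c)

# Inherited role on A: the whole subtree is visible, nothing can hide.
inherited_a = {("A", True)}
assert all(can_see(p, inherited_a) for p in ("B", "C", "secret_D"))
```

This is the choice Rodrigo describes: grant project A's admin an inherited assignment and the full subtree (including any would-be secret project) becomes visible; withhold it and hidden subprojects remain possible.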

On Tue, Mar 14, 2017 at 12:24 AM, Adrian Turjak 
wrote:

> Hello Keystone Devs,
>
> I've been playing with subtrees in Keystone for the last while, and one
> thing that hit me recently is that as admin, I still can't actually do
> subtree_as_list unless I have a role in all projects in the subtree.
> This is kind of silly.


> I can understand why this limitation was implemented, but it's also a
> frustrating constraint because as an admin, I have the power to add
> myself to all these projects anyway, why then can't I just list them?


> Right now if I want to get a list of all the subtree projects I need to
> do subtree_as_ids, then list ALL projects, and then go through that list
> grabbing out only the projects I want. This is a pointless set of
> actions, and having to get the full project list when I just need a
> small subset is really wasteful.


> Beyond the admin case, people may in fact want certain roles to be able
> to see the full subtree regardless of access. In fact I have a role
> 'project_admin' which allows you to edit your own roles within the scope
> of your project, including set those roles to inherit down, and create
> subprojects. If you have the project_admin role, it would make sense to
> see the full subtree regardless of your actually having access to each
> element in the subtree or not.
>
> Looking at the code in Keystone, I'm not entirely sure there is a good
> way to set role based policy for this given how it was setup. Another
> option might be to introduce a filter which allows listing of
> subprojects. Project list is already an admin/cloud_admin only command
> so there is no need to limit it, and the filter could be as simple as
> 'subtree=' and would make getting subtrees as admin, or a
> given admin-like role, actually doable without the pain of roles
> everywhere.
>
> The HMT stuff in Keystone is very interesting, and potentially very
> useful, but it also feels like so many of the features are a little
> half-baked. :(
>
> Anyone with some insight into this and is there any interest in making
> subtree listing more flexible/useful?
>
> Cheers,
> Adrian Turjak
>
>
> Further reading:
> https://github.com/openstack/keystone-specs/blob/master/spec
> s/keystone/kilo/project-hierarchy-retrieval.rst
> https://bugs.launchpad.net/keystone/+bug/1434916
> https://review.openstack.org/#/c/167231
>
>
>



-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com




Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread Dougal Matthews
On 14 March 2017 at 11:14, Renat Akhmerov  wrote:

> Yeah, I finally understood too what Thomas meant.
>
> Just to clarify, I think mixed two different discussions here:
>
>1. Base framework for all actions residing in mistral-lib (what I was
>trying to discuss)
>2. The new design for OpenStack actions
>
>
> On #2 I agree with you that NovaAction.get_client(context) should work. No
> problem with that.
> And I believe that it doesn’t make sense to use multiple inheritance in
> this particular case, it’s
> simply not worth it.
>
> Getting back to #1.. Of course, using mixins can be problematic (method
> and state conflicts etc.).
> I think mixins is just one of the options that’s possible to use if we
> want to. Regular class inheritance
> is also an option I think. At this point if we just agree on action base
> class I think nothing prevents
> us from choosing how to evolve in the future. So just agreeing on the base
> class design seems
> to be sufficient for now. It’s just a base contract that a runner needs to
> be aware of (sorry for
> repeating this thought but I think it’s important). The rest is related
> with action developer
> convenience.
>
> I think the outstanding questions are:
>
> - should the context be a mixin or should run() always accept context?
>
>
> I’m for run() having “context” argument. Not sure why mixin is needed
> here. If an action doesn’t
> need context it can be ignored.
>
> - should async be a mixin or should we continue with the is_sync() method
> and overwriting that in the sublcass?
>
>
> I’m for is_sync() method as it is now. It’s more flexible and less
> confusing (imagine an action inheriting
> AsyncAction but having is_async() returning False).
>
> - should the openstack actions in mistral-extra be mixins?
>
>
> No, not at all. They don’t have to be. This is a different discussion I
> think. We need to collect what’s bad about
> the current OpenStack actions and think how to rewrite them (extract the
> common infrastructure they use,
> make them more extensible, etc.)
>

+1 to all of the above, I think we are in complete agreement and this will
give us both a flexible interface and one that is easy to use and
understand.
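The contract being agreed on here -- run() always takes a context argument that an action may ignore, and synchronicity is expressed by overriding is_sync() rather than by inheriting an AsyncAction mixin -- could be sketched roughly like this. The names are illustrative, not the final mistral-lib API:

```python
# Minimal sketch of the base-action contract under discussion.
class Action:
    def is_sync(self):
        return True  # synchronous by default; subclasses may override

    def run(self, context):
        raise NotImplementedError

class EchoAction(Action):
    """A synchronous action; it accepts the context but ignores it."""
    def __init__(self, output):
        self.output = output

    def run(self, context):
        return self.output

class PollingAction(Action):
    """An asynchronous action: run() kicks off work, the real result
    is delivered later by an external system."""
    def is_sync(self):
        return False

    def run(self, context):
        return None

assert EchoAction("hi").run(context=None) == "hi"
assert EchoAction("hi").is_sync()
assert not PollingAction().is_sync()
```

This keeps the runner's contract small -- it only needs to call run(context) and consult is_sync() -- and avoids the confusion Renat mentions of an AsyncAction subclass whose method contradicts its base class.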



>
>
> Renat Akhmerov
> @Nokia
>
>


[openstack-dev] [mistral][irc] Mistral team meeting time/channel change

2017-03-14 Thread Renat Akhmerov
Hi all,

The Mistral meeting is now going to be 1 hour earlier, at 15:00 UTC on Monday.
It will move to #openstack-meeting-3 due to a conflict in the current channel.

This move has been made to make it easier for people after daylight savings
kicks in. You can also find a calendar invite in the attachments for your
convenience.

Thanks!

Renat Akhmerov
@Nokia





Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Davanum Srinivas
Julien,

Let's do it!! (etcd v2-v3 in tooz)

Thanks,
Dims

On Tue, Mar 14, 2017 at 5:24 AM, Julien Danjou  wrote:
> On Mon, Mar 13 2017, Joshua Harlow wrote:
>
>> Etcd I think is also switching to grpc sometime in the future (afaik); that
>> feature is in alpha/?beta?/experimental right now.
>
> Yeah I think that's the main "problem" in tooz right now, it's that we
> still rely on etcd v2 and I think v3 is out with that. And the driver
> would need to be updated.
>
> I've that on my plate for a while now, but it never got urgent now. I'm
> still willing to work on it, especially if people are interested in
> using it. We would sure be for Telemetry.
>
> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread lương hữu tuấn
Oh, thanks Dougal, it is clear to me now since it is your TripleO use case.

So yes, in this case, IMHO, we should keep _get_client() as before but
change it to a classmethod. Maybe other methods too, like
_create_client(), etc. We can think of this change as an alternative to the
solution of creating extra Mixin classes.
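A rough sketch of that suggestion (illustrative names and toy clients only, not Mistral's real code): with a classmethod, an action can ask each base class for its client explicitly, sidestepping the name clash without renaming anything.

```python
class NovaAction(object):
    @classmethod
    def _create_client(cls, context):
        # a real implementation would build a novaclient from the context
        return ("nova", context["auth"])


class GlanceAction(object):
    @classmethod
    def _create_client(cls, context):
        return ("glance", context["auth"])


class MyAction(object):  # no inheritance from either action class needed
    def run(self, context):
        # qualify each call by class, so both clients are reachable
        nova = NovaAction._create_client(context)
        glance = GlanceAction._create_client(context)
        return nova, glance


clients = MyAction().run({"auth": "token-123"})
```

Because the calls are qualified by class, the method resolution order never comes into play.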

Br,

Tuan

On Tue, Mar 14, 2017 at 11:38 AM, Dougal Matthews  wrote:

>
>
> On 14 March 2017 at 10:21, lương hữu tuấn  wrote:
>
>>
>>
>> On Tue, Mar 14, 2017 at 10:28 AM, Dougal Matthews 
>> wrote:
>>
>>>
>>>
>>> On 13 March 2017 at 09:49, lương hữu tuấn  wrote:
>>>


 On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve 
 wrote:

> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
> >
> > One of the pain points for me as an action developer is the OpenStack
> > actions[1].  Since they all use the same method name to retrieve the
> > underlying client, you cannot simply inherit from more than one so
> you are
> > forced to rewrite the client access methods.  We saw this in creating
> > actions for TripleO[2].  In the base action in TripleO, we have
> actions that
> > make calls to more than one OpenStack client and so we end up
> re-writing and
> > maintaining code.  IMO the idea of using multiple inheritance there
> would be
> > helpful.  It may not require the mixin approach here, but rather a
> design
> > change in the generator to ensure the method names don't match.
>
> Is there any reason why those methods aren't functions? AFAICT they
> don't use the instance, they could live top level in the action module
> and be accessible by all actions. If you can avoid multiple
> inheritance (or inheritance!) you'll simplify the design. You could
> also do client = NovaAction().get_client() in your own action (if
> get_client was a public method).
>
> --
> Thomas
>
> If you want to do that, you need to change the whole structure of base
 action and the whole way of creating an action
 as you have described, and IMHO I myself do not like this idea:

 1. Mistral is working well (from the standpoint of creating actions), and
 changing it is not short-term work.
 2. Using a base class to create a base action is actually a good idea, in
 order to keep control and make things easy for action developers.
 The base class defines the whole mechanism to execute an action;
 developers do not need to take care of it, only of
 providing the OpenStack clients (the _create_client() method).
 3. From the #2 point of view, the alternative of
 NovaAction().get_client() does not make sense, since the problem here is
 the subclassing mechanism, not the way get_client() is called.

>>>
>> Hi,
>>
>> It was hard for me to understand what Thomas wanted to say, and I just
>> went by what he wrote :). Sorry for my misunderstanding.
>>
>>
>>> I might be wrong, but I think you read that Thomas wants to use
>>> functions for actions, not classes. I don't think that is the case. I think
>>> he is referring to the get_client method which is also what rbrady is
>>> referring to. At the moment multiple inheritance won't work if you want to
>>> inherit from NovaAction and KeystoneAction, because they both provide a
>>> "_get_client" method. If they had unique names, "get_keystone_client" and
>>> "get_nova_client", the multiple inheritance wouldn't clash.
>>>
>>> Sorry Dougal, but I do not get your point. Why couldn't get_client be
>> used through the instance, since it has the context?
>>
>
> In Mistral we have various OpenStack action classes. For example
> NovaAction[1] and GlanceAction[2] (and many others in that file). If I want
> to write an action that uses either Nova or Glance I can inherit from them,
> for example:
>
> class MyNovaAction(NovaAction):
>
>     def run(self):
>         client = self._create_client()
>         # ... do something with the client and return
>
> However, I might want to use two OpenStack clients, which I admit is a
> special case and I think one that only TripleO needs (that we know of):
>
> class MyNovaAndGlanceAction(NovaAction, GlanceAction):
>
>     def run(self):
>         nova = self._create_client()
>         glance = self._create_client()  # doesn't work: both bases expose
>         # the same _create_client name, so NovaAction's always wins
>
>
> If the methods were called "create_nova_client" and "create_glance_client",
> you could inherit from both without any conflict.
>
> However, based on the reply Thomas sent earlier, I think we should
> consider something like this when the OpenStack actions are moved to
> mistral-extra.
>
> nova = NovaAction.client(context)
>
> This slight adaptation changes "_create_client" to "client" and makes
> it a class method that accepts the context. I think this would provide a
> 

Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread Renat Akhmerov
Yeah, I finally understood too what Thomas meant.

Just to clarify, I think we mixed two different discussions here:
1. The base framework for all actions residing in mistral-lib (what I was
trying to discuss)
2. The new design for OpenStack actions

On #2 I agree with you that NovaAction.get_client(context) should work. No 
problem with that.
And I believe that it doesn’t make sense to use multiple inheritance in this 
particular case, it’s
simply not worth it.

Getting back to #1: of course, using mixins can be problematic (method and
state conflicts, etc.).
I think mixins are just one of the options we could use if we want to.
Regular class inheritance is also an option, I think. At this point, if we
just agree on an action base class, I think nothing prevents us from
choosing how to evolve in the future. So just agreeing on the base class
design seems sufficient for now. It's just a base contract that a runner
needs to be aware of (sorry for repeating this thought, but I think it's
important). The rest is about action developer convenience.

> I think the outstanding questions are:
> 
> - should the context be a mixin or should run() always accept context? 

I'm for run() having a "context" argument. Not sure why a mixin is needed
here. If an action doesn't need the context, it can ignore it.

> - should async be a mixin or should we continue with the is_sync() method
> and overriding that in the subclass?

I'm for the is_sync() method as it is now. It's more flexible and less
confusing (imagine an action inheriting AsyncAction but having is_async()
return False).

> - should the openstack actions in mistral-extra be mixins?


No, not at all. They don't have to be. This is a different discussion, I
think. We need to collect what's bad about the current OpenStack actions
and think about how to rewrite them (extract the common infrastructure
they use, make them more extensible, etc.).
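To make the preferences above concrete, here is a minimal sketch (hypothetical names, not the actual mistral-lib contract) of run() taking the context and is_sync() staying an overridable method rather than a mixin:

```python
class Action(object):
    """Hypothetical base contract a runner can rely on."""

    def is_sync(self):
        return True  # synchronous by default; subclasses may override

    def run(self, context):
        raise NotImplementedError


class EchoAction(Action):
    def run(self, context):
        # the context is passed in; actions that don't need it can ignore it
        return context.get("message")


class PollingAction(Action):
    def is_sync(self):
        return False  # the result arrives later, e.g. via a callback

    def run(self, context):
        return None  # the runner ignores the return value for async actions


def execute(action, context):
    """A toy runner: it only needs the base contract."""
    result = action.run(context)
    return result if action.is_sync() else "pending"


sync_result = execute(EchoAction(), {"message": "hi"})
async_result = execute(PollingAction(), {})
```

The runner never inspects the concrete class, only run() and is_sync(), which is the "base contract" point above.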


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-14 Thread Alexandra Settle
Hey Emilien,

You pretty much covered it all! The docs team is happy to provide guidance,
but in reality it should be a fairly straightforward process.

The Kolla team just completed their deploy-guide patches and were able to help 
refine the process a bit further. Hopefully this should help the TripleO team :)

Reach out if you have any questions at all :)

Thanks,

Alex 

On 3/13/17, 10:32 PM, "Emilien Macchi"  wrote:

Team,

[adding Alexandra, OpenStack Docs PTL]

It seems like there is a common interest in pushing deployment guides
for different OpenStack Deployment projects: OSA, Kolla.
The landing page is here:
https://docs.openstack.org/project-deploy-guide/newton/

And one example:
https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/

I think this is pretty awesome and it would bring more visibility for
TripleO project, and help our community to find TripleO documentation
from a consistent place.

The good news is that the openstack-docs team built a pretty solid
workflow to make that happen:
https://docs.openstack.org/contributor-guide/project-deploy-guide.html
And we don't need to create new repos or make any crazy changes; it
would probably be some refactoring and sphinx work.
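For reference, in projects that already publish one, the deploy-guide build boils down to an RST tree under deploy-guide/source plus a tox environment along these lines (a sketch based on the contributor guide; the exact env name and paths may differ per project):

```ini
[testenv:deploy-guide]
commands = sphinx-build -a -E -W -d deploy-guide/build/doctrees \
           -b html deploy-guide/source deploy-guide/build/html
```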

Alexandra, please add any words if I missed something obvious.

Feedback from the team would be welcome here before we start any work.

Thanks!
-- 
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread Dougal Matthews
On 14 March 2017 at 10:21, lương hữu tuấn  wrote:

>
>
> On Tue, Mar 14, 2017 at 10:28 AM, Dougal Matthews 
> wrote:
>
>>
>>
>> On 13 March 2017 at 09:49, lương hữu tuấn  wrote:
>>
>>>
>>>
>>> On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  wrote:
>>>
 On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
 >
 > One of the pain points for me as an action developer is the OpenStack
 > actions[1].  Since they all use the same method name to retrieve the
 > underlying client, you cannot simply inherit from more than one so
 you are
 > forced to rewrite the client access methods.  We saw this in creating
 > actions for TripleO[2].  In the base action in TripleO, we have
 actions that
 > make calls to more than one OpenStack client and so we end up
 re-writing and
 > maintaining code.  IMO the idea of using multiple inheritance there
 would be
 > helpful.  It may not require the mixin approach here, but rather a
 design
 > change in the generator to ensure the method names don't match.

 Is there any reason why those methods aren't functions? AFAICT they
 don't use the instance, they could live top level in the action module
 and be accessible by all actions. If you can avoid multiple
 inheritance (or inheritance!) you'll simplify the design. You could
 also do client = NovaAction().get_client() in your own action (if
 get_client was a public method).

 --
 Thomas

 If you want to do that, you need to change the whole structure of base
>>> action and the whole way of creating an action
>>> as you have described, and IMHO I myself do not like this idea:
>>>
>>> 1. Mistral is working well (from the standpoint of creating actions), and
>>> changing it is not short-term work.
>>> 2. Using a base class to create a base action is actually a good idea, in
>>> order to keep control and make things easy for action developers.
>>> The base class defines the whole mechanism to execute an action;
>>> developers do not need to take care of it, only of
>>> providing the OpenStack clients (the _create_client() method).
>>> 3. From the #2 point of view, the alternative of
>>> NovaAction().get_client() does not make sense, since the problem here is
>>> the subclassing mechanism, not the way get_client() is called.
>>>
>>
> Hi,
>
> It was hard for me to understand what Thomas wanted to say, and I just
> went by what he wrote :). Sorry for my misunderstanding.
>
>
>> I might be wrong, but I think you read that Thomas wants to use functions
>> for actions, not classes. I don't think that is the case. I think he is
>> referring to the get_client method which is also what rbrady is referring
>> to. At the moment multiple inheritance won't work if you want to inherit
>> from NovaAction and KeystoneAction, because they both provide a
>> "_get_client" method. If they had unique names, "get_keystone_client" and
>> "get_nova_client", the multiple inheritance wouldn't clash.
>>
>> Sorry Dougal, but I do not get your point. Why couldn't get_client be
> used through the instance, since it has the context?
>

In Mistral we have various OpenStack action classes. For example
NovaAction[1] and GlanceAction[2] (and many others in that file). If I want
to write an action that uses either Nova or Glance I can inherit from them,
for example:

class MyNovaAction(NovaAction):

    def run(self):
        client = self._create_client()
        # ... do something with the client and return

However, I might want to use two OpenStack clients, which I admit is a
special case and I think one that only TripleO needs (that we know of):

class MyNovaAndGlanceAction(NovaAction, GlanceAction):

    def run(self):
        nova = self._create_client()
        glance = self._create_client()  # doesn't work: both bases expose
        # the same _create_client name, so NovaAction's always wins


If the methods were called "create_nova_client" and "create_glance_client",
you could inherit from both without any conflict.

However, based on the reply Thomas sent earlier, I think we should consider
something like this when the OpenStack actions are moved to mistral-extra.

nova = NovaAction.client(context)

This slight adaptation changes "_create_client" to "client" and makes it
a class method that accepts the context. I think this would provide a very
clear interface. I also can't think of any advantage of inheriting from
NovaAction; there is no state shared with it, so we only want it to create
the client for us.


[1]:
https://github.com/openstack/mistral/blob/master/mistral/actions/openstack/actions.py#L75
[2]:
https://github.com/openstack/mistral/blob/master/mistral/actions/openstack/actions.py#L109
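A rough sketch of how distinctly named client methods would compose under multiple inheritance (hypothetical names and toy clients; the real classes live in mistral/actions/openstack/actions.py):

```python
class NovaAction(object):
    @classmethod
    def create_nova_client(cls, context):
        return ("nova", context["auth"])  # stand-in for real client setup


class GlanceAction(object):
    @classmethod
    def create_glance_client(cls, context):
        return ("glance", context["auth"])


class MyNovaAndGlanceAction(NovaAction, GlanceAction):
    def run(self, context):
        # no name clash: each base class contributes a distinct method
        nova = self.create_nova_client(context)
        glance = self.create_glance_client(context)
        return nova, glance


nova, glance = MyNovaAndGlanceAction().run({"auth": "token-123"})
```

Since the names never collide, the method resolution order no longer matters for client access.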


>
>
>
>> Thomas - The difficulty with these methods is that they need to access
>> the context - the context is going to be added to the action class, and
>> thus while the get_client methods 

[openstack-dev] [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Vikash Kumar
All,

I brought up a multinode setup with devstack, using the Ocata release.
Instance boots are getting stuck in the "scheduling" state; the state never
changes. Below is a link to the scheduler log.

http://paste.openstack.org/show/602635/


-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread lương hữu tuấn
On Tue, Mar 14, 2017 at 10:28 AM, Dougal Matthews  wrote:

>
>
> On 13 March 2017 at 09:49, lương hữu tuấn  wrote:
>
>>
>>
>> On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  wrote:
>>
>>> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
>>> >
>>> > One of the pain points for me as an action developer is the OpenStack
>>> > actions[1].  Since they all use the same method name to retrieve the
>>> > underlying client, you cannot simply inherit from more than one so you
>>> are
>>> > forced to rewrite the client access methods.  We saw this in creating
>>> > actions for TripleO[2].  In the base action in TripleO, we have
>>> actions that
>>> > make calls to more than one OpenStack client and so we end up
>>> re-writing and
>>> > maintaining code.  IMO the idea of using multiple inheritance there
>>> would be
>>> > helpful.  It may not require the mixin approach here, but rather a
>>> design
>>> > change in the generator to ensure the method names don't match.
>>>
>>> Is there any reason why those methods aren't functions? AFAICT they
>>> don't use the instance, they could live top level in the action module
>>> and be accessible by all actions. If you can avoid multiple
>>> inheritance (or inheritance!) you'll simplify the design. You could
>>> also do client = NovaAction().get_client() in your own action (if
>>> get_client was a public method).
>>>
>>> --
>>> Thomas
>>>
>>> If you want to do that, you need to change the whole structure of base
>> action and the whole way of creating an action
>> as you have described, and IMHO I myself do not like this idea:
>>
>> 1. Mistral is working well (from the standpoint of creating actions), and
>> changing it is not short-term work.
>> 2. Using a base class to create a base action is actually a good idea, in
>> order to keep control and make things easy for action developers.
>> The base class defines the whole mechanism to execute an action;
>> developers do not need to take care of it, only of
>> providing the OpenStack clients (the _create_client() method).
>> 3. From the #2 point of view, the alternative of
>> NovaAction().get_client() does not make sense, since the problem here is
>> the subclassing mechanism, not the way get_client() is called.
>>
>
Hi,

It was hard for me to understand what Thomas wanted to say, and I just
went by what he wrote :). Sorry for my misunderstanding.


> I might be wrong, but I think you read that Thomas wants to use functions
> for actions, not classes. I don't think that is the case. I think he is
> referring to the get_client method which is also what rbrady is referring
> to. At the moment multiple inheritance won't work if you want to inherit
> from NovaAction and KeystoneAction, because they both provide a
> "_get_client" method. If they had unique names, "get_keystone_client" and
> "get_nova_client", the multiple inheritance wouldn't clash.
>
> Sorry Dougal, but I do not get your point. Why couldn't get_client be
used through the instance, since it has the context?


> Thomas - The difficulty with these methods is that they need to access the
> context - the context is going to be added to the action class, and thus
> while the get_client methods don't use the instance now, they will soon -
> unless we change direction.
>
>
>
>> @Renat: I myself am not against multiple inheritance either; the only
>> thing is that if we want multiple inheritance, we should think more
>> thoroughly about the inheritance hierarchy, what each inheritance layer
>> does, etc. That work would make the multiple inheritance easy to
>> understand and easy for action developers to build on. So, IMHO, I vote
>> for making it simple and easy to understand first (if you continue with
>> mistral-lib) and then doing the next thing later.
>>
>> Br,
>>
>> Tuan/Nokia
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Thierry Carrez
Fox, Kevin M wrote:
> With my operator hat on, I would like to use the etcd backend, as I'm already 
> paying the cost of maintaining etcd clusters as part of Kubernetes. Adding 
> Zookeeper is a lot more work.

+1

In the spirit of better operationally integrating with Kubernetes, I
think we need to support etcd, at least as an option.

As I mentioned in another thread, for base services like databases,
message queues and distributed lock managers, the Architecture WG
started to promote an expand/contract model: start by supporting a
couple of viable options, and then, once operators / the market decide on
one winner, contract to supporting only that winner and start using the
specific features of that technology.

For databases and message queues, it's more than time for us to
contract. For DLMs, we are in the expand phase. We should only support a
very limited set of valuable options though -- no need to repeat the
mistakes of the past and support a dozen options just because we can.
Here it seems Zookeeper gives us the mature / featureful angle, and etcd
covers the Kubernetes cooperation / non-Java angle. I don't really see
the point of supporting a third option.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-14 Thread Thierry Carrez
Clint Byrum wrote:
> Excerpts from Doug Hellmann's message of 2017-03-13 15:12:42 -0400:
>> Excerpts from Farr, Kaitlin M.'s message of 2017-03-13 18:55:18 +:
>>> When the Barbican team first created the Castellan library, we had
>>> reached out to oslo to see if we could name it oslo.keymanager, but the
>>> idea was not accepted because the library didn't have enough traction.
>>> Now, Castellan is used in many projects, and we thought we would
>>> suggest renaming again.  At the PTG, the Barbican team met with the AWG
>>> to discuss how we could get Barbican integrated with more projects, and
>>> the rename was also suggested at that meeting.  Other projects are
>>> interested in creating encryption features, and a rename will help
>>> clarify the difference between Barbican and Castellan.
>>
>> Can you expand on why you think that is so? I'm not disagreeing with the
>> statement, but it's not obviously true to me, either. I vaguely remember
>> having it explained at the PTG, but I don't remember the details.
> 
> To me, Oslo is a bunch of libraries that encompass "the way OpenStack
> does X". When X is key management, projects are, AFAICT, universally
> using Castellan at the moment. So I think it fits in Oslo conceptually.
> 
> As far as what benefit there is to renaming it, the biggest one is
> divesting Castellan of the controversy around Barbican. There's no
> disagreement that explicitly handling key management is necessary. There
> is, however, still hesitance to fully adopt Barbican in that role. In
> fact I heard about some alternatives to Barbican, namely "Vault"[1] and
> "Tang"[2], that may be useful for subsets of the community, or could
> even grow into de facto standards for key management.
> 
> So, given that there may be other backends, and the developers would
> like to embrace that, I see value in renaming. It would help, I think,
> Castellan's developers to be able to focus on key management and not
> have to explain to every potential user "no we're not Barbican's cousin,
> we're just an abstraction..".

Well put.

Long-term, it will also help drive Barbican on the "base services" track
(an oslo.db-compatible database, an oslo.messaging-compatible queue, an
oslo.keymanager-compatible key manager...)

>>> Existing similar libraries (if any) and why they aren't being used: N/A
>>>
>>> Reviewer activity: Barbican team
>>
>> If the review team is going to be largely the same, I'm not sure I
>> see the benefit of changing the ownership of the library. We certainly
>> have other examples of Oslo libraries being managed mainly by
>> sub-teams made up of folks who primarily focus on other projects.
>> oslo.policy and oslo.versionedobjects come to mind, but in both of
>> those cases the code was incubated in Oslo or brought into Oslo
>> before the tools for managing shared libraries were widely used
>> outside of the Oslo team. We now have quite a few examples of project
>> teams managing shared libraries (other than their clients).

While it may be originally seeded by the same people, I think the two
groups may diverge in the future, especially if support for other key
managers is added.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread Dougal Matthews
On 14 March 2017 at 05:25, Renat Akhmerov  wrote:

> So again, I’m for simplicity but that kind of simplicity that also allows
> flexibility in the future.
>
> There’s one principle that I usually follow in programming that says:
>
> “*Space around code (absence of code) has more potential than the code
> itself.*”
>
> That means that it’s better to get rid of any stuff that’s not currently
> needed and add things
> as requirements change. However, that doesn’t always work well in
> framework development
> because the cost of initial inflexibility may become too high in future,
> that comes from the
> need to stay backwards compatible. What I’m trying to say is that IMO it’s
> ok just to keep
> it as simple as just a base class with method run() for now and think how
> we can add more
> things in the future, if we need to, using mixin approach. So seems like
> it’s going to be:
>
> class Action(object):
>
>   def run(self, ctx):
> …
>
>
> class Mixin1(object):
>
>   def method11(self):
> …
>
>   def method12(self):
> …
>
>
> class Mixin2(object):
>
>   def method21(self):
> …
>
>   def method22(self):
> …
>
>
> Then my concrete action could use a combination of Action and any of the
> mixin:
>
> class MyAction(Action, Mixin1):
>   …
>
>
> class MyAction(Action, Mixin2):
>   …
>
> or just
>
>
> class MyAction(Action):
>   …
>
> Is this flexible enough or does it have any potential issues?
>

Sure, that is fine and it works, but I think this is almost exactly what
rbrady proposed earlier in this thread.

I think the outstanding questions are:

- should the context be a mixin, or should run() always accept the context?
- should async be a mixin, or should we continue with the is_sync() method
and overriding that in the subclass?
- should the OpenStack actions in mistral-extra be mixins?


> IMO, a base class is still needed to define the contract that all actions
> should follow, so that
> a runner knows what's possible to do with actions.
>

I agree but I don't think anyone is suggesting we don't have a base class.


>
> Renat Akhmerov
> @Nokia
>
> On 13 Mar 2017, at 16:49, lương hữu tuấn  wrote:
>
>
>
> On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  wrote:
>
>> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
>> >
>> > One of the pain points for me as an action developer is the OpenStack
>> > actions[1].  Since they all use the same method name to retrieve the
>> > underlying client, you cannot simply inherit from more than one so you
>> are
>> > forced to rewrite the client access methods.  We saw this in creating
>> > actions for TripleO[2].  In the base action in TripleO, we have actions
>> that
>> > make calls to more than one OpenStack client and so we end up
>> re-writing and
>> > maintaining code.  IMO the idea of using multiple inheritance there
>> would be
>> > helpful.  It may not require the mixin approach here, but rather a
>> design
>> > change in the generator to ensure the method names don't match.
>>
>> Is there any reason why those methods aren't functions? AFAICT they
>> don't use the instance, they could live top level in the action module
>> and be accessible by all actions. If you can avoid multiple
>> inheritance (or inheritance!) you'll simplify the design. You could
>> also do client = NovaAction().get_client() in your own action (if
>> get_client was a public method).
>>
>> --
>> Thomas
>>
>> If you want to do that, you need to change the whole structure of base
> action and the whole way of creating an action
> as you have described, and IMHO I myself do not like this idea:
>
> 1. Mistral is working well (from the standpoint of creating actions), and
> changing it is not short-term work.
> 2. Using a base class to create a base action is actually a good idea, in
> order to keep control and make things easy for action developers.
> The base class defines the whole mechanism to execute an action;
> developers do not need to take care of it, only of
> providing the OpenStack clients (the _create_client() method).
> 3. From the #2 point of view, the alternative of NovaAction().get_client()
> does not make sense, since the problem here is the subclassing mechanism,
> not the way get_client() is called.
>
> @Renat: I myself am not against multiple inheritance either; the only
> thing is that if we want multiple inheritance, we should think more
> thoroughly about the inheritance hierarchy, what each inheritance layer
> does, etc. That work would make the multiple inheritance easy to
> understand and easy for action developers to build on. So, IMHO, I vote
> for making it simple and easy to understand first (if you continue with
> mistral-lib) and then doing the next thing later.
>
> Br,
>
> Tuan/Nokia
>

Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread Dougal Matthews
On 13 March 2017 at 09:49, lương hữu tuấn  wrote:

>
>
> On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  wrote:
>
>> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
>> >
>> > One of the pain points for me as an action developer is the OpenStack
>> > actions[1].  Since they all use the same method name to retrieve the
>> > underlying client, you cannot simply inherit from more than one so you
>> are
>> > forced to rewrite the client access methods.  We saw this in creating
>> > actions for TripleO[2].  In the base action in TripleO, we have actions
>> that
>> > make calls to more than one OpenStack client and so we end up
>> re-writing and
>> > maintaining code.  IMO the idea of using multiple inheritance there
>> would be
>> > helpful.  It may not require the mixin approach here, but rather a
>> design
>> > change in the generator to ensure the method names don't match.
>>
>> Is there any reason why those methods aren't functions? AFAICT they
>> don't use the instance, they could live top level in the action module
>> and be accessible by all actions. If you can avoid multiple
>> inheritance (or inheritance!) you'll simplify the design. You could
>> also do client = NovaAction().get_client() in your own action (if
>> get_client was a public method).
>>
>> --
>> Thomas
>>
>> If you want to do that, you need to change the whole structure of base
> action and the whole way of creating an action
> as you have described, and IMHO I myself do not like this idea:
>
> 1. Mistral is working well (from the standpoint of creating actions), and
> changing it is not short-term work.
> 2. Using a base class to create a base action is actually a good idea, in
> order to keep control and make things easy for action developers.
> The base class defines the whole mechanism to execute an action;
> developers do not need to take care of it, only of
> providing the OpenStack clients (the _create_client() method).
> 3. From the #2 point of view, the alternative of NovaAction().get_client()
> does not make sense, since the problem here is the subclassing mechanism,
> not the way get_client() is called.
>

I might be wrong, but I think you read that Thomas wants to use functions
for actions, not classes. I don't think that is the case. I think he is
referring to the get_client method which is also what rbrady is referring
to. At the moment multiple inheritance won't work if you want to inherit
from NovaAction and KeystoneAction, because they both provide a
"_get_client" method. If they had unique names, "get_keystone_client" and
"get_nova_client", the multiple inheritance wouldn't clash.

Thomas - The difficulty with these methods is that they need to access the
context - the context is going to be added to the action class, and thus
while the get_client methods don't use the instance now, they will soon -
unless we change direction.
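Thomas's module-level-function alternative, and the context trade-off Dougal points out, could be sketched like this (hypothetical names and toy clients): once the context lives on the action instance, the free functions need it passed explicitly.

```python
# Module-level client factories: no inheritance at all, but every call
# site must supply the context explicitly instead of reading it from self.

def create_nova_client(context):
    return ("nova", context["auth"])  # stand-in for real client setup


def create_glance_client(context):
    return ("glance", context["auth"])


class MyAction(object):
    def __init__(self, context):
        # the context is attached to the action, as described above
        self.context = context

    def run(self):
        nova = create_nova_client(self.context)
        glance = create_glance_client(self.context)
        return nova, glance


result = MyAction({"auth": "token-123"}).run()
```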



> @Renat: I myself am not against multiple inheritance either; the only
> thing is that if we want multiple inheritance, we should think more
> thoroughly about the inheritance hierarchy, what each inheritance layer
> does, etc. That work would make the multiple inheritance easy to
> understand and easy for action developers to build on. So, IMHO, I vote
> for making it simple and easy to understand first (if you continue with
> mistral-lib) and then doing the next thing later.
>
> Br,
>
> Tuan/Nokia
>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Julien Danjou
On Mon, Mar 13 2017, Joshua Harlow wrote:

> Etcd, I think, is also switching to gRPC sometime in the future (AFAIK); that
> feature is in alpha/beta/experimental right now.

Yeah, I think that's the main "problem" in tooz right now: we
still rely on the etcd v2 API, and I think v3 (which moves to gRPC) is
out, so the driver would need to be updated.

I've had that on my plate for a while now, but it never became urgent. I'm
still willing to work on it, especially if people are interested in
using it; we sure would be, for Telemetry.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-14 Thread lương hữu tuấn
Hi,

Agree with the simplicity that Renat has shown. I would not want to change
the base class, nor the way Mistral has been built up to now.

Br,

Tuan/Nokia

On Mar 14, 2017 6:29 AM, "Renat Akhmerov"  wrote:

> So again, I’m for simplicity but that kind of simplicity that also allows
> flexibility in the future.
>
> There’s one principle that I usually follow in programming that says:
>
> “*Space around code (absence of code) has more potential than the code
> itself.*”
>
> That means that it’s better to get rid of any stuff that’s not currently
> needed and add things
> as requirements change. However, that doesn’t always work well in
> framework development
> because the cost of initial inflexibility may become too high in future,
> that comes from the
> need to stay backwards compatible. What I’m trying to say is that IMO it’s
> ok just to keep
> it as simple as just a base class with method run() for now and think how
> we can add more
> things in the future, if we need to, using mixin approach. So seems like
> it’s going to be:
>
> class Action(object):
>
>   def run(self, ctx):
> …
>
>
> class Mixin1(object):
>
>   def method11(self):
> …
>
>   def method12(self):
> …
>
>
> class Mixin2(object):
>
>   def method21(self):
> …
>
>   def method22(self):
> …
>
>
> Then my concrete action could use a combination of Action and any of the
> mixin:
>
> class MyAction(Action, Mixin1):
>   …
>
>
> class MyAction(Action, Mixin2):
>   …
>
> or just
>
>
> class MyAction(Action):
>   …
>
> Is this flexible enough or does it have any potential issues?
>
> IMO, a base class is still needed to define the contract that all actions
> should follow, so that
> a runner knows what it can do with actions.
>
> Renat Akhmerov
> @Nokia
>
> On 13 Mar 2017, at 16:49, lương hữu tuấn  wrote:
>
>
>
> On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  wrote:
>
>> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
>> >
>> > One of the pain points for me as an action developer is the OpenStack
>> > actions[1].  Since they all use the same method name to retrieve the
>> > underlying client, you cannot simply inherit from more than one so you
>> are
>> > forced to rewrite the client access methods.  We saw this in creating
>> > actions for TripleO[2].  In the base action in TripleO, we have actions
>> that
>> > make calls to more than one OpenStack client and so we end up
>> re-writing and
>> > maintaining code.  IMO the idea of using multiple inheritance there
>> would be
>> > helpful.  It may not require the mixin approach here, but rather a
>> design
>> > change in the generator to ensure the method names don't match.
>>
>> Is there any reason why those methods aren't functions? AFAICT they
>> don't use the instance, they could live top level in the action module
>> and be accessible by all actions. If you can avoid multiple
>> inheritance (or inheritance!) you'll simplify the design. You could
>> also do client = NovaAction().get_client() in your own action (if
>> get_client was a public method).
>>
>> --
>> Thomas
>>
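
[Editor's note: a hypothetical sketch of what Thomas suggests, with illustrative names and return values: the client helpers become module-level functions, so an action can reach any of them without inheriting from per-service base classes.]

```python
# Hypothetical module-level client helpers instead of per-base-class
# methods; real helpers would build actual OpenStack clients from the
# request context.
def get_nova_client(context):
    return ("nova", context["user"])

def get_keystone_client(context):
    return ("keystone", context["user"])

class ListServersAction(object):
    """Uses two clients without needing multiple inheritance."""
    def run(self, context):
        nova = get_nova_client(context)
        keystone = get_keystone_client(context)
        return (nova[0], keystone[0])

assert ListServersAction().run({"user": "demo"}) == ("nova", "keystone")
```

Dougal's caveat above still applies: once the context lives on the action instance, these helpers either take a context parameter (as sketched) or move back onto the class.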
> If you want to do that, you need to change the whole structure of the base
> action and the whole way of creating an action
> as you have described, and IMHO I myself do not like this idea:
>
> 1. Mistral is working well (from the standpoint of creating actions), and
> changing it is not short-term work.
> 2. Using a base class to create the base action is actually a good idea, in order
> to keep control and make things easy for action developers.
> The base class defines the whole mechanism for executing an action;
> developers do not need to take care of it, only
> provide the OpenStack clients (the _create_client() method).
> 3. From the #2 point of view, the alternative of NovaAction().get_client()
> does not make sense, since the problem here is the subclass mechanism,
> not the way get_client() is called.
>
> @Renat: I myself am not against multiple inheritance either; the only thing
> is that if we want multiple inheritance, we should think about it
> more thoroughly: the inheritance hierarchy, what each inheritance
> layer does, etc. That work would make the multiple inheritance easy to
> understand, and easy for action developers to build on. So, IMHO, I
> vote for making it simple and easy to understand first (if you continue with
> mistral-lib) and then doing the next thing later.
>
> Br,
>
> Tuan/Nokia
>

[openstack-dev] [nova][cinder] Ceph volumes attached to local deleted instance could not be correctly handled

2017-03-14 Thread Zhenyu Zheng
Hi all,

We have met the following problem:

We deployed our environment with Ceph as the volume backend. We boot an
instance and attach a Ceph volume to it; if our nova-compute is down and we
delete the instance, it goes through local_delete, and the Ceph volume
attached to the instance changes to "available" status in Cinder, but
when we try to delete it, an error occurs. So we end up with an "available"
volume that can be neither attached nor deleted. We also tested with iSCSI
volumes and they seem fine.

I reported a bug about this:
https://bugs.launchpad.net/nova/+bug/1672624

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-l2gw] Unable to create release tag

2017-03-14 Thread Moshe Levi
Hi,
I had a similar problem with networking-mlnx (I think), and I resolved it by
creating an annotated tag.
Did you create the tag like this: “git tag -am "Adding 10.0.0 Ocata tag" -s
10.0.0 gerrit/stable/ocata”?
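
[Editor's note: the distinction Moshe points at can be seen locally. A "remote rejected ... prohibited by Gerrit" on a tag push is commonly because the pushed tag is lightweight (a bare ref to a commit) while Gerrit's ACL only permits annotated/signed tags (real tag objects). Repository name and messages below are illustrative; a release tag would additionally be GPG-signed with -s.]

```shell
# Sketch: lightweight vs annotated tags in a throwaway repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "initial"

# A lightweight tag is just a ref pointing straight at the commit.
git tag 10.0.0-light
# An annotated tag creates a tag object (tagger, date, message).
git -c user.email=a@b -c user.name=a tag -a -m "Ocata release" 10.0.0

git cat-file -t 10.0.0-light   # -> commit (no tag object exists)
git cat-file -t 10.0.0         # -> tag
```

Pushing the annotated object (optionally signed with -s, as in the command above) is what satisfies a Gerrit ACL that restricts release tags.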


From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, March 14, 2017 7:40 AM
To: OpenStack List 
Subject: [openstack-dev] [neutron][networking-l2gw] Unable to create release tag

Hi,
I was asked to create a release tag for stable/ocata. This fails with:

gkotton@ubuntu:~/networking-l2gw$ git push gerrit tag 10.0.0
Enter passphrase for key '/home/gkotton/.ssh/id_rsa':
Counting objects: 1, done.
Writing objects: 100% (1/1), 533 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
remote: Processing changes: refs: 1, done
To ssh://ga...@review.openstack.org:29418/openstack/networking-l2gw.git
 ! [remote rejected] 10.0.0 -> 10.0.0 (prohibited by Gerrit)
error: failed to push some refs to 
'ssh://ga...@review.openstack.org:29418/openstack/networking-l2gw.git'

Any idea?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev