Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-03-31 Thread ChangBo Guo
Just found the spec at https://review.openstack.org/#/c/452219/; I also
added the oslo.messaging API maintainers as reviewers.

2017-04-01 1:48 GMT+08:00 Davanum Srinivas :

> Let's do this! +1 :)
>
> On Fri, Mar 31, 2017 at 10:40 AM, Deja, Dawid 
> wrote:
> > Hi all,
> >
> > To work around issues with RabbitMQ scalability, we'd like to introduce a
> > new driver in oslo.messaging that has nearly no scaling limits [1].
> > We'd like to have as many eyes on this as possible, since we believe
> > this is the technology of the future. Thanks for all reviews.
> >
> > Dawid Deja
> >
> > [1] https://review.openstack.org/#/c/452219/
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] IRC Meeting cancelled on April 5, 2017 and Re-convene on April 12, 2017

2017-03-31 Thread HU, BIN
Hello folks,



Many of our folks will be at ONS in Santa Clara next week, so we will cancel 
the IRC meeting on April 5 so that everyone can focus on and enjoy the 
sessions at ONS. We will re-convene on April 12.



Thanks



Bin



[1] https://wiki.openstack.org/wiki/Gluon

[2] https://wiki.openstack.org/wiki/Meetings/Gluon



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-03-31 Thread Andreas Jaeger
Since this morning, part of logs.openstack.org has been corrupt due to a
downtime of one of the backing stores. The infra admins are currently
running fsck and will then take everything back into use.

Right now, we have put the logs on a spare disk so that everything that
runs still gets log results.

You might have received a "POST_FAILURE" result on jobs, since jobs could
not push their data to logs.o.o.

Once the system is up and running again, feel free to "recheck" your
jobs where you are missing log files and see "POST_FAILURE" reports.

For now, please do not recheck, so that we don't fill up our temporary
disk and we keep the load low.

Just a reminder:

You can always check the status of the CI infrastructure via:
* https://wiki.openstack.org/wiki/Infrastructure_Status
* by following twitter http://twitter.com/openstackinfra
* Or checking the topic in IRC

And then report problems via #openstack-infra on IRC.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-03-31 Thread Davanum Srinivas
Let's do this! +1 :)

On Fri, Mar 31, 2017 at 10:40 AM, Deja, Dawid  wrote:
> Hi all,
>
> To work around issues with RabbitMQ scalability, we'd like to introduce a
> new driver in oslo.messaging that has nearly no scaling limits [1].
> We'd like to have as many eyes on this as possible, since we believe
> this is the technology of the future. Thanks for all reviews.
>
> Dawid Deja
>
> [1] https://review.openstack.org/#/c/452219/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] No more 'reverify', use 'recheck'

2017-03-31 Thread Sean Dague
On 03/31/2017 01:29 PM, Andreas Jaeger wrote:
> In July 2014 [1] we aliased 'reverify' to 'recheck', and the two have done
> exactly the same thing since then. IMHO it's time to remove 'reverify' [2],
> since it only confuses people who issue 'reverify' expecting it to do
> something different. Note that our documentation does not mention
> 'reverify' at all.
> 
> So, for those few that still use 'reverify': better to always use
> 'recheck', since that is the only option available going forward.
> Andreas
> 
> 
> [1] https://review.openstack.org/#/c/111098/
> [2] https://review.openstack.org/#/c/452264
> 

+1000

It still causes confusion among folks.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] No more 'reverify', use 'recheck'

2017-03-31 Thread Andreas Jaeger
In July 2014 [1] we aliased 'reverify' to 'recheck', and the two have done
exactly the same thing since then. IMHO it's time to remove 'reverify' [2],
since it only confuses people who issue 'reverify' expecting it to do
something different. Note that our documentation does not mention
'reverify' at all.

So, for those few that still use 'reverify': better to always use
'recheck', since that is the only option available going forward.

Andreas


[1] https://review.openstack.org/#/c/111098/
[2] https://review.openstack.org/#/c/452264
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-03-31 Thread Jay Faulkner
I thought this spec/proposal was embargoed until tomorrow?!

-
Jay Faulkner
OSIC

> On Mar 31, 2017, at 7:40 AM, Deja, Dawid  wrote:
> 
> Hi all,
> 
> To work around issues with RabbitMQ scalability, we'd like to introduce a
> new driver in oslo.messaging that has nearly no scaling limits [1].
> We'd like to have as many eyes on this as possible, since we believe
> this is the technology of the future. Thanks for all reviews.
> 
> Dawid Deja
> 
> [1] https://review.openstack.org/#/c/452219/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] [heat] [cinder] MuranoPL types question

2017-03-31 Thread Paul Bourke

Thanks for the tips Stan.

> it is stored exactly as it comes from Heat. In theory the conversion to
> string could happen on the Heat side, but most likely it's just that the
> output of the reporter made it look like this


Did some digging and it actually seems to be a string on heat's end.
python-cinderclient presents volume attachment info as a list:

"""
from cinderclient.v2.client import Client
...
for i in cinder.volumes.list():
    print(type(i.attachments))  # list
"""

whereas printing the value of outputs from python-heatclient gives:

"""
import pprint
from heatclient.client import Client
...
s = heat.stacks.get('murano--sjsbdj0xwhrkz2')
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(s.outputs)

[   {   u'description': u'No description given',
        u'output_key': u'vol-08aea08f-f553-4f71-b839-bf4637516eaf-attachments',
        u'output_value': u"[{u'server_id': u'0f5731c1-da17-4209-a2ef-270c7056f9a3',
            u'attachment_id': u'881a5cea-be9e-4335-ad37-a24d09b36911',
            u'attached_at': u'2017-03-31T14:05:26.00', u'host_name': None,
            u'volume_id': u'6c97f825-32e9-4369-8580-a4a576e67612',
            u'device': u'/dev/vda',
            u'id': u'6c97f825-32e9-4369-8580-a4a576e67612'}]"},
    {   u'output_key': u'addresses--ae9a638e712d450a87492ed792025a97',
        u'output_value': [   {   u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:f6:26:ab',
                                 u'OS-EXT-IPS:type': u'fixed',
                                 u'addr': u'10.0.61.12',
                                 u'port': u'7967a4de-ccc1-49ec-a35c-f4d515e6cc96',
                                 u'version': 4}]},
"""

(note the value for 'addresses--ae9a638e712d450a87492ed792025a97' is in 
the correct format)


Changing the schema type of this in heat from STRING to LIST fails to 
validate[0], so somewhere between cinder and heat this is not getting 
deserialised properly.
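As a stopgap on the consuming side, the repr-style string can be turned back
into a list of dicts with ast.literal_eval, since it contains only Python
literals (a sketch, not the real fix; the sample value is abbreviated from
the output above):

```python
import ast

# Abbreviated example of the stringified attachment info as it comes back
# in the stack outputs (note the u'' prefixes from the Python repr).
raw = ("[{u'server_id': u'0f5731c1-da17-4209-a2ef-270c7056f9a3', "
       "u'device': u'/dev/vda', u'host_name': None}]")

# ast.literal_eval safely parses Python literals (unlike eval, it will not
# execute arbitrary code), recovering the original list of dicts.
attachments = ast.literal_eval(raw)
print(type(attachments).__name__, attachments[0]['device'])
```

This only papers over the symptom, of course; the proper fix is still to make
the value arrive from heat as a real list.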


Anyhow, I think the issue has moved beyond the scope of MuranoPL, so I'm 
just sending this more as a follow up for anyone who happens to be reading.


Thanks again for pointing me in the right direction.

-Paul

[0] 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/cinder/volume.py#L222-L225



On 30/03/17 19:55, Stan Lagun wrote:

Hi Paul,


Here both key and value appear to be a string (note, I can't confirm this as 
the typeinfo function doesn't appear to be available in Instance.yaml for some 
reason... perhaps I'm using it wrong)


They are not strings. The reporter converts everything that is passed to it
into a string by doing unicode(obj). What you see is the Python string
representation of lists and dicts, where every unicode string is
prefixed with "u".
The typeinfo function works on objects (instances of MuranoPL classes), not
on primitive types like strings, dicts and lists.


1) Why is the content of this 'get_attr' coming from heat being stored in the 
stack outputs as a string,

it is stored exactly as it comes from Heat. In theory the conversion to
string could happen on the Heat side, but most likely it's just that the
output of the reporter made it look like this


2) Is there a way I can cast this to a list of dicts or similar
structure so it can be iterated as expected?

It's hard to answer without seeing your code and how you got the results
that you provided.


when I access this from a sample app, I end up with a list of strings,
shown by $reporter as: ...

Curly braces in the output indicate that this is not a list of strings
but rather a single dict converted to a string by the reporter. So what
you see is the value of
that 'vol-7c8ee61f-a444-46a1-ad70-fc6fce6a7b56-attachments' output,
which sounds like what you wanted it to be. In MuranoPL you can have
very detailed contracts: not just [] (any list) but something like

Contract:
  - key1: string().notNull()
    key2: int().notNull()

which is a list of dicts with a contract on each dict entry. If you don't
get a contract-violation exception, you can be sure that the value is a
list of dicts with the appropriate keys/values rather than a list of
strings or anything else.
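For readers more familiar with Python than MuranoPL, the effect of that
contract is roughly analogous to a validation function like the following
(an illustrative sketch only; this is not how MuranoPL implements contracts):

```python
def check_contract(value):
    """Rough Python analogue of the MuranoPL contract above: value must be
    a list of dicts where 'key1' is a non-null string and 'key2' a non-null
    int. Raises ValueError on violation, returns the value otherwise."""
    if not isinstance(value, list):
        raise ValueError('contract violation: not a list')
    for item in value:
        if not isinstance(item, dict):
            raise ValueError('contract violation: entry is not a dict')
        if not isinstance(item.get('key1'), str):
            raise ValueError('contract violation: key1 must be a string')
        if not isinstance(item.get('key2'), int):
            raise ValueError('contract violation: key2 must be an int')
    return value

# Passes: a list of dicts with the right key types.
check_contract([{'key1': 'vda', 'key2': 1}])
```

The point is the same as Stan's: once the contract has been applied without
raising, the code consuming the value can rely on its shape.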


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Thu, Mar 30, 2017 at 10:17 AM, Paul Bourke wrote:

Hi Stan,

I had a quick (hopefully) question about MuranoPL that I hope you
might be able to help with; Felipe had mentioned you are very
knowledgeable in this area. If you don't have time, please disregard!

I'm working on a patch for the Murano core library to make volume
attachment info available: https://review.openstack.org/451909.
It's mostly working, but I'm
having some issues getting the types correct in MuranoPL to make
this info consumable by end users.

The attachment info from a sample run in the stack $outputs looks
like this (taken from the dashboard using $reporter)

u'vol-7c8ee61f-a444-46a1-ad70-fc6fce6a7b56-attachments':

Re: [openstack-dev] [barbican][castellan] How to share secrets in barbican

2017-03-31 Thread Dave McCowan (dmccowan)

Another option:

If you want to give User-A read access to all Project-B secrets, you could
assign User-A the role of "observer" in Project-B.

This would use the default RBAC policy, not give every user access to the
secrets, and be more convenient than adding each user to the ACL of each
secret.

Tacker would use the Operator's token to retrieve secrets, and not shared
credentials from the configuration file.
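Assuming the standard python-openstackclient tooling, granting that role
would look something like this (a sketch; "user-a" and "project-b" are
placeholder names):

```shell
# Grant User-A the 'observer' role on Project-B, so Barbican's default
# RBAC policy allows them to read that project's secrets.
openstack role add --user user-a --project project-b observer

# Verify the assignment (placeholder names shown).
openstack role assignment list --user user-a --project project-b --names
```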

On 3/31/17, 2:58 AM, "yanxingan"  wrote:

>
>Thanks Kaitlin Farr.
>
>In the tacker vim use case, an operator [user A] may create a vim with an
>account [user B] to access the NFVI. I want to store user B's password in
>barbican.
>
>There are two methods to store the secret:
>1. All user A's vim secrets are stored in one common reserved
>project/user, as mentioned.
>2. For each user A, the vim secret is stored in that user's own domain.
>
>The problems with 2 are:
>1) A vim cannot be shared between different projects with the default
>barbican RBAC policy.
>2) It's not secure to open access to all users via the RBAC policy. In
>addition, barbican may be invoked by other projects, e.g. nova, neutron
>lb.
>3) It's not convenient to add every user to the ACL of each of A's secrets.
>
>Does the barbican ACL support a "shared"-like attribute on a secret?
>
>
>On 2017/3/31 3:05, Farr, Kaitlin M. wrote:
>>
>>> As I understand it, secrets are saved in a user's domain, and another
>>> project/user cannot retrieve them.
>>> But I have a situation where many users need to retrieve the same secret.
>>>
>>> After looking into the castellan usage, I see the method of saving
>>> the credentials in configuration;
>>> then all operators use this pre-created user to create/retrieve
>>> secrets.
>>> I want to know: is this way typical and commonly accepted? Do other
>>> projects face this issue?
>>
>>
>> By default, the secrets in Barbican are available at the project level
>> [1]. I am not sure specifically which project or feature you are
>> referring to where all users need access to one secret, but I would
>> suggest that editing the Barbican RBAC policy or ACL is a more elegant
>> solution than storing a username/password in the conf file. You can find
>> more details about RBAC at [2] and a sample policy.json file at [3].
>>
>> Kaitlin Farr
>>
>> 1. https://developer.openstack.org/api-guide/key-manager/acls.html#default-acl
>> 2. https://docs.openstack.org/developer/barbican/admin-guide-cloud/access_control.html
>> 3. https://github.com/openstack/barbican/blob/master/etc/barbican/policy.json
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-03-31 Thread Jay Pipes

On 03/31/2017 10:40 AM, Deja, Dawid wrote:

Hi all,

To work around issues with RabbitMQ scalability, we'd like to introduce a
new driver in oslo.messaging that has nearly no scaling limits [1].
We'd like to have as many eyes on this as possible, since we believe
this is the technology of the future. Thanks for all reviews.

Dawid Deja

[1] https://review.openstack.org/#/c/452219/


++

As mentioned on the review, I'm on board to do code reviews on this, and 
I don't believe the implementation would take more than a couple of weeks.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-03-31 Thread Deja, Dawid
Hi all,

To work around issues with RabbitMQ scalability, we'd like to introduce a
new driver in oslo.messaging that has nearly no scaling limits [1].
We'd like to have as many eyes on this as possible, since we believe
this is the technology of the future. Thanks for all reviews.

Dawid Deja

[1] https://review.openstack.org/#/c/452219/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-31 Thread Jimmy McArthur
I think this is a bit of a roadblock for us getting the new Navigator 
up, so the sooner we can come to a decision the better :) Thanks for 
raising the issue with the TC!



Thierry Carrez 
March 31, 2017 at 3:43 AM

I like the idea to provide unified API information, however I'm not sure
the Technical Committee should stand in the way of getting that
information updated. The less we force through strict governance
processes, the better, so everything that can be maintained by some
other group should be.

This is closer to documentation than governance, so I wonder if this
document should not be maintained at the API WG or the Docs team or
service catalog level -- just keeping the same team names so that
information can be easily cross-referenced with the governance repository.

I'll raise the discussion in open discussion at the next TC meeting.

Brian Rosmaita 
March 29, 2017 at 9:23 PM
On 3/29/17 12:55 AM, Jimmy McArthur wrote:
[snip]

See what you think of these. They add an "apis" section to the glance
section of projects.yaml in the governance repo.

http://paste.openstack.org/show/604775/ has a complete history, where
for each release, the complete set of versions of the API that are
available in that release (and their statuses) are listed.

http://paste.openstack.org/show/604776/ has an abbreviated history,
where only the changes in available APIs are listed from version to
version, and if there's no change, the release isn't listed at all.

I don't know if this format would work for microversions, though. (And
I don't know if it's a good idea in the first place.)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur 
March 28, 2017 at 11:55 PM

Brian,

Thanks for the response. This is a tough one. Currently we're pulling 
API data manually for each project. That is no longer tenable when 
we're talking about 40+ projects. Plus, this info is something that 
is really sought after by the community. Some thoughts below:

Brian Rosmaita 
March 28, 2017 at 10:25 PM
On 3/27/17 5:01 PM, Lauren Sell wrote:
I don't have a helpful recommendation here, but the version history 
for Glance is incorrect as well. We maintain a version history in the 
glance api-ref [0], but that's probably not much help (and, as you 
point out, is idiosyncratic to Glance anyway). At this point, though, 
my primary concern is that it's showing a deprecated API version as 
the latest release. What format would it be useful for you to get 
this data in?

What we really need is the following:

* A project history, including the date of project inception that's 
included in the TC tags.
* An API history in an easily digestible format that all projects 
share. So whether you're doing micro releases or not, just something 
that allows us to show, based on a release timeline, which API 
versions per project are applicable for each OpenStack release. This 
really needs to be consistent from project to project b/c at the 
moment everyone handles it differently.

thanks,
brian

[0]
https://developer.openstack.org/api-ref/image/versions/index.html#version-history

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Brian Rosmaita 
March 28, 2017 at 10:25 PM
On 3/27/17 5:01 PM, Lauren Sell wrote:

Hi Matt,

Thanks for the feedback.


On Mar 24, 2017, at 3:50 PM, Matt Riedemann  wrote:

[snip]

2. The "API Version History" section in the bottom right says:

"Version v2.1 (Ocata) - LATEST RELEASE"

And links to https://releases.openstack.org/. 
The latest compute microversion in Ocata was actually 2.42:

https://docs.openstack.org/developer/nova/api_microversion_history.html

I'm wondering how we can better sort that out. I guess "API Version History" in 
the navigator is meant more for major versions and wasn't intended to handle 
microversions? That seems like something that should be dealt with at some 
point, as more and more projects are moving to using microversions.

Agreed, we could use some guidance here. From what 

[openstack-dev] Boston Forum Submission Deadline!

2017-03-31 Thread Melvin Hillsman
Hey everyone,

Please take time to *submit your proposals* from the etherpad(s) or any
other place you have captured/brainstormed. This is a friendly reminder
that all proposed Forum session leaders must submit their abstracts at:

http://forumtopics.openstack.org/

*before 11:59PM UTC on Sunday April 2nd!*

Regards,

TC/UC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-31 Thread Dave McCowan (dmccowan)


On 3/31/17, 4:43 AM, "Thierry Carrez"  wrote:

>Brian Rosmaita wrote:
>> On 3/29/17 12:55 AM, Jimmy McArthur wrote:
>> [snip]
>>> What we really need is the following:
>>>
>>> * A project history, including the date of project inception that's
>>> included in the TC tags.
>>> * An API history in an easily digestible format that all projects
>>>share.
>>> So whether you're doing micro releases or not, just something that
>>> allows us to show, based on a release timeline, which API versions per
>>> project are applicable for each OpenStack release. This really needs to
>>> be consistent from project to project b/c at the moment everyone
>>>handles
>>> it differently.
>> 
>> See what you think of these.  They add an "apis" section to the glance
>> section of projects.yaml in the governance repo.
>> 
>> http://paste.openstack.org/show/604775/ has a complete history, where
>> for each release, the complete set of versions of the API that are
>> available in that release (and their statuses) are listed.
>> 
>> http://paste.openstack.org/show/604776/ has an abbreviated history,
>> where only the changes in available APIs are listed from version to
>> version, and if there's no change, the release isn't listed at all.
>> 
>> I don't know if this format would work for microversions, though.  (And
>> I don't know if it's a good idea in the first place.)
>
>I like the idea to provide unified API information, however I'm not sure
>the Technical Committee should stand in the way of getting that
>information updated. The less we force through strict governance
>processes, the better, so everything that can be maintained by some
>other group should be.
>
>This is closer to documentation than governance, so I wonder if this
>document should not be maintained at the API WG or the Docs team or
>service catalog level -- just keeping the same team names so that
>information can be easily cross-referenced with the governance repository.
>
>I'll raise the discussion in open discussion at the next TC meeting.
>
>-- 
>Thierry Carrez (ttx)

There used to be an item on Navigator for "Existence of quality packages
in popular distributions." It was removed, with the intent to replace it
with a better way to maintain and update this information. [1][2]

Is now a good time to re-implement this tag as well?

[1] https://bugs.launchpad.net/openstack-org/+bug/1656843
[2] https://github.com/OpenStackweb/openstack-org/pull/59




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] rebuild_instance (nova evacuate) failed to add trunk port

2017-03-31 Thread Bence Romsics
Hi,

On Thu, Mar 30, 2017 at 10:14 PM, KHAN, RAO ADNAN  wrote:
> In Juno, there is an issue with instance rebuild (nova evacuate) when trunk
> port is associated with that instance.

I have a feeling you're not using the publicly available trunk port
version available in Neutron since the Newton release, but probably
a version derived from Ericsson's older prototype (which was never
merged into public Neutron as-is). I suggest you contact the
organization that delivered that code to you for maintenance.

Cheers,
Bence

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-03-31 Thread Jiří Stránský

On 30.3.2017 17:39, Juan Antonio Osorio wrote:

Why not drive the post-config with something like shade over ansible?
Similar to what the kolla-ansible community is doing.


We could use those perhaps, if they bring enough benefit to add them to 
the container image(s) (I think we'd still want to drive it via a 
container rather than fully externally). It's quite tempting to just 
load a YAML file with the endpoint definitions, iterate over 
them, and let Ansible handle the actual API calls...


However, currently I can't see endpoint management in the cloud modules 
docs [1], just service management. Looks like there's still a feature 
gap at the moment.


Jirka

[1] http://docs.ansible.com/ansible/list_of_cloud_modules.html#openstack
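To make the idea concrete, the service half that Ansible's cloud modules [1]
could already handle might look roughly like this (a hypothetical sketch of
the YAML-driven approach; the variable layout is invented, and the missing
endpoint module is exactly the gap noted above):

```yaml
# Hypothetical playbook sketch: drive keystone service creation from a
# YAML list and let Ansible's os_keystone_service module make the API
# calls. Endpoint management is the piece still missing from the modules.
- hosts: localhost
  vars:
    keystone_services:
      - {name: glance, type: image, description: OpenStack Image Service}
      - {name: nova, type: compute, description: OpenStack Compute}
  tasks:
    - name: Ensure keystone services exist
      os_keystone_service:
        name: "{{ item.name }}"
        service_type: "{{ item.type }}"
        description: "{{ item.description }}"
        state: present
      with_items: "{{ keystone_services }}"
```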



On 30 Mar 2017 16:42, "Jiří Stránský"  wrote:


On 30.3.2017 14:58, Dan Prince wrote:


There is one case that I was thinking about reusing this piece of code
within a container to help initialize keystone endpoints. It would
require some changes and updates (to match how puppet-* configures
endpoints).

For TripleO containers we use various puppet modules (along with hiera)
to drive the creation of endpoints. This functionally works fine, but
is quite slow to execute (puppet is slow here) and takes several
minutes to complete. I'm wondering if a single optimized python script
might serve us better here. It could be driven via YAML (perhaps
similar to our Hiera), idempotent, and likely much faster than having
the code driven by puppet. This doesn't have to live in
os-cloud-config, but initially I thought that might be a reasonable place for
it. It is worth pointing out that this would be something that would
need to be driven by our t-h-t workflow and not a post-installation
task. So perhaps that makes it not a good fit for os-cloud-config. But
it is similar to the keystone initialization already there so I thought
I'd mention it.



I agree we could have an optimized python script instead of puppet to do
the init. However, os-cloud-config also doesn't strike me as the ideal
place.

What might be interesting is solving the keystone init within containers
along with our container entrypoint situation. We've talked earlier that we
may have to build our custom entrypoints into the images as we sometimes
need to do things that the current entrypoints don't seem fit for, or don't
give us enough control over what happens. This single optimized python
script for endpoint config you mentioned could be one of such in-image
entrypoint scripts. We could build multiple different scripts like this
into a single image and select the right one when starting the container
(defaulting to a script that handles the usual "worker" case, in this case
Keystone API).

This gets somewhat similar to the os-cloud-config use case, but even if we
wanted a separate repo, or even an RPM for these, I suppose it would be
cleaner to just start from scratch rather than repurpose os-cloud-config.

Jirka



Dan

On Thu, 2017-03-30 at 08:13 -0400, Emilien Macchi wrote:


Hi,

os-cloud-config was deprecated in the Ocata release and is going to be
removed in Pike.

The TripleO project doesn't need it anymore, and after some investigation
on codesearch.openstack.org, nobody is using it in OpenStack.
I'm working on the removal this cycle, so please let us know of any
concerns.

Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 17

2017-03-31 Thread Chris Dent


Here's an update on resource providers and placement. I've been away
from OpenStack for most of this week, so apologies if I have missed
something important. Please follow up with anything I've missed (or
questions or comments or anything else).

# What Matters Most

Traits are still top of the priority stack. Links below.
Conversation is also happening around the need to make claims,
with some different ideas on scope. More on that, also below.

# What's Changed

Andrey Volkov stepped up to move placement api-ref work forward.
There's now a CI job gate-placement-api-ref-nv which will publish
drafts of placement-api-ref work. With this in place we can start
making real progress on those docs. More in the #docs theme below.

The repo for osc-placement has been created:
https://github.com/openstack/osc-placement

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
  https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
  section below).

* Helping to create and evaluate functional tests of the resource
  tracker and the ways in which it and nova-scheduler use the
  reporting client. For some info see
  https://etherpad.openstack.org/p/nova-placement-functional
  and talk to edleafe.

* Performance testing. If you have access to some nodes, some basic
  benchmarking and profiling would be very useful. See the
  performance section below.

# Main Themes

## Traits

The work to implement the traits API in placement is happening at


https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits

It's getting close. The sooner we get that happy, the sooner we can
make progress on the rest of the themes. What's left are mostly
details.

## Ironic/Custom Resource Classes

A spec for "custom resource classes in flavors" that describes the
stuff that will actually make use of custom resource classes has merged:

  https://review.openstack.org/#/c/446570/

Over in ironic some functional and integration tests have started:

  https://review.openstack.org/#/c/443628/

## Claims in the Scheduler

A "Super WIP" spec for claims in the scheduler has started at

https://review.openstack.org/#/c/437424/

There are several different points of view on how this is supposed
to work. We need to resolve those differences to make some progress.
We all seem to agree on the long term plan, but not on how to get
there in the medium term.

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Progress on this will continue once traits and claims have moved forward.

## Nested Resource Providers

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

The spec for this has been updated with what was learned at the PTG and
to move it to Pike; it needs to be reviewed.

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

As mentioned above Andrey Volkov has made some excellent progress on
making this work. There's now a draft publishing job in place, but
with the current state of logs.openstack.org it's hard to see if it
is working correctly. If not, we'll fix it.

With this stuff in place it should now be possible to start filling
in the relevant documentation for the API. Note that at the moment
this is mostly a manual process; the author of the docs is expected
to gather the relevant JSON samples from running their own requests
against the API (using, for example, curl) and formatting
appropriately. I've been using gabbi-run for this as it makes it
easy to tweak and capture the JSON being sent.
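For the "gather and format JSON samples" step, a small dependency-free sketch of turning a captured response body into a stable, doc-ready sample (the payload here is a made-up illustration, not real placement output):

```python
import json

def format_json_sample(payload, indent=2):
    """Render a captured API response body as a doc-ready JSON sample.

    `payload` would typically come from json.loads() of a response
    captured with curl or gabbi-run; sorting keys keeps the samples
    deterministic and diff-friendly across regenerations.
    """
    return json.dumps(payload, indent=indent, sort_keys=True) + "\n"

# Hypothetical response body, for illustration only.
sample = {"resource_providers": [{"uuid": "5cf-example", "generation": 1}]}
print(format_json_sample(sample))
```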

Find me (cdent) or Andrey (avolkov) if you want to help out or have
other questions.

## Performance

We're aware that there are some redundancies in the resource tracker
that we'd like to clean up

 http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself.

We ought to do some testing to make sure there aren't unexpected
performance drains.
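As a starting point for that testing, a tiny helper for summarizing request latencies; the measurements themselves would come from whatever load tool you point at the placement API:

```python
def latency_summary(samples_ms):
    """Return min/median/p95/max for a list of latency samples (in ms)."""
    ordered = sorted(samples_ms)

    def pct(p):
        # Nearest-rank percentile: small, dependency-free, good enough
        # for spotting unexpected performance drains.
        idx = max(0, int(round(p / 100.0 * len(ordered))) - 1)
        return ordered[idx]

    return {
        "min": ordered[0],
        "p50": pct(50),
        "p95": pct(95),
        "max": ordered[-1],
    }

print(latency_summary([12, 15, 11, 90, 14, 13, 16, 12, 15, 14]))
```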

# Other Code/Specs

* https://review.openstack.org/#/c/418393/
A spec for improving the level of detail and structure in placement
error responses so that it is easier to distinguish between
different types of, for example, 409 responses.

* https://review.openstack.org/#/c/423872/
 Spec for versioned-object based notification of events in the
 placement API. We had some discussion in the weekly subteam
 meeting about what the use cases are for this, starting

 
http://eavesdrop.openstack.org/meetings/nova_scheduler/2017/nova_scheduler.2017-03-06-14.00.log.html#l-120

 The gist is: "we should do this when we can articulate the
 problem the notifications will solve". Comment on the review if
 you have some opinions on that.

* https://review.openstack.org/#/c/448791/
   Idempotent PUT for resource classes. This is 

Re: [openstack-dev] [kolla] my work on Debian and non-x86 architectures

2017-03-31 Thread Marcin Juszkiewicz
W dniu 17.02.2017 o 22:33, Marcin Juszkiewicz pisze:

> Current stats:

Instead of writing how many images got built, I decided to make a page with
the results:

http://people.linaro.org/~marcin.juszkiewicz/kolla/

There you can find which images got built for any combination of those:

distros = centos, debian, ubuntu
architectures = aarch64, ppc64le, x86_64
build types = binary, source

Logs are provided.

https://review.openstack.org/#/q/owner:%22Marcin+Juszkiewicz%22+status:open
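The matrix on that page is simply the cross product of the three lists above, which a one-liner enumerates:

```python
from itertools import product

distros = ("centos", "debian", "ubuntu")
architectures = ("aarch64", "ppc64le", "x86_64")
build_types = ("binary", "source")

# Every (distro, architecture, build type) combination tracked on the page.
combinations = list(product(distros, architectures, build_types))
print(len(combinations))  # 3 * 3 * 2 = 18 combinations
```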

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-31 Thread ChangBo Guo
2017-03-31 2:55 GMT+08:00 Doug Hellmann :

> Excerpts from Paul Belanger's message of 2017-03-22 09:58:46 -0400:
> > On Tue, Mar 21, 2017 at 05:53:35PM -0600, Alex Schultz wrote:
> > > On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
> > > >
> > > >
> > > > On 21 Mar 2017, at 15:34, Alex Schultz wrote:
> > > >
> > > >> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
> > > >>> I've been following this thread, but I must admit I seem to have
> missed something.
> > > >>>
> > > >>> What problem is being solved by storing per-server service
> configuration options in an external distributed CP system that is
> currently not possible with the existing pattern of using local text files?
> > > >>>
> > > >>
> > > >> This effort is partially to help the path to containerization where
> we
> > > >> are delivering the service code via container but don't want to
> > > >> necessarily deliver the configuration in the same fashion.  It's
> about
> > > >> ease of configuration where moving service -> config files (on many
> > > >> hosts/containers) to service -> config via etcd (single source
> > > >> cluster).  It's also about an alternative to configuration
> management
> > > >> where today we have many tools handling the files in various ways
> > > >> (templates, from repo, via code providers) and trying to come to a
> > > >> more unified way of representing the configuration such that the end
> > > >> result is the same for every deployment tool.  All tools load
> configs
> > > >> into $place and services can be configured to talk to $place.  It
> > > >> should be noted that configuration files won't go away because many
> of
> > > >> the companion services still rely on them (rabbit/mysql/apache/etc)
> so
> > > >> we're really talking about services that currently use oslo.
> > > >
> > > > Thanks for the explanation!
> > > >
> > > > So in the future, you expect a node in a clustered OpenStack service
> to be deployed and run as a container, and then that node queries a
> centralized etcd (or other) k/v store to load config options. And other
> services running in the (container? cluster?) will load config from local
> text files managed in some other way.
> > >
> > > No the goal is in the etcd mode, that it  may not be necessary to load
> > > the config files locally at all.  That being said there would still be
> > > support for having some configuration from a file and optionally
> > > provide a kv store as another config point.  'service --config-file
> > > /etc/service/service.conf --config-etcd proto://ip:port/slug'
> > >
> > Hmm, not sure I like this.  Having a service magically read from 2
> different
> > configuration source at run time, merge them, and reload, seems overly
> > complicated. And even harder to debug.
> >
> > > >
> > > > No wait. It's not the *services* that will load the config from a kv
> store--it's the config management system? So in the process of deploying a
> new container instance of a particular service, the deployment tool will
> pull the right values out of the kv system and inject those into the
> container, I'm guessing as a local text file that the service loads as
> normal?
> > > >
> > >
> > > No the thought is to have the services pull their configs from the kv
> > > store via oslo.config.  The point is hopefully to not require
> > > configuration files at all for containers.  The container would get
> > > where to pull it's configs from (ie. http://11.1.1.1:2730/magic/ or
> > > /etc/myconfigs/).  At that point it just becomes another place to load
> > > configurations from via oslo.config.  Configuration management comes
> > > in as a way to load the configs either as a file or into etcd.  Many
> > > operators (and deployment tools) are already using some form of
> > > configuration management so if we can integrate in a kv store output
> > > option, adoption becomes much easier than making everyone start from
> > > scratch.
> > >
> > > > This means you could have some (OpenStack?) service for inventory
> management (like Karbor) that is seeding the kv store, the cloud
> infrastructure software itself is "cloud aware" and queries the central
> distributed kv system for the correct-right-now config options, and the
> cloud service itself gets all the benefits of dynamic scaling of available
> hardware resources. That's pretty cool. Add hardware to the inventory, the
> cloud infra itself expands to make it available. Hardware fails, and the
> cloud infra resizes to adjust. Apps running on the infra keep doing their
> thing consuming the resources. It's clouds all the way down :-)
> > > >
> > > > Despite sounding pretty interesting, it also sounds like a lot of
> extra complexity. Maybe it's worth it. I don't know.
> > > >
> > >
> > > Yea there's extra complexity at least in the
> > > deployment/management/monitoring of the new service or maybe not.
> > > Keeping configuration files synced across 1000s of nodes (or

Re: [openstack-dev] [freezer] Demo of new OpenStack Freezer features

2017-03-31 Thread Trinath Somanchi
Excellent!

Thanks for sharing this demo.


Thanks,
Trinath Somanchi | HSDC, GSD, DN | NXP – INDA.


From: Julia Odruzova [mailto:jvarlam...@mirantis.com]
Sent: Friday, March 31, 2017 4:16 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [freezer] Demo of new OpenStack Freezer features


Hi Freezer team!



I prepared a short demo of recently implemented Freezer features.

This is a brief overview of features our Mirantis team took part in – rsync 
engine, backup to local/remote host and Nova tenant backup.




https://www.youtube.com/watch?v=MYPQ5VKddg0



Thanks,

Julia Odruzova






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Demo of new OpenStack Freezer features

2017-03-31 Thread Saad Zaher
Great!

Thanks Julia!

On Fri, Mar 31, 2017 at 10:46 AM, Julia Odruzova 
wrote:

> Hi Freezer team!
>
>
> I prepared a short demo of recently implemented Freezer features.
>
> This is a brief overview of features our Mirantis team took part in –
> rsync engine, backup to local/remote host and Nova tenant backup.
> 
>
>
> https://www.youtube.com/watch?v=MYPQ5VKddg0
>
>
> Thanks,
>
> Julia Odruzova
>
>
> 
>
>
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
--
Best Regards,
Saad!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Demo of new OpenStack Freezer features

2017-03-31 Thread Mathieu, Pierre-Arthur
That's really cool!!

Thank you Julia !


From: Julia Odruzova 
Sent: Friday, March 31, 2017 11:46:20 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [freezer] Demo of new OpenStack Freezer features

Hi Freezer team!


I prepared a short demo of recently implemented Freezer features.

This is a brief overview of features our Mirantis team took part in – rsync 
engine, backup to local/remote host and Nova tenant backup.



https://www.youtube.com/watch?v=MYPQ5VKddg0


Thanks,

Julia Odruzova








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] Demo of new OpenStack Freezer features

2017-03-31 Thread Julia Odruzova
Hi Freezer team!


I prepared a short demo of recently implemented Freezer features.

This is a brief overview of features our Mirantis team took part in – rsync
engine, backup to local/remote host and Nova tenant backup.



https://www.youtube.com/watch?v=MYPQ5VKddg0


Thanks,

Julia Odruzova






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - heads up for mechanism drivers that don't use in tree DHCP agent

2017-03-31 Thread Neil Jerram
Thanks for the heads up, Kevin!

Is this still necessary if a deployment disables the Neutron server's DHCP
scheduling, with

self._supported_extension_aliases.remove("dhcp_agent_scheduler")

?

Thanks,
  Neil


On Fri, Mar 31, 2017 at 12:52 AM Kevin Benton  wrote:

> Hi,
>
> Once [1] merges, a port will not transition to ACTIVE on a subnet with
> enable_dhcp=True unless something clears the DHCP provisioning block.
>
> If your mechanism driver uses the in-tree DHCP agent, there is nothing you
> need to do. However, if you do not utilize the DHCP agent in your
> deployment scenarios and you offload DHCP to something else, your mechanism
> driver must now explicitly acknowledge that DHCP has been provisioned for
> that port.
>
> Acknowledging that DHCP is ready for a port is a one-line call to the
> provisioning_blocks module[2]. For more information on provisioning blocks,
> see [3].
>
> 1. https://review.openstack.org/452009
> 2.
> https://github.com/openstack/neutron/blob/4ed53a880714fd33280064c58e6f91b9ecd3823e/neutron/api/rpc/handlers/dhcp_rpc.py#L292-L294
> 3.
> https://docs.openstack.org/developer/neutron/devref/provisioning_blocks.html
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
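To make the mechanics concrete, here is a self-contained toy model of the provisioning-blocks pattern -- not Neutron's actual code (the real one-line acknowledgement is the provisioning_blocks.provisioning_complete() call shown in [2]): a port transitions to ACTIVE only once every registered entity has reported completion.

```python
class ProvisioningBlocks:
    """Toy model of Neutron's provisioning-blocks pattern."""

    def __init__(self):
        self._blocks = {}  # port_id -> set of entities still pending

    def add_block(self, port_id, entity):
        self._blocks.setdefault(port_id, set()).add(entity)

    def provisioning_complete(self, port_id, entity):
        pending = self._blocks.get(port_id, set())
        pending.discard(entity)
        if not pending:
            self._blocks.pop(port_id, None)
            return "ACTIVE"  # all blocks cleared: transition the port
        return "DOWN"        # still waiting on other entities

blocks = ProvisioningBlocks()
blocks.add_block("port-1", "L2")    # added by the core plugin
blocks.add_block("port-1", "DHCP")  # added when enable_dhcp=True
print(blocks.provisioning_complete("port-1", "L2"))    # DOWN
print(blocks.provisioning_complete("port-1", "DHCP"))  # ACTIVE
```

This is why out-of-tree DHCP backends must now send the acknowledgement themselves: with no in-tree DHCP agent, nothing else ever clears that block.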
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Candidate proposals for TC (Technical Committee) positions are now open

2017-03-31 Thread Tristan Cacqueray
On 03/31/2017 08:58 AM, Thierry Carrez wrote:
> Jens Rosenboom wrote:
>> 2017-03-31 2:00 GMT+02:00 Kendall Nelson :
>>> Candidate proposals for the Technical Committee positions (7 positions) are
>>> now open and will remain open until 2017-04-20, 23:45 UTC
>>
>> The table below states a much earlier closing time.
> 
> Yeah, I think there was confusion between election closing time (April
> 20) and nomination deadline (April 9). The election website has the
> reference dates (it's currently down, but should be fixed soon):
> 
> TC nomination starts   @ 2017-03-30, 23:59 UTC
> TC nomination ends @ 2017-04-09, 23:45 UTC
> TC elections starts@ 2017-04-13, 23:59 UTC
> TC elections ends  @ 2017-04-20, 23:45 UTC
> 
The website has been updated with the above timeline:
  https://governance.openstack.org/election/

-Tristan

>>> The election will be held from March 30th through to 23:45 April 20th, 2017
>>
>> This also doesn't match the dates listed below, please clarify.
> 
> That's correct if you consider the "election" to include nomination,
> campaigning and voting. But then in other places "election" is
> synonymous to "voting", hence the confusion. Again, the timeline has the
> right information.
> 






Re: [openstack-dev] How to find "ovs_mechanism + vxlan tenant_type_driver + L2_pop" configure steps?

2017-03-31 Thread Jens Rosenboom
2017-03-31 3:10 GMT+00:00 Sam :
> Hi all,
>
> In the openstack-ocata install docs, I only found "linux_bridge + vxlan + l2pop"
> configuration steps. Are there "ovs+vxlan+l2pop" configuration steps?

The install-guide only covers the most basic setup, for ovs you will
need to look at the corresponding section in the networking-guide:
https://docs.openstack.org/ocata/networking-guide/deploy-ovs.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Candidate proposals for TC (Technical Committee) positions are now open

2017-03-31 Thread Thierry Carrez
Jens Rosenboom wrote:
> 2017-03-31 2:00 GMT+02:00 Kendall Nelson :
>> Candidate proposals for the Technical Committee positions (7 positions) are
>> now open and will remain open until 2017-04-20, 23:45 UTC
> 
> The table below states a much earlier closing time.

Yeah, I think there was confusion between election closing time (April
20) and nomination deadline (April 9). The election website has the
reference dates (it's currently down, but should be fixed soon):

TC nomination starts   @ 2017-03-30, 23:59 UTC
TC nomination ends @ 2017-04-09, 23:45 UTC
TC elections starts@ 2017-04-13, 23:59 UTC
TC elections ends  @ 2017-04-20, 23:45 UTC

>> The election will be held from March 30th through to 23:45 April 20th, 2017
> 
> This also doesn't match the dates listed below, please clarify.

That's correct if you consider the "election" to include nomination,
campaigning and voting. But then in other places "election" is
synonymous to "voting", hence the confusion. Again, the timeline has the
right information.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-31 Thread Thierry Carrez
Brian Rosmaita wrote:
> On 3/29/17 12:55 AM, Jimmy McArthur wrote:
> [snip]
>> What we really need is the following:
>>
>> * A project history, including the date of project inception that's
>> included in the TC tags.
>> * An API history in an easily digestible format that all projects share.
>> So whether you're doing micro releases or not, just something that
>> allows us to show, based on a release timeline, which API versions per
>> project are applicable for each OpenStack release. This really needs to
>> be consistent from project to project b/c at the moment everyone handles
>> it differently.
> 
> See what you think of these.  They add an "apis" section to the glance
> section of projects.yaml in the governance repo.
> 
> http://paste.openstack.org/show/604775/ has a complete history, where
> for each release, the complete set of versions of the API that are
> available in that release (and their statuses) are listed.
> 
> http://paste.openstack.org/show/604776/ has an abbreviated history,
> where only the changes in available APIs are listed from version to
> version, and if there's no change, the release isn't listed at all.
> 
> I don't know if this format would work for microversions, though.  (And
> I don't know if it's a good idea in the first place.)

I like the idea to provide unified API information, however I'm not sure
the Technical Committee should stand in the way of getting that
information updated. The less we force through strict governance
processes, the better, so everything that can be maintained by some
other group should be.

This is closer to documentation than governance, so I wonder if this
document should not be maintained at the API WG or the Docs team or
service catalog level -- just keeping the same team names so that
information can be easily cross-referenced with the governance repository.

I'll raise the discussion in open discussion at the next TC meeting.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-21, April 3-7

2017-03-31 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-

Teams should be working on specs approval and implementation for
priority features for this cycle.

Actions
---

Does your team produce libraries, some of which are still using a version
< 1.0? Now is a good time to look at those and move them to 1.0 (or
define a time when they will be able to do so). A 0.x release implies
something is not stable and each new release is likely to be a breaking
change. Consumers are often reluctant to use libraries with version
numbers like that. The sooner you can commit to an API and start fully
using semver to signal change, the better!
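The "0.x implies breaking changes" point is exactly what semver's compatibility rules encode; a minimal sketch of a caret-style check:

```python
def compatible(installed, required):
    """Caret-style semver check: same compatibility series, not older.

    Under semver, major version 0 makes no stability promise, so every
    0.x minor bump must be treated as potentially breaking.
    """
    inst = tuple(map(int, installed.split(".")))
    req = tuple(map(int, required.split(".")))
    if req[0] == 0:
        # 0.x: only the same minor series counts as compatible
        return inst[:2] == req[:2] and inst >= req
    return inst[0] == req[0] and inst >= req

print(compatible("1.4.0", "1.2.3"))  # True: post-1.0 minor bumps are additive
print(compatible("0.3.0", "0.2.3"))  # False: a 0.x minor bump may break you
```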

As a reminder, Pike-1 is coming up on R-20 week. Time is running out to
propose a change in your release model: if you want to switch from
cycle-with-milestones model to a cycle-with-intermediary model (or the
other way around), propose a change to your Pike deliverable files at
[1] now!

[1] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike

Pike-1 is also the deadline for submitting your team responses to the
Pike release goals[2]. Responses to those goals are needed even if the
work is already done. You can find more information on the Goals process
at [3].

[2] https://governance.openstack.org/tc/goals/pike/index.html
[3] https://governance.openstack.org/tc/goals/

Upcoming Deadlines & Dates
--

Boston Forum topic formal submission deadline: April 2
Pike-1 milestone: April 13 (R-20 week)
Forum at OpenStack Summit in Boston: May 8-11

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] rebuild_instance (nova evacuate) failed to add trunk port

2017-03-31 Thread Balazs Gibizer
On Thu, Mar 30, 2017 at 10:14 PM, KHAN, RAO ADNAN  
wrote:
In Juno, there is an issue with instance rebuild (nova evacuate) when a 
trunk port is associated with the instance. On the target host, the tbr 
(trunk bridge) is not provisioned, and hence the 'ovs-vsctl' command fails 
when adding the trunk port.


Does the Juno version of Neutron have trunk port support? As far as I can 
see, the trunk port feature was released in Newton [1].


Cheers,
gibi


[1] https://wiki.openstack.org/wiki/Neutron/TrunkPort
To me this seems like a design gap, but I couldn't pinpoint it. Can 
someone point me in the right direction?


Thanks much,


Rao Adnan Khan
AT&T Integrated Cloud (AIC) Development | SE
Software Development & Engineering (SD)
Email: rk2...@att.com
Cell phone: 972-342-5638

RESTRICTED - PROPRIETARY INFORMATION
This email is the property of AT&T and intended solely for the use of 
the addressee. If you have reason to believe you have received this 
in error, please delete this immediately; any other use, retention, 
dissemination, copying or printing of this email is strictly 
prohibited.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][castellan] How to share secrets in barbican

2017-03-31 Thread yanxingan


Thanks Kaitlin Farr.

In the Tacker vim use case, an operator [user A] may create a vim with an 
account [user B] to access the NFVI. I want to store user B's password in 
Barbican.


There are two methods to store the secret:
1. All of user A's vim secrets are stored in one common reserved 
project/user, as mentioned.
2. For each user A, the vim secret is stored in their own domain 
respectively.


The problems with method 2 are:
1) A vim cannot be shared between different projects with the default 
Barbican RBAC policy.
2) It's not secure to open access to all users via the RBAC policy. In 
addition, Barbican may be invoked by other projects, e.g. Nova, Neutron LBaaS.

3) It's not convenient to add every user to the ACL of A's secret.

Does the Barbican ACL support a "shared"-like attribute for a secret?


On 2017/3/31 3:05, Farr, Kaitlin M. wrote:



   As I understand it, secrets are saved in a user's domain, and other 
projects/users cannot retrieve them.
But I have a situation where many users need to retrieve the same secret.

After looking into the castellan usage, I see the method of saving the 
credentials in configuration,
 then all operators use this pre-created user to create/retrieve secrets.
 I want to know, is this way typical and easily accepted? Do other projects 
face this issue?



​By default, the secrets in Barbican are available at the project-level
[1]. I am not sure specifically which project or feature you are
referring to that all users need to access to one secret, but I would
suggest that editing the Barbican RBAC policy or ACL is a more elegant
solution than storing username/pw in the conf file. You can find more
details about RBAC at [2] and a sample policy.json file at [3].

Kaitlin Farr

1. https://developer.openstack.org/api-guide/key-manager/acls.html#default-acl
2. 
https://docs.openstack.org/developer/barbican/admin-guide-cloud/access_control.html
3. https://github.com/openstack/barbican/blob/master/etc/barbican/policy.json
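For reference, granting extra users read access goes through the secret's ACL sub-resource (PUT /v1/secrets/{uuid}/acl, per [1]); a small sketch of building that request body -- the user IDs here are placeholders:

```python
def build_secret_acl_body(user_ids, project_access=False):
    """Body for PUT /v1/secrets/{uuid}/acl granting read to given users.

    project_access=False restricts the secret to the listed users only;
    True keeps the default project-wide access as well.
    """
    return {
        "read": {
            "users": list(user_ids),
            "project-access": project_access,
        }
    }

# Hypothetical Keystone user IDs, for illustration.
print(build_secret_acl_body(["u1", "u2"]))
```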


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev