Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-24 Thread Rabi Mishra
On Wed, May 24, 2017 at 9:24 AM, Rabi Mishra  wrote:

> On Tue, May 23, 2017 at 11:57 PM, Zane Bitter  wrote:
>
>> On 23/05/17 01:23, Rabi Mishra wrote:
>>
>>> Hi All,
>>>
>>> As per the updated community goal[1]  for api deployment with wsgi,
>>> we've to transition to use uwsgi rather than mod_wsgi at the gate. It
>>> also seems mod_wsgi support would be removed from devstack in Queens.
>>>
>>> I've been working on a patch[2] for the transition and encountered a few
>>> issues as below.
>>>
>>> 1. We encode the stack_identifier (along with the path
>>> separator) in heatclient. So requests with encoded path separators are
>>> dropped by apache (with 404) if we don't have the 'AllowEncodedSlashes On'
>>> directive in the site/vhost config[3].
>>>
>>
>> We'd probably want 'AllowEncodedSlashes NoDecode'.
>>
>
> Yeah, that would be ideal for supporting slashes in stack and resource
> names where we take care of the encoding and decoding.
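The effect of encoding versus not encoding the separator can be sketched in Python (the stack name and ID here are hypothetical; heatclient's real code paths differ):

```python
from urllib.parse import quote

# Hypothetical stack identifier parts, for illustration only.
stack_name, stack_id = "mystack", "0a1b2c3d"

# quote() treats '/' as safe by default, so real path separators
# survive as separators:
path = "/stacks/%s/%s" % (quote(stack_name), quote(stack_id))
assert path == "/stacks/mystack/0a1b2c3d"

# Encoding with safe='' turns '/' into %2F; apache rejects such
# requests with 404 unless AllowEncodedSlashes is On (or NoDecode).
encoded = quote("%s/%s" % (stack_name, stack_id), safe="")
assert encoded == "mystack%2F0a1b2c3d"
```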
>
>
Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
>>> ubuntu. From my testing, it seems it has to be set in 000-default.conf
>>> on ubuntu.
>>>
>>> Rather than messing with the devstack plugin code, I went ahead and proposed
>>> a change to not encode the path separators in heatclient[5] (anyway,
>>> they would be decoded by apache with the directive 'AllowEncodedSlashes
>>> On' before being consumed by the service), which seems to have fixed those
>>> 404s.
>>>
>>
>> Pasting my comment from the patch:
>>
>> One potential problem with this is that you can probably craft a stack
>> name in such a way that heatclient ends up calling a real but unexpected
>> URL. (I don't think this is a new problem, but it's likely the problem that
>> the default value of AllowEncodedSlashes is designed to fix, and we're
>> circumventing it here.)
>>
>
>> It seems to me the ideal would be to force '/'s to be encoded when they
>> occur in the stack and resource names. Clearly they should never have been
>> encoded when they're actual path separators (e.g. between the stack name
>> and stack ID).
>>
> It'd be even better if Apache were set to "AllowEncodedSlashes NoDecode"
>> and we could then decode stack/resource names that include slashes after
>> splitting at the path separators, so that those would actually work. I
>> don't think the routing framework can handle that though.
>>
>>
> I don't think we even support slashes (encoded or not) in stack names. The
> validation below would not allow it.
>
> https://git.openstack.org/cgit/openstack/heat/tree/heat/
> engine/stack.py#n143
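As a rough illustration, a pattern along these lines would reject slashes in stack names (the exact rule lives in the linked stack.py and may differ; this is only an approximation):

```python
import re

# Approximation of heat's stack-name validation; the real pattern may
# differ, but '/' is not in the allowed character class.
VALID_NAME = re.compile(r'\A[a-zA-Z][a-zA-Z0-9_.-]*\Z')

assert VALID_NAME.match("web_tier-1")
assert not VALID_NAME.match("web/tier")   # slash rejected
assert not VALID_NAME.match("1stack")     # must start with a letter
```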
>
> As far as resource names are concerned, we don't encode or decode them
> appropriately for this to work as expected. Creating a stack with a resource
> name containing '/' fails with a validation error, as it's not encoded
> inside the template snippet and the validation below would fail.
>
> https://git.openstack.org/cgit/openstack/heat/tree/heat/
> engine/resource.py#n214
>
> For that reason I believe we disallow slashes in stack/resource names. So
>> with "AllowEncodedSlashes Off" we'd get the right behaviour (which is to
>> always 404 when the stack/resource name contains a slash).
>>
>
>>
> Is there a generic way to set the above directive (when using
>>> apache+mod_proxy_uwsgi) in the devstack plugin?
>>>
>>> 2.  With the above, most of the tests seem to work fine other than the
>>> ones using waitcondition, where we signal back from the vm to the api
>>>
>>
>> Not related to the problem below, but I believe that when signalling
>> through the heat-cfn-api we use an arn to identify the stack, and I suspect
>> that slashes in the arn are escaped at or near the source. So we may have
>> no choice but to find a way to turn on AllowEncodedSlashes. Or is it in the
>> query string part anyway?
>>
> Yeah, it's not related to the problem below, as the request is not reaching
> apache at all. I've taken care of the above issue in the patch itself[1]
> and the signal url looks ok to me[2].
>
> [1] https://review.openstack.org/#/c/462216/11/heat/common/identifier.py
>
> [2] http://logs.openstack.org/16/462216/11/check/gate-heat-
> dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/e7d9e90/console.html#_2017-05-20_07_04_30_500696
>
> services. I could see " curl: (7) Failed to connect to 10.0.1.78 port
>>> 80: No route to host" in the vm console logs[6].
>>>
>>> It could connect to heat api services using ports 8004/8000 without this
>>> patch, but I'm not sure why not port 80. I tried testing this locally and
>>> didn't see the issue, though.
>>>
>>> Is this due to some infra settings or something else?
>>>
>>>
I finally found out the reason for the above issue. We explicitly
allow nova vms to access the heat api services via some iptables rules, and
port 80 is not among the allowed ports.

I've submitted a project-config patch[1] to add port 80.

[1] https://review.openstack.org/#/c/467703


>
>>> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>>> -wsgi.html
>>>
>>> [2] 

Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-24 Thread Rico Lin
*Project: * Heat

*Attendees:*  around 10-15

*PPT:* https://www.slideshare.net/GuanYuLin1/heat-project-onboarding

*Videos: *
https://www.youtube.com/playlist?list=PLIKe-Yb1IV6ETK3HKc7mz8kEtawxlvxJh

*Talker:* Rico Lin and Zane Bitter

*Who we targeting:*

We tried to make this session useful for new users, operators, and developers.

*What was done:*

We used slides throughout the session, with some Q&A and experience
sharing. The following was our schedule:


   1. Start by recognizing who is helping to contribute to the heat project,
   and telling everyone that we welcome any kind of help.
   2. Talk about which repos we have and what each one does
   3. Talk about the Heat architecture
   4. Talk about the new structure of heat (convergence) in concept
   5. Walk through the details from a Heat template to actual resource creation
   6. Talk about how to update that created resource
   7. Talk about how to make your own resource type
   8. Talk about software deployment in workflows
   9. Auto healing + autoscaling
   10. Some debugging guidance (more for users and ops)
   11. And finally, some roadmap items, hoping to get people interested in
   some of them.

Also, we used my smartphone to record the video.
That is pretty much what we did.


*What worked:*


   1. People showed interest in helping our team (and some of them
   have already started doing amazing jobs, like LanceHaig :) ), so that worked :)
   2. Hardware works
   3. Room works
   4. Mascot sticker works
   5. Zane works
   6. The storage on my smartphone almost works (99%)

*Lessons:*

1. We hope we can have video recordings of Onboarding sessions, to train
others who might be interested in joining. Then the next Onboarding can start
from somewhere more detailed, rather than starting from 0 and ending at 101.

2. Don't use a weird example like OS::Sled::Dog; that never works when you
try to explain how the actual thing works.

3. I found that a lot of operators and users don't even know about Onboarding;
maybe we can do something to attract attention, like spelling out exactly
what you will benefit from it, how your work relates to upstream (you might
even move your amazing work upstream), and what you can expect to learn from
the session.


2017-05-25 6:20 GMT+08:00 Kendall Nelson :

> @Nikhil, we (the organizers of Upstream Institute) sent a few emails
> [1][2] out to the dev mailing list asking for help and representatives from
> various projects to attend and get involved. We are also working on
> building a network of project liaisons to direct newcomers to in each
> project. Would you be interested in being our Glance liaison?
>
> Let me know if you have any other Upstream Institute questions!
>
> - Kendall(diablo_rojo)
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-
> January/110788.html
> [2]  http://lists.openstack.org/pipermail/openstack-dev/2016-
> November/108084.html
>
> On Wed, May 24, 2017 at 4:03 PM Nikhil Komawar 
> wrote:
>
>> Project:  Glance
>>
>> Attendees: ~15
>>
>> What was done:
>>
>> We started by introducing the core team (or whatever existed then), did a
>> run down of Glance API documentation especially for developers, other
>> references like notes for ops, best practices. We went through the
>> architecture of the project. A few were interested in knowing more details
>> and going in depth so we discussed the design patterns that exist today,
>> scope of improvements and any blackholes therein, auxiliary services and
>> performance tradeoffs etc. A lot of the discussion was free form so people
>> asked questions and session was interactive.
>>
>>
>> What worked:
>>
>> 1. The projector worked!
>>
>> 2. Session was free form, there was good turnout and it was interactive.
>> (all the good things)
>>
>> 3. People were serious about contributing as per their
>> availability/capacity to do upstream and one person showed up asking to do
>> reviews.
>>
>>
>> Lessons:
>>
>> 1. Could have been advertised more; at least the session description
>> could have been more customized.
>>
>> 2. A representative from the team could have been officially invited to
>> the upstream institute training.
>>
>> 3. The community building sessions and on-boarding sessions seem to
>> overlap a bit, so a representative from the team could help in those
>> sessions for Q&A or more interaction. Probably more collaboration/prep
>> before the summit for such things. ($0.02)
>>
>>
>> Cheers
>>
>> On Wed, May 24, 2017 at 1:27 PM, Jay S Bryant 
>> wrote:
>>
>>> Project:  Cinder
>>>
>>> Attendees: Approximately 30
>>>
>>> I was really pleased by the number of people that attended the Cinder
>>> session and the fact that the people in the room seemed engaged with the
>>> presentation and asked good questions showing interest in the project.  I
>>> think having the on-boarding rooms was beneficial and hopefully something
>>> that we can continue.
>>>
>>> Given the number of people in the room we didn't go around and introduce
>>> everyone.  I did have the Sean 

[openstack-dev] [cyborg]Nominate Rushil Chugh and Justin Kilpatrick as new core reviewers

2017-05-24 Thread Zhipeng Huang
Hi Team,

This is an email for nomination of rushil and justin to the core team. They
have been very active in our development and the specs they helped draft
have been merged after several rounds of review. The statistics could be
found at
http://stackalytics.com/?project_type=all=cyborg=person-day .

Since we are not an official project and I'm the only core reviewer at the
moment, I think we should have a simple procedure for adding the first
additional core reviewers. Therefore, if there are no outstanding
objections by end of day next Wednesday, I will assume there is consensus
and add them to the core team to help accelerate our development.

Please voice your support or concerns if there are any within the next 7
days :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Matt Riedemann

On 5/24/2017 9:59 AM, Matt Riedemann wrote:
I started going down a path the other night of trying to see if we could 
bulk query floating IPs when building the internal instance network info 
cache [1] but it looks like that's not supported. The REST API docs for 
Neutron say that you can't OR filter query parameters together, but from 
looking at the code at the time it seemed like it might be possible.


Kevin Benton pointed out the bug in my code, so the bulk query for 
floating IPs is working now it seems:


https://review.openstack.org/#/c/465792/

http://logs.openstack.org/92/465792/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/fd0e93f/logs/screen-q-svc.txt.gz#_May_24_20_54_02_457529

So we can probably iterate on that a bit to bulk query other things, but 
I'd have to dig through the code to see where we're doing that.
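The bulk-query idea, matching any of several values of one field in a single request, can be sketched as follows (the endpoint and port IDs are placeholders; in Neutron's API, repeating a query key matches any of the given values for that field):

```python
from urllib.parse import urlencode

# Hypothetical port IDs; a real call would use the UUIDs from the
# instance's network info cache.
port_ids = ["p1", "p2", "p3"]

# Repeating the same query key ORs the values for that one field,
# which is what makes a single bulk GET possible instead of one
# GET per floating IP.
query = urlencode([("port_id", p) for p in port_ids])
url = "http://203.0.113.10:9696/v2.0/floatingips?" + query

assert url.endswith("?port_id=p1&port_id=p2&port_id=p3")
```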


--

Thanks,

Matt



Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-24 Thread Hongbin Lu
Hi Andrea,

Sorry I just got a chance to get back to this. Yes, an advantage is 
creating/deleting subnetpool once instead of creating/deleting per test. It 
seems neutron doesn’t support setting subnetpool_id after a subnet is created. 
If this is true, it means we cannot leverage the pre-created subnet from 
credential provider because we want to test against a subnet with a subnetpool. 
Eventually, we need to create a pair of subnet/subnetpool for each test and 
take care of the configuration of these resources. This looks complex, 
especially for our contributors, most of whom don't have a strong networking 
background.

Another motivation of this proposal is that we want to run all the tests 
against a subnet with a subnetpool. We currently run tests without a 
subnetpool, but that doesn't work well in some dev environments [1]. The issue 
was tracked down to a limitation of the docker networking model that makes it 
hard for its plugin to identify the correct subnet (unless it has a subnetpool, 
because libnetwork will record its uuid). This is why I prefer to run tests 
against a pre-created subnet/subnetpool pair. Ideally, Tempest could provide a 
feasible solution to address our use cases.

[1] https://bugs.launchpad.net/zun/+bug/1690284
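For illustration, the request bodies for such a pre-created pair might look like this (the names, prefixes, and IDs are made up; the actual client calls are omitted):

```python
# Body a test (or credential provider) would POST to Neutron's
# /v2.0/subnetpools to create the pool first.
subnetpool_body = {
    "subnetpool": {
        "name": "test-pool",
        "prefixes": ["10.8.0.0/16"],
        "default_prefixlen": 24,
    }
}

# Once the pool exists, the subnet references it by ID instead of
# carrying its own cidr; subnetpool_id cannot be set afterwards,
# which is why the pair must be created together.
def subnet_body(network_id, subnetpool_id):
    return {
        "subnet": {
            "network_id": network_id,
            "ip_version": 4,
            "subnetpool_id": subnetpool_id,
        }
    }

body = subnet_body("net-uuid", "pool-uuid")
assert body["subnet"]["subnetpool_id"] == "pool-uuid"
assert "cidr" not in body["subnet"]
```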

Best regards,
Hongbin

From: Andrea Frittoli [mailto:andrea.fritt...@gmail.com]
Sent: May-22-17 9:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

Hi Hongbin,

If several of your test cases require a subnet pool, I think the simplest 
solution would be creating one in the resource creation step of the tests.
As I understand it, subnet pools can be created by regular projects (they do 
not require admin credentials).

The main advantage that I can think of for having subnet pools provisioned as 
part of the credential provider code is that - in case of pre-provisioned 
credentials - the subnet pool would be created and deleted once per test user, 
as opposed to once per test class.

That said I'm not opposed to the proposal in general, but if possible I would 
prefer to avoid adding complexity to an already complex part of the code.

andrea

On Sun, May 21, 2017 at 2:54 AM Hongbin Lu wrote:
Hi QA team,

I have a proposal to create subnetpool/subnet pair on dynamic credentials: 
https://review.openstack.org/#/c/466440/ . We (Zun team) have use cases for 
using subnets with subnetpools. I wanted to get some early feedback on this 
proposal. Will this proposal be accepted? If not, would appreciate alternative 
suggestion if any. Thanks in advance.

Best regards,
Hongbin


Re: [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Adrian Turjak


On 25/05/17 07:47, Lance Bragstad wrote:

> *Option 2*
>
> Implement global role assignments in keystone.
>
> How it works:
>
> Role assignments in keystone can be scoped to global context. Users
> can then ask for a globally scoped token 
>
> Pros:
> - This approach represents a more accurate long term vision for role
> assignments (at least how we understand it today)
> - Operators can create global roles and assign them as needed after
> the upgrade to give proper global scope to their users
> - It's easier to explain global scope using global role assignments
> instead of a special project
> - token.is_global = True and token.role = 'reader' is easier to
> understand than token.is_admin_project = True and token.role = 'reader'
> - A global token can't be associated to a project, making it harder
> for operations that require a project to consume a global token (i.e.
> I shouldn't be able to launch an instance with a globally scoped token)
>
> Cons:
> - We need to start from scratch implementing global scope in keystone,
> steps for this are detailed in the spec
>

>
> On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad wrote:
>
> Hey all,
>
> To date we have two proposed solutions for tackling the admin-ness
> issue we have across the services. One builds on the existing
> scope concepts by scoping to an admin project [0]. The other
> introduces global role assignments [1] as a way to denote elevated
> privileges.
>
> I'd like to get some feedback from operators, as well as
> developers from other projects, on each approach. Since work is
> required in keystone, it would be good to get consensus before
> spec freeze (June 9th). If you have specific questions on either
> approach, feel free to ping me or drop by the weekly policy
> meeting [2].
>
> Thanks!
>

Please option 2. The concept of being an "admin" while scoped to a single
project never really made sense: the admin role gives you superuser power,
yet you only hold it when scoped to that one project. Global scope makes
much more sense when that is the power the role grants.

At the same time, it would be nice to make scope actually matter. As
admin you have a role on Project X, yet you can now (while scoped to
this project) pretty much do anything anywhere! I think global roles are
a great step in the right direction, but beyond and after that we need
to seriously start looking at making scope itself matter, so that giving
someone 'admin' or similar on a project actually only gives them
something akin to project_admin, or some sort of admin-lite powers scoped
to that project and its sub-projects. That falls into the policy work
being done, but should be noted, as it is related.
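A toy sketch of why the global-scope check reads more naturally (this is not keystone's or oslo.policy's real model, just an illustration of the pro/con list above):

```python
# Stand-in token object, not keystone's real token model.
class Token:
    def __init__(self, roles, is_global=False, project_id=None):
        self.roles = roles
        self.is_global = is_global
        self.project_id = project_id

def can_list_all_projects(token):
    # Option 2: global scope is explicit on the token, so the check
    # is just "global + reader", with no special admin project.
    return token.is_global and "reader" in token.roles

def can_launch_instance(token):
    # A global token carries no project, so project-bound operations
    # are naturally rejected without extra logic.
    return token.project_id is not None and "member" in token.roles

auditor = Token(roles=["reader"], is_global=True)
assert can_list_all_projects(auditor)
assert not can_launch_instance(auditor)
```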

Still, at least global scope for roles make the superuser case make some
actual sense, because (and I can't speak for other deployers), we have
one project pretty much dedicated as an "admin_project" and it's just
odd to actually need to give our service users roles in a project when
that project is empty and a pointless construct for their purpose.

Also thanks for pushing this! I've been watching your global roles spec
review in hopes we'd go down that path. :)

-Adrian


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-24 Thread Kendall Nelson
@Nikhil, we (the organizers of Upstream Institute) sent a few emails [1][2]
out to the dev mailing list asking for help and representatives from
various projects to attend and get involved. We are also working on
building a network of project liaisons to direct newcomers to in each
project. Would you be interested in being our Glance liaison?

Let me know if you have any other Upstream Institute questions!

- Kendall(diablo_rojo)

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110788.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-November/108084.html

On Wed, May 24, 2017 at 4:03 PM Nikhil Komawar 
wrote:

> Project:  Glance
>
> Attendees: ~15
>
> What was done:
>
> We started by introducing the core team (or whatever existed then), did a
> run down of Glance API documentation especially for developers, other
> references like notes for ops, best practices. We went through the
> architecture of the project. A few were interested in knowing more details
> and going in depth so we discussed the design patterns that exist today,
> scope of improvements and any blackholes therein, auxiliary services and
> performance tradeoffs etc. A lot of the discussion was free form so people
> asked questions and session was interactive.
>
>
> What worked:
>
> 1. The projector worked!
>
> 2. Session was free form, there was good turnout and it was interactive.
> (all the good things)
>
> 3. People were serious about contributing as per their
> availability/capacity to do upstream and one person showed up asking to do
> reviews.
>
>
> Lessons:
>
> 1. Could have been advertised more; at least the session description
> could have been more customized.
>
> 2. A representative from the team could have been officially invited to
> the upstream institute training.
>
> 3. The community building sessions and on-boarding sessions seem to
> overlap a bit, so a representative from the team could help in those
> sessions for Q&A or more interaction. Probably more collaboration/prep
> before the summit for such things. ($0.02)
>
>
> Cheers
>
> On Wed, May 24, 2017 at 1:27 PM, Jay S Bryant 
> wrote:
>
>> Project:  Cinder
>>
>> Attendees: Approximately 30
>>
>> I was really pleased by the number of people that attended the Cinder
>> session and the fact that the people in the room seemed engaged with the
>> presentation and asked good questions showing interest in the project.  I
>> think having the on-boarding rooms was beneficial and hopefully something
>> that we can continue.
>>
>> Given the number of people in the room we didn't go around and introduce
>> everyone.  I did have Sean McGinnis introduce himself as PTL and had
>> the other Cinder Core members introduce themselves so that the attendees
>> could put faces with our names.
>>
>> From there we kicked off the presentation [1] which covered the following
>> high level topics:
>>
>>- Introduction of Cinder's Repos and components
>>- Quick overview of Cinder's architecture/organization
>>- Pointers to the Upstream Institute education (Might have done a bit
>>of a sales pitch for the next session here ;-))
>>- Expanded upon the Upstream Institute education to explain how what
>>was taught there specifically applied to Cinder
>>- Walked through the main Cinder code tree
>>- Described how to test changes to Cinder
>>
>> My presentation was designed to assume that attendees had been through
>> Upstream Institute.  I had coverage in the slides in case they had not been
>> through the education.  Unfortunately most of the class had not been
>> through the education so I did spend a portion of time re-iterating those
>> concepts and less time was able to be spent at the end going through real
>> world examples of working with changes in Cinder.  I got feedback from a
>> few people that having some real hands on coding examples would have been
>> helpful.
>>
>> One way we could possible handle this is to split the on-boarding to a
>> introduction section and then a more advanced second session.  The other
>> option is that we require people who are attending the on-boarding to have
>> been through Upstream Institute.  Something to think about.
>>
>> I think it was unfortunate that the session wasn't recorded.  We shared a
>> lot of good information (between good questions and having a good
>> representation of Cinder's Core team in the room) that it would have been
>> nice to capture.  Given this I am planning at some point in the near future
>> to work with Walt Boring to record a version of the presentation that can
>> be uploaded to our Cinder YouTube channel and include some coding examples.
>>
>> In summary, I think the on-boarding rooms were a great addition and the
>> Cinder team is pleased with how we used the time.  I think it is something
>> we would like to continue to invest time into developing and improving.
>>
>> Jay
>>
>> [1]
>> 

Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 16:27:18 -0500 (-0500), Ben Nemec wrote:
[...]
> I spent a _lot_ of time tracking down performance issues and
> optimizing things where I could, and in the end pretty much every
> gain I made was regressed somewhere else within a few weeks,
> leaving us where we are now with jobs timing out all over the
> place.
[...]

The uneasy truth (in many places, not just TripleO) is that if
people only try to improve speed or memory footprint or what have
you when jobs are constantly hitting those thresholds, then the
gains will usually only be very temporary. I agree it has to be an
all-the-time effort to work on minimization across these axes or
else we'll end up with near constant job failures from them. Under
development, software will (often rapidly) grow to fill the resource
constraints placed around it. There's some sort of natural law at
work there.
-- 
Jeremy Stanley



Re: [openstack-dev] [ironic] using keystone right - catalog, endpoints, tokens and noauth

2017-05-24 Thread Monty Taylor

On 05/24/2017 12:51 PM, Eric Fried wrote:

Pavlo-

There's a blueprint [1] whereby we're trying to address a bunch of
these same concerns in nova.  You can see the first part in action here
[2].  However, it has become clear that nova is just one of the many
services that would benefit from get_service_url().  With the full
support of mordred (let's call it The Full Monty), we've got our sights
on moving that method into ksa itself for that purpose.


Yes - this has started with documenting how to consume Keystone Catalog 
and discovery properly.


https://review.openstack.org/#/q/topic:version-discovery

(it's a big stack)

Once we're good with that, the next step is getting ksa updated to be 
able to handle the end-to-end flow. It does most of it today, but there are 
enough edge cases it doesn't handle that you wind up having to do something 
else, like efried just did in nova. The goal is to make that not 
necessary - so that it's both possible and EASY for everyone to 
CORRECTLY consume catalog and version discovery.


(more comments inline below)


Please have a look at this blueprint and change set.  Let us know if
your concerns would be addressed if this were available to you from ksa.

[1]
https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/use-service-catalog-for-endpoints.html
[2] https://review.openstack.org/#/c/458257/

Thanks,
efried

On 05/24/2017 04:46 AM, Pavlo Shchelokovskyy wrote:

Hi all,

There are several problems or inefficiencies in how we are dealing with
auth to other services. Although it became much better in Newton, some
things are still to be improved and I like to discuss how to tackle
those and my ideas for that.

Keystone endpoints
===

Apparently since February-ish DevStack no longer sets up 'internal'
endpoints for most of the services in core devstack [0].
Luckily we were not broken by that right away - although when
discovering a service endpoint from keystone catalog we default to
'internal' endpoint [1], for most services our devstack plugin still
configures explicit service URL in the corresponding config section, and
thus the service discovery from keystone never takes place (or that code
path is not tested by functional/integration testing).

AFAIK different endpoint types (internal vs public) are still quite used
by deployments (and IMO rightfully so), so we have to continue
supporting that. I propose to take the following actions:


I agree you should continue supporting it.

I'm not sure it's important for you to change your defaults ... as long 
as it's possible to consistently set "interface=public" or 
"interface=internal" and have the results be correct, I think that's the 
big win.



- in our devstack plugin, stop setting up the direct service URLs in
config, always use keystone catalog for discovery


YES


- in every conf section related to external service add
'endpoint_type=[internal|public]' option, defaulting to 'internal', with
a warning in option description (and validated on conductor start) that
it will be changed to 'public' in the next release


efried just added a call to keystoneauth which will register all of the 
appropriate CONF options that are needed to request a service endpoint 
from the catalog - register_adapter_conf_options:


http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/loading/__init__.py#n39

The word "adapter" in this case isn't directly important - but there are 
three general concepts in keystoneauth that relate to how you connect:


auth
 - how you authenticate - auth_type, username, password, etc.
session
 - how the transport layer connects - certs, timeouts, etc.
adapter
 - what base endpoint to mount from the catalog - service_type, 
interface, endpoint_override, api_version
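A minimal sketch of how those three pieces fit together with keystoneauth (the cloud URL and credentials are placeholders, and it needs a reachable keystone, so it is illustrative only):

```python
from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

# "auth": how we authenticate (placeholder credentials).
auth = v3.Password(
    auth_url="http://203.0.113.5/identity/v3",
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default")

# "session": the transport layer (certs, timeouts, etc.).
sess = session.Session(auth=auth)

# "adapter": which base endpoint to mount from the catalog.
ironic = adapter.Adapter(
    session=sess, service_type="baremetal", interface="public")

# The adapter resolves the base URL from the service catalog, e.g.
# via ironic.get_endpoint(), and requests are made relative to it.
```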



- use those values from CONF wherever we ask for service URL from
catalog or instantiate client with session.


YES


- populate these options in our devstack plugin to be 'public'
- in Queens, switch the default to 'public' and use defaults in devstack
plugin, remove warnings.

Unify clients creation


again, in those config sections related to service clients, we have many
options to instantiate clients from (especially glance section, see my
other recent ML about our image service code). Many of those seem to be
from the time when keystone catalog was missing some functionality or
not existing at all, and keystoneauth lib abstracting identity and
client sessions was not there either.

To simplify setup and unify as much code as possible I'd like to propose
the following:

- in each config section for service client add (if missing) a
'_url' option that should point to the API of given service and
will be used *only in noauth mode* when there's no Keystone catalog to
discover the service endpoint from


I disagree with this one.

The option exists and is called "endpoint_override" and it skips the 
catalog completely. It 

Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-24 Thread Ben Nemec



On 05/23/2017 05:47 AM, Sagi Shnaidman wrote:

Hi, all

I'd like to propose a one- or two-day hackathon for the TripleO
project with the main goal of reducing TripleO's deployment time.

- How could it be arranged?

We can arrange a separate IRC channel and Bluejeans video conference
session for hackathon in these days to create a "presence" feeling.

- How to participate and contribute?

We'll have a few areas of responsibility like tripleo-quickstart,
containers, storage, HA, baremetal, etc. - the exact list should be ready
before the hackathon so that everybody can sign up for one of these
"teams". It's good to have somebody on each team be the stakeholder,
responsible for organization and tasks.

- What is the goal?

The goal of this hackathon is to reduce the deployment time of TripleO as much
as possible.

For example, part of the CI team takes the task of reducing quickstart task
times. That includes statistics collection, profiling, and detection of
places to optimize. After that, tasks are created and patches are tested and
submitted.

Prizes will be presented to the teams that save the most time :)

What do you think?


I'm happy to see this get more attention, and I will take this 
opportunity to point out 
http://blog.nemebean.com/content/improving-tripleo-ci-throughput which 
discusses the work I did a few months ago to decrease the ci wall time. 
Some of that was ci-specific, some of it was general bugs in TripleO 
performance.


However, the important thing to note is the last couple of paragraphs. 
I spent a _lot_ of time tracking down performance issues and optimizing 
things where I could, and in the end pretty much every gain I made was 
regressed somewhere else within a few weeks, leaving us where we are now 
with jobs timing out all over the place.


I guess my point is two-fold: 1) I'm not sure a day or two sprint is 
going to be sufficient to dig into and fix the performance of TripleO. 
Maybe if there's some low-hanging fruit.  2) It's certainly not going to 
be sufficient to keep TripleO performance at an acceptable level 
long-term.  This will have to be an ongoing effort, and we badly need 
the tracking previously provided by our graphite metrics.  Without hard 
numbers on what is regressing we don't know what to look at.  Related: 
https://review.openstack.org/#/c/462980


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Monty Taylor

On 05/24/2017 10:07 AM, Clark Boylan wrote:



On Wed, May 24, 2017, at 07:39 AM, Matt Riedemann wrote:

Rocky tipped me off to a request to document config drive which came up
at the Boston Forum, and I tracked that down to Clark's wishlist
etherpad [1] (L195) which states:

"Document the config drive. The only way I have been able to figure out
how to make a config drive is by either reading nova's source code or by
reading cloud-init's source code."

So naturally I have some questions, and I'm looking to flesh the idea /
request out a bit so we can start something in the in-tree nova devref.

Question the first: is this existing document [2] helpful? At a high
level, that's more about 'how' rather than 'what', as in what's in the
config drive.


This is helpful, but I think it is targeted to the deployer of OpenStack
and not the consumer of OpenStack.


Yup.


Question the second: are people mostly looking for documentation on the
content of the config drive? I assume so, because without reading the
source code you wouldn't know, which is the terrible part.


I'm (due to being a cloud user) mostly noticing the lack of information
on why cloud users might use config drive and how to consume it.
Documentation for the content of the config drive is a major piece of
what is missing. What do the key/value pairs mean, and how can I use them
to configure my nova instances to operate properly?

But also general information like: config drive can be more reliable
than the metadata service, as it's directly attached to the instance. The
trade-off is possibly no live migration for the instance (under what
circumstances does live migration work, and as a user is that
discoverable?). What filesystems are valid and do I need to handle them in
my instance images? Will the device label always be config-2? And so on.
The user guide doc you linked does try to address some of this, but seems
to do so from the perspective of the person deploying a cloud: "do this
if you want to avoid dhcp in your cloud", "install these things on
compute hosts".


Yah. Being able to point consumers at why they should care and what 
benefits it has is useful, since it's a flag in the server API that turns 
on config-drive.
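As a concrete illustration of the consumption question above, a hedged sketch of what a guest could do to read the drive once it is mounted. The path `openstack/latest/meta_data.json` is the one nova writes; whether the volume label is always `config-2` is exactly the kind of question the documentation should answer:

```python
import json
import os


def read_config_drive(mountpoint):
    """Read instance metadata from an already-mounted config drive.

    Assumes the config drive (volume label 'config-2') has been mounted
    read-only at `mountpoint` beforehand, e.g.:
        mount -o ro /dev/disk/by-label/config-2 /mnt/config
    """
    path = os.path.join(mountpoint, 'openstack', 'latest', 'meta_data.json')
    with open(path) as f:
        return json.load(f)
```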


But also ++ to clarkb


Based on this, I can think of a few things we can do:

1. Start documenting the versions which come out of the metadata API
service, which regardless of whether or not you're using it, is used to
build the config drive. I'm thinking we could start with something like
the in-tree REST API version history [3]. This would basically be a
change log of each version, e.g. in 2016-06-30 you got device tags, in
2017-02-22 you got vlan tags, etc.


I like this as it should enable cloud users to implement tooling that
knows what it needs that can error properly if it ends up on a cloud too
old to contain the required information.


++


2. Start documenting the contents similar to the response tables in the
compute API reference [4]. For example, network_data.json has an example
response in this spec [5]. So have an example response and a table with
an explanation of fields in the response, so describe
ethernet_mac_address and vif_id, their type, whether or not they are
optional or required, and in which version they were added to the
response, similar to how we document microversions in the compute REST
API reference.


++


+100!
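As a sketch of what such a documented example response could look like: the field names `ethernet_mac_address` and `vif_id` come from the spec cited above, while all values here are invented for illustration:

```python
import json

# Invented example values; only the field names are taken from the spec.
SAMPLE_NETWORK_DATA = '''
{
  "links": [
    {"id": "tap1a2b3c4d-00",
     "type": "ovs",
     "ethernet_mac_address": "fa:16:3e:aa:bb:cc",
     "vif_id": "1a2b3c4d-0000-1111-2222-333344445555",
     "mtu": 1500}
  ],
  "networks": [
    {"id": "network0",
     "link": "tap1a2b3c4d-00",
     "type": "ipv4_dhcp"}
  ]
}
'''

data = json.loads(SAMPLE_NETWORK_DATA)
# The documentation table would describe each of these fields: type,
# required/optional, and the version in which it was added.
for link in data['links']:
    print(link['id'], link['ethernet_mac_address'], link['vif_id'])
```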



--

Are there other thoughts here or things I'm missing? At this point I'm
just trying to gather requirements so we can get something started. I
don't have volunteers to work on this, but I'm thinking we can at least
start with some basics and then people can help flesh it out over time.


I like this, starting small to produce something useful then going from
there makes sense to me.


++


Another idea I've had is a tool that collects (or is fed) the
information that goes into config drives and produces the device to
attach to a VM. The reason for this is that while config drive is
something grown out of nova/OpenStack, you often want to boot images
with Nova and with other tools too, so making it easy for those other
tools to work properly would be nice. In the simple case, I build images
locally, then boot them with kvm to test that they work before pushing
things into OpenStack, and config drive makes that somewhat complicated.
Ideally this would be the same code that nova uses to generate the config
drives, just with a command-line front end.
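A rough sketch, under the assumption that such a standalone tool would reproduce nova's on-disk layout rather than reuse nova's code itself, of what the front end could produce before packing the tree into an attachable image (e.g. with `genisoimage -R -V config-2 -o disk.config <dir>`):

```python
import json
import os


def build_config_drive_tree(root, meta_data, user_data=None):
    """Lay out the directory structure nova puts on a config drive.

    A sketch only: `meta_data` is the dict to serialize as
    openstack/latest/meta_data.json, and `user_data` is optional raw
    user data. Turning `root` into an ISO9660 image labelled 'config-2'
    is left to an external tool such as genisoimage.
    """
    latest = os.path.join(root, 'openstack', 'latest')
    os.makedirs(latest, exist_ok=True)
    with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
        json.dump(meta_data, f)
    if user_data is not None:
        with open(os.path.join(latest, 'user_data'), 'w') as f:
            f.write(user_data)
```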


That would also help with testing config-drive consuming tools potentially.

We have some fixtures in the glean source repo:

http://git.openstack.org/cgit/openstack-infra/glean/tree/glean/tests/fixtures

which we collected from clouds out in the wild so we could make sure we 
were doing the right things with them. I imagine it could be neat to use 
a tool to generate various combos of config-drive content on the fly so 
they don't have to be hard-coded - but if it's not the actual code itself 
it wouldn't be as awesome.




[1] 

Re: [openstack-dev] [glance] Stepping Down

2017-05-24 Thread Nikhil Komawar
Dharini,

Thanks for your good work. It's been good working with you on Glance. All
the best!

Cheers

On Mon, May 22, 2017 at 8:46 PM, Chandrasekar, Dharini <
dharini.chandrase...@intel.com> wrote:

> Hello Glancers,
>
>
>
> Due to a change in my job role with my employer, I unfortunately do not
> have the bandwidth to contribute to Glance in the capacity of a Core
> Contributor.
>
> I hence have to step down from my role as a Core Contributor in
> Glance.
>
>
>
> I had a great experience working in OpenStack Glance. Thank you all for
> your help and support. I wish you all, good luck in all your endeavors.
>
>
>
> Thanks,
>
> Dharini.
>
>
>


-- 
--
Thanks,
Nikhil


Re: [openstack-dev] [glance] Stepping Down

2017-05-24 Thread Nikhil Komawar
Hemanth, sorry to see you step down here. Thanks for your astute input all
along. All the best!

Cheers

On Fri, May 19, 2017 at 5:49 PM, Hemanth Makkapati 
wrote:

> Glancers,
> Due to a significant change to my job description, I wouldn't be able to
> contribute to Glance in the capacity of a core reviewer going forward.
> Hence, I'd like to step down from my role immediately.
> For the same reason, I'd like to step down from Glance coresec and release
> liaison roles as well.
>
> Thanks for all the help!
>
> Rooting for Glance to do great things,
> Hemanth Makkapati
>
>
>
>


-- 
--
Thanks,
Nikhil


Re: [openstack-dev] [glance][openstack-ansible] Moving on

2017-05-24 Thread Nikhil Komawar
Sorry to see you go, Steve. Your code, feedback and all the work have always
been super helpful, so thank you very much. You will be an asset to any
team you work on. All the best!

On Thu, May 18, 2017 at 11:55 PM, Steve Lewis  wrote:

> It is clear to me now that I won't be able to work on OpenStack as a part
> of my next day job, wherever that ends up being. As such, I’ll no longer be
> able to invest the time and energy required to maintain my involvement in
> the community. It's time to resign my role as a core reviewer, effective
> immediately.
>
> Thanks for all the fish.
> --
> SteveL
>
>
>


-- 
--
Thanks,
Nikhil


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-24 Thread Nikhil Komawar
Project:  Glance

Attendees: ~15

What was done:

We started by introducing the core team (or whatever existed then), did a
run down of Glance API documentation especially for developers, other
references like notes for ops, best practices. We went through the
architecture of the project. A few were interested in knowing more details
and going in depth so we discussed the design patterns that exist today,
scope of improvements and any blackholes therein, auxiliary services and
performance tradeoffs etc. A lot of the discussion was free form so people
asked questions and session was interactive.


What worked:

1. The projector worked!

2. Session was free form, there was good turnout and it was interactive.
(all the good things)

3. People were serious about contributing as per their
availability/capacity to do upstream and one person showed up asking to do
reviews.


Lessons:

1. It could have been advertised more, and at least the session description
could have been more customized.

2. A representative from the team could have been officially invited to the
upstream institute training.

3. The community building sessions and on-boarding sessions seem to overlap
a bit, so a representative from the team could help in those sessions for
Q&A or more interaction. Probably more collaboration/prep before the summit
for such things. ($0.02)


Cheers

On Wed, May 24, 2017 at 1:27 PM, Jay S Bryant  wrote:

> Project:  Cinder
>
> Attendees: Approximately 30
>
> I was really pleased by the number of people that attended the Cinder
> session and the fact that they people in the room seemed engaged with the
> presentation and asked good questions showing interest in the project.  I
> think having the on-boardings rooms was beneficial and hopefully something
> that we can continue.
>
> Given the number of people in the room we didn't go around and introduce
> everyone.  I did have Sean McGinnis introduce himself as PTL and had
> the other Cinder Core members introduce themselves so that the attendees
> could put faces with our names.
>
> From there we kicked off the presentation [1] which covered the following
> high level topics:
>
>- Introduction of Cinder's Repos and components
>- Quick overview of Cinder's architecture/organization
>- Pointers to the Upstream Institute education (Might have done a bit
>of a sales pitch for the next session here ;-))
>- Expanded upon the Upstream Institute education to explain how what
>was taught there specifically applied to Cinder
>- Walked through the main Cinder code tree
>- Described how to test changes to Cinder
>
> My presentation was designed to assume that attendees had been through
> Upstream Institute.  I had coverage in the slides in case they had not been
> through the education.  Unfortunately most of the class had not been
> through the education so I did spend a portion of time re-iterating those
> concepts and less time was able to be spent at the end going through real
> world examples of working with changes in Cinder.  I got feedback from a
> few people that having some real hands on coding examples would have been
> helpful.
>
> One way we could possibly handle this is to split the on-boarding into
> an introduction section and then a more advanced second session.  The other
> option is to require people who are attending the on-boarding to have
> been through Upstream Institute.  Something to think about.
>
> I think it was unfortunate that the session wasn't recorded.  We shared a
> lot of good information (between good questions and having a good
> representation of Cinder's Core team in the room) that it would have been
> nice to capture.  Given this I am planning at some point in the near future
> to work with Walt Boring to record a version of the presentation that can
> be uploaded to our Cinder YouTube channel and include some coding examples.
>
> In summary, I think the on-boarding rooms were a great addition and the
> Cinder team is pleased with how we used the time.  I think it is something
> we would like to continue to invest time into developing and improving.
>
> Jay
>
> [1] https://www.slideshare.net/JayBryant2/openstack-cinder-
> onboarding-education-boston-summit-2017
>
>
> On 5/19/2017 3:43 PM, Lance Bragstad wrote:
>
> Project: Keystone
> Attendees: 12 - 15
>
> We conflicted with one of the Baremetal/VM sessions
>
> I attempted to document most of the session in my recap [0].
>
> We started out by doing a round-the-room of introductions so that folks
> could put IRC nicks to faces (we also didn't have a packed room so this
> went pretty quick). After that we cruised through a summary of keystone,
> the format of the projects, and the various processes we use. All of this
> took *maybe* 30 minutes.
>
> From there we had an open discussion and things evolved organically. We
> ended up going through:
>
>- the differences between the v2.0 and v3 APIs
>- keystonemiddleware architecture, how it aids 

Re: [openstack-dev] [glance] please approve test patch to fix glanceclient

2017-05-24 Thread Nikhil Komawar
thanks (for your work and to Flavio for the other +2). this is done as well.

On Wed, May 24, 2017 at 1:07 PM, Eric Fried  wrote:

> Thanks Nikhil.  This one is also needed to make py35 pass:
>
> https://review.openstack.org/#/c/396816/
>
> E
>
> On 05/24/2017 10:55 AM, Nikhil Komawar wrote:
> > thanks for bringing it up. this is done.
> >
> > On Wed, May 24, 2017 at 10:54 AM, Sean Dague  > > wrote:
> >
> > python-glanceclient patches have been failing for at least a week
> due to
> > a requests change. The fix was posted 5 days ago -
> > https://review.openstack.org/#/c/466385
> > 
> >
> > It would be nice to get that approved so that other patches could be
> > considered.
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >  unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> >
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
Thanks,
Nikhil


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Sean McGinnis

On 05/24/2017 02:21 PM, Jeremy Stanley wrote:

On 2017-05-24 13:25:21 -0500 (-0500), Sean McGinnis wrote:
[...]

Well, it could be more encouraging this time if the tables aren't
connected together making it difficult to easily rearrange them. :]


A very good point I've now passed along to them, along with a
suggestion to make up a bunch of little signs to throw onto the
tables that say something like, "please move these tables wherever
works best for your team, we mean it!"



Haha, that might work! ;)



Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-24 Thread Robert Li (baoli)
Thanks for the pointer. I think your suggestion in comment #14 
https://bugs.launchpad.net/neutron/+bug/1631371/comments/14 makes sense. I 
actually meant to address the nova side so that trunk details can be exposed in 
the metadata and configdrive.

-Robert



On 5/24/17, 1:52 PM, "Armando M." > 
wrote:



On 24 May 2017 at 08:53, Robert Li (baoli) 
> wrote:
Hi Kevin,

In that case, I will start working on it. Should this be considered an RFE or a 
regular bug?

There have been discussions in the past about this [1]. The conclusion of the 
discussion was: Nova should have everything it needs to expose trunk details to 
guest via the metadata API/config drive and at this stage nothing is required 
from the neutron end (and hence there's no point in filing a Neutron RFE).

While notifying nova of trunk changes requires a simple minor enhancement in 
neutron, it seems premature to go down that path when there's no nova 
scaffolding yet. Someone should then figure out how the guest itself gets 
notified of trunk changes so that it can rearrange its networking stack. That 
might as well be left to some special sauce added to the guest image, though no 
meaningful discussion has taken place on how to crack this particular nut.

HTH
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1631371



Thanks,
Robert

On 5/23/17, 12:12 AM, "Kevin Benton" 
> wrote:

I think we just need someone to volunteer to do the work to expose it as 
metadata to the VM in Nova.

On May 22, 2017 1:27 PM, "Robert Li (baoli)" 
> wrote:
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
> wrote:

Hi Robert,
The closest thing that I know about is exposing an SR-IOV physical function's 
VLAN tag to guests, see [1].
Maybe you can leverage the same mechanism to configure vlan trunking in the guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I’d like to understand why the support of configuring vlan interfaces in the 
guest is not added. And should it be added?

Thanks,
Robert
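For what it's worth, a hypothetical sketch of what guest-side support could look like if nova exposed trunk details in the metadata: the `trunk_details`/`sub_ports` structure below is entirely invented, since no such metadata exists today, which is the gap this thread is about:

```python
import subprocess


def configure_vlan_subinterfaces(parent_if, trunk_details,
                                 run=subprocess.check_call):
    """Create eth0.VLAN-ID style interfaces for each trunk subport.

    `trunk_details` is a hypothetical metadata structure (nova does not
    expose anything like it yet). `run` is injectable for testing; by
    default it executes the same `ip link add` command the wiki shows.
    """
    for subport in trunk_details.get('sub_ports', []):
        vlan = int(subport['segmentation_id'])
        run(['ip', 'link', 'add', 'link', parent_if,
             'name', '%s.%d' % (parent_if, vlan),
             'type', 'vlan', 'id', str(vlan)])
```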



Re: [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Lance Bragstad
I'd like to fill in a little more context here. I see three options with
the current two proposals.

*Option 1*

Use a special admin project to denote elevated privileges. For those
unfamiliar with the approach, it would rely on every deployment having an
"admin" project defined in configuration [0].

*How it works:*

Role assignments on this project represent global scope which is denoted by
a boolean attribute in the token response. A user with an 'admin' role
assignment on this project is equivalent to the global or cloud
administrator. Ideally, if a user has a 'reader' role assignment on the
admin project, they could have access to list everything within the
deployment, provided all the proper changes are made across the various
services. The workflow requires a special project for any sort of elevated
privilege.

Pros:
- Almost all the work is done to make keystone understand the admin
project, there are already several patches in review to other projects to
consume this
- Operators can create roles and assign them to the admin_project as needed
after the upgrade to give proper global scope to their users

Cons:
- All global assignments are linked back to a single project
- Describing the flow is confusing because in order to give someone global
access you have to give them a role assignment on a very specific project,
which seems like an anti-pattern
- We currently don't allow some things to exist in the global sense (i.e. I
can't launch instances without tenancy), the admin project could own
resources
- What happens if the admin project disappears?
- Tooling or scripts will be written around the admin project, instead of
treating all projects equally

*Option 2*

Implement global role assignments in keystone.

*How it works:*

Role assignments in keystone can be scoped to global context. Users can
then ask for a globally scoped token

Pros:
- This approach represents a more accurate long term vision for role
assignments (at least how we understand it today)
- Operators can create global roles and assign them as needed after the
upgrade to give proper global scope to their users
- It's easier to explain global scope using global role assignments instead
of a special project
- token.is_global = True and token.role = 'reader' is easier to understand
than token.is_admin_project = True and token.role = 'reader'
- A global token can't be associated to a project, making it harder for
operations that require a project to consume a global token (i.e. I
shouldn't be able to launch an instance with a globally scoped token)

Cons:
- We need to start from scratch implementing global scope in keystone,
steps for this are detailed in the spec

*Option 3*

We do option one and then follow it up with option two.

*How it works:*

We implement option one and continue solving the admin-ness issues in Pike
by helping projects consume and enforce it. We then target the
implementation of global roles for Queens.

Pros:
- If we make the interface in oslo.context for global roles consistent,
then consuming projects shouldn't know the difference between using the
admin_project or a global role assignment

Cons:
- It's more work and we're already strapped for resources
- We've told operators that the admin_project is a thing but after Queens
they will be able to do real global role assignments, so they should now
migrate *again*
- We have to support two paths for solving the same problem in keystone,
more maintenance and more testing to ensure they both behave exactly the
same way
  - This can get more complicated for projects dedicated to testing policy
and RBAC, like Patrole
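To make the comparison concrete, a hypothetical sketch of the check a service might perform under each option; the attribute names mirror the examples in this thread (`token.is_global`, `token.is_admin_project`) and are not a real oslo.context API:

```python
def is_cloud_admin(token):
    """Return True if the token carries cloud-administrator privileges.

    Hypothetical attributes for illustration only:
    - Option 2: token.is_global marks a globally scoped role assignment.
    - Option 1: token.is_admin_project marks a role on the special
      admin project.
    """
    if getattr(token, 'is_global', False):
        return 'admin' in token.roles
    return getattr(token, 'is_admin_project', False) and \
        'admin' in token.roles
```

If the oslo.context interface were kept consistent, consuming services would only ever call something like this and never need to know which mechanism produced the elevated scope.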


Looking for feedback here as to which one is preferred given timing and
payoff, specifically from operators who would be doing the migrations to
implement and maintain proper scope in their deployments.

Thanks for reading!


[0]
https://github.com/openstack/keystone/blob/3d033df1c0fdc6cc9d2b02a702efca286371f2bd/etc/keystone.conf.sample#L2334-L2342

On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad 
wrote:

> Hey all,
>
> To date we have two proposed solutions for tackling the admin-ness issue
> we have across the services. One builds on the existing scope concepts by
> scoping to an admin project [0]. The other introduces global role
> assignments [1] as a way to denote elevated privileges.
>
> I'd like to get some feedback from operators, as well as developers from
> other projects, on each approach. Since work is required in keystone, it
> would be good to get consensus before spec freeze (June 9th). If you have
> specific questions on either approach, feel free to ping me or drop by the
> weekly policy meeting [2].
>
> Thanks!
>
> [0] http://adam.younglogic.com/2017/05/fixing-bug-96869/
> [1] https://review.openstack.org/#/c/464763/
> [2] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
>

Re: [openstack-dev] [release] Issues with reno

2017-05-24 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-05-23 12:10:56 -0400:
> Excerpts from Matt Riedemann's message of 2017-05-22 21:48:37 -0500:
> > I think Doug and I have talked about this before, but it came up again 
> > tonight.
> > 
> > There seems to be an issue where release notes for the current series 
> > don't show up in the published release notes, but unreleased things do.
> > 
> > For example, the python-novaclient release notes:
> > 
> > https://docs.openstack.org/releasenotes/python-novaclient/
> > 
> > Contain Ocata series release notes and the currently unreleased set of 
> > changes for Pike, but doesn't include the 8.0.0 release notes, which is 
> > important for projects impacted by things we removed in the 8.0.0 
> > release (lots of deprecated proxy APIs and CLIs were removed).
> > 
> > I've noticed the same for things in Nova's release notes where 
> > everything between ocata and the p-1 tag is missing.
> > 
> > Is there already a bug for this?
> > 
> 
> I don't think there is a bug, but I have it in my notes to look
> into it this week based on our earlier conversation. Based purely
> on the description, the problem might be related to a similar issue
> the Ironic team reported in https://bugs.launchpad.net/reno/+bug/1682147
> 
> Doug
> 

I believe https://review.openstack.org/#/c/467733/ fixes this behavior.
I've tested it with python-novaclient and ironic. Please take a look at
the results and let me know if that's doing what you all expect.

Doug



Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 13:25:21 -0500 (-0500), Sean McGinnis wrote:
[...]
> Well, it could be more encouraging this time if the tables aren't
> connected together making it difficult to easily rearrange them. :]

A very good point I've now passed along to them, along with a
suggestion to make up a bunch of little signs to throw onto the
tables that say something like, "please move these tables wherever
works best for your team, we mean it!"
-- 
Jeremy Stanley


signature.asc
Description: Digital signature


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Sean McGinnis

We did end up moving tables closer together from their original
huge square layout


In the Ironic room we also had to move tables a bit closer. They
were really far apart, and not all contributors where comfortable
with raising their voice.

[...]

The organizers tried really hard to convey that rearranging
tables/chairs in each room was not only allowed but encouraged.
Hopefully by the time September rolls around that message will sink
in. ;)



Well, it could be more encouraging this time if the tables aren't
connected together making it difficult to easily rearrange them. :]




Re: [openstack-dev] [ovs-discuss] OpenStack and OVN integration is failing on multi-node physical machines.(probably a bug)

2017-05-24 Thread Numan Siddique
On Tue, May 23, 2017 at 6:48 PM, pranab boruah 
wrote:

> Hi,
> We are building a multi-node physical set-up of OpenStack Newton. The
> goal is to finally integrate the set-up with OVN.
> Lab details:
> 1 Controller, 2 computes
>
> CentOS-7.3, OpenStack Newton, separate network for mgmt and tunnel
> OVS version: 2.6.1
>
> I followed the following guide to deploy OpenStack Newton using the
> PackStack utility:
>
> http://networkop.co.uk/blog/2016/11/27/ovn-part1/
>
> Before I started integrating with OVN, I made sure that the set-up (ML2
> and OVS) was working by launching VMs. VMs on different compute nodes were
> able to ping each other.
>
> Now, I followed the official guide for OVN integration:
>
> http://docs.openstack.org/developer/networking-ovn/install.html
>
> Error details :
> Neutron Server log shows :
>
>  ERROR networking_ovn.ovsdb.impl_idl_ovn [-] OVS database connection
> to OVN_Northbound failed with error: '{u'error': u'unknown database',
> u'details': u'get_schema request specifies unknown database
> OVN_Northbound', u'syntax': u'["OVN_Northbound"]'}'. Verify that the
> OVS and OVN services are available and that the 'ovn_nb_connection'
> and 'ovn_sb_connection' configuration options are correct.
>
> The issue is that ovsdb-server on the controller binds to port 6641
> instead of 6640.
>
>

Hi Pranab,
Normally I have seen this happening when neutron-server (i.e. the
networking-ovn ML2 driver) tries to connect to the OVN northbound
ovsdb-server (on port 6641) and fails (mainly because the OVN NB db
ovsdb-server is not running). In that case the code here [1] runs
"ovs-vsctl add-connection ptcp:6641:..", which causes the main ovsdb-server
(for conf.db) to listen on port 6641.

Can you make sure that the ovsdb-servers for OVN are running before
neutron-server is started.

Maybe to see if it works you can run "ovs-vsctl del-manager", then run
"netstat -putna | grep 6641" and verify that the OVN NB db ovsdb-server
listens on 6641.
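As an aside, a quick generic way (not OVN-specific tooling, just a sketch) to reproduce the "Address already in use" symptom from the NB log before restarting services:

```python
import socket


def port_in_use(host, port):
    """Return True if host:port cannot be bound, i.e. some process owns it.

    Attempting a bind gives the same EADDRINUSE failure the OVN NB
    ovsdb-server logs when another ovsdb-server already holds the port.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
    except OSError:
        return True
    finally:
        s.close()
    return False
```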

[1] -
https://github.com/openstack/neutron/blob/stable/newton/neutron/agent/ovsdb/native/connection.py#L82

https://github.com/openstack/neutron/blob/stable/newton/neutron/agent/ovsdb/native/helpers.py#L41

Thanks
Numan

#  netstat -putna | grep 6641
>
> tcp0  0 192.168.10.10:6641  0.0.0.0:*
> LISTEN  809/ovsdb-server
>
> # netstat -putna | grep 6640 (shows no output)
>
> Now, OVN NB DB tries to listen on port 6641, but since it is used by
> the ovsdb-server, it's unable to. PID of ovsdb-server is 809, while
> the pid of OVN NB DB is 4217.
>
> OVN NB DB logs shows this:
>
> 2017-05-23T12:58:09.444Z|01421|ovsdb_jsonrpc_server|ERR|ptcp:6641:0.0.0.0:
> listen failed: Address already in use
> 2017-05-23T12:58:11.946Z|01422|socket_util|ERR|6641:0.0.0.0: bind:
> Address already in use
> 2017-05-23T12:58:14.448Z|01423|socket_util|ERR|6641:0.0.0.0: bind:
> Address already in use
>
> Solutions I tried:
> 1) Completely fresh installing everything.
> 2) Tried with OVS 2.6.0 and 2.7, same issue on all.
> 3) Checked  and verified : SB and NB configuration options in
> plugin.ini are exactly correct.
>
> Please help. Let me know if additional details are required.
>
> Thanks,
> Pranab
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-24 Thread Armando M.
On 24 May 2017 at 08:53, Robert Li (baoli)  wrote:

> Hi Kevin,
>
>
>
> In that case, I will start working on it. Should this be considered a RFE
> or a regular bug?
>

There have been discussions in the past about this [1]. The conclusion of
the discussion was: Nova should have everything it needs to expose trunk
details to guest via the metadata API/config drive and at this stage
nothing is required from the neutron end (and hence there's no point in
filing a Neutron RFE).

While notifying trunk changes to nova requires a simple minor enhancement in
neutron, it seems premature to go down that path when there's no nova
scaffolding yet. Someone should then figure out how the guest itself gets
notified of trunk changes so that it can rearrange its networking stack.
That might as well be left to some special sauce added to the guest image,
though no meaningful discussion has taken place on how to crack this
particular nut.

HTH
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1631371


>
> Thanks,
>
> Robert
>
>
>
> On 5/23/17, 12:12 AM, "Kevin Benton"  wrote:
>
>
>
> I think we just need someone to volunteer to do the work to expose it as
> metadata to the VM in Nova.
>
>
>
> On May 22, 2017 1:27 PM, "Robert Li (baoli)"  wrote:
>
> Hi Levi,
>
>
>
> Thanks for the info. I noticed that support in the nova code, but was
> wondering why something similar is not available for vlan trunking.
>
>
>
> --Robert
>
>
>
>
>
> On 5/22/17, 3:34 PM, "Moshe Levi"  wrote:
>
>
>
> Hi Robert,
>
> The closest thing that I know about is passing an SR-IOV physical
> function’s VLAN tag through to guests; see [1]
>
> Maybe you can leverage the same mechanism to configure vlan trunking in the guest.
>
>
>
> [1] - https://specs.openstack.org/openstack/nova-specs/specs/
> ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html
>
>
>
>
>
> *From:* Robert Li (baoli) [mailto:ba...@cisco.com]
> *Sent:* Monday, May 22, 2017 8:49 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [nova][vlan trunking] Guest networking
> configuration for vlan trunk
>
>
>
> Hi,
>
>
>
> I’m trying to find out if there is support in nova (in terms of metadata
> and cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage
> example’ provided in this wiki https://wiki.openstack.org/
> wiki/Neutron/TrunkPort, it indicates:
>
>
>
> # The typical cloud image will auto-configure the first NIC (eg. eth0)
> only and not the vlan interfaces (eg. eth0.VLAN-ID).
>
> ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101
>
>
>
> I’d like to understand why the support of configuring vlan interfaces in
> the guest is not added. And should it be added?
>
>
>
> Thanks,
>
> Robert
>
>
>
>
>
>


Re: [openstack-dev] [ironic] using keystone right - catalog, endpoints, tokens and noauth

2017-05-24 Thread Eric Fried
Pavlo-

There's a blueprint [1] whereby we're trying to address a bunch of
these same concerns in nova.  You can see the first part in action here
[2].  However, it has become clear that nova is just one of the many
services that would benefit from get_service_url().  With the full
support of mordred (let's call it The Full Monty), we've set our sights
on moving that method into ksa itself for that purpose.

Please have a look at this blueprint and change set.  Let us know if
your concerns would be addressed if this were available to you from ksa.

[1]
https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/use-service-catalog-for-endpoints.html
[2] https://review.openstack.org/#/c/458257/

Thanks,
efried

On 05/24/2017 04:46 AM, Pavlo Shchelokovskyy wrote:
> Hi all,
> 
> There are several problems or inefficiencies in how we are dealing with
> auth to other services. Although it became much better in Newton, some
> things are still to be improved, and I'd like to discuss how to tackle
> those and my ideas for doing so.
> 
> Keystone endpoints
> ===
> 
> Apparently since February-ish DevStack no longer sets up 'internal'
> endpoints for most of the services in core devstack [0].
> Luckily we were not broken by that right away - although when
> discovering a service endpoint from keystone catalog we default to
> 'internal' endpoint [1], for most services our devstack plugin still
> configures explicit service URL in the corresponding config section, and
> thus the service discovery from keystone never takes place (or that code
> path is not tested by functional/integration testing).
> 
> AFAIK different endpoint types (internal vs public) are still quite used
> by deployments (and IMO rightfully so), so we have to continue
> supporting that. I propose to take the following actions:
> 
> - in our devstack plugin, stop setting up the direct service URLs in
> config, always use keystone catalog for discovery
> - in every conf section related to external service add
> 'endpoint_type=[internal|public]' option, defaulting to 'internal', with
> a warning in option description (and validated on conductor start) that
> it will be changed to 'public' in the next release
> - use those values from CONF wherever we ask for service URL from
> catalog or instantiate client with session.
> - populate these options in our devstack plugin to be 'public'
> - in Queens, switch the default to 'public' and use defaults in devstack
> plugin, remove warnings.
> 
> Unify clients creation
> ==
> 
> again, in those config sections related to service clients, we have many
> options to instantiate clients from (especially glance section, see my
> other recent ML about our image service code). Many of those seem to be
> from the time when keystone catalog was missing some functionality or
> not existing at all, and keystoneauth lib abstracting identity and
> client sessions was not there either.
> 
> To simplify setup and unify as much code as possible I'd like to propose
> the following:
> 
> - in each config section for service client add (if missing) a
> '_url' option that should point to the API of given service and
> will be used *only in noauth mode* when there's no Keystone catalog to
> discover the service endpoint from
> - in the code creating service clients, always create a keystoneauth
> session from config sections, using appropriate keystoneauth identity
> plugin - 'token_endpoint' with fake token _url for noauth mode,
> 'password' for service user client, 'token' when using a token from
> incoming request. The latter will have a benefit to make it possible for
> the session to reauth itself when user token is about to expire, but
> might require changes in some public methods to pass in the full
> task.context instead of just token
> - always create clients from sessions. Although AFAIK all clients ironic
> uses already support this, some in ironic code (e.g. glance) still
> always create a client from token and endpoint directly.
> - deprecate some options explicitly registered by ironic in those
> sections that are becoming redundant - including those that relate to
> HTTP session settings (like timeout, retries, SSL certs and settings) as
> those will be used from options registered by keystoneauth Session, and
> those multiple options that piece together a single service URL.
> 
> This will decrease the complexity of service client-related code and
> will make configuring those cleaner.
> 
> Of course all of this has to be done minding proper deprecation process,
> although that might complicate things (as usual :/).
> 
> Legacy auth
> =
> 
> Probably not worth specific mention, but we implemented a proper
> keystoneauth-based loading of client auth options back in Newton almost
> a year ago, so the code attempting to load auth for clients in a
> deprecated way from "[keystone_authtoken]" section can be safely 
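The endpoint-type selection the proposal above would drive can be
illustrated with plain Python. This is a simplified stand-in, not the real
Keystone catalog format or the keystoneauth machinery; it only shows how an
'endpoint_type' config option would pick between internal and public URLs:

```python
def pick_endpoint(catalog, service_type, endpoint_type="internal"):
    """Select a service URL from a (simplified) catalog by interface.

    'endpoint_type' mirrors the proposed per-service config option;
    the catalog structure here is a stand-in for illustration only.
    """
    for entry in catalog:
        if entry["type"] != service_type:
            continue
        for ep in entry["endpoints"]:
            if ep["interface"] == endpoint_type:
                return ep["url"]
    raise LookupError("no %s endpoint for service %s"
                      % (endpoint_type, service_type))

# Hypothetical catalog with both interfaces registered for glance:
catalog = [
    {"type": "image",
     "endpoints": [
         {"interface": "public", "url": "https://glance.example.com:9292"},
         {"interface": "internal", "url": "http://10.0.0.5:9292"},
     ]},
]

print(pick_endpoint(catalog, "image"))            # internal by default
print(pick_endpoint(catalog, "image", "public"))
```

In the real code this lookup would of course be delegated to a keystoneauth
session/adapter rather than hand-rolled.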

Re: [openstack-dev] Security bug in diskimage-builder

2017-05-24 Thread Ben Nemec



On 05/17/2017 10:46 AM, Jeremy Stanley wrote:

On 2017-05-17 15:57:16 +0300 (+0300), George Shuklin wrote:

There is a bug in diskimage-builder that I reported on 2017-03-10 as 'private
security'. I think this bug is of medium severity.

So far there was no reaction at all. I plan to change this bug to public
security on next Monday. If someone is interested in bumping up CVE count
for DIB, please look at
https://bugs.launchpad.net/diskimage-builder/+bug/1671842 (private-walled
for security group).


Thanks for the heads up! One thing we missed in the migration of DIB
from TripleO to Infra team governance is that the bug tracker for it
was still under TripleO team control (I just now leveraged my
OpenStack Administrator membership on LP to fix that), so the bug
was only visible to https://launchpad.net/~tripleo until moments
ago.

That said, a "private" bug report visible to the 86 people who are
members of that LP team doesn't really qualify as private in my book
so there's probably no additional harm in just switching it to
public security while I work on triaging it with the DIB devs.
Going forward, private security bugs filed for DIB are only visible
to the 18 people who make up the diskimage-builder-core and
openstack-ci-core teams on LP, which is still more than it probably
should be but it's a start at least.


Hmm, this points out a valid issue that we don't have a security group 
for tripleo at all.  We use the tripleo group to include basically all 
tripleo developers so it's definitely not appropriate for this purpose.


Emilien, I think we should create a tripleo-coresec group in launchpad 
that can be used for this.  We have had tripleo-affecting security bugs 
in the past and I imagine we will again.  I'm happy to help out with 
that, although I will admit my launchpad-fu is kind of weak so I don't 
know off the top of my head how to do it.


-Ben



Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-24 Thread Jay S Bryant

Project:  Cinder

Attendees: Approximately 30

I was really pleased by the number of people that attended the Cinder 
session and the fact that the people in the room seemed engaged with 
the presentation and asked good questions showing interest in the 
project.  I think having the on-boarding rooms was beneficial and 
hopefully something that we can continue.


Given the number of people in the room we didn't go around and introduce 
everyone.  I did have Sean McGinnis introduce himself as PTL and had 
the other Cinder core members introduce themselves so that the attendees 
could put faces with our names.


From there we kicked off the presentation [1] which covered the 
following high level topics:


 * Introduction of Cinder's Repos and components
 * Quick overview of Cinder's architecture/organization
 * Pointers to the Upstream Institute education (Might have done a bit
   of a sales pitch for the next session here ;-))
 * Expanded upon the Upstream Institute education to explain how what
   was taught there specifically applied to Cinder
 * Walked through the main Cinder code tree
 * Described how to test changes to Cinder

My presentation was designed to assume that attendees had been through 
Upstream Institute.  I had coverage in the slides in case they had not 
been through the education.  Unfortunately most of the class had not 
been through the education so I did spend a portion of time re-iterating 
those concepts and less time was able to be spent at the end going 
through real world examples of working with changes in Cinder.  I got 
feedback from a few people that having some real hands on coding 
examples would have been helpful.


One way we could possibly handle this is to split the on-boarding into an 
introduction session and a more advanced second session.  The other 
option is to require people attending the on-boarding to have been 
through Upstream Institute.  Something to think about.


I think it was unfortunate that the session wasn't recorded.  We shared 
a lot of good information (between good questions and having a good 
representation of Cinder's Core team in the room) that it would have 
been nice to capture.  Given this I am planning at some point in the 
near future to work with Walt Boring to record a version of the 
presentation that can be uploaded to our Cinder YouTube channel and 
include some coding examples.


In summary, I think the on-boarding rooms were a great addition and the 
Cinder team is pleased with how we used the time.  I think it is 
something we would like to continue to invest time into developing and 
improving.


Jay

[1] 
https://www.slideshare.net/JayBryant2/openstack-cinder-onboarding-education-boston-summit-2017


On 5/19/2017 3:43 PM, Lance Bragstad wrote:

Project: Keystone
Attendees: 12 - 15

We conflicted with one of the Baremetal/VM sessions

I attempted to document most of the session in my recap [0].

We started out by doing a round-the-room of introductions so that 
folks could put IRC nicks to faces (we also didn't have a packed room 
so this went pretty quick). After that we cruised through a summary of 
keystone, the format of the projects, and the various processes we 
use. All of this took *maybe* 30 minutes.


From there we had an open discussion and things evolved organically. 
We ended up going through:


  * the differences between the v2.0 and v3 APIs
  * keystonemiddleware architecture, how it aids services, and how it
interacts with keystone
  o we essentially followed an API call for creating an instance
from keystone -> nova -> glance
  * how authentication scoping works and why it works that way
  * how federation works and why it's setup the way it is
  * how federated authentication works (https://goo.gl/NfY3mr)

All of this was pretty well-received and generated a lot of productive 
discussion. We also had several seasoned keystone contributors in the 
room, which helped a lot. Most of the attendees were all curious about 
similar topics, which was great, but we totally could have split into 
separate groups given the experience we had in the room (we'll save 
that in our back pocket for next time).


[0] https://www.lbragstad.com/blog/openstack-boston-summit-recap
[1] https://www.slideshare.net/LanceBragstad/keystone-project-onboarding

On Fri, May 19, 2017 at 10:37 AM, Michał Jastrzębski wrote:


Kolla:
Attendees - full room (20-30?)
Notes - Conflict with kolla-k8s demo probably didn't help

While we didn't have etherpad, slides, recording (and video dongle
that could fit my laptop), we had great session with analog tools
(whiteboard and my voice chords). We walked through architecture of
each Kolla project, how they relate to each other and so on.

Couple things to take out from our onboarding:
1. Bring dongles
2. We could've used bigger room - people were leaving because we had
no chairs 

Re: [openstack-dev] [glance] please approve test patch to fix glanceclient

2017-05-24 Thread Eric Fried
Thanks Nikhil.  This one is also needed to make py35 pass:

https://review.openstack.org/#/c/396816/

E

On 05/24/2017 10:55 AM, Nikhil Komawar wrote:
> thanks for bringing it up. this is done.
> 
On Wed, May 24, 2017 at 10:54 AM, Sean Dague wrote:
> 
> python-glanceclient patches have been failing for at least a week due to
> a requests change. The fix was posted 5 days ago -
> https://review.openstack.org/#/c/466385
> 
> 
> It would be nice to get that approved so that other patches could be
> considered.
> 
> -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> 
> 
> 
> 
> 
> 



[openstack-dev] [infra] lists.openstack.org maintenance Friday, May 26 20:00-21:00 UTC

2017-05-24 Thread Jeremy Stanley
The Mailman listserv on lists.openstack.org will be offline for an
archive-related maintenance for up to an hour starting at 20:00 UTC
May 26, this coming Friday. This activity is scheduled for a
relatively low-volume period across our lists; during this time,
most messages bound for the server will queue at the senders' MTAs
until the server is back in service and so should not result in any
obvious disruption.

Apologies for cross-posting so widely, but we wanted to make sure
copies of this announcement went to most of our higher-traffic
lists.
-- 
Jeremy Stanley




Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-24 Thread Zane Bitter

On 19/05/17 11:00, Lance Haig wrote:

Hi,

As we know the heat-templates repository has become out of date in some
respects and also has been difficult to be maintained from a community
perspective.

For me the repository is quite confusing, with different styles that are
used to show certain aspects and other styles for older template examples.

This I think leads to confusion and perhaps many people who give up on
heat as a resource as things are not that clear.

From discussions in other threads and on the IRC channel I have seen
that there is a need to change things a bit.


This is why I would like to start the discussion that we rethink the
template example repository.

I would like to open the discussion with my suggestions.

  * We need to differentiate templates that work on earlier versions of
heat that what is the current supported versions.


I typically use the heat_template_version for this. Technically this is 
entirely independent of what resource types are available in Heat. 
Nevertheless, if I submit e.g. a template that uses new resources only 
available in Ocata, I'll set 'heat_template_version: ocata' even if the 
template doesn't contain any Ocata-only intrinsic functions. We could 
make that a convention.
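For example, under that convention a template targeting Ocata would be
pinned like this (a minimal, hypothetical sketch; the resource shown is just
a placeholder and is not Ocata-specific):

```yaml
heat_template_version: ocata

description: >
  Pinned to the Ocata HOT version even though no Ocata-only intrinsic
  functions are used, signalling which Heat release the template targets.

resources:
  example_string:
    # Placeholder resource purely for illustration.
    type: OS::Heat::RandomString
    properties:
      length: 32
```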



  o I have suggested that we create directories that relate to
different versions so that you can create a stable version of
examples for the heat version and they should always remain
stable for that version and once it goes out of support can
remain there.


I'm reluctant to move existing things around unless its absolutely 
necessary, because there are a lot of links out in the wild to templates 
that will break. And they're links directly to the Git repo, it's not 
like we publish them somewhere and could add redirects.


Although that gives me an idea: what if we published them somewhere? We 
could make templates actually discoverable by publishing a list of 
descriptions (instead of just the names like you get through browsing 
the Git repo). And we could even add some metadata to indicate what 
versions of Heat they run on.



  o This would mean people can find their version of heat and know
these templates all work on their version


This would mean keeping multiple copies of each template and maintaining 
them all. I don't think this is the right way to do this - to maintain 
old stuff what you need is a stable branch. That's also how you're going 
to be able to test against old versions of OpenStack in the gate.


As I suggested in the other thread, I'd be OK with moving deprecated 
stuff to a 'deprecated' directory and then eventually deleting it. 
Stable branches would then correctly reflect the status of those 
templates at each previous release.



  * We should consider adding a docs section that that includes training
for new users.
  o I know that there are documents hosted in the developer area and
these could be utilized but I would think having a documentation
section in the repository would be a good way to keep the
examples and the documents in the same place.
  o This docs directory could also host some training for new users
and old ones on new features etc.. In a similar line to what is
here in this repo https://github.com/heat-extras/heat-tutorial
  * We should include examples form the default hooks e.g. ansible salt
etc... with SoftwareDeployments.
  o We found this quite helpful for new users to understand what is
possible.
  * We should make sure that the validation running against the
templates runs without ignoring errors.
  o This was noted in IRC that some errors were ignored as the
endpoints or catalog was not available. It would be good to have
some form of headless catalog server that tests can be run
against so that developers of templates can validate before
submitting patches.


+1 to all of this.


These points are here to open the discussions around this topic

Please feel free to make your suggestions.


Thanks for kicking this off!

cheers,
Zane.




Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 18:11:12 +0200 (+0200), Dmitry Tantsur wrote:
> On 05/24/2017 05:57 PM, Sean McGinnis wrote:
[...]
> > It was mostly just due to the layout of the tables
[...]
> > We did end up moving tables closer together from their original
> > huge square layout
> 
> In the Ironic room we also had to move tables a bit closer. They
> were really far apart, and not all contributors where comfortable
> with raising their voice.
[...]

The organizers tried really hard to convey that rearranging
tables/chairs in each room was not only allowed but encouraged.
Hopefully by the time September rolls around that message will sink
in. ;)
-- 
Jeremy Stanley



Re: [openstack-dev] [Nova] [Cells] Stupid question: Cells v2 & AZs

2017-05-24 Thread David Medberry
Thanks all.

On Wed, May 24, 2017 at 8:08 AM, Dan Smith  wrote:

> > Thanks for answering the base question. So, if AZs are implemented with
> > haggs, then really, they are truly disjoint from cells (ie, not a subset
> > of a cell and not a superset of a cell, just unrelated.) Does that
> > philosophy agree with what you are stating?
>
> Correct, aggregates are at the top level, and they can span cells if you
> so desire (or not if you don't configure any that do). The aggregate
> stuff doesn't know anything about cells, it only knows about hosts, so
> it's really independent.
>
> --Dan
>
>


Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-05-24 Thread Zane Bitter

On 22/05/17 12:49, Lance Haig wrote:

I also asked the other day if there is a list of heat versions matched to
OpenStack versions and I was told that there is not.


You mean like 
https://docs.openstack.org/developer/heat/template_guide/hot_spec.html#ocata 
?




Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Dmitry Tantsur

On 05/24/2017 05:57 PM, Sean McGinnis wrote:

On 05/24/2017 10:13 AM, Jeremy Stanley wrote:

On 2017-05-24 08:28:56 -0500 (-0500), Sean McGinnis wrote:
[...]

In ATL the Cinder room was very large, to the point that it was
actually a little difficult hearing some people for some of the
discussions.

[...]

Was the room actually full, or were people just failing to properly
huddle up in one spot so they could hear? The former is probably
just something that has to be dealt with through other means than
resizing rooms, but the latter seems perfectly solvable by just
telling people that if they want to hear they should come where all
the action is.



It was mostly just due to the layout of the tables, and the really
loud HVAC fans blowing. We did end up moving tables closer together
from their original huge square layout, but we probably could have
shrunk it down even more. It was fine, we just could have probably
gotten by with 1/2 - 2/3 of the room.


In the Ironic room we also had to move tables a bit closer. They were really far 
apart, and not all contributors were comfortable with raising their voice.







Re: [openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Joshua Harlow

Matt Riedemann wrote:

Rocky tipped me off to a request to document config drive which came up
at the Boston Forum, and I tracked that down to Clark's wishlist
etherpad [1] (L195) which states:

"Document the config drive. The only way I have been able to figure out
how to make a config drive is by either reading nova's source code or by
reading cloud-init's source code."

So naturally I have some questions, and I'm looking to flesh the idea /
request out a bit so we can start something in the in-tree nova devref.

Question the first: is this existing document [2] helpful? At a high
level, that's more about 'how' rather than 'what', as in what's in the
config drive.


So yes and no; for example, there are (AFAIK) operational differences that 
are not mentioned. I believe the instance metadata (the "meta" field on 
that page) can be changed, and a subsequent call to the metadata REST API 
will return the updated value, while the config drive will never have the 
updated value, so these kinds of gotchas really should be mentioned 
somewhere. Are there other gotchas like this (where the metadata REST API 
returns a different/changing value while the config drive stays 
immutable)?


Those would be really nice to know about/document and/or fix.

One that comes to mind: does the "security-groups" field change in the 
metadata REST API (while the config drive stays the same) when a 
security group is added or removed?
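As an illustration of that immutability in practice, here is a small sketch
that reads the frozen metadata from a mounted config drive. The
/openstack/latest layout is the standard one, but treat the function itself
as a hypothetical helper, not actual nova or cloud-init code:

```python
import json
import os

def read_config_drive_metadata(mount_point="/mnt/config"):
    """Load meta_data.json from a mounted config drive.

    The config drive contents are fixed at boot time, unlike the
    metadata REST API, which may return updated values later.
    """
    path = os.path.join(mount_point, "openstack", "latest",
                        "meta_data.json")
    with open(path) as f:
        return json.load(f)
```

Re-running this after changing the instance metadata server-side would keep
returning the boot-time values, which is exactly the gotcha above.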




Question the second: are people mostly looking for documentation on the
content of the config drive? I assume so, because without reading the
source code you wouldn't know, which is the terrible part.


For better/worse idk if there are that many people trying to figure out 
the contents; cloud-init tries to hide it behind the concept of a 
datasource (see 
https://cloudinit.readthedocs.io/en/latest/topics/datasources.html#datasource-documentation 
for a bunch of them) but yes I think a better job could be done 
explaining the contents (if just to make certain cloud-init `like` 
programs easier to make).




Based on this, I can think of a few things we can do:

1. Start documenting the versions which come out of the metadata API
service, which regardless of whether or not you're using it, is used to
build the config drive. I'm thinking we could start with something like
the in-tree REST API version history [3]. This would basically be a
change log of each version, e.g. in 2016-06-30 you got device tags, in
2017-02-22 you got vlan tags, etc.

2. Start documenting the contents similar to the response tables in the
compute API reference [4]. For example, network_data.json has an example
response in this spec [5]. So have an example response and a table with
an explanation of fields in the response, so describe
ethernet_mac_address and vif_id, their type, whether or not they are
optional or required, and in which version they were added to the
response, similar to how we document microversions in the compute REST
API reference.
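For instance, the VLAN-related part of a network_data.json response could be
documented with an annotated example along these lines (field names follow
the spec in [5]; all values here are made up, and the exact field set should
be checked against the nova source):

```json
{
  "links": [
    {
      "id": "tap2f88d109-5b",
      "type": "vlan",
      "vlan_id": 101,
      "vlan_link": "tap2f88d109-5a",
      "ethernet_mac_address": "fa:16:3e:aa:bb:cc",
      "vif_id": "2f88d109-5b5e-4225-ade3-4aaff2747cfe"
    }
  ],
  "networks": [
    {
      "id": "network0",
      "type": "ipv4_dhcp",
      "link": "tap2f88d109-5b",
      "network_id": "99e88329-f20d-4741-9593-25bf07847b16"
    }
  ],
  "services": []
}
```

A table under such an example would then describe each field's type, whether
it is required, and the metadata version it appeared in.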

--

Are there other thoughts here or things I'm missing? At this point I'm
just trying to gather requirements so we can get something started. I
don't have volunteers to work on this, but I'm thinking we can at least
start with some basics and then people can help flesh it out over time.



As one of the developers of cloud-init yes please to all the above.

Fyi,

https://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html 
is something cloud-init has (nothing like the detail that could be 
produced by nova itself).


`network_data.json` was one of those examples that was somewhat hard to 
figure out, but eventually the other cloud-init folks and I did.




Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Sean McGinnis

On 05/24/2017 10:13 AM, Jeremy Stanley wrote:

On 2017-05-24 08:28:56 -0500 (-0500), Sean McGinnis wrote:
[...]

In ATL the Cinder room was very large, to the point that it was
actually a little difficult hearing some people for some of the
discussions.

[...]

Was the room actually full, or were people just failing to properly
huddle up in one spot so they could hear? The former is probably
just something that has to be dealt with through other means than
resizing rooms, but the latter seems perfectly solvable by just
telling people that if they want to hear they should come where all
the action is.



It was mostly just due to the layout of the tables and the really
loud HVAC fans blowing. We did end up moving tables closer together
from their original huge square layout, but we probably could have
shrunk it down even more. It was fine; we probably could have
gotten by with 1/2 - 2/3 of the room.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] please approve test patch to fix glanceclient

2017-05-24 Thread Nikhil Komawar
thanks for bringing it up. this is done.

On Wed, May 24, 2017 at 10:54 AM, Sean Dague  wrote:

> python-glanceclient patches have been failing for at least a week due to
> a requests change. The fix was posted 5 days ago -
> https://review.openstack.org/#/c/466385
>
> It would be nice to get that approved so that other patches could be
> considered.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-24 Thread Robert Li (baoli)
Hi Kevin,

In that case, I will start working on it. Should this be considered an RFE or a 
regular bug?

Thanks,
Robert

On 5/23/17, 12:12 AM, "Kevin Benton" 
> wrote:

I think we just need someone to volunteer to do the work to expose it as 
metadata to the VM in Nova.

On May 22, 2017 1:27 PM, "Robert Li (baoli)" 
> wrote:
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
> wrote:

Hi Robert,
The closest thing that I know about is exposing the SR-IOV physical function’s 
VLAN tag to guests; see [1].
Maybe you can leverage the same mechanism to configure VLAN trunking in the guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I’d like to understand why the support of configuring vlan interfaces in the 
guest is not added. And should it be added?
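For reference, the network_data.json format already defines a "vlan" link type
(with vlan_link/vlan_id fields) in the metadata spec; the gap this thread is
about is nova actually populating it for trunk subports. If it did, a guest-side
agent could derive the ip commands itself. A sketch, assuming such entries are
present (illustration only, not existing nova or cloud-init code):

```python
def vlan_link_commands(network_data):
    """Derive `ip link` commands for VLAN subinterfaces from the "links"
    section of a network_data.json dict. Assumes trunk subports appear as
    links of type "vlan" carrying "vlan_link" (parent interface name) and
    "vlan_id" fields, per the metadata spec; nova does not currently emit
    these for trunk subports, which is the point of this thread."""
    cmds = []
    for link in network_data.get("links", []):
        if link.get("type") == "vlan":
            parent = link["vlan_link"]
            vlan_id = link["vlan_id"]
            # Mirrors the manual command from the TrunkPort wiki example:
            # ip link add link eth0 name eth0.101 type vlan id 101
            cmds.append(
                "ip link add link {p} name {p}.{v} type vlan id {v}".format(
                    p=parent, v=vlan_id))
    return cmds
```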

Thanks,
Robert

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-bagpipe] exabgp dependency

2017-05-24 Thread James Page
Hi Thomas

On Tue, 23 May 2017 at 09:14  wrote:

> Hi James,
>
> FYI, exabgp 4.0.0 has been released and this release can be package to
> satisfy networking-bagpipe needs.
> A request for adding exabgp as a proper OpenStack requirement is in
> flight: https://review.openstack.org/#/c/467068
>

Great - added to my TODO list for pike b2.

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Alexandra Settle


On 5/24/17, 4:34 PM, "Doug Hellmann"  wrote:

Excerpts from Thierry Carrez's message of 2017-05-24 16:44:44 +0200:
> Doug Hellmann wrote:
> > Two questions about theme-based initiatives:
> > 
> > I would like to have a discussion about the work to stop syncing
> > requirements and to see if we can make progress on the implementation
> > while we have the right people together. That seems like a topic
> > for Monday or Tuesday. Would I reserve one of the "last-minute WG"
> > rooms or extra rooms (it doesn't feel very last-minute if I know
> > about it now)?
> 
> We have some flexibility there, depending on how much time we need. If
> it's an hour or two, I would go and book one of the "reservable rooms".
> If it's a day or two, I would assign one of the rooms reserved for
> upcoming workgroups/informal teams. "Last-minute" is confusing (just
> changed it on the spreadsheet), those rooms are more to cater for needs
> that are not represented in formal workgroups today.

Ideally I'd like some time to actually hack on the tools. If that's not
possible, an hour or two to work out a detailed plan would be enough.

> > I don't see a time on here for the Docs team to work together on
> > the major initiative they have going to change how publishing works.
> > I didn't have the impression that we could anticipate that work
> > being completed this cycle. Is the "helproom" going to be enough,
> > or do we need a separate session for that, too?
> 
> Again, we should be pretty flexible on that. We can reuse the doc
> helproom, or we can say that the doc helproom is only on one day and the
> other day is dedicated to making the doc publishing work. And if the doc
> refactoring ends up being a release goal, it could use a release goal
> room instead.

Let's see what Alex thinks.

To be honest, I don’t want to assume we’re going to need all this space when 
potentially, we may not. Attendance for docs and I18N is low, although it was 
greatly increased at the last PTG thanks to the ability to have remote attendees. 
But that’s not what’s being discussed here…

I would appreciate the room to grow. I would like to find out, first and 
foremost, what our action plan for docs is. This is going to be a monstrous 
effort, and we’re quite possibly going to need space at events like the PTG to 
hack away and get things going.

So, I know that didn’t fully answer the question, but I hope it gives some 
perspective as to why I’m holding back a bit here. The strawman is appreciated, 
thanks Thierry. The small room for docs seems feasible. I just hope there will 
be the ability to spill over and open up the rest of the week to hack.

> All in all, the idea is to show that on Mon-Tue we have a number of
> rooms set apart to cover for any inter-project need we may have (already
> identified or not). The room allocation is actually much tighter on
> Wed-Thu :)

OK.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Lance Bragstad
Hey all,

To date we have two proposed solutions for tackling the admin-ness issue we
have across the services. One builds on the existing scope concepts by
scoping to an admin project [0]. The other introduces global role
assignments [1] as a way to denote elevated privileges.

I'd like to get some feedback from operators, as well as developers from
other projects, on each approach. Since work is required in keystone, it
would be good to get consensus before spec freeze (June 9th). If you have
specific questions on either approach, feel free to ping me or drop by the
weekly policy meeting [2].

Thanks!

[0] http://adam.younglogic.com/2017/05/fixing-bug-96869/
[1] https://review.openstack.org/#/c/464763/
[2] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-24 16:44:44 +0200:
> Doug Hellmann wrote:
> > Two questions about theme-based initiatives:
> > 
> > I would like to have a discussion about the work to stop syncing
> > requirements and to see if we can make progress on the implementation
> > while we have the right people together. That seems like a topic
> > for Monday or Tuesday. Would I reserve one of the "last-minute WG"
> > rooms or extra rooms (it doesn't feel very last-minute if I know
> > about it now)?
> 
> We have some flexibility there, depending on how much time we need. If
> it's an hour or two, I would go and book one of the "reservable rooms".
> If it's a day or two, I would assign one of the rooms reserved for
> upcoming workgroups/informal teams. "Last-minute" is confusing (just
> changed it on the spreadsheet), those rooms are more to cater for needs
> that are not represented in formal workgroups today.

Ideally I'd like some time to actually hack on the tools. If that's not
possible, an hour or two to work out a detailed plan would be enough.

> > I don't see a time on here for the Docs team to work together on
> > the major initiative they have going to change how publishing works.
> > I didn't have the impression that we could anticipate that work
> > being completed this cycle. Is the "helproom" going to be enough,
> > or do we need a separate session for that, too?
> 
> Again, we should be pretty flexible on that. We can reuse the doc
> helproom, or we can say that the doc helproom is only on one day and the
> other day is dedicated to making the doc publishing work. And if the doc
> refactoring ends up being a release goal, it could use a release goal
> room instead.

Let's see what Alex thinks.

> All in all, the idea is to show that on Mon-Tue we have a number of
> rooms set apart to cover for any inter-project need we may have (already
> identified or not). The room allocation is actually much tighter on
> Wed-Thu :)

OK.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 08:28:56 -0500 (-0500), Sean McGinnis wrote:
[...]
> In ATL the Cinder room was very large, to the point that it was
> actually a little difficult hearing some people for some of the
> discussions.
[...]

Was the room actually full, or were people just failing to properly
huddle up in one spot so they could hear? The former is probably
just something that has to be dealt with through other means than
resizing rooms, but the latter seems perfectly solvable by just
telling people that if they want to hear they should come where all
the action is.
-- 
Jeremy Stanley




Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 11:58:15 + (+), Andrea Frittoli wrote:
[...]
> The only concern I have here is that the topics to be discussed by
> the QA and Infra teams will be different and a shared room might
> not be ideal since we would have two ongoing discussions in the
> same room. The three days would be a mix of QA sessions and
> hacking. Sharing a room for the hacking part is not an issue, but
> I would prefer to have a dedicated room for at least 1.5 days.

Agreed, we could still probably make it work, but even with just the
Infra team by itself in a room at the first PTG we had enough
efforts/discussions going on in parallel that it got distracting at
times and we had to ask Monty^H^H^H^H^Hpeople to converse more
quietly. ;)

That said, moving general help to other days will probably cut down
on some of it too. So while I'd prefer the Infra team got its own
room, I understand if there's not enough to go around.
-- 
Jeremy Stanley




Re: [openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Clark Boylan


On Wed, May 24, 2017, at 07:39 AM, Matt Riedemann wrote:
> Rocky tipped me off to a request to document config drive which came up 
> at the Boston Forum, and I tracked that down to Clark's wishlist 
> etherpad [1] (L195) which states:
> 
> "Document the config drive. The only way I have been able to figure out 
> how to make a config drive is by either reading nova's source code or by 
> reading cloud-init's source code."
> 
> So naturally I have some questions, and I'm looking to flesh the idea / 
> request out a bit so we can start something in the in-tree nova devref.
> 
> Question the first: is this existing document [2] helpful? At a high 
> level, that's more about 'how' rather than 'what', as in what's in the 
> config drive.

This is helpful, but I think it is targeted to the deployer of OpenStack
and not the consumer of OpenStack.

> Question the second: are people mostly looking for documentation on the 
> content of the config drive? I assume so, because without reading the 
> source code you wouldn't know, which is the terrible part.

I'm (as a cloud user) mostly noticing the lack of information
on why cloud users might use config drive and how to consume it.
Documentation for the content of the config drive is a major piece of
what is missing: what do the key/value pairs mean, and how can I use them
to configure my nova instances to operate properly?

But also general information like: config drive can be more reliable
than the metadata service, as it's directly attached to the instance.
The trade-off is possibly no live migration for the instance (under what
circumstances does live migration work, and as a user is that
discoverable?). What filesystems are valid that I need to handle in my
instance images? Will the device id always be config-2? And so on. The
user guide doc you linked does try to address some of this, but seems to
do so from the perspective of the person deploying a cloud: "do this if
you want to avoid dhcp in your cloud", "install these things on compute hosts".
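For concreteness, a minimal sketch of what consuming a mounted config drive
looks like from inside the instance, assuming the "openstack" (as opposed to
legacy "ec2") layout and that pinning to latest/ is acceptable; a robust
consumer would pick a concrete version directory it knows how to parse:

```python
import json
import os


def read_config_drive(mount_point="/mnt/config"):
    """Read meta_data.json and network_data.json from a mounted config drive.

    Assumes the openstack/<version>/ layout; network_data.json is optional,
    so absent files are simply skipped. Returns a dict keyed by file name.
    """
    base = os.path.join(mount_point, "openstack", "latest")
    data = {}
    for name in ("meta_data.json", "network_data.json"):
        path = os.path.join(base, name)
        if os.path.exists(path):
            with open(path) as f:
                data[name] = json.load(f)
    return data
```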

> Based on this, I can think of a few things we can do:
> 
> 1. Start documenting the versions which come out of the metadata API 
> service, which regardless of whether or not you're using it, is used to 
> build the config drive. I'm thinking we could start with something like 
> the in-tree REST API version history [3]. This would basically be a 
> change log of each version, e.g. in 2016-06-30 you got device tags, in 
> 2017-02-22 you got vlan tags, etc.

I like this, as it should enable cloud users to implement tooling that
knows what it needs and can error properly if it ends up on a cloud too
old to contain the required information.

> 2. Start documenting the contents similar to the response tables in the 
> compute API reference [4]. For example, network_data.json has an example 
> response in this spec [5]. So have an example response and a table with 
> an explanation of fields in the response, so describe 
> ethernet_mac_address and vif_id, their type, whether or not they are 
> optional or required, and in which version they were added to the 
> response, similar to how we document microversions in the compute REST 
> API reference.

++

> 
> --
> 
> Are there other thoughts here or things I'm missing? At this point I'm 
> just trying to gather requirements so we can get something started. I 
> don't have volunteers to work on this, but I'm thinking we can at least 
> start with some basics and then people can help flesh it out over time.

I like this, starting small to produce something useful then going from
there makes sense to me.

Another idea I've had is a tool that collects (or is fed) the
information that goes into config drives and produces the device to
attach to a VM. The reason is that while config drive is something that
grew out of nova/OpenStack, you often want to boot the same images with
tools other than Nova, so making it easy for those other tools to work
properly would be nice. In the simple case, I build images locally,
then boot them with kvm to test that they work before pushing things
into OpenStack, and config drive makes that somewhat complicated. Ideally
this would be the same code that nova uses to generate the config drives,
just with a command line front end.
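A rough sketch of what such a front end might do. The openstack/<version>/
file layout mirrors what Nova-generated config drives look like, but this is
an illustration, not Nova's actual code; the genisoimage invocation and its
flags are an assumption about the build host:

```python
import json
import os
import subprocess


def build_config_drive(out_dir, meta_data, user_data=None, network_data=None,
                       version="latest"):
    """Lay out a config-drive directory tree that can be packed into an ISO.

    Writes meta_data.json (and optionally network_data.json / user_data)
    under out_dir/openstack/<version>/ and returns that directory.
    """
    base = os.path.join(out_dir, "openstack", version)
    os.makedirs(base, exist_ok=True)
    with open(os.path.join(base, "meta_data.json"), "w") as f:
        json.dump(meta_data, f)
    if network_data is not None:
        with open(os.path.join(base, "network_data.json"), "w") as f:
            json.dump(network_data, f)
    if user_data is not None:
        with open(os.path.join(base, "user_data"), "w") as f:
            f.write(user_data)
    return base


def pack_iso(tree_dir, iso_path):
    """Pack the tree into an ISO9660 image labelled config-2, the label
    cloud-init looks for. Requires genisoimage (or mkisofs) installed."""
    subprocess.check_call(["genisoimage", "-output", iso_path,
                           "-volid", "config-2", "-joliet", "-rock",
                           tree_dir])
```

The resulting image can then be attached to a local kvm guest as a cdrom
for the build-then-test-locally workflow described above.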

> 
> [1] https://etherpad.openstack.org/p/openstack-user-api-improvements
> [2] https://docs.openstack.org/user-guide/cli-config-drive.html
> [3]
> https://docs.openstack.org/developer/nova/api_microversion_history.html
> [4] https://developer.openstack.org/api-ref/compute/
> [5] 
> https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact

Thank you for bringing this up,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 12:10:17 +0200 (+0200), Thierry Carrez wrote:
[...]
> One of the things you might notice on Mon-Tue is the "helproom" concept.
> In the spirit of separating inter-project and intra-project activities,
> support teams (Infra, QA, Release Management, Stable branch maintenance,
> but also teams that are looking at decentralizing their work like Docs
> or Horizon) would have team members in helprooms available to provide
> guidance to vertical teams on their specific needs. Some of those teams
> (like Infra/QA) could still have a "team meeting" on Wed-Fri to get
> stuff done as a team, though.
[...]

I really like this idea (wish I'd thought of it!), both because I
enjoy helping others solve problems but also because that desire to
help random people with issues prevented me from participating more
meaningfully in our inward-focused teamwork at the first PTG. I'm
entirely in favor of having my cake and eating it too. ;)
-- 
Jeremy Stanley




Re: [openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Mathieu Gagné
On Wed, May 24, 2017 at 10:39 AM, Matt Riedemann 
wrote:
>
> Rocky tipped me off to a request to document config drive which came up
at the Boston Forum, and I tracked that down to Clark's wishlist etherpad
[1] (L195) which states:
>
> "Document the config drive. The only way I have been able to figure out
how to make a config drive is by either reading nova's source code or by
reading cloud-init's source code."
>
> So naturally I have some questions, and I'm looking to flesh the idea /
request out a bit so we can start something in the in-tree nova devref.
>
> Question the first: is this existing document [2] helpful? At a high
level, that's more about 'how' rather than 'what', as in what's in the
config drive.

Thanks, I didn't know about that page. I usually read sources or boot an
instance and check by myself.

> Question the second: are people mostly looking for documentation on the
content of the config drive? I assume so, because without reading the
source code you wouldn't know, which is the terrible part.

I usually boot an instance and inspect the config drive. Usually it's for
network_data.json since (in our case) we support various network models
(flat, nic teaming, tagged vlan, etc.) and need a concrete example of each.

> Based on this, I can think of a few things we can do:
>
> 1. Start documenting the versions which come out of the metadata API
service, which regardless of whether or not you're using it, is used to
build the config drive. I'm thinking we could start with something like the
in-tree REST API version history [3]. This would basically be a change log
of each version, e.g. in 2016-06-30 you got device tags, in 2017-02-22 you
got vlan tags, etc.

+1

I'm not sure about the format, as there are a lot of cases to cover.

* There are multiple supported config drive versions (2012-08-10, ...,
2017-02-22), so we need to document them all.
* How do we plan on making it easy for someone to understand which fields
will be available in each version?
* If a field is removed, how will that be expressed in the documentation?
* Could a field change type in the future? (object vs. list of objects, for
example)
* The idea of documentation similar to the REST API version history is
good. I however wouldn't have the patience to mentally "compute" the
resulting config drive, so I think we need both (history + schema/examples).
* We should document the purpose of each field and how a user can populate
or use it. For example, I have no idea what the purpose of the
"launch_index" field is, but I suspect it's related to the --max-count
parameter of the nova boot command.

> 2. Start documenting the contents similar to the response tables in the
compute API reference [4]. For example, network_data.json has an example
response in this spec [5]. So have an example response and a table with an
explanation of fields in the response, so describe ethernet_mac_address and
vif_id, their type, whether or not they are optional or required, and in
which version they were added to the response, similar to how we document
microversions in the compute REST API reference.

+1

> [1] https://etherpad.openstack.org/p/openstack-user-api-improvements
> [2] https://docs.openstack.org/user-guide/cli-config-drive.html
> [3]
https://docs.openstack.org/developer/nova/api_microversion_history.html
> [4] https://developer.openstack.org/api-ref/compute/
> [5]
https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact

--
Mathieu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Save the Date- Queens PTG

2017-05-24 Thread Marcin Juszkiewicz
W dniu 24.05.2017 o 16:47, Anita Kuno pisze:
> $ means dollar, it doesn't mean USD as a default and Canadian only if we
> feel like it.

Dollars, dollars, dollars, dollars...

I hope for some price quotes and details soon. Have to provide budget
request to be able to attend at all ;D

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Matt Riedemann

On 5/11/2017 1:44 PM, Georg Kunz wrote:
Nevertheless, one concrete thing which came to my mind, is this proposed 
improvement of the interaction between Nova and Neutron:


https://review.openstack.org/#/c/390513/

In a nutshell, the idea is that Neutron adds more information to a port 
object so that Nova does not need to make multiple calls to Neutron to 
collect all required networking information. It seems to have stalled 
for the time being, but bringing forward the edge computing use case 
might increase the interest again.




Yes, we've needed a sort of bulk query capability in the networking API 
for years. I started going down a path the other night of trying to see 
if we could bulk query floating IPs when building the internal instance 
network info cache [1], but it looks like that's not supported. The REST 
API docs for Neutron say that you can't OR filter query parameters 
together, but looking at the code it seemed like it might be 
possible.


Chris Friesen from Wind River has also been looking at some of this 
lately, see [2].


But getting people looking at doing performance profiling at scale and 
then identifying the major pain points would be really really helpful 
for the upstream development team that don't have access to those types 
of resources. Then we could prioritize investigating ways to fix those 
issues to improve performance.


[1] https://review.openstack.org/#/c/465792/
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117096.html

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Jay S Bryant

Thierry,

Thank you for getting this Strawman out.  I think it is helpful!

I think the plan looks good.

Jay


On 5/24/2017 5:10 AM, Thierry Carrez wrote:

Hi everyone,

In a previous thread[1] I introduced the idea of moving the PTG from a
purely horizontal/vertical week split to a more
inter-project/intra-project activities split, and the initial comments
were positive.

We need to solidify how the week will look like before we open up
registration (first week of June), so that people can plan their
attendance accordingly. Based on the currently-signed-up teams and
projected room availability, I built a strawman proposal of how that
could look:

https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true

Let me know what you think. If you're scheduled on the Wed-Fri but would
rather be scheduled on Mon-Tue to avoid conflicting with another team,
let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
let me know as well, I'll update the spreadsheet accordingly.

One of the things you might notice on Mon-Tue is the "helproom" concept.
In the spirit of separating inter-project and intra-project activities,
support teams (Infra, QA, Release Management, Stable branch maintenance,
but also teams that are looking at decentralizing their work like Docs
or Horizon) would have team members in helprooms available to provide
guidance to vertical teams on their specific needs. Some of those teams
(like Infra/QA) could still have a "team meeting" on Wed-Fri to get
stuff done as a team, though.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116971.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] please approve test patch to fix glanceclient

2017-05-24 Thread Sean Dague
python-glanceclient patches have been failing for at least a week due to
a requests change. The fix was posted 5 days ago -
https://review.openstack.org/#/c/466385

It would be nice to get that approved so that other patches could be
considered.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 14:22:14 +0200 (+0200), Thierry Carrez wrote:
[...]
> we ship JARs already:
> http://tarballs.openstack.org/ci/monasca-common/
[...]

Worth pointing out, those all have "SNAPSHOT" in their filenames
which by Apache Maven convention indicates they're not official
releases. Also they're only being hosted from our
tarballs.openstack.org site, not published to the Maven Central
Repository (the equivalent of DockerHub in this analogy).

> That said, only a small fraction of our current OpenStack deliverables
> are supported by the VMT and therefore properly security-maintained "by
> the community" with strong guarantees and processes. So I don't see
> adding such binary deliverables (maintained by their respective teams)
> as a complete revolution. I'd expect the VMT to require a lot more
> staffing (like dedicated people to track those deliverables content)
> before they would consider those security-supported.

The Kolla team _has_ expressed interest in attaining
vulnerability:managed for at least some of their deliverables in the
future, but exactly what that would look like from a coverage
standpoint has yet to be ironed out. I don't expect we would
actually cover general vulnerabilities present in any container
images, and would only focus on direct vulnerabilities in the Kolla
source repositories instead. Rather than extending the VMT to track
vulnerable third-party software present in images, it's more likely
the Kolla team would form their own notifications subgroup to track
and communicate such risks downstream.
-- 
Jeremy Stanley




Re: [openstack-dev] Save the Date- Queens PTG

2017-05-24 Thread Jimmy McArthur
I believe he means normalized in the programming sense, like for the 
sake of comparison.


Jimmy


Anita Kuno 
May 24, 2017 at 9:47 AM

I take offense to the idea that USD is considered normal, especially 
when referencing hotel rates in a Canadian city. $ means dollar, it 
doesn't mean USD as a default and Canadian only if we feel like it.


Are we still an international organization?

I further question that you were unable to get any Montreal hotel to 
take a substantial piece of business for anything less than $270 CDN. 
That seems very unusual to me.


Thank you,
Anita.

Thierry Carrez 
May 24, 2017 at 3:16 AM

Yes they were normalized to USD.

Monty Taylor 
May 23, 2017 at 6:46 PM


I believe (although I do not know for absolute certain) that the 
prices given were already normalized to USD. So it wasn't $149USD vs 
$200CDN it was $149USD vs $200USD.


It is, of course, worth clarifying.


Anita Kuno 
May 23, 2017 at 6:27 PM

My mistake, I mis-read: $149 USD is equal to $201 CDN (Canadian 
currency), so it looks like the same price to me.


Thanks,
Anita.

Anita Kuno 
May 23, 2017 at 6:25 PM


According to xe.com (my favourite currency exchange rate website) $140 
USD is equivalent to $189 CDN (Canadian currency), and $200 CDN 
(Canadian currency) is equal to $147 USD.


Sounds like a difference in room rate of about $10 per night in 
Canadian currency and $7 per night in US currency.


I must be missing something.

Thank you,
Anita.





Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Matt Riedemann

On 5/24/2017 6:48 AM, Ronan-Alexandre Cherrueau wrote:

You can find examples of such diagrams
that have been automatically generated on the website where we host
results of our experiments[5].


This is nice; I've seen others do things like this before with Rally and 
osprofiler. The super-thin vertical layout is hard to follow, though; it 
would be nice if the graph could be expanded horizontally.


--

Thanks,

Matt



Re: [openstack-dev] Save the Date- Queens PTG

2017-05-24 Thread Anita Kuno

On 2017-05-24 04:16 AM, Thierry Carrez wrote:

Monty Taylor wrote:

On 05/23/2017 06:27 PM, Anita Kuno wrote:

My mistake, I mis-read: $149 USD is equal to $201 CDN (Canadian
currency), so it looks like the same price to me.

I believe (although I do not know for absolute certain) that the prices
given were already normalized to USD. So it wasn't $149USD vs $200CDN it
was $149USD vs $200USD.

It is, of course, worth clarifying.

Yes they were normalized to USD.

I take offense to the idea that USD is considered normal, especially 
when referencing hotel rates in a Canadian city. $ means dollar; it 
doesn't mean USD by default and Canadian only if we feel like it.


Are we still an international organization?

I further question that you were unable to get any Montreal hotel to 
take a substantial piece of business for anything less than $270 CDN. 
That seems very unusual to me. 


Thank you,
Anita.



Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Thierry Carrez
Sean McGinnis wrote:
> I see Cinder is down for an XL room. Any idea of the size of the rooms
> this time around? If they are going to be as spacious as ATL, I think a
> L room might actually work better for us than XL. Of course I say this
> not really knowing how many attendees we will end up with. We can always
> reconfigure the tables in the larger room, which we did end up doing in
> ATL.

Good question -- I don't have the room maps yet, and in the data I have,
"XL" rooms are marked for 50-80 people. I'll let you know when I know
more. Don't read too much into the room sizes, as that may/will change
anyway. Let's validate the day layout first (i.e. what would be
happening on which day).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [murano] Meeting time

2017-05-24 Thread Paul Bourke

Hi Felipe,

From our end I think one hour earlier would be great.

> I can create a patch in infra and add you and others to it to
> allow for people to effectively vote for what times you prefer.

Sure thing that sounds good!

-Paul

On 24/05/17 03:56, Felipe Monteiro wrote:

Hi Paul,

I'm open to changing the meeting time, although I'd like some input from
Murano cores, too. What times work for you and your colleagues? I can
create a patch in infra and add you and others to it to allow for people
to effectively vote for what times you prefer.

Felipe

On Tue, May 23, 2017 at 12:08 PM, Paul Bourke wrote:

Hi Felipe / Murano community,

I was wondering how would people feel about revising the time for
the Murano weekly meeting?

Personally the current time is difficult for me to attend as it
falls at the end of a work day, I also have some colleagues that
would like to attend but can't at the current time.

Given recent low attendance, would another time suit people better?

Thanks,
-Paul











Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Thierry Carrez
Doug Hellmann wrote:
> Two questions about theme-based initiatives:
> 
> I would like to have a discussion about the work to stop syncing
> requirements and to see if we can make progress on the implementation
> while we have the right people together. That seems like a topic
> for Monday or Tuesday. Would I reserve one of the "last-minute WG"
> rooms or extra rooms (it doesn't feel very last-minute if I know
> about it now)?

We have some flexibility there, depending on how much time we need. If
it's an hour or two, I would go and book one of the "reservable rooms".
If it's a day or two, I would assign one of the rooms reserved for
upcoming workgroups/informal teams. "Last-minute" is confusing (just
changed it on the spreadsheet), those rooms are more to cater for needs
that are not represented in formal workgroups today.

> I don't see a time on here for the Docs team to work together on
> the major initiative they have going to change how publishing works.
> I didn't have the impression that we could anticipate that work
> being completed this cycle. Is the "helproom" going to be enough,
> or do we need a separate session for that, too?

Again, we should be pretty flexible on that. We can reuse the doc
helproom, or we can say that the doc helproom is only on one day and the
other day is dedicated to making the doc publishing work. And if the doc
refactoring ends up being a release goal, it could use a release goal
room instead.

All in all, the idea is to show that on Mon-Tue we have a number of
rooms set apart to cover for any inter-project need we may have (already
identified or not). The room allocation is actually much tighter on
Wed-Thu :)

-- 
Thierry Carrez (ttx)



[openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Matt Riedemann
Rocky tipped me off to a request to document the config drive, which came up 
at the Boston Forum, and I tracked that down to Clark's wishlist 
etherpad [1] (L195) which states:


"Document the config drive. The only way I have been able to figure out 
how to make a config drive is by either reading nova's source code or by 
reading cloud-init's source code."


So naturally I have some questions, and I'm looking to flesh the idea / 
request out a bit so we can start something in the in-tree nova devref.


Question the first: is this existing document [2] helpful? At a high 
level, that's more about 'how' rather than 'what', as in what's in the 
config drive.


Question the second: are people mostly looking for documentation on the 
content of the config drive? I assume so, because without reading the 
source code you wouldn't know, which is the terrible part.


Based on this, I can think of a few things we can do:

1. Start documenting the versions which come out of the metadata API 
service, which regardless of whether or not you're using it, is used to 
build the config drive. I'm thinking we could start with something like 
the in-tree REST API version history [3]. This would basically be a 
change log of each version, e.g. in 2016-06-30 you got device tags, in 
2017-02-22 you got vlan tags, etc.


2. Start documenting the contents similar to the response tables in the 
compute API reference [4]. For example, network_data.json has an example 
response in this spec [5]. So have an example response and a table with 
an explanation of fields in the response, so describe 
ethernet_mac_address and vif_id, their type, whether or not they are 
optional or required, and in which version they were added to the 
response, similar to how we document microversions in the compute REST 
API reference.
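
As a concrete illustration of the kind of payload such a table would describe, 
here is a rough sketch of a network_data.json-style document. Every value 
below is invented, and the authoritative field list is whatever the spec [5] 
and the nova source define, not this example:

```python
import json

# Hypothetical network_data.json payload, loosely modeled on the
# metadata-service-network-info spec; all values are invented examples.
network_data = {
    "links": [
        {
            "id": "interface0",
            "type": "phy",
            "ethernet_mac_address": "a0:36:9f:2c:e8:70",
            "vif_id": "2ecc7709-b3f7-4448-9fea-61dcbeb9b7f8",
            "mtu": 1500,
        }
    ],
    "networks": [
        {
            "id": "network0",
            "type": "ipv4",
            "link": "interface0",
            "ip_address": "10.0.0.5",
            "netmask": "255.255.255.0",
        }
    ],
    "services": [{"type": "dns", "address": "10.0.0.2"}],
}

# A reference table in the docs would then describe each key: its type,
# whether it is required or optional, and the metadata version that
# introduced it, mirroring the compute API microversion tables.
print(json.dumps(network_data, indent=2))
```

That per-field table, plus a version changelog as in point 1, would cover most 
of what someone currently has to dig out of the source.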


--

Are there other thoughts here or things I'm missing? At this point I'm 
just trying to gather requirements so we can get something started. I 
don't have volunteers to work on this, but I'm thinking we can at least 
start with some basics and then people can help flesh it out over time.


[1] https://etherpad.openstack.org/p/openstack-user-api-improvements
[2] https://docs.openstack.org/user-guide/cli-config-drive.html
[3] https://docs.openstack.org/developer/nova/api_microversion_history.html
[4] https://developer.openstack.org/api-ref/compute/
[5] 
https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact


--

Thanks,

Matt



Re: [openstack-dev] [tc][all] Etcd as a base service for ALL OpenStack installations

2017-05-24 Thread Marcin Juszkiewicz
W dniu 24.05.2017 o 15:34, David Medberry pisze:
> There are older versions of etcd (3.1.0) in both Debian and Ubuntu. Is
> that new enough or do we need 3.1.7?

etcd 3.1.4 is in Debian/sid while we use Debian/stretch in Kolla.
Ubuntu/xenial has version 2.2.5 which may be too old.
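
For deployments that want to gate on a minimum etcd version, the comparison is 
a simple tuple check. A hedged sketch: the 3.1.7 floor is taken from this 
thread, and real packaging tools handle pre-release tags and epochs that this 
deliberately ignores:

```python
# Naive version comparison: fine for plain x.y.z strings like etcd's
# releases, but ignores rc/beta suffixes and distro package epochs.
def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

def new_enough(installed, required="3.1.7"):
    return version_tuple(installed) >= version_tuple(required)

# Versions mentioned in this thread:
for v in ("3.1.7", "3.1.4", "2.2.5"):
    print(v, "ok" if new_enough(v) else "too old")
```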

And if you want to follow crowd and use random binaries from the
Internet please make sure that your source provides also aarch64 (arm64)
and ppc64le binaries.



Re: [openstack-dev] [Nova] [Cells] Stupid question: Cells v2 & AZs

2017-05-24 Thread Dan Smith
> Thanks for answering the base question. So, if AZs are implemented with
> haggs, then really, they are truly disjoint from cells (ie, not a subset
> of a cell and not a superset of a cell, just unrelated.) Does that
> philosophy agree with what you are stating?

Correct, aggregates are at the top level, and they can span cells if you
so desire (or not if you don't configure any that do). The aggregate
stuff doesn't know anything about cells, it only knows about hosts, so
it's really independent.

--Dan



[openstack-dev] [neutron][l3][dvr] Pike tasks and meeting consolidation

2017-05-24 Thread Brian Haley
At the Summit a few of us met (Miguel, Swami and myself) to talk about our
Pike tasks now that we have lost some resources.  We came up with this
short list, which can also be found on
https://etherpad.openstack.org/p/neutron-l3-subteam :

1. Support for configuring Floating IPs on Centralized router in DVR
environment
https://bugs.launchpad.net/neutron/+bug/1667877
2. Spec for Floating IP subnets in routed networks
3. DHCP Agent support for remote IP subnets (via a DHCP relay)
https://bugs.launchpad.net/neutron/+bug/1668145

We also decided that instead of holding two weekly meetings, one for L3 and
another for DVR, we'd just cover any DVR-related issues in the L3 meeting
(Thursdays 1500 UTC) since neither was taking a full hour.  Anyone is
welcome to add items to the agenda and attend to discuss them.

Thanks,

-Brian


[openstack-dev] [openstack-ansible] mount ceph block from an instance

2017-05-24 Thread fabrice grelaud
Hi osa team,

I have a multinode openstack-ansible deployment, Ocata 15.1.3, with Ceph as the 
backend for Cinder (our own Ceph infra).

After creating an instance with a root volume, I would like to mount a Ceph 
block device or CephFS directly in the VM (not a Cinder volume). So I want to 
attach a new interface to the VM that is in the Ceph VLAN.
How can I do that? 

We have our Ceph VLAN propagated on the bond0 interface (bond0.xxx and 
br-storage configured as documented) for the OpenStack infrastructure.

Should I propagate this VLAN on the bond1 interface where my br-vlan is 
attached?
Or should I use the existing br-storage where the Ceph VLAN is already 
propagated (bond0.xxx)? And how do I create the Ceph VLAN network in Neutron 
(directly in Neutron, or via Horizon)?

Has anyone ever experienced this?
 


Re: [openstack-dev] [tc][all] Etcd as a base service for ALL OpenStack installations

2017-05-24 Thread Davanum Srinivas
David,

As long as the etcd has this support, we are good

https://github.com/coreos/etcd/issues/5525
https://github.com/coreos/etcd/pull/5669

Thanks,
Dims

On Wed, May 24, 2017 at 9:34 AM, David Medberry  wrote:
>
> On Wed, May 24, 2017 at 6:14 AM, Davanum Srinivas  wrote:
>>
>> http://tarballs.openstack.org/etcd/
>
>
> There are older versions of etcd (3.1.0) in both Debian and Ubuntu. Is that
> new enough or do we need 3.1.7?
>
> As an operator, I'd much prefer to see a packaged version of this available
> if we're going to make it a strict requirement.
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-24 12:10:17 +0200:
> Hi everyone,
> 
> In a previous thread[1] I introduced the idea of moving the PTG from a
> purely horizontal/vertical week split to a more
> inter-project/intra-project activities split, and the initial comments
> were positive.
> 
> We need to solidify how the week will look before we open up
> registration (first week of June), so that people can plan their
> attendance accordingly. Based on the currently-signed-up teams and
> projected room availability, I built a strawman proposal of how that
> could look:
> 
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true
> 
> Let me know what you think. If you're scheduled on the Wed-Fri but would
> rather be scheduled on Mon-Tue to avoid conflicting with another team,
> let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
> let me know as well, I'll update the spreadsheet accordingly.
> 
> One of the things you might notice on Mon-Tue is the "helproom" concept.
> In the spirit of separating inter-project and intra-project activities,
> support teams (Infra, QA, Release Management, Stable branch maintenance,
> but also teams that are looking at decentralizing their work like Docs
> or Horizon) would have team members in helprooms available to provide
> guidance to vertical teams on their specific needs. Some of those teams
> (like Infra/QA) could still have a "team meeting" on Wed-Fri to get
> stuff done as a team, though.
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116971.html
> 

Two questions about theme-based initiatives:

I would like to have a discussion about the work to stop syncing
requirements and to see if we can make progress on the implementation
while we have the right people together. That seems like a topic
for Monday or Tuesday. Would I reserve one of the "last-minute WG"
rooms or extra rooms (it doesn't feel very last-minute if I know
about it now)?

I don't see a time on here for the Docs team to work together on
the major initiative they have going to change how publishing works.
I didn't have the impression that we could anticipate that work
being completed this cycle. Is the "helproom" going to be enough,
or do we need a separate session for that, too?

Doug



Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Ronan-Alexandre Cherrueau
Georg,

> I unfortunately missed yesterday's working group meeting. Just
> to confirm, the IRC meetings are every two weeks on Wednesday at 15 UTC.
> Is that still correct?

IRC meetings of Performance WG chaired by Dina Belova are every Tuesday,
15:30 UTC in `#openstack-performance'. IRC meetings of Fog Edge
Massively Distributed Clouds WG chaired by Adrien Lebre and Anthony
Simonet are every Wednesday of even weeks, 15 UTC in
`#openstack-meeting'. So, next Fog Edge Massively Distributed Clouds WG
IRC meeting starts in 1 hour and 20 minutes.

Best regards,

On Wed, May 24, 2017 at 2:16 PM, Georg Kunz  wrote:
> Hi Ronan,
>
>> First of all, sorry for the late response. I took a one-week vacation after 
>> the
>> Summit to visit the US.
>
> No problem at all. I hope you had a great time.
>
>> > Nevertheless, one concrete thing which came to my mind, is this
>> > proposed improvement of the interaction between Nova and Neutron:
>> > https://review.openstack.org/#/c/390513/
>> >
>> > In a nutshell, the idea is that Neutron adds more information to a
>> > port object so that Nova does not need to make multiple calls to
>> > Neutron to collect all required networking information. It seems to
>> > have stalled for the time being, but bringing forward the edge
>> > computing use case might increase the interest again.
>>
>> Thanks for pointing out this improvement. This is a good use case to start
>> with. It should help us understand common patterns in the workflow that can
>> be optimized. Let's see if we can implement an analysis with osp-utils that
>> automatically highlights such patterns.
>
> Thanks a lot for the links. I'll dig into the tools to get a better 
> understanding and
> make up my mind on how a potential analysis might look like.
>
> I unfortunately missed yesterday's working group meeting. Just to 
> confirm,
> the IRC meetings are every two weeks on Wednesday at 15 UTC. Is that still 
> correct?
>
> Best regards
> Georg
>
>> Any comments/remarks on how we can do that is more than welcome.
>>
>> Best regards,
>>
>> [1] https://github.com/BeyondTheClouds/enos
>> [2] https://www.youtube.com/watch?v=xwT08H02Nok
>> [3] https://osprofiler.readthedocs.io/
>> [4] https://github.com/BeyondTheClouds/osp-utils
>> [5] http://enos.irisa.fr/html/seqdiag/v1/
>



-- 
Ronan-A. Cherrueau
https://rcherrueau.github.io



Re: [openstack-dev] [Nova] [Cells] Stupid question: Cells v2 & AZs

2017-05-24 Thread David Medberry
Thanks Belmiro. So, when using Host Aggregates (haggs) as pure haggs, and
not as AZs, is there any conflict/interaction if we then enable AZs? This
is more of a philosophical question.

Thanks for answering the base question. So, if AZs are implemented with
haggs, then really, they are truly disjoint from cells (ie, not a subset of
a cell and not a superset of a cell, just unrelated.) Does that philosophy
agree with what you are stating?

Many thanks.

-dave

On Wed, May 24, 2017 at 12:38 AM, Belmiro Moreira <
moreira.belmiro.email.li...@gmail.com> wrote:

> Hi David,
> AZs are basically aggregates.
> In cells v2, aggregates are defined at the API level, so it will be possible
> to have
> multiple AZs per cell and AZs that span different cells.
>
> Belmiro
>
> On Wed, May 24, 2017 at 5:14 AM, David Medberry 
> wrote:
>
>> Hi Devs and Implementers,
>>
>> A question came up tonight in the Colorado OpenStack meetup regarding
>> cells v2 and availability zones.
>>
>> Can a cell contain multiple AZs? (I assume this is yes.)
>>
>> Can an AZ contain multiple cells (I assumed this is no, but now in
>> thinking about it, that's probably not right.)
>>
>> What's the proper way to think about this? In general, I'm considering
>> AZs primarily as a fault zone type of mechanism (though they can be used in
>> other ways.)
>>
>> Is there a clear diagram/documentation about this?
>>
>> And consider this to be an Ocata/Pike and later only type of question.
>>
>> Thanks.
>>
>> -dave
>>


Re: [openstack-dev] [tc][all] Etcd as a base service for ALL OpenStack installations

2017-05-24 Thread David Medberry
On Wed, May 24, 2017 at 6:14 AM, Davanum Srinivas  wrote:

> http://tarballs.openstack.org/etcd/


There are older versions of etcd (3.1.0) in both Debian and Ubuntu. Is that
new enough or do we need 3.1.7?

As an operator, I'd much prefer to see a packaged version of this available
if we're going to make it a strict requirement.


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Sean McGinnis



On 05/24/2017 05:10 AM, Thierry Carrez wrote:

Hi everyone,

In a previous thread[1] I introduced the idea of moving the PTG from a
purely horizontal/vertical week split to a more
inter-project/intra-project activities split, and the initial comments
were positive.

We need to solidify how the week will look before we open up
registration (first week of June), so that people can plan their
attendance accordingly. Based on the currently-signed-up teams and
projected room availability, I built a strawman proposal of how that
could look:

https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true



This looks fine for Cinder. Good idea putting up a strawman for folks to
review.

One question on room sizes. In ATL the Cinder room was very large, to 
the point that it was actually a little difficult to hear some people 
during some of the discussions.


I see Cinder is down for an XL room. Any idea of the size of the rooms 
this time around? If they are going to be as spacious as ATL, I think a 
L room might actually work better for us than XL. Of course I say this 
not really knowing how many attendees we will end up with. We can always 
reconfigure the tables in the larger room, which we did end up doing in ATL.


This feels really odd saying we have too much room. ;)

Sean



Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-24 Thread Jiří Stránský

On 24.5.2017 15:02, Marios Andreou wrote:

On Wed, May 24, 2017 at 10:26 AM, Carlos Camacho Gonzalez <
ccama...@redhat.com> wrote:


Hey folks,

Based on what we discussed yesterday in the TripleO weekly team meeting,
I'd like to propose a blueprint to create two features, basically to back up
and restore the Undercloud.

I'd like to follow in the first iteration the available docs for this
purpose [1][2].

With the addition of backing up the config files on /etc/ specifically to
be able to recover from a failed Undercloud upgrade, i.e. recover the repos
info removed in [3].

I'd like to target this for P as I think I have enough time for
coding/testing these features.

I already have created a blueprint to track this effort
https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore

What do you think about it?



+1 from me as you know but adding my support on the list too. I think it is
a great idea - there are cases especially around changing network config
during an upgrade for example where the best remedy is to restore the
undercloud for the network definitions (both neutron config and heat db).


+1, I think there's not really an easy way out of these issues other than 
a restore. We already recommend doing a backup before upgrading [1], so 
having something that can further help operators in this regard would be 
good.


Jirka

[1] http://tripleo.org/post_deployment/upgrade.html



thanks,




Thanks,
Carlos.

[1]: https://access.redhat.com/documentation/en-us/red_hat_
enterprise_linux_openstack_platform/7/html/back_up_and_
restore_red_hat_enterprise_linux_openstack_platform/restore

[2]: https://docs.openstack.org/developer/tripleo-docs/post_
deployment/backup_restore_undercloud.html

[3]: https://docs.openstack.org/developer/tripleo-docs/
installation/updating.html














[openstack-dev] [os-upstream-institute] Alternating meeting time poll

2017-05-24 Thread Ildiko Vancsa
Hi,

Now that the Boston training is over, we can revise how our team operates 
and make improvements.

One area is certainly the meetings: we only have one dedicated meeting slot 
but team members all around the globe, so we decided to try to continue with 
alternating slots.

If you are already a team member, or are interested in joining and participating 
in our weekly meetings, and you're located in Europe or Asia, please fill out 
this poll: https://doodle.com/poll/dmyrf72wfnrp7e5m

The poll is in __UTC__ and please __avoid the dates__ and fill it out 
considering your __general__ availability. The poll contains primarily Europe 
and Asia friendly slots.

If you have any questions or comments please let me know.

Thanks and Best Regards,
Ildikó


Re: [openstack-dev] [trove] Trove meeting time update

2017-05-24 Thread MCCASLAND, TREVOR
This did merge!

A simple reminder to everyone: we will be meeting today, Wednesday, at 1500 UTC in 
#openstack-meeting-alt for our regular weekly team meeting.

-Original Message-
From: Amrith Kumar [mailto:amrith.ku...@gmail.com] 
Sent: Friday, May 19, 2017 7:13 PM
To: 'OpenStack Development Mailing List (not for usage questions)' 

Subject: Re: [openstack-dev] [trove] Trove meeting time update

Thanks for doing this Trevor, I am hoping that this patch merges soon so we can 
do the new meeting time on Wednesday.

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: MCCASLAND, TREVOR [mailto:tm2...@att.com]
> Sent: Friday, May 19, 2017 2:49 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: [openstack-dev] [trove] Trove meeting time update
> 
> I have submitted a patch for updating the regular trove meeting time 
> to 1500 UTC. This was decided during the trove project update meeting 
> last week[1]
> 
> If you weren't able to make it and want to voice your opinion, or if 
> you feel a different time would be more suitable, feel free to make a 
> suggestion here[2]
> 
> [1] 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__youtu.be_g8tKXn-5
> FAxhs-3Ft-3D23m50s=DwIGaQ=LFYZ-o9_HUMeMTSQicvjIg=afs6hELVfCxZDDq
> AHhTowQ=8jPWWIRYNRojLJksK8HanpSSnDwOHw9n6zRPLnEe7ls=ntJFdq-JQ7NUJE
> ZFE46LazesZzlT-DcmT9AkLOcstiM= [2] 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.
> org_-23_c_466381_=DwIGaQ=LFYZ-o9_HUMeMTSQicvjIg=afs6hELVfCxZDDqA
> HhTowQ=8jPWWIRYNRojLJksK8HanpSSnDwOHw9n6zRPLnEe7ls=RQS47UAtXe5Vcpm
> 3_K_lQ3sz_hF-gwiJr5aEPI2DVcc=
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-24 Thread Marios Andreou
On Wed, May 24, 2017 at 10:26 AM, Carlos Camacho Gonzalez <
ccama...@redhat.com> wrote:

> Hey folks,
>
> Based on what we discussed yesterday in the TripleO weekly team meeting,
> I'd like to propose a blueprint to create 2 features, basically to backup
> and restore the Undercloud.
>
> For the first iteration I'd like to follow the available docs for this
> purpose [1][2].
>
> With the addition of backing up the config files in /etc/, specifically to
> be able to recover from a failed Undercloud upgrade, i.e. to recover the repo
> info removed in [3].
>
> I'd like to target this for P as I think I have enough time for
> coding/testing these features.
>
> I already have created a blueprint to track this effort
> https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
>
> What do you think about it?
>

+1 from me as you know, but adding my support on the list too. I think it is
a great idea - there are cases, especially around changing network config
during an upgrade, where the best remedy is to restore the undercloud to
recover the network definitions (both neutron config and heat db).
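For reference, the docs Carlos linked roughly boil down to dumping all databases plus archiving selected config. A minimal dry-run sketch of that first iteration (paths, flags, and the choice of /etc subdirectories are my assumptions from those docs, not the blueprint's implementation):

```shell
#!/bin/sh
# Dry-run sketch of an undercloud backup, loosely following the
# referenced docs. Paths and flags are illustrative assumptions,
# not the blueprint implementation.
BACKUP_DIR=${BACKUP_DIR:-/tmp/undercloud-backup}
DRY_RUN=${DRY_RUN:-1}

run() {
    # With DRY_RUN=1 just print the command instead of executing it.
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

run mkdir -p "$BACKUP_DIR"
# All undercloud services share one MariaDB instance, so one dump covers them.
run mysqldump --opt --all-databases --result-file "$BACKUP_DIR/all-databases.sql"
# Archive config needed to recover from a failed upgrade, e.g. the repo
# definitions under /etc that get removed during upgrade preparation.
run tar -czf "$BACKUP_DIR/etc.tar.gz" /etc/yum.repos.d /etc/heat /etc/neutron
```

By default (DRY_RUN=1) it only prints the planned commands; DRY_RUN=0 would execute them.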

thanks,


>
> Thanks,
> Carlos.
>
> [1]: https://access.redhat.com/documentation/en-us/red_hat_
> enterprise_linux_openstack_platform/7/html/back_up_and_
> restore_red_hat_enterprise_linux_openstack_platform/restore
>
> [2]: https://docs.openstack.org/developer/tripleo-docs/post_
> deployment/backup_restore_undercloud.html
>
> [3]: https://docs.openstack.org/developer/tripleo-docs/
> installation/updating.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Davanum Srinivas
Thanks for the help Mikhail,

So, just FYI for others: etcd 3.2.0 is at RC1. We will get a full
set of arches covered once that goes GA.

Thanks,
Dims

On Wed, May 24, 2017 at 8:45 AM, Mikhail S Medvedev  wrote:
>
> On 05/24/2017 06:59 AM, Sean Dague wrote:
>>
>> On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
>> > Hi together,
>> >
>> > recently etcd3 was enabled as service in devstack [1]. This breaks
>> > devstack on s390x Linux, as there are no s390x binaries available and
>> > there's no way to disable the etcd3 service.
>> >
>> > I pushed a patch to allow disabling the etcd3 service in local.conf [2].
>> > It would be great if we could get that merged soon to get devstack going
>> > again. It seems like that is not used by any of the default services
>> > (nova, neutron, cinder,...) right now.
>> >
>> > In the long run I would like to understand the plans of etcd3 in
>> > devstack. Are the plans to make the default services dependent on etcd3
>> > in the future?
>> >
>> > Thanks a lot!
>> >
>> > Andreas
>> >
>> >
>> > [1]
>> >
>> > https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
>> > [2] https://review.openstack.org/467597
>>
>> Yes, it is designed to be required by base services. See -
>> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html
>>
>> -Sean
>>
> It is designed to be required, but please be aware of other arches. E.g. the
> original change do DevStack [3] did not allow much flexibility, and only
> worked on x86 and aarch. The d-g change [4] broke d-g for non-x86 arches. I
> have submitted [5] to add some flexibility to be able to specify a different
> mirror from which to pull non-x86 etcd3.
>
> For the last couple of days I have been playing whack-a-mole with all of that and
> more. At some point I requested permission to add PowerKVM CI (ppc64) to
> devstack-gate patches, which might have helped to identify the problem
> earlier. Maybe it should be revisited?
>
> [3] https://review.openstack.org/#/c/445432/
> [4] https://review.openstack.org/#/c/466817/
> [5] https://review.openstack.org/#/c/467437/
>
> ---
> Mikhail Medvedev (mmedvede)
> OpenStack CI for KVM on Power
> IBM
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Andreas Scheuring
In the meanwhile I found some more information like [1].

I understood that devstack downloads the binaries from github as distros
don't have the latest version available. But the binaries for s390x are
not yet provided there. I opened an issue to figure out what would need
to be done to get the s390x binary posted as well[2].

If that is not working, we might need to start thinking in a different
direction, e.g. 

- enhance devstack to build etcd3 from source (for certain architectures)
- check that etcd3 is already installed (we could install it upfront on
our systems)

I opened a bug against devstack to track the discussion [3]



[1] https://review.openstack.org/#/c/467436/
[2] https://github.com/coreos/etcd/issues/7978
[3] https://bugs.launchpad.net/devstack/+bug/1693192
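To make the "build from source for certain architectures" option concrete, the selection logic could look roughly like this (the function name, version, and arch mapping are illustrative assumptions, not devstack's actual lib/etcd3 code):

```shell
#!/bin/sh
# Sketch: pick an etcd3 artifact based on CPU architecture, falling
# back to a source build where no binary is published. Names, version,
# and mapping are illustrative assumptions.
etcd_arch() {
    # Map "uname -m" output to the suffix used by etcd release tarballs.
    case "$1" in
        x86_64)  echo amd64 ;;
        aarch64) echo arm64 ;;
        ppc64le) echo ppc64le ;;
        s390x)   echo source ;;   # no upstream binary yet (etcd issue 7978)
        *)       echo unknown ;;
    esac
}

ETCD_VERSION=${ETCD_VERSION:-v3.1.7}
ARCH=$(uname -m)
SUFFIX=$(etcd_arch "$ARCH")

if [ "$SUFFIX" = source ]; then
    echo "would build etcd $ETCD_VERSION from source for $ARCH"
elif [ "$SUFFIX" = unknown ]; then
    echo "unsupported architecture: $ARCH" >&2
else
    echo "would fetch etcd-$ETCD_VERSION-linux-$SUFFIX.tar.gz"
fi
```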


-- 
-
Andreas 
IRC: andreas_s



On Mi, 2017-05-24 at 13:48 +0200, Andreas Scheuring wrote:
> Hi together, 
> 
> recently etcd3 was enabled as service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries available and
> there's no way to disable the etcd3 service.
> 
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like that is not used by any of the default services
> (nova, neutron, cinder,...) right now.
> 
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
> 
> Thanks a lot!
> 
> Andreas
> 
> 
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597
> 
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Stackalytics] How to deploy a "stackalytics"

2017-05-24 Thread Hanxi Liu
Hi folks,

I got stuck after installing Stackalytics: the stackalytics-dashboard
doesn't work. I mainly followed the guide[1]. I want to know whether there are
other ways to deploy a "stackalytics" in my environment. The Stackalytics wiki[2]
is the only guide I can find. I would really appreciate it
if you can help.

[1]https://wiki.openstack.org/wiki/Stackalytics/HowToRun
[2]https://wiki.openstack.org/wiki/Stackalytics

Best regards,

Hanxi Liu

(IRC:lhx_)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Dmitry Tantsur

On 05/24/2017 02:30 PM, Thierry Carrez wrote:
> Dmitry Tantsur wrote:
>> On 05/24/2017 12:10 PM, Thierry Carrez wrote:
>>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312&single=true
>>>
>>> Let me know what you think. If you're scheduled on the Wed-Fri but would
>>> rather be scheduled on Mon-Tue to avoid conflicting with another team,
>>> let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
>>> let me know as well, I'll update the spreadsheet accordingly.
>>
>> This looks good.
>>
>> I'm only surprised that Ironic is under question for the VM-BM room.
>> We're the Bare Metal service after all :) Please correct me if I'm
>> misunderstanding that room's purpose.
>
> No, you're totally right. I placed it with the question mark because
> there were some suggestions after Atlanta that Ironic would prefer to
> have their room on Mon-Tue to not conflict with anything else, so I left
> that door open. Now with the Nova-Ironic discussion happening in that WG
> room on Mon-Tue it probably makes sense to stay on Wed-Fri for the team
> meeting :)

+1, this makes sense to me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Mikhail S Medvedev


On 05/24/2017 06:59 AM, Sean Dague wrote:
> On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
>> Hi together,
>>
>> recently etcd3 was enabled as service in devstack [1]. This breaks
>> devstack on s390x Linux, as there are no s390x binaries available and
>> there's no way to disable the etcd3 service.
>>
>> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
>> It would be great if we could get that merged soon to get devstack going
>> again. It seems like that is not used by any of the default services
>> (nova, neutron, cinder,...) right now.
>>
>> In the long run I would like to understand the plans of etcd3 in
>> devstack. Are the plans to make the default services dependent on etcd3
>> in the future?
>>
>> Thanks a lot!
>>
>> Andreas
>>
>>
>> [1]
>> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
>> [2] https://review.openstack.org/467597
>
> Yes, it is designed to be required by base services. See -
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html
>
> -Sean


It is designed to be required, but please be aware of other arches. E.g. the 
original change to DevStack [3] did not allow much flexibility, and only worked 
on x86 and aarch. The d-g change [4] broke d-g for non-x86 arches. I have 
submitted [5] to add some flexibility to be able to specify a different mirror 
from which to pull non-x86 etcd3.

For the last couple of days I have been playing whack-a-mole with all of that and more. 
At some point I requested permission to add PowerKVM CI (ppc64) to 
devstack-gate patches, which might have helped to identify the problem earlier. 
Maybe it should be revisited?

[3] https://review.openstack.org/#/c/445432/
[4] https://review.openstack.org/#/c/466817/
[5] https://review.openstack.org/#/c/467437/

---
Mikhail Medvedev (mmedvede)
OpenStack CI for KVM on Power
IBM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Thierry Carrez
Dmitry Tantsur wrote:
> On 05/24/2017 12:10 PM, Thierry Carrez wrote:
>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312&single=true
>>
>> Let me know what you think. If you're scheduled on the Wed-Fri but would
>> rather be scheduled on Mon-Tue to avoid conflicting with another team,
>> let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
>> let me know as well, I'll update the spreadsheet accordingly.
> 
> This looks good.
> 
> I'm only surprised that Ironic is under question for the VM-BM room.
> We're the Bare Metal service after all :) Please correct me if I'm
> misunderstanding that room's purpose.

No, you're totally right. I placed it with the question mark because
there were some suggestions after Atlanta that Ironic would prefer to
have their room on Mon-Tue to not conflict with anything else, so I left
that door open. Now with the Nova-Ironic discussion happening in that WG
room on Mon-Tue it probably makes sense to stay on Wed-Fri for the team
meeting :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-24 Thread Thierry Carrez
Doug Hellmann wrote:
> Excerpts from Davanum Srinivas (dims)'s message of 2017-05-23 10:44:30 -0400:
>> For projects based on Go and Containers we need to ship binaries, for
> 
> Can you elaborate on the use of the term "need" here. Is that because
> otherwise the projects can't be consumed? Is it the "norm" for
> projects from those communities? Something else?

dims will likely answer directly, but I would say because it is the
"norm" there. If we were a Java project, we would definitely be
publishing JARs, for the exact same reason. Oh, wait. We actually do
Java stuff, so we ship JARs already:
http://tarballs.openstack.org/ci/monasca-common/

>> example Kubernetes, etcd both ship binaries and maintain stable
>> branches as well.
>>   https://github.com/kubernetes/kubernetes/releases
>>   https://github.com/coreos/etcd/releases/
>>
>> Kubernetes for example ships container images to public registries as well:
>>   
>> https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
>>   
>> https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube
> 
> What are the support lifetimes for those images? Who maintains them?

That's a good question. Due to various bundling and dependency
inclusion, security maintenance on those artifacts is definitely more
costly than for our usual artifacts. Here, by default, I would say it's
probably best effort from the teams themselves.

That said, only a small fraction of our current OpenStack deliverables
are supported by the VMT and therefore properly security-maintained "by
the community" with strong guarantees and processes. So I don't see
adding such binary deliverables (maintained by their respective teams)
as a complete revolution. I'd expect the VMT to require a lot more
staffing (like dedicated people to track those deliverables content)
before they would consider those security-supported.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Georg Kunz
Hi Ronan,

> First of all, sorry for the late response. I took a one-week vacation after 
> the
> Summit to visit the US.

No problem at all. I hope you had a great time.

> > Nevertheless, one concrete thing which came to my mind, is this
> > proposed improvement of the interaction between Nova and Neutron:
> > https://review.openstack.org/#/c/390513/
> >
> > In a nutshell, the idea is that Neutron adds more information to a
> > port object so that Nova does not need to make multiple calls to
> > Neutron to collect all required networking information. It seems to
> > have stalled for the time being, but bringing forward the edge
> > computing use case might increase the interest again.
> 
> Thanks for pointing out this improvement. This is a good use case to start with.
> It should help us understand common patterns in the workflow that can be
> optimized. Let's see if we can implement an analysis with osp-utils that
> automatically highlights such patterns.

Thanks a lot for the links. I'll dig into the tools to get a better 
understanding and
make up my mind on how a potential analysis might look like.

Unfortunately I missed joining yesterday's working group meeting. Just to confirm:
the IRC meetings are every two weeks on Wednesday at 15:00 UTC. Is that still
correct?

Best regards
Georg

> Any comments/remarks on how we can do that is more than welcome.
> 
> Best regards,
> 
> [1] https://github.com/BeyondTheClouds/enos
> [2] https://www.youtube.com/watch?v=xwT08H02Nok
> [3] https://osprofiler.readthedocs.io/
> [4] https://github.com/BeyondTheClouds/osp-utils
> [5] http://enos.irisa.fr/html/seqdiag/v1/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Etcd as a base service for ALL OpenStack installations

2017-05-24 Thread Davanum Srinivas
Marcin,

Yes, we need ubuntu/debian packagers to please package etcd3.

Right now, devstack uses either github [1] or the tarballs [2] to pick
up etcd3 and installs it.

Thanks,
Dims

[1] https://github.com/openstack-dev/devstack/blob/master/lib/etcd3#L27
[2] http://tarballs.openstack.org/etcd/

On Wed, May 24, 2017 at 7:05 AM, Marcin Juszkiewicz
 wrote:
> On 24.05.2017 at 12:11, Davanum Srinivas wrote:
>> Here's the proposal:
>> https://review.openstack.org/#/c/467436/
>
> There is no etcd package in Debian/stretch.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Cyborg Weekly Team Meeting 2017.05.24

2017-05-24 Thread Zhipeng Huang
Hi Team,

A kind reminder about our weekly meeting today. The agenda is at
https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Agenda_for_next_meeting
and the meeting is in #openstack-cyborg at around 15:00 UTC.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Sean Dague
On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> Hi together, 
> 
> recently etcd3 was enabled as service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries available and
> there's no way to disable the etcd3 service.
> 
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like that is not used by any of the default services
> (nova, neutron, cinder,...) right now.
> 
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
> 
> Thanks a lot!
> 
> Andreas
> 
> 
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597

Yes, it is designed to be required by base services. See -
http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Andrea Frittoli
On Wed, May 24, 2017 at 11:40 AM Dmitry Tantsur  wrote:

> On 05/24/2017 12:10 PM, Thierry Carrez wrote:
> > Hi everyone,
> >
> > In a previous thread[1] I introduced the idea of moving the PTG from a
> > purely horizontal/vertical week split to a more
> > inter-project/intra-project activities split, and the initial comments
> > were positive.
> >
> > We need to solidify how the week will look like before we open up
> > registration (first week of June), so that people can plan their
> > attendance accordingly. Based on the currently-signed-up teams and
> > projected room availability, I built a strawman proposal of how that
> > could look:
> >
> >
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312&single=true
> >
> > Let me know what you think. If you're scheduled on the Wed-Fri but would
> > rather be scheduled on Mon-Tue to avoid conflicting with another team,
> > let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
> > let me know as well, I'll update the spreadsheet accordingly.
>
> This looks good.
>
> I'm only surprised that Ironic is under question for the VM-BM room. We're
> the
> Bare Metal service after all :) Please correct me if I'm misunderstanding
> that
> room's purpose.
>
> >
> > One of the things you might notice on Mon-Tue is the "helproom" concept.
> > In the spirit of separating inter-project and intra-project activities,
> > support teams (Infra, QA, Release Management, Stable branch maintenance,
> > but also teams that are looking at decentralizing their work like Docs
> > or Horizon) would have team members in helprooms available to provide
> > guidance to vertical teams on their specific needs.


This looks fine to me. One thing I missed in Atlanta was the possibility of
joining any of the WG sessions. By moving QA sessions to the second part of the
week, QA team members could take turns attending the QA help room and joining
WG conversations where needed.


> Some of those teams
> > (like Infra/QA) could still have a "team meeting" on Wed-Fri to get
> > stuff done as a team, though.
>

The only concern I have here is that the topics to be discussed by the QA
and Infra
teams will be different and a shared room might not be ideal since we would
have two
ongoing discussions in the same room.
The three days would be a mix of QA sessions and hacking. Sharing a room
for the
hacking part is not an issue, but I would prefer to have a dedicated room
for at least 1.5 days.

andrea


>
> I don't have a strong opinion here, but this sounds like a great idea.
>
> >
> > [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/116971.html
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Ronan-Alexandre Cherrueau
Hi Georg,

First of all, sorry for the late response. I took a one-week vacation
after the Summit to visit the US.

> Hi Ronan, hi Adrien,
>
> we talked yesterday in the edge/massively distributed working group
> meeting about optimizations of the control traffic between OpenStack
> components. Besides investigating different message busses, we agreed
> that it is also worth looking at how different OpenStack components
> interact. My understanding is that you have already developed tracing
> tools for this.

Actually, we (Inria) developed a tool called EnOS[1] that lets us
conduct performance analysis of different OpenStack deployments.
Briefly, EnOS provides a simple topology definition and traffic shaping
so you can deploy an OpenStack you envision, and then run benchmarking
tools such as Rally (for control plane) and Shaker (for data plane) to
measure OpenStack performance. We made a presentation during the Boston
Summit that shows how we use EnOS to measure OpenStack performance in a
Wide Area Network context[2].

During our experiments, most of the time we struggle to understand the
workflow of OpenStack. Not everybody here is an expert in Nova plus
Neutron plus Glance plus Keystone plus "whatever OpenStack services you
consider in your deployment", so we added to EnOS the ability to produce
OSProfiler traces[3] based on a Rally scenario. OSProfiler is an
OpenStack profiling tool developed by folks of the performance team.
Traces generated by OSProfiler give a fine-grained understanding of your
workflow -- maybe too fine-grained. Thus, we also developed a tool,
called osp-utils[4], that enables us to query an OSProfiler trace (filter,
fold, count...) and automatically produce a sequence diagram to show
interactions between services. You can find examples of such diagrams
that have been automatically generated on the website where we host
results of our experiments[5]. In these sequence diagrams, we chose to
filter calls so that we only keep those that end in an RPC. This is why
there are no interactions with Keystone or with the database. But you
can also choose to filter or fold on any information available in a
call. Anyway, we believe that such a view, clearly showing the interactions
between services, is helpful for the community. We also think that a
query language for OSProfiler traces would help to run analyses for
the optimization of the control traffic between OpenStack components.

Please note that the osp-utils tool is just a prototype right now -- something
developed during some sleepless nights ;). So the code and documentation
must still be improved.
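To illustrate the kind of filter/fold queries we mean, here is a toy sketch in Python; the nested-dict layout only loosely mimics the shape of an OSProfiler trace, and none of this is osp-utils' actual API:

```python
# Toy illustration of querying a trace tree: each node loosely mimics
# an OSProfiler span ({"info": {...}, "children": [...]}).
# This is NOT osp-utils' real API, just the filter/fold idea.

def walk(node):
    """Yield every span in the tree, depth-first."""
    yield node
    for child in node.get("children", []):
        yield from walk(child)

def count_calls(trace, kind):
    """Fold: count spans whose name starts with a given kind (e.g. 'rpc')."""
    return sum(1 for n in walk(trace) if n["info"]["name"].startswith(kind))

trace = {
    "info": {"name": "wsgi", "service": "nova-api"},
    "children": [
        {"info": {"name": "rpc.call", "service": "nova-conductor"},
         "children": [
             {"info": {"name": "db.query", "service": "nova-conductor"},
              "children": []},
         ]},
        {"info": {"name": "rpc.cast", "service": "nova-compute"},
         "children": []},
    ],
}

# Filter: keep only RPC interactions, as in the published sequence diagrams.
rpc_spans = [n["info"] for n in walk(trace)
             if n["info"]["name"].startswith("rpc")]
print(count_calls(trace, "rpc"))   # 2
print(count_calls(trace, "db"))    # 1
```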

> Nevertheless, one concrete thing which came to my mind, is this proposed
> improvement of the interaction between Nova and Neutron:
> https://review.openstack.org/#/c/390513/
>
> In a nutshell, the idea is that Neutron adds more information to a port
> object so that Nova does not need to make multiple calls to Neutron to
> collect all required networking information. It seems to have stalled
> for the time being, but bringing forward the edge computing use case
> might increase the interest again.

Thanks for pointing out this improvement. This is a good use case to start
with. It should help us understand common patterns in the workflow that
can be optimized. Let's see if we can implement an analysis with
osp-utils that automatically highlights such patterns.
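As a toy model of why this pattern matters (the call names and the "enriched port" idea below are illustrative assumptions, not the actual API change proposed in the review):

```python
# Toy model of the Nova/Neutron interaction discussed in the review:
# today Nova issues several GETs to assemble a port's networking info;
# the proposal has Neutron return it in one richer port object.
# Names and shapes here are illustrative assumptions, not the real API.

class FakeNeutron:
    def __init__(self):
        self.calls = 0
        self.ports = {"p1": {"network_id": "n1",
                             "fixed_ips": [{"subnet_id": "s1"}]}}
        self.networks = {"n1": {"mtu": 1500}}
        self.subnets = {"s1": {"cidr": "10.0.0.0/24"}}

    def get(self, collection, obj_id):
        self.calls += 1  # one simulated HTTP round trip
        return getattr(self, collection)[obj_id]

def port_info_today(client, port_id):
    """Current pattern: three round trips per port."""
    port = client.get("ports", port_id)
    net = client.get("networks", port["network_id"])
    subnet = client.get("subnets", port["fixed_ips"][0]["subnet_id"])
    return {**port, "mtu": net["mtu"], "cidr": subnet["cidr"]}

def port_info_proposed(client, port_id):
    """Proposed pattern: a single GET, with Neutron inlining the details
    server-side before replying."""
    return client.get("ports", port_id)

c = FakeNeutron()
port_info_today(c, "p1")
print(c.calls)   # 3 round trips today; the proposal would reduce this to 1
```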

Any comments/remarks on how we can do that is more than welcome.

Best regards,

[1] https://github.com/BeyondTheClouds/enos
[2] https://www.youtube.com/watch?v=xwT08H02Nok
[3] https://osprofiler.readthedocs.io/
[4] https://github.com/BeyondTheClouds/osp-utils
[5] http://enos.irisa.fr/html/seqdiag/v1/

On Thu, May 11, 2017 at 8:44 PM, Georg Kunz  wrote:
> Hi Ronan,
>
> hi Adrien,
>
>
>
> we talked yesterday in the edge/massively distributed working group meeting
> about optimizations of the control traffic between OpenStack components.
> Besides investigating different message busses, we agreed that it is also
> worth looking at how different OpenStack components interact. My
> understanding is that you have already developed tracing tools for this.
>
>
>
> Nevertheless, one concrete thing which came to my mind, is this proposed
> improvement of the interaction between Nova and Neutron:
>
> https://review.openstack.org/#/c/390513/
>
>
>
> In a nutshell, the idea is that Neutron adds more information to a port
> object so that Nova does not need to make multiple calls to Neutron to
> collect all required networking information. It seems to have stalled for
> the time being, but bringing forward the edge computing use case might
> increase the interest again.
>
>
>
> Best regards
>
> Georg
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

[openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Andreas Scheuring
Hi together, 

recently etcd3 was enabled as service in devstack [1]. This breaks
devstack on s390x Linux, as there are no s390x binaries available and
there's no way to disable the etcd3 service.

I pushed a patch to allow disabling the etcd3 service in local.conf [2].
It would be great if we could get that merged soon to get devstack going
again. It seems like that is not used by any of the default services
(nova, neutron, cinder,...) right now.

In the long run I would like to understand the plans of etcd3 in
devstack. Are the plans to make the default services dependent on etcd3
in the future?

Thanks a lot!

Andreas


[1]
https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
[2] https://review.openstack.org/467597
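Concretely, what [2] aims at is being able to drop the service from local.conf, along these lines (exact mechanics depend on how the review lands):

```ini
[[local|localrc]]
# Assumed usage once [2] merges: drop etcd3 from the service list on
# architectures without published binaries (e.g. s390x).
disable_service etcd3
```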



-- 
-
Andreas 
IRC: andreas_s





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Etcd as a base service for ALL OpenStack installations

2017-05-24 Thread Marcin Juszkiewicz
On 24.05.2017 at 12:11, Davanum Srinivas wrote:
> Here's the proposal:
> https://review.openstack.org/#/c/467436/

There is no etcd package in Debian/stretch.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Dmitry Tantsur

On 05/24/2017 12:10 PM, Thierry Carrez wrote:
> Hi everyone,
>
> In a previous thread[1] I introduced the idea of moving the PTG from a
> purely horizontal/vertical week split to a more
> inter-project/intra-project activities split, and the initial comments
> were positive.
>
> We need to solidify how the week will look like before we open up
> registration (first week of June), so that people can plan their
> attendance accordingly. Based on the currently-signed-up teams and
> projected room availability, I built a strawman proposal of how that
> could look:
>
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312&single=true
>
> Let me know what you think. If you're scheduled on the Wed-Fri but would
> rather be scheduled on Mon-Tue to avoid conflicting with another team,
> let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
> let me know as well, I'll update the spreadsheet accordingly.

This looks good.

I'm only surprised that Ironic is under question for the VM-BM room. We're the 
Bare Metal service after all :) Please correct me if I'm misunderstanding that 
room's purpose.

> One of the things you might notice on Mon-Tue is the "helproom" concept.
> In the spirit of separating inter-project and intra-project activities,
> support teams (Infra, QA, Release Management, Stable branch maintenance,
> but also teams that are looking at decentralizing their work like Docs
> or Horizon) would have team members in helprooms available to provide
> guidance to vertical teams on their specific needs. Some of those teams
> (like Infra/QA) could still have a "team meeting" on Wed-Fri to get
> stuff done as a team, though.

I don't have a strong opinion here, but this sounds like a great idea.

> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116971.html






[openstack-dev] [tc][all] Etcd as a base service for ALL OpenStack installations

2017-05-24 Thread Davanum Srinivas
Team,

Here's the proposal:
https://review.openstack.org/#/c/467436/

Thanks,
Dims

PS: Please look through older discussions for more context:
https://www.mail-archive.com/search?a=1=openstack-dev%40lists.openstack.org=etcd3=newest
https://www.mail-archive.com/search?a=1=openstack-dev%40lists.openstack.org=etcd=newest

-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-05-24 Thread Thierry Carrez
Hi everyone,

In a previous thread[1] I introduced the idea of moving the PTG from a
purely horizontal/vertical week split to a more
inter-project/intra-project activities split, and the initial comments
were positive.

We need to solidify what the week will look like before we open up
registration (first week of June), so that people can plan their
attendance accordingly. Based on the currently-signed-up teams and
projected room availability, I built a strawman proposal of how that
could look:

https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true

Let me know what you think. If you're scheduled on the Wed-Fri but would
rather be scheduled on Mon-Tue to avoid conflicting with another team,
let me know. If you're scheduled on Wed-Fri but plan to skip the Friday,
let me know as well, I'll update the spreadsheet accordingly.

One of the things you might notice on Mon-Tue is the "helproom" concept.
In the spirit of separating inter-project and intra-project activities,
support teams (Infra, QA, Release Management, Stable branch maintenance,
but also teams that are looking at decentralizing their work like Docs
or Horizon) would have team members in helprooms available to provide
guidance to vertical teams on their specific needs. Some of those teams
(like Infra/QA) could still have a "team meeting" on Wed-Fri to get
stuff done as a team, though.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116971.html

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ironic] why is glance image service code so complex?

2017-05-24 Thread Pavlo Shchelokovskyy
Hi,

regarding #1: there are actually 4 methods there that are not used anywhere
in ironic - those that list images and create/update/delete an image in
Glance.
The question is: do we consider those classes to be part of the public ironic
Python API? Are we safe to remove them right away? Or should we go through a
standard deprecation process on those - log runtime warnings when they are
used in Pike (unfortunately it seems it won't be possible to issue a single
warning on conductor start) and remove in Queens?

I'd also like to add a question #4:

In the image-related code we have special handling of "glance://" URL
scheme. Is anyone using that still? Do we really have to support it or can
we deprecate it as a recognized URL scheme for image_source?

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Tue, May 23, 2017 at 7:11 PM, Dmitry Tantsur  wrote:

> On 05/23/2017 05:52 PM, Pavlo Shchelokovskyy wrote:
>
>> Hi all,
>>
>> I've started to dig through the part of Ironic code that deals with
>> glance and I am confused by some things:
>>
>> 1) Glance image service classes have methods to create, update and delete
>> images. What's the use case behind them? Is ironic supposed to actively
>> manage images? Besides, these do not seem to be used anywhere else in
>> ironic code.
>>
>
> Yeah, I don't think we upload anything to glance. We may upload stuff to
> Swift though, but that's another story.
>
>
>> 2) Some parts of code (and quite a handful of options in [glance] config
>> section) AFAIU target a situation when both ironic and glance are deployed
>> standalone with possibly multiple glance API services so there is no
>> keystone catalog to discover the (load-balanced) glance endpoint from. We
>> even have our own round-robin implementation for those multiple glance
>> hosts o_0
>>
>> 3) Glance's direct_url handling - AFAIU this will work iff there is a
>> single conductor service and single glance registry service configured with
>> simple file backend deployed on the same host (with appropriate file access
>> permissions between ironic and glance), and glance is configured to
>> actually provide direct_url for the image - very much a DevStack setup
>> (though with non-standard settings).
>>
>> Do we actually have to support such narrow deployment scenarios as in 2)
>> and 3)? While for 2) we probably should continue supporting standalone
>> Glance, keeping implementations for our own round-robin load-balancing and
>> retries seems out of ironic's scope.
>>
>
> Yeah, I'd expect people to deploy HA proxy or something similar for
> load-balancing. Not sure what you mean by retries though.
>
> Number 3, I suspect, is for simple all-in-one deployments. I don't
> remember the whole background, so I can't comment more.
>
>
>> Most of those seem to be legacy code crust from the nova-baremetal era,
>> but I might be missing something. I'm eager to hear your comments.
>>
>
> #1 and #2 probably. I'm fine with getting rid of them.
>
>
>> Cheers,
>>
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com
>> 
>>
>>
>>
>>
>
>


[openstack-dev] [all] Global Request ID progress

2017-05-24 Thread Sean Dague
The Global Request ID effort made some good progress this week:
https://review.openstack.org/#/c/464746/ (it was merged last night).

The new TL;DR: we're going to pass around a value that is strictly
validated to the form req-$uuid. It's stored in a different field in oslo.context
(global_request_id). If you don't want to trust it, just don't log that
field.
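As an aside, that strict format check can be sketched in a few lines of stdlib Python. This is only an illustrative sketch, not the actual oslo.context implementation, and the helper names are made up:

```python
import re
import uuid

# Sketch of the "req-$uuid" format check described above (illustrative only;
# not the real oslo.context code).
GLOBAL_REQUEST_ID_RE = re.compile(
    r"^req-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$")

def is_valid_global_request_id(value):
    """Return True only for strings of the exact form req-$uuid."""
    return bool(GLOBAL_REQUEST_ID_RE.match(value))

def new_global_request_id():
    """Generate a fresh ID in the validated format."""
    return "req-%s" % uuid.uuid4()

print(is_valid_global_request_id(new_global_request_id()))  # True
print(is_valid_global_request_id("req-not-a-uuid"))         # False
```

A consumer that doesn't trust the inbound value can simply refuse to log anything that fails this check.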

All the patches up for review are here -
https://review.openstack.org/#/q/topic:global_request_id

Through great hackery, here is a devstack-gate scenario demonstrating
this in action with Nova -> Neutron -
https://review.openstack.org/#/c/467417/ -
http://logs.openstack.org/17/467417/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/1cccf3f/logs/screen-q-svc.txt.gz#_May_24_00_38_15_067393

You can see 2 request_ids in that line, the first is the originator from
Nova (ignore the errors that are around that, the Nova patch has some
extra error logs for visible debugging) -
http://logs.openstack.org/17/467417/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/1cccf3f/logs/screen-n-cpu.txt.gz#_May_24_00_38_16_446058

Dims, Doug, and cbg have been really responsive on reviews to make this
move forward quite quickly; many thanks to all of them.

My hope is that we can get this working with Nova, Neutron, Cinder,
Glance this cycle, and let it be a candidate cross project goal in the
future.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [ironic] using keystone right - catalog, endpoints, tokens and noauth

2017-05-24 Thread Pavlo Shchelokovskyy
Hi all,

There are several problems or inefficiencies in how we are dealing with
auth to other services. Although it became much better in Newton, some
things can still be improved, and I'd like to discuss how to tackle them,
along with my ideas for doing so.

Keystone endpoints
===

Apparently since February-ish DevStack no longer sets up 'internal'
endpoints for most of the services in core devstack [0].
Luckily we were not broken by that right away - although when discovering a
service endpoint from keystone catalog we default to 'internal' endpoint
[1], for most services our devstack plugin still configures explicit
service URL in the corresponding config section, and thus the service
discovery from keystone never takes place (or that code path is not tested
by functional/integration testing).

AFAIK different endpoint types (internal vs public) are still widely used by
deployments (and IMO rightfully so), so we have to continue supporting
that. I propose to take the following actions:

- in our devstack plugin, stop setting up the direct service URLs in
config, always use keystone catalog for discovery
- in every conf section related to an external service, add an
'endpoint_type=[internal|public]' option, defaulting to 'internal', with a
warning in the option description (and validated on conductor start) that it
will be changed to 'public' in the next release
- use those values from CONF wherever we ask for a service URL from the
catalog or instantiate a client with a session.
- populate these options in our devstack plugin to be 'public'
- in Queens, switch the default to 'public' and use defaults in devstack
plugin, remove warnings.
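To illustrate the first two bullets, the resulting config section might look roughly like this (the section name, option name and default behaviour come from the proposal above; everything else is illustrative):

```ini
[glance]
# Proposed option: which endpoint type to request from the keystone catalog.
# Defaults to 'internal' in Pike (with a deprecation warning) and would
# switch to 'public' in Queens, per the plan above.
endpoint_type = public
```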

Unify clients creation
===

Again, in those config sections related to service clients, we have many
options to instantiate clients from (especially the glance section; see my
other recent ML thread about our image service code). Many of those seem to be
from the time when keystone catalog was missing some functionality or not
existing at all, and keystoneauth lib abstracting identity and client
sessions was not there either.

To simplify setup and unify as much code as possible I'd like to propose
the following:

- in each config section for a service client, add (if missing) a
'<service>_url' option that should point to the API of the given service and
will be used *only in noauth mode* when there's no Keystone catalog to
discover the service endpoint from
- in the code creating service clients, always create a keystoneauth
session from config sections, using appropriate keystoneauth identity
plugin - 'token_endpoint' with a fake token and '<service>_url' for noauth mode,
'password' for service user client, 'token' when using a token from
incoming request. The latter has the benefit of allowing the session to
re-authenticate itself when the user token is about to expire, but might
require changes in some public methods to pass in the full task.context
instead of just the token
- always create clients from sessions. Although AFAIK all the clients ironic
uses already support this, some places in ironic code (e.g. glance) still
create a client from a token and endpoint directly.
- deprecate some options explicitly registered by ironic in those sections
that are becoming redundant - including those that relate to HTTP session
settings (like timeout, retries, SSL certs and settings) as those will be
used from options registered by keystoneauth Session, and those multiple
options that piece together a single service URL.
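To make the proposal concrete, here is a hedged sketch of what such a config section might look like in the two modes (option names follow common keystoneauth conventions; 'glance_url' stands in for the proposed '<service>_url' option, and all values are illustrative):

```ini
# Mode 1 - noauth: no keystone catalog, so point straight at the service API
# via the proposed <service>_url option (name and value illustrative).
[glance]
auth_type = none
glance_url = http://192.0.2.10:9292

# Mode 2 - keystone: a service user with the 'password' keystoneauth plugin;
# the session handles timeouts, retries and TLS, so ironic's duplicated
# options for those could be deprecated.
#
# [glance]
# auth_type = password
# auth_url = http://192.0.2.5:5000/v3
# username = ironic
# password = secret
# project_name = service
# user_domain_name = Default
# project_domain_name = Default
```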

This will decrease the complexity of the service client-related code and
will make configuring those clients cleaner.

Of course all of this has to be done while minding the proper deprecation
process, although that might complicate things (as usual :/).

Legacy auth
=

Probably not worth a specific mention, but we implemented proper
keystoneauth-based loading of client auth options back in Newton, almost a
year ago, so the code attempting to load auth for clients in a deprecated
way from the "[keystone_authtoken]" section can be safely removed already.

As always, I'm eager to hear your comments.

[0] https://review.openstack.org/#/c/433272/
[1] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/common/keystone.py#n118

Best regards,
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


Re: [openstack-dev] Save the Date- Queens PTG

2017-05-24 Thread Thierry Carrez
Monty Taylor wrote:
> On 05/23/2017 06:27 PM, Anita Kuno wrote:
>> My mistake, I mis-read: $149 USD is equal to $201 CDN (Canadian
>> currency), so it looks like the same price to me.
> 
> I believe (although I do not know for certain) that the prices
> given were already normalized to USD. So it wasn't $149 USD vs $200 CDN; it
> was $149 USD vs $200 USD.
> 
> It is, of course, worth clarifying.

Yes they were normalized to USD.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-24 Thread Yolanda Robla Mota
Hi Carlos
A common request that i hear, is that customers need a way to rollback or
downgrade a system after an upgrade. So that will be useful of course. What
about the overcloud, are you considering that possibility? If they find
that an upgrade on a controller node breaks something for example.

On Wed, May 24, 2017 at 9:26 AM, Carlos Camacho Gonzalez <
ccama...@redhat.com> wrote:

> Hey folks,
>
> Based on what we discussed yesterday in the TripleO weekly team meeting,
> I'd like to propose a blueprint to create 2 features, basically to back up
> and restore the Undercloud.
>
> I'd like to follow in the first iteration the available docs for this
> purpose [1][2].
>
> With the addition of backing up the config files on /etc/ specifically to
> be able to recover from a failed Undercloud upgrade, i.e. recover the repos
> info removed in [3].
>
> I'd like to target this for P as I think I have enough time for
> coding/testing these features.
>
> I already have created a blueprint to track this effort
> https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
>
> What do you think about it?
>
> Thanks,
> Carlos.
>
> [1]: https://access.redhat.com/documentation/en-us/red_hat_
> enterprise_linux_openstack_platform/7/html/back_up_and_
> restore_red_hat_enterprise_linux_openstack_platform/restore
>
> [2]: https://docs.openstack.org/developer/tripleo-docs/post_
> deployment/backup_restore_undercloud.html
>
> [3]: https://docs.openstack.org/developer/tripleo-docs/
> installation/updating.html
>
>
>
>


-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com  M: +34605641639




[openstack-dev] [oslo] cancel weekly meeting in next week (05/29)

2017-05-24 Thread ChangBo Guo
Hi folks,

Just a reminder that we will skip the Oslo weekly meeting next week, due to
the US holiday. Coincidentally, it's also a holiday in China.

-- 
ChangBo Guo(gcb)


[openstack-dev] [TripleO] Undercloud backup and restore

2017-05-24 Thread Carlos Camacho Gonzalez
 Hey folks,

Based on what we discussed yesterday in the TripleO weekly team meeting,
I'd like to propose a blueprint to create 2 features, basically to back up
and restore the Undercloud.

I'd like to follow in the first iteration the available docs for this
purpose [1][2].

In addition, we would back up the config files under /etc/, specifically to
be able to recover from a failed Undercloud upgrade, i.e. to recover the
repo info removed in [3].

I'd like to target this for P as I think I have enough time for
coding/testing these features.

I already have created a blueprint to track this effort
https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore

What do you think about it?

Thanks,
Carlos.

[1]:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/restore

[2]:
https://docs.openstack.org/developer/tripleo-docs/post_deployment/backup_restore_undercloud.html

[3]:
https://docs.openstack.org/developer/tripleo-docs/installation/updating.html


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-24 Thread Marios Andreou
On Tue, May 23, 2017 at 4:40 PM, Emilien Macchi  wrote:

> On Tue, May 23, 2017 at 6:47 AM, Sagi Shnaidman 
> wrote:
> > Hi, all
> >
> > I'd like to propose an idea: a one- or two-day hackathon in the TripleO
> > project with the main goal of reducing TripleO's deployment time.
> >
> > - How could it be arranged?
> >
> > We can arrange a separate IRC channel and a Bluejeans video conference
> > session for the hackathon on those days to create a "presence" feeling.
>
> +1 for IRC. We already have #openstack-sprint, that we could re-use.
> Also +1 for video conference, to get face to face interactions,
> promptly and unscheduled.
>
> > - How to participate and contribute?
> >
> > We'll have a few areas of responsibility like tripleo-quickstart,
> > containers, storage, HA, baremetal, etc. - the exact list should be ready
> > before the hackathon so that everybody can sign up for one of these
> > "teams". It's good to have somebody on each team be the stakeholder,
> > responsible for organization and tasks.
>
> Before running the sprint, we should first track bugs / blueprints
> related to deployment speed.
> Not everyone in our team understands why some parts of deployments
> take time, so we need to make it visible so everyone can know how they
> can help during the sprint.
>
> Maybe we could create a Launchpad tag "deployment-time" to track bugs
> related to it. We should also make prioritization so we can work on
> the most critical ones first.
>
> I like the idea of breaking down the skills into small groups:
>
> - High Availability: deployment & runtime of Pacemaker optimization
> - Puppet: anything related to the steps (a bit more general but only a
> few of us have expertise on it, we could improve it).
> - Heat: work with the Heat team if we have some pending bugs about
> slowness.
> - Baremetal: ironic / workflows
> - tripleo-quickstart: tasks that can be improved / optimized
>
> This is a proposal ^ feel free to (comment,add|remove) anything.
>
>
> > - What is the goal?
> >
> > The goal of this hackathon is to reduce the deployment time of TripleO
> > as much as possible.
> >
> > For example, part of the CI team takes a task to reduce quickstart task
> > times. It includes statistics collection, profiling and detection of
> > places to optimize. After this, tasks are created, patches are tested and
> > submitted.
> >
> > The prizes will be presented to teams which saved most of time :)
> >
> > What do you think?
>
>
o/ Sagi - fwiw I really like the idea - like a remote project gathering...
especially having an open bluejeans session (I'd still do that even if an
irc chan ends up the preferred medium for the hackathon), and I also
like the topic.

Personally I will find it hard to block-book a day (for example) to work on
this particular thing - depending on where we are in the cycle/current BZ
or customer cases etc. - and that is the main reason I wouldn't be able to
participate, at least not for the whole thing, but I would at least try to
track it. (Even if I could block-book a day it would probably be hard to
match the day that other folks can do it.) I guess if you've gone to the
trouble of filing the bugs/blueprints and the 'deployment-time' tag as
suggested by Emilien then in the worst case this effort can proceed in
the 'usual' way (gerrit, mailing list etc).

thanks


> Excellent idea, thanks Sagi for proposing it.
>
> Another thought: before doing the sprint, we might want to make sure
> our tripleo-ci is in stable shape (which is not the case right now, we
> have 4 alerts and one of them affects ovb-ha)...
>
> > Thanks
> > --
> > Best regards
> > Sagi Shnaidman
>
>
>
> --
> Emilien Macchi
>
>


Re: [openstack-dev] [gnocchi] Running tests

2017-05-24 Thread Julien Danjou
On Tue, May 23 2017, aalvarez wrote:

> Ok so started the tests using:
>
> tox -e py27-postgresql-file
>
> The suite starts running fine, but then I get a failing test:

Can you reproduce it each time?

That's weird, I don't think we ever saw that.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info



